Build

Regressions (compared to v6.13-1268-gc0e75905caf3)

dragonboard-845c, boot

qemu-arm64-compat, ltp-fs

qemu-arm64-protected, kvm-unit-tests

qemu-armv7, kselftest-timers

qemu-x86_64-compat, ltp-fs

qemu-x86_64-compat, ltp-cve

qemu-x86_64, ltp-sched

rk3399-rock-pi-4b-nvhe, kselftest-kvm

rk3399-rock-pi-4b-protected, kselftest-kvm

rk3399-rock-pi-4b-vhe, kselftest-kvm

rk3399-rock-pi-4b, ltp-controllers

rk3399-rock-pi-4b, ltp-fs

Fixes (compared to v6.13-1268-gc0e75905caf3)

qemu-arm64-compat, ltp-cve

qemu-arm64-protected, kvm-unit-tests

qemu-arm64, ltp-fs

qemu-arm64, ltp-syscalls

qemu-x86_64-compat, ltp-cve

qemu-x86_64-compat, ltp-syscalls

qemu-x86_64, ltp-sched

rk3399-rock-pi-4b-nvhe, kvm-unit-tests

x15, ltp-controllers

x86-compat, ltp-controllers

Build Parser (compared to v6.13-1268-gc0e75905caf3)

arm, log-parser-build-gcc

s390, log-parser-build-gcc

x86_64, log-parser-build-clang

x86_64, log-parser-build-gcc

Boot Parser (compared to v6.13-1268-gc0e75905caf3)

qemu-arm64, log-parser-boot

qemu-x86_64, log-parser-boot

x15, log-parser-boot

Test Parser (compared to v6.13-1268-gc0e75905caf3)

qemu-arm64-compat, log-parser-test

qemu-arm64, log-parser-test

qemu-riscv64, log-parser-test

qemu-x86_64, log-parser-test

rk3399-rock-pi-4b, log-parser-test

x15, log-parser-test

Test log snippets

test log snippet regressions 1

(no log output captured for this snippet)

test log snippet regressions 2

fs_fill fs_fill
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 57090ms
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/file0
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/file0
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/file0
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/file0
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 89693ms
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 26649ms
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/file4
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/file4
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/file6
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/file4
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 44448ms
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 27391ms
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/file0
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/file0
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/file0
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/file0
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 52219ms
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 11164ms
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/file4
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/file5
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/file5
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/file5
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 29683ms
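
The pattern in the log above (writer threads filling the filesystem until write() fails with ENOSPC, then unlinking files to recover space) can be sketched roughly as below. This is an illustrative sketch, not the LTP implementation: the function name, chunk size, and the injectable `write` callable are all assumptions made so the loop can be exercised without actually filling a disk.

```python
import errno

def fill_until_enospc(write, chunk=1 << 20, max_chunks=1 << 20):
    """Call write(data) repeatedly until it fails with ENOSPC.

    Returns the number of chunks successfully written. `write` stands in
    for os.write() on a file descriptor; it is a parameter (hypothetical,
    not from LTP) so the loop can be tested without consuming real disk.
    """
    written = 0
    data = b"\0" * chunk
    while written < max_chunks:
        try:
            write(data)
        except OSError as e:
            if e.errno == errno.ENOSPC:
                # Disk full: the condition fs_fill's TPASS lines report.
                return written
            raise
        written += 1
    raise RuntimeError("writer never reported ENOSPC")
```

In the real test, four such writers run concurrently and the elapsed time to the fourth ENOSPC is what the "Got 4 ENOSPC runtime ...ms" lines record.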

test log snippet regressions 3

timer-ptimer-busy-loop-interrupt-signal-pending fail

test log snippet regressions 4

timers_raw_skew fail
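
The failing kselftest compares the rate of the NTP-adjusted clock against the unadjusted hardware clock. A minimal sketch of that measurement, assuming a Linux host (CLOCK_MONOTONIC_RAW is Linux-specific) and using a much shorter interval than the real test:

```python
import time

def raw_skew_ppm(interval=0.5):
    """Estimate the rate difference between CLOCK_MONOTONIC (NTP-adjusted)
    and CLOCK_MONOTONIC_RAW over `interval` seconds, in parts per million.

    Rough sketch only: the kselftest samples over a much longer window
    and applies its own pass/fail threshold; interval here is illustrative.
    """
    m0 = time.clock_gettime(time.CLOCK_MONOTONIC)
    r0 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
    time.sleep(interval)
    m1 = time.clock_gettime(time.CLOCK_MONOTONIC)
    r1 = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
    dm, dr = m1 - m0, r1 - r0
    return (dm - dr) / dr * 1e6  # positive: adjusted clock runs fast
```

A large reported skew usually means NTP is actively slewing the clock, or the clocksource is misbehaving, which is what this test is designed to catch.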

test log snippet regressions 5

fs_fill fs_fill
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 19947ms
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/file4
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/file4
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/file4
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/file4
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 36982ms
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 21696ms
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 6842ms
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/file1
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/file1
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/file1
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/file1
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 16218ms
fs_fill.c:115: TINFO: Running 4 writer threads
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/AOF
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/AOF
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 2449ms
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread2/file4
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread3/file5
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread1/file5
fs_fill.c:55: TINFO: Unlinking mntpoint/subdir/thread4/file5
fs_fill.c:93: TPASS: Got 4 ENOSPC runtime 10496ms

test log snippet regressions 6

cve-2017-5754 meltdown
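
When triaging this regression, the kernel's own self-report for Meltdown is the first thing to check. A small helper for reading it, assuming the standard sysfs vulnerabilities directory; note this only reports what the kernel claims, whereas the LTP cve-2017-5754 test actually attempts the side-channel read:

```python
from pathlib import Path

def meltdown_status(sysfs="/sys/devices/system/cpu/vulnerabilities"):
    """Return the kernel's self-reported Meltdown status string, e.g.
    'Not affected' or 'Mitigation: PTI', or None if the report file is
    absent (pre-4.15 kernel or an architecture without the sysfs entry).
    """
    p = Path(sysfs) / "meltdown"
    try:
        return p.read_text().strip()
    except OSError:
        return None
```

Comparing this string between the baseline and the failing build can quickly distinguish a mitigation being disabled from a genuine test problem.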

test log snippet regressions 7

starvation starvation
starvation.c:98: TINFO: Setting affinity to CPU 0
starvation.c:52: TINFO: CPU did 120000000 loops in 286474us
starvation.c:148: TFAIL: Scheduller starvation reproduced.
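
The two steps visible in the log (pin to one CPU, then time a fixed busy loop) can be sketched as below. Loop count and CPU number are illustrative, and the sketch omits the part that makes the real test meaningful: LTP additionally races child processes on the same CPU and fails if one of them is starved by the scheduler.

```python
import os
import time

def timed_loops(n=1_000_000, cpu=0):
    """Pin the current process to one CPU (as starvation.c does with
    CPU 0) and time n iterations of a busy loop. Returns elapsed
    microseconds, comparable to the 'CPU did N loops in Xus' log line.
    """
    try:
        os.sched_setaffinity(0, {cpu})  # Linux-only API
    except (AttributeError, OSError):
        pass  # non-Linux host or CPU unavailable: measure unpinned
    t0 = time.monotonic()
    i = 0
    while i < n:
        i += 1
    return (time.monotonic() - t0) * 1e6
```

The loop timing calibrates how long a unit of work takes on the pinned CPU, so the test can later tell "slow because starved" apart from "slow because the CPU is slow".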

test log snippet regressions 8

kvm_arch_timer_edge_cases fail

test log snippet regressions 9

kvm_arch_timer_edge_cases fail

test log snippet regressions 10

kvm_arch_timer_edge_cases fail

test log snippet regressions 11

cgroup_core01   cgroup_core01
cgroup_core02   cgroup_core02
cgroup_core03   cgroup_core03
cgroup      cgroup_regression_test.sh
cgroup_fj_function_debug cgroup_fj_function.sh debug
cgroup_fj_function_cpuset cgroup_fj_function.sh cpuset
cgroup_fj_function_cpu cgroup_fj_function.sh cpu
cgroup_fj_function_cpuacct cgroup_fj_function.sh cpuacct
cgroup_fj_function_memory cgroup_fj_function.sh memory
cgroup_fj_function_freezer cgroup_fj_function.sh freezer
cgroup_fj_function_devices cgroup_fj_function.sh devices
cgroup_fj_function_blkio cgroup_fj_function.sh blkio
cgroup_fj_function_net_cls cgroup_fj_function.sh net_cls
cgroup_fj_function_perf_event cgroup_fj_function.sh perf_event
cgroup_fj_function_net_prio cgroup_fj_function.sh net_prio
cgroup_fj_function_hugetlb cgroup_fj_function.sh hugetlb
cgroup_fj_stress_debug_2_2_none cgroup_fj_stress.sh debug 2 2 none
cgroup_fj_stress_debug_3_3_none cgroup_fj_stress.sh debug 3 3 none
cgroup_fj_stress_debug_4_4_none cgroup_fj_stress.sh debug 4 4 none
cgroup_fj_stress_debug_2_9_none cgroup_fj_stress.sh debug 2 9 none
cgroup_fj_stress_debug_10_3_none cgroup_fj_stress.sh debug 10 3 none
cgroup_fj_stress_debug_1_200_none cgroup_fj_stress.sh debug 1 200 none
cgroup_fj_stress_debug_200_1_none cgroup_fj_stress.sh debug 200 1 none
cgroup_fj_stress_debug_2_2_one cgroup_fj_stress.sh debug 2 2 one
cgroup_fj_stress_debug_3_3_one cgroup_fj_stress.sh debug 3 3 one
cgroup_fj_stress_debug_4_4_one cgroup_fj_stress.sh debug 4 4 one
cgroup_fj_stress_debug_2_9_one cgroup_fj_stress.sh debug 2 9 one
cgroup_fj_stress_debug_10_3_one cgroup_fj_stress.sh debug 10 3 one
cgroup_fj_stress_debug_1_200_one cgroup_fj_stress.sh debug 1 200 one
cgroup_fj_stress_debug_200_1_one cgroup_fj_stress.sh debug 200 1 one
cgroup_fj_stress_debug_2_2_each cgroup_fj_stress.sh debug 2 2 each
cgroup_fj_stress_debug_3_3_each cgroup_fj_stress.sh debug 3 3 each
cgroup_fj_stress_debug_4_4_each cgroup_fj_stress.sh debug 4 4 each
cgroup_fj_stress_debug_2_9_each cgroup_fj_stress.sh debug 2 9 each
cgroup_fj_stress_debug_10_3_each cgroup_fj_stress.sh debug 10 3 each
cgroup_fj_stress_debug_1_200_each cgroup_fj_stress.sh debug 1 200 each
cgroup_fj_stress_debug_200_1_each cgroup_fj_stress.sh debug 200 1 each
cgroup_fj_stress_cpuset_2_2_none cgroup_fj_stress.sh cpuset 2 2 none
cgroup_fj_stress_cpuset_3_3_none cgroup_fj_stress.sh cpuset 3 3 none
cgroup_fj_stress_cpuset_4_4_none cgroup_fj_stress.sh cpuset 4 4 none
cgroup_fj_stress_cpuset_2_9_none cgroup_fj_stress.sh cpuset 2 9 none
cgroup_fj_stress_cpuset_10_3_none cgroup_fj_stress.sh cpuset 10 3 none
cgroup_fj_stress_cpuset_1_200_none cgroup_fj_stress.sh cpuset 1 200 none
cgroup_fj_stress_cpuset_200_1_none cgroup_fj_stress.sh cpuset 200 1 none
cgroup_fj_stress_cpuset_2_2_one cgroup_fj_stress.sh cpuset 2 2 one
cgroup_fj_stress_cpuset_3_3_one cgroup_fj_stress.sh cpuset 3 3 one
cgroup_fj_stress_cpuset_4_4_one cgroup_fj_stress.sh cpuset 4 4 one
cgroup_fj_stress_cpuset_2_9_one cgroup_fj_stress.sh cpuset 2 9 one
cgroup_fj_stress_cpuset_10_3_one cgroup_fj_stress.sh cpuset 10 3 one
cgroup_fj_stress_cpuset_1_200_one cgroup_fj_stress.sh cpuset 1 200 one
cgroup_fj_stress_cpuset_200_1_one cgroup_fj_stress.sh cpuset 200 1 one
cgroup_fj_stress_cpuset_2_2_each cgroup_fj_stress.sh cpuset 2 2 each
cgroup_fj_stress_cpuset_3_3_each cgroup_fj_stress.sh cpuset 3 3 each
cgroup_fj_stress_cpuset_4_4_each cgroup_fj_stress.sh cpuset 4 4 each
cgroup_fj_stress_cpuset_2_9_each cgroup_fj_stress.sh cpuset 2 9 each
cgroup_fj_stress_cpuset_10_3_each cgroup_fj_stress.sh cpuset 10 3 each
cgroup_core01.c:63: TPASS: test_lesser_euid_open passed: EACCES (13)
cgroup_core02.c:84: TPASS: test_lesser_ns_open passed: ENOENT (2)
cgroup_core03.c:99: TPASS: cg_count_procs(cg_child_test_simple) passed
cgroup_core03.c:105: TPASS: cg_count_procs(cg_child_test_simple) passed
cgroup_regression_test 1 TINFO: Running: cgroup_regression_test.sh 
cgroup_regression_test 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_regression_test 1 TINFO: Using /tmp/LTP_cgroup_regression_test.dnqA82KRUM as tmpdir (tmpfs filesystem)
cgroup_regression_test 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_regression_test 1 TINFO: timeout per run is 0h 55m 0s
cgroup_regression_test 1 TPASS: no kernel bug was found
cgroup_regression_test 2 TFAIL: Failed to mount cgroup filesystem
cgroup_regression_test 3 TCONF: CONFIG_SCHED_DEBUG is not enabled
cgroup_regression_test 4 TCONF: CONFIG_LOCKDEP is not enabled
cgroup_regression_test 5 TINFO: The '/tmp/LTP_cgroup_regression_test.dnqA82KRUM/cgroup' is not mounted, skipping umount
cgroup_regression_test 5 TPASS: no kernel bug was found
cgroup_regression_test 6 TPASS: no kernel bug was found
cgroup_regression_test 7 TPASS: no kernel bug was found for test 1
cgroup_regression_test 7 TCONF: skip rest of testing due possible oops triggered by reading /proc/sched_debug
cgroup_regression_test 7 TPASS: no kernel bug was found for test 2
cgroup_regression_test 8 TPASS: no kernel bug was found
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh debug
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.IOm6Xa910K as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_function 1 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh cpuset
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.GxPfzbcINQ as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 2
cgroup_fj_function 1 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 2 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 3 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 4 TINFO: Attaching task 14205 to /sys/fs/cgroup/ltp/test-14154/ltp_1
cgroup_fj_function 4 TPASS: Group /sys/fs/cgroup/ltp/test-14154/ltp_1 with task 14205 cannot be removed
cgroup_fj_function 5 TPASS: Dir /sys/fs/cgroup/ltp/test-14154/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 7 TPASS: Tasks file /sys/fs/cgroup/ltp/test-14154/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 9 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh cpu
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.uVlHMCqikp as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 2
cgroup_fj_function 1 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 2 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 3 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 4 TINFO: Attaching task 14306 to /sys/fs/cgroup/ltp/test-14255/ltp_1
cgroup_fj_function 4 TPASS: Group /sys/fs/cgroup/ltp/test-14255/ltp_1 with task 14306 cannot be removed
cgroup_fj_function 5 TPASS: Dir /sys/fs/cgroup/ltp/test-14255/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 7 TPASS: Tasks file /sys/fs/cgroup/ltp/test-14255/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 9 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh cpuacct
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.FfcfZbr1rY as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 1
cgroup_fj_function 1 TINFO: Attaching task 14403 to /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1
cgroup_fj_function 1 TINFO: Attaching task 14403 to /tmp/cgroup_cpuacct/ltp/test-14353
cgroup_fj_function 1 TPASS: Task attached successfully
cgroup_fj_function 2 TPASS: Successfully moved /tmp/cgroup_cpuacct/ltp/test-14353/ltp_2 to /tmp/cgroup_cpuacct/ltp/test-14353/ltp_3
cgroup_fj_function 3 TPASS: Failed to move /tmp/cgroup_cpuacct/ltp/test-14353/ltp_2 over existing /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1
cgroup_fj_function 4 TINFO: Attaching task 14403 to /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1
cgroup_fj_function 4 TPASS: Group /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1 with task 14403 cannot be removed
cgroup_fj_function 5 TPASS: Dir /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TPASS: Subgroup /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1 cannot be moved to /tmp/LTP_cgroup_fj_function.FfcfZbr1rY/ltp
cgroup_fj_function 7 TPASS: Tasks file /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TPASS: Cannot write invalid values to /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1/notify_on_release
cgroup_fj_function 9 TPASS: Set /tmp/cgroup_cpuacct/ltp/test-14353/ltp_1/notify_on_release to 1
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh memory
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.iknnsYmv3k as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 2
cgroup_fj_function 1 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 2 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 3 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 4 TINFO: Attaching task 14512 to /sys/fs/cgroup/ltp/test-14461/ltp_1
cgroup_fj_function 4 TPASS: Group /sys/fs/cgroup/ltp/test-14461/ltp_1 with task 14512 cannot be removed
cgroup_fj_function 5 TPASS: Dir /sys/fs/cgroup/ltp/test-14461/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 7 TPASS: Tasks file /sys/fs/cgroup/ltp/test-14461/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 9 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh freezer
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.RiaBmR3hf6 as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 1
cgroup_fj_function 1 TINFO: Attaching task 14609 to /tmp/cgroup_freezer/ltp/test-14559/ltp_1
cgroup_fj_function 1 TINFO: Attaching task 14609 to /tmp/cgroup_freezer/ltp/test-14559
cgroup_fj_function 1 TPASS: Task attached successfully
cgroup_fj_function 2 TPASS: Successfully moved /tmp/cgroup_freezer/ltp/test-14559/ltp_2 to /tmp/cgroup_freezer/ltp/test-14559/ltp_3
cgroup_fj_function 3 TPASS: Failed to move /tmp/cgroup_freezer/ltp/test-14559/ltp_2 over existing /tmp/cgroup_freezer/ltp/test-14559/ltp_1
cgroup_fj_function 4 TINFO: Attaching task 14609 to /tmp/cgroup_freezer/ltp/test-14559/ltp_1
cgroup_fj_function 4 TPASS: Group /tmp/cgroup_freezer/ltp/test-14559/ltp_1 with task 14609 cannot be removed
cgroup_fj_function 5 TPASS: Dir /tmp/cgroup_freezer/ltp/test-14559/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TPASS: Subgroup /tmp/cgroup_freezer/ltp/test-14559/ltp_1 cannot be moved to /tmp/LTP_cgroup_fj_function.RiaBmR3hf6/ltp
cgroup_fj_function 7 TPASS: Tasks file /tmp/cgroup_freezer/ltp/test-14559/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TPASS: Cannot write invalid values to /tmp/cgroup_freezer/ltp/test-14559/ltp_1/notify_on_release
cgroup_fj_function 9 TPASS: Set /tmp/cgroup_freezer/ltp/test-14559/ltp_1/notify_on_release to 1
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh devices
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.J93RyEB5gm as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 1
cgroup_fj_function 1 TINFO: Attaching task 14717 to /tmp/cgroup_devices/ltp/test-14667/ltp_1
cgroup_fj_function 1 TINFO: Attaching task 14717 to /tmp/cgroup_devices/ltp/test-14667
cgroup_fj_function 1 TPASS: Task attached successfully
cgroup_fj_function 2 TPASS: Successfully moved /tmp/cgroup_devices/ltp/test-14667/ltp_2 to /tmp/cgroup_devices/ltp/test-14667/ltp_3
cgroup_fj_function 3 TPASS: Failed to move /tmp/cgroup_devices/ltp/test-14667/ltp_2 over existing /tmp/cgroup_devices/ltp/test-14667/ltp_1
cgroup_fj_function 4 TINFO: Attaching task 14717 to /tmp/cgroup_devices/ltp/test-14667/ltp_1
cgroup_fj_function 4 TPASS: Group /tmp/cgroup_devices/ltp/test-14667/ltp_1 with task 14717 cannot be removed
cgroup_fj_function 5 TPASS: Dir /tmp/cgroup_devices/ltp/test-14667/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TPASS: Subgroup /tmp/cgroup_devices/ltp/test-14667/ltp_1 cannot be moved to /tmp/LTP_cgroup_fj_function.J93RyEB5gm/ltp
cgroup_fj_function 7 TPASS: Tasks file /tmp/cgroup_devices/ltp/test-14667/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TPASS: Cannot write invalid values to /tmp/cgroup_devices/ltp/test-14667/ltp_1/notify_on_release
cgroup_fj_function 9 TPASS: Set /tmp/cgroup_devices/ltp/test-14667/ltp_1/notify_on_release to 1
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh blkio
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.SMugh8TL4W as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_function 1 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh net_cls
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.QgbAx02sU8 as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 1
cgroup_fj_function 1 TINFO: Attaching task 14867 to /tmp/cgroup_net_cls/ltp/test-14817/ltp_1
cgroup_fj_function 1 TINFO: Attaching task 14867 to /tmp/cgroup_net_cls/ltp/test-14817
cgroup_fj_function 1 TPASS: Task attached successfully
cgroup_fj_function 2 TPASS: Successfully moved /tmp/cgroup_net_cls/ltp/test-14817/ltp_2 to /tmp/cgroup_net_cls/ltp/test-14817/ltp_3
cgroup_fj_function 3 TPASS: Failed to move /tmp/cgroup_net_cls/ltp/test-14817/ltp_2 over existing /tmp/cgroup_net_cls/ltp/test-14817/ltp_1
cgroup_fj_function 4 TINFO: Attaching task 14867 to /tmp/cgroup_net_cls/ltp/test-14817/ltp_1
cgroup_fj_function 4 TPASS: Group /tmp/cgroup_net_cls/ltp/test-14817/ltp_1 with task 14867 cannot be removed
cgroup_fj_function 5 TPASS: Dir /tmp/cgroup_net_cls/ltp/test-14817/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TPASS: Subgroup /tmp/cgroup_net_cls/ltp/test-14817/ltp_1 cannot be moved to /tmp/LTP_cgroup_fj_function.QgbAx02sU8/ltp
cgroup_fj_function 7 TPASS: Tasks file /tmp/cgroup_net_cls/ltp/test-14817/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TPASS: Cannot write invalid values to /tmp/cgroup_net_cls/ltp/test-14817/ltp_1/notify_on_release
cgroup_fj_function 9 TPASS: Set /tmp/cgroup_net_cls/ltp/test-14817/ltp_1/notify_on_release to 1
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh perf_event
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.dNt33Xixj9 as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 1
cgroup_fj_function 1 TINFO: Attaching task 14975 to /tmp/cgroup_perf_event/ltp/test-14925/ltp_1
cgroup_fj_function 1 TINFO: Attaching task 14975 to /tmp/cgroup_perf_event/ltp/test-14925
cgroup_fj_function 1 TPASS: Task attached successfully
cgroup_fj_function 2 TPASS: Successfully moved /tmp/cgroup_perf_event/ltp/test-14925/ltp_2 to /tmp/cgroup_perf_event/ltp/test-14925/ltp_3
cgroup_fj_function 3 TPASS: Failed to move /tmp/cgroup_perf_event/ltp/test-14925/ltp_2 over existing /tmp/cgroup_perf_event/ltp/test-14925/ltp_1
cgroup_fj_function 4 TINFO: Attaching task 14975 to /tmp/cgroup_perf_event/ltp/test-14925/ltp_1
cgroup_fj_function 4 TPASS: Group /tmp/cgroup_perf_event/ltp/test-14925/ltp_1 with task 14975 cannot be removed
cgroup_fj_function 5 TPASS: Dir /tmp/cgroup_perf_event/ltp/test-14925/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TPASS: Subgroup /tmp/cgroup_perf_event/ltp/test-14925/ltp_1 cannot be moved to /tmp/LTP_cgroup_fj_function.dNt33Xixj9/ltp
cgroup_fj_function 7 TPASS: Tasks file /tmp/cgroup_perf_event/ltp/test-14925/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TPASS: Cannot write invalid values to /tmp/cgroup_perf_event/ltp/test-14925/ltp_1/notify_on_release
cgroup_fj_function 9 TPASS: Set /tmp/cgroup_perf_event/ltp/test-14925/ltp_1/notify_on_release to 1
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh net_prio
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.plan1d4fux as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 1
cgroup_fj_function 1 TINFO: Attaching task 15083 to /tmp/cgroup_net_prio/ltp/test-15033/ltp_1
cgroup_fj_function 1 TINFO: Attaching task 15083 to /tmp/cgroup_net_prio/ltp/test-15033
cgroup_fj_function 1 TPASS: Task attached successfully
cgroup_fj_function 2 TPASS: Successfully moved /tmp/cgroup_net_prio/ltp/test-15033/ltp_2 to /tmp/cgroup_net_prio/ltp/test-15033/ltp_3
cgroup_fj_function 3 TPASS: Failed to move /tmp/cgroup_net_prio/ltp/test-15033/ltp_2 over existing /tmp/cgroup_net_prio/ltp/test-15033/ltp_1
cgroup_fj_function 4 TINFO: Attaching task 15083 to /tmp/cgroup_net_prio/ltp/test-15033/ltp_1
cgroup_fj_function 4 TPASS: Group /tmp/cgroup_net_prio/ltp/test-15033/ltp_1 with task 15083 cannot be removed
cgroup_fj_function 5 TPASS: Dir /tmp/cgroup_net_prio/ltp/test-15033/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TPASS: Subgroup /tmp/cgroup_net_prio/ltp/test-15033/ltp_1 cannot be moved to /tmp/LTP_cgroup_fj_function.plan1d4fux/ltp
cgroup_fj_function 7 TPASS: Tasks file /tmp/cgroup_net_prio/ltp/test-15033/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TPASS: Cannot write invalid values to /tmp/cgroup_net_prio/ltp/test-15033/ltp_1/notify_on_release
cgroup_fj_function 9 TPASS: Set /tmp/cgroup_net_prio/ltp/test-15033/ltp_1/notify_on_release to 1
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_function 1 TINFO: Running: cgroup_fj_function.sh hugetlb
cgroup_fj_function 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_function 1 TINFO: Using /tmp/LTP_cgroup_fj_function.AqFLKggCR6 as tmpdir (tmpfs filesystem)
cgroup_fj_function 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_function 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_function 1 TINFO: test starts with cgroup version 2
cgroup_fj_function 1 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 2 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 3 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 4 TINFO: Attaching task 15192 to /sys/fs/cgroup/ltp/test-15141/ltp_1
cgroup_fj_function 4 TPASS: Group /sys/fs/cgroup/ltp/test-15141/ltp_1 with task 15192 cannot be removed
cgroup_fj_function 5 TPASS: Dir /sys/fs/cgroup/ltp/test-15141/ltp_1 with subdir 'a' cannot be removed
cgroup_fj_function 6 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 7 TPASS: Tasks file /sys/fs/cgroup/ltp/test-15141/ltp_1/tasks cannot be removed
cgroup_fj_function 8 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 9 TCONF: Controller mounted on cgroup2 hierarchy, skipping test
cgroup_fj_function 10 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 2 2 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.XhdHlH9b5o as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 3 3 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.ZRXnwQARCA as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 4 4 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.hSvj2QZKUF as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 2 9 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.0y9Jze0cIq as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 10 3 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.xRAuhKtt0o as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 1 200 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.hFcLE0o1Sd as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 200 1 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.FD2nq6icvj as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 2 2 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.yoqrtzP26g as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 3 3 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.dzUU98WnDQ as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 4 4 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.cHkKjkP2F4 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 2 9 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.Ug7Qle1Dc5 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 10 3 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.RQTI0yRNbd as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 1 200 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.mbX6oOPz06 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 200 1 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.IlwInNpGjs as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 2 2 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.q7p6OF8zqk as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 3 3 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.6ptBm8BCsW as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 4 4 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.dO5eVyELch as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 2 9 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.aH8EE8Wxxo as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 10 3 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.tfhciKeSNh as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 1 200 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.ez780hL2VT as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh debug 200 1 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.KsQmbVV2Pz as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TCONF: 'tst_cgctl require' exited. Controller is probably not available?
cgroup_fj_stress 1 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 2 2 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.XdZKreVxcM as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 7 times
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 3 3 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.r2BZHSQZwl as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 40 times
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 4 4 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.qZDf8nNy4u as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 341 times
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 2 9 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.8ilbMAdBh1 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 1023 times
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 10 3 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.0IaRvmT8SF as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 1111 times
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 1 200 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.2g8BoJTpF5 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 201 times
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 200 1 none
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.7BnarGR3zz as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 201 times
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 2 2 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.s1IMcO9aje as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 7 times
cgroup_fj_stress 1 TINFO: Moving one task around
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/2/2
cgroup_fj_stress 1 TINFO: Attaching task 26297 to /sys/fs/cgroup/ltp/test-26228/2/2
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 3 3 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.BjoLGgTcjU as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 40 times
cgroup_fj_stress 1 TINFO: Moving one task around
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/3/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/3/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/3/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/3/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/3/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/1/3/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/1/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/1/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/2/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/2/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/3/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/3/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/3/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/3/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/3/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/2/3/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/3/3
cgroup_fj_stress 1 TINFO: Attaching task 26489 to /sys/fs/cgroup/ltp/test-26344/3/3/3
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 4 4 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.sJMZlYiBy9 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 341 times
cgroup_fj_stress 1 TINFO: Moving one task around
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/1/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/2/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/1/4/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/1/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/2/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/2/4/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/1/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/2/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/1/3/4/4
[... identical "Attaching task 27513" TINFO lines elided: the test walks every remaining leaf group of the 4-level, 4-wide tree under test-26694/{1..4}/{1..4}/{1..4}/{1..4}, logging each path twice ...]
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/3/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/3/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/3/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/3/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/1
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/2
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/3
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/4
cgroup_fj_stress 1 TINFO: Attaching task 27513 to /sys/fs/cgroup/ltp/test-26694/4/4/4/4
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 2 9 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.vGCBqIyMbW as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 1023 times
cgroup_fj_stress 1 TINFO: Moving one task around
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/1/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/1/2/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/1/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/1/2/2/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/1/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/1/2/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/1/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 31845 to /sys/fs/cgroup/ltp/test-29236/2/2/2/2/2/2/2/2/2
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
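The attach loop logged above walks every leaf of a nested cgroup subgroup tree and moves the same PID into each one. A minimal sketch of that walk, assuming the branch-2, depth-9 geometry implied by the logged paths (the real test writes the PID into each group's `cgroup.procs` file; that write is shown as a comment so the sketch runs without a mounted cgroupfs):

```shell
# Sketch of the "one task" attach loop seen in the log above.
# WIDTH/DEPTH are inferred from the logged paths, not taken from the
# test's command line; the actual attach is the commented-out write.
ROOT=/sys/fs/cgroup/ltp/test-29236
PID=31845
WIDTH=2   # subgroups per level
DEPTH=9   # nesting depth

# Body in (...) so each recursive call gets its own variable scope.
walk() (
    level=$1; path=$2
    i=1
    while [ "$i" -le "$WIDTH" ]; do
        if [ "$level" -eq "$DEPTH" ]; then
            echo "Attaching task $PID to $path/$i"
            # echo "$PID" > "$path/$i/cgroup.procs"   # the actual attach
        else
            walk $((level + 1)) "$path/$i"
        fi
        i=$((i + 1))
    done
)

walk 1 "$ROOT" | tail -n 3   # last few of the 512 leaf attaches
```

The final path the sketch prints (`.../2/2/2/2/2/2/2/2/2`) matches the last attach line before the TPASS above; on a cgroup v2 hierarchy, re-enabling the commented write is what actually migrates the task from group to group.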
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 10 3 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.8Ayv4mn8bF as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 1111 times
cgroup_fj_stress 1 TINFO: Moving one task around
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/1/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/2/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/3/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/4/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/5/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/6/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/7/9/4
[... repetitive log trimmed: the same TINFO attach message repeats (each line emitted twice) for every leaf cgroup under /sys/fs/cgroup/ltp/test-35956/, iterating from .../7/9/2 through .../9/10/2 ...]
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/9/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/1/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/2/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/3/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/4/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/5/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/6/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/7/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/8/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/9/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/1
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/2
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/3
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/4
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/5
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/6
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/7
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/8
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/9
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/10
cgroup_fj_stress 1 TINFO: Attaching task 38341 to /sys/fs/cgroup/ltp/test-35956/10/10/10
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 1 200 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.KS6TpdHmgm as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 201 times
cgroup_fj_stress 1 TINFO: Moving one task around
cgroup_fj_stress 1 TINFO: Attaching task 45240 to /sys/fs/cgroup/ltp/test-44584/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 45240 to /sys/fs/cgroup/ltp/test-44584/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 200 1 one
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.nN5rq2xzgb as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 201 times
cgroup_fj_stress 1 TINFO: Moving one task around
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/1
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/1
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/2
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/2
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/3
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/3
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/4
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/4
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/5
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/5
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/6
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/6
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/7
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/7
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/8
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/8
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/9
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/9
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/10
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/10
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/11
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/11
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/12
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/12
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/13
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/13
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/14
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/14
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/15
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/15
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/16
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/16
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/17
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/17
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/18
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/18
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/19
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/19
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/20
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/20
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/21
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/21
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/22
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/22
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/23
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/23
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/24
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/24
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/25
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/25
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/26
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/26
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/27
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/27
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/28
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/28
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/29
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/29
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/30
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/30
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/31
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/31
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/32
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/32
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/33
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/33
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/34
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/34
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/35
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/35
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/36
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/36
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/37
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/37
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/38
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/38
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/39
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/39
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/40
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/40
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/41
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/42
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/43
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/44
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/45
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/46
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/47
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/48
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/49
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/50
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/51
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/52
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/53
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/54
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/55
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/56
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/57
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/58
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/59
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/60
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/61
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/62
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/63
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/64
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/65
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/66
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/67
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/68
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/69
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/70
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/71
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/72
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/73
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/74
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/75
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/76
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/77
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/78
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/79
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/80
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/81
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/82
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/83
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/84
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/85
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/86
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/87
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/88
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/89
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/90
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/91
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/92
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/93
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/94
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/95
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/96
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/97
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/98
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/99
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/100
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/101
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/102
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/103
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/104
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/105
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/106
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/107
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/108
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/109
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/110
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/111
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/112
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/113
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/114
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/115
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/116
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/117
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/118
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/119
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/120
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/121
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/122
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/123
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/124
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/125
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/126
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/127
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/128
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/129
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/130
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/131
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/132
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/133
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/134
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/135
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/136
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/137
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/138
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/139
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/140
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/141
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/142
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/143
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/144
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/145
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/146
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/147
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/148
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/149
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/150
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/151
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/152
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/153
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/154
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/155
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/156
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/157
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/158
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/159
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/160
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/161
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/162
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/163
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/164
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/165
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/166
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/167
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/168
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/169
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/170
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/171
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/172
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/173
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/174
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/175
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/176
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/177
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/178
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/179
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/180
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/181
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/182
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/183
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/184
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/185
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/186
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/187
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/188
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/189
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/190
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/191
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/192
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/193
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/194
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/195
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/196
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/197
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/198
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/199
cgroup_fj_stress 1 TINFO: Attaching task 46118 to /sys/fs/cgroup/ltp/test-45663/200
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 2 2 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.PSkWhVQD1V as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 7 times
cgroup_fj_stress 1 TINFO: Attaching task to each subgroup
cgroup_fj_stress 1 TINFO: Attaching task 47410 to /sys/fs/cgroup/ltp/test-47337/1/1
cgroup_fj_stress 1 TINFO: Attaching task 47414 to /sys/fs/cgroup/ltp/test-47337/1/2
cgroup_fj_stress 1 TINFO: Attaching task 47420 to /sys/fs/cgroup/ltp/test-47337/2/1
cgroup_fj_stress 1 TINFO: Attaching task 47424 to /sys/fs/cgroup/ltp/test-47337/2/2
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 3 3 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.Tb5gDZkmhR as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 40 times
cgroup_fj_stress 1 TINFO: Attaching task to each subgroup
cgroup_fj_stress 1 TINFO: Attaching task 47608 to /sys/fs/cgroup/ltp/test-47457/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 47612 to /sys/fs/cgroup/ltp/test-47457/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 47616 to /sys/fs/cgroup/ltp/test-47457/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 47622 to /sys/fs/cgroup/ltp/test-47457/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 47626 to /sys/fs/cgroup/ltp/test-47457/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 47630 to /sys/fs/cgroup/ltp/test-47457/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 47636 to /sys/fs/cgroup/ltp/test-47457/1/3/1
cgroup_fj_stress 1 TINFO: Attaching task 47640 to /sys/fs/cgroup/ltp/test-47457/1/3/2
cgroup_fj_stress 1 TINFO: Attaching task 47644 to /sys/fs/cgroup/ltp/test-47457/1/3/3
cgroup_fj_stress 1 TINFO: Attaching task 47652 to /sys/fs/cgroup/ltp/test-47457/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 47656 to /sys/fs/cgroup/ltp/test-47457/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 47660 to /sys/fs/cgroup/ltp/test-47457/2/1/3
cgroup_fj_stress 1 TINFO: Attaching task 47666 to /sys/fs/cgroup/ltp/test-47457/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 47670 to /sys/fs/cgroup/ltp/test-47457/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 47674 to /sys/fs/cgroup/ltp/test-47457/2/2/3
cgroup_fj_stress 1 TINFO: Attaching task 47680 to /sys/fs/cgroup/ltp/test-47457/2/3/1
cgroup_fj_stress 1 TINFO: Attaching task 47684 to /sys/fs/cgroup/ltp/test-47457/2/3/2
cgroup_fj_stress 1 TINFO: Attaching task 47688 to /sys/fs/cgroup/ltp/test-47457/2/3/3
cgroup_fj_stress 1 TINFO: Attaching task 47696 to /sys/fs/cgroup/ltp/test-47457/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 47700 to /sys/fs/cgroup/ltp/test-47457/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 47704 to /sys/fs/cgroup/ltp/test-47457/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 47710 to /sys/fs/cgroup/ltp/test-47457/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 47714 to /sys/fs/cgroup/ltp/test-47457/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 47718 to /sys/fs/cgroup/ltp/test-47457/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 47724 to /sys/fs/cgroup/ltp/test-47457/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 47728 to /sys/fs/cgroup/ltp/test-47457/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 47732 to /sys/fs/cgroup/ltp/test-47457/3/3/3
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
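The "mkdired N times" counts in the runs above follow directly from the depth/width arguments passed to cgroup_fj_stress.sh: the script builds a full tree of subgroups under a top-level test-&lt;pid&gt; group, then attaches a task to each leaf. A minimal sketch of that arithmetic (assuming the tree layout visible in the paths above, i.e. one top-level group plus width^level directories at each level):

```python
def subgroup_mkdirs(depth, width):
    # One mkdir for the top-level test-<pid> group, plus a full
    # width-ary tree: width^1 + width^2 + ... + width^depth subgroups.
    return 1 + sum(width ** level for level in range(1, depth + 1))

print(subgroup_mkdirs(2, 2))  # run "cpuset 2 2": 7
print(subgroup_mkdirs(3, 3))  # run "cpuset 3 3": 40
print(subgroup_mkdirs(4, 4))  # run "cpuset 4 4": 341
```

Each computed value matches the corresponding "... mkdired N times" line in the log, which also explains why the 4/4 run attaches far more tasks (one per each of the 256 leaves) than the earlier runs.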
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 4 4 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.szQkAKr7k2 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 341 times
cgroup_fj_stress 1 TINFO: Attaching task to each subgroup
cgroup_fj_stress 1 TINFO: Attaching task 48658 to /sys/fs/cgroup/ltp/test-47831/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 48662 to /sys/fs/cgroup/ltp/test-47831/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 48666 to /sys/fs/cgroup/ltp/test-47831/1/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 48670 to /sys/fs/cgroup/ltp/test-47831/1/1/1/4
cgroup_fj_stress 1 TINFO: Attaching task 48676 to /sys/fs/cgroup/ltp/test-47831/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 48680 to /sys/fs/cgroup/ltp/test-47831/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 48684 to /sys/fs/cgroup/ltp/test-47831/1/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 48688 to /sys/fs/cgroup/ltp/test-47831/1/1/2/4
... (246 similar "Attaching task" lines omitted; one task attached per subgroup of the 4-wide, 4-deep hierarchy) ...
cgroup_fj_stress 1 TINFO: Attaching task 49836 to /sys/fs/cgroup/ltp/test-47831/4/4/4/3
cgroup_fj_stress 1 TINFO: Attaching task 49840 to /sys/fs/cgroup/ltp/test-47831/4/4/4/4
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 2 9 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.87eaRJmpfE as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 1023 times
cgroup_fj_stress 1 TINFO: Attaching task to each subgroup
cgroup_fj_stress 1 TINFO: Attaching task 53169 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53173 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53179 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53183 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53191 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53195 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53201 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53205 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53215 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53219 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53225 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53229 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53237 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53241 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53247 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53251 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53263 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53267 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53273 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53277 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53285 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53289 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53295 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53299 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53309 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53313 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53319 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53323 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53331 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53335 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53341 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53345 to /sys/fs/cgroup/ltp/test-50542/1/1/1/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53359 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53363 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53369 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53373 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53381 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53385 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53391 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53395 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53405 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53409 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53415 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53419 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53427 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53431 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53437 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53441 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53453 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53457 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53463 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53467 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53475 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53479 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53485 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53489 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53499 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53503 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53509 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53513 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53521 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53525 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53531 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53535 to /sys/fs/cgroup/ltp/test-50542/1/1/1/2/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53551 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53555 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53561 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53565 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53573 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53577 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53583 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53587 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53597 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53601 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53607 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53611 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53619 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53623 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53629 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53633 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53645 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53649 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53655 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53659 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53667 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53671 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53677 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53681 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53691 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53695 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53701 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53705 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53713 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53717 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 53723 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 53727 to /sys/fs/cgroup/ltp/test-50542/1/1/2/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 53741 to /sys/fs/cgroup/ltp/test-50542/1/1/2/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 53745 to /sys/fs/cgroup/ltp/test-50542/1/1/2/2/1/1/1/1/2
[... 361 similar "Attaching task" TINFO lines omitted (tasks 53751 through 55907, walking the remaining /sys/fs/cgroup/ltp/test-50542 subtree) ...]
cgroup_fj_stress 1 TINFO: Attaching task 55911 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 55919 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 55923 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 55929 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 55933 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 55945 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 55949 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 55955 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 55959 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 55967 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 55971 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 55977 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 55981 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 55991 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 55995 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56001 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56005 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56013 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56017 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56023 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56027 to /sys/fs/cgroup/ltp/test-50542/2/2/2/1/2/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56041 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56045 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56051 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56055 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56063 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56067 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56073 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56077 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56087 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56091 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56097 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56101 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56109 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56113 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56119 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56123 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/1/2/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56135 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56139 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56145 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56149 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56157 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56161 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56167 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56171 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/1/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56181 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56185 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56191 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56195 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 56203 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 56207 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 56213 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 56217 to /sys/fs/cgroup/ltp/test-50542/2/2/2/2/2/2/2/2/2
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...
cgroup_fj_stress 1 TINFO: Running: cgroup_fj_stress.sh cpuset 10 3 each
cgroup_fj_stress 1 TINFO: Tested kernel: Linux runner-vwmj3eza-project-40964107-concurrent-2 6.13.0 #1 SMP PREEMPT @1737548066 aarch64 GNU/Linux
cgroup_fj_stress 1 TINFO: Using /tmp/LTP_cgroup_fj_stress.8jxFO1eHA5 as tmpdir (tmpfs filesystem)
cgroup_fj_stress 1 TINFO: ceiling LTP_TIMEOUT_MUL to 11
cgroup_fj_stress 1 TINFO: timeout per run is 0h 55m 0s
cgroup_fj_stress 1 TINFO: test starts with cgroup version 2
cgroup_fj_stress 1 TINFO: Creating subgroups ...
cgroup_fj_stress 1 TINFO: ... mkdired 1111 times
cgroup_fj_stress 1 TINFO: Attaching task to each subgroup
cgroup_fj_stress 1 TINFO: Attaching task 60673 to /sys/fs/cgroup/ltp/test-58282/1/1/1
cgroup_fj_stress 1 TINFO: Attaching task 60677 to /sys/fs/cgroup/ltp/test-58282/1/1/2
cgroup_fj_stress 1 TINFO: Attaching task 60681 to /sys/fs/cgroup/ltp/test-58282/1/1/3
cgroup_fj_stress 1 TINFO: Attaching task 60685 to /sys/fs/cgroup/ltp/test-58282/1/1/4
cgroup_fj_stress 1 TINFO: Attaching task 60689 to /sys/fs/cgroup/ltp/test-58282/1/1/5
cgroup_fj_stress 1 TINFO: Attaching task 60693 to /sys/fs/cgroup/ltp/test-58282/1/1/6
cgroup_fj_stress 1 TINFO: Attaching task 60697 to /sys/fs/cgroup/ltp/test-58282/1/1/7
cgroup_fj_stress 1 TINFO: Attaching task 60701 to /sys/fs/cgroup/ltp/test-58282/1/1/8
cgroup_fj_stress 1 TINFO: Attaching task 60705 to /sys/fs/cgroup/ltp/test-58282/1/1/9
cgroup_fj_stress 1 TINFO: Attaching task 60709 to /sys/fs/cgroup/ltp/test-58282/1/1/10
cgroup_fj_stress 1 TINFO: Attaching task 60715 to /sys/fs/cgroup/ltp/test-58282/1/2/1
cgroup_fj_stress 1 TINFO: Attaching task 60719 to /sys/fs/cgroup/ltp/test-58282/1/2/2
cgroup_fj_stress 1 TINFO: Attaching task 60723 to /sys/fs/cgroup/ltp/test-58282/1/2/3
cgroup_fj_stress 1 TINFO: Attaching task 60727 to /sys/fs/cgroup/ltp/test-58282/1/2/4
cgroup_fj_stress 1 TINFO: Attaching task 60731 to /sys/fs/cgroup/ltp/test-58282/1/2/5
cgroup_fj_stress 1 TINFO: Attaching task 60735 to /sys/fs/cgroup/ltp/test-58282/1/2/6
cgroup_fj_stress 1 TINFO: Attaching task 60739 to /sys/fs/cgroup/ltp/test-58282/1/2/7
cgroup_fj_stress 1 TINFO: Attaching task 60743 to /sys/fs/cgroup/ltp/test-58282/1/2/8
cgroup_fj_stress 1 TINFO: Attaching task 60747 to /sys/fs/cgroup/ltp/test-58282/1/2/9
cgroup_fj_stress 1 TINFO: Attaching task 60751 to /sys/fs/cgroup/ltp/test-58282/1/2/10
cgroup_fj_stress 1 TINFO: Attaching task 60757 to /sys/fs/cgroup/ltp/test-58282/1/3/1
cgroup_fj_stress 1 TINFO: Attaching task 60761 to /sys/fs/cgroup/ltp/test-58282/1/3/2
cgroup_fj_stress 1 TINFO: Attaching task 60765 to /sys/fs/cgroup/ltp/test-58282/1/3/3
cgroup_fj_stress 1 TINFO: Attaching task 60769 to /sys/fs/cgroup/ltp/test-58282/1/3/4
cgroup_fj_stress 1 TINFO: Attaching task 60773 to /sys/fs/cgroup/ltp/test-58282/1/3/5
cgroup_fj_stress 1 TINFO: Attaching task 60777 to /sys/fs/cgroup/ltp/test-58282/1/3/6
cgroup_fj_stress 1 TINFO: Attaching task 60781 to /sys/fs/cgroup/ltp/test-58282/1/3/7
cgroup_fj_stress 1 TINFO: Attaching task 60785 to /sys/fs/cgroup/ltp/test-58282/1/3/8
cgroup_fj_stress 1 TINFO: Attaching task 60789 to /sys/fs/cgroup/ltp/test-58282/1/3/9
cgroup_fj_stress 1 TINFO: Attaching task 60793 to /sys/fs/cgroup/ltp/test-58282/1/3/10
cgroup_fj_stress 1 TINFO: Attaching task 60799 to /sys/fs/cgroup/ltp/test-58282/1/4/1
cgroup_fj_stress 1 TINFO: Attaching task 60803 to /sys/fs/cgroup/ltp/test-58282/1/4/2
cgroup_fj_stress 1 TINFO: Attaching task 60807 to /sys/fs/cgroup/ltp/test-58282/1/4/3
cgroup_fj_stress 1 TINFO: Attaching task 60811 to /sys/fs/cgroup/ltp/test-58282/1/4/4
cgroup_fj_stress 1 TINFO: Attaching task 60815 to /sys/fs/cgroup/ltp/test-58282/1/4/5
cgroup_fj_stress 1 TINFO: Attaching task 60819 to /sys/fs/cgroup/ltp/test-58282/1/4/6
cgroup_fj_stress 1 TINFO: Attaching task 60823 to /sys/fs/cgroup/ltp/test-58282/1/4/7
cgroup_fj_stress 1 TINFO: Attaching task 60827 to /sys/fs/cgroup/ltp/test-58282/1/4/8
cgroup_fj_stress 1 TINFO: Attaching task 60831 to /sys/fs/cgroup/ltp/test-58282/1/4/9
cgroup_fj_stress 1 TINFO: Attaching task 60835 to /sys/fs/cgroup/ltp/test-58282/1/4/10
cgroup_fj_stress 1 TINFO: Attaching task 60841 to /sys/fs/cgroup/ltp/test-58282/1/5/1
cgroup_fj_stress 1 TINFO: Attaching task 60845 to /sys/fs/cgroup/ltp/test-58282/1/5/2
cgroup_fj_stress 1 TINFO: Attaching task 60849 to /sys/fs/cgroup/ltp/test-58282/1/5/3
cgroup_fj_stress 1 TINFO: Attaching task 60853 to /sys/fs/cgroup/ltp/test-58282/1/5/4
cgroup_fj_stress 1 TINFO: Attaching task 60857 to /sys/fs/cgroup/ltp/test-58282/1/5/5
cgroup_fj_stress 1 TINFO: Attaching task 60861 to /sys/fs/cgroup/ltp/test-58282/1/5/6
cgroup_fj_stress 1 TINFO: Attaching task 60865 to /sys/fs/cgroup/ltp/test-58282/1/5/7
cgroup_fj_stress 1 TINFO: Attaching task 60869 to /sys/fs/cgroup/ltp/test-58282/1/5/8
cgroup_fj_stress 1 TINFO: Attaching task 60873 to /sys/fs/cgroup/ltp/test-58282/1/5/9
cgroup_fj_stress 1 TINFO: Attaching task 60877 to /sys/fs/cgroup/ltp/test-58282/1/5/10
cgroup_fj_stress 1 TINFO: Attaching task 60883 to /sys/fs/cgroup/ltp/test-58282/1/6/1
cgroup_fj_stress 1 TINFO: Attaching task 60887 to /sys/fs/cgroup/ltp/test-58282/1/6/2
cgroup_fj_stress 1 TINFO: Attaching task 60891 to /sys/fs/cgroup/ltp/test-58282/1/6/3
cgroup_fj_stress 1 TINFO: Attaching task 60895 to /sys/fs/cgroup/ltp/test-58282/1/6/4
cgroup_fj_stress 1 TINFO: Attaching task 60899 to /sys/fs/cgroup/ltp/test-58282/1/6/5
cgroup_fj_stress 1 TINFO: Attaching task 60903 to /sys/fs/cgroup/ltp/test-58282/1/6/6
cgroup_fj_stress 1 TINFO: Attaching task 60907 to /sys/fs/cgroup/ltp/test-58282/1/6/7
cgroup_fj_stress 1 TINFO: Attaching task 60911 to /sys/fs/cgroup/ltp/test-58282/1/6/8
cgroup_fj_stress 1 TINFO: Attaching task 60915 to /sys/fs/cgroup/ltp/test-58282/1/6/9
cgroup_fj_stress 1 TINFO: Attaching task 60919 to /sys/fs/cgroup/ltp/test-58282/1/6/10
cgroup_fj_stress 1 TINFO: Attaching task 60925 to /sys/fs/cgroup/ltp/test-58282/1/7/1
cgroup_fj_stress 1 TINFO: Attaching task 60929 to /sys/fs/cgroup/ltp/test-58282/1/7/2
cgroup_fj_stress 1 TINFO: Attaching task 60933 to /sys/fs/cgroup/ltp/test-58282/1/7/3
cgroup_fj_stress 1 TINFO: Attaching task 60937 to /sys/fs/cgroup/ltp/test-58282/1/7/4
cgroup_fj_stress 1 TINFO: Attaching task 60941 to /sys/fs/cgroup/ltp/test-58282/1/7/5
cgroup_fj_stress 1 TINFO: Attaching task 60945 to /sys/fs/cgroup/ltp/test-58282/1/7/6
cgroup_fj_stress 1 TINFO: Attaching task 60949 to /sys/fs/cgroup/ltp/test-58282/1/7/7
cgroup_fj_stress 1 TINFO: Attaching task 60953 to /sys/fs/cgroup/ltp/test-58282/1/7/8
cgroup_fj_stress 1 TINFO: Attaching task 60957 to /sys/fs/cgroup/ltp/test-58282/1/7/9
cgroup_fj_stress 1 TINFO: Attaching task 60961 to /sys/fs/cgroup/ltp/test-58282/1/7/10
cgroup_fj_stress 1 TINFO: Attaching task 60967 to /sys/fs/cgroup/ltp/test-58282/1/8/1
cgroup_fj_stress 1 TINFO: Attaching task 60971 to /sys/fs/cgroup/ltp/test-58282/1/8/2
cgroup_fj_stress 1 TINFO: Attaching task 60975 to /sys/fs/cgroup/ltp/test-58282/1/8/3
cgroup_fj_stress 1 TINFO: Attaching task 60979 to /sys/fs/cgroup/ltp/test-58282/1/8/4
cgroup_fj_stress 1 TINFO: Attaching task 60983 to /sys/fs/cgroup/ltp/test-58282/1/8/5
cgroup_fj_stress 1 TINFO: Attaching task 60987 to /sys/fs/cgroup/ltp/test-58282/1/8/6
cgroup_fj_stress 1 TINFO: Attaching task 60991 to /sys/fs/cgroup/ltp/test-58282/1/8/7
cgroup_fj_stress 1 TINFO: Attaching task 60995 to /sys/fs/cgroup/ltp/test-58282/1/8/8
cgroup_fj_stress 1 TINFO: Attaching task 60999 to /sys/fs/cgroup/ltp/test-58282/1/8/9
cgroup_fj_stress 1 TINFO: Attaching task 61003 to /sys/fs/cgroup/ltp/test-58282/1/8/10
cgroup_fj_stress 1 TINFO: Attaching task 61009 to /sys/fs/cgroup/ltp/test-58282/1/9/1
cgroup_fj_stress 1 TINFO: Attaching task 61013 to /sys/fs/cgroup/ltp/test-58282/1/9/2
cgroup_fj_stress 1 TINFO: Attaching task 61017 to /sys/fs/cgroup/ltp/test-58282/1/9/3
cgroup_fj_stress 1 TINFO: Attaching task 61021 to /sys/fs/cgroup/ltp/test-58282/1/9/4
cgroup_fj_stress 1 TINFO: Attaching task 61025 to /sys/fs/cgroup/ltp/test-58282/1/9/5
cgroup_fj_stress 1 TINFO: Attaching task 61029 to /sys/fs/cgroup/ltp/test-58282/1/9/6
cgroup_fj_stress 1 TINFO: Attaching task 61033 to /sys/fs/cgroup/ltp/test-58282/1/9/7
cgroup_fj_stress 1 TINFO: Attaching task 61037 to /sys/fs/cgroup/ltp/test-58282/1/9/8
cgroup_fj_stress 1 TINFO: Attaching task 61041 to /sys/fs/cgroup/ltp/test-58282/1/9/9
cgroup_fj_stress 1 TINFO: Attaching task 61045 to /sys/fs/cgroup/ltp/test-58282/1/9/10
cgroup_fj_stress 1 TINFO: Attaching task 61051 to /sys/fs/cgroup/ltp/test-58282/1/10/1
cgroup_fj_stress 1 TINFO: Attaching task 61055 to /sys/fs/cgroup/ltp/test-58282/1/10/2
cgroup_fj_stress 1 TINFO: Attaching task 61059 to /sys/fs/cgroup/ltp/test-58282/1/10/3
cgroup_fj_stress 1 TINFO: Attaching task 61063 to /sys/fs/cgroup/ltp/test-58282/1/10/4
cgroup_fj_stress 1 TINFO: Attaching task 61067 to /sys/fs/cgroup/ltp/test-58282/1/10/5
cgroup_fj_stress 1 TINFO: Attaching task 61071 to /sys/fs/cgroup/ltp/test-58282/1/10/6
cgroup_fj_stress 1 TINFO: Attaching task 61075 to /sys/fs/cgroup/ltp/test-58282/1/10/7
cgroup_fj_stress 1 TINFO: Attaching task 61079 to /sys/fs/cgroup/ltp/test-58282/1/10/8
cgroup_fj_stress 1 TINFO: Attaching task 61083 to /sys/fs/cgroup/ltp/test-58282/1/10/9
cgroup_fj_stress 1 TINFO: Attaching task 61087 to /sys/fs/cgroup/ltp/test-58282/1/10/10
cgroup_fj_stress 1 TINFO: Attaching task 61095 to /sys/fs/cgroup/ltp/test-58282/2/1/1
cgroup_fj_stress 1 TINFO: Attaching task 61099 to /sys/fs/cgroup/ltp/test-58282/2/1/2
cgroup_fj_stress 1 TINFO: Attaching task 61103 to /sys/fs/cgroup/ltp/test-58282/2/1/3
cgroup_fj_stress 1 TINFO: Attaching task 61107 to /sys/fs/cgroup/ltp/test-58282/2/1/4
cgroup_fj_stress 1 TINFO: Attaching task 61111 to /sys/fs/cgroup/ltp/test-58282/2/1/5
cgroup_fj_stress 1 TINFO: Attaching task 61115 to /sys/fs/cgroup/ltp/test-58282/2/1/6
cgroup_fj_stress 1 TINFO: Attaching task 61119 to /sys/fs/cgroup/ltp/test-58282/2/1/7
cgroup_fj_stress 1 TINFO: Attaching task 61123 to /sys/fs/cgroup/ltp/test-58282/2/1/8
cgroup_fj_stress 1 TINFO: Attaching task 61127 to /sys/fs/cgroup/ltp/test-58282/2/1/9
cgroup_fj_stress 1 TINFO: Attaching task 61131 to /sys/fs/cgroup/ltp/test-58282/2/1/10
cgroup_fj_stress 1 TINFO: Attaching task 61137 to /sys/fs/cgroup/ltp/test-58282/2/2/1
cgroup_fj_stress 1 TINFO: Attaching task 61141 to /sys/fs/cgroup/ltp/test-58282/2/2/2
cgroup_fj_stress 1 TINFO: Attaching task 61145 to /sys/fs/cgroup/ltp/test-58282/2/2/3
cgroup_fj_stress 1 TINFO: Attaching task 61149 to /sys/fs/cgroup/ltp/test-58282/2/2/4
cgroup_fj_stress 1 TINFO: Attaching task 61153 to /sys/fs/cgroup/ltp/test-58282/2/2/5
cgroup_fj_stress 1 TINFO: Attaching task 61157 to /sys/fs/cgroup/ltp/test-58282/2/2/6
cgroup_fj_stress 1 TINFO: Attaching task 61161 to /sys/fs/cgroup/ltp/test-58282/2/2/7
cgroup_fj_stress 1 TINFO: Attaching task 61165 to /sys/fs/cgroup/ltp/test-58282/2/2/8
cgroup_fj_stress 1 TINFO: Attaching task 61169 to /sys/fs/cgroup/ltp/test-58282/2/2/9
cgroup_fj_stress 1 TINFO: Attaching task 61173 to /sys/fs/cgroup/ltp/test-58282/2/2/10
cgroup_fj_stress 1 TINFO: Attaching task 61179 to /sys/fs/cgroup/ltp/test-58282/2/3/1
cgroup_fj_stress 1 TINFO: Attaching task 61183 to /sys/fs/cgroup/ltp/test-58282/2/3/2
cgroup_fj_stress 1 TINFO: Attaching task 61187 to /sys/fs/cgroup/ltp/test-58282/2/3/3
cgroup_fj_stress 1 TINFO: Attaching task 61191 to /sys/fs/cgroup/ltp/test-58282/2/3/4
cgroup_fj_stress 1 TINFO: Attaching task 61195 to /sys/fs/cgroup/ltp/test-58282/2/3/5
cgroup_fj_stress 1 TINFO: Attaching task 61199 to /sys/fs/cgroup/ltp/test-58282/2/3/6
cgroup_fj_stress 1 TINFO: Attaching task 61203 to /sys/fs/cgroup/ltp/test-58282/2/3/7
cgroup_fj_stress 1 TINFO: Attaching task 61207 to /sys/fs/cgroup/ltp/test-58282/2/3/8
cgroup_fj_stress 1 TINFO: Attaching task 61211 to /sys/fs/cgroup/ltp/test-58282/2/3/9
cgroup_fj_stress 1 TINFO: Attaching task 61215 to /sys/fs/cgroup/ltp/test-58282/2/3/10
cgroup_fj_stress 1 TINFO: Attaching task 61221 to /sys/fs/cgroup/ltp/test-58282/2/4/1
cgroup_fj_stress 1 TINFO: Attaching task 61225 to /sys/fs/cgroup/ltp/test-58282/2/4/2
cgroup_fj_stress 1 TINFO: Attaching task 61229 to /sys/fs/cgroup/ltp/test-58282/2/4/3
cgroup_fj_stress 1 TINFO: Attaching task 61233 to /sys/fs/cgroup/ltp/test-58282/2/4/4
cgroup_fj_stress 1 TINFO: Attaching task 61237 to /sys/fs/cgroup/ltp/test-58282/2/4/5
cgroup_fj_stress 1 TINFO: Attaching task 61241 to /sys/fs/cgroup/ltp/test-58282/2/4/6
cgroup_fj_stress 1 TINFO: Attaching task 61245 to /sys/fs/cgroup/ltp/test-58282/2/4/7
cgroup_fj_stress 1 TINFO: Attaching task 61249 to /sys/fs/cgroup/ltp/test-58282/2/4/8
cgroup_fj_stress 1 TINFO: Attaching task 61253 to /sys/fs/cgroup/ltp/test-58282/2/4/9
cgroup_fj_stress 1 TINFO: Attaching task 61257 to /sys/fs/cgroup/ltp/test-58282/2/4/10
cgroup_fj_stress 1 TINFO: Attaching task 61263 to /sys/fs/cgroup/ltp/test-58282/2/5/1
cgroup_fj_stress 1 TINFO: Attaching task 61267 to /sys/fs/cgroup/ltp/test-58282/2/5/2
cgroup_fj_stress 1 TINFO: Attaching task 61271 to /sys/fs/cgroup/ltp/test-58282/2/5/3
cgroup_fj_stress 1 TINFO: Attaching task 61275 to /sys/fs/cgroup/ltp/test-58282/2/5/4
cgroup_fj_stress 1 TINFO: Attaching task 61279 to /sys/fs/cgroup/ltp/test-58282/2/5/5
cgroup_fj_stress 1 TINFO: Attaching task 61283 to /sys/fs/cgroup/ltp/test-58282/2/5/6
cgroup_fj_stress 1 TINFO: Attaching task 61287 to /sys/fs/cgroup/ltp/test-58282/2/5/7
cgroup_fj_stress 1 TINFO: Attaching task 61291 to /sys/fs/cgroup/ltp/test-58282/2/5/8
cgroup_fj_stress 1 TINFO: Attaching task 61295 to /sys/fs/cgroup/ltp/test-58282/2/5/9
cgroup_fj_stress 1 TINFO: Attaching task 61299 to /sys/fs/cgroup/ltp/test-58282/2/5/10
cgroup_fj_stress 1 TINFO: Attaching task 61305 to /sys/fs/cgroup/ltp/test-58282/2/6/1
cgroup_fj_stress 1 TINFO: Attaching task 61309 to /sys/fs/cgroup/ltp/test-58282/2/6/2
cgroup_fj_stress 1 TINFO: Attaching task 61313 to /sys/fs/cgroup/ltp/test-58282/2/6/3
cgroup_fj_stress 1 TINFO: Attaching task 61317 to /sys/fs/cgroup/ltp/test-58282/2/6/4
cgroup_fj_stress 1 TINFO: Attaching task 61321 to /sys/fs/cgroup/ltp/test-58282/2/6/5
cgroup_fj_stress 1 TINFO: Attaching task 61325 to /sys/fs/cgroup/ltp/test-58282/2/6/6
cgroup_fj_stress 1 TINFO: Attaching task 61329 to /sys/fs/cgroup/ltp/test-58282/2/6/7
cgroup_fj_stress 1 TINFO: Attaching task 61333 to /sys/fs/cgroup/ltp/test-58282/2/6/8
cgroup_fj_stress 1 TINFO: Attaching task 61337 to /sys/fs/cgroup/ltp/test-58282/2/6/9
cgroup_fj_stress 1 TINFO: Attaching task 61341 to /sys/fs/cgroup/ltp/test-58282/2/6/10
cgroup_fj_stress 1 TINFO: Attaching task 61347 to /sys/fs/cgroup/ltp/test-58282/2/7/1
cgroup_fj_stress 1 TINFO: Attaching task 61351 to /sys/fs/cgroup/ltp/test-58282/2/7/2
cgroup_fj_stress 1 TINFO: Attaching task 61355 to /sys/fs/cgroup/ltp/test-58282/2/7/3
cgroup_fj_stress 1 TINFO: Attaching task 61359 to /sys/fs/cgroup/ltp/test-58282/2/7/4
cgroup_fj_stress 1 TINFO: Attaching task 61363 to /sys/fs/cgroup/ltp/test-58282/2/7/5
cgroup_fj_stress 1 TINFO: Attaching task 61367 to /sys/fs/cgroup/ltp/test-58282/2/7/6
cgroup_fj_stress 1 TINFO: Attaching task 61371 to /sys/fs/cgroup/ltp/test-58282/2/7/7
cgroup_fj_stress 1 TINFO: Attaching task 61375 to /sys/fs/cgroup/ltp/test-58282/2/7/8
cgroup_fj_stress 1 TINFO: Attaching task 61379 to /sys/fs/cgroup/ltp/test-58282/2/7/9
cgroup_fj_stress 1 TINFO: Attaching task 61383 to /sys/fs/cgroup/ltp/test-58282/2/7/10
cgroup_fj_stress 1 TINFO: Attaching task 61389 to /sys/fs/cgroup/ltp/test-58282/2/8/1
cgroup_fj_stress 1 TINFO: Attaching task 61393 to /sys/fs/cgroup/ltp/test-58282/2/8/2
cgroup_fj_stress 1 TINFO: Attaching task 61397 to /sys/fs/cgroup/ltp/test-58282/2/8/3
cgroup_fj_stress 1 TINFO: Attaching task 61401 to /sys/fs/cgroup/ltp/test-58282/2/8/4
cgroup_fj_stress 1 TINFO: Attaching task 61405 to /sys/fs/cgroup/ltp/test-58282/2/8/5
cgroup_fj_stress 1 TINFO: Attaching task 61409 to /sys/fs/cgroup/ltp/test-58282/2/8/6
cgroup_fj_stress 1 TINFO: Attaching task 61413 to /sys/fs/cgroup/ltp/test-58282/2/8/7
cgroup_fj_stress 1 TINFO: Attaching task 61417 to /sys/fs/cgroup/ltp/test-58282/2/8/8
cgroup_fj_stress 1 TINFO: Attaching task 61421 to /sys/fs/cgroup/ltp/test-58282/2/8/9
cgroup_fj_stress 1 TINFO: Attaching task 61425 to /sys/fs/cgroup/ltp/test-58282/2/8/10
cgroup_fj_stress 1 TINFO: Attaching task 61431 to /sys/fs/cgroup/ltp/test-58282/2/9/1
cgroup_fj_stress 1 TINFO: Attaching task 61435 to /sys/fs/cgroup/ltp/test-58282/2/9/2
cgroup_fj_stress 1 TINFO: Attaching task 61439 to /sys/fs/cgroup/ltp/test-58282/2/9/3
cgroup_fj_stress 1 TINFO: Attaching task 61443 to /sys/fs/cgroup/ltp/test-58282/2/9/4
cgroup_fj_stress 1 TINFO: Attaching task 61447 to /sys/fs/cgroup/ltp/test-58282/2/9/5
cgroup_fj_stress 1 TINFO: Attaching task 61451 to /sys/fs/cgroup/ltp/test-58282/2/9/6
cgroup_fj_stress 1 TINFO: Attaching task 61455 to /sys/fs/cgroup/ltp/test-58282/2/9/7
cgroup_fj_stress 1 TINFO: Attaching task 61459 to /sys/fs/cgroup/ltp/test-58282/2/9/8
cgroup_fj_stress 1 TINFO: Attaching task 61463 to /sys/fs/cgroup/ltp/test-58282/2/9/9
cgroup_fj_stress 1 TINFO: Attaching task 61467 to /sys/fs/cgroup/ltp/test-58282/2/9/10
cgroup_fj_stress 1 TINFO: Attaching task 61473 to /sys/fs/cgroup/ltp/test-58282/2/10/1
cgroup_fj_stress 1 TINFO: Attaching task 61477 to /sys/fs/cgroup/ltp/test-58282/2/10/2
cgroup_fj_stress 1 TINFO: Attaching task 61481 to /sys/fs/cgroup/ltp/test-58282/2/10/3
cgroup_fj_stress 1 TINFO: Attaching task 61485 to /sys/fs/cgroup/ltp/test-58282/2/10/4
cgroup_fj_stress 1 TINFO: Attaching task 61489 to /sys/fs/cgroup/ltp/test-58282/2/10/5
cgroup_fj_stress 1 TINFO: Attaching task 61493 to /sys/fs/cgroup/ltp/test-58282/2/10/6
cgroup_fj_stress 1 TINFO: Attaching task 61497 to /sys/fs/cgroup/ltp/test-58282/2/10/7
cgroup_fj_stress 1 TINFO: Attaching task 61501 to /sys/fs/cgroup/ltp/test-58282/2/10/8
cgroup_fj_stress 1 TINFO: Attaching task 61505 to /sys/fs/cgroup/ltp/test-58282/2/10/9
cgroup_fj_stress 1 TINFO: Attaching task 61509 to /sys/fs/cgroup/ltp/test-58282/2/10/10
cgroup_fj_stress 1 TINFO: Attaching task 61517 to /sys/fs/cgroup/ltp/test-58282/3/1/1
cgroup_fj_stress 1 TINFO: Attaching task 61521 to /sys/fs/cgroup/ltp/test-58282/3/1/2
cgroup_fj_stress 1 TINFO: Attaching task 61525 to /sys/fs/cgroup/ltp/test-58282/3/1/3
cgroup_fj_stress 1 TINFO: Attaching task 61529 to /sys/fs/cgroup/ltp/test-58282/3/1/4
cgroup_fj_stress 1 TINFO: Attaching task 61533 to /sys/fs/cgroup/ltp/test-58282/3/1/5
cgroup_fj_stress 1 TINFO: Attaching task 61537 to /sys/fs/cgroup/ltp/test-58282/3/1/6
cgroup_fj_stress 1 TINFO: Attaching task 61541 to /sys/fs/cgroup/ltp/test-58282/3/1/7
cgroup_fj_stress 1 TINFO: Attaching task 61545 to /sys/fs/cgroup/ltp/test-58282/3/1/8
cgroup_fj_stress 1 TINFO: Attaching task 61549 to /sys/fs/cgroup/ltp/test-58282/3/1/9
cgroup_fj_stress 1 TINFO: Attaching task 61553 to /sys/fs/cgroup/ltp/test-58282/3/1/10
cgroup_fj_stress 1 TINFO: Attaching task 61559 to /sys/fs/cgroup/ltp/test-58282/3/2/1
cgroup_fj_stress 1 TINFO: Attaching task 61563 to /sys/fs/cgroup/ltp/test-58282/3/2/2
cgroup_fj_stress 1 TINFO: Attaching task 61567 to /sys/fs/cgroup/ltp/test-58282/3/2/3
cgroup_fj_stress 1 TINFO: Attaching task 61571 to /sys/fs/cgroup/ltp/test-58282/3/2/4
cgroup_fj_stress 1 TINFO: Attaching task 61575 to /sys/fs/cgroup/ltp/test-58282/3/2/5
cgroup_fj_stress 1 TINFO: Attaching task 61579 to /sys/fs/cgroup/ltp/test-58282/3/2/6
cgroup_fj_stress 1 TINFO: Attaching task 61583 to /sys/fs/cgroup/ltp/test-58282/3/2/7
cgroup_fj_stress 1 TINFO: Attaching task 61587 to /sys/fs/cgroup/ltp/test-58282/3/2/8
cgroup_fj_stress 1 TINFO: Attaching task 61591 to /sys/fs/cgroup/ltp/test-58282/3/2/9
cgroup_fj_stress 1 TINFO: Attaching task 61595 to /sys/fs/cgroup/ltp/test-58282/3/2/10
cgroup_fj_stress 1 TINFO: Attaching task 61601 to /sys/fs/cgroup/ltp/test-58282/3/3/1
cgroup_fj_stress 1 TINFO: Attaching task 61605 to /sys/fs/cgroup/ltp/test-58282/3/3/2
cgroup_fj_stress 1 TINFO: Attaching task 61609 to /sys/fs/cgroup/ltp/test-58282/3/3/3
cgroup_fj_stress 1 TINFO: Attaching task 61613 to /sys/fs/cgroup/ltp/test-58282/3/3/4
cgroup_fj_stress 1 TINFO: Attaching task 61617 to /sys/fs/cgroup/ltp/test-58282/3/3/5
cgroup_fj_stress 1 TINFO: Attaching task 61621 to /sys/fs/cgroup/ltp/test-58282/3/3/6
cgroup_fj_stress 1 TINFO: Attaching task 61625 to /sys/fs/cgroup/ltp/test-58282/3/3/7
cgroup_fj_stress 1 TINFO: Attaching task 61629 to /sys/fs/cgroup/ltp/test-58282/3/3/8
cgroup_fj_stress 1 TINFO: Attaching task 61633 to /sys/fs/cgroup/ltp/test-58282/3/3/9
cgroup_fj_stress 1 TINFO: Attaching task 61637 to /sys/fs/cgroup/ltp/test-58282/3/3/10
cgroup_fj_stress 1 TINFO: Attaching task 61643 to /sys/fs/cgroup/ltp/test-58282/3/4/1
cgroup_fj_stress 1 TINFO: Attaching task 61647 to /sys/fs/cgroup/ltp/test-58282/3/4/2
cgroup_fj_stress 1 TINFO: Attaching task 61651 to /sys/fs/cgroup/ltp/test-58282/3/4/3
cgroup_fj_stress 1 TINFO: Attaching task 61655 to /sys/fs/cgroup/ltp/test-58282/3/4/4
cgroup_fj_stress 1 TINFO: Attaching task 61659 to /sys/fs/cgroup/ltp/test-58282/3/4/5
cgroup_fj_stress 1 TINFO: Attaching task 61663 to /sys/fs/cgroup/ltp/test-58282/3/4/6
cgroup_fj_stress 1 TINFO: Attaching task 61667 to /sys/fs/cgroup/ltp/test-58282/3/4/7
cgroup_fj_stress 1 TINFO: Attaching task 61671 to /sys/fs/cgroup/ltp/test-58282/3/4/8
cgroup_fj_stress 1 TINFO: Attaching task 61675 to /sys/fs/cgroup/ltp/test-58282/3/4/9
cgroup_fj_stress 1 TINFO: Attaching task 61679 to /sys/fs/cgroup/ltp/test-58282/3/4/10
cgroup_fj_stress 1 TINFO: Attaching task 61685 to /sys/fs/cgroup/ltp/test-58282/3/5/1
cgroup_fj_stress 1 TINFO: Attaching task 61689 to /sys/fs/cgroup/ltp/test-58282/3/5/2
cgroup_fj_stress 1 TINFO: Attaching task 61693 to /sys/fs/cgroup/ltp/test-58282/3/5/3
cgroup_fj_stress 1 TINFO: Attaching task 61697 to /sys/fs/cgroup/ltp/test-58282/3/5/4
cgroup_fj_stress 1 TINFO: Attaching task 61701 to /sys/fs/cgroup/ltp/test-58282/3/5/5
cgroup_fj_stress 1 TINFO: Attaching task 61705 to /sys/fs/cgroup/ltp/test-58282/3/5/6
cgroup_fj_stress 1 TINFO: Attaching task 61709 to /sys/fs/cgroup/ltp/test-58282/3/5/7
cgroup_fj_stress 1 TINFO: Attaching task 61713 to /sys/fs/cgroup/ltp/test-58282/3/5/8
cgroup_fj_stress 1 TINFO: Attaching task 61717 to /sys/fs/cgroup/ltp/test-58282/3/5/9
cgroup_fj_stress 1 TINFO: Attaching task 61721 to /sys/fs/cgroup/ltp/test-58282/3/5/10
cgroup_fj_stress 1 TINFO: Attaching task 61727 to /sys/fs/cgroup/ltp/test-58282/3/6/1
cgroup_fj_stress 1 TINFO: Attaching task 61731 to /sys/fs/cgroup/ltp/test-58282/3/6/2
cgroup_fj_stress 1 TINFO: Attaching task 61735 to /sys/fs/cgroup/ltp/test-58282/3/6/3
cgroup_fj_stress 1 TINFO: Attaching task 61739 to /sys/fs/cgroup/ltp/test-58282/3/6/4
cgroup_fj_stress 1 TINFO: Attaching task 61743 to /sys/fs/cgroup/ltp/test-58282/3/6/5
cgroup_fj_stress 1 TINFO: Attaching task 61747 to /sys/fs/cgroup/ltp/test-58282/3/6/6
cgroup_fj_stress 1 TINFO: Attaching task 61751 to /sys/fs/cgroup/ltp/test-58282/3/6/7
cgroup_fj_stress 1 TINFO: Attaching task 61755 to /sys/fs/cgroup/ltp/test-58282/3/6/8
cgroup_fj_stress 1 TINFO: Attaching task 61759 to /sys/fs/cgroup/ltp/test-58282/3/6/9
cgroup_fj_stress 1 TINFO: Attaching task 61763 to /sys/fs/cgroup/ltp/test-58282/3/6/10
cgroup_fj_stress 1 TINFO: Attaching task 61769 to /sys/fs/cgroup/ltp/test-58282/3/7/1
cgroup_fj_stress 1 TINFO: Attaching task 61773 to /sys/fs/cgroup/ltp/test-58282/3/7/2
cgroup_fj_stress 1 TINFO: Attaching task 61777 to /sys/fs/cgroup/ltp/test-58282/3/7/3
cgroup_fj_stress 1 TINFO: Attaching task 61781 to /sys/fs/cgroup/ltp/test-58282/3/7/4
cgroup_fj_stress 1 TINFO: Attaching task 61785 to /sys/fs/cgroup/ltp/test-58282/3/7/5
cgroup_fj_stress 1 TINFO: Attaching task 61789 to /sys/fs/cgroup/ltp/test-58282/3/7/6
cgroup_fj_stress 1 TINFO: Attaching task 61793 to /sys/fs/cgroup/ltp/test-58282/3/7/7
cgroup_fj_stress 1 TINFO: Attaching task 61797 to /sys/fs/cgroup/ltp/test-58282/3/7/8
cgroup_fj_stress 1 TINFO: Attaching task 61801 to /sys/fs/cgroup/ltp/test-58282/3/7/9
cgroup_fj_stress 1 TINFO: Attaching task 61805 to /sys/fs/cgroup/ltp/test-58282/3/7/10
cgroup_fj_stress 1 TINFO: Attaching task 61811 to /sys/fs/cgroup/ltp/test-58282/3/8/1
cgroup_fj_stress 1 TINFO: Attaching task 61815 to /sys/fs/cgroup/ltp/test-58282/3/8/2
cgroup_fj_stress 1 TINFO: Attaching task 61819 to /sys/fs/cgroup/ltp/test-58282/3/8/3
cgroup_fj_stress 1 TINFO: Attaching task 61823 to /sys/fs/cgroup/ltp/test-58282/3/8/4
cgroup_fj_stress 1 TINFO: Attaching task 61827 to /sys/fs/cgroup/ltp/test-58282/3/8/5
cgroup_fj_stress 1 TINFO: Attaching task 61831 to /sys/fs/cgroup/ltp/test-58282/3/8/6
cgroup_fj_stress 1 TINFO: Attaching task 61835 to /sys/fs/cgroup/ltp/test-58282/3/8/7
cgroup_fj_stress 1 TINFO: Attaching task 61839 to /sys/fs/cgroup/ltp/test-58282/3/8/8
cgroup_fj_stress 1 TINFO: Attaching task 61843 to /sys/fs/cgroup/ltp/test-58282/3/8/9
cgroup_fj_stress 1 TINFO: Attaching task 61847 to /sys/fs/cgroup/ltp/test-58282/3/8/10
cgroup_fj_stress 1 TINFO: Attaching task 61853 to /sys/fs/cgroup/ltp/test-58282/3/9/1
cgroup_fj_stress 1 TINFO: Attaching task 61857 to /sys/fs/cgroup/ltp/test-58282/3/9/2
cgroup_fj_stress 1 TINFO: Attaching task 61861 to /sys/fs/cgroup/ltp/test-58282/3/9/3
cgroup_fj_stress 1 TINFO: Attaching task 61865 to /sys/fs/cgroup/ltp/test-58282/3/9/4
cgroup_fj_stress 1 TINFO: Attaching task 61869 to /sys/fs/cgroup/ltp/test-58282/3/9/5
cgroup_fj_stress 1 TINFO: Attaching task 61873 to /sys/fs/cgroup/ltp/test-58282/3/9/6
cgroup_fj_stress 1 TINFO: Attaching task 61877 to /sys/fs/cgroup/ltp/test-58282/3/9/7
cgroup_fj_stress 1 TINFO: Attaching task 61881 to /sys/fs/cgroup/ltp/test-58282/3/9/8
cgroup_fj_stress 1 TINFO: Attaching task 61885 to /sys/fs/cgroup/ltp/test-58282/3/9/9
cgroup_fj_stress 1 TINFO: Attaching task 61889 to /sys/fs/cgroup/ltp/test-58282/3/9/10
cgroup_fj_stress 1 TINFO: Attaching task 61895 to /sys/fs/cgroup/ltp/test-58282/3/10/1
cgroup_fj_stress 1 TINFO: Attaching task 61899 to /sys/fs/cgroup/ltp/test-58282/3/10/2
cgroup_fj_stress 1 TINFO: Attaching task 61903 to /sys/fs/cgroup/ltp/test-58282/3/10/3
cgroup_fj_stress 1 TINFO: Attaching task 61907 to /sys/fs/cgroup/ltp/test-58282/3/10/4
cgroup_fj_stress 1 TINFO: Attaching task 61911 to /sys/fs/cgroup/ltp/test-58282/3/10/5
cgroup_fj_stress 1 TINFO: Attaching task 61915 to /sys/fs/cgroup/ltp/test-58282/3/10/6
cgroup_fj_stress 1 TINFO: Attaching task 61919 to /sys/fs/cgroup/ltp/test-58282/3/10/7
cgroup_fj_stress 1 TINFO: Attaching task 61923 to /sys/fs/cgroup/ltp/test-58282/3/10/8
cgroup_fj_stress 1 TINFO: Attaching task 61927 to /sys/fs/cgroup/ltp/test-58282/3/10/9
cgroup_fj_stress 1 TINFO: Attaching task 61931 to /sys/fs/cgroup/ltp/test-58282/3/10/10
cgroup_fj_stress 1 TINFO: Attaching task 61939 to /sys/fs/cgroup/ltp/test-58282/4/1/1
cgroup_fj_stress 1 TINFO: Attaching task 61943 to /sys/fs/cgroup/ltp/test-58282/4/1/2
cgroup_fj_stress 1 TINFO: Attaching task 61947 to /sys/fs/cgroup/ltp/test-58282/4/1/3
cgroup_fj_stress 1 TINFO: Attaching task 61951 to /sys/fs/cgroup/ltp/test-58282/4/1/4
cgroup_fj_stress 1 TINFO: Attaching task 61955 to /sys/fs/cgroup/ltp/test-58282/4/1/5
cgroup_fj_stress 1 TINFO: Attaching task 61959 to /sys/fs/cgroup/ltp/test-58282/4/1/6
cgroup_fj_stress 1 TINFO: Attaching task 61963 to /sys/fs/cgroup/ltp/test-58282/4/1/7
cgroup_fj_stress 1 TINFO: Attaching task 61967 to /sys/fs/cgroup/ltp/test-58282/4/1/8
cgroup_fj_stress 1 TINFO: Attaching task 61971 to /sys/fs/cgroup/ltp/test-58282/4/1/9
cgroup_fj_stress 1 TINFO: Attaching task 61975 to /sys/fs/cgroup/ltp/test-58282/4/1/10
cgroup_fj_stress 1 TINFO: Attaching task 61981 to /sys/fs/cgroup/ltp/test-58282/4/2/1
cgroup_fj_stress 1 TINFO: Attaching task 61985 to /sys/fs/cgroup/ltp/test-58282/4/2/2
cgroup_fj_stress 1 TINFO: Attaching task 61989 to /sys/fs/cgroup/ltp/test-58282/4/2/3
cgroup_fj_stress 1 TINFO: Attaching task 61993 to /sys/fs/cgroup/ltp/test-58282/4/2/4
cgroup_fj_stress 1 TINFO: Attaching task 61997 to /sys/fs/cgroup/ltp/test-58282/4/2/5
cgroup_fj_stress 1 TINFO: Attaching task 62001 to /sys/fs/cgroup/ltp/test-58282/4/2/6
cgroup_fj_stress 1 TINFO: Attaching task 62005 to /sys/fs/cgroup/ltp/test-58282/4/2/7
cgroup_fj_stress 1 TINFO: Attaching task 62009 to /sys/fs/cgroup/ltp/test-58282/4/2/8
cgroup_fj_stress 1 TINFO: Attaching task 62013 to /sys/fs/cgroup/ltp/test-58282/4/2/9
cgroup_fj_stress 1 TINFO: Attaching task 62017 to /sys/fs/cgroup/ltp/test-58282/4/2/10
cgroup_fj_stress 1 TINFO: Attaching task 62023 to /sys/fs/cgroup/ltp/test-58282/4/3/1
cgroup_fj_stress 1 TINFO: Attaching task 62027 to /sys/fs/cgroup/ltp/test-58282/4/3/2
cgroup_fj_stress 1 TINFO: Attaching task 62031 to /sys/fs/cgroup/ltp/test-58282/4/3/3
cgroup_fj_stress 1 TINFO: Attaching task 62035 to /sys/fs/cgroup/ltp/test-58282/4/3/4
cgroup_fj_stress 1 TINFO: Attaching task 62039 to /sys/fs/cgroup/ltp/test-58282/4/3/5
cgroup_fj_stress 1 TINFO: Attaching task 62043 to /sys/fs/cgroup/ltp/test-58282/4/3/6
cgroup_fj_stress 1 TINFO: Attaching task 62047 to /sys/fs/cgroup/ltp/test-58282/4/3/7
cgroup_fj_stress 1 TINFO: Attaching task 62051 to /sys/fs/cgroup/ltp/test-58282/4/3/8
cgroup_fj_stress 1 TINFO: Attaching task 62055 to /sys/fs/cgroup/ltp/test-58282/4/3/9
cgroup_fj_stress 1 TINFO: Attaching task 62059 to /sys/fs/cgroup/ltp/test-58282/4/3/10
cgroup_fj_stress 1 TINFO: Attaching task 62065 to /sys/fs/cgroup/ltp/test-58282/4/4/1
cgroup_fj_stress 1 TINFO: Attaching task 62069 to /sys/fs/cgroup/ltp/test-58282/4/4/2
cgroup_fj_stress 1 TINFO: Attaching task 62073 to /sys/fs/cgroup/ltp/test-58282/4/4/3
cgroup_fj_stress 1 TINFO: Attaching task 62077 to /sys/fs/cgroup/ltp/test-58282/4/4/4
cgroup_fj_stress 1 TINFO: Attaching task 62081 to /sys/fs/cgroup/ltp/test-58282/4/4/5
cgroup_fj_stress 1 TINFO: Attaching task 62085 to /sys/fs/cgroup/ltp/test-58282/4/4/6
cgroup_fj_stress 1 TINFO: Attaching task 62089 to /sys/fs/cgroup/ltp/test-58282/4/4/7
cgroup_fj_stress 1 TINFO: Attaching task 62093 to /sys/fs/cgroup/ltp/test-58282/4/4/8
cgroup_fj_stress 1 TINFO: Attaching task 62097 to /sys/fs/cgroup/ltp/test-58282/4/4/9
cgroup_fj_stress 1 TINFO: Attaching task 62101 to /sys/fs/cgroup/ltp/test-58282/4/4/10
cgroup_fj_stress 1 TINFO: Attaching task 62107 to /sys/fs/cgroup/ltp/test-58282/4/5/1
cgroup_fj_stress 1 TINFO: Attaching task 62111 to /sys/fs/cgroup/ltp/test-58282/4/5/2
cgroup_fj_stress 1 TINFO: Attaching task 62115 to /sys/fs/cgroup/ltp/test-58282/4/5/3
cgroup_fj_stress 1 TINFO: Attaching task 62119 to /sys/fs/cgroup/ltp/test-58282/4/5/4
cgroup_fj_stress 1 TINFO: Attaching task 62123 to /sys/fs/cgroup/ltp/test-58282/4/5/5
cgroup_fj_stress 1 TINFO: Attaching task 62127 to /sys/fs/cgroup/ltp/test-58282/4/5/6
cgroup_fj_stress 1 TINFO: Attaching task 62131 to /sys/fs/cgroup/ltp/test-58282/4/5/7
cgroup_fj_stress 1 TINFO: Attaching task 62135 to /sys/fs/cgroup/ltp/test-58282/4/5/8
cgroup_fj_stress 1 TINFO: Attaching task 62139 to /sys/fs/cgroup/ltp/test-58282/4/5/9
cgroup_fj_stress 1 TINFO: Attaching task 62143 to /sys/fs/cgroup/ltp/test-58282/4/5/10
cgroup_fj_stress 1 TINFO: Attaching task 62149 to /sys/fs/cgroup/ltp/test-58282/4/6/1
cgroup_fj_stress 1 TINFO: Attaching task 62153 to /sys/fs/cgroup/ltp/test-58282/4/6/2
cgroup_fj_stress 1 TINFO: Attaching task 62157 to /sys/fs/cgroup/ltp/test-58282/4/6/3
cgroup_fj_stress 1 TINFO: Attaching task 62161 to /sys/fs/cgroup/ltp/test-58282/4/6/4
cgroup_fj_stress 1 TINFO: Attaching task 62165 to /sys/fs/cgroup/ltp/test-58282/4/6/5
cgroup_fj_stress 1 TINFO: Attaching task 62169 to /sys/fs/cgroup/ltp/test-58282/4/6/6
cgroup_fj_stress 1 TINFO: Attaching task 62173 to /sys/fs/cgroup/ltp/test-58282/4/6/7
cgroup_fj_stress 1 TINFO: Attaching task 62177 to /sys/fs/cgroup/ltp/test-58282/4/6/8
cgroup_fj_stress 1 TINFO: Attaching task 62181 to /sys/fs/cgroup/ltp/test-58282/4/6/9
cgroup_fj_stress 1 TINFO: Attaching task 62185 to /sys/fs/cgroup/ltp/test-58282/4/6/10
cgroup_fj_stress 1 TINFO: Attaching task 62191 to /sys/fs/cgroup/ltp/test-58282/4/7/1
cgroup_fj_stress 1 TINFO: Attaching task 62195 to /sys/fs/cgroup/ltp/test-58282/4/7/2
cgroup_fj_stress 1 TINFO: Attaching task 62199 to /sys/fs/cgroup/ltp/test-58282/4/7/3
cgroup_fj_stress 1 TINFO: Attaching task 62203 to /sys/fs/cgroup/ltp/test-58282/4/7/4
cgroup_fj_stress 1 TINFO: Attaching task 62207 to /sys/fs/cgroup/ltp/test-58282/4/7/5
cgroup_fj_stress 1 TINFO: Attaching task 62211 to /sys/fs/cgroup/ltp/test-58282/4/7/6
cgroup_fj_stress 1 TINFO: Attaching task 62215 to /sys/fs/cgroup/ltp/test-58282/4/7/7
cgroup_fj_stress 1 TINFO: Attaching task 62219 to /sys/fs/cgroup/ltp/test-58282/4/7/8
cgroup_fj_stress 1 TINFO: Attaching task 62223 to /sys/fs/cgroup/ltp/test-58282/4/7/9
cgroup_fj_stress 1 TINFO: Attaching task 62227 to /sys/fs/cgroup/ltp/test-58282/4/7/10
cgroup_fj_stress 1 TINFO: Attaching task 62233 to /sys/fs/cgroup/ltp/test-58282/4/8/1
cgroup_fj_stress 1 TINFO: Attaching task 62237 to /sys/fs/cgroup/ltp/test-58282/4/8/2
cgroup_fj_stress 1 TINFO: Attaching task 62241 to /sys/fs/cgroup/ltp/test-58282/4/8/3
cgroup_fj_stress 1 TINFO: Attaching task 62245 to /sys/fs/cgroup/ltp/test-58282/4/8/4
cgroup_fj_stress 1 TINFO: Attaching task 62249 to /sys/fs/cgroup/ltp/test-58282/4/8/5
cgroup_fj_stress 1 TINFO: Attaching task 62253 to /sys/fs/cgroup/ltp/test-58282/4/8/6
cgroup_fj_stress 1 TINFO: Attaching task 62257 to /sys/fs/cgroup/ltp/test-58282/4/8/7
cgroup_fj_stress 1 TINFO: Attaching task 62261 to /sys/fs/cgroup/ltp/test-58282/4/8/8
cgroup_fj_stress 1 TINFO: Attaching task 62265 to /sys/fs/cgroup/ltp/test-58282/4/8/9
cgroup_fj_stress 1 TINFO: Attaching task 62269 to /sys/fs/cgroup/ltp/test-58282/4/8/10
cgroup_fj_stress 1 TINFO: Attaching task 62275 to /sys/fs/cgroup/ltp/test-58282/4/9/1
cgroup_fj_stress 1 TINFO: Attaching task 62279 to /sys/fs/cgroup/ltp/test-58282/4/9/2
cgroup_fj_stress 1 TINFO: Attaching task 62283 to /sys/fs/cgroup/ltp/test-58282/4/9/3
cgroup_fj_stress 1 TINFO: Attaching task 62287 to /sys/fs/cgroup/ltp/test-58282/4/9/4
cgroup_fj_stress 1 TINFO: Attaching task 62291 to /sys/fs/cgroup/ltp/test-58282/4/9/5
cgroup_fj_stress 1 TINFO: Attaching task 62295 to /sys/fs/cgroup/ltp/test-58282/4/9/6
cgroup_fj_stress 1 TINFO: Attaching task 62299 to /sys/fs/cgroup/ltp/test-58282/4/9/7
cgroup_fj_stress 1 TINFO: Attaching task 62303 to /sys/fs/cgroup/ltp/test-58282/4/9/8
cgroup_fj_stress 1 TINFO: Attaching task 62307 to /sys/fs/cgroup/ltp/test-58282/4/9/9
cgroup_fj_stress 1 TINFO: Attaching task 62311 to /sys/fs/cgroup/ltp/test-58282/4/9/10
cgroup_fj_stress 1 TINFO: Attaching task 62317 to /sys/fs/cgroup/ltp/test-58282/4/10/1
cgroup_fj_stress 1 TINFO: Attaching task 62321 to /sys/fs/cgroup/ltp/test-58282/4/10/2
cgroup_fj_stress 1 TINFO: Attaching task 62325 to /sys/fs/cgroup/ltp/test-58282/4/10/3
cgroup_fj_stress 1 TINFO: Attaching task 62329 to /sys/fs/cgroup/ltp/test-58282/4/10/4
cgroup_fj_stress 1 TINFO: Attaching task 62333 to /sys/fs/cgroup/ltp/test-58282/4/10/5
cgroup_fj_stress 1 TINFO: Attaching task 62337 to /sys/fs/cgroup/ltp/test-58282/4/10/6
cgroup_fj_stress 1 TINFO: Attaching task 62341 to /sys/fs/cgroup/ltp/test-58282/4/10/7
cgroup_fj_stress 1 TINFO: Attaching task 62345 to /sys/fs/cgroup/ltp/test-58282/4/10/8
cgroup_fj_stress 1 TINFO: Attaching task 62349 to /sys/fs/cgroup/ltp/test-58282/4/10/9
cgroup_fj_stress 1 TINFO: Attaching task 62353 to /sys/fs/cgroup/ltp/test-58282/4/10/10
cgroup_fj_stress 1 TINFO: Attaching task 62361 to /sys/fs/cgroup/ltp/test-58282/5/1/1
cgroup_fj_stress 1 TINFO: Attaching task 62365 to /sys/fs/cgroup/ltp/test-58282/5/1/2
cgroup_fj_stress 1 TINFO: Attaching task 62369 to /sys/fs/cgroup/ltp/test-58282/5/1/3
cgroup_fj_stress 1 TINFO: Attaching task 62373 to /sys/fs/cgroup/ltp/test-58282/5/1/4
cgroup_fj_stress 1 TINFO: Attaching task 62377 to /sys/fs/cgroup/ltp/test-58282/5/1/5
cgroup_fj_stress 1 TINFO: Attaching task 62381 to /sys/fs/cgroup/ltp/test-58282/5/1/6
cgroup_fj_stress 1 TINFO: Attaching task 62385 to /sys/fs/cgroup/ltp/test-58282/5/1/7
cgroup_fj_stress 1 TINFO: Attaching task 62389 to /sys/fs/cgroup/ltp/test-58282/5/1/8
cgroup_fj_stress 1 TINFO: Attaching task 62393 to /sys/fs/cgroup/ltp/test-58282/5/1/9
cgroup_fj_stress 1 TINFO: Attaching task 62397 to /sys/fs/cgroup/ltp/test-58282/5/1/10
cgroup_fj_stress 1 TINFO: Attaching task 62403 to /sys/fs/cgroup/ltp/test-58282/5/2/1
cgroup_fj_stress 1 TINFO: Attaching task 62407 to /sys/fs/cgroup/ltp/test-58282/5/2/2
cgroup_fj_stress 1 TINFO: Attaching task 62411 to /sys/fs/cgroup/ltp/test-58282/5/2/3
cgroup_fj_stress 1 TINFO: Attaching task 62415 to /sys/fs/cgroup/ltp/test-58282/5/2/4
cgroup_fj_stress 1 TINFO: Attaching task 62419 to /sys/fs/cgroup/ltp/test-58282/5/2/5
cgroup_fj_stress 1 TINFO: Attaching task 62423 to /sys/fs/cgroup/ltp/test-58282/5/2/6
cgroup_fj_stress 1 TINFO: Attaching task 62427 to /sys/fs/cgroup/ltp/test-58282/5/2/7
cgroup_fj_stress 1 TINFO: Attaching task 62431 to /sys/fs/cgroup/ltp/test-58282/5/2/8
cgroup_fj_stress 1 TINFO: Attaching task 62435 to /sys/fs/cgroup/ltp/test-58282/5/2/9
cgroup_fj_stress 1 TINFO: Attaching task 62439 to /sys/fs/cgroup/ltp/test-58282/5/2/10
cgroup_fj_stress 1 TINFO: Attaching task 62445 to /sys/fs/cgroup/ltp/test-58282/5/3/1
cgroup_fj_stress 1 TINFO: Attaching task 62449 to /sys/fs/cgroup/ltp/test-58282/5/3/2
cgroup_fj_stress 1 TINFO: Attaching task 62453 to /sys/fs/cgroup/ltp/test-58282/5/3/3
cgroup_fj_stress 1 TINFO: Attaching task 62457 to /sys/fs/cgroup/ltp/test-58282/5/3/4
cgroup_fj_stress 1 TINFO: Attaching task 62461 to /sys/fs/cgroup/ltp/test-58282/5/3/5
cgroup_fj_stress 1 TINFO: Attaching task 62465 to /sys/fs/cgroup/ltp/test-58282/5/3/6
cgroup_fj_stress 1 TINFO: Attaching task 62469 to /sys/fs/cgroup/ltp/test-58282/5/3/7
cgroup_fj_stress 1 TINFO: Attaching task 62473 to /sys/fs/cgroup/ltp/test-58282/5/3/8
cgroup_fj_stress 1 TINFO: Attaching task 62477 to /sys/fs/cgroup/ltp/test-58282/5/3/9
cgroup_fj_stress 1 TINFO: Attaching task 62481 to /sys/fs/cgroup/ltp/test-58282/5/3/10
cgroup_fj_stress 1 TINFO: Attaching task 62487 to /sys/fs/cgroup/ltp/test-58282/5/4/1
cgroup_fj_stress 1 TINFO: Attaching task 62491 to /sys/fs/cgroup/ltp/test-58282/5/4/2
cgroup_fj_stress 1 TINFO: Attaching task 62495 to /sys/fs/cgroup/ltp/test-58282/5/4/3
cgroup_fj_stress 1 TINFO: Attaching task 62499 to /sys/fs/cgroup/ltp/test-58282/5/4/4
cgroup_fj_stress 1 TINFO: Attaching task 62503 to /sys/fs/cgroup/ltp/test-58282/5/4/5
cgroup_fj_stress 1 TINFO: Attaching task 62507 to /sys/fs/cgroup/ltp/test-58282/5/4/6
cgroup_fj_stress 1 TINFO: Attaching task 62511 to /sys/fs/cgroup/ltp/test-58282/5/4/7
cgroup_fj_stress 1 TINFO: Attaching task 62515 to /sys/fs/cgroup/ltp/test-58282/5/4/8
cgroup_fj_stress 1 TINFO: Attaching task 62519 to /sys/fs/cgroup/ltp/test-58282/5/4/9
cgroup_fj_stress 1 TINFO: Attaching task 62523 to /sys/fs/cgroup/ltp/test-58282/5/4/10
cgroup_fj_stress 1 TINFO: Attaching task 62529 to /sys/fs/cgroup/ltp/test-58282/5/5/1
cgroup_fj_stress 1 TINFO: Attaching task 62533 to /sys/fs/cgroup/ltp/test-58282/5/5/2
cgroup_fj_stress 1 TINFO: Attaching task 62537 to /sys/fs/cgroup/ltp/test-58282/5/5/3
cgroup_fj_stress 1 TINFO: Attaching task 62541 to /sys/fs/cgroup/ltp/test-58282/5/5/4
cgroup_fj_stress 1 TINFO: Attaching task 62545 to /sys/fs/cgroup/ltp/test-58282/5/5/5
cgroup_fj_stress 1 TINFO: Attaching task 62549 to /sys/fs/cgroup/ltp/test-58282/5/5/6
cgroup_fj_stress 1 TINFO: Attaching task 62553 to /sys/fs/cgroup/ltp/test-58282/5/5/7
cgroup_fj_stress 1 TINFO: Attaching task 62557 to /sys/fs/cgroup/ltp/test-58282/5/5/8
cgroup_fj_stress 1 TINFO: Attaching task 62561 to /sys/fs/cgroup/ltp/test-58282/5/5/9
cgroup_fj_stress 1 TINFO: Attaching task 62565 to /sys/fs/cgroup/ltp/test-58282/5/5/10
cgroup_fj_stress 1 TINFO: Attaching task 62571 to /sys/fs/cgroup/ltp/test-58282/5/6/1
cgroup_fj_stress 1 TINFO: Attaching task 62575 to /sys/fs/cgroup/ltp/test-58282/5/6/2
cgroup_fj_stress 1 TINFO: Attaching task 62579 to /sys/fs/cgroup/ltp/test-58282/5/6/3
cgroup_fj_stress 1 TINFO: Attaching task 62583 to /sys/fs/cgroup/ltp/test-58282/5/6/4
cgroup_fj_stress 1 TINFO: Attaching task 62587 to /sys/fs/cgroup/ltp/test-58282/5/6/5
cgroup_fj_stress 1 TINFO: Attaching task 62591 to /sys/fs/cgroup/ltp/test-58282/5/6/6
cgroup_fj_stress 1 TINFO: Attaching task 62595 to /sys/fs/cgroup/ltp/test-58282/5/6/7
cgroup_fj_stress 1 TINFO: Attaching task 62599 to /sys/fs/cgroup/ltp/test-58282/5/6/8
cgroup_fj_stress 1 TINFO: Attaching task 62603 to /sys/fs/cgroup/ltp/test-58282/5/6/9
cgroup_fj_stress 1 TINFO: Attaching task 62607 to /sys/fs/cgroup/ltp/test-58282/5/6/10
cgroup_fj_stress 1 TINFO: Attaching task 62613 to /sys/fs/cgroup/ltp/test-58282/5/7/1
cgroup_fj_stress 1 TINFO: Attaching task 62617 to /sys/fs/cgroup/ltp/test-58282/5/7/2
cgroup_fj_stress 1 TINFO: Attaching task 62621 to /sys/fs/cgroup/ltp/test-58282/5/7/3
cgroup_fj_stress 1 TINFO: Attaching task 62625 to /sys/fs/cgroup/ltp/test-58282/5/7/4
cgroup_fj_stress 1 TINFO: Attaching task 62629 to /sys/fs/cgroup/ltp/test-58282/5/7/5
cgroup_fj_stress 1 TINFO: Attaching task 62633 to /sys/fs/cgroup/ltp/test-58282/5/7/6
cgroup_fj_stress 1 TINFO: Attaching task 62637 to /sys/fs/cgroup/ltp/test-58282/5/7/7
cgroup_fj_stress 1 TINFO: Attaching task 62641 to /sys/fs/cgroup/ltp/test-58282/5/7/8
cgroup_fj_stress 1 TINFO: Attaching task 62645 to /sys/fs/cgroup/ltp/test-58282/5/7/9
cgroup_fj_stress 1 TINFO: Attaching task 62649 to /sys/fs/cgroup/ltp/test-58282/5/7/10
cgroup_fj_stress 1 TINFO: Attaching task 62655 to /sys/fs/cgroup/ltp/test-58282/5/8/1
cgroup_fj_stress 1 TINFO: Attaching task 62659 to /sys/fs/cgroup/ltp/test-58282/5/8/2
cgroup_fj_stress 1 TINFO: Attaching task 62663 to /sys/fs/cgroup/ltp/test-58282/5/8/3
cgroup_fj_stress 1 TINFO: Attaching task 62667 to /sys/fs/cgroup/ltp/test-58282/5/8/4
cgroup_fj_stress 1 TINFO: Attaching task 62671 to /sys/fs/cgroup/ltp/test-58282/5/8/5
cgroup_fj_stress 1 TINFO: Attaching task 62675 to /sys/fs/cgroup/ltp/test-58282/5/8/6
cgroup_fj_stress 1 TINFO: Attaching task 62679 to /sys/fs/cgroup/ltp/test-58282/5/8/7
cgroup_fj_stress 1 TINFO: Attaching task 62683 to /sys/fs/cgroup/ltp/test-58282/5/8/8
cgroup_fj_stress 1 TINFO: Attaching task 62687 to /sys/fs/cgroup/ltp/test-58282/5/8/9
cgroup_fj_stress 1 TINFO: Attaching task 62691 to /sys/fs/cgroup/ltp/test-58282/5/8/10
cgroup_fj_stress 1 TINFO: Attaching task 62697 to /sys/fs/cgroup/ltp/test-58282/5/9/1
cgroup_fj_stress 1 TINFO: Attaching task 62701 to /sys/fs/cgroup/ltp/test-58282/5/9/2
cgroup_fj_stress 1 TINFO: Attaching task 62705 to /sys/fs/cgroup/ltp/test-58282/5/9/3
cgroup_fj_stress 1 TINFO: Attaching task 62709 to /sys/fs/cgroup/ltp/test-58282/5/9/4
cgroup_fj_stress 1 TINFO: Attaching task 62713 to /sys/fs/cgroup/ltp/test-58282/5/9/5
cgroup_fj_stress 1 TINFO: Attaching task 62717 to /sys/fs/cgroup/ltp/test-58282/5/9/6
cgroup_fj_stress 1 TINFO: Attaching task 62721 to /sys/fs/cgroup/ltp/test-58282/5/9/7
cgroup_fj_stress 1 TINFO: Attaching task 62725 to /sys/fs/cgroup/ltp/test-58282/5/9/8
cgroup_fj_stress 1 TINFO: Attaching task 62729 to /sys/fs/cgroup/ltp/test-58282/5/9/9
cgroup_fj_stress 1 TINFO: Attaching task 62733 to /sys/fs/cgroup/ltp/test-58282/5/9/10
cgroup_fj_stress 1 TINFO: Attaching task 62739 to /sys/fs/cgroup/ltp/test-58282/5/10/1
cgroup_fj_stress 1 TINFO: Attaching task 62743 to /sys/fs/cgroup/ltp/test-58282/5/10/2
cgroup_fj_stress 1 TINFO: Attaching task 62747 to /sys/fs/cgroup/ltp/test-58282/5/10/3
cgroup_fj_stress 1 TINFO: Attaching task 62751 to /sys/fs/cgroup/ltp/test-58282/5/10/4
cgroup_fj_stress 1 TINFO: Attaching task 62755 to /sys/fs/cgroup/ltp/test-58282/5/10/5
cgroup_fj_stress 1 TINFO: Attaching task 62759 to /sys/fs/cgroup/ltp/test-58282/5/10/6
cgroup_fj_stress 1 TINFO: Attaching task 62763 to /sys/fs/cgroup/ltp/test-58282/5/10/7
cgroup_fj_stress 1 TINFO: Attaching task 62767 to /sys/fs/cgroup/ltp/test-58282/5/10/8
cgroup_fj_stress 1 TINFO: Attaching task 62771 to /sys/fs/cgroup/ltp/test-58282/5/10/9
cgroup_fj_stress 1 TINFO: Attaching task 62775 to /sys/fs/cgroup/ltp/test-58282/5/10/10
cgroup_fj_stress 1 TINFO: Attaching task 62783 to /sys/fs/cgroup/ltp/test-58282/6/1/1
cgroup_fj_stress 1 TINFO: Attaching task 62787 to /sys/fs/cgroup/ltp/test-58282/6/1/2
cgroup_fj_stress 1 TINFO: Attaching task 62791 to /sys/fs/cgroup/ltp/test-58282/6/1/3
cgroup_fj_stress 1 TINFO: Attaching task 62795 to /sys/fs/cgroup/ltp/test-58282/6/1/4
cgroup_fj_stress 1 TINFO: Attaching task 62799 to /sys/fs/cgroup/ltp/test-58282/6/1/5
cgroup_fj_stress 1 TINFO: Attaching task 62803 to /sys/fs/cgroup/ltp/test-58282/6/1/6
cgroup_fj_stress 1 TINFO: Attaching task 62807 to /sys/fs/cgroup/ltp/test-58282/6/1/7
cgroup_fj_stress 1 TINFO: Attaching task 62811 to /sys/fs/cgroup/ltp/test-58282/6/1/8
cgroup_fj_stress 1 TINFO: Attaching task 62815 to /sys/fs/cgroup/ltp/test-58282/6/1/9
cgroup_fj_stress 1 TINFO: Attaching task 62819 to /sys/fs/cgroup/ltp/test-58282/6/1/10
cgroup_fj_stress 1 TINFO: Attaching task 62825 to /sys/fs/cgroup/ltp/test-58282/6/2/1
cgroup_fj_stress 1 TINFO: Attaching task 62829 to /sys/fs/cgroup/ltp/test-58282/6/2/2
cgroup_fj_stress 1 TINFO: Attaching task 62833 to /sys/fs/cgroup/ltp/test-58282/6/2/3
cgroup_fj_stress 1 TINFO: Attaching task 62837 to /sys/fs/cgroup/ltp/test-58282/6/2/4
cgroup_fj_stress 1 TINFO: Attaching task 62841 to /sys/fs/cgroup/ltp/test-58282/6/2/5
cgroup_fj_stress 1 TINFO: Attaching task 62845 to /sys/fs/cgroup/ltp/test-58282/6/2/6
cgroup_fj_stress 1 TINFO: Attaching task 62849 to /sys/fs/cgroup/ltp/test-58282/6/2/7
cgroup_fj_stress 1 TINFO: Attaching task 62853 to /sys/fs/cgroup/ltp/test-58282/6/2/8
cgroup_fj_stress 1 TINFO: Attaching task 62857 to /sys/fs/cgroup/ltp/test-58282/6/2/9
cgroup_fj_stress 1 TINFO: Attaching task 62861 to /sys/fs/cgroup/ltp/test-58282/6/2/10
cgroup_fj_stress 1 TINFO: Attaching task 62867 to /sys/fs/cgroup/ltp/test-58282/6/3/1
cgroup_fj_stress 1 TINFO: Attaching task 62871 to /sys/fs/cgroup/ltp/test-58282/6/3/2
cgroup_fj_stress 1 TINFO: Attaching task 62875 to /sys/fs/cgroup/ltp/test-58282/6/3/3
cgroup_fj_stress 1 TINFO: Attaching task 62879 to /sys/fs/cgroup/ltp/test-58282/6/3/4
cgroup_fj_stress 1 TINFO: Attaching task 62883 to /sys/fs/cgroup/ltp/test-58282/6/3/5
cgroup_fj_stress 1 TINFO: Attaching task 62887 to /sys/fs/cgroup/ltp/test-58282/6/3/6
cgroup_fj_stress 1 TINFO: Attaching task 62891 to /sys/fs/cgroup/ltp/test-58282/6/3/7
cgroup_fj_stress 1 TINFO: Attaching task 62895 to /sys/fs/cgroup/ltp/test-58282/6/3/8
cgroup_fj_stress 1 TINFO: Attaching task 62899 to /sys/fs/cgroup/ltp/test-58282/6/3/9
cgroup_fj_stress 1 TINFO: Attaching task 62903 to /sys/fs/cgroup/ltp/test-58282/6/3/10
cgroup_fj_stress 1 TINFO: Attaching task 62909 to /sys/fs/cgroup/ltp/test-58282/6/4/1
cgroup_fj_stress 1 TINFO: Attaching task 62913 to /sys/fs/cgroup/ltp/test-58282/6/4/2
cgroup_fj_stress 1 TINFO: Attaching task 62917 to /sys/fs/cgroup/ltp/test-58282/6/4/3
cgroup_fj_stress 1 TINFO: Attaching task 62921 to /sys/fs/cgroup/ltp/test-58282/6/4/4
cgroup_fj_stress 1 TINFO: Attaching task 62925 to /sys/fs/cgroup/ltp/test-58282/6/4/5
cgroup_fj_stress 1 TINFO: Attaching task 62929 to /sys/fs/cgroup/ltp/test-58282/6/4/6
cgroup_fj_stress 1 TINFO: Attaching task 62933 to /sys/fs/cgroup/ltp/test-58282/6/4/7
cgroup_fj_stress 1 TINFO: Attaching task 62937 to /sys/fs/cgroup/ltp/test-58282/6/4/8
cgroup_fj_stress 1 TINFO: Attaching task 62941 to /sys/fs/cgroup/ltp/test-58282/6/4/9
cgroup_fj_stress 1 TINFO: Attaching task 62945 to /sys/fs/cgroup/ltp/test-58282/6/4/10
cgroup_fj_stress 1 TINFO: Attaching task 62951 to /sys/fs/cgroup/ltp/test-58282/6/5/1
cgroup_fj_stress 1 TINFO: Attaching task 62955 to /sys/fs/cgroup/ltp/test-58282/6/5/2
cgroup_fj_stress 1 TINFO: Attaching task 62959 to /sys/fs/cgroup/ltp/test-58282/6/5/3
cgroup_fj_stress 1 TINFO: Attaching task 62963 to /sys/fs/cgroup/ltp/test-58282/6/5/4
cgroup_fj_stress 1 TINFO: Attaching task 62967 to /sys/fs/cgroup/ltp/test-58282/6/5/5
cgroup_fj_stress 1 TINFO: Attaching task 62971 to /sys/fs/cgroup/ltp/test-58282/6/5/6
cgroup_fj_stress 1 TINFO: Attaching task 62975 to /sys/fs/cgroup/ltp/test-58282/6/5/7
cgroup_fj_stress 1 TINFO: Attaching task 62979 to /sys/fs/cgroup/ltp/test-58282/6/5/8
cgroup_fj_stress 1 TINFO: Attaching task 62983 to /sys/fs/cgroup/ltp/test-58282/6/5/9
cgroup_fj_stress 1 TINFO: Attaching task 62987 to /sys/fs/cgroup/ltp/test-58282/6/5/10
cgroup_fj_stress 1 TINFO: Attaching task 62993 to /sys/fs/cgroup/ltp/test-58282/6/6/1
cgroup_fj_stress 1 TINFO: Attaching task 62997 to /sys/fs/cgroup/ltp/test-58282/6/6/2
cgroup_fj_stress 1 TINFO: Attaching task 63001 to /sys/fs/cgroup/ltp/test-58282/6/6/3
cgroup_fj_stress 1 TINFO: Attaching task 63005 to /sys/fs/cgroup/ltp/test-58282/6/6/4
cgroup_fj_stress 1 TINFO: Attaching task 63009 to /sys/fs/cgroup/ltp/test-58282/6/6/5
cgroup_fj_stress 1 TINFO: Attaching task 63013 to /sys/fs/cgroup/ltp/test-58282/6/6/6
cgroup_fj_stress 1 TINFO: Attaching task 63017 to /sys/fs/cgroup/ltp/test-58282/6/6/7
cgroup_fj_stress 1 TINFO: Attaching task 63021 to /sys/fs/cgroup/ltp/test-58282/6/6/8
cgroup_fj_stress 1 TINFO: Attaching task 63025 to /sys/fs/cgroup/ltp/test-58282/6/6/9
cgroup_fj_stress 1 TINFO: Attaching task 63029 to /sys/fs/cgroup/ltp/test-58282/6/6/10
cgroup_fj_stress 1 TINFO: Attaching task 63035 to /sys/fs/cgroup/ltp/test-58282/6/7/1
cgroup_fj_stress 1 TINFO: Attaching task 63039 to /sys/fs/cgroup/ltp/test-58282/6/7/2
cgroup_fj_stress 1 TINFO: Attaching task 63043 to /sys/fs/cgroup/ltp/test-58282/6/7/3
cgroup_fj_stress 1 TINFO: Attaching task 63047 to /sys/fs/cgroup/ltp/test-58282/6/7/4
cgroup_fj_stress 1 TINFO: Attaching task 63051 to /sys/fs/cgroup/ltp/test-58282/6/7/5
cgroup_fj_stress 1 TINFO: Attaching task 63055 to /sys/fs/cgroup/ltp/test-58282/6/7/6
cgroup_fj_stress 1 TINFO: Attaching task 63059 to /sys/fs/cgroup/ltp/test-58282/6/7/7
cgroup_fj_stress 1 TINFO: Attaching task 63063 to /sys/fs/cgroup/ltp/test-58282/6/7/8
cgroup_fj_stress 1 TINFO: Attaching task 63067 to /sys/fs/cgroup/ltp/test-58282/6/7/9
cgroup_fj_stress 1 TINFO: Attaching task 63071 to /sys/fs/cgroup/ltp/test-58282/6/7/10
cgroup_fj_stress 1 TINFO: Attaching task 63077 to /sys/fs/cgroup/ltp/test-58282/6/8/1
cgroup_fj_stress 1 TINFO: Attaching task 63081 to /sys/fs/cgroup/ltp/test-58282/6/8/2
cgroup_fj_stress 1 TINFO: Attaching task 63085 to /sys/fs/cgroup/ltp/test-58282/6/8/3
cgroup_fj_stress 1 TINFO: Attaching task 63089 to /sys/fs/cgroup/ltp/test-58282/6/8/4
cgroup_fj_stress 1 TINFO: Attaching task 63093 to /sys/fs/cgroup/ltp/test-58282/6/8/5
cgroup_fj_stress 1 TINFO: Attaching task 63097 to /sys/fs/cgroup/ltp/test-58282/6/8/6
cgroup_fj_stress 1 TINFO: Attaching task 63101 to /sys/fs/cgroup/ltp/test-58282/6/8/7
cgroup_fj_stress 1 TINFO: Attaching task 63105 to /sys/fs/cgroup/ltp/test-58282/6/8/8
cgroup_fj_stress 1 TINFO: Attaching task 63109 to /sys/fs/cgroup/ltp/test-58282/6/8/9
cgroup_fj_stress 1 TINFO: Attaching task 63113 to /sys/fs/cgroup/ltp/test-58282/6/8/10
cgroup_fj_stress 1 TINFO: Attaching task 63119 to /sys/fs/cgroup/ltp/test-58282/6/9/1
cgroup_fj_stress 1 TINFO: Attaching task 63123 to /sys/fs/cgroup/ltp/test-58282/6/9/2
cgroup_fj_stress 1 TINFO: Attaching task 63127 to /sys/fs/cgroup/ltp/test-58282/6/9/3
cgroup_fj_stress 1 TINFO: Attaching task 63131 to /sys/fs/cgroup/ltp/test-58282/6/9/4
cgroup_fj_stress 1 TINFO: Attaching task 63135 to /sys/fs/cgroup/ltp/test-58282/6/9/5
cgroup_fj_stress 1 TINFO: Attaching task 63139 to /sys/fs/cgroup/ltp/test-58282/6/9/6
cgroup_fj_stress 1 TINFO: Attaching task 63143 to /sys/fs/cgroup/ltp/test-58282/6/9/7
cgroup_fj_stress 1 TINFO: Attaching task 63147 to /sys/fs/cgroup/ltp/test-58282/6/9/8
cgroup_fj_stress 1 TINFO: Attaching task 63151 to /sys/fs/cgroup/ltp/test-58282/6/9/9
cgroup_fj_stress 1 TINFO: Attaching task 63155 to /sys/fs/cgroup/ltp/test-58282/6/9/10
cgroup_fj_stress 1 TINFO: Attaching task 63161 to /sys/fs/cgroup/ltp/test-58282/6/10/1
cgroup_fj_stress 1 TINFO: Attaching task 63165 to /sys/fs/cgroup/ltp/test-58282/6/10/2
cgroup_fj_stress 1 TINFO: Attaching task 63169 to /sys/fs/cgroup/ltp/test-58282/6/10/3
cgroup_fj_stress 1 TINFO: Attaching task 63173 to /sys/fs/cgroup/ltp/test-58282/6/10/4
cgroup_fj_stress 1 TINFO: Attaching task 63177 to /sys/fs/cgroup/ltp/test-58282/6/10/5
cgroup_fj_stress 1 TINFO: Attaching task 63181 to /sys/fs/cgroup/ltp/test-58282/6/10/6
cgroup_fj_stress 1 TINFO: Attaching task 63185 to /sys/fs/cgroup/ltp/test-58282/6/10/7
cgroup_fj_stress 1 TINFO: Attaching task 63189 to /sys/fs/cgroup/ltp/test-58282/6/10/8
cgroup_fj_stress 1 TINFO: Attaching task 63193 to /sys/fs/cgroup/ltp/test-58282/6/10/9
cgroup_fj_stress 1 TINFO: Attaching task 63197 to /sys/fs/cgroup/ltp/test-58282/6/10/10
cgroup_fj_stress 1 TINFO: Attaching task 63205 to /sys/fs/cgroup/ltp/test-58282/7/1/1
cgroup_fj_stress 1 TINFO: Attaching task 63209 to /sys/fs/cgroup/ltp/test-58282/7/1/2
cgroup_fj_stress 1 TINFO: Attaching task 63213 to /sys/fs/cgroup/ltp/test-58282/7/1/3
cgroup_fj_stress 1 TINFO: Attaching task 63217 to /sys/fs/cgroup/ltp/test-58282/7/1/4
cgroup_fj_stress 1 TINFO: Attaching task 63221 to /sys/fs/cgroup/ltp/test-58282/7/1/5
cgroup_fj_stress 1 TINFO: Attaching task 63225 to /sys/fs/cgroup/ltp/test-58282/7/1/6
cgroup_fj_stress 1 TINFO: Attaching task 63229 to /sys/fs/cgroup/ltp/test-58282/7/1/7
cgroup_fj_stress 1 TINFO: Attaching task 63233 to /sys/fs/cgroup/ltp/test-58282/7/1/8
cgroup_fj_stress 1 TINFO: Attaching task 63237 to /sys/fs/cgroup/ltp/test-58282/7/1/9
cgroup_fj_stress 1 TINFO: Attaching task 63241 to /sys/fs/cgroup/ltp/test-58282/7/1/10
cgroup_fj_stress 1 TINFO: Attaching task 63247 to /sys/fs/cgroup/ltp/test-58282/7/2/1
cgroup_fj_stress 1 TINFO: Attaching task 63251 to /sys/fs/cgroup/ltp/test-58282/7/2/2
cgroup_fj_stress 1 TINFO: Attaching task 63255 to /sys/fs/cgroup/ltp/test-58282/7/2/3
cgroup_fj_stress 1 TINFO: Attaching task 63259 to /sys/fs/cgroup/ltp/test-58282/7/2/4
cgroup_fj_stress 1 TINFO: Attaching task 63263 to /sys/fs/cgroup/ltp/test-58282/7/2/5
cgroup_fj_stress 1 TINFO: Attaching task 63267 to /sys/fs/cgroup/ltp/test-58282/7/2/6
cgroup_fj_stress 1 TINFO: Attaching task 63271 to /sys/fs/cgroup/ltp/test-58282/7/2/7
cgroup_fj_stress 1 TINFO: Attaching task 63275 to /sys/fs/cgroup/ltp/test-58282/7/2/8
cgroup_fj_stress 1 TINFO: Attaching task 63279 to /sys/fs/cgroup/ltp/test-58282/7/2/9
cgroup_fj_stress 1 TINFO: Attaching task 63283 to /sys/fs/cgroup/ltp/test-58282/7/2/10
cgroup_fj_stress 1 TINFO: Attaching task 63289 to /sys/fs/cgroup/ltp/test-58282/7/3/1
cgroup_fj_stress 1 TINFO: Attaching task 63293 to /sys/fs/cgroup/ltp/test-58282/7/3/2
cgroup_fj_stress 1 TINFO: Attaching task 63297 to /sys/fs/cgroup/ltp/test-58282/7/3/3
cgroup_fj_stress 1 TINFO: Attaching task 63301 to /sys/fs/cgroup/ltp/test-58282/7/3/4
cgroup_fj_stress 1 TINFO: Attaching task 63305 to /sys/fs/cgroup/ltp/test-58282/7/3/5
cgroup_fj_stress 1 TINFO: Attaching task 63309 to /sys/fs/cgroup/ltp/test-58282/7/3/6
cgroup_fj_stress 1 TINFO: Attaching task 63313 to /sys/fs/cgroup/ltp/test-58282/7/3/7
cgroup_fj_stress 1 TINFO: Attaching task 63317 to /sys/fs/cgroup/ltp/test-58282/7/3/8
cgroup_fj_stress 1 TINFO: Attaching task 63321 to /sys/fs/cgroup/ltp/test-58282/7/3/9
cgroup_fj_stress 1 TINFO: Attaching task 63325 to /sys/fs/cgroup/ltp/test-58282/7/3/10
cgroup_fj_stress 1 TINFO: Attaching task 63331 to /sys/fs/cgroup/ltp/test-58282/7/4/1
cgroup_fj_stress 1 TINFO: Attaching task 63335 to /sys/fs/cgroup/ltp/test-58282/7/4/2
cgroup_fj_stress 1 TINFO: Attaching task 63339 to /sys/fs/cgroup/ltp/test-58282/7/4/3
cgroup_fj_stress 1 TINFO: Attaching task 63343 to /sys/fs/cgroup/ltp/test-58282/7/4/4
cgroup_fj_stress 1 TINFO: Attaching task 63347 to /sys/fs/cgroup/ltp/test-58282/7/4/5
[... ~360 similar "Attaching task" lines trimmed: the test attaches one task to every /sys/fs/cgroup/ltp/test-58282/<x>/<y>/<z> leaf, through subtree 10/10/10 ...]
cgroup_fj_stress 1 TINFO: Attaching task 64877 to /sys/fs/cgroup/ltp/test-58282/10/10/8
cgroup_fj_stress 1 TINFO: Attaching task 64881 to /sys/fs/cgroup/ltp/test-58282/10/10/9
cgroup_fj_stress 1 TINFO: Attaching task 64885 to /sys/fs/cgroup/ltp/test-58282/10/10/10
cgroup_fj_stress 1 TPASS: All done!
cgroup_fj_stress 2 TINFO: Removing all ltp subgroups...

test log snippet regressions 12

fs_fill fs_fill

test log snippet build parser 1

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build INSTALL_MOD_STRIP=1 INSTALL_MOD_PATH=/home/tuxbuild/.cache/tuxmake/builds/1/build/modinstall ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- 'CC=sccache arm-linux-gnueabihf-gcc' 'HOSTCC=sccache gcc' modules_install
make[3]: *** No rule to make target '/home/tuxbuild/.cache/tuxmake/builds/1/build/modinstall/lib/modules/6.13.0/kernel/drivers/gpu/drm/clients/drm_client_lib.ko', needed by 'depmod'.
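This `modules_install` failure means depmod was handed a path for a module (`drm_client_lib.ko`) that the build stage never produced, commonly a mismatch between the config and the built artifacts. A minimal diagnostic sketch, where `$BUILD` is a hypothetical stand-in for the tuxmake output directory (not the report's actual paths): check whether the option was selected as a module, then whether any `.ko` exists in the corresponding build subdirectory.

```shell
# Hypothetical diagnostic sketch; $BUILD stands in for the tuxmake build dir.
BUILD=${BUILD:-/tmp/demo-build}
mkdir -p "$BUILD/drivers/gpu/drm/clients"   # simulate the empty-artifact case

# Was the client library selected as a module in the build's .config?
grep -q '^CONFIG_DRM_CLIENT_LIB=m' "$BUILD/.config" 2>/dev/null \
  && echo "built as module" || echo "not built as module"

# How many .ko files did the build actually leave in that directory?
find "$BUILD/drivers/gpu/drm/clients" -name '*.ko' | wc -l
```

If the config says `=m` but no `.ko` is present, the failure is in the build stage rather than the install step.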

test log snippet build parser 2

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build INSTALL_MOD_STRIP=1 INSTALL_MOD_PATH=/home/tuxbuild/.cache/tuxmake/builds/1/build/modinstall ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- 'CC=sccache arm-linux-gnueabihf-gcc' 'HOSTCC=sccache gcc' modules_install
make[3]: *** No rule to make target '/home/tuxbuild/.cache/tuxmake/builds/1/build/modinstall/lib/modules/6.13.0/kernel/drivers/gpu/drm/clients/drm_client_lib.ko', needed by 'depmod'.

test log snippet build parser 3

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- 'CC=sccache s390x-linux-gnu-gcc' 'HOSTCC=sccache gcc'
s390x-linux-gnu-objcopy: stn6Vx8z: warning: allocated section `.got.plt' not in segment

test log snippet build parser 4

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- 'CC=sccache s390x-linux-gnu-gcc' 'HOSTCC=sccache gcc'
s390x-linux-gnu-objcopy: stCinVPC: warning: allocated section `.got.plt' not in segment

test log snippet build parser 5

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- 'CC=sccache s390x-linux-gnu-gcc' 'HOSTCC=sccache gcc'
s390x-linux-gnu-objcopy: st56XQPZ: warning: allocated section `.got.plt' not in segment

test log snippet build parser 6

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build ARCH=s390 CROSS_COMPILE=s390x-linux-gnu- 'CC=sccache s390x-linux-gnu-gcc' 'HOSTCC=sccache gcc'
s390x-linux-gnu-objcopy: stPgwwA1: warning: allocated section `.got.plt' not in segment

test log snippet build parser 7

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build INSTALL_PATH=/home/tuxbuild/.cache/tuxmake/builds/1/build/kselftest_install ARCH=x86_64 SRCARCH=x86 CROSS_COMPILE=x86_64-linux-gnu- 'HOSTCC=sccache clang' 'CC=sccache clang' LLVM=1 LLVM_IAS=1 kselftest-install
make[4]: Entering directory '/builds/linux/tools/testing/selftests/sgx'
/usr/bin/x86_64-linux-gnu-ld: warning: /tmp/test_encl_bootstrap-48e2a1.o: missing .note.GNU-stack section implies executable stack
/usr/bin/x86_64-linux-gnu-ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker

test log snippet build parser 8

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build INSTALL_PATH=/home/tuxbuild/.cache/tuxmake/builds/1/build/kselftest_install ARCH=x86_64 SRCARCH=x86 CROSS_COMPILE=x86_64-linux-gnu- HOSTCC=clang CC=clang LLVM=1 LLVM_IAS=1 kselftest-install
make[4]: Entering directory '/builds/linux/tools/testing/selftests/sgx'
/usr/bin/x86_64-linux-gnu-ld: warning: /tmp/test_encl_bootstrap-81da31.o: missing .note.GNU-stack section implies executable stack
/usr/bin/x86_64-linux-gnu-ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker

test log snippet build parser 9

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build INSTALL_PATH=/home/tuxbuild/.cache/tuxmake/builds/1/build/kselftest_install ARCH=x86_64 SRCARCH=x86 CROSS_COMPILE=x86_64-linux-gnu- 'CC=sccache x86_64-linux-gnu-gcc' 'HOSTCC=sccache gcc' kselftest-install
make[4]: Entering directory '/builds/linux/tools/testing/selftests/sgx'
/usr/bin/ld: warning: /tmp/ccIpUxZt.o: missing .note.GNU-stack section implies executable stack
/usr/bin/ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker

test log snippet build parser 10

make --silent --keep-going --jobs=8 O=/home/tuxbuild/.cache/tuxmake/builds/1/build INSTALL_PATH=/home/tuxbuild/.cache/tuxmake/builds/1/build/kselftest_install ARCH=x86_64 SRCARCH=x86 CROSS_COMPILE=x86_64-linux-gnu- 'CC=sccache x86_64-linux-gnu-gcc' 'HOSTCC=sccache gcc' kselftest-install
make[4]: Entering directory '/builds/linux/tools/testing/selftests/sgx'
/usr/bin/ld: warning: /tmp/ccWQo0WO.o: missing .note.GNU-stack section implies executable stack
/usr/bin/ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker

test log snippet boot parser 1

[   47.042335] Internal error: Oops: 0000000096000005 [#1] PREEMPT SMP
[   47.043492] Modules linked in:
[   47.047476] CPU: 0 UID: 0 PID: 387 Comm: kunit_try_catch Tainted: G                 N 6.13.0 #1
[   47.048725] Tainted: [N]=TEST
[   47.049211] Hardware name: linux,dummy-virt (DT)
[   47.050125] pstate: 82402009 (Nzcv daif +PAN -UAO +TCO -DIT -SSBS BTYPE=--)
[   47.050938] pc : kunit_test_null_dereference+0x2c/0x114
[   47.052783] lr : kunit_generic_run_threadfn_adapter+0x84/0x104
[   47.053631] sp : ffff800080957db0
[   47.054491] x29: ffff800080957dc0 x28: dfff800000000000 x27: 1fffa36068ffa945
[   47.055912] x26: fffd1b0347c810a0 x25: ffff800080fd7b48 x24: fffd1b0347c8c090
[   47.056916] x23: 1fffa36068f90201 x22: dfff800000000000 x21: dfff800000000000
[   47.057928] x20: ffff9661b6152a04 x19: fffd1b0347c81008 x18: 0000000000000002
[   47.059123] x17: 0000000000000075 x16: 0000000000000000 x15: 0000000000000001
[   47.060252] x14: 1fffa36068ffaa31 x13: 0000000000000000 x12: 0000000000000000
[   47.061267] x11: ffff236068ffaa32 x10: dfff800000000000 x9 : 1fffa36068f90202
[   47.062448] x8 : b08788fa9d47e100 x7 : ffff9661b807fe00 x6 : 0000000000000003
[   47.063659] x5 : fffd1b035aa6e260 x4 : 0000000000000001 x3 : ffff9661b7f13640
[   47.064962] x2 : 0000000000000001 x1 : 0000000000000001 x0 : ffff800080087af8
[   47.066389] Call trace:
[   47.067086]  kunit_test_null_dereference+0x2c/0x114 (P)
[   47.068101]  kunit_generic_run_threadfn_adapter+0x84/0x104
[   47.068751]  kthread+0x3f4/0x518
[   47.069347]  ret_from_fork+0x10/0x20
[   47.070687] Code: d2d00015 f9426508 f2fbfff5 f90007e8 (39c002a8) 
[   47.071935] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 2

[   40.479095] Internal error: Oops: 0000000096000005 [#1] PREEMPT SMP
[   40.479934] Modules linked in:
[   40.483022] CPU: 1 UID: 0 PID: 388 Comm: kunit_try_catch Tainted: G                 N 6.13.0 #1
[   40.484128] Tainted: [N]=TEST
[   40.484569] Hardware name: linux,dummy-virt (DT)
[   40.485715] pstate: 12402009 (nzcV daif +PAN -UAO +TCO -DIT -SSBS BTYPE=--)
[   40.486430] pc : kunit_test_null_dereference+0x68/0x160
[   40.488132] lr : kunit_generic_run_threadfn_adapter+0x80/0xec
[   40.488822] sp : ffff800080fd7d30
[   40.489372] x29: ffff800080fd7d90 x28: 0000000000000000 x27: 0000000000000000
[   40.490689] x26: 1fff93f558f7de21 x25: fffc9faac7f0dc80 x24: 0000000000000004
[   40.491639] x23: fffc9faac7bf6e80 x22: ffffac766efd14e4 x21: fffc9faac7bf6e88
[   40.492571] x20: 1ffff000101fafa6 x19: ffff800080087a10 x18: 00000000a2798bf0
[   40.493819] x17: 000000005de103bf x16: 0000000000000100 x15: fffc9faaff80ce40
[   40.494783] x14: 1fff93f55b531cc3 x13: 00000000f1f1f1f1 x12: ffff13f558fe1c82
[   40.495709] x11: 1fff93f558fe1c81 x10: ffff13f558fe1c81 x9 : dfff800000000000
[   40.496686] x8 : fffc9faac7f0e40c x7 : 0000000000000001 x6 : 0000000041b58ab3
[   40.497764] x5 : ffff7000101fafa6 x4 : 00000000f1f1f1f1 x3 : 0000000000000003
[   40.498784] x2 : dfff800000000000 x1 : fffc9faac7f0dc80 x0 : ffff800080087a10
[   40.499901] Call trace:
[   40.500537]  kunit_test_null_dereference+0x68/0x160 (P)
[   40.501811]  kunit_generic_run_threadfn_adapter+0x80/0xec
[   40.502405]  kthread+0x37c/0x67c
[   40.502885]  ret_from_fork+0x10/0x20
[   40.504140] Code: b90004a3 d5384101 52800063 aa0003f3 (39c00042) 
[   40.505390] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 3

------------[ cut here ]------------
[    5.368571] ### dt-test ### EXPECT \ : WARNING: <<all>>
[    5.369093] ### dt-test ### EXPECT \ : refcount_t: underflow; use-after-free.
[    5.369675] ### dt-test ### EXPECT \ : ---[ end trace <<int>> ]---
[    5.370234] ### dt-test ### pass of_unittest_lifecycle():3507
[    5.372017] ------------[ cut here ]------------
[    5.372586] refcount_t: underflow; use-after-free.
[    5.374296] WARNING: CPU: 0 PID: 1 at lib/refcount.c:28 refcount_warn_saturate+0xf4/0x148
[    5.376089] Modules linked in:
[    5.377487] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Tainted: G                 N 6.13.0 #1
[    5.378537] Tainted: [N]=TEST
[    5.378928] Hardware name: linux,dummy-virt (DT)
[    5.379950] pstate: 61400009 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[    5.380703] pc : refcount_warn_saturate+0xf4/0x148
[    5.381160] lr : refcount_warn_saturate+0xf4/0x148
[    5.381707] sp : ffff80008002bbe0
[    5.381991] x29: ffff80008002bbe0 x28: ffff9f5c57c9eb78 x27: ffff9f5c57e9adb8
[    5.383278] x26: ffff9f5c577b28f0 x25: ffff9f5c57c9eb78 x24: ffff9f5c577b3250
[    5.384008] x23: ffff9f5c57e9adb8 x22: fffcf1d9c158e2c0 x21: ffff9f5c58ac6000
[    5.384560] x20: ffff9f5c57e9ada8 x19: fffcf1d9c158e340 x18: 0000000000000006
[    5.385323] x17: 0000000000000000 x16: 0000000000000000 x15: ffff80008002b5f0
[    5.385937] x14: ffff80010002b7ca x13: ffff80008002b7d2 x12: 0000000000000000
[    5.386971] x11: fffffffffffe0000 x10: ffff9f5c5842e548 x9 : ffff9f5c55d49e38
[    5.387682] x8 : 00000000ffffefff x7 : ffff9f5c5842e548 x6 : 80000000fffff000
[    5.388361] x5 : 00000000000002a2 x4 : 0000000000000000 x3 : 0000000000000000
[    5.389034] x2 : 0000000000000000 x1 : 0000000000000000 x0 : fffcf1d9c02e0000
[    5.389796] Call trace:
[    5.390493]  refcount_warn_saturate+0xf4/0x148 (P)
[    5.391490]  kobject_put+0x16c/0x218
[    5.391798]  of_node_put+0x20/0x38
[    5.392213]  of_unittest+0x1ed0/0x2d28
[    5.392684]  do_one_initcall+0x60/0x298
[    5.393000]  kernel_init_freeable+0x204/0x3d8
[    5.393500]  kernel_init+0x28/0x1f0
[    5.393915]  ret_from_fork+0x10/0x20
[    5.395366] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 4

[   36.014440] ==================================================================
[   36.015496] BUG: KASAN: alloca-out-of-bounds in kasan_alloca_oob_right+0x2e4/0x348
[   36.016296] Read of size 1 at addr ffff800080a57b4a by task kunit_try_catch/242
[   36.017133] 
[   36.017520] CPU: 1 UID: 0 PID: 242 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.017817] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.017920] Hardware name: linux,dummy-virt (DT)
[   36.018033] Call trace:
[   36.018108]  show_stack+0x20/0x38 (C)
[   36.018301]  dump_stack_lvl+0x8c/0xd0
[   36.018481]  print_report+0x2fc/0x5e0
[   36.018657]  kasan_report+0xc8/0x118
[   36.018805]  __asan_report_load1_noabort+0x20/0x30
[   36.018983]  kasan_alloca_oob_right+0x2e4/0x348
[   36.019147]  kunit_try_run_case+0x14c/0x3d0
[   36.019249]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.019343]  kthread+0x318/0x618
[   36.019429]  ret_from_fork+0x10/0x20
[   36.019513] 
[   36.026807] The buggy address belongs to stack of task kunit_try_catch/242
[   36.028325] 
[   36.028743] The buggy address belongs to the virtual mapping at
[   36.028743]  [ffff800080a50000, ffff800080a59000) created by:
[   36.028743]  kernel_clone+0x140/0x788
[   36.030193] 
[   36.030598] The buggy address belongs to the physical page:
[   36.031356] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10651a
[   36.032854] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.033754] raw: 0bfffe0000000000 0000000000000000 dead000000000122 0000000000000000
[   36.034634] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   36.035329] page dumped because: kasan: bad access detected
[   36.036294] 
[   36.036684] Memory state around the buggy address:
[   36.037256]  ffff800080a57a00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   36.038178]  ffff800080a57a80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   36.039163] >ffff800080a57b00: 00 00 00 00 ca ca ca ca 00 02 cb cb cb cb cb cb
[   36.040037]                                               ^
[   36.041078]  ffff800080a57b80: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 01 f2 04 f2
[   36.041893]  ffff800080a57c00: 00 f2 f2 f2 00 00 f3 f3 00 00 00 00 00 00 00 00
[   36.042762] ==================================================================

test log snippet boot parser 5

[   33.709217] ==================================================================
[   33.710083] BUG: KASAN: double-free in kfree_sensitive+0x3c/0xb0
[   33.710872] Free of addr fff9b96e85f285a0 by task kunit_try_catch/181
[   33.711660] 
[   33.712040] CPU: 0 UID: 0 PID: 181 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   33.712286] Tainted: [B]=BAD_PAGE, [N]=TEST
[   33.712382] Hardware name: linux,dummy-virt (DT)
[   33.712477] Call trace:
[   33.712569]  show_stack+0x20/0x38 (C)
[   33.712729]  dump_stack_lvl+0x8c/0xd0
[   33.712881]  print_report+0x118/0x5e0
[   33.712970]  kasan_report_invalid_free+0xb0/0xd8
[   33.713056]  check_slab_allocation+0xd4/0x108
[   33.713138]  __kasan_slab_pre_free+0x2c/0x48
[   33.713220]  kfree+0xe8/0x3c8
[   33.713290]  kfree_sensitive+0x3c/0xb0
[   33.713369]  kmalloc_double_kzfree+0x168/0x308
[   33.713447]  kunit_try_run_case+0x14c/0x3d0
[   33.713525]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.713714]  kthread+0x318/0x618
[   33.713905]  ret_from_fork+0x10/0x20
[   33.714085] 
[   33.724813] Allocated by task 181:
[   33.725338]  kasan_save_stack+0x3c/0x68
[   33.725957]  kasan_save_track+0x20/0x40
[   33.726464]  kasan_save_alloc_info+0x40/0x58
[   33.727148]  __kasan_kmalloc+0xd4/0xd8
[   33.727693]  __kmalloc_cache_noprof+0x15c/0x3c0
[   33.728382]  kmalloc_double_kzfree+0xb8/0x308
[   33.729187]  kunit_try_run_case+0x14c/0x3d0
[   33.729846]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.730688]  kthread+0x318/0x618
[   33.731176]  ret_from_fork+0x10/0x20
[   33.731764] 
[   33.732761] Freed by task 181:
[   33.733126]  kasan_save_stack+0x3c/0x68
[   33.733746]  kasan_save_track+0x20/0x40
[   33.734356]  kasan_save_free_info+0x4c/0x78
[   33.735204]  __kasan_slab_free+0x6c/0x98
[   33.735739]  kfree+0x114/0x3c8
[   33.736274]  kfree_sensitive+0x80/0xb0
[   33.736832]  kmalloc_double_kzfree+0x11c/0x308
[   33.737502]  kunit_try_run_case+0x14c/0x3d0
[   33.738150]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.739088]  kthread+0x318/0x618
[   33.739634]  ret_from_fork+0x10/0x20
[   33.740489] 
[   33.740861] The buggy address belongs to the object at fff9b96e85f285a0
[   33.740861]  which belongs to the cache kmalloc-16 of size 16
[   33.742003] The buggy address is located 0 bytes inside of
[   33.742003]  16-byte region [fff9b96e85f285a0, fff9b96e85f285b0)
[   33.743143] 
[   33.743508] The buggy address belongs to the physical page:
[   33.744469] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   33.745435] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   33.746195] page_type: f5(slab)
[   33.746795] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   33.747753] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   33.748783] page dumped because: kasan: bad access detected
[   33.749443] 
[   33.749777] Memory state around the buggy address:
[   33.750329]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   33.751291]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   33.752907] >fff9b96e85f28580: fa fb fc fc fa fb fc fc fc fc fc fc fc fc fc fc
[   33.753991]                                ^
[   33.754817]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.755663]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.756965] ==================================================================

test log snippet boot parser 6

[   36.824307] ==================================================================
[   36.825037] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0xa28/0xbc8
[   36.826666] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.827408] 
[   36.827825] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.828078] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.828235] Hardware name: linux,dummy-virt (DT)
[   36.828379] Call trace:
[   36.828463]  show_stack+0x20/0x38 (C)
[   36.828657]  dump_stack_lvl+0x8c/0xd0
[   36.828796]  print_report+0x118/0x5e0
[   36.828886]  kasan_report+0xc8/0x118
[   36.828995]  __asan_report_load8_noabort+0x20/0x30
[   36.829093]  kasan_bitops_modify.constprop.0+0xa28/0xbc8
[   36.829186]  kasan_bitops_generic+0x110/0x1c8
[   36.829269]  kunit_try_run_case+0x14c/0x3d0
[   36.829353]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.829445]  kthread+0x318/0x618
[   36.829524]  ret_from_fork+0x10/0x20
[   36.829634] 
[   36.836030] Allocated by task 250:
[   36.836684]  kasan_save_stack+0x3c/0x68
[   36.837393]  kasan_save_track+0x20/0x40
[   36.838065]  kasan_save_alloc_info+0x40/0x58
[   36.838794]  __kasan_kmalloc+0xd4/0xd8
[   36.839434]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.840149]  kasan_bitops_generic+0xa0/0x1c8
[   36.840797]  kunit_try_run_case+0x14c/0x3d0
[   36.841423]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.842124]  kthread+0x318/0x618
[   36.842744]  ret_from_fork+0x10/0x20
[   36.843241] 
[   36.843569] The buggy address belongs to the object at fff9b96e85f285c0
[   36.843569]  which belongs to the cache kmalloc-16 of size 16
[   36.845048] The buggy address is located 8 bytes inside of
[   36.845048]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.846328] 
[   36.846675] The buggy address belongs to the physical page:
[   36.847199] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.848269] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.849232] page_type: f5(slab)
[   36.849821] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.850873] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.851854] page dumped because: kasan: bad access detected
[   36.852690] 
[   36.853066] Memory state around the buggy address:
[   36.853827]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.854674]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.855505] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.856453]                                               ^
[   36.857181]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.858152]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.858911] ==================================================================
---
[   36.485667] ==================================================================
[   36.487037] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0xa4c/0xbc8
[   36.488178] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.489140] 
[   36.489471] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.489793] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.489897] Hardware name: linux,dummy-virt (DT)
[   36.490010] Call trace:
[   36.490086]  show_stack+0x20/0x38 (C)
[   36.490250]  dump_stack_lvl+0x8c/0xd0
[   36.490430]  print_report+0x118/0x5e0
[   36.490605]  kasan_report+0xc8/0x118
[   36.490762]  __asan_report_load8_noabort+0x20/0x30
[   36.490934]  kasan_bitops_modify.constprop.0+0xa4c/0xbc8
[   36.491091]  kasan_bitops_generic+0x110/0x1c8
[   36.491180]  kunit_try_run_case+0x14c/0x3d0
[   36.491263]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.491385]  kthread+0x318/0x618
[   36.491464]  ret_from_fork+0x10/0x20
[   36.491618] 
[   36.499142] Allocated by task 250:
[   36.500054]  kasan_save_stack+0x3c/0x68
[   36.501120]  kasan_save_track+0x20/0x40
[   36.502210]  kasan_save_alloc_info+0x40/0x58
[   36.502863]  __kasan_kmalloc+0xd4/0xd8
[   36.503421]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.504627]  kasan_bitops_generic+0xa0/0x1c8
[   36.505205]  kunit_try_run_case+0x14c/0x3d0
[   36.505880]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.506625]  kthread+0x318/0x618
[   36.507120]  ret_from_fork+0x10/0x20
[   36.507735] 
[   36.508284] The buggy address belongs to the object at fff9b96e85f285c0
[   36.508284]  which belongs to the cache kmalloc-16 of size 16
[   36.510201] The buggy address is located 8 bytes inside of
[   36.510201]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.511451] 
[   36.511912] The buggy address belongs to the physical page:
[   36.513059] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.513941] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.514830] page_type: f5(slab)
[   36.515336] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.516699] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.517646] page dumped because: kasan: bad access detected
[   36.518822] 
[   36.519395] Memory state around the buggy address:
[   36.520055]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.521347]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.522187] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.523065]                                               ^
[   36.523754]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.525387]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.526534] ==================================================================
---
[   36.697218] ==================================================================
[   36.698944] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0xaf4/0xbc8
[   36.699689] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.701506] 
[   36.701880] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.702129] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.702229] Hardware name: linux,dummy-virt (DT)
[   36.702304] Call trace:
[   36.702345]  show_stack+0x20/0x38 (C)
[   36.702480]  dump_stack_lvl+0x8c/0xd0
[   36.702589]  print_report+0x118/0x5e0
[   36.702676]  kasan_report+0xc8/0x118
[   36.702760]  __asan_report_load8_noabort+0x20/0x30
[   36.702850]  kasan_bitops_modify.constprop.0+0xaf4/0xbc8
[   36.702955]  kasan_bitops_generic+0x110/0x1c8
[   36.703075]  kunit_try_run_case+0x14c/0x3d0
[   36.703159]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.703254]  kthread+0x318/0x618
[   36.703335]  ret_from_fork+0x10/0x20
[   36.703420] 
[   36.711329] Allocated by task 250:
[   36.711908]  kasan_save_stack+0x3c/0x68
[   36.712935]  kasan_save_track+0x20/0x40
[   36.713422]  kasan_save_alloc_info+0x40/0x58
[   36.714339]  __kasan_kmalloc+0xd4/0xd8
[   36.714995]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.716495]  kasan_bitops_generic+0xa0/0x1c8
[   36.717310]  kunit_try_run_case+0x14c/0x3d0
[   36.718095]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.718985]  kthread+0x318/0x618
[   36.719695]  ret_from_fork+0x10/0x20
[   36.720950] 
[   36.721210] The buggy address belongs to the object at fff9b96e85f285c0
[   36.721210]  which belongs to the cache kmalloc-16 of size 16
[   36.722891] The buggy address is located 8 bytes inside of
[   36.722891]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.724661] 
[   36.725091] The buggy address belongs to the physical page:
[   36.725940] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.726820] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.727610] page_type: f5(slab)
[   36.728167] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.729359] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.730102] page dumped because: kasan: bad access detected
[   36.730900] 
[   36.731073] Memory state around the buggy address:
[   36.731354]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.731951]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.733139] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.733985]                                               ^
[   36.734782]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.735597]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.736995] ==================================================================
---
[   36.570957] ==================================================================
[   36.571564] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0xa88/0xbc8
[   36.573166] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.574122] 
[   36.575044] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.575328] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.575417] Hardware name: linux,dummy-virt (DT)
[   36.575528] Call trace:
[   36.575630]  show_stack+0x20/0x38 (C)
[   36.575798]  dump_stack_lvl+0x8c/0xd0
[   36.575965]  print_report+0x118/0x5e0
[   36.576121]  kasan_report+0xc8/0x118
[   36.576272]  __asan_report_load8_noabort+0x20/0x30
[   36.576442]  kasan_bitops_modify.constprop.0+0xa88/0xbc8
[   36.576638]  kasan_bitops_generic+0x110/0x1c8
[   36.576811]  kunit_try_run_case+0x14c/0x3d0
[   36.576974]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.577098]  kthread+0x318/0x618
[   36.577182]  ret_from_fork+0x10/0x20
[   36.577269] 
[   36.585740] Allocated by task 250:
[   36.586290]  kasan_save_stack+0x3c/0x68
[   36.586883]  kasan_save_track+0x20/0x40
[   36.587513]  kasan_save_alloc_info+0x40/0x58
[   36.588794]  __kasan_kmalloc+0xd4/0xd8
[   36.589379]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.589811]  kasan_bitops_generic+0xa0/0x1c8
[   36.590480]  kunit_try_run_case+0x14c/0x3d0
[   36.591500]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.592614]  kthread+0x318/0x618
[   36.593309]  ret_from_fork+0x10/0x20
[   36.593882] 
[   36.594212] The buggy address belongs to the object at fff9b96e85f285c0
[   36.594212]  which belongs to the cache kmalloc-16 of size 16
[   36.595948] The buggy address is located 8 bytes inside of
[   36.595948]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.597088] 
[   36.598427] The buggy address belongs to the physical page:
[   36.599163] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.600152] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.601209] page_type: f5(slab)
[   36.601710] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.602513] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.603300] page dumped because: kasan: bad access detected
[   36.604235] 
[   36.604878] Memory state around the buggy address:
[   36.605375]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.606116]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.606997] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.607781]                                               ^
[   36.608405]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.609683]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.610483] ==================================================================

test log snippet boot parser 7

[   36.738488] ==================================================================
[   36.739360] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x344/0xbc8
[   36.740195] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.741137] 
[   36.741505] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.741992] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.742093] Hardware name: linux,dummy-virt (DT)
[   36.742192] Call trace:
[   36.742269]  show_stack+0x20/0x38 (C)
[   36.742440]  dump_stack_lvl+0x8c/0xd0
[   36.742628]  print_report+0x118/0x5e0
[   36.742787]  kasan_report+0xc8/0x118
[   36.742939]  kasan_check_range+0x100/0x1a8
[   36.743103]  __kasan_check_write+0x20/0x30
[   36.743261]  kasan_bitops_modify.constprop.0+0x344/0xbc8
[   36.743438]  kasan_bitops_generic+0x110/0x1c8
[   36.743628]  kunit_try_run_case+0x14c/0x3d0
[   36.743724]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.743817]  kthread+0x318/0x618
[   36.743997]  ret_from_fork+0x10/0x20
[   36.744151] 
[   36.752229] Allocated by task 250:
[   36.753050]  kasan_save_stack+0x3c/0x68
[   36.754241]  kasan_save_track+0x20/0x40
[   36.755030]  kasan_save_alloc_info+0x40/0x58
[   36.755775]  __kasan_kmalloc+0xd4/0xd8
[   36.756761]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.757428]  kasan_bitops_generic+0xa0/0x1c8
[   36.757997]  kunit_try_run_case+0x14c/0x3d0
[   36.758700]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.759354]  kthread+0x318/0x618
[   36.759968]  ret_from_fork+0x10/0x20
[   36.760907] 
[   36.761206] The buggy address belongs to the object at fff9b96e85f285c0
[   36.761206]  which belongs to the cache kmalloc-16 of size 16
[   36.763019] The buggy address is located 8 bytes inside of
[   36.763019]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.764649] 
[   36.765227] The buggy address belongs to the physical page:
[   36.766079] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.767101] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.767860] page_type: f5(slab)
[   36.768300] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.770032] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.771000] page dumped because: kasan: bad access detected
[   36.772320] 
[   36.772640] Memory state around the buggy address:
[   36.773240]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.774128]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.775051] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.776645]                                               ^
[   36.777572]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.778457]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.779248] ==================================================================
---
[   36.443246] ==================================================================
[   36.444604] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x100/0xbc8
[   36.445573] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.446590] 
[   36.447008] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.447253] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.447353] Hardware name: linux,dummy-virt (DT)
[   36.447457] Call trace:
[   36.447533]  show_stack+0x20/0x38 (C)
[   36.447719]  dump_stack_lvl+0x8c/0xd0
[   36.447879]  print_report+0x118/0x5e0
[   36.448040]  kasan_report+0xc8/0x118
[   36.448191]  kasan_check_range+0x100/0x1a8
[   36.448355]  __kasan_check_write+0x20/0x30
[   36.448511]  kasan_bitops_modify.constprop.0+0x100/0xbc8
[   36.448679]  kasan_bitops_generic+0x110/0x1c8
[   36.448770]  kunit_try_run_case+0x14c/0x3d0
[   36.448850]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.448942]  kthread+0x318/0x618
[   36.449026]  ret_from_fork+0x10/0x20
[   36.449111] 
[   36.458152] Allocated by task 250:
[   36.458620]  kasan_save_stack+0x3c/0x68
[   36.459388]  kasan_save_track+0x20/0x40
[   36.460318]  kasan_save_alloc_info+0x40/0x58
[   36.461122]  __kasan_kmalloc+0xd4/0xd8
[   36.461824]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.462509]  kasan_bitops_generic+0xa0/0x1c8
[   36.463160]  kunit_try_run_case+0x14c/0x3d0
[   36.463873]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.465039]  kthread+0x318/0x618
[   36.465572]  ret_from_fork+0x10/0x20
[   36.466205] 
[   36.466613] The buggy address belongs to the object at fff9b96e85f285c0
[   36.466613]  which belongs to the cache kmalloc-16 of size 16
[   36.467873] The buggy address is located 8 bytes inside of
[   36.467873]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.469422] 
[   36.469843] The buggy address belongs to the physical page:
[   36.470603] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.471692] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.472515] page_type: f5(slab)
[   36.473017] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.473830] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.474863] page dumped because: kasan: bad access detected
[   36.475812] 
[   36.476367] Memory state around the buggy address:
[   36.477221]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.478715]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.480206] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.480967]                                               ^
[   36.481702]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.482506]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.483306] ==================================================================
---
[   36.528833] ==================================================================
[   36.529494] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x1dc/0xbc8
[   36.530213] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.531746] 
[   36.532360] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.532881] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.533000] Hardware name: linux,dummy-virt (DT)
[   36.533109] Call trace:
[   36.533190]  show_stack+0x20/0x38 (C)
[   36.533342]  dump_stack_lvl+0x8c/0xd0
[   36.533495]  print_report+0x118/0x5e0
[   36.533616]  kasan_report+0xc8/0x118
[   36.533700]  kasan_check_range+0x100/0x1a8
[   36.533788]  __kasan_check_write+0x20/0x30
[   36.533875]  kasan_bitops_modify.constprop.0+0x1dc/0xbc8
[   36.533966]  kasan_bitops_generic+0x110/0x1c8
[   36.534051]  kunit_try_run_case+0x14c/0x3d0
[   36.534134]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.534226]  kthread+0x318/0x618
[   36.534305]  ret_from_fork+0x10/0x20
[   36.534400] 
[   36.543262] Allocated by task 250:
[   36.544259]  kasan_save_stack+0x3c/0x68
[   36.544920]  kasan_save_track+0x20/0x40
[   36.545469]  kasan_save_alloc_info+0x40/0x58
[   36.546266]  __kasan_kmalloc+0xd4/0xd8
[   36.546886]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.547631]  kasan_bitops_generic+0xa0/0x1c8
[   36.548991]  kunit_try_run_case+0x14c/0x3d0
[   36.549785]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.550621]  kthread+0x318/0x618
[   36.551147]  ret_from_fork+0x10/0x20
[   36.551745] 
[   36.552337] The buggy address belongs to the object at fff9b96e85f285c0
[   36.552337]  which belongs to the cache kmalloc-16 of size 16
[   36.553883] The buggy address is located 8 bytes inside of
[   36.553883]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.555146] 
[   36.555574] The buggy address belongs to the physical page:
[   36.556479] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.557892] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.558251] page_type: f5(slab)
[   36.558496] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.559488] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.560697] page dumped because: kasan: bad access detected
[   36.561611] 
[   36.562063] Memory state around the buggy address:
[   36.562727]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.563736]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.564698] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.565529]                                               ^
[   36.566214]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.567586]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.568982] ==================================================================
---
[   36.611819] ==================================================================
[   36.612967] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x2b4/0xbc8
[   36.613732] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.614592] 
[   36.614997] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.615242] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.615343] Hardware name: linux,dummy-virt (DT)
[   36.615455] Call trace:
[   36.615534]  show_stack+0x20/0x38 (C)
[   36.615726]  dump_stack_lvl+0x8c/0xd0
[   36.615881]  print_report+0x118/0x5e0
[   36.616034]  kasan_report+0xc8/0x118
[   36.616182]  kasan_check_range+0x100/0x1a8
[   36.616347]  __kasan_check_write+0x20/0x30
[   36.616509]  kasan_bitops_modify.constprop.0+0x2b4/0xbc8
[   36.616697]  kasan_bitops_generic+0x110/0x1c8
[   36.616860]  kunit_try_run_case+0x14c/0x3d0
[   36.617012]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.617117]  kthread+0x318/0x618
[   36.617200]  ret_from_fork+0x10/0x20
[   36.617287] 
[   36.626087] Allocated by task 250:
[   36.626736]  kasan_save_stack+0x3c/0x68
[   36.627256]  kasan_save_track+0x20/0x40
[   36.627917]  kasan_save_alloc_info+0x40/0x58
[   36.629569]  __kasan_kmalloc+0xd4/0xd8
[   36.630117]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.630776]  kasan_bitops_generic+0xa0/0x1c8
[   36.631065]  kunit_try_run_case+0x14c/0x3d0
[   36.631356]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.631873]  kthread+0x318/0x618
[   36.632773]  ret_from_fork+0x10/0x20
[   36.633815] 
[   36.634114] The buggy address belongs to the object at fff9b96e85f285c0
[   36.634114]  which belongs to the cache kmalloc-16 of size 16
[   36.635739] The buggy address is located 8 bytes inside of
[   36.635739]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.637387] 
[   36.637784] The buggy address belongs to the physical page:
[   36.639136] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.640186] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.641444] page_type: f5(slab)
[   36.642262] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.643091] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.644124] page dumped because: kasan: bad access detected
[   36.645096] 
[   36.645472] Memory state around the buggy address:
[   36.646074]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.647270]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.648834] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.649826]                                               ^
[   36.650890]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.652193]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.653037] ==================================================================
---
[   36.655074] ==================================================================
[   36.655829] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x320/0xbc8
[   36.656816] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.658601] 
[   36.658977] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.659151] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.659223] Hardware name: linux,dummy-virt (DT)
[   36.659318] Call trace:
[   36.659400]  show_stack+0x20/0x38 (C)
[   36.659583]  dump_stack_lvl+0x8c/0xd0
[   36.659713]  print_report+0x118/0x5e0
[   36.659797]  kasan_report+0xc8/0x118
[   36.659929]  kasan_check_range+0x100/0x1a8
[   36.660083]  __kasan_check_write+0x20/0x30
[   36.660241]  kasan_bitops_modify.constprop.0+0x320/0xbc8
[   36.660396]  kasan_bitops_generic+0x110/0x1c8
[   36.660566]  kunit_try_run_case+0x14c/0x3d0
[   36.660720]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.660895]  kthread+0x318/0x618
[   36.661025]  ret_from_fork+0x10/0x20
[   36.661183] 
[   36.670807] Allocated by task 250:
[   36.671623]  kasan_save_stack+0x3c/0x68
[   36.672143]  kasan_save_track+0x20/0x40
[   36.672791]  kasan_save_alloc_info+0x40/0x58
[   36.673459]  __kasan_kmalloc+0xd4/0xd8
[   36.674787]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.675634]  kasan_bitops_generic+0xa0/0x1c8
[   36.676220]  kunit_try_run_case+0x14c/0x3d0
[   36.676935]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.677601]  kthread+0x318/0x618
[   36.678322]  ret_from_fork+0x10/0x20
[   36.679003] 
[   36.679274] The buggy address belongs to the object at fff9b96e85f285c0
[   36.679274]  which belongs to the cache kmalloc-16 of size 16
[   36.680784] The buggy address is located 8 bytes inside of
[   36.680784]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.681800] 
[   36.682165] The buggy address belongs to the physical page:
[   36.682927] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.684041] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.685508] page_type: f5(slab)
[   36.686166] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.687243] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.688368] page dumped because: kasan: bad access detected
[   36.689280] 
[   36.689584] Memory state around the buggy address:
[   36.690263]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.691219]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.692590] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.693131]                                               ^
[   36.693923]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.694787]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.695534] ==================================================================
---
[   36.781233] ==================================================================
[   36.782018] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x3b0/0xbc8
[   36.782974] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.783917] 
[   36.784680] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.784925] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.785024] Hardware name: linux,dummy-virt (DT)
[   36.785128] Call trace:
[   36.785211]  show_stack+0x20/0x38 (C)
[   36.785368]  dump_stack_lvl+0x8c/0xd0
[   36.785461]  print_report+0x118/0x5e0
[   36.785573]  kasan_report+0xc8/0x118
[   36.785714]  kasan_check_range+0x100/0x1a8
[   36.785870]  __kasan_check_write+0x20/0x30
[   36.786029]  kasan_bitops_modify.constprop.0+0x3b0/0xbc8
[   36.786204]  kasan_bitops_generic+0x110/0x1c8
[   36.786355]  kunit_try_run_case+0x14c/0x3d0
[   36.786453]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.786609]  kthread+0x318/0x618
[   36.786767]  ret_from_fork+0x10/0x20
[   36.786928] 
[   36.795728] Allocated by task 250:
[   36.796739]  kasan_save_stack+0x3c/0x68
[   36.797395]  kasan_save_track+0x20/0x40
[   36.798072]  kasan_save_alloc_info+0x40/0x58
[   36.798747]  __kasan_kmalloc+0xd4/0xd8
[   36.799311]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.800203]  kasan_bitops_generic+0xa0/0x1c8
[   36.801708]  kunit_try_run_case+0x14c/0x3d0
[   36.802229]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.802931]  kthread+0x318/0x618
[   36.803553]  ret_from_fork+0x10/0x20
[   36.804471] 
[   36.804818] The buggy address belongs to the object at fff9b96e85f285c0
[   36.804818]  which belongs to the cache kmalloc-16 of size 16
[   36.805986] The buggy address is located 8 bytes inside of
[   36.805986]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.807283] 
[   36.807680] The buggy address belongs to the physical page:
[   36.808831] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.810158] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.810981] page_type: f5(slab)
[   36.811567] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.812898] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.813722] page dumped because: kasan: bad access detected
[   36.814461] 
[   36.814831] Memory state around the buggy address:
[   36.815555]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.816454]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.817988] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.818799]                                               ^
[   36.819532]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.820821]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.821611] ==================================================================

test log snippet boot parser 8
[   37.043520] ==================================================================
[   37.044358] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0xad4/0xbc8
[   37.045516] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   37.046468] 
[   37.046860] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   37.047165] Tainted: [B]=BAD_PAGE, [N]=TEST
[   37.047270] Hardware name: linux,dummy-virt (DT)
[   37.047372] Call trace:
[   37.047454]  show_stack+0x20/0x38 (C)
[   37.047639]  dump_stack_lvl+0x8c/0xd0
[   37.047802]  print_report+0x118/0x5e0
[   37.047955]  kasan_report+0xc8/0x118
[   37.048153]  __asan_report_load8_noabort+0x20/0x30
[   37.048371]  kasan_bitops_test_and_modify.constprop.0+0xad4/0xbc8
[   37.048589]  kasan_bitops_generic+0x11c/0x1c8
[   37.048759]  kunit_try_run_case+0x14c/0x3d0
[   37.048884]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.048982]  kthread+0x318/0x618
[   37.049062]  ret_from_fork+0x10/0x20
[   37.049149] 
[   37.056186] Allocated by task 250:
[   37.056714]  kasan_save_stack+0x3c/0x68
[   37.057225]  kasan_save_track+0x20/0x40
[   37.057857]  kasan_save_alloc_info+0x40/0x58
[   37.058358]  __kasan_kmalloc+0xd4/0xd8
[   37.058993]  __kmalloc_cache_noprof+0x15c/0x3c0
[   37.059669]  kasan_bitops_generic+0xa0/0x1c8
[   37.060211]  kunit_try_run_case+0x14c/0x3d0
[   37.060854]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.061593]  kthread+0x318/0x618
[   37.062106]  ret_from_fork+0x10/0x20
[   37.062672] 
[   37.063035] The buggy address belongs to the object at fff9b96e85f285c0
[   37.063035]  which belongs to the cache kmalloc-16 of size 16
[   37.064306] The buggy address is located 8 bytes inside of
[   37.064306]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   37.065554] 
[   37.065864] The buggy address belongs to the physical page:
[   37.066622] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   37.067480] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   37.068257] page_type: f5(slab)
[   37.068776] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   37.069633] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   37.070426] page dumped because: kasan: bad access detected
[   37.071066] 
[   37.071466] Memory state around the buggy address:
[   37.072137]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.072921]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.073680] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   37.074424]                                               ^
[   37.075219]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.076063]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.076882] ==================================================================
---
[   37.115432] ==================================================================
[   37.116124] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0xa44/0xbc8
[   37.117084] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   37.117891] 
[   37.118287] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   37.118567] Tainted: [B]=BAD_PAGE, [N]=TEST
[   37.118668] Hardware name: linux,dummy-virt (DT)
[   37.118776] Call trace:
[   37.118851]  show_stack+0x20/0x38 (C)
[   37.119016]  dump_stack_lvl+0x8c/0xd0
[   37.119187]  print_report+0x118/0x5e0
[   37.119358]  kasan_report+0xc8/0x118
[   37.119483]  __asan_report_load8_noabort+0x20/0x30
[   37.119600]  kasan_bitops_test_and_modify.constprop.0+0xa44/0xbc8
[   37.119701]  kasan_bitops_generic+0x11c/0x1c8
[   37.119785]  kunit_try_run_case+0x14c/0x3d0
[   37.119870]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.119962]  kthread+0x318/0x618
[   37.120041]  ret_from_fork+0x10/0x20
[   37.120127] 
[   37.126629] Allocated by task 250:
[   37.127173]  kasan_save_stack+0x3c/0x68
[   37.127765]  kasan_save_track+0x20/0x40
[   37.128367]  kasan_save_alloc_info+0x40/0x58
[   37.128994]  __kasan_kmalloc+0xd4/0xd8
[   37.129618]  __kmalloc_cache_noprof+0x15c/0x3c0
[   37.130306]  kasan_bitops_generic+0xa0/0x1c8
[   37.130860]  kunit_try_run_case+0x14c/0x3d0
[   37.131387]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.132188]  kthread+0x318/0x618
[   37.132766]  ret_from_fork+0x10/0x20
[   37.133361] 
[   37.133752] The buggy address belongs to the object at fff9b96e85f285c0
[   37.133752]  which belongs to the cache kmalloc-16 of size 16
[   37.134973] The buggy address is located 8 bytes inside of
[   37.134973]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   37.136374] 
[   37.136765] The buggy address belongs to the physical page:
[   37.137425] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   37.138283] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   37.139038] page_type: f5(slab)
[   37.139588] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   37.140561] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   37.141397] page dumped because: kasan: bad access detected
[   37.141952] 
[   37.142313] Memory state around the buggy address:
[   37.143016]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.143726]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.144650] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   37.145426]                                               ^
[   37.146125]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.147049]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.147910] ==================================================================
---
[   36.901000] ==================================================================
[   36.902600] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0xa08/0xbc8
[   36.903523] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.904759] 
[   36.905164] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.905413] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.905522] Hardware name: linux,dummy-virt (DT)
[   36.905658] Call trace:
[   36.905736]  show_stack+0x20/0x38 (C)
[   36.905902]  dump_stack_lvl+0x8c/0xd0
[   36.906011]  print_report+0x118/0x5e0
[   36.906099]  kasan_report+0xc8/0x118
[   36.906180]  __asan_report_load8_noabort+0x20/0x30
[   36.906269]  kasan_bitops_test_and_modify.constprop.0+0xa08/0xbc8
[   36.906366]  kasan_bitops_generic+0x11c/0x1c8
[   36.906462]  kunit_try_run_case+0x14c/0x3d0
[   36.906569]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.906664]  kthread+0x318/0x618
[   36.906743]  ret_from_fork+0x10/0x20
[   36.906828] 
[   36.913144] Allocated by task 250:
[   36.913705]  kasan_save_stack+0x3c/0x68
[   36.914326]  kasan_save_track+0x20/0x40
[   36.914895]  kasan_save_alloc_info+0x40/0x58
[   36.915567]  __kasan_kmalloc+0xd4/0xd8
[   36.916136]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.916820]  kasan_bitops_generic+0xa0/0x1c8
[   36.917426]  kunit_try_run_case+0x14c/0x3d0
[   36.918047]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.918843]  kthread+0x318/0x618
[   36.919372]  ret_from_fork+0x10/0x20
[   36.919978] 
[   36.920295] The buggy address belongs to the object at fff9b96e85f285c0
[   36.920295]  which belongs to the cache kmalloc-16 of size 16
[   36.921615] The buggy address is located 8 bytes inside of
[   36.921615]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.922870] 
[   36.923240] The buggy address belongs to the physical page:
[   36.923984] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.924946] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.925595] page_type: f5(slab)
[   36.926149] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.927047] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.927891] page dumped because: kasan: bad access detected
[   36.928577] 
[   36.928940] Memory state around the buggy address:
[   36.929614]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.930360]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.931302] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.932154]                                               ^
[   36.932741]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.933659]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.934505] ==================================================================
---
[   36.976769] ==================================================================
[   36.977408] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0xa50/0xbc8
[   36.978761] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.979377] 
[   36.979790] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.980058] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.980157] Hardware name: linux,dummy-virt (DT)
[   36.980265] Call trace:
[   36.980337]  show_stack+0x20/0x38 (C)
[   36.980499]  dump_stack_lvl+0x8c/0xd0
[   36.980685]  print_report+0x118/0x5e0
[   36.980777]  kasan_report+0xc8/0x118
[   36.980861]  __asan_report_load8_noabort+0x20/0x30
[   36.980955]  kasan_bitops_test_and_modify.constprop.0+0xa50/0xbc8
[   36.981053]  kasan_bitops_generic+0x11c/0x1c8
[   36.981140]  kunit_try_run_case+0x14c/0x3d0
[   36.981223]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.981330]  kthread+0x318/0x618
[   36.981412]  ret_from_fork+0x10/0x20
[   36.981499] 
[   36.987974] Allocated by task 250:
[   36.988528]  kasan_save_stack+0x3c/0x68
[   36.989178]  kasan_save_track+0x20/0x40
[   36.989775]  kasan_save_alloc_info+0x40/0x58
[   36.990346]  __kasan_kmalloc+0xd4/0xd8
[   36.990909]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.991429]  kasan_bitops_generic+0xa0/0x1c8
[   36.992120]  kunit_try_run_case+0x14c/0x3d0
[   36.992801]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.993586]  kthread+0x318/0x618
[   36.994142]  ret_from_fork+0x10/0x20
[   36.994694] 
[   36.994998] The buggy address belongs to the object at fff9b96e85f285c0
[   36.994998]  which belongs to the cache kmalloc-16 of size 16
[   36.996383] The buggy address is located 8 bytes inside of
[   36.996383]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.997456] 
[   36.997844] The buggy address belongs to the physical page:
[   36.998598] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.999432] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   37.000301] page_type: f5(slab)
[   37.000824] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   37.001743] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   37.002633] page dumped because: kasan: bad access detected
[   37.003350] 
[   37.003732] Memory state around the buggy address:
[   37.004391]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.005174]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.005991] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   37.006759]                                               ^
[   37.007391]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.007963]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.008842] ==================================================================
---
[   37.149681] ==================================================================
[   37.150357] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0xa8c/0xbc8
[   37.151413] Read of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   37.152342] 
[   37.152757] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   37.153009] Tainted: [B]=BAD_PAGE, [N]=TEST
[   37.153106] Hardware name: linux,dummy-virt (DT)
[   37.153210] Call trace:
[   37.153287]  show_stack+0x20/0x38 (C)
[   37.153454]  dump_stack_lvl+0x8c/0xd0
[   37.153650]  print_report+0x118/0x5e0
[   37.153811]  kasan_report+0xc8/0x118
[   37.153967]  __asan_report_load8_noabort+0x20/0x30
[   37.154131]  kasan_bitops_test_and_modify.constprop.0+0xa8c/0xbc8
[   37.154236]  kasan_bitops_generic+0x11c/0x1c8
[   37.154327]  kunit_try_run_case+0x14c/0x3d0
[   37.154425]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.154518]  kthread+0x318/0x618
[   37.154623]  ret_from_fork+0x10/0x20
[   37.154709] 
[   37.161161] Allocated by task 250:
[   37.161697]  kasan_save_stack+0x3c/0x68
[   37.162317]  kasan_save_track+0x20/0x40
[   37.162826]  kasan_save_alloc_info+0x40/0x58
[   37.163490]  __kasan_kmalloc+0xd4/0xd8
[   37.164115]  __kmalloc_cache_noprof+0x15c/0x3c0
[   37.164824]  kasan_bitops_generic+0xa0/0x1c8
[   37.165371]  kunit_try_run_case+0x14c/0x3d0
[   37.165979]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.166777]  kthread+0x318/0x618
[   37.167351]  ret_from_fork+0x10/0x20
[   37.167959] 
[   37.168278] The buggy address belongs to the object at fff9b96e85f285c0
[   37.168278]  which belongs to the cache kmalloc-16 of size 16
[   37.169592] The buggy address is located 8 bytes inside of
[   37.169592]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   37.170795] 
[   37.171167] The buggy address belongs to the physical page:
[   37.171932] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   37.172821] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   37.173487] page_type: f5(slab)
[   37.174066] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   37.175004] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   37.175725] page dumped because: kasan: bad access detected
[   37.176447] 
[   37.176826] Memory state around the buggy address:
[   37.177434]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.178183]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.179040] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   37.179835]                                               ^
[   37.180564]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.181360]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.182125] ==================================================================

test log snippet boot parser 9
[   37.078263] ==================================================================
[   37.079476] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x334/0xbc8
[   37.081283] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   37.082288] 
[   37.082728] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   37.082965] Tainted: [B]=BAD_PAGE, [N]=TEST
[   37.083070] Hardware name: linux,dummy-virt (DT)
[   37.083183] Call trace:
[   37.083260]  show_stack+0x20/0x38 (C)
[   37.083417]  dump_stack_lvl+0x8c/0xd0
[   37.084005]  print_report+0x118/0x5e0
[   37.084174]  kasan_report+0xc8/0x118
[   37.084334]  kasan_check_range+0x100/0x1a8
[   37.084493]  __kasan_check_write+0x20/0x30
[   37.084683]  kasan_bitops_test_and_modify.constprop.0+0x334/0xbc8
[   37.084861]  kasan_bitops_generic+0x11c/0x1c8
[   37.085027]  kunit_try_run_case+0x14c/0x3d0
[   37.085190]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.085373]  kthread+0x318/0x618
[   37.085517]  ret_from_fork+0x10/0x20
[   37.085644] 
[   37.092611] Allocated by task 250:
[   37.093149]  kasan_save_stack+0x3c/0x68
[   37.093804]  kasan_save_track+0x20/0x40
[   37.094350]  kasan_save_alloc_info+0x40/0x58
[   37.094914]  __kasan_kmalloc+0xd4/0xd8
[   37.095431]  __kmalloc_cache_noprof+0x15c/0x3c0
[   37.096134]  kasan_bitops_generic+0xa0/0x1c8
[   37.096814]  kunit_try_run_case+0x14c/0x3d0
[   37.097471]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.098182]  kthread+0x318/0x618
[   37.098783]  ret_from_fork+0x10/0x20
[   37.099270] 
[   37.099661] The buggy address belongs to the object at fff9b96e85f285c0
[   37.099661]  which belongs to the cache kmalloc-16 of size 16
[   37.100893] The buggy address is located 8 bytes inside of
[   37.100893]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   37.102008] 
[   37.102374] The buggy address belongs to the physical page:
[   37.103129] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   37.104146] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   37.104860] page_type: f5(slab)
[   37.105261] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   37.105974] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   37.106929] page dumped because: kasan: bad access detected
[   37.107682] 
[   37.108017] Memory state around the buggy address:
[   37.108681]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.109478]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.110209] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   37.111136]                                               ^
[   37.112021]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.112786]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.113678] ==================================================================
---
[   36.861795] ==================================================================
[   36.863279] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0xfc/0xbc8
[   36.864145] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.864973] 
[   36.865411] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.865714] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.865816] Hardware name: linux,dummy-virt (DT)
[   36.865923] Call trace:
[   36.866005]  show_stack+0x20/0x38 (C)
[   36.866171]  dump_stack_lvl+0x8c/0xd0
[   36.866285]  print_report+0x118/0x5e0
[   36.866375]  kasan_report+0xc8/0x118
[   36.866468]  kasan_check_range+0x100/0x1a8
[   36.866577]  __kasan_check_write+0x20/0x30
[   36.866663]  kasan_bitops_test_and_modify.constprop.0+0xfc/0xbc8
[   36.866759]  kasan_bitops_generic+0x11c/0x1c8
[   36.866844]  kunit_try_run_case+0x14c/0x3d0
[   36.866926]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.867018]  kthread+0x318/0x618
[   36.867096]  ret_from_fork+0x10/0x20
[   36.867182] 
[   36.874432] Allocated by task 250:
[   36.875032]  kasan_save_stack+0x3c/0x68
[   36.875671]  kasan_save_track+0x20/0x40
[   36.876294]  kasan_save_alloc_info+0x40/0x58
[   36.876943]  __kasan_kmalloc+0xd4/0xd8
[   36.877590]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.878276]  kasan_bitops_generic+0xa0/0x1c8
[   36.878915]  kunit_try_run_case+0x14c/0x3d0
[   36.879596]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.880291]  kthread+0x318/0x618
[   36.880935]  ret_from_fork+0x10/0x20
[   36.881530] 
[   36.882172] The buggy address belongs to the object at fff9b96e85f285c0
[   36.882172]  which belongs to the cache kmalloc-16 of size 16
[   36.883937] The buggy address is located 8 bytes inside of
[   36.883937]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.885632] 
[   36.886166] The buggy address belongs to the physical page:
[   36.887056] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.888129] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.888950] page_type: f5(slab)
[   36.889770] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.890866] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.891922] page dumped because: kasan: bad access detected
[   36.892798] 
[   36.893131] Memory state around the buggy address:
[   36.894082]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.894712]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.895499] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.896428]                                               ^
[   36.897193]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.898030]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.898850] ==================================================================
---
[   36.936154] ==================================================================
[   36.937392] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x1d8/0xbc8
[   36.938269] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   36.939171] 
[   36.939557] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.939834] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.940085] Hardware name: linux,dummy-virt (DT)
[   36.940200] Call trace:
[   36.940263]  show_stack+0x20/0x38 (C)
[   36.940424]  dump_stack_lvl+0x8c/0xd0
[   36.940613]  print_report+0x118/0x5e0
[   36.940757]  kasan_report+0xc8/0x118
[   36.940854]  kasan_check_range+0x100/0x1a8
[   36.940942]  __kasan_check_write+0x20/0x30
[   36.941029]  kasan_bitops_test_and_modify.constprop.0+0x1d8/0xbc8
[   36.941127]  kasan_bitops_generic+0x11c/0x1c8
[   36.941212]  kunit_try_run_case+0x14c/0x3d0
[   36.941296]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.941401]  kthread+0x318/0x618
[   36.941480]  ret_from_fork+0x10/0x20
[   36.941624] 
[   36.950846] Allocated by task 250:
[   36.951447]  kasan_save_stack+0x3c/0x68
[   36.952210]  kasan_save_track+0x20/0x40
[   36.952968]  kasan_save_alloc_info+0x40/0x58
[   36.953775]  __kasan_kmalloc+0xd4/0xd8
[   36.954298]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.955209]  kasan_bitops_generic+0xa0/0x1c8
[   36.955985]  kunit_try_run_case+0x14c/0x3d0
[   36.956736]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.957606]  kthread+0x318/0x618
[   36.958071]  ret_from_fork+0x10/0x20
[   36.958992] 
[   36.959253] The buggy address belongs to the object at fff9b96e85f285c0
[   36.959253]  which belongs to the cache kmalloc-16 of size 16
[   36.960532] The buggy address is located 8 bytes inside of
[   36.960532]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   36.962012] 
[   36.962363] The buggy address belongs to the physical page:
[   36.963397] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   36.964297] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.965033] page_type: f5(slab)
[   36.965534] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   36.966348] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   36.967296] page dumped because: kasan: bad access detected
[   36.967953] 
[   36.968322] Memory state around the buggy address:
[   36.968985]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.969799]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   36.970516] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   36.971437]                                               ^
[   36.972104]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.972981]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.973776] ==================================================================
---
[   37.010218] ==================================================================
[   37.010974] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x25c/0xbc8
[   37.011734] Write of size 8 at addr fff9b96e85f285c8 by task kunit_try_catch/250
[   37.012523] 
[   37.012913] CPU: 0 UID: 0 PID: 250 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   37.013159] Tainted: [B]=BAD_PAGE, [N]=TEST
[   37.013258] Hardware name: linux,dummy-virt (DT)
[   37.013368] Call trace:
[   37.013443]  show_stack+0x20/0x38 (C)
[   37.013627]  dump_stack_lvl+0x8c/0xd0
[   37.013795]  print_report+0x118/0x5e0
[   37.013954]  kasan_report+0xc8/0x118
[   37.014116]  kasan_check_range+0x100/0x1a8
[   37.014277]  __kasan_check_write+0x20/0x30
[   37.014463]  kasan_bitops_test_and_modify.constprop.0+0x25c/0xbc8
[   37.014661]  kasan_bitops_generic+0x11c/0x1c8
[   37.014824]  kunit_try_run_case+0x14c/0x3d0
[   37.014981]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.015156]  kthread+0x318/0x618
[   37.015312]  ret_from_fork+0x10/0x20
[   37.015404] 
[   37.022188] Allocated by task 250:
[   37.022754]  kasan_save_stack+0x3c/0x68
[   37.023391]  kasan_save_track+0x20/0x40
[   37.023910]  kasan_save_alloc_info+0x40/0x58
[   37.024594]  __kasan_kmalloc+0xd4/0xd8
[   37.025184]  __kmalloc_cache_noprof+0x15c/0x3c0
[   37.025868]  kasan_bitops_generic+0xa0/0x1c8
[   37.026367]  kunit_try_run_case+0x14c/0x3d0
[   37.027053]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   37.027847]  kthread+0x318/0x618
[   37.028428]  ret_from_fork+0x10/0x20
[   37.029030] 
[   37.029404] The buggy address belongs to the object at fff9b96e85f285c0
[   37.029404]  which belongs to the cache kmalloc-16 of size 16
[   37.030478] The buggy address is located 8 bytes inside of
[   37.030478]  allocated 9-byte region [fff9b96e85f285c0, fff9b96e85f285c9)
[   37.031726] 
[   37.032098] The buggy address belongs to the physical page:
[   37.032848] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   37.033612] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   37.034485] page_type: f5(slab)
[   37.035071] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   37.035897] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   37.036861] page dumped because: kasan: bad access detected
[   37.037565] 
[   37.038452] Memory state around the buggy address:
[   37.038997]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.039894]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   37.040777] >fff9b96e85f28580: fa fb fc fc fa fb fc fc 00 01 fc fc fc fc fc fc
[   37.041137]                                               ^
[   37.041457]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.041832]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   37.042173] ==================================================================

test log snippet boot parser 10


[   33.059878] ==================================================================
[   33.060965] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_16+0x3a0/0x3f8
[   33.061711] Write of size 16 at addr fff9b96e85f28560 by task kunit_try_catch/155
[   33.062476] 
[   33.062897] CPU: 0 UID: 0 PID: 155 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   33.063175] Tainted: [B]=BAD_PAGE, [N]=TEST
[   33.063268] Hardware name: linux,dummy-virt (DT)
[   33.063372] Call trace:
[   33.063442]  show_stack+0x20/0x38 (C)
[   33.063627]  dump_stack_lvl+0x8c/0xd0
[   33.063785]  print_report+0x118/0x5e0
[   33.063912]  kasan_report+0xc8/0x118
[   33.063995]  __asan_report_store16_noabort+0x20/0x30
[   33.064084]  kmalloc_oob_16+0x3a0/0x3f8
[   33.064159]  kunit_try_run_case+0x14c/0x3d0
[   33.064236]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.064322]  kthread+0x318/0x618
[   33.064397]  ret_from_fork+0x10/0x20
[   33.064478] 
[   33.070794] Allocated by task 155:
[   33.071357]  kasan_save_stack+0x3c/0x68
[   33.072007]  kasan_save_track+0x20/0x40
[   33.072674]  kasan_save_alloc_info+0x40/0x58
[   33.073265]  __kasan_kmalloc+0xd4/0xd8
[   33.073792]  __kmalloc_cache_noprof+0x15c/0x3c0
[   33.074511]  kmalloc_oob_16+0xb4/0x3f8
[   33.075180]  kunit_try_run_case+0x14c/0x3d0
[   33.075891]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.076685]  kthread+0x318/0x618
[   33.077221]  ret_from_fork+0x10/0x20
[   33.077863] 
[   33.078255] The buggy address belongs to the object at fff9b96e85f28560
[   33.078255]  which belongs to the cache kmalloc-16 of size 16
[   33.079603] The buggy address is located 0 bytes inside of
[   33.079603]  allocated 13-byte region [fff9b96e85f28560, fff9b96e85f2856d)
[   33.080864] 
[   33.081267] The buggy address belongs to the physical page:
[   33.081918] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   33.082843] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   33.083761] page_type: f5(slab)
[   33.084339] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   33.085329] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   33.086138] page dumped because: kasan: bad access detected
[   33.086842] 
[   33.087232] Memory state around the buggy address:
[   33.087950]  fff9b96e85f28400: fa fb fc fc 00 00 fc fc fa fb fc fc fa fb fc fc
[   33.088821]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   33.089681] >fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc 00 05 fc fc
[   33.090459]                                                           ^
[   33.091338]  fff9b96e85f28580: 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.092192]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.093041] ==================================================================

test log snippet boot parser 11


[   36.053204] ==================================================================
[   36.054695] BUG: KASAN: slab-out-of-bounds in memchr+0x78/0x88
[   36.055261] Read of size 1 at addr fff9b96e86476d98 by task kunit_try_catch/244
[   36.056297] 
[   36.056951] CPU: 1 UID: 0 PID: 244 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.057562] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.057670] Hardware name: linux,dummy-virt (DT)
[   36.057785] Call trace:
[   36.057864]  show_stack+0x20/0x38 (C)
[   36.058032]  dump_stack_lvl+0x8c/0xd0
[   36.058195]  print_report+0x118/0x5e0
[   36.058300]  kasan_report+0xc8/0x118
[   36.058396]  __asan_report_load1_noabort+0x20/0x30
[   36.058492]  memchr+0x78/0x88
[   36.058590]  kasan_memchr+0x148/0x2d8
[   36.058670]  kunit_try_run_case+0x14c/0x3d0
[   36.058753]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.058844]  kthread+0x318/0x618
[   36.058925]  ret_from_fork+0x10/0x20
[   36.059013] 
[   36.067172] Allocated by task 244:
[   36.067716]  kasan_save_stack+0x3c/0x68
[   36.069285]  kasan_save_track+0x20/0x40
[   36.069929]  kasan_save_alloc_info+0x40/0x58
[   36.070700]  __kasan_kmalloc+0xd4/0xd8
[   36.071361]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.072521]  kasan_memchr+0xb0/0x2d8
[   36.073080]  kunit_try_run_case+0x14c/0x3d0
[   36.073792]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.074755]  kthread+0x318/0x618
[   36.075565]  ret_from_fork+0x10/0x20
[   36.076570] 
[   36.077062] The buggy address belongs to the object at fff9b96e86476d80
[   36.077062]  which belongs to the cache kmalloc-32 of size 32
[   36.078760] The buggy address is located 0 bytes to the right of
[   36.078760]  allocated 24-byte region [fff9b96e86476d80, fff9b96e86476d98)
[   36.080127] 
[   36.080565] The buggy address belongs to the physical page:
[   36.081272] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.082309] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.083128] page_type: f5(slab)
[   36.083664] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.084656] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.085871] page dumped because: kasan: bad access detected
[   36.086501] 
[   36.086841] Memory state around the buggy address:
[   36.087500]  fff9b96e86476c80: 00 00 07 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.089057]  fff9b96e86476d00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.089833] >fff9b96e86476d80: 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.090646]                             ^
[   36.091198]  fff9b96e86476e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.092321]  fff9b96e86476e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.093081] ==================================================================

test log snippet boot parser 12


[   36.104268] ==================================================================
[   36.105570] BUG: KASAN: slab-out-of-bounds in memcmp+0x198/0x1d8
[   36.106438] Read of size 1 at addr fff9b96e86476e58 by task kunit_try_catch/246
[   36.107590] 
[   36.108060] CPU: 1 UID: 0 PID: 246 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.108312] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.108465] Hardware name: linux,dummy-virt (DT)
[   36.109070] Call trace:
[   36.109165]  show_stack+0x20/0x38 (C)
[   36.109266]  dump_stack_lvl+0x8c/0xd0
[   36.109353]  print_report+0x118/0x5e0
[   36.109439]  kasan_report+0xc8/0x118
[   36.109523]  __asan_report_load1_noabort+0x20/0x30
[   36.109641]  memcmp+0x198/0x1d8
[   36.109720]  kasan_memcmp+0x16c/0x300
[   36.109801]  kunit_try_run_case+0x14c/0x3d0
[   36.109884]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.109976]  kthread+0x318/0x618
[   36.110060]  ret_from_fork+0x10/0x20
[   36.110146] 
[   36.119254] Allocated by task 246:
[   36.119880]  kasan_save_stack+0x3c/0x68
[   36.121202]  kasan_save_track+0x20/0x40
[   36.122121]  kasan_save_alloc_info+0x40/0x58
[   36.122940]  __kasan_kmalloc+0xd4/0xd8
[   36.123708]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.124664]  kasan_memcmp+0xbc/0x300
[   36.125415]  kunit_try_run_case+0x14c/0x3d0
[   36.126191]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.127095]  kthread+0x318/0x618
[   36.127634]  ret_from_fork+0x10/0x20
[   36.128626] 
[   36.129376] The buggy address belongs to the object at fff9b96e86476e40
[   36.129376]  which belongs to the cache kmalloc-32 of size 32
[   36.130758] The buggy address is located 0 bytes to the right of
[   36.130758]  allocated 24-byte region [fff9b96e86476e40, fff9b96e86476e58)
[   36.132397] 
[   36.132777] The buggy address belongs to the physical page:
[   36.133413] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.134376] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.135229] page_type: f5(slab)
[   36.135796] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.136694] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.137489] page dumped because: kasan: bad access detected
[   36.139128] 
[   36.139586] Memory state around the buggy address:
[   36.140762]  fff9b96e86476d00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.141520]  fff9b96e86476d80: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.142404] >fff9b96e86476e00: 00 00 00 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.143221]                                                     ^
[   36.144035]  fff9b96e86476e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.144992]  fff9b96e86476f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.146166] ==================================================================

test log snippet boot parser 13


[   36.305001] ==================================================================
[   36.305766] BUG: KASAN: slab-use-after-free in kasan_strings+0x858/0x8f8
[   36.306401] Read of size 1 at addr fff9b96e86476f90 by task kunit_try_catch/248
[   36.307164] 
[   36.307572] CPU: 1 UID: 0 PID: 248 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.307813] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.307914] Hardware name: linux,dummy-virt (DT)
[   36.308024] Call trace:
[   36.308105]  show_stack+0x20/0x38 (C)
[   36.308277]  dump_stack_lvl+0x8c/0xd0
[   36.308459]  print_report+0x118/0x5e0
[   36.308639]  kasan_report+0xc8/0x118
[   36.308786]  __asan_report_load1_noabort+0x20/0x30
[   36.308880]  kasan_strings+0x858/0x8f8
[   36.308959]  kunit_try_run_case+0x14c/0x3d0
[   36.309048]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.309140]  kthread+0x318/0x618
[   36.309219]  ret_from_fork+0x10/0x20
[   36.309305] 
[   36.315237] Allocated by task 248:
[   36.315780]  kasan_save_stack+0x3c/0x68
[   36.316337]  kasan_save_track+0x20/0x40
[   36.317244]  kasan_save_alloc_info+0x40/0x58
[   36.317929]  __kasan_kmalloc+0xd4/0xd8
[   36.318586]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.319231]  kasan_strings+0xb4/0x8f8
[   36.319727]  kunit_try_run_case+0x14c/0x3d0
[   36.320373]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.321150]  kthread+0x318/0x618
[   36.321774]  ret_from_fork+0x10/0x20
[   36.322389] 
[   36.322760] Freed by task 248:
[   36.323192]  kasan_save_stack+0x3c/0x68
[   36.323837]  kasan_save_track+0x20/0x40
[   36.324469]  kasan_save_free_info+0x4c/0x78
[   36.325137]  __kasan_slab_free+0x6c/0x98
[   36.325793]  kfree+0x114/0x3c8
[   36.326259]  kasan_strings+0x128/0x8f8
[   36.326825]  kunit_try_run_case+0x14c/0x3d0
[   36.327477]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.328262]  kthread+0x318/0x618
[   36.328783]  ret_from_fork+0x10/0x20
[   36.329374] 
[   36.329731] The buggy address belongs to the object at fff9b96e86476f80
[   36.329731]  which belongs to the cache kmalloc-32 of size 32
[   36.330924] The buggy address is located 16 bytes inside of
[   36.330924]  freed 32-byte region [fff9b96e86476f80, fff9b96e86476fa0)
[   36.332193] 
[   36.332595] The buggy address belongs to the physical page:
[   36.333325] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.334122] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.335026] page_type: f5(slab)
[   36.335606] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.336483] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.337452] page dumped because: kasan: bad access detected
[   36.338194] 
[   36.338531] Memory state around the buggy address:
[   36.339180]  fff9b96e86476e80: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.339937]  fff9b96e86476f00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.340795] >fff9b96e86476f80: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   36.341682]                          ^
[   36.342186]  fff9b96e86477000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   36.342843]  fff9b96e86477080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.343742] ==================================================================

test log snippet boot parser 14


[   33.661325] ==================================================================
[   33.662562] BUG: KASAN: slab-use-after-free in kmalloc_double_kzfree+0x168/0x308
[   33.663398] Read of size 1 at addr fff9b96e85f285a0 by task kunit_try_catch/181
[   33.664088] 
[   33.664689] CPU: 0 UID: 0 PID: 181 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   33.664926] Tainted: [B]=BAD_PAGE, [N]=TEST
[   33.665017] Hardware name: linux,dummy-virt (DT)
[   33.665120] Call trace:
[   33.665188]  show_stack+0x20/0x38 (C)
[   33.665342]  dump_stack_lvl+0x8c/0xd0
[   33.665497]  print_report+0x118/0x5e0
[   33.665670]  kasan_report+0xc8/0x118
[   33.665817]  __kasan_check_byte+0x54/0x70
[   33.665972]  kfree_sensitive+0x30/0xb0
[   33.666127]  kmalloc_double_kzfree+0x168/0x308
[   33.666281]  kunit_try_run_case+0x14c/0x3d0
[   33.666394]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.666486]  kthread+0x318/0x618
[   33.666584]  ret_from_fork+0x10/0x20
[   33.666669] 
[   33.674488] Allocated by task 181:
[   33.675146]  kasan_save_stack+0x3c/0x68
[   33.676031]  kasan_save_track+0x20/0x40
[   33.676585]  kasan_save_alloc_info+0x40/0x58
[   33.677173]  __kasan_kmalloc+0xd4/0xd8
[   33.677829]  __kmalloc_cache_noprof+0x15c/0x3c0
[   33.678389]  kmalloc_double_kzfree+0xb8/0x308
[   33.679105]  kunit_try_run_case+0x14c/0x3d0
[   33.679689]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.680496]  kthread+0x318/0x618
[   33.681281]  ret_from_fork+0x10/0x20
[   33.681832] 
[   33.682229] Freed by task 181:
[   33.682806]  kasan_save_stack+0x3c/0x68
[   33.683481]  kasan_save_track+0x20/0x40
[   33.684239]  kasan_save_free_info+0x4c/0x78
[   33.684997]  __kasan_slab_free+0x6c/0x98
[   33.685756]  kfree+0x114/0x3c8
[   33.686270]  kfree_sensitive+0x80/0xb0
[   33.686865]  kmalloc_double_kzfree+0x11c/0x308
[   33.687503]  kunit_try_run_case+0x14c/0x3d0
[   33.688127]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.689105]  kthread+0x318/0x618
[   33.689609]  ret_from_fork+0x10/0x20
[   33.690056] 
[   33.690839] The buggy address belongs to the object at fff9b96e85f285a0
[   33.690839]  which belongs to the cache kmalloc-16 of size 16
[   33.692305] The buggy address is located 0 bytes inside of
[   33.692305]  freed 16-byte region [fff9b96e85f285a0, fff9b96e85f285b0)
[   33.693521] 
[   33.693904] The buggy address belongs to the physical page:
[   33.694532] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105f28
[   33.695331] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   33.696470] page_type: f5(slab)
[   33.697044] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   33.697881] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   33.698753] page dumped because: kasan: bad access detected
[   33.699468] 
[   33.699848] Memory state around the buggy address:
[   33.700444]  fff9b96e85f28480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   33.701717]  fff9b96e85f28500: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   33.702466] >fff9b96e85f28580: fa fb fc fc fa fb fc fc fc fc fc fc fc fc fc fc
[   33.703307]                                ^
[   33.703896]  fff9b96e85f28600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.704970]  fff9b96e85f28680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.705701] ==================================================================

test log snippet boot parser 15


[   33.103262] ==================================================================
[   33.104911] BUG: KASAN: slab-use-after-free in kmalloc_uaf_16+0x3bc/0x438
[   33.105671] Read of size 16 at addr fff9b96e85fec560 by task kunit_try_catch/157
[   33.106477] 
[   33.106876] CPU: 1 UID: 0 PID: 157 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   33.107132] Tainted: [B]=BAD_PAGE, [N]=TEST
[   33.107217] Hardware name: linux,dummy-virt (DT)
[   33.107313] Call trace:
[   33.107386]  show_stack+0x20/0x38 (C)
[   33.107564]  dump_stack_lvl+0x8c/0xd0
[   33.107728]  print_report+0x118/0x5e0
[   33.107885]  kasan_report+0xc8/0x118
[   33.108028]  __asan_report_load16_noabort+0x20/0x30
[   33.108189]  kmalloc_uaf_16+0x3bc/0x438
[   33.108327]  kunit_try_run_case+0x14c/0x3d0
[   33.108453]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.108567]  kthread+0x318/0x618
[   33.108649]  ret_from_fork+0x10/0x20
[   33.108733] 
[   33.115640] Allocated by task 157:
[   33.116804]  kasan_save_stack+0x3c/0x68
[   33.117351]  kasan_save_track+0x20/0x40
[   33.117904]  kasan_save_alloc_info+0x40/0x58
[   33.118462]  __kasan_kmalloc+0xd4/0xd8
[   33.119413]  __kmalloc_cache_noprof+0x15c/0x3c0
[   33.119978]  kmalloc_uaf_16+0x140/0x438
[   33.121108]  kunit_try_run_case+0x14c/0x3d0
[   33.121954]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.122869]  kthread+0x318/0x618
[   33.123641]  ret_from_fork+0x10/0x20
[   33.124406] 
[   33.124761] Freed by task 157:
[   33.125191]  kasan_save_stack+0x3c/0x68
[   33.126259]  kasan_save_track+0x20/0x40
[   33.126749]  kasan_save_free_info+0x4c/0x78
[   33.127437]  __kasan_slab_free+0x6c/0x98
[   33.128434]  kfree+0x114/0x3c8
[   33.128966]  kmalloc_uaf_16+0x190/0x438
[   33.129518]  kunit_try_run_case+0x14c/0x3d0
[   33.130157]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.130844]  kthread+0x318/0x618
[   33.131419]  ret_from_fork+0x10/0x20
[   33.132015] 
[   33.132823] The buggy address belongs to the object at fff9b96e85fec560
[   33.132823]  which belongs to the cache kmalloc-16 of size 16
[   33.134130] The buggy address is located 0 bytes inside of
[   33.134130]  freed 16-byte region [fff9b96e85fec560, fff9b96e85fec570)
[   33.135374] 
[   33.135763] The buggy address belongs to the physical page:
[   33.136628] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x105fec
[   33.137902] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   33.138815] page_type: f5(slab)
[   33.139340] raw: 0bfffe0000000000 fff9b96e80001640 dead000000000122 0000000000000000
[   33.140386] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   33.141362] page dumped because: kasan: bad access detected
[   33.142304] 
[   33.142718] Memory state around the buggy address:
[   33.143295]  fff9b96e85fec400: 00 05 fc fc fa fb fc fc 00 02 fc fc fa fb fc fc
[   33.144102]  fff9b96e85fec480: fa fb fc fc fa fb fc fc fa fb fc fc fa fb fc fc
[   33.145213] >fff9b96e85fec500: 00 04 fc fc fa fb fc fc 00 00 fc fc fa fb fc fc
[   33.146186]                                                        ^
[   33.147125]  fff9b96e85fec580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.148751]  fff9b96e85fec600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   33.149454] ==================================================================

test log snippet boot parser 16


[   34.073079] ==================================================================
[   34.074404] BUG: KASAN: slab-use-after-free in rcu_uaf_reclaim+0x64/0x70
[   34.075360] Read of size 4 at addr fff9b96e8653a940 by task swapper/1/0
[   34.075972] 
[   34.076810] CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Tainted: G    B            N 6.13.0 #1
[   34.077024] Tainted: [B]=BAD_PAGE, [N]=TEST
[   34.077196] Hardware name: linux,dummy-virt (DT)
[   34.077314] Call trace:
[   34.077359]  show_stack+0x20/0x38 (C)
[   34.077449]  dump_stack_lvl+0x8c/0xd0
[   34.077534]  print_report+0x118/0x5e0
[   34.077634]  kasan_report+0xc8/0x118
[   34.077712]  __asan_report_load4_noabort+0x20/0x30
[   34.077800]  rcu_uaf_reclaim+0x64/0x70
[   34.077876]  rcu_core+0x9f4/0x1e20
[   34.077950]  rcu_core_si+0x18/0x30
[   34.078024]  handle_softirqs+0x374/0xb20
[   34.078100]  __do_softirq+0x1c/0x28
[   34.078173]  ____do_softirq+0x18/0x30
[   34.078248]  call_on_irq_stack+0x24/0x58
[   34.078326]  do_softirq_own_stack+0x24/0x38
[   34.078418]  __irq_exit_rcu+0x1fc/0x318
[   34.078492]  irq_exit_rcu+0x1c/0x80
[   34.078582]  el1_interrupt+0x38/0x58
[   34.078668]  el1h_64_irq_handler+0x18/0x28
[   34.078751]  el1h_64_irq+0x6c/0x70
[   34.078902]  arch_local_irq_enable+0x4/0x8 (P)
[   34.078994]  do_idle+0x384/0x4e8
[   34.079069]  cpu_startup_entry+0x64/0x80
[   34.079146]  secondary_start_kernel+0x288/0x340
[   34.079231]  __secondary_switched+0xc0/0xc8
[   34.079322] 
[   34.090137] Allocated by task 187:
[   34.090779]  kasan_save_stack+0x3c/0x68
[   34.091455]  kasan_save_track+0x20/0x40
[   34.092132]  kasan_save_alloc_info+0x40/0x58
[   34.092846]  __kasan_kmalloc+0xd4/0xd8
[   34.093440]  __kmalloc_cache_noprof+0x15c/0x3c0
[   34.094173]  rcu_uaf+0xb0/0x2d0
[   34.094780]  kunit_try_run_case+0x14c/0x3d0
[   34.095372]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   34.096186]  kthread+0x318/0x618
[   34.096809]  ret_from_fork+0x10/0x20
[   34.097470] 
[   34.097858] Freed by task 0:
[   34.098299]  kasan_save_stack+0x3c/0x68
[   34.098811]  kasan_save_track+0x20/0x40
[   34.099441]  kasan_save_free_info+0x4c/0x78
[   34.100190]  __kasan_slab_free+0x6c/0x98
[   34.100865]  kfree+0x114/0x3c8
[   34.101515]  rcu_uaf_reclaim+0x28/0x70
[   34.102154]  rcu_core+0x9f4/0x1e20
[   34.102748]  rcu_core_si+0x18/0x30
[   34.103255]  handle_softirqs+0x374/0xb20
[   34.103953]  __do_softirq+0x1c/0x28
[   34.104566] 
[   34.104924] Last potentially related work creation:
[   34.105641]  kasan_save_stack+0x3c/0x68
[   34.106238]  __kasan_record_aux_stack+0xbc/0xe8
[   34.106845]  kasan_record_aux_stack_noalloc+0x14/0x20
[   34.107649]  __call_rcu_common.constprop.0+0x74/0xa10
[   34.108237]  call_rcu+0x18/0x30
[   34.108837]  rcu_uaf+0x14c/0x2d0
[   34.109418]  kunit_try_run_case+0x14c/0x3d0
[   34.110080]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   34.110850]  kthread+0x318/0x618
[   34.111406]  ret_from_fork+0x10/0x20
[   34.112151] 
[   34.112562] The buggy address belongs to the object at fff9b96e8653a940
[   34.112562]  which belongs to the cache kmalloc-32 of size 32
[   34.113666] The buggy address is located 0 bytes inside of
[   34.113666]  freed 32-byte region [fff9b96e8653a940, fff9b96e8653a960)
[   34.115090] 
[   34.115488] The buggy address belongs to the physical page:
[   34.116188] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10653a
[   34.117245] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   34.117998] page_type: f5(slab)
[   34.118620] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   34.119440] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   34.120377] page dumped because: kasan: bad access detected
[   34.121166] 
[   34.121607] Memory state around the buggy address:
[   34.122228]  fff9b96e8653a800: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[   34.123119]  fff9b96e8653a880: 00 00 00 fc fc fc fc fc 00 00 05 fc fc fc fc fc
[   34.123877] >fff9b96e8653a900: 00 00 07 fc fc fc fc fc fa fb fb fb fc fc fc fc
[   34.124831]                                            ^
[   34.125620]  fff9b96e8653a980: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   34.126462]  fff9b96e8653aa00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   34.127367] ==================================================================

test log snippet boot parser 17


[   36.158603] ==================================================================
[   36.159857] BUG: KASAN: slab-use-after-free in strchr+0xcc/0xd8
[   36.160790] Read of size 1 at addr fff9b96e86476f90 by task kunit_try_catch/248
[   36.161723] 
[   36.162126] CPU: 1 UID: 0 PID: 248 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.162389] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.162493] Hardware name: linux,dummy-virt (DT)
[   36.163092] Call trace:
[   36.163182]  show_stack+0x20/0x38 (C)
[   36.163302]  dump_stack_lvl+0x8c/0xd0
[   36.163389]  print_report+0x118/0x5e0
[   36.163473]  kasan_report+0xc8/0x118
[   36.163581]  __asan_report_load1_noabort+0x20/0x30
[   36.163676]  strchr+0xcc/0xd8
[   36.163751]  kasan_strings+0x180/0x8f8
[   36.163836]  kunit_try_run_case+0x14c/0x3d0
[   36.164015]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.164192]  kthread+0x318/0x618
[   36.164306]  ret_from_fork+0x10/0x20
[   36.164397] 
[   36.171723] Allocated by task 248:
[   36.172780]  kasan_save_stack+0x3c/0x68
[   36.173259]  kasan_save_track+0x20/0x40
[   36.173793]  kasan_save_alloc_info+0x40/0x58
[   36.174456]  __kasan_kmalloc+0xd4/0xd8
[   36.175003]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.175695]  kasan_strings+0xb4/0x8f8
[   36.176250]  kunit_try_run_case+0x14c/0x3d0
[   36.177313]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.178030]  kthread+0x318/0x618
[   36.178553]  ret_from_fork+0x10/0x20
[   36.179146] 
[   36.179508] Freed by task 248:
[   36.180047]  kasan_save_stack+0x3c/0x68
[   36.180782]  kasan_save_track+0x20/0x40
[   36.181353]  kasan_save_free_info+0x4c/0x78
[   36.181875]  __kasan_slab_free+0x6c/0x98
[   36.182519]  kfree+0x114/0x3c8
[   36.183405]  kasan_strings+0x128/0x8f8
[   36.184063]  kunit_try_run_case+0x14c/0x3d0
[   36.184829]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.185622]  kthread+0x318/0x618
[   36.186197]  ret_from_fork+0x10/0x20
[   36.186733] 
[   36.187100] The buggy address belongs to the object at fff9b96e86476f80
[   36.187100]  which belongs to the cache kmalloc-32 of size 32
[   36.188675] The buggy address is located 16 bytes inside of
[   36.188675]  freed 32-byte region [fff9b96e86476f80, fff9b96e86476fa0)
[   36.189876] 
[   36.190251] The buggy address belongs to the physical page:
[   36.191010] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.192265] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.193178] page_type: f5(slab)
[   36.193771] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.194668] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.195635] page dumped because: kasan: bad access detected
[   36.196576] 
[   36.196940] Memory state around the buggy address:
[   36.197645]  fff9b96e86476e80: 00 00 00 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.198471]  fff9b96e86476f00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.199232] >fff9b96e86476f80: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   36.200042]                          ^
[   36.200603]  fff9b96e86477000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   36.201971]  fff9b96e86477080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.202868] ==================================================================

test log snippet boot parser 18


[   36.257098] ==================================================================
[   36.258268] BUG: KASAN: slab-use-after-free in strcmp+0xc0/0xc8
[   36.259190] Read of size 1 at addr fff9b96e86476f90 by task kunit_try_catch/248
[   36.260645] 
[   36.261046] CPU: 1 UID: 0 PID: 248 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.261334] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.261442] Hardware name: linux,dummy-virt (DT)
[   36.261570] Call trace:
[   36.261655]  show_stack+0x20/0x38 (C)
[   36.261826]  dump_stack_lvl+0x8c/0xd0
[   36.261986]  print_report+0x118/0x5e0
[   36.262088]  kasan_report+0xc8/0x118
[   36.262195]  __asan_report_load1_noabort+0x20/0x30
[   36.262293]  strcmp+0xc0/0xc8
[   36.262368]  kasan_strings+0x248/0x8f8
[   36.262460]  kunit_try_run_case+0x14c/0x3d0
[   36.262569]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.262666]  kthread+0x318/0x618
[   36.262747]  ret_from_fork+0x10/0x20
[   36.262831] 
[   36.270157] Allocated by task 248:
[   36.270740]  kasan_save_stack+0x3c/0x68
[   36.271395]  kasan_save_track+0x20/0x40
[   36.271932]  kasan_save_alloc_info+0x40/0x58
[   36.272622]  __kasan_kmalloc+0xd4/0xd8
[   36.273235]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.274630]  kasan_strings+0xb4/0x8f8
[   36.275002]  kunit_try_run_case+0x14c/0x3d0
[   36.275814]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.276501]  kthread+0x318/0x618
[   36.276964]  ret_from_fork+0x10/0x20
[   36.277619] 
[   36.278046] Freed by task 248:
[   36.278654]  kasan_save_stack+0x3c/0x68
[   36.279129]  kasan_save_track+0x20/0x40
[   36.279405]  kasan_save_free_info+0x4c/0x78
[   36.279869]  __kasan_slab_free+0x6c/0x98
[   36.280755]  kfree+0x114/0x3c8
[   36.281432]  kasan_strings+0x128/0x8f8
[   36.281952]  kunit_try_run_case+0x14c/0x3d0
[   36.282492]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.283122]  kthread+0x318/0x618
[   36.283721]  ret_from_fork+0x10/0x20
[   36.284306] 
[   36.284705] The buggy address belongs to the object at fff9b96e86476f80
[   36.284705]  which belongs to the cache kmalloc-32 of size 32
[   36.285918] The buggy address is located 16 bytes inside of
[   36.285918]  freed 32-byte region [fff9b96e86476f80, fff9b96e86476fa0)
[   36.287124] 
[   36.287503] The buggy address belongs to the physical page:
[   36.288240] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.289052] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.289913] page_type: f5(slab)
[   36.290478] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.291464] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.292296] page dumped because: kasan: bad access detected
[   36.292850] 
[   36.293226] Memory state around the buggy address:
[   36.293914]  fff9b96e86476e80: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.294780]  fff9b96e86476f00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.295564] >fff9b96e86476f80: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   36.296439]                          ^
[   36.296964]  fff9b96e86477000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   36.297906]  fff9b96e86477080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.298699] ==================================================================

test log snippet boot parser 19


[   36.345888] ==================================================================
[   36.346921] BUG: KASAN: slab-use-after-free in strlen+0xa8/0xb0
[   36.347691] Read of size 1 at addr fff9b96e86476f90 by task kunit_try_catch/248
[   36.348525] 
[   36.348955] CPU: 1 UID: 0 PID: 248 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.349116] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.349191] Hardware name: linux,dummy-virt (DT)
[   36.349251] Call trace:
[   36.349294]  show_stack+0x20/0x38 (C)
[   36.349388]  dump_stack_lvl+0x8c/0xd0
[   36.349474]  print_report+0x118/0x5e0
[   36.349709]  kasan_report+0xc8/0x118
[   36.349896]  __asan_report_load1_noabort+0x20/0x30
[   36.350077]  strlen+0xa8/0xb0
[   36.350188]  kasan_strings+0x320/0x8f8
[   36.350269]  kunit_try_run_case+0x14c/0x3d0
[   36.350353]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.350456]  kthread+0x318/0x618
[   36.350562]  ret_from_fork+0x10/0x20
[   36.350650] 
[   36.357322] Allocated by task 248:
[   36.358021]  kasan_save_stack+0x3c/0x68
[   36.358754]  kasan_save_track+0x20/0x40
[   36.359444]  kasan_save_alloc_info+0x40/0x58
[   36.360062]  __kasan_kmalloc+0xd4/0xd8
[   36.360844]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.361646]  kasan_strings+0xb4/0x8f8
[   36.362386]  kunit_try_run_case+0x14c/0x3d0
[   36.363137]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.363889]  kthread+0x318/0x618
[   36.364527]  ret_from_fork+0x10/0x20
[   36.365227] 
[   36.365670] Freed by task 248:
[   36.366192]  kasan_save_stack+0x3c/0x68
[   36.366909]  kasan_save_track+0x20/0x40
[   36.367563]  kasan_save_free_info+0x4c/0x78
[   36.368254]  __kasan_slab_free+0x6c/0x98
[   36.368940]  kfree+0x114/0x3c8
[   36.369499]  kasan_strings+0x128/0x8f8
[   36.370181]  kunit_try_run_case+0x14c/0x3d0
[   36.370855]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.371669]  kthread+0x318/0x618
[   36.372236]  ret_from_fork+0x10/0x20
[   36.372961] 
[   36.373360] The buggy address belongs to the object at fff9b96e86476f80
[   36.373360]  which belongs to the cache kmalloc-32 of size 32
[   36.374787] The buggy address is located 16 bytes inside of
[   36.374787]  freed 32-byte region [fff9b96e86476f80, fff9b96e86476fa0)
[   36.376164] 
[   36.376623] The buggy address belongs to the physical page:
[   36.377260] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.378098] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.378967] page_type: f5(slab)
[   36.379569] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.380493] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.381489] page dumped because: kasan: bad access detected
[   36.382111] 
[   36.382528] Memory state around the buggy address:
[   36.383250]  fff9b96e86476e80: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.384100]  fff9b96e86476f00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.385034] >fff9b96e86476f80: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   36.385866]                          ^
[   36.386409]  fff9b96e86477000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   36.387251]  fff9b96e86477080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.388118] ==================================================================

test log snippet boot parser 20


[   36.391280] ==================================================================
[   36.392217] BUG: KASAN: slab-use-after-free in strnlen+0x80/0x88
[   36.393672] Read of size 1 at addr fff9b96e86476f90 by task kunit_try_catch/248
[   36.394347] 
[   36.394780] CPU: 1 UID: 0 PID: 248 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.394969] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.395060] Hardware name: linux,dummy-virt (DT)
[   36.395161] Call trace:
[   36.395244]  show_stack+0x20/0x38 (C)
[   36.395411]  dump_stack_lvl+0x8c/0xd0
[   36.395636]  print_report+0x118/0x5e0
[   36.395812]  kasan_report+0xc8/0x118
[   36.395974]  __asan_report_load1_noabort+0x20/0x30
[   36.396153]  strnlen+0x80/0x88
[   36.396305]  kasan_strings+0x380/0x8f8
[   36.396457]  kunit_try_run_case+0x14c/0x3d0
[   36.396607]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.396718]  kthread+0x318/0x618
[   36.396799]  ret_from_fork+0x10/0x20
[   36.396886] 
[   36.403517] Allocated by task 248:
[   36.404100]  kasan_save_stack+0x3c/0x68
[   36.404791]  kasan_save_track+0x20/0x40
[   36.405438]  kasan_save_alloc_info+0x40/0x58
[   36.406132]  __kasan_kmalloc+0xd4/0xd8
[   36.406831]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.407581]  kasan_strings+0xb4/0x8f8
[   36.408239]  kunit_try_run_case+0x14c/0x3d0
[   36.408790]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.409599]  kthread+0x318/0x618
[   36.410186]  ret_from_fork+0x10/0x20
[   36.410800] 
[   36.411170] Freed by task 248:
[   36.411845]  kasan_save_stack+0x3c/0x68
[   36.412725]  kasan_save_track+0x20/0x40
[   36.413392]  kasan_save_free_info+0x4c/0x78
[   36.414119]  __kasan_slab_free+0x6c/0x98
[   36.414784]  kfree+0x114/0x3c8
[   36.415250]  kasan_strings+0x128/0x8f8
[   36.415909]  kunit_try_run_case+0x14c/0x3d0
[   36.416579]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.417281]  kthread+0x318/0x618
[   36.417855]  ret_from_fork+0x10/0x20
[   36.418446] 
[   36.418849] The buggy address belongs to the object at fff9b96e86476f80
[   36.418849]  which belongs to the cache kmalloc-32 of size 32
[   36.420257] The buggy address is located 16 bytes inside of
[   36.420257]  freed 32-byte region [fff9b96e86476f80, fff9b96e86476fa0)
[   36.421609] 
[   36.421926] The buggy address belongs to the physical page:
[   36.422735] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.423782] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.424613] page_type: f5(slab)
[   36.425169] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.426127] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.426977] page dumped because: kasan: bad access detected
[   36.427780] 
[   36.428177] Memory state around the buggy address:
[   36.428822]  fff9b96e86476e80: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.429688]  fff9b96e86476f00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.430650] >fff9b96e86476f80: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   36.431490]                          ^
[   36.432067]  fff9b96e86477000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   36.432940]  fff9b96e86477080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.433801] ==================================================================

test log snippet boot parser 21


[   36.206736] ==================================================================
[   36.207472] BUG: KASAN: slab-use-after-free in strrchr+0x6c/0x78
[   36.208499] Read of size 1 at addr fff9b96e86476f90 by task kunit_try_catch/248
[   36.209121] 
[   36.209502] CPU: 1 UID: 0 PID: 248 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   36.209702] Tainted: [B]=BAD_PAGE, [N]=TEST
[   36.209775] Hardware name: linux,dummy-virt (DT)
[   36.209839] Call trace:
[   36.209882]  show_stack+0x20/0x38 (C)
[   36.209975]  dump_stack_lvl+0x8c/0xd0
[   36.210061]  print_report+0x118/0x5e0
[   36.210145]  kasan_report+0xc8/0x118
[   36.210227]  __asan_report_load1_noabort+0x20/0x30
[   36.210320]  strrchr+0x6c/0x78
[   36.210407]  kasan_strings+0x1e0/0x8f8
[   36.210489]  kunit_try_run_case+0x14c/0x3d0
[   36.210641]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.210812]  kthread+0x318/0x618
[   36.210966]  ret_from_fork+0x10/0x20
[   36.211131] 
[   36.220398] Allocated by task 248:
[   36.220936]  kasan_save_stack+0x3c/0x68
[   36.221516]  kasan_save_track+0x20/0x40
[   36.222098]  kasan_save_alloc_info+0x40/0x58
[   36.222788]  __kasan_kmalloc+0xd4/0xd8
[   36.224137]  __kmalloc_cache_noprof+0x15c/0x3c0
[   36.224830]  kasan_strings+0xb4/0x8f8
[   36.225351]  kunit_try_run_case+0x14c/0x3d0
[   36.226399]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.227057]  kthread+0x318/0x618
[   36.227718]  ret_from_fork+0x10/0x20
[   36.228512] 
[   36.228958] Freed by task 248:
[   36.229651]  kasan_save_stack+0x3c/0x68
[   36.230411]  kasan_save_track+0x20/0x40
[   36.231053]  kasan_save_free_info+0x4c/0x78
[   36.231640]  __kasan_slab_free+0x6c/0x98
[   36.232478]  kfree+0x114/0x3c8
[   36.232884]  kasan_strings+0x128/0x8f8
[   36.233630]  kunit_try_run_case+0x14c/0x3d0
[   36.234371]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   36.235378]  kthread+0x318/0x618
[   36.235996]  ret_from_fork+0x10/0x20
[   36.236646] 
[   36.236980] The buggy address belongs to the object at fff9b96e86476f80
[   36.236980]  which belongs to the cache kmalloc-32 of size 32
[   36.238261] The buggy address is located 16 bytes inside of
[   36.238261]  freed 32-byte region [fff9b96e86476f80, fff9b96e86476fa0)
[   36.239701] 
[   36.241041] The buggy address belongs to the physical page:
[   36.241818] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106476
[   36.242760] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   36.243595] page_type: f5(slab)
[   36.244503] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   36.245191] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   36.246097] page dumped because: kasan: bad access detected
[   36.246816] 
[   36.247215] Memory state around the buggy address:
[   36.247932]  fff9b96e86476e80: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   36.249260]  fff9b96e86476f00: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   36.250086] >fff9b96e86476f80: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   36.250899]                          ^
[   36.251471]  fff9b96e86477000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   36.252385]  fff9b96e86477080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   36.253915] ==================================================================

test log snippet boot parser 22


[   34.146706] ==================================================================
[   34.147620] BUG: KASAN: slab-use-after-free in workqueue_uaf+0x480/0x4a8
[   34.149346] Read of size 8 at addr fff9b96e8653aa00 by task kunit_try_catch/189
[   34.150293] 
[   34.150726] CPU: 1 UID: 0 PID: 189 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   34.151500] Tainted: [B]=BAD_PAGE, [N]=TEST
[   34.151694] Hardware name: linux,dummy-virt (DT)
[   34.151807] Call trace:
[   34.151969]  show_stack+0x20/0x38 (C)
[   34.152144]  dump_stack_lvl+0x8c/0xd0
[   34.152262]  print_report+0x118/0x5e0
[   34.152342]  kasan_report+0xc8/0x118
[   34.152421]  __asan_report_load8_noabort+0x20/0x30
[   34.152510]  workqueue_uaf+0x480/0x4a8
[   34.152616]  kunit_try_run_case+0x14c/0x3d0
[   34.152697]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   34.152787]  kthread+0x318/0x618
[   34.152865]  ret_from_fork+0x10/0x20
[   34.152949] 
[   34.159354] Allocated by task 189:
[   34.160069]  kasan_save_stack+0x3c/0x68
[   34.160646]  kasan_save_track+0x20/0x40
[   34.161183]  kasan_save_alloc_info+0x40/0x58
[   34.162178]  __kasan_kmalloc+0xd4/0xd8
[   34.162718]  __kmalloc_cache_noprof+0x15c/0x3c0
[   34.163444]  workqueue_uaf+0x13c/0x4a8
[   34.164158]  kunit_try_run_case+0x14c/0x3d0
[   34.165098]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   34.165846]  kthread+0x318/0x618
[   34.166667]  ret_from_fork+0x10/0x20
[   34.167374] 
[   34.167949] Freed by task 74:
[   34.168556]  kasan_save_stack+0x3c/0x68
[   34.169198]  kasan_save_track+0x20/0x40
[   34.170239]  kasan_save_free_info+0x4c/0x78
[   34.170868]  __kasan_slab_free+0x6c/0x98
[   34.171770]  kfree+0x114/0x3c8
[   34.172648]  workqueue_uaf_work+0x18/0x30
[   34.173221]  process_one_work+0x530/0xf98
[   34.173825]  worker_thread+0x614/0xf28
[   34.174436]  kthread+0x318/0x618
[   34.174992]  ret_from_fork+0x10/0x20
[   34.175596] 
[   34.176001] Last potentially related work creation:
[   34.176586]  kasan_save_stack+0x3c/0x68
[   34.177120]  __kasan_record_aux_stack+0xbc/0xe8
[   34.178013]  kasan_record_aux_stack_noalloc+0x14/0x20
[   34.178650]  __queue_work+0x65c/0xfd8
[   34.179246]  queue_work_on+0xbc/0xf8
[   34.179826]  workqueue_uaf+0x210/0x4a8
[   34.180720]  kunit_try_run_case+0x14c/0x3d0
[   34.181270]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   34.181958]  kthread+0x318/0x618
[   34.182462]  ret_from_fork+0x10/0x20
[   34.183067] 
[   34.183418] The buggy address belongs to the object at fff9b96e8653aa00
[   34.183418]  which belongs to the cache kmalloc-32 of size 32
[   34.184875] The buggy address is located 0 bytes inside of
[   34.184875]  freed 32-byte region [fff9b96e8653aa00, fff9b96e8653aa20)
[   34.185995] 
[   34.186362] The buggy address belongs to the physical page:
[   34.187038] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10653a
[   34.188605] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   34.189268] page_type: f5(slab)
[   34.189837] raw: 0bfffe0000000000 fff9b96e80001780 dead000000000122 0000000000000000
[   34.191009] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   34.191408] page dumped because: kasan: bad access detected
[   34.192190] 
[   34.192564] Memory state around the buggy address:
[   34.193210]  fff9b96e8653a900: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[   34.194364]  fff9b96e8653a980: 00 00 07 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   34.195294] >fff9b96e8653aa00: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   34.196351]                    ^
[   34.196898]  fff9b96e8653aa80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   34.197753]  fff9b96e8653ab00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   34.198394] ==================================================================

test log snippet boot parser 23


[   35.929601] ==================================================================
[   35.930979] BUG: KASAN: stack-out-of-bounds in kasan_stack_oob+0x230/0x268
[   35.931660] Read of size 1 at addr ffff800080a47c2a by task kunit_try_catch/238
[   35.932280] 
[   35.932678] CPU: 1 UID: 0 PID: 238 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   35.932933] Tainted: [B]=BAD_PAGE, [N]=TEST
[   35.933022] Hardware name: linux,dummy-virt (DT)
[   35.933120] Call trace:
[   35.933189]  show_stack+0x20/0x38 (C)
[   35.933344]  dump_stack_lvl+0x8c/0xd0
[   35.933503]  print_report+0x2fc/0x5e0
[   35.933932]  kasan_report+0xc8/0x118
[   35.934084]  __asan_report_load1_noabort+0x20/0x30
[   35.934252]  kasan_stack_oob+0x230/0x268
[   35.934412]  kunit_try_run_case+0x14c/0x3d0
[   35.934583]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   35.934711]  kthread+0x318/0x618
[   35.934791]  ret_from_fork+0x10/0x20
[   35.934874] 
[   35.943221] The buggy address belongs to stack of task kunit_try_catch/238
[   35.944578]  and is located at offset 138 in frame:
[   35.945335]  kasan_stack_oob+0x0/0x268
[   35.946041] 
[   35.946400] This frame has 4 objects:
[   35.947101]  [48, 49) '__assertion'
[   35.947279]  [64, 72) 'array'
[   35.947815]  [96, 112) '__assertion'
[   35.948615]  [128, 138) 'stack_array'
[   35.949632] 
[   35.950497] The buggy address belongs to the virtual mapping at
[   35.950497]  [ffff800080a40000, ffff800080a49000) created by:
[   35.950497]  kernel_clone+0x140/0x788
[   35.952654] 
[   35.952983] The buggy address belongs to the physical page:
[   35.953730] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1065b8
[   35.954526] flags: 0xbfffe0000000000(node=0|zone=2|lastcpupid=0x1ffff)
[   35.955445] raw: 0bfffe0000000000 0000000000000000 dead000000000122 0000000000000000
[   35.956317] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   35.957495] page dumped because: kasan: bad access detected
[   35.958123] 
[   35.958485] Memory state around the buggy address:
[   35.959794]  ffff800080a47b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   35.960778]  ffff800080a47b80: 00 00 00 00 f1 f1 f1 f1 f1 f1 01 f2 00 f2 f2 f2
[   35.961627] >ffff800080a47c00: 00 00 f2 f2 00 02 f3 f3 00 00 00 00 00 00 00 00
[   35.962398]                                   ^
[   35.963103]  ffff800080a47c80: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 00 f2 f2
[   35.963904]  ffff800080a47d00: 00 00 f2 f2 00 00 f3 f3 00 00 00 00 00 00 00 00
[   35.965210] ==================================================================

test log snippet boot parser 24


[   41.833211] ==================================================================
[   41.834127] BUG: KFENCE: invalid free in test_double_free+0x1bc/0x238
[   41.834127] 
[   41.834887] Invalid free of 0x000000002bc86e1b (in kfence-#179):
[   41.835802]  test_double_free+0x1bc/0x238
[   41.836325]  kunit_try_run_case+0x14c/0x3d0
[   41.837058]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.837876]  kthread+0x318/0x618
[   41.838424]  ret_from_fork+0x10/0x20
[   41.839098] 
[   41.839497] kfence-#179: 0x000000002bc86e1b-0x000000005573eed3, size=32, cache=kmalloc-32
[   41.839497] 
[   41.840584] allocated by task 292 on cpu 1 at 41.832640s (0.007932s ago):
[   41.841659]  test_alloc+0x298/0x620
[   41.842368]  test_double_free+0xd4/0x238
[   41.842993]  kunit_try_run_case+0x14c/0x3d0
[   41.843635]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.844361]  kthread+0x318/0x618
[   41.844964]  ret_from_fork+0x10/0x20
[   41.845457] 
[   41.845896] freed by task 292 on cpu 1 at 41.832758s (0.013123s ago):
[   41.846702]  test_double_free+0x1ac/0x238
[   41.847374]  kunit_try_run_case+0x14c/0x3d0
[   41.848007]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.848766]  kthread+0x318/0x618
[   41.849314]  ret_from_fork+0x10/0x20
[   41.849914] 
[   41.850309] CPU: 1 UID: 0 PID: 292 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   41.851367] Tainted: [B]=BAD_PAGE, [N]=TEST
[   41.852025] Hardware name: linux,dummy-virt (DT)
[   41.852611] ==================================================================

test log snippet boot parser 25


[   41.937118] ==================================================================
[   41.937849] BUG: KFENCE: invalid free in test_double_free+0x100/0x238
[   41.937849] 
[   41.938753] Invalid free of 0x000000001c356495 (in kfence-#180):
[   41.939611]  test_double_free+0x100/0x238
[   41.940434]  kunit_try_run_case+0x14c/0x3d0
[   41.941056]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.941652]  kthread+0x318/0x618
[   41.942321]  ret_from_fork+0x10/0x20
[   41.942900] 
[   41.943286] kfence-#180: 0x000000001c356495-0x000000008f4430b7, size=32, cache=test
[   41.943286] 
[   41.944179] allocated by task 294 on cpu 0 at 41.936674s (0.007493s ago):
[   41.945074]  test_alloc+0x22c/0x620
[   41.945736]  test_double_free+0xd4/0x238
[   41.946425]  kunit_try_run_case+0x14c/0x3d0
[   41.947137]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.947894]  kthread+0x318/0x618
[   41.948499]  ret_from_fork+0x10/0x20
[   41.949126] 
[   41.949553] freed by task 294 on cpu 0 at 41.936795s (0.012728s ago):
[   41.950365]  test_double_free+0xf0/0x238
[   41.951058]  kunit_try_run_case+0x14c/0x3d0
[   41.951655]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.952434]  kthread+0x318/0x618
[   41.952993]  ret_from_fork+0x10/0x20
[   41.953615] 
[   41.954002] CPU: 0 UID: 0 PID: 294 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   41.954999] Tainted: [B]=BAD_PAGE, [N]=TEST
[   41.955621] Hardware name: linux,dummy-virt (DT)
[   41.956172] ==================================================================

test log snippet boot parser 26


[   42.144811] ==================================================================
[   42.145623] BUG: KFENCE: invalid free in test_invalid_addr_free+0xec/0x238
[   42.145623] 
[   42.146391] Invalid free of 0x00000000b50b7dc1 (in kfence-#182):
[   42.147113]  test_invalid_addr_free+0xec/0x238
[   42.147711]  kunit_try_run_case+0x14c/0x3d0
[   42.148351]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.149062]  kthread+0x318/0x618
[   42.149652]  ret_from_fork+0x10/0x20
[   42.150161] 
[   42.150564] kfence-#182: 0x000000006fb55d14-0x000000008329a1f2, size=32, cache=test
[   42.150564] 
[   42.151588] allocated by task 298 on cpu 1 at 42.144627s (0.006949s ago):
[   42.152382]  test_alloc+0x22c/0x620
[   42.152984]  test_invalid_addr_free+0xd4/0x238
[   42.153566]  kunit_try_run_case+0x14c/0x3d0
[   42.154204]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.154850]  kthread+0x318/0x618
[   42.155417]  ret_from_fork+0x10/0x20
[   42.155934] 
[   42.156337] CPU: 1 UID: 0 PID: 298 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   42.157315] Tainted: [B]=BAD_PAGE, [N]=TEST
[   42.157915] Hardware name: linux,dummy-virt (DT)
[   42.158583] ==================================================================

test log snippet boot parser 27


[   42.040808] ==================================================================
[   42.041626] BUG: KFENCE: invalid free in test_invalid_addr_free+0x1ac/0x238
[   42.041626] 
[   42.042384] Invalid free of 0x00000000c2b21778 (in kfence-#181):
[   42.043138]  test_invalid_addr_free+0x1ac/0x238
[   42.043808]  kunit_try_run_case+0x14c/0x3d0
[   42.044628]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.045331]  kthread+0x318/0x618
[   42.045934]  ret_from_fork+0x10/0x20
[   42.046556] 
[   42.046922] kfence-#181: 0x000000007520f8eb-0x00000000c46c9338, size=32, cache=kmalloc-32
[   42.046922] 
[   42.047972] allocated by task 296 on cpu 1 at 42.040604s (0.007357s ago):
[   42.048792]  test_alloc+0x298/0x620
[   42.049375]  test_invalid_addr_free+0xd4/0x238
[   42.049956]  kunit_try_run_case+0x14c/0x3d0
[   42.050525]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.051187]  kthread+0x318/0x618
[   42.051799]  ret_from_fork+0x10/0x20
[   42.052389] 
[   42.052819] CPU: 1 UID: 0 PID: 296 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   42.053656] Tainted: [B]=BAD_PAGE, [N]=TEST
[   42.054270] Hardware name: linux,dummy-virt (DT)
[   42.054973] ==================================================================

test log snippet boot parser 28


[   42.873026] ==================================================================
[   42.873906] BUG: KFENCE: memory corruption in test_corruption+0x1d8/0x378
[   42.873906] 
[   42.874864] Corrupted memory at 0x0000000094ae592e [ ! ] (in kfence-#189):
[   42.875918]  test_corruption+0x1d8/0x378
[   42.876407]  kunit_try_run_case+0x14c/0x3d0
[   42.877075]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.877726]  kthread+0x318/0x618
[   42.878299]  ret_from_fork+0x10/0x20
[   42.878792] 
[   42.879167] kfence-#189: 0x00000000ebb4ef8f-0x0000000037a4660b, size=32, cache=test
[   42.879167] 
[   42.880147] allocated by task 302 on cpu 0 at 42.872771s (0.007364s ago):
[   42.880864]  test_alloc+0x22c/0x620
[   42.881457]  test_corruption+0x198/0x378
[   42.882104]  kunit_try_run_case+0x14c/0x3d0
[   42.882773]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.883531]  kthread+0x318/0x618
[   42.884115]  ret_from_fork+0x10/0x20
[   42.884711] 
[   42.885023] freed by task 302 on cpu 0 at 42.872868s (0.012143s ago):
[   42.885921]  test_corruption+0x1d8/0x378
[   42.886565]  kunit_try_run_case+0x14c/0x3d0
[   42.887155]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.887863]  kthread+0x318/0x618
[   42.888427]  ret_from_fork+0x10/0x20
[   42.888969] 
[   42.889304] CPU: 0 UID: 0 PID: 302 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   42.890391] Tainted: [B]=BAD_PAGE, [N]=TEST
[   42.891037] Hardware name: linux,dummy-virt (DT)
[   42.891621] ==================================================================

test log snippet boot parser 29


[   42.353667] ==================================================================
[   42.354594] BUG: KFENCE: memory corruption in test_corruption+0x284/0x378
[   42.354594] 
[   42.355277] Corrupted memory at 0x00000000eab81750 [ ! ] (in kfence-#184):
[   42.356586]  test_corruption+0x284/0x378
[   42.357150]  kunit_try_run_case+0x14c/0x3d0
[   42.357762]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.358388]  kthread+0x318/0x618
[   42.358988]  ret_from_fork+0x10/0x20
[   42.359485] 
[   42.359885] kfence-#184: 0x000000003c730ef1-0x00000000cadad36e, size=32, cache=kmalloc-32
[   42.359885] 
[   42.360999] allocated by task 300 on cpu 1 at 42.353124s (0.007863s ago):
[   42.361835]  test_alloc+0x298/0x620
[   42.362398]  test_corruption+0x198/0x378
[   42.363014]  kunit_try_run_case+0x14c/0x3d0
[   42.363566]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.364202]  kthread+0x318/0x618
[   42.364789]  ret_from_fork+0x10/0x20
[   42.365391] 
[   42.365777] freed by task 300 on cpu 1 at 42.353302s (0.012458s ago):
[   42.366569]  test_corruption+0x284/0x378
[   42.367144]  kunit_try_run_case+0x14c/0x3d0
[   42.367800]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.368520]  kthread+0x318/0x618
[   42.368988]  ret_from_fork+0x10/0x20
[   42.369607] 
[   42.369987] CPU: 1 UID: 0 PID: 300 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   42.370942] Tainted: [B]=BAD_PAGE, [N]=TEST
[   42.371596] Hardware name: linux,dummy-virt (DT)
[   42.372139] ==================================================================

test log snippet boot parser 30


[   42.768919] ==================================================================
[   42.769519] BUG: KFENCE: memory corruption in test_corruption+0x120/0x378
[   42.769519] 
[   42.770222] Corrupted memory at 0x00000000211c54c2 [ ! . . . . . . . . . . . . . . . ] (in kfence-#188):
[   42.772395]  test_corruption+0x120/0x378
[   42.773263]  kunit_try_run_case+0x14c/0x3d0
[   42.774002]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.774935]  kthread+0x318/0x618
[   42.775632]  ret_from_fork+0x10/0x20
[   42.776427] 
[   42.776879] kfence-#188: 0x0000000003a2b36a-0x0000000080e6ba0c, size=32, cache=test
[   42.776879] 
[   42.778599] allocated by task 302 on cpu 0 at 42.768651s (0.009937s ago):
[   42.779590]  test_alloc+0x22c/0x620
[   42.780149]  test_corruption+0xdc/0x378
[   42.780775]  kunit_try_run_case+0x14c/0x3d0
[   42.781381]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.782066]  kthread+0x318/0x618
[   42.782516]  ret_from_fork+0x10/0x20
[   42.783132] 
[   42.783505] freed by task 302 on cpu 0 at 42.768746s (0.014747s ago):
[   42.784242]  test_corruption+0x120/0x378
[   42.784883]  kunit_try_run_case+0x14c/0x3d0
[   42.785441]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.786206]  kthread+0x318/0x618
[   42.786811]  ret_from_fork+0x10/0x20
[   42.787347] 
[   42.787769] CPU: 0 UID: 0 PID: 302 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   42.788733] Tainted: [B]=BAD_PAGE, [N]=TEST
[   42.789270] Hardware name: linux,dummy-virt (DT)
[   42.789933] ==================================================================

test log snippet boot parser 31

[   42.249075] ==================================================================
[   42.249838] BUG: KFENCE: memory corruption in test_corruption+0x278/0x378
[   42.249838] 
[   42.250577] Corrupted memory at 0x00000000a548c009 [ ! . . . . . . . . . . . . . . . ] (in kfence-#183):
[   42.254495]  test_corruption+0x278/0x378
[   42.255028]  kunit_try_run_case+0x14c/0x3d0
[   42.255699]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.256336]  kthread+0x318/0x618
[   42.256895]  ret_from_fork+0x10/0x20
[   42.257443] 
[   42.257791] kfence-#183: 0x00000000b4bc5dd9-0x00000000628bc686, size=32, cache=kmalloc-32
[   42.257791] 
[   42.258771] allocated by task 300 on cpu 1 at 42.248639s (0.010121s ago):
[   42.259643]  test_alloc+0x298/0x620
[   42.260221]  test_corruption+0xdc/0x378
[   42.260795]  kunit_try_run_case+0x14c/0x3d0
[   42.261326]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.262124]  kthread+0x318/0x618
[   42.262593]  ret_from_fork+0x10/0x20
[   42.263186] 
[   42.263567] freed by task 300 on cpu 1 at 42.248813s (0.014738s ago):
[   42.264453]  test_corruption+0x278/0x378
[   42.265043]  kunit_try_run_case+0x14c/0x3d0
[   42.265694]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   42.266284]  kthread+0x318/0x618
[   42.266890]  ret_from_fork+0x10/0x20
[   42.267484] 
[   42.267914] CPU: 1 UID: 0 PID: 300 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   42.269001] Tainted: [B]=BAD_PAGE, [N]=TEST
[   42.269553] Hardware name: linux,dummy-virt (DT)
[   42.270110] ==================================================================

test log snippet boot parser 32

[   47.453191] ==================================================================
[   47.453957] BUG: KFENCE: memory corruption in test_kmalloc_aligned_oob_write+0x214/0x2c0
[   47.453957] 
[   47.454900] Corrupted memory at 0x00000000caa979e8 [ ! . . . . . . . . . . . . . . . ] (in kfence-#233):
[   47.457421]  test_kmalloc_aligned_oob_write+0x214/0x2c0
[   47.458149]  kunit_try_run_case+0x14c/0x3d0
[   47.458775]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   47.459558]  kthread+0x318/0x618
[   47.460082]  ret_from_fork+0x10/0x20
[   47.460647] 
[   47.460970] kfence-#233: 0x000000004a7195e6-0x00000000f71f49c9, size=73, cache=kmalloc-96
[   47.460970] 
[   47.461961] allocated by task 314 on cpu 1 at 47.452700s (0.009249s ago):
[   47.462672]  test_alloc+0x298/0x620
[   47.463270]  test_kmalloc_aligned_oob_write+0xbc/0x2c0
[   47.464015]  kunit_try_run_case+0x14c/0x3d0
[   47.464661]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   47.465287]  kthread+0x318/0x618
[   47.465849]  ret_from_fork+0x10/0x20
[   47.466446] 
[   47.466844] freed by task 314 on cpu 1 at 47.452950s (0.013880s ago):
[   47.467513]  test_kmalloc_aligned_oob_write+0x214/0x2c0
[   47.468265]  kunit_try_run_case+0x14c/0x3d0
[   47.468885]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   47.469485]  kthread+0x318/0x618
[   47.470084]  ret_from_fork+0x10/0x20
[   47.470584] 
[   47.470996] CPU: 1 UID: 0 PID: 314 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   47.471942] Tainted: [B]=BAD_PAGE, [N]=TEST
[   47.472556] Hardware name: linux,dummy-virt (DT)
[   47.473194] ==================================================================

test log snippet boot parser 33

[   47.349181] ==================================================================
[   47.349922] BUG: KFENCE: out-of-bounds read in test_kmalloc_aligned_oob_read+0x238/0x468
[   47.349922] 
[   47.350810] Out-of-bounds read at 0x0000000039dc5688 (105B right of kfence-#232):
[   47.351697]  test_kmalloc_aligned_oob_read+0x238/0x468
[   47.352382]  kunit_try_run_case+0x14c/0x3d0
[   47.353054]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   47.353714]  kthread+0x318/0x618
[   47.354209]  ret_from_fork+0x10/0x20
[   47.354848] 
[   47.355216] kfence-#232: 0x00000000e198667b-0x000000001d5a6324, size=73, cache=kmalloc-96
[   47.355216] 
[   47.356158] allocated by task 312 on cpu 0 at 47.348714s (0.007432s ago):
[   47.356984]  test_alloc+0x298/0x620
[   47.357589]  test_kmalloc_aligned_oob_read+0x100/0x468
[   47.358229]  kunit_try_run_case+0x14c/0x3d0
[   47.358903]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   47.359554]  kthread+0x318/0x618
[   47.360137]  ret_from_fork+0x10/0x20
[   47.360638] 
[   47.361046] CPU: 0 UID: 0 PID: 312 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   47.362102] Tainted: [B]=BAD_PAGE, [N]=TEST
[   47.362593] Hardware name: linux,dummy-virt (DT)
[   47.363217] ==================================================================

test log snippet boot parser 34

[   40.377317] ==================================================================
[   40.378140] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x1c8/0x3e0
[   40.378140] 
[   40.378949] Out-of-bounds read at 0x000000006ef5f8d9 (32B right of kfence-#165):
[   40.379838]  test_out_of_bounds_read+0x1c8/0x3e0
[   40.380424]  kunit_try_run_case+0x14c/0x3d0
[   40.381091]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.381935]  kthread+0x318/0x618
[   40.382417]  ret_from_fork+0x10/0x20
[   40.383028] 
[   40.383402] kfence-#165: 0x00000000b5028432-0x00000000c8fcaf00, size=32, cache=kmalloc-32
[   40.383402] 
[   40.384507] allocated by task 276 on cpu 0 at 40.376874s (0.007622s ago):
[   40.385336]  test_alloc+0x298/0x620
[   40.385890]  test_out_of_bounds_read+0x198/0x3e0
[   40.386595]  kunit_try_run_case+0x14c/0x3d0
[   40.387216]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.387922]  kthread+0x318/0x618
[   40.388418]  ret_from_fork+0x10/0x20
[   40.389002] 
[   40.389411] CPU: 0 UID: 0 PID: 276 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   40.390521] Tainted: [B]=BAD_PAGE, [N]=TEST
[   40.391046] Hardware name: linux,dummy-virt (DT)
[   40.391744] ==================================================================

test log snippet boot parser 35

[   40.274448] ==================================================================
[   40.275266] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x114/0x3e0
[   40.275266] 
[   40.276478] Out-of-bounds read at 0x00000000a8d1a292 (1B left of kfence-#164):
[   40.277484]  test_out_of_bounds_read+0x114/0x3e0
[   40.278116]  kunit_try_run_case+0x14c/0x3d0
[   40.279400]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.280012]  kthread+0x318/0x618
[   40.280601]  ret_from_fork+0x10/0x20
[   40.281059] 
[   40.281481] kfence-#164: 0x0000000018af47b2-0x000000003d5a78cd, size=32, cache=kmalloc-32
[   40.281481] 
[   40.282489] allocated by task 276 on cpu 0 at 40.273830s (0.008647s ago):
[   40.283417]  test_alloc+0x298/0x620
[   40.284017]  test_out_of_bounds_read+0xdc/0x3e0
[   40.284587]  kunit_try_run_case+0x14c/0x3d0
[   40.285242]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.286011]  kthread+0x318/0x618
[   40.286596]  ret_from_fork+0x10/0x20
[   40.287244] 
[   40.287633] CPU: 0 UID: 0 PID: 276 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   40.288669] Tainted: [B]=BAD_PAGE, [N]=TEST
[   40.289152] Hardware name: linux,dummy-virt (DT)
[   40.289822] ==================================================================

test log snippet boot parser 36

[   40.480998] ==================================================================
[   40.481907] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x114/0x3e0
[   40.481907] 
[   40.483097] Out-of-bounds read at 0x0000000067aee51a (1B left of kfence-#166):
[   40.484129]  test_out_of_bounds_read+0x114/0x3e0
[   40.484942]  kunit_try_run_case+0x14c/0x3d0
[   40.485526]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.486411]  kthread+0x318/0x618
[   40.487052]  ret_from_fork+0x10/0x20
[   40.487762] 
[   40.488185] kfence-#166: 0x00000000bcf4b24e-0x000000008db2fa03, size=32, cache=test
[   40.488185] 
[   40.489273] allocated by task 278 on cpu 1 at 40.480817s (0.008445s ago):
[   40.490428]  test_alloc+0x22c/0x620
[   40.491053]  test_out_of_bounds_read+0xdc/0x3e0
[   40.491875]  kunit_try_run_case+0x14c/0x3d0
[   40.492413]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.493096]  kthread+0x318/0x618
[   40.493712]  ret_from_fork+0x10/0x20
[   40.494275] 
[   40.494757] CPU: 1 UID: 0 PID: 278 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   40.495808] Tainted: [B]=BAD_PAGE, [N]=TEST
[   40.496379] Hardware name: linux,dummy-virt (DT)
[   40.496991] ==================================================================

test log snippet boot parser 37

[   40.584890] ==================================================================
[   40.585715] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x1c8/0x3e0
[   40.585715] 
[   40.586631] Out-of-bounds read at 0x000000000bc2fbef (32B right of kfence-#167):
[   40.587464]  test_out_of_bounds_read+0x1c8/0x3e0
[   40.588597]  kunit_try_run_case+0x14c/0x3d0
[   40.589188]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.589882]  kthread+0x318/0x618
[   40.590391]  ret_from_fork+0x10/0x20
[   40.590879] 
[   40.591257] kfence-#167: 0x0000000032a7ebcc-0x0000000091bd5587, size=32, cache=test
[   40.591257] 
[   40.592365] allocated by task 278 on cpu 1 at 40.584729s (0.007625s ago):
[   40.593179]  test_alloc+0x22c/0x620
[   40.593681]  test_out_of_bounds_read+0x198/0x3e0
[   40.594373]  kunit_try_run_case+0x14c/0x3d0
[   40.595056]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   40.595846]  kthread+0x318/0x618
[   40.596423]  ret_from_fork+0x10/0x20
[   40.597010] 
[   40.597416] CPU: 1 UID: 0 PID: 278 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   40.598327] Tainted: [B]=BAD_PAGE, [N]=TEST
[   40.598961] Hardware name: linux,dummy-virt (DT)
[   40.599632] ==================================================================

test log snippet boot parser 38

[   41.312923] ==================================================================
[   41.313917] BUG: KFENCE: out-of-bounds write in test_out_of_bounds_write+0x100/0x240
[   41.313917] 
[   41.314942] Out-of-bounds write at 0x0000000066e029b4 (1B left of kfence-#174):
[   41.316819]  test_out_of_bounds_write+0x100/0x240
[   41.317612]  kunit_try_run_case+0x14c/0x3d0
[   41.318183]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.318933]  kthread+0x318/0x618
[   41.319454]  ret_from_fork+0x10/0x20
[   41.320095] 
[   41.320469] kfence-#174: 0x0000000009dbd7d4-0x000000004d2a5afe, size=32, cache=test
[   41.320469] 
[   41.321507] allocated by task 282 on cpu 1 at 41.312760s (0.008701s ago):
[   41.322443]  test_alloc+0x22c/0x620
[   41.322911]  test_out_of_bounds_write+0xc8/0x240
[   41.323674]  kunit_try_run_case+0x14c/0x3d0
[   41.324296]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.325098]  kthread+0x318/0x618
[   41.325624]  ret_from_fork+0x10/0x20
[   41.326255] 
[   41.326721] CPU: 1 UID: 0 PID: 282 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   41.327770] Tainted: [B]=BAD_PAGE, [N]=TEST
[   41.328350] Hardware name: linux,dummy-virt (DT)
[   41.329061] ==================================================================

test log snippet boot parser 39

[   41.104981] ==================================================================
[   41.105868] BUG: KFENCE: out-of-bounds write in test_out_of_bounds_write+0x100/0x240
[   41.105868] 
[   41.106666] Out-of-bounds write at 0x00000000b1bcd77a (1B left of kfence-#172):
[   41.107569]  test_out_of_bounds_write+0x100/0x240
[   41.108269]  kunit_try_run_case+0x14c/0x3d0
[   41.108875]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.109592]  kthread+0x318/0x618
[   41.110115]  ret_from_fork+0x10/0x20
[   41.110739] 
[   41.111079] kfence-#172: 0x000000005cfc89f6-0x00000000aefda7c1, size=32, cache=kmalloc-32
[   41.111079] 
[   41.112030] allocated by task 280 on cpu 1 at 41.104705s (0.007314s ago):
[   41.112923]  test_alloc+0x298/0x620
[   41.113512]  test_out_of_bounds_write+0xc8/0x240
[   41.114069]  kunit_try_run_case+0x14c/0x3d0
[   41.114743]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.115320]  kthread+0x318/0x618
[   41.115905]  ret_from_fork+0x10/0x20
[   41.116479] 
[   41.116897] CPU: 1 UID: 0 PID: 280 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   41.117915] Tainted: [B]=BAD_PAGE, [N]=TEST
[   41.118555] Hardware name: linux,dummy-virt (DT)
[   41.119086] ==================================================================

test log snippet boot parser 40

[   33.507063] ==================================================================
[   33.508678] BUG: KFENCE: use-after-free read in kmalloc_uaf+0x184/0x338
[   33.508678] 
[   33.509681] Use-after-free read at 0x00000000f5d45bc9 (in kfence-#131):
[   33.511455]  kmalloc_uaf+0x184/0x338
[   33.512174]  kunit_try_run_case+0x14c/0x3d0
[   33.513016]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.513891]  kthread+0x318/0x618
[   33.514389]  ret_from_fork+0x10/0x20
[   33.514963] 
[   33.516098] kfence-#131: 0x0000000024377005-0x0000000068a223ab, size=10, cache=kmalloc-16
[   33.516098] 
[   33.517661] allocated by task 173 on cpu 0 at 33.505462s (0.011835s ago):
[   33.519075]  kmalloc_uaf+0xb8/0x338
[   33.519622]  kunit_try_run_case+0x14c/0x3d0
[   33.520153]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.520800]  kthread+0x318/0x618
[   33.521264]  ret_from_fork+0x10/0x20
[   33.522151] 
[   33.522834] freed by task 173 on cpu 0 at 33.505581s (0.017013s ago):
[   33.524256]  kmalloc_uaf+0x11c/0x338
[   33.524790]  kunit_try_run_case+0x14c/0x3d0
[   33.525355]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   33.526109]  kthread+0x318/0x618
[   33.526681]  ret_from_fork+0x10/0x20
[   33.527287] 
[   33.527816] CPU: 0 UID: 0 PID: 173 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   33.529579] Tainted: [B]=BAD_PAGE, [N]=TEST
[   33.530298] Hardware name: linux,dummy-virt (DT)
[   33.531135] ==================================================================


test log snippet boot parser 42

[   64.197801] ==================================================================
[   64.198471] BUG: KFENCE: use-after-free read in test_krealloc+0x51c/0x830
[   64.198471] 
[   64.199330] Use-after-free read at 0x0000000019ea3de4 (in kfence-#238):
[   64.200074]  test_krealloc+0x51c/0x830
[   64.201193]  kunit_try_run_case+0x14c/0x3d0
[   64.202203]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   64.203026]  kthread+0x318/0x618
[   64.203387]  ret_from_fork+0x10/0x20
[   64.204426] 
[   64.204802] kfence-#238: 0x0000000019ea3de4-0x00000000c125c642, size=32, cache=kmalloc-32
[   64.204802] 
[   64.205742] allocated by task 326 on cpu 0 at 64.196736s (0.008995s ago):
[   64.206613]  test_alloc+0x298/0x620
[   64.207087]  test_krealloc+0xc0/0x830
[   64.207576]  kunit_try_run_case+0x14c/0x3d0
[   64.208732]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   64.209468]  kthread+0x318/0x618
[   64.210132]  ret_from_fork+0x10/0x20
[   64.210644] 
[   64.210949] freed by task 326 on cpu 0 at 64.197156s (0.013781s ago):
[   64.211608]  krealloc_noprof+0x148/0x360
[   64.212135]  test_krealloc+0x1dc/0x830
[   64.213092]  kunit_try_run_case+0x14c/0x3d0
[   64.213808]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   64.214673]  kthread+0x318/0x618
[   64.215383]  ret_from_fork+0x10/0x20
[   64.215924] 
[   64.216299] CPU: 0 UID: 0 PID: 326 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   64.217593] Tainted: [B]=BAD_PAGE, [N]=TEST
[   64.218210] Hardware name: linux,dummy-virt (DT)
[   64.219219] ==================================================================

test log snippet boot parser 43

[   64.106469] ==================================================================
[   64.107226] BUG: KFENCE: use-after-free read in test_memcache_typesafe_by_rcu+0x280/0x560
[   64.107226] 
[   64.108467] Use-after-free read at 0x00000000fcbfd90d (in kfence-#237):
[   64.109745]  test_memcache_typesafe_by_rcu+0x280/0x560
[   64.110531]  kunit_try_run_case+0x14c/0x3d0
[   64.111044]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   64.111614]  kthread+0x318/0x618
[   64.112068]  ret_from_fork+0x10/0x20
[   64.112533] 
[   64.112853] kfence-#237: 0x00000000fcbfd90d-0x000000001973935d, size=32, cache=test
[   64.112853] 
[   64.113601] allocated by task 324 on cpu 1 at 64.097792s (0.015796s ago):
[   64.114269]  test_alloc+0x22c/0x620
[   64.114745]  test_memcache_typesafe_by_rcu+0x15c/0x560
[   64.115277]  kunit_try_run_case+0x14c/0x3d0
[   64.115786]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   64.116327]  kthread+0x318/0x618
[   64.116784]  ret_from_fork+0x10/0x20
[   64.117338] 
[   64.117671] freed by task 324 on cpu 1 at 64.097981s (0.019676s ago):
[   64.118362]  test_memcache_typesafe_by_rcu+0x1a8/0x560
[   64.118933]  kunit_try_run_case+0x14c/0x3d0
[   64.119413]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   64.119987]  kthread+0x318/0x618
[   64.120428]  ret_from_fork+0x10/0x20
[   64.120896] 
[   64.121243] CPU: 1 UID: 0 PID: 324 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   64.122021] Tainted: [B]=BAD_PAGE, [N]=TEST
[   64.122465] Hardware name: linux,dummy-virt (DT)
[   64.122965] ==================================================================

test log snippet boot parser 44

[   41.521055] ==================================================================
[   41.521928] BUG: KFENCE: use-after-free read in test_use_after_free_read+0x114/0x248
[   41.521928] 
[   41.523099] Use-after-free read at 0x000000008b14e997 (in kfence-#176):
[   41.523955]  test_use_after_free_read+0x114/0x248
[   41.524517]  kunit_try_run_case+0x14c/0x3d0
[   41.525232]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.525826]  kthread+0x318/0x618
[   41.526265]  ret_from_fork+0x10/0x20
[   41.526717] 
[   41.526892] kfence-#176: 0x000000008b14e997-0x00000000051a499e, size=32, cache=test
[   41.526892] 
[   41.527307] allocated by task 286 on cpu 1 at 41.520653s (0.006649s ago):
[   41.527814]  test_alloc+0x22c/0x620
[   41.528255]  test_use_after_free_read+0xd0/0x248
[   41.528786]  kunit_try_run_case+0x14c/0x3d0
[   41.529273]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.530098]  kthread+0x318/0x618
[   41.530739]  ret_from_fork+0x10/0x20
[   41.531378] 
[   41.531823] freed by task 286 on cpu 1 at 41.520758s (0.011050s ago):
[   41.532848]  test_use_after_free_read+0xf0/0x248
[   41.533428]  kunit_try_run_case+0x14c/0x3d0
[   41.533949]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.534692]  kthread+0x318/0x618
[   41.535224]  ret_from_fork+0x10/0x20
[   41.535765] 
[   41.536174] CPU: 1 UID: 0 PID: 286 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   41.537068] Tainted: [B]=BAD_PAGE, [N]=TEST
[   41.537688] Hardware name: linux,dummy-virt (DT)
[   41.538336] ==================================================================

test log snippet boot parser 45

[   41.417004] ==================================================================
[   41.417753] BUG: KFENCE: use-after-free read in test_use_after_free_read+0x114/0x248
[   41.417753] 
[   41.419145] Use-after-free read at 0x00000000161e7c27 (in kfence-#175):
[   41.420479]  test_use_after_free_read+0x114/0x248
[   41.421109]  kunit_try_run_case+0x14c/0x3d0
[   41.421741]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.422514]  kthread+0x318/0x618
[   41.423114]  ret_from_fork+0x10/0x20
[   41.423742] 
[   41.424147] kfence-#175: 0x00000000161e7c27-0x00000000233d0932, size=32, cache=kmalloc-32
[   41.424147] 
[   41.425210] allocated by task 284 on cpu 0 at 41.416617s (0.008582s ago):
[   41.425990]  test_alloc+0x298/0x620
[   41.426645]  test_use_after_free_read+0xd0/0x248
[   41.427370]  kunit_try_run_case+0x14c/0x3d0
[   41.428018]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.428796]  kthread+0x318/0x618
[   41.429405]  ret_from_fork+0x10/0x20
[   41.430033] 
[   41.430433] freed by task 284 on cpu 0 at 41.416749s (0.013647s ago):
[   41.431305]  test_use_after_free_read+0x1c0/0x248
[   41.432080]  kunit_try_run_case+0x14c/0x3d0
[   41.432758]  kunit_generic_run_threadfn_adapter+0x88/0x100
[   41.433399]  kthread+0x318/0x618
[   41.434024]  ret_from_fork+0x10/0x20
[   41.434658] 
[   41.435075] CPU: 0 UID: 0 PID: 284 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   41.435994] Tainted: [B]=BAD_PAGE, [N]=TEST
[   41.436628] Hardware name: linux,dummy-virt (DT)
[   41.437230] ==================================================================

test log snippet boot parser 46

------------[ cut here ]------------
[  202.399909] drm-kunit-mock-device drm_test_framebuffer_free.drm-kunit-mock-device: [drm] drm_WARN_ON(!list_empty(&fb->filp_head))
[  202.400354] WARNING: CPU: 0 PID: 2150 at drivers/gpu/drm/drm_framebuffer.c:832 drm_framebuffer_free+0x136/0x1b0
[  202.403248] Modules linked in:
[  202.403557] CPU: 0 UID: 0 PID: 2150 Comm: kunit_try_catch Tainted: G    B D W        N 6.13.0 #1
[  202.405201] Tainted: [B]=BAD_PAGE, [D]=DIE, [W]=WARN, [N]=TEST
[  202.405619] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  202.406968] RIP: 0010:drm_framebuffer_free+0x136/0x1b0
[  202.407365] Code: 8b 7d 50 4d 85 ff 74 2b 4c 89 ef e8 c4 16 82 00 48 c7 c1 60 3f db 8f 4c 89 fa 48 c7 c7 c0 3f db 8f 48 89 c6 e8 3b f1 86 fe 90 <0f> 0b 90 90 e9 25 ff ff ff 48 b8 00 00 00 00 00 fc ff df 4c 89 ea
[  202.408821] RSP: 0000:ffff888104837b68 EFLAGS: 00010282
[  202.409449] RAX: 0000000000000000 RBX: ffff888104837c40 RCX: 1ffffffff216437c
[  202.410140] RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000001
[  202.410772] RBP: ffff888104837b90 R08: 0000000000000000 R09: fffffbfff216437c
[  202.412274] R10: 0000000000000003 R11: 0000000000032468 R12: ffff888104837c18
[  202.413039] R13: ffff888104894800 R14: ffff8881048da000 R15: ffff8881045ddc80
[  202.413796] FS:  0000000000000000(0000) GS:ffff88815b000000(0000) knlGS:0000000000000000
[  202.414335] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  202.415001] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  202.415425] DR0: ffffffff91e01260 DR1: ffffffff91e01261 DR2: ffffffff91e01263
[  202.416170] DR3: ffffffff91e01265 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  202.416967] Call Trace:
[  202.417244]  <TASK>
[  202.417558]  ? show_regs+0x68/0x80
[  202.418028]  ? __warn+0xd5/0x260
[  202.418362]  ? drm_framebuffer_free+0x136/0x1b0
[  202.419163]  ? report_bug+0x278/0x2e0
[  202.419506]  ? handle_bug+0x5c/0xb0
[  202.420017]  ? exc_invalid_op+0x1c/0x50
[  202.420392]  ? asm_exc_invalid_op+0x1f/0x30
[  202.421005]  ? drm_framebuffer_free+0x136/0x1b0
[  202.421752]  ? drm_framebuffer_free+0x135/0x1b0
[  202.422227]  drm_test_framebuffer_free+0x1ac/0x610
[  202.422807]  ? __pfx_drm_test_framebuffer_free+0x10/0x10
[  202.423157]  ? __pfx_drm_mode_config_init_release+0x10/0x10
[  202.423731]  ? __pfx_drm_mode_config_init_release+0x10/0x10
[  202.424365]  ? __drmm_add_action_or_reset+0x22/0x50
[  202.424911]  ? __schedule+0xc6f/0x27d0
[  202.425404]  ? __pfx_read_tsc+0x10/0x10
[  202.425718]  ? ktime_get_ts64+0x86/0x230
[  202.426310]  kunit_try_run_case+0x1b3/0x490
[  202.427114]  ? __pfx_kunit_try_run_case+0x10/0x10
[  202.427751]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  202.428150]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  202.428716]  ? __kthread_parkme+0x82/0x160
[  202.429183]  ? preempt_count_sub+0x50/0x80
[  202.429758]  ? __pfx_kunit_try_run_case+0x10/0x10
[  202.430127]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  202.430812]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  202.431388]  kthread+0x324/0x6d0
[  202.431750]  ? trace_preempt_on+0x20/0xc0
[  202.432172]  ? __pfx_kthread+0x10/0x10
[  202.432612]  ? _raw_spin_unlock_irq+0x47/0x80
[  202.433323]  ? calculate_sigpending+0x7b/0xa0
[  202.433806]  ? __pfx_kthread+0x10/0x10
[  202.434272]  ret_from_fork+0x41/0x80
[  202.434826]  ? __pfx_kthread+0x10/0x10
[  202.435158]  ret_from_fork_asm+0x1a/0x30
[  202.435736]  </TASK>
[  202.436038] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 47

------------[ cut here ]------------
[  199.878209] WARNING: CPU: 0 PID: 1588 at drivers/gpu/drm/drm_connector.c:861 drm_connector_dynamic_register+0xbf/0x110
[  199.879175] Modules linked in:
[  199.879888] CPU: 0 UID: 0 PID: 1588 Comm: kunit_try_catch Tainted: G    B D          N 6.13.0 #1
[  199.880789] Tainted: [B]=BAD_PAGE, [D]=DIE, [N]=TEST
[  199.881226] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  199.881971] RIP: 0010:drm_connector_dynamic_register+0xbf/0x110
[  199.882755] Code: 49 8d 7c 24 58 48 89 fa 48 c1 ea 03 0f b6 04 02 84 c0 74 02 7e 36 31 c0 41 80 7c 24 58 00 75 1b 5b 41 5c 5d c3 cc cc cc cc 90 <0f> 0b 90 b8 ea ff ff ff 5b 41 5c 5d c3 cc cc cc cc 48 89 df e8 b8
[  199.884013] RSP: 0000:ffff8881020efc90 EFLAGS: 00010246
[  199.886394] RAX: dffffc0000000000 RBX: ffff8881024ea000 RCX: 0000000000000000
[  199.887786] RDX: 1ffff1102049d432 RSI: ffffffff8cfc0fa9 RDI: ffff8881024ea190
[  199.888257] RBP: ffff8881020efca0 R08: 1ffff11020062f70 R09: ffffed102041df65
[  199.888906] R10: 0000000000000003 R11: ffffffff8c556f18 R12: 0000000000000000
[  199.889597] R13: ffff8881020efd38 R14: ffff888100317c88 R15: ffff888100317c90
[  199.891124] FS:  0000000000000000(0000) GS:ffff88815b000000(0000) knlGS:0000000000000000
[  199.891594] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  199.892233] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  199.892965] DR0: ffffffff91e01260 DR1: ffffffff91e01261 DR2: ffffffff91e01263
[  199.893366] DR3: ffffffff91e01265 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  199.894274] Call Trace:
[  199.894533]  <TASK>
[  199.895046]  ? show_regs+0x68/0x80
[  199.895510]  ? __warn+0xd5/0x260
[  199.895979]  ? drm_connector_dynamic_register+0xbf/0x110
[  199.896386]  ? report_bug+0x278/0x2e0
[  199.896829]  ? handle_bug+0x5c/0xb0
[  199.897174]  ? exc_invalid_op+0x1c/0x50
[  199.897837]  ? asm_exc_invalid_op+0x1f/0x30
[  199.898491]  ? kasan_save_track+0x18/0x40
[  199.899084]  ? kunit_add_action+0x2d9/0x4a0
[  199.899786]  ? drm_connector_dynamic_register+0xbf/0x110
[  199.900362]  drm_test_drm_connector_dynamic_register_early_no_init+0x105/0x290
[  199.901294]  ? __pfx_drm_test_drm_connector_dynamic_register_early_no_init+0x10/0x10
[  199.902381]  ? __schedule+0xc6f/0x27d0
[  199.902932]  ? __pfx_read_tsc+0x10/0x10
[  199.903513]  ? ktime_get_ts64+0x86/0x230
[  199.904085]  kunit_try_run_case+0x1b3/0x490
[  199.904494]  ? __pfx_kunit_try_run_case+0x10/0x10
[  199.905115]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  199.905858]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  199.906395]  ? __kthread_parkme+0x82/0x160
[  199.906864]  ? preempt_count_sub+0x50/0x80
[  199.907354]  ? __pfx_kunit_try_run_case+0x10/0x10
[  199.907984]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  199.908829]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  199.909390]  kthread+0x324/0x6d0
[  199.909942]  ? trace_preempt_on+0x20/0xc0
[  199.910322]  ? __pfx_kthread+0x10/0x10
[  199.910922]  ? _raw_spin_unlock_irq+0x47/0x80
[  199.911314]  ? calculate_sigpending+0x7b/0xa0
[  199.911925]  ? __pfx_kthread+0x10/0x10
[  199.912391]  ret_from_fork+0x41/0x80
[  199.913010]  ? __pfx_kthread+0x10/0x10
[  199.913362]  ret_from_fork_asm+0x1a/0x30
[  199.914005]  </TASK>
[  199.914403] ---[ end trace 0000000000000000 ]---
------------[ cut here ]------------
[  200.034050] WARNING: CPU: 1 PID: 1596 at drivers/gpu/drm/drm_connector.c:861 drm_connector_dynamic_register+0xbf/0x110
[  200.035404] Modules linked in:
[  200.036007] CPU: 1 UID: 0 PID: 1596 Comm: kunit_try_catch Tainted: G    B D W        N 6.13.0 #1
[  200.037245] Tainted: [B]=BAD_PAGE, [D]=DIE, [W]=WARN, [N]=TEST
[  200.038235] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  200.039087] RIP: 0010:drm_connector_dynamic_register+0xbf/0x110
[  200.039831] Code: 49 8d 7c 24 58 48 89 fa 48 c1 ea 03 0f b6 04 02 84 c0 74 02 7e 36 31 c0 41 80 7c 24 58 00 75 1b 5b 41 5c 5d c3 cc cc cc cc 90 <0f> 0b 90 b8 ea ff ff ff 5b 41 5c 5d c3 cc cc cc cc 48 89 df e8 b8
[  200.041497] RSP: 0000:ffff88810388fc90 EFLAGS: 00010246
[  200.042086] RAX: dffffc0000000000 RBX: ffff8881034d8000 RCX: 0000000000000000
[  200.043432] RDX: 1ffff1102069b032 RSI: ffffffff8cfc0fa9 RDI: ffff8881034d8190
[  200.044292] RBP: ffff88810388fca0 R08: 1ffff11020062f70 R09: ffffed1020711f65
[  200.045009] R10: 0000000000000003 R11: ffffffff8ba044aa R12: 0000000000000000
[  200.045507] R13: ffff88810388fd38 R14: ffff888100317c88 R15: ffff888100317c90
[  200.046023] FS:  0000000000000000(0000) GS:ffff88815b100000(0000) knlGS:0000000000000000
[  200.046852] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  200.047528] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  200.048308] DR0: ffffffff91e01264 DR1: ffffffff91e01269 DR2: ffffffff91e0126a
[  200.049167] DR3: ffffffff91e0126b DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  200.049745] Call Trace:
[  200.050061]  <TASK>
[  200.050491]  ? show_regs+0x68/0x80
[  200.051275]  ? __warn+0xd5/0x260
[  200.051990]  ? drm_connector_dynamic_register+0xbf/0x110
[  200.052817]  ? report_bug+0x278/0x2e0
[  200.053296]  ? handle_bug+0x5c/0xb0
[  200.053850]  ? exc_invalid_op+0x1c/0x50
[  200.054399]  ? asm_exc_invalid_op+0x1f/0x30
[  200.054964]  ? ret_from_fork_asm+0x1a/0x30
[  200.055736]  ? kunit_add_action+0x2d9/0x4a0
[  200.056270]  ? drm_connector_dynamic_register+0xbf/0x110
[  200.057121]  drm_test_drm_connector_dynamic_register_no_init+0x105/0x290
[  200.057899]  ? __pfx_drm_test_drm_connector_dynamic_register_no_init+0x10/0x10
[  200.058788]  ? __schedule+0xc6f/0x27d0
[  200.059430]  ? __pfx_read_tsc+0x10/0x10
[  200.059877]  ? ktime_get_ts64+0x86/0x230
[  200.060509]  kunit_try_run_case+0x1b3/0x490
[  200.061250]  ? __pfx_kunit_try_run_case+0x10/0x10
[  200.061848]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  200.062441]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  200.063134]  ? __kthread_parkme+0x82/0x160
[  200.063800]  ? preempt_count_sub+0x50/0x80
[  200.064398]  ? __pfx_kunit_try_run_case+0x10/0x10
[  200.065061]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  200.066110]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  200.067144]  kthread+0x324/0x6d0
[  200.067718]  ? trace_preempt_on+0x20/0xc0
[  200.068270]  ? __pfx_kthread+0x10/0x10
[  200.068772]  ? _raw_spin_unlock_irq+0x47/0x80
[  200.069370]  ? calculate_sigpending+0x7b/0xa0
[  200.069852]  ? __pfx_kthread+0x10/0x10
[  200.070183]  ret_from_fork+0x41/0x80
[  200.071403]  ? __pfx_kthread+0x10/0x10
[  200.072210]  ret_from_fork_asm+0x1a/0x30
[  200.072858]  </TASK>
[  200.073191] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 48



test log snippet boot parser 49


------------[ cut here ]------------
[  199.878209] WARNING: CPU: 0 PID: 1588 at drivers/gpu/drm/drm_connector.c:861 drm_connector_dynamic_register+0xbf/0x110
[  199.879175] Modules linked in:
[  199.879888] CPU: 0 UID: 0 PID: 1588 Comm: kunit_try_catch Tainted: G    B D          N 6.13.0 #1
[  199.880789] Tainted: [B]=BAD_PAGE, [D]=DIE, [N]=TEST
[  199.881226] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  199.881971] RIP: 0010:drm_connector_dynamic_register+0xbf/0x110
[  199.882755] Code: 49 8d 7c 24 58 48 89 fa 48 c1 ea 03 0f b6 04 02 84 c0 74 02 7e 36 31 c0 41 80 7c 24 58 00 75 1b 5b 41 5c 5d c3 cc cc cc cc 90 <0f> 0b 90 b8 ea ff ff ff 5b 41 5c 5d c3 cc cc cc cc 48 89 df e8 b8
[  199.884013] RSP: 0000:ffff8881020efc90 EFLAGS: 00010246
[  199.886394] RAX: dffffc0000000000 RBX: ffff8881024ea000 RCX: 0000000000000000
[  199.887786] RDX: 1ffff1102049d432 RSI: ffffffff8cfc0fa9 RDI: ffff8881024ea190
[  199.888257] RBP: ffff8881020efca0 R08: 1ffff11020062f70 R09: ffffed102041df65
[  199.888906] R10: 0000000000000003 R11: ffffffff8c556f18 R12: 0000000000000000
[  199.889597] R13: ffff8881020efd38 R14: ffff888100317c88 R15: ffff888100317c90
[  199.891124] FS:  0000000000000000(0000) GS:ffff88815b000000(0000) knlGS:0000000000000000
[  199.891594] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  199.892233] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  199.892965] DR0: ffffffff91e01260 DR1: ffffffff91e01261 DR2: ffffffff91e01263
[  199.893366] DR3: ffffffff91e01265 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  199.894274] Call Trace:
[  199.894533]  <TASK>
[  199.895046]  ? show_regs+0x68/0x80
[  199.895510]  ? __warn+0xd5/0x260
[  199.895979]  ? drm_connector_dynamic_register+0xbf/0x110
[  199.896386]  ? report_bug+0x278/0x2e0
[  199.896829]  ? handle_bug+0x5c/0xb0
[  199.897174]  ? exc_invalid_op+0x1c/0x50
[  199.897837]  ? asm_exc_invalid_op+0x1f/0x30
[  199.898491]  ? kasan_save_track+0x18/0x40
[  199.899084]  ? kunit_add_action+0x2d9/0x4a0
[  199.899786]  ? drm_connector_dynamic_register+0xbf/0x110
[  199.900362]  drm_test_drm_connector_dynamic_register_early_no_init+0x105/0x290
[  199.901294]  ? __pfx_drm_test_drm_connector_dynamic_register_early_no_init+0x10/0x10
[  199.902381]  ? __schedule+0xc6f/0x27d0
[  199.902932]  ? __pfx_read_tsc+0x10/0x10
[  199.903513]  ? ktime_get_ts64+0x86/0x230
[  199.904085]  kunit_try_run_case+0x1b3/0x490
[  199.904494]  ? __pfx_kunit_try_run_case+0x10/0x10
[  199.905115]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  199.905858]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  199.906395]  ? __kthread_parkme+0x82/0x160
[  199.906864]  ? preempt_count_sub+0x50/0x80
[  199.907354]  ? __pfx_kunit_try_run_case+0x10/0x10
[  199.907984]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  199.908829]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  199.909390]  kthread+0x324/0x6d0
[  199.909942]  ? trace_preempt_on+0x20/0xc0
[  199.910322]  ? __pfx_kthread+0x10/0x10
[  199.910922]  ? _raw_spin_unlock_irq+0x47/0x80
[  199.911314]  ? calculate_sigpending+0x7b/0xa0
[  199.911925]  ? __pfx_kthread+0x10/0x10
[  199.912391]  ret_from_fork+0x41/0x80
[  199.913010]  ? __pfx_kthread+0x10/0x10
[  199.913362]  ret_from_fork_asm+0x1a/0x30
[  199.914005]  </TASK>
[  199.914403] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 50


------------[ cut here ]------------
[  202.467596] WARNING: CPU: 1 PID: 2154 at drivers/gpu/drm/drm_framebuffer.c:867 drm_framebuffer_init+0x44/0x300
[  202.468460] Modules linked in:
[  202.469425] CPU: 1 UID: 0 PID: 2154 Comm: kunit_try_catch Tainted: G    B D W        N 6.13.0 #1
[  202.470546] Tainted: [B]=BAD_PAGE, [D]=DIE, [W]=WARN, [N]=TEST
[  202.471353] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  202.472729] RIP: 0010:drm_framebuffer_init+0x44/0x300
[  202.473188] Code: 56 49 89 d6 48 89 f2 41 55 48 c1 ea 03 41 54 49 89 fc 53 48 89 f3 48 83 ec 18 80 3c 02 00 0f 85 00 02 00 00 4c 39 23 74 20 90 <0f> 0b 90 41 bd ea ff ff ff 48 83 c4 18 44 89 e8 5b 41 5c 41 5d 41
[  202.474928] RSP: 0000:ffff888104837b30 EFLAGS: 00010246
[  202.475335] RAX: dffffc0000000000 RBX: ffff888104837c28 RCX: 0000000000000000
[  202.476442] RDX: 1ffff11020906f8e RSI: ffff888104837c28 RDI: ffff888104837c70
[  202.477371] RBP: ffff888104837b70 R08: ffff888103a03000 R09: ffffffff8fdb8f60
[  202.478069] R10: 0000000000000003 R11: 0000000029326fba R12: ffff888103a03000
[  202.478632] R13: ffff888100317b20 R14: ffff888104837ba8 R15: ffff888104837db8
[  202.479031] FS:  0000000000000000(0000) GS:ffff88815b100000(0000) knlGS:0000000000000000
[  202.479727] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  202.480065] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  202.480938] DR0: ffffffff91e01264 DR1: ffffffff91e01269 DR2: ffffffff91e0126a
[  202.481765] DR3: ffffffff91e0126b DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  202.482348] Call Trace:
[  202.482562]  <TASK>
[  202.483044]  ? show_regs+0x68/0x80
[  202.483753]  ? __warn+0xd5/0x260
[  202.484184]  ? drm_framebuffer_init+0x44/0x300
[  202.484871]  ? report_bug+0x278/0x2e0
[  202.485343]  ? handle_bug+0x5c/0xb0
[  202.485939]  ? exc_invalid_op+0x1c/0x50
[  202.486339]  ? asm_exc_invalid_op+0x1f/0x30
[  202.487082]  ? drm_framebuffer_init+0x44/0x300
[  202.487539]  ? add_dr+0xc1/0x1d0
[  202.488077]  drm_test_framebuffer_init_bad_format+0xfd/0x240
[  202.488836]  ? add_dr+0x148/0x1d0
[  202.489320]  ? __pfx_drm_test_framebuffer_init_bad_format+0x10/0x10
[  202.489998]  ? __drmm_add_action+0x1a4/0x280
[  202.490553]  ? __pfx_drm_mode_config_init_release+0x10/0x10
[  202.491259]  ? __pfx_drm_mode_config_init_release+0x10/0x10
[  202.492225]  ? __drmm_add_action_or_reset+0x22/0x50
[  202.492945]  ? __schedule+0xc6f/0x27d0
[  202.493404]  ? __pfx_read_tsc+0x10/0x10
[  202.493884]  ? ktime_get_ts64+0x86/0x230
[  202.494427]  kunit_try_run_case+0x1b3/0x490
[  202.495066]  ? __pfx_kunit_try_run_case+0x10/0x10
[  202.495796]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  202.496288]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  202.496865]  ? __kthread_parkme+0x82/0x160
[  202.497391]  ? preempt_count_sub+0x50/0x80
[  202.498785]  ? __pfx_kunit_try_run_case+0x10/0x10
[  202.499146]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  202.499909]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  202.500737]  kthread+0x324/0x6d0
[  202.501126]  ? trace_preempt_on+0x20/0xc0
[  202.501761]  ? __pfx_kthread+0x10/0x10
[  202.502166]  ? _raw_spin_unlock_irq+0x47/0x80
[  202.502927]  ? calculate_sigpending+0x7b/0xa0
[  202.503476]  ? __pfx_kthread+0x10/0x10
[  202.504042]  ret_from_fork+0x41/0x80
[  202.504404]  ? __pfx_kthread+0x10/0x10
[  202.505033]  ret_from_fork_asm+0x1a/0x30
[  202.505535]  </TASK>
[  202.505987] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 51


------------[ cut here ]------------
[  203.964475] WARNING: CPU: 0 PID: 2358 at drivers/gpu/drm/drm_rect.c:137 drm_rect_calc_hscale+0x125/0x190
[  203.966359] Modules linked in:
[  203.966790] CPU: 0 UID: 0 PID: 2358 Comm: kunit_try_catch Tainted: G    B D W        N 6.13.0 #1
[  203.967388] Tainted: [B]=BAD_PAGE, [D]=DIE, [W]=WARN, [N]=TEST
[  203.968442] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  203.969257] RIP: 0010:drm_rect_calc_hscale+0x125/0x190
[  203.970140] Code: 7f 43 41 39 ce 7c 3e 48 83 c4 08 89 c8 5b 41 5c 41 5d 41 5e 5d c3 cc cc cc cc 8d 44 08 ff 99 f7 f9 89 c1 85 c0 79 d6 eb de 90 <0f> 0b 90 b9 ea ff ff ff 48 83 c4 08 5b 89 c8 41 5c 41 5d 41 5e 5d
[  203.971716] RSP: 0000:ffff888105017c78 EFLAGS: 00010286
[  203.972129] RAX: 0000000000010000 RBX: 00000000ffff0000 RCX: 00000000ffff0000
[  203.973118] RDX: 0000000000000003 RSI: 0000000000000000 RDI: ffffffff8fe0fdb8
[  203.973839] RBP: ffff888105017ca0 R08: 0000000000000000 R09: ffffed1020a04640
[  203.975047] R10: ffff888105023207 R11: 0000000000000000 R12: ffffffff8fe0fda0
[  203.976061] R13: 0000000000000000 R14: 000000007fffffff R15: ffff888105017d38
[  203.977036] FS:  0000000000000000(0000) GS:ffff88815b000000(0000) knlGS:0000000000000000
[  203.978070] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  203.978506] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  203.979147] DR0: ffffffff91e01260 DR1: ffffffff91e01261 DR2: ffffffff91e01263
[  203.980148] DR3: ffffffff91e01265 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  203.980710] Call Trace:
[  203.981177]  <TASK>
[  203.981489]  ? show_regs+0x68/0x80
[  203.982103]  ? __warn+0xd5/0x260
[  203.982525]  ? drm_rect_calc_hscale+0x125/0x190
[  203.983193]  ? report_bug+0x278/0x2e0
[  203.984180]  ? handle_bug+0x5c/0xb0
[  203.984732]  ? exc_invalid_op+0x1c/0x50
[  203.985279]  ? asm_exc_invalid_op+0x1f/0x30
[  203.985925]  ? drm_rect_calc_hscale+0x125/0x190
[  203.986825]  drm_test_rect_calc_hscale+0x109/0x270
[  203.987176]  ? __pfx_drm_test_rect_calc_hscale+0x10/0x10
[  203.987711]  ? __schedule+0xc6f/0x27d0
[  203.988611]  ? __pfx_read_tsc+0x10/0x10
[  203.989709]  ? ktime_get_ts64+0x86/0x230
[  203.990285]  kunit_try_run_case+0x1b3/0x490
[  203.990757]  ? __pfx_kunit_try_run_case+0x10/0x10
[  203.991514]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  203.992024]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  203.993052]  ? __kthread_parkme+0x82/0x160
[  203.993507]  ? preempt_count_sub+0x50/0x80
[  203.994011]  ? __pfx_kunit_try_run_case+0x10/0x10
[  203.994866]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  203.995768]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  203.997065]  kthread+0x324/0x6d0
[  203.997492]  ? trace_preempt_on+0x20/0xc0
[  203.998076]  ? __pfx_kthread+0x10/0x10
[  203.998633]  ? _raw_spin_unlock_irq+0x47/0x80
[  203.999110]  ? calculate_sigpending+0x7b/0xa0
[  204.000101]  ? __pfx_kthread+0x10/0x10
[  204.000602]  ret_from_fork+0x41/0x80
[  204.001554]  ? __pfx_kthread+0x10/0x10
[  204.002373]  ret_from_fork_asm+0x1a/0x30
[  204.002697]  </TASK>
[  204.003253] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 52


------------[ cut here ]------------
[  203.920475] WARNING: CPU: 0 PID: 2356 at drivers/gpu/drm/drm_rect.c:137 drm_rect_calc_hscale+0x125/0x190
[  203.921048] Modules linked in:
[  203.922488] CPU: 0 UID: 0 PID: 2356 Comm: kunit_try_catch Tainted: G    B D W        N 6.13.0 #1
[  203.923360] Tainted: [B]=BAD_PAGE, [D]=DIE, [W]=WARN, [N]=TEST
[  203.923971] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  203.925158] RIP: 0010:drm_rect_calc_hscale+0x125/0x190
[  203.925598] Code: 7f 43 41 39 ce 7c 3e 48 83 c4 08 89 c8 5b 41 5c 41 5d 41 5e 5d c3 cc cc cc cc 8d 44 08 ff 99 f7 f9 89 c1 85 c0 79 d6 eb de 90 <0f> 0b 90 b9 ea ff ff ff 48 83 c4 08 5b 89 c8 41 5c 41 5d 41 5e 5d
[  203.927082] RSP: 0000:ffff888104f2fc78 EFLAGS: 00010286
[  203.927945] RAX: 00000000ffff0000 RBX: 00000000ffff0000 RCX: 0000000000010000
[  203.928574] RDX: 0000000000000003 RSI: 0000000000000000 RDI: ffffffff8fe0fd80
[  203.929931] RBP: ffff888104f2fca0 R08: 0000000000000000 R09: ffffed1020a04600
[  203.930378] R10: ffff888105023007 R11: 0000000000000000 R12: ffffffff8fe0fd68
[  203.931121] R13: 0000000000000000 R14: 000000007fffffff R15: ffff888104f2fd38
[  203.932062] FS:  0000000000000000(0000) GS:ffff88815b000000(0000) knlGS:0000000000000000
[  203.932992] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  203.933739] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  203.934362] DR0: ffffffff91e01260 DR1: ffffffff91e01261 DR2: ffffffff91e01263
[  203.935082] DR3: ffffffff91e01265 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  203.936141] Call Trace:
[  203.936455]  <TASK>
[  203.936929]  ? show_regs+0x68/0x80
[  203.937339]  ? __warn+0xd5/0x260
[  203.938079]  ? drm_rect_calc_hscale+0x125/0x190
[  203.939229]  ? report_bug+0x278/0x2e0
[  203.939623]  ? handle_bug+0x5c/0xb0
[  203.940067]  ? exc_invalid_op+0x1c/0x50
[  203.940973]  ? asm_exc_invalid_op+0x1f/0x30
[  203.941524]  ? drm_rect_calc_hscale+0x125/0x190
[  203.942179]  drm_test_rect_calc_hscale+0x109/0x270
[  203.942881]  ? __pfx_drm_test_rect_calc_hscale+0x10/0x10
[  203.943343]  ? irqentry_exit+0x2a/0x60
[  203.943812]  ? sysvec_apic_timer_interrupt+0x50/0x90
[  203.944232]  ? trace_hardirqs_on+0x37/0xe0
[  203.944698]  ? __pfx_read_tsc+0x10/0x10
[  203.945046]  ? ktime_get_ts64+0x86/0x230
[  203.945689]  kunit_try_run_case+0x1b3/0x490
[  203.946235]  ? __pfx_kunit_try_run_case+0x10/0x10
[  203.946788]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  203.947305]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  203.947896]  ? __kthread_parkme+0x82/0x160
[  203.948397]  ? preempt_count_sub+0x50/0x80
[  203.948878]  ? __pfx_kunit_try_run_case+0x10/0x10
[  203.949370]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  203.950158]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  203.951006]  kthread+0x324/0x6d0
[  203.951439]  ? trace_preempt_on+0x20/0xc0
[  203.951935]  ? __pfx_kthread+0x10/0x10
[  203.952474]  ? _raw_spin_unlock_irq+0x47/0x80
[  203.953126]  ? calculate_sigpending+0x7b/0xa0
[  203.954992]  ? __pfx_kthread+0x10/0x10
[  203.955514]  ret_from_fork+0x41/0x80
[  203.956426]  ? __pfx_kthread+0x10/0x10
[  203.957084]  ret_from_fork_asm+0x1a/0x30
[  203.957837]  </TASK>
[  203.958167] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 53


------------[ cut here ]------------
[  204.074482] WARNING: CPU: 0 PID: 2370 at drivers/gpu/drm/drm_rect.c:137 drm_rect_calc_vscale+0x130/0x190
[  204.075015] Modules linked in:
[  204.075347] CPU: 0 UID: 0 PID: 2370 Comm: kunit_try_catch Tainted: G    B D W        N 6.13.0 #1
[  204.076267] Tainted: [B]=BAD_PAGE, [D]=DIE, [W]=WARN, [N]=TEST
[  204.076615] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  204.077741] RIP: 0010:drm_rect_calc_vscale+0x130/0x190
[  204.078389] Code: 7f 43 41 39 ce 7c 3e 48 83 c4 08 89 c8 5b 41 5c 41 5d 41 5e 5d c3 cc cc cc cc 8d 44 08 ff 99 f7 f9 89 c1 85 c0 79 d6 eb de 90 <0f> 0b 90 b9 ea ff ff ff 48 83 c4 08 5b 89 c8 41 5c 41 5d 41 5e 5d
[  204.080017] RSP: 0000:ffff888104d97c78 EFLAGS: 00010286
[  204.080475] RAX: 0000000000010000 RBX: 00000000ffff0000 RCX: 00000000ffff0000
[  204.081884] RDX: 0000000000000007 RSI: 0000000000000000 RDI: ffffffff8fe0fdb4
[  204.082359] RBP: ffff888104d97ca0 R08: 0000000000000000 R09: ffffed1020a04740
[  204.083548] R10: ffff888105023a07 R11: 0000000000000000 R12: ffffffff8fe0fda0
[  204.084109] R13: 0000000000000000 R14: 000000007fffffff R15: ffff888104d97d38
[  204.085093] FS:  0000000000000000(0000) GS:ffff88815b000000(0000) knlGS:0000000000000000
[  204.085918] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  204.086825] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  204.087476] DR0: ffffffff91e01260 DR1: ffffffff91e01261 DR2: ffffffff91e01263
[  204.088984] DR3: ffffffff91e01265 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  204.089813] Call Trace:
[  204.089978]  <TASK>
[  204.090099]  ? show_regs+0x68/0x80
[  204.090355]  ? __warn+0xd5/0x260
[  204.090648]  ? drm_rect_calc_vscale+0x130/0x190
[  204.091460]  ? report_bug+0x278/0x2e0
[  204.092255]  ? handle_bug+0x5c/0xb0
[  204.092894]  ? exc_invalid_op+0x1c/0x50
[  204.093371]  ? asm_exc_invalid_op+0x1f/0x30
[  204.094354]  ? drm_rect_calc_vscale+0x130/0x190
[  204.095032]  drm_test_rect_calc_vscale+0x109/0x270
[  204.096045]  ? __pfx_drm_test_rect_calc_vscale+0x10/0x10
[  204.096470]  ? __schedule+0xc6f/0x27d0
[  204.097255]  ? __pfx_read_tsc+0x10/0x10
[  204.098078]  ? ktime_get_ts64+0x86/0x230
[  204.098683]  kunit_try_run_case+0x1b3/0x490
[  204.099946]  ? __pfx_kunit_try_run_case+0x10/0x10
[  204.100360]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  204.100663]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  204.101963]  ? __kthread_parkme+0x82/0x160
[  204.102382]  ? preempt_count_sub+0x50/0x80
[  204.103468]  ? __pfx_kunit_try_run_case+0x10/0x10
[  204.104128]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  204.104880]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  204.105876]  kthread+0x324/0x6d0
[  204.106186]  ? trace_preempt_on+0x20/0xc0
[  204.106879]  ? __pfx_kthread+0x10/0x10
[  204.107497]  ? _raw_spin_unlock_irq+0x47/0x80
[  204.108292]  ? calculate_sigpending+0x7b/0xa0
[  204.108891]  ? __pfx_kthread+0x10/0x10
[  204.109182]  ret_from_fork+0x41/0x80
[  204.109385]  ? __pfx_kthread+0x10/0x10
[  204.109681]  ret_from_fork_asm+0x1a/0x30
[  204.110291]  </TASK>
[  204.111454] ---[ end trace 0000000000000000 ]---
---
------------[ cut here ]------------
[  204.029368] WARNING: CPU: 0 PID: 2368 at drivers/gpu/drm/drm_rect.c:137 drm_rect_calc_vscale+0x130/0x190
[  204.031387] Modules linked in:
[  204.031948] CPU: 0 UID: 0 PID: 2368 Comm: kunit_try_catch Tainted: G    B D W        N 6.13.0 #1
[  204.032711] Tainted: [B]=BAD_PAGE, [D]=DIE, [W]=WARN, [N]=TEST
[  204.033562] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  204.034751] RIP: 0010:drm_rect_calc_vscale+0x130/0x190
[  204.035185] Code: 7f 43 41 39 ce 7c 3e 48 83 c4 08 89 c8 5b 41 5c 41 5d 41 5e 5d c3 cc cc cc cc 8d 44 08 ff 99 f7 f9 89 c1 85 c0 79 d6 eb de 90 <0f> 0b 90 b9 ea ff ff ff 48 83 c4 08 5b 89 c8 41 5c 41 5d 41 5e 5d
[  204.037005] RSP: 0000:ffff888104f2fc78 EFLAGS: 00010286
[  204.037453] RAX: 00000000ffff0000 RBX: 00000000ffff0000 RCX: 0000000000010000
[  204.038000] RDX: 0000000000000007 RSI: 0000000000000000 RDI: ffffffff8fe0fd7c
[  204.038798] RBP: ffff888104f2fca0 R08: 0000000000000000 R09: ffffed1020a04720
[  204.039472] R10: ffff888105023907 R11: 0000000000000000 R12: ffffffff8fe0fd68
[  204.040052] R13: 0000000000000000 R14: 000000007fffffff R15: ffff888104f2fd38
[  204.040982] FS:  0000000000000000(0000) GS:ffff88815b000000(0000) knlGS:0000000000000000
[  204.041446] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  204.042061] CR2: 00007ffff7ffe000 CR3: 00000000566b8000 CR4: 00000000000006f0
[  204.043030] DR0: ffffffff91e01260 DR1: ffffffff91e01261 DR2: ffffffff91e01263
[  204.043476] DR3: ffffffff91e01265 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[  204.044025] Call Trace:
[  204.044329]  <TASK>
[  204.045140]  ? show_regs+0x68/0x80
[  204.045568]  ? __warn+0xd5/0x260
[  204.046718]  ? drm_rect_calc_vscale+0x130/0x190
[  204.047097]  ? report_bug+0x278/0x2e0
[  204.047548]  ? handle_bug+0x5c/0xb0
[  204.048737]  ? exc_invalid_op+0x1c/0x50
[  204.049339]  ? asm_exc_invalid_op+0x1f/0x30
[  204.050009]  ? drm_rect_calc_vscale+0x130/0x190
[  204.050425]  drm_test_rect_calc_vscale+0x109/0x270
[  204.051100]  ? __pfx_drm_test_rect_calc_vscale+0x10/0x10
[  204.051471]  ? __schedule+0xc6f/0x27d0
[  204.052470]  ? __pfx_read_tsc+0x10/0x10
[  204.052789]  ? ktime_get_ts64+0x86/0x230
[  204.053549]  kunit_try_run_case+0x1b3/0x490
[  204.054107]  ? __pfx_kunit_try_run_case+0x10/0x10
[  204.054493]  ? _raw_spin_lock_irqsave+0xa2/0x110
[  204.055193]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[  204.056037]  ? __kthread_parkme+0x82/0x160
[  204.056554]  ? preempt_count_sub+0x50/0x80
[  204.057930]  ? __pfx_kunit_try_run_case+0x10/0x10
[  204.058311]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[  204.059725]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[  204.060172]  kthread+0x324/0x6d0
[  204.060611]  ? trace_preempt_on+0x20/0xc0
[  204.060912]  ? __pfx_kthread+0x10/0x10
[  204.061487]  ? _raw_spin_unlock_irq+0x47/0x80
[  204.062433]  ? calculate_sigpending+0x7b/0xa0
[  204.063467]  ? __pfx_kthread+0x10/0x10
[  204.063974]  ret_from_fork+0x41/0x80
[  204.064381]  ? __pfx_kthread+0x10/0x10
[  204.066344]  ret_from_fork_asm+0x1a/0x30
[  204.066744]  </TASK>
[  204.067054] ---[ end trace 0000000000000000 ]---

test log snippet boot parser 54


[   25.134890] ==================================================================
[   25.136305] BUG: KASAN: alloca-out-of-bounds in kasan_alloca_oob_left+0x31f/0x380
[   25.137131] Read of size 1 at addr ffff888102ddfc3f by task kunit_try_catch/259
[   25.138174] 
[   25.138814] CPU: 0 UID: 0 PID: 259 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.138936] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.138973] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.139027] Call Trace:
[   25.139059]  <TASK>
[   25.139097]  dump_stack_lvl+0x73/0xb0
[   25.139274]  print_report+0xd1/0x640
[   25.139351]  ? __virt_addr_valid+0x1db/0x2d0
[   25.139478]  ? kasan_addr_to_slab+0x11/0xa0
[   25.139534]  kasan_report+0x102/0x140
[   25.139566]  ? kasan_alloca_oob_left+0x31f/0x380
[   25.139623]  ? kasan_alloca_oob_left+0x31f/0x380
[   25.139690]  __asan_report_load1_noabort+0x18/0x20
[   25.139730]  kasan_alloca_oob_left+0x31f/0x380
[   25.139767]  ? finish_task_switch.isra.0+0x153/0x700
[   25.139801]  ? insn_decode+0x90/0x3e0
[   25.139836]  ? trace_hardirqs_on+0x37/0xe0
[   25.139871]  ? __pfx_kasan_alloca_oob_left+0x10/0x10
[   25.139908]  ? __schedule+0xc6f/0x27d0
[   25.139939]  ? __pfx_read_tsc+0x10/0x10
[   25.139969]  ? ktime_get_ts64+0x86/0x230
[   25.140001]  kunit_try_run_case+0x1b3/0x490
[   25.140037]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.140069]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.140101]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.140131]  ? __kthread_parkme+0x82/0x160
[   25.140161]  ? preempt_count_sub+0x50/0x80
[   25.140201]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.140259]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.140308]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.140344]  kthread+0x324/0x6d0
[   25.140374]  ? trace_preempt_on+0x20/0xc0
[   25.140429]  ? __pfx_kthread+0x10/0x10
[   25.140459]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.140497]  ? calculate_sigpending+0x7b/0xa0
[   25.140535]  ? __pfx_kthread+0x10/0x10
[   25.140571]  ret_from_fork+0x41/0x80
[   25.140628]  ? __pfx_kthread+0x10/0x10
[   25.140682]  ret_from_fork_asm+0x1a/0x30
[   25.140756]  </TASK>
[   25.140781] 
[   25.158869] The buggy address belongs to stack of task kunit_try_catch/259
[   25.159429] 
[   25.159601] The buggy address belongs to the physical page:
[   25.159916] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102ddf
[   25.160555] flags: 0x200000000000000(node=0|zone=2)
[   25.161246] raw: 0200000000000000 ffffea00040b77c8 ffffea00040b77c8 0000000000000000
[   25.162083] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[   25.163080] page dumped because: kasan: bad access detected
[   25.163729] 
[   25.163901] Memory state around the buggy address:
[   25.164393]  ffff888102ddfb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.165174]  ffff888102ddfb80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.165980] >ffff888102ddfc00: 00 00 00 00 ca ca ca ca 00 02 cb cb cb cb cb cb
[   25.166715]                                         ^
[   25.167153]  ffff888102ddfc80: 00 00 00 f1 f1 f1 f1 01 f2 04 f2 00 f2 f2 f2 00
[   25.168155]  ffff888102ddfd00: 00 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 f1
[   25.168968] ==================================================================

test log snippet boot parser 55


[   25.176055] ==================================================================
[   25.176842] BUG: KASAN: alloca-out-of-bounds in kasan_alloca_oob_right+0x328/0x390
[   25.178057] Read of size 1 at addr ffff888102defc4a by task kunit_try_catch/261
[   25.179189] 
[   25.179546] CPU: 0 UID: 0 PID: 261 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.179636] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.179668] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.179724] Call Trace:
[   25.179755]  <TASK>
[   25.179786]  dump_stack_lvl+0x73/0xb0
[   25.179908]  print_report+0xd1/0x640
[   25.179985]  ? __virt_addr_valid+0x1db/0x2d0
[   25.180039]  ? kasan_addr_to_slab+0x11/0xa0
[   25.180074]  kasan_report+0x102/0x140
[   25.180108]  ? kasan_alloca_oob_right+0x328/0x390
[   25.180144]  ? kasan_alloca_oob_right+0x328/0x390
[   25.180191]  __asan_report_load1_noabort+0x18/0x20
[   25.180290]  kasan_alloca_oob_right+0x328/0x390
[   25.180363]  ? finish_task_switch.isra.0+0x153/0x700
[   25.180438]  ? insn_decode+0x90/0x3e0
[   25.180476]  ? trace_hardirqs_on+0x37/0xe0
[   25.180509]  ? __pfx_kasan_alloca_oob_right+0x10/0x10
[   25.180548]  ? __schedule+0xc6f/0x27d0
[   25.180590]  ? __pfx_read_tsc+0x10/0x10
[   25.180686]  ? ktime_get_ts64+0x86/0x230
[   25.180736]  kunit_try_run_case+0x1b3/0x490
[   25.180774]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.180805]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.180838]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.180871]  ? __kthread_parkme+0x82/0x160
[   25.180900]  ? preempt_count_sub+0x50/0x80
[   25.180932]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.180965]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.180998]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.181031]  kthread+0x324/0x6d0
[   25.181060]  ? trace_preempt_on+0x20/0xc0
[   25.181089]  ? __pfx_kthread+0x10/0x10
[   25.181119]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.181150]  ? calculate_sigpending+0x7b/0xa0
[   25.181182]  ? __pfx_kthread+0x10/0x10
[   25.181249]  ret_from_fork+0x41/0x80
[   25.181280]  ? __pfx_kthread+0x10/0x10
[   25.181310]  ret_from_fork_asm+0x1a/0x30
[   25.181354]  </TASK>
[   25.181370] 
[   25.199022] The buggy address belongs to stack of task kunit_try_catch/261
[   25.199945] 
[   25.200202] The buggy address belongs to the physical page:
[   25.200891] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102def
[   25.201474] flags: 0x200000000000000(node=0|zone=2)
[   25.202125] raw: 0200000000000000 ffffea00040b7bc8 ffffea00040b7bc8 0000000000000000
[   25.202586] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[   25.202991] page dumped because: kasan: bad access detected
[   25.203748] 
[   25.203981] Memory state around the buggy address:
[   25.204459]  ffff888102defb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.205264]  ffff888102defb80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.206064] >ffff888102defc00: 00 00 00 00 ca ca ca ca 00 02 cb cb cb cb cb cb
[   25.206813]                                               ^
[   25.207367]  ffff888102defc80: 00 00 00 f1 f1 f1 f1 01 f2 04 f2 00 f2 f2 f2 00
[   25.208069]  ffff888102defd00: 00 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 f1
[   25.208481] ==================================================================

test log snippet boot parser 56


[   23.021291] ==================================================================
[   23.021997] BUG: KASAN: double-free in kfree_sensitive+0x2e/0x90
[   23.022496] Free of addr ffff8881012e6740 by task kunit_try_catch/200
[   23.023890] 
[   23.024560] CPU: 0 UID: 0 PID: 200 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.024871] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.024906] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.024963] Call Trace:
[   23.025004]  <TASK>
[   23.025044]  dump_stack_lvl+0x73/0xb0
[   23.025120]  print_report+0xd1/0x640
[   23.025294]  ? __virt_addr_valid+0x1db/0x2d0
[   23.025384]  ? kfree_sensitive+0x2e/0x90
[   23.025462]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.025522]  ? kfree_sensitive+0x2e/0x90
[   23.025563]  kasan_report_invalid_free+0xc0/0xf0
[   23.025676]  ? kfree_sensitive+0x2e/0x90
[   23.025737]  ? kfree_sensitive+0x2e/0x90
[   23.025770]  check_slab_allocation+0x101/0x130
[   23.025805]  __kasan_slab_pre_free+0x28/0x40
[   23.025835]  kfree+0xf1/0x3f0
[   23.025862]  ? kfree_sensitive+0x2e/0x90
[   23.025895]  kfree_sensitive+0x2e/0x90
[   23.025926]  kmalloc_double_kzfree+0x19d/0x360
[   23.025959]  ? __pfx_kmalloc_double_kzfree+0x10/0x10
[   23.025992]  ? __schedule+0xc6f/0x27d0
[   23.026024]  ? __pfx_read_tsc+0x10/0x10
[   23.026051]  ? ktime_get_ts64+0x86/0x230
[   23.026085]  kunit_try_run_case+0x1b3/0x490
[   23.026118]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.026148]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.026179]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.026246]  ? __kthread_parkme+0x82/0x160
[   23.026274]  ? preempt_count_sub+0x50/0x80
[   23.026307]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.026339]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.026378]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.026426]  kthread+0x324/0x6d0
[   23.026455]  ? trace_preempt_on+0x20/0xc0
[   23.026485]  ? __pfx_kthread+0x10/0x10
[   23.026514]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.026544]  ? calculate_sigpending+0x7b/0xa0
[   23.026579]  ? __pfx_kthread+0x10/0x10
[   23.026661]  ret_from_fork+0x41/0x80
[   23.026710]  ? __pfx_kthread+0x10/0x10
[   23.026742]  ret_from_fork_asm+0x1a/0x30
[   23.026785]  </TASK>
[   23.026801] 
[   23.046663] Allocated by task 200:
[   23.047208]  kasan_save_stack+0x3d/0x60
[   23.047859]  kasan_save_track+0x18/0x40
[   23.048357]  kasan_save_alloc_info+0x3b/0x50
[   23.049393]  __kasan_kmalloc+0xb7/0xc0
[   23.050053]  __kmalloc_cache_noprof+0x184/0x410
[   23.050556]  kmalloc_double_kzfree+0xaa/0x360
[   23.051223]  kunit_try_run_case+0x1b3/0x490
[   23.051980]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.052394]  kthread+0x324/0x6d0
[   23.052731]  ret_from_fork+0x41/0x80
[   23.053642]  ret_from_fork_asm+0x1a/0x30
[   23.054013] 
[   23.054172] Freed by task 200:
[   23.054491]  kasan_save_stack+0x3d/0x60
[   23.055166]  kasan_save_track+0x18/0x40
[   23.056032]  kasan_save_free_info+0x3f/0x60
[   23.056537]  __kasan_slab_free+0x56/0x70
[   23.057164]  kfree+0x123/0x3f0
[   23.057570]  kfree_sensitive+0x67/0x90
[   23.058421]  kmalloc_double_kzfree+0x12c/0x360
[   23.058850]  kunit_try_run_case+0x1b3/0x490
[   23.059539]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.060153]  kthread+0x324/0x6d0
[   23.060647]  ret_from_fork+0x41/0x80
[   23.061111]  ret_from_fork_asm+0x1a/0x30
[   23.061993] 
[   23.062167] The buggy address belongs to the object at ffff8881012e6740
[   23.062167]  which belongs to the cache kmalloc-16 of size 16
[   23.063007] The buggy address is located 0 bytes inside of
[   23.063007]  16-byte region [ffff8881012e6740, ffff8881012e6750)
[   23.064414] 
[   23.064657] The buggy address belongs to the physical page:
[   23.065051] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   23.066472] flags: 0x200000000000000(node=0|zone=2)
[   23.067250] page_type: f5(slab)
[   23.067852] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   23.069007] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   23.069566] page dumped because: kasan: bad access detected
[   23.070293] 
[   23.070738] Memory state around the buggy address:
[   23.071353]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   23.072240]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   23.073035] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc fc fc fc fc
[   23.074030]                                            ^
[   23.074572]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.075420]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.076321] ==================================================================

test log snippet boot parser 57


[   23.591283] ==================================================================
[   23.592201] BUG: KASAN: double-free in kmem_cache_double_free+0x1e6/0x490
[   23.592636] Free of addr ffff888102a61000 by task kunit_try_catch/217
[   23.594108] 
[   23.594470] CPU: 1 UID: 0 PID: 217 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.595019] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.595062] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.595114] Call Trace:
[   23.595145]  <TASK>
[   23.595212]  dump_stack_lvl+0x73/0xb0
[   23.595299]  print_report+0xd1/0x640
[   23.595339]  ? __virt_addr_valid+0x1db/0x2d0
[   23.595375]  ? kmem_cache_double_free+0x1e6/0x490
[   23.595421]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.595454]  ? kmem_cache_double_free+0x1e6/0x490
[   23.595485]  kasan_report_invalid_free+0xc0/0xf0
[   23.595520]  ? kmem_cache_double_free+0x1e6/0x490
[   23.595552]  ? kmem_cache_double_free+0x1e6/0x490
[   23.595612]  check_slab_allocation+0x101/0x130
[   23.595646]  __kasan_slab_pre_free+0x28/0x40
[   23.595676]  kmem_cache_free+0xee/0x420
[   23.595706]  ? kmem_cache_alloc_noprof+0x11e/0x3e0
[   23.595736]  ? kmem_cache_double_free+0x1e6/0x490
[   23.595769]  kmem_cache_double_free+0x1e6/0x490
[   23.595798]  ? __pfx_kmem_cache_double_free+0x10/0x10
[   23.595826]  ? finish_task_switch.isra.0+0x153/0x700
[   23.595857]  ? __switch_to+0x5d9/0xf60
[   23.595895]  ? __pfx_read_tsc+0x10/0x10
[   23.595922]  ? ktime_get_ts64+0x86/0x230
[   23.595955]  kunit_try_run_case+0x1b3/0x490
[   23.595989]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.596019]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.596051]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.596081]  ? __kthread_parkme+0x82/0x160
[   23.596109]  ? preempt_count_sub+0x50/0x80
[   23.596140]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.596172]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.596319]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.596353]  kthread+0x324/0x6d0
[   23.596383]  ? trace_preempt_on+0x20/0xc0
[   23.596432]  ? __pfx_kthread+0x10/0x10
[   23.596462]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.596491]  ? calculate_sigpending+0x7b/0xa0
[   23.596524]  ? __pfx_kthread+0x10/0x10
[   23.596554]  ret_from_fork+0x41/0x80
[   23.596678]  ? __pfx_kthread+0x10/0x10
[   23.596715]  ret_from_fork_asm+0x1a/0x30
[   23.596760]  </TASK>
[   23.596775] 
[   23.618389] Allocated by task 217:
[   23.618665]  kasan_save_stack+0x3d/0x60
[   23.619019]  kasan_save_track+0x18/0x40
[   23.619424]  kasan_save_alloc_info+0x3b/0x50
[   23.620053]  __kasan_slab_alloc+0x91/0xa0
[   23.620697]  kmem_cache_alloc_noprof+0x11e/0x3e0
[   23.621057]  kmem_cache_double_free+0x150/0x490
[   23.621933]  kunit_try_run_case+0x1b3/0x490
[   23.622648]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.623612]  kthread+0x324/0x6d0
[   23.624228]  ret_from_fork+0x41/0x80
[   23.624791]  ret_from_fork_asm+0x1a/0x30
[   23.625466] 
[   23.625624] Freed by task 217:
[   23.625954]  kasan_save_stack+0x3d/0x60
[   23.626333]  kasan_save_track+0x18/0x40
[   23.627658]  kasan_save_free_info+0x3f/0x60
[   23.628008]  __kasan_slab_free+0x56/0x70
[   23.628670]  kmem_cache_free+0x120/0x420
[   23.629341]  kmem_cache_double_free+0x16b/0x490
[   23.630145]  kunit_try_run_case+0x1b3/0x490
[   23.631071]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.631371]  kthread+0x324/0x6d0
[   23.631620]  ret_from_fork+0x41/0x80
[   23.632056]  ret_from_fork_asm+0x1a/0x30
[   23.632475] 
[   23.632637] The buggy address belongs to the object at ffff888102a61000
[   23.632637]  which belongs to the cache test_cache of size 200
[   23.634583] The buggy address is located 0 bytes inside of
[   23.634583]  200-byte region [ffff888102a61000, ffff888102a610c8)
[   23.635634] 
[   23.635871] The buggy address belongs to the physical page:
[   23.637081] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a61
[   23.637927] flags: 0x200000000000000(node=0|zone=2)
[   23.638146] page_type: f5(slab)
[   23.638502] raw: 0200000000000000 ffff888100ff7780 dead000000000122 0000000000000000
[   23.639558] raw: 0000000000000000 00000000800f000f 00000001f5000000 0000000000000000
[   23.640066] page dumped because: kasan: bad access detected
[   23.641021] 
[   23.641334] Memory state around the buggy address:
[   23.642009]  ffff888102a60f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.643009]  ffff888102a60f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.643866] >ffff888102a61000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.644231]                    ^
[   23.644973]  ffff888102a61080: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
[   23.645827]  ffff888102a61100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.646558] ==================================================================

test log snippet boot parser 58


[   24.871063] ==================================================================
[   24.872524] BUG: KASAN: double-free in mempool_double_free_helper+0x185/0x370
[   24.872999] Free of addr ffff888102bd4000 by task kunit_try_catch/245
[   24.874018] 
[   24.874264] CPU: 1 UID: 0 PID: 245 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.874385] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.874440] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.874499] Call Trace:
[   24.874527]  <TASK>
[   24.874559]  dump_stack_lvl+0x73/0xb0
[   24.874636]  print_report+0xd1/0x640
[   24.874695]  ? __virt_addr_valid+0x1db/0x2d0
[   24.874757]  ? mempool_double_free_helper+0x185/0x370
[   24.874862]  ? kasan_addr_to_slab+0x11/0xa0
[   24.874933]  ? mempool_double_free_helper+0x185/0x370
[   24.874997]  kasan_report_invalid_free+0xc0/0xf0
[   24.875062]  ? mempool_double_free_helper+0x185/0x370
[   24.875131]  ? mempool_double_free_helper+0x185/0x370
[   24.875187]  __kasan_mempool_poison_object+0x1b3/0x1d0
[   24.875345]  mempool_free+0x2ec/0x380
[   24.875437]  mempool_double_free_helper+0x185/0x370
[   24.875512]  ? __pfx_mempool_double_free_helper+0x10/0x10
[   24.875578]  ? finish_task_switch.isra.0+0x153/0x700
[   24.875617]  mempool_kmalloc_large_double_free+0xb1/0x100
[   24.875655]  ? __pfx_mempool_kmalloc_large_double_free+0x10/0x10
[   24.875692]  ? __switch_to+0x5d9/0xf60
[   24.875725]  ? __pfx_mempool_kmalloc+0x10/0x10
[   24.875755]  ? __pfx_mempool_kfree+0x10/0x10
[   24.875788]  ? __pfx_read_tsc+0x10/0x10
[   24.875816]  ? ktime_get_ts64+0x86/0x230
[   24.875852]  kunit_try_run_case+0x1b3/0x490
[   24.875886]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.875918]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.875951]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.875984]  ? __kthread_parkme+0x82/0x160
[   24.876014]  ? preempt_count_sub+0x50/0x80
[   24.876046]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.876079]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.876113]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.876145]  kthread+0x324/0x6d0
[   24.876174]  ? trace_preempt_on+0x20/0xc0
[   24.876237]  ? __pfx_kthread+0x10/0x10
[   24.876270]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.876301]  ? calculate_sigpending+0x7b/0xa0
[   24.876337]  ? __pfx_kthread+0x10/0x10
[   24.876366]  ret_from_fork+0x41/0x80
[   24.876394]  ? __pfx_kthread+0x10/0x10
[   24.876443]  ret_from_fork_asm+0x1a/0x30
[   24.876485]  </TASK>
[   24.876501] 
[   24.894630] The buggy address belongs to the physical page:
[   24.895230] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bd4
[   24.896022] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   24.896754] flags: 0x200000000000040(head|node=0|zone=2)
[   24.897257] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   24.897989] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.898732] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   24.899307] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.899866] head: 0200000000000002 ffffea00040af501 ffffffffffffffff 0000000000000000
[   24.900342] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   24.901083] page dumped because: kasan: bad access detected
[   24.901728] 
[   24.901951] Memory state around the buggy address:
[   24.902469]  ffff888102bd3f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.903076]  ffff888102bd3f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.903784] >ffff888102bd4000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.904414]                    ^
[   24.904793]  ffff888102bd4080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.905390]  ffff888102bd4100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.906049] ==================================================================

test log snippet boot parser 59


[   24.912366] ==================================================================
[   24.913770] BUG: KASAN: double-free in mempool_double_free_helper+0x185/0x370
[   24.914313] Free of addr ffff888102d6c000 by task kunit_try_catch/247
[   24.914688] 
[   24.914866] CPU: 0 UID: 0 PID: 247 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.915334] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.915373] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.915842] Call Trace:
[   24.915873]  <TASK>
[   24.915895]  dump_stack_lvl+0x73/0xb0
[   24.915940]  print_report+0xd1/0x640
[   24.915974]  ? __virt_addr_valid+0x1db/0x2d0
[   24.916009]  ? mempool_double_free_helper+0x185/0x370
[   24.916044]  ? kasan_addr_to_slab+0x11/0xa0
[   24.916074]  ? mempool_double_free_helper+0x185/0x370
[   24.916110]  kasan_report_invalid_free+0xc0/0xf0
[   24.916143]  ? mempool_double_free_helper+0x185/0x370
[   24.916183]  ? mempool_double_free_helper+0x185/0x370
[   24.916246]  __kasan_mempool_poison_pages+0x115/0x130
[   24.916283]  mempool_free+0x290/0x380
[   24.916318]  mempool_double_free_helper+0x185/0x370
[   24.916353]  ? __pfx_mempool_double_free_helper+0x10/0x10
[   24.916393]  ? finish_task_switch.isra.0+0x153/0x700
[   24.916446]  mempool_page_alloc_double_free+0xac/0x100
[   24.916480]  ? __pfx_mempool_page_alloc_double_free+0x10/0x10
[   24.916510]  ? __switch_to+0x5d9/0xf60
[   24.916543]  ? __pfx_mempool_alloc_pages+0x10/0x10
[   24.916573]  ? __pfx_mempool_free_pages+0x10/0x10
[   24.916667]  ? __pfx_read_tsc+0x10/0x10
[   24.916727]  ? ktime_get_ts64+0x86/0x230
[   24.916766]  kunit_try_run_case+0x1b3/0x490
[   24.916803]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.916836]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.916869]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.916902]  ? __kthread_parkme+0x82/0x160
[   24.916931]  ? preempt_count_sub+0x50/0x80
[   24.916964]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.916998]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.917030]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.917064]  kthread+0x324/0x6d0
[   24.917093]  ? trace_preempt_on+0x20/0xc0
[   24.917123]  ? __pfx_kthread+0x10/0x10
[   24.917153]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.917196]  ? calculate_sigpending+0x7b/0xa0
[   24.917252]  ? __pfx_kthread+0x10/0x10
[   24.917281]  ret_from_fork+0x41/0x80
[   24.917310]  ? __pfx_kthread+0x10/0x10
[   24.917340]  ret_from_fork_asm+0x1a/0x30
[   24.917382]  </TASK>
[   24.917397] 
[   24.938693] The buggy address belongs to the physical page:
[   24.939019] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102d6c
[   24.940025] flags: 0x200000000000000(node=0|zone=2)
[   24.940753] raw: 0200000000000000 0000000000000000 dead000000000122 0000000000000000
[   24.941349] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.942494] page dumped because: kasan: bad access detected
[   24.943146] 
[   24.943430] Memory state around the buggy address:
[   24.944015]  ffff888102d6bf00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.944562]  ffff888102d6bf80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.945328] >ffff888102d6c000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.945747]                    ^
[   24.946107]  ffff888102d6c080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.946835]  ffff888102d6c100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.948057] ==================================================================

test log snippet boot parser 60


[   24.805553] ==================================================================
[   24.806627] BUG: KASAN: double-free in mempool_double_free_helper+0x185/0x370
[   24.807225] Free of addr ffff888102a59a00 by task kunit_try_catch/243
[   24.808000] 
[   24.808263] CPU: 1 UID: 0 PID: 243 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.808367] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.808419] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.808479] Call Trace:
[   24.808513]  <TASK>
[   24.808544]  dump_stack_lvl+0x73/0xb0
[   24.808822]  print_report+0xd1/0x640
[   24.808906]  ? __virt_addr_valid+0x1db/0x2d0
[   24.808986]  ? mempool_double_free_helper+0x185/0x370
[   24.809054]  ? kasan_complete_mode_report_info+0x64/0x200
[   24.809114]  ? mempool_double_free_helper+0x185/0x370
[   24.809174]  kasan_report_invalid_free+0xc0/0xf0
[   24.809234]  ? mempool_double_free_helper+0x185/0x370
[   24.809304]  ? mempool_double_free_helper+0x185/0x370
[   24.809365]  ? mempool_double_free_helper+0x185/0x370
[   24.809453]  check_slab_allocation+0x101/0x130
[   24.809515]  __kasan_mempool_poison_object+0x91/0x1d0
[   24.809777]  mempool_free+0x2ec/0x380
[   24.809872]  mempool_double_free_helper+0x185/0x370
[   24.809946]  ? __pfx_mempool_double_free_helper+0x10/0x10
[   24.809995]  ? finish_task_switch.isra.0+0x153/0x700
[   24.810033]  mempool_kmalloc_double_free+0xb1/0x100
[   24.810069]  ? __pfx_mempool_kmalloc_double_free+0x10/0x10
[   24.810103]  ? __switch_to+0x5d9/0xf60
[   24.810135]  ? __pfx_mempool_kmalloc+0x10/0x10
[   24.810166]  ? __pfx_mempool_kfree+0x10/0x10
[   24.810226]  ? __pfx_read_tsc+0x10/0x10
[   24.810259]  ? ktime_get_ts64+0x86/0x230
[   24.810295]  kunit_try_run_case+0x1b3/0x490
[   24.810330]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.810369]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.810419]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.810454]  ? __kthread_parkme+0x82/0x160
[   24.810483]  ? preempt_count_sub+0x50/0x80
[   24.810516]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.810549]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.810586]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.810698]  kthread+0x324/0x6d0
[   24.810737]  ? trace_preempt_on+0x20/0xc0
[   24.810773]  ? __pfx_kthread+0x10/0x10
[   24.810804]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.810835]  ? calculate_sigpending+0x7b/0xa0
[   24.810870]  ? __pfx_kthread+0x10/0x10
[   24.810901]  ret_from_fork+0x41/0x80
[   24.810931]  ? __pfx_kthread+0x10/0x10
[   24.810960]  ret_from_fork_asm+0x1a/0x30
[   24.811003]  </TASK>
[   24.811019] 
[   24.835262] Allocated by task 243:
[   24.835805]  kasan_save_stack+0x3d/0x60
[   24.836386]  kasan_save_track+0x18/0x40
[   24.836849]  kasan_save_alloc_info+0x3b/0x50
[   24.837139]  __kasan_mempool_unpoison_object+0x1a9/0x200
[   24.837878]  remove_element+0x11e/0x190
[   24.838479]  mempool_alloc_preallocated+0x4d/0x90
[   24.839163]  mempool_double_free_helper+0x8b/0x370
[   24.840078]  mempool_kmalloc_double_free+0xb1/0x100
[   24.840715]  kunit_try_run_case+0x1b3/0x490
[   24.841025]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.841556]  kthread+0x324/0x6d0
[   24.842034]  ret_from_fork+0x41/0x80
[   24.842491]  ret_from_fork_asm+0x1a/0x30
[   24.843122] 
[   24.843393] Freed by task 243:
[   24.843923]  kasan_save_stack+0x3d/0x60
[   24.844435]  kasan_save_track+0x18/0x40
[   24.845243]  kasan_save_free_info+0x3f/0x60
[   24.845651]  __kasan_mempool_poison_object+0x131/0x1d0
[   24.846064]  mempool_free+0x2ec/0x380
[   24.846836]  mempool_double_free_helper+0x10a/0x370
[   24.847429]  mempool_kmalloc_double_free+0xb1/0x100
[   24.847978]  kunit_try_run_case+0x1b3/0x490
[   24.848178]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.848563]  kthread+0x324/0x6d0
[   24.848866]  ret_from_fork+0x41/0x80
[   24.849526]  ret_from_fork_asm+0x1a/0x30
[   24.850153] 
[   24.850419] The buggy address belongs to the object at ffff888102a59a00
[   24.850419]  which belongs to the cache kmalloc-128 of size 128
[   24.851864] The buggy address is located 0 bytes inside of
[   24.851864]  128-byte region [ffff888102a59a00, ffff888102a59a80)
[   24.853195] 
[   24.853914] The buggy address belongs to the physical page:
[   24.854572] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   24.855469] flags: 0x200000000000000(node=0|zone=2)
[   24.856215] page_type: f5(slab)
[   24.856571] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   24.857575] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   24.857969] page dumped because: kasan: bad access detected
[   24.859101] 
[   24.859371] Memory state around the buggy address:
[   24.860005]  ffff888102a59900: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   24.860640]  ffff888102a59980: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.861282] >ffff888102a59a00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   24.861933]                    ^
[   24.862270]  ffff888102a59a80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.862982]  ffff888102a59b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   24.863547] ==================================================================

test log snippet boot parser 61


[   25.053951] ==================================================================
[   25.055482] BUG: KASAN: global-out-of-bounds in kasan_global_oob_right+0x288/0x2d0
[   25.056124] Read of size 1 at addr ffffffff91e12ced by task kunit_try_catch/253
[   25.056621] 
[   25.056993] CPU: 0 UID: 0 PID: 253 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.057099] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.057135] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.057196] Call Trace:
[   25.057228]  <TASK>
[   25.057262]  dump_stack_lvl+0x73/0xb0
[   25.057341]  print_report+0xd1/0x640
[   25.057416]  ? __virt_addr_valid+0x1db/0x2d0
[   25.057487]  ? kasan_addr_to_slab+0x11/0xa0
[   25.057549]  kasan_report+0x102/0x140
[   25.057607]  ? kasan_global_oob_right+0x288/0x2d0
[   25.057669]  ? kasan_global_oob_right+0x288/0x2d0
[   25.057743]  __asan_report_load1_noabort+0x18/0x20
[   25.057803]  kasan_global_oob_right+0x288/0x2d0
[   25.057866]  ? __pfx_kasan_global_oob_right+0x10/0x10
[   25.057934]  ? __schedule+0xc6f/0x27d0
[   25.057997]  ? __pfx_read_tsc+0x10/0x10
[   25.058051]  ? ktime_get_ts64+0x86/0x230
[   25.058192]  kunit_try_run_case+0x1b3/0x490
[   25.058285]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.058345]  ? irqentry_exit+0x2a/0x60
[   25.058393]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.058451]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.058485]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.058519]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.058552]  kthread+0x324/0x6d0
[   25.058587]  ? trace_preempt_on+0x20/0xc0
[   25.058656]  ? __pfx_kthread+0x10/0x10
[   25.058708]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.058756]  ? calculate_sigpending+0x7b/0xa0
[   25.058793]  ? __pfx_kthread+0x10/0x10
[   25.058824]  ret_from_fork+0x41/0x80
[   25.058855]  ? __pfx_kthread+0x10/0x10
[   25.058886]  ret_from_fork_asm+0x1a/0x30
[   25.058930]  </TASK>
[   25.058947] 
[   25.072927] The buggy address belongs to the variable:
[   25.073531]  global_array+0xd/0x40
[   25.074073] 
[   25.074440] The buggy address belongs to the physical page:
[   25.075032] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x57a12
[   25.075998] flags: 0x100000000002000(reserved|node=0|zone=1)
[   25.076774] raw: 0100000000002000 ffffea00015e8488 ffffea00015e8488 0000000000000000
[   25.077419] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   25.078160] page dumped because: kasan: bad access detected
[   25.078771] 
[   25.078995] Memory state around the buggy address:
[   25.079781]  ffffffff91e12b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.080499]  ffffffff91e12c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.081263] >ffffffff91e12c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 02 f9 f9
[   25.082042]                                                           ^
[   25.082855]  ffffffff91e12d00: f9 f9 f9 f9 00 f9 f9 f9 f9 f9 f9 f9 04 f9 f9 f9
[   25.083611]  ffffffff91e12d80: f9 f9 f9 f9 00 f9 f9 f9 f9 f9 f9 f9 02 f9 f9 f9
[   25.084262] ==================================================================

test log snippet boot parser 62


[   21.481214] ==================================================================
[   21.481963] BUG: KASAN: invalid-free in kfree+0x271/0x3f0
[   21.482828] Free of addr ffff888102324001 by task kunit_try_catch/158
[   21.483806] 
[   21.484265] CPU: 0 UID: 0 PID: 158 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.484379] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.484428] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.484616] Call Trace:
[   21.484659]  <TASK>
[   21.484697]  dump_stack_lvl+0x73/0xb0
[   21.484777]  print_report+0xd1/0x640
[   21.484840]  ? __virt_addr_valid+0x1db/0x2d0
[   21.484904]  ? kfree+0x271/0x3f0
[   21.485043]  ? kasan_addr_to_slab+0x11/0xa0
[   21.485089]  ? kfree+0x271/0x3f0
[   21.485116]  kasan_report_invalid_free+0xc0/0xf0
[   21.485152]  ? kfree+0x271/0x3f0
[   21.485195]  ? kfree+0x271/0x3f0
[   21.485247]  __kasan_kfree_large+0x86/0xd0
[   21.485279]  free_large_kmalloc+0x3b/0xd0
[   21.485308]  kfree+0x271/0x3f0
[   21.485340]  kmalloc_large_invalid_free+0x121/0x2b0
[   21.485372]  ? __pfx_kmalloc_large_invalid_free+0x10/0x10
[   21.485424]  ? __schedule+0xc6f/0x27d0
[   21.485460]  ? __pfx_read_tsc+0x10/0x10
[   21.485488]  ? ktime_get_ts64+0x86/0x230
[   21.485522]  kunit_try_run_case+0x1b3/0x490
[   21.485557]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.485618]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.485690]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.485725]  ? __kthread_parkme+0x82/0x160
[   21.485755]  ? preempt_count_sub+0x50/0x80
[   21.485787]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.485820]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.485853]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.485884]  kthread+0x324/0x6d0
[   21.485911]  ? trace_preempt_on+0x20/0xc0
[   21.485942]  ? __pfx_kthread+0x10/0x10
[   21.485971]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.486000]  ? calculate_sigpending+0x7b/0xa0
[   21.486032]  ? __pfx_kthread+0x10/0x10
[   21.486062]  ret_from_fork+0x41/0x80
[   21.486091]  ? __pfx_kthread+0x10/0x10
[   21.486119]  ret_from_fork_asm+0x1a/0x30
[   21.486162]  </TASK>
[   21.486177] 
[   21.504983] The buggy address belongs to the physical page:
[   21.505463] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102324
[   21.506214] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.506902] flags: 0x200000000000040(head|node=0|zone=2)
[   21.507544] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   21.508959] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   21.509498] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   21.510440] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   21.511366] head: 0200000000000002 ffffea000408c901 ffffffffffffffff 0000000000000000
[   21.512142] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   21.513206] page dumped because: kasan: bad access detected
[   21.513873] 
[   21.514037] Memory state around the buggy address:
[   21.514743]  ffff888102323f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.515384]  ffff888102323f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.516057] >ffff888102324000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.516831]                    ^
[   21.517220]  ffff888102324080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.518164]  ffff888102324100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.519106] ==================================================================

test log snippet boot parser 63


[   23.659741] ==================================================================
[   23.660142] BUG: KASAN: invalid-free in kmem_cache_invalid_free+0x1d9/0x470
[   23.660379] Free of addr ffff888101ae9001 by task kunit_try_catch/219
[   23.662344] 
[   23.662623] CPU: 0 UID: 0 PID: 219 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.662724] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.662755] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.662809] Call Trace:
[   23.662840]  <TASK>
[   23.662873]  dump_stack_lvl+0x73/0xb0
[   23.662946]  print_report+0xd1/0x640
[   23.663000]  ? __virt_addr_valid+0x1db/0x2d0
[   23.663060]  ? kmem_cache_invalid_free+0x1d9/0x470
[   23.663106]  ? kasan_complete_mode_report_info+0x2a/0x200
[   23.663161]  ? kmem_cache_invalid_free+0x1d9/0x470
[   23.664021]  kasan_report_invalid_free+0xc0/0xf0
[   23.664089]  ? kmem_cache_invalid_free+0x1d9/0x470
[   23.664125]  ? kmem_cache_invalid_free+0x1d9/0x470
[   23.664156]  check_slab_allocation+0x11f/0x130
[   23.664193]  __kasan_slab_pre_free+0x28/0x40
[   23.664246]  kmem_cache_free+0xee/0x420
[   23.664277]  ? kmem_cache_alloc_noprof+0x11e/0x3e0
[   23.664307]  ? kmem_cache_invalid_free+0x1d9/0x470
[   23.664340]  kmem_cache_invalid_free+0x1d9/0x470
[   23.664371]  ? __pfx_kmem_cache_invalid_free+0x10/0x10
[   23.664415]  ? finish_task_switch.isra.0+0x153/0x700
[   23.664449]  ? __switch_to+0x5d9/0xf60
[   23.664487]  ? __pfx_read_tsc+0x10/0x10
[   23.664515]  ? ktime_get_ts64+0x86/0x230
[   23.664549]  kunit_try_run_case+0x1b3/0x490
[   23.664585]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.664681]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.664745]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.664778]  ? __kthread_parkme+0x82/0x160
[   23.664808]  ? preempt_count_sub+0x50/0x80
[   23.664840]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.664872]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.664905]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.664936]  kthread+0x324/0x6d0
[   23.664963]  ? trace_preempt_on+0x20/0xc0
[   23.664994]  ? __pfx_kthread+0x10/0x10
[   23.665023]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.665052]  ? calculate_sigpending+0x7b/0xa0
[   23.665084]  ? __pfx_kthread+0x10/0x10
[   23.665113]  ret_from_fork+0x41/0x80
[   23.665141]  ? __pfx_kthread+0x10/0x10
[   23.665169]  ret_from_fork_asm+0x1a/0x30
[   23.665234]  </TASK>
[   23.665251] 
[   23.683817] Allocated by task 219:
[   23.684058]  kasan_save_stack+0x3d/0x60
[   23.684342]  kasan_save_track+0x18/0x40
[   23.685264]  kasan_save_alloc_info+0x3b/0x50
[   23.685877]  __kasan_slab_alloc+0x91/0xa0
[   23.686378]  kmem_cache_alloc_noprof+0x11e/0x3e0
[   23.687046]  kmem_cache_invalid_free+0x158/0x470
[   23.687718]  kunit_try_run_case+0x1b3/0x490
[   23.688216]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.688985]  kthread+0x324/0x6d0
[   23.689374]  ret_from_fork+0x41/0x80
[   23.689904]  ret_from_fork_asm+0x1a/0x30
[   23.690375] 
[   23.690768] The buggy address belongs to the object at ffff888101ae9000
[   23.690768]  which belongs to the cache test_cache of size 200
[   23.691720] The buggy address is located 1 bytes inside of
[   23.691720]  200-byte region [ffff888101ae9000, ffff888101ae90c8)
[   23.692836] 
[   23.693388] The buggy address belongs to the physical page:
[   23.694045] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ae9
[   23.694795] flags: 0x200000000000000(node=0|zone=2)
[   23.695348] page_type: f5(slab)
[   23.695901] raw: 0200000000000000 ffff888101aabc80 dead000000000122 0000000000000000
[   23.696544] raw: 0000000000000000 00000000800f000f 00000001f5000000 0000000000000000
[   23.697375] page dumped because: kasan: bad access detected
[   23.698066] 
[   23.698333] Memory state around the buggy address:
[   23.698944]  ffff888101ae8f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.699780]  ffff888101ae8f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.700309] >ffff888101ae9000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   23.701172]                    ^
[   23.701486]  ffff888101ae9080: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
[   23.702027]  ffff888101ae9100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.702764] ==================================================================

test log snippet boot parser 64

[   25.010427] ==================================================================
[   25.011419] BUG: KASAN: invalid-free in mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   25.011945] Free of addr ffff888102d9c001 by task kunit_try_catch/251
[   25.012345] 
[   25.012761] CPU: 1 UID: 0 PID: 251 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.012873] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.012911] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.012969] Call Trace:
[   25.013003]  <TASK>
[   25.013038]  dump_stack_lvl+0x73/0xb0
[   25.013115]  print_report+0xd1/0x640
[   25.013179]  ? __virt_addr_valid+0x1db/0x2d0
[   25.013490]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   25.013797]  ? kasan_addr_to_slab+0x11/0xa0
[   25.013872]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   25.013945]  kasan_report_invalid_free+0xc0/0xf0
[   25.014000]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   25.014044]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   25.014083]  __kasan_mempool_poison_object+0x102/0x1d0
[   25.014120]  mempool_free+0x2ec/0x380
[   25.014155]  mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   25.014230]  ? __pfx_mempool_kmalloc_invalid_free_helper+0x10/0x10
[   25.014280]  ? finish_task_switch.isra.0+0x153/0x700
[   25.014316]  mempool_kmalloc_large_invalid_free+0xb1/0x100
[   25.014359]  ? __pfx_mempool_kmalloc_large_invalid_free+0x10/0x10
[   25.014395]  ? __switch_to+0x5d9/0xf60
[   25.014446]  ? __pfx_mempool_kmalloc+0x10/0x10
[   25.014476]  ? __pfx_mempool_kfree+0x10/0x10
[   25.014510]  ? __pfx_read_tsc+0x10/0x10
[   25.014537]  ? ktime_get_ts64+0x86/0x230
[   25.014574]  kunit_try_run_case+0x1b3/0x490
[   25.014668]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.014730]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.014792]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.014853]  ? __kthread_parkme+0x82/0x160
[   25.014904]  ? preempt_count_sub+0x50/0x80
[   25.014947]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.014982]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.015017]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.015051]  kthread+0x324/0x6d0
[   25.015078]  ? trace_preempt_on+0x20/0xc0
[   25.015110]  ? __pfx_kthread+0x10/0x10
[   25.015140]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.015171]  ? calculate_sigpending+0x7b/0xa0
[   25.015238]  ? __pfx_kthread+0x10/0x10
[   25.015271]  ret_from_fork+0x41/0x80
[   25.015302]  ? __pfx_kthread+0x10/0x10
[   25.015332]  ret_from_fork_asm+0x1a/0x30
[   25.015375]  </TASK>
[   25.015391] 
[   25.033927] The buggy address belongs to the physical page:
[   25.034475] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102d9c
[   25.035151] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   25.035900] flags: 0x200000000000040(head|node=0|zone=2)
[   25.036470] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   25.036927] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   25.037611] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   25.038132] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   25.039374] head: 0200000000000002 ffffea00040b6701 ffffffffffffffff 0000000000000000
[   25.040131] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   25.040855] page dumped because: kasan: bad access detected
[   25.041444] 
[   25.041813] Memory state around the buggy address:
[   25.042283]  ffff888102d9bf00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   25.043238]  ffff888102d9bf80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   25.043860] >ffff888102d9c000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.044245]                    ^
[   25.044504]  ffff888102d9c080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.045151]  ffff888102d9c100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.046105] ==================================================================

test log snippet boot parser 65

[   24.954376] ==================================================================
[   24.955769] BUG: KASAN: invalid-free in mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   24.956226] Free of addr ffff888102a59e01 by task kunit_try_catch/249
[   24.957320] 
[   24.957585] CPU: 1 UID: 0 PID: 249 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.957861] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.957901] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.957959] Call Trace:
[   24.957991]  <TASK>
[   24.958020]  dump_stack_lvl+0x73/0xb0
[   24.958096]  print_report+0xd1/0x640
[   24.958158]  ? __virt_addr_valid+0x1db/0x2d0
[   24.958298]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   24.958423]  ? kasan_complete_mode_report_info+0x2a/0x200
[   24.958496]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   24.958556]  kasan_report_invalid_free+0xc0/0xf0
[   24.958627]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   24.958714]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   24.958755]  ? mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   24.958793]  check_slab_allocation+0x11f/0x130
[   24.958828]  __kasan_mempool_poison_object+0x91/0x1d0
[   24.958864]  mempool_free+0x2ec/0x380
[   24.958903]  mempool_kmalloc_invalid_free_helper+0x133/0x2e0
[   24.958940]  ? __pfx_mempool_kmalloc_invalid_free_helper+0x10/0x10
[   24.958984]  ? finish_task_switch.isra.0+0x153/0x700
[   24.959020]  mempool_kmalloc_invalid_free+0xb1/0x100
[   24.959055]  ? __pfx_mempool_kmalloc_invalid_free+0x10/0x10
[   24.959089]  ? __switch_to+0x5d9/0xf60
[   24.959123]  ? __pfx_mempool_kmalloc+0x10/0x10
[   24.959152]  ? __pfx_mempool_kfree+0x10/0x10
[   24.959199]  ? __pfx_read_tsc+0x10/0x10
[   24.959243]  ? ktime_get_ts64+0x86/0x230
[   24.959279]  kunit_try_run_case+0x1b3/0x490
[   24.959315]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.959346]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.959380]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.959430]  ? __kthread_parkme+0x82/0x160
[   24.959462]  ? preempt_count_sub+0x50/0x80
[   24.959495]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.959528]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.959563]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.959638]  kthread+0x324/0x6d0
[   24.959707]  ? trace_preempt_on+0x20/0xc0
[   24.959766]  ? __pfx_kthread+0x10/0x10
[   24.959817]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.959868]  ? calculate_sigpending+0x7b/0xa0
[   24.959926]  ? __pfx_kthread+0x10/0x10
[   24.959959]  ret_from_fork+0x41/0x80
[   24.959991]  ? __pfx_kthread+0x10/0x10
[   24.960022]  ret_from_fork_asm+0x1a/0x30
[   24.960066]  </TASK>
[   24.960083] 
[   24.982338] Allocated by task 249:
[   24.982743]  kasan_save_stack+0x3d/0x60
[   24.983154]  kasan_save_track+0x18/0x40
[   24.983875]  kasan_save_alloc_info+0x3b/0x50
[   24.984422]  __kasan_mempool_unpoison_object+0x1a9/0x200
[   24.985102]  remove_element+0x11e/0x190
[   24.986022]  mempool_alloc_preallocated+0x4d/0x90
[   24.986540]  mempool_kmalloc_invalid_free_helper+0x84/0x2e0
[   24.987171]  mempool_kmalloc_invalid_free+0xb1/0x100
[   24.987569]  kunit_try_run_case+0x1b3/0x490
[   24.988199]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.988981]  kthread+0x324/0x6d0
[   24.989354]  ret_from_fork+0x41/0x80
[   24.989951]  ret_from_fork_asm+0x1a/0x30
[   24.990476] 
[   24.991112] The buggy address belongs to the object at ffff888102a59e00
[   24.991112]  which belongs to the cache kmalloc-128 of size 128
[   24.992233] The buggy address is located 1 bytes inside of
[   24.992233]  128-byte region [ffff888102a59e00, ffff888102a59e80)
[   24.993238] 
[   24.993458] The buggy address belongs to the physical page:
[   24.994159] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   24.995072] flags: 0x200000000000000(node=0|zone=2)
[   24.996007] page_type: f5(slab)
[   24.996496] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   24.997127] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   24.997918] page dumped because: kasan: bad access detected
[   24.998491] 
[   24.998878] Memory state around the buggy address:
[   24.999387]  ffff888102a59d00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   25.000218]  ffff888102a59d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.001367] >ffff888102a59e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.001999]                    ^
[   25.002313]  ffff888102a59e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.003087]  ffff888102a59f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.004547] ==================================================================

test log snippet boot parser 66

[   22.693104] ==================================================================
[   22.694367] BUG: KASAN: out-of-bounds in kmalloc_memmove_negative_size+0x172/0x330
[   22.695182] Read of size 18446744073709551614 at addr ffff888101add584 by task kunit_try_catch/188
[   22.696040] 
[   22.696288] CPU: 0 UID: 0 PID: 188 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.696429] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.696463] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.696516] Call Trace:
[   22.696546]  <TASK>
[   22.696579]  dump_stack_lvl+0x73/0xb0
[   22.696772]  print_report+0xd1/0x640
[   22.696840]  ? __virt_addr_valid+0x1db/0x2d0
[   22.696906]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.696972]  kasan_report+0x102/0x140
[   22.697026]  ? kmalloc_memmove_negative_size+0x172/0x330
[   22.697224]  ? kmalloc_memmove_negative_size+0x172/0x330
[   22.697360]  kasan_check_range+0x10c/0x1c0
[   22.697446]  __asan_memmove+0x27/0x70
[   22.697722]  kmalloc_memmove_negative_size+0x172/0x330
[   22.697815]  ? __pfx_kmalloc_memmove_negative_size+0x10/0x10
[   22.697884]  ? __schedule+0xc6f/0x27d0
[   22.697930]  ? __pfx_read_tsc+0x10/0x10
[   22.697960]  ? ktime_get_ts64+0x86/0x230
[   22.697995]  kunit_try_run_case+0x1b3/0x490
[   22.698030]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.698060]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.698101]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.698134]  ? __kthread_parkme+0x82/0x160
[   22.698162]  ? preempt_count_sub+0x50/0x80
[   22.698216]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.698252]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.698285]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.698316]  kthread+0x324/0x6d0
[   22.698344]  ? trace_preempt_on+0x20/0xc0
[   22.698382]  ? __pfx_kthread+0x10/0x10
[   22.698431]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.698461]  ? calculate_sigpending+0x7b/0xa0
[   22.698495]  ? __pfx_kthread+0x10/0x10
[   22.698525]  ret_from_fork+0x41/0x80
[   22.698553]  ? __pfx_kthread+0x10/0x10
[   22.698588]  ret_from_fork_asm+0x1a/0x30
[   22.698669]  </TASK>
[   22.698699] 
[   22.717393] Allocated by task 188:
[   22.717859]  kasan_save_stack+0x3d/0x60
[   22.719213]  kasan_save_track+0x18/0x40
[   22.719778]  kasan_save_alloc_info+0x3b/0x50
[   22.720363]  __kasan_kmalloc+0xb7/0xc0
[   22.721040]  __kmalloc_cache_noprof+0x184/0x410
[   22.721798]  kmalloc_memmove_negative_size+0xad/0x330
[   22.722506]  kunit_try_run_case+0x1b3/0x490
[   22.723371]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.723793]  kthread+0x324/0x6d0
[   22.724172]  ret_from_fork+0x41/0x80
[   22.724880]  ret_from_fork_asm+0x1a/0x30
[   22.725567] 
[   22.725768] The buggy address belongs to the object at ffff888101add580
[   22.725768]  which belongs to the cache kmalloc-64 of size 64
[   22.726678] The buggy address is located 4 bytes inside of
[   22.726678]  64-byte region [ffff888101add580, ffff888101add5c0)
[   22.727892] 
[   22.728551] The buggy address belongs to the physical page:
[   22.728960] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101add
[   22.730272] flags: 0x200000000000000(node=0|zone=2)
[   22.730884] page_type: f5(slab)
[   22.731187] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   22.732425] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   22.733092] page dumped because: kasan: bad access detected
[   22.733650] 
[   22.733851] Memory state around the buggy address:
[   22.734546]  ffff888101add480: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.735417]  ffff888101add500: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.736269] >ffff888101add580: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
[   22.736963]                    ^
[   22.737961]  ffff888101add600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.738512]  ffff888101add680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.738880] ==================================================================

test log snippet boot parser 67

[   29.225245] ==================================================================
[   29.225795] BUG: KASAN: slab-out-of-bounds in _copy_from_user+0x32/0x90
[   29.226040] Write of size 121 at addr ffff888101af6100 by task kunit_try_catch/293
[   29.226277] 
[   29.226385] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.226493] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.226530] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.226587] Call Trace:
[   29.226624]  <TASK>
[   29.226664]  dump_stack_lvl+0x73/0xb0
[   29.226745]  print_report+0xd1/0x640
[   29.226815]  ? __virt_addr_valid+0x1db/0x2d0
[   29.226891]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.226967]  kasan_report+0x102/0x140
[   29.227081]  ? _copy_from_user+0x32/0x90
[   29.227682]  ? _copy_from_user+0x32/0x90
[   29.227781]  kasan_check_range+0x10c/0x1c0
[   29.227850]  __kasan_check_write+0x18/0x20
[   29.227912]  _copy_from_user+0x32/0x90
[   29.227977]  copy_user_test_oob+0x2bf/0x10f0
[   29.228051]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.228111]  ? finish_task_switch.isra.0+0x153/0x700
[   29.228163]  ? __switch_to+0x5d9/0xf60
[   29.228265]  ? __schedule+0xc6f/0x27d0
[   29.228306]  ? __pfx_read_tsc+0x10/0x10
[   29.228337]  ? ktime_get_ts64+0x86/0x230
[   29.228374]  kunit_try_run_case+0x1b3/0x490
[   29.228430]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.228466]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.228500]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.228535]  ? __kthread_parkme+0x82/0x160
[   29.228566]  ? preempt_count_sub+0x50/0x80
[   29.228601]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.228636]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.228672]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.228706]  kthread+0x324/0x6d0
[   29.228736]  ? trace_preempt_on+0x20/0xc0
[   29.228770]  ? __pfx_kthread+0x10/0x10
[   29.228801]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.228833]  ? calculate_sigpending+0x7b/0xa0
[   29.228869]  ? __pfx_kthread+0x10/0x10
[   29.228901]  ret_from_fork+0x41/0x80
[   29.228932]  ? __pfx_kthread+0x10/0x10
[   29.228964]  ret_from_fork_asm+0x1a/0x30
[   29.229009]  </TASK>
[   29.229027] 
[   29.246342] Allocated by task 293:
[   29.246837]  kasan_save_stack+0x3d/0x60
[   29.247363]  kasan_save_track+0x18/0x40
[   29.247898]  kasan_save_alloc_info+0x3b/0x50
[   29.248477]  __kasan_kmalloc+0xb7/0xc0
[   29.248770]  __kmalloc_noprof+0x1c4/0x500
[   29.249286]  kunit_kmalloc_array+0x25/0x60
[   29.249902]  copy_user_test_oob+0xac/0x10f0
[   29.250476]  kunit_try_run_case+0x1b3/0x490
[   29.250967]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.251595]  kthread+0x324/0x6d0
[   29.252032]  ret_from_fork+0x41/0x80
[   29.252576]  ret_from_fork_asm+0x1a/0x30
[   29.252991] 
[   29.253331] The buggy address belongs to the object at ffff888101af6100
[   29.253331]  which belongs to the cache kmalloc-128 of size 128
[   29.254579] The buggy address is located 0 bytes inside of
[   29.254579]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.255963] 
[   29.256377] The buggy address belongs to the physical page:
[   29.256820] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.257450] flags: 0x200000000000000(node=0|zone=2)
[   29.257846] page_type: f5(slab)
[   29.258110] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.258916] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.259678] page dumped because: kasan: bad access detected
[   29.260253] 
[   29.260444] Memory state around the buggy address:
[   29.260939]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.261662]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.262396] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.262916]                                                                 ^
[   29.263573]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.264120]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.264900] ==================================================================

test log snippet boot parser 68

[   29.271243] ==================================================================
[   29.271834] BUG: KASAN: slab-out-of-bounds in _copy_to_user+0x4a/0x70
[   29.272594] Read of size 121 at addr ffff888101af6100 by task kunit_try_catch/293
[   29.273081] 
[   29.273928] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.274000] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.274021] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.274053] Call Trace:
[   29.274074]  <TASK>
[   29.274096]  dump_stack_lvl+0x73/0xb0
[   29.274140]  print_report+0xd1/0x640
[   29.274176]  ? __virt_addr_valid+0x1db/0x2d0
[   29.274243]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.274283]  kasan_report+0x102/0x140
[   29.274315]  ? _copy_to_user+0x4a/0x70
[   29.274357]  ? _copy_to_user+0x4a/0x70
[   29.274397]  kasan_check_range+0x10c/0x1c0
[   29.274447]  __kasan_check_read+0x15/0x20
[   29.274479]  _copy_to_user+0x4a/0x70
[   29.274511]  copy_user_test_oob+0x365/0x10f0
[   29.274548]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.274578]  ? finish_task_switch.isra.0+0x153/0x700
[   29.274611]  ? __switch_to+0x5d9/0xf60
[   29.274646]  ? __schedule+0xc6f/0x27d0
[   29.274680]  ? __pfx_read_tsc+0x10/0x10
[   29.274709]  ? ktime_get_ts64+0x86/0x230
[   29.274744]  kunit_try_run_case+0x1b3/0x490
[   29.274779]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.274812]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.274845]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.274879]  ? __kthread_parkme+0x82/0x160
[   29.274910]  ? preempt_count_sub+0x50/0x80
[   29.274944]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.274979]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.275013]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.275049]  kthread+0x324/0x6d0
[   29.275078]  ? trace_preempt_on+0x20/0xc0
[   29.275110]  ? __pfx_kthread+0x10/0x10
[   29.275141]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.275173]  ? calculate_sigpending+0x7b/0xa0
[   29.275233]  ? __pfx_kthread+0x10/0x10
[   29.275267]  ret_from_fork+0x41/0x80
[   29.275299]  ? __pfx_kthread+0x10/0x10
[   29.275330]  ret_from_fork_asm+0x1a/0x30
[   29.275376]  </TASK>
[   29.275393] 
[   29.290119] Allocated by task 293:
[   29.290640]  kasan_save_stack+0x3d/0x60
[   29.291173]  kasan_save_track+0x18/0x40
[   29.291723]  kasan_save_alloc_info+0x3b/0x50
[   29.292029]  __kasan_kmalloc+0xb7/0xc0
[   29.292461]  __kmalloc_noprof+0x1c4/0x500
[   29.292938]  kunit_kmalloc_array+0x25/0x60
[   29.293462]  copy_user_test_oob+0xac/0x10f0
[   29.293907]  kunit_try_run_case+0x1b3/0x490
[   29.294368]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.294888]  kthread+0x324/0x6d0
[   29.295329]  ret_from_fork+0x41/0x80
[   29.295782]  ret_from_fork_asm+0x1a/0x30
[   29.296249] 
[   29.296437] The buggy address belongs to the object at ffff888101af6100
[   29.296437]  which belongs to the cache kmalloc-128 of size 128
[   29.297532] The buggy address is located 0 bytes inside of
[   29.297532]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.298543] 
[   29.298780] The buggy address belongs to the physical page:
[   29.299280] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.299791] flags: 0x200000000000000(node=0|zone=2)
[   29.300370] page_type: f5(slab)
[   29.300816] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.301571] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.302307] page dumped because: kasan: bad access detected
[   29.302845] 
[   29.303032] Memory state around the buggy address:
[   29.303700]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.304289]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.304719] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.305461]                                                                 ^
[   29.306107]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.306847]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.307378] ==================================================================

test log snippet boot parser 69

[   29.105267] ==================================================================
[   29.106388] BUG: KASAN: slab-out-of-bounds in copy_to_kernel_nofault+0x225/0x260
[   29.107741] Read of size 8 at addr ffff888101af6078 by task kunit_try_catch/289
[   29.108838] 
[   29.109135] CPU: 0 UID: 0 PID: 289 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.109260] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.109300] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.109383] Call Trace:
[   29.109433]  <TASK>
[   29.109472]  dump_stack_lvl+0x73/0xb0
[   29.109558]  print_report+0xd1/0x640
[   29.109678]  ? __virt_addr_valid+0x1db/0x2d0
[   29.109723]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.109765]  kasan_report+0x102/0x140
[   29.109798]  ? copy_to_kernel_nofault+0x225/0x260
[   29.109835]  ? copy_to_kernel_nofault+0x225/0x260
[   29.109877]  __asan_report_load8_noabort+0x18/0x20
[   29.109910]  copy_to_kernel_nofault+0x225/0x260
[   29.109946]  copy_to_kernel_nofault_oob+0x179/0x4e0
[   29.109982]  ? __pfx_copy_to_kernel_nofault_oob+0x10/0x10
[   29.110018]  ? finish_task_switch.isra.0+0x153/0x700
[   29.110051]  ? __schedule+0xc6f/0x27d0
[   29.110084]  ? trace_hardirqs_on+0x37/0xe0
[   29.110129]  ? __pfx_read_tsc+0x10/0x10
[   29.110159]  ? ktime_get_ts64+0x86/0x230
[   29.110224]  kunit_try_run_case+0x1b3/0x490
[   29.110268]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.110302]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.110336]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.110381]  ? __kthread_parkme+0x82/0x160
[   29.110436]  ? preempt_count_sub+0x50/0x80
[   29.110506]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.110573]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.110659]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.110711]  kthread+0x324/0x6d0
[   29.110744]  ? trace_preempt_on+0x20/0xc0
[   29.110777]  ? __pfx_kthread+0x10/0x10
[   29.110810]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.110842]  ? calculate_sigpending+0x7b/0xa0
[   29.110880]  ? __pfx_kthread+0x10/0x10
[   29.110913]  ret_from_fork+0x41/0x80
[   29.110944]  ? __pfx_kthread+0x10/0x10
[   29.110977]  ret_from_fork_asm+0x1a/0x30
[   29.111024]  </TASK>
[   29.111043] 
[   29.130024] Allocated by task 289:
[   29.130864]  kasan_save_stack+0x3d/0x60
[   29.131919]  kasan_save_track+0x18/0x40
[   29.132617]  kasan_save_alloc_info+0x3b/0x50
[   29.133008]  __kasan_kmalloc+0xb7/0xc0
[   29.133921]  __kmalloc_cache_noprof+0x184/0x410
[   29.134225]  copy_to_kernel_nofault_oob+0xc5/0x4e0
[   29.135030]  kunit_try_run_case+0x1b3/0x490
[   29.135747]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.136672]  kthread+0x324/0x6d0
[   29.137150]  ret_from_fork+0x41/0x80
[   29.137911]  ret_from_fork_asm+0x1a/0x30
[   29.138308] 
[   29.138825] The buggy address belongs to the object at ffff888101af6000
[   29.138825]  which belongs to the cache kmalloc-128 of size 128
[   29.140159] The buggy address is located 0 bytes to the right of
[   29.140159]  allocated 120-byte region [ffff888101af6000, ffff888101af6078)
[   29.141742] 
[   29.141925] The buggy address belongs to the physical page:
[   29.142252] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.143461] flags: 0x200000000000000(node=0|zone=2)
[   29.144271] page_type: f5(slab)
[   29.144548] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.145915] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.146901] page dumped because: kasan: bad access detected
[   29.147342] 
[   29.147601] Memory state around the buggy address:
[   29.148233]  ffff888101af5f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.149058]  ffff888101af5f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.150079] >ffff888101af6000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.151055]                                                                 ^
[   29.152307]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.153193]  ffff888101af6100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.153917] ==================================================================

test log snippet boot parser 70

[   29.155530] ==================================================================
[   29.156502] BUG: KASAN: slab-out-of-bounds in copy_to_kernel_nofault+0x99/0x260
[   29.157792] Write of size 8 at addr ffff888101af6078 by task kunit_try_catch/289
[   29.158616] 
[   29.158829] CPU: 0 UID: 0 PID: 289 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.158939] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.158968] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.158999] Call Trace:
[   29.159020]  <TASK>
[   29.159043]  dump_stack_lvl+0x73/0xb0
[   29.159089]  print_report+0xd1/0x640
[   29.159152]  ? __virt_addr_valid+0x1db/0x2d0
[   29.159216]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.159284]  kasan_report+0x102/0x140
[   29.159350]  ? copy_to_kernel_nofault+0x99/0x260
[   29.159494]  ? copy_to_kernel_nofault+0x99/0x260
[   29.159575]  kasan_check_range+0x10c/0x1c0
[   29.159642]  __kasan_check_write+0x18/0x20
[   29.159704]  copy_to_kernel_nofault+0x99/0x260
[   29.159773]  copy_to_kernel_nofault_oob+0x214/0x4e0
[   29.159837]  ? __pfx_copy_to_kernel_nofault_oob+0x10/0x10
[   29.159896]  ? finish_task_switch.isra.0+0x153/0x700
[   29.159952]  ? __schedule+0xc6f/0x27d0
[   29.160007]  ? trace_hardirqs_on+0x37/0xe0
[   29.160081]  ? __pfx_read_tsc+0x10/0x10
[   29.160133]  ? ktime_get_ts64+0x86/0x230
[   29.160198]  kunit_try_run_case+0x1b3/0x490
[   29.160257]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.160293]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.160327]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.160362]  ? __kthread_parkme+0x82/0x160
[   29.160394]  ? preempt_count_sub+0x50/0x80
[   29.160455]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.160492]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.160527]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.160562]  kthread+0x324/0x6d0
[   29.160637]  ? trace_preempt_on+0x20/0xc0
[   29.160705]  ? __pfx_kthread+0x10/0x10
[   29.160742]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.160775]  ? calculate_sigpending+0x7b/0xa0
[   29.160810]  ? __pfx_kthread+0x10/0x10
[   29.160846]  ret_from_fork+0x41/0x80
[   29.160878]  ? __pfx_kthread+0x10/0x10
[   29.160911]  ret_from_fork_asm+0x1a/0x30
[   29.160974]  </TASK>
[   29.160994] 
[   29.182264] Allocated by task 289:
[   29.183088]  kasan_save_stack+0x3d/0x60
[   29.183540]  kasan_save_track+0x18/0x40
[   29.183966]  kasan_save_alloc_info+0x3b/0x50
[   29.184284]  __kasan_kmalloc+0xb7/0xc0
[   29.184717]  __kmalloc_cache_noprof+0x184/0x410
[   29.185170]  copy_to_kernel_nofault_oob+0xc5/0x4e0
[   29.185961]  kunit_try_run_case+0x1b3/0x490
[   29.186510]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.187317]  kthread+0x324/0x6d0
[   29.187873]  ret_from_fork+0x41/0x80
[   29.188396]  ret_from_fork_asm+0x1a/0x30
[   29.189038] 
[   29.189336] The buggy address belongs to the object at ffff888101af6000
[   29.189336]  which belongs to the cache kmalloc-128 of size 128
[   29.190475] The buggy address is located 0 bytes to the right of
[   29.190475]  allocated 120-byte region [ffff888101af6000, ffff888101af6078)
[   29.191908] 
[   29.192166] The buggy address belongs to the physical page:
[   29.192902] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.193464] flags: 0x200000000000000(node=0|zone=2)
[   29.194218] page_type: f5(slab)
[   29.194600] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.195711] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.196495] page dumped because: kasan: bad access detected
[   29.197174] 
[   29.197478] Memory state around the buggy address:
[   29.198159]  ffff888101af5f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.199178]  ffff888101af5f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.199985] >ffff888101af6000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.200878]                                                                 ^
[   29.202008]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.202823]  ffff888101af6100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.203703] ==================================================================

test log snippet boot parser 71

[   29.433721] ==================================================================
[   29.434515] BUG: KASAN: slab-out-of-bounds in copy_user_test_oob+0x605/0x10f0
[   29.435316] Read of size 121 at addr ffff888101af6100 by task kunit_try_catch/293
[   29.436060] 
[   29.436355] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.436474] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.436512] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.436572] Call Trace:
[   29.436605]  <TASK>
[   29.436657]  dump_stack_lvl+0x73/0xb0
[   29.436763]  print_report+0xd1/0x640
[   29.436829]  ? __virt_addr_valid+0x1db/0x2d0
[   29.436878]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.436918]  kasan_report+0x102/0x140
[   29.436951]  ? copy_user_test_oob+0x605/0x10f0
[   29.436990]  ? copy_user_test_oob+0x605/0x10f0
[   29.437054]  kasan_check_range+0x10c/0x1c0
[   29.437115]  __kasan_check_read+0x15/0x20
[   29.437169]  copy_user_test_oob+0x605/0x10f0
[   29.437289]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.437350]  ? finish_task_switch.isra.0+0x153/0x700
[   29.437451]  ? __switch_to+0x5d9/0xf60
[   29.437525]  ? __schedule+0xc6f/0x27d0
[   29.437592]  ? __pfx_read_tsc+0x10/0x10
[   29.437650]  ? ktime_get_ts64+0x86/0x230
[   29.437708]  kunit_try_run_case+0x1b3/0x490
[   29.437748]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.437784]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.437817]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.437852]  ? __kthread_parkme+0x82/0x160
[   29.437883]  ? preempt_count_sub+0x50/0x80
[   29.437917]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.437952]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.437987]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.438024]  kthread+0x324/0x6d0
[   29.438055]  ? trace_preempt_on+0x20/0xc0
[   29.438087]  ? __pfx_kthread+0x10/0x10
[   29.438119]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.438151]  ? calculate_sigpending+0x7b/0xa0
[   29.438247]  ? __pfx_kthread+0x10/0x10
[   29.438290]  ret_from_fork+0x41/0x80
[   29.438323]  ? __pfx_kthread+0x10/0x10
[   29.438366]  ret_from_fork_asm+0x1a/0x30
[   29.438432]  </TASK>
[   29.438450] 
[   29.454801] Allocated by task 293:
[   29.455175]  kasan_save_stack+0x3d/0x60
[   29.455658]  kasan_save_track+0x18/0x40
[   29.456123]  kasan_save_alloc_info+0x3b/0x50
[   29.456542]  __kasan_kmalloc+0xb7/0xc0
[   29.457005]  __kmalloc_noprof+0x1c4/0x500
[   29.457453]  kunit_kmalloc_array+0x25/0x60
[   29.457830]  copy_user_test_oob+0xac/0x10f0
[   29.458396]  kunit_try_run_case+0x1b3/0x490
[   29.458870]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.459423]  kthread+0x324/0x6d0
[   29.459818]  ret_from_fork+0x41/0x80
[   29.460100]  ret_from_fork_asm+0x1a/0x30
[   29.460449] 
[   29.460617] The buggy address belongs to the object at ffff888101af6100
[   29.460617]  which belongs to the cache kmalloc-128 of size 128
[   29.461687] The buggy address is located 0 bytes inside of
[   29.461687]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.462797] 
[   29.463121] The buggy address belongs to the physical page:
[   29.463677] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.464355] flags: 0x200000000000000(node=0|zone=2)
[   29.464822] page_type: f5(slab)
[   29.465269] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.465760] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.466285] page dumped because: kasan: bad access detected
[   29.466869] 
[   29.467099] Memory state around the buggy address:
[   29.467627]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.468368]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.468845] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.469282]                                                                 ^
[   29.469965]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.470702]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.471425] ==================================================================
---
[   29.353889] ==================================================================
[   29.354643] BUG: KASAN: slab-out-of-bounds in copy_user_test_oob+0x4ab/0x10f0
[   29.355377] Read of size 121 at addr ffff888101af6100 by task kunit_try_catch/293
[   29.355928] 
[   29.356240] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.356351] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.356388] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.356463] Call Trace:
[   29.356496]  <TASK>
[   29.356533]  dump_stack_lvl+0x73/0xb0
[   29.356610]  print_report+0xd1/0x640
[   29.356677]  ? __virt_addr_valid+0x1db/0x2d0
[   29.356767]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.356868]  kasan_report+0x102/0x140
[   29.356934]  ? copy_user_test_oob+0x4ab/0x10f0
[   29.357001]  ? copy_user_test_oob+0x4ab/0x10f0
[   29.357077]  kasan_check_range+0x10c/0x1c0
[   29.357136]  __kasan_check_read+0x15/0x20
[   29.357236]  copy_user_test_oob+0x4ab/0x10f0
[   29.357324]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.357423]  ? finish_task_switch.isra.0+0x153/0x700
[   29.357497]  ? __switch_to+0x5d9/0xf60
[   29.357562]  ? __schedule+0xc6f/0x27d0
[   29.357626]  ? __pfx_read_tsc+0x10/0x10
[   29.357678]  ? ktime_get_ts64+0x86/0x230
[   29.357765]  kunit_try_run_case+0x1b3/0x490
[   29.357865]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.357925]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.357989]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.358051]  ? __kthread_parkme+0x82/0x160
[   29.358110]  ? preempt_count_sub+0x50/0x80
[   29.358175]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.358332]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.358434]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.358507]  kthread+0x324/0x6d0
[   29.358588]  ? trace_preempt_on+0x20/0xc0
[   29.358677]  ? __pfx_kthread+0x10/0x10
[   29.358742]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.358806]  ? calculate_sigpending+0x7b/0xa0
[   29.358876]  ? __pfx_kthread+0x10/0x10
[   29.358940]  ret_from_fork+0x41/0x80
[   29.359002]  ? __pfx_kthread+0x10/0x10
[   29.359063]  ret_from_fork_asm+0x1a/0x30
[   29.359120]  </TASK>
[   29.359139] 
[   29.376341] Allocated by task 293:
[   29.376657]  kasan_save_stack+0x3d/0x60
[   29.377000]  kasan_save_track+0x18/0x40
[   29.377415]  kasan_save_alloc_info+0x3b/0x50
[   29.377824]  __kasan_kmalloc+0xb7/0xc0
[   29.378171]  __kmalloc_noprof+0x1c4/0x500
[   29.379306]  kunit_kmalloc_array+0x25/0x60
[   29.379635]  copy_user_test_oob+0xac/0x10f0
[   29.379977]  kunit_try_run_case+0x1b3/0x490
[   29.380821]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.381222]  kthread+0x324/0x6d0
[   29.381521]  ret_from_fork+0x41/0x80
[   29.381923]  ret_from_fork_asm+0x1a/0x30
[   29.382276] 
[   29.382968] The buggy address belongs to the object at ffff888101af6100
[   29.382968]  which belongs to the cache kmalloc-128 of size 128
[   29.383892] The buggy address is located 0 bytes inside of
[   29.383892]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.385114] 
[   29.385702] The buggy address belongs to the physical page:
[   29.386205] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.387000] flags: 0x200000000000000(node=0|zone=2)
[   29.387338] page_type: f5(slab)
[   29.387760] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.388459] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.388977] page dumped because: kasan: bad access detected
[   29.389531] 
[   29.389783] Memory state around the buggy address:
[   29.390219]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.391051]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.391697] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.392367]                                                                 ^
[   29.393122]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.393736]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.394511] ==================================================================

test log snippet boot parser 72

[   29.315571] ==================================================================
[   29.316110] BUG: KASAN: slab-out-of-bounds in copy_user_test_oob+0x3fe/0x10f0
[   29.316836] Write of size 121 at addr ffff888101af6100 by task kunit_try_catch/293
[   29.317549] 
[   29.317750] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.317855] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.317914] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.317992] Call Trace:
[   29.318033]  <TASK>
[   29.318068]  dump_stack_lvl+0x73/0xb0
[   29.318147]  print_report+0xd1/0x640
[   29.318251]  ? __virt_addr_valid+0x1db/0x2d0
[   29.318346]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.318465]  kasan_report+0x102/0x140
[   29.318558]  ? copy_user_test_oob+0x3fe/0x10f0
[   29.318628]  ? copy_user_test_oob+0x3fe/0x10f0
[   29.318700]  kasan_check_range+0x10c/0x1c0
[   29.318781]  __kasan_check_write+0x18/0x20
[   29.318870]  copy_user_test_oob+0x3fe/0x10f0
[   29.318941]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.318996]  ? finish_task_switch.isra.0+0x153/0x700
[   29.319059]  ? __switch_to+0x5d9/0xf60
[   29.319143]  ? __schedule+0xc6f/0x27d0
[   29.319231]  ? __pfx_read_tsc+0x10/0x10
[   29.319293]  ? ktime_get_ts64+0x86/0x230
[   29.319380]  kunit_try_run_case+0x1b3/0x490
[   29.319469]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.319506]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.319541]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.319576]  ? __kthread_parkme+0x82/0x160
[   29.319607]  ? preempt_count_sub+0x50/0x80
[   29.319642]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.319677]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.319712]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.319747]  kthread+0x324/0x6d0
[   29.319777]  ? trace_preempt_on+0x20/0xc0
[   29.319811]  ? __pfx_kthread+0x10/0x10
[   29.319843]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.319875]  ? calculate_sigpending+0x7b/0xa0
[   29.319910]  ? __pfx_kthread+0x10/0x10
[   29.319943]  ret_from_fork+0x41/0x80
[   29.319974]  ? __pfx_kthread+0x10/0x10
[   29.320006]  ret_from_fork_asm+0x1a/0x30
[   29.320050]  </TASK>
[   29.320069] 
[   29.333286] Allocated by task 293:
[   29.333681]  kasan_save_stack+0x3d/0x60
[   29.334124]  kasan_save_track+0x18/0x40
[   29.334584]  kasan_save_alloc_info+0x3b/0x50
[   29.334985]  __kasan_kmalloc+0xb7/0xc0
[   29.335273]  __kmalloc_noprof+0x1c4/0x500
[   29.335707]  kunit_kmalloc_array+0x25/0x60
[   29.336213]  copy_user_test_oob+0xac/0x10f0
[   29.336916]  kunit_try_run_case+0x1b3/0x490
[   29.337479]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.338298]  kthread+0x324/0x6d0
[   29.338659]  ret_from_fork+0x41/0x80
[   29.338938]  ret_from_fork_asm+0x1a/0x30
[   29.339229] 
[   29.339498] The buggy address belongs to the object at ffff888101af6100
[   29.339498]  which belongs to the cache kmalloc-128 of size 128
[   29.340738] The buggy address is located 0 bytes inside of
[   29.340738]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.341708] 
[   29.341863] The buggy address belongs to the physical page:
[   29.342210] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.344787] flags: 0x200000000000000(node=0|zone=2)
[   29.345111] page_type: f5(slab)
[   29.346565] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.346992] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.347740] page dumped because: kasan: bad access detected
[   29.348330] 
[   29.348599] Memory state around the buggy address:
[   29.348912]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.349618]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.350337] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.350839]                                                                 ^
[   29.351531]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.352067]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.352721] ==================================================================
---
[   29.395548] ==================================================================
[   29.397380] BUG: KASAN: slab-out-of-bounds in copy_user_test_oob+0x558/0x10f0
[   29.398041] Write of size 121 at addr ffff888101af6100 by task kunit_try_catch/293
[   29.398893] 
[   29.399083] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.399202] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.399239] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.399297] Call Trace:
[   29.399333]  <TASK>
[   29.399366]  dump_stack_lvl+0x73/0xb0
[   29.399471]  print_report+0xd1/0x640
[   29.399540]  ? __virt_addr_valid+0x1db/0x2d0
[   29.399604]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.399674]  kasan_report+0x102/0x140
[   29.399735]  ? copy_user_test_oob+0x558/0x10f0
[   29.399800]  ? copy_user_test_oob+0x558/0x10f0
[   29.399870]  kasan_check_range+0x10c/0x1c0
[   29.399933]  __kasan_check_write+0x18/0x20
[   29.399991]  copy_user_test_oob+0x558/0x10f0
[   29.400054]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.400115]  ? finish_task_switch.isra.0+0x153/0x700
[   29.400176]  ? __switch_to+0x5d9/0xf60
[   29.400244]  ? __schedule+0xc6f/0x27d0
[   29.400310]  ? __pfx_read_tsc+0x10/0x10
[   29.400368]  ? ktime_get_ts64+0x86/0x230
[   29.400491]  kunit_try_run_case+0x1b3/0x490
[   29.400574]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.400636]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.400674]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.400710]  ? __kthread_parkme+0x82/0x160
[   29.400742]  ? preempt_count_sub+0x50/0x80
[   29.400777]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.400812]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.400847]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.400882]  kthread+0x324/0x6d0
[   29.400913]  ? trace_preempt_on+0x20/0xc0
[   29.400969]  ? __pfx_kthread+0x10/0x10
[   29.401003]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.401036]  ? calculate_sigpending+0x7b/0xa0
[   29.401071]  ? __pfx_kthread+0x10/0x10
[   29.401104]  ret_from_fork+0x41/0x80
[   29.401136]  ? __pfx_kthread+0x10/0x10
[   29.401168]  ret_from_fork_asm+0x1a/0x30
[   29.401255]  </TASK>
[   29.401274] 
[   29.413431] Allocated by task 293:
[   29.413791]  kasan_save_stack+0x3d/0x60
[   29.414202]  kasan_save_track+0x18/0x40
[   29.414668]  kasan_save_alloc_info+0x3b/0x50
[   29.415128]  __kasan_kmalloc+0xb7/0xc0
[   29.415567]  __kmalloc_noprof+0x1c4/0x500
[   29.415930]  kunit_kmalloc_array+0x25/0x60
[   29.416285]  copy_user_test_oob+0xac/0x10f0
[   29.416699]  kunit_try_run_case+0x1b3/0x490
[   29.417006]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.417561]  kthread+0x324/0x6d0
[   29.417943]  ret_from_fork+0x41/0x80
[   29.419925]  ret_from_fork_asm+0x1a/0x30
[   29.420260] 
[   29.420443] The buggy address belongs to the object at ffff888101af6100
[   29.420443]  which belongs to the cache kmalloc-128 of size 128
[   29.421040] The buggy address is located 0 bytes inside of
[   29.421040]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.422001] 
[   29.422300] The buggy address belongs to the physical page:
[   29.422896] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.424264] flags: 0x200000000000000(node=0|zone=2)
[   29.424934] page_type: f5(slab)
[   29.425388] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.426311] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.427273] page dumped because: kasan: bad access detected
[   29.427963] 
[   29.428317] Memory state around the buggy address:
[   29.428897]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.429741]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.430484] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.431244]                                                                 ^
[   29.431847]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.432096]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.432492] ==================================================================

test log snippet boot parser 73

[   26.971704] ==================================================================
[   26.972300] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x8fa/0x5450
[   26.972738] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.973530] 
[   26.973966] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.974245] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.974301] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.974516] Call Trace:
[   26.974642]  <TASK>
[   26.974762]  dump_stack_lvl+0x73/0xb0
[   26.974883]  print_report+0xd1/0x640
[   26.974959]  ? __virt_addr_valid+0x1db/0x2d0
[   26.975039]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.975124]  kasan_report+0x102/0x140
[   26.975237]  ? kasan_atomics_helper+0x8fa/0x5450
[   26.975326]  ? kasan_atomics_helper+0x8fa/0x5450
[   26.975429]  kasan_check_range+0x10c/0x1c0
[   26.975511]  __kasan_check_write+0x18/0x20
[   26.975618]  kasan_atomics_helper+0x8fa/0x5450
[   26.975711]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.975793]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.975871]  ? kasan_atomics+0x153/0x310
[   26.975950]  kasan_atomics+0x1dd/0x310
[   26.976016]  ? __pfx_kasan_atomics+0x10/0x10
[   26.976090]  ? __pfx_read_tsc+0x10/0x10
[   26.976153]  ? ktime_get_ts64+0x86/0x230
[   26.976273]  kunit_try_run_case+0x1b3/0x490
[   26.976358]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.976455]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.976751]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.976845]  ? __kthread_parkme+0x82/0x160
[   26.976914]  ? preempt_count_sub+0x50/0x80
[   26.976992]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.977051]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.977099]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.977145]  kthread+0x324/0x6d0
[   26.977178]  ? trace_preempt_on+0x20/0xc0
[   26.977256]  ? __pfx_kthread+0x10/0x10
[   26.977291]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.977329]  ? calculate_sigpending+0x7b/0xa0
[   26.977369]  ? __pfx_kthread+0x10/0x10
[   26.977422]  ret_from_fork+0x41/0x80
[   26.977458]  ? __pfx_kthread+0x10/0x10
[   26.977493]  ret_from_fork_asm+0x1a/0x30
[   26.977542]  </TASK>
[   26.977560] 
[   26.999962] Allocated by task 273:
[   27.000343]  kasan_save_stack+0x3d/0x60
[   27.001236]  kasan_save_track+0x18/0x40
[   27.001746]  kasan_save_alloc_info+0x3b/0x50
[   27.002297]  __kasan_kmalloc+0xb7/0xc0
[   27.002951]  __kmalloc_cache_noprof+0x184/0x410
[   27.003560]  kasan_atomics+0x96/0x310
[   27.004344]  kunit_try_run_case+0x1b3/0x490
[   27.005023]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.005810]  kthread+0x324/0x6d0
[   27.006282]  ret_from_fork+0x41/0x80
[   27.006921]  ret_from_fork_asm+0x1a/0x30
[   27.007454] 
[   27.007891] The buggy address belongs to the object at ffff888101aebe00
[   27.007891]  which belongs to the cache kmalloc-64 of size 64
[   27.008915] The buggy address is located 0 bytes to the right of
[   27.008915]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.010337] 
[   27.010565] The buggy address belongs to the physical page:
[   27.010942] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.011796] flags: 0x200000000000000(node=0|zone=2)
[   27.012374] page_type: f5(slab)
[   27.012960] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.013885] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.014848] page dumped because: kasan: bad access detected
[   27.015337] 
[   27.015571] Memory state around the buggy address:
[   27.016423]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.017317]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.018144] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.019003]                                      ^
[   27.019570]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.020273]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.021272] ==================================================================
---
[   28.427325] ==================================================================
[   28.428418] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1c19/0x5450
[   28.428988] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.429933] 
[   28.430296] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.430437] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.430480] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.430541] Call Trace:
[   28.430575]  <TASK>
[   28.430610]  dump_stack_lvl+0x73/0xb0
[   28.430689]  print_report+0xd1/0x640
[   28.430754]  ? __virt_addr_valid+0x1db/0x2d0
[   28.430820]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.431062]  kasan_report+0x102/0x140
[   28.431171]  ? kasan_atomics_helper+0x1c19/0x5450
[   28.431274]  ? kasan_atomics_helper+0x1c19/0x5450
[   28.431356]  kasan_check_range+0x10c/0x1c0
[   28.431443]  __kasan_check_write+0x18/0x20
[   28.431501]  kasan_atomics_helper+0x1c19/0x5450
[   28.431542]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.431578]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.431611]  ? kasan_atomics+0x153/0x310
[   28.431648]  kasan_atomics+0x1dd/0x310
[   28.431679]  ? __pfx_kasan_atomics+0x10/0x10
[   28.431712]  ? __pfx_read_tsc+0x10/0x10
[   28.431742]  ? ktime_get_ts64+0x86/0x230
[   28.431777]  kunit_try_run_case+0x1b3/0x490
[   28.431813]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.431846]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.431881]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.431915]  ? __kthread_parkme+0x82/0x160
[   28.431944]  ? preempt_count_sub+0x50/0x80
[   28.431979]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.432013]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.432047]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.432082]  kthread+0x324/0x6d0
[   28.432111]  ? trace_preempt_on+0x20/0xc0
[   28.432142]  ? __pfx_kthread+0x10/0x10
[   28.432174]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.432241]  ? calculate_sigpending+0x7b/0xa0
[   28.432280]  ? __pfx_kthread+0x10/0x10
[   28.432312]  ret_from_fork+0x41/0x80
[   28.432344]  ? __pfx_kthread+0x10/0x10
[   28.432375]  ret_from_fork_asm+0x1a/0x30
[   28.432436]  </TASK>
[   28.432454] 
[   28.448413] Allocated by task 273:
[   28.448792]  kasan_save_stack+0x3d/0x60
[   28.449276]  kasan_save_track+0x18/0x40
[   28.449734]  kasan_save_alloc_info+0x3b/0x50
[   28.450148]  __kasan_kmalloc+0xb7/0xc0
[   28.450645]  __kmalloc_cache_noprof+0x184/0x410
[   28.451007]  kasan_atomics+0x96/0x310
[   28.451345]  kunit_try_run_case+0x1b3/0x490
[   28.451868]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.452577]  kthread+0x324/0x6d0
[   28.452960]  ret_from_fork+0x41/0x80
[   28.453497]  ret_from_fork_asm+0x1a/0x30
[   28.453800] 
[   28.453964] The buggy address belongs to the object at ffff888101aebe00
[   28.453964]  which belongs to the cache kmalloc-64 of size 64
[   28.454574] The buggy address is located 0 bytes to the right of
[   28.454574]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.454958] 
[   28.455050] The buggy address belongs to the physical page:
[   28.455233] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.456518] flags: 0x200000000000000(node=0|zone=2)
[   28.457018] page_type: f5(slab)
[   28.458577] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.459664] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.460609] page dumped because: kasan: bad access detected
[   28.461285] 
[   28.461532] Memory state around the buggy address:
[   28.462004]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.462913]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.463930] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.464795]                                      ^
[   28.465499]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.466333]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.466790] ==================================================================
---
[   28.983061] ==================================================================
[   28.983682] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x224d/0x5450
[   28.984442] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.984869] 
[   28.985251] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.985360] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.985399] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.985477] Call Trace:
[   28.985515]  <TASK>
[   28.985550]  dump_stack_lvl+0x73/0xb0
[   28.985626]  print_report+0xd1/0x640
[   28.985693]  ? __virt_addr_valid+0x1db/0x2d0
[   28.985788]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.985923]  kasan_report+0x102/0x140
[   28.986005]  ? kasan_atomics_helper+0x224d/0x5450
[   28.986132]  ? kasan_atomics_helper+0x224d/0x5450
[   28.986213]  kasan_check_range+0x10c/0x1c0
[   28.986279]  __kasan_check_write+0x18/0x20
[   28.986341]  kasan_atomics_helper+0x224d/0x5450
[   28.986437]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.986508]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.986572]  ? kasan_atomics+0x153/0x310
[   28.986640]  kasan_atomics+0x1dd/0x310
[   28.986702]  ? __pfx_kasan_atomics+0x10/0x10
[   28.986763]  ? __pfx_read_tsc+0x10/0x10
[   28.986823]  ? ktime_get_ts64+0x86/0x230
[   28.986888]  kunit_try_run_case+0x1b3/0x490
[   28.986963]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.987029]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.987094]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.987161]  ? __kthread_parkme+0x82/0x160
[   28.987223]  ? preempt_count_sub+0x50/0x80
[   28.987291]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.987358]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.987506]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.987592]  kthread+0x324/0x6d0
[   28.987652]  ? trace_preempt_on+0x20/0xc0
[   28.987716]  ? __pfx_kthread+0x10/0x10
[   28.987779]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.987844]  ? calculate_sigpending+0x7b/0xa0
[   28.987914]  ? __pfx_kthread+0x10/0x10
[   28.987978]  ret_from_fork+0x41/0x80
[   28.988037]  ? __pfx_kthread+0x10/0x10
[   28.988100]  ret_from_fork_asm+0x1a/0x30
[   28.988187]  </TASK>
[   28.988222] 
[   29.003662] Allocated by task 273:
[   29.004014]  kasan_save_stack+0x3d/0x60
[   29.004682]  kasan_save_track+0x18/0x40
[   29.005112]  kasan_save_alloc_info+0x3b/0x50
[   29.005744]  __kasan_kmalloc+0xb7/0xc0
[   29.006148]  __kmalloc_cache_noprof+0x184/0x410
[   29.006499]  kasan_atomics+0x96/0x310
[   29.006906]  kunit_try_run_case+0x1b3/0x490
[   29.007484]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.008083]  kthread+0x324/0x6d0
[   29.008549]  ret_from_fork+0x41/0x80
[   29.009099]  ret_from_fork_asm+0x1a/0x30
[   29.009563] 
[   29.009796] The buggy address belongs to the object at ffff888101aebe00
[   29.009796]  which belongs to the cache kmalloc-64 of size 64
[   29.010816] The buggy address is located 0 bytes to the right of
[   29.010816]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   29.011744] 
[   29.012119] The buggy address belongs to the physical page:
[   29.012789] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   29.013651] flags: 0x200000000000000(node=0|zone=2)
[   29.014347] page_type: f5(slab)
[   29.014769] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   29.015428] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   29.016066] page dumped because: kasan: bad access detected
[   29.016663] 
[   29.016892] Memory state around the buggy address:
[   29.017587]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   29.018000]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   29.018642] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   29.019503]                                      ^
[   29.020037]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.020648]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.021039] ==================================================================
---
[   27.253879] ==================================================================
[   27.254699] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xd48/0x5450
[   27.255544] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.256109] 
[   27.256345] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.256470] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.256507] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.256626] Call Trace:
[   27.256667]  <TASK>
[   27.256754]  dump_stack_lvl+0x73/0xb0
[   27.256843]  print_report+0xd1/0x640
[   27.256981]  ? __virt_addr_valid+0x1db/0x2d0
[   27.257056]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.257130]  kasan_report+0x102/0x140
[   27.257236]  ? kasan_atomics_helper+0xd48/0x5450
[   27.257319]  ? kasan_atomics_helper+0xd48/0x5450
[   27.257398]  kasan_check_range+0x10c/0x1c0
[   27.257484]  __kasan_check_write+0x18/0x20
[   27.257548]  kasan_atomics_helper+0xd48/0x5450
[   27.257619]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.257685]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.257751]  ? kasan_atomics+0x153/0x310
[   27.257812]  kasan_atomics+0x1dd/0x310
[   27.257951]  ? __pfx_kasan_atomics+0x10/0x10
[   27.258020]  ? __pfx_read_tsc+0x10/0x10
[   27.258075]  ? ktime_get_ts64+0x86/0x230
[   27.258140]  kunit_try_run_case+0x1b3/0x490
[   27.258313]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.258421]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.258493]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.258557]  ? __kthread_parkme+0x82/0x160
[   27.258614]  ? preempt_count_sub+0x50/0x80
[   27.258684]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.258755]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.258822]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.258891]  kthread+0x324/0x6d0
[   27.258952]  ? trace_preempt_on+0x20/0xc0
[   27.259011]  ? __pfx_kthread+0x10/0x10
[   27.259075]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.259136]  ? calculate_sigpending+0x7b/0xa0
[   27.259316]  ? __pfx_kthread+0x10/0x10
[   27.259394]  ret_from_fork+0x41/0x80
[   27.259476]  ? __pfx_kthread+0x10/0x10
[   27.259538]  ret_from_fork_asm+0x1a/0x30
[   27.259628]  </TASK>
[   27.259663] 
[   27.279444] Allocated by task 273:
[   27.279854]  kasan_save_stack+0x3d/0x60
[   27.280359]  kasan_save_track+0x18/0x40
[   27.281111]  kasan_save_alloc_info+0x3b/0x50
[   27.282370]  __kasan_kmalloc+0xb7/0xc0
[   27.282950]  __kmalloc_cache_noprof+0x184/0x410
[   27.283537]  kasan_atomics+0x96/0x310
[   27.283983]  kunit_try_run_case+0x1b3/0x490
[   27.284452]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.285910]  kthread+0x324/0x6d0
[   27.286484]  ret_from_fork+0x41/0x80
[   27.286936]  ret_from_fork_asm+0x1a/0x30
[   27.287463] 
[   27.287708] The buggy address belongs to the object at ffff888101aebe00
[   27.287708]  which belongs to the cache kmalloc-64 of size 64
[   27.288475] The buggy address is located 0 bytes to the right of
[   27.288475]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.290254] 
[   27.290633] The buggy address belongs to the physical page:
[   27.291342] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.292026] flags: 0x200000000000000(node=0|zone=2)
[   27.292710] page_type: f5(slab)
[   27.293234] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.294168] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.295034] page dumped because: kasan: bad access detected
[   27.295768] 
[   27.296245] Memory state around the buggy address:
[   27.296592]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.297049]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.297822] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.298620]                                      ^
[   27.299085]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.300236]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.300665] ==================================================================
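Each report in this group follows the same pattern: the KASAN KUnit atomics test (the `kasan_atomics` case visible in the call traces) allocates 48 bytes, which lands in the `kmalloc-64` cache, then performs an atomic write starting exactly at the end of the allocation. A minimal sketch checking that the addresses printed in the first report above are self-consistent (the constants are copied from the log; nothing else is assumed):

```python
# Decode the key addresses from the first KASAN report above.
obj_start = 0xffff888101aebe00   # "object at ffff888101aebe00" (kmalloc-64 cache)
obj_end   = 0xffff888101aebe30   # end of "allocated 48-byte region"
fault     = 0xffff888101aebe30   # "Write of size 4 at addr ffff888101aebe30"

region = obj_end - obj_start
assert region == 48              # 48 bytes allocated out of a 64-byte slot

# The faulting write begins exactly at the end of the region, i.e.
# "0 bytes to the right" of it, as the report states.
assert fault - obj_end == 0
print(f"write starts {fault - obj_end} bytes past a {region}-byte region")
```

The later reports in this snippet only vary the offset inside `kasan_atomics_helper` and the access size (4 vs. 8 bytes); the object, fault address, and region stay identical.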
---
[   28.628395] ==================================================================
[   28.628942] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1eab/0x5450
[   28.629676] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.630360] 
[   28.630628] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.630770] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.630813] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.630869] Call Trace:
[   28.630902]  <TASK>
[   28.630935]  dump_stack_lvl+0x73/0xb0
[   28.631011]  print_report+0xd1/0x640
[   28.631120]  ? __virt_addr_valid+0x1db/0x2d0
[   28.631234]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.631316]  kasan_report+0x102/0x140
[   28.631380]  ? kasan_atomics_helper+0x1eab/0x5450
[   28.631469]  ? kasan_atomics_helper+0x1eab/0x5450
[   28.631551]  kasan_check_range+0x10c/0x1c0
[   28.631616]  __kasan_check_write+0x18/0x20
[   28.631675]  kasan_atomics_helper+0x1eab/0x5450
[   28.631746]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.631828]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.631921]  ? kasan_atomics+0x153/0x310
[   28.631993]  kasan_atomics+0x1dd/0x310
[   28.632057]  ? __pfx_kasan_atomics+0x10/0x10
[   28.632123]  ? __pfx_read_tsc+0x10/0x10
[   28.632220]  ? ktime_get_ts64+0x86/0x230
[   28.632299]  kunit_try_run_case+0x1b3/0x490
[   28.632374]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.632455]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.632523]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.632585]  ? __kthread_parkme+0x82/0x160
[   28.632666]  ? preempt_count_sub+0x50/0x80
[   28.632759]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.632844]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.632942]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.633018]  kthread+0x324/0x6d0
[   28.633079]  ? trace_preempt_on+0x20/0xc0
[   28.633136]  ? __pfx_kthread+0x10/0x10
[   28.633228]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.633270]  ? calculate_sigpending+0x7b/0xa0
[   28.633307]  ? __pfx_kthread+0x10/0x10
[   28.633340]  ret_from_fork+0x41/0x80
[   28.633372]  ? __pfx_kthread+0x10/0x10
[   28.633422]  ret_from_fork_asm+0x1a/0x30
[   28.633471]  </TASK>
[   28.633489] 
[   28.649688] Allocated by task 273:
[   28.650115]  kasan_save_stack+0x3d/0x60
[   28.650948]  kasan_save_track+0x18/0x40
[   28.651821]  kasan_save_alloc_info+0x3b/0x50
[   28.652139]  __kasan_kmalloc+0xb7/0xc0
[   28.652792]  __kmalloc_cache_noprof+0x184/0x410
[   28.653117]  kasan_atomics+0x96/0x310
[   28.653715]  kunit_try_run_case+0x1b3/0x490
[   28.654138]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.654519]  kthread+0x324/0x6d0
[   28.654842]  ret_from_fork+0x41/0x80
[   28.655600]  ret_from_fork_asm+0x1a/0x30
[   28.656105] 
[   28.656424] The buggy address belongs to the object at ffff888101aebe00
[   28.656424]  which belongs to the cache kmalloc-64 of size 64
[   28.657239] The buggy address is located 0 bytes to the right of
[   28.657239]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.658237] 
[   28.658529] The buggy address belongs to the physical page:
[   28.659078] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.659872] flags: 0x200000000000000(node=0|zone=2)
[   28.660444] page_type: f5(slab)
[   28.660824] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.661725] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.662442] page dumped because: kasan: bad access detected
[   28.663434] 
[   28.663683] Memory state around the buggy address:
[   28.664042]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.664461]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.665117] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.665832]                                      ^
[   28.666210]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.666830]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.667606] ==================================================================
---
[   27.989061] ==================================================================
[   27.989637] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x151e/0x5450
[   27.990388] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.992243] 
[   27.992485] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.992592] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.992631] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.992686] Call Trace:
[   27.992721]  <TASK>
[   27.992745]  dump_stack_lvl+0x73/0xb0
[   27.992800]  print_report+0xd1/0x640
[   27.992851]  ? __virt_addr_valid+0x1db/0x2d0
[   27.992889]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.992927]  kasan_report+0x102/0x140
[   27.992959]  ? kasan_atomics_helper+0x151e/0x5450
[   27.992995]  ? kasan_atomics_helper+0x151e/0x5450
[   27.993034]  kasan_check_range+0x10c/0x1c0
[   27.993067]  __kasan_check_write+0x18/0x20
[   27.993097]  kasan_atomics_helper+0x151e/0x5450
[   27.993133]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.993167]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.993239]  ? kasan_atomics+0x153/0x310
[   27.993321]  kasan_atomics+0x1dd/0x310
[   27.993386]  ? __pfx_kasan_atomics+0x10/0x10
[   27.993472]  ? __pfx_read_tsc+0x10/0x10
[   27.993531]  ? ktime_get_ts64+0x86/0x230
[   27.993572]  kunit_try_run_case+0x1b3/0x490
[   27.993612]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.993647]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.993682]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.993716]  ? __kthread_parkme+0x82/0x160
[   27.993748]  ? preempt_count_sub+0x50/0x80
[   27.993783]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.993819]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.993854]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.993889]  kthread+0x324/0x6d0
[   27.993919]  ? trace_preempt_on+0x20/0xc0
[   27.993951]  ? __pfx_kthread+0x10/0x10
[   27.993983]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.994014]  ? calculate_sigpending+0x7b/0xa0
[   27.994050]  ? __pfx_kthread+0x10/0x10
[   27.994083]  ret_from_fork+0x41/0x80
[   27.994114]  ? __pfx_kthread+0x10/0x10
[   27.994146]  ret_from_fork_asm+0x1a/0x30
[   27.994229]  </TASK>
[   27.994251] 
[   28.012615] Allocated by task 273:
[   28.013002]  kasan_save_stack+0x3d/0x60
[   28.013531]  kasan_save_track+0x18/0x40
[   28.014041]  kasan_save_alloc_info+0x3b/0x50
[   28.014647]  __kasan_kmalloc+0xb7/0xc0
[   28.015135]  __kmalloc_cache_noprof+0x184/0x410
[   28.015560]  kasan_atomics+0x96/0x310
[   28.015924]  kunit_try_run_case+0x1b3/0x490
[   28.016423]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.017068]  kthread+0x324/0x6d0
[   28.017432]  ret_from_fork+0x41/0x80
[   28.017882]  ret_from_fork_asm+0x1a/0x30
[   28.018342] 
[   28.018613] The buggy address belongs to the object at ffff888101aebe00
[   28.018613]  which belongs to the cache kmalloc-64 of size 64
[   28.019501] The buggy address is located 0 bytes to the right of
[   28.019501]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.020397] 
[   28.020653] The buggy address belongs to the physical page:
[   28.021178] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.021858] flags: 0x200000000000000(node=0|zone=2)
[   28.022450] page_type: f5(slab)
[   28.022832] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.023469] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.023976] page dumped because: kasan: bad access detected
[   28.024558] 
[   28.024792] Memory state around the buggy address:
[   28.025301]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.025855]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.026503] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.027128]                                      ^
[   28.027689]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.028256]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.028939] ==================================================================
---
[   28.183962] ==================================================================
[   28.184779] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1819/0x5450
[   28.186496] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.187133] 
[   28.188949] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.189553] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.189597] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.189638] Call Trace:
[   28.189658]  <TASK>
[   28.189682]  dump_stack_lvl+0x73/0xb0
[   28.189761]  print_report+0xd1/0x640
[   28.189819]  ? __virt_addr_valid+0x1db/0x2d0
[   28.189878]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.189946]  kasan_report+0x102/0x140
[   28.190004]  ? kasan_atomics_helper+0x1819/0x5450
[   28.190072]  ? kasan_atomics_helper+0x1819/0x5450
[   28.190147]  kasan_check_range+0x10c/0x1c0
[   28.190231]  __kasan_check_write+0x18/0x20
[   28.190292]  kasan_atomics_helper+0x1819/0x5450
[   28.190372]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.190460]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.190520]  ? kasan_atomics+0x153/0x310
[   28.190590]  kasan_atomics+0x1dd/0x310
[   28.190647]  ? __pfx_kasan_atomics+0x10/0x10
[   28.190710]  ? __pfx_read_tsc+0x10/0x10
[   28.190763]  ? ktime_get_ts64+0x86/0x230
[   28.190867]  kunit_try_run_case+0x1b3/0x490
[   28.190952]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.191016]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.191083]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.191151]  ? __kthread_parkme+0x82/0x160
[   28.191254]  ? preempt_count_sub+0x50/0x80
[   28.191328]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.191414]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.191490]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.191560]  kthread+0x324/0x6d0
[   28.191618]  ? trace_preempt_on+0x20/0xc0
[   28.191681]  ? __pfx_kthread+0x10/0x10
[   28.191740]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.191804]  ? calculate_sigpending+0x7b/0xa0
[   28.191872]  ? __pfx_kthread+0x10/0x10
[   28.191934]  ret_from_fork+0x41/0x80
[   28.191996]  ? __pfx_kthread+0x10/0x10
[   28.192059]  ret_from_fork_asm+0x1a/0x30
[   28.192146]  </TASK>
[   28.192177] 
[   28.204831] Allocated by task 273:
[   28.205242]  kasan_save_stack+0x3d/0x60
[   28.205631]  kasan_save_track+0x18/0x40
[   28.206059]  kasan_save_alloc_info+0x3b/0x50
[   28.206486]  __kasan_kmalloc+0xb7/0xc0
[   28.206906]  __kmalloc_cache_noprof+0x184/0x410
[   28.207385]  kasan_atomics+0x96/0x310
[   28.207752]  kunit_try_run_case+0x1b3/0x490
[   28.208105]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.208684]  kthread+0x324/0x6d0
[   28.208997]  ret_from_fork+0x41/0x80
[   28.209371]  ret_from_fork_asm+0x1a/0x30
[   28.209822] 
[   28.210041] The buggy address belongs to the object at ffff888101aebe00
[   28.210041]  which belongs to the cache kmalloc-64 of size 64
[   28.210797] The buggy address is located 0 bytes to the right of
[   28.210797]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.211871] 
[   28.212107] The buggy address belongs to the physical page:
[   28.212635] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.213174] flags: 0x200000000000000(node=0|zone=2)
[   28.213616] page_type: f5(slab)
[   28.213874] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.214342] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.214781] page dumped because: kasan: bad access detected
[   28.215108] 
[   28.215321] Memory state around the buggy address:
[   28.215808]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.216530]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.217238] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.218011]                                      ^
[   28.218886]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.221014]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.221695] ==================================================================
---
[   27.150124] ==================================================================
[   27.150977] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xb6b/0x5450
[   27.151503] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.151921] 
[   27.152101] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.152253] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.152297] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.152359] Call Trace:
[   27.152394]  <TASK>
[   27.152535]  dump_stack_lvl+0x73/0xb0
[   27.152617]  print_report+0xd1/0x640
[   27.152685]  ? __virt_addr_valid+0x1db/0x2d0
[   27.152754]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.152826]  kasan_report+0x102/0x140
[   27.152886]  ? kasan_atomics_helper+0xb6b/0x5450
[   27.152954]  ? kasan_atomics_helper+0xb6b/0x5450
[   27.153034]  kasan_check_range+0x10c/0x1c0
[   27.153099]  __kasan_check_write+0x18/0x20
[   27.153160]  kasan_atomics_helper+0xb6b/0x5450
[   27.153282]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.153356]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.153439]  ? kasan_atomics+0x153/0x310
[   27.153511]  kasan_atomics+0x1dd/0x310
[   27.153573]  ? __pfx_kasan_atomics+0x10/0x10
[   27.153638]  ? __pfx_read_tsc+0x10/0x10
[   27.153696]  ? ktime_get_ts64+0x86/0x230
[   27.153767]  kunit_try_run_case+0x1b3/0x490
[   27.153840]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.153904]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.153971]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.154039]  ? __kthread_parkme+0x82/0x160
[   27.154101]  ? preempt_count_sub+0x50/0x80
[   27.154170]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.154290]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.154367]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.154454]  kthread+0x324/0x6d0
[   27.154514]  ? trace_preempt_on+0x20/0xc0
[   27.154575]  ? __pfx_kthread+0x10/0x10
[   27.154634]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.154696]  ? calculate_sigpending+0x7b/0xa0
[   27.154760]  ? __pfx_kthread+0x10/0x10
[   27.154823]  ret_from_fork+0x41/0x80
[   27.154881]  ? __pfx_kthread+0x10/0x10
[   27.154963]  ret_from_fork_asm+0x1a/0x30
[   27.155074]  </TASK>
[   27.155105] 
[   27.168312] Allocated by task 273:
[   27.168687]  kasan_save_stack+0x3d/0x60
[   27.169098]  kasan_save_track+0x18/0x40
[   27.169515]  kasan_save_alloc_info+0x3b/0x50
[   27.169820]  __kasan_kmalloc+0xb7/0xc0
[   27.170101]  __kmalloc_cache_noprof+0x184/0x410
[   27.170485]  kasan_atomics+0x96/0x310
[   27.170895]  kunit_try_run_case+0x1b3/0x490
[   27.171398]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.171963]  kthread+0x324/0x6d0
[   27.172389]  ret_from_fork+0x41/0x80
[   27.172810]  ret_from_fork_asm+0x1a/0x30
[   27.173252] 
[   27.173488] The buggy address belongs to the object at ffff888101aebe00
[   27.173488]  which belongs to the cache kmalloc-64 of size 64
[   27.174248] The buggy address is located 0 bytes to the right of
[   27.174248]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.175339] 
[   27.175588] The buggy address belongs to the physical page:
[   27.176032] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.176657] flags: 0x200000000000000(node=0|zone=2)
[   27.176979] page_type: f5(slab)
[   27.177382] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.178080] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.178693] page dumped because: kasan: bad access detected
[   27.179133] 
[   27.179343] Memory state around the buggy address:
[   27.179669]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.180333]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.180996] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.181677]                                      ^
[   27.182070]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.182778]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.183329] ==================================================================
---
[   27.340941] ==================================================================
[   27.341585] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xe79/0x5450
[   27.342440] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.342903] 
[   27.343148] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.343347] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.343386] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.343460] Call Trace:
[   27.343496]  <TASK>
[   27.343597]  dump_stack_lvl+0x73/0xb0
[   27.343835]  print_report+0xd1/0x640
[   27.344066]  ? __virt_addr_valid+0x1db/0x2d0
[   27.344330]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.344501]  kasan_report+0x102/0x140
[   27.344611]  ? kasan_atomics_helper+0xe79/0x5450
[   27.344684]  ? kasan_atomics_helper+0xe79/0x5450
[   27.344766]  kasan_check_range+0x10c/0x1c0
[   27.344829]  __kasan_check_write+0x18/0x20
[   27.344891]  kasan_atomics_helper+0xe79/0x5450
[   27.344962]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.345028]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.345092]  ? kasan_atomics+0x153/0x310
[   27.345164]  kasan_atomics+0x1dd/0x310
[   27.345225]  ? __pfx_kasan_atomics+0x10/0x10
[   27.345294]  ? __pfx_read_tsc+0x10/0x10
[   27.345349]  ? ktime_get_ts64+0x86/0x230
[   27.345452]  kunit_try_run_case+0x1b3/0x490
[   27.345530]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.345594]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.345657]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.345725]  ? __kthread_parkme+0x82/0x160
[   27.345786]  ? preempt_count_sub+0x50/0x80
[   27.345852]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.345922]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.345992]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.346062]  kthread+0x324/0x6d0
[   27.346120]  ? trace_preempt_on+0x20/0xc0
[   27.346183]  ? __pfx_kthread+0x10/0x10
[   27.346267]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.346332]  ? calculate_sigpending+0x7b/0xa0
[   27.346397]  ? __pfx_kthread+0x10/0x10
[   27.346478]  ret_from_fork+0x41/0x80
[   27.346533]  ? __pfx_kthread+0x10/0x10
[   27.346766]  ret_from_fork_asm+0x1a/0x30
[   27.346996]  </TASK>
[   27.347120] 
[   27.364598] Allocated by task 273:
[   27.365120]  kasan_save_stack+0x3d/0x60
[   27.365522]  kasan_save_track+0x18/0x40
[   27.365944]  kasan_save_alloc_info+0x3b/0x50
[   27.366263]  __kasan_kmalloc+0xb7/0xc0
[   27.366665]  __kmalloc_cache_noprof+0x184/0x410
[   27.367242]  kasan_atomics+0x96/0x310
[   27.367669]  kunit_try_run_case+0x1b3/0x490
[   27.368051]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.368687]  kthread+0x324/0x6d0
[   27.369058]  ret_from_fork+0x41/0x80
[   27.369474]  ret_from_fork_asm+0x1a/0x30
[   27.369762] 
[   27.369929] The buggy address belongs to the object at ffff888101aebe00
[   27.369929]  which belongs to the cache kmalloc-64 of size 64
[   27.370669] The buggy address is located 0 bytes to the right of
[   27.370669]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.371978] 
[   27.372243] The buggy address belongs to the physical page:
[   27.372784] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.373296] flags: 0x200000000000000(node=0|zone=2)
[   27.373792] page_type: f5(slab)
[   27.374085] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.375094] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.375526] page dumped because: kasan: bad access detected
[   27.375853] 
[   27.376020] Memory state around the buggy address:
[   27.376503]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.377326]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.377988] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.378789]                                      ^
[   27.379271]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.379774]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.380170] ==================================================================
---
[   28.110269] ==================================================================
[   28.112141] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x16e8/0x5450
[   28.113775] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.115249] 
[   28.115883] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.115996] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.116035] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.116094] Call Trace:
[   28.116131]  <TASK>
[   28.116166]  dump_stack_lvl+0x73/0xb0
[   28.116613]  print_report+0xd1/0x640
[   28.116692]  ? __virt_addr_valid+0x1db/0x2d0
[   28.116757]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.116800]  kasan_report+0x102/0x140
[   28.116834]  ? kasan_atomics_helper+0x16e8/0x5450
[   28.116869]  ? kasan_atomics_helper+0x16e8/0x5450
[   28.116911]  kasan_check_range+0x10c/0x1c0
[   28.116943]  __kasan_check_write+0x18/0x20
[   28.116973]  kasan_atomics_helper+0x16e8/0x5450
[   28.117008]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.117042]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.117074]  ? kasan_atomics+0x153/0x310
[   28.117109]  kasan_atomics+0x1dd/0x310
[   28.117140]  ? __pfx_kasan_atomics+0x10/0x10
[   28.117172]  ? __pfx_read_tsc+0x10/0x10
[   28.117259]  ? ktime_get_ts64+0x86/0x230
[   28.117306]  kunit_try_run_case+0x1b3/0x490
[   28.117345]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.117380]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.117447]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.117488]  ? __kthread_parkme+0x82/0x160
[   28.117520]  ? preempt_count_sub+0x50/0x80
[   28.117553]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.117590]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.117625]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.117660]  kthread+0x324/0x6d0
[   28.117689]  ? trace_preempt_on+0x20/0xc0
[   28.117722]  ? __pfx_kthread+0x10/0x10
[   28.117755]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.117788]  ? calculate_sigpending+0x7b/0xa0
[   28.117823]  ? __pfx_kthread+0x10/0x10
[   28.117856]  ret_from_fork+0x41/0x80
[   28.117885]  ? __pfx_kthread+0x10/0x10
[   28.117917]  ret_from_fork_asm+0x1a/0x30
[   28.117962]  </TASK>
[   28.117979] 
[   28.132675] Allocated by task 273:
[   28.133232]  kasan_save_stack+0x3d/0x60
[   28.133780]  kasan_save_track+0x18/0x40
[   28.134314]  kasan_save_alloc_info+0x3b/0x50
[   28.134849]  __kasan_kmalloc+0xb7/0xc0
[   28.135355]  __kmalloc_cache_noprof+0x184/0x410
[   28.135933]  kasan_atomics+0x96/0x310
[   28.136430]  kunit_try_run_case+0x1b3/0x490
[   28.136740]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.137484]  kthread+0x324/0x6d0
[   28.137928]  ret_from_fork+0x41/0x80
[   28.138466]  ret_from_fork_asm+0x1a/0x30
[   28.138940] 
[   28.139253] The buggy address belongs to the object at ffff888101aebe00
[   28.139253]  which belongs to the cache kmalloc-64 of size 64
[   28.140489] The buggy address is located 0 bytes to the right of
[   28.140489]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.141522] 
[   28.141754] The buggy address belongs to the physical page:
[   28.142223] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.142881] flags: 0x200000000000000(node=0|zone=2)
[   28.143262] page_type: f5(slab)
[   28.143678] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.144485] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.145159] page dumped because: kasan: bad access detected
[   28.145638] 
[   28.145844] Memory state around the buggy address:
[   28.146333]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.146907]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.147427] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.147911]                                      ^
[   28.148414]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.148900]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.149472] ==================================================================
---
[   28.585930] ==================================================================
[   28.586705] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1e13/0x5450
[   28.587473] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.588077] 
[   28.588473] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.588582] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.588622] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.588682] Call Trace:
[   28.588719]  <TASK>
[   28.588754]  dump_stack_lvl+0x73/0xb0
[   28.588832]  print_report+0xd1/0x640
[   28.588900]  ? __virt_addr_valid+0x1db/0x2d0
[   28.588969]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.589042]  kasan_report+0x102/0x140
[   28.589105]  ? kasan_atomics_helper+0x1e13/0x5450
[   28.589176]  ? kasan_atomics_helper+0x1e13/0x5450
[   28.589312]  kasan_check_range+0x10c/0x1c0
[   28.589474]  __kasan_check_write+0x18/0x20
[   28.589593]  kasan_atomics_helper+0x1e13/0x5450
[   28.589666]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.589734]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.589797]  ? kasan_atomics+0x153/0x310
[   28.589864]  kasan_atomics+0x1dd/0x310
[   28.589926]  ? __pfx_kasan_atomics+0x10/0x10
[   28.589987]  ? __pfx_read_tsc+0x10/0x10
[   28.590046]  ? ktime_get_ts64+0x86/0x230
[   28.590107]  kunit_try_run_case+0x1b3/0x490
[   28.590174]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.590392]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.590483]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.590552]  ? __kthread_parkme+0x82/0x160
[   28.590616]  ? preempt_count_sub+0x50/0x80
[   28.590681]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.590747]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.590900]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.590978]  kthread+0x324/0x6d0
[   28.591037]  ? trace_preempt_on+0x20/0xc0
[   28.591097]  ? __pfx_kthread+0x10/0x10
[   28.591269]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.591345]  ? calculate_sigpending+0x7b/0xa0
[   28.591432]  ? __pfx_kthread+0x10/0x10
[   28.591498]  ret_from_fork+0x41/0x80
[   28.591636]  ? __pfx_kthread+0x10/0x10
[   28.591704]  ret_from_fork_asm+0x1a/0x30
[   28.591786]  </TASK>
[   28.591818] 
[   28.610048] Allocated by task 273:
[   28.610921]  kasan_save_stack+0x3d/0x60
[   28.611464]  kasan_save_track+0x18/0x40
[   28.611895]  kasan_save_alloc_info+0x3b/0x50
[   28.613146]  __kasan_kmalloc+0xb7/0xc0
[   28.613520]  __kmalloc_cache_noprof+0x184/0x410
[   28.613982]  kasan_atomics+0x96/0x310
[   28.614599]  kunit_try_run_case+0x1b3/0x490
[   28.614869]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.615430]  kthread+0x324/0x6d0
[   28.615811]  ret_from_fork+0x41/0x80
[   28.616274]  ret_from_fork_asm+0x1a/0x30
[   28.616769] 
[   28.617006] The buggy address belongs to the object at ffff888101aebe00
[   28.617006]  which belongs to the cache kmalloc-64 of size 64
[   28.617896] The buggy address is located 0 bytes to the right of
[   28.617896]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.618895] 
[   28.619132] The buggy address belongs to the physical page:
[   28.619623] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.620303] flags: 0x200000000000000(node=0|zone=2)
[   28.620705] page_type: f5(slab)
[   28.621111] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.621732] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.622384] page dumped because: kasan: bad access detected
[   28.622960] 
[   28.623169] Memory state around the buggy address:
[   28.623694]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.624220]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.624892] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.625443]                                      ^
[   28.625938]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.626619]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.627307] ==================================================================
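The "Memory state around the buggy address" dumps in all of the reports above tell the same story. Under generic KASAN, one shadow byte covers 8 bytes of memory; by my reading of `mm/kasan/kasan.h`, `00` means the granule is fully accessible and `fc` marks a slab redzone (these marker values are an assumption and may shift between kernel versions). A hedged sketch decoding the highlighted shadow row:

```python
# Hedged sketch: decode the shadow row
#   ">ffff888101aebe00: 00 00 00 00 00 00 fc fc ..."
# Assumes generic KASAN encoding: one shadow byte per 8-byte granule,
# 0x00 = fully accessible, 0xfc = slab redzone (values taken from
# mm/kasan/kasan.h; subject to change between kernel versions).
KASAN_GRANULE = 8

row_base = 0xffff888101aebe00
shadow = [0x00] * 6 + [0xfc] * 10   # shadow bytes as printed in the reports

accessible = sum(KASAN_GRANULE for b in shadow if b == 0x00)
assert accessible == 48             # matches the 48-byte allocated region

# The caret in the dump sits under the 7th shadow byte (offset 0x30 = 48
# bytes into the row), i.e. the first redzone granule -- exactly where
# the atomic write landed.
first_redzone = row_base + shadow.index(0xfc) * KASAN_GRANULE
assert first_redzone == 0xffff888101aebe30
```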
---
[   27.116579] ==================================================================
[   27.117068] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xac8/0x5450
[   27.117537] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.117955] 
[   27.118126] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.118272] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.118314] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.118381] Call Trace:
[   27.118434]  <TASK>
[   27.118470]  dump_stack_lvl+0x73/0xb0
[   27.118551]  print_report+0xd1/0x640
[   27.118617]  ? __virt_addr_valid+0x1db/0x2d0
[   27.118677]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.118741]  kasan_report+0x102/0x140
[   27.118794]  ? kasan_atomics_helper+0xac8/0x5450
[   27.118854]  ? kasan_atomics_helper+0xac8/0x5450
[   27.118928]  kasan_check_range+0x10c/0x1c0
[   27.118985]  __kasan_check_write+0x18/0x20
[   27.119042]  kasan_atomics_helper+0xac8/0x5450
[   27.119108]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.119171]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.119280]  ? kasan_atomics+0x153/0x310
[   27.119350]  kasan_atomics+0x1dd/0x310
[   27.119426]  ? __pfx_kasan_atomics+0x10/0x10
[   27.119494]  ? __pfx_read_tsc+0x10/0x10
[   27.119556]  ? ktime_get_ts64+0x86/0x230
[   27.119673]  kunit_try_run_case+0x1b3/0x490
[   27.119745]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.119801]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.119859]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.119911]  ? __kthread_parkme+0x82/0x160
[   27.119958]  ? preempt_count_sub+0x50/0x80
[   27.120020]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.120082]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.120149]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.120259]  kthread+0x324/0x6d0
[   27.120323]  ? trace_preempt_on+0x20/0xc0
[   27.120388]  ? __pfx_kthread+0x10/0x10
[   27.120467]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.120532]  ? calculate_sigpending+0x7b/0xa0
[   27.120600]  ? __pfx_kthread+0x10/0x10
[   27.120711]  ret_from_fork+0x41/0x80
[   27.120771]  ? __pfx_kthread+0x10/0x10
[   27.120834]  ret_from_fork_asm+0x1a/0x30
[   27.120916]  </TASK>
[   27.120950] 
[   27.134436] Allocated by task 273:
[   27.134784]  kasan_save_stack+0x3d/0x60
[   27.135238]  kasan_save_track+0x18/0x40
[   27.135610]  kasan_save_alloc_info+0x3b/0x50
[   27.136057]  __kasan_kmalloc+0xb7/0xc0
[   27.136474]  __kmalloc_cache_noprof+0x184/0x410
[   27.136809]  kasan_atomics+0x96/0x310
[   27.137250]  kunit_try_run_case+0x1b3/0x490
[   27.137694]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.138066]  kthread+0x324/0x6d0
[   27.138504]  ret_from_fork+0x41/0x80
[   27.138900]  ret_from_fork_asm+0x1a/0x30
[   27.139241] 
[   27.139483] The buggy address belongs to the object at ffff888101aebe00
[   27.139483]  which belongs to the cache kmalloc-64 of size 64
[   27.140330] The buggy address is located 0 bytes to the right of
[   27.140330]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.141277] 
[   27.141528] The buggy address belongs to the physical page:
[   27.141892] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.142523] flags: 0x200000000000000(node=0|zone=2)
[   27.142902] page_type: f5(slab)
[   27.143321] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.144027] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.144606] page dumped because: kasan: bad access detected
[   27.145129] 
[   27.145390] Memory state around the buggy address:
[   27.145761]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.146368]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.146971] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.147489]                                      ^
[   27.147955]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.148463]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.149076] ==================================================================
---
[   27.671274] ==================================================================
[   27.671980] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1218/0x5450
[   27.672667] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.673257] 
[   27.673490] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.673612] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.673674] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.673737] Call Trace:
[   27.673775]  <TASK>
[   27.673822]  dump_stack_lvl+0x73/0xb0
[   27.673929]  print_report+0xd1/0x640
[   27.674003]  ? __virt_addr_valid+0x1db/0x2d0
[   27.674069]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.674142]  kasan_report+0x102/0x140
[   27.674249]  ? kasan_atomics_helper+0x1218/0x5450
[   27.674313]  ? kasan_atomics_helper+0x1218/0x5450
[   27.674398]  kasan_check_range+0x10c/0x1c0
[   27.674482]  __kasan_check_write+0x18/0x20
[   27.674542]  kasan_atomics_helper+0x1218/0x5450
[   27.674611]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.674678]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.674743]  ? kasan_atomics+0x153/0x310
[   27.674812]  kasan_atomics+0x1dd/0x310
[   27.674872]  ? __pfx_kasan_atomics+0x10/0x10
[   27.674934]  ? __pfx_read_tsc+0x10/0x10
[   27.674992]  ? ktime_get_ts64+0x86/0x230
[   27.675057]  kunit_try_run_case+0x1b3/0x490
[   27.675152]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.675276]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.675347]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.675433]  ? __kthread_parkme+0x82/0x160
[   27.675497]  ? preempt_count_sub+0x50/0x80
[   27.675563]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.675634]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.675705]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.675775]  kthread+0x324/0x6d0
[   27.675835]  ? trace_preempt_on+0x20/0xc0
[   27.675890]  ? __pfx_kthread+0x10/0x10
[   27.675925]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.675959]  ? calculate_sigpending+0x7b/0xa0
[   27.675994]  ? __pfx_kthread+0x10/0x10
[   27.676027]  ret_from_fork+0x41/0x80
[   27.676061]  ? __pfx_kthread+0x10/0x10
[   27.676093]  ret_from_fork_asm+0x1a/0x30
[   27.676139]  </TASK>
[   27.676156] 
[   27.695107] Allocated by task 273:
[   27.695642]  kasan_save_stack+0x3d/0x60
[   27.696378]  kasan_save_track+0x18/0x40
[   27.696754]  kasan_save_alloc_info+0x3b/0x50
[   27.697555]  __kasan_kmalloc+0xb7/0xc0
[   27.697989]  __kmalloc_cache_noprof+0x184/0x410
[   27.698962]  kasan_atomics+0x96/0x310
[   27.699603]  kunit_try_run_case+0x1b3/0x490
[   27.699958]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.700449]  kthread+0x324/0x6d0
[   27.701155]  ret_from_fork+0x41/0x80
[   27.701456]  ret_from_fork_asm+0x1a/0x30
[   27.701881] 
[   27.702097] The buggy address belongs to the object at ffff888101aebe00
[   27.702097]  which belongs to the cache kmalloc-64 of size 64
[   27.703455] The buggy address is located 0 bytes to the right of
[   27.703455]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.705120] 
[   27.705319] The buggy address belongs to the physical page:
[   27.705622] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.706726] flags: 0x200000000000000(node=0|zone=2)
[   27.707415] page_type: f5(slab)
[   27.707873] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.708611] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.709458] page dumped because: kasan: bad access detected
[   27.710258] 
[   27.710741] Memory state around the buggy address:
[   27.711263]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.711984]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.712632] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.713311]                                      ^
[   27.713821]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.714565]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.715038] ==================================================================
---
[   26.735888] ==================================================================
[   26.736826] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x5ff/0x5450
[   26.737769] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.738180] 
[   26.738541] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.738657] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.738696] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.738757] Call Trace:
[   26.738796]  <TASK>
[   26.738831]  dump_stack_lvl+0x73/0xb0
[   26.738909]  print_report+0xd1/0x640
[   26.738971]  ? __virt_addr_valid+0x1db/0x2d0
[   26.739038]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.739108]  kasan_report+0x102/0x140
[   26.739301]  ? kasan_atomics_helper+0x5ff/0x5450
[   26.739398]  ? kasan_atomics_helper+0x5ff/0x5450
[   26.739504]  kasan_check_range+0x10c/0x1c0
[   26.739583]  __kasan_check_write+0x18/0x20
[   26.739734]  kasan_atomics_helper+0x5ff/0x5450
[   26.739810]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.739875]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.739939]  ? kasan_atomics+0x153/0x310
[   26.740003]  kasan_atomics+0x1dd/0x310
[   26.740186]  ? __pfx_kasan_atomics+0x10/0x10
[   26.740279]  ? __pfx_read_tsc+0x10/0x10
[   26.740325]  ? ktime_get_ts64+0x86/0x230
[   26.740363]  kunit_try_run_case+0x1b3/0x490
[   26.740425]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.740486]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.740550]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.740675]  ? __kthread_parkme+0x82/0x160
[   26.741107]  ? preempt_count_sub+0x50/0x80
[   26.741194]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.741282]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.741341]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.741379]  kthread+0x324/0x6d0
[   26.741437]  ? trace_preempt_on+0x20/0xc0
[   26.741487]  ? __pfx_kthread+0x10/0x10
[   26.741520]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.741554]  ? calculate_sigpending+0x7b/0xa0
[   26.741591]  ? __pfx_kthread+0x10/0x10
[   26.741727]  ret_from_fork+0x41/0x80
[   26.741762]  ? __pfx_kthread+0x10/0x10
[   26.741794]  ret_from_fork_asm+0x1a/0x30
[   26.741842]  </TASK>
[   26.741859] 
[   26.758236] Allocated by task 273:
[   26.758870]  kasan_save_stack+0x3d/0x60
[   26.759573]  kasan_save_track+0x18/0x40
[   26.760082]  kasan_save_alloc_info+0x3b/0x50
[   26.760388]  __kasan_kmalloc+0xb7/0xc0
[   26.760856]  __kmalloc_cache_noprof+0x184/0x410
[   26.761790]  kasan_atomics+0x96/0x310
[   26.762180]  kunit_try_run_case+0x1b3/0x490
[   26.763862]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.765578]  kthread+0x324/0x6d0
[   26.765940]  ret_from_fork+0x41/0x80
[   26.766947]  ret_from_fork_asm+0x1a/0x30
[   26.767483] 
[   26.767719] The buggy address belongs to the object at ffff888101aebe00
[   26.767719]  which belongs to the cache kmalloc-64 of size 64
[   26.768486] The buggy address is located 0 bytes to the right of
[   26.768486]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.770163] 
[   26.770717] The buggy address belongs to the physical page:
[   26.771443] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.772756] flags: 0x200000000000000(node=0|zone=2)
[   26.773209] page_type: f5(slab)
[   26.773725] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.774964] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.776068] page dumped because: kasan: bad access detected
[   26.776458] 
[   26.776926] Memory state around the buggy address:
[   26.778024]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.778695]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.779871] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.780444]                                      ^
[   26.781150]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.782396]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.783110] ==================================================================
---
[   27.022466] ==================================================================
[   27.023308] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x993/0x5450
[   27.024269] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.025274] 
[   27.025735] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.025844] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.025884] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.026023] Call Trace:
[   27.026133]  <TASK>
[   27.026285]  dump_stack_lvl+0x73/0xb0
[   27.026424]  print_report+0xd1/0x640
[   27.026500]  ? __virt_addr_valid+0x1db/0x2d0
[   27.026572]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.026955]  kasan_report+0x102/0x140
[   27.026992]  ? kasan_atomics_helper+0x993/0x5450
[   27.027029]  ? kasan_atomics_helper+0x993/0x5450
[   27.027070]  kasan_check_range+0x10c/0x1c0
[   27.027104]  __kasan_check_write+0x18/0x20
[   27.027135]  kasan_atomics_helper+0x993/0x5450
[   27.027171]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.027243]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.027278]  ? kasan_atomics+0x153/0x310
[   27.027315]  kasan_atomics+0x1dd/0x310
[   27.027346]  ? __pfx_kasan_atomics+0x10/0x10
[   27.027379]  ? __pfx_read_tsc+0x10/0x10
[   27.027425]  ? ktime_get_ts64+0x86/0x230
[   27.027462]  kunit_try_run_case+0x1b3/0x490
[   27.027500]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.027535]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.027569]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.027654]  ? __kthread_parkme+0x82/0x160
[   27.027711]  ? preempt_count_sub+0x50/0x80
[   27.027747]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.027782]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.027820]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.027855]  kthread+0x324/0x6d0
[   27.027885]  ? trace_preempt_on+0x20/0xc0
[   27.027917]  ? __pfx_kthread+0x10/0x10
[   27.027949]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.027982]  ? calculate_sigpending+0x7b/0xa0
[   27.028017]  ? __pfx_kthread+0x10/0x10
[   27.028048]  ret_from_fork+0x41/0x80
[   27.028081]  ? __pfx_kthread+0x10/0x10
[   27.028112]  ret_from_fork_asm+0x1a/0x30
[   27.028181]  </TASK>
[   27.028234] 
[   27.048151] Allocated by task 273:
[   27.048748]  kasan_save_stack+0x3d/0x60
[   27.049299]  kasan_save_track+0x18/0x40
[   27.049960]  kasan_save_alloc_info+0x3b/0x50
[   27.050583]  __kasan_kmalloc+0xb7/0xc0
[   27.051366]  __kmalloc_cache_noprof+0x184/0x410
[   27.051728]  kasan_atomics+0x96/0x310
[   27.052136]  kunit_try_run_case+0x1b3/0x490
[   27.052513]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.053036]  kthread+0x324/0x6d0
[   27.053377]  ret_from_fork+0x41/0x80
[   27.054443]  ret_from_fork_asm+0x1a/0x30
[   27.055042] 
[   27.055687] The buggy address belongs to the object at ffff888101aebe00
[   27.055687]  which belongs to the cache kmalloc-64 of size 64
[   27.056996] The buggy address is located 0 bytes to the right of
[   27.056996]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.058606] 
[   27.058966] The buggy address belongs to the physical page:
[   27.059918] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.060695] flags: 0x200000000000000(node=0|zone=2)
[   27.061029] page_type: f5(slab)
[   27.061521] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.062510] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.063682] page dumped because: kasan: bad access detected
[   27.064133] 
[   27.064427] Memory state around the buggy address:
[   27.065229]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.066104]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.067092] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.067928]                                      ^
[   27.068641]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.069585]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.070376] ==================================================================
---
[   28.825932] ==================================================================
[   28.826397] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x20c9/0x5450
[   28.826878] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.827335] 
[   28.827591] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.827696] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.827734] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.827793] Call Trace:
[   28.827828]  <TASK>
[   28.827866]  dump_stack_lvl+0x73/0xb0
[   28.827942]  print_report+0xd1/0x640
[   28.828009]  ? __virt_addr_valid+0x1db/0x2d0
[   28.828080]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.828154]  kasan_report+0x102/0x140
[   28.828303]  ? kasan_atomics_helper+0x20c9/0x5450
[   28.828380]  ? kasan_atomics_helper+0x20c9/0x5450
[   28.828479]  kasan_check_range+0x10c/0x1c0
[   28.828548]  __kasan_check_write+0x18/0x20
[   28.828610]  kasan_atomics_helper+0x20c9/0x5450
[   28.828681]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.828749]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.828815]  ? kasan_atomics+0x153/0x310
[   28.828886]  kasan_atomics+0x1dd/0x310
[   28.828945]  ? __pfx_kasan_atomics+0x10/0x10
[   28.829009]  ? __pfx_read_tsc+0x10/0x10
[   28.829068]  ? ktime_get_ts64+0x86/0x230
[   28.829133]  kunit_try_run_case+0x1b3/0x490
[   28.829249]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.829323]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.829390]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.829474]  ? __kthread_parkme+0x82/0x160
[   28.829532]  ? preempt_count_sub+0x50/0x80
[   28.829596]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.829663]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.829733]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.829800]  kthread+0x324/0x6d0
[   28.829857]  ? trace_preempt_on+0x20/0xc0
[   28.829918]  ? __pfx_kthread+0x10/0x10
[   28.829975]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.830034]  ? calculate_sigpending+0x7b/0xa0
[   28.830100]  ? __pfx_kthread+0x10/0x10
[   28.830162]  ret_from_fork+0x41/0x80
[   28.830258]  ? __pfx_kthread+0x10/0x10
[   28.830315]  ret_from_fork_asm+0x1a/0x30
[   28.830421]  </TASK>
[   28.830459] 
[   28.844106] Allocated by task 273:
[   28.844531]  kasan_save_stack+0x3d/0x60
[   28.844933]  kasan_save_track+0x18/0x40
[   28.845264]  kasan_save_alloc_info+0x3b/0x50
[   28.845718]  __kasan_kmalloc+0xb7/0xc0
[   28.846145]  __kmalloc_cache_noprof+0x184/0x410
[   28.846689]  kasan_atomics+0x96/0x310
[   28.847105]  kunit_try_run_case+0x1b3/0x490
[   28.847621]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.847976]  kthread+0x324/0x6d0
[   28.848386]  ret_from_fork+0x41/0x80
[   28.848824]  ret_from_fork_asm+0x1a/0x30
[   28.849310] 
[   28.849555] The buggy address belongs to the object at ffff888101aebe00
[   28.849555]  which belongs to the cache kmalloc-64 of size 64
[   28.850231] The buggy address is located 0 bytes to the right of
[   28.850231]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.851319] 
[   28.851530] The buggy address belongs to the physical page:
[   28.851975] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.852573] flags: 0x200000000000000(node=0|zone=2)
[   28.853066] page_type: f5(slab)
[   28.853469] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.853893] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.854369] page dumped because: kasan: bad access detected
[   28.856449] 
[   28.856923] Memory state around the buggy address:
[   28.857884]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.861719]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.862454] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.863472]                                      ^
[   28.864028]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.864593]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.865293] ==================================================================
---
[   26.879885] ==================================================================
[   26.880669] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x7c8/0x5450
[   26.881752] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.882270] 
[   26.882867] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.883249] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.883425] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.883681] Call Trace:
[   26.883732]  <TASK>
[   26.883770]  dump_stack_lvl+0x73/0xb0
[   26.884517]  print_report+0xd1/0x640
[   26.884625]  ? __virt_addr_valid+0x1db/0x2d0
[   26.884705]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.884755]  kasan_report+0x102/0x140
[   26.884790]  ? kasan_atomics_helper+0x7c8/0x5450
[   26.884826]  ? kasan_atomics_helper+0x7c8/0x5450
[   26.884867]  kasan_check_range+0x10c/0x1c0
[   26.884899]  __kasan_check_write+0x18/0x20
[   26.884930]  kasan_atomics_helper+0x7c8/0x5450
[   26.884966]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.885002]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.885034]  ? kasan_atomics+0x153/0x310
[   26.885070]  kasan_atomics+0x1dd/0x310
[   26.885101]  ? __pfx_kasan_atomics+0x10/0x10
[   26.885133]  ? __pfx_read_tsc+0x10/0x10
[   26.885162]  ? ktime_get_ts64+0x86/0x230
[   26.885203]  kunit_try_run_case+0x1b3/0x490
[   26.885248]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.885282]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.885316]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.885351]  ? __kthread_parkme+0x82/0x160
[   26.885381]  ? preempt_count_sub+0x50/0x80
[   26.885433]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.885469]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.885504]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.885539]  kthread+0x324/0x6d0
[   26.885570]  ? trace_preempt_on+0x20/0xc0
[   26.885679]  ? __pfx_kthread+0x10/0x10
[   26.885746]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.885808]  ? calculate_sigpending+0x7b/0xa0
[   26.885870]  ? __pfx_kthread+0x10/0x10
[   26.885925]  ret_from_fork+0x41/0x80
[   26.885984]  ? __pfx_kthread+0x10/0x10
[   26.886042]  ret_from_fork_asm+0x1a/0x30
[   26.886107]  </TASK>
[   26.886127] 
[   26.905097] Allocated by task 273:
[   26.905550]  kasan_save_stack+0x3d/0x60
[   26.906096]  kasan_save_track+0x18/0x40
[   26.906672]  kasan_save_alloc_info+0x3b/0x50
[   26.907229]  __kasan_kmalloc+0xb7/0xc0
[   26.907817]  __kmalloc_cache_noprof+0x184/0x410
[   26.908509]  kasan_atomics+0x96/0x310
[   26.909007]  kunit_try_run_case+0x1b3/0x490
[   26.909569]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.910287]  kthread+0x324/0x6d0
[   26.910549]  ret_from_fork+0x41/0x80
[   26.910954]  ret_from_fork_asm+0x1a/0x30
[   26.911386] 
[   26.911592] The buggy address belongs to the object at ffff888101aebe00
[   26.911592]  which belongs to the cache kmalloc-64 of size 64
[   26.912384] The buggy address is located 0 bytes to the right of
[   26.912384]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.914217] 
[   26.914619] The buggy address belongs to the physical page:
[   26.915504] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.916673] flags: 0x200000000000000(node=0|zone=2)
[   26.917050] page_type: f5(slab)
[   26.917863] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.918466] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.919201] page dumped because: kasan: bad access detected
[   26.920166] 
[   26.920347] Memory state around the buggy address:
[   26.920825]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.921491]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.922039] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.922762]                                      ^
[   26.923444]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.923941]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.924688] ==================================================================
---
[   28.261120] ==================================================================
[   28.261771] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x194b/0x5450
[   28.262881] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.263355] 
[   28.263562] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.263672] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.263714] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.263776] Call Trace:
[   28.263809]  <TASK>
[   28.263844]  dump_stack_lvl+0x73/0xb0
[   28.263916]  print_report+0xd1/0x640
[   28.263975]  ? __virt_addr_valid+0x1db/0x2d0
[   28.264040]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.264163]  kasan_report+0x102/0x140
[   28.264294]  ? kasan_atomics_helper+0x194b/0x5450
[   28.264367]  ? kasan_atomics_helper+0x194b/0x5450
[   28.264470]  kasan_check_range+0x10c/0x1c0
[   28.264537]  __kasan_check_write+0x18/0x20
[   28.264600]  kasan_atomics_helper+0x194b/0x5450
[   28.264666]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.264730]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.264793]  ? kasan_atomics+0x153/0x310
[   28.264859]  kasan_atomics+0x1dd/0x310
[   28.264919]  ? __pfx_kasan_atomics+0x10/0x10
[   28.264981]  ? __pfx_read_tsc+0x10/0x10
[   28.265039]  ? ktime_get_ts64+0x86/0x230
[   28.265101]  kunit_try_run_case+0x1b3/0x490
[   28.265282]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.265355]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.265440]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.265508]  ? __kthread_parkme+0x82/0x160
[   28.265570]  ? preempt_count_sub+0x50/0x80
[   28.265638]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.265706]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.265778]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.265848]  kthread+0x324/0x6d0
[   28.265908]  ? trace_preempt_on+0x20/0xc0
[   28.265968]  ? __pfx_kthread+0x10/0x10
[   28.266031]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.266093]  ? calculate_sigpending+0x7b/0xa0
[   28.266162]  ? __pfx_kthread+0x10/0x10
[   28.266275]  ret_from_fork+0x41/0x80
[   28.266320]  ? __pfx_kthread+0x10/0x10
[   28.266370]  ret_from_fork_asm+0x1a/0x30
[   28.266432]  </TASK>
[   28.266450] 
[   28.281009] Allocated by task 273:
[   28.281308]  kasan_save_stack+0x3d/0x60
[   28.281750]  kasan_save_track+0x18/0x40
[   28.282311]  kasan_save_alloc_info+0x3b/0x50
[   28.282909]  __kasan_kmalloc+0xb7/0xc0
[   28.283454]  __kmalloc_cache_noprof+0x184/0x410
[   28.284012]  kasan_atomics+0x96/0x310
[   28.284540]  kunit_try_run_case+0x1b3/0x490
[   28.285069]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.285706]  kthread+0x324/0x6d0
[   28.286110]  ret_from_fork+0x41/0x80
[   28.286657]  ret_from_fork_asm+0x1a/0x30
[   28.287290] 
[   28.287564] The buggy address belongs to the object at ffff888101aebe00
[   28.287564]  which belongs to the cache kmalloc-64 of size 64
[   28.288429] The buggy address is located 0 bytes to the right of
[   28.288429]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.289059] 
[   28.289275] The buggy address belongs to the physical page:
[   28.289817] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.290660] flags: 0x200000000000000(node=0|zone=2)
[   28.291162] page_type: f5(slab)
[   28.291586] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.292379] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.293029] page dumped because: kasan: bad access detected
[   28.295339] 
[   28.295707] Memory state around the buggy address:
[   28.295957]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.296234]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.297511] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.298427]                                      ^
[   28.298939]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.299863]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.300382] ==================================================================
---
[   28.341235] ==================================================================
[   28.341847] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1a80/0x5450
[   28.342708] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.343501] 
[   28.343683] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.343817] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.343856] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.343910] Call Trace:
[   28.343966]  <TASK>
[   28.344021]  dump_stack_lvl+0x73/0xb0
[   28.344105]  print_report+0xd1/0x640
[   28.344173]  ? __virt_addr_valid+0x1db/0x2d0
[   28.344280]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.344388]  kasan_report+0x102/0x140
[   28.344474]  ? kasan_atomics_helper+0x1a80/0x5450
[   28.344548]  ? kasan_atomics_helper+0x1a80/0x5450
[   28.344628]  kasan_check_range+0x10c/0x1c0
[   28.344693]  __kasan_check_write+0x18/0x20
[   28.344753]  kasan_atomics_helper+0x1a80/0x5450
[   28.344824]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.344882]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.344942]  ? kasan_atomics+0x153/0x310
[   28.345006]  kasan_atomics+0x1dd/0x310
[   28.345087]  ? __pfx_kasan_atomics+0x10/0x10
[   28.345178]  ? __pfx_read_tsc+0x10/0x10
[   28.345234]  ? ktime_get_ts64+0x86/0x230
[   28.345292]  kunit_try_run_case+0x1b3/0x490
[   28.345364]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.345439]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.345508]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.345572]  ? __kthread_parkme+0x82/0x160
[   28.345632]  ? preempt_count_sub+0x50/0x80
[   28.345695]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.345763]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.345833]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.345901]  kthread+0x324/0x6d0
[   28.345960]  ? trace_preempt_on+0x20/0xc0
[   28.346061]  ? __pfx_kthread+0x10/0x10
[   28.346146]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.346214]  ? calculate_sigpending+0x7b/0xa0
[   28.346286]  ? __pfx_kthread+0x10/0x10
[   28.346355]  ret_from_fork+0x41/0x80
[   28.346436]  ? __pfx_kthread+0x10/0x10
[   28.346500]  ret_from_fork_asm+0x1a/0x30
[   28.346590]  </TASK>
[   28.346622] 
[   28.363031] Allocated by task 273:
[   28.363909]  kasan_save_stack+0x3d/0x60
[   28.364198]  kasan_save_track+0x18/0x40
[   28.364569]  kasan_save_alloc_info+0x3b/0x50
[   28.364937]  __kasan_kmalloc+0xb7/0xc0
[   28.365919]  __kmalloc_cache_noprof+0x184/0x410
[   28.366281]  kasan_atomics+0x96/0x310
[   28.366838]  kunit_try_run_case+0x1b3/0x490
[   28.367177]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.368432]  kthread+0x324/0x6d0
[   28.368823]  ret_from_fork+0x41/0x80
[   28.369058]  ret_from_fork_asm+0x1a/0x30
[   28.369826] 
[   28.370081] The buggy address belongs to the object at ffff888101aebe00
[   28.370081]  which belongs to the cache kmalloc-64 of size 64
[   28.371743] The buggy address is located 0 bytes to the right of
[   28.371743]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.372830] 
[   28.373055] The buggy address belongs to the physical page:
[   28.373868] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.374767] flags: 0x200000000000000(node=0|zone=2)
[   28.375239] page_type: f5(slab)
[   28.375509] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.375980] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.377348] page dumped because: kasan: bad access detected
[   28.377761] 
[   28.377985] Memory state around the buggy address:
[   28.378470]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.379136]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.379616] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.380731]                                      ^
[   28.381439]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.381986]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.383069] ==================================================================
---
[   28.904018] ==================================================================
[   28.904814] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x218b/0x5450
[   28.905570] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.906379] 
[   28.906699] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.906809] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.906846] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.906903] Call Trace:
[   28.906935]  <TASK>
[   28.906969]  dump_stack_lvl+0x73/0xb0
[   28.907090]  print_report+0xd1/0x640
[   28.907165]  ? __virt_addr_valid+0x1db/0x2d0
[   28.907277]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.907351]  kasan_report+0x102/0x140
[   28.907435]  ? kasan_atomics_helper+0x218b/0x5450
[   28.907503]  ? kasan_atomics_helper+0x218b/0x5450
[   28.907621]  kasan_check_range+0x10c/0x1c0
[   28.907689]  __kasan_check_write+0x18/0x20
[   28.907757]  kasan_atomics_helper+0x218b/0x5450
[   28.907829]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.907892]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.907952]  ? kasan_atomics+0x153/0x310
[   28.908015]  kasan_atomics+0x1dd/0x310
[   28.908094]  ? __pfx_kasan_atomics+0x10/0x10
[   28.908215]  ? __pfx_read_tsc+0x10/0x10
[   28.908289]  ? ktime_get_ts64+0x86/0x230
[   28.908360]  kunit_try_run_case+0x1b3/0x490
[   28.908456]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.908561]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.908636]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.908698]  ? __kthread_parkme+0x82/0x160
[   28.908750]  ? preempt_count_sub+0x50/0x80
[   28.908788]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.908823]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.908861]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.908896]  kthread+0x324/0x6d0
[   28.908925]  ? trace_preempt_on+0x20/0xc0
[   28.908958]  ? __pfx_kthread+0x10/0x10
[   28.908989]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.909021]  ? calculate_sigpending+0x7b/0xa0
[   28.909056]  ? __pfx_kthread+0x10/0x10
[   28.909089]  ret_from_fork+0x41/0x80
[   28.909120]  ? __pfx_kthread+0x10/0x10
[   28.909152]  ret_from_fork_asm+0x1a/0x30
[   28.909225]  </TASK>
[   28.909243] 
[   28.927354] Allocated by task 273:
[   28.927758]  kasan_save_stack+0x3d/0x60
[   28.928319]  kasan_save_track+0x18/0x40
[   28.929005]  kasan_save_alloc_info+0x3b/0x50
[   28.929515]  __kasan_kmalloc+0xb7/0xc0
[   28.929803]  __kmalloc_cache_noprof+0x184/0x410
[   28.930486]  kasan_atomics+0x96/0x310
[   28.931064]  kunit_try_run_case+0x1b3/0x490
[   28.931576]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.932320]  kthread+0x324/0x6d0
[   28.932628]  ret_from_fork+0x41/0x80
[   28.933172]  ret_from_fork_asm+0x1a/0x30
[   28.933726] 
[   28.933954] The buggy address belongs to the object at ffff888101aebe00
[   28.933954]  which belongs to the cache kmalloc-64 of size 64
[   28.934859] The buggy address is located 0 bytes to the right of
[   28.934859]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.935802] 
[   28.935974] The buggy address belongs to the physical page:
[   28.936476] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.937365] flags: 0x200000000000000(node=0|zone=2)
[   28.937861] page_type: f5(slab)
[   28.938119] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.939140] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.939886] page dumped because: kasan: bad access detected
[   28.940506] 
[   28.940692] Memory state around the buggy address:
[   28.941163]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.941945]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.942575] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.943428]                                      ^
[   28.943813]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.944525]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.944960] ==================================================================
---
[   28.384082] ==================================================================
[   28.384758] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1b23/0x5450
[   28.385195] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.386915] 
[   28.387269] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.387426] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.387483] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.387547] Call Trace:
[   28.387583]  <TASK>
[   28.387615]  dump_stack_lvl+0x73/0xb0
[   28.387666]  print_report+0xd1/0x640
[   28.387702]  ? __virt_addr_valid+0x1db/0x2d0
[   28.387738]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.387777]  kasan_report+0x102/0x140
[   28.387809]  ? kasan_atomics_helper+0x1b23/0x5450
[   28.387846]  ? kasan_atomics_helper+0x1b23/0x5450
[   28.387887]  kasan_check_range+0x10c/0x1c0
[   28.387919]  __kasan_check_write+0x18/0x20
[   28.387950]  kasan_atomics_helper+0x1b23/0x5450
[   28.387986]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.388021]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.388054]  ? kasan_atomics+0x153/0x310
[   28.388089]  kasan_atomics+0x1dd/0x310
[   28.388120]  ? __pfx_kasan_atomics+0x10/0x10
[   28.388152]  ? __pfx_read_tsc+0x10/0x10
[   28.388238]  ? ktime_get_ts64+0x86/0x230
[   28.388277]  kunit_try_run_case+0x1b3/0x490
[   28.388317]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.388349]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.388382]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.388434]  ? __kthread_parkme+0x82/0x160
[   28.388465]  ? preempt_count_sub+0x50/0x80
[   28.388501]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.388536]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.388570]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.388605]  kthread+0x324/0x6d0
[   28.388636]  ? trace_preempt_on+0x20/0xc0
[   28.388668]  ? __pfx_kthread+0x10/0x10
[   28.388698]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.388732]  ? calculate_sigpending+0x7b/0xa0
[   28.388767]  ? __pfx_kthread+0x10/0x10
[   28.388798]  ret_from_fork+0x41/0x80
[   28.388830]  ? __pfx_kthread+0x10/0x10
[   28.388860]  ret_from_fork_asm+0x1a/0x30
[   28.388905]  </TASK>
[   28.388922] 
[   28.404449] Allocated by task 273:
[   28.404832]  kasan_save_stack+0x3d/0x60
[   28.405263]  kasan_save_track+0x18/0x40
[   28.405798]  kasan_save_alloc_info+0x3b/0x50
[   28.406387]  __kasan_kmalloc+0xb7/0xc0
[   28.406930]  __kmalloc_cache_noprof+0x184/0x410
[   28.407476]  kasan_atomics+0x96/0x310
[   28.408114]  kunit_try_run_case+0x1b3/0x490
[   28.408662]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.409606]  kthread+0x324/0x6d0
[   28.410059]  ret_from_fork+0x41/0x80
[   28.410608]  ret_from_fork_asm+0x1a/0x30
[   28.411110] 
[   28.411458] The buggy address belongs to the object at ffff888101aebe00
[   28.411458]  which belongs to the cache kmalloc-64 of size 64
[   28.412054] The buggy address is located 0 bytes to the right of
[   28.412054]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.413980] 
[   28.414392] The buggy address belongs to the physical page:
[   28.415068] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.416049] flags: 0x200000000000000(node=0|zone=2)
[   28.416771] page_type: f5(slab)
[   28.417151] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.418328] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.419240] page dumped because: kasan: bad access detected
[   28.419852] 
[   28.420268] Memory state around the buggy address:
[   28.421012]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.422373]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.422891] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.423308]                                      ^
[   28.423904]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.424844]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.425592] ==================================================================
---
[   27.302084] ==================================================================
[   27.302882] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xde1/0x5450
[   27.304046] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.304536] 
[   27.304772] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.304881] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.304972] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.305034] Call Trace:
[   27.305071]  <TASK>
[   27.305109]  dump_stack_lvl+0x73/0xb0
[   27.305189]  print_report+0xd1/0x640
[   27.305538]  ? __virt_addr_valid+0x1db/0x2d0
[   27.305612]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.305684]  kasan_report+0x102/0x140
[   27.305745]  ? kasan_atomics_helper+0xde1/0x5450
[   27.305893]  ? kasan_atomics_helper+0xde1/0x5450
[   27.305972]  kasan_check_range+0x10c/0x1c0
[   27.306036]  __kasan_check_write+0x18/0x20
[   27.306083]  kasan_atomics_helper+0xde1/0x5450
[   27.306121]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.306157]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.306194]  ? kasan_atomics+0x153/0x310
[   27.306262]  kasan_atomics+0x1dd/0x310
[   27.306292]  ? __pfx_kasan_atomics+0x10/0x10
[   27.306325]  ? __pfx_read_tsc+0x10/0x10
[   27.306363]  ? ktime_get_ts64+0x86/0x230
[   27.306415]  kunit_try_run_case+0x1b3/0x490
[   27.306456]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.306489]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.306523]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.306557]  ? __kthread_parkme+0x82/0x160
[   27.306587]  ? preempt_count_sub+0x50/0x80
[   27.306621]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.306657]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.306690]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.306725]  kthread+0x324/0x6d0
[   27.306755]  ? trace_preempt_on+0x20/0xc0
[   27.306786]  ? __pfx_kthread+0x10/0x10
[   27.306818]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.306848]  ? calculate_sigpending+0x7b/0xa0
[   27.306883]  ? __pfx_kthread+0x10/0x10
[   27.306916]  ret_from_fork+0x41/0x80
[   27.306946]  ? __pfx_kthread+0x10/0x10
[   27.306977]  ret_from_fork_asm+0x1a/0x30
[   27.307020]  </TASK>
[   27.307038] 
[   27.321326] Allocated by task 273:
[   27.321772]  kasan_save_stack+0x3d/0x60
[   27.322228]  kasan_save_track+0x18/0x40
[   27.322711]  kasan_save_alloc_info+0x3b/0x50
[   27.323166]  __kasan_kmalloc+0xb7/0xc0
[   27.323671]  __kmalloc_cache_noprof+0x184/0x410
[   27.324237]  kasan_atomics+0x96/0x310
[   27.324601]  kunit_try_run_case+0x1b3/0x490
[   27.324908]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.325666]  kthread+0x324/0x6d0
[   27.326056]  ret_from_fork+0x41/0x80
[   27.326591]  ret_from_fork_asm+0x1a/0x30
[   27.327090] 
[   27.327268] The buggy address belongs to the object at ffff888101aebe00
[   27.327268]  which belongs to the cache kmalloc-64 of size 64
[   27.328434] The buggy address is located 0 bytes to the right of
[   27.328434]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.329352] 
[   27.329684] The buggy address belongs to the physical page:
[   27.330126] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.331035] flags: 0x200000000000000(node=0|zone=2)
[   27.331777] page_type: f5(slab)
[   27.332032] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.332606] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.333552] page dumped because: kasan: bad access detected
[   27.334082] 
[   27.334449] Memory state around the buggy address:
[   27.334926]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.335915]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.336589] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.336988]                                      ^
[   27.337316]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.338396]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.339686] ==================================================================
---
[   28.508967] ==================================================================
[   28.510427] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1ce2/0x5450
[   28.510945] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.511684] 
[   28.511973] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.512082] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.512121] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.512179] Call Trace:
[   28.512253]  <TASK>
[   28.512289]  dump_stack_lvl+0x73/0xb0
[   28.512422]  print_report+0xd1/0x640
[   28.512498]  ? __virt_addr_valid+0x1db/0x2d0
[   28.512603]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.512706]  kasan_report+0x102/0x140
[   28.512772]  ? kasan_atomics_helper+0x1ce2/0x5450
[   28.512842]  ? kasan_atomics_helper+0x1ce2/0x5450
[   28.512923]  kasan_check_range+0x10c/0x1c0
[   28.512989]  __kasan_check_write+0x18/0x20
[   28.513050]  kasan_atomics_helper+0x1ce2/0x5450
[   28.513121]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.513231]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.513290]  ? kasan_atomics+0x153/0x310
[   28.513328]  kasan_atomics+0x1dd/0x310
[   28.513361]  ? __pfx_kasan_atomics+0x10/0x10
[   28.513394]  ? __pfx_read_tsc+0x10/0x10
[   28.513444]  ? ktime_get_ts64+0x86/0x230
[   28.513479]  kunit_try_run_case+0x1b3/0x490
[   28.513518]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.513551]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.513586]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.513621]  ? __kthread_parkme+0x82/0x160
[   28.513652]  ? preempt_count_sub+0x50/0x80
[   28.513687]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.513722]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.513757]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.513794]  kthread+0x324/0x6d0
[   28.513823]  ? trace_preempt_on+0x20/0xc0
[   28.513854]  ? __pfx_kthread+0x10/0x10
[   28.513886]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.513917]  ? calculate_sigpending+0x7b/0xa0
[   28.513950]  ? __pfx_kthread+0x10/0x10
[   28.513984]  ret_from_fork+0x41/0x80
[   28.514013]  ? __pfx_kthread+0x10/0x10
[   28.514045]  ret_from_fork_asm+0x1a/0x30
[   28.514088]  </TASK>
[   28.514106] 
[   28.527899] Allocated by task 273:
[   28.528379]  kasan_save_stack+0x3d/0x60
[   28.528852]  kasan_save_track+0x18/0x40
[   28.529341]  kasan_save_alloc_info+0x3b/0x50
[   28.529796]  __kasan_kmalloc+0xb7/0xc0
[   28.530085]  __kmalloc_cache_noprof+0x184/0x410
[   28.530673]  kasan_atomics+0x96/0x310
[   28.531122]  kunit_try_run_case+0x1b3/0x490
[   28.531639]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.532211]  kthread+0x324/0x6d0
[   28.532610]  ret_from_fork+0x41/0x80
[   28.533019]  ret_from_fork_asm+0x1a/0x30
[   28.533352] 
[   28.533632] The buggy address belongs to the object at ffff888101aebe00
[   28.533632]  which belongs to the cache kmalloc-64 of size 64
[   28.534715] The buggy address is located 0 bytes to the right of
[   28.534715]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.535759] 
[   28.535997] The buggy address belongs to the physical page:
[   28.536513] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.537372] flags: 0x200000000000000(node=0|zone=2)
[   28.537970] page_type: f5(slab)
[   28.538499] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.539125] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.539848] page dumped because: kasan: bad access detected
[   28.540580] 
[   28.540828] Memory state around the buggy address:
[   28.541391]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.542123]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.542948] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.543730]                                      ^
[   28.544318]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.544911]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.545921] ==================================================================
---
[   27.184355] ==================================================================
[   27.185067] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xc71/0x5450
[   27.185790] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.186499] 
[   27.186743] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.186852] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.186891] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.186951] Call Trace:
[   27.186986]  <TASK>
[   27.187026]  dump_stack_lvl+0x73/0xb0
[   27.187104]  print_report+0xd1/0x640
[   27.187172]  ? __virt_addr_valid+0x1db/0x2d0
[   27.187295]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.187374]  kasan_report+0x102/0x140
[   27.187456]  ? kasan_atomics_helper+0xc71/0x5450
[   27.187524]  ? kasan_atomics_helper+0xc71/0x5450
[   27.187599]  kasan_check_range+0x10c/0x1c0
[   27.187668]  __kasan_check_write+0x18/0x20
[   27.187725]  kasan_atomics_helper+0xc71/0x5450
[   27.187792]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.187855]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.187919]  ? kasan_atomics+0x153/0x310
[   27.187988]  kasan_atomics+0x1dd/0x310
[   27.188049]  ? __pfx_kasan_atomics+0x10/0x10
[   27.188112]  ? __pfx_read_tsc+0x10/0x10
[   27.188168]  ? ktime_get_ts64+0x86/0x230
[   27.188283]  kunit_try_run_case+0x1b3/0x490
[   27.188360]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.188446]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.188517]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.188585]  ? __kthread_parkme+0x82/0x160
[   27.188646]  ? preempt_count_sub+0x50/0x80
[   27.188715]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.188785]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.188857]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.188930]  kthread+0x324/0x6d0
[   27.188991]  ? trace_preempt_on+0x20/0xc0
[   27.189055]  ? __pfx_kthread+0x10/0x10
[   27.189118]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.189222]  ? calculate_sigpending+0x7b/0xa0
[   27.189310]  ? __pfx_kthread+0x10/0x10
[   27.189378]  ret_from_fork+0x41/0x80
[   27.189462]  ? __pfx_kthread+0x10/0x10
[   27.189526]  ret_from_fork_asm+0x1a/0x30
[   27.189614]  </TASK>
[   27.189643] 
[   27.202709] Allocated by task 273:
[   27.203068]  kasan_save_stack+0x3d/0x60
[   27.203463]  kasan_save_track+0x18/0x40
[   27.203880]  kasan_save_alloc_info+0x3b/0x50
[   27.204302]  __kasan_kmalloc+0xb7/0xc0
[   27.204613]  __kmalloc_cache_noprof+0x184/0x410
[   27.204927]  kasan_atomics+0x96/0x310
[   27.205240]  kunit_try_run_case+0x1b3/0x490
[   27.205673]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.206251]  kthread+0x324/0x6d0
[   27.206672]  ret_from_fork+0x41/0x80
[   27.207089]  ret_from_fork_asm+0x1a/0x30
[   27.207575] 
[   27.207801] The buggy address belongs to the object at ffff888101aebe00
[   27.207801]  which belongs to the cache kmalloc-64 of size 64
[   27.208891] The buggy address is located 0 bytes to the right of
[   27.208891]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.209761] 
[   27.209997] The buggy address belongs to the physical page:
[   27.210473] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.211024] flags: 0x200000000000000(node=0|zone=2)
[   27.211412] page_type: f5(slab)
[   27.211674] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.212397] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.213067] page dumped because: kasan: bad access detected
[   27.213637] 
[   27.213852] Memory state around the buggy address:
[   27.214360]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.215006]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.215707] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.216223]                                      ^
[   27.216612]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.217294]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.217946] ==================================================================
---
[   28.150461] ==================================================================
[   28.151001] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1780/0x5450
[   28.151722] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.152279] 
[   28.152545] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.152652] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.152688] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.152747] Call Trace:
[   28.152783]  <TASK>
[   28.152816]  dump_stack_lvl+0x73/0xb0
[   28.152896]  print_report+0xd1/0x640
[   28.152959]  ? __virt_addr_valid+0x1db/0x2d0
[   28.153026]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.153095]  kasan_report+0x102/0x140
[   28.153159]  ? kasan_atomics_helper+0x1780/0x5450
[   28.153280]  ? kasan_atomics_helper+0x1780/0x5450
[   28.153364]  kasan_check_range+0x10c/0x1c0
[   28.153446]  __kasan_check_write+0x18/0x20
[   28.153508]  kasan_atomics_helper+0x1780/0x5450
[   28.153571]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.153633]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.153693]  ? kasan_atomics+0x153/0x310
[   28.153758]  kasan_atomics+0x1dd/0x310
[   28.153811]  ? __pfx_kasan_atomics+0x10/0x10
[   28.153871]  ? __pfx_read_tsc+0x10/0x10
[   28.153925]  ? ktime_get_ts64+0x86/0x230
[   28.153986]  kunit_try_run_case+0x1b3/0x490
[   28.154057]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.154123]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.154229]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.154310]  ? __kthread_parkme+0x82/0x160
[   28.154384]  ? preempt_count_sub+0x50/0x80
[   28.154469]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.154537]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.154603]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.154669]  kthread+0x324/0x6d0
[   28.154725]  ? trace_preempt_on+0x20/0xc0
[   28.154789]  ? __pfx_kthread+0x10/0x10
[   28.154848]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.154912]  ? calculate_sigpending+0x7b/0xa0
[   28.154981]  ? __pfx_kthread+0x10/0x10
[   28.155043]  ret_from_fork+0x41/0x80
[   28.155102]  ? __pfx_kthread+0x10/0x10
[   28.155167]  ret_from_fork_asm+0x1a/0x30
[   28.155306]  </TASK>
[   28.155329] 
[   28.167623] Allocated by task 273:
[   28.167999]  kasan_save_stack+0x3d/0x60
[   28.168511]  kasan_save_track+0x18/0x40
[   28.168949]  kasan_save_alloc_info+0x3b/0x50
[   28.169452]  __kasan_kmalloc+0xb7/0xc0
[   28.169865]  __kmalloc_cache_noprof+0x184/0x410
[   28.170391]  kasan_atomics+0x96/0x310
[   28.170820]  kunit_try_run_case+0x1b3/0x490
[   28.171302]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.171735]  kthread+0x324/0x6d0
[   28.172005]  ret_from_fork+0x41/0x80
[   28.172330]  ret_from_fork_asm+0x1a/0x30
[   28.172777] 
[   28.172993] The buggy address belongs to the object at ffff888101aebe00
[   28.172993]  which belongs to the cache kmalloc-64 of size 64
[   28.174028] The buggy address is located 0 bytes to the right of
[   28.174028]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.175136] 
[   28.175419] The buggy address belongs to the physical page:
[   28.175822] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.176437] flags: 0x200000000000000(node=0|zone=2)
[   28.176916] page_type: f5(slab)
[   28.177239] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.177689] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.178101] page dumped because: kasan: bad access detected
[   28.178487] 
[   28.178721] Memory state around the buggy address:
[   28.179228]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.179903]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.180618] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.181290]                                      ^
[   28.181768]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.182484]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.183058] ==================================================================
---
[   27.908994] ==================================================================
[   27.909748] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1468/0x5450
[   27.910560] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.911365] 
[   27.911647] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.911756] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.911796] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.911854] Call Trace:
[   27.911892]  <TASK>
[   27.911923]  dump_stack_lvl+0x73/0xb0
[   27.911996]  print_report+0xd1/0x640
[   27.912122]  ? __virt_addr_valid+0x1db/0x2d0
[   27.912259]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.912339]  kasan_report+0x102/0x140
[   27.912396]  ? kasan_atomics_helper+0x1468/0x5450
[   27.912484]  ? kasan_atomics_helper+0x1468/0x5450
[   27.912604]  kasan_check_range+0x10c/0x1c0
[   27.912711]  __kasan_check_write+0x18/0x20
[   27.912821]  kasan_atomics_helper+0x1468/0x5450
[   27.912903]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.913023]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.913117]  ? kasan_atomics+0x153/0x310
[   27.913224]  kasan_atomics+0x1dd/0x310
[   27.913340]  ? __pfx_kasan_atomics+0x10/0x10
[   27.913457]  ? __pfx_read_tsc+0x10/0x10
[   27.913540]  ? ktime_get_ts64+0x86/0x230
[   27.913629]  kunit_try_run_case+0x1b3/0x490
[   27.913779]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.913886]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.913980]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.914080]  ? __kthread_parkme+0x82/0x160
[   27.914168]  ? preempt_count_sub+0x50/0x80
[   27.914298]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.914383]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.914476]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.914550]  kthread+0x324/0x6d0
[   27.914612]  ? trace_preempt_on+0x20/0xc0
[   27.914675]  ? __pfx_kthread+0x10/0x10
[   27.914739]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.914804]  ? calculate_sigpending+0x7b/0xa0
[   27.914873]  ? __pfx_kthread+0x10/0x10
[   27.914937]  ret_from_fork+0x41/0x80
[   27.914999]  ? __pfx_kthread+0x10/0x10
[   27.915062]  ret_from_fork_asm+0x1a/0x30
[   27.915152]  </TASK>
[   27.915249] 
[   27.932307] Allocated by task 273:
[   27.932724]  kasan_save_stack+0x3d/0x60
[   27.933100]  kasan_save_track+0x18/0x40
[   27.933590]  kasan_save_alloc_info+0x3b/0x50
[   27.933898]  __kasan_kmalloc+0xb7/0xc0
[   27.934178]  __kmalloc_cache_noprof+0x184/0x410
[   27.934766]  kasan_atomics+0x96/0x310
[   27.935215]  kunit_try_run_case+0x1b3/0x490
[   27.935692]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.936310]  kthread+0x324/0x6d0
[   27.936730]  ret_from_fork+0x41/0x80
[   27.937155]  ret_from_fork_asm+0x1a/0x30
[   27.937713] 
[   27.937993] The buggy address belongs to the object at ffff888101aebe00
[   27.937993]  which belongs to the cache kmalloc-64 of size 64
[   27.939139] The buggy address is located 0 bytes to the right of
[   27.939139]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.940121] 
[   27.940463] The buggy address belongs to the physical page:
[   27.941046] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.941941] flags: 0x200000000000000(node=0|zone=2)
[   27.942551] page_type: f5(slab)
[   27.942896] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.943620] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.944117] page dumped because: kasan: bad access detected
[   27.944724] 
[   27.944895] Memory state around the buggy address:
[   27.945361]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.945960]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.946664] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.947365]                                      ^
[   27.947723]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.948956]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.949759] ==================================================================
---
[   26.784582] ==================================================================
[   26.785363] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x698/0x5450
[   26.785955] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.786926] 
[   26.787998] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.788117] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.788174] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.788224] Call Trace:
[   26.788245]  <TASK>
[   26.788267]  dump_stack_lvl+0x73/0xb0
[   26.788312]  print_report+0xd1/0x640
[   26.788346]  ? __virt_addr_valid+0x1db/0x2d0
[   26.788383]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.788439]  kasan_report+0x102/0x140
[   26.788472]  ? kasan_atomics_helper+0x698/0x5450
[   26.788507]  ? kasan_atomics_helper+0x698/0x5450
[   26.788548]  kasan_check_range+0x10c/0x1c0
[   26.788582]  __kasan_check_write+0x18/0x20
[   26.788683]  kasan_atomics_helper+0x698/0x5450
[   26.788751]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.788810]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.788869]  ? kasan_atomics+0x153/0x310
[   26.788932]  kasan_atomics+0x1dd/0x310
[   26.788977]  ? __pfx_kasan_atomics+0x10/0x10
[   26.789011]  ? __pfx_read_tsc+0x10/0x10
[   26.789044]  ? ktime_get_ts64+0x86/0x230
[   26.789079]  kunit_try_run_case+0x1b3/0x490
[   26.789116]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.789150]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.789198]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.789254]  ? __kthread_parkme+0x82/0x160
[   26.789287]  ? preempt_count_sub+0x50/0x80
[   26.789322]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.789359]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.789394]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.789446]  kthread+0x324/0x6d0
[   26.789479]  ? trace_preempt_on+0x20/0xc0
[   26.789512]  ? __pfx_kthread+0x10/0x10
[   26.789542]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.789575]  ? calculate_sigpending+0x7b/0xa0
[   26.789672]  ? __pfx_kthread+0x10/0x10
[   26.789723]  ret_from_fork+0x41/0x80
[   26.789757]  ? __pfx_kthread+0x10/0x10
[   26.789789]  ret_from_fork_asm+0x1a/0x30
[   26.789836]  </TASK>
[   26.789853] 
[   26.809606] Allocated by task 273:
[   26.810497]  kasan_save_stack+0x3d/0x60
[   26.811672]  kasan_save_track+0x18/0x40
[   26.812109]  kasan_save_alloc_info+0x3b/0x50
[   26.812730]  __kasan_kmalloc+0xb7/0xc0
[   26.813310]  __kmalloc_cache_noprof+0x184/0x410
[   26.814119]  kasan_atomics+0x96/0x310
[   26.814505]  kunit_try_run_case+0x1b3/0x490
[   26.815435]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.816140]  kthread+0x324/0x6d0
[   26.816668]  ret_from_fork+0x41/0x80
[   26.817035]  ret_from_fork_asm+0x1a/0x30
[   26.817495] 
[   26.818129] The buggy address belongs to the object at ffff888101aebe00
[   26.818129]  which belongs to the cache kmalloc-64 of size 64
[   26.819661] The buggy address is located 0 bytes to the right of
[   26.819661]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.820831] 
[   26.821209] The buggy address belongs to the physical page:
[   26.821560] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.822325] flags: 0x200000000000000(node=0|zone=2)
[   26.822936] page_type: f5(slab)
[   26.823844] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.824506] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.825618] page dumped because: kasan: bad access detected
[   26.826325] 
[   26.826740] Memory state around the buggy address:
[   26.827112]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.828141]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.829104] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.829928]                                      ^
[   26.830936]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.831869]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.832451] ==================================================================
---
[   28.223850] ==================================================================
[   28.225015] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x18b2/0x5450
[   28.225855] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.226495] 
[   28.226682] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.226790] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.226944] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.227010] Call Trace:
[   28.227046]  <TASK>
[   28.227082]  dump_stack_lvl+0x73/0xb0
[   28.227158]  print_report+0xd1/0x640
[   28.227238]  ? __virt_addr_valid+0x1db/0x2d0
[   28.227305]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.227377]  kasan_report+0x102/0x140
[   28.227461]  ? kasan_atomics_helper+0x18b2/0x5450
[   28.227529]  ? kasan_atomics_helper+0x18b2/0x5450
[   28.227601]  kasan_check_range+0x10c/0x1c0
[   28.227661]  __kasan_check_write+0x18/0x20
[   28.227722]  kasan_atomics_helper+0x18b2/0x5450
[   28.227790]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.227856]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.227922]  ? kasan_atomics+0x153/0x310
[   28.227989]  kasan_atomics+0x1dd/0x310
[   28.228046]  ? __pfx_kasan_atomics+0x10/0x10
[   28.228108]  ? __pfx_read_tsc+0x10/0x10
[   28.228194]  ? ktime_get_ts64+0x86/0x230
[   28.228268]  kunit_try_run_case+0x1b3/0x490
[   28.228366]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.228424]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.228461]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.228496]  ? __kthread_parkme+0x82/0x160
[   28.228528]  ? preempt_count_sub+0x50/0x80
[   28.228564]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.228598]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.228633]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.228668]  kthread+0x324/0x6d0
[   28.228697]  ? trace_preempt_on+0x20/0xc0
[   28.228733]  ? __pfx_kthread+0x10/0x10
[   28.228764]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.228796]  ? calculate_sigpending+0x7b/0xa0
[   28.228830]  ? __pfx_kthread+0x10/0x10
[   28.228863]  ret_from_fork+0x41/0x80
[   28.228893]  ? __pfx_kthread+0x10/0x10
[   28.228925]  ret_from_fork_asm+0x1a/0x30
[   28.228970]  </TASK>
[   28.228987] 
[   28.243939] Allocated by task 273:
[   28.244383]  kasan_save_stack+0x3d/0x60
[   28.244751]  kasan_save_track+0x18/0x40
[   28.245247]  kasan_save_alloc_info+0x3b/0x50
[   28.245699]  __kasan_kmalloc+0xb7/0xc0
[   28.246161]  __kmalloc_cache_noprof+0x184/0x410
[   28.246705]  kasan_atomics+0x96/0x310
[   28.246994]  kunit_try_run_case+0x1b3/0x490
[   28.247532]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.247973]  kthread+0x324/0x6d0
[   28.248417]  ret_from_fork+0x41/0x80
[   28.248850]  ret_from_fork_asm+0x1a/0x30
[   28.249310] 
[   28.249498] The buggy address belongs to the object at ffff888101aebe00
[   28.249498]  which belongs to the cache kmalloc-64 of size 64
[   28.250089] The buggy address is located 0 bytes to the right of
[   28.250089]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.251136] 
[   28.251430] The buggy address belongs to the physical page:
[   28.251960] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.252738] flags: 0x200000000000000(node=0|zone=2)
[   28.253272] page_type: f5(slab)
[   28.253581] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.254005] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.254490] page dumped because: kasan: bad access detected
[   28.254951] 
[   28.255222] Memory state around the buggy address:
[   28.255713]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.256455]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.257139] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.257860]                                      ^
[   28.258475]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.259170]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.259941] ==================================================================
---
[   26.926027] ==================================================================
[   26.926900] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x861/0x5450
[   26.927554] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.928239] 
[   26.928829] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.928942] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.928979] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.929049] Call Trace:
[   26.929107]  <TASK>
[   26.929145]  dump_stack_lvl+0x73/0xb0
[   26.929264]  print_report+0xd1/0x640
[   26.929335]  ? __virt_addr_valid+0x1db/0x2d0
[   26.929432]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.929529]  kasan_report+0x102/0x140
[   26.929734]  ? kasan_atomics_helper+0x861/0x5450
[   26.929828]  ? kasan_atomics_helper+0x861/0x5450
[   26.929906]  kasan_check_range+0x10c/0x1c0
[   26.930016]  __kasan_check_write+0x18/0x20
[   26.930086]  kasan_atomics_helper+0x861/0x5450
[   26.930159]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.930267]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.930334]  ? kasan_atomics+0x153/0x310
[   26.930429]  kasan_atomics+0x1dd/0x310
[   26.930497]  ? __pfx_kasan_atomics+0x10/0x10
[   26.930560]  ? __pfx_read_tsc+0x10/0x10
[   26.930805]  ? ktime_get_ts64+0x86/0x230
[   26.930887]  kunit_try_run_case+0x1b3/0x490
[   26.930950]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.930988]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.931023]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.931058]  ? __kthread_parkme+0x82/0x160
[   26.931090]  ? preempt_count_sub+0x50/0x80
[   26.931125]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.931161]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.931219]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.931256]  kthread+0x324/0x6d0
[   26.931286]  ? trace_preempt_on+0x20/0xc0
[   26.931317]  ? __pfx_kthread+0x10/0x10
[   26.931348]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.931380]  ? calculate_sigpending+0x7b/0xa0
[   26.931432]  ? __pfx_kthread+0x10/0x10
[   26.931465]  ret_from_fork+0x41/0x80
[   26.931497]  ? __pfx_kthread+0x10/0x10
[   26.931527]  ret_from_fork_asm+0x1a/0x30
[   26.931574]  </TASK>
[   26.931625] 
[   26.951044] Allocated by task 273:
[   26.951480]  kasan_save_stack+0x3d/0x60
[   26.951813]  kasan_save_track+0x18/0x40
[   26.952099]  kasan_save_alloc_info+0x3b/0x50
[   26.952620]  __kasan_kmalloc+0xb7/0xc0
[   26.953034]  __kmalloc_cache_noprof+0x184/0x410
[   26.953771]  kasan_atomics+0x96/0x310
[   26.954259]  kunit_try_run_case+0x1b3/0x490
[   26.954959]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.955499]  kthread+0x324/0x6d0
[   26.955948]  ret_from_fork+0x41/0x80
[   26.956586]  ret_from_fork_asm+0x1a/0x30
[   26.957037] 
[   26.957454] The buggy address belongs to the object at ffff888101aebe00
[   26.957454]  which belongs to the cache kmalloc-64 of size 64
[   26.958081] The buggy address is located 0 bytes to the right of
[   26.958081]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.959483] 
[   26.959724] The buggy address belongs to the physical page:
[   26.960763] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.961798] flags: 0x200000000000000(node=0|zone=2)
[   26.962142] page_type: f5(slab)
[   26.962421] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.962850] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.963744] page dumped because: kasan: bad access detected
[   26.964498] 
[   26.964887] Memory state around the buggy address:
[   26.965573]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.966464]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.967286] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.968010]                                      ^
[   26.968626]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.969657]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.970453] ==================================================================
---
[   28.071571] ==================================================================
[   28.072375] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1650/0x5450
[   28.073221] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.073980] 
[   28.074236] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.074398] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.074467] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.074528] Call Trace:
[   28.074559]  <TASK>
[   28.074595]  dump_stack_lvl+0x73/0xb0
[   28.074669]  print_report+0xd1/0x640
[   28.074823]  ? __virt_addr_valid+0x1db/0x2d0
[   28.074923]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.075018]  kasan_report+0x102/0x140
[   28.075077]  ? kasan_atomics_helper+0x1650/0x5450
[   28.075145]  ? kasan_atomics_helper+0x1650/0x5450
[   28.075302]  kasan_check_range+0x10c/0x1c0
[   28.075438]  __kasan_check_write+0x18/0x20
[   28.075531]  kasan_atomics_helper+0x1650/0x5450
[   28.075682]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.075804]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.075902]  ? kasan_atomics+0x153/0x310
[   28.075977]  kasan_atomics+0x1dd/0x310
[   28.076041]  ? __pfx_kasan_atomics+0x10/0x10
[   28.076102]  ? __pfx_read_tsc+0x10/0x10
[   28.076154]  ? ktime_get_ts64+0x86/0x230
[   28.076331]  kunit_try_run_case+0x1b3/0x490
[   28.076468]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.076613]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.076712]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.076787]  ? __kthread_parkme+0x82/0x160
[   28.076849]  ? preempt_count_sub+0x50/0x80
[   28.076915]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.076986]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.077053]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.077095]  kthread+0x324/0x6d0
[   28.077126]  ? trace_preempt_on+0x20/0xc0
[   28.077159]  ? __pfx_kthread+0x10/0x10
[   28.077233]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.077272]  ? calculate_sigpending+0x7b/0xa0
[   28.077307]  ? __pfx_kthread+0x10/0x10
[   28.077341]  ret_from_fork+0x41/0x80
[   28.077373]  ? __pfx_kthread+0x10/0x10
[   28.077419]  ret_from_fork_asm+0x1a/0x30
[   28.077469]  </TASK>
[   28.077487] 
[   28.092675] Allocated by task 273:
[   28.093084]  kasan_save_stack+0x3d/0x60
[   28.093575]  kasan_save_track+0x18/0x40
[   28.094170]  kasan_save_alloc_info+0x3b/0x50
[   28.094542]  __kasan_kmalloc+0xb7/0xc0
[   28.094983]  __kmalloc_cache_noprof+0x184/0x410
[   28.095560]  kasan_atomics+0x96/0x310
[   28.095868]  kunit_try_run_case+0x1b3/0x490
[   28.096435]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.096962]  kthread+0x324/0x6d0
[   28.097373]  ret_from_fork+0x41/0x80
[   28.097745]  ret_from_fork_asm+0x1a/0x30
[   28.098284] 
[   28.098506] The buggy address belongs to the object at ffff888101aebe00
[   28.098506]  which belongs to the cache kmalloc-64 of size 64
[   28.099298] The buggy address is located 0 bytes to the right of
[   28.099298]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.099940] 
[   28.100114] The buggy address belongs to the physical page:
[   28.100450] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.101204] flags: 0x200000000000000(node=0|zone=2)
[   28.101851] page_type: f5(slab)
[   28.102251] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.102967] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.103940] page dumped because: kasan: bad access detected
[   28.104528] 
[   28.104760] Memory state around the buggy address:
[   28.105225]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.105889]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.106553] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.106979]                                      ^
[   28.107290]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.107983]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.109069] ==================================================================
---
[   28.547709] ==================================================================
[   28.548100] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1d7b/0x5450
[   28.548814] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.549623] 
[   28.549912] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.550020] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.550059] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.550120] Call Trace:
[   28.550157]  <TASK>
[   28.550235]  dump_stack_lvl+0x73/0xb0
[   28.550324]  print_report+0xd1/0x640
[   28.550420]  ? __virt_addr_valid+0x1db/0x2d0
[   28.550495]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.550565]  kasan_report+0x102/0x140
[   28.550627]  ? kasan_atomics_helper+0x1d7b/0x5450
[   28.550735]  ? kasan_atomics_helper+0x1d7b/0x5450
[   28.550823]  kasan_check_range+0x10c/0x1c0
[   28.550911]  __kasan_check_write+0x18/0x20
[   28.550982]  kasan_atomics_helper+0x1d7b/0x5450
[   28.551078]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.551157]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.551301]  ? kasan_atomics+0x153/0x310
[   28.551419]  kasan_atomics+0x1dd/0x310
[   28.551515]  ? __pfx_kasan_atomics+0x10/0x10
[   28.551590]  ? __pfx_read_tsc+0x10/0x10
[   28.551648]  ? ktime_get_ts64+0x86/0x230
[   28.551715]  kunit_try_run_case+0x1b3/0x490
[   28.551790]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.551857]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.551924]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.551993]  ? __kthread_parkme+0x82/0x160
[   28.552054]  ? preempt_count_sub+0x50/0x80
[   28.552123]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.552236]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.552318]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.552382]  kthread+0x324/0x6d0
[   28.552438]  ? trace_preempt_on+0x20/0xc0
[   28.552472]  ? __pfx_kthread+0x10/0x10
[   28.552506]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.552538]  ? calculate_sigpending+0x7b/0xa0
[   28.552572]  ? __pfx_kthread+0x10/0x10
[   28.552605]  ret_from_fork+0x41/0x80
[   28.552635]  ? __pfx_kthread+0x10/0x10
[   28.552665]  ret_from_fork_asm+0x1a/0x30
[   28.552712]  </TASK>
[   28.552730] 
[   28.567358] Allocated by task 273:
[   28.567747]  kasan_save_stack+0x3d/0x60
[   28.568303]  kasan_save_track+0x18/0x40
[   28.568697]  kasan_save_alloc_info+0x3b/0x50
[   28.569232]  __kasan_kmalloc+0xb7/0xc0
[   28.569646]  __kmalloc_cache_noprof+0x184/0x410
[   28.570123]  kasan_atomics+0x96/0x310
[   28.570547]  kunit_try_run_case+0x1b3/0x490
[   28.571056]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.571684]  kthread+0x324/0x6d0
[   28.572075]  ret_from_fork+0x41/0x80
[   28.572512]  ret_from_fork_asm+0x1a/0x30
[   28.573000] 
[   28.573337] The buggy address belongs to the object at ffff888101aebe00
[   28.573337]  which belongs to the cache kmalloc-64 of size 64
[   28.574256] The buggy address is located 0 bytes to the right of
[   28.574256]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.575430] 
[   28.575670] The buggy address belongs to the physical page:
[   28.576136] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.576803] flags: 0x200000000000000(node=0|zone=2)
[   28.577125] page_type: f5(slab)
[   28.577416] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.578120] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.578976] page dumped because: kasan: bad access detected
[   28.579562] 
[   28.579791] Memory state around the buggy address:
[   28.580373]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.581144]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.581877] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.582322]                                      ^
[   28.582886]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.583736]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.584480] ==================================================================
---
[   26.689076] ==================================================================
[   26.689759] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x566/0x5450
[   26.690312] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.691295] 
[   26.691713] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.691829] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.691867] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.691929] Call Trace:
[   26.691968]  <TASK>
[   26.692003]  dump_stack_lvl+0x73/0xb0
[   26.692083]  print_report+0xd1/0x640
[   26.692147]  ? __virt_addr_valid+0x1db/0x2d0
[   26.692258]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.692381]  kasan_report+0x102/0x140
[   26.692467]  ? kasan_atomics_helper+0x566/0x5450
[   26.692571]  ? kasan_atomics_helper+0x566/0x5450
[   26.692809]  kasan_check_range+0x10c/0x1c0
[   26.692880]  __kasan_check_write+0x18/0x20
[   26.692931]  kasan_atomics_helper+0x566/0x5450
[   26.692970]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.693005]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.693041]  ? kasan_atomics+0x153/0x310
[   26.693077]  kasan_atomics+0x1dd/0x310
[   26.693108]  ? __pfx_kasan_atomics+0x10/0x10
[   26.693140]  ? __pfx_read_tsc+0x10/0x10
[   26.693170]  ? ktime_get_ts64+0x86/0x230
[   26.693206]  kunit_try_run_case+0x1b3/0x490
[   26.693242]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.693308]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.693349]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.693384]  ? __kthread_parkme+0x82/0x160
[   26.693433]  ? preempt_count_sub+0x50/0x80
[   26.693469]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.693506]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.693541]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.693578]  kthread+0x324/0x6d0
[   26.693676]  ? trace_preempt_on+0x20/0xc0
[   26.693719]  ? __pfx_kthread+0x10/0x10
[   26.693753]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.693786]  ? calculate_sigpending+0x7b/0xa0
[   26.693822]  ? __pfx_kthread+0x10/0x10
[   26.693853]  ret_from_fork+0x41/0x80
[   26.693886]  ? __pfx_kthread+0x10/0x10
[   26.693918]  ret_from_fork_asm+0x1a/0x30
[   26.693965]  </TASK>
[   26.693983] 
[   26.712729] Allocated by task 273:
[   26.713277]  kasan_save_stack+0x3d/0x60
[   26.713842]  kasan_save_track+0x18/0x40
[   26.714583]  kasan_save_alloc_info+0x3b/0x50
[   26.715947]  __kasan_kmalloc+0xb7/0xc0
[   26.716462]  __kmalloc_cache_noprof+0x184/0x410
[   26.717050]  kasan_atomics+0x96/0x310
[   26.718044]  kunit_try_run_case+0x1b3/0x490
[   26.718393]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.719248]  kthread+0x324/0x6d0
[   26.719518]  ret_from_fork+0x41/0x80
[   26.720176]  ret_from_fork_asm+0x1a/0x30
[   26.721145] 
[   26.721832] The buggy address belongs to the object at ffff888101aebe00
[   26.721832]  which belongs to the cache kmalloc-64 of size 64
[   26.723421] The buggy address is located 0 bytes to the right of
[   26.723421]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.724364] 
[   26.724741] The buggy address belongs to the physical page:
[   26.725173] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.725860] flags: 0x200000000000000(node=0|zone=2)
[   26.726513] page_type: f5(slab)
[   26.726799] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.727816] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.728543] page dumped because: kasan: bad access detected
[   26.729270] 
[   26.729507] Memory state around the buggy address:
[   26.730152]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.731180]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.731897] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.732764]                                      ^
[   26.733205]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.734039]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.734647] ==================================================================
---
[   28.301923] ==================================================================
[   28.302897] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x19e4/0x5450
[   28.303922] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.304570] 
[   28.304779] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.304911] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.304952] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.305013] Call Trace:
[   28.305044]  <TASK>
[   28.305079]  dump_stack_lvl+0x73/0xb0
[   28.305157]  print_report+0xd1/0x640
[   28.305289]  ? __virt_addr_valid+0x1db/0x2d0
[   28.305380]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.305473]  kasan_report+0x102/0x140
[   28.305540]  ? kasan_atomics_helper+0x19e4/0x5450
[   28.305607]  ? kasan_atomics_helper+0x19e4/0x5450
[   28.305668]  kasan_check_range+0x10c/0x1c0
[   28.305703]  __kasan_check_write+0x18/0x20
[   28.305735]  kasan_atomics_helper+0x19e4/0x5450
[   28.305772]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.305807]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.305843]  ? kasan_atomics+0x153/0x310
[   28.305878]  kasan_atomics+0x1dd/0x310
[   28.305907]  ? __pfx_kasan_atomics+0x10/0x10
[   28.305940]  ? __pfx_read_tsc+0x10/0x10
[   28.305969]  ? ktime_get_ts64+0x86/0x230
[   28.306002]  kunit_try_run_case+0x1b3/0x490
[   28.306039]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.306073]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.306105]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.306140]  ? __kthread_parkme+0x82/0x160
[   28.306171]  ? preempt_count_sub+0x50/0x80
[   28.306242]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.306279]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.306313]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.306355]  kthread+0x324/0x6d0
[   28.306388]  ? trace_preempt_on+0x20/0xc0
[   28.306438]  ? __pfx_kthread+0x10/0x10
[   28.306469]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.306502]  ? calculate_sigpending+0x7b/0xa0
[   28.306536]  ? __pfx_kthread+0x10/0x10
[   28.306569]  ret_from_fork+0x41/0x80
[   28.306599]  ? __pfx_kthread+0x10/0x10
[   28.306631]  ret_from_fork_asm+0x1a/0x30
[   28.306676]  </TASK>
[   28.306693] 
[   28.319386] Allocated by task 273:
[   28.319773]  kasan_save_stack+0x3d/0x60
[   28.320262]  kasan_save_track+0x18/0x40
[   28.320731]  kasan_save_alloc_info+0x3b/0x50
[   28.321241]  __kasan_kmalloc+0xb7/0xc0
[   28.321626]  __kmalloc_cache_noprof+0x184/0x410
[   28.322060]  kasan_atomics+0x96/0x310
[   28.322553]  kunit_try_run_case+0x1b3/0x490
[   28.322861]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.323198]  kthread+0x324/0x6d0
[   28.323582]  ret_from_fork+0x41/0x80
[   28.324178]  ret_from_fork_asm+0x1a/0x30
[   28.324713] 
[   28.324930] The buggy address belongs to the object at ffff888101aebe00
[   28.324930]  which belongs to the cache kmalloc-64 of size 64
[   28.326592] The buggy address is located 0 bytes to the right of
[   28.326592]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.328398] 
[   28.329353] The buggy address belongs to the physical page:
[   28.330123] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.331666] flags: 0x200000000000000(node=0|zone=2)
[   28.332174] page_type: f5(slab)
[   28.333388] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.333952] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.335096] page dumped because: kasan: bad access detected
[   28.335980] 
[   28.336369] Memory state around the buggy address:
[   28.336675]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.337259]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.337961] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.338651]                                      ^
[   28.339150]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.339634]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.340352] ==================================================================
---
[   27.426275] ==================================================================
[   27.427816] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xfaa/0x5450
[   27.428826] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.429647] 
[   27.429925] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.430033] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.430071] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.430129] Call Trace:
[   27.430163]  <TASK>
[   27.430196]  dump_stack_lvl+0x73/0xb0
[   27.430272]  print_report+0xd1/0x640
[   27.430375]  ? __virt_addr_valid+0x1db/0x2d0
[   27.430476]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.430547]  kasan_report+0x102/0x140
[   27.430594]  ? kasan_atomics_helper+0xfaa/0x5450
[   27.430631]  ? kasan_atomics_helper+0xfaa/0x5450
[   27.430672]  kasan_check_range+0x10c/0x1c0
[   27.430705]  __kasan_check_write+0x18/0x20
[   27.430737]  kasan_atomics_helper+0xfaa/0x5450
[   27.430773]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.430807]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.430840]  ? kasan_atomics+0x153/0x310
[   27.430876]  kasan_atomics+0x1dd/0x310
[   27.430906]  ? __pfx_kasan_atomics+0x10/0x10
[   27.430938]  ? __pfx_read_tsc+0x10/0x10
[   27.430968]  ? ktime_get_ts64+0x86/0x230
[   27.431001]  kunit_try_run_case+0x1b3/0x490
[   27.431039]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.431072]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.431108]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.431140]  ? __kthread_parkme+0x82/0x160
[   27.431170]  ? preempt_count_sub+0x50/0x80
[   27.431234]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.431272]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.431309]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.431344]  kthread+0x324/0x6d0
[   27.431373]  ? trace_preempt_on+0x20/0xc0
[   27.431421]  ? __pfx_kthread+0x10/0x10
[   27.431454]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.431487]  ? calculate_sigpending+0x7b/0xa0
[   27.431521]  ? __pfx_kthread+0x10/0x10
[   27.431554]  ret_from_fork+0x41/0x80
[   27.431584]  ? __pfx_kthread+0x10/0x10
[   27.431616]  ret_from_fork_asm+0x1a/0x30
[   27.431660]  </TASK>
[   27.431678] 
[   27.446727] Allocated by task 273:
[   27.447180]  kasan_save_stack+0x3d/0x60
[   27.447720]  kasan_save_track+0x18/0x40
[   27.448018]  kasan_save_alloc_info+0x3b/0x50
[   27.449030]  __kasan_kmalloc+0xb7/0xc0
[   27.449296]  __kmalloc_cache_noprof+0x184/0x410
[   27.449792]  kasan_atomics+0x96/0x310
[   27.450188]  kunit_try_run_case+0x1b3/0x490
[   27.451127]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.451777]  kthread+0x324/0x6d0
[   27.452055]  ret_from_fork+0x41/0x80
[   27.452517]  ret_from_fork_asm+0x1a/0x30
[   27.453275] 
[   27.453495] The buggy address belongs to the object at ffff888101aebe00
[   27.453495]  which belongs to the cache kmalloc-64 of size 64
[   27.454833] The buggy address is located 0 bytes to the right of
[   27.454833]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.456383] 
[   27.456571] The buggy address belongs to the physical page:
[   27.457467] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.458231] flags: 0x200000000000000(node=0|zone=2)
[   27.458657] page_type: f5(slab)
[   27.459022] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.459676] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.460329] page dumped because: kasan: bad access detected
[   27.460742] 
[   27.460911] Memory state around the buggy address:
[   27.461576]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.462186]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.463165] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.463952]                                      ^
[   27.464620]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.465071]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.466017] ==================================================================
---
[   28.030028] ==================================================================
[   28.030688] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x15b7/0x5450
[   28.031151] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.032491] 
[   28.032753] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.032883] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.032923] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.032983] Call Trace:
[   28.033019]  <TASK>
[   28.033050]  dump_stack_lvl+0x73/0xb0
[   28.033173]  print_report+0xd1/0x640
[   28.033287]  ? __virt_addr_valid+0x1db/0x2d0
[   28.033362]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.033455]  kasan_report+0x102/0x140
[   28.033521]  ? kasan_atomics_helper+0x15b7/0x5450
[   28.033588]  ? kasan_atomics_helper+0x15b7/0x5450
[   28.033669]  kasan_check_range+0x10c/0x1c0
[   28.033734]  __kasan_check_write+0x18/0x20
[   28.033793]  kasan_atomics_helper+0x15b7/0x5450
[   28.033864]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.033930]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.033988]  ? kasan_atomics+0x153/0x310
[   28.034057]  kasan_atomics+0x1dd/0x310
[   28.034111]  ? __pfx_kasan_atomics+0x10/0x10
[   28.034234]  ? __pfx_read_tsc+0x10/0x10
[   28.034300]  ? ktime_get_ts64+0x86/0x230
[   28.034388]  kunit_try_run_case+0x1b3/0x490
[   28.034495]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.034566]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.034634]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.034704]  ? __kthread_parkme+0x82/0x160
[   28.034747]  ? preempt_count_sub+0x50/0x80
[   28.034783]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.034821]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.034858]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.034893]  kthread+0x324/0x6d0
[   28.034924]  ? trace_preempt_on+0x20/0xc0
[   28.034956]  ? __pfx_kthread+0x10/0x10
[   28.034988]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.035021]  ? calculate_sigpending+0x7b/0xa0
[   28.035057]  ? __pfx_kthread+0x10/0x10
[   28.035088]  ret_from_fork+0x41/0x80
[   28.035119]  ? __pfx_kthread+0x10/0x10
[   28.035151]  ret_from_fork_asm+0x1a/0x30
[   28.035221]  </TASK>
[   28.035241] 
[   28.051761] Allocated by task 273:
[   28.052225]  kasan_save_stack+0x3d/0x60
[   28.052665]  kasan_save_track+0x18/0x40
[   28.053220]  kasan_save_alloc_info+0x3b/0x50
[   28.053719]  __kasan_kmalloc+0xb7/0xc0
[   28.054250]  __kmalloc_cache_noprof+0x184/0x410
[   28.054715]  kasan_atomics+0x96/0x310
[   28.055180]  kunit_try_run_case+0x1b3/0x490
[   28.055758]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.056325]  kthread+0x324/0x6d0
[   28.056778]  ret_from_fork+0x41/0x80
[   28.057292]  ret_from_fork_asm+0x1a/0x30
[   28.057856] 
[   28.058044] The buggy address belongs to the object at ffff888101aebe00
[   28.058044]  which belongs to the cache kmalloc-64 of size 64
[   28.058987] The buggy address is located 0 bytes to the right of
[   28.058987]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.060133] 
[   28.060389] The buggy address belongs to the physical page:
[   28.061061] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.061696] flags: 0x200000000000000(node=0|zone=2)
[   28.062213] page_type: f5(slab)
[   28.062620] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.063362] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.064046] page dumped because: kasan: bad access detected
[   28.064637] 
[   28.064896] Memory state around the buggy address:
[   28.065427]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.066023]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.066703] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.067252]                                      ^
[   28.067781]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.068615]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.069259] ==================================================================
---
[   27.757934] ==================================================================
[   27.758491] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x12e7/0x5450
[   27.759183] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.759688] 
[   27.759800] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.759861] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.759882] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.759913] Call Trace:
[   27.759933]  <TASK>
[   27.759953]  dump_stack_lvl+0x73/0xb0
[   27.759996]  print_report+0xd1/0x640
[   27.760031]  ? __virt_addr_valid+0x1db/0x2d0
[   27.760067]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.760105]  kasan_report+0x102/0x140
[   27.760136]  ? kasan_atomics_helper+0x12e7/0x5450
[   27.760172]  ? kasan_atomics_helper+0x12e7/0x5450
[   27.760282]  kasan_check_range+0x10c/0x1c0
[   27.760346]  __kasan_check_write+0x18/0x20
[   27.760425]  kasan_atomics_helper+0x12e7/0x5450
[   27.760852]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.761235]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.761348]  ? kasan_atomics+0x153/0x310
[   27.761414]  kasan_atomics+0x1dd/0x310
[   27.761454]  ? __pfx_kasan_atomics+0x10/0x10
[   27.761488]  ? __pfx_read_tsc+0x10/0x10
[   27.761518]  ? ktime_get_ts64+0x86/0x230
[   27.761554]  kunit_try_run_case+0x1b3/0x490
[   27.761591]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.761625]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.761659]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.761693]  ? __kthread_parkme+0x82/0x160
[   27.761725]  ? preempt_count_sub+0x50/0x80
[   27.761760]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.761795]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.761829]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.761864]  kthread+0x324/0x6d0
[   27.761894]  ? trace_preempt_on+0x20/0xc0
[   27.761926]  ? __pfx_kthread+0x10/0x10
[   27.761958]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.761992]  ? calculate_sigpending+0x7b/0xa0
[   27.762027]  ? __pfx_kthread+0x10/0x10
[   27.762059]  ret_from_fork+0x41/0x80
[   27.762090]  ? __pfx_kthread+0x10/0x10
[   27.762122]  ret_from_fork_asm+0x1a/0x30
[   27.762166]  </TASK>
[   27.762195] 
[   27.778321] Allocated by task 273:
[   27.778752]  kasan_save_stack+0x3d/0x60
[   27.779157]  kasan_save_track+0x18/0x40
[   27.779683]  kasan_save_alloc_info+0x3b/0x50
[   27.780089]  __kasan_kmalloc+0xb7/0xc0
[   27.780593]  __kmalloc_cache_noprof+0x184/0x410
[   27.781048]  kasan_atomics+0x96/0x310
[   27.781518]  kunit_try_run_case+0x1b3/0x490
[   27.781921]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.782469]  kthread+0x324/0x6d0
[   27.782819]  ret_from_fork+0x41/0x80
[   27.783222]  ret_from_fork_asm+0x1a/0x30
[   27.783666] 
[   27.783856] The buggy address belongs to the object at ffff888101aebe00
[   27.783856]  which belongs to the cache kmalloc-64 of size 64
[   27.784707] The buggy address is located 0 bytes to the right of
[   27.784707]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.785375] 
[   27.785625] The buggy address belongs to the physical page:
[   27.786220] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.786972] flags: 0x200000000000000(node=0|zone=2)
[   27.787519] page_type: f5(slab)
[   27.787882] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.788619] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.789308] page dumped because: kasan: bad access detected
[   27.789707] 
[   27.789869] Memory state around the buggy address:
[   27.790426]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.791112]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.791714] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.792372]                                      ^
[   27.792858]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.793544]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.794147] ==================================================================
---
[   27.506994] ==================================================================
[   27.507567] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x107a/0x5450
[   27.508520] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.509051] 
[   27.509354] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.509498] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.509535] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.509629] Call Trace:
[   27.509668]  <TASK>
[   27.509724]  dump_stack_lvl+0x73/0xb0
[   27.509807]  print_report+0xd1/0x640
[   27.509874]  ? __virt_addr_valid+0x1db/0x2d0
[   27.509944]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.510017]  kasan_report+0x102/0x140
[   27.510081]  ? kasan_atomics_helper+0x107a/0x5450
[   27.510155]  ? kasan_atomics_helper+0x107a/0x5450
[   27.510283]  kasan_check_range+0x10c/0x1c0
[   27.510365]  __kasan_check_write+0x18/0x20
[   27.510494]  kasan_atomics_helper+0x107a/0x5450
[   27.510603]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.510755]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.510829]  ? kasan_atomics+0x153/0x310
[   27.510901]  kasan_atomics+0x1dd/0x310
[   27.510963]  ? __pfx_kasan_atomics+0x10/0x10
[   27.511024]  ? __pfx_read_tsc+0x10/0x10
[   27.511075]  ? ktime_get_ts64+0x86/0x230
[   27.511245]  kunit_try_run_case+0x1b3/0x490
[   27.511335]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.511423]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.511487]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.511553]  ? __kthread_parkme+0x82/0x160
[   27.511609]  ? preempt_count_sub+0x50/0x80
[   27.511760]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.511861]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.511958]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.512105]  kthread+0x324/0x6d0
[   27.512227]  ? trace_preempt_on+0x20/0xc0
[   27.512307]  ? __pfx_kthread+0x10/0x10
[   27.512449]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.512548]  ? calculate_sigpending+0x7b/0xa0
[   27.512643]  ? __pfx_kthread+0x10/0x10
[   27.512754]  ret_from_fork+0x41/0x80
[   27.512846]  ? __pfx_kthread+0x10/0x10
[   27.512912]  ret_from_fork_asm+0x1a/0x30
[   27.512993]  </TASK>
[   27.513025] 
[   27.528276] Allocated by task 273:
[   27.528698]  kasan_save_stack+0x3d/0x60
[   27.529239]  kasan_save_track+0x18/0x40
[   27.529722]  kasan_save_alloc_info+0x3b/0x50
[   27.530122]  __kasan_kmalloc+0xb7/0xc0
[   27.530676]  __kmalloc_cache_noprof+0x184/0x410
[   27.531252]  kasan_atomics+0x96/0x310
[   27.531704]  kunit_try_run_case+0x1b3/0x490
[   27.532153]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.532673]  kthread+0x324/0x6d0
[   27.532999]  ret_from_fork+0x41/0x80
[   27.533531]  ret_from_fork_asm+0x1a/0x30
[   27.533965] 
[   27.534229] The buggy address belongs to the object at ffff888101aebe00
[   27.534229]  which belongs to the cache kmalloc-64 of size 64
[   27.535271] The buggy address is located 0 bytes to the right of
[   27.535271]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.536341] 
[   27.536596] The buggy address belongs to the physical page:
[   27.537004] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.537660] flags: 0x200000000000000(node=0|zone=2)
[   27.538271] page_type: f5(slab)
[   27.538682] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.539432] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.540174] page dumped because: kasan: bad access detected
[   27.540816] 
[   27.541045] Memory state around the buggy address:
[   27.541692]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.542475]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.543160] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.543772]                                      ^
[   27.544108]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.544877]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.545687] ==================================================================
---
[   27.071923] ==================================================================
[   27.073005] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xa2c/0x5450
[   27.074368] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.075317] 
[   27.075573] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.075709] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.075745] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.075801] Call Trace:
[   27.075837]  <TASK>
[   27.075939]  dump_stack_lvl+0x73/0xb0
[   27.076056]  print_report+0xd1/0x640
[   27.076146]  ? __virt_addr_valid+0x1db/0x2d0
[   27.076253]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.076375]  kasan_report+0x102/0x140
[   27.076485]  ? kasan_atomics_helper+0xa2c/0x5450
[   27.076555]  ? kasan_atomics_helper+0xa2c/0x5450
[   27.076683]  kasan_check_range+0x10c/0x1c0
[   27.076751]  __kasan_check_write+0x18/0x20
[   27.076874]  kasan_atomics_helper+0xa2c/0x5450
[   27.076977]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.077043]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.077105]  ? kasan_atomics+0x153/0x310
[   27.077172]  kasan_atomics+0x1dd/0x310
[   27.077349]  ? __pfx_kasan_atomics+0x10/0x10
[   27.077438]  ? __pfx_read_tsc+0x10/0x10
[   27.077499]  ? ktime_get_ts64+0x86/0x230
[   27.077641]  kunit_try_run_case+0x1b3/0x490
[   27.077755]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.077822]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.077887]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.077946]  ? __kthread_parkme+0x82/0x160
[   27.078000]  ? preempt_count_sub+0x50/0x80
[   27.078062]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.078126]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.078261]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.078304]  kthread+0x324/0x6d0
[   27.078335]  ? trace_preempt_on+0x20/0xc0
[   27.078377]  ? __pfx_kthread+0x10/0x10
[   27.078427]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.078462]  ? calculate_sigpending+0x7b/0xa0
[   27.078497]  ? __pfx_kthread+0x10/0x10
[   27.078532]  ret_from_fork+0x41/0x80
[   27.078562]  ? __pfx_kthread+0x10/0x10
[   27.078638]  ret_from_fork_asm+0x1a/0x30
[   27.078716]  </TASK>
[   27.078738] 
[   27.095671] Allocated by task 273:
[   27.095920]  kasan_save_stack+0x3d/0x60
[   27.096205]  kasan_save_track+0x18/0x40
[   27.096642]  kasan_save_alloc_info+0x3b/0x50
[   27.097090]  __kasan_kmalloc+0xb7/0xc0
[   27.097555]  __kmalloc_cache_noprof+0x184/0x410
[   27.098064]  kasan_atomics+0x96/0x310
[   27.098624]  kunit_try_run_case+0x1b3/0x490
[   27.099080]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.099686]  kthread+0x324/0x6d0
[   27.099954]  ret_from_fork+0x41/0x80
[   27.100275]  ret_from_fork_asm+0x1a/0x30
[   27.100762] 
[   27.100995] The buggy address belongs to the object at ffff888101aebe00
[   27.100995]  which belongs to the cache kmalloc-64 of size 64
[   27.102078] The buggy address is located 0 bytes to the right of
[   27.102078]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.103309] 
[   27.103498] The buggy address belongs to the physical page:
[   27.104063] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.104623] flags: 0x200000000000000(node=0|zone=2)
[   27.104944] page_type: f5(slab)
[   27.105239] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.106166] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.107073] page dumped because: kasan: bad access detected
[   27.108036] 
[   27.108318] Memory state around the buggy address:
[   27.108980]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.109784]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.110449] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.111219]                                      ^
[   27.111781]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.112542]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.113783] ==================================================================
---
[   27.589364] ==================================================================
[   27.590336] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1149/0x5450
[   27.591362] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.592050] 
[   27.592498] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.592610] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.592648] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.592709] Call Trace:
[   27.592747]  <TASK>
[   27.592782]  dump_stack_lvl+0x73/0xb0
[   27.592859]  print_report+0xd1/0x640
[   27.592925]  ? __virt_addr_valid+0x1db/0x2d0
[   27.592993]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.593066]  kasan_report+0x102/0x140
[   27.593127]  ? kasan_atomics_helper+0x1149/0x5450
[   27.593302]  ? kasan_atomics_helper+0x1149/0x5450
[   27.593509]  kasan_check_range+0x10c/0x1c0
[   27.593581]  __kasan_check_write+0x18/0x20
[   27.593644]  kasan_atomics_helper+0x1149/0x5450
[   27.593870]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.593987]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.594058]  ? kasan_atomics+0x153/0x310
[   27.594131]  kasan_atomics+0x1dd/0x310
[   27.594239]  ? __pfx_kasan_atomics+0x10/0x10
[   27.594309]  ? __pfx_read_tsc+0x10/0x10
[   27.594344]  ? ktime_get_ts64+0x86/0x230
[   27.594389]  kunit_try_run_case+0x1b3/0x490
[   27.594447]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.594481]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.594515]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.594550]  ? __kthread_parkme+0x82/0x160
[   27.594581]  ? preempt_count_sub+0x50/0x80
[   27.594616]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.594651]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.594687]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.594724]  kthread+0x324/0x6d0
[   27.594753]  ? trace_preempt_on+0x20/0xc0
[   27.594786]  ? __pfx_kthread+0x10/0x10
[   27.594817]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.594849]  ? calculate_sigpending+0x7b/0xa0
[   27.594884]  ? __pfx_kthread+0x10/0x10
[   27.594918]  ret_from_fork+0x41/0x80
[   27.594949]  ? __pfx_kthread+0x10/0x10
[   27.594981]  ret_from_fork_asm+0x1a/0x30
[   27.595024]  </TASK>
[   27.595042] 
[   27.612042] Allocated by task 273:
[   27.612505]  kasan_save_stack+0x3d/0x60
[   27.612940]  kasan_save_track+0x18/0x40
[   27.613333]  kasan_save_alloc_info+0x3b/0x50
[   27.614062]  __kasan_kmalloc+0xb7/0xc0
[   27.614392]  __kmalloc_cache_noprof+0x184/0x410
[   27.614726]  kasan_atomics+0x96/0x310
[   27.615002]  kunit_try_run_case+0x1b3/0x490
[   27.615617]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.616258]  kthread+0x324/0x6d0
[   27.617083]  ret_from_fork+0x41/0x80
[   27.617688]  ret_from_fork_asm+0x1a/0x30
[   27.618502] 
[   27.618842] The buggy address belongs to the object at ffff888101aebe00
[   27.618842]  which belongs to the cache kmalloc-64 of size 64
[   27.620046] The buggy address is located 0 bytes to the right of
[   27.620046]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.621320] 
[   27.621578] The buggy address belongs to the physical page:
[   27.622133] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.622641] flags: 0x200000000000000(node=0|zone=2)
[   27.622958] page_type: f5(slab)
[   27.623607] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.624310] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.625249] page dumped because: kasan: bad access detected
[   27.625749] 
[   27.626096] Memory state around the buggy address:
[   27.626771]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.627495]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.627904] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.628320]                                      ^
[   27.628665]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.629424]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.630111] ==================================================================
---
[   28.668746] ==================================================================
[   28.669473] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x1f44/0x5450
[   28.670005] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.671132] 
[   28.671428] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.671535] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.671572] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.671650] Call Trace:
[   28.671684]  <TASK>
[   28.671718]  dump_stack_lvl+0x73/0xb0
[   28.671792]  print_report+0xd1/0x640
[   28.671857]  ? __virt_addr_valid+0x1db/0x2d0
[   28.671922]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.671990]  kasan_report+0x102/0x140
[   28.672049]  ? kasan_atomics_helper+0x1f44/0x5450
[   28.672111]  ? kasan_atomics_helper+0x1f44/0x5450
[   28.672184]  kasan_check_range+0x10c/0x1c0
[   28.672246]  __kasan_check_write+0x18/0x20
[   28.672303]  kasan_atomics_helper+0x1f44/0x5450
[   28.672374]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.672460]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.672524]  ? kasan_atomics+0x153/0x310
[   28.672591]  kasan_atomics+0x1dd/0x310
[   28.672652]  ? __pfx_kasan_atomics+0x10/0x10
[   28.672711]  ? __pfx_read_tsc+0x10/0x10
[   28.672767]  ? ktime_get_ts64+0x86/0x230
[   28.672833]  kunit_try_run_case+0x1b3/0x490
[   28.672905]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.672967]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.673030]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.673090]  ? __kthread_parkme+0x82/0x160
[   28.673151]  ? preempt_count_sub+0x50/0x80
[   28.673218]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.673288]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.673359]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.673445]  kthread+0x324/0x6d0
[   28.673509]  ? trace_preempt_on+0x20/0xc0
[   28.673572]  ? __pfx_kthread+0x10/0x10
[   28.673636]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.673699]  ? calculate_sigpending+0x7b/0xa0
[   28.673770]  ? __pfx_kthread+0x10/0x10
[   28.673834]  ret_from_fork+0x41/0x80
[   28.673890]  ? __pfx_kthread+0x10/0x10
[   28.673967]  ret_from_fork_asm+0x1a/0x30
[   28.674019]  </TASK>
[   28.674038] 
[   28.689249] Allocated by task 273:
[   28.689536]  kasan_save_stack+0x3d/0x60
[   28.689829]  kasan_save_track+0x18/0x40
[   28.690111]  kasan_save_alloc_info+0x3b/0x50
[   28.690585]  __kasan_kmalloc+0xb7/0xc0
[   28.691011]  __kmalloc_cache_noprof+0x184/0x410
[   28.691593]  kasan_atomics+0x96/0x310
[   28.692012]  kunit_try_run_case+0x1b3/0x490
[   28.692565]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.693095]  kthread+0x324/0x6d0
[   28.693619]  ret_from_fork+0x41/0x80
[   28.693902]  ret_from_fork_asm+0x1a/0x30
[   28.694188] 
[   28.694364] The buggy address belongs to the object at ffff888101aebe00
[   28.694364]  which belongs to the cache kmalloc-64 of size 64
[   28.695353] The buggy address is located 0 bytes to the right of
[   28.695353]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.696699] 
[   28.696898] The buggy address belongs to the physical page:
[   28.697239] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.697934] flags: 0x200000000000000(node=0|zone=2)
[   28.698540] page_type: f5(slab)
[   28.698915] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.699693] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.700133] page dumped because: kasan: bad access detected
[   28.700570] 
[   28.700814] Memory state around the buggy address:
[   28.701304]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.701933]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.702490] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.702884]                                      ^
[   28.703329]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.704310]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.704961] ==================================================================
---
[   26.833892] ==================================================================
[   26.835177] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x730/0x5450
[   26.835874] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.837298] 
[   26.837608] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.837719] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.837759] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.837810] Call Trace:
[   26.837842]  <TASK>
[   26.837873]  dump_stack_lvl+0x73/0xb0
[   26.837926]  print_report+0xd1/0x640
[   26.837962]  ? __virt_addr_valid+0x1db/0x2d0
[   26.837997]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.838036]  kasan_report+0x102/0x140
[   26.838068]  ? kasan_atomics_helper+0x730/0x5450
[   26.838104]  ? kasan_atomics_helper+0x730/0x5450
[   26.838167]  kasan_check_range+0x10c/0x1c0
[   26.838279]  __kasan_check_write+0x18/0x20
[   26.838368]  kasan_atomics_helper+0x730/0x5450
[   26.838457]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.838528]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.838593]  ? kasan_atomics+0x153/0x310
[   26.838702]  kasan_atomics+0x1dd/0x310
[   26.838747]  ? __pfx_kasan_atomics+0x10/0x10
[   26.838781]  ? __pfx_read_tsc+0x10/0x10
[   26.838811]  ? ktime_get_ts64+0x86/0x230
[   26.838847]  kunit_try_run_case+0x1b3/0x490
[   26.838885]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.838919]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.838954]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.838989]  ? __kthread_parkme+0x82/0x160
[   26.839020]  ? preempt_count_sub+0x50/0x80
[   26.839055]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.839090]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.839126]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.839162]  kthread+0x324/0x6d0
[   26.839220]  ? trace_preempt_on+0x20/0xc0
[   26.839258]  ? __pfx_kthread+0x10/0x10
[   26.839289]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.839323]  ? calculate_sigpending+0x7b/0xa0
[   26.839358]  ? __pfx_kthread+0x10/0x10
[   26.839391]  ret_from_fork+0x41/0x80
[   26.839439]  ? __pfx_kthread+0x10/0x10
[   26.839470]  ret_from_fork_asm+0x1a/0x30
[   26.839515]  </TASK>
[   26.839534] 
[   26.859345] Allocated by task 273:
[   26.859836]  kasan_save_stack+0x3d/0x60
[   26.860464]  kasan_save_track+0x18/0x40
[   26.861040]  kasan_save_alloc_info+0x3b/0x50
[   26.861654]  __kasan_kmalloc+0xb7/0xc0
[   26.862255]  __kmalloc_cache_noprof+0x184/0x410
[   26.862983]  kasan_atomics+0x96/0x310
[   26.863530]  kunit_try_run_case+0x1b3/0x490
[   26.864051]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.864916]  kthread+0x324/0x6d0
[   26.865433]  ret_from_fork+0x41/0x80
[   26.865988]  ret_from_fork_asm+0x1a/0x30
[   26.866592] 
[   26.866942] The buggy address belongs to the object at ffff888101aebe00
[   26.866942]  which belongs to the cache kmalloc-64 of size 64
[   26.868061] The buggy address is located 0 bytes to the right of
[   26.868061]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.869222] 
[   26.869505] The buggy address belongs to the physical page:
[   26.870131] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.870874] flags: 0x200000000000000(node=0|zone=2)
[   26.871603] page_type: f5(slab)
[   26.872072] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.872756] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.873142] page dumped because: kasan: bad access detected
[   26.874097] 
[   26.874380] Memory state around the buggy address:
[   26.874793]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.875231]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.875838] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.876563]                                      ^
[   26.877059]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.877795]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.878731] ==================================================================
---
[   26.595097] ==================================================================
[   26.595495] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4a1/0x5450
[   26.595896] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.597052] 
[   26.597300] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.597427] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.597471] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.597531] Call Trace:
[   26.597566]  <TASK>
[   26.597603]  dump_stack_lvl+0x73/0xb0
[   26.597711]  print_report+0xd1/0x640
[   26.597779]  ? __virt_addr_valid+0x1db/0x2d0
[   26.597846]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.598055]  kasan_report+0x102/0x140
[   26.598173]  ? kasan_atomics_helper+0x4a1/0x5450
[   26.598276]  ? kasan_atomics_helper+0x4a1/0x5450
[   26.598366]  kasan_check_range+0x10c/0x1c0
[   26.598467]  __kasan_check_write+0x18/0x20
[   26.598557]  kasan_atomics_helper+0x4a1/0x5450
[   26.598633]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.598705]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.598771]  ? kasan_atomics+0x153/0x310
[   26.598842]  kasan_atomics+0x1dd/0x310
[   26.598903]  ? __pfx_kasan_atomics+0x10/0x10
[   26.598968]  ? __pfx_read_tsc+0x10/0x10
[   26.599024]  ? ktime_get_ts64+0x86/0x230
[   26.599120]  kunit_try_run_case+0x1b3/0x490
[   26.599192]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.599260]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.599296]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.599331]  ? __kthread_parkme+0x82/0x160
[   26.599363]  ? preempt_count_sub+0x50/0x80
[   26.599398]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.599451]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.599490]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.599526]  kthread+0x324/0x6d0
[   26.599555]  ? trace_preempt_on+0x20/0xc0
[   26.599609]  ? __pfx_kthread+0x10/0x10
[   26.599670]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.599714]  ? calculate_sigpending+0x7b/0xa0
[   26.599751]  ? __pfx_kthread+0x10/0x10
[   26.599784]  ret_from_fork+0x41/0x80
[   26.599816]  ? __pfx_kthread+0x10/0x10
[   26.599848]  ret_from_fork_asm+0x1a/0x30
[   26.599894]  </TASK>
[   26.599913] 
[   26.618913] Allocated by task 273:
[   26.619799]  kasan_save_stack+0x3d/0x60
[   26.620327]  kasan_save_track+0x18/0x40
[   26.620890]  kasan_save_alloc_info+0x3b/0x50
[   26.621443]  __kasan_kmalloc+0xb7/0xc0
[   26.621967]  __kmalloc_cache_noprof+0x184/0x410
[   26.622705]  kasan_atomics+0x96/0x310
[   26.623161]  kunit_try_run_case+0x1b3/0x490
[   26.623849]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.624543]  kthread+0x324/0x6d0
[   26.625150]  ret_from_fork+0x41/0x80
[   26.625768]  ret_from_fork_asm+0x1a/0x30
[   26.626328] 
[   26.626739] The buggy address belongs to the object at ffff888101aebe00
[   26.626739]  which belongs to the cache kmalloc-64 of size 64
[   26.627649] The buggy address is located 0 bytes to the right of
[   26.627649]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.629029] 
[   26.629323] The buggy address belongs to the physical page:
[   26.630123] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.631169] flags: 0x200000000000000(node=0|zone=2)
[   26.631802] page_type: f5(slab)
[   26.632301] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.633211] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.633851] page dumped because: kasan: bad access detected
[   26.634572] 
[   26.634995] Memory state around the buggy address:
[   26.635672]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.636206]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.636540] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.637175]                                      ^
[   26.637606]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.638315]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.639029] ==================================================================
---
[   28.751307] ==================================================================
[   28.752592] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x2007/0x5450
[   28.753112] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.754384] 
[   28.754853] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.754946] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.754974] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.755007] Call Trace:
[   28.755033]  <TASK>
[   28.755057]  dump_stack_lvl+0x73/0xb0
[   28.755102]  print_report+0xd1/0x640
[   28.755138]  ? __virt_addr_valid+0x1db/0x2d0
[   28.755175]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.755292]  kasan_report+0x102/0x140
[   28.755359]  ? kasan_atomics_helper+0x2007/0x5450
[   28.755431]  ? kasan_atomics_helper+0x2007/0x5450
[   28.755475]  kasan_check_range+0x10c/0x1c0
[   28.755509]  __kasan_check_write+0x18/0x20
[   28.755541]  kasan_atomics_helper+0x2007/0x5450
[   28.755577]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.755611]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.755645]  ? kasan_atomics+0x153/0x310
[   28.755680]  kasan_atomics+0x1dd/0x310
[   28.755710]  ? __pfx_kasan_atomics+0x10/0x10
[   28.755743]  ? __pfx_read_tsc+0x10/0x10
[   28.755773]  ? ktime_get_ts64+0x86/0x230
[   28.755808]  kunit_try_run_case+0x1b3/0x490
[   28.755846]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.755879]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.755915]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.755949]  ? __kthread_parkme+0x82/0x160
[   28.755980]  ? preempt_count_sub+0x50/0x80
[   28.756015]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.756051]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.756085]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.756121]  kthread+0x324/0x6d0
[   28.756150]  ? trace_preempt_on+0x20/0xc0
[   28.756196]  ? __pfx_kthread+0x10/0x10
[   28.756255]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.756290]  ? calculate_sigpending+0x7b/0xa0
[   28.756326]  ? __pfx_kthread+0x10/0x10
[   28.756359]  ret_from_fork+0x41/0x80
[   28.756391]  ? __pfx_kthread+0x10/0x10
[   28.756438]  ret_from_fork_asm+0x1a/0x30
[   28.756485]  </TASK>
[   28.756503] 
[   28.772766] Allocated by task 273:
[   28.773259]  kasan_save_stack+0x3d/0x60
[   28.773792]  kasan_save_track+0x18/0x40
[   28.774092]  kasan_save_alloc_info+0x3b/0x50
[   28.774643]  __kasan_kmalloc+0xb7/0xc0
[   28.775044]  __kmalloc_cache_noprof+0x184/0x410
[   28.775576]  kasan_atomics+0x96/0x310
[   28.776008]  kunit_try_run_case+0x1b3/0x490
[   28.776430]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.776999]  kthread+0x324/0x6d0
[   28.777416]  ret_from_fork+0x41/0x80
[   28.777734]  ret_from_fork_asm+0x1a/0x30
[   28.778159] 
[   28.778443] The buggy address belongs to the object at ffff888101aebe00
[   28.778443]  which belongs to the cache kmalloc-64 of size 64
[   28.779244] The buggy address is located 0 bytes to the right of
[   28.779244]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.779962] 
[   28.780273] The buggy address belongs to the physical page:
[   28.780814] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.781599] flags: 0x200000000000000(node=0|zone=2)
[   28.782018] page_type: f5(slab)
[   28.782457] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.783124] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.783885] page dumped because: kasan: bad access detected
[   28.784269] 
[   28.784555] Memory state around the buggy address:
[   28.785069]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.785672]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.786071] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.786814]                                      ^
[   28.787334]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.788031]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.788542] ==================================================================
---
[   27.381131] ==================================================================
[   27.381876] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0xf11/0x5450
[   27.382723] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.383185] 
[   27.383371] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.383499] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.383540] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.383599] Call Trace:
[   27.383634]  <TASK>
[   27.383673]  dump_stack_lvl+0x73/0xb0
[   27.383751]  print_report+0xd1/0x640
[   27.383819]  ? __virt_addr_valid+0x1db/0x2d0
[   27.383891]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.383962]  kasan_report+0x102/0x140
[   27.384019]  ? kasan_atomics_helper+0xf11/0x5450
[   27.384086]  ? kasan_atomics_helper+0xf11/0x5450
[   27.384155]  kasan_check_range+0x10c/0x1c0
[   27.384214]  __kasan_check_write+0x18/0x20
[   27.384273]  kasan_atomics_helper+0xf11/0x5450
[   27.384340]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.384416]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.384483]  ? kasan_atomics+0x153/0x310
[   27.384552]  kasan_atomics+0x1dd/0x310
[   27.384613]  ? __pfx_kasan_atomics+0x10/0x10
[   27.384674]  ? __pfx_read_tsc+0x10/0x10
[   27.384730]  ? ktime_get_ts64+0x86/0x230
[   27.384794]  kunit_try_run_case+0x1b3/0x490
[   27.384867]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.384934]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.385000]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.385068]  ? __kthread_parkme+0x82/0x160
[   27.385128]  ? preempt_count_sub+0x50/0x80
[   27.385193]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.385288]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.385357]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.385448]  kthread+0x324/0x6d0
[   27.385512]  ? trace_preempt_on+0x20/0xc0
[   27.385575]  ? __pfx_kthread+0x10/0x10
[   27.385638]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.385703]  ? calculate_sigpending+0x7b/0xa0
[   27.385773]  ? __pfx_kthread+0x10/0x10
[   27.385837]  ret_from_fork+0x41/0x80
[   27.385896]  ? __pfx_kthread+0x10/0x10
[   27.385959]  ret_from_fork_asm+0x1a/0x30
[   27.386044]  </TASK>
[   27.386076] 
[   27.406897] Allocated by task 273:
[   27.407492]  kasan_save_stack+0x3d/0x60
[   27.408022]  kasan_save_track+0x18/0x40
[   27.408442]  kasan_save_alloc_info+0x3b/0x50
[   27.408839]  __kasan_kmalloc+0xb7/0xc0
[   27.409388]  __kmalloc_cache_noprof+0x184/0x410
[   27.409902]  kasan_atomics+0x96/0x310
[   27.410765]  kunit_try_run_case+0x1b3/0x490
[   27.411103]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.411666]  kthread+0x324/0x6d0
[   27.411923]  ret_from_fork+0x41/0x80
[   27.412186]  ret_from_fork_asm+0x1a/0x30
[   27.412818] 
[   27.412985] The buggy address belongs to the object at ffff888101aebe00
[   27.412985]  which belongs to the cache kmalloc-64 of size 64
[   27.413930] The buggy address is located 0 bytes to the right of
[   27.413930]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.415341] 
[   27.415692] The buggy address belongs to the physical page:
[   27.416370] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.417225] flags: 0x200000000000000(node=0|zone=2)
[   27.417715] page_type: f5(slab)
[   27.418037] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.418736] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.419233] page dumped because: kasan: bad access detected
[   27.420262] 
[   27.420514] Memory state around the buggy address:
[   27.421330]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.422129]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.422656] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.423227]                                      ^
[   27.423757]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.424473]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.425216] ==================================================================

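The "Memory state around the buggy address" dump in the reports above can be decoded mechanically. The following is an illustrative sketch only (the helper name is hypothetical), assuming generic KASAN's encoding of one shadow byte per 8 bytes of memory: 0x00 means all 8 bytes are accessible, 1..7 mean only that many leading bytes are valid, and high poison values (such as 0xfc for a slab redzone) mean no access is allowed.

```python
# Decode the KASAN shadow line flagged with '>' in the reports above.
SHADOW_GRANULE = 8  # generic KASAN: one shadow byte covers 8 bytes of memory

def accessible_bytes(shadow):
    """Count contiguous accessible bytes from the start of the object."""
    total = 0
    for byte in shadow:
        if byte == 0x00:
            total += SHADOW_GRANULE       # whole granule accessible
        elif 1 <= byte <= 7:
            return total + byte           # partial granule ends the run
        else:
            return total                  # poison (redzone/freed) ends the run
    return total

# The flagged row from the reports: object at ffff888101aebe00.
row = [0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xfc, 0xfc]
print(accessible_bytes(row))  # 48 -> matches the "allocated 48-byte region"

# The faulting address ffff888101aebe30 is offset 48 into the object,
# i.e. the first redzone byte: "0 bytes to the right" of the region.
print(0xFFFF888101AEBE30 - 0xFFFF888101AEBE00)  # 48
```

This is consistent with every report in this section: a 48-byte allocation from the kmalloc-64 cache, with the atomics helper reading or writing the first byte past the end of the object.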
test log snippet boot parser 74


[   26.547358] ==================================================================
[   26.548348] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4b56/0x5450
[   26.549135] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.549797] 
[   26.550076] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.550276] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.550323] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.550388] Call Trace:
[   26.550442]  <TASK>
[   26.550477]  dump_stack_lvl+0x73/0xb0
[   26.550761]  print_report+0xd1/0x640
[   26.550840]  ? __virt_addr_valid+0x1db/0x2d0
[   26.550912]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.551070]  kasan_report+0x102/0x140
[   26.551165]  ? kasan_atomics_helper+0x4b56/0x5450
[   26.551368]  ? kasan_atomics_helper+0x4b56/0x5450
[   26.551486]  __asan_report_load4_noabort+0x18/0x20
[   26.551555]  kasan_atomics_helper+0x4b56/0x5450
[   26.551628]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.551696]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.551758]  ? kasan_atomics+0x153/0x310
[   26.551829]  kasan_atomics+0x1dd/0x310
[   26.551882]  ? __pfx_kasan_atomics+0x10/0x10
[   26.551940]  ? __pfx_read_tsc+0x10/0x10
[   26.551994]  ? ktime_get_ts64+0x86/0x230
[   26.552233]  kunit_try_run_case+0x1b3/0x490
[   26.552490]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.552657]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.552839]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.552920]  ? __kthread_parkme+0x82/0x160
[   26.552983]  ? preempt_count_sub+0x50/0x80
[   26.553053]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.553125]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.553196]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.553265]  kthread+0x324/0x6d0
[   26.553323]  ? trace_preempt_on+0x20/0xc0
[   26.553383]  ? __pfx_kthread+0x10/0x10
[   26.553468]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.553632]  ? calculate_sigpending+0x7b/0xa0
[   26.553675]  ? __pfx_kthread+0x10/0x10
[   26.553709]  ret_from_fork+0x41/0x80
[   26.553742]  ? __pfx_kthread+0x10/0x10
[   26.553773]  ret_from_fork_asm+0x1a/0x30
[   26.553821]  </TASK>
[   26.553838] 
[   26.571843] Allocated by task 273:
[   26.572454]  kasan_save_stack+0x3d/0x60
[   26.573554]  kasan_save_track+0x18/0x40
[   26.574371]  kasan_save_alloc_info+0x3b/0x50
[   26.575256]  __kasan_kmalloc+0xb7/0xc0
[   26.575812]  __kmalloc_cache_noprof+0x184/0x410
[   26.576304]  kasan_atomics+0x96/0x310
[   26.577029]  kunit_try_run_case+0x1b3/0x490
[   26.577617]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.578767]  kthread+0x324/0x6d0
[   26.579252]  ret_from_fork+0x41/0x80
[   26.579748]  ret_from_fork_asm+0x1a/0x30
[   26.580264] 
[   26.580524] The buggy address belongs to the object at ffff888101aebe00
[   26.580524]  which belongs to the cache kmalloc-64 of size 64
[   26.581903] The buggy address is located 0 bytes to the right of
[   26.581903]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.582906] 
[   26.583309] The buggy address belongs to the physical page:
[   26.584591] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.585440] flags: 0x200000000000000(node=0|zone=2)
[   26.585913] page_type: f5(slab)
[   26.586191] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.587150] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.587915] page dumped because: kasan: bad access detected
[   26.588493] 
[   26.588852] Memory state around the buggy address:
[   26.589776]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.590714]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.591474] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.592152]                                      ^
[   26.592772]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.593481]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.594212] ==================================================================
---
[   27.547132] ==================================================================
[   27.547932] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4a1e/0x5450
[   27.548750] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.549524] 
[   27.549771] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.549949] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.549994] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.550058] Call Trace:
[   27.550092]  <TASK>
[   27.550130]  dump_stack_lvl+0x73/0xb0
[   27.550257]  print_report+0xd1/0x640
[   27.550334]  ? __virt_addr_valid+0x1db/0x2d0
[   27.550429]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.550506]  kasan_report+0x102/0x140
[   27.550621]  ? kasan_atomics_helper+0x4a1e/0x5450
[   27.550769]  ? kasan_atomics_helper+0x4a1e/0x5450
[   27.550886]  __asan_report_load4_noabort+0x18/0x20
[   27.551031]  kasan_atomics_helper+0x4a1e/0x5450
[   27.551212]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.551302]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.551370]  ? kasan_atomics+0x153/0x310
[   27.551456]  kasan_atomics+0x1dd/0x310
[   27.551521]  ? __pfx_kasan_atomics+0x10/0x10
[   27.551586]  ? __pfx_read_tsc+0x10/0x10
[   27.551644]  ? ktime_get_ts64+0x86/0x230
[   27.551713]  kunit_try_run_case+0x1b3/0x490
[   27.551786]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.551853]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.551920]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.551985]  ? __kthread_parkme+0x82/0x160
[   27.552043]  ? preempt_count_sub+0x50/0x80
[   27.552266]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.552379]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.552475]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.552541]  kthread+0x324/0x6d0
[   27.552592]  ? trace_preempt_on+0x20/0xc0
[   27.552651]  ? __pfx_kthread+0x10/0x10
[   27.552761]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.552863]  ? calculate_sigpending+0x7b/0xa0
[   27.552936]  ? __pfx_kthread+0x10/0x10
[   27.553001]  ret_from_fork+0x41/0x80
[   27.553063]  ? __pfx_kthread+0x10/0x10
[   27.553119]  ret_from_fork_asm+0x1a/0x30
[   27.553286]  </TASK>
[   27.553349] 
[   27.569804] Allocated by task 273:
[   27.570378]  kasan_save_stack+0x3d/0x60
[   27.570848]  kasan_save_track+0x18/0x40
[   27.571391]  kasan_save_alloc_info+0x3b/0x50
[   27.571960]  __kasan_kmalloc+0xb7/0xc0
[   27.572473]  __kmalloc_cache_noprof+0x184/0x410
[   27.572810]  kasan_atomics+0x96/0x310
[   27.573082]  kunit_try_run_case+0x1b3/0x490
[   27.573567]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.574113]  kthread+0x324/0x6d0
[   27.574761]  ret_from_fork+0x41/0x80
[   27.575435]  ret_from_fork_asm+0x1a/0x30
[   27.575799] 
[   27.575960] The buggy address belongs to the object at ffff888101aebe00
[   27.575960]  which belongs to the cache kmalloc-64 of size 64
[   27.576618] The buggy address is located 0 bytes to the right of
[   27.576618]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.577643] 
[   27.578286] The buggy address belongs to the physical page:
[   27.578994] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.579821] flags: 0x200000000000000(node=0|zone=2)
[   27.580466] page_type: f5(slab)
[   27.580838] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.581657] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.582569] page dumped because: kasan: bad access detected
[   27.583090] 
[   27.583327] Memory state around the buggy address:
[   27.583911]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.584687]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.585552] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.586072]                                      ^
[   27.586451]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.587283]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.588027] ==================================================================
---
[   27.795103] ==================================================================
[   27.795855] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x49d0/0x5450
[   27.796583] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.797329] 
[   27.797614] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.797740] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.797802] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.797864] Call Trace:
[   27.797901]  <TASK>
[   27.797939]  dump_stack_lvl+0x73/0xb0
[   27.798015]  print_report+0xd1/0x640
[   27.798083]  ? __virt_addr_valid+0x1db/0x2d0
[   27.798152]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.798275]  kasan_report+0x102/0x140
[   27.798346]  ? kasan_atomics_helper+0x49d0/0x5450
[   27.798439]  ? kasan_atomics_helper+0x49d0/0x5450
[   27.798521]  __asan_report_load4_noabort+0x18/0x20
[   27.798585]  kasan_atomics_helper+0x49d0/0x5450
[   27.798654]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.798721]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.798787]  ? kasan_atomics+0x153/0x310
[   27.798853]  kasan_atomics+0x1dd/0x310
[   27.798956]  ? __pfx_kasan_atomics+0x10/0x10
[   27.799026]  ? __pfx_read_tsc+0x10/0x10
[   27.799085]  ? ktime_get_ts64+0x86/0x230
[   27.799153]  kunit_try_run_case+0x1b3/0x490
[   27.799275]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.799345]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.799428]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.799498]  ? __kthread_parkme+0x82/0x160
[   27.799560]  ? preempt_count_sub+0x50/0x80
[   27.799619]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.799686]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.799749]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.799859]  kthread+0x324/0x6d0
[   27.799926]  ? trace_preempt_on+0x20/0xc0
[   27.799988]  ? __pfx_kthread+0x10/0x10
[   27.800044]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.800104]  ? calculate_sigpending+0x7b/0xa0
[   27.800164]  ? __pfx_kthread+0x10/0x10
[   27.800305]  ret_from_fork+0x41/0x80
[   27.800373]  ? __pfx_kthread+0x10/0x10
[   27.800447]  ret_from_fork_asm+0x1a/0x30
[   27.800535]  </TASK>
[   27.800565] 
[   27.814119] Allocated by task 273:
[   27.814575]  kasan_save_stack+0x3d/0x60
[   27.814916]  kasan_save_track+0x18/0x40
[   27.815442]  kasan_save_alloc_info+0x3b/0x50
[   27.815780]  __kasan_kmalloc+0xb7/0xc0
[   27.816063]  __kmalloc_cache_noprof+0x184/0x410
[   27.816446]  kasan_atomics+0x96/0x310
[   27.816747]  kunit_try_run_case+0x1b3/0x490
[   27.817273]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.817839]  kthread+0x324/0x6d0
[   27.818232]  ret_from_fork+0x41/0x80
[   27.818672]  ret_from_fork_asm+0x1a/0x30
[   27.819100] 
[   27.819319] The buggy address belongs to the object at ffff888101aebe00
[   27.819319]  which belongs to the cache kmalloc-64 of size 64
[   27.820268] The buggy address is located 0 bytes to the right of
[   27.820268]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.821322] 
[   27.821609] The buggy address belongs to the physical page:
[   27.822151] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.822861] flags: 0x200000000000000(node=0|zone=2)
[   27.823361] page_type: f5(slab)
[   27.823772] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.824553] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.825147] page dumped because: kasan: bad access detected
[   27.825748] 
[   27.826018] Memory state around the buggy address:
[   27.826513]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.827080]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.827664] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.828319]                                      ^
[   27.828811]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.829514]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.830127] ==================================================================
---
[   27.467070] ==================================================================
[   27.468586] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4a38/0x5450
[   27.469123] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.470163] 
[   27.470817] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.470883] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.470903] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.470935] Call Trace:
[   27.470954]  <TASK>
[   27.470975]  dump_stack_lvl+0x73/0xb0
[   27.471019]  print_report+0xd1/0x640
[   27.471054]  ? __virt_addr_valid+0x1db/0x2d0
[   27.471089]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.471127]  kasan_report+0x102/0x140
[   27.471159]  ? kasan_atomics_helper+0x4a38/0x5450
[   27.471233]  ? kasan_atomics_helper+0x4a38/0x5450
[   27.471330]  __asan_report_load4_noabort+0x18/0x20
[   27.471396]  kasan_atomics_helper+0x4a38/0x5450
[   27.471604]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.471661]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.471699]  ? kasan_atomics+0x153/0x310
[   27.471735]  kasan_atomics+0x1dd/0x310
[   27.471766]  ? __pfx_kasan_atomics+0x10/0x10
[   27.471800]  ? __pfx_read_tsc+0x10/0x10
[   27.471830]  ? ktime_get_ts64+0x86/0x230
[   27.471864]  kunit_try_run_case+0x1b3/0x490
[   27.471900]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.471934]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.471966]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.472001]  ? __kthread_parkme+0x82/0x160
[   27.472030]  ? preempt_count_sub+0x50/0x80
[   27.472064]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.472099]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.472132]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.472167]  kthread+0x324/0x6d0
[   27.472227]  ? trace_preempt_on+0x20/0xc0
[   27.472263]  ? __pfx_kthread+0x10/0x10
[   27.472296]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.472327]  ? calculate_sigpending+0x7b/0xa0
[   27.472363]  ? __pfx_kthread+0x10/0x10
[   27.472396]  ret_from_fork+0x41/0x80
[   27.472447]  ? __pfx_kthread+0x10/0x10
[   27.472480]  ret_from_fork_asm+0x1a/0x30
[   27.472524]  </TASK>
[   27.472541] 
[   27.489351] Allocated by task 273:
[   27.489613]  kasan_save_stack+0x3d/0x60
[   27.489904]  kasan_save_track+0x18/0x40
[   27.490284]  kasan_save_alloc_info+0x3b/0x50
[   27.490801]  __kasan_kmalloc+0xb7/0xc0
[   27.491254]  __kmalloc_cache_noprof+0x184/0x410
[   27.491743]  kasan_atomics+0x96/0x310
[   27.492197]  kunit_try_run_case+0x1b3/0x490
[   27.492574]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.493058]  kthread+0x324/0x6d0
[   27.493518]  ret_from_fork+0x41/0x80
[   27.493855]  ret_from_fork_asm+0x1a/0x30
[   27.494151] 
[   27.494326] The buggy address belongs to the object at ffff888101aebe00
[   27.494326]  which belongs to the cache kmalloc-64 of size 64
[   27.495606] The buggy address is located 0 bytes to the right of
[   27.495606]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.496747] 
[   27.497058] The buggy address belongs to the physical page:
[   27.497569] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.498318] flags: 0x200000000000000(node=0|zone=2)
[   27.498756] page_type: f5(slab)
[   27.499294] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.500005] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.500707] page dumped because: kasan: bad access detected
[   27.501233] 
[   27.501578] Memory state around the buggy address:
[   27.501903]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.502363]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.503159] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.503893]                                      ^
[   27.504454]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.505173]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.505856] ==================================================================
---
[   26.410662] ==================================================================
[   26.411685] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4b8a/0x5450
[   26.412507] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.413715] 
[   26.413882] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.413947] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.413968] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.414019] Call Trace:
[   26.414051]  <TASK>
[   26.414086]  dump_stack_lvl+0x73/0xb0
[   26.414164]  print_report+0xd1/0x640
[   26.414273]  ? __virt_addr_valid+0x1db/0x2d0
[   26.414338]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.414433]  kasan_report+0x102/0x140
[   26.414500]  ? kasan_atomics_helper+0x4b8a/0x5450
[   26.414549]  ? kasan_atomics_helper+0x4b8a/0x5450
[   26.414624]  __asan_report_load4_noabort+0x18/0x20
[   26.414698]  kasan_atomics_helper+0x4b8a/0x5450
[   26.414741]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.414778]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.414812]  ? kasan_atomics+0x153/0x310
[   26.414847]  kasan_atomics+0x1dd/0x310
[   26.414878]  ? __pfx_kasan_atomics+0x10/0x10
[   26.414910]  ? __pfx_read_tsc+0x10/0x10
[   26.414940]  ? ktime_get_ts64+0x86/0x230
[   26.414977]  kunit_try_run_case+0x1b3/0x490
[   26.415013]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.415047]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.415080]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.415114]  ? __kthread_parkme+0x82/0x160
[   26.415145]  ? preempt_count_sub+0x50/0x80
[   26.415179]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.415245]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.415281]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.415316]  kthread+0x324/0x6d0
[   26.415347]  ? trace_preempt_on+0x20/0xc0
[   26.415378]  ? __pfx_kthread+0x10/0x10
[   26.415427]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.415461]  ? calculate_sigpending+0x7b/0xa0
[   26.415497]  ? __pfx_kthread+0x10/0x10
[   26.415530]  ret_from_fork+0x41/0x80
[   26.415562]  ? __pfx_kthread+0x10/0x10
[   26.415623]  ret_from_fork_asm+0x1a/0x30
[   26.415712]  </TASK>
[   26.415745] 
[   26.431422] Allocated by task 273:
[   26.432109]  kasan_save_stack+0x3d/0x60
[   26.432783]  kasan_save_track+0x18/0x40
[   26.433286]  kasan_save_alloc_info+0x3b/0x50
[   26.433897]  __kasan_kmalloc+0xb7/0xc0
[   26.434440]  __kmalloc_cache_noprof+0x184/0x410
[   26.435117]  kasan_atomics+0x96/0x310
[   26.435631]  kunit_try_run_case+0x1b3/0x490
[   26.437002]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.437919]  kthread+0x324/0x6d0
[   26.438512]  ret_from_fork+0x41/0x80
[   26.439239]  ret_from_fork_asm+0x1a/0x30
[   26.439935] 
[   26.440158] The buggy address belongs to the object at ffff888101aebe00
[   26.440158]  which belongs to the cache kmalloc-64 of size 64
[   26.442052] The buggy address is located 0 bytes to the right of
[   26.442052]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.443465] 
[   26.444342] The buggy address belongs to the physical page:
[   26.445007] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.445980] flags: 0x200000000000000(node=0|zone=2)
[   26.446600] page_type: f5(slab)
[   26.447264] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.448517] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.449090] page dumped because: kasan: bad access detected
[   26.449740] 
[   26.449966] Memory state around the buggy address:
[   26.450562]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.451277]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.452011] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.452418]                                      ^
[   26.453347]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.453992]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.454676] ==================================================================
---
[   27.631357] ==================================================================
[   27.632271] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4a04/0x5450
[   27.633048] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.633875] 
[   27.634125] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.634324] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.634375] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.634452] Call Trace:
[   27.634488]  <TASK>
[   27.634526]  dump_stack_lvl+0x73/0xb0
[   27.634606]  print_report+0xd1/0x640
[   27.634671]  ? __virt_addr_valid+0x1db/0x2d0
[   27.634741]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.634815]  kasan_report+0x102/0x140
[   27.634879]  ? kasan_atomics_helper+0x4a04/0x5450
[   27.634949]  ? kasan_atomics_helper+0x4a04/0x5450
[   27.635030]  __asan_report_load4_noabort+0x18/0x20
[   27.635095]  kasan_atomics_helper+0x4a04/0x5450
[   27.635163]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.635247]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.635438]  ? kasan_atomics+0x153/0x310
[   27.635521]  kasan_atomics+0x1dd/0x310
[   27.635586]  ? __pfx_kasan_atomics+0x10/0x10
[   27.635652]  ? __pfx_read_tsc+0x10/0x10
[   27.635709]  ? ktime_get_ts64+0x86/0x230
[   27.635777]  kunit_try_run_case+0x1b3/0x490
[   27.635849]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.635912]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.635974]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.636036]  ? __kthread_parkme+0x82/0x160
[   27.636094]  ? preempt_count_sub+0x50/0x80
[   27.636158]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.636226]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.636293]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.636463]  kthread+0x324/0x6d0
[   27.636533]  ? trace_preempt_on+0x20/0xc0
[   27.636598]  ? __pfx_kthread+0x10/0x10
[   27.636661]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.636729]  ? calculate_sigpending+0x7b/0xa0
[   27.636799]  ? __pfx_kthread+0x10/0x10
[   27.636863]  ret_from_fork+0x41/0x80
[   27.636925]  ? __pfx_kthread+0x10/0x10
[   27.636988]  ret_from_fork_asm+0x1a/0x30
[   27.637076]  </TASK>
[   27.637111] 
[   27.652411] Allocated by task 273:
[   27.652715]  kasan_save_stack+0x3d/0x60
[   27.654742]  kasan_save_track+0x18/0x40
[   27.655077]  kasan_save_alloc_info+0x3b/0x50
[   27.655398]  __kasan_kmalloc+0xb7/0xc0
[   27.655831]  __kmalloc_cache_noprof+0x184/0x410
[   27.656316]  kasan_atomics+0x96/0x310
[   27.656805]  kunit_try_run_case+0x1b3/0x490
[   27.657249]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.657880]  kthread+0x324/0x6d0
[   27.658378]  ret_from_fork+0x41/0x80
[   27.658849]  ret_from_fork_asm+0x1a/0x30
[   27.659393] 
[   27.659695] The buggy address belongs to the object at ffff888101aebe00
[   27.659695]  which belongs to the cache kmalloc-64 of size 64
[   27.660588] The buggy address is located 0 bytes to the right of
[   27.660588]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.661534] 
[   27.661809] The buggy address belongs to the physical page:
[   27.662354] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.662972] flags: 0x200000000000000(node=0|zone=2)
[   27.663559] page_type: f5(slab)
[   27.663939] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.664612] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.665157] page dumped because: kasan: bad access detected
[   27.665695] 
[   27.665863] Memory state around the buggy address:
[   27.666424]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.667068]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.667647] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.668354]                                      ^
[   27.668797]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.669487]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.670140] ==================================================================
---
[   27.218936] ==================================================================
[   27.219635] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4a86/0x5450
[   27.220457] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.221051] 
[   27.221285] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.221392] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.221453] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.221508] Call Trace:
[   27.221586]  <TASK>
[   27.221623]  dump_stack_lvl+0x73/0xb0
[   27.221696]  print_report+0xd1/0x640
[   27.221759]  ? __virt_addr_valid+0x1db/0x2d0
[   27.221817]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.221888]  kasan_report+0x102/0x140
[   27.221947]  ? kasan_atomics_helper+0x4a86/0x5450
[   27.222018]  ? kasan_atomics_helper+0x4a86/0x5450
[   27.222098]  __asan_report_load4_noabort+0x18/0x20
[   27.222160]  kasan_atomics_helper+0x4a86/0x5450
[   27.222232]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.222297]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.222373]  ? kasan_atomics+0x153/0x310
[   27.222462]  kasan_atomics+0x1dd/0x310
[   27.222519]  ? __pfx_kasan_atomics+0x10/0x10
[   27.222580]  ? __pfx_read_tsc+0x10/0x10
[   27.222636]  ? ktime_get_ts64+0x86/0x230
[   27.222699]  kunit_try_run_case+0x1b3/0x490
[   27.222769]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.222833]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.222899]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.222966]  ? __kthread_parkme+0x82/0x160
[   27.223027]  ? preempt_count_sub+0x50/0x80
[   27.223094]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.223163]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.223233]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.223302]  kthread+0x324/0x6d0
[   27.223362]  ? trace_preempt_on+0x20/0xc0
[   27.223480]  ? __pfx_kthread+0x10/0x10
[   27.223555]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.223621]  ? calculate_sigpending+0x7b/0xa0
[   27.223691]  ? __pfx_kthread+0x10/0x10
[   27.223755]  ret_from_fork+0x41/0x80
[   27.223818]  ? __pfx_kthread+0x10/0x10
[   27.223881]  ret_from_fork_asm+0x1a/0x30
[   27.223966]  </TASK>
[   27.224001] 
[   27.236811] Allocated by task 273:
[   27.237258]  kasan_save_stack+0x3d/0x60
[   27.237708]  kasan_save_track+0x18/0x40
[   27.238103]  kasan_save_alloc_info+0x3b/0x50
[   27.238481]  __kasan_kmalloc+0xb7/0xc0
[   27.238767]  __kmalloc_cache_noprof+0x184/0x410
[   27.239073]  kasan_atomics+0x96/0x310
[   27.239399]  kunit_try_run_case+0x1b3/0x490
[   27.239922]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.240593]  kthread+0x324/0x6d0
[   27.240989]  ret_from_fork+0x41/0x80
[   27.241528]  ret_from_fork_asm+0x1a/0x30
[   27.241976] 
[   27.242268] The buggy address belongs to the object at ffff888101aebe00
[   27.242268]  which belongs to the cache kmalloc-64 of size 64
[   27.243350] The buggy address is located 0 bytes to the right of
[   27.243350]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.244523] 
[   27.244758] The buggy address belongs to the physical page:
[   27.245335] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.246024] flags: 0x200000000000000(node=0|zone=2)
[   27.246571] page_type: f5(slab)
[   27.246966] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.247623] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.248042] page dumped because: kasan: bad access detected
[   27.248427] 
[   27.248596] Memory state around the buggy address:
[   27.248892]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.249364]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.250042] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.250781]                                      ^
[   27.251329]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.252051]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.252758] ==================================================================
---
[   26.311522] ==================================================================
[   26.312999] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4bbe/0x5450
[   26.313623] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.314230] 
[   26.314693] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.314980] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.315045] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.315109] Call Trace:
[   26.315133]  <TASK>
[   26.315154]  dump_stack_lvl+0x73/0xb0
[   26.315480]  print_report+0xd1/0x640
[   26.315576]  ? __virt_addr_valid+0x1db/0x2d0
[   26.315741]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.315911]  kasan_report+0x102/0x140
[   26.315982]  ? kasan_atomics_helper+0x4bbe/0x5450
[   26.316021]  ? kasan_atomics_helper+0x4bbe/0x5450
[   26.316061]  __asan_report_load4_noabort+0x18/0x20
[   26.316091]  kasan_atomics_helper+0x4bbe/0x5450
[   26.316125]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.316158]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.316193]  ? kasan_atomics+0x153/0x310
[   26.316255]  kasan_atomics+0x1dd/0x310
[   26.316284]  ? __pfx_kasan_atomics+0x10/0x10
[   26.316314]  ? __pfx_read_tsc+0x10/0x10
[   26.316341]  ? ktime_get_ts64+0x86/0x230
[   26.316377]  kunit_try_run_case+0x1b3/0x490
[   26.316429]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.316460]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.316492]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.316523]  ? __kthread_parkme+0x82/0x160
[   26.316552]  ? preempt_count_sub+0x50/0x80
[   26.316591]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.316659]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.316717]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.316773]  kthread+0x324/0x6d0
[   26.316820]  ? trace_preempt_on+0x20/0xc0
[   26.316875]  ? __pfx_kthread+0x10/0x10
[   26.316926]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.316968]  ? calculate_sigpending+0x7b/0xa0
[   26.317004]  ? __pfx_kthread+0x10/0x10
[   26.317035]  ret_from_fork+0x41/0x80
[   26.317065]  ? __pfx_kthread+0x10/0x10
[   26.317095]  ret_from_fork_asm+0x1a/0x30
[   26.317138]  </TASK>
[   26.317156] 
[   26.339113] Allocated by task 273:
[   26.339462]  kasan_save_stack+0x3d/0x60
[   26.340298]  kasan_save_track+0x18/0x40
[   26.340671]  kasan_save_alloc_info+0x3b/0x50
[   26.341493]  __kasan_kmalloc+0xb7/0xc0
[   26.342375]  __kmalloc_cache_noprof+0x184/0x410
[   26.342706]  kasan_atomics+0x96/0x310
[   26.343118]  kunit_try_run_case+0x1b3/0x490
[   26.343488]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.344058]  kthread+0x324/0x6d0
[   26.344495]  ret_from_fork+0x41/0x80
[   26.345002]  ret_from_fork_asm+0x1a/0x30
[   26.345881] 
[   26.346064] The buggy address belongs to the object at ffff888101aebe00
[   26.346064]  which belongs to the cache kmalloc-64 of size 64
[   26.346997] The buggy address is located 0 bytes to the right of
[   26.346997]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.348754] 
[   26.349151] The buggy address belongs to the physical page:
[   26.349793] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.350259] flags: 0x200000000000000(node=0|zone=2)
[   26.351027] page_type: f5(slab)
[   26.351521] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.352134] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.353115] page dumped because: kasan: bad access detected
[   26.354130] 
[   26.354602] Memory state around the buggy address:
[   26.354951]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.355384]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.357195] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.358284]                                      ^
[   26.358814]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.359820]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.360870] ==================================================================
---
[   27.716440] ==================================================================
[   27.717903] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x49ea/0x5450
[   27.718799] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.719504] 
[   27.719785] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.719927] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.719968] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.720050] Call Trace:
[   27.720088]  <TASK>
[   27.720157]  dump_stack_lvl+0x73/0xb0
[   27.720293]  print_report+0xd1/0x640
[   27.720368]  ? __virt_addr_valid+0x1db/0x2d0
[   27.720456]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.720532]  kasan_report+0x102/0x140
[   27.720597]  ? kasan_atomics_helper+0x49ea/0x5450
[   27.720668]  ? kasan_atomics_helper+0x49ea/0x5450
[   27.720744]  __asan_report_load4_noabort+0x18/0x20
[   27.720807]  kasan_atomics_helper+0x49ea/0x5450
[   27.720874]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.721025]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.721119]  ? kasan_atomics+0x153/0x310
[   27.721246]  kasan_atomics+0x1dd/0x310
[   27.721392]  ? __pfx_kasan_atomics+0x10/0x10
[   27.721518]  ? __pfx_read_tsc+0x10/0x10
[   27.721603]  ? ktime_get_ts64+0x86/0x230
[   27.721673]  kunit_try_run_case+0x1b3/0x490
[   27.721744]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.721829]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.721923]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.721997]  ? __kthread_parkme+0x82/0x160
[   27.722059]  ? preempt_count_sub+0x50/0x80
[   27.722112]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.722151]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.722222]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.722266]  kthread+0x324/0x6d0
[   27.722298]  ? trace_preempt_on+0x20/0xc0
[   27.722331]  ? __pfx_kthread+0x10/0x10
[   27.722371]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.722418]  ? calculate_sigpending+0x7b/0xa0
[   27.722456]  ? __pfx_kthread+0x10/0x10
[   27.722489]  ret_from_fork+0x41/0x80
[   27.722521]  ? __pfx_kthread+0x10/0x10
[   27.722552]  ret_from_fork_asm+0x1a/0x30
[   27.722599]  </TASK>
[   27.722616] 
[   27.738734] Allocated by task 273:
[   27.739218]  kasan_save_stack+0x3d/0x60
[   27.739769]  kasan_save_track+0x18/0x40
[   27.740235]  kasan_save_alloc_info+0x3b/0x50
[   27.740941]  __kasan_kmalloc+0xb7/0xc0
[   27.741479]  __kmalloc_cache_noprof+0x184/0x410
[   27.742033]  kasan_atomics+0x96/0x310
[   27.742475]  kunit_try_run_case+0x1b3/0x490
[   27.742782]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.743274]  kthread+0x324/0x6d0
[   27.743610]  ret_from_fork+0x41/0x80
[   27.744006]  ret_from_fork_asm+0x1a/0x30
[   27.744605] 
[   27.744834] The buggy address belongs to the object at ffff888101aebe00
[   27.744834]  which belongs to the cache kmalloc-64 of size 64
[   27.745767] The buggy address is located 0 bytes to the right of
[   27.745767]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.746680] 
[   27.746948] The buggy address belongs to the physical page:
[   27.747606] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.748231] flags: 0x200000000000000(node=0|zone=2)
[   27.748765] page_type: f5(slab)
[   27.749168] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.749822] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.750632] page dumped because: kasan: bad access detected
[   27.751070] 
[   27.751289] Memory state around the buggy address:
[   27.751791]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.752676]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.753490] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.753894]                                      ^
[   27.754148]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.755980]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.756657] ==================================================================

test log snippet boot parser 75


[   28.789996] ==================================================================
[   28.790834] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4f9a/0x5450
[   28.792245] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.792868] 
[   28.793146] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.793338] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.793413] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.793481] Call Trace:
[   28.793517]  <TASK>
[   28.793550]  dump_stack_lvl+0x73/0xb0
[   28.793629]  print_report+0xd1/0x640
[   28.793692]  ? __virt_addr_valid+0x1db/0x2d0
[   28.793760]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.793848]  kasan_report+0x102/0x140
[   28.793942]  ? kasan_atomics_helper+0x4f9a/0x5450
[   28.794017]  ? kasan_atomics_helper+0x4f9a/0x5450
[   28.794098]  __asan_report_load8_noabort+0x18/0x20
[   28.794162]  kasan_atomics_helper+0x4f9a/0x5450
[   28.794284]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.794362]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.794446]  ? kasan_atomics+0x153/0x310
[   28.794515]  kasan_atomics+0x1dd/0x310
[   28.794596]  ? __pfx_kasan_atomics+0x10/0x10
[   28.794682]  ? __pfx_read_tsc+0x10/0x10
[   28.794743]  ? ktime_get_ts64+0x86/0x230
[   28.794809]  kunit_try_run_case+0x1b3/0x490
[   28.794875]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.794911]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.794946]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.794981]  ? __kthread_parkme+0x82/0x160
[   28.795012]  ? preempt_count_sub+0x50/0x80
[   28.795048]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.795084]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.795119]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.795153]  kthread+0x324/0x6d0
[   28.795199]  ? trace_preempt_on+0x20/0xc0
[   28.795259]  ? __pfx_kthread+0x10/0x10
[   28.795292]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.795324]  ? calculate_sigpending+0x7b/0xa0
[   28.795359]  ? __pfx_kthread+0x10/0x10
[   28.795391]  ret_from_fork+0x41/0x80
[   28.795441]  ? __pfx_kthread+0x10/0x10
[   28.795472]  ret_from_fork_asm+0x1a/0x30
[   28.795518]  </TASK>
[   28.795536] 
[   28.808556] Allocated by task 273:
[   28.808938]  kasan_save_stack+0x3d/0x60
[   28.809488]  kasan_save_track+0x18/0x40
[   28.809922]  kasan_save_alloc_info+0x3b/0x50
[   28.810493]  __kasan_kmalloc+0xb7/0xc0
[   28.810959]  __kmalloc_cache_noprof+0x184/0x410
[   28.811600]  kasan_atomics+0x96/0x310
[   28.812016]  kunit_try_run_case+0x1b3/0x490
[   28.812601]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.813095]  kthread+0x324/0x6d0
[   28.813536]  ret_from_fork+0x41/0x80
[   28.813839]  ret_from_fork_asm+0x1a/0x30
[   28.814128] 
[   28.814292] The buggy address belongs to the object at ffff888101aebe00
[   28.814292]  which belongs to the cache kmalloc-64 of size 64
[   28.815542] The buggy address is located 0 bytes to the right of
[   28.815542]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.816853] 
[   28.817111] The buggy address belongs to the physical page:
[   28.817583] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.818282] flags: 0x200000000000000(node=0|zone=2)
[   28.818738] page_type: f5(slab)
[   28.819153] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.819676] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.820092] page dumped because: kasan: bad access detected
[   28.820688] 
[   28.820958] Memory state around the buggy address:
[   28.821565]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.822252]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.823084] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.823521]                                      ^
[   28.824001]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.824487]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.824931] ==================================================================
---
[   28.705973] ==================================================================
[   28.706541] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4f73/0x5450
[   28.707168] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.708757] 
[   28.709274] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.709628] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.709673] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.709959] Call Trace:
[   28.710006]  <TASK>
[   28.710045]  dump_stack_lvl+0x73/0xb0
[   28.710377]  print_report+0xd1/0x640
[   28.710474]  ? __virt_addr_valid+0x1db/0x2d0
[   28.710785]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.710884]  kasan_report+0x102/0x140
[   28.711388]  ? kasan_atomics_helper+0x4f73/0x5450
[   28.711500]  ? kasan_atomics_helper+0x4f73/0x5450
[   28.711742]  __asan_report_load8_noabort+0x18/0x20
[   28.711833]  kasan_atomics_helper+0x4f73/0x5450
[   28.711909]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.711979]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.712044]  ? kasan_atomics+0x153/0x310
[   28.712085]  kasan_atomics+0x1dd/0x310
[   28.712139]  ? __pfx_kasan_atomics+0x10/0x10
[   28.713433]  ? __pfx_read_tsc+0x10/0x10
[   28.713525]  ? ktime_get_ts64+0x86/0x230
[   28.713599]  kunit_try_run_case+0x1b3/0x490
[   28.713676]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.713743]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.713811]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.713869]  ? __kthread_parkme+0x82/0x160
[   28.713904]  ? preempt_count_sub+0x50/0x80
[   28.713940]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.713975]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.714012]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.714048]  kthread+0x324/0x6d0
[   28.714079]  ? trace_preempt_on+0x20/0xc0
[   28.714112]  ? __pfx_kthread+0x10/0x10
[   28.714144]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.714176]  ? calculate_sigpending+0x7b/0xa0
[   28.714324]  ? __pfx_kthread+0x10/0x10
[   28.714420]  ret_from_fork+0x41/0x80
[   28.714487]  ? __pfx_kthread+0x10/0x10
[   28.714552]  ret_from_fork_asm+0x1a/0x30
[   28.714643]  </TASK>
[   28.714677] 
[   28.730598] Allocated by task 273:
[   28.731018]  kasan_save_stack+0x3d/0x60
[   28.731553]  kasan_save_track+0x18/0x40
[   28.732026]  kasan_save_alloc_info+0x3b/0x50
[   28.732578]  __kasan_kmalloc+0xb7/0xc0
[   28.733036]  __kmalloc_cache_noprof+0x184/0x410
[   28.733596]  kasan_atomics+0x96/0x310
[   28.734064]  kunit_try_run_case+0x1b3/0x490
[   28.734611]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.735231]  kthread+0x324/0x6d0
[   28.735672]  ret_from_fork+0x41/0x80
[   28.736129]  ret_from_fork_asm+0x1a/0x30
[   28.736657] 
[   28.736903] The buggy address belongs to the object at ffff888101aebe00
[   28.736903]  which belongs to the cache kmalloc-64 of size 64
[   28.738046] The buggy address is located 0 bytes to the right of
[   28.738046]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.739381] 
[   28.739576] The buggy address belongs to the physical page:
[   28.740084] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.740624] flags: 0x200000000000000(node=0|zone=2)
[   28.741119] page_type: f5(slab)
[   28.741580] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.743016] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.744065] page dumped because: kasan: bad access detected
[   28.744695] 
[   28.744886] Memory state around the buggy address:
[   28.745825]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.746307]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.746937] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.747779]                                      ^
[   28.748503]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.749202]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.749888] ==================================================================
---
[   27.870113] ==================================================================
[   27.870813] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4eb0/0x5450
[   27.871384] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.872082] 
[   27.872441] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.872551] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.872590] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.872648] Call Trace:
[   27.872684]  <TASK>
[   27.872766]  dump_stack_lvl+0x73/0xb0
[   27.872876]  print_report+0xd1/0x640
[   27.872952]  ? __virt_addr_valid+0x1db/0x2d0
[   27.873021]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.873139]  kasan_report+0x102/0x140
[   27.873276]  ? kasan_atomics_helper+0x4eb0/0x5450
[   27.873350]  ? kasan_atomics_helper+0x4eb0/0x5450
[   27.873488]  __asan_report_load8_noabort+0x18/0x20
[   27.873595]  kasan_atomics_helper+0x4eb0/0x5450
[   27.873672]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.873736]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.873848]  ? kasan_atomics+0x153/0x310
[   27.873954]  kasan_atomics+0x1dd/0x310
[   27.874019]  ? __pfx_kasan_atomics+0x10/0x10
[   27.874082]  ? __pfx_read_tsc+0x10/0x10
[   27.874134]  ? ktime_get_ts64+0x86/0x230
[   27.874320]  kunit_try_run_case+0x1b3/0x490
[   27.874451]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.874574]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.874700]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.874782]  ? __kthread_parkme+0x82/0x160
[   27.874869]  ? preempt_count_sub+0x50/0x80
[   27.874937]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.875006]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.875116]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.875229]  kthread+0x324/0x6d0
[   27.875296]  ? trace_preempt_on+0x20/0xc0
[   27.875361]  ? __pfx_kthread+0x10/0x10
[   27.875433]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.875494]  ? calculate_sigpending+0x7b/0xa0
[   27.875561]  ? __pfx_kthread+0x10/0x10
[   27.875617]  ret_from_fork+0x41/0x80
[   27.875679]  ? __pfx_kthread+0x10/0x10
[   27.875741]  ret_from_fork_asm+0x1a/0x30
[   27.875824]  </TASK>
[   27.875856] 
[   27.890701] Allocated by task 273:
[   27.891072]  kasan_save_stack+0x3d/0x60
[   27.891550]  kasan_save_track+0x18/0x40
[   27.891841]  kasan_save_alloc_info+0x3b/0x50
[   27.892442]  __kasan_kmalloc+0xb7/0xc0
[   27.892898]  __kmalloc_cache_noprof+0x184/0x410
[   27.893502]  kasan_atomics+0x96/0x310
[   27.893989]  kunit_try_run_case+0x1b3/0x490
[   27.894570]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.895034]  kthread+0x324/0x6d0
[   27.895538]  ret_from_fork+0x41/0x80
[   27.896001]  ret_from_fork_asm+0x1a/0x30
[   27.896530] 
[   27.896784] The buggy address belongs to the object at ffff888101aebe00
[   27.896784]  which belongs to the cache kmalloc-64 of size 64
[   27.897804] The buggy address is located 0 bytes to the right of
[   27.897804]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.898734] 
[   27.899003] The buggy address belongs to the physical page:
[   27.899655] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.900363] flags: 0x200000000000000(node=0|zone=2)
[   27.900846] page_type: f5(slab)
[   27.901100] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.901709] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.902504] page dumped because: kasan: bad access detected
[   27.903138] 
[   27.903436] Memory state around the buggy address:
[   27.903952]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.904665]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.905330] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.906000]                                      ^
[   27.906489]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.907216]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.907780] ==================================================================
---
[   28.945804] ==================================================================
[   28.947343] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4fa7/0x5450
[   28.948055] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.949150] 
[   28.949357] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.949483] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.949523] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.949581] Call Trace:
[   28.949616]  <TASK>
[   28.949651]  dump_stack_lvl+0x73/0xb0
[   28.949725]  print_report+0xd1/0x640
[   28.949792]  ? __virt_addr_valid+0x1db/0x2d0
[   28.949856]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.949980]  kasan_report+0x102/0x140
[   28.950045]  ? kasan_atomics_helper+0x4fa7/0x5450
[   28.950113]  ? kasan_atomics_helper+0x4fa7/0x5450
[   28.950189]  __asan_report_load8_noabort+0x18/0x20
[   28.950255]  kasan_atomics_helper+0x4fa7/0x5450
[   28.950326]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.950418]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.950503]  ? kasan_atomics+0x153/0x310
[   28.950552]  kasan_atomics+0x1dd/0x310
[   28.950601]  ? __pfx_kasan_atomics+0x10/0x10
[   28.950635]  ? __pfx_read_tsc+0x10/0x10
[   28.950664]  ? ktime_get_ts64+0x86/0x230
[   28.950700]  kunit_try_run_case+0x1b3/0x490
[   28.950737]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.950770]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.950804]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.950837]  ? __kthread_parkme+0x82/0x160
[   28.950868]  ? preempt_count_sub+0x50/0x80
[   28.950904]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.950939]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.950975]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.951010]  kthread+0x324/0x6d0
[   28.951040]  ? trace_preempt_on+0x20/0xc0
[   28.951072]  ? __pfx_kthread+0x10/0x10
[   28.951102]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.951135]  ? calculate_sigpending+0x7b/0xa0
[   28.951170]  ? __pfx_kthread+0x10/0x10
[   28.951232]  ret_from_fork+0x41/0x80
[   28.951267]  ? __pfx_kthread+0x10/0x10
[   28.951298]  ret_from_fork_asm+0x1a/0x30
[   28.951343]  </TASK>
[   28.951361] 
[   28.965079] Allocated by task 273:
[   28.965480]  kasan_save_stack+0x3d/0x60
[   28.965973]  kasan_save_track+0x18/0x40
[   28.966523]  kasan_save_alloc_info+0x3b/0x50
[   28.966829]  __kasan_kmalloc+0xb7/0xc0
[   28.967366]  __kmalloc_cache_noprof+0x184/0x410
[   28.967930]  kasan_atomics+0x96/0x310
[   28.968396]  kunit_try_run_case+0x1b3/0x490
[   28.968937]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.969492]  kthread+0x324/0x6d0
[   28.969888]  ret_from_fork+0x41/0x80
[   28.970364]  ret_from_fork_asm+0x1a/0x30
[   28.970796] 
[   28.971095] The buggy address belongs to the object at ffff888101aebe00
[   28.971095]  which belongs to the cache kmalloc-64 of size 64
[   28.971990] The buggy address is located 0 bytes to the right of
[   28.971990]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.973032] 
[   28.973258] The buggy address belongs to the physical page:
[   28.973787] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.974552] flags: 0x200000000000000(node=0|zone=2)
[   28.975032] page_type: f5(slab)
[   28.975562] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.976098] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.976879] page dumped because: kasan: bad access detected
[   28.977281] 
[   28.977662] Memory state around the buggy address:
[   28.978191]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.978709]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.979460] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.980028]                                      ^
[   28.980748]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.981314]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.981984] ==================================================================
---
[   28.467561] ==================================================================
[   28.468217] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4f32/0x5450
[   28.468913] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.469774] 
[   28.470141] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.470273] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.470314] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.470380] Call Trace:
[   28.470434]  <TASK>
[   28.470489]  dump_stack_lvl+0x73/0xb0
[   28.470571]  print_report+0xd1/0x640
[   28.470682]  ? __virt_addr_valid+0x1db/0x2d0
[   28.470790]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.470872]  kasan_report+0x102/0x140
[   28.470936]  ? kasan_atomics_helper+0x4f32/0x5450
[   28.471002]  ? kasan_atomics_helper+0x4f32/0x5450
[   28.471074]  __asan_report_load8_noabort+0x18/0x20
[   28.471135]  kasan_atomics_helper+0x4f32/0x5450
[   28.471292]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.471371]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.471480]  ? kasan_atomics+0x153/0x310
[   28.471549]  kasan_atomics+0x1dd/0x310
[   28.471695]  ? __pfx_kasan_atomics+0x10/0x10
[   28.471766]  ? __pfx_read_tsc+0x10/0x10
[   28.471823]  ? ktime_get_ts64+0x86/0x230
[   28.471888]  kunit_try_run_case+0x1b3/0x490
[   28.472045]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.472119]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.472184]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.472269]  ? __kthread_parkme+0x82/0x160
[   28.472379]  ? preempt_count_sub+0x50/0x80
[   28.472485]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.472557]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.472621]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.472687]  kthread+0x324/0x6d0
[   28.472810]  ? trace_preempt_on+0x20/0xc0
[   28.472900]  ? __pfx_kthread+0x10/0x10
[   28.472964]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.473024]  ? calculate_sigpending+0x7b/0xa0
[   28.473144]  ? __pfx_kthread+0x10/0x10
[   28.473267]  ret_from_fork+0x41/0x80
[   28.473332]  ? __pfx_kthread+0x10/0x10
[   28.473397]  ret_from_fork_asm+0x1a/0x30
[   28.473471]  </TASK>
[   28.473488] 
[   28.488544] Allocated by task 273:
[   28.488906]  kasan_save_stack+0x3d/0x60
[   28.489476]  kasan_save_track+0x18/0x40
[   28.489852]  kasan_save_alloc_info+0x3b/0x50
[   28.490386]  __kasan_kmalloc+0xb7/0xc0
[   28.491006]  __kmalloc_cache_noprof+0x184/0x410
[   28.491438]  kasan_atomics+0x96/0x310
[   28.491789]  kunit_try_run_case+0x1b3/0x490
[   28.492201]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.492779]  kthread+0x324/0x6d0
[   28.493170]  ret_from_fork+0x41/0x80
[   28.493653]  ret_from_fork_asm+0x1a/0x30
[   28.494096] 
[   28.494383] The buggy address belongs to the object at ffff888101aebe00
[   28.494383]  which belongs to the cache kmalloc-64 of size 64
[   28.495509] The buggy address is located 0 bytes to the right of
[   28.495509]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.496913] 
[   28.497094] The buggy address belongs to the physical page:
[   28.498977] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.499727] flags: 0x200000000000000(node=0|zone=2)
[   28.500877] page_type: f5(slab)
[   28.501134] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.501591] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.502336] page dumped because: kasan: bad access detected
[   28.502976] 
[   28.503223] Memory state around the buggy address:
[   28.503768]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.504635]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.505511] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.506059]                                      ^
[   28.507048]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.507682]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.508121] ==================================================================
---
[   29.021850] ==================================================================
[   29.022760] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x5117/0x5450
[   29.023510] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   29.024478] 
[   29.024783] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.024886] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.024922] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.024983] Call Trace:
[   29.025018]  <TASK>
[   29.025053]  dump_stack_lvl+0x73/0xb0
[   29.025131]  print_report+0xd1/0x640
[   29.025194]  ? __virt_addr_valid+0x1db/0x2d0
[   29.025295]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.025460]  kasan_report+0x102/0x140
[   29.025596]  ? kasan_atomics_helper+0x5117/0x5450
[   29.025723]  ? kasan_atomics_helper+0x5117/0x5450
[   29.025818]  __asan_report_load8_noabort+0x18/0x20
[   29.025886]  kasan_atomics_helper+0x5117/0x5450
[   29.026028]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   29.026101]  ? __kmalloc_cache_noprof+0x184/0x410
[   29.026166]  ? kasan_atomics+0x153/0x310
[   29.026231]  kasan_atomics+0x1dd/0x310
[   29.026292]  ? __pfx_kasan_atomics+0x10/0x10
[   29.026426]  ? __pfx_read_tsc+0x10/0x10
[   29.026509]  ? ktime_get_ts64+0x86/0x230
[   29.026576]  kunit_try_run_case+0x1b3/0x490
[   29.026724]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.026794]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.026859]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.026926]  ? __kthread_parkme+0x82/0x160
[   29.026990]  ? preempt_count_sub+0x50/0x80
[   29.027059]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.027129]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.027197]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.027265]  kthread+0x324/0x6d0
[   29.027319]  ? trace_preempt_on+0x20/0xc0
[   29.027485]  ? __pfx_kthread+0x10/0x10
[   29.027616]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.027737]  ? calculate_sigpending+0x7b/0xa0
[   29.027823]  ? __pfx_kthread+0x10/0x10
[   29.027888]  ret_from_fork+0x41/0x80
[   29.027947]  ? __pfx_kthread+0x10/0x10
[   29.028010]  ret_from_fork_asm+0x1a/0x30
[   29.028100]  </TASK>
[   29.028134] 
[   29.049717] Allocated by task 273:
[   29.050041]  kasan_save_stack+0x3d/0x60
[   29.050500]  kasan_save_track+0x18/0x40
[   29.051000]  kasan_save_alloc_info+0x3b/0x50
[   29.051394]  __kasan_kmalloc+0xb7/0xc0
[   29.051933]  __kmalloc_cache_noprof+0x184/0x410
[   29.052551]  kasan_atomics+0x96/0x310
[   29.053035]  kunit_try_run_case+0x1b3/0x490
[   29.053398]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.053832]  kthread+0x324/0x6d0
[   29.054488]  ret_from_fork+0x41/0x80
[   29.055013]  ret_from_fork_asm+0x1a/0x30
[   29.055560] 
[   29.055793] The buggy address belongs to the object at ffff888101aebe00
[   29.055793]  which belongs to the cache kmalloc-64 of size 64
[   29.056846] The buggy address is located 0 bytes to the right of
[   29.056846]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   29.057877] 
[   29.058080] The buggy address belongs to the physical page:
[   29.058891] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   29.059713] flags: 0x200000000000000(node=0|zone=2)
[   29.060391] page_type: f5(slab)
[   29.060829] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   29.061680] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   29.062419] page dumped because: kasan: bad access detected
[   29.063043] 
[   29.063311] Memory state around the buggy address:
[   29.063824]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   29.064540]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   29.065082] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   29.065927]                                      ^
[   29.066494]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.067011]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.067702] ==================================================================
---
[   28.866256] ==================================================================
[   28.866894] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4fb4/0x5450
[   28.867511] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   28.868125] 
[   28.868357] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   28.868483] Tainted: [B]=BAD_PAGE, [N]=TEST
[   28.868523] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   28.868581] Call Trace:
[   28.868612]  <TASK>
[   28.868646]  dump_stack_lvl+0x73/0xb0
[   28.868720]  print_report+0xd1/0x640
[   28.868784]  ? __virt_addr_valid+0x1db/0x2d0
[   28.868850]  ? kasan_complete_mode_report_info+0x2a/0x200
[   28.868960]  kasan_report+0x102/0x140
[   28.869031]  ? kasan_atomics_helper+0x4fb4/0x5450
[   28.869102]  ? kasan_atomics_helper+0x4fb4/0x5450
[   28.869178]  __asan_report_load8_noabort+0x18/0x20
[   28.869286]  kasan_atomics_helper+0x4fb4/0x5450
[   28.869358]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   28.869436]  ? __kmalloc_cache_noprof+0x184/0x410
[   28.869501]  ? kasan_atomics+0x153/0x310
[   28.869569]  kasan_atomics+0x1dd/0x310
[   28.869630]  ? __pfx_kasan_atomics+0x10/0x10
[   28.869689]  ? __pfx_read_tsc+0x10/0x10
[   28.869742]  ? ktime_get_ts64+0x86/0x230
[   28.869805]  kunit_try_run_case+0x1b3/0x490
[   28.869877]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.869938]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   28.870005]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   28.870072]  ? __kthread_parkme+0x82/0x160
[   28.870134]  ? preempt_count_sub+0x50/0x80
[   28.870246]  ? __pfx_kunit_try_run_case+0x10/0x10
[   28.870328]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.870423]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   28.870495]  kthread+0x324/0x6d0
[   28.870553]  ? trace_preempt_on+0x20/0xc0
[   28.870612]  ? __pfx_kthread+0x10/0x10
[   28.870668]  ? _raw_spin_unlock_irq+0x47/0x80
[   28.870725]  ? calculate_sigpending+0x7b/0xa0
[   28.870790]  ? __pfx_kthread+0x10/0x10
[   28.870852]  ret_from_fork+0x41/0x80
[   28.870891]  ? __pfx_kthread+0x10/0x10
[   28.870924]  ret_from_fork_asm+0x1a/0x30
[   28.870971]  </TASK>
[   28.870988] 
[   28.886039] Allocated by task 273:
[   28.886507]  kasan_save_stack+0x3d/0x60
[   28.886806]  kasan_save_track+0x18/0x40
[   28.887094]  kasan_save_alloc_info+0x3b/0x50
[   28.887649]  __kasan_kmalloc+0xb7/0xc0
[   28.888140]  __kmalloc_cache_noprof+0x184/0x410
[   28.888805]  kasan_atomics+0x96/0x310
[   28.889325]  kunit_try_run_case+0x1b3/0x490
[   28.889892]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   28.890591]  kthread+0x324/0x6d0
[   28.891044]  ret_from_fork+0x41/0x80
[   28.891580]  ret_from_fork_asm+0x1a/0x30
[   28.892094] 
[   28.892436] The buggy address belongs to the object at ffff888101aebe00
[   28.892436]  which belongs to the cache kmalloc-64 of size 64
[   28.893612] The buggy address is located 0 bytes to the right of
[   28.893612]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   28.894495] 
[   28.894721] The buggy address belongs to the physical page:
[   28.895120] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   28.895788] flags: 0x200000000000000(node=0|zone=2)
[   28.896220] page_type: f5(slab)
[   28.896612] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   28.897195] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   28.897793] page dumped because: kasan: bad access detected
[   28.898279] 
[   28.898534] Memory state around the buggy address:
[   28.899170]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.899783]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   28.900182] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   28.900596]                                      ^
[   28.901109]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.901981]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   28.902782] ==================================================================

test log snippet boot parser 76


[   26.362005] ==================================================================
[   26.362441] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4ba4/0x5450
[   26.364340] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.364905] 
[   26.365392] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.365523] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.365562] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.365721] Call Trace:
[   26.365768]  <TASK>
[   26.365809]  dump_stack_lvl+0x73/0xb0
[   26.365887]  print_report+0xd1/0x640
[   26.365954]  ? __virt_addr_valid+0x1db/0x2d0
[   26.366061]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.366146]  kasan_report+0x102/0x140
[   26.366210]  ? kasan_atomics_helper+0x4ba4/0x5450
[   26.366301]  ? kasan_atomics_helper+0x4ba4/0x5450
[   26.366368]  __asan_report_store4_noabort+0x1b/0x30
[   26.366423]  kasan_atomics_helper+0x4ba4/0x5450
[   26.366462]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.366497]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.366529]  ? kasan_atomics+0x153/0x310
[   26.366568]  kasan_atomics+0x1dd/0x310
[   26.366642]  ? __pfx_kasan_atomics+0x10/0x10
[   26.366709]  ? __pfx_read_tsc+0x10/0x10
[   26.366742]  ? ktime_get_ts64+0x86/0x230
[   26.366780]  kunit_try_run_case+0x1b3/0x490
[   26.366818]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.366851]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.366885]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.366919]  ? __kthread_parkme+0x82/0x160
[   26.366949]  ? preempt_count_sub+0x50/0x80
[   26.366985]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.367019]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.367055]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.367088]  kthread+0x324/0x6d0
[   26.367119]  ? trace_preempt_on+0x20/0xc0
[   26.367152]  ? __pfx_kthread+0x10/0x10
[   26.367196]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.367256]  ? calculate_sigpending+0x7b/0xa0
[   26.367293]  ? __pfx_kthread+0x10/0x10
[   26.367325]  ret_from_fork+0x41/0x80
[   26.367357]  ? __pfx_kthread+0x10/0x10
[   26.367390]  ret_from_fork_asm+0x1a/0x30
[   26.367455]  </TASK>
[   26.367473] 
[   26.387463] Allocated by task 273:
[   26.388070]  kasan_save_stack+0x3d/0x60
[   26.388640]  kasan_save_track+0x18/0x40
[   26.389283]  kasan_save_alloc_info+0x3b/0x50
[   26.389779]  __kasan_kmalloc+0xb7/0xc0
[   26.390563]  __kmalloc_cache_noprof+0x184/0x410
[   26.391177]  kasan_atomics+0x96/0x310
[   26.391707]  kunit_try_run_case+0x1b3/0x490
[   26.392283]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.392755]  kthread+0x324/0x6d0
[   26.393099]  ret_from_fork+0x41/0x80
[   26.393653]  ret_from_fork_asm+0x1a/0x30
[   26.394398] 
[   26.394564] The buggy address belongs to the object at ffff888101aebe00
[   26.394564]  which belongs to the cache kmalloc-64 of size 64
[   26.395689] The buggy address is located 0 bytes to the right of
[   26.395689]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.396990] 
[   26.397429] The buggy address belongs to the physical page:
[   26.398149] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.398957] flags: 0x200000000000000(node=0|zone=2)
[   26.399862] page_type: f5(slab)
[   26.400211] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.401093] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.401917] page dumped because: kasan: bad access detected
[   26.402647] 
[   26.403129] Memory state around the buggy address:
[   26.403739]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.404446]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.405299] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.406188]                                      ^
[   26.407146]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.408311]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.408878] ==================================================================
---
[   26.640276] ==================================================================
[   26.641520] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4b3c/0x5450
[   26.642325] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.643145] 
[   26.643530] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.643636] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.643676] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.643734] Call Trace:
[   26.643770]  <TASK>
[   26.643804]  dump_stack_lvl+0x73/0xb0
[   26.643883]  print_report+0xd1/0x640
[   26.643946]  ? __virt_addr_valid+0x1db/0x2d0
[   26.644015]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.644087]  kasan_report+0x102/0x140
[   26.644151]  ? kasan_atomics_helper+0x4b3c/0x5450
[   26.644218]  ? kasan_atomics_helper+0x4b3c/0x5450
[   26.644296]  __asan_report_store4_noabort+0x1b/0x30
[   26.644566]  kasan_atomics_helper+0x4b3c/0x5450
[   26.644660]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.644735]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.645231]  ? kasan_atomics+0x153/0x310
[   26.645309]  kasan_atomics+0x1dd/0x310
[   26.645373]  ? __pfx_kasan_atomics+0x10/0x10
[   26.645532]  ? __pfx_read_tsc+0x10/0x10
[   26.645615]  ? ktime_get_ts64+0x86/0x230
[   26.645690]  kunit_try_run_case+0x1b3/0x490
[   26.645766]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.645828]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.645868]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.645904]  ? __kthread_parkme+0x82/0x160
[   26.645936]  ? preempt_count_sub+0x50/0x80
[   26.645972]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.646009]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.646045]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.646081]  kthread+0x324/0x6d0
[   26.646112]  ? trace_preempt_on+0x20/0xc0
[   26.646144]  ? __pfx_kthread+0x10/0x10
[   26.646176]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.646221]  ? calculate_sigpending+0x7b/0xa0
[   26.646257]  ? __pfx_kthread+0x10/0x10
[   26.646290]  ret_from_fork+0x41/0x80
[   26.646320]  ? __pfx_kthread+0x10/0x10
[   26.646361]  ret_from_fork_asm+0x1a/0x30
[   26.646425]  </TASK>
[   26.646443] 
[   26.667000] Allocated by task 273:
[   26.667396]  kasan_save_stack+0x3d/0x60
[   26.667886]  kasan_save_track+0x18/0x40
[   26.668231]  kasan_save_alloc_info+0x3b/0x50
[   26.668770]  __kasan_kmalloc+0xb7/0xc0
[   26.669155]  __kmalloc_cache_noprof+0x184/0x410
[   26.669854]  kasan_atomics+0x96/0x310
[   26.670336]  kunit_try_run_case+0x1b3/0x490
[   26.671013]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.671625]  kthread+0x324/0x6d0
[   26.672177]  ret_from_fork+0x41/0x80
[   26.672544]  ret_from_fork_asm+0x1a/0x30
[   26.673161] 
[   26.673457] The buggy address belongs to the object at ffff888101aebe00
[   26.673457]  which belongs to the cache kmalloc-64 of size 64
[   26.674742] The buggy address is located 0 bytes to the right of
[   26.674742]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.676277] 
[   26.676457] The buggy address belongs to the physical page:
[   26.676987] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.678011] flags: 0x200000000000000(node=0|zone=2)
[   26.678503] page_type: f5(slab)
[   26.678893] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.679419] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.679914] page dumped because: kasan: bad access detected
[   26.680303] 
[   26.680886] Memory state around the buggy address:
[   26.681328]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.682363]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.683246] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.684088]                                      ^
[   26.684749]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.685528]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.686420] ==================================================================
---
[   26.456006] ==================================================================
[   26.456672] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x4b70/0x5450
[   26.457100] Write of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.458264] 
[   26.458590] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.458703] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.458738] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.458790] Call Trace:
[   26.458820]  <TASK>
[   26.458858]  dump_stack_lvl+0x73/0xb0
[   26.458934]  print_report+0xd1/0x640
[   26.458998]  ? __virt_addr_valid+0x1db/0x2d0
[   26.459064]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.459135]  kasan_report+0x102/0x140
[   26.459215]  ? kasan_atomics_helper+0x4b70/0x5450
[   26.459276]  ? kasan_atomics_helper+0x4b70/0x5450
[   26.459321]  __asan_report_store4_noabort+0x1b/0x30
[   26.459354]  kasan_atomics_helper+0x4b70/0x5450
[   26.459391]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.459576]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.459649]  ? kasan_atomics+0x153/0x310
[   26.459719]  kasan_atomics+0x1dd/0x310
[   26.459819]  ? __pfx_kasan_atomics+0x10/0x10
[   26.459918]  ? __pfx_read_tsc+0x10/0x10
[   26.459978]  ? ktime_get_ts64+0x86/0x230
[   26.460047]  kunit_try_run_case+0x1b3/0x490
[   26.460119]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.460222]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.460300]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.460370]  ? __kthread_parkme+0x82/0x160
[   26.460449]  ? preempt_count_sub+0x50/0x80
[   26.460519]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.460562]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.460666]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.460729]  kthread+0x324/0x6d0
[   26.460761]  ? trace_preempt_on+0x20/0xc0
[   26.460795]  ? __pfx_kthread+0x10/0x10
[   26.460827]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.460860]  ? calculate_sigpending+0x7b/0xa0
[   26.460894]  ? __pfx_kthread+0x10/0x10
[   26.460927]  ret_from_fork+0x41/0x80
[   26.460958]  ? __pfx_kthread+0x10/0x10
[   26.460989]  ret_from_fork_asm+0x1a/0x30
[   26.461035]  </TASK>
[   26.461053] 
[   26.479632] Allocated by task 273:
[   26.479976]  kasan_save_stack+0x3d/0x60
[   26.480481]  kasan_save_track+0x18/0x40
[   26.481066]  kasan_save_alloc_info+0x3b/0x50
[   26.481643]  __kasan_kmalloc+0xb7/0xc0
[   26.482318]  __kmalloc_cache_noprof+0x184/0x410
[   26.483018]  kasan_atomics+0x96/0x310
[   26.483609]  kunit_try_run_case+0x1b3/0x490
[   26.484297]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.484790]  kthread+0x324/0x6d0
[   26.485033]  ret_from_fork+0x41/0x80
[   26.485318]  ret_from_fork_asm+0x1a/0x30
[   26.485770] 
[   26.486002] The buggy address belongs to the object at ffff888101aebe00
[   26.486002]  which belongs to the cache kmalloc-64 of size 64
[   26.487567] The buggy address is located 0 bytes to the right of
[   26.487567]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.489530] 
[   26.489692] The buggy address belongs to the physical page:
[   26.490025] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.490867] flags: 0x200000000000000(node=0|zone=2)
[   26.491178] page_type: f5(slab)
[   26.491444] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.492811] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.494202] page dumped because: kasan: bad access detected
[   26.495026] 
[   26.496121] Memory state around the buggy address:
[   26.496599]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.497816]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.498343] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.498985]                                      ^
[   26.499947]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.500528]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.501495] ==================================================================

test log snippet boot parser 77


[   26.502497] ==================================================================
[   26.502949] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x3e0/0x5450
[   26.503354] Read of size 4 at addr ffff888101aebe30 by task kunit_try_catch/273
[   26.504120] 
[   26.504479] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.504594] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.504633] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.504693] Call Trace:
[   26.504732]  <TASK>
[   26.504768]  dump_stack_lvl+0x73/0xb0
[   26.504847]  print_report+0xd1/0x640
[   26.504970]  ? __virt_addr_valid+0x1db/0x2d0
[   26.505078]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.505245]  kasan_report+0x102/0x140
[   26.505323]  ? kasan_atomics_helper+0x3e0/0x5450
[   26.505481]  ? kasan_atomics_helper+0x3e0/0x5450
[   26.505572]  kasan_check_range+0x10c/0x1c0
[   26.505686]  __kasan_check_read+0x15/0x20
[   26.505751]  kasan_atomics_helper+0x3e0/0x5450
[   26.505822]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   26.505905]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.505970]  ? kasan_atomics+0x153/0x310
[   26.506033]  kasan_atomics+0x1dd/0x310
[   26.506090]  ? __pfx_kasan_atomics+0x10/0x10
[   26.506250]  ? __pfx_read_tsc+0x10/0x10
[   26.506316]  ? ktime_get_ts64+0x86/0x230
[   26.506394]  kunit_try_run_case+0x1b3/0x490
[   26.506535]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.506660]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.506732]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.506797]  ? __kthread_parkme+0x82/0x160
[   26.506849]  ? preempt_count_sub+0x50/0x80
[   26.506887]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.506922]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.506959]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.506994]  kthread+0x324/0x6d0
[   26.507025]  ? trace_preempt_on+0x20/0xc0
[   26.507057]  ? __pfx_kthread+0x10/0x10
[   26.507089]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.507120]  ? calculate_sigpending+0x7b/0xa0
[   26.507155]  ? __pfx_kthread+0x10/0x10
[   26.507221]  ret_from_fork+0x41/0x80
[   26.507260]  ? __pfx_kthread+0x10/0x10
[   26.507291]  ret_from_fork_asm+0x1a/0x30
[   26.507338]  </TASK>
[   26.507356] 
[   26.525440] Allocated by task 273:
[   26.526015]  kasan_save_stack+0x3d/0x60
[   26.526552]  kasan_save_track+0x18/0x40
[   26.527264]  kasan_save_alloc_info+0x3b/0x50
[   26.528171]  __kasan_kmalloc+0xb7/0xc0
[   26.528752]  __kmalloc_cache_noprof+0x184/0x410
[   26.529264]  kasan_atomics+0x96/0x310
[   26.529710]  kunit_try_run_case+0x1b3/0x490
[   26.530468]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.531052]  kthread+0x324/0x6d0
[   26.531551]  ret_from_fork+0x41/0x80
[   26.532158]  ret_from_fork_asm+0x1a/0x30
[   26.532679] 
[   26.533105] The buggy address belongs to the object at ffff888101aebe00
[   26.533105]  which belongs to the cache kmalloc-64 of size 64
[   26.534631] The buggy address is located 0 bytes to the right of
[   26.534631]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   26.535799] 
[   26.535973] The buggy address belongs to the physical page:
[   26.536373] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   26.536808] flags: 0x200000000000000(node=0|zone=2)
[   26.537097] page_type: f5(slab)
[   26.537334] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   26.538013] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   26.538507] page dumped because: kasan: bad access detected
[   26.539375] 
[   26.540111] Memory state around the buggy address:
[   26.540810]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.541696]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   26.542494] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   26.543388]                                      ^
[   26.544108]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.545357]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.546257] ==================================================================
---
[   27.831388] ==================================================================
[   27.832266] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x13b6/0x5450
[   27.832979] Read of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.834415] 
[   27.834667] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.834794] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.834857] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.834919] Call Trace:
[   27.834957]  <TASK>
[   27.834990]  dump_stack_lvl+0x73/0xb0
[   27.835071]  print_report+0xd1/0x640
[   27.835151]  ? __virt_addr_valid+0x1db/0x2d0
[   27.835275]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.835354]  kasan_report+0x102/0x140
[   27.835441]  ? kasan_atomics_helper+0x13b6/0x5450
[   27.835515]  ? kasan_atomics_helper+0x13b6/0x5450
[   27.835641]  kasan_check_range+0x10c/0x1c0
[   27.835862]  __kasan_check_read+0x15/0x20
[   27.835973]  kasan_atomics_helper+0x13b6/0x5450
[   27.836083]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.836156]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.836271]  ? kasan_atomics+0x153/0x310
[   27.836389]  kasan_atomics+0x1dd/0x310
[   27.836481]  ? __pfx_kasan_atomics+0x10/0x10
[   27.836550]  ? __pfx_read_tsc+0x10/0x10
[   27.836611]  ? ktime_get_ts64+0x86/0x230
[   27.836678]  kunit_try_run_case+0x1b3/0x490
[   27.836802]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.836879]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.836944]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.837013]  ? __kthread_parkme+0x82/0x160
[   27.837090]  ? preempt_count_sub+0x50/0x80
[   27.837213]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.837304]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.837383]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.837468]  kthread+0x324/0x6d0
[   27.837531]  ? trace_preempt_on+0x20/0xc0
[   27.837592]  ? __pfx_kthread+0x10/0x10
[   27.837675]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.837768]  ? calculate_sigpending+0x7b/0xa0
[   27.837833]  ? __pfx_kthread+0x10/0x10
[   27.837898]  ret_from_fork+0x41/0x80
[   27.837958]  ? __pfx_kthread+0x10/0x10
[   27.838026]  ret_from_fork_asm+0x1a/0x30
[   27.838115]  </TASK>
[   27.838147] 
[   27.851971] Allocated by task 273:
[   27.852434]  kasan_save_stack+0x3d/0x60
[   27.852833]  kasan_save_track+0x18/0x40
[   27.853295]  kasan_save_alloc_info+0x3b/0x50
[   27.853799]  __kasan_kmalloc+0xb7/0xc0
[   27.854175]  __kmalloc_cache_noprof+0x184/0x410
[   27.854744]  kasan_atomics+0x96/0x310
[   27.855089]  kunit_try_run_case+0x1b3/0x490
[   27.855604]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.855956]  kthread+0x324/0x6d0
[   27.856315]  ret_from_fork+0x41/0x80
[   27.856785]  ret_from_fork_asm+0x1a/0x30
[   27.857300] 
[   27.857570] The buggy address belongs to the object at ffff888101aebe00
[   27.857570]  which belongs to the cache kmalloc-64 of size 64
[   27.858714] The buggy address is located 0 bytes to the right of
[   27.858714]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.859748] 
[   27.860017] The buggy address belongs to the physical page:
[   27.860654] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.861367] flags: 0x200000000000000(node=0|zone=2)
[   27.861889] page_type: f5(slab)
[   27.862396] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.862935] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.863780] page dumped because: kasan: bad access detected
[   27.864301] 
[   27.864558] Memory state around the buggy address:
[   27.865039]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.865849]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.866517] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.867112]                                      ^
[   27.867664]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.868306]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.868949] ==================================================================
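The "Memory state" rows in these reports use KASAN's generic-mode shadow encoding, where one shadow byte covers 8 bytes of memory: `00` means all 8 bytes are addressable, `01`-`07` mean only that many leading bytes are, and `fa`/`fb`/`fc` mark redzones or freed memory. As an illustrative sketch (this decoder is not part of KASAN), the marked row `>ffff888101aebe00: 00 00 00 00 00 00 fc fc ...` decodes back to the 48-byte allocation the report describes:

```python
# Illustrative decoder for KASAN generic-mode shadow bytes: each shadow
# byte covers 8 bytes of memory. 0x00 = fully addressable, 0x01-0x07 =
# that many leading bytes addressable, 0xfa/0xfb/0xfc = redzone/freed.
def addressable_bytes(shadow_row: list[int]) -> int:
    total = 0
    for b in shadow_row:
        if b == 0x00:
            total += 8          # whole 8-byte granule is valid
        elif 0x01 <= b <= 0x07:
            total += b          # partially valid granule
    return total

# The marked row from the report: six 00 bytes, then fc redzone bytes.
row = [0x00] * 6 + [0xFC] * 10
print(addressable_bytes(row))
# -> 48, matching the allocated 48-byte region
```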

test log snippet boot parser 78

[   27.950775] ==================================================================
[   27.952817] BUG: KASAN: slab-out-of-bounds in kasan_atomics_helper+0x50d6/0x5450
[   27.953466] Write of size 8 at addr ffff888101aebe30 by task kunit_try_catch/273
[   27.954301] 
[   27.954597] CPU: 0 UID: 0 PID: 273 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   27.954706] Tainted: [B]=BAD_PAGE, [N]=TEST
[   27.954741] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   27.954797] Call Trace:
[   27.954827]  <TASK>
[   27.954859]  dump_stack_lvl+0x73/0xb0
[   27.954937]  print_report+0xd1/0x640
[   27.955002]  ? __virt_addr_valid+0x1db/0x2d0
[   27.955072]  ? kasan_complete_mode_report_info+0x2a/0x200
[   27.955141]  kasan_report+0x102/0x140
[   27.955234]  ? kasan_atomics_helper+0x50d6/0x5450
[   27.955279]  ? kasan_atomics_helper+0x50d6/0x5450
[   27.955321]  __asan_report_store8_noabort+0x1b/0x30
[   27.955354]  kasan_atomics_helper+0x50d6/0x5450
[   27.955391]  ? __pfx_kasan_atomics_helper+0x10/0x10
[   27.955442]  ? __kmalloc_cache_noprof+0x184/0x410
[   27.955476]  ? kasan_atomics+0x153/0x310
[   27.955511]  kasan_atomics+0x1dd/0x310
[   27.955542]  ? __pfx_kasan_atomics+0x10/0x10
[   27.955574]  ? __pfx_read_tsc+0x10/0x10
[   27.955606]  ? ktime_get_ts64+0x86/0x230
[   27.955639]  kunit_try_run_case+0x1b3/0x490
[   27.955677]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.955710]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   27.955745]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   27.955779]  ? __kthread_parkme+0x82/0x160
[   27.955808]  ? preempt_count_sub+0x50/0x80
[   27.955844]  ? __pfx_kunit_try_run_case+0x10/0x10
[   27.955879]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.955913]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   27.955949]  kthread+0x324/0x6d0
[   27.955979]  ? trace_preempt_on+0x20/0xc0
[   27.956011]  ? __pfx_kthread+0x10/0x10
[   27.956043]  ? _raw_spin_unlock_irq+0x47/0x80
[   27.956075]  ? calculate_sigpending+0x7b/0xa0
[   27.956110]  ? __pfx_kthread+0x10/0x10
[   27.956141]  ret_from_fork+0x41/0x80
[   27.956172]  ? __pfx_kthread+0x10/0x10
[   27.956233]  ret_from_fork_asm+0x1a/0x30
[   27.956283]  </TASK>
[   27.956299] 
[   27.968421] Allocated by task 273:
[   27.968803]  kasan_save_stack+0x3d/0x60
[   27.969253]  kasan_save_track+0x18/0x40
[   27.969701]  kasan_save_alloc_info+0x3b/0x50
[   27.970134]  __kasan_kmalloc+0xb7/0xc0
[   27.970561]  __kmalloc_cache_noprof+0x184/0x410
[   27.970878]  kasan_atomics+0x96/0x310
[   27.971275]  kunit_try_run_case+0x1b3/0x490
[   27.971748]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   27.972295]  kthread+0x324/0x6d0
[   27.972704]  ret_from_fork+0x41/0x80
[   27.973005]  ret_from_fork_asm+0x1a/0x30
[   27.973370] 
[   27.973631] The buggy address belongs to the object at ffff888101aebe00
[   27.973631]  which belongs to the cache kmalloc-64 of size 64
[   27.974793] The buggy address is located 0 bytes to the right of
[   27.974793]  allocated 48-byte region [ffff888101aebe00, ffff888101aebe30)
[   27.975576] 
[   27.975812] The buggy address belongs to the physical page:
[   27.976355] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aeb
[   27.977072] flags: 0x200000000000000(node=0|zone=2)
[   27.977570] page_type: f5(slab)
[   27.977899] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   27.978604] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   27.979109] page dumped because: kasan: bad access detected
[   27.979794] 
[   27.980022] Memory state around the buggy address:
[   27.981032]  ffff888101aebd00: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.982991]  ffff888101aebd80: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   27.983707] >ffff888101aebe00: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[   27.984316]                                      ^
[   27.984724]  ffff888101aebe80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.987582]  ffff888101aebf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   27.988179] ==================================================================

test log snippet boot parser 79

[   25.868822] ==================================================================
[   25.869275] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x4ad/0xd50
[   25.870052] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.870684] 
[   25.870944] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.871040] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.871074] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.871125] Call Trace:
[   25.871155]  <TASK>
[   25.871231]  dump_stack_lvl+0x73/0xb0
[   25.871313]  print_report+0xd1/0x640
[   25.871391]  ? __virt_addr_valid+0x1db/0x2d0
[   25.871491]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.871563]  kasan_report+0x102/0x140
[   25.871622]  ? kasan_bitops_modify.constprop.0+0x4ad/0xd50
[   25.871691]  ? kasan_bitops_modify.constprop.0+0x4ad/0xd50
[   25.871768]  kasan_check_range+0x10c/0x1c0
[   25.871831]  __kasan_check_write+0x18/0x20
[   25.871886]  kasan_bitops_modify.constprop.0+0x4ad/0xd50
[   25.871951]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.872008]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.872058]  ? trace_hardirqs_on+0x37/0xe0
[   25.872111]  ? kasan_bitops_generic+0x93/0x1c0
[   25.872220]  kasan_bitops_generic+0x117/0x1c0
[   25.872314]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.872389]  ? __pfx_read_tsc+0x10/0x10
[   25.872471]  ? ktime_get_ts64+0x86/0x230
[   25.872535]  kunit_try_run_case+0x1b3/0x490
[   25.872603]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.872662]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.872726]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.872780]  ? __kthread_parkme+0x82/0x160
[   25.872833]  ? preempt_count_sub+0x50/0x80
[   25.872891]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.872974]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.873067]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.873135]  kthread+0x324/0x6d0
[   25.873227]  ? trace_preempt_on+0x20/0xc0
[   25.873294]  ? __pfx_kthread+0x10/0x10
[   25.873392]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.873471]  ? calculate_sigpending+0x7b/0xa0
[   25.873538]  ? __pfx_kthread+0x10/0x10
[   25.873591]  ret_from_fork+0x41/0x80
[   25.873646]  ? __pfx_kthread+0x10/0x10
[   25.873700]  ret_from_fork_asm+0x1a/0x30
[   25.873797]  </TASK>
[   25.873845] 
[   25.887743] Allocated by task 269:
[   25.888117]  kasan_save_stack+0x3d/0x60
[   25.888636]  kasan_save_track+0x18/0x40
[   25.889067]  kasan_save_alloc_info+0x3b/0x50
[   25.889592]  __kasan_kmalloc+0xb7/0xc0
[   25.890044]  __kmalloc_cache_noprof+0x184/0x410
[   25.890589]  kasan_bitops_generic+0x93/0x1c0
[   25.891029]  kunit_try_run_case+0x1b3/0x490
[   25.891551]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.891984]  kthread+0x324/0x6d0
[   25.892450]  ret_from_fork+0x41/0x80
[   25.892809]  ret_from_fork_asm+0x1a/0x30
[   25.893287] 
[   25.893529] The buggy address belongs to the object at ffff8881012e6760
[   25.893529]  which belongs to the cache kmalloc-16 of size 16
[   25.894419] The buggy address is located 8 bytes inside of
[   25.894419]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.895282] 
[   25.895542] The buggy address belongs to the physical page:
[   25.896036] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.896602] flags: 0x200000000000000(node=0|zone=2)
[   25.897122] page_type: f5(slab)
[   25.897537] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.898341] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.899056] page dumped because: kasan: bad access detected
[   25.899617] 
[   25.899848] Memory state around the buggy address:
[   25.900370]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.900843]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.901300] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.901988]                                                           ^
[   25.902664]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.903371]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.904059] ==================================================================
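Unlike the atomics reports above, this bitops report says the address is "8 bytes inside of" the allocation, yet KASAN still flags it: the 8-byte bit-op write *starts* in bounds but spans past the 9-byte object's end. A hypothetical sketch of that arithmetic, using the addresses from this report:

```python
# Hypothetical sketch: an access whose start address is in bounds can
# still be out of bounds if start + size overruns the object.
region_start = 0xffff8881012e6760
region_len = 9                      # kmalloc(9) lands in the kmalloc-16 cache
access_addr = 0xffff8881012e6768    # from "Write of size 8 at addr ..."
access_size = 8

start_off = access_addr - region_start
end_off = start_off + access_size
print(f"access covers object offsets [{start_off}, {end_off})")
print(f"out of bounds: {end_off > region_len}")
# -> access covers object offsets [8, 16)
# -> out of bounds: True
```

This also explains the `00 01` pair in the shadow dump for this object: the first granule of 8 bytes is fully addressable, and only 1 byte of the second granule is.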
---
[   25.745289] ==================================================================
[   25.746453] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x2d6/0xd50
[   25.747209] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.747631] 
[   25.747870] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.747976] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.748008] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.748064] Call Trace:
[   25.748099]  <TASK>
[   25.748160]  dump_stack_lvl+0x73/0xb0
[   25.748337]  print_report+0xd1/0x640
[   25.748420]  ? __virt_addr_valid+0x1db/0x2d0
[   25.748609]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.748699]  kasan_report+0x102/0x140
[   25.748794]  ? kasan_bitops_modify.constprop.0+0x2d6/0xd50
[   25.748867]  ? kasan_bitops_modify.constprop.0+0x2d6/0xd50
[   25.748946]  kasan_check_range+0x10c/0x1c0
[   25.749004]  __kasan_check_write+0x18/0x20
[   25.749064]  kasan_bitops_modify.constprop.0+0x2d6/0xd50
[   25.749129]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.749191]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.749253]  ? trace_hardirqs_on+0x37/0xe0
[   25.749308]  ? kasan_bitops_generic+0x93/0x1c0
[   25.749378]  kasan_bitops_generic+0x117/0x1c0
[   25.749456]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.749519]  ? __pfx_read_tsc+0x10/0x10
[   25.749600]  ? ktime_get_ts64+0x86/0x230
[   25.749663]  kunit_try_run_case+0x1b3/0x490
[   25.749729]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.749832]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.749896]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.749955]  ? __kthread_parkme+0x82/0x160
[   25.750012]  ? preempt_count_sub+0x50/0x80
[   25.750073]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.750134]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.750200]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.750262]  kthread+0x324/0x6d0
[   25.750353]  ? trace_preempt_on+0x20/0xc0
[   25.750441]  ? __pfx_kthread+0x10/0x10
[   25.750503]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.750566]  ? calculate_sigpending+0x7b/0xa0
[   25.750633]  ? __pfx_kthread+0x10/0x10
[   25.750691]  ret_from_fork+0x41/0x80
[   25.750737]  ? __pfx_kthread+0x10/0x10
[   25.750770]  ret_from_fork_asm+0x1a/0x30
[   25.750813]  </TASK>
[   25.750829] 
[   25.770001] Allocated by task 269:
[   25.770466]  kasan_save_stack+0x3d/0x60
[   25.770881]  kasan_save_track+0x18/0x40
[   25.771390]  kasan_save_alloc_info+0x3b/0x50
[   25.771743]  __kasan_kmalloc+0xb7/0xc0
[   25.772216]  __kmalloc_cache_noprof+0x184/0x410
[   25.772602]  kasan_bitops_generic+0x93/0x1c0
[   25.773119]  kunit_try_run_case+0x1b3/0x490
[   25.773624]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.774015]  kthread+0x324/0x6d0
[   25.774467]  ret_from_fork+0x41/0x80
[   25.774876]  ret_from_fork_asm+0x1a/0x30
[   25.775354] 
[   25.775673] The buggy address belongs to the object at ffff8881012e6760
[   25.775673]  which belongs to the cache kmalloc-16 of size 16
[   25.776290] The buggy address is located 8 bytes inside of
[   25.776290]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.777495] 
[   25.777662] The buggy address belongs to the physical page:
[   25.777976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.778682] flags: 0x200000000000000(node=0|zone=2)
[   25.779287] page_type: f5(slab)
[   25.779681] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.780450] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.780965] page dumped because: kasan: bad access detected
[   25.781321] 
[   25.781599] Memory state around the buggy address:
[   25.782119]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.782828]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.783253] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.783945]                                                           ^
[   25.784600]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.785005]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.785760] ==================================================================
---
[   25.624224] ==================================================================
[   25.625076] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x102/0xd50
[   25.625767] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.626582] 
[   25.626826] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.626889] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.626907] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.626935] Call Trace:
[   25.626953]  <TASK>
[   25.626971]  dump_stack_lvl+0x73/0xb0
[   25.627014]  print_report+0xd1/0x640
[   25.627046]  ? __virt_addr_valid+0x1db/0x2d0
[   25.627081]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.627117]  kasan_report+0x102/0x140
[   25.627146]  ? kasan_bitops_modify.constprop.0+0x102/0xd50
[   25.627180]  ? kasan_bitops_modify.constprop.0+0x102/0xd50
[   25.627279]  kasan_check_range+0x10c/0x1c0
[   25.627349]  __kasan_check_write+0x18/0x20
[   25.627420]  kasan_bitops_modify.constprop.0+0x102/0xd50
[   25.627765]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.627844]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.627899]  ? trace_hardirqs_on+0x37/0xe0
[   25.627957]  ? kasan_bitops_generic+0x93/0x1c0
[   25.628020]  kasan_bitops_generic+0x117/0x1c0
[   25.628078]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.628171]  ? __pfx_read_tsc+0x10/0x10
[   25.628311]  ? ktime_get_ts64+0x86/0x230
[   25.628390]  kunit_try_run_case+0x1b3/0x490
[   25.628477]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.628538]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.628578]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.628610]  ? __kthread_parkme+0x82/0x160
[   25.628640]  ? preempt_count_sub+0x50/0x80
[   25.628673]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.628705]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.628740]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.628774]  kthread+0x324/0x6d0
[   25.628803]  ? trace_preempt_on+0x20/0xc0
[   25.628833]  ? __pfx_kthread+0x10/0x10
[   25.628862]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.628892]  ? calculate_sigpending+0x7b/0xa0
[   25.628926]  ? __pfx_kthread+0x10/0x10
[   25.628957]  ret_from_fork+0x41/0x80
[   25.628986]  ? __pfx_kthread+0x10/0x10
[   25.629014]  ret_from_fork_asm+0x1a/0x30
[   25.629057]  </TASK>
[   25.629072] 
[   25.646820] Allocated by task 269:
[   25.647248]  kasan_save_stack+0x3d/0x60
[   25.647789]  kasan_save_track+0x18/0x40
[   25.648336]  kasan_save_alloc_info+0x3b/0x50
[   25.648904]  __kasan_kmalloc+0xb7/0xc0
[   25.649314]  __kmalloc_cache_noprof+0x184/0x410
[   25.649875]  kasan_bitops_generic+0x93/0x1c0
[   25.650396]  kunit_try_run_case+0x1b3/0x490
[   25.650770]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.651341]  kthread+0x324/0x6d0
[   25.651686]  ret_from_fork+0x41/0x80
[   25.652093]  ret_from_fork_asm+0x1a/0x30
[   25.652483] 
[   25.652706] The buggy address belongs to the object at ffff8881012e6760
[   25.652706]  which belongs to the cache kmalloc-16 of size 16
[   25.653494] The buggy address is located 8 bytes inside of
[   25.653494]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.654259] 
[   25.654593] The buggy address belongs to the physical page:
[   25.654928] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.655715] flags: 0x200000000000000(node=0|zone=2)
[   25.656148] page_type: f5(slab)
[   25.656605] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.657069] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.657817] page dumped because: kasan: bad access detected
[   25.658452] 
[   25.658634] Memory state around the buggy address:
[   25.658962]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.659686]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.660219] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.660832]                                                           ^
[   25.661530]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.662018]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.662644] ==================================================================
---
[   25.905462] ==================================================================
[   25.906511] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x548/0xd50
[   25.907447] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.907964] 
[   25.908175] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.908281] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.908316] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.908371] Call Trace:
[   25.908422]  <TASK>
[   25.908455]  dump_stack_lvl+0x73/0xb0
[   25.908530]  print_report+0xd1/0x640
[   25.908593]  ? __virt_addr_valid+0x1db/0x2d0
[   25.908656]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.908728]  kasan_report+0x102/0x140
[   25.908785]  ? kasan_bitops_modify.constprop.0+0x548/0xd50
[   25.908851]  ? kasan_bitops_modify.constprop.0+0x548/0xd50
[   25.908926]  kasan_check_range+0x10c/0x1c0
[   25.908986]  __kasan_check_write+0x18/0x20
[   25.909045]  kasan_bitops_modify.constprop.0+0x548/0xd50
[   25.909107]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.909217]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.909330]  ? trace_hardirqs_on+0x37/0xe0
[   25.909393]  ? kasan_bitops_generic+0x93/0x1c0
[   25.909478]  kasan_bitops_generic+0x117/0x1c0
[   25.909533]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.909624]  ? __pfx_read_tsc+0x10/0x10
[   25.909678]  ? ktime_get_ts64+0x86/0x230
[   25.909739]  kunit_try_run_case+0x1b3/0x490
[   25.909804]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.909905]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.909971]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.910037]  ? __kthread_parkme+0x82/0x160
[   25.910095]  ? preempt_count_sub+0x50/0x80
[   25.910157]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.910272]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.910343]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.910437]  kthread+0x324/0x6d0
[   25.910495]  ? trace_preempt_on+0x20/0xc0
[   25.910556]  ? __pfx_kthread+0x10/0x10
[   25.910610]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.910667]  ? calculate_sigpending+0x7b/0xa0
[   25.910733]  ? __pfx_kthread+0x10/0x10
[   25.910789]  ret_from_fork+0x41/0x80
[   25.910863]  ? __pfx_kthread+0x10/0x10
[   25.910945]  ret_from_fork_asm+0x1a/0x30
[   25.911027]  </TASK>
[   25.911059] 
[   25.925647] Allocated by task 269:
[   25.926032]  kasan_save_stack+0x3d/0x60
[   25.926506]  kasan_save_track+0x18/0x40
[   25.926938]  kasan_save_alloc_info+0x3b/0x50
[   25.927427]  __kasan_kmalloc+0xb7/0xc0
[   25.927817]  __kmalloc_cache_noprof+0x184/0x410
[   25.928118]  kasan_bitops_generic+0x93/0x1c0
[   25.928558]  kunit_try_run_case+0x1b3/0x490
[   25.929048]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.929647]  kthread+0x324/0x6d0
[   25.930057]  ret_from_fork+0x41/0x80
[   25.930539]  ret_from_fork_asm+0x1a/0x30
[   25.930914] 
[   25.931119] The buggy address belongs to the object at ffff8881012e6760
[   25.931119]  which belongs to the cache kmalloc-16 of size 16
[   25.932099] The buggy address is located 8 bytes inside of
[   25.932099]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.933045] 
[   25.933347] The buggy address belongs to the physical page:
[   25.933821] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.934493] flags: 0x200000000000000(node=0|zone=2)
[   25.934810] page_type: f5(slab)
[   25.935059] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.935594] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.936345] page dumped because: kasan: bad access detected
[   25.936888] 
[   25.937108] Memory state around the buggy address:
[   25.937623]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.938356]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.939072] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.939648]                                                           ^
[   25.940231]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.940851]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.941611] ==================================================================
---
[   25.827118] ==================================================================
[   25.829117] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x40f/0xd50
[   25.829847] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.830278] 
[   25.831218] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.831341] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.831377] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.831651] Call Trace:
[   25.831693]  <TASK>
[   25.831729]  dump_stack_lvl+0x73/0xb0
[   25.831808]  print_report+0xd1/0x640
[   25.831872]  ? __virt_addr_valid+0x1db/0x2d0
[   25.831938]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.831992]  kasan_report+0x102/0x140
[   25.832026]  ? kasan_bitops_modify.constprop.0+0x40f/0xd50
[   25.832061]  ? kasan_bitops_modify.constprop.0+0x40f/0xd50
[   25.832100]  kasan_check_range+0x10c/0x1c0
[   25.832131]  __kasan_check_write+0x18/0x20
[   25.832160]  kasan_bitops_modify.constprop.0+0x40f/0xd50
[   25.832246]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.832312]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.832345]  ? trace_hardirqs_on+0x37/0xe0
[   25.832374]  ? kasan_bitops_generic+0x93/0x1c0
[   25.832430]  kasan_bitops_generic+0x117/0x1c0
[   25.832463]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.832495]  ? __pfx_read_tsc+0x10/0x10
[   25.832522]  ? ktime_get_ts64+0x86/0x230
[   25.832555]  kunit_try_run_case+0x1b3/0x490
[   25.832592]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.832623]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.832656]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.832689]  ? __kthread_parkme+0x82/0x160
[   25.832718]  ? preempt_count_sub+0x50/0x80
[   25.832750]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.832783]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.832816]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.832849]  kthread+0x324/0x6d0
[   25.832876]  ? trace_preempt_on+0x20/0xc0
[   25.832906]  ? __pfx_kthread+0x10/0x10
[   25.832935]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.832965]  ? calculate_sigpending+0x7b/0xa0
[   25.832999]  ? __pfx_kthread+0x10/0x10
[   25.833029]  ret_from_fork+0x41/0x80
[   25.833057]  ? __pfx_kthread+0x10/0x10
[   25.833087]  ret_from_fork_asm+0x1a/0x30
[   25.833128]  </TASK>
[   25.833143] 
[   25.848679] Allocated by task 269:
[   25.848916]  kasan_save_stack+0x3d/0x60
[   25.849237]  kasan_save_track+0x18/0x40
[   25.849612]  kasan_save_alloc_info+0x3b/0x50
[   25.850225]  __kasan_kmalloc+0xb7/0xc0
[   25.850673]  __kmalloc_cache_noprof+0x184/0x410
[   25.851309]  kasan_bitops_generic+0x93/0x1c0
[   25.851783]  kunit_try_run_case+0x1b3/0x490
[   25.852331]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.852940]  kthread+0x324/0x6d0
[   25.853371]  ret_from_fork+0x41/0x80
[   25.853993]  ret_from_fork_asm+0x1a/0x30
[   25.856073] 
[   25.856726] The buggy address belongs to the object at ffff8881012e6760
[   25.856726]  which belongs to the cache kmalloc-16 of size 16
[   25.858124] The buggy address is located 8 bytes inside of
[   25.858124]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.859439] 
[   25.859760] The buggy address belongs to the physical page:
[   25.860133] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.860760] flags: 0x200000000000000(node=0|zone=2)
[   25.861087] page_type: f5(slab)
[   25.861573] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.862073] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.862519] page dumped because: kasan: bad access detected
[   25.862841] 
[   25.863051] Memory state around the buggy address:
[   25.863614]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.864383]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.865077] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.865564]                                                           ^
[   25.866230]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.866666]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.867388] ==================================================================
---
[   25.665053] ==================================================================
[   25.666623] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x19d/0xd50
[   25.667176] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.667917] 
[   25.668167] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.668310] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.668345] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.668398] Call Trace:
[   25.668454]  <TASK>
[   25.668485]  dump_stack_lvl+0x73/0xb0
[   25.668604]  print_report+0xd1/0x640
[   25.668670]  ? __virt_addr_valid+0x1db/0x2d0
[   25.668735]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.668789]  kasan_report+0x102/0x140
[   25.668823]  ? kasan_bitops_modify.constprop.0+0x19d/0xd50
[   25.668855]  ? kasan_bitops_modify.constprop.0+0x19d/0xd50
[   25.668894]  kasan_check_range+0x10c/0x1c0
[   25.668939]  __kasan_check_write+0x18/0x20
[   25.668987]  kasan_bitops_modify.constprop.0+0x19d/0xd50
[   25.669048]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.669145]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.669256]  ? trace_hardirqs_on+0x37/0xe0
[   25.669315]  ? kasan_bitops_generic+0x93/0x1c0
[   25.669384]  kasan_bitops_generic+0x117/0x1c0
[   25.669460]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.669556]  ? __pfx_read_tsc+0x10/0x10
[   25.669619]  ? ktime_get_ts64+0x86/0x230
[   25.669681]  kunit_try_run_case+0x1b3/0x490
[   25.669747]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.669801]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.669834]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.669868]  ? __kthread_parkme+0x82/0x160
[   25.669898]  ? preempt_count_sub+0x50/0x80
[   25.669931]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.669963]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.669996]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.670030]  kthread+0x324/0x6d0
[   25.670057]  ? trace_preempt_on+0x20/0xc0
[   25.670085]  ? __pfx_kthread+0x10/0x10
[   25.670115]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.670146]  ? calculate_sigpending+0x7b/0xa0
[   25.670179]  ? __pfx_kthread+0x10/0x10
[   25.670257]  ret_from_fork+0x41/0x80
[   25.670288]  ? __pfx_kthread+0x10/0x10
[   25.670319]  ret_from_fork_asm+0x1a/0x30
[   25.670372]  </TASK>
[   25.670387] 
[   25.684490] Allocated by task 269:
[   25.684861]  kasan_save_stack+0x3d/0x60
[   25.685314]  kasan_save_track+0x18/0x40
[   25.685776]  kasan_save_alloc_info+0x3b/0x50
[   25.686260]  __kasan_kmalloc+0xb7/0xc0
[   25.686713]  __kmalloc_cache_noprof+0x184/0x410
[   25.687222]  kasan_bitops_generic+0x93/0x1c0
[   25.687691]  kunit_try_run_case+0x1b3/0x490
[   25.688164]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.688769]  kthread+0x324/0x6d0
[   25.689122]  ret_from_fork+0x41/0x80
[   25.689552]  ret_from_fork_asm+0x1a/0x30
[   25.689840] 
[   25.689994] The buggy address belongs to the object at ffff8881012e6760
[   25.689994]  which belongs to the cache kmalloc-16 of size 16
[   25.690922] The buggy address is located 8 bytes inside of
[   25.690922]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.692106] 
[   25.692432] The buggy address belongs to the physical page:
[   25.692861] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.693381] flags: 0x200000000000000(node=0|zone=2)
[   25.693920] page_type: f5(slab)
[   25.694302] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.694839] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.695247] page dumped because: kasan: bad access detected
[   25.695703] 
[   25.696015] Memory state around the buggy address:
[   25.696657]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.697336]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.698035] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.698978]                                                           ^
[   25.699353]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.699766]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.700152] ==================================================================
---
[   25.787006] ==================================================================
[   25.788069] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x374/0xd50
[   25.789047] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.789870] 
[   25.790075] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.790213] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.790252] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.790304] Call Trace:
[   25.790336]  <TASK>
[   25.790380]  dump_stack_lvl+0x73/0xb0
[   25.790478]  print_report+0xd1/0x640
[   25.790543]  ? __virt_addr_valid+0x1db/0x2d0
[   25.790611]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.790680]  kasan_report+0x102/0x140
[   25.790739]  ? kasan_bitops_modify.constprop.0+0x374/0xd50
[   25.790806]  ? kasan_bitops_modify.constprop.0+0x374/0xd50
[   25.790881]  kasan_check_range+0x10c/0x1c0
[   25.790943]  __kasan_check_write+0x18/0x20
[   25.791002]  kasan_bitops_modify.constprop.0+0x374/0xd50
[   25.791065]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.791155]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.791212]  ? trace_hardirqs_on+0x37/0xe0
[   25.791269]  ? kasan_bitops_generic+0x93/0x1c0
[   25.791364]  kasan_bitops_generic+0x117/0x1c0
[   25.791455]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.791520]  ? __pfx_read_tsc+0x10/0x10
[   25.791575]  ? ktime_get_ts64+0x86/0x230
[   25.791634]  kunit_try_run_case+0x1b3/0x490
[   25.791703]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.791765]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.791828]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.791891]  ? __kthread_parkme+0x82/0x160
[   25.791949]  ? preempt_count_sub+0x50/0x80
[   25.792013]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.792079]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.792145]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.792210]  kthread+0x324/0x6d0
[   25.792267]  ? trace_preempt_on+0x20/0xc0
[   25.792326]  ? __pfx_kthread+0x10/0x10
[   25.792384]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.792461]  ? calculate_sigpending+0x7b/0xa0
[   25.792529]  ? __pfx_kthread+0x10/0x10
[   25.792590]  ret_from_fork+0x41/0x80
[   25.792640]  ? __pfx_kthread+0x10/0x10
[   25.792694]  ret_from_fork_asm+0x1a/0x30
[   25.792768]  </TASK>
[   25.792829] 
[   25.806871] Allocated by task 269:
[   25.807115]  kasan_save_stack+0x3d/0x60
[   25.807394]  kasan_save_track+0x18/0x40
[   25.807885]  kasan_save_alloc_info+0x3b/0x50
[   25.808461]  __kasan_kmalloc+0xb7/0xc0
[   25.808895]  __kmalloc_cache_noprof+0x184/0x410
[   25.809466]  kasan_bitops_generic+0x93/0x1c0
[   25.809917]  kunit_try_run_case+0x1b3/0x490
[   25.810432]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.810811]  kthread+0x324/0x6d0
[   25.811070]  ret_from_fork+0x41/0x80
[   25.811430]  ret_from_fork_asm+0x1a/0x30
[   25.811955] 
[   25.812207] The buggy address belongs to the object at ffff8881012e6760
[   25.812207]  which belongs to the cache kmalloc-16 of size 16
[   25.813342] The buggy address is located 8 bytes inside of
[   25.813342]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.813975] 
[   25.814139] The buggy address belongs to the physical page:
[   25.814768] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.815842] flags: 0x200000000000000(node=0|zone=2)
[   25.816387] page_type: f5(slab)
[   25.816692] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.817371] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.819637] page dumped because: kasan: bad access detected
[   25.820206] 
[   25.821388] Memory state around the buggy address:
[   25.821867]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.822544]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.823383] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.824205]                                                           ^
[   25.824678]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.825388]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.825826] ==================================================================
---
[   25.702135] ==================================================================
[   25.702934] BUG: KASAN: slab-out-of-bounds in kasan_bitops_modify.constprop.0+0x23b/0xd50
[   25.703970] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.704891] 
[   25.705127] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.705259] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.705301] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.705355] Call Trace:
[   25.705391]  <TASK>
[   25.705441]  dump_stack_lvl+0x73/0xb0
[   25.705522]  print_report+0xd1/0x640
[   25.705584]  ? __virt_addr_valid+0x1db/0x2d0
[   25.705655]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.705725]  kasan_report+0x102/0x140
[   25.705783]  ? kasan_bitops_modify.constprop.0+0x23b/0xd50
[   25.705850]  ? kasan_bitops_modify.constprop.0+0x23b/0xd50
[   25.705928]  kasan_check_range+0x10c/0x1c0
[   25.705985]  __kasan_check_write+0x18/0x20
[   25.706045]  kasan_bitops_modify.constprop.0+0x23b/0xd50
[   25.706110]  ? __pfx_kasan_bitops_modify.constprop.0+0x10/0x10
[   25.706172]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.706228]  ? trace_hardirqs_on+0x37/0xe0
[   25.706285]  ? kasan_bitops_generic+0x93/0x1c0
[   25.706356]  kasan_bitops_generic+0x117/0x1c0
[   25.706433]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.706496]  ? __pfx_read_tsc+0x10/0x10
[   25.706554]  ? ktime_get_ts64+0x86/0x230
[   25.706617]  kunit_try_run_case+0x1b3/0x490
[   25.706689]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.706749]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.706813]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.706879]  ? __kthread_parkme+0x82/0x160
[   25.706937]  ? preempt_count_sub+0x50/0x80
[   25.707004]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.707104]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.707177]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.707244]  kthread+0x324/0x6d0
[   25.707301]  ? trace_preempt_on+0x20/0xc0
[   25.707352]  ? __pfx_kthread+0x10/0x10
[   25.707419]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.707477]  ? calculate_sigpending+0x7b/0xa0
[   25.707583]  ? __pfx_kthread+0x10/0x10
[   25.707643]  ret_from_fork+0x41/0x80
[   25.707703]  ? __pfx_kthread+0x10/0x10
[   25.707762]  ret_from_fork_asm+0x1a/0x30
[   25.707847]  </TASK>
[   25.707876] 
[   25.722224] Allocated by task 269:
[   25.722729]  kasan_save_stack+0x3d/0x60
[   25.723246]  kasan_save_track+0x18/0x40
[   25.723682]  kasan_save_alloc_info+0x3b/0x50
[   25.724170]  __kasan_kmalloc+0xb7/0xc0
[   25.724748]  __kmalloc_cache_noprof+0x184/0x410
[   25.726496]  kasan_bitops_generic+0x93/0x1c0
[   25.726992]  kunit_try_run_case+0x1b3/0x490
[   25.727717]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.728182]  kthread+0x324/0x6d0
[   25.730653]  ret_from_fork+0x41/0x80
[   25.731498]  ret_from_fork_asm+0x1a/0x30
[   25.731811] 
[   25.731953] The buggy address belongs to the object at ffff8881012e6760
[   25.731953]  which belongs to the cache kmalloc-16 of size 16
[   25.734023] The buggy address is located 8 bytes inside of
[   25.734023]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.734828] 
[   25.735462] The buggy address belongs to the physical page:
[   25.736017] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.736899] flags: 0x200000000000000(node=0|zone=2)
[   25.737604] page_type: f5(slab)
[   25.737910] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.738655] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.739173] page dumped because: kasan: bad access detected
[   25.739779] 
[   25.739961] Memory state around the buggy address:
[   25.740282]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.740924]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.741570] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.742451]                                                           ^
[   25.743021]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.743718]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.744311] ==================================================================

test log snippet boot parser 80


[   26.136169] ==================================================================
[   26.137129] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x41b/0xd90
[   26.138926] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   26.139642] 
[   26.139888] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.139996] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.140033] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.140091] Call Trace:
[   26.140129]  <TASK>
[   26.140166]  dump_stack_lvl+0x73/0xb0
[   26.140304]  print_report+0xd1/0x640
[   26.140374]  ? __virt_addr_valid+0x1db/0x2d0
[   26.140464]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.140537]  kasan_report+0x102/0x140
[   26.140596]  ? kasan_bitops_test_and_modify.constprop.0+0x41b/0xd90
[   26.140672]  ? kasan_bitops_test_and_modify.constprop.0+0x41b/0xd90
[   26.140754]  kasan_check_range+0x10c/0x1c0
[   26.140816]  __kasan_check_write+0x18/0x20
[   26.140898]  kasan_bitops_test_and_modify.constprop.0+0x41b/0xd90
[   26.140996]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   26.141064]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.141124]  ? trace_hardirqs_on+0x37/0xe0
[   26.141218]  ? kasan_bitops_generic+0x93/0x1c0
[   26.141297]  kasan_bitops_generic+0x122/0x1c0
[   26.141363]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   26.141442]  ? __pfx_read_tsc+0x10/0x10
[   26.141497]  ? ktime_get_ts64+0x86/0x230
[   26.141564]  kunit_try_run_case+0x1b3/0x490
[   26.141633]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.141695]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.141757]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.141822]  ? __kthread_parkme+0x82/0x160
[   26.141880]  ? preempt_count_sub+0x50/0x80
[   26.141944]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.142013]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.142078]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.142144]  kthread+0x324/0x6d0
[   26.142241]  ? trace_preempt_on+0x20/0xc0
[   26.142309]  ? __pfx_kthread+0x10/0x10
[   26.142374]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.142454]  ? calculate_sigpending+0x7b/0xa0
[   26.142522]  ? __pfx_kthread+0x10/0x10
[   26.142570]  ret_from_fork+0x41/0x80
[   26.142622]  ? __pfx_kthread+0x10/0x10
[   26.142675]  ret_from_fork_asm+0x1a/0x30
[   26.142770]  </TASK>
[   26.142800] 
[   26.158392] Allocated by task 269:
[   26.158786]  kasan_save_stack+0x3d/0x60
[   26.159250]  kasan_save_track+0x18/0x40
[   26.159695]  kasan_save_alloc_info+0x3b/0x50
[   26.160148]  __kasan_kmalloc+0xb7/0xc0
[   26.160616]  __kmalloc_cache_noprof+0x184/0x410
[   26.161085]  kasan_bitops_generic+0x93/0x1c0
[   26.161614]  kunit_try_run_case+0x1b3/0x490
[   26.162106]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.162605]  kthread+0x324/0x6d0
[   26.162869]  ret_from_fork+0x41/0x80
[   26.163135]  ret_from_fork_asm+0x1a/0x30
[   26.163500] 
[   26.163724] The buggy address belongs to the object at ffff8881012e6760
[   26.163724]  which belongs to the cache kmalloc-16 of size 16
[   26.164821] The buggy address is located 8 bytes inside of
[   26.164821]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.165917] 
[   26.166220] The buggy address belongs to the physical page:
[   26.166761] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.167544] flags: 0x200000000000000(node=0|zone=2)
[   26.167862] page_type: f5(slab)
[   26.168110] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.168762] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.169512] page dumped because: kasan: bad access detected
[   26.170037] 
[   26.170303] Memory state around the buggy address:
[   26.170816]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.171538]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.172247] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.172794]                                                           ^
[   26.173388]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.174073]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.174651] ==================================================================
---
[   26.019384] ==================================================================
[   26.020234] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x23f/0xd90
[   26.021115] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   26.021936] 
[   26.022238] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.022361] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.022414] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.022476] Call Trace:
[   26.022510]  <TASK>
[   26.022544]  dump_stack_lvl+0x73/0xb0
[   26.022623]  print_report+0xd1/0x640
[   26.022684]  ? __virt_addr_valid+0x1db/0x2d0
[   26.022753]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.022824]  kasan_report+0x102/0x140
[   26.022880]  ? kasan_bitops_test_and_modify.constprop.0+0x23f/0xd90
[   26.022967]  ? kasan_bitops_test_and_modify.constprop.0+0x23f/0xd90
[   26.023078]  kasan_check_range+0x10c/0x1c0
[   26.023138]  __kasan_check_write+0x18/0x20
[   26.023239]  kasan_bitops_test_and_modify.constprop.0+0x23f/0xd90
[   26.023339]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   26.023454]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.023518]  ? trace_hardirqs_on+0x37/0xe0
[   26.023573]  ? kasan_bitops_generic+0x93/0x1c0
[   26.023637]  kasan_bitops_generic+0x122/0x1c0
[   26.023716]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   26.023802]  ? __pfx_read_tsc+0x10/0x10
[   26.023856]  ? ktime_get_ts64+0x86/0x230
[   26.023915]  kunit_try_run_case+0x1b3/0x490
[   26.023956]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.023989]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.024020]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.024064]  ? __kthread_parkme+0x82/0x160
[   26.024119]  ? preempt_count_sub+0x50/0x80
[   26.024173]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.024284]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.024348]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.024501]  kthread+0x324/0x6d0
[   26.024576]  ? trace_preempt_on+0x20/0xc0
[   26.024636]  ? __pfx_kthread+0x10/0x10
[   26.024693]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.024747]  ? calculate_sigpending+0x7b/0xa0
[   26.024860]  ? __pfx_kthread+0x10/0x10
[   26.024922]  ret_from_fork+0x41/0x80
[   26.024977]  ? __pfx_kthread+0x10/0x10
[   26.025030]  ret_from_fork_asm+0x1a/0x30
[   26.025112]  </TASK>
[   26.025140] 
[   26.042908] Allocated by task 269:
[   26.043373]  kasan_save_stack+0x3d/0x60
[   26.043729]  kasan_save_track+0x18/0x40
[   26.044003]  kasan_save_alloc_info+0x3b/0x50
[   26.044614]  __kasan_kmalloc+0xb7/0xc0
[   26.045085]  __kmalloc_cache_noprof+0x184/0x410
[   26.045752]  kasan_bitops_generic+0x93/0x1c0
[   26.046335]  kunit_try_run_case+0x1b3/0x490
[   26.046890]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.047829]  kthread+0x324/0x6d0
[   26.048136]  ret_from_fork+0x41/0x80
[   26.048460]  ret_from_fork_asm+0x1a/0x30
[   26.048883] 
[   26.049103] The buggy address belongs to the object at ffff8881012e6760
[   26.049103]  which belongs to the cache kmalloc-16 of size 16
[   26.050582] The buggy address is located 8 bytes inside of
[   26.050582]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.051937] 
[   26.052099] The buggy address belongs to the physical page:
[   26.052596] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.053150] flags: 0x200000000000000(node=0|zone=2)
[   26.053760] page_type: f5(slab)
[   26.054118] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.054721] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.055348] page dumped because: kasan: bad access detected
[   26.055910] 
[   26.056081] Memory state around the buggy address:
[   26.056634]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.057112]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.057797] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.058371]                                                           ^
[   26.059044]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.059748]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.060256] ==================================================================
---
[   26.097630] ==================================================================
[   26.099157] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x37d/0xd90
[   26.100001] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   26.100943] 
[   26.101187] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.101285] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.101321] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.101373] Call Trace:
[   26.101434]  <TASK>
[   26.101482]  dump_stack_lvl+0x73/0xb0
[   26.101565]  print_report+0xd1/0x640
[   26.101630]  ? __virt_addr_valid+0x1db/0x2d0
[   26.101698]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.101769]  kasan_report+0x102/0x140
[   26.101830]  ? kasan_bitops_test_and_modify.constprop.0+0x37d/0xd90
[   26.101905]  ? kasan_bitops_test_and_modify.constprop.0+0x37d/0xd90
[   26.101989]  kasan_check_range+0x10c/0x1c0
[   26.102049]  __kasan_check_write+0x18/0x20
[   26.102108]  kasan_bitops_test_and_modify.constprop.0+0x37d/0xd90
[   26.102179]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   26.102245]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.102301]  ? trace_hardirqs_on+0x37/0xe0
[   26.102362]  ? kasan_bitops_generic+0x93/0x1c0
[   26.102446]  kasan_bitops_generic+0x122/0x1c0
[   26.102510]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   26.102570]  ? __pfx_read_tsc+0x10/0x10
[   26.102620]  ? ktime_get_ts64+0x86/0x230
[   26.102685]  kunit_try_run_case+0x1b3/0x490
[   26.102752]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.102813]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.102878]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.102942]  ? __kthread_parkme+0x82/0x160
[   26.102995]  ? preempt_count_sub+0x50/0x80
[   26.103060]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.103128]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.103191]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.103256]  kthread+0x324/0x6d0
[   26.103362]  ? trace_preempt_on+0x20/0xc0
[   26.103449]  ? __pfx_kthread+0x10/0x10
[   26.103517]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.103578]  ? calculate_sigpending+0x7b/0xa0
[   26.103643]  ? __pfx_kthread+0x10/0x10
[   26.103703]  ret_from_fork+0x41/0x80
[   26.103759]  ? __pfx_kthread+0x10/0x10
[   26.103819]  ret_from_fork_asm+0x1a/0x30
[   26.103900]  </TASK>
[   26.103928] 
[   26.120248] Allocated by task 269:
[   26.120647]  kasan_save_stack+0x3d/0x60
[   26.120985]  kasan_save_track+0x18/0x40
[   26.121487]  kasan_save_alloc_info+0x3b/0x50
[   26.121879]  __kasan_kmalloc+0xb7/0xc0
[   26.122383]  __kmalloc_cache_noprof+0x184/0x410
[   26.122766]  kasan_bitops_generic+0x93/0x1c0
[   26.123237]  kunit_try_run_case+0x1b3/0x490
[   26.123739]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.124346]  kthread+0x324/0x6d0
[   26.124636]  ret_from_fork+0x41/0x80
[   26.125083]  ret_from_fork_asm+0x1a/0x30
[   26.125530] 
[   26.125752] The buggy address belongs to the object at ffff8881012e6760
[   26.125752]  which belongs to the cache kmalloc-16 of size 16
[   26.126737] The buggy address is located 8 bytes inside of
[   26.126737]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.127362] 
[   26.127533] The buggy address belongs to the physical page:
[   26.127849] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.128467] flags: 0x200000000000000(node=0|zone=2)
[   26.128991] page_type: f5(slab)
[   26.129440] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.129894] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.130363] page dumped because: kasan: bad access detected
[   26.130935] 
[   26.131164] Memory state around the buggy address:
[   26.131656]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.132300]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.133022] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.133675]                                                           ^
[   26.134262]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.134687]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.135074] ==================================================================
---
[   25.979886] ==================================================================
[   25.980732] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x1a1/0xd90
[   25.981552] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.982393] 
[   25.982656] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.982764] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.982800] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.982860] Call Trace:
[   25.982894]  <TASK>
[   25.982925]  dump_stack_lvl+0x73/0xb0
[   25.983001]  print_report+0xd1/0x640
[   25.983066]  ? __virt_addr_valid+0x1db/0x2d0
[   25.983134]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.983249]  kasan_report+0x102/0x140
[   25.983321]  ? kasan_bitops_test_and_modify.constprop.0+0x1a1/0xd90
[   25.983394]  ? kasan_bitops_test_and_modify.constprop.0+0x1a1/0xd90
[   25.983492]  kasan_check_range+0x10c/0x1c0
[   25.983555]  __kasan_check_write+0x18/0x20
[   25.983613]  kasan_bitops_test_and_modify.constprop.0+0x1a1/0xd90
[   25.983683]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   25.983753]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.983813]  ? trace_hardirqs_on+0x37/0xe0
[   25.983869]  ? kasan_bitops_generic+0x93/0x1c0
[   25.983937]  kasan_bitops_generic+0x122/0x1c0
[   25.983999]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.984063]  ? __pfx_read_tsc+0x10/0x10
[   25.984118]  ? ktime_get_ts64+0x86/0x230
[   25.984225]  kunit_try_run_case+0x1b3/0x490
[   25.984312]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.984373]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.984459]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.984522]  ? __kthread_parkme+0x82/0x160
[   25.984577]  ? preempt_count_sub+0x50/0x80
[   25.984643]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.984710]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.984774]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.984840]  kthread+0x324/0x6d0
[   25.984898]  ? trace_preempt_on+0x20/0xc0
[   25.984955]  ? __pfx_kthread+0x10/0x10
[   25.985014]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.985071]  ? calculate_sigpending+0x7b/0xa0
[   25.985139]  ? __pfx_kthread+0x10/0x10
[   25.985239]  ret_from_fork+0x41/0x80
[   25.985312]  ? __pfx_kthread+0x10/0x10
[   25.985371]  ret_from_fork_asm+0x1a/0x30
[   25.985461]  </TASK>
[   25.985488] 
[   26.000095] Allocated by task 269:
[   26.000354]  kasan_save_stack+0x3d/0x60
[   26.000669]  kasan_save_track+0x18/0x40
[   26.000932]  kasan_save_alloc_info+0x3b/0x50
[   26.001203]  __kasan_kmalloc+0xb7/0xc0
[   26.001512]  __kmalloc_cache_noprof+0x184/0x410
[   26.002018]  kasan_bitops_generic+0x93/0x1c0
[   26.002587]  kunit_try_run_case+0x1b3/0x490
[   26.003074]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.003676]  kthread+0x324/0x6d0
[   26.004155]  ret_from_fork+0x41/0x80
[   26.004681]  ret_from_fork_asm+0x1a/0x30
[   26.005260] 
[   26.005540] The buggy address belongs to the object at ffff8881012e6760
[   26.005540]  which belongs to the cache kmalloc-16 of size 16
[   26.006516] The buggy address is located 8 bytes inside of
[   26.006516]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.007898] 
[   26.008218] The buggy address belongs to the physical page:
[   26.008837] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.009683] flags: 0x200000000000000(node=0|zone=2)
[   26.010013] page_type: f5(slab)
[   26.010311] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.011172] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.012008] page dumped because: kasan: bad access detected
[   26.012663] 
[   26.012921] Memory state around the buggy address:
[   26.013531]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.014317]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.015052] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.015745]                                                           ^
[   26.016121]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.016888]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.017686] ==================================================================
---
[   26.061490] ==================================================================
[   26.062053] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x2de/0xd90
[   26.062589] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   26.063292] 
[   26.063586] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.063697] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.063732] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.063792] Call Trace:
[   26.063824]  <TASK>
[   26.063860]  dump_stack_lvl+0x73/0xb0
[   26.063934]  print_report+0xd1/0x640
[   26.063999]  ? __virt_addr_valid+0x1db/0x2d0
[   26.064124]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.064197]  kasan_report+0x102/0x140
[   26.064254]  ? kasan_bitops_test_and_modify.constprop.0+0x2de/0xd90
[   26.064323]  ? kasan_bitops_test_and_modify.constprop.0+0x2de/0xd90
[   26.064420]  kasan_check_range+0x10c/0x1c0
[   26.064486]  __kasan_check_write+0x18/0x20
[   26.064542]  kasan_bitops_test_and_modify.constprop.0+0x2de/0xd90
[   26.064608]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   26.064677]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.064737]  ? trace_hardirqs_on+0x37/0xe0
[   26.064792]  ? kasan_bitops_generic+0x93/0x1c0
[   26.064856]  kasan_bitops_generic+0x122/0x1c0
[   26.064913]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   26.064973]  ? __pfx_read_tsc+0x10/0x10
[   26.065018]  ? ktime_get_ts64+0x86/0x230
[   26.065077]  kunit_try_run_case+0x1b3/0x490
[   26.065142]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.065204]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.065261]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.065319]  ? __kthread_parkme+0x82/0x160
[   26.065368]  ? preempt_count_sub+0x50/0x80
[   26.065448]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.065510]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.065575]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.065637]  kthread+0x324/0x6d0
[   26.065682]  ? trace_preempt_on+0x20/0xc0
[   26.065715]  ? __pfx_kthread+0x10/0x10
[   26.065744]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.065777]  ? calculate_sigpending+0x7b/0xa0
[   26.065816]  ? __pfx_kthread+0x10/0x10
[   26.065867]  ret_from_fork+0x41/0x80
[   26.065916]  ? __pfx_kthread+0x10/0x10
[   26.065971]  ret_from_fork_asm+0x1a/0x30
[   26.066041]  </TASK>
[   26.066071] 
[   26.080821] Allocated by task 269:
[   26.081059]  kasan_save_stack+0x3d/0x60
[   26.081480]  kasan_save_track+0x18/0x40
[   26.081934]  kasan_save_alloc_info+0x3b/0x50
[   26.082479]  __kasan_kmalloc+0xb7/0xc0
[   26.082899]  __kmalloc_cache_noprof+0x184/0x410
[   26.083733]  kasan_bitops_generic+0x93/0x1c0
[   26.084179]  kunit_try_run_case+0x1b3/0x490
[   26.084725]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.085190]  kthread+0x324/0x6d0
[   26.085445]  ret_from_fork+0x41/0x80
[   26.085857]  ret_from_fork_asm+0x1a/0x30
[   26.086294] 
[   26.086568] The buggy address belongs to the object at ffff8881012e6760
[   26.086568]  which belongs to the cache kmalloc-16 of size 16
[   26.087362] The buggy address is located 8 bytes inside of
[   26.087362]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.088254] 
[   26.088437] The buggy address belongs to the physical page:
[   26.088754] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.089182] flags: 0x200000000000000(node=0|zone=2)
[   26.089610] page_type: f5(slab)
[   26.089988] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.091041] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.091834] page dumped because: kasan: bad access detected
[   26.092455] 
[   26.092715] Memory state around the buggy address:
[   26.093236]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.093920]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.094754] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.095310]                                                           ^
[   26.095703]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.096100]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.096523] ==================================================================
---
[   26.176362] ==================================================================
[   26.176958] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x4ba/0xd90
[   26.178028] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   26.178546] 
[   26.178831] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.178935] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.178972] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.179027] Call Trace:
[   26.179060]  <TASK>
[   26.179108]  dump_stack_lvl+0x73/0xb0
[   26.179211]  print_report+0xd1/0x640
[   26.179279]  ? __virt_addr_valid+0x1db/0x2d0
[   26.179340]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.179425]  kasan_report+0x102/0x140
[   26.179487]  ? kasan_bitops_test_and_modify.constprop.0+0x4ba/0xd90
[   26.179573]  ? kasan_bitops_test_and_modify.constprop.0+0x4ba/0xd90
[   26.179679]  kasan_check_range+0x10c/0x1c0
[   26.179737]  __kasan_check_write+0x18/0x20
[   26.179791]  kasan_bitops_test_and_modify.constprop.0+0x4ba/0xd90
[   26.179859]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   26.179942]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.180028]  ? trace_hardirqs_on+0x37/0xe0
[   26.180084]  ? kasan_bitops_generic+0x93/0x1c0
[   26.180146]  kasan_bitops_generic+0x122/0x1c0
[   26.180199]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   26.180281]  ? __pfx_read_tsc+0x10/0x10
[   26.180329]  ? ktime_get_ts64+0x86/0x230
[   26.180429]  kunit_try_run_case+0x1b3/0x490
[   26.180505]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.180565]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.180623]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.180683]  ? __kthread_parkme+0x82/0x160
[   26.180732]  ? preempt_count_sub+0x50/0x80
[   26.180837]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.180908]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.180966]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.181028]  kthread+0x324/0x6d0
[   26.181080]  ? trace_preempt_on+0x20/0xc0
[   26.181135]  ? __pfx_kthread+0x10/0x10
[   26.181194]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.181293]  ? calculate_sigpending+0x7b/0xa0
[   26.181390]  ? __pfx_kthread+0x10/0x10
[   26.181472]  ret_from_fork+0x41/0x80
[   26.181529]  ? __pfx_kthread+0x10/0x10
[   26.181587]  ret_from_fork_asm+0x1a/0x30
[   26.181664]  </TASK>
[   26.181694] 
[   26.196382] Allocated by task 269:
[   26.196802]  kasan_save_stack+0x3d/0x60
[   26.197143]  kasan_save_track+0x18/0x40
[   26.197627]  kasan_save_alloc_info+0x3b/0x50
[   26.198054]  __kasan_kmalloc+0xb7/0xc0
[   26.198368]  __kmalloc_cache_noprof+0x184/0x410
[   26.198694]  kasan_bitops_generic+0x93/0x1c0
[   26.198988]  kunit_try_run_case+0x1b3/0x490
[   26.199283]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.199821]  kthread+0x324/0x6d0
[   26.200226]  ret_from_fork+0x41/0x80
[   26.200687]  ret_from_fork_asm+0x1a/0x30
[   26.201118] 
[   26.201383] The buggy address belongs to the object at ffff8881012e6760
[   26.201383]  which belongs to the cache kmalloc-16 of size 16
[   26.202555] The buggy address is located 8 bytes inside of
[   26.202555]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.203806] 
[   26.204037] The buggy address belongs to the physical page:
[   26.204797] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.205240] flags: 0x200000000000000(node=0|zone=2)
[   26.205797] page_type: f5(slab)
[   26.206199] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.206773] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.207512] page dumped because: kasan: bad access detected
[   26.207921] 
[   26.208077] Memory state around the buggy address:
[   26.208736]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.209396]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.210105] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.210661]                                                           ^
[   26.211199]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.212074]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.212602] ==================================================================
---
[   25.942822] ==================================================================
[   25.943541] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x102/0xd90
[   25.944388] Write of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   25.944944] 
[   25.945304] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.945427] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.945465] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.945523] Call Trace:
[   25.945556]  <TASK>
[   25.945590]  dump_stack_lvl+0x73/0xb0
[   25.945666]  print_report+0xd1/0x640
[   25.945731]  ? __virt_addr_valid+0x1db/0x2d0
[   25.945797]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.945868]  kasan_report+0x102/0x140
[   25.945929]  ? kasan_bitops_test_and_modify.constprop.0+0x102/0xd90
[   25.946004]  ? kasan_bitops_test_and_modify.constprop.0+0x102/0xd90
[   25.946086]  kasan_check_range+0x10c/0x1c0
[   25.946146]  __kasan_check_write+0x18/0x20
[   25.946276]  kasan_bitops_test_and_modify.constprop.0+0x102/0xd90
[   25.946362]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   25.946449]  ? __kmalloc_cache_noprof+0x184/0x410
[   25.946513]  ? trace_hardirqs_on+0x37/0xe0
[   25.946570]  ? kasan_bitops_generic+0x93/0x1c0
[   25.946637]  kasan_bitops_generic+0x122/0x1c0
[   25.946697]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   25.946757]  ? __pfx_read_tsc+0x10/0x10
[   25.946811]  ? ktime_get_ts64+0x86/0x230
[   25.946876]  kunit_try_run_case+0x1b3/0x490
[   25.946945]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.947005]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.947069]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.947134]  ? __kthread_parkme+0x82/0x160
[   25.947233]  ? preempt_count_sub+0x50/0x80
[   25.947308]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.947374]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.947458]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.947522]  kthread+0x324/0x6d0
[   25.947577]  ? trace_preempt_on+0x20/0xc0
[   25.947628]  ? __pfx_kthread+0x10/0x10
[   25.947681]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.947738]  ? calculate_sigpending+0x7b/0xa0
[   25.947800]  ? __pfx_kthread+0x10/0x10
[   25.947877]  ret_from_fork+0x41/0x80
[   25.947959]  ? __pfx_kthread+0x10/0x10
[   25.948017]  ret_from_fork_asm+0x1a/0x30
[   25.948097]  </TASK>
[   25.948124] 
[   25.962652] Allocated by task 269:
[   25.962914]  kasan_save_stack+0x3d/0x60
[   25.963421]  kasan_save_track+0x18/0x40
[   25.963842]  kasan_save_alloc_info+0x3b/0x50
[   25.964371]  __kasan_kmalloc+0xb7/0xc0
[   25.964838]  __kmalloc_cache_noprof+0x184/0x410
[   25.965387]  kasan_bitops_generic+0x93/0x1c0
[   25.965724]  kunit_try_run_case+0x1b3/0x490
[   25.966233]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.966729]  kthread+0x324/0x6d0
[   25.967052]  ret_from_fork+0x41/0x80
[   25.967521]  ret_from_fork_asm+0x1a/0x30
[   25.967867] 
[   25.968032] The buggy address belongs to the object at ffff8881012e6760
[   25.968032]  which belongs to the cache kmalloc-16 of size 16
[   25.968707] The buggy address is located 8 bytes inside of
[   25.968707]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   25.969352] 
[   25.969597] The buggy address belongs to the physical page:
[   25.970160] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   25.970941] flags: 0x200000000000000(node=0|zone=2)
[   25.971460] page_type: f5(slab)
[   25.971831] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   25.972519] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   25.973188] page dumped because: kasan: bad access detected
[   25.973803] 
[   25.974035] Memory state around the buggy address:
[   25.974581]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   25.975280]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   25.975997] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   25.976604]                                                           ^
[   25.977217]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.977807]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.978689] ==================================================================

test log snippet boot parser 81

[   26.256845] ==================================================================
[   26.257450] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0xd08/0xd90
[   26.259027] Read of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   26.259608] 
[   26.259857] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.259955] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.259990] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.260039] Call Trace:
[   26.260074]  <TASK>
[   26.260107]  dump_stack_lvl+0x73/0xb0
[   26.260179]  print_report+0xd1/0x640
[   26.260243]  ? __virt_addr_valid+0x1db/0x2d0
[   26.260304]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.260374]  kasan_report+0x102/0x140
[   26.260457]  ? kasan_bitops_test_and_modify.constprop.0+0xd08/0xd90
[   26.260524]  ? kasan_bitops_test_and_modify.constprop.0+0xd08/0xd90
[   26.260604]  __asan_report_load8_noabort+0x18/0x20
[   26.260691]  kasan_bitops_test_and_modify.constprop.0+0xd08/0xd90
[   26.260760]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   26.260830]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.260889]  ? trace_hardirqs_on+0x37/0xe0
[   26.260942]  ? kasan_bitops_generic+0x93/0x1c0
[   26.261009]  kasan_bitops_generic+0x122/0x1c0
[   26.261070]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   26.261130]  ? __pfx_read_tsc+0x10/0x10
[   26.261183]  ? ktime_get_ts64+0x86/0x230
[   26.261274]  kunit_try_run_case+0x1b3/0x490
[   26.261376]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.261459]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.261525]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.261590]  ? __kthread_parkme+0x82/0x160
[   26.261647]  ? preempt_count_sub+0x50/0x80
[   26.261709]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.261775]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.261842]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.261905]  kthread+0x324/0x6d0
[   26.261964]  ? trace_preempt_on+0x20/0xc0
[   26.262021]  ? __pfx_kthread+0x10/0x10
[   26.262080]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.262139]  ? calculate_sigpending+0x7b/0xa0
[   26.262223]  ? __pfx_kthread+0x10/0x10
[   26.262289]  ret_from_fork+0x41/0x80
[   26.262346]  ? __pfx_kthread+0x10/0x10
[   26.262421]  ret_from_fork_asm+0x1a/0x30
[   26.262504]  </TASK>
[   26.262532] 
[   26.283250] Allocated by task 269:
[   26.283770]  kasan_save_stack+0x3d/0x60
[   26.284226]  kasan_save_track+0x18/0x40
[   26.284599]  kasan_save_alloc_info+0x3b/0x50
[   26.285051]  __kasan_kmalloc+0xb7/0xc0
[   26.285509]  __kmalloc_cache_noprof+0x184/0x410
[   26.285950]  kasan_bitops_generic+0x93/0x1c0
[   26.286498]  kunit_try_run_case+0x1b3/0x490
[   26.286841]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.287439]  kthread+0x324/0x6d0
[   26.287789]  ret_from_fork+0x41/0x80
[   26.288179]  ret_from_fork_asm+0x1a/0x30
[   26.288738] 
[   26.288984] The buggy address belongs to the object at ffff8881012e6760
[   26.288984]  which belongs to the cache kmalloc-16 of size 16
[   26.290231] The buggy address is located 8 bytes inside of
[   26.290231]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.291462] 
[   26.291771] The buggy address belongs to the physical page:
[   26.292131] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.292796] flags: 0x200000000000000(node=0|zone=2)
[   26.293360] page_type: f5(slab)
[   26.293659] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.294367] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.294992] page dumped because: kasan: bad access detected
[   26.295590] 
[   26.295801] Memory state around the buggy address:
[   26.296330]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.296951]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.297668] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.298322]                                                           ^
[   26.298971]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.299529]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.300102] ==================================================================

test log snippet boot parser 82

[   26.213967] ==================================================================
[   26.215036] BUG: KASAN: slab-out-of-bounds in kasan_bitops_test_and_modify.constprop.0+0x558/0xd90
[   26.215878] Read of size 8 at addr ffff8881012e6768 by task kunit_try_catch/269
[   26.216595] 
[   26.216899] CPU: 0 UID: 0 PID: 269 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   26.217009] Tainted: [B]=BAD_PAGE, [N]=TEST
[   26.217045] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   26.217095] Call Trace:
[   26.217129]  <TASK>
[   26.217160]  dump_stack_lvl+0x73/0xb0
[   26.217233]  print_report+0xd1/0x640
[   26.217309]  ? __virt_addr_valid+0x1db/0x2d0
[   26.217416]  ? kasan_complete_mode_report_info+0x2a/0x200
[   26.217489]  kasan_report+0x102/0x140
[   26.217556]  ? kasan_bitops_test_and_modify.constprop.0+0x558/0xd90
[   26.217630]  ? kasan_bitops_test_and_modify.constprop.0+0x558/0xd90
[   26.217710]  kasan_check_range+0x10c/0x1c0
[   26.217772]  __kasan_check_read+0x15/0x20
[   26.217831]  kasan_bitops_test_and_modify.constprop.0+0x558/0xd90
[   26.217899]  ? __pfx_kasan_bitops_test_and_modify.constprop.0+0x10/0x10
[   26.217969]  ? __kmalloc_cache_noprof+0x184/0x410
[   26.218030]  ? trace_hardirqs_on+0x37/0xe0
[   26.218082]  ? kasan_bitops_generic+0x93/0x1c0
[   26.218185]  kasan_bitops_generic+0x122/0x1c0
[   26.218303]  ? __pfx_kasan_bitops_generic+0x10/0x10
[   26.218395]  ? __pfx_read_tsc+0x10/0x10
[   26.218470]  ? ktime_get_ts64+0x86/0x230
[   26.218537]  kunit_try_run_case+0x1b3/0x490
[   26.218606]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.218668]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   26.218729]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   26.218793]  ? __kthread_parkme+0x82/0x160
[   26.218851]  ? preempt_count_sub+0x50/0x80
[   26.218912]  ? __pfx_kunit_try_run_case+0x10/0x10
[   26.218979]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.219045]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   26.219110]  kthread+0x324/0x6d0
[   26.219164]  ? trace_preempt_on+0x20/0xc0
[   26.219223]  ? __pfx_kthread+0x10/0x10
[   26.219282]  ? _raw_spin_unlock_irq+0x47/0x80
[   26.219339]  ? calculate_sigpending+0x7b/0xa0
[   26.219421]  ? __pfx_kthread+0x10/0x10
[   26.219512]  ret_from_fork+0x41/0x80
[   26.219564]  ? __pfx_kthread+0x10/0x10
[   26.219617]  ret_from_fork_asm+0x1a/0x30
[   26.219697]  </TASK>
[   26.219723] 
[   26.233616] Allocated by task 269:
[   26.234123]  kasan_save_stack+0x3d/0x60
[   26.234605]  kasan_save_track+0x18/0x40
[   26.235029]  kasan_save_alloc_info+0x3b/0x50
[   26.235731]  __kasan_kmalloc+0xb7/0xc0
[   26.236062]  __kmalloc_cache_noprof+0x184/0x410
[   26.236374]  kasan_bitops_generic+0x93/0x1c0
[   26.236817]  kunit_try_run_case+0x1b3/0x490
[   26.237487]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   26.238079]  kthread+0x324/0x6d0
[   26.238569]  ret_from_fork+0x41/0x80
[   26.240448]  ret_from_fork_asm+0x1a/0x30
[   26.241052] 
[   26.241460] The buggy address belongs to the object at ffff8881012e6760
[   26.241460]  which belongs to the cache kmalloc-16 of size 16
[   26.244716] The buggy address is located 8 bytes inside of
[   26.244716]  allocated 9-byte region [ffff8881012e6760, ffff8881012e6769)
[   26.245741] 
[   26.245946] The buggy address belongs to the physical page:
[   26.246559] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   26.247123] flags: 0x200000000000000(node=0|zone=2)
[   26.247546] page_type: f5(slab)
[   26.247894] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   26.248523] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   26.249380] page dumped because: kasan: bad access detected
[   26.249979] 
[   26.250273] Memory state around the buggy address:
[   26.250790]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   26.251612]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   26.252276] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc 00 01 fc fc
[   26.252928]                                                           ^
[   26.253717]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.254481]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   26.255080] ==================================================================

test log snippet boot parser 83

[   21.349867] ==================================================================
[   21.350787] BUG: KASAN: slab-out-of-bounds in kmalloc_big_oob_right+0x318/0x370
[   21.351316] Write of size 1 at addr ffff888102bddf00 by task kunit_try_catch/152
[   21.352672] 
[   21.352865] CPU: 0 UID: 0 PID: 152 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.352973] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.353007] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.353380] Call Trace:
[   21.353444]  <TASK>
[   21.353483]  dump_stack_lvl+0x73/0xb0
[   21.353563]  print_report+0xd1/0x640
[   21.353691]  ? __virt_addr_valid+0x1db/0x2d0
[   21.353770]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.353836]  kasan_report+0x102/0x140
[   21.353882]  ? kmalloc_big_oob_right+0x318/0x370
[   21.353917]  ? kmalloc_big_oob_right+0x318/0x370
[   21.353956]  __asan_report_store1_noabort+0x1b/0x30
[   21.353988]  kmalloc_big_oob_right+0x318/0x370
[   21.354019]  ? __pfx_kmalloc_big_oob_right+0x10/0x10
[   21.354052]  ? __schedule+0xc6f/0x27d0
[   21.354084]  ? __pfx_read_tsc+0x10/0x10
[   21.354112]  ? ktime_get_ts64+0x86/0x230
[   21.354147]  kunit_try_run_case+0x1b3/0x490
[   21.354181]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.354235]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.354268]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.354299]  ? __kthread_parkme+0x82/0x160
[   21.354328]  ? preempt_count_sub+0x50/0x80
[   21.354371]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.354419]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.354453]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.354484]  kthread+0x324/0x6d0
[   21.354512]  ? trace_preempt_on+0x20/0xc0
[   21.354543]  ? __pfx_kthread+0x10/0x10
[   21.354572]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.354643]  ? calculate_sigpending+0x7b/0xa0
[   21.354706]  ? __pfx_kthread+0x10/0x10
[   21.354743]  ret_from_fork+0x41/0x80
[   21.354773]  ? __pfx_kthread+0x10/0x10
[   21.354802]  ret_from_fork_asm+0x1a/0x30
[   21.354846]  </TASK>
[   21.354862] 
[   21.372159] Allocated by task 152:
[   21.372755]  kasan_save_stack+0x3d/0x60
[   21.373250]  kasan_save_track+0x18/0x40
[   21.374054]  kasan_save_alloc_info+0x3b/0x50
[   21.374532]  __kasan_kmalloc+0xb7/0xc0
[   21.375146]  __kmalloc_cache_noprof+0x184/0x410
[   21.375815]  kmalloc_big_oob_right+0xaa/0x370
[   21.376343]  kunit_try_run_case+0x1b3/0x490
[   21.377034]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.377545]  kthread+0x324/0x6d0
[   21.378232]  ret_from_fork+0x41/0x80
[   21.378588]  ret_from_fork_asm+0x1a/0x30
[   21.378967] 
[   21.379184] The buggy address belongs to the object at ffff888102bdc000
[   21.379184]  which belongs to the cache kmalloc-8k of size 8192
[   21.380360] The buggy address is located 0 bytes to the right of
[   21.380360]  allocated 7936-byte region [ffff888102bdc000, ffff888102bddf00)
[   21.382069] 
[   21.382379] The buggy address belongs to the physical page:
[   21.382990] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bd8
[   21.383737] head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.384480] flags: 0x200000000000040(head|node=0|zone=2)
[   21.385108] page_type: f5(slab)
[   21.385414] raw: 0200000000000040 ffff888100042280 dead000000000122 0000000000000000
[   21.386363] raw: 0000000000000000 0000000080020002 00000001f5000000 0000000000000000
[   21.387107] head: 0200000000000040 ffff888100042280 dead000000000122 0000000000000000
[   21.387746] head: 0000000000000000 0000000080020002 00000001f5000000 0000000000000000
[   21.388734] head: 0200000000000003 ffffea00040af601 ffffffffffffffff 0000000000000000
[   21.389393] head: 0000000000000008 0000000000000000 00000000ffffffff 0000000000000000
[   21.390309] page dumped because: kasan: bad access detected
[   21.390945] 
[   21.391167] Memory state around the buggy address:
[   21.391808]  ffff888102bdde00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.392337]  ffff888102bdde80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.393146] >ffff888102bddf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.394165]                    ^
[   21.394541]  ffff888102bddf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.395054]  ffff888102bde000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.396001] ==================================================================

test log snippet boot parser 84

[   21.402241] ==================================================================
[   21.403463] BUG: KASAN: slab-out-of-bounds in kmalloc_large_oob_right+0x2eb/0x340
[   21.404157] Write of size 1 at addr ffff88810232600a by task kunit_try_catch/154
[   21.404689] 
[   21.405011] CPU: 0 UID: 0 PID: 154 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.405120] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.405153] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.405220] Call Trace:
[   21.405253]  <TASK>
[   21.405289]  dump_stack_lvl+0x73/0xb0
[   21.405365]  print_report+0xd1/0x640
[   21.405444]  ? __virt_addr_valid+0x1db/0x2d0
[   21.405513]  ? kasan_addr_to_slab+0x11/0xa0
[   21.405569]  kasan_report+0x102/0x140
[   21.405622]  ? kmalloc_large_oob_right+0x2eb/0x340
[   21.405681]  ? kmalloc_large_oob_right+0x2eb/0x340
[   21.405748]  __asan_report_store1_noabort+0x1b/0x30
[   21.405804]  kmalloc_large_oob_right+0x2eb/0x340
[   21.405867]  ? __pfx_kmalloc_large_oob_right+0x10/0x10
[   21.405935]  ? __pfx_kmalloc_large_oob_right+0x10/0x10
[   21.406025]  kunit_try_run_case+0x1b3/0x490
[   21.406071]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.406102]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.406134]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.406165]  ? __kthread_parkme+0x82/0x160
[   21.406205]  ? preempt_count_sub+0x50/0x80
[   21.406240]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.406272]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.406304]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.406335]  kthread+0x324/0x6d0
[   21.406371]  ? trace_preempt_on+0x20/0xc0
[   21.406417]  ? __pfx_kthread+0x10/0x10
[   21.406448]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.406477]  ? calculate_sigpending+0x7b/0xa0
[   21.406512]  ? __pfx_kthread+0x10/0x10
[   21.406541]  ret_from_fork+0x41/0x80
[   21.406570]  ? __pfx_kthread+0x10/0x10
[   21.406599]  ret_from_fork_asm+0x1a/0x30
[   21.406640]  </TASK>
[   21.406656] 
[   21.421015] The buggy address belongs to the physical page:
[   21.421417] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102324
[   21.422488] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.423281] flags: 0x200000000000040(head|node=0|zone=2)
[   21.424032] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   21.424748] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   21.425341] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   21.426049] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   21.426916] head: 0200000000000002 ffffea000408c901 ffffffffffffffff 0000000000000000
[   21.427550] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   21.428017] page dumped because: kasan: bad access detected
[   21.428751] 
[   21.429044] Memory state around the buggy address:
[   21.430199]  ffff888102325f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.431172]  ffff888102325f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.432134] >ffff888102326000: 00 02 fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   21.433164]                       ^
[   21.433747]  ffff888102326080: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   21.435460]  ffff888102326100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   21.436094] ==================================================================

test log snippet boot parser 85

[   22.745250] ==================================================================
[   22.746483] BUG: KASAN: slab-out-of-bounds in kmalloc_memmove_invalid_size+0x170/0x330
[   22.747745] Read of size 64 at addr ffff888102a5b084 by task kunit_try_catch/190
[   22.748783] 
[   22.749017] CPU: 1 UID: 0 PID: 190 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.749117] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.749138] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.749168] Call Trace:
[   22.749190]  <TASK>
[   22.749464]  dump_stack_lvl+0x73/0xb0
[   22.749572]  print_report+0xd1/0x640
[   22.749618]  ? __virt_addr_valid+0x1db/0x2d0
[   22.749658]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.749735]  kasan_report+0x102/0x140
[   22.749790]  ? kmalloc_memmove_invalid_size+0x170/0x330
[   22.749844]  ? kmalloc_memmove_invalid_size+0x170/0x330
[   22.749886]  kasan_check_range+0x10c/0x1c0
[   22.749916]  __asan_memmove+0x27/0x70
[   22.749945]  kmalloc_memmove_invalid_size+0x170/0x330
[   22.749980]  ? __pfx_kmalloc_memmove_invalid_size+0x10/0x10
[   22.750015]  ? __schedule+0xc6f/0x27d0
[   22.750048]  ? __pfx_read_tsc+0x10/0x10
[   22.750077]  ? ktime_get_ts64+0x86/0x230
[   22.750112]  kunit_try_run_case+0x1b3/0x490
[   22.750146]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.750176]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.750247]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.750278]  ? __kthread_parkme+0x82/0x160
[   22.750308]  ? preempt_count_sub+0x50/0x80
[   22.750341]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.750380]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.750428]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.750460]  kthread+0x324/0x6d0
[   22.750489]  ? trace_preempt_on+0x20/0xc0
[   22.750520]  ? __pfx_kthread+0x10/0x10
[   22.750549]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.750578]  ? calculate_sigpending+0x7b/0xa0
[   22.750663]  ? __pfx_kthread+0x10/0x10
[   22.750715]  ret_from_fork+0x41/0x80
[   22.750747]  ? __pfx_kthread+0x10/0x10
[   22.750776]  ret_from_fork_asm+0x1a/0x30
[   22.750820]  </TASK>
[   22.750836] 
[   22.767186] Allocated by task 190:
[   22.767573]  kasan_save_stack+0x3d/0x60
[   22.768265]  kasan_save_track+0x18/0x40
[   22.768779]  kasan_save_alloc_info+0x3b/0x50
[   22.769516]  __kasan_kmalloc+0xb7/0xc0
[   22.770119]  __kmalloc_cache_noprof+0x184/0x410
[   22.770690]  kmalloc_memmove_invalid_size+0xad/0x330
[   22.771268]  kunit_try_run_case+0x1b3/0x490
[   22.771747]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.772134]  kthread+0x324/0x6d0
[   22.772662]  ret_from_fork+0x41/0x80
[   22.773030]  ret_from_fork_asm+0x1a/0x30
[   22.773390] 
[   22.773626] The buggy address belongs to the object at ffff888102a5b080
[   22.773626]  which belongs to the cache kmalloc-64 of size 64
[   22.774358] The buggy address is located 4 bytes inside of
[   22.774358]  allocated 64-byte region [ffff888102a5b080, ffff888102a5b0c0)
[   22.775814] 
[   22.776054] The buggy address belongs to the physical page:
[   22.776786] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a5b
[   22.778082] flags: 0x200000000000000(node=0|zone=2)
[   22.778968] page_type: f5(slab)
[   22.779499] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   22.780283] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   22.782612] page dumped because: kasan: bad access detected
[   22.783382] 
[   22.783638] Memory state around the buggy address:
[   22.784440]  ffff888102a5af80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.785267]  ffff888102a5b000: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.785881] >ffff888102a5b080: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
[   22.786664]                                            ^
[   22.787274]  ffff888102a5b100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.788173]  ffff888102a5b180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.789126] ==================================================================

test log snippet boot parser 86

[   21.203435] ==================================================================
[   21.204564] BUG: KASAN: slab-out-of-bounds in kmalloc_node_oob_right+0x36b/0x3d0
[   21.205343] Read of size 1 at addr ffff888102c15000 by task kunit_try_catch/148
[   21.206229] 
[   21.206553] CPU: 1 UID: 0 PID: 148 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.206699] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.206735] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.206791] Call Trace:
[   21.206825]  <TASK>
[   21.206861]  dump_stack_lvl+0x73/0xb0
[   21.206941]  print_report+0xd1/0x640
[   21.207000]  ? __virt_addr_valid+0x1db/0x2d0
[   21.207064]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.207127]  kasan_report+0x102/0x140
[   21.207196]  ? kmalloc_node_oob_right+0x36b/0x3d0
[   21.207254]  ? kmalloc_node_oob_right+0x36b/0x3d0
[   21.207372]  __asan_report_load1_noabort+0x18/0x20
[   21.207446]  kmalloc_node_oob_right+0x36b/0x3d0
[   21.207489]  ? __pfx_kmalloc_node_oob_right+0x10/0x10
[   21.207519]  ? __schedule+0xc6f/0x27d0
[   21.207554]  ? __pfx_read_tsc+0x10/0x10
[   21.207632]  ? ktime_get_ts64+0x86/0x230
[   21.207706]  kunit_try_run_case+0x1b3/0x490
[   21.207747]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.207778]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.207810]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.207841]  ? __kthread_parkme+0x82/0x160
[   21.207870]  ? preempt_count_sub+0x50/0x80
[   21.207902]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.207934]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.207966]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.207997]  kthread+0x324/0x6d0
[   21.208024]  ? trace_preempt_on+0x20/0xc0
[   21.208056]  ? __pfx_kthread+0x10/0x10
[   21.208094]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.208123]  ? calculate_sigpending+0x7b/0xa0
[   21.208157]  ? __pfx_kthread+0x10/0x10
[   21.208202]  ret_from_fork+0x41/0x80
[   21.208239]  ? __pfx_kthread+0x10/0x10
[   21.208268]  ret_from_fork_asm+0x1a/0x30
[   21.208311]  </TASK>
[   21.208327] 
[   21.227751] Allocated by task 148:
[   21.228284]  kasan_save_stack+0x3d/0x60
[   21.228761]  kasan_save_track+0x18/0x40
[   21.229178]  kasan_save_alloc_info+0x3b/0x50
[   21.229792]  __kasan_kmalloc+0xb7/0xc0
[   21.230130]  __kmalloc_cache_node_noprof+0x183/0x410
[   21.230855]  kmalloc_node_oob_right+0xac/0x3d0
[   21.231356]  kunit_try_run_case+0x1b3/0x490
[   21.231751]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.232305]  kthread+0x324/0x6d0
[   21.233083]  ret_from_fork+0x41/0x80
[   21.233557]  ret_from_fork_asm+0x1a/0x30
[   21.233845] 
[   21.234003] The buggy address belongs to the object at ffff888102c14000
[   21.234003]  which belongs to the cache kmalloc-4k of size 4096
[   21.235048] The buggy address is located 0 bytes to the right of
[   21.235048]  allocated 4096-byte region [ffff888102c14000, ffff888102c15000)
[   21.236442] 
[   21.236720] The buggy address belongs to the physical page:
[   21.237874] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102c10
[   21.239228] head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.239863] flags: 0x200000000000040(head|node=0|zone=2)
[   21.240793] page_type: f5(slab)
[   21.241344] raw: 0200000000000040 ffff888100042140 dead000000000122 0000000000000000
[   21.241897] raw: 0000000000000000 0000000080040004 00000001f5000000 0000000000000000
[   21.242843] head: 0200000000000040 ffff888100042140 dead000000000122 0000000000000000
[   21.243536] head: 0000000000000000 0000000080040004 00000001f5000000 0000000000000000
[   21.243910] head: 0200000000000003 ffffea00040b0401 ffffffffffffffff 0000000000000000
[   21.244160] head: 0000000000000008 0000000000000000 00000000ffffffff 0000000000000000
[   21.244452] page dumped because: kasan: bad access detected
[   21.245105] 
[   21.245396] Memory state around the buggy address:
[   21.246485]  ffff888102c14f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.247135]  ffff888102c14f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.247791] >ffff888102c15000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.248787]                    ^
[   21.249157]  ffff888102c15080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.250338]  ffff888102c15100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.250966] ==================================================================

test log snippet boot parser 87

[   22.336004] ==================================================================
[   22.336866] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_16+0x454/0x4a0
[   22.337388] Write of size 16 at addr ffff8881012e6700 by task kunit_try_catch/174
[   22.338314] 
[   22.338959] CPU: 0 UID: 0 PID: 174 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.339064] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.339097] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.339137] Call Trace:
[   22.339158]  <TASK>
[   22.339185]  dump_stack_lvl+0x73/0xb0
[   22.339244]  print_report+0xd1/0x640
[   22.339278]  ? __virt_addr_valid+0x1db/0x2d0
[   22.339315]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.339349]  kasan_report+0x102/0x140
[   22.339378]  ? kmalloc_oob_16+0x454/0x4a0
[   22.339434]  ? kmalloc_oob_16+0x454/0x4a0
[   22.339498]  __asan_report_store16_noabort+0x1b/0x30
[   22.339555]  kmalloc_oob_16+0x454/0x4a0
[   22.339680]  ? __pfx_kmalloc_oob_16+0x10/0x10
[   22.339789]  ? irqentry_exit+0x2a/0x60
[   22.339882]  ? sysvec_apic_timer_interrupt+0x50/0x90
[   22.339950]  ? __pfx_read_tsc+0x10/0x10
[   22.340001]  ? ktime_get_ts64+0x86/0x230
[   22.340064]  kunit_try_run_case+0x1b3/0x490
[   22.340178]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.340281]  ? __kthread_parkme+0x8f/0x160
[   22.340461]  ? __kthread_parkme+0x8f/0x160
[   22.340513]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.340548]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.340586]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.340688]  kthread+0x324/0x6d0
[   22.340735]  ? trace_preempt_on+0x20/0xc0
[   22.340769]  ? __pfx_kthread+0x10/0x10
[   22.340798]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.340829]  ? calculate_sigpending+0x7b/0xa0
[   22.340863]  ? __pfx_kthread+0x10/0x10
[   22.340892]  ret_from_fork+0x41/0x80
[   22.340921]  ? __pfx_kthread+0x10/0x10
[   22.340949]  ret_from_fork_asm+0x1a/0x30
[   22.340992]  </TASK>
[   22.341008] 
[   22.359613] Allocated by task 174:
[   22.359996]  kasan_save_stack+0x3d/0x60
[   22.360466]  kasan_save_track+0x18/0x40
[   22.361125]  kasan_save_alloc_info+0x3b/0x50
[   22.361730]  __kasan_kmalloc+0xb7/0xc0
[   22.361992]  __kmalloc_cache_noprof+0x184/0x410
[   22.362299]  kmalloc_oob_16+0xa9/0x4a0
[   22.363063]  kunit_try_run_case+0x1b3/0x490
[   22.363372]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.364094]  kthread+0x324/0x6d0
[   22.364955]  ret_from_fork+0x41/0x80
[   22.365490]  ret_from_fork_asm+0x1a/0x30
[   22.366474] 
[   22.366955] The buggy address belongs to the object at ffff8881012e6700
[   22.366955]  which belongs to the cache kmalloc-16 of size 16
[   22.368253] The buggy address is located 0 bytes inside of
[   22.368253]  allocated 13-byte region [ffff8881012e6700, ffff8881012e670d)
[   22.369277] 
[   22.369715] The buggy address belongs to the physical page:
[   22.370243] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   22.371726] flags: 0x200000000000000(node=0|zone=2)
[   22.372195] page_type: f5(slab)
[   22.373051] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   22.373935] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   22.374486] page dumped because: kasan: bad access detected
[   22.375257] 
[   22.375555] Memory state around the buggy address:
[   22.376482]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   22.377347]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   22.378040] >ffff8881012e6700: 00 05 fc fc 00 00 fc fc fc fc fc fc fc fc fc fc
[   22.378796]                       ^
[   22.379432]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.380218]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.381010] ==================================================================

test log snippet boot parser 88

[   22.445238] ==================================================================
[   22.445976] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_in_memset+0x160/0x320
[   22.446651] Write of size 128 at addr ffff888101ad5900 by task kunit_try_catch/178
[   22.448519] 
[   22.448790] CPU: 0 UID: 0 PID: 178 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.448896] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.448928] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.448978] Call Trace:
[   22.449007]  <TASK>
[   22.449038]  dump_stack_lvl+0x73/0xb0
[   22.449113]  print_report+0xd1/0x640
[   22.449167]  ? __virt_addr_valid+0x1db/0x2d0
[   22.449295]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.449361]  kasan_report+0x102/0x140
[   22.449436]  ? kmalloc_oob_in_memset+0x160/0x320
[   22.449500]  ? kmalloc_oob_in_memset+0x160/0x320
[   22.449570]  kasan_check_range+0x10c/0x1c0
[   22.449642]  __asan_memset+0x27/0x50
[   22.449695]  kmalloc_oob_in_memset+0x160/0x320
[   22.449779]  ? __pfx_kmalloc_oob_in_memset+0x10/0x10
[   22.449863]  ? __schedule+0xc6f/0x27d0
[   22.449902]  ? __pfx_read_tsc+0x10/0x10
[   22.449932]  ? ktime_get_ts64+0x86/0x230
[   22.449968]  kunit_try_run_case+0x1b3/0x490
[   22.450003]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.450034]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.450065]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.450096]  ? __kthread_parkme+0x82/0x160
[   22.450125]  ? preempt_count_sub+0x50/0x80
[   22.450158]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.450218]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.450255]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.450288]  kthread+0x324/0x6d0
[   22.450316]  ? trace_preempt_on+0x20/0xc0
[   22.450354]  ? __pfx_kthread+0x10/0x10
[   22.450384]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.450431]  ? calculate_sigpending+0x7b/0xa0
[   22.450466]  ? __pfx_kthread+0x10/0x10
[   22.450495]  ret_from_fork+0x41/0x80
[   22.450523]  ? __pfx_kthread+0x10/0x10
[   22.450552]  ret_from_fork_asm+0x1a/0x30
[   22.450606]  </TASK>
[   22.450642] 
[   22.469963] Allocated by task 178:
[   22.470430]  kasan_save_stack+0x3d/0x60
[   22.470932]  kasan_save_track+0x18/0x40
[   22.471497]  kasan_save_alloc_info+0x3b/0x50
[   22.472030]  __kasan_kmalloc+0xb7/0xc0
[   22.472734]  __kmalloc_cache_noprof+0x184/0x410
[   22.473281]  kmalloc_oob_in_memset+0xad/0x320
[   22.473936]  kunit_try_run_case+0x1b3/0x490
[   22.474490]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.475142]  kthread+0x324/0x6d0
[   22.475751]  ret_from_fork+0x41/0x80
[   22.476218]  ret_from_fork_asm+0x1a/0x30
[   22.476805] 
[   22.477070] The buggy address belongs to the object at ffff888101ad5900
[   22.477070]  which belongs to the cache kmalloc-128 of size 128
[   22.478412] The buggy address is located 0 bytes inside of
[   22.478412]  allocated 120-byte region [ffff888101ad5900, ffff888101ad5978)
[   22.479932] 
[   22.480116] The buggy address belongs to the physical page:
[   22.481424] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ad5
[   22.482223] flags: 0x200000000000000(node=0|zone=2)
[   22.482755] page_type: f5(slab)
[   22.483437] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   22.483921] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.485145] page dumped because: kasan: bad access detected
[   22.485741] 
[   22.485959] Memory state around the buggy address:
[   22.486536]  ffff888101ad5800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.487471]  ffff888101ad5880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.488427] >ffff888101ad5900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   22.489338]                                                                 ^
[   22.490566]  ffff888101ad5980: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.491086]  ffff888101ad5a00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.491733] ==================================================================

test log snippet boot parser 89

[   21.152349] ==================================================================
[   21.153544] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_left+0x363/0x3c0
[   21.154598] Read of size 1 at addr ffff8881012e66df by task kunit_try_catch/146
[   21.155344] 
[   21.155553] CPU: 0 UID: 0 PID: 146 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.155649] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.155837] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.155915] Call Trace:
[   21.155950]  <TASK>
[   21.155991]  dump_stack_lvl+0x73/0xb0
[   21.156076]  print_report+0xd1/0x640
[   21.156138]  ? __virt_addr_valid+0x1db/0x2d0
[   21.156220]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.156288]  kasan_report+0x102/0x140
[   21.156346]  ? kmalloc_oob_left+0x363/0x3c0
[   21.156425]  ? kmalloc_oob_left+0x363/0x3c0
[   21.156500]  __asan_report_load1_noabort+0x18/0x20
[   21.156560]  kmalloc_oob_left+0x363/0x3c0
[   21.156621]  ? __pfx_kmalloc_oob_left+0x10/0x10
[   21.156677]  ? __schedule+0xc6f/0x27d0
[   21.156785]  ? __pfx_read_tsc+0x10/0x10
[   21.156841]  ? ktime_get_ts64+0x86/0x230
[   21.156883]  kunit_try_run_case+0x1b3/0x490
[   21.156920]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.156951]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.156982]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.157013]  ? __kthread_parkme+0x82/0x160
[   21.157042]  ? preempt_count_sub+0x50/0x80
[   21.157075]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.157107]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.157138]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.157169]  kthread+0x324/0x6d0
[   21.157234]  ? trace_preempt_on+0x20/0xc0
[   21.157269]  ? __pfx_kthread+0x10/0x10
[   21.157297]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.157327]  ? calculate_sigpending+0x7b/0xa0
[   21.157360]  ? __pfx_kthread+0x10/0x10
[   21.157389]  ret_from_fork+0x41/0x80
[   21.157435]  ? __pfx_kthread+0x10/0x10
[   21.157464]  ret_from_fork_asm+0x1a/0x30
[   21.157507]  </TASK>
[   21.157524] 
[   21.176905] Allocated by task 10:
[   21.177276]  kasan_save_stack+0x3d/0x60
[   21.177912]  kasan_save_track+0x18/0x40
[   21.178390]  kasan_save_alloc_info+0x3b/0x50
[   21.178779]  __kasan_kmalloc+0xb7/0xc0
[   21.179206]  __kmalloc_node_track_caller_noprof+0x1c6/0x500
[   21.179677]  kvasprintf+0xc6/0x150
[   21.180057]  __kthread_create_on_node+0x18c/0x3a0
[   21.180803]  kthread_create_on_node+0xa8/0xe0
[   21.181262]  create_worker+0x3c8/0x7a0
[   21.181743]  worker_thread+0x97d/0x1300
[   21.182179]  kthread+0x324/0x6d0
[   21.182729]  ret_from_fork+0x41/0x80
[   21.183130]  ret_from_fork_asm+0x1a/0x30
[   21.183551] 
[   21.183921] The buggy address belongs to the object at ffff8881012e66c0
[   21.183921]  which belongs to the cache kmalloc-16 of size 16
[   21.185121] The buggy address is located 19 bytes to the right of
[   21.185121]  allocated 12-byte region [ffff8881012e66c0, ffff8881012e66cc)
[   21.186324] 
[   21.186725] The buggy address belongs to the physical page:
[   21.187233] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   21.188019] flags: 0x200000000000000(node=0|zone=2)
[   21.188602] page_type: f5(slab)
[   21.188979] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   21.189508] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   21.190097] page dumped because: kasan: bad access detected
[   21.190902] 
[   21.191061] Memory state around the buggy address:
[   21.191562]  ffff8881012e6580: 00 04 fc fc 00 04 fc fc fa fb fc fc fa fb fc fc
[   21.192229]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   21.192908] >ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc 00 07 fc fc
[   21.193574]                                                     ^
[   21.194360]  ffff8881012e6700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.194978]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.195803] ==================================================================

test log snippet boot parser 90

[   22.642830] ==================================================================
[   22.644198] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_16+0x167/0x330
[   22.644972] Write of size 16 at addr ffff888102a59269 by task kunit_try_catch/186
[   22.645966] 
[   22.646158] CPU: 1 UID: 0 PID: 186 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.646266] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.646299] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.646366] Call Trace:
[   22.646415]  <TASK>
[   22.646451]  dump_stack_lvl+0x73/0xb0
[   22.646531]  print_report+0xd1/0x640
[   22.646590]  ? __virt_addr_valid+0x1db/0x2d0
[   22.646657]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.646717]  kasan_report+0x102/0x140
[   22.646809]  ? kmalloc_oob_memset_16+0x167/0x330
[   22.646871]  ? kmalloc_oob_memset_16+0x167/0x330
[   22.646942]  kasan_check_range+0x10c/0x1c0
[   22.647000]  __asan_memset+0x27/0x50
[   22.647037]  kmalloc_oob_memset_16+0x167/0x330
[   22.647070]  ? __pfx_kmalloc_oob_memset_16+0x10/0x10
[   22.647103]  ? __schedule+0xc6f/0x27d0
[   22.647138]  ? __pfx_read_tsc+0x10/0x10
[   22.647166]  ? ktime_get_ts64+0x86/0x230
[   22.647213]  kunit_try_run_case+0x1b3/0x490
[   22.647251]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.647281]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.647312]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.647343]  ? __kthread_parkme+0x82/0x160
[   22.647372]  ? preempt_count_sub+0x50/0x80
[   22.647420]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.647455]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.647486]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.647517]  kthread+0x324/0x6d0
[   22.647545]  ? trace_preempt_on+0x20/0xc0
[   22.647578]  ? __pfx_kthread+0x10/0x10
[   22.647649]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.647699]  ? calculate_sigpending+0x7b/0xa0
[   22.647736]  ? __pfx_kthread+0x10/0x10
[   22.647766]  ret_from_fork+0x41/0x80
[   22.647796]  ? __pfx_kthread+0x10/0x10
[   22.647824]  ret_from_fork_asm+0x1a/0x30
[   22.647867]  </TASK>
[   22.647883] 
[   22.665022] Allocated by task 186:
[   22.665170]  kasan_save_stack+0x3d/0x60
[   22.665331]  kasan_save_track+0x18/0x40
[   22.666143]  kasan_save_alloc_info+0x3b/0x50
[   22.667066]  __kasan_kmalloc+0xb7/0xc0
[   22.668023]  __kmalloc_cache_noprof+0x184/0x410
[   22.668716]  kmalloc_oob_memset_16+0xad/0x330
[   22.669244]  kunit_try_run_case+0x1b3/0x490
[   22.669838]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.670439]  kthread+0x324/0x6d0
[   22.670985]  ret_from_fork+0x41/0x80
[   22.671453]  ret_from_fork_asm+0x1a/0x30
[   22.672042] 
[   22.672264] The buggy address belongs to the object at ffff888102a59200
[   22.672264]  which belongs to the cache kmalloc-128 of size 128
[   22.672952] The buggy address is located 105 bytes inside of
[   22.672952]  allocated 120-byte region [ffff888102a59200, ffff888102a59278)
[   22.674898] 
[   22.675267] The buggy address belongs to the physical page:
[   22.676267] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   22.677097] flags: 0x200000000000000(node=0|zone=2)
[   22.677890] page_type: f5(slab)
[   22.678182] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   22.679273] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.680336] page dumped because: kasan: bad access detected
[   22.680925] 
[   22.681084] Memory state around the buggy address:
[   22.682171]  ffff888102a59100: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.683044]  ffff888102a59180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.684146] >ffff888102a59200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   22.684605]                                                                 ^
[   22.685851]  ffff888102a59280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.686500]  ffff888102a59300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.687315] ==================================================================

test log snippet boot parser 91

[   22.498338] ==================================================================
[   22.499108] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_2+0x167/0x330
[   22.499772] Write of size 2 at addr ffff888101ad5a77 by task kunit_try_catch/180
[   22.500453] 
[   22.501492] CPU: 0 UID: 0 PID: 180 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.501616] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.501652] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.501709] Call Trace:
[   22.501742]  <TASK>
[   22.501780]  dump_stack_lvl+0x73/0xb0
[   22.501899]  print_report+0xd1/0x640
[   22.501944]  ? __virt_addr_valid+0x1db/0x2d0
[   22.501982]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.502016]  kasan_report+0x102/0x140
[   22.502045]  ? kmalloc_oob_memset_2+0x167/0x330
[   22.502078]  ? kmalloc_oob_memset_2+0x167/0x330
[   22.502115]  kasan_check_range+0x10c/0x1c0
[   22.502144]  __asan_memset+0x27/0x50
[   22.502172]  kmalloc_oob_memset_2+0x167/0x330
[   22.502229]  ? __pfx_kmalloc_oob_memset_2+0x10/0x10
[   22.502262]  ? __schedule+0xc6f/0x27d0
[   22.502295]  ? __pfx_read_tsc+0x10/0x10
[   22.502322]  ? ktime_get_ts64+0x86/0x230
[   22.502364]  kunit_try_run_case+0x1b3/0x490
[   22.502415]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.502447]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.502479]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.502509]  ? __kthread_parkme+0x82/0x160
[   22.502538]  ? preempt_count_sub+0x50/0x80
[   22.502571]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.502623]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.502677]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.502729]  kthread+0x324/0x6d0
[   22.502777]  ? trace_preempt_on+0x20/0xc0
[   22.502829]  ? __pfx_kthread+0x10/0x10
[   22.502876]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.502924]  ? calculate_sigpending+0x7b/0xa0
[   22.502961]  ? __pfx_kthread+0x10/0x10
[   22.502991]  ret_from_fork+0x41/0x80
[   22.503021]  ? __pfx_kthread+0x10/0x10
[   22.503049]  ret_from_fork_asm+0x1a/0x30
[   22.503092]  </TASK>
[   22.503108] 
[   22.524510] Allocated by task 180:
[   22.525351]  kasan_save_stack+0x3d/0x60
[   22.525908]  kasan_save_track+0x18/0x40
[   22.526371]  kasan_save_alloc_info+0x3b/0x50
[   22.526881]  __kasan_kmalloc+0xb7/0xc0
[   22.527764]  __kmalloc_cache_noprof+0x184/0x410
[   22.528207]  kmalloc_oob_memset_2+0xad/0x330
[   22.528609]  kunit_try_run_case+0x1b3/0x490
[   22.529025]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.530161]  kthread+0x324/0x6d0
[   22.530483]  ret_from_fork+0x41/0x80
[   22.531065]  ret_from_fork_asm+0x1a/0x30
[   22.531797] 
[   22.531964] The buggy address belongs to the object at ffff888101ad5a00
[   22.531964]  which belongs to the cache kmalloc-128 of size 128
[   22.533876] The buggy address is located 119 bytes inside of
[   22.533876]  allocated 120-byte region [ffff888101ad5a00, ffff888101ad5a78)
[   22.534962] 
[   22.535510] The buggy address belongs to the physical page:
[   22.536207] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ad5
[   22.536981] flags: 0x200000000000000(node=0|zone=2)
[   22.537478] page_type: f5(slab)
[   22.537848] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   22.538839] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.539683] page dumped because: kasan: bad access detected
[   22.540370] 
[   22.540757] Memory state around the buggy address:
[   22.541366]  ffff888101ad5900: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.542280]  ffff888101ad5980: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.542776] >ffff888101ad5a00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   22.543739]                                                                 ^
[   22.543980]  ffff888101ad5a80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.544857]  ffff888101ad5b00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.545657] ==================================================================

test log snippet boot parser 92

[   22.552253] ==================================================================
[   22.553260] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_4+0x167/0x330
[   22.553748] Write of size 4 at addr ffff888102a59075 by task kunit_try_catch/182
[   22.554630] 
[   22.555586] CPU: 1 UID: 0 PID: 182 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.555721] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.555755] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.555806] Call Trace:
[   22.556068]  <TASK>
[   22.556119]  dump_stack_lvl+0x73/0xb0
[   22.556179]  print_report+0xd1/0x640
[   22.556248]  ? __virt_addr_valid+0x1db/0x2d0
[   22.556285]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.556320]  kasan_report+0x102/0x140
[   22.556349]  ? kmalloc_oob_memset_4+0x167/0x330
[   22.556382]  ? kmalloc_oob_memset_4+0x167/0x330
[   22.556437]  kasan_check_range+0x10c/0x1c0
[   22.556466]  __asan_memset+0x27/0x50
[   22.556495]  kmalloc_oob_memset_4+0x167/0x330
[   22.556528]  ? __pfx_kmalloc_oob_memset_4+0x10/0x10
[   22.556561]  ? __schedule+0xc6f/0x27d0
[   22.556629]  ? __pfx_read_tsc+0x10/0x10
[   22.556702]  ? ktime_get_ts64+0x86/0x230
[   22.556764]  kunit_try_run_case+0x1b3/0x490
[   22.556802]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.556833]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.556864]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.556896]  ? __kthread_parkme+0x82/0x160
[   22.556924]  ? preempt_count_sub+0x50/0x80
[   22.556955]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.556987]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.557019]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.557050]  kthread+0x324/0x6d0
[   22.557077]  ? trace_preempt_on+0x20/0xc0
[   22.557107]  ? __pfx_kthread+0x10/0x10
[   22.557135]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.557164]  ? calculate_sigpending+0x7b/0xa0
[   22.557225]  ? __pfx_kthread+0x10/0x10
[   22.557258]  ret_from_fork+0x41/0x80
[   22.557287]  ? __pfx_kthread+0x10/0x10
[   22.557316]  ret_from_fork_asm+0x1a/0x30
[   22.557358]  </TASK>
[   22.557375] 
[   22.574041] Allocated by task 182:
[   22.574517]  kasan_save_stack+0x3d/0x60
[   22.575040]  kasan_save_track+0x18/0x40
[   22.575544]  kasan_save_alloc_info+0x3b/0x50
[   22.576242]  __kasan_kmalloc+0xb7/0xc0
[   22.576980]  __kmalloc_cache_noprof+0x184/0x410
[   22.577588]  kmalloc_oob_memset_4+0xad/0x330
[   22.578228]  kunit_try_run_case+0x1b3/0x490
[   22.578565]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.579315]  kthread+0x324/0x6d0
[   22.579720]  ret_from_fork+0x41/0x80
[   22.580250]  ret_from_fork_asm+0x1a/0x30
[   22.580843] 
[   22.581052] The buggy address belongs to the object at ffff888102a59000
[   22.581052]  which belongs to the cache kmalloc-128 of size 128
[   22.582019] The buggy address is located 117 bytes inside of
[   22.582019]  allocated 120-byte region [ffff888102a59000, ffff888102a59078)
[   22.583327] 
[   22.583721] The buggy address belongs to the physical page:
[   22.584262] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   22.584710] flags: 0x200000000000000(node=0|zone=2)
[   22.585240] page_type: f5(slab)
[   22.585960] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   22.586638] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.587582] page dumped because: kasan: bad access detected
[   22.587907] 
[   22.588213] Memory state around the buggy address:
[   22.588975]  ffff888102a58f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.589820]  ffff888102a58f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.590374] >ffff888102a59000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   22.591116]                                                                 ^
[   22.591840]  ffff888102a59080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.592488]  ffff888102a59100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.593380] ==================================================================

test log snippet boot parser 93

[   22.600440] ==================================================================
[   22.601214] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_memset_8+0x167/0x330
[   22.601499] Write of size 8 at addr ffff888102a59171 by task kunit_try_catch/184
[   22.601738] 
[   22.601838] CPU: 1 UID: 0 PID: 184 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.601894] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.601912] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.601940] Call Trace:
[   22.601958]  <TASK>
[   22.601980]  dump_stack_lvl+0x73/0xb0
[   22.602020]  print_report+0xd1/0x640
[   22.602053]  ? __virt_addr_valid+0x1db/0x2d0
[   22.602086]  ? kasan_complete_mode_report_info+0x2a/0x200
[   22.602120]  kasan_report+0x102/0x140
[   22.602149]  ? kmalloc_oob_memset_8+0x167/0x330
[   22.602184]  ? kmalloc_oob_memset_8+0x167/0x330
[   22.602293]  kasan_check_range+0x10c/0x1c0
[   22.602363]  __asan_memset+0x27/0x50
[   22.602436]  kmalloc_oob_memset_8+0x167/0x330
[   22.602502]  ? __pfx_kmalloc_oob_memset_8+0x10/0x10
[   22.602563]  ? __schedule+0xc6f/0x27d0
[   22.602623]  ? __pfx_read_tsc+0x10/0x10
[   22.602674]  ? ktime_get_ts64+0x86/0x230
[   22.602740]  kunit_try_run_case+0x1b3/0x490
[   22.602806]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.602862]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.602966]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.603495]  ? __kthread_parkme+0x82/0x160
[   22.603556]  ? preempt_count_sub+0x50/0x80
[   22.603648]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.603760]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.603831]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.603895]  kthread+0x324/0x6d0
[   22.603946]  ? trace_preempt_on+0x20/0xc0
[   22.603982]  ? __pfx_kthread+0x10/0x10
[   22.604011]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.604041]  ? calculate_sigpending+0x7b/0xa0
[   22.604076]  ? __pfx_kthread+0x10/0x10
[   22.604105]  ret_from_fork+0x41/0x80
[   22.604133]  ? __pfx_kthread+0x10/0x10
[   22.604162]  ret_from_fork_asm+0x1a/0x30
[   22.604220]  </TASK>
[   22.604237] 
[   22.618712] Allocated by task 184:
[   22.619038]  kasan_save_stack+0x3d/0x60
[   22.619449]  kasan_save_track+0x18/0x40
[   22.619916]  kasan_save_alloc_info+0x3b/0x50
[   22.620430]  __kasan_kmalloc+0xb7/0xc0
[   22.620915]  __kmalloc_cache_noprof+0x184/0x410
[   22.621488]  kmalloc_oob_memset_8+0xad/0x330
[   22.621973]  kunit_try_run_case+0x1b3/0x490
[   22.622555]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.623118]  kthread+0x324/0x6d0
[   22.623613]  ret_from_fork+0x41/0x80
[   22.624066]  ret_from_fork_asm+0x1a/0x30
[   22.624444] 
[   22.624607] The buggy address belongs to the object at ffff888102a59100
[   22.624607]  which belongs to the cache kmalloc-128 of size 128
[   22.625281] The buggy address is located 113 bytes inside of
[   22.625281]  allocated 120-byte region [ffff888102a59100, ffff888102a59178)
[   22.626479] 
[   22.626714] The buggy address belongs to the physical page:
[   22.627371] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   22.628136] flags: 0x200000000000000(node=0|zone=2)
[   22.628840] page_type: f5(slab)
[   22.629389] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   22.629925] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.630392] page dumped because: kasan: bad access detected
[   22.630991] 
[   22.631383] Memory state around the buggy address:
[   22.631930]  ffff888102a59000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.632728]  ffff888102a59080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.633438] >ffff888102a59100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   22.634055]                                                                 ^
[   22.634736]  ffff888102a59180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.635328]  ffff888102a59200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.635956] ==================================================================

test log snippet boot parser 94

[   21.064944] ==================================================================
[   21.065722] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0x6bf/0x7f0
[   21.066865] Write of size 1 at addr ffff888102a48d78 by task kunit_try_catch/144
[   21.067769] 
[   21.068036] CPU: 1 UID: 0 PID: 144 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.068140] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.068177] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.068229] Call Trace:
[   21.068264]  <TASK>
[   21.068300]  dump_stack_lvl+0x73/0xb0
[   21.068373]  print_report+0xd1/0x640
[   21.068453]  ? __virt_addr_valid+0x1db/0x2d0
[   21.068517]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.068560]  kasan_report+0x102/0x140
[   21.068625]  ? kmalloc_oob_right+0x6bf/0x7f0
[   21.068697]  ? kmalloc_oob_right+0x6bf/0x7f0
[   21.068762]  __asan_report_store1_noabort+0x1b/0x30
[   21.068801]  kmalloc_oob_right+0x6bf/0x7f0
[   21.068834]  ? __pfx_kmalloc_oob_right+0x10/0x10
[   21.068866]  ? __schedule+0xc6f/0x27d0
[   21.068896]  ? __pfx_read_tsc+0x10/0x10
[   21.068923]  ? ktime_get_ts64+0x86/0x230
[   21.068959]  kunit_try_run_case+0x1b3/0x490
[   21.068992]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.069022]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.069053]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.069083]  ? __kthread_parkme+0x82/0x160
[   21.069111]  ? preempt_count_sub+0x50/0x80
[   21.069144]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.069176]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.069233]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.069266]  kthread+0x324/0x6d0
[   21.069294]  ? trace_preempt_on+0x20/0xc0
[   21.069324]  ? __pfx_kthread+0x10/0x10
[   21.069352]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.069381]  ? calculate_sigpending+0x7b/0xa0
[   21.069430]  ? __pfx_kthread+0x10/0x10
[   21.069460]  ret_from_fork+0x41/0x80
[   21.069488]  ? __pfx_kthread+0x10/0x10
[   21.069517]  ret_from_fork_asm+0x1a/0x30
[   21.069560]  </TASK>
[   21.069576] 
[   21.086391] Allocated by task 144:
[   21.086979]  kasan_save_stack+0x3d/0x60
[   21.087455]  kasan_save_track+0x18/0x40
[   21.088216]  kasan_save_alloc_info+0x3b/0x50
[   21.088766]  __kasan_kmalloc+0xb7/0xc0
[   21.089127]  __kmalloc_cache_noprof+0x184/0x410
[   21.089725]  kmalloc_oob_right+0xaa/0x7f0
[   21.090183]  kunit_try_run_case+0x1b3/0x490
[   21.090879]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.091483]  kthread+0x324/0x6d0
[   21.092268]  ret_from_fork+0x41/0x80
[   21.092752]  ret_from_fork_asm+0x1a/0x30
[   21.093130] 
[   21.093304] The buggy address belongs to the object at ffff888102a48d00
[   21.093304]  which belongs to the cache kmalloc-128 of size 128
[   21.094151] The buggy address is located 5 bytes to the right of
[   21.094151]  allocated 115-byte region [ffff888102a48d00, ffff888102a48d73)
[   21.094965] 
[   21.095118] The buggy address belongs to the physical page:
[   21.095468] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a48
[   21.096545] flags: 0x200000000000000(node=0|zone=2)
[   21.097053] page_type: f5(slab)
[   21.097738] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   21.098466] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.099389] page dumped because: kasan: bad access detected
[   21.100365] 
[   21.100609] Memory state around the buggy address:
[   21.101074]  ffff888102a48c00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   21.102030]  ffff888102a48c80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.102770] >ffff888102a48d00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   21.103433]                                                                 ^
[   21.104499]  ffff888102a48d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.105334]  ffff888102a48e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.106514] ==================================================================

test log snippet boot parser 95

[   21.010821] ==================================================================
[   21.012038] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0x6f2/0x7f0
[   21.013302] Write of size 1 at addr ffff888102a48d73 by task kunit_try_catch/144
[   21.014570] 
[   21.016499] CPU: 1 UID: 0 PID: 144 Comm: kunit_try_catch Tainted: G                 N 6.13.0 #1
[   21.017006] Tainted: [N]=TEST
[   21.017054] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.017439] Call Trace:
[   21.017552]  <TASK>
[   21.017779]  dump_stack_lvl+0x73/0xb0
[   21.017920]  print_report+0xd1/0x640
[   21.017969]  ? __virt_addr_valid+0x1db/0x2d0
[   21.018008]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.018043]  kasan_report+0x102/0x140
[   21.018100]  ? kmalloc_oob_right+0x6f2/0x7f0
[   21.018133]  ? kmalloc_oob_right+0x6f2/0x7f0
[   21.018171]  __asan_report_store1_noabort+0x1b/0x30
[   21.018238]  kmalloc_oob_right+0x6f2/0x7f0
[   21.018273]  ? __pfx_kmalloc_oob_right+0x10/0x10
[   21.018305]  ? __schedule+0xc6f/0x27d0
[   21.018339]  ? __pfx_read_tsc+0x10/0x10
[   21.018375]  ? ktime_get_ts64+0x86/0x230
[   21.018427]  kunit_try_run_case+0x1b3/0x490
[   21.018465]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.018495]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.018527]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.018557]  ? __kthread_parkme+0x82/0x160
[   21.018592]  ? preempt_count_sub+0x50/0x80
[   21.018646]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.018701]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.018755]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.018808]  kthread+0x324/0x6d0
[   21.018857]  ? trace_preempt_on+0x20/0xc0
[   21.018916]  ? __pfx_kthread+0x10/0x10
[   21.018971]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.019029]  ? calculate_sigpending+0x7b/0xa0
[   21.019094]  ? __pfx_kthread+0x10/0x10
[   21.019149]  ret_from_fork+0x41/0x80
[   21.019197]  ? __pfx_kthread+0x10/0x10
[   21.019246]  ret_from_fork_asm+0x1a/0x30
[   21.019347]  </TASK>
[   21.019468] 
[   21.037848] Allocated by task 144:
[   21.039007]  kasan_save_stack+0x3d/0x60
[   21.039523]  kasan_save_track+0x18/0x40
[   21.039929]  kasan_save_alloc_info+0x3b/0x50
[   21.040394]  __kasan_kmalloc+0xb7/0xc0
[   21.040706]  __kmalloc_cache_noprof+0x184/0x410
[   21.041159]  kmalloc_oob_right+0xaa/0x7f0
[   21.041805]  kunit_try_run_case+0x1b3/0x490
[   21.042270]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.043124]  kthread+0x324/0x6d0
[   21.043556]  ret_from_fork+0x41/0x80
[   21.044084]  ret_from_fork_asm+0x1a/0x30
[   21.044537] 
[   21.045061] The buggy address belongs to the object at ffff888102a48d00
[   21.045061]  which belongs to the cache kmalloc-128 of size 128
[   21.046322] The buggy address is located 0 bytes to the right of
[   21.046322]  allocated 115-byte region [ffff888102a48d00, ffff888102a48d73)
[   21.048119] 
[   21.048547] The buggy address belongs to the physical page:
[   21.049701] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a48
[   21.050985] flags: 0x200000000000000(node=0|zone=2)
[   21.052377] page_type: f5(slab)
[   21.053877] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   21.054396] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.055322] page dumped because: kasan: bad access detected
[   21.055946] 
[   21.056415] Memory state around the buggy address:
[   21.057663]  ffff888102a48c00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   21.058303]  ffff888102a48c80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.058854] >ffff888102a48d00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   21.059344]                                                              ^
[   21.060470]  ffff888102a48d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.061596]  ffff888102a48e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.062565] ==================================================================

test log snippet boot parser 96

[   21.108335] ==================================================================
[   21.109097] BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0x68c/0x7f0
[   21.110439] Read of size 1 at addr ffff888102a48d80 by task kunit_try_catch/144
[   21.111137] 
[   21.111426] CPU: 1 UID: 0 PID: 144 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.111535] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.111568] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.111623] Call Trace:
[   21.111654]  <TASK>
[   21.111684]  dump_stack_lvl+0x73/0xb0
[   21.111761]  print_report+0xd1/0x640
[   21.112333]  ? __virt_addr_valid+0x1db/0x2d0
[   21.112421]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.112493]  kasan_report+0x102/0x140
[   21.112550]  ? kmalloc_oob_right+0x68c/0x7f0
[   21.112646]  ? kmalloc_oob_right+0x68c/0x7f0
[   21.112732]  __asan_report_load1_noabort+0x18/0x20
[   21.112798]  kmalloc_oob_right+0x68c/0x7f0
[   21.112857]  ? __pfx_kmalloc_oob_right+0x10/0x10
[   21.112909]  ? __schedule+0xc6f/0x27d0
[   21.112959]  ? __pfx_read_tsc+0x10/0x10
[   21.113003]  ? ktime_get_ts64+0x86/0x230
[   21.113060]  kunit_try_run_case+0x1b3/0x490
[   21.113117]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.113170]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.113515]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.113625]  ? __kthread_parkme+0x82/0x160
[   21.113684]  ? preempt_count_sub+0x50/0x80
[   21.113745]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.113807]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.113870]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.113932]  kthread+0x324/0x6d0
[   21.113969]  ? trace_preempt_on+0x20/0xc0
[   21.114001]  ? __pfx_kthread+0x10/0x10
[   21.114030]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.114060]  ? calculate_sigpending+0x7b/0xa0
[   21.114092]  ? __pfx_kthread+0x10/0x10
[   21.114121]  ret_from_fork+0x41/0x80
[   21.114149]  ? __pfx_kthread+0x10/0x10
[   21.114177]  ret_from_fork_asm+0x1a/0x30
[   21.114269]  </TASK>
[   21.114286] 
[   21.127286] Allocated by task 144:
[   21.127713]  kasan_save_stack+0x3d/0x60
[   21.128128]  kasan_save_track+0x18/0x40
[   21.128642]  kasan_save_alloc_info+0x3b/0x50
[   21.129310]  __kasan_kmalloc+0xb7/0xc0
[   21.129750]  __kmalloc_cache_noprof+0x184/0x410
[   21.130333]  kmalloc_oob_right+0xaa/0x7f0
[   21.131027]  kunit_try_run_case+0x1b3/0x490
[   21.131676]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.132144]  kthread+0x324/0x6d0
[   21.132473]  ret_from_fork+0x41/0x80
[   21.132746]  ret_from_fork_asm+0x1a/0x30
[   21.133024] 
[   21.133176] The buggy address belongs to the object at ffff888102a48d00
[   21.133176]  which belongs to the cache kmalloc-128 of size 128
[   21.134219] The buggy address is located 13 bytes to the right of
[   21.134219]  allocated 115-byte region [ffff888102a48d00, ffff888102a48d73)
[   21.135798] 
[   21.136030] The buggy address belongs to the physical page:
[   21.136782] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a48
[   21.137515] flags: 0x200000000000000(node=0|zone=2)
[   21.137831] page_type: f5(slab)
[   21.138072] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   21.138622] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.139809] page dumped because: kasan: bad access detected
[   21.140713] 
[   21.140947] Memory state around the buggy address:
[   21.141651]  ffff888102a48c80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.142572]  ffff888102a48d00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   21.143186] >ffff888102a48d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.143796]                    ^
[   21.144036]  ffff888102a48e00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.144439]  ffff888102a48e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.145373] ==================================================================

test log snippet boot parser 97

[   21.301248] ==================================================================
[   21.302258] BUG: KASAN: slab-out-of-bounds in kmalloc_track_caller_oob_right+0x4b3/0x530
[   21.302740] Write of size 1 at addr ffff888102a48f78 by task kunit_try_catch/150
[   21.303680] 
[   21.303877] CPU: 1 UID: 0 PID: 150 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.303979] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.304014] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.304066] Call Trace:
[   21.304297]  <TASK>
[   21.304348]  dump_stack_lvl+0x73/0xb0
[   21.304447]  print_report+0xd1/0x640
[   21.304513]  ? __virt_addr_valid+0x1db/0x2d0
[   21.304712]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.304771]  kasan_report+0x102/0x140
[   21.304804]  ? kmalloc_track_caller_oob_right+0x4b3/0x530
[   21.304837]  ? kmalloc_track_caller_oob_right+0x4b3/0x530
[   21.304874]  __asan_report_store1_noabort+0x1b/0x30
[   21.304904]  kmalloc_track_caller_oob_right+0x4b3/0x530
[   21.304934]  ? __pfx_kmalloc_track_caller_oob_right+0x10/0x10
[   21.304965]  ? __schedule+0xc6f/0x27d0
[   21.304995]  ? __pfx_read_tsc+0x10/0x10
[   21.305022]  ? ktime_get_ts64+0x86/0x230
[   21.305056]  kunit_try_run_case+0x1b3/0x490
[   21.305089]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.305119]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.305149]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.305180]  ? __kthread_parkme+0x82/0x160
[   21.305208]  ? preempt_count_sub+0x50/0x80
[   21.305240]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.305274]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.305332]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.305364]  kthread+0x324/0x6d0
[   21.305391]  ? trace_preempt_on+0x20/0xc0
[   21.305438]  ? __pfx_kthread+0x10/0x10
[   21.305467]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.305496]  ? calculate_sigpending+0x7b/0xa0
[   21.305529]  ? __pfx_kthread+0x10/0x10
[   21.305558]  ret_from_fork+0x41/0x80
[   21.305591]  ? __pfx_kthread+0x10/0x10
[   21.305686]  ret_from_fork_asm+0x1a/0x30
[   21.305762]  </TASK>
[   21.305790] 
[   21.322832] Allocated by task 150:
[   21.323199]  kasan_save_stack+0x3d/0x60
[   21.323916]  kasan_save_track+0x18/0x40
[   21.324240]  kasan_save_alloc_info+0x3b/0x50
[   21.324731]  __kasan_kmalloc+0xb7/0xc0
[   21.325222]  __kmalloc_node_track_caller_noprof+0x1c6/0x500
[   21.325983]  kmalloc_track_caller_oob_right+0x19b/0x530
[   21.327253]  kunit_try_run_case+0x1b3/0x490
[   21.327739]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.328502]  kthread+0x324/0x6d0
[   21.329050]  ret_from_fork+0x41/0x80
[   21.329701]  ret_from_fork_asm+0x1a/0x30
[   21.330084] 
[   21.330307] The buggy address belongs to the object at ffff888102a48f00
[   21.330307]  which belongs to the cache kmalloc-128 of size 128
[   21.331451] The buggy address is located 0 bytes to the right of
[   21.331451]  allocated 120-byte region [ffff888102a48f00, ffff888102a48f78)
[   21.332920] 
[   21.333356] The buggy address belongs to the physical page:
[   21.334060] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a48
[   21.334773] flags: 0x200000000000000(node=0|zone=2)
[   21.335306] page_type: f5(slab)
[   21.335829] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   21.336551] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.337529] page dumped because: kasan: bad access detected
[   21.337985] 
[   21.338231] Memory state around the buggy address:
[   21.338943]  ffff888102a48e00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   21.339546]  ffff888102a48e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.340272] >ffff888102a48f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   21.340897]                                                                 ^
[   21.341997]  ffff888102a48f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.342800]  ffff888102a49000: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   21.343362] ==================================================================
---
[   21.257596] ==================================================================
[   21.258554] BUG: KASAN: slab-out-of-bounds in kmalloc_track_caller_oob_right+0x4ca/0x530
[   21.259248] Write of size 1 at addr ffff888102a48e78 by task kunit_try_catch/150
[   21.260700] 
[   21.260893] CPU: 1 UID: 0 PID: 150 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.260998] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.261031] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.261086] Call Trace:
[   21.261154]  <TASK>
[   21.261229]  dump_stack_lvl+0x73/0xb0
[   21.261319]  print_report+0xd1/0x640
[   21.261430]  ? __virt_addr_valid+0x1db/0x2d0
[   21.261511]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.261615]  kasan_report+0x102/0x140
[   21.261684]  ? kmalloc_track_caller_oob_right+0x4ca/0x530
[   21.261744]  ? kmalloc_track_caller_oob_right+0x4ca/0x530
[   21.261809]  __asan_report_store1_noabort+0x1b/0x30
[   21.261864]  kmalloc_track_caller_oob_right+0x4ca/0x530
[   21.261919]  ? __pfx_kmalloc_track_caller_oob_right+0x10/0x10
[   21.261976]  ? __schedule+0xc6f/0x27d0
[   21.262033]  ? __pfx_read_tsc+0x10/0x10
[   21.262080]  ? ktime_get_ts64+0x86/0x230
[   21.262143]  kunit_try_run_case+0x1b3/0x490
[   21.262251]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.262312]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.262356]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.262390]  ? __kthread_parkme+0x82/0x160
[   21.262439]  ? preempt_count_sub+0x50/0x80
[   21.262474]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.262506]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.262538]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.262570]  kthread+0x324/0x6d0
[   21.262666]  ? trace_preempt_on+0x20/0xc0
[   21.262719]  ? __pfx_kthread+0x10/0x10
[   21.262750]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.262780]  ? calculate_sigpending+0x7b/0xa0
[   21.262814]  ? __pfx_kthread+0x10/0x10
[   21.262843]  ret_from_fork+0x41/0x80
[   21.262872]  ? __pfx_kthread+0x10/0x10
[   21.262901]  ret_from_fork_asm+0x1a/0x30
[   21.262943]  </TASK>
[   21.262959] 
[   21.278083] Allocated by task 150:
[   21.278601]  kasan_save_stack+0x3d/0x60
[   21.279089]  kasan_save_track+0x18/0x40
[   21.279645]  kasan_save_alloc_info+0x3b/0x50
[   21.280241]  __kasan_kmalloc+0xb7/0xc0
[   21.280874]  __kmalloc_node_track_caller_noprof+0x1c6/0x500
[   21.281543]  kmalloc_track_caller_oob_right+0x9a/0x530
[   21.281876]  kunit_try_run_case+0x1b3/0x490
[   21.282163]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.283087]  kthread+0x324/0x6d0
[   21.283628]  ret_from_fork+0x41/0x80
[   21.284048]  ret_from_fork_asm+0x1a/0x30
[   21.284791] 
[   21.285034] The buggy address belongs to the object at ffff888102a48e00
[   21.285034]  which belongs to the cache kmalloc-128 of size 128
[   21.286643] The buggy address is located 0 bytes to the right of
[   21.286643]  allocated 120-byte region [ffff888102a48e00, ffff888102a48e78)
[   21.289051] 
[   21.289256] The buggy address belongs to the physical page:
[   21.289560] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a48
[   21.290446] flags: 0x200000000000000(node=0|zone=2)
[   21.291244] page_type: f5(slab)
[   21.291769] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   21.292634] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.293420] page dumped because: kasan: bad access detected
[   21.293887] 
[   21.294093] Memory state around the buggy address:
[   21.295110]  ffff888102a48d00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   21.295638]  ffff888102a48d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.296474] >ffff888102a48e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   21.297441]                                                                 ^
[   21.298341]  ffff888102a48e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.298725]  ffff888102a48f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.299885] ==================================================================

test log snippet boot parser 98

[   23.532142] ==================================================================
[   23.532951] BUG: KASAN: slab-out-of-bounds in kmem_cache_oob+0x404/0x530
[   23.533360] Read of size 1 at addr ffff888101ae70c8 by task kunit_try_catch/215
[   23.534004] 
[   23.534195] CPU: 0 UID: 0 PID: 215 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.534295] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.534327] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.534382] Call Trace:
[   23.534955]  <TASK>
[   23.535004]  dump_stack_lvl+0x73/0xb0
[   23.535079]  print_report+0xd1/0x640
[   23.535143]  ? __virt_addr_valid+0x1db/0x2d0
[   23.535250]  ? kasan_complete_mode_report_info+0x2a/0x200
[   23.535307]  kasan_report+0x102/0x140
[   23.535340]  ? kmem_cache_oob+0x404/0x530
[   23.535373]  ? kmem_cache_oob+0x404/0x530
[   23.535429]  __asan_report_load1_noabort+0x18/0x20
[   23.535483]  kmem_cache_oob+0x404/0x530
[   23.535531]  ? trace_hardirqs_on+0x37/0xe0
[   23.535722]  ? __pfx_kmem_cache_oob+0x10/0x10
[   23.535798]  ? finish_task_switch.isra.0+0x153/0x700
[   23.535857]  ? __switch_to+0x5d9/0xf60
[   23.535926]  ? __pfx_read_tsc+0x10/0x10
[   23.535977]  ? ktime_get_ts64+0x86/0x230
[   23.536040]  kunit_try_run_case+0x1b3/0x490
[   23.536106]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.536165]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.536273]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.536311]  ? __kthread_parkme+0x82/0x160
[   23.536341]  ? preempt_count_sub+0x50/0x80
[   23.536373]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.536423]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.536458]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.536489]  kthread+0x324/0x6d0
[   23.536517]  ? trace_preempt_on+0x20/0xc0
[   23.536546]  ? __pfx_kthread+0x10/0x10
[   23.536667]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.536712]  ? calculate_sigpending+0x7b/0xa0
[   23.536747]  ? __pfx_kthread+0x10/0x10
[   23.536777]  ret_from_fork+0x41/0x80
[   23.536807]  ? __pfx_kthread+0x10/0x10
[   23.536835]  ret_from_fork_asm+0x1a/0x30
[   23.536877]  </TASK>
[   23.536892] 
[   23.553818] Allocated by task 215:
[   23.554214]  kasan_save_stack+0x3d/0x60
[   23.554996]  kasan_save_track+0x18/0x40
[   23.555522]  kasan_save_alloc_info+0x3b/0x50
[   23.556417]  __kasan_slab_alloc+0x91/0xa0
[   23.556862]  kmem_cache_alloc_noprof+0x11e/0x3e0
[   23.557238]  kmem_cache_oob+0x158/0x530
[   23.557937]  kunit_try_run_case+0x1b3/0x490
[   23.558461]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.559309]  kthread+0x324/0x6d0
[   23.559891]  ret_from_fork+0x41/0x80
[   23.560342]  ret_from_fork_asm+0x1a/0x30
[   23.561286] 
[   23.561534] The buggy address belongs to the object at ffff888101ae7000
[   23.561534]  which belongs to the cache test_cache of size 200
[   23.562875] The buggy address is located 0 bytes to the right of
[   23.562875]  allocated 200-byte region [ffff888101ae7000, ffff888101ae70c8)
[   23.564073] 
[   23.564336] The buggy address belongs to the physical page:
[   23.565142] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ae7
[   23.565931] flags: 0x200000000000000(node=0|zone=2)
[   23.566697] page_type: f5(slab)
[   23.567509] raw: 0200000000000000 ffff888101aabb40 dead000000000122 0000000000000000
[   23.568226] raw: 0000000000000000 00000000800f000f 00000001f5000000 0000000000000000
[   23.568826] page dumped because: kasan: bad access detected
[   23.569454] 
[   23.569874] Memory state around the buggy address:
[   23.570332]  ffff888101ae6f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.571460]  ffff888101ae7000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   23.572065] >ffff888101ae7080: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
[   23.572608]                                               ^
[   23.573322]  ffff888101ae7100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.574060]  ffff888101ae7180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.574928] ==================================================================

test log snippet boot parser 99

[   22.109550] ==================================================================
[   22.110880] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xec8/0x11d0
[   22.112148] Write of size 1 at addr ffff888102bbe0da by task kunit_try_catch/170
[   22.113024] 
[   22.113224] CPU: 1 UID: 0 PID: 170 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.113904] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.113926] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.113955] Call Trace:
[   22.113976]  <TASK>
[   22.113997]  dump_stack_lvl+0x73/0xb0
[   22.114039]  print_report+0xd1/0x640
[   22.114071]  ? __virt_addr_valid+0x1db/0x2d0
[   22.114104]  ? kasan_addr_to_slab+0x11/0xa0
[   22.114136]  kasan_report+0x102/0x140
[   22.114165]  ? krealloc_less_oob_helper+0xec8/0x11d0
[   22.114228]  ? krealloc_less_oob_helper+0xec8/0x11d0
[   22.114272]  __asan_report_store1_noabort+0x1b/0x30
[   22.114302]  krealloc_less_oob_helper+0xec8/0x11d0
[   22.114340]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   22.114381]  ? finish_task_switch.isra.0+0x153/0x700
[   22.114427]  ? __switch_to+0x5d9/0xf60
[   22.114461]  ? __schedule+0xc6f/0x27d0
[   22.114492]  ? __pfx_read_tsc+0x10/0x10
[   22.114524]  krealloc_large_less_oob+0x1c/0x30
[   22.114556]  kunit_try_run_case+0x1b3/0x490
[   22.114624]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.114694]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.114736]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.114768]  ? __kthread_parkme+0x82/0x160
[   22.114797]  ? preempt_count_sub+0x50/0x80
[   22.114829]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.114862]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.114894]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.114926]  kthread+0x324/0x6d0
[   22.114953]  ? trace_preempt_on+0x20/0xc0
[   22.114983]  ? __pfx_kthread+0x10/0x10
[   22.115011]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.115040]  ? calculate_sigpending+0x7b/0xa0
[   22.115072]  ? __pfx_kthread+0x10/0x10
[   22.115101]  ret_from_fork+0x41/0x80
[   22.115129]  ? __pfx_kthread+0x10/0x10
[   22.115157]  ret_from_fork_asm+0x1a/0x30
[   22.115221]  </TASK>
[   22.115237] 
[   22.134288] The buggy address belongs to the physical page:
[   22.135031] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bbc
[   22.135745] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.136714] flags: 0x200000000000040(head|node=0|zone=2)
[   22.137152] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.138427] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.138945] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.139578] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.140558] head: 0200000000000002 ffffea00040aef01 ffffffffffffffff 0000000000000000
[   22.141534] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   22.142540] page dumped because: kasan: bad access detected
[   22.143237] 
[   22.143578] Memory state around the buggy address:
[   22.144188]  ffff888102bbdf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.144661]  ffff888102bbe000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.145080] >ffff888102bbe080: 00 00 00 00 00 00 00 00 00 01 fe fe fe fe fe fe
[   22.145723]                                                     ^
[   22.146229]  ffff888102bbe100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.147337]  ffff888102bbe180: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.147951] ==================================================================

test log snippet boot parser 100

[   22.068535] ==================================================================
[   22.069333] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xe25/0x11d0
[   22.070031] Write of size 1 at addr ffff888102bbe0d0 by task kunit_try_catch/170
[   22.070469] 
[   22.071124] CPU: 1 UID: 0 PID: 170 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.071259] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.071296] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.071349] Call Trace:
[   22.071380]  <TASK>
[   22.071428]  dump_stack_lvl+0x73/0xb0
[   22.071503]  print_report+0xd1/0x640
[   22.071562]  ? __virt_addr_valid+0x1db/0x2d0
[   22.071739]  ? kasan_addr_to_slab+0x11/0xa0
[   22.071808]  kasan_report+0x102/0x140
[   22.071864]  ? krealloc_less_oob_helper+0xe25/0x11d0
[   22.071911]  ? krealloc_less_oob_helper+0xe25/0x11d0
[   22.071952]  __asan_report_store1_noabort+0x1b/0x30
[   22.071983]  krealloc_less_oob_helper+0xe25/0x11d0
[   22.072024]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   22.072080]  ? finish_task_switch.isra.0+0x153/0x700
[   22.072133]  ? __switch_to+0x5d9/0xf60
[   22.072191]  ? __schedule+0xc6f/0x27d0
[   22.072383]  ? __pfx_read_tsc+0x10/0x10
[   22.072472]  krealloc_large_less_oob+0x1c/0x30
[   22.072536]  kunit_try_run_case+0x1b3/0x490
[   22.072713]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.072814]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.072904]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.072968]  ? __kthread_parkme+0x82/0x160
[   22.073020]  ? preempt_count_sub+0x50/0x80
[   22.073064]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.073097]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.073130]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.073162]  kthread+0x324/0x6d0
[   22.073192]  ? trace_preempt_on+0x20/0xc0
[   22.073248]  ? __pfx_kthread+0x10/0x10
[   22.073281]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.073310]  ? calculate_sigpending+0x7b/0xa0
[   22.073344]  ? __pfx_kthread+0x10/0x10
[   22.073373]  ret_from_fork+0x41/0x80
[   22.073419]  ? __pfx_kthread+0x10/0x10
[   22.073450]  ret_from_fork_asm+0x1a/0x30
[   22.073492]  </TASK>
[   22.073507] 
[   22.092299] The buggy address belongs to the physical page:
[   22.092948] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bbc
[   22.093891] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.094682] flags: 0x200000000000040(head|node=0|zone=2)
[   22.095761] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.097515] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.098424] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.099139] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.100144] head: 0200000000000002 ffffea00040aef01 ffffffffffffffff 0000000000000000
[   22.100929] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   22.101879] page dumped because: kasan: bad access detected
[   22.102712] 
[   22.102888] Memory state around the buggy address:
[   22.103214]  ffff888102bbdf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.104372]  ffff888102bbe000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.105394] >ffff888102bbe080: 00 00 00 00 00 00 00 00 00 01 fe fe fe fe fe fe
[   22.106064]                                                  ^
[   22.106823]  ffff888102bbe100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.107881]  ffff888102bbe180: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.108756] ==================================================================

test log snippet boot parser 101

[   21.727581] ==================================================================
[   21.728592] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xe25/0x11d0
[   21.729293] Write of size 1 at addr ffff88810039c4d0 by task kunit_try_catch/166
[   21.729994] 
[   21.730282] CPU: 0 UID: 0 PID: 166 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.730414] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.730537] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.730602] Call Trace:
[   21.730637]  <TASK>
[   21.730674]  dump_stack_lvl+0x73/0xb0
[   21.730750]  print_report+0xd1/0x640
[   21.730810]  ? __virt_addr_valid+0x1db/0x2d0
[   21.730872]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.730936]  kasan_report+0x102/0x140
[   21.730990]  ? krealloc_less_oob_helper+0xe25/0x11d0
[   21.731054]  ? krealloc_less_oob_helper+0xe25/0x11d0
[   21.731126]  __asan_report_store1_noabort+0x1b/0x30
[   21.731183]  krealloc_less_oob_helper+0xe25/0x11d0
[   21.731252]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   21.731317]  ? finish_task_switch.isra.0+0x153/0x700
[   21.731374]  ? __switch_to+0x5d9/0xf60
[   21.731458]  ? __schedule+0xc6f/0x27d0
[   21.731518]  ? __pfx_read_tsc+0x10/0x10
[   21.731752]  krealloc_less_oob+0x1c/0x30
[   21.731847]  kunit_try_run_case+0x1b3/0x490
[   21.731921]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.731999]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.732084]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.732147]  ? __kthread_parkme+0x82/0x160
[   21.732201]  ? preempt_count_sub+0x50/0x80
[   21.732275]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.732341]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.732383]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.732435]  kthread+0x324/0x6d0
[   21.732464]  ? trace_preempt_on+0x20/0xc0
[   21.732494]  ? __pfx_kthread+0x10/0x10
[   21.732523]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.732552]  ? calculate_sigpending+0x7b/0xa0
[   21.732591]  ? __pfx_kthread+0x10/0x10
[   21.732681]  ret_from_fork+0x41/0x80
[   21.732733]  ? __pfx_kthread+0x10/0x10
[   21.732783]  ret_from_fork_asm+0x1a/0x30
[   21.732856]  </TASK>
[   21.732880] 
[   21.754018] Allocated by task 166:
[   21.754532]  kasan_save_stack+0x3d/0x60
[   21.754893]  kasan_save_track+0x18/0x40
[   21.755328]  kasan_save_alloc_info+0x3b/0x50
[   21.755715]  __kasan_krealloc+0x190/0x1f0
[   21.756535]  krealloc_noprof+0xf3/0x340
[   21.757043]  krealloc_less_oob_helper+0x1ab/0x11d0
[   21.757566]  krealloc_less_oob+0x1c/0x30
[   21.758217]  kunit_try_run_case+0x1b3/0x490
[   21.758759]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.759121]  kthread+0x324/0x6d0
[   21.759518]  ret_from_fork+0x41/0x80
[   21.759898]  ret_from_fork_asm+0x1a/0x30
[   21.760307] 
[   21.760481] The buggy address belongs to the object at ffff88810039c400
[   21.760481]  which belongs to the cache kmalloc-256 of size 256
[   21.761568] The buggy address is located 7 bytes to the right of
[   21.761568]  allocated 201-byte region [ffff88810039c400, ffff88810039c4c9)
[   21.762530] 
[   21.762777] The buggy address belongs to the physical page:
[   21.763292] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10039c
[   21.765054] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.765763] flags: 0x200000000000040(head|node=0|zone=2)
[   21.766496] page_type: f5(slab)
[   21.766992] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.767637] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.768371] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.768890] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.770488] head: 0200000000000001 ffffea000400e701 ffffffffffffffff 0000000000000000
[   21.771434] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   21.771956] page dumped because: kasan: bad access detected
[   21.772308] 
[   21.772661] Memory state around the buggy address:
[   21.773174]  ffff88810039c380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.774038]  ffff88810039c400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.774564] >ffff88810039c480: 00 00 00 00 00 00 00 00 00 01 fc fc fc fc fc fc
[   21.776382]                                                  ^
[   21.776848]  ffff88810039c500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.777391]  ffff88810039c580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.778166] ==================================================================

test log snippet boot parser 102

[   22.187205] ==================================================================
[   22.187840] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xd49/0x11d0
[   22.188949] Write of size 1 at addr ffff888102bbe0eb by task kunit_try_catch/170
[   22.189601] 
[   22.189932] CPU: 1 UID: 0 PID: 170 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.190035] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.190070] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.190120] Call Trace:
[   22.190152]  <TASK>
[   22.190182]  dump_stack_lvl+0x73/0xb0
[   22.190253]  print_report+0xd1/0x640
[   22.190314]  ? __virt_addr_valid+0x1db/0x2d0
[   22.190390]  ? kasan_addr_to_slab+0x11/0xa0
[   22.190472]  kasan_report+0x102/0x140
[   22.190548]  ? krealloc_less_oob_helper+0xd49/0x11d0
[   22.190622]  ? krealloc_less_oob_helper+0xd49/0x11d0
[   22.190692]  __asan_report_store1_noabort+0x1b/0x30
[   22.190749]  krealloc_less_oob_helper+0xd49/0x11d0
[   22.190814]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   22.190877]  ? finish_task_switch.isra.0+0x153/0x700
[   22.190927]  ? __switch_to+0x5d9/0xf60
[   22.190987]  ? __schedule+0xc6f/0x27d0
[   22.191040]  ? __pfx_read_tsc+0x10/0x10
[   22.191096]  krealloc_large_less_oob+0x1c/0x30
[   22.191154]  kunit_try_run_case+0x1b3/0x490
[   22.191217]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.191273]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.191331]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.191392]  ? __kthread_parkme+0x82/0x160
[   22.191466]  ? preempt_count_sub+0x50/0x80
[   22.191741]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.191814]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.191877]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.191937]  kthread+0x324/0x6d0
[   22.191991]  ? trace_preempt_on+0x20/0xc0
[   22.192047]  ? __pfx_kthread+0x10/0x10
[   22.192101]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.192156]  ? calculate_sigpending+0x7b/0xa0
[   22.192207]  ? __pfx_kthread+0x10/0x10
[   22.192259]  ret_from_fork+0x41/0x80
[   22.192330]  ? __pfx_kthread+0x10/0x10
[   22.192384]  ret_from_fork_asm+0x1a/0x30
[   22.192484]  </TASK>
[   22.192513] 
[   22.207916] The buggy address belongs to the physical page:
[   22.208392] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bbc
[   22.208884] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.209285] flags: 0x200000000000040(head|node=0|zone=2)
[   22.210266] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.211160] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.212053] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.212926] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.214147] head: 0200000000000002 ffffea00040aef01 ffffffffffffffff 0000000000000000
[   22.215041] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   22.216180] page dumped because: kasan: bad access detected
[   22.216762] 
[   22.217214] Memory state around the buggy address:
[   22.218290]  ffff888102bbdf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.219090]  ffff888102bbe000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.219870] >ffff888102bbe080: 00 00 00 00 00 00 00 00 00 01 fe fe fe fe fe fe
[   22.220787]                                                           ^
[   22.221468]  ffff888102bbe100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.222606]  ffff888102bbe180: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.223031] ==================================================================
---
[   22.148806] ==================================================================
[   22.150057] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xe92/0x11d0
[   22.150826] Write of size 1 at addr ffff888102bbe0ea by task kunit_try_catch/170
[   22.151545] 
[   22.151972] CPU: 1 UID: 0 PID: 170 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.152078] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.152112] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.152161] Call Trace:
[   22.152233]  <TASK>
[   22.152275]  dump_stack_lvl+0x73/0xb0
[   22.152390]  print_report+0xd1/0x640
[   22.152473]  ? __virt_addr_valid+0x1db/0x2d0
[   22.152538]  ? kasan_addr_to_slab+0x11/0xa0
[   22.152749]  kasan_report+0x102/0x140
[   22.152827]  ? krealloc_less_oob_helper+0xe92/0x11d0
[   22.152893]  ? krealloc_less_oob_helper+0xe92/0x11d0
[   22.152969]  __asan_report_store1_noabort+0x1b/0x30
[   22.153025]  krealloc_less_oob_helper+0xe92/0x11d0
[   22.153097]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   22.153160]  ? finish_task_switch.isra.0+0x153/0x700
[   22.153278]  ? __switch_to+0x5d9/0xf60
[   22.153372]  ? __schedule+0xc6f/0x27d0
[   22.153451]  ? __pfx_read_tsc+0x10/0x10
[   22.153514]  krealloc_large_less_oob+0x1c/0x30
[   22.153566]  kunit_try_run_case+0x1b3/0x490
[   22.153887]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.153922]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.153953]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.153985]  ? __kthread_parkme+0x82/0x160
[   22.154013]  ? preempt_count_sub+0x50/0x80
[   22.154045]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.154077]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.154108]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.154140]  kthread+0x324/0x6d0
[   22.154166]  ? trace_preempt_on+0x20/0xc0
[   22.154203]  ? __pfx_kthread+0x10/0x10
[   22.154239]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.154268]  ? calculate_sigpending+0x7b/0xa0
[   22.154301]  ? __pfx_kthread+0x10/0x10
[   22.154330]  ret_from_fork+0x41/0x80
[   22.154368]  ? __pfx_kthread+0x10/0x10
[   22.154411]  ret_from_fork_asm+0x1a/0x30
[   22.154458]  </TASK>
[   22.154473] 
[   22.171974] The buggy address belongs to the physical page:
[   22.172530] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bbc
[   22.173183] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.173608] flags: 0x200000000000040(head|node=0|zone=2)
[   22.174141] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.175375] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.175992] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.177056] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.177506] head: 0200000000000002 ffffea00040aef01 ffffffffffffffff 0000000000000000
[   22.177922] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   22.179678] page dumped because: kasan: bad access detected
[   22.180989] 
[   22.181397] Memory state around the buggy address:
[   22.181924]  ffff888102bbdf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.182413]  ffff888102bbe000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.183201] >ffff888102bbe080: 00 00 00 00 00 00 00 00 00 01 fe fe fe fe fe fe
[   22.183882]                                                           ^
[   22.184475]  ffff888102bbe100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.185616]  ffff888102bbe180: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.186068] ==================================================================

test log snippet boot parser 103

[   21.779536] ==================================================================
[   21.780791] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xec8/0x11d0
[   21.781626] Write of size 1 at addr ffff88810039c4da by task kunit_try_catch/166
[   21.782435] 
[   21.782755] CPU: 0 UID: 0 PID: 166 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.782866] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.782899] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.782950] Call Trace:
[   21.782992]  <TASK>
[   21.783025]  dump_stack_lvl+0x73/0xb0
[   21.783099]  print_report+0xd1/0x640
[   21.783159]  ? __virt_addr_valid+0x1db/0x2d0
[   21.783268]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.783338]  kasan_report+0x102/0x140
[   21.783395]  ? krealloc_less_oob_helper+0xec8/0x11d0
[   21.783482]  ? krealloc_less_oob_helper+0xec8/0x11d0
[   21.783559]  __asan_report_store1_noabort+0x1b/0x30
[   21.783663]  krealloc_less_oob_helper+0xec8/0x11d0
[   21.783732]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   21.783792]  ? finish_task_switch.isra.0+0x153/0x700
[   21.783842]  ? __switch_to+0x5d9/0xf60
[   21.783899]  ? __schedule+0xc6f/0x27d0
[   21.783951]  ? __pfx_read_tsc+0x10/0x10
[   21.783998]  krealloc_less_oob+0x1c/0x30
[   21.784031]  kunit_try_run_case+0x1b3/0x490
[   21.784066]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.784096]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.784127]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.784158]  ? __kthread_parkme+0x82/0x160
[   21.784220]  ? preempt_count_sub+0x50/0x80
[   21.784256]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.784288]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.784320]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.784351]  kthread+0x324/0x6d0
[   21.784378]  ? trace_preempt_on+0x20/0xc0
[   21.784425]  ? __pfx_kthread+0x10/0x10
[   21.784454]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.784484]  ? calculate_sigpending+0x7b/0xa0
[   21.784517]  ? __pfx_kthread+0x10/0x10
[   21.784546]  ret_from_fork+0x41/0x80
[   21.784574]  ? __pfx_kthread+0x10/0x10
[   21.784655]  ret_from_fork_asm+0x1a/0x30
[   21.784725]  </TASK>
[   21.784742] 
[   21.802484] Allocated by task 166:
[   21.803047]  kasan_save_stack+0x3d/0x60
[   21.803562]  kasan_save_track+0x18/0x40
[   21.804204]  kasan_save_alloc_info+0x3b/0x50
[   21.804894]  __kasan_krealloc+0x190/0x1f0
[   21.805317]  krealloc_noprof+0xf3/0x340
[   21.805618]  krealloc_less_oob_helper+0x1ab/0x11d0
[   21.805977]  krealloc_less_oob+0x1c/0x30
[   21.806993]  kunit_try_run_case+0x1b3/0x490
[   21.807566]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.808347]  kthread+0x324/0x6d0
[   21.808943]  ret_from_fork+0x41/0x80
[   21.809460]  ret_from_fork_asm+0x1a/0x30
[   21.810043] 
[   21.810361] The buggy address belongs to the object at ffff88810039c400
[   21.810361]  which belongs to the cache kmalloc-256 of size 256
[   21.812016] The buggy address is located 17 bytes to the right of
[   21.812016]  allocated 201-byte region [ffff88810039c400, ffff88810039c4c9)
[   21.812874] 
[   21.813101] The buggy address belongs to the physical page:
[   21.813982] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10039c
[   21.814907] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.815363] flags: 0x200000000000040(head|node=0|zone=2)
[   21.816320] page_type: f5(slab)
[   21.816700] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.817303] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.818236] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.819359] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.820374] head: 0200000000000001 ffffea000400e701 ffffffffffffffff 0000000000000000
[   21.821457] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   21.822732] page dumped because: kasan: bad access detected
[   21.823102] 
[   21.823266] Memory state around the buggy address:
[   21.823644]  ffff88810039c380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.824288]  ffff88810039c400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.825064] >ffff88810039c480: 00 00 00 00 00 00 00 00 00 01 fc fc fc fc fc fc
[   21.825474]                                                     ^
[   21.826230]  ffff88810039c500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.827011]  ffff88810039c580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.828322] ==================================================================

test log snippet boot parser 104

[   22.030893] ==================================================================
[   22.031659] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xd72/0x11d0
[   22.032444] Write of size 1 at addr ffff888102bbe0c9 by task kunit_try_catch/170
[   22.033818] 
[   22.034166] CPU: 1 UID: 0 PID: 170 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.034524] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.034563] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.034709] Call Trace:
[   22.034751]  <TASK>
[   22.034783]  dump_stack_lvl+0x73/0xb0
[   22.034857]  print_report+0xd1/0x640
[   22.034915]  ? __virt_addr_valid+0x1db/0x2d0
[   22.034951]  ? kasan_addr_to_slab+0x11/0xa0
[   22.034984]  kasan_report+0x102/0x140
[   22.035014]  ? krealloc_less_oob_helper+0xd72/0x11d0
[   22.035050]  ? krealloc_less_oob_helper+0xd72/0x11d0
[   22.035090]  __asan_report_store1_noabort+0x1b/0x30
[   22.035119]  krealloc_less_oob_helper+0xd72/0x11d0
[   22.035156]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   22.035189]  ? finish_task_switch.isra.0+0x153/0x700
[   22.035218]  ? __switch_to+0x5d9/0xf60
[   22.035250]  ? __schedule+0xc6f/0x27d0
[   22.035280]  ? __pfx_read_tsc+0x10/0x10
[   22.035311]  krealloc_large_less_oob+0x1c/0x30
[   22.035343]  kunit_try_run_case+0x1b3/0x490
[   22.035376]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.035430]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.035465]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.035496]  ? __kthread_parkme+0x82/0x160
[   22.035524]  ? preempt_count_sub+0x50/0x80
[   22.035555]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.035586]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.035618]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.035649]  kthread+0x324/0x6d0
[   22.035676]  ? trace_preempt_on+0x20/0xc0
[   22.035706]  ? __pfx_kthread+0x10/0x10
[   22.035734]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.035763]  ? calculate_sigpending+0x7b/0xa0
[   22.035795]  ? __pfx_kthread+0x10/0x10
[   22.035824]  ret_from_fork+0x41/0x80
[   22.035852]  ? __pfx_kthread+0x10/0x10
[   22.035880]  ret_from_fork_asm+0x1a/0x30
[   22.035921]  </TASK>
[   22.035937] 
[   22.052552] The buggy address belongs to the physical page:
[   22.053269] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bbc
[   22.054147] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.054848] flags: 0x200000000000040(head|node=0|zone=2)
[   22.055483] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.056426] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.057094] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.058021] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.058955] head: 0200000000000002 ffffea00040aef01 ffffffffffffffff 0000000000000000
[   22.059697] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   22.061014] page dumped because: kasan: bad access detected
[   22.061428] 
[   22.061646] Memory state around the buggy address:
[   22.062217]  ffff888102bbdf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.062750]  ffff888102bbe000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.063825] >ffff888102bbe080: 00 00 00 00 00 00 00 00 00 01 fe fe fe fe fe fe
[   22.064289]                                               ^
[   22.064873]  ffff888102bbe100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.065742]  ffff888102bbe180: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.066848] ==================================================================

test log snippet boot parser 105

[   21.675617] ==================================================================
[   21.676433] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xd72/0x11d0
[   21.677343] Write of size 1 at addr ffff88810039c4c9 by task kunit_try_catch/166
[   21.678000] 
[   21.678344] CPU: 0 UID: 0 PID: 166 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.678474] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.678509] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.678563] Call Trace:
[   21.678751]  <TASK>
[   21.678799]  dump_stack_lvl+0x73/0xb0
[   21.678883]  print_report+0xd1/0x640
[   21.678947]  ? __virt_addr_valid+0x1db/0x2d0
[   21.679032]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.679123]  kasan_report+0x102/0x140
[   21.679217]  ? krealloc_less_oob_helper+0xd72/0x11d0
[   21.679295]  ? krealloc_less_oob_helper+0xd72/0x11d0
[   21.679369]  __asan_report_store1_noabort+0x1b/0x30
[   21.679462]  krealloc_less_oob_helper+0xd72/0x11d0
[   21.679558]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   21.679741]  ? finish_task_switch.isra.0+0x153/0x700
[   21.679778]  ? __switch_to+0x5d9/0xf60
[   21.679814]  ? __schedule+0xc6f/0x27d0
[   21.679848]  ? __pfx_read_tsc+0x10/0x10
[   21.679880]  krealloc_less_oob+0x1c/0x30
[   21.679912]  kunit_try_run_case+0x1b3/0x490
[   21.679947]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.679977]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.680008]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.680038]  ? __kthread_parkme+0x82/0x160
[   21.680067]  ? preempt_count_sub+0x50/0x80
[   21.680098]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.680129]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.680160]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.680225]  kthread+0x324/0x6d0
[   21.680257]  ? trace_preempt_on+0x20/0xc0
[   21.680290]  ? __pfx_kthread+0x10/0x10
[   21.680318]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.680348]  ? calculate_sigpending+0x7b/0xa0
[   21.680382]  ? __pfx_kthread+0x10/0x10
[   21.680427]  ret_from_fork+0x41/0x80
[   21.680456]  ? __pfx_kthread+0x10/0x10
[   21.680485]  ret_from_fork_asm+0x1a/0x30
[   21.680528]  </TASK>
[   21.680544] 
[   21.700110] Allocated by task 166:
[   21.700553]  kasan_save_stack+0x3d/0x60
[   21.701358]  kasan_save_track+0x18/0x40
[   21.701805]  kasan_save_alloc_info+0x3b/0x50
[   21.702885]  __kasan_krealloc+0x190/0x1f0
[   21.703379]  krealloc_noprof+0xf3/0x340
[   21.703864]  krealloc_less_oob_helper+0x1ab/0x11d0
[   21.704777]  krealloc_less_oob+0x1c/0x30
[   21.705448]  kunit_try_run_case+0x1b3/0x490
[   21.705971]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.706670]  kthread+0x324/0x6d0
[   21.707275]  ret_from_fork+0x41/0x80
[   21.707686]  ret_from_fork_asm+0x1a/0x30
[   21.708368] 
[   21.708734] The buggy address belongs to the object at ffff88810039c400
[   21.708734]  which belongs to the cache kmalloc-256 of size 256
[   21.709973] The buggy address is located 0 bytes to the right of
[   21.709973]  allocated 201-byte region [ffff88810039c400, ffff88810039c4c9)
[   21.711277] 
[   21.711524] The buggy address belongs to the physical page:
[   21.712056] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10039c
[   21.712557] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.713034] flags: 0x200000000000040(head|node=0|zone=2)
[   21.713874] page_type: f5(slab)
[   21.714277] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.715440] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.716294] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.717068] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.717776] head: 0200000000000001 ffffea000400e701 ffffffffffffffff 0000000000000000
[   21.718552] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   21.719320] page dumped because: kasan: bad access detected
[   21.719644] 
[   21.720122] Memory state around the buggy address:
[   21.720919]  ffff88810039c380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.722156]  ffff88810039c400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.722687] >ffff88810039c480: 00 00 00 00 00 00 00 00 00 01 fc fc fc fc fc fc
[   21.723512]                                               ^
[   21.724468]  ffff88810039c500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.725064]  ffff88810039c580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.726061] ==================================================================

test log snippet boot parser 106

[   21.877537] ==================================================================
[   21.878732] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xd49/0x11d0
[   21.879247] Write of size 1 at addr ffff88810039c4eb by task kunit_try_catch/166
[   21.879990] 
[   21.880235] CPU: 0 UID: 0 PID: 166 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.880377] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.880427] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.880480] Call Trace:
[   21.880517]  <TASK>
[   21.880551]  dump_stack_lvl+0x73/0xb0
[   21.880782]  print_report+0xd1/0x640
[   21.880848]  ? __virt_addr_valid+0x1db/0x2d0
[   21.880907]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.880971]  kasan_report+0x102/0x140
[   21.881024]  ? krealloc_less_oob_helper+0xd49/0x11d0
[   21.881086]  ? krealloc_less_oob_helper+0xd49/0x11d0
[   21.881157]  __asan_report_store1_noabort+0x1b/0x30
[   21.881213]  krealloc_less_oob_helper+0xd49/0x11d0
[   21.881319]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   21.881413]  ? finish_task_switch.isra.0+0x153/0x700
[   21.881473]  ? __switch_to+0x5d9/0xf60
[   21.881534]  ? __schedule+0xc6f/0x27d0
[   21.881721]  ? __pfx_read_tsc+0x10/0x10
[   21.881802]  krealloc_less_oob+0x1c/0x30
[   21.881865]  kunit_try_run_case+0x1b3/0x490
[   21.881928]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.881983]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.882037]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.882091]  ? __kthread_parkme+0x82/0x160
[   21.882142]  ? preempt_count_sub+0x50/0x80
[   21.882197]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.882255]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.882314]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.882386]  kthread+0x324/0x6d0
[   21.882457]  ? trace_preempt_on+0x20/0xc0
[   21.882513]  ? __pfx_kthread+0x10/0x10
[   21.882566]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.882621]  ? calculate_sigpending+0x7b/0xa0
[   21.882681]  ? __pfx_kthread+0x10/0x10
[   21.882714]  ret_from_fork+0x41/0x80
[   21.882744]  ? __pfx_kthread+0x10/0x10
[   21.882773]  ret_from_fork_asm+0x1a/0x30
[   21.882816]  </TASK>
[   21.882831] 
[   21.907304] Allocated by task 166:
[   21.908284]  kasan_save_stack+0x3d/0x60
[   21.908723]  kasan_save_track+0x18/0x40
[   21.909422]  kasan_save_alloc_info+0x3b/0x50
[   21.910500]  __kasan_krealloc+0x190/0x1f0
[   21.911081]  krealloc_noprof+0xf3/0x340
[   21.911619]  krealloc_less_oob_helper+0x1ab/0x11d0
[   21.912248]  krealloc_less_oob+0x1c/0x30
[   21.913749]  kunit_try_run_case+0x1b3/0x490
[   21.914952]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.915597]  kthread+0x324/0x6d0
[   21.916292]  ret_from_fork+0x41/0x80
[   21.917780]  ret_from_fork_asm+0x1a/0x30
[   21.918325] 
[   21.918573] The buggy address belongs to the object at ffff88810039c400
[   21.918573]  which belongs to the cache kmalloc-256 of size 256
[   21.919314] The buggy address is located 34 bytes to the right of
[   21.919314]  allocated 201-byte region [ffff88810039c400, ffff88810039c4c9)
[   21.921290] 
[   21.921545] The buggy address belongs to the physical page:
[   21.922489] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10039c
[   21.923835] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.924731] flags: 0x200000000000040(head|node=0|zone=2)
[   21.925616] page_type: f5(slab)
[   21.925979] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.927878] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.929235] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.930026] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.930916] head: 0200000000000001 ffffea000400e701 ffffffffffffffff 0000000000000000
[   21.931816] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   21.932602] page dumped because: kasan: bad access detected
[   21.933170] 
[   21.933416] Memory state around the buggy address:
[   21.933904]  ffff88810039c380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.934934]  ffff88810039c400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.935722] >ffff88810039c480: 00 00 00 00 00 00 00 00 00 01 fc fc fc fc fc fc
[   21.936377]                                                           ^
[   21.937145]  ffff88810039c500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.937868]  ffff88810039c580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.938663] ==================================================================
---
[   21.829382] ==================================================================
[   21.830038] BUG: KASAN: slab-out-of-bounds in krealloc_less_oob_helper+0xe92/0x11d0
[   21.830653] Write of size 1 at addr ffff88810039c4ea by task kunit_try_catch/166
[   21.832040] 
[   21.832333] CPU: 0 UID: 0 PID: 166 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.832456] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.832492] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.832547] Call Trace:
[   21.832585]  <TASK>
[   21.832620]  dump_stack_lvl+0x73/0xb0
[   21.832694]  print_report+0xd1/0x640
[   21.832773]  ? __virt_addr_valid+0x1db/0x2d0
[   21.832841]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.832907]  kasan_report+0x102/0x140
[   21.833115]  ? krealloc_less_oob_helper+0xe92/0x11d0
[   21.833225]  ? krealloc_less_oob_helper+0xe92/0x11d0
[   21.833383]  __asan_report_store1_noabort+0x1b/0x30
[   21.833491]  krealloc_less_oob_helper+0xe92/0x11d0
[   21.833569]  ? __pfx_krealloc_less_oob_helper+0x10/0x10
[   21.833743]  ? finish_task_switch.isra.0+0x153/0x700
[   21.833813]  ? __switch_to+0x5d9/0xf60
[   21.833878]  ? __schedule+0xc6f/0x27d0
[   21.833930]  ? __pfx_read_tsc+0x10/0x10
[   21.833964]  krealloc_less_oob+0x1c/0x30
[   21.834000]  kunit_try_run_case+0x1b3/0x490
[   21.834033]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.834064]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.834094]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.834125]  ? __kthread_parkme+0x82/0x160
[   21.834153]  ? preempt_count_sub+0x50/0x80
[   21.834184]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.834237]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.834270]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.834301]  kthread+0x324/0x6d0
[   21.834328]  ? trace_preempt_on+0x20/0xc0
[   21.834365]  ? __pfx_kthread+0x10/0x10
[   21.834394]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.834440]  ? calculate_sigpending+0x7b/0xa0
[   21.834473]  ? __pfx_kthread+0x10/0x10
[   21.834502]  ret_from_fork+0x41/0x80
[   21.834530]  ? __pfx_kthread+0x10/0x10
[   21.834559]  ret_from_fork_asm+0x1a/0x30
[   21.834646]  </TASK>
[   21.834681] 
[   21.853264] Allocated by task 166:
[   21.853786]  kasan_save_stack+0x3d/0x60
[   21.854167]  kasan_save_track+0x18/0x40
[   21.854808]  kasan_save_alloc_info+0x3b/0x50
[   21.855320]  __kasan_krealloc+0x190/0x1f0
[   21.856162]  krealloc_noprof+0xf3/0x340
[   21.856515]  krealloc_less_oob_helper+0x1ab/0x11d0
[   21.856990]  krealloc_less_oob+0x1c/0x30
[   21.857370]  kunit_try_run_case+0x1b3/0x490
[   21.857815]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.858290]  kthread+0x324/0x6d0
[   21.858919]  ret_from_fork+0x41/0x80
[   21.859327]  ret_from_fork_asm+0x1a/0x30
[   21.859899] 
[   21.860122] The buggy address belongs to the object at ffff88810039c400
[   21.860122]  which belongs to the cache kmalloc-256 of size 256
[   21.861160] The buggy address is located 33 bytes to the right of
[   21.861160]  allocated 201-byte region [ffff88810039c400, ffff88810039c4c9)
[   21.862506] 
[   21.862907] The buggy address belongs to the physical page:
[   21.863457] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10039c
[   21.864055] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.864830] flags: 0x200000000000040(head|node=0|zone=2)
[   21.865206] page_type: f5(slab)
[   21.865602] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.866196] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.867448] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.868546] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.869322] head: 0200000000000001 ffffea000400e701 ffffffffffffffff 0000000000000000
[   21.870062] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   21.870849] page dumped because: kasan: bad access detected
[   21.871396] 
[   21.871734] Memory state around the buggy address:
[   21.872084]  ffff88810039c380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.873144]  ffff88810039c400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.873867] >ffff88810039c480: 00 00 00 00 00 00 00 00 00 01 fc fc fc fc fc fc
[   21.874450]                                                           ^
[   21.874934]  ffff88810039c500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.875825]  ffff88810039c580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.876348] ==================================================================

test log snippet boot parser 107


[   21.946930] ==================================================================
[   21.947642] BUG: KASAN: slab-out-of-bounds in krealloc_more_oob_helper+0x823/0x930
[   21.948291] Write of size 1 at addr ffff888102bbe0eb by task kunit_try_catch/168
[   21.948956] 
[   21.949745] CPU: 1 UID: 0 PID: 168 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.949838] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.949870] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.949925] Call Trace:
[   21.949957]  <TASK>
[   21.949995]  dump_stack_lvl+0x73/0xb0
[   21.950077]  print_report+0xd1/0x640
[   21.950135]  ? __virt_addr_valid+0x1db/0x2d0
[   21.950178]  ? kasan_addr_to_slab+0x11/0xa0
[   21.950264]  kasan_report+0x102/0x140
[   21.950328]  ? krealloc_more_oob_helper+0x823/0x930
[   21.950416]  ? krealloc_more_oob_helper+0x823/0x930
[   21.950487]  __asan_report_store1_noabort+0x1b/0x30
[   21.950547]  krealloc_more_oob_helper+0x823/0x930
[   21.950615]  ? __schedule+0xc6f/0x27d0
[   21.950684]  ? __pfx_krealloc_more_oob_helper+0x10/0x10
[   21.950750]  ? finish_task_switch.isra.0+0x153/0x700
[   21.950812]  ? __switch_to+0x5d9/0xf60
[   21.950897]  ? __schedule+0xc6f/0x27d0
[   21.950975]  ? __pfx_read_tsc+0x10/0x10
[   21.951017]  krealloc_large_more_oob+0x1c/0x30
[   21.951048]  kunit_try_run_case+0x1b3/0x490
[   21.951084]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.951114]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.951146]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.951177]  ? __kthread_parkme+0x82/0x160
[   21.951243]  ? preempt_count_sub+0x50/0x80
[   21.951276]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.951309]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.951341]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.951372]  kthread+0x324/0x6d0
[   21.951416]  ? trace_preempt_on+0x20/0xc0
[   21.951450]  ? __pfx_kthread+0x10/0x10
[   21.951479]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.951508]  ? calculate_sigpending+0x7b/0xa0
[   21.951542]  ? __pfx_kthread+0x10/0x10
[   21.951571]  ret_from_fork+0x41/0x80
[   21.951639]  ? __pfx_kthread+0x10/0x10
[   21.951699]  ret_from_fork_asm+0x1a/0x30
[   21.951774]  </TASK>
[   21.951803] 
[   21.969071] The buggy address belongs to the physical page:
[   21.970093] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bbc
[   21.971063] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.972082] flags: 0x200000000000040(head|node=0|zone=2)
[   21.972843] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   21.973756] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   21.974511] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   21.975387] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   21.976396] head: 0200000000000002 ffffea00040aef01 ffffffffffffffff 0000000000000000
[   21.977096] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   21.977983] page dumped because: kasan: bad access detected
[   21.978567] 
[   21.978977] Memory state around the buggy address:
[   21.979498]  ffff888102bbdf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.980521]  ffff888102bbe000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.980938] >ffff888102bbe080: 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fe fe
[   21.981551]                                                           ^
[   21.982148]  ffff888102bbe100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   21.983044]  ffff888102bbe180: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   21.983917] ==================================================================

test log snippet boot parser 108


[   21.568289] ==================================================================
[   21.569208] BUG: KASAN: slab-out-of-bounds in krealloc_more_oob_helper+0x823/0x930
[   21.570315] Write of size 1 at addr ffff888100a9beeb by task kunit_try_catch/164
[   21.571017] 
[   21.571433] CPU: 1 UID: 0 PID: 164 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.571540] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.571572] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.572051] Call Trace:
[   21.572091]  <TASK>
[   21.572126]  dump_stack_lvl+0x73/0xb0
[   21.572255]  print_report+0xd1/0x640
[   21.572311]  ? __virt_addr_valid+0x1db/0x2d0
[   21.572350]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.572385]  kasan_report+0x102/0x140
[   21.572435]  ? krealloc_more_oob_helper+0x823/0x930
[   21.572466]  ? krealloc_more_oob_helper+0x823/0x930
[   21.572502]  __asan_report_store1_noabort+0x1b/0x30
[   21.572532]  krealloc_more_oob_helper+0x823/0x930
[   21.572559]  ? __schedule+0xc6f/0x27d0
[   21.572649]  ? __pfx_krealloc_more_oob_helper+0x10/0x10
[   21.572725]  ? finish_task_switch.isra.0+0x153/0x700
[   21.572776]  ? __switch_to+0x5d9/0xf60
[   21.572813]  ? __schedule+0xc6f/0x27d0
[   21.572844]  ? __pfx_read_tsc+0x10/0x10
[   21.572878]  krealloc_more_oob+0x1c/0x30
[   21.572906]  kunit_try_run_case+0x1b3/0x490
[   21.572941]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.572970]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.573002]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.573033]  ? __kthread_parkme+0x82/0x160
[   21.573061]  ? preempt_count_sub+0x50/0x80
[   21.573092]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.573124]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.573157]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.573220]  kthread+0x324/0x6d0
[   21.573253]  ? trace_preempt_on+0x20/0xc0
[   21.573285]  ? __pfx_kthread+0x10/0x10
[   21.573313]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.573343]  ? calculate_sigpending+0x7b/0xa0
[   21.573377]  ? __pfx_kthread+0x10/0x10
[   21.573422]  ret_from_fork+0x41/0x80
[   21.573452]  ? __pfx_kthread+0x10/0x10
[   21.573481]  ret_from_fork_asm+0x1a/0x30
[   21.573525]  </TASK>
[   21.573540] 
[   21.593417] Allocated by task 164:
[   21.594257]  kasan_save_stack+0x3d/0x60
[   21.594779]  kasan_save_track+0x18/0x40
[   21.595169]  kasan_save_alloc_info+0x3b/0x50
[   21.595784]  __kasan_krealloc+0x190/0x1f0
[   21.596341]  krealloc_noprof+0xf3/0x340
[   21.597112]  krealloc_more_oob_helper+0x1aa/0x930
[   21.597564]  krealloc_more_oob+0x1c/0x30
[   21.598520]  kunit_try_run_case+0x1b3/0x490
[   21.599024]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.599661]  kthread+0x324/0x6d0
[   21.600281]  ret_from_fork+0x41/0x80
[   21.600860]  ret_from_fork_asm+0x1a/0x30
[   21.601327] 
[   21.601628] The buggy address belongs to the object at ffff888100a9be00
[   21.601628]  which belongs to the cache kmalloc-256 of size 256
[   21.602640] The buggy address is located 0 bytes to the right of
[   21.602640]  allocated 235-byte region [ffff888100a9be00, ffff888100a9beeb)
[   21.604188] 
[   21.604741] The buggy address belongs to the physical page:
[   21.605231] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x100a9a
[   21.606035] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.607023] flags: 0x200000000000040(head|node=0|zone=2)
[   21.607566] page_type: f5(slab)
[   21.608273] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.609888] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.610849] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.611446] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.612015] head: 0200000000000001 ffffea000402a681 ffffffffffffffff 0000000000000000
[   21.613028] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   21.613746] page dumped because: kasan: bad access detected
[   21.614548] 
[   21.615016] Memory state around the buggy address:
[   21.615394]  ffff888100a9bd80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.615819]  ffff888100a9be00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.616467] >ffff888100a9be80: 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc fc
[   21.617307]                                                           ^
[   21.618160]  ffff888100a9bf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.618578]  ffff888100a9bf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.619337] ==================================================================

test log snippet boot parser 109


[   21.986561] ==================================================================
[   21.987365] BUG: KASAN: slab-out-of-bounds in krealloc_more_oob_helper+0x7ed/0x930
[   21.988689] Write of size 1 at addr ffff888102bbe0f0 by task kunit_try_catch/168
[   21.989573] 
[   21.989963] CPU: 1 UID: 0 PID: 168 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.990071] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.990100] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.990130] Call Trace:
[   21.990149]  <TASK>
[   21.990171]  dump_stack_lvl+0x73/0xb0
[   21.990323]  print_report+0xd1/0x640
[   21.990458]  ? __virt_addr_valid+0x1db/0x2d0
[   21.990524]  ? kasan_addr_to_slab+0x11/0xa0
[   21.990647]  kasan_report+0x102/0x140
[   21.990706]  ? krealloc_more_oob_helper+0x7ed/0x930
[   21.990765]  ? krealloc_more_oob_helper+0x7ed/0x930
[   21.990832]  __asan_report_store1_noabort+0x1b/0x30
[   21.990892]  krealloc_more_oob_helper+0x7ed/0x930
[   21.990945]  ? __schedule+0xc6f/0x27d0
[   21.991002]  ? __pfx_krealloc_more_oob_helper+0x10/0x10
[   21.991061]  ? finish_task_switch.isra.0+0x153/0x700
[   21.991111]  ? __switch_to+0x5d9/0xf60
[   21.991147]  ? __schedule+0xc6f/0x27d0
[   21.991178]  ? __pfx_read_tsc+0x10/0x10
[   21.991249]  krealloc_large_more_oob+0x1c/0x30
[   21.991279]  kunit_try_run_case+0x1b3/0x490
[   21.991314]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.991344]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.991375]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.991423]  ? __kthread_parkme+0x82/0x160
[   21.991453]  ? preempt_count_sub+0x50/0x80
[   21.991484]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.991516]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.991548]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.991579]  kthread+0x324/0x6d0
[   21.991666]  ? trace_preempt_on+0x20/0xc0
[   21.991718]  ? __pfx_kthread+0x10/0x10
[   21.991749]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.991779]  ? calculate_sigpending+0x7b/0xa0
[   21.991811]  ? __pfx_kthread+0x10/0x10
[   21.991840]  ret_from_fork+0x41/0x80
[   21.991869]  ? __pfx_kthread+0x10/0x10
[   21.991898]  ret_from_fork_asm+0x1a/0x30
[   21.991940]  </TASK>
[   21.991956] 
[   22.010924] The buggy address belongs to the physical page:
[   22.011499] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bbc
[   22.012363] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.012826] flags: 0x200000000000040(head|node=0|zone=2)
[   22.013945] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.014848] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.015462] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   22.016434] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   22.017178] head: 0200000000000002 ffffea00040aef01 ffffffffffffffff 0000000000000000
[   22.018105] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   22.018946] page dumped because: kasan: bad access detected
[   22.019449] 
[   22.019842] Memory state around the buggy address:
[   22.020363]  ffff888102bbdf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.021132]  ffff888102bbe000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   22.022128] >ffff888102bbe080: 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fe fe
[   22.022790]                                                              ^
[   22.023377]  ffff888102bbe100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.024125]  ffff888102bbe180: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   22.024886] ==================================================================

test log snippet boot parser 110


[   21.620492] ==================================================================
[   21.621174] BUG: KASAN: slab-out-of-bounds in krealloc_more_oob_helper+0x7ed/0x930
[   21.621586] Write of size 1 at addr ffff888100a9bef0 by task kunit_try_catch/164
[   21.622732] 
[   21.623036] CPU: 1 UID: 0 PID: 164 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.623144] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.623176] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.623240] Call Trace:
[   21.623270]  <TASK>
[   21.623301]  dump_stack_lvl+0x73/0xb0
[   21.623372]  print_report+0xd1/0x640
[   21.623450]  ? __virt_addr_valid+0x1db/0x2d0
[   21.623515]  ? kasan_complete_mode_report_info+0x2a/0x200
[   21.623576]  kasan_report+0x102/0x140
[   21.623629]  ? krealloc_more_oob_helper+0x7ed/0x930
[   21.623686]  ? krealloc_more_oob_helper+0x7ed/0x930
[   21.623747]  __asan_report_store1_noabort+0x1b/0x30
[   21.623939]  krealloc_more_oob_helper+0x7ed/0x930
[   21.624047]  ? __schedule+0xc6f/0x27d0
[   21.624119]  ? __pfx_krealloc_more_oob_helper+0x10/0x10
[   21.624185]  ? finish_task_switch.isra.0+0x153/0x700
[   21.624264]  ? __switch_to+0x5d9/0xf60
[   21.624332]  ? __schedule+0xc6f/0x27d0
[   21.624386]  ? __pfx_read_tsc+0x10/0x10
[   21.624506]  krealloc_more_oob+0x1c/0x30
[   21.624559]  kunit_try_run_case+0x1b3/0x490
[   21.624755]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.624828]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.624893]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.624950]  ? __kthread_parkme+0x82/0x160
[   21.625001]  ? preempt_count_sub+0x50/0x80
[   21.625057]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.625118]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.625178]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.625292]  kthread+0x324/0x6d0
[   21.625356]  ? trace_preempt_on+0x20/0xc0
[   21.625431]  ? __pfx_kthread+0x10/0x10
[   21.625490]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.625547]  ? calculate_sigpending+0x7b/0xa0
[   21.625690]  ? __pfx_kthread+0x10/0x10
[   21.625769]  ret_from_fork+0x41/0x80
[   21.625823]  ? __pfx_kthread+0x10/0x10
[   21.625876]  ret_from_fork_asm+0x1a/0x30
[   21.625947]  </TASK>
[   21.625967] 
[   21.642790] Allocated by task 164:
[   21.643159]  kasan_save_stack+0x3d/0x60
[   21.643688]  kasan_save_track+0x18/0x40
[   21.644145]  kasan_save_alloc_info+0x3b/0x50
[   21.644686]  __kasan_krealloc+0x190/0x1f0
[   21.645335]  krealloc_noprof+0xf3/0x340
[   21.646137]  krealloc_more_oob_helper+0x1aa/0x930
[   21.646891]  krealloc_more_oob+0x1c/0x30
[   21.647360]  kunit_try_run_case+0x1b3/0x490
[   21.647881]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.648730]  kthread+0x324/0x6d0
[   21.649217]  ret_from_fork+0x41/0x80
[   21.650117]  ret_from_fork_asm+0x1a/0x30
[   21.650788] 
[   21.651046] The buggy address belongs to the object at ffff888100a9be00
[   21.651046]  which belongs to the cache kmalloc-256 of size 256
[   21.652307] The buggy address is located 5 bytes to the right of
[   21.652307]  allocated 235-byte region [ffff888100a9be00, ffff888100a9beeb)
[   21.653499] 
[   21.654147] The buggy address belongs to the physical page:
[   21.654821] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x100a9a
[   21.655812] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   21.656354] flags: 0x200000000000040(head|node=0|zone=2)
[   21.656771] page_type: f5(slab)
[   21.657022] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.657922] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.658870] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   21.659830] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   21.660525] head: 0200000000000001 ffffea000402a681 ffffffffffffffff 0000000000000000
[   21.661450] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   21.662159] page dumped because: kasan: bad access detected
[   21.663060] 
[   21.663241] Memory state around the buggy address:
[   21.664002]  ffff888100a9bd80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.664782]  ffff888100a9be00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   21.665326] >ffff888100a9be80: 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc fc
[   21.666107]                                                              ^
[   21.666838]  ffff888100a9bf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.667533]  ffff888100a9bf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.668276] ==================================================================

test log snippet boot parser 111


[   23.083429] ==================================================================
[   23.084776] BUG: KASAN: slab-out-of-bounds in ksize_unpoisons_memory+0x81e/0x9b0
[   23.085553] Read of size 1 at addr ffff888102a59373 by task kunit_try_catch/202
[   23.086500] 
[   23.086855] CPU: 1 UID: 0 PID: 202 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.086960] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.086993] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.087046] Call Trace:
[   23.087076]  <TASK>
[   23.087111]  dump_stack_lvl+0x73/0xb0
[   23.087207]  print_report+0xd1/0x640
[   23.087277]  ? __virt_addr_valid+0x1db/0x2d0
[   23.087345]  ? kasan_complete_mode_report_info+0x2a/0x200
[   23.087425]  kasan_report+0x102/0x140
[   23.087485]  ? ksize_unpoisons_memory+0x81e/0x9b0
[   23.087551]  ? ksize_unpoisons_memory+0x81e/0x9b0
[   23.087625]  __asan_report_load1_noabort+0x18/0x20
[   23.087681]  ksize_unpoisons_memory+0x81e/0x9b0
[   23.087795]  ? __pfx_ksize_unpoisons_memory+0x10/0x10
[   23.087843]  ? finish_task_switch.isra.0+0x153/0x700
[   23.087878]  ? __switch_to+0x5d9/0xf60
[   23.087914]  ? __schedule+0xc6f/0x27d0
[   23.087947]  ? __pfx_read_tsc+0x10/0x10
[   23.087974]  ? ktime_get_ts64+0x86/0x230
[   23.088009]  kunit_try_run_case+0x1b3/0x490
[   23.088044]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.088074]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.088130]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.088161]  ? __kthread_parkme+0x82/0x160
[   23.088223]  ? preempt_count_sub+0x50/0x80
[   23.088260]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.088292]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.088324]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.088356]  kthread+0x324/0x6d0
[   23.088384]  ? trace_preempt_on+0x20/0xc0
[   23.088432]  ? __pfx_kthread+0x10/0x10
[   23.088461]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.088490]  ? calculate_sigpending+0x7b/0xa0
[   23.088525]  ? __pfx_kthread+0x10/0x10
[   23.088554]  ret_from_fork+0x41/0x80
[   23.088586]  ? __pfx_kthread+0x10/0x10
[   23.088667]  ret_from_fork_asm+0x1a/0x30
[   23.088743]  </TASK>
[   23.088767] 
[   23.105010] Allocated by task 202:
[   23.105282]  kasan_save_stack+0x3d/0x60
[   23.105805]  kasan_save_track+0x18/0x40
[   23.106371]  kasan_save_alloc_info+0x3b/0x50
[   23.106954]  __kasan_kmalloc+0xb7/0xc0
[   23.107415]  __kmalloc_cache_noprof+0x184/0x410
[   23.108057]  ksize_unpoisons_memory+0xc8/0x9b0
[   23.108555]  kunit_try_run_case+0x1b3/0x490
[   23.109154]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.109781]  kthread+0x324/0x6d0
[   23.110171]  ret_from_fork+0x41/0x80
[   23.110937]  ret_from_fork_asm+0x1a/0x30
[   23.111297] 
[   23.111544] The buggy address belongs to the object at ffff888102a59300
[   23.111544]  which belongs to the cache kmalloc-128 of size 128
[   23.112489] The buggy address is located 0 bytes to the right of
[   23.112489]  allocated 115-byte region [ffff888102a59300, ffff888102a59373)
[   23.113981] 
[   23.114224] The buggy address belongs to the physical page:
[   23.114788] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   23.115263] flags: 0x200000000000000(node=0|zone=2)
[   23.115915] page_type: f5(slab)
[   23.116455] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   23.117387] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   23.118311] page dumped because: kasan: bad access detected
[   23.119024] 
[   23.119292] Memory state around the buggy address:
[   23.120346]  ffff888102a59200: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.121886]  ffff888102a59280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.122521] >ffff888102a59300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   23.123393]                                                              ^
[   23.124197]  ffff888102a59380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.125011]  ffff888102a59400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.125958] ==================================================================

test log snippet boot parser 112


[   23.127542] ==================================================================
[   23.128938] BUG: KASAN: slab-out-of-bounds in ksize_unpoisons_memory+0x7eb/0x9b0
[   23.129458] Read of size 1 at addr ffff888102a59378 by task kunit_try_catch/202
[   23.129984] 
[   23.130223] CPU: 1 UID: 0 PID: 202 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.130328] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.130367] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.130470] Call Trace:
[   23.130507]  <TASK>
[   23.130541]  dump_stack_lvl+0x73/0xb0
[   23.130885]  print_report+0xd1/0x640
[   23.130978]  ? __virt_addr_valid+0x1db/0x2d0
[   23.131057]  ? kasan_complete_mode_report_info+0x2a/0x200
[   23.131127]  kasan_report+0x102/0x140
[   23.131213]  ? ksize_unpoisons_memory+0x7eb/0x9b0
[   23.131295]  ? ksize_unpoisons_memory+0x7eb/0x9b0
[   23.131368]  __asan_report_load1_noabort+0x18/0x20
[   23.131441]  ksize_unpoisons_memory+0x7eb/0x9b0
[   23.131507]  ? __pfx_ksize_unpoisons_memory+0x10/0x10
[   23.131569]  ? finish_task_switch.isra.0+0x153/0x700
[   23.131666]  ? __switch_to+0x5d9/0xf60
[   23.131728]  ? __schedule+0xc6f/0x27d0
[   23.131998]  ? __pfx_read_tsc+0x10/0x10
[   23.132055]  ? ktime_get_ts64+0x86/0x230
[   23.132121]  kunit_try_run_case+0x1b3/0x490
[   23.132222]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.132268]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.132301]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.132333]  ? __kthread_parkme+0x82/0x160
[   23.132361]  ? preempt_count_sub+0x50/0x80
[   23.132393]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.132445]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.132478]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.132509]  kthread+0x324/0x6d0
[   23.132536]  ? trace_preempt_on+0x20/0xc0
[   23.132567]  ? __pfx_kthread+0x10/0x10
[   23.132657]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.132719]  ? calculate_sigpending+0x7b/0xa0
[   23.132777]  ? __pfx_kthread+0x10/0x10
[   23.132828]  ret_from_fork+0x41/0x80
[   23.132876]  ? __pfx_kthread+0x10/0x10
[   23.132920]  ret_from_fork_asm+0x1a/0x30
[   23.132965]  </TASK>
[   23.132981] 
[   23.151045] Allocated by task 202:
[   23.151465]  kasan_save_stack+0x3d/0x60
[   23.152018]  kasan_save_track+0x18/0x40
[   23.152554]  kasan_save_alloc_info+0x3b/0x50
[   23.153135]  __kasan_kmalloc+0xb7/0xc0
[   23.153673]  __kmalloc_cache_noprof+0x184/0x410
[   23.154253]  ksize_unpoisons_memory+0xc8/0x9b0
[   23.154958]  kunit_try_run_case+0x1b3/0x490
[   23.155509]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.156277]  kthread+0x324/0x6d0
[   23.156786]  ret_from_fork+0x41/0x80
[   23.157276]  ret_from_fork_asm+0x1a/0x30
[   23.157872] 
[   23.158219] The buggy address belongs to the object at ffff888102a59300
[   23.158219]  which belongs to the cache kmalloc-128 of size 128
[   23.159584] The buggy address is located 5 bytes to the right of
[   23.159584]  allocated 115-byte region [ffff888102a59300, ffff888102a59373)
[   23.160656] 
[   23.160889] The buggy address belongs to the physical page:
[   23.161945] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   23.163139] flags: 0x200000000000000(node=0|zone=2)
[   23.163560] page_type: f5(slab)
[   23.164002] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   23.164626] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   23.165682] page dumped because: kasan: bad access detected
[   23.166479] 
[   23.166972] Memory state around the buggy address:
[   23.167327]  ffff888102a59200: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.167805]  ffff888102a59280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.168627] >ffff888102a59300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   23.169018]                                                                 ^
[   23.169960]  ffff888102a59380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.170932]  ffff888102a59400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.171353] ==================================================================
---
[   23.172734] ==================================================================
[   23.173857] BUG: KASAN: slab-out-of-bounds in ksize_unpoisons_memory+0x7b8/0x9b0
[   23.175327] Read of size 1 at addr ffff888102a5937f by task kunit_try_catch/202
[   23.175924] 
[   23.176135] CPU: 1 UID: 0 PID: 202 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.176269] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.176304] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.176359] Call Trace:
[   23.176392]  <TASK>
[   23.176441]  dump_stack_lvl+0x73/0xb0
[   23.176513]  print_report+0xd1/0x640
[   23.176570]  ? __virt_addr_valid+0x1db/0x2d0
[   23.176687]  ? kasan_complete_mode_report_info+0x2a/0x200
[   23.176753]  kasan_report+0x102/0x140
[   23.176811]  ? ksize_unpoisons_memory+0x7b8/0x9b0
[   23.176875]  ? ksize_unpoisons_memory+0x7b8/0x9b0
[   23.176942]  __asan_report_load1_noabort+0x18/0x20
[   23.176994]  ksize_unpoisons_memory+0x7b8/0x9b0
[   23.177057]  ? __pfx_ksize_unpoisons_memory+0x10/0x10
[   23.177117]  ? finish_task_switch.isra.0+0x153/0x700
[   23.177166]  ? __switch_to+0x5d9/0xf60
[   23.177234]  ? __schedule+0xc6f/0x27d0
[   23.177267]  ? __pfx_read_tsc+0x10/0x10
[   23.177295]  ? ktime_get_ts64+0x86/0x230
[   23.177330]  kunit_try_run_case+0x1b3/0x490
[   23.177364]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.177394]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.177446]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.177477]  ? __kthread_parkme+0x82/0x160
[   23.177505]  ? preempt_count_sub+0x50/0x80
[   23.177536]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.177568]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.177661]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.177730]  kthread+0x324/0x6d0
[   23.177766]  ? trace_preempt_on+0x20/0xc0
[   23.177797]  ? __pfx_kthread+0x10/0x10
[   23.177826]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.177856]  ? calculate_sigpending+0x7b/0xa0
[   23.177888]  ? __pfx_kthread+0x10/0x10
[   23.177917]  ret_from_fork+0x41/0x80
[   23.177945]  ? __pfx_kthread+0x10/0x10
[   23.177973]  ret_from_fork_asm+0x1a/0x30
[   23.178015]  </TASK>
[   23.178030] 
[   23.195176] Allocated by task 202:
[   23.195468]  kasan_save_stack+0x3d/0x60
[   23.196264]  kasan_save_track+0x18/0x40
[   23.196870]  kasan_save_alloc_info+0x3b/0x50
[   23.197346]  __kasan_kmalloc+0xb7/0xc0
[   23.197918]  __kmalloc_cache_noprof+0x184/0x410
[   23.198362]  ksize_unpoisons_memory+0xc8/0x9b0
[   23.198727]  kunit_try_run_case+0x1b3/0x490
[   23.199023]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.199559]  kthread+0x324/0x6d0
[   23.199925]  ret_from_fork+0x41/0x80
[   23.200890]  ret_from_fork_asm+0x1a/0x30
[   23.201360] 
[   23.201569] The buggy address belongs to the object at ffff888102a59300
[   23.201569]  which belongs to the cache kmalloc-128 of size 128
[   23.202376] The buggy address is located 12 bytes to the right of
[   23.202376]  allocated 115-byte region [ffff888102a59300, ffff888102a59373)
[   23.203463] 
[   23.204077] The buggy address belongs to the physical page:
[   23.204749] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   23.205489] flags: 0x200000000000000(node=0|zone=2)
[   23.206119] page_type: f5(slab)
[   23.206533] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   23.207112] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   23.207819] page dumped because: kasan: bad access detected
[   23.208287] 
[   23.208464] Memory state around the buggy address:
[   23.208903]  ffff888102a59200: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.210018]  ffff888102a59280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.210488] >ffff888102a59300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   23.211112]                                                                 ^
[   23.211925]  ffff888102a59380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.212975]  ffff888102a59400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.213811] ==================================================================

test log snippet boot parser 113


[   25.214429] ==================================================================
[   25.215438] BUG: KASAN: slab-out-of-bounds in memchr+0x79/0x90
[   25.216107] Read of size 1 at addr ffff888101af1158 by task kunit_try_catch/263
[   25.216585] 
[   25.216800] CPU: 0 UID: 0 PID: 263 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.216905] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.216942] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.216996] Call Trace:
[   25.217016]  <TASK>
[   25.217036]  dump_stack_lvl+0x73/0xb0
[   25.217080]  print_report+0xd1/0x640
[   25.217114]  ? __virt_addr_valid+0x1db/0x2d0
[   25.217167]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.217211]  kasan_report+0x102/0x140
[   25.217241]  ? memchr+0x79/0x90
[   25.217271]  ? memchr+0x79/0x90
[   25.217303]  __asan_report_load1_noabort+0x18/0x20
[   25.217335]  memchr+0x79/0x90
[   25.217363]  kasan_memchr+0x163/0x320
[   25.217395]  ? __pfx_kasan_memchr+0x10/0x10
[   25.217589]  ? __schedule+0xc6f/0x27d0
[   25.217662]  ? __pfx_read_tsc+0x10/0x10
[   25.217714]  ? ktime_get_ts64+0x86/0x230
[   25.217783]  kunit_try_run_case+0x1b3/0x490
[   25.217849]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.217912]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.217976]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.218040]  ? __kthread_parkme+0x82/0x160
[   25.218095]  ? preempt_count_sub+0x50/0x80
[   25.218169]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.218271]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.218345]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.218433]  kthread+0x324/0x6d0
[   25.218465]  ? trace_preempt_on+0x20/0xc0
[   25.218499]  ? __pfx_kthread+0x10/0x10
[   25.218530]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.218563]  ? calculate_sigpending+0x7b/0xa0
[   25.218636]  ? __pfx_kthread+0x10/0x10
[   25.218698]  ret_from_fork+0x41/0x80
[   25.218733]  ? __pfx_kthread+0x10/0x10
[   25.218764]  ret_from_fork_asm+0x1a/0x30
[   25.218810]  </TASK>
[   25.218826] 
[   25.234828] Allocated by task 263:
[   25.235204]  kasan_save_stack+0x3d/0x60
[   25.236393]  kasan_save_track+0x18/0x40
[   25.237003]  kasan_save_alloc_info+0x3b/0x50
[   25.237538]  __kasan_kmalloc+0xb7/0xc0
[   25.238151]  __kmalloc_cache_noprof+0x184/0x410
[   25.238570]  kasan_memchr+0xad/0x320
[   25.239167]  kunit_try_run_case+0x1b3/0x490
[   25.239990]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.240648]  kthread+0x324/0x6d0
[   25.241039]  ret_from_fork+0x41/0x80
[   25.241776]  ret_from_fork_asm+0x1a/0x30
[   25.242241] 
[   25.242739] The buggy address belongs to the object at ffff888101af1140
[   25.242739]  which belongs to the cache kmalloc-32 of size 32
[   25.243703] The buggy address is located 0 bytes to the right of
[   25.243703]  allocated 24-byte region [ffff888101af1140, ffff888101af1158)
[   25.245495] 
[   25.246318] The buggy address belongs to the physical page:
[   25.246653] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af1
[   25.247353] flags: 0x200000000000000(node=0|zone=2)
[   25.247862] page_type: f5(slab)
[   25.248387] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.249380] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.249841] page dumped because: kasan: bad access detected
[   25.250346] 
[   25.250599] Memory state around the buggy address:
[   25.251065]  ffff888101af1000: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.251914]  ffff888101af1080: 00 00 00 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.252496] >ffff888101af1100: 00 00 07 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.253396]                                                     ^
[   25.254089]  ffff888101af1180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.254878]  ffff888101af1200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.255323] ==================================================================

test log snippet boot parser 114


[   25.262128] ==================================================================
[   25.263035] BUG: KASAN: slab-out-of-bounds in memcmp+0x1b4/0x1d0
[   25.263361] Read of size 1 at addr ffff888102a66e18 by task kunit_try_catch/265
[   25.263994] 
[   25.264227] CPU: 1 UID: 0 PID: 265 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.264348] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.264383] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.264452] Call Trace:
[   25.264484]  <TASK>
[   25.264517]  dump_stack_lvl+0x73/0xb0
[   25.264593]  print_report+0xd1/0x640
[   25.264654]  ? __virt_addr_valid+0x1db/0x2d0
[   25.264723]  ? kasan_complete_mode_report_info+0x2a/0x200
[   25.264786]  kasan_report+0x102/0x140
[   25.264842]  ? memcmp+0x1b4/0x1d0
[   25.264889]  ? memcmp+0x1b4/0x1d0
[   25.264950]  __asan_report_load1_noabort+0x18/0x20
[   25.265008]  memcmp+0x1b4/0x1d0
[   25.265058]  kasan_memcmp+0x190/0x390
[   25.265118]  ? __pfx_kasan_memcmp+0x10/0x10
[   25.265174]  ? sysvec_apic_timer_interrupt+0x50/0x90
[   25.265250]  ? __pfx_kasan_memcmp+0x10/0x10
[   25.265485]  kunit_try_run_case+0x1b3/0x490
[   25.265561]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.265940]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.266026]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.266089]  ? __kthread_parkme+0x82/0x160
[   25.266152]  ? preempt_count_sub+0x50/0x80
[   25.266215]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.266283]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.266386]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.266477]  kthread+0x324/0x6d0
[   25.266514]  ? trace_preempt_on+0x20/0xc0
[   25.266549]  ? __pfx_kthread+0x10/0x10
[   25.266580]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.266680]  ? calculate_sigpending+0x7b/0xa0
[   25.266752]  ? __pfx_kthread+0x10/0x10
[   25.266803]  ret_from_fork+0x41/0x80
[   25.266838]  ? __pfx_kthread+0x10/0x10
[   25.266868]  ret_from_fork_asm+0x1a/0x30
[   25.266912]  </TASK>
[   25.266929] 
[   25.285508] Allocated by task 265:
[   25.285822]  kasan_save_stack+0x3d/0x60
[   25.286912]  kasan_save_track+0x18/0x40
[   25.287522]  kasan_save_alloc_info+0x3b/0x50
[   25.288090]  __kasan_kmalloc+0xb7/0xc0
[   25.288757]  __kmalloc_cache_noprof+0x184/0x410
[   25.289284]  kasan_memcmp+0xb8/0x390
[   25.289659]  kunit_try_run_case+0x1b3/0x490
[   25.290054]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.291029]  kthread+0x324/0x6d0
[   25.291338]  ret_from_fork+0x41/0x80
[   25.291674]  ret_from_fork_asm+0x1a/0x30
[   25.292038] 
[   25.292510] The buggy address belongs to the object at ffff888102a66e00
[   25.292510]  which belongs to the cache kmalloc-32 of size 32
[   25.293391] The buggy address is located 0 bytes to the right of
[   25.293391]  allocated 24-byte region [ffff888102a66e00, ffff888102a66e18)
[   25.294991] 
[   25.295378] The buggy address belongs to the physical page:
[   25.296084] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a66
[   25.297083] flags: 0x200000000000000(node=0|zone=2)
[   25.297523] page_type: f5(slab)
[   25.298061] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.299028] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.299772] page dumped because: kasan: bad access detected
[   25.300397] 
[   25.300873] Memory state around the buggy address:
[   25.301297]  ffff888102a66d00: 00 00 07 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.302103]  ffff888102a66d80: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.302818] >ffff888102a66e00: 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.303757]                             ^
[   25.304047]  ffff888102a66e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.305039]  ffff888102a66f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.305464] ==================================================================

test log snippet boot parser 115


[   24.500358] ==================================================================
[   24.501595] BUG: KASAN: slab-out-of-bounds in mempool_oob_right_helper+0x31a/0x380
[   24.502259] Read of size 1 at addr ffff888102d6a001 by task kunit_try_catch/231
[   24.503640] 
[   24.503862] CPU: 0 UID: 0 PID: 231 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.503968] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.504003] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.504054] Call Trace:
[   24.504084]  <TASK>
[   24.504117]  dump_stack_lvl+0x73/0xb0
[   24.504193]  print_report+0xd1/0x640
[   24.504553]  ? __virt_addr_valid+0x1db/0x2d0
[   24.504688]  ? kasan_addr_to_slab+0x11/0xa0
[   24.504753]  kasan_report+0x102/0x140
[   24.504819]  ? mempool_oob_right_helper+0x31a/0x380
[   24.504882]  ? mempool_oob_right_helper+0x31a/0x380
[   24.504937]  __asan_report_load1_noabort+0x18/0x20
[   24.504970]  mempool_oob_right_helper+0x31a/0x380
[   24.505007]  ? __pfx_mempool_oob_right_helper+0x10/0x10
[   24.505048]  ? finish_task_switch.isra.0+0x153/0x700
[   24.505087]  mempool_kmalloc_large_oob_right+0xb6/0x100
[   24.505123]  ? __pfx_mempool_kmalloc_large_oob_right+0x10/0x10
[   24.505157]  ? __switch_to+0x5d9/0xf60
[   24.505223]  ? __pfx_mempool_kmalloc+0x10/0x10
[   24.505262]  ? __pfx_mempool_kfree+0x10/0x10
[   24.505295]  ? __pfx_read_tsc+0x10/0x10
[   24.505323]  ? ktime_get_ts64+0x86/0x230
[   24.505357]  kunit_try_run_case+0x1b3/0x490
[   24.505394]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.505446]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.505479]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.505511]  ? __kthread_parkme+0x82/0x160
[   24.505541]  ? preempt_count_sub+0x50/0x80
[   24.505572]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.505666]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.505734]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.505777]  kthread+0x324/0x6d0
[   24.505808]  ? trace_preempt_on+0x20/0xc0
[   24.505842]  ? __pfx_kthread+0x10/0x10
[   24.505873]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.505906]  ? calculate_sigpending+0x7b/0xa0
[   24.505941]  ? __pfx_kthread+0x10/0x10
[   24.505972]  ret_from_fork+0x41/0x80
[   24.506001]  ? __pfx_kthread+0x10/0x10
[   24.506031]  ret_from_fork_asm+0x1a/0x30
[   24.506073]  </TASK>
[   24.506089] 
[   24.526907] The buggy address belongs to the physical page:
[   24.527995] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102d68
[   24.529192] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   24.529855] flags: 0x200000000000040(head|node=0|zone=2)
[   24.530479] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   24.531367] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.532111] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   24.532662] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.533184] head: 0200000000000002 ffffea00040b5a01 ffffffffffffffff 0000000000000000
[   24.534277] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   24.535114] page dumped because: kasan: bad access detected
[   24.535780] 
[   24.536071] Memory state around the buggy address:
[   24.536764]  ffff888102d69f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   24.537503]  ffff888102d69f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   24.538097] >ffff888102d6a000: 01 fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   24.539278]                    ^
[   24.539858]  ffff888102d6a080: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   24.540378]  ffff888102d6a100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   24.541169] ==================================================================

test log snippet boot parser 116


[   24.439036] ==================================================================
[   24.440415] BUG: KASAN: slab-out-of-bounds in mempool_oob_right_helper+0x31a/0x380
[   24.441161] Read of size 1 at addr ffff888101ad5e73 by task kunit_try_catch/229
[   24.441800] 
[   24.442014] CPU: 0 UID: 0 PID: 229 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.442131] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.442168] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.442229] Call Trace:
[   24.442263]  <TASK>
[   24.442795]  dump_stack_lvl+0x73/0xb0
[   24.442905]  print_report+0xd1/0x640
[   24.442988]  ? __virt_addr_valid+0x1db/0x2d0
[   24.443089]  ? kasan_complete_mode_report_info+0x2a/0x200
[   24.443161]  kasan_report+0x102/0x140
[   24.443289]  ? mempool_oob_right_helper+0x31a/0x380
[   24.443338]  ? mempool_oob_right_helper+0x31a/0x380
[   24.443382]  __asan_report_load1_noabort+0x18/0x20
[   24.443433]  mempool_oob_right_helper+0x31a/0x380
[   24.443472]  ? __pfx_mempool_oob_right_helper+0x10/0x10
[   24.443514]  ? finish_task_switch.isra.0+0x153/0x700
[   24.443553]  mempool_kmalloc_oob_right+0xb6/0x100
[   24.443627]  ? __pfx_mempool_kmalloc_oob_right+0x10/0x10
[   24.443707]  ? __switch_to+0x5d9/0xf60
[   24.443751]  ? __pfx_mempool_kmalloc+0x10/0x10
[   24.443786]  ? __pfx_mempool_kfree+0x10/0x10
[   24.443820]  ? __pfx_read_tsc+0x10/0x10
[   24.443850]  ? ktime_get_ts64+0x86/0x230
[   24.443885]  kunit_try_run_case+0x1b3/0x490
[   24.443924]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.443956]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.443990]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.444023]  ? __kthread_parkme+0x82/0x160
[   24.444052]  ? preempt_count_sub+0x50/0x80
[   24.444087]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.444119]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.444152]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.444199]  kthread+0x324/0x6d0
[   24.444237]  ? trace_preempt_on+0x20/0xc0
[   24.444271]  ? __pfx_kthread+0x10/0x10
[   24.444300]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.444330]  ? calculate_sigpending+0x7b/0xa0
[   24.444367]  ? __pfx_kthread+0x10/0x10
[   24.444396]  ret_from_fork+0x41/0x80
[   24.444443]  ? __pfx_kthread+0x10/0x10
[   24.444474]  ret_from_fork_asm+0x1a/0x30
[   24.444518]  </TASK>
[   24.444536] 
[   24.468830] Allocated by task 229:
[   24.469128]  kasan_save_stack+0x3d/0x60
[   24.469995]  kasan_save_track+0x18/0x40
[   24.470575]  kasan_save_alloc_info+0x3b/0x50
[   24.471203]  __kasan_mempool_unpoison_object+0x1a9/0x200
[   24.471859]  remove_element+0x11e/0x190
[   24.472264]  mempool_alloc_preallocated+0x4d/0x90
[   24.472824]  mempool_oob_right_helper+0x8b/0x380
[   24.473465]  mempool_kmalloc_oob_right+0xb6/0x100
[   24.474169]  kunit_try_run_case+0x1b3/0x490
[   24.475050]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.475557]  kthread+0x324/0x6d0
[   24.476123]  ret_from_fork+0x41/0x80
[   24.476907]  ret_from_fork_asm+0x1a/0x30
[   24.477493] 
[   24.477752] The buggy address belongs to the object at ffff888101ad5e00
[   24.477752]  which belongs to the cache kmalloc-128 of size 128
[   24.479697] The buggy address is located 0 bytes to the right of
[   24.479697]  allocated 115-byte region [ffff888101ad5e00, ffff888101ad5e73)
[   24.480596] 
[   24.480844] The buggy address belongs to the physical page:
[   24.481965] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ad5
[   24.482605] flags: 0x200000000000000(node=0|zone=2)
[   24.483268] page_type: f5(slab)
[   24.484214] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   24.484914] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   24.486254] page dumped because: kasan: bad access detected
[   24.486629] 
[   24.487008] Memory state around the buggy address:
[   24.487854]  ffff888101ad5d00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   24.488546]  ffff888101ad5d80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.489073] >ffff888101ad5e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   24.490110]                                                              ^
[   24.490851]  ffff888101ad5e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.491627]  ffff888101ad5f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 fc
[   24.492136] ==================================================================

test log snippet boot parser 117


[   24.549989] ==================================================================
[   24.550923] BUG: KASAN: slab-out-of-bounds in mempool_oob_right_helper+0x31a/0x380
[   24.551938] Read of size 1 at addr ffff888102a6a2bb by task kunit_try_catch/233
[   24.552842] 
[   24.553271] CPU: 1 UID: 0 PID: 233 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.553383] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.553436] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.553493] Call Trace:
[   24.553524]  <TASK>
[   24.553559]  dump_stack_lvl+0x73/0xb0
[   24.553671]  print_report+0xd1/0x640
[   24.553748]  ? __virt_addr_valid+0x1db/0x2d0
[   24.553984]  ? kasan_complete_mode_report_info+0x2a/0x200
[   24.554063]  kasan_report+0x102/0x140
[   24.554123]  ? mempool_oob_right_helper+0x31a/0x380
[   24.554189]  ? mempool_oob_right_helper+0x31a/0x380
[   24.554284]  __asan_report_load1_noabort+0x18/0x20
[   24.554317]  mempool_oob_right_helper+0x31a/0x380
[   24.554363]  ? __pfx_mempool_oob_right_helper+0x10/0x10
[   24.554422]  ? finish_task_switch.isra.0+0x153/0x700
[   24.554464]  mempool_slab_oob_right+0xb1/0x100
[   24.554496]  ? __pfx_mempool_slab_oob_right+0x10/0x10
[   24.554525]  ? __switch_to+0x5d9/0xf60
[   24.554558]  ? __pfx_mempool_alloc_slab+0x10/0x10
[   24.554590]  ? __pfx_mempool_free_slab+0x10/0x10
[   24.554623]  ? __pfx_read_tsc+0x10/0x10
[   24.554651]  ? ktime_get_ts64+0x86/0x230
[   24.554684]  kunit_try_run_case+0x1b3/0x490
[   24.554720]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.554752]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.554784]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.554816]  ? __kthread_parkme+0x82/0x160
[   24.554846]  ? preempt_count_sub+0x50/0x80
[   24.554877]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.554944]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.555021]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.555060]  kthread+0x324/0x6d0
[   24.555091]  ? trace_preempt_on+0x20/0xc0
[   24.555125]  ? __pfx_kthread+0x10/0x10
[   24.555154]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.555200]  ? calculate_sigpending+0x7b/0xa0
[   24.555242]  ? __pfx_kthread+0x10/0x10
[   24.555274]  ret_from_fork+0x41/0x80
[   24.555303]  ? __pfx_kthread+0x10/0x10
[   24.555333]  ret_from_fork_asm+0x1a/0x30
[   24.555376]  </TASK>
[   24.555393] 
[   24.573385] Allocated by task 233:
[   24.573855]  kasan_save_stack+0x3d/0x60
[   24.574395]  kasan_save_track+0x18/0x40
[   24.574931]  kasan_save_alloc_info+0x3b/0x50
[   24.575471]  __kasan_mempool_unpoison_object+0x1bb/0x200
[   24.576162]  remove_element+0x11e/0x190
[   24.576794]  mempool_alloc_preallocated+0x4d/0x90
[   24.577239]  mempool_oob_right_helper+0x8b/0x380
[   24.577908]  mempool_slab_oob_right+0xb1/0x100
[   24.578329]  kunit_try_run_case+0x1b3/0x490
[   24.578809]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.579411]  kthread+0x324/0x6d0
[   24.579960]  ret_from_fork+0x41/0x80
[   24.580452]  ret_from_fork_asm+0x1a/0x30
[   24.580929] 
[   24.581155] The buggy address belongs to the object at ffff888102a6a240
[   24.581155]  which belongs to the cache test_cache of size 123
[   24.582101] The buggy address is located 0 bytes to the right of
[   24.582101]  allocated 123-byte region [ffff888102a6a240, ffff888102a6a2bb)
[   24.583301] 
[   24.583705] The buggy address belongs to the physical page:
[   24.584278] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a6a
[   24.585078] flags: 0x200000000000000(node=0|zone=2)
[   24.585530] page_type: f5(slab)
[   24.586059] raw: 0200000000000000 ffff888100ff7c80 dead000000000122 0000000000000000
[   24.586893] raw: 0000000000000000 0000000080150015 00000001f5000000 0000000000000000
[   24.587558] page dumped because: kasan: bad access detected
[   24.588364] 
[   24.588734] Memory state around the buggy address:
[   24.589043]  ffff888102a6a180: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   24.589894]  ffff888102a6a200: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
[   24.590725] >ffff888102a6a280: 00 00 00 00 00 00 00 03 fc fc fc fc fc fc fc fc
[   24.591318]                                         ^
[   24.592033]  ffff888102a6a300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.592693]  ffff888102a6a380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.593549] ==================================================================

test log snippet boot parser 118


[   29.472869] ==================================================================
[   29.473515] BUG: KASAN: slab-out-of-bounds in strncpy_from_user+0x2e/0x1d0
[   29.474088] Write of size 121 at addr ffff888101af6100 by task kunit_try_catch/293
[   29.474593] 
[   29.474831] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.474936] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.474974] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.475026] Call Trace:
[   29.475060]  <TASK>
[   29.475093]  dump_stack_lvl+0x73/0xb0
[   29.475166]  print_report+0xd1/0x640
[   29.475231]  ? __virt_addr_valid+0x1db/0x2d0
[   29.475298]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.475365]  kasan_report+0x102/0x140
[   29.475488]  ? strncpy_from_user+0x2e/0x1d0
[   29.475599]  ? strncpy_from_user+0x2e/0x1d0
[   29.475701]  kasan_check_range+0x10c/0x1c0
[   29.475765]  __kasan_check_write+0x18/0x20
[   29.475826]  strncpy_from_user+0x2e/0x1d0
[   29.475892]  copy_user_test_oob+0x761/0x10f0
[   29.475962]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.476019]  ? finish_task_switch.isra.0+0x153/0x700
[   29.476081]  ? __switch_to+0x5d9/0xf60
[   29.476151]  ? __schedule+0xc6f/0x27d0
[   29.476267]  ? __pfx_read_tsc+0x10/0x10
[   29.476334]  ? ktime_get_ts64+0x86/0x230
[   29.476438]  kunit_try_run_case+0x1b3/0x490
[   29.476521]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.476589]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.476656]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.476723]  ? __kthread_parkme+0x82/0x160
[   29.476784]  ? preempt_count_sub+0x50/0x80
[   29.476853]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.476920]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.476992]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.477062]  kthread+0x324/0x6d0
[   29.477117]  ? trace_preempt_on+0x20/0xc0
[   29.477179]  ? __pfx_kthread+0x10/0x10
[   29.477283]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.477345]  ? calculate_sigpending+0x7b/0xa0
[   29.477441]  ? __pfx_kthread+0x10/0x10
[   29.477531]  ret_from_fork+0x41/0x80
[   29.477610]  ? __pfx_kthread+0x10/0x10
[   29.477697]  ret_from_fork_asm+0x1a/0x30
[   29.477791]  </TASK>
[   29.477823] 
[   29.491142] Allocated by task 293:
[   29.491452]  kasan_save_stack+0x3d/0x60
[   29.491930]  kasan_save_track+0x18/0x40
[   29.492417]  kasan_save_alloc_info+0x3b/0x50
[   29.492918]  __kasan_kmalloc+0xb7/0xc0
[   29.493428]  __kmalloc_noprof+0x1c4/0x500
[   29.493876]  kunit_kmalloc_array+0x25/0x60
[   29.494436]  copy_user_test_oob+0xac/0x10f0
[   29.494896]  kunit_try_run_case+0x1b3/0x490
[   29.495441]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.496023]  kthread+0x324/0x6d0
[   29.496481]  ret_from_fork+0x41/0x80
[   29.496913]  ret_from_fork_asm+0x1a/0x30
[   29.497264] 
[   29.497454] The buggy address belongs to the object at ffff888101af6100
[   29.497454]  which belongs to the cache kmalloc-128 of size 128
[   29.498153] The buggy address is located 0 bytes inside of
[   29.498153]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.499337] 
[   29.499594] The buggy address belongs to the physical page:
[   29.500159] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.500959] flags: 0x200000000000000(node=0|zone=2)
[   29.501550] page_type: f5(slab)
[   29.501962] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.502678] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.503286] page dumped because: kasan: bad access detected
[   29.503768] 
[   29.504011] Memory state around the buggy address:
[   29.504454]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.505140]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.505758] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.506422]                                                                 ^
[   29.506822]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.507264]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.507891] ==================================================================

test log snippet boot parser 119


[   29.509424] ==================================================================
[   29.510718] BUG: KASAN: slab-out-of-bounds in strncpy_from_user+0x1a5/0x1d0
[   29.511372] Write of size 1 at addr ffff888101af6178 by task kunit_try_catch/293
[   29.511989] 
[   29.512307] CPU: 0 UID: 0 PID: 293 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   29.512976] Tainted: [B]=BAD_PAGE, [N]=TEST
[   29.513047] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   29.513115] Call Trace:
[   29.513150]  <TASK>
[   29.513223]  dump_stack_lvl+0x73/0xb0
[   29.513312]  print_report+0xd1/0x640
[   29.513382]  ? __virt_addr_valid+0x1db/0x2d0
[   29.513474]  ? kasan_complete_mode_report_info+0x2a/0x200
[   29.513551]  kasan_report+0x102/0x140
[   29.513614]  ? strncpy_from_user+0x1a5/0x1d0
[   29.513681]  ? strncpy_from_user+0x1a5/0x1d0
[   29.513758]  __asan_report_store1_noabort+0x1b/0x30
[   29.513827]  strncpy_from_user+0x1a5/0x1d0
[   29.513895]  copy_user_test_oob+0x761/0x10f0
[   29.513967]  ? __pfx_copy_user_test_oob+0x10/0x10
[   29.514018]  ? finish_task_switch.isra.0+0x153/0x700
[   29.514075]  ? __switch_to+0x5d9/0xf60
[   29.514141]  ? __schedule+0xc6f/0x27d0
[   29.514248]  ? __pfx_read_tsc+0x10/0x10
[   29.514315]  ? ktime_get_ts64+0x86/0x230
[   29.514390]  kunit_try_run_case+0x1b3/0x490
[   29.514486]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.514552]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   29.514617]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   29.514685]  ? __kthread_parkme+0x82/0x160
[   29.514746]  ? preempt_count_sub+0x50/0x80
[   29.514813]  ? __pfx_kunit_try_run_case+0x10/0x10
[   29.514883]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.514952]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   29.515020]  kthread+0x324/0x6d0
[   29.515123]  ? trace_preempt_on+0x20/0xc0
[   29.515228]  ? __pfx_kthread+0x10/0x10
[   29.515301]  ? _raw_spin_unlock_irq+0x47/0x80
[   29.515367]  ? calculate_sigpending+0x7b/0xa0
[   29.515452]  ? __pfx_kthread+0x10/0x10
[   29.515521]  ret_from_fork+0x41/0x80
[   29.515583]  ? __pfx_kthread+0x10/0x10
[   29.515646]  ret_from_fork_asm+0x1a/0x30
[   29.515736]  </TASK>
[   29.515771] 
[   29.528972] Allocated by task 293:
[   29.529310]  kasan_save_stack+0x3d/0x60
[   29.529760]  kasan_save_track+0x18/0x40
[   29.530266]  kasan_save_alloc_info+0x3b/0x50
[   29.530700]  __kasan_kmalloc+0xb7/0xc0
[   29.531153]  __kmalloc_noprof+0x1c4/0x500
[   29.531515]  kunit_kmalloc_array+0x25/0x60
[   29.531813]  copy_user_test_oob+0xac/0x10f0
[   29.532329]  kunit_try_run_case+0x1b3/0x490
[   29.532831]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   29.533442]  kthread+0x324/0x6d0
[   29.533854]  ret_from_fork+0x41/0x80
[   29.534362]  ret_from_fork_asm+0x1a/0x30
[   29.534845] 
[   29.535076] The buggy address belongs to the object at ffff888101af6100
[   29.535076]  which belongs to the cache kmalloc-128 of size 128
[   29.536017] The buggy address is located 0 bytes to the right of
[   29.536017]  allocated 120-byte region [ffff888101af6100, ffff888101af6178)
[   29.537085] 
[   29.537367] The buggy address belongs to the physical page:
[   29.537745] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af6
[   29.538511] flags: 0x200000000000000(node=0|zone=2)
[   29.539018] page_type: f5(slab)
[   29.539382] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   29.539827] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   29.540277] page dumped because: kasan: bad access detected
[   29.540817] 
[   29.541049] Memory state around the buggy address:
[   29.541587]  ffff888101af6000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   29.542312]  ffff888101af6080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.543007] >ffff888101af6100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
[   29.545486]                                                                 ^
[   29.546131]  ffff888101af6180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.547160]  ffff888101af6200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   29.547891] ==================================================================

test log snippet boot parser 120


[   25.463182] ==================================================================
[   25.464209] BUG: KASAN: slab-use-after-free in kasan_strings+0xa0f/0xb60
[   25.465252] Read of size 1 at addr ffff888101af1310 by task kunit_try_catch/267
[   25.466119] 
[   25.466952] CPU: 0 UID: 0 PID: 267 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.467048] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.467070] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.467100] Call Trace:
[   25.467118]  <TASK>
[   25.467136]  dump_stack_lvl+0x73/0xb0
[   25.467179]  print_report+0xd1/0x640
[   25.467270]  ? __virt_addr_valid+0x1db/0x2d0
[   25.467312]  ? kasan_complete_mode_report_info+0x64/0x200
[   25.467350]  kasan_report+0x102/0x140
[   25.467380]  ? kasan_strings+0xa0f/0xb60
[   25.467429]  ? kasan_strings+0xa0f/0xb60
[   25.467466]  __asan_report_load1_noabort+0x18/0x20
[   25.467497]  kasan_strings+0xa0f/0xb60
[   25.467529]  ? __pfx_kasan_strings+0x10/0x10
[   25.467560]  ? __schedule+0xc6f/0x27d0
[   25.467745]  ? __pfx_read_tsc+0x10/0x10
[   25.467806]  ? ktime_get_ts64+0x86/0x230
[   25.467843]  kunit_try_run_case+0x1b3/0x490
[   25.467880]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.467913]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.467945]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.467976]  ? __kthread_parkme+0x82/0x160
[   25.468005]  ? preempt_count_sub+0x50/0x80
[   25.468039]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.468073]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.468106]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.468151]  kthread+0x324/0x6d0
[   25.468179]  ? trace_preempt_on+0x20/0xc0
[   25.468248]  ? __pfx_kthread+0x10/0x10
[   25.468279]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.468310]  ? calculate_sigpending+0x7b/0xa0
[   25.468344]  ? __pfx_kthread+0x10/0x10
[   25.468373]  ret_from_fork+0x41/0x80
[   25.468419]  ? __pfx_kthread+0x10/0x10
[   25.468451]  ret_from_fork_asm+0x1a/0x30
[   25.468496]  </TASK>
[   25.468510] 
[   25.485847] Allocated by task 267:
[   25.486252]  kasan_save_stack+0x3d/0x60
[   25.487097]  kasan_save_track+0x18/0x40
[   25.488190]  kasan_save_alloc_info+0x3b/0x50
[   25.489182]  __kasan_kmalloc+0xb7/0xc0
[   25.489530]  __kmalloc_cache_noprof+0x184/0x410
[   25.490045]  kasan_strings+0xb3/0xb60
[   25.491034]  kunit_try_run_case+0x1b3/0x490
[   25.491430]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.492027]  kthread+0x324/0x6d0
[   25.492816]  ret_from_fork+0x41/0x80
[   25.493139]  ret_from_fork_asm+0x1a/0x30
[   25.493458] 
[   25.493663] Freed by task 267:
[   25.494414]  kasan_save_stack+0x3d/0x60
[   25.495216]  kasan_save_track+0x18/0x40
[   25.495804]  kasan_save_free_info+0x3f/0x60
[   25.496192]  __kasan_slab_free+0x56/0x70
[   25.496983]  kfree+0x123/0x3f0
[   25.497466]  kasan_strings+0x13a/0xb60
[   25.498309]  kunit_try_run_case+0x1b3/0x490
[   25.498690]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.499519]  kthread+0x324/0x6d0
[   25.500234]  ret_from_fork+0x41/0x80
[   25.500537]  ret_from_fork_asm+0x1a/0x30
[   25.501516] 
[   25.501660] The buggy address belongs to the object at ffff888101af1300
[   25.501660]  which belongs to the cache kmalloc-32 of size 32
[   25.502928] The buggy address is located 16 bytes inside of
[   25.502928]  freed 32-byte region [ffff888101af1300, ffff888101af1320)
[   25.504299] 
[   25.504471] The buggy address belongs to the physical page:
[   25.504932] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af1
[   25.506150] flags: 0x200000000000000(node=0|zone=2)
[   25.506569] page_type: f5(slab)
[   25.507012] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.508245] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.508815] page dumped because: kasan: bad access detected
[   25.509181] 
[   25.509429] Memory state around the buggy address:
[   25.509911]  ffff888101af1200: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.511348]  ffff888101af1280: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.511909] >ffff888101af1300: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   25.513012]                          ^
[   25.513326]  ffff888101af1380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.513803]  ffff888101af1400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.514766] ==================================================================

test log snippet boot parser 121


[   22.967120] ==================================================================
[   22.968155] BUG: KASAN: slab-use-after-free in kmalloc_double_kzfree+0x19d/0x360
[   22.969150] Read of size 1 at addr ffff8881012e6740 by task kunit_try_catch/200
[   22.970054] 
[   22.970320] CPU: 0 UID: 0 PID: 200 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.970533] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.970570] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.970772] Call Trace:
[   22.970808]  <TASK>
[   22.970843]  dump_stack_lvl+0x73/0xb0
[   22.970975]  print_report+0xd1/0x640
[   22.971118]  ? __virt_addr_valid+0x1db/0x2d0
[   22.971252]  ? kasan_complete_mode_report_info+0x64/0x200
[   22.971325]  kasan_report+0x102/0x140
[   22.971384]  ? kmalloc_double_kzfree+0x19d/0x360
[   22.971469]  ? kmalloc_double_kzfree+0x19d/0x360
[   22.971531]  ? kmalloc_double_kzfree+0x19d/0x360
[   22.971566]  __kasan_check_byte+0x3d/0x50
[   22.971665]  kfree_sensitive+0x22/0x90
[   22.971736]  kmalloc_double_kzfree+0x19d/0x360
[   22.971781]  ? __pfx_kmalloc_double_kzfree+0x10/0x10
[   22.971816]  ? __schedule+0xc6f/0x27d0
[   22.971847]  ? __pfx_read_tsc+0x10/0x10
[   22.971876]  ? ktime_get_ts64+0x86/0x230
[   22.971910]  kunit_try_run_case+0x1b3/0x490
[   22.971942]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.971972]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.972003]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.972034]  ? __kthread_parkme+0x82/0x160
[   22.972061]  ? preempt_count_sub+0x50/0x80
[   22.972094]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.972125]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.972158]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.972226]  kthread+0x324/0x6d0
[   22.972260]  ? trace_preempt_on+0x20/0xc0
[   22.972292]  ? __pfx_kthread+0x10/0x10
[   22.972320]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.972350]  ? calculate_sigpending+0x7b/0xa0
[   22.972384]  ? __pfx_kthread+0x10/0x10
[   22.972430]  ret_from_fork+0x41/0x80
[   22.972459]  ? __pfx_kthread+0x10/0x10
[   22.972488]  ret_from_fork_asm+0x1a/0x30
[   22.972530]  </TASK>
[   22.972546] 
[   22.993024] Allocated by task 200:
[   22.993754]  kasan_save_stack+0x3d/0x60
[   22.994346]  kasan_save_track+0x18/0x40
[   22.994991]  kasan_save_alloc_info+0x3b/0x50
[   22.995385]  __kasan_kmalloc+0xb7/0xc0
[   22.995860]  __kmalloc_cache_noprof+0x184/0x410
[   22.996269]  kmalloc_double_kzfree+0xaa/0x360
[   22.997108]  kunit_try_run_case+0x1b3/0x490
[   22.997535]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.998262]  kthread+0x324/0x6d0
[   22.998558]  ret_from_fork+0x41/0x80
[   22.999177]  ret_from_fork_asm+0x1a/0x30
[   22.999803] 
[   23.000067] Freed by task 200:
[   23.000379]  kasan_save_stack+0x3d/0x60
[   23.001071]  kasan_save_track+0x18/0x40
[   23.001548]  kasan_save_free_info+0x3f/0x60
[   23.002238]  __kasan_slab_free+0x56/0x70
[   23.002791]  kfree+0x123/0x3f0
[   23.003173]  kfree_sensitive+0x67/0x90
[   23.003509]  kmalloc_double_kzfree+0x12c/0x360
[   23.004157]  kunit_try_run_case+0x1b3/0x490
[   23.004710]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.005425]  kthread+0x324/0x6d0
[   23.005939]  ret_from_fork+0x41/0x80
[   23.006281]  ret_from_fork_asm+0x1a/0x30
[   23.006826] 
[   23.007232] The buggy address belongs to the object at ffff8881012e6740
[   23.007232]  which belongs to the cache kmalloc-16 of size 16
[   23.008276] The buggy address is located 0 bytes inside of
[   23.008276]  freed 16-byte region [ffff8881012e6740, ffff8881012e6750)
[   23.009500] 
[   23.009899] The buggy address belongs to the physical page:
[   23.010396] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1012e6
[   23.011345] flags: 0x200000000000000(node=0|zone=2)
[   23.011866] page_type: f5(slab)
[   23.012303] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   23.013241] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   23.013933] page dumped because: kasan: bad access detected
[   23.014528] 
[   23.014924] Memory state around the buggy address:
[   23.015379]  ffff8881012e6600: 00 04 fc fc 00 00 fc fc 00 04 fc fc 00 04 fc fc
[   23.016352]  ffff8881012e6680: 00 00 fc fc fa fb fc fc 00 04 fc fc fa fb fc fc
[   23.017044] >ffff8881012e6700: fa fb fc fc fa fb fc fc fa fb fc fc fc fc fc fc
[   23.018480]                                            ^
[   23.019126]  ffff8881012e6780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.020110]  ffff8881012e6800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.020522] ==================================================================
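The snippet above comes from the `kmalloc_double_kzfree` KUnit case, which deliberately calls `kfree_sensitive()` twice on the same allocation so KASAN can flag the second call as a use-after-free. As an illustrative userspace analogue only (a sketch, not kernel code), the usual mitigation is a helper that scrubs and frees, then nulls the caller's pointer so a second call becomes a no-op:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical userspace analogue of kfree_sensitive() with a
 * double-free guard: zero the buffer before releasing it, then null
 * the caller's pointer so calling this twice is harmless. */
static void free_sensitive(void **pp, size_t len)
{
    if (pp == NULL || *pp == NULL)
        return;               /* second call: nothing left to free */
    memset(*pp, 0, len);      /* scrub secrets before release */
    free(*pp);
    *pp = NULL;               /* defuse any later double free */
}
```

The kernel test intentionally omits such a guard; the report shows KASAN catching the resulting second access to the freed 16-byte region.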

test log snippet boot parser 122


[   22.794917] ==================================================================
[   22.796010] BUG: KASAN: slab-use-after-free in kmalloc_uaf+0x322/0x380
[   22.796669] Read of size 1 at addr ffff8881025fbc48 by task kunit_try_catch/192
[   22.797770] 
[   22.798230] CPU: 1 UID: 0 PID: 192 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.798341] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.798448] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.798527] Call Trace:
[   22.798615]  <TASK>
[   22.798675]  dump_stack_lvl+0x73/0xb0
[   22.798758]  print_report+0xd1/0x640
[   22.798818]  ? __virt_addr_valid+0x1db/0x2d0
[   22.798876]  ? kasan_complete_mode_report_info+0x64/0x200
[   22.798935]  kasan_report+0x102/0x140
[   22.798988]  ? kmalloc_uaf+0x322/0x380
[   22.799021]  ? kmalloc_uaf+0x322/0x380
[   22.799058]  __asan_report_load1_noabort+0x18/0x20
[   22.799087]  kmalloc_uaf+0x322/0x380
[   22.799117]  ? __pfx_kmalloc_uaf+0x10/0x10
[   22.799147]  ? __schedule+0xc6f/0x27d0
[   22.799177]  ? __pfx_read_tsc+0x10/0x10
[   22.799243]  ? ktime_get_ts64+0x86/0x230
[   22.799279]  kunit_try_run_case+0x1b3/0x490
[   22.799313]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.799343]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.799374]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.799421]  ? __kthread_parkme+0x82/0x160
[   22.799451]  ? preempt_count_sub+0x50/0x80
[   22.799484]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.799516]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.799547]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.799580]  kthread+0x324/0x6d0
[   22.799679]  ? trace_preempt_on+0x20/0xc0
[   22.799732]  ? __pfx_kthread+0x10/0x10
[   22.799763]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.799793]  ? calculate_sigpending+0x7b/0xa0
[   22.799827]  ? __pfx_kthread+0x10/0x10
[   22.799857]  ret_from_fork+0x41/0x80
[   22.799885]  ? __pfx_kthread+0x10/0x10
[   22.799914]  ret_from_fork_asm+0x1a/0x30
[   22.799956]  </TASK>
[   22.799971] 
[   22.817792] Allocated by task 192:
[   22.818255]  kasan_save_stack+0x3d/0x60
[   22.818803]  kasan_save_track+0x18/0x40
[   22.819378]  kasan_save_alloc_info+0x3b/0x50
[   22.819871]  __kasan_kmalloc+0xb7/0xc0
[   22.820481]  __kmalloc_cache_noprof+0x184/0x410
[   22.821030]  kmalloc_uaf+0xab/0x380
[   22.821584]  kunit_try_run_case+0x1b3/0x490
[   22.822262]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.822727]  kthread+0x324/0x6d0
[   22.822992]  ret_from_fork+0x41/0x80
[   22.823517]  ret_from_fork_asm+0x1a/0x30
[   22.824360] 
[   22.824730] Freed by task 192:
[   22.825097]  kasan_save_stack+0x3d/0x60
[   22.825764]  kasan_save_track+0x18/0x40
[   22.826498]  kasan_save_free_info+0x3f/0x60
[   22.827087]  __kasan_slab_free+0x56/0x70
[   22.827731]  kfree+0x123/0x3f0
[   22.828135]  kmalloc_uaf+0x12d/0x380
[   22.828862]  kunit_try_run_case+0x1b3/0x490
[   22.829520]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.830293]  kthread+0x324/0x6d0
[   22.831050]  ret_from_fork+0x41/0x80
[   22.831643]  ret_from_fork_asm+0x1a/0x30
[   22.832224] 
[   22.832697] The buggy address belongs to the object at ffff8881025fbc40
[   22.832697]  which belongs to the cache kmalloc-16 of size 16
[   22.834137] The buggy address is located 8 bytes inside of
[   22.834137]  freed 16-byte region [ffff8881025fbc40, ffff8881025fbc50)
[   22.835530] 
[   22.835983] The buggy address belongs to the physical page:
[   22.836308] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1025fb
[   22.837415] flags: 0x200000000000000(node=0|zone=2)
[   22.838266] page_type: f5(slab)
[   22.838826] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   22.840221] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   22.841094] page dumped because: kasan: bad access detected
[   22.842047] 
[   22.842443] Memory state around the buggy address:
[   22.842742]  ffff8881025fbb00: 00 05 fc fc 00 02 fc fc 00 02 fc fc 00 02 fc fc
[   22.843795]  ffff8881025fbb80: 00 02 fc fc fa fb fc fc fa fb fc fc 00 05 fc fc
[   22.844261] >ffff8881025fbc00: fa fb fc fc fa fb fc fc fa fb fc fc fc fc fc fc
[   22.845337]                                               ^
[   22.846304]  ffff8881025fbc80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.847296]  ffff8881025fbd00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.848226] ==================================================================

test log snippet boot parser 123


[   22.907044] ==================================================================
[   22.907874] BUG: KASAN: slab-use-after-free in kmalloc_uaf2+0x4aa/0x520
[   22.909469] Read of size 1 at addr ffff888102a5b1a8 by task kunit_try_catch/196
[   22.910234] 
[   22.910489] CPU: 1 UID: 0 PID: 196 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.910720] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.910764] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.910817] Call Trace:
[   22.910847]  <TASK>
[   22.910879]  dump_stack_lvl+0x73/0xb0
[   22.910953]  print_report+0xd1/0x640
[   22.911015]  ? __virt_addr_valid+0x1db/0x2d0
[   22.911074]  ? kasan_complete_mode_report_info+0x64/0x200
[   22.911136]  kasan_report+0x102/0x140
[   22.911228]  ? kmalloc_uaf2+0x4aa/0x520
[   22.911284]  ? kmalloc_uaf2+0x4aa/0x520
[   22.911323]  __asan_report_load1_noabort+0x18/0x20
[   22.911353]  kmalloc_uaf2+0x4aa/0x520
[   22.911383]  ? __pfx_kmalloc_uaf2+0x10/0x10
[   22.911429]  ? finish_task_switch.isra.0+0x153/0x700
[   22.911460]  ? __switch_to+0x5d9/0xf60
[   22.911496]  ? __schedule+0xc6f/0x27d0
[   22.911527]  ? __pfx_read_tsc+0x10/0x10
[   22.911553]  ? ktime_get_ts64+0x86/0x230
[   22.911628]  kunit_try_run_case+0x1b3/0x490
[   22.911706]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.911743]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.911775]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.911806]  ? __kthread_parkme+0x82/0x160
[   22.911835]  ? preempt_count_sub+0x50/0x80
[   22.911867]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.911899]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.911931]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.911962]  kthread+0x324/0x6d0
[   22.911989]  ? trace_preempt_on+0x20/0xc0
[   22.912019]  ? __pfx_kthread+0x10/0x10
[   22.912047]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.912076]  ? calculate_sigpending+0x7b/0xa0
[   22.912109]  ? __pfx_kthread+0x10/0x10
[   22.912138]  ret_from_fork+0x41/0x80
[   22.912166]  ? __pfx_kthread+0x10/0x10
[   22.912216]  ret_from_fork_asm+0x1a/0x30
[   22.912262]  </TASK>
[   22.912279] 
[   22.929382] Allocated by task 196:
[   22.929778]  kasan_save_stack+0x3d/0x60
[   22.930397]  kasan_save_track+0x18/0x40
[   22.931101]  kasan_save_alloc_info+0x3b/0x50
[   22.931785]  __kasan_kmalloc+0xb7/0xc0
[   22.932111]  __kmalloc_cache_noprof+0x184/0x410
[   22.932527]  kmalloc_uaf2+0xc7/0x520
[   22.933234]  kunit_try_run_case+0x1b3/0x490
[   22.934130]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.934901]  kthread+0x324/0x6d0
[   22.935383]  ret_from_fork+0x41/0x80
[   22.936010]  ret_from_fork_asm+0x1a/0x30
[   22.936488] 
[   22.936919] Freed by task 196:
[   22.937236]  kasan_save_stack+0x3d/0x60
[   22.937958]  kasan_save_track+0x18/0x40
[   22.938390]  kasan_save_free_info+0x3f/0x60
[   22.939280]  __kasan_slab_free+0x56/0x70
[   22.939725]  kfree+0x123/0x3f0
[   22.940045]  kmalloc_uaf2+0x14d/0x520
[   22.940465]  kunit_try_run_case+0x1b3/0x490
[   22.940890]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.941382]  kthread+0x324/0x6d0
[   22.941861]  ret_from_fork+0x41/0x80
[   22.942432]  ret_from_fork_asm+0x1a/0x30
[   22.942860] 
[   22.943151] The buggy address belongs to the object at ffff888102a5b180
[   22.943151]  which belongs to the cache kmalloc-64 of size 64
[   22.944535] The buggy address is located 40 bytes inside of
[   22.944535]  freed 64-byte region [ffff888102a5b180, ffff888102a5b1c0)
[   22.946189] 
[   22.946441] The buggy address belongs to the physical page:
[   22.947109] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a5b
[   22.948000] flags: 0x200000000000000(node=0|zone=2)
[   22.948549] page_type: f5(slab)
[   22.949156] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   22.950056] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   22.951174] page dumped because: kasan: bad access detected
[   22.951833] 
[   22.952057] Memory state around the buggy address:
[   22.952495]  ffff888102a5b080: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.953082]  ffff888102a5b100: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.953895] >ffff888102a5b180: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.954434]                                   ^
[   22.955032]  ffff888102a5b200: 00 00 00 00 00 03 fc fc fc fc fc fc fc fc fc fc
[   22.955788]  ffff888102a5b280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.956311] ==================================================================

test log snippet boot parser 124


[   22.387190] ==================================================================
[   22.388027] BUG: KASAN: slab-use-after-free in kmalloc_uaf_16+0x47d/0x4c0
[   22.388566] Read of size 16 at addr ffff8881025fbc20 by task kunit_try_catch/176
[   22.390031] 
[   22.390208] CPU: 1 UID: 0 PID: 176 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.390271] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.390288] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.390318] Call Trace:
[   22.390336]  <TASK>
[   22.390364]  dump_stack_lvl+0x73/0xb0
[   22.390430]  print_report+0xd1/0x640
[   22.390487]  ? __virt_addr_valid+0x1db/0x2d0
[   22.390551]  ? kasan_complete_mode_report_info+0x64/0x200
[   22.390612]  kasan_report+0x102/0x140
[   22.390669]  ? kmalloc_uaf_16+0x47d/0x4c0
[   22.390729]  ? kmalloc_uaf_16+0x47d/0x4c0
[   22.390876]  __asan_report_load16_noabort+0x18/0x20
[   22.390949]  kmalloc_uaf_16+0x47d/0x4c0
[   22.391008]  ? __pfx_kmalloc_uaf_16+0x10/0x10
[   22.391071]  ? __pfx_kmalloc_uaf_16+0x10/0x10
[   22.391141]  kunit_try_run_case+0x1b3/0x490
[   22.391206]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.391266]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.391337]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.391370]  ? __kthread_parkme+0x82/0x160
[   22.391415]  ? preempt_count_sub+0x50/0x80
[   22.391450]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.391483]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.391516]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.391547]  kthread+0x324/0x6d0
[   22.391575]  ? trace_preempt_on+0x20/0xc0
[   22.391662]  ? __pfx_kthread+0x10/0x10
[   22.391715]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.391766]  ? calculate_sigpending+0x7b/0xa0
[   22.391823]  ? __pfx_kthread+0x10/0x10
[   22.391874]  ret_from_fork+0x41/0x80
[   22.391922]  ? __pfx_kthread+0x10/0x10
[   22.391963]  ret_from_fork_asm+0x1a/0x30
[   22.392008]  </TASK>
[   22.392025] 
[   22.409810] Allocated by task 176:
[   22.410275]  kasan_save_stack+0x3d/0x60
[   22.411000]  kasan_save_track+0x18/0x40
[   22.411350]  kasan_save_alloc_info+0x3b/0x50
[   22.412172]  __kasan_kmalloc+0xb7/0xc0
[   22.413026]  __kmalloc_cache_noprof+0x184/0x410
[   22.413638]  kmalloc_uaf_16+0x15c/0x4c0
[   22.414573]  kunit_try_run_case+0x1b3/0x490
[   22.415016]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.415661]  kthread+0x324/0x6d0
[   22.416493]  ret_from_fork+0x41/0x80
[   22.416691]  ret_from_fork_asm+0x1a/0x30
[   22.416948] 
[   22.417173] Freed by task 176:
[   22.417631]  kasan_save_stack+0x3d/0x60
[   22.418842]  kasan_save_track+0x18/0x40
[   22.419139]  kasan_save_free_info+0x3f/0x60
[   22.419444]  __kasan_slab_free+0x56/0x70
[   22.419965]  kfree+0x123/0x3f0
[   22.420737]  kmalloc_uaf_16+0x1d7/0x4c0
[   22.421067]  kunit_try_run_case+0x1b3/0x490
[   22.422097]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.422835]  kthread+0x324/0x6d0
[   22.423305]  ret_from_fork+0x41/0x80
[   22.424057]  ret_from_fork_asm+0x1a/0x30
[   22.424936] 
[   22.425107] The buggy address belongs to the object at ffff8881025fbc20
[   22.425107]  which belongs to the cache kmalloc-16 of size 16
[   22.426372] The buggy address is located 0 bytes inside of
[   22.426372]  freed 16-byte region [ffff8881025fbc20, ffff8881025fbc30)
[   22.427818] 
[   22.428106] The buggy address belongs to the physical page:
[   22.428749] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1025fb
[   22.429929] flags: 0x200000000000000(node=0|zone=2)
[   22.430564] page_type: f5(slab)
[   22.431033] raw: 0200000000000000 ffff888100041640 dead000000000122 0000000000000000
[   22.432037] raw: 0000000000000000 0000000080800080 00000001f5000000 0000000000000000
[   22.432772] page dumped because: kasan: bad access detected
[   22.433294] 
[   22.433778] Memory state around the buggy address:
[   22.434291]  ffff8881025fbb00: 00 05 fc fc 00 02 fc fc 00 02 fc fc 00 02 fc fc
[   22.435317]  ffff8881025fbb80: 00 02 fc fc fa fb fc fc fa fb fc fc 00 05 fc fc
[   22.436072] >ffff8881025fbc00: 00 00 fc fc fa fb fc fc fc fc fc fc fc fc fc fc
[   22.436665]                                ^
[   22.437103]  ffff8881025fbc80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.438174]  ffff8881025fbd00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.438948] ==================================================================

test log snippet boot parser 125


[   22.854319] ==================================================================
[   22.855297] BUG: KASAN: slab-use-after-free in kmalloc_uaf_memset+0x1a4/0x360
[   22.855792] Write of size 33 at addr ffff888101add800 by task kunit_try_catch/194
[   22.856293] 
[   22.856711] CPU: 0 UID: 0 PID: 194 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.856817] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.856851] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.856905] Call Trace:
[   22.856937]  <TASK>
[   22.856973]  dump_stack_lvl+0x73/0xb0
[   22.857052]  print_report+0xd1/0x640
[   22.857109]  ? __virt_addr_valid+0x1db/0x2d0
[   22.857173]  ? kasan_complete_mode_report_info+0x64/0x200
[   22.857286]  kasan_report+0x102/0x140
[   22.857347]  ? kmalloc_uaf_memset+0x1a4/0x360
[   22.857427]  ? kmalloc_uaf_memset+0x1a4/0x360
[   22.857501]  kasan_check_range+0x10c/0x1c0
[   22.857562]  __asan_memset+0x27/0x50
[   22.857617]  kmalloc_uaf_memset+0x1a4/0x360
[   22.857673]  ? __pfx_kmalloc_uaf_memset+0x10/0x10
[   22.857731]  ? __schedule+0xc6f/0x27d0
[   22.857793]  ? __pfx_read_tsc+0x10/0x10
[   22.857847]  ? ktime_get_ts64+0x86/0x230
[   22.857913]  kunit_try_run_case+0x1b3/0x490
[   22.857978]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.858033]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.858089]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.858144]  ? __kthread_parkme+0x82/0x160
[   22.858222]  ? preempt_count_sub+0x50/0x80
[   22.858262]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.858295]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.858329]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.858370]  kthread+0x324/0x6d0
[   22.858416]  ? trace_preempt_on+0x20/0xc0
[   22.858452]  ? __pfx_kthread+0x10/0x10
[   22.858481]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.858511]  ? calculate_sigpending+0x7b/0xa0
[   22.858544]  ? __pfx_kthread+0x10/0x10
[   22.858574]  ret_from_fork+0x41/0x80
[   22.858657]  ? __pfx_kthread+0x10/0x10
[   22.858714]  ret_from_fork_asm+0x1a/0x30
[   22.858789]  </TASK>
[   22.858817] 
[   22.873887] Allocated by task 194:
[   22.874243]  kasan_save_stack+0x3d/0x60
[   22.875104]  kasan_save_track+0x18/0x40
[   22.875490]  kasan_save_alloc_info+0x3b/0x50
[   22.875856]  __kasan_kmalloc+0xb7/0xc0
[   22.876132]  __kmalloc_cache_noprof+0x184/0x410
[   22.876475]  kmalloc_uaf_memset+0xaa/0x360
[   22.876921]  kunit_try_run_case+0x1b3/0x490
[   22.877351]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.878138]  kthread+0x324/0x6d0
[   22.878570]  ret_from_fork+0x41/0x80
[   22.879382]  ret_from_fork_asm+0x1a/0x30
[   22.879962] 
[   22.880184] Freed by task 194:
[   22.880712]  kasan_save_stack+0x3d/0x60
[   22.881153]  kasan_save_track+0x18/0x40
[   22.881536]  kasan_save_free_info+0x3f/0x60
[   22.882124]  __kasan_slab_free+0x56/0x70
[   22.882950]  kfree+0x123/0x3f0
[   22.883275]  kmalloc_uaf_memset+0x12c/0x360
[   22.883682]  kunit_try_run_case+0x1b3/0x490
[   22.884050]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.884582]  kthread+0x324/0x6d0
[   22.884900]  ret_from_fork+0x41/0x80
[   22.885282]  ret_from_fork_asm+0x1a/0x30
[   22.886002] 
[   22.886230] The buggy address belongs to the object at ffff888101add800
[   22.886230]  which belongs to the cache kmalloc-64 of size 64
[   22.887462] The buggy address is located 0 bytes inside of
[   22.887462]  freed 64-byte region [ffff888101add800, ffff888101add840)
[   22.888494] 
[   22.888830] The buggy address belongs to the physical page:
[   22.889151] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101add
[   22.889944] flags: 0x200000000000000(node=0|zone=2)
[   22.890486] page_type: f5(slab)
[   22.891243] raw: 0200000000000000 ffff8881000418c0 dead000000000122 0000000000000000
[   22.892111] raw: 0000000000000000 0000000080200020 00000001f5000000 0000000000000000
[   22.892947] page dumped because: kasan: bad access detected
[   22.893500] 
[   22.893876] Memory state around the buggy address:
[   22.894360]  ffff888101add700: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.895316]  ffff888101add780: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.895741] >ffff888101add800: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   22.896196]                    ^
[   22.896566]  ffff888101add880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.897193]  ffff888101add900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.898117] ==================================================================

test log snippet boot parser 126


[   23.829466] ==================================================================
[   23.830688] BUG: KASAN: slab-use-after-free in kmem_cache_double_destroy+0x1bd/0x380
[   23.831994] Read of size 1 at addr ffff888101aabdc0 by task kunit_try_catch/223
[   23.832368] 
[   23.832911] CPU: 0 UID: 0 PID: 223 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.833026] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.833059] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.833117] Call Trace:
[   23.833148]  <TASK>
[   23.833185]  dump_stack_lvl+0x73/0xb0
[   23.833259]  print_report+0xd1/0x640
[   23.833317]  ? __virt_addr_valid+0x1db/0x2d0
[   23.833377]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.833460]  kasan_report+0x102/0x140
[   23.833779]  ? kmem_cache_double_destroy+0x1bd/0x380
[   23.833933]  ? kmem_cache_double_destroy+0x1bd/0x380
[   23.834009]  ? kmem_cache_double_destroy+0x1bd/0x380
[   23.834059]  __kasan_check_byte+0x3d/0x50
[   23.834094]  kmem_cache_destroy+0x25/0x1d0
[   23.834133]  kmem_cache_double_destroy+0x1bd/0x380
[   23.834163]  ? __pfx_kmem_cache_double_destroy+0x10/0x10
[   23.834223]  ? finish_task_switch.isra.0+0x153/0x700
[   23.834261]  ? __switch_to+0x5d9/0xf60
[   23.834298]  ? __pfx_empty_cache_ctor+0x10/0x10
[   23.834331]  ? __pfx_read_tsc+0x10/0x10
[   23.834369]  ? ktime_get_ts64+0x86/0x230
[   23.834418]  kunit_try_run_case+0x1b3/0x490
[   23.834456]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.834488]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.834521]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.834554]  ? __kthread_parkme+0x82/0x160
[   23.834587]  ? preempt_count_sub+0x50/0x80
[   23.834687]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.834741]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.834797]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.834831]  kthread+0x324/0x6d0
[   23.834875]  ? trace_preempt_on+0x20/0xc0
[   23.834910]  ? __pfx_kthread+0x10/0x10
[   23.834946]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.834978]  ? calculate_sigpending+0x7b/0xa0
[   23.835020]  ? __pfx_kthread+0x10/0x10
[   23.835051]  ret_from_fork+0x41/0x80
[   23.835086]  ? __pfx_kthread+0x10/0x10
[   23.835117]  ret_from_fork_asm+0x1a/0x30
[   23.835166]  </TASK>
[   23.835182] 
[   23.852985] Allocated by task 223:
[   23.853227]  kasan_save_stack+0x3d/0x60
[   23.854152]  kasan_save_track+0x18/0x40
[   23.854648]  kasan_save_alloc_info+0x3b/0x50
[   23.855078]  __kasan_slab_alloc+0x91/0xa0
[   23.855514]  kmem_cache_alloc_noprof+0x11e/0x3e0
[   23.855996]  __kmem_cache_create_args+0x177/0x250
[   23.856392]  kmem_cache_double_destroy+0xd3/0x380
[   23.857238]  kunit_try_run_case+0x1b3/0x490
[   23.857908]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.858526]  kthread+0x324/0x6d0
[   23.859053]  ret_from_fork+0x41/0x80
[   23.859497]  ret_from_fork_asm+0x1a/0x30
[   23.860121] 
[   23.860375] Freed by task 223:
[   23.860823]  kasan_save_stack+0x3d/0x60
[   23.861230]  kasan_save_track+0x18/0x40
[   23.861920]  kasan_save_free_info+0x3f/0x60
[   23.862374]  __kasan_slab_free+0x56/0x70
[   23.862921]  kmem_cache_free+0x120/0x420
[   23.863385]  slab_kmem_cache_release+0x2e/0x40
[   23.863995]  kmem_cache_release+0x16/0x20
[   23.864465]  kobject_put+0x181/0x450
[   23.864979]  sysfs_slab_release+0x16/0x20
[   23.865350]  kmem_cache_destroy+0xf0/0x1d0
[   23.865697]  kmem_cache_double_destroy+0x14c/0x380
[   23.866170]  kunit_try_run_case+0x1b3/0x490
[   23.867103]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.867872]  kthread+0x324/0x6d0
[   23.868301]  ret_from_fork+0x41/0x80
[   23.868854]  ret_from_fork_asm+0x1a/0x30
[   23.869158] 
[   23.869428] The buggy address belongs to the object at ffff888101aabdc0
[   23.869428]  which belongs to the cache kmem_cache of size 208
[   23.870329] The buggy address is located 0 bytes inside of
[   23.870329]  freed 208-byte region [ffff888101aabdc0, ffff888101aabe90)
[   23.873438] 
[   23.873953] The buggy address belongs to the physical page:
[   23.874545] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aab
[   23.875391] flags: 0x200000000000000(node=0|zone=2)
[   23.875998] page_type: f5(slab)
[   23.876454] raw: 0200000000000000 ffff888100041000 dead000000000122 0000000000000000
[   23.877419] raw: 0000000000000000 00000000800c000c 00000001f5000000 0000000000000000
[   23.878223] page dumped because: kasan: bad access detected
[   23.878927] 
[   23.879093] Memory state around the buggy address:
[   23.879662]  ffff888101aabc80: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.880479]  ffff888101aabd00: fb fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc
[   23.880894] >ffff888101aabd80: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
[   23.882157]                                            ^
[   23.882799]  ffff888101aabe00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.883516]  ffff888101aabe80: fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.883915] ==================================================================

test log snippet boot parser 127

[   23.743983] ==================================================================
[   23.744752] BUG: KASAN: slab-use-after-free in kmem_cache_rcu_uaf+0x3e5/0x510
[   23.746117] Read of size 1 at addr ffff888102a62000 by task kunit_try_catch/221
[   23.747119] 
[   23.747328] CPU: 1 UID: 0 PID: 221 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.747450] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.747482] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.747581] Call Trace:
[   23.747614]  <TASK>
[   23.747647]  dump_stack_lvl+0x73/0xb0
[   23.747725]  print_report+0xd1/0x640
[   23.747784]  ? __virt_addr_valid+0x1db/0x2d0
[   23.747852]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.747918]  kasan_report+0x102/0x140
[   23.747974]  ? kmem_cache_rcu_uaf+0x3e5/0x510
[   23.748032]  ? kmem_cache_rcu_uaf+0x3e5/0x510
[   23.748098]  __asan_report_load1_noabort+0x18/0x20
[   23.748154]  kmem_cache_rcu_uaf+0x3e5/0x510
[   23.748299]  ? __pfx_kmem_cache_rcu_uaf+0x10/0x10
[   23.748346]  ? finish_task_switch.isra.0+0x153/0x700
[   23.748379]  ? __switch_to+0x5d9/0xf60
[   23.748432]  ? irqentry_exit+0x2a/0x60
[   23.748468]  ? trace_hardirqs_on+0x37/0xe0
[   23.748499]  ? __pfx_read_tsc+0x10/0x10
[   23.748526]  ? ktime_get_ts64+0x86/0x230
[   23.748560]  kunit_try_run_case+0x1b3/0x490
[   23.748642]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.748708]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.748744]  ? __kthread_parkme+0x82/0x160
[   23.748774]  ? preempt_count_sub+0x50/0x80
[   23.748806]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.748838]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.748870]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.748901]  kthread+0x324/0x6d0
[   23.748929]  ? trace_preempt_on+0x20/0xc0
[   23.748958]  ? __pfx_kthread+0x10/0x10
[   23.748986]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.749015]  ? calculate_sigpending+0x7b/0xa0
[   23.749048]  ? __pfx_kthread+0x10/0x10
[   23.749077]  ret_from_fork+0x41/0x80
[   23.749105]  ? __pfx_kthread+0x10/0x10
[   23.749133]  ret_from_fork_asm+0x1a/0x30
[   23.749176]  </TASK>
[   23.749223] 
[   23.768691] Allocated by task 221:
[   23.769051]  kasan_save_stack+0x3d/0x60
[   23.769736]  kasan_save_track+0x18/0x40
[   23.770171]  kasan_save_alloc_info+0x3b/0x50
[   23.770571]  __kasan_slab_alloc+0x91/0xa0
[   23.771387]  kmem_cache_alloc_noprof+0x11e/0x3e0
[   23.771970]  kmem_cache_rcu_uaf+0x156/0x510
[   23.772463]  kunit_try_run_case+0x1b3/0x490
[   23.773055]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.773497]  kthread+0x324/0x6d0
[   23.774023]  ret_from_fork+0x41/0x80
[   23.774392]  ret_from_fork_asm+0x1a/0x30
[   23.774733] 
[   23.774951] Freed by task 0:
[   23.775275]  kasan_save_stack+0x3d/0x60
[   23.776093]  kasan_save_track+0x18/0x40
[   23.776506]  kasan_save_free_info+0x3f/0x60
[   23.777103]  __kasan_slab_free+0x56/0x70
[   23.777572]  slab_free_after_rcu_debug+0xe4/0x310
[   23.778145]  rcu_core+0x680/0x1d70
[   23.778512]  rcu_core_si+0x12/0x20
[   23.779060]  handle_softirqs+0x209/0x720
[   23.779556]  __irq_exit_rcu+0xc9/0x110
[   23.780343]  irq_exit_rcu+0x12/0x20
[   23.780776]  sysvec_apic_timer_interrupt+0x81/0x90
[   23.781248]  asm_sysvec_apic_timer_interrupt+0x1f/0x30
[   23.781998] 
[   23.782236] Last potentially related work creation:
[   23.782721]  kasan_save_stack+0x3d/0x60
[   23.783154]  __kasan_record_aux_stack+0xae/0xc0
[   23.783882]  kasan_record_aux_stack_noalloc+0xf/0x20
[   23.784362]  kmem_cache_free+0x284/0x420
[   23.785007]  kmem_cache_rcu_uaf+0x195/0x510
[   23.785426]  kunit_try_run_case+0x1b3/0x490
[   23.785974]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.786548]  kthread+0x324/0x6d0
[   23.786922]  ret_from_fork+0x41/0x80
[   23.787289]  ret_from_fork_asm+0x1a/0x30
[   23.788250] 
[   23.788455] The buggy address belongs to the object at ffff888102a62000
[   23.788455]  which belongs to the cache test_cache of size 200
[   23.789561] The buggy address is located 0 bytes inside of
[   23.789561]  freed 200-byte region [ffff888102a62000, ffff888102a620c8)
[   23.790510] 
[   23.790712] The buggy address belongs to the physical page:
[   23.791194] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a62
[   23.792428] flags: 0x200000000000000(node=0|zone=2)
[   23.793031] page_type: f5(slab)
[   23.793429] raw: 0200000000000000 ffff888100ff78c0 dead000000000122 0000000000000000
[   23.794109] raw: 0000000000000000 00000000800f000f 00000001f5000000 0000000000000000
[   23.794731] page dumped because: kasan: bad access detected
[   23.795227] 
[   23.795567] Memory state around the buggy address:
[   23.796388]  ffff888102a61f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.797041]  ffff888102a61f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.797854] >ffff888102a62000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.798531]                    ^
[   23.798966]  ffff888102a62080: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
[   23.799944]  ffff888102a62100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.800372] ==================================================================

test log snippet boot parser 128

[   22.230444] ==================================================================
[   22.231324] BUG: KASAN: slab-use-after-free in krealloc_uaf+0x1b9/0x5e0
[   22.232144] Read of size 1 at addr ffff888100a9c000 by task kunit_try_catch/172
[   22.233274] 
[   22.233597] CPU: 1 UID: 0 PID: 172 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.233704] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.233738] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.233790] Call Trace:
[   22.233820]  <TASK>
[   22.233854]  dump_stack_lvl+0x73/0xb0
[   22.233948]  print_report+0xd1/0x640
[   22.234039]  ? __virt_addr_valid+0x1db/0x2d0
[   22.234111]  ? kasan_complete_mode_report_info+0x64/0x200
[   22.234151]  kasan_report+0x102/0x140
[   22.234196]  ? krealloc_uaf+0x1b9/0x5e0
[   22.234253]  ? krealloc_uaf+0x1b9/0x5e0
[   22.234288]  ? krealloc_uaf+0x1b9/0x5e0
[   22.234319]  __kasan_check_byte+0x3d/0x50
[   22.234358]  krealloc_noprof+0x3f/0x340
[   22.234393]  krealloc_uaf+0x1b9/0x5e0
[   22.234444]  ? __pfx_krealloc_uaf+0x10/0x10
[   22.234473]  ? finish_task_switch.isra.0+0x153/0x700
[   22.234503]  ? __switch_to+0x5d9/0xf60
[   22.234536]  ? __schedule+0xc6f/0x27d0
[   22.234566]  ? __pfx_read_tsc+0x10/0x10
[   22.234593]  ? ktime_get_ts64+0x86/0x230
[   22.234627]  kunit_try_run_case+0x1b3/0x490
[   22.234659]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.234689]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.234719]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.234750]  ? __kthread_parkme+0x82/0x160
[   22.234778]  ? preempt_count_sub+0x50/0x80
[   22.234809]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.234840]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.234871]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.234903]  kthread+0x324/0x6d0
[   22.234930]  ? trace_preempt_on+0x20/0xc0
[   22.234959]  ? __pfx_kthread+0x10/0x10
[   22.234988]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.235016]  ? calculate_sigpending+0x7b/0xa0
[   22.235049]  ? __pfx_kthread+0x10/0x10
[   22.235078]  ret_from_fork+0x41/0x80
[   22.235106]  ? __pfx_kthread+0x10/0x10
[   22.235134]  ret_from_fork_asm+0x1a/0x30
[   22.235175]  </TASK>
[   22.235195] 
[   22.248588] Allocated by task 172:
[   22.249027]  kasan_save_stack+0x3d/0x60
[   22.249545]  kasan_save_track+0x18/0x40
[   22.249897]  kasan_save_alloc_info+0x3b/0x50
[   22.250441]  __kasan_kmalloc+0xb7/0xc0
[   22.250744]  __kmalloc_cache_noprof+0x184/0x410
[   22.251424]  krealloc_uaf+0xbc/0x5e0
[   22.251780]  kunit_try_run_case+0x1b3/0x490
[   22.252302]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.252737]  kthread+0x324/0x6d0
[   22.253140]  ret_from_fork+0x41/0x80
[   22.253572]  ret_from_fork_asm+0x1a/0x30
[   22.254066] 
[   22.254309] Freed by task 172:
[   22.254612]  kasan_save_stack+0x3d/0x60
[   22.254903]  kasan_save_track+0x18/0x40
[   22.255231]  kasan_save_free_info+0x3f/0x60
[   22.255721]  __kasan_slab_free+0x56/0x70
[   22.256148]  kfree+0x123/0x3f0
[   22.256521]  krealloc_uaf+0x13e/0x5e0
[   22.256938]  kunit_try_run_case+0x1b3/0x490
[   22.257443]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.257970]  kthread+0x324/0x6d0
[   22.258530]  ret_from_fork+0x41/0x80
[   22.258950]  ret_from_fork_asm+0x1a/0x30
[   22.259531] 
[   22.259728] The buggy address belongs to the object at ffff888100a9c000
[   22.259728]  which belongs to the cache kmalloc-256 of size 256
[   22.260516] The buggy address is located 0 bytes inside of
[   22.260516]  freed 256-byte region [ffff888100a9c000, ffff888100a9c100)
[   22.263694] 
[   22.263930] The buggy address belongs to the physical page:
[   22.264311] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x100a9c
[   22.265044] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.267424] flags: 0x200000000000040(head|node=0|zone=2)
[   22.268615] page_type: f5(slab)
[   22.268954] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   22.269591] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.270257] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   22.271079] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.271680] head: 0200000000000001 ffffea000402a701 ffffffffffffffff 0000000000000000
[   22.272267] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   22.272982] page dumped because: kasan: bad access detected
[   22.273553] 
[   22.273753] Memory state around the buggy address:
[   22.274232]  ffff888100a9bf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.274771]  ffff888100a9bf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.275361] >ffff888100a9c000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.276262]                    ^
[   22.276608]  ffff888100a9c080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.277202]  ffff888100a9c100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.277799] ==================================================================

test log snippet boot parser 129

[   22.279947] ==================================================================
[   22.280810] BUG: KASAN: slab-use-after-free in krealloc_uaf+0x53e/0x5e0
[   22.281200] Read of size 1 at addr ffff888100a9c000 by task kunit_try_catch/172
[   22.281833] 
[   22.282315] CPU: 1 UID: 0 PID: 172 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   22.282445] Tainted: [B]=BAD_PAGE, [N]=TEST
[   22.282506] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   22.282579] Call Trace:
[   22.282963]  <TASK>
[   22.283035]  dump_stack_lvl+0x73/0xb0
[   22.283117]  print_report+0xd1/0x640
[   22.283180]  ? __virt_addr_valid+0x1db/0x2d0
[   22.283247]  ? kasan_complete_mode_report_info+0x64/0x200
[   22.283315]  kasan_report+0x102/0x140
[   22.283372]  ? krealloc_uaf+0x53e/0x5e0
[   22.283729]  ? krealloc_uaf+0x53e/0x5e0
[   22.283898]  __asan_report_load1_noabort+0x18/0x20
[   22.283990]  krealloc_uaf+0x53e/0x5e0
[   22.284057]  ? __pfx_krealloc_uaf+0x10/0x10
[   22.284114]  ? finish_task_switch.isra.0+0x153/0x700
[   22.284168]  ? __switch_to+0x5d9/0xf60
[   22.284245]  ? __schedule+0xc6f/0x27d0
[   22.284299]  ? __pfx_read_tsc+0x10/0x10
[   22.284329]  ? ktime_get_ts64+0x86/0x230
[   22.284365]  kunit_try_run_case+0x1b3/0x490
[   22.284417]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.284450]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   22.284481]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   22.284512]  ? __kthread_parkme+0x82/0x160
[   22.284539]  ? preempt_count_sub+0x50/0x80
[   22.284570]  ? __pfx_kunit_try_run_case+0x10/0x10
[   22.284602]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.284634]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   22.284665]  kthread+0x324/0x6d0
[   22.284692]  ? trace_preempt_on+0x20/0xc0
[   22.284721]  ? __pfx_kthread+0x10/0x10
[   22.284749]  ? _raw_spin_unlock_irq+0x47/0x80
[   22.284778]  ? calculate_sigpending+0x7b/0xa0
[   22.284810]  ? __pfx_kthread+0x10/0x10
[   22.284839]  ret_from_fork+0x41/0x80
[   22.284866]  ? __pfx_kthread+0x10/0x10
[   22.284895]  ret_from_fork_asm+0x1a/0x30
[   22.284937]  </TASK>
[   22.284952] 
[   22.297530] Allocated by task 172:
[   22.297942]  kasan_save_stack+0x3d/0x60
[   22.298516]  kasan_save_track+0x18/0x40
[   22.298975]  kasan_save_alloc_info+0x3b/0x50
[   22.299540]  __kasan_kmalloc+0xb7/0xc0
[   22.299861]  __kmalloc_cache_noprof+0x184/0x410
[   22.300169]  krealloc_uaf+0xbc/0x5e0
[   22.300458]  kunit_try_run_case+0x1b3/0x490
[   22.300747]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.301333]  kthread+0x324/0x6d0
[   22.301797]  ret_from_fork+0x41/0x80
[   22.302284]  ret_from_fork_asm+0x1a/0x30
[   22.302777] 
[   22.303014] Freed by task 172:
[   22.303460]  kasan_save_stack+0x3d/0x60
[   22.303887]  kasan_save_track+0x18/0x40
[   22.304390]  kasan_save_free_info+0x3f/0x60
[   22.304704]  __kasan_slab_free+0x56/0x70
[   22.304987]  kfree+0x123/0x3f0
[   22.305394]  krealloc_uaf+0x13e/0x5e0
[   22.305829]  kunit_try_run_case+0x1b3/0x490
[   22.306315]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   22.306891]  kthread+0x324/0x6d0
[   22.307347]  ret_from_fork+0x41/0x80
[   22.307739]  ret_from_fork_asm+0x1a/0x30
[   22.308155] 
[   22.308423] The buggy address belongs to the object at ffff888100a9c000
[   22.308423]  which belongs to the cache kmalloc-256 of size 256
[   22.309274] The buggy address is located 0 bytes inside of
[   22.309274]  freed 256-byte region [ffff888100a9c000, ffff888100a9c100)
[   22.310166] 
[   22.310443] The buggy address belongs to the physical page:
[   22.310977] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x100a9c
[   22.311694] head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   22.312260] flags: 0x200000000000040(head|node=0|zone=2)
[   22.312786] page_type: f5(slab)
[   22.313195] raw: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   22.313814] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.314477] head: 0200000000000040 ffff888100041b40 dead000000000122 0000000000000000
[   22.315238] head: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   22.315803] head: 0200000000000001 ffffea000402a701 ffffffffffffffff 0000000000000000
[   22.316223] head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
[   22.316943] page dumped because: kasan: bad access detected
[   22.317775] 
[   22.317998] Memory state around the buggy address:
[   22.318581]  ffff888100a9bf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.319020]  ffff888100a9bf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.319951] >ffff888100a9c000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.320591]                    ^
[   22.320954]  ffff888100a9c080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   22.321571]  ffff888100a9c100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   22.322291] ==================================================================

test log snippet boot parser 130

[   23.221516] ==================================================================
[   23.222573] BUG: KASAN: slab-use-after-free in ksize_uaf+0x19e/0x6c0
[   23.223100] Read of size 1 at addr ffff888101ad5b00 by task kunit_try_catch/204
[   23.223653] 
[   23.224142] CPU: 0 UID: 0 PID: 204 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.224266] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.224304] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.224358] Call Trace:
[   23.224391]  <TASK>
[   23.224446]  dump_stack_lvl+0x73/0xb0
[   23.224528]  print_report+0xd1/0x640
[   23.224655]  ? __virt_addr_valid+0x1db/0x2d0
[   23.224730]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.224799]  kasan_report+0x102/0x140
[   23.224857]  ? ksize_uaf+0x19e/0x6c0
[   23.224918]  ? ksize_uaf+0x19e/0x6c0
[   23.224983]  ? ksize_uaf+0x19e/0x6c0
[   23.225042]  __kasan_check_byte+0x3d/0x50
[   23.225104]  ksize+0x20/0x60
[   23.225223]  ksize_uaf+0x19e/0x6c0
[   23.225299]  ? __pfx_ksize_uaf+0x10/0x10
[   23.225356]  ? __schedule+0xc6f/0x27d0
[   23.225435]  ? __pfx_read_tsc+0x10/0x10
[   23.225488]  ? ktime_get_ts64+0x86/0x230
[   23.225573]  kunit_try_run_case+0x1b3/0x490
[   23.225650]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.225706]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.225742]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.225775]  ? __kthread_parkme+0x82/0x160
[   23.225805]  ? preempt_count_sub+0x50/0x80
[   23.225838]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.225871]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.225903]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.225935]  kthread+0x324/0x6d0
[   23.225963]  ? trace_preempt_on+0x20/0xc0
[   23.225994]  ? __pfx_kthread+0x10/0x10
[   23.226023]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.226052]  ? calculate_sigpending+0x7b/0xa0
[   23.226085]  ? __pfx_kthread+0x10/0x10
[   23.226114]  ret_from_fork+0x41/0x80
[   23.226142]  ? __pfx_kthread+0x10/0x10
[   23.226171]  ret_from_fork_asm+0x1a/0x30
[   23.226238]  </TASK>
[   23.226255] 
[   23.245410] Allocated by task 204:
[   23.246049]  kasan_save_stack+0x3d/0x60
[   23.246433]  kasan_save_track+0x18/0x40
[   23.246936]  kasan_save_alloc_info+0x3b/0x50
[   23.247371]  __kasan_kmalloc+0xb7/0xc0
[   23.247804]  __kmalloc_cache_noprof+0x184/0x410
[   23.248199]  ksize_uaf+0xab/0x6c0
[   23.249129]  kunit_try_run_case+0x1b3/0x490
[   23.249906]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.250281]  kthread+0x324/0x6d0
[   23.250728]  ret_from_fork+0x41/0x80
[   23.251349]  ret_from_fork_asm+0x1a/0x30
[   23.251922] 
[   23.252150] Freed by task 204:
[   23.252567]  kasan_save_stack+0x3d/0x60
[   23.253416]  kasan_save_track+0x18/0x40
[   23.254063]  kasan_save_free_info+0x3f/0x60
[   23.254545]  __kasan_slab_free+0x56/0x70
[   23.254968]  kfree+0x123/0x3f0
[   23.255504]  ksize_uaf+0x12d/0x6c0
[   23.256078]  kunit_try_run_case+0x1b3/0x490
[   23.256447]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.257385]  kthread+0x324/0x6d0
[   23.258014]  ret_from_fork+0x41/0x80
[   23.258494]  ret_from_fork_asm+0x1a/0x30
[   23.259089] 
[   23.259373] The buggy address belongs to the object at ffff888101ad5b00
[   23.259373]  which belongs to the cache kmalloc-128 of size 128
[   23.260481] The buggy address is located 0 bytes inside of
[   23.260481]  freed 128-byte region [ffff888101ad5b00, ffff888101ad5b80)
[   23.262112] 
[   23.262416] The buggy address belongs to the physical page:
[   23.262998] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ad5
[   23.263807] flags: 0x200000000000000(node=0|zone=2)
[   23.264345] page_type: f5(slab)
[   23.264986] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   23.265627] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   23.266214] page dumped because: kasan: bad access detected
[   23.267104] 
[   23.267353] Memory state around the buggy address:
[   23.267984]  ffff888101ad5a00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.268934]  ffff888101ad5a80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.269704] >ffff888101ad5b00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.270481]                    ^
[   23.271060]  ffff888101ad5b80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.271791]  ffff888101ad5c00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.272339] ==================================================================

test log snippet boot parser 131

[   23.274463] ==================================================================
[   23.274985] BUG: KASAN: slab-use-after-free in ksize_uaf+0x600/0x6c0
[   23.275864] Read of size 1 at addr ffff888101ad5b00 by task kunit_try_catch/204
[   23.276927] 
[   23.277515] CPU: 0 UID: 0 PID: 204 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.277761] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.277799] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.277850] Call Trace:
[   23.277879]  <TASK>
[   23.277928]  dump_stack_lvl+0x73/0xb0
[   23.278027]  print_report+0xd1/0x640
[   23.278122]  ? __virt_addr_valid+0x1db/0x2d0
[   23.278174]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.278247]  kasan_report+0x102/0x140
[   23.278281]  ? ksize_uaf+0x600/0x6c0
[   23.278333]  ? ksize_uaf+0x600/0x6c0
[   23.278384]  __asan_report_load1_noabort+0x18/0x20
[   23.278433]  ksize_uaf+0x600/0x6c0
[   23.278463]  ? __pfx_ksize_uaf+0x10/0x10
[   23.278494]  ? __schedule+0xc6f/0x27d0
[   23.278525]  ? __pfx_read_tsc+0x10/0x10
[   23.278551]  ? ktime_get_ts64+0x86/0x230
[   23.278592]  kunit_try_run_case+0x1b3/0x490
[   23.278687]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.278743]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.278798]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.278854]  ? __kthread_parkme+0x82/0x160
[   23.278960]  ? preempt_count_sub+0x50/0x80
[   23.279015]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.279050]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.279084]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.279115]  kthread+0x324/0x6d0
[   23.279144]  ? trace_preempt_on+0x20/0xc0
[   23.279174]  ? __pfx_kthread+0x10/0x10
[   23.279217]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.279247]  ? calculate_sigpending+0x7b/0xa0
[   23.279281]  ? __pfx_kthread+0x10/0x10
[   23.279310]  ret_from_fork+0x41/0x80
[   23.279338]  ? __pfx_kthread+0x10/0x10
[   23.279367]  ret_from_fork_asm+0x1a/0x30
[   23.279423]  </TASK>
[   23.279439] 
[   23.297453] Allocated by task 204:
[   23.298554]  kasan_save_stack+0x3d/0x60
[   23.299044]  kasan_save_track+0x18/0x40
[   23.299451]  kasan_save_alloc_info+0x3b/0x50
[   23.300486]  __kasan_kmalloc+0xb7/0xc0
[   23.300958]  __kmalloc_cache_noprof+0x184/0x410
[   23.301163]  ksize_uaf+0xab/0x6c0
[   23.301450]  kunit_try_run_case+0x1b3/0x490
[   23.302003]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.302992]  kthread+0x324/0x6d0
[   23.303532]  ret_from_fork+0x41/0x80
[   23.304303]  ret_from_fork_asm+0x1a/0x30
[   23.304574] 
[   23.304904] Freed by task 204:
[   23.305249]  kasan_save_stack+0x3d/0x60
[   23.305660]  kasan_save_track+0x18/0x40
[   23.306100]  kasan_save_free_info+0x3f/0x60
[   23.306612]  __kasan_slab_free+0x56/0x70
[   23.307071]  kfree+0x123/0x3f0
[   23.307322]  ksize_uaf+0x12d/0x6c0
[   23.307604]  kunit_try_run_case+0x1b3/0x490
[   23.307925]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.309000]  kthread+0x324/0x6d0
[   23.309527]  ret_from_fork+0x41/0x80
[   23.310081]  ret_from_fork_asm+0x1a/0x30
[   23.310616] 
[   23.310993] The buggy address belongs to the object at ffff888101ad5b00
[   23.310993]  which belongs to the cache kmalloc-128 of size 128
[   23.312548] The buggy address is located 0 bytes inside of
[   23.312548]  freed 128-byte region [ffff888101ad5b00, ffff888101ad5b80)
[   23.314254] 
[   23.314450] The buggy address belongs to the physical page:
[   23.315012] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ad5
[   23.315719] flags: 0x200000000000000(node=0|zone=2)
[   23.316073] page_type: f5(slab)
[   23.317085] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   23.317918] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   23.318806] page dumped because: kasan: bad access detected
[   23.319378] 
[   23.319567] Memory state around the buggy address:
[   23.320296]  ffff888101ad5a00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.321297]  ffff888101ad5a80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.321991] >ffff888101ad5b00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.322707]                    ^
[   23.323135]  ffff888101ad5b80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.324014]  ffff888101ad5c00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.324594] ==================================================================

test log snippet boot parser 132

[   23.326147] ==================================================================
[   23.327512] BUG: KASAN: slab-use-after-free in ksize_uaf+0x5e6/0x6c0
[   23.328322] Read of size 1 at addr ffff888101ad5b78 by task kunit_try_catch/204
[   23.329375] 
[   23.329632] CPU: 0 UID: 0 PID: 204 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.329731] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.329764] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.329813] Call Trace:
[   23.329842]  <TASK>
[   23.329873]  dump_stack_lvl+0x73/0xb0
[   23.329945]  print_report+0xd1/0x640
[   23.329987]  ? __virt_addr_valid+0x1db/0x2d0
[   23.330021]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.330055]  kasan_report+0x102/0x140
[   23.330084]  ? ksize_uaf+0x5e6/0x6c0
[   23.330131]  ? ksize_uaf+0x5e6/0x6c0
[   23.330175]  __asan_report_load1_noabort+0x18/0x20
[   23.330204]  ksize_uaf+0x5e6/0x6c0
[   23.330234]  ? __pfx_ksize_uaf+0x10/0x10
[   23.330265]  ? __schedule+0xc6f/0x27d0
[   23.330295]  ? __pfx_read_tsc+0x10/0x10
[   23.330321]  ? ktime_get_ts64+0x86/0x230
[   23.330365]  kunit_try_run_case+0x1b3/0x490
[   23.330451]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.330524]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.330721]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.330901]  ? __kthread_parkme+0x82/0x160
[   23.331074]  ? preempt_count_sub+0x50/0x80
[   23.331129]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.331164]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.331217]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.331255]  kthread+0x324/0x6d0
[   23.331284]  ? trace_preempt_on+0x20/0xc0
[   23.331313]  ? __pfx_kthread+0x10/0x10
[   23.331342]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.331371]  ? calculate_sigpending+0x7b/0xa0
[   23.331418]  ? __pfx_kthread+0x10/0x10
[   23.331448]  ret_from_fork+0x41/0x80
[   23.331478]  ? __pfx_kthread+0x10/0x10
[   23.331507]  ret_from_fork_asm+0x1a/0x30
[   23.331548]  </TASK>
[   23.331563] 
[   23.347537] Allocated by task 204:
[   23.348020]  kasan_save_stack+0x3d/0x60
[   23.348826]  kasan_save_track+0x18/0x40
[   23.349354]  kasan_save_alloc_info+0x3b/0x50
[   23.350011]  __kasan_kmalloc+0xb7/0xc0
[   23.350522]  __kmalloc_cache_noprof+0x184/0x410
[   23.351246]  ksize_uaf+0xab/0x6c0
[   23.351842]  kunit_try_run_case+0x1b3/0x490
[   23.352317]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.353131]  kthread+0x324/0x6d0
[   23.353542]  ret_from_fork+0x41/0x80
[   23.354366]  ret_from_fork_asm+0x1a/0x30
[   23.354988] 
[   23.355287] Freed by task 204:
[   23.355665]  kasan_save_stack+0x3d/0x60
[   23.356083]  kasan_save_track+0x18/0x40
[   23.356768]  kasan_save_free_info+0x3f/0x60
[   23.357311]  __kasan_slab_free+0x56/0x70
[   23.358130]  kfree+0x123/0x3f0
[   23.358551]  ksize_uaf+0x12d/0x6c0
[   23.359156]  kunit_try_run_case+0x1b3/0x490
[   23.359673]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.360390]  kthread+0x324/0x6d0
[   23.361020]  ret_from_fork+0x41/0x80
[   23.361484]  ret_from_fork_asm+0x1a/0x30
[   23.362261] 
[   23.362513] The buggy address belongs to the object at ffff888101ad5b00
[   23.362513]  which belongs to the cache kmalloc-128 of size 128
[   23.363659] The buggy address is located 120 bytes inside of
[   23.363659]  freed 128-byte region [ffff888101ad5b00, ffff888101ad5b80)
[   23.364862] 
[   23.365093] The buggy address belongs to the physical page:
[   23.365556] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101ad5
[   23.366572] flags: 0x200000000000000(node=0|zone=2)
[   23.367052] page_type: f5(slab)
[   23.367325] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   23.368296] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   23.369234] page dumped because: kasan: bad access detected
[   23.369916] 
[   23.370224] Memory state around the buggy address:
[   23.370877]  ffff888101ad5a00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.371521]  ffff888101ad5a80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.372358] >ffff888101ad5b00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   23.373046]                                                                 ^
[   23.374067]  ffff888101ad5b80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.374844]  ffff888101ad5c00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.375264] ==================================================================

test log snippet boot parser 133

[   24.703896] ==================================================================
[   24.704683] BUG: KASAN: slab-use-after-free in mempool_uaf_helper+0x394/0x400
[   24.705742] Read of size 1 at addr ffff888101aed240 by task kunit_try_catch/239
[   24.707028] 
[   24.707267] CPU: 0 UID: 0 PID: 239 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.707333] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.707357] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.707425] Call Trace:
[   24.707458]  <TASK>
[   24.707497]  dump_stack_lvl+0x73/0xb0
[   24.707571]  print_report+0xd1/0x640
[   24.707608]  ? __virt_addr_valid+0x1db/0x2d0
[   24.707645]  ? kasan_complete_mode_report_info+0x64/0x200
[   24.707682]  kasan_report+0x102/0x140
[   24.707725]  ? mempool_uaf_helper+0x394/0x400
[   24.707989]  ? mempool_uaf_helper+0x394/0x400
[   24.708076]  __asan_report_load1_noabort+0x18/0x20
[   24.708173]  mempool_uaf_helper+0x394/0x400
[   24.708281]  ? __pfx_mempool_uaf_helper+0x10/0x10
[   24.708326]  ? finish_task_switch.isra.0+0x153/0x700
[   24.708368]  mempool_slab_uaf+0xae/0x100
[   24.708397]  ? __pfx_mempool_slab_uaf+0x10/0x10
[   24.708447]  ? __switch_to+0x5d9/0xf60
[   24.708482]  ? __pfx_mempool_alloc_slab+0x10/0x10
[   24.708515]  ? __pfx_mempool_free_slab+0x10/0x10
[   24.708548]  ? __pfx_read_tsc+0x10/0x10
[   24.708589]  ? ktime_get_ts64+0x86/0x230
[   24.708655]  kunit_try_run_case+0x1b3/0x490
[   24.708694]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.708725]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.708760]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.708793]  ? __kthread_parkme+0x82/0x160
[   24.708823]  ? preempt_count_sub+0x50/0x80
[   24.708854]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.708887]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.708920]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.708952]  kthread+0x324/0x6d0
[   24.708981]  ? trace_preempt_on+0x20/0xc0
[   24.709012]  ? __pfx_kthread+0x10/0x10
[   24.709041]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.709072]  ? calculate_sigpending+0x7b/0xa0
[   24.709107]  ? __pfx_kthread+0x10/0x10
[   24.709137]  ret_from_fork+0x41/0x80
[   24.709167]  ? __pfx_kthread+0x10/0x10
[   24.709234]  ret_from_fork_asm+0x1a/0x30
[   24.709345]  </TASK>
[   24.709367] 
[   24.729255] Allocated by task 239:
[   24.730049]  kasan_save_stack+0x3d/0x60
[   24.730563]  kasan_save_track+0x18/0x40
[   24.731126]  kasan_save_alloc_info+0x3b/0x50
[   24.731808]  __kasan_mempool_unpoison_object+0x1bb/0x200
[   24.732294]  remove_element+0x11e/0x190
[   24.732744]  mempool_alloc_preallocated+0x4d/0x90
[   24.733308]  mempool_uaf_helper+0x97/0x400
[   24.733821]  mempool_slab_uaf+0xae/0x100
[   24.734262]  kunit_try_run_case+0x1b3/0x490
[   24.735100]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.735877]  kthread+0x324/0x6d0
[   24.736314]  ret_from_fork+0x41/0x80
[   24.736843]  ret_from_fork_asm+0x1a/0x30
[   24.737348] 
[   24.737732] Freed by task 239:
[   24.738084]  kasan_save_stack+0x3d/0x60
[   24.738529]  kasan_save_track+0x18/0x40
[   24.739726]  kasan_save_free_info+0x3f/0x60
[   24.740245]  __kasan_mempool_poison_object+0x131/0x1d0
[   24.740917]  mempool_free+0x2ec/0x380
[   24.741460]  mempool_uaf_helper+0x11b/0x400
[   24.742030]  mempool_slab_uaf+0xae/0x100
[   24.742550]  kunit_try_run_case+0x1b3/0x490
[   24.743297]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.743979]  kthread+0x324/0x6d0
[   24.744335]  ret_from_fork+0x41/0x80
[   24.745020]  ret_from_fork_asm+0x1a/0x30
[   24.745415] 
[   24.745830] The buggy address belongs to the object at ffff888101aed240
[   24.745830]  which belongs to the cache test_cache of size 123
[   24.747139] The buggy address is located 0 bytes inside of
[   24.747139]  freed 123-byte region [ffff888101aed240, ffff888101aed2bb)
[   24.748271] 
[   24.748561] The buggy address belongs to the physical page:
[   24.749270] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101aed
[   24.750063] flags: 0x200000000000000(node=0|zone=2)
[   24.751098] page_type: f5(slab)
[   24.751452] raw: 0200000000000000 ffff888102a65000 dead000000000122 0000000000000000
[   24.752074] raw: 0000000000000000 0000000080150015 00000001f5000000 0000000000000000
[   24.752917] page dumped because: kasan: bad access detected
[   24.753810] 
[   24.754039] Memory state around the buggy address:
[   24.754431]  ffff888101aed100: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   24.755647]  ffff888101aed180: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   24.755893] >ffff888101aed200: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
[   24.756117]                                            ^
[   24.757047]  ffff888101aed280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[   24.757893]  ffff888101aed300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.759069] ==================================================================

test log snippet boot parser 134

[   24.605292] ==================================================================
[   24.606274] BUG: KASAN: slab-use-after-free in mempool_uaf_helper+0x394/0x400
[   24.606851] Read of size 1 at addr ffff888102a59600 by task kunit_try_catch/235
[   24.607327] 
[   24.607687] CPU: 1 UID: 0 PID: 235 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.607798] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.607830] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.607887] Call Trace:
[   24.607919]  <TASK>
[   24.607955]  dump_stack_lvl+0x73/0xb0
[   24.608033]  print_report+0xd1/0x640
[   24.608096]  ? __virt_addr_valid+0x1db/0x2d0
[   24.608196]  ? kasan_complete_mode_report_info+0x64/0x200
[   24.608282]  kasan_report+0x102/0x140
[   24.608341]  ? mempool_uaf_helper+0x394/0x400
[   24.608426]  ? mempool_uaf_helper+0x394/0x400
[   24.608509]  __asan_report_load1_noabort+0x18/0x20
[   24.608571]  mempool_uaf_helper+0x394/0x400
[   24.608641]  ? __pfx_mempool_uaf_helper+0x10/0x10
[   24.608719]  ? finish_task_switch.isra.0+0x153/0x700
[   24.608782]  mempool_kmalloc_uaf+0xb3/0x100
[   24.608834]  ? __pfx_mempool_kmalloc_uaf+0x10/0x10
[   24.608868]  ? __switch_to+0x5d9/0xf60
[   24.608902]  ? __pfx_mempool_kmalloc+0x10/0x10
[   24.608934]  ? __pfx_mempool_kfree+0x10/0x10
[   24.608966]  ? __pfx_read_tsc+0x10/0x10
[   24.608994]  ? ktime_get_ts64+0x86/0x230
[   24.609027]  kunit_try_run_case+0x1b3/0x490
[   24.609063]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.609094]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.609126]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.609158]  ? __kthread_parkme+0x82/0x160
[   24.609190]  ? preempt_count_sub+0x50/0x80
[   24.609257]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.609291]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.609323]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.609356]  kthread+0x324/0x6d0
[   24.609385]  ? trace_preempt_on+0x20/0xc0
[   24.609434]  ? __pfx_kthread+0x10/0x10
[   24.609465]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.609494]  ? calculate_sigpending+0x7b/0xa0
[   24.609529]  ? __pfx_kthread+0x10/0x10
[   24.609559]  ret_from_fork+0x41/0x80
[   24.609587]  ? __pfx_kthread+0x10/0x10
[   24.609617]  ret_from_fork_asm+0x1a/0x30
[   24.609659]  </TASK>
[   24.609676] 
[   24.623277] Allocated by task 235:
[   24.623671]  kasan_save_stack+0x3d/0x60
[   24.624130]  kasan_save_track+0x18/0x40
[   24.624473]  kasan_save_alloc_info+0x3b/0x50
[   24.624774]  __kasan_mempool_unpoison_object+0x1a9/0x200
[   24.625149]  remove_element+0x11e/0x190
[   24.625602]  mempool_alloc_preallocated+0x4d/0x90
[   24.626104]  mempool_uaf_helper+0x97/0x400
[   24.626628]  mempool_kmalloc_uaf+0xb3/0x100
[   24.627087]  kunit_try_run_case+0x1b3/0x490
[   24.627608]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.628167]  kthread+0x324/0x6d0
[   24.628560]  ret_from_fork+0x41/0x80
[   24.628999]  ret_from_fork_asm+0x1a/0x30
[   24.629581] 
[   24.629784] Freed by task 235:
[   24.630114]  kasan_save_stack+0x3d/0x60
[   24.630591]  kasan_save_track+0x18/0x40
[   24.630943]  kasan_save_free_info+0x3f/0x60
[   24.631486]  __kasan_mempool_poison_object+0x131/0x1d0
[   24.631925]  mempool_free+0x2ec/0x380
[   24.632425]  mempool_uaf_helper+0x11b/0x400
[   24.632815]  mempool_kmalloc_uaf+0xb3/0x100
[   24.633213]  kunit_try_run_case+0x1b3/0x490
[   24.633655]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.634189]  kthread+0x324/0x6d0
[   24.634799]  ret_from_fork+0x41/0x80
[   24.635246]  ret_from_fork_asm+0x1a/0x30
[   24.635719] 
[   24.635941] The buggy address belongs to the object at ffff888102a59600
[   24.635941]  which belongs to the cache kmalloc-128 of size 128
[   24.637329] The buggy address is located 0 bytes inside of
[   24.637329]  freed 128-byte region [ffff888102a59600, ffff888102a59680)
[   24.638055] 
[   24.638269] The buggy address belongs to the physical page:
[   24.638930] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102a59
[   24.639649] flags: 0x200000000000000(node=0|zone=2)
[   24.640139] page_type: f5(slab)
[   24.640551] raw: 0200000000000000 ffff888100041a00 dead000000000122 0000000000000000
[   24.641140] raw: 0000000000000000 0000000080100010 00000001f5000000 0000000000000000
[   24.641863] page dumped because: kasan: bad access detected
[   24.642377] 
[   24.642617] Memory state around the buggy address:
[   24.643002]  ffff888102a59500: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   24.643656]  ffff888102a59580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.644344] >ffff888102a59600: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[   24.645437]                    ^
[   24.645803]  ffff888102a59680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   24.646631]  ffff888102a59700: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   24.647298] ==================================================================

test log snippet boot parser 135

Home

[   23.384889] ==================================================================
[   23.385785] BUG: KASAN: slab-use-after-free in rcu_uaf_reclaim+0x50/0x60
[   23.387374] Read of size 4 at addr ffff888101adcd80 by task swapper/0/0
[   23.387998] 
[   23.388526] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Tainted: G    B            N 6.13.0 #1
[   23.388682] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.388725] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.388777] Call Trace:
[   23.388825]  <IRQ>
[   23.388859]  dump_stack_lvl+0x73/0xb0
[   23.388934]  print_report+0xd1/0x640
[   23.388987]  ? __virt_addr_valid+0x1db/0x2d0
[   23.389046]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.389154]  kasan_report+0x102/0x140
[   23.389251]  ? rcu_uaf_reclaim+0x50/0x60
[   23.389306]  ? rcu_uaf_reclaim+0x50/0x60
[   23.389345]  __asan_report_load4_noabort+0x18/0x20
[   23.389376]  rcu_uaf_reclaim+0x50/0x60
[   23.389424]  rcu_core+0x680/0x1d70
[   23.389469]  ? __pfx_rcu_core+0x10/0x10
[   23.389508]  rcu_core_si+0x12/0x20
[   23.389535]  handle_softirqs+0x209/0x720
[   23.389565]  ? hrtimer_interrupt+0x2fe/0x780
[   23.389683]  ? __pfx_handle_softirqs+0x10/0x10
[   23.389741]  __irq_exit_rcu+0xc9/0x110
[   23.389774]  irq_exit_rcu+0x12/0x20
[   23.389804]  sysvec_apic_timer_interrupt+0x81/0x90
[   23.389841]  </IRQ>
[   23.389856]  <TASK>
[   23.389871]  asm_sysvec_apic_timer_interrupt+0x1f/0x30
[   23.390025] RIP: 0010:default_idle+0xf/0x20
[   23.390164] Code: 4c 01 c7 4c 29 c2 e9 72 ff ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa eb 07 0f 00 2d 13 4f 33 00 fb f4 <fa> c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90
[   23.390274] RSP: 0000:ffffffff90a07df0 EFLAGS: 00010212
[   23.390448] RAX: ffff88815b000000 RBX: ffffffff90a1a680 RCX: ffffffff8f726165
[   23.390523] RDX: ffffed102b606ad3 RSI: 0000000000000004 RDI: 000000000007abbc
[   23.390617] RBP: ffffffff90a07df8 R08: 0000000000000001 R09: ffffed102b606ad2
[   23.390746] R10: ffff88815b035693 R11: 0000000000000000 R12: 0000000000000000
[   23.390827] R13: fffffbfff21434d0 R14: ffffffff9157b950 R15: 0000000000000000
[   23.390918]  ? ct_kernel_exit.constprop.0+0xa5/0xd0
[   23.390969]  ? arch_cpu_idle+0xd/0x20
[   23.390996]  default_idle_call+0x48/0x80
[   23.391024]  do_idle+0x310/0x3c0
[   23.391056]  ? __pfx_do_idle+0x10/0x10
[   23.391083]  ? trace_preempt_on+0x20/0xc0
[   23.391112]  ? schedule+0x86/0x310
[   23.391140]  ? preempt_count_sub+0x50/0x80
[   23.391171]  cpu_startup_entry+0x5c/0x70
[   23.391213]  rest_init+0x11a/0x140
[   23.391238]  ? acpi_subsystem_init+0x5d/0x150
[   23.391270]  start_kernel+0x325/0x410
[   23.391299]  x86_64_start_reservations+0x1c/0x30
[   23.391332]  x86_64_start_kernel+0xcf/0xe0
[   23.391364]  common_startup_64+0x12c/0x138
[   23.391426]  </TASK>
[   23.391464] 
[   23.416261] Allocated by task 206:
[   23.416769]  kasan_save_stack+0x3d/0x60
[   23.417416]  kasan_save_track+0x18/0x40
[   23.418322]  kasan_save_alloc_info+0x3b/0x50
[   23.418941]  __kasan_kmalloc+0xb7/0xc0
[   23.419479]  __kmalloc_cache_noprof+0x184/0x410
[   23.420171]  rcu_uaf+0xb1/0x330
[   23.420827]  kunit_try_run_case+0x1b3/0x490
[   23.421387]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.422433]  kthread+0x324/0x6d0
[   23.423012]  ret_from_fork+0x41/0x80
[   23.423476]  ret_from_fork_asm+0x1a/0x30
[   23.424048] 
[   23.424350] Freed by task 0:
[   23.424766]  kasan_save_stack+0x3d/0x60
[   23.425377]  kasan_save_track+0x18/0x40
[   23.426222]  kasan_save_free_info+0x3f/0x60
[   23.426911]  __kasan_slab_free+0x56/0x70
[   23.427442]  kfree+0x123/0x3f0
[   23.427783]  rcu_uaf_reclaim+0x1f/0x60
[   23.428478]  rcu_core+0x680/0x1d70
[   23.429036]  rcu_core_si+0x12/0x20
[   23.430141]  handle_softirqs+0x209/0x720
[   23.430552]  __irq_exit_rcu+0xc9/0x110
[   23.430878]  irq_exit_rcu+0x12/0x20
[   23.431501]  sysvec_apic_timer_interrupt+0x81/0x90
[   23.432100]  asm_sysvec_apic_timer_interrupt+0x1f/0x30
[   23.432793] 
[   23.433023] Last potentially related work creation:
[   23.434319]  kasan_save_stack+0x3d/0x60
[   23.434622]  __kasan_record_aux_stack+0xae/0xc0
[   23.434809]  kasan_record_aux_stack_noalloc+0xf/0x20
[   23.434987]  __call_rcu_common.constprop.0+0x72/0xaa0
[   23.435165]  call_rcu+0x12/0x20
[   23.435702]  rcu_uaf+0x169/0x330
[   23.436246]  kunit_try_run_case+0x1b3/0x490
[   23.436862]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.438206]  kthread+0x324/0x6d0
[   23.438478]  ret_from_fork+0x41/0x80
[   23.439064]  ret_from_fork_asm+0x1a/0x30
[   23.439564] 
[   23.440050] The buggy address belongs to the object at ffff888101adcd80
[   23.440050]  which belongs to the cache kmalloc-32 of size 32
[   23.440898] The buggy address is located 0 bytes inside of
[   23.440898]  freed 32-byte region [ffff888101adcd80, ffff888101adcda0)
[   23.441692] 
[   23.441921] The buggy address belongs to the physical page:
[   23.442705] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101adc
[   23.443344] flags: 0x200000000000000(node=0|zone=2)
[   23.443970] page_type: f5(slab)
[   23.444453] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   23.445437] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   23.446299] page dumped because: kasan: bad access detected
[   23.446874] 
[   23.447095] Memory state around the buggy address:
[   23.447442]  ffff888101adcc80: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   23.448078]  ffff888101adcd00: 00 00 05 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   23.448950] >ffff888101adcd80: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   23.449799]                    ^
[   23.450162]  ffff888101adce00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.451231]  ffff888101adce80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.451976] ==================================================================

test log snippet boot parser 136

[   25.312201] ==================================================================
[   25.313382] BUG: KASAN: slab-use-after-free in strchr+0x9c/0xc0
[   25.313903] Read of size 1 at addr ffff888101af1310 by task kunit_try_catch/267
[   25.314933] 
[   25.316051] CPU: 0 UID: 0 PID: 267 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.316161] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.316191] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.316496] Call Trace:
[   25.316528]  <TASK>
[   25.316549]  dump_stack_lvl+0x73/0xb0
[   25.316635]  print_report+0xd1/0x640
[   25.316702]  ? __virt_addr_valid+0x1db/0x2d0
[   25.316748]  ? kasan_complete_mode_report_info+0x64/0x200
[   25.316785]  kasan_report+0x102/0x140
[   25.316816]  ? strchr+0x9c/0xc0
[   25.316846]  ? strchr+0x9c/0xc0
[   25.316879]  __asan_report_load1_noabort+0x18/0x20
[   25.316909]  strchr+0x9c/0xc0
[   25.316938]  kasan_strings+0x1b1/0xb60
[   25.316968]  ? __pfx_kasan_strings+0x10/0x10
[   25.317001]  ? __schedule+0xc6f/0x27d0
[   25.317032]  ? __pfx_read_tsc+0x10/0x10
[   25.317061]  ? ktime_get_ts64+0x86/0x230
[   25.317095]  kunit_try_run_case+0x1b3/0x490
[   25.317129]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.317160]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.317198]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.317251]  ? __kthread_parkme+0x82/0x160
[   25.317280]  ? preempt_count_sub+0x50/0x80
[   25.317313]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.317347]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.317381]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.317432]  kthread+0x324/0x6d0
[   25.317461]  ? trace_preempt_on+0x20/0xc0
[   25.317494]  ? __pfx_kthread+0x10/0x10
[   25.317522]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.317554]  ? calculate_sigpending+0x7b/0xa0
[   25.317608]  ? __pfx_kthread+0x10/0x10
[   25.317667]  ret_from_fork+0x41/0x80
[   25.317721]  ? __pfx_kthread+0x10/0x10
[   25.317772]  ret_from_fork_asm+0x1a/0x30
[   25.317819]  </TASK>
[   25.317834] 
[   25.333866] Allocated by task 267:
[   25.334109]  kasan_save_stack+0x3d/0x60
[   25.334937]  kasan_save_track+0x18/0x40
[   25.335752]  kasan_save_alloc_info+0x3b/0x50
[   25.336290]  __kasan_kmalloc+0xb7/0xc0
[   25.336977]  __kmalloc_cache_noprof+0x184/0x410
[   25.337756]  kasan_strings+0xb3/0xb60
[   25.338255]  kunit_try_run_case+0x1b3/0x490
[   25.339109]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.339962]  kthread+0x324/0x6d0
[   25.340439]  ret_from_fork+0x41/0x80
[   25.341038]  ret_from_fork_asm+0x1a/0x30
[   25.341687] 
[   25.341931] Freed by task 267:
[   25.342765]  kasan_save_stack+0x3d/0x60
[   25.343041]  kasan_save_track+0x18/0x40
[   25.343685]  kasan_save_free_info+0x3f/0x60
[   25.344051]  __kasan_slab_free+0x56/0x70
[   25.344627]  kfree+0x123/0x3f0
[   25.344987]  kasan_strings+0x13a/0xb60
[   25.345603]  kunit_try_run_case+0x1b3/0x490
[   25.346468]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.347438]  kthread+0x324/0x6d0
[   25.347870]  ret_from_fork+0x41/0x80
[   25.348170]  ret_from_fork_asm+0x1a/0x30
[   25.348588] 
[   25.348811] The buggy address belongs to the object at ffff888101af1300
[   25.348811]  which belongs to the cache kmalloc-32 of size 32
[   25.349841] The buggy address is located 16 bytes inside of
[   25.349841]  freed 32-byte region [ffff888101af1300, ffff888101af1320)
[   25.350805] 
[   25.351037] The buggy address belongs to the physical page:
[   25.351734] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af1
[   25.352232] flags: 0x200000000000000(node=0|zone=2)
[   25.352905] page_type: f5(slab)
[   25.353257] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.354009] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.354828] page dumped because: kasan: bad access detected
[   25.355206] 
[   25.355426] Memory state around the buggy address:
[   25.356024]  ffff888101af1200: 00 00 07 fc fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.356826]  ffff888101af1280: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.357334] >ffff888101af1300: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   25.358093]                          ^
[   25.358538]  ffff888101af1380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.359228]  ffff888101af1400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.360096] ==================================================================

test log snippet boot parser 137

[   25.407910] ==================================================================
[   25.409696] BUG: KASAN: slab-use-after-free in strcmp+0xb0/0xc0
[   25.410249] Read of size 1 at addr ffff888101af1310 by task kunit_try_catch/267
[   25.412097] 
[   25.412833] CPU: 0 UID: 0 PID: 267 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.412946] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.412979] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.413035] Call Trace:
[   25.413067]  <TASK>
[   25.413098]  dump_stack_lvl+0x73/0xb0
[   25.413146]  print_report+0xd1/0x640
[   25.413181]  ? __virt_addr_valid+0x1db/0x2d0
[   25.413245]  ? kasan_complete_mode_report_info+0x64/0x200
[   25.413283]  kasan_report+0x102/0x140
[   25.413313]  ? strcmp+0xb0/0xc0
[   25.413342]  ? strcmp+0xb0/0xc0
[   25.413375]  __asan_report_load1_noabort+0x18/0x20
[   25.413423]  strcmp+0xb0/0xc0
[   25.413452]  kasan_strings+0x2e9/0xb60
[   25.413485]  ? __pfx_kasan_strings+0x10/0x10
[   25.413516]  ? __schedule+0xc6f/0x27d0
[   25.413546]  ? __pfx_read_tsc+0x10/0x10
[   25.413574]  ? ktime_get_ts64+0x86/0x230
[   25.413656]  kunit_try_run_case+0x1b3/0x490
[   25.413715]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.413750]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.413784]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.413820]  ? __kthread_parkme+0x82/0x160
[   25.413849]  ? preempt_count_sub+0x50/0x80
[   25.413881]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.413915]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.413948]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.413981]  kthread+0x324/0x6d0
[   25.414009]  ? trace_preempt_on+0x20/0xc0
[   25.414039]  ? __pfx_kthread+0x10/0x10
[   25.414069]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.414099]  ? calculate_sigpending+0x7b/0xa0
[   25.414132]  ? __pfx_kthread+0x10/0x10
[   25.414162]  ret_from_fork+0x41/0x80
[   25.414224]  ? __pfx_kthread+0x10/0x10
[   25.414259]  ret_from_fork_asm+0x1a/0x30
[   25.414302]  </TASK>
[   25.414317] 
[   25.433868] Allocated by task 267:
[   25.434359]  kasan_save_stack+0x3d/0x60
[   25.434980]  kasan_save_track+0x18/0x40
[   25.435417]  kasan_save_alloc_info+0x3b/0x50
[   25.436261]  __kasan_kmalloc+0xb7/0xc0
[   25.436808]  __kmalloc_cache_noprof+0x184/0x410
[   25.437431]  kasan_strings+0xb3/0xb60
[   25.437948]  kunit_try_run_case+0x1b3/0x490
[   25.438386]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.438978]  kthread+0x324/0x6d0
[   25.439338]  ret_from_fork+0x41/0x80
[   25.439760]  ret_from_fork_asm+0x1a/0x30
[   25.440162] 
[   25.440384] Freed by task 267:
[   25.441027]  kasan_save_stack+0x3d/0x60
[   25.441359]  kasan_save_track+0x18/0x40
[   25.442481]  kasan_save_free_info+0x3f/0x60
[   25.443096]  __kasan_slab_free+0x56/0x70
[   25.443713]  kfree+0x123/0x3f0
[   25.444093]  kasan_strings+0x13a/0xb60
[   25.444466]  kunit_try_run_case+0x1b3/0x490
[   25.445111]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.445798]  kthread+0x324/0x6d0
[   25.446183]  ret_from_fork+0x41/0x80
[   25.446917]  ret_from_fork_asm+0x1a/0x30
[   25.447736] 
[   25.448109] The buggy address belongs to the object at ffff888101af1300
[   25.448109]  which belongs to the cache kmalloc-32 of size 32
[   25.449495] The buggy address is located 16 bytes inside of
[   25.449495]  freed 32-byte region [ffff888101af1300, ffff888101af1320)
[   25.450437] 
[   25.451004] The buggy address belongs to the physical page:
[   25.451602] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af1
[   25.453132] flags: 0x200000000000000(node=0|zone=2)
[   25.453769] page_type: f5(slab)
[   25.454005] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.454819] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.455290] page dumped because: kasan: bad access detected
[   25.455677] 
[   25.455894] Memory state around the buggy address:
[   25.456359]  ffff888101af1200: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.457980]  ffff888101af1280: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.458892] >ffff888101af1300: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   25.459817]                          ^
[   25.460538]  ffff888101af1380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.461055]  ffff888101af1400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.461799] ==================================================================

test log snippet boot parser 138

[   25.515579] ==================================================================
[   25.517465] BUG: KASAN: slab-use-after-free in strlen+0x8f/0xb0
[   25.518108] Read of size 1 at addr ffff888101af1310 by task kunit_try_catch/267
[   25.518608] 
[   25.519643] CPU: 0 UID: 0 PID: 267 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.519710] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.519729] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.519759] Call Trace:
[   25.519778]  <TASK>
[   25.519796]  dump_stack_lvl+0x73/0xb0
[   25.519837]  print_report+0xd1/0x640
[   25.519870]  ? __virt_addr_valid+0x1db/0x2d0
[   25.519904]  ? kasan_complete_mode_report_info+0x64/0x200
[   25.519939]  kasan_report+0x102/0x140
[   25.519970]  ? strlen+0x8f/0xb0
[   25.519999]  ? strlen+0x8f/0xb0
[   25.520030]  __asan_report_load1_noabort+0x18/0x20
[   25.520061]  strlen+0x8f/0xb0
[   25.520087]  kasan_strings+0x432/0xb60
[   25.520118]  ? __pfx_kasan_strings+0x10/0x10
[   25.520149]  ? __schedule+0xc6f/0x27d0
[   25.520180]  ? __pfx_read_tsc+0x10/0x10
[   25.520250]  ? ktime_get_ts64+0x86/0x230
[   25.520334]  kunit_try_run_case+0x1b3/0x490
[   25.520375]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.520427]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.520459]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.520495]  ? __kthread_parkme+0x82/0x160
[   25.520522]  ? preempt_count_sub+0x50/0x80
[   25.520555]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.520627]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.520691]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.520749]  kthread+0x324/0x6d0
[   25.520800]  ? trace_preempt_on+0x20/0xc0
[   25.520852]  ? __pfx_kthread+0x10/0x10
[   25.520902]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.520942]  ? calculate_sigpending+0x7b/0xa0
[   25.520976]  ? __pfx_kthread+0x10/0x10
[   25.521005]  ret_from_fork+0x41/0x80
[   25.521036]  ? __pfx_kthread+0x10/0x10
[   25.521066]  ret_from_fork_asm+0x1a/0x30
[   25.521110]  </TASK>
[   25.521124] 
[   25.537192] Allocated by task 267:
[   25.537575]  kasan_save_stack+0x3d/0x60
[   25.537999]  kasan_save_track+0x18/0x40
[   25.538971]  kasan_save_alloc_info+0x3b/0x50
[   25.539520]  __kasan_kmalloc+0xb7/0xc0
[   25.540146]  __kmalloc_cache_noprof+0x184/0x410
[   25.540810]  kasan_strings+0xb3/0xb60
[   25.541314]  kunit_try_run_case+0x1b3/0x490
[   25.541921]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.542560]  kthread+0x324/0x6d0
[   25.543084]  ret_from_fork+0x41/0x80
[   25.543430]  ret_from_fork_asm+0x1a/0x30
[   25.543860] 
[   25.544082] Freed by task 267:
[   25.544437]  kasan_save_stack+0x3d/0x60
[   25.544852]  kasan_save_track+0x18/0x40
[   25.545226]  kasan_save_free_info+0x3f/0x60
[   25.545549]  __kasan_slab_free+0x56/0x70
[   25.545830]  kfree+0x123/0x3f0
[   25.546063]  kasan_strings+0x13a/0xb60
[   25.546421]  kunit_try_run_case+0x1b3/0x490
[   25.547679]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.548476]  kthread+0x324/0x6d0
[   25.549000]  ret_from_fork+0x41/0x80
[   25.549740]  ret_from_fork_asm+0x1a/0x30
[   25.550186] 
[   25.551103] The buggy address belongs to the object at ffff888101af1300
[   25.551103]  which belongs to the cache kmalloc-32 of size 32
[   25.552666] The buggy address is located 16 bytes inside of
[   25.552666]  freed 32-byte region [ffff888101af1300, ffff888101af1320)
[   25.553616] 
[   25.553840] The buggy address belongs to the physical page:
[   25.554381] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af1
[   25.555420] flags: 0x200000000000000(node=0|zone=2)
[   25.556166] page_type: f5(slab)
[   25.556926] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.557762] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.558566] page dumped because: kasan: bad access detected
[   25.559313] 
[   25.559603] Memory state around the buggy address:
[   25.560583]  ffff888101af1200: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.561509]  ffff888101af1280: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.562029] >ffff888101af1300: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   25.562751]                          ^
[   25.563369]  ffff888101af1380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.564204]  ffff888101af1400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.564843] ==================================================================

test log snippet boot parser 139

[   25.566057] ==================================================================
[   25.566699] BUG: KASAN: slab-use-after-free in strnlen+0x73/0x80
[   25.567127] Read of size 1 at addr ffff888101af1310 by task kunit_try_catch/267
[   25.568098] 
[   25.568979] CPU: 0 UID: 0 PID: 267 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.569097] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.569134] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.569188] Call Trace:
[   25.569217]  <TASK>
[   25.569251]  dump_stack_lvl+0x73/0xb0
[   25.569455]  print_report+0xd1/0x640
[   25.569539]  ? __virt_addr_valid+0x1db/0x2d0
[   25.569603]  ? kasan_complete_mode_report_info+0x64/0x200
[   25.569792]  kasan_report+0x102/0x140
[   25.569856]  ? strnlen+0x73/0x80
[   25.569909]  ? strnlen+0x73/0x80
[   25.569971]  __asan_report_load1_noabort+0x18/0x20
[   25.570027]  strnlen+0x73/0x80
[   25.570082]  kasan_strings+0x4cc/0xb60
[   25.570136]  ? __pfx_kasan_strings+0x10/0x10
[   25.570195]  ? __schedule+0xc6f/0x27d0
[   25.570251]  ? __pfx_read_tsc+0x10/0x10
[   25.570304]  ? ktime_get_ts64+0x86/0x230
[   25.570346]  kunit_try_run_case+0x1b3/0x490
[   25.570395]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.570448]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.570480]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.570513]  ? __kthread_parkme+0x82/0x160
[   25.570540]  ? preempt_count_sub+0x50/0x80
[   25.570573]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.570677]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.570740]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.570804]  kthread+0x324/0x6d0
[   25.570857]  ? trace_preempt_on+0x20/0xc0
[   25.570907]  ? __pfx_kthread+0x10/0x10
[   25.570943]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.570977]  ? calculate_sigpending+0x7b/0xa0
[   25.571010]  ? __pfx_kthread+0x10/0x10
[   25.571041]  ret_from_fork+0x41/0x80
[   25.571072]  ? __pfx_kthread+0x10/0x10
[   25.571100]  ret_from_fork_asm+0x1a/0x30
[   25.571144]  </TASK>
[   25.571160] 
[   25.588917] Allocated by task 267:
[   25.589887]  kasan_save_stack+0x3d/0x60
[   25.590578]  kasan_save_track+0x18/0x40
[   25.590922]  kasan_save_alloc_info+0x3b/0x50
[   25.591765]  __kasan_kmalloc+0xb7/0xc0
[   25.592253]  __kmalloc_cache_noprof+0x184/0x410
[   25.592654]  kasan_strings+0xb3/0xb60
[   25.594160]  kunit_try_run_case+0x1b3/0x490
[   25.594616]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.595002]  kthread+0x324/0x6d0
[   25.595320]  ret_from_fork+0x41/0x80
[   25.595710]  ret_from_fork_asm+0x1a/0x30
[   25.596038] 
[   25.596257] Freed by task 267:
[   25.597284]  kasan_save_stack+0x3d/0x60
[   25.597813]  kasan_save_track+0x18/0x40
[   25.598706]  kasan_save_free_info+0x3f/0x60
[   25.599484]  __kasan_slab_free+0x56/0x70
[   25.600015]  kfree+0x123/0x3f0
[   25.600487]  kasan_strings+0x13a/0xb60
[   25.601339]  kunit_try_run_case+0x1b3/0x490
[   25.601642]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.602581]  kthread+0x324/0x6d0
[   25.602967]  ret_from_fork+0x41/0x80
[   25.603777]  ret_from_fork_asm+0x1a/0x30
[   25.604528] 
[   25.604704] The buggy address belongs to the object at ffff888101af1300
[   25.604704]  which belongs to the cache kmalloc-32 of size 32
[   25.606716] The buggy address is located 16 bytes inside of
[   25.606716]  freed 32-byte region [ffff888101af1300, ffff888101af1320)
[   25.607421] 
[   25.607872] The buggy address belongs to the physical page:
[   25.608590] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af1
[   25.609747] flags: 0x200000000000000(node=0|zone=2)
[   25.610324] page_type: f5(slab)
[   25.610963] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.611987] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.612807] page dumped because: kasan: bad access detected
[   25.613229] 
[   25.613478] Memory state around the buggy address:
[   25.614098]  ffff888101af1200: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.615001]  ffff888101af1280: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.615294] >ffff888101af1300: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   25.615623]                          ^
[   25.615980]  ffff888101af1380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.617448]  ffff888101af1400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.618367] ==================================================================

test log snippet boot parser 140

[   25.361207] ==================================================================
[   25.361865] BUG: KASAN: slab-use-after-free in strrchr+0x64/0x70
[   25.362718] Read of size 1 at addr ffff888101af1310 by task kunit_try_catch/267
[   25.363162] 
[   25.363451] CPU: 0 UID: 0 PID: 267 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.363555] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.363720] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.363787] Call Trace:
[   25.363822]  <TASK>
[   25.363854]  dump_stack_lvl+0x73/0xb0
[   25.363926]  print_report+0xd1/0x640
[   25.363994]  ? __virt_addr_valid+0x1db/0x2d0
[   25.364053]  ? kasan_complete_mode_report_info+0x64/0x200
[   25.364118]  kasan_report+0x102/0x140
[   25.364171]  ? strrchr+0x64/0x70
[   25.364276]  ? strrchr+0x64/0x70
[   25.364340]  __asan_report_load1_noabort+0x18/0x20
[   25.364420]  strrchr+0x64/0x70
[   25.364477]  kasan_strings+0x24c/0xb60
[   25.364538]  ? __pfx_kasan_strings+0x10/0x10
[   25.364745]  ? __schedule+0xc6f/0x27d0
[   25.364814]  ? __pfx_read_tsc+0x10/0x10
[   25.364846]  ? ktime_get_ts64+0x86/0x230
[   25.364883]  kunit_try_run_case+0x1b3/0x490
[   25.364918]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.364951]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.364984]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.365017]  ? __kthread_parkme+0x82/0x160
[   25.365045]  ? preempt_count_sub+0x50/0x80
[   25.365078]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.365112]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.365145]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.365178]  kthread+0x324/0x6d0
[   25.365243]  ? trace_preempt_on+0x20/0xc0
[   25.365275]  ? __pfx_kthread+0x10/0x10
[   25.365305]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.365336]  ? calculate_sigpending+0x7b/0xa0
[   25.365369]  ? __pfx_kthread+0x10/0x10
[   25.365416]  ret_from_fork+0x41/0x80
[   25.365448]  ? __pfx_kthread+0x10/0x10
[   25.365477]  ret_from_fork_asm+0x1a/0x30
[   25.365521]  </TASK>
[   25.365537] 
[   25.382892] Allocated by task 267:
[   25.383443]  kasan_save_stack+0x3d/0x60
[   25.383747]  kasan_save_track+0x18/0x40
[   25.384278]  kasan_save_alloc_info+0x3b/0x50
[   25.384628]  __kasan_kmalloc+0xb7/0xc0
[   25.385026]  __kmalloc_cache_noprof+0x184/0x410
[   25.385438]  kasan_strings+0xb3/0xb60
[   25.385722]  kunit_try_run_case+0x1b3/0x490
[   25.386177]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.387282]  kthread+0x324/0x6d0
[   25.387784]  ret_from_fork+0x41/0x80
[   25.388126]  ret_from_fork_asm+0x1a/0x30
[   25.388467] 
[   25.388806] Freed by task 267:
[   25.389136]  kasan_save_stack+0x3d/0x60
[   25.389578]  kasan_save_track+0x18/0x40
[   25.389918]  kasan_save_free_info+0x3f/0x60
[   25.390396]  __kasan_slab_free+0x56/0x70
[   25.390909]  kfree+0x123/0x3f0
[   25.391601]  kasan_strings+0x13a/0xb60
[   25.392072]  kunit_try_run_case+0x1b3/0x490
[   25.392549]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.392894]  kthread+0x324/0x6d0
[   25.393153]  ret_from_fork+0x41/0x80
[   25.393439]  ret_from_fork_asm+0x1a/0x30
[   25.394011] 
[   25.394251] The buggy address belongs to the object at ffff888101af1300
[   25.394251]  which belongs to the cache kmalloc-32 of size 32
[   25.395646] The buggy address is located 16 bytes inside of
[   25.395646]  freed 32-byte region [ffff888101af1300, ffff888101af1320)
[   25.396906] 
[   25.397125] The buggy address belongs to the physical page:
[   25.397741] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101af1
[   25.398301] flags: 0x200000000000000(node=0|zone=2)
[   25.398646] page_type: f5(slab)
[   25.399036] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   25.399836] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   25.400652] page dumped because: kasan: bad access detected
[   25.401086] 
[   25.401243] Memory state around the buggy address:
[   25.401633]  ffff888101af1200: fa fb fb fb fc fc fc fc 00 00 00 fc fc fc fc fc
[   25.402940]  ffff888101af1280: 00 00 00 fc fc fc fc fc 00 00 07 fc fc fc fc fc
[   25.403593] >ffff888101af1300: fa fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
[   25.403978]                          ^
[   25.404234]  ffff888101af1380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.404822]  ffff888101af1400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   25.405564] ==================================================================

test log snippet boot parser 141

[   23.462151] ==================================================================
[   23.462926] BUG: KASAN: slab-use-after-free in workqueue_uaf+0x4d8/0x560
[   23.463460] Read of size 8 at addr ffff888101adce40 by task kunit_try_catch/208
[   23.464916] 
[   23.465163] CPU: 0 UID: 0 PID: 208 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   23.465326] Tainted: [B]=BAD_PAGE, [N]=TEST
[   23.465417] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   23.465519] Call Trace:
[   23.465606]  <TASK>
[   23.465646]  dump_stack_lvl+0x73/0xb0
[   23.465724]  print_report+0xd1/0x640
[   23.465785]  ? __virt_addr_valid+0x1db/0x2d0
[   23.465848]  ? kasan_complete_mode_report_info+0x64/0x200
[   23.465906]  kasan_report+0x102/0x140
[   23.465959]  ? workqueue_uaf+0x4d8/0x560
[   23.465996]  ? workqueue_uaf+0x4d8/0x560
[   23.466034]  __asan_report_load8_noabort+0x18/0x20
[   23.466064]  workqueue_uaf+0x4d8/0x560
[   23.466095]  ? __pfx_workqueue_uaf+0x10/0x10
[   23.466127]  ? __schedule+0xc6f/0x27d0
[   23.466157]  ? __pfx_read_tsc+0x10/0x10
[   23.466212]  ? ktime_get_ts64+0x86/0x230
[   23.466247]  kunit_try_run_case+0x1b3/0x490
[   23.466280]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.466310]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   23.466341]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   23.466379]  ? __kthread_parkme+0x82/0x160
[   23.466424]  ? preempt_count_sub+0x50/0x80
[   23.466457]  ? __pfx_kunit_try_run_case+0x10/0x10
[   23.466489]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.466521]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   23.466552]  kthread+0x324/0x6d0
[   23.466609]  ? trace_preempt_on+0x20/0xc0
[   23.466662]  ? __pfx_kthread+0x10/0x10
[   23.466712]  ? _raw_spin_unlock_irq+0x47/0x80
[   23.466746]  ? calculate_sigpending+0x7b/0xa0
[   23.466780]  ? __pfx_kthread+0x10/0x10
[   23.466810]  ret_from_fork+0x41/0x80
[   23.466839]  ? __pfx_kthread+0x10/0x10
[   23.466867]  ret_from_fork_asm+0x1a/0x30
[   23.466909]  </TASK>
[   23.466924] 
[   23.481303] Allocated by task 208:
[   23.481680]  kasan_save_stack+0x3d/0x60
[   23.482090]  kasan_save_track+0x18/0x40
[   23.482740]  kasan_save_alloc_info+0x3b/0x50
[   23.483236]  __kasan_kmalloc+0xb7/0xc0
[   23.483719]  __kmalloc_cache_noprof+0x184/0x410
[   23.484092]  workqueue_uaf+0x153/0x560
[   23.484436]  kunit_try_run_case+0x1b3/0x490
[   23.485012]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.485566]  kthread+0x324/0x6d0
[   23.486101]  ret_from_fork+0x41/0x80
[   23.486552]  ret_from_fork_asm+0x1a/0x30
[   23.487175] 
[   23.487428] Freed by task 8:
[   23.487904]  kasan_save_stack+0x3d/0x60
[   23.488359]  kasan_save_track+0x18/0x40
[   23.488825]  kasan_save_free_info+0x3f/0x60
[   23.489266]  __kasan_slab_free+0x56/0x70
[   23.489897]  kfree+0x123/0x3f0
[   23.490258]  workqueue_uaf_work+0x12/0x20
[   23.490944]  process_one_work+0x5ee/0xf60
[   23.491426]  worker_thread+0x720/0x1300
[   23.491886]  kthread+0x324/0x6d0
[   23.492146]  ret_from_fork+0x41/0x80
[   23.492444]  ret_from_fork_asm+0x1a/0x30
[   23.492815] 
[   23.493215] Last potentially related work creation:
[   23.493814]  kasan_save_stack+0x3d/0x60
[   23.494266]  __kasan_record_aux_stack+0xae/0xc0
[   23.494942]  kasan_record_aux_stack_noalloc+0xf/0x20
[   23.495491]  __queue_work+0x626/0xe60
[   23.496041]  queue_work_on+0x74/0xa0
[   23.496459]  workqueue_uaf+0x26e/0x560
[   23.496874]  kunit_try_run_case+0x1b3/0x490
[   23.497310]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   23.497802]  kthread+0x324/0x6d0
[   23.498191]  ret_from_fork+0x41/0x80
[   23.498905]  ret_from_fork_asm+0x1a/0x30
[   23.499375] 
[   23.499753] The buggy address belongs to the object at ffff888101adce40
[   23.499753]  which belongs to the cache kmalloc-32 of size 32
[   23.500729] The buggy address is located 0 bytes inside of
[   23.500729]  freed 32-byte region [ffff888101adce40, ffff888101adce60)
[   23.501379] 
[   23.501755] The buggy address belongs to the physical page:
[   23.502272] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x101adc
[   23.503224] flags: 0x200000000000000(node=0|zone=2)
[   23.503888] page_type: f5(slab)
[   23.504187] raw: 0200000000000000 ffff888100041780 dead000000000122 0000000000000000
[   23.504972] raw: 0000000000000000 0000000080400040 00000001f5000000 0000000000000000
[   23.505437] page dumped because: kasan: bad access detected
[   23.505961] 
[   23.506179] Memory state around the buggy address:
[   23.506887]  ffff888101adcd00: 00 00 05 fc fc fc fc fc fa fb fb fb fc fc fc fc
[   23.507552]  ffff888101adcd80: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[   23.508422] >ffff888101adce00: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
[   23.509203]                                            ^
[   23.509748]  ffff888101adce80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.510153]  ffff888101adcf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   23.511038] ==================================================================

test log snippet boot parser 142

[   25.094225] ==================================================================
[   25.095840] BUG: KASAN: stack-out-of-bounds in kasan_stack_oob+0x2ae/0x300
[   25.096299] Read of size 1 at addr ffff888102d47d02 by task kunit_try_catch/257
[   25.096962] 
[   25.097139] CPU: 0 UID: 0 PID: 257 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   25.097240] Tainted: [B]=BAD_PAGE, [N]=TEST
[   25.097275] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   25.097326] Call Trace:
[   25.097357]  <TASK>
[   25.097435]  dump_stack_lvl+0x73/0xb0
[   25.097520]  print_report+0xd1/0x640
[   25.097588]  ? __virt_addr_valid+0x1db/0x2d0
[   25.097656]  ? kasan_addr_to_slab+0x11/0xa0
[   25.097720]  kasan_report+0x102/0x140
[   25.097776]  ? kasan_stack_oob+0x2ae/0x300
[   25.097833]  ? kasan_stack_oob+0x2ae/0x300
[   25.097901]  __asan_report_load1_noabort+0x18/0x20
[   25.098003]  kasan_stack_oob+0x2ae/0x300
[   25.098067]  ? __pfx_kasan_stack_oob+0x10/0x10
[   25.098152]  ? finish_task_switch.isra.0+0x153/0x700
[   25.098215]  ? __switch_to+0x5d9/0xf60
[   25.098280]  ? __schedule+0xc6f/0x27d0
[   25.098339]  ? __pfx_read_tsc+0x10/0x10
[   25.098413]  ? ktime_get_ts64+0x86/0x230
[   25.098483]  kunit_try_run_case+0x1b3/0x490
[   25.098549]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.098607]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   25.098669]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   25.098732]  ? __kthread_parkme+0x82/0x160
[   25.098769]  ? preempt_count_sub+0x50/0x80
[   25.098802]  ? __pfx_kunit_try_run_case+0x10/0x10
[   25.098838]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   25.098872]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   25.098906]  kthread+0x324/0x6d0
[   25.098933]  ? trace_preempt_on+0x20/0xc0
[   25.098965]  ? __pfx_kthread+0x10/0x10
[   25.098996]  ? _raw_spin_unlock_irq+0x47/0x80
[   25.099026]  ? calculate_sigpending+0x7b/0xa0
[   25.099060]  ? __pfx_kthread+0x10/0x10
[   25.099090]  ret_from_fork+0x41/0x80
[   25.099119]  ? __pfx_kthread+0x10/0x10
[   25.099148]  ret_from_fork_asm+0x1a/0x30
[   25.099196]  </TASK>
[   25.099242] 
[   25.113454] The buggy address belongs to stack of task kunit_try_catch/257
[   25.114092]  and is located at offset 138 in frame:
[   25.114721]  kasan_stack_oob+0x0/0x300
[   25.115550] 
[   25.115839] This frame has 4 objects:
[   25.116628]  [48, 49) '__assertion'
[   25.116733]  [64, 72) 'array'
[   25.117226]  [96, 112) '__assertion'
[   25.117691]  [128, 138) 'stack_array'
[   25.118115] 
[   25.118830] The buggy address belongs to the physical page:
[   25.119351] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102d47
[   25.119942] flags: 0x200000000000000(node=0|zone=2)
[   25.120435] raw: 0200000000000000 ffffea00040b51c8 ffffea00040b51c8 0000000000000000
[   25.121225] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[   25.121983] page dumped because: kasan: bad access detected
[   25.122524] 
[   25.122684] Memory state around the buggy address:
[   25.123015]  ffff888102d47c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 f1
[   25.123831]  ffff888102d47c80: f1 f1 f1 f1 f1 01 f2 00 f2 f2 f2 00 00 f2 f2 00
[   25.124615] >ffff888102d47d00: 02 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 f1
[   25.125259]                    ^
[   25.125638]  ffff888102d47d80: f1 f1 f1 00 00 f2 f2 00 00 f2 f2 00 00 f3 f3 00
[   25.126293]  ffff888102d47e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[   25.126936] ==================================================================

test log snippet boot parser 143

[   21.442370] ==================================================================
[   21.443938] BUG: KASAN: use-after-free in kmalloc_large_uaf+0x2f3/0x340
[   21.445022] Read of size 1 at addr ffff888102324000 by task kunit_try_catch/156
[   21.445577] 
[   21.446441] CPU: 0 UID: 0 PID: 156 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.446550] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.446584] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.446673] Call Trace:
[   21.446708]  <TASK>
[   21.446743]  dump_stack_lvl+0x73/0xb0
[   21.446819]  print_report+0xd1/0x640
[   21.446879]  ? __virt_addr_valid+0x1db/0x2d0
[   21.446946]  ? kasan_addr_to_slab+0x11/0xa0
[   21.447007]  kasan_report+0x102/0x140
[   21.447061]  ? kmalloc_large_uaf+0x2f3/0x340
[   21.447117]  ? kmalloc_large_uaf+0x2f3/0x340
[   21.447156]  __asan_report_load1_noabort+0x18/0x20
[   21.447189]  kmalloc_large_uaf+0x2f3/0x340
[   21.447247]  ? __pfx_kmalloc_large_uaf+0x10/0x10
[   21.447279]  ? __schedule+0xc6f/0x27d0
[   21.447312]  ? __pfx_read_tsc+0x10/0x10
[   21.447340]  ? ktime_get_ts64+0x86/0x230
[   21.447375]  kunit_try_run_case+0x1b3/0x490
[   21.447426]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.447457]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.447489]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.447520]  ? __kthread_parkme+0x82/0x160
[   21.447549]  ? preempt_count_sub+0x50/0x80
[   21.447583]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.447656]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.447709]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.447760]  kthread+0x324/0x6d0
[   21.447806]  ? trace_preempt_on+0x20/0xc0
[   21.447854]  ? __pfx_kthread+0x10/0x10
[   21.447900]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.447948]  ? calculate_sigpending+0x7b/0xa0
[   21.447992]  ? __pfx_kthread+0x10/0x10
[   21.448022]  ret_from_fork+0x41/0x80
[   21.448052]  ? __pfx_kthread+0x10/0x10
[   21.448110]  ret_from_fork_asm+0x1a/0x30
[   21.448155]  </TASK>
[   21.448171] 
[   21.464421] The buggy address belongs to the physical page:
[   21.465074] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102324
[   21.466112] flags: 0x200000000000000(node=0|zone=2)
[   21.466732] raw: 0200000000000000 ffffea000408ca08 ffff88815b03f000 0000000000000000
[   21.467299] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[   21.467998] page dumped because: kasan: bad access detected
[   21.468506] 
[   21.468736] Memory state around the buggy address:
[   21.469197]  ffff888102323f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.470093]  ffff888102323f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   21.470990] >ffff888102324000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.471452]                    ^
[   21.471851]  ffff888102324080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.472203]  ffff888102324100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.473071] ==================================================================

test log snippet boot parser 144

[   24.770538] ==================================================================
[   24.771791] BUG: KASAN: use-after-free in mempool_uaf_helper+0x394/0x400
[   24.772281] Read of size 1 at addr ffff888102d6c000 by task kunit_try_catch/241
[   24.773040] 
[   24.773654] CPU: 0 UID: 0 PID: 241 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.773724] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.773751] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.773802] Call Trace:
[   24.773837]  <TASK>
[   24.773875]  dump_stack_lvl+0x73/0xb0
[   24.774001]  print_report+0xd1/0x640
[   24.774072]  ? __virt_addr_valid+0x1db/0x2d0
[   24.774141]  ? kasan_addr_to_slab+0x11/0xa0
[   24.774204]  kasan_report+0x102/0x140
[   24.774272]  ? mempool_uaf_helper+0x394/0x400
[   24.774310]  ? mempool_uaf_helper+0x394/0x400
[   24.774361]  __asan_report_load1_noabort+0x18/0x20
[   24.774392]  mempool_uaf_helper+0x394/0x400
[   24.774444]  ? __pfx_mempool_uaf_helper+0x10/0x10
[   24.774485]  ? finish_task_switch.isra.0+0x153/0x700
[   24.774524]  mempool_page_alloc_uaf+0xb1/0x100
[   24.774554]  ? __pfx_mempool_page_alloc_uaf+0x10/0x10
[   24.774583]  ? __switch_to+0x5d9/0xf60
[   24.774617]  ? __pfx_mempool_alloc_pages+0x10/0x10
[   24.774650]  ? __pfx_mempool_free_pages+0x10/0x10
[   24.774684]  ? __pfx_read_tsc+0x10/0x10
[   24.774713]  ? ktime_get_ts64+0x86/0x230
[   24.774749]  kunit_try_run_case+0x1b3/0x490
[   24.774784]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.774815]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.774849]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.774881]  ? __kthread_parkme+0x82/0x160
[   24.774911]  ? preempt_count_sub+0x50/0x80
[   24.774943]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.774977]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.775011]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.775043]  kthread+0x324/0x6d0
[   24.775072]  ? trace_preempt_on+0x20/0xc0
[   24.775103]  ? __pfx_kthread+0x10/0x10
[   24.775133]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.775163]  ? calculate_sigpending+0x7b/0xa0
[   24.775220]  ? __pfx_kthread+0x10/0x10
[   24.775255]  ret_from_fork+0x41/0x80
[   24.775285]  ? __pfx_kthread+0x10/0x10
[   24.775315]  ret_from_fork_asm+0x1a/0x30
[   24.775358]  </TASK>
[   24.775375] 
[   24.791110] The buggy address belongs to the physical page:
[   24.791696] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102d6c
[   24.792441] flags: 0x200000000000000(node=0|zone=2)
[   24.792854] raw: 0200000000000000 0000000000000000 dead000000000122 0000000000000000
[   24.793557] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.794012] page dumped because: kasan: bad access detected
[   24.794611] 
[   24.794839] Memory state around the buggy address:
[   24.795373]  ffff888102d6bf00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.795863]  ffff888102d6bf80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.796567] >ffff888102d6c000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.797049]                    ^
[   24.797343]  ffff888102d6c080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.798021]  ffff888102d6c100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.798712] ==================================================================

test log snippet boot parser 145

[   24.657562] ==================================================================
[   24.658489] BUG: KASAN: use-after-free in mempool_uaf_helper+0x394/0x400
[   24.659186] Read of size 1 at addr ffff888102bd4000 by task kunit_try_catch/237
[   24.659834] 
[   24.660090] CPU: 1 UID: 0 PID: 237 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   24.660203] Tainted: [B]=BAD_PAGE, [N]=TEST
[   24.660292] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   24.660360] Call Trace:
[   24.660394]  <TASK>
[   24.660446]  dump_stack_lvl+0x73/0xb0
[   24.660569]  print_report+0xd1/0x640
[   24.660637]  ? __virt_addr_valid+0x1db/0x2d0
[   24.660711]  ? kasan_addr_to_slab+0x11/0xa0
[   24.660873]  kasan_report+0x102/0x140
[   24.660962]  ? mempool_uaf_helper+0x394/0x400
[   24.661079]  ? mempool_uaf_helper+0x394/0x400
[   24.661159]  __asan_report_load1_noabort+0x18/0x20
[   24.661233]  mempool_uaf_helper+0x394/0x400
[   24.661306]  ? __pfx_mempool_uaf_helper+0x10/0x10
[   24.661367]  ? update_load_avg+0x1c3/0x2110
[   24.661452]  ? update_curr+0x80/0x810
[   24.661517]  ? finish_task_switch.isra.0+0x153/0x700
[   24.661729]  mempool_kmalloc_large_uaf+0xb3/0x100
[   24.661822]  ? __pfx_mempool_kmalloc_large_uaf+0x10/0x10
[   24.661890]  ? __switch_to+0x5d9/0xf60
[   24.661933]  ? __pfx_mempool_kmalloc+0x10/0x10
[   24.661966]  ? __pfx_mempool_kfree+0x10/0x10
[   24.662001]  ? __pfx_read_tsc+0x10/0x10
[   24.662029]  ? ktime_get_ts64+0x86/0x230
[   24.662066]  kunit_try_run_case+0x1b3/0x490
[   24.662103]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.662134]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   24.662168]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   24.662225]  ? __kthread_parkme+0x82/0x160
[   24.662260]  ? preempt_count_sub+0x50/0x80
[   24.662294]  ? __pfx_kunit_try_run_case+0x10/0x10
[   24.662327]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   24.662369]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   24.662418]  kthread+0x324/0x6d0
[   24.662448]  ? trace_preempt_on+0x20/0xc0
[   24.662481]  ? __pfx_kthread+0x10/0x10
[   24.662512]  ? _raw_spin_unlock_irq+0x47/0x80
[   24.662541]  ? calculate_sigpending+0x7b/0xa0
[   24.662575]  ? __pfx_kthread+0x10/0x10
[   24.662666]  ret_from_fork+0x41/0x80
[   24.662727]  ? __pfx_kthread+0x10/0x10
[   24.662779]  ret_from_fork_asm+0x1a/0x30
[   24.662857]  </TASK>
[   24.662883] 
[   24.682263] The buggy address belongs to the physical page:
[   24.683029] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102bd4
[   24.683541] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[   24.684389] flags: 0x200000000000040(head|node=0|zone=2)
[   24.685098] raw: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   24.686268] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.687156] head: 0200000000000040 0000000000000000 dead000000000122 0000000000000000
[   24.687961] head: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   24.688850] head: 0200000000000002 ffffea00040af501 ffffffffffffffff 0000000000000000
[   24.689429] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[   24.690293] page dumped because: kasan: bad access detected
[   24.691159] 
[   24.691437] Memory state around the buggy address:
[   24.691928]  ffff888102bd3f00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.692859]  ffff888102bd3f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.693571] >ffff888102bd4000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.694557]                    ^
[   24.694962]  ffff888102bd4080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.695818]  ffff888102bd4100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   24.696590] ==================================================================

test log snippet boot parser 146

[   21.530282] ==================================================================
[   21.531936] BUG: KASAN: use-after-free in page_alloc_uaf+0x358/0x3d0
[   21.532349] Read of size 1 at addr ffff888102cf0000 by task kunit_try_catch/162
[   21.532975] 
[   21.533209] CPU: 0 UID: 0 PID: 162 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   21.533317] Tainted: [B]=BAD_PAGE, [N]=TEST
[   21.533350] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   21.533416] Call Trace:
[   21.533452]  <TASK>
[   21.533485]  dump_stack_lvl+0x73/0xb0
[   21.533559]  print_report+0xd1/0x640
[   21.533617]  ? __virt_addr_valid+0x1db/0x2d0
[   21.533679]  ? kasan_addr_to_slab+0x11/0xa0
[   21.533744]  kasan_report+0x102/0x140
[   21.533794]  ? page_alloc_uaf+0x358/0x3d0
[   21.533845]  ? page_alloc_uaf+0x358/0x3d0
[   21.533906]  __asan_report_load1_noabort+0x18/0x20
[   21.533960]  page_alloc_uaf+0x358/0x3d0
[   21.534008]  ? __pfx_page_alloc_uaf+0x10/0x10
[   21.534062]  ? __pfx_page_alloc_uaf+0x10/0x10
[   21.534124]  kunit_try_run_case+0x1b3/0x490
[   21.534212]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.534283]  ? _raw_spin_lock_irqsave+0xa2/0x110
[   21.534462]  ? _raw_spin_unlock_irqrestore+0x5f/0x90
[   21.534883]  ? __kthread_parkme+0x82/0x160
[   21.534965]  ? preempt_count_sub+0x50/0x80
[   21.535004]  ? __pfx_kunit_try_run_case+0x10/0x10
[   21.535038]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   21.535074]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
[   21.535105]  kthread+0x324/0x6d0
[   21.535133]  ? trace_preempt_on+0x20/0xc0
[   21.535164]  ? __pfx_kthread+0x10/0x10
[   21.535199]  ? _raw_spin_unlock_irq+0x47/0x80
[   21.535248]  ? calculate_sigpending+0x7b/0xa0
[   21.535283]  ? __pfx_kthread+0x10/0x10
[   21.535312]  ret_from_fork+0x41/0x80
[   21.535341]  ? __pfx_kthread+0x10/0x10
[   21.535370]  ret_from_fork_asm+0x1a/0x30
[   21.535428]  </TASK>
[   21.535445] 
[   21.551987] The buggy address belongs to the physical page:
[   21.552766] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x102cf0
[   21.553837] flags: 0x200000000000000(node=0|zone=2)
[   21.554305] page_type: f0(buddy)
[   21.554682] raw: 0200000000000000 ffff88817fffb4a0 ffff88817fffb4a0 0000000000000000
[   21.555516] raw: 0000000000000000 0000000000000004 00000000f0000000 0000000000000000
[   21.555994] page dumped because: kasan: bad access detected
[   21.556548] 
[   21.556991] Memory state around the buggy address:
[   21.557609]  ffff888102ceff00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.558055]  ffff888102ceff80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.558938] >ffff888102cf0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.559647]                    ^
[   21.560176]  ffff888102cf0080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.561095]  ffff888102cf0100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[   21.561908] ==================================================================

test log snippet boot parser 147

[   31.378372] ==================================================================
[   31.379023] BUG: KFENCE: invalid free in test_double_free+0x1d5/0x260
[   31.379023] 
[   31.379968] Invalid free of 0x(____ptrval____) (in kfence-#113):
[   31.381213]  test_double_free+0x1d5/0x260
[   31.381819]  kunit_try_run_case+0x1b3/0x490
[   31.382377]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.382980]  kthread+0x324/0x6d0
[   31.383295]  ret_from_fork+0x41/0x80
[   31.383911]  ret_from_fork_asm+0x1a/0x30
[   31.384569] 
[   31.384843] kfence-#113: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   31.384843] 
[   31.386085] allocated by task 311 on cpu 0 at 31.377991s (0.008088s ago):
[   31.386667]  test_alloc+0x35f/0x10d0
[   31.386947]  test_double_free+0xdc/0x260
[   31.387675]  kunit_try_run_case+0x1b3/0x490
[   31.388230]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.389093]  kthread+0x324/0x6d0
[   31.389722]  ret_from_fork+0x41/0x80
[   31.390144]  ret_from_fork_asm+0x1a/0x30
[   31.390731] 
[   31.390949] freed by task 311 on cpu 0 at 31.378086s (0.012858s ago):
[   31.391499]  test_double_free+0x1e2/0x260
[   31.392109]  kunit_try_run_case+0x1b3/0x490
[   31.392565]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.393111]  kthread+0x324/0x6d0
[   31.393481]  ret_from_fork+0x41/0x80
[   31.394155]  ret_from_fork_asm+0x1a/0x30
[   31.395044] 
[   31.395293] CPU: 0 UID: 0 PID: 311 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   31.396106] Tainted: [B]=BAD_PAGE, [N]=TEST
[   31.396684] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   31.397309] ==================================================================

test log snippet boot parser 148

[   31.482443] ==================================================================
[   31.482985] BUG: KFENCE: invalid free in test_double_free+0x113/0x260
[   31.482985] 
[   31.483588] Invalid free of 0x(____ptrval____) (in kfence-#114):
[   31.484151]  test_double_free+0x113/0x260
[   31.485422]  kunit_try_run_case+0x1b3/0x490
[   31.485964]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.486673]  kthread+0x324/0x6d0
[   31.487129]  ret_from_fork+0x41/0x80
[   31.487670]  ret_from_fork_asm+0x1a/0x30
[   31.488146] 
[   31.488336] kfence-#114: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   31.488336] 
[   31.489119] allocated by task 313 on cpu 1 at 31.482074s (0.007038s ago):
[   31.489792]  test_alloc+0x2a7/0x10d0
[   31.490262]  test_double_free+0xdc/0x260
[   31.490622]  kunit_try_run_case+0x1b3/0x490
[   31.491084]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.491495]  kthread+0x324/0x6d0
[   31.491886]  ret_from_fork+0x41/0x80
[   31.492288]  ret_from_fork_asm+0x1a/0x30
[   31.492738] 
[   31.492939] freed by task 313 on cpu 1 at 31.482152s (0.010781s ago):
[   31.493346]  test_double_free+0xfb/0x260
[   31.493695]  kunit_try_run_case+0x1b3/0x490
[   31.494119]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.494705]  kthread+0x324/0x6d0
[   31.495073]  ret_from_fork+0x41/0x80
[   31.495549]  ret_from_fork_asm+0x1a/0x30
[   31.495977] 
[   31.496277] CPU: 1 UID: 0 PID: 313 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   31.496859] Tainted: [B]=BAD_PAGE, [N]=TEST
[   31.497314] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   31.497956] ==================================================================

test log snippet boot parser 149

[   31.586126] ==================================================================
[   31.586681] BUG: KFENCE: invalid free in test_invalid_addr_free+0x1e3/0x260
[   31.586681] 
[   31.587300] Invalid free of 0x(____ptrval____) (in kfence-#115):
[   31.588646]  test_invalid_addr_free+0x1e3/0x260
[   31.589374]  kunit_try_run_case+0x1b3/0x490
[   31.589945]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.590592]  kthread+0x324/0x6d0
[   31.590917]  ret_from_fork+0x41/0x80
[   31.591550]  ret_from_fork_asm+0x1a/0x30
[   31.592001] 
[   31.592117] kfence-#115: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   31.592117] 
[   31.592608] allocated by task 315 on cpu 0 at 31.585976s (0.006626s ago):
[   31.593271]  test_alloc+0x35f/0x10d0
[   31.593642]  test_invalid_addr_free+0xdc/0x260
[   31.594930]  kunit_try_run_case+0x1b3/0x490
[   31.596160]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.596859]  kthread+0x324/0x6d0
[   31.597687]  ret_from_fork+0x41/0x80
[   31.598047]  ret_from_fork_asm+0x1a/0x30
[   31.598821] 
[   31.599181] CPU: 0 UID: 0 PID: 315 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   31.599851] Tainted: [B]=BAD_PAGE, [N]=TEST
[   31.600140] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   31.601224] ==================================================================

test log snippet boot parser 150

[   31.690060] ==================================================================
[   31.690644] BUG: KFENCE: invalid free in test_invalid_addr_free+0xfc/0x260
[   31.690644] 
[   31.691177] Invalid free of 0x(____ptrval____) (in kfence-#116):
[   31.691958]  test_invalid_addr_free+0xfc/0x260
[   31.692291]  kunit_try_run_case+0x1b3/0x490
[   31.692822]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.693259]  kthread+0x324/0x6d0
[   31.693708]  ret_from_fork+0x41/0x80
[   31.694042]  ret_from_fork_asm+0x1a/0x30
[   31.694622] 
[   31.694861] kfence-#116: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   31.694861] 
[   31.695427] allocated by task 317 on cpu 1 at 31.689950s (0.005471s ago):
[   31.696119]  test_alloc+0x2a7/0x10d0
[   31.696615]  test_invalid_addr_free+0xdc/0x260
[   31.696973]  kunit_try_run_case+0x1b3/0x490
[   31.697603]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.698003]  kthread+0x324/0x6d0
[   31.698420]  ret_from_fork+0x41/0x80
[   31.698833]  ret_from_fork_asm+0x1a/0x30
[   31.699210] 
[   31.699508] CPU: 1 UID: 0 PID: 317 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   31.700284] Tainted: [B]=BAD_PAGE, [N]=TEST
[   31.700649] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   31.701598] ==================================================================

test log snippet boot parser 151

[   32.210245] ==================================================================
[   32.210840] BUG: KFENCE: memory corruption in test_corruption+0x2e1/0x3e0
[   32.210840] 
[   32.211554] Corrupted memory at 0x(____ptrval____) [ ! ] (in kfence-#121):
[   32.212450]  test_corruption+0x2e1/0x3e0
[   32.212899]  kunit_try_run_case+0x1b3/0x490
[   32.213494]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.214115]  kthread+0x324/0x6d0
[   32.214615]  ret_from_fork+0x41/0x80
[   32.215062]  ret_from_fork_asm+0x1a/0x30
[   32.215632] 
[   32.215879] kfence-#121: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   32.215879] 
[   32.216654] allocated by task 319 on cpu 1 at 32.209946s (0.006703s ago):
[   32.217380]  test_alloc+0x35f/0x10d0
[   32.217834]  test_corruption+0x1cc/0x3e0
[   32.218436]  kunit_try_run_case+0x1b3/0x490
[   32.218864]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.219540]  kthread+0x324/0x6d0
[   32.219898]  ret_from_fork+0x41/0x80
[   32.220270]  ret_from_fork_asm+0x1a/0x30
[   32.220752] 
[   32.221060] freed by task 319 on cpu 1 at 32.210068s (0.010986s ago):
[   32.221807]  test_corruption+0x2e1/0x3e0
[   32.222355]  kunit_try_run_case+0x1b3/0x490
[   32.222903]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.223432]  kthread+0x324/0x6d0
[   32.223812]  ret_from_fork+0x41/0x80
[   32.224387]  ret_from_fork_asm+0x1a/0x30
[   32.224919] 
[   32.225178] CPU: 1 UID: 0 PID: 319 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   32.226075] Tainted: [B]=BAD_PAGE, [N]=TEST
[   32.226430] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   32.227315] ==================================================================

test log snippet boot parser 152

[   32.626036] ==================================================================
[   32.626632] BUG: KFENCE: memory corruption in test_corruption+0x132/0x3e0
[   32.626632] 
[   32.627262] Corrupted memory at 0x(____ptrval____) [ ! . . . . . . . . . . . . . . . ] (in kfence-#125):
[   32.628580]  test_corruption+0x132/0x3e0
[   32.629030]  kunit_try_run_case+0x1b3/0x490
[   32.629567]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.629938]  kthread+0x324/0x6d0
[   32.630415]  ret_from_fork+0x41/0x80
[   32.630796]  ret_from_fork_asm+0x1a/0x30
[   32.631108] 
[   32.631351] kfence-#125: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   32.631351] 
[   32.632166] allocated by task 321 on cpu 1 at 32.625914s (0.006247s ago):
[   32.632761]  test_alloc+0x2a7/0x10d0
[   32.633049]  test_corruption+0xe7/0x3e0
[   32.633530]  kunit_try_run_case+0x1b3/0x490
[   32.633977]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.634579]  kthread+0x324/0x6d0
[   32.634990]  ret_from_fork+0x41/0x80
[   32.635374]  ret_from_fork_asm+0x1a/0x30
[   32.635747] 
[   32.635923] freed by task 321 on cpu 1 at 32.625967s (0.009950s ago):
[   32.636358]  test_corruption+0x132/0x3e0
[   32.636827]  kunit_try_run_case+0x1b3/0x490
[   32.637320]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.637851]  kthread+0x324/0x6d0
[   32.638192]  ret_from_fork+0x41/0x80
[   32.638543]  ret_from_fork_asm+0x1a/0x30
[   32.638851] 
[   32.639126] CPU: 1 UID: 0 PID: 321 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   32.640219] Tainted: [B]=BAD_PAGE, [N]=TEST
[   32.640664] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   32.641588] ==================================================================

test log snippet boot parser 153

[   32.730084] ==================================================================
[   32.730577] BUG: KFENCE: memory corruption in test_corruption+0x217/0x3e0
[   32.730577] 
[   32.731192] Corrupted memory at 0x(____ptrval____) [ ! ] (in kfence-#126):
[   32.731947]  test_corruption+0x217/0x3e0
[   32.732413]  kunit_try_run_case+0x1b3/0x490
[   32.732769]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.733303]  kthread+0x324/0x6d0
[   32.733718]  ret_from_fork+0x41/0x80
[   32.734053]  ret_from_fork_asm+0x1a/0x30
[   32.734532] 
[   32.734774] kfence-#126: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   32.734774] 
[   32.735384] allocated by task 321 on cpu 1 at 32.729948s (0.005431s ago):
[   32.736083]  test_alloc+0x2a7/0x10d0
[   32.736546]  test_corruption+0x1cc/0x3e0
[   32.736842]  kunit_try_run_case+0x1b3/0x490
[   32.737246]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.737811]  kthread+0x324/0x6d0
[   32.738222]  ret_from_fork+0x41/0x80
[   32.738666]  ret_from_fork_asm+0x1a/0x30
[   32.739034] 
[   32.739272] freed by task 321 on cpu 1 at 32.730006s (0.009260s ago):
[   32.739831]  test_corruption+0x217/0x3e0
[   32.740168]  kunit_try_run_case+0x1b3/0x490
[   32.740668]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.741071]  kthread+0x324/0x6d0
[   32.741522]  ret_from_fork+0x41/0x80
[   32.741929]  ret_from_fork_asm+0x1a/0x30
[   32.742383] 
[   32.742672] CPU: 1 UID: 0 PID: 321 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   32.743302] Tainted: [B]=BAD_PAGE, [N]=TEST
[   32.743745] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   32.744258] ==================================================================

test log snippet boot parser 154

[   32.106221] ==================================================================
[   32.106902] BUG: KFENCE: memory corruption in test_corruption+0x2d4/0x3e0
[   32.106902] 
[   32.107624] Corrupted memory at 0x(____ptrval____) [ ! . . . . . . . . . . . . . . . ] (in kfence-#120):
[   32.109285]  test_corruption+0x2d4/0x3e0
[   32.109774]  kunit_try_run_case+0x1b3/0x490
[   32.110260]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.110768]  kthread+0x324/0x6d0
[   32.111198]  ret_from_fork+0x41/0x80
[   32.111511]  ret_from_fork_asm+0x1a/0x30
[   32.111915] 
[   32.112196] kfence-#120: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   32.112196] 
[   32.113434] allocated by task 319 on cpu 1 at 32.105928s (0.007500s ago):
[   32.113917]  test_alloc+0x35f/0x10d0
[   32.114434]  test_corruption+0xe7/0x3e0
[   32.114870]  kunit_try_run_case+0x1b3/0x490
[   32.115310]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.116064]  kthread+0x324/0x6d0
[   32.116474]  ret_from_fork+0x41/0x80
[   32.116878]  ret_from_fork_asm+0x1a/0x30
[   32.117179] 
[   32.117425] freed by task 319 on cpu 1 at 32.106024s (0.011394s ago):
[   32.118291]  test_corruption+0x2d4/0x3e0
[   32.118794]  kunit_try_run_case+0x1b3/0x490
[   32.119265]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   32.119706]  kthread+0x324/0x6d0
[   32.120017]  ret_from_fork+0x41/0x80
[   32.120535]  ret_from_fork_asm+0x1a/0x30
[   32.121006] 
[   32.121288] CPU: 1 UID: 0 PID: 319 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   32.122105] Tainted: [B]=BAD_PAGE, [N]=TEST
[   32.122730] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   32.123358] ==================================================================

test log snippet boot parser 155

[   37.306249] ==================================================================
[   37.306782] BUG: KFENCE: memory corruption in test_kmalloc_aligned_oob_write+0x251/0x340
[   37.306782] 
[   37.307435] Corrupted memory at 0x(____ptrval____) [ ! . . . . . . . . . . . . . . . ] (in kfence-#170):
[   37.308601]  test_kmalloc_aligned_oob_write+0x251/0x340
[   37.309123]  kunit_try_run_case+0x1b3/0x490
[   37.309667]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   37.310022]  kthread+0x324/0x6d0
[   37.310449]  ret_from_fork+0x41/0x80
[   37.310903]  ret_from_fork_asm+0x1a/0x30
[   37.311263] 
[   37.311522] kfence-#170: 0x(____ptrval____)-0x(____ptrval____), size=73, cache=kmalloc-96
[   37.311522] 
[   37.312092] allocated by task 333 on cpu 0 at 37.305942s (0.006145s ago):
[   37.312589]  test_alloc+0x35f/0x10d0
[   37.313025]  test_kmalloc_aligned_oob_write+0xc9/0x340
[   37.313719]  kunit_try_run_case+0x1b3/0x490
[   37.314116]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   37.314520]  kthread+0x324/0x6d0
[   37.314902]  ret_from_fork+0x41/0x80
[   37.315304]  ret_from_fork_asm+0x1a/0x30
[   37.315627] 
[   37.315799] freed by task 333 on cpu 0 at 37.306131s (0.009663s ago):
[   37.316402]  test_kmalloc_aligned_oob_write+0x251/0x340
[   37.317079]  kunit_try_run_case+0x1b3/0x490
[   37.317557]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   37.317998]  kthread+0x324/0x6d0
[   37.318260]  ret_from_fork+0x41/0x80
[   37.318730]  ret_from_fork_asm+0x1a/0x30
[   37.319159] 
[   37.319474] CPU: 0 UID: 0 PID: 333 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   37.320084] Tainted: [B]=BAD_PAGE, [N]=TEST
[   37.320677] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   37.321162] ==================================================================

test log snippet boot parser 156

[   37.202223] ==================================================================
[   37.202714] BUG: KFENCE: out-of-bounds read in test_kmalloc_aligned_oob_read+0x27f/0x570
[   37.202714] 
[   37.203452] Out-of-bounds read at 0x(____ptrval____) (105B right of kfence-#169):
[   37.204086]  test_kmalloc_aligned_oob_read+0x27f/0x570
[   37.204710]  kunit_try_run_case+0x1b3/0x490
[   37.205037]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   37.205712]  kthread+0x324/0x6d0
[   37.206116]  ret_from_fork+0x41/0x80
[   37.206538]  ret_from_fork_asm+0x1a/0x30
[   37.206856] 
[   37.207084] kfence-#169: 0x(____ptrval____)-0x(____ptrval____), size=73, cache=kmalloc-96
[   37.207084] 
[   37.207831] allocated by task 331 on cpu 0 at 37.201935s (0.005889s ago):
[   37.208457]  test_alloc+0x35f/0x10d0
[   37.208866]  test_kmalloc_aligned_oob_read+0x106/0x570
[   37.209336]  kunit_try_run_case+0x1b3/0x490
[   37.209675]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   37.210182]  kthread+0x324/0x6d0
[   37.210469]  ret_from_fork+0x41/0x80
[   37.210746]  ret_from_fork_asm+0x1a/0x30
[   37.211244] 
[   37.211552] CPU: 0 UID: 0 PID: 331 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   37.212702] Tainted: [B]=BAD_PAGE, [N]=TEST
[   37.213105] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   37.213696] ==================================================================

test log snippet boot parser 157

[   30.650049] ==================================================================
[   30.650591] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x217/0x4e0
[   30.650591] 
[   30.651537] Out-of-bounds read at 0x(____ptrval____) (32B right of kfence-#106):
[   30.652218]  test_out_of_bounds_read+0x217/0x4e0
[   30.652771]  kunit_try_run_case+0x1b3/0x490
[   30.653320]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.653700]  kthread+0x324/0x6d0
[   30.654093]  ret_from_fork+0x41/0x80
[   30.654566]  ret_from_fork_asm+0x1a/0x30
[   30.654929] 
[   30.655136] kfence-#106: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   30.655136] 
[   30.655787] allocated by task 297 on cpu 1 at 30.649962s (0.005819s ago):
[   30.656489]  test_alloc+0x2a7/0x10d0
[   30.656770]  test_out_of_bounds_read+0x1e3/0x4e0
[   30.657085]  kunit_try_run_case+0x1b3/0x490
[   30.657569]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.658120]  kthread+0x324/0x6d0
[   30.658557]  ret_from_fork+0x41/0x80
[   30.658914]  ret_from_fork_asm+0x1a/0x30
[   30.659213] 
[   30.659426] CPU: 1 UID: 0 PID: 297 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   30.660292] Tainted: [B]=BAD_PAGE, [N]=TEST
[   30.660755] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   30.661440] ==================================================================

test log snippet boot parser 158

[   30.130196] ==================================================================
[   30.130743] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x217/0x4e0
[   30.130743] 
[   30.131524] Out-of-bounds read at 0x(____ptrval____) (32B right of kfence-#101):
[   30.132528]  test_out_of_bounds_read+0x217/0x4e0
[   30.133255]  kunit_try_run_case+0x1b3/0x490
[   30.133667]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.133905]  kthread+0x324/0x6d0
[   30.134062]  ret_from_fork+0x41/0x80
[   30.134507]  ret_from_fork_asm+0x1a/0x30
[   30.135144] 
[   30.135394] kfence-#101: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   30.135394] 
[   30.136856] allocated by task 295 on cpu 0 at 30.129944s (0.006906s ago):
[   30.137635]  test_alloc+0x35f/0x10d0
[   30.138017]  test_out_of_bounds_read+0x1e3/0x4e0
[   30.138545]  kunit_try_run_case+0x1b3/0x490
[   30.138858]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.139333]  kthread+0x324/0x6d0
[   30.139722]  ret_from_fork+0x41/0x80
[   30.140019]  ret_from_fork_asm+0x1a/0x30
[   30.140480] 
[   30.140690] CPU: 0 UID: 0 PID: 295 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   30.141433] Tainted: [B]=BAD_PAGE, [N]=TEST
[   30.141771] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   30.142626] ==================================================================

test log snippet boot parser 159

[   30.027490] ==================================================================
[   30.028209] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x127/0x4e0
[   30.028209] 
[   30.029059] Out-of-bounds read at 0x(____ptrval____) (1B left of kfence-#100):
[   30.030258]  test_out_of_bounds_read+0x127/0x4e0
[   30.030935]  kunit_try_run_case+0x1b3/0x490
[   30.031943]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.032662]  kthread+0x324/0x6d0
[   30.033019]  ret_from_fork+0x41/0x80
[   30.033731]  ret_from_fork_asm+0x1a/0x30
[   30.034374] 
[   30.035107] kfence-#100: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   30.035107] 
[   30.036446] allocated by task 295 on cpu 0 at 30.025971s (0.010278s ago):
[   30.037630]  test_alloc+0x35f/0x10d0
[   30.038000]  test_out_of_bounds_read+0xee/0x4e0
[   30.038574]  kunit_try_run_case+0x1b3/0x490
[   30.039001]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.039658]  kthread+0x324/0x6d0
[   30.039981]  ret_from_fork+0x41/0x80
[   30.040567]  ret_from_fork_asm+0x1a/0x30
[   30.041273] 
[   30.041743] CPU: 0 UID: 0 PID: 295 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   30.042763] Tainted: [B]=BAD_PAGE, [N]=TEST
[   30.043260] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   30.043921] ==================================================================

test log snippet boot parser 160

[   30.234112] ==================================================================
[   30.234738] BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x127/0x4e0
[   30.234738] 
[   30.235384] Out-of-bounds read at 0x(____ptrval____) (1B left of kfence-#102):
[   30.236175]  test_out_of_bounds_read+0x127/0x4e0
[   30.236555]  kunit_try_run_case+0x1b3/0x490
[   30.237019]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.237450]  kthread+0x324/0x6d0
[   30.237912]  ret_from_fork+0x41/0x80
[   30.238268]  ret_from_fork_asm+0x1a/0x30
[   30.238670] 
[   30.238837] kfence-#102: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   30.238837] 
[   30.239688] allocated by task 297 on cpu 1 at 30.234027s (0.005654s ago):
[   30.240354]  test_alloc+0x2a7/0x10d0
[   30.240738]  test_out_of_bounds_read+0xee/0x4e0
[   30.241157]  kunit_try_run_case+0x1b3/0x490
[   30.241685]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.242097]  kthread+0x324/0x6d0
[   30.242653]  ret_from_fork+0x41/0x80
[   30.242958]  ret_from_fork_asm+0x1a/0x30
[   30.243253] 
[   30.243469] CPU: 1 UID: 0 PID: 297 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   30.244359] Tainted: [B]=BAD_PAGE, [N]=TEST
[   30.244806] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   30.245375] ==================================================================

test log snippet boot parser 161

[   30.857990] ==================================================================
[   30.858589] BUG: KFENCE: out-of-bounds write in test_out_of_bounds_write+0x10e/0x260
[   30.858589] 
[   30.859111] Out-of-bounds write at 0x(____ptrval____) (1B left of kfence-#108):
[   30.860528]  test_out_of_bounds_write+0x10e/0x260
[   30.861097]  kunit_try_run_case+0x1b3/0x490
[   30.861537]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.862089]  kthread+0x324/0x6d0
[   30.862481]  ret_from_fork+0x41/0x80
[   30.862908]  ret_from_fork_asm+0x1a/0x30
[   30.863446] 
[   30.863693] kfence-#108: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   30.863693] 
[   30.864600] allocated by task 301 on cpu 0 at 30.857924s (0.006671s ago):
[   30.865359]  test_alloc+0x2a7/0x10d0
[   30.865771]  test_out_of_bounds_write+0xd5/0x260
[   30.866221]  kunit_try_run_case+0x1b3/0x490
[   30.866587]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.867213]  kthread+0x324/0x6d0
[   30.867628]  ret_from_fork+0x41/0x80
[   30.868080]  ret_from_fork_asm+0x1a/0x30
[   30.868507] 
[   30.868808] CPU: 0 UID: 0 PID: 301 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   30.869609] Tainted: [B]=BAD_PAGE, [N]=TEST
[   30.870043] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   30.870783] ==================================================================

test log snippet boot parser 162

[   30.754111] ==================================================================
[   30.754756] BUG: KFENCE: out-of-bounds write in test_out_of_bounds_write+0x10e/0x260
[   30.754756] 
[   30.755419] Out-of-bounds write at 0x(____ptrval____) (1B left of kfence-#107):
[   30.755904]  test_out_of_bounds_write+0x10e/0x260
[   30.756379]  kunit_try_run_case+0x1b3/0x490
[   30.756948]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.757601]  kthread+0x324/0x6d0
[   30.758035]  ret_from_fork+0x41/0x80
[   30.758569]  ret_from_fork_asm+0x1a/0x30
[   30.759051] 
[   30.759346] kfence-#107: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   30.759346] 
[   30.760317] allocated by task 299 on cpu 1 at 30.753961s (0.006350s ago):
[   30.760990]  test_alloc+0x35f/0x10d0
[   30.761366]  test_out_of_bounds_write+0xd5/0x260
[   30.761926]  kunit_try_run_case+0x1b3/0x490
[   30.762503]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.763111]  kthread+0x324/0x6d0
[   30.763521]  ret_from_fork+0x41/0x80
[   30.763757]  ret_from_fork_asm+0x1a/0x30
[   30.763931] 
[   30.764039] CPU: 1 UID: 0 PID: 299 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   30.764896] Tainted: [B]=BAD_PAGE, [N]=TEST
[   30.765455] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   30.766343] ==================================================================

test log snippet boot parser 163

[   61.650992] ==================================================================
[   61.651694] BUG: KFENCE: use-after-free read in test_krealloc+0x6fd/0xbe0
[   61.651694] 
[   61.652164] Use-after-free read at 0x(____ptrval____) (in kfence-#175):
[   61.653115]  test_krealloc+0x6fd/0xbe0
[   61.653643]  kunit_try_run_case+0x1b3/0x490
[   61.654139]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   61.654699]  kthread+0x324/0x6d0
[   61.655134]  ret_from_fork+0x41/0x80
[   61.655777]  ret_from_fork_asm+0x1a/0x30
[   61.656081] 
[   61.656321] kfence-#175: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   61.656321] 
[   61.656901] allocated by task 345 on cpu 0 at 61.649977s (0.006918s ago):
[   61.657583]  test_alloc+0x35f/0x10d0
[   61.657925]  test_krealloc+0xae/0xbe0
[   61.658377]  kunit_try_run_case+0x1b3/0x490
[   61.658741]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   61.659289]  kthread+0x324/0x6d0
[   61.659645]  ret_from_fork+0x41/0x80
[   61.659925]  ret_from_fork_asm+0x1a/0x30
[   61.660381] 
[   61.660620] freed by task 345 on cpu 0 at 61.650335s (0.010280s ago):
[   61.661284]  krealloc_noprof+0x108/0x340
[   61.661743]  test_krealloc+0x227/0xbe0
[   61.662059]  kunit_try_run_case+0x1b3/0x490
[   61.662541]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   61.662882]  kthread+0x324/0x6d0
[   61.663318]  ret_from_fork+0x41/0x80
[   61.663720]  ret_from_fork_asm+0x1a/0x30
[   61.664158] 
[   61.664455] CPU: 0 UID: 0 PID: 345 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   61.665123] Tainted: [B]=BAD_PAGE, [N]=TEST
[   61.665526] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   61.666357] ==================================================================

test log snippet boot parser 164

[   61.550659] ==================================================================
[   61.551220] BUG: KFENCE: use-after-free read in test_memcache_typesafe_by_rcu+0x2ed/0x670
[   61.551220] 
[   61.551990] Use-after-free read at 0x(____ptrval____) (in kfence-#174):
[   61.553048]  test_memcache_typesafe_by_rcu+0x2ed/0x670
[   61.553989]  kunit_try_run_case+0x1b3/0x490
[   61.554288]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   61.554907]  kthread+0x324/0x6d0
[   61.555196]  ret_from_fork+0x41/0x80
[   61.555775]  ret_from_fork_asm+0x1a/0x30
[   61.556396] 
[   61.556591] kfence-#174: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   61.556591] 
[   61.557245] allocated by task 343 on cpu 0 at 61.543009s (0.014229s ago):
[   61.557856]  test_alloc+0x2a7/0x10d0
[   61.558251]  test_memcache_typesafe_by_rcu+0x170/0x670
[   61.558694]  kunit_try_run_case+0x1b3/0x490
[   61.559003]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   61.559553]  kthread+0x324/0x6d0
[   61.559940]  ret_from_fork+0x41/0x80
[   61.560382]  ret_from_fork_asm+0x1a/0x30
[   61.560838] 
[   61.561072] freed by task 343 on cpu 0 at 61.543181s (0.017885s ago):
[   61.561719]  test_memcache_typesafe_by_rcu+0x1c0/0x670
[   61.562057]  kunit_try_run_case+0x1b3/0x490
[   61.562554]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   61.563076]  kthread+0x324/0x6d0
[   61.563514]  ret_from_fork+0x41/0x80
[   61.563908]  ret_from_fork_asm+0x1a/0x30
[   61.564318] 
[   61.564539] CPU: 0 UID: 0 PID: 343 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   61.565339] Tainted: [B]=BAD_PAGE, [N]=TEST
[   61.565782] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   61.566662] ==================================================================

test log snippet boot parser 165

[   30.962266] ==================================================================
[   30.962925] BUG: KFENCE: use-after-free read in test_use_after_free_read+0x12a/0x270
[   30.962925] 
[   30.964148] Use-after-free read at 0x(____ptrval____) (in kfence-#109):
[   30.964923]  test_use_after_free_read+0x12a/0x270
[   30.965649]  kunit_try_run_case+0x1b3/0x490
[   30.966136]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.966727]  kthread+0x324/0x6d0
[   30.967067]  ret_from_fork+0x41/0x80
[   30.967464]  ret_from_fork_asm+0x1a/0x30
[   30.967916] 
[   30.968244] kfence-#109: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=kmalloc-32
[   30.968244] 
[   30.969023] allocated by task 303 on cpu 1 at 30.961992s (0.007025s ago):
[   30.969768]  test_alloc+0x35f/0x10d0
[   30.970250]  test_use_after_free_read+0xdd/0x270
[   30.970627]  kunit_try_run_case+0x1b3/0x490
[   30.971131]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.971680]  kthread+0x324/0x6d0
[   30.972098]  ret_from_fork+0x41/0x80
[   30.972600]  ret_from_fork_asm+0x1a/0x30
[   30.972962] 
[   30.973581] freed by task 303 on cpu 1 at 30.962078s (0.011267s ago):
[   30.974381]  test_use_after_free_read+0x1e9/0x270
[   30.974955]  kunit_try_run_case+0x1b3/0x490
[   30.975534]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   30.975951]  kthread+0x324/0x6d0
[   30.976421]  ret_from_fork+0x41/0x80
[   30.976769]  ret_from_fork_asm+0x1a/0x30
[   30.977313] 
[   30.977629] CPU: 1 UID: 0 PID: 303 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   30.978316] Tainted: [B]=BAD_PAGE, [N]=TEST
[   30.978801] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   30.979457] ==================================================================

test log snippet boot parser 166

[   31.066179] ==================================================================
[   31.066692] BUG: KFENCE: use-after-free read in test_use_after_free_read+0x12a/0x270
[   31.066692] 
[   31.067435] Use-after-free read at 0x(____ptrval____) (in kfence-#110):
[   31.068280]  test_use_after_free_read+0x12a/0x270
[   31.068704]  kunit_try_run_case+0x1b3/0x490
[   31.069196]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.069869]  kthread+0x324/0x6d0
[   31.070182]  ret_from_fork+0x41/0x80
[   31.070675]  ret_from_fork_asm+0x1a/0x30
[   31.071041] 
[   31.071306] kfence-#110: 0x(____ptrval____)-0x(____ptrval____), size=32, cache=test
[   31.071306] 
[   31.072033] allocated by task 305 on cpu 0 at 31.065969s (0.006059s ago):
[   31.072597]  test_alloc+0x2a7/0x10d0
[   31.073028]  test_use_after_free_read+0xdd/0x270
[   31.073674]  kunit_try_run_case+0x1b3/0x490
[   31.074164]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.074647]  kthread+0x324/0x6d0
[   31.074977]  ret_from_fork+0x41/0x80
[   31.075446]  ret_from_fork_asm+0x1a/0x30
[   31.075931] 
[   31.076163] freed by task 305 on cpu 0 at 31.066034s (0.010123s ago):
[   31.076803]  test_use_after_free_read+0xfc/0x270
[   31.077197]  kunit_try_run_case+0x1b3/0x490
[   31.077675]  kunit_generic_run_threadfn_adapter+0x85/0xf0
[   31.078264]  kthread+0x324/0x6d0
[   31.078684]  ret_from_fork+0x41/0x80
[   31.079121]  ret_from_fork_asm+0x1a/0x30
[   31.079525] 
[   31.079723] CPU: 0 UID: 0 PID: 305 Comm: kunit_try_catch Tainted: G    B            N 6.13.0 #1
[   31.080649] Tainted: [B]=BAD_PAGE, [N]=TEST
[   31.081115] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   31.081708] ==================================================================

test log snippet boot parser 167

------------[ cut here ]------------
[    5.346649] WARNING: CPU: 0 PID: 0 at kernel/smp.c:815 smp_call_function_many_cond+0x560/0x5e8
[    5.355346] Modules linked in:
[    5.358428] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.13.0 #1
[    5.358459] Hardware name: Generic DRA74X (Flattened Device Tree)
[    5.358459] Call trace: 
[    5.358459]  unwind_backtrace from show_stack+0x18/0x1c
[    5.358489]  show_stack from dump_stack_lvl+0x6c/0x8c
[    5.358489]  dump_stack_lvl from __warn+0x84/0x130
[    5.358520]  __warn from warn_slowpath_fmt+0x18c/0x194
[    5.358520]  warn_slowpath_fmt from smp_call_function_many_cond+0x560/0x5e8
[    5.358551]  smp_call_function_many_cond from smp_call_function+0x34/0x3c
[    5.358581]  smp_call_function from set_memory_valid+0x84/0xc4
[    5.358612]  set_memory_valid from kfence_guarded_free+0x170/0x2dc
[    5.358612]  kfence_guarded_free from rcu_core+0x2cc/0x9dc
[    5.358642]  rcu_core from handle_softirqs+0x14c/0x3d0
[    5.358673]  handle_softirqs from __irq_exit_rcu+0xa0/0x114
[    5.358673]  __irq_exit_rcu from irq_exit+0x10/0x30
[    5.358703]  irq_exit from __irq_svc+0x80/0xac
[    5.358703] Exception stack(0xc2401f20 to 0xc2401f68)
[    5.358734] 1f20: 0000290c 00000001 2d88c000 00000000 c240ff40 c269d760 c240a0dc c240a11c
[    5.358734] 1f40: 00000000 00000000 c240a080 ffffffff c269cb60 c2401f70 c15b5e50 c15b5e54
[    5.358734] 1f60: 60030013 ffffffff
[    5.358734]  __irq_svc from default_idle_call+0x20/0xc0
[    5.358764]  default_idle_call from do_idle+0x22c/0x2bc
[    5.358795]  do_idle from cpu_startup_entry+0x30/0x34
[    5.358795]  cpu_startup_entry from kernel_init+0x0/0x13c
[    5.358825]  kernel_init from start_kernel+0x708/0x724
[    5.358856] ---[ end trace 0000000000000000 ]---

test log snippet test parser 1

------------[ cut here ]------------
[  210.976482] WARNING: CPU: 1 PID: 50 at kernel/exit.c:885 do_exit+0x628/0x928
[  210.978351] Modules linked in: crct10dif_ce sm3_ce sm3 sha3_ce sha512_ce sha512_arm64 fuse drm backlight ip_tables x_tables
[  210.986601] CPU: 1 UID: 0 PID: 50 Comm: kworker/u8:3 Tainted: G      D            6.13.0 #1
[  210.990478] Tainted: [D]=DIE
[  210.992008] Hardware name: linux,dummy-virt (DT)
[  210.992900] Workqueue: writeback wb_workfn (flush-254:0)
[  210.995904] pstate: 63402009 (nZCv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
[  210.997333] pc : do_exit+0x628/0x928
[  210.998512] lr : do_exit+0x7c/0x928
[  211.000128] sp : ffff800080563460
[  211.001414] x29: ffff800080563470 x28: fff10d5080cb2600 x27: fff10d5080cb2600
[  211.003808] x26: ffffc63502101e40 x25: 00000000003dd601 x24: ffff800080563524
[  211.006910] x23: ffffa9fe1e5c4e34 x22: fff10d508093db00 x21: fff10d5080caa940
[  211.009436] x20: 000000000000000b x19: fff10d5080cb2600 x18: 0000000000000000
[  211.012037] x17: 0000000000000000 x16: 0000000000000001 x15: 0000000000003000
[  211.014782] x14: 0000000000000008 x13: 0000000000000408 x12: 0000000000200000
[  211.017919] x11: 0000000000000008 x10: fff10d5085c05000 x9 : 8491d7d21bb84100
[  211.021103] x8 : ffff800080563bc8 x7 : 0000000000000000 x6 : 0000000000000031
[  211.024142] x5 : 0a0a0a0a0a0a0a0a x4 : 0000000000000001 x3 : 0000000000000010
[  211.027574] x2 : 00000000000000e9 x1 : ffffa9fe1e237778 x0 : ffffa9fe1e237778
[  211.030085] Call trace:
[  211.030855]  do_exit+0x628/0x928 (P)
[  211.032446]  make_task_dead+0x90/0x140 (F)
[  211.033874]  die+0x328/0x338 (F)
[  211.035066]  __do_kernel_fault+0x240/0x250 (F)
[  211.036340]  do_bad_area+0x3c/0x100 (F)
[  211.037369]  do_translation_fault+0x5c/0x78 (F)
[  211.038377]  do_mem_abort+0x48/0xe8
[  211.039346]  el1_abort+0x3c/0x60 (F)
[  211.040349]  el1h_64_sync_handler+0x50/0xd0
[  211.041792]  el1h_64_sync+0x6c/0x70
[  211.043167]  __kmalloc_noprof+0x11c/0x4c0 (P)
[  211.044645]  virtqueue_add+0x138/0xef0
[  211.045873]  virtqueue_add_sgs+0xac/0xd0
[  211.047187]  virtblk_add_req_batch+0x120/0x1d8
[  211.048447]  virtio_queue_rqs+0x114/0x168
[  211.050096]  blk_mq_flush_plug_list+0x598/0x700
[  211.051487]  __blk_flush_plug+0xe0/0x148
[  211.052366]  blk_finish_plug+0x34/0x50
[  211.053348]  wb_writeback+0x3c4/0x418
[  211.054358]  wb_workfn+0x380/0x588
[  211.055507]  process_scheduled_works+0x180/0x3e0
[  211.056605]  worker_thread+0x294/0x3a8 (F)
[  211.057598]  kthread+0x1c4/0x210
[  211.058713]  ret_from_fork+0x10/0x20
[  211.059834] ---[ end trace 0000000000000000 ]---
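The WARNING header in the trace above follows dmesg's usual fixed layout (timestamp, CPU, PID, file:line, symbol+offset/size). A minimal parsing sketch of that layout; the regex, function name, and field names here are our own assumptions, not part of any parser used in this report:

```python
import re

# Matches lines such as:
#   [  210.976482] WARNING: CPU: 1 PID: 50 at kernel/exit.c:885 do_exit+0x628/0x928
WARN_RE = re.compile(
    r"\[\s*(?P<ts>[\d.]+)\] WARNING: CPU: (?P<cpu>\d+) PID: (?P<pid>\d+) "
    r"at (?P<file>\S+):(?P<line>\d+) "
    r"(?P<func>\S+?)\+0x(?P<off>[0-9a-f]+)/0x(?P<size>[0-9a-f]+)"
)

def parse_warning(line):
    """Return a dict of the WARNING fields, or None if the line does not match."""
    m = WARN_RE.search(line)
    return m.groupdict() if m else None
```

Splitting file:line and symbol+offset out this way is what lets a log parser group the duplicated snippets above under one signature (e.g. `kernel/exit.c:885 do_exit`).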

test log snippet test parser 2


------------[ cut here ]------------
[  210.976482] WARNING: CPU: 1 PID: 50 at kernel/exit.c:885 do_exit+0x628/0x928
[  210.978351] Modules linked in: crct10dif_ce sm3_ce sm3 sha3_ce sha512_ce sha512_arm64 fuse drm backlight ip_tables x_tables
[  210.986601] CPU: 1 UID: 0 PID: 50 Comm: kworker/u8:3 Tainted: G      D            6.13.0 #1
[  210.990478] Tainted: [D]=DIE
[  210.992008] Hardware name: linux,dummy-virt (DT)
[  210.992900] Workqueue: writeback wb_workfn (flush-254:0)
[  210.995904] pstate: 63402009 (nZCv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
[  210.997333] pc : do_exit+0x628/0x928
[  210.998512] lr : do_exit+0x7c/0x928
[  211.000128] sp : ffff800080563460
[  211.001414] x29: ffff800080563470 x28: fff10d5080cb2600 x27: fff10d5080cb2600
[  211.003808] x26: ffffc63502101e40 x25: 00000000003dd601 x24: ffff800080563524
[  211.006910] x23: ffffa9fe1e5c4e34 x22: fff10d508093db00 x21: fff10d5080caa940
[  211.009436] x20: 000000000000000b x19: fff10d5080cb2600 x18: 0000000000000000
[  211.012037] x17: 0000000000000000 x16: 0000000000000001 x15: 0000000000003000
[  211.014782] x14: 0000000000000008 x13: 0000000000000408 x12: 0000000000200000
[  211.017919] x11: 0000000000000008 x10: fff10d5085c05000 x9 : 8491d7d21bb84100
[  211.021103] x8 : ffff800080563bc8 x7 : 0000000000000000 x6 : 0000000000000031
[  211.024142] x5 : 0a0a0a0a0a0a0a0a x4 : 0000000000000001 x3 : 0000000000000010
[  211.027574] x2 : 00000000000000e9 x1 : ffffa9fe1e237778 x0 : ffffa9fe1e237778
[  211.030085] Call trace:
[  211.030855]  do_exit+0x628/0x928 (P)
[  211.032446]  make_task_dead+0x90/0x140 (F)
[  211.033874]  die+0x328/0x338 (F)
[  211.035066]  __do_kernel_fault+0x240/0x250 (F)
[  211.036340]  do_bad_area+0x3c/0x100 (F)
[  211.037369]  do_translation_fault+0x5c/0x78 (F)
[  211.038377]  do_mem_abort+0x48/0xe8
[  211.039346]  el1_abort+0x3c/0x60 (F)
[  211.040349]  el1h_64_sync_handler+0x50/0xd0
[  211.041792]  el1h_64_sync+0x6c/0x70
[  211.043167]  __kmalloc_noprof+0x11c/0x4c0 (P)
[  211.044645]  virtqueue_add+0x138/0xef0
[  211.045873]  virtqueue_add_sgs+0xac/0xd0
[  211.047187]  virtblk_add_req_batch+0x120/0x1d8
[  211.048447]  virtio_queue_rqs+0x114/0x168
[  211.050096]  blk_mq_flush_plug_list+0x598/0x700
[  211.051487]  __blk_flush_plug+0xe0/0x148
[  211.052366]  blk_finish_plug+0x34/0x50
[  211.053348]  wb_writeback+0x3c4/0x418
[  211.054358]  wb_workfn+0x380/0x588
[  211.055507]  process_scheduled_works+0x180/0x3e0
[  211.056605]  worker_thread+0x294/0x3a8 (F)
[  211.057598]  kthread+0x1c4/0x210
[  211.058713]  ret_from_fork+0x10/0x20
[  211.059834] ---[ end trace 0000000000000000 ]---

test log snippet test parser 3


[  210.808915] Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP
[  210.810838] Modules linked in: crct10dif_ce sm3_ce sm3 sha3_ce sha512_ce sha512_arm64 fuse drm backlight ip_tables x_tables
[  210.821445] CPU: 1 UID: 0 PID: 50 Comm: kworker/u8:3 Not tainted 6.13.0 #1
[  210.824674] Hardware name: linux,dummy-virt (DT)
[  210.826710] Workqueue: writeback wb_workfn (flush-254:0)
[  210.829433] pstate: 624020c9 (nZCv daIF +PAN -UAO +TCO -DIT -SSBS BTYPE=--)
[  210.831088] pc : __kmalloc_noprof+0x11c/0x4c0
[  210.832916] lr : virtqueue_add+0x138/0xef0
[  210.834331] sp : ffff8000805637c0
[  210.835894] x29: ffff8000805637e0 x28: 0000000000000001 x27: fff10d5080cb2600
[  210.838471] x26: ffffc63502101e40 x25: 00000000003dd601 x24: 00000000ffffffff
[  210.841531] x23: b741f3a200000000 x22: fff10d5080001200 x21: ffffa9fe1ec6d7c8
[  210.844648] x20: 0000000000000820 x19: 0000000000000060 x18: 0000000000000000
[  210.847850] x17: 0000000000000000 x16: ffffa9fe20506000 x15: 0000000000003000
[  210.850560] x14: 0000000000000009 x13: 0000000000000408 x12: 0000000000210000
[  210.852740] x11: ffffa9fe2050e428 x10: 0000000000000070 x9 : fff163529ebfb000
[  210.855999] x8 : 0000000000000030 x7 : 0000000000000000 x6 : 0000000000000000
[  210.858463] x5 : fff10d5081810870 x4 : 0000000000000001 x3 : 00000000003dd801
[  210.861100] x2 : 0000000000000003 x1 : 0000000000000820 x0 : 0000000000000060
[  210.863921] Call trace:
[  210.864952]  __kmalloc_noprof+0x11c/0x4c0 (P)
[  210.866332]  virtqueue_add+0x138/0xef0
[  210.868369]  virtqueue_add_sgs+0xac/0xd0
[  210.869376]  virtblk_add_req_batch+0x120/0x1d8
[  210.870719]  virtio_queue_rqs+0x114/0x168
[  210.871779]  blk_mq_flush_plug_list+0x598/0x700
[  210.873195]  __blk_flush_plug+0xe0/0x148
[  210.874955]  blk_finish_plug+0x34/0x50
[  210.877120]  wb_writeback+0x3c4/0x418
[  210.877934]  wb_workfn+0x380/0x588
[  210.878908]  process_scheduled_works+0x180/0x3e0
[  210.880881]  worker_thread+0x294/0x3a8 (F)
[  210.881998]  kthread+0x1c4/0x210
[  210.882706]  ret_from_fork+0x10/0x20
[  210.884489] Code: 6b08031f 54000c81 b9402ac8 91080323 (f877691a) 
[  210.886882] ---[ end trace 0000000000000000 ]---
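The Oops code 0000000096000004 above is the raw arm64 ESR (Exception Syndrome Register) value. A small decode sketch, with the field layout taken from the Arm ARM's ESR_ELx encoding; the helper name is ours:

```python
# Decode an arm64 ESR value such as the 0x96000004 in the Oops line above.
def decode_esr(esr):
    ec = (esr >> 26) & 0x3F    # exception class, bits [31:26]
    iss = esr & 0x1FFFFFF      # instruction-specific syndrome, bits [24:0]
    dfsc = iss & 0x3F          # data fault status code (valid for data aborts)
    return ec, dfsc

ec, dfsc = decode_esr(0x96000004)
# EC 0x25 is a data abort from the current EL; DFSC 0x04 is a level-0
# translation fault, consistent with do_translation_fault in the call trace.
```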

test log snippet test parser 4


[  210.808915] Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP
[  210.810838] Modules linked in: crct10dif_ce sm3_ce sm3 sha3_ce sha512_ce sha512_arm64 fuse drm backlight ip_tables x_tables
[  210.821445] CPU: 1 UID: 0 PID: 50 Comm: kworker/u8:3 Not tainted 6.13.0 #1
[  210.824674] Hardware name: linux,dummy-virt (DT)
[  210.826710] Workqueue: writeback wb_workfn (flush-254:0)
[  210.829433] pstate: 624020c9 (nZCv daIF +PAN -UAO +TCO -DIT -SSBS BTYPE=--)
[  210.831088] pc : __kmalloc_noprof+0x11c/0x4c0
[  210.832916] lr : virtqueue_add+0x138/0xef0
[  210.834331] sp : ffff8000805637c0
[  210.835894] x29: ffff8000805637e0 x28: 0000000000000001 x27: fff10d5080cb2600
[  210.838471] x26: ffffc63502101e40 x25: 00000000003dd601 x24: 00000000ffffffff
[  210.841531] x23: b741f3a200000000 x22: fff10d5080001200 x21: ffffa9fe1ec6d7c8
[  210.844648] x20: 0000000000000820 x19: 0000000000000060 x18: 0000000000000000
[  210.847850] x17: 0000000000000000 x16: ffffa9fe20506000 x15: 0000000000003000
[  210.850560] x14: 0000000000000009 x13: 0000000000000408 x12: 0000000000210000
[  210.852740] x11: ffffa9fe2050e428 x10: 0000000000000070 x9 : fff163529ebfb000
[  210.855999] x8 : 0000000000000030 x7 : 0000000000000000 x6 : 0000000000000000
[  210.858463] x5 : fff10d5081810870 x4 : 0000000000000001 x3 : 00000000003dd801
[  210.861100] x2 : 0000000000000003 x1 : 0000000000000820 x0 : 0000000000000060
[  210.863921] Call trace:
[  210.864952]  __kmalloc_noprof+0x11c/0x4c0 (P)
[  210.866332]  virtqueue_add+0x138/0xef0
[  210.868369]  virtqueue_add_sgs+0xac/0xd0
[  210.869376]  virtblk_add_req_batch+0x120/0x1d8
[  210.870719]  virtio_queue_rqs+0x114/0x168
[  210.871779]  blk_mq_flush_plug_list+0x598/0x700
[  210.873195]  __blk_flush_plug+0xe0/0x148
[  210.874955]  blk_finish_plug+0x34/0x50
[  210.877120]  wb_writeback+0x3c4/0x418
[  210.877934]  wb_workfn+0x380/0x588
[  210.878908]  process_scheduled_works+0x180/0x3e0
[  210.880881]  worker_thread+0x294/0x3a8 (F)
[  210.881998]  kthread+0x1c4/0x210
[  210.882706]  ret_from_fork+0x10/0x20
[  210.884489] Code: 6b08031f 54000c81 b9402ac8 91080323 (f877691a) 
[  210.886882] ---[ end trace 0000000000000000 ]---

test log snippet test parser 5


------------[ cut here ]------------
[   39.053731] WARNING: CPU: 0 PID: 469 at mm/util.c:657 __kvmalloc_node_noprof+0x11c/0x124
[   39.056510] Modules linked in: crct10dif_ce sm3_ce sm3 sha3_ce sha512_ce sha512_arm64 drm fuse backlight ip_tables x_tables
[   39.060139] CPU: 0 UID: 0 PID: 469 Comm: unshare_test Not tainted 6.13.0 #1
[   39.061457] Hardware name: linux,dummy-virt (DT)
[   39.062603] pstate: 23402009 (nzCv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
[   39.063147] pc : __kvmalloc_node_noprof+0x11c/0x124
[   39.063609] lr : __kvmalloc_node_noprof+0x50/0x124
[   39.064054] sp : ffff800080adbce0
[   39.064352] x29: ffff800080adbcf0 x28: fff18298019cb6c0 x27: 0000000000000000
[   39.065174] x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
[   39.066491] x23: 0000000080001000 x22: 05ef928e7775d6d8 x21: 0000000200001e00
[   39.066912] x20: 0000000000400cc0 x19: 00000000ffffffff x18: 0000000000000000
[   39.067330] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[   39.067741] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
[   39.068174] x11: 0000000000000040 x10: 614f0b0a612dc400 x9 : 614f0b0a612dc400
[   39.069404] x8 : 0000000000000004 x7 : 0000000000000000 x6 : 0000000000000000
[   39.071679] x5 : 0000000000000000 x4 : ffff800080adbcb0 x3 : 0000000000000000
[   39.073118] x2 : 0000000000000000 x1 : 0000000000000016 x0 : 0000000000000000
[   39.074414] Call trace:
[   39.075188]  __kvmalloc_node_noprof+0x11c/0x124 (P)
[   39.076001]  alloc_fdtable+0x74/0x118
[   39.076689]  expand_files+0x114/0x2e0
[   39.077567]  ksys_dup3+0x64/0xe8
[   39.078403]  __arm64_sys_dup3+0x20/0x30
[   39.079268]  invoke_syscall+0x40/0xf8
[   39.080530]  el0_svc_common+0xa8/0xd8
[   39.081305]  do_el0_svc+0x1c/0x28
[   39.081960]  el0_svc+0x38/0x68
[   39.082197]  el0t_64_sync_handler+0x78/0x108
[   39.082452]  el0t_64_sync+0x198/0x19c
[   39.082901] ---[ end trace 0000000000000000 ]---

test log snippet test parser 6


------------[ cut here ]------------
[   65.540786] ------------[ cut here ]------------
[   65.541427] Bad frame pointer: expected ff2000000048ada0, received ffffffff81c7ceac
[   65.541427]   from func cpus_read_lock return to ffffffff80211412
[   65.542930] WARNING: CPU: 1 PID: 302 at kernel/trace/fgraph.c:760 ftrace_return_to_handler+0x1fc/0x222
[   65.544211] Modules linked in:
[   65.545399] Bad frame pointer: expected ff200000005eab50, received ffffffff81a09ea4
[   65.545399]   from func __might_sleep return to ffffffff8003e362
[   65.545399]  drm backlight drm_panel_orientation_quirks configfs dm_mod ip_tables x_tables
[   65.547696] CPU: 1 UID: 0 PID: 302 Comm: ftrace_stress_t Tainted: G        W          6.13.0 #1
[   65.548720] WARNING: CPU: 0 PID: 317 at kernel/trace/fgraph.c:760 ftrace_return_to_handler+0x1fc/0x222
[   65.548783] Modules linked in: drm backlight drm_panel_orientation_quirks configfs dm_mod ip_tables x_tables
[   65.548965] CPU: 0 UID: 0 PID: 317 Comm: sh Tainted: G        W          6.13.0 #1
[   65.548982] Tainted: G        W         
[   65.549008] Tainted: [W]=WARN
[   65.549014] Hardware name: riscv-virtio,qemu (DT)
[   65.549029] Hardware name: riscv-virtio,qemu (DT)
[   65.549152] epc : ftrace_return_to_handler+0x1fc/0x222
[   65.549176] epc : ftrace_return_to_handler+0x1fc/0x222
[   65.549198]  ra : ftrace_return_to_handler+0x1fc/0x222
[   65.549215]  ra : ftrace_return_to_handler+0x1fc/0x222
[   65.549237] epc : ffffffff801671ce ra : ffffffff801671ce sp : ff2000000048ac90
[   65.549253] epc : ffffffff801671ce ra : ffffffff801671ce sp : ff200000005eaa50
[   65.549265]  gp : ffffffff81c78988 tp : ff60000081b88000 t0 : ffffffff800b1bf4
[   65.549279]  gp : ffffffff81c78988 tp : ff60000081b89d80 t0 : ffffffff800acb2e
[   65.549290]  t1 : 0000000000000000 t2 : 0000000000000008 s0 : ff2000000048ad30
[   65.549304]  t1 : 0000000000000000 t2 : 746867696c6b6361 s0 : ff200000005eaaf0
[   65.549316]  s1 : ff60000089d58ee8 a0 : fb8f0f399b5b3a00 a1 : fb8f0f399b5b3a00
[   65.549329]  s1 : ff60000089d5cee8 a0 : fb8f0f399b5b3a00 a1 : fb8f0f399b5b3a00
[   65.549342]  a2 : 0000000000000023 a3 : 0000000000000001 a4 : 0000000000000fff
[   65.549355]  a2 : 0000000000000023 a3 : 0000000000000001 a4 : 0000000000000fff
[   65.549368]  a5 : ffffffff81a3f740 a6 : 0000000000000000 a7 : 0000000000000111
[   65.549380]  a5 : ffffffff81a3f740 a6 : 0000000000000000 a7 : 0000000000000115
[   65.549394]  s2 : ffffffff81c7ceac s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.549404]  s2 : ffffffff81a09ea4 s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.549420]  s5 : ff2000000048ad30 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.549429]  s5 : ff200000005eaaf0 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.549445]  s8 : ffffffffffffffff s9 : ff2000000048bdc0 s10: 000055558a9e0388
[   65.549471]  s11: ff2000000048be40 t3 : ffffffffffffffff t4 : ff2000000048ab80
[   65.549454]  s8 : ffffffffffffffff s9 : ff200000005eba50 s10: ffffffff81c80512
[   65.549496]  t5 : ffffffff81c96090 t6 : ffffffff81c9613e
[   65.549498]  s11: ff200000005ebad0 t3 : ffffffff81cb1cba t4 : ff200000005ea7e0
[   65.549519] status: 0000000200000100 badaddr: ffffffff801671ce cause: 0000000000000003
[   65.549522]  t5 : ffffffff81cb1cc0 t6 : ffffffff81cb1cba
[   65.549547] status: 0000000200000100 badaddr: ffffffff801671ce cause: 0000000000000003
[   65.549607] [<ffffffff801671ce>] ftrace_return_to_handler+0x1fc/0x222
[   65.549652] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.549654] [<ffffffff801671ce>] ftrace_return_to_handler+0x1fc/0x222
[   65.549697] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.549854] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.549866] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.549897] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.549933] ---[ end trace 0000000000000000 ]---
------------[ cut here ]------------
[   65.550057] ------------[ cut here ]------------
[   65.550082] WARNING: CPU: 1 PID: 302 at kernel/trace/fgraph.c:824 ftrace_return_to_handler+0x20e/0x222
[   65.550146] Modules linked in: drm backlight drm_panel_orientation_quirks
[   65.550083] WARNING: CPU: 0 PID: 317 at kernel/trace/fgraph.c:824 ftrace_return_to_handler+0x20e/0x222
[   65.550231]  configfs
[   65.550237] Modules linked in:
[   65.550252]  dm_mod ip_tables
[   65.550273]  drm
[   65.550287]  x_tables
[   65.550295]  backlight drm_panel_orientation_quirks
[   65.550314] 
[   65.550345] CPU: 1 UID: 0 PID: 302 Comm: ftrace_stress_t Tainted: G        W          6.13.0 #1
[   65.550360]  configfs dm_mod
[   65.550386] Tainted: [W]=WARN
[   65.550396]  ip_tables
[   65.550407] Hardware name: riscv-virtio,qemu (DT)
[   65.550416]  x_tables
[   65.550432] epc : ftrace_return_to_handler+0x20e/0x222
[   65.550458] CPU: 0 UID: 0 PID: 317 Comm: sh Tainted: G        W          6.13.0 #1
[   65.550472]  ra : ftrace_return_to_handler+0x20e/0x222
[   65.550499] Tainted: [W]=WARN
[   65.550510] epc : ffffffff801671e0 ra : ffffffff801671e0 sp : ff2000000048ac90
[   65.550519] Hardware name: riscv-virtio,qemu (DT)
[   65.550538]  gp : ffffffff81c78988 tp : ff60000081b88000 t0 : ffffffff8003e440
[   65.550542] epc : ftrace_return_to_handler+0x20e/0x222
[   65.550564]  t1 : 0000000000000000 t2 : 0000000000000008 s0 : ff2000000048ad30
[   65.550588]  s1 : ff60000089d58ee8 a0 : 0000000000000000 a1 : 0000000000000000
[   65.550583]  ra : ftrace_return_to_handler+0x20e/0x222
[   65.550612]  a2 : ffffffffffffffff a3 : ffffffff81cf6598 a4 : ffffffff81c7ceac
[   65.550622] epc : ffffffff801671e0 ra : ffffffff801671e0 sp : ff200000005eaa50
[   65.550638]  a5 : 0000000000000000 a6 : ffffffff81a09ea4 a7 : 0000000000000111
[   65.550654]  gp : ffffffff81c78988 tp : ff60000081b89d80 t0 : ffffffff8003e440
[   65.550670]  s2 : ffffffff81c7ceac s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.550679]  t1 : 0000000000000000 t2 : 746867696c6b6361 s0 : ff200000005eaaf0
[   65.550695]  s5 : ff2000000048ad30 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.550703]  s1 : ff60000089d5cee8 a0 : 0000000000000000 a1 : 0000000000000000
[   65.550721]  s8 : ffffffffffffffff s9 : ff2000000048bdc0 s10: 000055558a9e0388
[   65.550727]  a2 : 0000000000000002 a3 : ffffffff81cf6598 a4 : ffffffff81c7ceac
[   65.550746]  s11: ff2000000048be40 t3 : ffffffffffffffff t4 : ff2000000048ac70
[   65.550751]  a5 : 0000000000000000 a6 : ffffffff81a09ea4 a7 : 0000000000000115
[   65.550771]  t5 : ffffffff81c96090 t6 : ffffffff81c9613e
[   65.550777]  s2 : ffffffff81a09ea4 s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.550794] status: 0000000200000100 badaddr: ffffffff801671e0 cause: 0000000000000003
[   65.550801]  s5 : ff200000005eaaf0 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.550825]  s8 : ffffffffffffffff s9 : ff200000005eba50 s10: ffffffff81c80512
[   65.550822] [<ffffffff801671e0>] ftrace_return_to_handler+0x20e/0x222
[   65.550849]  s11: ff200000005ebad0 t3 : ffffffff81cb1cba t4 : ff200000005eaa30
[   65.550865] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.550873]  t5 : ffffffff81cb1cc0 t6 : ffffffff81cb1cba
[   65.550895] status: 0000000200000100 badaddr: ffffffff801671e0 cause: 0000000000000003
[   65.550906] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.550921] [<ffffffff801671e0>] ftrace_return_to_handler+0x20e/0x222
[   65.550946] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.550962] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.550985] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.551002] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.551030] ---[ end trace 0000000000000000 ]---

test log snippet test parser 7


------------[ cut here ]------------
[   65.540786] ------------[ cut here ]------------
[   65.541427] Bad frame pointer: expected ff2000000048ada0, received ffffffff81c7ceac
[   65.541427]   from func cpus_read_lock return to ffffffff80211412
[   65.542930] WARNING: CPU: 1 PID: 302 at kernel/trace/fgraph.c:760 ftrace_return_to_handler+0x1fc/0x222
[   65.544211] Modules linked in:
[   65.545399] Bad frame pointer: expected ff200000005eab50, received ffffffff81a09ea4
[   65.545399]   from func __might_sleep return to ffffffff8003e362
[   65.545399]  drm backlight drm_panel_orientation_quirks configfs dm_mod ip_tables x_tables
[   65.547696] CPU: 1 UID: 0 PID: 302 Comm: ftrace_stress_t Tainted: G        W          6.13.0 #1
[   65.548720] WARNING: CPU: 0 PID: 317 at kernel/trace/fgraph.c:760 ftrace_return_to_handler+0x1fc/0x222
[   65.548783] Modules linked in: drm backlight drm_panel_orientation_quirks configfs dm_mod ip_tables x_tables
[   65.548965] CPU: 0 UID: 0 PID: 317 Comm: sh Tainted: G        W          6.13.0 #1
[   65.548982] Tainted: G        W         
[   65.549008] Tainted: [W]=WARN
[   65.549014] Hardware name: riscv-virtio,qemu (DT)
[   65.549029] Hardware name: riscv-virtio,qemu (DT)
[   65.549152] epc : ftrace_return_to_handler+0x1fc/0x222
[   65.549176] epc : ftrace_return_to_handler+0x1fc/0x222
[   65.549198]  ra : ftrace_return_to_handler+0x1fc/0x222
[   65.549215]  ra : ftrace_return_to_handler+0x1fc/0x222
[   65.549237] epc : ffffffff801671ce ra : ffffffff801671ce sp : ff2000000048ac90
[   65.549253] epc : ffffffff801671ce ra : ffffffff801671ce sp : ff200000005eaa50
[   65.549265]  gp : ffffffff81c78988 tp : ff60000081b88000 t0 : ffffffff800b1bf4
[   65.549279]  gp : ffffffff81c78988 tp : ff60000081b89d80 t0 : ffffffff800acb2e
[   65.549290]  t1 : 0000000000000000 t2 : 0000000000000008 s0 : ff2000000048ad30
[   65.549304]  t1 : 0000000000000000 t2 : 746867696c6b6361 s0 : ff200000005eaaf0
[   65.549316]  s1 : ff60000089d58ee8 a0 : fb8f0f399b5b3a00 a1 : fb8f0f399b5b3a00
[   65.549329]  s1 : ff60000089d5cee8 a0 : fb8f0f399b5b3a00 a1 : fb8f0f399b5b3a00
[   65.549342]  a2 : 0000000000000023 a3 : 0000000000000001 a4 : 0000000000000fff
[   65.549355]  a2 : 0000000000000023 a3 : 0000000000000001 a4 : 0000000000000fff
[   65.549368]  a5 : ffffffff81a3f740 a6 : 0000000000000000 a7 : 0000000000000111
[   65.549380]  a5 : ffffffff81a3f740 a6 : 0000000000000000 a7 : 0000000000000115
[   65.549394]  s2 : ffffffff81c7ceac s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.549404]  s2 : ffffffff81a09ea4 s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.549420]  s5 : ff2000000048ad30 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.549429]  s5 : ff200000005eaaf0 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.549445]  s8 : ffffffffffffffff s9 : ff2000000048bdc0 s10: 000055558a9e0388
[   65.549471]  s11: ff2000000048be40 t3 : ffffffffffffffff t4 : ff2000000048ab80
[   65.549454]  s8 : ffffffffffffffff s9 : ff200000005eba50 s10: ffffffff81c80512
[   65.549496]  t5 : ffffffff81c96090 t6 : ffffffff81c9613e
[   65.549498]  s11: ff200000005ebad0 t3 : ffffffff81cb1cba t4 : ff200000005ea7e0
[   65.549519] status: 0000000200000100 badaddr: ffffffff801671ce cause: 0000000000000003
[   65.549522]  t5 : ffffffff81cb1cc0 t6 : ffffffff81cb1cba
[   65.549547] status: 0000000200000100 badaddr: ffffffff801671ce cause: 0000000000000003
[   65.549607] [<ffffffff801671ce>] ftrace_return_to_handler+0x1fc/0x222
[   65.549652] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.549654] [<ffffffff801671ce>] ftrace_return_to_handler+0x1fc/0x222
[   65.549697] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.549854] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.549866] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.549897] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.549933] ---[ end trace 0000000000000000 ]---

test log snippet test parser 8


------------[ cut here ]------------
[   65.550057] ------------[ cut here ]------------
[   65.550082] WARNING: CPU: 1 PID: 302 at kernel/trace/fgraph.c:824 ftrace_return_to_handler+0x20e/0x222
[   65.550146] Modules linked in: drm backlight drm_panel_orientation_quirks
[   65.550083] WARNING: CPU: 0 PID: 317 at kernel/trace/fgraph.c:824 ftrace_return_to_handler+0x20e/0x222
[   65.550231]  configfs
[   65.550237] Modules linked in:
[   65.550252]  dm_mod ip_tables
[   65.550273]  drm
[   65.550287]  x_tables
[   65.550295]  backlight drm_panel_orientation_quirks
[   65.550314] 
[   65.550345] CPU: 1 UID: 0 PID: 302 Comm: ftrace_stress_t Tainted: G        W          6.13.0 #1
[   65.550360]  configfs dm_mod
[   65.550386] Tainted: [W]=WARN
[   65.550396]  ip_tables
[   65.550407] Hardware name: riscv-virtio,qemu (DT)
[   65.550416]  x_tables
[   65.550432] epc : ftrace_return_to_handler+0x20e/0x222
[   65.550458] CPU: 0 UID: 0 PID: 317 Comm: sh Tainted: G        W          6.13.0 #1
[   65.550472]  ra : ftrace_return_to_handler+0x20e/0x222
[   65.550499] Tainted: [W]=WARN
[   65.550510] epc : ffffffff801671e0 ra : ffffffff801671e0 sp : ff2000000048ac90
[   65.550519] Hardware name: riscv-virtio,qemu (DT)
[   65.550538]  gp : ffffffff81c78988 tp : ff60000081b88000 t0 : ffffffff8003e440
[   65.550542] epc : ftrace_return_to_handler+0x20e/0x222
[   65.550564]  t1 : 0000000000000000 t2 : 0000000000000008 s0 : ff2000000048ad30
[   65.550588]  s1 : ff60000089d58ee8 a0 : 0000000000000000 a1 : 0000000000000000
[   65.550583]  ra : ftrace_return_to_handler+0x20e/0x222
[   65.550612]  a2 : ffffffffffffffff a3 : ffffffff81cf6598 a4 : ffffffff81c7ceac
[   65.550622] epc : ffffffff801671e0 ra : ffffffff801671e0 sp : ff200000005eaa50
[   65.550638]  a5 : 0000000000000000 a6 : ffffffff81a09ea4 a7 : 0000000000000111
[   65.550654]  gp : ffffffff81c78988 tp : ff60000081b89d80 t0 : ffffffff8003e440
[   65.550670]  s2 : ffffffff81c7ceac s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.550679]  t1 : 0000000000000000 t2 : 746867696c6b6361 s0 : ff200000005eaaf0
[   65.550695]  s5 : ff2000000048ad30 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.550703]  s1 : ff60000089d5cee8 a0 : 0000000000000000 a1 : 0000000000000000
[   65.550721]  s8 : ffffffffffffffff s9 : ff2000000048bdc0 s10: 000055558a9e0388
[   65.550727]  a2 : 0000000000000002 a3 : ffffffff81cf6598 a4 : ffffffff81c7ceac
[   65.550746]  s11: ff2000000048be40 t3 : ffffffffffffffff t4 : ff2000000048ac70
[   65.550751]  a5 : 0000000000000000 a6 : ffffffff81a09ea4 a7 : 0000000000000115
[   65.550771]  t5 : ffffffff81c96090 t6 : ffffffff81c9613e
[   65.550777]  s2 : ffffffff81a09ea4 s3 : ffffffff81c7cff0 s4 : 00000000000001dd
[   65.550794] status: 0000000200000100 badaddr: ffffffff801671e0 cause: 0000000000000003
[   65.550801]  s5 : ff200000005eaaf0 s6 : ffffffff81c7d380 s7 : ffffffff81c783a2
[   65.550825]  s8 : ffffffffffffffff s9 : ff200000005eba50 s10: ffffffff81c80512
[   65.550822] [<ffffffff801671e0>] ftrace_return_to_handler+0x20e/0x222
[   65.550849]  s11: ff200000005ebad0 t3 : ffffffff81cb1cba t4 : ff200000005eaa30
[   65.550865] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.550873]  t5 : ffffffff81cb1cc0 t6 : ffffffff81cb1cba
[   65.550895] status: 0000000200000100 badaddr: ffffffff801671e0 cause: 0000000000000003
[   65.550906] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.550921] [<ffffffff801671e0>] ftrace_return_to_handler+0x20e/0x222
[   65.550946] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.550962] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.550985] [<ffffffff80013916>] return_to_handler+0x16/0x24
[   65.551002] [<ffffffff80013900>] return_to_handler+0x0/0x24
[   65.551030] ---[ end trace 0000000000000000 ]---

test log snippet test parser 9


[   83.894953] BUG: scheduling while atomic: cron/180/0x00000003

test log snippet test parser 10


[   83.894953] BUG: scheduling while atomic: cron/180/0x00000003

test log snippet test parser 11


[   54.082619] BUG: scheduling while atomic: ftrace_stress_t/300/0x00000003
[   54.069241] BUG: scheduling while atomic: ftrace_stress_t/321/0x00000003

test log snippet test parser 12


[   54.082619] BUG: scheduling while atomic: ftrace_stress_t/300/0x00000003
---
[   54.069241] BUG: scheduling while atomic: ftrace_stress_t/321/0x00000003

test log snippet test parser 13


[   54.169278] BUG: scheduling while atomic: kcompactd0/39/0x00000003

test log snippet test parser 14


[   54.169278] BUG: scheduling while atomic: kcompactd0/39/0x00000003

test log snippet test parser 15


[   54.272912] BUG: scheduling while atomic: kirk/263/0x00000003

test log snippet test parser 16


[   54.272912] BUG: scheduling while atomic: kirk/263/0x00000003

test log snippet test parser 17


[   58.137145] BUG: scheduling while atomic: kworker/0:1H/53/0x00000003

test log snippet test parser 18


[   58.137145] BUG: scheduling while atomic: kworker/0:1H/53/0x00000003
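The "scheduling while atomic" lines in these snippets end in `<comm>/<pid>/<preempt_count>`. As the line above shows, the task name itself may contain `/` (here `kworker/0:1H`), so a parser has to split from the right. A minimal sketch under that assumption (the function name is ours):

```python
# "BUG: scheduling while atomic: <comm>/<pid>/<preempt_count>" --
# comm may itself contain '/' (e.g. "kworker/0:1H"), so split from the right.
def parse_sched_atomic(line):
    payload = line.split("scheduling while atomic: ", 1)[1]
    comm, pid, count = payload.rsplit("/", 2)
    return comm, int(pid), int(count, 16)
```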

test log snippet test parser 19


[   54.307138] BUG: scheduling while atomic: tst_sleep/318/0x00000003

test log snippet test parser 20


[   54.307138] BUG: scheduling while atomic: tst_sleep/318/0x00000003

test log snippet test parser 21


------------[ cut here ]------------
[   44.871284] WARNING: CPU: 1 PID: 480 at mm/util.c:657 __kvmalloc_node_noprof+0xdf/0xf0
[   44.873482] Modules linked in:
[   44.874896] CPU: 1 UID: 0 PID: 480 Comm: unshare_test Not tainted 6.13.0 #1
[   44.876478] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   44.877483] RIP: 0010:__kvmalloc_node_noprof+0xdf/0xf0
[   44.877843] Code: 89 f0 ff 75 08 53 68 00 04 00 00 e8 9b 44 04 00 48 83 c4 18 5b 41 5e 41 5f 5d c3 cc cc cc cc cc 41 f7 c6 00 20 00 00 75 81 90 <0f> 0b 90 31 c0 eb e3 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90
[   44.880172] RSP: 0018:ffff9df500627db8 EFLAGS: 00010246
[   44.880785] RAX: 0000000000000004 RBX: 00000000ffffffff RCX: 0000000400000000
[   44.881489] RDX: 0000000200001e00 RSI: 0000000000000000 RDI: ffffffffa7669a08
[   44.881738] RBP: ffff9df500627dd0 R08: 0000000000412cc0 R09: 00000000ffffffff
[   44.882016] R10: 0000000000000000 R11: 0000000000000000 R12: 00000000400003c0
[   44.882772] R13: 00000000400003c0 R14: 0000000000400cc0 R15: 0000000200001e00
[   44.884643] FS:  00007fcd8a220740(0000) GS:ffff91bafbd00000(0000) knlGS:0000000000000000
[   44.886157] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   44.886970] CR2: 00007fcd8a3c4680 CR3: 0000000102020000 CR4: 00000000000006f0
[   44.887912] Call Trace:
[   44.889667]  <TASK>
[   44.890413]  ? show_regs+0x5d/0x70
[   44.890965]  ? __warn+0xcf/0x1d0
[   44.891218]  ? __kvmalloc_node_noprof+0xdf/0xf0
[   44.891376]  ? report_bug+0x14c/0x1f0
[   44.891508]  ? handle_bug+0x67/0x90
[   44.892046]  ? exc_invalid_op+0x1f/0x50
[   44.892486]  ? asm_exc_invalid_op+0x1f/0x30
[   44.893002]  ? __kvmalloc_node_noprof+0x48/0xf0
[   44.893774]  ? __kvmalloc_node_noprof+0xdf/0xf0
[   44.894279]  ? __kvmalloc_node_noprof+0x48/0xf0
[   44.894783]  alloc_fdtable+0x7b/0x120
[   44.895477]  expand_files+0x114/0x300
[   44.895864]  ? enqueue_hrtimer+0x3a/0xa0
[   44.896319]  ? ktime_get+0x3e/0xe0
[   44.896803]  ? lapic_next_event+0x15/0x20
[   44.897300]  ? clockevents_program_event+0x8e/0x110
[   44.897951]  ksys_dup3+0x71/0xd0
[   44.898234]  __x64_sys_dup2+0x1e/0x80
[   44.898377]  x64_sys_call+0x8cf/0x2f50
[   44.898706]  do_syscall_64+0xd0/0x1a0
[   44.899137]  ? irqentry_exit+0x16/0x60
[   44.899660]  ? sysvec_apic_timer_interrupt+0x4a/0x90
[   44.900257]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[   44.901150] RIP: 0033:0x7fcd8a318c17
[   44.902143] Code: 73 01 c3 48 8b 0d 09 62 0e 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 21 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 d9 61 0e 00 f7 d8 64 89 02 b8
[   44.903434] RSP: 002b:00007ffd88544638 EFLAGS: 00000202 ORIG_RAX: 0000000000000021
[   44.904585] RAX: ffffffffffffffda RBX: 00007ffd88544e98 RCX: 00007fcd8a318c17
[   44.905243] RDX: 00007ffd885447b8 RSI: 0000000040000038 RDI: 0000000000000002
[   44.905529] RBP: 00007ffd88544850 R08: 0000000000000064 R09: 0000000000000000
[   44.905748] R10: 00007fcd8a235170 R11: 0000000000000202 R12: 0000000000000000
[   44.905966] R13: 00007ffd88544ea8 R14: 00007fcd8a454000 R15: 000055eb3c4e2cf8
[   44.906305]  </TASK>
[   44.906542] ---[ end trace 0000000000000000 ]---

test log snippet test parser 22


------------[ cut here ]------------
[   44.413915] WARNING: CPU: 1 PID: 501 at mm/util.c:657 __kvmalloc_node_noprof+0xd7/0xe0
[   44.415765] Modules linked in:
[   44.416550] CPU: 1 UID: 0 PID: 501 Comm: unshare_test Not tainted 6.13.0 #1
[   44.417094] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[   44.418028] RIP: 0010:__kvmalloc_node_noprof+0xd7/0xe0
[   44.418918] Code: 00 48 83 c4 18 48 8d 65 e8 5b 41 5c 41 5d 5d c3 cc cc cc cc 48 b9 00 00 00 00 00 20 00 00 eb bd 80 e7 20 0f 85 52 ff ff ff 90 <0f> 0b 90 e9 49 ff ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90
[   44.419398] RSP: 0018:ffffb484408c7dd8 EFLAGS: 00010246
[   44.419599] RAX: 0000000000000000 RBX: 00000000004000c0 RCX: 0000000000000015
[   44.419887] RDX: 0000000000000015 RSI: ffffffffb6636d1d RDI: 0000000000452cc0
[   44.420186] RBP: ffffb484408c7df0 R08: 0000000000000040 R09: 0000000000000000
[   44.422087] R10: ffffb484408c7dd8 R11: 0000000000000000 R12: 0000000200001e00
[   44.423312] R13: 00000000ffffffff R14: ffff8e9340f3e9c0 R15: ffff8e9340f3e940
[   44.424660] FS:  00007ff469183740(0000) GS:ffff8e93bbd00000(0000) knlGS:0000000000000000
[   44.425465] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   44.425719] CR2: 00007ff46922c8f0 CR3: 0000000104488000 CR4: 00000000000006f0
[   44.426177] Call Trace:
[   44.426505]  <TASK>
[   44.428224]  ? show_regs+0x68/0x80
[   44.428794]  ? __warn+0x91/0x140
[   44.429206]  ? __kvmalloc_node_noprof+0xd7/0xe0
[   44.429659]  ? report_bug+0x175/0x1a0
[   44.430204]  ? handle_bug+0x5c/0xa0
[   44.430683]  ? exc_invalid_op+0x1c/0x80
[   44.431140]  ? asm_exc_invalid_op+0x1f/0x30
[   44.431337]  ? __kmalloc_node_noprof+0x39d/0x4c0
[   44.431602]  ? __kvmalloc_node_noprof+0xd7/0xe0
[   44.431976]  ? __kvmalloc_node_noprof+0x53/0xe0
[   44.432847]  alloc_fdtable+0x78/0x120
[   44.433838]  expand_files+0x68/0x2e0
[   44.435321]  ? do_prlimit+0x134/0x210
[   44.436952]  ? trace_preempt_off+0x1e/0x70
[   44.437609]  ? ksys_dup3+0x61/0x100
[   44.438260]  ksys_dup3+0x6c/0x100
[   44.439000]  __x64_sys_dup2+0x32/0xc0
[   44.439292]  x64_sys_call+0x2082/0x20b0
[   44.439590]  do_syscall_64+0xb2/0x1d0
[   44.439827]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[   44.440145] RIP: 0033:0x7ff46927bc17
[   44.440702] Code: 73 01 c3 48 8b 0d 09 62 0e 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 21 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 d9 61 0e 00 f7 d8 64 89 02 b8
[   44.442993] RSP: 002b:00007ffd911a7518 EFLAGS: 00000206 ORIG_RAX: 0000000000000021
[   44.444232] RAX: ffffffffffffffda RBX: 00007ffd911a7d78 RCX: 00007ff46927bc17
[   44.444489] RDX: 00007ffd911a7540 RSI: 0000000040000038 RDI: 0000000000000002
[   44.445150] RBP: 00007ffd911a7740 R08: 0000000000000064 R09: 0000000000000000
[   44.446108] R10: 00007ff469198170 R11: 0000000000000206 R12: 0000000000000000
[   44.447201] R13: 00007ffd911a7d88 R14: 00007ff4693b7000 R15: 00005621af716dd8
[   44.447596]  </TASK>
[   44.448272] ---[ end trace 0000000000000000 ]---
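The trace above (and the two arm64 traces below) all show the same path: `unshare_test` calls `dup2()`/`dup3()` with a very large `newfd`, which takes `expand_files()` → `alloc_fdtable()` → `__kvmalloc_node_noprof()`, and the fd-table allocation is big enough to trip the size warning at mm/util.c:657. A minimal userspace sketch of that syscall path (not the LTP test itself; it assumes `/dev/null` exists and that the soft fd limit can be lowered to 256):

```python
import errno
import os
import resource

# Lower the soft fd limit to a known value; lowering never needs privilege.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (256, hard))

fd = os.open("/dev/null", os.O_RDONLY)

# newfd at or above the soft limit: dup2() fails with EBADF before the
# kernel ever touches the fd table.
try:
    os.dup2(fd, 256)
    raise AssertionError("expected EBADF")
except OSError as e:
    assert e.errno == errno.EBADF

# newfd just below the limit succeeds; if the table is still small the
# kernel grows it via expand_files() -> alloc_fdtable(), the call chain
# in the warning. unshare_test instead raises the limits and passes a
# newfd around 2**30, so alloc_fdtable()'s kvmalloc() request is large
# enough to trigger the warning at mm/util.c:657.
assert os.dup2(fd, 255) == 255
os.close(255)
os.close(fd)
```

With the soft limit restored to its original value, the same calls stay on the fast path and never grow the table beyond the default size.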

test log snippet test parser 23

------------[ cut here ]------------
[  195.819820] WARNING: CPU: 3 PID: 1467 at mm/util.c:657 __kvmalloc_node_noprof+0x11c/0x124
[  195.820551] Modules linked in: brcmfmac brcmutil cfg80211 snd_soc_hdmi_codec dw_hdmi_cec dw_hdmi_i2s_audio hci_uart hantro_vpu btqca btbcm v4l2_h264 v4l2_vp9 bluetooth crct10dif_ce snd_soc_spdif_tx reset_gpio panfrost ecdh_generic snd_soc_audio_graph_card snd_soc_simple_card gpu_sched v4l2_jpeg rockchipdrm drm_shmem_helper snd_soc_simple_card_utils ecc dw_mipi_dsi rfkill analogix_dp dw_hdmi v4l2_mem2mem pwrseq_core cec videobuf2_v4l2 phy_rockchip_pcie videodev snd_soc_rockchip_i2s snd_soc_es8316 videobuf2_dma_contig drm_display_helper videobuf2_memops rtc_rk808 videobuf2_common drm_dma_helper drm_client_lib mc drm_kms_helper rockchip_thermal rockchip_saradc industrialio_triggered_buffer kfifo_buf coresight_cpu_debug coresight pcie_rockchip_host fuse drm backlight ip_tables x_tables
[  195.826727] CPU: 3 UID: 0 PID: 1467 Comm: unshare_test Not tainted 6.13.0 #1
[  195.827352] Hardware name: Radxa ROCK Pi 4B (DT)
[  195.827761] pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  195.828374] pc : __kvmalloc_node_noprof+0x11c/0x124
[  195.828815] lr : __kvmalloc_node_noprof+0x50/0x124
[  195.829241] sp : ffff800084debce0
[  195.829534] x29: ffff800084debcf0 x28: ffff0000013da480 x27: 0000000000000000
[  195.830168] x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
[  195.830800] x23: 0000000080000000 x22: ffff80008035d6d8 x21: 0000000200001e00
[  195.831432] x20: 0000000000400cc0 x19: 00000000ffffffff x18: 0000000000000000
[  195.832064] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[  195.832696] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
[  195.833328] x11: 0000000000000040 x10: 4ba7863e91867c00 x9 : 4ba7863e91867c00
[  195.833960] x8 : 0000000000000004 x7 : 0000000000000000 x6 : 0000000000000000
[  195.834591] x5 : 0000000000000000 x4 : ffff800084debcb0 x3 : 0000000000000000
[  195.835222] x2 : 0000000000000000 x1 : 0000000000000016 x0 : 0000000000000000
[  195.835854] Call trace:
[  195.836074]  __kvmalloc_node_noprof+0x11c/0x124 (P)
[  195.836514]  alloc_fdtable+0x74/0x118
[  195.836845]  expand_files+0x114/0x2e0
[  195.837173]  ksys_dup3+0x64/0xe8
[  195.837464]  __arm64_sys_dup3+0x20/0x30
[  195.837807]  invoke_syscall+0x40/0xf8
[  195.838136]  el0_svc_common+0xa8/0xd8
[  195.838464]  do_el0_svc+0x1c/0x28
[  195.838760]  el0_svc+0x38/0x68
[  195.839039]  el0t_64_sync_handler+0x78/0x108
[  195.839418]  el0t_64_sync+0x198/0x19c
[  195.839747] ---[ end trace 0000000000000000 ]---

test log snippet test parser 24

------------[ cut here ]------------
[  202.750354] WARNING: CPU: 3 PID: 1470 at mm/util.c:657 __kvmalloc_node_noprof+0x138/0x148
[  202.751087] Modules linked in: brcmfmac brcmutil cfg80211 snd_soc_hdmi_codec dw_hdmi_i2s_audio dw_hdmi_cec hantro_vpu hci_uart btqca v4l2_vp9 crct10dif_ce v4l2_h264 btbcm snd_soc_simple_card v4l2_jpeg bluetooth snd_soc_audio_graph_card reset_gpio panfrost v4l2_mem2mem snd_soc_spdif_tx snd_soc_simple_card_utils drm_shmem_helper videobuf2_dma_contig gpu_sched ecdh_generic videobuf2_memops rockchipdrm ecc analogix_dp dw_mipi_dsi videobuf2_v4l2 dw_hdmi cec phy_rockchip_pcie videodev drm_display_helper rfkill pwrseq_core rtc_rk808 snd_soc_rockchip_i2s drm_client_lib drm_dma_helper videobuf2_common mc snd_soc_es8316 drm_kms_helper rockchip_saradc industrialio_triggered_buffer pcie_rockchip_host coresight_cpu_debug kfifo_buf coresight rockchip_thermal fuse drm backlight ip_tables x_tables
[  202.757269] CPU: 3 UID: 0 PID: 1470 Comm: unshare_test Not tainted 6.13.0 #1
[  202.757894] Hardware name: Radxa ROCK Pi 4B (DT)
[  202.758303] pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  202.758915] pc : __kvmalloc_node_noprof+0x138/0x148
[  202.759352] lr : __kvmalloc_node_noprof+0x64/0x148
[  202.759777] sp : ffff800084f23cd0
[  202.760070] x29: ffff800084f23ce0 x28: ffff00000123ed80 x27: 0000000000000000
[  202.760707] x26: 0000000000000000 x25: 0000000000000000 x24: ffff00000523bc80
[  202.761341] x23: ffff00000523bd00 x22: ffff8000803391ac x21: 0000000200001e00
[  202.761975] x20: 00000000ffffffff x19: 0000000000400cc0 x18: 0000000000000000
[  202.762609] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[  202.763241] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
[  202.763872] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
[  202.764504] x8 : 0000000000000001 x7 : 0000000000000001 x6 : 0000000000000005
[  202.765135] x5 : 0000000000000000 x4 : ffff00000123ed80 x3 : 0000000000000000
[  202.765769] x2 : 0000000000000000 x1 : 000000007fffffff x0 : 0000000000000000
[  202.766402] Call trace:
[  202.766623]  __kvmalloc_node_noprof+0x138/0x148 (P)
[  202.767061]  alloc_fdtable+0x84/0x128
[  202.767393]  expand_files+0x74/0x2e4
[  202.767713]  ksys_dup3+0x60/0x120
[  202.768010]  __arm64_sys_dup3+0x20/0x30
[  202.768355]  invoke_syscall+0x48/0x10c
[  202.768694]  el0_svc_common.constprop.0+0x40/0xe0
[  202.769115]  do_el0_svc+0x1c/0x28
[  202.769414]  el0_svc+0x30/0xcc
[  202.769694]  el0t_64_sync_handler+0x10c/0x138
[  202.770080]  el0t_64_sync+0x198/0x19c
[  202.770410] ---[ end trace 0000000000000000 ]---

test log snippet test parser 25

[    3.765716] Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
[    3.772491] Modules linked in:
[    3.775543] CPU: 1 UID: 0 PID: 45 Comm: kworker/u10:2 Not tainted 6.13.0 #1
[    3.782562] Hardware name: Generic DRA74X (Flattened Device Tree)
[    3.788696] Workqueue: events_unbound deferred_probe_work_func
[    3.794555] PC is at edma_probe+0x1024/0x1034
[    3.798950] LR is at devm_kmalloc+0xf4/0x134
[    3.803253] pc : [<c0c1b28c>]    lr : [<c0dfae20>]    psr: 60000013
[    3.809539] sp : f0105978  ip : 00070007  fp : efcd56f0
[    3.814788] r10: c3d0d010  r9 : 00000003  r8 : c3d0d000
[    3.820037] r7 : 00000002  r6 : c23d28ec  r5 : c3919040  r4 : 00000001
[    3.826599] r3 : 00000001  r2 : 00000002  r1 : 00000002  r0 : c2e8ecc0
[    3.833160] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
[    3.840332] Control: 30c5387d  Table: 80203000  DAC: 55555555
[    3.846130] Register r0 information: slab kmalloc-rnd-08-128 start c2e8ec80 pointer offset 64 size 128
[    3.855499] Register r1 information: non-paged memory
[    3.860595] Register r2 information: non-paged memory
[    3.865661] Register r3 information: non-paged memory
[    3.870758] Register r4 information: non-paged memory
[    3.875823] Register r5 information: slab kmalloc-rnd-05-512 start c3919000 pointer offset 64 size 512
[    3.885192] Register r6 information: non-slab/vmalloc memory
[    3.890899] Register r7 information: non-paged memory
[    3.895996] Register r8 information: slab kmalloc-rnd-08-1k start c3d0d000 pointer offset 0 size 1024
[    3.905273] Register r9 information: non-paged memory
[    3.910339] Register r10 information: slab kmalloc-rnd-08-1k start c3d0d000 pointer offset 16 size 1024
[    3.919830] Register r11 information: non-slab/vmalloc memory
[    3.925598] Register r12 information: non-paged memory
[    3.930755] Process kworker/u10:2 (pid: 45, stack limit = 0x711a39d6)
[    3.937225] Stack: (0xf0105978 to 0xf0106000)
[    3.941619] 5960:                                                       00000001 c133d5d8
[    3.949829] 5980: c3d0d010 c23d28ec c23d28ec 3f45655f 00000001 c0b72fc0 c2e8e740 00000000
[    3.958068] 59a0: 00000001 00000000 c24d3d88 c133a010 c1d8cfb0 c1d290a2 efcd56f0 3f45655f
[    3.966278] 59c0: c24d3d88 fc8f9ed5 f0105ac0 c133ac68 ffffffff c0715800 00000000 00000000
[    3.974517] 59e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 b33001b7
[    3.982727] 5a00: b33001b7 c3d0d010 c23d28ec c23d28ec 3f45655f c24d3d88 fc8f9ed5 f0105ac0
[    3.990936] 5a20: 00000001 c0df89e4 c3d0d010 c23d28ec 00000000 c0df50fc c3d0d010 c23d28ec
[    3.999176] 5a40: c24d3d90 000000ac 00000001 c0df4234 c3d0d010 c3d0d010 00000001 c0df4e0c
[    4.007385] 5a60: c3d0d010 c23d28ec f0105ac0 00000000 c2b83700 c0df4cc4 c0df4b90 c0df4b90
[    4.015594] 5a80: 00000000 f0105a90 00000001 c0df12b4 c2b8376c c2e483b8 c2b83700 b33001b7
[    4.023834] 5aa0: c3d0d010 c3d0d054 00000000 00000001 00000000 c2b83700 00000000 c0df404c
[    4.032043] 5ac0: c3d0d010 00000001 00000100 b33001b7 c3d0d010 c24d3d7c c2b83700 c1a47a04
[    4.040283] 5ae0: c3d0d010 c0df1520 c3d0d010 c3d0e010 00000000 c3d0e010 c3d0d010 efcd5754
[    4.048492] 5b00: 00000001 c0decc2c b33001b7 00000000 00000000 00000000 c3d0d000 efcd56f0
[    4.056701] 5b20: c3d0d010 efcd5754 00000000 c133cc10 efcd56f0 c3d0e010 c221d330 00000000
[    4.064941] 5b40: 00000000 efcd5754 c1843a3c c133cea8 c2bbb410 c3d0e010 00000000 00000000
[    4.073150] 5b60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    4.081359] 5b80: b33001b7 efcd56f0 c221d330 c1843a3c efcd539c 00000000 c3d0e010 00000001
[    4.089569] 5ba0: c24c2770 c133d124 00000001 c1cfe56d c3ce4540 efcd5520 f0105c08 00000001
[    4.097808] 5bc0: 00000000 c1cfe56d c1e04370 c0a46398 00000000 00000000 c1d8cfb0 c3d0e010
[    4.106018] 5be0: c221d300 efcd539c 00000000 00000000 43300000 00000000 00100000 00000000
[    4.114257] 5c00: 00000200 00000000 00000002 c1b16420 ff9e82f0 ff9e82f0 00000001 00000001
[    4.122467] 5c20: 00000001 00000000 b33001b7 c3d0e010 c230f3c0 c230f3c0 3f45655f c24d3d88
[    4.130676] 5c40: fc8f9ed5 f0105ce8 00000001 c0df89e4 c3d0e010 c230f3c0 00000000 c0df50fc
[    4.138916] 5c60: c3d0e010 c230f3c0 c24d3d90 000000ac 00000001 c0df4234 c3d0e010 c3d0e010
[    4.147125] 5c80: 00000001 c0df4e0c c3d0e010 c230f3c0 f0105ce8 00000000 c2b83700 c0df4cc4
[    4.155334] 5ca0: c0df4b90 c0df4b90 00000000 f0105cb8 00000001 c0df12b4 c2b8376c c386e538
[    4.163574] 5cc0: c2b83700 b33001b7 c3d0e010 c3d0e054 00000000 00000001 00000000 c2b83700
[    4.171783] 5ce0: c221d300 c0df404c c3d0e010 00000001 00000100 b33001b7 c3d0e010 c24d3d7c
[    4.179992] 5d00: c2b83700 c1a47a04 c3d0e010 c0df1520 c3d0e010 c2bbb410 00000000 c2bbb410
[    4.188232] 5d20: c3d0e010 efcd5400 00000001 c0decc2c b33001b7 00000000 00000000 00000000
[    4.196441] 5d40: c3d0e000 efcd539c c3d0e010 efcd5400 c221d300 c133cc10 efcd539c c2bbb410
[    4.204650] 5d60: c221d330 c221d300 00000000 efcd5400 00000000 c133cea8 00000000 c2bbb410
[    4.212890] 5d80: 43300000 00000000 43300003 00000000 ff9e8254 00000200 00000000 00000000
[    4.221099] 5da0: 00000000 00000000 b33001b7 efcd539c c221d330 00000000 efc6946c 00000000
[    4.229309] 5dc0: c2bbb410 00000001 c291080d c133d124 00000001 f0105eb0 c2bbb410 efc6946c
[    4.237548] 5de0: c2e69f40 c2bbb400 c230f2e4 c221d330 f0105eb0 c0a45664 c2bbb410 c230f2e4
[    4.245758] 5e00: c230f2e4 3f45655f c24d3d88 fc8f9ed5 c291080d c0df89e4 c2bbb410 c230f2e4
[    4.253997] 5e20: 00000000 c0df50fc c2bbb410 c230f2e4 c24d3d90 00000001 c291080d c0df4234
[    4.262207] 5e40: c2bbb410 c2bbb410 c291080d c0df4e0c c2bbb410 c230f2e4 f0105eb0 00000000
[    4.270416] 5e60: c2b83700 c0df4cc4 c0df4b90 c0df4b90 00000000 f0105e80 c291080d c0df12b4
[    4.278656] 5e80: c2b8376c c386e3b8 c2b83700 b33001b7 c2bbb410 c2bbb454 00000000 00000001
[    4.286865] 5ea0: 00000000 c2b83700 c2910800 c0df404c c2bbb410 00000001 c291080d b33001b7
[    4.295074] 5ec0: c2bbb410 c24d3d7c c2b83700 c1a47a04 00000000 c0df1520 c23feab4 c2bbb410
[    4.303314] 5ee0: c2bf3854 c23fead0 00000000 00000004 c291080d c0df4924 c2e90780 c23fead8
[    4.311523] 5f00: c28c9000 c2910880 0000000a c04959a0 f0105f68 c1805d30 00000000 00000002
[    4.319732] 5f20: b33001b7 c3832e40 c2e90798 c2e907d0 c2e907ac c28c9000 c2e90780 c2e907ac
[    4.327972] 5f40: c3832e40 c28c9020 c246aad8 00000402 00000000 c0497df4 c29b8500 c0497b44
[    4.336181] 5f60: c2e90780 00000040 c3833524 c29b851c c29b8500 c0497b44 c2e90780 c3832e40
[    4.344390] 5f80: 00000000 c049ea74 00000000 00000000 c29b6440 c049e930 00000000 00000000
[    4.352630] 5fa0: 00000000 00000000 00000000 c040029c 00000000 00000000 00000000 00000000
[    4.360839] 5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    4.369049] 5fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
[    4.377288] Call trace: 
[    4.377288]  edma_probe from platform_probe+0xac/0xe0
[    4.384918]  platform_probe from really_probe+0x170/0x3cc
[    4.390350]  really_probe from __driver_probe_device+0x94/0xdc
[    4.396209]  __driver_probe_device from driver_probe_device+0x3c/0x1bc
[    4.402801]  driver_probe_device from __device_attach_driver+0x134/0x17c
[    4.409515]  __device_attach_driver from bus_for_each_drv+0x130/0x18c
[    4.416015]  bus_for_each_drv from __device_attach+0xcc/0x180
[    4.421813]  __device_attach from bus_probe_device+0xac/0x128
[    4.427581]  bus_probe_device from device_add+0x2b0/0x4d0
[    4.433013]  device_add from of_platform_device_create_pdata+0xc4/0xf8
[    4.439605]  of_platform_device_create_pdata from of_platform_bus_create+0x170/0x370
[    4.447387]  of_platform_bus_create from of_platform_populate+0x7c/0xc4
[    4.454040]  of_platform_populate from sysc_probe+0xa98/0xba8
[    4.459838]  sysc_probe from platform_probe+0xac/0xe0
[    4.464935]  platform_probe from really_probe+0x170/0x3cc
[    4.470367]  really_probe from __driver_probe_device+0x94/0xdc
[    4.476226]  __driver_probe_device from driver_probe_device+0x3c/0x1bc
[    4.482788]  driver_probe_device from __device_attach_driver+0x134/0x17c
[    4.489532]  __device_attach_driver from bus_for_each_drv+0x130/0x18c
[    4.496002]  bus_for_each_drv from __device_attach+0xcc/0x180
[    4.501800]  __device_attach from bus_probe_device+0xac/0x128
[    4.507598]  bus_probe_device from device_add+0x2b0/0x4d0
[    4.513031]  device_add from of_platform_device_create_pdata+0xc4/0xf8
[    4.519592]  of_platform_device_create_pdata from of_platform_bus_create+0x170/0x370
[    4.527374]  of_platform_bus_create from of_platform_populate+0x7c/0xc4
[    4.534057]  of_platform_populate from simple_pm_bus_probe+0xa8/0xf4
[    4.540435]  simple_pm_bus_probe from platform_probe+0xac/0xe0
[    4.546295]  platform_probe from really_probe+0x170/0x3cc
[    4.551727]  really_probe from __driver_probe_device+0x94/0xdc
[    4.557617]  __driver_probe_device from driver_probe_device+0x3c/0x1bc
[    4.564178]  driver_probe_device from __device_attach_driver+0x134/0x17c
[    4.570922]  __device_attach_driver from bus_for_each_drv+0x130/0x18c
[    4.577392]  bus_for_each_drv from __device_attach+0xcc/0x180
[    4.583190]  __device_attach from bus_probe_device+0xac/0x128
[    4.588958]  bus_probe_device from deferred_probe_work_func+0x94/0xd4
[    4.595458]  deferred_probe_work_func from process_scheduled_works+0x25c/0x504
[    4.602722]  process_scheduled_works from worker_thread+0x2b0/0x3ac
[    4.609039]  worker_thread from kthread+0x144/0x350
[    4.613952]  kthread from ret_from_fork+0x14/0x38
[    4.618682] Exception stack(0xf0105fb0 to 0xf0105ff8)
[    4.623748] 5fa0:                                     00000000 00000000 00000000 00000000
[    4.631988] 5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    4.640197] 5fe0: 00000000 00000000 00000000 00000000 00000013 00000000
[    4.646850] Code: e3a01004 e3570008 e5c01008 1a000001 (e7ffdefe) 
[    4.652984] ---[ end trace 0000000000000000 ]---

test log snippet test parser 26

[    3.806701] Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
[    3.813446] Modules linked in:
[    3.816528] CPU: 1 UID: 0 PID: 45 Comm: kworker/u10:2 Not tainted 6.13.0 #1
[    3.823516] Hardware name: Generic DRA74X (Flattened Device Tree)
[    3.829650] Workqueue: events_unbound deferred_probe_work_func
[    3.835540] PC is at edma_probe+0xb34/0xfe4
[    3.839752] LR is at devm_kmalloc+0xf4/0x12c
[    3.844024] pc : [<c0c0c3f4>]    lr : [<c0dec9cc>]    psr: 60000013
[    3.850341] sp : f0105930  ip : 00070007  fp : efcd56f0
[    3.855590] r10: c3d28810  r9 : c3c72640  r8 : c3d28800
[    3.860839] r7 : 00000002  r6 : 00000000  r5 : 00000003  r4 : 00000001
[    3.867401] r3 : 00000001  r2 : 00000002  r1 : 00000002  r0 : c3d0ee40
[    3.873962] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
[    3.881134] Control: 30c5387d  Table: 80203000  DAC: 55555555
[    3.886901] Register r0 information: slab kmalloc-rnd-10-128 start c3d0ee00 pointer offset 64 size 128
[    3.896301] Register r1 information: non-paged memory
[    3.901367] Register r2 information: non-paged memory
[    3.906463] Register r3 information: non-paged memory
[    3.911529] Register r4 information: non-paged memory
[    3.916625] Register r5 information: non-paged memory
[    3.921691] Register r6 information: NULL pointer
[    3.926422] Register r7 information: non-paged memory
[    3.931518] Register r8 information: slab kmalloc-1k start c3d28800 pointer offset 0 size 1024
[    3.940185] Register r9 information: slab kmalloc-rnd-02-512 start c3c72600 pointer offset 64 size 512
[    3.949554] Register r10 information: slab kmalloc-1k start c3d28800 pointer offset 16 size 1024
[    3.958404] Register r11 information: non-slab/vmalloc memory
[    3.964202] Register r12 information: non-paged memory
[    3.969360] Process kworker/u10:2 (pid: 45, stack limit = 0x7a36fbe8)
[    3.975830] Stack: (0xf0105930 to 0xf0106000)
[    3.980224] 5920:                                     00000001 c1338b1c c3d28810 c23d2814
[    3.988433] 5940: c23d2814 efcd56f0 00000001 c3d0e740 c29227bc 00000000 00000001 00000000
[    3.996673] 5960: c24d3d10 c1335558 c1d8cb3a c1d28c12 efcd56f0 3f45655f c24d3d10 fc8f9ed5
[    4.004882] 5980: f0105a90 c13361a0 ffffffff c070398c 00000000 00000000 00000000 00000000
[    4.013092] 59a0: 00000000 00000000 00000000 00000000 00000000 1c704106 1c704106 c3d28810
[    4.021331] 59c0: c23d2814 c23d2814 3f45655f c24d3d10 fc8f9ed5 f0105a90 00000001 c0dea5d0
[    4.029541] 59e0: c3d28810 c23d2814 00000000 c0de6d0c c3d28810 c23d2814 c24d3d18 000000ac
[    4.037750] 5a00: 00000001 c0de5d90 60000013 efcd56f0 c3d28810 c3d28810 c24d3d18 000000ac
[    4.045989] 5a20: c24d3d10 fc8f9ed5 00000001 c0de6a1c c3d28810 c23d2814 f0105a90 00000000
[    4.054199] 5a40: c28d4400 c0de68d4 c0de67a0 c0de67a0 00000000 f0105a60 00000001 c0de2e10
[    4.062408] 5a60: c28d446c c29cf6b8 c28d4400 1c704106 c3d28810 c3d28854 00000000 00000001
[    4.070648] 5a80: 00000000 c28d4400 00000000 c0de5ba8 c3d28810 00000001 00000100 1c704106
[    4.078857] 5aa0: c3d28810 c24d3d04 c28d4400 c1a47a68 c3d28810 c0de307c c3d28810 c3d29c10
[    4.087066] 5ac0: 00000000 c3d29c10 c3d28810 efcd5754 00000001 c0dde770 1c704106 00000000
[    4.095306] 5ae0: 00000000 00000000 c3d28800 efcd56f0 c3d28810 efcd5754 00000000 c1338154
[    4.103515] 5b00: efcd56f0 c3d29c10 c221d2b8 00000000 00000000 efcd5754 c1843b7c c13383ec
[    4.111724] 5b20: c2bc1410 c3d29c10 00000000 00000000 00000000 00000000 00000000 00000000
[    4.119964] 5b40: 00000000 00000000 00000000 00000000 1c704106 efcd56f0 c221d2b8 c1843b7c
[    4.128173] 5b60: efcd539c 00000000 c3d29c10 00000001 c24c26f0 c1338668 00000001 c1cfe0dd
[    4.136413] 5b80: c3d0a640 efcd5520 f0105bd8 00000001 00000000 c1cfe0dd c1e03ee2 c0a372b0
[    4.144622] 5ba0: 00000000 00000000 c1d8cb3a c3d29c10 c221d524 efcd539c 00000000 00000000
[    4.152832] 5bc0: 43300000 00000000 00100000 00000000 00000200 00000000 00000002 c1b16500
[    4.161041] 5be0: ff9e82f0 ff9e82f0 00000001 00000001 00000001 00000000 1c704106 c3d29c10
[    4.169281] 5c00: c230f300 c230f300 3f45655f c24d3d10 fc8f9ed5 f0105cd0 00000001 c0dea5d0
[    4.177490] 5c20: c3d29c10 c230f300 00000000 c0de6d0c c3d29c10 c230f300 c24d3d18 000000ac
[    4.185729] 5c40: 00000001 c0de5d90 60000013 efcd539c c3d29c10 c3d29c10 c24d3d18 000000ac
[    4.193939] 5c60: c24d3d10 fc8f9ed5 00000001 c0de6a1c c3d29c10 c230f300 f0105cd0 00000000
[    4.202148] 5c80: c28d4400 c0de68d4 c0de67a0 c0de67a0 00000000 f0105ca0 00000001 c0de2e10
[    4.210388] 5ca0: c28d446c c2ea49b8 c28d4400 1c704106 c3d29c10 c3d29c54 00000000 00000001
[    4.218597] 5cc0: 00000000 c28d4400 c221d524 c0de5ba8 c3d29c10 00000001 00000100 1c704106
[    4.226806] 5ce0: c3d29c10 c24d3d04 c28d4400 c1a47a68 c3d29c10 c0de307c c3d29c10 c2bc1410
[    4.235046] 5d00: 00000000 c2bc1410 c3d29c10 efcd5400 00000001 c0dde770 1c704106 00000000
[    4.243255] 5d20: 00000000 00000000 c3d29c00 efcd539c c3d29c10 efcd5400 c221d524 c1338154
[    4.251464] 5d40: efcd539c c2bc1410 c221d2b8 c221d524 00000000 efcd5400 00000000 c13383ec
[    4.259704] 5d60: 00000000 c2bc1410 43300000 00000000 43300003 00000000 ff9e8254 00000200
[    4.267913] 5d80: 00000000 00000000 00000000 00000000 1c704106 efcd539c c221d2b8 00000000
[    4.276123] 5da0: efc6946c 00000000 c2bc1410 00000001 c291c80d c1338668 00000001 f0105eb0
[    4.284362] 5dc0: c2bc1410 efc6946c c29c7c40 c2bc1400 c230f224 c221d2b8 f0105eb0 c0a36578
[    4.292572] 5de0: c2bc1410 c230f224 c230f224 3f45655f c24d3d10 fc8f9ed5 c291c80d c0dea5d0
[    4.300781] 5e00: c2bc1410 c230f224 00000000 c0de6d0c c2bc1410 c230f224 c24d3d18 00000001
[    4.309020] 5e20: c291c80d c0de5d90 60000013 efc6946c c2bc1410 c2bc1410 c24d3d18 00000001
[    4.317230] 5e40: c24d3d10 fc8f9ed5 c291c80d c0de6a1c c2bc1410 c230f224 f0105eb0 00000000
[    4.325439] 5e60: c28d4400 c0de68d4 c0de67a0 c0de67a0 00000000 f0105e80 c291c80d c0de2e10
[    4.333679] 5e80: c28d446c c2ea4238 c28d4400 1c704106 c2bc1410 c2bc1454 00000000 00000001
[    4.341888] 5ea0: 00000000 c28d4400 c291c800 c0de5ba8 c2bc1410 00000001 c291c80d 1c704106
[    4.350097] 5ec0: c2bc1410 c24d3d04 c28d4400 c1a47a68 00000000 c0de307c c23fe9dc c2bc1410
[    4.358337] 5ee0: c293a954 c23fe9f8 00000000 00000004 c291c80d c0de6534 c29bb180 c23fea00
[    4.366546] 5f00: c2976600 c291c880 0000000a c047f77c f0105f68 c1805e28 00000000 00000002
[    4.374755] 5f20: 1c704106 c380ef00 c29bb198 c29bb1d0 c29bb1ac c2976600 c29bb180 c29bb1ac
[    4.382995] 5f40: c380ef00 c2976620 c246a9d8 00000402 00000000 c0481c44 c29cc200 c0481994
[    4.391204] 5f60: c29bb180 00000040 c380f5e4 c29cc21c c29cc200 c0481994 c29bb180 c380ef00
[    4.399444] 5f80: 00000000 c048891c 00000000 00000000 c29c0c00 c04887d8 00000000 00000000
[    4.407653] 5fa0: 00000000 00000000 00000000 c040029c 00000000 00000000 00000000 00000000
[    4.415863] 5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    4.424102] 5fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
[    4.432312] Call trace: 
[    4.432312]  edma_probe from platform_probe+0xac/0xe0
[    4.439941]  platform_probe from really_probe+0x170/0x3cc
[    4.445373]  really_probe from __driver_probe_device+0x90/0x18c
[    4.451354]  __driver_probe_device from driver_probe_device+0x3c/0x1bc
[    4.457916]  driver_probe_device from __device_attach_driver+0x134/0x17c
[    4.464660]  __device_attach_driver from bus_for_each_drv+0x130/0x18c
[    4.471130]  bus_for_each_drv from __device_attach+0xcc/0x184
[    4.476928]  __device_attach from bus_probe_device+0xac/0x128
[    4.482696]  bus_probe_device from device_add+0x2b0/0x4d0
[    4.488128]  device_add from of_platform_device_create_pdata+0xc4/0xf8
[    4.494720]  of_platform_device_create_pdata from of_platform_bus_create+0x170/0x370
[    4.502502]  of_platform_bus_create from of_platform_populate+0x7c/0xc4
[    4.509155]  of_platform_populate from sysc_probe+0xa9c/0xbac
[    4.514953]  sysc_probe from platform_probe+0xac/0xe0
[    4.520019]  platform_probe from really_probe+0x170/0x3cc
[    4.525482]  really_probe from __driver_probe_device+0x90/0x18c
[    4.531433]  __driver_probe_device from driver_probe_device+0x3c/0x1bc
[    4.537994]  driver_probe_device from __device_attach_driver+0x134/0x17c
[    4.544738]  __device_attach_driver from bus_for_each_drv+0x130/0x18c
[    4.551208]  bus_for_each_drv from __device_attach+0xcc/0x184
[    4.557006]  __device_attach from bus_probe_device+0xac/0x128
[    4.562774]  bus_probe_device from device_add+0x2b0/0x4d0
[    4.568206]  device_add from of_platform_device_create_pdata+0xc4/0xf8
[    4.574768]  of_platform_device_create_pdata from of_platform_bus_create+0x170/0x370
[    4.582580]  of_platform_bus_create from of_platform_populate+0x7c/0xc4
[    4.589233]  of_platform_populate from simple_pm_bus_probe+0xa8/0xf4
[    4.595611]  simple_pm_bus_probe from platform_probe+0xac/0xe0
[    4.601501]  platform_probe from really_probe+0x170/0x3cc
[    4.606933]  really_probe from __driver_probe_device+0x90/0x18c
[    4.612884]  __driver_probe_device from driver_probe_device+0x3c/0x1bc
[    4.619445]  driver_probe_device from __device_attach_driver+0x134/0x17c
[    4.626190]  __device_attach_driver from bus_for_each_drv+0x130/0x18c
[    4.632690]  bus_for_each_drv from __device_attach+0xcc/0x184
[    4.638458]  __device_attach from bus_probe_device+0xac/0x128
[    4.644226]  bus_probe_device from deferred_probe_work_func+0x94/0xd4
[    4.650726]  deferred_probe_work_func from process_scheduled_works+0x25c/0x504
[    4.657989]  process_scheduled_works from worker_thread+0x2b0/0x3ac
[    4.664306]  worker_thread from kthread+0x144/0x350
[    4.669219]  kthread from ret_from_fork+0x14/0x38
[    4.673950] Exception stack(0xf0105fb0 to 0xf0105ff8)
[    4.679016] 5fa0:                                     00000000 00000000 00000000 00000000
[    4.687255] 5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    4.695465] 5fe0: 00000000 00000000 00000000 00000000 00000013 00000000
[    4.702117] Code: e3a03001 13540001 e5c03002 1a000006 (e7ffdefe) 
[    4.708251] ---[ end trace 0000000000000000 ]---
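In both edma_probe oopses the `Code:` line prints the trapping instruction in parentheses, here `e7ffdefe`, an intentionally undefined ARM encoding, which is why the kernel reports "Oops - undefined instruction". A small sketch for pulling that opcode out of a 32-bit ARM oops log line (the x86 format differs, marking the faulting bytes with `<...>` instead):

```python
import re

def faulting_opcode(code_line: str) -> str:
    """Return the opcode printed in parentheses on an ARM 'Code:' line.

    On 32-bit ARM the instruction that trapped is the one in
    parentheses, e.g. 'Code: e3a01004 ... (e7ffdefe)'.
    """
    m = re.search(r"\(([0-9a-f]+)\)", code_line)
    if not m:
        raise ValueError("no trapping opcode found")
    return m.group(1)

line = "[    4.702117] Code: e3a03001 13540001 e5c03002 1a000006 (e7ffdefe)"
print(faulting_opcode(line))  # e7ffdefe
```

Feeding the extracted opcode to a disassembler (or grepping the vmlinux objdump for the PC value on the same trace) identifies which check in edma_probe fired.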