| In the Linux kernel, the following vulnerability has been resolved:
audit: improve robustness of the audit queue handling
If the audit daemon were ever to get stuck in a stopped state the
kernel's kauditd_thread() could get blocked attempting to send audit
records to the userspace audit daemon. With the kernel thread
blocked it is possible that the audit queue could grow unbounded as
certain audit record generating events must be exempt from the queue
limits, else the system would enter a deadlock state.
This patch resolves this problem by lowering the kernel thread's
socket sending timeout from MAX_SCHEDULE_TIMEOUT to HZ/10 and tweaks
the kauditd_send_queue() function to better manage the various audit
queues when connection problems occur between the kernel and the
audit daemon. With this patch, the backlog may temporarily grow
beyond the defined limits when the audit daemon is stopped and the
system is under heavy audit pressure, but kauditd_thread() will
continue to make progress and drain the queues as it would for other
connection problems. For example, with the audit daemon put into a
stopped state and the system configured to audit every syscall it
was still possible to shut down the system without a kernel panic,
deadlock, etc.; granted, the system was slow to shut down, but that is
to be expected given the extreme pressure of recording every syscall.
The timeout value of HZ/10 was chosen primarily through
experimentation and this developer's "gut feeling". There is likely
no one perfect value, but as this scenario is limited in scope (root
privileges would be needed to send SIGSTOP to the audit daemon), it
is likely not worth exposing this as a tunable at present. This can
always be done at a later date if it proves necessary. |
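The core idea, a bounded send timeout so the sender keeps draining its
queues when the peer stalls, can be illustrated with a plain userspace
socket. A minimal sketch, assuming a connected socket fd; the kernel patch
itself tweaks kauditd's netlink socket timeout, not this API:

#include <sys/socket.h>
#include <sys/time.h>

/* Sketch only: give a socket a ~100 ms send timeout (analogous to HZ/10)
 * so a stopped peer makes send() fail with EAGAIN/EWOULDBLOCK instead of
 * blocking the sending thread forever. */
static int set_short_send_timeout(int fd)
{
	struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 };

	return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
}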
| In the Linux kernel, the following vulnerability has been resolved:
s390/qeth: fix deadlock during failing recovery
Commit 0b9902c1fcc5 ("s390/qeth: fix deadlock during recovery") removed
taking discipline_mutex inside qeth_do_reset(), fixing potential
deadlocks. One error path was missed, though; it still takes
discipline_mutex and thus has the original deadlock potential.
Intermittent deadlocks were seen when a qeth channel path is configured
offline, causing a race between qeth_do_reset and ccwgroup_remove.
Call qeth_set_offline() directly in the qeth_do_reset() error case, and
then call a new variant of ccwgroup_set_offline() that does not take
discipline_mutex. |
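The shape of the fix is a common locking pattern: provide a variant of the
helper that skips the conflicting mutex, for callers already in a context
where taking it can deadlock. A generic sketch with hypothetical names, not
the actual s390 code:

#include <pthread.h>

struct ccw_dev {
	pthread_mutex_t discipline_mutex;
	int online;
};

/* Variant for contexts that must not take discipline_mutex. */
static void dev_set_offline_nolock(struct ccw_dev *d)
{
	d->online = 0;		/* stands in for the real offline work */
}

/* Normal path: take the mutex, then do the same work. */
static void dev_set_offline(struct ccw_dev *d)
{
	pthread_mutex_lock(&d->discipline_mutex);
	dev_set_offline_nolock(d);
	pthread_mutex_unlock(&d->discipline_mutex);
}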
| In the Linux kernel, the following vulnerability has been resolved:
scsi: core: sysfs: Fix hang when device state is set via sysfs
This fixes a regression added with:
commit f0f82e2476f6 ("scsi: core: Fix capacity set to zero after
offlinining device")
The problem is that after iSCSI recovery, iscsid will call into the kernel
to set the dev's state to running, and with that patch we now call
scsi_rescan_device() with the state_mutex held. If the SCSI error handler
thread is just starting to test the device in scsi_send_eh_cmnd() then it's
going to try to grab the state_mutex.
We are then stuck, because when scsi_rescan_device() tries to send its I/O,
scsi_queue_rq() calls scsi_host_queue_ready() -> scsi_host_in_recovery(),
which returns true (the host state is still in recovery), so the I/O is
simply requeued. scsi_send_eh_cmnd() is then never able to grab the
state_mutex to finish error handling.
To prevent the deadlock move the rescan-related code to after we drop the
state_mutex.
This also adds a check for whether we are already in the running state.
This prevents extra scans and helps the iscsid case: if the transport class
has already onlined the device during its recovery process, then userspace
does not need to do it again and possibly block that daemon. |
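The fix pattern, decide under the lock but act after dropping it, can be
sketched generically (hypothetical names, not the actual SCSI code):

#include <pthread.h>
#include <stdbool.h>

enum dev_state { DEV_OFFLINE, DEV_RUNNING };

struct sdev {
	pthread_mutex_t state_mutex;
	enum dev_state state;
};

static void rescan_device(struct sdev *dev) { /* issues I/O */ }

static void set_state_running(struct sdev *dev)
{
	bool rescan = false;

	pthread_mutex_lock(&dev->state_mutex);
	if (dev->state != DEV_RUNNING) {	/* skip if already running */
		dev->state = DEV_RUNNING;
		rescan = true;
	}
	pthread_mutex_unlock(&dev->state_mutex);

	if (rescan)	/* I/O now cannot deadlock against state_mutex */
		rescan_device(dev);
}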
| In the Linux kernel, the following vulnerability has been resolved:
mtd: require write permissions for locking and badblock ioctls
MEMLOCK, MEMUNLOCK and OTPLOCK modify protection bits. Thus require
write permission. Depending on the hardware MEMLOCK might even be
write-once, e.g. for SPI-NOR flashes with their WP# tied to GND. OTPLOCK
is always write-once.
MEMSETBADBLOCK modifies the bad block table and thus likewise requires
write permission. |
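In an ioctl handler this kind of policy is typically a short check on the
file mode before the command is dispatched; a sketch of the idea
(hypothetical helper, not necessarily the exact patch):

static int mtd_check_write_perm(struct file *file, unsigned int cmd)
{
	switch (cmd) {
	case MEMLOCK:
	case MEMUNLOCK:
	case OTPLOCK:
	case MEMSETBADBLOCK:
		/* These modify device state, so require a writable fd. */
		if (!(file->f_mode & FMODE_WRITE))
			return -EPERM;
		break;
	}
	return 0;
}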
| In the Linux kernel, the following vulnerability has been resolved:
drm/panthor: Lock XArray when getting entries for the VM
Similar to commit cac075706f29 ("drm/panthor: Fix race when converting
group handle to group object") we need to use the XArray's internal
locking when retrieving a vm pointer from there.
v2: Removed part of the patch that was trying to protect fetching
the heap pointer from XArray, as that operation is protected by
the @pool->lock. |
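A sketch of the pattern (structure and field names assumed): hold the
XArray's internal spinlock across the lookup and the refcount bump so the
entry cannot be freed in between:

	struct panthor_vm *vm;

	xa_lock(&pfile->vms_xa);
	vm = xa_load(&pfile->vms_xa, handle);
	if (vm)
		kref_get(&vm->refcount);	/* ref taken before the lock drops */
	xa_unlock(&pfile->vms_xa);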
| In the Linux kernel, the following vulnerability has been resolved:
scsi: ufs: core: Fix another deadlock during RTC update
If ufshcd_rtc_work calls ufshcd_rpm_put_sync() and the pm's usage_count
is 0, we will enter the runtime suspend callback. However, the runtime
suspend callback will wait to flush ufshcd_rtc_work, causing a deadlock.
Replace ufshcd_rpm_put_sync() with ufshcd_rpm_put() to avoid the
deadlock. |
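The distinction is between the synchronous and asynchronous runtime-PM put
helpers, which the ufshcd_rpm_put*() wrappers sit on top of; roughly:

	/* Synchronous: may invoke ->runtime_suspend() in this context. If that
	 * callback flushes ufshcd_rtc_work, the work item deadlocks on itself. */
	pm_runtime_put_sync(dev);

	/* Asynchronous: only queues an idle request; the suspend callback runs
	 * later from the pm workqueue, after the work item has returned. */
	pm_runtime_put(dev);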
| In the Linux kernel, the following vulnerability has been resolved:
vrf: revert "vrf: Remove unnecessary RCU-bh critical section"
This reverts commit 504fc6f4f7f681d2a03aa5f68aad549d90eab853.
dev_queue_xmit_nit is expected to be called with BH disabled.
__dev_queue_xmit has the following:
/* Disable soft irqs for various locks below. Also
* stops preemption for RCU.
*/
rcu_read_lock_bh();
VRF must follow this invariant. The referenced commit removed this
protection, which triggered a lockdep warning:
================================
WARNING: inconsistent lock state
6.11.0 #1 Tainted: G W
--------------------------------
inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
btserver/134819 [HC0[0]:SC0[0]:HE1:SE1] takes:
ffff8882da30c118 (rlock-AF_PACKET){+.?.}-{2:2}, at: tpacket_rcv+0x863/0x3b30
{IN-SOFTIRQ-W} state was registered at:
lock_acquire+0x19a/0x4f0
_raw_spin_lock+0x27/0x40
packet_rcv+0xa33/0x1320
__netif_receive_skb_core.constprop.0+0xcb0/0x3a90
__netif_receive_skb_list_core+0x2c9/0x890
netif_receive_skb_list_internal+0x610/0xcc0
[...]
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(rlock-AF_PACKET);
<Interrupt>
lock(rlock-AF_PACKET);
*** DEADLOCK ***
Call Trace:
<TASK>
dump_stack_lvl+0x73/0xa0
mark_lock+0x102e/0x16b0
__lock_acquire+0x9ae/0x6170
lock_acquire+0x19a/0x4f0
_raw_spin_lock+0x27/0x40
tpacket_rcv+0x863/0x3b30
dev_queue_xmit_nit+0x709/0xa40
vrf_finish_direct+0x26e/0x340 [vrf]
vrf_l3_out+0x5f4/0xe80 [vrf]
__ip_local_out+0x51e/0x7a0
[...] |
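After the revert, the VRF output path again satisfies the invariant along
these lines (a sketch, not the literal diff):

	rcu_read_lock_bh();	/* disables BH, as dev_queue_xmit_nit() expects */
	dev_queue_xmit_nit(skb, dev);
	/* ... hand the skb to the transmit path ... */
	rcu_read_unlock_bh();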
| In the Linux kernel, the following vulnerability has been resolved:
tracing/timerlat: Drop interface_lock in stop_kthread()
stop_kthread() is the offline callback for "trace/osnoise:online". Since
commit 5bfbcd1ee57b ("tracing/timerlat: Add interface_lock around clearing
of kthread in stop_kthread()"), the following ABBA deadlock scenario is
introduced:
T1                           | T2 [BP]           | T3 [AP]
osnoise_hotplug_workfn()     | work_for_cpu_fn() | cpuhp_thread_fun()
                             | _cpu_down()       | osnoise_cpu_die()
mutex_lock(&interface_lock)  |                   | stop_kthread()
                             | cpus_write_lock() | mutex_lock(&interface_lock)
cpus_read_lock()             | cpuhp_kick_ap()   |
As the interface_lock here is just for protecting the "kthread" field of
the osn_var, use xchg() instead to fix this issue. Also go back to using
for_each_online_cpu() in stop_per_cpu_kthreads(), as it can take
cpus_read_lock() again. |
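The lockless ownership transfer via xchg() looks roughly like this (a
sketch of the described approach):

	struct task_struct *kthread;

	/* Atomically claim the kthread pointer; no mutex is taken, so there
	 * is no ABBA ordering against cpus_write_lock(). */
	kthread = xchg(&osn_var->kthread, NULL);
	if (kthread)
		kthread_stop(kthread);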
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe/guc_submit: add missing locking in wedged_fini
Any non-wedged queue can have a zero refcount here and can be running
concurrently with an async queue destroy, therefore dereferencing the
queue ptr to check wedge status after the lookup can trigger UAF if the
queue is not wedged. Fix this by keeping the submission_state lock held
around the check to postpone the free and make the check safe, before
dropping again around the put() to avoid the deadlock.
(cherry picked from commit d28af0b6b9580b9f90c265a7da0315b0ad20bbfd) |
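The described locking dance, keep submission_state locked across the lookup
and the wedge check, and drop it only around the put, looks roughly like
this sketch based on the description:

	mutex_lock(&guc->submission_state.lock);
	xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) {
		if (exec_queue_wedged(q)) {
			/* The check above was safe: the lock postponed the free. */
			mutex_unlock(&guc->submission_state.lock);
			xe_exec_queue_put(q);	/* may free; done unlocked */
			mutex_lock(&guc->submission_state.lock);
		}
	}
	mutex_unlock(&guc->submission_state.lock);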
| In the Linux kernel, the following vulnerability has been resolved:
fuse: use exclusive lock when FUSE_I_CACHE_IO_MODE is set
This may be a typo: the comment says shared locks are not allowed when
this bit is set. If a shared lock is used, the wait in
`fuse_file_cached_io_open` may last forever. |
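The resulting lock choice looks roughly like this (sketch; the predicate
name is assumed):

	if (fuse_inode_has_cache_io_mode(inode))	/* FUSE_I_CACHE_IO_MODE set */
		inode_lock(inode);			/* exclusive, as the comment requires */
	else
		inode_lock_shared(inode);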
| In the Linux kernel, the following vulnerability has been resolved:
KVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock
Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock
on x86 due to a chain of locks and SRCU synchronizations. Translating the
below lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on
CPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there's a writer, due to the
fairness of r/w semaphores).
CPU0 CPU1 CPU2
1 lock(&kvm->slots_lock);
2 lock(&vcpu->mutex);
3 lock(&kvm->srcu);
4 lock(cpu_hotplug_lock);
5 lock(kvm_lock);
6 lock(&kvm->slots_lock);
7 lock(cpu_hotplug_lock);
8 sync(&kvm->srcu);
Note, there are likely more potential deadlocks in KVM x86, e.g. the same
pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with
__kvmclock_cpufreq_notifier():
cpuhp_cpufreq_online()
|
-> cpufreq_online()
   |
   -> cpufreq_gov_performance_limits()
      |
      -> __cpufreq_driver_target()
         |
         -> __target_index()
            |
            -> cpufreq_freq_transition_begin()
               |
               -> cpufreq_notify_transition()
                  |
                  -> ... __kvmclock_cpufreq_notifier()
But, actually triggering such deadlocks is beyond rare due to the
combination of dependencies and timings involved. E.g. the cpufreq
notifier is only used on older CPUs without a constant TSC, mucking with
the NX hugepage mitigation while VMs are running is very uncommon, and
doing so while also onlining/offlining a CPU (necessary to generate
contention on cpu_hotplug_lock) would be even more unusual.
The most robust solution to the general cpu_hotplug_lock issue is likely
to switch vm_list to be an RCU-protected list, e.g. so that x86's cpufreq
notifier doesn't need to take kvm_lock. For now, settle for fixing the most
blatant deadlock, as switching to an RCU-protected list is a much more
involved change, but add a comment in locking.rst to call out that care
needs to be taken when holding kvm_lock and walking vm_list.
======================================================
WARNING: possible circular locking dependency detected
6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S O
------------------------------------------------------
tee/35048 is trying to acquire lock:
ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]
but task is already holding lock:
ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (kvm_lock){+.+.}-{3:3}:
__mutex_lock+0x6a/0xb40
mutex_lock_nested+0x1f/0x30
kvm_dev_ioctl+0x4fb/0xe50 [kvm]
__se_sys_ioctl+0x7b/0xd0
__x64_sys_ioctl+0x21/0x30
x64_sys_call+0x15d0/0x2e60
do_syscall_64+0x83/0x160
entry_SYSCALL_64_after_hwframe+0x76/0x7e
-> #2 (cpu_hotplug_lock){++++}-{0:0}:
cpus_read_lock+0x2e/0xb0
static_key_slow_inc+0x16/0x30
kvm_lapic_set_base+0x6a/0x1c0 [kvm]
kvm_set_apic_base+0x8f/0xe0 [kvm]
kvm_set_msr_common+0x9ae/0xf80 [kvm]
vmx_set_msr+0xa54/0xbe0 [kvm_intel]
__kvm_set_msr+0xb6/0x1a0 [kvm]
kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]
kvm_vcpu_ioctl+0x485/0x5b0 [kvm]
__se_sys_ioctl+0x7b/0xd0
__x64_sys_ioctl+0x21/0x30
x64_sys_call+0x15d0/0x2e60
do_syscall_64+0x83/0x160
entry_SYSCALL_64_after_hwframe+0x76/0x7e
-> #1 (&kvm->srcu){.+.+}-{0:0}:
__synchronize_srcu+0x44/0x1a0
---truncated--- |
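A rough sketch of the dedicated-mutex approach described above (simplified;
helper names are placeholders):

/* Guards kvm_usage_count and hardware enabling. Deliberately not kvm_lock,
 * so enabling virtualization never nests inside the kvm_lock ->
 * cpu_hotplug_lock -> kvm->srcu chain from the splat. */
static DEFINE_MUTEX(kvm_usage_lock);
static int kvm_usage_count;

static int kvm_enable_virtualization(void)
{
	int r = 0;

	mutex_lock(&kvm_usage_lock);
	if (!kvm_usage_count++) {
		r = enable_virtualization_on_all_cpus();	/* placeholder */
		if (r)
			kvm_usage_count--;
	}
	mutex_unlock(&kvm_usage_lock);
	return r;
}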
| In the Linux kernel, the following vulnerability has been resolved:
erofs: handle overlapped pclusters out of crafted images properly
syzbot reported a task hang issue due to a deadlock case where it is
waiting for the folio lock of a cached folio that will be used for
cache I/Os.
After looking into the crafted fuzzed image, I found it's formed with
several overlapped big pclusters as below:
Ext: logical offset | length : physical offset | length
0: 0.. 16384 | 16384 : 151552.. 167936 | 16384
1: 16384.. 32768 | 16384 : 155648.. 172032 | 16384
2: 32768.. 49152 | 16384 : 537223168.. 537239552 | 16384
...
Here, extents 0 and 1 physically overlap, although that is entirely
_impossible_ for normal filesystem images generated by mkfs.
First, managed folios containing compressed data will be marked as
up-to-date and then unlocked immediately (unlike in-place folios) when
compressed I/Os are complete. If physical blocks are not submitted in
incremental order, there should be separate BIOs to avoid dependency
issues. However, the current code mis-arranges z_erofs_fill_bio_vec()
and BIO submission, which causes unexpected BIO waits.
Second, managed folios will be connected to their own pclusters for
efficient inter-queries. However, this is somewhat hard to implement
easily if overlapped big pclusters exist. Again, these only appear in
fuzzed images so let's simply fall back to temporary short-lived pages
for correctness.
Additionally, this justifies that referenced managed folios cannot be
truncated for now, and it reverts part of commit 2080ca1ed3e4 ("erofs: tidy
up `struct z_erofs_bvec`") for simplicity, although it shouldn't make any
difference. |
| In the Linux kernel, the following vulnerability has been resolved:
ARM: 9410/1: vfp: Use asm volatile in fmrx/fmxr macros
Floating point instructions in userspace can crash some arm kernels
built with clang/LLD 17.0.6:
BUG: unsupported FP instruction in kernel mode
FPEXC == 0xc0000780
Internal error: Oops - undefined instruction: 0 [#1] ARM
CPU: 0 PID: 196 Comm: vfp-reproducer Not tainted 6.10.0 #1
Hardware name: BCM2835
PC is at vfp_support_entry+0xc8/0x2cc
LR is at do_undefinstr+0xa8/0x250
pc : [<c0101d50>] lr : [<c010a80c>] psr: a0000013
sp : dc8d1f68 ip : 60000013 fp : bedea19c
r10: ec532b17 r9 : 00000010 r8 : 0044766c
r7 : c0000780 r6 : ec532b17 r5 : c1c13800 r4 : dc8d1fb0
r3 : c10072c4 r2 : c0101c88 r1 : ec532b17 r0 : 0044766c
Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
Control: 00c5387d Table: 0251c008 DAC: 00000051
Register r0 information: non-paged memory
Register r1 information: vmalloc memory
Register r2 information: non-slab/vmalloc memory
Register r3 information: non-slab/vmalloc memory
Register r4 information: 2-page vmalloc region
Register r5 information: slab kmalloc-cg-2k
Register r6 information: vmalloc memory
Register r7 information: non-slab/vmalloc memory
Register r8 information: non-paged memory
Register r9 information: zero-size pointer
Register r10 information: vmalloc memory
Register r11 information: non-paged memory
Register r12 information: non-paged memory
Process vfp-reproducer (pid: 196, stack limit = 0x61aaaf8b)
Stack: (0xdc8d1f68 to 0xdc8d2000)
1f60: 0000081f b6f69300 0000000f c10073f4 c10072c4 dc8d1fb0
1f80: ec532b17 0c532b17 0044766c b6f9ccd8 00000000 c010a80c 00447670 60000010
1fa0: ffffffff c1c13800 00c5387d c0100f10 b6f68af8 00448fc0 00000000 bedea188
1fc0: bedea314 00000001 00448ebc b6f9d000 00447608 b6f9ccd8 00000000 bedea19c
1fe0: bede9198 bedea188 b6e1061c 0044766c 60000010 ffffffff 00000000 00000000
Call trace:
[<c0101d50>] (vfp_support_entry) from [<c010a80c>] (do_undefinstr+0xa8/0x250)
[<c010a80c>] (do_undefinstr) from [<c0100f10>] (__und_usr+0x70/0x80)
Exception stack(0xdc8d1fb0 to 0xdc8d1ff8)
1fa0: b6f68af8 00448fc0 00000000 bedea188
1fc0: bedea314 00000001 00448ebc b6f9d000 00447608 b6f9ccd8 00000000 bedea19c
1fe0: bede9198 bedea188 b6e1061c 0044766c 60000010 ffffffff
Code: 0a000061 e3877202 e594003c e3a09010 (eef16a10)
---[ end trace 0000000000000000 ]---
Kernel panic - not syncing: Fatal exception in interrupt
---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
This is a minimal userspace reproducer on a Raspberry Pi Zero W:
#include <stdio.h>
#include <math.h>
int main(void)
{
	double v = 1.0;
	printf("%f\n", NAN + *(volatile double *)&v);
	return 0;
}
Another way to consistently trigger the oops is:
calvin@raspberry-pi-zero-w ~$ python -c "import json"
The bug reproduces only when the kernel is built with DYNAMIC_DEBUG=n,
because the pr_debug() calls act as barriers even when not activated.
This is the output from the same kernel source built with the same
compiler and DYNAMIC_DEBUG=y, where the userspace reproducer works as
expected:
VFP: bounce: trigger ec532b17 fpexc c0000780
VFP: emulate: INST=0xee377b06 SCR=0x00000000
VFP: bounce: trigger eef1fa10 fpexc c0000780
VFP: emulate: INST=0xeeb40b40 SCR=0x00000000
VFP: raising exceptions 30000000
calvin@raspberry-pi-zero-w ~$ ./vfp-reproducer
nan
Crudely grepping for vmsr/vmrs instructions in the otherwise nearly
identical text for vfp_support_entry() makes the problem obvious:
vmlinux.llvm.good [0xc0101cb8] <+48>: vmrs r7, fpexc
vmlinux.llvm.good [0xc0101cd8] <+80>: vmsr fpexc, r0
vmlinux.llvm.good [0xc0101d20
---truncated--- |
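The fix amounts to marking the coprocessor accessors volatile, so the
compiler can neither elide them nor reorder them relative to enabling the
VFP hardware. Roughly (simplified from arch/arm/vfp/vfpinstr.h; the exact
instruction encoding differs across kernel versions):

#define fmrx(_vfp_)						\
	({							\
		u32 __v;					\
		asm volatile ("mrc p10, 7, %0, " #_vfp_		\
			      ", cr0, 0 @ fmrx %0, " #_vfp_	\
			      : "=r" (__v) : : "cc");		\
		__v;						\
	})

#define fmxr(_vfp_, _var_)					\
	asm volatile ("mcr p10, 7, %0, " #_vfp_			\
		      ", cr0, 0 @ fmxr " #_vfp_ ", %0"		\
		      : : "r" (_var_) : "cc")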
| In the Linux kernel, the following vulnerability has been resolved:
firmware: qcom: uefisecapp: Fix deadlock in qcuefi_acquire()
If the __qcuefi pointer is not set, then in the original code, we would
hold onto the lock. That means that if we tried to set it later, then
it would cause a deadlock. Drop the lock on the error path. That's
what all the callers are expecting. |
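The shape of the fix, per the description (sketch):

static struct qcuefi_client *qcuefi_acquire(void)
{
	mutex_lock(&__qcuefi_lock);
	if (!__qcuefi) {
		mutex_unlock(&__qcuefi_lock);	/* was missing: drop on error */
		return NULL;
	}
	return __qcuefi;	/* still locked; qcuefi_release() drops it */
}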
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe/client: fix deadlock in show_meminfo()
There is a real deadlock as well as a sleeping-in-atomic bug here if
the bo put happens to be the last ref, since bo destruction wants to
grab the same spinlock and sleeping locks. Fix that by dropping the ref
using xe_bo_put_deferred(), and moving the final commit outside of the
lock. Dropping the lock around the put is tricky since the bo can go
out of scope and delete itself from the list, making it difficult to
navigate to the next list entry.
(cherry picked from commit 0083b8e6f11d7662283a267d4ce7c966812ffd8a) |
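The deferred-put pattern in rough outline (a sketch;
xe_bo_put_deferred()/xe_bo_put_commit() simplified, list and field names
assumed):

	LLIST_HEAD(deferred);
	struct xe_bo *bo;

	spin_lock(&client->bos_lock);
	list_for_each_entry(bo, &client->bos_list, client_link) {
		/* ... accumulate per-client memory stats ... */
		xe_bo_put_deferred(bo, &deferred);	/* never frees under the spinlock */
	}
	spin_unlock(&client->bos_lock);

	xe_bo_put_commit(&deferred);	/* final refs dropped outside the lock */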
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe/client: add missing bo locking in show_meminfo()
bo_meminfo() wants to inspect bo state like tt and the ttm resource,
however this state can change at any point leading to stuff like NPD and
UAF, if the bo lock is not held. Grab the bo lock when calling
bo_meminfo(), ensuring we drop any spinlocks first. In the case of
object_idr we now also need to hold a ref.
v2 (MattB)
- Also add xe_bo_assert_held()
(cherry picked from commit 4f63d712fa104c3ebefcb289d1e733e86d8698c7) |
| In the Linux kernel, the following vulnerability has been resolved:
drm/amdgpu/vcn: remove irq disabling in vcn 5 suspend
We do not directly enable/disable the VCN IRQ in VCN 5.0.0, and we do
not handle the IRQ state either. So the calls to disable the IRQ and to
set the state are removed. This effectively gets rid of the warning
"WARN_ON(!amdgpu_irq_enabled(adev, src, type))"
in amdgpu_irq_put(). |
| In the Linux kernel, the following vulnerability has been resolved:
powerpc/qspinlock: Fix deadlock in MCS queue
If an interrupt occurs in queued_spin_lock_slowpath() after we increment
qnodesp->count and before node->lock is initialized, another CPU might
see stale lock values in get_tail_qnode(). If the stale lock value happens
to match the lock on that CPU, then we write to the "next" pointer of
the wrong qnode. This causes a deadlock as the former CPU, once it becomes
the head of the MCS queue, will spin indefinitely until its "next" pointer
is set by its successor in the queue.
Running stress-ng on a 16 core (16EC/16VP) shared LPAR, results in
occasional lockups similar to the following:
$ stress-ng --all 128 --vm-bytes 80% --aggressive \
--maximize --oomable --verify --syslog \
--metrics --times --timeout 5m
watchdog: CPU 15 Hard LOCKUP
......
NIP [c0000000000b78f4] queued_spin_lock_slowpath+0x1184/0x1490
LR [c000000001037c5c] _raw_spin_lock+0x6c/0x90
Call Trace:
0xc000002cfffa3bf0 (unreliable)
_raw_spin_lock+0x6c/0x90
raw_spin_rq_lock_nested.part.135+0x4c/0xd0
sched_ttwu_pending+0x60/0x1f0
__flush_smp_call_function_queue+0x1dc/0x670
smp_ipi_demux_relaxed+0xa4/0x100
xive_muxed_ipi_action+0x20/0x40
__handle_irq_event_percpu+0x80/0x240
handle_irq_event_percpu+0x2c/0x80
handle_percpu_irq+0x84/0xd0
generic_handle_irq+0x54/0x80
__do_irq+0xac/0x210
__do_IRQ+0x74/0xd0
0x0
do_IRQ+0x8c/0x170
hardware_interrupt_common_virt+0x29c/0x2a0
--- interrupt: 500 at queued_spin_lock_slowpath+0x4b8/0x1490
......
NIP [c0000000000b6c28] queued_spin_lock_slowpath+0x4b8/0x1490
LR [c000000001037c5c] _raw_spin_lock+0x6c/0x90
--- interrupt: 500
0xc0000029c1a41d00 (unreliable)
_raw_spin_lock+0x6c/0x90
futex_wake+0x100/0x260
do_futex+0x21c/0x2a0
sys_futex+0x98/0x270
system_call_exception+0x14c/0x2f0
system_call_vectored_common+0x15c/0x2ec
The following code flow illustrates how the deadlock occurs.
For the sake of brevity, assume that both locks (A and B) are
contended and we call the queued_spin_lock_slowpath() function.
CPU0 CPU1
---- ----
spin_lock_irqsave(A) |
spin_unlock_irqrestore(A) |
spin_lock(B) |
| |
▼ |
id = qnodesp->count++; |
(Note that nodes[0].lock == A) |
| |
▼ |
Interrupt |
(happens before "nodes[0].lock = B") |
| |
▼ |
spin_lock_irqsave(A) |
| |
▼ |
id = qnodesp->count++ |
nodes[1].lock = A |
| |
▼ |
Tail of MCS queue |
| spin_lock_irqsave(A)
▼ |
Head of MCS queue ▼
| CPU0 is previous tail
▼ |
Spin indefinitely ▼
(until "nodes[1].next != NULL") prev = get_tail_qnode(A, CPU0)
|
▼
prev == &qnodes[CPU0].nodes[0]
(as qnodes
---truncated--- |
| In the Linux kernel, the following vulnerability has been resolved:
firmware: qcom: scm: Mark get_wq_ctx() as atomic call
Currently get_wq_ctx() is wrongly configured as a standard call. When two
SMC calls are asleep and one SMC call wakes up, it calls get_wq_ctx() to
resume the corresponding sleeping thread. But if get_wq_ctx() is
interrupted, goes to sleep, and another SMC call is waiting to be allocated
a waitq context, it leads to a deadlock.
To avoid this, get_wq_ctx() must be an atomic call and can't be a standard
SMC call. Hence mark get_wq_ctx() as a fast call. |
| In the Linux kernel, the following vulnerability has been resolved:
pktgen: use cpus_read_lock() in pg_net_init()
I have seen the WARN_ON(smp_processor_id() != cpu) firing
in pktgen_thread_worker() during tests.
We must use cpus_read_lock()/cpus_read_unlock()
around the for_each_online_cpu(cpu) loop.
While we are at it, use WARN_ON_ONCE() to avoid a possible syslog flood. |
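The fix is the standard hotplug-safe iteration (sketch):

	int cpu;

	cpus_read_lock();	/* keep the online cpu map stable for the loop */
	for_each_online_cpu(cpu) {
		/* ... create and bind the per-cpu pktgen thread ... */
	}
	cpus_read_unlock();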