| In the Linux kernel, the following vulnerability has been resolved:
arch_topology: Fix incorrect error check in topology_parse_cpu_capacity()
Fix incorrect use of PTR_ERR_OR_ZERO() in topology_parse_cpu_capacity()
which causes the code to proceed with NULL clock pointers. The current
logic uses !PTR_ERR_OR_ZERO(cpu_clk) which evaluates to true for both
valid pointers and NULL, leading to potential NULL pointer dereference
in clk_get_rate().
Per include/linux/err.h documentation, PTR_ERR_OR_ZERO(ptr) returns:
"The error code within @ptr if it is an error pointer; 0 otherwise."
This means PTR_ERR_OR_ZERO() returns 0 for both valid pointers AND NULL
pointers. Therefore !PTR_ERR_OR_ZERO(cpu_clk) evaluates to true (proceed)
when cpu_clk is either valid or NULL, causing clk_get_rate(NULL) to be
called when of_clk_get() returns NULL.
Replace with !IS_ERR_OR_NULL(cpu_clk) which only proceeds for valid
pointers, preventing potential NULL pointer dereference in clk_get_rate(). |
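A small userspace sketch makes the difference concrete; the macros below are simplified stand-ins mirroring the include/linux/err.h semantics quoted above, not the actual kernel code or the topology patch itself.

#include <stdio.h>

#define MAX_ERRNO 4095
/* simplified stand-in for the kernel's IS_ERR_VALUE() */
#define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

/* returns the error code for error pointers, 0 for valid or NULL pointers */
static long PTR_ERR_OR_ZERO(const void *ptr)
{
        return IS_ERR_VALUE(ptr) ? (long)(unsigned long)ptr : 0;
}

/* true for error pointers and NULL, false only for valid pointers */
static int IS_ERR_OR_NULL(const void *ptr)
{
        return !ptr || IS_ERR_VALUE(ptr);
}

int main(void)
{
        int dummy;
        void *valid = &dummy, *null = NULL, *err = (void *)(-22L); /* like ERR_PTR(-EINVAL) */

        /* old check: "proceed" (1) for valid AND NULL pointers */
        printf("!PTR_ERR_OR_ZERO: valid=%d null=%d err=%d\n",
               !PTR_ERR_OR_ZERO(valid), !PTR_ERR_OR_ZERO(null), !PTR_ERR_OR_ZERO(err));
        /* new check: "proceed" (1) only for valid pointers */
        printf("!IS_ERR_OR_NULL:  valid=%d null=%d err=%d\n",
               !IS_ERR_OR_NULL(valid), !IS_ERR_OR_NULL(null), !IS_ERR_OR_NULL(err));
        return 0;
}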
| In the Linux kernel, the following vulnerability has been resolved:
net: enetc: fix the deadlock of enetc_mdio_lock
After applying the workaround for err050089, the LS1028A platform
experiences RCU stalls on RT kernel. This issue is caused by the
recursive acquisition of the read lock enetc_mdio_lock. Listed below are
some of the call stacks identified under the enetc_poll path that may lead
to a deadlock:
enetc_poll
-> enetc_lock_mdio
-> enetc_clean_rx_ring OR napi_complete_done
-> napi_gro_receive
-> enetc_start_xmit
-> enetc_lock_mdio
-> enetc_map_tx_buffs
-> enetc_unlock_mdio
-> enetc_unlock_mdio
After enetc_poll acquires the read lock, a higher-priority writer attempts
to acquire the lock, causing preemption. The writer detects that a
read lock is already held and is scheduled out. However, readers under
enetc_poll cannot acquire the read lock again because a writer is already
waiting, leading to a thread hang.
The deadlock is avoided by adjusting enetc_lock_mdio to prevent recursive
lock acquisition. |
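A hypothetical userspace sketch of the recursion hazard, using a POSIX rwlock in place of enetc_mdio_lock (none of these names come from the driver): the nested read acquisition is only safe as long as no writer queues up between the outer and inner rdlock().

#include <pthread.h>

static pthread_rwlock_t mdio_lock = PTHREAD_RWLOCK_INITIALIZER;

static void xmit_path(void)
{
        pthread_rwlock_rdlock(&mdio_lock);   /* inner (recursive) read lock */
        /* ... access registers covered by the err050089 workaround ... */
        pthread_rwlock_unlock(&mdio_lock);
}

static void poll_path(void)
{
        pthread_rwlock_rdlock(&mdio_lock);   /* outer read lock */
        /* RX cleanup may forward the frame straight back into the TX path: */
        xmit_path();
        pthread_rwlock_unlock(&mdio_lock);
        /* With a writer-preferring lock, a writer arriving between the two
         * rdlock() calls blocks the inner rdlock() while itself waiting for
         * the outer read lock to be released: neither side can make progress. */
}

int main(void)
{
        poll_path();   /* a single-threaded run completes; the hazard needs a queued writer */
        return 0;
}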
| In the Linux kernel, the following vulnerability has been resolved:
slab: Avoid race on slab->obj_exts in alloc_slab_obj_exts
If two competing threads enter alloc_slab_obj_exts() and one of them
fails to allocate the object extension vector, it might override the
valid slab->obj_exts allocated by the other thread with
OBJEXTS_ALLOC_FAIL. This will cause the thread that lost this race and
expects a valid pointer to dereference a NULL pointer later on.
Update slab->obj_exts atomically using cmpxchg() to avoid
slab->obj_exts overrides by racing threads.
Thanks for Vlastimil and Suren's help with debugging. |
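A userspace C11-atomics sketch of the pattern described (illustrative names, not the slab code): a compare-and-exchange installs a value only while the slot is still empty, so a racing thread's failure marker can never clobber a vector that was already published.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ALLOC_FAIL_MARKER ((uintptr_t)0x1)   /* stand-in for OBJEXTS_ALLOC_FAIL */

static _Atomic uintptr_t obj_exts;           /* stand-in for slab->obj_exts */

/* Install @newval only if the slot is still empty; return what ends up published. */
static uintptr_t publish_obj_exts(uintptr_t newval)
{
        uintptr_t expected = 0;

        if (atomic_compare_exchange_strong(&obj_exts, &expected, newval))
                return newval;          /* we won the race */
        return expected;                /* someone else already published a value */
}

int main(void)
{
        void *vec = malloc(64);

        /* The winner installs a valid vector... */
        publish_obj_exts((uintptr_t)vec);
        /* ...and the loser's failure marker must not clobber it. */
        uintptr_t cur = publish_obj_exts(ALLOC_FAIL_MARKER);
        printf("published %#lx (valid vector preserved: %s)\n",
               (unsigned long)cur, cur == (uintptr_t)vec ? "yes" : "no");
        free(vec);
        return 0;
}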
| In the Linux kernel, the following vulnerability has been resolved:
net/mlx5e: RX, Fix generating skb from non-linear xdp_buff for striding RQ
XDP programs can change the layout of an xdp_buff through
bpf_xdp_adjust_tail() and bpf_xdp_adjust_head(). Therefore, the driver
cannot assume the size of the linear data area nor fragments. Fix the
bug in mlx5 by generating skb according to xdp_buff after XDP programs
run.
Currently, when handling multi-buf XDP, the mlx5 driver assumes the
layout of an xdp_buff to be unchanged. That is, the linear data area
continues to be empty and fragments remain the same. This may cause
the driver to generate an erroneous skb or trigger a kernel
warning. When an XDP program adds linear data through
bpf_xdp_adjust_head(), the linear data is ignored, as
mlx5e_build_linear_skb() builds an skb without linear data and then
pulls data from fragments to fill the linear data area. When an XDP
program has shrunk the non-linear data through bpf_xdp_adjust_tail(),
the delta passed to __pskb_pull_tail() may exceed the actual nonlinear
data size and trigger the BUG_ON in it.
To fix the issue, first record the original number of fragments. If the
number of fragments changes after the XDP program runs, rewind the end
fragment pointer by the difference and recalculate the truesize. Then,
build the skb with the linear data area matching the xdp_buff. Finally,
only pull data in if there is non-linear data and fill the linear part
up to 256 bytes. |
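A minimal XDP program sketch (an assumed example built with libbpf conventions, not part of the mlx5 fix) showing the two layout changes the driver must now account for: linear data added at the head and non-linear data shrunk at the tail.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int adjust_layout(struct xdp_md *ctx)
{
        /* Grow the linear data area by 32 bytes at the head (if headroom allows). */
        if (bpf_xdp_adjust_head(ctx, -32))
                return XDP_PASS;
        /* Shrink the packet, possibly its non-linear part, by 64 bytes. */
        bpf_xdp_adjust_tail(ctx, -64);
        return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";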
| In the Linux kernel, the following vulnerability has been resolved:
platform/mellanox: mlxbf-pmc: add sysfs_attr_init() to count_clock init
The kernel's lock-related debug logic (CONFIG_LOCK_STAT) reports the
following warning when the BlueField-3 SoC is booted:
BUG: key ffff00008a3402a8 has not been registered!
------------[ cut here ]------------
DEBUG_LOCKS_WARN_ON(1)
WARNING: CPU: 4 PID: 592 at kernel/locking/lockdep.c:4801 lockdep_init_map_type+0x1d4/0x2a0
<snip>
Call trace:
lockdep_init_map_type+0x1d4/0x2a0
__kernfs_create_file+0x84/0x140
sysfs_add_file_mode_ns+0xcc/0x1cc
internal_create_group+0x110/0x3d4
internal_create_groups.part.0+0x54/0xcc
sysfs_create_groups+0x24/0x40
device_add+0x6e8/0x93c
device_register+0x28/0x40
__hwmon_device_register+0x4b0/0x8a0
devm_hwmon_device_register_with_groups+0x7c/0xe0
mlxbf_pmc_probe+0x1e8/0x3e0 [mlxbf_pmc]
platform_probe+0x70/0x110
The mlxbf_pmc driver must call sysfs_attr_init() during the
initialization of the "count_clock" data structure to avoid
this warning. |
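A kernel-style sketch of the rule the fix applies; the function, handler names, and registration call here are illustrative, not the mlxbf_pmc code. The point is that a dynamically initialized attribute needs sysfs_attr_init() so lockdep has a registered key before the attribute reaches sysfs.

/* ea is assumed to be allocated at runtime (e.g. with devm_kzalloc()) */
static int create_count_clock_attr(struct device *dev,
                                   struct dev_ext_attribute *ea)
{
        sysfs_attr_init(&ea->attr.attr);     /* the missing initialization */
        ea->attr.attr.name = "count_clock";
        ea->attr.attr.mode = 0644;
        ea->attr.show = count_clock_show;    /* hypothetical handlers */
        ea->attr.store = count_clock_store;

        return device_create_file(dev, &ea->attr);
}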
| In the Linux kernel, the following vulnerability has been resolved:
arm64: mte: Do not warn if the page is already tagged in copy_highpage()
The arm64 copy_highpage() assumes that the destination page is newly
allocated and not MTE-tagged (PG_mte_tagged unset) and warns
accordingly. However, following commit 060913999d7a ("mm: migrate:
support poisoned recover from migrate folio"), folio_mc_copy() is called
before __folio_migrate_mapping(). If the latter fails (-EAGAIN), the
copy will be done again to the same destination page. Since
copy_highpage() already set the PG_mte_tagged flag, this second copy
will warn.
Replace the WARN_ON_ONCE(page already tagged) in the arm64
copy_highpage() with a comment. |
| In the Linux kernel, the following vulnerability has been resolved:
sysfs: check visibility before changing group attribute ownership
Since commit 0c17270f9b92 ("net: sysfs: Implement is_visible for
phys_(port_id, port_name, switch_id)"), __dev_change_net_namespace() can
hit WARN_ON() when trying to change owner of a file that isn't visible.
See the trace below:
WARNING: CPU: 6 PID: 2938 at net/core/dev.c:12410 __dev_change_net_namespace+0xb89/0xc30
CPU: 6 UID: 0 PID: 2938 Comm: incusd Not tainted 6.17.1-1-mainline #1 PREEMPT(full) 4b783b4a638669fb644857f484487d17cb45ed1f
Hardware name: Framework Laptop 13 (AMD Ryzen 7040Series)/FRANMDCP07, BIOS 03.07 02/19/2025
RIP: 0010:__dev_change_net_namespace+0xb89/0xc30
[...]
Call Trace:
<TASK>
? if6_seq_show+0x30/0x50
do_setlink.isra.0+0xc7/0x1270
? __nla_validate_parse+0x5c/0xcc0
? security_capable+0x94/0x1a0
rtnl_newlink+0x858/0xc20
? update_curr+0x8e/0x1c0
? update_entity_lag+0x71/0x80
? sched_balance_newidle+0x358/0x450
? psi_task_switch+0x113/0x2a0
? __pfx_rtnl_newlink+0x10/0x10
rtnetlink_rcv_msg+0x346/0x3e0
? sched_clock+0x10/0x30
? __pfx_rtnetlink_rcv_msg+0x10/0x10
netlink_rcv_skb+0x59/0x110
netlink_unicast+0x285/0x3c0
? __alloc_skb+0xdb/0x1a0
netlink_sendmsg+0x20d/0x430
____sys_sendmsg+0x39f/0x3d0
? import_iovec+0x2f/0x40
___sys_sendmsg+0x99/0xe0
__sys_sendmsg+0x8a/0xf0
do_syscall_64+0x81/0x970
? __sys_bind+0xe3/0x110
? syscall_exit_work+0x143/0x1b0
? do_syscall_64+0x244/0x970
? sock_alloc_file+0x63/0xc0
? syscall_exit_work+0x143/0x1b0
? do_syscall_64+0x244/0x970
? alloc_fd+0x12e/0x190
? put_unused_fd+0x2a/0x70
? do_sys_openat2+0xa2/0xe0
? syscall_exit_work+0x143/0x1b0
? do_syscall_64+0x244/0x970
? exc_page_fault+0x7e/0x1a0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
[...]
</TASK>
Fix this by checking is_visible() before trying to touch the attribute. |
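An illustrative attribute_group sketch (hypothetical names, not the net/core code) of the is_visible contract the fix now honors: the callback can hide an attribute per device, so any later per-attribute operation, such as an ownership change during a namespace move, must skip entries the callback reported as not visible.

static umode_t example_is_visible(struct kobject *kobj,
                                  struct attribute *attr, int n)
{
        struct device *dev = kobj_to_dev(kobj);

        if (!device_supports_feature(dev))   /* hypothetical predicate */
                return 0;                    /* attribute is not created */
        return attr->mode;
}

static const struct attribute_group example_group = {
        .attrs = example_attrs,              /* hypothetical attribute array */
        .is_visible = example_is_visible,
};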
| In the Linux kernel, the following vulnerability has been resolved:
fs: ext4: change GFP_KERNEL to GFP_NOFS to avoid deadlock
The parent function ext4_xattr_inode_lookup_create() already uses GFP_NOFS for memory allocation, so ext4_xattr_inode_cache_find() should use the same GFP flag. |
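A minimal sketch of the kind of change described; the identifiers are assumptions rather than the exact ext4 hunk. Allocations made while the filesystem already holds locks on this path must not recurse into filesystem reclaim, hence GFP_NOFS.

ea_data = kvmalloc(value_len, GFP_NOFS);   /* was GFP_KERNEL */
if (!ea_data)
        return NULL;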
| In the Linux kernel, the following vulnerability has been resolved:
perf/x86/intel: Fix KASAN global-out-of-bounds warning
When running "perf mem record" command on CWF, the below KASAN
global-out-of-bounds warning is seen.
==================================================================
BUG: KASAN: global-out-of-bounds in cmt_latency_data+0x176/0x1b0
Read of size 4 at addr ffffffffb721d000 by task dtlb/9850
Call Trace:
kasan_report+0xb8/0xf0
cmt_latency_data+0x176/0x1b0
setup_arch_pebs_sample_data+0xf49/0x2560
intel_pmu_drain_arch_pebs+0x577/0xb00
handle_pmi_common+0x6c4/0xc80
The issue is caused by the code below in __grt_latency_data(). The code
tries to access the x86_hybrid_pmu structure, which doesn't exist on a
non-hybrid platform like CWF.
WARN_ON_ONCE(hybrid_pmu(event->pmu)->pmu_type == hybrid_big)
So add an is_hybrid() check before calling this WARN_ON_ONCE() to fix the
global-out-of-bounds access issue. |
| In the Linux kernel, the following vulnerability has been resolved:
drm/sysfb: Do not dereference NULL pointer in plane reset
The plane state in __drm_gem_reset_shadow_plane() can be NULL. Do not
dereference that pointer; instead, forward NULL to the other plane-reset
helpers. This clears plane->state to NULL.
v2:
- fix typo in commit description (Javier) |
| In the Linux kernel, the following vulnerability has been resolved:
ceph: fix multifs mds auth caps issue
The mds auth caps check should also validate the fsname along with the
associated caps. Not doing so would result in applying the mds auth caps
of one fs onto the other fs in a multifs ceph cluster.
The bug causes multiple issues with user authentication; the following is
one such example.
Steps to Reproduce (on vstart cluster):
1. Create two file systems in a cluster, say 'fsname1' and 'fsname2'
2. Authorize read only permission to the user 'client.usr' on fs 'fsname1'
$ceph fs authorize fsname1 client.usr / r
3. Authorize read and write permission to the same user 'client.usr' on fs 'fsname2'
$ceph fs authorize fsname2 client.usr / rw
4. Update the keyring
$ceph auth get client.usr >> ./keyring
With the above permissions for the user 'client.usr', the following is the
expectation.
a. The 'client.usr' should be able to only read the contents
and not allowed to create or delete files on file system 'fsname1'.
b. The 'client.usr' should be able to read/write on file system 'fsname2'.
But, with this bug, the 'client.usr' is allowed to read/write on file
system 'fsname1'. See below.
5. Mount the file system 'fsname1' with the user 'client.usr'
$sudo bin/mount.ceph usr@.fsname1=/ /kmnt_fsname1_usr/
6. Try creating a file on file system 'fsname1' with user 'client.usr'. This
should fail but passes with this bug.
$touch /kmnt_fsname1_usr/file1
7. Mount the file system 'fsname1' with the user 'client.admin' and create a
file.
$sudo bin/mount.ceph admin@.fsname1=/ /kmnt_fsname1_admin
$echo "data" > /kmnt_fsname1_admin/admin_file1
8. Try removing an existing file on file system 'fsname1' with the user
'client.usr'. This shouldn't succeed but does with the bug.
$rm -f /kmnt_fsname1_usr/admin_file1
For more information, please take a look at the corresponding mds/fuse patch
and tests added by looking into the tracker mentioned below.
v2: Fix a possible null dereference in doutc
v3: Don't store fsname from mdsmap, validate against
ceph_mount_options's fsname and use it
v4: Code refactor, better warning message and
fix possible compiler warning
[ Slava.Dubeyko: "fsname check failed" -> "fsname mismatch" ] |
| In the Linux kernel, the following vulnerability has been resolved:
net: ipv6: fix field-spanning memcpy warning in AH output
Fix field-spanning memcpy warnings in ah6_output() and
ah6_output_done() where extension headers are copied to/from IPv6
address fields, triggering fortify-string warnings about writes beyond
the 16-byte address fields.
memcpy: detected field-spanning write (size 40) of single field "&top_iph->saddr" at net/ipv6/ah6.c:439 (size 16)
WARNING: CPU: 0 PID: 8838 at net/ipv6/ah6.c:439 ah6_output+0xe7e/0x14e0 net/ipv6/ah6.c:439
The warnings are false positives as the extension headers are
intentionally placed after the IPv6 header in memory. Fix by properly
copying addresses and extension headers separately, and introduce
helper functions to avoid code duplication. |
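A generic illustration of the fix strategy with a hypothetical structure (not the ah6.c code): one copy that spans past a fixed-size field is replaced by two copies, each bounded by the field and the trailing region respectively, which satisfies fortify-string without changing the bytes written.

#include <stddef.h>
#include <string.h>

/* hypothetical layout: a 16-byte address field, with extension headers
 * placed immediately after the structure in the same buffer */
struct hdr {
        unsigned char addr[16];
};

static void copy_addr_and_ext(struct hdr *dst, const unsigned char *src,
                              size_t extlen)
{
        /* bounded copy into the fixed-size field... */
        memcpy(dst->addr, src, sizeof(dst->addr));
        /* ...and a second bounded copy into the region following the struct
         * (the caller must have allocated room for it) */
        memcpy(dst + 1, src + sizeof(dst->addr), extlen);
}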
| In the Linux kernel, the following vulnerability has been resolved:
mtd: rawnand: cadence: fix DMA device NULL pointer dereference
The DMA device pointer `dma_dev` was being dereferenced before ensuring
that `cdns_ctrl->dmac` is properly initialized.
Move the assignment of `dma_dev` after successfully acquiring the DMA
channel to ensure the pointer is valid before use. |
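A kernel-style fragment of the ordering the fix enforces; the request call and error handling shown are assumptions, not necessarily the driver's exact code. The channel pointer is validated before its device field is read.

cdns_ctrl->dmac = dma_request_channel(mask, NULL, NULL);
if (!cdns_ctrl->dmac) {
        dev_err(cdns_ctrl->dev, "Unable to get a DMA channel\n");
        return -EBUSY;
}
dma_dev = cdns_ctrl->dmac->device;   /* safe: dmac is known to be valid here */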
| In the Linux kernel, the following vulnerability has been resolved:
fs: Fix uninitialized 'offp' in statmount_string()
In statmount_string(), most flags assign an output offset pointer (offp)
which is later updated with the string offset. However, the
STATMOUNT_MNT_UIDMAP and STATMOUNT_MNT_GIDMAP cases directly set the
struct fields instead of using offp. This leaves offp uninitialized,
leading to a possible uninitialized dereference when *offp is updated.
Fix it by assigning offp for UIDMAP and GIDMAP as well, keeping the code
path consistent. |
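A hypothetical, self-contained illustration of the bug class (not the fs/namespace.c code): every switch case must point the output pointer at a valid field before the shared store through it runs, otherwise the store dereferences an unset pointer.

#include <stdint.h>
#include <stdio.h>

struct out { uint32_t a_off, b_off; };

static void set_offset(struct out *o, int flag, uint32_t value)
{
        uint32_t *offp = NULL;

        switch (flag) {
        case 0:
                offp = &o->a_off;
                break;
        case 1:
                offp = &o->b_off;   /* analogue of the previously missing assignment */
                break;
        }
        if (offp)                   /* never store through an unset pointer */
                *offp = value;
}

int main(void)
{
        struct out o = { 0, 0 };

        set_offset(&o, 1, 42);
        printf("a_off=%u b_off=%u\n", o.a_off, o.b_off);
        return 0;
}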
| In the Linux kernel, the following vulnerability has been resolved:
veth: more robust handling of race to avoid txq getting stuck
Commit dc82a33297fc ("veth: apply qdisc backpressure on full ptr_ring to
reduce TX drops") introduced a race condition that can lead to a permanently
stalled TXQ. This was observed in production on ARM64 systems (Ampere Altra
Max).
The race occurs in veth_xmit(). The producer observes a full ptr_ring and
stops the queue (netif_tx_stop_queue()). The subsequent conditional logic,
intended to re-wake the queue if the consumer had just emptied it (if
(__ptr_ring_empty(...)) netif_tx_wake_queue()), can fail. This leads to a
"lost wakeup" where the TXQ remains stopped (QUEUE_STATE_DRV_XOFF) and
traffic halts.
This failure is caused by an incorrect use of the __ptr_ring_empty() API
from the producer side. As noted in kernel comments, this check is not
guaranteed to be correct if a consumer is operating on another CPU. The
empty test is based on ptr_ring->consumer_head, making it reliable only for
the consumer. Using this check from the producer side is fundamentally racy.
This patch fixes the race by adopting the more robust logic from an earlier
version V4 of the patchset, which always flushed the peer:
(1) In veth_xmit(), the racy conditional wake-up logic and its memory barrier
are removed. Instead, after stopping the queue, we unconditionally call
__veth_xdp_flush(rq). This guarantees that the NAPI consumer is scheduled,
making it solely responsible for re-waking the TXQ.
This handles the race where veth_poll() consumes all packets and completes
NAPI *before* veth_xmit() on the producer side has called netif_tx_stop_queue.
The __veth_xdp_flush(rq) will observe rx_notify_masked is false and schedule
NAPI.
(2) On the consumer side, the logic for waking the peer TXQ is moved out of
veth_xdp_rcv() and placed at the end of the veth_poll() function. This
placement is part of fixing the race, as the netif_tx_queue_stopped() check
must occur after rx_notify_masked is potentially set to false during NAPI
completion.
This handles the race where veth_poll() has consumed all packets but has not
yet finished (rx_notify_masked is still true). The producer veth_xmit() stops
the TXQ, and __veth_xdp_flush(rq) observes that rx_notify_masked is true and
therefore does not schedule NAPI. Then veth_poll() changes rx_notify_masked to
false and completes NAPI. Before exiting, veth_poll() observes that the TXQ is
stopped and wakes it up. |
| In the Linux kernel, the following vulnerability has been resolved:
nouveau/firmware: Add missing kfree() of nvkm_falcon_fw::boot
nvkm_falcon_fw::boot is allocated, but no one frees it. This causes a
kmemleak warning.
Make sure this data is deallocated. |
| In the Linux kernel, the following vulnerability has been resolved:
drm/i915: Avoid lock inversion when pinning to GGTT on CHV/BXT+VTD
On completion of i915_vma_pin_ww(), a synchronous variant of
dma_fence_work_commit() is called. When pinning a VMA to GGTT address
space on a Cherry View family processor, or on a Broxton generation SoC
with VTD enabled, i.e., when stop_machine() is then called from
intel_ggtt_bind_vma(), that can potentially lead to lock inversion among
reservation_ww and cpu_hotplug locks.
[86.861179] ======================================================
[86.861193] WARNING: possible circular locking dependency detected
[86.861209] 6.15.0-rc5-CI_DRM_16515-gca0305cadc2d+ #1 Tainted: G U
[86.861226] ------------------------------------------------------
[86.861238] i915_module_loa/1432 is trying to acquire lock:
[86.861252] ffffffff83489090 (cpu_hotplug_lock){++++}-{0:0}, at: stop_machine+0x1c/0x50
[86.861290]
but task is already holding lock:
[86.861303] ffffc90002e0b4c8 (reservation_ww_class_mutex){+.+.}-{3:3}, at: i915_vma_pin.constprop.0+0x39/0x1d0 [i915]
[86.862233]
which lock already depends on the new lock.
[86.862251]
the existing dependency chain (in reverse order) is:
[86.862265]
-> #5 (reservation_ww_class_mutex){+.+.}-{3:3}:
[86.862292] dma_resv_lockdep+0x19a/0x390
[86.862315] do_one_initcall+0x60/0x3f0
[86.862334] kernel_init_freeable+0x3cd/0x680
[86.862353] kernel_init+0x1b/0x200
[86.862369] ret_from_fork+0x47/0x70
[86.862383] ret_from_fork_asm+0x1a/0x30
[86.862399]
-> #4 (reservation_ww_class_acquire){+.+.}-{0:0}:
[86.862425] dma_resv_lockdep+0x178/0x390
[86.862440] do_one_initcall+0x60/0x3f0
[86.862454] kernel_init_freeable+0x3cd/0x680
[86.862470] kernel_init+0x1b/0x200
[86.862482] ret_from_fork+0x47/0x70
[86.862495] ret_from_fork_asm+0x1a/0x30
[86.862509]
-> #3 (&mm->mmap_lock){++++}-{3:3}:
[86.862531] down_read_killable+0x46/0x1e0
[86.862546] lock_mm_and_find_vma+0xa2/0x280
[86.862561] do_user_addr_fault+0x266/0x8e0
[86.862578] exc_page_fault+0x8a/0x2f0
[86.862593] asm_exc_page_fault+0x27/0x30
[86.862607] filldir64+0xeb/0x180
[86.862620] kernfs_fop_readdir+0x118/0x480
[86.862635] iterate_dir+0xcf/0x2b0
[86.862648] __x64_sys_getdents64+0x84/0x140
[86.862661] x64_sys_call+0x1058/0x2660
[86.862675] do_syscall_64+0x91/0xe90
[86.862689] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[86.862703]
-> #2 (&root->kernfs_rwsem){++++}-{3:3}:
[86.862725] down_write+0x3e/0xf0
[86.862738] kernfs_add_one+0x30/0x3c0
[86.862751] kernfs_create_dir_ns+0x53/0xb0
[86.862765] internal_create_group+0x134/0x4c0
[86.862779] sysfs_create_group+0x13/0x20
[86.862792] topology_add_dev+0x1d/0x30
[86.862806] cpuhp_invoke_callback+0x4b5/0x850
[86.862822] cpuhp_issue_call+0xbf/0x1f0
[86.862836] __cpuhp_setup_state_cpuslocked+0x111/0x320
[86.862852] __cpuhp_setup_state+0xb0/0x220
[86.862866] topology_sysfs_init+0x30/0x50
[86.862879] do_one_initcall+0x60/0x3f0
[86.862893] kernel_init_freeable+0x3cd/0x680
[86.862908] kernel_init+0x1b/0x200
[86.862921] ret_from_fork+0x47/0x70
[86.862934] ret_from_fork_asm+0x1a/0x30
[86.862947]
-> #1 (cpuhp_state_mutex){+.+.}-{3:3}:
[86.862969] __mutex_lock+0xaa/0xed0
[86.862982] mutex_lock_nested+0x1b/0x30
[86.862995] __cpuhp_setup_state_cpuslocked+0x67/0x320
[86.863012] __cpuhp_setup_state+0xb0/0x220
[86.863026] page_alloc_init_cpuhp+0x2d/0x60
[86.863041] mm_core_init+0x22/0x2d0
[86.863054] start_kernel+0x576/0xbd0
[86.863068] x86_64_start_reservations+0x18/0x30
[86.863084] x86_64_start_kernel+0xbf/0x110
[86.863098] common_startup_64+0x13e/0x141
[86.863114]
-> #0 (cpu_hotplug_lock){++++}-{0:0}:
[86.863135] __lock_acquire+0x16
---truncated--- |
| NVIDIA Isaac Lab contains a deserialization vulnerability. A successful exploit of this vulnerability might lead to code execution. |
| In the Linux kernel, the following vulnerability has been resolved:
staging: rtl8723bs: fix out-of-bounds read in OnBeacon ESR IE parsing
The Extended Supported Rates (ESR) IE handling in OnBeacon accessed
*(p + 1 + ielen) and *(p + 2 + ielen) without verifying that these
offsets lie within the received frame buffer. A malformed beacon with
an ESR IE positioned at the end of the buffer could cause an
out-of-bounds read, potentially triggering a kernel panic.
Add a boundary check to ensure that the ESR IE body and the subsequent
bytes are within the limits of the frame before attempting to access
them.
This prevents OOB reads caused by malformed beacon frames. |
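A kernel-style sketch of the boundary check described; pframe, pkt_len, and the drop label are assumptions, not the exact rtl8723bs hunk. It rejects the IE whenever the bytes at p + 1 + ielen and p + 2 + ielen would fall outside the received frame.

const u8 *frame_end = pframe + pkt_len;

/* p points at the ESR IE header, ielen is the IE body length */
if (!p || p + 2 + ielen >= frame_end)
        goto drop;   /* malformed beacon: the IE would overrun the frame buffer */

/* *(p + 1 + ielen) and *(p + 2 + ielen) are now known to be within the frame */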
| In the Linux kernel, the following vulnerability has been resolved:
rust_binder: fix race condition on death_list
Rust Binder contains the following unsafe operation:
// SAFETY: A `NodeDeath` is never inserted into the death list
// of any node other than its owner, so it is either in this
// death list or in no death list.
unsafe { node_inner.death_list.remove(self) };
This operation is unsafe because when touching the prev/next pointers of
a list element, we have to ensure that no other thread is also touching
them in parallel. If the node is present in the list that `remove` is
called on, then that is fine because we have exclusive access to that
list. If the node is not in any list, then it's also ok. But if it's
present in a different list that may be accessed in parallel, then that
may be a data race on the prev/next pointers.
And unfortunately that is exactly what is happening here. In
Node::release, we:
1. Take the lock.
2. Move all items to a local list on the stack.
3. Drop the lock.
4. Iterate the local list on the stack.
Combined with threads using the unsafe remove method on the original
list, this leads to memory corruption of the prev/next pointers. This
leads to crashes like this one:
Unable to handle kernel paging request at virtual address 000bb9841bcac70e
Mem abort info:
ESR = 0x0000000096000044
EC = 0x25: DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
FSC = 0x04: level 0 translation fault
Data abort info:
ISV = 0, ISS = 0x00000044, ISS2 = 0x00000000
CM = 0, WnR = 1, TnD = 0, TagAccess = 0
GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[000bb9841bcac70e] address between user and kernel address ranges
Internal error: Oops: 0000000096000044 [#1] PREEMPT SMP
google-cdd 538c004.gcdd: context saved(CPU:1)
item - log_kevents is disabled
Modules linked in: ... rust_binder
CPU: 1 UID: 0 PID: 2092 Comm: kworker/1:178 Tainted: G S W OE 6.12.52-android16-5-g98debd5df505-4k #1 f94a6367396c5488d635708e43ee0c888d230b0b
Tainted: [S]=CPU_OUT_OF_SPEC, [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
Hardware name: MUSTANG PVT 1.0 based on LGA (DT)
Workqueue: events _RNvXs6_NtCsdfZWD8DztAw_6kernel9workqueueINtNtNtB7_4sync3arc3ArcNtNtCs8QPsHWIn21X_16rust_binder_main7process7ProcessEINtB5_15WorkItemPointerKy0_E3runB13_ [rust_binder]
pstate: 23400005 (nzCv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : _RNvXs3_NtCs8QPsHWIn21X_16rust_binder_main7processNtB5_7ProcessNtNtCsdfZWD8DztAw_6kernel9workqueue8WorkItem3run+0x450/0x11f8 [rust_binder]
lr : _RNvXs3_NtCs8QPsHWIn21X_16rust_binder_main7processNtB5_7ProcessNtNtCsdfZWD8DztAw_6kernel9workqueue8WorkItem3run+0x464/0x11f8 [rust_binder]
sp : ffffffc09b433ac0
x29: ffffffc09b433d30 x28: ffffff8821690000 x27: ffffffd40cbaa448
x26: ffffff8821690000 x25: 00000000ffffffff x24: ffffff88d0376578
x23: 0000000000000001 x22: ffffffc09b433c78 x21: ffffff88e8f9bf40
x20: ffffff88e8f9bf40 x19: ffffff882692b000 x18: ffffffd40f10bf00
x17: 00000000c006287d x16: 00000000c006287d x15: 00000000000003b0
x14: 0000000000000100 x13: 000000201cb79ae0 x12: fffffffffffffff0
x11: 0000000000000000 x10: 0000000000000001 x9 : 0000000000000000
x8 : b80bb9841bcac706 x7 : 0000000000000001 x6 : fffffffebee63f30
x5 : 0000000000000000 x4 : 0000000000000001 x3 : 0000000000000000
x2 : 0000000000004c31 x1 : ffffff88216900c0 x0 : ffffff88e8f9bf00
Call trace:
_RNvXs3_NtCs8QPsHWIn21X_16rust_binder_main7processNtB5_7ProcessNtNtCsdfZWD8DztAw_6kernel9workqueue8WorkItem3run+0x450/0x11f8 [rust_binder bbc172b53665bbc815363b22e97e3f7e3fe971fc]
process_scheduled_works+0x1c4/0x45c
worker_thread+0x32c/0x3e8
kthread+0x11c/0x1c8
ret_from_fork+0x10/0x20
Code: 94218d85 b4000155 a94026a8 d10102a0 (f9000509)
---[ end trace 0000000000000000 ]---
Thus, modify Node::release to pop items directly off the original list. |