| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: Fix integer overflow in rxgk_verify_response()
In rxgk_verify_response(), there's a potential integer overflow due to
rounding up token_len before checking it, thereby allowing the length check to
be bypassed.
Fix this by checking the unrounded value against len too (len is limited as
the response must fit in a single UDP packet). |
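The overflow class is easy to demonstrate in isolation. A minimal userspace sketch (the ROUND_UP macro below mirrors the kernel's round_up() for power-of-two alignments; the 1024-byte limit is illustrative, not the driver's actual bound):
```
#include <stdint.h>
#include <stdio.h>

/* Same arithmetic as the kernel's round_up() for power-of-two 'a'. */
#define ROUND_UP(x, a) (((x) + ((a) - 1)) & ~(uint32_t)((a) - 1))

int main(void)
{
	uint32_t token_len = 0xfffffffeu;          /* attacker-controlled */
	uint32_t rounded = ROUND_UP(token_len, 4); /* wraps to 0 */

	/* Buggy order: checking only the rounded value passes trivially. */
	if (rounded <= 1024)
		printf("bypassed: rounded=%u raw=%u\n",
		       (unsigned)rounded, (unsigned)token_len);

	/* Fixed order: check the unrounded value against the limit too. */
	if (token_len > 1024)
		printf("rejected: raw token_len=%u\n", (unsigned)token_len);
	return 0;
}
```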
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: fix reference count leak in rxrpc_server_keyring()
This patch fixes a reference count leak in rxrpc_server_keyring()
by checking if rx->securities is already set. |
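A hedged sketch of the fix shape (the error code and surrounding flow are assumptions, not the literal patch):
```
/* If a securities keyring is already installed, drop the reference
 * just obtained for the incoming key instead of leaking it. */
if (rx->securities) {
	key_put(key);
	return -EEXIST;
}
rx->securities = key;
```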
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: fix oversized RESPONSE authenticator length check
rxgk_verify_response() decodes auth_len from the packet and is supposed
to verify that it fits in the remaining bytes. The existing check is
inverted, so oversized RESPONSE authenticators are accepted and passed
to rxgk_decrypt_skb(), which can later reach skb_to_sgvec() with an
impossible length and hit BUG_ON(len).
Decoded from the original latest-net reproduction logs with
scripts/decode_stacktrace.sh:
RIP: __skb_to_sgvec()
[net/core/skbuff.c:5285 (discriminator 1)]
Call Trace:
skb_to_sgvec() [net/core/skbuff.c:5305]
rxgk_decrypt_skb() [net/rxrpc/rxgk_common.h:81]
rxgk_verify_response() [net/rxrpc/rxgk.c:1268]
rxrpc_process_connection()
[net/rxrpc/conn_event.c:266 net/rxrpc/conn_event.c:364
net/rxrpc/conn_event.c:386]
process_one_work() [kernel/workqueue.c:3281]
worker_thread()
[kernel/workqueue.c:3353 kernel/workqueue.c:3440]
kthread() [kernel/kthread.c:436]
ret_from_fork() [arch/x86/kernel/process.c:164]
Reject authenticator lengths that exceed the remaining packet payload. |
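A minimal sketch of the corrected bound (identifier names are illustrative):
```
/* auth_len was decoded from the wire; 'remain' is what is left of
 * the packet payload after the fixed-size header fields. */
if (auth_len > remain)
	return -EPROTO;	/* reject oversized RESPONSE authenticator */
```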
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: fix RESPONSE authenticator parser OOB read
rxgk_verify_authenticator() copies auth_len bytes into a temporary
buffer and then passes p + auth_len as the parser limit to
rxgk_do_verify_authenticator(). Since p is a __be32 *, that inflates the
parser end pointer by a factor of four and lets malformed RESPONSE
authenticators read past the kmalloc() buffer.
Decoded from the original latest-net reproduction logs with
scripts/decode_stacktrace.sh:
BUG: KASAN: slab-out-of-bounds in rxgk_verify_response()
Call Trace:
dump_stack_lvl() [lib/dump_stack.c:123]
print_report() [mm/kasan/report.c:379 mm/kasan/report.c:482]
kasan_report() [mm/kasan/report.c:597]
rxgk_verify_response()
[net/rxrpc/rxgk.c:1103 net/rxrpc/rxgk.c:1167
net/rxrpc/rxgk.c:1274]
rxrpc_process_connection()
[net/rxrpc/conn_event.c:266 net/rxrpc/conn_event.c:364
net/rxrpc/conn_event.c:386]
process_one_work() [kernel/workqueue.c:3281]
worker_thread()
[kernel/workqueue.c:3353 kernel/workqueue.c:3440]
kthread() [kernel/kthread.c:436]
ret_from_fork() [arch/x86/kernel/process.c:164]
Allocated by task 54:
rxgk_verify_response()
[include/linux/slab.h:954 net/rxrpc/rxgk.c:1155
net/rxrpc/rxgk.c:1274]
rxrpc_process_connection()
[net/rxrpc/conn_event.c:266 net/rxrpc/conn_event.c:364
net/rxrpc/conn_event.c:386]
Convert the byte count to __be32 units before constructing the parser
limit. |
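The scaling mistake is easy to reproduce in isolation. A minimal userspace sketch (plain C stand-ins for the kernel types):
```
#include <stdio.h>
#include <stdint.h>

typedef uint32_t be32;	/* stand-in for the kernel's __be32 */

int main(void)
{
	be32 buf[64];
	be32 *p = buf;
	size_t auth_len = 16;	/* byte count, as in the changelog */

	/* Buggy: arithmetic on a be32 * advances in 4-byte units, so
	 * this "limit" lands auth_len * 4 bytes past the start. */
	be32 *end_bad = p + auth_len;

	/* Fixed: convert the byte count to be32 units first. */
	be32 *end = p + auth_len / sizeof(be32);

	printf("buggy limit offset: %ld bytes\n",
	       (long)((char *)end_bad - (char *)p));	/* 64 */
	printf("fixed limit offset: %ld bytes\n",
	       (long)((char *)end - (char *)p));	/* 16 */
	return 0;
}
```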
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: Only put the call ref if one was acquired
rxrpc_input_packet_on_conn() can process a to-client packet after the
current client call on the channel has already been torn down. In that
case chan->call is NULL, rxrpc_try_get_call() returns NULL and there is
no reference to drop.
The client-side implicit-end error path does not account for that and
unconditionally calls rxrpc_put_call(). This turns a protocol error
path into a kernel crash instead of rejecting the packet.
Only drop the call reference if one was actually acquired. Keep the
existing protocol error handling unchanged. |
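A hedged sketch of the error-path shape (the trace-reason arguments and surrounding flow are simplified assumptions):
```
call = rxrpc_try_get_call(chan->call, rxrpc_call_get_input);
/* ... the packet turns out to be a protocol error ... */
if (call)	/* chan->call may already have been torn down */
	rxrpc_put_call(call, rxrpc_call_put_input);
```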
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: Fix key reference count leak from call->key
When creating a client call in rxrpc_alloc_client_call(), the code obtains
a reference to the key. This is never cleaned up and gets leaked when the
call is destroyed.
Fix this by freeing call->key in rxrpc_destroy_call().
Before the patch, /proc/keys shows the key's reference counter still elevated:
$ cat /proc/keys | grep afs@54321
1bffe9cd I--Q--i 8053480 4169w 3b010000 1000 1000 rxrpc afs@54321: ka
$
After the patch, the invalidated key is removed when the code exits:
$ cat /proc/keys | grep afs@54321
$ |
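A hedged sketch of the fix (its placement inside rxrpc_destroy_call() is taken from the changelog; the surrounding teardown is elided):
```
/* Drop the key reference taken in rxrpc_alloc_client_call(). */
key_put(call->key);	/* key_put(NULL) is a no-op */
call->key = NULL;
```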
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: Fix RxGK token loading to check bounds
rxrpc_preparse_xdr_yfs_rxgk() reads the raw key length and ticket length
from the XDR token as u32 values and passes each through round_up(x, 4)
before using the rounded value for validation and allocation. When the raw
length is >= 0xfffffffd, round_up() wraps to 0, so the bounds check and
kzalloc both use 0 while the subsequent memcpy still copies the original
~4 GiB value, producing a heap buffer overflow reachable from an
unprivileged add_key() call.
Fix this by:
(1) Rejecting raw key lengths above AFSTOKEN_GK_KEY_MAX and raw ticket
lengths above AFSTOKEN_GK_TOKEN_MAX before rounding, consistent with
the caps that the RxKAD path already enforces via AFSTOKEN_RK_TIX_MAX.
(2) Sizing the flexible-array allocation from the validated raw key
length via struct_size_t() instead of the rounded value.
(3) Caching the raw lengths so that the later field assignments and
memcpy calls do not re-read from the token, eliminating a class of
TOCTOU re-parse bugs.
The normal path (a valid token with lengths within bounds) is unaffected. |
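A hedged sketch of points (1) and (2) (the XDR cursor handling and the struct rxgk_key layout are simplified assumptions; AFSTOKEN_GK_KEY_MAX and struct_size_t() are named in the changelog):
```
u32 key_len = ntohl(*xdr++);		/* raw, attacker-controlled */

if (key_len > AFSTOKEN_GK_KEY_MAX)	/* (1) check BEFORE any round_up() */
	return -EKEYREJECTED;

/* (2) size the flexible array from the validated raw length, not the
 * rounded one, so the allocation and the memcpy agree. */
token->rxgk = kzalloc(struct_size_t(struct rxgk_key, key, key_len),
		      GFP_KERNEL);
if (!token->rxgk)
	return -ENOMEM;
```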
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: Fix key parsing memleak
In rxrpc_preparse_xdr_yfs_rxgk(), the memory attached to token->rxgk can be
leaked in a few error paths after it's allocated.
Fix this by freeing it in the "reject_token:" case. |
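A minimal sketch of the fix (the error code is an assumption; kfree(NULL) is a no-op, so freeing unconditionally on this path is safe):
```
reject_token:
	kfree(token->rxgk);	/* may or may not have been allocated */
	return -EPROTO;
```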
| In the Linux kernel, the following vulnerability has been resolved:
rxrpc: Fix call removal to use RCU safe deletion
Fix rxrpc call removal from the rxnet->calls list to use list_del_rcu()
rather than list_del_init() to prevent a reader of /proc/net/rxrpc/calls
from potentially getting into an infinite loop.
This, however, means that list_empty() no longer works on an entry that's
been deleted from the list, making it harder to detect prior deletion. Fix
this by:
Firstly, make rxrpc_destroy_all_calls() only dump the first ten calls that
are unexpectedly still on the list. Limiting the number of steps means
there's no need to call cond_resched() or to remove calls from the list
here, thereby eliminating the need for rxrpc_put_call() to check for that.
rxrpc_put_call() can then be fixed to unconditionally delete the call from
the list as it is the only place that the deletion occurs. |
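A minimal sketch of the removal-side change:
```
/* Before: leaves the removed entry pointing at itself, so an RCU
 * reader walking rxnet->calls can spin on it forever. */
list_del_init(&call->link);

/* After: the removed entry's forward pointer still points into the
 * list, so concurrent RCU readers walk past it safely. */
list_del_rcu(&call->link);
```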
| In the Linux kernel, the following vulnerability has been resolved:
net: lan966x: fix use-after-free and leak in lan966x_fdma_reload()
When lan966x_fdma_reload() fails to allocate new RX buffers, the restore
path restarts DMA using old descriptors whose pages were already freed
via lan966x_fdma_rx_free_pages(). Since page_pool_put_full_page() can
release pages back to the buddy allocator, the hardware may DMA into
memory now owned by other kernel subsystems.
Additionally, on the restore path, the newly created page pool (if
allocation partially succeeded) is overwritten without being destroyed,
leaking it.
Fix both issues by deferring the release of old pages until after the
new allocation succeeds. Save the old page array before the allocation
so old pages can be freed on the success path. On the failure path, the
old descriptors, pages and page pool are all still valid, making the
restore safe. Also ensure the restore path re-enables NAPI and wakes
the netdev, matching the success path. |
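A hedged sketch of the reordering (structure and label names are simplified assumptions):
```
struct lan966x_rx old_rx = lan966x->rx;	/* save old descriptors/pages */

err = lan966x_fdma_rx_alloc(&lan966x->rx);
if (err) {
	lan966x->rx = old_rx;	/* descriptors, pages and pool intact */
	goto restore;		/* re-enable NAPI, wake the netdev */
}

/* Success: only now is it safe to release the old pages. */
lan966x_fdma_rx_free_pages(&old_rx);
```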
| In the Linux kernel, the following vulnerability has been resolved:
net: lan966x: fix page pool leak in error paths
lan966x_fdma_rx_alloc() creates a page pool but does not destroy it if
the subsequent fdma_alloc_coherent() call fails, leaking the pool.
Similarly, lan966x_fdma_init() frees the coherent DMA memory when
lan966x_fdma_tx_alloc() fails but does not destroy the page pool that
was successfully created by lan966x_fdma_rx_alloc(), leaking it.
Add the missing page_pool_destroy() calls in both error paths. |
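A minimal sketch of one of the two missing cleanups, in the lan966x_fdma_rx_alloc() failure path (arguments elided):
```
err = fdma_alloc_coherent(lan966x->dev, fdma);
if (err) {
	page_pool_destroy(rx->page_pool);	/* previously leaked */
	return err;
}
```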
| In the Linux kernel, the following vulnerability has been resolved:
idpf: fix PREEMPT_RT raw/bh spinlock nesting for async VC handling
Switch from using the completion's raw spinlock to a local lock in the
idpf_vc_xn struct. The conversion is safe because complete()/complete_all()
are called outside the lock and there is no reason to share the completion
lock in the current logic. This avoids the invalid wait context reported by
the kernel due to the async handler taking a BH spinlock:
[ 805.726977] =============================
[ 805.726991] [ BUG: Invalid wait context ]
[ 805.727006] 7.0.0-rc2-net-devq-031026+ #28 Tainted: G S OE
[ 805.727026] -----------------------------
[ 805.727038] kworker/u261:0/572 is trying to lock:
[ 805.727051] ff190da6a8dbb6a0 (&vport_config->mac_filter_list_lock){+...}-{3:3}, at: idpf_mac_filter_async_handler+0xe9/0x260 [idpf]
[ 805.727099] other info that might help us debug this:
[ 805.727111] context-{5:5}
[ 805.727119] 3 locks held by kworker/u261:0/572:
[ 805.727132] #0: ff190da6db3e6148 ((wq_completion)idpf-0000:83:00.0-mbx){+.+.}-{0:0}, at: process_one_work+0x4b5/0x730
[ 805.727163] #1: ff3c6f0a6131fe50 ((work_completion)(&(&adapter->mbx_task)->work)){+.+.}-{0:0}, at: process_one_work+0x1e5/0x730
[ 805.727191] #2: ff190da765190020 (&x->wait#34){+.+.}-{2:2}, at: idpf_recv_mb_msg+0xc8/0x710 [idpf]
[ 805.727218] stack backtrace:
...
[ 805.727238] Workqueue: idpf-0000:83:00.0-mbx idpf_mbx_task [idpf]
[ 805.727247] Call Trace:
[ 805.727249] <TASK>
[ 805.727251] dump_stack_lvl+0x77/0xb0
[ 805.727259] __lock_acquire+0xb3b/0x2290
[ 805.727268] ? __irq_work_queue_local+0x59/0x130
[ 805.727275] lock_acquire+0xc6/0x2f0
[ 805.727277] ? idpf_mac_filter_async_handler+0xe9/0x260 [idpf]
[ 805.727284] ? _printk+0x5b/0x80
[ 805.727290] _raw_spin_lock_bh+0x38/0x50
[ 805.727298] ? idpf_mac_filter_async_handler+0xe9/0x260 [idpf]
[ 805.727303] idpf_mac_filter_async_handler+0xe9/0x260 [idpf]
[ 805.727310] idpf_recv_mb_msg+0x1c8/0x710 [idpf]
[ 805.727317] process_one_work+0x226/0x730
[ 805.727322] worker_thread+0x19e/0x340
[ 805.727325] ? __pfx_worker_thread+0x10/0x10
[ 805.727328] kthread+0xf4/0x130
[ 805.727333] ? __pfx_kthread+0x10/0x10
[ 805.727336] ret_from_fork+0x32c/0x410
[ 805.727345] ? __pfx_kthread+0x10/0x10
[ 805.727347] ret_from_fork_asm+0x1a/0x30
[ 805.727354] </TASK> |
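A hedged sketch of the fix shape (the field names are assumptions): give each idpf_vc_xn its own lock and signal the completion only after dropping it, so the async handler's BH spinlock never nests inside a raw completion lock.
```
spin_lock_bh(&xn->lock);
/* update the transaction state the waiter will read */
xn->state = IDPF_VC_XN_COMPLETED_SUCCESS;
spin_unlock_bh(&xn->lock);

complete_all(&xn->completed);	/* outside the lock */
```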
| In the Linux kernel, the following vulnerability has been resolved:
mmc: vub300: fix NULL-deref on disconnect
Make sure to deregister the controller before dropping the reference to
the driver data on disconnect to avoid NULL-pointer dereferences or
use-after-free. |
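A hedged sketch of the ordering in the disconnect callback (the driver's actual teardown is more involved; names follow its general shape):
```
/* Deregister the controller first so no host callbacks can run... */
mmc_remove_host(vub300->mmc);
/* ...and only then drop the last driver-data reference. */
kref_put(&vub300->kref, vub300_delete);
```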
| In the Linux kernel, the following vulnerability has been resolved:
net: stmmac: fix integer underflow in chain mode
The jumbo_frm() chain-mode implementation unconditionally computes
len = nopaged_len - bmax;
where nopaged_len = skb_headlen(skb) (linear bytes only) and bmax is
BUF_SIZE_8KiB or BUF_SIZE_2KiB. However, the caller stmmac_xmit()
decides to invoke jumbo_frm() based on skb->len (total length including
page fragments):
is_jumbo = stmmac_is_jumbo_frm(priv, skb->len, enh_desc);
When a packet has a small linear portion (nopaged_len <= bmax) but a
large total length due to page fragments (skb->len > bmax), the
subtraction wraps as an unsigned integer, producing a huge len value
(~0xFFFFxxxx). This causes the while (len != 0) loop to execute
hundreds of thousands of iterations, passing skb->data + bmax * i
pointers far beyond the skb buffer to dma_map_single(). On IOMMU-less
SoCs (the typical deployment for stmmac), this maps arbitrary kernel
memory to the DMA engine, constituting a kernel memory disclosure and
potential memory corruption from hardware.
Fix this by introducing a buf_len local variable clamped to
min(nopaged_len, bmax). Computing len = nopaged_len - buf_len is then
always safe: it is zero when the linear portion fits within a single
descriptor, causing the while (len != 0) loop to be skipped naturally,
and the fragment loop in stmmac_xmit() handles page fragments afterward. |
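The wrap itself is easy to reproduce in isolation. A minimal userspace sketch:
```
#include <stdio.h>

int main(void)
{
	unsigned int nopaged_len = 128;	/* small linear part */
	unsigned int bmax = 8192;	/* BUF_SIZE_8KiB */

	/* Buggy: wraps to ~0xFFFFE080 when nopaged_len < bmax. */
	unsigned int len = nopaged_len - bmax;
	printf("buggy len   = %u\n", len);

	/* Fixed: clamp first; the result is 0 when the linear part fits. */
	unsigned int buf_len = nopaged_len < bmax ? nopaged_len : bmax;
	printf("clamped len = %u\n", nopaged_len - buf_len);
	return 0;
}
```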
| In the Linux kernel, the following vulnerability has been resolved:
mm/damon/stat: deallocate damon_call() failure leaking damon_ctx
damon_stat_start() always allocates the module's damon_ctx object
damon_stat_context). Meanwhile, if damon_call() in the function fails,
the damon_ctx object is not deallocated. Hence, if damon_call() fails
and the user writes Y to “enabled” again, the previously allocated
damon_ctx object is leaked.
This cannot simply be fixed by deallocating the damon_ctx object when
damon_call() fails. That's because damon_call() failure doesn't guarantee
the kdamond main function, which accesses the damon_ctx object, is
completely finished. In other words, if damon_stat_start() deallocates
the damon_ctx object after damon_call() failure, the not-yet-terminated
kdamond could access the freed memory (use-after-free).
Fix the leak while avoiding the use-after-free by letting
damon_stat_start() return without deallocating the damon_ctx object after
damon_call() failure, and deallocating it when the function is invoked
again and the kdamond has completely terminated. If the kdamond has not
yet terminated, simply return -EAGAIN, as it will terminate soon.
The issue was discovered [1] by sashiko. |
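A hedged sketch of the deferred teardown in damon_stat_start() (the running-check helper is illustrative, not DAMON's actual API; damon_destroy_ctx() is real):
```
if (damon_stat_context) {
	/* A previous start failed in damon_call(); its kdamond may
	 * still be running, so freeing the ctx now would be a UAF. */
	if (kdamond_still_running(damon_stat_context))
		return -EAGAIN;	/* it will terminate soon; retry later */
	damon_destroy_ctx(damon_stat_context);	/* safe: kdamond is gone */
	damon_stat_context = NULL;
}
```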
| In the Linux kernel, the following vulnerability has been resolved:
mptcp: fix slab-use-after-free in __inet_lookup_established
The ehash table lookups are lockless and rely on
SLAB_TYPESAFE_BY_RCU to guarantee socket memory stability
during RCU read-side critical sections. Both tcp_prot and
tcpv6_prot have their slab caches created with this flag
via proto_register().
However, MPTCP's mptcp_subflow_init() copies tcpv6_prot into
tcpv6_prot_override during inet_init() (fs_initcall, level 5),
before inet6_init() (module_init/device_initcall, level 6) has
called proto_register(&tcpv6_prot). At that point,
tcpv6_prot.slab is still NULL, so tcpv6_prot_override.slab
remains NULL permanently.
This causes MPTCP v6 subflow child sockets to be allocated via
kmalloc (falling into kmalloc-4k) instead of the TCPv6 slab
cache. The kmalloc-4k cache lacks SLAB_TYPESAFE_BY_RCU, so
when these sockets are freed without SOCK_RCU_FREE (which is
cleared for child sockets by design), the memory can be
immediately reused. Concurrent ehash lookups under
rcu_read_lock can then access freed memory, triggering a
slab-use-after-free in __inet_lookup_established.
Fix this by splitting the IPv6-specific initialization out of
mptcp_subflow_init() into a new mptcp_subflow_v6_init(), called
from mptcp_proto_v6_init() before protocol registration. This
ensures tcpv6_prot_override.slab correctly inherits the
SLAB_TYPESAFE_BY_RCU slab cache. |
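A hedged sketch of the split (the function body is simplified; mptcp_subflow_v6_init() is named in the changelog):
```
/* Called from mptcp_proto_v6_init(), i.e. only after inet6_init() has
 * run proto_register(&tcpv6_prot), so .slab is non-NULL when copied. */
void __init mptcp_subflow_v6_init(void)
{
	tcpv6_prot_override = tcpv6_prot;	/* inherits a live slab */
	tcpv6_prot_override.release_cb = tcp_release_cb_override;
}
```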
| In the Linux kernel, the following vulnerability has been resolved:
mm/vma: fix memory leak in __mmap_region()
commit 605f6586ecf7 ("mm/vma: do not leak memory when .mmap_prepare
swaps the file") handled the success path by skipping get_file() via
file_doesnt_need_get, but missed the error path.
When /dev/zero is mmap'd with MAP_SHARED, mmap_zero_prepare() calls
shmem_zero_setup_desc() which allocates a new shmem file to back the
mapping. If __mmap_new_vma() subsequently fails, this replacement
file is never fput()'d - the original is released by
ksys_mmap_pgoff(), but nobody releases the new one.
Add fput() for the swapped file in the error path.
Reproducible with fault injection.
FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 1
CPU: 2 UID: 0 PID: 366 Comm: syz.7.14 Not tainted 7.0.0-rc6 #2 PREEMPT(full)
Hardware name: QEMU Ubuntu 24.04 PC v2 (i440FX + PIIX, arch_caps fix, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x164/0x1f0
should_fail_ex+0x525/0x650
should_failslab+0xdf/0x140
kmem_cache_alloc_noprof+0x78/0x630
vm_area_alloc+0x24/0x160
__mmap_region+0xf6b/0x2660
mmap_region+0x2eb/0x3a0
do_mmap+0xc79/0x1240
vm_mmap_pgoff+0x252/0x4c0
ksys_mmap_pgoff+0xf8/0x120
__x64_sys_mmap+0x12a/0x190
do_syscall_64+0xa9/0x580
entry_SYSCALL_64_after_hwframe+0x76/0x7e
</TASK>
kmemleak: 1 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
BUG: memory leak
unreferenced object 0xffff8881118aca80 (size 360):
comm "syz.7.14", pid 366, jiffies 4294913255
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff c0 28 4d ae ff ff ff ff .........(M.....
backtrace (crc db0f53bc):
kmem_cache_alloc_noprof+0x3ab/0x630
alloc_empty_file+0x5a/0x1e0
alloc_file_pseudo+0x135/0x220
__shmem_file_setup+0x274/0x420
shmem_zero_setup_desc+0x9c/0x170
mmap_zero_prepare+0x123/0x140
__mmap_region+0xdda/0x2660
mmap_region+0x2eb/0x3a0
do_mmap+0xc79/0x1240
vm_mmap_pgoff+0x252/0x4c0
ksys_mmap_pgoff+0xf8/0x120
__x64_sys_mmap+0x12a/0x190
do_syscall_64+0xa9/0x580
entry_SYSCALL_64_after_hwframe+0x76/0x7e
Found by syzkaller. |
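A hedged sketch of the error-path fix (the flag and map-state names follow the changelog's description of commit 605f6586ecf7 and are assumptions):
```
/* .mmap_prepare swapped in a replacement backing file (here, the
 * shmem file from mmap_zero_prepare()).  The caller only releases
 * the original file, so the error path must drop this one. */
if (map.file_doesnt_need_get && map.file)
	fput(map.file);
```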
| In the Linux kernel, the following vulnerability has been resolved:
pmdomain: imx8mp-blk-ctrl: Keep the NOC_HDCP clock enabled
Keep the NOC_HDCP clock always enabled to fix the potential hang
caused by the NoC ADB400 port power down handshake. |
| In the Linux kernel, the following vulnerability has been resolved:
drm/i915/gt: fix refcount underflow in intel_engine_park_heartbeat
A use-after-free / refcount underflow is possible when the heartbeat
worker and intel_engine_park_heartbeat() race to release the same
engine->heartbeat.systole request.
The heartbeat worker reads engine->heartbeat.systole and calls
i915_request_put() on it when the request is complete, but clears
the pointer in a separate, non-atomic step. Concurrently, a request
retirement on another CPU can drop the engine wakeref to zero, triggering
__engine_park() -> intel_engine_park_heartbeat(). If the heartbeat
timer is pending at that point, cancel_delayed_work() returns true and
intel_engine_park_heartbeat() reads the stale non-NULL systole pointer
and calls i915_request_put() on it again, causing a refcount underflow:
```
<4> [487.221889] Workqueue: i915-unordered engine_retire [i915]
<4> [487.222640] RIP: 0010:refcount_warn_saturate+0x68/0xb0
...
<4> [487.222707] Call Trace:
<4> [487.222711] <TASK>
<4> [487.222716] intel_engine_park_heartbeat.part.0+0x6f/0x80 [i915]
<4> [487.223115] intel_engine_park_heartbeat+0x25/0x40 [i915]
<4> [487.223566] __engine_park+0xb9/0x650 [i915]
<4> [487.223973] ____intel_wakeref_put_last+0x2e/0xb0 [i915]
<4> [487.224408] __intel_wakeref_put_last+0x72/0x90 [i915]
<4> [487.224797] intel_context_exit_engine+0x7c/0x80 [i915]
<4> [487.225238] intel_context_exit+0xf1/0x1b0 [i915]
<4> [487.225695] i915_request_retire.part.0+0x1b9/0x530 [i915]
<4> [487.226178] i915_request_retire+0x1c/0x40 [i915]
<4> [487.226625] engine_retire+0x122/0x180 [i915]
<4> [487.227037] process_one_work+0x239/0x760
<4> [487.227060] worker_thread+0x200/0x3f0
<4> [487.227068] ? __pfx_worker_thread+0x10/0x10
<4> [487.227075] kthread+0x10d/0x150
<4> [487.227083] ? __pfx_kthread+0x10/0x10
<4> [487.227092] ret_from_fork+0x3d4/0x480
<4> [487.227099] ? __pfx_kthread+0x10/0x10
<4> [487.227107] ret_from_fork_asm+0x1a/0x30
<4> [487.227141] </TASK>
```
Fix this by replacing the non-atomic pointer read + separate clear with
xchg() in both racing paths. xchg() atomically reads the old pointer and
writes NULL in a single indivisible operation, which guarantees that
only one of the two concurrent callers obtains the non-NULL pointer and
performs the put; the other gets NULL and skips it.
(cherry picked from commit 13238dc0ee4f9ab8dafa2cca7295736191ae2f42) |
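A minimal sketch of the pattern; the same shape applies in both the heartbeat worker and the park path:
```
struct i915_request *rq = xchg(&engine->heartbeat.systole, NULL);

/* Only one racing path obtains the old non-NULL pointer and drops
 * the reference; the other sees NULL here and must skip the put. */
if (rq)
	i915_request_put(rq);
```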
| In the Linux kernel, the following vulnerability has been resolved:
batman-adv: hold claim backbone gateways by reference
batadv_bla_add_claim() can replace claim->backbone_gw and drop the old
gateway's last reference while readers still follow the pointer.
The netlink claim dump path dereferences claim->backbone_gw->orig and
takes claim->backbone_gw->crc_lock without pinning the underlying
backbone gateway. batadv_bla_check_claim() still has the same naked
pointer access pattern.
Reuse batadv_bla_claim_get_backbone_gw() in both readers so they operate
on a stable gateway reference until the read-side work is complete.
This keeps the dump and claim-check paths aligned with the lifetime
rules introduced for the other BLA claim readers. |
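A hedged sketch of the reader-side pattern (locals simplified):
```
/* Pin the gateway before dereferencing it... */
backbone_gw = batadv_bla_claim_get_backbone_gw(claim);

spin_lock_bh(&backbone_gw->crc_lock);
crc = backbone_gw->crc;
spin_unlock_bh(&backbone_gw->crc_lock);

/* ...and release the reference once the read-side work is done. */
batadv_backbone_gw_put(backbone_gw);
```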