| In the Linux kernel, the following vulnerability has been resolved:
smb: client: fix double put of @cfile in smb2_set_path_size()
If smb2_compound_op() is called with a valid @cfile and returns
-EINVAL, we need to call cifs_get_writable_path() before retrying it,
as the reference to @cfile was already dropped by the previous call.
This fixes the following KASAN splat when running fstests generic/013
against Windows Server 2022:
CIFS: Attempting to mount //w22-fs0/scratch
run fstests generic/013 at 2024-09-02 19:48:59
==================================================================
BUG: KASAN: slab-use-after-free in detach_if_pending+0xab/0x200
Write of size 8 at addr ffff88811f1a3730 by task kworker/3:2/176
CPU: 3 UID: 0 PID: 176 Comm: kworker/3:2 Not tainted 6.11.0-rc6 #2
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-2.fc40
04/01/2014
Workqueue: cifsoplockd cifs_oplock_break [cifs]
Call Trace:
<TASK>
dump_stack_lvl+0x5d/0x80
? detach_if_pending+0xab/0x200
print_report+0x156/0x4d9
? detach_if_pending+0xab/0x200
? __virt_addr_valid+0x145/0x300
? __phys_addr+0x46/0x90
? detach_if_pending+0xab/0x200
kasan_report+0xda/0x110
? detach_if_pending+0xab/0x200
detach_if_pending+0xab/0x200
timer_delete+0x96/0xe0
? __pfx_timer_delete+0x10/0x10
? rcu_is_watching+0x20/0x50
try_to_grab_pending+0x46/0x3b0
__cancel_work+0x89/0x1b0
? __pfx___cancel_work+0x10/0x10
? kasan_save_track+0x14/0x30
cifs_close_deferred_file+0x110/0x2c0 [cifs]
? __pfx_cifs_close_deferred_file+0x10/0x10 [cifs]
? __pfx_down_read+0x10/0x10
cifs_oplock_break+0x4c1/0xa50 [cifs]
? __pfx_cifs_oplock_break+0x10/0x10 [cifs]
? lock_is_held_type+0x85/0xf0
? mark_held_locks+0x1a/0x90
process_one_work+0x4c6/0x9f0
? find_held_lock+0x8a/0xa0
? __pfx_process_one_work+0x10/0x10
? lock_acquired+0x220/0x550
? __list_add_valid_or_report+0x37/0x100
worker_thread+0x2e4/0x570
? __kthread_parkme+0xd1/0xf0
? __pfx_worker_thread+0x10/0x10
kthread+0x17f/0x1c0
? kthread+0xda/0x1c0
? __pfx_kthread+0x10/0x10
ret_from_fork+0x31/0x60
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
Allocated by task 1118:
kasan_save_stack+0x30/0x50
kasan_save_track+0x14/0x30
__kasan_kmalloc+0xaa/0xb0
cifs_new_fileinfo+0xc8/0x9d0 [cifs]
cifs_atomic_open+0x467/0x770 [cifs]
lookup_open.isra.0+0x665/0x8b0
path_openat+0x4c3/0x1380
do_filp_open+0x167/0x270
do_sys_openat2+0x129/0x160
__x64_sys_creat+0xad/0xe0
do_syscall_64+0xbb/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Freed by task 83:
kasan_save_stack+0x30/0x50
kasan_save_track+0x14/0x30
kasan_save_free_info+0x3b/0x70
poison_slab_object+0xe9/0x160
__kasan_slab_free+0x32/0x50
kfree+0xf2/0x300
process_one_work+0x4c6/0x9f0
worker_thread+0x2e4/0x570
kthread+0x17f/0x1c0
ret_from_fork+0x31/0x60
ret_from_fork_asm+0x1a/0x30
Last potentially related work creation:
kasan_save_stack+0x30/0x50
__kasan_record_aux_stack+0xad/0xc0
insert_work+0x29/0xe0
__queue_work+0x5ea/0x760
queue_work_on+0x6d/0x90
_cifsFileInfo_put+0x3f6/0x770 [cifs]
smb2_compound_op+0x911/0x3940 [cifs]
smb2_set_path_size+0x228/0x270 [cifs]
cifs_set_file_size+0x197/0x460 [cifs]
cifs_setattr+0xd9c/0x14b0 [cifs]
notify_change+0x4e3/0x740
do_truncate+0xfa/0x180
vfs_truncate+0x195/0x200
__x64_sys_truncate+0x109/0x150
do_syscall_64+0xbb/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f |
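The fix pattern here is generic: when a callee consumes the caller's reference on failure, the caller must take a fresh reference before retrying. A minimal user-space C sketch of that pattern (the refcounted struct and helper names are hypothetical, not the cifs code):

    #include <stdlib.h>

    struct file_ref {
        int refcount;
    };

    static struct file_ref *get_ref(struct file_ref *f)
    {
        f->refcount++;
        return f;
    }

    static void put_ref(struct file_ref *f)
    {
        if (--f->refcount == 0)
            free(f);
    }

    /* Like the description of smb2_compound_op(): the callee drops the
     * caller's reference even on the error path. */
    static int compound_op(struct file_ref *f, int force_fail)
    {
        int ret = force_fail ? -1 : 0;

        put_ref(f);
        return ret;
    }

    int main(void)
    {
        struct file_ref *f = calloc(1, sizeof(*f));

        if (!f)
            return 1;
        f->refcount = 1;
        if (compound_op(get_ref(f), 1) == -1) {
            /* Retrying with the old reference would be a double put.
             * The fix: take a fresh reference before the retry. */
            compound_op(get_ref(f), 0);
        }
        put_ref(f);             /* drop the last reference */
        return 0;
    }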
| In the Linux kernel, the following vulnerability has been resolved:
fscache: delete fscache_cookie_lru_timer when fscache exits to avoid UAF
The fscache_cookie_lru_timer is initialized when the fscache module
is inserted, but is not deleted when the fscache module is removed.
If timer_reduce() is called before removing the fscache module,
the fscache_cookie_lru_timer will be added to the timer list of
the current cpu. Afterwards, a use-after-free will be triggered
in the softIRQ after removing the fscache module, as follows:
==================================================================
BUG: unable to handle page fault for address: fffffbfff803c9e9
PF: supervisor read access in kernel mode
PF: error_code(0x0000) - not-present page
PGD 21ffea067 P4D 21ffea067 PUD 21ffe6067 PMD 110a7c067 PTE 0
Oops: Oops: 0000 [#1] PREEMPT SMP KASAN PTI
CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Tainted: G W 6.11.0-rc3 #855
Tainted: [W]=WARN
RIP: 0010:__run_timer_base.part.0+0x254/0x8a0
Call Trace:
<IRQ>
tmigr_handle_remote_up+0x627/0x810
__walk_groups.isra.0+0x47/0x140
tmigr_handle_remote+0x1fa/0x2f0
handle_softirqs+0x180/0x590
irq_exit_rcu+0x84/0xb0
sysvec_apic_timer_interrupt+0x6e/0x90
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20
RIP: 0010:default_idle+0xf/0x20
default_idle_call+0x38/0x60
do_idle+0x2b5/0x300
cpu_startup_entry+0x54/0x60
start_secondary+0x20d/0x280
common_startup_64+0x13e/0x148
</TASK>
Modules linked in: [last unloaded: netfs]
==================================================================
Therefore, delete fscache_cookie_lru_timer when removing the fscache module. |
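The general rule the fix applies is that any timer a module arms must be torn down on the module's exit path, so no expiry can fire after the module's code is gone. A rough user-space analog using POSIX timers (names are hypothetical; this is not the fscache code):

    #include <signal.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static timer_t lru_timer;

    static void lru_expire(union sigval sv)
    {
        (void)sv;
        puts("lru timer fired");
    }

    /* "module init": create and arm the timer */
    static int mod_init(void)
    {
        struct sigevent sev = { .sigev_notify = SIGEV_THREAD,
                                .sigev_notify_function = lru_expire };
        struct itimerspec its = { .it_value.tv_sec = 1 };

        if (timer_create(CLOCK_MONOTONIC, &sev, &lru_timer))
            return -1;
        return timer_settime(lru_timer, 0, &its, NULL);
    }

    /* "module exit": the fix is to delete the pending timer here, so no
     * expiry can run once the module's code and data are gone. */
    static void mod_exit(void)
    {
        timer_delete(lru_timer);
    }

    int main(void)
    {
        if (mod_init())
            return 1;
        mod_exit();             /* tear down before "unloading" */
        sleep(2);               /* no callback fires: the timer was deleted */
        return 0;
    }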
| In the Linux kernel, the following vulnerability has been resolved:
xen: privcmd: Fix possible access to a freed kirqfd instance
Nothing prevents simultaneous ioctl calls to privcmd_irqfd_assign() and
privcmd_irqfd_deassign(). If that happens, it is possible that a kirqfd
created and added to the irqfds_list by privcmd_irqfd_assign() may get
removed by another thread executing privcmd_irqfd_deassign(), while the
former is still using it after dropping the locks.
This can lead to a situation where an already freed kirqfd instance may
be accessed and cause kernel oops.
Use SRCU locking to prevent this, as is done in the KVM irqfd
implementation. |
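The kernel fix uses SRCU; as a much-simplified user-space stand-in, a reader/writer lock gives the same shape — the deassign path cannot free the object while the use path is still dereferencing it (hypothetical names, not the privcmd code):

    #include <pthread.h>
    #include <stdlib.h>

    struct kirqfd_like {
        int fd;
        /* ...event fields... */
    };

    static struct kirqfd_like *active;        /* single published slot */
    static pthread_rwlock_t lifetime = PTHREAD_RWLOCK_INITIALIZER;

    /* Use path: hold the "read side" while dereferencing the object, so a
     * concurrent deassign cannot free it underneath us. */
    static void use_irqfd(void)
    {
        pthread_rwlock_rdlock(&lifetime);
        if (active) {
            /* safe to touch active->fd here */
        }
        pthread_rwlock_unlock(&lifetime);
    }

    /* Deassign path: unpublish under the write lock (which waits for all
     * readers), then free. */
    static void deassign_irqfd(void)
    {
        struct kirqfd_like *victim;

        pthread_rwlock_wrlock(&lifetime);
        victim = active;
        active = NULL;
        pthread_rwlock_unlock(&lifetime);
        free(victim);
    }

    int main(void)
    {
        active = calloc(1, sizeof(*active));
        use_irqfd();
        deassign_irqfd();
        return 0;
    }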
| In the Linux kernel, the following vulnerability has been resolved:
misc: fastrpc: Fix double free of 'buf' in error path
smatch warning:
drivers/misc/fastrpc.c:1926 fastrpc_req_mmap() error: double free of 'buf'
In fastrpc_req_mmap() error path, the fastrpc buffer is freed in
fastrpc_req_munmap_impl() if unmap is successful.
But in the end, there is an unconditional call to fastrpc_buf_free(),
so the above case triggers a double free of the fastrpc buf. |
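The bug has the classic "freed on one path, then freed again unconditionally at the end" shape. A minimal sketch of the broken flow and the early return that fixes it (hypothetical names, not the fastrpc code):

    #include <stdlib.h>

    struct fastrpc_buf_like { char data[64]; };

    /* Mirrors the description: the unmap helper frees the buffer when the
     * unmap succeeds. */
    static int munmap_impl(struct fastrpc_buf_like *b, int ok)
    {
        if (ok) {
            free(b);
            return 0;
        }
        return -1;
    }

    static int req_mmap(int trigger_error)
    {
        struct fastrpc_buf_like *b = malloc(sizeof(*b));

        if (!b)
            return -1;
        if (trigger_error) {
            /* Error path: on success the helper already freed 'b' ... */
            if (munmap_impl(b, 1) == 0)
                return -1;  /* ... so bail out instead of falling through */
        }
        /* An unconditional free down here would be the double free whenever
         * the error path above already released 'b'. */
        free(b);
        return 0;
    }

    int main(void)
    {
        req_mmap(1);
        req_mmap(0);
        return 0;
    }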
| In the Linux kernel, the following vulnerability has been resolved:
nfsd: fix potential UAF in nfsd4_cb_getattr_release
Once we drop the delegation reference, the fields embedded in it are no
longer safe to access. Do that last. |
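The one-line rule behind this fix: read any fields embedded in a refcounted object before dropping what may be the last reference. A tiny ordering sketch (hypothetical types, not the nfsd code):

    #include <stdio.h>
    #include <stdlib.h>

    struct delegation_like {
        int refcount;
        int stateid;            /* field embedded in the refcounted object */
    };

    static void deleg_put(struct delegation_like *d)
    {
        if (--d->refcount == 0)
            free(d);
    }

    int main(void)
    {
        struct delegation_like *d = calloc(1, sizeof(*d));

        if (!d)
            return 1;
        d->refcount = 1;
        d->stateid = 42;

        /* Use the embedded field first ... */
        printf("releasing stateid %d\n", d->stateid);
        /* ... and drop the reference last; the other order is a potential UAF. */
        deleg_put(d);
        return 0;
    }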
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix a use-after-free when hitting errors inside btrfs_submit_chunk()
[BUG]
There is an internal report that KASAN is reporting use-after-free, with
the following backtrace:
BUG: KASAN: slab-use-after-free in btrfs_check_read_bio+0xa68/0xb70 [btrfs]
Read of size 4 at addr ffff8881117cec28 by task kworker/u16:2/45
CPU: 1 UID: 0 PID: 45 Comm: kworker/u16:2 Not tainted 6.11.0-rc2-next-20240805-default+ #76
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-3-gd478f380-rebuilt.opensuse.org 04/01/2014
Workqueue: btrfs-endio btrfs_end_bio_work [btrfs]
Call Trace:
dump_stack_lvl+0x61/0x80
print_address_description.constprop.0+0x5e/0x2f0
print_report+0x118/0x216
kasan_report+0x11d/0x1f0
btrfs_check_read_bio+0xa68/0xb70 [btrfs]
process_one_work+0xce0/0x12a0
worker_thread+0x717/0x1250
kthread+0x2e3/0x3c0
ret_from_fork+0x2d/0x70
ret_from_fork_asm+0x11/0x20
Allocated by task 20917:
kasan_save_stack+0x37/0x60
kasan_save_track+0x10/0x30
__kasan_slab_alloc+0x7d/0x80
kmem_cache_alloc_noprof+0x16e/0x3e0
mempool_alloc_noprof+0x12e/0x310
bio_alloc_bioset+0x3f0/0x7a0
btrfs_bio_alloc+0x2e/0x50 [btrfs]
submit_extent_page+0x4d1/0xdb0 [btrfs]
btrfs_do_readpage+0x8b4/0x12a0 [btrfs]
btrfs_readahead+0x29a/0x430 [btrfs]
read_pages+0x1a7/0xc60
page_cache_ra_unbounded+0x2ad/0x560
filemap_get_pages+0x629/0xa20
filemap_read+0x335/0xbf0
vfs_read+0x790/0xcb0
ksys_read+0xfd/0x1d0
do_syscall_64+0x6d/0x140
entry_SYSCALL_64_after_hwframe+0x4b/0x53
Freed by task 20917:
kasan_save_stack+0x37/0x60
kasan_save_track+0x10/0x30
kasan_save_free_info+0x37/0x50
__kasan_slab_free+0x4b/0x60
kmem_cache_free+0x214/0x5d0
bio_free+0xed/0x180
end_bbio_data_read+0x1cc/0x580 [btrfs]
btrfs_submit_chunk+0x98d/0x1880 [btrfs]
btrfs_submit_bio+0x33/0x70 [btrfs]
submit_one_bio+0xd4/0x130 [btrfs]
submit_extent_page+0x3ea/0xdb0 [btrfs]
btrfs_do_readpage+0x8b4/0x12a0 [btrfs]
btrfs_readahead+0x29a/0x430 [btrfs]
read_pages+0x1a7/0xc60
page_cache_ra_unbounded+0x2ad/0x560
filemap_get_pages+0x629/0xa20
filemap_read+0x335/0xbf0
vfs_read+0x790/0xcb0
ksys_read+0xfd/0x1d0
do_syscall_64+0x6d/0x140
entry_SYSCALL_64_after_hwframe+0x4b/0x53
[CAUSE]
Although I cannot reproduce the error, the report itself is good enough
to pin down the cause.
The call trace is the regular endio workqueue context, but the
freed-by-task trace shows that during btrfs_submit_chunk() we
already hit a critical error and called btrfs_bio_end_io() to error
out. The original endio function then called bio_put() to free the whole
bio.
This means a double free, thus causing the use-after-free, e.g.:
1. Enter btrfs_submit_bio() with a read bio
The read bio length is 128K, crossing two 64K stripes.
2. The first run of btrfs_submit_chunk()
2.1 Call btrfs_map_block(), which returns 64K
2.2 Call btrfs_split_bio()
Now there are two bios, one referring to the first 64K, the other
referring to the second 64K.
2.3 The first half is submitted.
3. The second run of btrfs_submit_chunk()
3.1 Call btrfs_map_block(), which somehow failed
Now we call btrfs_bio_end_io() to handle the error
3.2 btrfs_bio_end_io() calls the original endio function
Which is end_bbio_data_read(), and it calls bio_put() for the
original bio.
Now the original bio is freed.
4. The submitted first 64K bio finished
Now we call into btrfs_check_read_bio() and try to advance the bio
iter.
But since the original bio (thus its iter) is already freed, we
trigger the above use-after-free.
And even if the memory is not poisoned/corrupted, we will later call
the original endio function, causing a double freeing.
[FIX]
Instead of calling btrfs_bio_end_io(), call btrfs_orig_bbio_end_io(),
which has the extra check on split bios and do the pr
---truncated--- |
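The description boils down to making every completion — including the error path inside the submit code — funnel through a parent-aware helper, so the original bio is ended and freed exactly once even when it has been split. A simplified refcount sketch of that idea (hypothetical names, not the btrfs code):

    #include <stdio.h>
    #include <stdlib.h>

    struct orig_bio_like {
        int pending;            /* outstanding halves of the split original */
        int status;
    };

    static void orig_endio(struct orig_bio_like *o)
    {
        printf("end original bio, status %d\n", o->status);
        free(o);                /* the only place the original is freed */
    }

    /* Both the submit-side error path and each split child's completion
     * funnel through here, so the original is ended exactly once. */
    static void orig_bio_end_io(struct orig_bio_like *o, int status)
    {
        if (status && !o->status)
            o->status = status;
        if (--o->pending == 0)
            orig_endio(o);
    }

    int main(void)
    {
        struct orig_bio_like *o = calloc(1, sizeof(*o));

        if (!o)
            return 1;
        o->pending = 2;         /* two 64K halves of the original 128K bio */

        orig_bio_end_io(o, -5); /* submit path errors out on the second half */
        orig_bio_end_io(o, 0);  /* first half completes later: no double free */
        return 0;
    }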
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe: prevent UAF around preempt fence
The fence lock is part of the queue, therefore in the current design
anything locking the fence should then also hold a ref to the queue to
prevent the queue from being freed.
However, currently it looks like we signal the fence and then drop the
queue ref, but if something is waiting on the fence, the waiter is
kicked to wake up at some later point, where upon waking up it first
grabs the lock before checking the fence state. But if we have already
dropped the queue ref, then the lock might already be freed as part of
the queue, leading to a UAF.
To prevent this, move the fence lock into the fence itself so we don't
run into lifetime issues. An alternative might be to have a device-level
lock, or to only release the queue in the fence release callback; however,
that might require pushing to another worker to avoid locking issues.
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2454
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2342
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2020
(cherry picked from commit 7116c35aacedc38be6d15bd21b2fc936eed0008b) |
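The structural change is to embed the lock in the object whose lifetime matches its use, instead of reaching into a container that may already be freed. A before/after layout sketch (hypothetical struct names, not the xe code):

    #include <pthread.h>

    /* Before: the lock lives in the queue, so a fence waiter that grabs the
     * lock after the last queue reference is dropped touches freed memory. */
    struct exec_queue_old {
        pthread_mutex_t fence_lock;
        /* ... */
    };
    struct preempt_fence_old {
        struct exec_queue_old *queue;   /* lock reached through the queue */
    };

    /* After: the fence carries its own lock, so waiters never depend on the
     * queue still being alive just to serialize on the fence state. */
    struct preempt_fence_new {
        pthread_mutex_t lock;
        int signaled;
    };

    int main(void)
    {
        struct preempt_fence_new f = { .lock = PTHREAD_MUTEX_INITIALIZER };

        pthread_mutex_lock(&f.lock);
        f.signaled = 1;
        pthread_mutex_unlock(&f.lock);
        return 0;
    }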
| In the Linux kernel, the following vulnerability has been resolved:
nvme: move stopping keep-alive into nvme_uninit_ctrl()
Commit 4733b65d82bd ("nvme: start keep-alive after admin queue setup")
moves starting keep-alive from nvme_start_ctrl() into
nvme_init_ctrl_finish(), but doesn't move stopping keep-alive into
nvme_uninit_ctrl(), so the keep-alive work can be started and remain pending
after failing to start the controller; a use-after-free is then triggered if
the nvme host driver is unloaded.
This patch fixes kernel panic when running nvme/004 in case that connection
failure is triggered, by moving stopping keep-alive into nvme_uninit_ctrl().
This way is reasonable because keep-alive is now started in
nvme_init_ctrl_finish(). |
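The principle is symmetry: background work started by an init helper must be stopped by the matching uninit helper, so it cannot remain pending when controller start-up fails. A user-space sketch of that pairing, using a thread as a stand-in for the keep-alive work (hypothetical names):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_t keep_alive;
    static atomic_int stop;

    static void *keep_alive_work(void *arg)
    {
        (void)arg;
        while (!stop)
            usleep(1000);       /* periodic keep-alive ping would go here */
        return NULL;
    }

    /* init_finish starts the keep-alive ... */
    static int init_ctrl_finish(void)
    {
        stop = 0;
        return pthread_create(&keep_alive, NULL, keep_alive_work, NULL);
    }

    /* ... so the matching uninit must stop it, even when bringing the
     * controller up failed after init_ctrl_finish() already ran. */
    static void uninit_ctrl(void)
    {
        stop = 1;
        pthread_join(keep_alive, NULL);
    }

    int main(void)
    {
        if (init_ctrl_finish())
            return 1;
        /* imagine the controller start failing here */
        uninit_ctrl();          /* keep-alive is reliably stopped */
        puts("no pending work left behind");
        return 0;
    }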
| In the Linux kernel, the following vulnerability has been resolved:
net: ethernet: mtk_wed: fix use-after-free panic in mtk_wed_setup_tc_block_cb()
When there are multiple ap interfaces on one band and with WED on,
turning the interface down will cause a kernel panic on MT798X.
Previously, cb_priv was freed in mtk_wed_setup_tc_block() without being
set to NULL, and mtk_wed_setup_tc_block_cb() didn't check the value either.
Assign NULL after freeing cb_priv in mtk_wed_setup_tc_block() and check for
NULL in mtk_wed_setup_tc_block_cb().
----------
Unable to handle kernel paging request at virtual address 0072460bca32b4f5
Call trace:
mtk_wed_setup_tc_block_cb+0x4/0x38
0xffffffc0794084bc
tcf_block_playback_offloads+0x70/0x1e8
tcf_block_unbind+0x6c/0xc8
...
--------- |
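The fix is the standard "NULL after free, check before use" pairing. A compact sketch of the two halves (hypothetical names, not the mtk_wed code):

    #include <stdio.h>
    #include <stdlib.h>

    static void *cb_priv;       /* shared between setup and the callback */

    static void setup_tc_block_teardown(void)
    {
        free(cb_priv);
        cb_priv = NULL;         /* the fix: don't leave a dangling pointer */
    }

    static int setup_tc_block_cb(void)
    {
        if (!cb_priv)           /* the fix: bail out when already torn down */
            return -1;
        /* safe to dereference cb_priv here */
        return 0;
    }

    int main(void)
    {
        cb_priv = calloc(1, 64);
        setup_tc_block_teardown();
        printf("callback after teardown: %d\n", setup_tc_block_cb());
        return 0;
    }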
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe: Free job before xe_exec_queue_put
Freeing a job depends on job->vm being valid; the last xe_exec_queue_put can
destroy the VM. Prevent the UAF by freeing the job before xe_exec_queue_put.
(cherry picked from commit 32a42c93b74c8ca6d0915ea3eba21bceff53042f) |
| In the Linux kernel, the following vulnerability has been resolved:
idpf: fix memory leaks and crashes while performing a soft reset
The second tagged commit introduced a UAF, as it removed restoring the
q_vector->vport pointers after reinitializing the structures.
This is because all queue allocation functions are performed here
with the new temporary vport structure, and those functions rewrite
the backpointers to that vport. Then this new struct is freed and
the pointers are left dangling.
But generally speaking, the current logic is very fragile. It claims
to be more reliable when the system is low on memory, but in fact it
consumes twice as much memory, since at the moment this function runs,
there are two vports allocated with their queues and vectors.
Moreover, it claims to prevent the driver from running into "bad state",
but in fact, any error during the rebuild leaves the old vport in the
partially allocated state.
Finally, if the interface is down when the function is called, it always
allocates a new queue set, but when the user decides to enable the
interface later on, vport_open() allocates them once again, IOW there's
a clear memory leak here.
Just don't allocate a new queue set when performing a reset; that solves the
crashes and memory leaks. Re-add the old queue number and reopen the
interface on rollback - that solves limbo states when the device is left
disabled and/or without HW queues enabled. |
| In the Linux kernel, the following vulnerability has been resolved:
idpf: fix UAFs when destroying the queues
The second tagged commit started sometimes (very rarely, but still possibly)
throwing WARNs from
net/core/page_pool.c:page_pool_disable_direct_recycling().
It turned out idpf frees interrupt vectors with embedded NAPIs *before*
freeing the queues, making the page_pools' NAPI pointers lead to freed
memory before these pools are destroyed by libeth.
It's not clear whether there are other accesses to the freed vectors
when destroying the queues, but anyway, we usually free queue/interrupt
vectors only when the queues are destroyed and the NAPIs are guaranteed
to not be referenced anywhere.
Invert the allocation and freeing logic, making queue/interrupt vectors
be allocated first and freed last. Vectors don't require queues to be
present, so this is safe. Additionally, this change allows removing
the now-useless queue->q_vector pointer cleanup, as the vectors are still
valid when freeing the queues (+ both are freed within one function,
so it's not clear why the pointers were nullified at all). |
| In the Linux kernel, the following vulnerability has been resolved:
tracing: Have format file honor EVENT_FILE_FL_FREED
When eventfs was introduced, special care had to be done to coordinate the
freeing of the file meta data with the files that are exposed to user
space. The file meta data would have a ref count that is set when the file
is created and would be decremented and freed after the last user that
opened the file closed it. When the file meta data was to be freed, it
would set a flag (EVENT_FILE_FL_FREED) to denote that the file is freed,
and any new references made (like new opens or reads) would fail as it is
marked freed. This allowed other meta data to be freed after this flag was
set (under the event_mutex).
All the files that were dynamically created in the events directory had a
pointer to the file meta data and would call event_release() when the last
reference to the user space file was closed. This would be the time that it
is safe to free the file meta data.
A shortcut was made for the "format" file. Its i_private would point to
the "call" entry directly and not point to the file's meta data. This is
because all format files are the same for the same "call", so it was
thought there was no reason to differentiate them. The other files
maintain state (like the "enable", "trigger", etc). But this meant if the
file were to disappear, the "format" file would be unaware of it.
This caused a race that could be triggered via the user_events test (that
would create dynamic events and free them), and running a loop that would
read the user_events format files:
In one console run:
# cd tools/testing/selftests/user_events
# while true; do ./ftrace_test; done
And in another console run:
# cd /sys/kernel/tracing/
# while true; do cat events/user_events/__test_event/format; done 2>/dev/null
With KASAN memory checking, it would trigger a use-after-free bug report
(which was a real bug). This was because the format file was not checking
the file's meta data flag "EVENT_FILE_FL_FREED", so it would access the
event that the file meta data pointed to after the event was freed.
After inspection, there are other locations that were found to not check
the EVENT_FILE_FL_FREED flag when accessing the trace_event_file. Add a
new helper function: event_file_file() that will make sure that the
event_mutex is held, and will return NULL if the trace_event_file has the
EVENT_FILE_FL_FREED flag set. Have the first reference of the struct file
pointer use event_file_file() and check for NULL. Later uses can still use
the event_file_data() helper function if the event_mutex is still held and
was not released since the event_file_file() call. |
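The helper described can be pictured as a lookup that only returns the file when the protecting mutex is held and the FREED flag is clear. A user-space stand-in (the struct, flag value, and function name here are simplified stand-ins, not the tracing code):

    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>

    #define EVENT_FILE_FL_FREED  (1u << 0)

    struct trace_event_file_like {
        unsigned int flags;
    };

    static pthread_mutex_t event_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Caller must hold event_mutex.  Returns the file only when it has not
     * been marked freed; otherwise returns NULL so the caller backs off. */
    static struct trace_event_file_like *
    event_file_lookup(struct trace_event_file_like *file)
    {
        if (!file || (file->flags & EVENT_FILE_FL_FREED))
            return NULL;
        return file;
    }

    int main(void)
    {
        struct trace_event_file_like f = { .flags = EVENT_FILE_FL_FREED };
        struct trace_event_file_like *p;

        pthread_mutex_lock(&event_mutex);
        p = event_file_lookup(&f);          /* freed file: lookup rejects it */
        pthread_mutex_unlock(&event_mutex);

        printf("lookup returned %s\n", p ? "a live file" : "NULL");
        return 0;
    }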
| In the Linux kernel, the following vulnerability has been resolved:
mm: list_lru: fix UAF for memory cgroup
mem_cgroup_from_slab_obj() is supposed to be called under the rcu lock,
cgroup_mutex, or something else that could prevent the returned memcg from
being freed. Fix it by adding the missing rcu read lock.
Found by code inspection.
[songmuchun@bytedance.com: only grab rcu lock when necessary, per Vlastimil] |
| In the Linux kernel, the following vulnerability has been resolved:
mm: fix crashes from deferred split racing folio migration
Even on 6.10-rc6, I've been seeing elusive "Bad page state"s (often on
flags when freeing, yet the flags shown are not bad: PG_locked had been
set and cleared??), and VM_BUG_ON_PAGE(page_ref_count(page) == 0)s from
deferred_split_scan()'s folio_put(), and a variety of other BUG and WARN
symptoms implying double free by deferred split and large folio migration.
6.7 commit 9bcef5973e31 ("mm: memcg: fix split queue list crash when large
folio migration") was right to fix the memcg-dependent locking broken in
85ce2c517ade ("memcontrol: only transfer the memcg data for migration"),
but missed a subtlety of deferred_split_scan(): it moves folios to its own
local list to work on them without split_queue_lock, during which time
folio->_deferred_list is not empty, but even the "right" lock does nothing
to secure the folio and the list it is on.
Fortunately, deferred_split_scan() is careful to use folio_try_get(): so
folio_migrate_mapping() can avoid the race by folio_undo_large_rmappable()
while the old folio's reference count is temporarily frozen to 0 - adding
such a freeze in the !mapping case too (originally, folio lock and
unmapping and no swap cache left an anon folio unreachable, so no freezing
was needed there: but the deferred split queue offers a way to reach it). |
| In the Linux kernel, the following vulnerability has been resolved:
net: rswitch: Avoid use-after-free in rswitch_poll()
The use-after-free is actually in rswitch_tx_free(), which is inlined in
rswitch_poll(). Since `skb` and `gq->skbs[gq->dirty]` are in fact the
same pointer, the skb is first freed using dev_kfree_skb_any(), then the
value in skb->len is used to update the interface statistics.
Let's move around the instructions to use skb->len before the skb is
freed.
This bug is trivial to reproduce using KFENCE. It will trigger a splat
every few packets. A simple ARP request or ICMP echo request is enough. |
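The whole fix is the reorder: copy the length out of the skb into a local before handing the skb to the free routine. A minimal sketch of that ordering (plain user-space C, not the rswitch code):

    #include <stdio.h>
    #include <stdlib.h>

    struct sk_buff_like {
        unsigned int len;
    };

    static unsigned long tx_bytes;

    int main(void)
    {
        struct sk_buff_like *skb = malloc(sizeof(*skb));
        unsigned int len;

        if (!skb)
            return 1;
        skb->len = 1500;

        len = skb->len;         /* FIX: capture the length first ...          */
        free(skb);              /* ... then free; reading skb->len after this */
        tx_bytes += len;        /*     free would be the use-after-free.      */

        printf("tx bytes: %lu\n", tx_bytes);
        return 0;
    }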
| In the Linux kernel, the following vulnerability has been resolved:
bpf: Fix remap of arena.
The bpf arena logic didn't account for the mremap operation. Add a refcnt for
multiple mmap events to prevent use-after-free in arena_vm_close. |
| In the Linux kernel, the following vulnerability has been resolved:
block: avoid to reuse `hctx` not removed from cpuhp callback list
If the 'hctx' isn't removed from the cpuhp callback list, we can't reuse it;
otherwise a use-after-free may be triggered. |
| In the Linux kernel, the following vulnerability has been resolved:
bpf: Fix too early release of tcx_entry
Pedro Pinto and later independently also Hyunwoo Kim and Wongi Lee reported
an issue that the tcx_entry can be released too early leading to a use
after free (UAF) when an active old-style ingress or clsact qdisc with a
shared tc block is later replaced by another ingress or clsact instance.
Essentially, the sequence to trigger the UAF (one example) can be as follows:
1. A network namespace is created
2. An ingress qdisc is created. This allocates a tcx_entry, and
&tcx_entry->miniq is stored in the qdisc's miniqp->p_miniq. At the
same time, a tcf block with index 1 is created.
3. chain0 is attached to the tcf block. chain0 must be connected to
the block linked to the ingress qdisc to later reach the function
tcf_chain0_head_change_cb_del() which triggers the UAF.
4. Create and graft a clsact qdisc. This causes the ingress qdisc
created in step 1 to be removed, thus freeing the previously linked
tcx_entry:
rtnetlink_rcv_msg()
=> tc_modify_qdisc()
=> qdisc_create()
=> clsact_init() [a]
=> qdisc_graft()
=> qdisc_destroy()
=> __qdisc_destroy()
=> ingress_destroy() [b]
=> tcx_entry_free()
=> kfree_rcu() // tcx_entry freed
5. Finally, the network namespace is closed. This registers the
cleanup_net worker, and during the process of releasing the
remaining clsact qdisc, it accesses the tcx_entry that was
already freed in step 4, causing the UAF to occur:
cleanup_net()
=> ops_exit_list()
=> default_device_exit_batch()
=> unregister_netdevice_many()
=> unregister_netdevice_many_notify()
=> dev_shutdown()
=> qdisc_put()
=> clsact_destroy() [c]
=> tcf_block_put_ext()
=> tcf_chain0_head_change_cb_del()
=> tcf_chain_head_change_item()
=> clsact_chain_head_change()
=> mini_qdisc_pair_swap() // UAF
There are also other variants, the gist is to add an ingress (or clsact)
qdisc with a specific shared block, then to replace that qdisc, waiting
for the tcx_entry kfree_rcu() to be executed and subsequently accessing
the current active qdisc's miniq one way or another.
The correct fix is to turn the miniq_active boolean into a counter. What
can be observed is that, at step 2 above, the counter transitions from 0->1, at
step [a] from 1->2 (in order for the miniq object to remain active during
the replacement), then in [b] from 2->1 and finally [c] 1->0 with the
eventual release. The reference counter in general ranges from [0,2] and
it does not need to be atomic since all access to the counter is protected
by the rtnl mutex. With this in place, there is no longer a UAF happening
and the tcx_entry is freed at the correct time. |
| In the Linux kernel, the following vulnerability has been resolved:
ionic: fix kernel panic in XDP_TX action
In the XDP_TX path, the ionic driver sends a packet to the TX path with the
rx page and its corresponding dma address.
After tx is done, ionic_tx_clean() frees that page.
But the RX ring buffer pointer isn't reset to NULL.
So a freed page gets reused, which causes a kernel panic.
BUG: unable to handle page fault for address: ffff8881576c110c
PGD 773801067 P4D 773801067 PUD 87f086067 PMD 87efca067 PTE 800ffffea893e060
Oops: Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC KASAN NOPTI
CPU: 1 PID: 25 Comm: ksoftirqd/1 Not tainted 6.9.0+ #11
Hardware name: ASUS System Product Name/PRIME Z690-P D4, BIOS 0603 11/01/2021
RIP: 0010:bpf_prog_f0b8caeac1068a55_balancer_ingress+0x3b/0x44f
Code: 00 53 41 55 41 56 41 57 b8 01 00 00 00 48 8b 5f 08 4c 8b 77 00 4c 89 f7 48 83 c7 0e 48 39 d8
RSP: 0018:ffff888104e6fa28 EFLAGS: 00010283
RAX: 0000000000000002 RBX: ffff8881576c1140 RCX: 0000000000000002
RDX: ffffffffc0051f64 RSI: ffffc90002d33048 RDI: ffff8881576c110e
RBP: ffff888104e6fa88 R08: 0000000000000000 R09: ffffed1027a04a23
R10: 0000000000000000 R11: 0000000000000000 R12: ffff8881b03a21a8
R13: ffff8881589f800f R14: ffff8881576c1100 R15: 00000001576c1100
FS: 0000000000000000(0000) GS:ffff88881ae00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff8881576c110c CR3: 0000000767a90000 CR4: 00000000007506f0
PKRU: 55555554
Call Trace:
<TASK>
? __die+0x20/0x70
? page_fault_oops+0x254/0x790
? __pfx_page_fault_oops+0x10/0x10
? __pfx_is_prefetch.constprop.0+0x10/0x10
? search_bpf_extables+0x165/0x260
? fixup_exception+0x4a/0x970
? exc_page_fault+0xcb/0xe0
? asm_exc_page_fault+0x22/0x30
? 0xffffffffc0051f64
? bpf_prog_f0b8caeac1068a55_balancer_ingress+0x3b/0x44f
? do_raw_spin_unlock+0x54/0x220
ionic_rx_service+0x11ab/0x3010 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
? ionic_tx_clean+0x29b/0xc60 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
? __pfx_ionic_tx_clean+0x10/0x10 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
? __pfx_ionic_rx_service+0x10/0x10 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
? ionic_tx_cq_service+0x25d/0xa00 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
? __pfx_ionic_rx_service+0x10/0x10 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
ionic_cq_service+0x69/0x150 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
ionic_txrx_napi+0x11a/0x540 [ionic 9180c3001ab627d82bbc5f3ebe8a0decaf6bb864]
__napi_poll.constprop.0+0xa0/0x440
net_rx_action+0x7e7/0xc30
? __pfx_net_rx_action+0x10/0x10 |