In the Linux kernel, the following vulnerability has been resolved:
net, hsr: reject HSR frame if skb can't hold tag
Receiving an HSR frame with insufficient space in the skb to hold the
HSR tag can result in a crash (kernel BUG):
[ 45.390915] skbuff: skb_under_panic: text:ffffffff86f32cac len:26 put:14 head:ffff888042418000 data:ffff888042417ff4 tail:0xe end:0x180 dev:bridge_slave_1
[ 45.392559] ------------[ cut here ]------------
[ 45.392912] kernel BUG at net/core/skbuff.c:211!
[ 45.393276] Oops: invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN NOPTI
[ 45.393809] CPU: 1 UID: 0 PID: 2496 Comm: reproducer Not tainted 6.15.0 #12 PREEMPT(undef)
[ 45.394433] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 45.395273] RIP: 0010:skb_panic+0x15b/0x1d0
<snip registers, remove unreliable trace>
[ 45.402911] Call Trace:
[ 45.403105] <IRQ>
[ 45.404470] skb_push+0xcd/0xf0
[ 45.404726] br_dev_queue_push_xmit+0x7c/0x6c0
[ 45.406513] br_forward_finish+0x128/0x260
[ 45.408483] __br_forward+0x42d/0x590
[ 45.409464] maybe_deliver+0x2eb/0x420
[ 45.409763] br_flood+0x174/0x4a0
[ 45.410030] br_handle_frame_finish+0xc7c/0x1bc0
[ 45.411618] br_handle_frame+0xac3/0x1230
[ 45.413674] __netif_receive_skb_core.constprop.0+0x808/0x3df0
[ 45.422966] __netif_receive_skb_one_core+0xb4/0x1f0
[ 45.424478] __netif_receive_skb+0x22/0x170
[ 45.424806] process_backlog+0x242/0x6d0
[ 45.425116] __napi_poll+0xbb/0x630
[ 45.425394] net_rx_action+0x4d1/0xcc0
[ 45.427613] handle_softirqs+0x1a4/0x580
[ 45.427926] do_softirq+0x74/0x90
[ 45.428196] </IRQ>
This issue was found by syzkaller.
The panic happens in br_dev_queue_push_xmit() once it receives a
corrupted skb with the ETH header already pushed into the linear data.
When it attempts the skb_push() call, there is not enough headroom and
skb_push() panics.
The corrupted skb is put on the queue by the HSR layer, which performs
a sequence of unintended transformations when it receives a specific
corrupted HSR frame (one with an incomplete tag).
Fix it by dropping and consuming frames that are not long enough to
contain both the Ethernet and HSR headers.
An alternative fix would be to check for enough headroom before the
skb_push() call in br_dev_queue_push_xmit().
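As a minimal sketch of the kind of length guard described (the helper
name and call site are illustrative, not the actual patch; HSR_HLEN is
the 6-byte tag length from net/hsr/hsr_main.h):

  /* Refuse frames too short to carry both an Ethernet header and a
   * complete HSR tag; pskb_may_pull() also linearizes the headers. */
  static bool hsr_frame_len_ok(struct sk_buff *skb)
  {
          return pskb_may_pull(skb, ETH_HLEN + HSR_HLEN);
  }

A frame failing such a check would be freed with kfree_skb() before the
HSR layer starts transforming it.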
In the reproducer, this is injected via AF_PACKET, but I don't easily
see why it couldn't be sent over the wire from an adjacent network.
Further Details:
In the reproducer, the following network interface chain is set up:
┌────────────────┐ ┌────────────────┐
│ veth0_to_hsr ├───┤ hsr_slave0 ┼───┐
└────────────────┘ └────────────────┘ │
│ ┌──────┐
├─┤ hsr0 ├───┐
│ └──────┘ │
┌────────────────┐ ┌────────────────┐ │ │┌────────┐
│ veth1_to_hsr ┼───┤ hsr_slave1 ├───┘ └┤ │
└────────────────┘ └────────────────┘ ┌┼ bridge │
││ │
│└────────┘
│
┌───────┐ │
│ ... ├──────┘
└───────┘
To trigger the events leading up to the crash, the reproducer sends a corrupted
HSR fr
---truncated--- |
In the Linux kernel, the following vulnerability has been resolved:
ACPI: pfr_update: Fix the driver update version check
The security-version-number check should be used rather
than the runtime version check for driver updates.
Otherwise, the firmware update would fail when the update binary had
a lower runtime version number than the current one.
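A minimal sketch of the intended comparison (the helper and field names
are illustrative, not the driver's actual API):

  /* Reject only a security-version rollback; a lower runtime version
   * with an equal or higher SVN is a legitimate update. */
  static int pfru_svn_ok(u32 image_svn, u32 current_svn)
  {
          if (image_svn < current_svn)
                  return -EINVAL;
          return 0;
  }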
[ rjw: Changelog edits ] |
In the Linux kernel, the following vulnerability has been resolved:
crypto: caam - Prevent crash on suspend with iMX8QM / iMX8ULP
The CAAM on these SoCs is managed by another ARM core, called the SECO
(Security Controller) on iMX8QM and the Secure Enclave on iMX8ULP,
which also reserves access to register page 0; suspend operations
therefore cannot touch this page.
This is similar to when running OPTEE, where OPTEE will reserve page 0.
Track this situation using a new state variable, no_page0, reflecting
whether page 0 is reserved elsewhere, either by another management core
in the SoC or by OPTEE.
Replace the optee_en check in suspend/resume with the new check.
optee_en cannot go away as it's needed elsewhere to gate OPTEE specific
situations.
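A minimal sketch of the described gating, assuming the flag is named
no_page0 as above (other names modeled on the driver, details elided):

  static int caam_ctrl_suspend(struct device *dev)
  {
          const struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev);

          /* Page 0 may belong to the SECO/Secure Enclave or OP-TEE;
           * only save state when this core actually owns it. */
          if (ctrlpriv->caam_off_during_pm && !ctrlpriv->no_page0)
                  caam_state_save(dev);

          return 0;
  }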
Fixes the following splat at suspend:
Internal error: synchronous external abort: 0000000096000010 [#1] SMP
Hardware name: Freescale i.MX8QXP ACU6C (DT)
pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : readl+0x0/0x18
lr : rd_reg32+0x18/0x3c
sp : ffffffc08192ba20
x29: ffffffc08192ba20 x28: ffffff8025190000 x27: 0000000000000000
x26: ffffffc0808ae808 x25: ffffffc080922338 x24: ffffff8020e89090
x23: 0000000000000000 x22: ffffffc080922000 x21: ffffff8020e89010
x20: ffffffc080387ef8 x19: ffffff8020e89010 x18: 000000005d8000d5
x17: 0000000030f35963 x16: 000000008f785f3f x15: 000000003b8ef57c
x14: 00000000c418aef8 x13: 00000000f5fea526 x12: 0000000000000001
x11: 0000000000000002 x10: 0000000000000001 x9 : 0000000000000000
x8 : ffffff8025190870 x7 : ffffff8021726880 x6 : 0000000000000002
x5 : ffffff80217268f0 x4 : ffffff8021726880 x3 : ffffffc081200000
x2 : 0000000000000001 x1 : ffffff8020e89010 x0 : ffffffc081200004
Call trace:
readl+0x0/0x18
caam_ctrl_suspend+0x30/0xdc
dpm_run_callback.constprop.0+0x24/0x5c
device_suspend+0x170/0x2e8
dpm_suspend+0xa0/0x104
dpm_suspend_start+0x48/0x50
suspend_devices_and_enter+0x7c/0x45c
pm_suspend+0x148/0x160
state_store+0xb4/0xf8
kobj_attr_store+0x14/0x24
sysfs_kf_write+0x38/0x48
kernfs_fop_write_iter+0xb4/0x178
vfs_write+0x118/0x178
ksys_write+0x6c/0xd0
__arm64_sys_write+0x14/0x1c
invoke_syscall.constprop.0+0x64/0xb0
do_el0_svc+0x90/0xb0
el0_svc+0x18/0x44
el0t_64_sync_handler+0x88/0x124
el0t_64_sync+0x150/0x154
Code: 88dffc21 88dffc21 5ac00800 d65f03c0 (b9400000) |
In the Linux kernel, the following vulnerability has been resolved:
media: mt9m114: Fix deadlock in get_frame_interval/set_frame_interval
Getting/setting the frame interval using the V4L2 subdev pad ops
get_frame_interval/set_frame_interval causes a deadlock, as the
subdev state is locked in [1] but also in the driver itself.
[2] describes that the caller is responsible for acquiring and
releasing the lock in this case. Therefore, acquiring the lock in the
driver is wrong.
Remove the lock acquisitions/releases from mt9m114_ifp_get_frame_interval()
and mt9m114_ifp_set_frame_interval().
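As an illustration of the resulting pattern (a sketch, not the actual
driver code; the accessor and field names are assumptions):

  static int mt9m114_ifp_get_frame_interval(struct v4l2_subdev *sd,
                                            struct v4l2_subdev_state *state,
                                            struct v4l2_subdev_frame_interval *fi)
  {
          struct mt9m114 *sensor = ifp_to_mt9m114(sd);

          /* No v4l2_subdev_lock_state(state) here: the v4l2-subdev
           * core already holds the state lock for this pad op, so
           * taking it again would deadlock. */
          fi->interval = sensor->ifp.frame_interval;

          return 0;
  }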
[1] drivers/media/v4l2-core/v4l2-subdev.c - line 1129
[2] Documentation/driver-api/media/v4l2-subdev.rst |
In the Linux kernel, the following vulnerability has been resolved:
tls: fix handling of zero-length records on the rx_list
Each recvmsg() call must process either
- only contiguous DATA records (any number of them)
- one non-DATA record
If the next record has a different type than what has already been
processed, we break out of the main processing loop. If the record
has already been decrypted (which may be the case for TLS 1.3, where
we don't know the type until decryption), we queue the pending record
to the rx_list. The next recvmsg() will pick it up from there.
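A small user-space model of that rule (illustrative only, not kernel
code):

  #include <stddef.h>

  enum rec_type { REC_DATA, REC_CTRL };

  struct rec { enum rec_type type; size_t len; };

  /* One recvmsg() consumes either a run of contiguous DATA records or
   * exactly one non-DATA record; it stops when the type changes.
   * Returns the number of records consumed. */
  static size_t drain_one_call(const struct rec *q, size_t n)
  {
          size_t i;

          for (i = 0; i < n; i++) {
                  if (q[i].type != REC_DATA)
                          return i == 0 ? 1 : i;
          }
          return i;
  }

The kernel logic is more involved (zero-copy and async decrypt), but it
follows the same break-on-type-change rule.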
Queuing the skb to rx_list after zero-copy decrypt is not possible,
since in that case we decrypted directly to the user space buffer,
and we don't have an skb to queue (darg.skb points to the ciphertext
skb for access to metadata like length).
Only data records are allowed zero-copy, and we break the processing
loop after each non-data record. So we should never zero-copy and
then find out that the record type has changed. The corner case
we missed is when the initial record comes from rx_list, and it's
zero length. |
In the Linux kernel, the following vulnerability has been resolved:
net/sched: Fix backlog accounting in qdisc_dequeue_internal
This issue applies to the following qdiscs: hhf, fq, fq_codel, and
fq_pie. It occurs in their change handlers when adjusting to the new
limit, in the values passed to the subsequent
qdisc_tree_reduce_backlog() call given a tbf parent:
When the tbf parent runs out of tokens, skbs of these qdiscs will
be placed in gso_skb. Their peek handlers are qdisc_peek_dequeued,
which accounts for both qlen and backlog. However, in the case of
qdisc_dequeue_internal, ONLY qlen is accounted for when pulling
from gso_skb. This means that these qdiscs are missing a
qdisc_qstats_backlog_dec when dropping packets to satisfy the
new limit in their change handlers.
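The relevant helper, sketched from include/net/sch_generic.h
(illustrative), shows the asymmetry:

  static inline struct sk_buff *qdisc_dequeue_internal(struct Qdisc *sch,
                                                       bool direct)
  {
          struct sk_buff *skb;

          skb = __skb_dequeue(&sch->gso_skb);
          if (skb) {
                  sch->q.qlen--;          /* qlen is adjusted here ... */
                  return skb;             /* ... but the backlog is not */
          }
          if (direct)
                  return __qdisc_dequeue_head(&sch->q);
          return sch->dequeue(sch);
  }

Adding a qdisc_qstats_backlog_dec(sch, skb) next to the qlen decrement
is the shape of the repair described below.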
One can observe this issue with the following (with tc patched to
support a limit of 0):
export TARGET=fq
tc qdisc del dev lo root
tc qdisc add dev lo root handle 1: tbf rate 8bit burst 100b latency 1ms
tc qdisc replace dev lo handle 3: parent 1:1 $TARGET limit 1000
echo ''; echo 'add child'; tc -s -d qdisc show dev lo
ping -I lo -f -c2 -s32 -W0.001 127.0.0.1 2>&1 >/dev/null
echo ''; echo 'after ping'; tc -s -d qdisc show dev lo
tc qdisc change dev lo handle 3: parent 1:1 $TARGET limit 0
echo ''; echo 'after limit drop'; tc -s -d qdisc show dev lo
tc qdisc replace dev lo handle 2: parent 1:1 sfq
echo ''; echo 'post graft'; tc -s -d qdisc show dev lo
The second to last show command shows 0 packets but a positive
number (74) of backlog bytes. The problem becomes clearer in the
last show command, where qdisc_purge_queue triggers
qdisc_tree_reduce_backlog with the positive backlog and causes an
underflow in the tbf parent's backlog (4096 Mb instead of 0).
To fix this issue, the codepath for all clients of qdisc_dequeue_internal
has been simplified: codel, pie, hhf, fq, fq_pie, and fq_codel.
qdisc_dequeue_internal handles the backlog adjustments for all cases that
do not directly use the dequeue handler.
The old fq_codel_change limit adjustment loop accumulated the arguments to
the subsequent qdisc_tree_reduce_backlog call through the cstats field.
However, this is confusing and error prone as fq_codel_dequeue could also
potentially mutate this field (which qdisc_dequeue_internal calls in the
non gso_skb case), so we have unified the code here with other qdiscs. |
In the Linux kernel, the following vulnerability has been resolved:
mm/vmscan: fix hwpoisoned large folio handling in shrink_folio_list
In shrink_folio_list(), the hwpoisoned folio may be a large folio,
which can't be handled by unmap_poisoned_folio(). For THP,
try_to_unmap_one() must be passed TTU_SPLIT_HUGE_PMD to split the huge
PMD first and then retry. Without TTU_SPLIT_HUGE_PMD, we will trigger
a null-ptr deref of pvmw.pte. Even if we pass TTU_SPLIT_HUGE_PMD, we
will trigger a WARN_ON_ONCE because the page isn't in the swapcache.
Since a UCE is rare in the real world, and a race with reclamation is
rarer still, just skipping the hwpoisoned large folio is enough.
memory_failure() will handle it if the UCE is triggered again.
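A minimal sketch of such a skip (predicate form; where the folio is
returned to is left to the caller):

  /* True if reclaim should leave this folio alone and let
   * memory_failure() deal with it: unmap_poisoned_folio() cannot
   * handle large folios. */
  static bool skip_hwpoisoned_large_folio(struct folio *folio)
  {
          return folio_test_hwpoison(folio) && folio_test_large(folio);
  }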
This happens when memory reclaim of a large folio races with
memory_failure(), and leads to a kernel panic. The race is as
follows:
cpu0                              cpu1
shrink_folio_list                 memory_failure
                                    TestSetPageHWPoison
  unmap_poisoned_folio
    --> trigger BUG_ON due to
        unmap_poisoned_folio couldn't
        handle large folio
[tujinjiang@huawei.com: add comment to unmap_poisoned_folio()] |
In the Linux kernel, the following vulnerability has been resolved:
media: venus: protect against spurious interrupts during probe
Make sure the interrupt handler is initialized before the interrupt is
registered.
If the IRQ is registered before hfi_create(), it's possible that an
interrupt fires before the handler setup is complete, leading to a NULL
dereference.
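A probe-ordering sketch (excerpted; venus_isr/venus_isr_thread and the
flags follow the driver's conventions but are assumptions here):

  /* Initialize the HFI layer, and with it the interrupt handler's
   * state, before the IRQ can ever fire. */
  ret = hfi_create(core, &venus_core_ops);
  if (ret)
          return ret;

  /* Only now is it safe to register the interrupt. */
  ret = devm_request_threaded_irq(dev, core->irq, venus_isr,
                                  venus_isr_thread,
                                  IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
                                  "venus", core);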
This error condition has been observed during system boot on Rb3Gen2. |
In the Linux kernel, the following vulnerability has been resolved:
media: venus: Add a check for packet size after reading from shared memory
Add a check to ensure that the packet size does not exceed the number
of available words after reading the packet header from shared memory.
This ensures that the size provided by the firmware is safe to process
and prevents potential out-of-bounds memory access.
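A minimal sketch of such a bounds check (names are illustrative):

  static int venus_validate_pkt_size(u32 pkt_size_words, u32 words_avail)
  {
          /* The size comes from firmware-writable shared memory and
           * must be treated as untrusted input. */
          if (!pkt_size_words || pkt_size_words > words_avail)
                  return -EINVAL;
          return 0;
  }

Rejecting the packet before any payload is read keeps every later
parser within the mapped queue. |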
In the Linux kernel, the following vulnerability has been resolved:
fs/buffer: fix use-after-free when call bh_read() helper
There's an issue as follows:
BUG: KASAN: stack-out-of-bounds in end_buffer_read_sync+0xe3/0x110
Read of size 8 at addr ffffc9000168f7f8 by task swapper/3/0
CPU: 3 UID: 0 PID: 0 Comm: swapper/3 Not tainted 6.16.0-862.14.0.6.x86_64
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
Call Trace:
<IRQ>
dump_stack_lvl+0x55/0x70
print_address_description.constprop.0+0x2c/0x390
print_report+0xb4/0x270
kasan_report+0xb8/0xf0
end_buffer_read_sync+0xe3/0x110
end_bio_bh_io_sync+0x56/0x80
blk_update_request+0x30a/0x720
scsi_end_request+0x51/0x2b0
scsi_io_completion+0xe3/0x480
? scsi_device_unbusy+0x11e/0x160
blk_complete_reqs+0x7b/0x90
handle_softirqs+0xef/0x370
irq_exit_rcu+0xa5/0xd0
sysvec_apic_timer_interrupt+0x6e/0x90
</IRQ>
The above issue happens when mounting an ntfs3 filesystem; it may
occur as follows:
mount                                   IRQ
ntfs_fill_super
  read_cache_page
    do_read_cache_folio
      filemap_read_folio
        mpage_read_folio
          do_mpage_readpage
            ntfs_get_block_vbo
              bh_read
                submit_bh
                wait_on_buffer(bh);
                                        blk_complete_reqs
                                          scsi_io_completion
                                            scsi_end_request
                                              blk_update_request
                                                end_bio_bh_io_sync
                                                  end_buffer_read_sync
                                                    __end_buffer_read_notouch
                                                      unlock_buffer
                wait_on_buffer(bh); --> returns to caller
                                                      put_bh
                                                      --> trigger stack-out-of-bounds
In the mpage_read_folio() function, the stack variable 'map_bh' is
passed to ntfs_get_block_vbo(). Once unlock_buffer() unlocks and
wait_on_buffer() returns to continue processing, the stack variable
is likely to be reclaimed. Consequently, during the end_buffer_read_sync()
process, calling put_bh() may result in stack overrun.
If the bh is not allocated on the stack, it belongs to a folio. Freeing
a buffer head which belongs to a folio is done by drop_buffers() which
will fail to free buffers which are still locked. So it is safe to call
put_bh() before __end_buffer_read_notouch().
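The described ordering, sketched (mirroring the helpers named above):

  void end_buffer_read_sync(struct buffer_head *bh, int uptodate)
  {
          /* Drop our reference first: unlock_buffer() inside
           * __end_buffer_read_notouch() wakes the waiter, after which
           * a stack-allocated bh may vanish under us. */
          put_bh(bh);
          __end_buffer_read_notouch(bh, uptodate);
  }

Because drop_buffers() refuses to free locked buffers, a folio-backed
bh cannot disappear before the unlock, so the early put_bh() is safe
in both cases. |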
In the Linux kernel, the following vulnerability has been resolved:
media: usbtv: Lock resolution while streaming
When a program is streaming (ffplay) and another program (qv4l2)
changes the TV standard from NTSC to PAL, the kernel crashes due to
trying to copy to unmapped memory.
Changing from NTSC to PAL increases the resolution in the usbtv struct,
but the video plane buffer isn't adjusted, so it overflows.
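A sketch of the guard (the queue field name and the register helper
are assumptions):

  static int usbtv_s_std(struct file *file, void *priv, v4l2_std_id norm)
  {
          struct usbtv *usbtv = video_drvdata(file);

          /* Refuse to change the norm, and hence the resolution,
           * while buffers are in use; the plane size would no longer
           * match the queued buffers. */
          if (vb2_is_busy(&usbtv->vb2q))
                  return -EBUSY;

          return usbtv_set_norm(usbtv, norm);     /* hypothetical helper */
  }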
[hverkuil: call vb2_is_busy instead of vb2_is_streaming] |
In the Linux kernel, the following vulnerability has been resolved:
parisc: Revise __get_user() to probe user read access
Because of the way read access support is implemented, read access
interruptions are only triggered at privilege levels 2 and 3. The
kernel executes at privilege level 0, so __get_user() never triggers
a read access interruption (code 26). Thus, it is currently possible
for user code to access a read protected address via a system call.
Fix this by probing read access rights at privilege level 3 (PRIV_USER)
and setting __gu_err to -EFAULT (-14) if access isn't allowed.
Note that the cmpiclr instruction does a 32-bit compare because the
COND macro doesn't work inside asm. |
In the Linux kernel, the following vulnerability has been resolved:
drm/amdgpu: check if hubbub is NULL in debugfs/amdgpu_dm_capabilities
The HUBBUB structure is not initialized on DCE hardware, so check
whether it is NULL to avoid a NULL dereference while accessing the
amdgpu_dm_capabilities file in debugfs.
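A minimal sketch of the guard in the debugfs show handler (the
structure layout is assumed):

  static int amdgpu_dm_capabilities_show(struct seq_file *m, void *unused)
  {
          struct amdgpu_device *adev = m->private;
          struct hubbub *hubbub = adev->dm.dc->res_pool->hubbub;

          /* DCE hardware never initializes HUBBUB. */
          if (!hubbub)
                  return 0;

          /* ... query hubbub and print capabilities ... */
          return 0;
  }

Returning early with no output keeps the file readable on DCE instead
of oopsing. |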
In the Linux kernel, the following vulnerability has been resolved:
RDMA/rxe: Flush delayed SKBs while releasing RXE resources
When skb packets are sent out, they still depend on rxe resources such
as the QP and sk until the packets are destroyed. If these rxe
resources have already been released by the time the skb packets are
destroyed, call traces will appear.
To avoid skb packets hanging too long in some network devices, a
timestamp is added when these skb packets are created. If an skb
packet stays too long in a network device, it can be freed so that the
rxe resources are released.
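A sketch of the age check (the deadline constant is an assumption):

  #define RXE_SKB_MAX_AGE_MS 5000         /* hypothetical deadline */

  /* True if a delayed skb has outlived its deadline and should be
   * flushed so its QP/socket references can be dropped. */
  static bool rxe_skb_expired(const struct sk_buff *skb)
  {
          return ktime_after(ktime_get(),
                             ktime_add_ms(skb->tstamp, RXE_SKB_MAX_AGE_MS));
  }

Resource teardown can then walk the pending skbs and kfree_skb()
anything expired instead of waiting indefinitely. |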
In the Linux kernel, the following vulnerability has been resolved:
platform/x86/amd/hsmp: Ensure sock->metric_tbl_addr is non-NULL
If the metric table address is not allocated, accessing metrics_bin
will result in a NULL pointer dereference, so add a check.
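A sketch of the check in the read path (size handling simplified):

  static ssize_t hsmp_metric_tbl_read(struct hsmp_socket *sock,
                                      char *buf, size_t size)
  {
          /* The table may never have been allocated/mapped. */
          if (!sock->metric_tbl_addr)
                  return -ENOMEM;

          memcpy_fromio(buf, sock->metric_tbl_addr, size);
          return size;
  }

The error return keeps a sysfs read from dereferencing a NULL iomem
pointer. |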
In the Linux kernel, the following vulnerability has been resolved:
gve: prevent ethtool ops after shutdown
A crash can occur if an ethtool operation is invoked
after shutdown() is called.
shutdown() is invoked during system shutdown to stop DMA operations
without performing expensive deallocations. It is discouraged to
unregister the netdev in this path, so the device may still be visible
to userspace and kernel helpers.
In gve, shutdown() tears down most internal data structures. If an
ethtool operation is dispatched after shutdown(), it will dereference
freed or NULL pointers, leading to a kernel panic. While graceful
shutdown normally quiesces userspace before invoking the reboot
syscall, forced shutdowns (as observed on GCP VMs) can still trigger
this path.
Fix by calling netif_device_detach() in shutdown().
This marks the device as detached so the ethtool ioctl handler
will skip dispatching operations to the driver.
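A sketch of the described shutdown path (teardown details elided):

  static void gve_shutdown(struct pci_dev *pdev)
  {
          struct net_device *netdev = pci_get_drvdata(pdev);

          /* Mark the device absent first: the ethtool ioctl core
           * checks netif_device_present() and bails out with -ENODEV. */
          netif_device_detach(netdev);

          /* ... stop queues and DMA without expensive deallocation ... */
  }

With the device detached, a late ethtool call fails cleanly instead of
dereferencing freed state. |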
In the Linux kernel, the following vulnerability has been resolved:
media: iris: Fix NULL pointer dereference
A warning reported by smatch indicated a possible NULL pointer
dereference, where one of the arguments to the API
"iris_hfi_gen2_handle_system_error" could sometimes be NULL.
To fix this, add a check to validate that the argument passed is not
null before accessing its members. |
In the Linux kernel, the following vulnerability has been resolved:
netfilter: nf_reject: don't leak dst refcount for loopback packets
Recent patches to add a WARN() when replacing an skb dst entry found
an old bug:
WARNING: include/linux/skbuff.h:1165 skb_dst_check_unset include/linux/skbuff.h:1164 [inline]
WARNING: include/linux/skbuff.h:1165 skb_dst_set include/linux/skbuff.h:1210 [inline]
WARNING: include/linux/skbuff.h:1165 nf_reject_fill_skb_dst+0x2a4/0x330 net/ipv4/netfilter/nf_reject_ipv4.c:234
[..]
Call Trace:
nf_send_unreach+0x17b/0x6e0 net/ipv4/netfilter/nf_reject_ipv4.c:325
nft_reject_inet_eval+0x4bc/0x690 net/netfilter/nft_reject_inet.c:27
expr_call_ops_eval net/netfilter/nf_tables_core.c:237 [inline]
..
This is because the blamed commit forgot about loopback packets.
Such packets already have a dst_entry attached, even at the
PRE_ROUTING stage. Instead of checking the hook, just check whether
the skb already has a route attached to it.
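A sketch of the adjusted condition (call site simplified; the function
name marks it as a sketch):

  static void nf_send_unreach_sketch(struct sk_buff *oldskb)
  {
          /* Loopback skbs already carry a dst even at PRE_ROUTING, so
           * attach a route only when one is genuinely missing. */
          if (!skb_dst(oldskb) && nf_reject_fill_skb_dst(oldskb) < 0)
                  return;

          /* ... build and send the ICMP error ... */
  }

Checking the skb itself rather than the hook covers loopback traffic
at every hook point. |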
In the Linux kernel, the following vulnerability has been resolved:
s390/mm: Do not map lowcore with identity mapping
Since the identity mapping is pinned to address zero, the lowcore is
always also mapped to address zero; this happens regardless of the
relocate_lowcore command line option. If the option is specified, the
lowcore is mapped twice instead of only once.
This means that NULL pointer accesses will succeed instead of causing an
exception (low address protection still applies, but covers only parts).
To fix this, never map the first two pages of physical memory with the
identity mapping. |
In the Linux kernel, the following vulnerability has been resolved:
netfs: Fix unbuffered write error handling
If all the subrequests in an unbuffered write stream fail, the subrequest
collector doesn't update the stream->transferred value and it retains its
initial LONG_MAX value. Unfortunately, if all active streams fail, then we
take the smallest value of { LONG_MAX, LONG_MAX, ... } as the value to set
in wreq->transferred - which is then returned from ->write_iter().
LONG_MAX was chosen as the initial value so that all the streams can be
quickly assessed by taking the smallest value of all stream->transferred -
but this only works if we've set any of them.
Fix this by adding a flag to indicate whether the value in
stream->transferred is valid and checking that when we integrate the
values. stream->transferred can then be initialised to zero.
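A sketch of the valid-flag aggregation (collector context simplified;
the flag name follows the description above):

  /* Take the smallest transferred count across the streams, but only
   * from streams that actually recorded one; if none did, report 0. */
  static size_t netfs_min_transferred(const struct netfs_io_stream *s,
                                      int nr_streams)
  {
          size_t min_xfer = SIZE_MAX;
          bool any = false;
          int i;

          for (i = 0; i < nr_streams; i++) {
                  if (!s[i].transferred_valid)
                          continue;
                  any = true;
                  min_xfer = min_t(size_t, min_xfer, s[i].transferred);
          }
          return any ? min_xfer : 0;
  }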
This was found by running the generic/750 xfstest against cifs with
cache=none. It splices data to the target file. Once (if) it has used up
all the available scratch space, the writes start failing with ENOSPC.
This causes ->write_iter() to fail. However, it was returning
wreq->transferred, i.e. LONG_MAX, rather than an error (because it thought
the amount transferred was non-zero) and iter_file_splice_write() would
then try to clean up that amount of pipe bufferage - leading to an oops
when it overran. The kernel log showed:
CIFS: VFS: Send error in write = -28
followed by:
BUG: kernel NULL pointer dereference, address: 0000000000000008
with:
RIP: 0010:iter_file_splice_write+0x3a4/0x520
do_splice+0x197/0x4e0
or:
RIP: 0010:pipe_buf_release (include/linux/pipe_fs_i.h:282)
iter_file_splice_write (fs/splice.c:755)
Also put a warning check into splice to announce if ->write_iter() returned
that it had written more than it was asked to. |