In the Linux kernel, the following vulnerability has been resolved:
net: reenable NETIF_F_IPV6_CSUM offload for BIG TCP packets
The blamed commit disabled hardware offload of IPv6 packets with
extension headers on devices that advertise NETIF_F_IPV6_CSUM,
based on the definition of that feature in skbuff.h:
* * - %NETIF_F_IPV6_CSUM
* - Driver (device) is only able to checksum plain
* TCP or UDP packets over IPv6. These are specifically
* unencapsulated packets of the form IPv6|TCP or
* IPv6|UDP where the Next Header field in the IPv6
* header is either TCP or UDP. IPv6 extension headers
* are not supported with this feature. This feature
* cannot be set in features for a device with
* NETIF_F_HW_CSUM also set. This feature is being
* DEPRECATED (see below).
The change causes skb_warn_bad_offload to fire for BIG TCP
packets.
[ 496.310233] WARNING: CPU: 13 PID: 23472 at net/core/dev.c:3129 skb_warn_bad_offload+0xc4/0xe0
[ 496.310297] ? skb_warn_bad_offload+0xc4/0xe0
[ 496.310300] skb_checksum_help+0x129/0x1f0
[ 496.310303] skb_csum_hwoffload_help+0x150/0x1b0
[ 496.310306] validate_xmit_skb+0x159/0x270
[ 496.310309] validate_xmit_skb_list+0x41/0x70
[ 496.310312] sch_direct_xmit+0x5c/0x250
[ 496.310317] __qdisc_run+0x388/0x620
BIG TCP introduced an IPV6_TLV_JUMBO IPv6 extension header to
communicate packet length, as this is an IPv6 jumbogram. But, the
feature is only enabled on devices that support BIG TCP TSO. The
header is only present for PF_PACKET taps like tcpdump, and not
transmitted by physical devices.
For this specific case of extension headers that are not
transmitted, return to the situation before the blamed commit
and support hardware offload.
ipv6_has_hopopt_jumbo() tests not only whether this header is present,
but also that it is the only extension header before a terminal (L4)
header. |
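A minimal, self-contained sketch of the offload decision described above (all struct and function names here are hypothetical; the only real helper referenced is ipv6_has_hopopt_jumbo(), and the real logic lives in net/core):

/*
 * Simplified model, not the kernel implementation: the jumbo hop-by-hop
 * option is treated as "no extension header" for checksum-offload
 * purposes because it is stripped before the packet hits the wire.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_pkt {
	bool has_ext_headers;     /* any IPv6 extension header present */
	bool only_hopopt_jumbo;   /* sole ext header is HBH with jumbo TLV */
};

/* Mirrors the idea of ipv6_has_hopopt_jumbo(): the jumbo hop-by-hop
 * option is the only extension header before the L4 header. */
static bool only_jumbo_hopopt(const struct fake_pkt *p)
{
	return p->has_ext_headers && p->only_hopopt_jumbo;
}

/* Can we hand the packet to a NETIF_F_IPV6_CSUM device? */
static bool ipv6_csum_offload_ok(const struct fake_pkt *p)
{
	if (!p->has_ext_headers)
		return true;
	return only_jumbo_hopopt(p);
}

int main(void)
{
	struct fake_pkt big_tcp = { .has_ext_headers = true,
				    .only_hopopt_jumbo = true };
	struct fake_pkt routed  = { .has_ext_headers = true,
				    .only_hopopt_jumbo = false };

	printf("BIG TCP packet offloadable: %d\n", ipv6_csum_offload_ok(&big_tcp));
	printf("other ext-header packet:    %d\n", ipv6_csum_offload_ok(&routed));
	return 0;
}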
In the Linux kernel, the following vulnerability has been resolved:
ALSA: memalloc: prefer dma_mapping_error() over explicit address checking
With CONFIG_DMA_API_DEBUG enabled, the following warning is observed:
DMA-API: snd_hda_intel 0000:03:00.1: device driver failed to check map error[device address=0x00000000ffff0000] [size=20480 bytes] [mapped as single]
WARNING: CPU: 28 PID: 2255 at kernel/dma/debug.c:1036 check_unmap+0x1408/0x2430
CPU: 28 UID: 42 PID: 2255 Comm: wireplumber Tainted: G W L 6.12.0-10-133577cad6bf48e5a7848c4338124081393bfe8a+ #759
debug_dma_unmap_page+0xe9/0xf0
snd_dma_wc_free+0x85/0x130 [snd_pcm]
snd_pcm_lib_free_pages+0x1e3/0x440 [snd_pcm]
snd_pcm_common_ioctl+0x1c9a/0x2960 [snd_pcm]
snd_pcm_ioctl+0x6a/0xc0 [snd_pcm]
...
Check returned DMA addresses with the specialized dma_mapping_error()
helper, which is generally recommended for this purpose by
Documentation/core-api/dma-api.rst. |
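A hedged sketch of the pattern recommended by dma-api.rst (this is not the actual snd_dma_wc code; dev, buf and size are assumed to be supplied by the caller):

#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Returns 0 on success, -ENOMEM if the mapping failed. */
static int map_buffer_checked(struct device *dev, void *buf, size_t size,
			      dma_addr_t *addr)
{
	*addr = dma_map_single(dev, buf, size, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, *addr))
		return -ENOMEM;	/* never compare *addr against magic values */
	return 0;
}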
In the Linux kernel, the following vulnerability has been resolved:
power: supply: gpio-charger: Fix set charge current limits
Fix setting charge current limits for devices whose lowest charge current
limit is greater than zero. If the requested charge current limit is below
the lowest limit, the index equals current_limit_map_size, which leads to
accessing memory beyond the allocated memory. |
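A self-contained toy model of the out-of-bounds lookup and the clamp that avoids it (the table contents and function names are hypothetical, not the gpio-charger driver code):

#include <stdio.h>
#include <stddef.h>

struct limit_entry { unsigned int limit_ua; unsigned int gpio_state; };

static const struct limit_entry current_limit_map[] = {
	{ 1500000, 2 }, { 500000, 1 }, { 100000, 0 },	/* descending */
};
static const size_t current_limit_map_size =
	sizeof(current_limit_map) / sizeof(current_limit_map[0]);

static const struct limit_entry *pick_limit(unsigned int requested_ua)
{
	size_t i;

	for (i = 0; i < current_limit_map_size; i++)
		if (requested_ua >= current_limit_map[i].limit_ua)
			break;

	/* Without this clamp, i == current_limit_map_size for requests
	 * below the smallest entry and the caller would index past the
	 * end of the map. */
	if (i == current_limit_map_size)
		i = current_limit_map_size - 1;

	return &current_limit_map[i];
}

int main(void)
{
	printf("50000 uA -> entry with %u uA\n", pick_limit(50000)->limit_ua);
	return 0;
}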
In the Linux kernel, the following vulnerability has been resolved:
net/smc: check return value of sock_recvmsg when draining clc data
When receiving a CLC message, the length field in smc_clc_msg_hdr indicates
how many bytes should be received from the network; this value should not
be fully trusted, as it comes from the network. If the length exceeds
buflen in smc_clc_wait_msg(), the function may enter an endless loop while
trying to drain the remaining data beyond buflen.
This patch checks the return value of sock_recvmsg() when draining data to
avoid such an endless loop. |
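A simplified userspace model of a drain loop that checks the receive return value (hypothetical names; the kernel code uses sock_recvmsg() rather than recv()):

#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Keep reading the extra bytes the peer claims to have sent, but stop
 * on error or EOF so a lying length field cannot spin the loop forever. */
static int drain_excess(int fd, size_t remaining)
{
	char scratch[256];

	while (remaining > 0) {
		size_t chunk = remaining < sizeof(scratch) ? remaining
							   : sizeof(scratch);
		ssize_t got = recv(fd, scratch, chunk, 0);

		if (got <= 0)		/* error or peer closed: give up */
			return -1;	/* instead of looping forever */
		remaining -= (size_t)got;
	}
	return 0;
}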
In Progress Chef Automate, versions earlier than 4.13.295, on Linux x86 platform, an authenticated attacker can gain access to Chef Automate restricted functionality in multiple services via improperly neutralized inputs used in an SQL command. |
In Progress Chef Automate, versions earlier than 4.13.295, on Linux x86 platform, an authenticated attacker can gain access to Chef Automate restricted functionality in the compliance service via improperly neutralized inputs used in an SQL command using a well-known token. |
In the Linux kernel, the following vulnerability has been resolved:
accel/ivpu: Fix general protection fault in ivpu_bo_list()
Check if ctx is not NULL before accessing its fields. |
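A minimal illustration of the defensive check described above (hypothetical struct, not the ivpu driver code):

#include <stdio.h>

struct ctx { int id; };

static void print_ctx(const struct ctx *ctx)
{
	if (!ctx) {			/* bail out instead of faulting */
		printf("no context\n");
		return;
	}
	printf("ctx id=%d\n", ctx->id);
}

int main(void)
{
	print_ctx(NULL);
	return 0;
}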
In the Linux kernel, the following vulnerability has been resolved:
net: renesas: rswitch: avoid use-after-put for a device tree node
The device tree node saved in the rswitch_device structure is used at
several driver locations. So passing this node to of_node_put() after
the first use is wrong.
Move of_node_put() for this node to exit paths. |
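A self-contained model of the reference-counting rule described above (hypothetical node type; the real code takes and drops references on a struct device_node with of_node_get()/of_node_put()):

#include <stdio.h>

/* Dropping the reference right after the first use would leave later
 * uses with a node that may already be freed; the single put belongs
 * on the exit path instead. */
struct node { int refcount; const char *name; };

static struct node *node_get(struct node *n) { n->refcount++; return n; }

static void node_put(struct node *n)
{
	if (--n->refcount == 0)
		printf("%s released\n", n->name);
}

static int setup_device(struct node *port_node)
{
	struct node *np = node_get(port_node);

	printf("first use of %s\n", np->name);
	/* ... more uses of np across the driver's lifetime ... */
	printf("later use of %s\n", np->name);

	node_put(np);		/* single put on the exit path */
	return 0;
}

int main(void)
{
	struct node port = { .refcount = 1, .name = "port@0" };

	setup_device(&port);
	node_put(&port);
	return 0;
}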
In the Linux kernel, the following vulnerability has been resolved:
KVM: x86: Play nice with protected guests in complete_hypercall_exit()
Use is_64_bit_hypercall() instead of is_64_bit_mode() to detect a 64-bit
hypercall when completing said hypercall. For guests with protected state,
e.g. SEV-ES and SEV-SNP, KVM must assume the hypercall was made in 64-bit
mode as the vCPU state needed to detect 64-bit mode is unavailable.
Hacking the sev_smoke_test selftest to generate a KVM_HC_MAP_GPA_RANGE
hypercall via VMGEXIT trips the WARN:
------------[ cut here ]------------
WARNING: CPU: 273 PID: 326626 at arch/x86/kvm/x86.h:180 complete_hypercall_exit+0x44/0xe0 [kvm]
Modules linked in: kvm_amd kvm ... [last unloaded: kvm]
CPU: 273 UID: 0 PID: 326626 Comm: sev_smoke_test Not tainted 6.12.0-smp--392e932fa0f3-feat #470
Hardware name: Google Astoria/astoria, BIOS 0.20240617.0-0 06/17/2024
RIP: 0010:complete_hypercall_exit+0x44/0xe0 [kvm]
Call Trace:
<TASK>
kvm_arch_vcpu_ioctl_run+0x2400/0x2720 [kvm]
kvm_vcpu_ioctl+0x54f/0x630 [kvm]
__se_sys_ioctl+0x6b/0xc0
do_syscall_64+0x83/0x160
entry_SYSCALL_64_after_hwframe+0x76/0x7e
</TASK>
---[ end trace 0000000000000000 ]--- |
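A self-contained model of the check described above (hypothetical struct and function names; the real helpers are is_64_bit_mode() and is_64_bit_hypercall() in KVM's x86 code):

#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool guest_state_protected;	/* e.g. SEV-ES / SEV-SNP */
	bool long_mode;			/* readable only if unprotected */
};

static bool is_64_bit_mode_model(const struct vcpu_model *v)
{
	/* Reading this for a protected guest would trip the WARN. */
	return v->long_mode;
}

/* For protected guests the register state is unavailable, so the
 * hypercall is assumed to have been made in 64-bit mode. */
static bool is_64_bit_hypercall_model(const struct vcpu_model *v)
{
	return v->guest_state_protected || is_64_bit_mode_model(v);
}

int main(void)
{
	struct vcpu_model sev_es = { .guest_state_protected = true };

	printf("treat hypercall as 64-bit: %d\n",
	       is_64_bit_hypercall_model(&sev_es));
	return 0;
}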
IBM Engineering Requirements Management Doors Next 7.0.2, 7.0.3, and 7.1 could allow an authenticated user to cause a denial of service by uploading specially crafted files using uncontrolled recursion. |
IBM Engineering Requirements Management Doors Next 7.0.2, 7.0.3, and 7.1 could allow an authenticated user on the network to spoof email identity of the sender due to improper verification of source data. |
IBM Engineering Requirements Management Doors Next 7.0.2, 7.0.3, and 7.1 could allow an authenticated user on the network to delete reviews from other users due to client-side enforcement of server-side security. |
IBM Engineering Requirements Management Doors Next 7.0.2, 7.0.3, and 7.1 could allow an authenticated user on the network to delete comments from other users due to client-side enforcement of server-side security. |
In the Linux kernel, the following vulnerability has been resolved:
pfifo_tail_enqueue: Drop new packet when sch->limit == 0
Expected behaviour:
In case we reach the scheduler's limit, pfifo_tail_enqueue() will drop a
packet in the scheduler's queue and decrease the scheduler's qlen by one.
Then, pfifo_tail_enqueue() enqueues the new packet and increases the
scheduler's qlen by one. Finally, pfifo_tail_enqueue() returns the
`NET_XMIT_CN` status code.
Weird behaviour:
In case we set `sch->limit == 0` and trigger pfifo_tail_enqueue() on a
scheduler that has no packets, the 'drop a packet' step does nothing.
This means the scheduler's qlen still equals 0.
Then, we continue to enqueue the new packet and increase the scheduler's
qlen by one. In summary, we can leverage pfifo_tail_enqueue() to increase
qlen by one and still return the `NET_XMIT_CN` status code.
The problem is:
Let's say we have two qdiscs: Qdisc_A and Qdisc_B.
- Qdisc_A's type must have '->graft()' function to create parent/child relationship.
Let's say Qdisc_A's type is `hfsc`. Enqueue packet to this qdisc will trigger `hfsc_enqueue`.
- Qdisc_B's type is pfifo_head_drop. Enqueue packet to this qdisc will trigger `pfifo_tail_enqueue`.
- Qdisc_B is configured to have `sch->limit == 0`.
- Qdisc_A is configured to route the enqueued packet to Qdisc_B.
Enqueue packet through Qdisc_A will lead to:
- hfsc_enqueue(Qdisc_A) -> pfifo_tail_enqueue(Qdisc_B)
- Qdisc_B->q.qlen += 1
- pfifo_tail_enqueue() return `NET_XMIT_CN`
- hfsc_enqueue() checks for `NET_XMIT_SUCCESS`, sees `NET_XMIT_CN`, and therefore does not increase the qlen of Qdisc_A.
The whole process leads to a situation where Qdisc_A->q.qlen == 0 and Qdisc_B->q.qlen == 1.
Replacing 'hfsc' with another type (for example, 'drr') still leads to the same problem.
This violates the design where the parent's qlen should equal the sum of its children's qlen.
Bug impact: This issue can be used for user->kernel privilege escalation when it is reachable. |
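A self-contained model of the enqueue behaviour described above, including the fix of dropping the new packet when the limit is 0 (hypothetical types; not the sch_fifo code):

#include <stdio.h>

#define NET_XMIT_SUCCESS 0
#define NET_XMIT_CN      1
#define NET_XMIT_DROP    2

struct fifo { unsigned int qlen, limit; };

static int tail_enqueue(struct fifo *q)
{
	if (q->qlen < q->limit) {
		q->qlen++;			/* room left: plain enqueue */
		return NET_XMIT_SUCCESS;
	}
	if (q->limit == 0)
		return NET_XMIT_DROP;		/* nothing to replace: drop the new packet */

	/* full but limit > 0: drop one queued packet and enqueue the new
	 * one, so qlen stays at the limit */
	return NET_XMIT_CN;
}

int main(void)
{
	struct fifo q = { .qlen = 0, .limit = 0 };

	/* With the fix, qlen stays 0 and the parent/child accounting
	 * remains consistent. */
	printf("ret=%d qlen=%u\n", tail_enqueue(&q), q.qlen);
	return 0;
}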
In the Linux kernel, the following vulnerability has been resolved:
nvme-rdma: unquiesce admin_q before destroy it
The kernel will hang on destroying the admin_q when creating the ctrl
fails, as in the following calltrace:
PID: 23644 TASK: ff2d52b40f439fc0 CPU: 2 COMMAND: "nvme"
#0 [ff61d23de260fb78] __schedule at ffffffff8323bc15
#1 [ff61d23de260fc08] schedule at ffffffff8323c014
#2 [ff61d23de260fc28] blk_mq_freeze_queue_wait at ffffffff82a3dba1
#3 [ff61d23de260fc78] blk_freeze_queue at ffffffff82a4113a
#4 [ff61d23de260fc90] blk_cleanup_queue at ffffffff82a33006
#5 [ff61d23de260fcb0] nvme_rdma_destroy_admin_queue at ffffffffc12686ce
#6 [ff61d23de260fcc8] nvme_rdma_setup_ctrl at ffffffffc1268ced
#7 [ff61d23de260fd28] nvme_rdma_create_ctrl at ffffffffc126919b
#8 [ff61d23de260fd68] nvmf_dev_write at ffffffffc024f362
#9 [ff61d23de260fe38] vfs_write at ffffffff827d5f25
RIP: 00007fda7891d574 RSP: 00007ffe2ef06958 RFLAGS: 00000202
RAX: ffffffffffffffda RBX: 000055e8122a4d90 RCX: 00007fda7891d574
RDX: 000000000000012b RSI: 000055e8122a4d90 RDI: 0000000000000004
RBP: 00007ffe2ef079c0 R8: 000000000000012b R9: 000055e8122a4d90
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000004
R13: 000055e8122923c0 R14: 000000000000012b R15: 00007fda78a54500
ORIG_RAX: 0000000000000001 CS: 0033 SS: 002b
This is because we quiesced the admin_q before cancelling requests, but
forgot to unquiesce it before destroying it; as a result we fail to drain
the pending requests and hang on blk_mq_freeze_queue_wait() forever. Reuse
nvme_rdma_teardown_admin_queue() to fix this issue and simplify the
code. |
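A self-contained model of the teardown ordering described above (hypothetical queue type; the real code quiesces and unquiesces the admin request queue via the blk-mq API):

#include <stdbool.h>
#include <stdio.h>

struct queue { bool quiesced; unsigned int pending; };

static void quiesce(struct queue *q)   { q->quiesced = true; }
static void unquiesce(struct queue *q) { q->quiesced = false; }

/* Destroying a still-quiesced queue leaves pending requests undrained,
 * so the freeze wait never completes. */
static bool destroy_queue(struct queue *q)
{
	if (q->quiesced && q->pending) {
		printf("would hang: %u pending requests cannot drain\n",
		       q->pending);
		return false;
	}
	q->pending = 0;		/* drain/fail pending requests, then free */
	printf("queue destroyed\n");
	return true;
}

int main(void)
{
	struct queue admin_q = { .pending = 3 };

	quiesce(&admin_q);	 /* done before cancelling requests */
	destroy_queue(&admin_q); /* buggy order: still quiesced -> hang */

	unquiesce(&admin_q);	 /* fixed order: unquiesce first */
	destroy_queue(&admin_q);
	return 0;
}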
In the Linux kernel, the following vulnerability has been resolved:
nilfs2: prevent use of deleted inode
syzbot reported a WARNING in nilfs_rmdir. [1]
Because the inode bitmap is corrupted, an inode with an inode number that
should exist as a ".nilfs" file was reassigned by nilfs_mkdir for "file0",
causing an inode duplication during execution. This causes an
underflow of i_nlink in rmdir operations.
The inode is used twice by the same task to unmount and remove the
directories ".nilfs" and "file0", which triggers the warning in nilfs_rmdir.
To avoid this issue, check i_nlink in nilfs_iget(): if it is 0, the inode
has been deleted, and iput is executed to reclaim it.
[1]
WARNING: CPU: 1 PID: 5824 at fs/inode.c:407 drop_nlink+0xc4/0x110 fs/inode.c:407
...
Call Trace:
<TASK>
nilfs_rmdir+0x1b0/0x250 fs/nilfs2/namei.c:342
vfs_rmdir+0x3a3/0x510 fs/namei.c:4394
do_rmdir+0x3b5/0x580 fs/namei.c:4453
__do_sys_rmdir fs/namei.c:4472 [inline]
__se_sys_rmdir fs/namei.c:4470 [inline]
__x64_sys_rmdir+0x47/0x50 fs/namei.c:4470
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f |
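A self-contained model of the lookup check described above (hypothetical inode table and error value; the real fix rejects the inode in nilfs_iget() when i_nlink is 0):

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct model_inode { unsigned int i_nlink; unsigned long i_ino; };

static struct model_inode inode_table[] = {
	{ .i_nlink = 2, .i_ino = 12 },
	{ .i_nlink = 0, .i_ino = 13 },	/* deleted, but bitmap says "free" */
};

/* An on-disk inode whose link count is already zero has been deleted,
 * so handing it out again would later underflow i_nlink in rmdir. */
static struct model_inode *model_iget(unsigned long ino, int *err)
{
	for (size_t i = 0; i < sizeof(inode_table) / sizeof(inode_table[0]); i++) {
		struct model_inode *inode = &inode_table[i];

		if (inode->i_ino != ino)
			continue;
		if (inode->i_nlink == 0) {	/* refuse deleted inodes */
			*err = -ESTALE;
			return NULL;
		}
		*err = 0;
		return inode;
	}
	*err = -ENOENT;
	return NULL;
}

int main(void)
{
	int err;

	if (!model_iget(13, &err))
		printf("lookup of deleted inode rejected: %d\n", err);
	return 0;
}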
In the Linux kernel, the following vulnerability has been resolved:
riscv: Fix IPIs usage in kfence_protect_page()
flush_tlb_kernel_range() may use IPIs to flush the TLBs of all the
cores, which triggers the following warning when the irqs are disabled:
[ 3.455330] WARNING: CPU: 1 PID: 0 at kernel/smp.c:815 smp_call_function_many_cond+0x452/0x520
[ 3.456647] Modules linked in:
[ 3.457218] CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Not tainted 6.12.0-rc7-00010-g91d3de7240b8 #1
[ 3.457416] Hardware name: QEMU QEMU Virtual Machine, BIOS
[ 3.457633] epc : smp_call_function_many_cond+0x452/0x520
[ 3.457736] ra : on_each_cpu_cond_mask+0x1e/0x30
[ 3.457786] epc : ffffffff800b669a ra : ffffffff800b67c2 sp : ff2000000000bb50
[ 3.457824] gp : ffffffff815212b8 tp : ff6000008014f080 t0 : 000000000000003f
[ 3.457859] t1 : ffffffff815221e0 t2 : 000000000000000f s0 : ff2000000000bc10
[ 3.457920] s1 : 0000000000000040 a0 : ffffffff815221e0 a1 : 0000000000000001
[ 3.457953] a2 : 0000000000010000 a3 : 0000000000000003 a4 : 0000000000000000
[ 3.458006] a5 : 0000000000000000 a6 : ffffffffffffffff a7 : 0000000000000000
[ 3.458042] s2 : ffffffff815223be s3 : 00fffffffffff000 s4 : ff600001ffe38fc0
[ 3.458076] s5 : ff600001ff950d00 s6 : 0000000200000120 s7 : 0000000000000001
[ 3.458109] s8 : 0000000000000001 s9 : ff60000080841ef0 s10: 0000000000000001
[ 3.458141] s11: ffffffff81524812 t3 : 0000000000000001 t4 : ff60000080092bc0
[ 3.458172] t5 : 0000000000000000 t6 : ff200000000236d0
[ 3.458203] status: 0000000200000100 badaddr: ffffffff800b669a cause: 0000000000000003
[ 3.458373] [<ffffffff800b669a>] smp_call_function_many_cond+0x452/0x520
[ 3.458593] [<ffffffff800b67c2>] on_each_cpu_cond_mask+0x1e/0x30
[ 3.458625] [<ffffffff8000e4ca>] __flush_tlb_range+0x118/0x1ca
[ 3.458656] [<ffffffff8000e6b2>] flush_tlb_kernel_range+0x1e/0x26
[ 3.458683] [<ffffffff801ea56a>] kfence_protect+0xc0/0xce
[ 3.458717] [<ffffffff801e9456>] kfence_guarded_free+0xc6/0x1c0
[ 3.458742] [<ffffffff801e9d6c>] __kfence_free+0x62/0xc6
[ 3.458764] [<ffffffff801c57d8>] kfree+0x106/0x32c
[ 3.458786] [<ffffffff80588cf2>] detach_buf_split+0x188/0x1a8
[ 3.458816] [<ffffffff8058708c>] virtqueue_get_buf_ctx+0xb6/0x1f6
[ 3.458839] [<ffffffff805871da>] virtqueue_get_buf+0xe/0x16
[ 3.458880] [<ffffffff80613d6a>] virtblk_done+0x5c/0xe2
[ 3.458908] [<ffffffff8058766e>] vring_interrupt+0x6a/0x74
[ 3.458930] [<ffffffff800747d8>] __handle_irq_event_percpu+0x7c/0xe2
[ 3.458956] [<ffffffff800748f0>] handle_irq_event+0x3c/0x86
[ 3.458978] [<ffffffff800786cc>] handle_simple_irq+0x9e/0xbe
[ 3.459004] [<ffffffff80073934>] generic_handle_domain_irq+0x1c/0x2a
[ 3.459027] [<ffffffff804bf87c>] imsic_handle_irq+0xba/0x120
[ 3.459056] [<ffffffff80073934>] generic_handle_domain_irq+0x1c/0x2a
[ 3.459080] [<ffffffff804bdb76>] riscv_intc_aia_irq+0x24/0x34
[ 3.459103] [<ffffffff809d0452>] handle_riscv_irq+0x2e/0x4c
[ 3.459133] [<ffffffff809d923e>] call_on_irq_stack+0x32/0x40
So only flush the local TLB and let the lazy kfence page fault handling
deal with the faults which could happen when a core has an old protected
pte version cached in its TLB. That leads to potential inaccuracies which
can be tolerated when using kfence. |
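A hedged, kernel-style sketch of the resulting flush (pte_for_addr() is a hypothetical stand-in for the real pte lookup; the essential change is flushing only the local TLB instead of calling flush_tlb_kernel_range()):

#include <linux/pgtable.h>
#include <asm/tlbflush.h>

/* Toggling the present bit on a kfence page must not send IPIs, since
 * this can run with interrupts disabled; other cores fault lazily and
 * kfence tolerates the resulting imprecision. */
static bool kfence_protect_page_sketch(unsigned long addr, bool protect)
{
	pte_t *pte = pte_for_addr(addr);	/* hypothetical lookup helper */

	if (protect)
		set_pte(pte, __pte(pte_val(ptep_get(pte)) & ~_PAGE_PRESENT));
	else
		set_pte(pte, __pte(pte_val(ptep_get(pte)) | _PAGE_PRESENT));

	/* was: flush_tlb_kernel_range(addr, addr + PAGE_SIZE); */
	local_flush_tlb_page(addr);
	return true;
}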
In the Linux kernel, the following vulnerability has been resolved:
ceph: give up on paths longer than PATH_MAX
If the full path to be built by ceph_mdsc_build_path() happens to be
longer than PATH_MAX, then this function will enter an endless (retry)
loop, effectively blocking the whole task. Most of the machine
becomes unusable, making this a very simple and effective DoS
vulnerability.
I cannot imagine why this retry was ever implemented, but it seems
rather useless and harmful to me. Let's remove it and fail with
ENAMETOOLONG instead. |
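A self-contained sketch of the bounded path building described above (hypothetical function, not ceph_mdsc_build_path(); the point is failing with ENAMETOOLONG instead of retrying):

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Assemble "/a/b/c" into buf; bail out as soon as it cannot fit. */
static int build_path(char *buf, size_t buflen,
		      const char *const *components, size_t n)
{
	size_t used = 0;

	for (size_t i = 0; i < n; i++) {
		size_t need = strlen(components[i]) + 1;	/* '/' + name */

		if (used + need >= buflen)
			return -ENAMETOOLONG;	/* was: retry endlessly */
		buf[used++] = '/';
		memcpy(buf + used, components[i], need - 1);
		used += need - 1;
	}
	buf[used] = '\0';
	return 0;
}

int main(void)
{
	char path[PATH_MAX];
	const char *parts[] = { "dir", "subdir", "file" };

	if (build_path(path, sizeof(path), parts, 3) == 0)
		printf("%s\n", path);
	return 0;
}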
In the Linux kernel, the following vulnerability has been resolved:
regulator: axp20x: AXP717: set ramp_delay
The AXP717 datasheet says that the regulator ramp delay is 15.625 us/step,
where a step is 10mV in our case.
Add an AXP_DESC_RANGES_DELAY macro and update the AXP_DESC_RANGES macro to
expand to AXP_DESC_RANGES_DELAY with ramp_delay = 0.
For DCDC4, the step is 100mV.
Add an AXP_DESC_DELAY macro and update the AXP_DESC macro to expand to
AXP_DESC_DELAY with ramp_delay = 0.
This patch fixes crashes when using CPU DVFS. |
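A quick arithmetic check of the ramp delays implied by these numbers, assuming the regulator core expects ramp_delay in uV/us:

#include <stdio.h>

int main(void)
{
	double step_time_us = 15.625;

	/* 10 mV per 15.625 us and 100 mV per 15.625 us (DCDC4) */
	printf("10 mV step : %.0f uV/us\n", 10000.0 / step_time_us);
	printf("100 mV step: %.0f uV/us\n", 100000.0 / step_time_us);
	return 0;
}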
In the Linux kernel, the following vulnerability has been resolved:
sched/fair: Fix NEXT_BUDDY
Adam reports that enabling NEXT_BUDDY insta triggers a WARN in
pick_next_entity().
Moving clear_buddies() up before the delayed dequeue bits ensures
no ->next buddy becomes delayed. Further ensure no new ->next buddy
ever starts as delayed. |