In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: btintel_pcie: Allocate memory for driver private data
Fix the driver not allocating memory for struct btintel_data, which is used
to store internal data. |
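A minimal sketch of the kind of change this btintel_pcie fix describes: allocate the HCI device with room for the driver's private struct btintel_data instead of a zero-sized private area. The surrounding probe code and error handling are assumptions, not the literal patch.
    /* Reserve space for struct btintel_data behind the hci_dev so that
     * hci_get_priv() points at valid, driver-owned memory.
     */
    hdev = hci_alloc_dev_priv(sizeof(struct btintel_data));
    if (!hdev)
        return -ENOMEM;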
In the Linux kernel, the following vulnerability has been resolved:
net/mlx5e: SHAMPO, Fix incorrect page release
Under the following conditions:
1) No skb created yet
2) header_size == 0 (no SHAMPO header)
3) (header_index + 1) % MLX5E_SHAMPO_WQ_HEADER_PER_PAGE == 0 (this is the
last page fragment of a SHAMPO header page)
a new skb is formed with a page that is NOT a SHAMPO header page (it
is a regular data page). Further down in the same function
(mlx5e_handle_rx_cqe_mpwrq_shampo()), a SHAMPO header page from
header_index is released. This is wrong and it leads to SHAMPO header
pages being released more than once. |
In the Linux kernel, the following vulnerability has been resolved:
Revert "serial: 8250_omap: Set the console genpd always on if no console suspend"
This reverts commit 68e6939ea9ec3d6579eadeab16060339cdeaf940.
Kevin reported that this causes a crash during suspend on platforms that
don't use PM domains. |
In the Linux kernel, the following vulnerability has been resolved:
drm/v3d: Disable preemption while updating GPU stats
We forgot to disable preemption around the write_seqcount_begin/end() pair
while updating GPU stats:
[ ] WARNING: CPU: 2 PID: 12 at include/linux/seqlock.h:221 __seqprop_assert.isra.0+0x128/0x150 [v3d]
[ ] Workqueue: v3d_bin drm_sched_run_job_work [gpu_sched]
<...snip...>
[ ] Call trace:
[ ] __seqprop_assert.isra.0+0x128/0x150 [v3d]
[ ] v3d_job_start_stats.isra.0+0x90/0x218 [v3d]
[ ] v3d_bin_job_run+0x23c/0x388 [v3d]
[ ] drm_sched_run_job_work+0x520/0x6d0 [gpu_sched]
[ ] process_one_work+0x62c/0xb48
[ ] worker_thread+0x468/0x5b0
[ ] kthread+0x1c4/0x1e0
[ ] ret_from_fork+0x10/0x20
Fix it. |
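The general shape of such a fix is to guard the seqcount write section against preemption; a minimal sketch follows, in which the stats structure and field names are illustrative rather than the actual v3d code:
    /* Writers must not be preempted inside the write section, otherwise a
     * reader spinning on the sequence counter can stall on this CPU.
     */
    preempt_disable();
    write_seqcount_begin(&stats->lock);
    stats->start_ns = local_clock();   /* illustrative stats update */
    write_seqcount_end(&stats->lock);
    preempt_enable();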
In the Linux kernel, the following vulnerability has been resolved:
ethtool: check device is present when getting link settings
A sysfs reader can race with a device reset or removal, attempting to
read device state when the device is not actually present, e.g.:
[exception RIP: qed_get_current_link+17]
#8 [ffffb9e4f2907c48] qede_get_link_ksettings at ffffffffc07a994a [qede]
#9 [ffffb9e4f2907cd8] __rh_call_get_link_ksettings at ffffffff992b01a3
#10 [ffffb9e4f2907d38] __ethtool_get_link_ksettings at ffffffff992b04e4
#11 [ffffb9e4f2907d90] duplex_show at ffffffff99260300
#12 [ffffb9e4f2907e38] dev_attr_show at ffffffff9905a01c
#13 [ffffb9e4f2907e50] sysfs_kf_seq_show at ffffffff98e0145b
#14 [ffffb9e4f2907e68] seq_read at ffffffff98d902e3
#15 [ffffb9e4f2907ec8] vfs_read at ffffffff98d657d1
#16 [ffffb9e4f2907f00] ksys_read at ffffffff98d65c3f
#17 [ffffb9e4f2907f38] do_syscall_64 at ffffffff98a052fb
crash> struct net_device.state ffff9a9d21336000
state = 5,
state 5 is __LINK_STATE_START (0b1) and __LINK_STATE_NOCARRIER (0b100).
The device is not present, note lack of __LINK_STATE_PRESENT (0b10).
This is the same sort of panic as observed in commit 4224cfd7fb65
("net-sysfs: add check for netdevice being present to speed_show").
There are many other callers of __ethtool_get_link_ksettings() which
don't have a device presence check.
Move this check into ethtool to protect all callers. |
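A minimal sketch of the centralized check described above, placed at the top of __ethtool_get_link_ksettings() before calling into the driver (the surrounding code is assumed, not quoted from the patch):
    /* The device may have been detached by a reset or removal racing
     * with this sysfs/ethtool reader; don't call into the driver then.
     */
    if (!netif_device_present(dev))
        return -ENODEV;

    return dev->ethtool_ops->get_link_ksettings(dev, link_ksettings);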
In the Linux kernel, the following vulnerability has been resolved:
memcg_write_event_control(): fix a user-triggerable oops
We are *not* guaranteed that anything past the terminating NUL
is mapped (let alone initialized with anything sane). |
In the Linux kernel, the following vulnerability has been resolved:
wifi: cfg80211: handle 2x996 RU allocation in cfg80211_calculate_bitrate_he()
Currently NL80211_RATE_INFO_HE_RU_ALLOC_2x996 is not handled in
cfg80211_calculate_bitrate_he(), leading to the warning below:
kernel: invalid HE MCS: bw:6, ru:6
kernel: WARNING: CPU: 0 PID: 2312 at net/wireless/util.c:1501 cfg80211_calculate_bitrate_he+0x22b/0x270 [cfg80211]
Fix it by handling 2x996 RU allocation in the same way as 160 MHz bandwidth. |
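A hedged sketch of the described handling in cfg80211_calculate_bitrate_he(): extend the 160 MHz branch so a 2x996-tone RU picks the same rate table. The rates_160M[] table and the exact condition layout are assumptions about the surrounding function:
    if (rate->bw == RATE_INFO_BW_160 ||
        (rate->bw == RATE_INFO_BW_HE_RU &&
         rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_2x996))
        result = rates_160M[rate->he_gi];   /* reuse the 160 MHz rates */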
In the Linux kernel, the following vulnerability has been resolved:
xfrm: Fix input error path memory access
When the input state is misconfigured, the slow path triggers the
KASAN report below. Fix this error.
[ 52.987278] eth1: renamed from veth11
[ 53.078814] eth1: renamed from veth21
[ 53.181355] eth1: renamed from veth31
[ 54.921702] ==================================================================
[ 54.922602] BUG: KASAN: wild-memory-access in xfrmi_rcv_cb+0x2d/0x295
[ 54.923393] Read of size 8 at addr 6b6b6b6b00000000 by task ping/512
[ 54.924169]
[ 54.924386] CPU: 0 PID: 512 Comm: ping Not tainted 6.9.0-08574-gcd29a4313a1b #25
[ 54.925290] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 54.926401] Call Trace:
[ 54.926731] <IRQ>
[ 54.927009] dump_stack_lvl+0x2a/0x3b
[ 54.927478] kasan_report+0x84/0xa6
[ 54.927930] ? xfrmi_rcv_cb+0x2d/0x295
[ 54.928410] xfrmi_rcv_cb+0x2d/0x295
[ 54.928872] ? xfrm4_rcv_cb+0x3d/0x5e
[ 54.929354] xfrm4_rcv_cb+0x46/0x5e
[ 54.929804] xfrm_rcv_cb+0x7e/0xa1
[ 54.930240] xfrm_input+0x1b3a/0x1b96
[ 54.930715] ? xfrm_offload+0x41/0x41
[ 54.931182] ? raw_rcv+0x292/0x292
[ 54.931617] ? nf_conntrack_confirm+0xa2/0xa2
[ 54.932158] ? skb_sec_path+0xd/0x3f
[ 54.932610] ? xfrmi_input+0x90/0xce
[ 54.933066] xfrm4_esp_rcv+0x33/0x54
[ 54.933521] ip_protocol_deliver_rcu+0xd7/0x1b2
[ 54.934089] ip_local_deliver_finish+0x110/0x120
[ 54.934659] ? ip_protocol_deliver_rcu+0x1b2/0x1b2
[ 54.935248] NF_HOOK.constprop.0+0xf8/0x138
[ 54.935767] ? ip_sublist_rcv_finish+0x68/0x68
[ 54.936317] ? secure_tcpv6_ts_off+0x23/0x168
[ 54.936859] ? ip_protocol_deliver_rcu+0x1b2/0x1b2
[ 54.937454] ? __xfrm_policy_check2.constprop.0+0x18d/0x18d
[ 54.938135] NF_HOOK.constprop.0+0xf8/0x138
[ 54.938663] ? ip_sublist_rcv_finish+0x68/0x68
[ 54.939220] ? __xfrm_policy_check2.constprop.0+0x18d/0x18d
[ 54.939904] ? ip_local_deliver_finish+0x120/0x120
[ 54.940497] __netif_receive_skb_one_core+0xc9/0x107
[ 54.941121] ? __netif_receive_skb_list_core+0x1c2/0x1c2
[ 54.941771] ? blk_mq_start_stopped_hw_queues+0xc7/0xf9
[ 54.942413] ? blk_mq_start_stopped_hw_queue+0x38/0x38
[ 54.943044] ? virtqueue_get_buf_ctx+0x295/0x46b
[ 54.943618] process_backlog+0xb3/0x187
[ 54.944102] __napi_poll.constprop.0+0x57/0x1a7
[ 54.944669] net_rx_action+0x1cb/0x380
[ 54.945150] ? __napi_poll.constprop.0+0x1a7/0x1a7
[ 54.945744] ? vring_new_virtqueue+0x17a/0x17a
[ 54.946300] ? note_interrupt+0x2cd/0x367
[ 54.946805] handle_softirqs+0x13c/0x2c9
[ 54.947300] do_softirq+0x5f/0x7d
[ 54.947727] </IRQ>
[ 54.948014] <TASK>
[ 54.948300] __local_bh_enable_ip+0x48/0x62
[ 54.948832] __neigh_event_send+0x3fd/0x4ca
[ 54.949361] neigh_resolve_output+0x1e/0x210
[ 54.949896] ip_finish_output2+0x4bf/0x4f0
[ 54.950410] ? __ip_finish_output+0x171/0x1b8
[ 54.950956] ip_send_skb+0x25/0x57
[ 54.951390] raw_sendmsg+0xf95/0x10c0
[ 54.951850] ? check_new_pages+0x45/0x71
[ 54.952343] ? raw_hash_sk+0x21b/0x21b
[ 54.952815] ? kernel_init_pages+0x42/0x51
[ 54.953337] ? prep_new_page+0x44/0x51
[ 54.953811] ? get_page_from_freelist+0x72b/0x915
[ 54.954390] ? signal_pending_state+0x77/0x77
[ 54.954936] ? preempt_count_sub+0x14/0xb3
[ 54.955450] ? __might_resched+0x8a/0x240
[ 54.955951] ? __might_sleep+0x25/0xa0
[ 54.956424] ? first_zones_zonelist+0x2c/0x43
[ 54.956977] ? __rcu_read_lock+0x2d/0x3a
[ 54.957476] ? __pte_offset_map+0x32/0xa4
[ 54.957980] ? __might_resched+0x8a/0x240
[ 54.958483] ? __might_sleep+0x25/0xa0
[ 54.958963] ? inet_send_prepare+0x54/0x54
[ 54.959478] ? sock_sendmsg_nosec+0x42/0x6c
[ 54.960000] sock_sendmsg_nosec+0x42/0x6c
[ 54.960502] __sys_sendto+0x15d/0x1cc
[ 54.960966] ? __x64_sys_getpeername+0x44/0x44
[ 54.961522] ? __handle_mm_fault+0x679/0xae4
[ 54.962068] ? find_vma+0x6b/0x
---truncated--- |
In the Linux kernel, the following vulnerability has been resolved:
RDMA/hns: Fix soft lockup under heavy CEQE load
CEQEs are currently handled in the interrupt handler. This may keep the
CPU core in interrupt context for too long and lead to a soft lockup
under heavy load.
Handle CEQEs in BH workqueue and set an upper limit for the number of
CEQE handled by a single call of work handler. |
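A simplified sketch of the approach described above; the helper names (ceqe_pending(), handle_one_ceqe()), the work field in the EQ structure, and the CEQE_BUDGET value are illustrative, not the actual hns driver symbols:
#define CEQE_BUDGET 64  /* illustrative upper limit per work invocation */

static void ceq_work_handler(struct work_struct *work)
{
    struct hns_roce_eq *eq = container_of(work, struct hns_roce_eq, work);
    int handled = 0;

    /* Handle at most CEQE_BUDGET completion EQEs per invocation. */
    while (handled < CEQE_BUDGET && ceqe_pending(eq)) {
        handle_one_ceqe(eq);
        handled++;
    }

    /* More pending: requeue instead of hogging the CPU. */
    if (ceqe_pending(eq))
        queue_work(system_bh_wq, &eq->work);
}

static irqreturn_t ceq_interrupt(int irq, void *dev_id)
{
    struct hns_roce_eq *eq = dev_id;

    /* Defer CEQE processing to a BH workqueue. */
    queue_work(system_bh_wq, &eq->work);
    return IRQ_HANDLED;
}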
In the Linux kernel, the following vulnerability has been resolved:
net: flow_dissector: use DEBUG_NET_WARN_ON_ONCE
The following splat is easy to reproduce upstream as well as in -stable
kernels. Florian Westphal provided the following commit:
d1dab4f71d37 ("net: add and use __skb_get_hash_symmetric_net")
but this complementary fix has also been suggested by Willem de Bruijn
and can be easily backported to -stable kernels: it consists in using
DEBUG_NET_WARN_ON_ONCE instead, to silence the following splat, given
that __skb_get_hash() is used by the nftables tracing infrastructure to
identify packets in traces.
[69133.561393] ------------[ cut here ]------------
[69133.561404] WARNING: CPU: 0 PID: 43576 at net/core/flow_dissector.c:1104 __skb_flow_dissect+0x134f/
[...]
[69133.561944] CPU: 0 PID: 43576 Comm: socat Not tainted 6.10.0-rc7+ #379
[69133.561959] RIP: 0010:__skb_flow_dissect+0x134f/0x2ad0
[69133.561970] Code: 83 f9 04 0f 84 b3 00 00 00 45 85 c9 0f 84 aa 00 00 00 41 83 f9 02 0f 84 81 fc ff
ff 44 0f b7 b4 24 80 00 00 00 e9 8b f9 ff ff <0f> 0b e9 20 f3 ff ff 41 f6 c6 20 0f 84 e4 ef ff ff 48 8d 7b 12 e8
[69133.561979] RSP: 0018:ffffc90000006fc0 EFLAGS: 00010246
[69133.561988] RAX: 0000000000000000 RBX: ffffffff82f33e20 RCX: ffffffff81ab7e19
[69133.561994] RDX: dffffc0000000000 RSI: ffffc90000007388 RDI: ffff888103a1b418
[69133.562001] RBP: ffffc90000007310 R08: 0000000000000000 R09: 0000000000000000
[69133.562007] R10: ffffc90000007388 R11: ffffffff810cface R12: ffff888103a1b400
[69133.562013] R13: 0000000000000000 R14: ffffffff82f33e2a R15: ffffffff82f33e28
[69133.562020] FS: 00007f40f7131740(0000) GS:ffff888390800000(0000) knlGS:0000000000000000
[69133.562027] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[69133.562033] CR2: 00007f40f7346ee0 CR3: 000000015d200001 CR4: 00000000001706f0
[69133.562040] Call Trace:
[69133.562044] <IRQ>
[69133.562049] ? __warn+0x9f/0x1a0
[ 1211.841384] ? __skb_flow_dissect+0x107e/0x2860
[...]
[ 1211.841496] ? bpf_flow_dissect+0x160/0x160
[ 1211.841753] __skb_get_hash+0x97/0x280
[ 1211.841765] ? __skb_get_hash_symmetric+0x230/0x230
[ 1211.841776] ? mod_find+0xbf/0xe0
[ 1211.841786] ? get_stack_info_noinstr+0x12/0xe0
[ 1211.841798] ? bpf_ksym_find+0x56/0xe0
[ 1211.841807] ? __rcu_read_unlock+0x2a/0x70
[ 1211.841819] nft_trace_init+0x1b9/0x1c0 [nf_tables]
[ 1211.841895] ? nft_trace_notify+0x830/0x830 [nf_tables]
[ 1211.841964] ? get_stack_info+0x2b/0x80
[ 1211.841975] ? nft_do_chain_arp+0x80/0x80 [nf_tables]
[ 1211.842044] nft_do_chain+0x79c/0x850 [nf_tables] |
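The backportable change described above is essentially a one-line downgrade of the warning in __skb_flow_dissect() so that it only fires on CONFIG_DEBUG_NET builds; roughly (the warned condition is inferred from the splat's location, so treat it as an assumption):
    /* Was: WARN_ON_ONCE(!net);  -- fires even on production kernels. */
    DEBUG_NET_WARN_ON_ONCE(!net);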
In the Linux kernel, the following vulnerability has been resolved:
mailbox: mtk-cmdq: Move devm_mbox_controller_register() after devm_pm_runtime_enable()
When mtk-cmdq unbinds, a WARN_ON message with the condition
pm_runtime_get_sync() < 0 occurs.
According to the call trace below:
cmdq_mbox_shutdown
mbox_free_channel
mbox_controller_unregister
__devm_mbox_controller_unregister
...
The root cause can be deduced to be calling pm_runtime_get_sync() after
calling pm_runtime_disable() as observed below:
1. CMDQ driver uses devm_mbox_controller_register() in cmdq_probe()
to bind the cmdq device to the mbox_controller, so
devm_mbox_controller_unregister() will automatically unregister
the device bound to the mailbox controller when the device-managed
resource is removed. That means devm_mbox_controller_unregister()
and cmdq_mbox_shutdown() will be called after cmdq_remove().
2. CMDQ driver also uses devm_pm_runtime_enable() in cmdq_probe() after
devm_mbox_controller_register(), so that devm_pm_runtime_disable()
will be called after cmdq_remove(), but before
devm_mbox_controller_unregister().
To fix this problem, cmdq_probe() needs to move
devm_mbox_controller_register() after devm_pm_runtime_enable() to make
devm_pm_runtime_disable() be called after
devm_mbox_controller_unregister(). |
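In probe terms, the reordering described above looks roughly like the sketch below. devm resources are released in reverse order of registration, so enabling runtime PM first guarantees the mailbox controller (and cmdq_mbox_shutdown()) is torn down before pm_runtime_disable() runs. The cmdq->mbox field and error-handling details are assumptions:
    /* Enable runtime PM first ... */
    ret = devm_pm_runtime_enable(dev);
    if (ret)
        return ret;

    /* ... then register the mailbox controller, so its devm-managed
     * unregistration (which ends up calling cmdq_mbox_shutdown() and
     * pm_runtime_get_sync()) happens before runtime PM is disabled.
     */
    ret = devm_mbox_controller_register(dev, &cmdq->mbox);
    if (ret)
        return ret;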
In the Linux kernel, the following vulnerability has been resolved:
landlock: Don't lose track of restrictions on cred_transfer
When a process' cred struct is replaced, this _almost_ always invokes
the cred_prepare LSM hook; but in one special case (when
KEYCTL_SESSION_TO_PARENT updates the parent's credentials), the
cred_transfer LSM hook is used instead. Landlock only implements the
cred_prepare hook, not cred_transfer, so KEYCTL_SESSION_TO_PARENT causes
all information on Landlock restrictions to be lost.
This basically means that a process with the ability to use the fork()
and keyctl() syscalls can get rid of all Landlock restrictions on
itself.
Fix it by adding a cred_transfer hook that does the same thing as the
existing cred_prepare hook. (Implemented by having hook_cred_prepare()
call hook_cred_transfer() so that the two functions are less likely to
accidentally diverge in the future.) |
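Per the description above, the fix can be sketched as below. The Landlock-internal helpers landlock_cred() and landlock_get_ruleset() are named from context and should be treated as assumptions:
static void hook_cred_transfer(struct cred *const new,
                               const struct cred *const old)
{
    /* Carry the old credentials' Landlock domain over to the new ones. */
    struct landlock_ruleset *const old_dom = landlock_cred(old)->domain;

    if (old_dom) {
        landlock_get_ruleset(old_dom);
        landlock_cred(new)->domain = old_dom;
    }
}

static int hook_cred_prepare(struct cred *const new,
                             const struct cred *const old, gfp_t gfp)
{
    /* Same logic as cred_transfer, so the two hooks cannot diverge. */
    hook_cred_transfer(new, old);
    return 0;
}

static struct security_hook_list landlock_hooks[] __ro_after_init = {
    LSM_HOOK_INIT(cred_prepare, hook_cred_prepare),
    LSM_HOOK_INIT(cred_transfer, hook_cred_transfer),
    /* remaining Landlock hooks unchanged */
};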
In the Linux kernel, the following vulnerability has been resolved:
mm/huge_memory: avoid PMD-size page cache if needed
xarray can't support an arbitrary page cache size. The largest supported
page cache size is defined as MAX_PAGECACHE_ORDER by commit 099d90642a71
("mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray"). However,
it's possible to have a 512MB page cache in the huge memory collapsing
path on an ARM64 system whose base page size is 64KB. A 512MB page cache
breaks that limitation, and a warning is raised when the xarray entry is
split, as shown in the following example.
[root@dhcp-10-26-1-207 ~]# cat /proc/1/smaps | grep KernelPageSize
KernelPageSize: 64 kB
[root@dhcp-10-26-1-207 ~]# cat /tmp/test.c
:
int main(int argc, char **argv)
{
    const char *filename = TEST_XFS_FILENAME;
    int fd = 0;
    void *buf = (void *)-1, *p;
    int pgsize = getpagesize();
    int ret = 0;

    if (pgsize != 0x10000) {
        fprintf(stdout, "System with 64KB base page size is required!\n");
        return -EPERM;
    }

    system("echo 0 > /sys/devices/virtual/bdi/253:0/read_ahead_kb");
    system("echo 1 > /proc/sys/vm/drop_caches");

    /* Open the xfs file */
    fd = open(filename, O_RDONLY);
    assert(fd > 0);

    /* Create VMA */
    buf = mmap(NULL, TEST_MEM_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    assert(buf != (void *)-1);
    fprintf(stdout, "mapped buffer at 0x%p\n", buf);

    /* Populate VMA */
    ret = madvise(buf, TEST_MEM_SIZE, MADV_NOHUGEPAGE);
    assert(ret == 0);
    ret = madvise(buf, TEST_MEM_SIZE, MADV_POPULATE_READ);
    assert(ret == 0);

    /* Collapse VMA */
    ret = madvise(buf, TEST_MEM_SIZE, MADV_HUGEPAGE);
    assert(ret == 0);
    ret = madvise(buf, TEST_MEM_SIZE, MADV_COLLAPSE);
    if (ret) {
        fprintf(stdout, "Error %d to madvise(MADV_COLLAPSE)\n", errno);
        goto out;
    }

    /* Split xarray entry. Write permission is needed */
    munmap(buf, TEST_MEM_SIZE);
    buf = (void *)-1;
    close(fd);
    fd = open(filename, O_RDWR);
    assert(fd > 0);
    fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
              TEST_MEM_SIZE - pgsize, pgsize);
out:
    if (buf != (void *)-1)
        munmap(buf, TEST_MEM_SIZE);
    if (fd > 0)
        close(fd);

    return ret;
}
[root@dhcp-10-26-1-207 ~]# gcc /tmp/test.c -o /tmp/test
[root@dhcp-10-26-1-207 ~]# /tmp/test
------------[ cut here ]------------
WARNING: CPU: 25 PID: 7560 at lib/xarray.c:1025 xas_split_alloc+0xf8/0x128
Modules linked in: nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib \
nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct \
nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 \
ip_set rfkill nf_tables nfnetlink vfat fat virtio_balloon drm fuse \
xfs libcrc32c crct10dif_ce ghash_ce sha2_ce sha256_arm64 virtio_net \
sha1_ce net_failover virtio_blk virtio_console failover dimlib virtio_mmio
CPU: 25 PID: 7560 Comm: test Kdump: loaded Not tainted 6.10.0-rc7-gavin+ #9
Hardware name: QEMU KVM Virtual Machine, BIOS edk2-20240524-1.el9 05/24/2024
pstate: 83400005 (Nzcv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : xas_split_alloc+0xf8/0x128
lr : split_huge_page_to_list_to_order+0x1c4/0x780
sp : ffff8000ac32f660
x29: ffff8000ac32f660 x28: ffff0000e0969eb0 x27: ffff8000ac32f6c0
x26: 0000000000000c40 x25: ffff0000e0969eb0 x24: 000000000000000d
x23: ffff8000ac32f6c0 x22: ffffffdfc0700000 x21: 0000000000000000
x20: 0000000000000000 x19: ffffffdfc0700000 x18: 0000000000000000
x17: 0000000000000000 x16: ffffd5f3708ffc70 x15: 0000000000000000
x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
x11: ffffffffffffffc0 x10: 0000000000000040 x9 : ffffd5f3708e692c
x8 : 0000000000000003 x7 : 0000000000000000 x6 : ffff0000e0969eb8
x5 : ffffd5f37289e378 x4 : 0000000000000000 x3 : 0000000000000c40
x2 : 000000000000000d x1 : 000000000000000c x0 : 0000000000000000
Call trace:
xas_split_alloc+0xf8/0x128
split_huge_page_to_list_to_order+0x1c4/0x780
truncate_inode_partial_folio+0xdc/0x160
truncate_inode_pages_range+0x1b4/0x4a8
truncate_pagecache_range+0x84/0xa
---truncated--- |
In the Linux kernel, the following vulnerability has been resolved:
riscv/mm: Add handling for VM_FAULT_SIGSEGV in mm_fault_error()
Handle VM_FAULT_SIGSEGV in the page fault path so that we correctly
kill the process and we don't BUG() the kernel. |
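A minimal sketch of the described handling in the riscv mm_fault_error() path; do_trap() and no_context() are existing riscv fault-handling helpers, but the exact placement is an assumption:
    if (fault & VM_FAULT_SIGSEGV) {
        /* Kernel-mode fault: take the exception-fixup/oops path. */
        if (!user_mode(regs)) {
            no_context(regs, addr);
            return;
        }
        /* User-mode fault: deliver SIGSEGV instead of BUG()ing. */
        do_trap(regs, SIGSEGV, SEGV_MAPERR, addr);
        return;
    }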
In the Linux kernel, the following vulnerability has been resolved:
protect the fetch of ->fd[fd] in do_dup2() from mispredictions
Both callers have verified that fd is not greater than ->max_fds;
however, misprediction might end up with
tofree = fdt->fd[fd];
being speculatively executed. That's wrong for the same reasons
it's wrong in close_fd()/file_close_fd_locked(); the same
solution applies: array_index_nospec(fd, fdt->max_fds) could differ
from fd only in case of speculative execution on a mispredicted path. |
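A sketch of the standard mitigation named above, applied before the dependent load in do_dup2() (surrounding code assumed):
    /* On the architectural path array_index_nospec() returns fd
     * unchanged; under misprediction it clamps to 0, so the speculative
     * load can never index past fdt->max_fds.
     */
    fd = array_index_nospec(fd, fdt->max_fds);
    tofree = fdt->fd[fd];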
In the Linux kernel, the following vulnerability has been resolved:
spi: don't unoptimize message in spi_async()
Calling spi_maybe_unoptimize_message() in spi_async() is wrong because
the message is likely to be in the queue and not transferred yet. This
can corrupt the message while it is being used by the controller driver.
spi_maybe_unoptimize_message() is already called in the correct place
in spi_finalize_current_message() to balance the call to
spi_maybe_optimize_message() in spi_async(). |
In the Linux kernel, the following vulnerability has been resolved:
Revert "sched/fair: Make sure to try to detach at least one movable task"
This reverts commit b0defa7ae03ecf91b8bfd10ede430cff12fcbd06.
b0defa7ae03ec changed the load balancing logic to ignore env.max_loop if
all tasks examined to that point were pinned. The goal of the patch was
to make it more likely to be able to detach a task buried in a long list
of pinned tasks. However, this has the unfortunate side effect of
creating an O(n) iteration in detach_tasks(), as we now must fully
iterate every task on a cpu if all or most are pinned. Since this load
balance code is done with rq lock held, and often in softirq context, it
is very easy to trigger hard lockups. We observed such hard lockups with
a user who affined O(10k) threads to a single cpu.
When I discussed this with Vincent he initially suggested that we keep
the limit on the number of tasks to detach, but increase the number of
tasks we can search. However, after some back and forth on the mailing
list, he recommended we instead revert the original patch, as it seems
likely no one was actually getting hit by the original issue. |
In the Linux kernel, the following vulnerability has been resolved:
USB: serial: mos7840: fix crash on resume
Since commit c49cfa917025 ("USB: serial: use generic method if no
alternative is provided in usb serial layer"), USB serial core calls the
generic resume implementation when the driver has not provided one.
This can trigger a crash on resume with mos7840 since support for
multiple read URBs was added back in 2011. Specifically, both port read
URBs are now submitted on resume for open ports, but the context pointer
of the second URB is left set to the core rather than mos7840 port
structure.
Fix this by implementing dedicated suspend and resume functions for
mos7840.
Tested with Delock 87414 USB 2.0 to 4x serial adapter.
[ johan: analyse crash and rewrite commit message; set busy flag on
resume; drop bulk-in check; drop unnecessary usb_kill_urb() ] |
In the Linux kernel, the following vulnerability has been resolved:
mmc: sdhci: Fix max_seg_size for 64KiB PAGE_SIZE
blk_queue_max_segment_size() ensured:
    if (max_size < PAGE_SIZE)
        max_size = PAGE_SIZE;
whereas:
blk_validate_limits() makes it an error:
    if (WARN_ON_ONCE(lim->max_segment_size < PAGE_SIZE))
        return -EINVAL;
The change from one to the other exposed sdhci, which was setting the
maximum segment size too low in some circumstances.
Fix the maximum segment size when it is too low. |
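The kind of clamp described above, as a sketch (the exact spot in sdhci and whether other fields need adjusting are assumptions; struct mmc_host does expose max_seg_size):
    /* blk_validate_limits() now rejects a maximum segment size below
     * PAGE_SIZE, so never advertise less than that.
     */
    if (mmc->max_seg_size < PAGE_SIZE)
        mmc->max_seg_size = PAGE_SIZE;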
In the Linux kernel, the following vulnerability has been resolved:
mm/shmem: disable PMD-sized page cache if needed
For shmem files, it's possible that PMD-sized page cache can't be
supported by xarray. For example, a 512MB page cache on ARM64 with a
64KB base page size can't be supported by xarray. It leads to errors, as
the following messages indicate, when this sort of xarray entry is split.
WARNING: CPU: 34 PID: 7578 at lib/xarray.c:1025 xas_split_alloc+0xf8/0x128
Modules linked in: binfmt_misc nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 \
nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject \
nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 \
ip_set rfkill nf_tables nfnetlink vfat fat virtio_balloon drm fuse xfs \
libcrc32c crct10dif_ce ghash_ce sha2_ce sha256_arm64 sha1_ce virtio_net \
net_failover virtio_console virtio_blk failover dimlib virtio_mmio
CPU: 34 PID: 7578 Comm: test Kdump: loaded Tainted: G W 6.10.0-rc5-gavin+ #9
Hardware name: QEMU KVM Virtual Machine, BIOS edk2-20240524-1.el9 05/24/2024
pstate: 83400005 (Nzcv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : xas_split_alloc+0xf8/0x128
lr : split_huge_page_to_list_to_order+0x1c4/0x720
sp : ffff8000882af5f0
x29: ffff8000882af5f0 x28: ffff8000882af650 x27: ffff8000882af768
x26: 0000000000000cc0 x25: 000000000000000d x24: ffff00010625b858
x23: ffff8000882af650 x22: ffffffdfc0900000 x21: 0000000000000000
x20: 0000000000000000 x19: ffffffdfc0900000 x18: 0000000000000000
x17: 0000000000000000 x16: 0000018000000000 x15: 52f8004000000000
x14: 0000e00000000000 x13: 0000000000002000 x12: 0000000000000020
x11: 52f8000000000000 x10: 52f8e1c0ffff6000 x9 : ffffbeb9619a681c
x8 : 0000000000000003 x7 : 0000000000000000 x6 : ffff00010b02ddb0
x5 : ffffbeb96395e378 x4 : 0000000000000000 x3 : 0000000000000cc0
x2 : 000000000000000d x1 : 000000000000000c x0 : 0000000000000000
Call trace:
xas_split_alloc+0xf8/0x128
split_huge_page_to_list_to_order+0x1c4/0x720
truncate_inode_partial_folio+0xdc/0x160
shmem_undo_range+0x2bc/0x6a8
shmem_fallocate+0x134/0x430
vfs_fallocate+0x124/0x2e8
ksys_fallocate+0x4c/0xa0
__arm64_sys_fallocate+0x24/0x38
invoke_syscall.constprop.0+0x7c/0xd8
do_el0_svc+0xb4/0xd0
el0_svc+0x44/0x1d8
el0t_64_sync_handler+0x134/0x150
el0t_64_sync+0x17c/0x180
Fix it by disabling PMD-sized page cache when HPAGE_PMD_ORDER is larger
than MAX_PAGECACHE_ORDER. As Matthew Wilcox pointed out, the page cache in
a shmem file wasn't represented by a multi-index entry and didn't have
this limitation when the xarray entry was split, until commit 6b24ca4a1a8d
("mm: Use multi-index entries in the page cache"). |
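The guard this description points at can be sketched as below; only the HPAGE_PMD_ORDER vs. MAX_PAGECACHE_ORDER comparison comes from the text, and the function it lands in within mm/shmem.c is an assumption:
    /* On configurations such as ARM64 with 64KB base pages,
     * HPAGE_PMD_ORDER exceeds MAX_PAGECACHE_ORDER, so a PMD-sized folio
     * could not be split in the xarray later; refuse huge pages here.
     */
    if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
        return false;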