| CVE | Vendors | Products | Updated | CVSS v3.1 |
| A flaw was found in the X server's request handling. Non-zero 'bytes to ignore' in a client's request can cause the server to skip processing another client's request, potentially leading to a denial of service. |
| A flaw was found in the QEMU NBD Server. This vulnerability allows a denial of service (DoS) attack via improper synchronization during socket closure when a client keeps a socket open as the server is taken offline. |
| A bug in QEMU could cause a guest I/O operation otherwise addressed to an arbitrary disk offset to be targeted at offset 0 instead (potentially overwriting the VM's boot code). This could be used, for example, by L2 guests with a virtual disk (vdiskL2) stored on a virtual disk of an L1 hypervisor (vdiskL1) to read and/or write data at LBA 0 of vdiskL1, potentially gaining control of L1 at its next reboot. |
| In the Linux kernel, the following vulnerability has been resolved:
net/9p: use a dedicated spinlock for trans_fd
Shamelessly copying the explanation from Tetsuo Handa's suggested
patch[1] (slightly reworded):
syzbot is reporting an inconsistent lock state in p9_req_put()[2]:
p9_tag_remove(), called from p9_req_put() in IRQ context, uses
spin_lock_irqsave() on "struct p9_client"->lock, but trans_fd
(not in IRQ context) uses spin_lock().
Since the locks actually protect different things in client.c and in
trans_fd.c, just replace trans_fd.c's lock with a new one specific to the
transport (client.c's lock protects the idr for fid/tag allocations,
while trans_fd.c's protects its own req list and the request status field
that acts as the transport's state machine). |
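A minimal sketch of the dedicated-transport-lock idea described above, assuming trans_fd-style names; the structure, the req_lock field and the helpers are illustrative, not the literal patch:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <net/9p/client.h>

/* Sketch: the fd transport gets its own lock instead of reusing the
 * client-wide p9_client->lock. */
struct p9_conn_sketch {
    struct p9_client *client;
    struct list_head req_list;  /* requests owned by this transport */
    spinlock_t req_lock;        /* protects req_list and req->status */
};

static void p9_conn_init_sketch(struct p9_conn_sketch *m, struct p9_client *client)
{
    m->client = client;
    INIT_LIST_HEAD(&m->req_list);
    spin_lock_init(&m->req_lock);   /* dedicated transport lock */
}

static void p9_fd_request_done_sketch(struct p9_conn_sketch *m, struct p9_req_t *req)
{
    /* trans_fd never runs in IRQ context, so the plain variants are fine;
     * client.c keeps using its own lock with spin_lock_irqsave(). */
    spin_lock(&m->req_lock);
    list_del(&req->req_list);
    spin_unlock(&m->req_lock);
}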
| In the Linux kernel, the following vulnerability has been resolved:
9p: trans_fd/p9_conn_cancel: drop client lock earlier
syzbot reported a double lock here, and we no longer need this
lock after requests have been moved off to a local list:
just drop the lock earlier. |
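A minimal sketch of the "splice to a local list, drop the lock, then complete" pattern described above, reusing the illustrative p9_conn_sketch fields from the previous sketch; not the literal patch:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <net/9p/client.h>

static void p9_conn_cancel_sketch(struct p9_conn_sketch *m, int err)
{
    LIST_HEAD(cancel_list);
    struct p9_req_t *req, *rtmp;

    /* Move every pending request onto a private list under the lock ... */
    spin_lock(&m->req_lock);
    list_for_each_entry_safe(req, rtmp, &m->req_list, req_list)
        list_move(&req->req_list, &cancel_list);
    /* ... then drop the lock before completing them, so the completion
     * path can never try to take the same lock again. */
    spin_unlock(&m->req_lock);

    list_for_each_entry_safe(req, rtmp, &cancel_list, req_list) {
        list_del(&req->req_list);
        req->t_err = err;
        p9_client_cb(m->client, req, REQ_STATUS_ERROR);
    }
}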
| A flaw was found in the Linux kernel: the "flags" member of the new pipe buffer structure lacked proper initialization in the copy_page_to_iter_pipe and push_pipe functions and could therefore contain stale values. An unprivileged local user could use this flaw to write to pages in the page cache backed by read-only files and thus escalate their privileges on the system. |
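To illustrate the missing initialization, a hedged sketch of filling in a pipe_buffer slot; the helper is hypothetical, and the point is the explicit buf->flags = 0 that the fix adds:

#include <linux/pipe_fs_i.h>

/* Hypothetical helper: attach a page to a pipe buffer slot. */
static void fill_pipe_buffer_sketch(struct pipe_buffer *buf, struct page *page,
                                    unsigned int offset, unsigned int len,
                                    const struct pipe_buf_operations *ops)
{
    buf->page = page;
    buf->offset = offset;
    buf->len = len;
    buf->ops = ops;
    /* Without this, stale flags such as PIPE_BUF_FLAG_CAN_MERGE survive
     * from the previous user of the slot and allow writes to end up in
     * page-cache pages backing read-only files. */
    buf->flags = 0;
    buf->private = 0;
}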
| In the Linux kernel, the following vulnerability has been resolved:
PCI: vmd: Make vmd_dev::cfg_lock a raw_spinlock_t type
The access to the PCI config space via pci_ops::read and pci_ops::write is
a low-level hardware access. The functions can be accessed with disabled
interrupts even on PREEMPT_RT. The pci_lock is a raw_spinlock_t for this
purpose.
A spinlock_t becomes a sleeping lock on PREEMPT_RT, so it cannot be
acquired with disabled interrupts. The vmd_dev::cfg_lock is accessed in
the same context as the pci_lock.
Make vmd_dev::cfg_lock a raw_spinlock_t type so it can be used with
interrupts disabled.
This was reported as:
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
Call Trace:
rt_spin_lock+0x4e/0x130
vmd_pci_read+0x8d/0x100 [vmd]
pci_user_read_config_byte+0x6f/0xe0
pci_read_config+0xfe/0x290
sysfs_kf_bin_read+0x68/0x90
[bigeasy: reword commit message]
Tested-off-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com>
[kwilczynski: commit log]
[bhelgaas: add back report info from
https://lore.kernel.org/lkml/20241218115951.83062-1-ryotkkr98@gmail.com/] |
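A minimal sketch of the conversion described in this entry, assuming a vmd_dev-like structure; it is simplified from the driver and only shows the lock type and the irqsave-safe accessor pattern:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct vmd_dev_sketch {
    raw_spinlock_t cfg_lock;    /* raw: may be taken with IRQs disabled,
                                 * even on PREEMPT_RT, where spinlock_t
                                 * would be a sleeping lock */
    void __iomem *cfgbar;
};

static int vmd_cfg_read_sketch(struct vmd_dev_sketch *vmd, void __iomem *addr,
                               int len, u32 *value)
{
    unsigned long flags;
    int ret = 0;

    /* raw_spin_lock_irqsave() never sleeps, so it is safe in the
     * low-level pci_ops path, matching how pci_lock itself is handled. */
    raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
    switch (len) {
    case 1:
        *value = readb(addr);
        break;
    case 2:
        *value = readw(addr);
        break;
    case 4:
        *value = readl(addr);
        break;
    default:
        ret = -EINVAL;
    }
    raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
    return ret;
}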
| In the Linux kernel, the following vulnerability has been resolved:
net: vlan: don't propagate flags on open
With the device instance lock, there is now a possibility of a deadlock:
[ 1.211455] ============================================
[ 1.211571] WARNING: possible recursive locking detected
[ 1.211687] 6.14.0-rc5-01215-g032756b4ca7a-dirty #5 Not tainted
[ 1.211823] --------------------------------------------
[ 1.211936] ip/184 is trying to acquire lock:
[ 1.212032] ffff8881024a4c30 (&dev->lock){+.+.}-{4:4}, at: dev_set_allmulti+0x4e/0xb0
[ 1.212207]
[ 1.212207] but task is already holding lock:
[ 1.212332] ffff8881024a4c30 (&dev->lock){+.+.}-{4:4}, at: dev_open+0x50/0xb0
[ 1.212487]
[ 1.212487] other info that might help us debug this:
[ 1.212626] Possible unsafe locking scenario:
[ 1.212626]
[ 1.212751] CPU0
[ 1.212815] ----
[ 1.212871] lock(&dev->lock);
[ 1.212944] lock(&dev->lock);
[ 1.213016]
[ 1.213016] *** DEADLOCK ***
[ 1.213016]
[ 1.213143] May be due to missing lock nesting notation
[ 1.213143]
[ 1.213294] 3 locks held by ip/184:
[ 1.213371] #0: ffffffff838b53e0 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock+0x1b/0xa0
[ 1.213543] #1: ffffffff84e5fc70 (&net->rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock+0x37/0xa0
[ 1.213727] #2: ffff8881024a4c30 (&dev->lock){+.+.}-{4:4}, at: dev_open+0x50/0xb0
[ 1.213895]
[ 1.213895] stack backtrace:
[ 1.213991] CPU: 0 UID: 0 PID: 184 Comm: ip Not tainted 6.14.0-rc5-01215-g032756b4ca7a-dirty #5
[ 1.213993] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
[ 1.213994] Call Trace:
[ 1.213995] <TASK>
[ 1.213996] dump_stack_lvl+0x8e/0xd0
[ 1.214000] print_deadlock_bug+0x28b/0x2a0
[ 1.214020] lock_acquire+0xea/0x2a0
[ 1.214027] __mutex_lock+0xbf/0xd40
[ 1.214038] dev_set_allmulti+0x4e/0xb0 # real_dev->flags & IFF_ALLMULTI
[ 1.214040] vlan_dev_open+0xa5/0x170 # ndo_open on vlandev
[ 1.214042] __dev_open+0x145/0x270
[ 1.214046] __dev_change_flags+0xb0/0x1e0
[ 1.214051] netif_change_flags+0x22/0x60 # IFF_UP vlandev
[ 1.214053] dev_change_flags+0x61/0xb0 # for each device in group from dev->vlan_info
[ 1.214055] vlan_device_event+0x766/0x7c0 # on netdevsim0
[ 1.214058] notifier_call_chain+0x78/0x120
[ 1.214062] netif_open+0x6d/0x90
[ 1.214064] dev_open+0x5b/0xb0 # locks netdevsim0
[ 1.214066] bond_enslave+0x64c/0x1230
[ 1.214075] do_set_master+0x175/0x1e0 # on netdevsim0
[ 1.214077] do_setlink+0x516/0x13b0
[ 1.214094] rtnl_newlink+0xaba/0xb80
[ 1.214132] rtnetlink_rcv_msg+0x440/0x490
[ 1.214144] netlink_rcv_skb+0xeb/0x120
[ 1.214150] netlink_unicast+0x1f9/0x320
[ 1.214153] netlink_sendmsg+0x346/0x3f0
[ 1.214157] __sock_sendmsg+0x86/0xb0
[ 1.214160] ____sys_sendmsg+0x1c8/0x220
[ 1.214164] ___sys_sendmsg+0x28f/0x2d0
[ 1.214179] __x64_sys_sendmsg+0xef/0x140
[ 1.214184] do_syscall_64+0xec/0x1d0
[ 1.214190] entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 1.214191] RIP: 0033:0x7f2d1b4a7e56
Device setup:
netdevsim0 (down)
^ ^
bond netdevsim1.100@netdevsim1 allmulticast=on (down)
When we enslave the lower device (netdevsim0), which has a vlan, we
propagate the vlan's allmulti/promisc flags during ndo_open. This causes
(re)locking of the real_dev.
Propagate allmulti/promisc on flags change instead of on open (see the
sketch after this entry). There is a slight semantics change in that
vlans that are down now propagate the flags, but this seems unlikely to
cause real issues.
Reproducer:
echo 0 1 > /sys/bus/netdevsim/new_device
dev_path=$(ls -d /sys/bus/netdevsim/devices/netdevsim0/net/*)
dev=$(echo $dev_path | rev | cut -d/ -f1 | rev)
ip link set dev $dev name netdevsim0
ip link set dev netdevsim0 up
ip link add link netdevsim0 name netdevsim0.100 type vlan id 100
ip link set dev netdevsim0.100 allm
---truncated--- |
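A rough sketch of "propagate on flags change, not on open", mirroring the shape of the vlan rx-flags callback; treat the function name as illustrative, not the literal patch:

#include <linux/if_vlan.h>
#include <linux/netdevice.h>

/* Sketch: an ndo_change_rx_flags handler propagates allmulti/promisc to
 * the real device, so ndo_open no longer has to touch real_dev and
 * therefore no longer re-acquires its instance lock from inside
 * dev_open(). */
static void vlan_change_rx_flags_sketch(struct net_device *dev, int change)
{
    struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;

    if (!(dev->flags & IFF_UP))
        return;

    if (change & IFF_ALLMULTI)
        dev_set_allmulti(real_dev, dev->flags & IFF_ALLMULTI ? 1 : -1);
    if (change & IFF_PROMISC)
        dev_set_promiscuity(real_dev, dev->flags & IFF_PROMISC ? 1 : -1);
}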
| In the Linux kernel, the following vulnerability has been resolved:
PM: hibernate: Avoid deadlock in hibernate_compressor_param_set()
syzbot reported a deadlock in lock_system_sleep() (see below).
The write operation to "/sys/module/hibernate/parameters/compressor"
conflicts with the registration of an ieee80211 device, resulting in a
deadlock when attempting to acquire system_transition_mutex under param_lock.
To avoid this deadlock, change hibernate_compressor_param_set() to use
mutex_trylock() for attempting to acquire system_transition_mutex and
return -EBUSY when it fails.
Task flags need not be saved or adjusted before calling
mutex_trylock(&system_transition_mutex) because the caller is not going
to end up waiting for this mutex and if it runs concurrently with system
suspend in progress, it will be frozen properly when it returns to user
space.
syzbot report:
syz-executor895/5833 is trying to acquire lock:
ffffffff8e0828c8 (system_transition_mutex){+.+.}-{4:4}, at: lock_system_sleep+0x87/0xa0 kernel/power/main.c:56
but task is already holding lock:
ffffffff8e07dc68 (param_lock){+.+.}-{4:4}, at: kernel_param_lock kernel/params.c:607 [inline]
ffffffff8e07dc68 (param_lock){+.+.}-{4:4}, at: param_attr_store+0xe6/0x300 kernel/params.c:586
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (param_lock){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/mutex.c:585 [inline]
__mutex_lock+0x19b/0xb10 kernel/locking/mutex.c:730
ieee80211_rate_control_ops_get net/mac80211/rate.c:220 [inline]
rate_control_alloc net/mac80211/rate.c:266 [inline]
ieee80211_init_rate_ctrl_alg+0x18d/0x6b0 net/mac80211/rate.c:1015
ieee80211_register_hw+0x20cd/0x4060 net/mac80211/main.c:1531
mac80211_hwsim_new_radio+0x304e/0x54e0 drivers/net/wireless/virtual/mac80211_hwsim.c:5558
init_mac80211_hwsim+0x432/0x8c0 drivers/net/wireless/virtual/mac80211_hwsim.c:6910
do_one_initcall+0x128/0x700 init/main.c:1257
do_initcall_level init/main.c:1319 [inline]
do_initcalls init/main.c:1335 [inline]
do_basic_setup init/main.c:1354 [inline]
kernel_init_freeable+0x5c7/0x900 init/main.c:1568
kernel_init+0x1c/0x2b0 init/main.c:1457
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
-> #2 (rtnl_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/mutex.c:585 [inline]
__mutex_lock+0x19b/0xb10 kernel/locking/mutex.c:730
wg_pm_notification drivers/net/wireguard/device.c:80 [inline]
wg_pm_notification+0x49/0x180 drivers/net/wireguard/device.c:64
notifier_call_chain+0xb7/0x410 kernel/notifier.c:85
notifier_call_chain_robust kernel/notifier.c:120 [inline]
blocking_notifier_call_chain_robust kernel/notifier.c:345 [inline]
blocking_notifier_call_chain_robust+0xc9/0x170 kernel/notifier.c:333
pm_notifier_call_chain_robust+0x27/0x60 kernel/power/main.c:102
snapshot_open+0x189/0x2b0 kernel/power/user.c:77
misc_open+0x35a/0x420 drivers/char/misc.c:179
chrdev_open+0x237/0x6a0 fs/char_dev.c:414
do_dentry_open+0x735/0x1c40 fs/open.c:956
vfs_open+0x82/0x3f0 fs/open.c:1086
do_open fs/namei.c:3830 [inline]
path_openat+0x1e88/0x2d80 fs/namei.c:3989
do_filp_open+0x20c/0x470 fs/namei.c:4016
do_sys_openat2+0x17a/0x1e0 fs/open.c:1428
do_sys_open fs/open.c:1443 [inline]
__do_sys_openat fs/open.c:1459 [inline]
__se_sys_openat fs/open.c:1454 [inline]
__x64_sys_openat+0x175/0x210 fs/open.c:1454
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 ((pm_chain_head).rwsem){++++}-{4:4}:
down_read+0x9a/0x330 kernel/locking/rwsem.c:1524
blocking_notifier_call_chain_robust kerne
---truncated--- |
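A minimal sketch of the mutex_trylock() approach described in this entry; the body is simplified and param_set_copystring() is only a stand-in for the real compressor update:

#include <linux/errno.h>
#include <linux/moduleparam.h>
#include <linux/mutex.h>
#include <linux/suspend.h>

static int hibernate_compressor_param_set_sketch(const char *val,
                                                 const struct kernel_param *kp)
{
    int ret;

    /* Never block on system_transition_mutex while param_lock is held;
     * back off with -EBUSY instead, which breaks the reported
     * param_lock -> system_transition_mutex dependency. */
    if (!mutex_trylock(&system_transition_mutex))
        return -EBUSY;

    ret = param_set_copystring(val, kp);    /* stand-in for the real update */

    mutex_unlock(&system_transition_mutex);
    return ret;
}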
| In the Linux kernel, the following vulnerability has been resolved:
io_uring/net: fix io_req_post_cqe abuse by send bundle
[ 114.987980][ T5313] WARNING: CPU: 6 PID: 5313 at io_uring/io_uring.c:872 io_req_post_cqe+0x12e/0x4f0
[ 114.991597][ T5313] RIP: 0010:io_req_post_cqe+0x12e/0x4f0
[ 115.001880][ T5313] Call Trace:
[ 115.002222][ T5313] <TASK>
[ 115.007813][ T5313] io_send+0x4fe/0x10f0
[ 115.009317][ T5313] io_issue_sqe+0x1a6/0x1740
[ 115.012094][ T5313] io_wq_submit_work+0x38b/0xed0
[ 115.013223][ T5313] io_worker_handle_work+0x62a/0x1600
[ 115.013876][ T5313] io_wq_worker+0x34f/0xdf0
As the comment states, io_req_post_cqe() should only be used by
multishot requests, i.e. REQ_F_APOLL_MULTISHOT, which bundled sends are
not. Add a flag signifying whether a request wants to post multiple
CQEs. Eventually REQ_F_APOLL_MULTISHOT should imply the new flag, but
that's left out for simplicity. |
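To illustrate the idea of "a flag signifying whether a request wants to post multiple CQEs", a hedged sketch; the flag name REQ_F_WANTS_MULTI_CQE is hypothetical and only stands in for whatever the fix actually introduces:

#include <linux/types.h>

/* Hypothetical flag: set by request types (multishot poll, bundled sends)
 * that are allowed to post more than one CQE per request. */
#define REQ_F_WANTS_MULTI_CQE   (1U << 30)

/* Mirrors the WARN_ON hit in the trace above: only requests that opted
 * in may use the multi-CQE posting path. */
static inline bool req_may_post_extra_cqe_sketch(unsigned int req_flags)
{
    return req_flags & REQ_F_WANTS_MULTI_CQE;
}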
| An attacker can use the Raft server protocol in an unauthenticated way and can see the server's resources, including directories and files.
This issue affects Apache Zeppelin: from 0.10.1 up to 0.12.0.
Users are recommended to upgrade to version 0.12.0, which fixes the issue by removing the Cluster Interpreter. |
| SENEC Storage Box V1, V2, and V3 accidentally expose a management UI accessible with publicly known admin credentials. |
| Cortex-A77 cores (r0p0 and r1p0) are affected by erratum 1508412
where software, under certain circumstances, could deadlock a core
due to the execution of either a load to device or non-cacheable memory,
and either a store exclusive or register read of the Physical
Address Register (PAR_EL1) in close proximity.
|
| An information disclosure vulnerability exists in the CtEnumCa() functionality of SoftEther VPN 4.41-9782-beta and 5.01.9674. Specially crafted network packets can lead to a disclosure of sensitive information. An attacker can send packets to trigger this vulnerability. |
| A memory initialization issue was addressed. This issue is fixed in macOS Ventura 13.3, macOS Monterey 12.6.4. A remote attacker may be able to cause unexpected app termination or arbitrary code execution. |
| p2putil.c in iNet wireless daemon (IWD) through 2.15 allows attackers to cause a denial of service (daemon crash) or possibly have unspecified other impact because of initialization issues in situations where parsing of advertised service information fails. |
| In the Linux kernel, the following vulnerability has been resolved:
dma-buf/sw-sync: don't enable IRQ from sync_print_obj()
Since commit a6aa8fca4d79 ("dma-buf/sw-sync: Reduce irqsave/irqrestore from
known context") erroneously replaced spin_unlock_irqrestore() with
spin_unlock_irq() for both sync_debugfs_show() and sync_print_obj(), even
though sync_print_obj() is called from sync_debugfs_show(), lockdep
complains with an inconsistent lock state warning.
Use plain spin_{lock,unlock}() for sync_print_obj(), since
sync_debugfs_show() is already using spin_{lock,unlock}_irq(). |
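A minimal sketch of the caller/helper split described above; the structure and function names are simplified stand-ins for the sw-sync debugfs code:

#include <linux/list.h>
#include <linux/seq_file.h>
#include <linux/spinlock.h>

struct timeline_sketch {
    spinlock_t lock;
    struct list_head node;
};

static LIST_HEAD(timeline_list_sketch);
static DEFINE_SPINLOCK(timeline_list_lock_sketch);

/* Called with IRQs already disabled by the caller below, so it must use
 * the plain variants: spin_unlock_irq() here would wrongly re-enable
 * IRQs and trigger lockdep's inconsistent-lock-state warning. */
static void print_obj_sketch(struct seq_file *s, struct timeline_sketch *obj)
{
    spin_lock(&obj->lock);
    /* ... print the object's state ... */
    spin_unlock(&obj->lock);
}

static int debugfs_show_sketch(struct seq_file *s, void *unused)
{
    struct timeline_sketch *obj;

    spin_lock_irq(&timeline_list_lock_sketch);
    list_for_each_entry(obj, &timeline_list_sketch, node)
        print_obj_sketch(s, obj);
    spin_unlock_irq(&timeline_list_lock_sketch);
    return 0;
}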
| In the Linux kernel, the following vulnerability has been resolved:
serial: max3100: Lock port->lock when calling uart_handle_cts_change()
uart_handle_cts_change() has to be called with the port lock taken.
Since we run it in a separate work item, the lock may not be held at
the time it runs. Make sure that it is held by taking it explicitly.
Without that we get a splat:
WARNING: CPU: 0 PID: 10 at drivers/tty/serial/serial_core.c:3491 uart_handle_cts_change+0xa6/0xb0
...
Workqueue: max3100-0 max3100_work [max3100]
RIP: 0010:uart_handle_cts_change+0xa6/0xb0
...
max3100_handlerx+0xc5/0x110 [max3100]
max3100_work+0x12a/0x340 [max3100] |
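A minimal sketch of taking the port lock around the call from work context; the helper name is illustrative and the surrounding driver plumbing is omitted:

#include <linux/serial_core.h>
#include <linux/spinlock.h>

/* Sketch of the status path running from a workqueue: the work item does
 * not hold port->lock, so take it explicitly around
 * uart_handle_cts_change(), as serial_core expects. */
static void handle_cts_from_work_sketch(struct uart_port *port, bool cts)
{
    unsigned long flags;

    spin_lock_irqsave(&port->lock, flags);
    uart_handle_cts_change(port, cts);
    spin_unlock_irqrestore(&port->lock, flags);
}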
| In the Linux kernel, the following vulnerability has been resolved:
md: fix resync softlockup when bitmap size is less than array size
It is reported that for dm-raid10, lvextend + lvchange --syncaction will
trigger the following softlockup:
kernel:watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [mdX_resync:6976]
CPU: 7 PID: 3588 Comm: mdX_resync Kdump: loaded Not tainted 6.9.0-rc4-next-20240419 #1
RIP: 0010:_raw_spin_unlock_irq+0x13/0x30
Call Trace:
<TASK>
md_bitmap_start_sync+0x6b/0xf0
raid10_sync_request+0x25c/0x1b40 [raid10]
md_do_sync+0x64b/0x1020
md_thread+0xa7/0x170
kthread+0xcf/0x100
ret_from_fork+0x30/0x50
ret_from_fork_asm+0x1a/0x30
And the detailed process is as follows:
md_do_sync
    j = mddev->resync_min
    while (j < max_sectors)
        sectors = raid10_sync_request(mddev, j, &skipped)
            if (!md_bitmap_start_sync(..., &sync_blocks))
                // md_bitmap_start_sync sets sync_blocks to 0
                return sync_blocks + sectors_skipped;
                // sectors = 0;
        j += sectors;
        // j never changes
The root cause is that commit 301867b1c168 ("md/raid10: check
slab-out-of-bounds in md_bitmap_get_counter") returns early from
md_bitmap_get_counter() without setting the returned blocks.
Fix this problem by always setting the returned blocks in
md_bitmap_get_counter(), as it used to be.
Note that this patch only fixes the softlockup problem in the kernel; the
case where the bitmap size doesn't match the array size still needs to be
fixed. |
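A hedged sketch of "always set the returned blocks, even on the out-of-bounds early return"; the structure and field names are simplified stand-ins, not the md bitmap code itself:

#include <linux/types.h>

struct bitmap_counts_sketch {
    sector_t covered_sectors;   /* sectors the (possibly too small) bitmap covers */
    unsigned long chunkshift;   /* log2 of sectors per bitmap chunk */
};

/* Returns the counter for @offset, or NULL when @offset is beyond what
 * the bitmap covers. *blocks is set on every path, so a caller such as
 * md_do_sync() always advances by a non-zero number of sectors and
 * cannot livelock. */
static void *get_counter_sketch(struct bitmap_counts_sketch *counts,
                                sector_t offset, sector_t *blocks)
{
    sector_t csize = (sector_t)1 << counts->chunkshift;

    *blocks = csize - (offset & (csize - 1));   /* set unconditionally */

    if (offset >= counts->covered_sectors)
        return NULL;    /* out of bounds, but *blocks is still valid */

    /* ... look up and return the real counter here ... */
    return NULL;
}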
| In the Linux kernel, the following vulnerability has been resolved:
netrom: fix possible dead-lock in nr_rt_ioctl()
syzbot loves netrom, and found a possible deadlock in nr_rt_ioctl [1].
Make sure we always acquire nr_node_list_lock before nr_node_lock(nr_node);
see the sketch after this entry.
[1]
WARNING: possible circular locking dependency detected
6.9.0-rc7-syzkaller-02147-g654de42f3fc6 #0 Not tainted
------------------------------------------------------
syz-executor350/5129 is trying to acquire lock:
ffff8880186e2070 (&nr_node->node_lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffff8880186e2070 (&nr_node->node_lock){+...}-{2:2}, at: nr_node_lock include/net/netrom.h:152 [inline]
ffff8880186e2070 (&nr_node->node_lock){+...}-{2:2}, at: nr_dec_obs net/netrom/nr_route.c:464 [inline]
ffff8880186e2070 (&nr_node->node_lock){+...}-{2:2}, at: nr_rt_ioctl+0x1bb/0x1090 net/netrom/nr_route.c:697
but task is already holding lock:
ffffffff8f7053b8 (nr_node_list_lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffffffff8f7053b8 (nr_node_list_lock){+...}-{2:2}, at: nr_dec_obs net/netrom/nr_route.c:462 [inline]
ffffffff8f7053b8 (nr_node_list_lock){+...}-{2:2}, at: nr_rt_ioctl+0x10a/0x1090 net/netrom/nr_route.c:697
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (nr_node_list_lock){+...}-{2:2}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
nr_remove_node net/netrom/nr_route.c:299 [inline]
nr_del_node+0x4b4/0x820 net/netrom/nr_route.c:355
nr_rt_ioctl+0xa95/0x1090 net/netrom/nr_route.c:683
sock_do_ioctl+0x158/0x460 net/socket.c:1222
sock_ioctl+0x629/0x8e0 net/socket.c:1341
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:904 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:890
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&nr_node->node_lock){+...}-{2:2}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
__lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
nr_node_lock include/net/netrom.h:152 [inline]
nr_dec_obs net/netrom/nr_route.c:464 [inline]
nr_rt_ioctl+0x1bb/0x1090 net/netrom/nr_route.c:697
sock_do_ioctl+0x158/0x460 net/socket.c:1222
sock_ioctl+0x629/0x8e0 net/socket.c:1341
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:904 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:890
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(nr_node_list_lock);
lock(&nr_node->node_lock);
lock(nr_node_list_lock);
lock(&nr_node->node_lock);
*** DEADLOCK ***
1 lock held by syz-executor350/5129:
#0: ffffffff8f7053b8 (nr_node_list_lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
#0: ffffffff8f7053b8 (nr_node_list_lock){+...}-{2:2}, at: nr_dec_obs net/netrom/nr_route.c:462 [inline]
#0: ffffffff8f70
---truncated--- |
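A minimal sketch of the lock-ordering rule stated above (take nr_node_list_lock first, then the per-node lock); the structures are simplified stand-ins for the netrom ones:

#include <linux/list.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(node_list_lock_sketch);  /* stands in for nr_node_list_lock */
static LIST_HEAD(node_list_sketch);

struct nr_node_sketch {
    struct list_head node;
    spinlock_t node_lock;       /* stands in for nr_node->node_lock */
    unsigned int obs_count;
};

/* Consistent ordering: list lock first, then each node's lock. Taking
 * them in the opposite order on another path is exactly the AB/BA
 * scenario lockdep reports above. */
static void dec_obs_sketch(void)
{
    struct nr_node_sketch *n;

    spin_lock_bh(&node_list_lock_sketch);
    list_for_each_entry(n, &node_list_sketch, node) {
        spin_lock_bh(&n->node_lock);
        if (n->obs_count)
            n->obs_count--;
        spin_unlock_bh(&n->node_lock);
    }
    spin_unlock_bh(&node_list_lock_sketch);
}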