| CVE | Vendors | Products | Updated | CVSS v3.1 |
| In the Linux kernel, the following vulnerability has been resolved:
fs: ext4: change GFP_KERNEL to GFP_NOFS to avoid deadlock
The parent function ext4_xattr_inode_lookup_create already uses GFP_NOFS for memory allocation, so the function ext4_xattr_inode_cache_find should use the same gfp_flag. |
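A minimal sketch of the gfp-flag change described above, assuming the lookup path allocates a temporary buffer; the helper name below is hypothetical and only illustrates the GFP_KERNEL to GFP_NOFS swap:
```
#include <linux/slab.h>

/* Hypothetical helper: allocate a scratch buffer on the xattr lookup path.
 * GFP_NOFS keeps the allocator from recursing into filesystem reclaim while
 * ext4 already holds xattr/inode locks, which is the deadlock described. */
static void *xattr_scratch_alloc(size_t size)
{
	/* was: kmalloc(size, GFP_KERNEL) -- may re-enter the fs under pressure */
	return kmalloc(size, GFP_NOFS);
}
```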
| In the Linux kernel, the following vulnerability has been resolved:
drm/mediatek: Disable AFBC support on Mediatek DRM driver
Commit c410fa9b07c3 ("drm/mediatek: Add AFBC support to Mediatek DRM
driver") added AFBC support to Mediatek DRM and enabled the
32x8/split/sparse modifier.
However, this is currently broken on Mediatek MT8188 (Genio 700 EVK
platform); tested using upstream Kernel and Mesa (v25.2.1), AFBC is used by
default since Mesa v25.0.
Kernel trace reports vblank timeouts constantly, and the render is garbled:
```
[CRTC:62:crtc-0] vblank wait timed out
WARNING: CPU: 7 PID: 70 at drivers/gpu/drm/drm_atomic_helper.c:1835 drm_atomic_helper_wait_for_vblanks.part.0+0x24c/0x27c
[...]
Hardware name: MediaTek Genio-700 EVK (DT)
Workqueue: events_unbound commit_work
pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : drm_atomic_helper_wait_for_vblanks.part.0+0x24c/0x27c
lr : drm_atomic_helper_wait_for_vblanks.part.0+0x24c/0x27c
sp : ffff80008337bca0
x29: ffff80008337bcd0 x28: 0000000000000061 x27: 0000000000000000
x26: 0000000000000001 x25: 0000000000000000 x24: ffff0000c9dcc000
x23: 0000000000000001 x22: 0000000000000000 x21: ffff0000c66f2f80
x20: ffff0000c0d7d880 x19: 0000000000000000 x18: 000000000000000a
x17: 000000040044ffff x16: 005000f2b5503510 x15: 0000000000000000
x14: 0000000000000000 x13: 74756f2064656d69 x12: 742074696177206b
x11: 0000000000000058 x10: 0000000000000018 x9 : ffff800082396a70
x8 : 0000000000057fa8 x7 : 0000000000000cce x6 : ffff8000823eea70
x5 : ffff0001fef5f408 x4 : ffff80017ccee000 x3 : ffff0000c12cb480
x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0000c12cb480
Call trace:
drm_atomic_helper_wait_for_vblanks.part.0+0x24c/0x27c (P)
drm_atomic_helper_commit_tail_rpm+0x64/0x80
commit_tail+0xa4/0x1a4
commit_work+0x14/0x20
process_one_work+0x150/0x290
worker_thread+0x2d0/0x3ec
kthread+0x12c/0x210
ret_from_fork+0x10/0x20
---[ end trace 0000000000000000 ]---
```
Until this gets fixed upstream, disable AFBC support on this platform, as
it's currently broken with upstream Mesa. |
| In the Linux kernel, the following vulnerability has been resolved:
net: enetc: fix the deadlock of enetc_mdio_lock
After applying the workaround for err050089, the LS1028A platform
experiences RCU stalls on RT kernel. This issue is caused by the
recursive acquisition of the read lock enetc_mdio_lock. Listed below are some
of the call stacks identified under the enetc_poll path that may lead to
a deadlock:
enetc_poll
-> enetc_lock_mdio
-> enetc_clean_rx_ring OR napi_complete_done
-> napi_gro_receive
-> enetc_start_xmit
-> enetc_lock_mdio
-> enetc_map_tx_buffs
-> enetc_unlock_mdio
-> enetc_unlock_mdio
After enetc_poll acquires the read lock, a higher-priority writer attempts
to acquire the lock, causing preemption. The writer detects that a
read lock is already held and is scheduled out. However, readers under
enetc_poll cannot acquire the read lock again because a writer is already
waiting, leading to a thread hang.
Currently, the deadlock is avoided by adjusting enetc_lock_mdio to prevent
recursive lock acquisition. |
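A simplified sketch of the recursive read-lock pattern blamed above; the lock and wrapper names are taken from the description, the bodies are illustrative only:
```
#include <linux/spinlock.h>

static DEFINE_RWLOCK(enetc_mdio_lock);

static inline void enetc_lock_mdio(void)
{
	read_lock(&enetc_mdio_lock);
}

static inline void enetc_unlock_mdio(void)
{
	read_unlock(&enetc_mdio_lock);
}

/* Hang scenario on RT: enetc_poll() takes the read lock, a higher-priority
 * writer then queues on the lock, and the nested enetc_lock_mdio() reached
 * via enetc_start_xmit() blocks behind that writer on the same CPU. */
```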
| In the Linux kernel, the following vulnerability has been resolved:
net: usb: qmi_wwan: initialize MAC header offset in qmimux_rx_fixup
Raw IP packets have no MAC header, leaving skb->mac_header uninitialized.
This can trigger kernel panics on ARM64 when xfrm or other subsystems
access the offset due to strict alignment checks.
Initialize the MAC header to prevent such crashes.
This can trigger kernel panics on ARM when running IPsec over the
qmimux0 interface.
Example trace:
Internal error: Oops: 000000009600004f [#1] SMP
CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.12.34-gbe78e49cb433 #1
Hardware name: LS1028A RDB Board (DT)
pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : xfrm_input+0xde8/0x1318
lr : xfrm_input+0x61c/0x1318
sp : ffff800080003b20
Call trace:
xfrm_input+0xde8/0x1318
xfrm6_rcv+0x38/0x44
xfrm6_esp_rcv+0x48/0xa8
ip6_protocol_deliver_rcu+0x94/0x4b0
ip6_input_finish+0x44/0x70
ip6_input+0x44/0xc0
ipv6_rcv+0x6c/0x114
__netif_receive_skb_one_core+0x5c/0x8c
__netif_receive_skb+0x18/0x60
process_backlog+0x78/0x17c
__napi_poll+0x38/0x180
net_rx_action+0x168/0x2f0 |
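A hedged sketch of the kind of initialization described, applied to a freshly built raw-IP skb; the function name is hypothetical:
```
#include <linux/skbuff.h>
#include <linux/if_ether.h>

/* Raw-IP frames carry no Ethernet header, so record a valid MAC header
 * offset explicitly before the skb reaches xfrm/netfilter code that may
 * read skb->mac_header. */
static void qmimux_prepare_rx_skb(struct sk_buff *skb, bool ipv6)
{
	skb_reset_mac_header(skb);
	skb->protocol = ipv6 ? htons(ETH_P_IPV6) : htons(ETH_P_IP);
}
```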
| In the Linux kernel, the following vulnerability has been resolved:
drm/sysfb: Do not dereference NULL pointer in plane reset
The plane state in __drm_gem_reset_shadow_plane() can be NULL. Do not
deref that pointer, but forward NULL to the other plane-reset helpers.
This clears plane->state to NULL.
v2:
- fix typo in commit description (Javier) |
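A hedged sketch of the NULL forwarding described, using the generic DRM atomic plane-reset helper; the function name is hypothetical and the surrounding shadow-plane code is omitted:
```
#include <drm/drm_atomic_state_helper.h>
#include <drm/drm_gem_atomic_helper.h>

/* Forward a NULL shadow-plane state instead of dereferencing it; the
 * helper then simply sets plane->state to NULL. */
static void example_reset_shadow_plane(struct drm_plane *plane,
				       struct drm_shadow_plane_state *shadow_plane_state)
{
	if (!shadow_plane_state) {
		__drm_atomic_helper_plane_reset(plane, NULL);
		return;
	}
	__drm_atomic_helper_plane_reset(plane, &shadow_plane_state->base);
}
```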
| In the Linux kernel, the following vulnerability has been resolved:
sched_ext: Fix unsafe locking in the scx_dump_state()
For kernels built with CONFIG_PREEMPT_RT=y, the dump_lock is converted to a
sleepable spinlock that does not disable IRQs, so the following scenario occurs:
inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
irq_work/0/27 [HC0[0]:SC0[0]:HE1:SE1] takes:
(&rq->__lock){?...}-{2:2}, at: raw_spin_rq_lock_nested+0x2b/0x40
{IN-HARDIRQ-W} state was registered at:
lock_acquire+0x1e1/0x510
_raw_spin_lock_nested+0x42/0x80
raw_spin_rq_lock_nested+0x2b/0x40
sched_tick+0xae/0x7b0
update_process_times+0x14c/0x1b0
tick_periodic+0x62/0x1f0
tick_handle_periodic+0x48/0xf0
timer_interrupt+0x55/0x80
__handle_irq_event_percpu+0x20a/0x5c0
handle_irq_event_percpu+0x18/0xc0
handle_irq_event+0xb5/0x150
handle_level_irq+0x220/0x460
__common_interrupt+0xa2/0x1e0
common_interrupt+0xb0/0xd0
asm_common_interrupt+0x2b/0x40
_raw_spin_unlock_irqrestore+0x45/0x80
__setup_irq+0xc34/0x1a30
request_threaded_irq+0x214/0x2f0
hpet_time_init+0x3e/0x60
x86_late_time_init+0x5b/0xb0
start_kernel+0x308/0x410
x86_64_start_reservations+0x1c/0x30
x86_64_start_kernel+0x96/0xa0
common_startup_64+0x13e/0x148
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&rq->__lock);
<Interrupt>
lock(&rq->__lock);
*** DEADLOCK ***
stack backtrace:
CPU: 0 UID: 0 PID: 27 Comm: irq_work/0
Call Trace:
<TASK>
dump_stack_lvl+0x8c/0xd0
dump_stack+0x14/0x20
print_usage_bug+0x42e/0x690
mark_lock.part.44+0x867/0xa70
? __pfx_mark_lock.part.44+0x10/0x10
? string_nocheck+0x19c/0x310
? number+0x739/0x9f0
? __pfx_string_nocheck+0x10/0x10
? __pfx_check_pointer+0x10/0x10
? kvm_sched_clock_read+0x15/0x30
? sched_clock_noinstr+0xd/0x20
? local_clock_noinstr+0x1c/0xe0
__lock_acquire+0xc4b/0x62b0
? __pfx_format_decode+0x10/0x10
? __pfx_string+0x10/0x10
? __pfx___lock_acquire+0x10/0x10
? __pfx_vsnprintf+0x10/0x10
lock_acquire+0x1e1/0x510
? raw_spin_rq_lock_nested+0x2b/0x40
? __pfx_lock_acquire+0x10/0x10
? dump_line+0x12e/0x270
? raw_spin_rq_lock_nested+0x20/0x40
_raw_spin_lock_nested+0x42/0x80
? raw_spin_rq_lock_nested+0x2b/0x40
raw_spin_rq_lock_nested+0x2b/0x40
scx_dump_state+0x3b3/0x1270
? finish_task_switch+0x27e/0x840
scx_ops_error_irq_workfn+0x67/0x80
irq_work_single+0x113/0x260
irq_work_run_list.part.3+0x44/0x70
run_irq_workd+0x6b/0x90
? __pfx_run_irq_workd+0x10/0x10
smpboot_thread_fn+0x529/0x870
? __pfx_smpboot_thread_fn+0x10/0x10
kthread+0x305/0x3f0
? __pfx_kthread+0x10/0x10
ret_from_fork+0x40/0x70
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
This commit therefore uses rq_lock_irqsave/irqrestore() to replace
rq_lock/unlock() in scx_dump_state(). |
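A hedged sketch of the lock-API swap the commit describes, using the scheduler's IRQ-saving runqueue lock helpers (assumes kernel/sched/sched.h for struct rq, struct rq_flags and the helpers); the surrounding dump logic is omitted:
```
/* scx_dump_state() can run from irq_work context; taking rq->__lock with
 * IRQs still enabled lets a tick interrupt re-acquire the same lock.
 * The irqsave variants close that window. */
static void example_dump_one_rq(struct rq *rq)
{
	struct rq_flags rf;

	rq_lock_irqsave(rq, &rf);		/* was: rq_lock(rq, &rf) */
	/* ... collect and print per-rq sched_ext state ... */
	rq_unlock_irqrestore(rq, &rf);		/* was: rq_unlock(rq, &rf) */
}
```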
| In the Linux kernel, the following vulnerability has been resolved:
ceph: fix multifs mds auth caps issue
The mds auth caps check should also validate the
fsname along with the associated caps. Not doing
so would result in applying the mds auth caps of
one fs onto the other fs in a multifs ceph cluster.
The bug causes multiple issues w.r.t. user
authentication; the following is one such example.
Steps to Reproduce (on vstart cluster):
1. Create two file systems in a cluster, say 'fsname1' and 'fsname2'
2. Authorize read only permission to the user 'client.usr' on fs 'fsname1'
$ceph fs authorize fsname1 client.usr / r
3. Authorize read and write permission to the same user 'client.usr' on fs 'fsname2'
$ceph fs authorize fsname2 client.usr / rw
4. Update the keyring
$ceph auth get client.usr >> ./keyring
With the above permissions for the user 'client.usr', the following is the
expectation.
a. The 'client.usr' should be able to only read the contents
and not allowed to create or delete files on file system 'fsname1'.
b. The 'client.usr' should be able to read/write on file system 'fsname2'.
But, with this bug, the 'client.usr' is allowed to read/write on file
system 'fsname1'. See below.
5. Mount the file system 'fsname1' with the user 'client.usr'
$sudo bin/mount.ceph usr@.fsname1=/ /kmnt_fsname1_usr/
6. Try creating a file on file system 'fsname1' with user 'client.usr'. This
should fail but passes with this bug.
$touch /kmnt_fsname1_usr/file1
7. Mount the file system 'fsname1' with the user 'client.admin' and create a
file.
$sudo bin/mount.ceph admin@.fsname1=/ /kmnt_fsname1_admin
$echo "data" > /kmnt_fsname1_admin/admin_file1
8. Try removing an existing file on file system 'fsname1' with the user
'client.usr'. This shouldn't succeed but succeeds with the bug.
$rm -f /kmnt_fsname1_usr/admin_file1
For more information, please take a look at the corresponding mds/fuse patch
and tests added by looking into the tracker mentioned below.
v2: Fix a possible null dereference in doutc
v3: Don't store fsname from mdsmap, validate against
ceph_mount_options's fsname and use it
v4: Code refactor, better warning message and
fix possible compiler warning
[ Slava.Dubeyko: "fsname check failed" -> "fsname mismatch" ] |
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe/guc: Synchronize Dead CT worker with unbind
Cancel and wait for any Dead CT worker to complete before continuing
with device unbinding. Else the worker will end up using resources freed
by the undind operation.
(cherry picked from commit 492671339114e376aaa38626d637a2751cdef263) |
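A hedged sketch of the synchronization described; the worker field name is hypothetical:
```
#include <linux/workqueue.h>

/* Before unbind tears down CT resources, make sure a queued or running
 * Dead CT worker has finished and cannot still be using them. */
static void example_ct_unbind_sync(struct work_struct *dead_ct_work)
{
	cancel_work_sync(dead_ct_work);	/* cancels if pending, waits if running */
}
```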
| In the Linux kernel, the following vulnerability has been resolved:
drm/radeon: Do not kfree() devres managed rdev
Since the allocation of the drivers main structure was changed to
devm_drm_dev_alloc() rdev is managed by devres and we shouldn't be calling
kfree() on it.
This fixes things exploding if the driver probe fails and devres cleans up
the rdev after we already freed it.
(cherry picked from commit 16c0681617b8a045773d4d87b6140002fa75b03b) |
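A hedged sketch of the ownership rule described, with hypothetical structure and function names; the point is that memory obtained via devm_drm_dev_alloc() is released by devres, never by kfree():
```
#include <drm/drm_drv.h>
#include <linux/err.h>

struct example_device {
	struct drm_device ddev;
	/* ... driver-private state ... */
};

static int example_probe(struct device *parent, const struct drm_driver *drv)
{
	struct example_device *edev;

	edev = devm_drm_dev_alloc(parent, drv, struct example_device, ddev);
	if (IS_ERR(edev))
		return PTR_ERR(edev);

	/* On any later probe failure just return the error code.
	 * Do NOT kfree(edev): devres frees it when probe fails or the
	 * device is unbound, so an explicit kfree() is a double free. */
	return 0;
}
```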
| In the Linux kernel, the following vulnerability has been resolved:
ima: don't clear IMA_DIGSIG flag when setting or removing non-IMA xattr
Currently when both IMA and EVM are in fix mode, the IMA signature will
be reset to IMA hash if a program first stores IMA signature in
security.ima and then writes/removes some other security xattr for the
file.
For example, on Fedora, after booting the kernel with "ima_appraise=fix
evm=fix ima_policy=appraise_tcb" and installing rpm-plugin-ima,
installing/reinstalling a package will not result in a good reference IMA
signature being generated. Instead, an IMA hash is generated:
# getfattr -m - -d -e hex /usr/bin/bash
# file: usr/bin/bash
security.ima=0x0404...
This happens because when setting security.selinux, the IMA_DIGSIG flag
that had been set early was cleared. As a result, IMA hash is generated
when the file is closed.
Similarly, IMA signature can be cleared on file close after removing
security xattr like security.evm or setting/removing ACL.
Prevent replacing the IMA file signature with a file hash, by preventing
the IMA_DIGSIG flag from being reset.
Here's a minimal C reproducer which sets security.selinux as the last
step, which can also be replaced by removing security.evm or setting an ACL:
```
#include <stdio.h>
#include <sys/xattr.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>

int main() {
    const char* file_path = "/usr/sbin/test_binary";
    const char* hex_string = "030204d33204490066306402304";
    int length = strlen(hex_string);
    char* ima_attr_value;
    int fd;

    fd = open(file_path, O_WRONLY|O_CREAT|O_EXCL, 0644);
    if (fd == -1) {
        perror("Error opening file");
        return 1;
    }

    ima_attr_value = (char*)malloc(length / 2);
    for (int i = 0, j = 0; i < length; i += 2, j++) {
        sscanf(hex_string + i, "%2hhx", &ima_attr_value[j]);
    }

    if (fsetxattr(fd, "security.ima", ima_attr_value, length/2, 0) == -1) {
        perror("Error setting extended attribute");
        close(fd);
        return 1;
    }

    const char* selinux_value = "system_u:object_r:bin_t:s0";
    if (fsetxattr(fd, "security.selinux", selinux_value, strlen(selinux_value), 0) == -1) {
        perror("Error setting extended attribute");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}
```
|
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nft_ct: add seqadj extension for natted connections
Sequence adjustment may be required for FTP traffic with PASV/EPSV modes,
due to the need to re-write the packet payload (IP, port) on the ftp control
connection. This can require changes to the TCP length and expected
seq / ack_seq.
The easiest way to reproduce this issue is with PASV mode.
Example ruleset:
table inet ftp_nat {
ct helper ftp_helper {
type "ftp" protocol tcp
l3proto inet
}
chain prerouting {
type filter hook prerouting priority 0; policy accept;
tcp dport 21 ct state new ct helper set "ftp_helper"
}
}
table ip nat {
chain prerouting {
type nat hook prerouting priority -100; policy accept;
tcp dport 21 dnat ip prefix to ip daddr map {
192.168.100.1 : 192.168.13.2/32 }
}
chain postrouting {
type nat hook postrouting priority 100 ; policy accept;
tcp sport 21 snat ip prefix to ip saddr map {
192.168.13.2 : 192.168.100.1/32 }
}
}
Note that the ftp helper gets assigned *after* the dnat setup.
The inverse (nat after helper assign) is handled by an existing
check in nf_nat_setup_info() and will not show the problem.
Topology:
+-------------------+ +----------------------------------+
| FTP: 192.168.13.2 | <-> | NAT: 192.168.13.3, 192.168.100.1 |
+-------------------+ +----------------------------------+
|
+-----------------------+
| Client: 192.168.100.2 |
+-----------------------+
ftp nat changes do not work as expected in this case:
Connected to 192.168.100.1.
[..]
ftp> epsv
EPSV/EPRT on IPv4 off.
ftp> ls
227 Entering passive mode (192,168,100,1,209,129).
421 Service not available, remote server has closed connection.
Kernel logs:
Missing nfct_seqadj_ext_add() setup call
WARNING: CPU: 1 PID: 0 at net/netfilter/nf_conntrack_seqadj.c:41
[..]
__nf_nat_mangle_tcp_packet+0x100/0x160 [nf_nat]
nf_nat_ftp+0x142/0x280 [nf_nat_ftp]
help+0x4d1/0x880 [nf_conntrack_ftp]
nf_confirm+0x122/0x2e0 [nf_conntrack]
nf_hook_slow+0x3c/0xb0
..
Fix this by adding the required extension when a conntrack helper is assigned
to a connection that has a nat binding. |
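A hedged sketch of the check described above, mirroring what nf_nat_setup_info() already does for the inverse ordering; the function name is hypothetical:
```
#include <net/netfilter/nf_conntrack.h>
#include <net/netfilter/nf_conntrack_seqadj.h>

/* When a helper is attached to a conntrack entry that already has a NAT
 * binding, add the seqadj extension so later payload rewrites (e.g. the
 * FTP PASV responses) can fix up TCP seq/ack numbers. */
static int example_helper_assign(struct nf_conn *ct)
{
	if ((ct->status & IPS_NAT_MASK) && !nfct_seqadj(ct)) {
		if (!nfct_seqadj_ext_add(ct))
			return -ENOMEM;
	}
	return 0;
}
```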
| In the Linux kernel, the following vulnerability has been resolved:
bpf: account for current allocated stack depth in widen_imprecise_scalars()
The usage pattern for widen_imprecise_scalars() looks as follows:
prev_st = find_prev_entry(env, ...);
queued_st = push_stack(...);
widen_imprecise_scalars(env, prev_st, queued_st);
Where prev_st is an ancestor of the queued_st in the explored states
tree. This ancestor is not guaranteed to have the same allocated stack
depth as queued_st. E.g. in the following case:
def main():
for i in 1..2:
foo(i) // same callsite, different param
def foo(i):
if i == 1:
use 128 bytes of stack
iterator based loop
Here, for a second 'foo' call prev_st->allocated_stack is 128,
while queued_st->allocated_stack is much smaller.
widen_imprecise_scalars() needs to take this into account and avoid
accessing bpf_verifier_state->frame[*]->stack out of bounds. |
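A hedged sketch of the bound the commit describes: iterate only over stack slots that exist in both the ancestor and the queued state (structure names follow the verifier; the real widening logic is omitted):
```
#include <linux/bpf_verifier.h>

/* Clamp the widening walk to the smaller allocated stack of the two frames
 * so frame->stack[] is never indexed past what the ancestor allocated. */
static void example_widen_frame(struct bpf_func_state *prev_frame,
				struct bpf_func_state *cur_frame)
{
	int slots = min(prev_frame->allocated_stack,
			cur_frame->allocated_stack) / BPF_REG_SIZE;
	int i;

	for (i = 0; i < slots; i++) {
		/* ... widen the imprecise scalar spilled in slot i ... */
	}
}
```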
| In the Linux kernel, the following vulnerability has been resolved:
mlx5: Fix default values in create CQ
Currently, CQs without a completion function are assigned the
mlx5_add_cq_to_tasklet function by default. This is problematic since
only user CQs created through the mlx5_ib driver are intended to use
this function.
Additionally, all CQs that will use doorbells instead of polling for
completions must call mlx5_cq_arm. However, the default CQ creation flow
leaves a valid value in the CQ's arm_db field, allowing FW to send
interrupts to polling-only CQs in certain corner cases.
These two factors would allow a polling-only kernel CQ to be triggered
by an EQ interrupt and call a completion function intended only for user
CQs, causing a null pointer exception.
Some areas in the driver have prevented this issue with one-off fixes
but did not address the root cause.
This patch fixes the described issue by adding defaults to the create CQ
flow. It adds a default dummy completion function to protect against
null pointer exceptions, and it sets an invalid command sequence number
by default in kernel CQs to prevent the FW from sending an interrupt to
the CQ until it is armed. User CQs are responsible for their own
initialization values.
Callers of mlx5_core_create_cq are responsible for changing the
completion function and arming the CQ per their needs. |
| In the Linux kernel, the following vulnerability has been resolved:
netpoll: Fix deadlock in memory allocation under spinlock
Fix an AA deadlock in refill_skbs() where memory allocation while holding
skb_pool->lock can trigger a recursive lock acquisition attempt.
The deadlock scenario occurs when the system is under severe memory
pressure:
1. refill_skbs() acquires skb_pool->lock (spinlock)
2. alloc_skb() is called while holding the lock
3. Memory allocator fails and calls slab_out_of_memory()
4. This triggers printk() for the OOM warning
5. The console output path calls netpoll_send_udp()
6. netpoll_send_udp() attempts to acquire the same skb_pool->lock
7. Deadlock: the lock is already held by the same CPU
Call stack:
refill_skbs()
spin_lock_irqsave(&skb_pool->lock) <- lock acquired
__alloc_skb()
kmem_cache_alloc_node_noprof()
slab_out_of_memory()
printk()
console_flush_all()
netpoll_send_udp()
skb_dequeue()
spin_lock_irqsave(&skb_pool->lock) <- deadlock attempt
This bug was exposed by commit 248f6571fd4c51 ("netpoll: Optimize skb
refilling on critical path") which removed refill_skbs() from the
critical path (where nested printk was being deferred), letting nested
printk be called from inside refill_skbs().
Refactor refill_skbs() to never allocate memory while holding
the spinlock.
Another possible solution to fix this problem is protecting the
refill_skbs() from nested printks, basically calling
printk_deferred_{enter,exit}() in refill_skbs(), then, any nested
pr_warn() would be deferred.
I prefer this approach, given I _think_ it might be a good idea to move
the alloc_skb() from GFP_ATOMIC to GFP_KERNEL in the future, so having
the alloc_skb() outside of the lock will be a necessary step.
There is a possible TOCTOU issue when checking the pool length and
queueing the newly allocated skb, but this is not a problem, given that
an extra SKB in the pool is harmless and will eventually be used. |
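A hedged sketch of the refactor described: do the allocation with no pool lock held and let skb_queue_tail() take the lock only for the enqueue (the pool size constant and the function name are assumptions):
```
#include <linux/skbuff.h>

#define EXAMPLE_MAX_SKBS 32	/* assumed pool target size */

/* Never allocate under skb_pool->lock: if the allocation fails, the OOM
 * printk can recurse into netpoll_send_udp(), which also needs the pool
 * lock. The extra length check after allocation is a benign TOCTOU. */
static void example_refill_skbs(struct sk_buff_head *pool, unsigned int size)
{
	struct sk_buff *skb;

	while (skb_queue_len(pool) < EXAMPLE_MAX_SKBS) {
		skb = alloc_skb(size, GFP_ATOMIC);	/* no spinlock held here */
		if (!skb)
			break;
		skb_queue_tail(pool, skb);	/* takes pool->lock internally */
	}
}
```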
| In the Linux kernel, the following vulnerability has been resolved:
sysfs: check visibility before changing group attribute ownership
Since commit 0c17270f9b92 ("net: sysfs: Implement is_visible for
phys_(port_id, port_name, switch_id)"), __dev_change_net_namespace() can
hit WARN_ON() when trying to change owner of a file that isn't visible.
See the trace below:
WARNING: CPU: 6 PID: 2938 at net/core/dev.c:12410 __dev_change_net_namespace+0xb89/0xc30
CPU: 6 UID: 0 PID: 2938 Comm: incusd Not tainted 6.17.1-1-mainline #1 PREEMPT(full) 4b783b4a638669fb644857f484487d17cb45ed1f
Hardware name: Framework Laptop 13 (AMD Ryzen 7040Series)/FRANMDCP07, BIOS 03.07 02/19/2025
RIP: 0010:__dev_change_net_namespace+0xb89/0xc30
[...]
Call Trace:
<TASK>
? if6_seq_show+0x30/0x50
do_setlink.isra.0+0xc7/0x1270
? __nla_validate_parse+0x5c/0xcc0
? security_capable+0x94/0x1a0
rtnl_newlink+0x858/0xc20
? update_curr+0x8e/0x1c0
? update_entity_lag+0x71/0x80
? sched_balance_newidle+0x358/0x450
? psi_task_switch+0x113/0x2a0
? __pfx_rtnl_newlink+0x10/0x10
rtnetlink_rcv_msg+0x346/0x3e0
? sched_clock+0x10/0x30
? __pfx_rtnetlink_rcv_msg+0x10/0x10
netlink_rcv_skb+0x59/0x110
netlink_unicast+0x285/0x3c0
? __alloc_skb+0xdb/0x1a0
netlink_sendmsg+0x20d/0x430
____sys_sendmsg+0x39f/0x3d0
? import_iovec+0x2f/0x40
___sys_sendmsg+0x99/0xe0
__sys_sendmsg+0x8a/0xf0
do_syscall_64+0x81/0x970
? __sys_bind+0xe3/0x110
? syscall_exit_work+0x143/0x1b0
? do_syscall_64+0x244/0x970
? sock_alloc_file+0x63/0xc0
? syscall_exit_work+0x143/0x1b0
? do_syscall_64+0x244/0x970
? alloc_fd+0x12e/0x190
? put_unused_fd+0x2a/0x70
? do_sys_openat2+0xa2/0xe0
? syscall_exit_work+0x143/0x1b0
? do_syscall_64+0x244/0x970
? exc_page_fault+0x7e/0x1a0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
[...]
</TASK>
Fix this by checking is_visible() before trying to touch the attribute. |
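A hedged sketch of the visibility check described, inside a hypothetical helper that walks a group's attributes before changing their ownership:
```
#include <linux/sysfs.h>

/* Skip attributes that is_visible() hid for this kobject: the files were
 * never created, so there is nothing to chown and no reason to WARN. */
static void example_group_change_owner(struct kobject *kobj,
				       const struct attribute_group *grp,
				       kuid_t kuid, kgid_t kgid)
{
	struct attribute *const *attr;
	int i;

	for (i = 0, attr = grp->attrs; attr && *attr; i++, attr++) {
		if (grp->is_visible && !grp->is_visible(kobj, *attr, i))
			continue;
		/* ... change ownership of the visible attribute file ... */
	}
}
```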
| In the Linux kernel, the following vulnerability has been resolved:
net: ipv6: fix field-spanning memcpy warning in AH output
Fix field-spanning memcpy warnings in ah6_output() and
ah6_output_done() where extension headers are copied to/from IPv6
address fields, triggering fortify-string warnings about writes beyond
the 16-byte address fields.
memcpy: detected field-spanning write (size 40) of single field "&top_iph->saddr" at net/ipv6/ah6.c:439 (size 16)
WARNING: CPU: 0 PID: 8838 at net/ipv6/ah6.c:439 ah6_output+0xe7e/0x14e0 net/ipv6/ah6.c:439
The warnings are false positives as the extension headers are
intentionally placed after the IPv6 header in memory. Fix by properly
copying addresses and extension headers separately, and introduce
helper functions to avoid code duplication. |
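A hedged sketch of the split described: copy the fixed 16-byte address field and the variable-length extension headers with separate calls, so each write is bounded by its own destination (names and layout here are illustrative, not the driver's actual helpers):
```
#include <linux/string.h>
#include <net/ipv6.h>

/* Save the destination address and any extension headers that follow the
 * fixed IPv6 header into separate buffers rather than with one memcpy()
 * that spans past the 16-byte address field. */
static void example_save_hdr_parts(const struct ipv6hdr *top_iph,
				   struct in6_addr *daddr_save,
				   void *ext_save, unsigned int extlen)
{
	*daddr_save = top_iph->daddr;			/* 16-byte field copy */
	if (extlen)
		memcpy(ext_save, top_iph + 1, extlen);	/* headers after the IPv6 header */
}
```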
| In the Linux kernel, the following vulnerability has been resolved:
jfs: fix uninitialized waitqueue in transaction manager
The transaction manager initialization in txInit() was not properly
initializing the TxBlock[0].waitor waitqueue, causing a crash when
txEnd(0) is called on read-only filesystems.
When a filesystem is mounted read-only, txBegin() returns tid=0 to
indicate no transaction. However, txEnd(0) still gets called and
tries to access TxBlock[0].waitor via tid_to_tblock(0), but this
waitqueue was never initialized because the initialization loop
started at index 1 instead of 0.
This causes a 'non-static key' lockdep warning and system crash:
INFO: trying to register non-static key in txEnd
Fix by ensuring all transaction blocks including TxBlock[0] have
their waitqueues properly initialized during txInit(). |
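A hedged sketch of the initialization change described: make the setup loop cover slot 0 as well, so the block returned for tid 0 on read-only mounts has a valid waitqueue (the structure here is a stand-in; the rest of txInit() is omitted):
```
#include <linux/wait.h>

struct example_tblock {			/* stand-in for jfs's transaction block */
	wait_queue_head_t waitor;
	/* ... */
};

static void example_tx_init_blocks(struct example_tblock *blocks, int nblocks)
{
	int k;

	for (k = 0; k < nblocks; k++) {	/* was: starting at k = 1 */
		init_waitqueue_head(&blocks[k].waitor);
		/* ... other per-block initialization ... */
	}
}
```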
| In the Linux kernel, the following vulnerability has been resolved:
riscv: stacktrace: Disable KASAN checks for non-current tasks
When unwinding the stack of a task other than current, KASAN would report
"BUG: KASAN: out-of-bounds in walk_stackframe+0x41c/0x460"
The same issue exists on x86 and has been resolved by commit
84936118bdf3 ("x86/unwind: Disable KASAN checks for non-current tasks").
The solution could be applied to RISC-V too.
This patch also can solve the issue:
https://seclists.org/oss-sec/2025/q4/23
[pjw@kernel.org: clean up checkpatch issues] |
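A hedged sketch of the x86-style approach referenced above: when the unwound task is not current, load frame words with READ_ONCE_NOCHECK() so KASAN does not flag reads of another task's stack; the helper name is hypothetical:
```
#include <linux/compiler.h>
#include <linux/sched.h>

/* A sleeping task's stack can legitimately contain KASAN-poisoned slots;
 * only keep the instrumentation when unwinding the current task. */
static unsigned long example_read_stack_word(const struct task_struct *task,
					     const unsigned long *p)
{
	if (task == current)
		return *p;
	return READ_ONCE_NOCHECK(*p);
}
```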
| In the Linux kernel, the following vulnerability has been resolved:
gpiolib: fix invalid pointer access in debugfs
If the memory allocation in gpiolib_seq_start() fails, the s->private
field remains uninitialized and is later dereferenced without checking
in gpiolib_seq_stop(). Initialize s->private to NULL before calling
kzalloc() and check it before dereferencing it. |
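A hedged sketch of the start/stop pairing described; the iterator type and function names are hypothetical:
```
#include <linux/seq_file.h>
#include <linux/slab.h>

struct example_iter {		/* hypothetical per-walk state */
	int index;
};

static void *example_seq_start(struct seq_file *s, loff_t *pos)
{
	struct example_iter *iter;

	s->private = NULL;	/* well-defined even if the allocation fails */
	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
	if (!iter)
		return NULL;
	s->private = iter;
	return iter;
}

static void example_seq_stop(struct seq_file *s, void *v)
{
	struct example_iter *iter = s->private;

	if (!iter)		/* start() failed: nothing to clean up */
		return;
	kfree(iter);
}
```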
| In the Linux kernel, the following vulnerability has been resolved:
s390: Disable ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
As reported by Luiz Capitulino, enabling HVO on s390 leads to reproducible
crashes. The problem is that kernel page tables are modified without
flushing corresponding TLB entries.
Even if it looks like the empty flush_tlb_all() implementation on s390 is
the problem, it is actually a different problem: on s390 it is not allowed
to replace an active/valid page table entry with another valid page table
entry without the detour over an invalid entry. A direct replacement may
lead to random crashes and/or data corruption.
In order to invalidate an entry special instructions have to be used
(e.g. ipte or idte). Alternatively there are also special instructions
available which allow replacing a valid entry with a different valid
entry (e.g. crdte or cspg).
Given that the HVO code currently does not provide the hooks to allow for
an implementation which is compliant with the s390 architecture
requirements, disable ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP again, which is
basically a revert of the original patch which enabled it. |