xsk: Fix race condition in AF_XDP generic RX path
Move rx_lock from xsk_socket to xsk_buff_pool. This fixes synchronization for shared-umem mode in the generic RX path, where multiple sockets share a single xsk_buff_pool.

The RX queue is exclusive to an xsk_socket, while the FILL queue can be shared between multiple sockets. This could result in a race condition where two CPU cores access the RX paths of two different sockets sharing the same umem.

Protect both queues by acquiring the spinlock in the shared xsk_buff_pool. Lock contention may be minimized in the future by some per-thread FQ buffering.
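To make the change concrete, here is a minimal sketch of where the lock lives before and after the fix. The field placement follows the commit description; the surrounding struct members are elided and the layout is illustrative, not the verbatim kernel headers.

```c
/* Illustrative sketch, not verbatim kernel code. */

/* Before the fix: the lock was per-socket (include/net/xdp_sock.h),
 * so two sockets sharing one umem could enter the generic RX path
 * concurrently and race on the shared FILL queue. */
struct xdp_sock {
	/* ... other members elided ... */
	spinlock_t rx_lock;	/* covers this socket's RX path only */
};

/* After the fix: the lock is per-pool (include/net/xsk_buff_pool.h),
 * serializing the generic RX path across every socket bound to the
 * same umem, which also covers the shared FILL queue. */
struct xsk_buff_pool {
	/* ... other members elided ... */
	spinlock_t rx_lock;	/* covers RX and FILL queues of all sharers */
};
```

Since RX queues are per-socket but the FILL queue belongs to the pool, the pool is the natural owner of a lock that must cover both.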
It's safe and necessary to move spin_lock_bh(rx_lock) after xsk_rcv_check() (see the sketch after this list):

* xs->pool and the spinlock initialization are synchronized by the xsk_bind() -> xsk_is_bound() memory barriers.
* xsk_rcv_check() may return true at the moment of xsk_release() or xsk_unbind_dev(); however, this will not cause any data races or race conditions. xsk_unbind_dev() removes the xdp socket from all maps and waits for completion of all outstanding RX operations. Packets in the RX path will either complete safely or be dropped.
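Correspondingly, a minimal sketch of the resulting generic receive flow. It assumes the helper names from the commit message plus __xsk_rcv()/xsk_flush() steps for the actual queue work; this paraphrases the control flow described above rather than quoting the patch.

```c
/* Illustrative sketch of the locking order, not the verbatim patch. */
static int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
{
	u32 len = xdp->data_end - xdp->data;
	int err;

	/* Unlocked state check first: publication of xs->pool and the
	 * pool's rx_lock initialization is ordered by the
	 * xsk_bind() -> xsk_is_bound() memory barriers, so dereferencing
	 * xs->pool below is safe once the check passes. */
	err = xsk_rcv_check(xs, xdp, len);
	if (err)
		return err;

	/* Take the lock in the shared pool, not in the socket, so that
	 * sockets sharing one umem cannot race on the FILL queue. */
	spin_lock_bh(&xs->pool->rx_lock);
	err = __xsk_rcv(xs, xdp, len);	/* copy into a FILL-queue buffer */
	if (!err)
		xsk_flush(xs);		/* publish to this socket's RX queue */
	spin_unlock_bh(&xs->pool->rx_lock);

	return err;
}
```

If the socket is being torn down concurrently, the check may still pass, but per the reasoning above the packet either completes safely under the pool lock or is dropped.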
Affected Vendors & Products
| Source | ID | Title |
|---|---|---|
| EUVD | EUVD-2025-15945 | In the Linux kernel, the following vulnerability has been resolved: xsk: Fix race condition in AF_XDP generic RX path […] |
| Ubuntu USN | USN-7649-1 | Linux kernel vulnerabilities |
| Ubuntu USN | USN-7649-2 | Linux kernel (AWS) vulnerabilities |
| Ubuntu USN | USN-7650-1 | Linux kernel (OEM) vulnerabilities |
| Ubuntu USN | USN-7665-1 | Linux kernel (Oracle) vulnerabilities |
| Ubuntu USN | USN-7665-2 | Linux kernel (AWS) vulnerabilities |
| Ubuntu USN | USN-7721-1 | Linux kernel (Azure) vulnerabilities |
Solution
No solution given by the vendor.
Workaround
No workaround given by the vendor.
Mon, 10 Nov 2025 21:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Weaknesses | | CWE-362 |
| CPEs | | cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* cpe:2.3:o:linux:linux_kernel:6.15:rc1:*:*:*:*:*:* cpe:2.3:o:linux:linux_kernel:6.15:rc2:*:*:*:*:*:* cpe:2.3:o:linux:linux_kernel:6.15:rc3:*:*:*:*:*:* cpe:2.3:o:linux:linux_kernel:6.15:rc4:*:*:*:*:*:* |
| Metrics | cvssV3_1 | cvssV3_1 |
Wed, 09 Jul 2025 02:30:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | cvssV3_1 | cvssV3_1 |
Wed, 21 May 2025 03:00:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| References | | |
| Metrics | | threat_severity, cvssV3_1 |
Tue, 20 May 2025 15:30:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | In the Linux kernel, the following vulnerability has been resolved: xsk: Fix race condition in AF_XDP generic RX path […] |
| Title | | xsk: Fix race condition in AF_XDP generic RX path |
| References | | |
Status: PUBLISHED
Assigner: Linux
Published:
Updated: 2025-05-26T05:23:44.292Z
Reserved: 2025-04-16T04:51:23.968Z
Link: CVE-2025-37920
Status : Analyzed
Published: 2025-05-20T16:15:28.603
Modified: 2025-11-10T21:11:09.640
Link: CVE-2025-37920
OpenCVE Enrichment
Updated: 2025-06-27T09:26:51Z