Description
In the Linux kernel, the following vulnerability has been resolved:

mm/shmem, swap: fix race of truncate and swap entry split

The helper for shmem swap freeing does not handle the order of swap
entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but it
retrieves the entry order beforehand with xa_get_order, without lock
protection, so it may see an outdated order if the entry is split or
otherwise changed between the xa_get_order and the xa_cmpxchg_irq.

Besides, the order could grow larger than expected and cause truncation
to erase data beyond the end border. For example, if the target entry and
the following entries are swapped in or freed, and then a large folio is
added in their place and swapped out using the same entry, the
xa_cmpxchg_irq will still succeed. This is very unlikely to happen, though.

To fix that, open code the Xarray cmpxchg and put the order retrieval and
value checking in the same critical section. Also ensure the order won't
exceed the end border: skip the entry if it crosses the border.

Skipping large swap entries that cross the end border is safe here. Shmem
truncate iterates the range twice: in the first iteration,
find_lock_entries has already filtered out such entries, and shmem will
swap in the entries that cross the end border and partially truncate the
folio (split the folio or at least zero part of it). So if we see a swap
entry that crosses the end border in the second loop, its content must
already have been erased.

I observed random swapoff hangs and kernel panics when stress testing
ZSWAP with shmem. After applying this patch, all problems are gone.
Published: 2026-02-14
Score: 7.3 High
EPSS: < 1% Very Low
KEV: No
Impact: Kernel memory corruption or kernel panic leading to denial of service
Action: Apply Patch
AI Analysis

Impact

The Linux kernel’s memory‑management subsystem contains a race condition in the shared‑memory (shmem) swap‑freeing routine. The helper obtains the order of a swap entry with xa_get_order outside a lock and later performs an atomic compare‑and‑swap to delete the entry. If the entry is split or otherwise modified between these two steps, the order value becomes stale. The stale order can cause the helper to truncate data beyond the intended end boundary or leave a swap entry uncleared, resulting in kernel memory corruption and potentially a panic when swapoff is called.

Affected Systems

The flaw exists in the Linux kernel itself, specifically in the 6.19 release candidates (rc1 through rc7). Systems running these kernels and utilizing the shmem swap facility are potentially vulnerable.

Risk and Exploitability

The CVSS score of 7.3 indicates a high‑severity impact. The EPSS score is reported as < 1 % and the vulnerability is not listed in the CISA KEV catalog. Exploitation would require a precise race between truncation of a region and modification of a swap entry; such a race is technically possible but practically difficult to trigger from user space. The risk is therefore primarily a denial‑of‑service scenario rather than privilege escalation.

Generated by OpenCVE AI on April 18, 2026 at 18:02 UTC.

Remediation

No vendor fix or workaround currently provided.

OpenCVE Recommended Actions

  • Upgrade to a patched Linux kernel release (e.g., 6.19 rc8 or newer) that incorporates the race‑condition fix.
  • If an upgrade is not immediately feasible, disable ZSWAP or set vm.swappiness=0 to reduce use of shmem swap and lower the attack surface.
  • Monitor system logs (dmesg, journalctl) for swapoff hangs or kernel panics and consider applying SELinux/AppArmor profiles to harden the runtime environment.
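A few shell commands can support these actions. The sysfs path and sysctl key below are standard on Linux, but zswap's presence varies by kernel build; treat this as a sketch, not an authoritative procedure:

```shell
# Check the running kernel; per this CVE, 6.19-rc1 through 6.19-rc7 are affected.
uname -r

# Check whether zswap is enabled (the sysfs path exists only when zswap is built).
cat /sys/module/zswap/parameters/enabled 2>/dev/null || echo "zswap not available"

# Mitigation from the recommended actions (runtime only; needs root):
#   sudo sysctl vm.swappiness=0
# To persist across reboots, drop the setting into /etc/sysctl.d/:
#   echo 'vm.swappiness=0' | sudo tee /etc/sysctl.d/99-cve-2026-23161.conf
```

Note that vm.swappiness=0 only biases the kernel away from swapping; it does not eliminate shmem swap use entirely, so upgrading the kernel remains the real fix.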

Generated by OpenCVE AI on April 18, 2026 at 18:02 UTC.

Tracking


Advisories

No advisories yet.

History

Fri, 03 Apr 2026 14:00:00 +0000

Type: Metrics
Values Removed: cvssV3_1 {'score': 4.7, 'vector': 'CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H'}
Values Added: cvssV3_1 {'score': 7.3, 'vector': 'CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:H/A:H'}


Wed, 18 Mar 2026 15:15:00 +0000

Type: Weaknesses
Values Added: CWE-362

Type: CPEs
Values Added:
  cpe:2.3:o:linux:linux_kernel:6.19:rc1:*:*:*:*:*:*
  cpe:2.3:o:linux:linux_kernel:6.19:rc2:*:*:*:*:*:*
  cpe:2.3:o:linux:linux_kernel:6.19:rc3:*:*:*:*:*:*
  cpe:2.3:o:linux:linux_kernel:6.19:rc4:*:*:*:*:*:*
  cpe:2.3:o:linux:linux_kernel:6.19:rc5:*:*:*:*:*:*
  cpe:2.3:o:linux:linux_kernel:6.19:rc6:*:*:*:*:*:*
  cpe:2.3:o:linux:linux_kernel:6.19:rc7:*:*:*:*:*:*

Type: Metrics
Values Removed: cvssV3_1 {'score': 7.0, 'vector': 'CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H'}
Values Added: cvssV3_1 {'score': 4.7, 'vector': 'CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H'}


Tue, 17 Feb 2026 00:15:00 +0000

Type: References

Type: Metrics
Values Removed: threat_severity: None
Values Added: cvssV3_1 {'score': 7.0, 'vector': 'CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H'}; threat_severity: Moderate


Sat, 14 Feb 2026 16:15:00 +0000

Type: Description
Values Added: In the Linux kernel, the following vulnerability has been resolved: mm/shmem, swap: fix race of truncate and swap entry split […]

Type: Title
Values Added: mm/shmem, swap: fix race of truncate and swap entry split

Type: First Time appeared
Values Added: Linux; Linux linux Kernel

Type: CPEs
Values Added: cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*

Type: Vendors & Products
Values Added: Linux; Linux linux Kernel

Type: References

Subscriptions

Linux Linux Kernel
MITRE

Status: PUBLISHED

Assigner: Linux

Published:

Updated: 2026-04-03T13:32:07.174Z

Reserved: 2026-01-13T15:37:45.979Z

Link: CVE-2026-23161

Vulnrichment

No data.

NVD

Status: Modified

Published: 2026-02-14T16:15:56.277

Modified: 2026-04-03T14:16:24.830

Link: CVE-2026-23161

Redhat

Severity : Moderate

Public Date: 2026-02-14T00:00:00Z

Links: CVE-2026-23161 - Bugzilla

cve-icon OpenCVE Enrichment

Updated: 2026-04-18T18:15:06Z

Weaknesses

CWE-362