Description
In the Linux kernel, the following vulnerability has been resolved:

vsock/virtio: cap TX credit to local buffer size

The virtio transports derive their TX credit directly from peer_buf_alloc,
which is set from the remote endpoint's SO_VM_SOCKETS_BUFFER_SIZE value.

On the host side this means that the amount of data we are willing to
queue for a connection is scaled by a guest-chosen buffer size, rather
than the host's own vsock configuration. A malicious guest can advertise
a large buffer and read slowly, causing the host to allocate a
correspondingly large amount of sk_buff memory.
The same thing would happen in the guest with a malicious host, since
virtio transports share the same code base.
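
For illustration, here is a minimal sketch of the credit accounting described above. It is not the kernel source: apart from peer_buf_alloc and buf_alloc, the struct and function names are invented for clarity, but it shows how, before the fix, the free TX window is bounded only by the peer-advertised value.

    #include <stdint.h>

    /* Illustrative state; field names follow the commit message, the
     * struct itself is a simplification of the real socket state. */
    struct vsock_credit_state {
        uint32_t peer_buf_alloc; /* peer's advertised buffer size   */
        uint32_t buf_alloc;      /* our own (local policy) buffer   */
        uint32_t tx_cnt;         /* bytes we have queued/sent       */
        uint32_t peer_fwd_cnt;   /* bytes the peer reports consumed */
    };

    /* Pre-fix behaviour: the window scales with the peer-chosen value. */
    uint32_t tx_credit_before_fix(const struct vsock_credit_state *s)
    {
        uint32_t in_flight = s->tx_cnt - s->peer_fwd_cnt;

        return s->peer_buf_alloc - in_flight;
    }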

Introduce a small helper, virtio_transport_tx_buf_size(), that
returns min(peer_buf_alloc, buf_alloc), and use it wherever we consume
peer_buf_alloc.

This ensures the effective TX window is bounded by both the peer's
advertised buffer and our own buf_alloc (already clamped to
buffer_max_size via SO_VM_SOCKETS_BUFFER_MAX_SIZE), so a remote peer
cannot force the other to queue more data than allowed by its own
vsock settings.
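
Continuing the sketch above (same caveats: simplified, not the kernel source, and the real helper operates on the transport's own socket state rather than this illustrative struct), the fix can be pictured as bounding the same window by min(peer_buf_alloc, buf_alloc):

    /* Post-fix behaviour: the helper named in the commit, sketched
     * against the illustrative struct from the previous snippet. */
    uint32_t virtio_transport_tx_buf_size(const struct vsock_credit_state *s)
    {
        return s->peer_buf_alloc < s->buf_alloc ? s->peer_buf_alloc
                                                : s->buf_alloc;
    }

    uint32_t tx_credit_after_fix(const struct vsock_credit_state *s)
    {
        uint32_t in_flight = s->tx_cnt - s->peer_fwd_cnt;

        /* The window can no longer exceed local policy, however large
         * a buffer the peer advertises. */
        return virtio_transport_tx_buf_size(s) - in_flight;
    }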

On an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with
32 guest vsock connections advertising 2 GiB each and reading slowly
drove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only
recovered after killing the QEMU process. That said, if QEMU's memory is
constrained with cgroups, the memory consumed is capped accordingly.
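
For context, the value that ends up in peer_buf_alloc is whatever buffer size a vsock endpoint sets locally with SO_VM_SOCKETS_BUFFER_SIZE before the connection is established. A minimal userspace sketch of that knob (CID and port are placeholders, error handling trimmed; illustrative only):

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    /* Open a vsock stream connection whose buffer size, advertised to
     * the peer, is buf_size bytes. */
    int vsock_connect_with_buffer(unsigned int cid, unsigned int port,
                                  uint64_t buf_size)
    {
        struct sockaddr_vm addr = {
            .svm_family = AF_VSOCK,
            .svm_cid    = cid,
            .svm_port   = port,
        };
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;

        /* The option takes a 64-bit byte count. */
        if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
                       &buf_size, sizeof(buf_size)) < 0 ||
            connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }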

With this patch applied:

Before:
MemFree: ~61.6 GiB
Slab: ~142 MiB
SUnreclaim: ~117 MiB

After 32 high-credit connections:
MemFree: ~61.5 GiB
Slab: ~178 MiB
SUnreclaim: ~152 MiB

Only ~35 MiB increase in Slab/SUnreclaim, no host OOM, and the guest
remains responsive.

Compatibility with non-virtio transports:

- VMCI uses the AF_VSOCK buffer knobs to size its queue pairs per
socket based on the local vsk->buffer_* values; the remote side
cannot enlarge those queues beyond what the local endpoint
configured.

- Hyper-V's vsock transport uses fixed-size VMBus ring buffers and
an MTU bound; there is no peer-controlled credit field comparable
to peer_buf_alloc, and the remote endpoint cannot drive in-flight
kernel memory above those ring sizes.

- The loopback path reuses virtio_transport_common.c, so it
naturally follows the same semantics as the virtio transport.

This change is limited to virtio_transport_common.c and thus affects
virtio-vsock, vhost-vsock, and loopback, bringing them in line with the
"remote window intersected with local policy" behaviour that VMCI and
Hyper-V already effectively have.

[Stefano: small adjustments after changing the previous patch]
[Stefano: tweak the commit message]
Published: 2026-02-04
Score: 5.5 Medium
EPSS: < 1% Very Low
KEV: No
Impact: Denial of Service via memory exhaustion
Action: Patch immediately
AI Analysis

Impact

The vulnerability in the Linux kernel’s virtio vsock transport lets a malicious peer advertise an arbitrarily large socket buffer size (via SO_VM_SOCKETS_BUFFER_SIZE). When that peer then reads slowly, the host or guest queues a correspondingly large amount of sk_buff memory for the connection, so slab allocation grows without any locally enforced bound. An attacker can therefore exhaust kernel memory and trigger out‑of‑memory conditions or make the system unresponsive. The flaw is a classic case of uncontrolled resource consumption (CWE-400).

Affected Systems

Linux kernel builds that include virtio vsock support, such as typical KVM/QEMU hosts and their guests, are affected in versions that predate the patch capping the TX credit to the lower of the peer's advertised buffer and the local limit. Kernel releases that integrate the virtio_transport_tx_buf_size change, including 6.19 rc6 and later, have removed the flaw.

Risk and Exploitability

The CVSS base score of 5.5 indicates moderate severity, while an EPSS value of less than 1% points to a low likelihood of exploitation. The vulnerability is not listed in the CISA KEV catalog. Exploitation requires control over a virtual machine or host to open many vsock connections advertising large buffers; such activity can be detected through memory monitoring and contained with cgroup memory limits. Successful exploitation can consume tens of gigabytes of memory, potentially leading to kernel OOM kills and downtime.

Generated by OpenCVE AI on April 17, 2026 at 23:35 UTC.

Remediation

A fix is available upstream via the virtio_transport_tx_buf_size patch; distribution updates are listed under Advisories below.

OpenCVE Recommended Actions

  • Upgrade the Linux kernel to a release that includes the virtio_transport_tx_buf_size patch (such as kernel 6.19 rc6 or later).
  • If an immediate kernel upgrade is not feasible, limit the maximum vsock buffer that a guest or host can advertise by configuring SO_VM_SOCKETS_BUFFER_MAX_SIZE on its vsock sockets (see the sketch after this list), or apply cgroup memory constraints to prevent excessive slab allocation.
  • Reduce the number of simultaneous virtio‑vsock connections or enforce per‑VM or per‑guest connection limits through hypervisor configuration to mitigate large‑buffer amplification.
  • Monitor slab allocation and memory usage metrics to detect abnormal growth patterns indicative of exploitation attempts.
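
As a sketch of the SO_VM_SOCKETS_BUFFER_MAX_SIZE knob mentioned above (illustrative only; the function name is made up): capping a socket's maximum buffer size bounds any later SO_VM_SOCKETS_BUFFER_SIZE request on that socket, and on patched kernels it also bounds the local buf_alloc that limits the TX window.

    #include <stdint.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    /* Clamp the maximum vsock buffer size for an already-created
     * AF_VSOCK socket. Returns 0 on success, -1 on error. */
    int vsock_clamp_buffer_max(int fd, uint64_t max_bytes)
    {
        return setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_MAX_SIZE,
                          &max_bytes, sizeof(max_bytes));
    }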

Generated by OpenCVE AI on April 17, 2026 at 23:35 UTC.


Advisories
Source       ID           Title
Debian DLA   DLA-4476-1   linux-6.1 security update
Debian DSA   DSA-6126-1   linux security update
Debian DSA   DSA-6127-1   linux security update
History

Sat, 18 Apr 2026 00:00:00 +0000

Type Values Removed Values Added
Weaknesses CWE-400

Tue, 17 Mar 2026 21:15:00 +0000

Type Values Removed Values Added
Weaknesses NVD-CWE-noinfo
CPEs cpe:2.3:o:linux:linux_kernel:6.19:rc1:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.19:rc2:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.19:rc3:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.19:rc4:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.19:rc5:*:*:*:*:*:*
cpe:2.3:o:linux:linux_kernel:6.19:rc6:*:*:*:*:*:*
Metrics cvssV3_1

{'score': 6.2, 'vector': 'CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H'}

cvssV3_1

{'score': 5.5, 'vector': 'CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H'}


Fri, 06 Feb 2026 17:00:00 +0000


Thu, 05 Feb 2026 12:15:00 +0000

Type Values Removed Values Added
References
Metrics threat_severity

None

cvssV3_1

{'score': 6.2, 'vector': 'CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H'}

threat_severity

Moderate


Wed, 04 Feb 2026 16:30:00 +0000

Type Values Removed Values Added
Description In the Linux kernel, the following vulnerability has been resolved: vsock/virtio: cap TX credit to local buffer size [full commit message, as quoted verbatim in the Description section above]
Title vsock/virtio: cap TX credit to local buffer size
First Time appeared    Linux, Linux linux Kernel
CPEs    cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*
Vendors & Products    Linux, Linux linux Kernel
References

Subscriptions

Linux Linux Kernel
MITRE

Status: PUBLISHED

Assigner: Linux

Published:

Updated: 2026-02-09T08:38:26.222Z

Reserved: 2026-01-13T15:37:45.961Z

Link: CVE-2026-23086

Vulnrichment

No data.

NVD

Status : Analyzed

Published: 2026-02-04T17:16:19.467

Modified: 2026-03-17T21:10:14.740

Link: CVE-2026-23086

Redhat

Severity : Moderate

Published: 2026-02-04T00:00:00Z

Links: CVE-2026-23086 - Bugzilla

OpenCVE Enrichment

Updated: 2026-04-17T23:45:25Z

Weaknesses