Description
llama.cpp is a C/C++ library for inference with several LLM models. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this gives full ASLR bypass and remote code execution. No authentication is required, just TCP access to the RPC server port. This issue has been patched in version b8492.
Published: 2026-04-01
Score: 9.8 Critical
EPSS: < 1% Very Low
KEV: No
Impact: Remote Code Execution
Action: Immediate Patch
AI Analysis

Impact

A bounds‑validation flaw in llama.cpp’s RPC backend allows an unauthenticated attacker to send a crafted GRAPH_COMPUTE message that causes the deserialize_tensor() routine to read or write arbitrary process memory. The vulnerability is a classic out‑of‑bounds memory access (CWE‑119) and can be combined with pointers leaked via ALLOC_BUFFER/BUFFER_GET_BASE to bypass address space layout randomization, enabling full remote code execution.
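The flaw class described above can be illustrated with a simplified sketch. This is not the actual llama.cpp code: the struct names, fields, and the validate_tensor() helper below are hypothetical. The point is that a patched-style check must reject any tensor whose claimed data range is not fully contained in a known server-side buffer, including the buffer == 0 case that vulnerable versions skipped entirely.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical wire representation of a tensor received over RPC.
struct rpc_tensor_wire {
    uint64_t buffer;   // server-side buffer handle; 0 means "no buffer"
    uint64_t data;     // pointer/offset supplied by the (untrusted) client
    uint64_t size;     // number of bytes the tensor claims to cover
};

// Hypothetical server-side record of an allocation it actually owns.
struct server_buffer {
    uint64_t base;
    uint64_t size;
};

// Patched-style validation: the data range must lie entirely inside a
// known server buffer. In vulnerable versions, a zero buffer handle
// caused all of these checks to be skipped, so `data` was treated as a
// raw, attacker-controlled pointer.
bool validate_tensor(const rpc_tensor_wire & t, const server_buffer * buf) {
    if (t.buffer == 0 || buf == nullptr) {
        return false;  // no backing buffer: reject instead of trusting `data`
    }
    if (t.data < buf->base) {
        return false;  // starts before the buffer
    }
    uint64_t end;
    if (__builtin_add_overflow(t.data, t.size, &end)) {
        return false;  // integer overflow in data + size
    }
    return end <= buf->base + buf->size;  // must not run past the buffer
}
```

The defensive details matter here: checking the end of the range with overflow-safe arithmetic prevents a huge `size` from wrapping around and passing the comparison.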

Affected Systems

The issue affects the llama.cpp inference library from ggml‑org. All releases prior to b8492 are vulnerable; the patch is included in release b8492 and later.

Risk and Exploitability

The CVSS score of 9.8 indicates critical severity, and the server is reachable via plain TCP on its RPC port without any authentication. The EPSS score is currently below 1% (Very Low), but the known exploit paths (SSVC lists proof-of-concept exploitation) and high severity suggest a high likelihood of exploitation in exposed environments. The vulnerability is not yet listed in the CISA KEV catalog.

Generated by OpenCVE AI on April 2, 2026 at 03:46 UTC.

Remediation

A vendor fix is available: upgrade to llama.cpp release b8492 or later, which patches the RPC backend.

OpenCVE Recommended Actions

  • Apply the latest llama.cpp release (b8492 or later) to patch the vulnerable RPC backend.
  • If the patch cannot be applied immediately, block or restrict TCP access to the RPC server port so that only trusted hosts can reach it.
  • Continue to monitor the official repository and security advisories for any updates or additional mitigations.



Advisories

No advisories yet.

History

Thu, 02 Apr 2026 20:30:00 +0000

Values Added
First Time appeared: Ggml, Ggml llama.cpp
Vendors & Products: Ggml, Ggml llama.cpp

Wed, 01 Apr 2026 23:45:00 +0000

Values Added
Description: llama.cpp is an inference of several LLM models in C/C++. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this gives full ASLR bypass and remote code execution. No authentication required, just TCP access to the RPC server port. This issue has been patched in version b8492.
Title: llama.cpp: Unauthenticated RCE via GRAPH_COMPUTE buffer=0 bypass in llama.cpp RPC backend
Weaknesses: CWE-119
References
Metrics: cvssV3_1 {'score': 9.8, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'}
ssvc {'options': {'Automatable': 'yes', 'Exploitation': 'poc', 'Technical Impact': 'total'}, 'version': '2.0.3'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-04-02T03:56:11.820Z

Reserved: 2026-03-25T20:12:04.197Z

Link: CVE-2026-34159

Vulnrichment

Updated: 2026-04-01T19:07:52.812Z

NVD

Status: Awaiting Analysis

Published: 2026-04-01T18:16:29.687

Modified: 2026-04-03T16:10:52.680

Link: CVE-2026-34159

Redhat

No data.

OpenCVE Enrichment

Updated: 2026-04-02T20:17:10Z

Weaknesses