llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes unintended behavior in the size comparison that guards token copying, allowing the heap of the llama.cpp inference engine to be overflowed by carefully crafted text input during tokenization. This issue has been patched in version b5721.
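
For illustration, the following is a minimal C++ sketch of the signed-vs-unsigned comparison class of bug described above. It is not the actual llama.cpp code; the copy_tokens() helper and its parameters are hypothetical. It shows how a negative (or otherwise wrapping) signed capacity can defeat an unsigned size check and turn a guarded copy into a heap buffer overflow.

    // Minimal, self-contained sketch of the bug class (illustrative only --
    // NOT the actual llama.cpp code; copy_tokens() and its parameters are
    // hypothetical).
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // The caller supplies the output buffer capacity as a signed int, as many
    // C-style tokenizer APIs do.
    static int copy_tokens(const std::vector<int32_t> & tokens,
                           int32_t * out, int32_t n_tokens_max) {
        // BUG: tokens.size() is size_t (unsigned). Casting a negative
        // n_tokens_max to size_t wraps it to a huge value, so the "buffer too
        // small" branch is never taken and the memcpy below overflows 'out'.
        if (tokens.size() > (size_t) n_tokens_max) {
            return -1; // intended error path for an undersized buffer
        }
        std::memcpy(out, tokens.data(), tokens.size() * sizeof(int32_t));
        return (int) tokens.size();
    }

    int main() {
        std::vector<int32_t> tokens(8, 42); // pretend tokenization yielded 8 tokens
        int32_t * out = new int32_t[4];     // heap buffer with room for only 4
        // A capacity that is negative (or wraps when converted to unsigned)
        // defeats the size check and corrupts heap memory past 'out'.
        copy_tokens(tokens, out, -1);
        std::printf("size check bypassed; 8 tokens written into a 4-token buffer\n");
        delete[] out;
        return 0;
    }

The general remedy for this class of bug is to reject negative capacities up front, or to perform the size comparison in a single common type so no implicit signed-to-unsigned conversion occurs.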
History

Wed, 27 Aug 2025 14:15:00 +0000

Type: First Time appeared
Values Added: Ggml; Ggml llama.cpp

Type: CPEs
Values Added: cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:*

Type: Vendors & Products
Values Added: Ggml; Ggml llama.cpp

Tue, 24 Jun 2025 22:15:00 +0000

Type: Metrics
Values Added: ssvc {'options': {'Automatable': 'no', 'Exploitation': 'poc', 'Technical Impact': 'total'}, 'version': '2.0.3'}


Tue, 24 Jun 2025 03:45:00 +0000

Type: Description
Values Added: llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes unintended behavior in the size comparison that guards token copying, allowing the heap of the llama.cpp inference engine to be overflowed by carefully crafted text input during tokenization. This issue has been patched in version b5721.

Type: Title
Values Added: llama.cpp tokenizer signed vs. unsigned heap overflow

Type: Weaknesses
Values Added: CWE-119; CWE-195

Type: References
Values Added: (none listed)

Type: Metrics
Values Added: cvssV3_1 {'score': 8.6, 'vector': 'CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2025-06-24T21:49:53.200Z

Reserved: 2025-06-18T03:55:52.036Z

Link: CVE-2025-52566

Vulnrichment

Updated: 2025-06-24T21:49:47.523Z

NVD

Status: Analyzed

Published: 2025-06-24T04:15:46.967

Modified: 2025-08-27T14:01:31.297

Link: CVE-2025-52566

Redhat

No data.

OpenCVE Enrichment

No data.