Llama.cpp provides LLM inference in C/C++. A use-of-uninitialized-heap-variable vulnerability exists in gguf_init_from_file: the code later frees this uninitialized variable. In a simple PoC, this directly causes a crash. With a carefully constructed file, it may be possible to control the uninitialized value and trigger an arbitrary-address free, which could be further exploited. The flaw causes llama.cpp to crash (DoS) and may even lead to arbitrary code execution (RCE). This vulnerability has been patched in commit b2740.
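The following is a minimal, hypothetical C sketch of the general bug pattern described above; it is not the actual gguf_init_from_file code, and the struct and function names are invented for illustration. The idea: a heap-allocated structure whose pointer field is only assigned on the success path is freed unconditionally in cleanup, so an early parse failure passes indeterminate heap memory to free().

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical structure standing in for a GGUF-style parser context. */
struct gguf_like_ctx {
    void  *data;   /* only assigned after the header parses successfully */
    size_t n;
};

static struct gguf_like_ctx *ctx_from_file(FILE *f) {
    /* malloc() leaves the struct's bytes indeterminate */
    struct gguf_like_ctx *ctx = malloc(sizeof(*ctx));
    if (!ctx) return NULL;

    if (fread(&ctx->n, sizeof(ctx->n), 1, f) != 1) {
        goto fail;                     /* ctx->data was never set */
    }
    ctx->data = malloc(ctx->n);
    if (!ctx->data) goto fail;

    if (fread(ctx->data, 1, ctx->n, f) != ctx->n) goto fail;
    return ctx;

fail:
    /* BUG: on the first failure path ctx->data is uninitialized heap
     * memory; an attacker-influenced stale value here becomes an
     * arbitrary-address free. */
    free(ctx->data);
    free(ctx);
    return NULL;
}
```

The usual remediation for this pattern is to zero-initialize the structure (for example with calloc, or by setting the pointer to NULL before any parsing), so that free(NULL) on an early error path is a harmless no-op.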
History
No history.
MITRE
Status: PUBLISHED
Assigner: GitHub_M
Published: 2024-04-26T20:31:53.813Z
Updated: 2024-08-02T02:20:35.603Z
Reserved: 2024-04-19T14:07:11.230Z
Link: CVE-2024-32878
Vulnrichment
Updated: 2024-07-03T14:57:13.559Z
NVD
Status: Awaiting Analysis
Published: 2024-04-26T21:15:49.260
Modified: 2024-11-21T09:15:55.373
Link: CVE-2024-32878
Redhat
No data.