Impact
LiteLLM, an AI gateway proxy, allowed authenticated users to submit prompt templates to the POST /prompts/test endpoint, which rendered those templates without any sandboxing or validation. Because the rendering engine executed the template inside the LiteLLM process, a malicious template could achieve arbitrary code execution in that process, giving the attacker control of the server running LiteLLM. The flaw is classified as CWE-1336 (Improper Neutralization of Special Elements Used in a Template Engine) and poses a severe confidentiality, integrity, and availability risk, since the process typically holds sensitive environment variables such as LLM provider keys or database credentials.
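To illustrate the class of flaw (CWE-1336), the minimal sketch below is not LiteLLM's actual code and does not use its real template engine; it simulates a naive in-process renderer with Python's built-in `str.format`, whose attribute lookup already lets an attacker-controlled template reach into context objects and read in-process secrets. The `Config`, `render`, and secret names are illustrative assumptions.

```python
# Hypothetical stand-in for a secret held in the gateway process,
# e.g. an LLM provider key or database credential.
SECRET_KEY = "sk-example"

class Config:
    """Illustrative context object the renderer exposes to templates."""
    def __init__(self):
        self.secret = SECRET_KEY

def render(template: str, **ctx) -> str:
    # Naive in-process rendering: the template string is attacker-controlled,
    # the context is trusted. No sandbox, no validation.
    return template.format(**ctx)

cfg = Config()

# Benign use looks harmless:
print(render("Hello {user}!", user="alice", cfg=cfg))

# A malicious template walks the context object's attributes and
# exfiltrates the in-process secret through the rendered output:
leaked = render("{cfg.secret}", user="alice", cfg=cfg)
print(leaked)  # prints "sk-example"
```

Real template engines with expression evaluation allow far more than attribute reads (method calls, module access), which is how unsandboxed rendering escalates from information disclosure to full code execution.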
Affected Systems
The vulnerability affects BerriAI's LiteLLM in versions from 1.80.5 up to, but not including, 1.83.7. Any deployment in this range that exposes the /prompts/test endpoint to authenticated users is impacted.
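As a quick triage aid, the affected range above (>= 1.80.5, < 1.83.7) can be checked with a small version comparison; this sketch assumes plain dotted numeric versions and is not an official tool.

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted numeric version string like '1.82.0' into a tuple."""
    return tuple(int(part) for part in v.split("."))

AFFECTED_MIN = parse_version("1.80.5")  # first affected release (inclusive)
FIXED = parse_version("1.83.7")         # first fixed release (exclusive)

def is_affected(version: str) -> bool:
    """Return True if the given LiteLLM version falls in the affected range."""
    return AFFECTED_MIN <= parse_version(version) < FIXED

print(is_affected("1.82.0"))  # True  (inside the range)
print(is_affected("1.83.7"))  # False (first fixed release)
print(is_affected("1.79.0"))  # False (before the range)
```

Tuple comparison handles multi-digit components correctly (e.g. 1.83.7 > 1.9.0 numerically), which naive string comparison would get wrong.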
Risk and Exploitability
The CVSS base score of 8.6 indicates high severity. No EPSS score is available, so exploitation likelihood is uncertain, and the flaw is not currently listed in the CISA KEV catalog. Because the endpoint accepts requests from any user holding a valid proxy API key, exploitation requires only authenticated access. If a LiteLLM deployment in the affected range is reachable by potential attackers, the risk of arbitrary code execution and secret leakage is considerable.
OpenCVE Enrichment
GitHub GHSA