Impact
LangChain, a Python framework for building agents and LLM‑powered applications, contains an incomplete validation of f‑string prompt templates in langchain‑core versions prior to 0.3.84 and 1.2.28. The flaw allows DictPromptTemplate and ImagePromptTemplate to accept f‑string templates containing attribute‑access or indexing expressions, and also fails to reject nested replacement fields inside format specifiers. Because Python’s formatter resolves these expressions at runtime, an attacker who controls a template can traverse object attributes and internals when the template is formatted, a template‑injection weakness classified as CWE‑1336.
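The underlying behavior can be seen with plain `str.format`, which resolves dotted attribute lookups inside replacement fields. The `Config` class and its `secret` attribute below are hypothetical stand‑ins for application state:

```python
# str.format resolves attribute access and indexing at runtime, so a
# template that should only insert plain values can instead walk objects.
class Config:
    secret = "s3cr3t"  # hypothetical sensitive attribute

cfg = Config()

# Benign field lookup:
print("Hello {name}".format(name="alice"))   # Hello alice

# Attacker-controlled template traversing an attribute:
print("{c.secret}".format(c=cfg))            # s3cr3t

# Nested replacement field inside a format specifier -- the second
# injection point that an incomplete validator can miss:
print("{x:{pad}}".format(x=42, pad=">6"))    # "    42"
```

Nothing here is specific to LangChain: any code path that formats an attacker‑supplied template string exposes the same surface.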
Affected Systems
The vulnerability affects all installations of langchain‑ai/langchain with langchain‑core versions before 0.3.84 (0.x line) and before 1.2.28 (1.x line). Any deployment that uses DictPromptTemplate or ImagePromptTemplate for dynamic prompt generation without upgrading to a patched version remains vulnerable.
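A quick way to check whether a deployment falls in the affected range is a comparison against the patched releases. The helper below is an illustrative sketch; it assumes plain X.Y.Z version strings with no pre‑release suffixes:

```python
def is_patched(version: str) -> bool:
    """Return True if a langchain-core version string is at or past the
    patched releases (0.3.84 for the 0.x line, 1.2.28 for the 1.x line).
    Sketch only: assumes plain X.Y.Z strings, no pre-release suffixes."""
    parts = tuple(int(p) for p in version.split("."))
    if parts[0] == 0:
        return parts >= (0, 3, 84)
    if parts[0] == 1:
        return parts >= (1, 2, 28)
    return True  # assume later major lines include the fix

print(is_patched("0.3.83"))  # False -> affected
print(is_patched("1.2.28"))  # True  -> patched
```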
Risk and Exploitability
The CVSS base score of 5.3 indicates medium severity; EPSS data is unavailable, and the vulnerability is not listed in the CISA KEV catalog, suggesting no known exploitation in the wild. The likely attack vector is local or insider: the attacker must be able to supply or influence the prompt template that the application formats, an input channel that is typically trusted or privileged. Once a malicious template is processed, the injected expressions are evaluated with the process’s privileges, posing a credible risk to applications that build templates from user‑generated content.
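Until an upgrade is possible, templates from untrusted sources can be screened before formatting. The check below is a defensive sketch built on the standard library’s `string.Formatter`, not LangChain’s actual patch:

```python
from string import Formatter

def validate_fstring_template(template: str) -> None:
    """Reject replacement fields that use attribute access or indexing,
    and format specifiers that contain nested replacement fields.
    Illustrative mitigation sketch; not the upstream fix."""
    for _literal, field, spec, _conv in Formatter().parse(template):
        if field is None:
            continue  # plain literal segment, nothing to check
        if "." in field or "[" in field:
            raise ValueError(f"disallowed expression in field: {field!r}")
        if spec and ("{" in spec or "}" in spec):
            raise ValueError(f"nested replacement field in specifier: {spec!r}")

validate_fstring_template("Hello {name}")   # passes silently
# validate_fstring_template("{c.secret}")   # raises ValueError
```

`Formatter.parse` splits a template into (literal, field name, format spec, conversion) tuples without evaluating anything, which makes it safe to use on untrusted input.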
OpenCVE Enrichment
GitHub GHSA