In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
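The flaw follows a common pattern: the chain asks the model to translate a natural-language "math question" into Python and then evaluates the model's output directly, so anyone who controls the question (or any upstream content fed to it) controls the executed code. The sketch below is a minimal illustration of that pattern, not LangChain's actual implementation; the `vulnerable_math_chain` helper and the injected payload are hypothetical.

```python
# Minimal sketch of the vulnerable pattern behind CVE-2023-29374.
# This is NOT LangChain's actual code; it only illustrates how running
# LLM output through exec() turns prompt injection into code execution.

def vulnerable_math_chain(llm_output: str) -> str:
    """Pretend `llm_output` is Python the model wrote for a math question."""
    scope: dict = {}
    # Affected LLMMathChain versions executed model-generated code without
    # sandboxing, so any code the model is tricked into emitting runs here.
    exec(llm_output, scope)  # attacker-influenced code executes on the host
    return str(scope.get("answer"))

# A prompt-injected "question" makes the model emit OS commands instead of math:
injected_output = "import os\nanswer = os.popen('id').read()"
print(vulnerable_math_chain(injected_output))  # runs `id` on the host
```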
Metrics
Affected Vendors & Products
Advisories
| Source | ID | Title |
|---|---|---|
| EUVD | EUVD-2023-0118 | In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method. |
| Github GHSA | GHSA-fprp-p869-w6q2 | LangChain vulnerable to code injection |
Fixes
Solution
No solution given by the vendor.
Workaround
No workaround given by the vendor.
References
History
Wed, 12 Feb 2025 17:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Status: PUBLISHED
Assigner: mitre
Published:
Updated: 2025-02-12T16:24:39.291Z
Reserved: 2023-04-05T00:00:00.000Z
Link: CVE-2023-29374
Updated: 2024-08-02T14:07:45.736Z
Status: Modified
Published: 2023-04-05T02:15:37.340
Modified: 2025-02-12T17:15:18.260
Link: CVE-2023-29374
OpenCVE Enrichment
No data.