Description
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
No analysis available yet.
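The flaw pattern described above can be sketched in a few lines. This is an illustrative stand-in, not LangChain's actual source: `fake_llm` and `vulnerable_math_chain` are hypothetical names that mimic how LLMMathChain (through 0.0.131) asked the model to translate a question into Python and then ran the returned snippet with `exec`. Because the model's output is derived from user input, a prompt-injected "question" can make it emit arbitrary code:

```python
import contextlib
import io

def fake_llm(question: str) -> str:
    """Stand-in for the model: returns whatever 'code' the prompt coaxed out.

    A benign math question might yield "print(2 + 2)"; an injected one can
    yield any Python at all. Here we just echo the input to make the data
    flow explicit.
    """
    return question

def vulnerable_math_chain(question: str) -> str:
    """Sketch of the vulnerable flow: LLM output goes straight to exec()."""
    code = fake_llm(question)
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code)  # <-- the sink flagged by this CVE
    return buf.getvalue().strip()

# A "math question" that is really an injection payload:
print(vulnerable_math_chain("import os; print(os.getcwd())"))
```

The payload never needs to break out of anything: the chain's design hands model output directly to the interpreter.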
Remediation
No remediation available yet.
Advisories
| Source | ID | Title |
|---|---|---|
| EUVD | EUVD-2023-0118 | In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method. |
| Github GHSA | GHSA-fprp-p869-w6q2 | LangChain vulnerable to code injection |
References
History
Wed, 12 Feb 2025 17:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics |  | ssvc |
Status: PUBLISHED
Assigner: mitre
Published:
Updated: 2025-02-12T16:24:39.291Z
Reserved: 2023-04-05T00:00:00.000Z
Link: CVE-2023-29374

Updated: 2024-08-02T14:07:45.736Z
Status: Modified
Published: 2023-04-05T02:15:37.340
Modified: 2025-02-12T17:15:18.260
Link: CVE-2023-29374
OpenCVE Enrichment
No data.
Weaknesses