Description
LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
Published: 2026-03-31
Score: 7.5 High
EPSS: < 1% Very Low
KEV: No
Impact: Remote File Read
Action: Immediate Patch
AI Analysis

Impact

The vulnerability in LangChain Core allows an attacker to read arbitrary files on the host file system. The flaw exists in several legacy prompt loading functions that deserialize configuration dictionaries and then open files based on path entries. The code does not sanitize these paths for directory traversal or absolute path injection. An attacker who can influence prompt configuration parameters can cause the application to read any file with an allowed extension (such as .txt, .json, or .yaml), potentially exposing sensitive data. This weakness corresponds to a path traversal flaw (CWE‑22).
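The flawed pattern can be illustrated with a simplified, self-contained sketch. This is illustrative code only, not LangChain's actual implementation; the function name and the `template_path` config key are hypothetical stand-ins for the paths read by the legacy loading functions:

```python
from pathlib import Path

def load_prompt_from_config_unsafe(config: dict) -> str:
    """Simplified sketch of the vulnerable pattern: a path taken from a
    deserialized config dict is opened directly, gated only by an
    extension check."""
    template_path = config["template_path"]  # attacker-influenced value
    if not template_path.endswith(".txt"):
        raise ValueError("only .txt templates are allowed")
    # No check for '..' sequences or absolute paths, so a config such as
    # {'template_path': '../../../../etc/secrets.txt'} escapes the
    # intended prompt directory and reads an arbitrary host file.
    return Path(template_path).read_text()
```

The extension check limits *which* files can be read but does nothing to constrain *where* they are read from, which is exactly the gap described above.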

Affected Systems

The affected product is LangChain Core supplied by langchain-ai. Versions earlier than 1.2.22 are vulnerable. All environments running those versions that invoke load_prompt or load_prompt_from_config with user‑controlled configuration data are impacted.

Risk and Exploitability

The CVSS score is 7.5, indicating a high severity. The EPSS score is very low (<1%), suggesting that exploitation is currently uncommon in the wild. The vulnerability is not listed in the CISA KEV catalog. Attackers could exploit the flaw through any interface that accepts prompt configuration data, which can be supplied by end users or third‑party integrations. No special privileges are required beyond the application’s runtime context, so the risk depends on the trust level of the data source.

Generated by OpenCVE AI on April 2, 2026 at 22:02 UTC.

Remediation

The vendor has fixed this issue in LangChain Core 1.2.22; upgrading to that version or later resolves the vulnerability.

OpenCVE Recommended Actions

  • Apply the LangChain Core release 1.2.22 or later, which contains the path traversal fix.
  • If an immediate upgrade is not possible, restrict or sanitize prompt configuration inputs so that file paths cannot contain suspicious sequences such as '..' or absolute paths.
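One way to implement that interim sanitization is to resolve every configured path against a trusted base directory and reject anything that escapes it. The sketch below is a minimal illustration, not a vendor-supplied workaround; `PROMPT_BASE` is a hypothetical directory you would substitute for your deployment:

```python
from pathlib import Path

# Hypothetical trusted directory for prompt files; adjust for your deployment.
PROMPT_BASE = Path("/app/prompts").resolve()

def safe_prompt_path(user_path: str) -> Path:
    """Resolve a user-supplied path against PROMPT_BASE and reject both
    directory traversal ('..') and absolute-path injection."""
    candidate = (PROMPT_BASE / user_path).resolve()
    # Path.is_relative_to requires Python 3.9+.
    if not candidate.is_relative_to(PROMPT_BASE):
        raise ValueError(f"path escapes prompt directory: {user_path!r}")
    return candidate
```

Because `Path("/app/prompts") / "/etc/passwd"` yields `/etc/passwd`, the containment check catches absolute-path injection as well as `..` sequences.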



Advisories
Source: GitHub GHSA
ID: GHSA-qh6h-p6c9-ff54
Title: LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions
History

Thu, 02 Apr 2026 20:30:00 +0000

  • First Time appeared: Langchain langchain
  • CPEs added: cpe:2.3:a:langchain:langchain:*:*:*:*:*:*:*:*
  • Vendors & Products added: Langchain langchain

Wed, 01 Apr 2026 02:15:00 +0000

  • First Time appeared: Langchain-ai langchain
  • Vendors & Products added: Langchain-ai langchain
  • References updated
  • Metrics: threat_severity changed from "None" to "Important"


Tue, 31 Mar 2026 18:15:00 +0000

  • Metrics ssvc added: {'options': {'Automatable': 'yes', 'Exploitation': 'poc', 'Technical Impact': 'partial'}, 'version': '2.0.3'}


Tue, 31 Mar 2026 03:00:00 +0000

  • Description added (same text as the summary at the top of this page)
  • Title added: LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions
  • Weaknesses added: CWE-22
  • References updated
  • Metrics cvssV3_1 added: {'score': 7.5, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-03-31T18:04:59.283Z

Reserved: 2026-03-25T16:21:40.867Z

Link: CVE-2026-34070

Vulnrichment

Updated: 2026-03-31T15:17:43.293Z

NVD

Status: Analyzed

Published: 2026-03-31T03:15:58.947

Modified: 2026-04-02T17:04:43.713

Link: CVE-2026-34070

Red Hat

Severity : Important

Published Date: 2026-03-31T02:01:49Z

Links: CVE-2026-34070 - Bugzilla

OpenCVE Enrichment

Updated: 2026-04-03T09:19:34Z

Weaknesses