Filtered by vendor: Langchain
Total: 18 CVEs
CVE | Vendors | Products | Updated | CVSS v3.1
CVE-2024-46946 | Langchain (1) | Langchain Experimental (1) | 2024-09-19 | 9.8 Critical
langchain_experimental (aka LangChain Experimental) 0.1.17 through 0.3.0 for LangChain allows attackers to execute arbitrary code through sympy.sympify (which uses eval) in LLMSymbolicMathChain. LLMSymbolicMathChain was introduced in fcccde406dd9e9b05fc9babcbeb9ff527b0ec0c6 (2023-10-05).
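For context, SymPy's own documentation warns that sympify should not be used on unsanitized input because it relies on eval. The minimal sketch below is generic Python, not the LangChain code: plain eval stands in for any eval-based expression parser, to show why an LLM-influenced expression string becomes code execution.

```python
# Generic sketch of why eval-based expression parsing is unsafe on untrusted
# input (the same class of issue as sympy.sympify, which uses eval internally).
# This is NOT the LangChain code, just an illustration.

def naive_math_eval(expression: str) -> object:
    # An eval-based "math parser": fine for trusted input, dangerous otherwise.
    return eval(expression)  # intentionally unsafe for the demo

# A benign arithmetic expression behaves as expected...
print(naive_math_eval("2 + 2"))  # -> 4

# ...but the same code path will happily run arbitrary Python if the string
# is attacker-influenced (e.g. produced by a prompt-injected LLM).
print(naive_math_eval("__import__('platform').system()"))  # arbitrary call
```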
CVE-2024-5998 | Langchain (1) | Langchain (1) | 2024-09-17 | N/A
A vulnerability in the FAISS.deserialize_from_bytes function of langchain-ai/langchain allows for pickle deserialization of untrusted data. This can lead to the execution of arbitrary commands via the os.system function. The issue affects the latest version of the product.
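As a generic illustration of the mechanism (not the FAISS or LangChain code), the sketch below shows how pickle.loads invokes a callable chosen by the serialized data via __reduce__, which is why deserializing untrusted bytes amounts to code execution. The class name and payload are illustrative and intentionally harmless.

```python
import pickle

# pickle calls an object's __reduce__ during loading, so a crafted byte
# stream can invoke arbitrary callables as a side effect of deserialization.

class Malicious:
    def __reduce__(self):
        # Harmless stand-in; a real attacker would return os.system etc.
        return (print, ("arbitrary code ran during pickle.loads",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # prints the message purely as a side effect of loading
```

The practical takeaway mirrors the advisory: only deserialize index bytes that you produced and stored yourself.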
CVE-2023-46229 | Langchain (1) | Langchain (1) | 2024-09-12 | 8.8 High
LangChain before 0.0.317 allows SSRF via document_loaders/recursive_url_loader.py because crawling can proceed from an external server to an internal server.
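A common mitigation for this class of crawler SSRF is to refuse to follow links that leave the origin of the starting URL. The helper below is a hypothetical sketch (same_origin is not a LangChain API) of such a check.

```python
from urllib.parse import urlparse

# Hypothetical helper showing one way a recursive crawler can refuse to
# follow links that leave the origin host, which is the kind of hop the
# advisory describes (an external page linking to an internal server).

def same_origin(start_url: str, candidate_url: str) -> bool:
    start, candidate = urlparse(start_url), urlparse(candidate_url)
    return (start.scheme, start.hostname, start.port) == (
        candidate.scheme, candidate.hostname, candidate.port
    )

print(same_origin("https://example.com/docs", "https://example.com/docs/a"))   # True
print(same_origin("https://example.com/docs", "http://169.254.169.254/meta"))  # False
```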
CVE-2023-32786 | Langchain (1) | Langchain (1) | 2024-09-12 | 7.5 High
In Langchain through 0.0.155, prompt injection allows an attacker to force the service to retrieve data from an arbitrary URL, essentially providing SSRF and potentially injecting content into downstream tasks.
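Independent of the library version, a defensive pattern for prompt-influenced URL fetching is an explicit host allowlist checked before any HTTP request is made. The function name and hosts below are illustrative, not part of LangChain.

```python
from urllib.parse import urlparse

# Hypothetical mitigation sketch: only fetch URLs whose host is on an explicit
# allowlist, so a prompt-injected request for an arbitrary or internal URL is
# rejected before any HTTP call is made.

ALLOWED_HOSTS = {"api.example.com", "docs.example.com"}  # illustrative values

def is_fetch_allowed(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme in {"http", "https"} and parsed.hostname in ALLOWED_HOSTS

print(is_fetch_allowed("https://api.example.com/v1/data"))  # True
print(is_fetch_allowed("http://10.0.0.5/internal-admin"))   # False
```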
CVE-2023-44467 | Langchain (1) | Langchain Experimental (1) | 2024-08-02 | 9.8 Critical
langchain_experimental (aka LangChain Experimental) in LangChain before 0.0.306 allows an attacker to bypass the CVE-2023-36258 fix and execute arbitrary code via __import__ in Python code, which is not prohibited by pal_chain/base.py.
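The bypass described here is why denylist-style validation of generated Python tends to fail. The sketch below is not the actual pal_chain/base.py logic; it is a generic example showing that a check which only rejects import statements still misses the __import__ builtin.

```python
import ast

# A filter that only looks for import statements lets code reach the import
# machinery through the __import__ builtin, which is the bypass this CVE
# describes for the CVE-2023-36258 fix.

def has_import_statement(source: str) -> bool:
    tree = ast.parse(source)
    return any(isinstance(node, (ast.Import, ast.ImportFrom)) for node in ast.walk(tree))

blocked = "import os\nos.system('id')"
bypass = "__import__('os').system('id')"

print(has_import_statement(blocked))  # True  -> would be rejected
print(has_import_statement(bypass))   # False -> slips past an import-only check
```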
CVE-2023-39659 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in langchain-ai's langchain v0.0.232 and earlier allows a remote attacker to execute arbitrary code via a crafted script passed to the PythonAstREPLTool._run component.
CVE-2023-39631 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in langchain-ai LangChain v0.0.245 allows a remote attacker to execute arbitrary code via the evaluate function of the numexpr library.
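Whatever the exact numexpr-level details, a generic hardening step for this class of issue is to verify that an LLM-produced "math expression" really is plain arithmetic before handing it to any evaluator. The allowlist below is an illustrative sketch, not LangChain's fix.

```python
import ast

# Generic pre-check: reject anything that is not a plain arithmetic
# expression before it reaches an expression evaluator. Names and the exact
# node allowlist are illustrative.

_ALLOWED_NODES = (
    ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
    ast.USub, ast.UAdd,
)

def looks_like_plain_arithmetic(expression: str) -> bool:
    try:
        tree = ast.parse(expression, mode="eval")
    except SyntaxError:
        return False
    return all(isinstance(node, _ALLOWED_NODES) for node in ast.walk(tree))

print(looks_like_plain_arithmetic("3 * (7 + 2) ** 2"))           # True
print(looks_like_plain_arithmetic("__import__('os').getcwd()"))  # False
```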
CVE-2023-38896 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in Harrison Chase langchain v0.0.194 and earlier allows a remote attacker to execute arbitrary code via the from_math_prompt and from_colored_object_prompt functions.
CVE-2023-38860 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in LangChain v0.0.231 allows a remote attacker to execute arbitrary code via the prompt parameter.
CVE-2023-36258 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in LangChain before 0.0.236 allows an attacker to execute arbitrary code because supplied Python code that uses os.system, exec, or eval can be executed.
CVE-2023-36281 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in langchain v0.0.171 allows a remote attacker to execute arbitrary code via a crafted JSON file passed to load_prompt. The issue is related to __subclasses__ or a template.
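The __subclasses__ reference points at the classic template-injection gadget chain. As a generic illustration (not LangChain's fix), Jinja2's SandboxedEnvironment blocks that gadget by refusing access to underscore-prefixed attributes.

```python
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

# Untrusted templates are effectively code; a sandboxed environment blocks
# the classic __class__ / __subclasses__ escape mentioned in the advisory.

env = SandboxedEnvironment()

print(env.from_string("Hello {{ name }}").render(name="world"))  # normal use

try:
    env.from_string("{{ ''.__class__.__mro__[1].__subclasses__() }}").render()
except SecurityError as exc:
    print(f"blocked by sandbox: {exc}")
```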
CVE-2023-36189 | Langchain (1) | Langchain (1) | 2024-08-02 | 7.5 High
SQL injection vulnerability in langchain before v0.0.247 allows a remote attacker to obtain sensitive information via the SQLDatabaseChain component.
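Because the chain sends model-written SQL to the database, least privilege on the database side is the usual defense: grant the chain's credentials read access only to tables that are safe to expose, and deny writes entirely. The SQLite snippet below sketches the write-restriction half using a read-only connection; it is a generic illustration, not a LangChain feature, and the file name is made up for the demo.

```python
import sqlite3

# Create a throwaway database just so the read-only demo below can open it.
setup = sqlite3.connect("demo_readonly.db")
setup.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)")
setup.commit()
setup.close()

# Open the same file read-only; injected write/DDL statements now fail.
conn = sqlite3.connect("file:demo_readonly.db?mode=ro", uri=True)
try:
    conn.execute("DROP TABLE users")  # stands in for an injected statement
except sqlite3.OperationalError as exc:
    print(f"rejected: {exc}")  # attempt to write a readonly database
finally:
    conn.close()
```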
CVE-2023-36095 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in Harrison Chase langchain v0.0.194 allows an attacker to execute arbitrary code via the Python exec calls in PALChain; affected functions include from_math_prompt and from_colored_object_prompt.
CVE-2023-36188 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
An issue in langchain v0.0.64 allows a remote attacker to execute arbitrary code via PALChain's use of the Python exec method.
CVE-2023-34541 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
Langchain 0.0.171 is vulnerable to arbitrary code execution in load_prompt.
CVE-2023-34540 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
Langchain before v0.0.225 was discovered to contain a remote code execution (RCE) vulnerability in the component JiraAPIWrapper (aka the JIRA API wrapper). This vulnerability allows attackers to execute arbitrary code via crafted input. As noted in the "releases/tag" reference, a fix is available.
CVE-2023-29374 | Langchain (1) | Langchain (1) | 2024-08-02 | 9.8 Critical
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
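The underlying trust problem is the same across the exec-based chain CVEs above: once model output reaches exec, a prompt-injected "solution" is simply arbitrary Python. In the sketch below (not the LLMMathChain source), a hardcoded string stands in for attacker-steered model output and the payload is kept harmless.

```python
# Illustration of the trust boundary: exec is the dangerous sink, and whoever
# controls the string controls the process.

def run_generated_code(generated: str) -> None:
    exec(generated)  # the dangerous sink

# What the chain expects from the model:
run_generated_code("print(2 ** 10)")

# What a prompt-injected model can return instead (kept harmless here):
run_generated_code("import platform; print('injected code ran on', platform.node())")
```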
CVE-2024-21513 | Langchain (1) | Langchain-experimental, Langchain Experimental (2) | 2024-08-01 | 8.5 High
Versions of the package langchain-experimental from 0.0.15 and before 0.0.21 are vulnerable to arbitrary code execution: when retrieving values from the database, the code attempts to call eval on every value. An attacker who can control the input prompt can execute arbitrary Python code if the server is configured with VectorSQLDatabaseChain.

**Notes:**

Impact on the confidentiality, integrity and availability of the vulnerable component:
- Confidentiality: code execution happens within the impacted component (langchain-experimental), so all of its resources are necessarily accessible.
- Integrity: nothing is inherently protected by the impacted component, although anything returned from it counts as information whose trustworthiness can be compromised.
- Availability: the loss of availability is not caused by the attack itself, but occurs as a result of the attacker's post-exploitation steps.

Impact on the confidentiality, integrity and availability of the subsequent system: as a legitimate low-privileged user of the package (PR:L), the attacker gains no more access to data owned by the package than they had with normal usage (e.g. the ability to query the DB). The unintended actions (breaking out of the app environment, exfiltrating files, making remote connections, etc.) happen during the post-exploitation phase in the subsequent system, in this case the OS. AT:P: an attacker needs to be able to influence the input prompt while the server is configured with the VectorSQLDatabaseChain plugin.
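A generic way to see why eval on stored values is the wrong tool is to contrast it with ast.literal_eval, which only accepts Python literals and rejects anything executable. This is an illustration of the safer parsing pattern, not the langchain-experimental 0.0.21 fix.

```python
import ast

# Parsing stored values with eval executes whatever is in the database;
# ast.literal_eval only accepts literals and raises on anything else.

stored_values = [
    "[1.0, 2.5, 3.0]",                    # the kind of value expected
    "__import__('os').getenv('HOME')",    # attacker-controlled content
]

for value in stored_values:
    try:
        print("literal_eval ->", ast.literal_eval(value))
    except (ValueError, SyntaxError) as exc:
        print("literal_eval rejected:", type(exc).__name__)
```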