Impact
The identified flaw resides in the Agentic Assistant validation component of Langflow. During validation of LLM‑generated Python scripts, the code is executed on the server without adequate input sanitisation. An attacker who can influence the model's output can therefore trigger arbitrary Python execution in the application's runtime, effectively gaining full control of the server. The weakness is a classic instance of unsafe dynamic code execution (CWE‑94, Code Injection).
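The flaw class can be illustrated with a minimal sketch (hypothetical function names, not Langflow's actual code): "validating" a generated script by executing it runs any side effects it contains, whereas a parse-only check inspects syntax without ever running the code.

```python
import ast

# UNSAFE pattern (CWE-94): executing untrusted code to "validate" it.
# Any payload in the script runs in the server process.
def validate_by_exec(code: str) -> bool:
    try:
        exec(code)  # arbitrary code executes here
        return True
    except Exception:
        return False

# Safer sketch: ast.parse checks syntax but never executes the code.
def validate_by_parse(code: str) -> bool:
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

# Stand-in for a malicious model output: syntactically valid,
# but validate_by_parse never runs it.
malicious = "import os\nos.system('id')"
print(validate_by_parse(malicious))  # → True (parsed, not executed)
```

A parse-only check is of course not a complete defense (well-formed code can still be malicious); the point is that validation must never share an execution context with untrusted input.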
Affected Systems
All installations of Langflow running a version earlier than 1.9.0 are at risk when the Agentic Assistant feature is exposed to users. The flaw applies globally within the product whenever an authenticated user can supply prompts that influence the model's response. The advisory does not list specific sub‑products; any deployment that exposes this feature is potentially affected.
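A simple version gate can flag affected deployments. The sketch below assumes plain numeric versions (a real check should use a proper version library such as `packaging.version`, which also handles pre-release suffixes):

```python
FIXED_IN = "1.9.0"  # per the advisory, versions earlier than 1.9.0 are affected

def parse_version(v: str) -> tuple:
    # Minimal numeric parse: "1.8.3" -> (1, 8, 3). Assumes no suffixes.
    return tuple(int(part) for part in v.split(".")[:3])

def is_affected(installed: str) -> bool:
    # True when the installed version sorts before the fixed release.
    return parse_version(installed) < parse_version(FIXED_IN)

print(is_affected("1.8.3"))  # → True
print(is_affected("1.9.0"))  # → False
```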
Risk and Exploitability
The vulnerability carries a CVSS score of 9.3, placing it in the critical band, while its EPSS score is below 1%, suggesting that automated exploitation is presently uncommon. Exploitation requires authenticated access to the Agentic Assistant endpoint and the ability to craft inputs that steer the model into producing malicious code; once those conditions are met, an attacker can run arbitrary commands on the server. The CVE is not currently listed in the CISA KEV catalog.
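For context, the qualitative band for a 9.3 score follows directly from the CVSS v3.1 rating scale, which can be encoded as:

```python
def cvss_severity(score: float) -> str:
    # Qualitative severity bands from the CVSS v3.1 specification:
    # None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

print(cvss_severity(9.3))  # → Critical
```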
OpenCVE Enrichment
GitHub GHSA