Impact
The vulnerability allows an attacker to craft prompts that trick the AI Code model into classifying malicious commands as "safe," thereby bypassing the user approval step and executing arbitrary shell commands. This results in remote code execution, enabling attackers to gain full control over the system hosting the AI Code environment and potentially exfiltrate data, persist malware, or disrupt services. The weakness corresponds to an OS command injection flaw, where input validation is insufficient to prevent malicious instructions.
Affected Systems
Any installation of AI Code that relies on its automatic terminal command execution feature is affected, as the vulnerability exists in the design of the safe command decision logic rather than in a specific vendor or version. Systems that permit user input to influence the model’s output—including IDE extensions or custom integrations—are at risk if they enable auto‑execution of safe commands.
Risk and Exploitability
The flaw carries a high risk score due to the ability to execute arbitrary commands with administrative privileges. No EPSS score or KEV listing is available, but the potential impact and the ease of triggering via prompt injection suggest a significant likelihood of exploitation. The attack vector is inferred to be through any interface that accepts user prompts for the AI model; an attacker only needs to supply a crafted prompt that misleads the model into classifying destructive commands as safe.
OpenCVE Enrichment