Description
In its design for automatic terminal command execution, AI Code offers two options: Execute safe commands and execute all commands. The description for the former states that commands determined by the model to be safe will be automatically executed, whereas if the model judges a command to be potentially destructive, it still requires user approval. However, this design is highly susceptible to prompt injection attacks. An attacker can employ a generic template to wrap any malicious command and mislead the model into misclassifying it as a 'safe' command, thereby bypassing the user approval requirement and resulting in arbitrary command execution.
Published: 2026-03-27
Score: 9.6 Critical
EPSS: < 1% Very Low
KEV: No
Impact: Remote Code Execution
Action: Remove Extension
AI Analysis

Impact

This vulnerability is found in the AI Code extension for Visual Studio Code. By using a generic template, a malicious user can inject a prompt that fools the AI model into classifying harmful terminal commands as safe. When the model judges the command to be safe, it is executed automatically without user approval, giving the attacker arbitrary control over the local system. The core weakness is an input validation error (CWE-20) that allows prompt injections to escape the model’s safety checks.

Affected Systems

The affected software is the AI Code extension developed by tianguaduizhang, available on the Visual Studio Code marketplace. The product name is "ai_code". No specific version numbers are listed in the advisory, so all installations of this extension are potentially vulnerable until a patch is released.

Risk and Exploitability

The calculated CVSS score of 9.6 classifies this flaw as critical, indicating a high potential for serious impact. The EPSS score is below 1 %, suggesting that, while the vulnerability is severe, it is not widely exploited at present. The flaw is not yet listed in the CISA KEV catalog. Because the exploit requires only that a user inject a crafted prompt into the extension—an action that can be performed locally or through any interface that allows input to the extension—it can be used by attackers who have a foothold in the development environment or who can lure a user into providing a malicious prompt. The lack of a public fix means that the safest approach today is to remove or disable the extension until a vendor release addresses the issue.

Generated by OpenCVE AI on April 3, 2026 at 18:49 UTC.

Remediation

No vendor fix or workaround currently provided.

OpenCVE Recommended Actions

  • Uninstall or disable the AI Code extension in Visual Studio Code to remove the vulnerable functionality immediately.
  • If the extension must remain installed, configure VS Code to prompt for confirmation on any terminal command execution and manually verify commands that the model marks as safe.
  • Regularly monitor system logs for unexpected command executions and other signs of compromise.
  • Check the vendor’s GitHub repository and the VS Code marketplace for updates or a patch release, and apply any new edition promptly.

Generated by OpenCVE AI on April 3, 2026 at 18:49 UTC.

Tracking

Sign in to view the affected projects.

Advisories

No advisories yet.

History

Fri, 03 Apr 2026 21:30:00 +0000

Type Values Removed Values Added
Title AI Code Extension Vulnerable to Prompt Injection Leading to Arbitrary Command Execution

Fri, 03 Apr 2026 16:30:00 +0000

Type Values Removed Values Added
CPEs cpe:2.3:a:tianguaduizhang:ai_code:*:*:*:*:*:*:*:* cpe:2.3:a:tianguaduizhang:ai_code:*:*:*:*:*:visual_studio_code:*:*

Fri, 03 Apr 2026 10:15:00 +0000

Type Values Removed Values Added
Title AI Code Extension Vulnerable to Prompt Injection Leading to Arbitrary Command Execution

Thu, 02 Apr 2026 20:30:00 +0000

Type Values Removed Values Added
CPEs cpe:2.3:a:tianguaduizhang:ai_code:*:*:*:*:*:*:*:*

Mon, 30 Mar 2026 08:15:00 +0000

Type Values Removed Values Added
First Time appeared Tianguaduizhang
Tianguaduizhang ai Code
Vendors & Products Tianguaduizhang
Tianguaduizhang ai Code

Sun, 29 Mar 2026 20:45:00 +0000

Type Values Removed Values Added
Title AI Code Prompt Injection Leading to Arbitrary Command Execution
Weaknesses CWE-78

Sat, 28 Mar 2026 17:15:00 +0000

Type Values Removed Values Added
Metrics ssvc

{'options': {'Automatable': 'no', 'Exploitation': 'poc', 'Technical Impact': 'total'}, 'version': '2.0.3'}


Fri, 27 Mar 2026 20:30:00 +0000

Type Values Removed Values Added
Title AI Code Prompt Injection Leading to Arbitrary Command Execution
Weaknesses CWE-20
CWE-78
Metrics cvssV3_1

{'score': 9.6, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H'}


Fri, 27 Mar 2026 14:30:00 +0000

Type Values Removed Values Added
Description In its design for automatic terminal command execution, AI Code offers two options: Execute safe commands and execute all commands. The description for the former states that commands determined by the model to be safe will be automatically executed, whereas if the model judges a command to be potentially destructive, it still requires user approval. However, this design is highly susceptible to prompt injection attacks. An attacker can employ a generic template to wrap any malicious command and mislead the model into misclassifying it as a 'safe' command, thereby bypassing the user approval requirement and resulting in arbitrary command execution.
References

Subscriptions

Tianguaduizhang Ai Code
cve-icon MITRE

Status: PUBLISHED

Assigner: mitre

Published:

Updated: 2026-03-27T19:48:56.483Z

Reserved: 2026-03-04T00:00:00.000Z

Link: CVE-2026-30304

cve-icon Vulnrichment

Updated: 2026-03-27T19:48:19.379Z

cve-icon NVD

Status : Analyzed

Published: 2026-03-27T15:16:53.263

Modified: 2026-04-03T16:00:09.687

Link: CVE-2026-30304

cve-icon Redhat

No data.

cve-icon OpenCVE Enrichment

Updated: 2026-04-03T21:18:02Z

Weaknesses