Impact
An indirect prompt injection flaw in nanobot's email channel allows a remote, unauthenticated attacker to deliver malicious content through a monitored email address, where the bot treats the email body as fully trusted input, injects it into the large language model for instruction generation, and can trigger system tools; the attacker can thus execute arbitrary LLM instructions and effectively run code on the host, leading to full system compromise.
Affected Systems
The vulnerability affects the nanobot personal AI assistant developed by HKUDS in any release prior to version 0.1.6, including 0.1.4 and its post versions (post1 through post5), as well as any earlier releases that contain the same email channel code; the software runs in Python environments.
Risk and Exploitability
The CVSS score of 8.9 classifies this vulnerability as high severity, and the EPSS score of less than 1 % indicates a low likelihood of exploitation in the wild; nevertheless, the attack requires only a single, zero‑click email with no user interaction, and since the flaw is not listed in the CISA KEV catalog there is no evidence of widespread exploitation yet, but administrators should treat it as a high‑risk issue that can be triggered remotely via email.
OpenCVE Enrichment