Description
AnythingLLM is an application that turns pieces of content into context that any LLM can use as references during chatting. In 1.11.1 and earlier, AnythingLLM Desktop contains a Streaming Phase XSS vulnerability in the chat rendering pipeline that escalates to Remote Code Execution on the host OS due to insecure Electron configuration. This works with default settings and requires no user interaction beyond normal chat usage. The custom markdown-it image renderer in frontend/src/utils/chat/markdown.js interpolates token.content directly into the alt attribute without HTML entity escaping. The PromptReply component renders this output via dangerouslySetInnerHTML without DOMPurify sanitization — unlike HistoricalMessage which correctly applies DOMPurify.sanitize().
Published: 2026-03-13
Score: 9.7 Critical
EPSS: < 1% Very Low
KEV: No
Impact: Remote Code Execution
Action: Immediate Patch
AI Analysis

Impact

AnythingLLM Desktop 1.11.1 and earlier contains a Streaming Phase cross‑site scripting flaw in the chat rendering pipeline that can be escalated to remote code execution on the host OS. The flaw arises from the custom markdown-it image renderer that inserts unescaped token.content into the alt attribute, while the rendered output is injected into the DOM via dangerouslySetInnerHTML without DOMPurify sanitization. This allows an attacker who can influence the LLM response to inject malicious JavaScript that runs with the desktop application's privileges, potentially compromising system integrity and confidentiality.
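To make the attribute-injection mechanism concrete, here is a minimal, self-contained sketch of the vulnerable pattern and its fix. The function names and the escaper below are illustrative, not the actual AnythingLLM source; the point is that interpolating untrusted text into an HTML attribute without entity escaping lets a double quote terminate the attribute and smuggle in an event handler.

```javascript
// Illustrative sketch (not the real markdown.js code): untrusted text
// flows straight into a double-quoted HTML attribute.
function renderImageVulnerable(altText, src) {
  // BAD: a quote in altText breaks out of alt="..." and injects
  // arbitrary new attributes such as an onerror handler.
  return `<img alt="${altText}" src="${src}" />`;
}

// Minimal HTML-entity escaper covering the characters that matter
// inside a double-quoted attribute value.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function renderImageSafe(altText, src) {
  return `<img alt="${escapeHtml(altText)}" src="${escapeHtml(src)}" />`;
}

const hostile = '" onerror="alert(1)';
console.log(renderImageVulnerable(hostile, "x"));
// → <img alt="" onerror="alert(1)" src="x" />   (attribute breakout)
console.log(renderImageSafe(hostile, "x"));
// → <img alt="&quot; onerror=&quot;alert(1)" src="x" />   (inert text)
```

Escaping at the renderer closes the injection point; sanitizing the final HTML (as HistoricalMessage does with DOMPurify) adds a second, independent layer.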

Affected Systems

Affected systems are desktop installations of AnythingLLM from Mintplex-Labs, version 1.11.1 and all earlier releases. The flaw is exploitable under the application's default Electron configuration, so installations remain at risk until they are upgraded. Users running these versions are exposed through normal chat usage with any LLM whose responses an attacker can influence; no user interaction beyond ordinary chatting is required.

Risk and Exploitability

Risk assessment: The CVSS base score of 9.7 classifies this as critical, while the EPSS score of less than 1% indicates low current exploit prevalence. The vulnerability is not listed in the CISA KEV catalog. Exploitation requires only normal chat interaction with the vulnerable version; once malicious content is rendered, the JavaScript runs with full desktop process privileges, potentially giving an attacker full control over the host system.
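The advisory's contrast between the two render paths (unsanitized PromptReply versus DOMPurify-protected HistoricalMessage) comes down to whether the HTML string is scrubbed before it reaches dangerouslySetInnerHTML. DOMPurify is the library the project actually uses; the tiny regex stand-in below is only a sketch of what sanitization accomplishes, stripping inline event-handler attributes so injected markup becomes inert.

```javascript
// Hedged stand-in for DOMPurify (illustrative only; a regex is NOT a
// production-grade sanitizer): drop inline on*="..." event-handler
// attributes before the HTML ever reaches dangerouslySetInnerHTML.
function stripEventHandlers(html) {
  return html.replace(/\son\w+\s*=\s*"[^"]*"/gi, "");
}

// Markup an attacker could produce via the unescaped alt attribute:
const injected = '<img alt="" onerror="alert(1)" src="x">';
console.log(stripEventHandlers(injected)); // <img alt="" src="x">
```

In the React components this corresponds to passing DOMPurify.sanitize(html) into dangerouslySetInnerHTML, which the advisory notes HistoricalMessage already does and PromptReply does not.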

Generated by OpenCVE AI on March 16, 2026 at 23:08 UTC.

Remediation

A patched release is available from the vendor; no workaround has been published for earlier versions.

OpenCVE Recommended Actions

  • Update AnythingLLM Desktop to the latest patched version (1.11.2 or later).
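Because the escalation from XSS to RCE depends on the renderer process having access to privileged APIs, hardening the Electron side provides defense in depth against any future rendering bug. The flags below are standard Electron webPreferences options shown as a general sketch; the advisory does not disclose AnythingLLM's actual configuration.

```javascript
// Standard Electron renderer-hardening flags (a general sketch, not the
// project's real config). With these set, script injected into the chat
// window cannot reach Node.js APIs directly, so an XSS stays an XSS
// instead of becoming host-level code execution.
const secureWebPreferences = {
  nodeIntegration: false, // no require()/process in the renderer
  contextIsolation: true, // preload runs in a separate JS context
  sandbox: true,          // OS-level sandbox for the renderer process
  webSecurity: true,      // keep same-origin policy enforced
};

// In the Electron main process this would be applied as:
//   new BrowserWindow({ webPreferences: secureWebPreferences });
```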


Advisories

No advisories yet.

History

Mon, 16 Mar 2026 21:15:00 +0000

Values added:

  • Metrics: ssvc {'options': {'Automatable': 'yes', 'Exploitation': 'none', 'Technical Impact': 'total'}, 'version': '2.0.3'}

Mon, 16 Mar 2026 20:45:00 +0000

Values added:

  • First Time appeared: Mintplexlabs anythingllm
  • CPEs: cpe:2.3:a:mintplexlabs:anythingllm:*:*:*:*:*:*:*:*
  • Vendors & Products: Mintplexlabs anythingllm

Mon, 16 Mar 2026 10:15:00 +0000

Values added:

  • First Time appeared: Mintplexlabs, Mintplexlabs anything-llm
  • Vendors & Products: Mintplexlabs, Mintplexlabs anything-llm

Fri, 13 Mar 2026 20:30:00 +0000

Values added:

  • Title: AnythingLLM has a Streaming Phase XSS to RCE via LLM Response Injection
  • Description: (full vulnerability description as shown in the Description section above)
  • Weaknesses: CWE-79
  • References
  • Metrics: cvssV3_1 {'score': 9.7, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-03-16T20:13:43.696Z

Reserved: 2026-03-12T15:29:36.558Z

Link: CVE-2026-32626

Vulnrichment

Updated: 2026-03-16T20:13:39.523Z

NVD

Status: Analyzed

Published: 2026-03-16T14:19:40.033

Modified: 2026-03-16T20:34:47.637

Link: CVE-2026-32626

Redhat

No data.

OpenCVE Enrichment

Updated: 2026-03-23T13:39:57Z

Weaknesses