Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
Published: 2026-03-26
Score: 8.8 High
EPSS: < 1% Very Low
KEV: No
Impact: Remote Code Execution
Action: Immediate Patch
AI Analysis

Impact

vLLM, a large language model inference engine, contains a flaw in which two model implementation files hardcode `trust_remote_code=True` when loading sub-components. This bypasses the user's explicit opt-out via the --trust-remote-code flag, allowing malicious code embedded in a model repository to execute with the privileges of the deployment process. The underlying weakness is a trust boundary violation (CWE-501) combined with a protection mechanism failure (CWE-693). The result is a severe remote code execution vulnerability.
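The bug class is easy to see in miniature. The sketch below is hypothetical (the function names are invented and do not match vLLM's actual code paths); it shows how a hardcoded keyword argument defeats a user-level opt-out, and how the fix propagates the user's setting instead.

```python
# Hypothetical illustration of the flaw class; names are invented and do not
# correspond to vLLM's real loaders.

def load_subcomponent(name: str, *, trust_remote_code: bool) -> str:
    # Stand-in for a loader that may execute code shipped inside a model repo.
    if trust_remote_code:
        return f"{name}: repository-provided code would execute"
    return f"{name}: repository-provided code blocked"

def vulnerable_load(name: str, user_trust_remote_code: bool) -> str:
    # Bug pattern: the user's choice is silently ignored; True is hardcoded,
    # so --trust-remote-code=False has no effect at this call site.
    return load_subcomponent(name, trust_remote_code=True)

def patched_load(name: str, user_trust_remote_code: bool) -> str:
    # Fix pattern: the user's explicit setting is threaded through.
    return load_subcomponent(name, trust_remote_code=user_trust_remote_code)
```

Even with the user opting out, the vulnerable path still enables remote code; the patched path honors the flag.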

Affected Systems

The affected product is vLLM from the vllm‑project. Versions from 0.10.1 up to but not including 0.18.0 are impacted. Users running these releases are susceptible to execution of arbitrary code when loading models from potentially untrusted repositories.

Risk and Exploitability

A CVSS score of 8.8 marks the vulnerability as high severity, yet its EPSS score of less than 1% indicates a low probability of exploitation in the wild. It is not listed in the CISA KEV catalog, suggesting it has not yet been exploited at scale. Exploitation requires the ability to supply or influence a model load operation, typically via an administrative interface or a data pipeline. Once triggered, the attacker gains the same rights as the inference service process, which may include system privileges or access to sensitive data.

Generated by OpenCVE AI on March 30, 2026 at 20:39 UTC.

Remediation

A vendor fix is available: version 0.18.0 patches the issue. No workaround is documented for earlier releases beyond avoiding model loads from untrusted repositories.

OpenCVE Recommended Actions

  • Check the currently installed vLLM version.
  • Upgrade to vLLM 0.18.0 or later, where the hardcoded trust_remote_code issue is fixed.
  • Verify that the configuration does not override the patch; keep --trust-remote-code set to false when loading models from untrusted sources.
  • Monitor vendor advisories and security feeds for future updates.
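The version check in the first two bullets can be scripted. A minimal sketch, assuming the advisory's affected range (at least 0.10.1 and below 0.18.0) and plain numeric X.Y.Z version strings; pre-release suffixes are not handled here, so production checks should use a proper version parser such as `packaging.version`:

```python
from importlib.metadata import PackageNotFoundError, version

def parse(v: str) -> tuple:
    # Naive X.Y.Z parser; assumes purely numeric components.
    return tuple(int(part) for part in v.split(".")[:3])

def is_affected(v: str) -> bool:
    # Affected range per the advisory: 0.10.1 <= version < 0.18.0.
    return parse("0.10.1") <= parse(v) < parse("0.18.0")

def check_installed() -> str:
    try:
        v = version("vllm")
    except PackageNotFoundError:
        return "vllm is not installed"
    if is_affected(v):
        return f"vllm {v} is affected: upgrade to 0.18.0 or later"
    return f"vllm {v} is not in the affected range"
```

Running `check_installed()` on a host reports whether the deployed release falls inside the vulnerable window.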



Advisories

Source: GitHub GHSA
ID: GHSA-7972-pg2x-xr59
Title: vLLM has Hardcoded Trust Override in Model Files Enables RCE Despite Explicit User Opt-Out
History

Mon, 30 Mar 2026 19:00:00 +0000

  • First Time appeared: Vllm vllm
  • CPEs added: cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*
  • Vendors & Products added: Vllm vllm

Fri, 27 Mar 2026 14:15:00 +0000

  • Metrics added: ssvc {'options': {'Automatable': 'no', 'Exploitation': 'none', 'Technical Impact': 'total'}, 'version': '2.0.3'}

Fri, 27 Mar 2026 12:15:00 +0000

  • Weaknesses added: CWE-501
  • References added
  • Metrics: threat_severity changed from None to Important

Fri, 27 Mar 2026 08:45:00 +0000

  • First Time appeared: Vllm-project vllm
  • Vendors & Products added: Vllm-project vllm

Fri, 27 Mar 2026 04:00:00 +0000

  • Description added (initial text, identical to the description at the top of this page)
  • Title added: vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out
  • Weaknesses added: CWE-693
  • References added
  • Metrics added: cvssV3_1 {'score': 8.8, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-03-27T13:52:33.526Z

Reserved: 2026-02-24T15:19:29.717Z

Link: CVE-2026-27893

Vulnrichment

Updated: 2026-03-27T13:26:45.730Z

NVD

Status: Analyzed

Published: 2026-03-27T00:16:22.333

Modified: 2026-03-30T18:56:21.730

Link: CVE-2026-27893

Red Hat

Severity: Important

Published: 2026-03-26T23:56:53Z

Links: CVE-2026-27893 - Bugzilla

OpenCVE Enrichment

Updated: 2026-03-30T20:57:19Z

Weaknesses