Description
vLLM is an inference and serving engine for large language models (LLMs). From 0.16.0 to before 0.19.0, a server-side request forgery (SSRF) vulnerability in download_bytes_from_url allows any actor who can control batch input JSON to make the vLLM batch runner issue arbitrary HTTP/HTTPS requests from the server, without any URL validation or domain restrictions.
This can be used to target internal services (e.g. cloud metadata endpoints or internal HTTP APIs) reachable from the vLLM host. This vulnerability is fixed in 0.19.0.
Published: 2026-04-06
Score: 5.4 Medium
EPSS: n/a
KEV: No
Impact: Server-side request forgery
Action: Patch Immediately
AI Analysis

Impact

vLLM, an inference engine for large language models, contains a server‑side request forgery flaw in the download_bytes_from_url function. An attacker who can supply or modify the JSON payload sent to the batch endpoint can cause the vLLM server to initiate arbitrary HTTP or HTTPS requests. Because the function lacks URL validation or domain whitelisting, the attacker can reach any internal or external web resource visible to the host.
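The underlying pattern can be sketched as follows. This is an illustrative reconstruction, not vLLM's actual code: the function name comes from the advisory, but the body, signature, and comments are assumptions.

```python
import urllib.request

def download_bytes_from_url(url: str, timeout: float = 10.0) -> bytes:
    """Illustrative vulnerable pattern: fetch whatever URL the caller supplies.

    The URL arrives verbatim from attacker-controlled batch input JSON;
    there is no scheme allowlist, no host or IP validation, and no domain
    restriction before the request is issued from the server.
    """
    with urllib.request.urlopen(url, timeout=timeout) as resp:  # SSRF sink
        return resp.read()

# A hostile batch entry could therefore point the server at, for example,
# "http://169.254.169.254/latest/meta-data/" (a cloud metadata endpoint).
```

Because the sink accepts any scheme and host, the only limit on what the server will contact is network reachability from the vLLM host itself.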

Affected Systems

The vulnerability affects versions of the vLLM project from 0.16.0 up to, but not including, 0.19.0. Systems running those releases with publicly exposed inference endpoints are susceptible. Updating to vLLM 0.19.0 or later removes the flaw.
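A quick way to check whether a deployed release falls in the affected range is a comparison like the following. This is a hypothetical helper, not part of vLLM, and it assumes plain x.y.z version strings without pre-release suffixes:

```python
def vllm_version_is_affected(version: str) -> bool:
    """True if a vLLM version string falls in the affected range [0.16.0, 0.19.0)."""
    # Compare the first three numeric components as a tuple.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return (0, 16, 0) <= parts < (0, 19, 0)

print(vllm_version_is_affected("0.18.2"))  # True: needs upgrade
print(vllm_version_is_affected("0.19.0"))  # False: fixed release
```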

Risk and Exploitability

With a CVSS v3.1 score of 5.4 (AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:L) the issue is rated Medium severity. Per the vector, exploitation requires only low-privileged access and can be triggered through normal batch-inference API usage. The lack of an EPSS score and absence from the CISA KEV list suggest limited current exploitation data, but the ability to reach internal services such as cloud metadata endpoints could enable credential theft, data exfiltration, or lateral movement if the host is connected to sensitive networks.

Generated by OpenCVE AI on April 6, 2026 at 19:40 UTC.

Remediation

The vendor fix is vLLM 0.19.0; no workaround has been published for earlier releases.

OpenCVE Recommended Actions

  • Upgrade vLLM to version 0.19.0 or later.
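For deployments that cannot upgrade immediately, an application-level egress guard along these lines can reduce exposure. This is a sketch, not an official vLLM mitigation: the helper name is invented, and it only covers literal IP addresses (hostnames would additionally need DNS resolution and re-checking to resist rebinding):

```python
import ipaddress
from urllib.parse import urlparse

def url_targets_internal_host(url: str) -> bool:
    """Return True if the URL's host is a literal private, loopback,
    or link-local IP address (e.g. cloud metadata at 169.254.169.254)."""
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # not a literal IP; real code should resolve and re-check
    return ip.is_private or ip.is_loopback or ip.is_link_local

print(url_targets_internal_host("http://169.254.169.254/latest/meta-data/"))  # True
print(url_targets_internal_host("https://example.com/batch.jsonl"))           # False
```

A guard like this would be called on every URL taken from batch input before any request is issued; it is a stop-gap, and upgrading to 0.19.0 remains the actual fix.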



Advisories
Source: GitHub GHSA
ID: GHSA-pf3h-qjgv-vcpr
Title: vLLM: Server-Side Request Forgery (SSRF) in `download_bytes_from_url`
History

Tue, 07 Apr 2026 00:00:00 +0000

First Time appeared (added): Vllm-project; Vllm-project vllm
Vendors & Products (added): Vllm-project; Vllm-project vllm
Metrics: threat_severity changed from None to Moderate


Mon, 06 Apr 2026 16:45:00 +0000

Description (added): vLLM is an inference and serving engine for large language models (LLMs). From 0.16.0 to before 0.19.0, a server-side request forgery (SSRF) vulnerability in download_bytes_from_url allows any actor who can control batch input JSON to make the vLLM batch runner issue arbitrary HTTP/HTTPS requests from the server, without any URL validation or domain restrictions. This can be used to target internal services (e.g. cloud metadata endpoints or internal HTTP APIs) reachable from the vLLM host. This vulnerability is fixed in 0.19.0.
Title (added): vLLM affected by Server-Side Request Forgery (SSRF) in `download_bytes_from_url`
Weaknesses (added): CWE-918
Metrics (added): cvssV3_1 {'score': 5.4, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:L'}


Subscriptions

Vllm-project Vllm
cve-icon MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-04-06T15:36:52.942Z

Reserved: 2026-03-30T19:17:10.225Z

Link: CVE-2026-34753

cve-icon Vulnrichment

No data.

cve-icon NVD

Status: Received

Published: 2026-04-06T16:16:36.307

Modified: 2026-04-06T16:16:36.307

Link: CVE-2026-34753

cve-icon Redhat

Severity: Moderate

Public Date: 2026-04-06T15:36:52Z

Links: CVE-2026-34753 - Bugzilla

cve-icon OpenCVE Enrichment

Updated: 2026-04-06T21:31:54Z

Weaknesses