vLLM is an inference and serving engine for large language models (LLMs). Versions 0.8.0 up to but excluding 0.9.0 have a Denial of Service (ReDoS) vulnerability that causes the vLLM server to crash if an invalid regex is provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg/CVE-2025-48942, but for a regex instead of a JSON schema. Version 0.9.0 fixes the issue.
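The crash path is an uncaught exception raised while compiling a client-supplied pattern for regex-constrained (structured) output in the OpenAI-compatible server. The sketch below is illustrative only and does not reproduce vLLM's internal code or the actual 0.9.0 patch; it shows the general mitigation of validating the pattern up front and returning an error to the client instead of letting the exception propagate. The helper name and the use of Python's re module for validation are assumptions for illustration.

    # Hypothetical sketch: reject a malformed guided-decoding regex before it
    # reaches the structured-output backend, so the request fails with an
    # error response rather than crashing the engine.
    import re


    def validate_guided_regex(pattern: str):
        """Return an error string for an invalid pattern, or None if it compiles."""
        try:
            re.compile(pattern)
        except re.error as exc:
            return f"invalid regex for structured output: {exc}"
        return None


    # A well-formed pattern passes; an unbalanced "(" is rejected instead of
    # raising an uncaught exception in the serving path.
    assert validate_guided_regex(r"\d{3}-\d{4}") is None
    assert validate_guided_regex("(") is not None

In a deployment, a check along these lines would sit wherever the server first sees the client-supplied pattern, so a bad request is answered with an error rather than taking the whole server down.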
Advisories
Source       ID                    Title
EUVD         EUVD-2025-16355       (same description as above)
EUVD         EUVD-2025-16515       (same description as above)
GitHub GHSA  GHSA-9hcf-v7m4-6m2j   vLLM allows clients to crash the openai server with invalid regex
Fixes

Solution

Upgrade to version 0.9.0, which fixes the issue.


Workaround

No workaround given by the vendor.

History

Tue, 24 Jun 2025 18:00:00 +0000

First Time appeared (added): Vllm, Vllm vllm
CPEs (added): cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*
Vendors & Products (added): Vllm, Vllm vllm

Sat, 31 May 2025 03:15:00 +0000

References
Metrics (threat_severity): None (removed), Moderate (added)

Fri, 30 May 2025 19:15:00 +0000

Metrics (ssvc, added): {'options': {'Automatable': 'no', 'Exploitation': 'none', 'Technical Impact': 'partial'}, 'version': '2.0.3'}

Fri, 30 May 2025 18:45:00 +0000

Description (added): vLLM is an inference and serving engine for large language models (LLMs). Versions 0.8.0 up to but excluding 0.9.0 have a Denial of Service (ReDoS) vulnerability that causes the vLLM server to crash if an invalid regex is provided while using structured output. This vulnerability is similar to GHSA-6qc9-v4r8-22xg/CVE-2025-48942, but for a regex instead of a JSON schema. Version 0.9.0 fixes the issue.
Title (added): vLLM allows clients to crash the openai server with invalid regex
Weaknesses (added): CWE-248
References
Metrics (cvssV3_1, added): {'score': 6.5, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H'}

MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2025-05-30T18:56:18.715Z

Reserved: 2025-05-28T18:49:07.582Z

Link: CVE-2025-48943

Vulnrichment

Updated: 2025-05-30T18:55:36.545Z

NVD

Status: Analyzed

Published: 2025-05-30T19:15:30.280

Modified: 2025-06-24T17:40:52.923

Link: CVE-2025-48943

Redhat

Severity: Moderate

Public Date: 2025-05-30T18:36:01Z

Links: CVE-2025-48943 - Bugzilla

OpenCVE Enrichment

Updated: 2025-06-24T09:44:17Z