Metrics
Affected Vendors & Products
| Source | ID | Title |
|---|---|---|
| EUVD | EUVD-2025-4074 | vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, the behavior of hash(None) has changed to be a predictable constant value. This makes it more feasible that someone could try to exploit hash collisions. The impact of a collision would be using cache that was generated using different content. Given knowledge of prompts in use and predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability. |
| Github GHSA | GHSA-rm76-4mrf-v9r8 | vLLM uses Python 3.12 built-in hash() which leads to predictable hash collisions in prefix cache |
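The description above rests on two Python details: hash(None) became a fixed constant in Python 3.12, and the built-in hash() of ints and tuples of ints is not covered by hash randomization, so keys derived from it are predictable across processes. A minimal sketch of that behavior with a toy hash-keyed prefix cache (the key shape and helper names are illustrative assumptions, not vLLM's actual code):

```python
# Illustrative sketch only; key shape and names are hypothetical, not vLLM's code.
import sys

# 1) hash(None): in Python 3.12+ it is a fixed constant, identical in every
#    process. Before 3.12 it was derived from the object's memory address,
#    so it varied between runs.
print(sys.version_info[:2], hash(None))

# 2) Hash randomization (PYTHONHASHSEED) applies to str/bytes, not to ints or
#    tuples of ints, so a key built from token IDs hashes identically everywhere.
def toy_block_key(parent_key, token_ids, extra=None):
    """Hypothetical prefix-cache key derived with the built-in hash()."""
    return hash((parent_key, tuple(token_ids), extra))

cache = {}  # toy prefix cache: key -> cached block/content

def lookup_or_fill(parent_key, token_ids, content, extra=None):
    key = toy_block_key(parent_key, token_ids, extra)
    if key in cache:
        # Cache hit: whatever was stored under this key is reused, even if it
        # was produced for *different* content that merely collides on the key.
        return cache[key]
    cache[key] = content
    return content

# With predictable hashing, someone who can construct token sequences that
# collide under toy_block_key() could pre-populate the cache and have their
# entry served in place of content computed for another prompt.
print(lookup_or_fill(None, [101, 2023, 318], "victim block"))
```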
Solution
No solution given by the vendor.
Workaround
No workaround given by the vendor.
Tue, 01 Jul 2025 21:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| First Time appeared | | Vllm, Vllm vllm |
| CPEs | | cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
| Vendors & Products | | Vllm, Vllm vllm |
Wed, 12 Feb 2025 21:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Mon, 10 Feb 2025 13:30:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Weaknesses | | CWE-916 |
| References | | |
| Metrics | threat_severity | threat_severity |
Fri, 07 Feb 2025 20:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, the behavior of hash(None) has changed to be a predictable constant value. This makes it more feasible that someone could try to exploit hash collisions. The impact of a collision would be using cache that was generated using different content. Given knowledge of prompts in use and predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability. |
| Title | | vLLM using built-in hash() from Python 3.12 leads to predictable hash collisions in vLLM prefix cache |
| Weaknesses | | CWE-354 |
| References | | |
| Metrics | | cvssV3_1 |
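The advisory states the issue was addressed in vLLM 0.7.2. As a generic illustration of the hardening pattern (a sketch under the assumption of a pickle-serialized key; not necessarily the exact upstream patch, for which see GHSA-rm76-4mrf-v9r8), a collision-resistant key can be built by serializing the block contents and digesting them with a cryptographic hash, which removes the ability to craft a colliding prefix:

```python
# Sketch of a collision-resistant cache key; serialization and field names are
# assumptions for illustration, not the exact change shipped in vLLM 0.7.2.
import hashlib
import pickle

def stable_block_key(parent_key, token_ids, extra=None) -> bytes:
    # pickle gives a stable byte encoding for this tuple of plain ints/None;
    # sha256 makes finding a colliding prefix computationally infeasible,
    # unlike Python's built-in hash().
    payload = pickle.dumps((parent_key, tuple(token_ids), extra))
    return hashlib.sha256(payload).digest()

k1 = stable_block_key(None, [101, 2023, 318])
k2 = stable_block_key(None, [101, 2023, 319])
assert k1 != k2 and len(k1) == 32  # 256-bit digests, no practical collisions
```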
Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2025-02-12T20:51:46.402Z
Reserved: 2025-02-03T19:30:53.399Z
Link: CVE-2025-25183
Updated: 2025-02-12T20:48:59.400Z
Status: Analyzed
Published: 2025-02-07T20:15:34.083
Modified: 2025-07-01T20:58:00.170
Link: CVE-2025-25183
OpenCVE Enrichment
No data.
EUVD
Github GHSA