Improper Validation of Integrity Check Value Affecting py3.10-vllm-cuda-12.6 package, versions <0.7.2-r0


Severity

Recommended: Low

Based on default assessment until relevant scores are available.

Threat Intelligence

Exploit Maturity: Not Defined
EPSS: 0.05% (18th percentile)

  • Snyk ID: SNYK-CHAINGUARDLATEST-PY310VLLMCUDA126-8722404
  • Published: 14 Feb 2025
  • Disclosed: 7 Feb 2025

Introduced: 7 Feb 2025

CVE-2025-25183
CWE-354

How to fix?

Upgrade Chainguard py3.10-vllm-cuda-12.6 to version 0.7.2-r0 or higher.

NVD Description

Note: Versions mentioned in the description apply only to the upstream py3.10-vllm-cuda-12.6 package and not to the py3.10-vllm-cuda-12.6 package as distributed by Chainguard. See How to fix? above for the fixed versions and status relevant to Chainguard.

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, the behavior of hash(None) has changed to be a predictable constant value. This makes it more feasible that someone could try to exploit hash collisions. The impact of a collision would be reuse of cache entries that were generated from different content. Given knowledge of prompts in use and predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
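
To make the hashing concern concrete, below is a minimal sketch, not vLLM's actual implementation: the block_hash and prefix_hashes helpers and the block size of 4 are hypothetical, chosen only to mimic the pattern described above of chaining prefix-cache block keys off Python's built-in hash(), with the root block's parent set to None.

```python
# Illustrative sketch only -- NOT vLLM's actual code. It mimics a prefix cache
# whose block keys are built with Python's built-in hash(), chained from None.
import sys

def block_hash(parent_hash, token_ids):
    # Token IDs are small ints, which hash to themselves, so before Python 3.12
    # the only per-process entropy in this chain came from hash(None).
    return hash((parent_hash, tuple(token_ids)))

def prefix_hashes(token_ids, block_size=4):
    """Chain block hashes the way a simple prefix cache might."""
    hashes, parent = [], None  # root block has no parent -> hash(None) is used
    for i in range(0, len(token_ids), block_size):
        parent = block_hash(parent, token_ids[i:i + block_size])
        hashes.append(parent)
    return hashes

if __name__ == "__main__":
    print(sys.version_info[:2], "hash(None) =", hash(None))
    # On Python >= 3.12, hash(None) is a fixed constant, so these cache keys
    # are identical across runs and processes, and therefore predictable.
    print(prefix_hashes([101, 7592, 2088, 102]))
```

Running the sketch on Python 3.12 or later prints the same keys on every run, whereas on earlier interpreters hash(None) is derived from the object's memory address and typically varies between runs; that loss of per-process variation is what makes deliberately crafting colliding prompts more practical, and upgrading to 0.7.2 remains the only remediation listed.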