vllm@0.7.1 vulnerabilities

A high-throughput and memory-efficient inference and serving engine for LLMs

Direct Vulnerabilities

Known vulnerabilities in the vllm package. This does not include vulnerabilities belonging to this package’s dependencies.

  • Low severity
Use of Weak Hash

vllm is a high-throughput and memory-efficient inference and serving engine for LLMs.

Affected versions of this package are vulnerable to Use of Weak Hash due to the use of a predictable constant value in the Python 3.12 built-in hash function. An attacker can interfere with subsequent responses and cause unintended behavior by exploiting predictable hash collisions to populate the cache with prompts known to collide with another prompt in use.
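The weakness above can be sketched in a few lines. Python's built-in `hash()` is not collision-resistant: integer hashes are not randomized, and Python 3.12 made `hash(None)` a fixed constant, so an attacker can precompute colliding cache keys offline. The sketch below contrasts that with a cryptographic digest over a canonical encoding; the function names are illustrative, not vllm's actual API, though the 0.7.2 fix is similar in spirit.

```python
import hashlib
import pickle

def weak_cache_key(token_ids):
    # Built-in hash(): predictable for integers and (in Python 3.12) for
    # None, so colliding prompt keys can be crafted ahead of time.
    return hash(tuple(token_ids))

def strong_cache_key(token_ids):
    # Collision-resistant alternative: SHA-256 over a canonical encoding
    # of the token sequence.
    data = pickle.dumps(tuple(token_ids))
    return hashlib.sha256(data).hexdigest()
```

With the strong key, two distinct prompts only share a cache entry if SHA-256 collides, which is not practically achievable.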

How to fix Use of Weak Hash?

Upgrade vllm to version 0.7.2 or higher.

Vulnerable versions: [,0.7.2)
  • Medium severity
Uncontrolled Resource Consumption ('Resource Exhaustion')

vllm is a high-throughput and memory-efficient inference and serving engine for LLMs.

Affected versions of this package are vulnerable to Uncontrolled Resource Consumption ('Resource Exhaustion') via the best_of parameter. An attacker can cause the system to become unresponsive and prevent legitimate users from accessing the service by consuming excessive system resources.
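Since the `best_of` parameter controls how many candidate sequences are generated per request, an unbounded value multiplies the compute and memory a single request consumes. Absent an upstream fix, one mitigation is to validate requests before they reach the engine. This is a hypothetical guard, a minimal sketch assuming an OpenAI-style request dict; `MAX_BEST_OF`, the cap value, and `validate_sampling_params` are illustrative names, not part of the vllm API.

```python
# Illustrative cap; tune to what one request may reasonably consume.
MAX_BEST_OF = 8

def validate_sampling_params(params: dict) -> dict:
    """Reject requests whose best_of would force excessive parallel decoding."""
    n = params.get("n", 1)
    best_of = params.get("best_of") or n
    if best_of > MAX_BEST_OF:
        raise ValueError(
            f"best_of={best_of} exceeds the configured limit of {MAX_BEST_OF}"
        )
    if best_of < n:
        raise ValueError("best_of must be greater than or equal to n")
    params["best_of"] = best_of
    return params
```

A guard like this would typically run in a reverse proxy or request middleware in front of the serving engine, so oversized requests are rejected before any GPU work is scheduled.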

How to fix Uncontrolled Resource Consumption ('Resource Exhaustion')?

There is no fixed version for vllm.

Vulnerable versions: [0,)