vllm@0.6.6.post1 vulnerabilities

A high-throughput and memory-efficient inference and serving engine for LLMs

Direct Vulnerabilities

Known vulnerabilities in the vllm package. This does not include vulnerabilities belonging to this package’s dependencies.

  • Deserialization of Untrusted Data (High severity)

vllm is a high-throughput and memory-efficient inference and serving engine for LLMs.

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the hf_model_weights_iterator function, which calls torch.load with the weights_only parameter set to False, an insecure setting. An attacker who supplies malicious pickle data can execute arbitrary code during the unpickling process.

How to fix Deserialization of Untrusted Data?

Upgrade vllm to version 0.7.0 or higher.

Vulnerable versions: [,0.7.0)
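
For context, the vulnerable pattern and the fix differ mainly in the weights_only flag passed to torch.load. The sketch below is illustrative only and is not vllm's actual loader code; the function names are made up for the example.

    import torch

    # Illustrative only; not vllm's actual hf_model_weights_iterator code.
    # torch.load uses pickle under the hood, so fully unpickling an
    # untrusted checkpoint file can execute attacker-controlled code.

    def load_weights_insecure(path):
        # Vulnerable pattern: weights_only=False allows arbitrary objects
        # (and therefore arbitrary code via pickle) to be deserialized.
        return torch.load(path, map_location="cpu", weights_only=False)

    def load_weights_safer(path):
        # Safer pattern, in the spirit of the 0.7.0 fix: weights_only=True
        # restricts deserialization to tensors and primitive containers.
        return torch.load(path, map_location="cpu", weights_only=True)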
  • Uncontrolled Resource Consumption ('Resource Exhaustion') (Medium severity)

vllm is a high-throughput and memory-efficient inference and serving engine for LLMs.

Affected versions of this package are vulnerable to Uncontrolled Resource Consumption ('Resource Exhaustion') via the best_of parameter. By supplying a very large best_of value, an attacker can consume excessive system resources, making the service unresponsive and preventing legitimate users from accessing it.

How to fix Uncontrolled Resource Consumption ('Resource Exhaustion')?

There is no fixed version for vllm.

Vulnerable versions: [0,)
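
Since no fixed version exists, one pragmatic mitigation is to validate or cap best_of before requests reach the engine. The sketch below is an assumption-laden illustration: the cap value, the function name, and the idea of doing this in a proxy or application layer are illustrative choices, not part of vllm.

    # Illustrative mitigation sketch (not part of vllm): reject oversized
    # best_of values before a request reaches the inference server.
    # MAX_BEST_OF and validate_completion_request are hypothetical names.

    MAX_BEST_OF = 5  # illustrative cap; tune to your hardware budget

    def validate_completion_request(payload: dict) -> dict:
        best_of = payload.get("best_of", 1)
        n = payload.get("n", 1)
        if not isinstance(best_of, int) or best_of < 1:
            raise ValueError("best_of must be a positive integer")
        if best_of > MAX_BEST_OF:
            raise ValueError(f"best_of={best_of} exceeds the allowed maximum {MAX_BEST_OF}")
        if best_of < n:
            raise ValueError("best_of must be >= n")
        return payload

    # Example: {"prompt": "hi", "n": 1, "best_of": 10000} is rejected here
    # instead of making the engine schedule 10000 candidate sequences.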