vllm@0.13.0 vulnerabilities

A high-throughput and memory-efficient inference and serving engine for LLMs

Direct Vulnerabilities

Known vulnerabilities in the vllm package. This does not include vulnerabilities belonging to this package’s dependencies.

Vulnerability | Vulnerable Version
  • High severity: Server-side Request Forgery (SSRF)

Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) via the MediaConnector class. By supplying URLs that bypass hostname restrictions, an attacker can reach internal network resources, cause system instability, or access sensitive data. The root cause is inconsistent parsing of backslashes between libraries: the load_from_url() function accepts URL strings and validates them using urllib.urlparse(), but the request itself is made using urllib3.parse_url.
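As a hedged illustration of defending against this class of bug (this is not vllm's actual fix; the helper name, allow-list policy, and private-range check are illustrative), a validator can reject backslashes outright before any parser sees the URL, pin the hostname to an allow-list, and refuse private IP literals:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_media_url(url: str, allowed_hosts: set[str]) -> bool:
    """Hypothetical defensive check for media URLs (illustrative only)."""
    # Different URL parsers disagree on backslashes -- the exact
    # inconsistency this advisory describes -- so reject them outright.
    if "\\" in url:
        return False
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None or host not in allowed_hosts:
        return False
    # Block raw IP literals in private/link-local ranges even if
    # they were allow-listed by mistake.
    try:
        addr = ipaddress.ip_address(host)
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    except ValueError:
        pass  # not an IP literal; hostname is already allow-listed
    return True
```

The key design point is to validate with a single, strict policy rather than trusting that the validating parser and the requesting parser agree on edge cases.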

How to fix Server-side Request Forgery (SSRF)?

Upgrade vllm to version 0.14.1 or higher.

Vulnerable versions: [,0.14.1)
  • High severity: Arbitrary Code Injection

Affected versions of this package are vulnerable to Arbitrary Code Injection via the auto_map process during model initialization, even when trust_remote_code is false. An attacker can execute arbitrary code on the host system by supplying a malicious model repository or path that contains attacker-controlled Python code, which is loaded and executed during the model resolution phase.
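One hedged defense-in-depth sketch (the function name is hypothetical, and this is not the project's fix): before handing a model path to the engine, inspect its config.json for auto_map entries, since those register custom Python classes that would be imported from the repository itself:

```python
import json
from pathlib import Path

def declares_remote_code(model_dir: str) -> bool:
    """Return True if the model's config.json registers custom Python
    classes via auto_map, i.e. code the loader would import from the
    model repository. (Illustrative pre-flight check, not vllm's fix.)"""
    config = Path(model_dir) / "config.json"
    if not config.is_file():
        return False
    cfg = json.loads(config.read_text())
    return bool(cfg.get("auto_map"))
```

A deployment could refuse such models unless the operator has explicitly opted in, restoring the guarantee that trust_remote_code=False was meant to provide.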

How to fix Arbitrary Code Injection?

Upgrade vllm to version 0.14.0 or higher.

Vulnerable versions: [0.10.1,0.14.0)
  • High severity: Deserialization of Untrusted Data

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the SUB ZeroMQ socket, where deserialization is performed with the unsafe pickle library. An attacker on the same cluster can execute arbitrary code on the remote machine by sending maliciously crafted serialized payloads.

Note: The V0 engine has been off by default since v0.8.0, and the V1 engine is not affected. Given the V0 engine's deprecated status and the invasive nature of a fix, the developers recommend ensuring a secure network environment if the V0 engine with multi-host tensor parallelism is still in use.
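Why pickle over a network socket is dangerous takes only a few lines to show: unpickling invokes whatever callable the payload names via `__reduce__`, so deserialization itself executes attacker-chosen code. This is a generic illustration of the pickle hazard, not vllm's actual wire format:

```python
import pickle

class Exploit:
    """A payload whose deserialization runs an attacker-chosen callable."""
    def __reduce__(self):
        # pickle will call eval("2 + 2") while loading this payload;
        # a real attack would name os.system or similar instead.
        return (eval, ("2 + 2",))

payload = pickle.dumps(Exploit())  # bytes an attacker would put on the socket
result = pickle.loads(payload)     # merely *loading* them executes the call
```

Here `result` is 4, demonstrating that `pickle.loads()` ran code chosen by whoever produced the bytes, which is why pickle must never be applied to data from an untrusted network peer.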

How to fix Deserialization of Untrusted Data?

There is no fixed version for vllm.

Vulnerable versions: [0.5.2,)
  • High severity: Deserialization of Untrusted Data

Affected versions of this package are vulnerable to Deserialization of Untrusted Data in the MessageQueue.dequeue() API function. An attacker can execute arbitrary code by sending a malicious payload to the message queue.

How to fix Deserialization of Untrusted Data?

There is no fixed version for vllm.

Vulnerable versions: [0,)