vllm 0.12.0 (latest version, published 10 days ago; package first released 2 years ago)
Known vulnerabilities in the vllm package. This does not include vulnerabilities belonging to this package’s dependencies.
| Vulnerability | Vulnerable Version |
|---|---|
| Affected versions of vllm, a high-throughput and memory-efficient inference and serving engine for LLMs, are vulnerable to **Arbitrary Code Injection** via a config class. Fix: upgrade vllm to version 0.11.1 or higher. | [,0.11.1) |
| Affected versions of vllm are vulnerable to **Server-side Request Forgery (SSRF)** via the MediaConnector. Note: this vulnerability is particularly critical in containerized environments. Workaround: restrict the URLs that the MediaConnector can access, applying the principle of least privilege; it is recommended to implement a configurable allowlist or denylist for domains and IP addresses, with a check added before any remote media URL is fetched. Fix: upgrade vllm to version 0.11.0 or higher. | [0.5.0,0.11.0) |
| Affected versions of vllm are vulnerable to **Allocation of Resources Without Limits or Throttling**. Fix: upgrade vllm to version 0.11.0 or higher. | [,0.11.0) |
| Affected versions of vllm are vulnerable to **Covert Timing Channel**. Fix: upgrade vllm to version 0.11.0 or higher. | [,0.11.0) |
| Affected versions of vllm are vulnerable to **Allocation of Resources Without Limits or Throttling**. Fix: upgrade vllm to version 0.10.1.1 or higher. | [,0.10.1.1) |
| Affected versions of vllm are vulnerable to **Timing Attack**. Fix: upgrade vllm to version 0.9.0 or higher. | [,0.9.0) |
| Affected versions of vllm are vulnerable to **Deserialization of Untrusted Data** in the V0 engine when using multi-host tensor parallelism. Note: the V0 engine is off by default since v0.8.0, and the V1 engine is not affected. Due to the V0 engine's deprecated status and the invasive nature of a fix, the developers recommend ensuring a secure network environment if the V0 engine with multi-host tensor parallelism is still in use. There is no fixed version. | [0.5.2,) |
| Affected versions of vllm are vulnerable to **Binding to an Unrestricted IP Address** through the XPUB socket. This can also be mitigated by blocking or restricting the XPUB socket's port with a firewall. Fix: upgrade vllm to version 0.8.5 or higher. | [0.5.2,0.8.5) |
| Affected versions of vllm are vulnerable to **Deserialization of Untrusted Data**. There is no fixed version. | [0,) |
| Affected versions of vllm are vulnerable to **Deserialization of Untrusted Data**. Fix: upgrade vllm to version 0.6.2 or higher. | [,0.6.2) |
| Affected versions of vllm are vulnerable to **Allocation of Resources Without Limits or Throttling**. Fix: upgrade vllm to version 0.8.0 or higher. | [,0.8.0) |
| Affected versions of vllm are vulnerable to **Use of Weak Hash** due to the use of a predictable constant value in the Python 3.12 built-in hash. Fix: upgrade vllm to version 0.7.2 or higher. | [,0.7.2) |
| Affected versions of vllm are vulnerable to **Deserialization of Untrusted Data**. Fix: upgrade vllm to version 0.7.0 or higher. | [,0.7.0) |
| Affected versions of vllm are vulnerable to **Uncontrolled Resource Consumption ('Resource Exhaustion')**. Fix: upgrade vllm to version 0.6.3 or higher. | [,0.6.3) |
| Affected versions of vllm are vulnerable to **Improper Validation of Syntactic Correctness of Input**. Fix: upgrade vllm to version 0.5.5 or higher. | [,0.5.5) |
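The SSRF workaround above recommends an allowlist check applied before any remote media URL is fetched. A minimal sketch of such a check, using only the standard library, might look like the following. This is not vllm's actual mitigation: the `ALLOWED_HOSTS` set and the `is_safe_media_url` helper are hypothetical names for illustration.

```python
import socket
from ipaddress import ip_address
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the media fetcher may contact.
ALLOWED_HOSTS = {"media.example.com"}

def is_safe_media_url(url: str) -> bool:
    """Reject URLs outside the allowlist or resolving to non-public IPs."""
    parsed = urlparse(url)
    # Only plain HTTP(S) URLs with a hostname are considered.
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    if parsed.hostname not in ALLOWED_HOSTS:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    # Block loopback, link-local (e.g. cloud metadata), and RFC 1918 ranges.
    return all(ip_address(info[4][0]).is_global for info in infos)
```

A caller would invoke `is_safe_media_url(url)` and refuse the request on `False`; a denylist variant would invert the membership test, but an allowlist is the stricter reading of the least-privilege advice.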
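Several of the advisories above involve Deserialization of Untrusted Data, which in Python typically means `pickle` payloads that can execute arbitrary code on load. Where upgrading is not possible (two of the issues have no fixed version), one general-purpose hardening pattern from the Python documentation is a restricted unpickler that refuses to resolve any global not on an explicit allowlist. This is a generic sketch, not vllm's code; the `ALLOWED` set here is illustrative.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Only resolve a small, explicit set of harmless globals."""

    # Illustrative allowlist; extend deliberately, never wildcard.
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads with an allowlist."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers still round-trip through `restricted_loads`, while any payload that tries to resolve an unlisted callable raises `UnpicklingError` instead of executing. For network-facing channels (such as the multi-host tensor-parallel case above), the developers' recommendation of a trusted network segment remains the primary defense.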