guardrails-ai@0.4.3 vulnerabilities

Adding guardrails to large language models.

Direct Vulnerabilities

Known vulnerabilities in the guardrails-ai package. This does not include vulnerabilities belonging to this package’s dependencies.

Vulnerabilities and vulnerable version ranges:
  • High severity
Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')

guardrails-ai is a package for adding guardrails to large language models.

Affected versions of this package are vulnerable to Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection') due to improper validation in the parse_token method of the ValidatorsAttr class in the guardrails/guardrails/validatorsattr.py file. An attacker can execute arbitrary code on the user's machine by loading a maliciously crafted XML file that contains Python code, which is then passed to an eval function. A simplified sketch of this pattern appears after this entry.

How to fix Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')?

Upgrade guardrails-ai to version 0.5.10 or higher.

Vulnerable versions: [0.2.9, 0.5.10)
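The issue above boils down to an untrusted string from an XML document reaching eval(). The sketch below is illustrative only: the RAIL document, validator names, and function names are invented for the example and do not reproduce the actual parse_token implementation. It contrasts the eval-based pattern with an allow-list that treats the token as data.

```python
import xml.etree.ElementTree as ET

# Hypothetical malicious RAIL-style document: the validators attribute carries
# a Python expression instead of a validator name.
RAIL_DOC = """\
<rail>
  <output>
    <string name="answer"
            validators="__import__('os').system('touch /tmp/pwned')"/>
  </output>
</rail>
"""

def parse_validator_token_unsafe(token: str):
    # Vulnerable pattern: the attribute value from the XML document is handed
    # straight to eval(), so any Python expression embedded in the file runs.
    return eval(token)

def parse_validator_token_safe(token: str) -> str:
    # Safer pattern: treat the token as data, never as code, and accept only
    # names from an explicit allow-list.
    allowed = {"valid-length", "two-words", "lower-case"}
    if token not in allowed:
        raise ValueError(f"unknown validator token: {token!r}")
    return token

root = ET.fromstring(RAIL_DOC)
token = root.find(".//string").attrib["validators"]

# parse_validator_token_unsafe(token)  # would execute the attacker's os.system call
try:
    parse_validator_token_safe(token)
except ValueError as exc:
    print("rejected:", exc)
```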
  • High severity
XML External Entity (XXE) Injection

guardrails-ai is a package for adding guardrails to large language models.

Affected versions of this package are vulnerable to XML External Entity (XXE) Injection when consuming RAIL documents from external sources. An attacker can leak internal file data by exploiting a SYSTEM entity in the XML structure. An example payload is sketched after this entry.

How to fix XML External Entity (XXE) Injection?

Upgrade guardrails-ai to version 0.5.0 or higher.

Vulnerable versions: [,0.5.0)
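To show what the SYSTEM-entity attack described above looks like, the sketch below builds a hypothetical malicious RAIL document and parses it with defusedxml, which rejects entity declarations outright. The document contents and entity name are examples, not taken from the advisory or from the patched guardrails code.

```python
from defusedxml import ElementTree as SafeET  # third-party: pip install defusedxml

# Hypothetical XXE payload: the external entity points at a local file.
XXE_RAIL_DOC = """<?xml version="1.0"?>
<!DOCTYPE rail [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<rail>
  <output>
    <string name="answer" description="&xxe;"/>
  </output>
</rail>
"""

# A parser configured to resolve external entities would expand &xxe; with the
# contents of /etc/passwd, which can then leak back to the attacker through the
# generated prompt or output. defusedxml refuses the entity declaration instead.
try:
    SafeET.fromstring(XXE_RAIL_DOC)
except Exception as exc:  # defusedxml raises EntitiesForbidden here
    print(type(exc).__name__, exc)
```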