Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection') Affecting guardrails-ai package, versions [0.2.9,0.5.10)
Threat Intelligence
EPSS: 0.04% (11th percentile)
- Snyk ID SNYK-PYTHON-GUARDRAILSAI-8056285
- published 19 Sep 2024
- disclosed 18 Sep 2024
- credit Leo Ring
- Introduced: 18 Sep 2024
- CVE-2024-45858
How to fix?
Upgrade guardrails-ai to version 0.5.10 or higher.
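For pip-managed environments, this corresponds to running pip install --upgrade "guardrails-ai>=0.5.10" (a generic pip invocation; the version floor is the advisory's fix version).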
Overview
guardrails-ai is a package for adding guardrails to large language models.
Affected versions of this package are vulnerable to Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection') due to improper validation in the parse_token method of the ValidatorsAttr class in the guardrails/guardrails/validatorsattr.py file. An attacker can execute arbitrary code on the user's machine by loading a maliciously crafted XML file that contains Python code, which is then passed to an eval function.
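A minimal sketch of the vulnerable pattern is below. It is an illustration, not the library's actual code: the simplified parse_token-style helper, the rail spec, and the valid-length format token are assumptions, but they show how a Python expression embedded in an XML attribute reaches eval and executes when the file is loaded.

```python
# Minimal sketch of the eval-injection pattern described above.
# The rail spec, attribute name, and helper are illustrative, not
# the actual guardrails-ai source.
import xml.etree.ElementTree as ET

MALICIOUS_RAIL = """
<rail version="0.1">
  <output>
    <string name="text"
            format="valid-length: __import__('os').system('id')" />
  </output>
</rail>
"""

def parse_token_unsafe(token: str):
    # Hypothetical simplification: the argument part of a
    # "validator: args" token is passed straight to eval(),
    # so any Python expression in the XML runs here.
    name, _, args = token.partition(":")
    return name.strip(), eval(args.strip())

root = ET.fromstring(MALICIOUS_RAIL)
field = root.find(".//string")
# Parsing the attacker-supplied rail file executes `id` via os.system.
parse_token_unsafe(field.get("format"))
```

The general mitigation for this pattern is to never pass untrusted input to eval, for example by accepting only literals via ast.literal_eval or an explicit allow-list of validator names; upgrading to guardrails-ai 0.5.10 or later applies the upstream fix.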