Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection') Affecting guardrails-ai package, versions [0.2.9,0.5.10)


Severity

8.6 high

CVSS assessment made by Snyk's Security Team

Threat Intelligence

EPSS: 0.04% (11th percentile)

  • Snyk ID SNYK-PYTHON-GUARDRAILSAI-8056285
  • published 19 Sep 2024
  • disclosed 18 Sep 2024
  • credit Leo Ring

How to fix?

Upgrade guardrails-ai to version 0.5.10 or higher.
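Before upgrading, it can help to confirm whether the installed version falls inside the affected range [0.2.9, 0.5.10). The sketch below is illustrative, not part of the advisory: it assumes the package is distributed on PyPI as `guardrails-ai` and that its versions are plain dotted integers, and the helper names (`is_affected`, `parse`) are made up for this example.

```python
# Hedged sketch: check whether the locally installed guardrails-ai version
# sits in the affected range [0.2.9, 0.5.10). Helper names are illustrative.
from importlib import metadata


def is_affected(version: str) -> bool:
    # Compare dotted versions numerically, e.g. "0.5.10" > "0.5.9".
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))

    return parse("0.2.9") <= parse(version) < parse("0.5.10")


try:
    installed = metadata.version("guardrails-ai")
    status = "VULNERABLE - upgrade to >=0.5.10" if is_affected(installed) else "ok"
    print(f"guardrails-ai {installed}: {status}")
except metadata.PackageNotFoundError:
    print("guardrails-ai is not installed")
```

Note that a simple string comparison would get this wrong (`"0.5.10" < "0.5.9"` lexicographically), which is why the versions are split into integer tuples first.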

Overview

guardrails-ai is a package for adding guardrails to large language models.

Affected versions of this package are vulnerable to Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection') due to improper validation in the parse_token method of the ValidatorsAttr class in the guardrails/guardrails/validatorsattr.py file. An attacker can execute arbitrary code on the user's machine by loading a maliciously crafted XML file that contains Python code, which is then passed to an eval function.
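The core pattern can be illustrated with a minimal sketch. This is not the actual guardrails code: the XML structure, attribute name, and the `parse_token_unsafe` helper are invented for the example, which only demonstrates why passing attacker-controlled XML attribute text to `eval` executes arbitrary Python.

```python
# Minimal sketch of the eval-injection pattern (illustrative, not the
# actual guardrails-ai code). An attribute read from an XML file is
# passed straight to eval(), so any embedded Python expression runs.
import xml.etree.ElementTree as ET

MALICIOUS_RAIL = """<rail>
  <output validators="__import__('os').getcwd()"/>
</rail>"""


def parse_token_unsafe(token: str):
    # Vulnerable: evaluates attacker-controlled text as a Python expression.
    return eval(token)


root = ET.fromstring(MALICIOUS_RAIL)
token = root.find("output").get("validators")
result = parse_token_unsafe(token)  # attacker-chosen code executes here
print(result)
```

Here the payload merely calls `os.getcwd()`, but the same channel admits any expression, including ones that spawn processes or read files, which is why loading an untrusted XML spec is sufficient for code execution.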

CVSS Scores

version 4.0

Snyk (Recommended): 8.6 high
  • Attack Vector (AV)
    Network
  • Attack Complexity (AC)
    Low
  • Attack Requirements (AT)
    None
  • Privileges Required (PR)
    None
  • User Interaction (UI)
    Active
  • Vulnerable System Confidentiality (VC)
    High
  • Vulnerable System Integrity (VI)
    High
  • Vulnerable System Availability (VA)
    High
  • Subsequent System Confidentiality (SC)
    None
  • Subsequent System Integrity (SI)
    None
  • Subsequent System Availability (SA)
    None