Improper Neutralization of Input Used for LLM Prompting Affecting @google/gemini-cli package, versions <0.1.14


Severity

Recommended
0.0
high
0
10

CVSS assessment by Snyk's Security Team. Learn more

Threat Intelligence

Exploit Maturity
Proof of Concept

Do your applications use this vulnerable package?

In a few clicks we can analyze your entire application and see what components are vulnerable in your application, and suggest you quick fixes.

Test your applications
  • Snyk IDSNYK-JS-GOOGLEGEMINICLI-11342370
  • published31 Jul 2025
  • disclosed27 Jul 2025
  • creditSam Cox

Introduced: 27 Jul 2025

New CVE NOT AVAILABLE CWE-1427  (opens in a new tab)

How to fix?

Upgrade @google/gemini-cli to version 0.1.14 or higher.

Overview

@google/gemini-cli is a Gemini CLI

Affected versions of this package are vulnerable to Improper Neutralization of Input Used for LLM Prompting due to an improper implementation of the getCommandRoot() and isCommandAllowed() functions. An attacker can inject malicious commands by leveraging command chaining (e.g., using &&) and contextual instructions embedded within arbitrary files (e.g., README.md), bypassing the tool's security checks. This can lead to the silent execution of arbitrary code when a user runs the tool on an untrusted repository, resulting in potential credential theft and system compromise.

Notes:

  • In response to the disclosure, Google highlighted that Gemini CLI is offering sandboxing options via pre-built containers; Google also emphasized the existence of persistent warnings shown to users who don't have the feature enabled.

  • The researchers who discovered the vulnerability acknowledged this but noted that this less-secure, non-sandboxed mode is the default setting.

  • Users are recommended to use sandboxing mode when utilizing AI agents whenever possible.

CVSS Base Scores

version 4.0
version 3.1