Improper Neutralization of Input Used for LLM Prompting@google/gemini-cli is a Gemini CLI
Affected versions of this package are vulnerable to Improper Neutralization of Input Used for LLM Prompting due to an improper implementation of the getCommandRoot() and isCommandAllowed() functions. An attacker can inject malicious commands by leveraging command chaining (e.g., using &&) and contextual instructions embedded within arbitrary files (e.g., README.md), bypassing the tool's security checks. This can lead to the silent execution of arbitrary code when a user runs the tool on an untrusted repository, resulting in potential credential theft and system compromise.
Notes:
In response to the disclosure, Google highlighted that Gemini CLI is offering sandboxing options via pre-built containers; Google also emphasized the existence of persistent warnings shown to users who don't have the feature enabled.
The researchers who discovered the vulnerability acknowledged this but noted that this less-secure, non-sandboxed mode is the default setting.
Users are recommended to use sandboxing mode when utilizing AI agents whenever possible.
How to fix Improper Neutralization of Input Used for LLM Prompting? Upgrade @google/gemini-cli to version 0.1.14 or higher.
| |