torch@2.11.0

Tensors and Dynamic neural networks in Python with strong GPU acceleration

  • latest version: 2.11.0

  • first published: 7 years ago

  • latest version published: 19 days ago

  • licenses detected

  • Direct Vulnerabilities

    Known vulnerabilities in the torch package. This does not include vulnerabilities belonging to this package’s dependencies.

    • Deserialization of Untrusted Data (medium severity)

    Affected versions of this package are vulnerable to Deserialization of Untrusted Data in the .pt2 loading handler. An attacker can execute arbitrary code or alter application behavior by supplying malicious serialized data that executes during deserialization, even when weights_only=True is set.

    How to fix Deserialization of Untrusted Data?

    There is no fixed version for torch.

    Vulnerable versions: [0,) (all versions)
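    To see why deserializing untrusted data is dangerous, here is a minimal pure-Python sketch using the standard pickle module (an illustration of the same class of flaw, not torch's .pt2 code path): a serialized payload can invoke an arbitrary callable at load time.

    ```python
    import pickle


    # Hypothetical malicious object: __reduce__ tells pickle to call an
    # arbitrary callable with arbitrary arguments during deserialization.
    class Malicious:
        def __reduce__(self):
            # The demo invokes a harmless function, but an attacker could
            # name os.system or any other importable callable here.
            return (str.upper, ("code ran at load time",))


    payload = pickle.dumps(Malicious())

    # The attacker's callable runs as a side effect of loading the data.
    result = pickle.loads(payload)
    print(result)  # CODE RAN AT LOAD TIME
    ```

    This is the reason loaders such as torch.load offer weights_only=True; since the advisory above reports that the .pt2 handler can be bypassed even with that flag, untrusted model files should not be loaded at all.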
    • Integer Overflow or Wraparound (medium severity)

    Affected versions of this package are vulnerable to Integer Overflow or Wraparound via the torch.nan_to_num() function when combined with .long() to convert float("inf") in eager mode. An attacker can cause unexpected behavior by providing specially crafted input that triggers an integer overflow.

    How to fix Integer Overflow or Wraparound?

    There is no fixed version for torch.

    Vulnerable versions: [0,) (all versions)
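    The hazard can be sketched without torch. float("inf") has no 64-bit integer representation: pure Python refuses the conversion, while a fixed-width cast (which an eager-mode .long() ultimately performs at the C level) must wrap or produce an unspecified value. A minimal pure-Python illustration, where to_int64 is a hypothetical helper modeling a two's-complement cast:

    ```python
    def to_int64(n: int) -> int:
        """Model a two's-complement cast of an arbitrary int into signed 64-bit."""
        n &= (1 << 64) - 1  # keep only the low 64 bits
        return n - (1 << 64) if n >= (1 << 63) else n


    # Python's arbitrary-precision ints detect the problem up front:
    try:
        int(float("inf"))
    except OverflowError as exc:
        print("refused:", exc)

    # A fixed-width cast has no such safety net. 2.0**63 is an exact float,
    # but one past INT64_MAX, so the cast wraps all the way to INT64_MIN:
    print(to_int64(int(2.0 ** 63)))  # -9223372036854775808
    ```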
    • Mismatched Memory Management Routines (medium severity)

    Affected versions of this package are vulnerable to Mismatched Memory Management Routines through the torch.cuda.memory.caching_allocator_delete function. A local attacker can corrupt memory by passing crafted arguments to this function.

    How to fix Mismatched Memory Management Routines?

    There is no fixed version for torch.

    Vulnerable versions: [0,) (all versions)
    • Out-of-bounds Write (medium severity)

    Affected versions of this package are vulnerable to Out-of-bounds Write through the torch.jit.jit_module_from_flatbuffer function. An attacker can corrupt memory by supplying crafted input data to this function.

    How to fix Out-of-bounds Write?

    There is no fixed version for torch.

    Vulnerable versions: [0,) (all versions)
    • Out-of-bounds Write (medium severity)

    Affected versions of this package are vulnerable to Out-of-bounds Write when using @torch.jit.script. An attacker can corrupt memory by supplying crafted input to a scripted function.

    Note: This is only exploitable if the attacker has local access to the system.

    How to fix Out-of-bounds Write?

    There is no fixed version for torch.

    Vulnerable versions: [0,) (all versions)
    • Out-of-bounds Write (medium severity)

    Affected versions of this package are vulnerable to Out-of-bounds Write through the torch.lstm_cell function. An attacker can corrupt memory by supplying crafted input to this function.

    Note: This is only exploitable if the attacker has local access to the system.

    How to fix Out-of-bounds Write?

    A fix has been pushed to the master branch but has not yet been published in a release.

    Vulnerable versions: [0,) (all versions)
    • Improper Resource Shutdown or Release (medium severity)

    Affected versions of this package are vulnerable to Improper Resource Shutdown or Release through the torch.cuda.nccl.reduce function in the file torch/cuda/nccl.py. A local attacker can crash the application by supplying crafted inputs to this function.

    How to fix Improper Resource Shutdown or Release?

    A fix has been pushed to the master branch but has not yet been published in a release.

    Vulnerable versions: [0,) (all versions)
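    Since none of the issues above have a published fix, the practical mitigation is to treat model files as trusted artifacts only: load files solely from sources you control and verify their integrity before handing them to torch.load or torch.jit.load. A minimal torch-free sketch (load_if_trusted and the pinned digest are hypothetical, not a torch API):

    ```python
    import hashlib
    from pathlib import Path


    def sha256_of(path: Path) -> str:
        """Stream-hash a file so large checkpoints never sit fully in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()


    def load_if_trusted(path: Path, expected_sha256: str) -> bytes:
        """Refuse to read an artifact whose digest doesn't match a pinned value."""
        actual = sha256_of(path)
        if actual != expected_sha256:
            raise ValueError(f"untrusted artifact: {path} (digest {actual})")
        return path.read_bytes()
    ```

    The expected digest would be pinned at packaging or deployment time; only bytes that pass the check are ever passed on to a torch loader.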