mlflow@2.21.2 vulnerabilities

MLflow is an open source platform for the complete machine learning lifecycle

Direct Vulnerabilities

Known vulnerabilities in the mlflow package. This does not include vulnerabilities belonging to this package’s dependencies.

Vulnerability | Vulnerable version
  • Missing Input Length Validation (severity: Medium)

mlflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models.

Affected versions of this package are vulnerable to Missing Input Length Validation in the experiment_name parameter (passed to the run_name() function) and the artifact_location parameter. An attacker can render the UI panel unresponsive by supplying an experiment name made up of a large number of integers.
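Since no fixed version exists, one workaround is to reject oversized names before they ever reach the tracking server. The cap and helper below are illustrative assumptions, not part of MLflow's API:

```python
# Hypothetical defensive wrapper; MAX_NAME_LEN and
# validate_experiment_name are not MLflow names.
MAX_NAME_LEN = 500  # assumed cap; tune to your deployment


def validate_experiment_name(name: str) -> str:
    """Reject experiment names long enough to stall the UI panel."""
    if len(name) > MAX_NAME_LEN:
        raise ValueError(
            f"experiment name is {len(name)} chars; limit is {MAX_NAME_LEN}"
        )
    return name
```

Calling such a check before mlflow.create_experiment keeps pathological names out of the backend store entirely.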

How to fix Missing Input Length Validation?

There is no fixed version for mlflow.

Vulnerable versions: [0,)
  • Allocation of Resources Without Limits or Throttling (severity: High)

Affected versions of this package are vulnerable to Allocation of Resources Without Limits or Throttling in handlers.py, which is exploitable over the /graphql endpoint. An attacker can occupy all available workers and make the server unresponsive to other connections by sending large batches of GraphQL queries that repeatedly request all runs from a given experiment and stay in a pending state. Experiments configured to have a large number of runs are vulnerable.
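Pending an upstream fix, batch size can be capped at a reverse proxy or middleware layer in front of the /graphql endpoint. The sketch below is a deployment-side assumption, not MLflow code: it rejects JSON bodies that batch more than a fixed number of queries.

```python
import json

MAX_BATCH = 10  # assumed limit on batched GraphQL queries


def enforce_batch_limit(raw_body: bytes) -> object:
    """Reject request bodies that batch too many GraphQL queries.

    Batched GraphQL requests arrive as a JSON array of query objects;
    a single query arrives as a JSON object and passes through untouched.
    """
    payload = json.loads(raw_body)
    if isinstance(payload, list) and len(payload) > MAX_BATCH:
        raise ValueError(f"batch of {len(payload)} queries exceeds {MAX_BATCH}")
    return payload
```

Rejecting oversized batches before they reach the worker pool prevents a single client from parking all workers on pending queries.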

How to fix Allocation of Resources Without Limits or Throttling?

There is no fixed version for mlflow.

Vulnerable versions: [0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the load function in the BaseCard class within the recipes/cards/__init__.py file. An attacker can execute arbitrary code on the target system by creating an MLProject Recipe containing a malicious pickle file (e.g. pickle.pkl) and a Python script that calls BaseCard.load("pickle.pkl"). The pickle file is deserialized when the project is run.
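The underlying mechanism is generic to pickle: deserialization can invoke any importable callable named in the payload's __reduce__ hook. The demonstration below uses the harmless builtin len where a real exploit would use something like os.system:

```python
import pickle


class MaliciousPayload:
    """Stands in for the attacker's pickle.pkl; a real exploit would
    return (os.system, ("<shell command>",)) here instead."""

    def __reduce__(self):
        # pickle calls this callable with these args at load time
        return (len, ("attacker-controlled",))


blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # the callable runs during deserialization
print(result)
```

Note that the loader never asked for code to run; merely deserializing the blob invoked the attacker-chosen callable.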

Note:

If you are not running MLflow on a publicly accessible server, this vulnerability won't apply to you.

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [1.27.0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the _load_model function in the mlflow/pytorch/__init__.py file. An attacker can execute arbitrary code on the victim's system by injecting a malicious pickle object into a PyFunc model which will then be deserialized when the model is loaded.
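Since the pickle-based loaders in this list have no fixed version, the practical defences are to load models only from trusted sources and, where custom loading code is an option, to allow-list what the unpickler may resolve. The allow-list below is an illustrative sketch, not an MLflow feature:

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on an explicit allow-list."""

    ALLOWED = {("builtins", "list"), ("builtins", "dict")}  # assumed allow-list

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)


def restricted_loads(blob: bytes):
    """Deserialize a pickle blob while blocking unexpected callables."""
    return RestrictedUnpickler(io.BytesIO(blob)).load()
```

Plain containers deserialize normally, while a payload that references an arbitrary callable is rejected inside find_class before anything executes.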

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [0.5.0,)
  • Improper Control of Generation of Code ('Code Injection') (severity: High)

Affected versions of this package are vulnerable to Improper Control of Generation of Code ('Code Injection') via the _run_entry_point function in the projects/backend/local.py file. An attacker can execute arbitrary code on the victim's system by submitting a maliciously crafted MLproject file.

How to fix Improper Control of Generation of Code ('Code Injection')?

There is no fixed version for mlflow.

Vulnerable versions: [1.11.0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the _load_from_pickle function in the mlflow/langchain/utils.py file. An attacker can execute arbitrary code on the victim's system by injecting a malicious pickle object into a PyFunc model which will then be deserialized when the model is loaded.

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [2.5.0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the _load_custom_objects function in the mlflow/tensorflow/__init__.py file. An attacker can execute arbitrary code on the victim's system by injecting a malicious pickle object into a PyFunc model which will then be deserialized when the model is loaded.

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [2.0.0rc0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the _load_model function in the mlflow/lightgbm/__init__.py file. An attacker can execute arbitrary code on the victim's system by injecting a malicious pickle object into a PyFunc model which will then be deserialized when the model is loaded.

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [1.23.0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the _load_model function in the pmdarima/__init__.py file. An attacker can execute arbitrary code on the victim's system by injecting a malicious pickle object into a PyFunc model which will then be deserialized when the model is loaded.

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [1.24.0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the _load_model_from_local_file function in the sklearn/__init__.py file. An attacker can execute arbitrary code on the victim's system by injecting a malicious pickle object into a PyFunc model, which will then be deserialized when the model is loaded.

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [1.1.0,)
  • Deserialization of Untrusted Data (severity: High)

Affected versions of this package are vulnerable to Deserialization of Untrusted Data via the _load_pyfunc function in the mlflow/pyfunc/model.py file. An attacker can execute arbitrary code on the victim's system by injecting a malicious pickle object into a PyFunc model which will then be deserialized when the model is loaded.

How to fix Deserialization of Untrusted Data?

There is no fixed version for mlflow.

Vulnerable versions: [0.9.0,)
  • Path Traversal (severity: High)

Affected versions of this package are vulnerable to Path Traversal due to improper sanitization of user-supplied paths in the artifact deletion functionality. An attacker can delete arbitrary directories on the server's filesystem by exploiting the double decoding performed across the _delete_artifact_mlflow_artifacts handler and the local_file_uri_to_path function. The root cause is an extra unquote operation in the delete_artifacts function of local_artifact_repo.py: the second decode can re-introduce traversal sequences that earlier sanitization never saw.
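The double decode matters because a traversal check applied before the second unquote never sees a literal ".." sequence. A standalone illustration (the is_safe check is hypothetical, standing in for the handler's sanitization):

```python
from urllib.parse import unquote


def is_safe(path: str) -> bool:
    """Hypothetical sanitization run before the extra unquote:
    it looks for a literal '..' only."""
    return ".." not in path


encoded = "%252e%252e%2fsecrets"  # "../secrets" with the dots encoded twice
once = unquote(encoded)           # "%2e%2e/secrets" -- no literal ".." yet
assert is_safe(once)              # the check passes...
twice = unquote(once)             # "../secrets" -- traversal re-emerges
print(twice)
```

Decoding exactly once before validating, and never decoding again afterwards, closes this class of bypass.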

How to fix Path Traversal?

There is no fixed version for mlflow.

Vulnerable versions: [0,)