Heap-based Buffer Overflow Affecting ggml package, versions <0.0~git20250711.b6d2ebd-1


Severity

Recommended: Low

Based on default assessment until relevant scores are available.

Threat Intelligence

EPSS: 0.04% (13th percentile)

  • Snyk ID: SNYK-DEBIANUNSTABLE-GGML-10734102
  • Published: 14 Jul 2025
  • Disclosed: 10 Jul 2025

Introduced: 10 Jul 2025

CVE-2025-53630
CWE-122 (Heap-based Buffer Overflow)
CWE-680 (Integer Overflow to Buffer Overflow)

How to fix?

Upgrade Debian:unstable ggml to version 0.0~git20250711.b6d2ebd-1 or higher.

NVD Description

Note: Versions mentioned in the description apply only to the upstream ggml package and not the ggml package as distributed by Debian. See How to fix? for Debian:unstable relevant fixed versions and status.

llama.cpp is a C/C++ implementation of inference for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
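
The failure mode described here is a classic integer-overflow-to-heap-overflow pattern (CWE-680 leading to CWE-122): a byte count derived from untrusted file header fields wraps around, the resulting allocation is undersized, and the subsequent copy writes past the end of the heap buffer. The C++ sketch below illustrates that pattern and a hardened variant with overflow and bounds checks; the function names and structure are hypothetical and are not taken from ggml/src/gguf.cpp.

    #include <cstdint>
    #include <cstring>
    #include <stdexcept>
    #include <vector>

    // Vulnerable pattern: the total size is computed from untrusted header
    // fields and can wrap, so the heap allocation ends up smaller than the
    // data later written into it.
    std::vector<uint8_t> read_array_vulnerable(const uint8_t *data,
                                               uint64_t count, uint64_t elem_size) {
        size_t total = static_cast<size_t>(count) * static_cast<size_t>(elem_size); // may wrap
        std::vector<uint8_t> buf(total); // undersized if the multiplication wrapped
        for (uint64_t i = 0; i < count; ++i) {
            // Writes past the end of buf once i * elem_size reaches the wrapped total.
            std::memcpy(buf.data() + i * elem_size, data + i * elem_size, elem_size);
        }
        return buf;
    }

    // Hardened variant: reject sizes that would overflow or exceed the bytes
    // actually available in the input.
    std::vector<uint8_t> read_array_checked(const uint8_t *data, size_t data_len,
                                            uint64_t count, uint64_t elem_size) {
        if (elem_size != 0 && count > SIZE_MAX / elem_size) {
            throw std::runtime_error("gguf array size overflows size_t");
        }
        size_t total = static_cast<size_t>(count) * static_cast<size_t>(elem_size);
        if (total > data_len) {
            throw std::runtime_error("gguf array exceeds remaining file data");
        }
        std::vector<uint8_t> buf(total);
        std::memcpy(buf.data(), data, total);
        return buf;
    }

Checking count > SIZE_MAX / elem_size before multiplying avoids relying on wrapped arithmetic, which is the standard mitigation for this bug class when parsing attacker-controlled file headers.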