Out-of-Bounds Affecting llama.cpp package, versions <5760+dfsg-1


Severity

Recommended: low

Based on a default assessment until relevant scores are available.

Threat Intelligence

EPSS
0.01% (2nd percentile)

  • Snyk ID: SNYK-DEBIANUNSTABLE-LLAMACPP-10498804
  • Published: 25 Jun 2025
  • Disclosed: 24 Jun 2025

Introduced: 24 Jun 2025

CVE-2025-52566
CWE-119
CWE-195

How to fix?

Upgrade Debian:unstable llama.cpp to version 5760+dfsg-1 or higher.

NVD Description

Note: Versions mentioned in the description apply only to the upstream llama.cpp package and not the llama.cpp package as distributed by Debian. See How to fix? for Debian:unstable relevant fixed versions and status.

llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) caused unintended behavior in the size comparison guarding the token-copy step. This allowed carefully crafted text input to trigger a heap overflow in the llama.cpp inference engine during tokenization. The issue has been patched in version b5721.