Out-of-Bounds Write Affecting the llama.cpp package, versions <5713+dfsg-1


Severity

Recommended: Low

Based on the default assessment until relevant scores are available.

Threat Intelligence

EPSS: 0.05% (17th percentile)

  • Snyk ID: SNYK-DEBIANUNSTABLE-LLAMACPP-10389794
  • Published: 19 Jun 2025
  • Disclosed: 17 Jun 2025

Introduced: 17 Jun 2025

CVE-2025-49847
CWE-119
CWE-195

How to fix?

Upgrade Debian:unstable llama.cpp to version 5713+dfsg-1 or higher.

NVD Description

Note: Versions mentioned in the description apply only to the upstream llama.cpp package and not to the llama.cpp package as distributed by Debian. See How to fix? above for the fixed versions and status relevant to Debian:unstable.

llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama.cpp/src/vocab.cpp (llama_vocab::impl::token_to_piece()) casts a very large size_t token length to int32_t, causing the length check (if (length < (int32_t)size)) to be bypassed. memcpy is then still called with the original, untruncated size_t, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. This issue has been patched in version b5662.
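
The description boils down to a classic signed/unsigned truncation bug. The following is a minimal, hypothetical C++ sketch of the flawed pattern, not the actual llama.cpp source: the function names and signatures are illustrative, with token_text/token_len standing in for an attacker-controlled vocabulary entry read from a GGUF file.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Vulnerable pattern: the bounds check narrows a size_t to int32_t,
// but memcpy still receives the original size_t.
static int32_t try_copy_vulnerable(const char * token_text, size_t token_len,
                                   char * buf, int32_t buf_len) {
    // BUG: a size_t of e.g. 0x80000000 narrows to a negative int32_t, so
    // this check passes for any non-negative buf_len even though
    // token_len is far larger than the destination buffer.
    if (buf_len < (int32_t) token_len) {
        return -1; // reported as "buffer too small"
    }
    // memcpy is called with the untruncated size_t and writes past
    // the end of buf.
    memcpy(buf, token_text, token_len);
    return (int32_t) token_len;
}

// Safer variant: compare in the unsigned domain before any narrowing cast.
static int32_t try_copy_checked(const char * token_text, size_t token_len,
                                char * buf, int32_t buf_len) {
    if (buf_len < 0 || token_len > (size_t) buf_len) {
        return -1; // too large to fit (or to represent); do not copy
    }
    memcpy(buf, token_text, token_len);
    return (int32_t) token_len;
}
```

With token_len = 0x80000000, the cast in the vulnerable variant yields a negative int32_t, the check passes, and memcpy attempts to write roughly 2 GiB past the buffer; the checked variant rejects the length before any narrowing occurs.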