Vulnerability Information
Vulnerability Title
llama.cpp Vulnerable to Buffer Overflow via Malicious GGUF Model
Vulnerability Description
llama.cpp is an inference engine for several LLM models written in C/C++. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama_vocab::impl::token_to_piece() (src/vocab.cpp) casts a very large size_t token length to int32_t, so the length check (if (length < (int32_t)size)) is bypassed. memcpy is then still called with the oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. The issue has been patched in version b5662.
CVSS Information
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Vulnerability Type
Improper Restriction of Operations within the Bounds of a Memory Buffer
Vulnerability Title
llama.cpp Security Vulnerability
Vulnerability Description
llama.cpp is an LLM inference project by the individual developer Georgi Gerganov. Versions of llama.cpp prior to b5662 contain a security vulnerability: a GGUF model vocabulary can trigger a buffer overflow, potentially leading to memory corruption and arbitrary code execution.
CVSS Information
N/A
Vulnerability Type
N/A