Vulnerability Information
Vulnerability Title
vLLM using built-in hash() from Python 3.12 leads to predictable hash collisions in vLLM prefix cache
Vulnerability Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Its prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) has been changed to a predictable constant value, which makes it more feasible for someone to try to exploit hash collisions. The impact of a collision is that cache generated from different content would be reused. Given knowledge of the prompts in use and the predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use, interfering with subsequent responses and causing unintended behavior. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
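The mechanism can be illustrated with a small sketch. This is not vLLM's actual code; the function names and block layout are hypothetical. It contrasts a weak chained block hash built on Python's built-in hash() (whose output for inputs like None is a fixed, predictable constant on Python 3.12+) with a hardened variant that mixes in a per-process random salt and a cryptographic digest, in the spirit of the 0.7.2 fix:

```python
import hashlib
import pickle
import secrets

# Hypothetical sketch of prefix-cache block hashing (not vLLM's real code).
# Weak scheme: built-in hash() output is predictable for some inputs
# (e.g. hash(None) is a fixed constant on Python >= 3.12), so an attacker
# who knows a victim prompt could search for colliding token blocks offline.
def weak_block_hash(parent_hash, token_ids):
    return hash((parent_hash, tuple(token_ids)))

# Hardened scheme: a per-process secret salt plus SHA-256 makes cache keys
# impossible to precompute from outside the process.
_SALT = secrets.token_bytes(16)

def strong_block_hash(parent_hash, token_ids):
    payload = pickle.dumps((_SALT, parent_hash, tuple(token_ids)))
    return hashlib.sha256(payload).hexdigest()

if __name__ == "__main__":
    # Identical content still hashes identically within one process,
    # so the cache keeps working...
    print(strong_block_hash(None, [1, 2, 3]) == strong_block_hash(None, [1, 2, 3]))
    # ...but the value depends on the secret salt, so an attacker cannot
    # predict it the way they can predict a hash(None)-seeded chain.
```

The design point is that cache keys only need to be stable within a single serving process, so a random per-process salt costs nothing functionally while removing cross-process predictability.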
CVSS Information
CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N
Vulnerability Type
Improper Validation of Integrity Check Value
Vulnerability Title
vLLM Security Vulnerability
Vulnerability Description
vLLM is an open-source high-throughput and memory-efficient inference and serving engine for LLMs. vLLM contains a security vulnerability: maliciously constructed statements can lead to hash collisions and hence cache reuse, which can interfere with subsequent responses and cause unintended behavior.
CVSS Information
N/A
Vulnerability Type
N/A