Vulnerability Information
Vulnerability Title
vLLM affected by RCE via auto_map dynamic module loading during model initialization
Vulnerability Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
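To illustrate why the missing gate matters, the sketch below is a minimal, self-contained reproduction of the `auto_map` hazard (it does not use vLLM or `transformers`; all file and function names are hypothetical). A model directory's `config.json` points an auto class at a custom Python file, and resolving that entry imports the file, running its top-level code. A loader that checks `trust_remote_code` before importing, as version 0.14.0 does during model resolution, refuses untrusted repos:

```python
import importlib.util
import json
import pathlib
import tempfile

# Hypothetical untrusted model directory: config.json maps AutoModel to a
# class in a custom module file shipped alongside it (names are illustrative).
repo = pathlib.Path(tempfile.mkdtemp())
(repo / "config.json").write_text(json.dumps({
    "auto_map": {"AutoModel": "modeling_custom.MyModel"}
}))
(repo / "modeling_custom.py").write_text(
    "import pathlib\n"
    "# Top-level code runs at import time -- stands in for an attacker payload.\n"
    "pathlib.Path(__file__).with_name('pwned.txt').write_text('executed')\n"
    "class MyModel:\n"
    "    pass\n"
)

def load_auto_map_class(repo_dir, auto_class, trust_remote_code=False):
    """Resolve an auto_map entry; refuse unless remote code is trusted."""
    cfg = json.loads((repo_dir / "config.json").read_text())
    module_name, class_name = cfg["auto_map"][auto_class].rsplit(".", 1)
    if not trust_remote_code:
        raise ValueError(f"{auto_class} requires custom code from the model "
                         "directory; pass trust_remote_code=True to allow it")
    spec = importlib.util.spec_from_file_location(
        module_name, repo_dir / f"{module_name}.py")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # <-- arbitrary repo code executes here
    return getattr(module, class_name)

# With the gate in place, the untrusted repo is rejected before any import:
try:
    load_auto_map_class(repo, "AutoModel")
except ValueError as exc:
    print("blocked:", exc)
```

The vulnerable behavior corresponds to skipping the `trust_remote_code` check: the import (and the payload) would run as soon as the server resolved the model, before any request was served.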
CVSS Information
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Vulnerability Type
Improper Control of Generation of Code ('Code Injection')
Vulnerability Title
vLLM code injection vulnerability
Vulnerability Description
vLLM is an open-source, high-throughput, memory-efficient inference and serving engine for LLMs. Versions from 0.10.1 up to, but not including, 0.14.0 contain a code injection vulnerability: Hugging Face `auto_map` dynamic modules are loaded during model resolution without gating on `trust_remote_code`, which may allow an attacker to execute arbitrary code when the model is loaded.
CVSS Information
N/A
Vulnerability Type
N/A