Key Information

Vulnerability Name: SQL Injection Vulnerability Due to Unvalidated LLM-Generated SQL

Affected Component: the function, invoked from the method, that executes the LLM-generated SQL.

Vulnerability Description

The code passes user-controlled parameters directly into the prompt sent to the large language model (LLM), and the SQL the LLM generates is then executed without any validation. Because the generated SQL is executed verbatim through the function called from the method, an attacker who can influence the parameters can inject arbitrary SQL. In practice, a user who knows how to craft SQL statements can do almost anything, including data deletion, modification, or even system takeover. Since the generated SQL is not validated at runtime, vulnerabilities of this kind could be widely exploited in applications built on this AI toolkit.

Vulnerable Code Example

Recommended Mitigations

1. Strengthen SQL post-processing and validation: before executing any generated SQL, post-process and validate it to ensure the constructed queries are well-formed and safe.
2. Whitelist SQL statements: only allow certain types of SQL statements to pass through.
3. Secure LLM prompt engineering: harden all prompts sent to the LLM with explicit guidance, including security-related instructions and syntax checks that forbid injectable SQL.
4. Principle of least privilege: ensure that the database account used by the application has only the privileges necessary to perform its tasks.
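Mitigations 1 and 2 (validation plus whitelisting) can be sketched as a gate placed between the LLM and the database. The sketch below is illustrative only: the function name `validate_llm_sql`, the keyword blocklist, and the SQLite schema are assumptions, not part of the affected toolkit.

```python
import re
import sqlite3

# Hypothetical blocklist of write/DDL keywords; a real deployment would
# tune this to its own database and threat model.
FORBIDDEN = re.compile(
    r"\b(DROP|DELETE|UPDATE|INSERT|ALTER|ATTACH|PRAGMA)\b", re.IGNORECASE
)

def validate_llm_sql(sql: str) -> str:
    """Reject LLM-generated SQL unless it is a single SELECT statement."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:
        # A remaining semicolon means the model stacked multiple statements.
        raise ValueError("multiple statements are not allowed")
    if not stmt.upper().startswith("SELECT"):
        raise ValueError("only SELECT statements are allowed")
    if FORBIDDEN.search(stmt):
        raise ValueError("forbidden keyword in generated SQL")
    return stmt

# Example usage against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

safe = validate_llm_sql("SELECT name FROM users WHERE id = 1")
print(conn.execute(safe).fetchall())  # [('alice',)]

try:
    validate_llm_sql("SELECT 1; DROP TABLE users")
except ValueError as exc:
    print("rejected:", exc)
```

A keyword blocklist alone is easy to bypass, which is why the report pairs it with structural checks (single statement, SELECT-only) and the other mitigations; a production gate would typically also parse the SQL rather than rely on regular expressions.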
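Mitigation 4 (least privilege) can also be enforced at the connection level. As a minimal sketch, SQLite's authorizer callback can deny every operation except reads, so even SQL that slips past validation cannot modify data; the function name `read_only_authorizer` and the schema are assumptions for illustration.

```python
import sqlite3

def read_only_authorizer(action, arg1, arg2, dbname, source):
    # Permit SELECT compilation, column reads, and function calls;
    # deny writes, DDL, and everything else.
    if action in (sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ,
                  sqlite3.SQLITE_FUNCTION):
        return sqlite3.SQLITE_OK
    return sqlite3.SQLITE_DENY

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# From here on, the connection is effectively read-only.
conn.set_authorizer(read_only_authorizer)

print(conn.execute("SELECT name FROM users").fetchall())  # [('alice',)]
try:
    conn.execute("DELETE FROM users")
except sqlite3.DatabaseError as exc:
    print("denied:", exc)
```

With a server database the same idea is expressed through account grants: the application connects as a user that holds only the privileges its queries require.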