Safety vulnerability ID: 71792
The information on this page was manually curated by our Cybersecurity Intelligence Team.
A command injection vulnerability exists in the run-llama/llama_index repository, specifically within the safe_eval function. The function's security check rejects LLM-generated code that contains underscores, but attackers can bypass it by crafting input that contains no underscore characters yet still results in the execution of OS commands. The vulnerability allows remote code execution (RCE) on the server hosting the application.
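To illustrate the class of bypass described above, the following minimal sketch shows a naive underscore-based filter and a payload that defeats it. The naive_safe_eval function and the payload are hypothetical and written only for illustration; they are not the actual llama_index safe_eval implementation or the original proof of concept.

    # Illustrative only: a naive underscore filter in the spirit of the check
    # described above, and a payload that defeats it. This is NOT the actual
    # llama_index safe_eval code or the original exploit.

    def naive_safe_eval(code: str):
        # Reject any source text that literally contains an underscore.
        if "_" in code:
            raise ValueError("underscores are not allowed")
        return eval(code)

    # The strings "__class__", "__base__", and "__subclasses__" are assembled
    # at runtime from chr(95), so the source text contains no underscore and
    # passes the filter, yet it still reaches dunder attributes.
    payload = (
        "getattr(getattr(getattr((), chr(95)*2+'class'+chr(95)*2),"
        " chr(95)*2+'base'+chr(95)*2), chr(95)*2+'subclasses'+chr(95)*2)()"
    )

    subclasses = naive_safe_eval(payload)  # every class loaded in the interpreter
    print(len(subclasses))
    # From here an attacker can walk the globals of a suitable subclass (again
    # via getattr/chr) to reach the os module and run arbitrary commands.

The point of the sketch is that blacklisting characters in generated code is not a reliable sandbox: attribute names can be constructed at runtime, so the dangerous object graph remains reachable even when the forbidden characters never appear in the source text.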
Latest version: 0.12.5
Interface between LLMs and your data