Safety vulnerability ID: 71793
The information on this page was manually curated by our Cybersecurity Intelligence Team.
A command injection vulnerability exists in the RunGptLLM class of the llama_index library, version 0.9.47, which is used by JinaAI's RunGpt framework to connect to Large Language Models (LLMs). The vulnerability stems from improper use of Python's eval function on data received from the LLM hosting provider, allowing a malicious or compromised provider to execute arbitrary commands on the client's machine and potentially gain full control of it. This issue was fixed in version 0.10.13.
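The following minimal sketch illustrates the class of bug described above; it is not the actual llama_index source, and the function names are hypothetical. The unsafe variant evaluates the provider's response with eval, which executes arbitrary Python; the safe variant parses it as plain JSON data instead.

```python
import json

def parse_response_unsafe(body: str):
    # VULNERABLE pattern: eval() executes arbitrary Python, so a
    # malicious hosting provider can run code on the client, e.g. with
    # body = "__import__('os').system('...')"
    return eval(body)

def parse_response_safe(body: str):
    # Fixed pattern: json.loads only parses data and never executes
    # code; a non-JSON payload raises json.JSONDecodeError instead.
    return json.loads(body)

if __name__ == "__main__":
    benign = '{"text": "hello"}'
    print(parse_response_safe(benign)["text"])  # hello

    malicious = "__import__('os').getcwd()"
    try:
        parse_response_safe(malicious)
    except json.JSONDecodeError:
        print("rejected non-JSON payload")
```

Replacing eval with a data-only parser such as json.loads (or ast.literal_eval for Python literals) removes the code-execution path while still accepting well-formed responses.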
Latest version: 0.12.1
Interface between LLMs and your data