Safety vulnerability ID: 65038
The information on this page was manually curated by our Cybersecurity Intelligence Team.
An issue in pandas-ai v0.9.1 and earlier allows a remote attacker to execute arbitrary code via the `_is_jailbreak` function.
Latest version: 2.4.0
Chat with your database (SQL, CSV, pandas, Polars, MongoDB, NoSQL, etc.). PandasAI makes data analysis conversational using LLMs (GPT-3.5 / GPT-4, Anthropic, Vertex AI) and RAG.
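The advisory concerns the safety check PandasAI applies to LLM-generated code before running it. The sketch below is purely illustrative and is not PandasAI's actual `_is_jailbreak` implementation: it shows, under the assumption of a hypothetical denylist-style checker (`is_jailbreak_naive`), why such pre-execution filters are generally bypassable, which is the class of weakness this advisory describes.

```python
import ast

# Hypothetical denylist of dangerous names; NOT PandasAI's actual check.
BANNED_NAMES = {"os", "sys", "subprocess", "__import__", "eval", "exec", "open"}

def is_jailbreak_naive(code: str) -> bool:
    """Return True if the snippet references an obviously dangerous name
    or contains an import statement; otherwise deem it 'safe'."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return True  # refuse anything that does not parse
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in BANNED_NAMES:
            return True
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return True
    return False

# A direct attempt is caught by the denylist...
print(is_jailbreak_naive("import os; os.system('id')"))   # True

# ...but an attribute-walk bypass is not: the snippet contains no banned
# Name node and no import, yet at runtime it enumerates loaded classes,
# a classic stepping stone back to dangerous builtins.
bypass = "().__class__.__base__.__subclasses__()"
print(is_jailbreak_naive(bypass))                          # False
```

The general lesson, consistent with the fix shipped in later releases, is that filtering untrusted generated code before `exec()` cannot be made safe by pattern matching alone; execution must be sandboxed or disallowed.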