Safety vulnerability ID: 71650
The information on this page was manually curated by our Cybersecurity Intelligence Team.
BerriAI/litellm is vulnerable to Server-Side Template Injection (SSTI) via the /completions endpoint. The vulnerability arises because the hf_chat_template method passes the chat_template value from a tokenizer_config.json file to the Jinja template engine without proper sanitization. An attacker can exploit this by supplying a malicious tokenizer_config.json whose chat_template executes arbitrary code on the server when the template is rendered.
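To make the failure mode concrete, the following is a minimal, hypothetical sketch of the vulnerability class, not LiteLLM's actual hf_chat_template code. The attacker_config value and the payload are illustrative; the sketch assumes only that an attacker-controlled chat_template string is rendered with Jinja, and shows how a sandboxed environment changes the outcome.

```python
# Illustrative sketch only -- not LiteLLM's actual code. It shows why
# rendering an attacker-controlled chat_template with a plain Jinja
# Environment is dangerous, and how jinja2's sandbox limits the damage.
import json

from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment

# Simulated attacker-controlled tokenizer_config.json (hypothetical content).
attacker_config = json.loads(
    '{"chat_template": "{{ \'\'.__class__.__mro__[1].__subclasses__() }}"}'
)
chat_template = attacker_config["chat_template"]

# Unsafe pattern: the payload enumerates every loaded Python class, a
# classic SSTI stepping stone toward arbitrary code execution.
leaked = Environment().from_string(chat_template).render(messages=[])
print("plain Environment leaked", leaked.count("class"), "class names")

# Hardened pattern: the sandbox refuses the underscore attribute chain,
# so the same payload fails instead of exposing interpreter internals.
try:
    SandboxedEnvironment().from_string(chat_template).render(messages=[])
except Exception as exc:  # SecurityError or UndefinedError, depending on version
    print("sandbox rejected the payload:", type(exc).__name__)
```

Rendering untrusted templates with jinja2.sandbox.SandboxedEnvironment is one common mitigation for this class of bug; the sketch above only contrasts the two rendering patterns and does not reproduce the patched litellm code.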
Latest version: 1.52.14
Library to easily interface with LLM API providers