LiteLLM

Latest version: v1.61.11


1.23.4

What's Changed
* [FEAT] 76% Faster s3 logging Proxy / litellm.acompletion / router.acompletion 🚀 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1892
* (feat) Add support for AWS credentials from profile file by dleen in https://github.com/BerriAI/litellm/pull/1895 (see the config sketch below this list)
* Litellm langfuse error logging - log input by krrishdholakia in https://github.com/BerriAI/litellm/pull/1898
* Admin UI - View Models, TPM, RPM Limit of a Key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1903
* Admin UI - show delete confirmation when deleting keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1904

![litellm_key_gen5](https://github.com/BerriAI/litellm/assets/29436595/809f61ae-21b3-490e-9f01-eb962a19da9b)
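
For the AWS-profile change, here is a minimal sketch of pointing a Bedrock model at a named credentials profile in the proxy `config.yaml`. The `aws_profile_name` parameter is taken from LiteLLM's Bedrock options, and the model alias, region, and profile name are placeholders; this is an assumption about how the feature in #1895 is exposed, not a confirmed interface:

```yaml
model_list:
  - model_name: bedrock-claude
    litellm_params:
      model: bedrock/anthropic.claude-v2   # example Bedrock model
      aws_region_name: us-east-1           # placeholder region
      aws_profile_name: my-dev-profile     # hypothetical profile from ~/.aws/credentials
```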



**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.3...v1.23.4

1.23.3

What's Changed
* [FEAT] 78% Faster s3 Cache⚡️- Proxy/ litellm.acompletion/ litellm.Router.acompletion by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1891
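
As a rough sketch, turning on the (now faster) s3 cache for the proxy looks something like the following in `config.yaml`; the bucket and region values are placeholders, and the `cache_params` keys are assumptions based on LiteLLM's caching docs:

```yaml
litellm_settings:
  cache: true
  cache_params:
    type: s3
    s3_bucket_name: my-litellm-cache-bucket   # hypothetical bucket name
    s3_region_name: us-west-2                 # hypothetical region
```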


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.2...v1.23.3

1.23.2

What's Changed 🐬
1. [FEAT] Azure Pricing - based on base_model in model_info
2. [Feat] Semantic Caching - Track Cost of using embedding, Use Langfuse Trace ID
3. [Feat] Slack Alert when budget tracking fails


1. [FEAT] Azure Pricing - based on base_model in model_info by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1874
Azure Pricing - Use Base model for cost calculation
Why?
Azure returns `gpt-4` in the response when `azure/gpt-4-1106-preview` is used, so we were using `gpt-4` pricing when calculating `response_cost`.

How to use - set `base_model` on config.yaml
```yaml
model_list:
  - model_name: azure-gpt-3.5
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      api_version: "2023-07-01-preview"
    model_info:
      base_model: azure/gpt-4-1106-preview
```


View Cost calculated on Langfuse
This used the correct pricing for `azure/gpt-4-1106-preview`: `(9 * 0.00001) + (28 * 0.00003) = 0.00093` (9 prompt tokens at $0.01/1K and 28 completion tokens at $0.03/1K).
<img width="938" alt="Screenshot 2024-02-07 at 4 39 12 PM" src="https://github.com/BerriAI/litellm/assets/29436595/9edd3b8f-15d3-4c7f-82f7-2a0e3c08c17d">

2. [Feat] Semantic Caching - Track Cost of using embedding, Use Langfuse Trace ID by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1878
- If a `trace_id` is passed, we'll place the semantic cache embedding call in the same trace
- We now track the cost of the embedding call made for semantic caching against the calling API key

<img width="1002" alt="Screenshot 2024-02-07 at 7 18 57 PM" src="https://github.com/BerriAI/litellm/assets/29436595/203e2d12-9d1e-4411-a1dd-4219de83a2b7">

3. [Feat] Slack Alert when budget tracking fails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1877
<img width="913" alt="Screenshot 2024-02-07 at 8 08 27 PM" src="https://github.com/BerriAI/litellm/assets/29436595/4c70c204-05bf-412d-8efe-18e25a8b8b17">


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.1...v1.23.2

1.23.1

What's Changed
* [Feat] add azure/gpt-4-0125-preview by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1876
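
For reference, a minimal sketch of exposing the newly supported model through the proxy `config.yaml`, following the same pattern as the `base_model` example above; the `model_name` alias, env-var references, and API version are placeholders:

```yaml
model_list:
  - model_name: gpt-4-0125-preview
    litellm_params:
      model: azure/gpt-4-0125-preview
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      api_version: "2023-07-01-preview"   # hypothetical API version
```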


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.0...v1.23.1

1.23.0

What's Changed
* feat(ui): enable admin to view all valid keys created on the proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/1843
* fix(proxy_server.py): prisma client fixes for high traffic by krrishdholakia in https://github.com/BerriAI/litellm/pull/1860


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.22.11...v1.23.0

1.22.11

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.22.10...v1.22.11

