LiteLLM


v1.43.6.dev1

What's Changed
* [Refactor+Testing] Refactor Prometheus metrics to use CustomLogger class + add testing for prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5149
* fix(main.py): safely fail stream_chunk_builder calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/5151
* Feat - track response latency on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5152
* Feat - Proxy track fallback metrics on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5153


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.6...v1.43.6.dev1
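
PR #5149 moves the Prometheus metrics onto LiteLLM's CustomLogger base class. For reference, a minimal sketch of that pattern using only the documented CustomLogger hooks; the LatencyLogger class and its print-based "metric" are illustrative, not the shipped Prometheus logger:

```python
# Minimal sketch of the CustomLogger pattern the Prometheus refactor builds on.
# LatencyLogger and its print-based "metric" are illustrative only.
import litellm
from litellm.integrations.custom_logger import CustomLogger

class LatencyLogger(CustomLogger):
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        # start_time/end_time are datetimes supplied by LiteLLM per call.
        latency = (end_time - start_time).total_seconds()
        print(f"model={kwargs.get('model')} latency={latency:.3f}s")

# Register the logger so completion calls report through it.
litellm.callbacks = [LatencyLogger()]
```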



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.6.dev1
```
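
Once the container is running, the proxy exposes an OpenAI-compatible API on port 4000. A minimal sketch of calling it with the openai Python client; the model name and dummy key are illustrative and depend on your proxy config:

```python
# Hedged sketch: calling the LiteLLM proxy with the OpenAI client.
# The "sk-1234" key and model name are illustrative; they depend on your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```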



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 157.28893681111057 | 6.319769409590888 | 0.0 | 1890 | 0 | 112.02887599995393 | 1747.6605340000333 |
| Aggregated | Passed ✅ | 130.0 | 157.28893681111057 | 6.319769409590888 | 0.0 | 1890 | 0 | 112.02887599995393 | 1747.6605340000333 |

v1.43.6-stable
What's Changed
* fix(utils.py): set max_retries = num_retries, if given by krrishdholakia in https://github.com/BerriAI/litellm/pull/5143
* fix(litellm_logging.py): fix calling success callback w/ stream_options true by krrishdholakia in https://github.com/BerriAI/litellm/pull/5145


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.5...v1.43.6-stable
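
PR #5143 forwards a user-supplied num_retries to the underlying client's max_retries. A minimal sketch of passing it on a completion call; the model name is illustrative:

```python
# Hedged sketch: num_retries, which PR #5143 forwards to max_retries.
# The model name is illustrative.
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    num_retries=3,  # retry up to 3 times on transient failures
)
print(response.choices[0].message.content)
```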



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.6-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 130.1134297451799 | 6.418269735355047 | 0.0 | 1919 | 0 | 97.57446099996514 | 934.8782779999851 |
| Aggregated | Passed ✅ | 110.0 | 130.1134297451799 | 6.418269735355047 | 0.0 | 1919 | 0 | 97.57446099996514 | 934.8782779999851 |

v1.43.5

We're launching LLM API outage tracking in LiteLLM 1.43.5 📈 Start here: https://docs.litellm.ai/docs/proxy/prometheus
🪨 [Fix] Support for translating tool call names for AWS Bedrock

✨ UI: add support for adding Cohere embedding models

💵 Added cost tracking for Cohere embedding models (see the sketch below)

🛠️ [Feat] v2 Prometheus alerting for deployment outage / healthy / partial outage states

🪢 [Feat-Langfuse] log VertexAI Grounding Metadata as Spans
![Group 5948](https://github.com/user-attachments/assets/3fed2968-e39b-425c-82d3-38f65e5540f0)
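
With cost tracking now covering Cohere embedding models, the usual completion_cost helper should report a price for embedding responses too. A minimal sketch, assuming a COHERE_API_KEY in the environment; the model name is illustrative:

```python
# Hedged sketch: cost tracking for a Cohere embedding call.
# Assumes COHERE_API_KEY is set; the model name is illustrative.
import litellm

response = litellm.embedding(
    model="cohere/embed-english-v3.0",
    input=["hello world"],
)
# completion_cost() looks up the per-token pricing added in this release.
cost = litellm.completion_cost(completion_response=response)
print(f"embedding cost: ${cost:.8f}")
```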


What's Changed
* feat: hash prompt when caching by prd-tuong-nguyen in https://github.com/BerriAI/litellm/pull/5105
* feat: set max_internal_budget for user w/ sso by krrishdholakia in https://github.com/BerriAI/litellm/pull/5120
* Litellm sso team member add by krrishdholakia in https://github.com/BerriAI/litellm/pull/5129
* [Feat] Add pricing for cohere embedding models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5137
* [Feat] v2 prometheus deployment outage, healthy, partial outage alerting by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5134
* ui allow adding cohere models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5136
* Feat - Translate openai function names to bedrock converse schema by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5138
* [Feat-Langfuse] log VertexAI Grounding Metadata as Spans by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5139
* [Fix] Place bedrock modified tool call name in output by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5144

New Contributors
* prd-tuong-nguyen made their first contribution in https://github.com/BerriAI/litellm/pull/5105

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.4...v1.43.5



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.5
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 165.77518946539055 | 6.278079718762026 | 0.0 | 1878 | 0 | 111.6302299999461 | 1672.4383819999957 |
| Aggregated | Passed ✅ | 140.0 | 165.77518946539055 | 6.278079718762026 | 0.0 | 1878 | 0 | 111.6302299999461 | 1672.4383819999957 |

v1.43.4

✨ Today we're launching support for Gemini Context Caching on LiteLLM Proxy. Start here: https://docs.litellm.ai/docs/providers/vertex#context-caching

🔥 Fix UI - Easily add Groq models

⚡️ Admin UI - Azure OpenAI no longer requires an API version when adding a model

📈 UI - sort providers in alphabetical order on the Models page

🛠️ [Fix-Bug]: Whisper not working (see the sketch below)

📈 Fix: handle the case where the service logger has no prometheusService attribute

![Group 5940](https://github.com/user-attachments/assets/b3df80ab-5ad0-4f10-aa23-508e24e9f32c)
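
With the Whisper fix, audio transcription should route through LiteLLM's transcription API again. A minimal sketch, assuming an OPENAI_API_KEY in the environment and a local audio file; both are illustrative:

```python
# Hedged sketch: audio transcription through LiteLLM after the Whisper fix.
# Assumes OPENAI_API_KEY is set and "audio.mp3" exists; both are illustrative.
import litellm

with open("audio.mp3", "rb") as audio_file:
    transcript = litellm.transcription(model="whisper-1", file=audio_file)
print(transcript.text)
```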


What's Changed
* fix handle case when service logger has no attribute prometheusService by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5115
* [Feat-Proxy] Add Support for VertexAI context caching by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5119
* [Fix-Bug]: Whisper is broken by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5114
* fix(user_api_key_auth.py): Fix issue with key auth w/ user not in db by krrishdholakia in https://github.com/BerriAI/litellm/pull/5117
* UI add groq models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5125
* ui show litellm model name by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5123
* Admin UI - add mistral ai by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5126
* Admin UI - Azure OpenAI dont require api version azure by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5127
* UI - sort providers in alphabetical order by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5128


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.3...v1.43.4



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 161.51749219841636 | 6.333549395955978 | 0.23729921219676753 | 1895 | 71 | 102.82188400003633 | 956.2377719999517 |
| Aggregated | Passed ✅ | 140.0 | 161.51749219841636 | 6.333549395955978 | 0.23729921219676753 | 1895 | 71 | 102.82188400003633 | 956.2377719999517 |

v1.43.4.dev5

What's Changed
* feat: hash prompt when caching by prd-tuong-nguyen in https://github.com/BerriAI/litellm/pull/5105
* feat: set max_internal_budget for user w/ sso by krrishdholakia in https://github.com/BerriAI/litellm/pull/5120
* Litellm sso team member add by krrishdholakia in https://github.com/BerriAI/litellm/pull/5129
* [Feat] Add pricing for cohere embedding models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5137
* [Feat] v2 prometheus deployment outage, healthy, partial outage alerting by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5134
* ui allow adding cohere models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5136

New Contributors
* prd-tuong-nguyen made their first contribution in https://github.com/BerriAI/litellm/pull/5105

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.4...v1.43.4.dev5



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.4.dev5
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 155.9265892127544 | 6.39414481763279 | 0.0 | 1913 | 0 | 104.33059599995431 | 2708.094066000001 |
| Aggregated | Passed ✅ | 130.0 | 155.9265892127544 | 6.39414481763279 | 0.0 | 1913 | 0 | 104.33059599995431 | 2708.094066000001 |

v1.43.3

🚨 Unstable release: an error with key auth when the user is not in the DB has been detected. A fix is in progress. Follow along here: https://github.com/BerriAI/litellm/issues/5111

What's Changed
* feat(router.py): allows /chat/completion endpoint to work for request prioritization calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/5101
* fix(user_api_key_auth.py): respect team budgets over user budget, if key belongs to team by krrishdholakia in https://github.com/BerriAI/litellm/pull/5099



**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.2...v1.43.3
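
PR #5101 lets request prioritization flow through the /chat/completions path. A minimal sketch of a prioritized call through the Router, assuming the scheduler's priority parameter behaves as in the LiteLLM scheduler docs (lower = more urgent); the model alias and underlying model are illustrative:

```python
# Hedged sketch: request prioritization via the LiteLLM Router scheduler.
# The model alias and underlying model are illustrative.
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo"},
        }
    ]
)

async def main():
    # priority=0 is most urgent; higher values wait longer in the queue.
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        priority=0,
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```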



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.3
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 110.0 | 136.2897277365932 | 6.354799579736435 | 6.354799579736435 | 1902 | 1902 | 90.97570500000529 | 2509.1231650000054 |
| Aggregated | Failed ❌ | 110.0 | 136.2897277365932 | 6.354799579736435 | 6.354799579736435 | 1902 | 1902 | 90.97570500000529 | 2509.1231650000054 |

v1.43.3-dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.3...v1.43.3-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.3-dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 66 | 79.86835958697893 | 6.566746890939233 | 6.566746890939233 | 1966 | 1966 | 50.02091200003633 | 2825.0938680000104 |
| Aggregated | Failed ❌ | 66 | 79.86835958697893 | 6.566746890939233 | 6.566746890939233 | 1966 | 1966 | 50.02091200003633 | 2825.0938680000104 |
