LiteLLM

Latest version: v1.52.14




1.42.5

Not secure
🔥 We're launching filtering LLMs by provider and max_tokens on [https://models.litellm.ai](https://models.litellm.ai) 👉 View cost and max_tokens for 200+ LLMs ([LiteLLM on X](https://x.com/LiteLLM))

![litellm_filters](https://github.com/user-attachments/assets/bafc0b94-dd0f-49ef-9368-ab63b52695fb)
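
For quick local lookups, here is a minimal sketch (assuming a recent `litellm` install; the model name is only an example) of reading the same cost and max_tokens metadata from the model map bundled with the package:

```python
import litellm

# litellm ships a local copy of model_prices_and_context_window.json as
# `litellm.model_cost`; models.litellm.ai presents this same data.
info = litellm.model_cost.get("gpt-3.5-turbo", {})
print("max_tokens:", info.get("max_tokens"))
print("input cost per token:", info.get("input_cost_per_token"))

# Convenience helper for just the context limit.
print("get_max_tokens:", litellm.get_max_tokens("gpt-3.5-turbo"))
```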

🔭 [Feat] - log writing BatchSpendUpdate events on OTEL

🔑 Proxy Enterprise - security - check max request size

🛡️ [Feat Enterprise] - check max response size

✅ Feat Enterprise - set max request / response size UI



What's Changed
* feat(ollama_chat.py): support ollama tool calling by krrishdholakia in https://github.com/BerriAI/litellm/pull/4918 (a usage sketch follows this list)
* fix(proxy_server.py): fix get secret for environment_variables by krrishdholakia in https://github.com/BerriAI/litellm/pull/4907
* Fix Datadog JSON serialization by idris in https://github.com/BerriAI/litellm/pull/4920
* [Fix] using airgapped license for Enterprise by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4921
* [Feat] - log writing BatchSpendUpdate events on OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4924
* Fix Canary error with `docusaurus start` by yujonglee in https://github.com/BerriAI/litellm/pull/4919
* [Feature]: Allow using custom and on-demand models in Fireworks AI + update data to model_prices_and_context_window.json by danielbichuetti in https://github.com/BerriAI/litellm/pull/4730
* Proxy Enterprise - security - check max request size by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4926
* [Feat Enterprise] - check max response size by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4928
* Feat Enterprise - set max request / response size UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4927
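
The Ollama tool-calling support above (PR 4918) accepts OpenAI-format `tools`; a minimal sketch, assuming a local Ollama server with a tool-capable model pulled (the model name is illustrative):

```python
import litellm

# Illustrative tool definition in OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Assumes `ollama serve` is running locally and the model has been pulled.
response = litellm.completion(
    model="ollama_chat/llama3.1",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```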


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.4...v1.42.5



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.5
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 149.07872144345131 | 6.351580011280877 | 0.0 | 1901 | 0 | 107.79980099999875 | 1698.2656079999856 |
| Aggregated | Passed ✅ | 130.0 | 149.07872144345131 | 6.351580011280877 | 0.0 | 1901 | 0 | 107.79980099999875 | 1698.2656079999856 |

v1.42.4-stable
What's Changed
* feat(ollama_chat.py): support ollama tool calling by krrishdholakia in https://github.com/BerriAI/litellm/pull/4918
* fix(proxy_server.py): fix get secret for environment_variables by krrishdholakia in https://github.com/BerriAI/litellm/pull/4907


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.4...v1.42.4-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.4-stable
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 133.85920054106 | 6.428664812430578 | 0.0 | 1924 | 0 | 95.5478369999696 | 1841.4105720000293 |
| Aggregated | Passed ✅ | 110.0 | 133.85920054106 | 6.428664812430578 | 0.0 | 1924 | 0 | 95.5478369999696 | 1841.4105720000293 |

1.42.5dev2

What's Changed
* [Feat-Proxy] - Langfuse log /audio/transcription on langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4939
* Fix: 4942. Remove verbose logging when exception can be handled by dleen in https://github.com/BerriAI/litellm/pull/4943
* fixes: 4947 Bedrock context exception does not have a response by dleen in https://github.com/BerriAI/litellm/pull/4948
* [Feat] Bedrock add support for Bedrock Guardrails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4946
* build(deps): bump fast-xml-parser from 4.3.2 to 4.4.1 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/4950
* ui - allow entering custom model names for all providers (azure ai, openai, etc) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4951
* Fix bug in cohere_chat.py by pat-cohere in https://github.com/BerriAI/litellm/pull/4949
* Feat UI - allow using custom header for litellm api key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4916
* [Feat] Add `litellm.create_fine_tuning_job()`, `litellm.list_fine_tuning_jobs()`, `litellm.cancel_fine_tuning_job()` finetuning endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4956 (a usage sketch follows this list)
* [Feature]: GET /v1/batches to return list of batches by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4969
* [Fix-Proxy] ProxyException code as str - Make OpenAI Compatible by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4973
* Proxy Admin UI - switch off console logs in production mode by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4975
* feat(huggingface_restapi.py): Support multiple hf embedding types + async hf embeddings by krrishdholakia in https://github.com/BerriAI/litellm/pull/4976
* fix(cohere.py): support async cohere embedding calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4977
* fix(utils.py): fix model registration to model cost map by krrishdholakia in https://github.com/BerriAI/litellm/pull/4979
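
A hedged sketch of the new fine-tuning helpers referenced above (PR 4956); the training-file id is a placeholder, and parameter names follow the OpenAI fine-tuning API, so check the LiteLLM docs for exact signatures:

```python
import litellm

# Placeholder training file id for a file already uploaded to OpenAI.
ft_job = litellm.create_fine_tuning_job(
    model="gpt-3.5-turbo",
    training_file="file-abc123",
    custom_llm_provider="openai",
)

# List recent jobs, then cancel the one we just created.
print(litellm.list_fine_tuning_jobs(custom_llm_provider="openai", limit=5))
litellm.cancel_fine_tuning_job(
    fine_tuning_job_id=ft_job.id,
    custom_llm_provider="openai",
)
```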

New Contributors
* pat-cohere made their first contribution in https://github.com/BerriAI/litellm/pull/4949

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.5-dev1...v1.42.5-dev2



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.5-dev2
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 80 | 94.54394269072652 | 6.341772574201341 | 0.0 | 1898 | 0 | 67.73792799998546 | 1524.3011940000315 |
| Aggregated | Passed ✅ | 80 | 94.54394269072652 | 6.341772574201341 | 0.0 | 1898 | 0 | 67.73792799998546 | 1524.3011940000315 |

1.42.5dev1

What's Changed
* feat(vertex_ai_partner.py): Vertex AI Mistral Support by krrishdholakia in https://github.com/BerriAI/litellm/pull/4925 (a usage sketch follows this list)
* Support vertex mistral cost tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/4929
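
A minimal sketch of calling a Vertex AI partner Mistral model via the change above; the model id, project, and location are illustrative, and Vertex AI credentials are assumed to be configured (e.g. application-default credentials):

```python
import litellm

response = litellm.completion(
    model="vertex_ai/mistral-large@2407",   # illustrative partner-model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    vertex_project="my-gcp-project",        # placeholder project
    vertex_location="us-central1",          # placeholder region
)
print(response.choices[0].message.content)
```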


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.5...v1.42.5-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.5-dev1
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 169.33167536684496 | 6.252815823477715 | 0.0 | 1870 | 0 | 114.0791189999959 | 2588.271402000032 |
| Aggregated | Passed ✅ | 140.0 | 169.33167536684496 | 6.252815823477715 | 0.0 | 1870 | 0 | 114.0791189999959 | 2588.271402000032 |

v1.42.5-stable
What's Changed
* Fix Datadog JSON serialization by idris in https://github.com/BerriAI/litellm/pull/4920
* [Fix] using airgapped license for Enterprise by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4921
* [Feat] - log writing BatchSpendUpdate events on OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4924
* Fix Canary error with `docusaurus start` by yujonglee in https://github.com/BerriAI/litellm/pull/4919
* [Feature]: Allow using custom and on-demand models in Fireworks AI + update data to model_prices_and_context_window.json by danielbichuetti in https://github.com/BerriAI/litellm/pull/4730
* Proxy Enterprise - security - check max request size by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4926
* [Feat Enterprise] - check max response size by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4928
* Feat Enterprise - set max request / response size UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4927


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.4-stable...v1.42.5-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.5-stable
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 161.68211911395028 | 6.545409328079534 | 0.0 | 1957 | 0 | 110.24948799996537 | 2273.3069750000254 |
| Aggregated | Passed ✅ | 130.0 | 161.68211911395028 | 6.545409328079534 | 0.0 | 1957 | 0 | 110.24948799996537 | 2273.3069750000254 |

1.42.4

Not secure
What's Changed
* [Docs] Better search experience with Canary by yujonglee in https://github.com/BerriAI/litellm/pull/4893
* Fixed tool_call for Helicone integration by maamalama in https://github.com/BerriAI/litellm/pull/4869
* Fix Datadog logging attributes by idris in https://github.com/BerriAI/litellm/pull/4909
* [Proxy-Fix + Test] - /batches endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4914
* [Proxy-Fix] - raise more descriptive errors when crossing tpm / rpm limits on keys, user, global limits by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4913
* [Feat] Link to https://models.litellm.ai/ on Swagger docs and docs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4917

New Contributors
* idris made their first contribution in https://github.com/BerriAI/litellm/pull/4909

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.3...v1.42.4



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.4
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 99 | 137.24637847135392 | 6.41608880575569 | 0.0 | 1920 | 0 | 81.7409160000011 | 2139.0533030000256 |
| Aggregated | Passed ✅ | 99 | 137.24637847135392 | 6.41608880575569 | 0.0 | 1920 | 0 | 81.7409160000011 | 2139.0533030000256 |

v1.42.3-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.3...v1.42.3-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.3-stable
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 140.6355745616567 | 6.449369490323953 | 0.0 | 1930 | 0 | 105.54520000005141 | 523.7298220000071 |
| Aggregated | Passed ✅ | 120.0 | 140.6355745616567 | 6.449369490323953 | 0.0 | 1930 | 0 | 105.54520000005141 | 523.7298220000071 |

1.42.3

Not secure
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.2...v1.42.3



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.3
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 151.58794230154533 | 6.272264971082291 | 0.0 | 1877 | 0 | 108.66291499996805 | 1878.6819829999786 |
| Aggregated | Passed ✅ | 130.0 | 151.58794230154533 | 6.272264971082291 | 0.0 | 1877 | 0 | 108.66291499996805 | 1878.6819829999786 |

v1.42.2-stable
What's Changed
* [Feat] - Add `mistral/mistral-large 2` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4873
* [Fix] OpenAI STT, TTS Health Checks on LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4889
* docs - add info about routing strategy on load balancing docs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4890
* feat(custom_llm.py): Support Custom LLM Handlers by krrishdholakia in https://github.com/BerriAI/litellm/pull/4887
* Add Single-Token predictions support for Replicate by fracapuano in https://github.com/BerriAI/litellm/pull/4879
* Add mistral.mistral-large-2407-v1:0 on Amazon Bedrock by Manouchehri in https://github.com/BerriAI/litellm/pull/4884 (a usage sketch follows this list)
* Add Llama 3.1 405b & Tool Calling for Amazon Bedrock by Manouchehri in https://github.com/BerriAI/litellm/pull/4883
* feat(auth_check.py): support using redis cache for team objects by krrishdholakia in https://github.com/BerriAI/litellm/pull/4870
* fix logfire - don't load_dotenv by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4894
* Docs Proxy - add example usage with mistral SDK with Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4896
* Docs add example using anthropic sdk with litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4897
* [Feat] Support /* for multiple providers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4891
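
A minimal sketch of calling the newly added Bedrock models (PRs 4884 and 4883), assuming AWS credentials and region are already configured in the environment:

```python
import litellm

# Bedrock model ids referenced in the PRs above.
for model_id in [
    "bedrock/mistral.mistral-large-2407-v1:0",
    "bedrock/meta.llama3-1-405b-instruct-v1:0",
]:
    response = litellm.completion(
        model=model_id,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(model_id, "->", response.choices[0].message.content)
```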

New Contributors
* fracapuano made their first contribution in https://github.com/BerriAI/litellm/pull/4879

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.1...v1.42.2-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.2-stable
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 149.75578451616298 | 6.41284316524727 | 0.0 | 1918 | 0 | 106.08121300003859 | 1931.1223589999997 |
| Aggregated | Passed ✅ | 120.0 | 149.75578451616298 | 6.41284316524727 | 0.0 | 1918 | 0 | 106.08121300003859 | 1931.1223589999997 |

1.42.2

Not secure
🔥 LiteLLM Proxy now works with the Anthropic Python SDK and the Mistral AI Python SDK 👉 View the [reference here](https://docs.litellm.ai/docs/proxy/user_keys)

🐛 Fix logfire - don't load_dotenv()

📚 Docs Proxy - add example usage with mistral SDK with Proxy

📚 Docs add example using anthropic sdk with litellm proxy

✨ [Feat] Support proxying all models from providers without adding them to config.yaml

[Feat] - Add mistral/mistral-large 2

🔧 [Fix] OpenAI STT, TTS Health Checks on LiteLLM Proxy

📚 Docs - add info about routing strategy on load balancing docs
![codeimage-snippet_26 (2)](https://github.com/user-attachments/assets/ac51fcf4-ebba-4f90-834f-6829d6a2a3a5)
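
A minimal sketch of pointing the Anthropic Python SDK at a locally running LiteLLM Proxy; the port, virtual key, and model name are assumptions (the model must be configured on the proxy), so see the linked reference for the exact setup:

```python
from anthropic import Anthropic

# Proxy URL and virtual key are placeholders for your own deployment.
client = Anthropic(
    base_url="http://localhost:4000",
    api_key="sk-1234",
)

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # must match a model on the proxy
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello from the Anthropic SDK"}],
)
print(message.content)
```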


What's Changed
* [Feat] - Add `mistral/mistral-large 2` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4873
* [Fix] OpenAI STT, TTS Health Checks on LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4889
* docs - add info about routing strategy on load balancing docs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4890
* feat(custom_llm.py): Support Custom LLM Handlers by krrishdholakia in https://github.com/BerriAI/litellm/pull/4887 (a usage sketch follows this list)
* Add Single-Token predictions support for Replicate by fracapuano in https://github.com/BerriAI/litellm/pull/4879
* Add mistral.mistral-large-2407-v1:0 on Amazon Bedrock by Manouchehri in https://github.com/BerriAI/litellm/pull/4884
* Add Llama 3.1 405b & Tool Calling for Amazon Bedrock by Manouchehri in https://github.com/BerriAI/litellm/pull/4883
* feat(auth_check.py): support using redis cache for team objects by krrishdholakia in https://github.com/BerriAI/litellm/pull/4870
* fix logfire - don't load_dotenv by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4894
* Docs Proxy - add example usage with mistral SDK with Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4896
* Docs add example using anthropic sdk with litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4897
* [Feat] Support /* for multiple providers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4891
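
A hedged sketch of the custom LLM handler interface (PR 4887, referenced above); the provider name and handler are hypothetical, and `mock_response` stands in for a real backend call:

```python
import litellm
from litellm import CustomLLM


class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # A real handler would call your own backend here; mock_response
        # returns a canned reply in OpenAI response format.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hello"}],
            mock_response="Hi from my custom handler!",
        )


# Register the handler under a provider prefix, then route to it by model name.
litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

resp = litellm.completion(
    model="my-custom-llm/any-model-name",
    messages=[{"role": "user", "content": "Hello world!"}],
)
print(resp.choices[0].message.content)
```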

New Contributors
* fracapuano made their first contribution in https://github.com/BerriAI/litellm/pull/4879

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.1...v1.42.2



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.2
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 147.70873449153422 | 6.316886999957656 | 0.0 | 1890 | 0 | 105.04327400002467 | 816.6183410000372 |
| Aggregated | Passed ✅ | 120.0 | 147.70873449153422 | 6.316886999957656 | 0.0 | 1890 | 0 | 105.04327400002467 | 816.6183410000372 |

