LiteLLM

Latest version: v1.52.14


1.41.25

Not secure
What's Changed
* Add medlm models to cost map by skucherlapati in https://github.com/BerriAI/litellm/pull/4766
* feat(aporio_ai.py): support aporio ai prompt injection for chat completion requests by krrishdholakia in https://github.com/BerriAI/litellm/pull/4762
* Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook by vingiarrusso in https://github.com/BerriAI/litellm/pull/4729
* feat(proxy): support hiding health check details by fgreinacher in https://github.com/BerriAI/litellm/pull/4772
* [Feat] Add OpenAI GPT-4o mini by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4776
* [Feat] run guardrail moderation check on embedding by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4764
* [FEAT] - add Google AI Studio: gemini-gemma-2-27b-it, gemini-gemma-2-9b-it by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4782
* Docs - add `LITELLM_SALT_KEY` to docker compose by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4779
* [Feat-Enterprise] Use free/paid tiers for Virtual Keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4786
* [Feat] Router - Route based on free/paid tier by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4785
* feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4784
* [Fix] Admin UI - make ui session last 12 hours by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4787
* [Feat-Router] - Tag based routing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4789
* Alias `/health/liveliness` as `/health/liveness` by msabramo in https://github.com/BerriAI/litellm/pull/4781 (see the probe example after this list)
* Removed weird replicate model from model prices list by areibman in https://github.com/BerriAI/litellm/pull/4783
* Add missing `num_gpu` ollama configuration parameter by titusz in https://github.com/BerriAI/litellm/pull/4773
* docs(docusaurus.config.js): fix docusaurus base url by krrishdholakia in https://github.com/BerriAI/litellm/pull/4287
* fix ui - make default session 24 hours by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4791
* UI redirect logout after session, only show 1 error message by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4792
* docs - show curl examples of how to control cache on / off per request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4793
* fix - add fix to update spend logs discrepancy for team spend by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4794
* fix health check - make sure one failing deployment does not stop the health check by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4798
* ui - rename api_key -> virtual key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4797
* feat(bedrock_httpx.py): add ai21 jamba instruct as bedrock model by krrishdholakia in https://github.com/BerriAI/litellm/pull/4788
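
With the `/health/liveliness` alias in place, both spellings of the liveness probe should respond identically on a locally running proxy (port 4000 assumed):

```
# new, correctly spelled route
curl -s http://localhost:4000/health/liveness
# original spelling, kept for backwards compatibility
curl -s http://localhost:4000/health/liveliness
```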

New Contributors
* vingiarrusso made their first contribution in https://github.com/BerriAI/litellm/pull/4729
* fgreinacher made their first contribution in https://github.com/BerriAI/litellm/pull/4772
* areibman made their first contribution in https://github.com/BerriAI/litellm/pull/4783
* titusz made their first contribution in https://github.com/BerriAI/litellm/pull/4773

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.24...v1.41.25



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.25
```
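
Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal smoke test, assuming a model named `gpt-4o-mini` has already been added (e.g. via the Admin UI, since `STORE_MODEL_IN_DB=True`) and `sk-1234` stands in for a valid virtual key:

```
# placeholder key and model name; substitute your own
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}]
      }'
```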



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 133.07 | 6.41 | 0.0 | 1918 | 0 | 96.79 | 3914.96 |
| Aggregated | Passed ✅ | 110.0 | 133.07 | 6.41 | 0.0 | 1918 | 0 | 96.79 | 3914.96 |

1.41.24

Not secure
What's Changed
* Add token counting for OpenAI tools/tool_choice by pamelafox in https://github.com/BerriAI/litellm/pull/4716
* [UI] Clear cookies on logging out by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4743
* ui - allow admin viewer to view caching analytics page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4741
* Docs - How to invite users to view usage, caching by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4744
* [Fix-Proxy] Slack Alerting Store Request Status in memory by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4732
* feat - use custom api key header name when using litellm virtual keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4739 (see the sketch after this list)
* fix: enables batch embedding support for triton by davidschuler-8451 in https://github.com/BerriAI/litellm/pull/4736
* [Fix + Test] anthropic raise litellm.AuthenticationError when no Anthropic API Key provided by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4747
* fix(proxy/utils.py): fix failure logging for rejected requests by krrishdholakia in https://github.com/BerriAI/litellm/pull/4742
* fix(factory.py): use stronger typing for anthropic translation by krrishdholakia in https://github.com/BerriAI/litellm/pull/4746
* ui-fix: clear token when user logging in by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4755
* [Fix] Langsmith - Don't Log Provider API Keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4754
* [Feat] Proxy + Langsmith - Log user_api_key_user_id, user_api_key_team_alias by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4757
* fix(litellm_logging.py): don't run async caching for sync streaming calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4756
* [Feat] Use Async Httpx client for langsmith logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4758
* add new groq tool models by themrzmaster in https://github.com/BerriAI/litellm/pull/4751
* Helicone headers to metadata by maamalama in https://github.com/BerriAI/litellm/pull/4763
* Add Medlm cost calc by skucherlapati in https://github.com/BerriAI/litellm/pull/4760
* Fix failing tests on PR-4760 by skucherlapati in https://github.com/BerriAI/litellm/pull/4765
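
For the custom API key header change above, a sketch of what a request could look like, assuming the proxy has been configured to read virtual keys from a header named `X-Litellm-Key` (the header name, key, and model are placeholders, not defaults):

```
# hypothetical custom header name configured on the proxy;
# the default remains the standard Authorization header
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-Litellm-Key: sk-my-virtual-key" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```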

New Contributors
* pamelafox made their first contribution in https://github.com/BerriAI/litellm/pull/4716
* davidschuler-8451 made their first contribution in https://github.com/BerriAI/litellm/pull/4736
* skucherlapati made their first contribution in https://github.com/BerriAI/litellm/pull/4760

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.23...v1.41.24



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.24
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 129.16 | 6.44 | 0.0 | 1929 | 0 | 86.27 | 2671.39 |
| Aggregated | Passed ✅ | 100.0 | 129.16 | 6.44 | 0.0 | 1929 | 0 | 86.27 | 2671.39 |

1.41.24.dev1

What's Changed
* Add medlm models to cost map by skucherlapati in https://github.com/BerriAI/litellm/pull/4766
* feat(aporio_ai.py): support aporio ai prompt injection for chat completion requests by krrishdholakia in https://github.com/BerriAI/litellm/pull/4762
* Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook by vingiarrusso in https://github.com/BerriAI/litellm/pull/4729
* feat(proxy): support hiding health check details by fgreinacher in https://github.com/BerriAI/litellm/pull/4772
* [Feat] Add OpenAI GPT-4o mini by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4776
* [Feat] run guardrail moderation check on embedding by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4764
* [FEAT] - add Google AI Studio: gemini-gemma-2-27b-it, gemini-gemma-2-9b-it by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4782
* Docs - add `LITELLM_SALT_KEY` to docker compose by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4779
* [Feat-Enterprise] Use free/paid tiers for Virtual Keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4786
* [Feat] Router - Route based on free/paid tier by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4785
* feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4784
* [Fix] Admin UI - make ui session last 12 hours by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4787
* [Feat-Router] - Tag based routing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4789
* Alias `/health/liveliness` as `/health/liveness` by msabramo in https://github.com/BerriAI/litellm/pull/4781
* Removed weird replicate model from model prices list by areibman in https://github.com/BerriAI/litellm/pull/4783
* Add missing `num_gpu` ollama configuration parameter by titusz in https://github.com/BerriAI/litellm/pull/4773
* docs(docusaurus.config.js): fix docusaurus base url by krrishdholakia in https://github.com/BerriAI/litellm/pull/4287
* fix ui - make default session 24 hours by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4791
* UI redirect logout after session, only show 1 error message by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4792
* docs - show curl examples of how to control cache on / off per request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4793 (one such example after this list)
* fix - add fix to update spend logs discrepancy for team spend by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4794
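
For per-request cache control, a hedged sketch of skipping the cache for a single call, using the `cache` request parameter shape described in the LiteLLM caching docs (key and model are placeholders):

```
# "no-cache": true asks the proxy to bypass the cache for this request
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
        "cache": {"no-cache": true}
      }'
```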

New Contributors
* vingiarrusso made their first contribution in https://github.com/BerriAI/litellm/pull/4729
* fgreinacher made their first contribution in https://github.com/BerriAI/litellm/pull/4772
* areibman made their first contribution in https://github.com/BerriAI/litellm/pull/4783
* titusz made their first contribution in https://github.com/BerriAI/litellm/pull/4773

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.24...v1.41.24.dev1



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.24.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 158.58 | 6.41 | 0.0 | 1918 | 0 | 113.08 | 689.74 |
| Aggregated | Passed ✅ | 130.0 | 158.58 | 6.41 | 0.0 | 1918 | 0 | 113.08 | 689.74 |

1.41.23

Not secure
What's Changed
* updates cost tracking example code in docs to resolve errors by djliden in https://github.com/BerriAI/litellm/pull/4714
* Admin UI - Stack Cache hits vs misses on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4715
* [Fix] /audio/transcription - don't write to the local file system by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4719
* [Feat] - set max file size on /audio/transcriptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4724 (see the request sketch after this list)
* [Feature]: Add Fireworks AI data to model_prices_and_context_window.json by danielbichuetti in https://github.com/BerriAI/litellm/pull/4721
* fix(utils.py): allow passing dynamic api base for openai-compatible endpoints (Fireworks AI, etc.) by krrishdholakia in https://github.com/BerriAI/litellm/pull/4723
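
Both transcription changes above touch the OpenAI-compatible `/audio/transcriptions` route; a minimal multipart request, with the key, `whisper-1`, and the file name standing in for whatever is actually configured:

```
# multipart upload; oversized files should now be rejected up front
# instead of being written to the local file system
curl http://localhost:4000/audio/transcriptions \
  -H "Authorization: Bearer sk-1234" \
  -F model="whisper-1" \
  -F file="@sample.mp3"
```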


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.22...v1.41.23



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.23
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 154.49 | 6.43 | 0.0 | 1923 | 0 | 114.97 | 2915.00 |
| Aggregated | Passed ✅ | 130.0 | 154.49 | 6.43 | 0.0 | 1923 | 0 | 114.97 | 2915.00 |

1.41.22

Not secure
What's Changed
* feat mem utils debugging return size of in memory cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4705
* [Fix Memory Usage] - only use per request tracking if slack alerting is being used by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4703
* [Debug-Utils] Add some useful memory usage debugging utils by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4704
* Return `retry-after` header for rate limited requests by krrishdholakia in https://github.com/BerriAI/litellm/pull/4706 (see the header check after this list)
* add azure ai pricing + token info (mistral/jamba instruct/llama3) by krrishdholakia in https://github.com/BerriAI/litellm/pull/4702
* Allow setting `logging_only` in guardrails config by krrishdholakia in https://github.com/BerriAI/litellm/pull/4696
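
To observe the new `retry-after` behaviour, `-i` prints response headers; a request that trips a rate limit should come back as an HTTP 429 carrying a `retry-after` header (key and model are placeholders):

```
# -i includes response headers in the output; on a rate-limited
# request, expect HTTP 429 plus a retry-after header
curl -i http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```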


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.21...v1.41.22



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.22
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 170.62 | 6.34 | 0.0 | 1895 | 0 | 122.10 | 1263.20 |
| Aggregated | Passed ✅ | 150.0 | 170.62 | 6.34 | 0.0 | 1895 | 0 | 122.10 | 1263.20 |

1.41.22.dev4

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.22...v1.41.22.dev4



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.22.dev4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 134.15 | 6.46 | 0.0 | 1932 | 0 | 98.79 | 1350.26 |
| Aggregated | Passed ✅ | 110.0 | 134.15 | 6.46 | 0.0 | 1932 | 0 | 98.79 | 1350.26 |
