LiteLLM

Latest version: v1.52.14


1.40.1

Not secure
What's Changed
* [Feat] return `num_retries` and `max_retries` in exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3962 (example after this list)
* [FEAT]- set custom AllowedFailsPolicy on litellm.Router by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3963
* feat(proxy_server.py): log litellm api version to langfuse by krrishdholakia in https://github.com/BerriAI/litellm/pull/3969
* feat - add batches api to docs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3973
* [Fix] Traceloop / OTEL logging fixes + easier docs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3970
* add dall-e 3 required comment by rhtenhove in https://github.com/BerriAI/litellm/pull/3984
* [Feat] Log Raw Request from LiteLLM on Langfuse - when `"log_raw_request": true` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3983
* [Feat] Admin UI - Multi-Select Tags, Viewing spend by tags by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3994
* [Feat] UI - Filter model latency by API Key Alias by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3995
* feat(assistants/main.py): Azure Assistants API support by krrishdholakia in https://github.com/BerriAI/litellm/pull/3996
* [Admin UI] Filter Model Latency by Customer, API Key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3997
* fix(router.py): use `litellm.request_timeout` as default for router clients by krrishdholakia in https://github.com/BerriAI/litellm/pull/3992
* [Doc] - Spend tracking with litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3991
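
A minimal sketch of the new exception fields from PR #3962, assuming they surface as attributes on the raised litellm exception (attribute names taken from the PR title):

```python
import litellm

try:
    litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        num_retries=2,  # ask litellm to retry twice before raising
    )
except Exception as e:
    # Per PR #3962, the retry counts should now ride along on the exception;
    # getattr() hedges in case some exception types lack the attributes.
    print("num_retries:", getattr(e, "num_retries", None))
    print("max_retries:", getattr(e, "max_retries", None))
```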

New Contributors
* rhtenhove made their first contribution in https://github.com/BerriAI/litellm/pull/3984

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.0...v1.40.1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.1
```
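
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A quick smoke test with the OpenAI SDK; the model name and `sk-1234` key below are placeholders for whatever you've configured:

```python
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # the proxy from the docker run above
    api_key="sk-1234",                 # placeholder: your proxy master key
)
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder: any model configured on the proxy
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```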



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 139.78250550967104 | 6.395300383667639 | 0.0 | 1913 | 0 | 95.28932899991105 | 1526.2213239999483 |
| Aggregated | Passed ✅ | 120.0 | 139.78250550967104 | 6.395300383667639 | 0.0 | 1913 | 0 | 95.28932899991105 | 1526.2213239999483 |

1.40.1.dev4

What's Changed
* Add simple OpenTelemetry tracer by yujonglee in https://github.com/BerriAI/litellm/pull/3974
* [FEAT] Add native OTEL logging to LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4010
* [Docs] Use OTEL logging on LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4011
* fix(bedrock): raise nested error response by pharindoko in https://github.com/BerriAI/litellm/pull/3989
* [Feat] Admin UI - Add, Edit all LiteLLM callbacks on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4014
* feat(assistants/main.py): add assistants api streaming support by krrishdholakia in https://github.com/BerriAI/litellm/pull/4012
* feat(utils.py): Support `stream_options` param across all providers by krrishdholakia in https://github.com/BerriAI/litellm/pull/4015 (sketch after this list)
* fix(utils.py): fix cost calculation for openai-compatible streaming object by krrishdholakia in https://github.com/BerriAI/litellm/pull/4009
* [Fix] Admin UI Internal Users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4016
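
For example, the `stream_options` pass-through from PR #4015: with OpenAI it adds a trailing usage chunk to streamed responses, and per the PR other providers should accept the same call:

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
    stream_options={"include_usage": True},  # request a trailing usage chunk
)
for chunk in response:
    # the final chunk should carry `usage` when include_usage is set
    print(chunk)
```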


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.1...v1.40.1.dev4



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.1.dev4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 130.49834083376624 | 6.432223242582805 | 0.0 | 1925 | 0 | 92.76206099997353 | 2155.1117690000297 |
| Aggregated | Passed ✅ | 110.0 | 130.49834083376624 | 6.432223242582805 | 0.0 | 1925 | 0 | 92.76206099997353 | 2155.1117690000297 |

1.40.1.dev2

What's Changed
* Add simple OpenTelemetry tracer by yujonglee in https://github.com/BerriAI/litellm/pull/3974


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.1...v1.40.1.dev2



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.1.dev2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 177.0382996107586 | 6.334561220731733 | 0.0 | 1896 | 0 | 114.13910500004931 | 1784.0317350000134 |
| Aggregated | Passed ✅ | 140.0 | 177.0382996107586 | 6.334561220731733 | 0.0 | 1896 | 0 | 114.13910500004931 | 1784.0317350000134 |

1.40.0

Not secure
What's Changed
* fix: fix streaming with httpx client by krrishdholakia in https://github.com/BerriAI/litellm/pull/3944
* feat(scheduler.py): add request prioritization scheduler by krrishdholakia in https://github.com/BerriAI/litellm/pull/3954 (sketch after this list)
* [FEAT] Perf improvements - litellm.completion / litellm.acompletion - Cache OpenAI client by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3956
* fix(http_handler.py): support verify_ssl=False when using httpx client by krrishdholakia in https://github.com/BerriAI/litellm/pull/3959
* Litellm docker compose start by krrishdholakia in https://github.com/BerriAI/litellm/pull/3961
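
A sketch of the request-prioritization scheduler from PR #3954, assuming it is driven by a `priority` argument on the router's async completion call (lower = more urgent); check the scheduler docs for the exact knobs:

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[{
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo"},
    }],
)

async def main():
    # assumption: `priority` routes this request through the new scheduler
    resp = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        priority=0,  # assumption: 0 = most urgent
    )
    print(resp)

asyncio.run(main())
```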


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.39.6...v1.40.0



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 133.63252197830545 | 6.467733658247951 | 0.0 | 1936 | 0 | 94.77090299998281 | 801.180971000008 |
| Aggregated | Passed ✅ | 120.0 | 133.63252197830545 | 6.467733658247951 | 0.0 | 1936 | 0 | 94.77090299998281 | 801.180971000008 |

1.39.6

Not secure
We're launching team member invites (no SSO required) on v1.39.6 🔥 Invite team members to view LLM usage and spend per service: https://docs.litellm.ai/docs/proxy/ui

👍 [Fix] Cache Vertex AI clients - Major Perf improvement for VertexAI models

✨ Feat - Send invite emails to new users on creation (using `send_invite_email` on /user/new)

💻 UI - allow users to sign in with email/password

🔓 [UI] Admin UI invite links for non-SSO users

✨ PR - [FEAT] Perf improvements - litellm.completion / litellm.acompletion - Cache OpenAI client
![inviting_members_ui](https://github.com/BerriAI/litellm/assets/29436595/6fcb76b2-25a5-4cc9-a784-950b50b37855)


What's Changed
* Fix warnings from pydantic by lj-wego in https://github.com/BerriAI/litellm/pull/3670
* Update pydantic version in CI requirements.txt by lj-wego in https://github.com/BerriAI/litellm/pull/3938
* Allow admin to give invite links to others by krrishdholakia in https://github.com/BerriAI/litellm/pull/3875
* Update model config definition to use v2 style by lj-wego in https://github.com/BerriAI/litellm/pull/3943
* Add OIDC + unit test for bedrock httpx by Manouchehri in https://github.com/BerriAI/litellm/pull/3688
* (fix) Update Mistral model list and prices by alexpeattie in https://github.com/BerriAI/litellm/pull/3945
* feat - `send_invite_email` on /user/new by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3942 (example after this list)
* [UI] Admin UI Invite Links for non SSO users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3950
* [Feat] Admin UI - invite users to view spend by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3952
* UI - allow users to sign in with with email/password by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3953
* feat(proxy_server.py): add assistants api endpoints to proxy server by krrishdholakia in https://github.com/BerriAI/litellm/pull/3936
* [Fix] Cache Vertex AI clients - Perf improvement by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3935
* fix(bedrock): convert botocore credentials when role is assumed by pharindoko in https://github.com/BerriAI/litellm/pull/3939
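
A sketch of the `send_invite_email` flag from PR #3942, assuming a proxy running at localhost:4000 and a placeholder `sk-1234` master key:

```python
import requests

resp = requests.post(
    "http://localhost:4000/user/new",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder master key
    json={
        "user_email": "teammate@example.com",
        "send_invite_email": True,  # emails the new user their invite link
    },
)
print(resp.json())
```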



New Contributors
* lj-wego made their first contribution in https://github.com/BerriAI/litellm/pull/3670
* alexpeattie made their first contribution in https://github.com/BerriAI/litellm/pull/3945
* pharindoko made their first contribution in https://github.com/BerriAI/litellm/pull/3939

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.39.5...v1.39.6



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.39.6
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 78 | 90.37559010674164 | 6.5521693586672445 | 0.0 | 1958 | 0 | 65.34477100001368 | 961.3953589999937 |
| Aggregated | Passed ✅ | 78 | 90.37559010674164 | 6.5521693586672445 | 0.0 | 1958 | 0 | 65.34477100001368 | 961.3953589999937 |

v1.39.5-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.39.5...v1.39.5-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.39.5-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 82 | 98.4437988109365 | 6.414126443845541 | 0.0 | 1920 | 0 | 65.89902199999642 | 1363.2986580000193 |
| Aggregated | Passed ✅ | 82 | 98.4437988109365 | 6.414126443845541 | 0.0 | 1920 | 0 | 65.89902199999642 | 1363.2986580000193 |

1.39.5

Not secure
What's Changed
* fix(router.py): cooldown on 404 errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/3926
* [Feat] LiteLLM Proxy - use enums for user roles by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3927
* UI - View user role on admin ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3930
* [UI] edit user role admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3929
* fix: add missing seed parameter to ollama input 3923 by devdev999 in https://github.com/BerriAI/litellm/pull/3924
* feat(main.py): support openai tts endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/3928 (example after this list)
* [Feat] UI - cleanup editing users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3931
* [Feat- admin UI] Show number of rate limit errors by deployment per day by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3932
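
The new TTS support from PR #3928 mirrors OpenAI's audio speech endpoint; a minimal sketch (the `speech()` call and streaming-to-file follow the OpenAI-style interface, so treat the exact signature as an assumption):

```python
from litellm import speech

# generate audio via OpenAI's tts-1 through litellm and save it to disk
response = speech(
    model="openai/tts-1",
    voice="alloy",
    input="LiteLLM now supports text to speech.",
)
response.stream_to_file("speech.mp3")
```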

New Contributors
* devdev999 made their first contribution in https://github.com/BerriAI/litellm/pull/3924

![pika-1717129556201-1x](https://github.com/BerriAI/litellm/assets/29436595/fee63836-f819-4aaa-acfc-c6eb01bf9cdc)

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.39.4...v1.39.5



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.39.5
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 168.39339172958924 | 6.4252258901831345 | 0.0 | 1923 | 0 | 109.15407800001731 | 1833.3729599999913 |
| Aggregated | Passed ✅ | 130.0 | 168.39339172958924 | 6.4252258901831345 | 0.0 | 1923 | 0 | 109.15407800001731 | 1833.3729599999913 |

