LiteLLM

Latest version: v1.65.1


1.60.0.dev4

What's Changed
* Azure AI Foundry - Deepseek R1 by elabbarw in https://github.com/BerriAI/litellm/pull/8188
* fix(main.py): fix passing openrouter specific params by krrishdholakia in https://github.com/BerriAI/litellm/pull/8184
* Complete o3 model support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8183 (see the sketch after this list)
* Easier user onboarding via SSO by krrishdholakia in https://github.com/BerriAI/litellm/pull/8187
* LiteLLM Minor Fixes & Improvements (01/16/2025) - p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7828
* Added deprecation date for gemini-1.5 models by yurchik11 in https://github.com/BerriAI/litellm/pull/8210
* docs: Updating the available VoyageAI models in the docs by fzowl in https://github.com/BerriAI/litellm/pull/8215
* build: ui updates by krrishdholakia in https://github.com/BerriAI/litellm/pull/8206
* Fix tokens for deepseek by SmartManoj in https://github.com/BerriAI/litellm/pull/8207
* (UI Fixes for add new model flow) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8216
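
The o3 support called out above lands through the standard `litellm.completion` interface. A minimal sketch, assuming `OPENAI_API_KEY` is set in the environment; the model id `o3-mini` and the `reasoning_effort` value are illustrative, not taken from the release notes:

```python
# Minimal sketch: calling an o3-family model via the litellm SDK.
# Assumes OPENAI_API_KEY is exported; "o3-mini" is an illustrative model id.
from litellm import completion

response = completion(
    model="o3-mini",
    messages=[{"role": "user", "content": "Summarize LiteLLM in one sentence."}],
    reasoning_effort="medium",  # OpenAI reasoning param; assumed to be forwarded as-is
)
print(response.choices[0].message.content)
```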


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.0.dev2...v1.60.0.dev4



Docker Run LiteLLM Proxy


```
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.60.0.dev4
```
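
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000, so any OpenAI client can point at it. A minimal sketch, assuming a model named `gpt-4o` has already been added (with `STORE_MODEL_IN_DB=True`, models are registered via the UI/DB) and `sk-1234` stands in for a real key:

```python
# Minimal sketch: calling the proxy started above with the OpenAI SDK.
# base_url, api_key, and the model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(resp.choices[0].message.content)
```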



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 179.79463683736907 | 6.359486247668494 | 0.0 | 1900 | 0 | 123.9115270000184 | 3798.7273850000065 |
| Aggregated | Passed ✅ | 150.0 | 179.79463683736907 | 6.359486247668494 | 0.0 | 1900 | 0 | 123.9115270000184 | 3798.7273850000065 |

1.60.0.dev2

What's Changed
* Control Model Access by IDP 'groups' by krrishdholakia in https://github.com/BerriAI/litellm/pull/8164
* build(schema.prisma): add new `sso_user_id` to LiteLLM_UserTable by krrishdholakia in https://github.com/BerriAI/litellm/pull/8167
* Litellm dev contributor prs 01 31 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8168
* Improved O3 + Azure O3 support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8181
* test: add more unit testing for team member endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/8170
* Add azure/deepseek-r1 by Klohto in https://github.com/BerriAI/litellm/pull/8177
* [Bug Fix] - `/vertex_ai/` was not detected as llm_api_route on pass through but `vertex-ai` was by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8186
* (UI + SpendLogs) - Store SpendLogs in UTC Timezone, Fix filtering logs by start/end time by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8190

New Contributors
* Klohto made their first contribution in https://github.com/BerriAI/litellm/pull/8177

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.0...v1.60.0.dev2



Docker Run LiteLLM Proxy


```
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.60.0.dev2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 160.0 | 179.3387644765704 | 6.274867330705683 | 0.0 | 1878 | 0 | 134.8906900000202 | 3148.732781000035 |
| Aggregated | Passed ✅ | 160.0 | 179.3387644765704 | 6.274867330705683 | 0.0 | 1878 | 0 | 134.8906900000202 | 3148.732781000035 |

1.60.0.dev1

What's Changed
* Control Model Access by IDP 'groups' by krrishdholakia in https://github.com/BerriAI/litellm/pull/8164
* build(schema.prisma): add new `sso_user_id` to LiteLLM_UserTable by krrishdholakia in https://github.com/BerriAI/litellm/pull/8167
* Litellm dev contributor prs 01 31 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8168
* Improved O3 + Azure O3 support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8181
* test: add more unit testing for team member endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/8170
* Add azure/deepseek-r1 by Klohto in https://github.com/BerriAI/litellm/pull/8177

New Contributors
* Klohto made their first contribution in https://github.com/BerriAI/litellm/pull/8177

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.0...v1.60.0.dev1



Docker Run LiteLLM Proxy


```
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.60.0.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 170.0 | 193.07171769197802 | 6.24812141882662 | 0.0 | 1870 | 0 | 149.06627900001013 | 846.9972659999883 |
| Aggregated | Passed ✅ | 170.0 | 193.07171769197802 | 6.24812141882662 | 0.0 | 1870 | 0 | 149.06627900001013 | 846.9972659999883 |

1.59.10

What's Changed
* (UI) - View Logs Page - Refinement by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8087
* (Feat) pass through vertex - allow using credentials defined on litellm router for vertex pass through by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8100
* (UI) Allow using a model / credentials for pass through routes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8099
* ui - fix chat ui tab sending `model` param by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8105
* Litellm dev 01 29 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8097
* Support new `bedrock/converse_like/<model>` route by krrishdholakia in https://github.com/BerriAI/litellm/pull/8102 (example sketched after this list)
* feat(databricks/chat/transformation.py): add tools and 'tool_choice' param support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8076
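
The `bedrock/converse_like/<model>` route referenced above follows litellm's provider-prefix convention and targets endpoints that speak Bedrock's Converse API. A hedged sketch; the endpoint, credential, and model name below are all placeholders, not a signature confirmed by the release notes:

```python
# Hedged sketch of the bedrock/converse_like/<model> route.
# api_base, api_key, and the model id are hypothetical placeholders.
from litellm import completion

response = completion(
    model="bedrock/converse_like/my-deployment",  # hypothetical model id
    api_base="https://example.com/converse",      # hypothetical Converse-compatible endpoint
    api_key="sk-example",                         # hypothetical credential
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```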


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.9...v1.59.10



Docker Run LiteLLM Proxy


```
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.59.10
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083 |
| Aggregated | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083 |

v1.59.8-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.8-stable...v1.59.8-stable

Known Issues
🚨 Detected issue with Langfuse Logging when Langfuse credentials are stored in DB


Docker Run LiteLLM Proxy


```
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.59.8-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 260.0 | 291.2207591958183 | 6.075260080470321 | 0.0 | 1818 | 0 | 223.10552599998346 | 3813.1267819999266 |
| Aggregated | Passed ✅ | 260.0 | 291.2207591958183 | 6.075260080470321 | 0.0 | 1818 | 0 | 223.10552599998346 | 3813.1267819999266 |

1.59.9

What's Changed
* Fix custom pricing - separate provider info from model info by krrishdholakia in https://github.com/BerriAI/litellm/pull/7990
* Litellm dev 01 25 2025 p4 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8006
* (UI) - Adding new models enhancement - show provider logo by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8033
* (UI enhancement) - allow onboarding wildcard models on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8034
* add openrouter/deepseek/deepseek-r1 by paul-gauthier in https://github.com/BerriAI/litellm/pull/8038
* (UI) - allow assigning wildcard models to a team / key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8041
* Add smolagents by aymeric-roucher in https://github.com/BerriAI/litellm/pull/8026
* (UI) fixes to add model flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8043
* github - run stale issue/pr bot by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8045
* (doc) Add nvidia as provider by raspawar in https://github.com/BerriAI/litellm/pull/8023
* feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8037
* fix(utils.py): handle failed hf tokenizer request during calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/8032
* Bedrock document processing fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/8005
* Fix bedrock model pricing + add unit test using bedrock pricing api by krrishdholakia in https://github.com/BerriAI/litellm/pull/7978
* Add openai `metadata` param preview support + new `x-litellm-timeout` request header by krrishdholakia in https://github.com/BerriAI/litellm/pull/8047 (see the sketch after this list)
* (beta ui - spend logs view fixes & Improvements 1) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8062
* (fix) - proxy reliability, ensure duplicate callbacks are not added to proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8067
* (UI) Fixes for Adding model page - keep existing page as default, have 2nd tab for wildcard models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8073
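
The `x-litellm-timeout` header introduced above can be sent from any OpenAI client as a per-request header. A minimal sketch against a local proxy; the URL, key, and model name are placeholders, and the header value is assumed to be a timeout in seconds:

```python
# Minimal sketch: per-request timeout via the new x-litellm-timeout header.
# base_url, api_key, and model are placeholders; "30" assumed to mean seconds.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
    extra_headers={"x-litellm-timeout": "30"},
)
print(resp.choices[0].message.content)
```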

New Contributors
* aymeric-roucher made their first contribution in https://github.com/BerriAI/litellm/pull/8026
* raspawar made their first contribution in https://github.com/BerriAI/litellm/pull/8023

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.8...v1.59.9



Docker Run LiteLLM Proxy


```
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.59.9
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 301.01550717582927 | 6.14169679840119 | 0.0 | 1837 | 0 | 234.85362500002793 | 3027.238808999982 |
| Aggregated | Failed ❌ | 270.0 | 301.01550717582927 | 6.14169679840119 | 0.0 | 1837 | 0 | 234.85362500002793 | 3027.238808999982 |

1.59.8

What's Changed
* refactor: cleanup dead codeblock by krrishdholakia in https://github.com/BerriAI/litellm/pull/7936
* add type annotation for litellm.api_base (7980) by krrishdholakia in https://github.com/BerriAI/litellm/pull/7994
* (QA / testing) - Add unit testing for key model access checks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7999
* (Prometheus) - emit key budget metrics on startup by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8002
* (Feat) set guardrails per team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7993
* Supported nested json schema on anthropic calls via proxy + fix langfuse sync sdk issues by krrishdholakia in https://github.com/BerriAI/litellm/pull/8003
* Bug fix - [Bug]: If you create a key tied to a user that does not belong to a team, and then edit the key to add it to a team (the user is still not a part of a team), using that key results in an unexpected error by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8008
* (QA / testing) - Add e2e tests for key model access auth checks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8000
* (Fix) langfuse - setting `LANGFUSE_FLUSH_INTERVAL` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8007
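
For the `LANGFUSE_FLUSH_INTERVAL` fix above, a minimal SDK-side sketch; the key values are placeholders and the interval is assumed to be seconds between background flushes:

```python
# Minimal sketch: Langfuse logging with an explicit flush interval.
# Key values are placeholders; LANGFUSE_FLUSH_INTERVAL assumed to be in seconds.
import os
import litellm
from litellm import completion

os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-placeholder"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-placeholder"
os.environ["LANGFUSE_FLUSH_INTERVAL"] = "10"

litellm.success_callback = ["langfuse"]  # log successful calls to Langfuse
response = completion(
    model="gpt-4o",  # illustrative model id; requires OPENAI_API_KEY
    messages=[{"role": "user", "content": "ping"}],
)
```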


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.7...v1.59.8



Docker Run LiteLLM Proxy


```
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.59.8
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 280.0 | 325.48398318207154 | 6.003526201462839 | 0.0 | 1796 | 0 | 234.56590200004257 | 3690.442290999954 |
| Aggregated | Failed ❌ | 280.0 | 325.48398318207154 | 6.003526201462839 | 0.0 | 1796 | 0 | 234.56590200004257 | 3690.442290999954 |
