LiteLLM

Latest version: v1.61.11


v1.20.9

Admin UI is now on the Proxy Server
- When you start the proxy you'll be able to find your admin UI link on the swagger docs
- The UI is a static web app (h/t Manouchehri for this suggestion)
- Doc on getting started: https://docs.litellm.ai/docs/proxy/ui
- cc bsu3338 this change impacts you - the UI is by default on the proxy server (GIF shows how to get the UI link), let me know if you have any questions
![litellm_ui_3](https://github.com/BerriAI/litellm/assets/29436595/8a8220b0-99bf-458a-8ae5-ccc233e078ef)

Admin UI uses JWTs
- The UI never shows a Proxy API key in the URL params (we've moved to JWTs in the query params) cc Manouchehri

Admin UI - Removed the need for setting `allow_user_auth: True` if the user is logged in with SSO

* [Fix] UI - Use jwts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1730
* [Feat] Add Admin UI on Proxy Server (Static Web App) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1726
* [Fix-UI] If user is already logged in using SSO, set allow_user_auth: True by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1728

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.20.8...v1.20.9

1.61.9.dev1

What's Changed
* fix(team_endpoints.py): allow team member to view team info by krrishdholakia in https://github.com/BerriAI/litellm/pull/8644
* build: build ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/8654
* (UI + Proxy) Cache Health Check Page - Cleanup/Improvements by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8665


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.9-nightly...v1.61.9.dev1



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.9.dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
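When `STORE_MODEL_IN_DB` isn't used, the proxy typically reads its model list from a mounted `config.yaml`. A minimal sketch (the model alias and key reference below are illustrative; check the LiteLLM proxy docs for the full schema):

```yaml
model_list:
  - model_name: gpt-4o                     # alias clients send as "model"
    litellm_params:
      model: openai/gpt-4o                 # provider/model the proxy routes to
      api_key: os.environ/OPENAI_API_KEY   # read from the environment at startup
```

Mount it into the container (e.g. `-v $(pwd)/config.yaml:/app/config.yaml`) and pass `--config /app/config.yaml` to the proxy command.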

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 209.72659395983104 | 6.321588488030633 | 6.321588488030633 | 1892 | 1892 | 147.1097109999846 | 3268.0857999999944 |
| Aggregated | Failed ❌ | 180.0 | 209.72659395983104 | 6.321588488030633 | 6.321588488030633 | 1892 | 1892 | 147.1097109999846 | 3268.0857999999944 |

v1.61.9-nightly
What's Changed
* Pass router tags in request headers - `x-litellm-tags` + fix openai metadata param check by krrishdholakia in https://github.com/BerriAI/litellm/pull/8609
* (Fix) Redis async context usage for Redis Cluster + 94% lower median latency when using Redis Cluster by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8622
* add openrouter/google/gemini-2.0-flash-001 by HeMuling in https://github.com/BerriAI/litellm/pull/8619
* feat: add oss license check for related packages by krrishdholakia in https://github.com/BerriAI/litellm/pull/8623
* fix(model_cost_map): fix json parse error on model cost map + add uni… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8629
* [Feature]: Redis Caching - Allow setting a namespace for redis cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8624
* Cleanup ui filter icon + pass timeout for Sagemaker messages API by krrishdholakia in https://github.com/BerriAI/litellm/pull/8630
* Add Elroy to projects built with litellm by elroy-bot in https://github.com/BerriAI/litellm/pull/8642
* Add OSS license check to ci/cd by krrishdholakia in https://github.com/BerriAI/litellm/pull/8626
* Fix parallel request limiter on proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/8639
* Cleanup user <-> team association on `/team/delete` + Fix bedrock/deepseek_r1/ translation by krrishdholakia in https://github.com/BerriAI/litellm/pull/8640
* (Polish/Fixes) - Fixes for Adding Team Specific Models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8645
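The new Redis cache namespace option (PR 8624 above) can be sketched in proxy config as follows; the field names follow the caching docs and should be treated as an assumption:

```yaml
litellm_settings:
  cache: true
  cache_params:
    type: redis
    namespace: "litellm.caching"   # assumed: prefix applied to all cache keys
```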

New Contributors
* HeMuling made their first contribution in https://github.com/BerriAI/litellm/pull/8619
* elroy-bot made their first contribution in https://github.com/BerriAI/litellm/pull/8642

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.8-nightly...v1.61.9-nightly
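The router-tags change above passes tags to the proxy via the `x-litellm-tags` request header. A minimal client-side sketch; the comma-separated wire format is an assumption, so verify against the proxy docs:

```python
# Sketch: building extra request headers that carry router tags for the
# LiteLLM proxy via the new x-litellm-tags header (comma-separated value
# is assumed here, not confirmed by this changelog).
def tag_headers(tags):
    """Build extra request headers carrying router tags."""
    return {"x-litellm-tags": ",".join(tags)}

extra_headers = tag_headers(["team-a", "prod"])
print(extra_headers)  # {'x-litellm-tags': 'team-a,prod'}
```

These headers would be merged into an ordinary `/chat/completions` request alongside the usual `Authorization` header.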



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.9-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 203.54644847482734 | 6.3054769799102575 | 6.3054769799102575 | 1887 | 1887 | 146.3379119999786 | 3805.3281139999626 |
| Aggregated | Failed ❌ | 180.0 | 203.54644847482734 | 6.3054769799102575 | 6.3054769799102575 | 1887 | 1887 | 146.3379119999786 | 3805.3281139999626 |

v1.61.8-nightly
What's Changed
* (UI) Allow adding models for a Team (8598) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8601
* (UI) Refactor Add Models for Specific Teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8592
* (UI) Improvements to Add Team Model Flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8603


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.7...v1.61.8-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.8-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 120.0 | 129.19425708375965 | 6.54112229454407 | 6.54112229454407 | 1958 | 1958 | 94.39574200001744 | 2020.834275000027 |
| Aggregated | Failed ❌ | 120.0 | 129.19425708375965 | 6.54112229454407 | 6.54112229454407 | 1958 | 1958 | 94.39574200001744 | 2020.834275000027 |

1.61.7

What's Changed
* docs(perplexity.md): removing `return_citations` documentation by miraclebakelaser in https://github.com/BerriAI/litellm/pull/8527
* (docs - cookbook) litellm proxy x langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8541
* UI Fixes and Improvements (02/14/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8546
* (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8545
* fix(general_settings.tsx): filter out empty dictionaries post fallbac… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8550
* (perf) Fix memory leak on `/completions` route by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8551
* Org Flow Improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/8549
* feat(openai/o_series_transformation.py): support native streaming for o1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8552
* fix(team_endpoints.py): fix team info check to handle team keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/8529
* build: ui build update by krrishdholakia in https://github.com/BerriAI/litellm/pull/8553
* Optimize Alpine Dockerfile by removing redundant apk commands by PeterDaveHello in https://github.com/BerriAI/litellm/pull/5016
* fix(main.py): fix key leak error when unknown provider given by krrishdholakia in https://github.com/BerriAI/litellm/pull/8556
* (Feat) - return `x-litellm-attempted-fallbacks` in responses from litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8558
* Add remaining org CRUD endpoints + support deleting orgs on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8561
* Enable update/delete org members on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8560
* (Bug Fix) - Add Regenerate Key on Virtual Keys Tab by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8567
* (Bug Fix + Better Observability) - BudgetResetJob: for resetting key, team, user budgets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8562
* (Patch/bug fix) - UI, filter out litellm ui session tokens on Virtual Keys Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8568
* refactor(teams.tsx): refactor to display all teams, across all orgs by krrishdholakia in https://github.com/BerriAI/litellm/pull/8565
* docs: update README.md API key and model example typos by colesmcintosh in https://github.com/BerriAI/litellm/pull/8590
* Fix typo in main readme by scosman in https://github.com/BerriAI/litellm/pull/8574
* (UI) Allow adding models for a Team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8598
* feat(ui): alert when adding model without STORE_MODEL_IN_DB by Aditya8840 in https://github.com/BerriAI/litellm/pull/8591
* Revert "(UI) Allow adding models for a Team" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8600
* Litellm stable UI 02 17 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8599

New Contributors
* PeterDaveHello made their first contribution in https://github.com/BerriAI/litellm/pull/5016
* colesmcintosh made their first contribution in https://github.com/BerriAI/litellm/pull/8590
* scosman made their first contribution in https://github.com/BerriAI/litellm/pull/8574
* Aditya8840 made their first contribution in https://github.com/BerriAI/litellm/pull/8591

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3...v1.61.7



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.7
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |
| Aggregated | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |

1.61.7.dev1

What's Changed
* (UI) Allow adding models for a Team (8598) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8601
* (UI) Refactor Add Models for Specific Teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8592
* (UI) Improvements to Add Team Model Flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8603


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.7...v1.61.7.dev1

v1.61.7-nightly
What's Changed
* docs: update README.md API key and model example typos by colesmcintosh in https://github.com/BerriAI/litellm/pull/8590
* Fix typo in main readme by scosman in https://github.com/BerriAI/litellm/pull/8574
* (UI) Allow adding models for a Team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8598
* feat(ui): alert when adding model without STORE_MODEL_IN_DB by Aditya8840 in https://github.com/BerriAI/litellm/pull/8591
* Revert "(UI) Allow adding models for a Team" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8600
* Litellm stable UI 02 17 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8599

New Contributors
* colesmcintosh made their first contribution in https://github.com/BerriAI/litellm/pull/8590
* scosman made their first contribution in https://github.com/BerriAI/litellm/pull/8574
* Aditya8840 made their first contribution in https://github.com/BerriAI/litellm/pull/8591

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.6-nightly...v1.61.7-nightly

1.61.6.dev1

What's Changed
* docs: update README.md API key and model example typos by colesmcintosh in https://github.com/BerriAI/litellm/pull/8590
* Fix typo in main readme by scosman in https://github.com/BerriAI/litellm/pull/8574
* (UI) Allow adding models for a Team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8598
* feat(ui): alert when adding model without STORE_MODEL_IN_DB by Aditya8840 in https://github.com/BerriAI/litellm/pull/8591
* Revert "(UI) Allow adding models for a Team" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8600
* Litellm stable UI 02 17 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8599
* (UI) Allow adding models for a Team (8598) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8601
* (UI) Refactor Add Models for Specific Teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8592
* (UI) Improvements to Add Team Model Flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8603

New Contributors
* colesmcintosh made their first contribution in https://github.com/BerriAI/litellm/pull/8590
* scosman made their first contribution in https://github.com/BerriAI/litellm/pull/8574
* Aditya8840 made their first contribution in https://github.com/BerriAI/litellm/pull/8591

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.6-nightly...v1.61.6.dev1



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.6.dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 170.0 | 197.04136517618934 | 6.316924319787487 | 6.316924319787487 | 1890 | 1890 | 142.7094059999945 | 2646.323271999961 |
| Aggregated | Failed ❌ | 170.0 | 197.04136517618934 | 6.316924319787487 | 6.316924319787487 | 1890 | 1890 | 142.7094059999945 | 2646.323271999961 |

v1.61.6-nightly
What's Changed
* refactor(teams.tsx): refactor to display all teams, across all orgs by krrishdholakia in https://github.com/BerriAI/litellm/pull/8565


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.5-nightly...v1.61.6-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.6-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 170.0 | 197.37858561234376 | 6.172709160882249 | 6.172709160882249 | 1847 | 1847 | 139.8097940000298 | 3194.1706680000266 |
| Aggregated | Failed ❌ | 170.0 | 197.37858561234376 | 6.172709160882249 | 6.172709160882249 | 1847 | 1847 | 139.8097940000298 | 3194.1706680000266 |

v1.61.5-nightly
What's Changed
* Optimize Alpine Dockerfile by removing redundant apk commands by PeterDaveHello in https://github.com/BerriAI/litellm/pull/5016
* fix(main.py): fix key leak error when unknown provider given by krrishdholakia in https://github.com/BerriAI/litellm/pull/8556
* (Feat) - return `x-litellm-attempted-fallbacks` in responses from litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8558
* Add remaining org CRUD endpoints + support deleting orgs on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8561
* Enable update/delete org members on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8560
* (Bug Fix) - Add Regenerate Key on Virtual Keys Tab by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8567
* (Bug Fix + Better Observability) - BudgetResetJob: for resetting key, team, user budgets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8562
* (Patch/bug fix) - UI, filter out litellm ui session tokens on Virtual Keys Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8568

New Contributors
* PeterDaveHello made their first contribution in https://github.com/BerriAI/litellm/pull/5016

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3.dev1...v1.61.5-nightly
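The `x-litellm-attempted-fallbacks` feature above lets clients see whether a response was served via a fallback deployment. A client-side sketch with a mocked headers dict; the integer-count value format is an assumption based on the PR title:

```python
# Sketch: reading the new x-litellm-attempted-fallbacks response header
# from the LiteLLM proxy. The headers dict is mocked, and the
# integer-count value format is assumed, not confirmed by this changelog.
def attempted_fallbacks(headers):
    """Return the number of fallback attempts reported by the proxy."""
    return int(headers.get("x-litellm-attempted-fallbacks", 0))

mock_headers = {
    "content-type": "application/json",
    "x-litellm-attempted-fallbacks": "1",
}
print(attempted_fallbacks(mock_headers))  # 1
```

In practice the dict would come from the HTTP response object of whatever client library is in use.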



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.5-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 169.92952748954406 | 6.233287189548679 | 6.233287189548679 | 1865 | 1865 | 130.2254270000276 | 1515.568768999998 |
| Aggregated | Failed ❌ | 150.0 | 169.92952748954406 | 6.233287189548679 | 6.233287189548679 | 1865 | 1865 | 130.2254270000276 | 1515.568768999998 |

v1.61.3-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3...v1.61.3-stable



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.61.3-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 178.8079934268893 | 6.237874881624696 | 6.237874881624696 | 1867 | 1867 | 130.88419100000692 | 2746.1132829999997 |
| Aggregated | Failed ❌ | 150.0 | 178.8079934268893 | 6.237874881624696 | 6.237874881624696 | 1867 | 1867 | 130.88419100000692 | 2746.1132829999997 |

v1.61.4-nightly
What's Changed
* docs(perplexity.md): removing `return_citations` documentation by miraclebakelaser in https://github.com/BerriAI/litellm/pull/8527
* (docs - cookbook) litellm proxy x langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8541
* UI Fixes and Improvements (02/14/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8546
* (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8545
* fix(general_settings.tsx): filter out empty dictionaries post fallbac… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8550
* (perf) Fix memory leak on `/completions` route by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8551
* Org Flow Improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/8549
* feat(openai/o_series_transformation.py): support native streaming for o1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8552
* fix(team_endpoints.py): fix team info check to handle team keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/8529
* build: ui build update by krrishdholakia in https://github.com/BerriAI/litellm/pull/8553


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3...v1.61.4-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.4-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 190.0 | 216.89425311206062 | 6.2617791082055785 | 6.2617791082055785 | 1874 | 1874 | 143.52555700003222 | 3508.21726800001 |
| Aggregated | Failed ❌ | 190.0 | 216.89425311206062 | 6.2617791082055785 | 6.2617791082055785 | 1874 | 1874 | 143.52555700003222 | 3508.21726800001 |

1.61.3

What's Changed
* Improved wildcard route handling on `/models` and `/model_group/info` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8473
* (Bug fix) - Using `include_usage` for /completions requests + unit testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8484
* add sonar pricings by themrzmaster in https://github.com/BerriAI/litellm/pull/8476
* (bug fix) `PerplexityChatConfig` - track correct OpenAI compatible params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8496
* (fix 2) don't block proxy startup if license check fails & using prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8492
* ci(config.yml): mark daily docker builds with `-nightly` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8499
* (Redis Cluster) - Fixes for using redis cluster + pipeline by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8442
* Litellm UI stable version 02 12 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8497
* fix: fix test by krrishdholakia in https://github.com/BerriAI/litellm/pull/8501
* enables no auth for SMTP by krrishdholakia in https://github.com/BerriAI/litellm/pull/8494
* UI Fixes p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8502
* add phoenix docs for observability integration by exiao in https://github.com/BerriAI/litellm/pull/8522
* Added custom_attributes to additional_keys which can be sent to athina by vivek-athina in https://github.com/BerriAI/litellm/pull/8518
* (UI) fix log details page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8524
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (8475) by tahaali-dev in https://github.com/BerriAI/litellm/pull/8519
* LiteLLM Improvements (02/13/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8523
* fix(utils.py): fix vertex ai optional param handling by krrishdholakia in https://github.com/BerriAI/litellm/pull/8477
* Add 'prediction' param for Azure + Add `gemini-2.0-pro-exp-02-05` vertex ai model to cost map + New `bedrock/deepseek_r1/*` route by krrishdholakia in https://github.com/BerriAI/litellm/pull/8525
* (UI) - Refactor View Key Table by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8526


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.1...v1.61.3
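The improved wildcard route handling above applies to wildcard entries in the proxy's `model_list`. A minimal sketch, assuming the documented `provider/*` convention:

```yaml
model_list:
  - model_name: "openai/*"          # wildcard: matches any openai/<model> request
    litellm_params:
      model: "openai/*"
      api_key: os.environ/OPENAI_API_KEY
```

With this entry, `/models` and `/model_group/info` should now report the wildcard-backed models consistently.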



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.3
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969 |
| Aggregated | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969 |

v1.61.2-nightly
What's Changed
* Improved wildcard route handling on `/models` and `/model_group/info` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8473
* (Bug fix) - Using `include_usage` for /completions requests + unit testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8484
* add sonar pricings by themrzmaster in https://github.com/BerriAI/litellm/pull/8476
* (bug fix) `PerplexityChatConfig` - track correct OpenAI compatible params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8496
* (fix 2) don't block proxy startup if license check fails & using prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8492
* ci(config.yml): mark daily docker builds with `-nightly` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8499
* (Redis Cluster) - Fixes for using redis cluster + pipeline by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8442
* Litellm UI stable version 02 12 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8497
* fix: fix test by krrishdholakia in https://github.com/BerriAI/litellm/pull/8501
* enables no auth for SMTP by krrishdholakia in https://github.com/BerriAI/litellm/pull/8494
* UI Fixes p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8502
* add phoenix docs for observability integration by exiao in https://github.com/BerriAI/litellm/pull/8522
* Added custom_attributes to additional_keys which can be sent to athina by vivek-athina in https://github.com/BerriAI/litellm/pull/8518
* (UI) fix log details page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8524
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (8475) by tahaali-dev in https://github.com/BerriAI/litellm/pull/8519
* LiteLLM Improvements (02/13/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8523


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.1...v1.61.2-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.2-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 216.33586769555853 | 6.245273580245063 | 6.245273580245063 | 1869 | 1869 | 145.7912179999994 | 3665.8740830000056 |
| Aggregated | Failed ❌ | 180.0 | 216.33586769555853 | 6.245273580245063 | 6.245273580245063 | 1869 | 1869 | 145.7912179999994 | 3665.8740830000056 |
