LiteLLM

Latest version: v1.65.1


1.61.20.dev1

What's Changed
* Allow team/org filters to be searchable on the Create Key Page + Show team alias on Keys Table by krrishdholakia in https://github.com/BerriAI/litellm/pull/8881
* Add `created_by` and `updated_by` fields to Keys table by krrishdholakia in https://github.com/BerriAI/litellm/pull/8885
* (Proxy improvement) - Raise `BadRequestError` when an unknown model is passed in a request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8886 (see the example after this list)
* (Improvements) use `/openai/` pass through with OpenAI Ruby for Assistants API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8884
* Update model path and documentation for Cerebras API call by marscod in https://github.com/BerriAI/litellm/pull/8862
* docs: update sambanova docs by jhpiedrahitao in https://github.com/BerriAI/litellm/pull/8875
* Update model settings data by yurchik11 in https://github.com/BerriAI/litellm/pull/8871
* (security fix) - Enforce model access restrictions on Azure OpenAI route by krrishdholakia in https://github.com/BerriAI/litellm/pull/8888
* Show 'user_email' on key table on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8887
* fix: ollama chat async stream error propagation by Tomas2D in https://github.com/BerriAI/litellm/pull/8870
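
A quick way to see the new unknown-model behavior is to request a model the proxy doesn't serve. A minimal sketch, assuming the proxy from the Docker command below is running on localhost:4000 and `LITELLM_KEY` holds a valid virtual key; the model name is deliberately bogus:

```shell
# Since this release, an unknown model should return a 400 BadRequestError
# instead of surfacing as an unhandled server error.
curl -i http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_KEY" \
  -d '{
    "model": "not-a-real-model",
    "messages": [{"role": "user", "content": "hi"}]
  }'
```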

New Contributors
* marscod made their first contribution in https://github.com/BerriAI/litellm/pull/8862
* jhpiedrahitao made their first contribution in https://github.com/BerriAI/litellm/pull/8875
* Tomas2D made their first contribution in https://github.com/BerriAI/litellm/pull/8870

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.20.rc...v1.61.20.dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.20.dev1
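
The command above is the minimal form. Note that `STORE_MODEL_IN_DB=True` persists UI-added models to the proxy's database, so in practice you will also need a Postgres `DATABASE_URL` and, typically, a `LITELLM_MASTER_KEY` for admin access. A sketch with those standard settings (the DSN and key values are placeholders):

```shell
# DATABASE_URL: Postgres DSN the proxy stores models/keys in (placeholder).
# LITELLM_MASTER_KEY: admin key you choose yourself (placeholder).
docker run \
  -e STORE_MODEL_IN_DB=True \
  -e DATABASE_URL="postgresql://user:password@host:5432/litellm" \
  -e LITELLM_MASTER_KEY="sk-1234" \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.61.20.dev1
```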



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 254.1835915962261 | 6.2026122151904675 | 0.0 | 1855 | 0 | 211.9885020000538 | 1218.0083719999857 |
| Aggregated | Passed ✅ | 240.0 | 254.1835915962261 | 6.2026122151904675 | 0.0 | 1855 | 0 | 211.9885020000538 | 1218.0083719999857 |

1.61.19.dev1

What's Changed
* vertex ai anthropic thinking param support + cost calculation for vertex_ai/claude-3-7-sonnet by krrishdholakia in https://github.com/BerriAI/litellm/pull/8853
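
The `thinking` parameter follows Anthropic's extended-thinking shape. A sketch of exercising it through the proxy, assuming a `vertex_ai/claude-3-7-sonnet` deployment is configured and the proxy runs locally; the token budget is illustrative:

```shell
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_KEY" \
  -d '{
    "model": "vertex_ai/claude-3-7-sonnet",
    "messages": [{"role": "user", "content": "What is 17 * 24? Think it through."}],
    "thinking": {"type": "enabled", "budget_tokens": 1024}
  }'
```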


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.19-nightly...v1.61.19.dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.19.dev1



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 279.81049794793904 | 6.161760602979549 | 0.0 | 1844 | 0 | 217.23908899997468 | 4265.23701100001 |
| Aggregated | Passed ✅ | 250.0 | 279.81049794793904 | 6.161760602979549 | 0.0 | 1844 | 0 | 217.23908899997468 | 4265.23701100001 |

v1.61.19-nightly
What's Changed
* fix(get_litellm_params.py): handle no-log being passed in via kwargs by krrishdholakia in https://github.com/BerriAI/litellm/pull/8830
* fix(o_series_transformation.py): fix optional param check for o-serie… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8787
* chore: set ttlSecondsAfterFinished on the migration job in the litellm-helm chart by ashwin153 in https://github.com/BerriAI/litellm/pull/8593
* Litellm dev bedrock anthropic 3 7 v2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8843
* Mark Claude Haiku 3.5 as vision-capable by minhduc0711 in https://github.com/BerriAI/litellm/pull/8840
* feat: enhance migrations job with additional configurable properties by mknet3 in https://github.com/BerriAI/litellm/pull/8636
* (UI + Backend) Fix Adding Azure, Azure AI Studio models on LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8856
* fix caching on main branch by krrishdholakia in https://github.com/BerriAI/litellm/pull/8858
* [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8860

New Contributors
* ashwin153 made their first contribution in https://github.com/BerriAI/litellm/pull/8593
* mknet3 made their first contribution in https://github.com/BerriAI/litellm/pull/8636

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.17-nightly...v1.61.19-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.19-nightly



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 190.0 | 197.9691385267413 | 6.185555293319119 | 0.003341737057438746 | 1851 | 1 | 53.4934159999807 | 967.1408859999815 |
| Aggregated | Passed ✅ | 190.0 | 197.9691385267413 | 6.185555293319119 | 0.003341737057438746 | 1851 | 1 | 53.4934159999807 | 967.1408859999815 |

1.61.17.dev1

What's Changed
* fix(get_litellm_params.py): handle no-log being passed in via kwargs by krrishdholakia in https://github.com/BerriAI/litellm/pull/8830
* fix(o_series_transformation.py): fix optional param check for o-serie… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8787
* chore: set ttlSecondsAfterFinished on the migration job in the litellm-helm chart by ashwin153 in https://github.com/BerriAI/litellm/pull/8593

New Contributors
* ashwin153 made their first contribution in https://github.com/BerriAI/litellm/pull/8593

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.17-nightly...v1.61.17.dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.17.dev1



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 170.0 | 197.91492646891462 | 6.342832402753434 | 6.342832402753434 | 1898 | 1898 | 138.25161099998695 | 4268.093897000028 |
| Aggregated | Failed ❌ | 170.0 | 197.91492646891462 | 6.342832402753434 | 6.342832402753434 | 1898 | 1898 | 138.25161099998695 | 4268.093897000028 |

v1.61.17-nightly
What's Changed
* (UI) Fixes for managing Internal Users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8786
* Litellm contributor prs 02 24 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8781
* (UI) Allow adding MSFT SSO on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8779
* (UI) Minor fix, clear new team form after adding a new team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8776
* [ui] Icons on navbar profile dropdown by Mte90 in https://github.com/BerriAI/litellm/pull/8792
* fix(UI): model name overflow in model hub cards by Aditya-A-G in https://github.com/BerriAI/litellm/pull/8749
* fix vertex_ai claude 3.7 naming by emerzon in https://github.com/BerriAI/litellm/pull/8807
* (Router) - If `allowed_fails` or `allowed_fail_policy` set, use that for single deployment cooldown logic by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8668
* (Bug fix) - reading/parsing request body when on hypercorn by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8734
* (Bug fix) - running litellm proxy on Windows by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8735
* Anthropic model cost map updates by krrishdholakia in https://github.com/BerriAI/litellm/pull/8816
* Adding Azure Phi-4 by emerzon in https://github.com/BerriAI/litellm/pull/8808
* (Bug Fix) Using LiteLLM Python SDK with model=`litellm_proxy/` for embedding, image_generation, transcription, speech, rerank by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8815
* (Bug fix) - allow using Assistants GET, DELETE on `/openai` pass through routes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8818
* (Bug fix) dd-trace used by default on litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8817
* Adding openrouter claude-3.7-sonnet by fengjiajie in https://github.com/BerriAI/litellm/pull/8826
* (UI) - Create Key flow for existing users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8844

New Contributors
* Mte90 made their first contribution in https://github.com/BerriAI/litellm/pull/8792
* Aditya-A-G made their first contribution in https://github.com/BerriAI/litellm/pull/8749

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.16-nightly...v1.61.17-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.17-nightly



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 110.0 | 132.37492176765105 | 6.343856763236461 | 6.343856763236461 | 1898 | 1898 | 93.89094700003398 | 3315.9179240000185 |
| Aggregated | Failed ❌ | 110.0 | 132.37492176765105 | 6.343856763236461 | 6.343856763236461 | 1898 | 1898 | 93.89094700003398 | 3315.9179240000185 |

v1.61.16-nightly
What's Changed
* fix: remove aws params from bedrock embedding request body (8618) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8696
* Add anthropic3-7-sonnet by dragosMC91 in https://github.com/BerriAI/litellm/pull/8766
* fix incorrect variable name in reliability section of docs by niinpatel in https://github.com/BerriAI/litellm/pull/8753
* Litellm contributor prs 02 24 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8775
* Add anthropic thinking + reasoning content support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8778

New Contributors
* niinpatel made their first contribution in https://github.com/BerriAI/litellm/pull/8753

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.15-nightly...v1.61.16-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.16-nightly


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 173.09079325414655 | 6.248259928705226 | 6.248259928705226 | 1869 | 1869 | 131.67032300003711 | 1529.8640780000028 |
| Aggregated | Failed ❌ | 150.0 | 173.09079325414655 | 6.248259928705226 | 6.248259928705226 | 1869 | 1869 | 131.67032300003711 | 1529.8640780000028 |

v1.61.15-nightly
What's Changed
* Add cost tracking for rerank via bedrock + jina ai by krrishdholakia in https://github.com/BerriAI/litellm/pull/8691
* add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8684
* Correct spelling in user_management_heirarchy.md by oaustegard in https://github.com/BerriAI/litellm/pull/8716
* (Feat) - UI, Allow sorting models by Created_At and all other columns on the UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8725
* (UI) Edit Model flow improvements by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8729
* Support arize phoenix on litellm proxy (7756) by krrishdholakia in https://github.com/BerriAI/litellm/pull/8715
* fix(amazon_deepseek_transformation.py): remove </think> from stream o… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8717
* Add cohere v2/rerank support (8421) by krrishdholakia in https://github.com/BerriAI/litellm/pull/8605
* fix(proxy/_types.py): fixes issue where internal user able to escalat… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8740

New Contributors
* oaustegard made their first contribution in https://github.com/BerriAI/litellm/pull/8716

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.13-nightly...v1.61.15-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.15-nightly


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 140.0 | 149.21232461729608 | 6.452882828983183 | 6.452882828983183 | 1931 | 1931 | 114.22628599996187 | 662.278525000005 |
| Aggregated | Failed ❌ | 140.0 | 149.21232461729608 | 6.452882828983183 | 6.452882828983183 | 1931 | 1931 | 114.22628599996187 | 662.278525000005 |

v1.61.13-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.13-nightly...v1.61.13-stable

🚨 Known Issue:
- DD Trace was enabled by default on litellm docker: https://github.com/BerriAI/litellm/issues/8788
- Expect a patched v1.61.13-stable with the fix



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.61.13-stable



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 204.35323738803527 | 6.313934995711798 | 6.313934995711798 | 1889 | 1889 | 146.12962300003574 | 2180.2391240000247 |
| Aggregated | Failed ❌ | 180.0 | 204.35323738803527 | 6.313934995711798 | 6.313934995711798 | 1889 | 1889 | 146.12962300003574 | 2180.2391240000247 |

v1.55.8-stable-patched
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.8-stable...v1.55.8-stable-patched



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.55.8-stable-patched


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 172.39167931961987 | 6.334948735889217 | 6.334948735889217 | 1896 | 1896 | 131.01931900001773 | 2316.6445349999663 |
| Aggregated | Failed ❌ | 150.0 | 172.39167931961987 | 6.334948735889217 | 6.334948735889217 | 1896 | 1896 | 131.01931900001773 | 2316.6445349999663 |

1.61.13.rc

What's Changed
* LiteLLM Contributor PRs (02/18/2025). by krrishdholakia in https://github.com/BerriAI/litellm/pull/8643
* fix(utils.py): handle token counter error when invalid message passed in by krrishdholakia in https://github.com/BerriAI/litellm/pull/8670
* (Bug fix) - Cache Health not working when configured with prometheus service logger by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8687
* (Redis fix) - use mget_non_atomic by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8682
* (Observability) - Add more detailed dd tracing on Proxy Auth, Bedrock Auth by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8693
* (Infra/DB) - Allow running older litellm version when out of sync with current state of DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8695


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.11-nightly...v1.61.13.rc



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.13.rc


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 182.1689693668056 | 6.422359014184563 | 6.422359014184563 | 1922 | 1922 | 131.7662689999679 | 3100.3508269999998 |
| Aggregated | Failed ❌ | 150.0 | 182.1689693668056 | 6.422359014184563 | 6.422359014184563 | 1922 | 1922 | 131.7662689999679 | 3100.3508269999998 |

v1.61.13-nightly
What's Changed
* LiteLLM Contributor PRs (02/18/2025). by krrishdholakia in https://github.com/BerriAI/litellm/pull/8643
* fix(utils.py): handle token counter error when invalid message passed in by krrishdholakia in https://github.com/BerriAI/litellm/pull/8670
* (Bug fix) - Cache Health not working when configured with prometheus service logger by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8687
* (Redis fix) - use mget_non_atomic by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8682
* (Observability) - Add more detailed dd tracing on Proxy Auth, Bedrock Auth by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8693
* (Infra/DB) - Allow running older litellm version when out of sync with current state of DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8695


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.11-nightly...v1.61.13-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.13-nightly


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 177.70403054473582 | 6.34954513279557 | 6.34954513279557 | 1900 | 1900 | 131.07585400001653 | 3605.6919650000054 |
| Aggregated | Failed ❌ | 150.0 | 177.70403054473582 | 6.34954513279557 | 6.34954513279557 | 1900 | 1900 | 131.07585400001653 | 3605.6919650000054 |

v1.61.11-nightly
What's Changed
* fix(team_endpoints.py): allow team member to view team info by krrishdholakia in https://github.com/BerriAI/litellm/pull/8644
* build: build ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/8654
* (UI + Proxy) Cache Health Check Page - Cleanup/Improvements by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8665
* (Bug Fix Redis) - Fix running redis.mget operations with `None` Keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8666
* (Bug fix) prometheus - safely set latency metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8669
* extract `<think>..</think>` block for amazon deepseek r1 and put in `reasoning_content` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8664 (see the sketch after this list)
* Add all `/key/generate` api params to UI + add metadata fields on team AND org add/update by krrishdholakia in https://github.com/BerriAI/litellm/pull/8667
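
For the `reasoning_content` change above, the text extracted from the `<think>..</think>` block is attached to the response message. A sketch of inspecting it with `jq`, assuming a Bedrock DeepSeek R1 deployment is configured on the proxy under the placeholder alias used here:

```shell
curl -s http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_KEY" \
  -d '{"model": "bedrock-deepseek-r1", "messages": [{"role": "user", "content": "Why is the sky blue?"}]}' \
  | jq '.choices[0].message.reasoning_content'
```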


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.9-nightly...v1.61.11-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.11-nightly


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 120.0 | 146.33082595240526 | 6.457801208431416 | 6.457801208431416 | 1933 | 1933 | 97.35924100004922 | 4080.5825460000165 |
| Aggregated | Failed ❌ | 120.0 | 146.33082595240526 | 6.457801208431416 | 6.457801208431416 | 1933 | 1933 | 97.35924100004922 | 4080.5825460000165 |

1.61.9.dev1

What's Changed
* fix(team_endpoints.py): allow team member to view team info by krrishdholakia in https://github.com/BerriAI/litellm/pull/8644
* build: build ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/8654
* (UI + Proxy) Cache Health Check Page - Cleanup/Improvements by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8665


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.9-nightly...v1.61.9.dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.9.dev1


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 209.72659395983104 | 6.321588488030633 | 6.321588488030633 | 1892 | 1892 | 147.1097109999846 | 3268.0857999999944 |
| Aggregated | Failed ❌ | 180.0 | 209.72659395983104 | 6.321588488030633 | 6.321588488030633 | 1892 | 1892 | 147.1097109999846 | 3268.0857999999944 |

v1.61.9-nightly
What's Changed
* Pass router tags in request headers - `x-litellm-tags` + fix openai metadata param check by krrishdholakia in https://github.com/BerriAI/litellm/pull/8609 (see the example after this list)
* (Fix) Redis async context usage for Redis Cluster + 94% lower median latency when using Redis Cluster by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8622
* add openrouter/google/gemini-2.0-flash-001 by HeMuling in https://github.com/BerriAI/litellm/pull/8619
* feat: add oss license check for related packages by krrishdholakia in https://github.com/BerriAI/litellm/pull/8623
* fix(model_cost_map): fix json parse error on model cost map + add uni… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8629
* [Feature]: Redis Caching - Allow setting a namespace for redis cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8624
* Cleanup ui filter icon + pass timeout for Sagemaker messages API by krrishdholakia in https://github.com/BerriAI/litellm/pull/8630
* Add Elroy to projects built with litellm by elroy-bot in https://github.com/BerriAI/litellm/pull/8642
* Add OSS license check to ci/cd by krrishdholakia in https://github.com/BerriAI/litellm/pull/8626
* Fix parallel request limiter on proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/8639
* Cleanup user <-> team association on `/team/delete` + Fix bedrock/deepseek_r1/ translation by krrishdholakia in https://github.com/BerriAI/litellm/pull/8640
* (Polish/Fixes) - Fixes for Adding Team Specific Models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8645
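
The `x-litellm-tags` header lets a caller steer tag-based routing per request. A sketch, assuming tag routing is enabled on the proxy and some deployments carry a `free` tag (the tag name and model are placeholders):

```shell
# Tags are comma-separated; the router matches deployments tagged "free".
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_KEY" \
  -H "x-litellm-tags: free" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}'
```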

New Contributors
* HeMuling made their first contribution in https://github.com/BerriAI/litellm/pull/8619
* elroy-bot made their first contribution in https://github.com/BerriAI/litellm/pull/8642

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.8-nightly...v1.61.9-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.9-nightly


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 203.54644847482734 | 6.3054769799102575 | 6.3054769799102575 | 1887 | 1887 | 146.3379119999786 | 3805.3281139999626 |
| Aggregated | Failed ❌ | 180.0 | 203.54644847482734 | 6.3054769799102575 | 6.3054769799102575 | 1887 | 1887 | 146.3379119999786 | 3805.3281139999626 |

v1.61.8-nightly
What's Changed
* (UI) Allow adding models for a Team (8598) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8601
* (UI) Refactor Add Models for Specific Teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8592
* (UI) Improvements to Add Team Model Flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8603


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.7...v1.61.8-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.8-nightly


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 120.0 | 129.19425708375965 | 6.54112229454407 | 6.54112229454407 | 1958 | 1958 | 94.39574200001744 | 2020.834275000027 |
| Aggregated | Failed ❌ | 120.0 | 129.19425708375965 | 6.54112229454407 | 6.54112229454407 | 1958 | 1958 | 94.39574200001744 | 2020.834275000027 |

1.61.7

Not secure: this version has known vulnerabilities.
What's Changed
* docs(perplexity.md): removing `return_citations` documentation by miraclebakelaser in https://github.com/BerriAI/litellm/pull/8527
* (docs - cookbook) litellm proxy x langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8541
* UI Fixes and Improvements (02/14/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8546
* (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8545
* fix(general_settings.tsx): filter out empty dictionaries post fallbac… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8550
* (perf) Fix memory leak on `/completions` route by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8551
* Org Flow Improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/8549
* feat(openai/o_series_transformation.py): support native streaming for o1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8552
* fix(team_endpoints.py): fix team info check to handle team keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/8529
* build: ui build update by krrishdholakia in https://github.com/BerriAI/litellm/pull/8553
* Optimize Alpine Dockerfile by removing redundant apk commands by PeterDaveHello in https://github.com/BerriAI/litellm/pull/5016
* fix(main.py): fix key leak error when unknown provider given by krrishdholakia in https://github.com/BerriAI/litellm/pull/8556
* (Feat) - return `x-litellm-attempted-fallbacks` in responses from litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8558 (see the sketch after this list)
* Add remaining org CRUD endpoints + support deleting orgs on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8561
* Enable update/delete org members on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8560
* (Bug Fix) - Add Regenerate Key on Virtual Keys Tab by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8567
* (Bug Fix + Better Observability) - BudgetResetJob: for resetting key, team, and user budgets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8562
* (Patch/bug fix) - UI, filter out litellm ui session tokens on Virtual Keys Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8568
* refactor(teams.tsx): refactor to display all teams, across all orgs by krrishdholakia in https://github.com/BerriAI/litellm/pull/8565
* docs: update README.md API key and model example typos by colesmcintosh in https://github.com/BerriAI/litellm/pull/8590
* Fix typo in main readme by scosman in https://github.com/BerriAI/litellm/pull/8574
* (UI) Allow adding models for a Team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8598
* feat(ui): alert when adding model without STORE_MODEL_IN_DB by Aditya8840 in https://github.com/BerriAI/litellm/pull/8591
* Revert "(UI) Allow adding models for a Team" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8600
* Litellm stable UI 02 17 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8599
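
To observe the new `x-litellm-attempted-fallbacks` response header, print headers with `curl -i`. A sketch, assuming fallbacks are configured on the proxy and a `gpt-4o` deployment exists (both placeholders):

```shell
curl -i http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_KEY" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}'
# Expect a response header like: x-litellm-attempted-fallbacks: 0
```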

New Contributors
* PeterDaveHello made their first contribution in https://github.com/BerriAI/litellm/pull/5016
* colesmcintosh made their first contribution in https://github.com/BerriAI/litellm/pull/8590
* scosman made their first contribution in https://github.com/BerriAI/litellm/pull/8574
* Aditya8840 made their first contribution in https://github.com/BerriAI/litellm/pull/8591

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3...v1.61.7



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.7


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |
| Aggregated | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |
