LiteLLM

Latest version: v1.65.1


1.61.1.dev2

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.1...v1.61.1.dev2

1.61.1.dev1

What's Changed
* Improved wildcard route handling on `/models` and `/model_group/info` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8473
* (Bug fix) - Using `include_usage` for /completions requests + unit testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8484
* add sonar pricings by themrzmaster in https://github.com/BerriAI/litellm/pull/8476
* (bug fix) `PerplexityChatConfig` - track correct OpenAI compatible params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8496
* (fix 2) don't block proxy startup if license check fails & using prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8492
* ci(config.yml): mark daily docker builds with `-nightly` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8499
* (Redis Cluster) - Fixes for using redis cluster + pipeline by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8442
* Litellm UI stable version 02 12 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8497
* fix: fix test by krrishdholakia in https://github.com/BerriAI/litellm/pull/8501


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.1...v1.61.1.dev1
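
As a quick illustration of the `include_usage` fix above (PR 8484): a minimal sketch, assuming a LiteLLM proxy running locally on port 4000 (as started in the Docker command below) with placeholder key and model. Per the OpenAI-compatible spec, the final streamed chunk should then carry a `usage` object.

```
# Illustrative only: proxy URL, key, and model are placeholders
curl http://localhost:4000/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say hello",
    "stream": true,
    "stream_options": {"include_usage": true}
  }'
```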



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.1.dev1
```
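
Once the container is up, a minimal liveness check (a sketch; assumes the port mapping above and the proxy's standard health endpoint):

```
# Expect an HTTP 200 if the proxy is alive
curl http://localhost:4000/health/liveliness
```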


Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 213.07786790233536 | 6.297898153114872 | 6.297898153114872 | 1884 | 1884 | 146.15093399999068 | 4776.909474999997 |
| Aggregated | Failed ❌ | 180.0 | 213.07786790233536 | 6.297898153114872 | 6.297898153114872 | 1884 | 1884 | 146.15093399999068 | 4776.909474999997 |

1.61.0

What's Changed
* (Feat) - Allow calling Nova models on `/bedrock/invoke/` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8397
* Run litellm in dev mode by CakeCrusher in https://github.com/BerriAI/litellm/pull/8404
* (Bug Fix) - Bedrock completions with aws_region_name by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8384
* added gemini 2.0 models to docs by mubashir1osmani in https://github.com/BerriAI/litellm/pull/8412
* Added filter in Teams and fixed spacing & height issues in Teams tabs (6192) by tahaali-dev in https://github.com/BerriAI/litellm/pull/8357
* Revert "Added filter in Teams and fixed spacing & height issues in Teams tabs (6192)" by krrishdholakia in https://github.com/BerriAI/litellm/pull/8416
* Allow editing model api key + provider on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8406
* Allow org admin to create teams on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8407
* Azure api version check - fix str compare to convert to int by krrishdholakia in https://github.com/BerriAI/litellm/pull/8438
* Fix callback add when user_config passed + support passing openai org client-side by krrishdholakia in https://github.com/BerriAI/litellm/pull/8443
* Org UI Improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/8436
* (e2e testing) - add tests for using litellm `/team/` updates in multi-instance deployments with Redis by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8440
* (Feat) - Allow viewing Request/Response Logs stored in GCS Bucket by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8449

New Contributors
* CakeCrusher made their first contribution in https://github.com/BerriAI/litellm/pull/8404

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.8...v1.61.0
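
To sketch the Nova change above (PR 8397), assuming a running proxy and placeholder credentials; the Bedrock model ID is illustrative, and the `bedrock/invoke/` prefix follows the route named in the PR:

```
# Illustrative: model ID and key are placeholders
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "bedrock/invoke/amazon.nova-micro-v1:0",
    "messages": [{"role": "user", "content": "Hello from Nova"}]
  }'
```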



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 180.0 | 213.86169773089247 | 6.297834462789351 | 0.003342799608699231 | 1884 | 1 | 81.07622899996159 | 4173.802059999957 |
| Aggregated | Passed ✅ | 180.0 | 213.86169773089247 | 6.297834462789351 | 0.003342799608699231 | 1884 | 1 | 81.07622899996159 | 4173.802059999957 |

1.61.0.dev1

What's Changed
* (Feat) - Allow calling Nova models on `/bedrock/invoke/` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8397
* Run litellm in dev mode by CakeCrusher in https://github.com/BerriAI/litellm/pull/8404
* (Bug Fix) - Bedrock completions with aws_region_name by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8384
* added gemini 2.0 models to docs by mubashir1osmani in https://github.com/BerriAI/litellm/pull/8412
* Added filter in Teams and fixed spacing & height issues in Teams tabs (6192) by tahaali-dev in https://github.com/BerriAI/litellm/pull/8357
* Revert "Added filter in Teams and fixed spacing & height issues in Teams tabs (6192)" by krrishdholakia in https://github.com/BerriAI/litellm/pull/8416
* Allow editing model api key + provider on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8406
* Allow org admin to create teams on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8407

New Contributors
* CakeCrusher made their first contribution in https://github.com/BerriAI/litellm/pull/8404

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.8...v1.61.0.dev1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.0.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 239.66008324484403 | 6.157355937828466 | 0.0033427556665735435 | 1842 | 1 | 171.46882700001242 | 4024.462443999994 |
| Aggregated | Passed ✅ | 210.0 | 239.66008324484403 | 6.157355937828466 | 0.0033427556665735435 | 1842 | 1 | 171.46882700001242 | 4024.462443999994 |

1.60.8

What's Changed
* UI Updates by krrishdholakia in https://github.com/BerriAI/litellm/pull/8345
* OIDC Scope based model access by krrishdholakia in https://github.com/BerriAI/litellm/pull/8343
* Fix azure max retries error by krrishdholakia in https://github.com/BerriAI/litellm/pull/8340
* Update deepseek API prices for 2025-02-08 by Winston-503 in https://github.com/BerriAI/litellm/pull/8363
* fix(nvidia_nim/embed.py): add 'dimensions' support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8302
* fix: dictionary changed size during iteration error (8327) by krrishdholakia in https://github.com/BerriAI/litellm/pull/8341
* fix: add azure/o1-2024-12-17 to model_prices_and_context_window.json by byrongrogan in https://github.com/BerriAI/litellm/pull/8371
* (Security fix) Mask redis pwd on `/cache/ping` + add timeout value and elapsed time on azure + http calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/8377
* Handle azure deepseek reasoning response (8288) by krrishdholakia in https://github.com/BerriAI/litellm/pull/8366
* Anthropic Citations API Support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8382
* (Feat) - Add `/bedrock/invoke` support for all Anthropic models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8383
* O3 mini native streaming support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8387


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.6...v1.60.8
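
For the `dimensions` support added to nvidia_nim embeddings above (PR 8302), a hedged sketch of an embeddings call through the proxy; the model name is illustrative, and `dimensions` follows the OpenAI embeddings parameter:

```
# Illustrative: model name and key are placeholders
curl http://localhost:4000/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "nvidia_nim/nvidia/nv-embedqa-e5-v5",
    "input": "LiteLLM proxies embedding requests",
    "dimensions": 512
  }'
```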



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.8
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 170.0 | 189.56173781509457 | 6.206468643400922 | 0.0 | 1855 | 0 | 149.30551800000558 | 3488.08786699999 |
| Aggregated | Passed ✅ | 170.0 | 189.56173781509457 | 6.206468643400922 | 0.0 | 1855 | 0 | 149.30551800000558 | 3488.08786699999 |

v1.60.4-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.2-dev1...v1.60.4-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.4-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 150.928871417902 | 6.3094717994281515 | 0.0 | 1888 | 0 | 115.36740400003964 | 821.5439119999814 |
| Aggregated | Passed ✅ | 140.0 | 150.928871417902 | 6.3094717994281515 | 0.0 | 1888 | 0 | 115.36740400003964 | 821.5439119999814 |

1.60.6

What's Changed
* Azure OpenAI improvements - o3 native streaming, improved tool call + response format handling by krrishdholakia in https://github.com/BerriAI/litellm/pull/8292
* Fix edit team on ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/8295
* Improve rpm check on keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/8301
* docs: fix enterprise links by wagnerjt in https://github.com/BerriAI/litellm/pull/8294
* Add gemini-2.0-flash pricing + model info by krrishdholakia in https://github.com/BerriAI/litellm/pull/8303
* Add Arize Cookbook for Turning on LiteLLM Proxy by exiao in https://github.com/BerriAI/litellm/pull/8336
* Add aistudio GEMINI 2.0 to model_prices_and_context_window.json by dceluis in https://github.com/BerriAI/litellm/pull/8335
* Fix pricing for Gemini 2.0 Flash 001 by elabbarw in https://github.com/BerriAI/litellm/pull/8320
* [DOCS] Update local_debugging.md by rokbenko in https://github.com/BerriAI/litellm/pull/8308
* (Bug Fix - Langfuse) - fix for when model response has `choices=[]` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8339
* Fixed meta llama 3.3 key for Databricks API by anton164 in https://github.com/BerriAI/litellm/pull/8093
* fix(utils.py): handle key error in msg validation by krrishdholakia in https://github.com/BerriAI/litellm/pull/8325
* (bug fix router.py) - safely handle `choices=[]` on llm responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8342
* (QA+UI) - e2e flow for adding assembly ai passthrough endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8337

New Contributors
* exiao made their first contribution in https://github.com/BerriAI/litellm/pull/8336
* dceluis made their first contribution in https://github.com/BerriAI/litellm/pull/8335
* rokbenko made their first contribution in https://github.com/BerriAI/litellm/pull/8308
* anton164 made their first contribution in https://github.com/BerriAI/litellm/pull/8093

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.5...v1.60.6
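
With gemini-2.0-flash pricing and model info registered above (PR 8303), requests for that model can be cost-tracked by the proxy. A minimal sketch, assuming the model is routed via Google AI Studio and credentials are configured:

```
# Illustrative: key is a placeholder; model alias depends on your proxy config
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "gemini/gemini-2.0-flash",
    "messages": [{"role": "user", "content": "Summarize this release"}]
  }'
```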



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.6
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 217.05167674521235 | 6.288425886864887 | 0.0 | 1880 | 0 | 164.17646499996863 | 2306.284880000021 |
| Aggregated | Passed ✅ | 200.0 | 217.05167674521235 | 6.288425886864887 | 0.0 | 1880 | 0 | 164.17646499996863 | 2306.284880000021 |
