LiteLLM

Latest version: v1.52.14


1.46.0.dev1

What's Changed
* Litellm stable dev by krrishdholakia in https://github.com/BerriAI/litellm/pull/5711
* (models): Enable JSON Schema Support for Gemini 1.5 Flash Models by F1bos in https://github.com/BerriAI/litellm/pull/5708
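
PR 5708 routes OpenAI-style JSON schema requests through to Gemini 1.5 Flash. A minimal sketch of what this enables, assuming the `response_format` / `response_schema` convention from the LiteLLM Gemini docs and a `GEMINI_API_KEY` in the environment (the schema itself is a made-up example):

```python
from litellm import completion

# Assumes GEMINI_API_KEY is set; "gemini/" is LiteLLM's Google AI Studio prefix.
response = completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "List two colors as a JSON array of strings."}],
    response_format={
        "type": "json_object",
        # Illustrative schema; PR 5708 adds the enforcement path for Flash models.
        "response_schema": {"type": "array", "items": {"type": "string"}},
    },
)
print(response.choices[0].message.content)
```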


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.46.0...v1.46.0.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.46.0.dev1
```
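
Once the container is up, the proxy speaks the OpenAI API on port 4000, so any OpenAI SDK can point at it. A minimal sketch in Python (the key and model name are placeholders; the proxy needs a matching model configured):

```python
import openai

# The LiteLLM proxy is OpenAI-compatible, so the stock client works as-is.
client = openai.OpenAI(
    api_key="sk-1234",                 # placeholder; use your proxy's master/virtual key
    base_url="http://localhost:4000",  # the port mapped in the docker run above
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed to be configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```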



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 150.91571355091406 | 6.398594313787751 | 0.0 | 1915 | 0 | 102.77270100004898 | 1716.2696340000139 |
| Aggregated | Passed ✅ | 130.0 | 150.91571355091406 | 6.398594313787751 | 0.0 | 1915 | 0 | 102.77270100004898 | 1716.2696340000139 |

1.45.0

What's Changed
* fix(proxy/utils.py): auto-update if required view missing from db. raise warning for optional views. by krrishdholakia in https://github.com/BerriAI/litellm/pull/5675
* fix(user_dashboard.tsx): don't call /global/spend on startup by krrishdholakia in https://github.com/BerriAI/litellm/pull/5668
* Add o1 models on OpenRouter by Manouchehri in https://github.com/BerriAI/litellm/pull/5676 (see the sketch after this list)
* LiteLLM Minor Fixes and Improvements (09/12/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5658
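
With PR 5676 the o1 models are reachable through OpenRouter via LiteLLM's `openrouter/` model prefix, as flagged in the list above. A minimal sketch; the exact model id is an assumption and `OPENROUTER_API_KEY` must be set:

```python
import os
from litellm import completion

os.environ["OPENROUTER_API_KEY"] = "your-openrouter-key"  # placeholder

# The "openrouter/" prefix routes the call through OpenRouter.
response = completion(
    model="openrouter/openai/o1-preview",  # model id assumed from the PR
    messages=[{"role": "user", "content": "Hello via OpenRouter"}],
)
print(response.choices[0].message.content)
```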


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.28...v1.45.0



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.45.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 177.0661821763475 | 6.386473018172288 | 0.0 | 1911 | 0 | 114.72044499998901 | 1404.9775940000018 |
| Aggregated | Passed ✅ | 150.0 | 177.0661821763475 | 6.386473018172288 | 0.0 | 1911 | 0 | 114.72044499998901 | 1404.9775940000018 |

1.45.0.dev2

What's Changed
* [Fix] Performance - use in memory cache when downloading images from a url by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5657
* [Feat - Perf Improvement] DataDog Logger 91% lower latency by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5687 (see the sketch after this list)
* (models): Added missing gemini experimental models + fixed pricing for gemini-1.5-pro-exp-0827 by F1bos in https://github.com/BerriAI/litellm/pull/5693
* LiteLLM Minor Fixes and Improvements (09/13/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5689
* LiteLLM Minor Fixes and Improvements (09/14/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5697
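
For the DataDog change, enabling the logger is a one-line callback setting; a minimal sketch, assuming `DD_API_KEY` and `DD_SITE` are honored as described in the LiteLLM DataDog docs:

```python
import os
import litellm
from litellm import completion

os.environ["DD_API_KEY"] = "your-dd-api-key"  # placeholder
os.environ["DD_SITE"] = "datadoghq.com"       # placeholder

# Route success logs to DataDog; PR 5687 targets this path's latency.
litellm.success_callback = ["datadog"]

response = completion(
    model="gpt-3.5-turbo",  # assumes OPENAI_API_KEY is set
    messages=[{"role": "user", "content": "ping"}],
)
```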


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.45.0...v1.45.0.dev2



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.45.0.dev2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 205.17052222102976 | 6.231000961709177 | 0.0 | 1864 | 0 | 112.44966599997497 | 9699.892626999997 |
| Aggregated | Passed ✅ | 140.0 | 205.17052222102976 | 6.231000961709177 | 0.0 | 1864 | 0 | 112.44966599997497 | 9699.892626999997 |

1.45.0.dev1

What's Changed
* [Fix] Performance - use in memory cache when downloading images from a url by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5657
* [Feat - Perf Improvement] DataDog Logger 91% lower latency by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5687
* (models): Added missing gemini experimental models + fixed pricing for gemini-1.5-pro-exp-0827 by F1bos in https://github.com/BerriAI/litellm/pull/5693
* LiteLLM Minor Fixes and Improvements (09/13/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5689
* LiteLLM Minor Fixes and Improvements (09/14/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5697
* Update model_prices_and_context_window.json by Ahmet-Dedeler in https://github.com/BerriAI/litellm/pull/5700
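
The `model_prices_and_context_window.json` update above is what `litellm.get_model_info` reads back at runtime; a minimal sketch (the model id is just one example entry from that file):

```python
import litellm

# Pulls limits and per-token pricing from the bundled cost map.
info = litellm.get_model_info(model="gpt-4o")
print(info["max_input_tokens"], info["input_cost_per_token"])
```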

New Contributors
* Ahmet-Dedeler made their first contribution in https://github.com/BerriAI/litellm/pull/5700

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.45.0...v1.45.0.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.45.0.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 167.2279827818844 | 6.419822417892336 | 0.0 | 1921 | 0 | 112.10127999999031 | 827.1091520000482 |
| Aggregated | Passed ✅ | 140.0 | 167.2279827818844 | 6.419822417892336 | 0.0 | 1921 | 0 | 112.10127999999031 | 827.1091520000482 |

1.44.28

What's Changed
* [Feat] Add OpenAI O1 Family Param mapping / config by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5666 (see the sketch after this list)
* [Feat-Perf] Use Batching + Squashing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5645
* [Fix-Router] Don't cooldown when only 1 deployment exists by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5673
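
The o1 param-mapping config referenced above matters because the o1 family rejects common params (for example `temperature`) and expects `max_completion_tokens` rather than `max_tokens`. A minimal sketch of calling o1 through LiteLLM, assuming an `OPENAI_API_KEY` in the environment:

```python
import litellm
from litellm import completion

# Drop params the target model does not accept instead of raising.
litellm.drop_params = True

response = completion(
    model="o1-preview",
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
    max_tokens=256,   # translated/handled for o1 by the new param mapping
    temperature=0.2,  # unsupported on o1; dropped via drop_params
)
print(response.choices[0].message.content)
```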


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.27...v1.44.28



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.28
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 170.38330738267751 | 6.367723091899888 | 0.0 | 1905 | 0 | 107.99958900003048 | 1323.9164130000063 |
| Aggregated | Passed ✅ | 140.0 | 170.38330738267751 | 6.367723091899888 | 0.0 | 1905 | 0 | 107.99958900003048 | 1323.9164130000063 |

1.44.27

What's Changed
* [Fix Ci/cd] Separate testing pipeline for litellm router by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5655
* Add gpt o1 and o1 mini models by lowjiansheng in https://github.com/BerriAI/litellm/pull/5660
* (models): Add o1 pricing. by Manouchehri in https://github.com/BerriAI/litellm/pull/5661
* O1 pricing fix by Manouchehri in https://github.com/BerriAI/litellm/pull/5662 (see the sketch after this list)
* Refactor 'check_view_exists' logic by krrishdholakia in https://github.com/BerriAI/litellm/pull/5659
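
The two o1 pricing PRs above land in LiteLLM's bundled cost map, which `litellm.model_cost` and `completion_cost` expose; a minimal sketch (exact figures depend on the shipped map, and the call assumes an `OPENAI_API_KEY`):

```python
import litellm
from litellm import completion, completion_cost

# Per-token prices from the bundled model_prices_and_context_window.json.
pricing = litellm.model_cost["o1-preview"]
print(pricing["input_cost_per_token"], pricing["output_cost_per_token"])

# Cost an actual response against that map.
response = completion(
    model="o1-preview",
    messages=[{"role": "user", "content": "ping"}],
)
print(f"request cost: ${completion_cost(completion_response=response):.6f}")
```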


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.26...v1.44.27



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.27
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 171.4800647204195 | 6.38157690469209 | 0.0 | 1910 | 0 | 109.61952399998154 | 1963.4106249999945 |
| Aggregated | Passed ✅ | 140.0 | 171.4800647204195 | 6.38157690469209 | 0.0 | 1910 | 0 | 109.61952399998154 | 1963.4106249999945 |
