# LiteLLM

Latest version: v1.52.14


## 1.46.4

### What's Changed
* Bump next from 14.1.1 to 14.2.10 in /ui/litellm-dashboard by dependabot in https://github.com/BerriAI/litellm/pull/5753
* [Fix] o1-mini causes pydantic warnings on `reasoning_tokens` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5754
* [Feat-Proxy-DataDog] Log Redis, Postgres Failure events on DataDog by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5750
* [Fix] Router/ Proxy - Tag Based routing, raise correct error when no deployments found and tag filtering is on by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5745
* [Feat] Log Request metadata on gcs bucket logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5743


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.46.2...v1.46.4



### Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.46.4
```
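
Once the container is running, the proxy exposes an OpenAI-compatible API on port 4000 (the same `/chat/completions` endpoint exercised in the load tests below). A minimal sketch of querying it with the OpenAI Python SDK; the model alias and API key are placeholders for whatever is configured on your proxy:

```python
# Sketch: call a locally running LiteLLM proxy through its
# OpenAI-compatible endpoint. Model alias and key are placeholders.
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # port published by `docker run -p 4000:4000`
    api_key="sk-1234",                 # placeholder; use a real key if proxy auth is enabled
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model alias configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)
```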



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 162.23 | 6.30 | 0.0 | 1887 | 0 | 111.55 | 2507.14 |
| Aggregated | Passed ✅ | 140.0 | 162.23 | 6.30 | 0.0 | 1887 | 0 | 111.55 | 2507.14 |

## 1.46.2

### What's Changed
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5731
* [Fix-Proxy] deal with case when check view exists returns None by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5740
* Revert "[Fix-Proxy] deal with case when check view exists returns None " by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5741
* Litellm fix router testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/5748


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.46.1...v1.46.2



### Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.46.2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 125.38 | 6.40 | 0.0 | 1915 | 0 | 86.76 | 2125.43 |
| Aggregated | Passed ✅ | 110.0 | 125.38 | 6.40 | 0.0 | 1915 | 0 | 86.76 | 2125.43 |

## 1.46.1

### What's Changed
* Litellm stable dev by krrishdholakia in https://github.com/BerriAI/litellm/pull/5711
* (models): Enable JSON Schema Support for Gemini 1.5 Flash Models by F1bos in https://github.com/BerriAI/litellm/pull/5708 (usage sketch after this list)
* Add unsupported o1 params by Manouchehri in https://github.com/BerriAI/litellm/pull/5722
* Warning fix for Pydantic 2.0 (#5679) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5707
* [Feat-Proxy] Add upperbound key duration param by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5727
* [Fix-Proxy] log exceptions from azure key vault on verbose_logger.exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5719
* [Fix-Proxy] Azure Key Management - Secret Manager by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5728
* [Feat-Proxy] Slack Alerting - allow using os.environ/ vars for alert to webhook url by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5726
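
The Gemini JSON Schema change above is used through LiteLLM's `response_format` parameter. A hedged sketch, with the `response_schema` shape assumed from LiteLLM's Gemini docs of this era; the schema itself is an illustrative example:

```python
# Sketch: request schema-constrained JSON from Gemini 1.5 Flash via litellm.
# The response_format shape is an assumption based on LiteLLM's Gemini docs.
import litellm

schema = {
    "type": "object",
    "properties": {"title": {"type": "string"}, "year": {"type": "integer"}},
    "required": ["title", "year"],
}

response = litellm.completion(
    model="gemini/gemini-1.5-flash",  # expects GEMINI_API_KEY in the environment
    messages=[{"role": "user", "content": "Name one sci-fi novel as JSON."}],
    response_format={"type": "json_object", "response_schema": schema},
)
print(response.choices[0].message.content)
```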


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.46.0...v1.46.1



### Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.46.1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 170.80 | 6.42 | 0.0 | 1920 | 0 | 115.49 | 2641.88 |
| Aggregated | Passed ✅ | 150.0 | 170.80 | 6.42 | 0.0 | 1920 | 0 | 115.49 | 2641.88 |

## 1.46.1.dev2

### What's Changed
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5731
* [Fix-Proxy] deal with case when check view exists returns None by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5740
* Revert "[Fix-Proxy] deal with case when check view exists returns None " by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5741


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.46.1...v1.46.1.dev2



### Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.46.1.dev2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 127.44 | 6.48 | 0.0 | 1940 | 0 | 86.10 | 2104.97 |
| Aggregated | Passed ✅ | 110.0 | 127.44 | 6.48 | 0.0 | 1940 | 0 | 86.10 | 2104.97 |

## 1.46.1.dev1

### What's Changed
* LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5731


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.46.1...v1.46.1.dev1



### Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.46.1.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 134.64 | 6.42 | 0.0 | 1919 | 0 | 89.30 | 3319.42 |
| Aggregated | Passed ✅ | 110.0 | 134.64 | 6.42 | 0.0 | 1919 | 0 | 89.30 | 3319.42 |

## 1.46.0

### What's Changed
* [Fix] Performance - use in memory cache when downloading images from a url by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5657
* [Feat - Perf Improvement] DataDog Logger 91% lower latency by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5687
* (models): Added missing gemini experimental models + fixed pricing for gemini-1.5-pro-exp-0827 by F1bos in https://github.com/BerriAI/litellm/pull/5693
* LiteLLM Minor Fixes and Improvements (09/13/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5689
* LiteLLM Minor Fixes and Improvements (09/14/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5697
* Update model_prices_and_context_window.json by Ahmet-Dedeler in https://github.com/BerriAI/litellm/pull/5700
* [Feat] Add `max_completion_tokens` param by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5691 (usage sketch after this list)
* [Feat] Stable Prs - Sep 14th (Sambanova API) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5703
* [Fix] Router cooldown logic - use % thresholds instead of allowed fails to cooldown deployments by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5698
* [Feat-Prometheus] Track exception status on `litellm_deployment_failure_responses` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5706
* [Feat-Prometheus] Add prometheus metric for tracking cooldown events by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5705
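
For the `max_completion_tokens` addition above, a minimal sketch; this is the OpenAI parameter that o1-series models require in place of `max_tokens`:

```python
# Sketch: pass max_completion_tokens through litellm.completion.
# o1-series models reject max_tokens, so this parameter is used instead.
import litellm

response = litellm.completion(
    model="o1-mini",  # expects OPENAI_API_KEY in the environment
    messages=[{"role": "user", "content": "Summarize LiteLLM in one sentence."}],
    max_completion_tokens=256,
)
print(response.choices[0].message.content)
```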

### New Contributors
* Ahmet-Dedeler made their first contribution in https://github.com/BerriAI/litellm/pull/5700

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.45.0...v1.46.0



### Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.46.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 580.0 | 1015.32 | 5.02 | 0.0 | 1503 | 0 | 71.49 | 12269.32 |
| Aggregated | Failed ❌ | 580.0 | 1015.32 | 5.02 | 0.0 | 1503 | 0 | 71.49 | 12269.32 |
