LiteLLM


1.44.27.dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.27...v1.44.27.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.27.dev1
```
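Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal smoke test against the `/chat/completions` route exercised in the load tests below; the model name and the `sk-1234` key are placeholders for whatever is configured on your proxy:

```shell
# Query the proxy's OpenAI-compatible chat endpoint.
# "gpt-3.5-turbo" and the Bearer key are placeholders -- substitute a model
# and virtual key configured on your own proxy.
curl -X POST http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```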



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 182.24 | 6.31 | 0.0 | 1887 | 0 | 112.33 | 1140.34 |
| Aggregated | Passed ✅ | 150.0 | 182.24 | 6.31 | 0.0 | 1887 | 0 | 112.33 | 1140.34 |

1.44.26

What's Changed
* LiteLLM Minor Fixes and Improvements (11/09/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5634


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.25...v1.44.26



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.26
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 175.17 | 6.33 | 0.0 | 1895 | 0 | 118.02 | 1899.84 |
| Aggregated | Passed ✅ | 150.0 | 175.17 | 6.33 | 0.0 | 1895 | 0 | 118.02 | 1899.84 |

1.44.25

What's Changed
* Bump send and express in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/5626
* Bump serve-static and express in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/5628
* Bump body-parser and express in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/5629
* docs: update ai21 docs by miri-bar in https://github.com/BerriAI/litellm/pull/5631
* Add gemini 1.5 flash exp 0827 by BabyChouSr in https://github.com/BerriAI/litellm/pull/5636
* LiteLLM Minor Fixes and Improvements (09/10/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5618
* Add the option to specify a schema via env variable by steffen-sbt in https://github.com/BerriAI/litellm/pull/5640 (see the sketch after this list)
* [Langsmith Perf Improvement] Use /batch for Langsmith Logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5638
* [Fix-Perf] OTEL use sensible default values for logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5642
* [Feat] Add Load Testing for Langsmith, and OTEL logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5646
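A sketch of the schema-via-env-variable option from PR 5640. The variable name `DATABASE_SCHEMA` is an assumption based on the PR title, so verify it against the PR diff before relying on it:

```shell
# Assumed wiring for PR 5640: point the proxy's Prisma client at a
# non-default Postgres schema via an env var.
# DATABASE_SCHEMA is an assumed name -- confirm against the PR.
docker run \
-e DATABASE_URL="postgresql://user:pass@host:5432/litellm" \
-e DATABASE_SCHEMA="litellm_schema" \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.25
```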

New Contributors
* miri-bar made their first contribution in https://github.com/BerriAI/litellm/pull/5631
* BabyChouSr made their first contribution in https://github.com/BerriAI/litellm/pull/5636
* steffen-sbt made their first contribution in https://github.com/BerriAI/litellm/pull/5640

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.24...v1.44.25



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.25
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 192.58 | 6.27 | 0.0 | 1876 | 0 | 119.97 | 3057.24 |
| Aggregated | Passed ✅ | 150.0 | 192.58 | 6.27 | 0.0 | 1876 | 0 | 119.97 | 3057.24 |

1.44.24

What's Changed
* [Feat-Proxy] allow turning off message logging for OTEL (callback specific) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5603
* fix 5614: Fixed the template error null pointer evaluation by Pit-Storm in https://github.com/BerriAI/litellm/pull/5615
* [Fix-Proxy] Regenerate keys when no duration is passed by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5616 (see the sketch after this list)
* [Fix-Perf] Vertex AI cache httpx clients by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5619
* [Feat-Perf] Use common helper to get async httpx clients for all providers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5620
* [Feat-Perf Improvement Vertex] Only Refresh credentials when token is expired by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5622
* Add Cohere refresh models and update pricing by jalammar in https://github.com/BerriAI/litellm/pull/5571
* [Feat-Vertex Perf] Use async func to get auth credentials by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5623
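One entry above touches key regeneration. A hedged sketch of regenerating a virtual key through the proxy; the `/key/{key}/regenerate` route shown is an assumption pieced together from the PR titles, so confirm it against the proxy docs:

```shell
# Regenerate an existing virtual key (route is an assumption based on
# PR 5616). With no "duration" field in the body, the fix in PR 5616
# governs how long the regenerated key lives.
curl -X POST "http://localhost:4000/key/sk-old-key/regenerate" \
  -H "Authorization: Bearer sk-master-key" \
  -H "Content-Type: application/json" \
  -d '{}'
```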

New Contributors
* jalammar made their first contribution in https://github.com/BerriAI/litellm/pull/5571

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.23...v1.44.24



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.24
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 144.67 | 6.37 | 0.0 | 1907 | 0 | 99.52 | 2800.61 |
| Aggregated | Passed ✅ | 120.0 | 144.67 | 6.37 | 0.0 | 1907 | 0 | 99.52 | 2800.61 |

v1.44.23-stable
What's Changed
* build(deployment.yaml): Fix port + allow setting database url in helm chart by krrishdholakia in https://github.com/BerriAI/litellm/pull/5587
* [Feat] support using "callbacks" for prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5599
* Properly use `allowed_fails_policy` when it has fields with a value of 0 by eladsegal in https://github.com/BerriAI/litellm/pull/5604
* [Feat-Proxy] Allow using key based logging for success and failure by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5606
* [Fix - Otel logger] Set a max queue size of 100 logs for OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5576
* [Feat] Tag Routing - Allow setting default deployments by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5601
* LiteLLM Minor Fixes and Improvements (09/07/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5580
* LiteLLM Minor Fixes and Improvements (09/09/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5602


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.22...v1.44.23-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.23-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 153.73 | 6.34 | 0.0 | 1898 | 0 | 106.95 | 2384.35 |
| Aggregated | Passed ✅ | 130.0 | 153.73 | 6.34 | 0.0 | 1898 | 0 | 106.95 | 2384.35 |

1.44.23

What's Changed
* build(deployment.yaml): Fix port + allow setting database url in helm chart by krrishdholakia in https://github.com/BerriAI/litellm/pull/5587
* [Feat] support using "callbacks" for prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5599
* Properly use `allowed_fails_policy` when it has fields with a value of 0 by eladsegal in https://github.com/BerriAI/litellm/pull/5604
* [Feat-Proxy] Allow using key based logging for success and failure by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5606
* [Fix - Otel logger] Set a max queue size of 100 logs for OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5576
* [Feat] Tag Routing - Allow setting default deployments by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5601
* LiteLLM Minor Fixes and Improvements (09/07/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5580
* LiteLLM Minor Fixes and Improvements (09/09/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5602


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.22...v1.44.23



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.23
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 147.45 | 6.46 | 0.0 | 1933 | 0 | 84.24 | 7721.60 |
| Aggregated | Passed ✅ | 110.0 | 147.45 | 6.46 | 0.0 | 1933 | 0 | 84.24 | 7721.60 |

v1.44.22-stable
What's Changed
* feat(langsmith.py): support sampling langsmith traces by krrishdholakia in https://github.com/BerriAI/litellm/pull/5577
* fix missing class object instantiation in custom_llm_server provider documentation's quick start by pradhyumna85 in https://github.com/BerriAI/litellm/pull/5578
* litellm-helm: fix missing resource definitions in initContainer and missing DBname value for envVars in deployment.yaml by Pit-Storm in https://github.com/BerriAI/litellm/pull/5562
* [Feat] Allow setting up Redis Cluster using .env vars by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5579
* [Feat] Slack Alerting - Allow setting custom spend report frequency by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5581
* [Feat UI] allow setting input / output cost per M tokens by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5582
* [Docs] - Add Lifecycle of a request through LiteLLM Gateway by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5585
* Feat - Proxy add /key/list endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5586 (example below)
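
A minimal sketch of the new `/key/list` endpoint from PR 5586, assuming the proxy runs on localhost:4000 and `sk-master-key` stands in for your master key:

```shell
# List the virtual keys registered with the proxy (endpoint added in
# PR 5586). Requires master-key (admin) authorization.
curl -X GET "http://localhost:4000/key/list" \
  -H "Authorization: Bearer sk-master-key"
```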

New Contributors
* pradhyumna85 made their first contribution in https://github.com/BerriAI/litellm/pull/5578
* Pit-Storm made their first contribution in https://github.com/BerriAI/litellm/pull/5562

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.21...v1.44.22-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.22-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 170.76 | 6.33 | 0.0 | 1892 | 0 | 112.68 | 5122.23 |
| Aggregated | Passed ✅ | 140.0 | 170.76 | 6.33 | 0.0 | 1892 | 0 | 112.68 | 5122.23 |

1.44.22

What's Changed
* feat(langsmith.py): support sampling langsmith traces by krrishdholakia in https://github.com/BerriAI/litellm/pull/5577
* fix missing class object instantiation in custom_llm_server provider documentation's quick start by pradhyumna85 in https://github.com/BerriAI/litellm/pull/5578
* litellm-helm: fix missing resource definitions in initContainer and missing DBname value for envVars in deployment.yaml by Pit-Storm in https://github.com/BerriAI/litellm/pull/5562
* [Feat] Allow setting up Redis Cluster using .env vars by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5579 (see the sketch after this list)
* [Feat] Slack Alerting - Allow setting custom spend report frequency by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5581
* [Feat UI] allow setting input / output cost per M tokens by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5582
* [Docs] - Add Lifecycle of a request through LiteLLM Gateway by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5585
* Feat - Proxy add /key/list endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5586
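
For the Redis Cluster entry above, a sketch of the env-var setup; the `REDIS_CLUSTER_NODES` name and its JSON shape are assumptions drawn from LiteLLM's caching docs of the same era, so confirm against PR 5579:

```shell
# Assumed setup for PR 5579: hand the proxy its Redis Cluster nodes as a
# JSON list in an env var (REDIS_CLUSTER_NODES is an assumed name).
docker run \
-e REDIS_CLUSTER_NODES='[{"host": "127.0.0.1", "port": "7001"}, {"host": "127.0.0.1", "port": "7003"}]' \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.22
```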

New Contributors
* pradhyumna85 made their first contribution in https://github.com/BerriAI/litellm/pull/5578
* Pit-Storm made their first contribution in https://github.com/BerriAI/litellm/pull/5562

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.21...v1.44.22



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.22
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 166.42 | 6.30 | 0.0 | 1887 | 0 | 114.11 | 591.43 |
| Aggregated | Passed ✅ | 140.0 | 166.42 | 6.30 | 0.0 | 1887 | 0 | 114.11 | 591.43 |

v1.44.21-stable
What's Changed
* [Fix] OTEL - Unsupported `|` type annotations in Python 3.9 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5548
* Update Hugging Face Doc by gary149 in https://github.com/BerriAI/litellm/pull/5411
* [Fix-Datadog Logger] Log exceptions when callbacks face an error by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5570
* fix(navbar.tsx): only show 'get enterprise license' if user is not already a premium user by krrishdholakia in https://github.com/BerriAI/litellm/pull/5568
* LiteLLM Minor Fixes and Improvements (08/06/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5567
* [Feat-Proxy] Use DB Views to Get spend per Tag (Usage endpoints) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5574
* [Feat] Allow setting duration time when regenerating key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5566
* [Feat] Add cost tracking for cohere rerank by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5573
* Allow client-side credentials to be sent to proxy (accept only if complete credentials are given) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5575 (example below)
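
For the client-side-credentials entry, a hedged sketch of what such a request could look like; passing `api_key` and `api_base` in the request body mirrors LiteLLM's `completion()` kwargs and is an assumption based on the PR description:

```shell
# Supply complete provider credentials with the request itself
# (field names are assumptions based on PR 5575).
curl -X POST http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-proxy-key" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hi"}],
    "api_key": "sk-your-provider-key",
    "api_base": "https://api.openai.com/v1"
  }'
```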

New Contributors
* gary149 made their first contribution in https://github.com/BerriAI/litellm/pull/5411

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.19...v1.44.21-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.44.21-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 137.29 | 6.41 | 0.0 | 1916 | 0 | 96.67 | 1538.31 |
| Aggregated | Passed ✅ | 110.0 | 137.29 | 6.41 | 0.0 | 1916 | 0 | 96.67 | 1538.31 |
