LiteLLM

Latest version: v1.52.14


1.48.6

What's Changed
* (perf improvement proxy) use one async async_batch_set_cache in parallel request limiter by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5956
* (fix proxy) model_group/info support rerank models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5955
* (perf improvement proxy) use one redis set cache to update spend in db (30-40% perf improvement) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5960
* (perf proxy) don't run redis async_set_cache_pipeline when empty list passed to it by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5962
* [Feat Proxy] Allow using hypercorn for http v2 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5950
* (feat proxy prometheus) track virtual key, key alias, error code, error code class on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5968
* (proxy prometheus) track api key and team in latency metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5966
* (feat prometheus proxy) track remaining team and key alias in deployment failure metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5967
* (proxy docker) add sentry sdk to litellm docker by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5965


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.5...v1.48.6
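
The three Prometheus entries above add per-key, per-team, and error-class labels to the proxy's metrics. A minimal sketch of turning the integration on; the model entry, file name, and key reference are placeholders, and the callback names follow LiteLLM's documented `litellm_settings` format (confirm against the Prometheus docs for your version):

```shell
# Sketch: enable the Prometheus callback on the proxy and scrape /metrics.
# The model entry and file name below are placeholders.
cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["prometheus"]
  failure_callback: ["prometheus"]
EOF

litellm --config litellm_config.yaml --port 4000 &

# Metrics (including the key/team/error-class labels described above)
# are exposed on the standard Prometheus endpoint:
curl http://localhost:4000/metrics
```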



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.6
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
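
Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A quick smoke test, assuming you started the proxy with a master key and configured a model; the key `sk-1234` and the model name below are placeholders for your own values:

```shell
# Sketch: hit the proxy's OpenAI-compatible chat endpoint.
# Replace the key and model name with values configured on your proxy.
curl http://localhost:4000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer sk-1234" \
    -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}]
    }'
```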

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 123.41082122468366 | 6.334644427987359 | 0.0 | 1896 | 0 | 88.9820840000084 | 3007.4007179999853 |
| Aggregated | Passed ✅ | 110.0 | 123.41082122468366 | 6.334644427987359 | 0.0 | 1896 | 0 | 88.9820840000084 | 3007.4007179999853 |

1.48.5

What's Changed
* LiteLLM Minor Fixes & Improvements (09/27/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5938


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.4...v1.48.5



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.5
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 126.34459647382127 | 6.451296394624611 | 0.0 | 1929 | 0 | 89.99831900001709 | 744.0104459999759 |
| Aggregated | Passed ✅ | 110.0 | 126.34459647382127 | 6.451296394624611 | 0.0 | 1929 | 0 | 89.99831900001709 | 744.0104459999759 |

v1.48.4-stable
What's Changed
* docs(vertex.md): fix codestral fim placement by khanh-alice in https://github.com/BerriAI/litellm/pull/5946
* [Feat] Langfuse - allow setting `LANGFUSE_FLUSH_INTERVAL` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5944
* LiteLLM Minor Fixes & Improvements (09/26/2024) (5925) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5937
* [Vertex Multimodal embeddings] Fixes to work with Langchain OpenAI Embedding by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5949
* fix(proxy/utils.py): fix create missing views check by krrishdholakia in https://github.com/BerriAI/litellm/pull/5953

New Contributors
* khanh-alice made their first contribution in https://github.com/BerriAI/litellm/pull/5946

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.3...v1.48.4-stable
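
The Langfuse entry above makes the flush interval configurable via `LANGFUSE_FLUSH_INTERVAL`. A sketch of the environment wiring, with placeholder keys; the interval is presumably in seconds, so confirm against the Langfuse integration docs for your version:

```shell
# Sketch: point LiteLLM's Langfuse logging at your project and
# shorten how often buffered events are flushed. Keys are placeholders.
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_HOST="https://cloud.langfuse.com"
export LANGFUSE_FLUSH_INTERVAL=10   # assumed unit: seconds

# Then enable the callback, e.g. in the proxy config:
#   litellm_settings:
#     success_callback: ["langfuse"]
```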



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.4-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 167.97848202913636 | 6.422357115950983 | 0.0 | 1922 | 0 | 121.86883799998327 | 1237.94400700001 |
| Aggregated | Passed ✅ | 140.0 | 167.97848202913636 | 6.422357115950983 | 0.0 | 1922 | 0 | 121.86883799998327 | 1237.94400700001 |

1.48.5.dev1

What's Changed
* (perf improvement proxy) use one async async_batch_set_cache in parallel request limiter by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5956
* (fix proxy) model_group/info support rerank models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5955
* (perf improvement proxy) use one redis set cache to update spend in db (30-40% perf improvement) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5960
* (perf proxy) don't run redis async_set_cache_pipeline when empty list passed to it by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5962
* [Feat Proxy] Allow using hypercorn for http v2 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5950


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.5...v1.48.5.dev1
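
The `model_group/info` fix above extends the proxy's model-group metadata endpoint to also report rerank models. A quick way to inspect it against a running proxy; the key is a placeholder:

```shell
# Sketch: list model group metadata (now including rerank models).
curl http://localhost:4000/model_group/info \
    -H "Authorization: Bearer sk-1234"
```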



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.5.dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 129.13028895631774 | 6.429070638139938 | 0.0 | 1923 | 0 | 90.43558999997003 | 762.7744659999962 |
| Aggregated | Passed ✅ | 110.0 | 129.13028895631774 | 6.429070638139938 | 0.0 | 1923 | 0 | 90.43558999997003 | 762.7744659999962 |

v1.48.5-stable
What's Changed
* LiteLLM Minor Fixes & Improvements (09/27/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5938


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.4...v1.48.5-stable



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.5-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 139.2218604568159 | 6.42557693531554 | 0.0 | 1922 | 0 | 103.05609400001003 | 593.882968999992 |
| Aggregated | Passed ✅ | 120.0 | 139.2218604568159 | 6.42557693531554 | 0.0 | 1922 | 0 | 103.05609400001003 | 593.882968999992 |

1.48.4

What's Changed
* docs(vertex.md): fix codestral fim placement by khanh-alice in https://github.com/BerriAI/litellm/pull/5946
* [Feat] Langfuse - allow setting `LANGFUSE_FLUSH_INTERVAL` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5944
* LiteLLM Minor Fixes & Improvements (09/26/2024) (5925) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5937
* [Vertex Multimodal embeddings] Fixes to work with Langchain OpenAI Embedding by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5949
* fix(proxy/utils.py): fix create missing views check by krrishdholakia in https://github.com/BerriAI/litellm/pull/5953

New Contributors
* khanh-alice made their first contribution in https://github.com/BerriAI/litellm/pull/5946

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.3...v1.48.4
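
The Vertex multimodal embeddings fix above targets clients that speak the OpenAI embeddings format against the proxy, such as Langchain's OpenAI embedding class. A curl sketch of the same request shape, assuming a multimodal embedding model has been added to the proxy under the placeholder name `multimodal-embedding`:

```shell
# Sketch: OpenAI-format embeddings request routed through the proxy.
# Model name and key are placeholders for whatever you configured.
curl http://localhost:4000/v1/embeddings \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer sk-1234" \
    -d '{
        "model": "multimodal-embedding",
        "input": ["a photo of a golden retriever"]
    }'
```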



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.4
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 173.52601128308856 | 6.362609450797088 | 0.0 | 1904 | 0 | 125.68342399998755 | 2559.171334000041 |
| Aggregated | Passed ✅ | 150.0 | 173.52601128308856 | 6.362609450797088 | 0.0 | 1904 | 0 | 125.68342399998755 | 2559.171334000041 |

1.48.3

What's Changed
* Add Llama 3.2 90b model on Vertex AI. by Manouchehri in https://github.com/BerriAI/litellm/pull/5908
* Update litellm helm envconfigmap by Pit-Storm in https://github.com/BerriAI/litellm/pull/5872
* LiteLLM Minor Fixes & Improvements (09/24/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5880
* LiteLLM Minor Fixes & Improvements (09/25/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5893
* [feat-Prometheus] Track api key alias and api key hash for remaining tokens metric by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5924
* [Fix proxy perf] Use correct cache key when reading from redis cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5928
* [Fix] Perf use only async functions for get cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5930
* [docs] updated langfuse integration guide by jannikmaierhoefer in https://github.com/BerriAI/litellm/pull/5921
* Upgrade dependencies in dockerfile by Jacobh2 in https://github.com/BerriAI/litellm/pull/5862
* [Fix Azure AI Studio] drop_params_from_unprocessable_entity_error by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5936

New Contributors
* jannikmaierhoefer made their first contribution in https://github.com/BerriAI/litellm/pull/5921

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.2...v1.48.3
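
Several of the fixes above (cache key selection, async-only cache reads) touch the proxy's Redis caching path. For context, a minimal caching sketch; the connection values are placeholders and the `cache_params` layout should be checked against the caching docs for your version:

```shell
# Sketch: enable Redis response caching on the proxy.
# Host, port, and password are placeholders.
cat > litellm_config.yaml <<'EOF'
litellm_settings:
  cache: true
  cache_params:
    type: redis
    host: redis.internal
    port: 6379
    password: os.environ/REDIS_PASSWORD
EOF

litellm --config litellm_config.yaml
```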



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.3
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 118.27303725401755 | 6.443801071488233 | 0.0 | 1929 | 0 | 76.61121600000342 | 2505.7243389999826 |
| Aggregated | Passed ✅ | 100.0 | 118.27303725401755 | 6.443801071488233 | 0.0 | 1929 | 0 | 76.61121600000342 | 2505.7243389999826 |

1.48.2

What's Changed
* Merge: 5815- feat(vertex): Use correct provider for response_schema support check by krrishdholakia in https://github.com/BerriAI/litellm/pull/5829
* [Feat-Router] Allow setting which environment to use a model on by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5892
* [Feat] Improve OTEL Tracking - Require all Redis Cache reads to be logged on OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5881
* [Proxy-Docs] service accounts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5900
* [Feat] add fireworks llama 3.2 models + cost tracking by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5905
* Add gemini-1.5-pro-002 and gemini-1.5-flash-002 by ushuz in https://github.com/BerriAI/litellm/pull/5879
* [Perf improvement Proxy] Use Dual Cache for getting key and team objects by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5903


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.1...v1.48.2
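
With the Gemini 1.5 `-002` and Fireworks Llama 3.2 additions above in the model map, they can be registered on the proxy like any other deployment. A sketch for one of the new Gemini models on Vertex AI; the file name, project, and location values are placeholders and Vertex credentials are assumed:

```shell
# Sketch: add one of the newly supported Gemini -002 models to a proxy config.
# GCP project and location are placeholders; requires Vertex AI credentials.
cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: gemini-1.5-flash-002
    litellm_params:
      model: vertex_ai/gemini-1.5-flash-002
      vertex_project: my-gcp-project
      vertex_location: us-central1
EOF

litellm --config litellm_config.yaml
```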



Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.48.2
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 156.93737172798714 | 6.3760247399828325 | 0.0 | 1908 | 0 | 104.33158699993328 | 1799.4232979999651 |
| Aggregated | Passed ✅ | 130.0 | 156.93737172798714 | 6.3760247399828325 | 0.0 | 1908 | 0 | 104.33158699993328 | 1799.4232979999651 |
