LiteLLM


v1.44.15-dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.15...v1.44.15-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.15-dev1
```
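
Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal sketch of a `/chat/completions` request body, assuming a model alias named `gpt-3.5-turbo` has been configured on the proxy:

```python
import json

# OpenAI-compatible request body for POST http://localhost:4000/chat/completions.
# "gpt-3.5-turbo" is a placeholder alias; it must match a model configured on the proxy.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, world"}],
}

body = json.dumps(payload)
print(body)
```

The same body works with any OpenAI SDK or `curl -d "$body"` pointed at `http://localhost:4000`.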



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 197.84245532309257 | 6.307285755524694 | 0.0 | 1888 | 0 | 116.31856799999696 | 7724.935090999963 |
| Aggregated | Passed ✅ | 150.0 | 197.84245532309257 | 6.307285755524694 | 0.0 | 1888 | 0 | 116.31856799999696 | 7724.935090999963 |

v1.44.14

What's Changed
* Anthropic prompt caching cost tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/5453
* [Feat-Proxy] track spend logs for vertex pass through endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5457
* [Feat] New Provider - Add Cerebras AI API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5461
* [Feat - Prometheus] - Track error_code, model metric by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5463
* Minor LiteLLM Fixes and Improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/5456


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.13...v1.44.14



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.14
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 174.81784158205727 | 6.331611805444247 | 0.0 | 1895 | 0 | 108.71869999994033 | 5381.36602100002 |
| Aggregated | Passed ✅ | 140.0 | 174.81784158205727 | 6.331611805444247 | 0.0 | 1895 | 0 | 108.71869999994033 | 5381.36602100002 |

v1.44.13-stable
What's Changed
* Clarify support-related Exceptions in utils.py by jhtobigs in https://github.com/BerriAI/litellm/pull/5447
* Fix `TypeError: 'CompletionUsage' object is not subscriptable` (5441) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5448
* [Fix-Proxy] Allow running /health checks on vertex multimodal embedding requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5449
* [Fix] Use correct Vertex AI AI21 Cost tracking by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5439
* (models): Add gemini-1.5-pro-exp-0827 pricing. by Manouchehri in https://github.com/BerriAI/litellm/pull/5419
* [Fix-Proxy] Vertex SDK pass through - pass all relevant vertex creds by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5451
* [Fix-Proxy] - Allow Qdrant API Key to be optional by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5452
* [Feat-Proxy] Load config.yaml from GCS Bucket by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5450
* [Refactor] Refactor vertex text to speech to be in vertex directory by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5454
* [Fix-Proxy-Auth] allow pass through routes as LLM API routes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5458
* [Feat] Vertex embeddings - map `input_type` to `text_type` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5455
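
The `config.yaml` referenced in the GCS-bucket loading change above follows the proxy's usual shape. A hypothetical minimal fragment (the model alias, provider route, and env-var reference are placeholders, not values from this release):

```yaml
model_list:
  - model_name: gpt-3.5-turbo          # alias clients send in requests
    litellm_params:
      model: openai/gpt-3.5-turbo      # provider/model the proxy routes to
      api_key: os.environ/OPENAI_API_KEY
general_settings:
  master_key: sk-1234                  # placeholder admin key
```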

New Contributors
* jhtobigs made their first contribution in https://github.com/BerriAI/litellm/pull/5447

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.12...v1.44.13-stable



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.13-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 123.21960788963653 | 6.451188433400272 | 0.0 | 1930 | 0 | 85.36914300003673 | 2112.863508000032 |
| Aggregated | Passed ✅ | 100.0 | 123.21960788963653 | 6.451188433400272 | 0.0 | 1930 | 0 | 85.36914300003673 | 2112.863508000032 |

v1.44.13

What's Changed
* Clarify support-related Exceptions in utils.py by jhtobigs in https://github.com/BerriAI/litellm/pull/5447
* Fix `TypeError: 'CompletionUsage' object is not subscriptable` (5441) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5448
* [Fix-Proxy] Allow running /health checks on vertex multimodal embedding requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5449
* [Fix] Use correct Vertex AI AI21 Cost tracking by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5439
* (models): Add gemini-1.5-pro-exp-0827 pricing. by Manouchehri in https://github.com/BerriAI/litellm/pull/5419
* [Fix-Proxy] Vertex SDK pass through - pass all relevant vertex creds by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5451
* [Fix-Proxy] - Allow Qdrant API Key to be optional by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5452
* [Feat-Proxy] Load config.yaml from GCS Bucket by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5450
* [Refactor] Refactor vertex text to speech to be in vertex directory by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5454
* [Fix-Proxy-Auth] allow pass through routes as LLM API routes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5458
* [Feat] Vertex embeddings - map `input_type` to `text_type` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5455

New Contributors
* jhtobigs made their first contribution in https://github.com/BerriAI/litellm/pull/5447

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.12...v1.44.13



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.13
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 84 | 96.56284169629994 | 6.505250331762072 | 0.0 | 1946 | 0 | 70.01490099997909 | 1023.0434680000258 |
| Aggregated | Passed ✅ | 84 | 96.56284169629994 | 6.505250331762072 | 0.0 | 1946 | 0 | 70.01490099997909 | 1023.0434680000258 |

v1.44.12-stable
What's Changed
* fix: Minor LiteLLM Fixes + Improvements (29/08/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5436
* [Pricing] Add pricing for OpenAI ft:gpt-4o by kiriloman in https://github.com/BerriAI/litellm/pull/5442
* [Feat-Proxy] Show all exception types on Swagger for LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5438


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.11...v1.44.12-stable



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.12-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 147.17085532826442 | 6.3725237061125695 | 0.0 | 1907 | 0 | 101.22445500002186 | 1323.9057460000367 |
| Aggregated | Passed ✅ | 130.0 | 147.17085532826442 | 6.3725237061125695 | 0.0 | 1907 | 0 | 101.22445500002186 | 1323.9057460000367 |

v1.44.12

What's Changed
* fix: Minor LiteLLM Fixes + Improvements (29/08/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5436
* [Pricing] Add pricing for OpenAI ft:gpt-4o by kiriloman in https://github.com/BerriAI/litellm/pull/5442
* [Feat-Proxy] Show all exception types on Swagger for LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5438


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.11...v1.44.12



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.12
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 170.9298622524008 | 6.267070184477542 | 0.0 | 1874 | 0 | 104.92637400000149 | 4227.257602999998 |
| Aggregated | Passed ✅ | 130.0 | 170.9298622524008 | 6.267070184477542 | 0.0 | 1874 | 0 | 104.92637400000149 | 4227.257602999998 |

v1.44.11-stable
What's Changed
* fix(utils.py): correctly log streaming cache hits (5417) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5426
* fix(google_ai_studio): working context caching by krrishdholakia in https://github.com/BerriAI/litellm/pull/5421
* [Fix-Proxy] /health check for provider wildcard models (fireworks/*) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5431
* (bedrock): Add new cross-region inference support for Bedrock. by Manouchehri in https://github.com/BerriAI/litellm/pull/5430
* [Feat-Proxy] Pass through Vertex Endpoint - allow forwarding vertex credentials by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5435
* [Feat-Proxy] Set tags per team - (use tag based routing for team) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5432


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.10...v1.44.11-stable



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.11-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 88 | 118.40403284020604 | 6.4826835045795566 | 0.0 | 1940 | 0 | 68.31259699998782 | 9454.062292000002 |
| Aggregated | Passed ✅ | 88 | 118.40403284020604 | 6.4826835045795566 | 0.0 | 1940 | 0 | 68.31259699998782 | 9454.062292000002 |

v1.44.11

What's Changed
* fix(utils.py): correctly log streaming cache hits (5417) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5426
* fix(google_ai_studio): working context caching by krrishdholakia in https://github.com/BerriAI/litellm/pull/5421
* [Fix-Proxy] /health check for provider wildcard models (fireworks/*) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5431
* (bedrock): Add new cross-region inference support for Bedrock. by Manouchehri in https://github.com/BerriAI/litellm/pull/5430
* [Feat-Proxy] Pass through Vertex Endpoint - allow forwarding vertex credentials by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5435
* [Feat-Proxy] Set tags per team - (use tag based routing for team) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5432
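
The tag-based routing change above pairs deployments with request tags. A hedged sketch of what such a proxy config might look like (the model names are placeholders, and the `tags` and `enable_tag_filtering` keys are assumptions based on the feature name, not confirmed by this release note):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      tags: ["team-a"]                 # only requests tagged "team-a" route here
router_settings:
  enable_tag_filtering: true
```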


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.10...v1.44.11



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.11
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 160.16188890005222 | 6.353656816262974 | 0.0 | 1901 | 0 | 110.89551899999606 | 1160.6978220000315 |
| Aggregated | Passed ✅ | 140.0 | 160.16188890005222 | 6.353656816262974 | 0.0 | 1901 | 0 | 110.89551899999606 | 1160.6978220000315 |

v1.44.10-stable
What's Changed
* feat(vertex_ai_and_google_ai_studio): Support Google AI Studio Embedding Endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/5393
* feat(team_endpoints.py): return team member budgets in /team/info call by krrishdholakia in https://github.com/BerriAI/litellm/pull/5423
* fix(team_endpoints.py): update to include the budget in the response by krrishdholakia in https://github.com/BerriAI/litellm/pull/5425
* fixes: minor litellm fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/5414


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.8-dev1...v1.44.10-stable



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.10-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 171.3726737658333 | 6.279101958793521 | 0.0 | 1879 | 0 | 112.49015499998904 | 2388.604856000029 |
| Aggregated | Passed ✅ | 140.0 | 171.3726737658333 | 6.279101958793521 | 0.0 | 1879 | 0 | 112.49015499998904 | 2388.604856000029 |

v1.44.10

What's Changed
* feat(vertex_ai_and_google_ai_studio): Support Google AI Studio Embedding Endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/5393
* feat(team_endpoints.py): return team member budgets in /team/info call by krrishdholakia in https://github.com/BerriAI/litellm/pull/5423
* fix(team_endpoints.py): update to include the budget in the response by krrishdholakia in https://github.com/BerriAI/litellm/pull/5425
* fixes: minor litellm fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/5414


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.9...v1.44.10



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.10
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 168.689059856842 | 6.354633584822659 | 0.0 | 1900 | 0 | 110.34484399999656 | 2138.7865149999925 |
| Aggregated | Passed ✅ | 140.0 | 168.689059856842 | 6.354633584822659 | 0.0 | 1900 | 0 | 110.34484399999656 | 2138.7865149999925 |
