LiteLLM

Latest version: v1.52.14


1.43.19.dev2

What's Changed
* [Fix Router] - Don't cooldown Default Provider deployment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5302
* [Fix] Router - Do not retry on 404 errors from LLM API providers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5298
* [Fix Router] Don't retry errors when healthy_deployments=0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5299
* [Fix] Router - don't recursively use the same fallback by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5301


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.19.dev1...v1.43.19.dev2



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.19.dev2
```
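
Once the container above is running, the proxy exposes an OpenAI-compatible API on port 4000. Below is a minimal sketch (not an official example) of calling it with the OpenAI Python SDK; the model name `gpt-4o` and the key `sk-1234` are placeholders that must match whatever models and virtual keys are actually configured on your proxy.

```python
# Minimal sketch: call the LiteLLM proxy started above through its
# OpenAI-compatible /chat/completions endpoint.
# Assumptions (placeholders, not values from the release notes):
#   - the proxy listens on http://localhost:4000
#   - a model named "gpt-4o" is configured on the proxy
#   - "sk-1234" is a valid virtual/master key
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # the proxy's OpenAI-compatible base URL
    api_key="sk-1234",                 # placeholder virtual key
)

response = client.chat.completions.create(
    model="gpt-4o",  # must match a model name configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```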



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 155.707955254718 | 6.374453265792973 | 0.0 | 1908 | 0 | 109.84177599999612 | 1317.8180300000122 |
| Aggregated | Passed ✅ | 140.0 | 155.707955254718 | 6.374453265792973 | 0.0 | 1908 | 0 | 109.84177599999612 | 1317.8180300000122 |

1.43.19.dev1

What's Changed
* Allow not displaying feedback box by msabramo in https://github.com/BerriAI/litellm/pull/4868
* Fix app_version in helm build by mikstew in https://github.com/BerriAI/litellm/pull/4649
* feat(azure.py): support 'json_schema' for older models by krrishdholakia in https://github.com/BerriAI/litellm/pull/5296
* fix(cost_calculator.py): only override base model if custom pricing is set by krrishdholakia in https://github.com/BerriAI/litellm/pull/5287
* feat(azure.py): support dynamic azure api versions by krrishdholakia in https://github.com/BerriAI/litellm/pull/5284
* Fix helm chart job by mikstew in https://github.com/BerriAI/litellm/pull/5297

New Contributors
* mikstew made their first contribution in https://github.com/BerriAI/litellm/pull/4649

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.19...v1.43.19.dev1



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.19.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 87 | 114.67080167127425 | 6.524676918212512 | 0.0 | 1953 | 0 | 68.25954099997489 | 8007.3363059999565 |
| Aggregated | Passed ✅ | 87 | 114.67080167127425 | 6.524676918212512 | 0.0 | 1953 | 0 | 68.25954099997489 | 8007.3363059999565 |

v1.43.19-stable
What's Changed
* feat: Bedrock pass-through endpoint support (All endpoints) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5264
* Update helm repo to v0.2.3 by lowjiansheng in https://github.com/BerriAI/litellm/pull/5280
* [Feat] V2 aporia guardrails litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5288
* [Feat] run aporia guardrail as post call success hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5281
* [Feat-Proxy] Allow accessing `data` in post call success hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5282
* [Feat-Proxy] Return applied guardrails in response headers as `x-litellm-applied-guardrails` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5283
* [Doc-Tutorial] use litellm proxy with aporia by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5286
* [Fix] Proxy - send slack alerting spend reports by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5289
* [Feat] - control guardrails per API Key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5294


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.18...v1.43.19-stable

1.43.18

Flagged as not secure by Safety.
What's Changed
* [Feat] return `x-litellm-key-remaining-requests-{model}` and `x-litellm-key-remaining-tokens-{model}` in response headers (see the header-reading sketch after this list) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5259
* [Feat] - Set tpm/rpm limits per Virtual Key + Model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5256
* [Feat] add prometheus metric for remaining rpm/tpm limit for (model, api_key) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5257
* [Feat] read model + API key tpm/rpm limits from db by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5258
* Pass-through endpoints for Gemini - Google AI Studio by krrishdholakia in https://github.com/BerriAI/litellm/pull/5260
* Fix incorrect message length check in cost calculator by dhlidongming in https://github.com/BerriAI/litellm/pull/5219
* [PRICING] Use specific llama2 and llama3 model names in Ollama by kiriloman in https://github.com/BerriAI/litellm/pull/5221
* [Feat-Proxy] set rpm/tpm limits per api key per model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5261
* Fixes the `tool_use` indexes not being correctly mapped by Penagwin in https://github.com/BerriAI/litellm/pull/5232
* [Feat-Proxy] Use model access groups for teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5263
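
For the per-key, per-model limit features listed above, here is a rough sketch of how the new response headers could be inspected from a client. The proxy URL, the key `sk-1234`, and the model alias `gpt-4o` are placeholder assumptions; the header names follow the `x-litellm-key-remaining-...-{model}` pattern quoted in the changelog entry.

```python
# Sketch: read the per-key, per-model rate-limit headers returned by the proxy.
# Assumptions (placeholders): proxy at localhost:4000, model alias "gpt-4o",
# and virtual key "sk-1234" with tpm/rpm limits configured for that model.
import requests

resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}],
    },
)

# Header names follow the x-litellm-key-remaining-...-{model} pattern from the PR title.
print(resp.headers.get("x-litellm-key-remaining-requests-gpt-4o"))
print(resp.headers.get("x-litellm-key-remaining-tokens-gpt-4o"))
```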

New Contributors
* dhlidongming made their first contribution in https://github.com/BerriAI/litellm/pull/5219
* Penagwin made their first contribution in https://github.com/BerriAI/litellm/pull/5232

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.17...v1.43.18



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.18
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 84 | 98.74206359221253 | 6.528009406771848 | 0.0 | 1952 | 0 | 67.36751900001536 | 1687.1762119999971 |
| Aggregated | Passed ✅ | 84 | 98.74206359221253 | 6.528009406771848 | 0.0 | 1952 | 0 | 67.36751900001536 | 1687.1762119999971 |

1.43.17

Flagged as not secure by Safety.
What's Changed
* fix(utils.py): fix get_image_dimensions to handle more image types by krrishdholakia in https://github.com/BerriAI/litellm/pull/5255
* (oidc): Add support for loading tokens via a file, env var, and path in env var by Manouchehri in https://github.com/BerriAI/litellm/pull/5251
* refactor: replace .error() with .exception() logging for better debugging on sentry by krrishdholakia in https://github.com/BerriAI/litellm/pull/5244
* s3 - Log model price information by krrishdholakia in https://github.com/BerriAI/litellm/pull/5254


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.16...v1.43.17



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.17
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 147.74746761506447 | 6.432229653044687 | 0.0 | 1925 | 0 | 111.18721100001494 | 665.7542859999808 |
| Aggregated | Passed ✅ | 130.0 | 147.74746761506447 | 6.432229653044687 | 0.0 | 1925 | 0 | 111.18721100001494 | 665.7542859999808 |

v1.43.16-stable
What's Changed
* CRUD Endpoints for Pass-Through endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/5215
* fix(s3.py): fix s3 logging payload to have valid json values by krrishdholakia in https://github.com/BerriAI/litellm/pull/5235
* Allow zero temperature for Sagemaker models based on config by gitravin in https://github.com/BerriAI/litellm/pull/5173
* build(deps): bump aiohttp from 3.9.4 to 3.10.2 by dependabot in https://github.com/BerriAI/litellm/pull/5135
* [Docs] Sagemaker add example on using with LiteLLM Proxy and temperature=0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5250
* [Feat-Proxy] Add Oauth 2.0 Support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5252
* [Feat] Add bedrock Guardrail `traces` in response when trace=enabled by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5243

New Contributors
* gitravin made their first contribution in https://github.com/BerriAI/litellm/pull/5173

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.15...v1.43.16-stable



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.16-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 137.42305914300667 | 6.401451605020026 | 0.0 | 1916 | 0 | 99.26967000001241 | 1313.6322230000133 |
| Aggregated | Passed ✅ | 120.0 | 137.42305914300667 | 6.401451605020026 | 0.0 | 1916 | 0 | 99.26967000001241 | 1313.6322230000133 |

1.43.16

Flagged as not secure by Safety.
What's Changed
* CRUD Endpoints for Pass-Through endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/5215
* fix(s3.py): fix s3 logging payload to have valid json values by krrishdholakia in https://github.com/BerriAI/litellm/pull/5235
* Allow zero temperature for Sagemaker models based on config by gitravin in https://github.com/BerriAI/litellm/pull/5173
* build(deps): bump aiohttp from 3.9.4 to 3.10.2 by dependabot in https://github.com/BerriAI/litellm/pull/5135
* [Docs] Sagemaker add example on using with LiteLLM Proxy and temperature=0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5250
* [Feat-Proxy] Add Oauth 2.0 Support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5252
* [Feat] Add bedrock Guardrail `traces` in response when trace=enabled by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5243

New Contributors
* gitravin made their first contribution in https://github.com/BerriAI/litellm/pull/5173

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.15...v1.43.16



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.16
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 142.9490557028836 | 6.490376439327205 | 0.0 | 1942 | 0 | 99.74976099999822 | 2696.515713999986 |
| Aggregated | Passed ✅ | 120.0 | 142.9490557028836 | 6.490376439327205 | 0.0 | 1942 | 0 | 99.74976099999822 | 2696.515713999986 |

v1.43.15-stable
What's Changed
* [Fix-Proxy + Langfuse] Always log cache_key on hits/misses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5226
* [Fix] use BaseAWSLLM for bedrock, sagemaker by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5233
* [Feat] Make Sagemaker Async by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5237
* fix using anthropic prompt caching on proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5238


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.13.dev1...v1.43.15-stable



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.15-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 86 | 100.22178494992286 | 6.473362884276337 | 0.0 | 1937 | 0 | 71.80498400003898 | 1222.4466490000623 |
| Aggregated | Passed ✅ | 86 | 100.22178494992286 | 6.473362884276337 | 0.0 | 1937 | 0 | 71.80498400003898 | 1222.4466490000623 |

1.43.15

Flagged as not secure by Safety.
What's Changed
* fix(utils.py): support calling openai models via `azure_ai/` by krrishdholakia in https://github.com/BerriAI/litellm/pull/5209 (see the sketch after this list)
* [Feat-Proxy] - use common helper `route_request` for making llm call by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5224
* [Fix-Proxy + Langfuse] Always log cache_key on hits/misses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5226
* [Fix] use BaseAWSLLM for bedrock, sagemaker by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5233
* [Feat] Make Sagemaker Async by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5237
* fix using anthropic prompt caching on proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5238
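
As a rough illustration of the `azure_ai/` support mentioned in the first item above (PR 5209), the sketch below calls an OpenAI model hosted behind Azure AI through `litellm.completion`; the endpoint URL, deployment name, and key are placeholders, not values from the changelog.

```python
# Illustrative sketch only: routing a call to an OpenAI model on Azure AI via the
# "azure_ai/" model prefix. Endpoint, deployment name, and API key are placeholders.
import litellm

response = litellm.completion(
    model="azure_ai/my-gpt-4o-deployment",                 # "azure_ai/" selects the Azure AI provider
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://my-azure-ai-endpoint.example.com",   # placeholder endpoint
    api_key="my-azure-ai-key",                             # placeholder key
)
print(response.choices[0].message.content)
```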


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.13...v1.43.15



Docker Run LiteLLM Proxy


```
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.15
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 137.00972958622484 | 6.355893263955775 | 0.0 | 1902 | 0 | 98.4642859999667 | 1360.5994449999912 |
| Aggregated | Passed ✅ | 110.0 | 137.00972958622484 | 6.355893263955775 | 0.0 | 1902 | 0 | 98.4642859999667 | 1360.5994449999912 |
