LiteLLM

Latest version: v1.52.14


1.44.4

Not secure
What's Changed
* feat(batches): add azure openai batches endpoint support by krrishdholakia in https://github.com/BerriAI/litellm/pull/5337


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.3...v1.44.4
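
A quick way to try the new Azure batches support from the SDK - a minimal sketch, assuming litellm's OpenAI-compatible batch helpers accept `custom_llm_provider="azure"` (per the PR above); the file name and contents are illustrative:

```python
import litellm

# upload a JSONL file of /chat/completions requests (hypothetical file)
file_obj = litellm.create_file(
    file=open("batch_requests.jsonl", "rb"),
    purpose="batch",
    custom_llm_provider="azure",
)

# kick off the batch against the Azure OpenAI batches endpoint
batch = litellm.create_batch(
    completion_window="24h",
    endpoint="/v1/chat/completions",
    input_file_id=file_obj.id,
    custom_llm_provider="azure",
)
print(batch.id, batch.status)
```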



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.4
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 192.80 | 6.34 | 0.0 | 1896 | 0 | 118.78 | 2805.56 |
| Aggregated | Passed ✅ | 150.0 | 192.80 | 6.34 | 0.0 | 1896 | 0 | 118.78 | 2805.56 |

1.44.4.dev2

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.4...v1.44.4.dev2



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.4.dev2
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 162.66 | 6.44 | 0.0 | 1925 | 0 | 113.41 | 2001.22 |
| Aggregated | Passed ✅ | 140.0 | 162.66 | 6.44 | 0.0 | 1925 | 0 | 113.41 | 2001.22 |

1.44.3

Not secure
🔥 We're launching support for using Bedrock Guardrails on LiteLLM Gateway - use Bedrock guardrails with 100+ LLMs supported by LiteLLM

👉 Start here: https://docs.litellm.ai/docs/proxy/guardrails/bedrock
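
Once a Bedrock guardrail is registered in the proxy config, requests can opt in per call. A sketch using the OpenAI SDK against a local proxy - the guardrail name, virtual key, and model below are illustrative, not fixed values:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # hypothetical virtual key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    extra_body={"guardrails": ["bedrock-guard"]},  # name assumed to match the proxy config entry
)
print(response.choices[0].message.content)
```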

🔐 Support for using tenant_id, client_id (Entra ID) auth for Azure OpenAI

https://docs.litellm.ai/docs/providers/azure#usage---litellm-proxy-server
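
A minimal SDK-side sketch of Entra ID auth, assuming the `azure-identity` package and litellm's `azure_ad_token` parameter; the deployment name and endpoint are placeholders:

```python
from azure.identity import ClientSecretCredential
import litellm

# exchange tenant_id / client_id / client_secret for an Entra ID token
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)
token = credential.get_token("https://cognitiveservices.azure.com/.default").token

response = litellm.completion(
    model="azure/my-gpt4-deployment",                 # hypothetical deployment
    api_base="https://my-endpoint.openai.azure.com",
    azure_ad_token=token,
    messages=[{"role": "user", "content": "hello"}],
)
```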

⚡️ [Feat-Proxy] Prometheus metrics to track request latency and LLM API latency
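
The new latency metrics land on the proxy's Prometheus endpoint; a sketch that scrapes it - the `/metrics` path and the key are assumptions, so check the Prometheus docs page for your version:

```python
import requests

metrics = requests.get(
    "http://localhost:4000/metrics",
    headers={"Authorization": "Bearer sk-1234"},  # hypothetical admin key; may not be required
).text

# print only the latency series added in this release
for line in metrics.splitlines():
    if "latency" in line and not line.startswith("#"):
        print(line)
```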

📖 Add example curl requests to the /chat, /completions, and /embeddings docstrings

📖 Fix docstring for /user/delete

![Group 5980](https://github.com/user-attachments/assets/a4d70e03-42db-44da-b88c-8e7859d44448)


What's Changed
* build(deps): bump hono from 4.2.7 to 4.5.8 in /litellm-js/spend-logs by dependabot in https://github.com/BerriAI/litellm/pull/5331
* [Feat-Proxy] Prometheus Metrics to Track request latency, track llm api latency by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5335
* docs(projects): add dbally to sidebar by micpst in https://github.com/BerriAI/litellm/pull/5336
* feat(caching.py): redis cluster support by krrishdholakia in https://github.com/BerriAI/litellm/pull/5325
* [Feat-Proxy] add bedrock guardrails support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5339
* [Feat] Azure OpenAI add support for using azure_ad_token_provider with LiteLLM Proxy + Router by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5332


New Contributors
* micpst made their first contribution in https://github.com/BerriAI/litellm/pull/5336

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.2...v1.44.3
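
The Redis Cluster support from pull/5325 should be reachable from the SDK cache as well - a hedged sketch; the `redis_startup_nodes` parameter name and node shape are assumptions taken from the PR, not a verified API:

```python
import litellm
from litellm.caching import Cache

# point litellm's cache at a Redis Cluster instead of a single node
litellm.cache = Cache(
    type="redis",
    redis_startup_nodes=[          # assumed parameter name
        {"host": "127.0.0.1", "port": "7001"},
        {"host": "127.0.0.1", "port": "7002"},
    ],
)
```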



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.3
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 118.70 | 6.45 | 0.0 | 1929 | 0 | 86.01 | 913.58 |
| Aggregated | Passed ✅ | 100.0 | 118.70 | 6.45 | 0.0 | 1929 | 0 | 86.01 | 913.58 |

1.44.2

Not secure
What's Changed
* openrouter/anthropic/claude-3.5-sonnet: supports_assistant_prefill:true by paul-gauthier in https://github.com/BerriAI/litellm/pull/5315
* fix/docs: was missing a beta model from openrouter of claude sonnet by thiswillbeyourgithub in https://github.com/BerriAI/litellm/pull/5314
* docs - use litellm on gcp cloud run by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5317
* Qdrant Semantic Caching by haadirakhangi in https://github.com/BerriAI/litellm/pull/5018
* [Feat-Proxy] Add Qdrant Semantic Caching Support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5324
* feat(user_api_key_auth.py): allow team admin to add new members to team by krrishdholakia in https://github.com/BerriAI/litellm/pull/5308
* feat(proxy_server.py): support disabling storing master key hash in db by krrishdholakia in https://github.com/BerriAI/litellm/pull/5322
* Support LangSmith parent_run_id, trace_id, session_id by MarkRx in https://github.com/BerriAI/litellm/pull/5323
* add checksum/config pod annotation by mikstew in https://github.com/BerriAI/litellm/pull/5318
* Fixed code snippet import typo in Structured Output docs by beltranaceves in https://github.com/BerriAI/litellm/pull/5304
* Add the "stop" parameter to the mistral API interface by the-crypt-keeper in https://github.com/BerriAI/litellm/pull/5253 (sketch after the changelog link below)
* [Feat] add vertex multimodal embedding support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5326 (sketch after this list)
* [Feat-Proxy] Make LiteLLM Proxy (Gateway) compatible with VertexAI SDK 🔥 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5327
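
A minimal sketch of the vertex multimodal embedding call, assuming the `vertex_ai/multimodalembedding@001` model string; the GCS URI is a placeholder:

```python
import litellm

# input can mix text and Cloud Storage image URIs (bucket/object below are hypothetical)
response = litellm.embedding(
    model="vertex_ai/multimodalembedding@001",
    input=["gs://my-bucket/images/example.jpeg"],
)
print(len(response.data[0]["embedding"]))
```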

New Contributors
* haadirakhangi made their first contribution in https://github.com/BerriAI/litellm/pull/5018
* MarkRx made their first contribution in https://github.com/BerriAI/litellm/pull/5323
* beltranaceves made their first contribution in https://github.com/BerriAI/litellm/pull/5304
* the-crypt-keeper made their first contribution in https://github.com/BerriAI/litellm/pull/5253

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.1...v1.44.2
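
The `stop` parameter added for the Mistral API in pull/5253 works like the OpenAI one - a small sketch (the model choice is illustrative):

```python
import litellm

response = litellm.completion(
    model="mistral/mistral-small-latest",
    messages=[{"role": "user", "content": "Count from 1 to 10, one number per line."}],
    stop=["5"],  # generation halts once the stop sequence would appear
)
print(response.choices[0].message.content)
```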



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.2
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 154.90 | 6.32 | 0.0 | 1891 | 0 | 104.25 | 2111.98 |
| Aggregated | Passed ✅ | 130.0 | 154.90 | 6.32 | 0.0 | 1891 | 0 | 104.25 | 2111.98 |

1.44.1

Not secure
Guardrails on LiteLLM Proxy are now Free 🔥
Start here: https://docs.litellm.ai/docs/proxy/guardrails/quick_start
![Group 5971](https://github.com/user-attachments/assets/cf9c3ed9-7ea6-46e3-8104-545f69de5248)

What's Changed
* Allow not displaying feedback box by msabramo in https://github.com/BerriAI/litellm/pull/4868
* Fix app_version in helm build by mikstew in https://github.com/BerriAI/litellm/pull/4649
* feat(azure.py): support 'json_schema' for older models by krrishdholakia in https://github.com/BerriAI/litellm/pull/5296
* fix(cost_calculator.py): only override base model if custom pricing is set by krrishdholakia in https://github.com/BerriAI/litellm/pull/5287
* feat(azure.py): support dynamic azure api versions by krrishdholakia in https://github.com/BerriAI/litellm/pull/5284
* Fix helm chart job by mikstew in https://github.com/BerriAI/litellm/pull/5297
* [Fix Router] - Don't cooldown Default Provider deployment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5302
* [Fix] Router - Do not retry on 404 errors from LLM API providers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5298
* [Fix Router] Don't retry errors when healthy_deployments=0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5299
* [Fix] Router - don't recursively use the same fallback by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5301
* [Fix Docker] Maintain separate docker image for running as non-root user by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5306
* [Feat-Proxy] Make Guardrails Free / OSS - Lakera AI, Aporia AI 🛡️ by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5303
* [Docs] - v2 Guardrails are now Free / Open Source by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5309

New Contributors
* mikstew made their first contribution in https://github.com/BerriAI/litellm/pull/4649

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.19...v1.44.1
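
The `json_schema` support for older Azure models rides on `response_format`; a hedged sketch - deployment name and schema are illustrative, and per pull/5296 litellm emulates the schema on models without native support:

```python
import litellm

response = litellm.completion(
    model="azure/my-gpt35-deployment",  # hypothetical older deployment
    messages=[{"role": "user", "content": "Give me a user named Ada, age 36."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "user",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name", "age"],
            },
        },
    },
)
print(response.choices[0].message.content)  # JSON conforming to the schema
```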



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 224.94 | 6.22 | 0.0 | 1861 | 0 | 116.27 | 15052.88 |
| Aggregated | Passed ✅ | 150.0 | 224.94 | 6.22 | 0.0 | 1861 | 0 | 116.27 | 15052.88 |

1.43.19

Not secure
What's Changed
* feat: Bedrock pass-through endpoint support (All endpoints) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5264
* Update helm repo to v0.2.3 by lowjiansheng in https://github.com/BerriAI/litellm/pull/5280
* [Feat] V2 aporia guardrails litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5288
* [Feat] run aporia guardrail as post call success hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5281
* [Feat-Proxy] Allow accessing `data` in post call success hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5282
* [Feat-Proxy] Return applied guardrails in response headers as `x-litellm-applied-guardrails` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5283
* [Doc-Tutorial] use litellm proxy with aporia by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5286
* [Fix] Proxy - send slack alerting spend reports by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5289
* [Feat] - control guardrails per API Key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5294


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.18...v1.43.19
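
A sketch of the new Bedrock pass-through route, assuming the proxy mounts the native AWS paths under `/bedrock`; the model ID and virtual key are placeholders:

```python
import requests

# native Bedrock Converse request body, forwarded as-is by the proxy
resp = requests.post(
    "http://localhost:4000/bedrock/model/anthropic.claude-3-sonnet-20240229-v1:0/converse",
    headers={"Authorization": "Bearer sk-1234"},  # hypothetical virtual key
    json={"messages": [{"role": "user", "content": [{"text": "hello"}]}]},
)
print(resp.json())
```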



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.19
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 176.33 | 6.38 | 0.0 | 1908 | 0 | 116.49 | 2400.66 |
| Aggregated | Passed ✅ | 150.0 | 176.33 | 6.38 | 0.0 | 1908 | 0 | 116.49 | 2400.66 |

v1.43.18-stable
What's Changed
* [Feat] return `x-litellm-key-remaining-requests-{model}: 1` and `x-litellm-key-remaining-tokens-{model}: None` in response headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5259
* [Feat] - Set tpm/rpm limits per Virtual Key + Model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5256
* [Feat] add prometheus metric for remaining rpm/tpm limit for (model, api_key) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5257
* [Feat] read model + API key tpm/rpm limits from db by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5258
* Pass-through endpoints for Gemini - Google AI Studio by krrishdholakia in https://github.com/BerriAI/litellm/pull/5260
* Fix incorrect message length check in cost calculator by dhlidongming in https://github.com/BerriAI/litellm/pull/5219
* [PRICING] Use specific llama2 and llama3 model names in Ollama by kiriloman in https://github.com/BerriAI/litellm/pull/5221
* [Feat-Proxy] set rpm/tpm limits per api key per model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5261
* Fixes the `tool_use` indexes not being correctly mapped by Penagwin in https://github.com/BerriAI/litellm/pull/5232
* [Feat-Proxy] Use model access groups for teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5263

New Contributors
* dhlidongming made their first contribution in https://github.com/BerriAI/litellm/pull/5219
* Penagwin made their first contribution in https://github.com/BerriAI/litellm/pull/5232

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.17...v1.43.18-stable
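
The per-key, per-model limit headers above can be read off any proxy response; a sketch against a local proxy (key and model are illustrative):

```python
import requests

resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},  # virtual key with tpm/rpm limits configured
    json={"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]},
)

# e.g. x-litellm-key-remaining-requests-gpt-3.5-turbo and ...-remaining-tokens-gpt-3.5-turbo
for name, value in resp.headers.items():
    if name.lower().startswith("x-litellm-key-remaining"):
        print(name, "=", value)
```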



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.43.18-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 89.0 | 111.00 | 6.51 | 0.0 | 1949 | 0 | 69.99 | 2982.89 |
| Aggregated | Passed ✅ | 89.0 | 111.00 | 6.51 | 0.0 | 1949 | 0 | 69.99 | 2982.89 |
