LiteLLM


1.44.9

Not secure
Launching support for using the LiteLLM LLM Gateway with OAuth2 proxy authentication

🛠️ Security fix: clients can no longer set `api_base` / `base_url` on requests to the LiteLLM Proxy
🔥 Use SSML input with the Vertex AI Text-to-Speech API (see the sketch below): https://docs.litellm.ai/docs/providers/vertex#usage---ssml-as-input

⚡️ [UI] Allow viewing / editing budget duration

🛠️ [Minor Proxy fix] Prometheus: use safe updates for start / end time
![Group 5997](https://github.com/user-attachments/assets/df45d870-e63c-4fca-b6b9-43e60e8c5586)
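
For the SSML feature, here is a minimal sketch based on the linked Vertex TTS docs; the model route and voice settings below are illustrative assumptions, not values taken from this release:

```python
import litellm

# SSML input for Vertex AI Text-to-Speech.
# The model route and voice parameters are assumptions --
# see the docs link above for the exact options.
ssml = """
<speak>
  <p>Hello, world!</p>
</speak>
"""

response = litellm.speech(
    model="vertex_ai/test",  # assumed Vertex TTS model route
    input=ssml,              # SSML input is passed through to Vertex
    voice={"languageCode": "en-US", "name": "en-US-Studio-O"},
)
response.stream_to_file("speech.mp3")
```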

What's Changed
* Support for gemini experimental models by lilleswing in https://github.com/BerriAI/litellm/pull/5413
* [UI] Allow editing budget duration by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5409
* Feat - Add Google Text-to-Speech support ssml by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5415
* Make helm chart listen on IPv6 (and IPv4). by ArbitraryCritter in https://github.com/BerriAI/litellm/pull/5222
* [Fix - Proxy] Security fix, don't allow client side to set api_base, base_url by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5418
* [Feat-Proxy] Add hook for oauth2 proxy headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5420
* [minor fix Proxy] - prometheus - safe update start / end time by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5422

New Contributors
* lilleswing made their first contribution in https://github.com/BerriAI/litellm/pull/5413
* ArbitraryCritter made their first contribution in https://github.com/BerriAI/litellm/pull/5222

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.8...v1.44.9



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.9
```
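
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal sketch of calling it with the `openai` Python client; the virtual key and model name are assumptions that depend on your proxy config:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local LiteLLM Proxy.
client = OpenAI(
    base_url="http://localhost:4000",  # proxy address from the docker run above
    api_key="sk-1234",                 # a virtual key you created (assumption)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # a model name configured on the proxy (assumption)
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(response.choices[0].message.content)
```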



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 159.56966948018703 | 6.410714219145582 | 0.0 | 1918 | 0 | 110.58400799998935 | 1685.0977490000218 |
| Aggregated | Passed ✅ | 140.0 | 159.56966948018703 | 6.410714219145582 | 0.0 | 1918 | 0 | 110.58400799998935 | 1685.0977490000218 |

1.44.8

Not secure
What's Changed
* fix(sagemaker.py): support streaming for messages api by krrishdholakia in https://github.com/BerriAI/litellm/pull/5376
* feat(vertex_httpx.py): support 'functions' param for gemini google ai studio + vertex ai by krrishdholakia in https://github.com/BerriAI/litellm/pull/5368
* gemini context caching (openai format) support by krrishdholakia in https://github.com/BerriAI/litellm/pull/5381
* fix(factory.py): handle missing 'content' in cohere assistant messages by miraclebakelaser in https://github.com/BerriAI/litellm/pull/5384
* fix retry after - cooldown individual models based on their specific 'retry-after' header by krrishdholakia in https://github.com/BerriAI/litellm/pull/5358
* [Feat] Add Vertex AI21 support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5391
* [Feat] Add cohere rerank and together ai rerank by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5392
* feat - add rerank on litellm proxy / gateway by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5394 (see the rerank sketch below)
* build(deps): bump webpack from 5.93.0 to 5.94.0 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/5395
* docs: add time.sleep() between streaming calls by ajeetdsouza in https://github.com/BerriAI/litellm/pull/5402
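
The rerank PRs (#5391–#5394) add `litellm.rerank` in the SDK and a rerank route on the proxy. A minimal SDK sketch, assuming a `COHERE_API_KEY` in your environment and Cohere's `rerank-english-v3.0` model:

```python
import litellm

# Rerank candidate documents against a query via Cohere.
# Requires COHERE_API_KEY in the environment.
response = litellm.rerank(
    model="cohere/rerank-english-v3.0",
    query="What is the capital of France?",
    documents=[
        "Paris is the capital of France.",
        "Berlin is the capital of Germany.",
        "The Eiffel Tower is in Paris.",
    ],
    top_n=2,  # return the two best matches
)
print(response)
```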

New Contributors
* miraclebakelaser made their first contribution in https://github.com/BerriAI/litellm/pull/5384
* ajeetdsouza made their first contribution in https://github.com/BerriAI/litellm/pull/5402

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.7...v1.44.8



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.8
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 85 | 103.94574065444324 | 6.4691141286237 | 0.0 | 1936 | 0 | 71.05850200002806 | 2103.6041039999986 |
| Aggregated | Passed ✅ | 85 | 103.94574065444324 | 6.4691141286237 | 0.0 | 1936 | 0 | 71.05850200002806 | 2103.6041039999986 |

1.44.8-dev1

What's Changed
* Support for gemini experimental models by lilleswing in https://github.com/BerriAI/litellm/pull/5413
* [UI] Allow editing budget duration by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5409
* Feat - Add Google Text-to-Speech support ssml by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5415
* Make helm chart listen on IPv6 (and IPv4). by ArbitraryCritter in https://github.com/BerriAI/litellm/pull/5222
* [Fix - Proxy] Security fix, don't allow client side to set api_base, base_url by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5418
* [Feat-Proxy] Add hook for oauth2 proxy headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5420
* [minor fix Proxy] - prometheus - safe update start / end time by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5422

New Contributors
* lilleswing made their first contribution in https://github.com/BerriAI/litellm/pull/5413
* ArbitraryCritter made their first contribution in https://github.com/BerriAI/litellm/pull/5222

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.8...v1.44.8-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.8-dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 165.5998142681111 | 6.323551839446042 | 0.0 | 1891 | 0 | 107.57471499999838 | 2245.6900200000405 |
| Aggregated | Passed ✅ | 140.0 | 165.5998142681111 | 6.323551839446042 | 0.0 | 1891 | 0 | 107.57471499999838 | 2245.6900200000405 |

1.44.7

Not secure
What's Changed
* [Docs] use litellm sdk with litellm proxy server by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5367 (see the sketch below)
* [Feat] Add support for fine tuned vertexai models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5371
* [Refactor] Refactor cohere provider to be in a folder by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5374
* docs: Add pricing for imagen-3 and imagen-3-fast by ushuz in https://github.com/BerriAI/litellm/pull/5375
* [Feat-Proxy] Allow regenerating proxy virtual keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5379
* [Feat-Proxy] allow regenerating Virtual Keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5378
* build(deps): bump micromatch from 4.0.5 to 4.0.8 in /ui/litellm-dashboard by dependabot in https://github.com/BerriAI/litellm/pull/5380
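
On the first item: the `litellm_proxy/` model prefix lets the LiteLLM SDK route calls through a running LiteLLM Proxy. A minimal sketch; the proxy URL, virtual key, and model name are assumptions:

```python
import litellm

# Route an SDK completion call through a LiteLLM Proxy instance.
response = litellm.completion(
    model="litellm_proxy/gpt-3.5-turbo",  # model name as configured on the proxy
    messages=[{"role": "user", "content": "Hello via the proxy"}],
    api_base="http://localhost:4000",     # proxy address (assumption)
    api_key="sk-1234",                    # virtual key (assumption)
)
print(response.choices[0].message.content)
```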


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.6...v1.44.7



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.7
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 153.00695824408174 | 6.357979558515746 | 0.0 | 1901 | 0 | 107.4070600000141 | 1756.0737669999753 |
| Aggregated | Passed ✅ | 130.0 | 153.00695824408174 | 6.357979558515746 | 0.0 | 1901 | 0 | 107.4070600000141 | 1756.0737669999753 |

v1.44.6-stable
What's Changed
* [Fix] pip litellm proxy - No module named 'nacl' by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5353
* build(deps): bump micromatch from 4.0.5 to 4.0.8 in /ui by dependabot in https://github.com/BerriAI/litellm/pull/5349
* Admin UI - show user_id on Key Table by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5350
* [Feat-Vertex] Support using workload identity federation by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5354
* [Feat-Caching] allow setting caching mode to default off by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5352
* Feat - ui allow setting tpm / rpm limits on keys on ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5356 (see the key-generation sketch below)
* [Feat] Admin UI - allow setting tpm / rpm per model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5357
* refactor(main.py): migrate vertex gemini calls to vertex_httpx by krrishdholakia in https://github.com/BerriAI/litellm/pull/4582
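
On the TPM/RPM items: the UI settings map to `tpm_limit` / `rpm_limit` fields on the proxy's `/key/generate` endpoint. A minimal sketch with `requests`; the proxy URL and master key are assumptions:

```python
import requests

# Create a virtual key with per-key TPM/RPM limits (example values).
response = requests.post(
    "http://localhost:4000/key/generate",         # proxy address (assumption)
    headers={"Authorization": "Bearer sk-1234"},  # proxy master key (assumption)
    json={"tpm_limit": 100_000, "rpm_limit": 100},
)
print(response.json())  # contains the new virtual key
```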


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.5...v1.44.6-stable



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.6-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 162.55832985676886 | 6.368387627768324 | 0.0 | 1906 | 0 | 111.55475200001774 | 1075.542062000011 |
| Aggregated | Passed ✅ | 140.0 | 162.55832985676886 | 6.368387627768324 | 0.0 | 1906 | 0 | 111.55475200001774 | 1075.542062000011 |

1.44.6

Not secure
What's Changed
* [Fix] pip litellm proxy - No module named 'nacl' by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5353
* build(deps): bump micromatch from 4.0.5 to 4.0.8 in /ui by dependabot in https://github.com/BerriAI/litellm/pull/5349
* Admin UI - show user_id on Key Table by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5350
* [Feat-Vertex] Support using workload identity federation by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5354
* [Feat-Caching] allow setting caching mode to default off by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5352
* Feat - ui allow setting tpm / rpm limits on keys on ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5356
* [Feat] Admin UI - allow setting tpm / rpm per model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5357
* refactor(main.py): migrate vertex gemini calls to vertex_httpx by krrishdholakia in https://github.com/BerriAI/litellm/pull/4582


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.5...1.44.6



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-1.44.6
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 86 | 114.28056593503096 | 6.432087013394555 | 0.0033430805682923884 | 1924 | 1 | 67.2053769999934 | 8988.791535000018 |
| Aggregated | Passed ✅ | 86 | 114.28056593503096 | 6.432087013394555 | 0.0033430805682923884 | 1924 | 1 | 67.2053769999934 | 8988.791535000018 |

1.44.5

Not secure
What's Changed
* [Feat-Proxy] Add Custom Guardrails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5342 (see the guardrail sketch below)
* [Feat-LiteLLM] Add Vertex AI - Text to speech support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5346
* Update SSL verification by OgnjenFrancuski in https://github.com/BerriAI/litellm/pull/5292
* feat(sagemaker.py): add sagemaker messages api support by krrishdholakia in https://github.com/BerriAI/litellm/pull/5343
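
On the custom guardrails item: a guardrail is a class whose hooks run before (or after) a proxy call. A minimal sketch; the import paths and hook signature follow LiteLLM's custom-guardrail docs, but treat the details as assumptions for this version:

```python
from typing import Optional, Union

from litellm.caching import DualCache
from litellm.integrations.custom_guardrail import CustomGuardrail
from litellm.proxy._types import UserAPIKeyAuth


class BannedWordGuardrail(CustomGuardrail):
    """Reject any request whose last message contains a banned word."""

    async def async_pre_call_hook(
        self,
        user_api_key_dict: UserAPIKeyAuth,
        cache: DualCache,
        data: dict,
        call_type: str,
    ) -> Optional[Union[Exception, str, dict]]:
        last_message = (data.get("messages") or [{}])[-1].get("content", "")
        if "forbidden" in str(last_message).lower():
            raise ValueError("Request blocked by BannedWordGuardrail")
        return data  # pass the request through unchanged
```

The class is then wired up in the proxy's `config.yaml` under `guardrails`, per the LiteLLM guardrails docs.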

New Contributors
* OgnjenFrancuski made their first contribution in https://github.com/BerriAI/litellm/pull/5292

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.44.4...v1.44.5



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.5
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 143.13824969574247 | 6.438620895098571 | 0.0 | 1926 | 0 | 104.47404999996479 | 1345.9387880000122 |
| Aggregated | Passed ✅ | 120.0 | 143.13824969574247 | 6.438620895098571 | 0.0 | 1926 | 0 | 104.47404999996479 | 1345.9387880000122 |

