LiteLLM

Latest version: v1.52.14


1.41.8.dev1

What's Changed
* fix: typo in vision docs by berkecanrizai in https://github.com/BerriAI/litellm/pull/4555
* [Feat] Improve Proxy Mem Util (Reduces proxy startup memory util by 50%) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4577
* [fix] UI fix show models as dropdown by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4574
* UI - don't spam error messages when model list is not defined by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4575
* Azure proxy tts pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4572
* feat(cost_calculator.py): support openai+azure tts calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4571 (see the sketch after this list)
* [Refactor] Use helper function to encrypt/decrypt model credentials by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4576
* [Feat-Enterprise] /spend/report view spend for a specific key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4578
* build(deps): bump aiohttp from 3.9.0 to 3.9.4 by dependabot in https://github.com/BerriAI/litellm/pull/4553
* Enforcing sync'd `poetry.lock` via `pre-commit` by jamesbraza in https://github.com/BerriAI/litellm/pull/4517
* [Feat] OTEL allow setting deployment environment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4422
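
For the two TTS pricing entries above, spend is now calculated for OpenAI and Azure `/audio/speech` calls. A rough sketch of such a call through the `litellm` SDK; the model, voice, and output path are illustrative, and the exact response helper may differ by version:

```python
# Illustrative sketch: an OpenAI text-to-speech call through litellm.
# "tts-1" / "alloy" / "speech.mp3" are example values; the cost of this
# call is what the cost_calculator changes above now account for.
import litellm

response = litellm.speech(
    model="tts-1",
    voice="alloy",
    input="LiteLLM now tracks spend for text-to-speech calls.",
)
response.stream_to_file("speech.mp3")  # write the returned audio to disk
```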

New Contributors
* berkecanrizai made their first contribution in https://github.com/BerriAI/litellm/pull/4555

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.8...v1.41.8.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.8.dev1
```
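
Once the container is running (and at least one model has been added, e.g. through the UI or a `config.yaml`), the proxy exposes an OpenAI-compatible API on port 4000. A minimal sketch with the official `openai` Python client; the key and model name below are placeholders for whatever is configured on your proxy:

```python
# Minimal sketch: call a LiteLLM proxy running on localhost:4000
# with the OpenAI Python SDK. "sk-1234" and "gpt-3.5-turbo" are
# placeholders for a virtual key / model actually configured on the proxy.
from openai import OpenAI

client = OpenAI(
    api_key="sk-1234",                 # a LiteLLM virtual key (or master key)
    base_url="http://localhost:4000",  # the proxy started by `docker run` above
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```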



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 77 | 87.51172147931786 | 6.46157682222913 | 0.0 | 1934 | 0 | 67.4040659999946 | 582.6242449999768 |
| Aggregated | Passed ✅ | 77 | 87.51172147931786 | 6.46157682222913 | 0.0 | 1934 | 0 | 67.4040659999946 | 582.6242449999768 |

1.41.7

Not secure
What's Changed
* [Bug Fix] Use OpenAI Tool Response Spec When Converting To Gemini/VertexAI Tool Response by andrewmjc in https://github.com/BerriAI/litellm/pull/4522
* feat - show key alias on prometheus metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4545
* Deepseek coder now has 128k context by paul-gauthier in https://github.com/BerriAI/litellm/pull/4541
* Cohere tool calling fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/4546
* fix: Include vertex_ai_beta in vertex_ai param mapping/Do not use google auth project_id by t968914 in https://github.com/BerriAI/litellm/pull/4461
* [Fix] Invite Links / Onboarding flow on admin ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4548
* feat - allow looking up model_id on `/model/info` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4547
* feat(internal_user_endpoints.py): expose `/user/delete` endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/4386
* Return output_vector_size in get_model_info by tomusher in https://github.com/BerriAI/litellm/pull/4279
* [Feat] Add Groq/whisper-large-v3 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4549 (usage sketch below)
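
For the Groq Whisper entry above, a minimal transcription sketch via the `litellm` SDK, assuming `GROQ_API_KEY` is set and `audio.mp3` is a placeholder file:

```python
# Sketch: transcribe an audio file with Groq's hosted whisper-large-v3.
# Assumes GROQ_API_KEY is set in the environment; "audio.mp3" is a placeholder.
import litellm

with open("audio.mp3", "rb") as audio_file:
    transcript = litellm.transcription(
        model="groq/whisper-large-v3",
        file=audio_file,
    )
print(transcript.text)
```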

New Contributors
* andrewmjc made their first contribution in https://github.com/BerriAI/litellm/pull/4522
* t968914 made their first contribution in https://github.com/BerriAI/litellm/pull/4461
* tomusher made their first contribution in https://github.com/BerriAI/litellm/pull/4279

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.6...v1.41.7



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.7
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 152.06898919521237 | 6.419721686734246 | 0.0 | 1921 | 0 | 111.60093299997698 | 1678.7594189999027 |
| Aggregated | Passed ✅ | 130.0 | 152.06898919521237 | 6.419721686734246 | 0.0 | 1921 | 0 | 111.60093299997698 | 1678.7594189999027 |

1.41.6

Not secure
What's Changed
* *real* Anthropic tool calling + streaming support by krrishdholakia in https://github.com/BerriAI/litellm/pull/4536 (see the streaming sketch after this list)
* fix(utils.py): fix vertex anthropic streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/4535
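
A minimal sketch of the streamed tool-calling path these two fixes touch, using the `litellm` SDK; the model name and tool schema are illustrative, and `ANTHROPIC_API_KEY` is assumed to be set:

```python
# Sketch: streamed tool calling against an Anthropic model through litellm.
# Assumes ANTHROPIC_API_KEY is set; the tool schema is a made-up example.
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

stream = litellm.completion(
    model="claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)

# Tool-call arguments arrive incrementally in OpenAI-style delta chunks.
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        print(delta.tool_calls[0].function.arguments or "", end="")
```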


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.5...v1.41.6



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.6
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 99 | 120.03623455821207 | 6.427899106681398 | 0.0 | 1924 | 0 | 83.30082600002697 | 1524.837892999983 |
| Aggregated | Passed ✅ | 99 | 120.03623455821207 | 6.427899106681398 | 0.0 | 1924 | 0 | 83.30082600002697 | 1524.837892999983 |

1.41.5

Not secure
What's Changed
* Fix: Output Structure of Ollama chat by edwinjosegeorge in https://github.com/BerriAI/litellm/pull/4089
* Allow calling SageMaker endpoints from different regions by petermuller in https://github.com/BerriAI/litellm/pull/4499 (see the sketch after this list)
* Doc set guardrails on litellm config.yaml by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4530
* [Feat] Allow users to set a guardrail config on proxy server by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4529
* [Feat] v2 - Control guardrails per LLM Call by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4532
* fix(vertex_anthropic.py): Vertex Anthropic tool calling - native params by krrishdholakia in https://github.com/BerriAI/litellm/pull/4531
* Revert "fix(vertex_anthropic.py): Vertex Anthropic tool calling - native params " by krrishdholakia in https://github.com/BerriAI/litellm/pull/4534
* Fix Granite Prompt template by nick-rackauckas in https://github.com/BerriAI/litellm/pull/4533
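
For the SageMaker entry above, a rough sketch of overriding the region on a per-call basis with the `litellm` SDK; the endpoint name and region are placeholders, and standard AWS credentials are assumed to be available in the environment:

```python
# Sketch: call a SageMaker endpoint deployed in a different region than the
# default AWS configuration. Endpoint name and region are placeholders.
import litellm

response = litellm.completion(
    model="sagemaker/my-llama-3-endpoint",   # hypothetical endpoint name
    messages=[{"role": "user", "content": "Hello"}],
    aws_region_name="eu-west-1",             # per-call region override
)
print(response.choices[0].message.content)
```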

New Contributors
* petermuller made their first contribution in https://github.com/BerriAI/litellm/pull/4499

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.4...v1.41.5



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.5
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 158.84378552886133 | 6.424808718348793 | 0.030069307574175322 | 1923 | 9 | 83.97341500000266 | 2746.2116009999704 |
| Aggregated | Passed ✅ | 140.0 | 158.84378552886133 | 6.424808718348793 | 0.030069307574175322 | 1923 | 9 | 83.97341500000266 | 2746.2116009999704 |

1.41.5.dev1

What's Changed
* *real* Anthropic tool calling + streaming support by krrishdholakia in https://github.com/BerriAI/litellm/pull/4536
* fix(utils.py): fix vertex anthropic streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/4535


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.5...v1.41.5.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.5.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 150.69639806536688 | 6.33818146467335 | 0.0 | 1897 | 0 | 115.55820499995662 | 1375.4738929999917 |
| Aggregated | Passed ✅ | 130.0 | 150.69639806536688 | 6.33818146467335 | 0.0 | 1897 | 0 | 115.55820499995662 | 1375.4738929999917 |

1.41.4

Not secure
What's Changed
* fix(router.py): disable cooldowns by krrishdholakia in https://github.com/BerriAI/litellm/pull/4497
* fix(slack_alerting.py): use in-memory cache for checking request status by krrishdholakia in https://github.com/BerriAI/litellm/pull/4520
* feat(vertex_httpx.py): Support cachedContent. by Manouchehri in https://github.com/BerriAI/litellm/pull/4492
* [Fix+Test] /audio/transcriptions - use initialized OpenAI / Azure OpenAI clients by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4519 (see the sketch after this list)
* [Fix-Proxy] Background health checks use deep copy of model list for _run_background_health_check by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4518
* refactor(azure.py): move azure dall-e calls to httpx client by krrishdholakia in https://github.com/BerriAI/litellm/pull/4523
* feat(dynamic_rate_limiter.py): support dynamic rate limiting on rpm by krrishdholakia in https://github.com/BerriAI/litellm/pull/4502
* [Enterprise] Check if Key should run secret_detection callback by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4524
* [Feat] Control Lakera AI per Request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4525
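
For the `/audio/transcriptions` fix above, a minimal sketch of exercising that proxy route with the OpenAI SDK; the key, model, and file name are placeholders, and the proxy needs a Whisper-capable model configured:

```python
# Sketch: hit the proxy's /audio/transcriptions route with the OpenAI SDK.
# "sk-1234", "whisper-1", and "meeting.mp3" are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcript.text)
```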


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.3...v1.41.4



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 174.7746187513257 | 6.304554345115949 | 0.0 | 1886 | 0 | 120.43884399997751 | 1842.0690810000337 |
| Aggregated | Passed ✅ | 150.0 | 174.7746187513257 | 6.304554345115949 | 0.0 | 1886 | 0 | 120.43884399997751 | 1842.0690810000337 |
