Litellm

Latest version: v1.52.14


1.41.1

Not secure
What's Changed
* [Doc] Add spec on pass through endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4468
* feat - Allow adding authentication on pass through endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4469
* tests - pass through endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4470


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.0...v1.41.1
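For context on the pass-through endpoint work above: once a pass-through route is configured on the proxy with authentication enabled, clients call it with a LiteLLM virtual key like any other proxy route. A minimal client-side sketch, assuming a re-rank style pass-through route has been configured on a locally running proxy; the route path, payload, and key below are illustrative placeholders, not part of this release:

```python
# Hypothetical client call to a pass-through endpoint on the LiteLLM proxy.
# Assumes the proxy (started via the docker command below) has a pass-through
# route registered at /v1/rerank with authentication enabled.
import requests

PROXY_URL = "http://localhost:4000"
VIRTUAL_KEY = "sk-1234"  # placeholder LiteLLM virtual key

# With auth enabled on the pass-through endpoint, the proxy validates this
# LiteLLM key before forwarding the request body to the configured target.
resp = requests.post(
    f"{PROXY_URL}/v1/rerank",  # example pass-through route (assumed)
    headers={"Authorization": f"Bearer {VIRTUAL_KEY}"},
    json={
        "model": "rerank-english-v3.0",
        "query": "What is the capital of France?",
        "documents": ["Paris is the capital of France.", "Berlin is in Germany."],
    },
)
print(resp.status_code, resp.json())
```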



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 135.94747413256803 | 6.401388314451501 | 0.0 | 1916 | 0 | 98.61660699999675 | 1625.039872000002 |
| Aggregated | Passed ✅ | 120.0 | 135.94747413256803 | 6.401388314451501 | 0.0 | 1916 | 0 | 98.61660699999675 | 1625.039872000002 |

v1.41.0-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.0...v1.41.0-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.0-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 135.89504825622356 | 6.44293938318117 | 0.0 | 1928 | 0 | 99.83084399999598 | 1682.5208299999872 |
| Aggregated | Passed ✅ | 120.0 | 135.89504825622356 | 6.44293938318117 | 0.0 | 1928 | 0 | 99.83084399999598 | 1682.5208299999872 |

1.41.0

Not secure
What's Changed
* feat: decrypts aws keys in entrypoint.sh by krrishdholakia in https://github.com/BerriAI/litellm/pull/4437
* fix: replicate - catch 422 unprocessable entity error by krrishdholakia in
* fix: router.py - pre-call-checks (if enabled) only check context window limits for azure models if base_model is set by krrishdholakia in https://github.com/BerriAI/litellm/commit/c9a424d28d23b798e1f4c5c00d95cfa0cf0eb13c
* fix: utils.py - correctly raise openrouter content filter error by krrishdholakia in https://github.com/BerriAI/litellm/commit/ca04244a0ab76291a819f0f9a475f5e0706d0808

Note: This release changes how pre-call checks run for Azure models. Filtering models based on context window limits will only apply to Azure models if `base_model` is set.

To enable pre-call-checks 👉 https://docs.litellm.ai/docs/routing#pre-call-checks-context-window-eu-regions
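A minimal Router sketch of the behaviour described in the note above; deployment names, keys, and endpoints are placeholders, and `enable_pre_call_checks` is the setting documented at the link above:

```python
# Sketch: pre-call checks only filter this Azure deployment on context window
# limits if model_info.base_model is set (per the note above).
import os
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {
                "model": "azure/my-gpt-4-deployment",                # placeholder deployment
                "api_key": os.environ.get("AZURE_API_KEY"),
                "api_base": "https://my-endpoint.openai.azure.com",  # placeholder endpoint
            },
            # Maps the deployment to a known model so the router can look up
            # its real context window during pre-call checks.
            "model_info": {"base_model": "azure/gpt-4-1106-preview"},
        },
    ],
    enable_pre_call_checks=True,  # turn on context-window filtering before routing
)

response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "hello"}],
)
```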

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.31...v1.41.0



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 156.9410206132132 | 6.2719899835647945 | 0.0 | 1877 | 0 | 112.84582399997589 | 1745.2864320000003 |
| Aggregated | Passed ✅ | 130.0 | 156.9410206132132 | 6.2719899835647945 | 0.0 | 1877 | 0 | 112.84582399997589 | 1745.2864320000003 |

1.40.31

Not secure
What's Changed
* [Fix] Azure Post-API Call occurs before Pre-API Call in CustomLogger by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4451
* [Fix-Proxy] Fix in memory caching memory leak by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4366
* fix: do not resolve vertex project id from creds by ushuz in https://github.com/BerriAI/litellm/pull/4445
* fix(utils.py): return 'response_cost' in completion call by krrishdholakia in https://github.com/BerriAI/litellm/pull/4436
* feat(azure.py): azure tts support by krrishdholakia in https://github.com/BerriAI/litellm/pull/4449
* fix(token_counter.py): New `get_modified_max_tokens` helper func by krrishdholakia in https://github.com/BerriAI/litellm/pull/4446
* docs: minor link repairs by nibalizer in https://github.com/BerriAI/litellm/pull/4460
* Fix typo by lnguyen in https://github.com/BerriAI/litellm/pull/4457
* Docs create pass through routes litellm proxy (tutorial setup cohere Re-Rank Endpoint) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4463
* [Feat] Allow users to set pass through endpoint + add Cohere Re-Rank by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4462
* [Enterprise] Return Raw response from Lakera in failed responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4464
* [Feat] - Proxy support Passing through Langfuse requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4465

New Contributors
* lnguyen made their first contribution in https://github.com/BerriAI/litellm/pull/4457

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.29...v1.40.31
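A short sketch of the new Azure text-to-speech path added in this release; the deployment name, endpoint, and key are placeholders, and the `litellm.speech` call mirrors the existing OpenAI TTS usage:

```python
# Sketch: Azure TTS via litellm.speech (names below are placeholders).
import os
import litellm

audio = litellm.speech(
    model="azure/tts-1",                       # azure/<your-tts-deployment>
    input="Hello from the LiteLLM release notes.",
    voice="alloy",
    api_base=os.environ.get("AZURE_API_BASE"),
    api_key=os.environ.get("AZURE_API_KEY"),
)
audio.stream_to_file("speech.mp3")             # write the returned audio to disk
```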



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.31
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 77 | 91.1696113255562 | 6.455451046411027 | 0.0 | 1929 | 0 | 66.79889399998729 | 1628.1963670000437 |
| Aggregated | Passed ✅ | 77 | 91.1696113255562 | 6.455451046411027 | 0.0 | 1929 | 0 | 66.79889399998729 | 1628.1963670000437 |

1.40.29

Not secure
What's Changed
* Updates Databricks provider docs by djliden in https://github.com/BerriAI/litellm/pull/4442
* [Feat] Improve secret detection call hook - catch more cases by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4444
* [Fix] Secret redaction logic when used with logging callbacks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4443
* [fix] error message on `/v2/model/info` when no models exist by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4447

New Contributors
* djliden made their first contribution in https://github.com/BerriAI/litellm/pull/4442

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.28...v1.40.29



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.29
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 169.90346260031845 | 6.295057404345822 | 0.0 | 1884 | 0 | 116.81983199997603 | 1212.0624549999661 |
| Aggregated | Passed ✅ | 150.0 | 169.90346260031845 | 6.295057404345822 | 0.0 | 1884 | 0 | 116.81983199997603 | 1212.0624549999661 |

1.40.28

Not secure
What's Changed
* Added openrouter/anthropic/claude-3.5-sonnet & haiku to model json by paul-gauthier in https://github.com/BerriAI/litellm/pull/4400
* [Feat] Add Fireworks AI Tool calling support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4418
* fix add ollama codegemma to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4424
* Add return type annotations to util types by guitard0g in https://github.com/BerriAI/litellm/pull/4420
* [Fix-Proxy] Store SpendLogs when using Whisper, Moderations etc by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4427
* [Fix-Proxy] Azure Embeddings use AsyncAzureOpenAI Client initialized on litellm.Router for requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4431
* [Feat] New Provider - Add volcano AI Engine by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4433
* [Fix] Forward OTEL Traceparent Header to provider by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4423
* [Feat] Add Codestral pricing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4435
* [Feat] Add all Vertex AI Models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4421

New Contributors
* guitard0g made their first contribution in https://github.com/BerriAI/litellm/pull/4420

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.27...v1.40.28
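A small sketch of the Fireworks AI tool-calling support added above; the model id is an assumption (any Fireworks AI model with function calling should apply), and `FIREWORKS_AI_API_KEY` is assumed to be set in the environment:

```python
# Sketch: tool calling through the new Fireworks AI support.
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }
]

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/firefunction-v2",  # assumed model id
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```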



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.28
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 159.37771652368323 | 6.278392214516223 | 0.0 | 1879 | 0 | 117.58081900001116 | 1089.7057880000034 |
| Aggregated | Passed ✅ | 140.0 | 159.37771652368323 | 6.278392214516223 | 0.0 | 1879 | 0 | 117.58081900001116 | 1089.7057880000034 |

1.40.27

Not secure
✨ Thrilled to launch support for the NVIDIA NIM LLM API on LiteLLM 1.40.27 👉 Start here: https://docs.litellm.ai/docs/providers/nvidia_nim

🔥 Proxy 100+ LLMs & set budgets

🔑 [Enterprise] Add secret detection pre call hook https://docs.litellm.ai/docs/proxy/enterprise#content-moderation

🛠️ [Fix] - use `n` in mock completion responses

⚡️ [Feat] add endpoint to debug memory utilization

🔑 [Enterprise] Allow verifying license in an air-gapped VPC
![Group 5865](https://github.com/BerriAI/litellm/assets/29436595/c77791b9-7ec5-4cbd-b285-75c616ce5c3b)
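A minimal sketch of calling the new NVIDIA NIM provider; the model id is illustrative, and the env var name is a placeholder (see the docs link above for the supported model list and required credentials):

```python
# Sketch: completion via the new nvidia_nim provider (placeholders throughout).
import os
import litellm

response = litellm.completion(
    model="nvidia_nim/meta/llama3-8b-instruct",      # assumed NIM-hosted model id
    messages=[{"role": "user", "content": "Say hello from NIM."}],
    api_key=os.environ.get("NVIDIA_NIM_API_KEY"),    # placeholder env var name
)
print(response.choices[0].message.content)
```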


What's Changed
* [Fix-Improve] Improve Ollama prompt input and fix Ollama function calling key error and fix Ollama function calling `can only join an iterable` error by CorrM in https://github.com/BerriAI/litellm/pull/4373
* Fix Groq Prices by kiriloman in https://github.com/BerriAI/litellm/pull/4401
* [Feat] add endpoint to debug memory util by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4364
* [Feat-New Provider] Add Nvidia NIM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4403
* [Fix] - use `n` in mock completion responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4405
* enterprise - allow verifying license in air gapped vpc by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4409
* Create litellm user to fix issue with prisma in k8s by lolsborn in https://github.com/BerriAI/litellm/pull/4402
* [Enterprise] Add secret detection pre call hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4410
* Revert "Create litellm user to fix issue with prisma in k8s " by krrishdholakia in https://github.com/BerriAI/litellm/pull/4412
* fix(router.py): set `cooldown_time:` per model by krrishdholakia in https://github.com/BerriAI/litellm/pull/4411

New Contributors
* CorrM made their first contribution in https://github.com/BerriAI/litellm/pull/4373
* kiriloman made their first contribution in https://github.com/BerriAI/litellm/pull/4401

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.26...v1.40.27
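The per-model `cooldown_time` fix above can be exercised from the Router; a brief sketch with placeholder deployments (the exact field placement follows the PR title, so treat it as an assumption):

```python
# Sketch: setting cooldown_time per deployment so a failing deployment is
# benched for a deployment-specific number of seconds before being retried.
import os
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/my-gpt-35-deployment",      # placeholder deployment
                "api_key": os.environ.get("AZURE_API_KEY"),
                "api_base": "https://my-endpoint.openai.azure.com",
                "cooldown_time": 30,                        # seconds, for this deployment only
            },
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo"},   # fallback deployment
        },
    ],
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
```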



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.27
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 156.61068343517005 | 6.372506185089714 | 0.0 | 1905 | 0 | 109.52021800000011 | 1799.9076889999515 |
| Aggregated | Passed ✅ | 130.0 | 156.61068343517005 | 6.372506185089714 | 0.0 | 1905 | 0 | 109.52021800000011 | 1799.9076889999515 |
