LiteLLM


1.41.12.dev1

What's Changed
* [Fix - Proxy] Raise `type=ProxyErrorTypes.budget_exceeded,` on Exceeded budget errors by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4606 (see the sketch after this list)
* feat(httpx): Send litellm user-agent version upstream by Manouchehri in https://github.com/BerriAI/litellm/pull/4591
* fix(utils.py): change update to upsert by andresrguzman in https://github.com/BerriAI/litellm/pull/4610
* [Proxy-Fix]: Add /assistants, /threads as OpenAI routes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4611
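
For context on the budget-error change in #4606, here is a minimal client-side sketch. It assumes the proxy surfaces budget errors as an HTTP 400 whose JSON body carries the `budget_exceeded` type string; the proxy URL and keys below are placeholders, and the exact payload shape is an assumption, not confirmed by the PR.

```python
import requests

# Placeholder proxy URL and key -- substitute your own deployment's values.
PROXY_URL = "http://localhost:4000/chat/completions"
API_KEY = "sk-my-litellm-key"

resp = requests.post(
    PROXY_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "hi"}],
    },
)

if resp.status_code == 400:
    # Per #4606 the error type should now read "budget_exceeded";
    # the JSON shape of the error body is an assumption here.
    err = resp.json().get("error", {})
    if err.get("type") == "budget_exceeded":
        print("Budget exhausted for this key/team -- raise it or rotate keys.")
else:
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
```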

New Contributors
* andresrguzman made their first contribution in https://github.com/BerriAI/litellm/pull/4610

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.12...1.41.12.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-1.41.12.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 82 | 96.46 | 6.52 | 0.03 | 1953 | 9 | 40.95 | 1350.24 |
| Aggregated | Passed ✅ | 82 | 96.46 | 6.52 | 0.03 | 1953 | 9 | 40.95 | 1350.24 |

1.41.11

What's Changed
* fix: typo in vision docs by berkecanrizai in https://github.com/BerriAI/litellm/pull/4555
* [Feat] Improve Proxy Mem Util (Reduces proxy startup memory util by 50%) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4577
* [fix] UI fix show models as dropdown by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4574
* UI - don't spam error messages when model list is not defined by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4575
* Azure proxy tts pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4572
* feat(cost_calculator.py): support openai+azure tts calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4571
* [Refactor] Use helper function to encrypt/decrypt model credentials by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4576
* [Feat-Enterprise] /spend/report view spend for a specific key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4578 (see the sketch after this list)
* build(deps): bump aiohttp from 3.9.0 to 3.9.4 by dependabot in https://github.com/BerriAI/litellm/pull/4553
* Enforcing sync'd `poetry.lock` via `pre-commit` by jamesbraza in https://github.com/BerriAI/litellm/pull/4517
* [Feat] OTEL allow setting deployment environment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4422
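
A minimal sketch of the per-key spend view from #4578. The `api_key` query parameter and the response shape are assumptions inferred from the PR title, and the endpoint is enterprise-only per the changelog; all values below are placeholders.

```python
import requests

# Hypothetical values -- point these at your own proxy deployment.
PROXY_BASE = "http://localhost:4000"
ADMIN_KEY = "sk-admin"   # master/admin key
TARGET_KEY = "sk-1234"   # the key whose spend you want to inspect

resp = requests.get(
    f"{PROXY_BASE}/spend/report",
    headers={"Authorization": f"Bearer {ADMIN_KEY}"},
    # Assumed parameter name, inferred from the PR title.
    params={"api_key": TARGET_KEY},
)
resp.raise_for_status()
print(resp.json())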

New Contributors
* berkecanrizai made their first contribution in https://github.com/BerriAI/litellm/pull/4555

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.8...v1.41.11

1.41.11.dev5

What's Changed
* Fix bugs with watsonx embedding/async endpoints by simonsanvil in https://github.com/BerriAI/litellm/pull/4586 (see the sketch after this list)
* fix - setting rpm/tpm on proxy through admin ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4599
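
As context for the watsonx fixes in #4586, a minimal embedding call through the SDK; the model ID is illustrative, and the required `WATSONX_*` credentials in the environment are assumed.

```python
import litellm

# Illustrative watsonx model ID; requires watsonx credentials
# (URL, API key, project ID) set in the environment.
response = litellm.embedding(
    model="watsonx/ibm/slate-30m-english-rtrvr",
    input=["hello from litellm"],
)
print(response.data[0]["embedding"][:5])

# The PR also touched the async path; litellm.aembedding takes the
# same arguments and is awaited instead.
```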


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.11.dev1...1.41.11.dev5



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-1.41.11.dev5
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140 | 166.19 | 6.33 | 0.0 | 1894 | 0 | 115.08 | 2456.91 |
| Aggregated | Passed ✅ | 140 | 166.19 | 6.33 | 0.0 | 1894 | 0 | 115.08 | 2456.91 |

1.41.11.dev1

What's Changed
* fix(vertex_httpx.py): support tool calling w/ streaming for vertex ai + gemini by krrishdholakia in https://github.com/BerriAI/litellm/pull/4579 (see the sketch after this list)
* fix(router.py): fix setting httpx mounts by krrishdholakia in https://github.com/BerriAI/litellm/pull/4434
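
To illustrate the streaming tool-call fix in #4579, a minimal SDK sketch; the model name and tool schema are placeholders, and Vertex AI / Gemini credentials are assumed to be configured in the environment.

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

stream = litellm.completion(
    model="gemini/gemini-1.5-pro",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta
    # With #4579, tool-call deltas should now arrive in streamed chunks.
    if delta.tool_calls:
        print(delta.tool_calls)
```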


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.8.dev2...v1.41.11.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.11.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 77 | 88.81 | 6.54 | 0.0 | 1957 | 0 | 67.48 | 1314.01 |
| Aggregated | Passed ✅ | 77 | 88.81 | 6.54 | 0.0 | 1957 | 0 | 67.48 | 1314.01 |

1.41.8

🔥 Excited to launch support for Logging LLM I/O on 🔭 Galileo through LiteLLM (YC W23) Proxy: https://docs.litellm.ai/docs/proxy/logging#logging-llm-io-to-galielo

📈 [docs] New example Grafana Dashboards https://github.com/BerriAI/litellm/tree/main/cookbook/litellm_proxy_server/grafana_dashboard


🛡️ feat - control guardrails per api key https://docs.litellm.ai/docs/proxy/guardrails#switch-guardrails-onoff-per-api-key


🛠️ fix - raise & report Anthropic streaming errors (thanks David Manouchehri)



✨ [Fix] Add nvidia nim param mapping based on model passed
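
As a quick illustration of model-dependent param mapping, a sketch using `get_supported_openai_params` to inspect what the SDK maps for a given NIM model; the model name is illustrative, and this helper is simply how one would inspect the mapping, not necessarily the code path the fix touches.

```python
from litellm import get_supported_openai_params

# Illustrative NIM model name; per the fix, the supported OpenAI
# params now vary with the model that is passed in.
params = get_supported_openai_params(
    model="nvidia_nim/meta/llama3-70b-instruct",
    custom_llm_provider="nvidia_nim",
)
print(params)  # e.g. ["temperature", "max_tokens", ...] depending on the model
```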

![Group 5879](https://github.com/BerriAI/litellm/assets/29436595/87b9ff6c-9704-45dc-81be-88b47b1aebe8)



What's Changed
* fix(anthropic.py): add index to streaming tool use by igor-drozdov in https://github.com/BerriAI/litellm/pull/4554
* (fix) fixed bug with the watsonx embedding endpoint by simonsanvil in https://github.com/BerriAI/litellm/pull/4540
* Revert "(fix) fixed bug with the watsonx embedding endpoint" by krrishdholakia in https://github.com/BerriAI/litellm/pull/4561
* [docs] add example Grafana Dashboard by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4563
* build(deps): bump certifi from 2023.7.22 to 2024.7.4 by dependabot in https://github.com/BerriAI/litellm/pull/4568
* fix(proxy/utils.py): support logging rejected requests to langfuse, etc. by krrishdholakia in https://github.com/BerriAI/litellm/pull/4564
* [Feat] Add Galileo Logging Callback by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4567
* [Fix] Add nvidia nim param mapping based on `model` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4565
* fix - raise report Anthropic streaming errors by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4566
* feat - control guardrails per api key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4569
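
For the per-key guardrails control in #4569, a sketch of minting a key with one guardrail switched off. The guardrail name and the `permissions` payload shape are assumptions based on the linked docs, and the admin key and URL are placeholders.

```python
import requests

PROXY_BASE = "http://localhost:4000"
ADMIN_KEY = "sk-admin"  # placeholder master key

# Assumed payload shape: a guardrail name (as configured on the proxy)
# mapped to False to switch it off for keys minted with this call.
resp = requests.post(
    f"{PROXY_BASE}/key/generate",
    headers={"Authorization": f"Bearer {ADMIN_KEY}"},
    json={"permissions": {"prompt_injection": False}},
)
resp.raise_for_status()
print(resp.json()["key"])
```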

New Contributors
* igor-drozdov made their first contribution in https://github.com/BerriAI/litellm/pull/4554

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.7...v1.41.8



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.8
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120 | 148.49 | 6.38 | 0.0 | 1909 | 0 | 109.11 | 1689.41 |
| Aggregated | Passed ✅ | 120 | 148.49 | 6.38 | 0.0 | 1909 | 0 | 109.11 | 1689.41 |

1.41.8.dev2

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.41.11...v1.41.8.dev2



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.41.8.dev2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130 | 145.87 | 6.33 | 0.0 | 1895 | 0 | 106.76 | 387.62 |
| Aggregated | Passed ✅ | 130 | 145.87 | 6.33 | 0.0 | 1895 | 0 | 106.76 | 387.62 |
