LiteLLM

Latest version: v1.52.14


1.43.9.dev3

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.9.dev2...v1.43.9.dev3

1.43.9.dev2

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.9.dev1...v1.43.9.dev2



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.9.dev2
```
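
Once the container is up, you can verify it by sending an OpenAI-format request to the proxy's `/chat/completions` endpoint. A minimal sketch; the model name and `sk-1234` key are placeholders for whatever you configured:

```
# placeholder model + key -- substitute whatever you configured
curl http://localhost:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```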



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 119.54295183229561 | 6.434968702063364 | 0.0 | 1926 | 0 | 87.48363899997003 | 1576.8922139999972 |
| Aggregated | Passed ✅ | 100.0 | 119.54295183229561 | 6.434968702063364 | 0.0 | 1926 | 0 | 87.48363899997003 | 1576.8922139999972 |

1.43.9.dev1

What's Changed
* [Feat-Proxy+langfuse] LiteLLM-specific Tags on Langfuse - `cache_hit`, `cache_key` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5180 (config sketch below)
* [Feat-Proxy Security] Allow Using `x-forwarded-for` for enforcing + tracking ip address by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5181
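
For context, the new Langfuse tags ride on the existing callback, so enabling it on the proxy is a small config change. A minimal sketch, assuming a standard model entry (the `gpt-4o` route is a placeholder); with caching on, traces should then carry the `cache_hit` / `cache_key` tags:

```
model_list:
  - model_name: gpt-4o              # placeholder: any model you already route
    litellm_params:
      model: openai/gpt-4o
litellm_settings:
  success_callback: ["langfuse"]    # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY
```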


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.9...v1.43.9.dev1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.9.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 201.6746962599047 | 6.326349098765157 | 0.0 | 1893 | 0 | 100.86242599999196 | 10702.858546999949 |
| Aggregated | Passed ✅ | 130.0 | 201.6746962599047 | 6.326349098765157 | 0.0 | 1893 | 0 | 100.86242599999196 | 10702.858546999949 |

1.43.7

Not secure
What's Changed
* [Refactor+Testing] Refactor Prometheus metrics to use CustomLogger class + add testing for prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5149 (config sketch after this list)
* fix(main.py): safely fail stream_chunk_builder calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/5151
* Feat - track response latency on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5152
* Feat - Proxy track fallback metrics on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5153
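
The new latency and fallback metrics come from the Prometheus logger, which the proxy enables via callbacks and serves on its `/metrics` endpoint. A minimal sketch of the config, following the documented callback pattern:

```
litellm_settings:
  success_callback: ["prometheus"]   # request + response-latency metrics
  failure_callback: ["prometheus"]   # failure + fallback metrics
```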


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.6...v1.43.7



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.7
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 82 | 93.54444385099963 | 6.525127678045322 | 0.0 | 1953 | 0 | 71.26952499999106 | 701.7027350000262 |
| Aggregated | Passed ✅ | 82 | 93.54444385099963 | 6.525127678045322 | 0.0 | 1953 | 0 | 71.26952499999106 | 701.7027350000262 |

1.43.7.dev1

What's Changed
* [Feat-Proxy] send prometheus fallbacks stats to slack by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5154
* [Feat-Security] Send Slack Alert when CRUD ops done on Virtual Keys, Teams, Internal Users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5166 (config sketch below)
* [Proxy docstring] fix curl on docstring on /team endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5167
* [Feat Proxy] Send slack alert on CRUD endpoints for Internal Users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5168
* [Feat] Log GCS logs in folders based on dd-m-yyyy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5171
* [Feat] GCS Bucket logging - log api key metadata + response cost by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5169
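
For reference, the Slack alerting and GCS bucket logging above are both config-driven. A minimal combined sketch; the exact key names should be checked against the docs, and the bucket/webhook details come from env vars (`GCS_BUCKET_NAME`, `GCS_PATH_SERVICE_ACCOUNT`, `SLACK_WEBHOOK_URL`):

```
litellm_settings:
  callbacks: ["gcs_bucket"]   # objects land in dd-m-yyyy folders, with API key metadata + response cost
general_settings:
  alerting: ["slack"]         # CRUD ops on virtual keys / teams / internal users alert here
```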


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.6.dev1...v1.43.7.dev1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.7.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 83 | 99.22923393705152 | 6.5347647956346755 | 0.0 | 1954 | 0 | 70.33222400002614 | 1331.3259499999504 |
| Aggregated | Passed ✅ | 83 | 99.22923393705152 | 6.5347647956346755 | 0.0 | 1954 | 0 | 70.33222400002614 | 1331.3259499999504 |

v1.43.7-stable
📈 New Prometheus Metrics

doc: https://docs.litellm.ai/docs/proxy/prometheus#llm-api--provider-metrics
Release: https://github.com/BerriAI/litellm/releases/tag/v1.43.7-stable

* `llm_deployment_latency_per_output_token` -> Track latency / output tokens
* `llm_deployment_failure_responses` -> Calculate error rate per deployment (divide this by `llm_deployment_total_requests`; see the example query after this list)
* `llm_deployment_successful_fallbacks` -> Number of successful fallback requests from primary model -> fallback model
* `llm_deployment_failed_fallbacks` -> Number of failed fallback requests from primary model -> fallback model
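
As an example of the error-rate calculation above, a PromQL sketch; the grouping label is an assumption, so check the labels your proxy's `/metrics` endpoint actually exposes:

```
# Per-deployment error rate over the last 5 minutes.
# Label name "model_id" is an assumption -- verify against /metrics.
sum by (model_id) (rate(llm_deployment_failure_responses[5m]))
  /
sum by (model_id) (rate(llm_deployment_total_requests[5m]))
```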


![Group 5949](https://github.com/user-attachments/assets/9a213d46-ecf3-423c-a58e-b1d598cb892d)


What's Changed
* [Refactor+Testing] Refactor Prometheus metrics to use CustomLogger class + add testing for prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5149
* fix(main.py): safely fail stream_chunk_builder calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/5151
* Feat - track response latency on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5152
* Feat - Proxy track fallback metrics on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5153


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.6...v1.43.7-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.7-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 158.2456064989428 | 6.32111960609156 | 0.0 | 1892 | 0 | 111.09798900000101 | 2661.257857999999 |
| Aggregated | Passed ✅ | 130.0 | 158.2456064989428 | 6.32111960609156 | 0.0 | 1892 | 0 | 111.09798900000101 | 2661.257857999999 |

1.43.6

Not secure
What's Changed
* fix(utils.py): set max_retries = num_retries, if given by krrishdholakia in https://github.com/BerriAI/litellm/pull/5143 (example below)
* fix(litellm_logging.py): fix calling success callback w/ stream_options true by krrishdholakia in https://github.com/BerriAI/litellm/pull/5145
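
The first fix means a per-call `num_retries` now also sets the underlying client's `max_retries`. A minimal sketch of the call shape; the model name is a placeholder:

```
import litellm

# num_retries is passed per request; after this fix it also sets the
# underlying client's max_retries when given
response = litellm.completion(
    model="gpt-4o",  # placeholder: any configured model
    messages=[{"role": "user", "content": "hello"}],
    num_retries=3,
)
```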


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.5...v1.43.6



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.6
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100 | 125.43108280423552 | 6.46962941703228 | 0.0 | 1936 | 0 | 81.50035599999228 | 2114.0125620000276 |
| Aggregated | Passed ✅ | 100 | 125.43108280423552 | 6.46962941703228 | 0.0 | 1936 | 0 | 81.50035599999228 | 2114.0125620000276 |

v1.43.5-stable
What's Changed
* Feat - Translate openai function names to bedrock converse schema by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5138 (example after this list)
* [Feat-Langfuse] log VertexAI Grounding Metadata as Spans by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5139
* [Fix] Place bedrock modified tool call name in output by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5144
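
To illustrate the Bedrock changes above: tools are passed in OpenAI function format, LiteLLM translates them to the Converse schema, and any tool-call names it had to modify are restored in the output. A minimal sketch; the model ID and tool are placeholders:

```
import litellm

# Hypothetical tool in OpenAI function format; LiteLLM translates it
# to the Bedrock Converse schema under the hood
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```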


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.4.dev5...v1.43.5-stable



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.5-stable



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 135.68491227253722 | 6.41393764623551 | 0.0 | 1919 | 0 | 97.89096399998698 | 1622.835828999996 |
| Aggregated | Passed ✅ | 120.0 | 135.68491227253722 | 6.41393764623551 | 0.0 | 1919 | 0 | 97.89096399998698 | 1622.835828999996 |

