LiteLLM

Latest version: v1.65.1

1.56.9

Not secure
What's Changed
* (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7519
* (Fix) - Hashicorp secret manager - don't print hcorp secrets in debug logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7529
* [Bug-Fix]: None metadata not handled for `_PROXY_VirtualKeyModelMaxBudgetLimiter` hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7523
* Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by Manouchehri in https://github.com/BerriAI/litellm/pull/7118
* Fix langfuse prompt management on proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/7535
* (Feat) - Hashicorp secret manager, use TLS cert authentication by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7532
* Fix OTEL message redaction + Langfuse key leak in logs by krrishdholakia in https://github.com/BerriAI/litellm/pull/7516
* feat: implement support for limit, order, before, and after parameters in get_assistants by jeansouzak in https://github.com/BerriAI/litellm/pull/7537
* Add missing prefix for deepseek by SmartManoj in https://github.com/BerriAI/litellm/pull/7508
* (fix) `aiohttp_openai/` route - get to 1K RPS on single instance by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7539
* Revert "feat: implement support for limit, order, before, and after parameters in get_assistants" by krrishdholakia in https://github.com/BerriAI/litellm/pull/7542
* [Feature]: - allow print alert log to console by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7534
* (fix proxy perf) use `_read_request_body` instead of ast.literal_eval to get better performance by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7545

New Contributors
* jeansouzak made their first contribution in https://github.com/BerriAI/litellm/pull/7537
* SmartManoj made their first contribution in https://github.com/BerriAI/litellm/pull/7508

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.8...v1.56.9



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.9
```
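
Once the container is up, the proxy exposes an OpenAI-compatible API on the mapped port 4000. As a minimal sketch (the model name `gpt-3.5-turbo` and the `sk-...` key below are placeholders — substitute whatever your proxy config defines), a `/chat/completions` request can be built and sent like this:

```python
import json
import urllib.request

# Base URL of the locally running LiteLLM proxy (port mapped via `docker run -p 4000:4000`).
PROXY_URL = "http://localhost:4000/chat/completions"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style /chat/completions payload for the proxy."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def send_chat_request(payload: dict, api_key: str) -> bytes:
    """POST the payload to the proxy; requires the container to be running."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


payload = build_chat_request("gpt-3.5-turbo", "Hello!")
print(json.dumps(payload))
```

The `/chat/completions` route is the same endpoint exercised by the load-test tables in these notes.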



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 269.3983699320639 | 6.149252570882109 | 0.0 | 1840 | 0 | 211.95807399999467 | 2571.210135000001 |
| Aggregated | Passed ✅ | 240.0 | 269.3983699320639 | 6.149252570882109 | 0.0 | 1840 | 0 | 211.95807399999467 | 2571.210135000001 |

1.56.8

Not secure
What's Changed
* Prometheus - custom metrics support + other improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/7489
* (feat) POST `/fine_tuning/jobs` support passing vertex specific hyper params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7490
* (Feat) - LiteLLM Use `UsernamePasswordCredential` for Azure OpenAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7496
* (docs) Add docs on load testing benchmarks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7499
* (Feat) Add support for reading secrets from Hashicorp vault by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7497
* Litellm dev 12 30 2024 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7495
* Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by krrishdholakia in https://github.com/BerriAI/litellm/pull/7498
* (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7500
* Litellm dev 01 01 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7503
* Litellm dev 01 02 2025 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7512
* Revert "(fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7515
* (perf) use `aiohttp` for `custom_openai` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7514
* (perf) use threadpool executor - for sync logging integrations by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7509


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.6...v1.56.8



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.8
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |
| Aggregated | Passed ✅ | 230.0 | 247.81903455189286 | 6.181081075067931 | 0.0 | 1850 | 0 | 191.81740900000932 | 2126.8676100000903 |

1.56.8-dev2

What's Changed
* (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7519
* (Fix) - Hashicorp secret manager - don't print hcorp secrets in debug logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7529
* [Bug-Fix]: None metadata not handled for `_PROXY_VirtualKeyModelMaxBudgetLimiter` hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7523
* Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by Manouchehri in https://github.com/BerriAI/litellm/pull/7118
* Fix langfuse prompt management on proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/7535
* (Feat) - Hashicorp secret manager, use TLS cert authentication by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7532
* Fix OTEL message redaction + Langfuse key leak in logs by krrishdholakia in https://github.com/BerriAI/litellm/pull/7516
* feat: implement support for limit, order, before, and after parameters in get_assistants by jeansouzak in https://github.com/BerriAI/litellm/pull/7537
* Add missing prefix for deepseek by SmartManoj in https://github.com/BerriAI/litellm/pull/7508
* (fix) `aiohttp_openai/` route - get to 1K RPS on single instance by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7539

New Contributors
* jeansouzak made their first contribution in https://github.com/BerriAI/litellm/pull/7537
* SmartManoj made their first contribution in https://github.com/BerriAI/litellm/pull/7508

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.8...v1.56.8-dev2



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.8-dev2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 260.0 | 302.69986428167584 | 6.1480113905567375 | 0.0 | 1839 | 0 | 230.89517400001114 | 2985.9468520000405 |
| Aggregated | Failed ❌ | 260.0 | 302.69986428167584 | 6.1480113905567375 | 0.0 | 1839 | 0 | 230.89517400001114 | 2985.9468520000405 |

v1.56.3-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.8-stable...v1.56.3-stable



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.56.3-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 285.39144223780414 | 6.0307890213828905 | 0.0033430094353563695 | 1804 | 1 | 125.146089999987 | 3186.0641239999836 |
| Aggregated | Passed ✅ | 250.0 | 285.39144223780414 | 6.0307890213828905 | 0.0033430094353563695 | 1804 | 1 | 125.146089999987 | 3186.0641239999836 |

1.56.8-dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.8...v1.56.8-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.8-dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |
| Aggregated | Passed ✅ | 250.0 | 284.69056873304487 | 6.157751312397796 | 0.0 | 1843 | 0 | 211.56842700003153 | 2410.6343400000014 |

1.56.6

Not secure
What's Changed
* (fix) `v1/fine_tuning/jobs` with VertexAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7487
* (docs) Add docs on using Vertex with Fine Tuning APIs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7491
* Fix team-based logging to langfuse + allow custom tokenizer on `/token_counter` endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/7493
* Fix team admin create key flow on UI + other improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/7488
* docs: added missing quote by dsdanielko in https://github.com/BerriAI/litellm/pull/7481
* fix ollama embedding model response 7451 by svenseeberg in https://github.com/BerriAI/litellm/pull/7473
* (Feat) - Add PagerDuty Alerting Integration by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7478

New Contributors
* dsdanielko made their first contribution in https://github.com/BerriAI/litellm/pull/7481
* svenseeberg made their first contribution in https://github.com/BerriAI/litellm/pull/7473

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.5...v1.56.6



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.6
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |
| Aggregated | Passed ✅ | 250.0 | 287.411814751915 | 6.114731230663012 | 0.0 | 1830 | 0 | 228.32058200003758 | 3272.637599999939 |

1.56.6.dev1

What's Changed
* Prometheus - custom metrics support + other improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/7489
* (feat) POST `/fine_tuning/jobs` support passing vertex specific hyper params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7490
* (Feat) - LiteLLM Use `UsernamePasswordCredential` for Azure OpenAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7496
* (docs) Add docs on load testing benchmarks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7499
* (Feat) Add support for reading secrets from Hashicorp vault by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7497
* Litellm dev 12 30 2024 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7495
* Refactor Custom Metrics on Prometheus - allow setting k,v pairs on all metrics via config.yaml by krrishdholakia in https://github.com/BerriAI/litellm/pull/7498
* (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on fails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7500
* Litellm dev 01 01 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7503


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.6...v1.56.6.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.6.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |
| Aggregated | Passed ✅ | 230.0 | 255.89973974836954 | 6.151774848433542 | 0.003343355895887794 | 1840 | 1 | 94.9865199999067 | 1259.9916519999965 |
