LiteLLM

Latest version: v1.52.14


1.33.3

Not secure
What's Changed
* fix(bedrock.py): support bedrock claude 3 function calling when stream=true by krrishdholakia in https://github.com/BerriAI/litellm/pull/2630 (usage sketch after this list)
* (fix) include tenacity in req.txt by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2619
* [ fix ] retry logic - when using router/proxy - don't retry on the litellm.completion level too by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2620
* Bump fastapi version `0.104.1` to `0.109.1` by RoniGurvich in https://github.com/BerriAI/litellm/pull/2617
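
PR 2630 above fixes tool calls emitted while a Bedrock Claude 3 response is being streamed. A minimal sketch of that call path, assuming AWS credentials are already configured in the environment; the model id, tool name, and JSON schema are illustrative placeholders, not part of the fix itself:

```python
# Minimal sketch of streamed function calling against Bedrock Claude 3.
# Assumes AWS credentials are configured in the environment; the model id,
# tool name, and schema are illustrative placeholders.
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # hypothetical tool
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
    stream=True,  # the case addressed by this fix
)

for chunk in response:
    delta = chunk.choices[0].delta
    # Tool-call fragments arrive incrementally alongside any text content.
    if getattr(delta, "tool_calls", None):
        print(delta.tool_calls)
    elif delta.content:
        print(delta.content, end="")
```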

New Contributors
* RoniGurvich made their first contribution in https://github.com/BerriAI/litellm/pull/2617

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.33.2...v1.33.3

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 98 | 106.21 | 1.53 | 0.0 | 459 | 0 | 94.60 | 1056.60 |
| /health/liveliness | Passed ✅ | 79 | 84.34 | 15.35 | 0.0 | 4595 | 0 | 76.97 | 7435.31 |
| /health/readiness | Passed ✅ | 79 | 85.00 | 14.98 | 0.0 | 4483 | 0 | 77.02 | 7457.21 |
| Aggregated | Passed ✅ | 79 | 85.70 | 31.86 | 0.0 | 9537 | 0 | 76.97 | 7457.21 |

1.33.2

Not secure
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.33.1...v1.33.2

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 44 | 51.45 | 1.51 | 0.0 | 451 | 0 | 39.72 | 920.88 |
| /health/liveliness | Passed ✅ | 25 | 27.32 | 15.52 | 0.0100 | 4647 | 3 | 23.04 | 894.73 |
| /health/readiness | Passed ✅ | 25 | 27.14 | 15.52 | 0.0 | 4646 | 0 | 23.28 | 1520.01 |
| Aggregated | Passed ✅ | 25 | 28.35 | 32.54 | 0.0100 | 9744 | 3 | 23.04 | 1520.01 |

1.33.1

Not secure
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.33.0...v1.33.1

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 82 | 92.66 | 1.52 | 0.0 | 454 | 0 | 77.81 | 1358.03 |
| /health/liveliness | Passed ✅ | 62 | 64.67 | 15.37 | 0.0 | 4601 | 0 | 59.93 | 1474.30 |
| /health/readiness | Passed ✅ | 62 | 64.67 | 15.27 | 0.0033 | 4573 | 1 | 59.56 | 1446.66 |
| Aggregated | Passed ✅ | 62 | 65.99 | 32.16 | 0.0033 | 9628 | 1 | 59.56 | 1474.30 |

1.33.0

Not secure
What's Changed
* Ensure prompt injection attack 'known phrases' are >= 3 words by krrishdholakia in https://github.com/BerriAI/litellm/pull/2611
* fix(handle_jwt.py): track spend for user using jwt auth by krrishdholakia in https://github.com/BerriAI/litellm/pull/2606
* (docs) add example using vertex ai on litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2612
* (docs) Litellm fix quick start docker by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2613
* Update proxy_server.py by eltociear in https://github.com/BerriAI/litellm/pull/2563
* feat(proxy_server.py): enable llm api based prompt injection checks by krrishdholakia in https://github.com/BerriAI/litellm/pull/2614
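
The two prompt injection changes above are configured on the proxy. The sketch below writes a config.yaml that enables the `detect_prompt_injection` callback together with the new LLM-API-based check; the `prompt_injection_params` keys are assumptions based on the proxy docs and may differ across versions, so treat this as a starting point rather than a reference.

```python
# Rough sketch: generate a proxy config.yaml that enables prompt injection
# detection. The callback name and prompt_injection_params keys are
# assumptions based on the proxy docs and may differ between versions.
import yaml

config = {
    "model_list": [
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "openai/gpt-3.5-turbo"},
        }
    ],
    "litellm_settings": {
        "callbacks": ["detect_prompt_injection"],
        "prompt_injection_params": {
            # Heuristic 'known phrases' check (PR 2611 tightens it to >= 3 words).
            "heuristics_check": True,
            # New in PR 2614: ask an LLM to classify the incoming prompt as well.
            "llm_api_check": True,
            "llm_api_name": "gpt-3.5-turbo",  # a model from model_list to use as the checker
        },
    },
}

with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Start the proxy against it:  litellm --config config.yaml
```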


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.32.9...v1.33.0

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 45 | 51.13 | 1.49 | 0.0 | 446 | 0 | 40.36 | 1115.55 |
| /health/liveliness | Passed ✅ | 25 | 27.63 | 15.70 | 0.0 | 4701 | 0 | 23.18 | 1054.09 |
| /health/readiness | Passed ✅ | 25 | 28.52 | 15.34 | 0.0033 | 4594 | 1 | 23.21 | 1252.21 |
| Aggregated | Passed ✅ | 25 | 29.13 | 32.53 | 0.0033 | 9741 | 1 | 23.18 | 1252.21 |

1.32.9

Not secure
What's Changed
* (feat) litellm proxy /cache/ping by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2602 (example request after this list)
* (fix) start proxy with default num_workers=1 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2605
* [docs] using /cache/ping by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2609
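
`/cache/ping` is an authenticated GET against the running proxy. A quick sketch, assuming the proxy is listening on its default port 4000 and `sk-1234` stands in for your master key:

```python
# Quick check of the /cache/ping endpoint on a locally running proxy.
# Base URL and master key are placeholders for your own deployment.
import requests

resp = requests.get(
    "http://localhost:4000/cache/ping",
    headers={"Authorization": "Bearer sk-1234"},  # proxy master key (placeholder)
    timeout=10,
)
print(resp.status_code, resp.text)
```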


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.32.7...v1.32.9

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 98 | 103.97 | 1.63 | 0.0 | 489 | 0 | 92.72 | 778.17 |
| /health/liveliness | Passed ✅ | 76 | 78.70 | 15.09 | 0.0 | 4519 | 0 | 73.86 | 1178.08 |
| /health/readiness | Passed ✅ | 76 | 80.47 | 15.60 | 0.0100 | 4672 | 3 | 30.55 | 1448.41 |
| Aggregated | Passed ✅ | 76 | 80.83 | 32.33 | 0.0100 | 9680 | 3 | 30.55 | 1448.41 |

1.32.7

Not secure
What's Changed
* Add mistral medium latest to model prices by dragosMC91 in https://github.com/BerriAI/litellm/pull/2562
* fix(anthropic): tool calling detection by lucasmrdt in https://github.com/BerriAI/litellm/pull/2558
* Fixed azure ad token not being processed properly in embedding models by vilmar-hillow in https://github.com/BerriAI/litellm/pull/2142
* [FEAT] Litellm admin UI cleanup by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2588
* [Admin UI] Show models when creating teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2589
* build(deps): bump follow-redirects from 1.15.4 to 1.15.6 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/2521
* feat(handle_jwt.py): support authenticating admins into the proxy via jwt's by krrishdholakia in https://github.com/BerriAI/litellm/pull/2592
* [Feat] /metrics endpoint for Prometheus, Grafana by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2591 (example request after this list)
* fix(proxy/utils.py): fix reset budget logic by krrishdholakia in https://github.com/BerriAI/litellm/pull/2593
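
The new `/metrics` endpoint exposes Prometheus text-format metrics from the running proxy. A quick sanity check from Python, with the URL and key as placeholders; point your Prometheus scrape config at the same path:

```python
# Sanity-check the /metrics endpoint on a running proxy.
# URL and key are placeholders; depending on config, auth may not be required.
import requests

resp = requests.get(
    "http://localhost:4000/metrics",
    headers={"Authorization": "Bearer sk-1234"},
    timeout=10,
)
print(resp.text[:500])  # Prometheus text exposition format
```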

New Contributors
* dragosMC91 made their first contribution in https://github.com/BerriAI/litellm/pull/2562
* lucasmrdt made their first contribution in https://github.com/BerriAI/litellm/pull/2558
* vilmar-hillow made their first contribution in https://github.com/BerriAI/litellm/pull/2142

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.32.4...v1.32.7



Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 89 | 96.38 | 1.54 | 0.0 | 460 | 0 | 82.73 | 600.55 |
| /health/liveliness | Passed ✅ | 66 | 69.08 | 15.46 | 0.0 | 4628 | 0 | 63.39 | 1327.88 |
| /health/readiness | Passed ✅ | 66 | 68.61 | 15.49 | 0.0033 | 4636 | 1 | 63.52 | 1238.73 |
| Aggregated | Passed ✅ | 66 | 70.15 | 32.48 | 0.0033 | 9724 | 1 | 63.39 | 1327.88 |
