LiteLLM

Latest version: v1.52.14

1.34.13

Not secure
What's Changed
* Fix XML function calling args parsing. by mnicstruwig in https://github.com/BerriAI/litellm/pull/2640
* Docs cleanup by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2751
* [FEAT] admin UI refactor by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2746
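
A hedged sketch of the call path touched by the XML function-calling fix in #2640; the model id and tool schema below are illustrative assumptions, not taken from the PR:

```python
# Minimal sketch: a tool-calling request through litellm. Providers that emit
# XML tool calls have their arguments parsed into the OpenAI-style JSON string
# read below; #2640 fixes that parsing step.
import json
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="claude-3-sonnet-20240229",               # assumed model id
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```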

New Contributors
* mnicstruwig made their first contribution in https://github.com/BerriAI/litellm/pull/2640

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.34.12...v1.34.13

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 1700 | 1720.66 | 1.56 | 1.56 | 467 | 467 | 319.89 | 3570.93 |
| /health/liveliness | Passed ✅ | 40 | 42.81 | 14.96 | 0.0 | 4480 | 0 | 38.03 | 1332.45 |
| /health/readiness | Passed ✅ | 40 | 42.22 | 15.21 | 0.0 | 4554 | 0 | 38.32 | 716.67 |
| Aggregated | Passed ✅ | 40 | 125.00 | 31.73 | 1.56 | 9501 | 467 | 38.03 | 3570.93 |

1.34.12

Not secure
What's Changed
* feat(proxy/utils.py): enable updating db in a separate server by krrishdholakia in https://github.com/BerriAI/litellm/pull/2722
* fix(proxy_server.py): don't auto-create user when creating key by krrishdholakia in https://github.com/BerriAI/litellm/pull/2724
* Batch embedding for Ollama by onukura in https://github.com/BerriAI/litellm/pull/2720
* Add `trace_name` in Langfuse logging by andreaponti5 in https://github.com/BerriAI/litellm/pull/2715
* Admin UI clearly show models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2735
* [Admin UI] Use consistent spacing, show mandatory fields by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2738
* (fix) ui - clean up username display by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2739
* (ui) new build by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2740
* fix(proxy_server.py): fix tpm/rpm limiting for jwt auth by krrishdholakia in https://github.com/BerriAI/litellm/pull/2741
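
Two of the changes above are easiest to see from the client side. A minimal sketch, assuming the `trace_name` from #2715 is passed through request `metadata` and that a local Ollama server exposes an embedding model for #2720:

```python
import litellm

# Langfuse logging with a custom trace name (#2715). Requires Langfuse keys
# in the environment; the metadata key is an assumption based on the PR title.
litellm.success_callback = ["langfuse"]
litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    metadata={"trace_name": "nightly-smoke-test"},
)

# Batch embeddings against an Ollama model (#2720): `input` can now be a list
# of strings rather than a single string.
embeddings = litellm.embedding(
    model="ollama/nomic-embed-text",                # assumed local model
    input=["first document", "second document"],
)
print(len(embeddings.data))
```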

New Contributors
* andreaponti5 made their first contribution in https://github.com/BerriAI/litellm/pull/2715

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.34.10...v1.34.12

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 1800 | 1755.11 | 1.38 | 1.38 | 413 | 413 | 314.53 | 3183.52 |
| /health/liveliness | Passed ✅ | 25 | 26.84 | 15.40 | 0.0 | 4610 | 0 | 22.77 | 1018.22 |
| /health/readiness | Passed ✅ | 25 | 28.25 | 15.09 | 0.0 | 4519 | 0 | 23.17 | 1362.14 |
| Aggregated | Passed ✅ | 25 | 102.31 | 31.87 | 1.38 | 9542 | 413 | 22.77 | 3183.52 |

1.34.10

Not secure
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.34.10.dev1...v1.34.10



Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 1800 | 1761.84 | 1.49 | 1.49 | 445 | 445 | 280.82 | 3158.97 |
| /health/liveliness | Passed ✅ | 40 | 42.91 | 15.08 | 0.0 | 4514 | 0 | 38.58 | 1139.88 |
| /health/readiness | Passed ✅ | 40 | 42.84 | 15.30 | 0.0 | 4580 | 0 | 38.63 | 1062.14 |
| Aggregated | Passed ✅ | 40 | 123.06 | 31.86 | 1.49 | 9539 | 445 | 38.58 | 3158.97 |

1.34.8

Not secure
What's Changed
* Fix 2713 Remove duplicated "blocked" field on LiteLLM_TeamTable by readevalprint in https://github.com/BerriAI/litellm/pull/2714
* Updating the default Claude3 max tokens by rmann-nflx in https://github.com/BerriAI/litellm/pull/2701
* (fix) bump uvicorn on proxy docker builds by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2719
* (fix) Proxy - remove background tasks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2721
* feat(auth_checks.py): enable admin to enforce 'user' param for all openai endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/2726
* feat(proxy_server.py): new `/spend/calculate` endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/2725
* [FEAT] Improve Proxy Perf - access router model names in constant time by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2723
* [FEAT] Proxy - reduce deep copies by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2728
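
The new `/spend/calculate` endpoint from #2725 can be exercised against a running proxy. A rough sketch, assuming the proxy listens on localhost:4000, `sk-1234` is a valid virtual key, and the request body takes a model plus messages (check the proxy docs for the exact schema):

```python
import requests

# Ask the proxy to estimate the cost of a request without executing it.
resp = requests.post(
    "http://localhost:4000/spend/calculate",
    headers={"Authorization": "Bearer sk-1234"},    # placeholder key
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "hello"}],
    },
)
print(resp.json())
```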

New Contributors
* readevalprint made their first contribution in https://github.com/BerriAI/litellm/pull/2714
* rmann-nflx made their first contribution in https://github.com/BerriAI/litellm/pull/2701

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.34.6...v1.34.8
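
On the client side, the `user` enforcement added in #2726 only matters once the proxy admin turns it on; requests then need a `user` field. A sketch with a placeholder proxy URL and key, using the standard OpenAI client pointed at the proxy:

```python
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    user="end-user-42",   # end-user id the proxy can now require and track
)
```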

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 39 | 47.90 | 1.65 | 0.0033 | 495 | 1 | 35.51 | 1220.09 |
| /health/liveliness | Passed ✅ | 26 | 28.17 | 15.63 | 0.0067 | 4681 | 2 | 23.51 | 1222.52 |
| /health/readiness | Passed ✅ | 26 | 30.12 | 15.40 | 0.0 | 4610 | 0 | 23.55 | 1311.07 |
| Aggregated | Passed ✅ | 26 | 30.09 | 32.68 | 0.01 | 9786 | 3 | 23.51 | 1311.07 |

1.34.8.dev1

What's Changed
* (fix) show user their role when rejecting /team/new requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2729
* (feat) admin UI show models on team table by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2731
* (fix) UI - show user models when creating a key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2732
* [FEAT] UI - view team alias when creating keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2734


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.34.8...v1.34.8.dev1

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 90 | 95.18 | 1.52 | 0.55 | 456 | 164 | 73.70 | 1227.02 |
| /health/liveliness | Failed ❌ | 78 | 85.43 | 15.29 | 5.23 | 4578 | 1566 | 73.27 | 1335.01 |
| /health/readiness | Failed ❌ | 78 | 85.74 | 15.36 | 5.28 | 4597 | 1582 | 73.45 | 1525.44 |
| Aggregated | Failed ❌ | 78 | 86.04 | 32.17 | 11.06 | 9631 | 3312 | 73.27 | 1525.44 |

1.34.6

Not secure
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.34.5...v1.34.6

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 39 | 43.49 | 1.63 | 0.0 | 489 | 0 | 34.98 | 1143.82 |
| /health/liveliness | Passed ✅ | 25 | 27.18 | 15.50 | 0.0 | 4641 | 0 | 23.14 | 989.73 |
| /health/readiness | Passed ✅ | 25 | 27.22 | 15.52 | 0.01 | 4647 | 3 | 23.05 | 1015.24 |
| Aggregated | Passed ✅ | 25 | 28.02 | 32.66 | 0.01 | 9777 | 3 | 23.05 | 1143.82 |
