LiteLLM

Latest version: v1.52.14


1.35.11

What's Changed
* UI - sort models by latency by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3104
* UI - move model usage to usage tab by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3103
* [Proxy] Add PROXY_BASE_URL in slack alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3108
* add mixtral 8x22 by themrzmaster in https://github.com/BerriAI/litellm/pull/3109
* fix streaming special character flushing logic by krrishdholakia in https://github.com/BerriAI/litellm/pull/3111


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.10...v1.35.11

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 94 | 100.53 | 1.57 | 0.0 | 470 | 0 | 86.20 | 1024.05 |
| /health/liveliness | Passed ✅ | 78 | 81.20 | 15.01 | 0.0067 | 4493 | 2 | 73.74 | 2260.40 |
| /health/readiness | Passed ✅ | 78 | 81.46 | 15.48 | 0.0 | 4633 | 0 | 73.49 | 1558.32 |
| Aggregated | Passed ✅ | 78 | 82.27 | 32.06 | 0.0067 | 9596 | 2 | 73.49 | 2260.40 |

1.35.10

What's Changed
* add vertex_ai/text-embedding-preview-0409 by Dev-Khant in https://github.com/BerriAI/litellm/pull/2999
* Fix Anthropic system message handling by ligaz in https://github.com/BerriAI/litellm/pull/3019
* Update: gpt-4-turbo-preview pricing and context. Included in docs. by ryanwclark1 in https://github.com/BerriAI/litellm/pull/2817
* docs: add langfuse to callback docs by marcklingen in https://github.com/BerriAI/litellm/pull/3044
* test - setting up langfuse callback on proxy + assert logs written by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3067
* [Fix + Test] - Return correct params from `/user/new` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3066
* (feat): Add Claude v3 Opus on Amazon Bedrock. by Manouchehri in https://github.com/BerriAI/litellm/pull/3069
* fix(proxy_server.py): ensure id used in delete deployment matches id used in litellm Router by krrishdholakia in https://github.com/BerriAI/litellm/pull/3077
* UI - Save / Edit router settings UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3079
* [FEAT] - View router settings (backend) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3076
* UI - view router settings on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3078
* build(ui): view_key_table.tsx by krrishdholakia in https://github.com/BerriAI/litellm/pull/3081
* [Fix] Prometheus /metrics - don't log user_api_key to prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3084
* Fix - show `model`, `deployment` and `model_group` in vertex exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3087
* fix(vertex_ai.py): accept credentials as a json string by krrishdholakia in https://github.com/BerriAI/litellm/pull/3082
* Use `max_input_token` for `trim_messages` by cwang in https://github.com/BerriAI/litellm/pull/3062
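
The `trim_messages` change above makes trimming respect a model's `max_input_token` budget. As a rough illustration of the underlying idea only (this is not LiteLLM's actual implementation, which uses real tokenizers; this sketch fakes token counting with a word count):

```python
def trim_messages_sketch(messages, max_input_tokens):
    """Drop the oldest non-system messages until the conversation
    fits the token budget. Token counting is faked with a word count."""
    def count(msgs):
        return sum(len(m["content"].split()) for m in msgs)

    trimmed = list(messages)
    while len(trimmed) > 1 and count(trimmed) > max_input_tokens:
        # Preserve a leading system message if present; drop the next-oldest.
        drop_at = 1 if trimmed[0]["role"] == "system" else 0
        trimmed.pop(drop_at)
    return trimmed

msgs = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "one two three four five"},
    {"role": "user", "content": "six seven"},
]
# Budget of 6 "tokens" forces the five-word message to be dropped.
print(trim_messages_sketch(msgs, max_input_tokens=6))
```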

New Contributors
* ligaz made their first contribution in https://github.com/BerriAI/litellm/pull/3019
* ryanwclark1 made their first contribution in https://github.com/BerriAI/litellm/pull/2817
* marcklingen made their first contribution in https://github.com/BerriAI/litellm/pull/3044
* cwang made their first contribution in https://github.com/BerriAI/litellm/pull/3062

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.8.dev1...v1.35.10

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 42 | 49.64 | 1.56 | 0.0 | 468 | 0 | 36.02 | 1183.66 |
| /health/liveliness | Passed ✅ | 26 | 29.30 | 15.70 | 0.0 | 4701 | 0 | 23.08 | 1128.33 |
| /health/readiness | Passed ✅ | 26 | 28.14 | 15.45 | 0.0 | 4626 | 0 | 23.04 | 1140.64 |
| Aggregated | Passed ✅ | 26 | 29.73 | 32.71 | 0.0 | 9795 | 0 | 23.04 | 1183.66 |

1.35.8

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.7...v1.35.8



Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 80 | 87.20 | 1.47 | 0.0 | 440 | 0 | 74.66 | 1033.19 |
| /health/liveliness | Passed ✅ | 65 | 68.48 | 15.37 | 0.0 | 4602 | 0 | 63.30 | 1396.74 |
| /health/readiness | Passed ✅ | 65 | 68.18 | 15.60 | 0.0 | 4669 | 0 | 63.41 | 1063.56 |
| Aggregated | Passed ✅ | 65 | 69.19 | 32.44 | 0.0 | 9711 | 0 | 63.30 | 1396.74 |

1.35.8.dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.8...v1.35.8.dev1

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 77 | 82.57 | 1.55 | 0.0 | 463 | 0 | 71.67 | 632.83 |
| /health/liveliness | Passed ✅ | 62 | 64.38 | 15.50 | 0.0 | 4640 | 0 | 59.29 | 1162.18 |
| /health/readiness | Passed ✅ | 62 | 64.63 | 15.24 | 0.0067 | 4563 | 2 | 59.28 | 1378.92 |
| Aggregated | Passed ✅ | 62 | 65.37 | 32.29 | 0.0067 | 9666 | 2 | 59.28 | 1378.92 |

1.35.7

What's Changed
* Default model_name to None in _aembedding by grav in https://github.com/BerriAI/litellm/pull/2981
* (fix) Langfuse v2 renamed a few things. by Manouchehri in https://github.com/BerriAI/litellm/pull/2883
* [Fix] Set model_id in db on model creation + modal on model deletion by zJuuu in https://github.com/BerriAI/litellm/pull/3015
* [Fix + Test] key delete bug by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3047
* [Fix] better error msgs when `/key/delete` raises an error by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3048
* [Fix]- gemini bug `please run pip install Pillow` on LiteLLM Docker by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3053
* build(deps): bump pillow from 10.0.1 to 10.3.0 by dependabot in https://github.com/BerriAI/litellm/pull/3055
* [Feat] view models that `supports_vision` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3054
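
The `supports_vision` feature above exposes a per-model capability flag. A hypothetical sketch of filtering a model list on such a flag (the registry entries here are illustrative, not LiteLLM's actual model data):

```python
# Illustrative model-capability registry (made-up entries, not litellm's).
model_info = {
    "gpt-4-turbo": {"supports_vision": True},
    "gpt-3.5-turbo": {"supports_vision": False},
    "claude-3-opus": {"supports_vision": True},
}

# Collect only the models whose capability flag is set.
vision_models = sorted(
    name for name, info in model_info.items() if info.get("supports_vision")
)
print(vision_models)  # ['claude-3-opus', 'gpt-4-turbo']
```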

New Contributors
* grav made their first contribution in https://github.com/BerriAI/litellm/pull/2981
* zJuuu made their first contribution in https://github.com/BerriAI/litellm/pull/3015

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.6...v1.35.7

1.35.6

What's Changed
* Feat - add groq tool calling + testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3029
* Docs - add groq tool calling example by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3030
* fix(anthropic_text.py): add support for async text completion calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/3028
* [Fix] show users tab on user + fix pagination by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3033
* Docs - add team based logging langfuse on langfuse proxy docs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3041
* [Fix + Test] - Team Based logging to Langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3039
* [fix] using wildcard openai models proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3042
* fix(vertex_ai_anthropic.py): Add vertex ai models via `/model/new` by krrishdholakia in https://github.com/BerriAI/litellm/pull/3043
* fix(proxy_server.py): fix /team/update endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/3034
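
The Groq tool-calling changes above use the OpenAI-style `tools` format that LiteLLM forwards to the provider. A minimal sketch of such a payload (the `get_weather` function schema is a made-up example, and the Groq model id in the commented-out call is an assumption):

```python
# OpenAI-style tool schema, as accepted by litellm.completion()'s
# `tools` parameter (hypothetical get_weather function for illustration).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# The actual call (requires a GROQ_API_KEY; model id is an assumption):
# import litellm
# response = litellm.completion(
#     model="groq/mixtral-8x7b-32768",
#     messages=[{"role": "user", "content": "Weather in Paris?"}],
#     tools=tools,
#     tool_choice="auto",
# )
```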


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.5...v1.35.6

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 80 | 91.32 | 1.46 | 0.0033 | 437 | 1 | 75.32 | 1438.54 |
| /health/liveliness | Passed ✅ | 66 | 68.09 | 15.41 | 0.0 | 4612 | 0 | 63.33 | 1327.52 |
| /health/readiness | Passed ✅ | 66 | 68.27 | 15.34 | 0.0033 | 4592 | 1 | 46.53 | 1448.88 |
| Aggregated | Passed ✅ | 66 | 69.23 | 32.20 | 0.0067 | 9641 | 2 | 46.53 | 1448.88 |
