LiteLLM


1.36.4

What's Changed
* [Fix] `litellm.completion_cost(model="bedrock/anthropic.claude-instant-v1"..)` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3534
* [UI] show `End-User` Usage on Usage Tab by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3530

💵 Excited to launch End-User Cost Tracking on LiteLLM v1.36.4. Start here: https://docs.litellm.ai/docs/proxy/users
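
Spend is attributed to an end user by sending the OpenAI-compatible `user` field with each request to the proxy (per the docs linked above). A minimal sketch; the API key, base URL, and user ID are placeholders:

```python
# Attribute spend to an end user via the standard OpenAI `user` field.
import openai

client = openai.OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    user="user_123",  # this ID shows up under End-User usage on the Usage tab
)
print(response.choices[0].message.content)
```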

🔨 [Fix] Fixed a cost-calculation issue in `litellm.completion_cost()`
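
For reference, a minimal sketch of the fixed call; the prompt and completion strings are placeholders, and pricing comes from LiteLLM's bundled model cost map:

```python
# Compute the dollar cost of a Bedrock Claude Instant call from raw strings.
import litellm

cost = litellm.completion_cost(
    model="bedrock/anthropic.claude-instant-v1",
    prompt="What is the capital of France?",
    completion="Paris.",
)
print(f"cost: ${cost:.6f}")
```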

🖼️ [UI] Show End-User usage on the Usage tab

🚨 [Feat] Added an alert when a deployment is cooled down

🛠️ [Feat] Added support for the OpenAI `stream_options` parameter (see the sketch below).
![pika-1715233635688-1x](https://github.com/BerriAI/litellm/assets/29436595/3ef09ef4-5c8e-4227-9fc6-d6e47b582888)
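
The `stream_options` parameter is passed through with streamed requests. A minimal sketch, assuming `OPENAI_API_KEY` is set in the environment:

```python
# Request token usage on the final chunk of a streamed completion.
import litellm

stream = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Count to three."}],
    stream=True,
    stream_options={"include_usage": True},  # final chunk carries a usage block
)
for chunk in stream:
    print(chunk)
```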


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.36.3...v1.36.4

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 41 | 46.21350117257555 | 1.4127912157613525 | 0.0 | 423 | 0 | 35.30141800001729 | 483.34751900000583 |
| /health/liveliness | Passed ✅ | 25 | 27.78995257100329 | 15.734419426599837 | 0.0 | 4711 | 0 | 23.17750600002455 | 1106.7859930000168 |
| /health/readiness | Passed ✅ | 26 | 27.884833944178954 | 15.377046707719307 | 0.0033399319521544976 | 4604 | 1 | 23.45026500000813 | 1503.1187210000212 |
| Aggregated | Passed ✅ | 26 | 28.63509478712216 | 32.5242573500805 | 0.0033399319521544976 | 9738 | 1 | 23.17750600002455 | 1503.1187210000212 |

v1.37.0.dev_version_headers
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.0...v1.37.0.dev_version_headers

v1.37.0.dev2_completion_cost
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.0.dev_version_headers...v1.37.0.dev2_completion_cost



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.37.0.dev2_completion_cost
```
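
Once the container is up, a quick liveness check in Python can confirm it is serving; `/health/liveliness` is the same route exercised in the load tests below:

```python
# Ping the proxy's liveness endpoint to confirm the container is serving.
import requests

resp = requests.get("http://localhost:4000/health/liveliness")
print(resp.status_code, resp.text)
```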



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 5 | 7.6399336212118945 | 1.5428541861741678 | 1.5428541861741678 | 462 | 462 | 3.75446399999646 | 200.7554310000046 |
| /health/liveliness | Failed ❌ | 4 | 6.095256364960607 | 16.049690949681928 | 16.049690949681928 | 4806 | 4806 | 2.4916609999650063 | 209.52107700003353 |
| /health/readiness | Failed ❌ | 4 | 6.231624336951809 | 15.381788704584887 | 15.381788704584887 | 4606 | 4606 | 2.578173000017614 | 1324.266863000048 |
| Aggregated | Failed ❌ | 4 | 6.231143722807432 | 32.974333840440984 | 32.974333840440984 | 9874 | 9874 | 2.4916609999650063 | 1324.266863000048 |

1.36.3

What's Changed
* feat(factory.py): add support for merging consecutive messages of one role when separated with empty message of another role by nkvch in https://github.com/BerriAI/litellm/pull/3478
* Revert "* feat(factory.py): add support for merging consecutive messages of one role when separated with empty message of another role" by krrishdholakia in https://github.com/BerriAI/litellm/pull/3518
* Edit cost per input + cost per output token on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/3512
* Pydantic warning conflict with protected namespace by CyanideByte in https://github.com/BerriAI/litellm/pull/3519
* [Feat] send alert on cooling down deployment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3532
* Add `/engines/{model}/chat/completions` endpoint by msabramo in https://github.com/BerriAI/litellm/pull/3437
* feat(proxy_server.py): return litellm version in response headers by krrishdholakia in https://github.com/BerriAI/litellm/pull/3535
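
A quick way to inspect the new version header; note the header name used here (`x-litellm-version`) is an assumption, not confirmed by these notes:

```python
# Read the LiteLLM version from a proxy response header.
# NOTE: "x-litellm-version" is an assumed header name; verify in your deployment.
import requests

resp = requests.get("http://localhost:4000/health/liveliness")
print(resp.headers.get("x-litellm-version"))
```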

New Contributors
* nkvch made their first contribution in https://github.com/BerriAI/litellm/pull/3478

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.36.2-stable...v1.36.3

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat




Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 81 | 84.12845592706023 | 1.4197437828442634 | 0.0 | 425 | 0 | 75.00172599998223 | 276.58217700002297 |
| /health/liveliness | Passed ✅ | 65 | 68.02378255285612 | 15.263080808977504 | 0.003340573606692384 | 4569 | 1 | 63.314151000042784 | 1318.427387000014 |
| /health/readiness | Passed ✅ | 65 | 67.54219389117722 | 15.410066047671968 | 0.003340573606692384 | 4613 | 1 | 63.400273000013385 | 1442.6363040000183 |
| Aggregated | Passed ✅ | 65 | 68.50498560143636 | 32.09289063949374 | 0.006681147213384768 | 9607 | 2 | 63.314151000042784 | 1442.6363040000183 |

v1.36.2-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.36.2...v1.36.2-stable

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 82 | 86.14350377301878 | 1.5598228029353671 | 0.0 | 467 | 0 | 75.38395299997092 | 742.260956999985 |
| /health/liveliness | Passed ✅ | 66 | 68.56318463091652 | 15.38446216342677 | 0.006680183310215706 | 4606 | 2 | 46.13641599999596 | 1770.3959170000303 |
| /health/readiness | Passed ✅ | 66 | 68.15400066320011 | 15.093874189432386 | 0.0 | 4519 | 0 | 63.42066399997748 | 1238.310568000088 |
| Aggregated | Passed ✅ | 66 | 69.2263317002717 | 32.038159155794524 | 0.006680183310215706 | 9592 | 2 | 46.13641599999596 | 1770.3959170000303 |

1.36.2

What's Changed
* Update support for langfuse metadata by alexanderepstein in https://github.com/BerriAI/litellm/pull/3459
* Synced the doc with Mistral by paneru-rajan in https://github.com/BerriAI/litellm/pull/3471
* update langchain documentation to reflect refactor by sepiatone in https://github.com/BerriAI/litellm/pull/3464
* Add devcontainer by Manouchehri in https://github.com/BerriAI/litellm/pull/3494
* Added support for JWT auth with PEM cert public keys by ghaemisr in https://github.com/BerriAI/litellm/pull/3500
* [Feat + Test] Add lowest cost routing to litellm.Router by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3504 (docs: https://docs.litellm.ai/docs/routing#advanced---routing-strategies; see the sketch after this list)
* [Feat] Make lowest cost routing Async by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3510
* feat(slack_alerting.py): reintegrate langfuse trace url for slack alerts by krrishdholakia in https://github.com/BerriAI/litellm/pull/3506
* Added "deepseek/" as a supported provider (openai compatible) by paul-gauthier in https://github.com/BerriAI/litellm/pull/3503
* [Feat] litellm.Router / litellm.completion - send llm exceptions to slack by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3511 (docs: https://docs.litellm.ai/docs/routing#alerting-)
* support sync ollama embeddings by mbektas in https://github.com/BerriAI/litellm/pull/3470
* add_function_to_prompt bug fix by phact in https://github.com/BerriAI/litellm/pull/3439
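
A minimal sketch of the new lowest-cost strategy; the deployments, keys, and endpoints are placeholders, and the strategy string follows the routing docs linked above:

```python
# Route each request to the cheapest healthy deployment in the model group.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo"},  # OpenAI deployment
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/chatgpt-v-2",  # placeholder Azure deployment
                "api_key": "sk-placeholder",
                "api_base": "https://example.openai.azure.com",
            },
        },
    ],
    routing_strategy="cost-based-routing",  # pick the lowest-cost deployment
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey!"}],
)
```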

New Contributors
* alexanderepstein made their first contribution in https://github.com/BerriAI/litellm/pull/3459
* sepiatone made their first contribution in https://github.com/BerriAI/litellm/pull/3464
* ghaemisr made their first contribution in https://github.com/BerriAI/litellm/pull/3500
* mbektas made their first contribution in https://github.com/BerriAI/litellm/pull/3470

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.36.1...v1.36.2
![Group 5805](https://github.com/BerriAI/litellm/assets/29436595/fa4e5854-591c-427b-b307-c0534636656b)

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 92 | 96.04003103837316 | 1.4796974346523213 | 0.0 | 443 | 0 | 85.96445500000982 | 442.37826399998426 |
| /health/liveliness | Passed ✅ | 78 | 79.00126169393944 | 15.431607557773644 | 0.0 | 4620 | 0 | 73.91568399998505 | 315.2042289999599 |
| /health/readiness | Passed ✅ | 78 | 81.64751764383558 | 15.117631126944485 | 0.0033401747960549013 | 4526 | 1 | 73.83376600000702 | 1535.723929999989 |
| Aggregated | Passed ✅ | 78 | 81.03746247074821 | 32.02893611937045 | 0.0033401747960549013 | 9589 | 1 | 73.83376600000702 | 1535.723929999989 |

1.36.1

🚨 Known Issue with Slack Alerting + Redis Cache on this Version

1.36.0

What's Changed
* [Feat] Add Exception mapping for Azure ContentPolicyViolationError by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3449
* Add return_exceptions to litellm.batch_completion by ffreemt in https://github.com/BerriAI/litellm/pull/3397
* fix(caching.py): fix redis caching ping check by krrishdholakia in https://github.com/BerriAI/litellm/pull/3447
* change max_tokens type to int by TanaroSch in https://github.com/BerriAI/litellm/pull/1530
* Revert "Add return_exceptions to litellm.batch_completion" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3450
* fix(factory.py): support 'function' openai message role for anthropic by krrishdholakia in https://github.com/BerriAI/litellm/pull/3448
* [Feat] Return model, api_base and first 100 chars of messages in Azure Exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3451
* [Feat] Router: set a custom number of retries per error type (ContentPolicyViolationErrorRetries, RateLimitErrorRetries, BadRequestErrorRetries, etc.) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3456
* [Feat] return num_retries in litellm.Router exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3457
* [Feat] Set a Retry Policy per model group by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3460 (see the sketch after this list)
* Fix OpenMeter sync logger by tothandras in https://github.com/BerriAI/litellm/pull/3452
* feat(openai.py): add support for openai assistants by krrishdholakia in https://github.com/BerriAI/litellm/pull/3455
* gunicorn version bump by RoniGurvichCycode in https://github.com/BerriAI/litellm/pull/3463
* Fix Ollama streamed tool calls. Set finish_reason to tool_calls for all tool_calls responses by jackmpcollins in https://github.com/BerriAI/litellm/pull/3469
* Allowing extra headers for bedrock by themrzmaster in https://github.com/BerriAI/litellm/pull/3299
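
A minimal sketch of a per-error-type retry policy on the Router; the import path and field names are taken from the PR titles above and may differ slightly, and the model entry is a placeholder:

```python
# Give each exception type its own retry budget on the Router.
from litellm import Router
from litellm.router import RetryPolicy

retry_policy = RetryPolicy(
    ContentPolicyViolationErrorRetries=3,  # retry content-policy errors 3x
    RateLimitErrorRetries=2,               # retry rate limits twice
    BadRequestErrorRetries=0,              # fail fast on malformed requests
)

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
    ],
    retry_policy=retry_policy,
)
```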

New Contributors
* ffreemt made their first contribution in https://github.com/BerriAI/litellm/pull/3397
* TanaroSch made their first contribution in https://github.com/BerriAI/litellm/pull/1530
* tothandras made their first contribution in https://github.com/BerriAI/litellm/pull/3452
* RoniGurvichCycode made their first contribution in https://github.com/BerriAI/litellm/pull/3463

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.38-stable...v1.36.0

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat




Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 81 | 88.20979605555442 | 1.623255922227879 | 0.0 | 486 | 0 | 75.38953300002049 | 1264.5359969999959 |
| /health/liveliness | Passed ✅ | 65 | 68.12845653229724 | 15.253929623075564 | 0.0 | 4567 | 0 | 63.39287800000193 | 1385.0202130000184 |
| /health/readiness | Passed ✅ | 65 | 68.59345058785526 | 15.511112145733067 | 0.0033400327617857596 | 4644 | 1 | 63.46367399999053 | 1491.452105999997 |
| Aggregated | Passed ✅ | 65 | 69.35759579210092 | 32.38829769103651 | 0.0033400327617857596 | 9697 | 1 | 63.39287800000193 | 1491.452105999997 |

v1.35.38-stable
What's Changed
* UI - select start/end time for viewing model metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3441
* Improve the document of Traceloop by paneru-rajan in https://github.com/BerriAI/litellm/pull/3445
* Improve mocking in `test_proxy_exception_mapping.py` by msabramo in https://github.com/BerriAI/litellm/pull/3408

New Contributors
* paneru-rajan made their first contribution in https://github.com/BerriAI/litellm/pull/3445

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.38...v1.35.38-stable

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 55 | 58.640919096329625 | 1.4563005047336437 | 0.0 | 436 | 0 | 50.270481999973526 | 588.9671030000159 |
| /health/liveliness | Passed ✅ | 40 | 43.084301563953716 | 15.5116044586767 | 0.0 | 4644 | 0 | 37.74072900000647 | 1439.319705999992 |
| /health/readiness | Passed ✅ | 40 | 43.8615925310188 | 15.398039740417655 | 0.006680277544649742 | 4610 | 2 | 37.88508399998136 | 1424.1174170000193 |
| Aggregated | Passed ✅ | 40 | 44.15406385521118 | 32.365944703828 | 0.006680277544649742 | 9690 | 2 | 37.74072900000647 | 1439.319705999992 |

1.35.38

What's Changed
* [Test] Assert num Callbacks on Proxy don't increase by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3423
* UI - set DB Exceptions webhook_url on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3426
* docs - simplify best practices for prod by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3431
* feat(proxy_server.py): return api base in response headers by krrishdholakia in https://github.com/BerriAI/litellm/pull/3430
* [Test] Ensure only 1 Slack callback + size of all callbacks does not grow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3427
* [Test] Add Slack Alerting unit tests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3429
* Feat - add bedrock titan embed-v2 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3432 (see the sketch after this list)
* Admin UI - filter exceptions by model group by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3435
* fix(anthropic.py): drop unsupported non-whitespace character value wh… by krrishdholakia in https://github.com/BerriAI/litellm/pull/3436
* fix(bedrock.py): convert httpx.timeout to boto3 valid timeout by krrishdholakia in https://github.com/BerriAI/litellm/pull/3433
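
A minimal sketch of the new Titan V2 embedding support, assuming AWS credentials in the environment; the model ID shown is the standard Bedrock ID for Titan Text Embeddings V2:

```python
# Embed a string with Bedrock's Titan Text Embeddings V2.
import litellm

resp = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v2:0",
    input=["hello from litellm"],
)
print(len(resp.data[0]["embedding"]))  # embedding dimension
```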


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.37...v1.35.38

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat




Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 41 | 50.66770748000066 | 1.6698372244241906 | 0.0 | 500 | 0 | 35.41266199999882 | 1175.5095739999888 |
| /health/liveliness | Passed ✅ | 26 | 28.758076696307747 | 15.285689952379041 | 0.006679348897696762 | 4577 | 2 | 23.39240899999595 | 1262.2757970000293 |
| /health/readiness | Passed ✅ | 26 | 28.897759206984396 | 15.779961770808601 | 0.0 | 4725 | 0 | 23.345480999978463 | 1166.2576729999614 |
| Aggregated | Passed ✅ | 26 | 29.94302010120413 | 32.735488947611834 | 0.006679348897696762 | 9802 | 2 | 23.345480999978463 | 1262.2757970000293 |

