LiteLLM

Latest version: v1.52.14


1.35.37

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.36-dev2...v1.35.37

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 40 | 45.927794851771 | 1.5098661209108717 | 0.0 | 452 | 0 | 34.30683700000259 | 1060.4514910000091 |
| /health/liveliness | Passed ✅ | 25 | 28.087583800042648 | 15.686573680967816 | 0.0 | 4696 | 0 | 23.097278999955506 | 1512.7727469999854 |
| /health/readiness | Passed ✅ | 25 | 27.44702863919547 | 15.442723621617167 | 0.0 | 4623 | 0 | 23.009613999988687 | 1090.2711250000152 |
| Aggregated | Passed ✅ | 25 | 28.609791239074898 | 32.63916342349586 | 0.0 | 9771 | 0 | 23.009613999988687 | 1512.7727469999854 |
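The per-endpoint statistics in these tables (median, average, min, max response time) can be derived from raw per-request latencies. A minimal sketch in plain Python (the function name and sample values are illustrative, not part of LiteLLM):

```python
def summarize(latencies_ms):
    """Compute the summary statistics reported per endpoint:
    median, average, min, and max response time in milliseconds."""
    s = sorted(latencies_ms)
    n = len(s)
    # Median: middle element, or mean of the two middle elements.
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return {
        "median_ms": median,
        "average_ms": sum(s) / n,
        "min_ms": s[0],
        "max_ms": s[-1],
    }

# Example with a handful of /chat/completions-style samples:
print(summarize([34.3, 40.0, 41.2, 1060.5]))
```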

1.35.36

🚨 🚨 Detected a performance issue with LiteLLM lowest-latency routing.

1.35.36dev2

What's Changed
* changing ollama response parsing to expected behaviour by TheDiscoMole in https://github.com/BerriAI/litellm/pull/1526
* Added cost & context metadata for openrouter/anthropic/claude-3-opus by paul-gauthier in https://github.com/BerriAI/litellm/pull/3382
* fix - error sending details to log on sentry by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3384
* [UI] Fix show latency < 0.0001 for deployments that have low latency + only show non cache hits on latency UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3388
* [UI] show slow responses + num requests per deployment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3390
* [Fix + Test] Errant prints on langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3391
* Add langfuse `sdk_integration` by marcklingen in https://github.com/BerriAI/litellm/pull/2516
* Update langfuse in `requirements.txt` by Manouchehri in https://github.com/BerriAI/litellm/pull/3262
* feat(openmeter.py): add support for user billing by krrishdholakia in https://github.com/BerriAI/litellm/pull/3389
* [chore] Improve type-safety in Message & Delta classes by elisalimli in https://github.com/BerriAI/litellm/pull/3379
* docs: add .github/pull_request_template.md by nobu007 in https://github.com/BerriAI/litellm/pull/3349
* Update contributing instructions in README.md by DomMartin27 in https://github.com/BerriAI/litellm/pull/3217
* build(deps): bump hono/node-server from 1.9.0 to 1.10.1 in /litellm-js/spend-logs by dependabot in https://github.com/BerriAI/litellm/pull/3169
* build(deps): bump idna from 3.6 to 3.7 by dependabot in https://github.com/BerriAI/litellm/pull/2967
* Fix Greenscale Documentation by greenscale-nandesh in https://github.com/BerriAI/litellm/pull/3278
* [Fix] bug where langfuse was reinitialized on every call by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3392
* Fix route `/openai/deployments/{model}/chat/completions` not working properly by msabramo in https://github.com/BerriAI/litellm/pull/3375
* Litellm gh 3372 by msabramo in https://github.com/BerriAI/litellm/pull/3402
* Vision for Claude 3 Family + Info for Azure/GPT-4-0409 by azohra in https://github.com/BerriAI/litellm/pull/3405
* Improve mocking in `test_proxy_server.py` by msabramo in https://github.com/BerriAI/litellm/pull/3406
* Disambiguate invalid model name errors by msabramo in https://github.com/BerriAI/litellm/pull/3403
* fix - revert init langfuse client on slack alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3409
* Add Llama3 tokenizer and allow custom tokenizers. by Priva28 in https://github.com/BerriAI/litellm/pull/3393
* [Fix] Ensure callbacks are not added to router when `store_model_in_db=True` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3419
* fix(lowest_latency.py): fix the size of the latency list to 10 by default (can be modified) by krrishdholakia in https://github.com/BerriAI/litellm/pull/3422
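The lowest-latency fix above caps the per-deployment latency list at 10 samples by default. A self-contained sketch of the idea (not LiteLLM's actual implementation; class and method names are illustrative):

```python
from collections import deque
from statistics import fmean

class LowestLatencySketch:
    """Route to the deployment with the lowest average recent latency.

    A bounded deque keeps only the last `window` samples per deployment,
    so memory stays constant and old measurements age out, instead of
    accumulating in an ever-growing list.
    """

    def __init__(self, deployments, window=10):
        self.samples = {d: deque(maxlen=window) for d in deployments}

    def record(self, deployment, latency_ms):
        self.samples[deployment].append(latency_ms)

    def pick(self):
        # Deployments with no samples yet get latency 0.0 so they are tried first.
        return min(
            self.samples,
            key=lambda d: fmean(self.samples[d]) if self.samples[d] else 0.0,
        )
```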

New Contributors
* nobu007 made their first contribution in https://github.com/BerriAI/litellm/pull/3349
* DomMartin27 made their first contribution in https://github.com/BerriAI/litellm/pull/3217
* azohra made their first contribution in https://github.com/BerriAI/litellm/pull/3405
* Priva28 made their first contribution in https://github.com/BerriAI/litellm/pull/3393

**Full Changelog**: https://github.com/BerriAI/litellm/compare/1.35.33.dev4...v1.35.36-dev2

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 41 | 43.09456797200015 | 1.6698929667670133 | 0.0 | 500 | 0 | 34.43404899996949 | 214.21879600001148 |
| /health/liveliness | Passed ✅ | 25 | 28.224681616349365 | 15.443170156661338 | 0.006679571867068053 | 4624 | 2 | 23.087949000000663 | 1131.2702129999934 |
| /health/readiness | Passed ✅ | 25 | 27.7633421125326 | 15.403092725458931 | 0.0 | 4612 | 0 | 23.193645000048946 | 1376.8194570000105 |
| Aggregated | Passed ✅ | 25 | 28.769797206552926 | 32.51615584888728 | 0.006679571867068053 | 9736 | 2 | 23.087949000000663 | 1376.8194570000105 |

1.35.35

What's Changed
* [UI] Fix show latency < 0.0001 for deployments that have low latency + only show non cache hits on latency UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3388
* [UI] show slow responses + num requests per deployment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3390
* [Fix + Test] Errant prints on langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3391
* Add langfuse `sdk_integration` by marcklingen in https://github.com/BerriAI/litellm/pull/2516
* Update langfuse in `requirements.txt` by Manouchehri in https://github.com/BerriAI/litellm/pull/3262


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.34...v1.35.35

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 41 | 51.26542280364234 | 1.6500892298075267 | 0.0 | 494 | 0 | 35.884482000028584 | 1109.5307290000278 |
| /health/liveliness | Passed ✅ | 25 | 28.873856892782293 | 15.919686779883952 | 0.0 | 4766 | 0 | 23.275566999984676 | 1375.641074000015 |
| /health/readiness | Passed ✅ | 26 | 28.245912157308112 | 15.288377337710626 | 0.003340261598800661 | 4577 | 1 | 23.47498500000711 | 1363.4175379999647 |
| Aggregated | Passed ✅ | 26 | 29.706156425739458 | 32.8581533474021 | 0.003340261598800661 | 9837 | 1 | 23.275566999984676 | 1375.641074000015 |

1.35.34

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.33.dev1...v1.35.34

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 56 | 60.59063693520661 | 1.5464062051613479 | 0.0 | 463 | 0 | 50.10197399997196 | 776.683281000004 |
| /health/liveliness | Passed ✅ | 40 | 43.5946980195155 | 15.574281068396036 | 0.0 | 4663 | 0 | 37.752785000009226 | 1280.8077110000227 |
| /health/readiness | Passed ✅ | 40 | 43.24365085222429 | 15.617700681067953 | 0.0 | 4676 | 0 | 37.88299199999301 | 1433.2260259999998 |
| Aggregated | Passed ✅ | 40 | 44.23004010926335 | 32.73838795462534 | 0.0 | 9802 | 0 | 37.752785000009226 | 1433.2260259999998 |

1.35.33

🚨 This release includes a DB schema change (PR 3371); we recommend waiting 1-2 releases to let it bake in.

What's Changed
* usage based routing RPM count fix by sumanth13131 in https://github.com/BerriAI/litellm/pull/3358
* Fix Cohere tool calling by elisalimli in https://github.com/BerriAI/litellm/pull/3351
* [Feat] Write LLM Exception to LiteLLM Proxy DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3371
* fix(lowest_latency.py): allow setting a buffer for getting values within a certain latency threshold by krrishdholakia in https://github.com/BerriAI/litellm/pull/3370
* Disambiguate invalid model name errors by msabramo in https://github.com/BerriAI/litellm/pull/3374
* Revert "Disambiguate invalid model name errors" by krrishdholakia in https://github.com/BerriAI/litellm/pull/3377
* fix(router.py): unify retry timeout logic across sync + async function_with_retries by krrishdholakia in https://github.com/BerriAI/litellm/pull/3376
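The latency-buffer change above lets the router treat deployments within a threshold of the fastest as equivalent, spreading load instead of always picking the single lowest-latency deployment. A hedged sketch of that idea (the function name and exact semantics are illustrative, not LiteLLM's API):

```python
import random

def pick_within_buffer(avg_latency_ms, buffer=0.1):
    """Choose randomly among deployments whose average latency is
    within `buffer` (a fraction) of the fastest one, so near-equal
    deployments share traffic."""
    fastest = min(avg_latency_ms.values())
    candidates = [
        d for d, ms in avg_latency_ms.items() if ms <= fastest * (1 + buffer)
    ]
    return random.choice(candidates)
```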

New Contributors
* msabramo made their first contribution in https://github.com/BerriAI/litellm/pull/3374

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.32.dev1...v1.35.33

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 82 | 87.50630895143662 | 1.5129454451077264 | 0.0 | 453 | 0 | 75.89100999999232 | 500.8229590000042 |
| /health/liveliness | Passed ✅ | 66 | 68.2478437879775 | 15.500176182659951 | 0.0 | 4641 | 0 | 63.3791120000069 | 1334.4145010000261 |
| /health/readiness | Passed ✅ | 66 | 69.03417801545166 | 15.346543753355414 | 0.003339835419663855 | 4595 | 1 | 63.32242299998825 | 1252.5400890000071 |
| Aggregated | Passed ✅ | 66 | 69.52117338796633 | 32.359665381123094 | 0.003339835419663855 | 9689 | 1 | 63.32242299998825 | 1334.4145010000261 |
