LiteLLM

Latest version: v1.52.14

1.35.20

Not secure
What's Changed
* [UI-Polish] Cleanup Inputting Key Name, Team Name, User Email by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3218
* Update langsmith_integration.md by bllchmbrs in https://github.com/BerriAI/litellm/pull/3205
* Added openrouter/meta-llama/llama-3-70b-instruct context and cost metrics by paul-gauthier in https://github.com/BerriAI/litellm/pull/3223
* [UI-Fix] Show all teams on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3219
* fix(prometheus.py): add user tracking to prometheus by krrishdholakia in https://github.com/BerriAI/litellm/pull/3224
* [Bug-Fix] Alerting - don't send hanging request alert on failed request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3226
* improve(vertex_ai.py): Switch to simpler dict type for supporting JSON mode by Manouchehri in https://github.com/BerriAI/litellm/pull/3211
* (Vertex AI) - Add `frequency_penalty` and `presence_penalty` support by Manouchehri in https://github.com/BerriAI/litellm/pull/3214 (usage sketch after this list)
* [Fix] Non-Admin SSO Login by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3228
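
PR #3214 above adds `frequency_penalty` / `presence_penalty` pass-through for Vertex AI. A minimal sketch of the call, assuming Vertex AI credentials are already configured (e.g. via `GOOGLE_APPLICATION_CREDENTIALS`); the model name and penalty values are illustrative:

```python
# Sketch: passing OpenAI-style penalty parameters to a Vertex AI model
# through LiteLLM (per PR #3214). Model name and values are illustrative.
import litellm

response = litellm.completion(
    model="vertex_ai/gemini-pro",  # Vertex AI provider prefix
    messages=[{"role": "user", "content": "Write a haiku about proxies."}],
    frequency_penalty=0.5,  # discourage token repetition
    presence_penalty=0.3,   # encourage new topics
)
print(response.choices[0].message.content)
```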

New Contributors
* bllchmbrs made their first contribution in https://github.com/BerriAI/litellm/pull/3205
* paul-gauthier made their first contribution in https://github.com/BerriAI/litellm/pull/3223

**Full Changelog**: https://github.com/BerriAI/litellm/compare/1.35.19.dev1...v1.35.20

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 56 | 66.18 | 1.57 | 0.0033 | 471 | 1 | 50.73 | 1221.74 |
| /health/liveliness | Passed ✅ | 40 | 43.60 | 15.42 | 0.0 | 4616 | 0 | 37.99 | 1076.76 |
| /health/readiness | Passed ✅ | 40 | 43.17 | 15.26 | 0.0 | 4570 | 0 | 38.20 | 1247.14 |
| Aggregated | Passed ✅ | 40 | 44.49 | 32.26 | 0.0033 | 9657 | 1 | 37.99 | 1247.14 |

1.35.20.dev2

What's Changed
* (feat) Langfuse - Add location logging, and add cache_hit to metadata. by Manouchehri in https://github.com/BerriAI/litellm/pull/2961
* [FEAT] Add `groq/llama3` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3176
* [UI] Show teams as dropdown in invite user flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3180
* [FEAT] Log team alias to langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3175
* fix: Stream completion responses from anthropic. (Fix 3129) by jmandel in https://github.com/BerriAI/litellm/pull/3174
* [Fix] - Langfuse log proxy_base_url to langfuse as a tag (if set by user) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3183
* fix(caching.py): dual cache async_batch_get_cache fix + testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/3179
* fix(caching.py): fix redis url parsing logic to work with ssl urls by krrishdholakia in https://github.com/BerriAI/litellm/pull/3173
* [Fix] completion(model="gemini/gemini-pro-1.5-latest") raises Exception by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3186
* fix(_redis.py): support redis ssl as a kwarg `REDIS_SSL` by krrishdholakia in https://github.com/BerriAI/litellm/pull/3191 (SSL cache sketch after this list)
* FIX: ollama chat completion proxy internal server 500 by merefield in https://github.com/BerriAI/litellm/pull/3189
* Disable special tokens in ollama completion when counting tokens by rick-github in https://github.com/BerriAI/litellm/pull/3170
* [Fix] - `/audio/transcriptions` security fix by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3193
* [UI] - non admin flow - only Create + Test Key available by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3184
* Load google ADC before init AnthropicVertex by ushuz in https://github.com/BerriAI/litellm/pull/3150
* Fix tool call errors using anthropic by n1lanjan in https://github.com/BerriAI/litellm/pull/3118
* Fix new line issue in cohere_message_pt by elisalimli in https://github.com/BerriAI/litellm/pull/3115
* fix - slack alerting show `input` for embedding requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3196
* fix(router.py): Make TPM limits concurrency-safe by krrishdholakia in https://github.com/BerriAI/litellm/pull/3192
* [UI] - simplify "Create Key" for non admins by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3197
* ui - fix create key flow / cleanup non admin flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3198
* Revert "Load google ADC before init AnthropicVertex" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3203
* [Feat]- show langfuse trace in slack alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3209
* [UI] round up team spend to 2 decimals + diversify legend for team spend by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3210
* UI - increase default session time to 2 hours by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3212
* [UI-Polish] Cleanup Inputting Key Name, Team Name, User Email by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3218
* Update langsmith_integration.md by bllchmbrs in https://github.com/BerriAI/litellm/pull/3205
* Added openrouter/meta-llama/llama-3-70b-instruct context and cost metrics by paul-gauthier in https://github.com/BerriAI/litellm/pull/3223
* [UI-Fix] Show all teams on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3219
* fix(prometheus.py): add user tracking to prometheus by krrishdholakia in https://github.com/BerriAI/litellm/pull/3224
* [Bug-Fix] Alerting - don't send hanging request alert on failed request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3226
* improve(vertex_ai.py): Switch to simpler dict type for supporting JSON mode by Manouchehri in https://github.com/BerriAI/litellm/pull/3211
* (Vertex AI) - Add `frequency_penalty` and `presence_penalty` support by Manouchehri in https://github.com/BerriAI/litellm/pull/3214
* [Fix] Non-Admin SSO Login by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3228
* (utils.py) - Fix response_format typo for Groq by Manouchehri in https://github.com/BerriAI/litellm/pull/3231
* fix(router.py) handle initial model list being empty by krrishdholakia in https://github.com/BerriAI/litellm/pull/3242
* [Fix] Proxy: updating router settings from UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3245
* [Fix] Linking Langfuse Projects to Slack Alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3244
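
The `REDIS_SSL` support from PR #3191 above can also be passed programmatically. A minimal sketch, assuming `litellm.Cache` forwards the `ssl` kwarg to the underlying Redis client; host, port, and password are placeholders:

```python
# Sketch: pointing LiteLLM's cache at an SSL-enabled Redis (per PR #3191).
# Assumption: litellm.Cache forwards `ssl` to the Redis client.
import litellm

litellm.cache = litellm.Cache(
    type="redis",
    host="my-redis.example.com",  # placeholder
    port=6380,                    # placeholder SSL port
    password="...",               # placeholder; or set REDIS_PASSWORD
    ssl=True,                     # equivalent to exporting REDIS_SSL=true
)

# Identical repeated calls can now be served from the Redis cache.
litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    caching=True,
)
```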

New Contributors
* jmandel made their first contribution in https://github.com/BerriAI/litellm/pull/3174
* merefield made their first contribution in https://github.com/BerriAI/litellm/pull/3189
* rick-github made their first contribution in https://github.com/BerriAI/litellm/pull/3170
* n1lanjan made their first contribution in https://github.com/BerriAI/litellm/pull/3118
* elisalimli made their first contribution in https://github.com/BerriAI/litellm/pull/3115
* bllchmbrs made their first contribution in https://github.com/BerriAI/litellm/pull/3205
* paul-gauthier made their first contribution in https://github.com/BerriAI/litellm/pull/3223

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.15-stable...v1.35.20.dev2

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 42 | 45.92 | 1.56 | 0.0 | 466 | 0 | 35.09 | 641.01 |
| /health/liveliness | Passed ✅ | 26 | 28.60 | 15.59 | 0.0 | 4667 | 0 | 23.42 | 1197.47 |
| /health/readiness | Passed ✅ | 26 | 28.54 | 15.56 | 0.0067 | 4659 | 2 | 23.54 | 1258.72 |
| Aggregated | Passed ✅ | 26 | 29.40 | 32.70 | 0.0067 | 9792 | 2 | 23.42 | 1258.72 |

1.35.19

Not secure
What's Changed
* [Feat]- show langfuse trace in slack alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3209

https://docs.litellm.ai/docs/proxy/alerting
![pika-1713849265120-1x](https://github.com/BerriAI/litellm/assets/29436595/9a9f26c6-a0d1-4097-85ed-71c8e407db68)
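
For the alert-to-trace link to resolve, Langfuse logging has to be enabled in the first place. A minimal sketch of the standard callback setup; the API keys are placeholders:

```python
# Sketch: enabling Langfuse logging so Slack alerts have a trace to link to
# (see https://docs.litellm.ai/docs/proxy/alerting). Keys are placeholders.
import os
import litellm

os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."  # placeholder
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."  # placeholder

litellm.success_callback = ["langfuse"]  # log successful calls to Langfuse

litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
)
```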




**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.18...v1.35.19

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 41 | 46.09 | 1.72 | 0.0 | 516 | 0 | 36.10 | 419.30 |
| /health/liveliness | Passed ✅ | 25 | 28.66 | 15.22 | 0.0 | 4556 | 0 | 23.22 | 1240.95 |
| /health/readiness | Passed ✅ | 25 | 28.98 | 15.57 | 0.0 | 4662 | 0 | 23.32 | 1266.79 |
| Aggregated | Passed ✅ | 25 | 29.74 | 32.51 | 0.0 | 9734 | 0 | 23.22 | 1266.79 |

1.35.19.dev1

What's Changed
* [UI] round up team spend to 2 decimals + diversify legend for team spend by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3210
* UI - increase default session time to 2 hours by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3212


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.19...1.35.19.dev1

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 82 | 87.01 | 1.62 | 0.0 | 484 | 0 | 76.50 | 688.33 |
| /health/liveliness | Passed ✅ | 66 | 69.47 | 15.33 | 0.0033 | 4589 | 1 | 63.48 | 1318.00 |
| /health/readiness | Passed ✅ | 66 | 70.14 | 15.14 | 0.0 | 4531 | 0 | 63.27 | 1354.62 |
| Aggregated | Passed ✅ | 66 | 70.67 | 32.08 | 0.0033 | 9604 | 1 | 63.27 | 1354.62 |

1.35.18

Not secure
🚨 The async migration of the 'simple-shuffle' routing strategy is missing the random + TPM shuffle; it was added back in `v1.35.20`. A sketch of an affected configuration is below.
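
A minimal sketch of a Router configured with the affected strategy; model names and API keys are placeholders:

```python
# Sketch: a Router using the 'simple-shuffle' strategy referenced in the
# notice above. Model names and API keys are placeholders.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "azure/my-gpt-35", "api_key": "..."},
        },
    ],
    routing_strategy="simple-shuffle",  # async path lacked the shuffle before v1.35.20
)
```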

What's Changed
* fix(_redis.py): support redis ssl as a kwarg `REDIS_SSL` by krrishdholakia in https://github.com/BerriAI/litellm/pull/3191
* FIX: ollama chat completion proxy internal server 500 by merefield in https://github.com/BerriAI/litellm/pull/3189
* Disable special tokens in ollama completion when counting tokens by rick-github in https://github.com/BerriAI/litellm/pull/3170
* [Fix] - `/audio/transcriptions` security fix by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3193
* [UI] - non admin flow - only Create + Test Key available by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3184
* Load google ADC before init AnthropicVertex by ushuz in https://github.com/BerriAI/litellm/pull/3150
* Fix tool call errors using anthropic by n1lanjan in https://github.com/BerriAI/litellm/pull/3118
* Fix new line issue in cohere_message_pt by elisalimli in https://github.com/BerriAI/litellm/pull/3115
* fix - slack alerting show `input` for embedding requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3196
* fix(router.py): Make TPM limits concurrency-safe by krrishdholakia in https://github.com/BerriAI/litellm/pull/3192 (router sketch after this list)
* [UI] - simplify "Create Key" for non admins by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3197
* ui - fix create key flow / cleanup non admin flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3198
* Revert "Load google ADC before init AnthropicVertex" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3203
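
PR #3192 above makes the per-deployment TPM accounting concurrency-safe. A minimal sketch of usage-based routing with TPM budgets, assuming `tpm` is set per deployment inside `litellm_params`; all values are placeholders:

```python
# Sketch: usage-based routing with per-deployment TPM limits (see PR #3192,
# which made the TPM accounting concurrency-safe). Values are placeholders.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/gpt-35-eu",  # placeholder deployment
                "api_key": "...",
                "tpm": 240_000,              # tokens-per-minute budget
            },
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/gpt-35-us",  # placeholder deployment
                "api_key": "...",
                "tpm": 120_000,
            },
        },
    ],
    routing_strategy="usage-based-routing",  # route to the deployment with headroom
)
```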

New Contributors
* merefield made their first contribution in https://github.com/BerriAI/litellm/pull/3189
* rick-github made their first contribution in https://github.com/BerriAI/litellm/pull/3170
* n1lanjan made their first contribution in https://github.com/BerriAI/litellm/pull/3118
* elisalimli made their first contribution in https://github.com/BerriAI/litellm/pull/3115

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.17...v1.35.18

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 81 | 84.52 | 1.54 | 0.0 | 462 | 0 | 75.14 | 512.15 |
| /health/liveliness | Passed ✅ | 66 | 67.80 | 15.09 | 0.0 | 4516 | 0 | 63.42 | 575.86 |
| /health/readiness | Passed ✅ | 66 | 68.59 | 15.57 | 0.0 | 4662 | 0 | 63.41 | 1172.49 |
| Aggregated | Passed ✅ | 66 | 68.98 | 32.20 | 0.0 | 9640 | 0 | 63.41 | 1172.49 |

1.35.17

Not secure
What's Changed
* [Fix] completion(model="gemini/gemini-pro-1.5-latest") raises Exception by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3186 (example below)
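
A minimal sketch of the call this release fixes; it assumes `GEMINI_API_KEY` is set in the environment, and the message content is illustrative:

```python
# Sketch: the call fixed by PR #3186, which previously raised an exception.
# Assumes GEMINI_API_KEY is set in the environment.
import litellm

response = litellm.completion(
    model="gemini/gemini-pro-1.5-latest",  # Google AI Studio provider prefix
    messages=[{"role": "user", "content": "Summarize this release."}],
)
print(response.choices[0].message.content)
```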


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.35.16...v1.35.17

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 93 | 101.54 | 1.49 | 0.0033 | 447 | 1 | 84.96 | 1145.11 |
| /health/liveliness | Passed ✅ | 78 | 80.17 | 15.04 | 0.0 | 4503 | 0 | 73.89 | 1396.90 |
| /health/readiness | Passed ✅ | 78 | 80.22 | 15.30 | 0.0 | 4580 | 0 | 73.93 | 1436.81 |
| Aggregated | Passed ✅ | 78 | 81.20 | 31.84 | 0.0033 | 9530 | 1 | 73.89 | 1436.81 |
