# LiteLLM

Latest version: v1.52.14


## v1.49.6

### What's Changed
* (router testing) Add testing coverage for `run_async_fallback` and `run_sync_fallback` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6256
* LiteLLM Minor Fixes & Improvements (10/15/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6242
* (testing) Router add testing coverage by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6253
* (testing) add router unit testing for `send_llm_exception_alert`, `router_cooldown_event_callback`, cooldown utils by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6258
* Litellm router code coverage 3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6274
* Remove "ask mode" from Canary search by yujonglee in https://github.com/BerriAI/litellm/pull/6271
* LiteLLM Minor Fixes & Improvements (10/16/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6265


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.49.5...v1.49.6



### Docker Run LiteLLM Proxy

```bash
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.6
```
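
Once the container is running, you can send a quick test request against the proxy's OpenAI-compatible endpoint (the same `/chat/completions` route exercised in the load tests below). A minimal smoke test, assuming a model named `gpt-4o` has been configured on the proxy and no virtual key is required; adjust both for your deployment:

```bash
# Minimal smoke test against the proxy's OpenAI-compatible endpoint.
# "gpt-4o" is a placeholder model name; use one configured on your proxy.
# Add an Authorization header if your deployment requires a virtual key.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```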



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 286.97 | 6.17 | 0.0 | 1846 | 0 | 229.18 | 2911.25 |
| Aggregated | Passed ✅ | 250.0 | 286.97 | 6.17 | 0.0 | 1846 | 0 | 229.18 | 2911.25 |

## v1.49.5

### What's Changed
* (fix) prompt caching cost calculation OpenAI, Azure OpenAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6231
* (fix) arize handle optional params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6243
* Bump hono from 4.5.8 to 4.6.5 in /litellm-js/spend-logs by dependabot in https://github.com/BerriAI/litellm/pull/6245
* (refactor) caching - use _sync_set_cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6224
* Make meta in rerank API Response optional - Compatible with Opensource APIs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6248
* (testing - litellm.Router) add unit test coverage for pattern matching / wildcard routing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6250
* (refactor) sync caching - use `LLMCachingHandler` class for get_cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6249
* (refactor) - caching use separate files for each cache class by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6251


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.49.4...v1.49.5



### Docker Run LiteLLM Proxy

```bash
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.5
```
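
If you prefer a file-based model config over `STORE_MODEL_IN_DB`, the image can also be started with a mounted config file; a sketch following the pattern in the LiteLLM docs, assuming a `config.yaml` with a `model_list` exists in the current directory:

```bash
# Run the proxy from a mounted config file instead of the DB.
# ./config.yaml is assumed to define your model_list; see the LiteLLM docs.
docker run \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.5 \
  --config /app/config.yaml
```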



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 286.12 | 6.10 | 0.0 | 1826 | 0 | 224.76 | 2036.49 |
| Aggregated | Passed ✅ | 250.0 | 286.12 | 6.10 | 0.0 | 1826 | 0 | 224.76 | 2036.49 |

## v1.49.4

### What's Changed
* (refactor router.py) - PR 3 - Ensure all functions under 100 lines by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6181
* [Bug Fix]: fix litellm.caching imports on python SDK by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6219
* LiteLLM Minor Fixes & Improvements (10/14/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6221
* test(router_code_coverage.py): check if all router functions are dire… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6186
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6220
* (refactor) OTEL - use safe_set_attribute for setting attributes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6226


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.49.3...v1.49.4



### Docker Run LiteLLM Proxy

```bash
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 212.53 | 6.24 | 0.0 | 1869 | 0 | 178.26 | 1357.90 |
| Aggregated | Passed ✅ | 200.0 | 212.53 | 6.24 | 0.0 | 1869 | 0 | 178.26 | 1357.90 |

## v1.49.3

### What's Changed
* Litellm Minor Fixes & Improvements (10/12/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6179
* build(config.yml): add codecov to repo by krrishdholakia in https://github.com/BerriAI/litellm/pull/6172
* ci(config.yml): add local_testing tests to codecov coverage check by krrishdholakia in https://github.com/BerriAI/litellm/pull/6183
* ci(config.yml): add further testing coverage to codecov by krrishdholakia in https://github.com/BerriAI/litellm/pull/6184
* docs(configs.md): document all environment variables by krrishdholakia in https://github.com/BerriAI/litellm/pull/6185
* (feat) add components to codecov yml by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6207
* (refactor) caching use LLMCachingHandler for async_get_cache and set_cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6208
* (feat) prometheus have well defined latency buckets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6211
* (refactor caching) use LLMCachingHandler for caching streaming responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6210
* bump getcanary/web to 1.0.9 by yujonglee in https://github.com/BerriAI/litellm/pull/6187
* (refactor caching) use common `_retrieve_from_cache` helper by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6212


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.49.2...v1.49.3



### Docker Run LiteLLM Proxy

```bash
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.3
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 313.93 | 6.00 | 0.0 | 1794 | 0 | 231.27 | 3079.47 |
| Aggregated | Failed ❌ | 270.0 | 313.93 | 6.00 | 0.0 | 1794 | 0 | 231.27 | 3079.47 |

## v1.49.2-stable

### What's Changed
* Add literalai in the sidebar observability category by willydouhard in https://github.com/BerriAI/litellm/pull/6163
* Search across docs, GitHub issues, and discussions by yujonglee in https://github.com/BerriAI/litellm/pull/6160
* Feat: Add Langtrace integration by alizenhom in https://github.com/BerriAI/litellm/pull/5341
* (fix) add azure/gpt-4o-2024-05-13 pricing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6174
* LiteLLM Minor Fixes & Improvements (10/10/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6158
* (fix) batch_completion fails with bedrock due to extraneous [max_workers] key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6176
* (fix) provider wildcard routing - when models specified without provider prefix by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6173

### New Contributors
* alizenhom made their first contribution in https://github.com/BerriAI/litellm/pull/5341

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.49.1...v1.49.2-stable



### Docker Run LiteLLM Proxy

```bash
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.2-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 281.30 | 6.13 | 0.0 | 1835 | 0 | 226.91 | 1465.56 |
| Aggregated | Passed ✅ | 250.0 | 281.30 | 6.13 | 0.0 | 1835 | 0 | 226.91 | 1465.56 |

## v1.49.2

### What's Changed
* Add literalai in the sidebar observability category by willydouhard in https://github.com/BerriAI/litellm/pull/6163
* Search across docs, GitHub issues, and discussions by yujonglee in https://github.com/BerriAI/litellm/pull/6160
* Feat: Add Langtrace integration by alizenhom in https://github.com/BerriAI/litellm/pull/5341
* (fix) add azure/gpt-4o-2024-05-13 pricing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6174
* LiteLLM Minor Fixes & Improvements (10/10/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6158
* (fix) batch_completion fails with bedrock due to extraneous [max_workers] key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6176
* (fix) provider wildcard routing - when models specified without provider prefix by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6173

### New Contributors
* alizenhom made their first contribution in https://github.com/BerriAI/litellm/pull/5341

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.49.1...v1.49.2



### Docker Run LiteLLM Proxy

```bash
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 263.32 | 6.12 | 0.0 | 1833 | 0 | 205.50 | 2676.18 |
| Aggregated | Passed ✅ | 230.0 | 263.32 | 6.12 | 0.0 | 1833 | 0 | 205.50 | 2676.18 |

## v1.49.1

### What's Changed
* (bug fix proxy ui) Default Team still rendered Even when disabled by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6134
* LiteLLM Minor Fixes & Improvements (10/09/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6139
* (feat) use regex pattern matching for wildcard routing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6150
* [Feat] Observability integration - Opik by Comet by jverre in https://github.com/BerriAI/litellm/pull/6062
* drop imghdr (5736) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6153

### New Contributors
* jverre made their first contribution in https://github.com/BerriAI/litellm/pull/6062

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.49.0...v1.49.1



### Docker Run LiteLLM Proxy

```bash
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.1
```
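
Before routing traffic to the container, you can poll the proxy's health endpoints; a quick check, assuming the readiness route is exposed as described in the LiteLLM proxy docs:

```bash
# Check that the proxy is up and ready to serve requests.
# Endpoint path per the LiteLLM proxy docs; verify for your version.
curl http://localhost:4000/health/readiness
```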



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 151.81 | 6.43 | 0.0 | 1924 | 0 | 106.09 | 2659.37 |
| Aggregated | Passed ✅ | 120.0 | 151.81 | 6.43 | 0.0 | 1924 | 0 | 106.09 | 2659.37 |
