LiteLLM

Latest version: v1.52.14


v1.51.3-dev1

What's Changed
* LiteLLM Minor Fixes & Improvements (11/01/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6551
* Litellm dev 11 02 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6561


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.3...v1.51.3-dev1



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.3-dev1
```
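
The proxy then serves an OpenAI-compatible API on port 4000. A minimal client sketch, assuming a model alias `gpt-4o` has been configured on the proxy and `sk-1234` stands in for a valid virtual key (both placeholders):

```python
# Minimal sketch: call the LiteLLM proxy's OpenAI-compatible endpoint.
# Assumes the proxy is running on localhost:4000, a model alias "gpt-4o"
# is configured on it, and "sk-1234" is a valid virtual key (placeholders).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # the LiteLLM proxy, not api.openai.com
    api_key="sk-1234",                 # a LiteLLM virtual key (placeholder)
)

response = client.chat.completions.create(
    model="gpt-4o",  # must match a model configured on the proxy
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(response.choices[0].message.content)
```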



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 292.3714877928421 | 6.163980651581093 | 0.0 | 1844 | 0 | 226.11442700002726 | 2207.691740000001 |
| Aggregated | Passed ✅ | 250.0 | 292.3714877928421 | 6.163980651581093 | 0.0 | 1844 | 0 | 226.11442700002726 | 2207.691740000001 |

v1.51.2

What's Changed
* (perf) Litellm redis router fix - ~100ms improvement by krrishdholakia in https://github.com/BerriAI/litellm/pull/6483 (see the sketch below)
* LiteLLM Minor Fixes & Improvements (10/28/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6475
* Litellm dev 10 29 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6502
* Litellm router max depth by krrishdholakia in https://github.com/BerriAI/litellm/pull/6501
* (UI) fix bug with rendering max budget = 0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6506
* (UI) fix + test displaying number of keys an internal user owns by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6507
* (UI) Fix viewing members, keys in a team + added testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6514


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.1...v1.51.2
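
The Redis router fix above targets setups where a `litellm.Router` shares state through Redis. A minimal sketch of such a setup, with placeholder credentials and model names:

```python
# Minimal sketch of a litellm.Router backed by Redis, the configuration the
# perf fix above targets. Host, credentials, and model names are placeholders.
import litellm

router = litellm.Router(
    model_list=[
        {
            "model_name": "gpt-4o",  # alias that requests will use
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "sk-..."},
        }
    ],
    redis_host="localhost",     # shared cache used for cross-instance routing
    redis_port=6379,
    redis_password="password",  # placeholder
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
```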



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 287.73103211135674 | 6.039141910660648 | 0.0 | 1805 | 0 | 213.5724959999834 | 2930.6253560000073 |
| Aggregated | Passed ✅ | 250.0 | 287.73103211135674 | 6.039141910660648 | 0.0 | 1805 | 0 | 213.5724959999834 | 2930.6253560000073 |

v1.51.1-staging

What's Changed
* (perf) Litellm redis router fix - ~100ms improvement by krrishdholakia in https://github.com/BerriAI/litellm/pull/6483
* LiteLLM Minor Fixes & Improvements (10/28/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6475
* Litellm dev 10 29 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6502
* Litellm router max depth by krrishdholakia in https://github.com/BerriAI/litellm/pull/6501
* (UI) fix bug with rendering max budget = 0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6506
* (UI) fix + test displaying number of keys an internal user owns by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6507
* (UI) Fix viewing members, keys in a team + added testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6514


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.1...v1.51.1-staging



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.1-staging
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 311.93605914725106 | 6.080288332872121 | 0.0033408177653143525 | 1820 | 1 | 117.93499300000576 | 3293.080912999983 |
| Aggregated | Failed ❌ | 270.0 | 311.93605914725106 | 6.080288332872121 | 0.0033408177653143525 | 1820 | 1 | 117.93499300000576 | 3293.080912999983 |

v1.51.1

What's Changed
* (UI) Delete Internal Users on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6442
* (testing) increase prometheus.py test coverage to 90% by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6466
* (Feat) New Logging integration - add Datadog LLM Observability support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6449 (see the sketch below)
* (feat) add pricing for amazon.titan-embed-image-v1 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6444
* LiteLLM Minor Fixes & Improvements (10/24/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6441
* Litellm dev 10 26 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6472
* (Testing) Add unit testing for DualCache - ensure in memory cache is used when expected by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6471
* redis otel tracing + async support for latency routing by krrishdholakia in https://github.com/BerriAI/litellm/pull/6452
* (fix) Prometheus - Log Postgres DB latency, status on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6484
* (router_strategy/) ensure all async functions use async cache methods by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6489
* (fix) proxy - fix when `STORE_MODEL_IN_DB` should be set by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6492
* (fix) `PrometheusServicesLogger` `_get_metric` should return metric in Registry by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6486
* Add `azure/gpt-4o-mini-2024-07-18` to model_prices_and_context_window… by xingyaoww in https://github.com/BerriAI/litellm/pull/6477
* Update utils.py by vibhanshu-ob in https://github.com/BerriAI/litellm/pull/6468

New Contributors
* xingyaoww made their first contribution in https://github.com/BerriAI/litellm/pull/6477
* vibhanshu-ob made their first contribution in https://github.com/BerriAI/litellm/pull/6468

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.0...v1.51.1
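
The Datadog LLM Observability entry (#6449) hooks into LiteLLM's callback system. A rough sketch under the assumption that the callback is registered by name as `datadog_llm_observability` and reads the standard Datadog environment variables; see the PR for the exact wiring:

```python
# Rough sketch of enabling the Datadog LLM Observability integration via
# LiteLLM's callback system. The callback name and env-var wiring are
# assumptions based on LiteLLM's usual pattern; see PR #6449 for details.
import os
import litellm

os.environ["DD_API_KEY"] = "..."         # standard Datadog credentials
os.environ["DD_SITE"] = "datadoghq.com"  # your Datadog site

litellm.callbacks = ["datadog_llm_observability"]  # assumed callback name

# Provider credentials (e.g. OPENAI_API_KEY) are assumed to be set.
litellm.completion(
    model="gpt-4o",  # placeholder model
    messages=[{"role": "user", "content": "hi"}],
)
```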



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 217.96900764879396 | 6.232037758758388 | 0.0 | 1865 | 0 | 178.63059899997324 | 1220.8741560000362 |
| Aggregated | Passed ✅ | 200.0 | 217.96900764879396 | 6.232037758758388 | 0.0 | 1865 | 0 | 178.63059899997324 | 1220.8741560000362 |

v1.51.0

What's Changed
* perf: remove 'always_read_redis' - adding +830ms on each llm call by krrishdholakia in https://github.com/BerriAI/litellm/pull/6414
* feat(litellm_logging.py): refactor standard_logging_payload function … by krrishdholakia in https://github.com/BerriAI/litellm/pull/6388
* LiteLLM Minor Fixes & Improvements (10/23/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6407
* allow configuring httpx hooks for AsyncHTTPHandler (6290) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6415 (see the sketch below)
* feat(proxy_server.py): check if views exist on proxy server startup +… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6360
* feat(litellm_pre_call_utils.py): support 'add_user_information_to_llm… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6390
* (admin ui) - show created_at for virtual keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6429
* (feat) track created_at, updated_at for virtual keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6428
* Code cov - add checks for patch and overall repo by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6436
* (admin ui / auth fix) Allow internal user to call /key/{token}/regenerate by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6430
* LiteLLM Minor Fixes & Improvements (10/24/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6421
* (proxy audit logs) fix serialization error on audit logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6433


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.50.4...v1.51.0
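
The httpx-hooks entry (#6415) concerns LiteLLM's `AsyncHTTPHandler`. httpx itself exposes hooks as `event_hooks` on the client; the sketch below shows plain httpx hooks of the kind that change enables, while how LiteLLM passes them through is best checked against the PR:

```python
# Sketch of httpx event hooks of the kind PR #6415 lets you attach to
# LiteLLM's AsyncHTTPHandler. The hooks below are plain httpx; how they are
# handed to LiteLLM is an assumption - see the PR for the real wiring.
import httpx

async def log_request(request: httpx.Request) -> None:
    print(f"-> {request.method} {request.url}")

async def log_response(response: httpx.Response) -> None:
    print(f"<- {response.status_code} from {response.request.url}")

client = httpx.AsyncClient(
    event_hooks={"request": [log_request], "response": [log_response]}
)
```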



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 256.2776533033099 | 6.163517714105049 | 0.0 | 1843 | 0 | 210.4747610000004 | 1438.3136239999885 |
| Aggregated | Passed ✅ | 230.0 | 256.2776533033099 | 6.163517714105049 | 0.0 | 1843 | 0 | 210.4747610000004 | 1438.3136239999885 |

v1.50.4-stable

What's Changed
* (feat) Arize - Allow using Arize HTTP endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6364
* LiteLLM Minor Fixes & Improvements (10/22/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6384
* build(deps): bump http-proxy-middleware from 2.0.6 to 2.0.7 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/6395
* (docs + testing) Correctly document the timeout value used by litellm proxy is 6000 seconds + add to best practices for prod by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6339
* (refactor) move convert dict to model response to llm_response_utils/ by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6393
* (refactor) litellm.Router client initialization utils by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6394
* (fix) Langfuse key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6372
* Revert "(refactor) litellm.Router client initialization utils " by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6403
* (fix) using /completions with `echo` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6401 (see the sketch below)
* (refactor) prometheus async_log_success_event to be under 100 LOC by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6416
* (refactor) router - use static methods for client init utils by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6420
* (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6406


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.50.2...v1.50.4-stable
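
The `echo` entry (#6401) concerns the legacy text-completions route, where `echo=True` asks the server to prepend the prompt to the generated text. A minimal sketch against the proxy, with placeholder base URL, key, and model:

```python
# Minimal sketch of the legacy /completions route with echo=True, the case
# fixed above. Base URL, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # any text-completion model on the proxy
    prompt="Once upon a time",
    max_tokens=16,
    echo=True,  # response text should start with the prompt itself
)
print(response.choices[0].text)
```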



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.50.4-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 282.12398419383265 | 6.070437193170591 | 0.0 | 1816 | 0 | 215.3092099999867 | 6805.4257369999505 |
| Aggregated | Passed ✅ | 250.0 | 282.12398419383265 | 6.070437193170591 | 0.0 | 1816 | 0 | 215.3092099999867 | 6805.4257369999505 |

v1.51.0.dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.0...v1.51.0.dev1



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.0.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 304.38755454794625 | 6.1009349714042544 | 0.0 | 1825 | 0 | 232.0200620000037 | 3500.7745139999997 |
| Aggregated | Failed ❌ | 270.0 | 304.38755454794625 | 6.1009349714042544 | 0.0 | 1825 | 0 | 232.0200620000037 | 3500.7745139999997 |

v1.51.0-stable

What's Changed
* perf: remove 'always_read_redis' - adding +830ms on each llm call by krrishdholakia in https://github.com/BerriAI/litellm/pull/6414
* feat(litellm_logging.py): refactor standard_logging_payload function … by krrishdholakia in https://github.com/BerriAI/litellm/pull/6388
* LiteLLM Minor Fixes & Improvements (10/23/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6407
* allow configuring httpx hooks for AsyncHTTPHandler (6290) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6415
* feat(proxy_server.py): check if views exist on proxy server startup +… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6360
* feat(litellm_pre_call_utils.py): support 'add_user_information_to_llm… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6390
* (admin ui) - show created_at for virtual keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6429
* (feat) track created_at, updated_at for virtual keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6428
* Code cov - add checks for patch and overall repo by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6436
* (admin ui / auth fix) Allow internal user to call /key/{token}/regenerate by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6430
* LiteLLM Minor Fixes & Improvements (10/24/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6421
* (proxy audit logs) fix serialization error on audit logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6433


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.50.4...v1.51.0-stable



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.0-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 220.0 | 259.348547705819 | 6.147561516829862 | 0.0 | 1839 | 0 | 207.74116500001583 | 1588.2848330000456 |
| Aggregated | Passed ✅ | 220.0 | 259.348547705819 | 6.147561516829862 | 0.0 | 1839 | 0 | 207.74116500001583 | 1588.2848330000456 |

v1.50.4

What's Changed
* (feat) Arize - Allow using Arize HTTP endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6364
* LiteLLM Minor Fixes & Improvements (10/22/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6384
* build(deps): bump http-proxy-middleware from 2.0.6 to 2.0.7 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/6395
* (docs + testing) Correctly document the timeout value used by litellm proxy is 6000 seconds + add to best practices for prod by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6339 (see the sketch below)
* (refactor) move convert dict to model response to llm_response_utils/ by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6393
* (refactor) litellm.Router client initialization utils by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6394
* (fix) Langfuse key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6372
* Revert "(refactor) litellm.Router client initialization utils " by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6403
* (fix) using /completions with `echo` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6401
* (refactor) prometheus async_log_success_event to be under 100 LOC by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6416
* (refactor) router - use static methods for client init utils by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6420
* (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6406


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.50.2...v1.50.4
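
The timeout entry (#6339) documents the proxy's default request timeout as 6000 seconds. If the client should wait just as long instead of giving up first, set its timeout explicitly; a sketch with the openai SDK, placeholders as before:

```python
# Sketch: align the client-side timeout with the proxy's documented 6000 s
# default so long-running requests aren't cut off by the client first.
# Base URL and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",
    api_key="sk-1234",
    timeout=6000.0,  # seconds; matches the proxy's documented default
)
```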



Docker Run LiteLLM Proxy


```bash
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.50.4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 280.0 | 312.6482922531862 | 6.037218908394318 | 0.0 | 1805 | 0 | 231.8999450000092 | 2847.2051709999846 |
| Aggregated | Failed ❌ | 280.0 | 312.6482922531862 | 6.037218908394318 | 0.0 | 1805 | 0 | 231.8999450000092 | 2847.2051709999846 |
