LiteLLM

Latest version: v1.52.14


1.52.3

What's Changed
* Litellm Minor Fixes & Improvements (11/08/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6658
* (pricing): Fix multiple mistakes in Claude pricing by Manouchehri in https://github.com/BerriAI/litellm/pull/6666


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.2...v1.52.3



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.3
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
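With the container running, the proxy can be smoke-tested through its OpenAI-compatible `/chat/completions` endpoint. This is a sketch: the model name `gpt-4o` and the key `sk-1234` are placeholder assumptions that depend on how your proxy is configured.

```shell
# Hypothetical smoke test for a locally running LiteLLM proxy.
# "gpt-4o" and "sk-1234" are placeholders -- substitute a model and
# virtual key that actually exist in your deployment.
curl -s http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```

A successful response is a standard OpenAI-format chat completion JSON object.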

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 210.0 | 236.59706194640916 | 6.234242656262243 | 0.0 | 1866 | 0 | 180.61705699994945 | 3424.5764140000006 |
| Aggregated | Passed βœ… | 210.0 | 236.59706194640916 | 6.234242656262243 | 0.0 | 1866 | 0 | 180.61705699994945 | 3424.5764140000006 |

1.52.2

What's Changed
* chore: comment for maritalk by nobu007 in https://github.com/BerriAI/litellm/pull/6607
* Update gpt-4o-2024-08-06, and o1-preview, o1-mini models in model cost map by emerzon in https://github.com/BerriAI/litellm/pull/6654
* (QOL improvement) add unit testing for all static_methods in litellm_logging.py by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6640
* (feat) log error class, function_name on prometheus service failure hook + only log DB related failures on DB service hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6650
* Update several Azure AI models in model cost map by emerzon in https://github.com/BerriAI/litellm/pull/6655
* ci(conftest.py): reset conftest.py for local_testing/ by krrishdholakia in https://github.com/BerriAI/litellm/pull/6657
* Litellm dev 11 07 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6649

New Contributors
* emerzon made their first contribution in https://github.com/BerriAI/litellm/pull/6654

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.1...v1.52.2



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.2
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 230.0 | 251.09411961031876 | 6.087114215107422 | 0.0 | 1822 | 0 | 198.72582000004968 | 1667.4085729999888 |
| Aggregated | Passed βœ… | 230.0 | 251.09411961031876 | 6.087114215107422 | 0.0 | 1822 | 0 | 198.72582000004968 | 1667.4085729999888 |

1.52.2-dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.3...v1.52.2-dev1



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.2-dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 200.0 | 220.40195196940357 | 6.227773108800657 | 0.0 | 1863 | 0 | 180.672578000042 | 2967.1460419999676 |
| Aggregated | Passed βœ… | 200.0 | 220.40195196940357 | 6.227773108800657 | 0.0 | 1863 | 0 | 180.672578000042 | 2967.1460419999676 |

1.52.1

What's Changed
* (DB fix) don't run apply_db_fixes on startup by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6604
* LiteLLM Minor Fixes & Improvements (11/04/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6572
* ci: remove redundant lint.yml workflow by krrishdholakia in https://github.com/BerriAI/litellm/pull/6622
* LiteLLM Minor Fixes & Improvements (11/05/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6590
* LiteLLM Minor Fixes & Improvements (11/06/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6624
* (feat) GCS Bucket logging. Allow using IAM auth for logging to GCS by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6628
* Update opentelemetry_integration.md - Fix typos by ronensc in https://github.com/BerriAI/litellm/pull/6618
* (fix) ProxyStartup - Check that prisma connection is healthy when starting an instance of LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6627
* Update team_budgets.md by superpoussin22 in https://github.com/BerriAI/litellm/pull/6611
* (feat) Allow failed DB connection requests to allow virtual keys with `allow_failed_db_requests` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6605
* fix(pattern_match_deployments.py): default to user input if unable to… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6646
* fix(pattern_match_deployments.py): default to user input if unable to… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6632

New Contributors
* ronensc made their first contribution in https://github.com/BerriAI/litellm/pull/6618

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.0...v1.52.1



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 260.0 | 292.8286898309638 | 6.110969302244283 | 0.0 | 1828 | 0 | 230.12115400001676 | 2643.3588609999674 |
| Aggregated | Passed βœ… | 260.0 | 292.8286898309638 | 6.110969302244283 | 0.0 | 1828 | 0 | 230.12115400001676 | 2643.3588609999674 |

v1.52.0-stable

What's Changed
* LiteLLM Minor Fixes & Improvements (11/01/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6551
* Litellm dev 11 02 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6561
* build(deps): bump cookie and express in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/6566
* docs(virtual_keys.md): update Dockerfile reference by emmanuel-ferdman in https://github.com/BerriAI/litellm/pull/6554
* (proxy fix) - call connect on prisma client when running setup by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6534
* Add 3.5 haiku by paul-gauthier in https://github.com/BerriAI/litellm/pull/6588
* Litellm perf improvements 3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6573
* (fix) /image/generation - ImageObject conversion when `content_filter_results` exists by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6584
* (fix) litellm.text_completion raises a non-blocking error on simple usage by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6546
* (feat) add `Predicted Outputs` for OpenAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6594
* (fix) Vertex Improve Performance when using `image_url` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6593
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check by krrishdholakia in https://github.com/BerriAI/litellm/pull/6577

New Contributors
* emmanuel-ferdman made their first contribution in https://github.com/BerriAI/litellm/pull/6554

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.3...v1.52.0-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.0-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 240.0 | 270.29554346208295 | 6.124428891308869 | 0.0 | 1833 | 0 | 212.83106800001406 | 1622.2440090000418 |
| Aggregated | Passed βœ… | 240.0 | 270.29554346208295 | 6.124428891308869 | 0.0 | 1833 | 0 | 212.83106800001406 | 1622.2440090000418 |

1.52.0

![Group 6166](https://github.com/user-attachments/assets/9db96953-371e-4a89-b3ed-148213dc7b56)

What's Changed
* LiteLLM Minor Fixes & Improvements (11/01/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6551
* Litellm dev 11 02 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6561
* build(deps): bump cookie and express in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/6566
* docs(virtual_keys.md): update Dockerfile reference by emmanuel-ferdman in https://github.com/BerriAI/litellm/pull/6554
* (proxy fix) - call connect on prisma client when running setup by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6534
* Add 3.5 haiku by paul-gauthier in https://github.com/BerriAI/litellm/pull/6588
* Litellm perf improvements 3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6573
* (fix) /image/generation - ImageObject conversion when `content_filter_results` exists by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6584
* (fix) litellm.text_completion raises a non-blocking error on simple usage by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6546
* (feat) add `Predicted Outputs` for OpenAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6594
* (fix) Vertex Improve Performance when using `image_url` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6593
* fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check by krrishdholakia in https://github.com/BerriAI/litellm/pull/6577
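The `Predicted Outputs` entry above corresponds to OpenAI's `prediction` request parameter, which lets the model reuse a draft you already have when regenerating mostly-unchanged text. A hedged sketch of passing it through litellm follows; the parameter shape mirrors OpenAI's API, and exact model support should be confirmed in the litellm docs (running it also requires a configured OpenAI key):

```python
import litellm

# "Predicted Outputs": supply an existing draft so the provider can
# reuse unchanged spans, reducing latency for small edits.
code = "def add(a, b):\n    return a + b\n"

response = litellm.completion(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "user", "content": f"Rename the function to `sum_two`:\n{code}"}
    ],
    # Shape follows OpenAI's `prediction` parameter; litellm forwards it.
    prediction={"type": "content", "content": code},
)
```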

New Contributors
* emmanuel-ferdman made their first contribution in https://github.com/BerriAI/litellm/pull/6554

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.3...v1.52.0




Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.0
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 210.0 | 231.0704959909717 | 6.291122309918775 | 0.0 | 1883 | 0 | 180.74613400000317 | 2618.3897060000163 |
| Aggregated | Passed βœ… | 210.0 | 231.0704959909717 | 6.291122309918775 | 0.0 | 1883 | 0 | 180.74613400000317 | 2618.3897060000163 |

1.51.3

What's Changed
* Support specifying exponential backoff retry strategy when calling completions() by dbczumar in https://github.com/BerriAI/litellm/pull/6520
* (fix) slack alerting - don't spam the failed cost tracking alert for the same model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6543
* (feat) add XAI ChatCompletion Support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6373
* LiteLLM Minor Fixes & Improvements (10/30/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6519
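The retry change in the first bullet adds an exponential-backoff strategy to `completions()`. The idea behind exponential backoff can be sketched independently of litellm; the schedule below is illustrative, not litellm's exact implementation:

```python
def backoff_delays(base: float = 1.0, factor: float = 2.0, retries: int = 4) -> list[float]:
    """Illustrative exponential backoff schedule: delay = base * factor**attempt."""
    return [base * factor ** attempt for attempt in range(retries)]

# Each retry waits geometrically longer than the last, easing load on a
# struggling upstream instead of hammering it at a constant interval.
print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0]
```

Real implementations usually also add random jitter and cap the maximum delay.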


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.2...v1.51.3



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.51.3
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 200.0 | 220.3819331893052 | 6.253936592654308 | 0.0 | 1870 | 0 | 179.7343989999831 | 3185.1700670000014 |
| Aggregated | Passed βœ… | 200.0 | 220.3819331893052 | 6.253936592654308 | 0.0 | 1870 | 0 | 179.7343989999831 | 3185.1700670000014 |

v1.51.1-stable

What's Changed
* (UI) Delete Internal Users on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6442
* (testing) increase prometheus.py test coverage to 90% by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6466
* (Feat) New Logging integration - add Datadog LLM Observability support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6449
* (feat) add pricing for amazon.titan-embed-image-v1 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6444
* LiteLLM Minor Fixes & Improvements (10/24/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6441
* Litellm dev 10 26 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6472
* (Testing) Add unit testing for DualCache - ensure in memory cache is used when expected by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6471
* redis otel tracing + async support for latency routing by krrishdholakia in https://github.com/BerriAI/litellm/pull/6452
* (fix) Prometheus - Log Postgres DB latency, status on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6484
* (router_strategy/) ensure all async functions use async cache methods by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6489
* (fix) proxy - fix when `STORE_MODEL_IN_DB` should be set by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6492
* (fix) `PrometheusServicesLogger` `_get_metric` should return metric in Registry by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6486
* Add `azure/gpt-4o-mini-2024-07-18` to model_prices_and_context_window… by xingyaoww in https://github.com/BerriAI/litellm/pull/6477
* Update utils.py by vibhanshu-ob in https://github.com/BerriAI/litellm/pull/6468




Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_nov1-v1.51.1
```



**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.2...v1.51.1-stable

custom-docker-image-test-dev

What's Changed
* Support specifying exponential backoff retry strategy when calling completions() by dbczumar in https://github.com/BerriAI/litellm/pull/6520


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.2...custom-docker-image-test-dev



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-custom-docker-image-test-dev
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 277.2980743547866 | 6.178135563258271 | 0.0 | 1849 | 0 | 222.02958399998352 | 3219.9342000000115 |
| Aggregated | Passed βœ… | 250.0 | 277.2980743547866 | 6.178135563258271 | 0.0 | 1849 | 0 | 222.02958399998352 | 3219.9342000000115 |
