LiteLLM

Latest version: v1.52.14


v1.52.8

What's Changed
* (chore) ci/cd fix - use correct `test_key_generate_prisma.py` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6718
* Litellm key update fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/6710
* Update code blocks in huggingface.md by Aiden-Jeon in https://github.com/BerriAI/litellm/pull/6737
* Doc fix for prefix support by CamdenClark in https://github.com/BerriAI/litellm/pull/6734
* (Feat) Add support for storing virtual keys in AWS SecretManager by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6728 (see the setup sketch after this list)
* LiteLLM Minor Fixes & Improvement (11/14/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6730
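
For the AWS Secrets Manager feature above, a minimal sketch of running the proxy with the credentials it would need. The env var names are the standard AWS ones; the mounted `config.yaml` is assumed to enable the key-management setting described in PR #6728 (the exact config keys live there, not here), and all values shown are placeholders.

```shell
# Sketch only: run the proxy with AWS credentials so generated virtual keys
# can be written to AWS Secrets Manager (PR #6728). config.yaml is assumed
# to enable the key-management setting from the PR; keys/region are placeholders.
docker run \
  -e STORE_MODEL_IN_DB=True \
  -e AWS_ACCESS_KEY_ID="<access-key>" \
  -e AWS_SECRET_ACCESS_KEY="<secret-key>" \
  -e AWS_REGION_NAME="us-east-1" \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.8 \
  --config /app/config.yaml
```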

New Contributors
* Aiden-Jeon made their first contribution in https://github.com/BerriAI/litellm/pull/6737
* CamdenClark made their first contribution in https://github.com/BerriAI/litellm/pull/6734

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.6...v1.52.8



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.8
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 270.0 | 298.55231204572533 | 6.139888957283805 | 0.0 | 1837 | 0 | 232.112771000061 | 1744.873116000008 |
| Aggregated | Passed ✅ | 270.0 | 298.55231204572533 | 6.139888957283805 | 0.0 | 1837 | 0 | 232.112771000061 | 1744.873116000008 |

v1.52.5-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.5.staging1...v1.52.5-stable

Docker image `ghcr.io/berriai/litellm:litellm_stable_nov12-stable`


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_nov12-stable
```


What's Changed
* Litellm dev 11 11 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6693
  * fix(init.py): add 'watsonx_text' as mapped llm api route
  * [fix(opentelemetry.py): fix passing parallel tool calls to otel](https://github.com/BerriAI/litellm/pull/6693/commits/c1f7a2309d3e0909a35a8757ff3de647325588a0)
  * [fix(init.py): update provider-model mapping to include all known provider-model mappings](https://github.com/BerriAI/litellm/pull/6693/commits/4449e4f6dbd74910efd4e436c58723a820dc1e4e)
  * [feat(anthropic): support passing document in llm api call](https://github.com/BerriAI/litellm/pull/6693/commits/6c08c5ebc0518a91d2bdd2c12576606f02b6e282) (see the PDF sketch after this list)
  * [docs(anthropic.md): add pdf anthropic call to docs + expose new 'supports_pdf_input' function](https://github.com/BerriAI/litellm/pull/6693/commits/712a84a10eddbbc8fed13ed86d65c3278779260f)

* Add docs to export logs to Laminar by dinmukhamedm in https://github.com/BerriAI/litellm/pull/6674
* (Feat) Add langsmith key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6682
* (fix) OpenAI's optional messages[].name does not work with Mistral API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6701
* (feat) add xAI on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6680
* (docs) add benchmarks on 1K RPS by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6704
* (feat) add cost tracking stable diffusion 3 on Bedrock by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6676
* fix: raise a correct 404 error when /key/info is called on a non-existent key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6653
* (feat) Add support for logging to GCS Buckets with folder paths by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6675
* (feat) add bedrock image gen async support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6672
* (feat) Add Bedrock Stability.ai Stable Diffusion 3 Image Generation models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6673
* (Feat) 273% improvement GCS Bucket Logger - use Batched Logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6679
* Litellm Minor Fixes & Improvements (11/08/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6658
  * fix(deepseek/chat): convert content list to str
  * test(test_deepseek_completion.py): implement base llm unit tests
  * fix(router.py): support content policy violation fallbacks with default fallbacks
  * fix(opentelemetry.py): refactor to move otel imports behind flag
  * fix(opentelemetry.py): close span on success completion
  * fix(user_api_key_auth.py): allow user_role to default to none

* (pricing): Fix multiple mistakes in Claude pricing by Manouchehri in https://github.com/BerriAI/litellm/pull/6666
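
For the Anthropic document support in #6693 above, a hedged sketch of a PDF call through the proxy. The content-part shape (a base64 data URL inside an `image_url` part) is an assumption drawn from the anthropic.md docs added in that PR; the model name, key, and `report.pdf` are placeholders. The PR also exposes a `supports_pdf_input` helper for checking model support programmatically.

```shell
# Sketch only: send a base64-encoded PDF to Claude through the proxy
# (PR #6693). The image_url data-URL shape is an assumption; see anthropic.md.
# `base64 -w0` is GNU coreutils; on macOS use `base64 < report.pdf | tr -d '\n'`.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "claude-3-5-sonnet-20240620",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Summarize this PDF."},
        {"type": "image_url",
         "image_url": {"url": "data:application/pdf;base64,'"$(base64 -w0 report.pdf)"'"}}
      ]
    }]
  }'
```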

New Contributors
* dinmukhamedm made their first contribution in https://github.com/BerriAI/litellm/pull/6674





Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.5-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 288.0333965427629 | 6.0955375578428805 | 0.0 | 1824 | 0 | 215.17615800001977 | 3641.4951400000177 |
| Aggregated | Passed ✅ | 250.0 | 288.0333965427629 | 6.0955375578428805 | 0.0 | 1824 | 0 | 215.17615800001977 | 3641.4951400000177 |

v1.52.8.dev1

What's Changed
* (feat) add bedrock/stability.stable-image-ultra-v1:0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6723
* [Feature]: Stop swallowing up AzureOpenAi exception responses in litellm's implementation for a BadRequestError by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6745
* [Feature]: json_schema in response support for Anthropic by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6748 (see the sketch after this list)
* fix: import audio check by IamRash-7 in https://github.com/BerriAI/litellm/pull/6740
* (fix) Cost tracking for `vertex_ai/imagen3` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6752
* (feat) Vertex AI - add support for fine tuned embedding models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6749
* LiteLLM Minor Fixes & Improvements (11/13/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6729
* feat - add us.llama 3.1 models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6760
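
For the Anthropic `json_schema` support in #6748 above, a sketch of the OpenAI-style `response_format` sent through the proxy. The payload follows the OpenAI spec; litellm's exact mapping onto Anthropic is in the PR, and the model name and key are placeholders.

```shell
# Sketch only: structured output from an Anthropic model via the proxy
# (PR #6748), using the OpenAI-style json_schema response_format.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "claude-3-5-sonnet-20240620",
    "messages": [{"role": "user", "content": "Extract: Jane is 41."}],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "person",
        "schema": {
          "type": "object",
          "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
          "required": ["name", "age"]
        }
      }
    }
  }'
```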

New Contributors
* IamRash-7 made their first contribution in https://github.com/BerriAI/litellm/pull/6740

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.8...v1.52.8.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.8.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 224.53181540392876 | 6.2943081785528925 | 0.0 | 1884 | 0 | 181.30709700000125 | 3153.26821299999 |
| Aggregated | Passed ✅ | 200.0 | 224.53181540392876 | 6.2943081785528925 | 0.0 | 1884 | 0 | 181.30709700000125 | 3153.26821299999 |

v1.52.6

What's Changed
* LiteLLM Minor Fixes & Improvements (11/12/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6705
* (feat) helm hook to sync db schema by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6715
* (fix proxy redis) Add redis sentinel support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6154
* Fix: Update gpt-4o costs to those of gpt-4o-2024-08-06 by klieret in https://github.com/BerriAI/litellm/pull/6714
* (fix) using Anthropic `response_format={"type": "json_object"}` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6721 (see the sketch after this list)
* (feat) Add cost tracking for Azure Dall-e-3 Image Generation + use base class to ensure basic image generation tests pass by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6716
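
For the Anthropic JSON-mode fix in #6721 above, a quick sketch through the proxy. `response_format={"type": "json_object"}` is the standard OpenAI-style parameter that litellm maps across providers; the model name and key are placeholders.

```shell
# Sketch only: Anthropic JSON mode via the proxy (PR #6721).
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "claude-3-5-sonnet-20240620",
    "messages": [{"role": "user", "content": "List three primes as JSON."}],
    "response_format": {"type": "json_object"}
  }'
```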

New Contributors
* klieret made their first contribution in https://github.com/BerriAI/litellm/pull/6714

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.5...v1.52.6



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.6
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 266.21521040425523 | 6.127671245386762 | 0.0 | 1833 | 0 | 215.80195500001764 | 2902.9665340000292 |
| Aggregated | Passed ✅ | 240.0 | 266.21521040425523 | 6.127671245386762 | 0.0 | 1833 | 0 | 215.80195500001764 | 2902.9665340000292 |

v1.52.5.staging1
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.5...v1.52.5.staging1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.5.staging1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 260.0 | 290.351197702432 | 6.041456642946781 | 0.0 | 1808 | 0 | 225.4500810000195 | 3132.288078999977 |
| Aggregated | Passed ✅ | 260.0 | 290.351197702432 | 6.041456642946781 | 0.0 | 1808 | 0 | 225.4500810000195 | 3132.288078999977 |

v1.52.6.dev1

What's Changed
* chore: comment for maritalk by nobu007 in https://github.com/BerriAI/litellm/pull/6607
* Update gpt-4o-2024-08-06, and o1-preview, o1-mini models in model cost map by emerzon in https://github.com/BerriAI/litellm/pull/6654
* (QOL improvement) add unit testing for all static_methods in litellm_logging.py by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6640
* (feat) log error class, function_name on prometheus service failure hook + only log DB related failures on DB service hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6650


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.1...v1.52.6.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.6.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 284.54861130679984 | 6.047368285253406 | 0.0 | 1809 | 0 | 224.15313200008313 | 1935.1971319999848 |
| Aggregated | Passed ✅ | 250.0 | 284.54861130679984 | 6.047368285253406 | 0.0 | 1809 | 0 | 224.15313200008313 | 1935.1971319999848 |

v1.52.5

What's Changed
* Litellm dev 11 11 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6693
* Add docs to export logs to Laminar by dinmukhamedm in https://github.com/BerriAI/litellm/pull/6674
* (Feat) Add langsmith key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6682 (see the sketch after this list)
* (fix) OpenAI's optional messages[].name does not work with Mistral API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6701
* (feat) add xAI on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6680
* (docs) add benchmarks on 1K RPS by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6704
* (feat) add cost tracking stable diffusion 3 on Bedrock by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6676
* fix: raise a correct 404 error when /key/info is called on a non-existent key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6653
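
For the Langsmith key-based logging in #6682 above, a heavily hedged sketch: logging settings are attached to a virtual key at creation time via `/key/generate`. The `logging` metadata schema shown here is an assumption drawn from the PR description; treat every field name and value as illustrative.

```shell
# Sketch only: attach per-key Langsmith logging at key creation (PR #6682).
# The metadata.logging schema is an assumption; see the PR for the real fields.
curl http://localhost:4000/key/generate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "metadata": {
      "logging": [{
        "callback_name": "langsmith",
        "callback_vars": {
          "langsmith_api_key": "ls-...",
          "langsmith_project": "my-project"
        }
      }]
    }
  }'
```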

New Contributors
* dinmukhamedm made their first contribution in https://github.com/BerriAI/litellm/pull/6674

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.4...v1.52.5



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.5
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 216.13288200000045 | 6.215294300193555 | 0.0 | 1859 | 0 | 166.97629999998753 | 1726.1806539999611 |
| Aggregated | Passed ✅ | 200.0 | 216.13288200000045 | 6.215294300193555 | 0.0 | 1859 | 0 | 166.97629999998753 | 1726.1806539999611 |

v1.52.4

What's Changed
* (feat) Add support for logging to GCS Buckets with folder paths by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6675 (see the sketch after this list)
* (feat) add bedrock image gen async support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6672
* (feat) Add Bedrock Stability.ai Stable Diffusion 3 Image Generation models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6673
* (Feat) 273% improvement GCS Bucket Logger - use Batched Logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6679
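
For the GCS folder-path logging in #6675 above, a sketch of the proxy wiring. The assumption is that the folder is expressed as a path suffix on `GCS_BUCKET_NAME`, and that the mounted `config.yaml` sets `litellm_settings.success_callback: ["gcs_bucket"]`; bucket name and file paths are placeholders.

```shell
# Sketch only: GCS bucket logging with a folder path (PR #6675).
# Assumes the folder rides along as a path suffix on GCS_BUCKET_NAME and
# config.yaml enables the gcs_bucket success callback.
docker run \
  -e STORE_MODEL_IN_DB=True \
  -e GCS_BUCKET_NAME="my-bucket/prod/litellm-logs" \
  -e GCS_PATH_SERVICE_ACCOUNT="/app/service_account.json" \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -v $(pwd)/service_account.json:/app/service_account.json \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.4 \
  --config /app/config.yaml
```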


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.3...v1.52.4



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 260.0 | 290.15274785816086 | 6.102299282865334 | 0.0 | 1826 | 0 | 221.48416699997142 | 3998.8694860000464 |
| Aggregated | Passed ✅ | 260.0 | 290.15274785816086 | 6.102299282865334 | 0.0 | 1826 | 0 | 221.48416699997142 | 3998.8694860000464 |
