LiteLLM

Latest version: v1.65.1


1.55.1

Not secure
What's Changed
* (feat) add `response_time` to StandardLoggingPayload - logged on `datadog`, `gcs_bucket`, `s3_bucket` etc by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7199
* build(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui by dependabot in https://github.com/BerriAI/litellm/pull/7198
* (Feat) DataDog Logger - Add `HOSTNAME` and `POD_NAME` to DataDog logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7189
* (feat) add `error_code`, `error_class`, `llm_provider` to `StandardLoggingPayload` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7200 (see the sketch after this list)
* (docs) Document StandardLoggingPayload Spec by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7201
* fix: Support WebP image format and avoid token calculation error by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7182
* (feat) UI - Disable Usage Tab once SpendLogs is 1M+ Rows by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7208
* (minor fix proxy) Clarify Proxy Rate limit errors are showing hash of litellm virtual key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7210
* (fix) latency fix - revert prompt caching check on litellm router by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7211
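
The `StandardLoggingPayload` additions above (`response_time`, `error_code`, `error_class`, `llm_provider`) can be inspected from a custom callback. A minimal sketch, assuming the payload is exposed to callbacks as `kwargs["standard_logging_object"]` per the spec documented in #7201; the model name is a placeholder and provider credentials are expected in the environment:

```python
# Minimal sketch: read the StandardLoggingPayload from a custom callback.
# Assumption: the payload is available as kwargs["standard_logging_object"];
# error_code / error_class / llm_provider are populated on failure events.
import asyncio
import litellm
from litellm.integrations.custom_logger import CustomLogger


class PayloadInspector(CustomLogger):
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        payload = kwargs.get("standard_logging_object") or {}
        print("response_time:", payload.get("response_time"))


litellm.callbacks = [PayloadInspector()]


async def main():
    await litellm.acompletion(
        model="gpt-4o-mini",  # placeholder; use any configured model
        messages=[{"role": "user", "content": "ping"}],
    )


asyncio.run(main())
```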


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.0...v1.55.1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.1
```
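
Once the container is running, the proxy exposes the OpenAI-compatible API on port 4000 (the `/chat/completions` route shown in the load test below). A minimal smoke test, assuming a model has been added to the proxy and `sk-1234` is a valid virtual key; both values are placeholders:

```python
# Point the standard OpenAI client at the local LiteLLM proxy.
# Assumptions: "gpt-4o-mini" is configured on the proxy and "sk-1234" is a
# valid key; replace both with your own values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)
```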



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 274.17864765330575 | 6.170501674094568 | 0.0 | 1846 | 0 | 212.15181599995958 | 2203.3609819999356 |
| Aggregated | Passed ✅ | 250.0 | 274.17864765330575 | 6.170501674094568 | 0.0 | 1846 | 0 | 212.15181599995958 | 2203.3609819999356 |

1.55.0

Not secure
What's Changed
* Litellm code qa common config by krrishdholakia in https://github.com/BerriAI/litellm/pull/7113
* (Refactor) Code Quality improvement - use Common base handler for Cohere by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7117
* (Refactor) Code Quality improvement - Use Common base handler for `clarifai/` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7125
* (Refactor) Code Quality improvement - Use Common base handler for `cloudflare/` provider by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7127
* (Refactor) Code Quality improvement - Use Common base handler for Cohere /generate API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7122
* (Refactor) Code Quality improvement - Use Common base handler for `anthropic_text/` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7143
* docs: document code quality by krrishdholakia in https://github.com/BerriAI/litellm/pull/7149
* (Refactor) Code Quality improvement - stop redefining LiteLLMBase by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7147
* LiteLLM Common Base LLM Config (pt.2) by krrishdholakia in https://github.com/BerriAI/litellm/pull/7146
* LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config by krrishdholakia in https://github.com/BerriAI/litellm/pull/7148
* refactor(sagemaker/): separate chat + completion routes + make them b… by krrishdholakia in https://github.com/BerriAI/litellm/pull/7151
* rename `llms/OpenAI/` -> `llms/openai/` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7154
* Code Quality improvement - remove symlink to `requirements.txt` from within litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7155
* LiteLLM Common Base LLM Config (pt.4): Move Ollama to Base LLM Config by krrishdholakia in https://github.com/BerriAI/litellm/pull/7157
* Code Quality Improvement - remove `file_apis`, `fine_tuning_apis` from `/llms` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7156
* Revert "LiteLLM Common Base LLM Config (pt.4): Move Ollama to Base LLM Config" by krrishdholakia in https://github.com/BerriAI/litellm/pull/7160
* Litellm ollama refactor by krrishdholakia in https://github.com/BerriAI/litellm/pull/7162
* Litellm vllm refactor by krrishdholakia in https://github.com/BerriAI/litellm/pull/7158
* Litellm merge pr by krrishdholakia in https://github.com/BerriAI/litellm/pull/7161
* Code Quality Improvement - remove `tokenizers/` from /llms by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7163
* build(deps): bump nanoid from 3.3.7 to 3.3.8 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/7159
* (Refactor) Code Quality improvement - remove `/prompt_templates/` , `base_aws_llm.py` from `/llms` folder by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7164
* Code Quality Improvement - use `vertex_ai/` as folder name for vertexAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7166
* Code Quality Improvement - move `aleph_alpha` to deprecated_providers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7168
* (Refactor) Code Quality improvement - rename `text_completion_codestral.py` -> `codestral/completion/` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7172
* (Code Quality) - Add test to enforce all folders in `/llms` are a litellm provider by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7175
* fix(get_supported_openai_params.py): cleanup by krrishdholakia in https://github.com/BerriAI/litellm/pull/7176
* fix(acompletion): support fallbacks on acompletion by krrishdholakia in https://github.com/BerriAI/litellm/pull/7184 (see the sketch below)
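
A minimal sketch of the async fallbacks fix in #7184, assuming `acompletion` accepts the same `fallbacks` kwarg as the sync `completion()` API; all model names are placeholders:

```python
# Sketch: fallbacks on the async completion path.
# Assumption: acompletion mirrors completion()'s `fallbacks` kwarg; the
# fallback models are tried in order if the primary call fails.
import asyncio
import litellm


async def main():
    response = await litellm.acompletion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
        fallbacks=["claude-3-haiku-20240307", "gpt-3.5-turbo"],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```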


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.54.1...v1.55.0



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 286.19507948581224 | 5.886697197840291 | 0.0033409178194326278 | 1762 | 1 | 211.68456200001629 | 3578.4067740000296 |
| Aggregated | Passed ✅ | 250.0 | 286.19507948581224 | 5.886697197840291 | 0.0033409178194326278 | 1762 | 1 | 211.68456200001629 | 3578.4067740000296 |

1.55.0.dev2

What's Changed
* (feat) add `response_time` to StandardLoggingPayload - logged on `datadog`, `gcs_bucket`, `s3_bucket` etc by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7199
* build(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui by dependabot in https://github.com/BerriAI/litellm/pull/7198
* (Feat) DataDog Logger - Add `HOSTNAME` and `POD_NAME` to DataDog logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7189
* (feat) add `error_code`, `error_class`, `llm_provider` to `StandardLoggingPayload` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7200
* (docs) Document StandardLoggingPayload Spec by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7201
* fix: Support WebP image format and avoid token calculation error by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7182
* (feat) UI - Disable Usage Tab once SpendLogs is 1M+ Rows by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7208


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.0...v1.55.0.dev2



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.0.dev2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 236.69042419128075 | 6.133942906309277 | 0.0 | 1835 | 0 | 175.69668400000182 | 4096.7015589999955 |
| Aggregated | Passed ✅ | 210.0 | 236.69042419128075 | 6.133942906309277 | 0.0 | 1835 | 0 | 175.69668400000182 | 4096.7015589999955 |

1.55.0.dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.0...v1.55.0.dev1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.0.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 220.0 | 243.6248328955295 | 6.208881714875033 | 0.0 | 1857 | 0 | 195.87834699996165 | 1936.7717839999727 |
| Aggregated | Passed ✅ | 220.0 | 243.6248328955295 | 6.208881714875033 | 0.0 | 1857 | 0 | 195.87834699996165 | 1936.7717839999727 |

1.54.1

Not secure
What's Changed
* refactor - use consistent file naming convention `AI21/` -> `ai21` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7090
* refactor - use consistent file naming convention AzureOpenAI/ -> azure by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7092
* Litellm dev 12 07 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7086


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.54.0...v1.54.1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.54.1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 280.0 | 340.7890831504466 | 5.986291177372485 | 0.0 | 1788 | 0 | 236.28402200000664 | 4047.592437999981 |
| Aggregated | Failed ❌ | 280.0 | 340.7890831504466 | 5.986291177372485 | 0.0 | 1788 | 0 | 236.28402200000664 | 4047.592437999981 |

1.54.0

Not secure
What's Changed
* (feat) Track `custom_llm_provider` in LiteLLMSpendLogs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7081
* Add MLflow to the side bar by B-Step62 in https://github.com/BerriAI/litellm/pull/7031
* (bug fix) SpendLogs update DB catch all possible DB errors for retrying by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7082
* (Feat) Add StructuredOutputs support for Fireworks.AI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7085 (see the sketch after this list)
* added deepinfra/Meta-Llama-3.1-405B-Instruct to the Model json by AliSayyah in https://github.com/BerriAI/litellm/pull/7084
* (feat) Add created_at and updated_at for LiteLLM_UserTable by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7089
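
A hedged sketch of the Fireworks AI structured-outputs feature from #7085, assuming it is driven through the OpenAI-style `response_format` parameter; the model name, schema, and prompt are illustrative only:

```python
# Sketch: request schema-constrained JSON output from a Fireworks AI model.
# Assumptions: the feature uses the standard `response_format` parameter and
# the model below is available on your Fireworks account.
import litellm

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p1-70b-instruct",
    messages=[{"role": "user", "content": "Name a city and return it as JSON."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "city_info",
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
)
print(response.choices[0].message.content)
```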

New Contributors
* AliSayyah made their first contribution in https://github.com/BerriAI/litellm/pull/7084

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.53.9...v1.54.0



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.54.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 220.2003271503722 | 6.29832230581454 | 0.0 | 1882 | 0 | 179.34225999999853 | 1827.969679000006 |
| Aggregated | Passed ✅ | 200.0 | 220.2003271503722 | 6.29832230581454 | 0.0 | 1882 | 0 | 179.34225999999853 | 1827.969679000006 |
