LiteLLM Release Notes


v1.58.2-dev2

What's Changed
* build(pyproject.toml): bump uvicorn dependency requirement + Azure o1 model check fix + Vertex Anthropic headers fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/7773
* Add `gemini/` frequency_penalty + presence_penalty support by krrishdholakia in https://github.com/BerriAI/litellm/pull/7776 (see the sketch after this list)
* feat(helm): add securityContext and pull policy values to migration job by Hexoplon in https://github.com/BerriAI/litellm/pull/7652
* fix confusing save button label by yujonglee in https://github.com/BerriAI/litellm/pull/7778
* [integrations/lunary] Improve Lunary documentation by hughcrt in https://github.com/BerriAI/litellm/pull/7770
* Fix wrong URL for internal user invitation by yujonglee in https://github.com/BerriAI/litellm/pull/7762
* Update instructor tutorial by Winston-503 in https://github.com/BerriAI/litellm/pull/7784
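
The `gemini/` penalty support from #7776 maps the OpenAI-style parameters through `litellm.completion`. A minimal sketch, assuming `GEMINI_API_KEY` is set in the environment; the model name is a placeholder for any `gemini/` chat model:

```python
import litellm

# Placeholder model; any gemini/* chat model should accept these parameters.
response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "Name three HTTP proxies."}],
    frequency_penalty=0.5,  # penalize tokens by how often they already appeared
    presence_penalty=0.3,   # penalize tokens that appeared at all
)
print(response.choices[0].message.content)
```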

New Contributors
* Winston-503 made their first contribution in https://github.com/BerriAI/litellm/pull/7784

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.58.2...v1.58.2-dev2



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.2-dev2
```
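
Once the container is running, the proxy exposes an OpenAI-compatible API on port 4000, including the `/chat/completions` route exercised in the load test below. A minimal sketch using the `openai` client; the API key and model name are placeholders that depend on your proxy configuration:

```python
from openai import OpenAI

# Placeholders: api_key must be a key your proxy accepts (e.g. its master key),
# and the model must be one registered on the proxy.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```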



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 200.0 | 215.31102581586012 | 6.152564490107213 | 0.0 | 1841 | 0 | 176.80144700000255 | 3405.0107850000018 |
| Aggregated | Passed βœ… | 200.0 | 215.31102581586012 | 6.152564490107213 | 0.0 | 1841 | 0 | 176.80144700000255 | 3405.0107850000018 |

v1.58.2-dev1

What's Changed
* build(pyproject.toml): bump uvicorn dependency requirement + Azure o1 model check fix + Vertex Anthropic headers fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/7773
* Add `gemini/` frequency_penalty + presence_penalty support by krrishdholakia in https://github.com/BerriAI/litellm/pull/7776
* Add back in non root image fixes by rajatvig in https://github.com/BerriAI/litellm/pull/7781


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.58.2...v1.58.2-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.2-dev1
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 302.10681472219204 | 6.039948987754746 | 0.0 | 1807 | 0 | 228.94537999997056 | 4199.834433000035 |
| Aggregated | Failed ❌ | 270.0 | 302.10681472219204 | 6.039948987754746 | 0.0 | 1807 | 0 | 228.94537999997056 | 4199.834433000035 |

v1.58.2.dev1

What's Changed
* build(pyproject.toml): bump uvicorn dependency requirement + Azure o1 model check fix + Vertex Anthropic headers fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/7773
* Add `gemini/` frequency_penalty + presence_penalty support by krrishdholakia in https://github.com/BerriAI/litellm/pull/7776
* feat(helm): add securityContext and pull policy values to migration job by Hexoplon in https://github.com/BerriAI/litellm/pull/7652
* fix confusing save button label by yujonglee in https://github.com/BerriAI/litellm/pull/7778
* [integrations/lunary] Improve Lunary documentation by hughcrt in https://github.com/BerriAI/litellm/pull/7770
* Fix wrong URL for internal user invitation by yujonglee in https://github.com/BerriAI/litellm/pull/7762
* Update instructor tutorial by Winston-503 in https://github.com/BerriAI/litellm/pull/7784
* (helm) - allow specifying envVars on values.yaml + add helm lint test by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7789
* Fix anthropic pass-through end user tracking + add gemini-2.0-flash-thinking-exp by krrishdholakia in https://github.com/BerriAI/litellm/pull/7772
* Add back in non root image fixes (7781) by krrishdholakia in https://github.com/BerriAI/litellm/pull/7795
* test: initial test to enforce all functions in user_api_key_auth.py h… by krrishdholakia in https://github.com/BerriAI/litellm/pull/7797
* test: initial commit enforcing testing on all anthropic pass through … by krrishdholakia in https://github.com/BerriAI/litellm/pull/7794
* build: bump certifi version - see if that fixes asyncio ssl issue on … by krrishdholakia in https://github.com/BerriAI/litellm/pull/7800

New Contributors
* Winston-503 made their first contribution in https://github.com/BerriAI/litellm/pull/7784

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.58.2...v1.58.2.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.2.dev1
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 279.05652153373586 | 6.093771343336731 | 0.0 | 1823 | 0 | 214.89994900002785 | 2653.5651230000212 |
| Aggregated | Passed βœ… | 250.0 | 279.05652153373586 | 6.093771343336731 | 0.0 | 1823 | 0 | 214.89994900002785 | 2653.5651230000212 |

v1.58.1

Not secure: Safety reports known vulnerabilities in this release.
🚨 Alpha: 1.58.0 has various perf improvements; we recommend waiting for a stable release before upgrading production deployments.

What's Changed
* (core sdk fix) - fix fallbacks stuck in infinite loop by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7751
* [Bug fix]: v1.58.0 - issue with read request body by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7753
* (litellm SDK perf improvements) - handle cases when unable to lookup model in model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7750
* (prometheus - minor bug fix) - `litellm_llm_api_time_to_first_token_metric` not populating for bedrock models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7740
* (fix) health check - allow setting `health_check_model` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7752 (see the sketch after this list)
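
For the health-check change in #7752, the proxy exposes a `/health` endpoint that probes each configured deployment; where exactly `health_check_model` is set in the config is best taken from the PR itself. A minimal sketch of calling the endpoint, with a placeholder key:

```python
import requests

# Placeholder key; /health is an authenticated proxy endpoint.
resp = requests.get(
    "http://localhost:4000/health",
    headers={"Authorization": "Bearer sk-1234"},
)
print(resp.json())  # healthy/unhealthy status per deployment
```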


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.58.0...v1.58.1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.1
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 294.2978673554448 | 6.045420383532543 | 0.0 | 1809 | 0 | 223.72276400000146 | 3539.4181890000027 |
| Aggregated | Passed βœ… | 250.0 | 294.2978673554448 | 6.045420383532543 | 0.0 | 1809 | 0 | 223.72276400000146 | 3539.4181890000027 |

v1.58.0

Not secure: Safety reports known vulnerabilities in this release.
🚨 This is an alpha release: we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

What's Changed
* (proxy perf) - service logger don't always import OTEL in helper function by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7727
* (proxy perf) - only read request body 1 time per request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7728


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.11...v1.58.0



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.58.0
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 240.0 | 273.2166563012582 | 6.118315985413586 | 0.0033451700302972037 | 1829 | 1 | 75.1692759999969 | 3821.228761000043 |
| Aggregated | Passed βœ… | 240.0 | 273.2166563012582 | 6.118315985413586 | 0.0033451700302972037 | 1829 | 1 | 75.1692759999969 | 3821.228761000043 |

v1.57.11

Not secure: Safety reports known vulnerabilities in this release.
🚨 This is an alpha release: we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

What's Changed
* (litellm SDK perf improvement) - use `verbose_logger.debug` and `_cached_get_model_info_helper` in `_response_cost_calculator` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7720 (see the cost-calculation sketch after this list)
* (litellm sdk speedup) - use `_model_contains_known_llm_provider` in `response_cost_calculator` to check if the model contains a known litellm provider by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7721
* (proxy perf) - only parse request body 1 time per request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7722
* Revert "(proxy perf) - only parse request body 1 time per request" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7724
* add azure o1 pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/7715
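
The two SDK perf items above sit on litellm's cost-calculation path; the public entry point for that path is `litellm.completion_cost`. A minimal sketch, assuming `OPENAI_API_KEY` is set and the model exists in litellm's cost map:

```python
import litellm

response = litellm.completion(
    model="gpt-4o-mini",  # placeholder; any model in the cost map works
    messages=[{"role": "user", "content": "Hello"}],
)

# Looks the model up in litellm's model cost map; the PRs above speed up and
# harden this lookup (cached model info, known-provider checks).
cost = litellm.completion_cost(completion_response=response)
print(f"cost: ${cost:.6f}")
```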


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.10...v1.57.11



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.57.11
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 240.0 | 270.55759577820237 | 6.130862160194138 | 0.0 | 1835 | 0 | 224.79750500002638 | 1207.8732939999952 |
| Aggregated | Passed βœ… | 240.0 | 270.55759577820237 | 6.130862160194138 | 0.0 | 1835 | 0 | 224.79750500002638 | 1207.8732939999952 |

v1.57.8-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.8...v1.57.8-stable

🚨 Not stable - we got alerts about bugs on `text-embedding-3` and investigated the root cause.
βœ… Resolved - this was not a litellm issue; it was caused by `dd-trace-run` patching the OpenAI SDK (https://github.com/DataDog/dd-trace-py/issues/11994).

You are safe to upgrade to this version if you do not run `dd-trace-run` in front of litellm.

Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.57.8-stable
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 240.0 | 271.08706884006597 | 6.1244865014274685 | 0.0 | 1832 | 0 | 221.9753340000068 | 2009.652516000017 |
| Aggregated | Passed βœ… | 240.0 | 271.08706884006597 | 6.1244865014274685 | 0.0 | 1832 | 0 | 221.9753340000068 | 2009.652516000017 |

