LiteLLM

Latest version: v1.65.1


1.60.5

Not secure
What's Changed
* Added a guide for users who want to use LiteLLM with AI/ML API. by waterstark in https://github.com/BerriAI/litellm/pull/7058
* Added compatibility guidance, etc. for xAI Grok model by zhaohan-dong in https://github.com/BerriAI/litellm/pull/8282
* (Security fix) - remove code block that inserts master key hash into DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8268
* (UI) - Add Assembly AI provider to UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8297
* (feat) - Add Assembly AI to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8298
* fixed issues 8126 and 8127 (8275) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8299
* (Refactor) - migrate bedrock invoke to `BaseLLMHTTPHandler` class by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8290

New Contributors
* waterstark made their first contribution in https://github.com/BerriAI/litellm/pull/7058

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.4...v1.60.5



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.5
```
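
Once the container is up, the proxy serves an OpenAI-compatible API on the published port. A minimal sketch of calling it with the `openai` Python SDK (the API key and model name below are placeholders for whatever your proxy is configured with):

```python
# a minimal sketch: the LiteLLM proxy speaks the OpenAI API, so the standard
# openai client works against the container started above
import openai

client = openai.OpenAI(
    api_key="sk-1234",                 # placeholder: your proxy master/virtual key
    base_url="http://localhost:4000",  # the port published with -p 4000:4000
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",             # placeholder: any model configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)
```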



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 251.44053604962153 | 6.19421782055854 | 0.0 | 1854 | 0 | 167.35073600000305 | 4496.06190000003 |
| Aggregated | Passed ✅ | 210.0 | 251.44053604962153 | 6.19421782055854 | 0.0 | 1854 | 0 | 167.35073600000305 | 4496.06190000003 |

1.60.5-dev1

What's Changed
* Azure OpenAI improvements - o3 native streaming, improved tool call + response format handling by krrishdholakia in https://github.com/BerriAI/litellm/pull/8292
* Fix edit team on ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/8295
* Improve rpm check on keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/8301
* docs: fix enterprise links by wagnerjt in https://github.com/BerriAI/litellm/pull/8294
* Add gemini-2.0-flash pricing + model info by krrishdholakia in https://github.com/BerriAI/litellm/pull/8303


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.5...v1.60.5-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.5-dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 190.0 | 244.25501266865584 | 6.165773096861878 | 0.006687389475989022 | 1844 | 2 | 149.75326000001132 | 28799.14171600001 |
| Aggregated | Passed ✅ | 190.0 | 244.25501266865584 | 6.165773096861878 | 0.006687389475989022 | 1844 | 2 | 149.75326000001132 | 28799.14171600001 |

1.60.4

Not secure
What's Changed
* Internal User Endpoint - vulnerability fix + response type fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/8228
* Litellm UI fixes 8123 v2 (8208) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8245
* Update model_prices_and_context_window.json by superpoussin22 in https://github.com/BerriAI/litellm/pull/8249
* Update model_prices_and_context_window.json by superpoussin22 in https://github.com/BerriAI/litellm/pull/8256
* (dependency) - pip loosen httpx version requirement by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8255
* Add hyperbolic deepseek v3 model configurations by lowjiansheng in https://github.com/BerriAI/litellm/pull/8232
* fix(prometheus.py): fix setting key budget metrics by krrishdholakia in https://github.com/BerriAI/litellm/pull/8234
* (feat) - add supports tool choice to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8265
* (feat) - track org_id in SpendLogs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8253
* (Bug fix) - Langfuse / Callback settings stored in DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8251
* Fix passing top_k parameter for Bedrock Anthropic models (8131) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8269
* (Feat) - Add support for structured output on `bedrock/nova` models + add util `litellm.supports_tool_choice` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8264 (see the sketch after this list)
* [BETA] Support OIDC `role` based access to proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/8260
* Fix deepseek calling - refactor to use base_llm_http_handler by krrishdholakia in https://github.com/BerriAI/litellm/pull/8266
* allows dynamic message redaction by krrishdholakia in https://github.com/BerriAI/litellm/pull/8270
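
A minimal sketch of the `litellm.supports_tool_choice` util added in PR 8264, assuming it mirrors existing capability helpers such as `litellm.supports_function_calling` (the Nova model id below is illustrative):

```python
# a minimal sketch of litellm.supports_tool_choice from PR 8264; assumed to
# mirror litellm.supports_function_calling and consult the model cost map
import litellm

model = "bedrock/us.amazon.nova-pro-v1:0"  # illustrative model id; substitute your own
if litellm.supports_tool_choice(model=model):
    print(f"{model} accepts the tool_choice parameter")
```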


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.2...v1.60.4



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 243.98647747354212 | 6.187158959524932 | 0.0033407985742575225 | 1852 | 1 | 94.81396500007122 | 3976.009301999966 |
| Aggregated | Passed ✅ | 210.0 | 243.98647747354212 | 6.187158959524932 | 0.0033407985742575225 | 1852 | 1 | 94.81396500007122 | 3976.009301999966 |

1.60.2

Not secure
What's Changed
* Control Model Access by IDP 'groups' by krrishdholakia in https://github.com/BerriAI/litellm/pull/8164
* build(schema.prisma): add new `sso_user_id` to LiteLLM_UserTable by krrishdholakia in https://github.com/BerriAI/litellm/pull/8167
* Litellm dev contributor prs 01 31 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8168
* Improved O3 + Azure O3 support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8181
* test: add more unit testing for team member endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/8170
* Add azure/deepseek-r1 by Klohto in https://github.com/BerriAI/litellm/pull/8177
* [Bug Fix] - `/vertex_ai/` was not detected as llm_api_route on pass through but `vertex-ai` was by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8186
* (UI + SpendLogs) - Store SpendLogs in UTC Timezone, Fix filtering logs by start/end time by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8190
* Azure AI Foundry - Deepseek R1 by elabbarw in https://github.com/BerriAI/litellm/pull/8188
* fix(main.py): fix passing openrouter specific params by krrishdholakia in https://github.com/BerriAI/litellm/pull/8184
* Complete o3 model support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8183
* Easier user onboarding via SSO by krrishdholakia in https://github.com/BerriAI/litellm/pull/8187
* LiteLLM Minor Fixes & Improvements (01/16/2025) - p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7828
* Added deprecation date for gemini-1.5 models by yurchik11 in https://github.com/BerriAI/litellm/pull/8210
* docs: Updating the available VoyageAI models in the docs by fzowl in https://github.com/BerriAI/litellm/pull/8215
* build: ui updates by krrishdholakia in https://github.com/BerriAI/litellm/pull/8206
* Fix tokens for deepseek by SmartManoj in https://github.com/BerriAI/litellm/pull/8207
* (UI Fixes for add new model flow) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8216
* Update xAI provider and fix some old model config by zhaohan-dong in https://github.com/BerriAI/litellm/pull/8218
* Support guardrails `mode` as list, fix valid keys error in pydantic, add more testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/8224
* docs: fix typo in lm_studio.md by foreign-sub in https://github.com/BerriAI/litellm/pull/8222
* (Feat) - New pass through add assembly ai passthrough endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8220
* fix(openai/): allows 'reasoning_effort' param to be passed correctly by krrishdholakia in https://github.com/BerriAI/litellm/pull/8227

New Contributors
* Klohto made their first contribution in https://github.com/BerriAI/litellm/pull/8177
* zhaohan-dong made their first contribution in https://github.com/BerriAI/litellm/pull/8218
* foreign-sub made their first contribution in https://github.com/BerriAI/litellm/pull/8222

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.0...v1.60.2



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 170.0 | 187.78487681207412 | 6.365583292626693 | 0.0 | 1905 | 0 | 135.5453470000043 | 3644.0179759999864 |
| Aggregated | Passed ✅ | 170.0 | 187.78487681207412 | 6.365583292626693 | 0.0 | 1905 | 0 | 135.5453470000043 | 3644.0179759999864 |

1.60.2-dev1

What's Changed
* Internal User Endpoint - vulnerability fix + response type fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/8228
* Litellm UI fixes 8123 v2 (8208) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8245
* Update model_prices_and_context_window.json by superpoussin22 in https://github.com/BerriAI/litellm/pull/8249
* Update model_prices_and_context_window.json by superpoussin22 in https://github.com/BerriAI/litellm/pull/8256
* (dependency) - pip loosen httpx version requirement by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8255
* Add hyperbolic deepseek v3 model configurations by lowjiansheng in https://github.com/BerriAI/litellm/pull/8232
* fix(prometheus.py): fix setting key budget metrics by krrishdholakia in https://github.com/BerriAI/litellm/pull/8234
* (feat) - add supports tool choice to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8265
* (feat) - track org_id in SpendLogs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8253
* (Bug fix) - Langfuse / Callback settings stored in DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8251
* Fix passing top_k parameter for Bedrock Anthropic models (8131) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8269
* (Feat) - Add support for structured output on `bedrock/nova` models + add util `litellm.supports_tool_choice` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8264
* [BETA] Support OIDC `role` based access to proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/8260
* Fix deepseek calling - refactor to use base_llm_http_handler by krrishdholakia in https://github.com/BerriAI/litellm/pull/8266
* allows dynamic message redaction by krrishdholakia in https://github.com/BerriAI/litellm/pull/8270


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.2...v1.60.2-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.2-dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 220.0 | 251.01715871561 | 6.124144736848293 | 0.0 | 1832 | 0 | 171.2837300000274 | 3691.155395999999 |
| Aggregated | Passed ✅ | 220.0 | 251.01715871561 | 6.124144736848293 | 0.0 | 1832 | 0 | 171.2837300000274 | 3691.155395999999 |

1.60.0

Not secure
- `def async_log_stream_event` and `def log_stream_event` are no longer supported for `CustomLogger`s (https://docs.litellm.ai/docs/observability/custom_callback). To log stream events, use `def async_log_success_event` and `def log_success_event` instead.
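
For reference, a minimal sketch of a `CustomLogger` updated for this change, following the custom-callback docs linked above (the print statements are placeholders):

```python
# a minimal sketch of a CustomLogger after this change: stream events are
# logged through the success callbacks instead of the removed stream hooks
import litellm
from litellm.integrations.custom_logger import CustomLogger

class MyHandler(CustomLogger):
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        # fires for sync calls, including completed streams
        print("sync success:", kwargs.get("model"))

    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # fires for async calls, including completed streams
        print("async success:", kwargs.get("model"))

litellm.callbacks = [MyHandler()]
```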


Known Issues
🚨 Detected issue with Langfuse Logging when Langfuse credentials are stored in DB


What's Changed
* Adding gemini-2.0-flash-thinking-exp-01-21 by marcoaleixo in https://github.com/BerriAI/litellm/pull/8089
* add groq/deepseek-r1-distill-llama-70b by miraclebakelaser in https://github.com/BerriAI/litellm/pull/8078
* (UI) Fix SpendLogs page - truncate `bedrock` models + show `end_user` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8118
* UI Fixes - Newly created key does not display on the View Key Page + Updated the validator to allow model editing when `keyTeam.team_alias === "Default Team"` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8122
* (Refactor / QA) - Use `LoggingCallbackManager` to append callbacks and ensure no duplicate callbacks are added by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8112
* (UI) fix adding Vertex Models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8129
* Fix json_mode parameter propagation in OpenAILikeChatHandler by miraclebakelaser in https://github.com/BerriAI/litellm/pull/8133
* Doc updates - add key rotations to docs by krrishdholakia in https://github.com/BerriAI/litellm/pull/8136
* Enforce default_on guardrails always run + expose new `litellm.disable_no_log_param` param by krrishdholakia in https://github.com/BerriAI/litellm/pull/8134 (see the sketch after this list)
* Doc updates + management endpoint fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/8138
* New stable release - release notes by krrishdholakia in https://github.com/BerriAI/litellm/pull/8148
* FEATURE: OpenAI o3-mini by ventz in https://github.com/BerriAI/litellm/pull/8151
* build: fix model cost map with o3 model pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/8153
* (Fixes) OpenAI Streaming Token Counting + Fixes usage track when `litellm.turn_off_message_logging=True` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8156
* (UI) Allow adding custom pricing when adding new model by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8165
* (Feat) add bedrock/deepseek custom import models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8132
* Adding Azure OpenAI o3-mini costs & specs by yigitkonur in https://github.com/BerriAI/litellm/pull/8166
* Adjust model pricing metadata by yurchik11 in https://github.com/BerriAI/litellm/pull/8147
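
A minimal sketch of the `litellm.disable_no_log_param` setting exposed in PR 8134; it is assumed to be a module-level boolean, consistent with litellm's other top-level toggles:

```python
# assumption: disable_no_log_param is a module-level boolean (per PR 8134's title);
# when set, per-request "no-log" opt-outs are ignored so every call is logged
import litellm

litellm.disable_no_log_param = True
```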

New Contributors
* marcoaleixo made their first contribution in https://github.com/BerriAI/litellm/pull/8089
* yigitkonur made their first contribution in https://github.com/BerriAI/litellm/pull/8166

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.10...v1.60.0



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.0
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 281.07272626532927 | 6.158354312051399 | 0.0 | 1843 | 0 | 215.79772499995897 | 3928.489000000013 |
| Aggregated | Passed ✅ | 240.0 | 281.07272626532927 | 6.158354312051399 | 0.0 | 1843 | 0 | 215.79772499995897 | 3928.489000000013 |
