LiteLLM

Latest version: v1.65.1


1.55.10

Not secure
What's Changed
* (Admin UI) - Test Key Tab - Allow typing in `model` name + Add wrapping for text response by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7347
* (Admin UI) - Test Key Tab - Allow using `UI Session` instead of manually creating a virtual key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7348
* (refactor) - fix from enterprise.utils import ui_get_spend_by_tags by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7352
* (chore) - enforce model budgets on virtual keys as enterprise feature by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7353
* (Admin UI) correctly render provider name in /models with wildcard routing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7349
* (Admin UI) - maintain history on chat UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7351
* Litellm enforce enterprise features by krrishdholakia in https://github.com/BerriAI/litellm/pull/7357
* Document team admins + Enforce assigning team admins as an enterprise feature by krrishdholakia in https://github.com/BerriAI/litellm/pull/7359
* Litellm docs update by krrishdholakia in https://github.com/BerriAI/litellm/pull/7365
* Complete 'requests' library removal by krrishdholakia in https://github.com/BerriAI/litellm/pull/7350
* (chore) remove unused code files by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7363
* (security fix) - update base image for all docker images to `python:3.13.1-slim` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7388
* LiteLLM Minor Fixes & Improvements (12/23/2024) - p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7383
* LiteLLM Minor Fixes & Improvements (12/23/2024) - P2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7386
* [Bug Fix]: Errors in LiteLLM When Using Embeddings Model with Usage-Based Routing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7390
* (Feat) Add input_cost_per_token_batches, output_cost_per_token_batches for OpenAI cost tracking Batches API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7391
* (feat) Add basic logging support for `/batches` endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7381
* (feat) Add cost tracking for /batches requests OpenAI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7384
* dd logger fix - handle objects that can't be JSON dumped by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7393


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.9...v1.55.10



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.10
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
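Once the container above is running, the proxy exposes an OpenAI-compatible `/chat/completions` endpoint on the mapped port 4000. A minimal stdlib sketch of such a request (the model alias `gpt-4o` is a placeholder for whatever your proxy config exposes; the actual send is left commented out since it requires the proxy to be up):

```python
import json
import urllib.request

# Placeholder model alias -- substitute whatever your proxy config exposes
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "ping"}],
}

req = urllib.request.Request(
    "http://localhost:4000/chat/completions",  # the port mapped by `-p 4000:4000`
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with the proxy running
print(req.get_full_url())
```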

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 200.0 | 218.24862748744047 | 6.256831142894005 | 0.0 | 1871 | 0 | 177.71721199983403 | 1940.1571020000574 |
| Aggregated | Passed βœ… | 200.0 | 218.24862748744047 | 6.256831142894005 | 0.0 | 1871 | 0 | 177.71721199983403 | 1940.1571020000574 |
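As a sanity check on the table above, the request count and throughput imply the load-test window (a back-of-the-envelope sketch; the duration itself is not stated in the source):

```python
# Figures from the /chat/completions row above
request_count = 1871
requests_per_s = 6.256831142894005

# Implied wall-clock duration of the load test
duration_s = request_count / requests_per_s
print(round(duration_s))  # ~299 seconds, i.e. about a 5-minute run
```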

v1.55.8-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.8...v1.55.8-stable



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.55.8-stable
```




Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 230.0 | 256.12454035233407 | 6.158450406948531 | 0.0 | 1842 | 0 | 207.30311900001652 | 2232.342858000038 |
| Aggregated | Passed βœ… | 230.0 | 256.12454035233407 | 6.158450406948531 | 0.0 | 1842 | 0 | 207.30311900001652 | 2232.342858000038 |

1.55.9

Not secure
What's Changed
* Control fallback prompts client-side by krrishdholakia in https://github.com/BerriAI/litellm/pull/7334
* [Bug fix ]: Triton /infer handler incompatible with batch responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7337
* Litellm dev 12 20 2024 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7339
* Litellm dev 2024 12 20 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7335
* (fix) LiteLLM Proxy fix GET `/files/{file_id:path}/content"` endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7342
* (Bug fix) Azure cost calculation - `dall-e-3` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7343


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.8...v1.55.9



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.9
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 240.0 | 270.2192842992925 | 6.152704904591068 | 0.0 | 1841 | 0 | 213.0105499999786 | 2430.5650640000067 |
| Aggregated | Passed βœ… | 240.0 | 270.2192842992925 | 6.152704904591068 | 0.0 | 1841 | 0 | 213.0105499999786 | 2430.5650640000067 |

1.55.8

Not secure
What's Changed
* fix(proxy_server.py): pass model access groups to get_key/get_team mo… by krrishdholakia in https://github.com/BerriAI/litellm/pull/7281
* Litellm security fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/7282
* Added sambanova cloud models by rodrigo-92 in https://github.com/BerriAI/litellm/pull/7187
* Re-add prompt caching based model filtering (route to previous model) by krrishdholakia in https://github.com/BerriAI/litellm/pull/7299
* (Fix) deprecated Pydantic Config class with model_config BerriAI/li… by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7300
* (feat - proxy) Add `status_code` to `litellm_proxy_total_requests_metric_total` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7293
* fix(hosted_vllm/transformation.py): return fake api key, if none give… by krrishdholakia in https://github.com/BerriAI/litellm/pull/7301
* LiteLLM Minor Fixes & Improvements (2024/12/18) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7295
* (feat proxy) v2 - model max budgets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7302
* (proxy admin ui) - show Teams sorted by `Team Alias` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7296
* (Refactor) use separate file for track_cost_callback by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7304
* o1 - add image param handling by krrishdholakia in https://github.com/BerriAI/litellm/pull/7312
* (code quality) run ruff rule to ban unused imports by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7313
* [Bug Fix]: ImportError: cannot import name 'T' from 're' by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7314
* (code refactor) - Add `BaseRerankConfig`. Use `BaseRerankConfig` for `cohere/rerank` and `azure_ai/rerank` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7319
* (feat) add infinity rerank models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7321
* Litellm dev 12 19 2024 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7315
* Langfuse Prompt Management Support by krrishdholakia in https://github.com/BerriAI/litellm/pull/7322
* Fix LiteLLM Fireworks AI Documentation by jravi-fireworks in https://github.com/BerriAI/litellm/pull/7333

New Contributors
* rodrigo-92 made their first contribution in https://github.com/BerriAI/litellm/pull/7187
* jravi-fireworks made their first contribution in https://github.com/BerriAI/litellm/pull/7333

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.4...v1.55.8



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.8
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 220.0 | 237.6551034099362 | 6.125601230624555 | 0.0 | 1832 | 0 | 193.92061900009594 | 1182.1513959999947 |
| Aggregated | Passed βœ… | 220.0 | 237.6551034099362 | 6.125601230624555 | 0.0 | 1832 | 0 | 193.92061900009594 | 1182.1513959999947 |

1.55.4

Not secure
What's Changed
* (feat) Add Azure Blob Storage Logging Integration by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7265
* (feat) Add Bedrock knowledge base pass through endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7267
* docs(input.md): document 'extra_headers' param support by krrishdholakia in https://github.com/BerriAI/litellm/pull/7268
* fix(utils.py): fix openai-like api response format parsing by krrishdholakia in https://github.com/BerriAI/litellm/pull/7273
* LITELLM: Remove `requests` library usage by krrishdholakia in https://github.com/BerriAI/litellm/pull/7235
* Litellm dev 12 17 2024 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7277
* Litellm dev 12 17 2024 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7279
* LiteLLM Minor Fixes & Improvements (12/16/2024) - p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7263
* Add Azure Llama 3.3 by emerzon in https://github.com/BerriAI/litellm/pull/7283
* (feat) proxy Azure Blob Storage - Add support for `AZURE_STORAGE_ACCOUNT_KEY` Auth by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7280
* Correct max_tokens on Model DB by emerzon in https://github.com/BerriAI/litellm/pull/7284
* (fix) unable to pass input_type parameter to Voyage AI embedding mode by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7276


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.3...v1.55.4



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.4
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 260.0 | 295.9831253378703 | 6.145780132592515 | 0.0 | 1838 | 0 | 220.05146400010744 | 2923.6937059999946 |
| Aggregated | Passed βœ… | 260.0 | 295.9831253378703 | 6.145780132592515 | 0.0 | 1838 | 0 | 220.05146400010744 | 2923.6937059999946 |

1.55.3

Not secure
What's Changed
* LiteLLM Minor Fixes & Improvements (12/13/2024) pt.1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7219
* (feat - Router / Proxy ) Allow setting budget limits per LLM deployment by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7220
* build(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui/litellm-dashboard by dependabot in https://github.com/BerriAI/litellm/pull/7216
* Litellm add router to base llm testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7202
* fix(main.py): fix retries being multiplied when using openai sdk by krrishdholakia in https://github.com/BerriAI/litellm/pull/7221
* (proxy) - Auth fix, ensure re-using safe request body for checking `model` field by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7222
* (UI fix) - Allow editing Key Metadata by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7230
* (UI) Fix Usage Tab - Don't make expensive UI queries after SpendLogs crosses 1M Rows by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7229
* (code quality) Add ruff check to ban `print` in repo by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7233
* (UI QA) - stop making expensive UI queries when 1M + spendLogs in DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7234
* Fix vllm import by ivanvykopal in https://github.com/BerriAI/litellm/pull/7224
* Add new Gemini 2.0 Flash model to Vertex AI. by Manouchehri in https://github.com/BerriAI/litellm/pull/7193
* Litellm remove circular imports by krrishdholakia in https://github.com/BerriAI/litellm/pull/7232
* (feat) Add Tag-based budgets on litellm router / proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7236
* Litellm dev 12 14 2024 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7231

New Contributors
* ivanvykopal made their first contribution in https://github.com/BerriAI/litellm/pull/7224

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.2...v1.55.3



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.3
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 281.1265187306242 | 6.1657490001280255 | 0.0033418693767631575 | 1845 | 1 | 119.36488499998177 | 3755.8482019999815 |
| Aggregated | Passed βœ… | 250.0 | 281.1265187306242 | 6.1657490001280255 | 0.0033418693767631575 | 1845 | 1 | 119.36488499998177 | 3755.8482019999815 |

v1.55.1-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.1...v1.55.1-stable



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_dec_14-stable
```


`litellm-database` image

```shell
ghcr.io/berriai/litellm-database:litellm_stable_dec_14-stable
```

`litellm-non-root` image

```shell
ghcr.io/berriai/litellm-non_root:litellm_stable_dec_14-stable
```





Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 200.0 | 217.72878062246997 | 6.2754597178458145 | 0.0033415653449658223 | 1878 | 1 | 76.6410740000083 | 1257.3869729999956 |
| Aggregated | Passed βœ… | 200.0 | 217.72878062246997 | 6.2754597178458145 | 0.0033415653449658223 | 1878 | 1 | 76.6410740000083 | 1257.3869729999956 |

1.55.2

Not secure
What's Changed
* Litellm dev 12 12 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7203
* Litellm dev 12 11 2024 v2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7215


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.1...v1.55.2



Docker Run LiteLLM Proxy


```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.2
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 282.51255728779716 | 6.192691226975396 | 0.0 | 1852 | 0 | 223.9336790000266 | 3178.0424589999257 |
| Aggregated | Passed βœ… | 250.0 | 282.51255728779716 | 6.192691226975396 | 0.0 | 1852 | 0 | 223.9336790000266 | 3178.0424589999257 |
