LiteLLM

Latest version: v1.65.1


1.57.10

Not secure
🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

* Litellm dev 01 10 2025 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7679
* Litellm dev 01 10 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7682
* build: new ui build by krrishdholakia in https://github.com/BerriAI/litellm/pull/7685
* fix(model_hub.tsx): clarify cost in model hub is per 1m tokens by krrishdholakia in https://github.com/BerriAI/litellm/pull/7687
* Litellm dev 01 11 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7702
* (perf litellm) - use `_get_model_info_helper` for cost tracking by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7703
* (perf sdk) - minor changes to cost calculator to run helpers only when necessary by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7704
* (perf) - proxy, use `orjson` for reading request body by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7706
* (minor fix - `aiohttp_openai/`) - fix get_custom_llm_provider by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7705
* (sdk perf fix) - only print args passed to litellm when debugging mode is on by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7708
* (perf) - only use response_cost_calculator 1 time per request. (Don't re-use the same helper twice per call ) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7709
* [BETA] Add OpenAI `/images/variations` + Topaz API support by krrishdholakia in https://github.com/BerriAI/litellm/pull/7700
* (litellm sdk speedup router) - adds a helper `_cached_get_model_group_info` to use when trying to get deployment tpm/rpm limits by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7719
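
Several of the entries above are hot-path micro-optimizations. As a rough illustration of the `orjson` change (#7706), a request body can be parsed straight from bytes with `orjson` instead of the stdlib `json` module - a minimal sketch only, not LiteLLM's actual code; the helper name and FastAPI `Request` usage here are assumptions:

```python
# Illustrative sketch of parsing a proxy request body with orjson.
# Not LiteLLM's implementation: the function name and FastAPI usage are assumptions.
import orjson
from fastapi import Request


async def read_request_body(request: Request) -> dict:
    raw = await request.body()   # raw bytes from the ASGI request
    if not raw:
        return {}
    return orjson.loads(raw)     # orjson parses bytes directly, no .decode() needed
```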


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.8...v1.57.10



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.10
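
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal sketch of calling it with the OpenAI Python SDK (the model name and API key below are placeholders for whatever you have configured on your proxy):

```python
# Point the OpenAI SDK at the locally running LiteLLM proxy.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-anything")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model configured on your proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```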



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 264.0629029362514 | 6.184926091214754 | 0.0 | 1851 | 0 | 213.62108399998192 | 1622.618584999998 |
| Aggregated | Passed ✅ | 240.0 | 264.0629029362514 | 6.184926091214754 | 0.0 | 1851 | 0 | 213.62108399998192 | 1622.618584999998 |

1.57.8

Not secure
What's Changed
* (proxy latency/perf fix - user_api_key_auth) - use asyncio.create task for caching virtual key once it's validated by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7676
* (litellm sdk - perf improvement) - optimize `response_cost_calculator` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7674
* (litellm sdk - perf improvement) - use O(1) set lookups for checking llm providers / models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7672
* (litellm sdk - perf improvement) - optimize `pre_call_check` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7673
* [integrations/lunary] allow to pass custom parent run id to LLM calls by hughcrt in https://github.com/BerriAI/litellm/pull/7651
* LiteLLM Minor Fixes & Improvements (01/10/2025) - p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7670
* (performance improvement - litellm sdk + proxy) - ensure litellm does not create unnecessary threads when running async functions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7680
* (litellm proxy perf) - pass num_workers cli arg to uvicorn when `num_workers` is specified by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7681
* fix proxy pre call hook - only use `asyncio.create_task` if user opts into alerting by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7683
* [Bug fix]: Proxy Auth Layer - Allow Azure Realtime routes as llm_api_routes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7684
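
The `user_api_key_auth` change (#7676) is an instance of a general pattern: do the expensive validation once, then schedule the cache write as a background task so the request is not held up by the cache round-trip. A minimal sketch of that pattern, assuming generic `cache` and `validate` objects (not LiteLLM's actual auth code):

```python
# Fire-and-forget cache write after key validation - illustrative pattern only.
import asyncio


async def authenticate(api_key: str, cache, validate) -> dict:
    key_obj = await validate(api_key)  # hit the auth backend / DB once
    # Schedule the cache write without awaiting it, so the response
    # is returned before the cache round-trip completes.
    asyncio.create_task(cache.async_set(api_key, key_obj))
    return key_obj
```

One caveat of fire-and-forget tasks is that the caller should keep a reference to them (or accept that they may be cancelled at shutdown), since the event loop only holds weak references to tasks.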


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.7...v1.57.8



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.8



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 225.29799695056985 | 6.153370698253471 | 0.0 | 1841 | 0 | 177.73327700001573 | 2088.13791099999 |
| Aggregated | Passed ✅ | 210.0 | 225.29799695056985 | 6.153370698253471 | 0.0 | 1841 | 0 | 177.73327700001573 | 2088.13791099999 |

1.57.7

Not secure
What's Changed
* (minor latency fixes / proxy) - use verbose_proxy_logger.debug() instead of litellm.print_verbose by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7664
* feat(ui_sso.py): Allows users to use test key pane, and have team budget limits be enforced for their use-case by krrishdholakia in https://github.com/BerriAI/litellm/pull/7666
* fix(main.py): fix lm_studio/ embedding routing by krrishdholakia in https://github.com/BerriAI/litellm/pull/7658
* fix(vertex_ai/gemini/transformation.py): handle 'http://' in gemini p… by krrishdholakia in https://github.com/BerriAI/litellm/pull/7660
* Use environment variable for Athina logging URL by vivek-athina in https://github.com/BerriAI/litellm/pull/7628
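
The first entry above (#7664) swaps an unconditional print helper for a leveled logger, so debug output (and its string formatting) costs almost nothing when DEBUG is not enabled. A minimal sketch of the idea; the logger name here is an assumption for illustration:

```python
# Route debug output through a leveled logger instead of printing unconditionally.
import logging

verbose_proxy_logger = logging.getLogger("litellm.proxy")


def handle_request(payload: dict) -> None:
    # Lazy %-style formatting: the message is only built if DEBUG is enabled.
    verbose_proxy_logger.debug("received payload: %s", payload)
```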


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.5...v1.57.7



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.7



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 218.4749677188173 | 6.216185012755876 | 0.0 | 1860 | 0 | 177.92223199990076 | 3911.6109139999935 |
| Aggregated | Passed ✅ | 200.0 | 218.4749677188173 | 6.216185012755876 | 0.0 | 1860 | 0 | 177.92223199990076 | 3911.6109139999935 |

1.57.5

Not secure
🚨🚨 Known issue - do not upgrade - Windows compatibility issue on this release

Relevant issue: https://github.com/BerriAI/litellm/issues/7677

What's Changed
* LiteLLM Minor Fixes & Improvements (01/08/2025) - p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7643
* Litellm dev 01 08 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7640
* (proxy - RPS) - Get 2K RPS at 4 instances, minor fix for caching_handler by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7655
* (proxy - RPS) - Get 2K RPS at 4 instances, minor fix `aiohttp_openai/` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7659
* (proxy perf improvement) - use `uvloop` for higher RPS (10%-20% higher RPS) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7662
* (Feat - Batches API) add support for retrieving vertex api batch jobs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7661
* (proxy-latency fixes) use asyncio tasks for logging db metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7663
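
The `uvloop` change (#7662) swaps asyncio's default event loop for uvloop, which the PR title reports is worth roughly 10-20% more RPS. A minimal sketch of the general technique follows (how LiteLLM wires it into uvicorn is not shown here; uvicorn can also be pointed at uvloop via its `loop` setting):

```python
# Run asyncio code on uvloop by installing its event loop policy.
import asyncio

import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())


async def main() -> None:
    await asyncio.sleep(0)  # your async server / workload runs here


if __name__ == "__main__":
    asyncio.run(main())  # runs on uvloop via the policy set above
```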


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.4...v1.57.5



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.5



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 282.70225500655766 | 6.115771768544881 | 0.0 | 1830 | 0 | 206.44150200001832 | 3375.4479410000044 |
| Aggregated | Passed ✅ | 230.0 | 282.70225500655766 | 6.115771768544881 | 0.0 | 1830 | 0 | 206.44150200001832 | 3375.4479410000044 |

1.57.4

Not secure
What's Changed
* fix(utils.py): fix select tokenizer for custom tokenizer by krrishdholakia in https://github.com/BerriAI/litellm/pull/7599
* LiteLLM Minor Fixes & Improvements (01/07/2025) - p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7635
* (feat) - allow building litellm proxy from pip package by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7633
* Litellm dev 01 07 2025 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7622
* Allow assigning teams to org on UI + OpenAI `omni-moderation` cost model tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/7566
* (fix) proxy auth - allow using Azure JS SDK routes as llm_api_routes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7631
* (helm) - bug fix - allow using `migrationJob.enabled` variable within job by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7639


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.3...v1.57.4



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.4



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 218.7550845980808 | 6.268875045928877 | 0.0 | 1876 | 0 | 170.9488330000113 | 1424.4913769999812 |
| Aggregated | Passed ✅ | 200.0 | 218.7550845980808 | 6.268875045928877 | 0.0 | 1876 | 0 | 170.9488330000113 | 1424.4913769999812 |

1.57.3

Not secure
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.57.2...v1.57.3



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.3



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 273.577669278204 | 6.101109800829093 | 0.0 | 1826 | 0 | 209.38834100002168 | 2450.7287210000186 |
| Aggregated | Passed ✅ | 240.0 | 273.577669278204 | 6.101109800829093 | 0.0 | 1826 | 0 | 209.38834100002168 | 2450.7287210000186 |
