LiteLLM

Latest version: v1.65.1


1.61.7.dev1

What's Changed
* (UI) Allow adding models for a Team (8598) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8601
* (UI) Refactor Add Models for Specific Teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8592
* (UI) Improvements to Add Team Model Flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8603


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.7...v1.61.7.dev1

v1.61.7-nightly
What's Changed
* docs: update README.md API key and model example typos by colesmcintosh in https://github.com/BerriAI/litellm/pull/8590
* Fix typo in main readme by scosman in https://github.com/BerriAI/litellm/pull/8574
* (UI) Allow adding models for a Team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8598
* feat(ui): alert when adding model without STORE_MODEL_IN_DB by Aditya8840 in https://github.com/BerriAI/litellm/pull/8591
* Revert "(UI) Allow adding models for a Team" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8600
* Litellm stable UI 02 17 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8599

New Contributors
* colesmcintosh made their first contribution in https://github.com/BerriAI/litellm/pull/8590
* scosman made their first contribution in https://github.com/BerriAI/litellm/pull/8574
* Aditya8840 made their first contribution in https://github.com/BerriAI/litellm/pull/8591

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.6-nightly...v1.61.7-nightly

1.61.6.dev1

What's Changed
* docs: update README.md API key and model example typos by colesmcintosh in https://github.com/BerriAI/litellm/pull/8590
* Fix typo in main readme by scosman in https://github.com/BerriAI/litellm/pull/8574
* (UI) Allow adding models for a Team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8598
* feat(ui): alert when adding model without STORE_MODEL_IN_DB by Aditya8840 in https://github.com/BerriAI/litellm/pull/8591
* Revert "(UI) Allow adding models for a Team" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8600
* Litellm stable UI 02 17 2025 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8599
* (UI) Allow adding models for a Team (8598) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8601
* (UI) Refactor Add Models for Specific Teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8592
* (UI) Improvements to Add Team Model Flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8603

New Contributors
* colesmcintosh made their first contribution in https://github.com/BerriAI/litellm/pull/8590
* scosman made their first contribution in https://github.com/BerriAI/litellm/pull/8574
* Aditya8840 made their first contribution in https://github.com/BerriAI/litellm/pull/8591

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.6-nightly...v1.61.6.dev1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.6.dev1
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
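
Once the container is up, a quick smoke test is an OpenAI-style request against the proxy. A minimal sketch — the key (`sk-1234`) and model name (`gpt-3.5-turbo`) are placeholders for whatever your proxy config defines:

```
# Smoke-test the proxy's OpenAI-compatible endpoint.
# "sk-1234" and "gpt-3.5-turbo" are placeholders for your own key and model config.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```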

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 170.0 | 197.04136517618934 | 6.316924319787487 | 6.316924319787487 | 1890 | 1890 | 142.7094059999945 | 2646.323271999961 |
| Aggregated | Failed ❌ | 170.0 | 197.04136517618934 | 6.316924319787487 | 6.316924319787487 | 1890 | 1890 | 142.7094059999945 | 2646.323271999961 |

v1.61.6-nightly
What's Changed
* refactor(teams.tsx): refactor to display all teams, across all orgs by krrishdholakia in https://github.com/BerriAI/litellm/pull/8565


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.5-nightly...v1.61.6-nightly



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.6-nightly
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 170.0 | 197.37858561234376 | 6.172709160882249 | 6.172709160882249 | 1847 | 1847 | 139.8097940000298 | 3194.1706680000266 |
| Aggregated | Failed ❌ | 170.0 | 197.37858561234376 | 6.172709160882249 | 6.172709160882249 | 1847 | 1847 | 139.8097940000298 | 3194.1706680000266 |

v1.61.5-nightly
What's Changed
* Optimize Alpine Dockerfile by removing redundant apk commands by PeterDaveHello in https://github.com/BerriAI/litellm/pull/5016
* fix(main.py): fix key leak error when unknown provider given by krrishdholakia in https://github.com/BerriAI/litellm/pull/8556
* (Feat) - return `x-litellm-attempted-fallbacks` in responses from litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8558 (header-inspection sketch after this list)
* Add remaining org CRUD endpoints + support deleting orgs on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8561
* Enable update/delete org members on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8560
* (Bug Fix) - Add Regenerate Key on Virtual Keys Tab by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8567
* (Bug Fix + Better Observability) - BudgetResetJob: for resetting key, team, user budgets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8562
* (Patch/bug fix) - UI, filter out litellm ui session tokens on Virtual Keys Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8568
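
Since `x-litellm-attempted-fallbacks` is returned as a response header, `curl -i` is enough to see whether any fallbacks were attempted. A minimal sketch, assuming a locally running proxy and placeholder credentials:

```
# Print response headers (-i) to inspect x-litellm-attempted-fallbacks.
# Key and model are placeholders for your proxy configuration.
curl -i http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "ping"}]}'
```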

New Contributors
* PeterDaveHello made their first contribution in https://github.com/BerriAI/litellm/pull/5016

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3.dev1...v1.61.5-nightly



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.5-nightly
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 169.92952748954406 | 6.233287189548679 | 6.233287189548679 | 1865 | 1865 | 130.2254270000276 | 1515.568768999998 |
| Aggregated | Failed ❌ | 150.0 | 169.92952748954406 | 6.233287189548679 | 6.233287189548679 | 1865 | 1865 | 130.2254270000276 | 1515.568768999998 |

v1.61.3-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3...v1.61.3-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.61.3-stable
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 150.0 | 178.8079934268893 | 6.237874881624696 | 6.237874881624696 | 1867 | 1867 | 130.88419100000692 | 2746.1132829999997 |
| Aggregated | Failed ❌ | 150.0 | 178.8079934268893 | 6.237874881624696 | 6.237874881624696 | 1867 | 1867 | 130.88419100000692 | 2746.1132829999997 |

v1.61.4-nightly
What's Changed
* docs(perplexity.md): removing `return_citations` documentation by miraclebakelaser in https://github.com/BerriAI/litellm/pull/8527
* (docs - cookbook) litellm proxy x langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8541
* UI Fixes and Improvements (02/14/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8546
* (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8545
* fix(general_settings.tsx): filter out empty dictionaries post fallbac… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8550
* (perf) Fix memory leak on `/completions` route by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8551
* Org Flow Improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/8549
* feat(openai/o_series_transformation.py): support native streaming for o1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8552 (streaming sketch after this list)
* fix(team_endpoints.py): fix team info check to handle team keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/8529
* build: ui build update by krrishdholakia in https://github.com/BerriAI/litellm/pull/8553
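
With native o1 streaming, the standard OpenAI `stream` flag should yield true server-sent events through the proxy rather than a buffered response. A minimal sketch — the model alias and key are placeholders for your deployment:

```
# Hypothetical streaming request; model alias and key are placeholders.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{"model": "o1", "stream": true, "messages": [{"role": "user", "content": "Count to 5"}]}'
```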


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3...v1.61.4-nightly



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.4-nightly
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 190.0 | 216.89425311206062 | 6.2617791082055785 | 6.2617791082055785 | 1874 | 1874 | 143.52555700003222 | 3508.21726800001 |
| Aggregated | Failed ❌ | 190.0 | 216.89425311206062 | 6.2617791082055785 | 6.2617791082055785 | 1874 | 1874 | 143.52555700003222 | 3508.21726800001 |

1.61.3

Flagged by Safety as insecure: this release has known vulnerabilities.
What's Changed
* Improved wildcard route handling on `/models` and `/model_group/info` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8473 (model-listing sketch after this list)
* (Bug fix) - Using `include_usage` for /completions requests + unit testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8484
* add sonar pricings by themrzmaster in https://github.com/BerriAI/litellm/pull/8476
* (bug fix) `PerplexityChatConfig` - track correct OpenAI compatible params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8496
* (fix 2) don't block proxy startup if license check fails & using prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8492
* ci(config.yml): mark daily docker builds with `-nightly` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8499
* (Redis Cluster) - Fixes for using redis cluster + pipeline by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8442
* Litellm UI stable version 02 12 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8497
* fix: fix test by krrishdholakia in https://github.com/BerriAI/litellm/pull/8501
* enables no auth for SMTP by krrishdholakia in https://github.com/BerriAI/litellm/pull/8494
* UI Fixes p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8502
* add phoenix docs for observability integration by exiao in https://github.com/BerriAI/litellm/pull/8522
* Added custom_attributes to additional_keys which can be sent to athina by vivek-athina in https://github.com/BerriAI/litellm/pull/8518
* (UI) fix log details page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8524
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (8475) by tahaali-dev in https://github.com/BerriAI/litellm/pull/8519
* LiteLLM Improvements (02/13/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8523
* fix(utils.py): fix vertex ai optional param handling by krrishdholakia in https://github.com/BerriAI/litellm/pull/8477
* Add 'prediction' param for Azure + Add `gemini-2.0-pro-exp-02-05` vertex ai model to cost map + New `bedrock/deepseek_r1/*` route by krrishdholakia in https://github.com/BerriAI/litellm/pull/8525
* (UI) - Refactor View Key Table by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8526
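
With the improved wildcard handling, models registered under wildcard routes should also appear when listing models. A minimal sketch, assuming a local proxy and a placeholder key:

```
# List the models the proxy exposes; the key is a placeholder
# for your configured master or virtual key.
curl http://localhost:4000/models -H "Authorization: Bearer sk-1234"
```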


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.1...v1.61.3



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.3
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969 |
| Aggregated | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969 |

v1.61.2-nightly
What's Changed
* Improved wildcard route handling on `/models` and `/model_group/info` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8473
* (Bug fix) - Using `include_usage` for /completions requests + unit testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8484
* add sonar pricings by themrzmaster in https://github.com/BerriAI/litellm/pull/8476
* (bug fix) `PerplexityChatConfig` - track correct OpenAI compatible params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8496
* (fix 2) don't block proxy startup if license check fails & using prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8492
* ci(config.yml): mark daily docker builds with `-nightly` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8499
* (Redis Cluster) - Fixes for using redis cluster + pipeline by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8442
* Litellm UI stable version 02 12 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8497
* fix: fix test by krrishdholakia in https://github.com/BerriAI/litellm/pull/8501
* enables no auth for SMTP by krrishdholakia in https://github.com/BerriAI/litellm/pull/8494
* UI Fixes p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8502
* add phoenix docs for observability integration by exiao in https://github.com/BerriAI/litellm/pull/8522
* Added custom_attributes to additional_keys which can be sent to athina by vivek-athina in https://github.com/BerriAI/litellm/pull/8518
* (UI) fix log details page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8524
* Add UI Support for Admins to Call /cache/ping and View Cache Analytics (8475) by tahaali-dev in https://github.com/BerriAI/litellm/pull/8519
* LiteLLM Improvements (02/13/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8523


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.1...v1.61.2-nightly



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.2-nightly
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 180.0 | 216.33586769555853 | 6.245273580245063 | 6.245273580245063 | 1869 | 1869 | 145.7912179999994 | 3665.8740830000056 |
| Aggregated | Failed ❌ | 180.0 | 216.33586769555853 | 6.245273580245063 | 6.245273580245063 | 1869 | 1869 | 145.7912179999994 | 3665.8740830000056 |

1.61.3.dev1

What's Changed
* docs(perplexity.md): removing `return_citations` documentation by miraclebakelaser in https://github.com/BerriAI/litellm/pull/8527
* (docs - cookbook) litellm proxy x langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8541
* UI Fixes and Improvements (02/14/2025) p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8546
* (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8545
* fix(general_settings.tsx): filter out empty dictionaries post fallbac… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8550
* (perf) Fix memory leak on `/completions` route by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8551
* Org Flow Improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/8549
* feat(openai/o_series_transformation.py): support native streaming for o1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8552
* fix(team_endpoints.py): fix team info check to handle team keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/8529
* build: ui build update by krrishdholakia in https://github.com/BerriAI/litellm/pull/8553


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.3...v1.61.3.dev1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.3.dev1
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 110.0 | 135.69110078279184 | 6.367509457796576 | 6.367509457796576 | 1906 | 1906 | 93.52984899999228 | 3754.151283999988 |
| Aggregated | Failed ❌ | 110.0 | 135.69110078279184 | 6.367509457796576 | 6.367509457796576 | 1906 | 1906 | 93.52984899999228 | 3754.151283999988 |

v1.61.3-nightly
What's Changed
* fix(utils.py): fix vertex ai optional param handling by krrishdholakia in https://github.com/BerriAI/litellm/pull/8477
* Add 'prediction' param for Azure + Add `gemini-2.0-pro-exp-02-05` vertex ai model to cost map + New `bedrock/deepseek_r1/*` route by krrishdholakia in https://github.com/BerriAI/litellm/pull/8525
* (UI) - Refactor View Key Table by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8526


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.2-nightly...v1.61.3-nightly

1.61.1

Flagged by Safety as insecure: this release has known vulnerabilities.
What's Changed
* Show Guardrails on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8447
* Log applied guardrails on LLM API call by krrishdholakia in https://github.com/BerriAI/litellm/pull/8452
* Ui Fixes Teams Setting 8347 by tahaali-dev in https://github.com/BerriAI/litellm/pull/8353
* (UI) allow adding model aliases for teams by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8471
* (round 4 fixes) - Team model alias setting by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8474


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.0...v1.61.1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.1
```



Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 160.0 | 180.272351294557 | 6.268555221678184 | 0.0 | 1874 | 0 | 118.979319999994 | 3618.562145999988 |
| Aggregated | Passed βœ… | 160.0 | 180.272351294557 | 6.268555221678184 | 0.0 | 1874 | 0 | 118.979319999994 | 3618.562145999988 |

1.61.1.dev5

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.1.dev1...v1.61.1.dev5



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.1.dev5
```


Don't want to maintain your internal proxy? get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 160.0 | 181.8912512839885 | 6.370642047299698 | 6.370642047299698 | 1905 | 1905 | 129.7774050000271 | 3442.713688000026 |
| Aggregated | Failed ❌ | 160.0 | 181.8912512839885 | 6.370642047299698 | 6.370642047299698 | 1905 | 1905 | 129.7774050000271 | 3442.713688000026 |
