LiteLLM

Latest version: v1.52.14


1.42.11

Not secure
What's Changed
* refactor(openai/azure.py): move to returning openai/azure response headers by default by krrishdholakia in https://github.com/BerriAI/litellm/pull/5020


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.10...v1.42.11



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.11
```
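Once the container is up, the proxy speaks the OpenAI API on the published port, so any OpenAI client can point at it. A minimal sketch with the OpenAI Python SDK; the model name and key are placeholders that depend on your proxy configuration:

```python
from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM proxy started above.
client = OpenAI(
    base_url="http://localhost:4000",  # port published by `docker run -p 4000:4000`
    api_key="sk-1234",                 # placeholder: your LiteLLM proxy / virtual key
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder: any model configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)
```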



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 133.03325972004214 | 6.385494007425846 | 0.0 | 1911 | 0 | 85.22510000000239 | 2206.01549700001 |
| Aggregated | Passed ✅ | 100.0 | 133.03325972004214 | 6.385494007425846 | 0.0 | 1911 | 0 | 85.22510000000239 | 2206.01549700001 |

v1.42.10-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.9-stable...v1.42.10-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.10-stable
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 129.54900722873379 | 6.443681428122125 | 0.0 | 1928 | 0 | 84.65257200003862 | 2099.9838429999613 |
| Aggregated | Passed ✅ | 100.0 | 129.54900722873379 | 6.443681428122125 | 0.0 | 1928 | 0 | 84.65257200003862 | 2099.9838429999613 |

1.42.10

Not secure
What's Changed
* feat(vertex_ai_partner.py): add vertex ai codestral FIM support by krrishdholakia in https://github.com/BerriAI/litellm/pull/5004 (see the sketch after this list)
* fix(litellm_logging.py): Fix azure base model cost calc in response headers by krrishdholakia in https://github.com/BerriAI/litellm/pull/4996
* fix(utils.py): Add streaming token usage in hidden params by krrishdholakia in https://github.com/BerriAI/litellm/pull/5001
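The Codestral FIM entry above adds fill-in-the-middle completions on Vertex AI. A hedged sketch via `litellm.text_completion`; the exact Vertex model string is an assumption, not something this changelog pins down:

```python
import litellm

# Hedged sketch: FIM sends a prompt plus a suffix and asks the model to fill the gap.
# "vertex_ai/codestral@2405" is an assumed model string; check the LiteLLM docs for
# the published name.
response = litellm.text_completion(
    model="vertex_ai/codestral@2405",
    prompt="def fibonacci(n):\n    ",
    suffix="\n    return fibonacci(n - 1) + fibonacci(n - 2)",
    max_tokens=64,
)
print(response.choices[0].text)
```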


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.9...v1.42.10



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.10
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 160.0 | 175.53988997779015 | 6.3179372749243035 | 0.0 | 1891 | 0 | 125.14354199998934 | 1485.7802660000061 |
| Aggregated | Passed ✅ | 160.0 | 175.53988997779015 | 6.3179372749243035 | 0.0 | 1891 | 0 | 125.14354199998934 | 1485.7802660000061 |

1.42.9.dev1

What's Changed
* fix: support vertex filepath on proxy litellm_params definition by ec2ainun in https://github.com/BerriAI/litellm/pull/4989
* [Fix-OTEL Proxy] Only forward traceparent to llm api when `litellm.forward_traceparent_to_llm_provider=True` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4995
* [Fix-Proxy] Log attributes on failed LLM calls by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4997
* [Enterprise Proxy Feature] - Log to GCS Bucket ✨⚡️ by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4999
* Update helm chart to 0.2.2 by lowjiansheng in https://github.com/BerriAI/litellm/pull/4992
* Add `databricks/databricks-meta-llama-3-1-70b-instruct`, `databricks/databricks-meta-llama-3-1-405b-instruct` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5003 (see the sketch after this list)
* Add new model for gemini-1.5-pro-exp-0801. by Manouchehri in https://github.com/BerriAI/litellm/pull/5002
* feat(vertex_ai_partner.py): add vertex ai codestral FIM support by krrishdholakia in https://github.com/BerriAI/litellm/pull/5004
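For the Databricks model strings added above, a hedged sketch of calling one of them through the LiteLLM SDK; the environment variable names follow LiteLLM's usual provider conventions and are assumptions here, as is the way the OTEL flag from this list is toggled:

```python
import os
import litellm

# Only forward the OTEL traceparent header to the provider when explicitly enabled
# (flag name taken from the changelog entry above).
litellm.forward_traceparent_to_llm_provider = True

# Assumed env var names for Databricks credentials.
os.environ["DATABRICKS_API_KEY"] = "dapi-..."
os.environ["DATABRICKS_API_BASE"] = "https://<workspace>.cloud.databricks.com/serving-endpoints"

response = litellm.completion(
    model="databricks/databricks-meta-llama-3-1-70b-instruct",
    messages=[{"role": "user", "content": "Summarize this release in one sentence."}],
)
print(response.choices[0].message.content)
```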

New Contributors
* ec2ainun made their first contribution in https://github.com/BerriAI/litellm/pull/4989

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.8...v1.42.9.dev1



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.9.dev1
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 151.23181146121527 | 6.377240371349092 | 0.0 | 1908 | 0 | 100.50357199997961 | 2558.7362329999905 |
| Aggregated | Passed ✅ | 130.0 | 151.23181146121527 | 6.377240371349092 | 0.0 | 1908 | 0 | 100.50357199997961 | 2558.7362329999905 |

v1.42.9-stable-fix
What's Changed
* fix: support vertex filepath on proxy litellm_params definition by ec2ainun in https://github.com/BerriAI/litellm/pull/4989
* [Fix-OTEL Proxy] Only forward traceparent to llm api when `litellm.forward_traceparent_to_llm_provider=True` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4995
* [Fix-Proxy] Log attributes on failed LLM calls by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4997
* [Enterprise Proxy Feature] - Log to GCS Bucket ✨⚡️ by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4999
* Update helm chart to 0.2.2 by lowjiansheng in https://github.com/BerriAI/litellm/pull/4992
* Add `databricks/databricks-meta-llama-3-1-70b-instruct`, `databricks/databricks-meta-llama-3-1-405b-instruct` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5003
* Add new model for gemini-1.5-pro-exp-0801. by Manouchehri in https://github.com/BerriAI/litellm/pull/5002

New Contributors
* ec2ainun made their first contribution in https://github.com/BerriAI/litellm/pull/4989

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.8...v1.42.9-stable-fix



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.9-stable-fix
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 153.81077638915255 | 6.350367162336094 | 0.0 | 1899 | 0 | 97.45739499999218 | 19724.838319000013 |
| Aggregated | Passed ✅ | 120.0 | 153.81077638915255 | 6.350367162336094 | 0.0 | 1899 | 0 | 97.45739499999218 | 19724.838319000013 |

v1.42.9-stable
What's Changed
* fix(litellm_logging.py): Fix azure base model cost calc in response headers by krrishdholakia in https://github.com/BerriAI/litellm/pull/4996
* fix(utils.py): Add streaming token usage in hidden params by krrishdholakia in https://github.com/BerriAI/litellm/pull/5001
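A hedged sketch of where that metadata can be read back in the SDK; the `_hidden_params` attribute and the `response_cost` key are assumptions about how LiteLLM surfaces extra response metadata, not something spelled out in this changelog:

```python
import litellm

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # placeholder Azure deployment name
    messages=[{"role": "user", "content": "Hello"}],
)

# Assumed attribute/key names: LiteLLM attaches extra metadata, including its
# cost calculation, to a `_hidden_params` dict on the response object.
print(response._hidden_params.get("response_cost"))
```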


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.9.dev1...v1.42.9-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.9-stable
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 85 | 98.97799238117081 | 6.569979707519222 | 0.0 | 1965 | 0 | 70.48408700001119 | 2014.9693710000065 |
| Aggregated | Passed ✅ | 85 | 98.97799238117081 | 6.569979707519222 | 0.0 | 1965 | 0 | 70.48408700001119 | 2014.9693710000065 |

1.42.8

Not secure
What's Changed
* Update lockfile of docs by yujonglee in https://github.com/BerriAI/litellm/pull/4990


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.7...v1.42.8



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.8
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 170.09964099790207 | 6.374999196105115 | 0.036772412772499354 | 1907 | 11 | 90.60879700001578 | 1693.6838860000307 |
| Aggregated | Passed ✅ | 140.0 | 170.09964099790207 | 6.374999196105115 | 0.036772412772499354 | 1907 | 11 | 90.60879700001578 | 1693.6838860000307 |

v1.42.7-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.7...v1.42.7-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.7-stable
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 134.5310143141385 | 6.502201258665194 | 0.0 | 1945 | 0 | 104.74455500002477 | 1187.6723190000007 |
| Aggregated | Passed ✅ | 120.0 | 134.5310143141385 | 6.502201258665194 | 0.0 | 1945 | 0 | 104.74455500002477 | 1187.6723190000007 |

1.42.7

Not secure
What's Changed
* [Feat-Proxy] Add List fine-tuning jobs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4987
* [Feat] Add Support for Azure OpenAI Fine Tuning + Files Endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4978
* [Feat-Proxy] Add /fine_tuning endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4985 (see the sketch below)
* Feat - add testing for proxy fine tuning endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4986
https://docs.litellm.ai/docs/fine_tuning
![Group 5916](https://github.com/user-attachments/assets/f8ac9125-9b41-4bcb-b7f0-a92a99ebd352)
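For the proxy `/fine_tuning` endpoints above, a hedged sketch using the OpenAI SDK pointed at the proxy; the file ID, model name, and key are placeholders, and whether the job is routed to OpenAI or Azure depends on the proxy's fine-tuning configuration:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholder proxy key

# Create a fine-tuning job through the proxy's OpenAI-compatible fine-tuning routes.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder: a file previously uploaded via the files endpoint
    model="gpt-3.5-turbo",
)

# The "List fine-tuning jobs" route added in this release.
for j in client.fine_tuning.jobs.list():
    print(j.id, j.status)
```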
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.6...v1.42.7



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.7
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 83 | 96.06243567403597 | 6.49843613402354 | 0.0 | 1945 | 0 | 68.66073699995923 | 932.4520559999883 |
| Aggregated | Passed ✅ | 83 | 96.06243567403597 | 6.49843613402354 | 0.0 | 1945 | 0 | 68.66073699995923 | 932.4520559999883 |

1.42.6

Not secure
What's Changed
* feat(vertex_ai_partner.py): Vertex AI Mistral Support by krrishdholakia in https://github.com/BerriAI/litellm/pull/4925
* Support vertex mistral cost tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/4929
* [Feat-Proxy] - Langfuse log /audio/transcription on langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4939
* Fix: 4942. Remove verbose logging when exception can be handled by dleen in https://github.com/BerriAI/litellm/pull/4943
* fixes: 4947 Bedrock context exception does not have a response by dleen in https://github.com/BerriAI/litellm/pull/4948
* [Feat] Bedrock add support for Bedrock Guardrails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4946
* build(deps): bump fast-xml-parser from 4.3.2 to 4.4.1 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/4950
* ui - allow entering custom model names for all all provider (azure ai, openai, etc) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4951
* Fix bug in cohere_chat.py by pat-cohere in https://github.com/BerriAI/litellm/pull/4949
* Feat UI - allow using custom header for litellm api key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4916
* [Feat] Add `litellm.create_fine_tuning_job()` , `litellm.list_fine_tuning_jobs()`, `litellm.cancel_fine_tuning_job()` finetuning endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4956
* [Feature]: GET /v1/batches to return list of batches by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4969 (see the sketch below)
* [Fix-Proxy] ProxyException code as str - Make OpenAI Compatible by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4973
* Proxy Admin UI - switch off console logs in production mode by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4975
* feat(huggingface_restapi.py): Support multiple hf embedding types + async hf embeddings by krrishdholakia in https://github.com/BerriAI/litellm/pull/4976 (see the sketch after this list)
* fix(cohere.py): support async cohere embedding calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4977
* fix(utils.py): fix model registeration to model cost map by krrishdholakia in https://github.com/BerriAI/litellm/pull/4979
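For the Hugging Face embeddings item above, a hedged sketch of the new async call path; the model string is only an example, not one named in this changelog:

```python
import asyncio
import litellm

async def main():
    # Async embedding call; the model string below is an assumed example.
    response = await litellm.aembedding(
        model="huggingface/BAAI/bge-small-en-v1.5",
        input=["hello world", "goodbye world"],
    )
    print(len(response.data), "embedding vectors returned")

asyncio.run(main())
```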

New Contributors
* pat-cohere made their first contribution in https://github.com/BerriAI/litellm/pull/4949

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.5...v1.42.6
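For the `GET /v1/batches` item in the list above, a hedged sketch against the proxy using the OpenAI SDK; the base URL and key are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholder proxy key

# List batches via the new GET /v1/batches route, surfaced through the OpenAI SDK.
for batch in client.batches.list(limit=10):
    print(batch.id, batch.status)
```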



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.6
```



Don't want to maintain your internal proxy? get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 150.6083251086047 | 6.375223413611649 | 0.0 | 1906 | 0 | 105.08289299997386 | 1346.7240439999841 |
| Aggregated | Passed ✅ | 130.0 | 150.6083251086047 | 6.375223413611649 | 0.0 | 1906 | 0 | 105.08289299997386 | 1346.7240439999841 |
