LiteLLM

Latest version: v1.52.14


1.48.12

What's Changed
* OpenAI `/v1/realtime` api support by krrishdholakia in https://github.com/BerriAI/litellm/pull/6047
* Litellm Minor Fixes & Improvements (10/03/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6049
* 🔧 (model_prices_and_context_window.json): rename gemini-pro-flash to … by marek-kerka in https://github.com/BerriAI/litellm/pull/5980

New Contributors
* marek-kerka made their first contribution in https://github.com/BerriAI/litellm/pull/5980

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.11...v1.48.12
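
The `/v1/realtime` entry above is a websocket endpoint, so the usual HTTP snippets don't apply. A speculative sketch, assuming the proxy mirrors OpenAI's realtime websocket protocol; the model name, `sk-1234` master key, and header handling are assumptions, not confirmed by these release notes:

```python
# Speculative sketch: connect to the proxy's new /v1/realtime endpoint.
# Assumes the route mirrors OpenAI's realtime websocket protocol; the model
# name and "sk-1234" master key are placeholders for your own configuration.
import asyncio
import websockets  # `extra_headers` is the pre-v14 parameter name

async def main():
    uri = "ws://localhost:4000/v1/realtime?model=gpt-4o-realtime-preview"
    headers = {"Authorization": "Bearer sk-1234"}
    async with websockets.connect(uri, extra_headers=headers) as ws:
        await ws.send('{"type": "response.create"}')
        print(await ws.recv())  # first server event, e.g. a session update

asyncio.run(main())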



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.12
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
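
Once the container above is running, the proxy speaks the OpenAI API, so any OpenAI-compatible client can talk to it. A minimal sketch; the `sk-1234` key and `gpt-4o` model name are placeholders for whatever you configure:

```python
# Minimal sketch: call the LiteLLM proxy started above via the OpenAI SDK.
# "sk-1234" and "gpt-4o" are placeholders for your configured key and model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```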

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 165.23 | 6.39 | 0.0 | 1912 | 0 | 107.12 | 2847.10 |
| Aggregated | Passed ✅ | 120.0 | 165.23 | 6.39 | 0.0 | 1912 | 0 | 107.12 | 2847.10 |

v1.48.11-stable
What's Changed
* (azure): Enable stream_options for Azure OpenAI. (6024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6029
* (load testing) add vertex_ai embeddings load test by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6004
* (feat proxy) add key based logging for GCS bucket by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6031
* (feat) add nvidia nim embeddings by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6032
* (contributor PRs) oct 3rd, 2024 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6034
* fix(factory.py): bedrock: merge consecutive tool + user messages by krrishdholakia in https://github.com/BerriAI/litellm/pull/6028
* (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6039


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.10...v1.48.11-stable
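
On the stream_options entry in the list above: with it enabled for Azure OpenAI, a streaming call can report token usage in its final chunk. A sketch via litellm's Python SDK, assuming an `azure/...` deployment name of your own:

```python
# Sketch: request usage reporting on a streaming Azure OpenAI call.
# "azure/my-gpt-4o-deployment" is a placeholder for your deployment name.
import litellm

stream = litellm.completion(
    model="azure/my-gpt-4o-deployment",
    messages=[{"role": "user", "content": "Say hi"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    if getattr(chunk, "usage", None):  # the final chunk carries the usage object
        print(chunk.usage)
```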



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.11-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 152.42 | 6.32 | 0.0 | 1891 | 0 | 105.53 | 2581.75 |
| Aggregated | Passed ✅ | 130.0 | 152.42 | 6.32 | 0.0 | 1891 | 0 | 105.53 | 2581.75 |

1.48.11

What's Changed
* (azure): Enable stream_options for Azure OpenAI. (6024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6029
* (load testing) add vertex_ai embeddings load test by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6004
* (feat proxy) add key based logging for GCS bucket by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6031
* (feat) add nvidia nim embeddings by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6032
* (contributor PRs) oct 3rd, 2024 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6034
* fix(factory.py): bedrock: merge consecutive tool + user messages by krrishdholakia in https://github.com/BerriAI/litellm/pull/6028
* (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6039
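
The last entry above surfaces OpenAI's prompt-caching accounting on non-streaming responses. A sketch of reading it back, assuming the field follows OpenAI's usage schema; caching only applies to sufficiently long prompts:

```python
# Sketch: inspect prompt_tokens_details on a non-streaming completion.
# Field names follow OpenAI's usage schema; cached_tokens shows up on
# repeat calls once the (sufficiently long) prompt prefix has been cached.
import litellm

filler = "background document text " * 400  # placeholder long prefix
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": filler + "\nSummarize the above."}],
)
print(response.usage.prompt_tokens_details)
```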


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.10...v1.48.11
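
The NVIDIA NIM embeddings entry above follows litellm's usual provider-prefix pattern. A sketch, assuming the `nvidia_nim/` prefix and an `NVIDIA_NIM_API_KEY` in the environment; the model name is a placeholder:

```python
# Sketch: NVIDIA NIM embeddings through litellm's provider prefix.
# Assumes NVIDIA_NIM_API_KEY is set; the model name is a placeholder.
import litellm

response = litellm.embedding(
    model="nvidia_nim/nvidia/nv-embedqa-e5-v5",
    input=["hello world"],
)
print(len(response.data[0]["embedding"]))
```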



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.11
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 160.95 | 6.35 | 0.0 | 1900 | 0 | 112.37 | 2553.23 |
| Aggregated | Passed ✅ | 140.0 | 160.95 | 6.35 | 0.0 | 1900 | 0 | 112.37 | 2553.23 |

1.48.10

What's Changed
* (testing): Enable testing `us.anthropic.claude-3-haiku-20240307-v1:0` by Manouchehri in https://github.com/BerriAI/litellm/pull/6018
* [Feat] Added Opik integration for logging and evaluation by dsblank in https://github.com/BerriAI/litellm/pull/5680
* LiteLLM Minor Fixes & Improvements (10/02/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6023

New Contributors
* dsblank made their first contribution in https://github.com/BerriAI/litellm/pull/5680

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.9...v1.48.10
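
The Opik entry above plugs into litellm's callback mechanism. A sketch, assuming the integration registers under the name "opik" and Opik credentials (e.g. `OPIK_API_KEY`) are set in the environment:

```python
# Sketch: route request/response logs to Opik via litellm's callback list.
# Assumes the integration registers as "opik" and credentials come from env.
import litellm

litellm.success_callback = ["opik"]

litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Log this call to Opik"}],
)
```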



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.10
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 150.0 | 180.95 | 6.25 | 0.0 | 1871 | 0 | 123.73 | 2227.76 |
| Aggregated | Passed ✅ | 150.0 | 180.95 | 6.25 | 0.0 | 1871 | 0 | 123.73 | 2227.76 |

v1.48.9-stable
What's Changed
* Litellm ruff linting enforcement by krrishdholakia in https://github.com/BerriAI/litellm/pull/5992


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.8...v1.48.9-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.9-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 144.51 | 6.45 | 0.0 | 1928 | 0 | 92.13 | 2493.69 |
| Aggregated | Passed ✅ | 120.0 | 144.51 | 6.45 | 0.0 | 1928 | 0 | 92.13 | 2493.69 |

1.48.9

What's Changed
* Litellm ruff linting enforcement by krrishdholakia in https://github.com/BerriAI/litellm/pull/5992


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.8...v1.48.9



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.9
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 94 | 110.62 | 6.52 | 0.0 | 1949 | 0 | 75.32 | 2736.75 |
| Aggregated | Passed ✅ | 94 | 110.62 | 6.52 | 0.0 | 1949 | 0 | 75.32 | 2736.75 |

v1.48.8-stable
What's Changed
* Fixed minor typo in bash command to prevent overwriting .env file by sdaoudi in https://github.com/BerriAI/litellm/pull/5902
* (docs) fix health check documentation language problems by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5984
* (docs) add example using Azure OpenAI Entra ID, client_id, tenant_id with litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5985
* (docs) prometheus metrics document all prometheus metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5989
* [Bug] Skip slack alert if there was no spend by pazcuturi in https://github.com/BerriAI/litellm/pull/5998
* (feat proxy slack alerting) - allow opting in to getting key / internal user alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5990
* (performance improvement - vertex embeddings) ~111.11% faster by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6000

New Contributors
* sdaoudi made their first contribution in https://github.com/BerriAI/litellm/pull/5902
* pazcuturi made their first contribution in https://github.com/BerriAI/litellm/pull/5998

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.7...v1.48.8-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.8-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 155.34 | 6.43 | 0.0033 | 1921 | 1 | 93.83 | 2569.50 |
| Aggregated | Passed ✅ | 130.0 | 155.34 | 6.43 | 0.0033 | 1921 | 1 | 93.83 | 2569.50 |

1.48.8

What's Changed
* Fixed minor typo in bash command to prevent overwriting .env file by sdaoudi in https://github.com/BerriAI/litellm/pull/5902
* (docs) fix health check documentation language problems by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5984
* (docs) add example using Azure OpenAI Entra ID, client_id, tenant_id with litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5985
* (docs) prometheus metrics document all prometheus metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5989
* [Bug] Skip slack alert if there was no spend by pazcuturi in https://github.com/BerriAI/litellm/pull/5998
* (feat proxy slack alerting) - allow opting in to getting key / internal user alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5990
* (performance improvement - vertex embeddings) ~111.11% faster by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6000

New Contributors
* sdaoudi made their first contribution in https://github.com/BerriAI/litellm/pull/5902
* pazcuturi made their first contribution in https://github.com/BerriAI/litellm/pull/5998

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.7...v1.48.8



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.8
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 95 | 117.18 | 6.50 | 0.0 | 1944 | 0 | 73.98 | 3032.99 |
| Aggregated | Passed ✅ | 95 | 117.18 | 6.50 | 0.0 | 1944 | 0 | 73.98 | 3032.99 |

v1.48.7-stable
What's Changed
* Litellm Minor Fixes & Improvements (09/24/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5963


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.6...v1.48.7-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.7-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 160.0 | 184.06 | 6.23 | 0.0 | 1864 | 0 | 127.05 | 2792.67 |
| Aggregated | Passed ✅ | 160.0 | 184.06 | 6.23 | 0.0 | 1864 | 0 | 127.05 | 2792.67 |

1.48.7

What's Changed
* Litellm Minor Fixes & Improvements (09/24/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/5963


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.6...v1.48.7



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.7
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 163.40 | 6.37 | 0.0 | 1907 | 0 | 108.54 | 1747.12 |
| Aggregated | Passed ✅ | 140.0 | 163.40 | 6.37 | 0.0 | 1907 | 0 | 108.54 | 1747.12 |
