LiteLLM


1.40.9

What's Changed
* fix opentelemetry-semantic-conventions-ai does not exist on LiteLLM Docker by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4129
* [Feat] OTEL - allow propagating traceparent in headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4133
* Added `mypy` to the Poetry `dev` group by jamesbraza in https://github.com/BerriAI/litellm/pull/4136
* Azure AI support all models by krrishdholakia in https://github.com/BerriAI/litellm/pull/4134
* feat(utils.py): bump tiktoken dependency to 0.7.0 (gpt-4o token counting support) by krrishdholakia in https://github.com/BerriAI/litellm/pull/4119 (see the sketch after this list)
* fix(proxy_server.py): use consistent 400-status code error code for exceeded budget errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/4139
* Allowing inference of LLM provider in `get_supported_openai_params` by jamesbraza in https://github.com/BerriAI/litellm/pull/4137 (see the sketch after this list)
* [FEAT] log management endpoint logs to otel by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4138
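
Two of the SDK changes above are easiest to see in code. A minimal sketch, assuming litellm >= 1.40.9 and tiktoken 0.7.0 are installed; the model names here are illustrative:

```python
from litellm import get_supported_openai_params, token_counter

# PR #4137: the provider can now be inferred from the model name, so
# custom_llm_provider no longer needs to be passed explicitly.
params = get_supported_openai_params(model="claude-3-opus-20240229")
print(params)  # e.g. ["temperature", "max_tokens", "tools", ...]

# PR #4119: tiktoken 0.7.0 ships the gpt-4o encoding, enabling token counts.
n_tokens = token_counter(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(n_tokens)
```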

New Contributors
* jamesbraza made their first contribution in https://github.com/BerriAI/litellm/pull/4136

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.8...v1.40.9



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.9
```
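
Once the container is up, the proxy speaks the OpenAI API on port 4000. A minimal smoke test with the OpenAI Python client; the API key and model name are placeholders that assume a key and model have been configured on the proxy (models come from the DB here, since STORE_MODEL_IN_DB=True):

```python
from openai import OpenAI

# "sk-1234" is a placeholder; use your proxy's master or virtual key.
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumes this model is configured on the proxy
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```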



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 95 | 118.26463258740928 | 6.42020613574963 | 0.0 | 1922 | 0 | 78.571060999991 | 1634.9082140000064 |
| Aggregated | Passed ✅ | 95 | 118.26463258740928 | 6.42020613574963 | 0.0 | 1922 | 0 | 78.571060999991 | 1634.9082140000064 |

1.40.8

What's Changed
* [FEAT]- OTEL log litellm request / response by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4076
* [Feat] Enterprise - Attribute Management changes to Users in Audit Logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4083
* [FEAT]- OTEL Log raw LLM request/response on OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4078
* fix(cost_calculator.py): fixes tgai unmapped model pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4085
* fix(utils.py): improved predibase exception mapping by krrishdholakia in https://github.com/BerriAI/litellm/pull/4080
* [Fix] Litellm sdk - allow ChatCompletionMessageToolCall, and Function to be used as dict by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4086
* Update together ai pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4087
* [Feature]: Proxy: Support API-Key header in addition to Authorization header by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4088 (see the example after this list)
* docs - cache controls on `litellm python SDK` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4099
* docs: add llmcord.py to side bar nav by jakobdylanc in https://github.com/BerriAI/litellm/pull/4101
* docs: fix llmcord.py side bar link by jakobdylanc in https://github.com/BerriAI/litellm/pull/4104
* [FEAT] - viewing spend report per customer / team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4105
* feat - log Proxy Server auth errors on OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4103
* [Feat] Client Side Fallbacks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4107
* Fix typos: Enterpise -> Enterprise by msabramo in https://github.com/BerriAI/litellm/pull/4110
* `assistants.md`: Remove extra trailing backslash by msabramo in https://github.com/BerriAI/litellm/pull/4112
* `assistants.md`: Add "Get a Thread" example by msabramo in https://github.com/BerriAI/litellm/pull/4114
* ui - Fix Test Key dropdown by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4108
* fix(bedrock_httpx.py): fix tool calling for anthropic bedrock calls w/ streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/4106
* fix(proxy_server.py): allow passing in a list of team members by krrishdholakia in https://github.com/BerriAI/litellm/pull/4084
* fix - show `model group` in Azure ContentPolicy exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4116
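
A sketch of the new header support from PR #4088. The exact header name is taken from the PR title (Azure-style `api-key`); treat it as an assumption and check the proxy docs. The URL, key, and model name are placeholders:

```python
import requests

# Azure-style "api-key" header instead of "Authorization: Bearer <key>".
resp = requests.post(
    "http://0.0.0.0:4000/chat/completions",
    headers={"api-key": "sk-1234"},  # placeholder proxy key
    json={
        "model": "gpt-3.5-turbo",  # assumes this model is configured
        "messages": [{"role": "user", "content": "hi"}],
    },
)
print(resp.json())
```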

Client Side Fallbacks: https://docs.litellm.ai/docs/proxy/reliability#test---client-side-fallbacks
![fallbacks py](https://github.com/BerriAI/litellm/assets/29436595/14433d7d-1575-4886-bc44-61ede51806b0)
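
A minimal sketch of client-side fallbacks against the proxy, following the pattern in the linked docs: the caller names backup models in the request body, and the proxy retries with them if the primary model fails. Both model names are assumptions about what is configured on the proxy:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# If "zephyr-beta" errors, the proxy falls back to "gpt-3.5-turbo".
response = client.chat.completions.create(
    model="zephyr-beta",
    messages=[{"role": "user", "content": "ping"}],
    extra_body={"fallbacks": ["gpt-3.5-turbo"]},
)
print(response.choices[0].message.content)
```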

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.7...v1.40.8



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.8
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 169.11120714803027 | 6.281005310183787 | 0.0 | 1878 | 0 | 114.50119100004486 | 1457.4686270000257 |
| Aggregated | Passed ✅ | 140.0 | 169.11120714803027 | 6.281005310183787 | 0.0 | 1878 | 0 | 114.50119100004486 | 1457.4686270000257 |

1.40.8.dev1

What's Changed
* fix opentelemetry-semantic-conventions-ai does not exist on LiteLLM Docker by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4129
* [Feat] OTEL - allow propagating traceparent in headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4133 (see the sketch after this list)
* Added `mypy` to the Poetry `dev` group by jamesbraza in https://github.com/BerriAI/litellm/pull/4136
* Azure AI support all models by krrishdholakia in https://github.com/BerriAI/litellm/pull/4134
* feat(utils.py): bump tiktoken dependency to 0.7.0 (gpt-4o token counting support) by krrishdholakia in https://github.com/BerriAI/litellm/pull/4119
* fix(proxy_server.py): use consistent 400-status code error code for exceeded budget errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/4139
* Allowing inference of LLM provider in `get_supported_openai_params` by jamesbraza in https://github.com/BerriAI/litellm/pull/4137
* [FEAT] log management endpoint logs to otel by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4138
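
A sketch of the traceparent propagation from PR #4133, assuming the proxy forwards a standard W3C trace-context header so the LLM call joins an existing trace. The header value below is a placeholder trace/span id, and the key and model name assume a configured proxy:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# W3C traceparent: 00-<trace-id>-<parent-span-id>-<flags> (placeholder ids).
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    extra_headers={
        "traceparent": "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
    },
)
```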

New Contributors
* jamesbraza made their first contribution in https://github.com/BerriAI/litellm/pull/4136

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.8-stable...1.40.8.dev1



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-1.40.8.dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 125.08247401976035 | 6.426077578390951 | 0.0 | 1923 | 0 | 91.96702899998854 | 1106.7971329999864 |
| Aggregated | Passed ✅ | 110.0 | 125.08247401976035 | 6.426077578390951 | 0.0 | 1923 | 0 | 91.96702899998854 | 1106.7971329999864 |

v1.40.8-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.8...v1.40.8-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.8-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 147.76399794426771 | 6.297998256290574 | 0.0 | 1884 | 0 | 97.42064800002481 | 1621.3958460000413 |
| Aggregated | Passed ✅ | 120.0 | 147.76399794426771 | 6.297998256290574 | 0.0 | 1884 | 0 | 97.42064800002481 | 1621.3958460000413 |

1.40.7

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.6...v1.40.7



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.7
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 97 | 126.50565680197539 | 6.4278560269757214 | 0.003340881510902142 | 1924 | 1 | 82.64289499999222 | 1316.4627209999935 |
| Aggregated | Passed ✅ | 97 | 126.50565680197539 | 6.4278560269757214 | 0.003340881510902142 | 1924 | 1 | 82.64289499999222 | 1316.4627209999935 |

1.40.7.dev1

What's Changed
* [FEAT]- OTEL log litellm request / response by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4076
* [Feat] Enterprise - Attribute Management changes to Users in Audit Logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4083
* [FEAT]- OTEL Log raw LLM request/response on OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4078
* fix(cost_calculator.py): fixes tgai unmapped model pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4085
* fix(utils.py): improved predibase exception mapping by krrishdholakia in https://github.com/BerriAI/litellm/pull/4080
* [Fix] Litellm sdk - allow ChatCompletionMessageToolCall, and Function to be used as dict by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4086
* Update together ai pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4087
* [Feature]: Proxy: Support API-Key header in addition to Authorization header by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4088
* docs - cache controls on `litellm python SDK` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4099


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.7...v1.40.7.dev1



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.7.dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 179.79878830216478 | 6.323646865102133 | 0.0 | 1893 | 0 | 111.88137199997072 | 2245.1254659999904 |
| Aggregated | Passed ✅ | 140.0 | 179.79878830216478 | 6.323646865102133 | 0.0 | 1893 | 0 | 111.88137199997072 | 2245.1254659999904 |

1.40.6

🐞 [Fix] - Allow redacting messages from Slack alerting: https://docs.litellm.ai/docs/proxy/alerting#advanced---redacting-messages-from-alerts

🔨 [Refactor] - Refactor proxy_server.py to use a common function for `add_litellm_data_to_request`

✨ [Feat] OpenTelemetry - Log Exceptions from Proxy Server

✨ [Feat] OpenTelemetry - Log Redis Cache Read / Writes

✨ [Feat] OpenTelemetry - Log DB Exceptions

✨ [Feat] OpenTelemetry - Instrument DB Reads

🐞 [Fix] UI - Allow custom logout URL and show proxy base URL on API Ref Page

<img width="1648" alt="Xnapper-2024-06-07-21 44 06" src="https://github.com/BerriAI/litellm/assets/29436595/b3ea8dd3-1298-4cdd-8853-d568e054e185">



What's Changed
* feat(bedrock_httpx.py): add support for bedrock converse api by krrishdholakia in https://github.com/BerriAI/litellm/pull/4033 (see the example after this list)
* feature - Types for mypy - issue 360 by mikeslattery in https://github.com/BerriAI/litellm/pull/3925
* [Fix]- Allow redacting `messages` from slack alerting by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4047
* Fix to support all file types supported by Gemini by nick-rackauckas in https://github.com/BerriAI/litellm/pull/4055
* [Feat] OTEL - Instrument DB Reads by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4058
* [Refactor] - Refactor proxy_server.py to use common function for `add_litellm_data_to_request` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4065
* [Feat] OTEL - Log Exceptions from Proxy Server by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4067
* Raw request debug logs - security fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/4068
* [FEAT] OTEL - Log Redis Cache Read / Writes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4070
* [FEAT] OTEL - LOG DB Exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4071
* [Fix] UI - Allow custom logout url and show proxy base url on API Ref Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4072
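
A minimal sketch of calling a Bedrock Claude 3 model through the SDK, the model family PR #4033's converse support targets. It assumes AWS credentials in the environment and access to the given model id; whether the converse route is used under the hood is handled by litellm:

```python
import litellm

# Real Bedrock model id for Claude 3 Sonnet; swap in any model you have access to.
response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Hello from Bedrock"}],
)
print(response.choices[0].message.content)
```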

New Contributors
* mikeslattery made their first contribution in https://github.com/BerriAI/litellm/pull/3925
* nick-rackauckas made their first contribution in https://github.com/BerriAI/litellm/pull/4055

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.5...v1.40.6



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.6
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 151.53218399526997 | 6.362696017911015 | 0.0 | 1903 | 0 | 109.01354200001379 | 1319.1295889999992 |
| Aggregated | Passed ✅ | 130.0 | 151.53218399526997 | 6.362696017911015 | 0.0 | 1903 | 0 | 109.01354200001379 | 1319.1295889999992 |
