LiteLLM

Latest version: v1.52.14


1.43.13

We're launching Day 0 support for Anthropic Prompt Caching on LiteLLM 👉 Start here: https://docs.litellm.ai/docs/providers/anthropic#prompt-caching

📖 Cut costs and latency: use Anthropic prompt caching for the following scenarios (a rough SDK sketch follows the list):

- Large Context Caching: https://docs.litellm.ai/docs/providers/anthropic#caching---large-context-caching

- Tools definitions: https://docs.litellm.ai/docs/providers/anthropic#caching---tools-definitions

- Continuing Multi-Turn Convo: https://docs.litellm.ai/docs/providers/anthropic#caching---continuing-multi-turn-convo
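
A rough sketch of large-context caching through the LiteLLM SDK: mark the block you want cached with `cache_control` in an OpenAI-format message. The model id, key, and beta header below are illustrative; the docs linked above are authoritative.

```python
import os
import litellm

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder key

# Large-context caching: flag the big system block as ephemeral so Anthropic
# caches it and reuses it on subsequent calls instead of re-processing it.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "<very large context, e.g. a long legal agreement>",
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "What are the key terms for the licensee?"},
    ],
    # Beta header used while prompt caching was gated; may no longer be required.
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
print(response.choices[0].message.content)
```

The same `cache_control` marker applies to tool definitions and to earlier turns of a multi-turn conversation, which is what the other two links above cover.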

🛠️ [Fix-Proxy] Allow running docker, docker-database as non-root user (h/t [Oz Elhassid](https://www.linkedin.com/feed/#))

📈 [Fix] Prometheus use 'litellm_' prefix for new deployment metrics (h/t [Filipe Andujar](https://www.linkedin.com/feed/#))

✅ [Feat-Proxy] Add failure logging for GCS bucket logging https://docs.litellm.ai/docs/proxy/bucket
![Group 5955](https://github.com/user-attachments/assets/79a95ced-6a53-4b86-a7fb-96bf8c4448ad)


What's Changed
* Update prices/context windows for Perplexity Llama 3.1 models by bachya in https://github.com/BerriAI/litellm/pull/5206
* Allow specifying langfuse project for logging in key metadata by krrishdholakia in https://github.com/BerriAI/litellm/pull/5176
* vertex_ai/claude-3-5-sonnet20240620 support prefill by paul-gauthier in https://github.com/BerriAI/litellm/pull/5203
* Enable follow redirects in ollama_chat by fabceolin in https://github.com/BerriAI/litellm/pull/5148
* feat(user_api_key_auth.py): support calling langfuse with litellm user_api_key_auth by krrishdholakia in https://github.com/BerriAI/litellm/pull/5192
* Use `AZURE_API_VERSION` env var as default azure openai version by msabramo in https://github.com/BerriAI/litellm/pull/5211 (sketch after this list)
* [Feat] Add Anthropic API Prompt Caching Support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5210
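
On the `AZURE_API_VERSION` change (#5211): a minimal sketch, assuming an Azure OpenAI deployment with placeholder credentials; once the env var is set, `api_version` no longer needs to be passed on every call.

```python
import os
import litellm

# Placeholder Azure OpenAI credentials and deployment name.
os.environ["AZURE_API_KEY"] = "..."
os.environ["AZURE_API_BASE"] = "https://my-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2024-02-01"  # picked up as the default api_version

response = litellm.completion(
    model="azure/my-gpt-4o-deployment",  # assumed deployment name
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```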

New Contributors
* fabceolin made their first contribution in https://github.com/BerriAI/litellm/pull/5148

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.12...v1.43.13



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.13
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 84 | 97.96550381346324 | 6.506952562817539 | 0.0 | 1946 | 0 | 66.09550899997885 | 1639.4581249999192 |
| Aggregated | Passed ✅ | 84 | 97.96550381346324 | 6.506952562817539 | 0.0 | 1946 | 0 | 66.09550899997885 | 1639.4581249999192 |

1.43.13.dev1

What's Changed
* fix(utils.py): support calling openai models via `azure_ai/` by krrishdholakia in https://github.com/BerriAI/litellm/pull/5209
* [Feat-Proxy] - user common helper to `route_request` for making llm call by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5224
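
For the `azure_ai/` change (#5209), a hedged sketch of calling an OpenAI model hosted behind an Azure AI endpoint; the env var names and model id are assumptions to check against the azure_ai provider docs.

```python
import os
import litellm

# Assumed env var names for the azure_ai provider.
os.environ["AZURE_AI_API_KEY"] = "..."
os.environ["AZURE_AI_API_BASE"] = "https://my-endpoint.inference.ai.azure.com"

# Per #5209, OpenAI models can now be routed through the azure_ai/ prefix too.
response = litellm.completion(
    model="azure_ai/gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Hello from azure_ai"}],
)
print(response.choices[0].message.content)
```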


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.13...v1.43.13.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.13.dev1
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 85 | 99.67029309573238 | 6.427917460346874 | 0.0 | 1922 | 0 | 69.25942499998428 | 778.3932070000219 |
| Aggregated | Passed ✅ | 85 | 99.67029309573238 | 6.427917460346874 | 0.0 | 1922 | 0 | 69.25942499998428 | 778.3932070000219 |

v1.43.13-stable
What's Changed
* Update prices/context windows for Perplexity Llama 3.1 models by bachya in https://github.com/BerriAI/litellm/pull/5206
* Allow specifying langfuse project for logging in key metadata by krrishdholakia in https://github.com/BerriAI/litellm/pull/5176
* vertex_ai/claude-3-5-sonnet20240620 support prefill by paul-gauthier in https://github.com/BerriAI/litellm/pull/5203 (prefill sketch after this list)
* Enable follow redirects in ollama_chat by fabceolin in https://github.com/BerriAI/litellm/pull/5148
* feat(user_api_key_auth.py): support calling langfuse with litellm user_api_key_auth by krrishdholakia in https://github.com/BerriAI/litellm/pull/5192
* Use `AZURE_API_VERSION` env var as default azure openai version by msabramo in https://github.com/BerriAI/litellm/pull/5211
* [Feat] Add Anthropic API Prompt Caching Support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5210
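
On the prefill support (#5203): with Claude, a trailing assistant message acts as a prefill that the model continues from. A minimal sketch; the Vertex model id is illustrative, and Vertex credentials (project/location or application default credentials) are assumed to be configured already.

```python
import litellm

response = litellm.completion(
    model="vertex_ai/claude-3-5-sonnet@20240620",  # illustrative Vertex model id
    messages=[
        {"role": "user", "content": "Return a JSON object with a 'colors' list."},
        # Prefill: Claude continues from this partial assistant turn.
        {"role": "assistant", "content": "{"},
    ],
)
# The response is the continuation of the prefilled "{".
print(response.choices[0].message.content)
```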

New Contributors
* fabceolin made their first contribution in https://github.com/BerriAI/litellm/pull/5148

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.12...v1.43.13-stable



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.13-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 110.0 | 123.61768599690367 | 6.473189780476622 | 0.0 | 1937 | 0 | 85.34288399999923 | 1587.6906680000502 |
| Aggregated | Passed ✅ | 110.0 | 123.61768599690367 | 6.473189780476622 | 0.0 | 1937 | 0 | 85.34288399999923 | 1587.6906680000502 |

1.43.12

What's Changed
* fix: wrong order of arguments for ollama by thiswillbeyourgithub in https://github.com/BerriAI/litellm/pull/5116
* Mismatch in example fixed by zby in https://github.com/BerriAI/litellm/pull/5199
* [Fix-Proxy] Allow running docker, docker-database as non-root user by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5186
* [Feat-Proxy] Add failure logging for GCS bucket by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5200
* [Fix] Prometheus use 'litellm_' prefix for new deployment metrics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5202

New Contributors
* thiswillbeyourgithub made their first contribution in https://github.com/BerriAI/litellm/pull/5116
* zby made their first contribution in https://github.com/BerriAI/litellm/pull/5199

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.10...v1.43.12



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.12
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 83 | 107.61395181150338 | 6.50579510348369 | 0.0 | 1947 | 0 | 68.5540619999756 | 2549.9163599999974 |
| Aggregated | Passed ✅ | 83 | 107.61395181150338 | 6.50579510348369 | 0.0 | 1947 | 0 | 68.5540619999756 | 2549.9163599999974 |

v1.43.10-stable
What's Changed
* [Feat-Proxy+langfuse] LiteLLM-specific Tags on Langfuse - `cache_hit`, `cache_key` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5180
* [Feat-Proxy Security] Allow Using `x-forwarded-for` for enforcing + tracking ip address by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5181
* [Feat-UI] - Handle session expired on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5187
* Improving the proxy docs for configuring with vllm by fozziethebeat in https://github.com/BerriAI/litellm/pull/5184
* (models): Add chatgpt-4o-latest by Manouchehri in https://github.com/BerriAI/litellm/pull/5190
* Fix not sended json_data_for_triton by ArtyomZemlyak in https://github.com/BerriAI/litellm/pull/5189
* [Feat] Allow loading LiteLLM config from s3 buckets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5191
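
For loading the proxy config from an S3 bucket (#5191), a rough sketch: set the bucket env vars before starting the proxy instead of passing a local `--config`. The `LITELLM_CONFIG_BUCKET_*` names and the bucket/key values below are assumptions to verify against the proxy config docs.

```python
import os
import subprocess

# Assumed env var names for S3-hosted proxy configs (verify in the docs).
os.environ["LITELLM_CONFIG_BUCKET_NAME"] = "my-litellm-configs"      # hypothetical bucket
os.environ["LITELLM_CONFIG_BUCKET_OBJECT_KEY"] = "prod/config.yaml"  # hypothetical object key

# Standard AWS credentials are still needed so the proxy can fetch the object.
os.environ.setdefault("AWS_ACCESS_KEY_ID", "...")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "...")

# Start the proxy; with the env vars above it should pull the config from S3.
subprocess.run(["litellm", "--port", "4000"], check=True)
```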

New Contributors
* fozziethebeat made their first contribution in https://github.com/BerriAI/litellm/pull/5184
* ArtyomZemlyak made their first contribution in https://github.com/BerriAI/litellm/pull/5189

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.9...v1.43.10-stable



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.10-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 167.87047576794217 | 6.378933257294029 | 0.0 | 1909 | 0 | 113.80252899994048 | 2223.1993999999986 |
| Aggregated | Passed ✅ | 130.0 | 167.87047576794217 | 6.378933257294029 | 0.0 | 1909 | 0 | 113.80252899994048 | 2223.1993999999986 |

1.43.10

What's Changed
* [Feat-Proxy+langfuse] LiteLLM-specific Tags on Langfuse - `cache_hit`, `cache_key` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5180
* [Feat-Proxy Security] Allow Using `x-forwarded-for` for enforcing + tracking ip address by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5181
* [Feat-UI] - Handle session expired on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5187
* Improving the proxy docs for configuring with vllm by fozziethebeat in https://github.com/BerriAI/litellm/pull/5184
* (models): Add chatgpt-4o-latest by Manouchehri in https://github.com/BerriAI/litellm/pull/5190
* Fix not sended json_data_for_triton by ArtyomZemlyak in https://github.com/BerriAI/litellm/pull/5189
* [Feat] Allow loading LiteLLM config from s3 buckets by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5191

New Contributors
* fozziethebeat made their first contribution in https://github.com/BerriAI/litellm/pull/5184
* ArtyomZemlyak made their first contribution in https://github.com/BerriAI/litellm/pull/5189

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.9...v1.43.10



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.10
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 167.11426559664835 | 6.3781820304668635 | 0.0 | 1909 | 0 | 105.92628599999898 | 1269.0274309999836 |
| Aggregated | Passed ✅ | 130.0 | 167.11426559664835 | 6.3781820304668635 | 0.0 | 1909 | 0 | 105.92628599999898 | 1269.0274309999836 |

1.43.9

Get notified when Create, Update, Delete operations are done on Virtual Keys, Users and Teams

👉 Start here to use: https://docs.litellm.ai/docs/proxy/alerting

🪣 [Feat] GCS Bucket logging - log in folders based on dd-m-yyyy

🪣 [Feat] GCS Bucket logging - log api key metadata + response cost

✨ [Proxy - docstring] fix curl on docstring on /team endpoints

⚡️ [Feat-Proxy] send fallbacks statistics on slack notifications
![Group 5950](https://github.com/user-attachments/assets/5e457529-78ca-4ada-9342-ef612422f8a8)
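
A hedged SDK-side sketch of the GCS bucket logging above; it assumes the `gcs_bucket` callback can be registered from the SDK the same way it is in the proxy config, and that the env var names match the GCS logging docs.

```python
import os
import litellm

# Placeholder credentials and assumed env var names for the GCS logger.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["GCS_BUCKET_NAME"] = "my-litellm-logs"            # hypothetical bucket
os.environ["GCS_PATH_SERVICE_ACCOUNT"] = "/path/to/sa.json"  # service account file

# Register the GCS logger; per this release, entries are written into
# dd-m-yyyy folders and include API key metadata plus response cost.
litellm.callbacks = ["gcs_bucket"]

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
```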


What's Changed
* [Feat-Proxy] send prometheus fallbacks stats to slack by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5154
* [Feat-Security] Send Slack Alert when CRUD ops done on Virtual Keys, Teams, Internal Users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5166
* [Proxy docstring] fix curl on docstring on /team endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5167
* [Feat Proxy] Send slack alert on CRUD endpoints for Internal Users by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5168
* [Feat] Log GCS logs in folders based on dd-m-yyyy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5171
* [Feat] GCS Bucket logging - log api key metadata + response cost by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5169


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.7...v1.43.9



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.9
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 137.22814964462026 | 6.367015610222066 | 0.0 | 1905 | 0 | 95.95173799999657 | 1027.7449360000048 |
| Aggregated | Passed ✅ | 120.0 | 137.22814964462026 | 6.367015610222066 | 0.0 | 1905 | 0 | 95.95173799999657 | 1027.7449360000048 |

1.43.9.dev4

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.9.dev3...v1.43.9.dev4



Docker Run LiteLLM Proxy


```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.43.9.dev4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 154.29097360042562 | 6.2792160315752845 | 0.0 | 1877 | 0 | 110.9379830000421 | 2118.2475180000324 |
| Aggregated | Passed ✅ | 130.0 | 154.29097360042562 | 6.2792160315752845 | 0.0 | 1877 | 0 | 110.9379830000421 | 2118.2475180000324 |
