LiteLLM

Latest version: v1.65.1


1.59.8dev1

What's Changed
* Fix custom pricing - separate provider info from model info by krrishdholakia in https://github.com/BerriAI/litellm/pull/7990
* Litellm dev 01 25 2025 p4 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8006
* (UI) - Adding new models enhancement - show provider logo by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8033
* (UI enhancement) - allow onboarding wildcard models on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8034
* add openrouter/deepseek/deepseek-r1 by paul-gauthier in https://github.com/BerriAI/litellm/pull/8038 (a usage sketch follows this list)
* (UI) - allow assigning wildcard models to a team / key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8041
* Add smolagents by aymeric-roucher in https://github.com/BerriAI/litellm/pull/8026
* (UI) fixes to add model flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8043
* github - run stale issue/pr bot by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8045
* (doc) Add nvidia as provider by raspawar in https://github.com/BerriAI/litellm/pull/8023
* feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8037
* fix(utils.py): handle failed hf tokenizer request during calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/8032
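The `openrouter/deepseek/deepseek-r1` addition above registers DeepSeek R1 (served via OpenRouter) as a routable model. A minimal sketch of calling it through the proxy started with the Docker command below, assuming the model has been onboarded and the proxy holds a valid `OPENROUTER_API_KEY` (both assumptions, not shown in these notes):

```shell
# Hedged example: call DeepSeek R1 via OpenRouter through a running proxy.
# Assumes the model was onboarded (e.g. via the new add-model UI flow) and
# OPENROUTER_API_KEY is configured server-side. Add an Authorization header
# if a master key / virtual keys are enabled.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openrouter/deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": "Summarize your reasoning briefly."}]
      }'
```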

New Contributors
* aymeric-roucher made their first contribution in https://github.com/BerriAI/litellm/pull/8026
* raspawar made their first contribution in https://github.com/BerriAI/litellm/pull/8023

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.8...v1.59.8-dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.59.8-dev1
```
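Once the container is up, a quick sanity check before sending traffic is to hit the proxy's health endpoints. A minimal sketch, assuming the standard LiteLLM proxy routes `/health/liveliness` and `/health/readiness` are enabled on the mapped port (these paths are LiteLLM defaults, not shown in these notes):

```shell
# Hedged health check against the container started above.
# /health/liveliness and /health/readiness are LiteLLM proxy defaults;
# adjust if your deployment overrides them.
curl http://localhost:4000/health/liveliness
curl http://localhost:4000/health/readiness
```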



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 230.0 | 253.74562668371757 | 6.073890684010945 | 0.0 | 1818 | 0 | 198.74819999995452 | 1957.5085989999934 |
| Aggregated | Passed βœ… | 230.0 | 253.74562668371757 | 6.073890684010945 | 0.0 | 1818 | 0 | 198.74819999995452 | 1957.5085989999934 |

1.59.7

What's Changed
* Add datadog health check support + fix bedrock converse cost tracking w/ region name specified by krrishdholakia in https://github.com/BerriAI/litellm/pull/7958
* Retry for replicate completion response of status=processing (7901) by krrishdholakia in https://github.com/BerriAI/litellm/pull/7965
* Ollama ssl verify = False + Spend Logs reliability fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/7931
* (Feat) - allow setting `default_on` guardrails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7973
* (Testing) e2e testing for team budget enforcement checks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7988
* (UI) - Usage page show days when spend is 0 and round spend figures on charts to 2 sig figs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7991
* (Feat) - Add GCS Pub/Sub Logging integration for sending DB `SpendLogs` to BigQuery by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7976
* fix(spend_tracking_utils.py): revert api key pass through fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/7977
* Ensure base_model cost tracking works across all endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/7989
* (UI) Allow admin to expose teams for joining by krrishdholakia in https://github.com/BerriAI/litellm/pull/7992


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.6...v1.59.7



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.59.7
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 260.0 | 294.5630730660492 | 6.1254059494010225 | 0.0 | 1832 | 0 | 231.04980300001898 | 2728.9633709999634 |
| Aggregated | Passed βœ… | 260.0 | 294.5630730660492 | 6.1254059494010225 | 0.0 | 1832 | 0 | 231.04980300001898 | 2728.9633709999634 |

1.59.6

What's Changed
* Add `attempted-retries` and `timeout` values to response headers + more testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/7926
* Refactor prometheus e2e test by yujonglee in https://github.com/BerriAI/litellm/pull/7919
* (Testing + Refactor) - Unit testing for team and virtual key budget checks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7945
* docs: fix typo by wagnerjt in https://github.com/BerriAI/litellm/pull/7953
* (Feat) - Allow Admin UI users to view spend logs even when not storing messages / responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7952
* (UI) - Set/edit guardrails on a virtual key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7954
* (Feat) - emit `litellm_team_budget_reset_at_metric` and `litellm_api_key_budget_remaining_hours_metric` on prometheus by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7946 (a quick check follows this list)
* (Feat) allow setting guardrails on a team on the API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7959
* (UI) Set guardrails on Team Create and Edit page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7963
* (GCS fix) - don't truncate payload by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7964
* Litellm dev 01 23 2025 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7962
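The two new budget gauges above can be verified directly once the Prometheus integration is enabled. A minimal sketch, assuming the proxy exposes its metrics at the default `/metrics` route (an assumption; check the Prometheus setup for your deployment):

```shell
# Hedged check for the two new budget metrics.
# Assumes the Prometheus callback is enabled and scrapeable at /metrics.
curl -s http://localhost:4000/metrics \
  | grep -E "litellm_team_budget_reset_at_metric|litellm_api_key_budget_remaining_hours_metric"
```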

New Contributors
* wagnerjt made their first contribution in https://github.com/BerriAI/litellm/pull/7953

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.5...v1.59.6



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.59.6
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 250.0 | 302.94444351157557 | 6.065526445072595 | 0.0 | 1814 | 0 | 184.99327999995785 | 3192.1896389999915 |
| Aggregated | Failed ❌ | 250.0 | 302.94444351157557 | 6.065526445072595 | 0.0 | 1814 | 0 | 184.99327999995785 | 3192.1896389999915 |

1.59.5

What's Changed
* Deepseek r1 support + watsonx qa improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/7907
* (Testing) - Add e2e testing for langfuse logging with tags by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7922
* build(deps): bump undici from 6.21.0 to 6.21.1 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/7902
* (test) add e2e test for proxy with fallbacks + custom fallback message by krrishdholakia in https://github.com/BerriAI/litellm/pull/7933
* (feat) - add `deepseek/deepseek-reasoner` to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7935
* fix(utils.py): move adding custom logger callback to success event in… by krrishdholakia in https://github.com/BerriAI/litellm/pull/7905
* Add `provider_specifc_header` param by krrishdholakia in https://github.com/BerriAI/litellm/pull/7932
* (Refactor) Langfuse - remove `prepare_metadata`, langfuse python SDK now handles non-json serializable objects by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7925


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.3...v1.59.5



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.59.5
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 210.0 | 227.08635060543418 | 6.150672112760015 | 0.0 | 1840 | 0 | 180.76872099999264 | 2652.4827009999967 |
| Aggregated | Passed βœ… | 210.0 | 227.08635060543418 | 6.150672112760015 | 0.0 | 1840 | 0 | 180.76872099999264 | 2652.4827009999967 |

1.59.3

What's Changed
* Update MLflow callback and documentation by B-Step62 in https://github.com/BerriAI/litellm/pull/7809


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.2...v1.59.3



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.59.3
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 200.0 | 229.9985951234699 | 6.27846665942667 | 0.0 | 1879 | 0 | 179.09318400000984 | 3769.753647000016 |
| Aggregated | Passed βœ… | 200.0 | 229.9985951234699 | 6.27846665942667 | 0.0 | 1879 | 0 | 179.09318400000984 | 3769.753647000016 |

1.59.3.dev1

What's Changed
* Deepseek r1 support + watsonx qa improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/7907
* (Testing) - Add e2e testing for langfuse logging with tags by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7922
* build(deps): bump undici from 6.21.0 to 6.21.1 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/7902
* (test) add e2e test for proxy with fallbacks + custom fallback message by krrishdholakia in https://github.com/BerriAI/litellm/pull/7933


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.3...v1.59.3.dev1



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.59.3.dev1
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 230.0 | 259.2853146928995 | 6.073999238925992 | 0.0 | 1817 | 0 | 211.11294400003544 | 2538.129180999988 |
| Aggregated | Passed βœ… | 230.0 | 259.2853146928995 | 6.073999238925992 | 0.0 | 1817 | 0 | 211.11294400003544 | 2538.129180999988 |
