LiteLLM

Latest version: v1.52.14


1.48.18

What's Changed
* fix(utils.py): fix pydantic obj to schema creation for vertex en… by krrishdholakia in https://github.com/BerriAI/litellm/pull/6071
* Proxy: include customer budget in responses by kvadros in https://github.com/BerriAI/litellm/pull/5977
* (proxy ui) - fix view user pagination by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6094
* (proxy ui sso flow) - fix invite user sso flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6093
* (bug fix) TTL not being set for embedding caching requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6095 (see the cache-control sketch below)
* (feat proxy) add v2 maintained LiteLLM grafana dashboard by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6098

New Contributors
* kvadros made their first contribution in https://github.com/BerriAI/litellm/pull/5977

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.17...v1.48.18
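To illustrate the TTL fix above: a minimal sketch of per-request cache controls with the LiteLLM SDK, assuming an OpenAI key in the environment; the model name and 60-second TTL are placeholders, not values from the release:

```python
import litellm
from litellm.caching import Cache

# Enable LiteLLM's in-process cache; repeated identical embedding requests
# are served from cache until the per-request TTL expires.
litellm.cache = Cache(type="local")

kwargs = dict(
    model="text-embedding-3-small",  # assumed model, for illustration only
    input=["hello world"],
    caching=True,
    cache={"ttl": 60},               # cache control: expire this entry after 60s
)
first = litellm.embedding(**kwargs)   # cache miss: hits the provider
second = litellm.embedding(**kwargs)  # cache hit within 60s: served locally
```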



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.18
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
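Once the container is up, any OpenAI-compatible client can point at the proxy. A minimal sketch using the `openai` Python SDK: the base URL matches the port published above, while the API key and model alias are placeholders for whatever you configure on the proxy:

```python
from openai import OpenAI

# Talk to the local LiteLLM proxy instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:4000",  # port published via -p 4000:4000 above
    api_key="sk-1234",                 # hypothetical key; use your real master/virtual key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model alias configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```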






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 150.0 | 179.59820728602008 | 6.264331807633761 | 0.0 | 1874 | 0 | 123.93443999997089 | 1518.5208869999656 |
| Aggregated | Passed βœ… | 150.0 | 179.59820728602008 | 6.264331807633761 | 0.0 | 1874 | 0 | 123.93443999997089 | 1518.5208869999656 |

v1.48.17-stable

What's Changed
* Add pyright to ci/cd + Fix remaining type-checking errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/6082
* LiteLLM Minor Fixes & Improvements (10/05/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6083
* Litellm expose disable schema update flag by krrishdholakia in https://github.com/BerriAI/litellm/pull/6085


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.16...v1.48.17-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.17-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 92 | 107.17307300103636 | 6.4589942522035875 | 0.0 | 1932 | 0 | 75.81280799990964 | 2287.0391480000194 |
| Aggregated | Passed βœ… | 92 | 107.17307300103636 | 6.4589942522035875 | 0.0 | 1932 | 0 | 75.81280799990964 | 2287.0391480000194 |

1.48.18.dev2

What's Changed
* [docs] fix links due to broken list in enterprise features by pradhyumna85 in https://github.com/BerriAI/litellm/pull/6103
* (docs) key based callbacks - add info on behavior by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6107
* (docs) add remaining litellm settings on configs.md doc by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6108
* (clean up) move docker files from root to `docker` folder by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6109


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.18...1.48.18.dev2



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-1.48.18.dev2
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 98 | 113.12898091455176 | 6.453603686639764 | 0.0 | 1931 | 0 | 74.54381100001228 | 1944.8568819999537 |
| Aggregated | Passed βœ… | 98 | 113.12898091455176 | 6.453603686639764 | 0.0 | 1931 | 0 | 74.54381100001228 | 1944.8568819999537 |

1.48.17

What's Changed
* Add pyright to ci/cd + Fix remaining type-checking errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/6082
* LiteLLM Minor Fixes & Improvements (10/05/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6083
* Litellm expose disable schema update flag by krrishdholakia in https://github.com/BerriAI/litellm/pull/6085


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.16...v1.48.17



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.17
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 150.0 | 174.98607575808006 | 6.41385669690212 | 0.0 | 1918 | 0 | 124.5531670000446 | 1759.9385870000788 |
| Aggregated | Passed βœ… | 150.0 | 174.98607575808006 | 6.41385669690212 | 0.0 | 1918 | 0 | 124.5531670000446 | 1759.9385870000788 |

v1.48.16-stable

What's Changed
* (feat) add azure o1 models to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6075
* (feat) add cost tracking for OpenAI prompt caching by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6055
* (docs) add links / sections for router settings, general settings on proxy config.yaml by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6078
* (feat) add azure openai cost tracking for prompt caching by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6077
* openrouter/openai's litellm_provider should be openrouter, not openai by GTonehour in https://github.com/BerriAI/litellm/pull/6079
* (code clean up) use a folder for gcs bucket logging + add readme in folder by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6080

New Contributors
* GTonehour made their first contribution in https://github.com/BerriAI/litellm/pull/6079

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.15...v1.48.16-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.16-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 150.0 | 278.77820193060205 | 6.113357772864492 | 0.003340632662767482 | 1830 | 1 | 89.60288500003344 | 38293.597436000025 |
| Aggregated | Passed βœ… | 150.0 | 278.77820193060205 | 6.113357772864492 | 0.003340632662767482 | 1830 | 1 | 89.60288500003344 | 38293.597436000025 |

1.48.16

What's Changed
* (feat) add azure o1 models to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6075
* (feat) add cost tracking for OpenAI prompt caching by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6055 (see the sketch below)
* (docs) add links / sections for router settings, general settings on proxy config.yaml by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6078
* (feat) add azure openai cost tracking for prompt caching by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6077
* openrouter/openai's litellm_provider should be openrouter, not openai by GTonehour in https://github.com/BerriAI/litellm/pull/6079
* (code clean up) use a folder for gcs bucket logging + add readme in folder by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6080

New Contributors
* GTonehour made their first contribution in https://github.com/BerriAI/litellm/pull/6079

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.15...v1.48.16
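To see what the prompt-caching cost tracking above keys off: a minimal sketch with the LiteLLM SDK, assuming an OpenAI key in the environment; the model name and prompt are placeholders, and the `cached_tokens` field only appears for prompt-caching-capable models:

```python
import litellm

response = litellm.completion(
    model="gpt-4o-mini",  # assumed prompt-caching-capable model
    messages=[{"role": "user", "content": "Summarize these release notes."}],
)

# OpenAI reports cached prompt tokens under usage.prompt_tokens_details;
# cost tracking can then price cached tokens at the discounted rate.
usage = response.usage
details = getattr(usage, "prompt_tokens_details", None)
cached = getattr(details, "cached_tokens", 0) if details else 0
print(f"prompt tokens: {usage.prompt_tokens}, cached: {cached}")
print(f"estimated cost: ${litellm.completion_cost(completion_response=response):.6f}")
```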



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.16
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 150.0 | 178.4167077446905 | 6.294411321333372 | 0.0 | 1884 | 0 | 124.05050000006668 | 1785.5170410000483 |
| Aggregated | Passed βœ… | 150.0 | 178.4167077446905 | 6.294411321333372 | 0.0 | 1884 | 0 | 124.05050000006668 | 1785.5170410000483 |

1.48.15

What's Changed
* (docs) router settings - on litellm config by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6037
* (feat) OpenAI prompt caching models to model cost map by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6063
* LiteLLM Minor Fixes & Improvements (10/04/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6064
* fix(gcs_bucket.py): show error response text in exception by krrishdholakia in https://github.com/BerriAI/litellm/pull/6072
* (feat) add /key/health endpoint to test key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6073 (see the sketch below)


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.14...v1.48.15
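To exercise the new /key/health endpoint: a minimal sketch with `requests`; the proxy URL and virtual key are placeholders, and the POST shape is an assumption based on the PR's description of a key-scoped logging health check:

```python
import requests

PROXY_URL = "http://localhost:4000"  # assumed local proxy address
VIRTUAL_KEY = "sk-my-virtual-key"    # hypothetical key whose logging config is being tested

# Ask the proxy whether key-based logging callbacks for this key are healthy.
resp = requests.post(
    f"{PROXY_URL}/key/health",
    headers={"Authorization": f"Bearer {VIRTUAL_KEY}"},
)
resp.raise_for_status()
print(resp.json())
```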



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.15
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 140.0 | 166.19756279801152 | 6.391191282424009 | 0.0 | 1911 | 0 | 122.57483899998078 | 934.5902409999667 |
| Aggregated | Passed βœ… | 140.0 | 166.19756279801152 | 6.391191282424009 | 0.0 | 1911 | 0 | 122.57483899998078 | 934.5902409999667 |

v1.48.14-stable

What's Changed
* fix(utils.py): return openai streaming prompt caching tokens by krrishdholakia in https://github.com/BerriAI/litellm/pull/6051
* (fixes) gcs bucket key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6044
* (fix prometheus) track cooldown events for llm deployments by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6060
* (docs) add 1k rps load test doc by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6059
* (fixes) docs + qa - gcs key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6061


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.12...v1.48.14-stable



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.14-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 140.0 | 157.70601028721856 | 6.354251502736087 | 0.0 | 1901 | 0 | 109.31232299998328 | 1592.3886319999951 |
| Aggregated | Passed βœ… | 140.0 | 157.70601028721856 | 6.354251502736087 | 0.0 | 1901 | 0 | 109.31232299998328 | 1592.3886319999951 |

1.48.14

What's Changed
* fix(utils.py): return openai streaming prompt caching tokens by krrishdholakia in https://github.com/BerriAI/litellm/pull/6051
* (fixes) gcs bucket key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6044
* (fix prometheus) track cooldown events for llm deployments by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6060
* (docs) add 1k rps load test doc by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6059
* (fixes) docs + qa - gcs key based logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6061


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.48.12...v1.48.14



Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.14
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 140.0 | 163.27994546016794 | 6.375151479543482 | 0.0 | 1908 | 0 | 108.71396600003891 | 2362.470617999975 |
| Aggregated | Passed βœ… | 140.0 | 163.27994546016794 | 6.375151479543482 | 0.0 | 1908 | 0 | 108.71396600003891 | 2362.470617999975 |
