LiteLLM


1.37.14

What's Changed
* Work with custom `LANGSMITH_BASE_URL` by msabramo in https://github.com/BerriAI/litellm/pull/3703
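
Since `LANGSMITH_BASE_URL` is read from the environment by the LangSmith logger, pointing it at a self-hosted instance looks roughly like the sketch below. The URL and key are placeholders, and `langsmith` as a `success_callback` value is the existing LiteLLM integration this change extends.

```python
import os

import litellm

# Placeholder self-hosted LangSmith endpoint; without LANGSMITH_BASE_URL the
# logger falls back to the hosted LangSmith API.
os.environ["LANGSMITH_BASE_URL"] = "https://langsmith.example.internal"
os.environ["LANGSMITH_API_KEY"] = "ls-placeholder-key"

litellm.success_callback = ["langsmith"]  # log successful calls to LangSmith

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
```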


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.13...v1.37.14



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.14
```
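
Once the container is up, the proxy speaks the OpenAI wire format, so any OpenAI client can point at it. A minimal sketch, assuming a model is already registered on the proxy and that `sk-1234` is your configured master key (both are setup-specific):

```python
import openai

# base_url targets the proxy started by the docker command above;
# "sk-1234" is a placeholder for your proxy master key or a virtual key.
client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # must match a model_name configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)
```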



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 9 | 11.89 | 1.63 | 1.63 | 488 | 488 | 7.55 | 178.09 |
| /health/liveliness | Failed ❌ | 8 | 10.90 | 15.53 | 15.53 | 4650 | 4650 | 6.33 | 907.12 |
| /health/readiness | Failed ❌ | 8 | 11.16 | 15.69 | 15.69 | 4697 | 4697 | 6.46 | 1189.81 |
| Aggregated | Failed ❌ | 8 | 11.07 | 32.84 | 32.84 | 9835 | 9835 | 6.33 | 1189.81 |

v1.37.13-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.13...v1.37.13-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.13-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

1.37.13

What's Changed
* [Fix] - router/proxy: show better client-side errors when `no_healthy deployments available` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3679
* [Fix] Flush langfuse logs on proxy shutdown by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3681
* Allow non-admins to use `/engines/{model}/chat/completions` by msabramo in https://github.com/BerriAI/litellm/pull/3663
* Fix `datetime.datetime.utcnow` `DeprecationWarning` by msabramo in https://github.com/BerriAI/litellm/pull/3686
* [Fix] - include model name in cool down alerts by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3690
* feat(lago.py): Enable Usage-based billing with lago by krrishdholakia in https://github.com/BerriAI/litellm/pull/3685
* [UI] End User Spend - Fix Timezone diff bug by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3692
* [Feat] `token_counter` endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3682 (see the sketch after this list)
* Timeout param: custom_llm_provider needs to be set before setting timeout by edwinjosegeorge in https://github.com/BerriAI/litellm/pull/3645
* [Fix] AI Studio (Gemini API) returns invalid 1 index instead of 0 when "stream": false by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3693
* fix(proxy_server.py): check + get end-user obj even for master key calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/3575
* [Feat] Support Anthropic `tools-2024-05-16` - set custom Anthropic headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3694 (docs link and sketch below)
* [Feat] Admin UI - show model prices as Per 1M tokens by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3696
* Add commented `set_verbose` line to proxy_config by msabramo in https://github.com/BerriAI/litellm/pull/3699
* [Fix] Polish Models Page - set max width per column, fix bug with selecting models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3698
* [UI] Fix Round Team Spend, and Show Key Alias on Top API Keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3700
* [Fix] allow users to opt into specific alert types + Introduce `spend_report` alert type by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3702
* fix(replicate.py): move replicate calls to being async by krrishdholakia in https://github.com/BerriAI/litellm/pull/3704
* [FEAT] add cost tracking for Fine Tuned OpenAI `ft:davinci-002` and `ft:babbage-002` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3705
* Exclude custom headers from response if the value is None or empty string by paneru-rajan in https://github.com/BerriAI/litellm/pull/3701
* Fix(router.py): Kill a bug that forced Azure OpenAI to have an API ke… by Manouchehri in https://github.com/BerriAI/litellm/pull/3706
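
For the `token_counter` endpoint above, the library-level helper it presumably wraps can be called directly; a minimal sketch:

```python
import litellm

messages = [{"role": "user", "content": "How many tokens is this request?"}]

# Counts tokens using the tokenizer for the given model; PR #3682 exposes
# this kind of count over the proxy as a token_counter endpoint.
n_tokens = litellm.token_counter(model="gpt-3.5-turbo", messages=messages)
print(n_tokens)
```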

https://docs.litellm.ai/docs/providers/anthropic#forcing-anthropic-tool-use
![codeimage-snippet_17 (2)](https://github.com/BerriAI/litellm/assets/29436595/c3a9e266-0465-4b1d-8029-8d39ee3e3a53)
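
A minimal sketch of forcing Anthropic tool use through `litellm.completion`, per the docs link above. The tool definition and model name are illustrative, and the `anthropic-beta` header value mirrors the `tools-2024-05-16` version named in the PR:

```python
from litellm import completion

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = completion(
    model="claude-3-opus-20240229",  # any tool-capable Anthropic model
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=tools,
    # Force the model to call get_weather instead of answering in prose
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
    # Custom Anthropic headers (PR #3694); value taken from the release note
    extra_headers={"anthropic-beta": "tools-2024-05-16"},
)
print(response.choices[0].message.tool_calls)
```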



New Contributors
* edwinjosegeorge made their first contribution in https://github.com/BerriAI/litellm/pull/3645

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.12...v1.37.13



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.13
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat



v1.37.12-stable
What's Changed
* feat(proxy_server.py): JWT-Auth improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/3666
* feat(proxy_server.py): new `/end_user/info` endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/3652


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.11...v1.37.12-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.12-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

1.37.12

What's Changed
* feat(proxy_server.py): JWT-Auth improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/3666
* feat(proxy_server.py): new `/end_user/info` endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/3652


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.11...v1.37.12



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.12
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

1.37.11

What's Changed
* feat(proxy_server.py): Enabling Admin to control general settings on proxy ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/3660
* [Fix] Mask API Keys from Predibase AuthenticationErrors by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3662
* [FIX] raise alerts for exceptions on `/completions` endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3661
* Updated Ollama cost models to include LLaMa3 and Mistral/Mixtral Instruct series by kmheckel in https://github.com/BerriAI/litellm/pull/3543

New Contributors
* kmheckel made their first contribution in https://github.com/BerriAI/litellm/pull/3543

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.10...v1.37.11



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.11
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






1.37.10

What's Changed
* Fix `pkg_resources` warning by msabramo in https://github.com/BerriAI/litellm/pull/3602
* Update pydantic code to fix warnings by msabramo in https://github.com/BerriAI/litellm/pull/3600
* Add ability to customize slack report frequency by msabramo in https://github.com/BerriAI/litellm/pull/3622
* Duplicate code by rkataria1000 in https://github.com/BerriAI/litellm/pull/3594
* [Feature] Add cache to disk by antonioloison in https://github.com/BerriAI/litellm/pull/3266 (see the sketch after this list)
* Logfire Integration by elisalimli in https://github.com/BerriAI/litellm/pull/3444
* Ignore 0 failures and 0s latency in daily slack reports by taralika in https://github.com/BerriAI/litellm/pull/3599
* feat - reset spend per team, api_key [Only Master Key] by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3626
* docs - use discord alerting by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3634
* Revert "Logfire Integration" by krrishdholakia in https://github.com/BerriAI/litellm/pull/3637
* [Feat] Proxy - cancel tasks when fast api request is cancelled by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3640
* [Feat] Proxy + router - don't cool down on 4XX errors that are not 429, 408, 401 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3651
* cloned gpt-4o models into openrouter/openai in costs&context.json by paul-gauthier in https://github.com/BerriAI/litellm/pull/3647
* [Fix] - Alerting on `/completions` - don't raise hanging request alert for /completions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3653
* Fix Proxy Server - only show API base and model in server log exceptions, not on the client side by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3655
* [Fix] Revert 3600 https://github.com/BerriAI/litellm/pull/3600 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3664
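
A minimal sketch of the new disk cache from PR #3266. `type="disk"` avoids needing Redis for local setups; `disk_cache_dir` is an assumption on my part for where entries land, so verify the parameter name against the caching docs.

```python
import litellm
from litellm import completion
from litellm.caching import Cache

# type="disk" persists cache entries locally instead of requiring Redis;
# disk_cache_dir is assumed here; check the caching docs for the exact name.
litellm.cache = Cache(type="disk", disk_cache_dir="/tmp/litellm-cache")

# Identical calls after the first should be answered from the on-disk cache.
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    caching=True,
)
```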

New Contributors
* rkataria1000 made their first contribution in https://github.com/BerriAI/litellm/pull/3594
* antonioloison made their first contribution in https://github.com/BerriAI/litellm/pull/3266
* taralika made their first contribution in https://github.com/BerriAI/litellm/pull/3599

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.9...v1.37.10



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.10
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

v1.37.9-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.9...v1.37.9-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.9-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

1.37.9

What's Changed
* feat(langfuse.py): Allow for individual call message/response redaction by alexanderepstein in https://github.com/BerriAI/litellm/pull/3603
* [Feat] - `/global/spend/report` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3619
* Fixes 3544 based on the data-type of message by paneru-rajan in https://github.com/BerriAI/litellm/pull/3554
* [UI] Filter Tag Spend by Date + Show Bar Chart by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3624
* Default routing fallbacks by krrishdholakia in https://github.com/BerriAI/litellm/pull/3625
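
A sketch of how default fallbacks might be configured on the `Router`; `default_fallbacks` is my reading of the parameter added in PR #3625, so treat the name as an assumption. Assumes `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are set in the environment.

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-4o", "litellm_params": {"model": "openai/gpt-4o"}},
        {"model_name": "claude-backup", "litellm_params": {"model": "anthropic/claude-3-haiku-20240307"}},
    ],
    # Assumed parameter from PR #3625: fallbacks tried for any model that
    # fails and has no model-specific fallback configured.
    default_fallbacks=["claude-backup"],
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
)
```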


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.7...v1.37.9



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.9
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 40 | 45.15 | 1.51 | 1.51 | 451 | 451 | 37.29 | 203.69 |
| /health/liveliness | Failed ❌ | 38 | 43.77 | 15.66 | 15.66 | 4687 | 4687 | 36.20 | 219.30 |
| /health/readiness | Failed ❌ | 38 | 42.99 | 15.31 | 15.31 | 4584 | 4584 | 36.15 | 234.45 |
| Aggregated | Failed ❌ | 38 | 43.47 | 32.48 | 32.48 | 9722 | 9722 | 36.15 | 234.45 |

v1.37.7-stable
What's Changed
* feat(langfuse.py): Allow for individual call message/response redaction by alexanderepstein in https://github.com/BerriAI/litellm/pull/3603
* [Feat] - `/global/spend/report` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3619
* Fixes 3544 based on the data-type of message by paneru-rajan in https://github.com/BerriAI/litellm/pull/3554
* [UI] Filter Tag Spend by Date + Show Bar Chart by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3624


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.7...v1.37.7-stable



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.7-stable
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
