## What's Changed
* Upgrade Traceloop to version 0.18.2 by elisalimli in https://github.com/BerriAI/litellm/pull/3727
* usage-based-routing-ttl-on-cache by sumanth13131 in https://github.com/BerriAI/litellm/pull/3412
* Revert "Revert "Logfire Integration"" by elisalimli in https://github.com/BerriAI/litellm/pull/3756
* docs - add bedrock meta llama3 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3763
* [Cohere] Add request source to request by BeatrixCohere in https://github.com/BerriAI/litellm/pull/3759
* [Fix] Bump OpenAI version on Litellm PIP package [OpenAI>=1.27.0] by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3765
* Support anthropic 'tool_choice' param by krrishdholakia in https://github.com/BerriAI/litellm/pull/3771
* [Feat] Proxy - Create Keys that can only access `/spend` routes on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3772
* feat(lowest_latency.py): route by time to first token, for streaming requests (if available) by krrishdholakia in https://github.com/BerriAI/litellm/pull/3768
* feat(router.py): filter out deployments which don't support request params w/ 'pre_call_checks=True' by krrishdholakia in https://github.com/BerriAI/litellm/pull/3770
## New Contributors
* BeatrixCohere made their first contribution in https://github.com/BerriAI/litellm/pull/3759
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.19...v1.37.20
## Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.37.20
```
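Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal sketch of a chat-completions request against it — the model name is a placeholder, not a value from this release; use whatever model you configured on the proxy:

```python
import json
from urllib import request

# OpenAI-compatible chat-completions payload; "gpt-3.5-turbo" is a
# placeholder model name, substitute one configured on your proxy.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello from the LiteLLM proxy"}],
}

req = request.Request(
    "http://localhost:4000/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req) sends the call once the container is running.
```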
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## v1.37.19-stable
🚨 SSO on LiteLLM Proxy will be enforced behind a license from this release.
* If you use SSO on the LiteLLM Admin UI + Proxy and want a license, meet with us here: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## What's Changed
* [Fix] only run `check_request_disconnection` logic for maximum 10 mins by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3741
* Adding decoding of base64 image data for gemini pro 1.5 by hmcp22 in https://github.com/BerriAI/litellm/pull/3711
* [Feat] Enforce user has a valid license when using SSO on LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3742
* [FEAT] Async VertexAI Image Generation by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3739
* [Feat] Router/ Proxy - set cooldown_time based on Azure exception headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3716
* fix divide by 0 bug on slack alerting by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3745
* Standardize slack exception msg format by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3747
* Another dictionary changed size during iteration error by phact in https://github.com/BerriAI/litellm/pull/3657
* feat(proxy_server.py): allow admin to return rejected response as string to user by krrishdholakia in https://github.com/BerriAI/litellm/pull/3740
* [Fix] - raise 404 from `/team/info` when team does not exist by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3749
* webhook support for budget alerts by krrishdholakia in https://github.com/BerriAI/litellm/pull/3748
* [Fix] - raise Exception when trying to update/delete a non-existent team by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3750
* [FEAT] - add litellm.Router - `abatch_completion_one_model_multiple_requests` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3751
## New Contributors
* hmcp22 made their first contribution in https://github.com/BerriAI/litellm/pull/3711
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.37.17...v1.37.19-stable
## Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.37.19-stable
```
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat