## Major Changes
- LiteLLM Proxy now uses Gunicorn by default
If you use the LiteLLM Dockerfile or images, no changes are required. If you're using the litellm pip package, just run `pip install 'litellm[proxy]' -U`
- Support for `litellm.ContentPolicyViolationError`: catch these errors from image generation models
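A minimal sketch of catching the new exception around an image generation call. This assumes litellm >= 1.17.0 with a configured provider API key; the model name and prompt are illustrative, not from the release notes.

```python
import litellm

try:
    # litellm.image_generation routes to the provider's image API
    response = litellm.image_generation(
        model="dall-e-3",
        prompt="a watercolor painting of a lighthouse",
    )
    print(response.data[0].url)
except litellm.ContentPolicyViolationError as e:
    # Raised when the provider rejects the request for content-policy reasons
    print(f"Prompt rejected by the provider's content policy: {e}")
```

Catching `ContentPolicyViolationError` separately lets you surface a user-facing message (or retry with a revised prompt) instead of treating policy rejections like generic API failures.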
## LiteLLM Proxy Dockerfiles
- Dockerfile.database has `litellm` as the entrypoint, so you can pass litellm CLI args: https://github.com/BerriAI/litellm/blob/b103ca3960a8c42de09dd8c9ecfdf379bf298bba/Dockerfile.database#L59 cc @Manouchehri
- Use https://github.com/BerriAI/litellm/pkgs/container/litellm for calling LLM APIs (without Virtual keys)
- Use https://github.com/BerriAI/litellm/pkgs/container/litellm-database for calling LLM APIs + Virtual Keys (this build has an optimized cold boot when using Prisma, the DB provider)
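A rough sketch of running the two published images. The image tags, port, model name, and `DATABASE_URL` value below are assumptions for illustration; check the package pages linked above for current tags.

```shell
# LLM APIs only (no Virtual Keys):
docker run -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --model gpt-3.5-turbo --port 4000

# LLM APIs + Virtual Keys; the -database build expects a Postgres
# connection string for Prisma:
docker run -p 4000:4000 \
  -e DATABASE_URL="postgresql://user:pass@host:5432/db" \
  ghcr.io/berriai/litellm-database:main-latest \
  --port 4000
```

Because `litellm` is the container entrypoint, anything after the image name is passed straight through as litellm CLI args.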
## What's Changed
* [Feat] Add litellm.ContentPolicyViolationError by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1381
* (fix): Self-serve UI, AUTH link generation error by speedyankur in https://github.com/BerriAI/litellm/pull/1385
* (fix): Self-serve UI, AUTH link generation error by speedyankur in https://github.com/BerriAI/litellm/pull/1386
* (fix): Self-serve UI, AUTH link generation error by speedyankur in https://github.com/BerriAI/litellm/pull/1391
* (caching) Fix incorrect usage of str, which created invalid JSON. by Manouchehri in https://github.com/BerriAI/litellm/pull/1390
* Litellm dockerfile testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/1402
* fix(lowest_latency.py): add back tpm/rpm checks, configurable time window support, improved latency tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/1403
* LiteLLM Proxy - Use Gunicorn with Uvicorn workers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1399
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.16.21...v1.17.0