## What's Changed
* feat: decrypts aws keys in entrypoint.sh by krrishdholakia in https://github.com/BerriAI/litellm/pull/4437
* fix: replicate - catch 422 unprocessable entity error by krrishdholakia
* fix: router.py - pre-call-checks (if enabled) only check context window limits for azure models if base_model is set by krrishdholakia in https://github.com/BerriAI/litellm/commit/c9a424d28d23b798e1f4c5c00d95cfa0cf0eb13c
* fix: utils.py - correctly raise openrouter content filter error by krrishdholakia in https://github.com/BerriAI/litellm/commit/ca04244a0ab76291a819f0f9a475f5e0706d0808
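For the openrouter content-filter fix above, a minimal sketch of what callers can now rely on, assuming litellm maps the provider's content-filter response to its `ContentPolicyViolationError` exception type (the model name and prompt below are placeholders):

```python
import litellm

try:
    response = litellm.completion(
        model="openrouter/openai/gpt-3.5-turbo",  # placeholder model
        messages=[{"role": "user", "content": "..."}],
    )
except litellm.ContentPolicyViolationError as e:
    # With this fix, an openrouter content-filter response surfaces
    # as this exception rather than a generic error.
    print(f"Content filtered: {e}")
```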
Note: This release changes how pre-call-checks run for azure models. Filtering models based on context window limits will only apply to azure models if `base_model` is set.
To enable pre-call-checks 👉 https://docs.litellm.ai/docs/routing#pre-call-checks-context-window-eu-regions
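A minimal sketch of enabling pre-call checks with `base_model` set on an Azure deployment (the deployment name and credentials are placeholders):

```python
import os
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {
                "model": "azure/my-gpt-4-deployment",  # placeholder deployment
                "api_key": os.getenv("AZURE_API_KEY"),
                "api_base": os.getenv("AZURE_API_BASE"),
            },
            # Without base_model, context-window filtering is skipped
            # for this azure deployment.
            "model_info": {"base_model": "azure/gpt-4-1106-preview"},
        }
    ],
    enable_pre_call_checks=True,  # filter deployments by context window before each call
)
```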
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.31...v1.41.0
## Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.41.0
```
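Once the container is running, a minimal sketch of calling the proxy with the OpenAI Python client (the API key and model name are placeholders for whatever is configured on your proxy):

```python
import openai

# Point the OpenAI client at the local LiteLLM proxy.
client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder: any model configured on the proxy
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```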
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 130.0 | 156.94 | 6.27 | 0.0 | 1877 | 0 | 112.85 | 1745.29 |
| Aggregated | Passed ✅ | 130.0 | 156.94 | 6.27 | 0.0 | 1877 | 0 | 112.85 | 1745.29 |