## What's Changed
* (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes its queue on failures by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7519
* (Fix) - Hashicorp secret manager - don't print hcorp secrets in debug logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7529
* [Bug-Fix]: None metadata not handled for `_PROXY_VirtualKeyModelMaxBudgetLimiter` hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7523
* Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by Manouchehri in https://github.com/BerriAI/litellm/pull/7118
* Fix langfuse prompt management on proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/7535
* (Feat) - Hashicorp secret manager, use TLS cert authentication by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7532
* Fix OTEL message redaction + Langfuse key leak in logs by krrishdholakia in https://github.com/BerriAI/litellm/pull/7516
* feat: implement support for limit, order, before, and after parameters in get_assistants by jeansouzak in https://github.com/BerriAI/litellm/pull/7537
* Add missing prefix for deepseek by SmartManoj in https://github.com/BerriAI/litellm/pull/7508
* (fix) `aiohttp_openai/` route - get to 1K RPS on single instance by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7539
* Revert "feat: implement support for limit, order, before, and after parameters in get_assistants" by krrishdholakia in https://github.com/BerriAI/litellm/pull/7542
* [Feature]: allow printing alert logs to the console by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7534
* (fix proxy perf) use `_read_request_body` instead of ast.literal_eval to get better performance by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7545
## New Contributors
* jeansouzak made their first contribution in https://github.com/BerriAI/litellm/pull/7537
* SmartManoj made their first contribution in https://github.com/BerriAI/litellm/pull/7508
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.8...v1.56.9
## Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.56.9
```
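Once the container is up, you can smoke-test it with an OpenAI-compatible request to the `/chat/completions` route (the same endpoint exercised in the load test below). A minimal sketch, assuming a model named `gpt-3.5-turbo` is configured on the proxy and no master key is set:

```shell
# Hypothetical smoke test: the model name is an assumption;
# add an Authorization header if a master key is configured.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```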
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 269.4 | 6.15 | 0.0 | 1840 | 0 | 211.96 | 2571.21 |
| Aggregated | Passed ✅ | 240.0 | 269.4 | 6.15 | 0.0 | 1840 | 0 | 211.96 | 2571.21 |