## What's Changed
* chore: comment for maritalk by nobu007 in https://github.com/BerriAI/litellm/pull/6607
* Update gpt-4o-2024-08-06, and o1-preview, o1-mini models in model cost map by emerzon in https://github.com/BerriAI/litellm/pull/6654
* (QOL improvement) add unit testing for all static_methods in litellm_logging.py by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6640
* (feat) log error class, function_name on prometheus service failure hook + only log DB related failures on DB service hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6650
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.1...v1.52.6.dev1
## Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.6.dev1
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
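Once the container above is running, the proxy exposes an OpenAI-compatible `/chat/completions` endpoint on port 4000 (the same endpoint exercised in the load test below). A minimal sketch of the request body a client would POST there, assuming one of the models updated in this release is configured on your proxy (the model name and API key are placeholders, not part of this release):

```python
import json

# Hypothetical request payload for the proxy's OpenAI-compatible
# /chat/completions endpoint at http://localhost:4000.
# "gpt-4o-2024-08-06" is one of the models updated in this release;
# substitute whichever model your proxy config actually exposes.
payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Serialize to JSON for the POST body; send with an
# "Authorization: Bearer <your-litellm-key>" header.
body = json.dumps(payload)
print(body)
```

Any OpenAI SDK can be pointed at the proxy the same way by setting its base URL to `http://localhost:4000`.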
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 284.54861130679984 | 6.047368285253406 | 0.0 | 1809 | 0 | 224.15313200008313 | 1935.1971319999848 |
| Aggregated | Passed ✅ | 250.0 | 284.54861130679984 | 6.047368285253406 | 0.0 | 1809 | 0 | 224.15313200008313 | 1935.1971319999848 |