## What's Changed
* chore: comment for maritalk by nobu007 in https://github.com/BerriAI/litellm/pull/6607
* Update gpt-4o-2024-08-06, and o1-preview, o1-mini models in model cost map by emerzon in https://github.com/BerriAI/litellm/pull/6654
* (QOL improvement) add unit testing for all static_methods in litellm_logging.py by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6640
* (feat) log error class, function_name on prometheus service failure hook + only log DB related failures on DB service hook by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6650
* Update several Azure AI models in model cost map by emerzon in https://github.com/BerriAI/litellm/pull/6655
* ci(conftest.py): reset conftest.py for local_testing/ by krrishdholakia in https://github.com/BerriAI/litellm/pull/6657
* Litellm dev 11 07 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6649
## New Contributors
* emerzon made their first contribution in https://github.com/BerriAI/litellm/pull/6654
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.52.1...v1.52.2
## Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.52.2
```
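Once the container above is running, the proxy exposes an OpenAI-compatible API on port 4000. Below is a minimal sketch of building a `/chat/completions` request against it; it assumes the proxy is reachable at `localhost:4000` and uses `gpt-4o` as a placeholder model name (substitute whatever model your proxy is configured with).

```python
import json

# Assumption: the proxy started by the docker command above is listening
# locally on port 4000; "gpt-4o" is a placeholder model name.
PROXY_URL = "http://localhost:4000/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible /chat/completions payload for the proxy."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("gpt-4o", "Hello from the LiteLLM proxy!")
# POST `body` to PROXY_URL (e.g. with urllib.request or requests),
# with a Content-Type: application/json header.
body = json.dumps(payload)
```

Any OpenAI SDK can also be pointed at the proxy by setting its base URL to `http://localhost:4000`.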
Don't want to maintain your internal proxy? Get in touch.

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 251.09411961031876 | 6.087114215107422 | 0.0 | 1822 | 0 | 198.72582000004968 | 1667.4085729999888 |
| Aggregated | Passed ✅ | 230.0 | 251.09411961031876 | 6.087114215107422 | 0.0 | 1822 | 0 | 198.72582000004968 | 1667.4085729999888 |