What's Changed
* (perf) Litellm redis router fix - ~100ms improvement by krrishdholakia in https://github.com/BerriAI/litellm/pull/6483
* LiteLLM Minor Fixes & Improvements (10/28/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6475
* Litellm dev 10 29 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6502
* Litellm router max depth by krrishdholakia in https://github.com/BerriAI/litellm/pull/6501
* (UI) fix bug with rendering max budget = 0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6506
* (UI) fix + test displaying number of keys an internal user owns by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6507
* (UI) Fix viewing members, keys in a team + added testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6514
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.1...v1.51.2
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.51.2
```
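Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. Below is a minimal sketch of a request to the `/chat/completions` endpoint referenced in the load test results; the `gpt-3.5-turbo` model name and the `sk-1234` key are placeholder assumptions that depend on your proxy configuration.

```shell
# Example request against a locally running LiteLLM proxy (values are illustrative)
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }'
```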
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 287.73103211135674 | 6.039141910660648 | 0.0 | 1805 | 0 | 213.5724959999834 | 2930.6253560000073 |
| Aggregated | Passed ✅ | 250.0 | 287.73103211135674 | 6.039141910660648 | 0.0 | 1805 | 0 | 213.5724959999834 | 2930.6253560000073 |
v1.51.1-staging
What's Changed
* (perf) Litellm redis router fix - ~100ms improvement by krrishdholakia in https://github.com/BerriAI/litellm/pull/6483
* LiteLLM Minor Fixes & Improvements (10/28/2024) by krrishdholakia in https://github.com/BerriAI/litellm/pull/6475
* Litellm dev 10 29 2024 by krrishdholakia in https://github.com/BerriAI/litellm/pull/6502
* Litellm router max depth by krrishdholakia in https://github.com/BerriAI/litellm/pull/6501
* (UI) fix bug with rendering max budget = 0 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6506
* (UI) fix + test displaying number of keys an internal user owns by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6507
* (UI) Fix viewing members, keys in a team + added testing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/6514
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.51.1...v1.51.1-staging
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.51.1-staging
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 311.93605914725106 | 6.080288332872121 | 0.0033408177653143525 | 1820 | 1 | 117.93499300000576 | 3293.080912999983 |
| Aggregated | Failed ❌ | 270.0 | 311.93605914725106 | 6.080288332872121 | 0.0033408177653143525 | 1820 | 1 | 117.93499300000576 | 3293.080912999983 |