## What's Changed
* Fix custom pricing - separate provider info from model info by krrishdholakia in https://github.com/BerriAI/litellm/pull/7990
* Litellm dev 01 25 2025 p4 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8006
* (UI) - Adding new models enhancement - show provider logo by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8033
* (UI enhancement) - allow onboarding wildcard models on UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8034
* add openrouter/deepseek/deepseek-r1 by paul-gauthier in https://github.com/BerriAI/litellm/pull/8038
* (UI) - allow assigning wildcard models to a team / key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8041
* Add smolagents by aymeric-roucher in https://github.com/BerriAI/litellm/pull/8026
* (UI) fixes to add model flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8043
* github - run stale issue/pr bot by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8045
* (doc) Add nvidia as provider by raspawar in https://github.com/BerriAI/litellm/pull/8023
* feat(handle_jwt.py): initial commit adding custom RBAC support on jwt… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8037
* fix(utils.py): handle failed hf tokenizer request during calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/8032
* Bedrock document processing fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/8005
* Fix bedrock model pricing + add unit test using bedrock pricing api by krrishdholakia in https://github.com/BerriAI/litellm/pull/7978
* Add openai `metadata` param preview support + new `x-litellm-timeout` request header by krrishdholakia in https://github.com/BerriAI/litellm/pull/8047
* (beta ui - spend logs view fixes & Improvements 1) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8062
* (fix) - proxy reliability, ensure duplicate callbacks are not added to proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8067
* (UI) Fixes for Adding model page - keep existing page as default, have 2nd tab for wildcard models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8073
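The new OpenAI `metadata` param and `x-litellm-timeout` request header from the list above can be sketched as follows. This is a hedged illustration only: the model name, key, and trace ID are placeholders, and it assumes a proxy running on localhost:4000.

```python
import json

# Hypothetical request body/headers for a LiteLLM proxy call.
# "gpt-4o", "sk-1234", and "demo-123" are placeholder values.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "metadata": {"trace_id": "demo-123"},  # preview `metadata` param support
}
headers = {
    "Authorization": "Bearer sk-1234",  # your proxy virtual key
    "x-litellm-timeout": "30",          # new per-request timeout header (seconds)
    "Content-Type": "application/json",
}
body = json.dumps(payload)
# To send it against a running proxy:
# import requests
# resp = requests.post("http://localhost:4000/chat/completions",
#                      headers=headers, data=body)
```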
## New Contributors
* aymeric-roucher made their first contribution in https://github.com/BerriAI/litellm/pull/8026
* raspawar made their first contribution in https://github.com/BerriAI/litellm/pull/8023
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.59.8...v1.59.9
## Docker Run LiteLLM Proxy

```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.59.9
```
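Once the container is up, the proxy exposes an OpenAI-compatible endpoint on the published port. A minimal sketch of a request (not executed here; the model name and key are placeholders, and it assumes a model is configured on the proxy):

```python
import json
import urllib.request

# Build a request against the proxy started by the docker command above.
req = urllib.request.Request(
    "http://localhost:4000/chat/completions",
    data=json.dumps({
        "model": "gpt-4o",  # placeholder: any model configured on the proxy
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Authorization": "Bearer sk-1234",  # placeholder proxy key
        "Content-Type": "application/json",
    },
)
# To actually send it against a running proxy:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```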
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed ❌ | 270.0 | 301.02 | 6.14 | 0.0 | 1837 | 0 | 234.85 | 3027.24 |
| Aggregated | Failed ❌ | 270.0 | 301.02 | 6.14 | 0.0 | 1837 | 0 | 234.85 | 3027.24 |