## What's Changed
* Control Model Access by IDP 'groups' by krrishdholakia in https://github.com/BerriAI/litellm/pull/8164
* build(schema.prisma): add new `sso_user_id` to LiteLLM_UserTable by krrishdholakia in https://github.com/BerriAI/litellm/pull/8167
* Litellm dev contributor prs 01 31 2025 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8168
* Improved O3 + Azure O3 support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8181
* test: add more unit testing for team member endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/8170
* Add azure/deepseek-r1 by Klohto in https://github.com/BerriAI/litellm/pull/8177
* [Bug Fix] - `/vertex_ai/` was not detected as llm_api_route on pass through but `vertex-ai` was by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8186
* (UI + SpendLogs) - Store SpendLogs in UTC Timezone, Fix filtering logs by start/end time by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8190
* Azure AI Foundry - Deepseek R1 by elabbarw in https://github.com/BerriAI/litellm/pull/8188
* fix(main.py): fix passing openrouter specific params by krrishdholakia in https://github.com/BerriAI/litellm/pull/8184
* Complete o3 model support by krrishdholakia in https://github.com/BerriAI/litellm/pull/8183
* Easier user onboarding via SSO by krrishdholakia in https://github.com/BerriAI/litellm/pull/8187
* LiteLLM Minor Fixes & Improvements (01/16/2025) - p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7828
* Added deprecation date for gemini-1.5 models by yurchik11 in https://github.com/BerriAI/litellm/pull/8210
* docs: Updating the available VoyageAI models in the docs by fzowl in https://github.com/BerriAI/litellm/pull/8215
* build: ui updates by krrishdholakia in https://github.com/BerriAI/litellm/pull/8206
* Fix tokens for deepseek by SmartManoj in https://github.com/BerriAI/litellm/pull/8207
* (UI Fixes for add new model flow) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8216
* Update xAI provider and fix some old model config by zhaohan-dong in https://github.com/BerriAI/litellm/pull/8218
* Support guardrails `mode` as list, fix valid keys error in pydantic, add more testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/8224
* docs: fix typo in lm_studio.md by foreign-sub in https://github.com/BerriAI/litellm/pull/8222
* (Feat) - New pass through add assembly ai passthrough endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8220
* fix(openai/): allows 'reasoning_effort' param to be passed correctly by krrishdholakia in https://github.com/BerriAI/litellm/pull/8227
## New Contributors
* Klohto made their first contribution in https://github.com/BerriAI/litellm/pull/8177
* zhaohan-dong made their first contribution in https://github.com/BerriAI/litellm/pull/8218
* foreign-sub made their first contribution in https://github.com/BerriAI/litellm/pull/8222
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.60.0...v1.60.2
## Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.2
```
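Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal sketch of a `/chat/completions` request body (the model name and `sk-1234` key below are placeholders — substitute a model configured on your proxy and a valid LiteLLM key):

```python
import json

# Placeholder endpoint for a locally running proxy.
PROXY_URL = "http://localhost:4000/chat/completions"

# OpenAI-compatible request body; "gpt-3.5-turbo" is a placeholder
# for whatever model is configured on the proxy.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# "sk-1234" is a placeholder LiteLLM API key.
headers = {
    "Authorization": "Bearer sk-1234",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
# Send `body` with `headers` to PROXY_URL using any HTTP client,
# e.g. curl -X POST "$PROXY_URL" -H "Authorization: ..." -d "$body"
print(body)
```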
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 170.0 | 187.78 | 6.37 | 0.0 | 1905 | 0 | 135.55 | 3644.02 |
| Aggregated | Passed ✅ | 170.0 | 187.78 | 6.37 | 0.0 | 1905 | 0 | 135.55 | 3644.02 |