LiteLLM

Latest version: v1.65.4.post1


1.63.14.rc

What's Changed
* Modify completion handler for SageMaker to use payload from `prepared_request` by andjsmi in https://github.com/BerriAI/litellm/pull/9326
* Arize integration Fix by nate-mar in https://github.com/BerriAI/litellm/pull/9338
* Fix get_llm_provider error + add model name to tpm/rpm cache key (enables wildcard models to work w/ usage-based-routing) by krrishdholakia in https://github.com/BerriAI/litellm/pull/9355
* Litellm dev 03 18 2025 p1 v3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9354
* [Bug Fix] Arize AI Logging Integration with LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9352
* build(model_prices_and_context_window.json): fix azure gpt-4o pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/9361
* Contributor PR: Fix sagemaker too little data for content error by krrishdholakia in https://github.com/BerriAI/litellm/pull/9335
* [Feat] - API - Allow using dynamic Arize AI Spaces on LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9353
* fix(common_utils.py): handle cris only model by krrishdholakia in https://github.com/BerriAI/litellm/pull/9363
* docs(litellm_proxy): correct parameter assignment in litellm proxy docs by colesmcintosh in https://github.com/BerriAI/litellm/pull/9375
* Feature flag checking LiteLLM_CredentialsTable by krrishdholakia in https://github.com/BerriAI/litellm/pull/9376
* fix(lowest_tpm_rpm_v2.py): support batch writing increments to redis by krrishdholakia in https://github.com/BerriAI/litellm/pull/9357
* Support 'prisma migrate' for db schema changes by krrishdholakia in https://github.com/BerriAI/litellm/pull/9382
* Fix latency redis by emerzon in https://github.com/BerriAI/litellm/pull/9387
* Revert "Fix latency redis" by krrishdholakia in https://github.com/BerriAI/litellm/pull/9388
* build(model_prices_and_context_window.json): add o1-pro pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/9397
* [Bug Fix] - Azure OpenAI - ensure SSL verification runs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9341
* [Feat] - Allow building custom prompt management integration by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9384
* Litellm fix icons by azdolinski in https://github.com/BerriAI/litellm/pull/9374
* [UI Improvement] Use local icons for model providers instead of downloading them by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9399
* fix(internal_user_endpoints.py): re-introduce upsert on user not found by krrishdholakia in https://github.com/BerriAI/litellm/pull/9395
* docs: Include Phoenix Page into sidebar under observability by SrilakshmiC in https://github.com/BerriAI/litellm/pull/9332
* fix(types/utils.py): support openai 'file' message type by krrishdholakia in https://github.com/BerriAI/litellm/pull/9402
* feat: Add support for custom OPENROUTER_API_BASE via get_secret in co… by graysonchen in https://github.com/BerriAI/litellm/pull/9369
* fix: VertexAI outputDimensionality configuration by JamesGuthrie in https://github.com/BerriAI/litellm/pull/9366
* docs(vertexai): fix typo in required env variables by Shiti in https://github.com/BerriAI/litellm/pull/9325
* Update perplexity.md by jollyolliel in https://github.com/BerriAI/litellm/pull/9290
* fix: VoyageAI `prompt_token` always empty by lucasra1 in https://github.com/BerriAI/litellm/pull/9260
* build(deps): bump litellm from 1.55.3 to 1.61.15 in /cookbook/litellm-ollama-docker-image by dependabot in https://github.com/BerriAI/litellm/pull/9422
* [Feat] OpenAI o1-pro Responses API streaming support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9419
* [Feat] Add OpenAI o1-pro support on Responses API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9415
* [Docs - Draft] LiteLLM x MCP Interface by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9435
* support returning api-base on pass-through endpoints + consistently return 404 if team not found in DB by krrishdholakia in https://github.com/BerriAI/litellm/pull/9439
* fix(handle_error.py): make cooldown error more descriptive by krrishdholakia in https://github.com/BerriAI/litellm/pull/9438
* Consistent anthropic response_format streaming/non-streaming behaviour by krrishdholakia in https://github.com/BerriAI/litellm/pull/9437
* New Azure Models (GPT-4.5-Preview, Mistral Small 3.1) by emerzon in https://github.com/BerriAI/litellm/pull/9453
* Set max size limit to in-memory cache item - prevents OOM errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/9448
* fix(model_param_helper.py): update `_get_litellm_supported_transcription_kwargs()` to use proper annotations from `TranscriptionCreateParamsNonStreaming` & `TranscriptionCreateParamsStreaming` by hsaeed3 in https://github.com/BerriAI/litellm/pull/9451
* [Feat] LiteLLM x MCP Bridge - Use MCP Tools with LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9436
* fix(model_param_helper.py): update `_get_litellm_supported_transcription_kwargs()` to use proper annotations from `TranscriptionCreateParamsNonStreaming` & `TranscriptionCreateParamsStreaming` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9455

New Contributors
* andjsmi made their first contribution in https://github.com/BerriAI/litellm/pull/9326
* azdolinski made their first contribution in https://github.com/BerriAI/litellm/pull/9374
* SrilakshmiC made their first contribution in https://github.com/BerriAI/litellm/pull/9332
* graysonchen made their first contribution in https://github.com/BerriAI/litellm/pull/9369
* JamesGuthrie made their first contribution in https://github.com/BerriAI/litellm/pull/9366
* Shiti made their first contribution in https://github.com/BerriAI/litellm/pull/9325
* jollyolliel made their first contribution in https://github.com/BerriAI/litellm/pull/9290
* hsaeed3 made their first contribution in https://github.com/BerriAI/litellm/pull/9451

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.12-nightly...v1.63.14.rc



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.14.rc
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
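
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal smoke-test sketch, assuming the proxy runs on localhost with a master key of `sk-1234` and a model alias `gpt-4o` configured (all placeholders):

```python
# Hedged sketch: call a locally running LiteLLM proxy with the OpenAI SDK.
# Assumptions: proxy at http://localhost:4000, master key "sk-1234",
# and a model alias "gpt-4o" configured on the proxy.
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```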

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 230.0 | 249.89683940656786 | 6.2068378536570386 | 0.0 | 1857 | 0 | 196.31636200006142 | 3500.2345190000597 |
| Aggregated | Passed βœ… | 230.0 | 249.89683940656786 | 6.2068378536570386 | 0.0 | 1857 | 0 | 196.31636200006142 | 3500.2345190000597 |

v1.63.14-nightly
What's Changed
* Modify completion handler for SageMaker to use payload from `prepared_request` by andjsmi in https://github.com/BerriAI/litellm/pull/9326
* Arize integration Fix by nate-mar in https://github.com/BerriAI/litellm/pull/9338
* Fix get_llm_provider error + add model name to tpm/rpm cache key (enables wildcard models to work w/ usage-based-routing) by krrishdholakia in https://github.com/BerriAI/litellm/pull/9355
* Litellm dev 03 18 2025 p1 v3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9354
* [Bug Fix] Arize AI Logging Integration with LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9352
* build(model_prices_and_context_window.json): fix azure gpt-4o pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/9361
* Contributor PR: Fix sagemaker too little data for content error by krrishdholakia in https://github.com/BerriAI/litellm/pull/9335
* [Feat] - API - Allow using dynamic Arize AI Spaces on LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9353
* fix(common_utils.py): handle cris only model by krrishdholakia in https://github.com/BerriAI/litellm/pull/9363
* docs(litellm_proxy): correct parameter assignment in litellm proxy docs by colesmcintosh in https://github.com/BerriAI/litellm/pull/9375
* Feature flag checking LiteLLM_CredentialsTable by krrishdholakia in https://github.com/BerriAI/litellm/pull/9376
* fix(lowest_tpm_rpm_v2.py): support batch writing increments to redis by krrishdholakia in https://github.com/BerriAI/litellm/pull/9357
* Support 'prisma migrate' for db schema changes by krrishdholakia in https://github.com/BerriAI/litellm/pull/9382
* Fix latency redis by emerzon in https://github.com/BerriAI/litellm/pull/9387
* Revert "Fix latency redis" by krrishdholakia in https://github.com/BerriAI/litellm/pull/9388
* build(model_prices_and_context_window.json): add o1-pro pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/9397
* [Bug Fix] - Azure OpenAI - ensure SSL verification runs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9341
* [Feat] - Allow building custom prompt management integration by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9384
* Litellm fix icons by azdolinski in https://github.com/BerriAI/litellm/pull/9374
* [UI Improvement] Use local icons for model providers instead of downloading them by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9399
* fix(internal_user_endpoints.py): re-introduce upsert on user not found by krrishdholakia in https://github.com/BerriAI/litellm/pull/9395
* docs: Include Phoenix Page into sidebar under observability by SrilakshmiC in https://github.com/BerriAI/litellm/pull/9332
* fix(types/utils.py): support openai 'file' message type by krrishdholakia in https://github.com/BerriAI/litellm/pull/9402
* feat: Add support for custom OPENROUTER_API_BASE via get_secret in co… by graysonchen in https://github.com/BerriAI/litellm/pull/9369
* fix: VertexAI outputDimensionality configuration by JamesGuthrie in https://github.com/BerriAI/litellm/pull/9366
* docs(vertexai): fix typo in required env variables by Shiti in https://github.com/BerriAI/litellm/pull/9325
* Update perplexity.md by jollyolliel in https://github.com/BerriAI/litellm/pull/9290
* fix: VoyageAI `prompt_token` always empty by lucasra1 in https://github.com/BerriAI/litellm/pull/9260
* build(deps): bump litellm from 1.55.3 to 1.61.15 in /cookbook/litellm-ollama-docker-image by dependabot in https://github.com/BerriAI/litellm/pull/9422
* [Feat] OpenAI o1-pro Responses API streaming support by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9419
* [Feat] Add OpenAI o1-pro support on Responses API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9415
* [Docs - Draft] LiteLLM x MCP Interface by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9435
* support returning api-base on pass-through endpoints + consistently return 404 if team not found in DB by krrishdholakia in https://github.com/BerriAI/litellm/pull/9439
* fix(handle_error.py): make cooldown error more descriptive by krrishdholakia in https://github.com/BerriAI/litellm/pull/9438
* Consistent anthropic response_format streaming/non-streaming behaviour by krrishdholakia in https://github.com/BerriAI/litellm/pull/9437
* New Azure Models (GPT-4.5-Preview, Mistral Small 3.1) by emerzon in https://github.com/BerriAI/litellm/pull/9453
* Set max size limit to in-memory cache item - prevents OOM errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/9448
* fix(model_param_helper.py): update `_get_litellm_supported_transcription_kwargs()` to use proper annotations from `TranscriptionCreateParamsNonStreaming` & `TranscriptionCreateParamsStreaming` by hsaeed3 in https://github.com/BerriAI/litellm/pull/9451
* [Feat] LiteLLM x MCP Bridge - Use MCP Tools with LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9436
* fix(model_param_helper.py): update `_get_litellm_supported_transcription_kwargs()` to use proper annotations from `TranscriptionCreateParamsNonStreaming` & `TranscriptionCreateParamsStreaming` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9455

New Contributors
* andjsmi made their first contribution in https://github.com/BerriAI/litellm/pull/9326
* azdolinski made their first contribution in https://github.com/BerriAI/litellm/pull/9374
* SrilakshmiC made their first contribution in https://github.com/BerriAI/litellm/pull/9332
* graysonchen made their first contribution in https://github.com/BerriAI/litellm/pull/9369
* JamesGuthrie made their first contribution in https://github.com/BerriAI/litellm/pull/9366
* Shiti made their first contribution in https://github.com/BerriAI/litellm/pull/9325
* jollyolliel made their first contribution in https://github.com/BerriAI/litellm/pull/9290
* hsaeed3 made their first contribution in https://github.com/BerriAI/litellm/pull/9451

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.12-nightly...v1.63.14-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.14-nightly
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 210.0 | 225.80703369680768 | 6.282456357875219 | 0.0 | 1880 | 0 | 182.8801230000181 | 2803.2175499999994 |
| Aggregated | Passed βœ… | 210.0 | 225.80703369680768 | 6.282456357875219 | 0.0 | 1880 | 0 | 182.8801230000181 | 2803.2175499999994 |

v1.63.12-nightly
What's Changed
* Fixes bedrock modelId encoding for Inference Profiles by omrishiv in https://github.com/BerriAI/litellm/pull/9123
* Aim Security post-call guardrails support by hxtomer in https://github.com/BerriAI/litellm/pull/8356
* Litellm dev 03 12 2025 contributor prs p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9216
* Support bedrock Application inference profiles + Support guardrails on streaming responses by krrishdholakia in https://github.com/BerriAI/litellm/pull/9274
* v1.63.11-stable release notes by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9279
* Infer aws region from bedrock application profile id by krrishdholakia in https://github.com/BerriAI/litellm/pull/9281
* feat: make masterkey secret configurable by mknet3 in https://github.com/BerriAI/litellm/pull/9288
* fix(utils.py): Prevents final chunk w/ usage from being ignored by krrishdholakia in https://github.com/BerriAI/litellm/pull/9314
* Update prompt_caching.md to fix typo by afspies in https://github.com/BerriAI/litellm/pull/9317
* fix(redis_cache.py): add 5s default timeout by krrishdholakia in https://github.com/BerriAI/litellm/pull/9322
* Support reading litellm proxy response cost header in sdk + support setting lower ssl security level by krrishdholakia in https://github.com/BerriAI/litellm/pull/9330
* [Bug fix] Reset Budget Job by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9329
* fix(ollama/completions/transformation.py): pass prompt, untemplated o… by krrishdholakia in https://github.com/BerriAI/litellm/pull/9333
* [UI] - Allow controlling default internal user settings on ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9328
* [Patch] - Allow disabling all spend updates / writes to DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9331

New Contributors
* afspies made their first contribution in https://github.com/BerriAI/litellm/pull/9317

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.11-nightly...v1.63.12-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.12-nightly
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 240.0 | 266.99215766025395 | 6.081096542088128 | 0.0 | 1819 | 0 | 211.12568599994574 | 4206.960361000029 |
| Aggregated | Passed βœ… | 240.0 | 266.99215766025395 | 6.081096542088128 | 0.0 | 1819 | 0 | 211.12568599994574 | 4206.960361000029 |

v1.63.11-stable.patch1
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.11-nightly...v1.63.11-stable.patch1

🚨 Known issue on Azure OpenAI - We don't recommend upgrading if you use Azure OpenAI. This version failed our Azure OpenAI load test


Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.63.11-stable.patch1
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 286.0849351151513 | 6.065608931564817 | 0.0 | 1815 | 0 | 215.73641000003363 | 4467.043058000001 |
| Aggregated | Passed βœ… | 250.0 | 286.0849351151513 | 6.065608931564817 | 0.0 | 1815 | 0 | 215.73641000003363 | 4467.043058000001 |

v1.63.11-stable
Docker Run LiteLLM Proxy

🚨 Known issue on Azure OpenAI - We don't recommend upgrading if you use Azure OpenAI. This version failed our Azure OpenAI load test

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.11-stable
```


These are the changes since v1.63.2-stable.

This release is primarily focused on:
- [Beta] Responses API Support
- Snowflake Cortex Support, Amazon Nova Image Generation
- UI - Credential Management, re-use credentials when adding new models
- UI - Test Connection to LLM Provider before adding a model

Demo Instance

Here's a Demo Instance to test changes:
- Instance: https://demo.litellm.ai/
- Login Credentials:
  - Username: admin
  - Password: sk-1234



New Models / Updated Models

- Image Generation support for Amazon Nova Canvas [Getting Started](https://docs.litellm.ai/docs/providers/bedrock#image-generation) (see the sketch after this list)
- Add pricing for Jamba new models [PR](https://github.com/BerriAI/litellm/pull/9032/files)
- Add pricing for Amazon EU models [PR](https://github.com/BerriAI/litellm/pull/9056/files)
- Add Bedrock Deepseek R1 model pricing [PR](https://github.com/BerriAI/litellm/pull/9108/files)
- Update Gemini pricing: Gemma 3, Flash 2 thinking update, LearnLM [PR](https://github.com/BerriAI/litellm/pull/9190/files)
- Mark Cohere Embedding 3 models as Multimodal [PR](https://github.com/BerriAI/litellm/pull/9176/commits/c9a576ce4221fc6e50dc47cdf64ab62736c9da41)
- Add Azure Data Zone pricing [PR](https://github.com/BerriAI/litellm/pull/9185/files#diff-19ad91c53996e178c1921cbacadf6f3bae20cfe062bd03ee6bfffb72f847ee37)
- LiteLLM tracks cost for `azure/eu` and `azure/us` models
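
For the Amazon Nova Canvas support above, a minimal sketch using litellm's `image_generation` helper; the exact model ID and the required AWS credentials/region setup are assumptions, see the Bedrock getting-started link in the list:

```python
# Hedged sketch: image generation via Amazon Nova Canvas on Bedrock.
# The model ID "bedrock/amazon.nova-canvas-v1:0" and AWS auth setup are
# assumptions; check the Bedrock docs linked above for exact values.
import litellm

resp = litellm.image_generation(
    model="bedrock/amazon.nova-canvas-v1:0",
    prompt="A watercolor painting of a lighthouse at dusk",
)
print(resp.data[0].url or resp.data[0].b64_json)
```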


LLM Translation


1. **New Endpoints**
- [Beta] POST `/responses` API. [Getting Started](https://docs.litellm.ai/docs/response_api) (see the sketch after this list)

2. **New LLM Providers**
- Snowflake Cortex [Getting Started](https://docs.litellm.ai/docs/providers/snowflake)

3. **New LLM Features**

- Support OpenRouter `reasoning_content` on streaming [Getting Started](https://docs.litellm.ai/docs/reasoning_content)

4. **Bug Fixes**

- OpenAI: Return `code`, `param` and `type` on bad request error [More information on litellm exceptions](https://docs.litellm.ai/docs/exception_mapping)
- Bedrock: Fix converse chunk parsing to only return empty dict on tool use [PR](https://github.com/BerriAI/litellm/pull/9166)
- Bedrock: Support extra_headers [PR](https://github.com/BerriAI/litellm/pull/9113)
- Azure: Fix Function Calling Bug & Update Default API Version to `2025-02-01-preview` [PR](https://github.com/BerriAI/litellm/pull/9191)
- Azure: Fix AI services URL [PR](https://github.com/BerriAI/litellm/pull/9185)
- Vertex AI: Handle HTTP 201 status code in response [PR](https://github.com/BerriAI/litellm/pull/9193)
- Perplexity: Fix incorrect streaming response [PR](https://github.com/BerriAI/litellm/pull/9081)
- Triton: Fix streaming completions bug [PR](https://github.com/BerriAI/litellm/pull/8386)
- Deepgram: Support bytes.IO when handling audio files for transcription [PR](https://github.com/BerriAI/litellm/pull/9071)
- Ollama: Fix "system" role has become unacceptable [PR](https://github.com/BerriAI/litellm/pull/9261)
- All Providers (Streaming): Fix String `data:` stripped from entire content in streamed responses [PR](https://github.com/BerriAI/litellm/pull/9070)
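
For the new `/responses` endpoint (item 1 above), a minimal sketch of the SDK call, assuming `litellm.responses()` follows the Getting Started doc linked in that item; the model name is a placeholder:

```python
# Hedged sketch of the [Beta] Responses API via the litellm Python SDK.
# Assumes litellm.responses() matches the linked Getting Started doc;
# "openai/o1-pro" is a placeholder model name.
import litellm

resp = litellm.responses(
    model="openai/o1-pro",
    input="Summarize this release in one sentence.",
)
print(resp)
```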



Spend Tracking Improvements

1. Support Bedrock converse cache token tracking [Getting Started](https://docs.litellm.ai/docs/completion/prompt_caching) (see the sketch after this list)
2. Cost Tracking for Responses API [Getting Started](https://docs.litellm.ai/docs/response_api)
3. Fix Azure Whisper cost tracking [Getting Started](https://docs.litellm.ai/docs/audio_transcription)
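
A hedged sketch of inspecting tracked cost and cached-token counts on a response: `litellm.completion_cost()` is the SDK's public cost helper, while the `usage.prompt_tokens_details` field is an assumption based on the prompt-caching doc linked above, so it is read defensively:

```python
# Hedged sketch: cost + cache token inspection on a completion response.
# litellm.completion_cost() is the public cost helper; the
# usage.prompt_tokens_details.cached_tokens field is an assumption based
# on the prompt-caching docs linked above, hence the getattr guards.
import litellm

resp = litellm.completion(
    model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder
    messages=[{"role": "user", "content": "hello"}],
)
print("cost ($):", litellm.completion_cost(completion_response=resp))

details = getattr(resp.usage, "prompt_tokens_details", None)
if details is not None:
    print("cached prompt tokens:", getattr(details, "cached_tokens", None))
```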


UI

Re-Use Credentials on UI

You can now onboard LLM provider credentials on the LiteLLM UI. Once these credentials are added, you can re-use them when adding new models. [Getting Started](https://docs.litellm.ai/docs/proxy/ui_credentials)

Test Connections before adding models

Before adding a model, you can test the connection to the LLM provider to verify you have set up your API Base + API Key correctly.

General UI Improvements
1. Add Models Page
- Allow adding Cerebras, Sambanova, Perplexity, Fireworks, OpenRouter, TogetherAI, and Text-Completion OpenAI models on the Admin UI
- Allow adding EU OpenAI models
- Fix: Instantly show edit + deletes to models
2. Keys Page
- Fix: Instantly show newly created keys on Admin UI (don't require refresh)
- Fix: Allow clicking into Top Keys when showing users Top API Key
- Fix: Allow Filter Keys by Team Alias, Key Alias and Org
- UI Improvements: Show 100 Keys Per Page, Use full height, increase width of key alias
3. Users Page
- Fix: Show correct count of internal user keys on Users Page
- Fix: Metadata not updating in Team UI
4. Logs Page
- UI Improvements: Keep expanded log in focus on LiteLLM UI
- UI Improvements: Minor improvements to logs page
- Fix: Allow internal user to query their own logs
- Allow switching off storing Error Logs in DB [Getting Started](https://docs.litellm.ai/docs/proxy/ui_logs)
5. Sign In/Sign Out
- Fix: Correctly use `PROXY_LOGOUT_URL` when set [Getting Started](https://docs.litellm.ai/docs/proxy/self_serve#setting-custom-logout-urls)


Security

1. Support for Rotating Master Keys [Getting Started](https://docs.litellm.ai/docs/proxy/master_key_rotations)
2. Fix: Internal User Viewer Permissions, don't allow `internal_user_viewer` role to see `Test Key Page` or `Create Key Button` [More information on role based access controls](https://docs.litellm.ai/docs/proxy/access_control)
3. Emit audit logs on All user + model Create/Update/Delete endpoints [Getting Started](https://docs.litellm.ai/docs/proxy/multiple_admins)
4. JWT
- Support multiple JWT OIDC providers [Getting Started](https://docs.litellm.ai/docs/proxy/token_auth)
- Fix JWT access with Groups not working when team is assigned All Proxy Models access
5. Using K/V pairs in 1 AWS Secret [Getting Started](https://docs.litellm.ai/docs/secret#using-kv-pairs-in-1-aws-secret)


Logging Integrations

1. Prometheus: Track Azure LLM API latency metric [Getting Started](https://docs.litellm.ai/docs/proxy/prometheus#request-latency-metrics)
2. Athina: Added tags, user_feedback and model_options to additional_keys which can be sent to Athina [Getting Started](https://docs.litellm.ai/docs/observability/athina_integration)


Performance / Reliability improvements

1. Redis + litellm router - Fix Redis cluster mode for litellm router [PR](https://github.com/BerriAI/litellm/pull/9010)


General Improvements

1. OpenWebUI Integration - display `thinking` tokens
- Guide on getting started with LiteLLM x OpenWebUI. [Getting Started](https://docs.litellm.ai/docs/tutorials/openweb_ui)
- Display `thinking` tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) [Getting Started](https://docs.litellm.ai/docs/tutorials/openweb_ui#render-thinking-content-on-openweb-ui)


New Contributors
* ogunoz made their first contribution in https://github.com/BerriAI/litellm/pull/9010
* Askir made their first contribution in https://github.com/BerriAI/litellm/pull/8576
* tvishwanadha made their first contribution in https://github.com/BerriAI/litellm/pull/9071
* 5aaee9 made their first contribution in https://github.com/BerriAI/litellm/pull/9081
* mounta11n made their first contribution in https://github.com/BerriAI/litellm/pull/8757
* minwhoo made their first contribution in https://github.com/BerriAI/litellm/pull/8386
* santibreo made their first contribution in https://github.com/BerriAI/litellm/pull/7736
* utkashd made their first contribution in https://github.com/BerriAI/litellm/pull/8956
* kearnsw made their first contribution in https://github.com/BerriAI/litellm/pull/9108
* sfarthin made their first contribution in https://github.com/BerriAI/litellm/pull/8019
* lucasra1 made their first contribution in https://github.com/BerriAI/litellm/pull/9180
* youngchannelforyou made their first contribution in https://github.com/BerriAI/litellm/pull/9193
* xucailiang made their first contribution in https://github.com/BerriAI/litellm/pull/8741
* SunnyWan59 made their first contribution in https://github.com/BerriAI/litellm/pull/8950
* bexelbie made their first contribution in https://github.com/BerriAI/litellm/pull/9254
* briandevvn made their first contribution in https://github.com/BerriAI/litellm/pull/9261

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.2-stable...v1.63.11-stable

GitHub-generated changelog
* Fix redis cluster mode for routers by ogunoz in https://github.com/BerriAI/litellm/pull/9010
* [Feat] - Display `thinking` tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9029
* (AWS Secret Manager) - Using K/V pairs in 1 AWS Secret by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9039
* (Docs) connect litellm to open web ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9040
* Added PDL project by vazirim in https://github.com/BerriAI/litellm/pull/8925
* (UI) - Allow adding EU OpenAI models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9042
* fix(team_endpoints.py): ensure 404 raised when team not found + fix setting tags on keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/9038
* build(model_prices_and_context_window.json): update azure o1 mini pri… by krrishdholakia in https://github.com/BerriAI/litellm/pull/9046
* Support master key rotations by krrishdholakia in https://github.com/BerriAI/litellm/pull/9041
* (Feat) - add pricing for eu.amazon.nova models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9056
* docs: Add project page for pgai by Askir in https://github.com/BerriAI/litellm/pull/8576
* Mark several Claude models as being able to accept PDF inputs by minhduc0711 in https://github.com/BerriAI/litellm/pull/9054
* (UI) - Keys Page - Show 100 Keys Per Page, Use full height, increase width of key alias by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9064
* (UI) Logs Page - Keep expanded log in focus on LiteLLM UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9061
* (Docs) OpenWeb x LiteLLM Docker compose + Instructions on spend tracking + logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9059
* (UI) - Allow adding Cerebras, Sambanova, Perplexity, Fireworks, Openrouter, TogetherAI Models on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9069
* UI - new API Playground for testing LiteLLM translation by krrishdholakia in https://github.com/BerriAI/litellm/pull/9073
* Bug fix - String data: stripped from entire content in streamed Gemini responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9070
* (UI) - Minor improvements to logs page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9076
* Bug fix: support bytes.IO when handling audio files for transcription by tvishwanadha in https://github.com/BerriAI/litellm/pull/9071
* Fix batches api cost tracking + Log batch models in spend logs / standard logging payload by krrishdholakia in https://github.com/BerriAI/litellm/pull/9077
* (UI) - Fix, Allow Filter Keys by Team Alias, Key Alias and Org by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9083
* (Clean up) - Allow switching off storing Error Logs in DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9084
* (UI) - Fix show correct count of internal user keys on Users Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9082
* New stable release notes by krrishdholakia in https://github.com/BerriAI/litellm/pull/9085
* Litellm dev 03 08 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9089
* feat: prioritize api_key over tenant_id for more Azure AD token provi… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8701
* Fix incorrect streaming response by 5aaee9 in https://github.com/BerriAI/litellm/pull/9081
* Support openrouter `reasoning_content` on streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/9094
* add support for Amazon Nova Canvas model by omrishiv in https://github.com/BerriAI/litellm/pull/7838
* pricing for jamba new models by themrzmaster in https://github.com/BerriAI/litellm/pull/9032
* build(deps): bump jinja2 from 3.1.4 to 3.1.6 by dependabot in https://github.com/BerriAI/litellm/pull/9014
* (docs) add section for contributing to litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9107
* build: Add Makefile for LiteLLM project with test targets by colesmcintosh in https://github.com/BerriAI/litellm/pull/8948
* (Docs) - Contributing to litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9110
* Added tags, user_feedback and model_options to additional_keys which can be sent to athina by vivek-athina in https://github.com/BerriAI/litellm/pull/8845
* fix missing comma by niinpatel in https://github.com/BerriAI/litellm/pull/8746
* Update model_prices_and_context_window.json by mounta11n in https://github.com/BerriAI/litellm/pull/8757
* Fix triton streaming completions bug by minwhoo in https://github.com/BerriAI/litellm/pull/8386
* (docs) Update vertex.md old code example by santibreo in https://github.com/BerriAI/litellm/pull/7736
* (Feat) - Allow adding Text-Completion OpenAI models through UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9102
* docs(pr-template): update unit test command in checklist by colesmcintosh in https://github.com/BerriAI/litellm/pull/9119
* [UI SSO Bug fix] - Correctly use `PROXY_LOGOUT_URL` when set by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9117
* Validate `model_prices_and_context_window.json` with a test, clarify possible `mode` values + ensure consistent use of `mode` by utkashd in https://github.com/BerriAI/litellm/pull/8956
* JWT Auth Fix - [Bug]: JWT access with Groups not working when team is assigned All Proxy Models access by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8934
* fix(base_invoke_transformation.py): support extra_headers on bedrock … by krrishdholakia in https://github.com/BerriAI/litellm/pull/9113
* feat(handle_jwt.py): support multiple jwt url's by krrishdholakia in https://github.com/BerriAI/litellm/pull/9047
* Return `code`, `param` and `type` on openai bad request error by krrishdholakia in https://github.com/BerriAI/litellm/pull/9109
* feature: Handle ManagedIdentityCredential in Azure AD token provider by you-n-g in https://github.com/BerriAI/litellm/pull/9135
* Adding/Update of models by emerzon in https://github.com/BerriAI/litellm/pull/9120
* Update bedrock.md for variable consistency by superpoussin22 in https://github.com/BerriAI/litellm/pull/8185
* ci: add helm unittest by mknet3 in https://github.com/BerriAI/litellm/pull/9068
* [UI Fixes RBAC] - for Internal User Viewer Permissions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9148
* Delegate router azure client init logic to azure provider by krrishdholakia in https://github.com/BerriAI/litellm/pull/9140
* feat: add bedrock deepseek r1 model pricing by kearnsw in https://github.com/BerriAI/litellm/pull/9108
* fix(internal_user_endpoints.py): allow internal user to query their o… by krrishdholakia in https://github.com/BerriAI/litellm/pull/9162
* add support for Amazon Nova Canvas model (7838) by krrishdholakia in https://github.com/BerriAI/litellm/pull/9101
* Fix bedrock chunk parsing + azure whisper cost tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/9166
* Bing Search Pass Thru by sfarthin in https://github.com/BerriAI/litellm/pull/8019
* [Feat] Add OpenAI Responses API to litellm python SDK by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9155
* Support credential management on Proxy - via CRUD endpoints - `credentials/*` by krrishdholakia in https://github.com/BerriAI/litellm/pull/9124
* Bump @babel/runtime-corejs3 from 7.26.0 to 7.26.10 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/9167
* Bump @babel/helpers from 7.26.0 to 7.26.10 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/9168
* fix(azure): Patch for Function Calling Bug & Update Default API Version to `2025-02-01-preview` by colesmcintosh in https://github.com/BerriAI/litellm/pull/9191
* [Feat] - Add Responses API on LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9183
* gemini price updates: gemma 3, flash 2 thinking update, learnlm by yigitkonur in https://github.com/BerriAI/litellm/pull/9190
* Mark Cohere Embedding 3 models as Multimodal by emerzon in https://github.com/BerriAI/litellm/pull/9176
* Fix Metadata not updating in Team UI by lucasra1 in https://github.com/BerriAI/litellm/pull/9180
* feat: initial commit adding support for credentials on proxy ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/9186
* Fix azure ai services url + add azure data zone pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/9185
* (gemini) Handle HTTP 201 status code in Vertex AI response by youngchannelforyou in https://github.com/BerriAI/litellm/pull/9193
* feat/postgres-volumes by xucailiang in https://github.com/BerriAI/litellm/pull/8741
* [FEAT] Support for Snowflake REST API LLMs 7979 by SunnyWan59 in https://github.com/BerriAI/litellm/pull/8950
* fix(azure.py): track azure llm api latency metric by krrishdholakia in https://github.com/BerriAI/litellm/pull/9217
* Support bedrock converse cache token tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/9221
* Emit audit logs on All user + model Create/Update/Delete endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/9223
* (UI Usage) - Allow clicking into Top Keys when showing users Top API Key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9225
* [Feat] Add Snowflake Cortex to LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9222
* [Fixes] Responses API - allow /responses and subpaths as LLM API route + Add exception mapping for responses API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9220
* docs: Add centralized credential management docs by bexelbie in https://github.com/BerriAI/litellm/pull/9254
* Docs: Update configs.md by bexelbie in https://github.com/BerriAI/litellm/pull/9263
* Support reusing existing model credentials by krrishdholakia in https://github.com/BerriAI/litellm/pull/9267
* LiteLLM UI Fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/9269
* Fix "system" role has become unacceptable in ollama by briandevvn in https://github.com/BerriAI/litellm/pull/9261
* Litellm rc 03 14 2025 patch 1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9271
* [Feat] UI - Add Test Connection by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9272
* [UI] Fix 1 - instantly show newly created keys on Admin UI (don't require refresh) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9257
* (UI) Fix model edit + delete - instantly show edit + deletes to models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9258



v1.63.11-nightly
What's Changed
* gemini price updates: gemma 3, flash 2 thinking update, learnlm by yigitkonur in https://github.com/BerriAI/litellm/pull/9190
* Mark Cohere Embedding 3 models as Multimodal by emerzon in https://github.com/BerriAI/litellm/pull/9176
* Fix Metadata not updating in Team UI by lucasra1 in https://github.com/BerriAI/litellm/pull/9180
* feat: initial commit adding support for credentials on proxy ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/9186
* Fix azure ai services url + add azure data zone pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/9185
* (gemini) Handle HTTP 201 status code in Vertex AI response by youngchannelforyou in https://github.com/BerriAI/litellm/pull/9193
* feat/postgres-volumes by xucailiang in https://github.com/BerriAI/litellm/pull/8741
* [FEAT] Support for Snowflake REST API LLMs 7979 by SunnyWan59 in https://github.com/BerriAI/litellm/pull/8950
* fix(azure.py): track azure llm api latency metric by krrishdholakia in https://github.com/BerriAI/litellm/pull/9217
* Support bedrock converse cache token tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/9221
* Emit audit logs on All user + model Create/Update/Delete endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/9223
* (UI Usage) - Allow clicking into Top Keys when showing users Top API Key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9225
* [Feat] Add Snowflake Cortex to LiteLLM by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9222
* [Fixes] Responses API - allow /responses and subpaths as LLM API route + Add exception mapping for responses API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9220
* docs: Add centralized credential management docs by bexelbie in https://github.com/BerriAI/litellm/pull/9254
* Docs: Update configs.md by bexelbie in https://github.com/BerriAI/litellm/pull/9263
* Support reusing existing model credentials by krrishdholakia in https://github.com/BerriAI/litellm/pull/9267
* LiteLLM UI Fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/9269
* Fix "system" role has become unacceptable in ollama by briandevvn in https://github.com/BerriAI/litellm/pull/9261
* Litellm rc 03 14 2025 patch 1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9271
* [Feat] UI - Add Test Connection by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9272
* [UI] Fix 1 - instantly show newly created keys on Admin UI (don't require refresh) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9257
* (UI) Fix model edit + delete - instantly show edit + deletes to models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9258

New Contributors
* lucasra1 made their first contribution in https://github.com/BerriAI/litellm/pull/9180
* youngchannelforyou made their first contribution in https://github.com/BerriAI/litellm/pull/9193
* xucailiang made their first contribution in https://github.com/BerriAI/litellm/pull/8741
* SunnyWan59 made their first contribution in https://github.com/BerriAI/litellm/pull/8950
* bexelbie made their first contribution in https://github.com/BerriAI/litellm/pull/9254
* briandevvn made their first contribution in https://github.com/BerriAI/litellm/pull/9261

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.8-nightly...v1.63.11-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.11-nightly
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 265.0124223657352 | 6.224465343838535 | 0.0 | 1862 | 0 | 213.4414930000048 | 2393.6349599999858 |
| Aggregated | Passed βœ… | 250.0 | 265.0124223657352 | 6.224465343838535 | 0.0 | 1862 | 0 | 213.4414930000048 | 2393.6349599999858 |

v1.63.8-nightly
What's Changed
* Delegate router azure client init logic to azure provider by krrishdholakia in https://github.com/BerriAI/litellm/pull/9140
* Bing Search Pass Thru by sfarthin in https://github.com/BerriAI/litellm/pull/8019
* [Feat] Add OpenAI Responses API to litellm python SDK by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9155
* Support credential management on Proxy - via CRUD endpoints - `credentials/*` by krrishdholakia in https://github.com/BerriAI/litellm/pull/9124
* Bump @babel/runtime-corejs3 from 7.26.0 to 7.26.10 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/9167
* Bump @babel/helpers from 7.26.0 to 7.26.10 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/9168
* fix(azure): Patch for Function Calling Bug & Update Default API Version to `2025-02-01-preview` by colesmcintosh in https://github.com/BerriAI/litellm/pull/9191
* [Feat] - Add Responses API on LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9183

New Contributors
* sfarthin made their first contribution in https://github.com/BerriAI/litellm/pull/8019

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.7-nightly...v1.63.8-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.8-nightly
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 230.0 | 258.44225107871796 | 6.155259588605481 | 0.0033416175833905974 | 1842 | 1 | 86.54871299995648 | 3971.497738000039 |
| Aggregated | Passed βœ… | 230.0 | 258.44225107871796 | 6.155259588605481 | 0.0033416175833905974 | 1842 | 1 | 86.54871299995648 | 3971.497738000039 |

v1.63.7-nightly
What's Changed
* add support for Amazon Nova Canvas model by omrishiv in https://github.com/BerriAI/litellm/pull/7838
* feat: add bedrock deepseek r1 model pricing by kearnsw in https://github.com/BerriAI/litellm/pull/9108
* fix(internal_user_endpoints.py): allow internal user to query their o… by krrishdholakia in https://github.com/BerriAI/litellm/pull/9162
* add support for Amazon Nova Canvas model (7838) by krrishdholakia in https://github.com/BerriAI/litellm/pull/9101
* Fix bedrock chunk parsing + azure whisper cost tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/9166

New Contributors
* omrishiv made their first contribution in https://github.com/BerriAI/litellm/pull/7838
* kearnsw made their first contribution in https://github.com/BerriAI/litellm/pull/9108

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.6.dev1...v1.63.7-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.7-nightly
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 190.0 | 210.2989654185558 | 6.340335220104276 | 0.0 | 1897 | 0 | 164.78329899996425 | 3864.2000990000156 |
| Aggregated | Passed βœ… | 190.0 | 210.2989654185558 | 6.340335220104276 | 0.0 | 1897 | 0 | 164.78329899996425 | 3864.2000990000156 |

1.63.6.dev1

What's Changed
* fix(base_invoke_transformation.py): support extra_headers on bedrock … by krrishdholakia in https://github.com/BerriAI/litellm/pull/9113
* feat(handle_jwt.py): support multiple jwt url's by krrishdholakia in https://github.com/BerriAI/litellm/pull/9047
* Return `code`, `param` and `type` on openai bad request error by krrishdholakia in https://github.com/BerriAI/litellm/pull/9109
* feature: Handle ManagedIdentityCredential in Azure AD token provider by you-n-g in https://github.com/BerriAI/litellm/pull/9135
* Adding/Update of models by emerzon in https://github.com/BerriAI/litellm/pull/9120
* Update bedrock.md for variable consistency by superpoussin22 in https://github.com/BerriAI/litellm/pull/8185
* ci: add helm unittest by mknet3 in https://github.com/BerriAI/litellm/pull/9068
* [UI Fixes RBAC] - for Internal User Viewer Permissions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9148


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.6-nightly...v1.63.6.dev1



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.6.dev1
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 210.0 | 231.1532913758782 | 6.178065596694084 | 0.0 | 1849 | 0 | 182.94453300001123 | 3487.431916000048 |
| Aggregated | Passed βœ… | 210.0 | 231.1532913758782 | 6.178065596694084 | 0.0 | 1849 | 0 | 182.94453300001123 | 3487.431916000048 |

v1.63.6-nightly
What's Changed
* pricing for jamba new models by themrzmaster in https://github.com/BerriAI/litellm/pull/9032
* build(deps): bump jinja2 from 3.1.4 to 3.1.6 by dependabot in https://github.com/BerriAI/litellm/pull/9014
* (docs) add section for contributing to litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9107
* build: Add Makefile for LiteLLM project with test targets by colesmcintosh in https://github.com/BerriAI/litellm/pull/8948
* (Docs) - Contributing to litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9110
* Added tags, user_feedback and model_options to additional_keys which can be sent to athina by vivek-athina in https://github.com/BerriAI/litellm/pull/8845
* fix missing comma by niinpatel in https://github.com/BerriAI/litellm/pull/8746
* Update model_prices_and_context_window.json by mounta11n in https://github.com/BerriAI/litellm/pull/8757
* Fix triton streaming completions bug by minwhoo in https://github.com/BerriAI/litellm/pull/8386
* (docs) Update vertex.md old code example by santibreo in https://github.com/BerriAI/litellm/pull/7736
* (Feat) - Allow adding Text-Completion OpenAI models through UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9102
* docs(pr-template): update unit test command in checklist by colesmcintosh in https://github.com/BerriAI/litellm/pull/9119
* [UI SSO Bug fix] - Correctly use `PROXY_LOGOUT_URL` when set by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9117
* Validate `model_prices_and_context_window.json` with a test, clarify possible `mode` values + ensure consistent use of `mode` by utkashd in https://github.com/BerriAI/litellm/pull/8956
* JWT Auth Fix - [Bug]: JWT access with Groups not working when team is assigned All Proxy Models access by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8934

New Contributors
* mounta11n made their first contribution in https://github.com/BerriAI/litellm/pull/8757
* minwhoo made their first contribution in https://github.com/BerriAI/litellm/pull/8386
* santibreo made their first contribution in https://github.com/BerriAI/litellm/pull/7736
* utkashd made their first contribution in https://github.com/BerriAI/litellm/pull/8956

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.3.dev1...v1.63.6-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.6-nightly
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 240.0 | 258.1422458623163 | 6.0939161635327785 | 0.0 | 1823 | 0 | 213.26022699997793 | 2549.854018000019 |
| Aggregated | Passed βœ… | 240.0 | 258.1422458623163 | 6.0939161635327785 | 0.0 | 1823 | 0 | 213.26022699997793 | 2549.854018000019 |

v1.63.5-nightly
What's Changed
* fix(team_endpoints.py): ensure 404 raised when team not found + fix setting tags on keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/9038
* build(model_prices_and_context_window.json): update azure o1 mini pri… by krrishdholakia in https://github.com/BerriAI/litellm/pull/9046
* Support master key rotations by krrishdholakia in https://github.com/BerriAI/litellm/pull/9041
* (Feat) - add pricing for eu.amazon.nova models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9056
* docs: Add project page for pgai by Askir in https://github.com/BerriAI/litellm/pull/8576
* Mark several Claude models as being able to accept PDF inputs by minhduc0711 in https://github.com/BerriAI/litellm/pull/9054
* (UI) - Keys Page - Show 100 Keys Per Page, Use full height, increase width of key alias by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9064
* (UI) Logs Page - Keep expanded log in focus on LiteLLM UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9061
* (Docs) OpenWeb x LiteLLM Docker compose + Instructions on spend tracking + logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9059
* (UI) - Allow adding Cerebras, Sambanova, Perplexity, Fireworks, Openrouter, TogetherAI Models on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9069
* UI - new API Playground for testing LiteLLM translation by krrishdholakia in https://github.com/BerriAI/litellm/pull/9073
* Bug fix - String data: stripped from entire content in streamed Gemini responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9070
* (UI) - Minor improvements to logs page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9076
* Bug fix: support bytes.IO when handling audio files for transcription by tvishwanadha in https://github.com/BerriAI/litellm/pull/9071
* Fix batches api cost tracking + Log batch models in spend logs / standard logging payload by krrishdholakia in https://github.com/BerriAI/litellm/pull/9077
* (UI) - Fix, Allow Filter Keys by Team Alias, Key Alias and Org by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9083
* (Clean up) - Allow switching off storing Error Logs in DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9084
* (UI) - Fix show correct count of internal user keys on Users Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9082
* New stable release notes by krrishdholakia in https://github.com/BerriAI/litellm/pull/9085
* Litellm dev 03 08 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9089
* feat: prioritize api_key over tenant_id for more Azure AD token provi… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8701
* Fix incorrect streaming response by 5aaee9 in https://github.com/BerriAI/litellm/pull/9081
* Support openrouter `reasoning_content` on streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/9094
* pricing for jamba new models by themrzmaster in https://github.com/BerriAI/litellm/pull/9032

New Contributors
* Askir made their first contribution in https://github.com/BerriAI/litellm/pull/8576
* tvishwanadha made their first contribution in https://github.com/BerriAI/litellm/pull/9071
* 5aaee9 made their first contribution in https://github.com/BerriAI/litellm/pull/9081

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.3-nightly...v1.63.5-nightly



Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.5-nightly
```

Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 250.0 | 265.2487556257438 | 6.181834559182228 | 0.0 | 1849 | 0 | 214.44034500001408 | 3942.616398000041 |
| Aggregated | Passed βœ… | 250.0 | 265.2487556257438 | 6.181834559182228 | 0.0 | 1849 | 0 | 214.44034500001408 | 3942.616398000041 |

1.63.3.dev1

What's Changed
* fix(team_endpoints.py): ensure 404 raised when team not found + fix setting tags on keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/9038
* build(model_prices_and_context_window.json): update azure o1 mini pri… by krrishdholakia in https://github.com/BerriAI/litellm/pull/9046
* Support master key rotations by krrishdholakia in https://github.com/BerriAI/litellm/pull/9041
* (Feat) - add pricing for eu.amazon.nova models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9056
* docs: Add project page for pgai by Askir in https://github.com/BerriAI/litellm/pull/8576
* Mark several Claude models as being able to accept PDF inputs by minhduc0711 in https://github.com/BerriAI/litellm/pull/9054
* (UI) - Keys Page - Show 100 Keys Per Page, Use full height, increase width of key alias by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9064
* (UI) Logs Page - Keep expanded log in focus on LiteLLM UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9061
* (Docs) OpenWeb x LiteLLM Docker compose + Instructions on spend tracking + logging by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9059
* (UI) - Allow adding Cerebras, Sambanova, Perplexity, Fireworks, Openrouter, TogetherAI Models on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9069
* UI - new API Playground for testing LiteLLM translation by krrishdholakia in https://github.com/BerriAI/litellm/pull/9073
* Bug fix - String `data:` stripped from entire content in streamed Gemini responses by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9070
* (UI) - Minor improvements to logs page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9076
* Bug fix: support bytes.IO when handling audio files for transcription by tvishwanadha in https://github.com/BerriAI/litellm/pull/9071
* Fix batches api cost tracking + Log batch models in spend logs / standard logging payload by krrishdholakia in https://github.com/BerriAI/litellm/pull/9077
* (UI) - Fix, Allow Filter Keys by Team Alias, Key Alias and Org by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9083
* (Clean up) - Allow switching off storing Error Logs in DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9084
* (UI) - Fix show correct count of internal user keys on Users Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9082
* New stable release notes by krrishdholakia in https://github.com/BerriAI/litellm/pull/9085
* Litellm dev 03 08 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/9089
* feat: prioritize api_key over tenant_id for more Azure AD token provi… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8701
* Fix incorrect streaming response by 5aaee9 in https://github.com/BerriAI/litellm/pull/9081
* Support openrouter `reasoning_content` on streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/9094

New Contributors
* Askir made their first contribution in https://github.com/BerriAI/litellm/pull/8576
* tvishwanadha made their first contribution in https://github.com/BerriAI/litellm/pull/9071
* 5aaee9 made their first contribution in https://github.com/BerriAI/litellm/pull/9081

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.3-nightly...v1.63.3.dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.3.dev1



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 200.0 | 228.5953675353703 | 6.234609422669878 | 0.0 | 1866 | 0 | 180.65118199996277 | 3985.566232999986 |
| Aggregated | Passed ✅ | 200.0 | 228.5953675353703 | 6.234609422669878 | 0.0 | 1866 | 0 | 180.65118199996277 | 3985.566232999986 |

v1.63.2-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.20-stable...v1.63.2-stable

1. New Models / Updated Models
1. [Add `supports_pdf_input: true` for specific Bedrock Claude models](https://github.com/BerriAI/litellm/commit/f63cf0030679fe1a43d03fb196e815a0f28dae92)

2. LLM Translation
1. [Support `/openai/` passthrough for Assistant endpoints](https://github.com/BerriAI/litellm/commit/51a6a219cd859553eb8cecbb0c4e4bba92fe80e1)
2. [Bedrock Claude - fix amazon anthropic claude 3 tool calling transformation on invoke route](https://github.com/BerriAI/litellm/pull/8908)
3. [Bedrock Claude - response_format support for claude on invoke route](https://github.com/BerriAI/litellm/pull/8908)
4. [Bedrock - pass `description` if set in response_format](https://github.com/BerriAI/litellm/commit/c8dc4f3eecde12d458c3629cfe489cb170dd26d8)
5. [Bedrock - Fix passing `response_format: {"type": "text"}`](https://github.com/BerriAI/litellm/commit/c84b489d5897755139aa7d4e9e54727ebe0fa540)
6. OpenAI - Handle sending `image_url` as str to openai
7. [Deepseek - Fix deepseek 'reasoning_content' error](https://github.com/BerriAI/litellm/pull/8963)
8. [Caching - Support caching on reasoning content](https://github.com/BerriAI/litellm/pull/8973)
9. [Bedrock - handle thinking blocks in assistant message](https://github.com/BerriAI/litellm/issues/8961)
10. [Anthropic - Return signature on anthropic streaming + migrate to signature field instead of signature_delta](https://github.com/BerriAI/litellm/commit/ec4f665e299bda268af24f58c0545bdb472d4aad)
11. Support `format` param for specifying image type
12. [Anthropic - `/v1/messages` endpoint - `thinking` param support](https://github.com/BerriAI/litellm/pull/9013): note: this refactors the [BETA] unified `/v1/messages` endpoint to work just for the Anthropic API (a usage sketch follows this list)
13. [Vertex AI - handle $id in response schema when calling vertex ai](https://github.com/BerriAI/litellm/commit/f1a44d1fdc316d6c7deae1bf75c18280a0c4437d)
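
A short sketch of the `thinking` param from item 12, assuming the Anthropic-style `{"type": "enabled", "budget_tokens": ...}` shape; the model name and token budget are illustrative, not prescribed by the release notes:

```python
# Sketch of the `thinking` param (item 12 above). The param shape follows
# Anthropic's extended-thinking API; the model name is illustrative and
# assumes an ANTHROPIC_API_KEY is configured.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)

message = response.choices[0].message
print(getattr(message, "reasoning_content", None))  # the model's reasoning, if returned
print(message.content)                              # the final answer
```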

3. Spend Tracking Improvements
1. [Batches API - Fix cost calculation to run on retrieve_batch](https://github.com/BerriAI/litellm/pull/8997)
2. [Batches API - Log batch models in spend logs / standard logging payload](https://github.com/BerriAI/litellm/commit/4330ef8e81927f7c47440fe5bb968d11151d413e)

4. Management Endpoints / UI
1. [Allow team/org filters to be searchable on the Create Key Page](https://github.com/BerriAI/litellm/commit/91cdc0114980d6ce477befb7c48eb0d42eaabe61)
2. [Add `created_by` and `updated_by` fields to Keys table](https://github.com/BerriAI/litellm/commit/2d2d1b9df5c919a96d23db4fcb2f1dc5e36fd23a)
3. [Show 'user_email' on key table on UI](https://github.com/BerriAI/litellm/commit/887c66c6b78c60127f073bf5cd0b8bdd541875b8)
4. [(Feat) - Show Error Logs on LiteLLM UI](https://github.com/BerriAI/litellm/commit/3a086cee06e66ec29bfd492d851e9e7d9e9e3812)
5. [UI - Allow admin to control default model access for internal users](https://github.com/BerriAI/litellm/commit/c1527ebf526d305677bbbf88c8d4428f7f34b351)
6. [(UI) - Allow Internal Users to View their own logs](https://github.com/BerriAI/litellm/commit/df095b60226c68d09f4010e1d89d44c040dad69a)
7. [(UI) Fix session handling with cookies](https://github.com/BerriAI/litellm/pull/8969)
8. [Keys Page - Show 100 Keys Per Page, Use full height, increase width of key alias](https://github.com/BerriAI/litellm/pull/9064)

5. Logging / Guardrail Integrations
1. [Fix prometheus metrics w/ custom metrics](https://github.com/BerriAI/litellm/pull/8935)

6. Performance / Loadbalancing / Reliability improvements
1. [Cooldowns - Support cooldowns on models called with client side credentials](https://github.com/BerriAI/litellm/commit/2fc62626755fe711ffcd1661398a838362e8bba3)
2. [Tag-based Routing - ensures tag-based routing across all endpoints (`/embeddings`, `/image_generation`, etc.)](https://github.com/BerriAI/litellm/pull/8944)

7. General Proxy Improvements
1. Raise `BadRequestError` when unknown model passed in request (a sketch follows this list)
2. [Enforce model access restrictions on Azure OpenAI proxy route](https://github.com/BerriAI/litellm/commit/740bd7e9cef998e4e46f3e37202c59c23c39a8b4)
3. [Reliability fix - Handle emojis in text - fix orjson error](https://github.com/BerriAI/litellm/pull/8891)
4. Model Access Patch - don't overwrite litellm.anthropic_models when running auth checks
5. [Enable setting timezone information in docker image](https://github.com/BerriAI/litellm/pull/8915)
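
A minimal sketch of item 1 above, assuming litellm's OpenAI-style exception mapping surfaces an unknown model as `BadRequestError`; the model name is deliberately bogus:

```python
# Sketch for item 1 above: an unknown model should raise BadRequestError
# (litellm maps provider errors to OpenAI-style exception classes).
import litellm

try:
    litellm.completion(
        model="not-a-real-model",  # deliberately unknown
        messages=[{"role": "user", "content": "hi"}],
    )
except litellm.BadRequestError as err:
    print(f"rejected as expected: {err}")
```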


Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.63.2-stable








Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 190.0 | 223.19371836864636 | 6.25209576552295 | 0.0033451555727784642 | 1869 | 1 | 89.92210900004238 | 1948.821826000028 |
| Aggregated | Passed ✅ | 190.0 | 223.19371836864636 | 6.25209576552295 | 0.0033451555727784642 | 1869 | 1 | 89.92210900004238 | 1948.821826000028 |

v1.63.3-nightly
What's Changed
* Fix redis cluster mode for routers by ogunoz in https://github.com/BerriAI/litellm/pull/9010
* [Feat] - Display `thinking` tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9029
* (AWS Secret Manager) - Using K/V pairs in 1 AWS Secret by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9039
* (Docs) connect litellm to open web ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9040
* Added PDL project by vazirim in https://github.com/BerriAI/litellm/pull/8925
* (UI) - Allow adding EU OpenAI models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9042

New Contributors
* ogunoz made their first contribution in https://github.com/BerriAI/litellm/pull/9010

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.2-nightly...v1.63.3-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.3-nightly



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 220.0 | 274.33505649537244 | 6.144475001880859 | 0.0 | 1837 | 0 | 199.62131199997657 | 3623.5841269999582 |
| Aggregated | Passed ✅ | 220.0 | 274.33505649537244 | 6.144475001880859 | 0.0 | 1837 | 0 | 199.62131199997657 | 3623.5841269999582 |

v1.63.2-nightly
What's Changed
* Return signature on bedrock converse thinking + Fix `{}` empty dictionary on streaming + thinking by krrishdholakia in https://github.com/BerriAI/litellm/pull/9023
* (Refactor) `/v1/messages` to follow simpler logic for Anthropic API spec by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9013


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.0-nightly...v1.63.2-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.2-nightly



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 283.0173457426872 | 6.168530673577194 | 0.0 | 1846 | 0 | 214.4760310000038 | 4984.3768089999685 |
| Aggregated | Passed ✅ | 250.0 | 283.0173457426872 | 6.168530673577194 | 0.0 | 1846 | 0 | 214.4760310000038 | 4984.3768089999685 |

1.63.0

This release also moves the response structure from `signature_delta` to `signature`, matching Anthropic's format. [Anthropic Docs](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking#implementing-extended-thinking)


Diff

```diff
 "message": {
     ...
     "reasoning_content": "The capital of France is Paris.",
     "thinking_blocks": [
         {
             "type": "thinking",
             "thinking": "The capital of France is Paris.",
-            "signature_delta": "EqoBCkgIARABGAIiQL2UoU0b1OHYi+..." 👈 OLD FORMAT
+            "signature": "EqoBCkgIARABGAIiQL2UoU0b1OHYi+..." 👈 KEY CHANGE
         }
     ]
 }
```
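
For consumers that must handle both shapes during the migration, a tiny compatibility shim can prefer the new key and fall back to the old one. This is a sketch assuming the block layout shown in the diff above; the helper name is ours:

```python
# Compatibility sketch for the rename shown above: prefer the new
# `signature` key, fall back to the pre-1.63.0 `signature_delta`.
def block_signature(block: dict) -> str | None:
    return block.get("signature") or block.get("signature_delta")

message = {
    "reasoning_content": "The capital of France is Paris.",
    "thinking_blocks": [
        {
            "type": "thinking",
            "thinking": "The capital of France is Paris.",
            "signature": "EqoBCkgIARABGAIiQL2UoU0b1OHYi+...",
        }
    ],
}

for block in message["thinking_blocks"]:
    print(block["type"], block_signature(block))
```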




**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.62.4-nightly...v1.63.0-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.0-nightly



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 272.1226933173393 | 6.127690671911355 | 0.0 | 1834 | 0 | 217.38513100001455 | 3752.371346000018 |
| Aggregated | Passed ✅ | 250.0 | 272.1226933173393 | 6.127690671911355 | 0.0 | 1834 | 0 | 217.38513100001455 | 3752.371346000018 |

v1.62.4-nightly
What's Changed
* Fix deepseek 'reasoning_content' error by krrishdholakia in https://github.com/BerriAI/litellm/pull/8963
* (UI) Fix session handling with cookies by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8969
* (UI) - Improvements to session handling logic by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8970
* fix(route_llm_request.py): move to using common router, for client-side credentials by krrishdholakia in https://github.com/BerriAI/litellm/pull/8966
* Litellm dev 03 01 2025 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8944
* Support caching on reasoning content + other fixes by krrishdholakia in https://github.com/BerriAI/litellm/pull/8973 (a caching sketch follows this list)
* fix(common_utils.py): handle $id in response schema when calling vert… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8991
* (bug fix) - Fix Cache Health Check for Redis when redis_version is float by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8979
* (UI) - Security Improvement, move to JWT Auth for Admin UI Sessions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8995
* Litellm dev 03 04 2025 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8997
* fix(base_aws_llm.py): remove region name before sending in args by krrishdholakia in https://github.com/BerriAI/litellm/pull/8998
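
For the reasoning-content caching change flagged in the list above, a minimal sketch; it assumes the `Cache` helper is exported at the package root (adjust the import to `litellm.caching` if your version differs), an `ANTHROPIC_API_KEY` is configured, and the model name is illustrative:

```python
# Caching sketch: identical calls should hit the cache, including the
# reasoning content returned by thinking-capable models (per PR 8973).
import litellm
from litellm import Cache  # assumption: re-exported at the package root

litellm.cache = Cache()  # in-memory cache; Redis etc. are also supported

kwargs = dict(
    model="anthropic/claude-3-7-sonnet-20250219",  # illustrative model
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

first = litellm.completion(**kwargs)
second = litellm.completion(**kwargs)  # should now be served from cache
```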


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.62.1-nightly...v1.62.4-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.62.4-nightly



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 255.24015655585677 | 6.161171624266898 | 0.0 | 1844 | 0 | 200.43409900000597 | 1911.432934000004 |
| Aggregated | Passed ✅ | 230.0 | 255.24015655585677 | 6.161171624266898 | 0.0 | 1844 | 0 | 200.43409900000597 | 1911.432934000004 |

v1.62.1-nightly
What's Changed
* Allow team/org filters to be searchable on the Create Key Page + Show team alias on Keys Table by krrishdholakia in https://github.com/BerriAI/litellm/pull/8881
* Add `created_by` and `updated_by` fields to Keys table by krrishdholakia in https://github.com/BerriAI/litellm/pull/8885
* (Proxy improvement) - Raise `BadRequestError` when unknown model passed in request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8886
* (Improvements) use `/openai/` pass through with OpenAI Ruby for Assistants API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8884
* Update model path and documentation for Cerebras API call by marscod in https://github.com/BerriAI/litellm/pull/8862
* docs: update sambanova docs by jhpiedrahitao in https://github.com/BerriAI/litellm/pull/8875
* Update model settings data by yurchik11 in https://github.com/BerriAI/litellm/pull/8871
* (security fix) - Enforce model access restrictions on Azure OpenAI route by krrishdholakia in https://github.com/BerriAI/litellm/pull/8888
* Show 'user_email' on key table on UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/8887
* fix: ollama chat async stream error propagation by Tomas2D in https://github.com/BerriAI/litellm/pull/8870
* Litellm dev 02 27 2025 p6 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8891
* Fix calling claude via invoke route + response_format support for claude on invoke route by krrishdholakia in https://github.com/BerriAI/litellm/pull/8908
* converse_transformation: pass 'description' if set in response_format by krrishdholakia in https://github.com/BerriAI/litellm/pull/8907
* Fix bedrock passing `response_format: {"type": "text"}` by krrishdholakia in https://github.com/BerriAI/litellm/pull/8900
* (Feat) - Show Error Logs on LiteLLM UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8904
* UI - Allow admin to control default model access for internal users by krrishdholakia in https://github.com/BerriAI/litellm/pull/8912
* (bug fix - patch) - don't overwrite litellm.anthropic_models when running auth checks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8924
* (patch) ui remove search button on internal users tab by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8926
* (bug fix) - don't log messages, prompt, input in `model_parameters` in StandardLoggingPayload by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8923
* Litellm stable release notes v1 61 20 by krrishdholakia in https://github.com/BerriAI/litellm/pull/8929
* (bug fix) - dd tracer, only send traces when user opts into sending dd-trace by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8928
* docs(index.md): add demo instance to docs for easy testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/8931
* (Bug fix) - don't log messages in `model_parameters` in StandardLoggingPayload by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8932
* (UI) Error Logs improvements - Store Raw proxy server request for success and failure by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8917
* (UI) - Allow Internal Users to View their own logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/8933
* Add `supports_pdf_input: true` for specific Bedrock Claude models by iwamot in https://github.com/BerriAI/litellm/pull/8655
* Fix prometheus metrics w/ custom metrics + Handle sending `image_url` as str to openai by krrishdholakia in https://github.com/BerriAI/litellm/pull/8935
* fix(proxy_server.py): fix setting router redis cache, if cache enable… by krrishdholakia in https://github.com/BerriAI/litellm/pull/8859
* Fix `relation "dailytagspend" does not exist` error by Schnitzel in https://github.com/BerriAI/litellm/pull/8947

New Contributors
* marscod made their first contribution in https://github.com/BerriAI/litellm/pull/8862
* jhpiedrahitao made their first contribution in https://github.com/BerriAI/litellm/pull/8875
* Tomas2D made their first contribution in https://github.com/BerriAI/litellm/pull/8870
* Schnitzel made their first contribution in https://github.com/BerriAI/litellm/pull/8947

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.61.20.rc...v1.62.1-nightly



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.62.1-nightly



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 264.65819933898456 | 6.116726934503129 | 0.0033443012217075608 | 1829 | 1 | 88.08753300002081 | 3128.6442510000256 |
| Aggregated | Passed ✅ | 250.0 | 264.65819933898456 | 6.116726934503129 | 0.0033443012217075608 | 1829 | 1 | 88.08753300002081 | 3128.6442510000256 |

1.63.0.dev5

What's Changed
* Return signature on bedrock converse thinking + Fix `{}` empty dictionary on streaming + thinking by krrishdholakia in https://github.com/BerriAI/litellm/pull/9023
* (Refactor) `/v1/messages` to follow simpler logic for Anthropic API spec by ishaan-jaff in https://github.com/BerriAI/litellm/pull/9013


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.0-nightly...v1.63.0.dev5



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.0.dev5



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 278.42101090109276 | 6.116149255066882 | 0.0 | 1830 | 0 | 214.94648899999902 | 4750.29671599998 |
| Aggregated | Passed ✅ | 250.0 | 278.42101090109276 | 6.116149255066882 | 0.0 | 1830 | 0 | 214.94648899999902 | 4750.29671599998 |

1.63.0.dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.63.0-nightly...v1.63.0.dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.63.0.dev1



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 190.0 | 209.86284151312142 | 6.250523763835477 | 0.0 | 1867 | 0 | 163.62763399996538 | 3461.6653150000047 |
| Aggregated | Passed ✅ | 190.0 | 209.86284151312142 | 6.250523763835477 | 0.0 | 1867 | 0 | 163.62763399996538 | 3461.6653150000047 |

v1.63.0-nightly
What's Changed
* Fix 7629 - Add tzdata package to Dockerfile (8915) by krrishdholakia in https://github.com/BerriAI/litellm/pull/9009
* Return `signature` on anthropic streaming + migrate to `signature` field instead of `signature_delta` [MINOR bump] by krrishdholakia in https://github.com/BerriAI/litellm/pull/9021
* Support `format` param for specifying image type by krrishdholakia in https://github.com/BerriAI/litellm/pull/9019
