## What's Changed
* [Feat-Proxy] - Log /audio/transcription calls on Langfuse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4939
* Fix: 4942. Remove verbose logging when exception can be handled by dleen in https://github.com/BerriAI/litellm/pull/4943
* fixes: 4947 Bedrock context exception does not have a response by dleen in https://github.com/BerriAI/litellm/pull/4948
* [Feat] Add support for Bedrock Guardrails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4946
* build(deps): bump fast-xml-parser from 4.3.2 to 4.4.1 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/4950
* ui - allow entering custom model names for all providers (azure ai, openai, etc) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4951
* Fix bug in cohere_chat.py by pat-cohere in https://github.com/BerriAI/litellm/pull/4949
* Feat UI - allow using a custom header for the litellm api key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4916
* [Feat] Add `litellm.create_fine_tuning_job()`, `litellm.list_fine_tuning_jobs()`, `litellm.cancel_fine_tuning_job()` fine-tuning endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4956 (see the sketch after this list)
* [Feature]: GET /v1/batches to return list of batches by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4969
* [Fix-Proxy] ProxyException code as str - Make OpenAI Compatible by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4973
* Proxy Admin UI - switch off console logs in production mode by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4975
* feat(huggingface_restapi.py): Support multiple hf embedding types + async hf embeddings by krrishdholakia in https://github.com/BerriAI/litellm/pull/4976
* fix(cohere.py): support async cohere embedding calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/4977 (async embedding sketch after this list)
* fix(utils.py): fix model registration to model cost map by krrishdholakia in https://github.com/BerriAI/litellm/pull/4979
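
The new fine-tuning helpers (PR #4956) mirror OpenAI's fine-tuning endpoints. A minimal sketch of the three calls named above; only the function names come from this changelog, while the parameters (`training_file`, `custom_llm_provider`, the model string) are assumptions modeled on the OpenAI fine-tuning API:

```python
import litellm

# Create a fine-tuning job ("file-abc123" is a placeholder for an
# already-uploaded training file id).
ft_job = litellm.create_fine_tuning_job(
    model="gpt-3.5-turbo",
    training_file="file-abc123",
    custom_llm_provider="openai",
)

# List fine-tuning jobs for the same provider.
jobs = litellm.list_fine_tuning_jobs(custom_llm_provider="openai")
print(jobs)

# Cancel the job created above.
litellm.cancel_fine_tuning_job(
    fine_tuning_job_id=ft_job.id,
    custom_llm_provider="openai",
)
```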
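
PRs #4976 and #4977 add async embedding support for Hugging Face and Cohere. A hedged sketch using `litellm.aembedding` (the async counterpart of `litellm.embedding`); the model strings are illustrative, not prescribed by these PRs:

```python
import asyncio
import litellm

async def main():
    # Async Cohere embedding call.
    cohere_resp = await litellm.aembedding(
        model="cohere/embed-english-v3.0",
        input=["hello from litellm"],
    )
    # Async Hugging Face embedding call.
    hf_resp = await litellm.aembedding(
        model="huggingface/BAAI/bge-large-en-v1.5",
        input=["hello from litellm"],
    )
    print(len(cohere_resp.data), len(hf_resp.data))

asyncio.run(main())
```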
## New Contributors
* pat-cohere made their first contribution in https://github.com/BerriAI/litellm/pull/4949
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.5-dev1...v1.42.5-dev2
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.5-dev2
```
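
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A quick smoke test with the `openai` Python client; the API key and model name below are placeholders that depend on your proxy configuration:

```python
from openai import OpenAI

# Point a standard OpenAI client at the local proxy.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model configured on the proxy
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```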
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 80 | 94.54 | 6.34 | 0.0 | 1898 | 0 | 67.74 | 1524.30 |
| Aggregated | Passed ✅ | 80 | 94.54 | 6.34 | 0.0 | 1898 | 0 | 67.74 | 1524.30 |