LiteLLM

Latest version: v1.52.14


1.43.2

What's Changed
* LLM Guardrails - Support lakera config thresholds + custom api base by krrishdholakia in https://github.com/BerriAI/litellm/pull/5076
* Revert "Fix: Add prisma binary_cache_dir specification to pyproject.toml" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5085
* add ft:gpt-4o-mini-2024-07-18 to model prices by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5084
* [Fix-Bug]: Using extra_headers removes the OpenRouter HTTP-Referer/X-Title headers by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5086
* [Feat] - Prometheus Metrics to monitor a model / deployment health by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5092
* [Fix] Init Prometheus Service Logger when it's None by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5088
* fix(anthropic.py): handle anthropic returning empty argument string (invalid json str) for tool call while streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/5091
* Clarifai: Removed model name casing issue by Mogith-P-N in https://github.com/BerriAI/litellm/pull/5095
* feat(utils.py): support passing response_format as pydantic model by krrishdholakia in https://github.com/BerriAI/litellm/pull/5079 (see the sketch after this list)
* [Feat-Router + Proxy] Add provider wildcard routing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5098 (see the Router sketch after this list)
* Add deepseek-coder-v2(-lite), mistral-large, codegeex4 to ollama by sammcj in https://github.com/BerriAI/litellm/pull/5100
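
As noted above, PR 5079 lets you pass a Pydantic class directly as `response_format` instead of a raw JSON-schema dict. A minimal sketch of the SDK usage, assuming an OpenAI key is configured; the model name and prompt are placeholders, not taken from the release notes:

```python
# Minimal sketch: passing a Pydantic model as response_format (PR 5079).
# Model name and prompt are placeholders, not taken from the release notes.
from pydantic import BaseModel
import litellm


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


response = litellm.completion(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Alice and Bob meet for lunch on Friday."}],
    response_format=CalendarEvent,  # pydantic class instead of {"type": "json_object"}
)
print(response.choices[0].message.content)  # JSON string matching the CalendarEvent schema
```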
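
PR 5098's provider wildcard routing, sketched on the Router side. This is a rough illustration, assuming `ANTHROPIC_API_KEY` is set in the environment; the deployment and model names are illustrative, not from the release notes:

```python
# Rough sketch of provider wildcard routing (PR 5098); names are illustrative.
import os

from litellm import Router

router = Router(
    model_list=[
        {
            # one wildcard deployment matches any "anthropic/<model>" request
            "model_name": "anthropic/*",
            "litellm_params": {
                "model": "anthropic/*",
                "api_key": os.environ["ANTHROPIC_API_KEY"],
            },
        }
    ]
)

response = router.completion(
    model="anthropic/claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```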

New Contributors
* Mogith-P-N made their first contribution in https://github.com/BerriAI/litellm/pull/5095
* sammcj made their first contribution in https://github.com/BerriAI/litellm/pull/5100

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.1...v1.43.2



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.2



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 144.3065982356908 | 6.422272607319682 | 0.0 | 1922 | 0 | 83.24493999998595 | 25498.95384100003 |
| Aggregated | Passed ✅ | 100.0 | 144.3065982356908 | 6.422272607319682 | 0.0 | 1922 | 0 | 83.24493999998595 | 25498.95384100003 |

1.43.1

What's Changed
* fix(main.py): log hidden params for text completion calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/5061
* feat(proxy_cli.py): support iam-based auth to rds by krrishdholakia in https://github.com/BerriAI/litellm/pull/5057
* add OpenAI gpt-4o-2024-08-06 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5070
* [Fix-Proxy] allow forwarding headers from request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5062
* feat(proxy_server.py): allow restricting allowed email domains for UI by krrishdholakia in https://github.com/BerriAI/litellm/pull/5071
* [Fix] Fix testing emails through Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5077
* [Fix] Dev docker image by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5069
* [Feat] /audio/transcription use file checksum for cache key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5075 (see the sketch after this list)
* UI - fwd UI requests from server root path by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5078
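
As referenced above, PR 5075 keys the transcription cache on a checksum of the uploaded file, so identical audio content can be served from cache across requests. A hedged sketch using the SDK's in-memory cache; the file path, model name, and import path are assumptions rather than text from the release notes:

```python
# Sketch: the second transcription of the same audio should hit the cache,
# since the cache key is derived from the file's checksum (PR 5075).
# File path and model name are placeholders.
import litellm
from litellm.caching import Cache

litellm.cache = Cache()  # default in-memory cache

with open("meeting.mp3", "rb") as audio_file:
    first = litellm.transcription(model="whisper-1", file=audio_file)

with open("meeting.mp3", "rb") as audio_file:
    second = litellm.transcription(model="whisper-1", file=audio_file)  # same bytes -> same cache key

print(first.text == second.text)
```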


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.0...v1.43.1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.1



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 166.1667368516749 | 6.284496216122683 | 0.0 | 1881 | 0 | 116.9949800000154 | 1591.6928899999903 |
| Aggregated | Passed ✅ | 140.0 | 166.1667368516749 | 6.284496216122683 | 0.0 | 1881 | 0 | 116.9949800000154 | 1591.6928899999903 |

1.43.1-dev1

What's Changed
* LLM Guardrails - Support lakera config thresholds + custom api base by krrishdholakia in https://github.com/BerriAI/litellm/pull/5076
* Revert "Fix: Add prisma binary_cache_dir specification to pyproject.toml" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5085
* add ft:gpt-4o-mini-2024-07-18 to model prices by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5084


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.1...v1.43.1-dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.1-dev1



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 152.1685983354436 | 6.339706230157776 | 0.0 | 1896 | 0 | 99.8656730000107 | 1999.8072829999956 |
| Aggregated | Passed ✅ | 120.0 | 152.1685983354436 | 6.339706230157776 | 0.0 | 1896 | 0 | 99.8656730000107 | 1999.8072829999956 |

1.43.0

What's Changed
* feat(router.py): add flag for mock testing loadbalancing for rate limit errors by krrishdholakia in https://github.com/BerriAI/litellm/pull/5036
* Handle bedrock tool calling in stream_chunk_builder by jcheng5 in https://github.com/BerriAI/litellm/pull/5025
* feat(anthropic_adapter.py): support streaming requests for `/v1/messages` endpoint by krrishdholakia in https://github.com/BerriAI/litellm/pull/5040 (see the sketch after this list)
* [Feat-Proxy] Log request/response on GCS by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5047
* [Proxy-Fix] Requests that are incorrectly flagged as admin-only paths by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5050
* [FIX] allow setting UI BASE path by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4142
* Revert "[FIX] allow setting UI BASE path" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5054
* [Proxy Fix] Allow running UI on custom path by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5056
* feat(caching.py): enable caching on provider-specific optional params by krrishdholakia in https://github.com/BerriAI/litellm/pull/5051
* OTEL - Log DB queries / functions on OTEL by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5059
* Fix - add debug statements when connecting to prisma DB by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5058
* fix: bump default allowed_fails + reduce default db pool limit by krrishdholakia in https://github.com/BerriAI/litellm/pull/5052
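
As referenced above, PR 5040 adds streaming to the Anthropic-format `/v1/messages` endpoint. A rough client-side sketch against a locally running proxy; the URL, key, and model name are placeholders:

```python
# Rough sketch: streaming an Anthropic-format request through the proxy's
# /v1/messages endpoint (PR 5040). URL, key, and model are placeholders.
import httpx

payload = {
    "model": "claude-3-haiku-20240307",
    "max_tokens": 256,
    "stream": True,
    "messages": [{"role": "user", "content": "Say hello"}],
}

with httpx.stream(
    "POST",
    "http://localhost:4000/v1/messages",
    headers={"Authorization": "Bearer sk-1234"},
    json=payload,
    timeout=60,
) as response:
    for line in response.iter_lines():
        if line:
            print(line)  # raw server-sent-event lines, one chunk at a time
```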

New Contributors
* jcheng5 made their first contribution in https://github.com/BerriAI/litellm/pull/5025

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.12...v1.43.0



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.0



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 85 | 100.15133779102914 | 6.555354782147974 | 0.0 | 1962 | 0 | 67.9489269999749 | 1520.4776349999634 |
| Aggregated | Passed ✅ | 85 | 100.15133779102914 | 6.555354782147974 | 0.0 | 1962 | 0 | 67.9489269999749 | 1520.4776349999634 |

1.43.0.dev1

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.43.0...v1.43.0.dev1



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.0.dev1



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 100.0 | 130.56140989293124 | 6.427369891993175 | 0.0 | 1924 | 0 | 82.19501900003934 | 2619.2400959999986 |
| Aggregated | Passed ✅ | 100.0 | 130.56140989293124 | 6.427369891993175 | 0.0 | 1924 | 0 | 82.19501900003934 | 2619.2400959999986 |

1.42.12

What's Changed
* fix(types/utils.py): Support deepseek prompt caching by krrishdholakia in https://github.com/BerriAI/litellm/pull/5019
* build(ui): allow admin_viewer to view teams tab by krrishdholakia in https://github.com/BerriAI/litellm/pull/5027
* [Fix] Whisper Caching - Use correct cache keys for checking request in cache by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5026
* fix(utils.py): Fix adding azure models on ui by krrishdholakia in https://github.com/BerriAI/litellm/pull/5029
* Allow Bedrock to set custom STS endpoint for OIDC flow by Manouchehri in https://github.com/BerriAI/litellm/pull/4982
* [Feat] Add support for Vertex AI fine tuning endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5028
* [Feat] Add support for Vertex AI Fine tuning on LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5030 (see the sketch after this list)
* [Feat] Vertex AI fine tuning - support translating hyperparameters by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5034
* Docs - Add example of Vertex AI fine tuning API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5035
* [Feat] support all native vertex ai endpoints - Gemini API, Embeddings API, Imagen API, Batch prediction API, Tuning API, CountTokens API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/5037
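
As referenced above, PRs 5028-5037 wire up Vertex AI fine-tuning on the SDK and proxy. A rough sketch of creating a job through the proxy with the OpenAI client; the base URL, key, model, and training-file path are placeholders, and routing the job via `extra_body` is an assumption based on the proxy's OpenAI-compatible surface:

```python
# Rough sketch: creating a Vertex AI fine-tuning job through the LiteLLM proxy
# (PRs 5028/5030/5034). Base URL, key, model, and training file are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

ft_job = client.fine_tuning.jobs.create(
    model="gemini-1.0-pro-002",
    training_file="gs://my-bucket/sft_train_data.jsonl",
    extra_body={"custom_llm_provider": "vertex_ai"},  # ask the proxy to route to Vertex AI
)
print(ft_job)
```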


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.42.11...v1.42.12



Docker Run LiteLLM Proxy


docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.12



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 120.0 | 147.37461002147643 | 6.379597709990025 | 0.0033418531744316532 | 1909 | 1 | 27.773200999973824 | 2754.675483000028 |
| Aggregated | Passed ✅ | 120.0 | 147.37461002147643 | 6.379597709990025 | 0.0033418531744316532 | 1909 | 1 | 27.773200999973824 | 2754.675483000028 |
