# LiteLLM


## 1.40.19

🚨🚨🚨 Known bug on the LiteLLM Proxy Server in this release. We do not recommend upgrading until the issue is fixed.

You can use Claude 3.5 Sonnet on older versions; no upgrade is required.
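
For example, a minimal sketch of calling Claude 3.5 Sonnet through the LiteLLM SDK, assuming an `ANTHROPIC_API_KEY` is set in the environment (the model name follows Anthropic's release ID, as referenced in the changelog below):

```python
# Minimal sketch: Claude 3.5 Sonnet via LiteLLM.
# Assumes ANTHROPIC_API_KEY is exported; the model name is Anthropic's release ID.
import litellm

response = litellm.completion(
    model="claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```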

### What's Changed
* fix(proxy_server.py): fix llm_model_list to use router.get_model_list() by krrishdholakia in https://github.com/BerriAI/litellm/pull/4274
* Support 'image url' to vertex ai / google ai studio gemini models by krrishdholakia in https://github.com/BerriAI/litellm/pull/4266
* Use AWS Key Management System for Encrypted Database URL + Redis Credentials by krrishdholakia in https://github.com/BerriAI/litellm/pull/4111
* feat - add open router exception mapping by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4282
* feat - support CURL OPTIONS for `/health/readiness` endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4286 (see the readiness-probe sketch after this list)
* fix(litellm_logging.py): Add missing import statement. by Manouchehri in https://github.com/BerriAI/litellm/pull/4276
* docs - setting team budgets on ui by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4292
* [Bug-Fix]: Azure image generation doesn't support HTTPS_PROXY env var by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4293
* [feat] Add ft:gpt-4, ft:gpt-4o models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4294
* build(model_prices_and_context_window.json): fix gemini pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4291
* support vertex_credentials filepath by hawktang in https://github.com/BerriAI/litellm/pull/4199
* fix(litellm_logging.py): initialize global variables for logging by krrishdholakia in https://github.com/BerriAI/litellm/pull/4296
* Vertex AI - character based cost calculation by krrishdholakia in https://github.com/BerriAI/litellm/pull/4295
* Add claude 3.5 sonnet model by lowjiansheng in https://github.com/BerriAI/litellm/pull/4310
* Add `vertex_ai/claude-3-5-sonnet@20240620` and `bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4311
* Updated deepseek coder for v2, added openrouter version by paul-gauthier in https://github.com/BerriAI/litellm/pull/4308
* Fix model name for deepseek-coder in documentation by williamjeong2 in https://github.com/BerriAI/litellm/pull/4304
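
To illustrate the `/health/readiness` change above, a minimal sketch of probing the endpoint with an OPTIONS request (the `localhost:4000` address is an assumption matching the Docker commands in these notes; adjust to your deployment):

```python
# Minimal readiness probe against a locally running LiteLLM proxy.
# Host and port are assumptions; adjust to your deployment.
import requests

resp = requests.options("http://localhost:4000/health/readiness")
print(resp.status_code)  # expect a 2xx once the proxy is ready to serve traffic
```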

### New Contributors
* hawktang made their first contribution in https://github.com/BerriAI/litellm/pull/4199
* lowjiansheng made their first contribution in https://github.com/BerriAI/litellm/pull/4310
* williamjeong2 made their first contribution in https://github.com/BerriAI/litellm/pull/4304

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.17...v1.40.19

## 1.40.17

### What's Changed
* [Fix] Proxy Thread creation using the Assistants API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4260
* Add a Dependabot config to auto-update GitHub action versions by kurtmckee in https://github.com/BerriAI/litellm/pull/4261
* [Fix-Bug]: LiteLLM returns 500 in case of Quota exceeded for anthropic-claude-3-haiku by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4263
* [Docs] Deep infra llama3 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4265
* [Fix-Bug] Async Streaming Mock is different to sync streaming mock by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4268
* fix(proxy_server.py): Fix JWT-Auth team spend tracking by krrishdholakia in https://github.com/BerriAI/litellm/pull/4269
* fix: add more type hints to init methods by nejch in https://github.com/BerriAI/litellm/pull/4258
* [Fix] Use Langfuse prompt Object with LiteLLM Proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4275
* fix(vertex_httpx.py): Correctly handle Vertex content policy violation error by krrishdholakia in https://github.com/BerriAI/litellm/pull/4271

### New Contributors
* kurtmckee made their first contribution in https://github.com/BerriAI/litellm/pull/4261
* nejch made their first contribution in https://github.com/BerriAI/litellm/pull/4258

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.16...v1.40.17



### Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.17
```
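
Once the container is up, the proxy speaks the OpenAI API, so any OpenAI-compatible client can point at it. A minimal sketch (the model name and the `sk-1234` key are placeholders for whatever you have configured on the proxy):

```python
# Hypothetical client call against the proxy started above.
# base_url points at the container's published port; the API key and
# model name are placeholders for your proxy configuration.
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```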



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 140.0 | 166.0081077142874 | 6.315588540483388 | 0.0 | 1890 | 0 | 109.83896499999446 | 2170.276978000004 |
| Aggregated | Passed βœ… | 140.0 | 166.0081077142874 | 6.315588540483388 | 0.0 | 1890 | 0 | 109.83896499999446 | 2170.276978000004 |

## 1.40.16

Codestral API docs: https://docs.litellm.ai/docs/providers/codestral

![Codestral API screenshot](https://github.com/BerriAI/litellm/assets/29436595/481ec4ec-e247-426b-95e1-83e0fad1e579)
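
As a quick illustration, a hedged sketch of a Codestral chat call through LiteLLM (the `codestral/codestral-latest` model string and the `CODESTRAL_API_KEY` env var follow the linked docs page; treat them as assumptions):

```python
# Sketch of a Codestral chat call via LiteLLM, per the provider docs linked above.
# Model string and env-var name are taken from those docs; treat as assumptions.
import litellm  # requires CODESTRAL_API_KEY in the environment

response = litellm.completion(
    model="codestral/codestral-latest",
    messages=[{"role": "user", "content": "Write a Python one-liner to reverse a string."}],
)
print(response.choices[0].message.content)
```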



### What's Changed
* Langfuse Integration ignore Embedding Output by hburrichter in https://github.com/BerriAI/litellm/pull/4226
* [Refactor Proxy] - refactor proxy place internal user, customer endpoints in separate file by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4223
* Add Gemini context window pricing by krrishdholakia in https://github.com/BerriAI/litellm/pull/4243
* Update utils.py (fix dangerous code) by CodeVigilanteOfficial in https://github.com/BerriAI/litellm/pull/4228
* fix: lunary callback tags by hughcrt in https://github.com/BerriAI/litellm/pull/4141
* build(deps): bump urllib3 from 2.2.1 to 2.2.2 by dependabot in https://github.com/BerriAI/litellm/pull/4251
* build(deps): bump ws from 7.5.9 to 7.5.10 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/4252
* VertexAI/Gemini: Calculate cost based on context window by krrishdholakia in https://github.com/BerriAI/litellm/pull/4245
* [Feat] Add Codestral FIM API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4247
* [Feat] - Add Codestral Chat API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4242
* build(deps): bump sharp from 0.30.7 to 0.32.6 in /docs/my-website by dependabot in https://github.com/BerriAI/litellm/pull/4253
* [SECURITY] `/model/info`: strip out llm credential before returning by ushuz in https://github.com/BerriAI/litellm/pull/4244
* [New Feature] Add mock_tool_calls to `main.py` by jacquesyvesgl in https://github.com/BerriAI/litellm/pull/4195
* Fix file type handling of uppercase extensions by nick-rackauckas in https://github.com/BerriAI/litellm/pull/4182
* [Fix] Refactor Logfire to use LiteLLM OTEL Class by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4254
* feat(main.py): Gemini (Google AI Studio) - Support Function Calling, Inline images, etc. by krrishdholakia in https://github.com/BerriAI/litellm/pull/4246 (see the tool-calling sketch below)
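
For the Gemini function-calling support referenced above, a minimal sketch using OpenAI-style tool definitions (the `gemini/gemini-1.5-pro` model string and the tool schema are illustrative; a `GEMINI_API_KEY` is assumed):

```python
# Illustrative Gemini (Google AI Studio) tool-calling request via LiteLLM.
# Model string and tool schema are assumptions; requires GEMINI_API_KEY.
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="gemini/gemini-1.5-pro",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```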

### New Contributors
* CodeVigilanteOfficial made their first contribution in https://github.com/BerriAI/litellm/pull/4228
* hughcrt made their first contribution in https://github.com/BerriAI/litellm/pull/4141
* jacquesyvesgl made their first contribution in https://github.com/BerriAI/litellm/pull/4195

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.15...v1.40.16



### Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.16
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 120.0 | 147.66809414867745 | 6.318096816836798 | 0.0 | 1890 | 0 | 102.75308299998187 | 1718.0775130000256 |
| Aggregated | Passed βœ… | 120.0 | 147.66809414867745 | 6.318096816836798 | 0.0 | 1890 | 0 | 102.75308299998187 | 1718.0775130000256 |

## 1.40.15

🚨 Please wait for a stable release before upgrading your production version of LiteLLM 👉 We refactored utils.py and proxy_server.py to be less than 10K lines each.

### What's Changed
* [Fix] Security Fix bump docusaurus version by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4213
* build(deps): bump azure-identity from 1.15.0 to 1.16.1 by dependabot in https://github.com/BerriAI/litellm/pull/4130
* build(deps): bump braces from 3.0.2 to 3.0.3 in /ui/litellm-dashboard by dependabot in https://github.com/BerriAI/litellm/pull/4131
* fix(build): .dockerignore not picked up by bcvanmeurs in https://github.com/BerriAI/litellm/pull/3116
* [Refactor-Proxy] Refactor user_api_key_auth to be its own file by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4215
* [Reliability Fix] Anthropic / Bedrock HTTPX - Cache Async Httpx client by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4211
* refactor(utils.py): Cut down utils.py to <10k lines. by krrishdholakia in https://github.com/BerriAI/litellm/pull/4216
* ui - show exceptions by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4222
* fix - non sso ui sign up flow by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4220
* [Refactor-Proxy] Make proxy_server.py < 10K lines (move management, key, endpoints to their own files) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4217


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.14...v1.40.15



### Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.15
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 120.0 | 132.58387302297749 | 6.398687111538595 | 0.0 | 1915 | 0 | 97.12711200000967 | 1186.0091809999744 |
| Aggregated | Passed βœ… | 120.0 | 132.58387302297749 | 6.398687111538595 | 0.0 | 1915 | 0 | 97.12711200000967 | 1186.0091809999744 |

## 1.40.14

### What's Changed
* ui - fix team based usage crashing by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4198
* [Fix + Refactor] - Router Alerting for llm exceptions + use separate util for sending alert by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4197
* [Bug fix] Don't cache team, user, customer budget after calling /update, /delete by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4203
* feat(router.py): support content policy fallbacks by krrishdholakia in https://github.com/BerriAI/litellm/pull/4207 (see the Router sketch after this list)
* fix(slack_alerting.py): allow new 'alerting_metadata' arg by krrishdholakia in https://github.com/BerriAI/litellm/pull/4205
* build(pyproject.toml): require pydantic v2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/4151
* [Feat] send email alerts when budget exceeded by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4209
* [Fix] redact_message_input_output_from_logging deepcopy bug by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4210
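
For the content-policy fallback feature above, a hedged sketch of the Router configuration (deployment names are placeholders; `content_policy_fallbacks` is the parameter introduced in the linked PR):

```python
# Sketch of content-policy fallbacks on the LiteLLM Router.
# Deployment names and underlying models are placeholders for your own config.
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "claude-3", "litellm_params": {"model": "anthropic/claude-3-haiku-20240307"}},
        {"model_name": "my-fallback", "litellm_params": {"model": "gpt-3.5-turbo"}},
    ],
    # When a provider rejects a request on content-policy grounds,
    # retry it against the mapped fallback deployment.
    content_policy_fallbacks=[{"claude-3": ["my-fallback"]}],
)

response = router.completion(
    model="claude-3",
    messages=[{"role": "user", "content": "hello"}],
)
```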


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.13...v1.40.14



### Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.14
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 120.0 | 141.18410333195084 | 6.441903839147897 | 0.0 | 1928 | 0 | 105.22602600002529 | 510.8018800000025 |
| Aggregated | Passed βœ… | 120.0 | 141.18410333195084 | 6.441903839147897 | 0.0 | 1928 | 0 | 105.22602600002529 | 510.8018800000025 |

## 1.40.14.dev4

### What's Changed
* [Fix] Security Fix bump docusaurus version by ishaan-jaff in https://github.com/BerriAI/litellm/pull/4213
* build(deps): bump azure-identity from 1.15.0 to 1.16.1 by dependabot in https://github.com/BerriAI/litellm/pull/4130
* build(deps): bump braces from 3.0.2 to 3.0.3 in /ui/litellm-dashboard by dependabot in https://github.com/BerriAI/litellm/pull/4131
* fix(build): .dockerignore not picked up by bcvanmeurs in https://github.com/BerriAI/litellm/pull/3116


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.40.14...v1.40.14.dev4



### Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.14.dev4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat






### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 130.0 | 155.13191184789494 | 6.354376739146439 | 0.0 | 1900 | 0 | 111.70541100000264 | 1523.7574440000117 |
| Aggregated | Passed βœ… | 130.0 | 155.13191184789494 | 6.354376739146439 | 0.0 | 1900 | 0 | 111.70541100000264 | 1523.7574440000117 |
