LiteLLM Release Notes


v1.39.4

What's Changed
* fix - UI submit chat on enter by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3916
* Revert "Revert "fix: Log errors in Traceloop Integration (reverts previous revert)"" by nirga in https://github.com/BerriAI/litellm/pull/3909


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.39.3...v1.39.4



Docker Run LiteLLM Proxy


```shell
# STORE_MODEL_IN_DB=True lets the proxy persist models added via the UI/API in its database;
# port 4000 is the proxy's default listening port.
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.39.4
```
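
Once the container is up, any OpenAI-compatible client can talk to the proxy. A minimal sketch with the OpenAI Python SDK, assuming the proxy is reachable at localhost:4000 and `sk-1234` is a placeholder virtual key for your deployment:

```python
# Point the OpenAI client at the LiteLLM proxy.
# "sk-1234" and the model name are placeholders for your own setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model name configured on the proxy
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(response.choices[0].message.content)
```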



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 120.0 | 135.98662418243552 | 6.404889633803229 | 0.0 | 1913 | 0 | 97.80563699996492 | 1663.1231360000243 |
| Aggregated | Passed βœ… | 120.0 | 135.98662418243552 | 6.404889633803229 | 0.0 | 1913 | 0 | 97.80563699996492 | 1663.1231360000243 |

v1.39.3

What's Changed
* fix: Log errors in Traceloop Integration (reverts previous revert) by nirga in https://github.com/BerriAI/litellm/pull/3846
* Added support for Triton chat completion using trtlllm generate endpo… by giritatavarty-8451 in https://github.com/BerriAI/litellm/pull/3895
* Revert "Added support for Triton chat completion using trtlllm generate endpo…" by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3900
* [Feat] Implement Logout Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3901
* Revert "fix: Log errors in Traceloop Integration (reverts previous revert)" by krrishdholakia in https://github.com/BerriAI/litellm/pull/3908
* feat(proxy_server.py): emit webhook event whenever customer spend is tracked by krrishdholakia in https://github.com/BerriAI/litellm/pull/3906
* fix(openai.py): only allow 'user' as optional param if openai model by krrishdholakia in https://github.com/BerriAI/litellm/pull/3902
* [Feat] UI update analytics tab to show human friendly usage vals by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3894
* ui - fix latency analytics on `completion_tokens` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3897
* [Admin UI] Edit `Internal Users` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3904
* fix(proxy_server.py): fix end user object check when master key used by krrishdholakia in https://github.com/BerriAI/litellm/pull/3910
* [UI] Fix bug on Model analytics by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3913
* feat - langfuse use `key_alias` as generation name on litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3911 (see the sketch after this list)
* fix pricing / price tracking for vertex_ai/claude-3-opus20240229 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3915
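
The Langfuse change above hooks into LiteLLM's standard logging callbacks. A minimal sketch of enabling Langfuse logging in the SDK, assuming `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` are set in the environment; on the proxy, the equivalent is `success_callback: ["langfuse"]` in the config, where PR 3911 then names generations after the requesting key's `key_alias`:

```python
# Sketch: log completions to Langfuse via LiteLLM's success callback.
# Assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are exported.
import litellm

litellm.success_callback = ["langfuse"]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi"}],
)
print(response.choices[0].message.content)
```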

New Contributors
* giritatavarty-8451 made their first contribution in https://github.com/BerriAI/litellm/pull/3895

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.39.2...v1.39.3



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.39.3
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 110.0 | 133.96143579083153 | 6.347194412767075 | 0.0 | 1898 | 0 | 91.88108999995848 | 1459.6432470000025 |
| Aggregated | Passed βœ… | 110.0 | 133.96143579083153 | 6.347194412767075 | 0.0 | 1898 | 0 | 91.88108999995848 | 1459.6432470000025 |

v1.39.2

What's Changed
* Update ollama.py for image handling by rick-github in https://github.com/BerriAI/litellm/pull/2888
* fix(anthropic.py): fix parallel streaming on anthropic.py by krrishdholakia in https://github.com/BerriAI/litellm/pull/3883
* feat(proxy_server.py): Time to first token Request-level breakdown by krrishdholakia in https://github.com/BerriAI/litellm/pull/3886
* [BETA-Feature] Add OpenAI `v1/batches` Support on LiteLLM SDK by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3882 (see the sketch after this list)
* feat - router add abatch_completion - N Models, M Messages by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3889 (see the sketch after the image below)
* [Feat] LiteLLM Proxy Add `POST /v1/files` and `GET /v1/files` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3888
* [Feat] LiteLLM Proxy - Add support for `POST /v1/batches` , `GET /v1/batches` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3885
* feat(router.py): support fastest response batch completion call by krrishdholakia in https://github.com/BerriAI/litellm/pull/3887
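
The new files and batches routes are OpenAI-compatible, so the stock OpenAI client can drive them. A sketch assuming the proxy at localhost:4000, a placeholder key `sk-1234`, and a `requests.jsonl` file of batch chat-completion requests:

```python
# Sketch: exercise POST /v1/files and POST/GET /v1/batches on the proxy.
# Key, URL, and file name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-1234")

# POST /v1/files: upload the batch input file
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

# POST /v1/batches: create the batch job
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# GET /v1/batches/{batch_id}: poll the job
print(client.batches.retrieve(batch.id).status)
```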


![pika-1716961715848-1x](https://github.com/BerriAI/litellm/assets/29436595/bc259073-a1ce-400f-b77c-1dbbef28497d)
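
For the router-side batch features, a hedged sketch of `abatch_completion` (N models, one set of messages); the method name and parameters follow the PR title, and the model entries are placeholders:

```python
# Sketch: fan the same messages out to N models via the Router.
# Model names/credentials are placeholders; exact parameters may differ.
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "claude-3-haiku", "litellm_params": {"model": "claude-3-haiku-20240307"}},
    ]
)

async def main():
    responses = await router.abatch_completion(
        models=["gpt-3.5-turbo", "claude-3-haiku"],
        messages=[{"role": "user", "content": "Summarize LiteLLM in one line."}],
    )
    print(responses)

asyncio.run(main())
```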


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.38.12...v1.39.2



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.39.2
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 72 | 83.46968387564114 | 6.529958043991633 | 0.0 | 1954 | 0 | 61.38368400002037 | 678.4462749999989 |
| Aggregated | Passed βœ… | 72 | 83.46968387564114 | 6.529958043991633 | 0.0 | 1954 | 0 | 61.38368400002037 | 678.4462749999989 |

v1.38.12

What's Changed
* feat(proxy_server.py): CRUD endpoints for controlling 'invite link' flow by krrishdholakia in https://github.com/BerriAI/litellm/pull/3873
* [Feat] Add, Test Email Alerts on Admin UI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3874


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.38.11...v1.38.12



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.38.12
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 76 | 91.16258395147193 | 6.473952425752436 | 0.0 | 1937 | 0 | 62.406538999994154 | 1772.6057410000067 |
| Aggregated | Passed βœ… | 76 | 91.16258395147193 | 6.473952425752436 | 0.0 | 1937 | 0 | 62.406538999994154 | 1772.6057410000067 |

v1.38.11

✨ NEW `/customer/update` and `/customer/delete` endpoints (see the sketch below): https://docs.litellm.ai/docs/proxy/users#set-rate-limits
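
A sketch of calling the new endpoints with `requests`; the host, master key, and payload fields are placeholders (see the linked docs for the supported fields):

```python
# Sketch: update and delete a customer via the proxy's management API.
# Host, key, and payload fields are placeholders.
import requests

headers = {"Authorization": "Bearer sk-1234"}  # proxy master key (placeholder)

# Update a customer's budget / limits
requests.post(
    "http://localhost:4000/customer/update",
    headers=headers,
    json={"user_id": "customer-123", "max_budget": 50.0},
)

# Delete customers by id
requests.post(
    "http://localhost:4000/customer/delete",
    headers=headers,
    json={"user_ids": ["customer-123"]},
)
```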

πŸ“ [Feat] Email alerting is now Free Tier: https://docs.litellm.ai/docs/proxy/email

πŸš€ [Feat] Show supported OpenAI params on LiteLLM UI model hub
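
The same supported-params data the model hub displays is also available in the SDK via `get_supported_openai_params`; a small sketch (the model name is a placeholder):

```python
# Sketch: list which OpenAI params a given model supports.
from litellm import get_supported_openai_params

params = get_supported_openai_params(model="gpt-3.5-turbo")
print(params)  # e.g. ["temperature", "max_tokens", "stream", ...]
```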

✨ [Feat] Show Created at, Created by on Models Page

![codeimage-snippet_28 (3)](https://github.com/BerriAI/litellm/assets/29436595/742ce548-f63b-42eb-a0c8-1a9cdd229fa8)




What's Changed
* Clarifai-LiteLLM update docs by mogith-pn in https://github.com/BerriAI/litellm/pull/3856
* [Feat] Show supported OpenAI params on model hub by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3859
* fix(parallel_request_limiter.py): fix user+team tpm/rpm limit check by krrishdholakia in https://github.com/BerriAI/litellm/pull/3857
* fix - Admin UI show activity by model_group by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3865
* [Feat] Show Created at, Created by on `Models` Page by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3868
* Improve validate-fallbacks method by SujanShilakar in https://github.com/BerriAI/litellm/pull/3847
* fix(model_dashboard.tsx): accurately show the input/output cost per token when custom pricing is set by krrishdholakia in https://github.com/BerriAI/litellm/pull/3871
* Admin UI - Public model hub by krrishdholakia in https://github.com/BerriAI/litellm/pull/3869
* [Feat] Rename `/end/user/new` -> `/customer/new` (maintain backwards compatibility) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3870
* [Feat] Make Email alerting Free Tier, but customizing emails enterprise by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3872


New Contributors
* SujanShilakar made their first contribution in https://github.com/BerriAI/litellm/pull/3847

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.38.10...v1.38.11



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.38.11
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 94 | 113.13091035154665 | 6.485092627447978 | 0.0 | 1940 | 0 | 80.4994959999874 | 735.4111310000064 |
| Aggregated | Passed βœ… | 94 | 113.13091035154665 | 6.485092627447978 | 0.0 | 1940 | 0 | 80.4994959999874 | 735.4111310000064 |

v1.38.10

What's Changed
* [Feat] Model Hub by ishaan-jaff in https://github.com/BerriAI/litellm/pull/3849


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.38.8...v1.38.10



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.38.10
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 130.0 | 152.41971991092666 | 6.452763997233594 | 0.0 | 1931 | 0 | 108.63601500000186 | 1150.9651800000142 |
| Aggregated | Passed βœ… | 130.0 | 152.41971991092666 | 6.452763997233594 | 0.0 | 1931 | 0 | 108.63601500000186 | 1150.9651800000142 |

v1.38.8-stable
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.38.8...v1.38.8-stable



Docker Run LiteLLM Proxy


```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.38.8-stable
```



Don't want to maintain your internal proxy? Get in touch πŸŽ‰
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed βœ… | 73 | 85.31436445193742 | 6.640342227407584 | 0.0 | 1987 | 0 | 61.23339800001304 | 1299.6820050000224 |
| Aggregated | Passed βœ… | 73 | 85.31436445193742 | 6.640342227407584 | 0.0 | 1987 | 0 | 61.23339800001304 | 1299.6820050000224 |
