LiteLLM

Latest version: v1.61.11


1.18.9

Not secure
What's Changed
* docs: Fix import statement for provider's sample code by kihaya in https://github.com/BerriAI/litellm/pull/1535
* Litellm GitHub action build admin UI by ShaunMaher in https://github.com/BerriAI/litellm/pull/1505
* [Feat] Proxy Auth - Use custom_key_generate by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1538
* [Fix] Router - Usage Based Routing with fallbacks (Track the correct tpm/rpm) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1555
* [Feat] support custom cost tracking per second by krrishdholakia in https://github.com/BerriAI/litellm/pull/1551

New Contributors
* kihaya made their first contribution in https://github.com/BerriAI/litellm/pull/1535

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.18.7...v1.18.9

1.18.8

Not secure
What's Changed
* [Feat] Add typehints for litellm.Router by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1529
* [Feat] Litellm.Router set custom cooldown times by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1534


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.18.6...v1.18.8

1.18.7

Not secure
What's Changed

1. Improved `litellm.Router` logging for fallbacks
**Example Log for one call**
```shell
LiteLLM Router - INFO: get_available_deployment for model: azure/gpt-4-fast, No deployment available
LiteLLM Router - INFO: litellm.completion(model=None) Exception No models available.
LiteLLM Router - INFO: get_available_deployment for model: azure/gpt-4-basic, No deployment available
LiteLLM Router - INFO: litellm.completion(model=None) Exception No models available.
LiteLLM Router - INFO: get_available_deployment for model: openai-gpt-4, Selected deployment: {'model_name': 'openai-gpt-4', 'litellm_params': {'model': 'gpt-3.5-turbo', 'api_key': 'sk-PhEM****', 'tpm': 2000}, 'tpm': 2000, 'model_info': {'id': '5a4b95fa-c018-4767-85c2-c4851c57cf34'}} for model: openai-gpt-4
LiteLLM Router - INFO: litellm.completion(model=gpt-3.5-turbo) 200 OK
```


How to use in Python:
```python
router = litellm.Router(
    model_list=model_list,
    fallbacks=fallbacks_list,
    set_verbose=True,
    debug_level="DEBUG",  # optional, default="INFO"
)
```


2. Improvements to Usage Based Routing - `litellm.Router`
Before making the first call, the router now checks whether any deployment has enough TPM remaining to handle it. Thanks georgeseifada for this!
3. [Feat] Add typehints for litellm.Router by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1529
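The TPM pre-check in item 2 can be sketched in plain Python. This is an illustrative model of the idea, not litellm's implementation; the deployment dicts and field names below are made up for the example.

```python
# Hypothetical sketch of a usage-based routing TPM pre-check:
# before sending a request, skip deployments whose tracked
# tokens-per-minute usage would exceed their configured limit.

def pick_deployment(deployments, tokens_needed):
    """Return the first deployment with enough TPM headroom, else None."""
    for d in deployments:
        if d["tpm_used"] + tokens_needed <= d["tpm_limit"]:
            return d
    return None

deployments = [
    {"name": "azure/gpt-4-fast", "tpm_limit": 1000, "tpm_used": 990},
    {"name": "openai-gpt-4", "tpm_limit": 2000, "tpm_used": 100},
]

chosen = pick_deployment(deployments, tokens_needed=200)
print(chosen["name"] if chosen else "No models available.")  # → openai-gpt-4
```

If no deployment has headroom, the check returns `None` up front, which matches the "No models available" log lines shown in the fallback example above.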



**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.18.6...v1.18.7

1.18.6

Not secure
What's Changed
1. [Feat] litellm.acompletion() make Langfuse success handler non blocking by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1519
   - The Langfuse success callback was blocking `litellm.acompletion()` calls; this is fixed in this release.
   - Support for tagging `cache_hits` on Langfuse (note: requires langfuse>=2.6.3)

<img width="1054" alt="Screenshot 2024-01-19 at 11 36 47 AM" src="https://github.com/BerriAI/litellm/assets/29436595/d65ec033-390e-4a06-b549-74625bcde6c0">

2. Langsmith: Add envs for project/run names; fix bug with None metadata by timothyasp in https://github.com/BerriAI/litellm/pull/1524
   * [Feat] Router improvements by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1525
3. Allow overriding headers for anthropic by keeganmccallum in https://github.com/BerriAI/litellm/pull/1513
   * fix(utils.py): add metadata to logging obj on setup, if exists (fixes max parallel request bug) by krrishdholakia in https://github.com/BerriAI/litellm/pull/1531
4. test(tests/): add unit testing for proxy server endpoints by krrishdholakia in https://github.com/BerriAI/litellm/commit/f5ced089d6f0af05600062e25a981fdabebba815
   * fix(proxy_server.py): users can now see their key info by krrishdholakia in https://github.com/BerriAI/litellm/commit/f5ced089d6f0af05600062e25a981fdabebba815
   * fix(proxy_server.py): model info now restricts for user access by krrishdholakia in https://github.com/BerriAI/litellm/commit/f5ced089d6f0af05600062e25a981fdabebba815

New Contributors
* timothyasp made their first contribution in https://github.com/BerriAI/litellm/pull/1524
* keeganmccallum made their first contribution in https://github.com/BerriAI/litellm/pull/1513

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.18.5...v1.18.6

1.18.5

Not secure
What's Changed
* add headers to budget manager by HaruHunab1320 in https://github.com/BerriAI/litellm/pull/1506
* Adds `s3_path` prefix so that we can save objects in a predefined location in the s3 bucket by duarteocarmo in https://github.com/BerriAI/litellm/pull/1499
* nit: switch to valid SPDX license identifier `MIT` in pyproject.toml by ErikBjare in https://github.com/BerriAI/litellm/pull/1515

New Contributors
* HaruHunab1320 made their first contribution in https://github.com/BerriAI/litellm/pull/1506

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.18.4...v1.18.5

1.18.4

Not secure
What's Changed
* [Feat] Proxy - Add Spend tracking logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1498
  - New SpendTable when using LiteLLM Virtual Keys - logs API Key, CreatedAt date + time, Model, Spend, Messages, Response
  - Docs to get started: https://docs.litellm.ai/docs/proxy/virtual_keys

![Group 197](https://user-images.githubusercontent.com/29436595/297958409-230b76f6-e919-4a1f-b183-09084a0e568d.png)

* [Feat] Proxy - Track Cost Per User (Using `user` passed to requests) by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1509

Example request:
```shell
curl --location 'http://0.0.0.0:8000/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer sk-RwPq' \
--data '{
  "model": "BEDROCK_GROUP",
  "user": "litellm-is-awesome-user",
  "messages": [
    {
      "role": "user",
      "content": "what llm are you-444"
    }
  ]
}'
```



Cost Tracked in LiteLLM Spend Tracking DB

<img width="419" alt="Screenshot 2024-01-18 at 5 56 17 PM" src="https://github.com/BerriAI/litellm/assets/29436595/700732e1-868a-4cec-bd17-376d7d510bab">

Notes:
- If a `user` is passed with the request, the proxy tracks cost for it
- If the `user` does not exist in the User Table, a new user is created with that spend
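The per-user bookkeeping described in the notes above reduces to a small upsert. This is a minimal in-memory sketch of that logic, assuming a dict in place of the proxy's real User Table; names and the schema are illustrative only.

```python
# Hedged sketch of per-user spend tracking: add to an existing user's
# spend, or create the user seeded with this request's cost.

user_table = {}  # user_id -> accumulated spend in USD (stand-in for the DB)

def track_spend(user_id, cost):
    if user_id in user_table:
        user_table[user_id] += cost
    else:
        # user not in the table yet: create them with this spend
        user_table[user_id] = cost

track_spend("litellm-is-awesome-user", 0.002)
track_spend("litellm-is-awesome-user", 0.003)
print(user_table)
```

The second call takes the existing-user branch, so the example user ends with a total spend of 0.005.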

* feat(parallel_request_limiter.py): add support for tpm/rpm rate limits for keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/1501
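A per-key rpm limit of the kind that PR describes can be sketched as a sliding-window counter. This is loosely modeled on the idea, not litellm's implementation; the class and its fields are hypothetical.

```python
# Rough sketch: allow a request only if the key has made fewer than
# rpm_limit requests in the trailing 60-second window.
import time

class KeyRateLimiter:
    def __init__(self, rpm_limit):
        self.rpm_limit = rpm_limit
        self.requests = {}  # api_key -> list of request timestamps

    def allow(self, api_key, now=None):
        now = time.monotonic() if now is None else now
        # keep only timestamps still inside the 60s window
        window = [t for t in self.requests.get(api_key, []) if now - t < 60]
        if len(window) >= self.rpm_limit:
            self.requests[api_key] = window
            return False
        window.append(now)
        self.requests[api_key] = window
        return True

limiter = KeyRateLimiter(rpm_limit=2)
print(limiter.allow("sk-test", now=0.0))   # True
print(limiter.allow("sk-test", now=1.0))   # True
print(limiter.allow("sk-test", now=2.0))   # False: 2 requests already in window
print(limiter.allow("sk-test", now=61.5))  # True: earlier requests expired
```

A tpm limit works the same way, summing token counts in the window instead of counting requests.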



**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.18.3...v1.18.4
