LiteLLM


1.56.5

What's Changed
* Refactor: move all bedrock invoke providers to BaseConfig by krrishdholakia in https://github.com/BerriAI/litellm/pull/7463
* (fix) `litellm.amoderation` - support using `model=openai/omni-moderation-latest`, `model=omni-moderation-latest`, `model=None` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7475 (see the sketch after this list)
* [Bug Fix]: rerank restfulapi response parse still too strict by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7476
* Litellm dev 12 30 2024 p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7480
* HumanLoop integration for Prompt Management by krrishdholakia in https://github.com/BerriAI/litellm/pull/7479
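
To make the `litellm.amoderation` fix concrete, here is a minimal sketch of the three `model` values it enables. The input text and environment setup are illustrative placeholders, not from the release notes:

```python
# Sketch of the amoderation fix above -- all three model values should work.
# Assumes OPENAI_API_KEY is set; the input text is a placeholder.
import asyncio

import litellm


async def main() -> None:
    for model in ("openai/omni-moderation-latest", "omni-moderation-latest", None):
        resp = await litellm.amoderation(
            input="placeholder text to check for policy violations",
            model=model,
        )
        # The response follows the OpenAI moderation schema.
        print(model, resp.results[0].flagged)


asyncio.run(main())
```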


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.4...v1.56.5



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.5
```
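
Once the container is running, the proxy exposes an OpenAI-compatible API on port 4000. Below is a minimal sketch of calling it with the `openai` SDK; the model name and virtual key are placeholders (with `STORE_MODEL_IN_DB=True`, models and keys are created through the proxy first):

```python
# Sketch: calling the proxy started above via the OpenAI SDK.
# "sk-1234" and "gpt-4o" are placeholders for your own key and model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(resp.choices[0].message.content)
```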



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 268.06 | 6.17 | 0.0 | 1848 | 0 | 212.09 | 3189.48 |
| Aggregated | Passed ✅ | 230.0 | 268.06 | 6.17 | 0.0 | 1848 | 0 | 212.09 | 3189.48 |

1.56.4

What's Changed
* Update model_prices_and_context_window.json by superpoussin22 in https://github.com/BerriAI/litellm/pull/7452
* (Refactor) 🧹 - remove deprecated litellm server by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7456
* 📖 Docs - Using LiteLLM with 1M rows in spend logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7461
* (Admin UI - 1) - added the model used either directly before or after the "Assistant" so that it's clear which model provided the given assistant output by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7459
* (Admin UI - 2) UI chat should render the output in markdown by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7460
* (Security fix) - Upgrade to `fastapi==0.115.5` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7447
* fix OR deepseek by paul-gauthier in https://github.com/BerriAI/litellm/pull/7425
* (Bug Fix) Add health check support for realtime models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7453
* (Refactor) - Re use litellm.completion/litellm.embedding etc for health checks by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7455
* Litellm dev 12 28 2024 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7464
* Fireworks AI - document inlining support + model access groups for wildcard models by krrishdholakia in https://github.com/BerriAI/litellm/pull/7458


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.3...v1.56.4



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.4
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 268.74 | 6.12 | 0.0 | 1829 | 0 | 214.29 | 1969.76 |
| Aggregated | Passed ✅ | 240.0 | 268.74 | 6.12 | 0.0 | 1829 | 0 | 214.29 | 1969.76 |

1.56.3

What's Changed
* Update Documentation - Gemini Embedding by igorlima in https://github.com/BerriAI/litellm/pull/7436
* (Bug fix) missing `model_group` field in logs for aspeech call types by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7392
* (Feat) - new endpoint `GET /v1/fine_tuning/jobs/{fine_tuning_job_id:path}` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7427 (example after this list)
* Update model_prices_and_context_window.json by superpoussin22 in https://github.com/BerriAI/litellm/pull/7345
* LiteLLM Minor Fixes & Improvements (12/27/2024) - p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7448
* Litellm dev 12 27 2024 p2 1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7449
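
As a quick illustration of the new fine-tuning endpoint above, a sketch using `requests` against a locally running proxy. The URL, virtual key, and job id are placeholders:

```python
# Sketch: GET /v1/fine_tuning/jobs/{fine_tuning_job_id} on the proxy.
# URL, virtual key, and job id below are placeholders.
import requests

resp = requests.get(
    "http://localhost:4000/v1/fine_tuning/jobs/ftjob-abc123",
    headers={"Authorization": "Bearer sk-1234"},
)
print(resp.status_code, resp.json())
```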

New Contributors
* igorlima made their first contribution in https://github.com/BerriAI/litellm/pull/7436

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.56.2...v1.56.3



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.3
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 276.97 | 6.15 | 0.0033 | 1840 | 1 | 112.37 | 1700.14 |
| Aggregated | Passed ✅ | 250.0 | 276.97 | 6.15 | 0.0033 | 1840 | 1 | 112.37 | 1700.14 |

1.56.2

What's Changed
* Litellm dev 12 24 2024 p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7400
* (feat) Support Dynamic Params for `guardrails` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7415
* docs: cleanup docker compose comments by marcoscannabrava in https://github.com/BerriAI/litellm/pull/7414
* (Security fix) UI - update `next` version by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7418
* (security fix) - fix docs snyk vulnerability by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7419
* LiteLLM Minor Fixes & Improvements (12/25/2024) - p1 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7411
* LiteLLM Minor Fixes & Improvements (12/25/2024) - p2 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7420
* Ensure 'disable_end_user_cost_tracking_prometheus_only' works for new prometheus metrics by krrishdholakia in https://github.com/BerriAI/litellm/pull/7421
* (security fix) - bump fast api, fastapi-sso, python-multipart - fix snyk vulnerabilities by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7417
* docs - batches cost tracking by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7422
* Add `/openai` pass through route on litellm proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7412
* (Feat) Add logging for `POST v1/fine_tuning/jobs` by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7426
* (docs) - show all supported Azure OpenAI endpoints in overview by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7428
* (docs) - custom guardrail show how to use dynamic guardrail params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7430
* Support budget/rate limit tiers for keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/7429
* (fix) initializing OTEL Logging on LiteLLM Proxy - ensure OTEL logger is initialized only once by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7435
* Litellm dev 12 26 2024 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7434
* fix(key_management_endpoints.py): enforce user_id / team_id checks on key generate by krrishdholakia in https://github.com/BerriAI/litellm/pull/7437
* LiteLLM Minor Fixes & Improvements (12/26/2024) - p4 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7439
* Refresh VoyageAI models, prices and context by fzowl in https://github.com/BerriAI/litellm/pull/7443
* Revert "Refresh VoyageAI models, prices and context" by krrishdholakia in https://github.com/BerriAI/litellm/pull/7446
* (feat) `/guardrails/list` show guardrail info params by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7442 (example after this list)
* add openrouter o1 by paul-gauthier in https://github.com/BerriAI/litellm/pull/7424
* ✨ (Feat) Log Guardrails run, guardrail response on logging integrations by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7445
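
For the `/guardrails/list` change above, a minimal sketch against a local proxy. The URL and key are placeholders; per the PR, the response should now include each guardrail's info params:

```python
# Sketch: listing configured guardrails on the proxy.
# URL and virtual key are placeholders.
import requests

resp = requests.get(
    "http://localhost:4000/guardrails/list",
    headers={"Authorization": "Bearer sk-1234"},
)
print(resp.json())
```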

New Contributors
* marcoscannabrava made their first contribution in https://github.com/BerriAI/litellm/pull/7414
* fzowl made their first contribution in https://github.com/BerriAI/litellm/pull/7443

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.12...v1.56.2



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.2
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 275.32 | 6.14 | 0.0 | 1838 | 0 | 224.26 | 1437.55 |
| Aggregated | Passed ✅ | 250.0 | 275.32 | 6.14 | 0.0 | 1838 | 0 | 224.26 | 1437.55 |

1.55.12

What's Changed
* Add 'end_user', 'user' and 'requested_model' on more prometheus metrics by krrishdholakia in https://github.com/BerriAI/litellm/pull/7399
* (feat) `/batches` Add support for using `/batches` endpoints in OAI format by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7402
* (feat) `/batches` - track `user_api_key_alias`, `user_api_key_team_alias` etc for /batch requests by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7401
* Litellm dev 12 24 2024 p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7403
* (Feat) add `"/v1/batches/{batch_id:path}/cancel"` endpoint by ishaan-jaff in https://github.com/BerriAI/litellm/pull/7406 (see the sketch after this list)
* Litellm dev 12 24 2024 p4 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7407
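
Because the `/batches` endpoints above follow the OpenAI format, they can be exercised with the `openai` SDK pointed at the proxy. A sketch, where the base URL, key, and batch id are placeholders:

```python
# Sketch: OpenAI-format batch endpoints on the proxy.
# Base URL, key, and batch id are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

batches = client.batches.list()                    # GET /v1/batches
print([b.id for b in batches.data])

cancelled = client.batches.cancel("batch_abc123")  # POST /v1/batches/{batch_id}/cancel
print(cancelled.status)
```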


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.11...v1.55.12



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.12
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 220.0 | 241.51 | 6.33 | 0.0 | 1895 | 0 | 191.11 | 3854.99 |
| Aggregated | Passed ✅ | 220.0 | 241.51 | 6.33 | 0.0 | 1895 | 0 | 191.11 | 3854.99 |

1.55.11

What's Changed
* LiteLLM Minor Fixes & Improvements (12/23/2024) - p3 by krrishdholakia in https://github.com/BerriAI/litellm/pull/7394


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.55.10...v1.55.11



Docker Run LiteLLM Proxy


```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.11
```



Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 290.39 | 6.03 | 0.0 | 1804 | 0 | 229.06 | 2909.61 |
| Aggregated | Passed ✅ | 250.0 | 290.39 | 6.03 | 0.0 | 1804 | 0 | 229.06 | 2909.61 |

