Litellm

Latest version: v1.61.11


1.16.12

Not secure
New providers
Xinference Embeddings: https://docs.litellm.ai/docs/providers/xinference
Voyage AI: https://docs.litellm.ai/docs/providers/voyage
Cloudflare AI workers: https://docs.litellm.ai/docs/providers/cloudflare_workers

Fixes:
AWS region name error when passing user bedrock client: https://github.com/BerriAI/litellm/issues/1292
Azure OpenAI models - use the correct context window in `model_prices_and_context_window.json`
Fixes for Azure OpenAI + Streaming - counting prompt tokens correctly: https://github.com/BerriAI/litellm/issues/1264




What's Changed
* Update mistral.md by nanowell in https://github.com/BerriAI/litellm/pull/1157
* Use current Git folder for building Dockerfile by Manouchehri in https://github.com/BerriAI/litellm/pull/1076
* (fix) curl example on proxy/quick_start by ku-suke in https://github.com/BerriAI/litellm/pull/1163
* clarify the need to set an exporter by nirga in https://github.com/BerriAI/litellm/pull/1162
* Update mistral.md by ericmjl in https://github.com/BerriAI/litellm/pull/1174
* doc: updated langfuse ver 1.14 in pip install cmd by speedyankur in https://github.com/BerriAI/litellm/pull/1178
* Fix for issue that occurred when proxying to ollama by clevcode in https://github.com/BerriAI/litellm/pull/1168
* Sample code to prevent logging API key in callback to Slack by navidre in https://github.com/BerriAI/litellm/pull/1185
* Add partial support of VertexAI safety settings by neubig in https://github.com/BerriAI/litellm/pull/1190
* docker build and push on release by Rested in https://github.com/BerriAI/litellm/pull/1197
* Add a default for safety settings in vertex AI by neubig in https://github.com/BerriAI/litellm/pull/1199
* Make vertex ai work with generate_content by neubig in https://github.com/BerriAI/litellm/pull/1213
* fix: success_callback logic for cost_tracking by sihyeonn in https://github.com/BerriAI/litellm/pull/1211
* Add aws_bedrock_runtime_endpoint support by Manouchehri in https://github.com/BerriAI/litellm/pull/1203
* fix least_busy router by updating min_traffic by AllentDan in https://github.com/BerriAI/litellm/pull/1195
* Improve langfuse integration by maxdeichmann in https://github.com/BerriAI/litellm/pull/1183
* usage_based_routing_fix by sumanth13131 in https://github.com/BerriAI/litellm/pull/1182
* add some GitHub workflows for flake8 and add black dependency by bufferoverflow in https://github.com/BerriAI/litellm/pull/1223
* fix: helicone logging by evantancy in https://github.com/BerriAI/litellm/pull/1249
* update anyscale price link by prateeksachan in https://github.com/BerriAI/litellm/pull/1246
* Bump aiohttp from 3.8.4 to 3.9.0 by dependabot in https://github.com/BerriAI/litellm/pull/1180
* updated oobabooga to new api and support for embeddings by danikhan632 in https://github.com/BerriAI/litellm/pull/1248
* add support for mistral json mode via anyscale by marmikcfc in https://github.com/BerriAI/litellm/pull/1275
* Adds support for Vertex AI Unicorn by AshGreh in https://github.com/BerriAI/litellm/pull/1277
* fix typos & add missing names for azure models by fcakyon in https://github.com/BerriAI/litellm/pull/1290
* fix(proxy_server.py) Check when '_hidden_params' is None by asedmammad in https://github.com/BerriAI/litellm/pull/1300

New Contributors
* nanowell made their first contribution in https://github.com/BerriAI/litellm/pull/1157
* ku-suke made their first contribution in https://github.com/BerriAI/litellm/pull/1163
* ericmjl made their first contribution in https://github.com/BerriAI/litellm/pull/1174
* speedyankur made their first contribution in https://github.com/BerriAI/litellm/pull/1178
* clevcode made their first contribution in https://github.com/BerriAI/litellm/pull/1168
* navidre made their first contribution in https://github.com/BerriAI/litellm/pull/1185
* neubig made their first contribution in https://github.com/BerriAI/litellm/pull/1190
* Rested made their first contribution in https://github.com/BerriAI/litellm/pull/1197
* sihyeonn made their first contribution in https://github.com/BerriAI/litellm/pull/1211
* AllentDan made their first contribution in https://github.com/BerriAI/litellm/pull/1195
* sumanth13131 made their first contribution in https://github.com/BerriAI/litellm/pull/1182
* bufferoverflow made their first contribution in https://github.com/BerriAI/litellm/pull/1223
* evantancy made their first contribution in https://github.com/BerriAI/litellm/pull/1249
* prateeksachan made their first contribution in https://github.com/BerriAI/litellm/pull/1246
* danikhan632 made their first contribution in https://github.com/BerriAI/litellm/pull/1248
* marmikcfc made their first contribution in https://github.com/BerriAI/litellm/pull/1275
* AshGreh made their first contribution in https://github.com/BerriAI/litellm/pull/1277
* fcakyon made their first contribution in https://github.com/BerriAI/litellm/pull/1290
* asedmammad made their first contribution in https://github.com/BerriAI/litellm/pull/1300

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.15.0...1.16.12

1.16.6

Not secure
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.16.18...v1.16.6

1.16.5

Not secure
What's Changed
- use s3 Buckets for caching /chat/completion, embedding responses. Proxy Caching: https://docs.litellm.ai/docs/proxy/caching, Caching with `litellm.completion` https://docs.litellm.ai/docs/caching/redis_cache
- `litellm.completion_cost()` support for cost calculation on embedding responses (Azure embeddings and `text-embedding-ada-002-v2`) by jeromeroussin
```python
import asyncio

import litellm

async def _test():
    response = await litellm.aembedding(
        model="azure/azure-embedding-model",
        input=["good morning from litellm", "gm"],
    )

    print(response)

    return response

response = asyncio.run(_test())

cost = litellm.completion_cost(completion_response=response)
```

- `litellm.completion_cost()` now raises exceptions (instead of swallowing them) by jeromeroussin
- Improved token counting for Azure streaming responses by langgg0511 https://github.com/BerriAI/litellm/issues/1304
- Set `os.environ/` variables for the litellm proxy cache by Manouchehri

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: text-embedding-ada-002
    litellm_params:
      model: text-embedding-ada-002

litellm_settings:
  set_verbose: True
  cache: True           # set cache responses to True
  cache_params:         # set cache params for s3
    type: s3
    s3_bucket_name: cache-bucket-litellm                        # AWS bucket name for S3
    s3_region_name: us-west-2                                   # AWS region name for S3
    s3_aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID          # use os.environ/<variable name> to pass environment variables
    s3_aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY  # AWS secret access key for S3
```


* build(Dockerfile): moves prisma logic to dockerfile by krrishdholakia in https://github.com/BerriAI/litellm/pull/1342


**Full Changelog**: https://github.com/BerriAI/litellm/compare/1.16.14...v1.16.15

1.15.0

Not secure
What's Changed
- LiteLLM Proxy now maps exceptions for 100+ LLMs to the OpenAI format https://docs.litellm.ai/docs/proxy/quick_start
- 🧨 Log all LLM input/output to [dynamodb](https://twitter.com/dynamodb) - set `litellm.success_callback = ["dynamodb"]` https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---dynamodb
- ⭐️ Support for the [MistralAI](https://twitter.com/MistralAI) API and Gemini Pro
- 🔎 Set aliases for model groups on the LiteLLM Proxy
- 🔎 Exception mapping for openai.NotFoundError is live now + testing for exception mapping on the proxy added to LiteLLM CI/CD https://docs.litellm.ai/docs/exception_mapping
- ⚙️ Fixes for async + streaming caching https://docs.litellm.ai/docs/proxy/caching
- 👉 Support for async logging with [langfuse](https://twitter.com/langfuse) live on the proxy
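For the dynamodb logging feature above, a minimal proxy config sketch (the model entry is a placeholder, not from the release notes; the `success_callback` setting follows the linked logging docs):

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  success_callback: ["dynamodb"]   # log every LLM input/output to DynamoDB
```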

AI Generated Release Notes
* Enable setting default `model` value for `LiteLLM`, `Chat`, `Completions` by estill01 in https://github.com/BerriAI/litellm/pull/985
* fix replicate system prompt: forgot to add **optional_params to input data by nbaldwin98 in https://github.com/BerriAI/litellm/pull/1080
* Update factory.py to fix issue when calling from write-the -> langchain -> litellm served ollama by James4Ever0 in https://github.com/BerriAI/litellm/pull/1054
* Update Dockerfile to preinstall Prisma CLI by Manouchehri in https://github.com/BerriAI/litellm/pull/1039
* build(deps): bump aiohttp from 3.8.6 to 3.9.0 by dependabot in https://github.com/BerriAI/litellm/pull/937
* multistage docker build by wallies in https://github.com/BerriAI/litellm/pull/995
* fix: traceloop links by nirga in https://github.com/BerriAI/litellm/pull/1123
* refactor: add CustomStreamWrapper return type for completion by Undertone0809 in https://github.com/BerriAI/litellm/pull/1112
* fix langfuse tests by maxdeichmann in https://github.com/BerriAI/litellm/pull/1097
* Fix 1119, no content when streaming. by emsi in https://github.com/BerriAI/litellm/pull/1122
* docs(projects): add Docq to 'projects built on..' section by janaka in https://github.com/BerriAI/litellm/pull/1142
* docs(projects): add Docq.AI to sidebar nav by janaka in https://github.com/BerriAI/litellm/pull/1143

New Contributors
* James4Ever0 made their first contribution in https://github.com/BerriAI/litellm/pull/1054
* wallies made their first contribution in https://github.com/BerriAI/litellm/pull/995
* maxdeichmann made their first contribution in https://github.com/BerriAI/litellm/pull/1097
* emsi made their first contribution in https://github.com/BerriAI/litellm/pull/1122
* janaka made their first contribution in https://github.com/BerriAI/litellm/pull/1142

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.11.1...v1.15.0

1.11.1

Not secure
Proxy
- Bug fix for non-OpenAI LLMs on the proxy
- Major stability improvements & Fixes + added test cases for proxy
- Async success/failure loggers
- Support for using custom loggers with `aembedding()`


What's Changed
* feat: add docker compose file and running guide by geekyayush in https://github.com/BerriAI/litellm/pull/993
* (feat) Speedup health endpoint by PSU3D0 in https://github.com/BerriAI/litellm/pull/1023
* (pricing) Add Claude v2.1 for Bedrock by Manouchehri in https://github.com/BerriAI/litellm/pull/1042

New Contributors
* geekyayush made their first contribution in https://github.com/BerriAI/litellm/pull/993

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.10.4...v1.11.1

1.10.4

Not secure
Note: the Proxy Server on 1.10.4 has a bug for non-OpenAI LLMs - fixed in 1.10.11

Proxy Server Updates
- Use custom callbacks on the proxy https://docs.litellm.ai/docs/proxy/logging
- Set `timeout` and `stream_timeout` per model https://docs.litellm.ai/docs/proxy/load_balancing#custom-timeouts-stream-timeouts---per-model
- Stability: Added testing for reading config.yaml on the proxy
- *NEW* `/model/new` + `/model/info` endpoints - add new models + get model info without restarting the proxy.
- Custom user auth - https://github.com/BerriAI/litellm/issues/898#issuecomment-1826396106
- Key security - keys are now stored only as hashes in the DB
- User IDs accepted + passed through to OpenAI/Azure
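The per-model `timeout` / `stream_timeout` settings above can be sketched in a proxy config like this (model names and env-var names are illustrative; see the linked load-balancing docs for the exact keys):

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      timeout: 300          # total request timeout, in seconds
      stream_timeout: 60    # timeout applied to streaming requests, in seconds
```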

`litellm` Package
- Specify `kwargs` for Redis Cache https://github.com/BerriAI/litellm/commit/9ba17657ad664a21b5e91259a152db58540be024
- Fixes for Sagemaker + Palm Streaming
- Support for async custom callbacks - https://docs.litellm.ai/docs/observability/custom_callback#async-callback-functions
- Major improvements to stream chunk builder - support for parallel tool calling, system fingerprints, etc.
- Fixes for azure / openai streaming (return complete response object)
- Support for loading keys from azure key vault - https://docs.litellm.ai/docs/secret#azure-key-vault

What's Changed
* docs: adds gpt-3.5-turbo-1106 in supported models by rishabgit in https://github.com/BerriAI/litellm/pull/958
* (feat) Allow installing proxy dependencies explicitly with `pip install litellm[proxy]` by PSU3D0 in https://github.com/BerriAI/litellm/pull/966
* Mention Neon as a database option in docs by Manouchehri in https://github.com/BerriAI/litellm/pull/977
* fix system prompts for replicate by nbaldwin98 in https://github.com/BerriAI/litellm/pull/970

New Contributors
* rishabgit made their first contribution in https://github.com/BerriAI/litellm/pull/958
* PSU3D0 made their first contribution in https://github.com/BerriAI/litellm/pull/966
* nbaldwin98 made their first contribution in https://github.com/BerriAI/litellm/pull/970

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.7.11...v1.10.4
