LiteLLM

Latest version: v1.52.14


1.16.16

What's Changed
* Litellm speed improvements by krrishdholakia in https://github.com/BerriAI/litellm/pull/1344


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.16.15...v1.16.16

1.16.15

What's Changed
* build(Dockerfile): moves prisma logic to dockerfile by krrishdholakia in https://github.com/BerriAI/litellm/pull/1342

**Full Changelog**: https://github.com/BerriAI/litellm/compare/1.16.14...v1.16.15

1.16.14

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.16.13...1.16.14

1.16.12

New providers
* Xinference Embeddings: https://docs.litellm.ai/docs/providers/xinference
* Voyage AI: https://docs.litellm.ai/docs/providers/voyage
* Cloudflare Workers AI: https://docs.litellm.ai/docs/providers/cloudflare_workers
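
For reference, a minimal sketch of calling the new providers through `litellm`. The model strings and environment variable names below are assumptions taken from the linked provider docs; verify them there before use:

```python
import os

import litellm

# Voyage AI embeddings (assumes the "voyage/" model prefix and a
# VOYAGE_API_KEY environment variable, per the linked docs)
os.environ["VOYAGE_API_KEY"] = "your-voyage-key"  # placeholder
embedding = litellm.embedding(
    model="voyage/voyage-01",
    input=["good morning from litellm"],
)

# Cloudflare Workers AI completion (assumes the "cloudflare/" prefix plus
# CLOUDFLARE_API_KEY and CLOUDFLARE_ACCOUNT_ID, per the linked docs)
os.environ["CLOUDFLARE_API_KEY"] = "your-cloudflare-key"  # placeholder
os.environ["CLOUDFLARE_ACCOUNT_ID"] = "your-account-id"   # placeholder
completion = litellm.completion(
    model="cloudflare/@cf/meta/llama-2-7b-chat-int8",
    messages=[{"role": "user", "content": "Hello from litellm"}],
)
```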

Fixes
* AWS region name error when passing a user-provided Bedrock client: https://github.com/BerriAI/litellm/issues/1292
* Azure OpenAI models - use the correct context window in `model_context_window_and_prices.json`
* Azure OpenAI + streaming - count prompt tokens correctly: https://github.com/BerriAI/litellm/issues/1264

What's Changed
* Update mistral.md by nanowell in https://github.com/BerriAI/litellm/pull/1157
* Use current Git folder for building Dockerfile by Manouchehri in https://github.com/BerriAI/litellm/pull/1076
* (fix) curl example on proxy/quick_start by ku-suke in https://github.com/BerriAI/litellm/pull/1163
* clarify the need to set an exporter by nirga in https://github.com/BerriAI/litellm/pull/1162
* Update mistral.md by ericmjl in https://github.com/BerriAI/litellm/pull/1174
* doc: updated langfuse ver 1.14 in pip install cmd by speedyankur in https://github.com/BerriAI/litellm/pull/1178
* Fix for issue that occurred when proxying to ollama by clevcode in https://github.com/BerriAI/litellm/pull/1168
* Sample code to prevent logging API key in callback to Slack by navidre in https://github.com/BerriAI/litellm/pull/1185
* Add partial support of VertexAI safety settings by neubig in https://github.com/BerriAI/litellm/pull/1190
* docker build and push on release by Rested in https://github.com/BerriAI/litellm/pull/1197
* Add a default for safety settings in vertex AI by neubig in https://github.com/BerriAI/litellm/pull/1199
* Make vertex ai work with generate_content by neubig in https://github.com/BerriAI/litellm/pull/1213
* fix: success_callback logic for cost_tracking by sihyeonn in https://github.com/BerriAI/litellm/pull/1211
* Add aws_bedrock_runtime_endpoint support by Manouchehri in https://github.com/BerriAI/litellm/pull/1203
* fix least_busy router by updating min_traffic by AllentDan in https://github.com/BerriAI/litellm/pull/1195
* Improve langfuse integration by maxdeichmann in https://github.com/BerriAI/litellm/pull/1183
* usage_based_routing_fix by sumanth13131 in https://github.com/BerriAI/litellm/pull/1182
* add some GitHub workflows for flake8 and add black dependency by bufferoverflow in https://github.com/BerriAI/litellm/pull/1223
* fix: helicone logging by evantancy in https://github.com/BerriAI/litellm/pull/1249
* update anyscale price link by prateeksachan in https://github.com/BerriAI/litellm/pull/1246
* Bump aiohttp from 3.8.4 to 3.9.0 by dependabot in https://github.com/BerriAI/litellm/pull/1180
* updated oobabooga to new api and support for embeddings by danikhan632 in https://github.com/BerriAI/litellm/pull/1248
* add support for mistral json mode via anyscale by marmikcfc in https://github.com/BerriAI/litellm/pull/1275
* Adds support for Vertex AI Unicorn by AshGreh in https://github.com/BerriAI/litellm/pull/1277
* fix typos & add missing names for azure models by fcakyon in https://github.com/BerriAI/litellm/pull/1290
* fix(proxy_server.py) Check when '_hidden_params' is None by asedmammad in https://github.com/BerriAI/litellm/pull/1300

New Contributors
* nanowell made their first contribution in https://github.com/BerriAI/litellm/pull/1157
* ku-suke made their first contribution in https://github.com/BerriAI/litellm/pull/1163
* ericmjl made their first contribution in https://github.com/BerriAI/litellm/pull/1174
* speedyankur made their first contribution in https://github.com/BerriAI/litellm/pull/1178
* clevcode made their first contribution in https://github.com/BerriAI/litellm/pull/1168
* navidre made their first contribution in https://github.com/BerriAI/litellm/pull/1185
* neubig made their first contribution in https://github.com/BerriAI/litellm/pull/1190
* Rested made their first contribution in https://github.com/BerriAI/litellm/pull/1197
* sihyeonn made their first contribution in https://github.com/BerriAI/litellm/pull/1211
* AllentDan made their first contribution in https://github.com/BerriAI/litellm/pull/1195
* sumanth13131 made their first contribution in https://github.com/BerriAI/litellm/pull/1182
* bufferoverflow made their first contribution in https://github.com/BerriAI/litellm/pull/1223
* evantancy made their first contribution in https://github.com/BerriAI/litellm/pull/1249
* prateeksachan made their first contribution in https://github.com/BerriAI/litellm/pull/1246
* danikhan632 made their first contribution in https://github.com/BerriAI/litellm/pull/1248
* marmikcfc made their first contribution in https://github.com/BerriAI/litellm/pull/1275
* AshGreh made their first contribution in https://github.com/BerriAI/litellm/pull/1277
* fcakyon made their first contribution in https://github.com/BerriAI/litellm/pull/1290
* asedmammad made their first contribution in https://github.com/BerriAI/litellm/pull/1300

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.15.0...1.16.12

1.16.6

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.16.5...v1.16.6

1.16.5

What's Changed
- Use S3 buckets for caching `/chat/completions` and embedding responses. Proxy caching: https://docs.litellm.ai/docs/proxy/caching; caching with `litellm.completion`: https://docs.litellm.ai/docs/caching/redis_cache (a `litellm.completion` caching sketch follows the config example below)
- `litellm.completion_cost()` support for cost calculation on embedding responses (Azure embeddings and `text-embedding-ada-002-v2`) by jeromeroussin:
```python
import asyncio

import litellm


async def _test():
    response = await litellm.aembedding(
        model="azure/azure-embedding-model",
        input=["good morning from litellm", "gm"],
    )
    print(response)
    return response


response = asyncio.run(_test())
cost = litellm.completion_cost(completion_response=response)
```
- `litellm.completion_cost()` now raises exceptions instead of swallowing them, by jeromeroussin (see the guarded-call sketch after the config example below)
- Improved token counting for Azure streaming responses, by langgg0511: https://github.com/BerriAI/litellm/issues/1304
- Support `os.environ/` variables for the litellm proxy cache, by Manouchehri:
```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: text-embedding-ada-002
    litellm_params:
      model: text-embedding-ada-002

litellm_settings:
  set_verbose: True
  cache: True           # set cache responses to True
  cache_params:         # set cache params for s3
    type: s3
    s3_bucket_name: cache-bucket-litellm                        # AWS bucket name for S3
    s3_region_name: us-west-2                                   # AWS region name for S3
    s3_aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID          # use os.environ/<variable name> to pass environment variables; this is the AWS access key ID for S3
    s3_aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY  # AWS secret access key for S3
```
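
For the non-proxy path (caching with `litellm.completion`, linked above), a minimal sketch, assuming `litellm.caching.Cache` accepts the same s3 parameters as the proxy config:

```python
import litellm
from litellm.caching import Cache

# Point the in-process cache at S3; bucket and region are placeholders,
# and AWS credentials resolve from the standard environment variables.
litellm.cache = Cache(
    type="s3",
    s3_bucket_name="cache-bucket-litellm",
    s3_region_name="us-west-2",
)

# A repeated call with the same model + messages should now hit the cache.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "good morning from litellm"}],
)
```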

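Because `litellm.completion_cost()` now raises instead of failing silently (see the exceptions change above), callers that only want a best-effort number can wrap the call; a minimal sketch, with `safe_completion_cost` as a hypothetical helper name:

```python
import litellm


def safe_completion_cost(response):
    """Best-effort wrapper: completion_cost() now raises (e.g. for models
    missing from the pricing map) instead of swallowing the error."""
    try:
        return litellm.completion_cost(completion_response=response)
    except Exception as exc:
        print(f"cost calculation failed: {exc}")
        return None
```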

