LiteLLM

1.30.7

What's Changed
* fix(bedrock.py): enable claude-3 streaming by krrishdholakia in https://github.com/BerriAI/litellm/pull/2425
* (docs) LiteLLM Proxy - use port 4000 in examples by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2416
* fix(proxy_server.py): use argon2 for faster api key checking by krrishdholakia in https://github.com/BerriAI/litellm/pull/2394
* (fix) use python 3.8 on ci/cd by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2428
* tests - monitor memory usage with litellm by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2427
* feat: add cost tracking + caching for `/audio/transcription` calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/2426


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.30.6...v1.30.7

1.30.6

What's Changed
* [Docs] Deploying litellm - litellm, litellm-database, litellm with redis by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2423
* feat(helm-chart): redis as cache managed by chart by debdutdeb in https://github.com/BerriAI/litellm/pull/2420

New Contributors
* debdutdeb made their first contribution in https://github.com/BerriAI/litellm/pull/2420

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.30.5...v1.30.6

1.30.5

What's Changed
* feat(main.py): support openai transcription endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/2401
* load balancing transcription endpoints by krrishdholakia in https://github.com/BerriAI/litellm/pull/2405
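
Taken together, the two PRs above add an OpenAI-compatible transcription entry point. A minimal sketch of calling it through the SDK, assuming the entry point is `litellm.transcription` with OpenAI-style `model`/`file` params and a hypothetical local `sample.mp3`:

```python
# Minimal sketch (assumptions: litellm.transcription mirrors OpenAI's
# audio.transcriptions API; "sample.mp3" is a hypothetical local file).
import litellm

with open("sample.mp3", "rb") as audio_file:
    response = litellm.transcription(
        model="whisper-1",  # assumed OpenAI whisper deployment
        file=audio_file,
    )

print(response)
```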


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.30.4...v1.30.5

1.30.4

1. Incognito Requests - Don't log anything - docs: https://docs.litellm.ai/docs/proxy/enterprise#incognito-requests---dont-log-anything

When `no-log=True`, the request will **not be logged on any callbacks** and there will be **no server logs on litellm**.

```python
import openai

client = openai.OpenAI(
    api_key="anything",             # proxy api-key
    base_url="http://0.0.0.0:8000"  # litellm proxy
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={
        "no-log": True
    }
)

print(response)
```


2. Allow users to pass `messages.name` for claude-3 and perplexity
Note: Before this PR, these two providers would raise an error when the `name` param was passed.
LiteLLM SDK

```python
import litellm

response = litellm.completion(
    model="claude-3-opus-20240229",
    messages=[
        {"role": "user", "content": "Hi gm!", "name": "ishaan"},
    ]
)
```



LiteLLM Proxy Server
```python
import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:8000"
)

response = client.chat.completions.create(
    model="claude-3-opus-20240229",
    messages=[
        {"role": "user", "content": "Hi gm!", "name": "ishaan"},
    ]
)

print(response)
```



3. If the user is using `run_gunicorn`, use `cpu_count` to select the optimal `num_workers` (sketched below)
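
A minimal sketch of the idea, not LiteLLM's exact code; the `(2 * cores) + 1` heuristic is gunicorn's documented rule of thumb, assumed here:

```python
# Minimal sketch (not LiteLLM's actual implementation): derive gunicorn's
# worker count from the machine's CPU count instead of a fixed default.
import os

def default_num_workers() -> int:
    cores = os.cpu_count() or 1  # os.cpu_count() can return None
    return (2 * cores) + 1       # gunicorn's documented rule of thumb

print(default_num_workers())
```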

4. AzureOpenAI - Pass `api_version` to litellm proxy per request

Usage - sending a request to litellm proxy
```python
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="dummy",
    # use a specific api_version, other than the default 2023-07-01-preview
    api_version="2023-05-15",
    # OpenAI Proxy Endpoint
    azure_endpoint="https://openai-proxy.domain.com"
)

response = client.chat.completions.create(
    model="gpt-35-turbo-16k-qt",
    messages=[
        {"role": "user", "content": "Some content"}
    ],
)
```


What's Changed
* [Feat] Support messages.name for claude-3, perplexity ai API by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2399
* docs: fix yaml typo in proxy/configs.md by GuillermoBlasco in https://github.com/BerriAI/litellm/pull/2402
* [Feat] LiteLLM - use cpu_count for default num_workers, run locust load test by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2406
* [FEAT] AzureOpenAI - Pass `api_version` to litellm per request by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2403
* Add quickstart deploy with k8s by GuillermoBlasco in https://github.com/BerriAI/litellm/pull/2409
* Update Docs for Kubernetes by H0llyW00dzZ in https://github.com/BerriAI/litellm/pull/2411
* [FEAT-liteLLM Proxy] Incognito Requests - Don't log anything by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2408
* Fix Docs Formatting in Website by H0llyW00dzZ in https://github.com/BerriAI/litellm/pull/2413

New Contributors
* GuillermoBlasco made their first contribution in https://github.com/BerriAI/litellm/pull/2402
* H0llyW00dzZ made their first contribution in https://github.com/BerriAI/litellm/pull/2411

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.30.3...v1.30.4

1.30.3

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.30.2...v1.30.3

1.30.2

What's Changed
* test: reintegrate s3 testing by krrishdholakia in https://github.com/BerriAI/litellm/pull/2386
* (docs) setting load balancing config by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2388
* feat: add release details to discord notification message by DanielChico in https://github.com/BerriAI/litellm/pull/2387
* [FIX] Proxy better debug prisma logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2390
* [Feat] Load Balancing - View Metrics about selected deployments in server logs by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2393
* (feat) LiteLLM AWS CloudFormation Stack Template by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2391

![Group 5746](https://github.com/BerriAI/litellm/assets/29436595/8570ff60-a629-426c-b661-32a8b22553e7)


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.30.1...v1.30.2
