Litellm


1.27.7

Use ClickHouse DB for low-latency LLM Analytics / Spend Reports (sub-1s analytics on 100M logs)

Getting started with ClickHouse DB + LiteLLM Proxy

Docs + Docker Compose for getting started with ClickHouse: https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---clickhouse
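As a quick sketch, a self-hosted ClickHouse can also be started directly with Docker before wiring it to the proxy. The port and credentials below are illustrative and should match the env variables set in Step 2:

```shell
# Run a local ClickHouse server; 8123 is its default HTTP interface port
docker run -d --name clickhouse \
  -p 8123:8123 \
  -e CLICKHOUSE_USER=admin \
  -e CLICKHOUSE_PASSWORD=admin \
  clickhouse/clickhouse-server
```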

**Step 1**: Create a `config.yaml` file and set `litellm_settings`: `success_callback`

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
litellm_settings:
  success_callback: ["clickhouse"]
```


**Step 2**: Set the required env variables for ClickHouse

Env variables for self-hosted ClickHouse:

```shell
CLICKHOUSE_HOST="localhost"
CLICKHOUSE_PORT="8123"
CLICKHOUSE_USERNAME="admin"
CLICKHOUSE_PASSWORD="admin"
```


**Step 3**: Start the proxy and make a test request
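A minimal sketch of this step, assuming the proxy runs on its default port 4000 and uses the model name from the `config.yaml` in Step 1:

```shell
# Start the proxy with the config from Step 1
litellm --config config.yaml

# In another terminal, send a test chat completion request
curl -X POST http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hello from litellm"}]
  }'
```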


New Models
Mistral on Azure AI Studio

Sample Usage

**Ensure you have the `/v1` in your api_base**

```python
from litellm import completion

response = completion(
    model="mistral/Mistral-large-dfgfj",
    api_base="https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1",
    api_key="JGbKodRcTp****",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)
```


[LiteLLM Proxy] Using Mistral Models

Set this on your litellm proxy `config.yaml`

**Ensure you have the `/v1` in your api_base**

```yaml
model_list:
  - model_name: mistral
    litellm_params:
      model: mistral/Mistral-large-dfgfj
      api_base: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1
      api_key: JGbKodRcTp****
```



What's Changed
* [Docs] use azure ai studio + mistral large by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2205
* [Feat] Start Self hosted clickhouse server by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2206
* [FEAT] Admin UI - View /spend/logs from clickhouse data by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2210
* [Docs] Use Clickhouse DB + Docker compose by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2211


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.27.6...v1.27.7

1.27.6

New Models
- `azure/text-embedding-3-large`
- `azure/text-embedding-3-small`
- `mistral/mistral-large-latest`
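A hedged sketch of calling one of the new embedding models through LiteLLM; the Azure endpoint, key, and API version below are placeholders to substitute with your own deployment values:

```python
import os

from litellm import embedding

# Placeholder Azure credentials -- replace with your own deployment values
os.environ["AZURE_API_KEY"] = "my-azure-key"
os.environ["AZURE_API_BASE"] = "https://my-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-07-01-preview"

response = embedding(
    model="azure/text-embedding-3-small",
    input=["hello from litellm"],
)
print(response)
```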

Log LLM Output in ClickHouse DB

```python
import litellm

litellm.success_callback = ["clickhouse"]

# must be awaited inside an async function
await litellm.acompletion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "This is a test"}],
    max_tokens=10,
    temperature=0.7,
    user="ishaan-2",
)
```



What's Changed
* [FEAT] add cost for azure/text-embedding-3-large, azure/text-embedding-3-small by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2198
* [FEAT] Use Logging on clickhouse by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2187
* Litellm custom callback fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/2202


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.27.4...v1.27.6

1.27.4

What's Changed
* Allow end-users to opt out of llm api calls by krrishdholakia in https://github.com/BerriAI/litellm/pull/2174
* [Docs] open router - clarify we support all models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2186
* (docs) using openai compatible endpoints by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2189
* [Fix] Fix health check when API base set for OpenAI compatible models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2188
* fix(proxy_server.py): allow user to set team tpm/rpm limits/budget/models by krrishdholakia in https://github.com/BerriAI/litellm/pull/2183


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.27.1...v1.27.4

1.27.1

What's Changed
* Default user values for new user on SSO by krrishdholakia in https://github.com/BerriAI/litellm/pull/2172
* fix(langfuse.py): support time to first token logging on langfuse by krrishdholakia in https://github.com/BerriAI/litellm/pull/2165
* fix(utils.py): stricter azure function calling tests by krrishdholakia in https://github.com/BerriAI/litellm/pull/2175


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.26.13...v1.27.1

1.26.13

What's Changed
* [FIX] BUG where extra tokens created in litellm verification token table by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2150
* Support for Athina logging by vivek-athina in https://github.com/BerriAI/litellm/pull/2163
* [FEAT] Support extra headers - OpenAI / Azure by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2164
* [FEAT] Support Groq AI by ishaan-jaff in https://github.com/BerriAI/litellm/pull/2168

Sample Usage

```python
import os

from litellm import completion

os.environ["GROQ_API_KEY"] = ""
response = completion(
    model="groq/llama2-70b-4096",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)
```

![Group 5725](https://github.com/BerriAI/litellm/assets/29436595/cd8f8e4e-1269-4661-8b49-ad3bd94a49d3)


New Contributors
* vivek-athina made their first contribution in https://github.com/BerriAI/litellm/pull/2163

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.26.10...v1.26.13

1.26.11

What's Changed
* Allow admin to ban keywords on proxy by krrishdholakia in https://github.com/BerriAI/litellm/pull/2147
* fix(utils.py): ensure argument is always a string by krrishdholakia in https://github.com/BerriAI/litellm/pull/2141


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.26.9...v1.26.11

