Litellm


1.23.8

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.7...v1.23.8

1.23.7

* [FEAT] ui - view total proxy spend / budget by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1915
* [FEAT] Bedrock set timeouts on litellm.completion by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1919
* [FEAT] Use LlamaIndex with Proxy - Support azure deployments for /embeddings - by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1921
* [FIX] Verbose Logger - don't double print CURL command by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1924
* [FEAT] Set timeout for bedrock on proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1922
* feat(proxy_server.py): show admin global spend as time series data by krrishdholakia in https://github.com/BerriAI/litellm/pull/1920

1. Bedrock Set Timeouts
Usage - litellm.completion
```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-instant-v1",
    timeout=0.01,
    messages=[{"role": "user", "content": "hello, write a 20 pg essay"}],
)
```
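Setting `timeout=0.01` will almost certainly trip the timeout, so callers should be ready to catch it. A minimal sketch, assuming litellm surfaces this as its `litellm.Timeout` exception:

```python
import litellm

try:
    litellm.completion(
        model="bedrock/anthropic.claude-instant-v1",
        timeout=0.01,  # deliberately tiny; Bedrock cannot answer this fast
        messages=[{"role": "user", "content": "hello"}],
    )
except litellm.Timeout:
    # Fall back, retry with a saner timeout, or surface the error.
    print("Bedrock call timed out")
```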


Usage on Proxy config.yaml
```yaml
model_list:
  - model_name: BEDROCK_GROUP
    litellm_params:
      model: bedrock/cohere.command-text-v14
      timeout: 0.0001
```
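Once the proxy is running with this config, any OpenAI-compatible client can call the `BEDROCK_GROUP` deployment and the proxy enforces the configured timeout. A minimal sketch with the OpenAI Python SDK (host and key are placeholders from these examples):

```python
from openai import OpenAI

# Points at a locally running LiteLLM proxy; key/host are placeholders.
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="BEDROCK_GROUP",  # routes to bedrock/cohere.command-text-v14
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```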

2. View total proxy spend / budget
<img width="1280" alt="Screenshot 2024-02-09 at 11 50 23 AM" src="https://github.com/BerriAI/litellm/assets/29436595/e1090d6d-b3a4-4b8a-87b7-66bde3534a31">

3. Use LlamaIndex with Proxy - Support azure deployments for /embeddings

Send embedding requests like this:

`http://0.0.0.0:4000/openai/deployments/azure-embedding-model/embeddings?api-version=2023-07-01-preview`

This allows users to use LlamaIndex's `AzureOpenAI` integration with the LiteLLM Proxy.
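For example, a raw HTTP call to that Azure-style route might look like this (host, deployment name, and key are placeholders taken from the example below):

```python
import requests

# Azure-style embeddings route on the LiteLLM proxy; values are placeholders.
resp = requests.post(
    "http://0.0.0.0:4000/openai/deployments/azure-embedding-model/embeddings",
    params={"api-version": "2023-07-01-preview"},
    headers={"Authorization": "Bearer sk-1234"},
    json={"input": ["good morning from litellm"]},
)
print(resp.json()["data"][0]["embedding"][:5])  # first few dimensions
```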

Use LlamaIndex with LiteLLM Proxy
```python
from dotenv import load_dotenv

load_dotenv()

from llama_index.llms import AzureOpenAI
from llama_index.embeddings import AzureOpenAIEmbedding
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext

llm = AzureOpenAI(
    engine="azure-gpt-3.5",
    temperature=0.0,
    azure_endpoint="http://0.0.0.0:4000",
    api_key="sk-1234",
    api_version="2023-07-01-preview",
)

embed_model = AzureOpenAIEmbedding(
    deployment_name="azure-embedding-model",
    azure_endpoint="http://0.0.0.0:4000",
    api_key="sk-1234",
    api_version="2023-07-01-preview",
)

response = llm.complete("The sky is a beautiful blue and")
print(response)

documents = SimpleDirectoryReader("llama_index_data").load_data()
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```
**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.5...v1.23.7

1.23.5

What's Changed
* fix(proxy_server.py): enable aggregate queries via /spend/keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/1901
* fix(factory.py): mistral message input fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/1902


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.4...v1.23.5

1.23.4

What's Changed
* [FEAT] 76% Faster s3 logging Proxy / litellm.acompletion / router.acompletion 🚀 by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1892
* (feat) Add support for AWS credentials from profile file by dleen in https://github.com/BerriAI/litellm/pull/1895 (see the sketch below)
* Litellm langfuse error logging - log input by krrishdholakia in https://github.com/BerriAI/litellm/pull/1898
* Admin UI - View Models, TPM, RPM Limit of a Key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1903
* Admin UI - show delete confirmation when deleting keys by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1904

![litellm_key_gen5](https://github.com/BerriAI/litellm/assets/29436595/809f61ae-21b3-490e-9f01-eb962a19da9b)
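For the AWS-profile change, usage is presumably along these lines; the parameter name is an assumption, so check the Bedrock provider docs for the exact spelling:

```python
import litellm

# Sketch of the AWS-profile support from #1895: credentials are read from
# the named profile in ~/.aws/credentials instead of env vars.
response = litellm.completion(
    model="bedrock/anthropic.claude-instant-v1",
    messages=[{"role": "user", "content": "hello"}],
    aws_profile_name="my-sso-profile",  # hypothetical profile name (assumed parameter)
)
```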



**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.3...v1.23.4

1.23.3

What's Changed
* [FEAT] 78% Faster s3 Cache⚡️- Proxy/ litellm.acompletion/ litellm.Router.acompletion by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1891


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.2...v1.23.3

1.23.2

What's Changed 🐬
1. [FEAT] Azure Pricing - based on base_model in model_info
2. [Feat] Semantic Caching - Track Cost of using embedding, Use Langfuse Trace ID
3. [Feat] Slack Alert when budget tracking fails


1. [FEAT] Azure Pricing - based on base_model in model_info by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1874
Azure Pricing - Use Base model for cost calculation
Why?
Azure returns `gpt-4` in the response even when `azure/gpt-4-1106-preview` is used, so we were using `gpt-4` pricing when calculating `response_cost`.

How to use - set `base_model` in config.yaml
```yaml
model_list:
  - model_name: azure-gpt-3.5
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      api_version: "2023-07-01-preview"
    model_info:
      base_model: azure/gpt-4-1106-preview
```


View the cost calculated on Langfuse
This used the correct pricing for `azure/gpt-4-1106-preview`: `(9 prompt tokens * $0.00001) + (28 completion tokens * $0.00003) = $0.00093`
<img width="938" alt="Screenshot 2024-02-07 at 4 39 12 PM" src="https://github.com/BerriAI/litellm/assets/29436595/9edd3b8f-15d3-4c7f-82f7-2a0e3c08c17d">

2. [Feat] Semantic Caching - Track Cost of using embedding, Use Langfuse Trace ID by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1878
- If a `trace_id` is passed, we'll place the semantic cache embedding call in the same trace (see the sketch below)
- We now track cost for the API key that will make the embedding call for semantic caching

<img width="1002" alt="Screenshot 2024-02-07 at 7 18 57 PM" src="https://github.com/BerriAI/litellm/assets/29436595/203e2d12-9d1e-4411-a1dd-4219de83a2b7">

3. [Feat] Slack Alert when budget tracking fails by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1877
<img width="913" alt="Screenshot 2024-02-07 at 8 08 27 PM" src="https://github.com/BerriAI/litellm/assets/29436595/4c70c204-05bf-412d-8efe-18e25a8b8b17">


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.1...v1.23.2
