LiteLLM


1.23.12

🚀 LiteLLM Proxy Server v1.23.11 - Allow your team to create keys for Azure, OpenAI, Bedrock, Sagemaker, and Gemini, and call 100+ LLMs

![litellm_model_info](https://github.com/BerriAI/litellm/assets/29436595/2d59e7c3-f0da-4a47-92b3-f90a67776b71)


What's Changed
* support langfuse tags feature by deenaawny-github-account in https://github.com/BerriAI/litellm/pull/1943 (tags usage sketched after this list)
* build(deps): bump jinja2 from 3.1.2 to 3.1.3 by dependabot in https://github.com/BerriAI/litellm/pull/1944
* [FEAT] ADMIN UI - Show Model Info by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1949
* Litellm proxy routing fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/1946
* [FEAT] UI show user available models when making a key by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1950
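
PR 1943 adds Langfuse tags support. A minimal sketch of how tags are attached, assuming the Langfuse success callback and the `metadata={"tags": [...]}` convention from LiteLLM's Langfuse docs; the keys and tag names are placeholders:

```python
import os
import litellm

# Placeholder credentials for the Langfuse callback
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-..."

# Log successful completions to Langfuse
litellm.success_callback = ["langfuse"]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    # Tags ride along in metadata and land on the Langfuse trace
    metadata={"tags": ["prod", "team-a"]},
)
```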


New Contributors
* deenaawny-github-account made their first contribution in https://github.com/BerriAI/litellm/pull/1943

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.10...v1.23.12

1.23.10

What's Changed
* Enable viewing key alias instead of hashed tokens by krrishdholakia in https://github.com/BerriAI/litellm/pull/1926
* [FEAT] Proxy - set team specific models by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1933
* feat(proxy_server.py): support for pii masking with microsoft presidio by krrishdholakia in https://github.com/BerriAI/litellm/pull/1931
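
PR 1931 wires PII masking through a Microsoft Presidio service. A minimal proxy config sketch; the callback name and the Presidio endpoint variables are assumptions based on LiteLLM docs from this period and may differ by version:

```yaml
litellm_settings:
  # Assumed callback name for this era; later releases shortened it to "presidio"
  callbacks: ["presidio_pii_masking"]

environment_variables:
  # Assumed: base URLs of self-hosted Presidio analyzer/anonymizer services
  PRESIDIO_ANALYZER_API_BASE: "http://localhost:5002"
  PRESIDIO_ANONYMIZER_API_BASE: "http://localhost:5001"
```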


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.9...v1.23.10

1.23.9

What's Changed
* fix(usage.tsx): do cost breakdown by model by krrishdholakia in https://github.com/BerriAI/litellm/pull/1928
* [FEAT] Proxy set ssl_certificates on proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1929


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.8...v1.23.9

1.23.8

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.7...v1.23.8

1.23.7

What's Changed
* [FEAT] ui - view total proxy spend / budget by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1915
* [FEAT] Bedrock set timeouts on litellm.completion by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1919
* [FEAT] Use LlamaIndex with Proxy - Support azure deployments for /embeddings - by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1921
* [FIX] Verbose Logger - don't double print CURL command by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1924
* [FEAT] Set timeout for bedrock on proxy by ishaan-jaff in https://github.com/BerriAI/litellm/pull/1922
* feat(proxy_server.py): show admin global spend as time series data by krrishdholakia in https://github.com/BerriAI/litellm/pull/1920

1. Bedrock Set Timeouts

Usage - litellm.completion

```python
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-instant-v1",
    timeout=0.01,  # seconds; deliberately tiny to force a fast timeout
    messages=[{"role": "user", "content": "hello, write a 20 pg essay"}],
)
```
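
With a timeout this small the call should fail almost immediately. A sketch of handling that, assuming LiteLLM's OpenAI-compatible exception mapping surfaces it as `litellm.exceptions.Timeout`:

```python
import litellm

try:
    litellm.completion(
        model="bedrock/anthropic.claude-instant-v1",
        timeout=0.01,
        messages=[{"role": "user", "content": "hello"}],
    )
except litellm.exceptions.Timeout:
    # Assumption: Bedrock timeouts map to litellm's Timeout exception
    print("Bedrock call timed out")
```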


Usage on Proxy config.yaml

```yaml
model_list:
  - model_name: BEDROCK_GROUP
    litellm_params:
      model: bedrock/cohere.command-text-v14
      timeout: 0.0001
```
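
With the proxy running on this config, the `BEDROCK_GROUP` alias is callable like any OpenAI model. A minimal sketch with the OpenAI SDK (v1+), reusing the `http://0.0.0.0:4000` / `sk-1234` placeholders from the example further down:

```python
from openai import OpenAI

# Point the OpenAI SDK at the LiteLLM proxy
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="BEDROCK_GROUP",  # resolves to bedrock/cohere.command-text-v14 via config.yaml
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```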

2. View total proxy spend / budget
<img width="1280" alt="Screenshot 2024-02-09 at 11 50 23 AM" src="https://github.com/BerriAI/litellm/assets/29436595/e1090d6d-b3a4-4b8a-87b7-66bde3534a31">

3. Use LlamaIndex with Proxy - Support azure deployments for /embeddings

Send embedding requests like this:

`http://0.0.0.0:4000/openai/deployments/azure-embedding-model/embeddings?api-version=2023-07-01-preview`

This allows users to use LlamaIndex's AzureOpenAI client with LiteLLM.
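
The route can also be exercised directly with the OpenAI SDK's Azure client; a minimal sketch, assuming the same proxy endpoint and key as the LlamaIndex example below:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="sk-1234",                     # LiteLLM proxy key
    azure_endpoint="http://0.0.0.0:4000",  # LiteLLM proxy base URL
    api_version="2023-07-01-preview",
)

# "model" is the azure deployment name the proxy exposes
resp = client.embeddings.create(model="azure-embedding-model", input=["hello world"])
print(len(resp.data[0].embedding))
```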

Use LlamaIndex with LiteLLM Proxy

```python
from dotenv import load_dotenv

load_dotenv()

from llama_index.llms import AzureOpenAI
from llama_index.embeddings import AzureOpenAIEmbedding
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext

# Both clients point at the LiteLLM proxy, not Azure directly
llm = AzureOpenAI(
    engine="azure-gpt-3.5",
    temperature=0.0,
    azure_endpoint="http://0.0.0.0:4000",
    api_key="sk-1234",
    api_version="2023-07-01-preview",
)

embed_model = AzureOpenAIEmbedding(
    deployment_name="azure-embedding-model",
    azure_endpoint="http://0.0.0.0:4000",
    api_key="sk-1234",
    api_version="2023-07-01-preview",
)

response = llm.complete("The sky is a beautiful blue and")
print(response)

documents = SimpleDirectoryReader("llama_index_data").load_data()
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```

**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.5...v1.23.7

1.23.5

What's Changed
* fix(proxy_server.py): enable aggregate queries via /spend/keys by krrishdholakia in https://github.com/BerriAI/litellm/pull/1901
* fix(factory.py): mistral message input fix by krrishdholakia in https://github.com/BerriAI/litellm/pull/1902


**Full Changelog**: https://github.com/BerriAI/litellm/compare/v1.23.4...v1.23.5
