Jaankoppe-llama-index

Latest version: v0.8.26.post3


0.8.4

Bug Fixes / Nits
- Improve SQL query parsing (7283)
- Fix loading embed_model from global service context (7284)
- Limit langchain version until we migrate to pydantic v2 (7297)

0.8.3

New Features
- Added Knowledge Graph RAG Retriever (7204)
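
A hedged sketch of wiring the new retriever into a query engine, following the usual retriever/query-engine pattern; the empty `SimpleGraphStore` stands in for whatever graph store you actually use:

```python
from llama_index import StorageContext
from llama_index.graph_stores import SimpleGraphStore
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.retrievers import KnowledgeGraphRAGRetriever

# Any graph store works here; an empty in-memory store keeps the sketch minimal.
storage_context = StorageContext.from_defaults(graph_store=SimpleGraphStore())

graph_rag_retriever = KnowledgeGraphRAGRetriever(
    storage_context=storage_context,
    verbose=True,
)
query_engine = RetrieverQueryEngine.from_args(graph_rag_retriever)
```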

Bug Fixes / Nits
- Accept `api_key` kwarg in the OpenAI LLM class constructor (7263); see the sketch after this list
- Fix to create separate queue instances for separate instances of `StreamingAgentChatResponse` (7264)
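
A minimal sketch of the new `api_key` kwarg, assuming llama-index 0.8.3 (the key value is a placeholder):

```python
from llama_index.llms import OpenAI

# Pass the key directly instead of relying on the OPENAI_API_KEY env var.
llm = OpenAI(model="gpt-3.5-turbo", api_key="sk-...")
```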

0.8.2.post1

New Features
- Added support for Rockset as a vector store (7111)
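
A sketch of the new store, assuming `ROCKSET_API_KEY` is set in the environment and the workspace has a collection named `llamaindex_demo` (both the collection name and the sample document are illustrative):

```python
from llama_index import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores import RocksetVectorStore

# Index a document into an existing Rockset collection.
vector_store = RocksetVectorStore(collection="llamaindex_demo")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="Rockset is a real-time analytics database.")],
    storage_context=storage_context,
)
```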

Bug Fixes
- Fixed bug in service context definition that could disable LLM (7261)

0.8.2

New Features
- Enable the LLM or embedding model to be disabled by setting it to `None` in the service context (7255)
- Resolve nearly any Hugging Face embedding model using the `embed_model="local:<model_name>"` syntax (7255); see the sketch after this list
- Async tool-calling support (7239)
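
Both 7255 changes in one minimal sketch (the model name is just one example of the `local:` syntax):

```python
from llama_index import ServiceContext

service_context = ServiceContext.from_defaults(
    llm=None,  # disable the LLM entirely (7255)
    embed_model="local:BAAI/bge-small-en",  # resolve a Hugging Face model by name (7255)
)
```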

Bug Fixes / Nits
- Updated Supabase kwargs for `add` and `query` (7103)
- Small tweak to default prompts to allow for more general-purpose queries (7254)
- Make callback manager optional for `CustomLLM` + docs update (7257)
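
With 7257, a `CustomLLM` subclass can be instantiated without supplying a callback manager. A hedged sketch of a toy subclass (the echo behavior is purely illustrative):

```python
from llama_index.llms import CompletionResponse, CustomLLM, LLMMetadata
from llama_index.llms.base import llm_completion_callback

class EchoLLM(CustomLLM):
    """Toy LLM that echoes the prompt back."""

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata()

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs) -> CompletionResponse:
        return CompletionResponse(text=prompt)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs):
        yield CompletionResponse(text=prompt, delta=prompt)

llm = EchoLLM()  # no callback_manager required as of this release
```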

0.8.1

New Features
- Add `node_postprocessors` to `ContextChatEngine` (7232); see the sketch after this list
- Add ensemble query engine tutorial (7247)
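
A hedged sketch of 7232, using `SimilarityPostprocessor` as an example postprocessor and a one-document index to keep it self-contained:

```python
from llama_index import Document, VectorStoreIndex
from llama_index.chat_engine import ContextChatEngine
from llama_index.indices.postprocessor import SimilarityPostprocessor

index = VectorStoreIndex.from_documents(
    [Document(text="The author grew up painting and writing short stories.")]
)
chat_engine = ContextChatEngine.from_defaults(
    retriever=index.as_retriever(),
    # Drop low-relevance nodes before they reach the chat context.
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
)
```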

Smaller Features
- Allow `EMPTY` API keys for FastChat/local OpenAI API endpoints (7224)
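
For local OpenAI-compatible servers such as FastChat, the key check now accepts `EMPTY`. A sketch using the era's global `openai` settings (the localhost URL is an assumed server address):

```python
import openai

openai.api_key = "EMPTY"  # a real key is no longer required for local endpoints
openai.api_base = "http://localhost:8000/v1"  # assumed FastChat-compatible server
```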

0.8.0

New Features
- Added "LLAMA_INDEX_CACHE_DIR" to control cached files (7233)
- Default to pydantic selectors when possible (7154, 7223)
- Remove the need for langchain wrappers on `embed_model` in the service context (7157)
- Metadata extractors take an `LLM` object now, in addition to `LLMPredictor` (7202)
- Added local mode + fallback to llama.cpp + llama2 (7200)
- Added local fallback for embeddings to `BAAI/bge-small-en` (7200)
- Added `SentenceWindowNodeParser` + `MetadataReplacementPostProcessor` (7211)
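
A minimal sketch of the new sentence-window pair from 7211; the window size and metadata keys shown mirror the documented defaults:

```python
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.indices.postprocessor import MetadataReplacementPostProcessor

# Parse documents into single-sentence nodes, storing a window of
# surrounding sentences in each node's metadata.
node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)

# At query time, replace each retrieved node's text with its stored window
# so the LLM sees the surrounding context during synthesis.
postprocessor = MetadataReplacementPostProcessor(target_metadata_key="window")
```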

Breaking Changes
- Change the default LLM from text-davinci-003 to gpt-3.5-turbo (7223)
- Change prompts for compact/refine/tree_summarize to work better with gpt-3.5-turbo (7150, 7179, 7223)
- Increase default LLM temperature to 0.1 (7180)
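
Projects that depended on the old behavior can pin it explicitly. A sketch restoring the previous model, with temperature 0.0 as the assumed prior default:

```python
from llama_index import ServiceContext
from llama_index.llms import OpenAI

# Opt back into the pre-0.8.0 defaults instead of gpt-3.5-turbo at 0.1.
service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="text-davinci-003", temperature=0.0)
)
```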
