Jaankoppe-llama-index

Latest version: v0.8.26.post3

Page 10 of 14

0.7.3

New Features

- Sub question query engine now returns the source nodes of sub questions in the callback manager (6745)
- TruLens integration (6741)

Bug Fixes / Nits

- Added/fixed streaming support for the simple and condense chat engines (6717)
- Fixed `response_mode="no_text"` response synthesizer (6755)
- Fixed error when setting `num_output` and `context_window` in service context (6766)
- Fix missing `as_query_engine()` in tutorial (6747)
- Fixed `sql_query_engine` variable in the notebook (6778)
- Fix required function fields (6761)
- Remove usage of stop token in Prompt, SQL gen (6782)

0.7.2

New Features

- Support Azure OpenAI (6718)
- Support prefix messages (e.g. system prompt) in chat engine and OpenAI agent (6723)
- Added `CBEventType.SUB_QUESTIONS` event type for tracking sub question queries/responses (6716)

Bug Fixes / Nits

- Fix HF LLM output error (6737)
- Add system message support for langchain message templates (6743)
- Fixed applying node-postprocessors (6749)
- Add missing `CustomLLM` import under `llama_index.llms` (6752)
- fix(typo): `get_transformer_tokenizer_fn` (6729)
- feat(formatting): `black[jupyter]` (6732)
- fix(test): `test_optimizer_chinese` (6730)

0.7.1

New Features

- Streaming support for OpenAI agents (6694)
- Add recursive retriever + notebook example (6682)
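The recursive retriever follows references from retrieved nodes into further retrievers before returning results. A minimal pure-Python sketch of that idea, using stand-in `Node` and `Retriever` classes rather than the actual llama_index API:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Illustrative stand-ins, not the llama_index classes.
@dataclass
class Node:
    text: str
    ref_id: Optional[str] = None  # id of a sub-retriever this node points to

@dataclass
class Retriever:
    nodes: List[Node] = field(default_factory=list)

    def retrieve(self, query: str) -> List[Node]:
        # Toy relevance: keep nodes sharing at least one word with the query.
        words = set(query.lower().split())
        return [n for n in self.nodes if words & set(n.text.lower().split())]

def recursive_retrieve(query: str, root: Retriever,
                       registry: Dict[str, Retriever]) -> List[Node]:
    """Retrieve from root; recurse into any node that references a sub-retriever."""
    results: List[Node] = []
    for node in root.retrieve(query):
        if node.ref_id and node.ref_id in registry:
            results.extend(recursive_retrieve(query, registry[node.ref_id], registry))
        else:
            results.append(node)
    return results
```

Here a "root" index holds pointer nodes, and matches recurse into the leaf indices they reference; the real implementation adds scoring and de-duplication on top of this shape.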

0.7.0

New Features

- Index creation progress bars (6583)

Bug Fixes / Nits

- Improved chat refine template (6645)

Breaking/Deprecated API Changes

- Change `BaseOpenAIAgent` to use `llama_index.llms.OpenAI`. Adjust `chat_history` to use `List[ChatMessage]` as type.
- Remove (previously deprecated) `llama_index.langchain_helpers.chain_wrapper` module.
- Remove (previously deprecated) `llama_index.token_counter.token_counter` module. See [migration guide](/how_to/callbacks/token_counting_migration.html) for more details on new callback based token counting.
- Remove `ChatGPTLLMPredictor` and `HuggingFaceLLMPredictor`. See [migration guide](/how_to/customization/llms_migration_guide.html) for more details on replacements.
- Remove support for setting `cache` via `LLMPredictor` constructor.
- Update `BaseChatEngine` interface:
  - adjust `chat_history` to use `List[ChatMessage]` as type
  - expose `chat_history` state as a property
  - support overriding `chat_history` in `chat` and `achat` endpoints
- Remove deprecated arguments for `PromptHelper`: `max_input_size`, `embedding_limit`, `max_chunk_overlap`
- Update all notebooks to use the native OpenAI integration (6696)
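For callers migrating across the `chat_history` change above: the old string-pair history maps mechanically onto the new message list. A hedged sketch using a stand-in `ChatMessage` dataclass (the real class ships in `llama_index.llms`), assuming the pre-0.7.0 history was a list of (user, assistant) string tuples:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ChatMessage:
    """Stand-in for llama_index.llms.ChatMessage (a role plus content)."""
    role: str
    content: str

def upgrade_history(pairs: List[Tuple[str, str]]) -> List[ChatMessage]:
    """Convert (user, assistant) string pairs into the List[ChatMessage]
    shape the 0.7.0 BaseChatEngine interface expects."""
    messages: List[ChatMessage] = []
    for user_text, assistant_text in pairs:
        messages.append(ChatMessage(role="user", content=user_text))
        messages.append(ChatMessage(role="assistant", content=assistant_text))
    return messages
```

The same conversion applies when passing `chat_history` into the `chat`/`achat` endpoints, which now accept an override per the interface change above.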

0.6.38

New Features

- Add optional tqdm progress bar during index creation (6583)
- Added async support for "compact" and "refine" response modes (6590)
- Add transformer tokenization functionality for the optimizer (Chinese) (6659)
- Add simple benchmark for vector store (6670)
- Introduce `llama_index.llms` module, with new `LLM` interface, and `OpenAI`, `HuggingFaceLLM`, `LangChainLLM` implementations (6615)
- Evaporate pydantic program (6666)
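The point of the new `LLM` interface is that completion backends become interchangeable behind one abstraction. A rough pure-Python sketch of the pattern, with stand-in names rather than the actual `llama_index.llms` class definitions:

```python
from abc import ABC, abstractmethod

class LLM(ABC):
    """Stand-in for the llama_index.llms.LLM interface: every backend
    exposes the same completion surface."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoLLM(LLM):
    """Toy backend standing in for OpenAI / HuggingFaceLLM / LangChainLLM;
    a real implementation would call the model here."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(llm: LLM, text: str) -> str:
    # Downstream code depends only on the shared interface,
    # so backends can be swapped without changes here.
    return llm.complete(f"Summarize: {text}")
```

Swapping `EchoLLM` for another `LLM` subclass leaves `summarize` untouched, which is the design motivation behind unifying the three implementations under one interface.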

Bug Fixes / Nits

- Improve metadata/node storage and retrieval for RedisVectorStore (6678)
- Fixed node vs. document filtering in vector stores (6677)
- Add context retrieval agent notebook link to docs (6660)
- Allow null values for the 'image' property in the ImageNode class and se… (6661)
- Fix broken links in docs (6669)
- Update Milvus to store node content (6667)

0.6.37

New Features

- Add context-augmented OpenAI agent (6655)

© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.