Langchain-decorators

Latest version: v0.6.0


0.2.3

- Fix verbose result logging when not in verbose mode
- Fix LangChain logging warnings caused by deprecated imports

0.2.2

- Minor bugfix for LlmSelector causing an error in specific cases

0.2.1

- Hotfix for a bug that caused simple prompts (without prompt blocks) to fail

0.2.0

- Support for custom template building, to allow any kind of prompt block types (https://github.com/ju-bezdek/langchain-decorators/issues/5)
- Support for retrieving a chain object with preconfigured kwargs, for more convenient use with the rest of the LangChain ecosystem
- Support for a follow-up handle, for conveniently following up on a response without using a history object (see the sketch below)
- Hotfix for Pydantic v2 support
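
For illustration, a minimal sketch of the follow-up handle mentioned above, assuming the `FollowupHandle` class and its `followup` method from the library's README (neither is spelled out in this changelog):

```python
from langchain_decorators import llm_prompt, FollowupHandle

@llm_prompt
def ask(question: str, followup_handle: FollowupHandle = None) -> str:
    """
    Answer briefly: {question}
    """

# Bind a handle to the first call, then follow up without managing history yourself
handle = FollowupHandle()
answer = ask(question="Who wrote The Hobbit?", followup_handle=handle)
followup_answer = handle.followup("And in which year was it first published?")
```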

0.1.0

- Support for dynamic function schemas, allowing the function schema to be augmented dynamically based on the input [more here](./README.MD#dynamic-function-schemas)
- Support for a functions provider, allowing control over which functions/tools are fed into the LLM [more here](./README.MD#functions-provider) (see the sketch below)
- Minor fix to the JSON output parser for array scenarios
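
A rough sketch of how controlling the function/tool selection might look; the `llm_function` decorator and the `functions` argument are assumptions based on the library's README, and the `FunctionsProvider` API itself is only described there:

```python
from langchain_decorators import llm_prompt, llm_function

@llm_function
def get_weather(location: str) -> str:
    """
    Get the current weather.

    Args:
        location (str): city to look up
    """
    return "sunny"

@llm_prompt
def plan_day(goal: str, functions=None) -> str:
    """
    Help me plan my day around this goal: {goal}
    """

# Only the functions passed in (here directly, or via a functions provider)
# are offered to the LLM as callable tools for this run
result = plan_day(goal="a picnic in the park", functions=[get_weather])
```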

0.0.12

New parameters in the llm decorator
- Support for `llm_selector_rule_key` to restrict the subset of LLMs considered during selection. This lets you enforce picking only certain models (GPT-4, for instance) for particular prompts, or even for particular runs (see the sketch below)
- Support for `function_source` and `memory_source` to pick properties/attributes of the instance the prompt is bound to (i.e. `self`) as the source of functions and memories, so they don't need to be passed in every time
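
A minimal sketch of the `llm_selector_rule_key` parameter; the parameter name comes from this changelog, while the rule key value and the prompt are made up for illustration:

```python
from langchain_decorators import llm_prompt

# Restrict LLM selection for this prompt to models registered under the
# "GPT4" rule key in your LlmSelector (the key value is an assumption here)
@llm_prompt(llm_selector_rule_key="GPT4")
def write_legal_summary(contract_text: str) -> str:
    """
    Summarize the key obligations in this contract:
    {contract_text}
    """
```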
