Langchain-decorators

Latest version: v0.6.1

0.3.0

- Support for new OpenAI models (used as the default; you can turn this off by setting the env variable `LANGCHAIN_DECORATORS_USE_PREVIEW_MODELS=0`)
- Automatically turn on the new OpenAI JSON mode when `dict` is the output type / a JSON output parser is used
- Added timeouts to the default model definitions
- You can now reference input variables from `__self__` of the object an `llm_function` is bound to (not only an `llm_prompt`)
- A few bug fixes
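The automatic JSON-mode switch can be sketched as a standalone decorator. This is a hypothetical illustration of the idea, not the library's actual implementation: `auto_json_mode` and the stand-in `summarize` function are invented names, and the real library wires this into its model configuration rather than a wrapper like this.

```python
import inspect

def auto_json_mode(func):
    """Hypothetical sketch: request OpenAI's JSON mode whenever the wrapped
    prompt function is annotated to return `dict`. Illustrative only."""
    wants_json = inspect.signature(func).return_annotation is dict

    def wrapper(*args, **kwargs):
        model_kwargs = {}
        if wants_json:
            # OpenAI's JSON mode is requested via the response_format parameter
            model_kwargs["response_format"] = {"type": "json_object"}
        return func(*args, model_kwargs=model_kwargs, **kwargs)

    return wrapper

@auto_json_mode
def summarize(text, model_kwargs=None) -> dict:
    # Stand-in for an actual LLM call; returns what would be sent to the model
    return model_kwargs

print(summarize("hello"))  # {'response_format': {'type': 'json_object'}}
```

The key point is that the return annotation alone drives the switch, so no extra configuration is needed when you declare a `dict` output.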

0.2.3

- Fix verbose result logging when not in verbose mode
- Fix LangChain logging warnings about deprecated imports

0.2.2

- Minor bugfix for `LlmSelector` causing an error in specific cases

0.2.1

- Hotfix for a bug that caused simple prompts (without prompt blocks) to fail

0.2.0

- Support for custom template building, enabling any kind of prompt block type (https://github.com/ju-bezdek/langchain-decorators/issues/5)
- Support for retrieving a chain object with preconfigured kwargs, for more convenient use with the rest of the LangChain ecosystem
- Support for a followup handle, for conveniently sending a simple follow-up to a response without using a history object
- Hotfix for pydantic v2 support
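The followup-handle idea can be sketched as a small class that keeps the last exchange so a follow-up can be sent without managing a separate history object. This is a hypothetical sketch: `FollowupHandle`, its `ask` method, and the toy `fake_llm` are illustrative names, not the library's real API.

```python
class FollowupHandle:
    """Hypothetical sketch: accumulate the conversation internally so the
    caller can follow up on a response without a history object."""

    def __init__(self, llm):
        self.llm = llm          # callable: list[dict] -> str
        self.messages = []

    def ask(self, prompt: str) -> str:
        self.messages.append({"role": "user", "content": prompt})
        reply = self.llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Toy "LLM" that just reports how many user turns it has seen
def fake_llm(messages):
    return f"reply #{sum(1 for m in messages if m['role'] == 'user')}"

handle = FollowupHandle(fake_llm)
print(handle.ask("What is LangChain?"))  # reply #1
print(handle.ask("And decorators?"))     # reply #2
```

Because the handle owns its message list, the second `ask` automatically includes the first exchange as context.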

0.1.0

- Support for dynamic function schemas, which allow augmenting the function schema dynamically based on the input ([more here](./README.MD#dynamic-function-schemas))
- Support for a functions provider, which allows controlling which functions/tools are fed to the LLM ([more here](./README.MD#functions-provider))
- Minor fix to the JSON output parser for array scenarios
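A functions provider boils down to choosing, per input, which tool schemas to expose to the LLM. The sketch below is hypothetical: `functions_provider`, the keyword-based selection, and the `REGISTRY` entries are all illustrative, not the library's actual mechanism (see the README links above for the real API).

```python
def functions_provider(user_input: str, registry: dict) -> list:
    """Hypothetical sketch: pick which function/tool schemas to feed the LLM
    for this particular input. Selection logic is illustrative only."""
    selected = []
    for name, spec in registry.items():
        if any(kw in user_input.lower() for kw in spec["keywords"]):
            selected.append(spec["schema"])
    return selected

# Illustrative tool registry (names and schemas are made up)
REGISTRY = {
    "get_weather": {
        "keywords": ["weather", "temperature"],
        "schema": {"name": "get_weather",
                   "parameters": {"type": "object",
                                  "properties": {"city": {"type": "string"}}}},
    },
    "search_docs": {
        "keywords": ["docs", "documentation"],
        "schema": {"name": "search_docs",
                   "parameters": {"type": "object",
                                  "properties": {"query": {"type": "string"}}}},
    },
}

tools = functions_provider("What's the weather in Oslo?", REGISTRY)
print([t["name"] for t in tools])  # ['get_weather']
```

Feeding only the relevant subset of tools keeps the prompt smaller and makes tool selection by the model more reliable.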
