Langchain-decorators

Latest version: v0.6.0

0.5.1

- Break the hard dependency on `promptwatch` (it is now an optional dependency)

0.5.0

- Ability to pass in a function that augments the function-call arguments before they are executed in `OutputWithFunctionCall`

0.4.2

- Critical bugfix: assistant messages without content (text) but only with function-call arguments were ignored

0.4.1

- Support for `func_description` passed as an argument to the `llm_function` decorator (see the sketch below)
- `func_description` can now be omitted
- Minor fixes
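
A brief illustration of the 0.4.1 items above (a hedged sketch: the function names and the Google-style docstrings are illustrative, only the `func_description` keyword itself comes from this changelog):

```python
from langchain_decorators import llm_function

# description supplied explicitly via the decorator (added in 0.4.1)
@llm_function(func_description="Look up the current temperature for a city.")
def get_temperature(city: str):
    """
    Args:
        city (str): name of the city
    """
    return f"22 °C in {city}"

# func_description omitted (also allowed since 0.4.1); the docstring,
# if present, then carries the description
@llm_function
def ping():
    """Check that the bot is alive."""
    return "pong"
```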

0.4.0

- Input kwargs augmentation by implementing the body of the `llm_prompt` function (see the example [code_examples/augmenting_llm_prompt_inputs.py](https://github.com/ju-bezdek/langchain-decorators/blob/main/code_examples/augmenting_llm_prompt_inputs.py) and the sketch after this list)
- Support for automatic JSON repair if `json_repair` is installed (*even OpenAI's JSON mode output is not yet perfect*)
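
A minimal sketch of the input-kwargs augmentation, assuming the implemented body of an `@llm_prompt` function can return a dict of extra or transformed template inputs; the linked `augmenting_llm_prompt_inputs.py` is the authoritative pattern, and the names below are purely illustrative:

```python
import datetime

from langchain_decorators import llm_prompt

@llm_prompt
def write_changelog_entry(change: str, project: str = "langchain-decorators") -> str:
    """
    Write a one-line changelog entry for {project} describing this change:
    {change}

    Use today's date: {today}
    """
    # assumption: the implemented body supplies extra template inputs that the
    # caller did not pass in, e.g. the current date for {today}
    return {"today": datetime.date.today().isoformat()}

print(write_changelog_entry(change="dropped the hard promptwatch dependency"))
```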

0.3.0

- Support for the new OpenAI models (set as the default; you can turn this off with the environment variable `LANGCHAIN_DECORATORS_USE_PREVIEW_MODELS=0`; see the sketch after this list)
- The new OpenAI JSON mode is turned on automatically if `dict` is the output type / the JSON output parser is used
- Added timeouts to the default model definitions
- You can now reference input variables from `__self__` of the object the `llm_function` is bound to (not only the `llm_prompt`)
- A few bug fixes
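
A minimal sketch tying the 0.3.0 items together; the environment variable name and the `dict` return annotation come from this changelog, while the class, method, and prompt are illustrative and a configured OpenAI API key is assumed:

```python
import os

# opt out of the newer (preview) OpenAI models being used as the defaults
# (variable name taken from the changelog entry above)
os.environ["LANGCHAIN_DECORATORS_USE_PREVIEW_MODELS"] = "0"

from langchain_decorators import llm_prompt

class SupportBot:
    def __init__(self, product_name: str):
        self.product_name = product_name

    @llm_prompt
    def triage(self, issue: str) -> dict:
        """
        Classify this support issue for {product_name} and answer as JSON with
        "category" and "priority" fields:

        {issue}
        """
        # the `dict` return annotation selects the JSON output parser, which
        # now also turns OpenAI's JSON mode on automatically; {product_name}
        # is resolved from the attribute of the bound object

bot = SupportBot(product_name="langchain-decorators")
print(bot.triage(issue="Install fails on Python 3.12"))
```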
