Langchain-decorators

Latest version: v0.6.1


0.0.12

New parameters in the llm decorator:
- support for `llm_selector_rule_key` to narrow the subset of LLMs considered during selection. This lets you enforce that only certain models (GPT-4, for instance) are picked for particular prompts, or even for particular runs
- support for `function_source` and `memory_source` to point to properties/attributes of the instance the prompt is bound to (i.e. `self`) as the source of functions and memories, so they no longer need to be passed in on every call (see the sketch below)
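
A minimal sketch of how these parameters might be used together, assuming the library's `llm_prompt` decorator; the rule key value and attribute names here are illustrative, not prescribed by this changelog:

```python
from langchain_decorators import llm_prompt

class PlanningAgent:
    def __init__(self, functions, memory):
        # instance attributes that serve as the source of functions
        # and memory for prompts bound to this instance
        self.functions = functions
        self.memory = memory

    @llm_prompt(
        llm_selector_rule_key="gpt4",    # illustrative: only consider LLMs tagged with this rule key
        function_source="functions",     # take functions from self.functions
        memory_source="memory",          # take memory from self.memory
    )
    def plan_next_step(self, goal: str) -> str:
        """
        Given the goal: {goal}, propose the next step.
        """
        return
```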

0.0.11

- fixed streaming
- multiple small bugfixes
- added an option to set the expected generated token count as a hint for the LLM selector
- added an argument schema option for `llm_function` (see the sketch below)
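
A hedged sketch of how an explicit argument schema might be attached to an `llm_function`; the `arguments_schema` keyword is a hypothetical name used here for illustration, since the changelog only states that such an option was added:

```python
from pydantic import BaseModel, Field
from langchain_decorators import llm_function

class SearchArgs(BaseModel):
    query: str = Field(description="Full-text search query")
    max_results: int = Field(default=5, description="Maximum number of hits to return")

# `arguments_schema` is a hypothetical keyword; the changelog only says
# an argument schema option was added for llm_function
@llm_function(arguments_schema=SearchArgs)
def search_docs(query: str, max_results: int = 5) -> list:
    """Search the documentation index."""
    return []
```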

0.0.10

- async streaming callback support
- `LlmSelector` for automatic selection of an LLM based on the model's context window and the prompt length (see the sketch below)
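
The idea, sketched under assumptions: candidate models are registered in order of preference so the selector can pick one whose context window fits the rendered prompt plus the expected generation. The constructor arguments and `with_llm` chaining below are assumptions for illustration, not confirmed by this changelog:

```python
from langchain.chat_models import ChatOpenAI
from langchain_decorators import GlobalSettings, LlmSelector

# Sketch only: register models from smallest to largest context window;
# the selector is meant to choose the first one that fits the prompt
GlobalSettings.define_settings(
    llm_selector=LlmSelector()
        .with_llm(ChatOpenAI(model_name="gpt-3.5-turbo"))      # ~4k context
        .with_llm(ChatOpenAI(model_name="gpt-3.5-turbo-16k"))  # ~16k context
        .with_llm(ChatOpenAI(model_name="gpt-4"))              # larger fallback
)
```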

0.0.9

- fixed some LLM-response scenarios that raised errors
- saved the AIMessage with the function call in the output wrapper
- fixed logging of "out of stream context" when streaming is not enabled

0.0.8

- support for parsing via OpenAI functions 🚀 (see the sketch below)
- support for controlling `function_call`
- added the BIG_CONTEXT prompt type
- many bugfixes
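
A hedged sketch of controlling `function_call`, assuming functions are passed at call time to an `llm_prompt`-decorated function; the call-time `functions` / `function_call` arguments are assumptions inferred from the feature names above:

```python
from langchain_decorators import llm_prompt, llm_function

@llm_function
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

@llm_prompt
def assistant(request: str, functions=None, function_call=None) -> str:
    """
    Help the user with this request: {request}
    """
    return

# Assumption: setting function_call to a function's name forces the model
# to call that function rather than answering in free text
result = assistant(
    request="What's the weather in Prague?",
    functions=[get_weather],
    function_call="get_weather",
)
```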

0.0.7

- fixed streaming capture
- better handling of missing docstrings for `llm_function`
