Dandy

Latest version: v0.13.3

0.3.0

Features

- Ollama config now supports max_completion_tokens (num_predict on the Ollama API) and context_length (num_ctx on the Ollama API); a mapping sketch follows below.
- OpenAI config now supports max_completion_tokens.
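
A minimal sketch of how these options might translate to the Ollama API's option names. The config dict shape here is an assumption for illustration, not Dandy's actual API; only the option names come from this release.

```python
# Hypothetical illustration: how the new config keys could map onto the
# Ollama API's option names. The config dict shape is an assumption, not
# Dandy's actual API; only the option names come from this release.
config = {
    "max_completion_tokens": 256,  # sent as num_predict on the Ollama API
    "context_length": 4096,        # sent as num_ctx on the Ollama API
}

ollama_options = {
    "num_predict": config["max_completion_tokens"],
    "num_ctx": config["context_length"],
}

print(ollama_options)  # {'num_predict': 256, 'num_ctx': 4096}
```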

Changes

- Single Choice LLM Bots now return only the value when the choices come from a dictionary.
- Multiple Choice LLM Bots now return a list of only the values when the choices come from a dictionary (see the sketch below).
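
A hypothetical illustration of the new return behavior for dictionary choices; the bot classes themselves are omitted, and only the value-only behavior comes from this release.

```python
# Hypothetical illustration of the 0.3.0 return behavior for dictionary
# choices; the bot classes are omitted, only the behavior is shown.
choices = {"Red": "#ff0000", "Green": "#00ff00", "Blue": "#0000ff"}

# A Single Choice LLM Bot that picks "Red" now returns only the value:
single_result = choices["Red"]
print(single_result)  # #ff0000

# A Multiple Choice LLM Bot that picks "Red" and "Blue" now returns a
# list of only the values:
multiple_result = [choices[key] for key in ("Red", "Blue")]
print(multiple_result)  # ['#ff0000', '#0000ff']
```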

Fixes

- Improved validation on LLM configs.

0.2.0

Features

- Added async support, using a thread pool executor to create a future where processing runs in the background.
- Use future.result() to get the result of the future.
- Added a datetime to the debug recorder output.

Changes

- Handlers, Bots, and Workflows now have a process_to_future method that can be used to process things into futures; the pattern is sketched below.
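
The release notes say this is built on a thread pool executor, with future.result() retrieving the processed value. A minimal sketch of that general pattern using Python's standard library; the process function here is a stand-in, not Dandy's API.

```python
from concurrent.futures import ThreadPoolExecutor

def process(prompt: str) -> str:
    # Stand-in for a Handler, Bot, or Workflow process call.
    return prompt.upper()

# Submitting work to a thread pool executor returns a future immediately,
# so the caller can keep working while processing runs in the background.
with ThreadPoolExecutor() as executor:
    future = executor.submit(process, "hello dandy")
    # ... other work could happen here ...
    result = future.result()  # blocks until the processed value is ready

print(result)  # HELLO DANDY
```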

Fixes

- Fixed tests so they no longer output large blocks of text.

0.1.3

Changes

- Prompt formatting has been slightly changed to improve inference quality.

Fixes

- Fixed prompt lists to handle indentation and nested lists, tuples, and sets; a sketch of this kind of rendering follows.
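
A standalone illustration of what rendering nested lists, tuples, and sets with indentation looks like; this is not Dandy's actual implementation, only a sketch of the behavior the fix describes.

```python
# Standalone illustration of indented rendering for nested lists, tuples,
# and sets inside a prompt; this is not Dandy's actual implementation.
def render_items(items, depth: int = 0) -> str:
    lines = []
    for item in items:
        if isinstance(item, (list, tuple, set)):
            # Recurse one level deeper so nested structures indent further.
            lines.append(render_items(item, depth + 1))
        else:
            lines.append(f"{'  ' * depth}- {item}")
    return "\n".join(lines)

print(render_items(["fruits", ["apple", "banana"], "colors", ("red", "blue")]))
# - fruits
#   - apple
#   - banana
# - colors
#   - red
#   - blue
```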

0.1.1

0.1.0

Features

- Improved Testing and Debugging
- Prompts now support array and array_random_order

Changes

- DebugRecorder method "to_html" renamed to "to_html_file"
- LLM service method "assistant_prompt_str_to_str" renamed to "assistant_str_prompt_to_str"
- Choice LLM Bot now uses array_random_order snippet

Fixes

- Fixed the prompt title to use a better format.
- Choice LLM Bot has improved default prompts.

0.0.10

Features

- Debug Recorder HTML output drastically improved, with many new features.
