Dandy

Latest version: v0.14.2

0.9.2

Fixes

- Update requirements.txt

0.9.1

Fixes

- Added a default instructions prompt to the "LlmBot" class.

0.9.0

Features

- LlmBots now have a default built-in process method that takes a prompt and returns an intel object.
- Changed our HTTP handling library to "httpx".
- The contrib choice llm bot has been replaced with the much simpler selector llm bot.
- The "Prompt" class init now has a "text" argument that automatically creates a prompt with a text snippet, for simple prompts.
- New "DEFAULT_LLM_REQUEST_TIME_OUT" setting that controls the timeout for LLM requests; the default is "None".
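As a rough sketch of the new "text" shortcut on the "Prompt" init, the stand-in below mimics the described behavior; the real dandy "Prompt" class may differ, and its "snippets" attribute is an assumption here:

```python
class Prompt:
    """Stand-in sketch of dandy's Prompt; the real API may differ and
    the `snippets` attribute is an assumption, not dandy's internals."""

    def __init__(self, text=None):
        self.snippets = []
        if text is not None:
            # The new `text` argument auto-creates a text snippet,
            # so simple prompts need no extra setup calls.
            self.snippets.append(text)


# Simple prompts can now be created in one line.
prompt = Prompt(text="Summarize the article in two sentences.")
```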

Changes

- Moved "llm_bot" from "dandy.bot" to "dandy.llm.bot" to match our refactoring changes.
- Changed the base class from "Handler" to "BaseProcessor".
- Refactored "Intel" to "BaseIntel" to improve readability.
- Added "BaseLlmBot" class to "dandy.llm.bot" to be used for creating all llm bots.
- "BaseLlmBot" config now takes just a string that is one of the "LLM_CONFIGS" in the settings.
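Conceptually, the config-by-name change means a bot names its LLM config rather than embedding one. A minimal sketch, assuming hypothetical config keys and a "get_llm_config" helper (not dandy's actual internals):

```python
# Hypothetical sketch: a bot's `config` is just a string naming an entry
# in the settings' LLM_CONFIGS dict; the keys inside each config here
# (MODEL, TEMPERATURE) are assumptions, not dandy's documented schema.
LLM_CONFIGS = {
    "DEFAULT": {"MODEL": "llama3", "TEMPERATURE": 0.2},
    "FAST": {"MODEL": "phi3", "TEMPERATURE": 0.0},
}


class BaseLlmBot:
    config = "DEFAULT"  # must be one of the LLM_CONFIGS keys

    @classmethod
    def get_llm_config(cls):
        # Resolve the string to the actual config at call time.
        return LLM_CONFIGS[cls.config]


class FastBot(BaseLlmBot):
    config = "FAST"
```

Subclasses only override the string, so swapping configs never means duplicating connection details.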

Fixes

- There is now a "DefaultLlmIntel" class, with a single attribute called "text", that is used as the default intel class for LlmBots.
- Fixed several generic type handling issues throughout the project.
- Connection retry count of zero no longer causes an error.
- Refactored llm internal packages to better match their usage.
- Fixed "AsyncFuture" so the result can be accessed more than once.
- Fixed CLI to properly load environment variables and settings in the correct order.
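The "AsyncFuture" fix amounts to caching the result on first access so repeated reads keep working. A hypothetical sketch of that pattern (not dandy's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor


class AsyncFuture:
    """Sketch of a result-caching async future; dandy's real class and
    its interface are assumptions here."""

    _UNSET = object()

    def __init__(self, fn, *args, **kwargs):
        self._executor = ThreadPoolExecutor(max_workers=1)
        self._future = self._executor.submit(fn, *args, **kwargs)
        self._result = self._UNSET

    @property
    def result(self):
        if self._result is self._UNSET:
            # First access: block until done, then cache the value.
            self._result = self._future.result()
            self._executor.shutdown()
        return self._result
```

Because the value is cached, a second read of "result" returns the same object without touching the already shut-down executor.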

0.8.1

Fixes

- Reworked the settings module validation so it is much easier to implement and debug.

0.8.0

Major Changes

- We have created a new class called "Intel", which is the pydantic "BaseModel" class renamed to give more separation of concerns between dandy code and your code.
- Throughout most of the project, the word "Model" has been refactored to "Intel" to create more separation of concerns in projects.
- The word "Model" carries a lot of meaning in the context of dandy, artificial intelligence, databases, libraries, and other frameworks.
- Our hope is that this creates a very clear line between these specific objects and the rest of your code.

0.7.0

Major Improvement

- All the changes in v0.7.0 should reduce the overall code required to work with dandy by up to 50%.

Features

- Project structure improvement with a new settings file:
  - All projects going forward will require a "dandy_settings.py" file in the root of your project.
  - This file will also need a "BASE_PATH" str variable set to the root of your project.
  - This file will also need an "LLM_CONFIGS" dict variable with a "DEFAULT" llm config.
- The debug recorder can now output to a JSON string or file.
- Added a randomize seed option to the LLM config that will randomize the seed every time the config is used.
- Added a new evaluate CLI command ("-e" / "--evaluate") for evaluating your dandy code.
  - Currently only supports "Prompt" evaluation.
- "ALLOW_DEBUG_RECORDING" was added to the settings for project-wide control of the debug recorder.
  - Defaults to "False".
- You can now select your llm config from the CLI using the "-l" / "--llm-config" flag.
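Putting the settings requirements above together, a "dandy_settings.py" might look like the sketch below; "BASE_PATH", an "LLM_CONFIGS" dict with a "DEFAULT" entry, and "ALLOW_DEBUG_RECORDING" come from the notes above, while the keys inside each LLM config are assumptions:

```python
# dandy_settings.py -- example layout; the keys inside the LLM config
# (MODEL, TEMPERATURE, SEED) are assumptions, not dandy's documented schema.
import os

# Root of the project: here, the directory containing this settings file.
BASE_PATH = os.path.dirname(os.path.abspath(__file__))

# Project-wide switch for the debug recorder.
ALLOW_DEBUG_RECORDING = False

LLM_CONFIGS = {
    "DEFAULT": {
        "MODEL": "llama3",
        "TEMPERATURE": 0.2,
        "SEED": None,  # or use the randomize seed option noted above
    },
}
```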

Changes

- Updated the readme to match the new project structure.
- All settings and llm configs are now managed through the dandy settings module.
- The environment variable "DANDY_SETTINGS_MODULE" can be used to specify the settings module to be used.
- The system defaults to looking for a "dandy_settings.py" file in the current working directory or on sys.path.
- Moved many project-wide constants into the "const.py" file.
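The lookup order described above (environment variable first, then a "dandy_settings.py" on the current working directory or sys.path) can be sketched as follows; the helper name is hypothetical:

```python
import os
import sys


def resolve_settings_module_name(default="dandy_settings"):
    """Hypothetical sketch of the described lookup: DANDY_SETTINGS_MODULE
    wins when set; otherwise fall back to the default module name, which
    is searched on the current working directory / sys.path."""
    name = os.environ.get("DANDY_SETTINGS_MODULE", default)
    cwd = os.getcwd()
    if cwd not in sys.path:
        # Make the current working directory importable, mirroring the
        # "current working directory or sys.path" behavior noted above.
        sys.path.insert(0, cwd)
    return name
```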

Fixes

- Fixed the user "dandy_settings.py" not loading properly within the internal dandy modules.
- Fixed the readme to match the new project structure and configuration setup.
- Fixed the cli to properly use the dandy_settings.py file in the current working directory.
- Improved testing coverage across the whole project.
