Toolio

Latest version: v0.5.2


0.5.2

Added

- `toolio.common.load_or_connect` convenience function
- `reddit_newsletter` multi-agent demo
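Judging by its name, `toolio.common.load_or_connect` dispatches between loading a local model and connecting to a running Toolio server. A rough pure-Python sketch of that kind of dispatch (all names and behavior here are illustrative guesses, not Toolio's actual implementation):

```python
# Illustrative sketch only -- not Toolio's actual code.
# A load-or-connect helper plausibly routes on whether the reference
# looks like a server URL or a local model path.

def load_or_connect_sketch(ref: str) -> str:
    """Return which backend a reference would select (illustrative)."""
    if ref.startswith(('http://', 'https://')):
        return f'client for {ref}'   # would connect to a running server
    return f'local model at {ref}'   # would load model weights directly

print(load_or_connect_sketch('http://localhost:8000'))
print(load_or_connect_sketch('mlx-community/Hermes-2-Theta-Llama-3-8B-4bit'))
```

The convenience is that calling code doesn't have to care which deployment style is in use.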

Changed

- Make the `{json_schema}` template "cutout" configurable, and change the default (to `!JSON_SCHEMA!`)
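The "cutout" is the placeholder in a prompt template where the JSON schema text gets spliced in. With the default marker now `!JSON_SCHEMA!`, the substitution presumably works along these lines (a minimal sketch under that assumption, not Toolio's code):

```python
import json

# Minimal sketch of a schema "cutout" substitution, assuming the cutout
# is a literal marker (default '!JSON_SCHEMA!') replaced by the schema dump.
CUTOUT = '!JSON_SCHEMA!'

def fill_schema(template: str, schema: dict, cutout: str = CUTOUT) -> str:
    return template.replace(cutout, json.dumps(schema))

template = 'Respond with JSON matching this schema: !JSON_SCHEMA!'
schema = {'type': 'object', 'properties': {'answer': {'type': 'string'}}}
print(fill_schema(template, schema))
```

Making the marker configurable matters because some prompt conventions or chat templates may already use the old marker text literally.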

Fixed

- Clean up how optional dependencies are handled
- Tool-calling prompting enhancements
- Clean up HTTP client & server interpretation of tool-calling & schemata

0.5.1

Added

- Demo `demo/re_act.py`
- `common.response_text()` function to simplify usage

Fixed

- Usage pattern of KVCache

Changed

- Decode `json_schema` if given as a string

Removed

- `json_response` arg to `llm_helper.complete()`; just go by whether `json_schema` is `None`
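Taken together, the two schema changes above suggest a normalization step like the following (an illustrative sketch, not the actual `llm_helper` code): decode `json_schema` when it arrives as a string, and let `None` alone signal free-form output rather than a separate `json_response` flag.

```python
import json

def normalize_schema(json_schema):
    """Illustrative: accept a dict, a JSON string, or None."""
    if json_schema is None:
        return None                      # free-form completion
    if isinstance(json_schema, str):
        return json.loads(json_schema)   # decode string-form schema
    return json_schema                   # already a dict

# Constrained vs. free-form is decided by the schema alone,
# with no separate boolean flag to keep in sync:
def is_constrained(json_schema) -> bool:
    return normalize_schema(json_schema) is not None

print(is_constrained(None))                  # False
print(is_constrained('{"type": "object"}'))  # True
```

Dropping the flag removes a class of bugs where `json_response` and `json_schema` could disagree.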

0.5.0

Added

- `llm_helper.debug_model_manager`—a way to extract raw prompt & schema/tool-call info for debugging of underlying LLM behavior
- docs beyond the README (`doc` folder)
- test cases
- `demo/algebra_tutor.py`
- `demo/blind_obedience.py`

Changed

- use of logger rather than trace boolean, throughout
- further code modularization and reorg
- improvements to default prompting
- more elegant handling of install from an unsupported OS

Fixed

- handling of multi-trip scenarios

0.4.2

Added

- notes on how to override prompting

Changed

- processing for function-calling system prompts

Fixed

- server startup 😬

0.4.1

Added

- demo `demo/zipcode.py`
- support for multiple workers & CORS headers (`--workers` & `--cors_origin` cmdline options)

Fixed

- async tool definitions

0.4.0

Added

- `toolio.responder` module, with coherent factoring from `server.py`
- `llm_helper.model_manager` convenience API for direct Python loading & inferencing over models
- `llm_helper.extract_content` helper to simplify the OpenAI-style streaming completion responses
- `test/quick_check.py` for quick assessment of LLMs in Toolio
- Mistral model type support
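OpenAI-style streaming responses arrive as chunks with the text buried in `choices[0].delta.content`; a helper like `llm_helper.extract_content` presumably flattens that away. A rough sketch over plain dicts (field names follow the OpenAI chunk format; this is not Toolio's implementation):

```python
def extract_content_sketch(chunks):
    """Yield just the text deltas from OpenAI-style streaming chunks."""
    for chunk in chunks:
        for choice in chunk.get('choices', []):
            content = choice.get('delta', {}).get('content')
            if content:
                yield content

# Simulated stream of chunk dicts, as an OpenAI-style server would send:
stream = [
    {'choices': [{'delta': {'role': 'assistant'}}]},  # role-only first chunk
    {'choices': [{'delta': {'content': 'Hello'}}]},
    {'choices': [{'delta': {'content': ', world'}}]},
    {'choices': [{'delta': {}, 'finish_reason': 'stop'}]},
]
print(''.join(extract_content_sketch(stream)))  # Hello, world
```

Without such a helper, every caller has to repeat the same defensive chunk-unwrapping boilerplate.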

Changed

- Turn off prompt caching until we figure out [issue #12](https://github.com/OoriData/Toolio/issues/12)
- Have responders return actual dicts, rather than label + JSON dump
- Factor out HTTP protocol schematics to a new module
- Handle more nuances of tool-calling tokenizer setup
- Harmonize tool definition patterns across invocation styles

Fixed

- More vector shape management

Removed

- Legacy OpenAI-style function-calling support
