Chat-in-a-nutshell

Latest version: v1.5.1

1.5.1

- Added `--max-tokens` parameter to limit token generation (default: 4096); see the CLI sketch after this list.
- Enhanced error handling with formatted error messages
- Improved provider model listings with tabular display
- Renamed main script to `chat_provider.py` for clarity
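
A minimal sketch of how the CLI surface described above might be wired with argparse. The flag names and the 4096-token default come from this changelog; the help texts, the default provider, and the overall structure are assumptions, not the project's actual code.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative argument parser; only the flag names are taken from the changelog."""
    parser = argparse.ArgumentParser(prog="chat_provider.py")
    parser.add_argument("-p", "--provider", default="openai",
                        help="Backend provider, e.g. openai or anthropic (default is an assumption)")
    parser.add_argument("--max-tokens", type=int, default=4096,
                        help="Upper bound on the number of tokens to generate")
    parser.add_argument("--available-models", action="store_true",
                        help="List the models offered by the selected provider")
    return parser

if __name__ == "__main__":
    print(build_parser().parse_args())
```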

1.5.0

- **Anthropic Provider Integration** (see the sketch after this list):
  - Added support for Anthropic as a provider.
  - Select the provider with the `-p` or `--provider` flag.
  - Requires the `ANTHROPIC_API_KEY` environment variable for authentication.
  - The required `max_tokens` parameter is set to 8,000 tokens (the hard upper limit on generated tokens).
- **Dependencies**: Added `anthropic~=0.49.0`.
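
A minimal sketch of what the Anthropic path might look like, assuming the pinned `anthropic` SDK. The `ANTHROPIC_API_KEY` requirement and the 8,000-token `max_tokens` value come from this entry; the model name and the surrounding code are illustrative assumptions, not the project's actual implementation.

```python
import anthropic

# The client picks up ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# max_tokens is a required parameter of the Messages API; this entry pins it at 8,000.
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # model name is an assumption
    max_tokens=8000,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.content[0].text)
```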

1.4.0

- **OpenAI requests refactoring** (see the sketch after this list):
  - Added back system messages for `o1` and `o3` models, now sent as developer messages.
  - Introduced an option to specify reasoning effort for reasoning models.
  - Improved readability of the `--available-models` and `--available-models-gpt` outputs.
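
A rough sketch of an o-series request with a developer message and an explicit reasoning effort, using the `openai` Python SDK. The developer-message and reasoning-effort behaviour comes from this entry; the model name, prompts, and code structure are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",            # model name is an assumption
    reasoning_effort="medium",  # reasoning models accept "low", "medium" or "high"
    messages=[
        # For o-series models the system prompt is sent as a developer message.
        {"role": "developer", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain tool calling in one paragraph."},
    ],
)
print(response.choices[0].message.content)
```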

- **Tool calling** (see the sketch after this list):
  - Toggle the feature with `--use-tools` (enable) or `--no-tools` (disable).
  - To use your own tools, point the `TOOLS_URL` environment variable at your tool definition file; see `example_tools.json` in the repo for the expected structure.
  - Review the available tools with `--available-tools`.
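
A rough sketch of how tool definitions referenced by `TOOLS_URL` might be loaded and handed to the chat API. It assumes the file already contains tool schemas in OpenAI's function-calling format; the fetching logic, the local fallback, and the model name are illustrative assumptions rather than the project's actual behaviour.

```python
import json
import os
import urllib.request

from openai import OpenAI

def load_tools() -> list:
    # TOOLS_URL comes from this entry; treating it as a fetchable URL (with a local
    # fallback mirroring example_tools.json) is an assumption for illustration.
    url = os.environ.get("TOOLS_URL")
    if url:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    with open("example_tools.json") as fh:
        return json.load(fh)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=load_tools(),  # assumed to be OpenAI function-calling tool schemas
)
print(response.choices[0].message)
```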

1.3.0

- **o1 model support** (see the sketch after this list):
  - Removed the system message from requests to o1 models, since system messages are not currently supported for them.
  - Set temperature to 1 for o1 models; other values are not currently supported.
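
A minimal sketch of the request shaping described above: drop the system message for o1 models and keep temperature at 1. The helper name, model, and prompts are hypothetical; only the constraints come from this entry.

```python
from openai import OpenAI

def build_request(model: str, system_prompt: str, user_prompt: str) -> dict:
    # Hypothetical helper: o1 models reject system messages and non-default
    # temperatures, so the system prompt is dropped and temperature stays at 1.
    messages = [{"role": "user", "content": user_prompt}]
    if not model.startswith("o1"):
        messages.insert(0, {"role": "system", "content": system_prompt})
    return {"model": model, "messages": messages, "temperature": 1}

client = OpenAI()
response = client.chat.completions.create(
    **build_request("o1-mini", "You are a helpful assistant.", "Hello!"))
print(response.choices[0].message.content)
```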

- **CI/CD pipeline**:
  - Added a GitHub Actions pipeline for deploying to PyPI.

1.2.0

1.1.0
