Kani

Latest version: v1.2.4

Page 5 of 6

0.3.2

Improvements
- Made `chat_in_terminal` work in Google Colab, rather than having to use `await chat_in_terminal_async`
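Making a synchronous entry point work inside Colab presumably hinges on detecting whether an event loop is already running and choosing the sync or async path accordingly. A minimal sketch of that pattern in plain Python; the function names here are illustrative stand-ins, not kani's actual internals:

```python
import asyncio

async def chat_async() -> str:
    # stand-in for the real async chat loop
    return "chatting"

def chat() -> str:
    """Entry point that works in plain scripts and detects notebook-style
    environments (e.g. Colab) where an event loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # no loop running: safe to start one ourselves
        return asyncio.run(chat_async())
    # a loop is already running: the caller must await the coroutine instead
    raise RuntimeError("event loop already running; use `await chat_async()`")
```

Calling `chat()` from a regular script takes the `asyncio.run` branch; inside a running loop, the caller is told to `await` instead.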

0.3.1

- HuggingFace Engine: Fixed an issue where completion message lengths were overreported by an amount equal to the prompt length.
- Other documentation improvements
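The fix above is a matter of token accounting: when a tokenizer encodes the prompt and completion together, the completion's length is the total minus the prompt's share, not the total itself. A toy illustration using whitespace-separated words as stand-in tokens (the helper here is illustrative, not kani's code):

```python
def count_tokens(text: str) -> int:
    # stand-in tokenizer: one token per whitespace-separated word
    return len(text.split())

prompt = "What is the capital of France ?"
full_text = prompt + " The capital of France is Paris ."

# buggy accounting: reports the whole sequence as the completion
overreported = count_tokens(full_text)
# correct accounting: subtract the prompt's tokens
completion_len = count_tokens(full_text) - count_tokens(prompt)

print(overreported, completion_len)  # 14 7
```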

0.3.0

Improvements
- Added `Kani.add_to_history`, a method that is called whenever kani adds a new message to the chat context
- `httpclient.BaseClient.request` now returns a `Response` to aid low-level implementation
  - `.get()` and `.post()` are unchanged
- Added documentation about GPU support for local models
- Other documentation improvements
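`Kani.add_to_history` is an override point: a subclass can intercept every message as it enters the chat context, for example to log or persist it. A minimal sketch of that hook pattern in plain Python; the class names and message type are illustrative, not kani's real internals:

```python
import asyncio

class ChatContext:
    def __init__(self):
        self.chat_history: list[str] = []

    async def add_to_history(self, message: str) -> None:
        # override point: called whenever a message enters the context
        self.chat_history.append(message)

class LoggingContext(ChatContext):
    def __init__(self):
        super().__init__()
        self.log: list[str] = []

    async def add_to_history(self, message: str) -> None:
        await super().add_to_history(message)  # keep default behavior
        self.log.append(f"saw: {message}")     # extra side effect

ctx = LoggingContext()
asyncio.run(ctx.add_to_history("hello"))
print(ctx.chat_history, ctx.log)
```

The key design point is that the subclass calls `super().add_to_history(...)` so the default bookkeeping still happens alongside the added behavior.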

0.2.0

Improvements

- Engines: Added `Engine.function_token_reserve()` to dynamically reserve a number of tokens for a function list
- OpenAI: The OpenAIEngine now reads the `OPENAI_API_KEY` environment variable by default if no api key or client is specified
- Documentation improvements (polymorphism, mixins, extension packages)
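The environment-variable fallback follows the usual resolution order: an explicit `api_key` argument wins, otherwise the value of `OPENAI_API_KEY` is used. A stdlib-only sketch of that logic (the function name is illustrative, not the engine's actual code):

```python
import os
from typing import Optional

def resolve_api_key(api_key: Optional[str] = None) -> str:
    """Explicit argument takes precedence; otherwise fall back to the
    OPENAI_API_KEY environment variable; error out if neither is set."""
    key = api_key or os.getenv("OPENAI_API_KEY")
    if not key:
        raise ValueError("no API key: pass api_key= or set OPENAI_API_KEY")
    return key

os.environ["OPENAI_API_KEY"] = "sk-from-env"
print(resolve_api_key())               # falls back to the environment
print(resolve_api_key("sk-explicit"))  # explicit argument wins
```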

0.1.0

BREAKING CHANGES

*These should hopefully be the last set of breaking changes until v1.0. We're finalizing some of the attribute names for clarity and publication.*

- Renamed `Kani.always_include_messages` to `Kani.always_included_messages`

Features & Improvements

- `ai_function`s with synchronous signatures now run in a thread pool to avoid blocking the asyncio event loop
- OpenAI: Added the ability to specify the API base and additional headers (e.g. for proxy APIs)
- Various documentation improvements
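Running a synchronous function in a thread pool keeps a blocking call (file I/O, a slow computation) from stalling the event loop while other coroutines make progress. The standard-library mechanism for this is `asyncio.to_thread`:

```python
import asyncio
import time

def blocking_lookup(x: int) -> int:
    time.sleep(0.05)  # stands in for blocking work (I/O, CPU)
    return x * 2

async def main() -> list:
    # offload both blocking calls to the default thread pool;
    # the event loop stays free while they run concurrently
    return await asyncio.gather(
        asyncio.to_thread(blocking_lookup, 1),
        asyncio.to_thread(blocking_lookup, 2),
    )

results = asyncio.run(main())
print(results)  # [2, 4]
```

Without `to_thread`, each `time.sleep` would block the loop and the two calls would run strictly one after the other.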

0.0.3

BREAKING CHANGES
- Renamed `Kani.get_truncated_chat_history` to `Kani.get_prompt`

Additions & Improvements
- Added `CTransformersEngine` and `LlamaCTransformersEngine` (thanks Maknee!)
- Added a lower-level `Kani.get_model_completion` to make a prediction at the current chat state (without modifying the chat history)
- Added the `auto_truncate` param to `ai_function` to opt in to kani trimming long responses from a function (i.e., responses that do not fit in a model's context)
- Improved the internal handling of tokens when the chat history is directly modified
- `ChatMessage.[role]()` classmethods now pass kwargs to the constructor
- LLaMA: Improved the fidelity of non-strict-mode LLaMA prompting
- OpenAI: Added support for specifying an OpenAI organization and configuring retry
- Many documentation improvements
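`auto_truncate` takes a token budget and trims an overlong function response so it still fits in the model's context. A toy version of that trimming, using one word per token as a stand-in for a real tokenizer (the helper name is illustrative, not kani's implementation):

```python
def truncate_to_budget(text: str, max_tokens: int) -> str:
    """Trim `text` to at most `max_tokens` 'tokens' (words here, as a
    stand-in for a real tokenizer), appending a marker when cut."""
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    return " ".join(tokens[:max_tokens]) + " [...]"

long_response = "alpha bravo charlie delta echo foxtrot golf hotel"
print(truncate_to_budget(long_response, 3))   # "alpha bravo charlie [...]"
print(truncate_to_budget("short reply", 10))  # unchanged
```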

Fixes
- Fixed an issue where OpenAI message lengths could be underreported for messages with no content
- Other minor fixes and improvements

© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.