Kani

Latest version: v1.0.2


0.5.1

- OpenAI: The OpenAIClient (internal class used by OpenAIEngine) now expects `OpenAIChatMessage`s as input rather than `kani.ChatMessage` in order to better type-validate API requests
- OpenAI: Updated token estimation to better reflect current token counts returned by the API

0.5.0

New Feature: Message Parts API
The Message Parts API is intended to provide a foundation for future multimodal LLMs and other engines that require engine-specific input without compromising kani's model-agnostic design. This is accomplished by allowing `ChatMessage.content` to be a list of `MessagePart` objects, in addition to a string.

*This change is fully backwards-compatible and will not affect existing code.*

When writing code with compatibility in mind, the `ChatMessage` class exposes `ChatMessage.text` (always a string or None) and `ChatMessage.parts` (always a list of message parts), which we recommend using instead of `ChatMessage.content`. These properties are generated dynamically from the underlying content, and it is safe to mix messages with different content types in a single kani instance.
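The relationship between `content`, `text`, and `parts` can be sketched with a toy model (this `MessageSketch` class and its string-joining behaviour are illustrative assumptions, not kani's actual implementation):

```python
from dataclasses import dataclass
from typing import List, Optional, Union

# Toy analogue of the accessors described above -- NOT the real kani class.
# content may be a plain string or a list of message parts.
@dataclass
class MessageSketch:
    content: Union[str, List[object]]

    @property
    def text(self) -> Optional[str]:
        # always a plain string (here: string fragments joined) or None
        if isinstance(self.content, str):
            return self.content
        fragments = [p for p in self.content if isinstance(p, str)]
        return "".join(fragments) if fragments else None

    @property
    def parts(self) -> List[object]:
        # always a list of parts, wrapping bare-string content in a list
        return [self.content] if isinstance(self.content, str) else list(self.content)
```

Code that reads `.text` and `.parts` in this style works the same whether a message was built from a string or from engine-specific parts.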

Generally, message part classes are defined by an engine, and consumed by the developer. Message parts can be used in any role’s message - for example, you might use a message part in an assistant message to separate out a chain of thought from a user reply, or in a user message to supply an image to a multimodal model.

For more information, see the [Message Parts documentation](https://kani.readthedocs.io/en/latest/advanced.html#message-parts).

*Up next: we're adding support for multimodal vision-language models like LLaVA and GPT-Vision through a kani extension!*

Improvements
- LLaMA 2: Improved the prompting in non-strict mode to group consecutive user/system messages into a single `[INST]` wrapper. See [the tests](https://github.com/zhudotexe/kani/blob/main/tests/test_llama2_prompt.py) for how kani translates consecutive message types into the LLaMA prompt.
- Other documentation and minor improvements
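The grouping behaviour can be sketched with `itertools.groupby` (the `[INST]` formatting below is deliberately simplified and illustrative; kani's real template is in the linked tests):

```python
from itertools import groupby

# Sketch: merge runs of consecutive user/system messages into a single
# [INST] block, leaving assistant messages outside the wrapper.
def to_llama_prompt(messages):
    # messages: list of (role, text) tuples
    out = []
    for is_inst, group in groupby(messages, key=lambda m: m[0] in ("user", "system")):
        texts = [text for _, text in group]
        if is_inst:
            out.append("[INST] " + "\n".join(texts) + " [/INST]")
        else:
            out.append(" ".join(texts))
    return "".join(out)
```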

0.4.0

BREAKING CHANGES
- `Kani.full_round` now emits *every* message generated during the round, not just assistant messages
  - This means that you will need to handle `FUNCTION` messages, and potentially `SYSTEM` messages from a function exception handler.
  - `Kani.full_round_str`'s default behaviour is unchanged.
- `Kani.full_round_str` now takes a `message_formatter` rather than a `function_call_formatter`
  - By default, this formatter returns only the contents of `ASSISTANT` messages.
- `Kani.do_function_call` now returns a `FunctionCallResult` rather than a `bool`
  - To migrate any overriding functions, make the following changes:
    - Rather than calling `Kani.add_to_history` in the override, save the `ChatMessage` to a variable
    - Change the return value from a boolean to `FunctionCallResult(is_model_turn=<old return value>, message=<message from above>)`
- `Kani.handle_function_call_exception` now returns an `ExceptionHandleResult` rather than a `bool`
  - To migrate any overriding functions, make the following changes:
    - Rather than calling `Kani.add_to_history` in the override, save the `ChatMessage` to a variable
    - Change the return value from a boolean to `ExceptionHandleResult(should_retry=<old return value>, message=<message from above>)`
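The `do_function_call` migration can be sketched as follows (the `FunctionCallResult` stand-in mirrors the name from this changelog, but the `run_tool` helper, `history` list, and free-function form are illustrative assumptions, not real kani code):

```python
from dataclasses import dataclass

# Stand-in mirroring the kani.FunctionCallResult described above.
@dataclass
class FunctionCallResult:
    is_model_turn: bool
    message: object

history = []

def run_tool(call):
    return f"result of {call}"

# before (<= 0.3.x): the override appended to history and returned a bool
def do_function_call_old(call) -> bool:
    result_msg = run_tool(call)
    history.append(result_msg)  # was: self.add_to_history(result_msg)
    return True

# after (0.4.0): save the message and return it inside a FunctionCallResult
def do_function_call_new(call) -> FunctionCallResult:
    result_msg = run_tool(call)
    return FunctionCallResult(is_model_turn=True, message=result_msg)
```

The `handle_function_call_exception` migration follows the same shape, with `ExceptionHandleResult(should_retry=..., message=...)` as the return value.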

Improvements
- Added `kani.utils.message_formatters`
- Added `kani.ExceptionHandleResult` and `kani.FunctionCallResult`
- Documentation improvements

Fixes
- Fixed an issue where `ChatMessage.copy_with` could cause unset values to appear in JSON serializations

0.3.4

Improvements
- Updated dependencies to allow more recent versions
- The documentation now shows fully-qualified class names in reference sections
- Added `.copy_with` method to ChatMessage and FunctionCall to make updating chat history easier
- Various documentation updates
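The `.copy_with` pattern can be sketched with a toy dataclass (real kani messages are not this class; this only illustrates the copy-rather-than-mutate idea):

```python
from dataclasses import dataclass, replace

# Toy analogue of the copy_with pattern: update chat history by producing
# an updated copy instead of mutating a message in place.
@dataclass(frozen=True)
class Msg:
    role: str
    content: str

    def copy_with(self, **updates) -> "Msg":
        return replace(self, **updates)

original = Msg(role="user", content="hello")
edited = original.copy_with(content="hello, world")
```

Because the copy leaves the original untouched, history entries can be swapped out without side effects on code still holding the old message.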

0.3.3

Improvements
- Added a warning in `Kani.chat_round` to use `Kani.full_round` when AI functions are defined
- Added examples in Google Colab
- Other documentation improvements

Fixes
- Fixed an issue where the ctransformers engine could overrun its context length (e.g. see https://github.com/zhudotexe/kani/actions/runs/6152842183/job/16695721588)

0.3.2

Improvements
- Made `chat_in_terminal` work in Google Colab, rather than having to use `await chat_in_terminal_async`
