Kani


1.0.0rc1

- Added support for Llama 3
- Added `WrapperEngine` to make writing wrapper extensions easier
- Refactored internal Command R prompt building for easier runtime extension
- Updated documentation

1.0.0rc0

New Features
Streaming
kani now supports streaming to print tokens from the engine as they are received! Streaming is designed to be a drop-in superset of the `chat_round` and `full_round` methods, allowing you to gradually refactor your code without ever leaving it in a broken state.

To request a stream from the engine, use `Kani.chat_round_stream()` or `Kani.full_round_stream()`. These methods will return a `StreamManager`, which you can use in different ways to consume the stream.

The simplest way to consume the stream is to iterate over it with `async for`, which yields each token as a `str`.
```py
# Chat round:
stream = ai.chat_round_stream("What is the airspeed velocity of an unladen swallow?")
async for token in stream:
    print(token, end="")
msg = await stream.message()

# Full round:
async for stream in ai.full_round_stream("What is the airspeed velocity of an unladen swallow?"):
    async for token in stream:
        print(token, end="")
    msg = await stream.message()
```

After a stream finishes, its contents will be available as a `ChatMessage`. You can retrieve the final message or `BaseCompletion` with:

```py
msg = await stream.message()
completion = await stream.completion()
```

The final ChatMessage may contain non-yielded tokens (e.g. a request for a function call). If the final message or completion is requested before the stream is iterated over, the stream manager will consume the entire stream.

> [!TIP]
> For compatibility and ease of refactoring, awaiting the stream itself will also return the message, i.e.:
>
> ```py
> msg = await ai.chat_round_stream("What is the airspeed velocity of an unladen swallow?")
> ```
>
> (Note the `await` on the stream itself, which is not present in the examples above.) This allows you to refactor your code by changing `chat_round` to `chat_round_stream` without any other changes:
>
> ```diff
> - msg = await ai.chat_round("What is the airspeed velocity of an unladen swallow?")
> + msg = await ai.chat_round_stream("What is the airspeed velocity of an unladen swallow?")
> ```

Issue: 30

New Models
kani now has bundled support for the following new models:

**Hosted**

- Claude 3 (including function calling)

**Open Source**

- [Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01) and [Command R+](https://huggingface.co/CohereForAI/c4ai-command-r-plus) (including function calling)
- [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) and [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- [Gemma](https://huggingface.co/collections/google/gemma-release-65d5efbccdbb8c4202ec078b) (all sizes)

In addition to these models with bundled support, kani supports every chat model available on Hugging Face through `transformers` or `llama.cpp` using the new Prompt Pipelines feature (see below)!

Issue: 34

llama.cpp

To use GGUF-quantized versions of models, kani now supports the `LlamaCppEngine`, which uses the `llama-cpp-python` library to run inference with `llama.cpp`. Any model with a GGUF version is compatible with this engine!
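
For example, a minimal sketch of loading a GGUF quantization from the Hugging Face Hub (the import path, repository, and filename pattern here are illustrative; check the kani docs for the exact constructor arguments):

```py
from kani import Kani, chat_in_terminal
from kani.engines.llamacpp import LlamaCppEngine

# Download a GGUF quantization from the HF Hub and run it locally via llama.cpp.
engine = LlamaCppEngine(repo_id="TheBloke/Llama-2-7B-Chat-GGUF", filename="*.Q4_K_M.gguf")
ai = Kani(engine)
chat_in_terminal(ai)
```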

Prompt Pipelines

A prompt pipeline creates a reproducible pipeline for translating a list of `ChatMessage` into an engine-specific format using fluent-style chaining.

To build a pipeline, create an instance of `PromptPipeline()` and add steps by calling the step methods documented in the kani docs. Most pipelines will end with a call to one of the terminals, which translates the intermediate form into the desired output format.

Pipelines come with a built-in `explain()` method to print a detailed explanation of the pipeline and multiple examples (selected based on the pipeline steps).

Here’s an example using `PromptPipeline` to build a LLaMA 2 chat-style prompt:

```py
from kani import PromptPipeline, ChatRole

LLAMA2_PIPELINE = (
    PromptPipeline()

    # System messages should be wrapped with this tag. We'll translate them to USER
    # messages since a system and user message go together in a single [INST] pair.
    .wrap(role=ChatRole.SYSTEM, prefix="<<SYS>>\n", suffix="\n<</SYS>>\n")
    .translate_role(role=ChatRole.SYSTEM, to=ChatRole.USER)

    # If we see two consecutive USER messages, merge them together into one with a
    # newline in between.
    .merge_consecutive(role=ChatRole.USER, sep="\n")
    # Similarly for ASSISTANT, but with a space (kani automatically strips whitespace
    # from the ends of generations).
    .merge_consecutive(role=ChatRole.ASSISTANT, sep=" ")

    # Finally, wrap USER and ASSISTANT messages in the instruction tokens. If our
    # message list ends with an ASSISTANT message, don't add the EOS token
    # (we want the model to continue the generation).
    .conversation_fmt(
        user_prefix="<s>[INST] ",
        user_suffix=" [/INST]",
        assistant_prefix=" ",
        assistant_suffix=" </s>",
        assistant_suffix_if_last="",
    )
)

# We can see what this pipeline does by calling explain()...
LLAMA2_PIPELINE.explain()

# And use it in our engine to build a string prompt for the LLM.
prompt = LLAMA2_PIPELINE(ai.get_prompt())
```


Integration with HuggingEngine and LlamaCppEngine

Previously, to use a model with a different prompt format than the ones bundled with the library, one had to create a subclass of the `HuggingEngine` to implement the prompting scheme. With the release of Prompt Pipelines, you can now supply a `PromptPipeline` in addition to the model ID to use the `HuggingEngine` directly!

For example, the `LlamaEngine` (huggingface) is now equivalent to the following:

```py
engine = HuggingEngine(
    "meta-llama/Llama-2-7b-chat-hf",
    prompt_pipeline=LLAMA2_PIPELINE,
)
```


Issue: 32

Improvements

- The `OpenAIEngine` now uses the official `openai-python` package. (31)
  - This means that `aiohttp` is no longer a direct dependency, and the `HTTPClient` has been deprecated. For API-based models, we recommend using the `httpx` library.
- Added arguments to the `chat_in_terminal` helper to control maximum width, echo user inputs, show function call arguments and results, and other interactive utilities (33)
- The `HuggingEngine` can now automatically determine a model's context length.
- Added a warning message if an `ai_function` is missing a docstring. (37)

Breaking Changes
- All `kani` models (e.g. `ChatMessage`) are no longer immutable. This means that you can edit the chat history directly, and token counting will still work correctly (see the sketch after this list).
- As the `ctransformers` library does not appear to be maintained, we have removed the `CTransformersEngine` and replaced it with the `LlamaCppEngine`.
- The arguments to `chat_in_terminal` (except the first) are now keyword-only.
- The arguments to `HuggingEngine` (except `model_id`, `max_context_size`, and `prompt_pipeline`) are now keyword-only.
- Generation arguments for OpenAI models now take dictionaries rather than `kani.engines.openai.models.*` models. (If you aren't sure if you're affected by this, you probably aren't.)
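
As an illustration of the mutability change noted above, here is a minimal sketch of editing the chat history directly (it assumes an existing `Kani` instance named `ai`; treat it as illustrative rather than canonical):

```py
from kani import ChatMessage

# kani models are now mutable, so the chat history can be modified in place,
# e.g. by appending a manually constructed message.
ai.chat_history.append(ChatMessage.user("Please answer in French from now on."))
```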

It should be a painless upgrade from kani v0.x to kani v1.0! We tried our best to ensure that we didn't break any existing code. If you encounter any issues, please reach out on [our Discord](https://discord.gg/Zvp89dsU5b).

0.8.0

Most likely the last release before v1.0! This update mostly contains improvements to `chat_in_terminal` to improve usability in interactive environments like Jupyter Notebook.

Possible Breaking Change

All arguments to `chat_in_terminal` except the Kani instance must now be keyword arguments; positional arguments are no longer accepted.

For example, `chat_in_terminal(ai, 1, "!stop")` must now be written `chat_in_terminal(ai, rounds=1, stopword="!stop")`.

Improvements

- You may now specify `None` as the user query in `chat_round` and `full_round`. This will request a new ASSISTANT message without adding a USER message to the chat history (e.g. to continue an unfinished generation).
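
For example, a minimal sketch (assuming an existing `Kani` instance named `ai` inside an async context):

```py
# Request another ASSISTANT message without appending a USER message,
# e.g. to let the model continue an unfinished generation.
msg = await ai.chat_round(None)
print(msg.text)
```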

Added the following keyword args to `chat_in_terminal` to improve usability in interactive environments like Jupyter Notebook (a usage sketch follows the list):

- `echo`: Whether to echo the user's input to stdout after they send a message (e.g. to save the input in interactive notebook outputs; default `False`).
- `ai_first`: Whether the model should generate a completion before prompting the user for a message (default `False`, i.e. the user sends the first message).
- `width`: The maximum width of the printed outputs (default unlimited).
- `show_function_args`: Whether to print the arguments the model calls each function with (default `False`).
- `show_function_returns`: Whether to print the result of each function call (default `False`).
- `verbose`: Equivalent to setting `echo`, `show_function_args`, and `show_function_returns` to `True`.
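
For example, a minimal usage sketch combining a few of these options (assuming an existing `Kani` instance named `ai`):

```py
from kani import chat_in_terminal

# Notebook-friendly settings: wrap output at 120 characters, echo user inputs,
# and show the arguments and results of each function call.
chat_in_terminal(ai, width=120, echo=True, show_function_args=True, show_function_returns=True)
```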

0.7.2

- OpenAI: Added support for the Jan 25, 2024 models without specifying `max_context_length` explicitly
- OpenAI: Fixed an issue where the token count for parallel function calls would only consider the first function call

0.7.1

- OpenAI: Fixes an issue where a tool call could have an unbound tool call ID when using `always_included_messages` near the maximum context length

0.7.0

New Features
- Added support for the Claude API through the `AnthropicEngine` (see the sketch after this list)
  - Currently, this is only for chat messages - we don't yet have access to the new function calling API. We plan to add Claude function calling to kani as soon as we get access!
- Renamed `ToolCallError` to a more general `PromptError`
  - Technically a minor breaking change, though a search of GitHub shows that no one has used `ToolCallError` yet
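
As a sketch of what constructing the new engine might look like (the import path, constructor parameters, and model name are assumptions, not verbatim from this release; Claude function calling was not yet available at this point):

```py
from kani import Kani, chat_in_terminal
from kani.engines.anthropic import AnthropicEngine

# Chat-only Claude support: build an engine and wrap it in a Kani instance.
engine = AnthropicEngine(api_key="your-anthropic-api-key", model="claude-2.1")
ai = Kani(engine, system_prompt="You are a helpful assistant.")
chat_in_terminal(ai)
```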

Fixes
- Fixed an issue where parallel tool calls could not be validated (thanks arturoleon!)
