openai-messages-token-helper

Latest version: v0.1.10

0.1.4

- Add support and tests for gpt-4o, which has a different tokenizer.
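
The tokenizer difference can be sketched as a model-to-encoding lookup. This is a hypothetical illustration (the function name and the mapping table here are assumptions; the library resolves encodings through tiktoken), but the encoding names are the real ones: the gpt-4o family uses `o200k_base`, while earlier GPT-3.5/GPT-4 models use `cl100k_base`.

```python
# Hypothetical sketch of the model -> encoding choice that motivated this
# change; the library itself resolves encodings through tiktoken.
def encoding_name_for_model(model):
    if model.startswith("gpt-4o"):
        return "o200k_base"   # gpt-4o family ships a newer tokenizer
    return "cl100k_base"      # gpt-4 / gpt-3.5-turbo era models

print(encoding_name_for_model("gpt-4o"))  # o200k_base
print(encoding_name_for_model("gpt-4"))   # cl100k_base
```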

0.1.3

- Use `openai` type annotations for more precise type hints, and add a typing test.

0.1.2

- Add `py.typed` file so that mypy can find the type hints in this package.

0.1.0

- Add `count_tokens_for_system_and_tools` to count tokens for the system message and tools together. The two must be counted together, since the token count for tools varies based on whether a system message is provided.
- Update `build_messages` to accept `tools` and `tool_choice` arguments.
- Breaking change: rename `new_user_message` to `new_user_content` in `build_messages` for clarity.
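
Why must the system message and tools be counted together? A toy counter makes the effect concrete. Everything here is illustrative (the whitespace "tokenizer", the 3-token overhead constant, and the function names are made up, not the library's logic): when a fixed framing cost is paid once for the whole block, counting the system message and the tools separately double-pays it.

```python
def toy_tokens(text):
    # Stand-in tokenizer: one token per whitespace-separated word.
    return len(text.split())

def toy_count_system_and_tools(system, tools):
    # Made-up model: a fixed 3-token framing overhead is paid once for the
    # whole block, whether it comes from the system message or the tools.
    parts = ([system] if system else []) + [t["function"]["description"] for t in tools]
    overhead = 3 if parts else 0
    return overhead + sum(toy_tokens(p) for p in parts)

tools = [{"function": {"name": "get_weather", "description": "Look up the local forecast"}}]
combined = toy_count_system_and_tools("You are a helpful bot", tools)
separate = (toy_count_system_and_tools("You are a helpful bot", [])
            + toy_count_system_and_tools(None, tools))
print(combined, separate)  # 13 16 -- counting separately double-pays the overhead
```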

0.0.6

- Add keyword argument `fallback_to_default` to the `build_messages` function, which falls back to the CL100k token encoder and the minimum GPT token limit when the model is not found.
- Fix the `past_messages` argument of `build_messages` so that the last past message is no longer skipped. (The new user message should *not* be included in `past_messages`.)
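
The encoder half of the fallback can be sketched as follows. The function name, the lookup table, and the error type are assumptions made for illustration, not the library's internals; only the `cl100k_base` default encoding matches the documented behavior.

```python
# Hypothetical sketch of `fallback_to_default` (names and table values are
# illustrative, not the library's internals).
KNOWN_ENCODINGS = {"gpt-4": "cl100k_base", "gpt-4o": "o200k_base"}

def resolve_encoding(model, fallback_to_default=False):
    if model in KNOWN_ENCODINGS:
        return KNOWN_ENCODINGS[model]
    if fallback_to_default:
        return "cl100k_base"  # the documented default encoder
    raise ValueError(f"Unknown model: {model}")

print(resolve_encoding("my-azure-deployment", fallback_to_default=True))  # cl100k_base
```

Custom Azure OpenAI deployment names are a common reason a model string is not found, which is why an opt-in fallback is useful.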

0.0.5

- Add keyword argument `default_to_cl100k` to the `count_tokens_for_message` function, which falls back to the CL100k token encoder when the model is not found.
- Add keyword argument `default_to_minimum` to the `get_token_limit` function, which falls back to the minimum token limit when the model is not found.
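
The `default_to_minimum` behavior can be sketched with a toy lookup. The function name, the per-model limits, and the 4000-token floor here are assumptions for illustration; the real table and minimum live inside the library.

```python
# Toy version of the limit lookup described above (hypothetical values).
MODEL_TOKEN_LIMITS = {"gpt-4": 8192, "gpt-4-32k": 32768}
MINIMUM_TOKEN_LIMIT = 4000  # assumed floor, for illustration only

def toy_get_token_limit(model, default_to_minimum=False):
    try:
        return MODEL_TOKEN_LIMITS[model]
    except KeyError:
        if default_to_minimum:
            return MINIMUM_TOKEN_LIMIT
        raise

print(toy_get_token_limit("gpt-4"))                            # 8192
print(toy_get_token_limit("custom", default_to_minimum=True))  # 4000
```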
