CAPPr

Latest version: v0.9.6

Page 5 of 7

0.5.1

Breaking changes

None

New features

* See [this section](https://cappr.readthedocs.io/en/latest/installation.html#without-dependencies) of the docs

Bug fixes

None

0.5.0

Breaking changes

* `completions` is not allowed to be an empty sequence

New features

* Use GGUF models via the `cappr.llama_cpp.classify` module. Install with:

  ```bash
  pip install "cappr[llama-cpp]"
  ```

  See [this section](https://cappr.readthedocs.io/en/latest/select_a_language_model.html#llama-cpp) of the docs and [this demo](https://github.com/kddubey/cappr/blob/main/demos/llama_cpp.ipynb) for an example.

Bug fixes

None

0.4.7

Breaking changes

* `end_of_prompt` is restricted to be a single whitespace, `" "`, or the empty string, `""`. After much thought and experimentation, I realized that anything else is unnecessarily complicated

* The OpenAI API model `gpt-3.5-turbo-instruct` has been deprecated because their API will soon no longer allow setting `echo=True, logprobs=1`

* The keyword argument for the (still highly experimental) discount feature, `log_marginal_probs_completions`, has been renamed to `log_marg_probs_completions`

New features

* You can input your OpenAI API key dynamically via the `api_key` keyword argument

* The [User Guide](https://cappr.readthedocs.io/en/latest/user_guide.html) is much better
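The pattern this enables can be sketched generically (a hypothetical helper for illustration, not cappr's internal code): prefer an explicitly passed key, and fall back to the environment variable only when none is given.

```python
import os


def resolve_api_key(api_key=None):
    """Return an OpenAI API key, preferring an explicitly passed one.

    Hypothetical helper illustrating dynamic key resolution; cappr's
    OpenAI-backed functions accept the key via an `api_key` keyword
    argument per the release note above.
    """
    key = api_key or os.environ.get("OPENAI_API_KEY")
    if key is None:
        raise ValueError("no OpenAI API key was provided")
    return key
```

Passing the key per call is handy when one process serves multiple users or projects, each with its own key.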

Bug fixes

None

0.4.6

Breaking changes

None

New features

* Input checks on `prompts` and `completions` are more accurate. You can now input, e.g., a polars or pandas Series of strings

Bug fixes

None

0.4.5

Breaking changes

* There are stronger input checks to avoid silent failures. `prompts` cannot be empty. `completions` cannot be empty or a pure string (it has to be a sequence of strings)
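The reason a bare string must be rejected is that a Python string is itself a sequence of strings, so `completions="yes"` would silently be treated as the three classes `'y'`, `'e'`, `'s'`. A sketch of this kind of check (a hypothetical helper, not cappr's actual code):

```python
def check_completions(completions):
    """Raise if `completions` is empty or a single string.

    Hypothetical validator illustrating the checks described above.
    """
    if isinstance(completions, str):
        # A string iterates as characters, which would silently produce
        # one "class" per character
        raise TypeError("completions must be a sequence of strings, not a string")
    completions = list(completions)
    if not completions:
        raise ValueError("completions must not be empty")
    if not all(isinstance(completion, str) for completion in completions):
        raise TypeError("every completion must be a string")
    return completions
```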

New features

* Pass `normalize=False` when you want raw, unnormalized probabilities for, e.g., multi-label classification applications
* You can input a single prompt string or `Example` object. You no longer have to wrap it in a list and then unwrap it
* You can disable progress bars using `show_progress_bar=False`
* `cappr.huggingface` type-hints the model as a `ModelForCausalLM` for greater clarity
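Conceptually, normalization rescales the per-completion probabilities into a distribution that sums to 1, which treats the completions as mutually exclusive. That's wrong for multi-label settings, where several completions can be simultaneously true. A toy numpy sketch of the difference (illustrative values, not cappr's implementation):

```python
import numpy as np

# Hypothetical per-completion log-probabilities from a language model
log_probs = np.log(np.array([0.2, 0.1, 0.1]))

# normalize=False: raw probabilities; they need not sum to 1, so each
# completion can be judged independently (multi-label)
raw = np.exp(log_probs)

# normalize=True (the default): rescaled to sum to 1, treating the
# completions as mutually exclusive (single-label)
normalized = raw / raw.sum()
```

Here `raw` stays roughly `[0.2, 0.1, 0.1]`, while `normalized` becomes `[0.5, 0.25, 0.25]`: a threshold applied to `raw` answers "is this label present?", whereas `normalized` only answers "which single label is most likely?".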

Bug fixes

* `cappr.huggingface` no longer modifies the model or tokenizer (sorry about that)
* The jagged/inhomogeneous numpy array warning from earlier numpy versions (when using `_examples` functions) is correctly handled

0.4.0

Breaking changes

None

New features

* `cappr.huggingface` is faster when all of the completions are single tokens. Specifically, inference is run just once on the prompts, and data isn't repeated per completion
* `cappr.huggingface` implements `token_logprobs` like `cappr.openai` did
* `cappr.huggingface` now supports the (highly experimental) discount feature (mentioned at the bottom of [this answer](https://stats.stackexchange.com/a/606323)) like `cappr.openai` did
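The idea behind the single-token speedup can be sketched outside of any framework: one forward pass on a prompt yields a next-token distribution, and when every completion is a single token, each completion's probability is just a lookup in that distribution, so the prompt never needs to be re-processed per completion. A toy numpy illustration (assumed vocabulary and logits, not cappr's code):

```python
import numpy as np


def softmax(logits):
    exp = np.exp(logits - logits.max())  # subtract max for stability
    return exp / exp.sum()


# Toy vocabulary mapping tokens to ids
vocab = {"yes": 0, "no": 1, "maybe": 2}
completions = ["yes", "no", "maybe"]  # all single tokens

# Pretend these are the model's next-token logits after ONE pass on the prompt
next_token_logits = np.array([2.0, 1.0, 0.5])
next_token_probs = softmax(next_token_logits)

# Each completion's probability is a single lookup — no extra inference
completion_probs = np.array([next_token_probs[vocab[c]] for c in completions])
```

With multi-token completions this shortcut doesn't apply, since each additional token requires conditioning on the previous ones.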

Bug fixes

None


© 2025 Safety CLI Cybersecurity Inc. All Rights Reserved.