aac-metrics

Latest version: v0.5.5


0.5.5

Added
- New `CLAPSim` metric based on the embeddings produced by the CLAP model.
- New `MACE` metric based on the `CLAPSim` and `FER` metrics.
- DCASE2024 challenge metric set, class, and functions.
- `preprocess` option in `evaluate` now accepts a custom callable (see the sketch below).
- List of BibTeX sources in the `data/papers.bib` file.
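
A minimal sketch of the callable `preprocess` option. The `lowercase_preprocess` helper and the exact signature expected from the callable are illustrative assumptions, not documented API:

```python
from aac_metrics import evaluate

# Hypothetical preprocessing callable: per the entry above, `preprocess`
# accepts a callable in addition to the previous boolean flag. The exact
# signature expected by `evaluate` is an assumption here.
def lowercase_preprocess(sentences: list[str]) -> list[str]:
    return [sentence.lower().strip() for sentence in sentences]

candidates = ["A man is speaking."]
mult_references = [["A man speaks.", "Someone is talking."]]

# `evaluate` returns corpus-level and sentence-level score dictionaries.
corpus_scores, sentence_scores = evaluate(
    candidates,
    mult_references,
    preprocess=lowercase_preprocess,
)
print(corpus_scores)
```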

Changed
- Improve metric output typing for language servers by using typed dicts.
- `batch_size` can now be `None` to pass all inputs to the model in a single batch (see the sketch below).
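
A sketch of the `None` batch size, assuming the `BERTScoreMRefs` wrapper (added in 0.5.0 below) exposes a `batch_size` constructor argument; the import path and call convention are assumptions based on the package's usual layout:

```python
# Assumption: the class lives at this path and accepts `batch_size`,
# as the other model-based metrics in this package do.
from aac_metrics.classes.bert_score_mrefs import BERTScoreMRefs

candidates = ["a dog barks in the distance"]
mult_references = [["a dog is barking", "barking of a dog far away"]]

# Per the entry above, batch_size=None feeds all inputs to the model
# in a single forward pass instead of splitting them into mini-batches.
metric = BERTScoreMRefs(batch_size=None)
corpus_scores, sentence_scores = metric(candidates, mult_references)
```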

Fixed
- `bert_score` option in the download script.

0.5.4

Fixed
- Backward compatibility of `BERTScoreMrefs` with `torchmetrics` versions prior to 1.0.0.

Deleted
- `Version` class, replaced by `packaging.version.Version` (see the example below).
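
For code that relied on the removed class, `packaging.version.Version` covers the same comparisons. The torchmetrics check below is an illustrative guess at such a version test, not the package's internal code:

```python
import torchmetrics
from packaging.version import Version

# packaging's Version implements PEP 440 parsing and ordering, which
# replaces the removed in-house `Version` class.
if Version(torchmetrics.__version__) < Version("1.0.0"):
    print("running with a pre-1.0 torchmetrics")
```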

0.5.3

Fixed
- Fix `BERTScoreMrefs` computation when all candidates have the same number of references.
- Check for empty timeout list in `SPICE` metric.

0.5.2

Changed
- `aac-metrics` is now compatible with `transformers>=4.31`.
- Rename default device value `"auto"` to `"cuda_if_available"` (see the sketch below).
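
A sketch of the renamed device value, assuming the `FENSE` metric class accepts a `device` argument as the model-based metrics here do; the import path is also an assumption:

```python
# Assumption: FENSE lives at this path and accepts `device`.
from aac_metrics.classes.fense import FENSE

# "cuda_if_available" (formerly "auto") selects the GPU when one is
# present and falls back to the CPU otherwise.
metric = FENSE(device="cuda_if_available")
```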

0.5.1

Added
- Validation of sentence inputs for all metrics.

Fixed
- Fix `BERTScoreMRefs` metric with 1 candidate and 1 reference.

0.5.0

Added
- New `Vocab` metric to compute the vocabulary size and vocabulary ratio (see the sketch below).
- New `BERTScoreMRefs` metric wrapper to compute BERTScore with multiple references.
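
A sketch of the `Vocab` metric; the import path and the call convention are assumptions based on how the other metric classes in this package are used:

```python
# Assumption: Vocab lives at this path and is called like the other
# metric classes, with candidates and multiple references.
from aac_metrics.classes.vocab import Vocab

candidates = ["a cat meows", "a cat meows loudly"]
mult_references = [["a cat is meowing"], ["loud meowing of a cat"]]

vocab = Vocab()
corpus_scores, sentence_scores = vocab(candidates, mult_references)
# Expected to contain the vocabulary size and ratio described above.
print(corpus_scores)
```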

Changed
- Rename metric `FluErr` to `FER`.

Fixed
- `METEOR` localization issue. ([9](https://github.com/Labbeti/aac-metrics/issues/9))
- `SPIDErMax` output when `return_all_scores=False` (see the sketch below).
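
A sketch of the fixed `return_all_scores=False` path, assuming the functional `spider_max` scores several candidate captions per item; the exact signature is an assumption:

```python
# Assumption: spider_max is exposed in aac_metrics.functional and takes
# several candidates per item, keeping the best-scoring one.
from aac_metrics.functional import spider_max

mult_candidates = [["a dog barks", "a dog is barking"]]
mult_references = [["a dog barks loudly", "barking dog"]]

# With return_all_scores=False, only the reduced score is returned
# instead of the full score dictionaries.
score = spider_max(mult_candidates, mult_references, return_all_scores=False)
print(score)
```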
