Torchmetrics

Latest version: v1.7.0

0.10.2

Changed

- Changed in-place operation to out-of-place operation in `pairwise_cosine_similarity` ([1288](https://github.com/Lightning-AI/torchmetrics/pull/1288))
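
A minimal sketch of the functional call this change affects; the shapes are illustrative, and with the out-of-place operation the inputs are left untouched:

```python
import torch
from torchmetrics.functional import pairwise_cosine_similarity

x = torch.randn(3, 5)   # 3 vectors of dimension 5
y = torch.randn(4, 5)   # 4 vectors of dimension 5

# Returns a new (3, 4) similarity matrix; x and y are not modified in place.
sim = pairwise_cosine_similarity(x, y)
print(sim.shape)  # torch.Size([3, 4])
```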

Fixed

- Fixed high memory usage for certain classification metrics when `average='micro'` ([1286](https://github.com/Lightning-AI/torchmetrics/pull/1286)) (see the usage sketch after this list)
- Fixed precision problems when `structural_similarity_index_measure` was used with autocast ([1291](https://github.com/Lightning-AI/torchmetrics/pull/1291))
- Fixed slow performance for confusion matrix based metrics ([1302](https://github.com/Lightning-AI/torchmetrics/pull/1302))
- Fixed restrictive dtype checking in `spearman_corrcoef` when used with autocast ([1303](https://github.com/Lightning-AI/torchmetrics/pull/1303))
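
A minimal usage sketch of a micro-averaged classification metric of the kind the `average='micro'` fix above applies to; the metric class and shapes are chosen purely for illustration:

```python
import torch
from torchmetrics.classification import MulticlassPrecision

preds = torch.randint(0, 10, (1000,))   # predicted class indices
target = torch.randint(0, 10, (1000,))  # ground-truth class indices

# Micro averaging pools the statistics over all classes before dividing.
metric = MulticlassPrecision(num_classes=10, average="micro")
print(metric(preds, target))
```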

0.10.1

Fixed

- Fixed the broken `clone` method for classification metrics ([1250](https://github.com/Lightning-AI/torchmetrics/pull/1250)) (see the sketch after this list)
- Fixed unintentional downloading of `nltk.punkt` when `lsum` is not in `rouge_keys` ([1258](https://github.com/Lightning-AI/torchmetrics/pull/1258))
- Fixed type casting in `MAP` metric between `bool` and `float32` ([1150](https://github.com/Lightning-AI/torchmetrics/pull/1150))
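
The `clone` fix above restores independent copies of a metric and its state; a minimal sketch, with the metric class chosen only as an example:

```python
from torchmetrics.classification import MulticlassF1Score

train_f1 = MulticlassF1Score(num_classes=5)
# clone() returns a copy with its own internal state, so the two instances
# can accumulate statistics over different dataloaders independently.
val_f1 = train_f1.clone()
```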

0.10.0

Added

- Added a new NLP metric `InfoLM` ([915](https://github.com/Lightning-AI/torchmetrics/pull/915))
- Added `Perplexity` metric ([922](https://github.com/Lightning-AI/torchmetrics/pull/922))
- Added `ConcordanceCorrCoef` metric to regression package ([1201](https://github.com/Lightning-AI/torchmetrics/pull/1201))
- Added argument `normalize` to `LPIPS` metric ([1216](https://github.com/Lightning-AI/torchmetrics/pull/1216))
- Added support for multiprocessing of batches in `PESQ` metric ([1227](https://github.com/Lightning-AI/torchmetrics/pull/1227))
- Added support for multioutput in `PearsonCorrCoef` and `SpearmanCorrCoef` ([1200](https://github.com/Lightning-AI/torchmetrics/pull/1200))
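
A minimal sketch of the new multioutput support, shown for `PearsonCorrCoef` with illustrative shapes:

```python
import torch
from torchmetrics import PearsonCorrCoef

# One correlation coefficient is tracked per output column.
pearson = PearsonCorrCoef(num_outputs=2)
preds = torch.randn(100, 2)
target = torch.randn(100, 2)
print(pearson(preds, target))  # tensor with 2 coefficients
```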

Changed

- Classification refactor (
[1054](https://github.com/Lightning-AI/torchmetrics/pull/1054),
[1143](https://github.com/Lightning-AI/torchmetrics/pull/1143),
[1145](https://github.com/Lightning-AI/torchmetrics/pull/1145),
[1151](https://github.com/Lightning-AI/torchmetrics/pull/1151),
[1159](https://github.com/Lightning-AI/torchmetrics/pull/1159),
[1163](https://github.com/Lightning-AI/torchmetrics/pull/1163),
[1167](https://github.com/Lightning-AI/torchmetrics/pull/1167),
[1175](https://github.com/Lightning-AI/torchmetrics/pull/1175),
[1189](https://github.com/Lightning-AI/torchmetrics/pull/1189),
[1197](https://github.com/Lightning-AI/torchmetrics/pull/1197),
[1215](https://github.com/Lightning-AI/torchmetrics/pull/1215),
[1195](https://github.com/Lightning-AI/torchmetrics/pull/1195)
)
- Changed update in `FID` metric to be done in online fashion to save memory ([1199](https://github.com/Lightning-AI/torchmetrics/pull/1199))
- Improved performance of retrieval metrics ([1242](https://github.com/Lightning-AI/torchmetrics/pull/1242))
- Changed `SSIM` and `MSSSIM` update to be online to reduce memory usage ([1231](https://github.com/Lightning-AI/torchmetrics/pull/1231))
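
The online `SSIM` update only changes the internal bookkeeping; the sketch below assumes the usual update/compute loop is unchanged and uses random images for illustration:

```python
import torch
from torchmetrics import StructuralSimilarityIndexMeasure

ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
for _ in range(4):                      # e.g. iterating over dataloader batches
    preds = torch.rand(2, 3, 64, 64)
    target = torch.rand(2, 3, 64, 64)
    ssim.update(preds, target)          # accumulates running statistics only
print(ssim.compute())
```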

Deprecated

- Deprecated `BinnedAveragePrecision`, `BinnedPrecisionRecallCurve`, `BinnedRecallAtFixedPrecision` ([1163](https://github.com/Lightning-AI/torchmetrics/pull/1163))
* `BinnedAveragePrecision` -> use `AveragePrecision` with `thresholds` arg
* `BinnedPrecisionRecallCurve` -> use `PrecisionRecallCurve` with `thresholds` arg
* `BinnedRecallAtFixedPrecision` -> use `RecallAtFixedPrecision` with `thresholds` arg
- Renamed and refactored `LabelRankingAveragePrecision`, `LabelRankingLoss` and `CoverageError` ([1167](https://github.com/Lightning-AI/torchmetrics/pull/1167))
* `LabelRankingAveragePrecision` -> `MultilabelRankingAveragePrecision`
* `LabelRankingLoss` -> `MultilabelRankingLoss`
* `CoverageError` -> `MultilabelCoverageError`
- Deprecated `KLDivergence` and `AUC` from classification package ([1189](https://github.com/Lightning-AI/torchmetrics/pull/1189))
* `KLDivergence` moved to `regression` package
* Instead of `AUC` use `torchmetrics.utilities.compute.auc`
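
A minimal sketch of the suggested replacement for the deprecated `AUC` metric:

```python
import torch
from torchmetrics.utilities.compute import auc

x = torch.tensor([0.0, 0.25, 0.5, 1.0])
y = torch.tensor([0.0, 0.5, 0.75, 1.0])

# Trapezoidal area under the curve defined by the (x, y) pairs.
print(auc(x, y))
```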

Fixed

- Fixed a bug in `ssim` when `return_full_image=True` where the score was still reduced ([1204](https://github.com/Lightning-AI/torchmetrics/pull/1204))
- Fixed MPS support for:
* MAE metric ([1210](https://github.com/Lightning-AI/torchmetrics/pull/1210))
* Jaccard index ([1205](https://github.com/Lightning-AI/torchmetrics/pull/1205))
- Fixed a bug in `ClasswiseWrapper` where `compute` returned the wrong result ([1225](https://github.com/Lightning-AI/torchmetrics/pull/1225)) (see the sketch after this list)
- Fixed synchronization of empty list states ([1219](https://github.com/Lightning-AI/torchmetrics/pull/1219))
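
A minimal sketch of `ClasswiseWrapper`, whose `compute` result the fix above corrects; the wrapped metric and shapes are illustrative:

```python
import torch
from torchmetrics import ClasswiseWrapper
from torchmetrics.classification import MulticlassAccuracy

# Wrapping a non-averaged metric yields one entry per class from compute().
metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))
preds = torch.randn(10, 3).softmax(dim=-1)
target = torch.randint(0, 3, (10,))
metric.update(preds, target)
print(metric.compute())  # e.g. {'multiclassaccuracy_0': tensor(...), ...}
```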

---

0.9.3

Added

- Added global option `sync_on_compute` to disable automatic synchronization when `compute` is called ([1107](https://github.com/Lightning-AI/torchmetrics/pull/1107))
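
A minimal sketch of the new flag; it is handled by the common `Metric` base class, so the metric shown is only an example:

```python
from torchmetrics import MeanSquaredError

# With sync_on_compute=False, compute() reports the per-process value only;
# state can still be synchronized explicitly via metric.sync() when needed.
mse = MeanSquaredError(sync_on_compute=False)
```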

Fixed

- Fixed missing reset in `ClasswiseWrapper` ([1129](https://github.com/Lightning-AI/torchmetrics/pull/1129))
- Fixed `JaccardIndex` multi-label compute ([1125](https://github.com/Lightning-AI/torchmetrics/pull/1125))
- Fixed device propagation in `SSIM` when `gaussian_kernel` is `False`, and added a test ([1149](https://github.com/Lightning-AI/torchmetrics/pull/1149))
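
A minimal functional sketch of the `gaussian_kernel=False` path touched by the last fix; the shapes are illustrative:

```python
import torch
from torchmetrics.functional import structural_similarity_index_measure

preds = torch.rand(2, 3, 64, 64)
target = torch.rand(2, 3, 64, 64)

# With gaussian_kernel=False a uniform kernel is used; the fix ensures that
# kernel is created on the same device as the input tensors.
score = structural_similarity_index_measure(
    preds, target, gaussian_kernel=False, data_range=1.0
)
```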

0.9.2

Fixed

- Fixed mAP calculation for areas with 0 predictions ([1080](https://github.com/Lightning-AI/torchmetrics/pull/1080))
- Fixed a bug where the `AveragePrecision` and `AUROC` states were not merged when using `MetricCollection` ([1086](https://github.com/Lightning-AI/torchmetrics/pull/1086))
- Skip box conversion if no boxes are present in `MeanAveragePrecision` ([1097](https://github.com/Lightning-AI/torchmetrics/pull/1097)) (see the sketch after this list)
- Fixed inconsistency in docs and code when setting `average="none"` in `AveragePrecision` metric ([1116](https://github.com/Lightning-AI/torchmetrics/pull/1116))
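
A minimal sketch of the empty-prediction case covered by the mAP fixes above; `MeanAveragePrecision` requires the optional `pycocotools` dependency, and the boxes are illustrative:

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# One image with a ground-truth box but no predicted boxes at all.
preds = [dict(
    boxes=torch.zeros(0, 4),
    scores=torch.zeros(0),
    labels=torch.zeros(0, dtype=torch.long),
)]
target = [dict(
    boxes=torch.tensor([[10.0, 10.0, 20.0, 20.0]]),
    labels=torch.tensor([0]),
)]

metric = MeanAveragePrecision()
metric.update(preds, target)
print(metric.compute()["map"])
```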

0.9.1

Added

- Added specific `RuntimeError` when metric object is on the wrong device ([1056](https://github.com/Lightning-AI/torchmetrics/pull/1056))
- Added an option to specify custom n-gram weights for `BLEUScore` and `SacreBLEUScore` instead of only uniform weights ([1075](https://github.com/Lightning-AI/torchmetrics/pull/1075))
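
A minimal sketch of the new `weights` argument; the weights are arbitrary and only need to match `n_gram` in length:

```python
from torchmetrics import BLEUScore

preds = ["the cat sat on the mat"]
target = [["a cat sat on the mat", "the cat is on the mat"]]

# Custom weights over 1-grams and 2-grams instead of the uniform default.
bleu = BLEUScore(n_gram=2, weights=[0.7, 0.3])
print(bleu(preds, target))
```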

Fixed

- Fixed aggregation metrics when the input contains only zeros ([1070](https://github.com/Lightning-AI/torchmetrics/pull/1070))
- Fixed `TypeError` when providing superclass arguments as `kwargs` ([1069](https://github.com/Lightning-AI/torchmetrics/pull/1069))
- Fixed bug related to state reference in metric collection when using compute groups ([1076](https://github.com/Lightning-AI/torchmetrics/pull/1076))
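
A minimal sketch of compute groups in `MetricCollection`, the feature the last fix touches; the metrics shown use the current-style class names and are purely illustrative:

```python
import torch
from torchmetrics import MetricCollection
from torchmetrics.classification import MulticlassPrecision, MulticlassRecall

# Precision and recall share identical internal state, so with compute groups
# enabled the underlying statistics are only updated once per batch.
metrics = MetricCollection(
    [MulticlassPrecision(num_classes=3), MulticlassRecall(num_classes=3)],
    compute_groups=True,
)
preds = torch.randn(8, 3).softmax(dim=-1)
target = torch.randint(0, 3, (8,))
metrics.update(preds, target)
print(metrics.compute())
```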
