Improvement + small breaking change to `DistributedLossWrapper`
- Changed the `emb` argument of `DistributedLossWrapper.forward` to `embeddings` to be consistent with the rest of the library (see the sketch below).
- Added a warning and early return when `DistributedLossWrapper` is used in a non-distributed setting.
- Thank you elisim!
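Below is a minimal usage sketch of the renamed keyword. The wrapped loss, tensor shapes, and hyperparameters are illustrative, and it assumes a distributed process group has already been initialized; per the note above, running outside a distributed setting now triggers a warning and an early return instead.

```python
import torch
from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

# Wrap any loss; ContrastiveLoss here is just an example.
loss_fn = pml_dist.DistributedLossWrapper(loss=losses.ContrastiveLoss())

embeddings = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))

# The keyword was previously `emb`; it is now `embeddings`,
# matching the other losses in the library.
loss = loss_fn(embeddings=embeddings, labels=labels)
```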
2.5.0
Improvements
- [Allow scaling up the memory and batch size when using TripletMarginMiner](https://github.com/KevinMusgrave/pytorch-metric-learning/issues/688). Pull request: https://github.com/KevinMusgrave/pytorch-metric-learning/pull/689. Thanks mkmenta!
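For context, this is the usual miner-plus-loss pattern that benefits from the change; the margin, miner type, loss choice, and batch size below are illustrative, not values from the pull request.

```python
import torch
from pytorch_metric_learning import losses, miners

miner = miners.TripletMarginMiner(margin=0.2, type_of_triplets="semihard")
loss_fn = losses.TripletMarginLoss(margin=0.2)

# Larger batches are the scenario the change targets.
embeddings = torch.randn(256, 128)
labels = torch.randint(0, 10, (256,))

triplets = miner(embeddings, labels)          # mined (anchor, positive, negative) indices
loss = loss_fn(embeddings, labels, triplets)  # loss computed on the mined triplets
```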
2.4.1
This is identical to v2.4.0, but includes the LICENSE file which was missing from v2.4.0.
2.4.0
Features
- Added [DynamicSoftMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#dynamicsoftmarginloss). See PR 659. Thanks domenicoMuscill0!
- Added [RankedListLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss). See PR 659. Thanks domenicoMuscill0!
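A minimal sketch of how the two new losses slot into the library's usual call pattern. The constructor arguments shown are assumptions drawn from the linked documentation pages, not values from PR 659.

```python
import torch
from pytorch_metric_learning import losses

# Argument values here are illustrative assumptions; see the linked docs
# for the actual signatures and defaults.
dsm_loss = losses.DynamicSoftMarginLoss()
rl_loss = losses.RankedListLoss(margin=0.4, Tn=10)

embeddings = torch.randn(64, 128)
labels = torch.randint(0, 10, (64,))

loss_a = dsm_loss(embeddings, labels)
loss_b = rl_loss(embeddings, labels)
```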
Bug fixes
- Fixed issue where PNPLoss would return NaN when a batch sample had no corresponding positive. See PR 660. Thanks Puzer and interestingzhuo!
Tests
- Fixed the test for HistogramLoss to work with PyTorch 2.1. Thanks GaetanLepage!
2.3.0
Features
- Added [HistogramLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss). See pull request 651. Thanks domenicoMuscill0!
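A minimal instantiation sketch; the batch shapes are illustrative, and it assumes the default constructor arguments documented on the linked page are sufficient.

```python
import torch
from pytorch_metric_learning import losses

# Assumes the documented defaults are adequate; see the linked page for options.
loss_fn = losses.HistogramLoss()

embeddings = torch.randn(64, 128)
labels = torch.randint(0, 10, (64,))
loss = loss_fn(embeddings, labels)
```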