pytorch-metric-learning

Latest version: v2.8.1


1.2.0

New Loss Function: SubCenterArcFace

- [Documentation](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#subcenterarcfaceloss)
- [Example notebook](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/SubCenterArcFaceMNIST.ipynb)
- [Paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560715.pdf)
- [Issue](https://github.com/KevinMusgrave/pytorch-metric-learning/issues/208)
- [Pull Request](https://github.com/KevinMusgrave/pytorch-metric-learning/pull/424)

Thanks to chingisooinar!

1.1.2

Bug fixes

- #427
- #428

1.1.1

Bug fixes
- #420
- #422

1.1.0

New features

[CentroidTripletLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#centroidtripletloss)
Implementation of [On the Unreasonable Effectiveness of Centroids in Image Retrieval](https://arxiv.org/pdf/2104.13643.pdf)

[VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss)
Implementation of [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning](https://arxiv.org/pdf/2105.04906.pdf)

[AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/)
- Added mean reciprocal rank as an accuracy metric. Available as "mean_reciprocal_rank".
- Added a return_per_class argument to AccuracyCalculator. It is like avg_of_avgs, but returns the accuracy for each class instead of averaging them for you.
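
For reference, mean reciprocal rank averages 1/rank of the first correct (same-label) neighbor over all queries. A minimal pure-Python sketch of the metric itself, independent of the library's actual implementation:

```python
def mean_reciprocal_rank(neighbor_labels, query_labels):
    """Average of 1/rank of the first same-label neighbor per query.

    neighbor_labels: for each query, the labels of its nearest neighbors
    in ranked order. Queries with no correct neighbor contribute 0.
    """
    total = 0.0
    for query_label, neighbors in zip(query_labels, neighbor_labels):
        for rank, label in enumerate(neighbors, start=1):
            if label == query_label:
                total += 1.0 / rank
                break
    return total / len(query_labels)

# First query: correct match at rank 2 -> 0.5; second query: rank 1 -> 1.0
print(mean_reciprocal_rank([[1, 0], [2, 3]], [0, 2]))  # 0.75
```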


Related issues
- #369
- #372
- #374
- #394

Contributors
Thanks to cwkeam and mlw214!

1.0.0

Reference embeddings for tuple losses
You can separate the sources of anchors and positives/negatives. In the example below, anchors are selected from embeddings, and positives/negatives are selected from ref_emb.

```python
from pytorch_metric_learning.losses import TripletMarginLoss

loss_fn = TripletMarginLoss()
loss = loss_fn(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
```
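
Conceptually, when ref_emb is provided, anchor indices point into embeddings while positive/negative indices point into ref_emb. A hypothetical sketch of that triplet indexing (not the library's actual miner):

```python
def triplets_with_refs(labels, ref_labels):
    """All (anchor, positive, negative) index triplets where the anchor
    indexes `labels` and the positive/negative index `ref_labels`."""
    triplets = []
    for a, a_lbl in enumerate(labels):
        for p, p_lbl in enumerate(ref_labels):
            if p_lbl != a_lbl:
                continue  # positives must share the anchor's label
            for n, n_lbl in enumerate(ref_labels):
                if n_lbl != a_lbl:  # negatives must have a different label
                    triplets.append((a, p, n))
    return triplets

print(triplets_with_refs([0, 1], [0, 1]))  # [(0, 0, 1), (1, 1, 0)]
```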


Efficient mode for DistributedLossWrapper
- efficient=True: each process uses its own embeddings for anchors, and the gathered embeddings for positives/negatives. Gradients will **not** be equal to those in non-distributed code, but the benefit is reduced memory and faster training.
- efficient=False: each process uses gathered embeddings for both anchors and positives/negatives. Gradients will be equal to those in non-distributed code, but at the cost of doing unnecessary operations (i.e. doing computations where both anchors and positives/negatives have no gradient).
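
To make the memory tradeoff concrete: with world size W and per-process batch size b, each process in efficient mode scores roughly b × (W·b) anchor/candidate pairs, versus (W·b)² when anchors are also gathered. A back-of-the-envelope sketch (the sizes here are illustrative, not measured):

```python
def pairs_per_process(world_size, batch_size, efficient):
    """Approximate number of anchor/candidate pairs scored per process."""
    gathered = world_size * batch_size  # embeddings after all-gather
    anchors = batch_size if efficient else gathered
    return anchors * gathered

W, b = 4, 256
print(pairs_per_process(W, b, efficient=True))   # 262144
print(pairs_per_process(W, b, efficient=False))  # 1048576
```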

The default is False. You can set it to True like this:

```python
from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

loss_func = losses.ContrastiveLoss()
loss_func = pml_dist.DistributedLossWrapper(loss_func, efficient=True)
```

Documentation: https://kevinmusgrave.github.io/pytorch-metric-learning/distributed/

Customizing k-nearest-neighbors for AccuracyCalculator
You can use a different type of faiss index:
```python
import faiss
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
from pytorch_metric_learning.utils.inference import FaissKNN

knn_func = FaissKNN(index_init_fn=faiss.IndexFlatIP, gpus=[0, 1, 2])
ac = AccuracyCalculator(knn_func=knn_func)
```


You can also use a custom distance function:
```python
from pytorch_metric_learning.distances import SNRDistance
from pytorch_metric_learning.utils.inference import CustomKNN

knn_func = CustomKNN(SNRDistance())
ac = AccuracyCalculator(knn_func=knn_func)
```
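
For context, the SNR distance between an anchor x and another embedding y is the variance of (x − y) divided by the variance of x, with variances taken over the embedding dimensions. A standalone sketch of that formula, separate from the library's SNRDistance class:

```python
def variance(v):
    mean = sum(v) / len(v)
    return sum((x - mean) ** 2 for x in v) / len(v)

def snr_distance(anchor, other):
    """Noise variance (of the difference) over signal variance (of the anchor)."""
    noise = [a - o for a, o in zip(anchor, other)]
    return variance(noise) / variance(anchor)

print(snr_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 for identical embeddings
```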


Relevant docs:
- [Accuracy Calculation](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/)
- [FaissKNN](https://kevinmusgrave.github.io/pytorch-metric-learning/inference_models/#faissknn)
- [CustomKNN](https://kevinmusgrave.github.io/pytorch-metric-learning/inference_models/#customknn)


Issues resolved
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/204
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/251
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/256
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/292
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/330
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/337
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/345
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/347
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/349
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/353
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/359
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/361
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/362
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/363
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/368
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/376
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/380

Contributors
Thanks to yutanakamura-tky and KinglittleQ for pull requests, and to mensaochun for providing helpful code in #380.

0.9.99

Bug fixes

- Accuracy Calculation bug in GlobalTwoStreamEmbeddingSpaceTester (301)
- Mixed precision bug in convert_to_weights (300)

Features

- [HierarchicalSampler](https://kevinmusgrave.github.io/pytorch-metric-learning/samplers/#hierarchicalsampler)
- Improved functionality for [InferenceModel](https://kevinmusgrave.github.io/pytorch-metric-learning/inference_models/#inferencemodel) (296 and 304)
- train_indexer now accepts a dataset
- also added functions save_index, load_index, and add_to_indexer
- Added power argument to LpRegularizer (299)
- Raise an exception if labels has more than 1 dimension (307)
- [Added a global flag](https://kevinmusgrave.github.io/pytorch-metric-learning/common_functions/#collect_stats) for turning on/off collect_stats (311)
- TripletMarginLoss smooth variant uses the input margin now (315)
- [Use package-specific logger, "PML"](https://kevinmusgrave.github.io/pytorch-metric-learning/common_functions/#logger), instead of root logger (318)
- Cleaner key verification in the trainers (102)

Thanks to elias-ramzi, gkouros, vltanh, and Hummer12007!
