pytorch-metric-learning

Latest version: v2.7.0

1.6.1

Bug Fixes

Fixed a bug in `mean_average_precision` in `AccuracyCalculator`. Previously, the divisor for each sample was the number of correctly retrieved samples. In the new version, the divisor for each sample is `min(k, num_relevant)`.

For example, if class "A" has 11 samples, then `num_relevant` is 11 for every sample with the label "A".
- If `k = 5`, meaning that 5 nearest neighbors are retrieved for each sample, then the divisor will be 5.
- If `k = 100`, meaning that 100 nearest neighbors are retrieved for each sample, then the divisor will be 11.

The bug in previous versions did _not_ affect `mean_average_precision_at_r`.
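
As an illustrative sketch (not the library's internal code), the per-sample average precision with the new divisor could be computed roughly like this, where `is_relevant` marks which of the `k` retrieved neighbors share the query's label:

```python
import torch

def average_precision_at_k(is_relevant, num_relevant):
    # is_relevant: bool tensor of shape (k,) over the k retrieved neighbors,
    # ordered from nearest to farthest.
    k = is_relevant.shape[0]
    positions = torch.arange(1, k + 1, dtype=torch.float)
    precision_at_i = torch.cumsum(is_relevant.float(), dim=0) / positions
    summed = (precision_at_i * is_relevant.float()).sum()
    # New divisor: min(k, num_relevant). Previously: is_relevant.sum().
    return summed / min(k, num_relevant)

# With k = 5 and num_relevant = 11 (the class "A" example above), the divisor is 5.
ap = average_precision_at_k(torch.tensor([True, False, True, True, False]), num_relevant=11)
```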

Other minor changes

Added additional shape checks to `AccuracyCalculator.get_accuracy`.

1.6.0

Features

`DistributedLossWrapper` and `DistributedMinerWrapper` now support `ref_emb` and `ref_labels`:

```python
from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

loss_func = losses.ContrastiveLoss()
loss_func = pml_dist.DistributedLossWrapper(loss_func)

# embeddings/labels come from the current process;
# ref_emb/ref_labels provide a separate set of reference embeddings and labels,
# now supported by the distributed wrappers.
loss = loss_func(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
```

Thanks NoTody for PR 503
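
The same keyword arguments also apply to `DistributedMinerWrapper`. A minimal sketch, assuming a `MultiSimilarityMiner` and the same per-process `embeddings`, `labels`, `ref_emb`, and `ref_labels` as above:

```python
from pytorch_metric_learning import miners
from pytorch_metric_learning.utils import distributed as pml_dist

miner = miners.MultiSimilarityMiner()
miner = pml_dist.DistributedMinerWrapper(miner)

# The resulting indices_tuple can then be passed to a (wrapped) loss function.
indices_tuple = miner(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
```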

1.5.2

Bug fixes

In previous versions, when `embeddings_come_from_same_source == True`, the first nearest neighbor of each query embedding was discarded, on the assumption that it must be the query embedding itself.

While this is usually true, it isn't always: two different embeddings can be exactly equal to each other, and discarding the first nearest neighbor in that case can be incorrect.

This release fixes this bug by excluding each embedding's index from the k-nn results.
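
A hedged sketch of the idea (not the library's internal code): given `k + 1` nearest-neighbor indices per query, drop the entry that equals the query's own index in the reference set, instead of blindly dropping the first entry:

```python
import torch

def exclude_self(knn_indices, query_indices):
    # knn_indices: (num_queries, k + 1) neighbor indices into the reference set.
    # query_indices: (num_queries,) each query's own index in the reference set.
    k = knn_indices.shape[1] - 1
    mask = knn_indices != query_indices.unsqueeze(1)  # True where the neighbor is not the query itself
    # Keep the first k non-self neighbors in each row.
    return torch.stack([row[m][:k] for row, m in zip(knn_indices, mask)])
```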

Sort-of breaking changes

In order for the above bug fix to work, `AccuracyCalculator` now requires that `reference[:len(query)] == query` when `embeddings_come_from_same_source == True`. For example, the following will raise an error:

```python
import torch
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

AC = AccuracyCalculator()
# labels1 / labels2 are the query / reference labels (omitted here)
query = torch.randn(100, 10)
ref = torch.randn(100, 10)
ref = torch.cat([ref, query], dim=0)  # query is at the *end* of ref
AC.get_accuracy(query, ref, labels1, labels2, True)  # raises ValueError
```

To fix this, move `query` to the beginning of `ref`:
```python
query = torch.randn(100, 10)
ref = torch.randn(100, 10)
ref = torch.cat([query, ref], dim=0)  # query comes first, so ref[:len(query)] == query
AC.get_accuracy(query, ref, labels1, labels2, True)  # no error
```

Note that this change doesn't affect the case where `query is ref`.

1.5.1

Bug fixes

Bumped the record-keeper version to fix issue 497

1.5.0

Features

For some loss functions, labels are now optional if `indices_tuple` is provided:
```python
loss = loss_func(embeddings, indices_tuple=pairs)
```

The losses for which you can do this are:

- CircleLoss
- ContrastiveLoss
- IntraPairVarianceLoss
- GeneralizedLiftedStructureLoss
- LiftedStructureLoss
- MarginLoss
- MultiSimilarityLoss
- NTXentLoss
- SignalToNoiseRatioContrastiveLoss
- SupConLoss
- TripletMarginLoss
- TupletMarginLoss
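
For example, a minimal sketch with `ContrastiveLoss` and manually constructed pairs (hypothetical tensors; for pair-based losses, `indices_tuple` is a 4-tuple of index tensors `(anchor1, positive, anchor2, negative)`):

```python
import torch
from pytorch_metric_learning import losses

embeddings = torch.randn(6, 128)

# embeddings[0]/embeddings[1] and embeddings[2]/embeddings[3] are positive pairs;
# embeddings[0]/embeddings[4] and embeddings[2]/embeddings[5] are negative pairs.
a1 = torch.tensor([0, 2])
p = torch.tensor([1, 3])
a2 = torch.tensor([0, 2])
n = torch.tensor([4, 5])

loss_func = losses.ContrastiveLoss()
loss = loss_func(embeddings, indices_tuple=(a1, p, a2, n))
```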

This issue has come up several times:

- 412
- 490
- 482
- 473
- 179
- 263

1.4.0

New features

- Added [InstanceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#instanceloss). See 410 by layumi
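
A minimal usage sketch, assuming the standard `embeddings`/`labels` call signature shared by the other losses (the `gamma` value shown is an assumed scaling hyperparameter):

```python
from pytorch_metric_learning import losses

loss_func = losses.InstanceLoss(gamma=64)
loss = loss_func(embeddings, labels)
```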
