Pytorch-toolbelt

Latest version: v0.8.0

0.8.0

What's Changed

* Feature/modules overhaul by BloodAxe in https://github.com/BloodAxe/pytorch-toolbelt/pull/92
* Feature/modules overhaul by BloodAxe in https://github.com/BloodAxe/pytorch-toolbelt/pull/96

0.7.0

New stuff

* All encoders, decoders & heads now inherit from the `HasOutputFeaturesSpecification` interface, which allows querying the number of output channels and strides a module outputs.
* New loss class `QualityFocalLoss` from https://arxiv.org/abs/2006.04388
* New function `pad_tensor_to_size` - a generic padding function for N-dimensional tensors of shape [B, C, ...].
* Added `DropPath` layer (aka DropConnect)
* Pretrained weights for SegFormer backbones
* `first_class_background_init` for initializing the last output convolution/linear block with zero weights and the bias set to `[logit(bg_prob), logit(1-bg_prob), ...]` (see the sketch after this list).
* New function `instantiate_normalization_block` to create a normalization layer by name. This is used in some decoder layers / heads.
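
As a rough illustration of the background-prior trick behind `first_class_background_init`, here is a minimal sketch in plain PyTorch; the helper name `init_background_prior` and the exact treatment of foreground channels are assumptions for illustration, not the library's actual implementation:

```python
import math

import torch
from torch import nn


def init_background_prior(head: nn.Conv2d, bg_prob: float = 0.99) -> None:
    # Zero the weights of the final prediction layer and set the bias so that,
    # before any training, the model predicts `bg_prob` for the background
    # class (channel 0) and 1 - bg_prob for the foreground classes.
    nn.init.zeros_(head.weight)
    with torch.no_grad():
        head.bias[0] = math.log(bg_prob / (1 - bg_prob))    # logit(bg_prob)
        head.bias[1:] = math.log((1 - bg_prob) / bg_prob)   # logit(1 - bg_prob)


head = nn.Conv2d(256, 4, kernel_size=1)  # e.g. 1 background + 3 foreground classes
init_background_prior(head, bg_prob=0.99)
```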

Improvements

* Improved numeric accuracy of the `focal_loss_with_logits` function by explicitly disabling AMP autocast inside it and casting preds & targets to `float32` (see the sketch after this list).
* `MultiscaleTTA` now allows setting interpolation `mode` and `align_corners` for resizing input and predictions.
* `BinaryFocalLoss` now has `__repr__`
* `name_for_stride` now accepts `None` for the `stride` argument; in this case the function is a no-op and returns the input `name` unchanged.
* `RandomSubsetDataset` now takes an optional `weights` argument to select samples with given probabilities.
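
The autocast change follows a common pattern for numerically sensitive losses: temporarily leave mixed precision and compute in `float32`. A minimal sketch of that pattern, shown here with plain binary cross-entropy rather than the library's focal loss implementation:

```python
import torch
import torch.nn.functional as F


def bce_in_float32(preds: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Disable AMP autocast locally and cast inputs to float32 so the loss is
    # computed at full precision even when called inside an autocast region.
    with torch.cuda.amp.autocast(enabled=False):
        return F.binary_cross_entropy_with_logits(preds.float(), targets.float())
```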

Bugfixes

* Fixed incorrect implementation of `get_collate_fn` in `RandomSubsetDataset`, which returned the wrapped dataset's `get_collate_fn` method itself instead of calling it (illustrated in the sketch below).
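
A schematic of the kind of bug being fixed; the class body below is a hypothetical illustration, not the library's source:

```python
class RandomSubsetDatasetSketch:
    def __init__(self, dataset):
        self.dataset = dataset

    def get_collate_fn(self):
        # Buggy version: returns the bound method itself, so the caller
        # receives a callable that yields another callable rather than
        # collated batches.
        #   return self.dataset.get_collate_fn
        # Fixed version: call it to obtain the actual collate function.
        return self.dataset.get_collate_fn()
```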

Breaking Changes

* Signature of decoders changed to require the first argument `input_spec` to be of type `FeatureMapsSpecification`.
* Rewritten BiFPN decoder to support an arbitrary number of input feature maps and user-defined normalization, activation & BiFPN block.
* Rewritten UNetDecoder to allow specifying the upsample block by string name

* `WeightedLoss` and `JointLoss` classes have been removed. If your code was using these classes, here they are - copy-paste them into your project and live happily, but I strongly suggest using modern deep learning frameworks that support defining losses from configuration files.
```python
from torch import nn
from torch.nn.modules.loss import _Loss


class WeightedLoss(_Loss):
    """Wrapper class around a loss function that applies weighting with a fixed factor.
    This class helps to balance multiple losses if they have different scales.
    """

    def __init__(self, loss, weight=1.0):
        super().__init__()
        self.loss = loss
        self.weight = weight

    def forward(self, *input):
        return self.loss(*input) * self.weight


class JointLoss(_Loss):
    """
    Wrap two loss functions into one. This class computes a weighted sum of two losses.
    """

    def __init__(self, first: nn.Module, second: nn.Module, first_weight=1.0, second_weight=1.0):
        super().__init__()
        self.first = WeightedLoss(first, first_weight)
        self.second = WeightedLoss(second, second_weight)

    def forward(self, *input):
        return self.first(*input) + self.second(*input)
```
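
For reference, a minimal usage sketch of the copied classes with standard PyTorch losses (the particular losses and weights are just an example):

```python
import torch
from torch import nn

# Weighted sum of two off-the-shelf losses.
criterion = JointLoss(nn.BCEWithLogitsLoss(), nn.L1Loss(), first_weight=0.7, second_weight=0.3)

logits = torch.randn(4, 1, 64, 64, requires_grad=True)
targets = torch.randint(0, 2, (4, 1, 64, 64)).float()
loss = criterion(logits, targets)
loss.backward()
```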

0.6.2

0.6.1

* Fixes to CI actions
* Added support for Python 3.10
* Bugfix in DatasetMeanStdCalculator when the `mask` argument was used

0.6.0

Breaking Changes
All Catalyst-related callbacks have been moved to a [fork](https://github.com/BloodAxe/catalyst) of the Catalyst library
