Pytorch-toolbelt

Latest version: v0.8.0


0.4.2

Breaking Changes

* Bumped the minimal PyTorch version to 1.7.1

New features

* New dataset classes `ClassificationDataset` and `SegmentationDataset` for easy everyday use on Kaggle
* New losses: `FocalCosineLoss`, `BiTemperedLogisticLoss`, `SoftF1Loss`
* Support for new activations in `get_activation_block` (SiLU, Softplus, GELU); see the sketch after this list
* More encoders from the timm package: NFNets, NFRegNet, HRNet, DPN
* `RocAucMetricCallback` for Catalyst
* `MultilabelAccuracyCallback` and `AccuracyCallback` with DDP support
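
The `get_activation_block` item above describes a name-to-module factory. A minimal self-contained stand-in illustrating the idea (this is a sketch of the behaviour, not the library's actual implementation):

```python
# Hypothetical stand-in for a name-to-activation factory like get_activation_block.
# It shows the intent: parameterize encoders/decoders by an activation name
# and get back something you can instantiate as a regular nn.Module.
from functools import partial

import torch
from torch import nn

_ACTIVATIONS = {
    "relu": partial(nn.ReLU, inplace=True),
    "silu": nn.SiLU,          # newly supported (requires PyTorch >= 1.7)
    "softplus": nn.Softplus,  # newly supported
    "gelu": nn.GELU,          # newly supported
}


def get_activation_block_sketch(name: str):
    """Return a callable that builds the requested activation module."""
    return _ACTIVATIONS[name.lower()]


block = get_activation_block_sketch("silu")
act = block()
print(act(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```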

Bugfixes

* Fixed invalid prefix in the Catalyst registry: `tbt` changed to `tbt.`

0.4.1

New features

* Added Soft-F1 loss for direct optimization of the F1 score (binary case only); see the sketch after this list
* Fully reworked the TTA module for inference (backward compatibility kept where possible).
* Added support for `ignore_index` to Dice & Jaccard losses.
* Improved Lovasz loss to work in `fp16` mode.
* Added option to override selected params in `make_n_channel_input`.
* More encoders from the `timm` package.
* `FPNFuse` module now works on 2D, 3D, and N-D inputs.
* Added Global K-Max 2D pooling block.
* Added Generalized mean pooling 2D block.
* Added `softmax_over_dim_X`, `argmax_over_dim_X` shorthand functions for use in metrics to get soft/hard labels without using lambda functions.
* Added helper visualization functions to add a fancy header to an image and to stack images of different sizes.
* Improved rendering of confusion matrix.
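
The Soft-F1 idea is to replace hard TP/FP/FN counts with their probabilistic ("soft") counterparts so that 1 - F1 becomes differentiable. A minimal binary sketch of that formulation (not the library's exact `SoftF1Loss`):

```python
import torch
from torch import nn


class BinarySoftF1Loss(nn.Module):
    """Differentiable 1 - F1 for binary targets, computed from soft counts."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits).flatten()
        targets = targets.float().flatten()

        tp = (probs * targets).sum()        # soft true positives
        fp = (probs * (1 - targets)).sum()  # soft false positives
        fn = ((1 - probs) * targets).sum()  # soft false negatives

        soft_f1 = 2 * tp / (2 * tp + fp + fn + self.eps)
        return 1 - soft_f1


loss = BinarySoftF1Loss()(torch.randn(8, 1, 32, 32), torch.randint(0, 2, (8, 1, 32, 32)))
```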

Catalyst goodies

* Encoders & losses are available in the Catalyst registry
* `StopIfNanCallback`
* Added `OutputDistributionCallback` to log the distribution of predictions to TensorBoard.
* Added `UMAPCallback` to visualize embedding space using UMAP in TensorBoard.


Breaking Changes

* Renamed `CudaTileMerger` to `TileMerger`; it now allows specifying the target device explicitly (see the sketch after this list).
* `tensor_from_rgb_image` removed in favor of `image_to_tensor`.
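
For context, a typical tiled-inference flow after the rename looks like the sketch below. The call pattern follows the project's README from memory; treat the exact import paths and argument names, in particular the `device` keyword of `TileMerger`, as assumptions:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader

from pytorch_toolbelt.inference.tiles import ImageSlicer, TileMerger
from pytorch_toolbelt.utils import image_to_tensor, to_numpy

device = "cuda" if torch.cuda.is_available() else "cpu"
image = np.zeros((2048, 2048, 3), dtype=np.uint8)  # placeholder large RGB image

# Cut the image into overlapping 512x512 tiles with pyramid blending weights.
tiler = ImageSlicer(image.shape, tile_size=(512, 512), tile_step=(256, 256), weight="pyramid")
tiles = [image_to_tensor(tile) for tile in tiler.split(image)]

# TileMerger replaces CudaTileMerger; the target device is now explicit
# (the "device" argument name here is an assumption).
merger = TileMerger(tiler.target_shape, 1, tiler.weight, device=device)

model = torch.nn.Conv2d(3, 1, kernel_size=1).to(device).eval()  # stand-in model
with torch.no_grad():
    for tiles_batch, coords_batch in DataLoader(list(zip(tiles, tiler.crops)), batch_size=8):
        pred_batch = model(tiles_batch.float().to(device))
        merger.integrate_batch(pred_batch, coords_batch)

merged_mask = np.moveaxis(to_numpy(merger.merge()), 0, -1)  # CHW -> HWC
```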

Bug fixes & Improvements

* Improved numerical stability of `focal_loss_with_logits` when `reduction="sum"` (see the sketch after this list)
* Prevent `NaN` in `FocalLoss` when all elements are equal to the `ignore_index` value.
* A LOT of type hints.
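
For reference, a numerically stable way to compute binary focal loss from logits is to stay in log-space via `logsigmoid`, which is the kind of fix the first bullet refers to. A self-contained sketch (not the library's exact `focal_loss_with_logits`):

```python
import torch
import torch.nn.functional as F


def focal_loss_with_logits_sketch(logits, targets, gamma=2.0, alpha=0.25, reduction="mean"):
    """Numerically stable binary focal loss computed from logits.

    Working in log-space via logsigmoid avoids overflow for large-magnitude
    logits, which also keeps reduction="sum" well behaved.
    """
    targets = targets.float()
    # log(p) for the true class, without materializing p via a plain sigmoid
    logpt = F.logsigmoid(logits) * targets + F.logsigmoid(-logits) * (1 - targets)
    pt = torch.exp(logpt)

    focal_term = (1.0 - pt).pow(gamma)
    loss = -focal_term * logpt

    if alpha is not None:
        # alpha-balancing between positive and negative examples
        loss = loss * (alpha * targets + (1 - alpha) * (1 - targets))

    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss
```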

0.4.0

New features
* Memory-efficient `Swish` and `Mish` activation functions (credits go to https://github.com/rwightman/pytorch-image-models); see the sketch after this list
* Refactored EfficientNet encoders (no pretrained weights yet)
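
The memory-efficient trick is to implement the activation as a custom `autograd.Function` that saves only the input tensor and recomputes the sigmoid in the backward pass, instead of letting autograd keep the intermediates of `x * sigmoid(x)`. A minimal sketch of the Swish half (Mish follows the same pattern):

```python
import torch
from torch import nn


class SwishFunction(torch.autograd.Function):
    """Swish (x * sigmoid(x)) that stores only the input tensor."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * torch.sigmoid(x)

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx [x * sigmoid(x)] = sigmoid(x) * (1 + x * (1 - sigmoid(x)))
        (x,) = ctx.saved_tensors
        s = torch.sigmoid(x)
        return grad_output * (s * (1 + x * (1 - s)))


class MemoryEfficientSwish(nn.Module):
    def forward(self, x):
        return SwishFunction.apply(x)
```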

Fixes
* Fixed incorrect default value for `ignore_index` in `SoftCrossEntropyLoss`

Breaking changes
* All Catalyst-related utils updated to be compatible with Catalyst 20.8.2
* Removed the PIL package dependency

Improvements
* More comments, more type hints

0.3.2

New features

* Many helpful callbacks for the Catalyst library: `HyperParameterCallback` and `LossAdapter`, to name a few.
* New losses for deep model supervision (helpful when the sizes of the target and output masks differ); see the sketch after this list
* Stacked Hourglass encoder
* Context Aggregation Network decoder
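
Deep-supervision losses in this sense compare an intermediate, lower-resolution output against the full-resolution target by resizing the prediction before applying a base loss. A minimal sketch of that idea (the class name below is made up for illustration and is not the library's API):

```python
import torch
import torch.nn.functional as F
from torch import nn


class ResizePredictionLoss(nn.Module):
    """Wrap a base loss so it can supervise an intermediate (smaller) output.

    The prediction is bilinearly upsampled to the target's spatial size
    before the wrapped loss is applied.
    """

    def __init__(self, base_loss: nn.Module):
        super().__init__()
        self.base_loss = base_loss

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        if logits.shape[2:] != target.shape[2:]:
            logits = F.interpolate(logits, size=target.shape[2:], mode="bilinear", align_corners=False)
        return self.base_loss(logits, target)


# Supervise a stride-4 auxiliary head against a full-resolution mask.
criterion = ResizePredictionLoss(nn.BCEWithLogitsLoss())
loss = criterion(torch.randn(2, 1, 64, 64), torch.rand(2, 1, 256, 256))
```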

Breaking Changes

* The `ABN` module now resolves to `nn.Sequential(BatchNorm2d, Activation)` instead of a hand-crafted module. This makes it easier to convert batch normalization modules to `nn.SyncBatchNorm` (see the sketch after this list).

* Almost every Encoder/Decoder implementation has been refactored for better clarity and flexibility. Please double-check your pipelines.
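
In practice the new `ABN` behaviour means a block like the one below: a standard `BatchNorm2d` followed by an activation, packaged as `nn.Sequential`. A sketch, assuming a plain ReLU/LeakyReLU activation choice:

```python
from torch import nn


def make_abn(num_features: int, activation: str = "relu") -> nn.Sequential:
    """Sketch of the new ABN behaviour: BatchNorm2d followed by an activation,
    wrapped in nn.Sequential instead of a fused custom module."""
    activations = {
        "relu": nn.ReLU(inplace=True),
        "leaky_relu": nn.LeakyReLU(0.01, inplace=True),
    }
    return nn.Sequential(nn.BatchNorm2d(num_features), activations[activation])


abn = make_abn(64)
# Because the BatchNorm2d inside is a standard module, a model built this way
# can later be converted with nn.SyncBatchNorm.convert_sync_batchnorm(model).
```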

Important bugfixes

* Improved numerical stability of Dice / Jaccard losses (using `log_sigmoid()` + `exp()` instead of plain `sigmoid()`); see the sketch below
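
A sketch of what that stabilization looks like in a soft Dice score: probabilities are obtained as `exp(logsigmoid(x))` rather than a plain `sigmoid(x)` (illustrative only, not the library's exact code):

```python
import torch
import torch.nn.functional as F


def soft_dice_score(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Soft Dice on logits, converting to probabilities via exp(logsigmoid(x)).

    The log-space path behaves better for large-magnitude logits than a
    plain sigmoid() call.
    """
    probs = torch.exp(F.logsigmoid(logits))
    targets = targets.float()
    intersection = (probs * targets).sum()
    cardinality = probs.sum() + targets.sum()
    return (2.0 * intersection + eps) / (cardinality + eps)
```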


Other

* A lot of comments for functions and modules
* Code cleanup, thanks to DeepSource
* Type annotations for modules and functions
* Update of README

0.3.1

Fixes

* Fixed a bug in the computation of the IoU metric in the `binary_dice_iou_score` function
* Fixed an incorrect default value in `SoftCrossEntropyLoss` (#38)

Improvements

* The `draw_binary_segmentation_predictions` function now has an `image_format` parameter (`rgb`|`bgr`|`gray`) specifying the input image format, so images are visualized correctly in TensorBoard
* More type annotations across the codebase


New features

* New visualization function `draw_multilabel_segmentation_predictions`

0.3.0

New features

Encoders

* HRNetV2
* DenseNets
* EfficientNet
* `Encoder` class has a `change_input_channels` method to change the number of channels in the input image (see the sketch below)
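
Changing the number of input channels usually boils down to rebuilding the encoder's first convolution for N input channels while reusing the pretrained weights. A self-contained sketch of that mechanism (the helper name below is made up; it is not the library's `change_input_channels` implementation):

```python
import torch
from torch import nn


def make_n_channel_conv(conv: nn.Conv2d, in_channels: int) -> nn.Conv2d:
    """Return a copy of `conv` that accepts `in_channels` inputs.

    Existing pretrained weights are reused; extra channels are initialized
    by cycling over the original ones.
    """
    new_conv = nn.Conv2d(
        in_channels,
        conv.out_channels,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
        dilation=conv.dilation,
        groups=conv.groups,
        bias=conv.bias is not None,
    )
    with torch.no_grad():
        w = conv.weight
        idx = [i % w.shape[1] for i in range(in_channels)]
        new_conv.weight.copy_(w[:, idx, :, :])
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias)
    return new_conv


# Adapt a 3-channel stem convolution to a 6-channel input (e.g. two stacked RGB frames).
stem = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
stem6 = make_n_channel_conv(stem, 6)
```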

New losses

* `BCELoss` with support for `ignore_index`
* `SoftBCELoss` (label smoothing loss for the binary case, with support for `ignore_index`)
* `SoftCrossEntropyLoss` (label smoothing loss for the multiclass case, with support for `ignore_index`); see the sketch after this list
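
Label smoothing cross-entropy spreads a small probability mass `smooth_factor` over the non-target classes and can skip positions marked with `ignore_index`. A minimal multiclass sketch of the idea (not the library's exact `SoftCrossEntropyLoss`):

```python
import torch
import torch.nn.functional as F


def smoothed_cross_entropy(logits, target, smooth_factor=0.1, ignore_index=-100):
    """Cross-entropy against smoothed one-hot targets with ignore_index support.

    The true class gets 1 - smooth_factor, the remaining classes share
    smooth_factor equally, and ignored positions are excluded from the mean.
    """
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)

    valid = target != ignore_index
    safe_target = target.clone()
    safe_target[~valid] = 0  # placeholder index for masked positions

    smooth = torch.full_like(log_probs, smooth_factor / (num_classes - 1))
    smooth.scatter_(1, safe_target.unsqueeze(1), 1.0 - smooth_factor)

    loss = -(smooth * log_probs).sum(dim=1)
    return loss[valid].mean()


# Example: (N, C) logits with one ignored position.
logits = torch.randn(4, 5)
target = torch.tensor([0, 3, -100, 2])
print(smoothed_cross_entropy(logits, target))
```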

Catalyst goodies

* Online pseudolabeling callback
* Training signal annealing callback

Other

* New activation functions support in `ABN` block: Swish, Mish, HardSigmoid
* New decoders (Unet, FPN, DeeplabV3, PPM) to simplify the creation of segmentation models
* `CREDITS.md` to include all the references to code and articles. The existing list is definitely not complete, so feel free to open PRs
* Object context block from OCNet

API changes

* Focal loss now supports the normalized focal loss and reduced focal loss extensions (see the sketch after this list).
* Optimized computation of the pyramid weight matrix (#34)
* Default value `align_corners=False` in `F.interpolate` when doing bilinear upsampling.
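
One common formulation of the normalized focal loss extension divides the focal-weighted cross-entropy terms by the sum of the focal weights, so the overall loss scale does not collapse as predictions improve. A sketch of that formulation (an assumption, not necessarily the library's exact definition):

```python
import torch
import torch.nn.functional as F


def normalized_focal_loss_sketch(logits, targets, gamma=2.0, eps=1e-8):
    """Binary focal loss with a 'normalized' extension.

    The usual focal term (1 - pt)^gamma shrinks the total loss as the model
    improves; dividing by the sum of the focal terms keeps the scale stable.
    """
    targets = targets.float()
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pt = torch.exp(-ce)                     # model probability of the true class
    focal_term = (1.0 - pt).pow(gamma)

    loss = focal_term * ce
    norm = focal_term.sum().clamp_min(eps)  # normalization factor
    return loss.sum() / norm
```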

Bugfixes

* Fix missing call to batch normalization block in `FPNBottleneckBN`
* Fix numerical stability for `DiceLoss` and `JaccardLoss` when `log_loss=True`
* Fix numerical stability when computing normalized focal loss
