Pytorch-toolbelt

Latest version: v0.6.2

0.5.1

New API

* Added `fs.find_subdirectories_in_dir` to retrieve list of subdirectories (non-recursive) in the given directory.
* Added `logodd` averaging of TTA predictions and counterpart `logodd_mean` function.
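
The idea behind log-odds averaging is small enough to sketch: map each probability to its logit, average the logits, and map the result back through the sigmoid. The snippet below is an illustration of the concept, not the library's implementation:

```python
import math

def logodd_mean(probs):
    """Average probabilities in log-odds (logit) space.

    Illustrative sketch of log-odds TTA averaging: clamp probabilities
    away from 0/1, average their logits, then apply the sigmoid.
    """
    eps = 1e-7
    clamped = [min(max(p, eps), 1.0 - eps) for p in probs]
    mean_logit = sum(math.log(p / (1.0 - p)) for p in clamped) / len(clamped)
    return 1.0 / (1.0 + math.exp(-mean_logit))
```

Compared to a plain arithmetic mean, averaging in logit space is less dominated by predictions saturated near 0 or 1.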

Improvements

* In `plot_confusion_matrix` one can disable plotting scores in each cell using the `show_scores` argument (`True` by default).
* `freeze_model` method now returns input `module` argument.

0.5.0

This is the major release update of Pytorch Toolbelt. It's been a long time since the last update and there are many improvements & updates since 0.4.4:

New features

* Added class `pytorch_toolbelt.datasets.DatasetMeanStdCalculator` to compute mean & std of a dataset that does not fit entirely in memory.
* New decoder module: `BiFPNDecoder`
* New encoders: `SwinTransformer`, `SwinB`, `SwinL`, `SwinT`, `SwinS`
* Added `broadcast_from_master` function to distributed utils. This method allows scattering a tensor from the master node to all nodes.
* Added `reduce_dict_sum` to gather & concatenate dictionary of lists from all nodes in DDP.
* Added `master_print` as a drop-in replacement to `print` that prints to stdout only on the zero-rank node.
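
For intuition, the streaming statistics behind a class like `DatasetMeanStdCalculator` can be computed with Welford's online algorithm, which never needs the whole dataset in memory. This is a pure-Python sketch of the technique, not the library's code:

```python
import math

class RunningMeanStd:
    """Streaming mean/std via Welford's online algorithm.

    Illustrative sketch: update statistics one value at a time so the
    data never has to fit in memory at once.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self):
        # Population standard deviation of all values seen so far
        return math.sqrt(self.m2 / self.n) if self.n else 0.0
```

The same update rule applies per channel when the values are pixel intensities streamed from image batches.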

Bug Fixes

* Fix bug in lovasz loss by seefun in https://github.com/BloodAxe/pytorch-toolbelt/pull/62

Breaking changes

* The bounding-box matching method has been split in two: `match_bboxes` and `match_bboxes_hungarian`. The former uses the scores of predicted bboxes and matches the most confident predictions first, while `match_bboxes_hungarian` matches bboxes to maximize the overall IoU.
* `set_manual_seed` now sets random seed for Numpy.
* `to_numpy` now correctly works for `None` and all iterables (not only tuple & list).
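
The score-greedy strategy can be sketched in a few lines (hypothetical helper names; boxes as `(x1, y1, x2, y2)` tuples), which also highlights how it differs from Hungarian matching: it commits to the most confident prediction first rather than optimizing total IoU:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def greedy_match(pred_boxes, scores, gt_boxes, iou_threshold=0.5):
    """Sketch of score-greedy matching: most confident predictions first."""
    order = sorted(range(len(pred_boxes)), key=lambda i: -scores[i])
    unmatched_gt = set(range(len(gt_boxes)))
    matches = []
    for i in order:
        best_j, best_iou = None, iou_threshold
        for j in unmatched_gt:
            v = iou(pred_boxes[i], gt_boxes[j])
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matches.append((i, best_j))
            unmatched_gt.discard(best_j)
    return matches
```

A Hungarian-style matcher would instead solve a global assignment problem over the full IoU matrix, which can trade a confident-but-mediocre match for a better overall assignment.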

Fixes & Improvements (NO BC)

* Added `dim` argument to `ApplySoftmaxTo` to specify channel for softmax operator (default value is 1, which was hardcoded previously)
* `ApplySigmoidTo` now applies in-place sigmoid (Purely performance optimization)
* `TileMerger` now supports specifying a `device` (Torch semantics) for storing intermediate tensors of accumulated tiles.
* All TTA functions now support PyTorch tracing.
* `MultiscaleTTA` now supports a model that returns a single Tensor (key-value outputs still work as before).
* `balanced_binary_cross_entropy_with_logits` and `BalancedBCEWithLogitsLoss` now support the `ignore_index` argument.
* `BiTemperedLogisticLoss` & `BinaryBiTemperedLogisticLoss` also got support for the `ignore_index` argument.
* `focal_loss_with_logits` now also supports `ignore_index`. Computation of ignored values has been moved from `BinaryFocalLoss` to this function.
* Reduced number of boilerplates & hardcoded code for encoders from `timm`. Now `GenericTimmEncoder` queries output strides & feature maps directly from the `timm`'s encoder instance.
* HRNet-based encoders now have a `use_incre_features` argument to specify whether output feature maps should have an increased number of features.
* `change_extension`, `read_rgb_image`, `read_image_as_is` functions now support `Path` as an input argument. The return type (str) remains unchanged.
* `count_parameters` now accepts a `human_friendly` argument to print the parameter count in human-friendly form: `21.1M` instead of `21123123`.
* `plot_confusion_matrix` now has a `format_string` argument (`None` by default) to specify a custom format string for values in the confusion matrix.
* `RocAucMetricCallback` for Catalyst got a `fix_nans` argument to fix `NaN` outputs, which caused `roc_auc` to raise an exception and break the training.
* `BestWorstMinerCallback` now additionally logs batches with a `NaN` value in the monitored metric.
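
The human-friendly formatting can be illustrated with a tiny helper (a sketch of the idea; the library's exact output format may differ):

```python
def human_friendly_count(n):
    """Format a parameter count like 21123123 -> '21.1M'.

    Illustrative sketch of human-friendly number formatting;
    not the library's implementation.
    """
    for threshold, suffix in ((1e9, "B"), (1e6, "M"), (1e3, "K")):
        if n >= threshold:
            return f"{n / threshold:.1f}{suffix}"
    return str(n)
```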

0.4.4

New features

- New tiled processing classes for 3D data: `VolumeSlicer` and `VolumeMerger`. Designed similarly to `ImageSlicer`. Now you can run 3D segmentation on huge volumes without the risk of OOM.
- Support of labels (scalar or 1D vector) augmentation/deaugmentation in D2, D4 and flip-style TTA.
- Balanced BCE loss (`BalancedBCEWithLogitsLoss`)
- Bi-Tempered loss (`BiTemperedLogisticLoss`)
- `SelectByIndex` helper module to pick named output of the model (For use in `nn.Sequential`)
- New encoders `MobileNetV3Large`, `MobileNetV3Small` from `torchvision`.
- New encoders from `timm` package (HRNets, ResNetD, EfficientNetV2 and others).
- DeepLabV3 & DeepLabV3+ Decoders
- Pure `PyTorch`-based implementation for bbox matching (`match_bboxes`) that supports both CPU/GPU matching using the Hungarian algorithm.
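
The core of tiled processing is computing tile start offsets so that tiles cover the whole extent and the last tile is aligned to the border. A pure-Python sketch of that bookkeeping (illustrative only, with hypothetical names; not `VolumeSlicer` itself):

```python
from itertools import product

def tile_starts(length, tile, step):
    """Start offsets of tiles of size `tile` with stride `step` covering
    `length`; the last tile is shifted so it ends exactly at the border."""
    if length <= tile:
        return [0]
    starts = list(range(0, length - tile + 1, step))
    if starts[-1] != length - tile:
        starts.append(length - tile)
    return starts

def volume_tiles(shape, tile, step):
    """3D tile coordinates as the Cartesian product of per-axis offsets."""
    axes = [tile_starts(s, t, st) for s, t, st in zip(shape, tile, step)]
    return list(product(*axes))
```

A merger class then accumulates per-tile predictions (with optional weighting) into a full-size buffer at these offsets and normalizes the overlaps.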

Bugfixes
- Fix bug in Lovasz Loss (62), thanks seefun

Breaking Changes

- Parameter `ignore` renamed to `ignore_index` in `BinaryLovaszLoss` class.
- Renamed `fpn_channels` argument in constructor of `FPNSumDecoder` and `FPNCatDecoder` to `channels`.
- Renamed `output_channels` argument in constructor of `HRNetSegmentationDecoder` to `channels`.
- `conv1x1` now sets bias to zero by default
- Bumped up minimal pytorch version to 1.8.1

Other Improvements

- `Ensembler` class now works correctly with `torch.jit.tracing`
- Numerous docstring & type annotation enhancements

0.4.3

Modules

- Added missing `sigmoid` activation support to `get_activation_block`
- Make Encoders support JIT & Tracing
- Better support for encoders from `timm` (they are named with the `Timm` prefix)

Utils
- `rgb_image_from_tensor` now clips values

TTA & Ensembling

- `Ensembler` now supports arithmetic, geometric & harmonic averaging via `reduction` parameter.
- Bring geometric & harmonic averaging to all TTA functions as well
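
The three averaging modes can be summarized as follows (pure-Python sketch with a hypothetical `reduction` argument mirroring the one described above; not the `Ensembler` implementation):

```python
import math

def reduce_predictions(preds, reduction="mean"):
    """Arithmetic, geometric or harmonic averaging of per-model scores.

    Illustrative sketch of the three reduction modes; `preds` is a list
    of positive probabilities from individual models.
    """
    n = len(preds)
    if reduction == "mean":
        return sum(preds) / n
    if reduction == "gmean":
        return math.exp(sum(math.log(p) for p in preds) / n)
    if reduction == "hmean":
        return n / sum(1.0 / p for p in preds)
    raise ValueError(f"Unknown reduction: {reduction}")
```

Geometric and harmonic means penalize disagreement between ensemble members more strongly than the arithmetic mean: a single near-zero prediction pulls the combined score down sharply.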

Datasets

- Added `read_binary_mask`
- Refactor `SegmentationDataset` to support strided masks for deep supervision
- Added `RandomSubsetDataset` and `RandomSubsetWithMaskDataset` to sample a dataset based on some condition (e.g. take only samples of a particular class)
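
Condition-based subsetting boils down to precomputing the indices where a mask is truthy and indexing through them; a minimal sketch with hypothetical names (randomized sampling order omitted for brevity):

```python
class MaskedSubset:
    """Expose only the items whose mask entry is truthy.

    Illustrative sketch of mask-based dataset subsetting, not the
    library's `RandomSubsetWithMaskDataset`.
    """

    def __init__(self, items, mask):
        self.items = items
        self.indices = [i for i, m in enumerate(mask) if m]

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.items[self.indices[i]]
```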

Other

As usual, more tests, better type annotations & comments

0.4.2

Breaking Changes

* Bump up minimal PyTorch version to 1.7.1

New features

* New dataset classes `ClassificationDataset`, `SegmentationDataset` for easy every-day use in Kaggle
* New losses: `FocalCosineLoss`, `BiTemperedLogisticLoss`, `SoftF1Loss`
* Support of new activations for `get_activation_block` (Silu, Softplus, Gelu)
* More encoders from timm package: NFNets, NFRegNet, HRNet, DPN
* `RocAucMetricCallback` for Catalyst
* `MultilabelAccuracyCallback` and `AccuracyCallback` with DDP support

Bugfixes

* Fixed invalid prefix in the Catalyst registry: from `tbt` to `tbt.`

0.4.1

New features

* Added Soft-F1 loss for direct optimization of F1 score (Binary case only)
* Fully reworked the TTA module for inference (backward compatibility kept where possible).
* Added support of `ignore_index` to Dice & Jaccard losses.
* Improved Lovasz loss to work in `fp16` mode.
* Added option to override selected params in `make_n_channel_input`.
* More Encoders, from `timm` package.
* `FPNFuse` module now works on 2D, 3D and N-D inputs.
* Added Global K-Max 2D pooling block.
* Added Generalized mean pooling 2D block.
* Added `softmax_over_dim_X`, `argmax_over_dim_X` shorthand functions for use in metrics to get soft/hard labels without using lambda functions.
* Added helper visualization functions to add fancy header to image, stack images of different sizes.
* Improved rendering of confusion matrix.
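
The Soft-F1 idea added in this release replaces hard TP/FP/FN counts with sums of predicted probabilities, which makes the F1 score differentiable. A sketch for the binary case (an illustration of the concept, not the library's `SoftF1Loss` implementation):

```python
def soft_f1_loss(probs, targets, eps=1e-8):
    """1 - soft-F1 for binary classification.

    Illustrative sketch: `probs` are predicted probabilities in [0, 1],
    `targets` are 0/1 labels; hard counts are replaced by probability
    sums so the result is smooth in the predictions.
    """
    tp = sum(p * t for p, t in zip(probs, targets))
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))
    fn = sum((1 - p) * t for p, t in zip(probs, targets))
    soft_f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1.0 - soft_f1
```

Because the loss is computed over a whole batch rather than per sample, it directly targets the set-level F1 metric instead of a per-element proxy.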

Catalyst goodies

* Encoders & Losses are available in Catalyst registry
* `StopIfNanCallback`
* Added `OutputDistributionCallback` to log the distribution of predictions to TensorBoard.
* Added `UMAPCallback` to visualize embedding space using UMAP in TensorBoard.


Breaking Changes

* Renamed `CudaTileMerger` to `TileMerger`. `TileMerger` allows specifying the target device explicitly.
* `tensor_from_rgb_image` removed in favor of `image_to_tensor`.

Bug fixes & Improvements

* Improve numeric stability of `focal_loss_with_logits` when `reduction="sum"`
* Prevent `NaN` in FocalLoss when all elements are equal to `ignore_index` value.
* A LOT of type hints.
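
The stability fixes above hinge on evaluating `log(sigmoid(x))` and `log(1 - sigmoid(x))` via softplus identities instead of forming probabilities first, so large-magnitude logits never overflow. A single-element sketch of a numerically stable binary focal loss (illustrative only, not the library's `focal_loss_with_logits`):

```python
import math

def focal_loss_single(logit, target, gamma=2.0):
    """Numerically stable binary focal loss for one logit/label pair.

    Illustrative sketch using the identities
    log(sigmoid(x)) = -softplus(-x) and log(1 - sigmoid(x)) = -softplus(x).
    """
    def softplus(x):
        # Stable softplus: log(1 + exp(x)) without overflow for large |x|
        return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

    log_p = -softplus(-logit)   # log p
    log_1p = -softplus(logit)   # log (1 - p)
    p = math.exp(log_p)
    if target == 1:
        return -((1.0 - p) ** gamma) * log_p
    return -(p ** gamma) * log_1p
```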
