New functionality added:
General
Major changes
- Add [ResUNet++](https://arxiv.org/pdf/1911.07067.pdf) model
- Add ``TEST.POST_PROCESSING.REMOVE_BY_PROPERTIES``, and its options, to remove instances based on conditions applied to each instance's properties. This merges the ``PROBLEM.INSTANCE_SEG.WATERSHED_CIRCULARITY`` and ``PROBLEM.INSTANCE_SEG.DATA_REMOVE_SMALL_OBJ_AFTER`` functionalities (see the sketch after this list).
- New options and upgrades to save memory:
* Move normalization to the ``load_sample`` function inside the generators when ``DATA.*.IN_MEMORY`` is selected. This allows keeping the dataset in memory in its original dtype (usually ``uint8`` or ``uint16``) instead of ``float32``, consuming less memory at the cost of normalizing each batch on the fly (see the sketch after this list).
* Update ``TEST.REDUCE_MEMORY`` option to also reduce the dtype of the prediction from ``float32`` to ``float16``
* Add ``TEST.BY_CHUNKS``, and its options, to process large images by chunks: the load/save steps work with ``H5`` or ``Zarr`` formats. This option generates the model's prediction with overlap/padding and a low memory footprint by constructing it patch by patch. It is also prepared for multi-GPU inference to accelerate the reconstruction process. It can also work with ``TIF`` images, but with ``H5`` and ``Zarr`` only the patches being processed are loaded into memory, so it should scale to TBs of data without memory problems (see the sketch after this list).
* Add ``TEST.BY_CHUNKS.WORKFLOW_PROCESS``, and a few more options related to it, to decide whether or not to continue with the workflow's _normal_ steps after the model prediction. With ``TEST.BY_CHUNKS.WORKFLOW_PROCESS.TYPE`` you can tell the workflow to process the predicted image patch by patch or as one whole image. The by-patch option is currently only supported in the ``DETECTION`` workflow.
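As a reference for the property-based filtering mentioned above, here is a minimal sketch of that kind of post-processing; the ``filter_instances`` helper and its thresholds are illustrative assumptions, not BiaPy's actual implementation or default values:

```python
# Hypothetical sketch of property-based instance removal (not BiaPy's actual code).
import numpy as np
from skimage.measure import regionprops

def filter_instances(labels, min_area=20, min_circularity=0.3):
    """Remove labelled 2D instances whose area or circularity falls below a threshold."""
    out = labels.copy()
    for prop in regionprops(labels):
        # Circularity of a 2D region: 4*pi*area / perimeter^2 (1.0 for a perfect circle)
        circ = 4 * np.pi * prop.area / (prop.perimeter ** 2) if prop.perimeter > 0 else 0.0
        if prop.area < min_area or circ < min_circularity:
            out[out == prop.label] = 0  # drop this instance
    return out
```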
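The memory saving from moving normalization into ``load_sample`` can be pictured with the sketch below; the ``LazyNormDataset`` class and its scaling are assumptions for illustration, not the actual BiaPy generator API:

```python
# Sketch of lazy normalization: raw data stays in memory in its original integer dtype
# and is only cast to float32 when a sample is loaded (names are illustrative).
import numpy as np

class LazyNormDataset:
    def __init__(self, images):
        # e.g. a list of uint8/uint16 arrays kept as-is, 4x/2x smaller than float32
        self.images = images

    def load_sample(self, idx):
        img = self.images[idx]
        # Cast and scale to [0, 1] only for the sample currently being batched
        return img.astype(np.float32) / np.iinfo(img.dtype).max
```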
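And a rough sketch of the patch-by-patch idea behind ``TEST.BY_CHUNKS`` using Zarr; the paths, patch size and ``predict`` stand-in are placeholders, and the real implementation additionally handles overlap/padding and multi-GPU reconstruction:

```python
# Patch-by-patch inference over a Zarr volume: only one patch is in memory at a time.
import numpy as np
import zarr

def predict(block):                      # stand-in for the network forward pass
    return block.astype(np.float16)

patch = (64, 128, 128)                   # Z, Y, X patch size (example values)
src = zarr.open("input_volume.zarr", mode="r")
dst = zarr.open("prediction.zarr", mode="w", shape=src.shape,
                chunks=patch, dtype="float16")

for z in range(0, src.shape[0], patch[0]):
    for y in range(0, src.shape[1], patch[1]):
        for x in range(0, src.shape[2], patch[2]):
            block = np.asarray(src[z:z + patch[0], y:y + patch[1], x:x + patch[2]])
            dst[z:z + patch[0], y:y + patch[1], x:x + patch[2]] = predict(block)
```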
Minor changes
- Delete ``MODEL.KERNEL_INIT``
- ``TRAIN.PATIENCE`` default changed to ``-1``
- Add ``utils/scripts/h5_to_zarr.py`` auxiliary script (see the sketch after this list)
- The ``warmupcosine`` learning rate scheduler is now stepped by iterations instead of epochs.
- Update notebooks to work with BiaPy based on PyTorch
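As a rough idea of what ``utils/scripts/h5_to_zarr.py`` does, an H5 dataset can be copied to Zarr slab by slab; the file paths and the ``data`` dataset name below are assumptions, so check the script itself for the real arguments:

```python
# Hypothetical H5 -> Zarr conversion, copying one slab at a time to keep memory low.
import h5py
import zarr

with h5py.File("volume.h5", "r") as f:
    dset = f["data"]                     # dataset name is an assumption
    out = zarr.open("volume.zarr", mode="w", shape=dset.shape,
                    chunks=dset.chunks or True, dtype=dset.dtype)
    for z in range(dset.shape[0]):
        out[z] = dset[z]                 # only one slab of the volume is read at a time
```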
Workflows
Instance segmentation
- Add ``TEST.POST_PROCESSING.CLEAR_BORDER`` to remove instances touching the image border
Denoising
- Change N2V masks to always be created on the fly (saving memory)
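A minimal sketch of what building an N2V mask on the fly for a single 2D patch can look like; the ``n2v_mask_on_the_fly`` helper, mask ratio and neighbourhood size are illustrative assumptions, not the actual BiaPy code:

```python
# Blind-spot masking generated per patch instead of being precomputed for the dataset.
import numpy as np

def n2v_mask_on_the_fly(patch, mask_ratio=0.002, rng=None):
    rng = rng or np.random.default_rng()
    masked = patch.copy()
    mask = np.zeros(patch.shape, dtype=bool)
    n_pix = max(1, int(patch.size * mask_ratio))
    ys = rng.integers(0, patch.shape[0], n_pix)
    xs = rng.integers(0, patch.shape[1], n_pix)
    # Replace each selected pixel with the value of a random neighbour (5x5 window)
    ny = np.clip(ys + rng.integers(-2, 3, n_pix), 0, patch.shape[0] - 1)
    nx = np.clip(xs + rng.integers(-2, 3, n_pix), 0, patch.shape[1] - 1)
    masked[ys, xs] = patch[ny, nx]
    mask[ys, xs] = True                  # loss is computed only on these hidden pixels
    return masked, mask
```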
Detection
- Remove ``TEST.DET_LOCAL_MAX_COORDS`` option
- Add ``TEST.DET_POINT_CREATION_FUNCTION``, and a few more options related to it, to decide whether to use the ``peak_local_max`` or ``blob_log`` function (from scikit-image) to create the final points from the probabilities.
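Both point-creation functions come from scikit-image; a minimal example of how they are typically called on a probability map is shown below (the thresholds and distances are made-up example values, not BiaPy defaults):

```python
# Turning a 2D probability map into point detections with scikit-image.
import numpy as np
from skimage.feature import peak_local_max, blob_log

prob = np.random.rand(256, 256).astype(np.float32)  # stand-in probability map

# Local maxima above a probability threshold, at least 5 px apart -> (N, 2) (y, x) coords
points = peak_local_max(prob, min_distance=5, threshold_abs=0.5)

# Laplacian-of-Gaussian blobs -> (M, 3) rows of (y, x, sigma), sigma giving the blob scale
blobs = blob_log(prob, min_sigma=1, max_sigma=5, threshold=0.1)
```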
SSL
- Add ``MODEL.MAE_MASK_RATIO`` option
SR
- Add ``3D`` support
- Add notebooks
Bugs fixed:
- Correct a bug in the 2D UNETR definition
- Fix a bug in 2D cross-validation
- Fix minor bugs introduced when switching from TensorFlow to PyTorch