Slideflow

Latest version: v2.3.1

3.0

```python
>>> print(ctranspath.citation)
@article{wang2022,
  title={Transformer-based Unsupervised Contrastive Learning for Histopathological Image Classification},
  author={Wang, Xiyue and Yang, Sen and Zhang, Jun and Wang, Minghui and Zhang, Jing and Yang, Wei and Huang, Junzhou and Han, Xiao},
  journal={Medical Image Analysis},
  year={2022},
  publisher={Elsevier}
}
```


DINOv2
As with SimCLR, Slideflow now supports generating features from a trained DINOv2 model. Use the feature extractor `'dinov2'`, passing the `*.pth` teacher weights to the `weights` argument and the YAML configuration file to the `cfg` argument:

```python
import slideflow as sf

dinov2 = sf.model.build_feature_extractor(
    'dinov2',
    weights='/path/to/teacher_checkpoint.pth',
    cfg='/path/to/config.yaml'
)
```


We've also provided a modified version of [DINOv2](https://github.com/jamesdolezal/dinov2) that allows you to train the network using Slideflow projects and datasets. See our [documentation](https://slideflow.dev/ssl/#dinov2) for instructions on how to train and use DINOv2.
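
Once built, the extractor can be used like any other Slideflow feature extractor. The sketch below is illustrative only: the project path, tile geometry, and the use of `sf.DatasetFeatures` for whole-dataset feature calculation are assumptions, so check the linked documentation for the exact workflow.

```python
import slideflow as sf

# Build the DINOv2 feature extractor (as above).
dinov2 = sf.model.build_feature_extractor(
    'dinov2',
    weights='/path/to/teacher_checkpoint.pth',
    cfg='/path/to/config.yaml'
)

# Load a project and dataset; paths and tile geometry are placeholders.
project = sf.load_project('/path/to/project')
dataset = project.dataset(tile_px=224, tile_um=128)

# Calculate features for all tiles in the dataset using the extractor.
features = sf.DatasetFeatures(dinov2, dataset)
```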

New MIL Features

2.5

2.3.1

This is a minor, bug-fix release. See the [Version 2.3 release notes](https://github.com/jamesdolezal/slideflow/releases/tag/2.3.0) for details about the latest major release.

Changes
- Fix TypeError when attempting to generate MIL predictions in Studio with a stain normalizer [346] - thanks siddhir
- Fix generation of MIL heatmaps during training & evaluation when using a stride > 1 [331] - thanks sz3029

2.3.0

Highlights
We are very happy to release Slideflow 2.3, the highlight of which is the introduction of **whole-slide tissue segmentation**. Both binary and multiclass tissue segmentation models can be trained from labeled ROIs and deployed for slide QC or used to generate ROIs. This release also adds **CycleGAN-based stain normalization**, as well as several smaller features and optimizations.

Table of Contents
1. **Tissue Segmentation**
a. Training segmentation models
b. Using models for QC
c. Generating ROIs
d. Deploying in Studio
2. **CycleGAN Stain Normalization**
3. **Other New Features**
4. **Dependencies**
5. **Known Issues**

Tissue segmentation

https://github.com/jamesdolezal/slideflow/assets/48372806/acac4415-eb4c-4a90-8483-56da26f76bfe

Slideflow now supports training and deploying [tissue segmentation](https://slideflow.dev/segmentation) models, both via the programmatic interface and in [Slideflow Studio](https://slideflow.dev/studio#tissue-segmentation). Tissue segmentation models can be trained in binary, multiclass, or multilabel mode using [labeled ROIs](https://slideflow.dev/studio#roi-annotations). Segmentation operates at the whole-slide level: models are trained on randomly cropped sections of the slide thumbnail at a specified resolution.

Training segmentation models

Segmentation models are configured using `SegmentConfig`, which determines the segmentation architecture (U-Net, FPN, DeepLabV3, etc.), the image resolution for segmentation in microns per pixel (MPP), and other training parameters.

```python
from slideflow import segment

# Create a config object.
config = segment.SegmentConfig(mpp=20, mode='binary', arch='Unet')
```


Models can be trained with `slideflow.segment.train()`. The trained model is saved in the given destination directory as `model.pth`, alongside an auto-generated `segment_config.json` file describing the architecture and parameters.

```python
...

# Load a dataset.
project = sf.Project(...)
dataset = project.dataset(...)

# Train the model.
segment.train(config, dataset, dest='path/to/output')
```


Once trained, tissue segmentation models can either be used for slide-level QC or to generate ROIs.

Using models for QC

The new `slideflow.slide.qc.Segment` class provides an easy interface for generating QC masks from a segmentation model (e.g., for a model trained to identify tumor regions, pen marks, etc). This class takes a path to a trained segmentation model as an argument, and otherwise can be used for QC as outlined in the [documentation](https://slideflow.dev/slide_processing#filtering).

```python
import slideflow as sf
from slideflow.slide import qc

# Load the slide.
wsi = sf.WSI('/path/to/slide', ...)

# Create the QC algorithm.
segmenter = qc.Segment('/path/to/model.pth')

# Apply QC to the slide.
applied_mask = wsi.qc(segmenter)
```


For multiclass segmentation models, `qc.Segment` provides [additional arguments](https://slideflow.dev/segmentation#generating-qc-masks) to customize how the model should be used for QC.

Generating ROIs

The same `qc.Segment` class can also be used to generate regions of interest (ROIs). Use `Segment.generate_rois()` to generate and apply ROIs to a single slide:

```python
...

# Create the QC algorithm.
segmenter = qc.Segment('/path/to/model.pth')

# Generate and apply ROIs to a slide.
roi_outlines = segmenter.generate_rois(wsi)
```


Or use `Dataset.generate_rois()` to create ROIs for an entire dataset:

```python
import slideflow as sf

# Load a project and dataset.
project = sf.load_project('path/to/project')
dataset = project.dataset()

# Generate ROIs for all slides in the dataset.
dataset.generate_rois('path/to/model.pth')
```


Deploying in Studio

The slide widget in Studio now has a "Segment" section. A trained segmentation model can be loaded and used for either QC or to generate ROIs. Further details regarding use are available in the [documentation](https://slideflow.dev/studio#tissue-segmentation).

CycleGAN Stain Normalization

Slideflow now includes a CycleGAN-based stain normalizer, `'cyclegan'`. Our implementation is based on the work of [Zingman et al.](https://github.com/Boehringer-Ingelheim/stain-transfer). The stain normalization algorithm is a two-step process using two separate GANs: the H&E image to be transformed is first converted into Masson's trichrome (MT) by GAN-1, then converted back to H&E by GAN-2. By default, pretrained weights [provided by Zingman](https://osf.io/byf27/) are used, although custom weights can also be provided.

At present, CycleGAN stain normalization requires PyTorch. If you would like us to port GAN normalizers to the Tensorflow backend, please head to our ongoing [Discussion](https://github.com/jamesdolezal/slideflow/discussions/343) and let us know!

This method can be used like any other stain normalizer:

```python
# Configure training parameters to use CycleGAN stain normalization.
params = sf.ModelParams(..., normalizer='cyclegan')
```
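
The normalizer can also be built and applied to images directly, as with the other stain normalization methods. The following is a minimal sketch, assuming `sf.norm.autoselect` accepts `'cyclegan'` under the PyTorch backend and that `img` is a tile image already loaded in memory:

```python
import slideflow as sf

# Build the CycleGAN normalizer (assumes the PyTorch backend is active).
cyclegan = sf.norm.autoselect('cyclegan')

# Normalize a single tile image (e.g., a uint8 RGB array).
normalized = cyclegan.transform(img)
```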


Other New Features
- Stain normalizers can now augment an image without also normalizing, using the new `.augment()` method.

```python
import slideflow as sf

# Get a Macenko normalizer.
macenko = sf.norm.autoselect('macenko')

# Perform stain augmentation.
img = macenko.augment(img)
```


- Expanded support for tile aggregation methods used to reduce tile-level predictions to slide- or patient-level predictions. The `reduce_method` argument to `Project.train()` and `.evaluate()` now supports `'median'`, `'sum'`, `'min'`, and `'max'` (in addition to the previously supported `'average'` and `'proportion'`), as well as arbitrary callable functions. For example, to define slide-level predictions as the 75th percentile of tile-level predictions:

```python
import numpy as np

Project.train(
    ...,
    reduce_method=lambda x: np.percentile(x, 75)
)
```

- New utility function `Dataset.get_unique_roi_labels()` for getting a list of all unique ROI labels in a dataset.
- Improve inference speed of PyTorch feature extractors when called on `uint8` images.
- Much faster generation of tile-level predictions for MIL models.
- Add function `sf.mil.get_mil_tile_predictions()`, which functions the same as `sf.mil.save_mil_tile_predictions()` but returns a pandas DataFrame (see the sketch below).
- Add the ability to calculate tile-level uncertainty for MIL models trained with UQ, by passing `uq=True` to `sf.mil.get_mil_tile_predictions()`.
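
As an illustration of the last two items, tile-level predictions might be retrieved as a DataFrame as sketched below. Apart from `uq=True`, the argument names and ordering are assumptions modeled on the other `sf.mil` utilities; consult the documentation for the exact signature.

```python
import slideflow as sf

# Load the project and dataset used for MIL evaluation
# (placeholder paths and tile geometry).
project = sf.load_project('/path/to/project')
dataset = project.dataset(tile_px=224, tile_um=128)

# Retrieve tile-level predictions as a pandas DataFrame.
# Only uq=True is taken from the release notes; the model path, dataset,
# and bags arguments below are assumptions.
df = sf.mil.get_mil_tile_predictions(
    '/path/to/mil_model',
    dataset,
    bags='/path/to/bags',
    uq=True
)
```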

Dependencies

Dependencies are largely unchanged. Updates include:
- Tissue segmentation requires the `segmentation-models-pytorch` package.

Known Issues

- **Tissue segmentation is performed at the whole-slide level (based on cropped thumbnails), and performs best at lower magnifications (microns-per-pixel of 10 or greater)**. Attempting to train or deploy a tissue segmentation model at higher magnification may significantly increase memory requirements. Optimization work is ongoing to reduce memory requirements when training and deploying tissue segmentation models that operate at higher magnification.

2.2.2

This is a *minor*, bug-fix release. See the [Version 2.2 release notes](https://github.com/jamesdolezal/slideflow/releases/tag/2.2.0) for details about the latest major release.

Changes
- Fix bug with GPU stain augmentation in PyTorch (`ValueError: Stain augmentation (n) requires a stain normalizer, which was not provided`)
- Fix bug with generating intermediate layer activations from a PyTorch model
- Fix bug in Otsu's thresholding if `roi_method == 'outside'` (340) - thanks matte-esse
- Improve handling of edge case where there is 1 tile in a slide
- Fix rare "LiveError" when generating feature bags
- Slideflow Studio: fix handling the case where there are no tiles in a slide (e.g. a JPEG image smaller than the tile size)
- Slideflow Studio [Cellpose extension]: Fix inconsistent transparency for displayed masks when zooming out
- Slideflow Studio [Cellpose extension]: Fix bug with whole-slide cell segmentation

2.2.1post1

This is a *minor*, bug-fix release. See the [Version 2.2 release notes](https://github.com/jamesdolezal/slideflow/releases/tag/2.2.0) for details about the latest major release.

Changes
- Fix ballooning memory requirements when training MIL models on large datasets for many epochs
- Fix documentation typo (337) - thanks luiscarm9
- Fix stain augmentation string parsing in PyTorch (335) - thanks luiscarm9
- Fix bug when specifying `normalizer=None` during feature bag generation
- Fix bug where feature generation from a WSI would hang when using a PyTorch feature extractor and non-OpenCV stain normalizer
- Fix slide thumbnail orientation when using a transformation (rotation)
- Fix bug with `WSI.align_tiles_to(..., align_by='tile')`
- Fix rendering of ALT hover and ROI labeling in Slideflow Studio on high-DPI devices
- Add support for loading float16 bags for MIL models, auto-converting to float32
- Improve check for empty arrays in DatasetFeatures (332) - thanks Mr-Milk
- Add forward compatibility for upcoming MIL hyperparameters
