Slideflow

2.2

New MIL Features

- A new fully transformer-based model, `"bistro.transformer"` (from [this paper](https://www.cell.com/cancer-cell/pdf/S1535-6108(23)00278-7.pdf))
- New `aggregation_level` MIL config option. If any patients have multiple slides, use `aggregation_level="patient"`. [316]
- MIL models can be further customized with the new argument `mil_config(model_kwargs={...})`, which passes keyword arguments through to the model initializer. For example, specify the softmax temperature and attention gating for the `Attention_MIL` model via the `temperature` and `attention_gate` keyword arguments.

```python
from slideflow.mil import mil_config

config = mil_config(
    'attention_mil',
    model_kwargs={'temperature': 0.3},
    ...
)
```


- New experimental uncertainty quantification support for MIL models. Enable uncertainty estimation by passing `uq=True` to either `P.train_mil` or `P.evaluate_mil`. The model must support a `uq` argument in its `forward()` function. At present, MIL UQ is only available for the `Attention_MIL` model (see the sketch after this list).
- New class initializer `DatasetFeatures.from_bags()`, for loading a `DatasetFeatures` object from previously generated feature bags. This makes it easier to perform latent space exploration and visualization using `DatasetFeatures.map_activations()` (see [docs](https://slideflow.dev/posthoc/#mapping-activations))
- New `sf.mil.MILFeatures` class to assist with calculating and visualizing last-layer activations from MIL models, prior to final logits. This class is analogous to the `DatasetFeatures` interface, but for MIL model layer activations.
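
A minimal sketch of MIL training with uncertainty estimation, assuming a project `P` with training/validation datasets and feature bags already prepared (the paths and outcome names below are hypothetical):

```python
from slideflow.mil import mil_config

# Attention_MIL is currently the only architecture with UQ support.
config = mil_config('attention_mil')

P.train_mil(
    config=config,
    outcomes='HPV_status',
    train_dataset=train,
    val_dataset=val,
    bags='/path/to/bags',
    uq=True  # enables experimental uncertainty estimation
)
```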

Slideflow Studio Updates
The latest version of Slideflow Studio includes a number of usability improvements and new features.
- **ROI labels**: You can now [assign labels](https://slideflow.dev/#studio-roi) to Regions of Interest (ROIs) in Studio. These labels can be used for downstream [strongly-supervised training](https://slideflow.dev/tile_labels), where labels are determined from ROIs rather than inherited from the slide label.
- **ALT hover**: Press Left ALT while hovering over a heatmap to show the raw prediction/attention values beneath your cursor.
- **Progress bars**: A progress bar is now displayed when generating predictions for a slide.
- **Tile predictions with MIL**: Show tile-level predictions and attention for MIL models by right-clicking anywhere on a slide.
- **GAN seed tracking**: Keep track of GAN seeds with easy saving and loading. Quickly scroll through seeds by pressing left/right on your keyboard.
- Scale sidebar icons with font size
- Improved low-memory mode for MIL models (supports devices with < 8 GB RAM)
- Preserve ROIs and slide settings when changing models
- Default to Otsu's thresholding instead of grayspace filtering, for improved efficiency
- Various bug fixes and stability improvements

Documentation Expansion
Documentation at https://slideflow.dev has been further expanded, and a new **Developer Notes** section has been added. Developer Notes are intended to provide a deeper dive into selected topics of interest for developers or advanced users. Our first developer notes include:
- **TFRecords: Reading and Writing**: a detailed description of our TFRecord data format, with examples of how to create, inspect, and read from these files.
- **Dataloaders: Sampling and Augmentation**: descriptions of how to create PyTorch `DataLoaders` or Tensorflow `tf.data.Datasets` and apply custom image transformations, labeling, and sampling. Includes a detailed examination of our oversampling and undersampling methods.
- **Custom Feature Extractors**: A look at how to construct custom feature extractors for MIL models.
- **Strong Supervision with Tile Labels**: An example of how Region of Interest (ROI) labels can be leveraged for training strongly-supervised models.

In addition to these new Dev Notes, we've also added two tutorials ([Tutorial 7: Training with Custom Augmentations](https://slideflow.dev/tutorial7) and [Tutorial 8: Multiple-Instance Learning](https://slideflow.dev/tutorial8)), as well as expanded our [Slideflow Studio](https://slideflow.dev/studio) docs to reflect the latest features.

Other New Features
- Align two slides with `sf.WSI.align_to()`. This coarse alignment is fast and effective for slides in the proper orientation, without distortion. For higher accuracy, use `sf.WSI.align_tiles_to()`, which fine-tunes the alignment at each tile location (see the sketch below).
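
A minimal sketch of both alignment methods, assuming two `sf.WSI` objects of the same specimen (the paths below are hypothetical):

```python
import slideflow as sf

wsi = sf.WSI('/path/to/slide_a.svs', tile_px=299, tile_um=302)
target = sf.WSI('/path/to/slide_b.svs', tile_px=299, tile_um=302)

# Fast, coarse alignment to the target slide.
wsi.align_to(target)

# Slower, higher-accuracy alignment, fine-tuned at each tile location.
wsi.align_tiles_to(target)
```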
- Rotate a whole-slide image upon initial load using the new `transforms` argument [**Libvips only**]. This is particularly useful when attempting to align slides:

```python
wsi = sf.WSI(..., transforms=[sf.slide.ROTATE_90_CLOCKWISE])
```


- Use OpenSlide bounding boxes, if present, with the new WSI argument `use_bounds` [**Libvips only**]. If `True`, will use existing OpenSlide bounding boxes. If a tuple, will crop the whole-slide image to the specified bounds. This is particularly useful when aligning slides.

```python
# Use OpenSlide bounds
wsi = sf.WSI(..., use_bounds=True)

# Manually define the bounding box
wsi = sf.WSI(..., use_bounds=(41445, 112000, 48000, 70000))
```


- New [Indexable PyTorch dataset](https://slideflow.dev/dataloaders/#direct-indexing), for easier integration of Slideflow datasets into external projects
- Improved progress bars during TFRecord interleaving
- Add support for protobuf version 4
- Train GANs at odd image sizes with the new `resize` argument (see [docs](https://slideflow.dev/stylegan))
- Train a GAN conditioned on tile-level labels (see [docs](https://slideflow.dev/stylegan))

Version Requirements
Version requirements are largely unchanged. Notable differences include:
- The new `PLIP` feature extractor requires the `transformers` package.
- The `TransMIL` model requires the `nystrom_attention` package.
- protobuf version compatibility expanded to support version 4
- imgui version requirement increased to >= 2.0.0

**Note: the default backend has been switched from Tensorflow to PyTorch.** The backend can be manually set using the environment variable `SF_BACKEND`:


```bash
export SF_BACKEND=tensorflow
```

2.2.0

Highlights
Slideflow 2.2 further extends multiple-instance learning (MIL) capabilities, with the introduction of **multi-magnification MIL**, new models, experimental uncertainty quantification, and various other enhancements. This release also includes two new pretrained feature extractors - **HistoSSL** and **PLIP** - as well as support for the self-supervised learning framework, **DINOv2**. Slideflow Studio has been updated with several new features and quality of life improvements. Finally, the documentation has been enriched with Developer Notes and new tutorials, providing deeper insights on select topics.

Table of Contents
1. **Multi-Magnification MIL**
2. **New Feature Extractors**
    a. Pretrained
    b. DINOv2
3. **New MIL Features**
4. **Slideflow Studio Updates**
5. **Documentation Expansion**
6. **Other New Features**
7. **Version Requirements**


Multi-Magnification MIL
Slideflow now supports multi-modal MIL, with feature bags generated from multiple feature extractors at different magnifications. Multi-magnification MIL offers potential advantages if there are valuable histologic features at both low and high magnification.

Working with multi-magnification MIL is easy - you can use the same training API as standard MIL models. Simply provide multiple bag paths (one for each magnification) and use the new `"mm_attention_mil"` model.

```python
from slideflow.mil import mil_config

# Configure a multimodal MIL model.
config = mil_config('mm_attention_mil', lr=1e-4)

# Set the bag paths for each modality.
bags_10x = '/path/to/bags_10x'
bags_40x = '/path/to/bags_40x'

P.train_mil(
    config=config,
    outcomes='HPV_status',
    train_dataset=train,
    val_dataset=val,
    bags=[bags_10x, bags_40x]
)
```


Slideflow Studio also supports multi-magnification MIL models, allowing you to visualize attention and tile-level predictions from each mode separately.

New Feature Extractors
We've introduced support for two new pretrained feature extractors, as well as the self-supervised learning framework [DINOv2](https://github.com/jamesdolezal/dinov2).

Pretrained
The new pretrained feature extractors include:
- **[HistoSSL](https://github.com/owkin/HistoSSLscaling)**: a pan-cancer, pretrained ViT-based iBOT model (`iBOT[ViT-B]PanCancer`). [Paper](https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2.full.pdf)
- **[PLIP](https://github.com/PathologyFoundation/plip)**: feature encoder used for a CLIP model finetuned on pathology images and text descriptions. [Paper](https://www.nature.com/articles/s41591-023-02504-3)
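
Both can be built through the standard extractor interface; a minimal sketch, assuming the extractor names `'histossl'` and `'plip'` follow the same pattern as other pretrained extractors:

```python
import slideflow as sf

# Build the new pretrained feature extractors by name.
histossl = sf.model.build_feature_extractor('histossl', tile_px=299)
plip = sf.model.build_feature_extractor('plip', tile_px=299)
```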

Licenses and citations are available for all feature extractors through the new `.license` and `.citation` attributes.

```python
>>> ctranspath = sf.model.build_feature_extractor('ctranspath', tile_px=299)
>>> ctranspath.license
```

2.1.1

This is a *minor*, bug-fix release. See the [Version 2.1 release notes](https://github.com/jamesdolezal/slideflow/releases/tag/2.1.0) for details about the latest major release.

Changes
- Fix error when providing a feature extractor argument to `sf.mil.predict_slide()` [310] (thanks bplee)
- Fix Neptune logging with latest API [312, 313, 314] (thanks luiscarm9)
- Fix unknown argument `save_checkpoints` when performing SMAC hyperparameter search in PyTorch [309]
- Fix MIL inference support on macOS (MPS) devices
- Fix attention heatmap generation from MIL model `TransMIL`
- Fix feature generation during MIL inference from finetuned SimCLR model
- Opening a slide in Studio with `WSI.view()` will load the slide at the same `tile_px`/`tile_um`
- `Dataset.rebuild_index()` will now remove old index files, fixing various `allow_pickle` errors when using old TFRecords
- Fix minor inconsistency in masking algorithm with Reinhard between OpenCV/Numpy and Tensorflow/PyTorch implementations
- Fix "Random pixel interpolation not implemented" error when using `augment=True`
- Minor documentation typo fixes

2.1.0

Highlights
Slideflow 2.1 includes a number of new features and optimizations, with a focus on improving Multiple-Instance Learning (MIL) model development and deployment. Key improvements include an **MIL / Attention Heatmaps extension** for Slideflow Studio, improvements to both **feature extraction** and **MIL training**, **new QC algorithms**, and dozens of other enhancements and bug fixes.

Table of Contents
1. **Slideflow Studio: MIL & Attention Heatmaps**
2. **MIL Training Enhancements**
    a. Rebuilding feature extractors used for MIL
    b. Single-slide predictions, without feature bags
    c. QOL improvements
3. **Streamlined Feature Extraction**
    a. Features from layer activations of an ImageNet-pretrained model
    b. Features from layer activations of a fine-tuned model
    c. Features from a public pretrained network
    d. Features from a SimCLR model (self-supervised learning)
    e. Using feature extractors
4. **Slideflow Studio: Tile Extraction Preview & More**
5. **Slide Filtering / QC Updates**
    a. DeepFocus
    b. GaussianV2
6. **Smaller updates**
    a. PyTorch Image Preprocessing Improvements
    b. Mini-batch sample diversity for PyTorch dataloaders
    c. TFRecord optimizations
    d. Other new features
    e. Other improvements
    f. Bug fixes

Slideflow Studio: MIL & Attention Heatmaps
Slideflow Studio now includes an [MIL extension](https://slideflow.dev/studio/#multiple-instance-learning), allowing you to generate MIL predictions for slides and visualize attention as a heatmap.

Start by navigating to the Extensions tab in the bottom-left corner, and enable the "Multiple-instance Learning" extension.

![image](https://github.com/jamesdolezal/slideflow/assets/48372806/c94974ba-7ff6-4f30-85ed-490c1f3bd1ee)

A new icon will appear in the left-hand toolbar. Use this button to open the MIL widget. Models can be loaded by clicking the "Load MIL model" button, with "File -> Load MIL Model...", or by dragging-and-dropping an MIL model folder onto the window.

Information about the feature extractor and MIL model will be shown in the toolbar. MIL model architecture and hyperparameters can be viewed by clicking the "HP" button. Click "Predict Slide" to generate a whole-slide prediction. If applicable, attention will be displayed as a heatmap. The heatmap color and display can be customized in the Heatmap widget.

![image](https://github.com/jamesdolezal/slideflow/assets/48372806/d5a838df-a654-4538-bd94-1aa6a63de32d)

MIL Training Enhancements
Several changes in the MIL training process have been made to improve the user experience and facilitate deployment of trained MIL models on new slides.

Rebuilding feature extractors used for MIL
One of the previous challenges with MIL models was the reliance on generated feature "bags", even for model evaluation. Slideflow now includes tools to generate predictions from MIL models without manually generating feature bags, greatly simplifying evaluation and single-slide testing.

When image tile features are calculated and exported for a dataset (either with `Project.generate_feature_bags()` or `DatasetFeatures.to_torch()`), the feature extractor configuration is now saved as `bags_config.json` in the same directory as the exported feature bags. This configuration file contains all information necessary for rebuilding the feature extractor. An example file is shown below.

```json
{
  "extractor": {
    "class": "slideflow.model.extractors.retccl.RetCCLFeatures",
    "kwargs": {
      "center_crop": true
    }
  },
  "normalizer": {
    "method": "macenko",
    "fit": {
      "stain_matrix_target": [
        [0.5062568187713623, 0.22186939418315887],
        [0.7532230615615845, 0.8652154803276062],
        [0.4069173336029053, 0.42241501808166504]
      ],
      "target_concentrations": [1.7656903266906738, 1.2797492742538452]
    }
  },
  "num_features": 2048,
  "tile_px": 299,
  "tile_um": 302
}
```


The feature extractor can then be rebuilt with `sf.model.rebuild_extractor()`:

```python
from slideflow.model.extractors import rebuild_extractor

# Recreate the feature extractor and stain normalizer, if applicable
extractor, normalizer = rebuild_extractor("/path/to/bags_config.json")
```


Single-slide predictions, without feature bags
The new `sf.mil.predict_slide()` function allows you to generate a whole-slide prediction (and attention heatmap) from a saved MIL model, without requiring the user to manually generate feature bags.

This is accomplished by including feature extraction information in the `mil_params.json` file stored in MIL model folders. When performing single-slide inference, Slideflow will automatically rebuild the feature extractor, calculate features for all tiles in the given slide, and pass these features to the loaded MIL model.

You can generate single-slide predictions using a path to a slide:

```python
from slideflow.mil import predict_slide

slide = '/path/to/slide.svs'
model = '/path/to/mil_model'

# Calculate predictions and attention heatmap
y_pred, y_att = predict_slide(model, slide)
```


You can also generate single-slide predictions from a loaded `WSI` object, allowing you to customize slide processing or QC before generating predictions:

```python
import slideflow as sf
from slideflow.mil import predict_slide
from slideflow.slide import qc

# Load slide and apply Otsu thresholding
slide = '/path/to/slide.svs'
wsi = sf.WSI(slide, ...)
wsi.qc(qc.Otsu())

# Calculate predictions and attention heatmap
y_pred, y_att = predict_slide('/path/to/mil_model', wsi)
```


QOL improvements for MIL training
Several smaller quality of life improvements have been made for MIL training. In addition to the feature extraction configuration, the `mil_params.json` file now also includes information about the input and output shapes of the MIL network and outcome labels. An example file is shown below.

```json
{
  "trainer": "fastai",
  "params": {
    ...
  },
  "outcomes": "histology",
  "outcome_labels": {
    "0": "Adenocarcinoma",
    "1": "Squamous"
  },
  "bags": "/mnt/data/projects/example_project/bags/simclr-263510/",
  "input_shape": 1024,
  "output_shape": 2,
  "bags_encoder": {
    "extractor": {
      "class": "slideflow.model.extractors.simclr.SimCLR_Features",
      "kwargs": {
        "center_crop": false,
        "ckpt": "/mnt/data/projects/example_project/simclr/00001-EXAMPLE/ckpt-263510.ckpt"
      }
    },
    "normalizer": null,
    "num_features": 1024,
    "tile_px": 299,
    "tile_um": 302
  }
}
```


When exporting feature bags for MIL training with `Project.generate_feature_bags()`, memory consumption has been reduced by calculating feature bags in smaller batches of slides. [261]

Finally, when validating or evaluating MIL models with a categorical outcome, accuracy within each class is reported separately. [265] (thank you andrewsris)


```
INFO Validation metrics for outcome histology:
INFO slide-level AUC (cat 0): 0.993 AP: 0.998 (opt. threshold: 0.565)
INFO slide-level AUC (cat 1): 0.993 AP: 0.974 (opt. threshold: 0.439)
INFO Category 0 acc: 97.3% (146/150)
INFO Category 1 acc: 92.3% (36/39)
```


Streamlined Feature Extraction
Extracting features from image tiles - commonly used for training [Multiple-instance Learning (MIL)](http://slideflow.dev/mil/) models - has been streamlined with `sf.model.build_feature_extractor()`, providing a common API for preparing many types of feature extractors.

Features from layer activations of an ImageNet-pretrained model
Generate features from a neural network pretrained on ImageNet simply by passing the name of the network to `sf.model.build_feature_extractor()`. If a tile size is specified, input tiles will be center cropped before calculating features.

```python
from slideflow.model import build_feature_extractor

resnet50_extractor = build_feature_extractor(
    'resnet50',
    tile_px=299
)
```


This will calculate features using activations from the post-convolutional layer of the network. You can also concatenate activations from multiple layers and apply pooling for layers with 2D output shapes.

```python
extractor = build_feature_extractor(
    'resnet50',
    layers=['conv1_relu', 'conv3_block1_2_relu'],
    pooling='avg',
    tile_px=299
)
```


Features from layer activations of a fine-tuned model
Generate features from a model fine-tuned in Slideflow by calculating activations at any number of arbitrary neural network layers.

```python
extractor = build_feature_extractor(
    '/path/to/trained_model.zip'
)
```


Features from a public pretrained network
Generate features from the pretrained CTransPath or RetCCL networks. Weights for these pretrained networks will be automatically downloaded from [HuggingFace](https://huggingface.co/jamesdolezal/retccl/).

```python
extractor = build_feature_extractor(
    'retccl',
    tile_px=299
)
```


Features from a SimCLR model (self-supervised learning)
Generate features from a model trained with [self-supervised learning](https://slideflow.dev/ssl) using SimCLR. Specify a saved model folder or path to a model checkpoint (`*.ckpt`).

```python
extractor = build_feature_extractor(
    'simclr',
    ckpt='/path/to/simclr.ckpt'
)
```


Using feature extractors

All feature extractors can then be used to calculate features from individual image tiles, [generate feature bags](https://slideflow.dev/mil/#exporting-features) for MIL training, or calculate features for an entire slide using a loaded `WSI` object.
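
For example, a minimal sketch of these usage patterns, assuming an `extractor` built as above, a project `P`, and hypothetical paths (the exact call signatures are assumptions; consult the linked docs):

```python
import slideflow as sf

# Calculate features for a batch of image tiles.
features = extractor(image_batch)

# Generate feature bags for MIL training.
P.generate_feature_bags(extractor, dataset, outdir='/path/to/bags')

# Calculate features across all tiles of a loaded slide.
wsi = sf.WSI('/path/to/slide.svs', tile_px=299, tile_um=302)
slide_features = extractor(wsi)
```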

Slideflow Studio: Tile Extraction Preview & More
Studio now makes it easy to quickly preview tile extraction. Tile extraction parameters - such as slide-level processing / QC, grayspace/whitespace filtering, and stride - can be customized in the "Slide Processing" section. The "Display" section allows users to preview tile extraction by displaying outlines around tiles. When generating whole-slide predictions from a loaded model, only the shown tiles will be used.

![image](https://github.com/jamesdolezal/slideflow/assets/48372806/a4911b16-9b5a-4289-9d46-41c95f31acda)

Additional updates to Studio include:
- Gracefully handle invalid/incompatible slides with an error message, instead of crashing
- Zoom to a specific MPP in a slide with `View -> Zoom to MPP (Ctrl +/)` [270] (thank you skochanny)
- Remove status bar when capturing main view [270]
- Add macOS M1 / MPS compatibility when generating StyleGAN images
- Fix ROI annotations on high-DPI devices
- Various stability improvements & bug fixes

Slide Filtering / QC Updates (DeepFocus, GaussianV2)
Slideflow includes two new slide filtering / QC algorithms: `DeepFocus` and `GaussianV2`.

DeepFocus
An official implementation of the DeepFocus QC algorithm is now included in Slideflow, and can be used like any other QC algorithm. By default, DeepFocus is applied to slides at 40X magnification, although this can be customized with the `tile_um` argument.

```python
from slideflow.slide import qc

deepfocus = qc.DeepFocus(tile_um='20x')
slide.qc(deepfocus)
```


You can also retrieve raw predictions from the DeepFocus model by passing the argument `threshold=False`:


```python
preds = deepfocus(slide, threshold=False)
```


GaussianV2
A new, optimized Gaussian ("blur") filter has been implemented as `sf.slide.qc.GaussianV2`. This method reduces computational time and memory consumption by first splitting the slide into smaller chunks, performing Gaussian filtering on each chunk separately (accelerated with multiprocessing), and then merging the chunks (eliminating areas of overlap to reduce stitching artifacts). `GaussianV2` will be used by default when using the QC methods `'blur'` or `'both'`.
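
A minimal sketch of applying the new filter directly to a loaded slide (the `wsi` object is assumed):

```python
from slideflow.slide import qc

# Apply the chunked, multiprocessing-accelerated Gaussian blur filter.
wsi.qc(qc.GaussianV2())
```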

Smaller updates

Slideflow includes a number of other new features and enhancements, as detailed below.

PyTorch Image Preprocessing Improvements
Image preprocessing and augmentations in the PyTorch backend have been refactored to use torchvision transformations. This improves computational efficiency and makes custom transformation pipelines easier to work with, yielding a 3-4x speed improvement in PyTorch Gaussian blur augmentation [145] and faster PyTorch stain normalization.

Custom PyTorch transformations or augmentations can be used in any PyTorch dataloader by passing a callable function to `Dataset.torch(augment=...)` or `Dataset.torch(transform=...)`. For example, to apply a resize transformation on images:

```python
import slideflow as sf
from torchvision import transforms

# Load a project and dataset
P = sf.load_project(...)
dataset = P.dataset(tile_px=299, tile_um=302)

# Establish a resize transformation
resize = transforms.Resize(512)

# Create a PyTorch dataloader with this transformation applied to images
dl = dataset.torch(transform=resize)
```


Custom transformations can also be used in any Tensorflow dataset using the same API. Pass a callable function to the `transform` argument of `Dataset.tensorflow()`:

```python
import slideflow as sf
import tensorflow as tf

@tf.function
def custom_resize(image):
    return tf.image.resize(image, (512, 512))

# Load a project and dataset
P = sf.load_project(...)
dataset = P.dataset(tile_px=299, tile_um=302)

# Create a Tensorflow dataset with this resize transformation applied to images
dl = dataset.tensorflow(transform=custom_resize)
```


Mini-batch sample diversity for PyTorch dataloaders
This update addresses a long-standing issue where mini-batches assembled with PyTorch tended to contain tiles from repeat slides. PyTorch dataloaders now enforce greater sample diversity, reducing the chance that multiple tiles from the same slide will be present in a single batch (unless the number of slides is less than the batch size). Performance auditing has revealed that this change may improve model generalizability.

TFRecord optimizations
TFRecord index files now store tile location information, greatly improving the efficiency of reading TFRecords by tile location (which is performed by various internal functions, such as calculating dataset features). Existing TFRecord indices will be automatically updated with location information when used, but this process can be manually triggered with `Dataset.rebuild_index()`. Tile locations can be read from a TFRecord's index file with `sf.io.get_locations_from_tfrecord()`.
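
A minimal sketch of both operations, assuming a project `P` and a hypothetical TFRecord path:

```python
import slideflow as sf

# Rebuild TFRecord indices, adding tile location information.
dataset = P.dataset(tile_px=299, tile_um=302)
dataset.rebuild_index()

# Read tile locations directly from a TFRecord's index file.
locations = sf.io.get_locations_from_tfrecord('/path/to/slide.tfrecords')
```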

Other new features
- Add support for slide images that do not contain 'levels', such as multi-page TIFFs and Versa-scanned SVS files. (Thank you emmachancellor and skochanny)
- `Dataset.verify_slide_names()`: verify that TFRecord filenames match the slide names inside
- `sf.WSI.area()`: Calculate the area of a slide that has passed QC (see the sketch after this list).
- `sf.slide.backends.vips.vips_padded_crop()`: enable extracting tiles outside the bounds of a slide, padding out-of-bounds areas with a white or black background.
- New `use_edge_tiles` option for `sf.WSI`. If True, will allow extracting edge tiles from the slide. Empty areas are rendered as white, in both cuCIM and VIPS backends.
- Add optional `loc`, `ncol`, and `legend_kwargs` arguments (passed to `ax.legend()`) to `Slidemap.plot()`, for customizing the UMAP plot axes. [275] (Thank you emmachancellor)
- Add support for training SimCLR with stain augmentation
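
A minimal sketch of the new `sf.WSI.area()` helper referenced above, with a hypothetical slide path (consult the API docs for the returned units):

```python
import slideflow as sf
from slideflow.slide import qc

# Load a slide and apply Otsu's thresholding QC.
wsi = sf.WSI('/path/to/slide.svs', tile_px=299, tile_um=302)
wsi.qc(qc.Otsu())

# Calculate the area of the slide that has passed QC.
area = wsi.area()
```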

Other improvements
- Improve clarity of slide backend error messages [266] (thank you cswpy)
- Include Libvips version info in `sf.about()`
- Improve PyTorch training speed by using channels-last memory format.
- Improve handling of `linalg` errors during Macenko normalization. If an error is encountered with Macenko normalization, the original image is returned instead of raising the error. This behavior can be disabled by passing `StainNormalizer.transform(allow_errors=False)` (see the sketch after this list).
- Improve quality of slide thumbnails in the PDF extraction report. Also adds the ability to provide thumbnail keyword arguments when extracting tiles via `thumb_kwargs` (thank you skochanny)
- Improved CPU core detection on Linux. All functions which detect the number of CPU cores now use `sf.util.num_cpu()` instead of `os.cpu_count()`. This will first check available cores with `os.sched_getaffinity(0)`, which reflects available CPU cores with OS-level scheduling. If this fails (e.g. on Windows and macOS systems), it will default to `os.cpu_count()`.
- SimCLR default arguments have been updated to reflect the default parameters of the original paper:
    - `learning_rate`: 0.3 -> 0.075
    - `learning_rate_scaling`: 'linear' -> 'sqrt'
    - `weight_decay`: 1e-6 -> 1e-4
- Fix issue where Otsu's thresholding on MRXS files would occasionally fail to identify any foreground tissue. This was due to very small images in the MRXS pyramid. (thank you siddhir)
- Fix issue where MRXS slides could not be extracted when using a buffer, due to the presence of an associated folder with the MRXS file format. [300]
- Close file handles when deleting PyTorch dataloader
- Improve accuracy of mosaic map grid
- Deprecate `Project.generate_features_for_clam()`, replacing it with `Project.generate_feature_bags()`
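
A minimal sketch of disabling the Macenko fallback described above; the `sf.norm.autoselect` helper and the `image` variable are assumptions for illustration:

```python
import slideflow as sf

# Build a Macenko stain normalizer.
normalizer = sf.norm.autoselect('macenko')

# Raise linalg errors instead of silently returning the original image.
normalized = normalizer.transform(image, allow_errors=False)
```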

Bug fixes
- Fix reported concordance index for survival models, which was previously being incorrectly reported as `1 - c_index`
- Fix 'input Tensor too large' error with PyTorch GPU normalizers. Fix is applied by capping the batch size for normalization at 32.
- Fix `sf.DatasetFeatures.to_csv()` [260]
- Fix mixed precision training in PyTorch
- Improve protobuf dependency versioning. Slideflow requires protobuf version <=3.20.\*. Previously, setup.py listed protobuf requirements as <=3.20.2; this has been updated to <3.21 to include any additional 3.20.\* patch releases. This also specifies tensorflow_datasets<4.9.0 to prevent protobuf version >= 4. [289] (thank you sebp)
- Pin required version of cellpose to `<2.2`
- Pin required version of pandas to `<2`
- Pin required version of timm to `<0.9` (thank you quark412)

2.0.5

This is a *minor*, bug-fix release. See the [Version 2.0 release notes](https://github.com/jamesdolezal/slideflow/releases/tag/2.0.0) for details about the latest major release.

Changes
- Fix `ConcatOp` error when training multi-modal models with continuous variables [282]
- Minor docstring updates and typo fixes [281]

2.0.4

This is a *minor*, bug-fix and optimization release. See the [Version 2.0 release notes](https://github.com/jamesdolezal/slideflow/releases/tag/2.0.0) for details about the latest major release.

Bug fixes
- **Fix bug which caused the `predictions.parquet` file for MIL models to have incorrect ground-truth labels.** This also impacted accuracy of results calculated with `Project.evaluate_mil()`.
- Fix bug with `Dataset.kfold_split()`
- Fix bug with `DatasetFeatures.to_csv()` [260]
- Fix bug with loading JPG/PNG files in Slideflow Studio
- Fix `DatasetFeatures.remove_slide()` when activations are not generated

Other changes
- Allow using relative paths with `sf.create_project()` [272]
- Update `Project.extract_tiles()` docstring [273]
- Add libvips version info in `sf.about()`, for easier troubleshooting
- Improve clarity of slide backend error messages [266]
- Remove status bar when capturing main view in Slideflow Studio [270]
- Improve accuracy of mosaic map grid
- Return the original image when `linalg` errors would be raised during Macenko stain normalization in the Tensorflow backend, instead of raising an error
- Improve documentation clarity regarding CPH backend support [276]
