FastEstimator

Latest version: v1.7.0

1.7.0

Release Note
Due to compatibility issues between TensorFlow and PyTorch, future releases might not include TensorFlow.

Backend:
• New framework backend: TF 2.15.1, torch 2.3.1, CUDA 12.2

Apphub:
• New Apphub: foundation_model - LoRA, GPT; image_generation - Stable Diffusion
• Updated the LeViT TensorFlow link

Dataset:
• Allowed custom directories

NumpyOp:
• Extended the Minmax NumpyOp to take a custom user-defined min and max (see the sketch below)
• Updated masks_in for multivariate ops
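
For context, a minimal sketch of how the extended Minmax op might be used inside a pipeline; the keyword names for the custom range (`min_value`/`max_value`) are assumptions for illustration, not confirmed against the released API.

```python
import fastestimator as fe
from fastestimator.dataset.data import mnist
from fastestimator.op.numpyop.univariate import ExpandDims, Minmax

train_data, eval_data = mnist.load_data()

pipeline = fe.Pipeline(
    train_data=train_data,
    eval_data=eval_data,
    batch_size=32,
    ops=[
        ExpandDims(inputs="x", outputs="x"),
        # By default Minmax rescales each sample using its own min/max;
        # the keyword names for the user-defined range below are assumed.
        Minmax(inputs="x", outputs="x", min_value=0.0, max_value=255.0),
    ])
```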

1.6.0

This is a new major release of FastEstimator. Here are the highlights of the new release:

Backend:
* New framework backend: TF 2.11, torch 2.0.1
* Introduced the `IfElse` helper function to help simplify code

Apphub:
* New Apphub: keypoint detection - HRNet, 3D segmentation - 3DUnet+, Line Search, LeViT
* Added a "use your own dataset" section to help with adapting to new tasks
* Updated the SimCLR apphub for better efficiency
* Fixed an in-place operation in the PyTorch PGGAN that caused the graph to fail randomly

Dataset:
* New dataset class: InterleaveDataset, which switches datasets on a per-step basis for multi-task learning (see the sketch after this list)
* Added keypoint support to the MSCOCO dataset
* Added the Pascal VOC and MedMNIST datasets
* Added filtering functionality to the CSV dataset class
* Created a custom pycocotools API to resolve pycocotools compilation issues
* Migrated hosting of several datasets to Google Drive for stability
* Improved the stability of BatchDataset probability sampling
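
For illustration, a rough sketch of how InterleaveDataset might be wired into a multi-task pipeline; the import path and the `datasets` constructor argument are assumptions based on the bullet above, not the confirmed signature.

```python
import numpy as np
import fastestimator as fe
from fastestimator.dataset import InterleaveDataset, NumpyDataset

# Two toy datasets standing in for two different tasks.
ds_task_a = NumpyDataset({"x": np.random.rand(100, 28, 28).astype("float32"),
                          "y_cls": np.random.randint(10, size=(100, 1))})
ds_task_b = NumpyDataset({"x": np.random.rand(100, 28, 28).astype("float32"),
                          "y_seg": np.random.randint(2, size=(100, 28, 28))})

# Assumed usage: InterleaveDataset alternates between its member datasets on a
# per-step basis, so each training step draws a batch from a single task.
train_data = InterleaveDataset(datasets=[ds_task_a, ds_task_b])

pipeline = fe.Pipeline(train_data=train_data, batch_size=32)
```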

Pipeline:
* Fixed eval logging for multi-dataset users
* Pipeline can now be instantiated without datasets (see the sketch below)
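
A minimal sketch of the dataset-free usage: a Pipeline built only from ops, then applied to a single sample via `transform`.

```python
import numpy as np
import fastestimator as fe
from fastestimator.op.numpyop.univariate import ExpandDims, Minmax

# A Pipeline with ops only and no train/eval datasets attached.
pipeline = fe.Pipeline(ops=[
    ExpandDims(inputs="x", outputs="x"),
    Minmax(inputs="x", outputs="x"),
])

# Re-use the same preprocessing on a single sample, e.g. at inference time.
sample = {"x": np.random.rand(28, 28).astype("float32")}
processed = pipeline.transform(data=sample, mode="infer")
```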

NumpyOp:
* Fixed an issue encountered with Onehot encoding
* Added a probability argument to the OneOf NumpyOp (see the sketch below)
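
A short sketch of the weighted OneOf; the `probs` keyword name is an assumption for illustration and may differ from the released argument name.

```python
from fastestimator.op.numpyop.meta import OneOf
from fastestimator.op.numpyop.univariate import GaussianBlur, GaussianNoise

# OneOf applies exactly one of its member ops per sample; the assumed `probs`
# argument weights how often each op is chosen instead of a uniform pick.
augment = OneOf(
    GaussianBlur(inputs="x", outputs="x"),
    GaussianNoise(inputs="x", outputs="x"),
    probs=[0.7, 0.3],
)
```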

CLI:
* Enabled the warmup, eager, and summary arguments in the run CLI

Network:
* Fixed an issue where model_lr was not printed at the end of training when a model's optimizer is None
* Fixed a performance issue in the torch UNet
* New patch-based inferencing class: Slicer

TensorOp:
* Enabled multi-dimensional support for cross-entropy loss (see the sketch after this list)
* Removed the MixLoss class and updated docstrings, since MixUp and CutMix no longer use it
* Updated focal loss's default mode to be consistent with other loss ops
* Resolved an L1 loss dimension mismatch issue
* Reworked focal loss
* Introduced RepeatOp for tensor operations
* Added a probability argument to the OneOf TensorOp
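
A minimal sketch of the multi-dimensional cross-entropy case, using a toy per-pixel classifier so the loss is computed over spatial dimensions; the model and shapes are illustrative only.

```python
import fastestimator as fe
from tensorflow.keras import Sequential, layers
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp

# Toy per-pixel classifier: y_pred has shape (batch, 32, 32, 2).
model = fe.build(
    model_fn=lambda: Sequential([layers.Conv2D(2, 1, activation="softmax", input_shape=(32, 32, 3))]),
    optimizer_fn="adam")

# With multi-dimensional support, a spatially shaped y_pred and per-pixel
# class labels y of shape (batch, 32, 32) can be fed to CrossEntropy directly.
network = fe.Network(ops=[
    ModelOp(model=model, inputs="x", outputs="y_pred"),
    CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
    UpdateOp(model=model, loss_name="ce"),
])
```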

Visualization:
* Added BatchDisplay and GridDisplay traces
* FE logging visualization now works with a single file
* Extended visualization to keypoints and masks

Trace:
* Reworked the CSV logger
* Added a classification AUC trace
* Reworked the Dice trace
* Fixed a restore issue with unhashable parameter loading

Traceability:
* Updated the PyTorch model summary for the traceability report
* Fixed the multi-GPU model traceability graph
* The traceability report now displays hardware information

Others:
* Updated the yapf settings to work with recent yapf versions
* New benchmarking tool for better speed and resource monitoring
* Added a new support matrix to help users install past FE versions
* Pinned the ipython version, since its recent release no longer supports Python 3.8 and below
* Updated the Mac installation guide

Thanks to everyone who provided feedback and contributed to FE.

1.5.2

This release features several fixes for dependency issues that caused problems during installation. Specifically, here is a list of the notable dependency changes (an illustrative sketch follows the list):
* numpy: specified a maximum numpy version, since recent numpy versions no longer support np.bool
* scikit-learn: changed the dependency from sklearn to scikit-learn, following the package's name change and occasional installation downtime
* uncertainty-calibration: upgraded to a more recent version due to its dependency on sklearn
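
For illustration only, the resulting pins might look roughly like the following setup-style list; the exact version bounds are assumptions, not the release's actual constraints.

```python
# Illustrative dependency pins reflecting the changes described above.
install_requires = [
    "numpy<1.24",               # recent numpy versions removed np.bool
    "scikit-learn",             # replaces the deprecated 'sklearn' package name
    "uncertainty-calibration",  # newer release that depends on scikit-learn
]
```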

Moreover, this release increases the stability of the package by resolving all open-ended dependency specifications, which should avoid future issues when installing this release.

Finally, this release also incorporates a critical bug fix: the TensorOp version of OneOf now works properly.
