NNI

2.6.1

Bug Fixes

* Fix a bug where the new TPE did not support dict metrics (a brief sketch of dict-metric reporting follows this list).
* Fix a missing comma. (Thanks to mrshu)
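
For context on the dict-metric fix above, an NNI trial may report a dict whose `default` entry is the value the tuner optimizes. A minimal sketch, where the metric names other than `default` and the numbers are illustrative:

```python
import nni

# Hedged sketch: report a dict metric; TPE (and other tuners) optimize the 'default' value.
params = nni.get_next_parameter()    # hyper-parameters sampled by the tuner
accuracy, loss = 0.93, 0.21          # placeholders for real evaluation results
nni.report_final_result({'default': accuracy, 'loss': loss})
```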

2.6

**NOTE**: NNI v2.6 is the last version that supports Python 3.6. Starting from the next release, NNI will require Python 3.7+.

Hyper-Parameter Optimization

Experiment

* The legacy experiment config format is now deprecated. [(doc of new config)](https://nni.readthedocs.io/en/v2.6/reference/experiment_config.html)
* If you are still using the legacy format, nnictl will show the equivalent new config on start. Please save it to replace the old one.
* nnictl now uses the [`nni.experiment.Experiment`](https://nni.readthedocs.io/en/stable/Tutorial/HowToLaunchFromPython.html) APIs as its backend. The output messages of the create, resume, and view commands have changed (a minimal launch sketch follows this list).
* Added Kubeflow and FrameworkController support to hybrid mode. [(doc)](https://nni.readthedocs.io/en/v2.6/TrainingService/HybridMode.html)
* The hidden tuner manifest file has been updated. This should be transparent to users, but if you encounter issues such as a tuner not being found, please try removing `~/.config/nni`.
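
As a reference for the `nni.experiment.Experiment` backend mentioned above, a minimal launch sketch; the trial command, code directory, search space, and port are illustrative assumptions, and the exact config fields are described in the linked docs:

```python
from nni.experiment import Experiment

# Hedged sketch of launching an HPO experiment from Python; all concrete values are illustrative.
search_space = {'lr': {'_type': 'loguniform', '_value': [1e-5, 1e-1]}}

experiment = Experiment('local')                       # training service
experiment.config.trial_command = 'python3 trial.py'   # command that runs one trial
experiment.config.trial_code_directory = '.'
experiment.config.search_space = search_space
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args = {'optimize_mode': 'maximize'}
experiment.config.max_trial_number = 10
experiment.config.trial_concurrency = 2
experiment.run(8080)                                   # start the NNI manager on port 8080
```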

Algorithms

* Random tuner now supports classArgs `seed`. [(doc)](https://nni.readthedocs.io/en/v2.6/Tuner/RandomTuner.html)
* TPE tuner is refactored: [(doc)](https://nni.readthedocs.io/en/v2.6/Tuner/TpeTuner.html)
* Support classArgs `seed`.
* Support classArgs `tpe_args` for expert users to customize algorithm behavior.
* Parallel optimization is now on by default. To turn it off, set `tpe_args.constant_liar_type` to `null` (or `None` in Python).
* `parallel_optimize` and `constant_liar_type` have been removed. If you are using them, please update your config to use `tpe_args.constant_liar_type` instead (see the sketch after this list).
* Grid search tuner now supports all search space types, including uniform, normal, and nested choice. [(doc)](https://nni.readthedocs.io/en/v2.6/Tuner/GridsearchTuner.html)
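
A hedged sketch of the new TPE classArgs mentioned above, written as the Python equivalent of the YAML `classArgs` section; the seed value is illustrative, and setting `constant_liar_type` to `None` is the way described above to turn parallel optimization back off:

```python
# Hedged sketch: classArgs for the refactored TPE tuner (2.6). The seed value is illustrative.
tpe_class_args = {
    'optimize_mode': 'maximize',
    'seed': 42,                                 # new in 2.6: reproducible sampling
    'tpe_args': {'constant_liar_type': None},   # None (null in YAML) disables parallel optimization
}
```

In a YAML experiment config, the same keys go under `tuner.classArgs`.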

Neural Architecture Search

* Enhancements to serialization utilities [(doc)](https://nni.readthedocs.io/en/v2.6/NAS/Serialization.html) and changes to the recommended practice for customizing evaluators. [(doc)](https://nni.readthedocs.io/en/v2.6/NAS/QuickStart.html#pick-or-customize-a-model-evaluator)
* Support latency constraints on edge devices for ProxylessNAS based on nn-Meter. [(doc)](https://nni.readthedocs.io/en/v2.6/NAS/Proxylessnas.html)
* Trial parameters are now displayed in a more user-friendly way in Retiarii experiments.
* Refactored the NAS examples for ProxylessNAS and SPOS.

Model Compression

* New pruners supported in Pruning V2:
* Auto-Compress Pruner [(doc)](https://nni.readthedocs.io/en/v2.6/Compression/v2_pruning_algo.html#auto-compress-pruner)
* AMC Pruner [(doc)](https://nni.readthedocs.io/en/v2.6/Compression/v2_pruning_algo.html#amc-pruner)
* Movement Pruning Pruner [(doc)](https://nni.readthedocs.io/en/v2.6/Compression/v2_pruning_algo.html#movement-pruner)
* Support `nni.trace`-wrapped `Optimizer` in Pruning V2. The optimizer's input parameters are traced while keeping the impact on the user experience as small as possible. [(doc)](https://nni.readthedocs.io/en/v2.6/Compression/v2_pruning_algo.html)
* Optimize the memory usage of Taylor Pruner, APoZ Activation Pruner, and Mean Activation Pruner in V2.
* Add more examples for Pruning V2.
* Add document for pruning config list. [(doc)](https://nni.readthedocs.io/en/v2.6/Compression/v2_pruning_config_list.html)
* Parameter `masks_file` of `ModelSpeedup` now accepts a `pathlib.Path` object (thanks to dosemeion; see the sketch after this list). [(doc)](https://nni.readthedocs.io/en/v2.6/Compression/ModelSpeedup.html#user-configuration-for-modelspeedup)
* Bug Fixes
* Fix Slim Pruner in V2 not sparsifying the BN weights.
* Fix Simulated Annealing Task Generator generating configs that ignore 0 sparsity.
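
A rough sketch of the `pathlib.Path` support for `masks_file` noted above; the model, dummy input shape, and mask file location are illustrative:

```python
from pathlib import Path

import torch
from torchvision.models import resnet18
from nni.compression.pytorch import ModelSpeedup

# Hedged sketch: masks_file may now be a pathlib.Path instead of a plain string.
model = resnet18()
dummy_input = torch.rand(1, 3, 224, 224)
masks_file = Path('checkpoints') / 'mask.pth'   # illustrative path produced by a pruner
ModelSpeedup(model, dummy_input, masks_file).speedup_model()
```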

Documentation

* Supported GitHub feature "Cite this repository".
* Updated index page of readthedocs.
* Updated Chinese documentation.
* From now on, NNI only maintains translations for the most important docs and ensures they are up to date.
* Reorganized HPO tuners' doc.

Bugfixes

* Fixed a bug where a numpy array was used as a truth value. (Thanks to khituras)
* Fixed a bug in updating search space.
* Fixed a bug where the HPO search space file did not support scientific notation and tab indentation.
* For now NNI does not support mixing scientific notation and YAML features. We are waiting for PyYAML to update.
* Fixed a bug that causes DARTS 2nd order to crash.
* Fixed a bug that causes deep copying of mutation primitives (e.g., LayerChoice) to crash.
* Removed blank space at the bottom of the Web UI overview page.

2.5

Model Compression

+ New major version of the pruning framework [(doc)](https://nni.readthedocs.io/en/v2.5/Compression/v2_pruning.html) (see the sketch after this list)
- Iterative pruning is more automated; users can implement iterative pruning with less code.
- Support exporting intermediate models in the iterative pruning process.
- The implementation of the pruning algorithms is closer to the papers.
- Users can easily customize their own iterative pruning by using PruningScheduler.
- Optimize the basic pruners' underlying mask-generation logic, making it easier to extend with new functions.
- Optimized the memory usage of the pruners.
+ MobileNetV2 end-to-end example [(notebook)](https://github.com/microsoft/nni/blob/v2.5/examples/model_compress/pruning/mobilenetv2_end2end/Compressing%20MobileNetV2%20with%20NNI%20Pruners.ipynb)
+ Improved QAT quantizer [(doc)](https://nni.readthedocs.io/en/v2.5/Compression/Quantizer.html#qat-quantizer)
- Support dtype and scheme customization
- Support DP (DataParallel) multi-GPU training
- Support load_calibration_config
+ Model speed-up now supports directly loading the mask [(doc)](https://nni.readthedocs.io/en/v2.5/Compression/ModelSpeedup.html#nni.compression.pytorch.ModelSpeedup)
+ Support speed-up for depth-wise convolution
+ Support BN folding for the LSQ quantizer
+ Support resuming QAT and LSQ from PTQ
+ Added doc for the observer quantizer [(doc)](https://nni.readthedocs.io/en/v2.5/Compression/Quantizer.html#observer-quantizer)
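
As a rough sketch of the new pruning flow, under the assumption that V2 pruners are importable from `nni.algorithms.compression.v2.pytorch.pruning` in this release line (see the linked doc for the exact module and pruner names); the model, sparsity, and op_types are illustrative:

```python
from torch import nn
# Assumption: the V2 pruners live under this path in the 2.5 line.
from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner

# Hedged sketch of one-shot pruning with the V2 framework; all concrete values are illustrative.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]

pruner = L1NormPruner(model, config_list)
masked_model, masks = pruner.compress()   # the masked model plus the generated masks
```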

Neural Architecture Search

+ NAS benchmark [(doc)](https://nni.readthedocs.io/en/v2.5/NAS/Benchmarks.html)
- Support benchmark table lookup in experiments
- New data preparation approach
+ Improved [quick start doc](https://nni.readthedocs.io/en/v2.5/NAS/QuickStart.html)
+ Experimental CGO execution engine [(doc)](https://nni.readthedocs.io/en/v2.5/NAS/ExecutionEngines.html#cgo-execution-engine-experimental)

Hyper-Parameter Optimization

+ New training platform: Alibaba DSW+DLC [(doc)](https://nni.readthedocs.io/en/v2.5/TrainingService/DLCMode.html)
+ Support passing ConfigSpace definition directly to BOHB [(doc)](https://nni.readthedocs.io/en/v2.5/Tuner/BohbAdvisor.html#usage) (thanks to khituras)
+ Reformatted [experiment config doc](https://nni.readthedocs.io/en/v2.5/reference/experiment_config.html)
+ Added example config files for Windows (thanks to politecat314)
+ FrameworkController now supports reuse mode

Fixed Bugs

+ Experiment cannot start due to platform timestamp format (issue 4077 4083)
+ Cannot use 1e-5 in search space (issue 4080)
+ Dependency version conflict caused by ConfigSpace (issue 3909) (thanks to jexxers)
+ Hardware-aware SPOS example does not work (issue 4198)
+ Web UI shows wrong remaining time when duration exceeds the limit (issue 4015)
+ cudnn.deterministic is always set in the AMC pruner (issue 4117) (thanks to mstczuo)

And...

New [emoticons](https://github.com/microsoft/nni/blob/v2.5/docs/en_US/Tutorial/NNSpider.md)!
![holiday emoticon](https://raw.githubusercontent.com/microsoft/nni/v2.5/docs/img/emoicons/Holiday.png)

Install from [PyPI](https://pypi.org/project/nni/2.5/)

2.4

Major Updates

Neural Architecture Search

* NAS visualization: visualize model graph through Netron (3878)
* Support NAS bench 101/201 on Retiarii framework (3871 3920)
* Support hypermodule AutoActivation (3868)
* Support PyTorch v1.8/v1.9 (3937)
* Support Hardware-aware NAS with nn-Meter (3938)
* Enable `fixed_arch` on Retiarii (3972) (see the sketch below)
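
A minimal sketch of the newly enabled `fixed_arch` usage; the model space, layer sizes, and the exported architecture file name are illustrative assumptions:

```python
import nni.retiarii.nn.pytorch as nn
from nni.retiarii import fixed_arch

# Hedged sketch: a tiny Retiarii model space with one LayerChoice.
class TinySpace(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.LayerChoice([
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
        ])

    def forward(self, x):
        return self.conv(x)

# Instantiate the space with choices fixed to an architecture exported by a finished experiment.
with fixed_arch('final_arch.json'):   # illustrative file name
    model = TinySpace()
```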

Model Compression

* Refactor of ModelSpeedup: auto shape/mask inference (3462)
* Added more examples for ModelSpeedup (3880)
* Support global sort for Taylor pruning (3896)
* Support TransformerHeadPruner (3884)
* Support batch normalization folding in QAT quantizer (3911, thanks to the external contributor chenbohua3)
* Support post-training observer quantizer (3915, thanks to the external contributor chenbohua3)
* Support ModelSpeedup for Slim Pruner (4008)
* Support TensorRT 8.0.0 in ModelSpeedup (3866)

Hyper-parameter Tuning

* Improve HPO benchmarks (3925)
* Improve type validation of user-defined search space (3975)

Training service & nnictl

* Support JupyterLab (3668 3954)
* Support viewing experiment from experiment folder (3870)
* Support kubeflow in training service reuse framework (3919)
* Support viewing trial log on WebUI for an experiment launched in `view` mode (3872)

Minor Updates & Bug Fixes

* Fix a failure when exiting a Retiarii experiment (3899)
* Fix `exclude` not supported in some `config_list` cases (3815)
* Fix a bug in the remote training service in reuse mode (3941)
* Modernize IP address detection (3860)
* Fix bug of the search box on WebUI (3935)
* Fix bug in url_prefix of WebUI (4051)
* Support dict format of intermediate results on WebUI (3895)
* Fix a bug in the OpenPAI training service induced by experiment config v2 (4027 4057)
* Improved docs (3861 3885 3966 4004 3955)
* Improved the API `export_model` in model compression (3968)
* Supported `UnSqueeze` in ModelSpeedup (3960)
* Thanks to other external contributors: Markus92 (3936), thomasschmied (3963), twmht (3842)

2.3

Major Updates

Neural Architecture Search

* Retiarii Framework (NNI NAS 2.0) Beta Release with new features:

* Support new high-level APIs: ``Repeat`` and ``Cell`` (3481) (see the sketch at the end of this section)
* Support pure-python execution engine (3605)
* Support policy-based RL strategy (3650)
* Support nested ModuleList (3652)
* Improve documentation (3785)

**Note**: more exciting Retiarii features are planned for future releases; please refer to the [Retiarii Roadmap](https://github.com/microsoft/nni/discussions/3744) for more information.

* Add new NAS algorithm: Blockwise DNAS FBNet (3532, thanks to the external contributor alibaba-yiwuyao)
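
A hedged sketch of the new ``Repeat`` API referenced above; the block being repeated and the depth range are illustrative assumptions, and ``Cell`` follows a similar pattern for building DAG-style cells:

```python
import nni.retiarii.nn.pytorch as nn

# Hedged sketch: repeat a block a searchable number of times (between 2 and 4 here).
class RepeatedStack(nn.Module):
    def __init__(self):
        super().__init__()
        self.stack = nn.Repeat(nn.Linear(32, 32), (2, 4))   # depth is chosen by the exploration strategy

    def forward(self, x):
        return self.stack(x)
```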

Model Compression

* Support Auto Compression Framework (3631)
* Support slim pruner in Tensorflow (3614)
* Support LSQ quantizer (3503, thanks to the external contributor chenbohua3)
* Improve APIs for iterative pruners (3507 3688)

Training service & REST

* Support 3rd-party training service (3662 3726)
* Support setting prefix URL (3625 3674 3672 3643)
* Improve NNI manager logging (3624)
* Remove outdated TensorBoard code from nnictl (3613)

Hyper-Parameter Optimization

* Add new tuner: DNGO (3479 3707)
* Add benchmark for tuners (3644 3720 3689)

WebUI

* Improve search parameters on the trial detail page (3651 3723 3715)
* Make selected trials consistent after auto-refresh in the detail table (3597)
* Add trial stdout button in local mode (3653 3690)

Examples & Documentation

* Convert all trial examples from config v1 to config v2 (3721 3733 3711 3600)
* Add new jupyter notebook examples (3599 3700)

Dev Excellence

* Upgrade dependencies in Dockerfile (3713 3722)
* Replace ``ruamel.yaml`` with PyYAML (3702)
* Add pipelines for AML and hybrid training service and experiment config V2 (3477 3648)
* Add pipeline badge in README (3589)
* Update issue bug report template (3501)

Bug Fixes & Minor Updates

* Fix syntax error on Windows (3634)
* Fix a logging related bug (3705)
* Fix a bug in GPU indices (3721)
* Fix a bug in FrameworkController (3730)
* Fix a bug in the ``export_data_url`` format (3665)
* Report version check failure as a warning (3654)
* Fix bugs and lints in nnictl (3712)
* Fix bug of ``optimize_mode`` on WebUI (3731)
* Fix bug of ``useActiveGpu`` in AML v2 config (3655)
* Fix bug of ``experiment_working_directory`` in Retiarii config (3607)
* Fix a bug in mask conflict (3629, thanks to the external contributor Davidxswang)
* Fix a bug in model speedup shape inference (3588, thanks to the external contributor Davidxswang)
* Fix a bug in multithreading on Windows (3604, thanks to the external contributor Ivanfangsc)
* Delete redundant code in training service (3526, thanks to the external contributor maxsuren)
* Fix typo in DoReFa compression doc (3693, thanks to the external contributor Erfandarzi)
* Update docstring in model compression (3647, thanks to the external contributor ichejun)
* Fix a bug when using Kubernetes container (3719, thanks to the external contributor rmfan)

2.2

Major Updates

Neural Architecture Search

* Improve NAS 2.0 (Retiarii) Framework (Alpha Release)

* Support local debug mode (3476)
* Support nesting ``ValueChoice`` in ``LayerChoice`` (3508) (see the sketch after this list)
* Support dict/list type in ``ValueChoice`` (3508)
* Improve the format of export architectures (3464)
* Refactor of NAS examples (3513)
* Refer to [this issue](https://github.com/microsoft/nni/issues/3301) for the Retiarii Roadmap
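
A hedged sketch of the nesting noted above: a ``ValueChoice`` supplying an argument of a candidate inside a ``LayerChoice``. Channel counts and kernel sizes are illustrative:

```python
import nni.retiarii.nn.pytorch as nn

# Hedged sketch: ValueChoice nested inside a LayerChoice candidate (the 2.2 feature above).
class SearchBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.LayerChoice([
            nn.Conv2d(3, nn.ValueChoice([16, 32]), kernel_size=3, padding=1),   # searchable width
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
        ])

    def forward(self, x):
        return self.conv(x)
```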

Model Compression

* Support speedup for mixed precision quantization model (Experimental) (3488 3512)
* Support model export for quantization algorithm (3458 3473)
* Support model export in model compression for TensorFlow (3487)
* Improve documentation (3482)

nnictl & nni.experiment

* Add native support for experiment config V2 (3466 3540 3552)
* Add resume and view mode in Python API ``nni.experiment`` (3490 3524 3545)

Training Service

* Support umount for shared storage in remote training service (3456)
* Support Windows as the remote training service in reuse mode (3500)
* Remove duplicated env folder in remote training service (3472)
* Add log information for GPU metric collector (3506)
* Enable optional Pod Spec for FrameworkController platform (3379, thanks the external contributor mbu93)

WebUI

* Support launching TensorBoard on WebUI (3454 3361 3531)
* Upgrade echarts-for-react to v5 (3457)
* Add word wrap to the dispatcher/nnimanager log Monaco editor (3461)

Bug Fixes

* Fix bug of FLOPs counter (3497)
* Fix a conflict between the hyper-parameter Add/Remove axes buttons and the table Add/Remove columns buttons (3491)
* Fix a bug where the Monaco editor search text is not displayed completely (3492)
* Fix bug of Cream NAS (3498, thanks to the external contributor AliCloud-PAI)
* Fix typos in docs (3448, thanks to the external contributor OliverShang)
* Fix typo in NAS 1.0 (3538, thanks to the external contributor ankitaggarwal23)
