NNI

Latest version: v3.0

3.0

Web Portal

* New look and feel

Neural Architecture Search

* **Breaking change**: ``nni.retiarii`` is no longer maintained or tested. Please migrate to ``nni.nas``.
+ Inherit ``nni.nas.nn.pytorch.ModelSpace`` rather than using ``model_wrapper``.
+ Use ``nni.choice`` rather than ``nni.nas.nn.pytorch.ValueChoice``.
+ Use ``nni.nas.experiment.NasExperiment`` and ``NasExperimentConfig`` rather than ``RetiariiExperiment``.
+ Use ``nni.nas.model_context`` rather than ``nni.nas.fixed_arch``.
+ Please refer to the [quickstart](https://nni.readthedocs.io/en/v3.0/tutorials/hello_nas.html) for more changes; see the migration sketch after this list.
* A refreshed experience to construct model space.
+ Enhanced debuggability via ``freeze()`` and ``simplify()`` APIs.
+ Enhanced expressiveness with ``nni.choice``, ``nni.uniform``, ``nni.normal``, etc.
+ Enhanced experience of customization with ``MutableModule``, ``ModelSpace`` and ``ParametrizedModule``.
+ Search space with constraints is now supported.
* Improved robustness and stability of strategies.
+ Supported search space types are now enriched for PolicyBaseRL, ENAS and Proxyless.
+ Each step of one-shot strategies can be executed alone: model mutation, evaluator mutation and training.
+ Most multi-trial strategies now support specifying a seed for reproducibility.
+ The performance of strategies has been verified on a set of benchmarks.
* Strategy/engine middleware.
+ Filtering, replicating, deduplicating or retrying models submitted by any strategy.
+ Merging or transforming models before executing (e.g., CGO).
+ Arbitrarily long chains of middleware.
* New execution engine.
+ Improved debuggability via SequentialExecutionEngine: trials can run in a single process and breakpoints are effective.
+ The old execution engine is now decomposed into execution engine and model format.
+ Enhanced extensibility of execution engines.
* NAS profiler and hardware-aware NAS.
+ New profilers profile a model space, and quickly compute a profiling result for a sampled architecture or a distribution of architectures (FlopsProfiler, NumParamsProfiler and NnMeterProfiler are officially supported).
+ Assemble profilers with arbitrary strategies, including both multi-trial and one-shot.
+ Profilers are extensible; strategies can be assembled with arbitrary customized profilers.
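
Below is a minimal sketch of the migration described above, assuming the helper names from the v3.0 quickstart (``LayerChoice``, ``MutableLinear``, ``MutableDropout``); exact signatures may differ across versions, so treat this as an illustration rather than a definitive recipe.

```python
import nni
import torch.nn as nn
from nni.nas.nn.pytorch import ModelSpace, LayerChoice, MutableLinear, MutableDropout


class MyModelSpace(ModelSpace):  # inherit ModelSpace instead of decorating with @model_wrapper
    def __init__(self):
        super().__init__()
        # LayerChoice samples one candidate layer per architecture.
        self.conv = LayerChoice([
            nn.Conv2d(1, 32, 3, padding=1),
            nn.Conv2d(1, 32, 5, padding=2),
        ], label='conv')
        # nni.choice replaces the old nni.nas.nn.pytorch.ValueChoice.
        feature = nni.choice('feature', [64, 128, 256])
        self.fc = MutableLinear(32, feature)
        self.drop = MutableDropout(nni.choice('dropout', [0.25, 0.5]))
        self.head = MutableLinear(feature, 10)

    def forward(self, x):
        x = self.conv(x).mean([2, 3])  # global average pooling
        return self.head(self.drop(self.fc(x)))


# NasExperiment and NasExperimentConfig replace RetiariiExperiment
# (evaluator and strategy construction omitted here):
# from nni.nas.experiment import NasExperiment
# exp = NasExperiment(MyModelSpace(), evaluator, strategy)
# exp.run(port=8081)
```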

Model Compression

* The compression framework has been refactored; the new import path is ``nni.contrib.compression``. A hedged config sketch follows this list.
+ Config keys have been refactored to support more detailed compression configurations. [view doc](https://nni.readthedocs.io/en/v3.0/compression/config_list.html)
+ Support fusing multiple compression methods.
+ Support distillation as a basic compression component.
+ Support more compression targets, such as ``input``, ``output`` and any registered parameters.
+ Support compressing any module type by customizing module settings.
* Model compression support in DeepSpeed mode.
* Fix bugs in the examples.
* Pruning
+ Pruner interfaces have been fine-tuned for ease of use. [view doc](https://nni.readthedocs.io/en/v3.0/reference/compression/pruner.html)
+ Support configuring ``granularity`` in pruners. [view doc](https://nni.readthedocs.io/en/v3.0/compression/config_list.html#granularity)
+ Support different masking methods: multiplying by zero, or adding a large negative value.
+ Support manually setting dependency groups and global groups. [view doc](https://nni.readthedocs.io/en/v3.0/compression/config_list.html#global-group-id)
+ A new, more powerful pruning speedup has been released; its applicability and robustness are greatly improved. [view doc](https://nni.readthedocs.io/en/v3.0/reference/compression/pruning_speedup.html)
+ The end-to-end transformer compression tutorial has been updated and achieves more aggressive compression. [view doc](https://nni.readthedocs.io/en/v3.0/tutorials/new_pruning_bert_glue.html)
+ Fix the config lists in the examples.
* Quantization
+ Support using ``Evaluator`` to handle training/inference.
+ Support more module fusion combinations. [view doc](https://nni.readthedocs.io/en/v3.0pt1/compression/module_fusion.html)
+ Support configuring ``granularity`` in quantizers. [view doc](https://nni.readthedocs.io/en/v3.0pt1/compression/config_list.html#id6)
+ Bias correction is supported in the Post Training Quantization algorithm.
+ LSQ+ quantization algorithm is supported.
* Distillation
+ [DynamicLayerwiseDistiller](https://nni.readthedocs.io/en/v3.0pt1/reference/compression/distiller.html#dynamiclayerwisedistiller) and [Adaptive1dLayerwiseDistiller](https://nni.readthedocs.io/en/v3.0pt1/reference/compression/distiller.html#adaptive1dlayerwisedistiller) are supported.
* Compression documentation has been updated for the new framework; for the old version, please see the [v2.10 doc](https://nni.readthedocs.io/en/v2.10/).
* New compression examples are under [`nni/examples/compression`](https://github.com/microsoft/nni/tree/v3.0/examples/compression)
+ Create an evaluator: [`nni/examples/compression/evaluator`](https://github.com/microsoft/nni/tree/v3.0/examples/compression/evaluator)
+ Prune a model: [`nni/examples/compression/pruning`](https://github.com/microsoft/nni/tree/v3.0/examples/compression/pruning)
+ Quantize a model: [`nni/examples/compression/quantization`](https://github.com/microsoft/nni/tree/v3.0/examples/compression/quantization)
+ Fusion compression: [`nni/examples/compression/fusion`](https://github.com/microsoft/nni/tree/v3.0/examples/compression/fusion)
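
The sketch below illustrates the refactored config-list style under ``nni.contrib.compression``; the key names (``op_types``, ``sparse_ratio``, ``granularity``) follow my reading of the config_list doc linked above and should be verified there.

```python
import torch
from nni.contrib.compression.pruning import L1NormPruner

# A toy model to prune.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)

# Each config entry selects modules and describes how to compress them.
# Key names here are assumptions based on the linked config_list doc.
config_list = [{
    'op_types': ['Linear'],
    'sparse_ratio': 0.5,
    'granularity': 'out_channel',
}]

pruner = L1NormPruner(model, config_list)
```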

Training Services

* **Breaking change**: NNI v3.0 cannot resume experiments created by NNI v2.x
* Local training service:
+ Reduced latency of creating trials
+ Fixed "GPU metric not found"
+ Fixed bugs related to resuming trials
* Remote training service:
+ ``reuse_mode`` now defaults to ``False``; setting it to ``True`` falls back to the v2.x remote training service (see the sketch after this list)
+ Reduced latency of creating trials
+ Fixed "GPU metric not found"
+ Fixed bugs related to resuming trials
+ Supported viewing trial logs on the web portal
+ Supported automatic recovery after temporary server failures (network fluctuations, out of memory, etc.)
* Removed IoC and deleted unused training services.
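
As a hedged illustration of the new default, the Python experiment API can set reuse mode explicitly; the attribute names below follow the experiment config reference and should be double-checked against your NNI version.

```python
from nni.experiment import Experiment

exp = Experiment('remote')
exp.config.trial_command = 'python train.py'
exp.config.trial_code_directory = '.'
exp.config.trial_concurrency = 2
# v3.0 default shown explicitly; True falls back to the v2.x remote service.
exp.config.training_service.reuse_mode = False
# Configure machines as usual, e.g. with RemoteMachineConfig(host=..., user=...),
# then launch with exp.run(8080).
```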

3.0rc1

Web Portal

* New look and feel

Neural Architecture Search

* **Breaking change:** `nni.retiarii` is no longer maintained or tested. Please migrate to `nni.nas`.
+ Inherit `nni.nas.nn.pytorch.ModelSpace` rather than using `model_wrapper`.
+ Use `nni.choice` rather than `nni.nas.nn.pytorch.ValueChoice`.
+ Use `nni.nas.experiment.NasExperiment` and `NasExperimentConfig` rather than `RetiariiExperiment`.
+ Use `nni.nas.model_context` rather than `nni.nas.fixed_arch`.
+ Please refer to the [quickstart](https://nni.readthedocs.io/en/v3.0/tutorials/hello_nas.html) for more changes.
* A refreshed experience to construct model space.
+ Enhanced debuggability via `freeze()` and `simplify()` APIs.
+ Enhanced expressiveness with `nni.choice`, `nni.uniform`, `nni.normal`, etc.
+ Enhanced experience of customization with `MutableModule`, `ModelSpace` and `ParametrizedModule`.
+ Search space with constraints is now supported.
* Improved robustness and stability of strategies.
+ Supported search space types are now enriched for PolicyBaseRL, ENAS and Proxyless.
+ Each step of one-shot strategies can be executed alone: model mutation, evaluator mutation and training.
+ Most multi-trial strategies now support specifying a seed for reproducibility.
+ The performance of strategies has been verified on a set of benchmarks.
* Strategy/engine middleware.
+ Filtering, replicating, deduplicating or retrying models submitted by any strategy.
+ Merging or transforming models before executing (e.g., CGO).
+ Arbitrarily long chains of middleware.
* New execution engine.
+ Improved debuggability via SequentialExecutionEngine: trials can run in a single process and breakpoints are effective.
+ The old execution engine is now decomposed into execution engine and model format.
+ Enhanced extensibility of execution engines.
* NAS profiler and hardware-aware NAS.
+ New profilers profile a model space, and quickly compute a profiling result for a sampled architecture or a distribution of architectures (FlopsProfiler, NumParamsProfiler and NnMeterProfiler are officially supported).
+ Assemble profilers with arbitrary strategies, including both multi-trial and one-shot.
+ Profilers are extensible; strategies can be assembled with arbitrary customized profilers.

Compression

* The compression framework has been refactored; the new import path is `nni.contrib.compression`.
+ Config keys have been refactored to support more detailed compression configurations. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/config_list.html)
+ Support fusing multiple compression methods. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/fusion_compress.html)
+ Support distillation as a basic compression component. [view doc](https://nni.readthedocs.io/en/v3.0rc1/reference/compression/distiller.html)
+ Support more compression targets, such as `input`, `output` and any registered parameters. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/config_list.html#target-names)
+ Support compressing any module type by customizing module settings. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/setting.html)
* Pruning
+ Pruner interfaces have been fine-tuned for ease of use. [view doc](https://nni.readthedocs.io/en/v3.0rc1/reference/compression/pruner.html)
+ Support configuring `granularity` in pruners. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/config_list.html#granularity)
+ Support different masking methods: multiplying by zero, or adding a large negative value.
+ Support manually setting dependency groups and global groups. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/config_list.html#global-group-id)
+ A new, more powerful pruning speedup has been released; its applicability and robustness are greatly improved. [view doc](https://nni.readthedocs.io/en/v3.0rc1/reference/compression/pruning_speedup.html)
+ The end-to-end transformer compression tutorial has been updated and achieves more aggressive compression. [view doc](https://nni.readthedocs.io/en/v3.0rc1/tutorials/new_pruning_bert_glue.html)
* Quantization
+ Support using `Evaluator` to handle training/inference.
+ Support more module fusion combinations. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/module_fusion.html)
+ Support configuring `granularity` in quantizers. [view doc](https://nni.readthedocs.io/en/v3.0rc1/compression/config_list.html#id6)
* Distillation
+ [DynamicLayerwiseDistiller](https://nni.readthedocs.io/en/v3.0rc1/reference/compression/distiller.html#dynamiclayerwisedistiller) and [Adaptive1dLayerwiseDistiller](https://nni.readthedocs.io/en/v3.0rc1/reference/compression/distiller.html#adaptive1dlayerwisedistiller) are supported.
* Compression documentation has been updated for the new framework; for the old version, please see the [v2.10 doc](https://nni.readthedocs.io/en/v2.10/).
* New compression examples are under [`nni/examples/compression`](https://github.com/microsoft/nni/tree/v3.0rc1/examples/compression)
+ Create an evaluator: [`nni/examples/compression/evaluator`](https://github.com/microsoft/nni/tree/v3.0rc1/examples/compression/evaluator)
+ Prune a model: [`nni/examples/compression/pruning`](https://github.com/microsoft/nni/tree/v3.0rc1/examples/compression/pruning)
+ Quantize a model: [`nni/examples/compression/quantization`](https://github.com/microsoft/nni/tree/v3.0rc1/examples/compression/quantization)
+ Fusion compression: [`nni/examples/compression/fusion`](https://github.com/microsoft/nni/tree/v3.0rc1/examples/compression/fusion)

Training Services

* **Breaking change:** NNI v3.0 cannot resume experiments created by NNI v2.x
* Local training service:
+ Reduced latency of creating trials
+ Fixed "GPU metric not found"
+ Fixed bugs related to resuming trials
* Remote training service:
+ `reuse_mode` now defaults to `False`; setting it to `True` falls back to the v2.x remote training service
+ Reduced latency of creating trials
+ Fixed "GPU metric not found"
+ Fixed bugs related to resuming trials
+ Supported viewing trial logs on the web portal
+ Supported automatic recovery after temporary server failures (network fluctuations, out of memory, etc.)

2.10

Neural Architecture Search

- Added trial deduplication for evolutionary search.
- Fixed the racing issue in RL strategy on submitting models.
- Fixed an issue introduced by the trial recovery feature.
- Fixed import error of `PyTorch Lightning` in NAS.

Compression

- Supported schema parsing in ModelSpeedup by replacing `torch._C.parse_schema` in PyTorch 1.8.0.
- Fixed a bug where `rand_like_with_shape` in speedup could easily overflow when `dtype=torch.int8`.
- Fixed the propagation error with view tensors in speedup.

Hyper-parameter optimization

- Supported rerunning trials that were interrupted by the termination of an NNI experiment when the experiment is resumed.
- Fixed a dependency issue of the Anneal tuner by making its dependency optional.
- Fixed a bug where the tuner might lose connection in long experiments.

Training service

- Fixed a bug where the trial code directory could not contain non-English characters.

Web portal

- Fixed a column error on the HPO experiment hyper-parameters page by using localStorage.
- Fixed a link error in the About menu on the WebUI.

Known issues

- ModelSpeedup does not support non-tensor intermediate variables.

2.9

Neural Architecture Search

- New tutorial on the model space hub and one-shot strategies. ([tutorial](https://nni.readthedocs.io/en/v2.9/tutorials/darts.html))
- Add pretrained checkpoints to AutoFormer. ([doc](https://nni.readthedocs.io/en/v2.9/reference/nas/search_space.html#nni.retiarii.hub.pytorch.AutoformerSpace))
- Support loading the checkpoint of a trained supernet into a subnet. ([doc](https://nni.readthedocs.io/en/v2.9/reference/nas/strategy.html#nni.retiarii.strategy.RandomOneShot))
- Support viewing and resuming NAS experiments. ([doc](https://nni.readthedocs.io/en/v2.9/reference/nas/others.html#nni.retiarii.experiment.pytorch.RetiariiExperiment.resume))

Enhancements

- Support `fit_kwargs` in lightning evaluator. ([doc](https://nni.readthedocs.io/en/v2.9/reference/nas/evaluator.html#nni.retiarii.evaluator.pytorch.Lightning))
- Support `drop_path` and `auxiliary_loss` in NASNet. ([doc](https://nni.readthedocs.io/en/v2.9/reference/nas/search_space.html#nasnet))
- Support gradient clipping in DARTS. ([doc](https://nni.readthedocs.io/en/v2.9/reference/nas/strategy.html#nni.retiarii.strategy.DARTS))
- Add `export_probs` to monitor the architecture weights.
- Rewrite `configure_optimizers`, the functions that step optimizers/schedulers, and other hooks, for simplicity and for compatibility with the latest Lightning (v1.7).
- Align implementation of DifferentiableCell with DARTS official repo.
- Re-implementation of ProxylessNAS.
- Move the `nni.retiarii` codebase to `nni.nas`.

Bug fixes

- Fix a performance issue caused by tensor formatting in `weighted_sum`.
- Fix a misuse of lambda expression in NAS-Bench-201 search space.
- Fix the gumbel temperature schedule in Gumbel DARTS.
- Fix the architecture weight sharing when sharing labels in differentiable strategies.
- Fix the memo reusing in exporting differentiable cell.

Compression

- New tutorial of pruning transformer model. ([tutorial](https://nni.readthedocs.io/en/v2.9/tutorials/pruning_bert_glue.html))
- Add `TorchEvaluator`, `LightningEvaluator`, `TransformersEvaluator` to ease the expression of training logic in pruners. ([doc](https://nni.readthedocs.io/en/v2.9/compression/compression_evaluator.html), [API](https://nni.readthedocs.io/en/v2.9/reference/compression/evaluator.html)) A hedged usage sketch follows this list.
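
As a rough illustration of the evaluator-based workflow, the sketch below wraps a bare-bones training loop in `TorchEvaluator`; the import path, argument names and the `nni.trace` requirement are recalled from the v2.9 evaluator docs and should be treated as assumptions.

```python
import nni
import torch
import torch.nn.functional as F
from nni.compression.pytorch import TorchEvaluator  # import path assumed from the v2.9 docs


def training_func(model, optimizers, criterion, lr_schedulers=None,
                  max_steps=None, max_epochs=None):
    # Stand-in training loop; the pruner calls this with a patched model.
    model.train()
    for _ in range(max_epochs or 1):
        pass  # iterate a dataloader here: loss = criterion(model(x), y); ...


model = torch.nn.Linear(8, 2)
# Optimizers handed to an evaluator are created through nni.trace so that
# NNI can re-create them for the patched model.
optimizer = nni.trace(torch.optim.Adam)(model.parameters(), lr=1e-3)
evaluator = TorchEvaluator(training_func, optimizers=optimizer, criterion=F.cross_entropy)
```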

Enhancements

- Promote all pruner APIs to use `Evaluator`; the old API is deprecated and will be removed in v3.0. ([doc](https://nni.readthedocs.io/en/v2.9/reference/compression/pruner.html))
- Greatly enlarge the set of supported operators in pruning speedup via automatic operator conversion.
- Support `lr_scheduler` in pruning by using `Evaluator`.
- Support pruning NLP task in `ActivationAPoZRankPruner` and `ActivationMeanRankPruner`.
- Add `training_steps`, `regular_scale`, `movement_mode`, `sparse_granularity` for `MovementPruner`. ([doc](https://nni.readthedocs.io/en/v2.9/reference/compression/pruner.html#movement-pruner))
- Add `GroupNorm` replacement in pruning speedup. Thanks to external contributor cin-xing.
- Optimize `balance` mode performance in `LevelPruner`.

Bug fixes

- Fix the invalid `dependency_aware` mode in scheduled pruners.
- Fix the bug where `bias` mask cannot be generated.
- Fix the bug where `max_sparsity_per_layer` has no effect.
- Fix `Linear` and `LayerNorm` speedup replacement in NLP task.
- Fix tracing failures of `LightningModule` in `pytorch_lightning >= 1.7.0`.

Hyper-parameter optimization

- Fix a bug where weights were not defined correctly in `adaptive_parzen_normal` of TPE.

Training service

- Fix a trialConcurrency bug in the K8S training service: use `${envId}_run.sh` to replace `run.sh`.
- Fix an upload-directory bug in the K8S training service: use a separate working directory for each experiment. Thanks to external contributor amznero.

Web portal

- Support dict keys in the default metric chart on the detail page.
- Show experiment error messages in a small popup at the bottom right of the page.
- Upgrade React Router to v6 to fix an index-router issue.
- Fix the issue of the details page crashing when choices contain `None`.
- Fix the missing dict-intermediate dropdown in the trial comparison dialog.

Known issues

- Activation-based pruners cannot support the `[batch, seq, hidden]` layout.
- Failed trials are NOT auto-submitted when an experiment is resumed (#4931 was reverted due to its pitfalls).

2.8

Neural Architecture Search

* Align the user experience of one-shot NAS with that of multi-trial NAS, i.e., users can use one-shot NAS by specifying the corresponding strategy ([doc](https://nni.readthedocs.io/en/v2.8/nas/exploration_strategy.html#one-shot-strategy)); see the sketch after this list
* Support multi-GPU training of one-shot NAS
* *Preview* Support loading/retraining the pre-searched models of some search spaces, i.e., 18 models in 4 different search spaces ([doc](https://github.com/microsoft/nni/tree/v2.8/nni/retiarii/hub))
* Support the AutoFormer search space in the search space hub, thanks to our collaborators nbl97 and penghouwen
* One-shot NAS supports the NAS API ``repeat`` and ``cell``
* Refactor of RetiariiExperiment to share the common implementation with HPO experiment
* CGO supports pytorch-lightning 1.6
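
A minimal sketch of the aligned experience, assuming the v2.8-era `nni.retiarii` entry points (strategy and experiment names recalled from the docs linked above, not verified):

```python
import nni.retiarii.strategy as strategy
from nni.retiarii.experiment.pytorch import RetiariiExperiment

# model_space: a @model_wrapper-decorated module (v2.8-era API);
# evaluator: e.g. nni.retiarii.evaluator.pytorch.Classification(...).
# One-shot NAS is launched like multi-trial NAS by picking a one-shot strategy:
#
#   exp = RetiariiExperiment(model_space, evaluator, strategy=strategy.DARTS())
#   exp.run()
```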

Model Compression

* *Preview* Refactor and improvement of automatic model compression with a new ``CompressionExperiment``
* Support customizing the module replacement function for unsupported modules in model speedup ([doc](https://nni.readthedocs.io/en/v2.8/reference/compression/pruning_speedup.html#nni.compression.pytorch.speedup.ModelSpeedup))
* Support the module replacement function for some user-mentioned modules
* Support ``output_padding`` for ``ConvTranspose2d`` in model speedup, thanks to external contributor haoshuai-orka

Hyper-Parameter Optimization

* Make ``config.tuner.name`` case-insensitive (see the sketch after this list)
* Allow writing advisor configurations in the tuner format, i.e., aligning the configuration of advisor and tuner
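
A small illustration with the Python experiment API; the ``Experiment`` usage follows the standard quickstart pattern, and details should be checked against your NNI version.

```python
from nni.experiment import Experiment

exp = Experiment('local')
# Tuner names are matched case-insensitively as of v2.8, so 'tpe',
# 'TPE' and 'Tpe' all resolve to the same tuner.
exp.config.tuner.name = 'tpe'
exp.config.tuner.class_args = {'optimize_mode': 'maximize'}
```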

Experiment

* Support launching multiple HPO experiments in one process
* Internal refactors and improvements
+ Refactor of the logging mechanism in NNI
+ Refactor of NNI manager globals for flexibility and extensibility
+ Migrate dispatcher IPC to WebSocket
+ Decouple locking from the experiment manager logic
+ Use the launcher's sys.executable to detect the Python interpreter

WebUI

* Improve user experience of trial ordering in the overview page
* Fix the update issue in the trial detail page

Documentation

* A new translation framework for the documentation
* Add a new quantization demo ([doc](https://nni.readthedocs.io/en/v2.8/tutorials/quantization_quick_start_mnist.html))

Notable Bugfixes

* Fix TPE import issue for old metrics
* Fix the issue in TPE nested search space
* Support ``RecursiveScriptModule`` in speedup
* Fix the failed "implicit type cast" issue in ``merge_parameter()``

2.7

Documentation

A full-size upgrade of the documentation, with the following significant improvements in the reading experience, practical tutorials, and examples:

* Reorganized the document structure with a new document template. ([Upgraded doc entry](https://nni.readthedocs.io/en/v2.7))
* Added friendlier tutorials with Jupyter notebooks. ([New Quick Starts](https://nni.readthedocs.io/en/v2.7/quickstart.html))
* New model pruning demo available. ([YouTube entry](https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw), [Bilibili entry](https://space.bilibili.com/1649051673))

Hyper-Parameter Optimization

* [Improvement] TPE and random tuners will not generate duplicate hyperparameters anymore.
* [Improvement] Most Python APIs now have type annotations.

Neural Architecture Search

* Jointly search for architecture and hyper-parameters: ValueChoice in evaluator. ([doc](https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#valuechoice))
* Support composition (transformation) of one or several value choices ([doc](https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#valuechoice)); see the sketch after this list.
* Enhanced Cell API (``merge_op``, preprocessor, postprocessor). ([doc](https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#cell))
* The argument ``depth`` in the ``Repeat`` API allows ValueChoice. ([doc](https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#repeat))
* Support loading ``state_dict`` between sub-net and super-net. ([doc](https://nni.readthedocs.io/en/v2.7/reference/nas/others.html#nni.retiarii.utils.original_state_dict_hooks), [example in spos](https://nni.readthedocs.io/en/v2.7/reference/nas/strategy.html#spos))
* Support BN fine-tuning and evaluation in SPOS example. ([doc](https://nni.readthedocs.io/en/v2.7/reference/nas/strategy.html#spos))
* *Experimental* Model hyper-parameter choice. ([doc](https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#modelparameterchoice))
* *Preview* Lightning implementation for Retiarii including DARTS, ENAS, ProxylessNAS and RandomNAS. ([example usage](https://github.com/microsoft/nni/blob/v2.7/test/ut/retiarii/test_oneshot.py))
* *Preview* A search space hub that contains 10 search spaces. ([code](https://github.com/microsoft/nni/tree/v2.7/nni/retiarii/hub))
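
A small sketch of composed value choices with the v2.7-era `nni.retiarii` API (module paths recalled from the linked docs; treat them as assumptions):

```python
import nni.retiarii.nn.pytorch as nn
from nni.retiarii import model_wrapper


@model_wrapper
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        width = nn.ValueChoice([16, 32, 64], label='width')
        # A composition of a value choice is itself a value choice:
        # the second layer is always twice as wide as the first.
        self.fc1 = nn.Linear(8, width)
        self.fc2 = nn.Linear(width, width * 2)

    def forward(self, x):
        return self.fc2(self.fc1(x))
```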

Model Compression

* Pruning V2 is promoted as the default pruning framework; the old pruning is legacy and will be kept for a few releases. ([doc](https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html))
* A new pruning mode ``balance`` is supported in ``LevelPruner``. ([doc](https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html#level-pruner))
* Support coarse-grained pruning in ``ADMMPruner``. ([doc](https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html#admm-pruner))
* [Improvement] Support more operation types in pruning speedup.
* [Improvement] Optimize the performance of some pruners.

Experiment

* [Improvement] Experiment.run() no longer stops the web portal on return.

Notable Bugfixes

* Fixed: the experiment list could not open experiments with a prefix.
* Fixed: the serializer for complex kinds of arguments.
* Fixed: some typos in code. (thanks a1trl9 mrshu)
* Fixed: a cross-layer dependency issue in pruning speedup.
* Fixed: unchecking a trial did not work in the detail table.
* Fixed: the name/ID filter bug on the experiment management page.
