Major Features and Improvements
FrontEnd
- [BETA] Add the `mindspore.train.Model.fit` API, along with the `mindspore.train.callback.EarlyStopping` and `mindspore.train.callback.ReduceLROnPlateau` callbacks (see the first sketch after this list).
- [BETA] Support custom operators implemented in Julia.
- [BETA] Support custom operators implemented with the MindSpore Hybrid DSL.
- [STABLE] The `export()` interface supports exporting a model with a user-defined encryption algorithm, and the `load()` interface supports importing a model with a user-defined decryption algorithm (see the sketch after this list).
- [BETA] [Unified_Dynamic_and_Static_Graphs] [Usability] Constant-type data (`tuple`/`list`/`dict` are supported in version 1.8) can be set as mutable during graph compilation.
- [BETA] [Unified_Dynamic_and_Static_Graphs] JIT fallback is used to support control flow in the constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python `raise` statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python `assert` statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python `print` statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The `str.format()` method is supported in graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] Slicing can be used to assign values to a list in graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] Instances of custom classes can be created and invoked in graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] Obtaining class attributes from a `Cell` array or a custom class array is supported.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] `isinstance` supports more scenarios in graph mode.
- [STABLE] Rename the custom operator decorator `ms_hybrid` to `ms_kernel` (see the sketch after this list).
- [BETA] Custom operators written in the Hybrid DSL are supported on the CPU backend.
- [BETA] The Ascend backend for custom operators adds support for custom scheduling primitive syntax.
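As a minimal sketch of the new training workflow, the following combines `Model.fit` with the two new callbacks on a toy dataset; the callback arguments and the `eval_loss` monitor key are assumptions based on the 1.8 API reference, so check the docs for the exact defaults:

```python
import numpy as np
import mindspore.dataset as ds
from mindspore import nn
from mindspore.train import Model
from mindspore.train.callback import EarlyStopping, ReduceLROnPlateau

# Toy regression data; substitute your own dataset pipelines.
def gen():
    for _ in range(64):
        x = np.random.randn(4).astype(np.float32)
        yield x, x.sum(keepdims=True)

train_ds = ds.GeneratorDataset(gen, column_names=["data", "label"]).batch(8)
valid_ds = ds.GeneratorDataset(gen, column_names=["data", "label"]).batch(8)

net = nn.Dense(4, 1)
model = Model(net, loss_fn=nn.MSELoss(),
              optimizer=nn.Adam(net.trainable_params(), learning_rate=1e-2),
              metrics={"loss"})

# Assumed arguments: stop when the monitored metric stops improving, and
# shrink the learning rate on plateau; patience/factor values are arbitrary.
early_stop = EarlyStopping(monitor="eval_loss", patience=3, mode="min")
reduce_lr = ReduceLROnPlateau(monitor="eval_loss", factor=0.5, patience=2)

# fit() interleaves training and validation, driving both callbacks per epoch.
model.fit(10, train_ds, valid_ds, callbacks=[early_stop, reduce_lr])
```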
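A minimal sketch of custom model encryption and decryption, assuming the user-defined `enc_mode`/`dec_mode` callables take the serialized model bytes plus the key and return bytes (the exact contract is in the 1.8 `export`/`load` docs); the XOR cipher is purely illustrative and not secure:

```python
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

def xor_encrypt(model_data, key):
    # Toy XOR cipher for illustration only.
    k = np.frombuffer(key, dtype=np.uint8)
    data = np.frombuffer(model_data, dtype=np.uint8)
    return (data ^ np.resize(k, data.shape)).tobytes()

def xor_decrypt(cipher_data, key):
    return xor_encrypt(cipher_data, key)  # XOR is its own inverse

net = nn.Dense(4, 1)
inputs = Tensor(np.ones((1, 4), np.float32))
key = b"0123456789ABCDEF"

# Export with a user-defined encryption callable (assumed interface).
ms.export(net, inputs, file_name="dense", file_format="MINDIR",
          enc_key=key, enc_mode=xor_encrypt)

# Load with the matching user-defined decryption callable.
graph = ms.load("dense.mindir", dec_key=key, dec_mode=xor_decrypt)
```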
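A minimal sketch of a Hybrid DSL custom operator with the renamed `ms_kernel` decorator, following the documented `ops.Custom` pattern; `output_tensor` is a DSL intrinsic, and automatic shape/dtype inference for hybrid functions is assumed here:

```python
import numpy as np
import mindspore as ms
from mindspore import ops, Tensor
from mindspore.ops import ms_kernel

ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")  # CPU Hybrid DSL support is BETA

@ms_kernel
def add_mul(a, b):
    # Hybrid DSL body: allocate the output, then compute with explicit loops.
    c = output_tensor(a.shape, a.dtype)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            c[i, j] = (a[i, j] + b[i, j]) * 2.0
    return c

# A hybrid-type custom operator; output shape and dtype are inferred.
custom_add_mul = ops.Custom(add_mul)

x = Tensor(np.ones((4, 4), np.float32))
y = Tensor(np.ones((4, 4), np.float32))
print(custom_add_mul(x, y))
```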
PyNative
- [STABLE] Implement the `AdamWeightDecay` fused operator to replace the original combination of small operators.
- [STABLE] In PyNative mode, execute optimizers through the unified dynamic and static graph mechanism.
- [STABLE] Optimize the execution performance of the PyNative bprop graph and `ms_function` (see the sketch after this list).
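For reference, a minimal `ms_function` sketch: the decorated function is compiled into a static graph (including its bprop) while the surrounding code stays in PyNative mode:

```python
import numpy as np
import mindspore as ms
from mindspore import ms_function, Tensor

ms.set_context(mode=ms.PYNATIVE_MODE)

@ms_function
def fused_axpy(a, x, y):
    # Compiled into a static graph on first call; backward also runs as a graph.
    return a * x + y

x = Tensor(np.ones((2, 2), np.float32))
y = Tensor(np.ones((2, 2), np.float32))
print(fused_axpy(2.0, x, y))
```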
Auto Parallel
- [STABLE] Support the AllToAll operator in single-operator mode at graph compilation level O0.
- [STABLE] Whole-graph offloading supports launching with MPI.
- [STABLE] Seeds for model weights provide a parallel interface configuration. If no random seed is set via `mindspore.set_seed`, the initialization of each weight is determined by its current shard index. If a random seed is configured, weights with the same shape and the same sharding strategy are initialized identically (see the sketch after this list).
- [STABLE] HCCL hides internal full-mesh and non-full-mesh connections, so both fully-connected AllToAllv and hierarchical AllToAllv are allowed in one training session.
- [BETA] CPU optimizer fusion. Multiple optimizer operators are fused across parameters by data type, improving performance. Currently, this has been verified with the CPU `AdamWeightDecay` optimizer. Use the `flatten_weights` method of the network `Cell` to enable this feature (also covered in the sketch after this list).
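A minimal sketch of the user-facing pieces above; `flatten_weights` is assumed here to be callable with no required arguments:

```python
import mindspore as ms
from mindspore import nn

# With a fixed seed, weights of the same shape under the same sharding
# strategy are initialized identically; without it, initialization depends
# on the current shard index.
ms.set_seed(1)

net = nn.Dense(16, 16)
# Assumption: flatten_weights() groups weights by data type so the CPU
# AdamWeightDecay optimizer can update them through fused operators.
net.flatten_weights()
opt = nn.AdamWeightDecay(net.trainable_params(), learning_rate=1e-3)
```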
Executor
- [STABLE] Provide a southbound API.
- [STABLE] Multi-actor fusion execution optimizes runtime performance.
- [STABLE] No-op operators (e.g. `Reshape`) are eliminated during execution.
- [STABLE] The embedded cache architecture switches to the unified distributed runtime.
- [STABLE] Parameter Server training switches to the unified distributed runtime.
- [STABLE] Support Parameter Server mode training on CPU (see the sketch after this list).
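A minimal sketch of enabling Parameter Server training on CPU; the environment-variable based role setup is assumed from the Parameter Server tutorial:

```python
import mindspore as ms

# Role selection is assumed to happen through environment variables set
# before launch, e.g. MS_ROLE (MS_SCHED / MS_PSERVER / MS_WORKER),
# MS_SCHED_HOST, MS_SCHED_PORT, MS_SERVER_NUM and MS_WORKER_NUM.
ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")
ms.set_ps_context(enable_ps=True)

# net is assumed to be defined elsewhere; its parameters are placed on the
# parameter server with:
# net.set_param_ps()
```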
DataSet
- [STABLE] Optimize the multi-process mechanism of the `map` operation on dataset objects when `num_parallel_workers > 1` and `python_multiprocessing=True`: data channels and child processes are now mapped one to one, avoiding excessive file handle usage; the `close_pool` interface is also removed (see the sketch after this list).
- [STABLE] Add a batch of Vision, Text and Audio data augmentation operations.
- [STABLE] Fix a bug where the flat_map method of the Dataset class does not flatten the result.
- [STABLE] Unify the import paths of dataset augmentation APIs to provide an easier way to use them. Refer to the [latest API usage](https://www.mindspore.cn/docs/en/r1.8/api_python/mindspore.dataset.vision.html).
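A minimal sketch combining the unified augmentation import path with a multi-process `map`; the toy generator and transform are illustrative:

```python
import numpy as np
import mindspore.dataset as ds
import mindspore.dataset.vision as vision  # unified path, no .c_transforms / .py_transforms

def gen():
    for _ in range(8):
        yield (np.random.randint(0, 255, (32, 32, 3), np.uint8),)

def invert(img):
    # User-defined Python transform; runs in the map child processes.
    return 255 - img

dataset = ds.GeneratorDataset(gen, column_names=["image"])
dataset = dataset.map(operations=vision.Resize((16, 16)), input_columns=["image"])
# With num_parallel_workers > 1 and python_multiprocessing=True, each data
# channel is now paired with exactly one child process.
dataset = dataset.map(operations=[invert], input_columns=["image"],
                      num_parallel_workers=2, python_multiprocessing=True)
```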
API Change
operator
- [STABLE] Add GPU support for ops.adaptive_avg_pool2d (see the example after this list).
- [BETA] Add Ascend, GPU, and CPU support for ops.adaptive_max_pool2d.
- [BETA] Add CPU support for ops.approximate_equal.
- [STABLE] Add CPU support for ops.argmin.
- [BETA] Add CPU support for ops.assign_sub.
- [STABLE] Add GPU support for ops.bernoulli.
- [BETA] Add CPU support for ops.bessel_i0.
- [BETA] Add CPU support for ops.bessel_i0e.
- [BETA] Add CPU support for ops.bessel_i1.
- [BETA] Add CPU support for ops.bessel_i1e.
- [STABLE] Add CPU support for ops.bessel_j0.
- [STABLE] Add CPU support for ops.bessel_j1.
- [STABLE] Add CPU support for ops.bessel_k0.
- [STABLE] Add CPU support for ops.bessel_k0e.
- [BETA] Add CPU support for ops.bessel_k1.
- [BETA] Add CPU support for ops.bessel_k1e.
- [STABLE] Add CPU support for ops.bessel_y0.
- [STABLE] Add CPU support for ops.bessel_y1.
- [STABLE] Add CPU support for ops.bitwise_and.
- [STABLE] Add CPU support for ops.bitwise_or.
- [STABLE] Add CPU support for ops.bitwise_xor.
- [STABLE] Add functional interface for ops.broadcast_to.
- [BETA] Add GPU and CPU support for ops.ceil.
- [BETA] Add GPU support for ops.col2im.
- [BETA] Add functional interface for ops.concat.
- [STABLE] Add GPU support for ops.cosh.
- [STABLE] Add Ascend and CPU support for ops.ctc_greedy_decoder.
- [BETA] Add GPU and CPU support for ops.DataFormatDimMap.
- [BETA] Add GPU and CPU support for ops.dropout2d.
- [BETA] Add CPU support for ops.dropout3d.
- [BETA] Add CPU support for ops.erf.
- [BETA] Add CPU support for ops.erfc.
- [STABLE] Add functional interface for ops.expand_dims.
- [STABLE] Add GPU and CPU support for ops.fast_gelu.
- [STABLE] Add Ascend dynamic shape support for ops.flatten.
- [BETA] Add GPU and CPU support for ops.ger.
- [STABLE] Add Ascend, GPU, and CPU support for ops.gumbel_softmax.
- [BETA] Add GPU and CPU support for ops.hardshrink.
- [BETA] Add CPU support for ops.index_add.
- [BETA] Add CPU support for ops.inplace_add.
- [BETA] Add CPU support for ops.inplace_sub.
- [STABLE] Add CPU support for ops.intopk.
- [STABLE] Add GPU and CPU support for ops.inv.
- [STABLE] Add GPU and CPU support for ops.invert.
- [BETA] Add CPU support for ops.isclose.
- [STABLE] Add CPU support for ops.lerp.
- [BETA] Add CPU support for ops.linspace.
- [BETA] Add functional interface for ops.log_softmax.
- [BETA] Add Ascend, GPU, and CPU support for ops.norm.
- [BETA] Add CPU support for ops.lrn.
- [BETA] Add GPU support for ops.masked_select.
- [BETA] Add GPU and CPU support for ops.matrix_band_part.
- [BETA] Add GPU and CPU support for ops.matrix_solve.
- [BETA] Add CPU support for ops.meshgrid.
- [STABLE] Add CPU support for ops.mish.
- [BETA] Add GPU support for ops.nonzero.
- [STABLE] Add GPU and CPU support for ops.padding.
- [BETA] Add Ascend dynamic shape support for ops.pow.
- [BETA] Add functional interface for ops.range.
- [BETA] Add Ascend dynamic shape support for ops.round.
- [STABLE] Add Ascend dynamic shape support for ops.scatter_add.
- [STABLE] Add Ascend dynamic shape support for ops.scatter_div.
- [BETA] Add GPU support for ops.scatter_max.
- [BETA] Add GPU support for ops.scatter_min.
- [BETA] Add CPU support for ops.scatter_nd_add.
- [STABLE] Add GPU and CPU support for ops.scatter_nd_div.
- [STABLE] Add GPU and CPU support for ops.scatter_nd_min.
- [STABLE] Add GPU and CPU support for ops.scatter_nd_mul.
- [BETA] Add CPU support for ops.scatter_nd_sub.
- [STABLE] Add Ascend dynamic shape support for ops.scatter_update.
- [BETA] Add Ascend dynamic shape support for ops.select.
- [BETA] Add GPU and CPU support for ops.selu.
- [BETA] Add GPU and CPU support for ops.soft_shrink.
- [BETA] Add CPU support for ops.softsign.
- [STABLE] Add GPU support for ops.tan.
- [BETA] Add Ascend and CPU support for ops.tensor_scatter_add.
- [STABLE] Add GPU and CPU support for ops.tensor_scatter_div.
- [STABLE] Add GPU and CPU support for ops.tensor_scatter_mul.
- [BETA] Add Ascend and CPU support for ops.tensor_scatter_sub.
- [STABLE] Add Ascend, GPU, and CPU support for nn.AdaptiveAvgPool1d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.AdaptiveMaxPool1d.
- [BETA] Add Ascend, GPU, and CPU support for nn.BiDense.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad1d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad2d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad3d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.Hardtanh.
- [STABLE] Add Ascend, GPU, and CPU support for nn.HuberLoss.
- [STABLE] Add Ascend, GPU, and CPU support for nn.RReLU.
- [STABLE] Add Ascend, GPU, and CPU support for nn.Tanhshrink.
- [STABLE] Add Ascend, GPU, and CPU support for nn.Threshold.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ZeroPad2d.
- [BETA] Add GPU support for ops.unique_consecutive.
- [STABLE] Add CPU support for ops.unsorted_segment_max.
- [STABLE] Add CPU support for ops.unsorted_segment_min.
- [STABLE] Add GPU support for ops.unsorted_segment_prod.
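A minimal sketch of a few of the functional interfaces listed above; shapes and values are illustrative:

```python
import numpy as np
from mindspore import ops, Tensor

x = Tensor(np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4))
# Adaptive average pooling to a fixed 2x2 output.
print(ops.adaptive_avg_pool2d(x, (2, 2)))

a = Tensor(np.array([1, 2, 3], np.int32))
# New functional interface: broadcast a (3,) tensor to (2, 3).
print(ops.broadcast_to(a, (2, 3)))

b = Tensor(np.array([3, 6, 12], np.int32))
# Element-wise bitwise AND.
print(ops.bitwise_and(a, b))
```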
Backwards Incompatible Change
Python API
- DVPP simulation algorithm is no longer supported. Remove `mindspore.dataset.vision.c_transforms.SoftDvppDecodeRandomCropResizeJpeg` and `mindspore.dataset.vision.c_transforms.SoftDvppDecodeResizeJpeg` interfaces.
- Add the `on_train_epoch_end` method to LossMonitor, which prints metric information at the epoch level when used with `mindspore.train.Model.fit`.
- The TimeMonitor output changes: "train" or "eval" is added to the printed content to distinguish between the training and inference phases.
- The `filter_prefix` parameter of the `mindspore.load_checkpoint` interface no longer supports the empty string (""), and the matching rule changes from exact matching to fuzzy matching (see the sketch after this list).
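A minimal sketch of the new fuzzy matching behavior; the checkpoint path and prefix are illustrative:

```python
import mindspore as ms

# Parameters whose names fuzzily match "moment" (e.g. optimizer moments)
# are filtered out; an empty string "" is no longer supported.
param_dict = ms.load_checkpoint("net.ckpt", filter_prefix="moment")
```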
Import Optimization
APIs in `mindspore.context`, `mindspore.parallel`, `mindspore.profiler` and `mindspore.train` can be used directly in `mindspore`. The original usage is still supported.
For example:
- `mindspore.context.set_context` can be simplified to `mindspore.set_context`.
- `mindspore.parallel.set_algo_parameters` can be simplified to `mindspore.set_algo_parameters`.
- `mindspore.profiler.Profiler` can be simplified to `mindspore.Profiler`.
- `mindspore.train.callback.Callback` can be simplified to `mindspore.train.Callback`.
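A minimal sketch of the shortened paths (assuming the corresponding symbols are re-exported at the top level, per the list above):

```python
import mindspore as ms

ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")  # was mindspore.context.set_context
profiler = ms.Profiler(output_path="./profiler_data")    # was mindspore.profiler.Profiler
```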
The API pages are aggregated to <https://www.mindspore.cn/docs/en/r1.8/api_python/mindspore.html>.
Contributors
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.
Contributions of any kind are welcome!
MindSpore Lite 1.8.0 Release Notes
Major Features and Improvements
API
- [STABLE] Add C++ and Python APIs for model conversion.
- [STABLE] Add Python APIs for model inference (see the sketch after this list).
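A heavily hedged sketch of the new Python APIs; the `mindspore_lite` class and method names below are assumptions drawn from the Lite Python API reference and should be verified against the 1.8 documentation:

```python
import mindspore_lite as mslite

# Model conversion (assumed interface): convert a TensorFlow Lite model
# to the MindSpore Lite format.
converter = mslite.Converter(fmk_type=mslite.FmkType.TFLITE,
                             model_file="mobilenet.tflite",
                             output_file="mobilenet")
converter.converter()

# Model inference (assumed interface): load and run the converted model.
context = mslite.Context()
model = mslite.Model()
model.build_from_file("mobilenet.ms", mslite.ModelType.MINDIR_LITE, context)
inputs = model.get_inputs()
# ... fill inputs[i] with data, then:
outputs = model.predict(inputs)
```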
Post-Training Quantization
- [STABLE] Support per-layer quantization, with built-in CLE (cross-layer equalization) to improve per-layer quantization accuracy.