MindSpore

1.8.0

Major Features and Improvements

FrontEnd

- [BETA] Add the `mindspore.train.Model.fit` API, and add `mindspore.train.callback.EarlyStopping` and `mindspore.train.callback.ReduceLROnPlateau` callbacks (see the sketch after this list).
- [BETA] Support custom operator implemented by Julia.
- [BETA] Support custom operator implemented by MindSpore Hybrid DSL.
- [STABLE] The export() interface supports the export of a model using a custom encryption algorithm, and the load() interface supports the import of a model using a custom decryption algorithm.
- [BETA] [Unified_Dynamic_and_Static_Graphs] [Usability] Constant-type data (tuple/list/dict are supported in version 1.8) can be set as mutable during graph compilation.
- [BETA] [Unified_Dynamic_and_Static_Graphs] JIT fallback is used to support the control flow capability in the constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python raise statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python assert statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python print statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The str.format() method is supported in the graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The slice method can be used to assign a value to the list in the graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The instances of custom classes can be created and invoked in the graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] Obtaining the properties of a class from the Cell array and the custom class array is supported.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] isinstance supports more scenarios in graph mode.
- [STABLE] Rename the custom operator decorator 'ms_hybrid' to 'ms_kernel'.
- [BETA] The custom operator Hybrid DSL is supported on the CPU backend.
- [BETA] The Ascend backend for custom operators adds support for custom scheduling primitive syntax.
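
A minimal sketch of the new `Model.fit` flow with the two new callbacks, assuming `net`, `loss_fn`, `optimizer`, `train_ds`, and `valid_ds` are defined elsewhere; the callback arguments shown are illustrative:

```python
from mindspore.train import Model
from mindspore.train.callback import EarlyStopping, ReduceLROnPlateau

# net, loss_fn, optimizer, train_ds and valid_ds are assumed to exist.
model = Model(net, loss_fn=loss_fn, optimizer=optimizer, metrics={"loss"})
callbacks = [
    ReduceLROnPlateau(monitor="eval_loss", factor=0.5, patience=3),  # shrink LR on plateau
    EarlyStopping(monitor="eval_loss", patience=5),                  # stop when eval_loss stalls
]
# fit() interleaves training and validation and feeds eval metrics to the callbacks.
model.fit(10, train_ds, valid_ds, callbacks=callbacks)
```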

PyNative

- [STABLE] Implement the AdamWeightDecay operator to replace the original combination of small operators.
- [STABLE] In PyNative mode, execute the optimizer by unifying the dynamic and static graphs.
- [STABLE] Optimize the execution performance of PyNative bprop graph and ms_function.

Auto Parallel

- [STABLE] Support the AllToAll single-operator mode, enabling the AllToAll operator at graph compilation level O0.
- [STABLE] Whole-graph offloading supports launching with MPI.
- [STABLE] Model weight seeds support parallel interface configuration. If the random seed is not set through the mindspore.set_seed interface, the weights initialized on each shard are determined by the current fragment index; if it is set, weights with the same shape and the same sharding strategy are initialized identically.
- [STABLE] HCCL shields internal full-mesh and non-full-mesh connections, so both fully-connected AllToAllv and hierarchical AllToAllv are allowed in one training session.
- [BETA] CPU optimizer fusion: multiple optimizer operators are combined by data type through cross-parameter fusion, improving performance. It has currently been verified on the CPU AdamWeightDecay optimizer. Use the flatten_weights method of the network Cell to enable this function, as sketched below.
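
A minimal sketch of enabling the CPU optimizer fusion described above, assuming `net` is an already-constructed `nn.Cell`:

```python
import mindspore.nn as nn

# net is an assumed, already-constructed nn.Cell.
# flatten_weights() lays the weights out contiguously by data type so the
# CPU AdamWeightDecay optimizer can fuse updates across parameters.
net.flatten_weights()
optimizer = nn.AdamWeightDecay(net.trainable_params(), learning_rate=1e-3)
```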

Executor

- [STABLE] Provide southbound API.
- [STABLE] Multi-actor fusion execution optimizes runtime performance.
- [STABLE] No-op operators (e.g., reshape) are eliminated from execution.
- [STABLE] The embedded cache architecture switches to the unified distributed runtime.
- [STABLE] Parameter Server mode switches to the unified distributed runtime.
- [STABLE] Support Parameter Server mode training on CPU.

DataSet

- [STABLE] When the map operation is used on dataset objects with num_parallel_workers > 1 and python_multiprocessing=True, the multiprocess mechanism is optimized so that data channels and child processes are mapped one to one, avoiding excessive file-handle usage; the closing_pool interface is also removed.
- [STABLE] Add a batch of Vision, Text and Audio data augmentation operations.
- [STABLE] Fix a bug where the flat_map method of the Dataset class does not flatten the result.
- [STABLE] Unify the import paths of dataset augmentation APIs to provide an easier way to use them. Refer to the [latest API usages](https://www.mindspore.cn/docs/en/r1.8/api_python/mindspore.dataset.vision.html).

API Change

operator

- [STABLE] Add GPU support for ops.adaptive_avg_pool2d.
- [BETA] Add Ascend, GPU, and CPU support for ops.adaptive_max_pool2d.
- [BETA] Add CPU support for ops.approximate_equal.
- [STABLE] Add CPU support for ops.argmin.
- [BETA] Add CPU support for ops.assign_sub.
- [STABLE] Add GPU support for ops.bernoulli.
- [BETA] Add CPU support for ops.bessel_i0.
- [BETA] Add CPU support for ops.bessel_i0e.
- [BETA] Add CPU support for ops.bessel_i1.
- [BETA] Add CPU support for ops.bessel_i1e.
- [STABLE] Add CPU support for ops.bessel_j0.
- [STABLE] Add CPU support for ops.bessel_j1.
- [STABLE] Add CPU support for ops.bessel_k0.
- [STABLE] Add CPU support for ops.bessel_k0e.
- [BETA] Add CPU support for ops.bessel_k1.
- [BETA] Add CPU support for ops.bessel_k1e.
- [STABLE] Add CPU support for ops.bessel_y0.
- [STABLE] Add CPU support for ops.bessel_y1.
- [STABLE] Add CPU support for ops.bitwise_and.
- [STABLE] Add CPU support for ops.bitwise_or.
- [STABLE] Add CPU support for ops.bitwise_xor.
- [STABLE] Add functional interface for ops.broadcast_to.
- [BETA] Add GPU and CPU support for ops.ceil.
- [BETA] Add GPU support for ops.col2im.
- [BETA] Add functional interface for ops.concat.
- [STABLE] Add GPU support for ops.cosh.
- [STABLE] Add Ascend and CPU support for ops.ctc_greedy_decoder.
- [BETA] Add GPU and CPU support for ops.DataFormatDimMap.
- [BETA] Add GPU and CPU support for ops.dropout2d.
- [BETA] Add CPU support for ops.dropout3d.
- [BETA] Add CPU support for ops.erf.
- [BETA] Add CPU support for ops.erfc.
- [STABLE] Add functional interface for ops.expand_dims.
- [STABLE] Add GPU and CPU support for ops.fast_gelu.
- [STABLE] Add Ascend dynamic shape support for ops.flatten.
- [BETA] Add GPU and CPU support for ops.ger.
- [STABLE] Add Ascend, GPU, and CPU support for ops.gumbel_softmax.
- [BETA] Add GPU and CPU support for ops.hardshrink.
- [BETA] Add CPU support for ops.index_add.
- [BETA] Add CPU support for ops.inplace_add.
- [BETA] Add CPU support for ops.inplace_sub.
- [STABLE] Add CPU support for ops.intopk.
- [STABLE] Add GPU and CPU support for ops.inv.
- [STABLE] Add GPU and CPU support for ops.invert.
- [BETA] Add CPU support for ops.isclose.
- [STABLE] Add CPU support for ops.lerp.
- [BETA] Add CPU support for ops.linspace.
- [BETA] Add functional interface for ops.log_softmax.
- [BETA] Add Ascend, GPU, and CPU support for ops.norm.
- [BETA] Add CPU support for ops.lrn.
- [BETA] Add GPU support for ops.masked_select.
- [BETA] Add GPU and CPU support for ops.matrix_band_part.
- [BETA] Add GPU and CPU support for ops.matrix_solve.
- [BETA] Add CPU support for ops.meshgrid.
- [STABLE] Add CPU support for ops.mish.
- [BETA] Add GPU support for ops.nonzero.
- [STABLE] Add GPU and CPU support for ops.padding.
- [BETA] Add Ascend dynamic shape support for ops.pow.
- [BETA] Add functional interface for ops.range.
- [BETA] Add Ascend dynamic shape support for ops.round.
- [STABLE] Add Ascend dynamic shape support for ops.scatter_add.
- [STABLE] Add Ascend dynamic shape support for ops.scatter_div.
- [BETA] Add GPU support for ops.scatter_max.
- [BETA] Add GPU support for ops.scatter_min.
- [BETA] Add CPU support for ops.scatter_nd_add.
- [STABLE] Add GPU and CPU support for ops.scatter_nd_div.
- [STABLE] Add GPU and CPU support for ops.scatter_nd_min.
- [STABLE] Add GPU and CPU support for ops.scatter_nd_mul.
- [BETA] Add CPU support for ops.scatter_nd_sub.
- [STABLE] Add Ascend dynamic shape support for ops.scatter_update.
- [BETA] Add Ascend dynamic shape support for ops.select.
- [BETA] Add GPU and CPU support for ops.selu.
- [BETA] Add GPU and CPU support for ops.soft_shrink.
- [BETA] Add CPU support for ops.softsign.
- [STABLE] Add GPU support for ops.tan.
- [BETA] Add Ascend and CPU support for ops.tensor_scatter_add.
- [STABLE] Add GPU and CPU support for ops.tensor_scatter_div.
- [STABLE] Add GPU and CPU support for ops.tensor_scatter_mul.
- [BETA] Add Ascend and CPU support for ops.tensor_scatter_sub.
- [STABLE] Add Ascend, GPU, and CPU support for nn.AdaptiveAvgPool1d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.AdaptiveMaxPool1d.
- [BETA] Add Ascend, GPU, and CPU support for nn.BiDense.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad1d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad2d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ConstantPad3d.
- [STABLE] Add Ascend, GPU, and CPU support for nn.Hardtanh.
- [STABLE] Add Ascend, GPU, and CPU support for nn.HuberLoss.
- [STABLE] Add Ascend, GPU, and CPU support for nn.RReLU.
- [STABLE] Add Ascend, GPU, and CPU support for nn.Tanhshrink.
- [STABLE] Add Ascend, GPU, and CPU support for nn.Threshold.
- [STABLE] Add Ascend, GPU, and CPU support for nn.ZeroPad2d.
- [BETA] Add GPU support for ops.unique_consecutive.
- [STABLE] Add CPU support for ops.unsorted_segment_max.
- [STABLE] Add CPU support for ops.unsorted_segment_min.
- [STABLE] Add GPU support for ops.unsorted_segment_prod.

Backwards Incompatible Change

Python API

- DVPP simulation algorithm is no longer supported. Remove `mindspore.dataset.vision.c_transforms.SoftDvppDecodeRandomCropResizeJpeg` and `mindspore.dataset.vision.c_transforms.SoftDvppDecodeResizeJpeg` interfaces.
- Add the `on_train_epoch_end` method to LossMonitor, which prints metric information at the epoch level when LossMonitor is used with `mindspore.train.Model.fit`.
- TimeMonitor's printed content changes: "train" or "eval" is added to the output to distinguish between the training and inference phases.
- The `filter_prefix` parameter of the `mindspore.load_checkpoint` interface no longer accepts an empty string (""), and its matching rule changes from strict matching to fuzzy matching, as sketched below.
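
A minimal sketch of the new `filter_prefix` behavior; "model.ckpt" and the prefix are placeholders:

```python
import mindspore as ms

# Parameters whose names fuzzily match "moments" are skipped during loading;
# previously the prefix had to match strictly, and "" was accepted.
param_dict = ms.load_checkpoint("model.ckpt", filter_prefix="moments")
```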

Import Optimization

APIs in `mindspore.context`, `mindspore.parallel`, `mindspore.profiler` and `mindspore.train` can now be used directly from `mindspore`. The original usage is still supported.

For example:

- `mindspore.context.set_context` can be simplified to `mindspore.set_context`.
- `mindspore.parallel.set_algo_parameters` can be simplified to `mindspore.set_algo_parameters`.
- `mindspore.profiler.Profiler` can be simplified to `mindspore.Profiler`.
- `mindspore.train.callback.Callback` can be simplified to `mindspore.train.Callback`.
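
A minimal sketch showing the simplified and the original form side by side:

```python
import mindspore as ms

# New, simplified form:
ms.set_context(mode=ms.GRAPH_MODE, device_target="CPU")

# Former form, still supported:
# ms.context.set_context(mode=ms.context.GRAPH_MODE, device_target="CPU")
```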

The API pages are aggregated to <https://www.mindspore.cn/docs/en/r1.8/api_python/mindspore.html>.

Contributors

Thanks goes to these wonderful people:

AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, shu-kun-zhang.

Contributions of any kind are welcome!

MindSpore Lite 1.8.0 Release Notes

Major Features and Improvements

API

- [STABLE] Add C++ and Python APIs for model conversion.
- [STABLE] Add Python APIs for model inference.

Post-Training Quantization

- [STABLE] Support per-layer quantization, with built-in CLE to optimize per-layer quantization accuracy.

1.7.0

Major Features and Improvements

OS

- [STABLE] Support Python 3.8 (Linux/Windows/Mac).
- [STABLE] Improve installation with a more detailed install guide and automated shell scripts.
- [STABLE] Support multi-threaded operator computing on Windows.
- [STABLE] Compatible with GCC from version 7.3 to 9.x.

FrontEnd

- [STABLE] Support dynamic weight decay for optimizers; that is, the weight decay value changes with the training step.
- [STABLE] Add four methods to create Tensor, which are `mindspore.numpy.rand()`, `mindspore.numpy.randn()`, `mindspore.numpy.randint()`, and `mindspore.ops.arange()`.
- [STABLE] Add `mindspore.train.callback.History` in Callback.
- [BETA] Support custom operator implemented by Julia operator.
- [STABLE] Support accessing attributes and methods of user-defined classes through the `mindspore.ms_class` class decorator (see the sketch after this list).
- [STABLE] Support training when a network has side effect operations and control flow statements at the same time.
- [STABLE] Support for more complex control flow syntax, such as a for loop statement in the body of a while loop.
- [STABLE] Improve the performance of networks with complex control flow syntax by decreasing the number of subgraphs.
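
A minimal sketch of the `ms_class` decorator; `Config` and its `scale` attribute are hypothetical:

```python
import mindspore as ms
import mindspore.nn as nn

@ms.ms_class
class Config:
    """A hypothetical user-defined class whose members are visible in graph mode."""
    def __init__(self):
        self.scale = 2.0

class Net(nn.Cell):
    def __init__(self):
        super().__init__()
        self.cfg = Config()

    def construct(self, x):
        # Attributes of the ms_class-decorated object can be read in graph mode.
        return x * self.cfg.scale
```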

PyNative

- [STABLE] Add hook functions in PyNative mode, including the forward hook interfaces register_forward_pre_hook and register_forward_hook, and the backward hook interface register_backward_hook (see the sketch after this list).
- [STABLE] Optimize the execution performance of PyNative mode, and execute the front-end Python and the back-end C++ in parallel.
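
A minimal sketch of registering a forward hook in PyNative mode; the exact hook signature should be checked against the r1.7 docs, and `nn.ReLU` is just a convenient cell to hook:

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn

ms.context.set_context(mode=ms.context.PYNATIVE_MODE)

def forward_hook(cell_id, inputs, output):
    # Runs after the cell's construct(); returning None keeps the output unchanged.
    print("forward pass through:", cell_id)

relu = nn.ReLU()
handle = relu.register_forward_hook(forward_hook)
relu(ms.Tensor(np.array([-1.0, 1.0], np.float32)))
handle.remove()  # detach the hook when it is no longer needed
```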

Auto Parallel

- [STABLE] Support TopK routing, data parallelism, and optimizer state parallelism when MoE is enabled.
- [STABLE] Support AllGather/ReduceScatter communication operator fusion. Support AllReduce fusion by data volume size in DATA_PARALLEL mode.
- [STABLE] Support ops.clip_by_global_norm in the parallel mode.
- [STABLE] Support AdaSum optimizer in the parallel mode.
- [STABLE] Support automatic optimizer state parallel.
- [STABLE] Support configurable AlltoAll. Support automatically adding the VirtualDataset cell.
- [STABLE] Support automatic inference of trainable parameters in pipeline parallel training.
- [STABLE] Support clusters where the device number is not a power of 2.
- [STABLE] Support sharding propagation in auto-parallel mode.
- [STABLE] Support optimizer offload under the unified runtime.
- [STABLE] Support Adafactor operator on CPU.
- [STABLE] Support sharding on the H/W axes for the Conv2d/Conv2DTranspose operator. Support operators such as ResizeBilinear, ROIAlign, CropAndResize, BoundingBoxEncode, IOU, and RandomChoiceWithMask.

Executor

- [BETA] [Failure Recovery Under Data Parallel Training](https://www.mindspore.cn/tutorials/experts/en/r1.7/parallel/train_gpu.html): support automatic failure recovery in data parallel training mode.
- [BETA] Support searching for the optimal number of CPU threads for execution. The search takes 50 steps, after which overall performance reaches a stable state; when measuring performance, use data from after step 50 as the baseline.

DataSet

- [STABLE] Add dataset operations mapping between TensorFlow.data module and MindSpore.dataset module, [check list](https://www.mindspore.cn/docs/en/r1.7/note/api_mapping/tensorflow_api_mapping.html#tf-data).
- [STABLE] Optimize Python multiprocessing so that processes exit normally.
- [STABLE] Support [Dataset Autotune](https://www.mindspore.cn/tutorials/experts/en/master/dataset/dataset_autotune.html) for tuning the speed of dataset pipeline automatically.
- [BETA] [Dataset Offload](https://www.mindspore.cn/tutorials/experts/en/master/dataset/dataset_offload.html) support new data augmentation operations: RandomColorAdjust, RandomSharpness, TypeCast.
- Output a single data column when the `__getitem__`/`__next__` methods of GeneratorDataset return a single NumPy object.
- When specifying too many processes or threads for loading a dataset causes `RuntimeError: can't start new thread`, use `ulimit -u 10240` to increase the number of threads/processes available to the current user.

API Change

Backwards Incompatible Change

Python API

- Modify the gradient return value type of the hook corresponding to the register_backward_hook function; the gradient is now uniformly returned as a tuple. ([!31876](https://gitee.com/mindspore/mindspore/pulls/31876))
- Deprecated usage: `import mindspore.dataset.engine.datasets as ds`. Use `import mindspore.dataset as ds` instead as recommended in [mindspore doc](https://www.mindspore.cn/docs/en/r1.7/api_python/mindspore.dataset.html).
- Add the `mindspore.ms_class` interface as a class decorator for user-defined classes. It allows MindSpore to identify user-defined classes and access their attributes and methods. ([!30855](https://gitee.com/mindspore/mindspore/pulls/30855))
- Deprecate `mindspore.SparseTensor`; use `mindspore.COOTensor` instead (see the sketch after this list). ([!28505](https://gitee.com/mindspore/mindspore/pulls/28505))
- Add Tensor init arg `internal` for internal use.
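
A minimal migration sketch from `SparseTensor` to `COOTensor`; the values are illustrative:

```python
import mindspore as ms
from mindspore import Tensor, COOTensor

indices = Tensor([[0, 1], [1, 2]], dtype=ms.int32)  # positions of the nonzero entries
values = Tensor([1.0, 2.0], dtype=ms.float32)       # the nonzero entries themselves
shape = (3, 4)

# mindspore.SparseTensor(indices, values, shape) is deprecated in favor of:
coo = COOTensor(indices, values, shape)
```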

Contributors

Thanks goes to these wonderful people:

AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.

Contributions of any kind are welcome!

MindSpore Lite 1.7.0 Release Notes

Major Features and Improvements

Post quantization

- [STABLE] Support post quantization to run dynamic quantization algorithm.
- [BETA] Support post quantized model to run on NVIDIA GPU.

1.6.0

Major Features and Improvements

OS

- [STABLE] Support macOS with CPU (x86).
- [BETA] Support macOS with CPU (M1).

FrontEnd

- [STABLE] Support JIT Fallback feature in Graph mode.
- [STABLE] Support compile cache feature in Graph mode.
- [STABLE] Add new optimizers, including ASGD and Rprop.
- [STABLE] Add new initializers, including Identity, Orthogonal, Dirac, Sparse and VarianceScaling.
- [STABLE] Support resuming training when an exception occurs in the process.
- [STABLE] Change `mindspore.nn.LSTMCell` from single-layer LSTM to single-cell LSTM.
- [BETA] Introduce `mindspore.ops.Custom` to customize your own operators for the Ascend (AICore, AICPU), GPU, and CPU backends; the custom operator type can be TBE, AKG, a pure Python function, or a prebuilt binary (called an AOT operator). See the sketch below.
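
A minimal sketch using the pure-Python (`pyfunc`) custom-operator type; the add function and the shape/dtype inference lambdas are illustrative, and the other operator types (TBE, AKG, AOT, Julia) follow the same pattern:

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

def np_add(a, b):
    # A pyfunc custom operator runs as ordinary Python on NumPy arrays.
    return a + b

# out_shape and out_dtype are inferred here from the first input.
custom_add = ops.Custom(np_add, lambda a, b: a, lambda a, b: a, func_type="pyfunc")

x = Tensor(np.ones((2, 2), np.float32))
y = Tensor(np.ones((2, 2), np.float32))
print(custom_add(x, y))
```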

PyNative

- [STABLE] Support heterogeneous feature in PyNative mode.
- [STABLE] Optimize memory allocation in PyNative mode.

Auto Parallel

- [STABLE] Support configuring the output shard strategy of the MatMul distributed operator.
- [STABLE] Support multi-instance parallelism.
- [STABLE] Support activation slice communication and calculation overlap in Transformer.
- [STABLE] Support heterogeneous parallel tensor swap.
- [STABLE] Add a distributed-operator implementation of ResizeNearestNeighbor.
- [STABLE] Add a communication operator named NeighborExchangeV2 that supports data exchange between 8 adjacent rank IDs.
- [STABLE] Pipeline parallelism supports the GPU platform.
- [STABLE] Add cell-level data parallel interface.
- [STABLE] Support gradient AllReduce fusion according to the amount of data.
- [STABLE] Support a sharding strategy search algorithm called sharding propagation.

Executor

- [STABLE] Support multigraph sink and subgraph sink of MindRT.
- [STABLE] Support memory swap to break the device memory size limit on Ascend platform.
- [STABLE] Support dynamic deployment of distributed training clusters (GPU).
- [BETA] Support automatic failover of parameter server.

DataSet

- [STABLE] Support overwrite feature in MindRecord.
- [STABLE] Improve logs to be more user-friendly.
- [BETA] Support new feature [Dataset Offload](https://www.mindspore.cn/docs/programming_guide/en/r1.6/enable_dataset_offload.html) to speed up data processing by heterogeneous computing.
- [BETA] Support new feature [Dataset Autotune](https://www.mindspore.cn/docs/programming_guide/en/r1.6/enable_auto_tune.html) to adjust parallelism of dataset pipeline automatically.

GraphKernel Fusion

- [STABLE] Support kernel fusion and generation for CPU backend.

Federated Learning

- [STABLE] FL-Client framework and model decoupling.
- [BETA] Support the cross-silo federated learning framework.

Debug

- [STABLE] Support dump at the cell level (Ascend).
- [STABLE] Support dumping Tensor statistics (Ascend/GPU).
- [STABLE] Support displaying corresponding code lines for fusion nodes.
- [STABLE] Support passing dump flag in Ascend backend in order to dump correct operators after fusion transformation.

API Change

Backwards Incompatible Change

Python API

The `mindspore.dataset.MindDataset` interface changes the input parameter `dataset_file` ([!27542](https://gitee.com/mindspore/mindspore/pulls/27542))

`MindDataset` has the input parameter `dataset_file`, whose name is singular, yet it can receive a single file path or a list of multiple file paths; it is therefore preferable to change the parameter name to the plural form. In addition, the input parameters of most dataset APIs, such as `TFRecordDataset`, are already plural (`dataset_files`). To ensure consistency, the input parameter `dataset_file` of MindDataset is changed to the plural form `dataset_files`; see the updated API in [mindspore.dataset.MindDataset](https://www.mindspore.cn/docs/en/master/api_python/dataset/mindspore.dataset.MindDataset.html#mindspore.dataset.MindDataset) and the sketch below.
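
A minimal before/after sketch; "data.mindrecord" is a placeholder file:

```python
import mindspore.dataset as ds

# Before: singular parameter name
# dataset = ds.MindDataset(dataset_file=["data.mindrecord"])

# After: plural, consistent with TFRecordDataset and friends
dataset = ds.MindDataset(dataset_files=["data.mindrecord"])
```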

Delete `mindspore.Tensor`'s property `virtual_flag`([!26989](https://gitee.com/mindspore/mindspore/pulls/26989))

Delete `mindspore.Parameter`'s property `is_init`([!26989](https://gitee.com/mindspore/mindspore/pulls/26989))

Delete `mindspore.nn.ROC`'s interface `roc`([!25713](https://gitee.com/mindspore/mindspore/pulls/25713))

The `shard()` interface of primitives is changed from `shard(strategy)` to `shard(in_strategy=None, out_strategy=None)`.

The `set_auto_parallel_context()` interface of context is changed from `set_auto_parallel_context(parallel_mode=AUTO_PARALLEL, auto_parallel_search_mode="dynamic_programming")` to `set_auto_parallel_context(parallel_mode=AUTO_PARALLEL, search_mode="dynamic_programming")`. Both changes are sketched below.
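
A minimal sketch of both signature changes; the MatMul strategy shown is illustrative:

```python
from mindspore import context, ops

# shard() now takes keyword arguments in_strategy / out_strategy
# instead of a single positional strategy.
matmul = ops.MatMul().shard(in_strategy=((2, 1), (1, 2)))

# auto_parallel_search_mode has been renamed to search_mode.
context.set_auto_parallel_context(parallel_mode="auto_parallel",
                                  search_mode="dynamic_programming")
```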

Collect Data and Create Landscape

Python API

The parameter `collect_specified_data` of the `mindspore.train.callback.SummaryCollector` interface adds a new option `collect_landscape` ([!26229](https://gitee.com/mindspore/mindspore/pulls/26229))

`collect_landscape` collects the parameters needed to create the loss landscape. See the updated API in [mindspore.train.callback.SummaryCollector](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.SummaryCollector.html#mindspore.SummaryCollector).

`mindspore.train.callback` adds a new interface `SummaryLandscape` ([!26229](https://gitee.com/mindspore/mindspore/pulls/26229))

`SummaryLandscape` helps you collect loss landscape information. It can create a landscape in the PCA direction or a random direction by calculating the loss. See the updated API in [mindspore.train.callback.SummaryLandscape](https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.SummaryLandscape.html#mindspore.SummaryLandscape). A minimal configuration sketch follows.
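
A minimal sketch of enabling landscape collection; the inner keys of `collect_landscape` are assumptions here and should be checked against the SummaryCollector API page linked above, and "./summary" is a placeholder directory:

```python
from mindspore.train.callback import SummaryCollector

collector = SummaryCollector(
    summary_dir="./summary",  # placeholder output directory
    collect_specified_data={
        # Assumed option layout; see the SummaryCollector API page for the
        # full set of supported collect_landscape keys.
        "collect_landscape": {"intervals": [[1, 2, 3]], "num_samples": 128},
    },
)
```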

Bug fixes

Executor

- Fix process hanging while calling MPI_comm_create in the asymmetric pipeline split scenario. ([!28707](https://gitee.com/mindspore/mindspore/pulls/28707))
- Fix the execution error when weights are shared between graph mode and PyNative mode. ([!26635](https://gitee.com/mindspore/mindspore/pulls/26635))
- Fix the probabilistic core dump when freeing memory in PyNative mode. ([!25472](https://gitee.com/mindspore/mindspore/pulls/25472))

Dataset

- Fix abnormal memory growth when running a dataset for a long time. ([!26237](https://gitee.com/mindspore/mindspore/pulls/26237))
- Fix saving MindRecord files with Chinese path on Windows. ([!28378](https://gitee.com/mindspore/mindspore/pulls/28378))

MindSpore Lite

Major Features and Improvements

Converter and runtime

- [STABLE] Add more fusion patterns in the converter tool to improve runtime performance.
- [STABLE] Support taking OpenGL textures as the input and output of inference.
- [STABLE] Refactor the JAVA API.
- [BETA] Support inference on Ascend310.

x86 backend optimization

- [STABLE] Optimize kernels for x86 using Advanced Vector Extensions (AVX512).

ARM backend optimization

- [STABLE] Support heterogeneous parallel inference, including splitting operators, constructing heterogeneous subgraphs, and heterogeneous parallel scheduling between CPUs and GPUs.
- [STABLE] Add more FP16 operators.

Post quantization

- [STABLE] Post quantization supports debugging.
- [STABLE] Full quantization supports choosing non-quantized nodes.
- [STABLE] Mixed bit quantization supports auto-tune.

Training on Device

- [STABLE] Support user-defined algorithm models to access the federated learning framework.

Contributors

Thanks goes to these wonderful people:

AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, [wangnan39huawei.com](mailto:wangnan39huawei.com), wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, [zhanghaibo5huawei.com](mailto:zhanghaibo5huawei.com), zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.

Contributions of any kind are welcome!

1.5.2

Bug fixes

- Fix code specification, pclint, and codedex alarms.
- Fix abnormal NN output of the GraphNorm operator.
- Fix poor performance in dynamic RnnGrad scenarios with a 16x batch size.

Contributors

Thanks goes to these wonderful people:

Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.

Contributions of any kind are welcome!

1.5.1

Bug fixes

- Fix code specification, pclint, and codedex alarms.
- Fix a probabilistic segmentation error in the YOLOv4 network.

Contributors

Thanks goes to these wonderful people:

Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.

Contributions of any kind are welcome!

1.5.0

Major Features and Improvements

NewModels

- [STABLE] Add CV model on Ascend: Fast-SCNN
- [BETA] Add CV models on Ascend: midas_V2, attgan, FairMOT, CenterNet_resnet101, SEResNext, YOLOV3-tiny, RetinaFace
- [STABLE] Add CV models on GPU: ssd_mobilenetv1_fpn, shufflenetv1, tinyDarkNet, CNN-CTC, unet++, DeepText, SqueezeNet
- [STABLE] Add NLP models on GPU: GRU, GNMT2, Bert-Squad
- [STABLE] Add recommend models on GPU: NCF
- [BETA] Add CV models on GPU: FaceAttribute, FaceDetection, FaceRecognition, SENet.
- [BETA] Add Audio models on GPU: DeepSpeech2.
- [STABLE] `model_zoo` has been separated into an individual repository `models`.

FrontEnd

- [STABLE] Support `while`, `break`, and `continue` statements in training networks in `GRAPH_MODE`.
- [BETA] Support exporting a MindIR file after model training on the cloud side, and evaluating on the edge side by importing the MindIR file.
- [STABLE] Support the forward-mode auto-diff interface Jvp (Jacobian-vector product).
- [STABLE] Support the backward-mode auto-diff interface Vjp (vector-Jacobian product).

Auto Parallel

- [STABLE] Support distributed pipeline inference.
- [STABLE] Add implementation of the sparse attention and its distributed operator.
- [STABLE] Add implementations of distributed operator of Conv2d/Conv2dTranspose/Conv2dBackpropInput/Maxpool/Avgpool/Batchnorm/Gatherd.
- [STABLE] Support configuring the dataset strategy on distributed training and inference mode.
- [STABLE] Add high level API of the Transformer module.

Executor

- [STABLE] Support AlltoAll operator.
- [STABLE] Improve CPU operator (Adam) performance by 50%.
- [BETA] Support Adam offload feature, reduce the static memory usage of Pangu large model by 50%.
- [STABLE] The MindSpore Ascend backend supports configuring the cache path for operator generation and loading.
- [STABLE] The MindSpore Ascend backend supports lazy build in PyNative mode, improving compilation performance by 10 times.
- [STABLE] Functions or Cells decorated by ms_function support gradient calculation in PyNative mode.
- [STABLE] The outermost network supports non-tensor-type parameters in PyNative mode.

DataSet

- [BETA] Add a new method to the Model class to support automatic data preprocessing in the Ascend 310 inference scenario.
- [STABLE] Add a new drawing tool to visualize detection/segmentation datasets.
- [STABLE] Support a new tensor operation named ConvertColor to support color space transform of images.
- [STABLE] Enhance the following tensor operations to handle multiple columns simultaneously: RandomCrop, RandomHorizontalFlip, RandomResize, RandomResizedCrop, RandomVerticalFlip.
- [STABLE] Support electromagnetic simulation dataset loading and data augmentation.
- [STABLE] Optimize the error logs of Dataset to make them more friendly to users.

Federated Learning

- [STABLE] Change the deployment environment of FL-Client.

Running Data Recorder

- [STABLE] RDR saves collected data files within directories named by Rank ID on distributed training on Ascend, GPU and CPU.

GraphKernel Fusion

API Change

Backwards Incompatible Change

Python API

New Recomputation Configuration for AutoParallel and SemiAutoParallel Scenarios

Recomputation of the communication operations generated by model parallelism and optimizer parallelism can be configured to save memory on the devices. Users can pass `mp_comm_recompute` and `parallel_optimizer_comm_recompute` to enable recomputation of these communication operations, as sketched below.
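
A minimal sketch, assuming `net` is an already-constructed `Cell`; the flag names come from the paragraph above:

```python
# net is an assumed, already-constructed nn.Cell.
# Recompute the communication operators introduced by model parallelism and
# optimizer parallelism during the backward pass, trading compute for memory.
net.recompute(mp_comm_recompute=True, parallel_optimizer_comm_recompute=True)
```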

Bug fixes

FrontEnd

- Fix the bug of too many subgraphs when a network includes a `for` statement. ([!23669](https://gitee.com/mindspore/mindspore/pulls/23669))

Executor

- Fix RunTask failure when parameter_broadcast is enabled in PyNative mode. ([!23255](https://gitee.com/mindspore/mindspore/pulls/23255))
- Fix an illegal memory access in dynamic shape networks on GPU.
- Fix tune failure for DynamicRnn. ([!21081](https://gitee.com/mindspore/mindspore/pulls/21081))

Dataset

- Optimize thread monitoring to solve the problem of running multiple multiprocessing jobs on Windows. ([!23232](https://gitee.com/mindspore/mindspore/pulls/23232))
- Fix bugs of Dataset tensor operations in lite mode. ([!21999](https://gitee.com/mindspore/mindspore/pulls/21999))
- Fix memory growth when using create_dict_iterator in a for loop. ([!22529](https://gitee.com/mindspore/mindspore/pulls/22529))

MindSpore Lite

Major Features and Improvements

Converter and runtime

1. Optimize TDNN-like streaming models by reusing the result of the last inference.
2. Support dynamic filter convolution.
3. Support serializing float32 weights into float16 weights to reduce the model file size.
4. Provide a unified runtime API so that developers can reuse their code between the cloud side and the device side.
5. Developers can now configure built-in passes as custom passes.
6. Users can now specify the format and shape of model inputs while converting a model.
7. Support multi-device inference, including CPU, NPU, and GPU. Users can set devices in mindspore::Context.
8. Support mixed-precision inference. Users can set the inference precision via the LoadConfig API.
9. Support custom operator registration and enable inference on third-party hardware.

ARM backend optimization

1. Support the NCHW data format for some operators, such as Conv and InstanceNorm. The performance of some models converted from ONNX and Caffe is greatly improved.
2. Fix memory leak bugs on NPU.

Post quantization

1. Weight quantization supports mixed-bit quantization.
2. Full quantization supports data pre-processing.
3. Quantization parameters are moved from the command line to the configuration file.

Training on Device

1. Unify the Lite external API with MindSpore.
2. Implement a static memory allocator and common workspace for TOD, saving 10-20% of memory.
3. Provide getgradients and setgradients interfaces, plus interfaces to get and set optimizer parameters, to support MoE models.
4. Support user-specified output nodes when exporting an IOD model.
5. Support more text networks (TinyBERT, ALBERT) and operators.

Codegen

1. Support kernel registration for custom operators. Third-party hardware such as NNIE can be accessed through it.

API Change

API Incompatible Change

C++ API

Contributors

Thanks goes to these wonderful people:

Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, , Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangnan39huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.

Contributions of any kind are welcome!
