DeepSparse

Latest version: v1.8.0


1.5.1

This is a patch release for 1.5.0 that contains the following changes:
- The latest `datasets` versions supported by the 1.5 transformers integration are incompatible with pandas 2.0; pandas is therefore restricted to < 2.0. Future releases will support later `datasets` versions. (1074)

1.5.0

New Features:
* ONNX evaluation pipeline for OpenPifPaf (915)
* YOLOv8 segmentation pipelines and validation (924)
* `deepsparse.benchmark_sweep` CLI to enable sweeps of benchmarks across different settings such as cores and batch sizes (860)
* `Engine.generate_random_inputs()` API (966)
* Example data logging configurations for pipelines/server (867)
* Expanded built-in functions for NLP and CV pipeline logging to enable better monitoring (865) (862)
* Product usage analytics tracking in DeepSparse Community edition ([documentation](https://docs.neuralmagic.com/products/deepsparse/community#product-usage-analytics))

Performance Improvements:
* Inference latency for unstructured sparse-quantized CNNs has been improved by up to 2x.
* Inference throughput and latency for dense CNNs has been improved by up to 20%.
* Inference throughput and latency for dense transformers has been improved by up to 30%.
* The following operators are now supported for performance:
  * Neg, Unsqueeze with non-constant inputs
  * MatMulInteger with two non-constant inputs
  * GEMM with constant weights and 4D or 5D inputs

Changes:
* Transformers and YOLOv5 integrations migrated from auto install to install from PyPI packages. Going forward, `pip install deepsparse[transformers]` and `pip install deepsparse[yolov5]` will need to be used.
* DeepSparse now uses hwloc to determine CPU topology. This fixes a bug where DeepSparse could not be used performantly inside of a Kubernetes cluster with a static CPU manager policy.
* When users pass in a `num_streams` parameter that is smaller than the number of cores, multi-stream and elastic scheduler behaviors have been improved. Previously, DeepSparse would divide the system into `num_streams` chunks and fill each chunk until it ran out of threads. Now, each stream will use a number of threads equal to `num_cores` divided by `num_streams`, with the remainder distributed in a round-robin fashion.
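The new thread-distribution policy described above can be sketched in a few lines of plain Python (an illustration of the stated behavior, not DeepSparse internals):

```python
def threads_per_stream(num_cores: int, num_streams: int) -> list[int]:
    # Each stream gets num_cores // num_streams threads; the remainder
    # is distributed round-robin, one extra thread per stream.
    base, rem = divmod(num_cores, num_streams)
    return [base + (1 if i < rem else 0) for i in range(num_streams)]

# With 10 cores and 4 streams, the first two streams absorb the remainder:
print(threads_per_stream(10, 4))  # [3, 3, 2, 2]
```

Under the old behavior, the system was instead divided into `num_streams` chunks that were filled until threads ran out, which could leave some streams with no threads at all.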

Resolved Issues:

* In networks with a Clip operator whose min is not equal to zero, a performance bug no longer occurs.

* Crashes eliminated:
  * Pipeline CoNLL eval using `ignore_labels`. (903)
  * YOLOv8 pipelines handling models with dynamic inputs. (967)
  * QA pipelines with sequence lengths equal to or less than 128. (889)
  * Image classification pipelines handling PNG images. (870)
  * ONNX overriding of shapes if a list was not passed in; this now automatically wraps in a list. (914)

* Assertion errors/failures removed:
  * Networks with both Convolution and GEMM operations.
  * YOLOv8 model compilation.
  * Slice and Unsqueeze operators with a negative axis.
  * OPT models involving a constant tensor that is broadcast in two different ways.

Known Issues:
* None

1.4.2

This is a patch release for 1.4.0 that contains the following changes:

- Fallback support provided for YOLOv5 models with dynamic input shapes (not a recommended pathway). (971)
- Loading of the system logging configuration is now fixed. (858)

1.4.1

This is a patch release for 1.4.0 that contains the following changes:

- Bounding boxes for YOLOv5 pipelines now scale correctly with the detection boxes. (881)

1.4.0

New Features:
* OpenPifPaf deployment pipelines support (788)
* VITPose example deployment pipeline (794)
* DeepSparse Server logging with support for metrics, timings, and input/output values through Prometheus (821, 791)

Changes:
* Inference speed improved by up to 20% on dense FP32 BERT models.
* Inference speed improved by up to 50% on quantized EfficientNetV1 and by up to 10% on quantized EfficientNetV2.
* YOLOv5 integration upgraded to the latest upstream.

Resolved Issues:
* DeepSparse no longer improperly detects each core as belonging to its own socket on some virtual machines, including those on OVHcloud.
* Running networks containing a quantized depthwise convolution with a nontrivial `w_zero_point` parameter no longer produces an assertion failure. (Trivial here means the zero point equals 128 for uint8 data, or 0 for int8 data.)
* An assertion failure at executable_buffer.cpp no longer occurs (see https://github.com/neuralmagic/deepsparse/issues/899).
* In quantized transformer models, a rare assertion failure no longer occurs.

Known Issues:
* None

1.3.2

This is a patch release for 1.3.0 that contains the following changes:
- Softmax operators from ONNX Opset 13 and later now behave correctly in DeepSparse. Previously, the semantics of Softmax from ONNX Opset 11 were applied, which would result in incorrect answers in some cases.
- Quantized YOLOv8 models are now supported in DeepSparse. Previously, the user would have encountered an assertion failure.
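The Softmax semantics change above can be illustrated in plain Python (an illustration of the ONNX spec difference, not DeepSparse code). In Opset 11, Softmax coerces its input to 2-D and normalizes over the entire flattened trailing dimensions; from Opset 13 onward it normalizes independently along a single axis:

```python
import math

def softmax_flat(xs):
    # numerically stable softmax over a flat list of floats
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

# one batch element of shape (2, 2)
x = [[1.0, 2.0], [3.0, 4.0]]

# Opset 11 semantics (axis=1): input is coerced to 2-D, so all four
# values are normalized together as one distribution.
old = softmax_flat([v for row in x for v in row])
print(sum(old))  # sums to 1.0 over all four values (up to float error)

# Opset 13 semantics (axis=-1): each innermost row is an independent
# 1-D softmax, so each row sums to 1.0 on its own.
new = [softmax_flat(row) for row in x]
print([sum(r) for r in new])
```

Applying the older semantics to a model exported with Opset 13 or later therefore normalizes over the wrong set of values, which is the class of incorrect answers this patch fixes.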

