MCT Quantizers

Latest version: v1.5.2


1.5.1

What's Changed
* Updated requirements file to restrict numpy version to below 2.0.

1.5.0

What's Changed
* TensorFlow 2.15 Support: MCT Quantizers now offers full compatibility with TensorFlow version 2.15.
* Added **metadata** to TensorFlow/PyTorch/ONNX models: use the `add_metadata` and `get_metadata` APIs to access it. The metadata is a dictionary, for example: `{'mctq_version': '1.5.0'}`. Note that in TensorFlow the metadata is also saved in a special layer, `MetadataLayer`.
* Added support for quantizing constants with [KerasQuantizationWrapper](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/keras/quantize_wrapper.py) and [PytorchQuantizationWrapper](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/pytorch/quantize_wrapper.py).
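The metadata feature above is conceptually simple: a plain dictionary attached to the model. A minimal framework-free sketch of the idea (the `add_metadata`/`get_metadata` names come from these release notes, but the `Model` stub and function bodies here are illustrative, not the library's actual implementation):

```python
class Model:
    """Stand-in for a framework model object (illustrative only)."""
    def __init__(self):
        self.metadata = None

def add_metadata(model, metadata):
    # Attach a plain dict of metadata (e.g. the mct_quantizers version).
    if not isinstance(metadata, dict):
        raise TypeError("metadata must be a dict")
    model.metadata = metadata
    return model

def get_metadata(model):
    # Retrieve the metadata dict attached earlier (or None if absent).
    return model.metadata

model = add_metadata(Model(), {'mctq_version': '1.5.0'})
print(get_metadata(model))  # {'mctq_version': '1.5.0'}
```

In the real library the dictionary additionally survives serialization (in TensorFlow via the `MetadataLayer` mentioned above), which a plain attribute like this would not.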

Breaking Changes
* Removed `KMEANS` from [QuantizationMethod](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/quant_info.py#L19).

Contributors
ofirgo, reuvenperetz, Chizkiyahu & elad-c

**Full Changelog**: https://github.com/sony/mct_quantizers/compare/v1.4.0...v1.5.0

1.4.0

What's Changed
- Updated Installation Instructions: The README has been updated to provide clearer guidelines for installing the latest stable release, nightly builds, and building from source.
- Added support for negative channel_axis values in Keras weight quantizers.

Fixes:
- Fixed an issue where quantization parameters such as `input_rank` and `channel_axis` were missing in scenarios where they were irrelevant, such as per-tensor quantization.
- KerasQuantizationWrapper fixes:
  - Fixed an issue where `_trainable_weights` and `_non_trainable_weights` were not handled correctly, caused by the removal of weights in some cases.
  - Removed `convert_to_inferable_quantizers` from `KerasQuantizationWrapper`.
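The two parameter changes above (negative `channel_axis` support and parameters being irrelevant under per-tensor quantization) both reduce to axis resolution. A minimal sketch, assuming the usual Python convention of resolving a negative axis against the tensor rank; the function name and signature are illustrative, not the library's API:

```python
def resolve_channel_axis(channel_axis, input_rank, per_channel):
    """Resolve a possibly-negative channel axis; return None for per-tensor."""
    if not per_channel:
        # Per-tensor quantization: channel_axis and input_rank are irrelevant.
        return None
    if channel_axis < 0:
        channel_axis += input_rank  # e.g. -1 with rank 4 -> 3
    if not 0 <= channel_axis < input_rank:
        raise ValueError(f"channel_axis out of range for rank {input_rank}")
    return channel_axis

print(resolve_channel_axis(-1, 4, per_channel=True))   # 3
print(resolve_channel_axis(2, 4, per_channel=False))   # None
```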

1.3.0

What's Changed
* TensorFlow 2.13 and `.keras` File Format Support: MCT Quantizers now offers full compatibility with TensorFlow version 2.13 and the `.keras` file format.
* MCT Quantizers Version Tracking: with this release, you can easily access the `mctq_version` property within each KerasQuantizationWrapper and KerasActivationQuantizationHolder.
* ONNX Model Export Support: models can now be exported to the ONNX format. Note that this feature is experimental and subject to future changes.

Breaking Changes
* LUT Quantizer Argument Improvements: We've refined the argument names for the Look-Up Table (LUT) quantizer.

Contributors
ofirgo, reuvenperetz, Chizkiyahu, eladc-git & lior-dikstein

New Contributors
* lior-dikstein made their first contribution in https://github.com/sony/mct_quantizers/pull/60

**Full Changelog**: https://github.com/sony/mct_quantizers/compare/v1.2.0...v1.3.0

1.2.0

What's Changed

* MCT Quantizers now officially supports TensorFlow version 2.12.
* Replaced `quantizer_type` property with `identity` property in [`mark_quantizer`](https://github.com/sony/mct_quantizers/blob/aebf9162a636fb3e2fc7d2d472fe5b4762db5376/mct_quantizers/common/base_inferable_quantizer.py#L30).
* New unique identifier [`QuantizerID.INFERABLE`](https://github.com/sony/mct_quantizers/blob/aebf9162a636fb3e2fc7d2d472fe5b4762db5376/mct_quantizers/common/base_inferable_quantizer.py#L26C12-L26C12) for inferable quantizers.
* This allows retrieving inferable quantizers via the [`get_inferable_quantizer_class`](https://github.com/sony/mct_quantizers/blob/main/mct_quantizers/common/get_quantizers.py) API without conflicting with other quantizers that may be defined in an external (user) environment.
* All dependencies in the tensorflow-model-optimization package have been removed.

Contributors
ofirgo

New Contributors
* Chizkiyahu made their first contribution in https://github.com/sony/mct_quantizers/pull/34

**Full Changelog**: https://github.com/sony/mct_quantizers/compare/v1.1.0...v1.2.0

1.1.0

This is the first released version of this project.

Introduction
MCT Quantizers is an open-source library developed by researchers and engineers at Sony Semiconductor Israel. The library provides tools for easily representing a quantized neural network in both Keras and PyTorch. It offers a set of useful quantizers and a simple interface for implementing new custom quantizers.

Main API and Features

- **Quantization Wrapper:** A framework-specific object that takes a layer with weights and a set of weight quantizers to infer a quantized layer.

- **Activation Quantization Holder:** A framework-specific object that holds an activation quantizer to be used during inference.

- **Quantizers:** The library provides the "Inferable Quantizer" interface for implementing new quantizers. This interface is based on the `BaseInferableQuantizer` class, which allows the definition of quantizers used for emulating inference-time quantization. A set of framework-specific quantizers for both weights and activations are defined on top of `BaseInferableQuantizer`.

- **Mark Quantizer Decorator:** The `mark_quantizer` decorator is used to assign each quantizer with static properties that define its task compatibility. Each quantizer class should be decorated with this decorator, which defines properties like `QuantizationTarget`, `QuantizationMethod`, and `quantizer_type`.

- **Load quantized model:** A framework-specific API that allows loading a model that was quantized using the library's quantization interface.
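The pieces above fit together in a small, framework-free sketch: a base quantizer interface, a `mark_quantizer`-style decorator that attaches static properties to the class, and a wrapper that quantizes a layer's weights at call time. This is a toy re-implementation of the pattern, not the library's actual classes; the symmetric quantizer, the string method/identifier values, and the one-line dense layer are all illustrative:

```python
from enum import Enum

class QuantizationTarget(Enum):
    Weights = 0
    Activation = 1

def mark_quantizer(quantization_target, quantization_method, identifier):
    """Decorator attaching static task-compatibility properties to a quantizer class."""
    def decorate(cls):
        cls.quantization_target = quantization_target
        cls.quantization_method = quantization_method
        cls.identifier = identifier
        return cls
    return decorate

class BaseInferableQuantizer:
    """Interface for quantizers that emulate inference-time quantization."""
    def __call__(self, values):
        raise NotImplementedError

@mark_quantizer(QuantizationTarget.Weights, 'symmetric', 'toy')
class SymmetricWeightQuantizer(BaseInferableQuantizer):
    def __init__(self, num_bits, threshold):
        self.num_bits = num_bits
        self.threshold = threshold

    def __call__(self, values):
        # Fake-quantize: round onto the symmetric integer grid, then rescale.
        qmax = 2 ** (self.num_bits - 1) - 1
        scale = self.threshold / qmax
        return [max(-qmax, min(qmax, round(v / scale))) * scale for v in values]

class QuantizationWrapper:
    """Holds weights and a weight quantizer; quantizes the weights at call time."""
    def __init__(self, weights, weight_quantizer):
        self.weights = weights
        self.weight_quantizer = weight_quantizer

    def __call__(self, x):
        qw = self.weight_quantizer(self.weights)
        return sum(w * xi for w, xi in zip(qw, x))  # toy dense layer

layer = QuantizationWrapper([0.5, -0.26], SymmetricWeightQuantizer(8, 1.0))
print(layer([1.0, 1.0]))  # ≈ 0.2441 (31/127)
```

An activation quantization holder follows the same shape as the wrapper, except it applies its quantizer to the layer's *output* rather than its weights.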

Getting Started
To get started, you can install MCT Quantizers either from the source or from PyPi. See the repository's [README file](https://github.com/sony/mct_quantizers/blob/main/README.md) for more details.

Contributors
reuvenperetz, eladc-git, lior-dikstein, elad-c, haihabi, lapid92, Idan-BenAmi, ofirgo

Thank you for your interest in our project. We look forward to your contributions and feedback.

**Full Changelog**: https://github.com/sony/mct_quantizers/commits/v1.1.0
