Coremltools

Latest version: v8.2


7.0

* New submodule [`coremltools.optimize`](https://coremltools.readme.io/v7.0/docs/optimizing-models) for model quantization and compression
  * `coremltools.optimize.coreml` for compressing Core ML models in a data-free manner. The `coremltools.compression_utils.*` APIs have been moved here.
  * `coremltools.optimize.torch` for compressing torch models with training data and fine-tuning. The fine-tuned torch model can then be converted using `coremltools.convert`.
* The default neural network backend is now `mlprogram` for iOS15/macOS12. Previously, calling `coremltools.convert()` without the `convert_to` or `minimum_deployment_target` arguments used the lowest deployment target (iOS11/macOS10.13) and the `neuralnetwork` backend. Conversion now defaults to iOS15/macOS12 and the `mlprogram` backend. You can change this behavior by providing a `minimum_deployment_target` or `convert_to` value.
* Python 3.11 support.
* Support for new PyTorch ops: `repeat_interleave`, `unflatten`, `col2im`, `view_as_real`, `rand`, `logical_not`, `fliplr`, `quantized_matmul`, `randn`, `randn_like`, `scaled_dot_product_attention`, `stft`, `tile`
* `pass_pipeline` parameter has been added to `coremltools.convert` to allow control over which optimizations are performed.
* MLModel batch prediction support.
* Support for converting statically quantized PyTorch models.
* Prediction from compiled model (`.modelc` files). Get compiled model files from an `MLModel` instance. Python API to explicitly compile a model.
* Faster weight palettization for large tensors.
* New utility method for getting weight metadata: `coremltools.optimize.coreml.get_weights_metadata`. This information can be used to customize optimization across ops when using `coremltools.optimize.coreml` APIs.
* New and updated MIL ops for iOS17/macOS14/watchOS10/tvOS17
* `coremltools.compression_utils` is deprecated.
* Changes default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when `mlprogram` backend is used.
* Changes upper input range behavior when backend is `mlprogram`:
  * If `RangeDim` is used and no upper bound is set (with a positive number), an exception will be raised.
  * If the `inputs` parameter is not used but there are undetermined dims in the input shape (for example, TF with "None" in an input placeholder), they will be sanitized to a finite number (default_size + 1) with a warning.
* Various other bug fixes, enhancements, cleanups, and optimizations.

Special thanks to our external contributors for this release: fukatani, pcuenca, KWiecko, comeweber, sercand, mlaves, cclauss, smpanaro, nikalra, jszaday

7.0b2

* The default neural network backend is now `mlprogram` for iOS15/macOS12. Previously, calling `coremltools.convert()` without the `convert_to` or `minimum_deployment_target` arguments used the lowest deployment target (iOS11/macOS10.13) and the `neuralnetwork` backend. Conversion now defaults to iOS15/macOS12 and the `mlprogram` backend. You can change this behavior by providing a `minimum_deployment_target` or `convert_to` value.
* Changes default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when `mlprogram` backend is used.
* Changes upper input range behavior when backend is `mlprogram`:
  * If `RangeDim` is used and no upper bound is set (with a positive number), an exception will be raised.
  * If the `inputs` parameter is not used but there are undetermined dims in the input shape (for example, TF with "None" in an input placeholder), they will be sanitized to a finite number (default_size + 1) with a warning.
* New utility method for getting weight metadata: `coremltools.optimize.coreml.get_weights_metadata`. This information can be used to customize optimization across ops when using `coremltools.optimize.coreml` APIs.
* Support for new PyTorch ops: `repeat_interleave` and `unflatten`.
* New and updated iOS17/macOS14 ops: `batch_norm`, `conv`, `conv_transpose`, `expand_dims`, `gru`, `instance_norm`, `inverse`, `l2_norm`, `layer_norm`, `linear`, `local_response_norm`, `log`, `lstm`, `matmul`, `reshape_like`, `resample`, `resize`, `reverse`, `reverse_sequence`, `rnn`, `rsqrt`, `slice_by_index`, `slice_by_size`, `sliding_windows`, `squeeze`, `transpose`.
* Various other bug fixes, enhancements, cleanups, and optimizations.


Special thanks to our external contributors for this release: fukatani, pcuenca, KWiecko, comeweber and sercand

7.0b1

* New submodule [`coremltools.optimize`](https://coremltools.readme.io/v7.0/docs/optimizing-models) for model quantization and compression
  * `coremltools.optimize.coreml` for compressing Core ML models in a data-free manner. The `coremltools.compression_utils.*` APIs have been moved here.
  * `coremltools.optimize.torch` for compressing torch models with training data and fine-tuning. The fine-tuned torch model can then be converted using `coremltools.convert`.
* Updated MIL ops for iOS17/macOS14/watchOS10/tvOS17
* `pass_pipeline` parameter has been added to `coremltools.convert` to allow control over which optimizations are performed.
* Python 3.11 support.
* MLModel batch prediction support.
* Support for converting statically quantized PyTorch models
* New Torch layer support: `randn`, `randn_like`, `scaled_dot_product_attention`, `stft`, `tile`
* Faster weight palettization for large tensors.
* `coremltools.models.ml_program.compression_utils` is deprecated.
* Various other bug fixes, enhancements, cleanups, and optimizations.
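`scaled_dot_product_attention`, one of the newly supported layers, computes standard softmax attention. A quick PyTorch sketch of its semantics (assuming `torch` >= 2.0; the tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 2, 4, 8)  # (batch, heads, query_len, head_dim)
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)

out = F.scaled_dot_product_attention(q, k, v)

# Reference computation: softmax(QK^T / sqrt(d)) V
scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
manual = torch.softmax(scores, dim=-1) @ v
assert torch.allclose(out, manual, atol=1e-5)
```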

Core ML tools 7.0 guide: https://coremltools.readme.io/v7.0/

Special thanks to our external contributors for this release: fukatani, pcuenca, mlaves, cclauss, smpanaro, nikalra, jszaday

6.3

Core ML Tools 6.3 Release Note

* Torch 2.0 Support
* TensorFlow 2.12.0 Support
* Remove Python 3.6 support
* Functionality for controlling graph passes/optimizations; see the `pass_pipeline` parameter of `coremltools.convert`.
* A utility function for easily creating pipelines; see `utils.make_pipeline`.
* A debug utility function for extracting submodels; see `converters.mil.debugging_utils.extract_submodel`.
* Various other bug fixes, enhancements, cleanups, and optimizations.


Special thanks to our external contributors for this release: fukatani, nikalra and kevin-keraudren.

6.2

Core ML Tools 6.2 Release Note

* Support new PyTorch version: `torch==1.13.1` and `torchvision==0.14.1`.
* New ops support:
  * New PyTorch ops: 1-D and N-D FFT / RFFT / IFFT / IRFFT in `torch.fft`, `torchvision.ops.nms`, `torch.atan2`, `torch.bitwise_and`, `torch.numel`.
  * New TensorFlow ops: FFT / RFFT / IFFT / IRFFT in `tf.signal`, `tf.tensor_scatter_nd_add`.
* Existing ops improvements:
  * Supports int input for the `clamp` op.
  * Supports dynamic `topk` (k not known at compile time).
  * Supports `padding='valid'` in PyTorch convolution.
  * Supports PyTorch adaptive pooling.
* Supports numpy v1.24.0 (1718)
* Adds int8 affine quantization to `compression_utils`.
* Various other bug fixes, optimizations and improvements.
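The FFT-family ops now supported for conversion follow the usual real-FFT conventions. A small NumPy sketch of the RFFT/IRFFT round trip (NumPy's `np.fft` matches the `torch.fft` semantics here):

```python
import numpy as np

x = np.random.rand(8)
spec = np.fft.rfft(x)            # real FFT: n // 2 + 1 complex bins
assert spec.shape == (5,)
recon = np.fft.irfft(spec, n=8)  # inverse restores the original signal
assert np.allclose(recon, x)
```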

Special thanks to our external contributors for this release: fukatani, ChinChangYang, danvargg, bhushan23 and cjblocker.

6.1

* Support for TensorFlow `2.10`.
* New PyTorch ops supported: `baddbmm`, `glu`, `hstack`, `remainder`, `weight_norm`, `hann_window`, `randint`, `cross`, `trace`, and `reshape_as`.
* Avoid root logger and use the coremltools logger instead.
* Support dynamic input shapes for PyTorch `repeat` and `expand` op.
* Enhance translation of torch `where` op with only one input.
* Add support for PyTorch einsum equation: `"bhcq,bhck->bhqk"`.
* Optimization graph pass improvements:
  * 3D convolution batchnorm fusion
  * Consecutive relu fusion
  * Noop elimination
* Actively catch tensors with rank >= 6 and raise an error.
* Various other bug fixes, optimizations and improvements.
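The supported einsum equation contracts the shared `c` axis of the two operands. Its semantics in NumPy, which matches the PyTorch behavior (shapes are illustrative):

```python
import numpy as np

a = np.random.rand(2, 3, 5, 7)  # (b, h, c, q)
b = np.random.rand(2, 3, 5, 4)  # (b, h, c, k)

out = np.einsum("bhcq,bhck->bhqk", a, b)
assert out.shape == (2, 3, 7, 4)  # c is summed out
```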

Special thanks to our external contributors for this release: fukatani, piraka9011, giorgiop, hollance, SangamSwadiK, RobertRiachi, waylybaye, GaganNarula, and sunnypurewal.
