coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.

BSD-3-Clause License

Downloads: 352.4K
Stars: 4.4K
Committers: 165


coremltools - coremltools 8.0

Published by junpeiz about 1 month ago

Release Notes

Compare to 7.2 (including features from 8.0b1 and 8.0b2)

  • Support for Latest Dependencies
    • Compatible with the latest protobuf Python package, which reduces serialization latency.
    • Support torch 2.4.0, numpy 2.0, scikit-learn 1.5.
  • Support stateful Core ML models
    • Updates to the converter to produce Core ML models with the State Type (new type introduced in iOS18/macOS15).
    • Adds a toy stateful attention example model to show how to use an in-place KV cache.
  • Increase conversion support coverage for models produced by torch.export
    • Op translation support is at 56% parity with our mature torch.jit.trace converter
    • Representative deep learning models (mobilebert, deeplab, edsr, mobilenet, vit, inception, resnet, wav2letter, emformer) are now supported
    • Representative foundation models (llama, stable diffusion) are now supported
    • Models quantized by ct.optimize.torch can be exported by torch.export and then converted.
  • New Compression Features
    • coremltools.optimize
      • Support compression with more granularities: blockwise quantization, grouped channel-wise palettization
      • 4-bit weight quantization and 3-bit palettization
      • Support joint compression modes (8-bit look-up tables for palettization, pruning + quantization/palettization)
      • Vector palettization by setting cluster_dim > 1, and palettization with per-channel scale by setting enable_per_channel_scale=True (see the sketch after this list).
      • Experimental activation quantization (take a W16A16 Core ML model and produce a W8A8 model)
      • API updates for coremltools.optimize.coreml and coremltools.optimize.torch
    • Support some models quantized by torchao (including the ops produced by torchao such as _weight_int4pack_mm).
    • Support more ops in quantized_decomposed namespace, such as embedding_4bit, etc.
  • Support for new ops and bug fixes for existing ops
    • compression-related ops: constexpr_blockwise_shift_scale, constexpr_lut_to_dense, constexpr_sparse_to_dense, etc.
    • updates to the GRU op
    • SDPA op scaled_dot_product_attention
    • clip op
  • Updated the model loading API
    • Support optimizationHints.
    • Support loading specific functions for prediction.
  • New utilities in coremltools.utils
    • coremltools.utils.MultiFunctionDescriptor and coremltools.utils.save_multifunction, for creating an mlprogram with multiple functions that can share weights.
    • coremltools.models.utils.bisect_model can break a large Core ML model into two smaller models with similar sizes.
    • coremltools.models.utils.materialize_dynamic_shape_mlmodel can convert a flexible input shape model into a static input shape model.
  • Various other bug fixes, enhancements, clean ups and optimizations
  • Special thanks to our external contributors for this release: @sslcandoit @FL33TW00D @dpanshu @timsneath @kasper0406 @lamtrinhdev @valfrom @teelrabbit @igeni @Cyanosite
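The sketch referenced in the compression list above: a minimal example of post-training vector palettization with ct.optimize.coreml. The model path and the nbits/mode values here are assumptions for illustration, not a prescribed recipe.

import coremltools as ct
import coremltools.optimize as cto

mlmodel = ct.models.MLModel("model.mlpackage")  # placeholder path to an mlprogram model
op_config = cto.coreml.OpPalettizerConfig(
    mode="kmeans",
    nbits=4,                        # 4-bit LUT
    cluster_dim=2,                  # vector palettization: each LUT entry is a length-2 vector
    enable_per_channel_scale=True,  # apply per-channel scales before palettization
)
config = cto.coreml.OptimizationConfig(global_config=op_config)
compressed_mlmodel = cto.coreml.palettize_weights(mlmodel, config)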
coremltools - coremltools 8.0b2

Published by jakesabathia2 2 months ago

Release Notes

  • Support for Latest Dependencies
    • Compatible with the latest protobuf Python package, which reduces serialization latency.
    • Compatible with numpy 2.0.
    • Supports scikit-learn 1.5.
  • New Core ML model utils
    • coremltools.models.utils.bisect_model can break a large Core ML model into two smaller models with similar sizes (example sketch in the Appendix below).
    • coremltools.models.utils.materialize_dynamic_shape_mlmodel can convert a flexible input shape model into a static input shape model.
  • New compression features in coremltools.optimize.coreml
    • Vector palettization: By setting cluster_dim > 1 in coremltools.optimize.coreml.OpPalettizerConfig, you can perform vector palettization, where each entry in the lookup table is a vector of length cluster_dim.
    • Palettization of per channel scale: By setting enable_per_channel_scale=True in coremltools.optimize.coreml.OpPalettizerConfig, weights are normalized along the output channel using per channel scales before being palettized.
    • Joint compression: A new pattern is supported, where weights are first quantized to int8 and then palettized into n-bit look-up table with int8 entries.
    • Support conversion of palettized models with an 8-bit LUT produced by coremltools.optimize.torch.
  • New compression features / bug fixes in coremltools.optimize.torch
    • Added conversion support for Torch models jointly compressed using the training-time APIs in coremltools.optimize.torch.
    • Added vector palettization support to SKMPalettizer.
    • Fixed bug in construction of weight vectors along the output channel for vector palettization with PostTrainingPalettizer and DKMPalettizer.
    • Deprecated the cluster_dtype option in favor of lut_dtype in ModuleDKMPalettizerConfig.
    • Added support for quantizing ConvTranspose modules with PostTrainingQuantizer and LinearQuantizer.
    • Added static grouping for activation heuristic in GPTQ.
    • Fixed bug in how quantization scales are computed for Conv2D layers with per-block quantization in GPTQ.
    • Can now perform activation-only quantization with QAT APIs.
  • Experimental torch.export conversion support
    • Support conversion of stateful models with mutable buffer.
    • Support conversion of models with dynamic input shapes.
    • Support conversion of 4-bit weight compression models.
  • Support for the new torch op clip.
  • Various other bug fixes, enhancements, clean ups and optimizations.
  • Special thanks to our external contributors for this release: @dpanshu , @timsneath , @kasper0406 , @lamtrinhdev , @valfrom

Appendix

  • Example code for converting a stateful torch.export model
import torch
import coremltools as ct

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.register_buffer("state_1", torch.tensor([0.0, 0.0, 0.0]))

    def forward(self, x):
        # In place update of the model state
        self.state_1.mul_(x)
        return self.state_1 + 1.0

source_model = Model()
source_model.eval()

example_inputs = (torch.tensor([1.0, 2.0, 3.0]),)
exported_model = torch.export.export(source_model, example_inputs)
coreml_model = ct.convert(exported_model, minimum_deployment_target=ct.target.iOS18)
  • Example code for converting torch.export models with dynamic input shapes
import torch
import coremltools as ct

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(3, 5)

    def forward(self, x):
        y = self.linear(x)
        return y

source_model = Model()
source_model.eval()

example_inputs = (torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]),)
dynamic_shapes = {"x": {0: torch.export.Dim(name="batch_dim")}}
exported_model = torch.export.export(source_model, example_inputs, dynamic_shapes=dynamic_shapes)
coreml_model = ct.convert(exported_model)
  • Example code for converting a torch.export model with 4-bit weight compression
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
import coremltools as ct

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(3, 5)
    def forward(self, x):
        y = self.linear(x)
        return y

source_model = Model()
source_model.eval()

example_inputs = (torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]),)

pre_autograd_graph = capture_pre_autograd_graph(source_model, example_inputs)
quantization_config = get_symmetric_quantization_config(weight_qmin=-8, weight_qmax=7)  # 4-bit signed integer range
quantizer = XNNPACKQuantizer().set_global(quantization_config)
prepared_graph = prepare_pt2e(pre_autograd_graph, quantizer)
converted_graph = convert_pt2e(prepared_graph)

exported_model = torch.export.export(converted_graph, example_inputs)
coreml_model = ct.convert(exported_model, minimum_deployment_target=ct.target.iOS17)
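  • Example sketch of the new bisect_model utility; the model path and output directory below are placeholders, and the call follows the documented API
import coremltools as ct

model_path = "my_model.mlpackage"  # placeholder path to a large Core ML model
output_dir = "./output/"

# Break the model into two chunks of roughly equal size.
ct.models.utils.bisect_model(model_path, output_dir)

# Optionally merge the two chunks back into a single pipeline model.
ct.models.utils.bisect_model(model_path, output_dir, merge_chunks_to_pipeline=True)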
coremltools - coremltools 8.0b1

Published by YifanShenSZ 4 months ago

For all the new features, find the updated documentation in the docs-guides

  • New utilities coremltools.utils.MultiFunctionDescriptor() and coremltools.utils.save_multifunction, for creating an mlprogram with multiple functions that can share weights. Updated the model loading API to load specific functions for prediction (a sketch follows this list).
  • Stateful Core ML models: updates to the converter to produce Core ML models with the State Type (new type introduced in iOS18/macOS15).
  • coremltools.optimize
    • Updates to model representation (mlprogram) pertaining to compression:
      • Support compression with more granularities: blockwise quantization, grouped channel-wise palettization
      • 4-bit weight quantization (in addition to the 8-bit quantization that was already supported)
      • 3-bit palettization (in addition to the 1-, 2-, 4-, 6-, and 8-bit palettization that was already supported)
      • Support joint compression modes:
        • 8-bit look-up tables for palettization
        • ability to combine weight pruning and palettization
        • ability to combine weight pruning and quantization
    • API updates:
      • coremltools.optimize.coreml
        • Updated existing APIs to account for features mentioned above
        • Support joint compression by applying compression techniques on an already compressed model
        • A new API to support activation quantization using calibration data, which can be used to take a W16A16 Core ML model and produce a W8A8 model: ct.optimize.coreml.experimental.linear_quantize_activations
          • (to be upgraded from the experimental to the official namespace in a future release)
      • coremltools.optimize.torch
        • Updated existing APIs to account for features mentioned above
        • Added new APIs for data-free compression (PostTrainingPalettizer, PostTrainingQuantizer)
        • Added new APIs for calibration-data-based compression (SKMPalettizer for the sensitive k-means palettization algorithm, layerwise_compression for the GPTQ/SparseGPT quantization/pruning algorithms)
        • Updated the APIs and the coremltools.convert implementation so that, for converting torch models compressed with ct.optimize.torch, there is no longer a need to provide additional pass pipeline arguments.
  • iOS18 / macOS15 ops
    • compression-related ops: constexpr_blockwise_shift_scale, constexpr_lut_to_dense, constexpr_sparse_to_dense, etc.
    • updates to the GRU op
    • PyTorch op scaled_dot_product_attention
  • Experimental torch.export conversion support
import torch
import torchvision

import coremltools as ct

torch_model = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")

x = torch.rand((1, 3, 224, 224))
example_inputs = (x,)
exported_program = torch.export.export(torch_model, example_inputs)

coreml_model = ct.convert(exported_program)
  • Various other bug fixes, enhancements, clean ups and optimizations
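The sketch referenced above: a minimal example of the new multifunction utilities. The model paths and function names are placeholders for illustration.

import coremltools as ct
from coremltools.utils import MultiFunctionDescriptor, save_multifunction

desc = MultiFunctionDescriptor()
desc.add_function(
    "model_1.mlpackage",  # placeholder source model
    src_function_name="main",
    target_function_name="function_1",
)
desc.add_function(
    "model_2.mlpackage",  # placeholder source model
    src_function_name="main",
    target_function_name="function_2",
)
desc.default_function_name = "function_1"
save_multifunction(desc, "multifunction_model.mlpackage")

# Load a specific function for prediction.
mlmodel = ct.models.MLModel("multifunction_model.mlpackage", function_name="function_2")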

Known Issues

  • Conversion will fail when using certain palettization modes (e.g. int8 LUT, vector palettization) with torch models using ct.optimize.torch
  • Some of the joint compression modes when used with the training time APIs in ct.optimize.torch will result in a torch model that is not correctly converted
  • The post-training palettization config for mlpackage models (ct.optimize.coreml.OpPalettizerConfig) does not yet have all the arguments that are supported in the cto.torch.palettization APIs (e.g. lut_dtype to get an int8-dtyped LUT, cluster_dim to do vector palettization, enable_per_channel_scale to apply per-channel scale, etc.).
  • Applying symmetric quantization using GPTQ algorithm with ct.optimize.torch.layerwise_compression.LayerwiseCompressor will not produce the correct quantization scales, due to a known bug. This may lead to poor accuracy for the quantized model

Special thanks to our external contributors for this release: @teelrabbit @igeni @Cyanosite

coremltools - coremltools 7.2

Published by YifanShenSZ 6 months ago

  • New Features
    • Supports ExecuTorch 0.2 (see ExecuTorch doc for examples)
      • Core ML Partitioner: If a PyTorch model is partially supported with Core ML, then Core ML partitioner can determine the supported part and have ExecuTorch delegate to Core ML.
      • Core ML Quantizer: Quantize PyTorch models in a Core ML-favored scheme
  • Enhancements
    • Improved Model Conversion Speed
    • Expanded Operation Translation Coverage
      • add torch.narrow
      • add torch.adaptive_avg_pool1d and torch.adaptive_max_pool1d
      • add torch.numpy_t (i.e. the numpy-style transpose operator .T)
      • enhance torch.clamp_min for integer data type
      • enhance torch.add for complex data type
      • enhance tf.math.top_k when k is variable

Thanks to our ExecuTorch partners and our open-source community: @KrassCodes @M-Quadra @teelrabbit @minimalic @alealv @ChinChangYang @pcuenca

coremltools - coremltools 7.1

Published by DawerG 12 months ago

  • New Features:

    • Supports Torch 2.1
      • Includes experimental support for the torch.export API, limited to the EDGE dialect.

      • Example usage:

        •  import torch
           from torch.export import export
           from executorch.exir import to_edge
           
           import coremltools as ct
           
           # A minimal placeholder module for illustration; substitute your own nn.Module.
           class SimpleModule(torch.nn.Module):
               def forward(self, x):
                   return torch.relu(x)
           
           example_args = (torch.randn(1, 3, 256, 256),)
           aten_dialect = export(SimpleModule().eval(), example_args)
           edge_dialect = to_edge(aten_dialect).exported_program()
           edge_dialect._dialect = "EDGE"
           
           mlmodel = ct.convert(edge_dialect)
  • Enhancements:

    • API - ct.utils.make_pipeline - now allows specifying compute_units
    • New optimization passes:
      • Folds selective data movement ops like reshape, transpose into adjacent constant compressed weights
      • Casts int32 → int16 dtype for all intermediate tensors when compute precision is set to fp16
    • PyTorch op multinomial - added lowering to Core ML
    • Type related refinements on Pad and Gather/Gather-like ops
  • Bug Fixes:

    • Fixes coremltools build issue related to kmeans1d package
    • Minor fixes in lowering of PyTorch ops: masked_fill & randint
  • Various other bug fixes, enhancements, clean ups and optimizations.

coremltools - coremltools 7.0

Published by TobyRoseman about 1 year ago

  • New submodule coremltools.optimize for model quantization and compression
    • coremltools.optimize.coreml for compressing coreml models, in a data-free manner. coremltools.compression_utils.* APIs have been moved here
    • coremltools.optimize.torch for compressing torch model with training data and fine-tuning. The fine tuned torch model can then be converted using coremltools.convert
  • The default neural network backend is now mlprogram for iOS15/macOS12. Previously calling coremltools.convert() without providing the convert_to or the minimum_deployment_target arguments, used the lowest deployment target (iOS11/macOS10.13) and the neuralnetwork backend. Now the conversion process will default to iOS15/macOS12 and the mlprogram backend. You can change this behavior by providing a minimum_deployment_target or convert_to value.
  • Python 3.11 support.
  • Support for new PyTorch ops: repeat_interleave, unflatten, col2im, view_as_real, rand, logical_not, fliplr, quantized_matmul, randn, randn_like, scaled_dot_product_attention, stft, tile
  • pass_pipeline parameter has been added to coremltools.convert to allow control over which optimizations are performed (a sketch follows this list).
  • MLModel batch prediction support.
  • Support for converting statically quantized PyTorch models.
  • Prediction from compiled models (.mlmodelc files). Get compiled model files from an MLModel instance. Python API to explicitly compile a model.
  • Faster weight palettization for large tensors.
  • New utility method for getting weight metadata: coremltools.optimize.coreml.get_weights_metadata. This information can be used to customize optimization across ops when using coremltools.optimize.coreml APIs.
  • New and updated MIL ops for iOS17/macOS14/watchOS10/tvOS17
  • coremltools.compression_utils is deprecated.
  • Changes default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when mlprogram backend is used.
  • Changes upper input range behavior when backend is mlprogram:
    • If RangeDim is used and no upper bound is set (with a positive number), an exception will be raised.
    • If the user does not use the inputs parameter but there are undetermined dims in the input shape (for example, TF with "None" in the input placeholder), they will be sanitized to a finite number (default_size + 1) and a warning will be raised.
  • Various other bug fixes, enhancements, clean ups and optimizations.
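The sketch referenced above: a minimal example of the pass_pipeline parameter and compiled-model prediction. The source_model variable and the model paths are placeholders for illustration.

import coremltools as ct

# Convert while skipping all graph optimization passes.
mlmodel = ct.convert(source_model, pass_pipeline=ct.PassPipeline.EMPTY)

# Or start from the default pipeline and drop a specific named pass.
pipeline = ct.PassPipeline()
pipeline.remove_passes({"common::fuse_conv_batchnorm"})
mlmodel = ct.convert(source_model, pass_pipeline=pipeline)

# Predict directly from a compiled model (.mlmodelc).
compiled_model = ct.models.CompiledMLModel("model.mlmodelc")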

Special thanks to our external contributors for this release: @fukatani , @pcuenca , @KWiecko , @comeweber , @sercand , @mlaves, @cclauss, @smpanaro , @nikalra, @jszaday

coremltools - coremltools 7.0b2

Published by TobyRoseman about 1 year ago

  • The default neural network backend is now mlprogram for iOS15/macOS12. Previously calling coremltools.convert() without providing the convert_to or the minimum_deployment_target arguments, used the lowest deployment target (iOS11/macOS10.13) and the neuralnetwork backend. Now the conversion process will default to iOS15/macOS12 and the mlprogram backend. You can change this behavior by providing a minimum_deployment_target or convert_to value.
  • Changes default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when mlprogram backend is used.
  • Changes upper input range behavior when backend is mlprogram (a sketch follows this list):
    • If RangeDim is used and no upper bound is set (with a positive number), an exception will be raised.
    • If the user does not use the inputs parameter but there are undetermined dims in the input shape (for example, TF with "None" in the input placeholder), they will be sanitized to a finite number (default_size + 1) and a warning will be raised.
  • New utility method for getting weight metadata: coremltools.optimize.coreml.get_weights_metadata. This information can be used to customize optimization across ops when using coremltools.optimize.coreml APIs.
  • Support for new PyTorch ops: repeat_interleave and unflatten.
  • New and updated iOS17/macOS14 ops: batch_norm, conv, conv_transpose, expand_dims, gru, instance_norm, inverse, l2_norm, layer_norm, linear, local_response_norm, log, lstm, matmul, reshape_like, resample, resize, reverse, reverse_sequence, rnn, rsqrt, slice_by_index, slice_by_size, sliding_windows, squeeze, transpose.
  • Various other bug fixes, enhancements, clean ups and optimizations.
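The sketch referenced above: a minimal example of setting an explicit upper bound with RangeDim. The source_model variable and the shape values are placeholders for illustration.

import coremltools as ct

input_shape = ct.Shape(shape=(ct.RangeDim(lower_bound=1, upper_bound=128), 3, 224, 224))
mlmodel = ct.convert(
    source_model,  # placeholder torch/TF model
    inputs=[ct.TensorType(name="x", shape=input_shape)],
    convert_to="mlprogram",
)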

Special thanks to our external contributors for this release: @fukatani, @pcuenca, @KWiecko, @comeweber and @sercand

coremltools - coremltools 7.0b1

Published by TobyRoseman over 1 year ago

  • New submodule coremltools.optimize for model quantization and compression
    • coremltools.optimize.coreml for compressing coreml models, in a data-free manner. coremltools.compression_utils.* APIs have been moved here
    • coremltools.optimize.torch for compressing torch model with training data and fine-tuning. The fine tuned torch model can then be converted using coremltools.convert
  • Updated MIL ops for iOS17/macOS14/watchOS10/tvOS17
  • pass_pipeline parameter has been added to coremltools.convert to allow controls over which optimizations are performed.
  • Python 3.11 support.
  • MLModel batch prediction support.
  • Support for converting statically quantized PyTorch models
  • New Torch layer support: randn, randn_like, scaled_dot_product_attention, stft, tile
  • Faster weight palettization for large tensors.
  • coremltools.models.ml_program.compression_utils is deprecated.
  • Various other bug fixes, enhancements, clean ups and optimizations.

Core ML tools 7.0 guide: https://coremltools.readme.io/v7.0/

Special thanks to our external contributors for this release: @fukatani, @pcuenca, @mlaves, @cclauss, @smpanaro, @nikalra, @jszaday

coremltools - coremltools 6.3

Published by junpeiz over 1 year ago

Core ML Tools 6.3 Release Note

  • Torch 2.0 Support
  • TensorFlow 2.12.0 Support
  • Remove Python 3.6 support
  • Functionality for controlling graph passes/optimizations; see the pass_pipeline parameter to coremltools.convert.
  • A utility function for easily creating pipelines; see utils.make_pipeline.
  • A debug utility function for extracting submodels; see converters.mil.debugging_utils.extract_submodel (a sketch follows this list)
  • Various other bug fixes, enhancements, clean ups and optimizations.
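The sketch referenced above: a minimal example of the submodel extraction utility. The mlmodel variable and the intermediate tensor name are placeholders for illustration.

from coremltools.converters.mil.debugging_utils import extract_submodel

# `mlmodel` is assumed to be a converted mlprogram model;
# "var_1" is a placeholder name of an intermediate tensor to expose as an output.
submodel = extract_submodel(mlmodel, outputs=["var_1"])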

Special thanks to our external contributors for this release: @fukatani, @nikalra and @kevin-keraudren.

coremltools - coremltools 6.2

Published by junpeiz over 1 year ago

Core ML Tools 6.2 Release Note

  • Support new PyTorch version: torch==1.13.1 and torchvision==0.14.1.
  • New ops support:
    • New PyTorch ops support: 1-D and N-D FFT / RFFT / IFFT / IRFFT in torch.fft, torchvision.ops.nms, torch.atan2, torch.bitwise_and, torch.numel (a sketch of one of these follows this list).
    • New TensorFlow ops support: FFT / RFFT / IFFT / IRFFT in tf.signal, tf.tensor_scatter_nd_add.
  • Existing ops improvements:
    • Supports int input for clamp op.
    • Supports dynamic topk (k not determined during compile time).
    • Supports padding='valid' in PyTorch convolution.
    • Supports PyTorch Adaptive Pooling.
  • Supports numpy v1.24.0 (#1718)
  • Add int8 affine quantization for the compression_utils.
  • Various other bug fixes, optimizations and improvements.
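For instance, a minimal sketch of converting a model that uses the newly supported torch.atan2; the model and shapes are arbitrary and chosen only for illustration.

import torch
import coremltools as ct

class Atan2Model(torch.nn.Module):
    def forward(self, y, x):
        return torch.atan2(y, x)

y, x = torch.rand(1, 8), torch.rand(1, 8)
traced = torch.jit.trace(Atan2Model().eval(), (y, x))
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=y.shape), ct.TensorType(shape=x.shape)],
)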

Special thanks to our external contributors for this release: @fukatani, @ChinChangYang, @danvargg, @bhushan23 and @cjblocker.

coremltools - coremltools 6.1

Published by jakesabathia2 almost 2 years ago

  • Support for TensorFlow 2.10.
  • New PyTorch ops supported: baddbmm, glu, hstack, remainder, weight_norm, hann_window, randint, cross, trace, and reshape_as.
  • Avoid root logger and use the coremltools logger instead.
  • Support dynamic input shapes for PyTorch repeat and expand op.
  • Enhance translation of torch where op with only one input.
  • Add support for the PyTorch einsum equation 'bhcq,bhck->bhqk' (a sketch follows this list).
  • Optimization graph pass improvement
    • 3D convolution batchnorm fusion
    • Consecutive relu fusion
    • Noop elimination
  • Actively catch tensors with rank >= 6 and error out
  • Various other bug fixes, optimizations and improvements.
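The sketch referenced above: a minimal example of converting the newly supported einsum equation. The model and shapes are arbitrary and chosen only for illustration.

import torch
import coremltools as ct

class EinsumModel(torch.nn.Module):
    def forward(self, q, k):
        # Attention-style batched contraction.
        return torch.einsum("bhcq,bhck->bhqk", q, k)

q = torch.rand(1, 2, 16, 8)
k = torch.rand(1, 2, 16, 8)
traced = torch.jit.trace(EinsumModel().eval(), (q, k))
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=q.shape), ct.TensorType(shape=k.shape)],
)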

Special thanks to our external contributors for this release: @fukatani, @piraka9011, @giorgiop, @hollance, @SangamSwadiK, @RobertRiachi, @waylybaye, @GaganNarula, and @sunnypurewal.

coremltools - coremltools 6.0

Published by TobyRoseman about 2 years ago

  • MLProgram compression: affine quantization, palettization, sparsification. See coremltools.compression_utils
  • Python 3.10 support.
  • Support for latest scikit-learn version (1.1.2).
  • Support for latest PyTorch version (1.12.1).
  • Support for TensorFlow 2.8.
  • Support for options to specify input and output data types, for both images and multiarrays
    • Update coremltools python bindings to work with the GRAYSCALE_FLOAT16 image datatype of Core ML
    • New options to set input and output types to multi array of type float16, grayscale image of type float16, and set output type as images, similar to the coremltools.ImageType used with inputs (a sketch follows this list).
  • New compute unit enum type: CPU_AND_NE, to restrict the model runtime to the Neural Engine and CPU.
  • Support for several new TensorFlow and PyTorch ops.
  • Changes to opset (available from iOS16, macOS13)
    • New MIL ops: full_like, resample, reshape_like, pixel_unshuffle, topk
    • Existing MIL ops with new functionality: crop_resize, gather, gather_nd, topk, upsample_bilinear.
  • API Breaking Changes:
    • Do not assume source prediction column is "predictions", fixes #58.
    • Remove useCPUOnly parameter from coremltools.convert and coremltools.models.MLModel. Use coremltools.ComputeUnit instead.
    • Remove ONNX support.
    • Remove multi-backend Keras support.
  • Various other bug fixes, optimizations and improvements.
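The sketch referenced above: a minimal example of the new float16 I/O options. The source_model variable and the shape are placeholders for illustration.

import coremltools as ct

mlmodel = ct.convert(
    source_model,  # placeholder torch/TF model
    inputs=[ct.ImageType(shape=(1, 1, 256, 256),
                         color_layout=ct.colorlayout.GRAYSCALE_FLOAT16)],
    outputs=[ct.ImageType(color_layout=ct.colorlayout.GRAYSCALE_FLOAT16)],
    minimum_deployment_target=ct.target.macOS13,
)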
coremltools - coremltools 6.0b2

Published by TobyRoseman about 2 years ago

  • Support for new MIL ops added in iOS16/macOS13: pixel_unshuffle, resample, topk
  • Update coremltools python bindings to work with the GRAYSCALE_FLOAT16 image datatype of Core ML
  • New compute unit enum type: CPU_AND_NE
  • New PyTorch ops: AdaptiveAvgPool2d, cosine_similarity, eq, linalg.norm, linalg.matrix_norm, linalg.vector_norm, ne, PixelUnshuffle
  • Support for identity_n TensorFlow op
  • Various other bug fixes, optimizations and improvements.
coremltools - coremltools 6.0b1

Published by TobyRoseman over 2 years ago

  • MLProgram compression: affine quantization, palettization, sparsification. See coremltools.compression_utils.
  • New options to set input and output types to multi array of type float16, grayscale image of type float16, and set output type as images, similar to the coremltools.ImageType used with inputs.
  • Support for PyTorch 1.11.0.
  • Support for TensorFlow 2.8.
  • [API Breaking Change] Remove useCPUOnly parameter from coremltools.convert and coremltools.models.MLModel. Use coremltools.ComputeUnit instead.
  • Support for many new PyTorch and TensorFlow layers
  • Many bug fixes and enhancements.

Known issues

  • While conversion and Core ML models with grayscale float16 images should work with the iOS16/macOS13 beta, the coremltools-CoreML Python binding has an issue that causes the predict API in coremltools to crash when either the input or output is of type grayscale float16
  • The new compute unit configuration MLComputeUnitsCPUAndNeuralEngine is not yet available in coremltools
coremltools - coremltools 5.2

Published by TobyRoseman over 2 years ago

  • Support latest version (1.10.2) of PyTorch
  • Support TensorFlow 2.6.2
  • Support new PyTorch ops:
    • bitwise_not
    • dim
    • dot
    • eye
    • fill
    • hardswish
    • linspace
    • mv
    • new_full
    • new_zeros
    • rrelu
    • selu
  • Support TensorFlow ops
    • DivNoNan
    • Log1p
    • SparseSoftmaxCrossEntropyWithLogits
  • Various bug fixes, clean ups and optimizations.
  • This is the final coremltools version to support Python 3.5
coremltools - coremltools 5.1

Published by TobyRoseman almost 3 years ago

  • New supported PyTorch operations: broadcast_tensors, frobenius_norm, full, norm and scatter_add.
  • Automatic support for inplace PyTorch operations if the corresponding non-inplace operation is supported.
  • Support PyTorch 1.9.1
  • Various other bug fixes, optimizations and improvements.
coremltools - coremltools 5.0

Published by TobyRoseman about 3 years ago

What’s New

  • Added a new kind of Core ML model type, called ML Program. TensorFlow and PyTorch models can now be converted to ML Programs.
    • To learn about ML Programs, how they are different from the classical Core ML neural network types, and what they offer, please see the documentation here
    • Use the convert_to argument with the unified converter API to indicate the model type of the Core ML model.
      • coremltools.convert(..., convert_to="mlprogram") converts to a Core ML model of type ML program.
      • coremltools.convert(..., convert_to="neuralnetwork") converts to a Core ML model of type neural network. "Neural network" is the older Core ML format and continues to be supported. Using just coremltools.convert(...) will default to producing a neural network Core ML model.
    • When targeting ML program, there is an additional option available to set the compute precision of the Core ML model to either float32 or float16. The default is float16. Usage example:
      • ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32) or ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)
      • To know more about how this affects the runtime, see the documentation on Typed execution.
  • You can save to the new Model Package format through the usual coremltools save method. Simply use model.save("<model_name>.mlpackage") instead of the usual model.save("<model_name>.mlmodel")
    • Core ML is introducing a new model format called model packages. It’s a container that stores each of a model’s components in its own file, separating out its architecture, weights, and metadata. By separating these components, model packages allow you to easily edit metadata and track changes with source control. They also compile more efficiently, and provide more flexibility for tools which read and write models.
    • ML Programs can only be saved in the model package format.
  • Adds the compute_units parameter to MLModel and coremltools.convert. This matches MLComputeUnits in Swift and Objective-C. Use this parameter to specify where your models can run (a sketch follows this list):
    • ALL - use all compute units available, including the neural engine.
    • CPU_ONLY - limit the model to only use the CPU.
    • CPU_AND_GPU - use both the CPU and GPU, but not the neural engine.
  • Python 3.9 Support
  • Native M1 support for Python 3.8 and 3.9
  • Support for TensorFlow 2.5
  • Support Torch 1.9.0
  • New Torch ops: affine_grid_generator, einsum, expand, grid_sampler, GRU, linear, index_put, maximum, minimum, SiLU, sort, torch_tensor_assign, zeros_like.
  • Added flag to skip loading a model during conversion. Useful when converting for new macOS on older macOS:
    ct.convert(....., skip_model_load=True)
  • Various bug fixes, optimizations and additional testing.
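The sketch referenced above: a minimal example of the compute_units parameter. The source_model variable and the model path are placeholders for illustration.

import coremltools as ct

# Convert and restrict execution to CPU and GPU (no Neural Engine).
mlmodel = ct.convert(source_model, compute_units=ct.ComputeUnit.CPU_AND_GPU)

# The same parameter is available when loading an existing model.
mlmodel = ct.models.MLModel("model.mlpackage", compute_units=ct.ComputeUnit.CPU_ONLY)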

Deprecations and Removals

  • Caffe converter has been removed. If you are still using the Caffe converter, please use coremltools 4.
  • Keras.io and ONNX converters will be deprecated in coremltools 6. Users are recommended to transition to the TensorFlow/PyTorch conversion via the unified converter API.
  • Methods such as convert_neural_network_weights_to_fp16() and convert_neural_network_spec_weights_to_fp16(), which had been deprecated in coremltools 4, have been removed.
  • The useCPUOnly parameter for MLModel and MLModel.predict has been deprecated. Instead, use the compute_units parameter for MLModel and coremltools.convert.
coremltools - coremltools 5.0b5

Published by TobyRoseman about 3 years ago

  • Added support for PyTorch conversion of tensor assignment statements: the torch_tensor_assign and index_put_ ops. Fixed bugs in translation of expand and sort ops.
  • Model input/output name sanitization: input and output names for the "neuralnetwork" backend are sanitized (updated to match the regex [a-zA-Z_][a-zA-Z0-9_]*), similar to the "mlprogram" backend. So instead of producing input/output names such as "1" or "input/1", the unified converter API will produce names such as "var_1" or "input_1".
  • Fixed a bug preventing a Model Package from being saved more than once to the same path.
  • Various bug fixes, optimizations and additional testing.
coremltools - coremltools 5.0b4

Published by TobyRoseman about 3 years ago

  • Fixes Python 3.5 and 3.6 errors when importing some specific submodules.
  • Fixes Python 3.9 import error for arm64. #1288
coremltools - coremltools 5.0b3

Published by TobyRoseman about 3 years ago

  • Native M1 support for Python 3.8 and Python 3.9
  • Adds the compute_units parameter to MLModel and coremltools.convert. Use this to specify where your models can run:
    • ALL - use all compute units available, including the neural engine.
    • CPU_ONLY - limit the model to only use the CPU.
    • CPU_AND_GPU - use both the CPU and GPU, but not the neural engine.
  • With the above change we are deprecating the useCPUOnly parameter for MLModel and coremltools.convert.
  • For ML programs, the default compute precision has changed from float32 to float16. This can be overridden with the compute_precision parameter of coremltools.convert.
  • Support for TensorFlow 2.5
  • Removed scipy dependency
  • Various bug fixes and optimizations
Package Rankings
Top 4.48% on Proxy.golang.org
Top 33.06% on Anaconda.org
Top 22.65% on Conda-forge.org
Top 1.05% on Pypi.org