coremltools

Core ML Tools contains supporting tools for Core ML model conversion, editing, and validation.

BSD-3-Clause License

Downloads: 352.4K
Stars: 4.4K
Committers: 165

coremltools - coremltools 5.0b2

Published by TobyRoseman over 3 years ago

  • Python 3.9 support
  • Ubuntu 18 support
  • Torch 1.9.0 support
  • Added a flag to skip loading the model during conversion. Useful when converting a model that targets a newer macOS version while running on an older macOS (see the sketch after this list).
  • New torch ops: affine_grid_generator, grid_sampler, linear, maximum, minimum, SiLUs
  • Fuse Activation SiLUs optimization
  • Add no-op transpose into noop_elimination
  • Various bug fixes and other improvements, including:
    • bug fix in coremltools.utils.rename_feature utility for ML Program spec
    • bug fix in classifier model conversion for ML Program target
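
The model-load skip mentioned above is exposed through the unified converter API. A minimal sketch, assuming the flag is ct.convert's skip_model_load argument and using a placeholder TorchScript model path:

    import coremltools as ct

    # Convert without loading (compiling) the resulting model on the local machine.
    # Useful when the machine doing the conversion runs an older macOS than the target.
    mlmodel = ct.convert(
        "my_model.pt",                                   # placeholder: a saved TorchScript model
        inputs=[ct.TensorType(shape=(1, 3, 224, 224))],
        convert_to="mlprogram",
        skip_model_load=True,                            # assumed name of the new flag
    )
    mlmodel.save("MyModel.mlpackage")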
coremltools - coremltools 5.0b1

Published by TobyRoseman over 3 years ago

To install this version run: pip install coremltools==5.0b1

What's New

  • Added a new kind of Core ML model type, called ML Program. TensorFlow and PyTorch models can now be converted to ML Programs.
    • To learn about ML Programs, how they differ from the classical Core ML neural network types, and what they offer, please see the documentation here
    • Use the convert_to argument with the unified converter API to indicate the model type of the Core ML model.
      • coremltools.convert(..., convert_to="mlprogram") converts to a Core ML model of type ML program.
      • coremltools.convert(..., convert_to="neuralnetwork") converts to a Core ML model of type neural network. "Neural network" is the older Core ML format and continues to be supported. Using just coremltools.convert(...) defaults to producing a neural network Core ML model.
    • When targeting an ML program, an additional option is available to set the compute precision of the Core ML model to either float32 or float16. That is,
      • ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32) or ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)
      • To learn more about how this affects the runtime, see the documentation on Typed execution.
  • You can save to the new model package format through the usual coremltools save method. Simply use model.save("<model_name>.mlpackage") instead of the usual model.save("<model_name>.mlmodel"); see the sketch after this list.
    • Core ML is introducing a new model format called model packages. It’s a container that stores each of a model’s components in its own file, separating out its architecture, weights, and metadata. By separating these components, model packages allow you to easily edit metadata and track changes with source control. They also compile more efficiently, and provide more flexibility for tools which read and write models.
    • ML Programs can only be saved in the model package format.
  • Several performance improvements from new graph passes added to the conversion pipeline for deep learning models, including fuse_gelu, replace_stack_reshape, concat_to_pixel_shuffle, and fuse_layernorm_or_instancenorm.
  • New translation methods for Torch ops such as einsum, GRU, and zeros_like.
  • OS versions supported by coremltools 5.0b1: macOS 10.15 and above, Linux with C++17 and above
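
As referenced in the list above, a minimal sketch of converting a traced PyTorch model to an ML Program with an explicit compute precision and saving it as a model package (the TorchVision model is purely illustrative):

    import coremltools as ct
    import torch
    import torchvision

    # Trace an example TorchVision model (any traced or scripted torch model works).
    torch_model = torchvision.models.mobilenet_v2(pretrained=True).eval()
    example_input = torch.rand(1, 3, 224, 224)
    traced_model = torch.jit.trace(torch_model, example_input)

    # Convert to an ML Program, setting the compute precision explicitly.
    mlmodel = ct.convert(
        traced_model,
        inputs=[ct.TensorType(name="input", shape=example_input.shape)],
        convert_to="mlprogram",
        compute_precision=ct.precision.FLOAT16,
    )

    # ML Programs can only be saved in the new model package format.
    mlmodel.save("MobileNetV2.mlpackage")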

Deprecations and Removals

  • Caffe converter has been removed. If you are still using the Caffe converter, please use coremltools 4.
  • Keras.io and ONNX converters will be deprecated in coremltools 6. Users are recommended to transition to the TensorFlow/PyTorch conversion via the unified converter API.
  • Methods such as convert_neural_network_weights_to_fp16() and convert_neural_network_spec_weights_to_fp16(), which had been deprecated in coremltools 4, have been removed.

Known Issues

  • The default compute precision for conversion to ML Programs is set to precision.FLOAT32, although it will be updated to precision.FLOAT16 in a later beta release, prior to the official coremltools 5.0 release.
  • Core ML may downcast float32 tensors specified in ML Program model types when running on a device with Neural Engine support. Workaround: restrict compute units to .cpuAndGPU in MLModelConfiguration (seed 1).
  • Converting some models to ML Program may lead to an error (such as a segmentation fault or "Error in building plan") due to a bug in the Core ML GPU runtime. Workaround: when using coremltools, you can force prediction to stay on the CPU, without changing the prediction code, by specifying the useCPUOnly argument during conversion, that is, ct.convert(source_model, convert_to='mlprogram', useCPUOnly=True). For such models, in your Swift code, use the MLComputeUnits.cpuOnly option when loading the model to restrict the compute unit to the CPU.
  • Flexible input shapes for image inputs have a bug when used with the ML Program type in seed 1 of the Core ML framework. This will be fixed in an upcoming seed release.
  • coremltools 5.0b1 supports Python versions 3.5, 3.6, 3.7, and 3.8. Support for Python 3.9 will be enabled in a future beta release.
coremltools - coremltools 4.1

Published by aseemw over 3 years ago

  • Support for Python 2 has been deprecated. This release contains wheels for Python 3.5, 3.6, 3.7, and 3.8.
  • PyTorch converter updates:
    • Added translation methods for the ops topK, groupNorm, log10, pad, and stacked LSTMs
    • support for PyTorch 1.7
  • TensorFlow Converter updates:
    • Added translation functions for ops Mfcc, AudioSpectrogram
  • Miscellaneous bug fixes
coremltools - coremltools 4.0

Published by aseemw about 4 years ago

What's new in coremltools 4.0

  • New documentation available at http://coremltools.readme.io.
  • New converters from PyTorch, TensorFlow 1, and TensorFlow 2 available via the new unified converter API, ct.convert()
  • New Model Intermediate Language (MIL) builder library, with which the new converters have been implemented. Using MIL, it is easy to build neural network models directly or implement composite operations (see the sketch after this list).
  • New utilities to configure inputs while converting from PyTorch and TensorFlow, using ct.convert() with ct.ImageType(), ct.ClassifierConfig(), etc., see details: https://coremltools.readme.io/docs/neural-network-conversion.
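
As referenced above, a minimal sketch of defining a model directly with the MIL builder and passing the resulting program to the unified converter (the particular ops chosen are illustrative, following the documented builder pattern):

    import coremltools as ct
    from coremltools.converters.mil import Builder as mb

    # Define a small MIL program directly with the builder.
    @mb.program(input_specs=[mb.TensorSpec(shape=(1, 100, 100, 3))])
    def prog(x):
        x = mb.relu(x=x, name="relu")
        x = mb.transpose(x=x, perm=[0, 3, 1, 2], name="transpose")
        x = mb.reduce_mean(x=x, axes=[2, 3], keep_dims=False, name="reduce")
        return mb.log(x=x, name="log")

    # A MIL program can be passed to ct.convert() like any other source model.
    mlmodel = ct.convert(prog)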

Highlights of Core ML 4

  • Model Deployment
  • Model Encryption
  • Unified converter API with PyTorch and TensorFlow 2 support in coremltools 4
  • MIL builder for neural networks and composite ops in coremltools 4
  • New layers in neural network:
    • CumSum
    • OneHot
    • ClampedReLu
    • ArgSort
    • SliceBySize
    • Convolution3D
    • Pool3D
    • Bilinear Upsample with align corners and fractional factors
    • PixelShuffle
    • MatMul with int8 weights and int8 activations
    • Concat interleave
    • See NeuralNetwork.proto
  • Enhanced Xcode model view with interactive previews
  • Enhanced Xcode Playground support for Core ML models
coremltools - coremltools 4.0b4

Published by aseemw about 4 years ago

  • Several bug fixes, including:

    • Fix in rename_feature API, when used with a neural network model with image inputs
    • Bug fixes in conversion of torch ops such as layer norm, flatten, conv transpose, expand, dynamic reshape, slice, etc.
    • Fixes when converting from PyTorch 1.6.0
    • Fixes to support the .pth extension, in addition to the .pt extension, for torch conversion
    • Fixes in TF2 LSTM with dynamic batch size
    • Fixes in control flow models with TF 2.3.0
    • Fixes for numerical issues with the inverse layer on a few devices, by increasing the lower bound of the output
  • Added conversion functions for PyTorch ops such as neg, sum, repeat, where, adaptive_max_pool2d, floordiv, etc.

  • Update Doc strings for several MIL ops

  • Support for TF1 models with fake quant ops when used with convolution ops

  • Several new MIL optimization passes such as no-op elimination, pad and conv fusion etc.

coremltools - coremltools 4.0b3

Published by bhushan23 about 4 years ago

What's New

  • Support for PyTorch 1.6
  • concat with interleave option
  • New Torch ops support added
    • acos
    • acosh
    • argsort
    • asin
    • asinh
    • atan
    • atanh
    • avg_pool3d
    • bmm
    • ceil
    • cos
    • cosh
    • cumsum
    • elu
    • exp
    • exp2
    • floor
    • gather
    • hardsigmoid
    • is_floating_point
    • leaky_relu
    • log
    • max_pool
    • prelu
    • reciprocal
    • relu6
    • round
    • rsqrt
    • sign
    • sin
    • sinh
    • softplus
    • softsign
    • sqrt
    • square
    • tan
    • tanh
    • threshold
    • true_divide
  • Improved TF2 test coverage
  • MIL definition update
    • LSTM activation function moved from TupleInput to individual inputs
  • Improvements in MIL infrastructure

Known Issues

  • TensorFlow 2 model conversion is supported only for models with a single concrete function.
  • Conversion of TensorFlow and PyTorch models with quantized weights is currently not supported.
coremltools - coremltools 4.0b2

Published by 1duo about 4 years ago

What's New

  • Improved documentation available at http://coremltools.readme.io.
  • New converter path to directly convert PyTorch models without going through ONNX.
  • Enhanced TensorFlow 2 conversion support, which now includes support for dynamic control flow and LSTM layers. Support for several popular models and architectures, including Transformers such as GPT and BERT-variants.
  • New unified conversion API ct.convert() for converting PyTorch and TensorFlow (including tf.keras) models.
  • New Model Intermediate Language (MIL) builder library to either build neural network models directly or implement composite operations.
  • New utilities to configure inputs while converting from PyTorch and TensorFlow, using ct.convert() with ct.ImageType(), ct.ClassifierConfig(), etc., see details: https://coremltools.readme.io/docs/neural-network-conversion.
  • The onnx-coreml converter has been moved under coremltools and can be accessed as ct.converters.onnx.convert().
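
A minimal sketch of the unified conversion API listed above, applied to a toy tf.keras model (the model itself is purely illustrative):

    import coremltools as ct
    import tensorflow as tf

    # A tiny tf.keras model, used only to demonstrate the unified API.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])

    # ct.convert() detects the source framework automatically.
    mlmodel = ct.convert(keras_model)
    mlmodel.save("TinyDense.mlmodel")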

Deprecations

  • Deprecated the following methods

    • NeuralNetworkShaper class.
    • get_allowed_shape_ranges().
    • can_allow_multiple_input_shapes().
    • visualize_spec() method of the MLModel class.
    • quantize_spec_weights(); use the quantize_weights() method instead.
    • get_custom_layer_names(), replace_custom_layer_name(), and has_custom_layer(), which have been moved to internal methods.
  • Deprecation warnings have been added for these; they will be removed in the next major release.

Known Issues

  • The latest version of PyTorch tested to work with the converter is 1.5.0.
  • TensorFlow 2 model conversion is supported only for models with a single concrete function.
  • Conversion of TensorFlow and PyTorch models with quantized weights is currently not supported.
  • coremltools.utils.rename_feature does not work correctly when renaming the output feature of a neural network classifier model.
  • The leaky_relu layer has not yet been added to the PyTorch converter, although it is supported in MIL and the TensorFlow converter.
coremltools - coremltools 4.0b1

Published by 1duo over 4 years ago

What's New

  • New documentation available at http://coremltools.readme.io.
  • New converter path to directly convert PyTorch models without going through ONNX.
  • Enhanced TensorFlow 2 conversion support, which now includes support for dynamic control flow and LSTM layers. Support for several popular models and architectures, including Transformers such as GPT and BERT-variants.
  • New unified conversion API ct.convert() for converting PyTorch and TensorFlow (including tf.keras) models.
  • New Model Intermediate Language (MIL) builder library to either build neural network models directly or implement composite operations.
  • New utilities to configure inputs while converting from PyTorch and TensorFlow, using ct.convert() with ct.ImageType(), ct.ClassifierConfig(), etc., see details: https://coremltools.readme.io/docs/neural-network-conversion.
  • The onnx-coreml converter has been moved under coremltools and can be accessed as ct.converters.onnx.convert().
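
To illustrate the composite-operation path mentioned above, a hedged sketch of registering a torch op as a composition of existing MIL ops (the selu decomposition and the registry import path follow the documented pattern, but are assumptions here):

    from coremltools.converters.mil import Builder as mb
    from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

    # Implement a torch op as a composition of existing MIL ops:
    # selu(x) = scale * elu(x, alpha), using the standard SELU constants.
    @register_torch_op
    def selu(context, node):
        x = context[node.inputs[0]]
        x = mb.elu(x=x, alpha=1.6732632423543772)
        x = mb.mul(x=x, y=1.0507009873554805, name=node.name)
        context.add(x)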

Deprecations

  • Deprecated the following methods

    • NeuralNetworkShaper class.
    • get_allowed_shape_ranges().
    • can_allow_multiple_input_shapes().
    • visualize_spec() method of the MLModel class.
    • quantize_spec_weights(); use the quantize_weights() method instead.
    • get_custom_layer_names(), replace_custom_layer_name(), and has_custom_layer(), which have been moved to internal methods.
  • Deprecation warnings have been added for these; they will be removed in the next major release.

Known Issues

  • TensorFlow 2 model conversion is supported only for models with a single concrete function.
  • Conversion of TensorFlow and PyTorch models with quantized weights is currently not supported.
  • coremltools.utils.rename_feature does not work correctly when renaming the output feature of a neural network classifier model.
  • The leaky_relu layer has not yet been added to the PyTorch converter, although it is supported in MIL and the TensorFlow converter.
coremltools - coremltools 3.4

Published by aseemw over 4 years ago

  • Added support for tf.einsum op
  • Bug fixes in image pre-processing error handling, quantization function for the embeddingND layer, conversion of tf.stack op
  • Updated the transpose removal mlmodel pass
  • Fixed import statement to support scikit-learn >=0.21 (@sapieneptus)
  • Added deprecation warnings for class NeuralNetworkShaper and methods visualize_spec, quantize_spec_weights
  • Renamed a few functions that were unintentionally exposed in the public API to internal names by prepending an underscore. The original methods still work, but deprecation warnings have been added.
coremltools - coremltools 3.3

Published by srikris over 4 years ago

Release Notes

Bug Fixes

  • Add support for converting Softplus layer in coremltools.
  • Fix in gelu and layer norm fusion pass.
  • Simplified build & CI setup.
  • Fixed critical numpy
coremltools - coremltools 3.2

Published by 1duo almost 5 years ago

This release includes new op conversion supports, bug fixes, and improved graph optimization passes.

Install/upgrade to the latest coremltools with pip install --upgrade coremltools.

More details can be found in neural-network-guide.md.

coremltools - coremltools 3.1

Published by 1duo almost 5 years ago

Changes:

  • Add support for TensorFlow 2.x file format (.h5, SavedModel, and concrete functions).
  • Add support for several new ops, such as AddV2, FusedBatchNormV3.
  • Bug fixes in the TensorFlow converter's op fusion graph pass.

Known Issues:

  • tf.keras model conversion is supported only with TensorFlow 2.
  • Currently, there are issues while invoking the TensorFlow 2.x model conversion in Python 2.x.
  • Currently, there are issues while converting tf.keras graphs that contain recurrent layers.
coremltools - coremltools 3.0

Published by aseemw about 5 years ago

Release coremltools 3.0

We are very excited about the release of coremltools 3 and for these Core ML release notes to become a fixture as the number of issues resolved and features added grows. In this document, we give you an overview of the features added and issues resolved in the most recent release. The issues can also be found on the project boards of each respective repository (for example, coremltools), where labels indicate the type of issue.

In addition to the features and improvements introduced in this release, there have been some changes within the repository. There are now issue templates to help specify the type of issue, whether it is a bug, feature request, or question, and to help us triage quickly. There is also a new document, contributing.md, which contains guidelines for community engagement.

coremltools 3.0

We are happy to announce the official release of coremltools 3 which aligns with Core ML 3. It includes a new version of the .mlmodel specification (version 4) which brings with it support for:

  • Updatable models - Neural Network and KNN
  • More dynamic and expressive neural networks - approx. 100 more layers added compared to Core ML 2
  • Dynamic control flows
  • Nearest neighbor classifiers
  • Recommenders
  • Linked models
  • Sound analysis preprocessing
  • Runtime adjustable parameters for on-device update

This version of coremltools also includes a new converter path for TensorFlow models. The tfcoreml converter has been updated to include this new path, which converts to specification 4 and can handle control flow and cyclic TensorFlow graphs.

A control flow example can be found here.
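
A sketch of the updated tfcoreml path (the paths and tensor names below are placeholders; passing minimum_ios_deployment_target='13' is assumed to route to the new specification 4 converter):

    import tfcoreml

    # Convert a frozen TensorFlow graph to the Core ML 3 format (specification 4).
    mlmodel = tfcoreml.convert(
        tf_model_path="frozen_model.pb",                  # placeholder path
        mlmodel_path="Model.mlmodel",
        input_name_shape_dict={"input:0": [1, 224, 224, 3]},
        output_feature_names=["Softmax:0"],
        minimum_ios_deployment_target="13",               # selects the new converter path
    )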

Updatable Models

Core ML 3 supports on-device update of models. Version 4 of the .mlmodel specification can encapsulate all the necessary parameters for a model update. Nearest neighbor, neural network, and pipeline models can all be made updatable.
Updatable neural networks support the training of convolution and fully connected layer weights (with back-propagation through many other layer types). Categorical cross-entropy and mean squared error losses are available, along with stochastic gradient descent and Adam optimizers.
See examples of how to convert and create updatable models.
See the MLUpdateTask API reference for how to update a model from within an app.
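
A condensed sketch of marking a layer updatable and attaching a loss, optimizer, and epoch count with the NeuralNetworkBuilder (the toy two-class classifier below is purely illustrative):

    import numpy as np
    from coremltools.models import datatypes
    from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

    # A toy two-class model: one inner-product layer followed by softmax.
    input_features = [("features", datatypes.Array(4))]
    output_features = [("probs", datatypes.Array(2))]
    builder = NeuralNetworkBuilder(input_features, output_features)
    builder.add_inner_product(
        name="dense_1", input_name="features", output_name="dense_out",
        input_channels=4, output_channels=2,
        W=np.zeros((2, 4), dtype=np.float32), b=np.zeros(2, dtype=np.float32),
        has_bias=True,
    )
    builder.add_softmax(name="softmax", input_name="dense_out", output_name="probs")

    # Make the dense layer updatable and attach loss, optimizer, and epochs.
    builder.make_updatable(["dense_1"])
    builder.set_categorical_cross_entropy_loss(name="loss", input="probs")
    builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=8))
    builder.set_epochs(10)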

Neural Networks

  • Support for new layers in Core ML 3 added to the NeuralNetworkBuilder
    • Exact rank mapping of multi dimensional array inputs
    • Control Flow related layers (branch, loop, range, etc.)
    • Element-wise unary layers (ceil, floor, sin, cos, gelu, etc.)
    • Element-wise binary layers with broadcasting (addBroadcastable, multiplyBroadcastable, etc)
    • Tensor manipulation layers (gather, scatter, tile, reverse, etc.)
    • Shape manipulation layers (squeeze, expandDims, getShape, etc.)
    • Tensor creation layers (fillDynamic, randomNormal, etc.)
    • Reduction layers (reduceMean, reduceMax, etc.)
    • Masking / Selection Layers (whereNonZero, lowerTriangular, etc.)
    • Normalization layers (layerNormalization)
    • For a full list of supported layers in Core ML 3, check out Core ML specification documentation or NeuralNetwork.proto.
  • Support conversion of recurrent networks from TensorFlow
coremltools - coremltools 3.0 beta 6 release

Published by aseemw about 5 years ago

coremltools - coremltools 3.0b beta release

Published by Necross over 5 years ago

This is the first beta release of coremltools 3 which aligns with the preview of Core ML 3. It includes a new version of the .mlmodel specification which brings with it support for:

  • Updatable models
  • More dynamic and expressive neural networks
  • Nearest neighbor classifiers
  • Recommenders
  • Linked models
  • Sound analysis preprocessing
  • Runtime adjustable parameters

This release also enhances and introduces the following converters and utilities:

  • Keras converter
    • Adds support for converting training details using respect_trainable flag
  • Scikit converter
    • Nearest neighbor classifier conversion
  • NeuralNetworkBuilder
    • Support for all new layers introduced in Core ML 3
    • Support for adding update details such as marking layers updatable, specifying a loss function and providing an optimizer
  • KNearestNeighborsClassifierBuilder (new)
    • Newly added to support simple programmatic construction of nearest neighbor classifiers (see the sketch after this list)
  • TensorFlow (new)
    • A new TensorFlow converter with improved graph transformation capabilities and support for version 4 of the .mlmodel specification
    • This is used by the new tfcoreml beta converter package as well. Try it out with pip install tfcoreml==0.4.0b1
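
As referenced above, a sketch of the new KNearestNeighborsClassifierBuilder (the dimension and parameter values are illustrative):

    import coremltools
    from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder

    # Build a simple k-nearest-neighbor classifier over 128-dimensional embeddings.
    builder = KNearestNeighborsClassifierBuilder(
        input_name="embedding",
        output_name="label",
        number_of_dimensions=128,
        default_class_label="unknown",
        number_of_neighbors=3,
        weighting_scheme="inverse_distance",
        index_type="linear",
    )
    builder.is_updatable = True  # nearest neighbor models can be updated on device

    coremltools.models.MLModel(builder.spec).save("KNNClassifier.mlmodel")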

This release also adds Python 3.7 support for coremltools

Updatable Models

Core ML 3 supports on-device update of models. Version 4 of the .mlmodel specification can encapsulate all the necessary parameters for a model update. Nearest neighbor, neural networks and pipeline models can all be made updatable.

Updatable neural networks support training of convolution and fully connected layer weights (with back-propagation through many other layer types). Categorical cross-entropy and mean squared error losses are available, along with stochastic gradient descent and Adam optimizers.

See examples of how to convert and create updatable models

See the MLUpdateTask API reference for how to update a model from within an app.

Neural Networks

  • Support for new layers in Core ML 3 added to the NeuralNetworkBuilder
    • Exact rank mapping of multi dimensional array inputs
    • Control Flow related layers (branch, loop, range, etc.)
    • Element-wise unary layers (ceil, floor, sin, cos, gelu, etc.)
    • Element-wise binary layers with broadcasting (addBroadcastable, multiplyBroadcastable, etc)
    • Tensor manipulation layers (gather, scatter, tile, reverse, etc.)
    • Shape manipulation layers (squeeze, expandDims, getShape, etc.)
    • Tensor creation layers (fillDynamic, randomNormal, etc.)
    • Reduction layers (reduceMean, reduceMax, etc.)
    • Masking / Selection Layers (whereNonZero, lowerTriangular, etc.)
    • Normalization layers (layerNormalization)
    • For a full list of supported layers in Core ML 3, check out the Core ML specification documentation (NeuralNetwork.proto).
  • Support conversion of recurrent networks from TensorFlow

Known Issues

coremltools 3.0b1

  • Converting a Keras model that uses mean squared error for the loss function will not create a valid model. A workaround is to set respect_trainable to False (the default) when converting and then manually add the loss function.

Core ML 3 Developer Beta 1

  • The default number of epochs encoded in the model is not respected; training may run for 0 epochs and immediately return without training.
    • Workaround: Explicitly supply epochs via MLModelConfiguration updateParameters using MLParameterKey.epochs even if you want to use the default value encoded in the model.
  • Loss returned by the Adam optimizer is not correct
  • Some updatable pipeline models containing a static neural network sub-model can intermittently fail to update with the error: “Attempting to hash an MLFeatureValue that is not an image or multi array”. This error will surface in task.error as part of MLUpdateContext passed to the provided completion handler.
    • Workaround: Retry model update by creating a new update task with the same training data.
  • Some of the new neural network layers may result in an error when the model is run on a non-CPU compute device.
    • Workaround: restrict computation to CPU with MLModelConfiguration computeUnits
  • Enumerated shape flexibility, when used with Neural network inputs with 'exact_rank' mapping (i.e. rank 5 disabled), may result in an error during prediction.
    • Workaround: use range shape flexibility
coremltools - coremltools 2.1.0

Published by aseemw over 5 years ago

coremltools - coremltools 2.0

Published by aseemw about 6 years ago

  • Support for quantizing neural network models (1-8 bits); see the sketch after this list
  • Support for specifying flexible shapes for model inputs
  • Added NN builder support for new neural network layers: resize_bilinear, crop_resize
  • Added utilities for visualizing and printing summary of neural network models
  • Miscellaneous fixes
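
A sketch of the new quantization utility referenced above (the model path is a placeholder):

    import coremltools
    from coremltools.models.neural_network import quantization_utils

    # Load an existing full-precision neural network model (placeholder path).
    model = coremltools.models.MLModel("MyModel.mlmodel")

    # Quantize the weights to 8 bits using linear quantization.
    quantized_model = quantization_utils.quantize_weights(model, nbits=8, quantization_mode="linear")
    quantized_model.save("MyModel_8bit.mlmodel")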
coremltools - coremltools 0.8

Published by znation over 6 years ago

  • Adds Python 3.5 and 3.6 support
  • Fixed compatibility with Keras 2.1.3
  • Support for xgboost 0.7
  • Fixed: the Keras converter gave a wrong output shape when a 1D convolution output was fed directly into a flatten layer
  • Fixed: an index range bug in the Keras converter function make_output_layers()
  • Adds custom activation function support in Keras 2 converter
  • Miscellaneous documentation fixes
coremltools - coremltools-0.7.0

Published by TobyRoseman almost 7 years ago

Neural Networks

  • Half precision weights
    • New to .mlmodel specification version 2
    • Supported by macOS 10.13.2, iOS 11.2, watchOS 4.2, tvOS 11.2
    • WeightParams can now be specified in half precision (float16)
    • New float16 conversion utility function can convert existing models with neural networks to half precision by calling coremltools.utils.convert_neural_network_spec_weights_to_fp16
    • A flag can also be passed to the Keras or Caffe converter functions at conversion time to convert models to half precision (see the sketch after this list)
    • See: https://developer.apple.com/documentation/coreml/reducing_the_size_of_your_core_ml_app
  • Custom Layers
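
As referenced in the half-precision item above, a sketch of converting an existing model's weights to float16 (the model path is a placeholder):

    import coremltools

    # Load an existing float32 model and rewrite its weights in half precision.
    model = coremltools.models.MLModel("MyModel.mlmodel")           # placeholder path
    fp16_spec = coremltools.utils.convert_neural_network_spec_weights_to_fp16(model.get_spec())
    coremltools.models.MLModel(fp16_spec).save("MyModel_fp16.mlmodel")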

Visualization

  • Visualize model specification with: coremltools.utils.visualize_spec

Python 3

Misc

  • Support grayscale image outputs in python predictions
  • Bug fixes
coremltools - coremltools-0.6.3

Published by srikris about 7 years ago

Features

  • Linux support
  • Added a useCPUOnly flag that lets you run predictions with Core ML through the Python bindings using only the CPU (see the sketch below)

Note: coremltools-0.6.2 has a known issue with the useCPUOnly flag that caused failures on certain neural network models. This has been fixed in 0.6.3.
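
As referenced above, a sketch of CPU-only prediction through the Python bindings (the model path and feature names are placeholders):

    import coremltools
    import numpy as np

    # Load a converted model and run prediction restricted to the CPU.
    model = coremltools.models.MLModel("MyModel.mlmodel")             # placeholder path
    inputs = {"input": np.zeros((1, 3, 224, 224), dtype=np.float32)}  # placeholder input
    prediction = model.predict(inputs, useCPUOnly=True)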

Neural Network Builder

Added support for layers in the NeuralNetworkBuilder that were present in the neural network protobuf but missing from the builder:

  • Local response normalization (LRN) layer
  • Split layer
  • Unary function layer
  • Bias, scale layers
  • Load constant layer
  • L2 normalization layer
  • Mean variance normalization (MVN) layer
  • Elementwise min layer
  • Depthwise and separable convolutions

Added support for some of the missing parameters in NeuralNetworkBuilder:

  • Padding options in convolution, pooling and padding layers
  • Scale and shift options for linear activation

Other bug fixes & enhancements

  • Bug fix in the Caffe converter that was preventing the elementwise max layer from converting.
  • Support for converting DepthwiseConv2D and SeparableConv2D from Keras