
coremltools 7.0

@TobyRoseman released this 18 Sep 21:47
  • New submodule coremltools.optimize for model quantization and compression
    • coremltools.optimize.coreml for compressing Core ML models in a data-free manner. The coremltools.compression_utils.* APIs have been moved here (see the data-free compression sketch after this list).
    • coremltools.optimize.torch for compressing torch models with training data and fine-tuning. The fine-tuned torch model can then be converted using coremltools.convert.
  • The default conversion backend is now mlprogram for iOS15/macOS12. Previously, calling coremltools.convert() without the convert_to or minimum_deployment_target arguments used the lowest deployment target (iOS11/macOS10.13) and the neuralnetwork backend. Conversion now defaults to iOS15/macOS12 and the mlprogram backend. You can change this behavior by providing a minimum_deployment_target or convert_to value (see the conversion-defaults sketch after this list).
  • Python 3.11 support.
  • Support for new PyTorch ops: repeat_interleave, unflatten, col2im, view_as_real, rand, logical_not, fliplr, quantized_matmul, randn, randn_like, scaled_dot_product_attention, stft, tile
  • A pass_pipeline parameter has been added to coremltools.convert to allow control over which graph passes (optimizations) are run (see the pass_pipeline sketch after this list).
  • MLModel batch prediction support (see the batch prediction sketch after this list).
  • Support for converting statically quantized PyTorch models.
  • Prediction from compiled models (.mlmodelc files). Compiled model files can be obtained from an MLModel instance, and a Python API to explicitly compile a model has been added (see the compiled-model sketch after this list).
  • Faster weight palettization for large tensors.
  • New utility method for getting weight metadata: coremltools.optimize.coreml.get_weights_metadata. This information can be used to customize optimizations across ops when using the coremltools.optimize.coreml APIs (see the weight-metadata sketch after this list).
  • New and updated MIL ops for iOS17/macOS14/watchOS10/tvOS17
  • coremltools.compression_utils is deprecated.
  • Changes the default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when the mlprogram backend is used.
  • Changes the upper input range behavior when the backend is mlprogram (see the RangeDim sketch after this list):
    • If RangeDim is used and no upper bound is set (to a positive number), an exception is raised.
    • If the inputs parameter is not used but there are undetermined dims in the input shape (for example, a TF placeholder with "None"), they will be sanitized to a finite number (default_size + 1) and a warning will be raised.
  • Various other bug fixes, enhancements, cleanups, and optimizations.
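
A minimal sketch of data-free compression with the new coremltools.optimize.coreml submodule, assuming an already-converted mlprogram model at a placeholder path ("model.mlpackage"); 6-bit k-means palettization is just one of the available configurations:

```python
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load an already-converted mlprogram model ("model.mlpackage" is a placeholder path).
mlmodel = ct.models.MLModel("model.mlpackage")

# Data-free 6-bit k-means palettization applied to every eligible weight.
op_config = cto.OpPalettizerConfig(mode="kmeans", nbits=6)
config = cto.OptimizationConfig(global_config=op_config)

compressed_mlmodel = cto.palettize_weights(mlmodel, config)
compressed_mlmodel.save("model_compressed.mlpackage")
```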
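
A conversion-defaults sketch, using a tiny traced PyTorch model purely for illustration (the model, input name "x", and shapes are placeholders); convert_to and minimum_deployment_target restore the older behavior:

```python
import torch
import coremltools as ct

# Tiny PyTorch model, traced only to have something to convert.
torch_model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU()).eval()
example_input = torch.rand(1, 4)
traced = torch.jit.trace(torch_model, example_input)

# New default: mlprogram backend targeting iOS15/macOS12.
mlprogram_model = ct.convert(traced, inputs=[ct.TensorType(name="x", shape=example_input.shape)])

# The previous behavior can be requested explicitly via the backend...
nn_model = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example_input.shape)],
    convert_to="neuralnetwork",
)

# ...or via an older deployment target.
ios14_model = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS14,
)
```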
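
A pass_pipeline sketch, reusing the traced model and input from the conversion-defaults sketch above; "common::fuse_conv_batchnorm" is only an example pass name:

```python
# Start from the default pass pipeline and drop one graph pass by name.
pipeline = ct.PassPipeline()
pipeline.remove_passes({"common::fuse_conv_batchnorm"})

mlmodel_custom_passes = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example_input.shape)],
    pass_pipeline=pipeline,
)
```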
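
A batch prediction sketch, assuming MLModel.predict accepts a list of input dictionaries and reusing mlprogram_model and the "x" input name from the conversion-defaults sketch:

```python
import numpy as np

# A batch is expressed as a list of per-example input dictionaries.
batch = [{"x": np.random.rand(1, 4).astype(np.float32)} for _ in range(3)]

# Returns one output dictionary per input dictionary.
batch_results = mlprogram_model.predict(batch)
```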
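
A compiled-model sketch, again reusing mlprogram_model; get_compiled_model_path returns a temporary location, so the .mlmodelc is assumed to be copied somewhere stable before loading it with CompiledMLModel:

```python
import shutil
import numpy as np
import coremltools as ct

# Location of the compiled .mlmodelc backing this MLModel instance (temporary).
compiled_path = mlprogram_model.get_compiled_model_path()

# Copy it out before using it independently of the original MLModel.
shutil.copytree(compiled_path, "my_model.mlmodelc", dirs_exist_ok=True)

compiled_model = ct.models.CompiledMLModel("my_model.mlmodelc")
prediction = compiled_model.predict({"x": np.random.rand(1, 4).astype(np.float32)})
```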
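
A weight-metadata sketch, run on the model loaded in the data-free compression sketch; the weight_threshold value is illustrative and the printed fields are whatever the returned metadata objects expose:

```python
from coremltools.optimize.coreml import get_weights_metadata

# Maps weight names to metadata objects; weight_threshold skips tensors
# with fewer elements than the given count.
weights_metadata = get_weights_metadata(mlmodel, weight_threshold=2048)

for weight_name, weight_meta in weights_metadata.items():
    print(weight_name, weight_meta)
```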
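
A RangeDim sketch under the mlprogram backend, reusing the traced model from the conversion-defaults sketch; the flexible batch dimension is given an explicit finite upper bound, since omitting it now raises an exception:

```python
# With mlprogram, a flexible dimension must carry a positive upper bound.
flexible_shape = ct.Shape(shape=(ct.RangeDim(lower_bound=1, upper_bound=512, default=1), 4))

flexible_model = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=flexible_shape)],
)
```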

Special thanks to our external contributors for this release: @fukatani, @pcuenca, @KWiecko, @comeweber, @sercand, @mlaves, @cclauss, @smpanaro, @nikalra, @jszaday