onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

ONNX Runtime v1.10.0

Published by jingyanwangms almost 3 years ago

Announcements

  • As noted in the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider (see the sketch after this list).
    e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
  • Python 3.6 support removed for Mac builds. Since Python 3.6 reached end-of-life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards
  • Removed dependency on optional-lite
  • Removed experimental Featurizers code
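
As a minimal sketch of the new requirement in Python (the model path, input name, and input shape below are placeholders):

    import numpy as np
    import onnxruntime as ort

    # Providers must be passed explicitly for anything other than the default CPU EP.
    print(ort.get_available_providers())

    # Listing CPUExecutionProvider last keeps it as the fallback.
    session = ort.InferenceSession(
        'model.onnx',
        providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
    )
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {'input': x})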

General

  • Support for plugging in custom thread creation and join functions, enabling usage of external threads
  • Optional type support from opset 15

Performance

  • Introduced an indirect convolution method for QLinearConv with a symmetrically quantized filter (i.e., the filter type is int8 and its zero point is 0). The method uses an indirect buffer instead of memcpy'ing the original data, and avoids computing the sum of each output pixel for quantized Conv.
    • x64: new kernels for general and depthwise quantized Conv, including AVX2, AVX-VNNI, AVX512, and AVX512 VNNI variants.
    • ARM64: new kernels for depthwise quantized Conv.
  • Tensor shape optimization to avoid allocating heap memory in most cases - #9542
  • Added transpose optimizer to push and cancel transpose ops, significantly improving perf for models requiring layout transformation

API

  • Python
    • Following through on the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider.
      e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
  • C/C++
    • New API to query CUDA stream to launch a custom kernel for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - #9141
    • Updated Invalid -> OrtInvalidAllocator
    • Updated every item in OrtCudnnConvAlgoSearch to a safer global name
  • WinML
    • New APIs to create OrtValues from Windows platform specific ID3D12Resources by exposing DirectML Execution Provider specific APIs. These APIs allow DML to extend the C-API and provide EP specific extensions.
      • OrtSessionOptionsAppendExecutionProviderEx_DML
      • DmlCreateGPUAllocationFromD3DResource
      • DmlFreeGPUAllocation
      • DmlGetD3D12ResourceFromAllocation
    • Bug fix: LearningModel::LoadFromFilePath in UWP apps

Packages

  • Added Mac M1 Universal2 build support for a single binary that runs natively on both Apple silicon and Intel-based Macs. These are included in the official Nuget packages. (build instructions)
  • Windows C API Symbols are now uploaded to Microsoft symbol server
  • Nuget package now supports ARM64 Linux C#
  • Python GPU package now includes both TensorRT and CUDA EPs. Note: EPs need to be explicitly registered to ensure the correct provider is used. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure the appropriate TensorRT and CUDA dependencies are installed.

Execution Providers

  • TensorRT EP
    • Python GPU release packages now include support for TensorRT 8.0. Enable TensorrtExecutionProvider by explicitly setting providers parameter when creating an InferenceSession. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
    • Published quantized BERT model example
  • OpenVINO EP
    • Add support for OpenVINO 2021.4.x
    • Auto Plugin support
    • IO Buffer/Copy Avoidance Optimizations for GPU plugin
    • Misc fixes
  • DNNL EP
    • Add SoftmaxGrad op
    • Add Transpose, Reshape, Pow and LeakyRelu ops
    • Add DynamicQuantizeLinear op
    • Add squeeze/unsqueeze ops
  • DirectML EP
    • Update DirectML.dll from 1.5.1 to 1.8.0
    • Support full precision uint64/int64 for 48 operators
    • Add 8D support for 7 more existing operators
    • Add DynamicQuantizeLinear op
    • Accept ID3D12Resources via the C API

Mobile

  • Added Xamarin support to the ORT C# Nuget packages
    • Updated target frameworks in native package
    • iOS and Android binaries now included in native package
  • ORT format models now have a backwards-compatibility guarantee

Web

  • Support WebAssembly SIMD for qgemm kernel to accelerate the performance of quantized models
  • Upgraded existing WebGL kernels to the latest opset
  • Optimized bundle size to support various production scenarios, such as WebAssembly only or WebGL only

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, gineshidalgo99, fs-eire, gwang-msft, edgchen1, hariharans29, skottmckay, jeffdaily, baijumeswani, fdwr, smk2007, suffiank, souptc, RyanUnderhill, iK1D, yuslepukhin, chilo-ms, satyajandhyala, hanbitmyths, thiagocrepaldi, wschin, tianleiwu, pengwa, xadupre, zhanghuanrong, SherlockNoMad, wangyems, RandySheriffH, ashbhandare, tiagoshibata, yufenglee, mindest, sumitsays, MaajidKhan, gramalingam, tracysh, georgen117, jywu-msft, sfatimar, martinb35, nkreeger, ytaous, ashari4, stevenlix, chandru-r, jingyanwangms, mosdav, raviskolli, faxu, liqunfu, kit1980, weixingzhang, pranavsharma, jcwchen, chenfucn, BowenBao, jeffbloo

ONNX Runtime v1.9.1

Published by smk2007 about 3 years ago

This is a patch release on 1.9.0 with the following fixes:

  • Microsoft.AI.MachineLearning NuGet Package Fixes
    • Bug fix for GPU execution failing when the executable is on a path containing Unicode characters - 9229.
    • Bug fix allowing the NuGet package to be installed in UWP apps with 1.9 - 9182.
  • Bug fix for the OpenVINO EP Python API - 9166.
  • Bumped TVM version for the NUPHAR EP - 9159.
  • Fixed build issue for iOS 11 and earlier versions - 9036.

ONNX Runtime v1.9.0

Published by wangyems about 3 years ago

Announcements

  • GCC version < 7 is no longer supported
  • CMAKE_SYSTEM_PROCESSOR needs to be set when cross-compiling on Linux because PyTorch cpuinfo was introduced as a dependency for ARM big.LITTLE support. Set it to the value of the uname -m output on your target device.

General

  • ONNX 1.10 support
    • opset 15
    • ONNX IR 8 (SparseTensor type, model-local FunctionProtos; Optional type not yet fully supported in this release)
  • Improved documentation of C/C++ APIs
  • IBM Power support
  • WinML - DLL dependency fix supports learning models on Windows 8.1
  • Support for sub-building onnxruntime-extensions and statically linking it into the onnxruntime binary for custom builds
    • Add --use_extensions option to run models with custom operators implemented in onnxruntime-extensions

APIs

  • Registration of a custom allocator for sharing between multiple sessions. (See RegisterAllocator and UnregisterAllocator APIs in onnxruntime_c_api.h)
  • SessionOptionsAppendExecutionProvider_TensorRT API is deprecated; use SessionOptionsAppendExecutionProvider_TensorRT_V2
  • New APIs: SessionOptionsAppendExecutionProvider_TensorRT_V2, CreateTensorRTProviderOptions, UpdateTensorRTProviderOptions, GetTensorRTProviderOptionsAsString, ReleaseTensorRTProviderOptions, EnableOrtCustomOps, RegisterAllocator, UnregisterAllocator, IsSparseTensor, CreateSparseTensorAsOrtValue, FillSparseTensorCoo, FillSparseTensorCsr, FillSparseTensorBlockSparse, CreateSparseTensorWithValuesAsOrtValue, UseCooIndices, UseCsrIndices, UseBlockSparseIndices, GetSparseTensorFormat, GetSparseTensorValuesTypeAndShape, GetSparseTensorValues, GetSparseTensorIndicesTypeShape, GetSparseTensorIndices

Performance and quantization

  • Performance improvement on ARM
    • Added S8S8 (signed int8, signed int8) matmul kernel. This avoids extending uint8 to int16, for better performance on ARM64 devices without dot-product instructions
    • Expanded GEMM udot kernel to 8x8 accumulator
    • Added sgemm and qgemm optimized kernels for ARM64EC
  • Operator improvements
    • Improved performance for quantized operators: DynamicQuantizeLSTM, QLinearAvgPool
    • Added new quantized operator QGemm for quantizing Gemm directly
    • Fused HardSigmoid and Conv
  • Quantization tool - subgraph support
  • Transformers tool improvements
    • Fused Attention for BART encoder and Megatron GPT-2
    • Integrated mixed precision ONNX conversion and parity test for GPT-2
    • Updated graph fusion for embed layer normalization for BERT
    • Improved symbolic shape inference for operators: Attention, EmbedLayerNormalization, Einsum and Reciprocal

Packages

  • Official ORT GPU packages (except Python) now include both CUDA and TensorRT Execution Providers.
    • Python packages will be updated next release. Please note that EPs should be explicitly registered to ensure the correct provider is used.
  • GPU packages are built with CUDA 11.4 and should be compatible with 11.x on systems with the minimum required driver version. See: CUDA minor version compatibility
  • Pypi
    • ORT + DirectML Python packages now available: onnxruntime-directml
    • GPU package can be used on both CPU-only and GPU machines
  • Nuget
    • C#: Added support for using netstandard2.0 as a target framework
    • Windows symbol (PDB) files are no longer included in the Nuget package, reducing the size of the binary Nuget package by 85%. To download, please see the artifacts below on GitHub.

Execution Providers

  • CUDA EP

    • Framework improvements that boost CUDA performance of subgraph heavy models (#8642, #8702)
    • Support for sequence ops for improved performance for models using sequence type
    • Kernel perf improvements for Pad and Upsample (up to 4.5x faster)
  • TensorRT EP

    • Added support for TensorRT 8.0 (x64 Windows/Linux, ARM Jetson), which includes new TensorRT explicit-quantization features (ONNX Q/DQ support)
    • General fixes and quality improvements
  • OpenVINO EP

    • Added support for OpenVINO 2021.4
  • DirectML EP

    • Bug fix for Identity with non-float inputs affecting DynamicQuantizeLinear ONNX backend test

ORT Web

  • WebAssembly
    • SIMD (Single Instruction, Multiple Data) support
    • Option to load WebAssembly from worker thread to avoid blocking main UI thread
    • wasm file path override
  • WebGL
    • Simpler workflow for WebGL kernel implementation
    • Improved performance with Conv kernel enhancement

ORT Mobile

  • Added more example mobile apps
  • CoreML and NNAPI EP enhancements
  • Reduced peak memory usage when initializing session with ORT format model as bytes
  • Enhanced partitioning to improve performance when using NNAPI and CoreML
    • Reduce number of NNAPI/CoreML partitions required
    • Add ability to force usage of CPU for post-processing in SSD models
      • Improves performance by avoiding expensive device copy to/from NPU for cheap post-processing section of the model
  • Changed to using xcframework in the iOS package
    • Supports usage of arm64 iPhone simulator on Mac with Apple silicon

ORT Training

  • Expanded supported input formats to include dictionaries and lists
  • Enabled user-defined autograd functions
  • Support for fallback to PyTorch for execution
  • Added support for deterministic compute to enable reproducibility with ORTModule
  • Added DebugOptions and LogLevel to the ORTModule API to improve debuggability (see the sketch after this list)
  • Improvements and additions to kernels/gradients: Concat, Split, MatMul, ReluGrad, PadOp, Tile, BatchNormInternal
  • Support for ROCm 4.3.1 on AMD GPU
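
A sketch of the debuggability hooks, assuming the onnxruntime-training package and its ORTModule API; the import path, option names, and the Linear model are per current documentation and may differ slightly across versions:

    import torch
    from onnxruntime.training.ortmodule import ORTModule, DebugOptions, LogLevel

    # Wrap an arbitrary PyTorch module; the Linear model here is just a placeholder.
    model = torch.nn.Linear(10, 2)
    debug = DebugOptions(log_level=LogLevel.VERBOSE,  # verbose ORTModule logging
                         save_onnx=True,              # dump the exported ONNX graphs
                         onnx_prefix='my_model')
    model = ORTModule(model, debug)

    out = model(torch.randn(4, 10))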

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
edgchen1, gwang-msft, tianleiwu, fs-eire, hariharans29, skottmckay, baijumeswani, RyanUnderhill, iK1D, souptc, nkreeger, liqunfu, pengwa, SherlockNoMad, wangyems, chilo-ms, thiagocrepaldi, KeDengMS, suffiank, oliviajain, chenfucn, satyajandhyala, yuslepukhin, pranavsharma, tracysh, yufenglee, hanbitmyths, ytaous, YUNQIUGUO, zhanghuanrong, stevenlix, jywu-msft, chandru-r, duli2012, smk2007, wschin, MaajidKhan, tiagoshibata, xadupre, RandySheriffH, ashbhandare, georgen117, Tixxx, harshithapv, Craigacp, BowenBao, askhade, zhangxiang1993, gramalingam, weixingzhang, natke, tlh20, codemzs, ryanlai2, raviskolli, pranav-prakash, faxu, adtsai, fdwr, wenbingl, jcwchen, neginraoof, cschreib-ibex

ONNX Runtime v1.8.2

Published by guoyu-wang about 3 years ago

This is a minor patch release on 1.8.1 with the following changes:

Inference

  • Fix a crash issue when optimizing Conv->Add->Relu for CUDA EP
  • ORT Mobile updates
    • Change Pre-built iOS package to static framework to fix App Store submission issue
    • Support for metadata in ORT format models
    • Additional operators
    • Bug fixes

Known issues

  • cuDNN 8.0.5 causes memory leaks on T4 GPUs, as indicated by the issue; upgrading to a later version solves the problem.

ONNX Runtime v1.8.1

Published by harshithapv over 3 years ago

This release contains fixes and key updates for 1.8.0.
For all package installation details, please refer to https://www.onnxruntime.ai.

Inference

  • Fixes for GPU package loading issues
  • Fix for memory issue for models with convolution nodes while using the EXHAUSTIVE algo search mode
  • ORT Mobile updates
    • CoreML EP enabled in iOS mobile package
    • Additional operators
    • Bug fixes
    • React Native package now available

Training

Performance updates for ONNX Runtime for PyTorch (training acceleration for PyTorch models)

  • Accelerates most popular Hugging Face models as well as GPT-Neo and Microsoft TNLG and TNLU models
  • Support for PyTorch 1.8.1 and 1.9
  • Support for CUDA 10.2 and 11.1
  • Preview packages for ROCm 4.2

ONNX Runtime v1.8.0

Published by xzhu1900 over 3 years ago

Announcements

  • This release
    • Building onnxruntime from source now requires a C++ compiler with full C++14 support.
    • Builds with OpenMP are no longer published. They can still be built from source if needed. The default threadpool option should provide optimal performance for the majority of models.
    • New dependency for Python package: flatbuffers
  • Next release (v1.9)
    • Builds will require C++ 17 compiler
    • GPU build will be updated to CUDA 11.1

General

  • ONNX opset 14 support - new and updated operators from the ONNX 1.9 release
  • Dynamically loadable CUDA execution provider
    • Allows a single build to work for both CPU and GPU (excludes Python packages)
  • Profiler tool now includes information on threadpool usage
    • multi-threading preparation time
    • multi-threading run time
    • multi-threading wait time
  • [Experimental] onnxruntime-extensions package
    • Crowd-sourced library of common/shareable custom operator implementations that can be loaded and run with ONNX Runtime; community contributions are welcome! - microsoft/onnxruntime-extensions
    • Currently includes mostly ops and tokenizers for string operations (full list here)
    • Tutorials to export and load custom ops from onnxruntime-extensions: TensorFlow, PyTorch
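
A sketch of loading the onnxruntime-extensions custom op library into a session from Python, assuming the current packages (the model path is a placeholder):

    import onnxruntime as ort
    from onnxruntime_extensions import get_library_path

    so = ort.SessionOptions()
    # Point ORT at the shared library containing the custom op kernels.
    so.register_custom_ops_library(get_library_path())

    sess = ort.InferenceSession('model_with_custom_ops.onnx', so)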

Training

Mobile

  • Official package now available
  • Objective-C API for iOS in preview
  • Expanded operators supported by NNAPI (Android) and CoreML (iOS) execution providers
  • All operators in the ai.onnx domain now support type reduction
    • Create an ORT format model with the --enable_type_reduction flag, and perform a minimal build with the --enable_reduced_operator_type_support flag

ORT Web

  • New ONNX Runtime Javascript API
  • ONNX Runtime Web package
    • Support WebAssembly and WebGL for CPU and GPU
    • Support Web Worker based multi-threaded WebAssembly backend
    • Supports ORT model format
    • Improved WebGL performance

Performance

  • Memory footprint reduction through shared pre-packed weights for shared initializers

    • Pre-packing refers to weights that are pre-processed at model load time
    • Allows pre-packed weights of shared initializers to also be shared between sessions, preserving memory savings from using shared initializers
  • Memory footprint reduction through arena shrinkage

    • By default, the memory arena doesn't shrink and holds onto allocated memory indefinitely. This feature exposes a RunOption that scans the arena and potentially returns unused memory back to the system after the end of a Run. It is particularly useful when running a dynamic shape model that may occasionally process an outlier inference request requiring a large amount of memory. If the shrinkage option is invoked as part of such Runs, the memory required for that Run is not held indefinitely by the memory arena (see the sketch after this list).
  • Quantization

    • Native support of Quantize-Dequantize (QDQ) format for CPU
    • Support for Concat, Transpose, GlobalAveragePool, AveragePool, Resize, Squeeze
    • Improved performance on high-end ARM devices by leveraging dot-product instructions
    • Improved performance for batched quant GEMM with optimized multi-threading logic
    • Per-column quantization for MatMul
  • Transformers

    • GPT-2 and beam search integration (example)
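
A sketch of the arena-shrinkage run option mentioned above, assuming the current Python RunOptions config-entry API (the model path, input name, and shape are placeholders):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession('model.onnx')

    ro = ort.RunOptions()
    # Ask the CPU arena to return unused chunks to the system when this Run ends.
    ro.add_run_config_entry('memory.enable_memory_arena_shrinkage', 'cpu:0')

    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    sess.run(None, {'input': x}, run_options=ro)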

APIs

  • WinML
    • New native WinML API SetIntraOpThreadSpinning for toggling Intra Op thread spin behavior. When enabled, and when there is no current workload, IntraOp threads will continue to spin for some additional time while waiting for additional work. This can result in better performance for the current workload but may impact performance of other unrelated workloads. This toggle is enabled by default.
  • ORT Inferencing
    • The following APIs have been added to this release. Please check the API documentation for information.
      • KernelInfoGetAttributeArray_float
      • KernelInfoGetAttributeArray_int64
      • CreateArenaCfgV2
      • AddRunConfigEntry
      • CreatePrepackedWeightsContainer
      • PrepackedWeightsContainer
      • CreateSessionWithPrepackedWeightsContainer
      • CreateSessionFromArrayWithPrepackedWeightsContainer

Execution Providers

  • TensorRT
    • Added support for TensorRT EP configuration using session options instead of environment variables.
    • Added support for DLA on Jetson Xavier (AGX, NX)
    • General bug fixes and quality improvements.
  • OpenVINO
    • Added support for OpenVINO 2021.3
    • Removed support for OpenVINO 2020.4
    • Added support for Loading/Saving of Blobs on MyriadX devices to avoid expensive model blob compilation at runtime.
  • DirectML
    • Supports ARM/ARM64 architectures now in WinML and ONNX Runtime NuGet packages.
    • Support for 8-dimensional tensors to: BatchNormalization, Cast, Join, LpNormalization, MeanVarianceNormalization, Padding, Tile, TopK.
    • Substantial performance improvements for several operators.
    • Resize nearest_mode “floor” and “round_prefer_ceil”.
    • Fusion activations for: Conv, ConvTranspose, BatchNormalization, MeanVarianceNormalization, Gemm, MatMul.
    • Decomposes unsupported QLinearSigmoid operation.
    • Removes strided 64-bit emulation in Cast.
    • Allows empty shapes on constant CPU inputs.

Known issues

  • This release has an issue that may result in segmentation faults when deployed on Intel 12th Gen processors with hybrid architecture capabilities with Performance and Efficient-cores (P-core and E-core). This has been fixed in ORT 1.9.
  • The CUDA build of this release has a regression in that the memory utilization increases significantly compared to the previous releases. A fix for this will be released shortly as part of 1.8.1 patch. Here is an incomplete list of issues where this was reported - 8287, 8171, 8147.
  • GPU part of source code is not compatible with
    • Visual Studio 2019 16.10.0 (which was released on May 25, 2021); 16.9.x is fine.
    • clang 12
  • CPU part of source code is not compatible with
  • C# OpenVino EP is broken. #7951
  • Python and Windows only: if your CUDNN DLLs are not in CUDA's installation dir, then you need to manually set the "CUDNN_HOME" variable; just putting them in %PATH% is not enough. #7965
  • onnxruntime-win-gpu-x64-1.8.0.zip on this page is missing important DLLs; please don't use it.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

snnn, gwang-msft, baijumeswani, fs-eire, edgchen1, zhanghuanrong, yufenglee, thiagocrepaldi, hariharans29, skottmckay, weixingzhang, tianleiwu, SherlockNoMad, ashbhandare, tracysh, satyajandhyala, liqunfu, iK1D, RandySheriffH, suffiank, hanbitmyths, wangyems, askhade, stevenlix, chilo-ms, smk2007, kit1980, codemzs, raviskolli, pranav-prakash, chenfucn, xadupre, gramalingam, harshithapv, oliviajain, xzhu1900, ytaous, MaajidKhan, RyanUnderhill, mrry, orilevari, jingyanwangms, sfatimar, KeDengMS, jywu-msft, souptc, adtsai, tlh20, yuslepukhin, duli2012, pranavsharma, faxu, georgen117, jeffbloo, Tixxx, wschin, YUNQIUGUO, tiagoshibata, martinb35, alberto-magni, ryanlai2, Craigacp, suryasidd, fdwr, jcwchen, neginraoof, natke, BowenBao

ONNX Runtime v1.7.2

Published by smk2007 over 3 years ago

This is a minor patch release on 1.7.1 with the following changes:

ONNX Runtime v1.7.1

Published by oliviajain over 3 years ago

The Microsoft.ML.OnnxRuntime.Gpu and Microsoft.ML.OnnxRuntime.Managed packages are uploaded to Nuget.org. Please note the version numbers for the Microsoft.ML.OnnxRuntime.Managed package.

ONNX Runtime v1.7.0

Published by oliviajain over 3 years ago

Announcements

Starting from this release, all ONNX Runtime CPU packages are now built without OpenMP. A version with OpenMP is available on Nuget (Microsoft.ML.OnnxRuntime.OpenMP) and PyPi (onnxruntime-openmp). Please report any issues in GH Issues.

Note: The 1.7.0 GPU package is uploaded on this Azure DevOps Feed due to the size limit on Nuget.org. Please use 1.7.1 for the GPU package through Nuget.

Key Feature Updates

General

  • Mobile
    • Custom operators now supported in the ONNX Runtime Mobile build
    • Added ability to reduce types supported by operator kernels to only the types required by the models
      • Expect a 25-33% reduction in binary size contribution from the kernel implementations. Reduction is model dependent, but testing with common models like Mobilenet v2, SSD Mobilenet and Mobilebert achieved reductions in this range.
  • Custom op support for dynamic input
  • MKLML/openblas/jemalloc build configs removed
  • Removed dependency on gemmlowp
  • [Experimental] Audio Operators
    • Fourier Transforms (DFT, IDFT, STFT), Windowing Functions (Hann, Hamming, Blackman), and a MelWeightMatrix operator in the "com.microsoft.experimental" domain
    • Buildable using ms_experimental build flag (included in Microsoft.AI.MachineLearning NuGet package)

Performance

  • Quantization
    • Quantization tool now supports quantization of models in QDQ (QuantizeLinear-DequantizeLinear) format (see the sketch after this list)
    • Depthwise Conv quantization performance improvement
    • Quantization support added for Pad, Split and MaxPool for channel last
    • QuantizeLinear performance improvement on AVX512
    • Optimization: Fusion for Conv + Mul/Add
  • Transformers
    • Longformer Attention CUDA kernel memory footprint reduction
    • Einsum Float16 CUDA kernel for ALBERT and XLNet
    • Python optimizer tool now supports fusion for BART
    • CPU profiling tool for transformers models
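
A sketch of QDQ-format static quantization as exposed by the current onnxruntime.quantization tool; the paths, input name, shape, and the quant_format argument reflect today's API and may differ in the 1.7-era tool:

    import numpy as np
    from onnxruntime.quantization import CalibrationDataReader, QuantFormat, quantize_static

    class RandomDataReader(CalibrationDataReader):
        # Feeds a few representative (here: random placeholder) batches for calibration.
        def __init__(self):
            self._batches = ({'input': np.random.rand(1, 3, 224, 224).astype(np.float32)}
                             for _ in range(8))
        def get_next(self):
            return next(self._batches, None)

    quantize_static('model.onnx', 'model.qdq.onnx', RandomDataReader(),
                    quant_format=QuantFormat.QDQ)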

APIs and Packages

  • Python 3.8 and 3.9 support added for all platforms, removed support for 3.5
  • ARM32/64 Windows builds are now included in the CPU Nuget and zip packages
  • WinML
    • .NET5 support - will work with .NET5 Standard 2.0 Projections
    • Image descriptors expose NominalPixelRange properties
      • Native support added for additional pixel ranges [0..1] and [-1..1] in image models.
      • A new property is added to the ImageFeatureDescriptor runtimeclass to expose the ImageNominalPixelRange property in ImageFeatureDescriptor. Other similar properties exposed are the image’s BitmapPixelFormat and BitmapAlphaMode.
    • Bug fixes and performance improvements, including #6249
  • [Experimental] Model Building API available under the Microsoft.AI.MachineLearning.Experimental namespace. (included in Microsoft.AI.MachineLearning NuGet package)
    • Can be used to create dynamic models on the fly to enable engine-optimized and hardware-accelerated dynamic tensor featurization (code sample)

Execution Providers

  • CUDA EP
    • Official GPU build now built with CUDA 11
  • OpenVINO EP
    • Support for OpenVINO 2021.2
    • Deprecated support for OpenVINO 2020.2
    • Support for OpenVINO EP options in onnxruntime_perf_test tool
    • General fixes
  • TensorRT EP
    • Support for TensorRT 7.2
    • General fixes and perf improvements
  • DirectML EP
    • Support for DirectML 1.4.2
    • DirectML PIX markers added to enable profiling graph at operator level.
  • NNAPI EP
    • Performance improvement for quantized models
    • Support of per-channel quantization for QLinearConv
    • Additional operator support – Min/Max/Pow

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
edgchen1, snnn, skottmckay, gwang-msft, hariharans29, tianleiwu, xadupre, yufenglee, ryanlai2, wangyems, suffiank, liqunfu, orilevari, baijumeswani, weixingzhang, pranavsharma, RandySheriffH, ashbhandare, oliviajain, smk2007, tracysh, stevenlix, fs-eire, Craigacp, faxu, mrry, codemzs, chilo-ms, jcwchen, zhanghuanrong, SherlockNoMad, iK1D, askhade, zhangxiang1993, yuslepukhin, tlh20, MaajidKhan, wschin, smkarlap, wenbingl, pengwa, duli2012, natke, alberto-magni, Tixxx, HectorSVC, jingyanwangms, jstoecker, kit1980, suryasidd, RandyShuai, sfatimar, jywu-msft, liuziyue, mosdav, thiagocrepaldi, souptc, fdwr

ONNX Runtime v1.6.0

Published by duli2012 almost 4 years ago

Announcements

  • OpenMP will be disabled in future official builds (build option will still be available). A NoOpenMP version of ONNX Runtime is now available with this release on Nuget and PyPi for C/C++/C#/Python users.
  • In the next release, the MKL-ML, openblas, and jemalloc build options will be removed, and the Microsoft.ML.OnnxRuntime.MKLML Nuget package will no longer be published. Users of MKL-ML are recommended to use the Intel EPs. If you are using these options and identify issues switching to an alternative build, please file an issue with details.

Key Feature Updates

General

  • ONNX 1.8 support / opset 13
  • New contrib ops: BiasSoftmax, MatMulIntegerToFloat, QLinearSigmoid, Trilu
  • ORT Mobile now compatible with NNAPI for accelerating model execution on Android devices
  • Build support for Mac with Apple Silicon (CPU only)
  • New dependency: flatbuffers
  • Support for loading sparse tensor initializers in pruned models
  • Support for setting the execution priority of a node
  • Support for selection of cuDNN conv algorithms
  • BERT Model profiling tool

Performance

  • New session option to disable denormal floating-point numbers on SSE3-supporting CPUs
    • Eliminates unexpected performance degradation due to denormals without needing to retrain the model
  • Option to share initializers between sessions to improve memory utilization
    • Useful when several models that share the same set of initializers (except for the last few layers) are loaded in the same process
    • Eliminates wasteful memory usage when every model (session) creates a separate instance of the same initializer
    • Exposed by the AddInitializer API (see the sketch after this list)
  • Transformer model optimizations
    • Longformer: LongformerAttention CUDA operator added
    • Support for BERT models exported from Tensorflow with 1 or 2 inputs
    • Python optimizer supports additional models: openai-GPT, ALBERT and FlauBERT
  • Quantization
    • Support of per-channel QuantizeLinear and DeQuantizeLinear
    • Support of LSTM quantization
    • Quantization performance improvement on ARM
    • CNN quantization perf optimizations, including u8s8 support and NHWC transformer in QLinearConv
  • ThreadPool
    • Use _mm_pause() for spin loop to improve performance and power consumption
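
A sketch of initializer sharing via the Python binding of AddInitializer; the initializer name, shape, and model paths are placeholders and must match initializers actually present in the models:

    import numpy as np
    import onnxruntime as ort

    # One weight instance, shared by every session that registers it.
    w = ort.OrtValue.ortvalue_from_numpy(np.random.rand(256, 256).astype(np.float32))

    so = ort.SessionOptions()
    so.add_initializer('fc.weight', w)

    # Both sessions reference the same buffer instead of materializing their own copies.
    sess_a = ort.InferenceSession('model_a.onnx', so)
    sess_b = ort.InferenceSession('model_b.onnx', so)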

APIs and Packages

  • Python - I/O Binding enhancements
    • Usage Documentation (OrtValue and IOBinding sections)
    • Python binding for the OrtValue data structure
      • An interface is exposed to allocate memory on a CUDA-supported device and define the contents of this memory. Users no longer need allocators provided by other libraries to allocate and manage CUDA memory to be used with ORT.
      • Allows consuming ORT-allocated device memory as an OrtValue (check Scenario 4 in the IOBinding section of the documentation for an example)
    • OrtValue instances can be used to bind inputs/outputs, in addition to the existing interfaces that bind a piece of memory directly or via numpy arrays. This is particularly useful when binding ORT-allocated device memory (see the sketch after this list).
  • C# - float16 and bfloat16 support
  • Windows ML
    • NuGet package now supports UWP applications targeting Windows Store deployment for both CPU and GPU
    • Minor API Improvements:
      • Able to bind IIterable as inputs and outputs
      • Able to create Tensor* via multiple buffers
    • WindowsAI Redist now includes a statically linked C-Runtime package for additional deployment options
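
A sketch of OrtValue-based binding, assuming a CUDA build of the Python package (the model path, tensor names, and shapes are placeholders):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession('model.onnx')

    # Allocate the input on the CUDA device, and pre-allocate the output there as well.
    x = ort.OrtValue.ortvalue_from_numpy(
        np.random.rand(1, 3, 224, 224).astype(np.float32), 'cuda', 0)
    y = ort.OrtValue.ortvalue_from_shape_and_type([1, 1000], np.float32, 'cuda', 0)

    io = sess.io_binding()
    io.bind_ortvalue_input('input', x)
    io.bind_ortvalue_output('output', y)
    sess.run_with_iobinding(io)

    print(y.numpy())  # copy the device-resident result back to CPU for inspection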

Execution Providers

  • DNNL EP Updates
    • DNNL updated from 1.1.1 to 1.7
  • NNAPI EP Updates
    • Support for CNN models
    • Additional operator support - Resize/Flatten/Clip
  • TensorRT EP Updates
    • Int8 quantization support (experimental)
    • Engine cache refactoring and improvements
    • General fixes and performance improvements
  • OpenVINO EP Updates
    • OpenVINO 2021.1 support
    • OpenVINO EP builds as shared library
    • Multi-threaded inferencing support
    • fp16 input type support
    • Multi-device plugin support
    • Hetero plugin support
    • Enable build on ARM64
  • DirectML EP Updates (1.3.0 -> 1.4.0)
    • Utilizing the first public standalone release of the DirectML API through the DirectML NuGet package release
    • General fixes and improvements
  • nGraph EP is removed. We recommend using OpenVINO instead

Additional notes

  • VCRuntime2019 with OpenMP: pinning a process to NUMA node 1 forces the execution to be single threaded. Fix is in progress in VC++.
    • Workaround: place the VS2017 vcomp DLL side-by-side so that ORT uses the VS2017 version
  • Pip version >=20.3 is required for use on macOS Big Sur (11.x)
  • The destructor of OrtEnv is now non-trivial and may do DLL unloading. Do not call ReleaseEnv from DllMain or put OrtEnv in global variables; it is not safe to call FreeLibrary from DllMain. - reference
  • Some unit tests fail on Pascal GPUs. See: https://github.com/microsoft/onnxruntime/issues/5914
  • If using the default CPU package (built with OpenMP), consider tuning the OpenMP settings to improve performance. By default, the number of threads used for OpenMP parallel regions is set to the number of logical CPUs. This may not be optimal for machines with hyper-threading; when CPUs are oversubscribed, the 99th-percentile latency could be 10x greater. Setting the OMP_NUM_THREADS environment variable to the number of physical cores is a good starting point. As noted in Announcements, future official builds of ORT will be published without OpenMP.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
gwang-msft, snnn, skottmckay, edgchen1, hariharans29, wangyems, yufenglee, yuslepukhin, tianleiwu, SherlockNoMad, tracysh, ryanlai2, askhade, xadupre, liqunfu, RandySheriffH, jywu-msft, KeDengMS, pranavsharma, mrry, ashbhandare, iK1D, RyanUnderhill, MaajidKhan, wenbingl, kit1980, weixingzhang, tlh20, suffiank, Craigacp, smkarlap, stevenlix, zhanghuanrong, sfatimar, ytaous, tiagoshibata, fdwr, oliviajain, alberto-magni, jcwchen, mosdav, xzhu1900, wschin, codemzs, duli2012, smk2007, natke, zhijxu-MS, manashgoswami, zhangxiang1993, faxu, HectorSVC, take-cheeze, jingyanwangms, chilo-ms, YUNQIUGUO, jgbradley1, jessebenson, martinb35, Andrews548, souptc, pengwa, liuziyue, orilevari, BowenBao, thiagocrepaldi, jeffbloo

ONNX Runtime v1.5.3

Published by RyanUnderhill almost 4 years ago

This is a minor patch release on 1.5.2 with the following changes:

  • Fix shared provider unload crash #5553
  • Minor minimal build header fix

ONNX Runtime v1.5.2

Published by tianleiwu about 4 years ago

This is a minor patch release on 1.5.1 with the following changes:

ONNX Runtime Training RC3.1

Published by edgchen1 about 4 years ago

Fixes issue discovered during validation.

Changes:

ONNX Runtime Training RC3

Published by edgchen1 about 4 years ago

ONNX Runtime v1.5.1

Published by tianleiwu about 4 years ago

Key Updates

General

  • Reduced Operator Kernel build allows ORT binaries to be built with only required operators in the model(s) - learn more
  • [Preview] ORT for Mobile Platforms - minimizes build size for mobile and embedded devices - learn more
  • Transformer model inferencing performance optimizations
    • Perf improvement for DistilBERT
    • Benchmark tool supports more pretrained models
  • Improvements in quantization tool
    • Support for quantization-aware training models
    • Calibration tool now supports general preprocessing and calibration on input
    • Simplified quantization APIs (see the sketch after this list)
    • Support for models larger than 2 GB
  • New operators for static quantization: QLinearMul, QLinearAdd, QLinearSigmoid and QLinearLeakyRelu
  • Prepack constant matrix B for float GEMM (MatMul, Attention)
  • Limited Python 3.8 support added in addition to 3.5-3.7 for official Python packages. Not yet supported for Windows GPU and Linux ARM builds.
  • Telemetry enabled in Java and NodeJS packages for Windows builds. Note: data is not directly sent to Microsoft or ORT teams by ONNX Runtime; enabling telemetry means trace events are collected by the Windows operating system and may be sent to the cloud based on the user's privacy settings - learn more.
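
A sketch of the simplified API using today's onnxruntime.quantization module (paths are placeholders; option names have evolved since this release):

    from onnxruntime.quantization import QuantType, quantize_dynamic

    # Weight-only dynamic quantization: activations are quantized on the fly at runtime.
    quantize_dynamic('model.onnx', 'model.quant.onnx', weight_type=QuantType.QUInt8)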

API

  • Python API support for RegisterCustomOpsLibrary
  • IO Binding API for C/C++/C# language bindings. This allows the use of pre-allocated buffers on target devices, as well as specifying the target device for outputs with unknown shapes.
  • Sharing of allocators between multiple sessions. This allows much better utilization of memory by not creating a separate arena for each session in the same process. See this for details.

Windows ML

  • NuGet package now supports UWP applications targeting Windows Store deployment (CPU only)
  • NuGet package now supports .NET and .NET framework applications
  • Rust developers can now deploy Windows ML – sample and documentation available here
  • New APIs for additional performance control:
    • IntraopNumThreads: Provides an ability to change the number of threads used in the threadpool for Intra Operator Execution for CPU operators through LearningModelSessionOptions.
    • SetNamedDimensionOverrides: Provides the ability to override named input dimensions to concrete values through LearningModelSessionOptions in order to achieve better runtime performance.
  • Support for additional ONNX format image type denotations – Gray8, normalized [0..1] and normalized [-1..1]
  • Reduced Windows ML package size by separating debug symbols into a separate distribution package.

Execution Providers

  • CUDA updates
    • CUDA 10.2 / cuDNN 8.0 in official package
    • CUDA 11 support added and available to build from source
    • CUDA conv kernel now supports asymmetric padding, to fully support models such as YoloV3 for improved GPU perf
  • TensorRT EP updates
    • Support for TensorRT 7.1
    • Added TensorRT engine caching feature, turned on by setting the env variable ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 (see the sketch after this list)
    • TensorRT builds are now built with the Execution Provider as a separate DLL. If enabled in the build, the provider will be available as a shared library. This was previously also enabled for the DNNL EP (ORT 1.3). Other Execution Providers will be added in the future.
  • OpenVINO EP updates
    • Support for OpenVINO 2020.4
    • Added runtime options for VPU hardware to select specific hardware device and enable fast compilation of models.
    • Enable C# binding support for OpenVINO EP
  • DirectML EP updates
    • API available for Python (build from source) and C# Microsoft.ML.OnnxRuntime.DirectML
    • 7 new operators for ONNX 1.7 (opset 12): Celu, GreaterOrEqual, LessOrEqual, ArgMin/Max with select_last_index, GatherND with batch_dim, RoiAlign
    • New data integer types were added to existing operators: Clip int, Max int, Min int, MaxPool int8, ReduceMin int8, ReduceMax int8, Pow int exponent
    • Higher dimension support 1D to 8D added to these operators: ElementWise*, Activation*, Reduce*, ArgMin/ArgMax, Gather*, Scatter*, OneHot
    • 64-bit support for indices on GPUs that support it: Gather, Scatter, OneHot, ArgMax/ArgMin, Cast.
  • Android NNAPI EP updates:
    • Support for dynamic input shape
    • Int32/float32/uint8 data type
    • 50% more supported operators (36 total)
    • Support for Uint8 static quantization
    • Smaller binary size
    • Lower memory consumption
    • CPU fallback for Android API level 26 and lower
  • MiGraphX EP updates
    • Added ONNX operators: GatherElements, NonZero, Equal, and Where
    • Support for Boolean data type
    • Improve support for existing operators:
      • Asymmetric padding of AveragePool
      • Multi-dimensional support for Convolution, Pooling, LRN, and BatchNormalization
      • Ceil mode support for AveragePool and MaxPool
      • More general approach to check whether constant folding is possible
    • Improved graph partitioning logic
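
A sketch of enabling the TensorRT engine cache from Python using the environment variable named above; the model path is a placeholder, the variable must be set before the session is created, and the providers argument reflects the current API:

    import os
    import onnxruntime as ort

    # Cache built TensorRT engines on disk to avoid rebuilding them on every startup.
    os.environ['ORT_TENSORRT_ENGINE_CACHE_ENABLE'] = '1'

    sess = ort.InferenceSession(
        'model.onnx',
        providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])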

Training (RC3 release)

  • New and improved API to simplify integration with PyTorch trainer code - see instructions here
  • Updated CUDA 11 / cuDNN 8.0 support to accelerate training on NVIDIA A100

Dependency updates

macOS binaries now require OpenMP to be installed. See this for reference.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

gwang-msft, snnn, skottmckay, hariharans29, thiagocrepaldi, tianleiwu, wangyems, RandySheriffH, yufenglee, SherlockNoMad, smk2007, jywu-msft, liqunfu, edgchen1, yuslepukhin, tiagoshibata, fdwr, ashbhandare, iK1D, wschin, BowenBao, zhanghuanrong, RyanUnderhill, ryanlai2, askhade, pranavsharma, martinb35, suffiank, ytaous, KeDengMS, rayankrish, natke, YUNQIUGUO, range4life, smkarlap, zhangxiang1993, xzhu1900, codemzs, weixingzhang, stevenlix, tracysh, mosdav, jingyanwangms, tlh20, souptc, orilevari, kit1980, yangchen-MS, faxu, fs-eire, wenbingl, chilo-ms, xkszltl, Andrews548, yuzawa-san, MaximKalininMS, jgbradley1, nickfeeney, zhijxu-MS, Tixxx, suryasidd, Craigacp, duli2012, jeffbloo

ORTTraining RC2

Published by ytaous about 4 years ago

ONNX Runtime v1.4.0

Published by yuslepukhin over 4 years ago

Key Updates

  • Performance optimizations for Transformer models
    • GPT2 - Enable optimizations for Attention with Past State and Attention Mask
    • BERT - Improve EmbedLayerNormalization fusion coverage
  • Quantization updates
    • Added new quantization operators: QLinearAdd, QAttention
    • Improved quantization performance for transformer based models on CPU
      • More graph fusion
      • Further optimization in MLAS kernel
      • Introduced pre-packing for constant Matrix B of DynamicQuantizeMatMul and QAttention
  • New Python IOBinding APIs (bind_cpu_input, bind_output, copy_outputs_to_cpu) allow easier benchmarking (see the sketch after this list)
    • Users no longer need to allocate inputs and outputs on non-CPU devices using third-party allocators.
    • Users no longer need to copy inputs to non-CPU devices; ORT handles the copy.
    • Users can now use copy_outputs_to_cpu to copy outputs from non-CPU devices to CPU for verification.
  • CUDA support for Einsum (opset12)
  • ONNX Runtime Training updates
    • Opset 12 support
    • New sample for training experiment using Huggingface GPT-2.
      • Upgraded docker image built from the latest PyTorch release
  • Telemetry is now enabled by default for Python packages and Github release zip files (C API); see more details on what/how telemetry is collected in ORT
  • [Coming soon] Availability of Python package for ONNX Runtime 1.4 for Jetpack 4.4
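
A sketch of the new IOBinding APIs named above (the model path, tensor names, and input shape are placeholders):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession('model.onnx')
    io = sess.io_binding()

    # ORT copies this CPU input to the session's device; the output is allocated by ORT.
    io.bind_cpu_input('input', np.random.rand(1, 3, 224, 224).astype(np.float32))
    io.bind_output('output')

    sess.run_with_iobinding(io)
    result = io.copy_outputs_to_cpu()[0]  # bring the output back to CPU for verification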

Execution Providers

New Execution Providers available for preview:

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

snnn, tianleiwu, edgchen1, hariharans29, skottmckay, tracysh, yufenglee, fs-eire, codemzs, tiagoshibata, yuslepukhin, gwang-msft, wschin, smk2007, prabhat00155, liuziyue, liqunfu, ytaous, iK1D, BowenBao, askhade, pranavsharma, faxu, jywu-msft, ryanlai2, xzhu1900, KeDengMS, tlh20, smkarlap, weixingzhang, jeffbloo, RyanUnderhill, mrry, jgbradley1, stevenlix, zhanghuanrong, suffiank, Andrews548, pengwa, SherlockNoMad, orilevari, duli2012, yangchen-MS, yan12125, jornt-xilinx, ashbhandare, neginraoof, Tixxx, thiagocrepaldi, Craigacp, mayeut, chilo-ms, prasanthpul, martinb35, manashgoswami, zhangxiang1993, suryasidd, wangyems, kit1980, RandySheriffH, fdwr

ONNX Runtime v1.3.1

Published by stevenlix over 4 years ago

This update includes changes to support the published packages for the Java and Node.js APIs for the 1.3.0 release.

For all other APIs/builds, the 1.3.0 release packages are suggested. 1.3.1 does address the 1.3.0 issue of a crash when setting IntraOpNumThreads using the C/C++/C# API, so if this fix is needed it can be built from source using this release branch (with official release support).

ONNX Runtime v1.3.0

Published by stevenlix over 4 years ago

Key Updates

General

  • ONNX 1.7 support
    • Opset 12
    • Function expansion support that enables several new ONNX 1.7 ops such as NegativeLogLikelihoodLoss, GreaterOrEqual, LessOrEqual, Celu to run without a kernel implementation.
  • [Preview] ONNX Runtime Training
    • ONNX Runtime Training is a new capability released in preview to accelerate training transformer models. See the sample here to use this feature in your training experiments.
  • Improved threadpool support for better resource utilization
    • Improved threadpool abstractions that switch between OpenMP and Eigen threadpools based on build settings. All operators have been updated to use these new abstractions.
    • The improved Eigen-based threadpool now allows ops to provide a cost (among other things, like thread affinity) for operations
    • Simpler configuration of thread count. If built with OpenMP, use the OpenMP env variables; else use the ORT APIs to configure the number of threads (see the sketch after this list).
    • Support for sessions to share global threadpool. See this for more information.
  • Performance improvements
    • ~10% average measured latency improvements amongst key representative models (including ONNX model zoo models, MLPerf, and production models shipped in Microsoft products)
    • Further latency improvements for Transformer models on CPU and GPU - benchmark script
    • Improved batch inferencing latency for scikit-learn models for large batch sizes
      • Significant improvements in the implementations of the following ONNX operators: TreeEnsembleRegressor, TreeEnsembleClassifier, LinearRegressor, LinearClassifier, SVMRegressor, SVMClassifier, TopK
    • C# API optimizations - PR3171
  • Telemetry enabled for Windows (more details on telemetry collection)
  • Improved error reporting when a kernel cannot be found due to missing type implementation
  • Minor fixes based on static code analysis
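
For non-OpenMP builds, a sketch of configuring thread counts through the Python API (the model path is a placeholder):

    import onnxruntime as ort

    so = ort.SessionOptions()
    so.intra_op_num_threads = 4  # threads used within an individual operator
    so.inter_op_num_threads = 1  # threads used to run independent operators in parallel

    sess = ort.InferenceSession('model.onnx', so)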

Dependency updates

Please note that this version of onnxruntime depends on the Visual C++ 2019 runtime. Previous versions depended on Visual C++ 2017. Please also refer to https://github.com/microsoft/onnxruntime/tree/rel-1.3.0#system-requirements for the full set of system requirements.

APIs and Packages

  • [General Availability] Windows Machine Learning APIs - package published on Nuget - Microsoft.AI.MachineLearning
    • Performance improvements
    • Opset updates
  • [General Availability] ONNX Runtime with DirectML package published on Nuget -Microsoft.ML.OnnxRuntime.DirectML
  • [General Availability] Java API - Maven package coming soon.
  • [Preview] Javascript (node.js) API now available to build from the master branch.
  • ARM64 Linux CPU Python package now available on Pypi. Note: this requires building ONNX for ARM64.
  • Nightly dev builds from master (Nuget feed, TestPypi-CPU, GPU)
  • API Updates
    • I/O binding support for Python API - This reduces execution time significantly by allowing users to setup inputs/outputs on the GPU prior to model execution.
    • API to specify free dimensions based on both denotations and symbolic names.

Execution Providers

  • OpenVINO v2.0 EP
  • DirectML EP updates
    • Updated graph interface to abstract GPU-dependent graph optimization
    • ONNX opset 10 and 11 support
    • Initial support of 8bit and quantized operators
    • Performance optimizations
  • [Preview] Rockchip NPU EP
  • [Preview] Xilinx FPGA Vitis-AI EP
  • Capability to build execution providers as DLLs - supported for DNNL EP, work in progress for other EPs.
    • If enabled in the build, the provider will be available as a shared library. Previously, EPs had to be statically linked with the core code.
    • No runtime cost to include the EP if it isn't loaded; can now dynamically decide when to load it based on the model

Contributions

We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Adam Pocock, pranavm-nvidia, Andrew Kane, Takeshi Watanabe, Jianhao Zhang, Colin Jermain, Andrews548, Jan Scholz, Pranav Prakash, suryasidd, and S. Manohar Karlapalem.

The ONNX Runtime Training code was originally developed internally at Microsoft, before being ported to Github. We’d like to recognize the original contributors: Aishwarya Bhandare, Ashwin Kumar, Cheng Tang, Du Li, Edward Chen, Ethan Tao, Fanny Nina Paravecino, Ganesan Ramalingam, Harshitha Parnandi Venkata, Jesse Benson, Jorgen Thelin, Ke Deng, Liqun Fu, Li-Wen Chang, Peng Wang, Sergii Dymchenko, Sherlock Huang, Stuart Schaefer, Tao Qin, Thiago Crepaldi, Tianju Xu, Weichun Wang, Wei Zuo, Wei-Sheng Chin, Weixing Zhang, Xiaowan Dong, Xueyun Zhu, Zeeshan Siddiqui, and Zixuan Jiang.

Known Issues

  1. The source doesn't compile on Ubuntu 14.04. See #4048
  2. Crash when setting IntraOpNumThreads using the C/C++/C# API. Fix is available in the master branch.
    Workaround: setting IntraOpNumThreads is inconsequential when using an ORT build with OpenMP enabled, so it can be safely commented out. Use the OpenMP env variables to set the threading params for OpenMP-enabled builds (which is the recommended way).

ONNX Runtime v1.2.0

Published by yufenglee over 4 years ago

Key Updates

Execution Providers

  • [Preview] Availability of Windows Machine Learning (WinML) APIs in Windows builds of ONNX Runtime, with DirectML for GPU acceleration
    • Windows ML is a WinRT API designed specifically for Windows developers that already ships as an inbox component in newer Windows versions
    • Compatible with Windows 8.1 for CPU and Windows 10 1709 for GPU
    • Available as source code on GitHub and in pre-built Nuget packages (windows.ai.machinelearning.dll)
    • For additional documentation and samples on getting started, visit the Windows ML API Reference documentation
  • TensorRT Execution Provider upgraded to TRT 7
  • CUDA updated to 10.1
    • Linux build requires CUDA Runtime 10.1.243, cublas10-10.2.1.243, and CUDNN 7.6.5.32. Note: cublas 10.1.x will not work
    • Windows build requires CUDA Runtime 10.1.243, CUDNN 7.6.5.32
    • onnxruntime now depends on curand lib, which is part of the CUDA SDK. If you already have the SDK fully installed, then it won't be an issue

Builds and Packages

  • Nuget package structure updated. There is now a separate Managed Assembly (Microsoft.ML.OnnxRuntime.Managed) shared between the CPU and GPU Nuget packages. The "native" Nuget will depend on the "managed" Nuget to bring it into relevant projects automatically. PR 3104. Note that this should be transparent for customers installing the Nuget packages. ORT package details are here.
  • Build system: support getting dependencies from vcpkg (a C++ package manager for Windows, Linux, and MacOS)
  • Capability to generate an onnxruntime Android Archive (AAR) file from source, which can be imported directly in Android Studio

API Updates

  • SessionOptions:
    • default value of max_num_graph_transformation_steps increased to 10
    • default value of graph optimization level is changed to ORT_ENABLE_ALL (99) (see the sketch after this list)
  • OrtEnv can be created/destroyed multiple times
  • Java API
    • Gradle now required to build onnxruntime
    • Available on Android
  • C API Additions:
    • GetDenotationFromTypeInfo
    • CastTypeInfoToMapTypeInfo
    • CastTypeInfoToSequenceTypeInfo
    • GetMapKeyType
    • GetMapValueType
    • GetSequenceElementType
    • ReleaseMapTypeInfo
    • ReleaseSequenceTypeInfo
    • SessionEndProfiling
    • SessionGetModelMetadata
    • ModelMetadataGetProducerName
    • ModelMetadataGetGraphName
    • ModelMetadataGetDomain
    • ModelMetadataGetDescription
    • ModelMetadataLookupCustomMetadataMap
    • ModelMetadataGetVersion
    • ReleaseModelMetadata
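
A sketch of setting the optimization level explicitly via the Python API; ORT_ENABLE_ALL is now the default, so this is only needed to be explicit or to pick a lower level (the model path is a placeholder):

    import onnxruntime as ort

    so = ort.SessionOptions()
    so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

    sess = ort.InferenceSession('model.onnx', so)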

Operators

  • This release introduces a change to the forward-compatibility pattern ONNX Runtime previously followed. This change was added to guarantee correctness of model prediction and removes behavior ambiguity due to missing opset information. This release adds a model opset number and IR version check - ONNX Runtime will not support models with ONNX versions higher than the supported opset implemented for that version (see version matrix). If higher opset versions are needed, consider using custom operators via ORT's custom schema/kernel registry mechanism.
  • Int8 type support for Where Op
  • Updates to Contrib ops:
    • Changes: ReorderInput in kMSNchwcDomain, SkipLayerNormalization
    • New: QLinearAdd, QLinearMul, QLinearReduceMean, MulInteger, QLinearAveragePool
  • Added featurizer operators as an expansion of Contrib operators - these are not part of the official build and are experimental

Contributions

We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Eric Cousineau (Toyota Research Institute), Adam Pocock (Oracle), tinchi, Changyoung Koh, Andrews548, Jianhao Zhang, nicklas-mohr-jas, James Yuzawa, William Tambellini, Maher Jendoubi, Mina Asham, Saquib Nadeem Hashmi, Sanster, and Takeshi Watanabe.