onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

MIT License


onnxruntime - ONNX Runtime v1.19.2

Published by MaanavD about 1 month ago

Announcements

  • ORT 1.19.2 is a small patch release that fixes several broken workflows and includes additional bug fixes.

Build System & Packages

  • Fixed the signing of native DLLs.
  • Disabled absl symbolize in Windows Release build to avoid dependency on dbghelp.dll.

Training

  • Restored support for CUDA compute capability 7.0 and 7.5 with CUDA 12, and 6.0 and 6.1 with CUDA 11.
  • Several fixes for training CI pipelines.

Mobile

  • Fixed ArgMaxOpBuilder::AddToModelBuilderImpl() nullptr Node access for CoreML EP.

Generative AI

  • Added CUDA kernel for Phi3 MoE.
  • Added smooth softmax support in CUDA and CPU kernels for the GroupQueryAttention operator.
  • Fixed the number-of-splits calculation in the GroupQueryAttention CUDA operator.
  • Enabled causal support in the MultiHeadAttention CUDA operator.

Contributors

@prathikr, @mszhanyi, @edgchen1, @tianleiwu, @wangyems, @aciddelgado, @mindest, @snnn, @baijumeswani, @MaanavD

Thanks to everyone who helped ship this release smoothly!

Full Changelog: https://github.com/microsoft/onnxruntime/compare/v1.19.0...v1.19.2

onnxruntime - ONNX Runtime v1.19

Published by MaanavD 2 months ago

Announcements

  • Training (PyPI) packages are delayed from the package-manager release due to publishing errors. Feel free to contact @maanavd if you need release candidates for some workflows ASAP; in the meantime, binaries are attached to this post. This message will be deleted once this ceases to be the case. Thanks for your understanding :)
  • Also note that the wrong commit was initially tagged with v1.19.0. The final commit has since been correctly tagged: https://github.com/microsoft/onnxruntime/commit/26250ae74d2c9a3c6860625ba4a147ddfb936907. This shouldn't affect much, but sorry for the inconvenience!

Build System & Packages

  • Added support for NumPy 2.x.
  • Qualcomm SDK has been upgraded to 2.25
  • ONNX has been upgraded from 1.16 → 1.16.1
  • Default GPU packages now use CUDA 12.x and cuDNN 9.x (previously CUDA 11.x/cuDNN 8.x). CUDA 11.x/cuDNN 8.x packages have moved to the aiinfra VS feed. (A quick sanity check appears after this list.)
  • TensorRT 10.2 support added
  • Introduced Java CUDA 12 packages on Maven.
  • Discontinued support for Xamarin. (Xamarin reached EOL on May 1, 2024)
  • Discontinued support for macOS 11 and increased the minimum supported macOS version to 12. (macOS 11 reached EOL in September 2023)
  • Discontinued support for iOS 12 and increased the minimum supported iOS version to 13.
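
One way to confirm which build you picked up after this packaging change (a minimal sketch, not part of the release notes): check the installed version and whether the CUDA execution provider is exposed.

    import onnxruntime as ort

    # Installed ONNX Runtime version, e.g. "1.19.x"
    print(ort.__version__)

    # The default GPU package (CUDA 12.x / cuDNN 9.x) should list "CUDAExecutionProvider" here
    print(ort.get_available_providers())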

Core

Performance

  • Added QDQ support for INT4 quantization in CPU and CUDA Execution Providers
  • Implemented FlashAttention on CPU to improve performance for GenAI prompt cases
  • Improved INT4 performance on CPU (X64, ARM64) and NVIDIA GPUs

Execution Providers

  • TensorRT

    • Updated to support TensorRT 10.2
    • Removed calls to deprecated APIs
    • Enabled refittable embedded engines when the ONNX model is provided as a byte stream
  • CUDA

    • Upgraded CUTLASS to 3.5.0 to improve the performance of memory-efficient attention.
    • Updated MultiHeadAttention and Attention operators to be thread-safe.
    • Added the sdpa_kernel provider option to choose the kernel used for Scaled Dot-Product Attention (see the sketch after this list).
    • Expanded op support - Tile (bf16)
  • CPU

    • Expanded op support - GroupQueryAttention, SparseAttention (for Phi-3 small)
  • QNN

    • Updated to support QNN SDK 2.25
    • Expanded op support - HardSigmoid, ConvTranspose 3d, Clip (int32 data), Matmul (int4 weights), Conv (int4 weights), prelu (fp16)
    • Expanded fusion support – Conv + Clip/Relu fusion
  • OpenVINO

    • Added support for OpenVINO 2024.3
    • Support for enabling EpContext using session options
  • DirectML

    • Updated DirectML from 1.14.1 → 1.15
    • Updated ONNX opset from 17 → 20
    • Opset 19 and Opset 20 are supported with known caveats:
      • Gridsample 20: 5d not supported
      • DeformConv not supported
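
A hedged sketch (not from the release notes) of passing the new sdpa_kernel option to the CUDA EP from Python; the value shown (0, i.e. default kernel selection) is an assumption.

    import onnxruntime as ort

    providers = [
        # "sdpa_kernel" selects the Scaled Dot-Product Attention kernel; 0 = default choice (assumed)
        ("CUDAExecutionProvider", {"device_id": 0, "sdpa_kernel": 0}),
        "CPUExecutionProvider",
    ]
    sess = ort.InferenceSession("model.onnx", providers=providers)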

Mobile

Web

  • Updated JavaScript packaging to align with best practices; note that this may introduce slight incompatibilities for apps that bundle onnxruntime-web
  • Improved CPU operator coverage for WebNN (now supported by Chrome)

Training

  • No specific updates

GenAI

  • Added support for new models: Qwen, Llama 3.1, Gemma 2, and Phi-3 small
  • Added support for building quantized models with the AWQ and GPTQ methods
  • Performance improvements for Intel and Arm CPUs
  • Packaging and language bindings
    • Added Java bindings (build from source)
    • Separated OnnxRuntime.dll and directml.dll out of the GenAI package to improve usability
    • Published packages for Windows Arm
    • Support for Android (build from source)
  • Bug fixes, such as the long-prompt correctness issue for Phi-3.

Extensions

  • Added C APIs for language, vision and audio processors including new FeatureExtractor for Whisper
  • Support for Phi-3 Small Tokenizer and new OpenAI tiktoken format for fast loading of BPE tokenizers
  • Added new CUDA custom operators such as MulSigmoid, Transpose2DCast, ReplaceZero, AddSharedInput and MulSharedInput
  • Enhanced Custom Op Lite API on GPU and fused kernels for DORT
  • Bug fixes, including null bos_token for Qwen2 tokenizer and SentencePiece converted FastTokenizer issue on non-ASCII characters, as well as necessary updates for MSVC 19.40 and numpy 2.0 release

Contributors

Changming Sun, Baiju Meswani, Scott McKay, Edward Chen, Jian Chen, Wanming Lin, Tianlei Wu, Adrian Lizarraga, Chester Liu, Yi Zhang, Yulong Wang, Hector Li, kunal-vaishnavi, pengwa, aciddelgado, Yifan Li, Xu Xing, Yufeng Li, Patrice Vignola, Yueqing Zhang, Jing Fang, Chi Lo, Dmitri Smirnov, mingyueliuh, cloudhan, Yi-Hong Lyu, Ye Wang, Ted Themistokleous, Guenther Schmuelling, George Wu, mindest, liqun Fu, Preetha Veeramalai, Justin Chu, Xiang Zhang, zz002, vraspar, kailums, guyang3532, Satya Kumar Jandhyala, Rachel Guo, Prathik Rao, Maximilian Müller, Sophie Schoenmeyer, zhijiang, maggie1059, ivberg, glen-amd, aamajumder, Xavier Dupré, Vincent Wang, Suryaprakash Shanmugam, Sheil Kumar, Ranjit Ranjan, Peishen Yan, Frank Dong, Chen Feiyue, Caroline Zhu, Adam Louly, Ștefan Talpalaru, zkep, winskuo-quic, wejoncy, vividsnow, vivianw-amd, moyo1997, mcollinswisc, jingyanwangms, Yang Gu, Tom McDonald, Sunghoon, Shubham Bhokare, RuomeiMS, Qingnan Duan, PeixuanZuo, Pavan Goyal, Nikolai Svakhin, KnightYao, Jon Campbell, Johan MEJIA, Jake Mathern, Hans, Hann Wang, Enrico Galli, Dwayne Robinson, Clément Péron, Chip Kerchner, Chen Fu, Carson M, Adam Reeve, Adam Pocock.

Big thank you to everyone who contributed to this release!

Full Changelog: https://github.com/microsoft/onnxruntime/compare/v1.18.1...v1.19.0

onnxruntime - ONNX Runtime v1.18.1

Published by sophies927 4 months ago

What's new?

Announcements:

  • ONNX Runtime Python packages now have numpy dependency >=1.21.6, <2.0. Support for numpy 2.0 will be added in a future release.
  • CUDA 12.x ONNX Runtime GPU packages are now built against cuDNN 9.x (1.18.0 packages previously depended on cuDNN 8.x). CUDA 11.x ONNX Runtime GPU packages continue to depend on CuDNN 8.x.
  • Windows packages require installation of Microsoft Visual C++ Redistributable Runtime 14.38 or newer.

TensorRT EP:

  • TensorRT Weightless API integration.
  • Support for TensorRT hardware-compatible engines (see the sketch after this list).
  • Support for INT64 types in TensorRT constant layer calibration.
  • Now using latest commit of onnx-tensorrt parser, which includes several issue fixes.
  • Additional TensorRT support and performance improvements.
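
A sketch, not an official example, of enabling the TensorRT engine cache together with hardware-compatible engines through TensorRT EP provider options in Python; the "trt_engine_hw_compatible" key is an assumed option name, while the engine-cache options are documented TRT EP options.

    import onnxruntime as ort

    trt_options = {
        "trt_engine_cache_enable": True,        # cache built engines across runs
        "trt_engine_cache_path": "./trt_cache",
        "trt_engine_hw_compatible": True,       # assumed name of the hardware-compatible engine switch
    }
    sess = ort.InferenceSession(
        "model.onnx",
        providers=[("TensorrtExecutionProvider", trt_options),
                   "CUDAExecutionProvider", "CPUExecutionProvider"],
    )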

Packages:

  • Publish CUDA 12 Java packages to Azure DevOps feed.
  • Various packaging pipeline fixes.

This patch release also features various other bug fixes, including a CUDA 12.5 build error fix.

Big thank you to @yf711 for driving this release as the release manager and to all our contributors!

@yf711 @jchen351 @mszhanyi @snnn @wangyems @jywu-msft @skottmckay @chilo-ms @moraxu @kevinch-nv @pengwa @wejoncy @pranavsharma @Craigacp @jslhcl @adrianlizarraga @inisis @jeffbloo @mo-ja @kunal-vaishnavi @sumitsays @neNasko1 @yufenglee @dhruvbird @wangshuai09 @xiaoyu-work @axinging @yuslepukhin @YUNQIUGUO @shubhambhokare1 @fs-eire @afantino951 @tboby @HectorSVC @baijumeswani

onnxruntime - ONNX Runtime v1.18.0

Published by yihonglyu 5 months ago

Announcements

  • Windows ARM32 support has been dropped at the source code level.
  • Python version >=3.8 is now required for build.bat/build.sh (previously >=3.7). Note: If you have Python version <3.8, you can bypass the tools and use CMake directly.
  • The onnxruntime-mobile Android package and onnxruntime-mobile-c/onnxruntime-mobile-objc iOS cocoapods are being deprecated. Please use the onnxruntime-android Android package, and onnxruntime-c/onnxruntime-objc cocoapods, which support ONNX and ORT format models and all operators and data types. Note: If you require a smaller binary size, a custom build is required. See details on creating a custom Android or iOS package on Custom build | onnxruntime.

Build System & Packages

  • CoreML execution provider now depends on coremltools.
  • Flatbuffers has been upgraded from 1.12.0 → 23.5.26.
  • ONNX has been upgraded from 1.15 → 1.16.
  • EMSDK has been upgraded from 3.1.51 → 3.1.57.
  • Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with several important bug fixes.
  • There is a new onnxruntime_CUDA_MINIMAL CMake option for building ONNX Runtime CUDA execution provider without any operations apart from memcpy ops.
  • Added Mac Catalyst build support for macOS.
  • Added initial support for RISC-V and three new build options for it: --rv64, --riscv_toolchain_root, and --riscv_qemu_path.
  • Now you can build TensorRT EP with protobuf-lite instead of the full version of protobuf.
  • Some security-related compile/link flags have been moved from the default setting → new build option: --use_binskim_compliant_compile_flags. Note: All our release binaries are built with this flag, but when building ONNX Runtime from source, this flag is default OFF.
  • Windows ARM64 build now depends on PyTorch CPUINFO library.
  • Windows OneCore build now uses “Reverse forwarding” apisets instead of “Direct forwarding”, so onnxruntime.dll in our Nuget packages will depend on kernel32.dll. Note: Windows systems without kernel32.dll need to have reverse forwarders (see API set loader operation - Win32 apps | Microsoft Learn for more information).

Core

  • Added ONNX 1.16 support.
  • Added additional optimizations related to Dynamo-exported models.
  • Improved testing infrastructure for EPs developed as shared libraries.
  • Exposed Reserve() in OrtAllocator to allow custom allocators to work when session.use_device_allocator_for_initializers is specified (see the sketch after this list).
  • Improved lock contention due to memory allocations.
  • Improved session creation time (graph and graph transformer optimizations).
  • Added new SessionOptions config entry to disable specific transformers and rules.
  • [C# API] Exposed SessionOptions.DisablePerSessionThreads to allow sharing of threadpool between sessions.
  • [Java API] Added CUDA 12 Java support.
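
A minimal sketch of setting the session.use_device_allocator_for_initializers config entry referenced above from Python; the model path and providers are placeholders.

    import onnxruntime as ort

    so = ort.SessionOptions()
    # Route initializer allocations through the device allocator (works with custom
    # allocators now that Reserve() is exposed on OrtAllocator)
    so.add_session_config_entry("session.use_device_allocator_for_initializers", "1")
    sess = ort.InferenceSession("model.onnx", sess_options=so,
                                providers=["CUDAExecutionProvider", "CPUExecutionProvider"])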

Performance

  • Improved 4bit quant support:
    • Added HQQ quantization support to improve accuracy.
    • Implemented general GEMM kernel and improved GEMV kernel performance on GPU.
    • Improved GEMM kernel quality and performance on x64.
    • Implemented general GEMM kernel and improved GEMV performance on ARM64.
  • Improved MultiheadAttention performance on CPU.

Execution Providers

  • TensorRT

    • Added support for TensorRT 10.
    • Finalized support for DDS ops.
    • Added Python support for user provided CUDA stream.
    • Fixed various bugs.
  • CUDA

    • Added support for multiple CUDA graphs.
    • Added a provider option to disable TF32.
    • Added Python support for user-provided CUDA streams.
    • Extended MoE to support tensor parallelism and int4 quantization.
    • Fixed bugs in the BatchNorm and TopK kernels.
  • QNN

    • Added support for up to QNN SDK 2.22.
    • Upgraded support from A16W8 → mixed 8/16-bit precision configurability per layer.
    • Added fp16 execution support via enable_htp_fp16 option.
    • Added multiple partition support for QNN context binary.
    • Expanded operator support and fixed various bugs.
    • Added support for per-channel quantized weights for Conv.
    • Integration with Qualcomm’s AIHub.
  • OpenVINO

    • Added support for up to OpenVINO 2024.1.
    • Added support for importing pre-compiled blob as EPContext blob.
    • Separated device and precision as inputs by removing support for device_id in provider options and adding precision as a separate CLI option (see the sketch after this list).
    • Deprecated CPU_FP32 and GPU_FP32 terminology and introduced CPU and GPU terminology.
    • AUTO:GPU,CPU will now create only a GPU blob, not a CPU blob.
  • DirectML

    • Additional ONNX operator support: Resize-18 and Resize-19, Col2Im-18, IsNaN-20, IsInf-20, and ReduceMax-20.
    • Additional contrib op support: SimplifiedLayerNormalization, SkipSimplifiedLayerNormalization, QLinearAveragePool, MatMulIntegerToFloat, GroupQueryAttention, DynamicQuantizeMatMul, and QAttention.
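
A hedged sketch of the reworked OpenVINO EP options described above, with the device selected via device_type and precision passed as a separate option; the exact accepted values may vary by OpenVINO EP version.

    import onnxruntime as ort

    ov_options = {"device_type": "GPU", "precision": "FP16"}
    sess = ort.InferenceSession(
        "model.onnx",
        providers=[("OpenVINOExecutionProvider", ov_options), "CPUExecutionProvider"],
    )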

Mobile

  • Improved performance of ARM64 4-bit quantization.
  • Added support for building with QNN on Android.
  • Added MacCatalyst support.
  • Added visionOS support.
  • Added initial support for creating ML Program format CoreML models.
  • Added support for 1D Conv and ConvTranspose to XNNPACK EP.

Web

  • Added WebNN EP preview.
  • Improved WebGPU performance (MHA, ROE).
  • Added more WebGPU and WebNN examples.
  • Increased generative model support.
  • Optimized Buffer management to reduce memory footprint.

Training

  • Large Model Training
    • Added optimizations for Dynamo-exported models.
    • Added Mixtral integration using ORT backend.
  • On-Device Training
    • Added support for models >2GB to enable SLM training on edge devices.

GenAI

  • Added additional model support: Phi-3, Gemma, Llama 3.
  • Added DML EP support.
  • Improved tokenizer quality.
  • Improved sampling method and ORT model performance.

Extensions

  • Created Java packaging pipeline and published to Maven repository.
  • Added support for conversion of Huggingface FastTokenizer into ONNX custom operator.
  • Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
  • Fixed Whisper large model pre-processing bug.
  • Enabled eager execution for custom operator and refactored the header file structure.

Contributors

Yi Zhang, Yulong Wang, Adrian Lizarraga, Changming Sun, Scott McKay, Tianlei Wu, Peng Wang, Hector Li, Edward Chen, Dmitri Smirnov, Patrice Vignola, Guenther Schmuelling, Ye Wang, Chi Lo, Wanming Lin, Xu Xing, Baiju Meswani, Peixuan Zuo, Vincent Wang, Markus Tavenrath, Lei Cao, Kunal Vaishnavi, Rachel Guo, Satya Kumar Jandhyala, Sheil Kumar, Yifan Li, Jiajia Qin, Maximilian Müller, Xavier Dupré, Yi-Hong Lyu, Yufeng Li, Alejandro Cid Delgado, Adam Louly, Prathik Rao, wejoncy, Zesong Wang, Adam Pocock, George Wu, Jian Chen, Justin Chu, Xiaoyu, guyang3532, Jingyan Wang, raoanag, Satya Jandhyala, Hariharan Seshadri, Jiajie Hu, Sumit Agarwal, Peter Mcaughan, Zhijiang Xu, Abhishek Jindal, Jake Mathern, Jeff Bloomfield, Jeff Daily, Linnea May, Phoebe Chen, Preetha Veeramalai, Shubham Bhokare, Wei-Sheng Chin, Yang Gu, Yueqing Zhang, Guangyun Han, inisis, ironman, Ivan Berg, Liqun Fu, Yu Luo, Rui Ren, Sahar Fatima, snadampal, wangshuai09, Zhenze Wang, Andrew Fantino, Andrew Grigorev, Ashwini Khade, Atanas Dimitrov, AtomicVar, Belem Zhang, Bowen Bao, Chen Fu, Dhruv Matani, Fangrui Song, Francesco, Frank Dong, Hans Chen, He Li, Heflin Stephen Raj, Jambay Kinley, Masayoshi Tsutsui, Matttttt, Nanashi, Phoebe Chen, Pranav Sharma, Segev Finer, Sophie Schoenmeyer, TP Boudreau, Ted Themistokleous, Thomas Boby, Xiang Zhang, Yongxin Wang, Zhang Lei, aamajumder, danyue, Duansheng Liu, enximi, fxmarty, kailums, maggie1059, mindest, mo-ja, moyo1997

Big thank you to everyone who contributed to this release!

onnxruntime - ONNX Runtime v1.17.3

Published by sophies927 6 months ago

What's new?

General:

  • Update copying API header files to make Linux logic consistent with Windows (#19736) - @mszhanyi
  • Pin ONNX version to fix DML and Python packaging pipeline exceptions (#20073) - @mszhanyi

Build System & Packages:

  • Fix bug in minimal build with training APIs enabled that affected the Apple framework (#19858) - @edgchen1

Core:

  • Fix SplitToSequence op with string tensor bug (#19942) - @Craigacp

CUDA EP:

  • Fix onnxruntime_test_all build break with CUDA (#19673) - @gedoensmax
  • Fix broken pooling CUDA NHWC ops and ensure NCHW / NHWC parity (#19889) - @mtavenrath

TensorRT EP:

  • Fix TensorRT build break caused by image update (#19880) - @jywu-msft
  • Fix TensorRT custom op list concurrency bug (#20093) - @chilo-ms

Web:

  • Add hardSigmoid op support and hardSigmoid activation for fusedConv (#19215, #19233) - @qjia7
  • Add support for WebNN async API with Asyncify (#19415) - @Honry
  • Add uniform support for conv, conv transpose, conv grouped, and fp16 (#18753, #19098) - @axinging
  • Add capture and replay support for JS EP (#18989) - @fs-eire
  • Add LeakyRelu activation for fusedConv (#19369) - @qjia7
  • Add FastGelu custom op support (#19392) - @fs-eire
  • Allow uint8 tensors for WebGPU (#19545) - @satyajandhyala
  • Add and optimize MatMulNBits (#19852) - @satyajandhyala
  • Enable ort-web with any Float16Array polyfill (#19305) - @fs-eire
  • Allow multiple EPs to be specified in backend resolve logic (#19735) - @fs-eire
  • Various bug fixes: (#19258) - @gyagp, (#19201, #19554) - @hujiajie, (#19262, #19981) - @guschmue, (#19581, #19596, #19387) - @axinging, (#19613) - @satyajandhyala
  • Various improvements for performance and usability: (#19202) - @qjia7, (#18900, #19281, #18883) - @axinging, (#18788, #19737) - @satyajandhyala, (#19610) - @segevfiner, (#19614, #19702, #19677, #19857, #19940) - @fs-eire, (#19791) - @gyagp, (#19868) - @guschmue, (#19433) - @martholomew, (#19932) - @ibelem

Windows:

  • Fix Windows memory mapping bug affecting some larger models (#19623) - @yufenglee

Kernel Optimizations:

  • Fix GQA and Rotary Embedding bugs affecting some models (#19801, #19874) - @aciddelgado
  • Update replacement of MultiHeadAttention (MHA) and GroupQueryAttention (GQA) (#19882) - @kunal-vaishnavi
  • Add support for packed QKV input and Rotary Embedding with sm<80 using Memory Efficient Attention kernel (#20012) - @aciddelgado

Models:

  • Add support for benchmarking LLaMA model end-to-end performance (#19985, #20033, #20149) - @kunal-vaishnavi
  • Add example to demonstrate export of Open AI Whisper implementation with batched prompts (#19854) - @shubhambhokare1

This patch release also includes additional fixes by @spampana95 and @enximi. Big thank you to all our contributors!

onnxruntime - ONNX Runtime v1.17.1

Published by YUNQIUGUO 8 months ago

This patch release includes the following updates:

General

  • Update thread affinity on server so it is only set with auto affinity (#19318) - @ivberg

Build System and Packages

  • Fix bug that was breaking arm64 build by disabling __cpuid check on arm64 builds since intrinsic is not available (#19574) - @smk2007

Core

  • Add capturestate / rundown ETW support logging for session and provider options (#19397) - @ivberg
  • Restrict L2 cache core check on Intel devices (#19483) - @smk2007

Performance

  • Optimize KahnsTopologicalSort and PriorityNodeCompare to fix performance degradation in session creation time that was affecting many models (#19475) - @smk2007

EPs

  • Enable DirectML on Windows and CUDA on Linux for Node.js binding (#19274) - @jchen351

QNN

  • Fix split index bugs uncovered by QNN SDK 2.19 release (#19381) - @adrianlizarraga
  • Add job that builds x64 Python wheels for QNN EP so cached QNN models can be created on Windows x64 (#19499) - @adrianlizarraga

OpenVINO

  • Fix bugs for API backwards compatibility (#19482) - @preetha-intel

DirectML

  • Fix bug in external data packing that was causing crash (#19415) - @PatriceVignola
  • Fix bug in allocation planner by disabling streams for DML EP (#19481) - @PatriceVignola

Web

  • Fix bug with types export in package.json (#19458) - @fs-eire

Training

  • Reduce onnxruntime-training package size so it can be published on PyPI (#19486) - @baijumeswani
  • Update default std flag used during torch extensions compilation (#19516) - @baijumeswani
  • Add ATen fallback support for bicubic interpolation algorithm (#19380) - @prathikr

Quantization

  • Update Q/DQ quantization to ensure Microsoft opset (#19335) - @adrianlizarraga
  • Add contrib Q/DQ ops to symbolic shape inference tool (#19340) - @adrianlizarraga
  • Fix subgraph quantization regression (#19421) - @fxmarty
  • Add DefaultTensorType option to specify the default tensor type to quantize (#19455) - @yufenglee
  • Fix bug with command line argparse to process --symmetric [True|False] correctly (#19577) - @satyajandhyala
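
A minimal sketch of invoking the onnxruntime.quantization tool that the fixes above apply to (dynamic quantization shown for brevity; the Q/DQ fixes above concern the static path). Paths are placeholders.

    from onnxruntime.quantization import quantize_dynamic, QuantType

    quantize_dynamic(
        model_input="model.onnx",
        model_output="model.quant.onnx",
        weight_type=QuantType.QInt8,
    )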

Whisper Model

  • Fix bug in BeamSearch implementation of Whisper model that was causing a crash in some scenarios (#19345) - @petermcaughan
  • Fix bug in Whisper model timestamps and temperature (#19509) - @kunal-vaishnavi

onnxruntime - ONNX Runtime v1.17.0

Published by YUNQIUGUO 9 months ago

Announcements

In the next release, we will completely drop support for Windows ARM32.

General

Build System and Packages

  • Dropped CentOS 7 support. All Linux binaries now require glibc version >=2.28, but users can still build the source code for a lower glibc version.
  • Added CUDA 12 packages for Python and Nuget.
  • Added Python 3.12 packages for ONNX Runtime Inference. ONNX Runtime Training Python 3.12 packages cannot be provided at this time since training packages depend on PyTorch, which does not support Python 3.12 yet.
  • Linux binaries (except those in AMD GPU packages) are built in a more secure way that is compliant with BinSkim's default policy (e.g., the binaries no longer have an executable stack).
  • Added support for Windows ARM64X for users who build ONNX Runtime from source. No prebuilt package provided yet.
  • Removed Windows ARM32 binaries from official packages. Users who still need these binaries can build them from source.
  • Added AMD GPU package with ROCm and MiGraphX (Python + Linux only).
  • Split ONNX Runtime GPU Nuget package into two packages.
  • When building the source code for Linux ARM64 or Android, the C/C++ compiler must support BFloat16. Support for Android NDK 24.x has been removed. Please use NDK 25.x or 26.x instead.
  • Link time code generation (LTCG or LTO) is now disabled by default when building from source. To re-enable it, users can add "--enable_lto" to the build command. All prebuilt binaries are still built with LTO.

Core

  • Optimized graph inlining.
  • Allow custom op to invoke internal thread-pool for parallelism.
  • Added support for supplying a custom logger at the session level.
  • Added new logging and tracing of session and execution provider options.
  • Added a new dynamic ETW provider that can trace/diagnose ONNX Runtime internals with minimal performance overhead.

Performance

  • Added 4bit quant support on NVIDIA GPU and ARM64.

EPs

TensorRT EP

  • Added support for direct load of precompiled TensorRT engines and customizable engine prefix.
  • Added Python support for TensorRT plugins via ORT custom ops.
  • Fixed concurrent Session::Run bugs.
  • Updated calls to deprecated TensorRT APIs (e.g., enqueue_v2 → enqueue_v3).
  • Fixed various memory leak bugs.

QNN EP

  • Added support for QNN SDK 2.18.
  • Added context binary caching and model initialization optimizations.
  • Added mixed precision (8/16 bit) quantization support.
  • Add device-level session options (soc_model, htp_arch, device_id), extreme_power_saver for htp_performance_mode, and vtcm_mb settings.
  • Fixed multi-threaded inference bug.
  • Fixed various other bugs and added performance improvements.
  • QNN profiling of the NPU can be enabled dynamically with ETW or written out to CSV.
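
A hedged sketch of passing the QNN EP session options listed above from Python; the backend library path and option values are placeholders.

    import onnxruntime as ort

    qnn_options = {
        "backend_path": "QnnHtp.dll",        # HTP (NPU) backend library
        "htp_performance_mode": "burst",
        "soc_model": "0",                    # device-level option added in this release
    }
    sess = ort.InferenceSession("model.onnx",
                                providers=[("QNNExecutionProvider", qnn_options)])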

OpenVINO EP

  • Added support for OpenVINO 2023.2.
  • Added AppendExecutionProvider_OpenVINO_V2 API for supporting new OpenVINO EP options.

DirectML EP

  • Updated to DirectML 1.13.1.
  • Updated operators LpPool-18 and AveragePool-19 with dilations.
  • Improved Python I/O binding support.
  • Added RotaryEmbedding.
  • Added support for fusing subgraphs into DirectML execution plans.
  • Added new Python API to choose a specific GPU on multi-GPU devices with the DirectML EP.

Mobile

  • Added initial support for 4bit quantization on ARM64.
  • Extended CoreML/NNAPI operator coverage.
  • Added support for YOLOv8 pose detection pre/post processing.
  • Added support for macOS in CocoaPods package.

Web

  • Added support for external data format.
  • Added support for I/O bindings.
  • Added support for training.
  • Added WebGPU optimizations.
  • Transitioned WebGPU out of experimental.
  • Added FP16 support for WebGPU.

Training

Large Model Training

  • Enabled support for QLoRA (with support for BFloat16).
  • Added symbolic shape support for Triton codegen (see PR).
  • Made improvements to recompute optimizer with easy ON/OFF to allow layer-wise recompute (see PR).
  • Enabled memory-efficient gradient management. For Mistral, we see ~10GB drop in memory consumption when this feature is ON (see PR).
  • Enabled embedding sparsity optimizations.
  • Added support for Aten efficient attention and Triton Flash Attention (see PR).
  • Packages now available for CUDA 11.8 and 12.1.

On Device Training

  • On-Device training will now support training on the web. This release focuses on federated learning and developer exploration scenarios. More features coming soon in future releases.

Extensions

  • Modified gen_processing_model tokenizer model to output int64, unifying output datatype of all tokenizers.
  • Implemented support for post-processing of YOLO v8 within the Python extensions package.
  • Introduced 'fairseq' flag to enhance compatibility with certain Hugging Face tokenizers.
  • Incorporated 'added_token' attribute into the BPE tokenizer to improve CodeGen tokenizer functionality.
  • Enhanced the SentencePiece tokenizer by integrating token indices into the output.
  • Added support for the custom operator implemented with CUDA kernels, including two example operators.
  • Added more tests on the Hugging Face tokenizer and fixed identified bugs.

Known Issues

  • The onnxruntime-training package is not yet available in PyPI but can be accessed in ADO as follows:
    python -m pip install cerberus flatbuffers h5py numpy>=1.16.6 onnx packaging protobuf sympy setuptools>=41.4.0
    pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training
    pip install torch-ort
    python -m torch_ort.configure
    
    Installation instructions can also be accessed here.
  • For models with int4 kernel only:
    • A crash may occur when int4 is applied on Intel CPUs with hybrid cores if the E-cores are disabled in BIOS. Fix is in progress to be patched.
    • Performance regression on the int4 kernel on x64 makes the op following MatMulNBits much slower. Fix is in progress to be patched.
  • Current bug in BeamSearch implementation of T5, GPT, and Whisper may break models under heavy inference load using BeamSearch on CUDA. See #19345. Fix is in progress to be patched.
  • Full support of ONNX 1.15 opsets is still in progress. A list of new ONNX 1.15 opset support that has been included in this release can be found above in the 'General' section.
  • Some Cast nodes will not be removed (see https://github.com/microsoft/onnxruntime/pull/17953): a Cast node from higher precision to lower precision (like fp32 to fp16) will be kept. If model results differ between ORT 1.16 and 1.17, check whether a Cast node was removed in 1.16 but kept in 1.17.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, fs-eire, tianleiwu, mszhanyi, edgchen1, skottmckay, jchen351, adrianlizarraga, qjia7, Honry, HectorSVC, chilo-ms, axinging, jeffbloo, pengwa, yuslepukhin, guschmue, satyajandhyala, xadupre, RandyShuai, PeixuanZuo, RandySheriffH, er3x3, wschin, yf711, PatriceVignola, askhade, smk2007, natke, kunal-vaishnavi, YUNQIUGUO, liqunfu, cloudhan, wangyems, yufenglee, ajindal1, baijumeswani
justinchuby, Craigacp, wejoncy, jywu-msft, hariharans29, nums11, jslhcl, jeffdaily, chenfucn, zhijxu-MS, mindest, BowenBao, sumitsays, prasanthpul, fdwr, pranavsharma, chentaMS, zhangxiang1993, souptc, zhanghuanrong, faxu, georgen117, sfatimar, thiagocrepaldi, adityagoel4512, ivberg, sophies927

NOTE: Please let us know via this GitHub issue if you contributed to this release but your name is missing from this list, and we will add you manually!

onnxruntime - ONNX Runtime v1.16.3

Published by snnn 11 months ago

What's Changed

  1. Stable Diffusion XL demo update by @tianleiwu in https://github.com/microsoft/onnxruntime/pull/18496
  2. Fixed a memory leak issue (#18466) in TensorRT EP by @chilo-ms in https://github.com/microsoft/onnxruntime/pull/18467
  3. Fix a use-after-free bug in SaveInputOutputNamesToNodeMapping function by @snnn in https://github.com/microsoft/onnxruntime/pull/18456 . The issue was found by AddressSanitizer.

onnxruntime - ONNX Runtime v1.16.2

Published by snnn 11 months ago

The patch release includes updates on:

  • Performance optimizations for Llama2 on CUDA EP and DirectML EP
  • Performance optimizations for Stable Diffusion XL model for CUDA EP
    • Demos for text to image generation
  • Mobile bug fixes for crash on some older 64-bit ARM devices and AOT inlining issue on iOS with C# bindings
  • TensorRT EP bug fixes for user provided compute stream and stream synchronization
onnxruntime - ONNX Runtime v1.16.1

Published by snnn about 1 year ago

This release fixes some issues found in 1.16.0.

onnxruntime - ONNX Runtime v1.16.0

Published by er3x3 about 1 year ago

General

  • Support for serialization of models >=2GB

APIs

  • New session option to disable the default CPU EP fallback: session.disable_cpu_ep_fallback (see the sketch after this list)
  • Java
    • Support for fp16 and bf16 tensors as inputs and outputs, along with utilities to convert between these and fp32 data. On JDK 20 and newer the fp16 conversion methods use the JDK's Float.float16ToFloat and Float.floatToFloat16 methods which can be hardware accelerated and vectorized on some platforms.
    • Support for external initializers so that large models can be instantiated without filesystem access
  • C#
    • Expose OrtValue API as the new preferred API to run inference in C#. This reduces garbage and exposes direct native memory access via Slice-like interfaces.
    • Make Float16 and BFloat16 full-featured fp16 interfaces that support conversion and expose floating-point properties (e.g. IsNaN, IsInfinity, etc.)
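
A minimal sketch of the new session.disable_cpu_ep_fallback option mentioned above; with the entry set, session creation should fail rather than silently fall back to the CPU EP when the requested provider cannot run the model.

    import onnxruntime as ort

    so = ort.SessionOptions()
    so.add_session_config_entry("session.disable_cpu_ep_fallback", "1")
    sess = ort.InferenceSession("model.onnx", sess_options=so,
                                providers=["CUDAExecutionProvider"])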

Performance

  • Improve LLM quantization accuracy with SmoothQuant
  • Support 4-bit quantization on CPU
  • Optimize BeamScore to improve BeamSearch performance
  • Add FlashAttention v2 support for Attention, MultiHeadAttention and PackedMultiHeadAttention ops

Execution Providers

  • CUDA EP
    • Initial fp8 support (QDQ, Cast, MatMul)
    • Relax CUDA Graph constraints to allow more models to utilize it
    • Allow CUDA allocator to be registered with ONNX Runtime externally
  • TensorRT EP
    • CUDA Graph support
    • Support user provided cuda compute stream
    • Misc bug fixes and improvements
  • OpenVINO EP
    • Support OpenVINO 2023.1
  • QNN EP
    • Enable context binary cache to reduce initialization time
    • Support QNN 2.12
    • Support for resize with asymmetric transformation mode on HTP backend
    • Ops support: Equal, Less, LessOrEqual, Greater, GreaterOrEqual, LayerNorm, Asin, Sign, DepthToSpace, SpaceToDepth
    • Support 1D Conv/ConvTranspose
    • Misc bug fixes and improvements

Mobile

  • Initial support for Azure EP
  • Dynamic shape support for CoreML
  • Improve React Native performance with JSI
  • Mobile support for CLIPImageProcessor pre-processing and CLIP scenario
  • Swift Package Manager support for ONNX Runtime inference and ONNX Runtime extensions via onnxruntime-swift-package-manager

Web

  • webgpu ops coverage improvements (SAM, T5, Whisper)
  • webnn ops coverage improvements (SAM, Stable Diffusion)
  • Stability/usability improvements for webgpu

Large model training

  • ORTModule + OpenAI Triton Integration now available. See details here
  • Label sparsity compute optimization support is complete and enabled by default starting with release 1.16
  • New experimental embedding sparsity related optimizations available (disabled by default).
    • Improves training performance of Roberta in Transformers by 20-30%
  • Other compute optimizations like Gather/Slice/Reshape upstream support enabled.
  • Optimizations for LLaMAv2 (~10% acceleration) and OpenAI Whisper
  • Improvements to the logging and metrics system (initialization overhead, memory usage, statistics convergence tool, etc.).
  • PythonOp enhancement: bool and tuple[bool] constants, materialize grads, empty inputs, save in context, customized shape inference, use fully qualified name for export.
  • SCELossInternal/SCELossGradInternal CUDA kernels can handle more than std::numeric_limits<int32_t>::max elements.
  • Improvements to LayerNorm fusion
  • A model cache for the exported ONNX model was introduced to avoid repeatedly exporting a model that has not changed.

On-Device Training

  • iOS support available starting this release
  • Minimal build now available for On-Device Training. Basic binary size ~1.5 MB
  • ORT-Extensions custom op support enabled through onnxblock for on-device training scenarios

ORT Extensions

This ORT release is accompanied by updates to onnxruntime-extensions. Features include:

  • New Python API gen_processing_models to export ONNX data-processing models from Hugging Face tokenizers such as LLaMA, CLIP, XLM-Roberta, Falcon, BERT, etc. (see the sketch after this list)
  • New TrieTokenizer operator for RWKV-like LLM models, and other tokenizer operator enhancements.
  • New operators for Azure EP compatibility: AzureAudioToText, AzureTextToText, AzureTritonInvoker for Python and NuGet packages.
  • Processing operators have been migrated to the new Lite Custom Op API
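
A hedged sketch of the gen_processing_models API mentioned above, using a Hugging Face tokenizer as input; the argument names follow the onnxruntime-extensions examples and may differ slightly between versions, and the tokenizer id is a placeholder.

    from transformers import AutoTokenizer
    from onnxruntime_extensions import gen_processing_models
    import onnx

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder tokenizer
    # Returns a pre-processing (tokenize/encode) model and a post-processing (decode) model
    pre_model, post_model = gen_processing_models(tokenizer, pre_kwargs={}, post_kwargs={})
    onnx.save(pre_model, "tokenizer_encode.onnx")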

Known Issues

  • ORT CPU Python package requires execution provider to be explicitly provided. See #17631. Fix is in progress to be patched.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
fs-eire, edgchen1, snnn, pengwa, mszhanyi, PeixuanZuo, tianleiwu, adrianlizarraga, baijumeswani, cloudhan, satyajandhyala, yuslepukhin, RandyShuai, RandySheriffH, skottmckay, Honry, dependabot[bot], HectorSVC, jchen351, chilo-ms, YUNQIUGUO, justinchuby, PatriceVignola, guschmue, yf711, Craigacp, smk2007, RyanUnderhill, jslhcl, wschin, kunal-vaishnavi, mindest, xadupre, fdwr, hariharans29, AdamLouly, wejoncy, chenfucn, pranavsharma, yufenglee, zhijxu-MS, jeffdaily, natke, jeffbloo, liqunfu, wangyems, er3x3, nums11, yihonglyu, sumitsays, zhanghuanrong, askhade, wenbingl, jingyanwangms, ashari4, gramalingam, georgen117, sfatimar, BowenBao, hanbitmyths, stevenlix, jywu-msft

onnxruntime - ONNX Runtime v1.15.1

Published by snnn over 1 year ago

This release fixed the following issues:

  1. A coding problem in test/shared_lib/test_inference.cc: it should use ASSERT_NEAR instead of ASSERT_EQ to compare float values. Without this change, some DNNL/OpenVINO tests would fail on some AMD CPUs.
  2. A misalignment error in the cublasGemmBatchedHelper function. The error only occurs when the CUDA version is 11.8 and the GPU's CUDA compute capability is >= 8.0 (in other words, with TensorFloat-32 support). (#15981)
  3. A build issue: building with onnxruntime_ENABLE_MEMORY_PROFILE was broken in the 1.15.0 release. (#16124)
  4. The native onnxruntime library not loading in Azure App Service. This is because 1.15.0 introduced a call to the Windows SetThreadDescription API. Though the API is available in all Windows 10 versions, some sandbox environments block it. (#15375)
  5. An alignment problem for xnnpack EP on Intel/AMD CPUs on PC platforms.
  6. Some training header files were missing in the 1.15.0 training nuget package.
  7. Some fields in the OrtCUDAProviderOptionsV2 struct were not initialized.
  8. The *.dylib files in ONNX Runtime nuget package are not signed. (#16168)

Known issue

  1. Segfaults when loading model with local functions, works fine if model is inlined by ONNX (#16170)
  2. Cross building for iOS requires manually downloading protoc (#16238)

onnxruntime - ONNX Runtime v1.15.0

Published by snnn over 1 year ago

Announcements

Starting from the next release (ONNX Runtime 1.16.0), at the operating system level we will drop support for

  • iOS 11 and below. iOS 12 will be the minimum supported version.
  • CentOS 7, Ubuntu 18.04, and any Linux distro without glibc version >=2.28.

At the compiler level, we will drop support for

  • GCC version <= 9
  • Visual Studio 2019

Also, we will remove the onnxruntime_DISABLE_ABSEIL build option since we will upgrade protobuf and the new protobuf version will need abseil.

General

  • Added support for ONNX Optional type in C# API
  • Added collectives to support multi-GPU inferencing
  • Updated macOS build machines to macOS 12, which comes with Xcode 14.2; we have stopped using Xcode 12.4
  • Added Python 3.11 support (deprecated 3.7; 3.8-3.11 are supported) in packages for onnxruntime CPU, onnxruntime-gpu, onnxruntime-directml, and onnxruntime-training.
  • Updated to CUDA 11.8. ONNX Runtime source code is still compatible with CUDA 11.4 and 12.x.
  • Dropped the support for Windows 8.1 and below
  • Eager mode code and onnxruntime_ENABLE_EAGER_MODE cmake option are deleted.
  • Upgraded Mimalloc version from 2.0.3 to 2.1.1
  • Upgraded protobuf version from 3.18.3 to 21.12
  • New dependency: cutlass, which is only used in CUDA/TensorRT packages.
  • Upgraded DNNL from 2.7.1 to 3.0

Build System

  • On POSIX systems, building the code as the "root" user is disallowed by default. If needed, you can append "--allow_running_as_root" to your build command to bypass the check.
  • Added support for building the source natively on Windows ARM64 with Visual Studio 2022.
  • Added a Gradle wrapper and updated the Gradle version from 6.8.3 to 8.0.1. (Gradle is the tool used to build the ORT Java package.)
  • When cross-compiling, the build scripts will try to download a prebuilt protoc from GitHub instead of building the binary from source, because protobuf now has many dependencies and it is not easy to set up a build environment for it.

Performance

Execution Providers

Two new execution providers: JS EP and QNN EP.

TensorRT EP

  • Official support for TensorRT 8.6
  • Explicit shape profile overrides
  • Support for TensorRT plugins via ORT custom op
  • Improve support for TensorRT options (heuristics, sparsity, optimization level, auxiliary stream, tactic source selection etc.)
  • Support for TensorRT timing cache
  • Improvements to our test coverage, specifically for opset16-17 models and package pipeline unit test coverage.
  • Other misc bugfixes and improvements.

OpenVINO EP

  • Support for OpenVINO 2023.0
  • Dynamic shapes support for iGPU
  • Changes to OpenVINO backend to improve first inference latency
  • Deprecation of HDDL-VADM and Myriad VPU support
  • Misc bug fixes.

QNN EP

DirectML EP:

AzureEP

  • Added support for the OpenAI Whisper model
  • Available in a Nuget pkg in addition to Python

Mobile

New packages

  • Swift Package Manager for onnxruntime
  • Nuget package for onnxruntime-extensions (supports Android/iOS for MAUI/Xamarin)
  • React Native package for onnxruntime can optionally include onnxruntime-extensions

Pre/Post processing

  • Added support for built-in pre and post processing for NLP scenarios: classification, question-answering, text-prediction

  • Added support for built-in pre and post processing for Speech Recognition (Whisper)

  • Added support for built-in post processing for Object Detection (YOLO). Non-max suppression, draw bounding boxes

  • Additional CoreML and NNAPI kernels to support customer scenarios

    • NNAPI: BatchNormalization, LRN
    • CoreML: Div, Flatten, LeakyRelu, LRN, Mul, Pad, Pow, Sub

Web

  • [preview] WebGPU support
  • Support building the source code with "MinGW make" on Windows.

ORT Training

On-device training:

  • Official package for On-Device Training now available. On-device training extends ORT Inference solutions to enable training on edge devices.
  • APIs and Language bindings supported for C, C++, Python, C#, Java.
  • Packages available for Desktop and Android.
  • For custom builds, refer to the build instructions.

Others

  • Added graph optimizations which leverage the sparsity in the label data to improve performance. With these optimizations, we see performance gains ranging from 4% to 15% for popular HF models over baseline ORT.
  • Vision transformer models like ViT, BEiT, and SwinV2 see up to 44% speedup with ORT Training + DeepSpeed over PyTorch eager mode on AzureML.
  • Added optimizations for SOTA models like Dolly and Whisper. ORT Training + DS now gives ~17% speedup for Whisper and ~4% speedup for Dolly over PyTorch eager mode. Dolly optimizations on the main branch show a ~40% speedup over eager mode.

Known Issues

  • The onnxruntime-training 1.15.0 packages published to pypi.org were actually built in Debug mode instead of Release mode. You can get the right one from https://download.onnxruntime.ai/ . We will fix the issue in the next patch release.
  • XNNPack EP does not work on x86 CPUs without AVX-512 instructions, because we used wrong alignment when allocating buffers for XNNPack to use.
  • The CUDA EP source code has a build error when CUDA version <11.6. See #16000.
  • The onnxruntime-training builds are missing the training header files.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, fs-eire, edgchen1, wejoncy, mszhanyi, PeixuanZuo, pengwa, jchen351, cloudhan, tianleiwu, PatriceVignola, wangyems, adrianlizarraga, chenfucn, HectorSVC, baijumeswani, justinchuby, skottmckay, yuslepukhin, RandyShuai, RandySheriffH, natke, YUNQIUGUO, smk2007, jslhcl, chilo-ms, yufenglee, RyanUnderhill, hariharans29, zhanghuanrong, askhade, wschin, jywu-msft, mindest, zhijxu-MS, dependabot[bot], xadupre, liqunfu, nums11, gramalingam, Craigacp, fdwr, shalvamist, jstoecker, yihonglyu, sumitsays, stevenlix, iK1D, pranavsharma, georgen117, sfatimar, MaajidKhan, satyajandhyala, faxu, jcwchen, hanbitmyths, jeffbloo, souptc, ytaous kunal-vaishnavi

onnxruntime - ONNX Runtime v1.14.1

Published by PatriceVignola over 1 year ago

This patch addresses packaging issues and bug fixes on top of v1.14.0:

  • Mac OS Python build for x86 arch (issue: #14663)
  • DirectML EP fixes: sequence ops (#14442), package naming to remove -dev suffix
  • CUDA12 build compatibility (#14659)
  • Performance regression fixes: IOBinding input (#14719), Transformer models (#14732, #14517, #14699)
  • ORT Training kernel fix (#14727)

Only select packages were published for this patch release; others can be found in the attachments on the release page.

onnxruntime - ONNX Runtime v1.14.0

Published by rui-ren over 1 year ago

Announcements

  • Building ORT from source will require cmake version >=3.24 instead of >=3.18.

General

  • ONNX 1.13 support (opset 18)
  • Threading
    • ORT Threadpool is now NUMA aware (details)
    • New API to set thread affinity (details; see the sketch after this list)
  • New custom operator APIs
    • Enables a custom operator to wrap an entire model that is meant to be inferenced with an external API or runtime.
    • Details and example
  • Multi-stream Execution Provider refactoring
    • Improves GPU utilization by putting parallel inference requests on different GPU streams. Updated for CUDA, TensorRT, and ROCM execution providers
    • Improves memory efficiency by enabling GPU memory reuse across different streams
    • Enables Execution Provider developer to customize its stream implementation by providing "Stream" interface in ExecutionProvider API
  • [Preview] Rust API for ORT - not part of release branch but available to build in main.
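
A hedged sketch of the thread-affinity API referenced above, set through a session config entry from Python; the config key "session.intra_op_thread_affinities" and the affinity string format are assumptions used to illustrate the idea.

    import onnxruntime as ort

    so = ort.SessionOptions()
    so.intra_op_num_threads = 4
    # Pin the additional intra-op threads to specific logical processors (illustrative values)
    so.add_session_config_entry("session.intra_op_thread_affinities", "1;2;3")
    sess = ort.InferenceSession("model.onnx", sess_options=so)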

Performance

  • Support for quantization with AMX on Sapphire Rapids processors
  • CUDA EP performance improvements:
    • Improved performance of transformer models and decoding methods: beam search, greedy search, and top-p sampling.
    • Stable Diffusion model optimizations
    • Change cudnn_conv_use_max_workspace default value to be 1
  • Performance improvements to GRU and Slice operators

Execution Providers

Mobile

  • Pre/Post processing
    • Support updating mobilenet and super resolution models to move the pre and post processing into the model, including usage of custom ops for conversion to/from jpg/png
    • [Coming soon] onnxruntime-extensions packages for Android and iOS with DecodeImage and EncodeImage custom ops
    • Updated the onnxruntime inference examples to demonstrate end-to-end usage with onnxruntime-extensions package
  • XNNPACK
    • Added support for additional commonly used operators
    • Add iOS build support
      • XNNPACK EP is now included in the onnxruntime-c iOS package
    • Added support for using the ORT allocator in XNNPACK kernels to minimize memory usage

Web

  • onnxruntime-extensions included in default ort-web build (NLP centric)
  • XNNPACK Gemm
  • Improved exception handling
  • New utility functions (experimental) to help with exchanging data between images and tensors.

Training

  • Performance optimizations and bug fixes for Hugging Face models (i.e. Xlnet and Bloom)
  • Stable diffusion optimizations for training, including support for Resize and InstanceNorm gradients and addition of ORT-enabled examples to the diffusers library
  • FP16 optimizer exposed in torch-ort (details)
  • Bug fixes for Hugging Face models

Known Issues

  • The Microsoft.ML.OnnxRuntime.DirectML package name includes -dev-* suffix. This is functionally equivalent to the release branch build, and a patch is in progress.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, skottmckay, edgchen1, hariharans29, tianleiwu, yufenglee, guoyu-wang, yuslepukhin, fs-eire, pranavsharma, iK1D, baijumeswani, tracysh, thiagocrepaldi, askhade, RyanUnderhill, wangyems, fdwr, RandySheriffH, jywu-msft, zhanghuanrong, smk2007, pengwa, liqunfu, shahasad, mszhanyi, SherlockNoMad, xadupre, jignparm, HectorSVC, ytaous, weixingzhang, stevenlix, tiagoshibata, faxu, wschin, souptc, ashbhandare, RandyShuai, chilo-ms, PeixuanZuo, cloudhan, dependabot[bot], jeffbloo, chenfucn, linkerzhang, duli2012, codemzs, oliviajain, natke, YUNQIUGUO, Craigacp, sumitsays, orilevari, BowenBao, yangchen-MS, hanbitmyths, satyajandhyala, MaajidKhan, smkarlap, sfatimar, jchen351, georgen117, wejoncy, PatriceVignola, adrianlizarraga, justinchuby, zhangxiang1993, gineshidalgo99, tlh20, xzhu1900, jeffdaily, suryasidd, yihonglyu, liuziyue, chentaMS, jcwchen, ybrnathan, ajindal1, zhijxu-MS, gramalingam, WilBrady, garymm, kkaranasos, ashari4, martinb35, AdamLouly, zhangyaobit, vvchernov, jingyanwangms, wenbingl, daquexian, sreekanth-yalachigere, NonStatic2014, mayavijx, mindest, jstoecker, manashgoswami, Andrews548, baowenlei, kunal-vaishnavi

onnxruntime - ONNX Runtime v1.13.1

Published by jchen351 almost 2 years ago

Announcements

  • Security issues addressed by this release
    1. A protobuf security issue (CVE-2022-1941) that impacts users who load ONNX models from untrusted sources, for example, a deep learning inference service that allows users to upload their models and then runs inference in a shared environment.
    2. An ONNX security vulnerability that allows reading of tensor_data outside the model directory, which allows attackers to read or write arbitrary files on an affected system that loads ONNX models from untrusted sources. (#12915)
  • Deprecations
    • CUDA 10.x support at source code level
    • Windows 8.x support in Nuget/C API prebuilt binaries. Support for Windows 7+ Desktop versions (including Windows servers) will be retained by building ONNX Runtime from source.
    • NUPHAR EP code is removed
  • Dependency versioning updates
    • A C++17 compiler is now required to build ORT from source. On Linux, GCC version >=7.0 is required.
    • Minimal numpy version bumped to 1.21.6 (from 1.21.0) for ONNX Runtime Python packages
    • Official ONNX Runtime GPU packages now require CUDA version >=11.6 instead of 11.4.

General

  • Expose all arena configs in Python API in an extensible way
  • Fix ARM64 NuGet packaging
  • Fix EP allocator setup issue affecting TVM EP

Performance

  • Transformers CUDA improvements
    • Quantization on GPU for BERT - notebook, documentation on QAT, transformer optimization toolchain and quantized kernels.
    • Add fused attention CUDA kernels for BERT.
    • Fuse Add (bias) and Transpose of Q/K/V into one kernel for Attention and LongformerAttention.
    • Reduce GEMM computation in LongformerAttention with a new weight format.
  • General quantization (tool and kernel)
    • Quantization debugging tool - identify sensitive node/layer from accuracy drop discrepancies
    • New quantize API based on QuantConfig
    • New quantized operators: SoftMax, Split, Where

Execution Providers

  • CUDA EP
    • Official ONNX Runtime GPU packages are now built with CUDA version 11.6 instead of 11.4, but should still be backwards compatible with 11.4
  • TensorRT EP
    • Build option to link against pre-built onnx-tensorrt parser; this enables potential "no-code" TensorRT minor version upgrades and can be used to build against TensorRT 8.5 EA
    • Improved nested control flow support
    • Improve HashId generation used for uniquely identifying TRT engines. Addresses issues such as TRT Engine Cache Regeneration Issue
    • TensorRT uint8 support
  • OpenVINO EP
    • OpenVINO version upgraded to 2022.2.0
    • Support for INT8 QDQ models from NNCF
    • Support for Intel 13th Gen Core Processors (Raptor Lake)
    • Preview support for Intel discrete graphics cards Intel Data Center GPU Flex Series and Intel Arc GPU
    • Increased test coverage for GPU Plugin
  • SNPE EP
  • DirectML EP
    • Update to DML 1.9.1
    • New ops: LayerNormalization, Gelu, MatMulScale, DFT, FusedMatMul (contrib)
    • Bug fixes: DML EP Fix InstanceNormalization with 3D tensors (#12693), DML EP squeeze all axes when empty (#12649), DirectML GEMM broken in opset 11 and 13 when optional tensor C not provided (#12568)
  • [new] CANN EP - Initial integration of CANN EP contributed by Huawei to support Ascend 310 (#11477)

Mobile

  • EP infrastructure
    • Implemented support for additional EPs that use static kernels
      • Required for EPs like XNNPACK to be supported in minimal build
      • Removes need for kernel hashes to reduce maintenance overhead for developers
      • NOTE: ORT format models will need to be regenerated as the format change is NOT backwards compatible. We're replacing hashes for the CPU EP kernels with operator constraint information for operators used by the model so that we can match any static kernels available at runtime.
  • XNNPack
    • Added more kernels including QDQ format model support
      • AveragePool, Softmax,
      • QLinearConv, QLinearAveragePool, QLinearSoftmax
    • Added support for XNNPACK using threadpool
      • See documentation for recommendations on how to configure the XNNPACK threadpool
  • ORT format model peak memory usage

Web

  • Support for 4GB memory in webassembly
  • Upgraded emscripten to 3.1.19
  • Build from source support for onnxruntime-extensions and sentencepiece
  • Initial support for XNNPACK for optimizations for Wasm

Training

  • Training packages updated to CUDA version 11.6 and removed CUDA 10.2 and 11.3
  • Performance improvements via op fusions like BiasSoftmax and Dropout fusion, Gather to Split fusion etc targeting SOTA models
  • Added Aten support for GroupNorm, InstanceNormalization, Upsample nearest
  • Bug fixes for SimplifiedLayerNorm and a segfault in alltoall

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, baijumeswani, edgchen1, iK1D, skottmckay, cloudhan, tianleiwu, fs-eire, mszhanyi, WilBrady, hariharans29, chenfucn, fdwr, yuslepukhin, wejoncy, PeixuanZuo, pengwa, yufenglee, jchen351, justinchuby, dependabot[bot], RandySheriffH, sumitsays, wschin, wangyems, YUNQIUGUO, ytaous, pranavsharma, vvchernov, natke, Craigacp, RandyShuai, smk2007, zhangyaobit, jcwchen, yihonglyu, georgen117, chilo-ms, ashbhandare, faxu, jstoecker, gramalingam, garymm, jeffbloo, xadupre, jywu-msft, askhade, RyanUnderhill, thiagocrepaldi, mindest, jingyanwangms, wenbingl, ashari4, sfatimar, MaajidKhan, souptc, HectorSVC, weixingzhang, zhanghuanrong

onnxruntime - ONNX Runtime v1.12.1

Published by RandySheriffH about 2 years ago

This patch addresses packaging issues and bug fixes on top of v1.12.0.

  • Java package: MacOS M1 support folder structure fix
  • Android package: enable optimizations
  • GPU (TensorRT provider): bug fixes
  • DirectML: package fix
  • WinML: bug fixes

See #12418 for the full list of specific fixes included.

onnxruntime - ONNX Runtime v1.12.0

Published by RandySheriffH about 2 years ago

Announcements

  • For Execution Provider maintainers/owners: the lightweight compile API is now the default compiler API for all Execution Providers (this was previously only available for the mobile build). If you have an EP using the legacy compiler API, please migrate to the lightweight compile API as soon as possible. The legacy API will be deprecated in next release (ORT 1.13).
  • netstandard1.1 support is being deprecated in this release and will be removed in the next ORT 1.13 release

Key Updates

General

  • ONNX spec support
    • onnx opset 17
    • onnx-ml opset 3 (TreeEnsemble update)
  • BeamSearch operator for encoder-decoder transformers models
  • Support for invoking individual ops without the need to create a separate graph
    • For use with custom op development to reuse ORT code
  • Support for feeding external initializers (for large models) as byte arrays for model inferencing (see the sketch after this list)
  • Build switch to disable usage of abseil library to remove dependency
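
A hedged sketch of the external-initializer feature mentioned above. It assumes the Python binding exposes SessionOptions.add_external_initializers; in this release the feature may be available only through the C/C++ API, and the initializer name is a placeholder.

    import numpy as np
    import onnxruntime as ort

    # In-memory weight that replaces an initializer stored as external data
    weight = ort.OrtValue.ortvalue_from_numpy(np.zeros((1024, 1024), dtype=np.float32))

    so = ort.SessionOptions()
    so.add_external_initializers(["embedding.weight"], [weight])  # placeholder initializer name
    sess = ort.InferenceSession("model_with_external_data.onnx", sess_options=so)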

Packages

  • Python 3.10 support
  • Mac M1 support in Python and Java packages
  • .NET 6/MAUI support in Nuget C# package
    • Additional target frameworks: net6.0, net6.0-android, net6.0-ios, net6.0-macos
    • NOTE: netstandard1.1 support is being deprecated in this release and will be removed in the 1.13 release
  • onnxruntime-openvino package available on Pypi (from Intel)

Performance and Quantization

  • Improved C++ APIs that now utilize RAII for better memory management
  • Operator performance optimizations, including GatherElements
  • Memory optimizations to support compute-intensive real-time inferencing scenarios (e.g. audio inferencing scenarios)
    • CPU usage savings for infrequent inference requests by reducing thread spinning
    • Memory usage reduction through use of containers from the abseil library, especially inlined vectors used to store tensor shapes and inlined hash maps
  • New quantized kernels for weight symmetry to improve performance on ARM64 little core (GEMM and Conv)
  • Specialized kernel to improve the performance of quantized Resize, with up to a 2x speedup
  • Improved the thread job partition for QLinearConv, demonstrating up to ~20% perf gain for certain models
  • Quantization tool: improved ONNX shape inference for large models

Execution Providers

  • TensorRT EP
    • TensorRT 8.4 support
    • Provide option to share execution context memory between TensorRT subgraphs
    • Worked around long CI test times caused by frequent initialization/de-initialization of the TensorRT builder
    • Improve subgraph partitioning and consolidate TensorRT subgraphs when possible
    • Refactor engine cache serialization/deserialization logic
    • Miscellaneous bug fixes and performance improvements
  • OpenVINO EP
    • Pre-Built ONNXRuntime binaries with OpenVINO now available on pypi: onnxruntime-openvino
    • Performance optimizations of existing supported models
    • New runtime configuration option ‘enable_dynamic_shapes’ added to enable dynamic shapes for each iteration
    • ORTModule included as part of OVEP Python Package to enable Torch ORT Inference
  • DirectML EP
  • TVM EP - details
    • Updated to add model .dll ingestion and execution on Windows
    • Updated documentation and CI tests
  • [New] SNPE EP - details
  • [Preview] XNNPACK EP - initial infrastructure with limited operator support, for use with ORT Mobile and ORT Web
    • Currently supports Conv and MaxPool, with work in progress to add more kernels

Mobile

  • Binary size reductions in Android minimal build - 12% reduction in size of base build with no operator kernels
  • Added new operator support to NNAPI and CoreML EPs to improve ability to run super resolution and BERT models using NPU
    • NNAPI: DepthToSpace, PRelu, Gather, Unsqueeze, Pad
    • CoreML: DepthToSpace, PRelu
  • Added Docker file to simplify running a custom minimal build to create an ORT Android package
  • Initial XNNPACK EP compatibility

Web

  • Memory usage optimizations
  • Initial XNNPACK EP compatibility

ORT Training

  • [New] ORT Training acceleration is also natively available through HuggingFace Optimum
  • [New] FusedAdam Optimizer now available through the torch-ort package for easier training integration
  • FP16_Optimizer Support for more DeepSpeed Versions
  • Bfloat16 support for AtenOp
  • Added gradient ops for ReduceMax and ReduceMin
  • Updates to Min and Max grad ops to use distributed logic
  • Optimizations
    • Optimized perf for Gelu and GeluGrad kernels for mixed precision models
    • Enabled fusions for SimplifiedLayerNorm
    • Added bitmask versions of Dropout, BiasDropout and DropoutGrad, which bring ~8x space savings for the mask output.

Known issues


Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, edgchen1, fdwr, skottmckay, iK1D, fs-eire, mszhanyi, WilBrady, justinchuby, tianleiwu, PeixuanZuo, garymm, yufenglee, adrianlizarraga, yuslepukhin, dependabot[bot], chilo-ms, vvchernov, oliviajain, ytaous, hariharans29, sumitsays, wangyems, pengwa, baijumeswani, smk2007, RandySheriffH, gramalingam, xadupre, yihonglyu, zhangyaobit, YUNQIUGUO, jcwchen, chenfucn, souptc, chandru-r, jstoecker, hanbitmyths, RyanUnderhill, georgen117, jywu-msft, mindest, sfatimar, HectorSVC, Craigacp, jeffdaily, zhijxu-MS, natke, stevenlix, jeffbloo, guoyu-wang, daquexian, faxu, jingyanwangms, adtsai, wschin, weixingzhang, wenbingl, MaajidKhan, ashbhandare, ajindal1, zhanghuanrong, tiagoshibata, askhade, liqunfu

onnxruntime - ONNX Runtime v1.11.1

Published by chilo-ms over 2 years ago

This is a patch release on 1.11.0 with the following fixes:

All official packages are attached, and Python packages are additionally published to PyPi.

onnxruntime - ONNX Runtime v1.11.0

Published by chilo-ms over 2 years ago

Key Updates

General

  • Support for ONNX 1.11 with opset 16
  • Updated protobuf version to 3.18.x
  • Enable usage of Mimalloc (details)
  • Transformer model helper scripts
  • On Windows, error strings in OrtStatus are now encoded in UTF-8. When you need to print one to the screen, first convert it to a wide-character string using the MultiByteToWideChar Windows API, as shown in the sketch below.
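
A minimal Windows-only sketch of that conversion, assuming the UTF-8 error text is read from an Ort::Exception (whose message comes from the underlying OrtStatus):

```cpp
// Sketch only (Windows): convert a UTF-8 ORT error string to a wide string
// before printing it, so non-ASCII characters render correctly.
#include <onnxruntime_cxx_api.h>
#include <windows.h>

#include <iostream>
#include <string>

std::wstring Utf8ToWide(const std::string& utf8) {
  if (utf8.empty()) return std::wstring();
  const int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                      static_cast<int>(utf8.size()), nullptr, 0);
  std::wstring wide(len, L'\0');
  MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                      static_cast<int>(utf8.size()), &wide[0], len);
  return wide;
}

int main() {
  try {
    Ort::Env env;
    Ort::SessionOptions session_options;
    Ort::Session session(env, L"does_not_exist.onnx", session_options);  // force an error
  } catch (const Ort::Exception& e) {
    std::wcout << Utf8ToWide(e.what()) << L'\n';  // e.what() is UTF-8 on Windows
  }
  return 0;
}
```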

Performance

  • Memory utilization related performance improvements (e.g. elimination of vectors for small dims)
  • Performance variance stability improvement through a dynamic cost model session option (details; see the sketch after this list)
  • New quantization data format support: S8S8 in QDQ format
    • Added s8s8 kernels for ARM64
    • Support to convert s8s8 to u8s8 automatically for x64
  • Improved performance on ARM64 for quantized CNN model through:
    • New kernels for quantized depthwise Conv
    • Improved symmetrically quantized Conv by leveraging indirect buffer
    • New Gemm kernels for symmetric quantized Conv and MatMul
  • General quantization improvements, including new quantized operators (Resize, ArgMax) and quantization tool updates
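
A minimal sketch of opting into the dynamic cost model mentioned above; note that the config key "session.dynamic_block_base" and the value "4" are assumptions here, so check onnxruntime_session_options_config_keys.h in your release for the exact spelling:

```cpp
// Sketch only: opt a session into dynamic thread-pool work partitioning to
// reduce run-to-run performance variance. NOTE: the key/value below are
// assumptions; verify against onnxruntime_session_options_config_keys.h.
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env;
  Ort::SessionOptions session_options;
  session_options.AddConfigEntry("session.dynamic_block_base", "4");  // assumed key/value
  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);
  return 0;
}
```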

API

  • Java: Only a single OrtEnv can be created in any given execution of the JVM. Previously, the environment could be closed completely and a fresh one could be created with different parameters (e.g. global thread pool, or logging level) (details)

Packages

  • Nuget packages
    • C# packages now tested with .NET 5. .NET Core 2.1 support is deprecated as it reached end of life on August 21, 2021. We will closely follow .NET's support policy
    • Removed PDB files. These are attached as release artifacts below.
  • Pypi packages
    • Python 3.6 is deprecated as it reached EOL in December 2021. Supported Python versions: 3.7-3.9
    • Note: Mac M1 builds are not yet available in pypi but can be built from source
    • OnnxRuntime with OpenVINO support available at https://pypi.org/project/onnxruntime-openvino/1.11.0/

Execution Providers

  • CUDA
    • Enable CUDA provider option configuration for C# to support workspace size configuration from C#, and fix binary compatibility of the CUDAProviderOptions C API
    • Preview support for CUDA Graphs (details; see the sketch after this list)
  • TensorRT
    • TRT 8.2.3 support
    • Memory footprint optimizations
    • Support protobuf >= 3.11
    • Updated flatbuffers version to 2.0
    • Misc Bug Fixes
  • DirectML
    • Updated more operators to opset 13 (QuantizeLinear, DequantizeLinear, ReduceSum, Split, Squeeze, Unsqueeze).
  • OpenVINO
  • OpenCL (in preview)
    • Introduced the EP for OpenCL for use with mobile GPUs
    • Available in the experimental/opencl branch for users to try. Provide feedback through Issues and Discussions in the repo.
    • README is available here.
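
A minimal sketch of the CUDA Graphs preview mentioned above, assuming the V2 CUDA provider options C API; the "enable_cuda_graph" key follows the CUDA EP documentation, and actually capturing/replaying a graph additionally requires IOBinding with fixed input/output buffers, which is omitted here:

```cpp
// Sketch only: enable the (preview) CUDA Graphs path via the V2 CUDA
// provider options. Graph capture/replay also needs IOBinding with fixed
// device buffers, which is not shown.
#include <onnxruntime_cxx_api.h>

int main() {
  const OrtApi& api = Ort::GetApi();

  OrtCUDAProviderOptionsV2* cuda_options = nullptr;
  Ort::ThrowOnError(api.CreateCUDAProviderOptions(&cuda_options));
  const char* keys[] = {"enable_cuda_graph"};
  const char* values[] = {"1"};
  Ort::ThrowOnError(api.UpdateCUDAProviderOptions(cuda_options, keys, values, 1));

  Ort::Env env;
  Ort::SessionOptions session_options;
  Ort::ThrowOnError(
      api.SessionOptionsAppendExecutionProvider_CUDA_V2(session_options, cuda_options));
  api.ReleaseCUDAProviderOptions(cuda_options);

  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);
  return 0;
}
```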

Mobile

  • Added general support for converting a model to NHWC layout at runtime
    • The execution provider sets its preferred layout, and shared infrastructure in ORT ensures that the nodes assigned to that execution provider are converted to the preferred layout
  • Added support for runtime optimization with minimal binary size impact
    • Relevant optimizations are saved in the ORT format model for replay at runtime if applicable
  • Added support for QDQ format models to the NNAPI EP
    • Will fall back to the CPU EP's QDQ handling (via runtime optimizations) if NNAPI is not available
    • Includes updates to the ORT QDQ optimizers so they work better with mobile scenarios
  • Added helpers to:
    • Analyze if a model can be used with the pre-built ORT Mobile package
    • Update the ONNX opset so the model can be used with the pre-built package
    • Convert dynamic inputs into fixed size inputs so that the model can be used with NNAPI/CoreML
    • Optimize a QDQ format model for use with ORT
  • Added Android and iOS packages with full ORT builds
    • These packages have additional support for the full set of opsets and ops for ONNX models at the cost of a larger binary size.

Web

  • Build option to create ONNX Runtime WebAssembly static library
  • Support for concurrent creation of multiple inference sessions
  • Upgraded emsdk to 3.1.3 for more stable multi-threading and to enable LTO for multi-threaded WebAssembly builds.

Known issues

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, edgchen1, skottmckay, yufenglee, wangyems, yuslepukhin, gwang-msft, iK1D, chilo-ms, fdwr, ytaous, RandySheriffH, hanbitmyths, chenfucn, yihonglyu, ajindal1, fs-eire, souptc, tianleiwu, YUNQIUGUO, hariharans29, oliviajain, xadupre, ashari4, RyanUnderhill, jywu-msft, weixingzhang, baijumeswani, georgen117, natke, Craigacp, jeffdaily, JingqiaoFu, zhanghuanrong, satyajandhyala, smk2007, ryanlai2, askhade, thiagocrepaldi, jingyanwangms, pengwa, scxiao, ashbhandare, BowenBao, SherlockNoMad, sumitsays, sfatimar, mosdav, harshithapv, liqunfu, tiagoshibata, gineshidalgo99, pranavsharma, jcwchen, nkreeger, xkszltl, faxu, suffiank, stevenlix, jeffbloo, feihugis