Open standard for machine learning interoperability
Apache-2.0 License
ONNX v1.16.1 is a patch release based on v1.16.0.
Please visit onnx.ai to learn more about ONNX and associated projects.
Published by cjvolzka 7 months ago
ONNX v1.16.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
GroupNormalization: added stash_type attribute and changed the input shape of scale and bias from (G) to (C)
Added metadata_props field
Added value_info field
Added overload field to support overloaded functions
You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.
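For intuition on the GroupNormalization change, here is a minimal numpy sketch of its semantics under the updated spec, where scale and bias are per-channel (shape (C,)) rather than per-group (shape (G,)). The helper below is illustrative only, not the ONNX reference implementation:

```python
import numpy as np

def group_norm(x, num_groups, scale, bias, eps=1e-5):
    # x: (N, C, ...). scale and bias are per-channel with shape (C,),
    # matching the updated spec (previously they had shape (G,)).
    n, c = x.shape[:2]
    xg = x.reshape(n, num_groups, -1)
    mean = xg.mean(axis=-1, keepdims=True)
    var = xg.var(axis=-1, keepdims=True)
    xhat = ((xg - mean) / np.sqrt(var + eps)).reshape(x.shape)
    # Broadcast the per-channel parameters over the spatial dimensions
    bshape = (1, c) + (1,) * (x.ndim - 2)
    return scale.reshape(bshape) * xhat + bias.reshape(bshape)

x = np.random.randn(2, 4, 3, 3).astype(np.float32)
y = group_norm(x, num_groups=2,
               scale=np.ones(4, np.float32), bias=np.zeros(4, np.float32))
```

With unit scale and zero bias, each group of channels is normalized to roughly zero mean and unit variance.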
Thanks to these individuals for their contributions in this release since last 1.15.0 release:
Aditya Goel, Adrian Lizarraga, Andreas Fehlner, Charles Volzka, Daniel Richard G, Danni, G. Ramalingam, Gal Hubara-Agam, Ilya Lavrenov, Justin Chu, Tabari Alexander, Takeshi Watanabe, WORLD PEACE, Wouter Deconinck, Xavier Dupré, Yuan Yao, dependabot[bot], galagam, jslap-ubi, liqun Fu
Published by liqunfu 12 months ago
ONNX v1.15.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
Added new operators: ImageDecoder https://github.com/onnx/onnx/pull/5294, RegexFullMatch https://github.com/onnx/onnx/pull/5401, StringConcat https://github.com/onnx/onnx/issues/5350, StringSplit https://github.com/onnx/onnx/pull/5371, AffineGrid https://github.com/onnx/onnx/issues/5225, Gelu https://github.com/onnx/onnx/issues/5277
Updated existing operators: ConstantOfShape https://github.com/onnx/onnx/pull/5390, GridSample https://github.com/onnx/onnx/pull/5010, ReduceMax https://github.com/onnx/onnx/pull/5539, ReduceMin https://github.com/onnx/onnx/pull/5539, IsNaN https://github.com/onnx/onnx/pull/5583, IsInf https://github.com/onnx/onnx/pull/5583, DFT https://github.com/onnx/onnx/pull/5514, LabelEncoder https://github.com/onnx/onnx/pull/5453
New features, bug fixes, and document updates
New Operators (ai.onnx):
Operator Updates (ai.onnx):
inf/-inf as float literals. PR#5528
Users can now serialize the model proto to a text format by specifying a supported file extension or by supplying the format= argument in save_model. For example
# model: onnx.ModelProto
onnx.save_model(model, "model.json")
will save the model as a JSON file.
You can upgrade to the latest release using pip install onnx --upgrade
or build from source following the README instructions.
Deprecation: direct invocation of setup.py (e.g. python setup.py develop) is deprecated following https://setuptools.pypa.io/en/latest/deprecated/commands.html. To build ONNX, users should switch to:
# Editable installation
# Before: python setup.py develop
# Now
pip install -e .
# Build wheel
# Before: python setup.py bdist_wheel
# Now
pip install --upgrade build
python -m build .
Thanks to these individuals for their contributions in this release since last 1.14.0 release:
@adityagoel4512 @AlexandreEichenberger @andife @AtanasDimitrovQC @BowenBao @cbourjau @ClifHouck @guoyuhong @gramalingam @ilya-lavrenov @jantonguirao @jbachurski @jcwchen @justinchuby @leso-kn @linkerzhang @liqunfu @prasanthpul @slowlyideal @smk2007 @snnn @take-cheeze @xadupre @yuanyao-nv @zhenhuaw-me
Published by yuanyao-nv about 1 year ago
ONNX v1.14.1 is a patch release based on v1.14.0.
Fix shape data propagation function to handle missing optional parameters #5219
Published by yuanyao-nv over 1 year ago
ONNX v1.14.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
DeformConv added in https://github.com/onnx/onnx/pull/4783
Equal - Support for string data type added in https://github.com/onnx/onnx/pull/4828
AveragePool - New attribute dilations https://github.com/onnx/onnx/pull/4790
Pad - Added new wrap value to the mode attribute to support circular padding https://github.com/onnx/onnx/pull/4793
Resize - Added half_pixel_symmetric to the coordinate_transformation_mode attribute https://github.com/onnx/onnx/pull/4862
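For intuition on Pad's new mode: circular padding repeats values from the opposite edge of the axis. numpy's wrap mode has the same semantics, so a quick sketch:

```python
import numpy as np

x = np.array([1, 2, 3, 4])
# Circular ("wrap") padding takes leading pad values from the end of the
# axis and trailing pad values from the beginning
padded = np.pad(x, (2, 1), mode="wrap")
print(padded)  # [3 4 1 2 3 4 1]
```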
Replaced real models with light models in backend tests. https://github.com/onnx/onnx/pull/4861 https://github.com/onnx/onnx/pull/4960
Now ONNX supports Protobuf v21: https://github.com/onnx/onnx/pull/4956
You can upgrade to the latest release using pip install onnx --upgrade
or build from source following the README instructions.
Thanks to these individuals for their contributions in this release since last 1.13.0 release: @jcwchen, @andife, @gramalingam, @xadupre, @justinchuby, @liqunfu, @yuanyao-nv, @jbachurski, @p-wysocki, @prasanthpul, @jantonguirao, @take-cheeze, @smk2007, @AlexandreEichenberger, @snnn, @daquexian, @linkerzhang.
Published by jcwchen over 1 year ago
ONNX v1.13.1 is a patch release based on v1.13.0.
Published by p-wysocki almost 2 years ago
ONNX v1.13.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
Resize - New attributes antialias, axes and keep_aspect_ratio_policy; allow for both scales and sizes to be provided when one of them is an empty constant #4126, #4388
Pad - New attribute axes #4190
ScatterElements/ScatterND - max and min as supported reduction attributes #4411
Split - New num_outputs attribute #4481
LpPool - New attributes ceil_mode and dilations #4534
Reference Python runtime dependent on only Python and numpy has been added. #4483
ONNX 1.13.0 supports Python 3.11. #4490
Support for M1/M2 ARM processors has been added. #4642
ONNX 1.13.0 also comes with numerous:
For full details see Logistics for ONNX Release 1.13.0.
TENSOR_TYPE_TO_STORAGE_TENSOR_TYPE has been deprecated #4270
You can upgrade to the latest release using pip install onnx --upgrade
or build from source following the README instructions.
Thanks to these individuals for their contributions in this release since last 1.12.0 release: @AnandKri, @cbourjau, @jcwchen, @gramalingam, @garymm, @GaetanLepage, @ilya-lavrenov, @jnovikov, @JackBoosY, @jbachurski, @tjich, @jantonguirao, @justinchuby, @natke, @philass, @prasanthpul, @p-wysocki, @SpaceIm, @stephenneuendorffer,@take-cheeze, @sechkova, @thiagocrepaldi, @xadupre, @mszhanyi, @yuanyao-nv, @andife, @daquexian, @kylesayrs, @liqunfu, @longlee0622, @HSQ79815, @williamberman, @YanBC
The list has been acquired with a script written by Aaron Bockover.
Published by etiotto over 2 years ago
ONNX v1.12.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
input (#4044)
You can upgrade to the latest release using pip install onnx --upgrade
or build from source following the README instructions.
Thanks to these individuals for their contributions in this release since last 1.11.0 release. (Contributor list obtained with: https://github.com/onnx/onnx/graphs/contributors?from=2022-02-08&to=2022-05-24&type=c): @jcwchen, @gramalingam, @xuzijian629, @garymm, @diyessi, @liqunfu, @jantonguirao, @daquexian, @fdwr, @andife, @wschin, @xadupre, @xkszltl, @snnn
Published by liqunfu over 2 years ago
ONNX v1.11.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
You can upgrade to the latest release using pip install onnx --upgrade
or build from source following the README instructions.
Thanks to these individuals for their contributions in this release since last 1.10.0 release. (Contributor list obtained with: https://github.com/onnx/onnx/graphs/contributors?from=2021-07-30&to=2022-02-08&type=c):
@jcwchen, @gramalingam, @garymm, @mhamilton723, @TomWildenhain-Microsoft, @neginraoof, @xuzijian629, @liqunfu, @gwang-msft, @chudegao, @AlexandreEichenberger, @rajeevsrao, @matteosal, @stillmatic, @askhade, @liuyu21, @jantonguirao, @shinh, @kevinch-nv, @shubhambhokare1, @hwangdeyu, @jiafatom, @postrational, @snnn, @jackwish
Published by rajeevsrao about 3 years ago
This release is a patch release based on v1.10.0.
Bug fix:
Published by rajeevsrao about 3 years ago
ONNX v1.10.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.
Added Optional and SparseTensor types. https://github.com/onnx/onnx/pull/3407 https://github.com/onnx/onnx/pull/3398
Updated operators: Reshape, Squeeze, NonZero, DynamicQuantizeLinear.
bfloat16 support for Pow. https://github.com/onnx/onnx/pull/3412
Shape - new attributes start, end. https://github.com/onnx/onnx/pull/3580
NonZero. https://github.com/onnx/onnx/pull/3364
DynamicQuantizeLinear. https://github.com/onnx/onnx/pull/3539
Reshape shape inference. https://github.com/onnx/onnx/pull/3592
Squeeze. https://github.com/onnx/onnx/pull/3516
Squeeze without axes. https://github.com/onnx/onnx/pull/3465
Textual parser for ONNX models (onnx.parser). https://github.com/onnx/onnx/pull/3540
MatMulInteger and QLinearMatMul. https://github.com/onnx/onnx/pull/3585
strict_mode for ONNX checker. https://github.com/onnx/onnx/pull/3348
Shape to be rank-1. https://github.com/onnx/onnx/pull/3394
BatchNormalization outputs updated for training mode. https://github.com/onnx/onnx/pull/3379
You can upgrade to the latest release using pip install onnx --upgrade
or build from source following the README instructions.
Thanks to these individuals for their contributions in this release:
@jcwchen, @askhade, @gramalingam, @neginraoof, @matteosal, @postrational, @garymm, @yuslepukhin, @fdwr, @jackwish, @manbearian, @etusien, @impactaky, @rajeevsrao, @prasanthpul, @take-cheeze, @chudegao, @mindest, @yufenglee, @annajung, @hwangdeyu, @calvinmccarter-at-lightmatter, @ashbhandare, @xuzijian629, @IceTDrinker, @mrry
Published by postrational over 3 years ago
ONNX v1.9.0 is now available with exciting new features! We would like to thank everyone who contributed to this release!
You may learn more about the project, who is involved and what tools are available at the onnx.ai site.
opset_version https://github.com/onnx/onnx/pull/3266
allowzero attribute to Reshape operator https://github.com/onnx/onnx/pull/3113
onnx.OnnxParser - parser for a textual syntax of ONNX models https://github.com/onnx/onnx/pull/3194
ir_pb_converter to empty shape https://github.com/onnx/onnx/pull/3279
You can upgrade using pip install onnx --upgrade or build from source following the instructions on Github.
Thanks to these individuals for their contributions in this release:
@jcwchen, @askhade, @postrational, @etusien, @wschin, @prasanthpul, @gramalingam, @daquexian, @BowenBao,
@pranav-prakash, @matteosal, @linkerzhang, @annajung, @neginraoof, @tianleiwu, @tomdol
Published by jcwchen over 3 years ago
This release is a patch release based on v1.8.0.
Bug fixes:
Fixed import onnx on MacOS Catalina.
API change:
onnx.shape_inference does not throw shape inference errors now. If you want to see the shape inference errors, please use onnx.shape_inference.infer_shapes(onnx_model, strict_mode=True).
Release:
Published by jcwchen almost 4 years ago
ONNX v1.8 is now available with exciting enhanced features! You may learn more about the project, who is involved and what tools are available at the onnx.ai site. We would like to thank every community member for contributing to the project!
onnx.shape_inference
now accepts model path and supports >2GB models for shape inference. https://github.com/onnx/onnx/pull/3012
You can simply pip upgrade using the pip install onnx --upgrade
or build from source following the instructions on Github.
onnx.optimizer is moving to another repo: https://github.com/onnx/optimizer. It will be removed from onnx/onnx in ONNX 1.9.
onnx.version_converter has an IR gap issue - cannot use input from initializer: https://github.com/onnx/onnx/pull/3007
onnx.shape_inference updates both output and value_info. It will only update the original output in a future update: https://github.com/onnx/onnx/issues/3069
Thanks to these individuals for their contributions in this release:
jcwchen, askhade, wschin, vinitra, prasanthpul, gramalingam, daquexian, rajeevnalawadi, sveta-levitan, ashbhandare, chinhuang007, KsenijaS, shinh, BowenBao, shubhambhokare1, pranav-prakash, prabhat00155, pluradj, matteosal, jackwish, Yukigaru, H1Gdev, 462630221, natke, kevinch-nv, RandySheriffH, souptc, fdwr, HectorSVC, jspisak, codemzs, yuslepukhin, linkerzhang
Published by chinhuang007 over 4 years ago
ONNX v1.7 is now available with exciting new features! We would like to thank everyone who contributed to this release! You may learn more about the project, who is involved and what tools are available at the onnx.ai site.
Major changes and updates since the v1.6.0 release:
Training Support, as a tech preview
Operator changes
Opset has been updated to version 12.
Preview training opset has been added as version 1.
New operators:
Updated operators:
General Features
Bug fixes
You can simply pip upgrade using the following command or build from source following the instructions on Github.
pip install onnx --upgrade
You can find all the commits and pull requests on Github, https://github.com/onnx/onnx/pulls?q=is%3Apr+milestone%3A1.7+
Python 2.7 support will be deprecated in ONNX 1.8 release. Please plan accordingly.
Published by kevinch-nv about 5 years ago
ONNX v1.6 is now available! We would like to thank everybody who has contributed to this release! You may learn more about the project, who is involved and what tools are available at the onnx.ai site.
Major changes and updates since the v1.5.0 release:
Graph representation
Operators
Gemm: bias input C is now optional (#2330)
Resize: added cubic interpolation mode, sizes input parameter, and coordinate_transformation, cubic_coeff_a and exclude_outside attributes (#2057)
Bug Fixes
You can simply pip upgrade using the following command or build from source following the instructions on Github.
pip install onnx --upgrade
Fix spec and shape inference for Unsqueeze op (#2347)
Bump NMS version for avoiding regression in existing models (#2348)
Relax IF's shape inference rule (#2345)
Clarify behavior in ConvTranspose (#2343)
Fix node test case model for Gemm scalar bias case (#2342)
Update pybind (#2340)
Update gen_doc script to validate proto3 files (#2122)
Fix some backend tests (#2335)
Gemm optional bias (#2330)
Changes for AIX platform (#1913)
Updated test cases for reshape (#2127)
Replace is by == (#2326)
Updated docs for strides and dilations attributes (#2291)
Revamped test cases for Gemm (#2060)
Add more shape inference tests for Logical operators to improve coverage (#2133)
Change incorrect use of ValueError to TypeError (#2304)
Support dynamic 'pads' and 'value' in Pad operator (#2031)
Update IR doc to clarify initializers are permitted as node inputs (#2320)
Avoid uses of special chars (#2315)
Regenerate ONNX proto and add release date to ver 6 IR (#2316)
Add description of default type about y_zero_point (#2110)
Support make_attribute empty string (#2129)
More unsqueeze tests (#2200)
Fix resize shape inference issue in opset10 (#2294)
Sequence related ops (#2249)
Add helper function update_inputs_outputs_dims to tools (#2148)
Update documentation about required input output types (#2310)
Shape inference for NMS (#2269)
Fix extra collect_snippets warning (#2277) (#2307)
Fix shapeinference function (#2296)
fix the buffer overflow problem in shape inference logic of Squeeze op
Support for negative indices in 'Gather'
Fix collect_snippets warnings (#2277)
Update printable_graph in helper.py to output details of initializers that do not have matching graph inputs. (#2135)
test int64 input type for 'where' op (#2253)
Supporting negative axes for all existing onnx ops (#2281)
Update managingexperimentalops.md (#1981)
Fix link to community docs in readme (#2261)
move map and sequence types to onnx domain
Improve compatiblity with proto3 and enable reading attributes (#2288)
Remove type info for loop variadic input in Loop op used to compose the Range op (#2287)
Add Foundation WG to working-groups.md (#2276)
Fix testdata model for CumSum. Add exclusive attribute. (#2271)
Support GatherND operator in ONNX (#2106)
Support ScatterND operator in ONNX (#2220)
Add Det to ONNX (#2233)
Update the description of nearest_mode of resize op (#2257)
Adding sparse tensor to ONNX (#2019)
Support Range operator in ONNX (#2242)
Update resize op (#2057)
Add function to fuse dynamic quantization graph into 1 node (#2187)
Update logo_request.md (#2231)
Update Clip in opset 11 to support min/max as inputs instead of attributes (#2096)
Fix segfault in tile shape inference (#2221)
update onehot shape inference to reflect the spec for depth input (#2224)
Add GatherElements Op and Rename ScatterElements (#2143)
Unique (#2141)
Clarify dimension variable scoping (#2211)
Liqun/topk sort (#2126)
Update document for NMS (#2193)
Handle negative 'axis' value in Split type and shape inferencing (#2177)
depth to space shuffle order (#2163)
minor updates to fix links in readme (#2189)
Add check to disallow squeezing input axes which are not 1 (#2204)
Clarify ambiguity in gather spec regarding indices expectation (#2202)
Fix some minor issues in IR.md and Versioning.md (#2108)
Skip install typing package for python >=3.5 (#2199)
Member Company logo guidelines (#2196)
remove link to outdated issue for contributions wanted (#2186)
Create sigs.md (#2103)
mintor format update (#2180)
add more types support for Equal op (#2176)
Update AddNewOP document. (#2172)
Add missing space (#2150)
python api example typo fix (#2155)
Fix errors in RoiAlign shape inference code (#2167)
TensorProto::INT8 & INT16 were missed here (#2164)
Fix LabelEncoder's shape inference (#2170)
Fixing a unit test in Cumsum Operator (#2157)
[New Operator] CumSum (#2030)
Fix globalpool output shape (#2147)
Expose ONNX_ML build option to python (#2138)
Missing newline fix (#2128)
Avoid unnecessary copies of names by checker (#2098)
update qlinear conv test (#2120)
Add shape inference for LinearClassifier (#2077)
Fix inconsistency in describing graph's initializer. The initializer (#2115)
Update codeowners to have community folder changes assigned to steering committee (#2104)
Fix Resize/Upsample Shape inference function (#2085)
Clarify shape inference requirements for new operators (#2088)
Fix NN defs file (#2083)
Fix type s/depracted/deprecated/ (#2092)
Add shape inference for Tile op (#2076)
[New Operator] Round (#2053)
Add dilations support in ConvTranspose shape inference and update docs (#2068)
Fix typo (#2069)
Add a missing step when upgrading an operator (#2071)
Clarify the axis/size in pads
Fix wrong condition and add --user in update_doc.sh (#2050)
Add bit-shift operators for supporting hashing (#1931)
Add shape inference logic for Expand op (#2041)
update qops tests (#2040)
Fix torchvision installation (#2054)
Fix bug that kernel_shape rather than effective_kernel_shape is used in dilated conv (#2043)
Changes done internally at Facebook (#2035)
Explicitly specify type of integers in the input tensor. (#2034)
Version Conversion of Min
Fix auto_pad shape inference bug (#2028)
Version Conversion from opset 8 to 9 (#2007)
fix macro ONNX_DISALLOW_COPY_AND_ASSIGN bug (#2017)
fix array range bug (#2015)
Relax constraint on subgraph input/output type and shape (#2009)
Fix shape inference logic for TopK operator (#2005)
Nullary variadic (#1889)
Removed setting MD/MDd flags manually through cmake. The MTd/MT part is still necessary. Looks like CI fails without it. (#1995)
Move NonMaxSupression to object_detection folder (#2001)
Prevent using invalid iterator
Add shape inference for legacy auto_pad modes (#1988)
Move Quantization working group to completed state (#1980)
Define the IR acronym (#1985)
fix shape inference (#1984)
fixing some of Mod test cases (#1962)
Lint the docs name (#1982)
Fix a shapeinference bug in upsample v9/10 (#1969)
Create managingexperimentalops (#1974)
Create archivefileformat doc based on the wiki equivalent (#1973)
Create NLPinONNXproposal (#1975)
Create ONNXIFIproposal (#1976)
Create onnxreleases (#1977)
Create functionsproposal (#1978)
Create typeannotations.md (#1979)
Published by raymondxyang over 5 years ago
ONNX v1.5 is now available! You may learn more about the project, who is involved and what tools are available at the onnx.ai site. We would like to thank every community member for contributing to the project!
The major changes/updates since v1.4 release:
You can simply pip upgrade using the following command or of course build from source from the latest on Github:
pip install onnx --upgrade
Published by raymondxyang over 5 years ago
This is a patch release on v1.4.0, fixing line ending issue on Linux environment.
Published by raymondxyang over 5 years ago
We are excited to announce that the v1.4 release of ONNX is now available! For those who aren't familiar with ONNX, you can learn more about the project, who is involved and what tools are available at the onnx.ai site.
You can simply pip upgrade using the following command or of course build from source from the latest on Github (our source of truth):
pip install onnx --upgrade
December 4, 2018 - ONNX Runtime for inferencing machine learning models open sourced by Microsoft
ONNX Runtime, a high-performance inference engine for machine learning models in the ONNX format, is now open source. ONNX Runtime is the first publicly available inference engine that fully implements the ONNX specification, including the ONNX-ML profile. Python, C#, and C APIs are available for Linux, Windows, and Mac. ONNX Runtime can deliver an average performance gain of 2X for inferencing. Partners in the ONNX community including Intel and NVIDIA are actively integrating their technology with ONNX Runtime to enable more acceleration. READ MORE
November 29, 2018 - ONNX.js for running ONNX models on browsers and Node.js
ONNX.js, an open source Javascript library for running ONNX models on browsers and on Node.js, is now available. It allows web developers to score pre-trained ONNX models directly on browsers, and has adopted WebAssembly and WebGL technologies for providing an optimized ONNX model inference runtime for both CPUs and GPUs. ONNX.js is the first solution to utilize multi-threading in a Javascript-based AI inference engine (via Web Workers), offering significant performance improvements over existing solutions on CPU. READ MORE
October 24, 2018 - CEVA Adds ONNX Support to CDNN Neural Network Compiler
CEVA, Inc., the leading licensor of signal processing platforms and artificial intelligence processors for smarter, connected devices, today announced that the latest release of its award-winning CEVA Deep Neural Network (CDNN) compiler supports the Open Neural Network Exchange (ONNX) format. READ MORE
October 16, 2018 - ONNX Runtime for inferencing machine learning models now in preview
We are excited to release the preview of ONNX Runtime, a high-performance inference engine for machine learning models in the Open Neural Network Exchange (ONNX) format. ONNX Runtime is compatible with ONNX version 1.2 and comes in Python packages that support both CPU and GPU to enable inferencing using Azure Machine Learning service and on any Linux machine running Ubuntu 16. READ MORE
September 6, 2018 - Synopsys Announces Support for the Open Neural Network Exchange Format in ARC MetaWare EV Development Toolkit
Synopsys, Inc. today announced support for the Open Neural Network Exchange (ONNX) format in the upcoming release of its DesignWare® ARC® MetaWare EV Development Toolkit, a complete set of tools, runtime software and libraries to develop vision and artificial intelligence (AI) applications for ARC EV6x Embedded Vision Processor IP. READ MORE
New Operator and Operator Updates:
Adding generator op ConstantLike (#1406)
Supporting int32 and int64 in Less and Greater (#1390)
fix AvgPool doc. add default value for count_include_pad (#1391)
Add DynamicSlice experimental op (#1377)
fix the doc for softmax (#1374)
Fix the shape inference for concat (#1361)
Add several hyperbolic function ops. (#1499)
Add OneHot op to ONNX. (#1567)
Fix MaxUnpool shape inference when output_shape is provided as input (#…
Add type shape inferencing for the If operator (#1571)
fix ConvTranspose spec (#1566)
Change upsample operator to allow dynamic 'scales' (#1467)
Fix output type bug in MaxUnpool definition. (#1553)
Add Compress Op (#1454)
Add MaxUnpool op to ONNX. (#1494)
support more types for Gemm, Flatten and PRelu (#1472)
deprecate no-spatial mode of BN (#1637)
Add Where op. (#1569)
Fix output_shape of a testcase for ConvTranspose (#1437)
Adding EyeLike generator op. (#1428)
Clarify the spec for convolution transpose input shape (#1413)
Separate types of inputs 1 and 2 in OneHot op. (#1610)
make output shape clear enough for Softmax family (#1634)
fix batchnorm doc (#1633)
Add Scatter op to ONNX (#1517)
Add Erf operator for computing error function (#1675)
Add IsNaN operator. (#1656)
Add Sign Op (#1658)
Update scan (#1653)
add isnan data (#1685)
Clarify some aspects of the Loop spec. (#1587)
repaire convtranspose shape inference (#1660)
Remove ConstantLike op. Updates to ConstantOfShape op. (#1716)
add constantofshape (#1582)
Add Shrink operator (#1622)
Scan test update (#1732)
Scan output axes (#1737)
Add NonZero op. (#1714)
fix the test cases for constantofshape (#1746)
Add sample implementation support (#1712)
Update definition of Cast Op to support casting to/from string (#1704)
Update ConstantOfShape op (#1744)
Add TfIdfVectorizer operator to ONNX (#1721)
ONNXIFI:
ONNXIFI cpp test driver (#1290)
Remove ONNXIFI_CHECK_RESULT from onnxRelease* functions (#1397)
Change onnxifi test driver classname (#1396)
Silence usused result warning in ONNXIFI wrapper cleanup. Fix #1344 (#…
[ONNXIFI]Fix gtest assert (#1482)
[ONNXIFI]Reliable memory of shape in test driver (#1480)
onnxifi test driver bugs fixed (#1462)
[ONNXIFI]gtest:expect to assert (#1456) …
[ONNXIFI]Fix the crash when weightCount = 0 (#1451)
[ONNXIFI]Make TEST_P be able to show the test case name directly (#1487)
[onnxifi] Make sure that backend handles run async. (#1599)
Fix onnxifi test (#1617)
Miscellaneous:
bump up the node test to opset 9 (#1431)
remove unindexed ConstantLike test case (#1432)
Add node name for error & Fix typo (#1426)
Fix the typo in the doc (#1427)
Adding checker/typeshape inference logic for Function (#1423)
[cmake] Allow adding extra source files to the onnx lib (#1439)
Add the ability to deprecate an OpSchema (#1317)
[Anderspapitto patch] fix the shape inference for broadcasting (#1368)
external_data: Store large tensor values in separate files (#678)
Add opaque type support (#1408)
Fix checker logic (#1459)
Add version table to Versioning.md to provide a clear mapping (#1418)
serialized model data in test driver, ir version is now corrected (#1455
refresh onnx-ml.proto (#1448)
Fix ONNX_NAMESPACE definition (#1444)
Add BFLOAT16 data type (FLOAT32 truncated to 16 bits) (#1421)
Use strings directly for casing as np.object w/o redundant StringHold
Remove default value for 'dtype' attribute in ConstantLike op. (#1461)
Fix TensorProto int32_data comment (#1509)
fix ninja external (#1507)
Shut up warnings about markers. (#1505)
add the script (#1501)
Minor cleanup in circleci build scripts (#1498)
fix onnx checker to support proto3 models. (#1495)
Add config files for CircleCI (#1490)
Change function ownership to ONNX (#1493)
maintain the integration of gtest arguments (#1491)
Skip some warning for clang-cl (#1484)
Make ONNX compatible with gcc-8 (#1488)
Build with old version protobuf on Windows (#1486)
Clean memory when failed test (#1476)
Change Function registry flow; Get rid of whole-archive in compile (#…
fix the bug of loading model input/output proto (#1477)
Operator set versioning - tighten wording regarding breaking changes (#…
add skip in gtest & update gtest version (#1473)
Opaque type ToString() does not wrap the result into the supplied (#1468
Fix compiler warnings on unhandled bfloat16 switch case (#1470)
Move the definition of the singleton DomainToVersionRange to .cc file (…
fix some issue with namespace (#1533)
Remove Opaque type parameters as not needed. Adjust DataType handling. (
Use vector instead of set to keep the order of the opt passes (#1524)
Pin awscli to last known good version (#1518)
Update docker image version used in CircleCI (#1511)
Fix the mapping for Complex128 data type (#1422)
add default value to doc (#1410)
Fixup handling of captured values as graph outputs (#1411)
[build] Add ONNX_API for protos in all cases (#1407)
[compiler flag] Issue a warning if class has virtual method but missi…
Add a virtual destructor to GraphInferencer (#1574)
Add Scan type/shape inferencing (#1503)
Add hook to InferenceContext to allow running type/shape inferencing … (
Implemented shape inference for Gather (#1525)
add eliminate nop monotone argmax pass (#1519)
Enable -Wall -Wextra -Werror for CI (#1547)
Introduce SparseTensor ML proto (#1554)
In driver test check the return status of onnxGetBackendIDs (#1597)
Make CI log less verbose (#1595)
Loop type shape inferencing (#1591)
add uint8 (#1590)
Add domain as an optional parameter for make_node function (#1588)
Remove unreachable code in shape_inference.h (#1585)
fix a newline in Scan doc (#1541) …
allow variadic parameters of different types (#1615)
Fix a bug in vector address access (#1598)
Handle new types in the switch. (#1608)
Bump docker image version to 230 used in CircleCI (#1606)
type proto does not exactly match the type str, (#1545)
Fix 'line break after binary operator' flake8 warnings. (#1550)
remove inappropriate consts (#1632)
Shape inference fix for broadcast, concat and scan (#1594)
mark PROTOBUF_INCLUDE_DIRS as BUILD_INTERFACE (#1466)
Add a capability to input/output unicode strings (#1734)
Include guidance on adding new operators (#1416)
Clarify namescopes in the presence of nested subgraphs (#1665)
use an empty initializer to create map (#1643)
Remove redundant const (#1639)
Show the op's type and name when the shape inference is failed. (#1623)
link the tutorial (#1650)
Upgrade label encoder to support more input types (#1596)
Add Doc about Adding New Operator into ONNX (#1647)
Fix unused var warning (#1669)
Changes done internally at Facebook (#1668)
Replace np.long by np.int64 (#1664)
Infer shape from data in Constant nodes (#1667)
fix the const map initializatoin (#1662)
Add scan test case (#1586)
Add bfloat16 support. (#1699)
ONNX does not maintain versions for experimental ops (#1696)
Correct type of value_info in Graph (#1694)
Fix typos (#1686)
Use int instead of enum to store data type (#1626)
fix broken link in VersionConverter.md (#1683)
add a shape inference test for group conv (#1719)
Set symbol visibility to hidden for non-Windows (#1707)
[Minor] Fix Windows line ending in test coverage generating script (#…
Support rtol and atol at the model granularity (#1723)
turn rtol to 0.002 on densenet121, since AMD and Nvidia GPU's precion
typos fixed: iutput -> input (#1726)
print some information (#1724)
Update README.md (#1722)
Handle negative axis in scan shape inference (#1748)
remove stale test cases (#1434)
Show string names of data types instead of int IDs (#1749)
Relax constraint that the initializers must be a subset of graph inputs (#1718)
Fix typo in scan shape inferencing (#1753)
Cheers!
-The ONNX Team