Automatic DNN generation for fuzzing and more
Apache-2.0 License
In this release, NNSmith is made more robust, systematic, and extensible. It also supports a wider range of operators, data types, and frontends/backends. We upstreamed the main infrastructure improvements from the NeuRI project (the algorithmic parts will be upstreamed in upcoming releases), making NNSmith "a fuzzer infrastructure" in addition to simply "a fuzzer" for deep-learning frameworks. Open-source developers can now easily build or extend their own fuzzers on top of NNSmith. The documentation and GitHub experience have also been improved to make the project more approachable.
- `iree` as backend (#74)
- `torchjit` as backend (#79)
- `symbolic-cinit` as generation mode (#96)

See the full commit list: https://github.com/ise-uiuc/nnsmith/compare/a8307f2b8043eb64b413a9b68ba2e6a2646cae8a...b4b9fac81cd3b1befae28d17a28f20be3a7ebf77
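The new backends and generation mode above are exposed through NNSmith's hydra-style command line. The sketch below shows how they might be exercised; the exact option names (`backend.type`, `mgen.method`, etc.) are assumptions based on NNSmith's documented CLI conventions, not guaranteed by these notes.

```shell
# Install NNSmith with PyTorch/ONNX extras (assumed extras names).
pip install "nnsmith[torch,onnx]"

# Fuzz PyTorch through the new torchjit backend (#79).
nnsmith.fuzz fuzz.time=60s fuzz.root=fuzz_torchjit \
    model.type=torch backend.type=torchjit

# Fuzz ONNX models through the new iree backend (#74).
nnsmith.fuzz fuzz.time=60s fuzz.root=fuzz_iree \
    model.type=onnx backend.type=iree

# Generate a single model with the new symbolic-cinit mode (#96).
nnsmith.model_gen model.type=torch mgen.method=symbolic-cinit
```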
https://pypi.org/project/nnsmith/0.1.0/
@Co1lin @ganler @jakc4103
Thanks to @soodoshll and many others for issue reporting.
Published by ganler over 1 year ago
- `iree` as backend (#74)
- `torchjit` as backend (#79)

See the full commit list: https://github.com/ise-uiuc/nnsmith/compare/a8307f2b8043eb64b413a9b68ba2e6a2646cae8a...8cee6f6a760a4e8b91a910462986c968aaddb01c
https://pypi.org/project/nnsmith/0.1.0rc2/
@Co1lin @ganler @jakc4103
Thanks to @soodoshll and many others for issue reporting.
Published by ganler over 1 year ago
- `iree` as backend (#74)
- `torchjit` as backend (#79)

See the full commit list: https://github.com/ise-uiuc/nnsmith/compare/a8307f2b8043eb64b413a9b68ba2e6a2646cae8a...8cee6f6a760a4e8b91a910462986c968aaddb01c
https://pypi.org/project/nnsmith/0.1.0rc1/
@Co1lin @ganler @jakc4103
Thanks to @soodoshll and many others for issue reporting.
Published by ganler about 2 years ago
This release mainly stabilizes v0.0.0 (a major refactor of the research code for quality and extensibility) and improves the usability of NNSmith.
- `torch.mean` signature spec removed, as it is not supported.
- (`addmm`) replaced with the MatMul spec (`torch.matmul`).
- `tf.Variable` device placement consistency requirement.
- `torch.sum(int32) -> int64`;
- `tvm`;

@Co1lin @ganler