hls4ml

Machine learning on FPGAs using HLS

Apache-2.0 License

Downloads: 988 · Stars: 1.1K · Committers: 57
hls4ml - edelweiss 0.8.1 (latest release)

Published by jmitrevs 10 months ago

What's Changed

New Contributors

Full Changelog: https://github.com/fastmachinelearning/hls4ml/compare/v0.8.0...v0.8.1

hls4ml - edelweiss 0.8.0

Published by jmitrevs 11 months ago

What's Changed

New Contributors

Full Changelog: https://github.com/fastmachinelearning/hls4ml/compare/v0.7.1...v0.8.0

hls4ml - edelweiss 0.8.0rc1

Published by jmitrevs 12 months ago

What's Changed

New Contributors

Full Changelog: https://github.com/fastmachinelearning/hls4ml/compare/v0.7.1...v0.8.0rc1

hls4ml - delphinium 0.7.1

Published by jmitrevs over 1 year ago

What's Changed

Full Changelog: https://github.com/fastmachinelearning/hls4ml/compare/v0.7.0...v0.7.1

hls4ml - delphinium

Published by jmitrevs over 1 year ago

What's Changed

New Contributors

Full Changelog: https://github.com/fastmachinelearning/hls4ml/compare/v0.6.0...v0.7.0

hls4ml - delphinium rc1

Published by jmduarte over 1 year ago

What's Changed

New Contributors

Full Changelog: https://github.com/fastmachinelearning/hls4ml/compare/v0.6.0...v0.7.0rc1

hls4ml - coris

Published by thesps almost 3 years ago

What's Changed

  • VivadoAccelerator backend: target pynq-z2 and zcu102 boards directly from hls4ml by @nicologhielmetti (see the sketch after this list)
  • Updated PyTorch and ONNX converters by @Duchstf
  • line_buffer Conv2D implementation for io_stream: reduced resource usage and latency by @Keb-L, @violatingcp, @vloncar
  • Support QConv2DBatchnorm layer from QKeras by @nicologhielmetti
  • Improved profiling plots - easier to compare original vs hls4ml converted models by @maksgraczyk
  • Better derivation of data types for QKeras models by @jmduarte, @thesps
  • Improved CI by @thesps
  • More support for models with branches, skip connections, Merge and Concatenate layers by @jmduarte, @vloncar
  • Support for Dense layers over multi-dimensional tensors by @vloncar
  • Overall improvements by @vloncar, @jmduarte, @thesps, @jmitrevs & others
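
A minimal sketch of targeting a board through the new VivadoAccelerator backend from the Python API. The model file, board choice and output directory are placeholders, and the exact keyword arguments may vary between hls4ml versions:

    import hls4ml
    from tensorflow import keras

    model = keras.models.load_model('my_model.h5')  # placeholder Keras model

    config = hls4ml.utils.config_from_keras_model(model, granularity='model')

    # Target the pynq-z2 board directly via the VivadoAccelerator backend
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        backend='VivadoAccelerator',
        board='pynq-z2',
        output_dir='hls4ml_pynq_z2',
    )
    hls_model.compile()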

New Contributors

Full Changelog: https://github.com/fastmachinelearning/hls4ml/compare/v0.5.0...v0.6.0

hls4ml - bartsia

Published by thesps over 3 years ago

What's new:

  • Streaming IO layer implementations, especially of Convolutional layers, accessed through the config with IOType: io_stream. Scales CNN support to much larger models than previously possible (see arXiv:2101.05108); a usage sketch follows this list
  • New documentation and API reference
  • Further optimizations for QKeras / quantization-aware training. A 'shift' operation is now used for po2 quantizers
  • Allow redefinition of weights directory for standalone project compilation
  • Profiling for PyTorch models
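
A minimal sketch of selecting the new streaming implementations from the Python API; the model file and output directory are placeholders, and in the YAML-driven flow the same choice is made with IOType: io_stream:

    import hls4ml
    from tensorflow import keras

    model = keras.models.load_model('my_cnn.h5')  # placeholder CNN model

    config = hls4ml.utils.config_from_keras_model(model, granularity='model')

    # io_stream selects the streaming layer implementations described above
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        io_type='io_stream',
        output_dir='hls4ml_cnn_stream',
    )
    hls_model.compile()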

Deprecated:

  • IOType: io_serial is deprecated and superseded by the new IOType: io_stream

Bugfixes:

  • Fix to Initiation Interval and different min/max latency for Strategy: Resource
  • Fix warnings in hls4ml command line script flow
  • Write yml config from Python API - for mixed API / command line flow

hls4ml - v0.5.0 pre-release

Published by thesps almost 4 years ago

Pre-release of hls4ml version v0.5.0.

What's new:

  • Streaming IO layer implementations, especially of Convolutional layers, accessed through the config with io_type: io_stream. Scales CNN support to much larger models than previously possible (see paper)
  • New documentation and API reference
  • Further optimizations for QKeras / quantization-aware training. A 'shift' operation is now used for po2 quantizers
  • Allow redefinition of weights directory for standalone project compilation

hls4ml - aster

Published by thesps almost 4 years ago

What's new:

  • Support for GarNet layer (see paper)
  • Input layer precision added to config generator utility
  • New 'SkipOptimizers' config option. All Optimizers now run by default (as in v0.3.0), minus any listed in 'SkipOptimizers', e.g. hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']; see the sketch after this list
  • Print out the latency report from Cosimulation
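
A sketch built around the snippet in the note above; the model file and output directory are placeholders, and whether 'SkipOptimizers' is picked up exactly like this depends on the hls4ml version in use:

    import hls4ml
    from tensorflow import keras

    model = keras.models.load_model('my_model.h5')  # placeholder Keras model

    hls_config = hls4ml.utils.config_from_keras_model(model, granularity='model')
    # Run all optimizer passes by default, minus the ones listed here
    hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']

    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=hls_config, output_dir='hls4ml_prj')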

Bugfixes:

  • Fixes related to tensorflow 2.3: new Functional API, changes to handling of Input layer
  • Fix error with config generator utility and activation layers for granularity='name'
  • Fix issue with reloading of emulation library after configuration change
  • Fix to handling of layers with use_bias=False and merged Dense and BatchNormalization

hls4ml - v0.3.0

Published by thesps about 4 years ago

What's new:

  • API expansion (see the sketch after this list):
    • Create configuration dictionary from model object
    • Run 'C Simulation' from Python with hls_model.predict(X)
    • Trace model layer output with hls_model.trace(X)
    • Write HLS project, run synthesis flow from Python
  • QKeras support: convert models trained using layers and quantizers from QKeras
  • Example models moved to separate repo, added as a submodule with an API to retrieve them
  • New Softmax implementations
  • Minor fixes: weights exported at higher precision, concatenate layer shape corrected
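
A minimal sketch of the expanded Python API flow listed above (configuration from a model object, C simulation with predict, layer tracing, and the synthesis flow); the Keras model and the input array X are placeholders:

    import hls4ml
    import numpy as np
    from tensorflow import keras

    model = keras.models.load_model('my_model.h5')  # placeholder Keras model
    X = np.random.rand(10, 16).astype('float32')    # placeholder input data

    # Create a configuration dictionary from the model object
    config = hls4ml.utils.config_from_keras_model(model, granularity='name')
    for layer in config['LayerName']:
        config['LayerName'][layer]['Trace'] = True  # enable per-layer tracing

    # Write the HLS project and compile the C simulation library
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir='hls4ml_prj')
    hls_model.compile()

    y_csim = hls_model.predict(X)       # 'C Simulation' from Python
    y_csim, trace = hls_model.trace(X)  # per-layer outputs
    hls_model.build()                   # run the synthesis flow from Python
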
hls4ml - v0.2.0

Published by thesps over 4 years ago

What's new:

  • tf_to_hls: convert TensorFlow protobuf (.pb) models to HLS projects
  • Support for Keras model .h5 files (extending existing support for .json architecture + .h5 weights format)
  • Support larger Conv1D / 2D layers
  • Support for binary and ternary layers from QKeras
  • API enhancements for addition of custom layer and new backends
  • Keras and HLS model profiling tool (see the sketch after this list)
  • hls4ml report command to gather HLS build reports
  • hls4ml build -l command to run logic synthesis
  • Fused Batch Normalization and Dense layer optimization pass
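
A sketch of invoking the profiling tool from Python; the function shown lives in hls4ml.model.profiling in recent releases, and its exact location and signature may have differed in v0.2.0. The model file and test data are placeholders:

    import numpy as np
    from tensorflow import keras

    import hls4ml
    from hls4ml.model import profiling

    model = keras.models.load_model('my_model.h5')  # placeholder Keras model
    X = np.random.rand(100, 16).astype('float32')   # placeholder test data

    config = hls4ml.utils.config_from_keras_model(model, granularity='name')
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir='hls4ml_prj')

    # Compare weight/activation distributions against the chosen fixed-point precision
    plots = profiling.numerical(model=model, hls_model=hls_model, X=X)
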
hls4ml - v0.1.6

Published by vloncar over 4 years ago

  • Support for larger Dense layers (enabled with Strategy: Resource in the configuration file; see the sketch after this list)
  • Binary/Ternary NN refinements
  • Built-in optimization framework
  • Optional C/RTL validation
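
The release note refers to the YAML configuration file; the sketch below shows the equivalent setting on the Python configuration dictionary of later versions, with a placeholder model file and reuse factor:

    import hls4ml
    from tensorflow import keras

    model = keras.models.load_model('my_model.h5')  # placeholder Keras model

    config = hls4ml.utils.config_from_keras_model(model, granularity='model')
    config['Model']['Strategy'] = 'Resource'  # enables the larger-Dense-layer implementation
    config['Model']['ReuseFactor'] = 16       # placeholder reuse factor

    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir='hls4ml_prj')
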
hls4ml - v0.1.5

Published by jmduarte about 5 years ago

hls4ml - v0.1.2

Published by benjaminkreis over 6 years ago

Update license

hls4ml - v0.1.1

Published by jmduarte over 6 years ago

Second beta version: fixed README.
