hls4ml

Latest version: v0.8.1


0.6.0

What's Changed
* `VivadoAccelerator` backend: target `pynq-z2` and `zcu102` boards directly from hls4ml by nicologhielmetti
* Updated `PyTorch` and `ONNX` converters by Duchstf
* `line_buffer` Conv2D implementation for `io_stream`: reduced resource usage and latency by Keb-L, violatingcp, vloncar
* Support `QConv2DBatchnorm` layer from `QKeras` by nicologhielmetti
* Improved profiling plots - easier to compare original vs `hls4ml` converted models by maksgraczyk
* Better derivation of data types for `QKeras` models by jmduarte, thesps
* Improved CI by thesps
* More support for models with branches, skip connections, `Merge` and `Concatenate` layers by jmduarte, vloncar
* Support for `Dense` layers over multi-dimensional tensors by vloncar
* Overall improvements by vloncar, jmduarte, thesps, jmitrevs & others
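The new support for `Dense` layers over multi-dimensional tensors can be pictured as applying the layer along the last axis of the input. A minimal pure-Python sketch of that behaviour (illustrative only, not hls4ml internals):

```python
def dense_last_axis(x, w, b):
    """Apply a Dense layer (y_j = sum_i x_i * w[i][j] + b_j) along the
    last axis of a nested-list tensor of any rank."""
    # Recurse over outer axes until we reach the innermost vector.
    if isinstance(x[0], list):
        return [dense_last_axis(sub, w, b) for sub in x]
    # zip(*w) yields, for each output j, the column of weights feeding it.
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*w), b)]
```

With an identity weight matrix and bias `[0.5, -0.5]`, a rank-2 input is transformed row by row, exactly as if the Dense layer were mapped over the leading axis.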

New Contributors
* siorpaes made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/424
* jmitrevs made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/403
* anders-wind made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/302
* KOVI89alipes made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/318
* maksgraczyk made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/323
* Keb-L made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/332
* ConsVin made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/307
* nicologhielmetti made their first contribution in https://github.com/fastmachinelearning/hls4ml/pull/298

**Full Changelog**: https://github.com/fastmachinelearning/hls4ml/compare/v0.5.0...v0.6.0

0.5.0

What's new:
- Streaming IO layer implementations, especially of Convolutional layers, accessed through the config with `IOType: io_stream`. Scales CNN support to much larger models than previously possible (see [arXiv:2101.05108](https://arxiv.org/abs/2101.05108))
- New [documentation and API reference](https://fastmachinelearning.org/hls4ml/)
- Further optimizations for QKeras / quantization aware training. A 'shift' operation is now used for `po2` quantizers
- Allow redefinition of weights directory for standalone project compilation
- `profiling` for PyTorch models
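The 'shift' optimization for `po2` quantizers exploits the fact that multiplying by a power-of-two weight needs no hardware multiplier, only a bit shift. An illustrative sketch in plain Python (not hls4ml code) of the underlying identity:

```python
def mul_po2(x_int, k):
    """Multiply an integer-represented fixed-point value by w = 2**k.

    For k >= 0 this is a left shift; for k < 0 an arithmetic right
    shift, which is what the HLS implementation can emit instead of a
    full multiplier when the quantizer constrains weights to powers of two.
    """
    return x_int << k if k >= 0 else x_int >> -k
```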

Deprecated:
- `IOType: io_serial` is deprecated, superseded by the new `IOType: io_stream`

Bugfixes:
- Fix to Initiation Interval and differing min/max latency estimates for `Strategy: Resource`
- Fix warnings in the `hls4ml` command-line script flow
- Write the YAML config from the Python API, enabling a mixed API / command-line flow

0.5.0beta

Pre-release of hls4ml version `v0.5.0`.

What's new:
- Streaming IO layer implementations, especially of Convolutional layers, accessed through the config with `io_type: io_stream`. Scales CNN support to much larger models than previously possible (see [paper](https://arxiv.org/abs/2101.05108))
- New [documentation and API reference](https://fastmachinelearning.org/hls4ml/)
- Further optimizations for QKeras / quantization aware training. A 'shift' operation is now used for `po2` quantizers
- Allow redefinition of weights directory for standalone project compilation

0.4.0

What's new:

- Support for GarNet layer (see [paper](https://arxiv.org/abs/2008.03601))
- Input layer precision added to config generator utility
- New 'SkipOptimizers' config option: all optimizers run by default (as in v0.3.0), minus any listed in 'SkipOptimizers', e.g. `hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']`
- Print out the latency report from Cosimulation
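The 'SkipOptimizers' behaviour can be sketched as a simple subtraction from the default pass list. Apart from `fuse_consecutive_batch_normalization`, the pass names below are hypothetical placeholders, not necessarily hls4ml's real pass names:

```python
# Hypothetical default pass list for illustration only.
DEFAULT_OPTIMIZERS = [
    'eliminate_linear_activation',
    'fuse_consecutive_batch_normalization',
    'fuse_batch_normalization',
]

def select_optimizers(hls_config, defaults=DEFAULT_OPTIMIZERS):
    """Run every default optimizer except those listed in 'SkipOptimizers'."""
    skip = set(hls_config.get('SkipOptimizers', []))
    return [opt for opt in defaults if opt not in skip]
```

This is the inverse of an allow-list: you opt out of individual passes rather than enumerating the ones you want.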

Bugfixes:

- Fixes related to TensorFlow 2.3: new Functional API, changes to handling of the Input layer
- Fix error with config generator utility and activation layers for `granularity='name'`
- Fix issue with reloading of emulation library after configuration change
- Fix to handling of layers with `use_bias=False` and merged Dense and BatchNormalization

0.3.0

What's new:
- API expansion:
  - Create a configuration dictionary from a model object
  - Run 'C Simulation' from Python with `hls_model.predict(X)`
  - Trace model layer output with `hls_model.trace(X)`
  - Write the HLS project and run the synthesis flow from Python
- QKeras support: convert models trained using layers and quantizers from QKeras
- Example models moved to separate repo, added as a submodule with an API to retrieve them
- New Softmax implementations
- Minor fixes: weights exported at higher precision, concatenate layer shape corrected
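For reference, the function the new Softmax implementations compute is the standard numerically stable softmax below; the HLS versions approximate it in fixed-point arithmetic (typically with lookup tables for the exponential and inverse), so this plain-Python sketch is only the mathematical reference, not hls4ml code:

```python
import math

def softmax(x):
    """Numerically stable softmax: subtract the max before exponentiating
    so that no intermediate value overflows."""
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]
```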

0.2.0

What's new:
- `tf_to_hls`: convert tensorflow protobuf (`.pb`) models to HLS projects
- Support for Keras model `.h5` files (extending existing support for `.json` architecture + `.h5` weights format)
- Support larger Conv1D / 2D layers
- Support for binary and ternary layers from QKeras
- API enhancements for addition of custom layer and new backends
- Keras and HLS model profiling tool
- `hls4ml report` command to gather HLS build reports
- `hls4ml build -l` command to run logic synthesis
- Fused Batch Normalization and Dense layer optimization pass
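The fused Batch Normalization and Dense pass folds the BN scale and shift into the preceding layer's weights and bias, so only one layer remains at inference time. A sketch of the algebra in plain Python (illustrative, not the hls4ml pass itself):

```python
import math

def fuse_dense_batchnorm(w, b, gamma, beta, mean, var, eps=1e-3):
    """Fold BatchNorm into a preceding Dense layer.

    y = gamma * (x @ w + b - mean) / sqrt(var + eps) + beta
      = x @ w' + b'
    with w'[i][j] = w[i][j] * s[j] and b'[j] = (b[j] - mean[j]) * s[j] + beta[j],
    where s[j] = gamma[j] / sqrt(var[j] + eps).
    """
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    w_fused = [[wij * sj for wij, sj in zip(row, scale)] for row in w]
    b_fused = [(bj - mj) * sj + betaj
               for bj, mj, sj, betaj in zip(b, mean, scale, beta)]
    return w_fused, b_fused
```

Because the fused layer is algebraically identical to running Dense then BatchNorm, the pass changes neither accuracy nor numerics beyond rounding, while saving the BN resources.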
