Rockpool

1.1.0.4

- Hotfix to remove references to ctxctl and aiCTX
- Hotfix to include NEST documentation in CI-built docs
- Hotfix to include the change log in the built documentation

---

1.1

Added

- Considerably expanded support for Denève-Machens spike-timing networks, including training arbitrary dynamical systems in a new `RecFSSpikeADS` layer. Added tutorials for standard D-M networks for linear dynamical systems, as well as a tutorial for training ADS networks
- Added a new "Intro to SNNs" getting-started guide
- A new "sharp points of Rockpool" tutorial collects the tricks and traps for new users and old
- A new `Network` class, `JaxStack`, supports stacking and end-to-end gradient-based training of all Jax-based layers. A new tutorial has been added for this functionality
- `TimeSeries` classes now support best-practice creation from clocked or rasterised data. `TSContinuous` provides a `.from_clocked()` method, and `TSEvent` provides a `.from_raster()` method for this purpose. `.from_clocked()` uses sample-and-hold interpolation, for intuitive generation of time series from periodically-sampled data (see the sketch following this list)
- `TSContinuous` now supports a `.fill_value` property, which permits extrapolation using `scipy.interpolate`
- New `TSDictOnDisk` class for storing `TimeSeries` objects transparently on disk
- Allow ignoring data points for specific readout units in ridge regression Fisher relabelling, to be used, for example, with all-vs-all classification
- Added exponential synapse Jax layers
- Added `RecLIFCurrentIn_SO` layer
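
A minimal sketch of the new construction methods, for illustration. The method names `.from_clocked()` and `.from_raster()` come from the entry above; the argument names (`dt`, `t_start`) and the top-level imports are assumptions, not a verbatim API reference.

```python
import numpy as np
from rockpool import TSContinuous, TSEvent

# Periodically-sampled data: 100 samples of 2 channels on a 1 ms clock
samples = np.random.rand(100, 2)
ts_cont = TSContinuous.from_clocked(samples, dt=1e-3, t_start=0.0)

# Boolean event raster: 100 time steps x 4 channels
raster = np.random.rand(100, 4) > 0.9
ts_evt = TSEvent.from_raster(raster, dt=1e-3)
```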

Changed

- `TSEvent` time series no longer support creation without explicitly setting `t_stop`. The previous default of taking the final event time as `t_stop` was causing too much confusion. For related reasons, `TSEvent` now forbids events from occurring at `t_stop`
- `TimeSeries` classes by default no longer permit sampling outside of the time range for which they are defined, raising a `ValueError` exception if this occurs. This renders safe several traps that new users were falling into. This behaviour is selectable per time series, and the exception can be converted to a warning using the `beyond_range_exception` flag (see the sketch following this list)
- Jax trainable layers now import from a new mixin class `JaxTrainer`. The class provides a default loss function, which can be overridden in each subclass to provide suitable regularisation. The training interface now returns the loss value and gradients directly, rather than requiring an extra function call and additional evolution
- Improved training method for JAX rate layers, to permit parameterisation of loss function and optimiser
- Improved the `._prepare_input...()` methods in the `Layer` class, such that all `Layer`s that inherit from this superclass are consistent in the number of time steps returned from evolution
- The `Network.load()` method is now a class method
- Test suite now uses multiple cores for faster testing
- Changed company branding from aiCTX -> SynSense
- Documentation is now hosted at [https://rockpool.ai](https://rockpool.ai)
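
A minimal sketch of the per-series out-of-range behaviour described above. The `beyond_range_exception` flag name comes from the entry above; treating it as a writable attribute, and the constructor signature, are assumptions.

```python
import numpy as np
from rockpool import TSContinuous

ts = TSContinuous(times=np.linspace(0.0, 1.0, 11),
                  samples=np.random.rand(11, 1))

# Sampling outside [0, 1] now raises by default
try:
    ts(1.5)
except ValueError:
    print("Out-of-range sampling raises ValueError by default")

# Assumed usage: downgrade the exception to a warning for this series only
ts.beyond_range_exception = False
value = ts(1.5)  # warns instead of raising; the value returned depends on `.fill_value`
```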

Fixed

- Fixed bugs in precise spike-timing layer `RecSpikeBT`
- Fixed behavior of `Layer` class when passing weights in wrong format
- Stability improvements in `DynapseControl`
- Fixed faulty `z_score_standardization` and Fisher relabelling in `RidgeRegrTrainer`. Fisher relabelling now has better handling of differently sized batches
- Fixed bugs in saving and loading several layers
- More sensible default values for `VirtualDynapse` baseweights
- Fixed handling of an empty `channels` argument in the `TSEvent._matching_channels()` method
- Fixed bug in `Layer._prepare_input`, where it would raise an AssertionError when no input TS was provided
- Fixed a bug in `train_output_target`, where the gradient would be incorrectly handled if no batching was performed
- Fixed `to_dict` method for `FFExpSynJax` classes
- Removed redundant `_prepare_input()` method from Torch layer
- Many small documentation improvements

---

1.0.8

Added

- Introduced the new `TimeSeries` class method `concatenate_t()`, which permits construction of a new time series by concatenating a set of existing time series along the time dimension (see the sketch following this list)
- The `Network` class now provides a `to_dict()` method for export. `Network` can now also treat sub-`Network`s as layers
- Added training methods for spiking LIF Jax-backed layers in `rockpool.layers.training`, along with a tutorial demonstrating SGD training of a feed-forward LIF network, and improvements to the JAX LIF layers
- Added `filter_bank` layers, providing `layer` subclasses which act as filter banks with spike-based output
- Added a `filter_width` parameter for butterworth filters
- Added a convenience function `start_at_zero()` to delay `TimeSeries` so that it starts at 0
- Added a change log in `CHANGELOG.md`
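
A minimal sketch of time-dimension concatenation with the new method. Per the entry above, `concatenate_t()` is a class method, so it is called on the class with a list of series; the constructor arguments and the use of `start_at_zero()` as a method are assumptions.

```python
import numpy as np
from rockpool import TSContinuous

ts_a = TSContinuous(np.arange(0.0, 1.0, 0.1), np.random.rand(10, 1))
ts_b = TSContinuous(np.arange(0.0, 0.5, 0.1), np.random.rand(5, 1))

# Concatenate along the time dimension: `ts_b` is appended after `ts_a`
ts_long = TSContinuous.concatenate_t([ts_a, ts_b])

# Shift the result so that it begins at t = 0 (assumed to be a method)
ts_zeroed = ts_long.start_at_zero()
```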

Changed

- Improved `TSEvent.raster()` to make it more intuitive. Rasters are now produced in line with time bases that can be created easily with `numpy.arange()` (see the sketch following this list)
- Updated `conda_merge_request.sh` to work for conda feedstock
- `TimeSeries.concatenate()` renamed to `concatenate_t()`
- `RecRateEuler` warns if `tau` is too small instead of silently changing `dt`
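
A minimal sketch of the raster / time-base alignment described above; the `TSEvent` constructor arguments and the `dt` argument to `.raster()` are assumptions.

```python
import numpy as np
from rockpool import TSEvent

dt = 1e-3
ts = TSEvent(times=np.array([0.0015, 0.004, 0.0072]),
             channels=np.array([0, 1, 0]),
             t_start=0.0, t_stop=0.01)

# The raster bins are designed to line up with a numpy.arange() time base
time_base = np.arange(0.0, 0.01, dt)
raster = ts.raster(dt=dt)
assert raster.shape[0] == time_base.shape[0]
```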

Fixed or improved

- Fixed an issue in `Layer`, where an internal property was used when accessing `._dt`. This caused issues with layers that have an unusual internal type for `._dt` (e.g. if data is stored in a JAX variable on GPU)
- Reduced the memory footprint of `.TSContinuous` by approximately half
- Reverted a regression in the layer class `.RecLIFJax_IO`, where `dt` was by default set to `1.0`, instead of being determined by `tau_...`
- Fixed incorrect use of `Optional[]` type hints
- Allowed for small numerical differences in the comparison between weights in the NEST test `test_setWeightsRec`
- Improvements in inline documentation
- Increased the memory efficiency of `FFExpSyn._filter_data` by reducing the kernel size
- Implemented a numerically stable time-step count for `TSEvent` rasterisation
- Fixed bugs in `RidgeRegrTrainer`
- Fixed a plotting issue in time series
- Fixed a bug where `RecRateEuler` did not handle the `dt` argument in `__init__()`
- Fixed scaling between torch and NEST weight parameters
- Moved the `contains()` method from `TSContinuous` to the `TimeSeries` parent class
- Fixed a warning in `RRTrainedLayer._prepare_training_data()` when the times of target and input are not aligned
- Brian layers: replaced `np.asscalar` with `float`

---

1.0.7.post1

Added

- New `.Layer` superclass `.RRTrainedLayer`. This superclass implements ridge regression training for layers that support it
- `.TimeSeries` subclasses now add axes labels on plotting
- New spiking LIF JAX layers, with documentation and tutorials: `.RecLIFJax`, `.RecLIFJax_IO`, `.RecLIFCurrentInJax`, `.RecLIFCurrentInJax_IO`
- Added `save` and `load` facilities to `.Network` objects (see the sketch following this list)
- `._matching_channels()` now accepts an arbitrary list of event channels, which is used when analysing a periodic time series
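
A minimal sketch of the new persistence facility. The import paths, layer choice, constructor arguments and file name are illustrative assumptions; note that v1.1 later changed `Network.load()` to a class method, so the exact call site may differ in this release.

```python
import numpy as np
from rockpool.layers import FFRateEuler
from rockpool.networks import Network

# A small feed-forward rate network (layer choice is illustrative)
lyr = FFRateEuler(weights=np.random.rand(2, 3))
net = Network(lyr)

# Persist the network; "my_network.json" is a hypothetical file name
net.save("my_network.json")

# Restore the network later
net_restored = Network.load("my_network.json")
```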

Changed

- Documentation improvements
- The `.TSContinuous.plot` method now supports `stagger` and `skip` arguments (see the sketch following this list)
- `.Layer` and `.Network` now deal with a `.Layer.size_out` attribute. This is used to determine whether two layers are compatible to connect, rather than using `.size`
- Extended unit test for periodic event time series to check non-periodic time series as well
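
A minimal sketch of the extended plotting call. The argument names `stagger` and `skip` come from the entry above; their semantics (vertical offset between traces, and plotting every n-th channel) are inferred and should be treated as assumptions.

```python
import numpy as np
from rockpool import TSContinuous

# 1000 samples of 8 channels on a 1 ms time base
ts = TSContinuous(np.arange(0.0, 1.0, 1e-3), np.random.rand(1000, 8))

# Assumed semantics: offset each trace vertically; plot every 2nd channel
ts.plot(stagger=1.5, skip=2)
```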

Fixed

- Fixed bug in `TSEvent.plot()`, where stop times were not correctly handled
- Fixed a bug in `Layer._prepare_input_events()`, where if only a duration was provided, the method would return an input raster with an incorrect number of time steps
- Fixed bugs in handling of periodic event time series `.TSEvent`
- Bug fix: `.Layer._prepare_input_events` was failing for `.Layer`s with spiking input
- `TSEvent.__call__()` now correctly handles periodic event time series

---

1.0.6

- CI build and deployment improvements

---

1.0.5

- CI build and deployment improvements

---
