Added
- Considerably expanded support for Denève-Machens spike-timing networks, including training arbitrary dynamical systems in a new `RecFSSpikeADS` layer. Added tutorials for standard D-M networks for linear dynamical systems, as well as a tutorial for training ADS networks
- Added a new "Intro to SNNs" getting-started guide
- A new "sharp points of Rockpool" tutorial collects the tricks and traps for new users and old
- A new `Network` class, `JaxStack`, supports stacking and end-to-end gradient-based training of all Jax-based layers. A new tutorial has been added for this functionality
- `TimeSeries` classes now support best-practices creation from clocked or rasterised data. `TSContinuous` provides a `.from_clocked()` method, and `TSEvent` provides a `.from_raster()` method for this purpose. `.from_clocked()` uses sample-and-hold interpolation, for intuitive generation of time series from periodically-sampled data (see the sketch at the end of this list)
- `TSContinuous` now supports a `.fill_value` property, which permits extrapolation using `scipy.interpolate`
- New `TSDictOnDisk` class for storing `TimeSeries` objects transparently on disk
- Allow ignoring data points for specific readout units in ridge regression Fisher relabelling, for use with e.g. all-vs-all classification
- Added exponential synapse Jax layers
- Added `RecLIFCurrentIn_SO` layer
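
A minimal sketch of the new `TimeSeries` creation helpers described above. The keyword names (`dt`, `name`) and the exact semantics of `fill_value` are assumptions to be checked against the `TimeSeries` documentation:

```python
import numpy as np
from rockpool import TSContinuous, TSEvent

dt = 1e-3                                  # 1 ms sampling clock
samples = np.random.rand(100, 2)           # 100 samples, 2 channels

# Sample-and-hold time series from periodically-sampled (clocked) data
ts_cont = TSContinuous.from_clocked(samples, dt=dt, name="clocked input")

# Event time series from a boolean raster on the same clock
raster = np.random.rand(100, 2) > 0.9
ts_events = TSEvent.from_raster(raster, dt=dt, name="input events")

# Permit extrapolation outside the defined range (scipy.interpolate semantics assumed)
ts_cont.fill_value = "extrapolate"
```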
Changed
- `TSEvent` time series no longer support creation without explicitly setting `t_stop`. The previous default of taking the final event time as `t_stop` was causing too much confusion. For related reasons, `TSEvent` now forbids events to occur at `t_stop`
- `TimeSeries` classes by default no longer permit sampling outside of the time range they are defined for, raising a `ValueError` exception if this occurs. This renders safe several traps that new users were falling into. This behaviour is selectable per time series, and can be converted to a warning instead of an exception using the `beyond_range_exception` flag (see the sketch at the end of this list)
- Jax trainable layers now inherit from a new mixin class `JaxTrainer`. The class provides a default loss function, which can be overridden in each sub-class to provide suitable regularisation. The training interface now returns loss value and gradients directly, rather than requiring an extra function call and additional evolution
- Improved training method for JAX rate layers, to permit parameterisation of loss function and optimiser
- Improved the `._prepare_input...()` methods in the `Layer` class, such that all `Layer`s that inherit from this superclass are consistent in the number of time steps returned from evolution
- The `Network.load()` method is now a class method (see the sketch at the end of this list)
- Test suite now uses multiple cores for faster testing
- Changed company branding from aiCTX -> SynSense
- Documentation is now hosted at [https://rockpool.ai](https://rockpool.ai)
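
A minimal sketch of the changed `TimeSeries` behaviour above. How the `beyond_range_exception` flag is set (constructor argument versus attribute, as shown here) is an assumption:

```python
import numpy as np
from rockpool import TSContinuous, TSEvent

# `t_stop` must now be given explicitly, and no event may fall exactly at `t_stop`
ts_events = TSEvent(times=[0.01, 0.02, 0.03], channels=[0, 1, 0], t_stop=0.05)

# Sampling outside the defined time range raises a ValueError by default
ts_cont = TSContinuous.from_clocked(np.random.rand(10, 1), dt=1e-3)
try:
    ts_cont(0.5)                            # well beyond the 10 ms of defined data
except ValueError:
    print("Out-of-range sampling is an error by default")

# The error can be relaxed to a warning per series (attribute form assumed)
ts_cont.beyond_range_exception = False
```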
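And a one-line illustration of the `Network.load()` change; the import path and the file-path argument are assumptions:

```python
from rockpool.networks import Network

# `load()` can now be called directly on the class, without an existing instance
net = Network.load("saved_network.json")   # placeholder path to a previously saved network
```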
Fixed
- Fixed bugs in precise spike-timing layer `RecSpikeBT`
- Fixed behaviour of the `Layer` class when passing weights in the wrong format
- Stability improvements in `DynapseControl`
- Fixed faulty `z_score_standardization` and Fisher relabelling in `RidgeRegrTrainer`. Fisher relabelling now handles differently-sized batches better
- Fixed bugs in saving and loading several layers
- More sensible default values for `VirtualDynapse` baseweights
- Fixed handling of an empty `channels` argument in the `TSEvent._matching_channels()` method
- Fixed a bug in `Layer._prepare_input`, where it would raise an `AssertionError` when no input time series was provided
- Fixed a bug in `train_output_target`, where the gradient would be incorrectly handled if no batching was performed
- Fixed `to_dict` method for `FFExpSynJax` classes
- Removed redundant `_prepare_input()` method from the Torch layer
- Many small documentation improvements
---