Notes
After a few missteps in the PyPI distribution process, we are proud to announce the release of BindsNET v0.1! We will likely follow up with a series of incremental releases (v0.1.x) to address bugs found by users or to add small-scale features that we may have missed.
Features
This release features the `network` core functionality of the package, which enables the construction and simulation of spiking neural networks (SNNs). The `Network` object may be composed of any number of `Nodes`, `Connection`s, and/or `Monitor`s, of which there are several varieties. Learning on `Connection` objects is implemented by specifying functions from the `learning` module. Popular machine learning (ML) datasets may be loaded using the `datasets` module and, like any other numerical data, converted into _spike trains_ with `encoding`.
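As a rough illustration, here is a minimal sketch of building and running a small two-layer network. Module paths and argument names (e.g., `inpts` vs. `inputs` in `Network.run`) may differ slightly between releases, so treat this as a guide rather than a verbatim recipe.

```python
import torch

from bindsnet.encoding import poisson
from bindsnet.network import Network
from bindsnet.network.monitors import Monitor
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

# Two layers: an input layer and a layer of leaky integrate-and-fire neurons.
network = Network()
network.add_layer(Input(n=784), name='X')
network.add_layer(LIFNodes(n=100), name='Y')

# All-to-all connection from the input layer to the LIF layer.
network.add_connection(
    Connection(source=network.layers['X'], target=network.layers['Y']),
    source='X', target='Y',
)

# Record spikes ('s') from the output layer.
network.add_monitor(Monitor(network.layers['Y'], state_vars=['s']), name='Y_spikes')

# Encode a random "image" as a Poisson spike train and simulate for 250 timesteps.
datum = 128 * torch.rand(784)
spike_train = poisson(datum, time=250)
network.run(inpts={'X': spike_train}, time=250)

# Retrieve the recorded spikes.
spikes = network.monitors['Y_spikes'].get('s')
```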
An interface into the [OpenAI `gym`](https://github.com/openai/gym) reinforcement learning (RL) library is implemented in the `environments` module, making it easy, for the first time, to experiment with SNNs on RL problems.
To eliminate messy implementation details, a `Pipeline` object is provided (in the `pipeline` module) which handles the interaction between a spiking neural network and a dataset or environment end to end. This saves users from having to write long scripts to run experiments on supported datasets or RL environments.
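An RL experiment might then look roughly like the following. The keyword arguments passed to `Pipeline` (`encoding`, `time`, `output`), as well as the input layer size, are illustrative assumptions and may not match the released API exactly; see the examples in the repository for authoritative usage.

```python
from bindsnet.encoding import bernoulli
from bindsnet.environment import GymEnvironment
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection
from bindsnet.pipeline import Pipeline

# A small network; the input layer size should match the (preprocessed)
# observation size of the chosen environment.
network = Network()
network.add_layer(Input(n=80 * 80), name='X')
network.add_layer(LIFNodes(n=10), name='Y')
network.add_connection(
    Connection(source=network.layers['X'], target=network.layers['Y']),
    source='X', target='Y',
)

# Wrap an OpenAI gym environment.
environment = GymEnvironment('SpaceInvaders-v0')
environment.reset()

# The pipeline repeatedly encodes observations into spikes, runs the network,
# and feeds the resulting actions back to the environment.
pipeline = Pipeline(network, environment, encoding=bernoulli, time=100, output='Y')

for _ in range(100):
    pipeline.step()
```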
Plotting functionality is available in the `analysis.plotting` and `analysis.visualization` modules. The former is typically used for plotting "online" during simulation, and the latter, "offline", for studying long-term network behavior or making figures.
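For instance, a spike raster plot can be drawn with `plot_spikes`, which takes a dictionary mapping layer names to recorded spike tensors; the `[time, neurons]` layout used below is an assumption and may vary by version.

```python
import matplotlib.pyplot as plt
import torch

from bindsnet.analysis.plotting import plot_spikes

# Dummy recorded spikes for two layers, shaped [time, number of neurons].
spikes = {
    'X': torch.bernoulli(0.10 * torch.ones(250, 784)).byte(),
    'Y': torch.bernoulli(0.05 * torch.ones(250, 100)).byte(),
}

# Raster plot of spiking activity; for "online" use, this is typically called
# once per simulation step, re-using the figure handles it returns.
plot_spikes(spikes)
plt.show()
```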
Other modules are still in development, see little use, or are currently low priority.
Future work?
This depends largely on our users, and in particular on the needs of the [BINDS lab](http://binds.cs.umass.edu/). Some things we would personally like to see include:
- Tighter integration with [PyTorch](https://pytorch.org/). This likely means using more functionality from the `torch.nn.functional` module (e.g., convolution, pooling, activation functions, etc.), or conforming our network API more closely to `torch`'s neural network API.
- Automatic smoothing of SNNs: [Recent work](https://pytorch.org/) has shown that it is possible to convert trained deep neural networks to SNNs without much loss in accuracy. Conversion of PyTorch models, or of models specified in the [ONNX](https://github.com/onnx/onnx) format, may be supported in BindsNET in the future!
- More features! More `Nodes` (neuron) types, `Connection` types, `Dataset`s, `learning` functions, and more. In particular, we want to take steps towards making SNNs robust for ML / RL.
Cheers,
djsaunde