This release brings numerous improvements in speed, usability, specialization, documentation, and more. In general, we have tried to make Norse more user-friendly and approachable for both the die-hard deep-learning expert and the neuroscience enthusiast new to Python. Specifically, this release includes:
* Compatibility with the [PyTorch Lightning](https://pytorchlightning.ai/) library, which means that Norse now scales to multiple GPUs and even supercomputing clusters with [SLURM](https://en.wikipedia.org/wiki/Slurm_Workload_Manager). As an example, see our [`MNIST` task](https://norse.github.io/norse/tasks.html#mnist-in-pytorch-lightning), or the Lightning sketch after this list.
* The [`SequentialState`](https://norse.github.io/norse/started.html#using-norse-neurons-as-pytorch-layers) module, which works similarly to PyTorch's `Sequential` layers in that it allows seamless composition of PyTorch *and* Norse modules. Together with the [`Lift`](https://norse.github.io/norse/started.html#using-norse-in-time) module, this is an important step towards powerful yet simple tools for developing spiking neural networks; see the composition sketch after this list.
* As Norse becomes faster and easier to work with, it also becomes easier to implement more complex models. Norse now features spiking convolutions as well as [MobileNet](https://arxiv.org/abs/1704.04861) and [VGG](https://arxiv.org/abs/1409.1556) networks that can be used out of the box. See the [`norse.torch.models` package](https://norse.github.io/norse/auto_api/norse.torch.models.html) for more information, and the model-loading sketch after this list.
* Improved performance. We implemented the LIF neuron equations and the SuperSpike synthetic gradient in C++. All in all, **Norse is roughly twice as fast** as it was before.
* Improved documentation. The main and introductory pages have been edited and cleaned up. This is an area we will continue to improve in the future.
* Various bugfixes. Norse is now more stable and usable than before.
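
To give a feel for the Lightning integration, here is a minimal sketch of wrapping a Norse network in a `LightningModule`. The `SpikingClassifier` name, layer sizes, and optimizer settings are illustrative assumptions rather than a prescribed recipe; only `SequentialState` and `LIFCell` come from the Norse API linked above.

```python
# A minimal sketch of training a Norse model with PyTorch Lightning.
# The Lightning wiring is a generic example, not Norse's official recipe.
import torch
import pytorch_lightning as pl
from norse.torch import SequentialState, LIFCell

class SpikingClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = SequentialState(
            torch.nn.Linear(784, 100),
            LIFCell(),                      # spiking nonlinearity
            torch.nn.Linear(100, 10),
        )

    def forward(self, x):
        out, _ = self.model(x)              # SequentialState returns (output, state)
        return out

    def training_step(self, batch, batch_idx):
        x, y = batch                        # x: (batch, 784) inputs, y: class labels
        loss = torch.nn.functional.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```

Once wrapped like this, a `pl.Trainer` call handles device placement and distributed training, which is what enables the multi-GPU and SLURM scaling mentioned above.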
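
The composition sketch below, adapted from the pattern in the getting-started pages, mixes stateless PyTorch layers (wrapped in `Lift` so they apply at every time step) with stateful Norse neurons. The layer sizes are arbitrary, and the exact module signatures (e.g. `LIF`) may differ between versions.

```python
# A minimal sketch of composing PyTorch and Norse modules, assuming the
# SequentialState, Lift, and LIF APIs described in the linked documentation.
import torch
from norse.torch import SequentialState, Lift, LIF

model = SequentialState(
    Lift(torch.nn.Conv2d(1, 8, 5)),        # Lift applies the conv at every time step
    LIF(),                                  # spiking neurons that evolve state over time
    Lift(torch.nn.Flatten()),               # flatten each time step's feature maps
    Lift(torch.nn.Linear(8 * 24 * 24, 10)),
)

data = torch.randn(16, 2, 1, 28, 28)        # (time, batch, channel, height, width)
output, state = model(data)                 # output spikes plus the final neuron state
print(output.shape)                         # torch.Size([16, 2, 10])
```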
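
Finally, loading one of the new models is a one-liner. The `vgg11` constructor name below is taken from the `norse.torch.models` package docs; treat it as an assumption if your version differs.

```python
# A minimal sketch; the vgg11 constructor name is assumed from the
# norse.torch.models package linked above.
from norse.torch.models import vgg11

model = vgg11()  # a VGG-11 network with spiking activations
print(sum(p.numel() for p in model.parameters()))  # inspect the parameter count
```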
As always, we welcome feedback and look forward to hearing how you are using Norse! Happy hacking :partying_face: