Sinabs

Latest version: v2.0.0

0.3.0

------

* add basic parameter printing in \_\_repr\_\_
* Update ci-pipeline.yml
* Update ci-pipeline.yml
* update layer docstrings and release notes
* Notebook updated for the new version
* rasterize method accumulates multiple spikes within a time step
* bug fix
* added optional size parameter to events\_to\_raster
* Updated to changes in sinabs 0.3
* add new record\_states feature
* small update to activations tutorial notebook
* update tutorial notebooks
* change functional ALIF behaviour so that v\_mem is not reset below 0 after a spike
* update neuron\_model plots
* add tutorial about activations
* remove ActivationFunction class and split into separate parameters spike\_fn, reset\_fn and surrogate\_grad\_fn in all layers (see the sketch after this list)
* update neuron\_models notebook
* make tau\_syn in IAF more generic and turn off grads for tau\_mem in IAF
* fix warnings about redundant docstrings in sphinx
* blacken whole repo
* refactor activation module
* reintroduce does\_spike property
* renamed threshold\_low to min\_v\_mem
* make IAF inherit directly from LIF
* Update README.md
* fix some imports
* tutorial notebook that plots different neuron models
* update ExpLeak neuron
* remove does\_spike and change default representation
* make ExpLeak directly inherit from LIF with activation\_fn=None
* change default surrogate gradient fn to SingleExponential
* move SqueezeMixin class to reshape.py
* change MNIST class names in tutorials so that they point to the same data, preventing multiple downloads on the RTD server
* update documentation
* exclude dist from git
* Update README.md
* Update README.md
* Notebook updated with the outputs
* bug fixes for inclusion of threshold\_low
* added threshold\_low for IAF and LIF and corresponding test
* added samna.log files to git ignore
* Notebook with new API verified. Still needs to be rendered with dev-kit
* Moved requirements for sphinx
* Removed InputLayer
* Implemented reset states method
* bumped min version for bug fixes
* added logo with white background
* fundamentals added and notebooks fixed with new api
* Update ci-pipeline.yml
* Update ci-pipeline.yml
* Update ci-pipeline.yml
* Update ci-pipeline.yml
* Update ci-pipeline.yml
* Update ci-pipeline.yml
* updated training with bptt section
* wip
* Update ci-pipeline.yml
* add link to Sinabs-DynapCNN
* show version number in title
* minimum samna version updated in requirements file
* removed extra-index-url
* Update ci-pipeline.yml
* update layer api description in docs
* remove input layer and regroup pooling and to\_spike layers
* update sphinx config
* update about info
* Delete .gitlab-ci.yml
* Delete CONTRIBUTING.md
* Delete CHANGELOG.md
* Update .readthedocs.yaml
* Update .readthedocs.yaml
* update quickstart notebook
* Update README.md
* Update ci-pipeline.yml
* Update requirements.txt
* Update ci-pipeline.yml
* Update ci-pipeline.yml
* move requirements for test
* first version of ci pipeline script
* update gitlab ci script
* blacken some layers
* add parameter norm\_input to LIF layer, re-use lif functional forward call in IAF layers with alpha=1.0, add firing\_rate to spiking layers
* minor changes to activation docs
* add convenience FlattenTime / UnflattenTime layers
* rework weight transfer tutorial
* various docs updates, refurbishing install page, adding differences page, ...
* layer docstring updates
* docs api update
* more docs file restructuring
* moving files around in the docs folder
* added new sinabs logo done by DylanMuir
* moved files in doc/ up one level
* Unit tests for copying uninitialized layers. Makes sure that issue 25 is resolved
* Add 'does\_spike' property
* Fix float precision issue
* Remove backend conversion
* Unify param\_dict shape entry
* Make sure tau is converted to float
* Rename to exodus
* Add MaxSpike activation
* Make sure tau\_leak is converted to float
* remove deprecated layers
* blackened tests and added one test for multiple taus in LIF
* Minor: efficiency improvement
* fix previous commit
* Scale exp surr. grad width with threshold
* Matching definition of exponential surrogate gradient with slayer
* Standardize API for all io functions
* Modules completed
* minor change in variable name
* Exponential surrogate gradient
* Samna 0.11 support, initial commit: this version is tested and works, but there are still improvements to be made
* wip, moved reset\_states to config builder
* remove the use of UninitializedBuffer, because it was introduced in PyTorch 1.9 and is therefore not compatible with PyTorch LTS (long-term support) v1.8.1
* tau\_mem for LIF neurons is now always calculated on CPU and transferred to the original device, for better numerical comparison with the SLAYER LIF layer
* only zero gradients if state is initialised
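
Several entries above describe the reworked activation interface: the old ActivationFunction class is gone, and every spiking layer takes separate spike\_fn, reset\_fn and surrogate\_grad\_fn parameters. A minimal sketch of how those pieces fit together, assuming the post-0.3 signatures match the entries above (SingleSpike, MembraneSubtract and SingleExponential from sinabs.activation, min\_v\_mem, record\_states, firing\_rate and the batch-first input layout):

```python
import torch
import sinabs.activation as sina
import sinabs.layers as sl

# A LIF layer assembled from the three activation parameters that 0.3.0
# splits out of the removed ActivationFunction class.
lif = sl.LIF(
    tau_mem=20.0,                                # membrane time constant
    spike_fn=sina.SingleSpike,                   # at most one spike per step
    reset_fn=sina.MembraneSubtract(),            # subtract threshold on spike
    surrogate_grad_fn=sina.SingleExponential(),  # new default surrogate
    min_v_mem=-1.0,                              # renamed from threshold_low
    record_states=True,                          # new record_states feature
)

x = torch.rand(8, 100, 16)       # (batch, time, features)
out = lif(x)                     # same (batch, time, ...) layout as input
print(out.shape, lif.firing_rate)  # firing_rate was added to spiking layers
```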

0.2.1

------

* added pip install
* added requirements
* Fixed path to conf.py in readthedocs config file
* added rtd yaml file
* split spike detection from reset mechanism in ALIF to be compatible with LSNN paper
* update docstrings
* remove changes introduced by git rebase
* fix bug where reset\_state does not set neuron states to zero
* add support for auto merge polarity according to inputshape in make\_config
* removed activation state from dynapcnn layer
* add new squeeze layer mixin
* make ALIF membrane decay test pass
* Remove debugging print statement
* Fix discretization unit test
* bug fixes
* change update order in ALIF
* add grad\_scale to MultiGaussian surrogate grad function
* add missing import for MultiGaussian surrogate grad function
* update SingleSpike mechanism to spike when v\_mem >= threshold instead of >
* update ALIF activation function attribute
* checking tol offset, wip
* initialise threshold state as b0 for ALIF classes
* fix init\_states when 2 inputs of different shapes are supplied
* refactor and test reset\_states function
* return copy of states in reset mechanisms rather than in-place modification
* updated state->v\_mem and threshold->activation\_fn.spike\_threshold
* replace state dict that is passed to torch.autograd function and also test backward passes
* fix issue with MembraneReset mechanism
* added tests for initialisation with specific shape
* when resetting, reset on the right device
* merge dev branch back into feature branch
* properly separate layer business from activation function thresholds
* fix device issues with recurrent layers
* revert back to an additional ALIFActivationFunction that passes state['threshold'] instead of self.spike\_threshold to spike\_fn
* remove custom ALIF spike generation function and move threshold state to ActivationFunction.spike\_threshold
* deactivate ONNX tests for now
* fix error where state would not be initialised on the right device
* make network class tests pass
* make backend tests pass
* remove class factories for recurrency and Squeezing completely and just use nn.Flatten and Unflatten in the future
* move Quantize functions to activation module
* update leaky layers and make all copy tests pass
* make LIFRecurrent inherit from LIF
* add Squeeze Module instead of constructing squeeze classes for every layer
* remove debugging print statement
* include LIFRecurrent module and functional forward call for recurrent layers
* update deepcopy method
* update docstrings for activations and leaky layers
* refactor IAF layers
* add MultiGaussian surr grad fn
* update the ResetMechanism docstrings
* refactor ALIF layer
* ALIF refactoring WIP
* remove old threshold API and traces of SpikingLayer
* make reset functions classes
* rename InputLayer file
* delete SpikingLayer
* refactored ExpLeak layer
* remove Activation class and now have option to specify different surrogate gradients
* rename states to state in LIF
* can initialise shape
* break apart activation function into separate spike\_fn, reset\_fn and surrogate\_grad\_fn
* fix initialisation of states if input size changes
* Enable changing backends for ExpLayer
* use a functional forward method in LIF
* minor change in update to i\_syn
* make deepcopy work with weird i\_syn no\_grad exception
* refactoring together with Sadique
* include activation functions for reset and subtract + forward pass tests for those
* tau\_syn can now be None instead of empty parameter
* can now choose to train alphas or not
* update lif tests
* first stab at breaking out activation function
* add support for auto merge polarity according to inputshape in make\_config
* Fixes issues in to\_backend methods if a backend requires a specific device
* Fixes issues in to\_backend methods if a backend requires a specific device
* Address issue 17 and fix some minor issues with torch.exp
* Minor bugfix on events. Int was not propagated when converting dvs events to samna events
* Added a delay factor in seconds so that the first event's timestamp is larger than 0
* bug fixes with deep copy in dev
* fix \_param\_dict for pack\_dims.py
* wip
* use list of output spikes that is stacked at the end rather than reserving memory upfront
* update ALIF execution order
* update documentation for leaky layers
* recurrent ALIF layers
* modify order of operations in the LIF neuron: threshold update, then spiking, within one time step
* use taus instead of alphas for leaky layers
* change state variable to v\_mem
* Fix default window in threshold functions
* Remove unnecessary line of code in iaf\_bptt.py
* Lift unwanted strict dependency on sinabs-slayer
* to\_backend method in network class
* Alif deepcopy works
* Add unit tests for switching backends and for deepcopying
* Switching between backends works now, even if other backend has not been imported
* update Quantization and Thresholding tools docs
* fixed tests
* minor documentation update
* replacing instances of SpikingLayer to IAF
* deepcopy now works; renamed \_param\_dict to get\_neuron\_params(); added LIF to the \_\_init\_\_ file
* Add StatefulLayer (WIP)
* replacing instances of SpikingLayer to IAF
* added monitor layers documentation to the to method as well
* update recurrent module to make it a class factory, which can be combined with Squeeze layers
* renamed LSNN layer back again to ALIF but keep Bellec implementation
* Raster to events without limiting
* Documentation added
* reset states method refactored
* black formatted some layers
* add RecurrentModule to abstract away recurrent layers
* update LSNN layer
* update recurrent LIF layer
* remove ABC from SpikingLayer
* rename ALIF to LSNN layer
* solve timestamp reset
* minimum of torch 1.8 for torch.div with rounding\_mode param
* update leaky layers and their tests
* fix tests
* Forward pass safer implementation
* Macro for easily monitoring all layers
* Synops support for average pooling layers: Synopcounter now works correctly when average pooling layers are used
* remove dvs\_layer.monitor\_sensor\_enable = True
* divide threshold adaptation by tau\_threshold to isolate the effect of time constants and not current over time
* replace tau parameters such as tau\_mem and tau\_threshold with respective alpha versions
* fix ci
* bug fix, and method renamed to reset\_states
* Added partial reset method
* squash warning message about floor division by changing to recommended method
* update LeakyExp tests
* update docstring in LIF layer
* fix ExpLeak layer + lif/alif tests
* rename input tensor for gpu
* re-add unit-test for normalize\_weights
* add GPU tensor support for normalize\_weights method
* no more parameter overrides for alif and lif neurons
* zero grad tests for ALIF/LIF and replace in-place operation
* update LIF and ALIF docstrings
* add tests for LIF/ALIF current integration, membrane decay and threshold decay
* remove a wrong condition expression
* add chip\_layers\_ordering checking in make\_config method
* inelegant solution by adjusting the list comprehension in line 252
* Typos and discord community url fix
* Added samna requirements to gitlab ci script
* update LIF and ALIF documentation
* rename spike\_threshold to resting\_threshold
* update Quantize, StochasticRounding to fix Pytorch warning
* replace instantiated ThresholdReset autograd methods with static call, as recommended by pytorch
* lif and alif layer update
* ALIF: reuse LIF forward call and just change some of the functions that are called from it
* reuse detect\_spikes function in ALIF layer
* add initial version of adaptive LIF layer
* rework LIF layer and add first tests for it
* specify threshold type as tensor
* skeleton code
* add a few more lines on the cosmetic name change in release history
* add change log to documentation
* inelegant solution by adjusting the list comprehension in line 252
* update gitignore to exclude MNIST dataset
* update documentation and remnant methods to update DynapcnnCompatibleNetwork to DynapcnnNetwork
* update tutorial notebook
* add DynapcnnCompatibleNetwork to be backwards compatible
* add dt to events\_to\_raster
* change output format of DynapcnnNetwork to tensors
* update filenames and module paths for dynapcnn\_network and dvs\_layer
* Typos and discord community url fix
* Updates and fixes
* Added discord and documentation urls
* tutorial notebook updated
* added tests for monitoring
* test for cropping fixed + samna requirement bump
* DVSLayer.from\_layers takes an input of length 3; added checks for input\_shape
* ci updated to not wait for confirmation
* replaced swapaxes with transpose for backward compatibility with pytorch 1.7
* gitlab ci updated to install latest version of samna
* added doc strings
* Added instructions for how to add support for a new chip
* api docs nested
* wip
* deleted mnist\_speck2b example script as dynapcnn\_devkit works by just replacing the device name
* update API doc
* Default monitor enabled for the last layer if nothing is specified
* merged changes
* Removed redundant/legacy code
* rename API doc headings
* Update unit tests according to recent commits
* clean up API documentation by not displaying module names
* Minor fixes and adaptations. More specific exception type. Can pass network with dvs layer to dynapcnn compatible network
* Smaller fixes in config dicts
* Refactored dvs unit tests
* fixed typos in documentation
* Bug fix in crop2d layer handling
* Added Crop2d layer
* installation instructions and minor documentation changes
* Minor changes
* added some folders to gitignore
* moved event generation methods to ChipFactory
* deprecated methods deleted from source
* supported\_devices in ChipFactory and get\_output\_buffer in ChipBuilder
* added support for time-stamp management
* enable pixel array when dvs\_input is true
* adding speck2b device names + mnist example script
* speck2b bug fix in builder
* removed factory setting line
* added speck2b to the condition
* added speck2b to the condition
* added parameter file for example
* Added config builders for speck and speck2b
* Refactored to add ConfigBuilder
* Support for InputLayer. Still does not pass \`test\_dvs\_input\` tests
* added index for samna
* Cut dimensions update in the configuration dict
* Minor api corrections
* dynapcnn layer population works. Bug in DVS layer still to be sorted out
* wip: build full network
* DVS layer construction works
* Added tests for DVSLayer
* Added custom exceptions and tests
* method to build network of dynapcnn layers added
* added start layer index to construction methods
* Added tests for layer builders
* Added function to create dynapcnn layers
* DVSLayer, FlipDims functional code added
* Suggestion: DVSLayer. Still to be completed
* WIP
* Added handling of sumpool layers at the start of the model
* Updated MNIST example notebook in the documentation
* added speck2\_constraints
* make\_config default for chip\_layers\_ordering changed to "auto"
* unhide chip\_layers\_ordering
* Breaking change: monitor\_layers now takes model index instead of chip layer index
* wip
* Added API docs for new files
* Added the basic documentation
* minor documentation typo fixes and some clarifications
* doc skeleton added for the fundamentals
* mapping logic updated to Edmonds' algorithm
* Unit test for from\_torch with num\_timesteps
* Added test to check on initialization with batch\_size
* wip
* Slight refactoring: More methods in SpikingLayer
* Fix zero\_grad test
* Test new zero\_grad method
* Added generic zero\_grad method to SpikingLayer class
* override zero\_grad instead of separate method detach\_state\_grad
* Add unit test. Rename detach\_state\_grads to detach\_state\_grad for consistency with no\_grad
* Method for detaching state gradients without resetting
* Random reset into sensible value range
* Fix output shape
* Do not transpose data in IAF.forward
* Remove Squeeze/Unsqueeze helper classes
* Add missing spiking\_layer module. Minor renaming of squeeze classes
* Make sure that Squeeze layers are registered as subclasses of Squeeze class
* Change data format of iaf input: batch dimension is first; always implicitly expect a batch dimension
* IAF expects batch and time separated; IAFSqueeze for the old behavior with squeezed dimensions (see the sketch after this list)
* bug fix in make\_config affecting auto-mapping
* move name\_list acquiring from plot\_comparison() into compare\_activations()
* Layer dimensions inferred from dimensions dict
* Fix sinabs.network.Network.plot\_comparison() not working correctly for nested ANNs, and make it plot only the spiking layers' comparison
* updated memory summary to take chip constraints
* samna warning message raised
* open device checks if the device is already open
* moved monitor to make\_config
* added xytp conversion methods
* Added LIF base class
* added warning for discretization
* added test for auto in make\_config
* Added timestamping and memory\_summary methods
* Bug fix: Padding and stride x, y swapped
* Events to raster marked as NotImplemented
* Time stamped events generated
* Forward method defined on events
* Bug fix: config invalid when network uninitialized (no data passed)
* Sub class for flatten batch/time + separate class for IAF
* added bug fix for str 'speck2devkit'
* Added option to specify which layers to monitor in to method
* to device method implemented
* samna device discovery memory errors fixed
* get\_opened\_devices also returns device\_info object
* added get\_device\_map
* Added device\_list
* Added meta class for IAFLayer 5
* Added method to discover connected devices
* added test
* wip: find/move model to device when to() is called
* Config object conditionally created based on device type
* added further test
* Correct error now raised if spiking layer missing at end of network
* speeds up the total power-use computation using the new method
* added total synops counter that doesn't use pandas
* speeds up pandas use in synops count, a big advantage
* Added io file
* Raise warning when discretize is True and there is an avgpooling layer
* Name convert\_torch\_ann
* necessary change in notebook
* updated docs
* added docs
* Revert "need to test if samna is there"
* bug fixed
* synopcounter tests, changed network and from\_torch accordingly
* moved counter function
* SNNSynopCounter class
* Fix from\_torch method when model contains a ReLU that is not child of a Sequential
* m2r changed to m2r2
* swapped dimensions with batch, default batch None
* membrane reset now implemented properly
* Documentation added and method name renamed to normalize\_weights
* Smart weight rescaling added
* pypi deploy new line added
* sphinx requirements added
* typo in conf.py fixed
* docs folder relocated
* setuptools based setup file
* pbr based project versioning and gitlab ci added
* Samna requirement updated
* fixed cuda issues on from torch, added test
* Method parameter in test corrected
* changed speck to dynapcnn
* fixed mapping problem in auto layer order
* Replace all references to speck as DYNAPCNN, including internal variables
* Type annotation fixed
* Refactored code to dynapcnn from speck
* Changed aiCTX references to SynSense
* fixed bug in discretization of membrane\_subtract (double multiplication)
* membrane reset implementation, removed layer name
* Equation rendering in docs fixed
* Doc front page changed to README
* Added documentation pipeline for testing doc generation
* Setup tools integration for sphinx documentation
* Martino added to authors
* Theme changed to rtd
* added a detach() call
* changed network removing no\_grad
* updated tests to reflect changes in sinabs network
* working bptt notebook
* twine deploy conditional on env variable
* Added condition on env variable to pypi\_deploy
* Add another pipeline that shouldn't execute
* WIP bptt notebook
* CI Lint corrections
* Added test for CI pipeline
* Link to contributing.md file fixed
* Description file content type updated
* Description file content type updated
* Update description type to markdown
* Update development status
* Updated Classifiers
* fixed docs, removed commented-out areas
* removed dependency on samna for validation, and on SpikingLayerBPTT
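
The IAF data-format change above ("batch dimension is first", "IAF expects batch and time separated, IAFSqueeze for old behavior") is easiest to see side by side. A minimal sketch under those assumptions:

```python
import torch
import sinabs.layers as sl

batch, time, channels = 4, 50, 32
x = torch.rand(batch, time, channels)

# New-style IAF: batch and time stay as separate, explicit dimensions.
iaf = sl.IAF()
out = iaf(x)  # -> (batch, time, channels)

# IAFSqueeze keeps the old squeezed layout, where batch and time share
# one leading dimension, so the layer must be told the batch size.
iaf_squeeze = sl.IAFSqueeze(batch_size=batch)
out_squeezed = iaf_squeeze(x.flatten(0, 1))  # -> (batch * time, channels)
```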

0.2.0

------

* Threshold gradient scaled by threshold (Bug fix)
* updated docs; removed exclude\_negative\_spikes from from\_torch (it had no effect)
* test requirements separated
* added coverage
* temporary solution for onnx
* temporary solution for onnxruntime
* amended test requirements to include onnxruntime
* trying to trigger CI
* Updated MNIST notebook
* Instructions for testing added
* \_\_version\_\_ specified from pbr
* Cleaned up setup.py and requirements with pbr
* added coverage tools
* removed network utilities not needed
* updated tests using pathlib
* added some network tests
* WIP on functional docstrings
* removed old stuff from network summary
* update gitignore
* notebook docs updated (WIP)
* fix docs for input shape in from\_torch, removed dependency of Network on legacy layers
* removed deprecated arguments of from\_torch
* cleaned up keras in docs
* removed input shape from spiking layers, which caused bugs, and output\_shape from InputLayer
* Changed 'input\_layer' management for sinabs changes
* change dummy input to device, calculate layer-wise output size
* Updated URL
* Keras-related stuff all removed
* removed pandas from layers
* removed and updated keras tests
* removed summary; device not determined automatically in from\_torch
* removed old tests
* Fixed relative imports
* Added deprecation warning
* Moved layers around, added deprecation
* Moved neuromorphicrelu, quantize, sumpool to separate files, functions to functional
* fixed tests, one not passing
* started changing dvs\_input default
* added dropout
* Unit test for adding spiking output in 'from\_model'
* Enable adding spiking layer to sequential model in from\_torch function
* Roll back changes from last commit and only make sure that meaningful error is produced when last layer is not spiking. Handling of last layer done in sinabs from\_model
* wip: handle networks that end with linear or conv layer
* fixed true\_divide torch fussiness
* removed print statement
* merged commit with sumpool support
* implemented support for sumpool in input network
* Disable default monitor and support one dvs input channel
* Version bump
* removed bad choice
* removed unnecessary calls to print
* fixed bug in old version
* In-code docs for test\_discretized
* Smaller fixes in discretize
* Tests for discretization module
* Added leak management, and test
* individual\_tests made deterministic
* fixed input tests
* valid\_mapping complies with variable naming convention. Extended in-code documentation
* Minor fix in test\_dvs\_input
* Ignore jupyter checkpoints
* Placeholder in tutorial for validation and upload to Speck
* Fixes in test\_dvs\_input
* Rename test\_dvs to test\_dvs\_input
* test\_dvs: Tests with input\_layers
* Warn if both input\_shape and input layer are provided and shapes don't match
* test\_dvs: make sure that missing input specifications are detected
* test made deterministic
* Removed requirement of samna, particularly for tests
* added skip tests with no samna
* doorbell test fixed
* updated large net test to an actual test
* added tests; added support for 3d states
* fixed bug DVS input size
* extended tests to config
* and again
* More updates to deepcopy
* Second deepcopy argument
* Added tentative deepcopy
* deal with missing neuron states
* automatic choice of layer ordering
* add handling of swapping layers while searching for a solution
* removed prints, fixed test
* Many fixes needed for the configuration to be valid. Now works
* Documentation for discretize
* Cannot change conv and spk layers, but access them through property. Pool can be changed
* Cannot change conv and spk layers, but access them through property. Pool can be changed
* getting closer
* improvements
* working check on real samna
* validation thing to be compared across machines
* Specklayer correctly handles changing layers. Todo: Update unit tests
* wip: specklayer: make sure that when changing layers, config dict gets updated. TODO: unit test fails
* Property-like behavior for conv/pool/spk layers
* Comparison with original snn only when not discretizing
* Ensure no overwrite of the conv layer during batchnorm merging
* Making sure discretization happens after scaling
* Tutorial for converting from torch model to speck config
* Update documentation
* WIP: Documentation for specklayer. Numpy style docstrings
* WIP: Sphinx documentation
* Minor fixes. Still to do: discretization of snn (discretize\_sl) does not work
* Minor fixes in tests
* added ugly workaround to samna-torch crash problem
* fixed bug in sumpool config
* Fixed SumPool
* Completed name change and move of files
* Fix module naming
* deleted references to sumpool2dlayer, loaded sinabs sumpool
* removed unused imports
* uses SumPool from sinabs
* moved test
* updated tests to new locations; new constructor in SpeckNetwork
* moved tests to folder
* deleted scratch folder
* Tests related to dvs
* Fixes wrt to handling dvs and pooling, completed type hints
* wrote docstrings
* should now be safe to commit init
* some minor changes
* added test, changed var names
* small correction to previous commit
* added support for a specific case of batchnorm
* Use deepcopy for copying layers
* merge bc of black
* Avg pooling now turned into sum pooling with weight rescaling (1 failing test)
* Test to verify that all layers are copy and not references
* Make sure all layers in SpeckCompatibleNetwork are copies of the original
* (WIP) started implementing transfer to sumpool
* Workaround for copying spiking layers in discretize\_conv\_spike
* updated and added tests
* fixed several issues that arose with testing
* bugfix: reset\_states in network
* correct way of ignoring neurons states
* discretization now optional (for testing)
* input shape removed where not needed; more cleanup
* Minor
* separated make\_config from the rest
* a little cleanup and commenting
* seemingly working class-based version
* somewhat working version of class-based
* Handle Linear layers and Flatten, ignore Dropout2d
* started transformation into class
* added gitignore
* updated new api of samna
* added smartdoor test
* Doorbell test
* Un-comment speck related lines
* minor
* samna independent test-mode for fixing some bugs
* Fixing bugs
* Wip: update for sinabs 0.2 - discretization
* Wip: update tospeck for compatibility for sinabs 0.2
* Wip: update tospeck for compatibility for sinabs 0.2
* Refactored keras\_model -> analog\_model
* Added tool to compute output shapes
* correct device for spiking layers
* added tentative synops support
* version number updated
* updated file paths in tests
* threshold methods updated, onnx conversion works now
* wip:added test for equivalence
* fixed bug where from\_torch was doing nothing
* model build method separately added
* changed default membrane subtract to the threshold, as in IAF; implemented in from\_torch
* updated documentation
* fixed bug in from\_torch; negative spikes no longer supported
* onnx support for threshold operation
* updated test; removed dummy input shape
* added warnings for unsupported operations
* Input shape optional and neurons dynamically allocated
* from\_torch completely rewritten (WIP; see the sketch after this list)
* wip: from\_torch refactoring
* marked all torch layer wrappers as deprecated
* Deprecated TorchLayer added
* merged master to bptt\_devel
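
Much of 0.2.0 revolves around the rewritten from\_torch conversion noted above. A hedged sketch of the resulting workflow, assuming from\_model keeps the input\_shape parameter and the add\_spiking\_output behaviour these notes describe:

```python
import torch.nn as nn
from sinabs.from_torch import from_model

# A small ReLU CNN; the conversion swaps each ReLU for a spiking layer.
ann = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),
)

# add_spiking_output appends a spiking layer after the final Linear,
# since an error is raised when the last layer is not spiking (see above).
snn = from_model(ann, input_shape=(1, 28, 28), add_spiking_output=True)
print(snn.spiking_model)
```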

0.1.0

------

* fixed version number
* removed contrib branch
* added initial text for contributions file
* updated mirror url
* Added contact text
* added contributing file
* added license text to readme
* added AGPL license notice to all files in library
* added LeNet 5 example
* default neuron parameters updated to work out of the box
* added convtranspose2d layer
* abstract class SpikingLayer added to documentation
* iaf code moved to abstract class
* summary added to layer base class
* summary modified
* update example to generate and readout spike trains
* max pooling keras
* restored readme text
* added readme in docs folder
* added license AGPL
* auto rescale multiple average pooling layers in a row (see the sketch after this list)
* fix quantizing nBits for weights and threshold
* softmax treated as ReLU for inference, and fix auto-rescaling
* push test
* push test
* summary modified
* added build to gitignore list
* typos in readme
* updated documentation file structure
* Initial file commit
* Initial commit
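
The "auto rescale ... average pooling" entry refers to the standard trick used throughout these releases: a k×k average pool equals a k×k sum pool followed by scaling the next layer's weights by 1/k². A small self-contained check of that identity (plain PyTorch, not the sinabs API):

```python
import torch
import torch.nn as nn

k = 2
avg_pool = nn.AvgPool2d(k)
sum_pool = nn.LPPool2d(norm_type=1, kernel_size=k)  # p=1 LPPool is a sum pool
conv = nn.Conv2d(8, 16, kernel_size=3)

# Average pooling divides by k*k while sum pooling does not, so the
# layer consuming the pooled output gets its weights scaled down.
conv_rescaled = nn.Conv2d(8, 16, kernel_size=3)
with torch.no_grad():
    conv_rescaled.weight.copy_(conv.weight / k**2)
    conv_rescaled.bias.copy_(conv.bias)

x = torch.rand(1, 8, 12, 12)
assert torch.allclose(conv(avg_pool(x)), conv_rescaled(sum_pool(x)), atol=1e-5)
```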

0.1.dev7

---------

* install m2r with hotfix for new version of sphinx
* changed membrane\_subtract and reset defaults, some cleanup
* added test to compare iaf implementations
* added dummy test file intended for bptt layer
* removed content from init file, since it breaks for people who do not have dependencies
* sumpool layer stride is None bug
* introduced test of discretization in simulation
* Made necessary changes to actually simulate the discretized network
* added bias check in discretize sc2d
* merged changes from features/separate\_discretization
* bugfixes
* fix import
* misc
* Fix bias shapes and handling of 'None'-states
* merge updates from feature/spiking\_model\_to\_speck
* Fix biases shape
* wip
* updated version number
* added support for batch mode operation
* Fixes in neuron states and weight shapes. Updated test
* Undo reverting commit 5af49846 and fix dimensions for neuron states
* Fix weight dimensions
* added conversion of flatten and linear to convolutions
* Use speckdemo module and handle models without biases
* provided default implementation of iaf\_bptt to passthrough
* Small fix in plotting in test
* Improved in-code documentation of tospeck.py
* Test script for porting a simple spiking model to a speck config
* Quantization of weights, biases and thresholds
* SpikingLayer with learning added to layers without default import
* Bugfixes in tospeck.py
* can handle sequential models of SpikingConv2dLayers and SumPooling2dLayers
* Remove tests that should be handled by ctxctl
* wip: handling of pooling layers
* For compatibility issues that result in not matching dimensions raise exceptions instead of warnings
* WIP: Method for converting Spiking Model to speck configurations
* SpikingLayer attributes membrane\_subtract and membrane\_reset as properties to avoid that both are None at the same time
* WIP: Method for converting SpikingConv2DLayer to speck configurations
* threshold function fix, bptt mnist example with spiking layer in notebook
* threshold functions used in forward iaf method for bptt
* added differentiable threshold functions (see the sketch after this list)
* bugfix related to sumpool
* added synopscount to master
* added documentation synopcounter and sumpool
* added new layers to docs
* Added analogue sumpool layer
* added two layers by qian
* updated summary function for iaf\_tc
* added synoploss and refactored
* added classifiers to setup.py
* fixed typos in setup.py
* updated setup file
* updated branch to master for pypi deployment
* fixed reference to rockpool in tag
* upload to pypi and tags in readme file
* version bump for test
* direct execute with twine
* typo fix
* added tags of the runner
* pypi build triggered on pip branch
* removed trailing line
* ci script to upload to test pypi
* wip: adding pip support for sinabs
* added option to only reproduce current instead of spikes
* added clear buffer method
* pew workon to pew ls
* added pew to documentation
* round on stochastic rounding eval mode
* stochastic rounding only during learning
* added stochastic rounding option to NeuromorphicReLU
* added normalization level to sig2spike layer
* updated documentation structure and pipenv tutorial
* modified iaf\_tc's expected dims to be [t, ch]
* merged changes from master
* fixed missing module sinabs.from\_keras
* fixed tensorflow version 1.15
* fixed tensorflow version in ci script
* added tensorflow install to ci script
* typo fix
* force install torch
* updated documentation for from\_keras
* added pipfile
* moved all from keras methods to from\_keras.py
* added doc string
* added rescaling of biases to from\_torch
* breaking change to Sig2SpikeLayer
* time steps computed based on dimensions of synaptic output
* functioning code for spiking model
* renamed TorchLayer to Layer, TDS to TemporalConv1d
* added kernel shape for tds layer
* fixed cuda imports in tests
* merged master
* updated notebook with a full run time output
* added mnist\_cnn weight matrix for the examples to run smoothly
* example of from\_torch Lenet 5
* example of from\_torch Lenet 5
* lenet example from\_torch, and in Chinese
* missing import added
* quantize layers are not called by name any more
* supported avgpool with different kernel sizes
* added some documentation, quantize now does nothing
* fix linear layer and add sumpool layer to from\_torch
* clean up maxpooling2d
* clean up maxpooling2d
* fix maxpooling spike\_count error
* fix maxpooling spike\_count error
* initial mock code
* implemented quantization
* Initial commit
* load DynapSumPool and DynapConv2dSynop from pytorch
* added flag to exclude negative spikes
* added support for neuromorphicrelu
* updated setup file to specify tensorflow version dependency
* Some minor changes
* fixes summary
* threshold management in from torch
* functionalities added to torch converter
* line-height fixed in h1
* added intro to snns notebook documentation
* merge errors fixed in init file
* synops to cpu
* fixes needed for summary and synops
* merged init file
* init file merged
* overwrote forward method
* removed detach()
* all self.spikes\_number are numbers only and detached now
* fixed incorrect variable name for weights
* added SpikingLinearLayer
* doc string corrections
* fixed test following small refactor
* fixed documentation
* allowed threshold\_low setting
* added documentation
* added YOLO layer and converted converter
* converter uses Sequential and Network instead of ModuleList
* merged latest version (PR) of no\_spike\_tracking
* iaf layers do not save spikes
* fixed loss return with flag
* trivial merge of no spike tracking
* removed status caching and sum(0)
* merged no spike tracking but test not fixed
* iaf layers do not save spikes
* changed copying strategy to avoid warnings
* Sadique worked on clearing cache on iaf forward()
* img\_to\_spk fix
* small improvements to spkconverter
* linear imgtospk
* spike converter from torch and test
* remove unwanted prints
* small changes useful for yolo
* implemented linear mode for conv2d
* changes to synaptic output
* implemented spike generation layer from analog signals
* fixed causal convolutions and padding
* implemented delay buffer
* added initial code for time delayed spiking layer
* added image to spike conversion layer
* added conv1d to the documentation
* added conv1d layer
* updated notebooks in examples
* conversion from markdown fixed
* added link to gitlab pages in readme
* documentation added to pages
* updated branch for testing and building
* fixed path to build folder
* pip upgrade command missing pip
* added gitlab CI script
* state to cuda device
* license notice updated in setup file
* layers submodule added to setupfile
* fixed calls to np load with allow\_pickle arg
* added conv3d layer
* initial code
* merged
* fixed typos in readme
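
"added differentiable threshold functions" is the core BPTT ingredient in this release: spike emission is a step function with zero gradient almost everywhere, so a surrogate gradient is substituted in the backward pass. A generic sketch of the idea, using one common surrogate choice rather than the exact sinabs implementation:

```python
import torch

class SpikeThreshold(torch.autograd.Function):
    """Heaviside spike in forward; boxcar surrogate gradient in backward."""

    @staticmethod
    def forward(ctx, v_mem, threshold=1.0, window=0.5):
        ctx.save_for_backward(v_mem)
        ctx.threshold, ctx.window = threshold, window
        return (v_mem >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_mem,) = ctx.saved_tensors
        # Let gradients through only near the threshold.
        near = (torch.abs(v_mem - ctx.threshold) < ctx.window).float()
        return grad_output * near, None, None  # no grads for the constants

v_mem = torch.randn(10, requires_grad=True)
spikes = SpikeThreshold.apply(v_mem)
spikes.sum().backward()
print(v_mem.grad)  # nonzero only where v_mem was close to the threshold
```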
