NASBench-PyTorch

Latest version: v1.3.1

1.3.1

- fixed: num_workers was not set when loading only the CIFAR train set (see the loader sketch below)
- all default hyperparameters are now set inside train and Network
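
As an illustration of the num_workers fix, here is a minimal, generic PyTorch sketch of a CIFAR-10 train loader (plain torchvision usage, not this package's exact API) where the argument is forwarded explicitly instead of being left at its default of 0:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Plain torchvision CIFAR-10 train set; the 1.3.1 fix ensured the
# train-set-only path forwards num_workers instead of using the default (0).
train_set = datasets.CIFAR10(root='./data', train=True, download=True,
                             transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True,
                          num_workers=4)  # 4 background loader processes
```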

1.3

The code was modified to make it easier to reproduce the original results. Previously, only the code structure matched the original; the hyperparameters were different and the optimizer was SGD, because there were difficulties getting RMSProp training to work.

Now the networks can be successfully trained with RMSProp, using the same hyperparameters as in the paper.
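
For context, the key difference between the two optimizers is where epsilon enters the update: TensorFlow's RMSProp adds it inside the square root, while `torch.optim.RMSprop` adds it outside. A minimal sketch of a TensorFlow-style step (the hyperparameter values are placeholders, not the paper's):

```python
import torch

@torch.no_grad()
def tf_rmsprop_step(param, square_avg, lr=1e-3, decay=0.9, eps=1e-10):
    # One TensorFlow-style RMSProp update. TensorFlow adds eps *inside*
    # the square root:
    #     param -= lr * grad / sqrt(mean_square + eps)
    # while torch.optim.RMSprop adds it *outside*:
    #     param -= lr * grad / (sqrt(mean_square) + eps)
    # so the same eps value can lead to noticeably different training.
    grad = param.grad
    square_avg.mul_(decay).addcmul_(grad, grad, value=1 - decay)
    param.addcdiv_(grad, (square_avg + eps).sqrt(), value=-lr)
```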

- Added a reproducibility section to the README
- Hyperparameters were modified to match those from the NAS-Bench-101 paper
- The TensorFlow version of RMSProp is supported (see the sketch above)
- Gradient clipping can be turned off (see the sketch after this list)
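
A minimal sketch of what an optional clipping toggle can look like (the function and argument names are hypothetical, not this package's exact API):

```python
import torch

def training_step(model, loss, optimizer, grad_clip=None):
    # grad_clip=None disables clipping entirely; a float enables it.
    optimizer.zero_grad()
    loss.backward()
    if grad_clip is not None:
        torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
    optimizer.step()
```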

Special thanks to [longerHost](https://github.com/longerHost) for helping to reproduce the original training!

1.2.3

- fixed a bug where the model couldn't be cast to double (torch.zeros was replaced with torch.zeros_like)
- fix contributed by [abhash-er](https://github.com/abhash-er/)
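
The swap matters because `torch.zeros()` always allocates a tensor with the default dtype (float32), whereas `torch.zeros_like(x)` inherits x's dtype and device, so internal buffers keep following the model after `.double()`. A minimal illustration:

```python
import torch

x = torch.randn(2, 3, dtype=torch.float64)  # e.g. a model cast with .double()

print(torch.zeros(2, 3).dtype)     # torch.float32 -- ignores x entirely
print(torch.zeros_like(x).dtype)   # torch.float64 -- follows x's dtype/device
```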

1.2.2

- fixed an in-place modification error that caused the backward pass to crash for some architectures (a36d7a7c6770079c35db3ad3b5d0aa139e1105a2)
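
The class of bug fixed here is a standard autograd pitfall: modifying a tensor in place after it has been saved for the backward pass invalidates it. A minimal reproduction (not the package's actual code):

```python
import torch

x = torch.randn(3, requires_grad=True)

y = torch.sigmoid(x)   # sigmoid saves its output for the backward pass
y += 1                 # in-place edit bumps the tensor's version counter
# y.sum().backward()   # RuntimeError: one of the variables needed for
                       # gradient computation has been modified ...

y = torch.sigmoid(x)
y = y + 1              # out-of-place keeps the saved tensor intact
y.sum().backward()     # works
```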

1.2.1

- fixed device inconsistencies when training on CUDA; torch.zeros() was creating tensors on the wrong device
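
This is the device analogue of the dtype issue fixed in 1.2.3: `torch.zeros()` allocates on the CPU by default, so the first operation mixing it with a CUDA tensor fails. A sketch of the two standard fixes:

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(2, 3, device=device)

bad = torch.zeros(2, 3)                     # always on the CPU
good = torch.zeros(2, 3, device=x.device)   # explicit device
also_good = torch.zeros_like(x)             # inherits device (and dtype)
```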

1.2

- fixed a bug in training: when optimizer was None, it was not properly set to SGD
- modified the code so that the networks can be passed to `torch.jit.script()`
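
For reference, here is a minimal scriptable module (a hypothetical stand-in for the package's Network class, not its actual code); `torch.jit.script()` compiles it as long as `forward()` sticks to TorchScript-expressible operations:

```python
import torch
import torch.nn as nn

class TinyCell(nn.Module):
    # Hypothetical stand-in; the real Network is more involved, but the
    # constraint is the same: forward() must be scriptable.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.conv(x))

scripted = torch.jit.script(TinyCell())      # compile to TorchScript
out = scripted(torch.randn(1, 3, 32, 32))    # call like a normal module
```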
