- **ADD:** Linear layer: provides an abstraction for a linear model
- **ADD:** Log, exp, and softmax functions
- **ADD:** Momentum for SGD
- **ADD:** Uniform weight initialization for the linear layer
- **FIX:** Softmax underflow issue and Tanh bug
1.0.1
* Used 1.0.0 for testing
* **ADD:** Tanh function, RMSE loss, randn and randint
0.1.1
* **ADD:** Optimizer: SGD
* **ADD:** Functions: ReLU
* **ADD:** Loss functions: RMSE, MSETensor
* **ADD:** Module: for defining neural networks
* **FIX:** Floating-point precision issue when calculating gradients
0.1.0
* First release
* **ADD:** Tensor, tensor operations, sigmoid function
* **FIX:** Inaccuracies in gradient computation