=================
The aim of the 1.1.0 release is to refine the already exposed API by improving performance, security, documentation, and test coverage. Another big focus was streamlining the build process so that it happens automatically when the project is built using standard tools, with no extra steps required.
**New Features:**
- Losses and Optimizers can now be passed by their string names, which automatically constructs the respective class with its default parameters (see the sketch after this list).
- `Model.add` will automatically build the newly added layers if the model has already been built.
- `Model.build` works correctly with layers that have been built then `add`ed to the model.
- `Model.output_shape`: Returns the output shape of the model if it's built.
- `Dense.build` doesn't require `input_shape` to be passed if `input_dim` is passed to the constructor.
- `Metrics.Accuracy` works with 2-dimensional labels.
- **Serialization and Saving API Rewrite:**
  - Implement JSON serialization and deserialization for objects through the `get_config` and `from_config` methods.
  - Rename `(get/set/save/load)_parameters` to `(get/set/save/load)_params`.
  - Improve security of the `load` and `save` methods by removing the use of `pickle`.
  - Remove `set_params_from_keras_model`. The same functionality can be achieved by passing `keras=True` to `set_params`.
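
As a quick illustration of these changes, here is a minimal sketch; the import paths and the `model.set(...)` call used to attach the loss and optimizer are assumptions for illustration, not necessarily the exact xrnn API:

```python
# Minimal sketch only: import paths and `model.set(...)` are assumptions,
# not necessarily the exact xrnn API.
from xrnn import layers, models  # hypothetical import paths

model = models.Model()
# Passing `input_dim` means `build` no longer needs `input_shape`.
model.add(layers.Dense(64, input_dim=784))
model.add(layers.ReLU())
model.add(layers.Dense(10))

# Losses and optimizers can now be passed by name; the matching class is
# constructed with its default parameters.
model.set(loss='categorical_crossentropy', optimizer='adam')  # hypothetical attach method

# JSON-friendly round trip through the new config API.
config = model.get_config()
restored = models.Model.from_config(config)
```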
**Bug Fixes:**
- `Flatten.compute_output_shape` now returns `np.int32`.
- `Model.backward` wouldn't call `Layer.backward` when `Layer.training` was set to False.
- `Model.train` didn't increment `Optimizer.epoch`.
- `Model.train` would print even if `print_every` was turned off.
- Casting `bool` arrays to `float` when changing the data type of the layer.
- `CategoricalCrossentropy` didn't handle 1-dimensional labels correctly.
- `MSE` performed label clipping, which caused incorrect results for any label with a value > 1 or < -1.
- `Dropout` used the same mask (random array to "turn off" neurons) for all batches.
- Calculating accuracy for MSE would cause an error because of the use of `np.abs`, which isn't available in all NumPy versions.
- During inference, `ReLU` used results cached while in training mode, which would raise an error if the model hadn't been trained, or give wrong results if it had.
**Improved Tests:**
- Implement tests for all the modules.
- 98% coverage rate.
**Cross-Platform Compatibility:**
- The package is now cross-platform, supporting various architectures.
**Automatic Build Process:**
- Integrated automatic build process using standard tools.
- Developers no longer need to perform special actions for building.
**Pre-built Binaries:**
- Provided pre-built binaries (wheels) for almost all platforms and architectures.
**Compiler Agnostic Build Process:**
- Ensured compatibility with different compilers for seamless integration.
- Any version of GCC, Clang, or MSVC can be used to compile the package on any platform where it is available.
- Set the compiler to be used in `pyproject.toml` under the `xrnn` tool table (see the sketch below).
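
As an illustration only, such a setting might look like the following; the `compiler` key is a placeholder, so check the package documentation for the exact option name:

```toml
# Placeholder sketch: the key name under [tool.xrnn] is illustrative only.
[tool.xrnn]
compiler = "clang"  # or gcc / msvc, whichever is available on the platform
```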
**Enhanced Documentation:**
- Improved documentation with better code examples for easier usage.
- Rewritten README to reflect the major changes.
**Improved External Data Handling and Validation:**
- Strengthened security measures for handling external data.
- Downloaded datasets are now saved in the `~/.xrnn` directory instead of inside the package's root.
**GitHub Actions Workflow Integration:**
- `build.yml`: Builds the package on Windows (32- and 64-bit), macOS (Intel and ARM), and manylinux/musllinux (x86_64, i386, aarch64).
- `tests.yml`: Tests the built package on all supported Python versions (3.6-3.12).
- `publish.yml`: Deploys the artifacts (wheels + sdist) to PyPI on release creation.
**Performance-Critical Part Rewrite:**
- The performance-critical code previously had to be compiled as C++, even though it is intended to be plain C, because of a limitation in MSVC's ancient OpenMP library. That limitation has now been worked around, and the code is written directly in C, as it's supposed to be.
This release marks a significant milestone, bringing improved performance, security, and convenience to users.