Major improvements and expanded capabilities for decentralized federated learning.
* **Unified Model Interface:** Introducing the `P2PFLModel` abstract class for seamless interaction with models from different frameworks (PyTorch, TensorFlow/Keras, and Flax), simplifying development and enabling easy framework switching.
* **Enhanced Dataset Handling:** The `P2PFLDataset` class streamlines data loading from various sources (CSV, JSON, Parquet, Pandas, Python data structures, and Hugging Face Datasets) and offers automated partitioning strategies for both IID (`RandomIIDPartitionStrategy`) and non-IID (`DirichletPartitionStrategy`) scenarios. `DataExportStrategy` facilitates framework-specific data preparation.
* **Expanded Framework Support:** Added support for TensorFlow/Keras and JAX/Flax via the new `KerasLearner` and `FlaxLearner` classes, respectively.
* **Advanced Aggregators:** Implemented `FedMedian` for enhanced robustness against outliers and `SCAFFOLD` to address client drift in non-IID data distributions. A new callback system allows aggregators to request additional information during training.
* **Security Boost:** Enabled secure communication over gRPC using SSL/TLS and mutual TLS (mTLS).
* **Simulation with Ray:** Added `SuperActorPool` for scalable, fault-tolerant simulations using Ray's distributed computing capabilities; Ray can be disabled via `Settings.DISABLE_RAY`.
* **Refactoring & Improvements:** Enhanced code organization, logging via the improved `P2PFLogger`, unit testing, and documentation.
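The unified model interface can be pictured as a small abstract base class that each framework adapter implements. The sketch below is illustrative only: the class and method names (`Model`, `get_parameters`, `set_parameters`, `ListBackedModel`) are assumptions for this example, not p2pfl's actual API.

```python
from abc import ABC, abstractmethod


class Model(ABC):
    """Hypothetical stand-in for a framework-agnostic model interface."""

    @abstractmethod
    def get_parameters(self) -> list:
        """Return the model weights in a framework-neutral form."""

    @abstractmethod
    def set_parameters(self, params: list) -> None:
        """Load framework-neutral weights back into the model."""


class ListBackedModel(Model):
    """Toy adapter: a real adapter would wrap torch/keras/flax weights."""

    def __init__(self, params):
        self._params = list(params)

    def get_parameters(self) -> list:
        return list(self._params)

    def set_parameters(self, params: list) -> None:
        self._params = list(params)


# Aggregators can now exchange weights without knowing the backend:
m = ListBackedModel([1.0, 2.0])
m.set_parameters([3.0, 4.0])
print(m.get_parameters())  # -> [3.0, 4.0]
```

Because every learner speaks this one interface, swapping PyTorch for Keras or Flax becomes a matter of choosing a different adapter.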
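Dirichlet-based non-IID partitioning works by sampling, for each class, a proportion vector over clients from a Dirichlet(α) distribution: small α produces highly skewed per-client label mixes, large α approaches IID. A minimal numpy sketch of the idea (illustrative only, not `DirichletPartitionStrategy`'s actual code; the function name is an assumption):

```python
import numpy as np


def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients with Dirichlet-skewed label mixes."""
    rng = np.random.default_rng(seed)
    parts = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # Fraction of this class that each client receives.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for part, chunk in zip(parts, np.split(idx, cuts)):
            part.extend(chunk.tolist())
    return parts


# Two classes, four clients, fairly skewed (alpha=0.5):
labels = np.array([0] * 50 + [1] * 50)
parts = dirichlet_partition(labels, n_clients=4, alpha=0.5)
assert sum(len(p) for p in parts) == 100  # every sample assigned exactly once
```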
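`FedMedian`'s robustness comes from replacing the weighted mean with a coordinate-wise median of client updates, so a single extreme client cannot drag the aggregate. A minimal numpy sketch of the aggregation rule (illustrative only, not p2pfl's implementation):

```python
import numpy as np


def fed_median(client_weights):
    """Coordinate-wise median across clients, computed layer by layer.

    client_weights: list of per-client weight lists (one array per layer).
    """
    return [np.median(np.stack(layer), axis=0)
            for layer in zip(*client_weights)]


# Three honest clients near 1.0 plus one outlier at 100.0:
clients = [[np.array([1.0, 1.0])],
           [np.array([1.1, 0.9])],
           [np.array([0.9, 1.1])],
           [np.array([100.0, 100.0])]]
agg = fed_median(clients)
print(agg[0])  # the median ignores the outlier: [1.05 1.05]
```

A plain mean over the same clients would land near 25.75 per coordinate, which is why the median is the aggregator of choice against outliers.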
**New Contributors**
* kirle made their first contribution in https://github.com/p2pfl/p2pfl/pull/27
* hectorpadin1 made their first contribution in https://github.com/p2pfl/p2pfl/pull/31
* Casomoon made their first contribution in https://github.com/p2pfl/p2pfl/pull/39
**Full Changelog**: https://github.com/p2pfl/p2pfl/compare/v0.3.0...v0.4.0