We're excited to announce the official release of Fusilli v1.0.0! 🎉
**What's New?**
- **Multimodal Fusion at Your Fingertips**: Fusilli v1.0.0 introduces 23 multimodal data fusion models, spanning a diverse collection of techniques including graph neural networks, attention mechanisms, variational autoencoders, and more!
- **Enhanced Usability**: This release simplifies handling multimodal data for predictive tasks. Fuse tabular data with 2D or 3D images to perform binary classification, multi-class classification, or regression (see the conceptual sketch after this list).
- **Documentation Overhaul**: Explore revamped documentation with clear usage examples, detailed descriptions of each fusion model, and step-by-step guides to getting started.
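To make the idea of "fusing tabular data with images" concrete, here is a minimal, framework-level sketch of one common approach (feature concatenation) written in plain PyTorch. It is purely illustrative: the class, layer sizes, and variable names are our own assumptions and are not taken from Fusilli's codebase, which provides its own ready-made fusion models.

```python
import torch
import torch.nn as nn

class ConcatFusionClassifier(nn.Module):
    """Illustrative concatenation-fusion model: encodes a 2D image and a
    tabular feature vector separately, then classifies from the joined
    embedding. Hypothetical example, not Fusilli's implementation."""

    def __init__(self, n_tabular_features: int, n_classes: int):
        super().__init__()
        # Small CNN encoder for a single-channel 2D image.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 16, 1, 1)
            nn.Flatten(),              # -> (batch, 16)
        )
        # MLP encoder for the tabular modality.
        self.tabular_encoder = nn.Sequential(
            nn.Linear(n_tabular_features, 16),
            nn.ReLU(),
        )
        # Classifier head over the concatenated embeddings.
        self.head = nn.Linear(16 + 16, n_classes)

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.image_encoder(image), self.tabular_encoder(tabular)], dim=1
        )
        return self.head(fused)

# Example forward pass with random data: 8 samples, 1x32x32 images, 10 tabular features.
model = ConcatFusionClassifier(n_tabular_features=10, n_classes=2)
logits = model(torch.randn(8, 1, 32, 32), torch.randn(8, 10))
print(logits.shape)  # torch.Size([8, 2])
```

Fusilli's 23 fusion models cover this style of operation-based fusion as well as graph-, attention-, and subspace-based alternatives; the documentation describes each one in detail.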
**How to Get Started?**
Getting started with Fusilli is easy! Visit our [documentation](https://fusilli.readthedocs.io/en/latest/index.html#) for installation instructions, detailed usage guides, and examples. Find the method that best fits your multimodal data fusion needs!
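For a feel of the end-to-end workflow, here is a rough getting-started sketch. The import paths, function names, and arguments below are assumptions made for illustration; they follow the general shape of the documented workflow but may not match the released API exactly, so please defer to the linked documentation for the authoritative example.

```python
# Hypothetical getting-started sketch -- names and arguments are assumptions,
# not the verified Fusilli API; see the documentation for the real example.
from fusilli.data import prepare_fusion_data        # assumed import path
from fusilli.train import train_and_save_models     # assumed import path
# Pick a fusion model class from the library (placeholder name below --
# browse the available models in the documentation).
from fusilli.fusionmodels import SomeFusionModel    # hypothetical placeholder

# Hypothetical paths to your own prepared modalities and output folders.
data_paths = {
    "tabular1": "my_tabular_data.csv",
    "tabular2": "",                    # second tabular modality, unused here
    "image": "my_image_tensor.pt",
}
output_paths = {"checkpoints": "ckpts/", "figures": "figs/", "losses": "losses/"}

# Wrap the modalities in a data module for a binary classification task,
# then train the chosen fusion model and save the results.
datamodule = prepare_fusion_data(
    prediction_task="binary",
    fusion_model=SomeFusionModel,
    data_paths=data_paths,
    output_paths=output_paths,
)
trained_models = train_and_save_models(
    data_module=datamodule,
    fusion_model=SomeFusionModel,
)
```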
**How to Contribute?**
Contributions are always welcome! Whether it's bug fixes, new fusion models, or improvements to existing functionalities, your contributions can help enhance the Fusilli library. Check out our [contribution guidelines](https://fusilli.readthedocs.io/en/latest/developers_guide.html) to get involved.
**Thank You!**
We extend our heartfelt gratitude to the contributors, early adopters, and supporters. Your feedback and support have been invaluable in shaping Fusilli into what it is today.
Download Fusilli v1.0.0 now and start fusing your multimodal data in exciting new ways!