1. Just as a reminder, the deep learning models for tabular data currently available in the library are:
- Wide
- TabMlp
- TabResNet
- [TabNet](https://arxiv.org/abs/1908.07442)
- [TabTransformer](https://arxiv.org/abs/2012.06678)
- [FTTransformer](https://arxiv.org/abs/2106.11959v2)
- [SAINT](https://arxiv.org/abs/2106.01342)
- [TabFastformer](https://arxiv.org/abs/2108.09084)
- [TabPerceiver](https://arxiv.org/abs/2103.03206)
- BayesianWide
- BayesianTabMlp
2. The text-related component now has 3 available models, all based on RNNs. There are reasons for this choice, although integration with the Hugging Face Transformers library is the next step in the development of the library. The 3 models available are:
- BasicRNN
- AttentiveRNN
- StackedAttentiveRNN
The last two are based on [Hierarchical Attention Networks for Document Classification](https://www.cs.cmu.edu/~hovy/papers/16HLT-hierarchical-attention-networks.pdf). See the docs for details.
3. The image-related component is now fully integrated with the latest [torchvision](https://pytorch.org/vision/stable/models.html) release, which introduces a new [Multi-Weight Support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/). Currently, the model variants supported by our library are:
- resnet
- shufflenet
- resnext
- wide_resnet
- regnet
- densenet
- mobilenet
- mnasnet
- efficientnet
- squeezenet