- Added `LineAdapter` to read files line by line.
- Every `add_*_specific_args` method should now return nothing and instead modify the received parser in place.
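A minimal sketch of the new convention, with an illustrative model class and argument name (not the library's actual ones): the method mutates the parser and returns `None`, where it previously returned the parser.

```python
from argparse import ArgumentParser

class MyModel:
    # Hypothetical example: argument names are illustrative only.
    @staticmethod
    def add_model_specific_args(parser: ArgumentParser) -> None:
        parser.add_argument("--hidden_size", type=int, default=768)
        # note: no return statement anymore

parser = ArgumentParser()
MyModel.add_model_specific_args(parser)
args = parser.parse_args(["--hidden_size", "512"])
print(args.hidden_size)  # 512
```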
- Added `predict` capability to `AdaptersDataModule`.
- Added `predict` capability to `CompressedDataModule`.
- Added `do_predict()` and `predict_dataloader()` to `SuperDataModule`.
- Added a `do_preprocessing` init argument to `MapDataset` and `TransformersIterableDataset` to optionally skip the preprocessing function defined in the `Adapter`.
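A stand-in sketch of the pattern (the real `MapDataset` signature may differ): when `do_preprocessing` is `False`, the `Adapter`'s preprocessing function is never called, e.g. because data were already preprocessed offline.

```python
# Minimal stand-in class; names and signature are assumptions, not the
# library's actual implementation.
class MapDatasetSketch:
    def __init__(self, data, preprocess_fn, do_preprocessing=True):
        self.data = data
        self.preprocess_fn = preprocess_fn
        self.do_preprocessing = do_preprocessing

    def __getitem__(self, idx):
        sample = self.data[idx]
        # Skip the Adapter's preprocessing when do_preprocessing is False.
        if self.do_preprocessing:
            sample = self.preprocess_fn(sample)
        return sample

raw = MapDatasetSketch([1, 2, 3], lambda x: x * 10, do_preprocessing=False)
processed = MapDatasetSketch([1, 2, 3], lambda x: x * 10, do_preprocessing=True)
print(raw[0], processed[0])  # 1 10
```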
- Added check over tokenizer type in `whole_word_tails_mask()`.
- Added functions `get_optimizer`, `get_scheduler`, `num_training_steps` and corresponding CLI parameters to `TransformersModel` to allow for more flexible definition of optimizers and schedulers.
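As a hedged sketch of what `num_training_steps` typically computes (the actual `TransformersModel` implementation may differ), the total number of optimizer steps usually follows from the number of batches, the epoch count, and gradient accumulation:

```python
# Assumed formula, shown for illustration only: steps per epoch shrink by
# the gradient-accumulation factor, then scale with the number of epochs.
def num_training_steps(num_batches: int, max_epochs: int,
                       accumulate_grad_batches: int = 1) -> int:
    return (num_batches // accumulate_grad_batches) * max_epochs

print(num_training_steps(1000, 3, accumulate_grad_batches=4))  # 750
```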
- Added optimizer wrappers that can be instantiated through CLI parameters. You can still use your own optimizer in `configure_optimizers` as before.
- Added scheduler wrappers that can be instantiated through CLI parameters. You can still use your own scheduler in `configure_optimizers` as before.
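A hedged sketch of CLI-driven optimizer selection; the wrapper names and flag spellings here are assumptions, not the library's actual CLI:

```python
from argparse import ArgumentParser

# Assumed mapping from CLI names to wrapper classes (names are hypothetical).
OPTIMIZER_WRAPPERS = {"adamw": "AdamWOptimizer", "sgd": "SGDOptimizer"}

parser = ArgumentParser()
parser.add_argument("--optimizer", choices=sorted(OPTIMIZER_WRAPPERS), default="adamw")
parser.add_argument("--learning_rate", type=float, default=1e-4)
args = parser.parse_args(["--optimizer", "sgd", "--learning_rate", "0.01"])
# The chosen wrapper would then be instantiated with the parsed hyperparameters.
print(OPTIMIZER_WRAPPERS[args.optimizer], args.learning_rate)  # SGDOptimizer 0.01
```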
- (Re)Added the metrics package with `HitRate`; this will likely be moved to `torchmetrics` in a future release.
- Changed the `hparams` attribute of every class (`models`, `adapters`, `datamodules`, `optimizers`, `schedulers`, `callbacks` and `datasets`) to `hyperparameters` to avoid conflicts with the new `pytorch-lightning` `hparams` getters and setters.
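A minimal sketch of the rename (the class name here is illustrative): hyperparameters are stored under `hyperparameters`, leaving `pytorch-lightning`'s own `hparams` property untouched.

```python
from argparse import Namespace

class SketchModule:
    # Hypothetical stand-in for any of the renamed classes.
    def __init__(self, hyperparameters: Namespace):
        # stored as `hyperparameters`, not `hparams`
        self.hyperparameters = hyperparameters

module = SketchModule(Namespace(batch_size=32))
print(module.hyperparameters.batch_size)  # 32
```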
- Changed the logic of `TransformersModelCheckpointCallback` since the training loop changed in `pytorch-lightning` **v1.4**.
- Removed `TransformersAdapter` because it was too specific and rarely useful.
- General refactoring of classes: cleaned up unused imports and refactored some tests.