* Updated the Seq2Seq models to use some of the latest huggingface bits, like tokenizer.prepare_seq2seq_batch (a short usage sketch is included at the end of this section).
* Separated the Seq2Seq and Token Classification metrics out into metrics-specific callbacks for a better separation of concerns. As a best practice, you should now pass them as callbacks to fit_one_cycle (and the other fit methods) rather than attaching them to your Learner (see the sketches at the end of this section).
* NEW: Translation is now available in blurr, joining causal language modeling and summarization in our core Seq2Seq stack.
* NEW: Integration of huggingface's Seq2Seq metrics (rouge, bertscore, meteor, bleu, and sacrebleu). Plenty of info on how to set this up in the docs.
* NEW: Added default_text_gen_kwargs, a method that, given a huggingface config, model, and (optionally) a task, returns the default/recommended kwargs for text generation models (see the sketches at the end of this section).
* A lot of code cleanup (e.g., better naming and consolidation of redundant code into shared classes/methods).
* More model support and more tests across the board! Check out the docs for more info.
* Misc. validation improvements and bug fixes.
See the docs for each task for more info!
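
For context on the first item, here is a minimal sketch of how tokenizer.prepare_seq2seq_batch is typically called. The checkpoint is just an example, and this method belongs to the transformers versions current at the time of this release:

```python
from transformers import AutoTokenizer

# Example checkpoint; any seq2seq tokenizer works the same way.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# prepare_seq2seq_batch tokenizes source and target texts in one call, returning
# encoder inputs (input_ids, attention_mask) plus decoder labels.
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["blurr brings huggingface transformers to fastai."],
    tgt_texts=["blurr bringt huggingface transformers zu fastai."],
    max_length=128,
    return_tensors="pt",
)
print(batch.keys())  # expected: input_ids, attention_mask, labels
```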
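
The metrics change alters how the callbacks are wired up. The sketch below assumes you already have a blurr seq2seq Learner named learn, and the callback class name, import path, and arguments shown are assumptions; check the docs for the exact names:

```python
from fastai.text.all import *
from blurr.modeling.seq2seq.core import Seq2SeqMetricsCallback  # import path/name assumed

# `learn` is assumed to be a blurr seq2seq Learner built earlier (data + model + loss).
seq2seq_metrics_cb = Seq2SeqMetricsCallback(custom_metrics=["rouge", "bleu"])  # args assumed

# New best practice: scope the metrics callback to the training call ...
learn.fit_one_cycle(1, lr_max=3e-5, cbs=[seq2seq_metrics_cb])

# ... rather than attaching it to the Learner for its whole lifetime:
# learn.add_cb(seq2seq_metrics_cb)  # old pattern, no longer recommended
```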
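
And a sketch of the new default_text_gen_kwargs helper. The import path, task string, and checkpoint here are illustrative assumptions:

```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM
from blurr.modeling.seq2seq.core import default_text_gen_kwargs  # import path assumed

pretrained_model_name = "facebook/bart-large-cnn"  # example checkpoint
hf_config = AutoConfig.from_pretrained(pretrained_model_name)
hf_model = AutoModelForSeq2SeqLM.from_pretrained(pretrained_model_name)

# Returns a dict of default/recommended text generation kwargs (e.g., num_beams,
# max_length) derived from the config/model, optionally specialized for a task.
text_gen_kwargs = default_text_gen_kwargs(hf_config, hf_model, task="summarization")
print(text_gen_kwargs)
```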