- Update minimum required Python version to 3.8
- Require transformers<4.34 to ensure compatibility with the `small100` model
- `m2m100`/`small100`: Stop adding extra EOS tokens when scoring, since they are no longer needed
0.3.2
- Fix score calculation with the `small100` model (account for the fact that the target sequence is not prefixed with the target language, as is the case for `m2m100`)
- Improve caching efficiency
0.3.1
- Set `small100` as default model when instantiating NMTScorer
0.3.0
- Implement the distilled [`small100`](https://huggingface.co/alirezamsh/small100) model by [Mohammadshahi et al. (2022)](https://arxiv.org/abs/2210.11621) and use this model by default
- Enable half-precision inference by default for `m2m100` and `small100` models; see [experiments/results/summary.md](https://github.com/ZurichNLP/nmtscore/blob/8733fc1258005a9bda230f5dc379844de2a2a22c/experiments/results/summary.md) for benchmark results
0.2.0
- Bugfix: Provide source language to `m2m100` models (2). This fix is backwards-compatible, but a warning is now raised if `m2m100` is used without specifying the input language.