Fixed:
- Properly set device in `text.Translator` and use cuda when available
0.17.0
New:
- support for language translation using pretrained `MarianMT` models
- added `core.evaluate` as an alias to `core.validate`
- `Learner.estimate_lr` method will return numerical estimates of the learning rate using two different methods. Should only be called **after** running `Learner.lr_find`.
Changed:
- `text.zsl.ZeroShotClassifier` changed to use `AutoModel*` and `AutoTokenizer` in order to load any `mnli` model
- removed external modules from `ktrain.__init__.py` so that they do not appear when pressing TAB in a notebook
- added `Transformer.save_tokenizer` and `Transformer.get_tokenizer` methods to facilitate training on machines with no internet access
Fixed:
- explicitly call `plt.show()` in `LRFinder.plot_loss` to resolve issues with the plot not displaying in certain cases (PR #170)
- suppress warning about text regression when making text regression predictions
- allow `xnli` models in the `zsl` module
0.16.3
New:
- added `metrics` parameter to the `text.text_classifier` and `text.text_regression_model` functions
- added `metrics` parameter to the `Transformer.get_classifier` and `Transformer.get_regression_model` methods
Changed:
- `metric` parameter in `vision.image_classifier` and `vision.image_regression_model` functions changed to `metrics`
Fixed:
- N/A
0.16.2
New:
- N/A
Changed:
- default model for summarization changed to `facebook/bart-large-cnn` due to a breaking change in `transformers` v2.11
- added `device` argument to the `TransformerSummarizer` constructor to control the PyTorch device
Fixed:
- require `transformers>=2.11.0` due to breaking changes in v2.11 related to `BART` models
0.16.1
New:
- N/A
Changed:
- N/A
Fixed:
- prevent `transformer` tokenizers from being pickled during `predictor.save`, as it causes problems for some community-uploaded models like `bert-base-japanese-whole-word-masking`
0.16.0
New:
- support for Zero-Shot Topic Classification via the `text.ZeroShotClassifier`