Autogluon


0.4.2

v0.4.2 is a hotfix release that fixes a [breaking change](https://github.com/protocolbuffers/protobuf/issues/10051) in protobuf.

This release is **non-breaking** when upgrading from v0.4.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.4.1...v0.4.2

This version supports Python versions 3.7 to 3.9.

0.4.1

We're happy to announce the AutoGluon 0.4.1 release. 0.4.1 contains minor enhancements to Tabular, Text, Image, and Multimodal modules, along with many quality of life improvements and fixes.

This release is **non-breaking** when upgrading from v0.4.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

This release contains [**55** commits from **10** contributors](https://github.com/awslabs/autogluon/graphs/contributors?from=2022-03-10&to=2022-05-23&type=c)!

See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.4.0...v0.4.1

Special thanks to yiqings, leandroimail, and huibinshen, who were first-time contributors to AutoGluon this release!

Full Contributor List (ordered by number of commits):
- Innixma, zhiqiangdon, yinweisu, sxjscience, yiqings, gradientsky, willsmithorg, canerturkmen, leandroimail, huibinshen.

This version supports Python versions 3.7 to 3.9.

Changes

AutoMM

New features

- Added `optimization.efficient_finetune` flag to support multiple efficient finetuning algorithms. (1666) sxjscience
- Supported options:
- `bit_fit`: ["BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models"](https://arxiv.org/abs/2106.10199)
- `norm_fit`: An extension of the algorithm in ["Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs"](https://arxiv.org/abs/2003.00152) and BitFit. We finetune both the parameters in the norm layers as well as the biases.
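
A minimal sketch of how one of these options might be enabled; the flag name comes from this release, while passing it through `fit`'s `hyperparameters` and the data names below are illustrative assumptions:

```python
from autogluon.text.automm import AutoMMPredictor

# Hedged sketch: enable BitFit-style efficient finetuning via the dotted
# hyperparameter key named in this release. "train_data" is assumed to be
# a pandas DataFrame containing a "label_column" target.
predictor = AutoMMPredictor(label="label_column")
predictor.fit(
    train_data,
    hyperparameters={"optimization.efficient_finetune": "bit_fit"},
)
```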

- Enabled knowledge distillation for AutoMM (1670) zhiqiangdon
- Distillation API for `AutoMMPredictor` reuses the `.fit()` function:

```python
from autogluon.text.automm import AutoMMPredictor

teacher_predictor = AutoMMPredictor(label="label_column").fit(train_data)
student_predictor = AutoMMPredictor(label="label_column").fit(
    train_data,
    hyperparameters=student_and_distiller_hparams,
    teacher_predictor=teacher_predictor,
)
```


- Added an option to return feature column information (1711) zhiqiangdon
- The feature column information is turned on for feature column distillation; in other cases it is turned off by default to reduce dataloader latency.
- Added a `requires_column_info` flag in data processors and a utility function to turn this flag on or off.

- FT-Transformer implementation for tabular data in AutoMM (1646) yiqings
- Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, Artem Babenko, "Revisiting Deep Learning Models for Tabular Data" 2022. ([arxiv](https://arxiv.org/abs/2106.11959), [official implementation](https://github.com/Yura52/tabular-dl-revisiting-models))

- Made CLIP support multiple images per sample (1606) zhiqiangdon
- Added multiple-image support for CLIP. Improved data loader robustness: added missing-image handling to prevent training crashes.
- Added the choice of using a zero image if an image is missing.

- Avoid using `eos` as the sep token for CLIP. (1710) zhiqiangdon

- Update fusion transformer in AutoMM (1712) yiqings
- Support constant learning rate in `polynomial_decay` scheduler.
- Update `[CLS]` token in numerical/categorical transformer.

- Added more image augmentations: `verticalflip`, `colorjitter`, `randomaffine` (1719) Linuxdex, sxjscience

- Added prompts for the percentage of missing images during image column detection. (1623) zhiqiangdon

- Support `average_precision` in AutoMM (1697) sxjscience

- Convert `roc_auc` / `average_precision` to `log_loss` for torchmetrics (1715) zhiqiangdon
- `torchmetrics.AUROC` requires that both positive and negative examples be available in a mini-batch. When training a large model, the per-GPU batch size is often small, leading to an incorrect `roc_auc` score. Converting `roc_auc` to `log_loss` improves training stability.

- Added `pytorch-lightning` 1.6 support (1716) sxjscience

Checkpointing and Model Outputs Changes

- Updated the names of top-k checkpoint average methods and support customizing model names for terminal input (1668) zhiqiangdon
- Following the paper https://arxiv.org/pdf/2203.05482.pdf, renamed the top-k checkpoint average methods: `union_soup` -> `uniform_soup` and `best_soup` -> `best`.
- Renamed functions (`customize_config_names` -> `customize_model_names` and `verify_config_names` -> `verify_model_names`) to make them easier to understand.
- Support customizing model names for the terminal input.

- Implemented the GreedySoup algorithm proposed in [paper](https://arxiv.org/pdf/2203.05482.pdf). Added `union_soup`, `greedy_soup`, `best_soup` flags and changed the default value correspondingly. (#1613) sxjscience

- Updated the `standalone` flag in `automm.predictor.save()` to save the pretrained model for offline deployment (1575) yiqings
- An efficient implementation to save the downloaded models from transformers for offline deployment. The revised logic is in 1572 and discussed in 1572 (comment).

- Simplified checkpoint template (1636) zhiqiangdon
- Stopped using pytorch lightning's model checkpoint template in saving `AutoMMPredictor`'s final model checkpoint.
- Improved the logic of continuous training. We pass the `ckpt_path` argument to pytorch lightning's trainer only when `resume=True`.

- Unified AutoMM's model output format and support customizing model names (1643) zhiqiangdon
- Now each model's output is a dictionary with the model prefix as the first-level key. The format is uniform between single models and fusion models.
- Now users can customize model names by using the internal registered names (`timm_image`, `hf_text`, `clip`, `numerical_mlp`, `categorical_mlp`, and `fusion_mlp`) as prefixes. This is helpful when users want to simultaneously use two models of the same type, e.g., `hf_text`. They can just use names `hf_text_0` and `hf_text_1`.
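
For example, a hedged sketch of training two text towers side by side; the prefixed names follow the convention above, while the `model.names` hyperparameter key and data names are assumptions for illustration:

```python
# Hedged sketch: two hf_text backbones fused together, distinguished by
# numeric suffixes on the registered "hf_text" prefix (the "model.names"
# hyperparameter key is an assumption, not confirmed by this release).
predictor = AutoMMPredictor(label="label_column")
predictor.fit(
    train_data,
    hyperparameters={"model.names": ["hf_text_0", "hf_text_1", "fusion_mlp"]},
)
```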

- Support `standalone` feature in `TextPredictor` (1651) yiqings

- Fixed saving and loading tokenizers and text processors (1656) zhiqiangdon
- Saved pre-trained huggingface tokenizers separately from the data processors.
- This change is backward-compatible with checkpoints saved by version `0.4.0`.

- Changed `load` from a classmethod to a staticmethod to avoid incorrect usage. (1697) sxjscience

- Added `AutoMMModelCheckpoint` to avoid evaluating the models to obtain the scores (1716) sxjscience
- The checkpoint now saves `best_k_models` into a YAML file so that it can be loaded later to determine the paths to the model checkpoints.

- Extract column features from AutoMM's model outputs (1718) zhiqiangdon
- Added a utility function to extract column features for both image and text.
- Support extracting column features for models `timm_image`, `hf_text`, and `clip`.

- Make AutoMM dataloader return feature column information (1710) zhiqiangdon

Bug fixes

- Fixed calling `save_pretrained_configs` in `AutoMMPredictor.save(standalone=True)` when no fusion model exists ([here](https://github.com/awslabs/autogluon/blob/5a323641072431091d2be5e6dbef5a87b646a408/text/src/autogluon/text/automm/utils.py#L644)) (1651) yiqings

- Fixed error raising for setting key that does not exist in the configuration (1613) sxjscience

- Fixed warning message about bf16. (1625) sxjscience

- Fixed the corner case of calculating the gradient accumulation step (1633) sxjscience

- Fixes for top-k averaging in the multi-gpu setting (1707) zhiqiangdon

Tabular

- Limited RF `max_leaf_nodes` to 15000 (previously uncapped) (1717) Innixma
- Previously, for very large datasets, RF/XT memory and disk usage would quickly become unreasonable. This change ensures that beyond a certain point, RF and XT will no longer grow larger given more rows of training data. Benchmark results showed the change is an improvement, particularly for the `high_quality` preset.
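
If a use case genuinely benefits from a larger cap, it should still be overridable through model hyperparameters; a hedged sketch (forwarding `max_leaf_nodes` to the underlying scikit-learn estimator is our assumption, and the value is illustrative):

```python
from autogluon.tabular import TabularPredictor

# Hedged sketch: raise the random forest leaf cap for one experiment.
# "RF" is the registered model key; max_leaf_nodes is assumed to be
# forwarded to scikit-learn's random forest estimator.
predictor = TabularPredictor(label="label").fit(
    train_data,
    hyperparameters={"RF": {"max_leaf_nodes": 30000}},
)
```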

- Limit KNN to 32 CPUs to avoid OpenBLAS error (1722) Innixma
- Issue 1020. When training K-nearest-neighbors (KNN) models, a rare error can sometimes occur that crashes the entire process:

```
BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
Segmentation fault: 11
```

This error occurred when the machine had many CPU cores (>64 vCPUs) due to too many threads being created at once. Limiting KNN to 32 cores avoids the error.

- Improved memory warning thresholds (1626) Innixma

- Added `get_results` and `model_base_kwargs` (1618) Innixma
- Added `get_results` to searchers, useful for debugging and for future extensions to HPO functionality.
- Added a new way to initialize a `BaggedEnsembleModel` that avoids having to initialize the base model before initializing the bagged ensemble model.

- Update resource logic in models (1689) Innixma
- The previous implementation would crash if the user specified `auto` for resources; this is fixed in this PR.
- Added `get_minimum_resources` to explicitly define minimum resource requirements within a method.

- Updated feature importance defaults: `subsample_size` 1000 -> 5000, `num_shuffle_sets` 3 -> 5 (1708) Innixma
- This improves the quality of the feature importance values by default, especially the 99% confidence bounds. The change increases computation time by ~8x, which is acceptable given the numerous inference speed optimizations made since these defaults were first introduced.
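
The new defaults apply automatically; they can also be set explicitly, as in this sketch using the argument names referenced above (data names illustrative):

```python
# Compute permutation feature importance with the new default values
# stated explicitly. "predictor" is a fitted TabularPredictor and
# "test_data" is a held-out pandas DataFrame (illustrative names).
importance_df = predictor.feature_importance(
    test_data,
    subsample_size=5000,
    num_shuffle_sets=5,
)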

- Added notice to ensure serializable custom metrics (1705) Innixma

Bug fixes

- Fixed `evaluate` when `weight_evaluation=True` (1612) Innixma
- Previously, AutoGluon would crash if the user specified `predictor.evaluate(...)` or `predictor.evaluate_predictions(...)` when `self.weight_evaluation==True`.

- Fixed RuntimeError: dictionary changed size during iteration (1684, 1685) leandroimail

- Fixed CatBoost custom metric & F1 support (1690) Innixma

- Fixed HPO not working for bagged models if the bagged model is loaded from disk (1702) Innixma

- Fixed feature importance erroring when `self.model_best` is `None` (can happen if no weighted ensemble is fit) (1702) Innixma

Documentation

- Updated the text tutorial on customizing hyperparameters (1620) zhiqiangdon
- Added customizable backbones from the Huggingface model zoo and showed how to use local backbones.

- Improved implementations and docstrings of `save_pretrained_models` and `convert_checkpoint_name`. (1656) zhiqiangdon

- Added cheat sheet to website (1605) yinweisu

- Doc fix to use correct predictor when calling leaderboard (1652) Innixma

Miscellaneous changes

- [security] updated `pillow` to `9.0.1`+ (1615) gradientsky

- [security] updated `ray` to `1.10.0`+ (1616) yinweisu

- Tabular regression tests improvements (1555) willsmithorg
- Regression testing of model list and scores in tabular on small synthetic datasets (for speed).
- Tests about 20 different calls to `TabularPredictor` on both regression and classification tasks, multiple presets etc.
- When a test fails it dumps out the config change required to make it pass, for ease of updating.

- Disabled image/text predictor when gpu is not available in `TabularPredictor` (1676) yinweisu
- Resources are validated before bagging starts. Image/text predictor models require a minimum of 1 GPU.

- Use class property to set keys in model classes. In this way, if we customize the prefix key, other keys are automatically updated. (1669) zhiqiangdon

Various bugfixes, documentation and CI improvements
- yinweisu (1605, 1611, 1631, 1638, 1691)
- zhiqiangdon (1721)
- Innixma (1608, 1701)
- sxjscience (1714)

0.4

- Known issue (1607): `pip install autogluon.text` will error on import if installed standalone due to missing `autogluon.features` as a dependency. To fix: `pip install autogluon.features`. This will be resolved in the v0.4.1 release.

Changes

General

- [AutoGluon now supports Windows OS!](https://auto.gluon.ai/0.4.0/index.html) Both CPU and GPU are supported on Windows.
- AutoGluon now supports Python 3.9. Python 3.6 is no longer supported.
- AutoGluon has migrated from MXNet to PyTorch for all deep learning models resulting in major speedups.
- [AutoGluon v0.4 Cheat Sheet](https://auto.gluon.ai/stable/autogluon-cheat-sheet.pdf): Get started faster than ever before with this handy reference page!
- [New tutorials](https://auto.gluon.ai/0.4.0/tutorials/cloud_fit_deploy/index.html) showcasing cloud training and deployment with AWS SageMaker and Lambda.

Text

AutoGluon-Text is refactored with [PyTorch Lightning](https://www.pytorchlightning.ai/). It now supports backbones in [huggingface/transformers](https://huggingface.co/docs/transformers/index). The new version has better performance, faster training time, and faster inference speed. In addition, AutoGluon-Text now supports solving multilingual problems and a new `AutoMMPredictor` has been implemented for automatically building multimodal DL models.

- **Better Performance**
- Compared with TextPredictor in AutoGluon 0.3, TextPredictor in AutoGluon 0.4 has **72.22%** win-rate in the [multimodal text-tabular benchmark published in NeurIPS 2021](https://arxiv.org/abs/2111.02705). If we use `presets="high_quality"`, the win-rate increased to **77.8%** thanks to the [DeBERTa-v3 backbone](https://arxiv.org/abs/2111.09543).
- In addition, we resubmitted our results to [MachineHack: Product Sentiment Analysis](https://machinehack.com/hackathon/product_sentiment_classification_weekend_hackathon_19/overview), [MachineHack: Predict the Price of Books](https://machinehack.com/hackathon/predict_the_price_of_books/overview), and [Kaggle: Mercari Price Suggestion](https://www.kaggle.com/c/mercari-price-suggestion-challenge). With three lines of code, AutoGluon 0.4 achieved top places in these competitions (1st, 2nd, and 2nd respectively). The results obtained by AutoGluon 0.4 also consistently outperform those obtained by AutoGluon 0.3.
- **Faster Speed**
- The new version has **~2.88x** speedup in training and **~1.40x** speedup in inference. On a g4dn.12xlarge instance, the model can achieve an additional 2.26x speedup with 4 GPUs.
- **Multilingual Support**
- AutoGluon-Text now supports solving multilingual problems via cross-lingual transfer ([Tutorial](https://auto.gluon.ai/0.4.0/tutorials/text_prediction/multimodal_text.html)). This is triggered by setting `presets="multilingual"`. You can now train a model on the English dataset and directly apply the model on datasets in other languages such as German, Japanese, Italian, etc.
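
A hedged sketch of the cross-lingual transfer workflow described above; passing `presets` to `fit()` and the dataset names are assumptions for illustration:

```python
from autogluon.text import TextPredictor

# Hedged sketch: fit on English data, then apply the model directly to
# German data via cross-lingual transfer (dataset names illustrative).
predictor = TextPredictor(label="label")
predictor.fit(english_train_data, presets="multilingual")
german_predictions = predictor.predict(german_test_data)
```
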
- **AutoMMPredictor for Multimodal Problems**
- Added an experimental AutoMMPredictor that supports fusing image backbones in [timm](https://github.com/rwightman/pytorch-image-models/tree/master/timm), text backbones in [huggingface/transformers](https://huggingface.co/docs/transformers/index), and multimodal backbones like [CLIP](https://openai.com/blog/clip/) ([Tutorial](https://auto.gluon.ai/0.4.0/tutorials/text_prediction/automm.html)). It may perform better than ensembling ImagePredictor + TextPredictor.
- **Other Features**
- Support continuous training from an existing checkpoint. You may simply call `.fit()` again after a previously trained model has been loaded.

Thanks to zhiqiangdon and sxjscience for contributing the AutoGluon-Text refactors! (1537, 1547, 1557, 1565, 1571, 1574, 1578, 1579, 1581, 1585, 1586)

Tabular

AutoGluon-Tabular has been majorly enhanced by numerous optimizations in 0.4. In sum, these improvements have led to:

- **~2x** training speedup in Good, High, and Best quality presets.
- **~1.3x** inference speedup.
- **63%** win-rate vs AutoGluon 0.3.1 (Results from [AutoMLBenchmark](https://github.com/openml/automlbenchmark))
- **93%** win-rate vs AutoGluon 0.3.1 on datasets with >=100,000 rows of data (!!!)

Specific updates:

- Added `infer_limit` and `infer_limit_batch_size` as new fit-time constraints ([Tutorial](https://auto.gluon.ai/0.4.0/tutorials/tabular_prediction/tabular-indepth.html#inference-speed-as-a-fit-constraint)). This allows users to specify the desired end-to-end inference latency of the final model, and AutoGluon will automatically train models to satisfy the constraint. This is extremely useful for online-inference scenarios where you need to satisfy an end-to-end latency constraint (for example, 50 ms), as in the sketch below. Innixma (1541, 1584)
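
A hedged sketch of the 50 ms online-inference scenario mentioned above (`infer_limit` is interpreted as seconds per row here; data names are illustrative):

```python
from autogluon.tabular import TabularPredictor

# Hedged sketch: constrain end-to-end inference to ~50ms per row for
# online (single-row) inference. Values and data names are illustrative.
predictor = TabularPredictor(label="label").fit(
    train_data,
    infer_limit=0.05,          # seconds per row, end-to-end
    infer_limit_batch_size=1,  # online-inference batch size
)
```
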
- Implemented automated semi-supervised and transductive learning in TabularPredictor.
[Try it out](https://auto.gluon.ai/0.4.0/api/autogluon.predictor.html#autogluon.tabular.TabularPredictor.fit_pseudolabel) via `TabularPredictor.fit_pseudolabel(...)`! DolanTheMFWizard (1323, 1382)
- Implemented automated feature pruning (i.e. feature selection) in TabularPredictor.
Try it out via `TabularPredictor.fit(..., feature_prune_kwargs={})`! truebluejason (1274, 1305)
- Implemented automated model calibration to improve AutoGluon's predicted probabilities for classification problems.
This is enabled by default, and can be toggled via the `calibrate` fit argument. DolanTheMFWizard (1336, 1374, 1502)
- Implemented parallel bag training via Ray. This results in a ~2x training speedup when bagging is enabled
compared to v0.3.1 with the same hardware due to more efficient usage of resources
for models that cannot effectively use all cores. yinweisu (1329, 1415, 1417, 1423)
- Added adaptive early stopping logic which greatly improves the quality of models within a time budget. Innixma (1380)
- Added automated model calibration in quantile regression. taesup-aws (1388)
- Enhanced datetime feature handling. willsmithorg (1446)
- Added support for custom confidence levels in feature importance. jwmueller (1328)
- Improved neural network HPO search spaces. jwmueller (1346)
- Optimized one-hot encoding preprocessing. Innixma (1376)
- Refactored `refit_full` logic to majorly simplify user model contributions and improve multimodal support with advanced presets. Innixma (1567)
- Added experimental TabularPredictor config helper. gradientsky (1491)
- New Tutorials
- [GPU training tutorial for tabular models](https://auto.gluon.ai/0.4.0/tutorials/tabular_prediction/tabular-gpu.html). gradientsky (#1527)
- [Feature preprocessing tutorial](https://auto.gluon.ai/0.4.0/tutorials/tabular_prediction/tabular-feature-engineering.html). willsmithorg (#1478)

Tabular Models

NEW: TabularNeuralNetTorchModel (alias: 'NN_TORCH')

As part of the migration from MXNet to Torch, we have created a Torch-based counterpart
to the prior MXNet tabular neural network model. This model has several major advantages, such as:

- **1.9x** faster training speed
- **4.7x** faster inference speed
- **51%** win-rate vs MXNet Tabular NN

This model has replaced the MXNet tabular neural network model in the default hyperparameters configuration,
and is enabled by default.

Thanks to jwmueller and Innixma for contributing TabularNeuralNetTorchModel to AutoGluon! (1489)

NEW: VowpalWabbitModel (alias: 'VW')

VowpalWabbit has been added as a new model in AutoGluon. VowpalWabbit is not installed by default, and must be installed separately.
VowpalWabbit is used in the `hyperparameters='multimodal'` preset, and the model is a great option to use for datasets containing text features.

To install VowpalWabbit, specify it via `pip install autogluon.tabular[all,vowpalwabbit]` or `pip install "vowpalwabbit>=8.10,<8.11"`

Thanks to killerSwitch for contributing VowpalWabbitModel to AutoGluon! (1422)

XGBoostModel (alias: 'XGB')

- Optimized model serialization method, which results in 5.5x faster inference speed and halved disk usage. Innixma (1509)
- Adaptive early stopping logic leading to 54.7% win-rate vs prior implementation. Innixma (1380)
- Optimized training speed with expensive metrics such as F1 by ~10x. Innixma (1344)
- Optimized num_cpus default to equal physical cores rather than virtual cores. Innixma (1467)

CatBoostModel (alias: 'CAT')

- CatBoost now incorporates callbacks which make it more stable and resilient to memory errors,
along with more advanced adaptive early stopping logic that leads to 63.2% win-rate vs prior implementation. Innixma (1352, 1380)

LightGBMModel (alias: 'GBM')

- Optimized training speed with expensive metrics such as F1 by ~10x. Innixma (1344)
- Adaptive early stopping logic leading to 51.1% win-rate vs prior implementation. Innixma (1380)
- Optimized num_cpus default to equal physical cores rather than virtual cores. Innixma (1467)

FastAIModel (alias: 'FASTAI')

- Added adaptive batch size selection and epoch selection. gradientsky (1409)
- Enabled HPO support in FastAI (previously HPO was not supported for FastAI). Innixma (1408)
- Made FastAI training deterministic (it is now consistently seeded). Innixma (1419)
- Fixed GPU specification in FastAI to respect the num_gpus parameter. Innixma (1421)
- Forced correct number of threads during fit and inference to avoid issues with global thread updates. yinweisu (1535)

LinearModel (alias: 'LR')

Linear models have been accelerated by **20x** in training and **20x** in inference thanks to a variety of optimizations.
To get the accelerated training speeds, please install [scikit-learn-intelex](https://github.com/intel/scikit-learn-intelex) via `pip install "scikit-learn-intelex>=2021.5,<2021.6"`

Note that currently LinearModel is not enabled by default in AutoGluon,
and must be specified in `hyperparameters` via the key `'LR'`.
Further testing is planned to incorporate LinearModel as a default model in future releases.
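
A minimal sketch of opting in via the `'LR'` key (an empty dict uses the default linear model configuration; data names are illustrative):

```python
from autogluon.tabular import TabularPredictor

# Explicitly request the linear model alongside AutoGluon's defaults;
# an empty dict for "LR" uses its default configuration.
predictor = TabularPredictor(label="label").fit(
    train_data,
    hyperparameters={"LR": {}},
)
```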

Thanks to the `scikit-learn-intelex` team and Innixma for the LinearModel optimizations! (1378)

Vision

- Refactored backend logic to be more robust. yinweisu (1427)
- Added support for inference via CPU. Previously, inferring without GPU would error. yinweisu (1533)
- Refactored HPO logic. Innixma (1511)

Miscellaneous

- AutoGluon no longer depends on ConfigSpace, cython, dill, paramiko, autograd, openml, d8, and graphviz.
This greatly simplifies installation of AutoGluon, particularly on Windows.
- Entirely refactored HPO logic to break dependencies on ConfigSpace and improve stability and ease of development.
HPO has been simplified to use random search in this release while we work on
re-introducing the more advanced HPO methods such as bayesopt in a future release.
Additionally, removed 40,000 lines of outdated code to streamline future development.
Innixma (1397, 1411, 1414, 1431, 1443, 1511)
- Added `autogluon.common` to simplify dependency management for future submodules. Innixma (1386)
- Removed `autogluon.mxnet` and `autogluon.extra` submodules as part of code cleanup. Innixma (1397, 1411, 1414)
- Refactored logging to avoid interfering with other packages. yinweisu (1403)
- Fixed logging output on Kaggle, previously no logs would be displayed while fitting AutoGluon in a Kaggle kernel. Innixma (1468)
- Added platform tests for Linux, MacOS, and Windows. yinweisu (1464, 1506, 1513)
- Added [ROADMAP.md](https://github.com/awslabs/autogluon/blob/master/ROADMAP.md) to highlight past, present, and future feature prioritization and progress to the community. Innixma (#1420)
- Various documentation and CI improvements
- jwmueller (1379, 1408, 1429)
- gradientsky (1383, 1387, 1471, 1500)
- yinweisu (1441, 1482, 1566, 1580)
- willsmithorg (1476, 1483)
- Xilorole (1526)
- Innixma (1452, 1453, 1528, 1577, 1584, 1588, 1593)
- Various backend enhancements / refactoring / cleanup
- DolanTheMFWizard (1319)
- gradientsky (1320, 1366, 1385, 1448, 1488, 1490, 1570, 1576)
- mseeger (1349)
- yinweisu (1497, 1503, 1512, 1563, 1573)
- willsmithorg (1525, 1543)
- Innixma (1311, 1313, 1327, 1331, 1338, 1345, 1369, 1377, 1380, 1408, 1410, 1412, 1419, 1425, 1428, 1462, 1465, 1562, 1569, 1591, 1593)
- Various bug fixes
- jwmueller (1314, 1356)
- yinweisu (1472, 1499, 1504, 1508, 1516)
- gradientsky (1514)
- Innixma (1304, 1325, 1326, 1337, 1365, 1395, 1405, 1587, 1599)

0.4.0

We're happy to announce the AutoGluon 0.4 release. 0.4 contains major enhancements to Tabular and Text modules, along with many quality of life improvements and fixes.

This release is **non-breaking** when upgrading from v0.3.1. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

This release contains [**151** commits from **14** contributors](https://github.com/awslabs/autogluon/graphs/contributors?from=2021-09-01&to=2022-03-09&type=c)!

See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.3.1...v0.4.0

Special thanks to zhiqiangdon, willsmithorg, DolanTheMFWizard, truebluejason, killerSwitch, and Xilorole who were first time contributors to AutoGluon this release!

Full Contributor List (ordered by number of commits):
- Innixma, yinweisu, gradientsky, zhiqiangdon, jwmueller, willsmithorg, sxjscience, DolanTheMFWizard, truebluejason, taesup-aws, Xilorole, mseeger, killerSwitch, rschmucker

This version supports Python versions 3.7 to 3.9.

0.3.1

v0.3.1 is a hotfix release that fixes several major bugs and includes several model quality improvements.

This release is **non-breaking** when upgrading from v0.3.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

This release contains **9** commits from **4** contributors.

See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.3.0...v0.3.1

Thanks to the 4 contributors that contributed to the v0.3.1 release!

Special thanks to yinweisu, who is a first-time contributor to AutoGluon and fixed a major bug in ImagePredictor HPO!

Full Contributor List (ordered by number of commits):

Innixma, gradientsky, yinweisu, sackoh

Changes

Tabular

- AutoGluon v0.3.1 has a **58% win-rate** vs AutoGluon v0.3.0 for `best_quality` preset.
- AutoGluon v0.3.1 has a **75% win-rate** vs AutoGluon v0.3.0 for high and good quality presets.
- Fixed a major bug introduced in v0.3.0 where models trained during refit_full caused weighted ensembles to incorrectly weight models. This severely impacted accuracy and caused worse results for high and good quality presets. Innixma (1293)
- Removed KNN from stacker models, resulting in stack quality improvement. Innixma (1294)
- Added automatic detection and optimized usage of boolean features. Innixma (1286)
- Improved handling of time limit in FastAI NN model to avoid edge cases where the model would use the entire time budget but fail to train. Innixma (1284)
- Updated XGBoost to use `-1` as `n_jobs` value instead of using `os.cpu_count()`. sackoh (1289)

Vision

- Fixed major bug that caused HPO with time limits specified to return very poor models. yinweisu (1282)

General

- Minor doc updates. gradientsky (1288, 1290)

0.3.0

This release is **non-breaking** when upgrading from v0.2.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

This release contains **70** commits from **10** contributors.

See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.2.0...v0.3.0

Thanks to the [**10 contributors**](https://github.com/awslabs/autogluon/graphs/contributors?from=2021-04-27&to=2021-08-14&type=c) that contributed to the v0.3.0 release!

Special thanks to the 3 first-time contributors: rxjx, sallypannn, sarahyurick!

Special thanks to talhaanwarch who opened 21 GitHub issues (!) and participated in numerous discussions during v0.3.0 development. His feedback was incredibly valuable when diagnosing issues and improving the user experience throughout AutoGluon!

Full Contributor List (ordered by number of commits):

Innixma, zhreshold, jwmueller, gradientsky, sxjscience, ValerioPerrone, taesup-aws, sallypannn, rxjx, sarahyurick

Major Changes

Multimodal

- Added multimodal tabular, text, image functionality! See the [tutorial](https://auto.gluon.ai/stable/tutorials/tabular_prediction/tabular-multimodal.html) to get started. innixma, zhreshold (#1041, 1211, 1277)

Tutorials

- Added a new [custom model tutorial](https://auto.gluon.ai/stable/tutorials/tabular_prediction/tabular-custom-model.html) to showcase how to easily add **any** model to AutoGluon! Innixma (#1238)
- Added a new [custom metric tutorial](https://auto.gluon.ai/stable/tutorials/tabular_prediction/tabular-custom-metric.html) to showcase how to add custom metrics to AutoGluon! Innixma (#1271)
- Added [FairHPO tutorial](https://auto.gluon.ai/stable/tutorials/course/fairbo.html). ValerioPerrone (#1090, 1236)

Tabular

- Overall, **AutoGluon-Tabular v0.3 wins 57.6% of the time against AutoGluon-Tabular v0.2** in AutoMLBenchmark!
- **Improved online inference speed by 1.5x-10x** via various low level pandas and numpy optimizations. Innixma (1136)
- Accelerated feature preprocessing speed by **100x+** for datetime and text features. Innixma (1203)
- Fixed FastAI model not properly scaling regression label values, improving model quality significantly. Innixma (1162)
- Fixed r2 metric having the wrong sign in FastAI model, dramatically improving performance when r2 metric is specified. Innixma (1159)
- Updated XGBoost to 1.4, defaulted hyperparameter `tree_method='hist'` for improved performance. Innixma (1239)
- Added `groups` parameter. Now users can specify the exact split indices in a `groups` column when performing model bagging; see the sketch below. This solution leverages sklearn's [LeaveOneGroupOut](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneGroupOut.html) cross-validator. Innixma (#1224)
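
A hedged sketch of the `groups` parameter (we assume it is supplied at predictor construction; the `"group_id"` column and data names are illustrative):

```python
from autogluon.tabular import TabularPredictor

# Hedged sketch: bag models using user-defined fold indices taken from a
# "group_id" column, via LeaveOneGroupOut-style splits.
predictor = TabularPredictor(label="label", groups="group_id")
predictor.fit(train_data)
```
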
- Added option to use holdout data for final ensembling weights in multi-layer stacking via a new `use_bag_holdout` argument. Innixma (1105)
- Added neural network based quantile regression models. taesup-aws (1047)
- Bug fix for random forest models' out-of-fold prediction computation in quantile regression. jwmueller, Innixma (1100, 1102)
- Added `predictor.features()` to get the original feature names used during training. Innixma (1257)
- Refactored AbstractModel code to be easier to use. Innixma (1151, 1216, 1245, 1266)
- Refactored BaggedEnsembleModel code in preparation for distributed bagging. gradientsky (1078)
- Updated RAPIDS version to 21.06. sarahyurick (1241)
- Force dtype conversion in feature preprocessing to align with FeatureMetadata. Now users can specify the dtypes of features via FeatureMetadata rather than updating the DataFrame. Innixma (1212)
- Fixed various edge cases with out-of-bounds date time values. Now out-of-bounds date time values are treated as missing. Innixma (1182)

Vision

- Added Torch / TIMM backend support! Now AutoGluon can train any TIMM model natively, and MXNet is no longer required to train vision models. zhreshold (1249)
- Added regression `problem_type` support to ImagePredictor. sallypannn (1165)
- Added GPU memory check to avoid going OOM during training. Innixma (1199)
- Fixed error when vision models are hyperparameter tuned with forked multiprocessing. gradientsky (1107)
- Fixed crash when an image is missing (both train and inference). Use [TabularPredictor's Image API](https://auto.gluon.ai/stable/tutorials/tabular_prediction/tabular-multimodal.html) to get this functionality. Innixma (#1210)
- Fixed error when the same image is in multiple rows when calling `predict_proba`. Innixma (1206)
- Fixed invalid preset configurations. Innixma (1199)
- Fixed major defect causing tuning data to not be properly created if tuning data was not provided by user. Innixma (1168)
- Upgraded Pillow version to '>=8.3.0,<8.4.0'. gradientsky (1262)

Text

- Removed pyarrow as a required dependency. Innixma (1200)
- Fixed crash when `eval_metric='average_precision'`. rxjx (1092)

General

- Improved support for GPU on Windows. Innixma (1255)
- Added quadratic kappa evaluation metric. sxjscience (1104)
- Improved access method for `__version__`. Innixma (1122)
- Upgraded pandas to 1.3. Innixma (1258)
- Upgraded ConfigSpace to 0.4.19. Innixma (1265)
- Upgraded numpy, graphviz, and dill versions. Innixma (1275)
- Various minor doc improvements. jwmueller, Innixma (1089, 1091, 1093, 1095, 1219, 1253)
- Various minor updates and fixes. Innixma, zhreshold, gradientsky (1098, 1099, 1101, 1113, 1117, 1118, 1166, 1177, 1188, 1197, 1227, 1229, 1235, 1245, 1251)
