AutoGluon

1.2.0

We're happy to announce the AutoGluon 1.2.0 release.

AutoGluon 1.2 contains massive improvements to both Tabular and TimeSeries modules, each achieving a 70% win-rate vs AutoGluon 1.1. This release additionally adds support for Python 3.12 and drops support for Python 3.8.

This release contains [186 commits from 19 contributors](https://github.com/autogluon/autogluon/graphs/contributors?from=2024-06-15&to=2024-11-29&type=c)! See the full commit change-log here: https://github.com/autogluon/autogluon/compare/v1.1.1...v1.2.0

We are also excited to announce [AutoGluon-Assistant](https://github.com/autogluon/autogluon-assistant/) (AG-A), our first venture into the realm of Automated Data Science!

WIP: The 1.2.0 release notes are still being updated. Come back in a few hours for the complete release notes. The below is a draft.

For Tabular, the headline enhancements are the new [TabPFNMix tabular foundation model](https://huggingface.co/autogluon/tabpfn-mix-1.0-classifier) and the parallel fit strategy, both bundled into the new `"experimental_quality"` preset to ensure a smooth transition period for those who wish to try the new cutting-edge features. We will use this release to gather feedback prior to incorporating these features into the other presets. We also introduce a new stack-layer model pruning technique that delivers a 3x inference speedup on small datasets with zero performance loss, along with greatly improved post-hoc calibration across the board, particularly on small datasets.

For TimeSeries, we introduce [Chronos-Bolt](https://huggingface.co/autogluon/chronos-bolt-base), our latest foundation model integrated into AutoGluon, with massive improvements to both accuracy and inference speed compared to Chronos, along with fine-tuning capabilities. We also added covariate regressor support!

See more details in the Spotlights below!

Spotlight

AutoGluon Becomes the Gold Standard for Competition ML in 2024

Before diving into the new features of 1.2, we would like to start by highlighting the [widespread adoption](https://www.kaggle.com/search?q=autogluon+sortBy%3Adate) AutoGluon has received on competition ML sites like Kaggle in 2024. Across all of 2024, AutoGluon was used to achieve a top-3 finish in 15 of 18 tabular Kaggle competitions, including 7 first-place finishes, and was never outside the top 1% of private leaderboard placements, with an average of over 1000 competing human teams in each competition. In the $75,000 prize money [2024 Kaggle AutoML Grand Prix](https://www.kaggle.com/automl-grand-prix), AutoGluon was used by the 1st, 2nd, and 3rd place teams, with the 2nd place team led by two AutoGluon developers: [Lennart Purucker](https://github.com/LennartPurucker) and [Nick Erickson](https://github.com/Innixma)! For comparison, in 2023 AutoGluon achieved only one first-place and one second-place solution. We attribute the bulk of this increase to the improvements introduced in AutoGluon 1.0 and beyond.

<center>
<img src="https://autogluon.s3.amazonaws.com/images/autogluon_kaggle_results_2024.png" width="75%"/>
</center>

We'd like to emphasize that these results are achieved via human expert interaction with AutoGluon and other tools, and often include manual feature engineering and hyperparameter tuning to get the most out of AutoGluon. To see live tracking of all AutoGluon solution placements on Kaggle, refer to our [AWESOME.md ML competition section](https://github.com/autogluon/autogluon/blob/master/AWESOME.md#kaggle), where we provide links to all solution write-ups.

AutoGluon-Assistant: Automating Data Science with AutoGluon and LLMs

We are excited to share the release of a new [AutoGluon-Assistant module](https://github.com/autogluon/autogluon-assistant/) (AG-A), powered by LLMs from AWS Bedrock or OpenAI. AutoGluon-Assistant empowers users to solve tabular machine learning problems using only natural language descriptions, in zero lines of code with our simple user interface. Fully autonomous AG-A outperforms 74% of human ML practitioners in Kaggle competitions and secured a live top 10 finish in the $75,000 prize money [2024 Kaggle AutoML Grand Prix](https://www.kaggle.com/automl-grand-prix) competition as Team AGA 🤖!

TabularPredictor presets="experimental_quality"

TabularPredictor has a new `"experimental_quality"` preset that offers even better predictive quality than `"best_quality"`. On the [AutoML Benchmark](https://github.com/openml/automlbenchmark), we observe a 70% win-rate vs `best_quality` when running for 4 hours on a 64-CPU machine. This preset is a testing ground for cutting-edge features and models which we hope to incorporate into `best_quality` in future releases. We recommend using a machine with at least 16 CPU cores, 64 GB of memory, and a `time_limit` of 4+ hours to get the most benefit out of `experimental_quality`. Please let us know via a GitHub issue if you run into any problems running the `experimental_quality` preset.
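
As a rough sketch, trying the new preset looks like the following (using the census income sample dataset from the AutoGluon tutorials; any labeled tabular dataset works):

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Sample dataset from the AutoGluon tutorials; the label column is "class".
train_data = TabularDataset("https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv")

predictor = TabularPredictor(label="class").fit(
    train_data,
    presets="experimental_quality",
    time_limit=4 * 3600,  # 4+ hours recommended to get the most benefit
)
```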

TabPFNMix: A Foundation Model for Tabular Data

[TabPFNMix](https://huggingface.co/autogluon/tabpfn-mix-1.0-classifier) is the first tabular foundation model created by the AutoGluon team, and it was pre-trained exclusively on synthetic data.
The model builds upon the prior work of [TabPFN](https://arxiv.org/abs/2207.01848) and [TabForestPFN](https://arxiv.org/abs/2405.13396). To the best of our knowledge, TabPFNMix achieves a new state of the art for individual open-source model performance on datasets between 1,000 and 10,000 samples, and it also supports regression tasks! Across the 109 classification datasets with at most 10,000 training samples in [TabRepo](https://github.com/autogluon/tabrepo), fine-tuned TabPFNMix outperforms all prior models, with a 64% win-rate vs the strongest tree model, CatBoost, and a 61% win-rate vs fine-tuned TabForestPFN.

The model is available via the `TABPFNMIX` hyperparameters key and is used in the new `experimental_quality` preset. We recommend using this model for datasets with fewer than 50,000 training samples, ideally with a large time limit and 64+ GB of memory. This work is still in the early stages, and we appreciate any feedback from the community to help us iterate and improve for future releases. You can learn more on our HuggingFace model pages ([tabpfn-mix-1.0-classifier](https://huggingface.co/autogluon/tabpfn-mix-1.0-classifier), [tabpfn-mix-1.0-regressor](https://huggingface.co/autogluon/tabpfn-mix-1.0-regressor)). Give us a like on HuggingFace if you want to see more! A paper providing more details about the model is planned for the future.
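
Below is a minimal sketch of training TabPFNMix on its own via the hyperparameters key; the empty config dict (requesting default model settings) is an assumption here, see the model pages for tuned configurations:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv")

# Train only TabPFNMix via the TABPFNMIX hyperparameters key.
# The empty dict requests the model's default settings (an assumption).
predictor = TabularPredictor(label="class").fit(
    train_data,
    hyperparameters={"TABPFNMIX": {}},
)
```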

fit_strategy="parallel"

AutoGluon's TabularPredictor now supports the new fit argument `fit_strategy` with a new `"parallel"` option, enabled by default in the new `experimental_quality` preset. For machines with 16 or more CPU cores, the parallel fit strategy offers a major speedup over the previous `"sequential"` strategy. We estimate that with 64 CPU cores, most datasets will experience a 2-4x speedup, with the speedup growing as the number of CPU cores increases.
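
A minimal sketch of opting in to the parallel strategy outside of the `experimental_quality` preset:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv")

predictor = TabularPredictor(label="class").fit(
    train_data,
    presets="best_quality",
    fit_strategy="parallel",  # defaults to "sequential" outside experimental_quality
)
```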

Chronos-Bolt⚡: a 250x faster, more accurate Chronos model

Chronos-Bolt is our latest foundation model for forecasting that has been integrated into AutoGluon. It is based on the T5 encoder-decoder architecture and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then input into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps—_a method known as direct multi-step forecasting_. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the original Chronos models of the same size.

The following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.

<center>
<img src="https://autogluon.s3.amazonaws.com/images/chronos_bolt_speed.svg" width="50%"/>
</center>

Chronos-Bolt models are not only significantly faster but also more accurate than the original Chronos models. The following plot reports the probabilistic and point forecasting performance of Chronos-Bolt in terms of the [Weighted Quantile Loss (WQL)](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.WQL) and the [Mean Absolute Scaled Error (MASE)](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.MASE), respectively, aggregated over 27 datasets (see the [Chronos paper](https://arxiv.org/abs/2403.07815) for details on this benchmark). Remarkably, despite having no prior exposure to these datasets during training, the zero-shot Chronos-Bolt models outperform commonly used statistical models and deep learning models that have been trained on these datasets (highlighted by *). They also outperform other foundation models (denoted by +), which were pretrained on certain datasets in our benchmark and are therefore not entirely zero-shot. Notably, Chronos-Bolt (Base) also surpasses the original Chronos (Large) model in forecasting accuracy while being over 600 times faster.

<center>
<img src="https://autogluon.s3.amazonaws.com/images/chronos_bolt_accuracy.svg" width="80%"/>
</center>

Chronos-Bolt models are now available through AutoGluon in four sizes—Tiny (9M), Mini (21M), Small (48M), and Base (205M)—and can also be used on the CPU. With the addition of Chronos-Bolt models and other enhancements, **AutoGluon v1.2 achieves a 70%+ win rate against the previous release**!

In addition to the new Chronos-Bolt models, we have also added support for effortless fine-tuning of Chronos and Chronos-Bolt models. Check out the updated [Chronos tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html) to learn how to use and fine-tune Chronos-Bolt models.
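
As a minimal sketch (argument names such as `model_path` and `fine_tune` follow the Chronos tutorial and may evolve), zero-shot and fine-tuned Chronos-Bolt can be requested via the `Chronos` hyperparameters key:

```python
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

# df: long-format pandas DataFrame with "item_id", "timestamp", "target" columns.
train_data = TimeSeriesDataFrame.from_data_frame(df)

predictor = TimeSeriesPredictor(prediction_length=48).fit(
    train_data,
    hyperparameters={
        "Chronos": [
            {"model_path": "bolt_base"},                      # zero-shot Chronos-Bolt
            {"model_path": "bolt_small", "fine_tune": True},  # fine-tuned variant
        ]
    },
)
predictions = predictor.predict(train_data)
```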

Time Series Covariate Regressors

We have added support for covariate regressors for all forecasting models. Covariate regressors are tabular regression models that can be combined with univariate forecasting models to incorporate exogenous information. These are particularly useful for foundation models like Chronos-Bolt, which rely solely on the target time series' historical data and cannot directly use exogenous information (such as holidays or promotions). To improve the predictions of univariate models when covariates are available, a covariate regressor is first fit on the known covariates and static features to predict the target column at each time step. The predictions of the covariate regressor are then subtracted from the target column, and the univariate model then forecasts the residuals. The [Chronos tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html) showcases how covariate regressors can be used with Chronos-Bolt.
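
A hedged sketch of pairing Chronos-Bolt with a covariate regressor (the covariate names below are hypothetical, and the `covariate_regressor`/`target_scaler` argument values follow the Chronos tutorial):

```python
from autogluon.timeseries import TimeSeriesPredictor

# train_data: a TimeSeriesDataFrame whose columns include the known covariates below.
predictor = TimeSeriesPredictor(
    prediction_length=24,
    known_covariates_names=["promotion", "holiday"],  # hypothetical covariates
).fit(
    train_data,
    hyperparameters={
        "Chronos": {
            "model_path": "bolt_base",
            "covariate_regressor": "CAT",  # CatBoost fit on covariates + static features
            "target_scaler": "standard",   # scale the target before fitting the regressor
        }
    },
)
```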

General

WIP: Come back in a few hours for the full release notes

Tabular

WIP: Come back in a few hours for the full release notes

TimeSeries

New Features
* Add fine-tuning support for Chronos and Chronos-Bolt models abdulfatir (4608, 4645, 4653, 4655, 4659, 4661, 4673, 4677)
* Add Chronos-Bolt canerturkmen (4625)
* `TimeSeriesPredictor.leaderboard` can now compute extra metrics and return hyperparameters for each model shchur (4481)
* Add `target_scaler` support for all forecasting models shchur (4460, 4644)
* Add `covariate_regressor` support for all forecasting models shchur (4566, 4641)
* Add method to convert a TimeSeriesDataFrame to a regular pd.DataFrame shchur (4415)
* [experimental] Add the weighted cumulative error forecasting metric shchur (4594)
* [experimental] Allow custom ensemble model types for time series shchur (4662)

Fixes and Improvements
* Update presets canerturkmen shchur (4656, 4658, 4666, 4672)
* Unify all Croston models into a single class shchur (4564)
* Bump `statsforecast` version to 1.7 canerturkmen shchur (4194, 4357)
* Fix deep learning models failing if item_ids have StringDtype rsj123 (4539)
* Update logic for inferring the time series frequency shchur (4540)
* Speed up and reduce memory usage of the `TimeSeriesFeatureGenerator` preprocessing logic shchur (4557)
* Update to GluonTS v0.16.0 shchur (4628)
* Refactor GluonTS default parameter handling, update TiDE parameters canerturkmen (4640)
* Move covariate scaling logic into a separate class shchur (4634)
* Prune timeseries unit and smoke tests canerturkmen (4650)
* Minor fixes abdulfatir canerturkmen shchur (4259, 4299, 4395, 4386, 4409, 4533, 4565, 4633, 4647)


Multimodal

Fixes and Improvements
* Fix Missing Validation Metric While Resuming A Model Failed At Checkpoint Fusing Stage by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4449
* Add coco_root for better support for custom dataset in COCO format. by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/3809
* Add COCO Format Saving Support and Update Object Detection I/O Handling by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/3811
* Skip MMDet Config Files While Checking with bandit by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4630
* Fix Logloss Bug and Refine Compute Score Logics by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4629
* Fix Index Typo in Tutorial by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4642
* Fix Proba Metrics for Multiclass by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4643
* Support torch 2.4 by tonyhoo in https://github.com/autogluon/autogluon/pull/4360
* Add Installation Guide for Object Detection in Tutorial by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4430
* Add Bandit Warning Mitigation for Internal `torch.save` and `torch.load` Usage by tonyhoo in https://github.com/autogluon/autogluon/pull/4502
* update accelerate version range by cheungdaven in https://github.com/autogluon/autogluon/pull/4596
* Bound nltk version to avoid verbose logging issue by tonyhoo in https://github.com/autogluon/autogluon/pull/4604
* Upgrade TIMM by prateekdesai04 in https://github.com/autogluon/autogluon/pull/4580
* Key dependency updates in _setup_utils.py for v1.2 release by tonyhoo in https://github.com/autogluon/autogluon/pull/4612
* Configurable Number of Checkpoints to Keep per HPO Trial by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4615
* Refactor Metrics for Each Problem Type by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4616
* Fix Torch Version and Colab Installation for Object Detection by FANGAreNotGnu in https://github.com/autogluon/autogluon/pull/4447

Special Thanks

WIP: Come back in a few hours for the full release notes

* [Xiyuan Zhang](https://xiyuanzh.github.io/) for leading the development of TabPFNMix!
* The TabPFN authors [Noah Hollmann](https://twitter.com/noahholl), [Samuel Muller](https://twitter.com/SamuelMullr), [Katharina Eggensperger](https://twitter.com/KEggensperger), and [Frank Hutter](https://twitter.com/FrankRHutter) for unlocking the power of foundation models for tabular data, and the TabForestPFN authors [Felix den Breejen](https://github.com/FelixdenBreejen), [Sangmin Bae](https://scholar.google.com/citations?user=T5rHY14AAAAJ&hl=ko), [Stephen Cha](https://scholar.google.com/citations?user=jqLvFdIAAAAJ&hl=en), and [Se-Young Yun](https://fbsqkd.github.io/) for extending the idea to a more generic representation. Our TabPFNMix work builds upon the shoulders of giants.
* [Lennart Purucker](https://x.com/LennartPurucker) for leading development of the [parallel model fit functionality](https://github.com/autogluon/autogluon/pull/4606) and pushing AutoGluon to its limits in the 2024 Kaggle AutoML Grand Prix.
* [Robert Hatch](https://www.kaggle.com/roberthatch), [Tilii](https://www.kaggle.com/tilii7), [Optimistix](https://www.kaggle.com/optimistix), [Mart Preusse](https://www.kaggle.com/martinapreusse), [Ravi Ramakrishnan](https://www.kaggle.com/ravi20076), [Samvel Kocharyan](https://www.kaggle.com/samvelkoch), [Kirderf](https://www.kaggle.com/kirderf), [Carl McBride Ellis](https://www.kaggle.com/carlmcbrideellis), [Konstantin Dmitriev](https://www.kaggle.com/kdmitrie), and others for their insightful discussions and for championing AutoGluon on Kaggle!
* [Eddie Bergman](https://x.com/edberg_wardman) for his insightful surprise code review of the [tabular callback support](https://github.com/autogluon/autogluon/pull/4327) feature.

Contributors

Full Contributor List (ordered by # of commits):

Innixma shchur prateekdesai04 tonyhoo FangAreNotGnu suzhoum abdulfatir canerturkmen LennartPurucker abhishek-iitmadras adibiasio rsj123 nathanaelbosch cheungdaven lostella zkalson rey-allan echowve xiyuanzh

New Contributors
* nathanaelbosch made their first contribution in https://github.com/autogluon/autogluon/pull/4366
* adibiasio made their first contribution in https://github.com/autogluon/autogluon/pull/4391
* abdulfatir made their first contribution in https://github.com/autogluon/autogluon/pull/4608
* echowve made their first contribution in https://github.com/autogluon/autogluon/pull/4667
* abhishek-iitmadras made their first contribution in https://github.com/autogluon/autogluon/pull/4685
* xiyuanzh made their first contribution in https://github.com/autogluon/autogluon/pull/4694

1.1.1

We're happy to announce the AutoGluon 1.1.1 release.

AutoGluon 1.1.1 contains bug fixes and logging improvements for Tabular, TimeSeries, and Multimodal modules, as well as support for PyTorch 2.2 and 2.3.

Join the community: [![](https://img.shields.io/discord/1043248669505368144?logo=discord&style=flat)](https://discord.gg/wjUmjqAc2N)
Get the latest updates: [![Twitter](https://img.shields.io/twitter/follow/autogluon?style=social)](https://twitter.com/autogluon)

This release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.1.1.

This release contains **[52 commits from 11 contributors](https://github.com/autogluon/autogluon/compare/v1.1.0...v1.1.1)**!

General
- Add support for PyTorch 2.2. prateekdesai04 (4123)
- Add support for PyTorch 2.3. suzhoum (4239, 4256)
- Upgrade GluonTS to 0.15.1. shchur (4231)

Tabular
Note: Trying to load a TabularPredictor with a FastAI model trained on a previous AutoGluon release will raise an exception when calling `predict` due to a fix in the `model-internals.pkl` path. Please ensure matching versions.

- Fix deadlock when `num_gpus>0` and dynamic_stacking is enabled. Innixma (4208)
- Improve decision threshold calibration. Innixma (4136, 4137)
- Improve dynamic stacking logging. Innixma (4208, 4262)
- Fix regression metrics (other than RMSE and MSE) being calculated incorrectly for LightGBM early stopping. Innixma (4174)
- Fix custom multiclass metrics being calculated incorrectly for LightGBM early stopping. Innixma (4250)
- Fix HPO crashing with NN_TORCH and FASTAI models. Innixma (4232)
- Improve NN_TORCH runtime estimate. Innixma (4247)
- Add infer throughput logging. Innixma (4200)
- Disable sklearnex for linear models due to observed performance degradation. Innixma (4223)
- Improve sklearnex logging verbosity in Kaggle. Innixma (4216)
- Rename cached version file to version.txt. Innixma (4203)
- Add refit_full support for Linear models. Innixma (4222)
- Add AsTypeFeatureGenerator detailed exception logging. Innixma (4251, 4252)

TimeSeries
- Ensure prediction_length is stored as an integer. shchur (4160)
- Fix tabular model preprocessing failure edge-case. shchur (4175)
- Fix loading of Tabular models failure if predictor moved to a different directory. shchur (4171)
- Fix cached predictions error when predictor saved on-top of an existing predictor. shchur (4202)
- Use AutoGluon forks of Chronos models. shchur (4198)
- Fix off-by-one bug in Chronos inference. canerturkmen (4205)
- Rename cached version file to version.txt. Innixma (4203)
- Use correct target and quantile_levels in fallback model for MLForecast. shchur (4230)

Multimodal
- Fix bug in CLIP's image feature normalization. Harry-zzh (4114)
- Fix bug in text augmentation. Harry-zzh (4115)
- Modify default fine-tuning tricks. Harry-zzh (4166)
- Add PyTorch version warning for object detection. FANGAreNotGnu (4217)

Docs and CI
- Add competition solutions to `AWESOME.md`. Innixma shchur (4122, 4163, 4245)
- Fix PDF classification tutorial. zhiqiangdon (4127)
- Add AutoMM paper citation. zhiqiangdon (4154)
- Add pickle load warning in all modules and tutorials. shchur (4243)
- Various minor doc and test fixes and improvements. tonyhoo shchur lovvge Innixma suzhoum (4113, 4176, 4225, 4233, 4235, 4249, 4266)

Contributors

Full Contributor List (ordered by # of commits):

Innixma shchur Harry-zzh suzhoum zhiqiangdon lovvge rey-allan prateekdesai04 canerturkmen FANGAreNotGnu tonyhoo

New Contributors
* lovvge made their first contribution in https://github.com/autogluon/autogluon/commit/57a15fcfbbbc94514ff20ed2774cd447d9f4115f
* rey-allan made their first contribution in 4145

1.1.0

We're happy to announce the AutoGluon 1.1 release.

AutoGluon 1.1 contains major improvements to the TimeSeries module, achieving a 60% win-rate vs AutoGluon 1.0 through the addition of Chronos, a pretrained model for time series forecasting, along with numerous other enhancements. The other modules have also been enhanced through new features such as Conv-LoRA support and improved performance for large tabular datasets between 5 and 30 GB in size. For a full breakdown of AutoGluon 1.1 features, please refer to the feature spotlights and the itemized enhancements below.

Join the community: [![](https://img.shields.io/discord/1043248669505368144?logo=discord&style=flat)](https://discord.gg/wjUmjqAc2N)
Get the latest updates: [![Twitter](https://img.shields.io/twitter/follow/autogluon?style=social)](https://twitter.com/autogluon)

This release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.1.

This release contains **[125 commits from 20 contributors](https://github.com/autogluon/autogluon/compare/v1.0.0...v1.1.0)**!

Full Contributor List (ordered by # of commits):

shchur prateekdesai04 Innixma canerturkmen zhiqiangdon tonyhoo AnirudhDagar Harry-zzh suzhoum FANGAreNotGnu nimasteryang lostella dassaswat afmkt npepin-hub mglowacki100 ddelange LennartPurucker taoyang1122 gradientsky

Special thanks to ddelange for their continued assistance with Python 3.11 support and Ray version upgrades!

Spotlight

AutoGluon Achieves Top Placements in ML Competitions!

AutoGluon has experienced [widespread adoption on Kaggle](https://www.kaggle.com/search?q=autogluon+sortBy%3Adate) since the AutoGluon 1.0 release.
AutoGluon has been used in over 130 Kaggle notebooks and mentioned in over 100 discussion threads in the past 90 days!
Most excitingly, AutoGluon has already been used to achieve top ranking placements in multiple competitions with thousands of competitors since the start of 2024:

| Placement | Competition | Author | Date | AutoGluon Details | Notes |
|:-----------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------|:-----------|:------------------|:-------------------------------|
| :3rd_place_medal: Rank 3/2303 (Top 0.1%) | [Steel Plate Defect Prediction](https://www.kaggle.com/competitions/playground-series-s4e3/discussion/488127) | [Samvel Kocharyan](https://github.com/samvelkoch) | 2024/03/31 | v1.0, Tabular | Kaggle Playground Series S4E3 |
| :2nd_place_medal: Rank 2/93 (Top 2%) | [Prediction Interval Competition I: Birth Weight](https://www.kaggle.com/competitions/prediction-interval-competition-i-birth-weight/leaderboard) | [Oleksandr Shchur](https://shchur.github.io/) | 2024/03/21 | v1.0, Tabular | |
| :2nd_place_medal: Rank 2/1542 (Top 0.1%) | [WiDS Datathon 2024 Challenge 1](https://www.kaggle.com/competitions/widsdatathon2024-challenge1/discussion/482285) | [lazy_panda](https://www.kaggle.com/byteliberator) | 2024/03/01 | v1.0, Tabular | |
| :2nd_place_medal: Rank 2/3746 (Top 0.1%) | [Multi-Class Prediction of Obesity Risk](https://www.kaggle.com/competitions/playground-series-s4e2/discussion/480939) | [Kirderf](https://twitter.com/kirderf9) | 2024/02/29 | v1.0, Tabular | Kaggle Playground Series S4E2 |
| :2nd_place_medal: Rank 2/3777 (Top 0.1%) | [Binary Classification with a Bank Churn Dataset](https://www.kaggle.com/competitions/playground-series-s4e1/discussion/472496) | [lukaszl](https://www.kaggle.com/lukaszl) | 2024/01/31 | v1.0, Tabular | Kaggle Playground Series S4E1 |
| Rank 4/1718 (Top 0.2%) | [Multi-Class Prediction of Cirrhosis Outcomes](https://www.kaggle.com/competitions/playground-series-s3e26/discussion/464863) | [Kirderf](https://twitter.com/kirderf9) | 2024/01/01 | v1.0, Tabular | Kaggle Playground Series S3E26 |

We are thrilled that the data science community is leveraging AutoGluon as their go-to method to quickly and effectively achieve top-ranking ML solutions! For an up-to-date list of competition solutions using AutoGluon refer to our [AWESOME.md](https://github.com/autogluon/autogluon/blob/master/AWESOME.md#competition-solutions-using-autogluon), and don't hesitate to let us know if you use AutoGluon in a competition!

Chronos, a pretrained model for time series forecasting

AutoGluon-TimeSeries now features [Chronos](https://arxiv.org/abs/2403.07815), a family of forecasting models pretrained on large collections of open-source time series datasets that can generate accurate zero-shot predictions for new unseen data. Check out the [new tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html) to learn how to use Chronos through the familiar `TimeSeriesPredictor` API.
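
A minimal sketch of zero-shot forecasting with Chronos (the preset names below follow the 1.1 Chronos tutorial and are assumptions here):

```python
from autogluon.timeseries import TimeSeriesPredictor

# train_data: a TimeSeriesDataFrame of historical observations.
predictor = TimeSeriesPredictor(prediction_length=24).fit(
    train_data,
    presets="chronos_small",  # larger "chronos_base" / "chronos_large" variants assumed
)
predictions = predictor.predict(train_data)
```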


General

- Refactor project README & project Tagline Innixma (3861, 4066)
- Add AWESOME.md competition results and other doc improvements. Innixma (4023)
- Pandas version upgrade. shchur Innixma (4079, 4089)
- PyTorch, CUDA, Lightning version upgrades. prateekdesai04 canerturkmen zhiqiangdon (3982, 3984, 3991, 4006)
- Ray version upgrade. ddelange tonyhoo (3774, 3956)
- Scikit-learn version upgrade. prateekdesai04 (3872, 3881, 3947)
- Various dependency upgrades. Innixma tonyhoo (4024, 4083)

TimeSeries

Highlights
AutoGluon 1.1 comes with numerous new features and improvements to the time series module. These include highly requested functionality such as feature importance, support for categorical covariates, the ability to visualize forecasts, and enhancements to logging. The new release also comes with considerable improvements to forecast accuracy, achieving a 60% win rate and a 3% average error reduction compared to the previous AutoGluon version. These improvements are mostly attributed to the addition of Chronos, improved preprocessing logic, and native handling of missing values.


New Features
- Add Chronos pretrained forecasting model ([tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html)). canerturkmen shchur lostella (#3978, 4013, 4052, 4055, 4056, 4061, 4092, 4098)
- Measure the importance of features & covariates on the forecast accuracy with `TimeSeriesPredictor.feature_importance()` (see the sketch after this list). canerturkmen (4033, 4087)
- Native missing values support (no imputation required). shchur (3995, 4068, 4091)
- Add support for categorical covariates. shchur (3874, 4037)
- Improve inference speed by persisting models in memory with `TimeSeriesPredictor.persist()`. canerturkmen (4005)
- Visualize forecasts with `TimeSeriesPredictor.plot()`. shchur (3889)
- Add `RMSLE` evaluation metric. canerturkmen (3938)
- Enable logging to file. canerturkmen (3877)
- Add option to keep lightning logs after training with `keep_lightning_logs` hyperparameter. shchur (3937)
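
A hedged sketch combining several of the additions above (exact signatures may differ slightly):

```python
predictor.persist()                          # keep models in memory for faster inference
importance = predictor.feature_importance()  # covariate/feature importance scores
predictions = predictor.predict(train_data)
predictor.plot(train_data, predictions)      # visualize the forecasts
```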

Fixes and Improvements
- Automatically preprocess real-valued covariates shchur (4042, 4069)
- Add option to skip model selection when only one model is trained. shchur (4002)
- Ensure all metrics handle missing values in target shchur (3966)
- Fix bug when loading a GPU trained model on a CPU machine shchur (3979)
- Fix inconsistent random seed. canerturkmen shchur (3934, 4099)
- Fix crash when calling .info after load. afmkt (3900)
- Fix leaderboard crash when no models trained. shchur (3849)
- Add prototype TabRepo simulation artifact generation. shchur (3829)
- Fix refit_full bug. shchur (3820)
- Documentation improvements, hide deprecated methods. shchur (3764, 4054, 4098)
- Minor fixes. canerturkmen, shchur, AnirudhDagar (4009, 4040, 4041, 4051, 4070, 4094)

AutoMM

Highlights

AutoMM 1.1 introduces the innovative Conv-LoRA, a parameter-efficient fine-tuning (PEFT) method stemming from our latest paper presented at ICLR 2024, titled "[Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model](https://arxiv.org/abs/2401.17868)". Conv-LoRA is designed for fine-tuning the Segment Anything Model, exhibiting superior performance compared to previous PEFT approaches, such as LoRA and visual prompt tuning, across various semantic segmentation tasks in diverse domains including natural images, agriculture, remote sensing, and healthcare. Check out [our Conv-LoRA example](https://github.com/autogluon/autogluon/tree/master/examples/automm/Conv-LoRA).

New Features

- Added Conv-LoRA, a new parameter efficient fine-tuning method. Harry-zzh zhiqiangdon (3933, 3999, 4007, 4022, 4025)
- Added support for new column type: 'image_base64_str'. Harry-zzh zhiqiangdon (3867)
- Added support for loading pre-trained weights in FT-Transformer. taoyang1122 zhiqiangdon (3859)

Fixes and Improvements

- Fixed bugs in semantic segmentation. Harry-zzh (3801, 3812)
- Fixed crashes when using F1 metric. suzhoum (3822)
- Fixed bugs in PEFT methods. Harry-zzh (3840)
- Accelerated object detection training by ~30% for the `high_quality` and `best_quality` presets. FANGAreNotGnu (3970)
- Deprecated Grounding-DINO. FANGAreNotGnu (3974)
- Fixed lightning upgrade issues zhiqiangdon (3991)
- Fixed using f1, f1_macro, f1_micro for binary classification in knowledge distillation. nimasteryang (3837)
- Removed PyMuPDF from installation due to the license issue. Users need to install it themselves to use document classification. zhiqiangdon (4093)


Tabular

Highlights
AutoGluon-Tabular 1.1 primarily focuses on bug fixes and stability improvements. In particular, we have greatly improved runtime performance on large datasets between 5 and 30 GB in size by subsampling the data used for decision threshold calibration and weighted ensemble fitting to at most 1 million rows, maintaining the same quality while executing far faster. We also reduced the default number of weighted ensemble iterations from 100 to 25, which speeds up all weighted ensemble fit times by 4x. We heavily refactored the `fit_pseudolabel` logic, and it should now achieve noticeably stronger results.

Fixes and Improvements
- Fix return value in `predictor.fit_weighted_ensemble(refit_full=True)`. Innixma (1956)
- Enhance performance on large datasets through subsampling. Innixma (3977)
- Fix refit_full crash when out of memory. Innixma (3977)
- Refactor and enhance `.fit_pseudolabel` logic. Innixma (3930)
- Fix crash in memory check during HPO for LightGBM, CatBoost, and XGBoost. Innixma (3931)
- Fix dynamic stacking on windows. Innixma (3893)
- LightGBM version upgrade. mglowacki100, Innixma (3427)
- Fix memory-safe sub-fits being skipped if Ray is not initialized. LennartPurucker (3868)
- Logging improvements. AnirudhDagar (3873)
- Hide deprecated methods. Innixma (3795)
- Documentation improvements. Innixma AnirudhDagar (2024, 3975, 3976, 3996)

Docs and CI
- Add auto benchmarking report generation. prateekdesai04 (4038, 4039)
- Fix tabular tests for Windows. tonyhoo (4036)
- Fix hanging tabular unit tests. prateekdesai04 (4031)
- Fix CI evaluation. suzhoum (4019)
- Add package version comparison between CI runs prateekdesai04 (3962, 3968, 3972)
- Update conf.py to reflect current year. dassaswat (3932)
- Avoid redundant unit test runs. prateekdesai04 (3942)
- Fix colab notebook links prateekdesai04 (3926)

New Contributors
* npepin-hub made their first contribution in https://github.com/autogluon/autogluon/pull/3898
* afmkt made their first contribution in https://github.com/autogluon/autogluon/pull/3900
* dassaswat made their first contribution in https://github.com/autogluon/autogluon/pull/3932
* nimasteryang made their first contribution in https://github.com/autogluon/autogluon/pull/3837
* zkalson made their first contribution in https://github.com/autogluon/autogluon/pull/4096

1.0

New Features
* Added `dynamic_stacking` predictor fit argument to mitigate [stacked overfitting](https://github.com/autogluon/autogluon/issues/2779#issuecomment-1736468165) LennartPurucker Innixma (3616)
* Added [zeroshot-HPO learned portfolio](https://github.com/autogluon/autogluon/blob/master/tabular/src/autogluon/tabular/configs/zeroshot/zeroshot_portfolio_2023.py) as new hyperparameters for `best_quality` and `high_quality` presets. Innixma geoalgo (#3750)
* Added experimental scikit-learn API compatible wrappers to TabularPredictor. You can access them via `from autogluon.tabular.experimental import TabularClassifier, TabularRegressor` (see the sketch after this list). Innixma (3769)
* Added `predictor.model_failures()` Innixma (3421)
* Added enhanced FT-Transformer taoyang1122 Innixma (3621, 3644, 3692)
* Added `predictor.simulation_artifact()` to support integration with [TabRepo](https://github.com/autogluon/tabrepo) Innixma (#3555)
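
A minimal sketch of the experimental scikit-learn wrappers (constructor options are omitted, since the API is experimental and may change):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from autogluon.tabular.experimental import TabularClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabularClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```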

Performance Improvements
* Enhanced FastAI model quality on regression via output clipping LennartPurucker Innixma (3597)
* Added Skip-connection Weighted Ensemble LennartPurucker (3598)
* Fix memory leaks by using ray processes for sequential fitting LennartPurucker (3614)
* Added dynamic parallel folds support to better utilize compute in low memory scenarios yinweisu Innixma (3511)
* Fixed linear model crashes during HPO and added search space for linear models Innixma (3571, 3720)

Other Enhancements
* Multi-layer stacking now produces deterministic results LennartPurucker (3573)
* Various model dependency updates mglowacki100 (3373)
* Various code cleanup and logging improvements Innixma (3408, 3570, 3652, 3734)

Bug Fixes / Code and Doc Improvements
* Fixed incorrect model memory usage calculation Innixma (3591)
* Fixed `infer_limit` being used incorrectly when bagging Innixma (3467)
* Fixed rare edge-case FastAI model crash Innixma (3416)
* Various minor bug fixes Innixma (3418, 3480)

AutoMM
[AutoGluon Multimodal (AutoMM)](https://auto.gluon.ai/stable/tutorials/multimodal/index.html) is designed to simplify the fine-tuning of foundation models for downstream applications with just three lines of code. It seamlessly integrates with popular model zoos such as [HuggingFace Transformers](https://github.com/huggingface/transformers), [TIMM](https://github.com/huggingface/pytorch-image-models), and [MMDetection](https://github.com/open-mmlab/mmdetection), providing support for a diverse range of data modalities,
including image, text, tabular, and document data, whether used individually or in combination.

New Features

* Semantic Segmentation
  * Introducing the new problem type `semantic_segmentation`, for fine-tuning [Segment Anything Model (SAM)](https://segment-anything.com/) with three lines of code (see the sketch after this list). Harry-zzh zhiqiangdon (3645, 3677, 3697, 3711, 3722, 3728)
  * Added comprehensive benchmarks from diverse domains, including natural images, agriculture, remote sensing, and healthcare.
  * Utilizing parameter-efficient finetuning (PEFT) [LoRA](https://arxiv.org/abs/2106.09685), showcasing consistent superior performance over alternatives ([VPT](https://arxiv.org/abs/2203.12119), [adaptor](https://arxiv.org/abs/1902.00751), [BitFit](https://arxiv.org/abs/2106.10199), [SAM-adaptor](https://arxiv.org/abs/2304.09148), and [LST](https://arxiv.org/abs/2206.06522)) in the extensive benchmarks.
  * Added one [semantic segmentation tutorial](https://auto.gluon.ai/stable/tutorials/multimodal/image_segmentation/beginner_semantic_seg.html). zhiqiangdon (3716)
  * Using [SAM-ViT Huge](https://huggingface.co/facebook/sam-vit-huge) by default (GPU memory > 25GB required).
* Few Shot Classification
  * Added the new `few_shot_classification` problem type for training few shot classifiers on images or texts. zhiqiangdon (3662, 3681, 3695)
  * Leveraging image/text foundation models to extract features and train SVM classifiers.
  * Added one [few shot classification tutorial](https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/few_shot_learning.html). zhiqiangdon (3662)
* Supported [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) for faster training (experimental and torch >=2.2 required). zhiqiangdon (3520)
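
A hedged sketch of the new `semantic_segmentation` problem type (`train_df` is assumed to hold image paths and ground-truth mask paths, with `"label"` naming the mask column); `few_shot_classification` is used analogously:

```python
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(
    problem_type="semantic_segmentation",
    label="label",  # column containing ground-truth mask paths
)
predictor.fit(train_data=train_df)
masks = predictor.predict(test_df)
```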

Performance Improvements
* Improved default image backbones, achieving a 100% win-rate on the image benchmark. taoyang1122 (3738)
* Replaced MLPs with FT-Transformer as the default tabular backbones, resulting in a 67% win-rate on the text+tabular benchmark. taoyang1122 (3732)
* Using both the improved default image backbones and FT-Transformer achieves a 62% win-rate on the text+tabular+image benchmark. taoyang1122 (3732, 3738)

Stability Enhancements
* Enabled rigorous multi-GPU CI testing. prateekdesai04 (3566)
* Fixed multi-GPU issues. FANGAreNotGnu (3617, 3665, 3684, 3691, 3639, 3618)

Enhanced Usability
* Supported custom evaluation metrics, which allows defining custom [metric object](https://auto.gluon.ai/dev/tutorials/tabular/advanced/tabular-custom-metric.html) and passing it to the `eval_metric` argument taoyang1122 (#3548)
* Supported multi-GPU training in notebooks (experimental) zhiqiangdon (3484)
* Improved logging with system info zhiqiangdon (3735)

Improved Scalability
* The introduction of the new learner class design facilitates easier support for new tasks and data modalities within AutoMM, enhancing overall scalability. zhiqiangdon (3650, 3685, 3735)

Other Enhancements

* Added the option `hf_text.use_fast` for customizing fast tokenizer usage in `hf_text` models. zhiqiangdon (3379)
* Added fallback evaluation/validation metric, supporting `f1_macro` `f1_micro`, and `f1_weighted`. FANGAreNotGnu (3696)
* Supported multi-GPU inference with the DDP strategy. zhiqiangdon (3445, 3451)
* Upgraded torch to 2.0. zhiqiangdon (3404)
* Upgraded lightning to 2.0 zhiqiangdon (3419)
* Upgraded torchmetrics to 1.0 zhiqiangdon (3422)

Code Improvements

* Refactored AutoMM with the learner class for improved design. zhiqiangdon (3650, 3685, 3735)
* Refactored FT-Transformer. taoyang1122 (3621, 3700)
* Refactored the visualizers of object detection, semantic segmentation, and NER. zhiqiangdon (3716)
* Other code refactor/clean-up: zhiqiangdon FANGAreNotGnu (3383, 3399, 3434, 3667, 3684, 3695)

Bug Fixes/Doc Improvements

* Fixed HPO for focal loss. suzhoum (3739)
* Fixed one ONNX export issue. AnirudhDagar (3725)
* Improved AutoMM introduction for clarity. zhiqiangdon (3388, 3726)
* Improved AutoMM API doc. zhiqiangdon AnirudhDagar (3772, 3777)
* Other bug fixes zhiqiangdon FANGAreNotGnu taoyang1122 tonyhoo rsj123 AnirudhDagar (3384, 3424, 3526, 3593, 3615, 3638, 3674, 3693, 3702, 3690, 3729, 3736, 3474, 3456, 3590, 3660)
* Other doc improvements zhiqiangdon FANGAreNotGnu taoyang1122 (3397, 3461, 3579, 3670, 3699, 3710, 3716, 3737, 3744, 3745, 3680)

TimeSeries

Highlights
AutoGluon 1.0 features numerous usability and performance improvements to the TimeSeries module. These include automatic handling of missing data and irregular time series, new forecasting metrics (including custom metric support), advanced time series cross-validation options, and new forecasting models. AutoGluon produces state-of-the-art results in forecast accuracy, achieving [70%+ win rate](https://openreview.net/forum?id=XHIY3cQ8Tew) compared to other popular forecasting frameworks.

New features
- Support for custom forecasting metrics shchur (3760, 3602)
- New forecasting metrics `WAPE`, `RMSSE`, `SQL` + improved [documentation for metrics](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html) melopeo shchur (#3747, 3632, 3510, 3490)
- Improved robustness: `TimeSeriesPredictor` can now handle data with all [pandas frequencies](https://pandas.pydata.org/docs/user_guide/timeseries.html#offset-aliases), irregular timestamps, or missing values represented by `NaN` shchur (3563, 3454)
- New models: intermittent demand forecasting models based on conformal prediction (`ADIDA`, `CrostonClassic`, `CrostonOptimized`, `CrostonSBA`, `IMAPA`); `WaveNet` and `NPTS` from GluonTS; new baseline models (`Average`, `SeasonalAverage`, `Zero`) canerturkmen shchur (3706, 3742, 3606, 3459)
- Advanced cross-validation options: avoid retraining the models for each validation window with `refit_every_n_windows` or adjust the step size between validation windows with `val_step_size` arguments to `TimeSeriesPredictor.fit` shchur (3704, 3537)
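
For example, the advanced cross-validation options can be combined as follows (a sketch using the argument names listed above):

```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(prediction_length=24).fit(
    train_data,                # a TimeSeriesDataFrame
    num_val_windows=3,         # evaluate each model on 3 validation windows
    refit_every_n_windows=2,   # retrain models only every 2nd window
    val_step_size=24,          # spacing between consecutive validation windows
)
```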

Enhancements
- Enable Ray Tune for deep-learning forecasting models canerturkmen (3705)
- Support passing multiple evaluation metrics to `TimeSeriesPredictor.evaluate` shchur (3646)
- Static features can now be passed directly to `TimeSeriesDataFrame.from_path` and `TimeSeriesDataFrame.from_data_frame` constructors shchur (3635)

Performance improvements
- Much more accurate forecasts at low time limits thanks to new presets and updated logic for splitting the training time across models shchur (3749, 3657, 3741)
- Faster training and prediction + lower memory usage for `DirectTabular` and `RecursiveTabular` models (3740, 3620, 3559)
- Enable early stopping and improve inference speed for GluonTS models shchur (3575)
- Reduce import time for `autogluon.timeseries` by moving import statements inside model classes (3514)

Bug Fixes / Code and Doc Improvements
- Improve log messages shchur (3721)
- Add reference to the publication on AutoGluon-TimeSeries to README shchur (3482)
- Align API of `TimeSeriesPredictor` with `TabularPredictor`, remove deprecated methods shchur (3714, 3655, 3396)
- General bug fixes and improvements shchur (3758, 3756, 3755, 3754, 3746, 3743, 3727, 3698, 3654, 3653, 3648, 3628, 3588, 3560, 3558, 3536, 3533, 3523, 3522, 3476, 3463)

EDA

The EDA module will be released at a later time, as it requires additional development effort before it is ready for 1.0.
We will make an announcement when EDA is ready for release. For now, please continue to use `"autogluon.eda==0.8.2"`.

Deprecations

General
* `autogluon.core.spaces` has been deprecated. Please use `autogluon.common.spaces` instead Innixma (3701)
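
A before/after sketch of the import path change:

```python
# Before (deprecated)
from autogluon.core import spaces

# After
from autogluon.common import spaces
```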

Tabular
Tabular will log warnings if using the deprecated methods. Deprecated methods are planned to be removed in AutoGluon 1.2 Innixma (3701)
* `autogluon.tabular.TabularPredictor`
  * `predictor.get_model_names()` -> `predictor.model_names()`
  * `predictor.get_model_names_persisted()` -> `predictor.model_names(persisted=True)`
  * `predictor.compile_models()` -> `predictor.compile()`
  * `predictor.persist_models()` -> `predictor.persist()`
  * `predictor.unpersist_models()` -> `predictor.unpersist()`
  * `predictor.get_model_best()` -> `predictor.model_best`
  * `predictor.get_pred_from_proba()` -> `predictor.predict_from_proba()`
  * `predictor.get_oof_pred_proba()` -> `predictor.predict_proba_oof()`
  * `predictor.get_oof_pred()` -> `predictor.predict_oof()`
  * `predictor.get_model_full_dict()` -> `predictor.model_refit_map()`
  * `predictor.get_size_disk()` -> `predictor.disk_usage()`
  * `predictor.get_size_disk_per_file()` -> `predictor.disk_usage_per_file()`
  * `predictor.leaderboard()` `silent` argument deprecated, replaced by `display`, defaults to False
    * Same for `predictor.evaluate()` and `predictor.evaluate_predictions()`
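
A before/after sketch of the renamed `TabularPredictor` methods:

```python
# Before (deprecated; logs a warning, removal planned for AutoGluon 1.2)
model_names = predictor.get_model_names()
best_model = predictor.get_model_best()
predictor.leaderboard(silent=True)

# After
model_names = predictor.model_names()
best_model = predictor.model_best
predictor.leaderboard(display=False)
```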

AutoMM

* Deprecated the `FewShotSVMPredictor` in favor of the new `few_shot_classification` problem type zhiqiangdon (3699)
* Deprecated the `AutoMMPredictor` in favor of `MultiModalPredictor` zhiqiangdon (3650)
* `autogluon.multimodal.MultiModalPredictor`
  * Deprecated the `config` argument in the fit API. zhiqiangdon (3679)
  * Deprecated the `init_scratch` and `pipeline` arguments in the init API. zhiqiangdon (3668)

TimeSeries
* `autogluon.timeseries.TimeSeriesPredictor`
  * Deprecated argument `TimeSeriesPredictor(ignore_time_index: bool)`. Now, if the data contains irregular timestamps, either convert it to regular frequency with `data = data.convert_frequency(freq)` or provide the frequency when creating the predictor as `TimeSeriesPredictor(freq=freq)`.
  * `predictor.evaluate()` now returns a dictionary (previously returned a float)
  * `predictor.score()` -> `predictor.evaluate()`
  * `predictor.get_model_names()` -> `predictor.model_names()`
  * `predictor.get_model_best()` -> `predictor.model_best`
  * Metric `"mean_wQuantileLoss"` has been renamed to `"WQL"`
  * `predictor.leaderboard()` `silent` argument deprecated, replaced by `display`, defaults to False
  * When setting `hyperparameters` to a string in `predictor.fit()`, supported values are now `"default"`, `"light"` and `"very_light"`
* `autogluon.timeseries.TimeSeriesDataFrame`
  * `df.to_regular_index()` -> `df.convert_frequency()`
  * Deprecated method `df.get_reindexed_view()`. Please see the deprecation notes for `ignore_time_index` under `TimeSeriesPredictor` above for information on how to deal with irregular timestamps
* Models
  * All models based on MXNet (`DeepARMXNet`, `MQCNNMXNet`, `MQRNNMXNet`, `SimpleFeedForwardMXNet`, `TemporalFusionTransformerMXNet`, `TransformerMXNet`) have been removed
  * Statistical models from statsmodels (`ARIMA`, `Theta`, `ETS`) have been replaced by their counterparts from StatsForecast (3513). Note that these models now have different hyperparameter names.
  * `DirectTabular` is now implemented using the `mlforecast` backend (same as `RecursiveTabular`); most hyperparameter names for the model have changed.
* `autogluon.timeseries.TimeSeriesEvaluator` has been deprecated. Please use the metrics available in `autogluon.timeseries.metrics` instead.
* `autogluon.timeseries.splitter.MultiWindowSplitter` and `autogluon.timeseries.splitter.LastWindowSplitter` have been deprecated. Please use the `num_val_windows` and `val_step_size` arguments to `TimeSeriesPredictor.fit` instead (alternatively, use `autogluon.timeseries.splitter.ExpandingWindowSplitter`).
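
A before/after sketch of the `TimeSeriesPredictor` and `TimeSeriesDataFrame` changes above:

```python
# Before (deprecated)
score = predictor.score(test_data)        # returned a float
regular_data = data.to_regular_index(freq="H")

# After
scores = predictor.evaluate(test_data)    # returns a dict of metric values
regular_data = data.convert_frequency(freq="H")
# For irregular timestamps, either convert the data as above or create the
# predictor with an explicit frequency: TimeSeriesPredictor(freq="H")
```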

Papers

AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting

We have published a paper on AutoGluon-TimeSeries at AutoML Conference 2023 ([Paper Link](https://openreview.net/forum?id=XHIY3cQ8Tew), [YouTube Video](https://www.youtube.com/watch?v=niLmfjXeHnE)). In the paper, we benchmarked AutoGluon and popular open-source forecasting frameworks (including DeepAR, TFT, AutoARIMA, AutoETS, AutoPyTorch). AutoGluon produces SOTA results in point and probabilistic forecasting, and even **achieves 65% win rate against the best-in-hindsight combination of models**.

TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications

We have published a paper on Tabular Zeroshot-HPO ensembling simulation to arXiv ([Paper Link](https://arxiv.org/pdf/2311.02971.pdf), [GitHub](https://github.com/autogluon/tabrepo)). This paper is key to achieving the performance improvements seen in AutoGluon 1.0, and we plan to continue to develop the code-base to support future enhancements.

XTab: Cross-table Pretraining for Tabular Transformers

We have published a paper on tabular Transformer pre-training at ICML 2023 ([Paper Link](https://arxiv.org/abs/2305.06090), [GitHub](https://github.com/BingzhaoZhu/XTab)). In the paper we demonstrate state-of-the-art performance for tabular deep learning models, including being able to match the performance of XGBoost and LightGBM models. While the pre-trained transformer is not yet incorporated into AutoGluon, we plan to integrate it in a future release.

Learning Multimodal Data Augmentation in Feature Space

Our paper on learning multimodal data augmentation was accepted at ICLR 2023 ([Paper Link](https://arxiv.org/pdf/2212.14453.pdf), [GitHub](https://github.com/lzcemma/LeMDA/)). This paper introduces a plug-and-play module to learn multimodal data augmentation in feature space, with no constraints on the identities of the modalities or the relationship between modalities. We show that it can (1) improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprised of image, text, and tabular data. This work is not yet incorporated into AutoGluon, but we plan to integrate it in a future release.

Data Augmentation for Object Detection via Controllable Diffusion Models

Our paper on generative object detection data augmentation has been accepted at WACV 2024 (Paper and GitHub link will be available soon). This paper proposes a data augmentation pipeline based on controllable diffusion models and CLIP, with visual prior generation to guide the generation and post-filtering by category-calibrated CLIP scores to control its quality. We demonstrate that the performance improves across various tasks and settings when using our augmentation pipeline with different detectors. Although diffusion models are currently not integrated into AutoGluon, we plan to incorporate the data augmentation techniques in a future release.

Adapting Image Foundation Models for Video Understanding

We have published a paper on how to efficiently adapt image foundation models for video understanding at ICLR 2023 ([Paper Link](https://arxiv.org/pdf/2302.03024.pdf), [GitHub](https://github.com/taoyang1122/adapt-image-models)). This paper introduces spatial adaptation, temporal adaptation and joint adaptation to gradually equip a frozen image model with spatiotemporal reasoning capability. The proposed method achieves competitive or even better performance than traditional full finetuning while largely saving the training cost of large foundation models.

1.0.0

Today is finally the day... AutoGluon 1.0 has arrived!! After [over four years of development](https://automlpodcast.com/episode/autogluon-the-story) and [2061 commits from 111 contributors](https://github.com/autogluon/autogluon/graphs/contributors), we are excited to share with you the culmination of our efforts to create and democratize the most powerful, easy-to-use, and feature-rich automated machine learning system in the world.

AutoGluon 1.0 comes with transformative enhancements to predictive quality resulting from the combination of multiple novel ensembling innovations, spotlighted below. Besides performance enhancements, many other improvements have been made that are detailed in the individual module sections.

This release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.0.

This release contains 223 commits from 17 contributors!

Full Contributor List (ordered by # of commits):

shchur, zhiqiangdon, Innixma, prateekdesai04, FANGAreNotGnu, yinweisu, taoyang1122, LennartPurucker, Harry-zzh, AnirudhDagar, jaheba, gradientsky, melopeo, ddelange, tonyhoo, canerturkmen, suzhoum

Join the community: [![](https://img.shields.io/discord/1043248669505368144?logo=discord&style=flat)](https://discord.gg/wjUmjqAc2N)
Get the latest updates: [![Twitter](https://img.shields.io/twitter/follow/autogluon?style=social)](https://twitter.com/autogluon)

Spotlight

Tabular Performance Enhancements

0.8.3

What's Changed
v0.8.3 is a patch release to address security vulnerabilities.

See the full commit change-log here: https://github.com/autogluon/autogluon/compare/0.8.2...0.8.3

This version supports Python versions 3.8, 3.9, and 3.10.

Changes
* `transformers` and other packages version upgrades + some fixes: suzhoum (4155)
