This is the release note of [v3.0.0-a0](https://github.com/optuna/optuna/milestone/39?closed=1).
Highlights
This is the first alpha pre-release in preparation for the upcoming major version update, v3. It includes several new features, improved optimization algorithms, removals of deprecated interfaces, and many quality-of-life improvements.
*To read about the entire v3 roadmap, please refer to the [Wiki](https://github.com/optuna/optuna/wiki/Optuna-V3-Roadmap).*
*While this is a pre-release, we encourage users to keep using the latest releases of Optuna, including this one, for a smoother transition to the coming major release. Early feedback is welcome!*
CLI Improvements
This release improves the speed and usability of the Optuna CLI. Previously, it took several seconds to launch a CLI command; #3000 significantly speeds up the commands by halving the module load time.
The usability of the ask-and-tell interface is also improved. The `ask` command now lets users define the search space with short, simple JSON strings after #2905. The `tell` command supports `--skip-if-finished`, which ignores duplicated reports of values and statuses instead of raising errors; this improves robustness against, for example, pod retries in cluster environments. See #2905 and #3131 for more details.
Before:
```console
$ optuna ask --storage sqlite:///mystorage.db --study-name mystudy \
    --search-space '{"x": {"name": "UniformDistribution", "attributes": {"low": 0.0, "high": 1.0}}}'
```
After:
```console
$ optuna ask --storage sqlite:///mystorage.db --study-name mystudy \
    --search-space '{"x": {"type": "float", "low": 0.0, "high": 1.0}}'
```
New NSGA-II Crossover Options
The optimization performance of NSGA-II has been greatly improved for real-valued problems. We introduce the `crossover` argument to `NSGAIISampler`, which lets you choose from several crossover variants: `uniform` (default), `blxalpha`, `sbx`, `vsbx`, `undx`, and `spx`.
The following figure shows that the newly introduced crossover algorithms perform better than the existing algorithms, namely the uniform crossover algorithm and the Gaussian process based algorithm, in terms of bias, convergence, and diversity. Note that the previous method, the other implementations (in kurobako), and the default of the new method are all based on uniform crossover.
See #2903 for more information.
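If you would like to try one of the new variants, a minimal sketch looks like the following; the two-objective toy problem is only illustrative.

```python
import optuna


def objective(trial):
    # Illustrative real-valued, two-objective problem.
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 5.0)
    return x ** 2 + y, (x - 2) ** 2 + y


# Pass one of "uniform" (default), "blxalpha", "sbx", "vsbx", "undx", or "spx".
sampler = optuna.samplers.NSGAIISampler(crossover="blxalpha", seed=42)
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)
```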

New History Visualization with Multiple Studies
The optimization history plot now supports visualization of multiple studies: it accepts a list of studies. If the `error_bar` option is `False`, it overlays their histories in a single figure; if `error_bar` is `True`, it computes and shows the means and standard deviations across those histories.
See #2807 for more details.
```python
import optuna


def objective(trial):
    return trial.suggest_float("x", 0, 1) ** 2


n_studies = 5
studies = [optuna.create_study(study_name=f"{i}th-study") for i in range(n_studies)]
for study in studies:
    study.optimize(objective, n_trials=20)

# This generates the first figure.
fig = optuna.visualization.plot_optimization_history(studies)
fig.write_image("./multiple.png")

# This generates the second figure.
fig = optuna.visualization.plot_optimization_history(studies, error_bar=True)
fig.write_image("./error_bar.png")
```


AllenNLP Distributed Pruning
The AllenNLP integration supports pruning in distributed environments. This change enables users to use the `optuna_pruner` callback option along with the `distributed` option, as shown in the following training configuration. See #2977 for more details.
```yaml
...
trainer: {
  optimizer: 'adam',
  cuda_device: -1,
  callbacks: [
    {
      type: 'optuna_pruner',
    }
  ],
},
distributed: {
  cuda_devices: [-1, -1],
},
```
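A hedged sketch of how such a configuration might be driven from Python via `AllenNLPExecutor` follows; the config path, serialization directory, and metric name below are placeholders.

```python
import optuna
from optuna.integration import AllenNLPExecutor


def objective(trial):
    # Hyperparameters suggested here can be referenced from the training configuration.
    trial.suggest_float("lr", 1e-4, 1e-1, log=True)

    executor = AllenNLPExecutor(
        trial,
        config_file="./config.jsonnet",  # placeholder for the configuration shown above
        serialization_dir=f"./result/trial_{trial.number}",
        metrics="best_validation_accuracy",
    )
    return executor.run()


study = optuna.create_study(direction="maximize", pruner=optuna.pruners.HyperbandPruner())
study.optimize(objective, n_trials=20)
```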
Preparations for Unification of Distributions Classes
There are several implementations of `BaseDistribution` in Optuna, such as `UniformDistribution`, `DiscreteUniformDistribution`, `IntUniformDistribution`, and `CategoricalDistribution`. This release includes part of ongoing work to reduce the number of these distribution classes to just `FloatDistribution`, `IntDistribution`, and `CategoricalDistribution`, aligning the classes with the trial suggest interface (`suggest_float`, `suggest_int`, and `suggest_categorical`). Please note that using these new distributions is not yet recommended, because samplers have not been updated to support them. See #3063 for more details.
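For illustration, the new classes are expected to be constructed roughly as follows; this is a sketch assuming `log` and `step` arguments mirroring `suggest_float`/`suggest_int`, and, as noted above, the classes are not yet recommended for use.

```python
from optuna.distributions import FloatDistribution, IntDistribution

# FloatDistribution is intended to cover the roles of UniformDistribution,
# LogUniformDistribution, and DiscreteUniformDistribution.
x = FloatDistribution(low=0.0, high=1.0)               # plain uniform range
lr = FloatDistribution(low=1e-5, high=1e-1, log=True)  # log-scaled range
q = FloatDistribution(low=0.0, high=1.0, step=0.1)     # discretized range

# IntDistribution similarly covers IntUniformDistribution and IntLogUniformDistribution.
n_units = IntDistribution(low=4, high=128, log=True)
```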
Breaking Changes
Some deprecated features, including the `optuna.structs` module, `LightGBMTuner.best_booster`, and the `optuna dashboard` command, are removed in #3057 and #3058. If you use such features, please migrate to the corresponding active APIs listed below.
| Removed APIs | Corresponding active APIs |
| --- | --- |
| `optuna.structs.StudyDirection` | `optuna.study.StudyDirection` |
| `optuna.structs.StudySummary` | `optuna.study.StudySummary` |
| `optuna.structs.FrozenTrial` | `optuna.trial.FrozenTrial` |
| `optuna.structs.TrialState` | `optuna.trial.TrialState` |
| `optuna.structs.TrialPruned` | `optuna.exceptions.TrialPruned` |
| `optuna.integration.lightgbm.LightGBMTuner.best_booster` | `optuna.integration.lightgbm.LightGBMTuner.get_best_booster` |
| `optuna dashboard` | [`optuna-dashboard`](https://github.com/optuna/optuna-dashboard) |
- Unify `suggest` APIs for floating-point parameters (2990, thanks xadrianzetx!)
- Clean up deprecated features (3057, thanks nuka137!)
- Remove `optuna dashboard` (3058)
New Features
- Add interval for LightGBM callback (2490)
- Allow multiple studies and add error bar option to `plot_optimization_history` (2807)
- Support PyTorch-lightning DDP training (2849, thanks tohmae!)
- Add crossover operators for NSGA-II (2903, thanks yoshinobc!)
- Add abbreviated JSON formats of distributions (2905)
- Extend `MLflowCallback` interface (2912, thanks xadrianzetx!)
- Support AllenNLP distributed pruning (2977)
- Make `trial.user_attrs` logging optional in `MLflowCallback` (3043, thanks xadrianzetx!)
- Support multiple input of studies when plot with Matplotlib (3062, thanks TakuyaInoue-github!)
- Add `IntDistribution` & `FloatDistribution` (3063, thanks nyanhi!)
- Add `trial.user_attrs` to `pareto_front` hover text (3082, thanks kasparthommen!)
- Support error bar for Matplotlib (3122, thanks TakuyaInoue-github!)
- Add `optuna tell` with `--skip-if-finished` (3131)
Enhancements
- Add single distribution support to `BoTorchSampler` (2928)
- Speed up `import optuna` (3000)
- Fix `_contains` of `IntLogUniformDistribution` (3005)
- Render importance scores next to bars in `matplotlib.plot_param_importances` (3012, thanks xadrianzetx!)
- Make default value of `verbose_eval` `None` for `LightGBMTuner`/`LightGBMTunerCV` to avoid conflict (3014, thanks chezou!)
- Unify colormap of `plot_contour` (3017)
- Relax `FixedTrial` and `FrozenTrial` allowing not-contained parameters during `suggest_*` (3018)
- Raise errors if `optuna ask` CLI receives `--sampler-kwargs` without `--sampler` (3029)
- Remove `_get_removed_version_from_deprecated_version` function (3065, thanks nuka137!)
- Reformat labels for small importance scores in `plotly.plot_param_importances` (3073, thanks xadrianzetx!)
- Speed up Matplotlib backend `plot_contour` using SciPy's `spsolve` (3092)
- Remove updates in cached storage (3120, thanks shu65!)
Bug Fixes
- Add tests of `sample_relative` and fix type of return values of `SkoptSampler` and `PyCmaSampler` (2897)
- Fix `GridSampler` with `RetryFailedTrialCallback` or `enqueue_trial` (2946)
- Fix the type of `trial.values` in MLflow integration (2991)
- Fix to raise `ValueError` for invalid `q` in `DiscreteUniformDistribution` (3001)
- Do not call `trial.report` during sanity check (3002)
- Fix `matplotlib.plot_contour` bug (3046, thanks IEP!)
- Handle `single` distributions in `fANOVA` evaluator (3085, thanks xadrianzetx!)
Installation
- Support scikit-learn v1.0.0 (3003)
- Pin `tensorflow` and `tensorflow-estimator` versions to `<2.7.0` (3059)
- Add upper version constraint of PyTorchLightning (3077)
- Pin `keras` version to `<2.7.0` (3078)
- Remove version constraints of `tensorflow` (3084)
Documentation
- Add note of the behavior when calling multiple `trial.report` (2980)
- Add note for DDP training of `pytorch-lightning` (2984)
- Add note to `OptunaSearchCV` about direction (3007)
- Clarify `n_trials` in the docs (3016, thanks Rohan138!)
- Add a note to use pickle with different optuna versions (3034)
- Unify the visualization docs (3041, thanks sidshrivastav!)
- Fix a grammatical error in FAQ doc (3051, thanks belldandyxtq!)
- Less ambiguous documentation for `optuna tell` (3052)
- Add example for `logging.set_verbosity` (3061, thanks drumehiron!)
- Mention the tutorial of `002_configurations.py` in the `Trial` API page (3067, thanks makkimaki!)
- Mention the tutorial of `003_efficient_optimization_algorithms.py` in the `Trial` API page (3068, thanks makkimaki!)
- Add link from `set_user_attrs` in `Study` to the `user_attrs` entry in Tutorial (3069, thanks MasahitoKumada!)
- Update description for missing samplers and pruners (3087, thanks masaaldosey!)
- Simplify the unit testing explanation (3089)
- Fix range description in `suggest_float` docstring (3091, thanks xadrianzetx!)
- Fix documentation for the package installation procedure on different OS (3118, thanks masap!)
- Add description of `ValueError` and `TypeError` to `Raises` section of `Trial.report` (3124, thanks MasahitoKumada!)
Examples
- Use `RetryFailedTrialCallback` in `pytorch_checkpoint` example (https://github.com/optuna/optuna-examples/pull/59, thanks xadrianzetx!)
- Add Python 3.9 to CI yaml files (https://github.com/optuna/optuna-examples/pull/61)
- Replace `suggest_uniform` with `suggest_float` (https://github.com/optuna/optuna-examples/pull/63)
- Remove deprecated warning message in `lightgbm` (https://github.com/optuna/optuna-examples/pull/64)
- Pin `tensorflow` and `tensorflow-estimator` versions to `<2.7.0` (https://github.com/optuna/optuna-examples/pull/66)
- Restrict upper version of `pytorch-lightning` (https://github.com/optuna/optuna-examples/pull/67)
- Add an external resource to `README.md` (https://github.com/optuna/optuna-examples/pull/68, thanks solegalli!)
Tests
- Add test case of samplers for conditional objective function (2904)
- Test int distributions with default step (2924)
- Be aware of trial preparation when checking heartbeat interval (2982)
- Simplify the DDP model definition in the test of `pytorch-lightning` (2983)
- Wrap data with `np.asarray` in `lightgbm` test (2997)
- Patch calls to deprecated `suggest` APIs across codebase (3027, thanks xadrianzetx!)
- Make `return_cvbooster` of `LightGBMTuner` consistent to the original value (3070, thanks abatomunkuev!)
- Fix `parametrize_sampler` (3080)
- Fix verbosity for `tests/integration_tests/lightgbm_tuner_tests/test_optimize.py` (3086, thanks nyanhi!)
- Generalize empty search space test case to all hyperparameter importance evaluators (3096, thanks xadrianzetx!)
- Check if texts in legend by order agnostic way (3103)
- Add tests for axis scales to `matplotlib.plot_slice` (3121)
Code Fixes
- Fix #2949, remove `BaseStudy` (2986, thanks twsl!)
- Use `optuna.load_study` in `optuna ask` CLI to omit `direction`/`directions` option (2989)
- Fix typo in `Trial` warning message (3008, thanks xadrianzetx!)
- Replaces boston dataset with california housing dataset (3011, thanks avats-dev!)
- Fix deprecation version of `suggest` APIs (3054, thanks xadrianzetx!)
- Add `remove_version` to the missing `deprecated` argument (3064, thanks nuka137!)
- Add example of `optuna.logging.get_verbosity` (3066, thanks MasahitoKumada!)
- Support `{Float|Int}Distribution` in NSGA-II crossover operators (3139, thanks xadrianzetx!)
Continuous Integration
- Install `botorch` to CI jobs on mac (2988)
- Use libomp 11.1.0 for Mac (3024)
- Run `mac-tests` CI at a scheduled time (3028)
- Set concurrency to github workflows (3095)
- Skip CLI tests when calculating the coverage (3097)
- Migrate `mypy` version to 0.910 (3123)
- Avoid installing the latest MLflow to prevent doctests from failing (3135)
Other
- Bump up version to 2.11.0dev (2976)
- Add roadmap news to `README.md` (2999)
- Bump up version number to 3.0.0a1.dev (3006)
- Add Python 3.9 to `tox.ini` (3025)
- Fix version number to 3.0.0a0 (3140)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
Crissman, HideakiImamura, IEP, MasahitoKumada, Rohan138, TakuyaInoue-github, abatomunkuev, avats-dev, belldandyxtq, chezou, drumehiron, g-votte, himkt, hvy, kasparthommen, keisuke-umezawa, makkimaki, masaaldosey, masap, not522, nuka137, nyanhi, nzw0301, shu65, sidshrivastav, sile, solegalli, tohmae, toshihikoyanase, twsl, xadrianzetx, yoshinobc, ytsmiling