This is the release note of [v2.8.0](https://github.com/optuna/optuna/milestone/35?closed=1).
# New Examples Repository
The number of Optuna examples has grown as the number of integrations has increased, and we’ve moved them to their own repository: [optuna/optuna-examples](https://github.com/optuna/optuna-examples/).
# Highlights
## TPE Sampler Improvements
### Constant Liar for Distributed Optimization
In distributed environments, the TPE sampler may sample many points in a small neighborhood because it has no knowledge that other trials running in parallel are sampling nearby. To avoid this issue, we’ve implemented the Constant Liar (CL) heuristic, which reports a poor dummy value for trials that have started but are not yet complete, so that new trials are steered away from regions already being explored and redundant search effort is reduced.
```python
study = optuna.create_study(sampler=optuna.samplers.TPESampler(constant_liar=True))
```
The following history plots demonstrate how optimization can improve with this feature. Ten parallel workers simultaneously optimize the same function, which takes about one second to evaluate. The first plot uses `constant_liar=False` and the second uses `constant_liar=True`. With Constant Liar, the sampler does a better job of assigning different parameter configurations to different trials and converges faster.

See #2664 for details.
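As a rough, self-contained stand-in for that experiment, the following sketch uses thread-based parallelism via `n_jobs` rather than truly distributed workers, and a toy one-second objective that is not from the release note; it only illustrates where `constant_liar=True` plugs in.

```python
import time

import optuna


def objective(trial):
    # Toy objective that takes roughly one second, mimicking the experiment described above.
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    time.sleep(1)
    return x ** 2 + y ** 2


sampler = optuna.samplers.TPESampler(constant_liar=True)
study = optuna.create_study(sampler=sampler)

# Ten workers run concurrently; with `constant_liar=True` the sampler treats running
# trials as if they had already finished with a poor value, so the workers spread out
# instead of all sampling in the same neighborhood.
study.optimize(objective, n_trials=100, n_jobs=10)

print(study.best_params)
```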
### Tree-structured Search Space Support
The TPE sampler with `multivariate=True` now supports tree-structured search spaces. Previously, if the user split the search space with an if-else statement, as shown below, the TPE sampler with `multivariate=True` would fall back to random sampling. Now, if you set `multivariate=True` and `group=True`, the TPE sampler algorithm will be applied to each partitioned search space to perform efficient sampling.
See #2526 for more details.
```python
def objective(trial):
    classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
    if classifier_name == "SVC":
        # If `multivariate=True` and `group=True`, the following 2 parameters are sampled jointly by TPE.
        svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
        svc_kernel = trial.suggest_categorical("kernel", ["linear", "rbf", "sigmoid"])
        classifier_obj = sklearn.svm.SVC(C=svc_c, kernel=svc_kernel)
    else:
        # If `multivariate=True` and `group=True`, the following 3 parameters are sampled jointly by TPE.
        rf_n_estimators = trial.suggest_int("rf_n_estimators", 1, 20)
        rf_criterion = trial.suggest_categorical("rf_criterion", ["gini", "entropy"])
        rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)
        classifier_obj = sklearn.ensemble.RandomForestClassifier(
            n_estimators=rf_n_estimators, criterion=rf_criterion, max_depth=rf_max_depth
        )
    ...


sampler = optuna.samplers.TPESampler(multivariate=True, group=True)
```
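Once the objective returns a score in place of the `...` above (for example, a cross-validation score computed from `classifier_obj`), the sampler is passed to a study in the usual way; a minimal usage sketch:

```python
study = optuna.create_study(direction="maximize", sampler=sampler)
study.optimize(objective, n_trials=100)
```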
## Copying Studies
Studies can now be copied across storages. The trial history as well as `Study.user_attrs` and `Study.system_attrs` are preserved.
For instance, this allows dumping a study in a MySQL `RDBStorage` into an SQLite file. Serialized this way, the study can be shared with other users who are unable to access the original storage.
```python
study = optuna.create_study(
    study_name="my-study", storage="mysql+pymysql://root@localhost/optuna"
)
study.optimize(..., n_trials=100)

# Create a copy of the study "my-study" in the MySQL `RDBStorage` in a local SQLite file named `optuna.db`.
optuna.copy_study(
    from_study_name="my-study",
    from_storage="mysql+pymysql://root@localhost/optuna",
    to_storage="sqlite:///optuna.db",
)

study = optuna.load_study(study_name="my-study", storage="sqlite:///optuna.db")
assert len(study.trials) >= 100
```
See #2607 for details.
## Callbacks
### `optuna.storages.RetryFailedTrialCallback` Added
Used as a callback in `RDBStorage`, this allows trials that were previously pre-empted or otherwise aborted, as detected by a failed heartbeat, to be re-run.
```python
storage = optuna.storages.RDBStorage(
    url="sqlite:///:memory:",
    heartbeat_interval=60,
    grace_period=120,
    failed_trial_callback=optuna.storages.RetryFailedTrialCallback(max_retry=3),
)

study = optuna.create_study(storage=storage)
```
See #2694 for details.
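To make the retry behaviour concrete, here is a hedged sketch of how such a storage might be shared by several worker processes; the objective, study name, and SQLite URL are illustrative placeholders, and a server-based RDB (e.g. MySQL or PostgreSQL) is usually a better fit for real distributed runs.

```python
import optuna


def objective(trial):
    # Placeholder objective; in practice this is the long-running job that may be pre-empted.
    x = trial.suggest_float("x", -10, 10)
    return x ** 2


storage = optuna.storages.RDBStorage(
    url="sqlite:///retry_example.db",
    heartbeat_interval=60,
    grace_period=120,
    failed_trial_callback=optuna.storages.RetryFailedTrialCallback(max_retry=3),
)

# Run this same script in several worker processes. If a worker dies mid-trial, the
# trial stops recording heartbeats; once `grace_period` elapses, the stale trial is
# marked as failed and the callback schedules a retry (up to `max_retry` times).
study = optuna.create_study(
    study_name="heartbeat-retry-demo", storage=storage, load_if_exists=True
)
study.optimize(objective, n_trials=100)
```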
### `optuna.study.MaxTrialsCallback` Added
Used as a callback in `study.optimize`, this allows setting a maximum number of trials in a particular state, such as a maximum number of failed trials, before stopping the optimization.
```python
study.optimize(
    objective,
    callbacks=[optuna.study.MaxTrialsCallback(10, states=(optuna.trial.TrialState.COMPLETE,))],
)
```
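The same callback can count other trial states; for instance, a variant that stops after a given number of failed trials (the limit of 5 here is purely illustrative) could look like this:

```python
study.optimize(
    objective,
    callbacks=[optuna.study.MaxTrialsCallback(5, states=(optuna.trial.TrialState.FAIL,))],
)
```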
See #2636 for details.
# Breaking Changes
- Allow `None` as `study_name` when there is only a single study in `load_study` (2608)
- Relax `GridSampler` allowing not-contained parameters during `suggest_*` (2663)
# New Features
- Make `LightGBMTuner` and `LightGBMTunerCV` reproducible (2431, thanks tetsuoh0103!)
- Add `visualization.matplotlib.plot_pareto_front` (2450, thanks tohmae!)
- Support a group decomposed search space and apply it to TPE (2526)
- Add `__str__` for samplers (2539)
- Add `n_min_trials` argument for `PercentilePruner` and `MedianPruner` (2556)
- Copy study (2607)
- Allow `None` as `study_name` when there is only a single study in `load_study` (2608)
- Add `MaxTrialsCallback` class to enable stopping after fixed number of trials (2612)
- Implement `PatientPruner` (2636)
- Support multi-objective optimization in CLI (`optuna create-study`) (2640)
- Constant liar for `TPESampler` (2664)
- Add automatic retry callback (2694)
- Sorts categorical values on axis that contains only numerical values in `visualization.matplotlib.plot_slice` (2709, thanks Muktan!)
# Enhancements
- `PyTorchLightningPruningCallback` to warn when an evaluation metric does not exist (2157, thanks bigbird555!)
- Pareto front visualization to visualize study progress with color scales (2563)
- Sort categorical values on axis that contains only numerical values in `visualization.plot_contour` (2569)
- Improve `param_importances` (2576)
- Sort categorical values on axis that contains only numerical values in `visualization.matplotlib.plot_contour` (2593)
- Show legend of `optuna.visualization.matplotlib.plot_edf` (2603)
- Show legend of `optuna.visualization.matplotlib.plot_intermediate_values` (2606)
- Make `MOTPEMultiObjectiveSampler` a thin wrapper for `MOTPESampler` (2615)
- Do not wait for next heartbeat on study completion (2686, thanks Turakar!)
- Change colour scale of contour plot by `matplotlib` for consistency with plotly results (2711, thanks 01-vyom!)
# Bug Fixes
- Add type conversion for reference point and solution set (2584)
- Fix contour plot with multi-objective study and `target` being specified (2589)
- Fix distribution's `_contains` (2652)
- Read environment variables in `dump_best_config` (2681)
- Update version info entry on RDB storage upgrade (2687)
- Fix results not reproducible when running `AllenNLPExecutor` multiple t… (Backport of 2717) (2728)
# Installation
- Replace `sklearn` constraint (2634)
- Add constraint of Sphinx version (2657)
- Add `click==7.1.2` to GitHub workflows to solve AllenNLP import error (2665)
- Avoid `tensorflow` 2.5.0 (2674)
- Remove `example` from `setup.py` (2676)
# Documentation
- Add example to `optuna.logging.disable_propagation` (2477, thanks jeromepatel!)
- Add documentation for hyperparameter importance target parameter (2551)
- Remove the news section in `README.md` (2586)
- Documentation updates to `CmaEsSampler` (2591, thanks turian!)
- Rename `ray-joblib.py` to snakecase with underscores (2594)
- Replace `If` with `if` in a sentence (2602)
- Use `CmaEsSampler` instead of `TPESampler` in the batch optimization example (2610)
- README fixes (2617, thanks Scitator!)
- Remove wrong returns description in docstring (2619)
- Improve document on `BoTorchSampler` page (2631)
- Add the missing colon (2661)
- Add missing parameter `WAITING` details in docstring (2683, thanks jeromepatel!)
- Update URLs to `optuna-examples` (2684)
- Fix indents in the ask-and-tell tutorial (2690)
- Join sampler examples in `README.md` (2692)
- Fix typo in the tutorial (2704)
- Update command for installing auto-formatters (2710, thanks 01-vyom!)
- Some edits for `CONTRIBUTING.md` (2719)
# Examples
- Split GitHub Actions workflows (https://github.com/optuna/optuna-examples/pull/1)
- Cherry pick 2611 of `optuna/optuna` (https://github.com/optuna/optuna-examples/pull/2)
- Add checks workflow (https://github.com/optuna/optuna-examples/pull/5)
- Add `MaxTrialsCallback` class to enable stopping after fixed number of trials (https://github.com/optuna/optuna-examples/pull/9)
- Update `README.md` (https://github.com/optuna/optuna-examples/pull/10)
- Add an example of warm starting CMA-ES (https://github.com/optuna/optuna-examples/pull/11, thanks nmasahiro!)
- Replace old links to example files (https://github.com/optuna/optuna-examples/pull/12)
- Avoid `tensorflow` 2.5.0 (https://github.com/optuna/optuna-examples/pull/13)
- Avoid `tensorflow` 2.5 (https://github.com/optuna/optuna-examples/pull/15)
- Test `multi_objective` in CI (https://github.com/optuna/optuna-examples/pull/16)
- Use only one GPU for PyTorch Lightning example by default (https://github.com/optuna/optuna-examples/pull/17)
- Remove example of CatBoost in pruning section (https://github.com/optuna/optuna-examples/pull/18, #2702)
- Add issues and pull request templates (https://github.com/optuna/optuna-examples/pull/20)
- Add `CONTRIBUTING.md` file (https://github.com/optuna/optuna-examples/pull/21)
- Change PR approvers from two to one (https://github.com/optuna/optuna-examples/pull/22)
- Improved search space XGBoost (2346, thanks jeromepatel!)
- Remove `n_jobs` for `study.optimize` in `examples/` (2588, thanks jeromepatel!)
- Using the "log" key is deprecated in `pytorch_lightning` (2611, thanks sushi30!)
- Move examples to a new repository (2655)
- Remove remaining examples (2675)
- Follow up of https://github.com/optuna/optuna-examples/pull/11 in `optuna-examples` (#2689)
# Tests
- Remove assertions for supported dimensions from `test_plot_pareto_front_unsupported_dimensions` (2578)
- Update a test function of `matplotlib.plot_pareto_front` for consistency (2583)
- Add `deterministic` parameter to make LightGBM training reproducible (2623)
- Add `force_col_wise` parameter of LightGBM in test cases of `LightGBMTuner` and `LightGBMTunerCV` (2630, thanks tetsuoh0103!)
- Remove `CudaCallback` from the fastai test (2641)
- Add test cases in `optuna/visualization/matplotlib/edf.py` (2642)
- Refactor a unittest in `test_median.py` (2644)
- Refactor `pruners_test` (2691, thanks tsumli!)
# Code Fixes
- Remove redundant lines in CI settings of `examples` (2554)
- Remove the unused argument of functions in `matplotlib.contour` (2571)
- Fix axis labels of `optuna.visualization.matplotlib.plot_pareto_front` when `axis_order` is specified (2577)
- Remove list casts (2601)
- Remove `_get_distribution` from `visualization/matplotlib/_param_importances.py` (2604)
- Fix grammatical error in failure message (2609, thanks agarwalrounak!)
- Separate `MOTPESampler` from `TPESampler` (2616)
- Simplify `add_distributions` in `_SearchSpaceGroup` (2651)
- Replace old example URLs in `optuna.integrations` (2700)
# Continuous Integration
- Supporting Python 3.9 with integration modules and optional dependencies (2530, thanks 0x41head!)
- Upgrade pip in PyPI and Test PyPI workflow (2598)
- Fix PyPI publish workflow (2624)
- Introduce speed benchmarks using `asv` (2673)
# Other
- Bump `master` version to `2.8.0dev` (2562)
- Upload to TestPyPI at the time of release as well (2573)
- Install blackdoc in `formats.sh` (2637)
- Use `command` to check the existence of the libraries to avoid partially matching (2653)
- Add an example section to the README (2667)
- Fix formatting in contribution guidelines (2668)
- Update `CONTRIBUTING.md` with `optuna-examples` (2669)
# Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
toshihikoyanase, himkt, Scitator, tohmae, crcrpar, c-bata, 01-vyom, sushi30, tsumli, not522, tetsuoh0103, jeromepatel, bigbird555, hvy, g-votte, nzw0301, turian, nmasahiro, Crissman, sile, agarwalrounak, Muktan, Turakar, HideakiImamura, keisuke-umezawa, 0x41head