This is the release note of [v2.6.0](https://github.com/optuna/optuna/milestone/32?closed=1).
# Highlights
## Warm Starting CMA-ES and sep-CMA-ES Support
Two new CMA-ES variants are available. Warm starting CMA-ES enables transferring prior knowledge from similar tasks; more specifically, CMA-ES can be initialized based on the existing results of similar tasks. sep-CMA-ES is an algorithm that constrains the covariance matrix to be diagonal and is suitable for separable objective functions. See #2307 and #1951 for more details.
Example of Warm starting CMA-ES:
```python
study = optuna.load_study(storage="...", study_name="existing-study")
study.sampler = optuna.samplers.CmaEsSampler(source_trials=study.trials)
study.optimize(objective, n_trials=100)
```

Example of sep-CMA-ES:
```python
study = optuna.create_study(sampler=optuna.samplers.CmaEsSampler(use_separable_cma=True))
study.optimize(objective, n_trials=100)
```

## PyTorch Distributed Data Parallel
Hyperparameter optimization for distributed neural-network training using [PyTorch Distributed Data Parallel](https://pytorch.org/docs/stable/distributed.html) is now supported. A new integration module, `TorchDistributedTrial`, synchronizes the hyperparameters among all nodes. See #2303 for further details.
Example:
```python
def objective(trial):
    distributed_trial = optuna.integration.TorchDistributedTrial(trial)
    lr = distributed_trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    ...
```
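The objective is executed on every process, but only rank 0 interacts with the study; the other ranks pass `None` and receive the suggested values through collective communication. Below is a minimal driver sketch along the lines of the official distributed example; the `gloo` backend, the trial count, and the dummy score are illustrative assumptions, and the script is meant to be started with a distributed launcher such as `python -m torch.distributed.launch`.
```python
# Minimal driver sketch (assumptions: "gloo" backend, 20 trials, dummy score).
import torch.distributed as dist

import optuna


def objective(trial):
    # On rank 0 `trial` is a real Trial; on the other ranks it is None, and
    # TorchDistributedTrial broadcasts the values suggested on rank 0.
    distributed_trial = optuna.integration.TorchDistributedTrial(trial)
    lr = distributed_trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    # Build the model, wrap it in DistributedDataParallel, train, and evaluate here.
    return lr  # Dummy score standing in for the real validation metric.


if __name__ == "__main__":
    dist.init_process_group("gloo")
    if dist.get_rank() == 0:
        study = optuna.create_study()
        study.optimize(objective, n_trials=20)
    else:
        # Non-zero ranks evaluate the objective in lockstep with rank 0.
        for _ in range(20):
            try:
                objective(None)
            except optuna.TrialPruned:
                pass
```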
## `RDBStorage` Improvements
The `RDBStorage` now allows longer user and system attributes, as well as choices for categorical distributions (e.g. choices spanning thousands of bytes or characters), to be persisted. The corresponding column data types of the underlying SQL tables have been changed from `VARCHAR` to `TEXT`. If you are upgrading from an older version of Optuna and want to keep using the same storage, please migrate your tables as follows. Make sure to create a backup before the migration, and note that databases that do not support `TEXT` will not work with this release.
```console
# Alter table columns from `VARCHAR` to `TEXT` to allow storing larger data.
optuna storage upgrade --storage <storage URL>
```
For more details, see #2395.
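As a small illustration of the relaxed limit, the sketch below persists a user attribute of several thousand characters to an RDB-backed study; the SQLite URL, the attribute name, and the payload are made up for this example.
```python
# Illustrative sketch: the SQLite URL, attribute name, and payload are made up.
import optuna

study = optuna.create_study(storage="sqlite:///example.db")
# A value of this size previously risked hitting the VARCHAR limit; it is now stored as TEXT.
study.set_user_attr("dataset_description", "long free-form notes " * 300)
```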
## Heartbeat Improvements
The heartbeat feature was introduced in v2.5.0 to automatically mark stale trials as failed. It is now possible not only to fail such trials but also to execute user-specified callback functions that process them. See #2347 for more details.
Example:
```python
def objective(trial):
    ...  # Very time-consuming computation.


# Adding a failed trial to the trial queue.
def failed_trial_callback(study, trial):
    study.add_trial(
        optuna.create_trial(
            state=optuna.trial.TrialState.WAITING,
            params=trial.params,
            distributions=trial.distributions,
            user_attrs=trial.user_attrs,
            system_attrs=trial.system_attrs,
        )
    )


storage = optuna.storages.RDBStorage(
    url=...,
    heartbeat_interval=60,
    grace_period=120,
    failed_trial_callback=failed_trial_callback,
)
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
```
## Pre-defined Search Space with Ask-and-tell Interface
The ask-and-tell interface now allows specifying pre-defined search spaces through the new `fixed_distributions` argument. This option keeps the code short when the search space is known beforehand, replacing the corresponding calls to `Trial.suggest_…`. See #2271 for more details.
```python
study = optuna.create_study()

# For example, the distributions are previously defined when using `create_trial`.
distributions = {
    "optimizer": optuna.distributions.CategoricalDistribution(["adam", "sgd"]),
    "lr": optuna.distributions.LogUniformDistribution(0.0001, 0.1),
}
trial = optuna.trial.create_trial(
    params={"optimizer": "adam", "lr": 0.0001},
    distributions=distributions,
    value=0.5,
)
study.add_trial(trial)

# You can pass the distributions previously defined.
trial = study.ask(fixed_distributions=distributions)

# `optimizer` and `lr` are already suggested and accessible with `trial.params`.
print(trial.params)
```
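Because `fixed_distributions` suggests every parameter up front, a complete optimization loop only needs `Study.ask` and `Study.tell`. Here is a minimal sketch reusing the `distributions` defined above; the evaluation is a made-up toy stand-in for a real objective.
```python
# Minimal ask-and-tell loop; the evaluation below is a toy stand-in for a real objective.
for _ in range(10):
    trial = study.ask(fixed_distributions=distributions)
    score = trial.params["lr"] * (1.0 if trial.params["optimizer"] == "adam" else 2.0)
    study.tell(trial, score)
```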
# Breaking Changes
## `RDBStorage` data type updates
Databases of storages created with earlier versions of Optuna must be migrated. Please refer to the highlights above.
For more details, see #2395.
## `datetime_start` of enqueued trials
The `datetime_start` property of `Trial`, `FrozenTrial`, and `FixedTrial` shows when a trial was started. This property may now be `None`. For trials enqueued with `Study.enqueue_trial`, the timestamp used to be set at the time of enqueueing. Now, the timestamp is first set to `None` when the trial is enqueued and updated to the current time when the trial is popped from the queue to run. This also affects `StudySummary.datetime_start`, which may be `None` when trials have been enqueued but not yet popped.
For more details, see #2236.
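For illustration, here is a minimal sketch of the new behavior; the parameter `x` and the throwaway objective are made up for this example.
```python
# Sketch of the new behavior; the parameter "x" and the objective are made up.
study = optuna.create_study()
study.enqueue_trial({"x": 1.0})
print(study.trials[0].datetime_start)  # None: the trial is enqueued but not yet started.

study.optimize(lambda t: t.suggest_float("x", 0.0, 2.0) ** 2, n_trials=1)
print(study.trials[0].datetime_start)  # Set to the time the trial was popped from the queue and run.
```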
## `joblib` internals removed
`joblib` was partially supported as a backend for parallel optimization via the `n_jobs` parameter of `Study.optimize`. This support has now been removed and the internals have been replaced with `concurrent.futures`.
For more details, see #2269.
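The `n_jobs` argument itself is unchanged; only the backend differs. A call such as the one below still runs trials concurrently, now on a `concurrent.futures` thread pool; the objective and the numbers are illustrative.
```python
# `n_jobs` keeps working as before; trials now run on a concurrent.futures thread pool.
# The objective and the numbers below are illustrative.
import optuna

study = optuna.create_study()
study.optimize(lambda trial: trial.suggest_float("x", -10, 10) ** 2, n_trials=100, n_jobs=4)
```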
## AllenNLP v2 support
Optuna now officially supports AllenNLP v2. We have also dropped support for AllenNLP v0 and the pruning support for AllenNLP v1. If you want to use AllenNLP v0 or v1 with Optuna, please install Optuna v2.5.0.
For more details, see #2412.
# New Features
- Support sep-CMA-ES algorithm (1951)
- Add an option to the `Study.ask` method that allows define-and-run parameter suggestion (2271)
- Add integration module for PyTorch Distributed Data Parallel (2303)
- Support Warm Starting CMA-ES (2307)
- Add callback argument for heartbeat functionality (2347)
- Support `IntLogUniformDistribution` for TensorBoard (2362, thanks nzw0301!)
# Enhancements
- Fix the wrong way to set `datetime_start` (clean) (2236, thanks chenghuzi!)
- Multi-objective error messages from `Study` to suggest solutions (2251)
- Adds missing `LightGBMTuner` metrics for the case of higher is better (2267, thanks mavillan!)
- Color Inversion to make contour plots more visually intuitive (2291, thanks 0x41head!)
- Close sessions at the end of with-clause in `Storage` (2345)
- Improve "plot_pareto_front" (2355, thanks 0x41head!)
- Implement `after_trial` method in `CmaEsSampler` (2359, thanks jeromepatel!)
- Convert `low` and `high` to float explicitly in distributions (2360)
- Add `after_trial` for `PyCmaSampler` (2365, thanks jeromepatel!)
- Implement `after_trial` for `BoTorchSampler` and `SkoptSampler` (2372, thanks jeromepatel!)
- Implement `after_trial` for `TPESampler` (2376, thanks jeromepatel!)
- Support `BoTorch >= 0.4.0` (2386, thanks nzw0301!)
- Mitigate string-length limitation of `RDBStorage` (2395)
- Support AllenNLP v2 (2412)
- Implement `after_trial` for `MOTPESampler` (2425, thanks jeromepatel!)
# Bug Fixes
- Add test and fix for relative sampling failure in multivariate TPE (2055, thanks alexrobomind!)
- Fix `optuna.visualization.plot_contour` of subplot case with categorical axes (2297, thanks nzw0301!)
- Only fail trials associated with the current study (2330)
- Fix TensorBoard integration for `suggest_float` (2335, thanks nzw0301!)
- Add type conversions for upper/lower whose values are integers (2343)
- Fix improper stopping with the combination of `GridSampler` and `HyperbandPruner` (2353)
- Fix `matplotlib.plot_parallel_coordinate` with only one suggested parameter (2354, thanks nzw0301!)
- Create `model_dir` by `_LightGBMBaseTuner` (2366, thanks nyanhi!)
- Fix assertion in cached storage for state update (2370)
- Use `low` in `_transform_from_uniform` for TPE sampler (2392, thanks nzw0301!)
- Remove indices from `optuna.visualization.plot_parallel_coordinate` with categorical values (2401, thanks nzw0301!)
# Installation
- `mypy` hotfix avoiding latest NumPy 1.20.0 (2292)
- Remove `jax` from `setup.py` (2308, thanks nzw0301!)
- Install `torch` from PyPI for ReadTheDocs (2361)
- Pin `botorch` version (2379)
# Documentation
- Fix broken links in `README.md` (2268)
- Provide `docs/source/tutorial` for faster local documentation build (2277)
- Remove specification of `n_trials` from example of `GridSampler` (2280)
- Fix typos and errors in document (2281, thanks belldandyxtq!)
- Add tutorial of multi-objective optimization of neural network with PyTorch (2305)
- Add explanation for local verification (2309)
- Add `sphinx.ext.imgconverter` extension (2323, thanks KoyamaSohei!)
- Include `high` in the documentation of `UniformDistribution` and `LogUniformDistribution` (2348)
- Fix typo; Replace dimentional with dimensional (2390, thanks nzw0301!)
- Fix outdated docstring of `TFKerasPruningCallback` (2399, thanks sfujiwara!)
- Call `fig.show()` in visualization code examples (2403, thanks harupy!)
- Explain the backend of parallelisation (2428, thanks nzw0301!)
- Navigate with left/right arrow keys in the document (2433, thanks ydcjeff!)
- Hotfix for MNIST download in tutorial (2438)
# Examples
- Provide a user-defined pruner example (2140, thanks tktran!)
- Add Hydra example (2290, thanks nzw0301!)
- Use `trainer.callback_metrics` in the Pytorch Lightning example (2294, thanks TezRomacH!)
- Example folders (2302)
- Update PL example with typing and `DataModule` (2332, thanks TezRomacH!)
- Remove unsupported argument from PyTorch Lightning example (2357)
- Update `examples/kubernetes/mlflow/check_study.sh` to match whole words (2363, thanks twolffpiggott!)
- Add PyTorch checkpoint example using `failed_trial_callback` (2373)
- Update `Dockerfile` of Kubernetes simple example (2375, thanks 0x41head!)
# Tests
- Refactor test of `GridSampler` (2285)
- Replace `parametrize_storage` with `StorageSupplier` (2404, thanks nzw0301!)
# Code Fixes
- Replace `joblib` with `concurrent.futures` for parallel optimization (2269)
- Make trials stale only when succeeded to fail (2284)
- Apply code-fix to `LightGBMTuner` (Follow-up 2267) (2299)
- Inherit `PyTorchLightningPruningCallback` from Callback (2326, thanks TezRomacH!)
- Consistently use `suggest_float` (2344)
- Fix typo (2352, thanks nzw0301!)
- Increase API request limit for stale bot (2369)
- Fix typo; replace `contraints` with `constraints` (2378, thanks nzw0301!)
- Fix typo (2383, thanks nzw0301!)
- Update examples for `study.get_trials` for states filtering (2393, thanks jeromepatel!)
- Fix - remove arguments of python2 `super().__init__` (2402, thanks nyanhi!)
# Continuous Integration
- Turn off RDB tests on circleci (2255)
- Allow allennlp in py3.8 integration tests (2367)
- Color pytest logs (2400, thanks harupy!)
- Remove `-f` option from doctest pip installation (2418)
# Other
- Bump up version number to `v2.6.0.dev` (2283)
- Enable automatic closing of stale issues and pull requests by github actions (2287)
- Add setup section to `CONTRIBUTING.md` (2342)
- Fix the local `mypy` error on Pytorch Lightning integration (2349)
- Update the link to the `botorch` example (2377, thanks nzw0301!)
- Remove `-f` option from documentation installation (2407)
# Thanks to All the Contributors!
This release was made possible by the authors and everyone who participated in the reviews and discussions.
0x41head, Crissman, HideakiImamura, KoyamaSohei, TezRomacH, alexrobomind, belldandyxtq, c-bata, chenghuzi, crcrpar, g-votte, harupy, himkt, hvy, jeromepatel, keisuke-umezawa, mavillan, not522, nyanhi, nzw0301, sfujiwara, sile, tktran, toshihikoyanase, twolffpiggott, ydcjeff, ytsmiling