Optuna

Page 7 of 19

2.10.1

This is the release note of [v2.10.1](https://github.com/optuna/optuna/milestone/45?closed=1).

This is a patch release to resolve issues with the documentation build. It includes no feature updates.

Installation

- Fix document build of v2.10.1 (3642)

Documentation

- Backport 3590: Replace `youtube.com` with `youtube-nocookie.com` (3633)


Other

- Bump up version to v2.10.1 (3635)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

contramundum53, toshihikoyanase

2.10.0

This is the release note of [v2.10.0](https://github.com/optuna/optuna/milestone/37?closed=1).

Highlights

New CLI Subcommand for Analyzing Studies

New subcommands `optuna trials`, `optuna best-trial`, and `optuna best-trials` have been introduced to Optuna's CLI for listing trials in studies with RDB storages. They allow direct interaction with trial data from the command line in various formats, including human-readable tables, JSON, and YAML. See the following examples:

Show all trials in a study.

```console
$ optuna trials --storage sqlite:///example.db --study-name example
+--------+---------------------+---------------------+---------------------+----------------+---------------------+----------+
| number | value               | datetime_start      | datetime_complete   | duration       | params              | state    |
+--------+---------------------+---------------------+---------------------+----------------+---------------------+----------+
|      0 | 0.6098421143538713  | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.026059 | {'x': 'A', 'y': 6}  | COMPLETE |
|      1 | 0.6584108953598753  | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.023447 | {'x': 'A', 'y': 10} | COMPLETE |
|      2 | 0.612883262548314   | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.021577 | {'x': 'C', 'y': 3}  | COMPLETE |
|      3 | 0.09326753798819143 | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.024183 | {'x': 'A', 'y': 0}  | COMPLETE |
|      4 | 0.7316749689191168  | 2021-10-01 14:36:46 | 2021-10-01 14:36:46 | 0:00:00.021994 | {'x': 'C', 'y': 4}  | COMPLETE |
+--------+---------------------+---------------------+---------------------+----------------+---------------------+----------+
```


Show the best trial as YAML.

```console
$ optuna best-trial --storage sqlite:///example.db --study-name example --format yaml
datetime_complete: '2021-10-01 14:36:46'
datetime_start: '2021-10-01 14:36:46'
```

2.9.1

This is the release note of [v2.9.1](https://github.com/optuna/optuna/milestone/38?closed=1).

Highlights

Ask-and-Tell CLI Fix

The storage URI and the study name are no longer logged by `optuna ask` and `optuna tell`. The former could contain sensitive information.

Enhancements

- Remove storage URI from `ask` and `tell` CLI subcommands (2838)

Other

- Bump to v2.9.1 (2839)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

himkt, hvy, not522

2.9.0

This is the release note of [v2.9.0](https://github.com/optuna/optuna/milestone/36?closed=1).

Help us create the next version of Optuna! Please take a few minutes to fill in this survey, and let us know how you use Optuna now and what improvements you'd like. https://forms.gle/TtJuuaqFqtjmbCP67

Highlights

Ask-and-Tell CLI: Optuna from the Command Line

The built-in CLI, which you can use to upgrade storages or check the installed version with `optuna --version`, now provides experimental subcommands for the [Ask-and-Tell interface](https://optuna.readthedocs.io/en/v2.9.0/tutorial/20_recipes/009_ask_and_tell.html#sphx-glr-tutorial-20-recipes-009-ask-and-tell-py). It is now possible to optimize using Optuna entirely from the CLI, without writing a single line of Python.

Ask with `optuna ask`

Ask for parameters using `optuna ask`, specifying the search space, storage, study name, sampler, and optimization direction. The parameters and the associated trial number can be output as either JSON or YAML.

The following example outputs the result as YAML and redirects it to a file.

```console
$ optuna ask --storage sqlite:///mystorage.db \
    --study-name mystudy \
    --sampler TPESampler \
    --sampler-kwargs '{"multivariate": true}' \
    --search-space '{"x": {"name": "UniformDistribution", "attributes": {"low": 0.0, "high": 1.0}}, "y": {"name": "CategoricalDistribution", "attributes": {"choices": ["foo", "bar"]}}}' \
    --direction minimize \
    --out yaml \
    > out.yaml
[I 2021-07-30 15:56:50,774] A new study created in RDB with name: mystudy
[I 2021-07-30 15:56:50,808] Asked trial 0 with parameters {'x': 0.21492964898919975, 'y': 'foo'} in study 'mystudy' and storage 'sqlite:///mystorage.db'.

$ cat out.yaml
trial:
  number: 0
  params:
    x: 0.21492964898919975
    y: foo
```


_Specify multiple whitespace-separated directions for multi-objective optimization._
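
The ask output can also be consumed from a script. Below is a minimal sketch that parses a payload with the same shape as the YAML shown above, assuming `--out json` emits that structure as a single JSON object; the literal string is illustrative, not captured output.

```python
import json

# Illustrative payload mirroring the YAML above, assuming `--out json`
# produces the same structure as one JSON object.
raw = '{"trial": {"number": 0, "params": {"x": 0.21492964898919975, "y": "foo"}}}'

payload = json.loads(raw)
trial_number = payload["trial"]["number"]  # pass this back to `optuna tell`
params = payload["trial"]["params"]        # feed these into your objective

print(trial_number, params["y"])  # -> 0 foo
```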

Tell with `optuna tell`

After computing the objective value based on the output of ask, you can report the result back using `optuna tell` and it will be stored in the study.

```console
$ optuna tell --storage sqlite:///mystorage.db \
    --study-name mystudy \
    --trial-number 0 \
    --values 1.0
[I 2021-07-30 16:01:13,039] Told trial 0 with values [1.0] and state TrialState.COMPLETE in study 'mystudy' and storage 'sqlite:///mystorage.db'.
```


_Specify multiple whitespace-separated values for multi-objective optimization._

See https://github.com/optuna/optuna/pull/2817 for details.

Weights & Biases Integration

`WeightsAndBiasesCallback` is a new study optimization callback that allows logging with [Weights & Biases](https://wandb.ai/site). It lets you use Weights & Biases' rich visualization features to analyze studies, complementing Optuna's own visualizations.

```python
import optuna
from optuna.integration.wandb import WeightsAndBiasesCallback


def objective(trial):
    x = trial.suggest_float("x", -10, 10)

    return (x - 2) ** 2


wandb_kwargs = {"project": "my-project"}
wandbc = WeightsAndBiasesCallback(wandb_kwargs=wandb_kwargs)
study = optuna.create_study(study_name="mystudy")
study.optimize(objective, n_trials=10, callbacks=[wandbc])
```


See https://github.com/optuna/optuna/pull/2781 for details.

TPE Sampler Refactorings

The Tree-structured Parzen Estimator (TPE) sampler has always been the default sampler in Optuna. Both its API and its internal code have grown over time to accommodate various needs, such as independent and joint parameter sampling (the `multivariate` parameter) and multi-objective optimization (the `MOTPESampler` sampler). In this release, the TPE sampler has been refactored and its code greatly reduced. The previously experimental multi-objective `MOTPESampler` has also been deprecated, and its capabilities are now absorbed by the standard `TPESampler`.

This change may break code that depends on fixed seeds with this sampler. The optimization algorithms themselves have not changed.

The following demonstrates how you can now use the `TPESampler` for multi-objective optimization.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)

    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2

    return v0, v1


sampler = optuna.samplers.TPESampler()  # `MOTPESampler` used to be required for multi-objective optimization.
study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=sampler,
)
study.optimize(objective, n_trials=100)
```


_Note that omitting the `sampler` argument or specifying `None` currently defaults to the `NSGAIISampler` for multi-objective studies instead of the `TPESampler`._

See https://github.com/optuna/optuna/pull/2618 for details.

Breaking Changes

- Unify the univariate and multivariate TPE (2618)

New Features

- MLFlow decorator for optimization function (2670, thanks lucafurrer!)
- Redis Heartbeat (2780, thanks Turakar!)
- Introduce Weights & Biases integration (2781, thanks xadrianzetx!)
- Function for failing zombie trials and invoke their callbacks (2811)
- Optuna ask and tell CLI options (2817)

Enhancements

- Unify `MOTPESampler` and `TPESampler` (2688)
- Changed interpolation type to make numeric range consistent with Plotly (2712, thanks 01-vyom!)
- Add the warning if an intermediate value is already reported at the same step (2782, thanks TakuyaInoue-github!)
- Prioritize grids that are not yet running in `GridSampler` (2783)
- Fix `warn_independent_sampling` in `TPESampler` (2786)
- Avoid applying `constraint_fn` to non-`COMPLETE` trials in NSGAII-sampler (2791)
- Speed up `TPESampler` (2816)
- Enable CLI helps for subcommands (2823)

Bug Fixes

- Fix `AllenNLPExecutor` reproducibility (2717, thanks MagiaSN!)
- Use `repr` and `eval` to restore pruner parameters in AllenNLP integration (2731)
- Fix `Nan` cast bug in `TPESampler` (2739)
- Fix `infer_relative_search_space` of TPE with the single point distributions (2749)

Installation

- Avoid latest numpy 1.21 (2766)
- Fix numpy 1.21 related mypy errors (2767)

Documentation

- Add how to suggest proportion to FAQ (2718)
- Explain how to add a user's own logging callback function (2730)
- Add `copy_study` to the docs (2737)
- Fix link to kurobako benchmark page (2748)
- Improve docs of constant liar (2785)
- Fix the document of `RetryFailedTrialCallback.retried_trial_number` (2789)
- Match the case of `ID` (2798, thanks belldandyxtq!)
- Rephrase `RDBStorage` `RuntimeError` description (2802, thanks belldandyxtq!)

Examples

- Add remaining examples to CI tests (https://github.com/optuna/optuna-examples/pull/26)
- Use hydra 1.1.0 syntax (https://github.com/optuna/optuna-examples/pull/28)
- Replace monitor value with accuracy (https://github.com/optuna/optuna-examples/pull/32)

Tests

- Count the number of calls of the wrapped method in the test of `MOTPEMultiObjectiveSampler` (2666)
- Add specific test cases for `visualization.matplotlib.plot_intermediate_values` (2754, thanks asquare100!)
- Added unit tests for optimization history of matplotlib tests (2761, thanks 01-vyom!)
- Changed unit tests for pareto front of matplotlib tests (2763, thanks 01-vyom!)
- Added unit tests for slice of matplotlib tests (2764, thanks 01-vyom!)
- Added unit tests for param importances of matplotlib tests (2774, thanks 01-vyom!)
- Changed unit tests for parallel coordinate of matplotlib tests (2778, thanks 01-vyom!)
- Use more specific assert in `tests/visualization_tests/matplotlib/test_intermediate_plot.py` (2803)
- Added unit tests for contour of matplotlib tests (2806, thanks 01-vyom!)

Code Fixes

- Create `study` directory (2721)
- Dissect allennlp integration in submodules based on roles (2745)
- Fix deprecated version of `MOTPESampler` (2770)

Continuous Integration

- Daily CI of `Checks` (2760)
- Use default resolver in CI's pip installation (2779)

Other

- Bump up version to v2.9.0dev (2723)
- Add an optional section to ask reproducible codes (2799)
- Add survey news to `README.md` (2801)
- Add python code to issue templates for making reporting runtime information easy (2805)
- Bump to v2.9.0 (2828)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

ytsmiling, harupy, asquare100, hvy, c-bata, nzw0301, lucafurrer, belldandyxtq, not522, TakuyaInoue-github, 01-vyom, himkt, Crissman, toshihikoyanase, sile, vanpelt, HideakiImamura, MagiaSN, keisuke-umezawa, Turakar, xadrianzetx

2.8.0

This is the release note of [v2.8.0](https://github.com/optuna/optuna/milestone/35?closed=1).

New Examples Repository

The number of Optuna examples has grown as the number of integrations have increased, and we’ve moved them to their own repository: [optuna/optuna-examples](https://github.com/optuna/optuna-examples/).

Highlights

TPE Sampler Improvements

Constant Liar for Distributed Optimization

In distributed environments, the TPE sampler may sample many points in a small neighborhood because it does not know that other trials running in parallel are sampling nearby. To avoid this issue, we've implemented the Constant Liar (CL) heuristic, which returns a poor value for trials that have started but are not yet complete, reducing duplicated search effort.

```python
study = optuna.create_study(sampler=optuna.samplers.TPESampler(constant_liar=True))
```


The following history plots demonstrate how optimization can be improved using this feature. Ten parallel workers simultaneously optimize the same function, which takes about one second to compute. The first plot uses `constant_liar=False` and the second `constant_liar=True`. With Constant Liar, the sampler does a better job of assigning different parameter configurations to different trials and converges faster.

![tpe_without_constant_liar_edit_v2](https://user-images.githubusercontent.com/5983694/120973014-5981a080-c7a9-11eb-9d01-cc5303a8db21.png)

See 2664 for details.
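
The heuristic can be sketched without Optuna at all: when building its model of the objective, the sampler pretends each running trial has already finished with a pessimistic value. The function and trial representation below are illustrative, not Optuna's internals, and minimization is assumed.

```python
# Minimal sketch of the Constant Liar idea (minimization assumed):
# pretend running trials finished with the worst value seen so far,
# so the model steers new samples away from regions already being explored.
def values_with_constant_liar(trials):
    finished = [t["value"] for t in trials if t["state"] == "COMPLETE"]
    lie = max(finished)  # pessimistic stand-in value for running trials
    return [t["value"] if t["state"] == "COMPLETE" else lie for t in trials]


trials = [
    {"state": "COMPLETE", "value": 0.3},
    {"state": "RUNNING", "value": None},  # another worker is evaluating this one
    {"state": "COMPLETE", "value": 0.9},
]
print(values_with_constant_liar(trials))  # -> [0.3, 0.9, 0.9]
```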

Tree-structured Search Space Support

The TPE sampler with `multivariate=True` now supports tree-structured search spaces. Previously, if the user split the search space with an if-else statement, as shown below, the TPE sampler with `multivariate=True` would fall back to random sampling. Now, if you set both `multivariate=True` and `group=True`, the TPE algorithm is applied to each partitioned search space to perform efficient sampling.

See 2526 for more details.

```python
import sklearn.ensemble
import sklearn.svm


def objective(trial):
    classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])

    if classifier_name == "SVC":
        # If `multivariate=True` and `group=True`, the following 2 parameters are sampled jointly by TPE.
        svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
        svc_kernel = trial.suggest_categorical("kernel", ["linear", "rbf", "sigmoid"])

        classifier_obj = sklearn.svm.SVC(C=svc_c, kernel=svc_kernel)
    else:
        # If `multivariate=True` and `group=True`, the following 3 parameters are sampled jointly by TPE.
        rf_n_estimators = trial.suggest_int("rf_n_estimators", 1, 20)
        rf_criterion = trial.suggest_categorical("rf_criterion", ["gini", "entropy"])
        rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)

        classifier_obj = sklearn.ensemble.RandomForestClassifier(
            n_estimators=rf_n_estimators, criterion=rf_criterion, max_depth=rf_max_depth
        )

    ...


sampler = optuna.samplers.TPESampler(multivariate=True, group=True)
```

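Conceptually, `group=True` partitions past trials by the set of parameters they actually suggested and fits a separate joint model to each partition. A rough illustration of that partitioning step (illustrative code, not Optuna's internals):

```python
from collections import defaultdict


# Group past trials by the set of parameter names they suggested.
# Each group would then get its own joint (multivariate) model.
def group_by_search_space(param_dicts):
    groups = defaultdict(list)
    for params in param_dicts:
        groups[frozenset(params)].append(params)
    return dict(groups)


history = [
    {"classifier": "SVC", "svc_c": 1.5, "kernel": "rbf"},
    {"classifier": "RandomForest", "rf_n_estimators": 7, "rf_criterion": "gini", "rf_max_depth": 4},
    {"classifier": "SVC", "svc_c": 0.2, "kernel": "linear"},
]

groups = group_by_search_space(history)
print(len(groups))  # -> 2, one partition per branch of the if-else above
```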

Copying Studies

Studies can now be copied across storages. The trial history as well as `Study.user_attrs` and `Study.system_attrs` are preserved.

For instance, this allows dumping a study in a MySQL `RDBStorage` into an SQLite file. Serialized this way, the study can be shared with other users who are unable to access the original storage.

```python
study = optuna.create_study(
    study_name="my-study", storage="mysql+pymysql://root@localhost/optuna"
)
study.optimize(..., n_trials=100)

# Creates a copy of the study "my-study" in a MySQL `RDBStorage` to a local file named `optuna.db`.
optuna.copy_study(
    from_study_name="my-study",
    from_storage="mysql+pymysql://root@localhost/optuna",
    to_storage="sqlite:///optuna.db",
)

study = optuna.load_study(study_name="my-study", storage="sqlite:///optuna.db")
assert len(study.trials) >= 100
```


See 2607 for details.

Callbacks

`optuna.storages.RetryFailedTrialCallback` Added

Used as a callback in `RDBStorage`, this allows previously preempted or otherwise aborted trials, detected via a failed heartbeat, to be re-run.

```python
storage = optuna.storages.RDBStorage(
    url="sqlite:///:memory:",
    heartbeat_interval=60,
    grace_period=120,
    failed_trial_callback=optuna.storages.RetryFailedTrialCallback(max_retry=3),
)
study = optuna.create_study(storage=storage)
```


See 2694 for details.
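
The detection mechanism itself is simple to sketch: a running trial whose last recorded heartbeat is older than the grace period is declared failed, and the callback re-enqueues it until the retry budget is spent. The function and trial records below are illustrative, not the `RDBStorage` implementation.

```python
# Illustrative heartbeat check: a trial is considered dead once its last
# heartbeat is older than the grace period; dead trials are retried until
# `max_retry` is exhausted.
def handle_heartbeats(trials, now, grace_period, max_retry):
    retried = []
    for t in trials:
        if t["state"] == "RUNNING" and now - t["last_heartbeat"] > grace_period:
            t["state"] = "FAIL"
            if t["retries"] < max_retry:
                t["retries"] += 1
                retried.append(t["number"])  # would be re-enqueued for execution
    return retried


trials = [
    {"number": 0, "state": "RUNNING", "last_heartbeat": 100.0, "retries": 0},
    {"number": 1, "state": "RUNNING", "last_heartbeat": 290.0, "retries": 0},
]
print(handle_heartbeats(trials, now=300.0, grace_period=120.0, max_retry=3))  # -> [0]
```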

`optuna.study.MaxTrialsCallback` Added

Used as a callback in `study.optimize`, this allows setting a maximum number of trials in a particular state, such as a maximum number of failed trials, before stopping the optimization.

```python
study.optimize(
    objective,
    callbacks=[optuna.study.MaxTrialsCallback(10, states=(optuna.trial.TrialState.COMPLETE,))],
)
```

See 2636 for details.
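
The logic amounts to counting trials in the watched states after every trial and stopping the study when the threshold is reached. A small stand-in version, exercised against a mock study object rather than the real class:

```python
from types import SimpleNamespace


class MaxTrialsCallbackSketch:
    """Stand-in for the counting logic behind `MaxTrialsCallback` (illustrative)."""

    def __init__(self, n_trials, states):
        self.n_trials = n_trials
        self.states = set(states)

    def __call__(self, study, trial):
        # Count trials in the watched states; stop once the threshold is hit.
        done = sum(1 for t in study.trials if t.state in self.states)
        if done >= self.n_trials:
            study.stop()


# Tiny mock study to exercise the callback outside Optuna.
study = SimpleNamespace(trials=[], stopped=False)
study.stop = lambda: setattr(study, "stopped", True)

callback = MaxTrialsCallbackSketch(2, states=("COMPLETE",))
for state in ["COMPLETE", "FAIL", "COMPLETE"]:
    trial = SimpleNamespace(state=state)
    study.trials.append(trial)
    callback(study, trial)

print(study.stopped)  # -> True, triggered by the second COMPLETE trial
```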

Breaking Changes

- Allow `None` as `study_name` when there is only a single study in `load_study` (2608)
- Relax `GridSampler` allowing not-contained parameters during `suggest_*` (2663)

New Features

- Make `LightGBMTuner` and `LightGBMTunerCV` reproducible (2431, thanks tetsuoh0103!)
- Add `visualization.matplotlib.plot_pareto_front` (2450, thanks tohmae!)
- Support a group decomposed search space and apply it to TPE (2526)
- Add `__str__` for samplers (2539)
- Add `n_min_trials` argument for `PercentilePruner` and `MedianPruner` (2556)
- Copy study (2607)
- Allow `None` as `study_name` when there is only a single study in `load_study` (2608)
- Add `MaxTrialsCallback` class to enable stopping after fixed number of trials (2612)
- Implement `PatientPruner` (2636)
- Support multi-objective optimization in CLI (`optuna create-study`) (2640)
- Constant liar for `TPESampler` (2664)
- Add automatic retry callback (2694)
- Sorts categorical values on axis that contains only numerical values in `visualization.matplotlib.plot_slice` (2709, thanks Muktan!)

Enhancements

- `PyTorchLightningPruningCallback` to warn when an evaluation metric does not exist (2157, thanks bigbird555!)
- Pareto front visualization to visualize study progress with color scales (2563)
- Sort categorical values on axis that contains only numerical values in `visualization.plot_contour` (2569)
- Improve `param_importances` (2576)
- Sort categorical values on axis that contains only numerical values in `visualization.matplotlib.plot_contour` (2593)
- Show legend of `optuna.visualization.matplotlib.plot_edf` (2603)
- Show legend of `optuna.visualization.matplotlib.plot_intermediate_values` (2606)
- Make `MOTPEMultiObjectiveSampler` a thin wrapper for `MOTPESampler` (2615)
- Do not wait for next heartbeat on study completion (2686, thanks Turakar!)
- Change colour scale of contour plot by `matplotlib` for consistency with plotly results (2711, thanks 01-vyom!)

Bug Fixes

- Add type conversion for reference point and solution set (2584)
- Fix contour plot with multi-objective study and `target` being specified (2589)
- Fix distribution's `_contains` (2652)
- Read environment variables in `dump_best_config` (2681)
- Update version info entry on RDB storage upgrade (2687)
- Fix results not reproducible when running `AllenNLPExecutor` multiple t… (Backport of 2717) (2728)

Installation

- Replace `sklearn` constraint (2634)
- Add constraint of Sphinx version (2657)
- Add `click==7.1.2` to GitHub workflows to solve AllenNLP import error (2665)
- Avoid `tensorflow` 2.5.0 (2674)
- Remove `example` from `setup.py` (2676)

Documentation

- Add example to `optuna.logging.disable_propagation` (2477, thanks jeromepatel!)
- Add documentation for hyperparameter importance target parameter (2551)
- Remove the news section in `README.md` (2586)
- Documentation updates to `CmaEsSampler` (2591, thanks turian!)
- Rename `ray-joblib.py` to snakecase with underscores (2594)
- Replace `If` with `if` in a sentence (2602)
- Use `CmaEsSampler` instead of `TPESampler` in the batch optimization example (2610)
- README fixes (2617, thanks Scitator!)
- Remove wrong returns description in docstring (2619)
- Improve document on `BoTorchSampler` page (2631)
- Add the missing colon (2661)
- Add missing parameter `WAITING` details in docstring (2683, thanks jeromepatel!)
- Update URLs to `optuna-examples` (2684)
- Fix indents in the ask-and-tell tutorial (2690)
- Join sampler examples in `README.md` (2692)
- Fix typo in the tutorial (2704)
- Update command for installing auto-formatters (2710, thanks 01-vyom!)
- Some edits for `CONTRIBUTING.md` (2719)

Examples

- Split GitHub Actions workflows (https://github.com/optuna/optuna-examples/pull/1)
- Cherry pick 2611 of `optuna/optuna` (https://github.com/optuna/optuna-examples/pull/2)
- Add checks workflow (https://github.com/optuna/optuna-examples/pull/5)
- Add `MaxTrialsCallback` class to enable stopping after fixed number of trials (https://github.com/optuna/optuna-examples/pull/9)
- Update `README.md` (https://github.com/optuna/optuna-examples/pull/10)
- Add an example of warm starting CMA-ES (https://github.com/optuna/optuna-examples/pull/11, thanks nmasahiro!)
- Replace old links to example files (https://github.com/optuna/optuna-examples/pull/12)
- Avoid `tensorflow` 2.5.0 (https://github.com/optuna/optuna-examples/pull/13)
- Avoid `tensorflow` 2.5 (https://github.com/optuna/optuna-examples/pull/15)
- Test `multi_objective` in CI (https://github.com/optuna/optuna-examples/pull/16)
- Use only one GPU for PyTorch Lightning example by default (https://github.com/optuna/optuna-examples/pull/17)
- Remove example of CatBoost in pruning section (https://github.com/optuna/optuna-examples/pull/18, 2702)
- Add issues and pull request templates (https://github.com/optuna/optuna-examples/pull/20)
- Add `CONTRIBUTING.md` file (https://github.com/optuna/optuna-examples/pull/21)
- Change PR approvers from two to one (https://github.com/optuna/optuna-examples/pull/22)
- Improved search space XGBoost (2346, thanks jeromepatel!)
- Remove `n_jobs` for `study.optimize` in `examples/` (2588, thanks jeromepatel!)
- Using the "log" key is deprecated in `pytorch_lightning` (2611, thanks sushi30!)
- Move examples to a new repository (2655)
- Remove remaining examples (2675)
- Follow-up of https://github.com/optuna/optuna-examples/pull/11 in `optuna-examples` (2689)

Tests

- Remove assertions for supported dimensions from `test_plot_pareto_front_unsupported_dimensions` (2578)
- Update a test function of `matplotlib.plot_pareto_front` for consistency (2583)
- Add `deterministic` parameter to make LightGBM training reproducible (2623)
- Add `force_col_wise` parameter of LightGBM in test cases of `LightGBMTuner` and `LightGBMTunerCV` (2630, thanks tetsuoh0103!)
- Remove `CudaCallback` from the fastai test (2641)
- Add test cases in `optuna/visualization/matplotlib/edf.py` (2642)
- Refactor a unittest in `test_median.py` (2644)
- Refactor `pruners_test` (2691, thanks tsumli!)

Code Fixes

- Remove redundant lines in CI settings of `examples` (2554)
- Remove the unused argument of functions in `matplotlib.contour` (2571)
- Fix axis labels of `optuna.visualization.matplotlib.plot_pareto_front` when `axis_order` is specified (2577)
- Remove list casts (2601)
- Remove `_get_distribution` from `visualization/matplotlib/_param_importances.py` (2604)
- Fix grammatical error in failure message (2609, thanks agarwalrounak!)
- Separate `MOTPESampler` from `TPESampler` (2616)
- Simplify `add_distributions` in `_SearchSpaceGroup` (2651)
- Replace old example URLs in `optuna.integrations` (2700)

Continuous Integration

- Supporting Python 3.9 with integration modules and optional dependencies (2530, thanks 0x41head!)
- Upgrade pip in PyPI and Test PyPI workflow (2598)
- Fix PyPI publish workflow (2624)
- Introduce speed benchmarks using `asv` (2673)

Other

- Bump `master` version to `2.8.0dev` (2562)
- Upload to TestPyPI at the time of release as well (2573)
- Install blackdoc in `formats.sh` (2637)
- Use `command` to check the existence of the libraries to avoid partially matching (2653)
- Add an example section to the README (2667)
- Fix formatting in contribution guidelines (2668)
- Update `CONTRIBUTING.md` with `optuna-examples` (2669)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

toshihikoyanase, himkt, Scitator, tohmae, crcrpar, c-bata, 01-vyom, sushi30, tsumli, not522, tetsuoh0103, jeromepatel, bigbird555, hvy, g-votte, nzw0301, turian, nmasahiro, Crissman, sile, agarwalrounak, Muktan, Turakar, HideakiImamura, keisuke-umezawa, 0x41head

2.7.0

This is the release note for [v2.7.0](https://github.com/optuna/optuna/milestone/33?closed=1).

Highlights

New `optuna-dashboard` Repository

A new dashboard, [`optuna-dashboard`](https://github.com/optuna/optuna-dashboard), is being developed in a separate repository under the Optuna organization. Install it with `pip install optuna-dashboard` and run it with `optuna-dashboard $STORAGE_URL`. The previous `optuna dashboard` command is now deprecated.

Deprecate `n_jobs` Argument of `Study.optimize`

The GIL has been an issue when using the `n_jobs` argument for multi-threaded optimization. We have decided to deprecate this option in favor of the more stable process-level parallelization, detailed in the [tutorial](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/004_distributed.html#sphx-glr-tutorial-10-key-features-004-distributed-py). Users who have been parallelizing at the thread level with the `n_jobs` argument are encouraged to refer to the tutorial for process-level parallelization.

If the objective function is not affected by the GIL, thread-level parallelism may still be useful. You can achieve thread-level parallelization as follows:

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=5) as executor:
    for _ in range(5):
        executor.submit(study.optimize, objective, 100)
```


New Tutorial and Examples

Tutorial pages on the ask-and-tell interface (2422) and the re-use of `best_trial` (2427) have been added, along with an example demonstrating parallel optimization using Ray (2298) and an example explaining how to stop optimization based on the number of completed trials rather than the total number of trials (2449).

Improved Code Quality

The code quality was improved in terms of bug fixes, third party library support, and platform support.

For instance, the bugs on warm starting CMA-ES and `visualization.matplotlib.plot_optimization_history` were resolved by 2501 and 2532, respectively.

Third-party libraries such as PyTorch, fastai, and AllenNLP were updated, and the corresponding integration modules and examples have been updated for the new versions. See 2442, 2550, and 2528 for details.

From this version, we are expanding platform support. Previously, changes were tested only on Linux. Now, changes merged into the master branch are tested on macOS as well (2461).

Breaking Changes

- Deprecate dashboard (2559)
- Deprecate `n_jobs` in `Study.optimize` (2560)

New Features

- Support object representation of `StudyDirection` for `create_study` arguments (2516)

Enhancements

- Change caching implementation of MOTPE (2406, thanks y0z!)
- Fix to replace `numpy.append` (2419, thanks nyanhi!)
- Modify `after_trial` for `NSGAIISampler` (2436, thanks jeromepatel!)
- Print a URL of a related release note in the warning message (2496)
- Add log-linear algorithm for 2d Pareto front (2503, thanks parsiad!)
- Concatenate the argument text after the deprecation warning (2558)

Bug Fixes

- Use 2.0 style delete API of SQLAlchemy (2487)
- Fix Warm Starting CMA-ES with a maximize direction (2501)
- Fix `visualization.matplotlib.plot_optimization_history` for multi-objective (2532)

Installation

- Bump `torch` to 1.8.0 (2442)
- Remove Cython from `install_requires` (2466)
- Fix Cython installation for Python 3.9 (2474)
- Avoid catalyst 21.3 (2480, thanks crcrpar!)

Documentation

- Add ask and tell interface tutorial (2422)
- Add tutorial for re-use of the `best_trial` (2427)
- Add explanation for get_storage in the API reference (2430)
- Follow-up of the user-defined pruner tutorial (2446)
- Add a new example `max_trial_callback` to `optuna/examples` (2449, thanks jeromepatel!)
- Standardize on 'hyperparameter' usage (2460)
- Replace MNIST with Fashion MNIST in multi-objective optimization tutorial (2468)
- Fix links on `SuccessiveHalvingPruner` page (2489)
- Swap the order of `load_if_exists` and `directions` for consistency (2491)
- Clarify `n_jobs` for `OptunaSearchCV` (2545)
- Mention the paper is in Japanese (2547, thanks crcrpar!)
- Fix typo of the paper's author name (2552)

Examples

- Add an example of `Ray` with `joblib` backend (2298)
- Added RL and Multi-Objective examples to `examples/README.md` (2432, thanks jeromepatel!)
- Replace `sh` with `bash` in README of kubernetes examples (2440)
- Apply 2438 to pytorch examples (2453, thanks crcrpar!)
- More Examples Folders after 2302 (2458, thanks crcrpar!)
- Apply `urllib` patch for MNIST download (2459, thanks crcrpar!)
- Update `Dockerfile` of MLflow Kubernetes examples (2472, thanks 0x41head!)
- Replace Optuna's Catalyst pruning callback with Catalyst's Optuna pruning callback (2485, thanks crcrpar!)
- Use whitespace tokenizer instead of spacy tokenizer (2494)
- Use Fashion MNIST in example (2505, thanks crcrpar!)
- Update `pytorch_lightning_distributed.py` to remove MNIST and PyTorch Lightning errors (2514, thanks 0x41head!)
- Use `OptunaPruningCallback` in `catalyst_simple.py` (2546, thanks crcrpar!)
- Support fastai 2.3.0 (2550)

Tests

- Add `MOTPESampler` in `parametrize_multi_objective_sampler` (2448)
- Extract test cases regarding Pareto front to `test_multi_objective.py` (2525)

Code Fixes

- Fix `mypy` errors produced by `numpy==1.20.0` (2300, thanks 0x41head!)
- Simplify the code to find best values (2394)
- Use `_SearchSpaceTransform` in `RandomSampler` (2410, thanks sfujiwara!)
- Set the default value of `state` of `create_trial` as `COMPLETE` (2429)

Continuous Integration

- Run TensorFlow related examples on Python3.8 (2368, thanks crcrpar!)
- Use legacy resolver in CI's pip installation (2434, thanks crcrpar!)
- Run tests and integration tests on Mac & Python3.7 (2461, thanks crcrpar!)
- Run Dask ML example on Python3.8 (2499, thanks crcrpar!)
- Install OpenBLAS for mxnet1.8.0 (2508, thanks crcrpar!)
- Add ray to requirements (2519, thanks crcrpar!)
- Upgrade AllenNLP to `v2.2.0` (2528)
- Add Coverage for ChainerMN in codecov (2535, thanks jeromepatel!)
- Skip fastai2.3 tentatively (2548, thanks crcrpar!)

Other

- Add `-f` option to `make clean` command idempotent (2439)
- Bump `master` version to `2.7.0dev` (2444)
- Document how to write a new tutorial in `CONTRIBUTING.md` (2463, thanks crcrpar!)
- Bump up version number to 2.7.0 (2561)

Thanks to All the Contributors!

This release was made possible by the authors and everyone who participated in the reviews and discussions.


0x41head, AmeerHajAli, Crissman, HideakiImamura, c-bata, crcrpar, g-votte, himkt, hvy, jeromepatel, keisuke-umezawa, not522, nyanhi, nzw0301, parsiad, sfujiwara, sile, toshihikoyanase, y0z
