Optuna


2.6.0

This is the release note of [v2.6.0](https://github.com/optuna/optuna/milestone/32?closed=1).

Highlights

Warm Starting CMA-ES and sep-CMA-ES Support

Two new CMA-ES variants are available. Warm starting CMA-ES enables transferring prior knowledge from similar tasks: CMA-ES can be initialized based on the existing results of similar tasks. sep-CMA-ES is an algorithm that constrains the covariance matrix to be diagonal and is suited to separable objective functions. See #2307 and #1951 for more details.

Example of Warm starting CMA-ES:

```python
import optuna
from optuna.samplers import CmaEsSampler

study = optuna.load_study(storage="...", study_name="existing-study")

study.sampler = CmaEsSampler(source_trials=study.trials)
study.optimize(objective, n_trials=100)
```


![result](https://user-images.githubusercontent.com/38826298/110283760-f0608480-8023-11eb-8ed8-ab014846ade0.png)

Example of sep-CMA-ES:

```python
import optuna
from optuna.samplers import CmaEsSampler

study = optuna.create_study(sampler=CmaEsSampler(use_separable_cma=True))
study.optimize(objective, n_trials=100)
```


![sep-CMA-ES result](https://user-images.githubusercontent.com/38826298/110283792-fbb3b000-8023-11eb-9560-3a85351dfa0d.png)

PyTorch Distributed Data Parallel

Hyperparameter optimization for distributed neural-network training using [PyTorch Distributed Data Parallel](https://pytorch.org/docs/stable/distributed.html) is supported. A new integration module, `TorchDistributedTrial`, synchronizes the hyperparameters among all nodes. See #2303 for further details.

Example:

```python
import optuna

def objective(trial):
    distributed_trial = optuna.integration.TorchDistributedTrial(trial)
    lr = distributed_trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    ...
```



`RDBStorage` Improvements

The `RDBStorage` now allows longer user and system attributes, as well as choices for categorical distributions (e.g. choices spanning thousands of bytes/characters) to be persisted. Corresponding column data types of the underlying SQL tables have been changed from `VARCHAR` to `TEXT`. If you want to upgrade from an older version of Optuna and keep using the same storage, please migrate your tables as follows. Please make sure to create any backups before the migration and note that databases that don’t support `TEXT` will not work with this release.

```console
# Alter table columns from `VARCHAR` to `TEXT` to allow storing larger data.
optuna storage upgrade --storage <storage URL>
```


For more details, see #2395.

Heartbeat Improvements

The heartbeat feature was introduced in v2.5.0 to automatically mark stale trials as failed. It is now possible to not only fail the trials but also execute user-specified callback functions to process the failed trials. See #2347 for more details.

Example:

```python
import optuna

def objective(trial):
    ...  # Very time-consuming computation.

# Adding a failed trial to the trial queue.
def failed_trial_callback(study, trial):
    study.add_trial(
        optuna.create_trial(
            state=optuna.trial.TrialState.WAITING,
            params=trial.params,
            distributions=trial.distributions,
            user_attrs=trial.user_attrs,
            system_attrs=trial.system_attrs,
        )
    )

storage = optuna.storages.RDBStorage(
    url=...,
    heartbeat_interval=60,
    grace_period=120,
    failed_trial_callback=failed_trial_callback,
)
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
```



Pre-defined Search Space with Ask-and-tell Interface

The ask-and-tell interface now allows specifying pre-defined search spaces through the new `fixed_distributions` argument. This option keeps the code short when the search space is known beforehand, replacing explicit calls to `Trial.suggest_…`. See #2271 for more details.

```python
import optuna

study = optuna.create_study()

# For example, the distributions are previously defined when using `create_trial`.
distributions = {
    "optimizer": optuna.distributions.CategoricalDistribution(["adam", "sgd"]),
    "lr": optuna.distributions.LogUniformDistribution(0.0001, 0.1),
}
trial = optuna.trial.create_trial(
    params={"optimizer": "adam", "lr": 0.0001},
    distributions=distributions,
    value=0.5,
)
study.add_trial(trial)

# You can pass the distributions previously defined.
trial = study.ask(fixed_distributions=distributions)

# `optimizer` and `lr` are already suggested and accessible with `trial.params`.
print(trial.params)
```


Breaking Changes

`RDBStorage` data type updates

Databases must be migrated for storages that were created with earlier versions of Optuna. Please refer to the highlights above.

For more details, see #2395.

`datetime_start` of enqueued trials

The `datetime_start` property of `Trial`, `FrozenTrial`, and `FixedTrial` shows when a trial was started. This property may now be `None`. For trials enqueued with `Study.enqueue_trial`, the timestamp used to be set at the time of enqueue. Now, the timestamp is initially `None` and is set when the trial is popped from the queue to be run. This also affects `StudySummary.datetime_start`, which may be `None` when trials have been enqueued but not yet popped.

For more details, see #2236.

`joblib` internals removed

`joblib` was partially supported as a backend for parallel optimization via the `n_jobs` parameter of `Study.optimize`. This support has now been removed and the internals have been replaced with `concurrent.futures`.

For more details, see #2269.

AllenNLP v2 support

Optuna now officially supports AllenNLP v2. We also dropped the AllenNLP v0 support and the pruning support for AllenNLP v1. If you want to use AllenNLP v0 or v1 with Optuna, please install Optuna v2.5.0.

For more details, see #2412.

New Features

- Support sep-CMA-ES algorithm (1951)
- Add an option to the `Study.ask` method that allows define-and-run parameter suggestion (2271)
- Add integration module for PyTorch Distributed Data Parallel (2303)
- Support Warm Starting CMA-ES (2307)
- Add callback argument for heartbeat functionality (2347)
- Support `IntLogUniformDistribution` for TensorBoard (2362, thanks nzw0301!)

Enhancements

- Fix the wrong way to set `datetime_start` (clean) (2236, thanks chenghuzi!)
- Multi-objective error messages from `Study` to suggest solutions (2251)
- Adds missing `LightGBMTuner` metrics for the case of higher is better (2267, thanks mavillan!)
- Color Inversion to make contour plots more visually intuitive (2291, thanks 0x41head!)
- Close sessions at the end of with-clause in `Storage` (2345)
- Improve "plot_pareto_front" (2355, thanks 0x41head!)
- Implement `after_trial` method in `CmaEsSampler` (2359, thanks jeromepatel!)
- Convert `low` and `high` to float explicitly in distributions (2360)
- Add `after_trial` for `PyCmaSampler` (2365, thanks jeromepatel!)
- Implement `after_trial` for `BoTorchSampler` and `SkoptSampler` (2372, thanks jeromepatel!)
- Implement `after_trial` for `TPESampler` (2376, thanks jeromepatel!)
- Support `BoTorch >= 0.4.0` (2386, thanks nzw0301!)
- Mitigate string-length limitation of `RDBStorage` (2395)
- Support AllenNLP v2 (2412)
- Implement `after_trial` for `MOTPESampler` (2425, thanks jeromepatel!)

Bug Fixes

- Add test and fix for relative sampling failure in multivariate TPE (2055, thanks alexrobomind!)
- Fix `optuna.visualization.plot_contour` of subplot case with categorical axes (2297, thanks nzw0301!)
- Only fail trials associated with the current study (2330)
- Fix TensorBoard integration for `suggest_float` (2335, thanks nzw0301!)
- Add type conversions for upper/lower whose values are integers (2343)
- Fix improper stopping with the combination of `GridSampler` and `HyperbandPruner` (2353)
- Fix `matplotlib.plot_parallel_coordinate` with only one suggested parameter (2354, thanks nzw0301!)
- Create `model_dir` by `_LightGBMBaseTuner` (2366, thanks nyanhi!)
- Fix assertion in cached storage for state update (2370)
- Use `low` in `_transform_from_uniform` for TPE sampler (2392, thanks nzw0301!)
- Remove indices from `optuna.visualization.plot_parallel_coordinate` with categorical values (2401, thanks nzw0301!)

Installation

- `mypy` hotfix avoiding latest NumPy 1.20.0 (2292)
- Remove `jax` from `setup.py` (2308, thanks nzw0301!)
- Install `torch` from PyPI for ReadTheDocs (2361)
- Pin `botorch` version (2379)

Documentation

- Fix broken links in `README.md` (2268)
- Provide `docs/source/tutorial` for faster local documentation build (2277)
- Remove specification of `n_trials` from example of `GridSampler` (2280)
- Fix typos and errors in document (2281, thanks belldandyxtq!)
- Add tutorial of multi-objective optimization of neural network with PyTorch (2305)
- Add explanation for local verification (2309)
- Add `sphinx.ext.imgconverter` extension (2323, thanks KoyamaSohei!)
- Include `high` in the documentation of `UniformDistribution` and `LogUniformDistribution` (2348)
- Fix typo; Replace dimentional with dimensional (2390, thanks nzw0301!)
- Fix outdated docstring of `TFKerasPruningCallback` (2399, thanks sfujiwara!)
- Call `fig.show()` in visualization code examples (2403, thanks harupy!)
- Explain the backend of parallelisation (2428, thanks nzw0301!)
- Navigate with left/right arrow keys in the document (2433, thanks ydcjeff!)
- Hotfix for MNIST download in tutorial (2438)

Examples

- Provide a user-defined pruner example (2140, thanks tktran!)
- Add Hydra example (2290, thanks nzw0301!)
- Use `trainer.callback_metrics` in the Pytorch Lightning example (2294, thanks TezRomacH!)
- Example folders (2302)
- Update PL example with typing and `DataModule` (2332, thanks TezRomacH!)
- Remove unsupported argument from PyTorch Lightning example (2357)
- Update `examples/kubernetes/mlflow/check_study.sh` to match whole words (2363, thanks twolffpiggott!)
- Add PyTorch checkpoint example using `failed_trial_callback` (2373)
- Update `Dockerfile` of Kubernetes simple example (2375, thanks 0x41head!)

Tests

- Refactor test of `GridSampler` (2285)
- Replace `parametrize_storage` with `StorageSupplier` (2404, thanks nzw0301!)

Code Fixes

- Replace `joblib` with `concurrent.futures` for parallel optimization (2269)
- Make trials stale only when succeeded to fail (2284)
- Apply code-fix to `LightGBMTuner` (Follow-up 2267) (2299)
- Inherit `PyTorchLightningPruningCallback` from Callback (2326, thanks TezRomacH!)
- Consistently use `suggest_float` (2344)
- Fix typo (2352, thanks nzw0301!)
- Increase API request limit for stale bot (2369)
- Fix typo; replace `contraints` with `constraints` (2378, thanks nzw0301!)
- Fix typo (2383, thanks nzw0301!)
- Update examples for `study.get_trials` for states filtering (2393, thanks jeromepatel!)
- Fix - remove arguments of python2 `super().__init__` (2402, thanks nyanhi!)

Continuous Integration

- Turn off RDB tests on circleci (2255)
- Allow allennlp in py3.8 integration tests (2367)
- Color pytest logs (2400, thanks harupy!)
- Remove `-f` option from doctest pip installation (2418)

Other

- Bump up version number to `v2.6.0.dev` (2283)
- Enable automatic closing of stale issues and pull requests by github actions (2287)
- Add setup section to `CONTRIBUTING.md` (2342)
- Fix the local `mypy` error on Pytorch Lightning integration (2349)
- Update the link to the `botorch` example (2377, thanks nzw0301!)
- Remove `-f` option from documentation installation (2407)

Thanks to All the Contributors!

This release was made possible by authors, and everyone who participated in reviews and discussions.

0x41head, Crissman, HideakiImamura, KoyamaSohei, TezRomacH, alexrobomind, belldandyxtq, c-bata, chenghuzi, crcrpar, g-votte, harupy, himkt, hvy, jeromepatel, keisuke-umezawa, mavillan, not522, nyanhi, nzw0301, sfujiwara, sile, tktran, toshihikoyanase, twolffpiggott, ydcjeff, ytsmiling

2.5.0

This is the release note of [v2.5.0](https://github.com/optuna/optuna/milestone/31?closed=1).

Highlights

Ask-and-Tell

The ask-and-tell interface is a new complement to `Study.optimize`. It allows users to construct `Trial` instances without an objective function callback, giving more flexibility in how to define search spaces, ask for suggested hyperparameters, and evaluate objective functions. The interface consists of two methods, `Study.ask` and `Study.tell`.

- `Study.ask` returns a new `Trial` object.
- `Study.tell` takes either a `Trial` object or a trial number along with the result of that trial, i.e. a value and/or the state, and saves it. Since `Study.tell` accepts a trial number, the trial object can be disposed of after parameters have been suggested. This allows objective function evaluations on a different thread or process.

```python
import optuna
from optuna.trial import TrialState

study = optuna.create_study()

# Use a Python for-loop to iteratively optimize the study.
for _ in range(100):
    trial = study.ask()  # `trial` is a `Trial` and not a `FrozenTrial`.

    # Objective function, in this case not as a function but at global scope.
    x = trial.suggest_float("x", -1, 1)
    y = x ** 2

    study.tell(trial, y)

    # Or, tell by trial number. This is equivalent to `study.tell(trial, y)`.
    # study.tell(trial.number, y)

    # Or, prune if the trial seems unpromising.
    # study.tell(trial, state=TrialState.PRUNED)

assert len(study.trials) == 100
```


Heartbeat

Optuna now supports monitoring trial heartbeats with RDB storages. For example, if a process running a trial is killed by a scheduler in a cluster environment, Optuna automatically changes the state of the trial that was running on that process from `TrialState.RUNNING` to `TrialState.FAIL`.

```python
# Consider running this script on several processes.
import optuna

def objective(trial):
    ...  # Very time-consuming computation.

# Recording heartbeats every 60 seconds.
# Other processes' trials where more than 120 seconds have passed
# since the last heartbeat was recorded will be automatically failed.
storage = optuna.storages.RDBStorage(url=..., heartbeat_interval=60, grace_period=120)
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
```


Constrained NSGA-II

NSGA-II experimentally supports constrained optimization. Users can introduce constraints with the new `constraints_func` argument of `NSGAIISampler.__init__`.

The following is an example using this argument, a bi-objective version of the knapsack problem. We have 100 items, each a value-weight pair, and two knapsacks, and would like to maximize the profits of the packed items within the weight limitations.

```python
import numpy as np
import optuna

# Define bi-objective knapsack problem.
n_items = 100
n_knapsacks = 2
```

2.4.0

This is the release note of [v2.4.0](https://github.com/optuna/optuna/milestone/30?closed=1).

Highlights

Python 3.9 Support

This is the first version to officially support Python 3.9. Everything is tested with the exception of certain integration modules under `optuna.integration`. We will continue to extend the support in the coming releases.

Multi-objective Optimization

Multi-objective optimization in Optuna is now a stable first-class citizen. Multi-objective optimization allows optimizing multiple objectives at the same time, such as maximizing model accuracy while minimizing model inference time.

Single-objective optimization can be extended to multi-objective optimization by

1. specifying a sequence (e.g. a tuple) of `directions` instead of a single `direction` in `optuna.create_study`; both parameters are supported for backwards compatibility
2. (optionally) specifying a sampler that supports multi-objective optimization in `optuna.create_study`; if skipped, the `NSGAIISampler` is used by default
3. returning a sequence of values instead of a single value from the objective function

Multi-objective Sampler

Samplers that support multi-objective optimization are currently the `NSGAIISampler`, the `MOTPESampler`, the `BoTorchSampler` and the `RandomSampler`.

Example

```python
import optuna

def objective(trial):
    # The Binh and Korn function. It has two objectives to minimize.
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)

    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1

sampler = optuna.samplers.NSGAIISampler()
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)

# Get a list of the best trials.
best_trials = study.best_trials

# Visualize the best trials (i.e. Pareto front) in blue.
fig = optuna.visualization.plot_pareto_front(study, target_names=["v0", "v1"])
fig.show()
```


![v240_pareto_front](https://user-images.githubusercontent.com/5983694/104276451-3992cd00-54e8-11eb-8489-5480faaaefe0.png)

Migrating from the Experimental `optuna.multi_objective`

`optuna.multi_objective` used to be an experimental submodule for multi-objective optimization. This submodule is now deprecated. The changes required to migrate to the new interfaces are subtle, as described by the steps in the previous section.

Database Storage Schema Upgrade

With the introduction of multi-objective optimization, the database storage schema for the `RDBStorage` has been changed. To continue to use databases from v2.3, run the following command to upgrade your tables. Please create a backup of the database before.

```bash
optuna storage upgrade --storage <URL to the storage, e.g. sqlite:///example.db>
```


BoTorch Sampler

`BoTorchSampler` is an experimental sampler based on BoTorch. BoTorch is a library for Bayesian optimization using PyTorch. See the [example](https://github.com/optuna/optuna/blob/release-v2.4.0/examples/botorch_simple.py) for usage.

Constrained Optimization

For the first time in Optuna, `BoTorchSampler` allows constrained optimization. Users can impose constraints on hyperparameters or objective function values as follows.

```python
import optuna

def objective(trial):
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)

    # Constraints which are considered feasible if less than or equal to zero.
    # The feasible region is basically the intersection of a circle centered at (x=5, y=0)
    # and the complement to a circle centered at (x=8, y=-3).
    c0 = (x - 5) ** 2 + y ** 2 - 25
    c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7

    # Store the constraints as user attributes so that they can be restored after optimization.
    trial.set_user_attr("constraint", (c0, c1))

    return x ** 2 + y ** 2

def constraints(trial):
    return trial.user_attrs["constraint"]

# Specify the constraint function when instantiating the `BoTorchSampler`.
sampler = optuna.integration.BoTorchSampler(constraints_func=constraints)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=32)
```


Multi-objective Optimization

`BoTorchSampler` supports both single- and multi-objective optimization. By default, the sampler selects the appropriate sampling algorithm with respect to the number of objectives.

Customizability

`BoTorchSampler` is customizable via the `candidates_func` callback parameter. Users familiar with BoTorch can change the surrogate model, acquisition function, and its optimizer in this callback to utilize any of the algorithms provided by BoTorch.

Visualization with Callback Specified Target Values

Visualization functions can now plot values other than objective values, such as inference time or other evaluation metrics. Users can specify the values to be plotted via the `target` argument. Even in multi-objective optimization, the visualization functions become available by selecting a single objective through `target`.

New Tutorials

[The tutorial](https://optuna.readthedocs.io/en/v2.4.0/tutorial/index.html) has been improved, and new content for each of Optuna's key features has been added. More content will be added in the future. Please look forward to it!

Breaking Changes

- Allow filtering trials from `Study` and `BaseStorage` based on `TrialState` (1943)
- Stop storing error stack traces in `fail_reason` in trial `system_attr` (1964)
- Importance with target values other than objective value (2109)

New Features

- Implement `plot_contour` and `_get_contour_plot` with Matplotlib backend (1782, thanks ytknzw!)
- Implement `plot_param_importances` and `_get_param_importance_plot` with Matplotlib backend (1787, thanks ytknzw!)
- Implement `plot_slice` and `_get_slice_plot` with Matplotlib backend (1823, thanks ytknzw!)
- Add `PartialFixedSampler` (1892, thanks norihitoishida!)
- Allow filtering trials from `Study` and `BaseStorage` based on `TrialState` (1943)
- Add rung promotion limitation in ASHA/Hyperband to enable arbitrary unknown length runs (1945, thanks alexrobomind!)
- Add Fastai V2 pruner callback (1954, thanks hal-314!)
- Support options available on AllenNLP except to `node_rank` and `dry_run` (1959)
- Universal data transformer (1987)
- Introduce `BoTorchSampler` (1989)
- Add axis order for `plot_pareto_front` (2000, thanks okdshin!)
- `plot_optimization_history` with target values other than objective value (2064)
- `plot_contour` with target values other than objective value (2075)
- `plot_parallel_coordinate` with target values other than objective value (2089)
- `plot_slice` with target values other than objective value (2093)
- `plot_edf` with target values other than objective value (2103)
- Importance with target values other than objective value (2109)
- Migrate `optuna.multi_objective.visualization.plot_pareto_front` (2110)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_contour` (2112)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_edf` (2117)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_optimization_history` (2118)
- `plot_param_importances` with target values other than objective value (2119)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_parallel_coordinate` (2120)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_slice` (2121)
- Trial post processing (2134)
- Raise `NotImplementedError` for `trial.report` and `trial.should_prune` during multi-objective optimization (2135)
- Raise `ValueError` in TPE and CMA-ES if `study` is being used for multi-objective optimization (2136)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `get_param_importances`, `BaseImportanceEvaluator.evaluate`, and `plot_param_importances` (2137)
- Raise `ValueError` in integration samplers if `study` is being used for multi-objective optimization (2145)
- Migrate NSGA2 sampler (2150)
- Migrate MOTPE sampler (2167)
- Storages to query trial IDs from numbers (2168)

Enhancements

- Use context manager to treat session correctly (1628)
- Integrate multi-objective optimization module for the storages, study, and frozen trial (1994)
- Pass `include_package` to AllenNLP for distributed setting (2018)
- Change the RDB schema for multi-objective integration (2030)
- Update pruning callback for xgboost 1.3 (2078, thanks trivialfis!)
- Fix log format for single objective optimization to include best trial (2128)
- Implement `Study._is_multi_objective()` to check whether study has multiple objectives (2142, thanks nyanhi!)
- `TFKerasPruningCallback` to warn when an evaluation metric does not exist (2156, thanks bigbird555!)
- Warn default target name when target is specified (2170)
- `Study.trials_dataframe` for multi-objective optimization (2181)

Bug Fixes

- Make always compute `weights_below` in `MOTPEMultiObjectiveSampler` (1979)
- Fix the range of categorical values (1983)
- Remove circular reference of study (2079)
- Fix flipped colormap in `matplotlib` backend `plot_parallel_coordinate` (2090)
- Replace builtin `isnumerical` to capture float values in `plot_contour` (2096, thanks nzw0301!)
- Drop unnecessary constraint from upgraded `trial_values` table (2180)

Installation

- Ignore `tests` directory on install (2015, thanks 130ndim!)
- Clean up `setup.py` requirements (2051)
- Pin `xgboost<1.3` (2084)
- Bump up PyTorch version (2094)

Documentation

- Update tutorial (1722)
- Introduce plotly directive (1944, thanks harupy!)
- Check everything by `blackdoc` (1982)
- Remove `codecov` from `CONTRIBUTING.md` (2005)
- Make the visualization examples deterministic (2022, thanks harupy!)
- Use plotly directive in `plot_pareto_front` (2025)
- Remove plotly scripts and unused generated files (2026)
- Add mandarin link to ReadTheDocs layout (2028)
- Document about possible duplicate parameter configurations in `GridSampler` (2040)
- Fix `MOTPEMultiObjectiveSampler`'s example (2045, thanks norihitoishida!)
- Fix Read the Docs build failure caused by `pip install --find-links` (2065)
- Fix `lt` symbol (2068, thanks KoyamaSohei!)
- Fix parameter section of `RandomSampler` in docs (2071, thanks akihironitta!)
- Add note on the behavior of `suggest_float` with `step` argument (2087)
- Tune build time of 2076 (2088)
- Add `matplotlib.plot_parallel_coordinate` example (2097, thanks nzw0301!)
- Add `matplotlib.plot_param_importances` example (2098, thanks nzw0301!)
- Add `matplotlib.plot_slice` example (2099, thanks nzw0301!)
- Add `matplotlib.plot_contour` example (2100, thanks nzw0301!)
- Bump Sphinx up to 3.4.0 (2127)
- Additional docs about `optuna.multi_objective` deprecation (2132)
- Move type hints to description from signature (2147)
- Add copy button to all the code examples (2148)
- Fix wrong wording in distributed execution tutorial (2152)

Examples

- Add MXNet Gluon example (1985)
- Update logging in PyTorch Lightning example (2037, thanks pbmstrk!)
- Change return type of `training_step` of PyTorch Lightning example (2043)
- Fix dead links in `examples/README.md` (2056, thanks nai62!)
- Add `enqueue_trial` example (2059)
- Skip FastAI v2 example in examples job (2108)
- Move `examples/multi_objective/plot_pareto_front.py` to `examples/visualization/plot_pareto_front.py` (2122)
- Use latest multi-objective functionality in multi-objective example (2123)
- Add haiku and jax simple example (2155, thanks nzw0301!)

Tests

- Update `parametrize_sampler` of `test_samplers.py` (2020, thanks norihitoishida!)
- Change `trail_id + 123` -> `trial_id` (2052)
- Fix `scipy==1.6.0` test failure with `LogisticRegression` (2166)

Code Fixes

- Introduce plotly directive (1944, thanks harupy!)
- Stop storing error stack traces in `fail_reason` in trial `system_attr` (1964)
- Check everything by blackdoc (1982)
- HPI with `_SearchSpaceTransform` (1988)
- Fix TODO comment about orders of `dict`s (2007)
- Add `__all__` to reexport modules explicitly (2013)
- Update `CmaEsSampler`'s warning message (2019, thanks norihitoishida!)
- Put up an alias for `structs.StudySummary` against `study.StudySummary` (2029)
- Deprecate `optuna.type_checking` module (2032)
- Remove `py35` from black config in `pyproject.toml` (2035)
- Use model methods instead of `session.query()` (2060)
- Use `find_or_raise_by_id` instead of `find_by_id` to raise if a study does not exist (2061)
- Organize and remove unused model methods (2062)
- Leave a comment about RTD compromise (2066)
- Fix ideographic space (2067, thanks KoyamaSohei!)
- Make new visualization parameters keyword only (2082)
- Use latest APIs in `LightGBMTuner` (2083)
- Add `matplotlib.plot_slice` example (2099, thanks nzw0301!)
- Deprecate previous multi-objective module (2124)
- `_run_trial` refactoring (2133)
- Cosmetic fix of `xgboost` integration (2143)

Continuous Integration

- Partial support of python 3.9 (1908)
- Check everything by blackdoc (1982)
- Avoid `set-env` in GitHub Actions (1992)
- PyTorch and AllenNLP (1998)
- Remove `checks` from circleci (2004)
- Migrate tests and coverage to GitHub Actions (2027)
- Enable blackdoc `--diff` option (2031)
- Unpin mypy version (2069)
- Skip FastAI v2 example in examples job (2108)
- Fix CI examples for Py3.6 (2129)

Other

- Add `tox.ini` (2024)
- Allow passing additional arguments when running tox (2054, thanks harupy!)
- Add Python 3.9 to README badge (2063)
- Clarify that generally pull requests need two or more approvals (2104)
- Release wheel package via PyPI (2105)
- Adds news entry about the Python 3.9 support (2114)
- Add description for tox to `CONTRIBUTING.md` (2159)
- Bump up version number to 2.4.0 (2183)
- [Backport] Fix the syntax of `pypi-publish.yml` (2188)

Thanks to All the Contributors!

This release was made possible by authors, and everyone who participated in reviews and discussions.

130ndim, Crissman, HideakiImamura, KoyamaSohei, akihironitta, alexrobomind, bigbird555, c-bata, crcrpar, eytan, g-votte, hal-314, harupy, himkt, hvy, keisuke-umezawa, nai62, norihitoishida, not522, nyanhi, nzw0301, okdshin, pbmstrk, sdaulton, sile, toshihikoyanase, trivialfis, ytknzw, ytsmiling

2.3.0

This is the release note of [v2.3.0](https://github.com/optuna/optuna/milestone/29?closed=1).

Highlights

Multi-objective TPE sampler

The TPE sampler now supports multi-objective optimization. This new algorithm is implemented in `optuna.multi_objective` and used via `optuna.multi_objective.samplers.MOTPEMultiObjectiveSampler`. See #1530 for the details.

![87849998-c7ba3c00-c927-11ea-8d5b-c7712f77abbe](https://user-images.githubusercontent.com/38826298/98068220-cdb83680-1e9e-11eb-9c6c-90a5a2859804.gif)


`LightGBMTunerCV` returns the best booster

The best booster of `LightGBMTunerCV` can now be obtained in the same way as with `LightGBMTuner`. See #1609 and #1702 for details.

PyTorch Lightning v1.0 support

The integration with PyTorch Lightning v1.0 is available. The pruning feature of Optuna can be used with the new version of PyTorch Lightning using `optuna.integration.PyTorchLightningPruningCallback`. See #597 and #1926 for details.

RAPIDS + Optuna example

An example to illustrate how to use [RAPIDS](https://rapids.ai/) with Optuna is available. You can use this example to harness the computational power of the GPU along with Optuna.

New Features

- Introduce Multi-objective TPE to `optuna.multi_objective.samplers` (1530, thanks y0z!)
- Return `LGBMTunerCV` booster (1702, thanks nyanhi!)
- Implement `plot_intermediate_values` and `_get_intermediate_plot` with Matplotlib backend (1762, thanks ytknzw!)
- Implement `plot_optimization_history` and `_get_optimization_history_plot` with Matplotlib backend (1763, thanks ytknzw!)
- Implement `plot_parallel_coordinate` and `_get_parallel_coordinate_plot` with Matplotlib backend (1764, thanks ytknzw!)
- Improve MLflow callback functionality: allow nesting, and attached study attrs (1918, thanks drobison00!)

Enhancements

- Copy datasets before objective evaluation (1805)
- Fix 'Mean of empty slice' warning (1927, thanks carefree0910!)
- Add `reseed_rng` to `NSGAIIMultiObjectiveSampler` (1938)
- Add RDB support to `MoTPEMultiObjectiveSampler` (1978)

Bug Fixes

- Add some jitters in `_MultivariateParzenEstimators` (1923, thanks kstoneriv3!)
- Fix `plot_contour` (1929, thanks carefree0910!)
- Fix return type of the multivariate TPE samplers (1955, thanks nzw0301!)
- Fix `StudyDirection` of `mape` in `LightGBMTuner` (1966)

Documentation

- Add explanation for most module-level reference pages (1850, thanks tktran!)
- Revert module directives (1873)
- Remove `with_trace` method from docs (1882, thanks i-am-jeetu!)
- Add CuPy to projects using Optuna (1889)
- Add more sphinx doc comments (1894, thanks yuk1ty!)
- Fix a broken link in `matplotlib.plot_edf` (1899)
- Fix broken links in `README.md` (1901)
- Show module paths in `optuna.visualization` and `optuna.multi_objective.visualization` (1902)
- Add a short description to the example in FAQ (1903)
- Embed `plot_edf` figure in documentation by using matplotlib plot directive (1905, thanks harupy!)
- Fix plotly figure iframe paths (1906, thanks harupy!)
- Update docstring of `CmaEsSampler` (1909)
- Add `matplotlib.plot_intermediate_values` figure to doc (1933, thanks harupy!)
- Add `matplotlib.plot_optimization_history` figure to doc (1934, thanks harupy!)
- Make code example of `MOTPEMultiObjectiveSampler` executable (1953)
- Add `Raises` comments to samplers (1965, thanks yuk1ty!)

Examples

- Make src comments more descriptive in `examples/pytorch_lightning_simple.py` (1878, thanks iamshnoo!)
- Add an external project in Optuna examples (1888, thanks resnant!)
- Add RAPIDS + Optuna simple example (1924, thanks Nanthini10!)
- Apply follow-up of 1924 (1960)

Tests

- Fix RDB test to avoid deadlock when creating study (1919)
- Add a test to verify `nest_trials` for `MLflowCallback` works properly (1932, thanks harupy!)
- Add a test to verify `tag_study_user_attrs` for `MLflowCallback` works properly (1935, thanks harupy!)

Code Fixes

- Fix typo (1900)
- Refactor `Study.optimize` (1904)
- Refactor `Study.trials_dataframe` (1907)
- Add variable annotation to `optuna/logging.py` (1920, thanks akihironitta!)
- Fix duplicate stack traces (1921, thanks akihironitta!)
- Remove `_log_normal_cdf` (1922, thanks kstoneriv3!)
- Convert comment style type hints (1950, thanks akihironitta!)
- Align the usage of type hints and instantiation of dictionaries (1956, thanks akihironitta!)

Continuous Integration

- Run documentation build and doctest in GitHub Actions (1891)
- Resolve conflict of `job-id` of GitHub Actions workflows (1898)
- Pin `mypy==0.782` (1913)
- Run `allennlp_jsonnet.py` on GitHub Actions (1915)
- Fix for PyTorch Lightning 1.0 (1926)
- Check blackdoc in CI (1958)
- Fix path for `store_artifacts` step in `document` CircleCI job (1962, thanks harupy!)

Other

- Fix how to check the format, coding style, and type hints (1755)
- Fix typo (1968, thanks nzw0301!)

Thanks to All the Contributors!

This release was made possible by authors, and everyone who participated in reviews and discussions.

Crissman, HideakiImamura, Nanthini10, akihironitta, c-bata, carefree0910, crcrpar, drobison00, harupy, himkt, hvy, i-am-jeetu, iamshnoo, keisuke-umezawa, kstoneriv3, nyanhi, nzw0301, resnant, sile, smly, tktran, toshihikoyanase, y0z, ytknzw, yuk1ty

2.2.0

This is the release note of [v2.2.0](https://github.com/optuna/optuna/milestone/28?closed=1).

In this release, we drop support for Python 3.5. If you are using Python 3.5, please consider upgrading your Python environment to Python 3.6 or newer, or install older versions of Optuna.

Highlights

Multivariate TPE sampler

`TPESampler` is updated with an experimental option to enable multivariate sampling. This algorithm captures dependencies among hyperparameters better than the previous algorithm. See 1767 for more details.

<img src="https://user-images.githubusercontent.com/3255979/95030825-40da5b80-06ed-11eb-84b1-fcc24dc1b70a.gif" width="480">

<!-- ![density_ratio](https://user-images.githubusercontent.com/3255979/95030825-40da5b80-06ed-11eb-84b1-fcc24dc1b70a.gif) -->

<img src="https://user-images.githubusercontent.com/3255979/95030841-58b1df80-06ed-11eb-8e3c-a74e3687c78f.png" width="480">

<!-- ![92350529-3f306e80-f114-11ea-8782-36e463c19320](https://user-images.githubusercontent.com/3255979/95030841-58b1df80-06ed-11eb-8e3c-a74e3687c78f.png) -->


Improved AllenNLP support

`AllenNLPExecutor` supports pruning. It is introduced in the official [hyperparameter search guide](https://guide.allennlp.org/hyperparameter-optimization) by AllenNLP. Both `AllenNLPExecutor` and the guide were written by himkt. See #1772.

![allennlp-executor-jsonnet4](https://user-images.githubusercontent.com/3255979/95030850-66676500-06ed-11eb-9f95-1f8862510c8e.png)

New Features

- Create `optuna.visualization.matplotlib` (1756, thanks ytknzw!)
- Add multivariate TPE sampler (1767, thanks kstoneriv3!)
- Support `AllenNLPPruningCallback` for `AllenNLPExecutor` (1772)

Enhancements

- `KerasPruningCallback` to warn when an evaluation metric does not exist (1759, thanks bigbird555!)
- Implement `plot_edf` and `_get_edf_plot` with Matplotlib backend (1760, thanks ytknzw!)
- Fix exception chaining all over the codebase (1781, thanks akihironitta!)
- Add metric alias of rmse for `LightGBMTuner` (1807, thanks upura!)
- Update PyTorch-Lightning minor version (1813, thanks nzw0301!)
- Improve `TensorBoardCallback` (1814, thanks sfujiwara!)
- Add metric alias for `LightGBMTuner` (1822, thanks nyanhi!)
- Introduce a new argument to plot all evaluation points by `optuna.multi_objective.visualization.plot_pareto_front` (1824, thanks nzw0301!)
- Add `reseed_rng` to `RandomMultiObjectiveSampler` (1831, thanks y0z!)

Bug Fixes

- Fix fANOVA for `IntLogUniformDistribution` (1788)
- Fix `mypy` in an environment where some dependencies are installed (1804)
- Fix `WFG._compute()` (1812, thanks y0z!)
- Fix contour plot error for categorical distributions (1819, thanks zchenry!)
- Store CMAES optimizer after splitting into substrings (1833)
- Add maximize support on `CmaEsSampler` (1849)
- Add `matplotlib` directory to `optuna.visualization.__init__.py` (1867)

Installation

- Update `setup.py` to drop Python 3.5 support (1818, thanks harupy!)
- Add Matplotlib to `setup.py` (1829, thanks ytknzw!)

Documentation

- Fix `plot_pareto_front` preview path (1808)
- Fix indents of the example of `multi_objective.visualization.plot_pareto_front` (1815, thanks nzw0301!)
- Hide `__init__` from docs (1820, thanks upura!)
- Explicitly omit Python 3.5 from `README.md` (1825)
- Follow-up 1832: alphabetical naming and fixes (1841)
- Mention `isort` in the contribution guidelines (1842)
- Add news sections about introduction of `isort` (1843)
- Add `visualization.matplotlib` to docs (1847)
- Add sphinx doc comments regarding exceptions in the optimize method (1857, thanks yuk1ty!)
- Avoid global study in `Study.stop` testcode (1861)
- Fix documents of `visualization.is_available` (1869)
- Improve `ThresholdPruner` example (1876, thanks fsmosca!)
- Add logging levels to `optuna.logging.set_verbosity` (1884, thanks nzw0301!)

Examples

- Add XGBoost cross-validation example (1836, thanks sskarkhanis!)
- Minor code fix of XGBoost examples (1844)

Code Fixes

- Add default implementation of `get_n_trials` (1568)
- Introduce `isort` to automatically sort import statements (1695, thanks harupy!)
- Avoid using experimental decorator on `CmaEsSampler` (1777)
- Remove `logger` member attributes from `PyCmaSampler` and `CmaEsSampler` (1784)
- Apply `blackdoc` (1817)
- Remove TODO (1821, thanks sfujiwara!)
- Fix Redis example code (1826)
- Apply `isort` to `visualization/matplotlib/` and `multi_objective/visualization` (1830)
- Move away from `.scoring` imports (1864, thanks norihitoishida!)
- Add experimental decorator to `matplotlib.*` (1868)

Continuous Integration

- Disable `--cache-from` if trigger of docker image build is `release` (1791)
- Remove Python 3.5 from CI checks (1810, thanks harupy!)
- Update python version in docs (1816, thanks harupy!)
- Migrate `checks` to GitHub Actions (1838)
- Add option `--diff` to black (1840)

Thanks to All the Contributors!

This release was made possible by authors, and everyone who participated in reviews and discussions.

HideakiImamura, akihironitta, bigbird555, c-bata, crcrpar, fsmosca, g-votte, harupy, himkt, hvy, keisuke-umezawa, kstoneriv3, norihitoishida, nyanhi, nzw0301, sfujiwara, sile, sskarkhanis, toshihikoyanase, upura, y0z, ytknzw, yuk1ty, zchenry

2.1.0

This is the release note of [v2.1.0](https://github.com/optuna/optuna/milestone/27?closed=1).

*Optuna v2.1.0 will be the last version to support Python 3.5. See 1067.*

Highlights

Allowing `objective(study.best_trial)`

`FrozenTrial` used to subclass `object` but now implements `BaseTrial`. It can be used in places where a `Trial` is expected, including user-defined objective functions.

Re-evaluating the objective function with the best parameter configuration is now straightforward. See 1503 for more details.

python
study.optimize(objective, n_trials=100)
best_trial = study.best_trial
best_value = objective(best_trial)  # Did not work prior to v2.1.0.


IPOP-CMA-ES Sampling Algorithm

`CmaEsSampler` comes with an experimental option to switch to IPOP-CMA-ES. This algorithm restarts the strategy with an increased population size after premature convergence, allowing a more explorative search. See 1548 for more details.

![image](https://user-images.githubusercontent.com/38826298/92208970-2aab6680-eec7-11ea-863e-a30e18a79f25.png)

*Comparing the new option with the previous `CmaEsSampler` and `RandomSampler`.*

Optuna & MLflow on Kubernetes Example

Optuna can be easily integrated with MLflow on Kubernetes clusters. The example contained [here](https://github.com/optuna/optuna/tree/master/examples/kubernetes) is a great introduction to get you started with a few lines of commands. See #1464 for more details.

Providing Type Hinting to Applications

Type hint information is packaged following [PEP 561](https://www.python.org/dev/peps/pep-0561/). Users of Optuna can now run static type checkers such as mypy against code that uses the framework. Note that applications previously configured to ignore missing imports may report new type-check errors due to this change. See #1720 for more details.

Breaking Changes

Configuration files for `AllenNLPExecutor` may need to be updated. See 1544 for more details.

- Remove `allennlp.common.params.infer_and_cast` from AllenNLP integrations (1544)
- Deprecate `optuna.integration.KerasPruningCallback` (1670, thanks VamshiTeja!)
- Make Optuna PEP 561 Compliant (1720, thanks MarioIshac!)

New Features

- Add sampling functions to `FrozenTrial` (1503, thanks nzw0301!)
- Add modules to compute hypervolume (1537)
- Add IPOP-CMA-ES support in `CmaEsSampler` (1548)
- Implement skorch pruning callback (1668)

Enhancements

- Make sampling from trunc-norm efficient in `TPESampler` (1562)
- Add trials to cache when awaking `WAITING` trials in `_CachedStorage` (1570)
- Add log in `create_new_study` method of storage classes (1629, thanks tohmae!)
- Add border to markers in contour plot (1691, thanks zchenry!)
- Implement hypervolume calculator for two-dimensional space (1771)

Bug Fixes

- Avoid to sample the value which equals to upper bound (1558)
- Exit thread after session is destroyed (1676, thanks KoyamaSohei!)
- Disable `feature_pre_filter` in `LightGBMTuner` (1774)
- Fix fANOVA for `IntLogUniformDistribution` (1790)

Installation

- Add `packaging` in `install_requires` (1551)
- Fix failure of Keras integration due to TF2.3 (1563)
- Install `fsspec<0.8.0` for Python 3.5 (1596)
- Specify the version of `packaging` to `>= 20.0` (1599, thanks Isa-rentacs!)
- Install `lightgbm<3.0.0` to circumvent error with `feature_pre_filter` (1773)

Documentation

- Fix link to the definition of `StudySummary` (1533, thanks nzw0301!)
- Update log format in docs (1538)
- Integrate Sphinx Gallery to make tutorials easily downloadable (1543)
- Add AllenNLP pruner to list of pruners in tutorial (1545)
- Refine the help of `study-name` (1565, thanks belldandyxtq!)
- Simplify contribution guidelines by removing rule about PR title naming (1567)
- Remove license section from `README.md` (1573)
- Update key features (1582)
- Simplify documentation of `BaseDistribution.single` (1593)
- Add navigation links for contributors to `README.md` (1597)
- Apply minor changes to `CONTRIBUTING.md` (1601)
- Add list of projects using Optuna to `examples/README.md` (1605)
- Add a news section to `README.md` (1606)
- Avoid the latest stable `sphinx` (1613)
- Add link to examples in tutorial (1625)
- Add the description of default pruner (`MedianPruner`) to the documentation (1657, thanks Chillee!)
- Remove generated directories with `make clean` (1658)
- Delete a useless auto generated directory (1708)
- Divide a section for each integration repository (1709)
- Add example to `optuna.study.create_study` (1711, thanks Ruketa!)
- Add example to `optuna.study.load_study` (1712, thanks bigbird555!)
- Fix broken doctest example code (1713)
- Add some notes and usage example for the hypervolume computing module (1715)
- Fix issue where doctests are not executed (1723, thanks harupy!)
- Add example to `optuna.study.Study.optimize` (1726, thanks norihitoishida!)
- Add target for doctest to `Makefile` (1732, thanks harupy!)
- Add example to `optuna.study.delete_study` (1741, thanks norihitoishida!)
- Add example to `optuna.study.get_all_study_summaries` (1742, thanks norihitoishida!)
- Add example to `optuna.study.Study.set_user_attr` (1744, thanks norihitoishida!)
- Add example to `optuna.study.Study.user_attrs` (1745, thanks norihitoishida!)
- Add example to `optuna.study.Study.get_trials` (1746, thanks norihitoishida!)
- Add example to `optuna.multi_objective.study.MultiObjectiveStudy.optimize` (1747, thanks norihitoishida!)
- Add explanation for `optuna.trial` (1748)
- Add example to `optuna.multi_objective.study.create_study` (1749, thanks norihitoishida!)
- Add example to `optuna.multi_objective.study.load_study` (1750, thanks norihitoishida!)
- Add example to `optuna.study.Study.stop` (1752, thanks Ruketa!)
- Re-generate contour plot example with padding (1758)

Examples

- Add an example of Kubernetes, PyTorchLightning, and MLflow (1464)
- Create study before multiple workers are launched in Kubernetes MLflow example (1536)
- Fix typo in `examples/kubernetes/mlflow/README.md` (1540)
- Reduce search space for AllenNLP example (1542)
- Introduce `plot_param_importances` in example (1555)
- Removing references to deprecated `optuna study optimize` commands from examples (1566, thanks ritvik1512!)
- Add scripts to run `examples/kubernetes/*` (1584, thanks VamshiTeja!)
- Update Kubernetes example of "simple" to avoid potential errors (1600, thanks Nishikoh!)
- Implement `skorch` pruning callback (1668)
- Add a `tf.keras` example (1681, thanks sfujiwara!)
- Update `examples/pytorch_simple.py` (1725, thanks wangxin0716!)
- Fix Binh and Korn function in MO example (1757)

Tests

- Test `_CachedStorage` in `test_study.py` (1575)
- Rename `tests/multi_objective` as `tests/multi_objective_tests` (1586)
- Do not use deprecated `pytorch_lightning.data_loader` decorator (1667)
- Add test for hypervolume computation for solution sets with duplicate points (1731)

Code Fixes

- Match the order of definition in `trial` (1528, thanks nzw0301!)
- Add type hints to storage (1556)
- Add trials to cache when awaking `WAITING` trials in `_CachedStorage` (1570)
- Use `packaging` to check the library version (1610, thanks VamshiTeja!)
- Fix import order of `packaging.version` (1623)
- Refactor TPE's `sample_from_categorical_dist` (1630)
- Fix error messages in `TPESampler` (1631, thanks kstoneriv3!)
- Add code comment about `n_ei_candidates` for categorical parameters (1637)
- Add type hints into `optuna/integration/keras.py` (1642, thanks airyou!)
- Fix how to use `black` in `CONTRIBUTING.md` (1646)
- Add type hints into `optuna/cli.py` (1648, thanks airyou!)
- Add type hints into `optuna/dashboard.py`, `optuna/integration/__init__.py` (1653, thanks airyou!)
- Add type hints `optuna/integration/_lightgbm_tuner` (1655, thanks upura!)
- Fix LightGBM Tuner import code (1659)
- Add type hints to `optuna/storages/__init__.py` (1661, thanks akihironitta!)
- Add type hints to `optuna/trial` (1662, thanks upura!)
- Enable flake8 E231 (1663, thanks harupy!)
- Add type hints to `optuna/testing` (1665, thanks upura!)
- Add type hints to `tests/storages_tests/rdb_tests` (1666, thanks akihironitta!)
- Add type hints to `optuna/samplers` (1673, thanks akihironitta!)
- Fix type hint of `optuna.samplers._random` (1678, thanks nyanhi!)
- Add type hints into `optuna/integration/mxnet.py` (1679, thanks norihitoishida!)
- Fix type hint of `optuna/pruners/_nop.py` (1680, thanks Ruketa!)
- Update Type Hints: `prunes/_percentile.py` and `prunes/_median.py` (1682, thanks ytknzw!)
- Fix incorrect type annotations for `args` and `kwargs` (1684, thanks harupy!)
- Update type hints in `optuna/pruners/_base.py` and `optuna/pruners/_successive_halving.py` (1685, thanks ytknzw!)
- Add type hints to `test_optimization_history.py` (1686, thanks yosupo06!)
- Fix type hint of `tests/pruners_tests/test_median.py` (1687, thanks polyomino-24!)
- Type hint and reformat of files under `visualization_tests` (1689, thanks gasin!)
- Remove unused argument `trial` from `optuna.samplers._tpe.sampler._get_observation_pairs` (1692, thanks ytknzw!)
- Add type hints into `optuna/integration/chainer.py` (1693, thanks norihitoishida!)
- Add type hints to `optuna/integration/tensorflow.py` (1698, thanks uenoku!)
- Add type hints into `optuna/integration/chainermn.py` (1699, thanks norihitoishida!)
- Add type hints to `optuna/integration/xgboost.py` (1700, thanks Ruketa!)
- Add type hints to files under `tests/integration_tests` (1701, thanks gasin!)
- Use `Optional` for keyword arguments that default to `None` (1703, thanks harupy!)
- Fix type hint of all the rest files under `tests/` (1704, thanks gasin!)
- Fix type hint of `optuna/integration` (1705, thanks akihironitta!)
- Add l2 metric aliases to `LightGBMTuner` (1717, thanks thigm85!)
- Convert type comments in `optuna/study.py` into type annotations (1724, thanks harupy!)
- Apply `black==20.8b1` (1730)
- Fix type hint of `optuna/integration/sklearn.py` (1735, thanks akihironitta!)
- Add type hints into `optuna/structs.py` (1743, thanks norihitoishida!)
- Fix typo in `optuna/samplers/_tpe/parzen_estimator.py` (1754, thanks akihironitta!)

Continuous Integration

- Temporarily skip `allennlp_jsonnet.py` example in CI (1527)
- Run TensorFlow on Python 3.8 (1564)
- Bump PyTorch to 1.6 (1572)
- Skip entire `allennlp` example directory in CI (1585)
- Use `actions/setup-python@v2` (1594)
- Add `cache` to GitHub Actions Workflows (1595)
- Run example after docker build to ensure that built image is setup properly (1635, thanks harupy!)
- Use cache-from to build docker image faster (1638, thanks harupy!)
- Fix issue where doctests are not executed (1723, thanks harupy!)

Other

- Remove Swig installation from Dockerfile (1462)
- Add: How to run examples with our Docker images (1554)
- GitHub Action labeler (1591)
- Do not trigger labeler on push (1624)
- Fix invalid YAML syntax (1626)
- Pin `sphinx` version to `3.0.4` (1627, thanks harupy!)
- Add `.dockerignore` (1633, thanks harupy!)
- Fix how to use `black` in `CONTRIBUTING.md` (1646)
- Add `pyproject.toml` for easier use of black (1649)
- Fix `docs/Makefile` (1650)
- Ignore vscode configs (1660)
- Make Optuna PEP 561 Compliant (1720, thanks MarioIshac!)
