This is the release note of [v3.1.0](https://github.com/optuna/optuna/milestone/53?closed=1).
You don't need to read this note from top to bottom to get a summary of Optuna v3.1; the recommended starting point is [the release blog](https://medium.com/optuna/announcing-optuna-3-1-7b4c5fac227c).
# Highlights

## New Features

### CMA-ES with Margin
> | CMA-ES | CMA-ES with Margin |
> | ------- | -------- |
> | *(animation)* | *(animation)* |
>
> The animations are taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.
CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our `CmaEsSampler`, which makes it more efficient in these cases. You can see the benchmark results [here](https://github.com/CyberAgentAILab/cmaes/pull/121#issuecomment-1296691448). For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - [arXiv](https://arxiv.org/abs/2205.13482)”, which has been accepted for presentation at GECCO 2022.
```python
import optuna
from optuna.samplers import CmaEsSampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y


study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=20)
```
### Distributed Optimization via NFS
`JournalFileStorage`, a file storage backend based on `JournalStorage`, supports NFS (Network File System) environments. It is the easiest option for users who want to run distributed optimization in environments where it is difficult to set up a database server such as MySQL, PostgreSQL, or Redis (e.g. #815, #1330, #1457, and #2216).
```python
import optuna
from optuna.storages import JournalStorage, JournalFileStorage


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y


storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
```
For more information on `JournalFileStorage`, see the blog post [“Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage”](https://medium.com/optuna/distributed-optimization-via-nfs-using-optunas-new-operation-based-logging-storage-9815f9c3f932) written by wattlebirdaz.
### A Brand-New Redis Storage
We have replaced the Redis storage backend with a `JournalStorage`-based one. The experimental `RedisStorage` class has been removed in v3.1. The following example shows how to use the new `JournalRedisStorage` class.
```python
import optuna
from optuna.storages import JournalStorage, JournalRedisStorage


def objective(trial):
    ...


storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective)
```
### Dask.distributed Integration
`DaskStorage`, a new storage backend based on [Dask.distributed](https://distributed.dask.org/en/stable/), is now supported. It lets you leverage Dask's distributed capabilities through an API similar to `concurrent.futures`. `DaskStorage` can be used together with `InMemoryStorage`, so you don't need to set up a database server. Here's a code example showing how to use `DaskStorage`:
```python
import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait


def objective(trial):
    ...


with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for i in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")
```
Setting up a Dask cluster is easy: install `dask` and `distributed`, then run the `dask scheduler` and `dask worker` commands, as detailed in the [Quick Start Guide](https://distributed.dask.org/en/stable/quickstart.html) in the Dask.distributed documentation.
```console
$ pip install optuna dask distributed
$ dask scheduler
```