DeepHyper

Latest version: v0.10.0


0.6.0

- The documentation site theme was updated.
- PyPI Release: https://pypi.org/project/deephyper/0.6.0/
- New BibTeX citation for DeepHyper to include our growing community:



```
@misc{deephyper_software,
  title = {"DeepHyper: A Python Package for Scalable Neural Architecture and Hyperparameter Search"},
  author = {Balaprakash, Prasanna and Egele, Romain and Salim, Misha and Maulik, Romit and Vishwanath, Venkat and Wild, Stefan and others},
  organization = {DeepHyper Team},
  year = 2018,
  url = {https://github.com/deephyper/deephyper}
}
```


deephyper.evaluator

- The `profile(memory=True)` decorator can now profile memory using `tracemalloc` (adding some overhead); see the sketch after this list.
- `RayStorage` is now available for the `ray` parallel backend. It is based on remote actors and wraps the base `MemoryStorage`. This makes it possible to use `deephyper.stopper` in parallel with only the `ray` backend requirements.
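
A minimal sketch of how the memory profiling option might be used, assuming `profile` is imported from `deephyper.evaluator` (as introduced in 0.4.0); the hyperparameter `"x"` and the workload are illustrative.

```python
from deephyper.evaluator import profile

# Hypothetical run-function: with memory=True the decorator tracks memory with
# tracemalloc (adding overhead), in addition to the usual timing information.
@profile(memory=True)
def run(job):
    x = job.parameters["x"]
    buffer = [x] * 100_000  # illustrative memory-consuming work
    return {"objective": sum(buffer) / len(buffer)}
```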

deephyper.search

- Multi-objective optimization (MOO) has been upgraded for better optimization performance. A new tutorial introducing this feature is available at [Multi-Objective Optimization - 101](https://deephyper.readthedocs.io/en/develop/tutorials/tutorials/colab/Multi_objective_optimization_101.html).
- A minimum performance lower bound can be specified with `moo_lower_bounds=...` to avoid exploring uninteresting trade-offs (see the sketch after this list).
- A new objective scaler, `objective_scaler="quantile-uniform"`, is available to normalize objectives of different scales (e.g., accuracy and latency) more effectively.
- The `results.csv` or DataFrame now contains a new column, `pareto_efficient`, which indicates whether a solution belongs to the Pareto set/front of the multi-objective problem.
- Random-Forest (RF) surrogate model predictions are about 1.5x faster, speeding up the Bayesian optimization process.
- Added a dynamic prior update for Bayesian optimization: `update_prior=..., update_prior_quantile=...`. This increases the sampling density in areas of interest and makes random-sampling-based optimization of the surrogate model more competitive with more expensive optimizers (e.g., gradient-based or genetic algorithms).
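Below is a sketch of how these multi-objective options might be combined; `problem` and `evaluator` are assumed to be already defined, the bound values are illustrative, and the semantics of `moo_lower_bounds` shown in the comment are an assumption.

```python
from deephyper.search.hps import CBO

# Hypothetical two-objective setup (e.g., accuracy to maximize and negative latency).
search = CBO(
    problem,
    evaluator,
    moo_lower_bounds=[0.5, None],        # assumed semantics: ignore trade-offs with accuracy < 0.5
    objective_scaler="quantile-uniform", # normalize objectives of different scales
)
results = search.search(max_evals=200)

# Rows flagged as Pareto-efficient (assuming a boolean column) approximate the Pareto front.
pareto_front = results[results["pareto_efficient"]]
```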


deephyper.stopper

- `SuccessiveHalvingStopper` is now compatible with failures. If a "failure" is observed during training (i.e., an observation starting with `"F"`), previous observations are replaced in shared memory to notify other competitors of the failure (a sketch of the failure convention follows).
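
To illustrate the failure convention referenced above, here is a sketch of a run-function that reports a failure by returning a string starting with `"F"`; the hyperparameter `"x"` and the failure condition are illustrative.

```python
# Hypothetical run-function reporting a failed evaluation to the optimizer and stopper.
def run(job):
    x = job.parameters["x"]
    try:
        if x < 0:  # illustrative failure condition
            raise RuntimeError("training diverged")
        objective = -(x - 5.0) ** 2
    except RuntimeError:
        return "F_training_diverged"  # any string starting with "F" marks a failure
    return objective
```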

deephyper.analysis

- Creation of a new module to provide utilities for the analysis of experiments.

0.5.0

deephyper.evaluator

- removed `SubprocessEvaluator` due to limited features and confusion with `ProcessPoolEvaluator`. The `ProcessPoolEvaluator` seemed sufficient for our different use cases (hyperparameter optimization, auto-tuning).
- remote exceptions are now retrieved and returned when using `MPICommEvaluator`.
- the timeout of the search is now handled through a thread-based timeout instead of signal handlers.
- the timeout with `MPICommEvaluator` is now handled through a refreshed "remaining time" and a thread-based timeout in each rank.
- exceptions raised in remote ranks when using `MPICommEvaluator` now have their traceback captured and printed in the root rank.
- the run-function can now return metadata, which is logged in the results DataFrame (see below).

```python
import time

def run(job):
    config = job.parameters
    # "metadata" entries are logged in the results DataFrame with the "m:" prefix.
    return {"objective": config["x"], "metadata": {"time": time.time()}}
```

- new: in the `results.csv` or returned `pd.DataFrame`, hyperparameter columns now start with the `p:` prefix and metadata columns with the `m:` prefix, to allow easier filtering of the columns.
- new `deephyper.evaluator.storage` module. The interface is defined by `Storage`, with two basic implementations: `MemoryStorage` (local memory) and `RedisStorage` (in-memory key-value database).
- new `RunningJob` API. The run-function is now passed a `RunningJob` instance instead of a `dict`. `RunningJob.parameters` corresponds to the former dictionary passed to the run-function, and the `RunningJob` object implements the dictionary interface to remain backward compatible with the previous run-function argument (see the sketch below).
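
A small sketch of the backward-compatible access patterns, assuming a hypothetical `learning_rate` hyperparameter:

```python
def run(job):
    lr = job.parameters["learning_rate"]  # new-style access through RunningJob.parameters
    lr_compat = job["learning_rate"]      # dict-style access kept for backward compatibility
    assert lr == lr_compat
    return {"objective": -lr, "metadata": {"learning_rate": lr}}
```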

deephyper.search

- new preprocessing of the objective values when using `surrogate="RF"` in `CBO`. The objective is now preprocessed with `Min-Max` and `Log` to improve the fitting of the surrogate model on small values and improve Bayesian optimisation convergence (see [paper](https://arxiv.org/pdf/1908.06674.pdf)).
- new cyclic exponential decay of the exploration-exploitation trade-off in `CBO`. This is particularly useful when using `MPIDistributedDBO` and scaling the number of BO instances, to avoid "over-exploration".
- new distributed Bayesian optimization through MPI and Storage available. See [Tutorial - Distributed Bayesian Optimization with MPI and Redis](https://deephyper.readthedocs.io/en/latest/tutorials/tutorials/scripts/02_Intro_to_DBO/README.html).

deephyper.stopper

A new module in DeepHyper to allow for multi-fidelity Bayesian optimization. A stopper observes the evolving performance of an iterative algorithm and decides whether to continue its evaluation or stop it early. This can make the search more effective when time or computational resources are a bottleneck; however, it can also converge to sub-optimal solutions. Different multi-fidelity schemes are now proposed and documented at [DeepHyper Documentation - Stopper](https://deephyper.readthedocs.io/en/latest/_autosummary/deephyper.stopper.html).
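
A sketch of how a stopper might be attached to a search, assuming the run-function reports intermediate objectives with `job.record(...)` and checks `job.stopped()`; the training step, the budget range, and the constructor arguments are illustrative and may differ from the actual API.

```python
from deephyper.search.hps import CBO
from deephyper.stopper import SuccessiveHalvingStopper

# Hypothetical iterative run-function reporting intermediate objectives so that
# a stopper can decide to end the evaluation early.
def run(job):
    objective = 0.0
    for budget in range(1, 51):
        objective += job.parameters["x"] / budget  # stand-in for one training step
        job.record(budget, objective)              # report fidelity and current score
        if job.stopped():                          # the stopper requested an early stop
            break
    return objective

stopper = SuccessiveHalvingStopper(max_steps=50)  # constructor arguments assumed
search = CBO(problem, evaluator, stopper=stopper)
```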

0.4.2

deephyper.evaluator

* patched `ThreadPoolEvaluator` to remove extra overheads of pool initialisation

deephyper.search

* resolved constant hyperparameters when a hyperparameter is discrete with a log-uniform prior (df01040d44a8f3b80700f2f853a6b452680e1112)
* patched `id` to `job_id` in the neural architecture search history saver
* added multi-objective optimisation to `CBO`: a `run`-function can now return multiple objectives as a tuple to be maximised

```python
def run(config):
    ...
    return objective_0, objective_1
```


deephyper.ensemble

* ensemble with uncertainty quantification `UQBaggingEnsembleRegressor` is now compatible with predictions of arbitrary shapes

deephyper dashboard

* added a dashboard with `deephyper-analytics dashboard`, paired with results stored in a local DeepHyper database managed through `DBManager`
* added dataframe visualization
* added scatter plot visualization

0.4.0

global updates

* contributors of DeepHyper now appear on a dedicated page, see [DeepHyper Authors](https://deephyper.readthedocs.io/en/latest/authors.html), submit a PR if we forgot you!
* lighter installation via `pip install deephyper` packed with the minimum requirements for hyperparameter search.
* updated API documentation
* removed `deephyper.benchmark`
* made neural architecture search features optional with `pip install deephyper[nas]`
* made auto-sklearn features optional with `pip install deephyper[popt]` (Pipeline OPTimization)
* improved epistemic uncertainty quantification for the Random Forest surrogate model in Bayesian optimisation
* moved `deephyper/scikit-optimize` into the subpackage `deephyper.skopt`
* new tutorials dedicated to ALCF systems, see [Tutorials - Argonne Leadership Computing Facility](https://deephyper.readthedocs.io/en/latest/tutorials/tutorials/alcf/index.html)

deephyper.search

* renamed AMBS to CBO (Centralized Bayesian Optimization) at `deephyper.search.hps.CBO`
* added new scalable Distributed Bayesian Optimization algorithm at `deephyper.search.hps.DBO` (experimented with up to 4,096 parallel workers)
* moved `problem.add_starting_point` of `HpProblem` to `CBO(..., initial_points=[...])`
* added generative-model based transfer-learning for hyper-parameter optimisation ([Example - Transfer Learning for Hyperparameter Search](https://deephyper.readthedocs.io/en/latest/examples/plot_transfer_learning_for_hps.html))
* added filtering of duplicated configurations in CBO: `CBO(..., filter_duplicated=True)`
* failures can now be notified to the optimiser so it learns to avoid them ([Example - Notify Failures in Hyperparameter optimization](https://deephyper.readthedocs.io/en/latest/examples/plot_notify_failures_hyperparameter_search.html#sphx-glr-examples-plot-notify-failures-hyperparameter-search-py))
* added a new multi-point acquisition strategy for better scalability in CBO: `CBO(..., acq_func="UCB", multi_point_strategy="qUCB", ...)`
* added the possibility to switch between synchronous/asynchronous communication in CBO: `CBO(..., sync_communication=True, ...)` (a combined sketch follows this list)
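
A sketch combining several of the CBO options listed above; `problem` and `evaluator` are assumed to be already defined, and the starting point uses a hypothetical parameter `"x"`.

```python
from deephyper.search.hps import CBO

search = CBO(
    problem,
    evaluator,
    initial_points=[{"x": 1.0}],   # replaces problem.add_starting_point (hypothetical value)
    filter_duplicated=True,        # skip duplicated configurations
    acq_func="UCB",
    multi_point_strategy="qUCB",   # multi-point acquisition for better scalability
    sync_communication=False,      # asynchronous communication between workers
)
results = search.search(max_evals=100)
```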

deephyper.evaluator

* added MPI-based Evaluators (better scaling, lower initialisation overhead): `MPICommEvaluator` and `MPIPoolEvaluator`
* added `profile(run_function)` decorator for the run-function to collect execution times/durations of the black-box; this allows profiling the worker utilisation ([Example - Profile the Worker Utilisation](https://deephyper.readthedocs.io/en/latest/examples/plot_profile_worker_utilization.html))
* added `queued(Evaluator)` decorator for any evaluator class to manage a queue of resources
* added `SerialEvaluator` to adapt to serial search (evaluations one by one)
* added `deephyper.evaluator.callback.TqdmCallback` to display a progress bar when running a search (see the sketch after this list)
* the run-function can now return values other than the objective, to be logged in the `results.csv`, for example `{"objective": 0.9, "num_parameters": 20000, ...}`
* asyncio is patched automatically when using notebooks/IPython
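
A sketch of attaching the progress-bar callback; the backend name `"process"` and its keyword arguments are assumptions, and the run-function uses a hypothetical parameter `"x"`.

```python
from deephyper.evaluator import Evaluator
from deephyper.evaluator.callback import TqdmCallback

def run(config):
    return config["x"]  # minimal run-function with a hypothetical parameter "x"

evaluator = Evaluator.create(
    run,
    method="process",  # assumed backend name for the process-pool evaluator
    method_kwargs={
        "num_workers": 2,                 # assumed keyword argument
        "callbacks": [TqdmCallback()],    # display a progress bar during the search
    },
)
```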

deephyper.ensemble

* an ensemble API is provided to obtain uncertainty quantification estimates after running a neural architecture search or hyperparameter search ([Tutorial - From Neural Architecture Search to Automated Deep Ensemble with Uncertainty Quantification](https://deephyper.readthedocs.io/en/latest/tutorials/tutorials/notebooks/07_NAS_with_Ensemble_and_UQ/tutorial_07.html))

0.3.3

* Now compatible with `Python >=3.7, <3.10`
* Fixed `log_dir` argument in search
* Added logging for command line HPS/NAS

0.3.2

* All the search algorithms were tested to have a correct behaviour when `random_state` is set.
* Callbacks (`deephyper.evaluator.callback`) can now be used to extend the behavior of the existing `Evaluator`. `LoggerCallback`, `ProfilingCallback`, and `SearchEarlyStopping` are already available (see the example below).
* All search algorithms are now importable from their `hps` or `nas` package. For example, `from deephyper.search.hps import AMBS` and `from deephyper.search.nas import AgEBO`.
* `HpProblem` and `NaProblem` do not have a `seed` parameter anymore. The `random_state` has to be set when instantiating a `Search(random_state=...)`.

**Example**: `SearchEarlyStopping`

```python
from deephyper.problem import HpProblem
from deephyper.search.hps import AMBS
from deephyper.evaluator import Evaluator
from deephyper.evaluator.callback import LoggerCallback, SearchEarlyStopping

problem = HpProblem()
problem.add_hyperparameter((0.0, 10.0), "x")

def f(config):
    return config["x"]

evaluator = Evaluator.create(
    f,
    method="ray",
    method_kwargs={
        "num_cpus": 1,
        "num_cpus_per_task": 0.25,
        "callbacks": [LoggerCallback(), SearchEarlyStopping(patience=10)],
    },
)
print(f"Num. Workers {evaluator.num_workers}")

search = AMBS(problem, evaluator, filter_duplicated=False)

results = search.search(max_evals=500)
```


Gives the following output:

```console
Num. Workers 4
[00001] -- best objective: 3.74540 -- received objective: 3.74540
[00002] -- best objective: 6.38145 -- received objective: 6.38145
```
