DeepHyper

Latest version: v0.9.1


Example search log and top-k configurations from a neural architecture search run:

```console
[00003] -- best objective: 6.38145 -- received objective: 3.73641
[00004] -- best objective: 7.29998 -- received objective: 7.29998
```

```yaml
'0':
  objective: 0.9236862659
  optimizer: adam
  patience_EarlyStopping: 22
  patience_ReduceLROnPlateau: 10
'1':
  arch_seq: '[229, 0, 22, 0, 1, 235, 29, 1, 313, 1, 0, 116, 123, 1, 37, 0, 1, 388]'
  batch_size: 51
  objective: 0.9231553674
  optimizer: nadam
  patience_EarlyStopping: 23
  patience_ReduceLROnPlateau: 14
```


Neural architecture search

New documentation for the problem definition

New documentation for the neural architecture search problem setup can be found [here](https://deephyper.readthedocs.io/en/latest/user_guides/nas/problem.html).

It is now possible to define [auto-tuned hyperparameters](https://deephyper.readthedocs.io/en/latest/user_guides/nas/problem.html#searched-hyperparameters) in addition to the architecture in a NAS Problem.
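
A hedged sketch of declaring such searched hyperparameters (the exact `NaProblem`/`add_hyperparameter` signatures should be checked against the linked guide; the names and ranges below are illustrative):

```python
from deephyper.problem import NaProblem  # assumed import path for this release

Problem = NaProblem()
# ... architecture search space definition elided ...
Problem.hyperparameters(
    # Searched (auto-tuned) hyperparameters:
    batch_size=Problem.add_hyperparameter((16, 256), "batch_size"),
    learning_rate=Problem.add_hyperparameter((1e-4, 1e-1, "log-uniform"), "learning_rate"),
    # Fixed hyperparameter, not searched:
    num_epochs=20,
)
```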


New Algorithms for Joint Hyperparameter and Neural Architecture Search

Three new algorithms are available to run a joint hyperparameter and neural architecture search. Hyperparameter optimisation is abbreviated as HPO, and neural architecture search as NAS.

* `agebo` (Aging Evolution for NAS with Bayesian Optimisation for HPO)
* `ambsmixed` (an extension of Asynchronous Model-Based Search for HPO + NAS)
* `regevomixed` (an extension of regularised evolution for HPO + NAS)


A run function to use data-parallelism with TensorFlow

A new run function to use data-parallelism during neural architecture search is available ([link to code](https://github.com/deephyper/deephyper/blob/c7608e0c61bd805c109145744b567cbb6cf01673/deephyper/nas/run/tf_distributed.py#L51)).

To use this function, pass it to the `--run` argument of the command line, for example:

```console
deephyper nas agebo ... --run deephyper.nas.run.tf_distributed.run ... --num-cpus-per-task 2 --num-gpus-per-task 2 --evaluator ray --address auto ...
```


This function enables new hyperparameters in `Problem.hyperparameters(...)`:

```python
...
Problem.hyperparameters(
    ...
    lsr_batch_size=True,
    lsr_learning_rate=True,
    warmup_lr=True,
    warmup_epochs=5,
    ...
)
...
```
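
Presumably, the `lsr_*` flags toggle a linear-scaling-rule adjustment of the batch size and learning rate with the number of data-parallel workers, and the `warmup_*` entries control a learning-rate warmup over the first epochs; the linked run-function code is the authoritative reference for their exact behavior.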


Optimization of the input pipeline for training

The data-ingestion pipeline was optimized to reduce overheads on GPU instances:

```python
self.dataset_train = (
    self.dataset_train.cache()  # cache samples after the first full pass over the data
    .shuffle(self.train_size, reshuffle_each_iteration=True)  # reshuffle every epoch
    .batch(self.batch_size)
    .prefetch(tf.data.AUTOTUNE)  # overlap data preparation with training on the GPU
    .repeat(self.num_epochs)
)
```


Easier model generation from Neural Architecture Search results

A new method is now available on the Problem object, `Problem.get_keras_model(arch_seq)`, to easily build a Keras model instance from an `arch_seq` (a list encoding a neural network).
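
For example, reusing the `arch_seq` shown earlier (a minimal sketch, assuming `Problem` is the `NaProblem` instance defined for the search):

```python
# Minimal sketch: rebuild the Keras model found by the search.
# `Problem` is assumed to be the NaProblem used for the search; the
# arch_seq below is the example list shown earlier in these notes.
arch_seq = [229, 0, 22, 0, 1, 235, 29, 1, 313, 1, 0, 116, 123, 1, 37, 0, 1, 388]
model = Problem.get_keras_model(arch_seq)
model.summary()
```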

Another top-k configurations excerpt:

```yaml
'0':
  learning_rate: 0.0001614947
  loss: log_cosh
'1':
  learning_rate: 0.0001265946
  loss: mae
```

0.9.0

We are happy to release the new version of DeepHyper with software quality updates.

- DeepHyper is now compatible with `Python>=3.10,<=3.13`.
- The [pip installation](https://deephyper.readthedocs.io/en/stable/install/pip.html) was updated.
- The package build tooling has been modernized (mainly `pyproject.toml`, hatchling, and ruff). (@wigging)
- The code base style and content were improved accordingly. (@wigging)
- Our [contributing guidelines](https://deephyper.readthedocs.io/en/stable/developer_guides/contributing.html) have been updated. (@wigging)
- The CI tests pipeline has been updated. (@wigging)

deephyper.evaluator

The `Evaluator` API has been updated to avoid potentially leaking threads when using `search.search(timeout=...)`. The `"serial"` method now only accepts a coroutine function (i.e., a function defined with `async def`). The running job received by the run-function `def run(job)` now has a `job.status` (`READY`, `RUNNING`, `DONE`, `CANCELLING`, `CANCELLED`) to handle cooperative cancellation: the user can regularly check this status to manage job termination (useful with parallel backends such as `"serial"`, `"thread"`, and `"mpicomm"`). The dumped `results.csv` now provides the final status of each job. (@Deathn0t)
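
As an illustration, a minimal sketch of a cancellation-aware run-function; the `JobStatus` import path is an assumption, and the loop body is a placeholder for real training work:

```python
import time

# Assumed import path for the status enum; check the Evaluator API docs.
from deephyper.evaluator import JobStatus

def run(job):
    objective = 0.0
    for step in range(100):
        # Cooperative cancellation: regularly check the status set by the
        # evaluator and stop early once cancellation has been requested.
        if job.status in (JobStatus.CANCELLING, JobStatus.CANCELLED):
            break
        time.sleep(0.1)  # placeholder for one unit of training work
        objective = float(step)  # placeholder objective
    return objective
```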

New examples will be provided in the documentation.

deephyper.evaluator.storage

Two new `Storage` backends are now available to benefit from shared memory in distributed parallel execution (see the sketch after this list):
- [MPIWinStorage](https://deephyper.readthedocs.io/en/stable/_autosummary/deephyper.evaluator.storage.MPIWinStorage.html#deephyper.evaluator.storage.MPIWinStorage): specific to `"mpicomm"` execution and based on one-sided communication (a.k.a. remote memory access, RMA). (@Deathn0t)
- [SharedMemoryStorage](https://deephyper.readthedocs.io/en/stable/_autosummary/deephyper.evaluator.storage.SharedMemoryStorage.html#deephyper.evaluator.storage.SharedMemoryStorage): specific to `"process"` execution. (@Deathn0t)
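
For instance, a hedged sketch of wiring `SharedMemoryStorage` into a `"process"` evaluator; the constructor arguments and the `storage` keyword are assumptions to be checked against the linked API pages:

```python
from deephyper.evaluator import Evaluator
from deephyper.evaluator.storage import SharedMemoryStorage

def run(job):
    # Toy objective over the sampled parameters.
    return sum(job.parameters.values())

# Hypothetical wiring: share job state between worker processes.
storage = SharedMemoryStorage()
evaluator = Evaluator.create(
    run,
    method="process",
    method_kwargs={"num_workers": 4, "storage": storage},
)
```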

Removing deprecated modules

The previously deprecated `deephyper.search` and `deephyper.problem` subpackages are now removed; `deephyper.hpo` should be used instead.
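
A hedged migration sketch (the removed import paths below are representative examples, and `HpProblem`/`CBO` are assumed to be their `deephyper.hpo` counterparts):

```python
# Before (removed in this release):
#   from deephyper.problem import HpProblem
#   from deephyper.search.hps import CBO
# After:
from deephyper.hpo import HpProblem, CBO

problem = HpProblem()
problem.add_hyperparameter((1e-4, 1e-1, "log-uniform"), "learning_rate")
```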

Spack installation

Our team is currently updating our [Spack installation](https://deephyper.readthedocs.io/en/stable/install/spack.html). If you are a Spack user and would like to experiment with it, please get in touch with us. (@bretteiffert)
