NengoDL
=======


2.2.1
-----

*Compatible with Nengo 2.8.0*

*Compatible with TensorFlow 1.4.0 - 2.0.0*

**Changed**

- Updated the testing framework to use the new Nengo pytest ecosystem
  (``pytest-rng``, ``pytest-allclose``, and ``pytest-nengo``).
- Disabled TensorFlow 2.0 behaviour (e.g. control flow v2) by default. This
  will be re-enabled when full TensorFlow 2.0 support is added.

**Fixed**

- Fixed the ``tensorflow-gpu`` installation check in pep517-style isolated
  build environments.

2.2.0
-----

*Compatible with Nengo 2.8.0*

*Compatible with TensorFlow 1.4.0 - 2.0.0*

**Added**

- Added a
  `new example <https://www.nengo.ai/nengo-dl/examples/tensorflow-models>`_
  demonstrating how to integrate a Keras model with NengoDL (thanks to new
  contributor `NickleDave <https://github.com/NickleDave>`_).
- Added support for TensorFlow 2.0 (pre-release).
- Added support for sparse transforms
  (see https://github.com/nengo/nengo/pull/1532).
- Added support for stateful Processes
  (see https://github.com/nengo/nengo/pull/1387).

**Changed**

- The default session will now be set to the NengoDL session before calling
  TensorNodes' ``post_build`` function.
- Renamed the pytest ``unroll_simulation`` argument to ``unroll-simulation``.
- Switched to the nengo-bones templating system for TravisCI config/scripts.
- NengoDL will now disable eager execution on import (and will probably not
  work properly if it is manually re-enabled).
- Increased the minimum NumPy version to 1.14.5 (required by TensorFlow 1.14).
- The minimum Nengo version is now 2.8.0.
- Updated the LinearFilter synapse implementation to match recent changes in
  Nengo core (see https://github.com/nengo/nengo/pull/1535).

**Fixed**

- Fixed TensorFlow seeding so that randomness can be reliably controlled by
  setting the Simulator seed.
- Improved robustness of the ``tensorflow-gpu`` installation check (in
  particular, it will now correctly detect GPU dists installed through
  ``conda``).
- Fixed inspection of ``TensorNode.tensor_func`` arguments for partial
  functions.
- The Simulator seed will now be deterministic for a given top-level Network
  seed.
- Raise a more informative error if the user attempts to pickle a Simulator
  (this is not possible to do with TensorFlow sessions; see
  `the documentation
  <https://www.nengo.ai/nengo-dl/simulator.html#saving-and-loading-parameters>`__
  for other methods of saving/loading a NengoDL model).

**Removed**

- NengoDL no longer supports Python 3.4 (official support for Python 3.4
  ended in March 2019).

2.1.1
-----

**Added**

- Added ``nengo_dl.obj`` as a shortcut alias for ``nengo_dl.objectives``.
- Added a tutorial for `Nengo users coming to NengoDL
  <https://www.nengo.ai/nengo-dl/examples/from-nengo.html>`_.
- Added a tutorial for `TensorFlow users coming to NengoDL
  <https://www.nengo.ai/nengo-dl/examples/from-tensorflow.html>`_.

**Changed**

- Increased the minimum ``progressbar2`` version to 3.39.0.
- We now only provide ``sdist`` releases, not ``bdist_wheel``. Due to the way
  the TensorFlow packages are organized, ``bdist_wheel`` forces any existing
  TensorFlow installations (e.g. ``tensorflow-gpu`` or ``tf-nightly``)
  to be overwritten by ``tensorflow``, which we don't want to do.

**Removed**

- Removed the ``nef-init`` tutorial (replaced by the new ``from-nengo``
  tutorial).

2.1.0
-----

**Added**

- Added a built-in objective to assist in applying regularization during
  training.
- Added the `keep_history config option
  <https://www.nengo.ai/nengo-dl/config.html#keep-history>`_, which can be set
  to ``False`` on Probes if only the data from the most recent simulation step
  is desired (as opposed to the default behaviour of keeping the data from
  all steps).
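
A minimal sketch of how this option might be set, assuming the
``configure_settings``/``net.config`` pattern described in the linked config
documentation (treat the exact names as illustrative, not canonical):

.. code-block:: python

    import nengo
    import nengo_dl

    with nengo.Network() as net:
        # enable the keep_history option on Probe config (assumed API)
        nengo_dl.configure_settings(keep_history=True)

        ens = nengo.Ensemble(10, 1)
        probe = nengo.Probe(ens)

        # keep only the data from the most recent simulation step
        net.config[probe].keep_history = False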

**Changed**

- Moved ``utils.mse`` to ``objectives.mse``.
- ``sim.loss`` will now apply ``nengo_dl.objectives.mse`` to all probes in
  ``data`` if no explicit ``objective`` is given (mirroring the default
  behaviour in ``sim.train``).
- The Spaun benchmark network will now be installed through pip rather than
  by manually cloning and importing the repo.

**Fixed**

- Fixed objective argument parsing when the objective is a callable class or
  method.
- Fixed a bug in the ``sim.train`` one-step synapse warning when explicitly
  specifying ``n_steps`` (rather than passing in ``data``).

**Deprecated**

- Passing ``"mse"`` as the objective in ``sim.train``/``sim.loss`` is no longer
  supported. Use the function ``nengo_dl.objectives.mse`` instead.
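
For reference, ``nengo_dl.objectives.mse`` computes an ordinary mean squared
error, with the extra convention (as we read the implementation; verify
against the API reference) that ``nan`` entries in the target array mark
values to ignore. A plain-NumPy sketch of that computation:

.. code-block:: python

    import numpy as np

    def mse(outputs, targets):
        """Mean squared error, with nan targets treated as "don't care"."""
        # nan targets contribute zero error (but still count in the mean)
        error = np.where(np.isnan(targets), 0.0, outputs - targets)
        return np.mean(error ** 2)

For example, ``mse(np.zeros(4), np.array([1.0, 1.0, np.nan, np.nan]))`` gives
``0.25 * (1 + 1 + 0 + 0) = 0.5``; the two ``nan`` entries add no error.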

2.0.0
-----

**Breaking API changes**

- ``sim.train`` and ``sim.loss`` now accept a single ``data`` argument, which
  combines the previous ``inputs`` and ``targets`` arguments. For example,

  .. code-block:: python

      sim.train({my_node: x}, {my_probe: y}, ...)

  is now equivalent to

  .. code-block:: python

      sim.train({my_node: x, my_probe: y}, ...)

  The motivation for this change is that not all objective functions require
  target values. Switching to the more generic ``data`` argument simplifies
  the API and makes it more flexible, allowing users to specify whatever
  training/loss data is actually required.
- The ``objective`` argument in ``sim.train``/``sim.loss`` is now always
  specified as a dictionary mapping probes to objective functions. Note that
  this was available but optional previously; it was also possible to pass
  a single value for the objective function, which would be applied to all
  probes in ``targets``. The latter is no longer supported. For example,

  .. code-block:: python

      sim.train(..., objective="mse")

  must now be explicitly specified as

  .. code-block:: python

      sim.train(..., objective={my_probe: "mse"})

  The motivation for this change is that, especially with the other new
  features introduced in the 2.0 update, there were a lot of different ways to
  specify the ``objective`` argument. This made it somewhat unclear how
  exactly this argument worked, and the automatic "broadcasting" was also
  ambiguous (e.g., should the single objective be applied to each probe
  individually, or to all of them together?). Making the argument explicit
  helps clarify the mental model.

**Added**

- An integer number of steps can now be passed for the
  ``sim.loss``/``sim.train`` ``data`` argument, if no input/target data is
  required.
- The ``objective`` dict in ``sim.train``/``sim.loss`` can now contain
  tuples of probes as the keys, in which case the objective function will be
  called with a corresponding tuple of probe/target values as each argument.
- Added the ``sim.run_batch`` function. This exposes all the functionality
  that the ``sim.run``/``sim.train``/``sim.loss`` functions are based on,
  allowing advanced users full control over how to run a NengoDL simulation.
- Added an option to disable the progress bar in ``sim.train`` and
  ``sim.loss``.
- Added a ``training`` argument to ``sim.loss`` to control whether the loss
  is evaluated in training or inference mode.
- Added support for the new Nengo ``Transform`` API (see
  https://github.com/nengo/nengo/pull/1481).
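
The tuple-of-probes behaviour described above can be sketched in plain Python.
This illustrates the dispatch logic implied by the changelog entry, not the
actual NengoDL internals:

.. code-block:: python

    def apply_objective(objective, outputs, targets):
        """Illustrative dispatch for an ``objective`` dict whose keys may be
        single probes or tuples of probes."""
        losses = {}
        for key, func in objective.items():
            if isinstance(key, tuple):
                # the objective receives corresponding tuples of
                # probe/target values as each argument
                losses[key] = func(
                    tuple(outputs[p] for p in key),
                    tuple(targets[p] for p in key),
                )
            else:
                losses[key] = func(outputs[key], targets[key])
        return losses

For example, with ``objective={(probe_a, probe_b): my_func}``, ``my_func`` is
called once with a two-element tuple of outputs and a two-element tuple of
targets, rather than once per probe.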

**Changed**

- Custom objective functions passed to ``sim.train``/``sim.loss`` can now
  accept a single argument (``my_objective(outputs): ...`` instead of
  ``my_objective(outputs, targets): ...``) if no target values are required.
- ``utils.minibatch_generator`` now accepts a single ``data`` argument rather
  than ``inputs`` and ``targets`` (see discussion in "Breaking API changes").
- ``sim.training_step`` is now the same as
  ``tf.train.get_or_create_global_step()``.
- Switched documentation to the new
  `nengo-sphinx-theme <https://github.com/nengo/nengo-sphinx-theme>`_.
- Reorganized the documentation into "User guide" and "API reference"
  sections.
- Improved the build speed of models with large constants
  (`#69 <https://github.com/nengo/nengo-dl/pull/69>`_).
- Moved op-specific merge logic into the ``OpBuilder`` classes.
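
One way the one-argument-vs-two-argument objective distinction above can be
handled is by inspecting the function's signature. This is an illustrative
sketch, not the actual NengoDL code:

.. code-block:: python

    import inspect

    def call_objective(func, outputs, targets):
        """Pass targets only if the objective accepts a second argument."""
        n_params = len(inspect.signature(func).parameters)
        return func(outputs, targets) if n_params >= 2 else func(outputs)

For example, ``call_objective(lambda o: o * 2, x, t)`` never touches ``t``,
while a two-argument objective receives both values.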

**Fixed**

- Ensure that the training step is always updated before TensorBoard events
  are added (previously it could update before or after, depending on the
  platform).

**Deprecated**

- The ``sim.run`` ``input_feeds`` argument has been renamed to ``data`` (for
  consistency with other simulator functions).

**Removed**

- NengoDL no longer supports Python 2 (see https://python3statement.org/ for
  more information).

1.2.1
-----

**Added**

- Added a warning if users run one-timestep training with a network containing
  synaptic filters.

**Changed**

- Test Simulator parameters are now controlled through pytest arguments,
  rather than environment variables.
- Disabled INFO-level TensorFlow logging (from the C side) on import. Added a
  NengoDL log message indicating the device the simulation will run on, as
  a more concise replacement.
- Boolean signals are now supported
  (`#61 <https://github.com/nengo/nengo-dl/issues/61>`_).

**Fixed**

- Avoid backpropagating NaN gradients from spiking neurons.
- Fixed an error that was thrown when calling ``get_tensor`` on a ``Signal``
  that was first initialized inside the simulation while loop
  (`#56 <https://github.com/nengo/nengo-dl/issues/56>`_).
- Allow TensorNodes to run in Nengo GUI.
- Avoid a bug in TensorFlow 1.11.0 that prevents certain models from running
  (see https://github.com/tensorflow/tensorflow/issues/23383). Note that this
  fix does not prevent the issue from occurring in user models, as we cannot
  control the model structure there. If your model hangs indefinitely when
  you call ``sim.train``, try downgrading to TensorFlow 1.10.0.
- Ensure that ``sim.training_step`` is always updated after the optimization
  step (in certain race conditions it would sometimes update partway through
  the optimization step).
