NetKet

3.6

New features
* Added a new 'Full statevector' model {class}`netket.models.LogStateVector` that stores the exponentially large state and can be used as an exact ansatz [1324](https://github.com/netket/netket/pull/1324).
* Added a new experimental {class}`~netket.experimental.driver.TDVPSchmitt` driver, implementing the signal-to-noise ratio TDVP regularisation by Schmitt and Heyl [1306](https://github.com/netket/netket/pull/1306).
* QGT classes accept a `chunk_size` parameter that overrides the `chunk_size` set by the variational state object [1347](https://github.com/netket/netket/pull/1347).
* {func}`~netket.optimizer.qgt.QGTJacobianPyTree` and {func}`~netket.optimizer.qgt.QGTJacobianDense` support diagonal entry regularisation with constant and scale-invariant contributions. They accept a new `diag_scale` argument to pass the scale-invariant component [1352](https://github.com/netket/netket/pull/1352).
* {func}`~netket.optimizer.SR` preconditioner now supports scheduling of the diagonal shift and scale regularisations [1364](https://github.com/netket/netket/pull/1364).
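The scheduling support above means the diagonal shift no longer has to be a constant: it can be a function of the optimisation step. A minimal plain-Python sketch of such a schedule (the `step -> value` signature is an assumption for illustration; this is not NetKet code):

```python
# Sketch: an exponentially decaying diagonal-shift schedule expressed as a
# plain callable of the optimisation step, the kind of object a
# schedule-aware SR preconditioner could consume.
def diag_shift_schedule(step, start=1e-2, decay=0.9):
    """Return the diagonal shift to use at a given optimisation step."""
    return start * decay**step

# The shift shrinks as the optimisation progresses.
shifts = [diag_shift_schedule(s) for s in range(5)]
```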

Improvements
* {meth}`~netket.vqs.ExactState.expect_and_grad` now returns a {class}`netket.stats.Stats` object that also contains the variance, as {class}`~netket.vqs.MCState` does [1325](https://github.com/netket/netket/pull/1325).
* Experimental RK solvers now store the error of the last timestep in the integrator state [1328](https://github.com/netket/netket/pull/1328).
* {class}`~netket.operator.PauliStrings` can now be constructed by passing a single string, instead of the previous requirement of a list of strings [1331](https://github.com/netket/netket/pull/1331).
* {class}`~flax.core.frozen_dict.FrozenDict` can now be logged to NetKet's loggers, meaning that one no longer needs to unfreeze the parameters before logging them [1338](https://github.com/netket/netket/pull/1338).
* Fermion operators are much more efficient and generate fewer connected elements [1279](https://github.com/netket/netket/pull/1279).
* NetKet is now fully PEP 621 compliant and no longer ships a `setup.py`, in favour of a `pyproject.toml` based on [hatchling](https://hatch.pypa.io/latest/). To install NetKet you should use a recent version of `pip` or a compatible tool such as poetry/hatch/flit [#1365](https://github.com/netket/netket/pull/1365).
* {func}`~netket.optimizer.qgt.QGTJacobianDense` can now be used with {class}`~netket.vqs.ExactState` [1358](https://github.com/netket/netket/pull/1358).
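For reference, a PEP 621 project built with hatchling declares its metadata and build backend roughly like this (a generic sketch, not NetKet's actual `pyproject.toml`):

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "my-package"
version = "0.1.0"
dependencies = ["numpy"]
```

A sufficiently recent `pip` reads this file directly, which is why no `setup.py` is needed.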


Bug Fixes
* {meth}`netket.vqs.ExactState.expect_and_grad` returned a scalar while {meth}`~netket.vqs.ExactState.expect` returned a {class}`netket.stats.Stats` object with 0 error. The inconsistency has been addressed and now they both return a `Stats` object. This changes the format of the files logged when running `VMC`, which will now store the average under `Mean` instead of `value` [1325](https://github.com/netket/netket/pull/1325).
* {func}`netket.optimizer.qgt.QGTJacobianDense` now returns the correct output for models with mixed real and complex parameters [1397](https://github.com/netket/netket/pull/1397).

Deprecations
* The `rescale_shift` argument of {func}`~netket.optimizer.qgt.QGTJacobianPyTree` and {func}`~netket.optimizer.qgt.QGTJacobianDense` is deprecated in favour of the more flexible syntax with `diag_scale`. `rescale_shift=False` should be removed. `rescale_shift=True` should be replaced with `diag_scale=old_diag_shift`. [1352](https://github.com/netket/netket/pull/1352).
* The call signature of preconditioners passed to {class}`netket.driver.VMC` and other drivers has changed as a consequence of scheduling, and preconditioners should now accept an extra optional argument `step`. The old signature is still supported but is deprecated and will eventually be removed [1364](https://github.com/netket/netket/pull/1364).
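Concretely, a custom preconditioner only needs to accept the extra optional `step` argument to follow the new calling convention. A hedged sketch (the exact signature NetKet expects may differ in detail; this is an illustration, not NetKet code):

```python
def identity_preconditioner(vstate, gradient, step=None):
    """A trivial preconditioner that returns the gradient unchanged.

    The optional `step` argument is the new part of the calling
    convention: schedule-aware preconditioners can use it to vary,
    for example, their diagonal shift over the optimisation.
    """
    return gradient
```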

3.5.2

Bug Fixes
* {class}`~netket.operator.PauliStrings` now supports the subtraction operator [1336](https://github.com/netket/netket/pull/1336).
* Autoregressive networks had a default activation function (`selu`) that did not act on the imaginary part of the inputs. We now changed that, and the activation function is `reim_selu`, which acts independently on the real and imaginary part. This changes nothing for real parameters, but improves the defaults for complex ones [1371](https://github.com/netket/netket/pull/1371).
* A **major performance degradation** that arose when using {class}`~netket.operator.LocalOperator` has been addressed. The bug caused our operators to be recompiled every time they were queried, imposing a large overhead [1377](https://github.com/netket/netket/pull/1377).
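To illustrate what "acts independently on the real and imaginary part" means, here is a minimal NumPy sketch of a selu applied re/im-wise (the constants are the standard selu ones; this is an illustration of the idea, not NetKet's implementation):

```python
import numpy as np

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # Standard selu on real inputs.
    return scale * np.where(x > 0, x, alpha * np.expm1(x))

def reim_selu(z):
    # Apply selu independently to the real and imaginary parts,
    # so the imaginary part is no longer ignored.
    return selu(np.real(z)) + 1j * selu(np.imag(z))

z = np.array([1.0 + 2.0j, -1.0 - 0.5j])
out = reim_selu(z)
```

For purely real inputs `reim_selu` reduces to `selu`, which is why the change does not affect real-parameter models.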

3.5.1

New features
* Added a new configuration option `netket.config.netket_experimental_disable_ode_jit` to disable jitting of the ODE solvers. This can be useful to avoid hangs that might occur on some particular systems when working on GPUs [1304](https://github.com/netket/netket/pull/1304).

Bug Fixes
* Continuous operators now work correctly when `chunk_size != None`. This was broken in v3.5 [1316](https://github.com/netket/netket/pull/1316).
* Fixed a bug ([1101](https://github.com/netket/netket/pull/1101)) that crashed NetKet when trying to take the product of two different Hilbert spaces, because the logic to build a `TensorHilbert` entered an endless loop [#1321](https://github.com/netket/netket/pull/1321).

3.5

[GitHub commits](https://github.com/netket/netket/compare/v3.4...master).

This release adds the functionality needed to run TDVP for neural networks with real/non-holomorphic parameters, an experimental HDF5 logger, and an `MCState` method to compute the local estimators of an observable for a set of samples.

This release also drops support for older versions of flax, adopting the new interface that fully supports complex-valued neural networks. Deprecation warnings might be raised if you were using some layers from `netket.nn` that are now available in flax.

A new, more accurate, estimation of the autocorrelation time has been introduced, but it is disabled by default. We welcome feedback.
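The FFT-based estimator computes the full autocorrelation function of a Markov chain in O(n log n). A minimal NumPy sketch of the underlying idea (an illustration only, not NetKet's implementation):

```python
import numpy as np

def autocorrelation(x):
    """Normalised autocorrelation function of a 1D series via FFT."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Zero-pad to 2n so the circular correlation computed by the FFT
    # does not wrap around and contaminate the estimate.
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]

rng = np.random.default_rng(0)
acf = autocorrelation(rng.normal(size=1000))
```

An integrated autocorrelation time can then be obtained by summing `acf` up to a suitable cutoff; for i.i.d. samples the function drops to noise level immediately after lag 0.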

New features

* The method {meth}`~netket.vqs.MCState.local_estimators` has been added, which returns the local estimators `O_loc(s) = 〈s|O|ψ〉 / 〈s|ψ〉` (which are known as local energies if `O` is the Hamiltonian). [1179](https://github.com/netket/netket/pull/1179)
* The permutation equivariant {class}`netket.models.DeepSetRelDistance` for use with particles in periodic potentials has been added together with an example. [1199](https://github.com/netket/netket/pull/1199)
* The class {class}`HDF5Log` has been added to the experimental submodule. This logger writes log data and variational state variables into a single HDF5 file. [1200](https://github.com/netket/netket/issues/1200)
* Added a new method {meth}`~netket.logging.RuntimeLog.serialize` to store the content of the logger to disk [1255](https://github.com/netket/netket/issues/1255).
* New {class}`netket.callbacks.InvalidLossStopping` which stops optimisation if the loss function reaches a `NaN` value. An optional `patience` argument can be set. [1259](https://github.com/netket/netket/pull/1259)
* Added a new method {meth}`netket.graph.SpaceGroupBuilder.one_arm_irreps` to construct GCNN projection coefficients to project on single-wave-vector components of irreducible representations. [1260](https://github.com/netket/netket/issues/1260).
* New method {meth}`~netket.vqs.MCState.expect_and_forces` has been added, which can be used to compute the variational forces generated by an operator, instead of only the (real-valued) gradient of an expectation value. This in general is needed to write the TDVP equation or other similar equations. [1261](https://github.com/netket/netket/issues/1261)
* TDVP now works for real-parametrized wavefunctions as well as non-holomorphic ones because it makes use of {meth}`~netket.vqs.MCState.expect_and_forces`. [1261](https://github.com/netket/netket/issues/1261)
* New method {meth}`~netket.utils.group.Permutation.apply_to_id` can be used to apply a permutation (or a permutation group) to one or more lattice indices. [1293](https://github.com/netket/netket/issues/1293)
* It is now possible to disable MPI by setting the environment variable `NETKET_MPI`. This is useful in cases where mpi4py crashes upon load [1254](https://github.com/netket/netket/issues/1254).
* The new function {func}`netket.nn.binary_encoding` can be used to encode a set of samples according to the binary shape defined by a Hilbert space. It should be used similarly to {func}`flax.linen.one_hot` and works with non-homogeneous Hilbert spaces [1209](https://github.com/netket/netket/issues/1209).
* A new method to estimate the correlation time in Markov chain Monte Carlo (MCMC) sampling has been added to the {func}`netket.stats.statistics` function, which uses the full FFT transform of the input data. The new method is not enabled by default, but can be turned on by setting the `NETKET_EXPERIMENTAL_FFT_AUTOCORRELATION` environment variable to `1`. In the future we might turn this on by default [1150](https://github.com/netket/netket/issues/1150).
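The local-estimator definition from the first item above can be made concrete for a dense operator: `O_loc(s) = Σ_{s'} O[s, s'] ψ(s') / ψ(s)`, and its average over the Born distribution `|ψ(s)|²` recovers the expectation value. A small NumPy sketch (an illustration of the identity, not NetKet code):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# A random complex state and a random Hermitian operator.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
O = a + a.conj().T

# Local estimators over all basis states: O_loc(s) = <s|O|psi> / <s|psi>.
o_loc = (O @ psi) / psi

# Born-rule probability of each basis state.
p = np.abs(psi) ** 2
p /= p.sum()

# The |psi|^2-weighted average of O_loc equals <psi|O|psi> / <psi|psi>.
expval = np.sum(p * o_loc)
exact = (psi.conj() @ O @ psi) / (psi.conj() @ psi)
```

In Monte Carlo sampling the exact weighted sum is replaced by an average of `O_loc` over configurations drawn from `|ψ|²`, which is what `local_estimators` provides.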

Dependencies
* NetKet now requires at least Flax v0.5

Deprecations

* `netket.nn.Module` and `netket.nn.compact` have been deprecated. Please use {class}`flax.linen.Module` and {func}`flax.linen.compact` instead.
* `netket.nn.Dense(dtype=mydtype)` and related modules (`Conv`, `DenseGeneral` and `ConvGeneral`) are deprecated. Please use `flax.linen.***(param_dtype=mydtype)` instead. Before flax v0.5 those modules did not properly support complex numbers, but starting with flax v0.5 they do, so we have removed our linear-module wrappers and encourage you to use them. Note that the `dtype` argument previously used by NetKet should be changed to `param_dtype` to maintain the same effect. [...](https://github.com/netket/netket/pull/...)

Bug Fixes
* Fixed bug where a `netket.operator.LocalOperator` representing the identity would lead to a crash. [1197](https://github.com/netket/netket/pull/1197)
* Fixed a bug where fermionic operators {class}`nkx.operator.FermionOperator2nd` would not be hermitian even when they should have been. [1233](https://github.com/netket/netket/pull/1233)
* Fix serialization of some arrays with complex dtype in `RuntimeLog` and `JsonLog` [1258](https://github.com/netket/netket/pull/1258)
* Fixed bug where the {class}`netket.callbacks.EarlyStopping` callback would not work as intended when hitting a local minima. [1238](https://github.com/netket/netket/pull/1238)
* `chunk_size` and the random seed of Monte Carlo variational states are now serialised. States serialised prior to this change can no longer be deserialised [1247](https://github.com/netket/netket/pull/1247)
* Continuous-space hamiltonians now work correctly with neural networks with complex parameters [1273](https://github.com/netket/netket/pull/1273).
* NetKet now works under MPI with recent versions of jax (>=0.3.15) [1291](https://github.com/netket/netket/pull/1291).

3.4.2

[GitHub commits](https://github.com/netket/netket/compare/v3.4.1...v3.4.2).

Internal Changes
* Several deprecation warnings related to `jax.experimental.loops` being deprecated have been resolved by changing those calls to {func}`jax.lax.fori_loop`. Jax should feel more tranquillo now. [1172](https://github.com/netket/netket/pull/1172)

Bug Fixes
* Several _type promotion_ bugs that would end up promoting single-precision models to double-precision have been squashed. Those involved `nk.operator.Ising` and `nk.operator.BoseHubbard`[1180](https://github.com/netket/netket/pull/1180), `nkx.TDVP` [#1186](https://github.com/netket/netket/pull/1186) and continuous-space samplers and operators [#1187](https://github.com/netket/netket/pull/1187).
* `nk.operator.Ising`, `nk.operator.BoseHubbard` and `nk.operator.LocalLiouvillian` now return connected samples with the same precision (`dtype`) as the input samples. This makes it possible to preserve low precision throughout the computation when using those operators. [1180](https://github.com/netket/netket/pull/1180)
* `nkx.TDVP` now updates the expectation value displayed in the progress bar at every time step. [1182](https://github.com/netket/netket/pull/1182)
* Fixed bug [1192](https://github.com/netket/netket/pull/1192) that affected most operators (`nk.operator.LocalOperator`) constructed on non-homogeneous hilbert spaces. This bug was first introduced in version 3.3.4 and affects all subsequent versions until 3.4.2. [#1193](https://github.com/netket/netket/pull/1193)
* It is now possible to add an operator and its lazy transpose/hermitian conjugate [1194](https://github.com/netket/netket/pull/1194).

3.4.1

[GitHub commits](https://github.com/netket/netket/compare/v3.4...v3.4.1).

Internal Changes
* Several deprecation warnings related to `jax.tree_util.tree_multimap` being deprecated have been resolved by changing those calls to `jax.tree_util.tree_map`. Jax should feel more tranquillo now. [1156](https://github.com/netket/netket/pull/1156)

Bug Fixes
* ~~`TDVP` now supports models with real parameters such as `RBMModPhase`. [1139](https://github.com/netket/netket/pull/1139)~~ (not yet fixed)
* An error is now raised when a user attempts to construct a `LocalOperator` with a matrix of the wrong size (bug [1157](https://github.com/netket/netket/pull/1157)). [#1158](https://github.com/netket/netket/pull/1158)
* A bug where `QGTJacobian` could not be used with models in single precision has been addressed (bug [1153](https://github.com/netket/netket/pull/1153)). [#1155](https://github.com/netket/netket/pull/1155)
