NetKet


3.10

The highlights of this version are a new experimental driver to optimise networks with millions of parameters using SR, and new utility functions to convert a pyscf molecule to a NetKet Hamiltonian.

Read below for a more detailed changelog

New Features

* Added new {class}`netket.experimental.driver.VMC_SRt` driver, which yields parameter updates identical to those of standard Stochastic Reconfiguration with diagonal shift regularization. It is therefore essentially equivalent to using the standard {class}`netket.driver.VMC` with the {class}`netket.optimizer.SR` preconditioner. The advantage of this method is that it requires inverting a matrix whose side is the number of samples instead of the number of parameters, making this formulation particularly useful in typical deep learning scenarios (see the sketch after this list) [1623](https://github.com/netket/netket/pull/1623).
* Added a new function {func}`netket.experimental.operator.from_pyscf_molecule` to construct the electronic Hamiltonian of a given molecule specified through pyscf. This is accompanied by {func}`netket.experimental.operator.pyscf.TV_from_pyscf_molecule` to compute the T and V tensors of a pyscf molecule [1602](https://github.com/netket/netket/pull/1602).
* Added the operator computing the Rényi2 entanglement entropy on Hilbert spaces with discrete dofs [1591](https://github.com/netket/netket/pull/1591).
* It is now possible to disable NetKet's default double precision and force all calculations to be performed in single precision by setting the environment variable/configuration flag `NETKET_ENABLE_X64=0`, which also sets `JAX_ENABLE_X64=0`. When running with this flag, the number of warnings printed by jax is also considerably reduced [1544](https://github.com/netket/netket/pull/1544).
* Added new shortcuts to build the identity operator as {func}`netket.operator.spin.identity` and {func}`netket.operator.boson.identity` [1601](https://github.com/netket/netket/pull/1601).
* Added new {class}`netket.hilbert.Particle` constructor that only takes as input the number of dimensions of the system [1577](https://github.com/netket/netket/pull/1577).
* Added new {class}`netket.experimental.models.Slater2nd` model implementing a Slater ansatz [1622](https://github.com/netket/netket/pull/1622).
* Added new {func}`netket.jax.logdet_cmplx` function to compute the complex log-determinant of a batch of matrices [1622](https://github.com/netket/netket/pull/1622).
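
As an illustration of the new kernel-trick SR driver, here is a minimal, hypothetical usage sketch. The toy problem (a transverse-field Ising chain with an RBM ansatz) and the exact keyword names (`diag_shift`, `variational_state`) are assumptions made for this example, not taken from this changelog.

```python
# Minimal sketch of the new kernel-trick SR driver (toy problem and keyword
# arguments are assumptions for illustration).
import netket as nk
import netket.experimental as nkx
import optax

# Toy problem: transverse-field Ising chain
g = nk.graph.Chain(length=16, pbc=True)
hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)
H = nk.operator.Ising(hilbert=hi, graph=g, h=1.0)

# Variational ansatz and Monte Carlo variational state
sampler = nk.sampler.MetropolisLocal(hi)
vstate = nk.vqs.MCState(sampler, nk.models.RBM(alpha=1), n_samples=1024)

# The SRt formulation solves a linear system whose size is the number of
# samples, not the number of parameters, so it scales to very large networks.
driver = nkx.driver.VMC_SRt(
    H,
    optax.sgd(learning_rate=0.01),
    diag_shift=1e-4,
    variational_state=vstate,
)
driver.run(n_iter=300)
```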

Breaking changes

* {class}`netket.experimental.hilbert.SpinOrbitalFermions` attributes have been changed: {attr}`~netket.experimental.hilbert.SpinOrbitalFermions.n_fermions` now always returns an integer with the total number of fermions in the system (if specified). A new attribute {attr}`~netket.experimental.hilbert.SpinOrbitalFermions.n_fermions_per_spin` has been introduced that returns the same tuple of fermion numbers per spin subsector as before. A few fields are now marked as read-only, as modifications were silently ignored (see the illustration after this list) [1622](https://github.com/netket/netket/pull/1622).
* The {class}`netket.nn.blocks.SymmExpSum` layer is now normalised by the number of elements in the symmetry group in order to maintain a reasonable normalisation [1624](https://github.com/netket/netket/pull/1624).
* The labelling of spin sectors in {func}`netket.experimental.operator.fermion.create` and similar operators has now changed from the eigenvalue of the spin operator ({math}`\pm 1/2` and so on) to the eigenvalue of the Pauli matrices ({math}`\pm 1` and so on) [1637](https://github.com/netket/netket/pull/1637).
* The connected elements and expectation values of all non-symmetric fermionic operators have been changed in order to be correct [1640](https://github.com/netket/netket/pull/1640).
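
To make the attribute change concrete, here is a small hypothetical illustration; the constructor call (in particular passing a per-spin tuple as `n_fermions`) follows the pre-existing API and is an assumption for this example.

```python
# Hypothetical illustration of the renamed SpinOrbitalFermions attributes.
import netket.experimental as nkx

# 3 orbitals, spin-1/2 fermions, 2 spin-up and 1 spin-down
# (constructor arguments are assumed for illustration).
hi = nkx.hilbert.SpinOrbitalFermions(n_orbitals=3, s=1 / 2, n_fermions=(2, 1))

print(hi.n_fermions)           # now an integer: the total number of fermions, 3
print(hi.n_fermions_per_spin)  # the per-subsector tuple previously returned by n_fermions: (2, 1)
```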

Improvements

* Considerably reduced the memory consumption of {class}`~netket.operator.LocalOperator`, especially in the case of large local Hilbert spaces. Also leveraged sparsity in the terms to speed up compilation (`_setup`) in the same cases [1558](https://github.com/netket/netket/pull/1558).
* {class}`netket.nn.blocks.SymmExpSum` now works with inputs of arbitrary dimensions, while previously it raised an error for all inputs that were not 2D [1616](https://github.com/netket/netket/pull/1616).
* Stopped using `FrozenDict` from `flax`; the variational state now returns standard dictionaries for the variational parameters. This makes it much easier to edit parameters (see the example after this list) [1547](https://github.com/netket/netket/pull/1547).
* Vastly improved, finally readable documentation of all Flax modules and neural network architectures [1641](https://github.com/netket/netket/pull/1641).
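
Since the variational parameters are now plain nested dictionaries, leaves can be edited with ordinary dict operations. A small sketch, reusing the `vstate` from the earlier example; the key names are hypothetical and depend on the model.

```python
# Sketch: variational parameters are now plain nested dicts, so leaves can be
# edited in place (the key names below are hypothetical and model-dependent).
import jax.numpy as jnp

params = vstate.parameters                     # a standard nested dict of jax arrays
params["Dense"]["bias"] = jnp.zeros_like(params["Dense"]["bias"])
vstate.parameters = params                     # assign the edited dictionary back
```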

Bug Fixes

* Fixed minor bug where {class}`netket.operator.LocalOperator` could not be built with `np.matrix` objects obtained by converting scipy sparse matrices to dense [1597](https://github.com/netket/netket/pull/1597).
* Raise a correct error instead of an unintelligible one when multiplying {class}`netket.experimental.operator.FermionOperator2nd` with other operators [1599](https://github.com/netket/netket/pull/1599).
* Do not rescale the output of {func}`netket.jax.jacobian` by the square root of number of samples. Previously, when specifying `center=True` we were incorrectly rescaling the output [1614](https://github.com/netket/netket/pull/1614).
* Fix bug in {class}`netket.operator.PauliStrings` that caused the dtype to get out of sync with the dtype of the internal arrays, causing errors when manipulating them symbolically [1619](https://github.com/netket/netket/pull/1619).
* Fix bug that prevented the use of {class}`netket.operator.DiscreteJaxOperator` as observables with all drivers [1625](https://github.com/netket/netket/pull/1625).
* The fermionic operator `get_conn` method was returning values as if the operator were transposed; this has now been fixed. This will change the expectation values of non-symmetric fermionic operators, but hopefully nobody was relying on them [1640](https://github.com/netket/netket/pull/1640).

3.9.2

This release requires at least Python 3.9 and Jax 0.4.

Bug Fixes

* Fix a bug introduced in version 3.9 for {class}`netket.experimental.driver.TDVPSchmitt` which resulted in the wrong dynamics [1551](https://github.com/netket/netket/pull/1551).

3.9.1

Bug Fixes

* Fix a bug in the construction of {class}`netket.operator.PauliStringsJax` in some cases [1539](https://github.com/netket/netket/pull/1539).

3.9

This release requires Python 3.8 and Jax 0.4.

New Features
* {class}`netket.callbacks.EarlyStopping` now supports relative tolerances for determining when to stop [1481](https://github.com/netket/netket/pull/1481).
* {class}`netket.callbacks.ConvergenceStopping` has been added, which can stop a driver when the loss function reaches a certain threshold [1481](https://github.com/netket/netket/pull/1481).
* A new base class {class}`netket.operator.DiscreteJaxOperator` has been added, which will be used as a base class for a set of operators that are jax-compatible [1506](https://github.com/netket/netket/pull/1506).
* {func}`netket.sampler.rules.HamiltonianRule` has been split into two implementations, {class}`netket.sampler.rules.HamiltonianRuleJax` and {class}`netket.sampler.rules.HamiltonianRuleNumba`, which are to be used for {class}`~netket.operator.DiscreteJaxOperator` and for standard numba-based {class}`~netket.operator.DiscreteOperator`s, respectively. The user-facing API is unchanged, but the returned type might now depend on the input operator [1514](https://github.com/netket/netket/pull/1514).
* {class}`netket.operator.PauliStringsJax` is a new operator that behaves as {class}`netket.operator.PauliStrings` but is Jax-compatible, meaning that it can be used inside of jax-jitted contexts and works better with chunking. It can also be constructed starting from a standard {class}`netket.operator.PauliStrings` operator by calling `operator.to_jax_operator()` [1506](https://github.com/netket/netket/pull/1506).
* {class}`netket.operator.IsingJax` is a new operator that behaves as `netket.operator.Ising` but is Jax-compatible, meaning that it can be used inside of jax-jitted contexts and works better with chunking. It can also be constructed starting from a standard Ising operator by calling `operator.to_jax_operator()` [1506](https://github.com/netket/netket/pull/1506).
* Added a new method {meth}`netket.operator.LocalOperator.to_pauli_strings` to convert {class}`netket.operator.LocalOperator` to {class}`netket.operator.PauliStrings`. As `PauliStrings` can be converted to Jax operators, this now makes it possible to convert arbitrary operators to Jax-compatible ones (see the sketch after this list) [1515](https://github.com/netket/netket/pull/1515).
* The constructor of {meth}`~netket.optimizer.qgt.QGTOnTheFly` now takes an optional boolean argument `holomorphic : Optional[bool]` in line with the other geometric tensor implementations. This flag does not affect the computation algorithm, but will be used to raise an error if the user attempts to call {meth}`~netket.optimizer.qgt.QGTOnTheFly.to_dense()` with a non-holomorphic ansatz. While this might break past code, the numerical results were incorrect.
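
A short sketch of the conversion chain enabled by these additions; the toy Hamiltonian is an assumption made for illustration.

```python
# Sketch: convert a LocalOperator to a Jax-compatible operator via PauliStrings.
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)

# A toy LocalOperator (sum of sigma_x on every site)
H = nk.operator.spin.sigmax(hi, 0)
for i in range(1, hi.size):
    H = H + nk.operator.spin.sigmax(hi, i)

H_pauli = H.to_pauli_strings()     # netket.operator.PauliStrings
H_jax = H_pauli.to_jax_operator()  # Jax-compatible, usable inside jitted code
```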

Breaking Changes
* The first two axes in the output of the samplers have been swapped; samples are now of shape `(n_chains, n_samples_per_chain, ...)`, consistent with `netket.stats.statistics`. Custom samplers need to be updated to return arrays of shape `(n_chains, n_samples_per_chain, ...)` instead of `(n_samples_per_chain, n_chains, ...)` (see the snippet after this list). [1502](https://github.com/netket/netket/pull/1502)
* The tolerance arguments of {class}`~netket.experimental.dynamics.TDVPSchmitt` have all been renamed to names that are understandable without inspecting the source code. In particular, `num_tol` has been renamed to `rcond`, `svd_tol` to `rcond_smooth` and `noise_tol` to `noise_atol`.
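
The axis swap can be checked directly on the samples of a variational state; a minimal sketch, assuming a small spin model for illustration.

```python
# Sketch: samples now have shape (n_chains, n_samples_per_chain, hilbert_size).
import netket as nk

hi = nk.hilbert.Spin(s=1 / 2, N=4)
sampler = nk.sampler.MetropolisLocal(hi, n_chains=8)
vstate = nk.vqs.MCState(sampler, nk.models.RBM(alpha=1), n_samples=64)

samples = vstate.samples
print(samples.shape)   # (8, 8, 4): n_chains first, then samples per chain
```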

Deprecations
* `netket.vqs.ExactState` has been renamed to {class}`netket.vqs.FullSumState` to better reflect what it does. Using the old name will now raise a warning [1477](https://github.com/netket/netket/pull/1477).


Known Issues
* The new `Jax`-friendly operators do not work with {class}`netket.vqs.FullSumState` because they are not hashable. This will be fixed in a minor patch (coming soon).

3.8

This is the last NetKet release to support Python 3.7 and Jax 0.3.
Starting with NetKet 3.9 we will require Jax 0.4, which in turn requires Python 3.8 (and soon 3.9).

New features
* {class}`netket.hilbert.TensorHilbert` has been generalised and now works with discrete, continuous, or a combination of discrete and continuous Hilbert spaces [1437](https://github.com/netket/netket/pull/1437).
* NetKet is now compatible with Numba 0.57 and therefore with Python 3.11 [1462](https://github.com/netket/netket/pull/1462).
* The new Metropolis sampling transition proposal rule {func}`netket.sampler.rules.MultipleRules` has been added, which can be used to pick from different transition proposals according to a certain probability distribution (see the sketch after this list).
* The new Metropolis sampling transition proposal rule {func}`netket.sampler.rules.TensorRule` has been added, which can be used to combine different transition proposals acting on different subspaces of the Hilbert space.
* The new Metropolis sampling transition proposal rule {func}`netket.sampler.rules.FixedRule` has been added, which does not change the configuration.
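
A hypothetical sketch of composing transition rules with `MultipleRules`; the argument names `rules` and `probabilities` are assumptions made for this example.

```python
# Hypothetical sketch: mix two transition proposals with given probabilities.
import netket as nk

g = nk.graph.Chain(length=8, pbc=True)
hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)

rule = nk.sampler.rules.MultipleRules(
    rules=[nk.sampler.rules.LocalRule(), nk.sampler.rules.ExchangeRule(graph=g)],
    probabilities=[0.7, 0.3],
)
sampler = nk.sampler.MetropolisSampler(hi, rule)
```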

Deprecations
* The non-public API function to select the default QGT mode for `QGTJacobian`, located at `netket.optimizer.qgt.qgt_jacobian_common.choose_jacobian_mode`, has been renamed and made part of the public API as `netket.jax.jacobian_default_mode`. If you were using this function, please update your code [1473](https://github.com/netket/netket/pull/1473).

Bug Fixes
* Fix issue [1435](https://github.com/netket/netket/issues/1435), where a 0-tangent originating from integer samples was not correctly handled by {func}`netket.jax.vjp` [#1436](https://github.com/netket/netket/pull/1436).
* Fixed a bug in {class}`netket.sampler.rules.LangevinRule` when setting `chunk_size` [1465](https://github.com/netket/netket/pull/1465).

Improvements
* {class}`netket.operator.ContinuousOperator` has been improved: continuous operators now correctly test for equality and generate a consistent hash. Moreover, the internal logic of {class}`netket.operator.SumOperator` and {class}`netket.operator.Potential` has been improved, leading to fewer recompilations when identical operators are constructed again. A few new attributes of those operators have also been exposed [1440](https://github.com/netket/netket/pull/1440).
* {func}`netket.nn.to_array` accepts an optional keyword argument `chunk_size`, and related methods on variational states now use the chunking specified in the variational state when generating the dense array (see the example below) [1470](https://github.com/netket/netket/pull/1470).
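
For instance, the chunking of the dense conversion can now be controlled through the variational state; a minimal sketch, assuming a variational state `vstate` like those constructed in the earlier snippets.

```python
# Sketch: the dense wavefunction is now computed using the state's chunk_size.
vstate.chunk_size = 4096
psi = vstate.to_array()   # evaluates the network in chunks of 4096 configurations
```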

Breaking Changes
* Jax version `0.4` is now required, meaning that NetKet no longer works on Python 3.7.

3.7

New features
* Input and hidden layer masks can now be specified for {class}`netket.models.GCNN` [1387](https://github.com/netket/netket/pull/1387).
* Support for Jax 0.4 added [1416](https://github.com/netket/netket/pull/1416).
* Added a continuous-space Langevin-dynamics transition rule {class}`netket.sampler.rules.LangevinRule` and its corresponding shorthand for constructing the MCMC sampler {func}`netket.sampler.MetropolisAdjustedLangevin` [1413](https://github.com/netket/netket/pull/1413).
* Added an experimental Quantum State Reconstruction driver at {class}`netket.experimental.QSR` to reconstruct states from data coming from quantum computers or simulators [1427](https://github.com/netket/netket/pull/1427).
* Added `netket.nn.blocks.SymmExpSum` flax module that symmetrizes a bare neural network module by summing the wave-function over all symmetry permutations given by a certain symmetry group (see the sketch below) [1433](https://github.com/netket/netket/pull/1433).
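
A hypothetical sketch of wrapping a bare network with `SymmExpSum`; the keyword `symm_group` and the use of the lattice translation group are assumptions made for this example.

```python
# Hypothetical sketch: symmetrise a bare network over the chain's translations.
import netket as nk

g = nk.graph.Chain(length=8, pbc=True)
hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)

bare = nk.models.RBM(alpha=1)
symmetric = nk.nn.blocks.SymmExpSum(module=bare, symm_group=g.translation_group())

sampler = nk.sampler.MetropolisLocal(hi)
vstate = nk.vqs.MCState(sampler, symmetric, n_samples=1024)
```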

Breaking Changes
* Parameters of models {class}`netket.models.GCNN` and layers {class}`netket.nn.DenseSymm` and {class}`netket.nn.DenseEquivariant` are stored as an array of shape `[features, in_features, mask_size]`. Masked parameters are now excluded from the model instead of being multiplied by zero [1387](https://github.com/netket/netket/pull/1387).

Improvements
* The underlying extension API for Autoregressive models that can be used with Ancestral/Autoregressive samplers has been simplified and stabilized and will be documented as part of the public API. For most models, you should now inherit from {class}`netket.models.AbstractARNN` and define the method {meth}`~netket.models.AbstractARNN.conditionals_log_psi`. For additional performance, implementers can also redefine {meth}`~netket.models.AbstractARNN.__call__` and {meth}`~netket.models.AbstractARNN.conditional` but this should not be needed in general. This will cause some breaking changes if you were relying on the old undocumented interface [1361](https://github.com/netket/netket/pull/1361).
* {class}`netket.operator.PauliStrings` now works with non-homogeneous Hilbert spaces, such as those obtained by taking the tensor product of multiple Hilbert spaces [1411](https://github.com/netket/netket/pull/1411).
* The {class}`netket.operator.LocalOperator` now keeps sparse matrices sparse, leading to faster algebraic manipulations of those objects. The overall computational and memory cost is, however, equivalent when running VMC calculations. All pre-constructed operators such as {func}`netket.operator.spin.sigmax` and {func}`netket.operator.boson.create` now build sparse operators [1422](https://github.com/netket/netket/pull/1422).
* When multiplying an operator by its conjugate transpose, NetKet no longer returns a lazy {class}`~netket.operator.Squared` object if the operator is Hermitian. This avoids checking whether the object is Hermitian, which greatly speeds up algebraic manipulation of operators, and returns unbiased expectation values [1423](https://github.com/netket/netket/pull/1423).

Bug Fixes
* Fixed a bug where {meth}`netket.hilbert.Particle.random_state` could not be jit-compiled, and therefore could not be used in the sampling [1401](https://github.com/netket/netket/pull/1401).
* Fixed bug [1405](https://github.com/netket/netket/pull/1405) where {meth}`netket.nn.DenseSymm` and {meth}`netket.models.GCNN` did not work with, or did not correctly handle, masks [#1428](https://github.com/netket/netket/pull/1428).

Deprecations
* {meth}`netket.models.AbstractARNN._conditional` has been removed from the API, and its use will throw a deprecation warning. Update your ARNN models accordingly! [1361](https://github.com/netket/netket/pull/1361).
* Several undocumented internal methods from {class}`netket.models.AbstractARNN` have been removed [1361](https://github.com/netket/netket/pull/1361).
