[GitHub commits](https://github.com/netket/netket/compare/v3.4...master).
This release adds support for running TDVP with neural networks that have real or non-holomorphic parameters, an experimental HDF5 logger, and an `MCState` method to compute the local estimators of an observable for a set of samples.
This release also drops support for older versions of Flax and adopts the new interface, which fully supports complex-valued neural networks. Deprecation warnings may be raised if you were using layers from `netket.nn` that are now available in Flax.
A new, more accurate estimator of the autocorrelation time has been introduced, but it is disabled by default. We welcome feedback.
### New features
* The method {meth}`~netket.vqs.MCState.local_estimators` has been added, which returns the local estimators `O_loc(s) = 〈s|O|ψ〉 / 〈s|ψ〉` (which are known as local energies if `O` is the Hamiltonian). [1179](https://github.com/netket/netket/pull/1179)
* The permutation equivariant {class}`netket.models.DeepSetRelDistance` for use with particles in periodic potentials has been added together with an example. [1199](https://github.com/netket/netket/pull/1199)
* The class {class}`HDF5Log` has been added to the experimental submodule. This logger writes log data and variational state variables into a single HDF5 file. [1200](https://github.com/netket/netket/issues/1200)
* Added a new method {meth}`~netket.logging.RuntimeLog.serialize` to store the content of the logger to disk [1255](https://github.com/netket/netket/issues/1255).
* New {class}`netket.callbacks.InvalidLossStopping` which stops optimisation if the loss function reaches a `NaN` value. An optional `patience` argument can be set. [1259](https://github.com/netket/netket/pull/1259)
* Added a new method {meth}`netket.graph.SpaceGroupBuilder.one_arm_irreps` to construct GCNN projection coefficients to project on single-wave-vector components of irreducible representations. [1260](https://github.com/netket/netket/issues/1260).
* New method {meth}`~netket.vqs.MCState.expect_and_forces` has been added, which can be used to compute the variational forces generated by an operator, instead of only the (real-valued) gradient of an expectation value. This in general is needed to write the TDVP equation or other similar equations. [1261](https://github.com/netket/netket/issues/1261)
* TDVP now works for real-parametrized wavefunctions as well as non-holomorphic ones because it makes use of {meth}`~netket.vqs.MCState.expect_and_forces`. [1261](https://github.com/netket/netket/issues/1261)
* New method {meth}`~netket.utils.group.Permutation.apply_to_id` can be used to apply a permutation (or a permutation group) to one or more lattice indices. [1293](https://github.com/netket/netket/issues/1293)
* It is now possible to disable MPI by setting the environment variable `NETKET_MPI=0`. This is useful in cases where `mpi4py` crashes upon load [1254](https://github.com/netket/netket/issues/1254).
* The new function {func}`netket.nn.binary_encoding` can be used to encode a set of samples according to the binary shape defined by a Hilbert space. It should be used similarly to {func}`flax.linen.one_hot` and works with non-homogeneous Hilbert spaces [1209](https://github.com/netket/netket/issues/1209).
* A new method to estimate the correlation time in Markov chain Monte Carlo (MCMC) sampling has been added to the {func}`netket.stats.statistics` function, which uses the full FFT transform of the input data. The new method is not enabled by default, but can be turned on by setting the `NETKET_EXPERIMENTAL_FFT_AUTOCORRELATION` environment variable to `1`. In the future we might turn this on by default [1150](https://github.com/netket/netket/issues/1150).
### Dependencies
* NetKet now requires at least Flax v0.5
### Deprecations
* `netket.nn.Module` and `netket.nn.compact` have been deprecated. Please use the {class}`flax.linen.Module` and {func}`flax.linen.compact` instead.
* `netket.nn.Dense(dtype=mydtype)` and related modules (`Conv`, `DenseGeneral` and `ConvGeneral`) are deprecated. Please use `flax.linen.***(param_dtype=mydtype)` instead. Before Flax v0.5 those modules did not properly support complex numbers, but starting with Flax 0.5 they do, so we have removed our linear-module wrappers and encourage you to use Flax's directly. Note that the `dtype` argument previously used by NetKet should be changed to `param_dtype` to maintain the same effect. [...](https://github.com/netket/netket/pull/...)
### Bug Fixes
* Fixed bug where a `netket.operator.LocalOperator` representing the identity would lead to a crash. [1197](https://github.com/netket/netket/pull/1197)
* Fixed a bug where fermionic operators {class}`nkx.operator.FermionOperator2nd` would not be Hermitian even when they should have been. [1233](https://github.com/netket/netket/pull/1233)
* Fix serialization of some arrays with complex dtype in `RuntimeLog` and `JsonLog` [1258](https://github.com/netket/netket/pull/1258)
* Fixed a bug where the {class}`netket.callbacks.EarlyStopping` callback would not work as intended when hitting a local minimum. [1238](https://github.com/netket/netket/pull/1238)
* `chunk_size` and the random seed of Monte Carlo variational states are now serialised. States serialised prior to this change can no longer be deserialised [1247](https://github.com/netket/netket/pull/1247)
* Continuous-space Hamiltonians now work correctly with neural networks with complex parameters [1273](https://github.com/netket/netket/pull/1273).
* NetKet now works under MPI with recent versions of jax (>=0.3.15) [1291](https://github.com/netket/netket/pull/1291).