MrMustard

Latest version: v0.7.3


0.7.3

New features
* Added a function ``to_fock`` to map different representations into Fock representation.
[(355)](https://github.com/XanaduAI/MrMustard/pull/355)

* Added a new Abc triple for the s-parametrized displacement gate.
[(368)](https://github.com/XanaduAI/MrMustard/pull/368)

Contributors
[Samuele Ferracin](https://github.com/SamFerracin),
[Yuan Yao](https://github.com/sylviemonet),
[Filippo Miatto](https://github.com/ziofil)

0.7.2

New features

- Added functions to generate the (A, b, c) triples for the Fock-Bargmann representation of several states and gates. [(338)](https://github.com/XanaduAI/MrMustard/pull/338)

Contributors

SamFerracin sylviemonet ziofil

0.7.0

New features
* Added a new interface for backends, as well as a `numpy` backend (which is now the default). Users can run
all the functions in the `utils`, `math`, `physics`, and `lab` modules with both backends, while `training`
requires the `tensorflow` backend. The `numpy` backend provides significant improvements in both import
time and runtime. [(301)](https://github.com/XanaduAI/MrMustard/pull/301)

* Added the classes and methods to create, contract, and draw tensor networks with `mrmustard.math`.
[(284)](https://github.com/XanaduAI/MrMustard/pull/284)

* Added functions in `physics.bargmann` to join and contract (A, b, c) triples.
[(295)](https://github.com/XanaduAI/MrMustard/pull/295)

* Added an `Ansatz` abstract class and a `PolyExpAnsatz` concrete implementation, which is used in the Bargmann representation.
[(295)](https://github.com/XanaduAI/MrMustard/pull/295)

* Added `complex_gaussian_integral` and `real_gaussian_integral` methods.
[(295)](https://github.com/XanaduAI/MrMustard/pull/295)

* Added the `Bargmann` representation (parametrized by Abc). It supports all algebraic operations and the exact CV inner product.
[(296)](https://github.com/XanaduAI/MrMustard/pull/296)

Breaking changes
* Removed circular dependencies by:
  * Removing `graphics.py`: `ProgressBar` moved to `training` and `mikkel_plot` to `lab`.
  * Moving `circuit_drawer` and `wigner` to `physics`.
  * Moving `xptensor` to `math`.
[(289)](https://github.com/XanaduAI/MrMustard/pull/289)

* Created `settings.py` file to host `Settings`.
[(289)](https://github.com/XanaduAI/MrMustard/pull/289)

* Moved `settings.py`, `logger.py`, and `typing.py` to `utils`.
[(289)](https://github.com/XanaduAI/MrMustard/pull/289)

* Removed the `Math` class. To use the mathematical backend, replace
`from mrmustard.math import Math ; math = Math()` with `import mrmustard.math as math`
in your scripts, as shown in the sketch after this list.
[(301)](https://github.com/XanaduAI/MrMustard/pull/301)

* The `numpy` backend is now the default. To switch to the `tensorflow`
backend, add the line `math.change_backend("tensorflow")` to your scripts.
[(301)](https://github.com/XanaduAI/MrMustard/pull/301)
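
A minimal sketch combining the two changes above (the module import and the `change_backend` call are taken verbatim from these notes):

```python
# Before 0.7.0, the backend was accessed through the (now removed) Math class:
#     from mrmustard.math import Math
#     math = Math()

# From 0.7.0 onwards, the math module itself is the backend interface.
import mrmustard.math as math

# numpy is the default backend; switch to tensorflow explicitly (e.g. when using `training`).
math.change_backend("tensorflow")
```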

Improvements

* Calculating Fock representations and their gradients is now more numerically stable (i.e. numerical blowups that
result from repeatedly applying the recurrence relation are postponed to higher cutoff values).
This holds for both the "vanilla strategy" [(274)](https://github.com/XanaduAI/MrMustard/pull/274) and for the
"diagonal strategy" and "single leftover mode strategy" [(288)](https://github.com/XanaduAI/MrMustard/pull/288/).
This is achieved by representing Fock amplitudes with a higher precision than `complex128` (countering floating-point errors).
Julia code is run via PyJulia (where Numba was used before) to keep the code fast.
The precision is controlled by the setting `settings.PRECISION_BITS_HERMITE_POLY`. The default value is ``128``,
which uses the old Numba code; setting it to a higher value runs the new Julia code (see the sketch at the end of this list).

* Replaced parameters in `training` with `Constant` and `Variable` classes.
[(298)](https://github.com/XanaduAI/MrMustard/pull/298)

* Improved how states, transformations, and detectors deal with parameters by replacing the `Parametrized` class with `ParameterSet`.
[(298)](https://github.com/XanaduAI/MrMustard/pull/298)

* Included the Julia dependencies in the Python packaging for downstream installation reproducibility.
Removed the dependency on `tomli` for loading the version info from `pyproject.toml`; `importlib.metadata` is used instead.
[(303)](https://github.com/XanaduAI/MrMustard/pull/303)
[(304)](https://github.com/XanaduAI/MrMustard/pull/304)

* Improved the algorithms implemented in `vanilla` and `vanilla_vjp` to achieve a speedup.
Specifically, the improved algorithms work on flattened arrays (which are reshaped before being returned) as opposed to multi-dimensional arrays.
[(312)](https://github.com/XanaduAI/MrMustard/pull/312)
[(318)](https://github.com/XanaduAI/MrMustard/pull/318)

* Added the functions `hermite_renormalized_batch` and `hermite_renormalized_diagonal_batch` to speed up calculating
Hermite polynomials over a batch of B vectors.
[(308)](https://github.com/XanaduAI/MrMustard/pull/308)

* Added a suite to filter undesired warnings, and used it to filter TensorFlow's ``ComplexWarning``s.
[(332)](https://github.com/XanaduAI/MrMustard/pull/332)
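
A minimal sketch of raising the Hermite-polynomial precision described above, assuming the `Settings` instance is importable as `from mrmustard import settings` and that `256` is an accepted value (both assumptions; only the setting name comes from these notes):

```python
from mrmustard import settings  # assumed import path for the Settings instance

# The default value, 128, keeps the original Numba code path; any higher value
# switches the Fock-amplitude calculation to the new Julia code.
settings.PRECISION_BITS_HERMITE_POLY = 256  # illustrative value, assumed to be accepted
```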


Bug fixes

* Added the missing `shape` input parameters to all `U` methods in the `gates.py` file.
[(291)](https://github.com/XanaduAI/MrMustard/pull/291)
* Fixed inconsistent use of `atol` in purity evaluation for Gaussian states.
[(294)](https://github.com/XanaduAI/MrMustard/pull/294)
* Fixed the documentation for the `loss_XYd` and `amp_XYd` functions for Gaussian channels.
[(305)](https://github.com/XanaduAI/MrMustard/pull/305)
* Replaced all instances of `np.empty` with `np.zeros` to fix instabilities.
[(309)](https://github.com/XanaduAI/MrMustard/pull/309)

Documentation

Tests
* Added tests for calculating Fock amplitudes with a higher precision than `complex128`.

Contributors
elib20 rdprins SamFerracin jan-provaznik sylviemonet ziofil

0.6.1post1

New features


Breaking changes


Improvements

* Relaxed dependency versions in `pyproject.toml`; specifically, `scipy` is no longer pinned.
[(300)](https://github.com/XanaduAI/MrMustard/pull/300)

Bug fixes


Documentation

Contributors
[Zeyue Niu](https://github.com/zeyueN)

0.6.0

New features

* Added a new method to discretize Wigner functions that relies on Clenshaw summations. This method is expected to be fast and
reliable for systems with a high number of excitations, for which the pre-existing iterative method is known to be unstable. Users
can select their preferred method by setting the value of `Settings.DISCRETIZATION_METHOD` to either `iterative` (default) or
`clenshaw`, as shown in the sketch after this list.
[(280)](https://github.com/XanaduAI/MrMustard/pull/280)

* Added the `PhaseNoise(phase_stdev)` gate (non-Gaussian). Its output is a mixed state in Fock representation.
It is not based on a Choi operator, but on a nonlinear transformation of the density matrix.
[(275)](https://github.com/XanaduAI/MrMustard/pull/275)
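
A minimal sketch of selecting the discretization method, assuming the `Settings` instance is importable as `from mrmustard import settings` (the attribute name and the two values come from the note above; the import path is an assumption):

```python
from mrmustard import settings  # assumed import path for the Settings instance

# The pre-existing iterative discretization remains the default.
settings.DISCRETIZATION_METHOD = "iterative"

# Switch to the Clenshaw-summation method for states with many excitations.
settings.DISCRETIZATION_METHOD = "clenshaw"
```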

Breaking changes

* The value of `hbar` can no longer be specified outside of `Settings`. All the classes and
methods that allowed specifying its value as an input now retrieve it directly from `Settings`; see the sketch after this list.
[(278)](https://github.com/XanaduAI/MrMustard/pull/278)

* Certain attributes of `Settings` can no longer be changed after their value is queried for the
first time.
[(278)](https://github.com/XanaduAI/MrMustard/pull/278)
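
A minimal sketch of the new behaviour, assuming `Settings` exposes `hbar` as the attribute `HBAR` and is importable as `from mrmustard import settings` (both assumptions; the notes above only state that the value is read from `Settings`):

```python
from mrmustard import settings  # assumed import path for the Settings instance

# hbar now lives only in Settings; classes and methods read it from there.
settings.HBAR = 1.0  # assumed attribute name; set it before it is first queried
```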

Improvements

* TensorFlow was bumped to v2.14, with the Poetry installation working out of the box on Linux and macOS.
[(281)](https://github.com/XanaduAI/MrMustard/pull/281)

Bug fixes

* Fixed a bug in the variable names of the functions `apply_kraus_to_ket`, `apply_kraus_to_dm`, `apply_choi_to_ket`, and `apply_choi_to_dm`.
[(271)](https://github.com/XanaduAI/MrMustard/pull/271)

* Fixed a bug that was leading to an error when computing the Choi representation of a unitary transformation.
[(283)](https://github.com/XanaduAI/MrMustard/pull/283)

* Fixed the internal function that calculates the ABC triple of the Bargmann representation (it now matches the literature), along with other fixes to obtain the correct Fock tensor.
[(255)](https://github.com/XanaduAI/MrMustard/pull/255)

Documentation

Contributors
ziofil, SamFerracin, sylviemonet, zeyueN

0.5.0

New features

* Optimization callback functionalities have been improved. A dedicated `Callback` class has been added, which
can access the optimizer, the cost function, the parameters, and the gradients during the
optimization. In addition, multiple callbacks can be specified. This opens up endless possibilities
for customizing the optimization progress with schedulers, trackers, heuristics, tricks, etc.
[(219)](https://github.com/XanaduAI/MrMustard/pull/219)

* Tensorboard-based optimization tracking has been added as a built-in `Callback` class: `TensorboardCallback`.
It can automatically track costs as well as all trainable parameters in real time during optimization.
Tensorboard can be most conveniently viewed from VS Code.
[(219)](https://github.com/XanaduAI/MrMustard/pull/219)

```python
import numpy as np
from mrmustard.training import Optimizer, TensorboardCallback

def cost_fn():
    ...

def as_dB(cost):
    delta = np.sqrt(np.log(1 / (abs(cost) ** 2)) / (2 * np.pi))
    cost_dB = -10 * np.log10(delta**2)
    return cost_dB

tb_cb = TensorboardCallback(cost_converter=as_dB, track_grads=True)

opt = Optimizer(euclidean_lr=0.001)
opt.minimize(cost_fn, max_steps=200, by_optimizing=[...], callbacks=tb_cb)
```

Logs will be stored in `tb_cb.logdir`, which defaults to `./tb_logdir/...` but can be customized.
VS Code can be used to open the Tensorboard frontend for live monitoring.
Alternatively, on the command line run `tensorboard --logdir={tb_cb.logdir}` and open the link in a browser.


* Gaussian states support a `bargmann` method for returning the Bargmann representation.
[(235)](https://github.com/XanaduAI/MrMustard/pull/235)

* The `ket` method of `State` now supports the new keyword arguments `max_prob` and `max_photons`.
Use them to speed up the filling of a ket array up to a certain probability or *total* photon number.
[(235)](https://github.com/XanaduAI/MrMustard/pull/235)

```python
from mrmustard.lab import Gaussian

# Fills the ket array up to 99% probability or up to the |0,3>, |1,2>, |2,1>, |3,0> subspace,
# whichever is reached first. The array has the autocutoff shape, unless the cutoffs are
# specified explicitly.
ket = Gaussian(2).ket(max_prob=0.99, max_photons=3)
```


* Gaussian transformations support a `bargmann` method for returning the Bargmann representation.
[(239)](https://github.com/XanaduAI/MrMustard/pull/239)

* `BSgate.U` now supports `method='vanilla'` (default) and `method='schwinger'` (slower, but stable to any cutoff); see the sketch below.
[(248)](https://github.com/XanaduAI/MrMustard/pull/248)
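
A minimal sketch of the two methods, assuming `BSgate` accepts a `theta` parameter and its `U` method accepts a `cutoffs` list (both assumptions; only the `method` keyword and its two values come from the note above):

```python
from mrmustard.lab import BSgate

bs = BSgate(theta=0.5)                                 # assumed constructor argument
U_default = bs.U(cutoffs=[10, 10])                     # method='vanilla' (default)
U_stable = bs.U(cutoffs=[10, 10], method="schwinger")  # slower, but stable to any cutoff
```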

Breaking Changes

* The previous `callback` argument to `Optimizer.minimize` is now `callbacks` since we can now pass
multiple callbacks to it.
[(219)](https://github.com/XanaduAI/MrMustard/pull/219)

* The `opt_history` attribute of `Optimizer` no longer has the placeholder at the beginning.
[(235)](https://github.com/XanaduAI/MrMustard/pull/235)

Improvements

* The math module now has a submodule `lattice` for constructing recurrence relation strategies in the Fock lattice.
There are a few predefined strategies in `mrmustard.math.lattice.strategies`.
[(235)](https://github.com/XanaduAI/MrMustard/pull/235)

* Gradients in the Fock lattice are now computed using the vector-Jacobian product.
This saves a lot of memory and speeds up the optimization process by roughly 4x.
[(235)](https://github.com/XanaduAI/MrMustard/pull/235)

* Tests of the compact_fock module now use hypothesis.
[(235)](https://github.com/XanaduAI/MrMustard/pull/235)

* Faster implementation of the Fock representation of `BSgate`, `Sgate` and `SqueezedVacuum`, with speedups ranging from 5x to 50x.
[(239)](https://github.com/XanaduAI/MrMustard/pull/239)

* More robust implementation of cutoffs for States.
[(239)](https://github.com/XanaduAI/MrMustard/pull/239)

* Dependencies and versioning are now managed using Poetry.
[(257)](https://github.com/XanaduAI/MrMustard/pull/257)

Bug fixes

* Fixed a bug that would make two progress bars appear during an optimization.
[(235)](https://github.com/XanaduAI/MrMustard/pull/235)

* Fixed the displacement of the dual of an operation, which had the wrong sign.
[(239)](https://github.com/XanaduAI/MrMustard/pull/239)

* When projecting a Gaussian state onto a Fock state, the upper limit of the autocutoff now respects the Fock projection.
[(246)](https://github.com/XanaduAI/MrMustard/pull/246)

* Fixed a bug in the algorithms that allow faster PNR sampling from Gaussian circuits using density matrices. When the
cutoff of the first detector is equal to 1, the resulting density matrix is now correct.

Documentation

Contributors
ziofil zeyueN rdprins ggulli ryk-wolf
