Zfit

Latest version: v0.23.0


2.1

Major Features and Improvements
-------------------------------

- Fixed the comparison in the graph caching (an implementation detail) that led to an error.

0.23.0
======

Major Features and Improvements
-------------------------------
- Minimizers can use the new ``SimpleLoss.from_any`` method, which allows other libraries to hook into the minimization.
  For example, using zfit-physics, minimizers can directly minimize a RooFit ``RooNllVar`` (as created by ``createNLL``, described `here <https://root.cern.ch/doc/master/classRooAbsPdf.html#a24b1afec4fd149e08967eac4285800de>`_). See the sketch after this list.
- Added the well-performing ``LevenbergMarquardt`` minimizer, a new implementation of the Levenberg-Marquardt algorithm.
- New BFGS minimizer implementation from SciPy, ``ScipyBFGS``.
- Reactivated a few minimizers: ``ScipyDogleg``, ``ScipyNCG``, ``ScipyCOBYLA`` and ``ScipyNewtonCG``.
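
A minimal sketch of how a plain Python callable can be handed to a zfit minimizer, illustrating the kind of hook that ``SimpleLoss.from_any`` provides to other libraries. It uses ``zfit.loss.SimpleLoss``, ``zfit.minimize.Minuit`` and the new ``LevenbergMarquardt`` minimizer mentioned above; the loss function, parameter names and the convention that the function receives the parameter values as one array-like argument are assumptions for illustration.

.. code-block:: python

    import zfit
    import zfit.z.numpy as znp  # numpy-like API that also works inside compiled graphs

    # Two free parameters for the toy loss below.
    params = [zfit.Parameter(f"p{i}", 1.0, -10, 10) for i in range(2)]

    def loss_func(x):
        # x holds the current parameter values (calling convention assumed).
        x = znp.asarray(x)
        return znp.sum((x - 0.3) ** 2)

    # Wrap the callable in a zfit loss; errordef=0.5 for a likelihood-like loss.
    loss = zfit.loss.SimpleLoss(loss_func, params=params, errordef=0.5)

    # ``SimpleLoss.from_any`` (new in this release) is the hook minimizers use to
    # convert loss-like objects from other libraries into such a SimpleLoss.

    minimizer = zfit.minimize.Minuit()
    # minimizer = zfit.minimize.LevenbergMarquardt()  # new minimizer in this release
    result = minimizer.minimize(loss)
    print(result.params)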

Breaking changes
------------------
- Removed multiple old, deprecated methods and arguments.


Deprecations
-------------
- Use ``stepsize`` instead of ``step_size`` in the ``zfit.Parameter`` constructor (example below).
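
A minimal sketch of the renamed keyword; the parameter name, value and limits are arbitrary illustration values.

.. code-block:: python

    import zfit

    # New spelling: ``stepsize`` (the old ``step_size`` keyword is deprecated).
    sigma = zfit.Parameter("sigma", 1.2, 0.1, 5.0, stepsize=0.01)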

Bug fixes and small changes
---------------------------
- Add the possibility to not JIT-compile by using ``force_eager`` in ``tf.function`` or by raising a ``z.DoNotCompile`` error.
- A ``SimpleLoss`` can now be added to another ``SimpleLoss`` (see the sketch after this list).
- ``get_params`` supports an ``autograd`` argument to filter out parameters that do not support automatic differentiation.
  An object with parameters can advertise which of its parameters are differentiable (via ``autograd_params``); by default, all
  parameters are assumed to be differentiable, which has the same effect as ``True``. If autograd is performed on parameters that
  do not support it, an error is raised.
- Use Kahan summation for larger likelihoods by default to improve numerical stability.
- Using the same ``zfit.Parameter`` for multiple arguments (e.g. to specify a common width in a PDF with a different width
  for the left and the right side) could cause a crash due to internal caching. This is now fixed.
- Minimizers have been renamed without the trailing ``V1``. The old names are still available but will be removed in the future.
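
A minimal sketch of the ``SimpleLoss`` addition and the ``autograd`` filter described above. The loss functions, parameters and the Gaussian model are illustrative; the accepted values of the ``autograd`` argument beyond ``True`` are an assumption based on the item above.

.. code-block:: python

    import zfit
    import zfit.z.numpy as znp

    mu = zfit.Parameter("mu", 0.0, -5, 5)
    sigma = zfit.Parameter("sigma", 1.0, 0.1, 10)

    def quadratic(p):
        return znp.sum(znp.square(znp.asarray(p)))

    def absolute(p):
        return znp.sum(znp.absolute(znp.asarray(p)))

    # Two simple losses over the same parameters; errordef=0.5 for likelihood-like losses.
    loss1 = zfit.loss.SimpleLoss(quadratic, params=[mu, sigma], errordef=0.5)
    loss2 = zfit.loss.SimpleLoss(absolute, params=[mu, sigma], errordef=0.5)

    # New: SimpleLoss objects can be added together.
    loss_total = loss1 + loss2

    # New ``autograd`` filter on ``get_params``: keep only parameters that
    # support automatic differentiation (by default, all are assumed to).
    obs = zfit.Space("x", limits=(-10, 10))
    gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)
    diffable_params = gauss.get_params(autograd=True)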

Experimental
------------

Requirement changes
-------------------

Thanks
------

0.22.0
======

Bug fixes and small changes
---------------------------
- A truncated PDF with a yield now reflects a dynamic change in shape.

Requirement changes
-------------------
- Upgrade from Pydantic V1 to V2

0.21.1
======

Bug fixes and small changes
---------------------------
- The ``full`` argument for binned NLLs was not working properly and returned a partially optimized loss value (see the sketch after this list).
- JIT-compile all methods of the loss (gradient, Hessian) to avoid recompilation every time. This can possibly speed up
  different minimizers significantly.
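
A minimal sketch of the ``full`` option on a binned NLL, assuming the standard zfit binned workflow; the binning, toy data and parameter values are illustrative, and passing ``full`` to ``value`` reflects the item above.

.. code-block:: python

    import numpy as np
    import zfit

    obs = zfit.Space("x", limits=(-5, 5))
    binning = zfit.binned.RegularBinning(50, -5, 5, name="x")
    obs_binned = zfit.Space("x", binning=binning)

    mu = zfit.Parameter("mu", 0.2, -1, 1)
    sigma = zfit.Parameter("sigma", 1.1, 0.1, 5)
    gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)
    gauss_binned = gauss.to_binned(obs_binned)

    data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(0.0, 1.0, size=5000))
    data_binned = data.to_binned(obs_binned)

    nll = zfit.loss.BinnedNLL(model=gauss_binned, data=data_binned)

    # ``full=True`` includes the constant terms of the likelihood instead of
    # returning only the partially optimized value; this code path is what was fixed.
    full_value = nll.value(full=True)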

0.21.0
======

Major Features and Improvements
-------------------------------
- Add the ``JohnsonSU`` PDF, the Johnson SU distribution (example below).
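
A minimal sketch of building the new PDF; the argument names (``mu``, ``lambd``, ``gamma``, ``delta``) follow the usual Johnson SU parametrization and, like the values, are assumptions for illustration rather than taken from the release notes.

.. code-block:: python

    import zfit

    obs = zfit.Space("x", limits=(-10, 10))

    mu = zfit.Parameter("mu", 0.0, -5, 5)
    lambd = zfit.Parameter("lambd", 1.0, 0.1, 10)
    gamma = zfit.Parameter("gamma", 0.5, -5, 5)
    delta = zfit.Parameter("delta", 1.5, 0.1, 10)

    # Johnson SU shape, new in 0.21.0.
    johnson = zfit.pdf.JohnsonSU(mu=mu, lambd=lambd, gamma=gamma, delta=delta, obs=obs)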



Bug fixes and small changes
---------------------------
- Increase the reliability of ``zfit.dill.dump`` and ``zfit.dill.dumps`` with an additional ``verify`` argument that reloads the dumped object to verify it was correctly dumped and retries if it wasn't (see the sketch after this list).
- Fix missing imported namespaces.
- Fix a memory leak when creating multiple parameters.
- Add data loaders to the ``zfit.data`` namespace.
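
A minimal sketch of the ``verify`` option on ``zfit.dill``; the small fit producing the ``result`` object is only there to have something to serialize.

.. code-block:: python

    import zfit

    # A tiny minimization so that ``result`` is a FitResult worth dumping.
    mu = zfit.Parameter("mu", 0.5, -5, 5)
    loss = zfit.loss.SimpleLoss(lambda p: (p[0] - 1.0) ** 2, params=[mu], errordef=0.5)
    result = zfit.minimize.Minuit().minimize(loss)

    # ``verify=True`` reloads the dumped object and retries the dump if the check fails.
    serialized = zfit.dill.dumps(result, verify=True)

    with open("result.dill", "wb") as f:
        zfit.dill.dump(result, f, verify=True)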



Requirement changes
-------------------
- upgrade to TensorFlow 2.17 and TensorFlow Probability 0.25

Thanks
------
- Davide Lancierini for finding and helping to debug the dill dumping issue
- James Herd for finding and reproducing the memory leak

0.20.3
======

Bug fixes and small changes
---------------------------
- Consistent behavior in the loss: a simple loss can take a gradient and a Hessian function, and the default base loss provides fallbacks that work correctly between ``value_gradient`` and ``gradient``. This may matter if you've implemented a custom loss and should fix any issues with it (see the sketch after this list).
- Multiprocessing would get stuck due to an `upstream bug in TensorFlow <https://github.com/tensorflow/tensorflow/issues/66115>`_. Worked around it by disabling an unused piece of code.
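
A minimal sketch of a custom simple loss with an analytic gradient, reflecting the fallback behavior described above; the ``gradient`` keyword name and the calling convention are assumptions based on the item, not verified signatures.

.. code-block:: python

    import zfit
    import zfit.z.numpy as znp

    params = [zfit.Parameter(f"p{i}", 0.5, -10, 10) for i in range(2)]

    def value(p):
        # Simple quadratic loss in the parameter values.
        p = znp.asarray(p)
        return znp.sum((p - 1.0) ** 2)

    def gradient(p):
        # Analytic gradient of the quadratic above.
        p = znp.asarray(p)
        return 2.0 * (p - 1.0)

    # The ``gradient`` keyword is assumed from the changelog item; without it,
    # the base loss falls back to a consistent ``value_gradient``/``gradient`` pair.
    loss = zfit.loss.SimpleLoss(value, params=params, errordef=0.5, gradient=gradient)

    result = zfit.minimize.Minuit().minimize(loss)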

Thanks
------
- acampoverde for finding the bug in multiprocessing
