Inference-tools

Latest version: v0.14.0

0.6.0

Three new modules have been added to support the construction of likelihood, prior and posterior distributions.

- The `inference.likelihoods` module provides classes for constructing likelihood functions for a given user-defined forward-model. Currently the module includes classes for constructing Gaussian, Cauchy and logistic likelihoods. Classes for additional distributions are planned for inclusion in a future release.

- The `inference.priors` module provides classes for constructing prior distributions over model variables. Currently the module includes classes for constructing Gaussian, uniform and exponential priors. Classes for additional distributions are planned for inclusion in a future release.

- The `Posterior` class from the `inference.posterior` module provides a convenient means of combining a likelihood and prior distribution function into a single posterior distribution function.

The newly added [Gaussian fitting Jupyter notebook demo](https://github.com/C-bowman/inference-tools/blob/master/demos/gaussian_fitting_demo.ipynb) includes examples of how classes from these new modules can be used.
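
For orientation, the sketch below shows how the three modules compose. The class names match those used in the linked demo, but the argument names shown here are assumptions; consult the notebook for the exact signatures.

```python
import numpy as np
from inference.likelihoods import GaussianLikelihood
from inference.priors import GaussianPrior
from inference.posterior import Posterior

# synthetic data from a straight-line forward model
x = np.linspace(0, 1, 20)
y = 3.0 * x + 0.5 + np.random.normal(scale=0.1, size=20)

def forward_model(theta):
    gradient, offset = theta
    return gradient * x + offset

# Gaussian likelihood for the data given the forward model
# (argument names assumed from the package documentation)
likelihood = GaussianLikelihood(
    y_data=y, sigma=np.full(20, 0.1), forward_model=forward_model
)

# Gaussian prior over both model parameters
prior = GaussianPrior(mean=[0.0, 0.0], sigma=[10.0, 10.0])

# combine the likelihood and prior into a single posterior callable
posterior = Posterior(likelihood=likelihood, prior=prior)

log_prob = posterior(np.array([3.0, 0.5]))  # evaluates the log-posterior
```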

0.5.4

- `GpRegressor` now supports multi-start gradient-based hyper-parameter optimisation using the [L-BFGS-B algorithm](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html), in addition to the previously available [differential evolution](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution). The approach used can be selected via the new `optimizer` keyword argument, with L-BFGS-B being the default (see the sketch after this list).

- `GpRegressor` now supports distributed hyper-parameter optimisation using sub-process based parallelism. The number of sub-processes over which the optimisation is distributed is set by the new `n_processes` keyword argument. Currently only the multi-start L-BFGS-B optimiser can take advantage of this, so the keyword is ignored when using the differential evolution optimiser.

- Fixed a bug in `GaussianKDE` which caused a crash when fewer than 10 samples were given as input.
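
A sketch of the two new keyword arguments is given below. The string values `"bfgs"` and `"diffev"` for the `optimizer` argument, and the `inference.gp_tools` module path, are assumptions based on the package documentation; check the docs for your installed version.

```python
import numpy as np
from inference.gp_tools import GpRegressor  # module path assumed for this release

x = np.linspace(0, 10, 30)
y = np.sin(x) + np.random.normal(scale=0.1, size=30)

# multi-start L-BFGS-B hyper-parameter optimisation (the default),
# distributed over 4 sub-processes
gp = GpRegressor(x, y, y_err=0.1, optimizer="bfgs", n_processes=4)

# differential evolution instead; n_processes is ignored in this case
gp_alt = GpRegressor(x, y, y_err=0.1, optimizer="diffev")

# evaluate the regression estimate and its uncertainty
mu, sigma = gp(np.linspace(0, 10, 200))
```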

0.5.3

- Rather than assuming the mean of the Gaussian process is zero, `GpRegressor` now treats the mean as a hyper-parameter, and automatically selects a value for the mean which best describes the data.

- Fixed a bug in the calculation of the derivatives of the log-marginal-likelihood and the log-cross-validation density with respect to the hyper-parameters.

0.5.2

- Added a new set of Jupyter notebook demos, which can be found in the `/demos/` directory.

- Added a new function `inference.plotting.hdi_plot` for convenient plotting of highest-density intervals derived from a sample of model realisations (see the sketch after this list).

- All sampling classes in `inference.mcmc` now pass model parameters to the user-provided posterior function as a `numpy.ndarray`, and the documentation has been updated to reflect this.
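
The sketch below illustrates both points: the posterior receives its parameters as a `numpy.ndarray`, and `hdi_plot` draws highest-density intervals from a sample of model realisations. The `GibbsChain` usage and the `hdi_plot` argument names follow the package documentation, but may differ slightly in this release.

```python
import numpy as np
import matplotlib.pyplot as plt
from inference.mcmc import GibbsChain
from inference.plotting import hdi_plot

x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + np.random.normal(size=50)

def posterior(theta):
    # theta arrives as a numpy.ndarray, so vectorised operations work directly
    gradient, offset = theta
    prediction = gradient * x + offset
    return -0.5 * np.sum((y - prediction) ** 2)

chain = GibbsChain(posterior=posterior, start=np.array([1.0, 0.0]))
chain.advance(20000)
chain.burn = 5000  # discard burn-in samples

# build model realisations from the sampled parameters
sample = chain.get_sample()
curves = np.array([g * x + c for g, c in sample])

# plot the 65% and 95% highest-density intervals of the realisations
fig, ax = plt.subplots()
hdi_plot(x, curves, intervals=[0.65, 0.95], axis=ax)
ax.plot(x, y, ".", color="black")
plt.show()
```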

0.5.1

- Fixed a bug introduced in the 0.5.0 release where passing a single spatial point to the `__call__` method of `GpRegressor` would cause a crash in cases with 2 or more spatial dimensions.
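
For reference, the failing case was of the following form (a sketch; the `inference.gp_tools` module path is an assumption for this release):

```python
import numpy as np
from inference.gp_tools import GpRegressor

# GP regression over 2 spatial dimensions
points = np.random.random((25, 2))
values = np.sin(points[:, 0]) * np.cos(points[:, 1])
gp = GpRegressor(points, values)

# evaluating at a single spatial point previously caused a crash
mu, sigma = gp([0.5, 0.5])
```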

0.5.0

This release contains significant improvements to the `GpRegressor` class, including:

- A new option to select between the squared-exponential and rational-quadratic covariance functions, or to provide a user-defined custom covariance function (see the sketch after this list).

- A new option to use leave-one-out cross-validation to select hyper-parameter values instead of the marginal-likelihood.

- Significant improvements to numerical efficiency leading to reduced computation times.
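
A brief sketch of the first two options is given below. The `RationalQuadratic` name and the `kernel` and `cross_val` keyword arguments match the current package documentation, but are assumptions for this particular release.

```python
import numpy as np
from inference.gp_tools import GpRegressor, RationalQuadratic

x = np.linspace(0, 10, 30)
y = np.sin(x) + np.random.normal(scale=0.1, size=30)

# select the rational-quadratic covariance function, and use
# leave-one-out cross-validation to set the hyper-parameters
gp = GpRegressor(x, y, kernel=RationalQuadratic, cross_val=True)
```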
