GPyTorch

1.2.1

This release includes the following fixes:

- Fix caching issues with variational GPs (1274, 1311)
- Ensure that constraint bounds are properly cast to floating point types (1307)
- Fix bug with broadcasting multitask multivariate normal shapes (1312)
- Bypass KeOps for small/rectangular kernels (1319)
- Fix issues with `eigenvectors=False` in `LazyTensor.symeig` (1283)
- Fix issues with fixed-noise LazyTensor preconditioner (1299)
- Doc fixes (1275, 1301)

1.2.0

Major Features

New variational and approximate models
This release adds a number of new features for approximate GP models:

- Linear model of coregionalization for variational multitask GPs (1180)
- Deep Sigma Point Process models (1193)
- Mean-field decoupled (MFD) models from "Parametric Gaussian Process Regressors" (Jankowiak et al., 2020) (1179)
- Natural gradient descent (NGD) optimization (1258); see the sketch after this list
- Additional non-conjugate likelihoods (Beta, StudentT, Laplace) (1211)
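
As a hedged illustration of the NGD feature, the following minimal sketch trains an approximate GP with two optimizers: `gpytorch.optim.NGD` for the variational parameters and Adam for everything else. The model class, toy data, and optimizer settings are illustrative assumptions, not part of the release itself.

```python
import torch
import gpytorch


# Hedged sketch: an approximate GP set up for natural gradient descent.
# The model architecture and data below are toy stand-ins.
class ApproximateGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.NaturalVariationalDistribution(
            inducing_points.size(-2)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


train_x = torch.linspace(0, 1, 100).unsqueeze(-1)
train_y = torch.sin(6 * train_x).squeeze(-1)

model = ApproximateGPModel(train_x[:10])
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))

# Natural gradients update the variational parameters; Adam handles the rest.
ngd_optimizer = gpytorch.optim.NGD(
    model.variational_parameters(), num_data=train_y.size(0), lr=0.1
)
hyper_optimizer = torch.optim.Adam(
    list(model.hyperparameters()) + list(likelihood.parameters()), lr=0.01
)

model.train()
likelihood.train()
for _ in range(100):
    ngd_optimizer.zero_grad()
    hyper_optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    ngd_optimizer.step()
    hyper_optimizer.step()
```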

New kernels
We have just added a number of new specialty kernels:

- `gpytorch.kernels.GaussianSymmetrizedKLKernel` for performing regression with uncertain inputs (1186)
- `gpytorch.kernels.RFFKernel` (random Fourier features kernel) (1172, 1233); see the example after this list
- `gpytorch.kernels.SpectralDeltaKernel` (a parametric kernel for patterns/extrapolation) (1231)
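
As a hedged illustration of the new kernels, here is a minimal sketch that drops `gpytorch.kernels.RFFKernel` into an exact GP; the model class, toy data, and `num_samples` value are illustrative assumptions.

```python
import torch
import gpytorch


# Hedged sketch: an exact GP whose covariance is approximated with random
# Fourier features. num_samples = number of Fourier features used.
class RFFGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RFFKernel(num_samples=1024)
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


train_x = torch.linspace(0, 1, 200)
train_y = torch.sin(6 * train_x) + 0.1 * torch.randn(200)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = RFFGPModel(train_x, train_y, likelihood)
```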

More scalable sampling
- Large-scale sampling with contour integral quadrature from Pleiss et al., 2020 (1194); a usage sketch follows
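
A hedged usage sketch, assuming the `gpytorch.settings.ciq_samples` and `gpytorch.settings.num_contour_quadrature` settings apply to posterior sampling and that `model` is an already-trained GP:

```python
import torch
import gpytorch

# Hedged sketch: draw posterior samples via contour integral quadrature (CIQ)
# rather than a root decomposition. `model` is assumed to be a trained GP.
model.eval()
test_x = torch.linspace(0, 1, 50)
with gpytorch.settings.ciq_samples(True), \
        gpytorch.settings.num_contour_quadrature(10), torch.no_grad():
    posterior = model(test_x)
    samples = posterior.rsample(torch.Size([16]))  # 16 joint samples
```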

Minor Features
- Ability to set the amount of jitter added when performing Cholesky factorizations (1136); see the sketch after this list
- Improve scalability of `KroneckerProductLazyTensor` (1199, 1208)
- Improve speed of preconditioner (1224)
- Add `symeig` and `svd` methods to LazyTensors (1105)
- Add `TriangularLazyTensor` for Cholesky methods (1102)
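
For the jitter setting above, a minimal sketch, assuming `model`, `mll`, `train_x`, and `train_y` come from a standard exact-GP training loop:

```python
import gpytorch

# Hedged sketch: raise the diagonal jitter used during Cholesky
# factorizations, which can help with poorly conditioned kernel matrices.
with gpytorch.settings.cholesky_jitter(1e-4):
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward()
```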

Bug Fixes
- Fix initialization code for `gpytorch.kernels.SpectralMixtureKernel` (1171)
- Fix bugs with LazyTensor addition (1174)
- Fix issue with loading smoothed box priors (1195)
- Throw warning when variances are not positive, check for valid correlation matrices (1237, 1241, 1245)
- Fix sampling issues with Pyro integration (1238)

1.1.1

Major Features

- GPyTorch is compatible with PyTorch 1.5 (latest release)
- Several bugs with task-independent multitask models are fixed (1110)
- Task-dependent multitask models are more batch-mode compatible (1087, 1089, 1095)

Minor Features

- `gpytorch.priors.MultivariateNormalPrior` has an expand method (1018)
- Better broadcasting for batched inducing point models (1047)
- `LazyTensor` repeating works with rectangular matrices (1068)
- `gpytorch.kernels.ScaleKernel` inherits the `active_dims` property from its base kernel (1072); see the snippet after this list
- Fully Bayesian models can be saved (1076)
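
A minimal sketch of the `active_dims` inheritance, with toy data as an illustrative assumption:

```python
import torch
import gpytorch

# Hedged sketch: active_dims set on the base kernel is picked up by the
# wrapping ScaleKernel, so both operate on the same slice of the input.
base_kernel = gpytorch.kernels.RBFKernel(active_dims=(0, 2))  # use features 0 and 2
kernel = gpytorch.kernels.ScaleKernel(base_kernel)

x = torch.randn(10, 3)     # 10 points with 3 features
covar = kernel(x)          # only features 0 and 2 enter the kernel
print(kernel.active_dims)  # mirrors the base kernel's active_dims
```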

Bug Fixes

- `gpytorch.kernels.PeriodicKernel` is batch-mode compatible (1012)
- Fix `gpytorch.priors.MultivariateNormalPrior` expand method (1018)
- Fix indexing issues with `LazyTensors` (1029)
- Fix constants with `gpytorch.mlls.GammaRobustVariationalELBO` (1038, 1053)
- Prevent doubly-computing derivatives of kernel inputs (1042)
- Fix initialization issues with `gpytorch.kernels.SpectralMixtureKernel` (1052)
- Fix stability of `gpytorch.variational.DeltaVariationalStrategy`

1.0.0

Major New Features and Improvements
Each feature in this section comes with a new example notebook and documentation showing how to use it -- check the new docs!

- Added support for deep Gaussian processes (564).
- KeOps integration has been added -- replace certain `gpytorch.kernels.SomeKernel` modules with `gpytorch.kernels.keops.SomeKernel` when KeOps is installed, and run exact GPs on 100,000+ data points (812); see the sketch after this list.
- Variational inference has undergone significant internal refactoring! All old variational objects should still function, but many are deprecated (903).
- Our integration with Pyro has been completely overhauled and is now much improved. For examples of interesting GP + Pyro models, see our new examples (903).
- Our example notebooks have been completely reorganized, and our documentation surrounding them has been rewritten to hopefully provide a better tutorial to GPyTorch (954).
- Added support for fully Bayesian GP modelling via NUTS (918).
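
As a hedged illustration of the KeOps integration, the sketch below swaps a standard kernel for its `gpytorch.kernels.keops` counterpart; the model class is an illustrative assumption, and the `pykeops` package must be installed.

```python
import torch
import gpytorch


# Hedged sketch: an exact GP using the KeOps RBF kernel, which evaluates the
# kernel matrix lazily on the GPU and scales exact GPs to 100,000+ points.
class KeOpsGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.keops.RBFKernel()
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```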

Minor New Features and Improvements
- `GridKernel` and `GridInterpolationKernel` now support rectangular grids (888).
- Added cylindrical kernel (577).
- Added polynomial kernel (668).
- Added tutorials on basic usage (hyperparameters, saving/loading, etc.) (685).
- `get_fantasy_model` now supports batched models (693).
- Added a `prior_mode` context manager that causes GP models to evaluate in prior mode (707); see the sketch after this list.
- Added linear mean (676).
- Added horseshoe prior (719).
- Added polynomial kernel with derivatives (783).
- Fantasy model computations now use QR for solving least squares problems, improving numerical stability (790).
- All legacy functions have been removed in favor of the new function format in PyTorch (799).
- Added Newton Girard kernel (821).
- GP predictions now automatically clear caches when backpropagating through them. Previously, training through a GP in eval mode required manually clearing the caches by toggling the GP back to train mode and then to eval mode again; this is no longer necessary (916).
- Added rational quadratic kernel (330).
- Switched to `torch.cholesky_solve` and `torch.logdet` now that they support batch mode and backwards (880).
- Better, less redundant parameterization for correlation matrices, e.g. in `IndexKernel` (912).
- Kernels now define `__getitem__`, which allows slicing batch dimensions (782).
- Performance improvements in the small data regime, e.g. n < 2000 (926).
- Increased the size of kernel matrix for which Cholesky is the default solve strategy to n=800 (946).
- Added an option for manually specifying a different preconditioner for `AddedDiagLazyTensor` (930).
- Added precommit hooks that enforce code style (927).
- Lengthscales have been refactored, and kernels have an `is_stationary` attribute (925).
- All of our example notebooks now get smoke tested by our CI.
- Added a `deterministic_probes` setting that causes our MLL computation to be fully deterministic when using CG+Lanczos, which improves L-BFGS convergence (929).
- The use of the Woodbury formula for preconditioner computations is now fully replaced by QR, which improves numerical stability (968).
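
For the `prior_mode` context manager above, a minimal sketch, assuming `model` is an already-constructed exact GP:

```python
import torch
import gpytorch

# Hedged sketch: evaluate a GP under its prior rather than its posterior.
model.eval()
test_x = torch.linspace(0, 1, 50)
with gpytorch.settings.prior_mode(True), torch.no_grad():
    prior_dist = model(test_x)  # prior predictive, ignoring training data
    prior_samples = prior_dist.sample(torch.Size([8]))
```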

Bug Fixes
- Fix a type error when calling `backward` on `gpytorch.functions.logdet` (711).
- Variational models now properly skip posterior variance calculations if the `skip_posterior_variances` context is active (741).
- Fixed an issue with `diag` mode for `PeriodicKernel` (761).
- Stability improvements for `inv_softplus` and `inv_sigmoid` (776).
- Fix incorrect size handling in `InterpolatedLazyTensor` for rectangular matrices (906).
- Fix indexing in `IndexKernel` for batch mode (911).
- Fixed an issue where slicing batch mode lazy covariance matrices resulted in incorrect behavior (782).
- Cholesky gives a better error when there are NaNs (944).
- Use `psd_safe_cholesky` in prediction strategies rather than `torch.cholesky` (956).
- An error is now raised if Cholesky is used with KeOps, which is not supported (959).
- Fixed a bug where NaNs could occur during interpolation (971).
- Fix MLL computation for heteroskedastic noise models (870).

0.3.6

A full list of bug fixes and features will be out with the 0.4 release.

0.3.5

This release addresses breaking changes in the recent PyTorch 1.2 release. Currently, GPyTorch will run on either PyTorch 1.1 or PyTorch 1.2.

A full list of new features and bug fixes will be coming soon in a GPyTorch 0.4 release.
