Major New Features and Improvements
Each feature in this section comes with a new example notebook and documentation on how to use it -- check the new docs!
- Added support for deep Gaussian processes (564).
- KeOps integration has been added -- with KeOps installed, replace certain `gpytorch.kernels.SomeKernel` modules with `gpytorch.kernels.keops.SomeKernel`, and run exact GPs on 100,000+ data points (812; a minimal sketch follows this list).
- Variational inference has undergone significant internal refactoring! All old variational objects should still function, but many are deprecated (903).
- Our integration with Pyro has been completely overhauled and is now much improved. For examples of interesting GP + Pyro models, see our new examples (903).
- Our example notebooks have been completely reorganized, and the documentation surrounding them has been rewritten to provide a better tutorial for GPyTorch (954).
- Added support for fully Bayesian GP modeling via NUTS (918; a rough sketch follows this list).
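For example, a minimal sketch of the KeOps swap (the model class, data, and kernel choice here are hypothetical; any kernel with a KeOps counterpart works the same way):

```python
import gpytorch

class KeOpsGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Drop-in replacement: gpytorch.kernels.RBFKernel -> gpytorch.kernels.keops.RBFKernel
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.keops.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```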
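And a rough sketch of the fully Bayesian workflow, assuming an `ExactGP` model with priors registered on its hyperparameters (the Pyro interop details are illustrative, not canonical; see the new example notebook for the definitive version):

```python
import pyro
from pyro.infer.mcmc import MCMC, NUTS

def pyro_model(x, y):
    # Draw hyperparameters from their priors inside the Pyro trace
    sampled_model = model.pyro_sample_from_prior()
    # In train mode, calling the model returns the GP prior at x
    output = sampled_model.likelihood(sampled_model(x))
    # Condition on the observed targets
    pyro.sample("obs", output, obs=y)

nuts_kernel = NUTS(pyro_model)
mcmc_run = MCMC(nuts_kernel, num_samples=100, warmup_steps=100)
mcmc_run.run(train_x, train_y)
```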
Minor New Features and Improvements
- `GridKernel` and `GridInterpolationKernel` now support rectangular grids (888; see the sketch after this list).
- Added cylindrical kernel (577).
- Added polynomial kernel (668; several of the new kernels are shown in a sketch after this list).
- Added tutorials on basic usage (hyperparameters, saving/loading, etc.) (685).
- `get_fantasy_model` now supports batched models (693; a sketch follows this list).
- Added a `prior_mode` context manager that causes GP models to evaluate in prior mode (707; a sketch follows this list).
- Added linear mean (676).
- Added horseshoe prior (719; the new mean and prior are sketched after this list).
- Added polynomial kernel with derivatives (783).
- Fantasy model computations now use QR for solving least squares problems, improving numerical stability (790).
- All legacy functions have been removed, in favor of the new function format in PyTorch (799).
- Added Newton-Girard kernel (821).
- GP predictions now automatically clear caches when backpropagating through them. Previously, if you wanted to train through a GP in eval mode, you had to clear the caches manually by toggling the GP back to train mode and then to eval mode again. This is no longer necessary (916).
- Added rational quadratic kernel (330).
- Switched to `torch.cholesky_solve` and `torch.logdet` now that they support batch mode and backwards passes (880).
- Better, less redundant parameterization for correlation matrices, e.g. in `IndexKernel` (912).
- Kernels now define `__getitem__`, which allows slicing batch dimensions (782; a sketch follows this list).
- Performance improvements in the small-data regime, e.g. n < 2000 (926).
- Increased the maximum kernel matrix size for which Cholesky is the default solve strategy to n=800 (946).
- Added an option for manually specifying a different preconditioner for `AddedDiagLazyTensor` (930).
- Added pre-commit hooks that enforce code style (927).
- Lengthscales have been refactored, and kernels have an `is_stationary` attribute (925).
- All of our example notebooks are now smoke-tested by our CI.
- Added a `deterministic_probes` setting that makes our MLL computation fully deterministic when using CG+Lanczos, which improves L-BFGS convergence (929; see the settings sketch after this list).
- Preconditioner computations now use QR in place of the Woodbury formula, which improves numerical stability (968).
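For instance, a sketch of a rectangular SKI grid, assuming `grid_size` accepts a per-dimension list after this change (the base kernel and grid sizes are illustrative):

```python
import gpytorch

# A 2D SKI kernel on a rectangular (non-square) interpolation grid:
# 100 grid points along the first dimension, 30 along the second.
covar_module = gpytorch.kernels.GridInterpolationKernel(
    gpytorch.kernels.RBFKernel(),
    grid_size=[100, 30],
    num_dims=2,
)
```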
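A quick sketch of a few of the new kernels (the constructor arguments shown are indicative, not exhaustive):

```python
import gpytorch

poly = gpytorch.kernels.PolynomialKernel(power=2)
rq = gpytorch.kernels.RQKernel()
# Additive structure over 1D base kernels, computed via the Newton-Girard identities
ng = gpytorch.kernels.NewtonGirardAdditiveKernel(
    gpytorch.kernels.RBFKernel(), num_dims=4, max_degree=2
)
```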
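A sketch of batched fantasization, assuming `model` is a trained `ExactGP` with `batch_shape=torch.Size([b])` (all shapes here are hypothetical):

```python
import torch

b, d = 3, 2                    # hypothetical batch size and input dimension
new_x = torch.randn(b, 5, d)   # 5 new observations per batch element
new_y = torch.randn(b, 5)
# Returns a new model conditioned on the additional observations
fantasy_model = model.get_fantasy_model(new_x, new_y)
```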
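The `prior_mode` setting in use (the `model` and `test_x` names are placeholders):

```python
import gpytorch

model.eval()
with gpytorch.settings.prior_mode(True):
    # Conditioning on the training data is skipped; this returns
    # the GP prior evaluated at test_x rather than the posterior.
    prior_dist = model(test_x)
```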
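A sketch of the new mean and prior (the input dimensionality and prior scale are arbitrary):

```python
import gpytorch

d = 3  # hypothetical input dimensionality

# A linear (rather than constant) mean over d input features
mean_module = gpytorch.means.LinearMean(input_size=d)

# A horseshoe prior placed on a kernel hyperparameter, here the outputscale
covar_module = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.RBFKernel(),
    outputscale_prior=gpytorch.priors.HorseshoePrior(0.1),
)
```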
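Batch slicing in action (a minimal sketch; the batch size is arbitrary):

```python
import torch
import gpytorch

# A kernel carrying a batch of 10 hyperparameter settings...
kernel = gpytorch.kernels.RBFKernel(batch_shape=torch.Size([10]))
# ...whose batch dimension can now be sliced directly:
first_three = kernel[:3]  # batch_shape == torch.Size([3])
```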
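And a settings sketch combining the Cholesky threshold with the new probe behavior (`mll`, `model`, and the training data are placeholders; the threshold value is arbitrary):

```python
import gpytorch

# Raise the Cholesky cutoff above the new default of n=800, and make the
# stochastic trace probes deterministic so L-BFGS sees a consistent objective.
with gpytorch.settings.max_cholesky_size(2000), gpytorch.settings.deterministic_probes(True):
    output = model(train_x)
    loss = -mll(output, train_y)
```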
Bug fixes
- Fixed a type error when calling `backward` on `gpytorch.functions.logdet` (711).
- Variational models now properly skip posterior variance calculations if the `skip_posterior_variances` context is active (741).
- Fixed an issue with `diag` mode for `PeriodicKernel` (761).
- Stability improvements for `inv_softplus` and `inv_sigmoid` (776).
- Fixed incorrect size handling in `InterpolatedLazyTensor` for rectangular matrices (906).
- Fixed indexing in `IndexKernel` for batch mode (911).
- Fixed an issue where slicing batch mode lazy covariance matrices resulted in incorrect behavior (782).
- Cholesky now raises a more informative error when NaNs are present (944).
- Prediction strategies now use `psd_safe_cholesky` rather than `torch.cholesky` (956).
- An error is now raised if Cholesky is used with KeOps, which is not supported (959).
- Fixed a bug where NaNs could occur during interpolation (971).
- Fixed MLL computation for heteroskedastic noise models (870).