GPyTorch

Latest version: v1.14

0.3.4a

0.3.0

New Features
- Implement kernel checkpointing, allowing exact GPs on up to 1M data points with multiple GPUs (499)
- GPyTorch now supports hard parameter constraints (e.g. bounds) via the `register_constraint` method on `Module` (596) -- see the sketch after this list
- All GPyTorch objects now support multiple batch dimensions. In addition to training `b` GPs simultaneously, you can now train a `b1 x b2` matrix of GPs (492, 589, 627)
- `RBFKernelGrad` now supports ARD (602)
- `FixedNoiseGaussianLikelihood` offers a better interface for dealing with known observation noise values. `WhiteNoiseKernel` is now hard deprecated (593)
- `InvMatmul`, `InvQuadLogDet` and `InvQuad` are now twice differentiable (603)
- `Likelihood` has been redesigned. See the new documentation for details if you are creating custom likelihoods (591)
- Better support for more flexible Pyro models. You can now define likelihoods of the form `p(y|f, z)` where `f` is a GP and `z` are arbitrary latent variables learned by Pyro (591).
- Parameters can now be recursively initialized with full names, e.g. `model.initialize(**{"covar_module.base_kernel.lengthscale": 1., "covar_module.outputscale": 1.})` (484)
- Added `ModelList` and `LikelihoodList` for training multiple GPs when batch mode can't be used -- see example notebooks (471)
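
The hard-constraint and recursive-initialization features above compose naturally. A minimal sketch against the current API; the model structure and bound values are illustrative, not prescribed:

```python
import math
import torch
import gpytorch
from gpytorch.constraints import Interval

class GPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        # Hard-constrain the lengthscale to [0.1, 10.0] (arbitrary illustrative bounds)
        self.covar_module.base_kernel.register_constraint(
            "raw_lengthscale", Interval(0.1, 10.0)
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

train_x = torch.linspace(0, 1, 10)
train_y = torch.sin(2 * math.pi * train_x)
model = GPModel(train_x, train_y, gpytorch.likelihoods.GaussianLikelihood())

# Recursively initialize parameters by their full dotted names
model.initialize(**{
    "covar_module.base_kernel.lengthscale": 1.0,
    "covar_module.outputscale": 1.0,
})
```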

Performance and stability improvements
- CG termination is now tolerance-based, and much more rarely terminates without returning good solves. If it ever does, a warning is raised that includes suggested courses of action. (569)
- In non-ARD mode, RBFKernel and MaternKernel use custom backward implementations for performance (517)
- Up to a 3x performance improvement in the regime where the test set is very small (615)
- The noise parameter in `GaussianLikelihood` now has a default lower bound, similar to sklearn (596)
- `psd_safe_cholesky` now adds successively increasing amounts of jitter rather than only once (610) -- see the sketch after this list
- Variational inference initialization now uses `psd_safe_cholesky` rather than `torch.cholesky` to initialize with the prior (610)
- The pivoted Cholesky preconditioner now uses a QR decomposition for its solve rather than the Woodbury formula for speed and stability (617)
- GPyTorch now uses Cholesky for solves with very small matrices rather than CG, resulting in reduced overhead for that setting (586)
- Cholesky can additionally be turned on manually to help with debugging (586)
- Kernel distance computations now use `torch.cdist` when on PyTorch 1.1.0 in the non-batch setting (642)
- CUDA unit tests now default to using the least used available GPU when run (515)
- `MultiDeviceKernel` is now much faster (491)
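
The successively-increasing-jitter behavior is easy to picture. Here is an illustrative re-implementation of the idea in plain PyTorch; `cholesky_with_jitter` is a hypothetical name, not GPyTorch's actual `psd_safe_cholesky`:

```python
import torch

def cholesky_with_jitter(mat, max_tries=3):
    """Retry Cholesky with successively larger diagonal jitter (sketch only)."""
    try:
        return torch.linalg.cholesky(mat)
    except RuntimeError:
        pass
    eye = torch.eye(mat.size(-1), dtype=mat.dtype, device=mat.device)
    for i in range(max_tries):
        jitter = 1e-6 * (10 ** i)  # 1e-6, then 1e-5, then 1e-4, ...
        try:
            return torch.linalg.cholesky(mat + jitter * eye)
        except RuntimeError:
            continue
    raise RuntimeError("Matrix not positive definite, even with jitter.")
```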

Bug Fixes
- Fixed an issue with variational covariances at test time (638)
- Fixed an issue where the training covariance wasn't being detached for variance computations, occasionally resulting in backward errors (566)
- Fixed an issue where `active_dims` in kernels was being applied twice (576)
- Fixes and stability improvements for `MultiDeviceKernel` (560)
- Fixed an issue where `fast_pred_var` was failing for single training inputs (574)
- Fixed an issue when initializing parameter values with non-tensor values (630)
- Fixed an issue with handling the preconditioner log determinant value for MLL computation (634)
- Fixed an issue where `prior_dist` was being cached for VI, which was problematic for pyro models (599)
- Fixed a number of issues with `LinearKernel`, including one where the variance could go negative (584)
- Fixed a bug where training inputs couldn't be set with `set_train_data` if they were previously `None` (565)
- Fixed a number of bugs in `MultitaskMultivariateNormal` (545, 553)
- Fixed an indexing bug in `batch_symeig` (547)
- Fixed an issue where `MultitaskMultivariateNormal` wasn't interleaving rows correctly (540)

Other
- GPyTorch now requires Python 3.6, and we've begun to include static type hints (581)
- Parameters in GPyTorch no longer have default singleton batch dimensions. For example, the default shape of `lengthscale` is now `torch.Size([1])` rather than `torch.Size([1, 1])` (605)
- `setup.py` now includes optional dependencies, reads requirements from `requirements.txt`, and does not require `torch` if `pytorch-nightly` is installed (495)

0.2.1

You can install GPyTorch via Anaconda (463)

Speed and stability
- Kernel distances use the JIT for fast computations (464)
- LinearCG uses the JIT for fast computations (464)
- Improve the stability of computing kernel distances (455)

Features
Variational inference improvements
- Sped up variational models by batching all matrix solves in one call (454)
- Can use the same set of inducing points for batch variational GPs (445) -- see the sketch after this list
- Whitened variational inference for improved convergence (493)
- Variational log likelihoods for BernoulliLikelihood are computed with quadrature (473)
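
A minimal sketch of a whitened, batched variational GP that shares one set of inducing points across the batch. It uses present-day class names (`ApproximateGP`, and `VariationalStrategy`, which whitens by default), which differ slightly from the 0.2.1-era API:

```python
import torch
import gpytorch

class VariationalGP(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points, batch_shape=torch.Size([])):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=batch_shape
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=batch_shape)
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=batch_shape), batch_shape=batch_shape
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# One shared set of 16 inducing points for a batch of 3 GPs
inducing = torch.linspace(0, 1, 16).unsqueeze(-1)  # shape (16, 1)
model = VariationalGP(inducing, batch_shape=torch.Size([3]))
```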

Multi-GPU Gaussian processes
- Can train and test GPs by dividing the kernel onto multiple GPUs (450)
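
The multi-GPU API wraps an ordinary kernel in `MultiDeviceKernel`, which partitions the kernel matrix across the listed devices. A minimal sketch; the device ids and model structure are illustrative:

```python
import torch
import gpytorch

class MultiGPUExactGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood, n_devices):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        base_kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        # Evaluate the kernel in chunks on GPUs 0..n_devices-1, gathering on cuda:0
        self.covar_module = gpytorch.kernels.MultiDeviceKernel(
            base_kernel, device_ids=range(n_devices),
            output_device=torch.device("cuda:0"),
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```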

GPs with derivatives
- Can define RBFKernels for observations and their derivatives (462)
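
A GP over both function values and derivatives pairs the gradient-aware mean and kernel classes with a multitask distribution. A minimal sketch for 1-D inputs, where each observation supplies f(x) and f'(x):

```python
import torch
import gpytorch

class GPWithDerivatives(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        # train_y has shape (n, 2): function values and derivatives stacked per point
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())

    def forward(self, x):
        return gpytorch.distributions.MultitaskMultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# Values and derivatives act as 2 "tasks" per input point
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2)
```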

LazyTensors
- LazyTensors can broadcast matrix multiplication (459)
- Can use the `@` sign for matrix multiplication with LazyTensors
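
A short illustration of broadcast matrix multiplication with the `@` operator, using the `gpytorch.lazy` names from this release era (later versions moved LazyTensor into the separate linear_operator package):

```python
import torch
from gpytorch.lazy import NonLazyTensor  # era-specific location of LazyTensor classes

# A batch of 4 lazily represented 5x5 matrices
batch_mats = NonLazyTensor(torch.randn(4, 5, 5))
rhs = torch.randn(5, 3)

# `@` dispatches to LazyTensor matmul; the rhs broadcasts across the batch dimension
result = batch_mats @ rhs  # shape (4, 5, 3)
```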

GP-list
- Convenience methods for training/testing multiple GPs in a list (471)
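
A minimal sketch of the list interface using today's class names (`IndependentModelList`, `LikelihoodList`, `SumMarginalLogLikelihood`); note that the GPs in the list may have differently sized training sets:

```python
import torch
import gpytorch

class SimpleGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

lik1 = gpytorch.likelihoods.GaussianLikelihood()
lik2 = gpytorch.likelihoods.GaussianLikelihood()
model = gpytorch.models.IndependentModelList(
    SimpleGP(torch.rand(20), torch.rand(20), lik1),
    SimpleGP(torch.rand(30), torch.rand(30), lik2),
)
likelihood = gpytorch.likelihoods.LikelihoodList(lik1, lik2)
mll = gpytorch.mlls.SumMarginalLogLikelihood(likelihood, model)

# One loss trains all GPs in the list jointly
output = model(*model.train_inputs)
loss = -mll(output, model.train_targets)
```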

Other
- Added a `gpytorch.settings.fast_computations` feature to (optionally) use Cholesky-based inference (456) -- see the sketch after this list
- Distributions define event shapes (469)
- Can recursively initialize parameters on GP modules (484)
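
A sketch of how the setting is used as a context manager; the three sub-setting names below are the ones documented in current releases:

```python
import gpytorch

# Fall back to exact, Cholesky-based computations inside this block
with gpytorch.settings.fast_computations(
    covar_root_decomposition=False,  # Cholesky instead of Lanczos for root decompositions
    log_prob=False,                  # exact log-determinants in the MLL
    solves=False,                    # exact solves instead of conjugate gradients
):
    ...  # build the model and evaluate the marginal log likelihood here
```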

Bugs
- Can initialize `noise` in GaussianLikelihood (479)
- Fixed bugs in SGPR kernel (487)

0.1.1

Features
- Batch GPs, which previously were a feature, are now well-documented and much more stable [(see docs)](https://gpytorch.readthedocs.io/en/latest/batch_gps.html)
- Can add "fantasy observations" to models (see the sketch after this list).
- Option for exact marginal log likelihood and sampling computations (this is slower, but potentially useful for debugging) (`gpytorch.settings.fast_computations`)
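
The fantasy-observation API conditions an already-trained model on extra data without re-optimizing hyperparameters. A minimal sketch using `get_fantasy_model` (the method name in current releases), assuming `model` is a trained `ExactGP` over 1-D inputs:

```python
import torch

model.eval()  # condition in posterior mode

new_x = torch.randn(5)
new_y = torch.randn(5)

# Returns a new model whose posterior also conditions on (new_x, new_y)
fantasy_model = model.get_fantasy_model(new_x, new_y)
```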

Bug fixes
- Easier usage of batch GPs
- Reduce bugs in [additive regression models](https://gpytorch.readthedocs.io/en/latest/examples/05_Scalable_GP_Regression_Multidimensional/KISSGP_Additive_Regression_CUDA.html)

0.1.0

0.1.0.rc5

Stability of hyperparameters
- Hyperparameters that are constrained to be positive (e.g. variance, lengthscale, etc.) are now parameterized through the softplus function (`log(1 + e^x)`) rather than through the log function -- see the sketch after this list
- This dramatically improves the numerical stability and optimization of hyperparameters
- Old models that were trained with `log` parameters will still work, but this is deprecated.
- Inference now handles certain numerical floating point round-off errors more gracefully.
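
A quick illustration of why the softplus parameterization is friendlier to optimize than the old log/exp one:

```python
import torch
import torch.nn.functional as F

raw = torch.tensor(-2.0)     # unconstrained raw parameter; any real value is valid
value = F.softplus(raw)      # log(1 + e^x) = 0.1269... -- always strictly positive
old_value = torch.exp(raw)   # the old log parameterization gives 0.1353...

# For large positive raw, softplus grows linearly while exp() explodes
# exponentially, so gradient magnitudes stay controlled during optimization.
```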

Various stability improvements to variational inference

Other changes
- `GridKernel` can be used for data that lies on a perfect grid.
- New preconditioner for LazyTensors.
- Use batched Cholesky functions for improved performance (requires updating PyTorch)
