BoTorch

Latest version: v0.13.0

0.13.0

Highlights
* The BoTorch website has been upgraded to use Docusaurus v3, with the API
reference now hosted on ReadTheDocs. The tutorials now offer an option to
open in Colab, giving easy access to a runtime where they can be modified.
Old versions of the website can be found at archive.botorch.org (2653).
* `RobustRelevancePursuitSingleTaskGP`, a robust Gaussian process model that adaptively identifies
outliers and leverages Bayesian model selection ([paper](https://arxiv.org/pdf/2410.24222)) (2608, 2690, 2707).
* `LatentKroneckerGP`, a scalable model for data on partially observed grids, such as the joint modeling
of hyperparameters and partially completed learning curves in AutoML ([paper](https://arxiv.org/pdf/2410.09239)) (2647).
* Add MAP-SAAS model, which uses sparse axis-aligned subspace (SAAS) priors
([paper](https://proceedings.mlr.press/v161/eriksson21a/eriksson21a.pdf)) with MAP model fitting (2694); a usage sketch follows this list.
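
The new models above slot into BoTorch's usual fit-then-optimize loop; here is a minimal sketch of that loop using `SingleTaskGP` as a stand-in, since these notes do not spell out the new classes' constructor signatures:

```python
# Minimal sketch of the standard BoTorch loop the new 0.13.0 models plug into.
# SingleTaskGP stands in for the new model classes (e.g.
# RobustRelevancePursuitSingleTaskGP), whose exact constructors are not shown here.
import torch
from botorch.acquisition import qLogNoisyExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 3, dtype=torch.double)
train_Y = train_X.sin().sum(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y)  # outcomes standardized by default since 0.12.0
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = qLogNoisyExpectedImprovement(model=model, X_baseline=train_X)
bounds = torch.tensor([[0.0] * 3, [1.0] * 3], dtype=torch.double)
candidate, acq_value = optimize_acqf(
    acqf, bounds=bounds, q=1, num_restarts=8, raw_samples=64
)
```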

Compatibility
* Require GPyTorch==1.14 and linear_operator==0.6 (2710).
* Remove support for the official Anaconda package (2617).
* Remove `mpmath` dependency pin (2640).
* Updates to optimization routines to support SciPy>1.15:
  * Use `threadpoolctl` in `minimize_with_timeout` to prevent CPU oversubscription (2712); the general pattern is sketched after this list.
  * Update optimizer output parsing to make model fitting compatible with SciPy>1.15 (2667).
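
The `threadpoolctl` change follows a general pattern: cap native thread pools (BLAS/OpenMP) while SciPy's optimizer runs, so nested parallelism does not oversubscribe the CPU. The sketch below illustrates that pattern in isolation; it is not BoTorch's internal `minimize_with_timeout` code:

```python
# Generic sketch of the CPU-oversubscription guard referenced above:
# cap native thread pools while SciPy's optimizer runs. Not BoTorch internals.
import numpy as np
from scipy.optimize import minimize
from threadpoolctl import threadpool_limits

def objective(x: np.ndarray) -> float:
    return float(np.sum(x ** 2))

with threadpool_limits(limits=1):  # limit BLAS/OpenMP pools to a single thread
    result = minimize(objective, x0=np.ones(4), method="L-BFGS-B")
```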

New Features
* Add support for priors in OAK Kernel (2535).
* Add `BatchBroadcastedTransformList`, which broadcasts a list of `InputTransform`s over batch shapes (2558).
* `InteractionFeatures` input transform (2560).
* Implement `percentile_of_score`, which takes inputs `data` and `score` and returns the percentile of
values in `data` that fall below `score` (2568); the semantics are illustrated after this list.
* Add `optimize_acqf_mixed_alternating`, which supports optimization over mixed discrete & continuous spaces (2573).
* Add support for `PosteriorTransform` to `get_optimal_samples` and `optimize_posterior_samples` (2576).
* Support inequality constraints & `X_avoid` in `optimize_acqf_discrete` (2593).
* Add ability to mix batch initial conditions and internal IC generation (2610).
* Add `qPosteriorStandardDeviation` acquisition function (2634).
* TopK downselection for initial batch generation (2636).
* Support optimization over mixed spaces in `optimize_acqf_homotopy` (2639).
* Add `InfeasibilityError` exception class (2652).
* Support `InputTransform`s in `SparseOutlierLikelihood` and `get_posterior_over_support` (2659).
* `StratifiedStandardize` outcome transform (2671).
* Add `center` argument to `Normalize` (2680).
* Add input normalization step in `Warp` input transform (2692).
* Support mixing fully Bayesian & `SingleTaskGP` models in `ModelListGP` (2693).
* Add abstract fully Bayesian GP class and fully Bayesian linear GP model (2696, 2697).
* Tutorial on BO constrained by probability of classification model (2700).
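
As a concrete illustration of the `percentile_of_score` semantics described above, the snippet below re-implements the behavior in plain PyTorch; it is an illustrative sketch, not the library's implementation:

```python
# Illustrative re-implementation of the percentile_of_score semantics:
# the percentage of entries in `data` that fall below `score`. Not library code.
import torch

def percentile_of_score_sketch(data: torch.Tensor, score: torch.Tensor) -> torch.Tensor:
    return 100.0 * (data < score).to(dtype=data.dtype).mean(dim=-1)

data = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(percentile_of_score_sketch(data, torch.tensor(3.5)))  # tensor(75.)
```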

Bug Fixes
* Fix error in decoupled_mobo tutorial due to torch/numpy issues (2550).
* Raise error for MTGP in `batch_cross_validation` (2554).
* Fix `posterior` method in `BatchedMultiOutputGPyTorchModel` for tracing JIT (2592).
* Replace hard-coded double precision in test_functions with default dtype (2597).
* Remove `as_tensor` argument of `set_tensors_from_ndarray_1d` (2615).
* Skip fixed feature enumerations in `optimize_acqf_mixed` that can't satisfy the parameter constraints (2614).
* Fix `get_default_partitioning_alpha` for >7 objectives (2646).
* Fix random seed handling in `sample_hypersphere` (2688).
* Fix bug in `optimize_objective` with fixed features (2691).
* `FullyBayesianSingleTaskGP.train` should not return `None` (2702).

Other Changes
* More efficient sampling from `KroneckerMultiTaskGP` (2460).
* Update `HigherOrderGP` to use new priors & standardize outcome transform by default (2555).
* Update `initialize_q_batch` methods to return both candidates and the corresponding acquisition values (2571).
* Update optimization documentation with LogEI insights (2587).
* Make all arguments in `optimize_acqf_homotopy` explicit (2588).
* Introduce `trial_indices` argument to `SupervisedDataset` (2595).
* Make optimizers raise an error when provided negative indices for fixed features (2603).
* Make input transforms `Module`s by default (2607).
* Reduce memory usage in `ConstrainedMaxPosteriorSampling` (2622).
* Add `clone` method to datasets (2625).
* Add support for continuous relaxation within `optimize_acqf_mixed_alternating` (2635).
* Update indexing in `qLogNEI._get_samples_and_objectives` to support multiple input batches (2649).
* Pass `X` to `OutcomeTransform`s (2663).
* Use mini-batches when evaluating candidates within `optimize_acqf_discrete_local_search` (2682).

Deprecations
* Remove `HeteroskedasticSingleTaskGP` (2616).
* Remove `FixedNoiseDataset` (2626).
* Remove support for legacy format non-linear constraints (2627).
* Remove `maximize` option from information theoretic acquisition functions (2590).

0.12.0

Major changes
* Update most models to use dimension-scaled log-normal hyperparameter priors by
default, which makes performance much more robust to dimensionality. See
discussion 2451 for details. The only models left unchanged are the fully
Bayesian models and `PairwiseGP`; for models that use a composite kernel,
such as the multi-fidelity/task/context models, this change only affects the
base kernel (2449, 2450, 2507).
* Use `Standardize` by default in all models that use the upgraded priors. In
addition to reducing the amount of boilerplate needed to initialize a model,
this change was motivated by the new default priors, which work less well
when the data is not standardized. Users who do not want to use transforms
should explicitly pass in `None` (2458, 2532); see the sketch after this list.
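
A minimal sketch of what the new defaults mean in practice, assuming `SingleTaskGP`'s standard constructor: outcomes are standardized by default, and opting out requires passing `outcome_transform=None` explicitly:

```python
# Sketch of the 0.12.0 defaults described above (assumes the standard
# SingleTaskGP constructor): outcomes are standardized unless you opt out.
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = 100.0 + 5.0 * torch.rand(10, 1, dtype=torch.double)  # far from zero mean / unit variance

model_default = SingleTaskGP(train_X, train_Y)                      # Standardize applied by default
model_raw = SingleTaskGP(train_X, train_Y, outcome_transform=None)  # explicitly opt out of transforms
```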

Compatibility
* Unpin NumPy (2459).
* Require PyTorch>=2.0.1, GPyTorch==1.13, and linear_operator==0.5.3 (2511).

New features
* Introduce `PathwiseThompsonSampling` acquisition function (2443).
* Enable `qBayesianActiveLearningByDisagreement` to accept a posterior
transform, and improve its implementation (2457).
* Enable `SaasPyroModel` to sample via NUTS when training data is empty (2465).
* Add multi-objective `qBayesianActiveLearningByDisagreement` (2475).
* Add input constructor for `qNegIntegratedPosteriorVariance` (2477).
* Introduce `qLowerConfidenceBound` (2517); a sketch follows this list.
* Add input constructor for `qMultiFidelityHypervolumeKnowledgeGradient` (2524).
* Add `posterior_transform` to `ApproximateGPyTorchModel.posterior` (2531).
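
Below is a hedged sketch of the new `qLowerConfidenceBound`; the import path and the `model`/`beta` constructor are assumed to mirror `qUpperConfidenceBound` and may differ from the actual API:

```python
# Hedged sketch of qLowerConfidenceBound (2517); constructor and import path are
# assumed to mirror qUpperConfidenceBound(model, beta) and may differ in detail.
import torch
from botorch.acquisition.monte_carlo import qLowerConfidenceBound
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(15, 2, dtype=torch.double)
train_Y = train_X.prod(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

lcb = qLowerConfidenceBound(model=model, beta=0.2)  # assumed signature
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, value = optimize_acqf(lcb, bounds=bounds, q=1, num_restarts=4, raw_samples=32)
```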

Bug fixes
* Fix `batch_shape` default in `OrthogonalAdditiveKernel` (2473).
* Ensure all tensors are on CPU in `HitAndRunPolytopeSampler` (2502).
* Fix duplicate logging in `generation/gen.py` (2504).
* Raise exception if `X_pending` is set on the underlying `AcquisitionFunction`
in prior-guided `AcquisitionFunction` (2505).
* Make affine input transforms error with data of incorrect dimension, even in
eval mode (2510).
* Use fidelity-aware `current_value` in input constructor for `qMultiFidelityKnowledgeGradient` (2519).
* Apply input transforms when computing MLL in model closures (2527).
* Detach `fval` in `torch_minimize` to remove an opportunity for memory leaks
(2529).

Documentation
* Clarify incompatibility of inter-point constraints with `get_polytope_samples`
(2469).
* Update tutorials to use the log variants of EI-family acquisition functions,
stop passing `Standardize` unnecessarily in tutorials, and apply other
simplifications and cleanup (2462, 2463, 2490, 2495, 2496, 2498, 2499).
* Remove deprecated `FixedNoiseGP` (2536).

Other changes
* More informative warnings about failure to standardize or normalize data
(2489).
* Suppress irrelevant warnings in `qHypervolumeKnowledgeGradient` helpers
(2486).
* Cleaner `botorch/acquisition/multi_objective` directory structure (2485).
* With `AffineInputTransform`, always require data to have at least two
dimensions (2518).
* Remove deprecated argument `data_fidelity` to `SingleTaskMultiFidelityGP` and
deprecated model `FixedNoiseMultiFidelityGP` (2532).
* Raise an `OptimizationGradientError` when optimization produces NaN gradients (2537).
* Improve numerics by replacing `torch.log(1 + x)` with `torch.log1p(x)`
and `torch.exp(x) - 1` with `torch.special.expm1(x)` (2539, 2540, 2541); illustrated below.
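
The `log1p`/`expm1` items are easiest to appreciate with a small numeric example: for a tiny `x` in single precision, `1 + x` rounds back to `1`, so the naive expressions lose the answer entirely while the fused operations keep it:

```python
# Why log1p/expm1 improve numerics: for small x in float32, 1 + x rounds to 1,
# so the naive forms return 0 while the fused operations preserve the result.
import torch

x = torch.tensor(1e-10, dtype=torch.float32)
print(torch.log(1 + x))        # tensor(0.) -- the 1e-10 is lost in the addition
print(torch.log1p(x))          # tensor(1.0000e-10)
print(torch.exp(x) - 1)        # tensor(0.)
print(torch.special.expm1(x))  # tensor(1.0000e-10)
```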

0.11.3

Compatibility
* Pin NumPy to <2.0 (2382).
* Require GPyTorch 1.12 and LinearOperator 0.5.2 (2408, 2441).

New features
* Support evaluating posterior predictive in `MultiTaskGP` (2375).
* Infinite-width BNN kernel (2366) and the corresponding tutorial (2381).
* An improved elliptical slice sampling implementation (2426).
* Add a helper for producing a `DeterministicModel` using a Matheron path (2435).

Deprecations and Deletions
* Stop allowing some arguments to be ignored in acqf input constructors (2356).
* Reap deprecated `**kwargs` argument from `optimize_acqf` variants (2390).
* Delete `DeterministicPosterior` and `DeterministicSampler` (2391, 2409, 2410).
* Remove deprecated `CachedCholeskyMCAcquisitionFunction` (2399).
* Deprecate model conversion code (2431).
* Deprecate `gp_sampling` module in favor of pathwise sampling (2432).

Bug Fixes
* Fix observation noise shape for batched models (2377).
* Fix `sample_all_priors` to not sample one value for all lengthscales (2404).
* Make `(Log)NoisyExpectedImprovement` create a correct fantasy model with
non-default `SingleTaskGP` (2414).

Other Changes
* Various documentation improvements (2395, 2425, 2436, 2437, 2438).
* Clean up `**kwargs` arguments in `qLogNEI` (2406).
* Add a `NumericsWarning` for Legacy EI implementations (2429).

0.11.2

See the 0.11.3 release. This release failed due to mismatched GPyTorch and LinearOperator versions.

0.11.1

New Features
* Implement `qLogNParEGO` (2364).
* Support picking best of multiple fit attempts in `fit_gpytorch_mll` (2373).

Deprecations
* Many functions that used to silently ignore arbitrary keyword arguments will now
raise an exception when passed unsupported arguments (2327, 2336).
* Remove `UnstandardizeMCMultiOutputObjective` and `UnstandardizePosteriorTransform` (2362).

Bug Fixes
* Remove correlation between the step size and the step direction in `sample_polytope` (2290).
* Fix pathwise sampler bug (2337).
* Explicitly check timeout against `None` so that `0.0` isn't ignored (2348).
* Fix boundary handling in `sample_polytope` (2353).
* Avoid division by zero in `normalize` & `unnormalize` when the lower and upper bounds are equal (2363); sketched after this list.
* Update `sample_all_priors` to support wider set of priors (2371).
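
A minimal sketch of the degenerate-bounds case addressed by the `normalize`/`unnormalize` fix above: when a dimension has equal lower and upper bounds, the scale factor `upper - lower` is zero, which previously produced non-finite values:

```python
# Sketch of the degenerate-bounds case fixed above: the second dimension has
# lower == upper, so the naive (X - lower) / (upper - lower) divided by zero.
import torch
from botorch.utils.transforms import normalize, unnormalize

X = torch.tensor([[0.25, 3.0]], dtype=torch.double)
bounds = torch.tensor([[0.0, 3.0], [1.0, 3.0]], dtype=torch.double)  # rows: lower, upper

X_unit = normalize(X, bounds)          # after the fix, no inf/nan from the degenerate dimension
X_back = unnormalize(X_unit, bounds)
print(X_unit, X_back)
```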

Other Changes
* Clarify `is_non_dominated` behavior with NaN (2332).
* Add input constructor for `qEUBO` (2335).
* Add `LogEI` as a baseline in the `TuRBO` tutorial (2355).
* Update polytope sampling code and add thinning capability (2358).
* Add initial objective values to initial state for sample efficiency (2365).
* Clarify behavior on standard deviations with <1 degree of freedom (2357).

0.11.0

Compatibility
* Require Python >= 3.10 (2293).

New Features
* SCoreBO and Bayesian Active Learning acquisition functions (2163).

Bug Fixes
* Fix non-None constraint noise levels in some constrained test problems (2241).
* Fix inverse cost-weighted utility behavior for non-positive acquisition values (2297).

Other Changes
* Don't allow unused keyword arguments in `Model.construct_inputs` (2186).
* Re-map task values in MTGP if they are not contiguous integers starting from zero (2230).
* Unify `ModelList` and `ModelListGP` `subset_output` behavior (2231).
* Ensure `mean` and `interior_point` of `LinearEllipticalSliceSampler` have correct shapes (2245).
* Speed up task covariance of `LCEMGP` (2260).
* Improvements to `batch_cross_validation`, support for model init kwargs (2269).
* Support custom `all_tasks` for MTGPs (2271).
* Error out if scipy optimizer does not support bounds / constraints (2282).
* Support diagonal covariance root with fixed indices for `LinearEllipticalSliceSampler` (2283).
* Make `qNIPV` a subclass of `AcquisitionFunction` rather than `AnalyticAcquisitionFunction` (2286).
* Increase code-sharing of `LCEMGP` & define `construct_inputs` (2291).

Deprecations
* Remove deprecated args from base `MCSampler` (2228).
* Remove deprecated `botorch/generation/gen/minimize` (2229).
* Remove `fit_gpytorch_model` (2250).
* Remove `requires_grad_ctx` (2252).
* Remove `base_samples` argument of `GPyTorchPosterior.rsample` (2254).
* Remove deprecated `mvn` argument to `GPyTorchPosterior` (2255).
* Remove deprecated `Posterior.event_shape` (2320).
* Remove `**kwargs` & deprecated `indices` argument of `Round` transform (2321).
* Remove `Standardize.load_state_dict` (2322).
* Remove `FixedNoiseMultiTaskGP` (2323).
