BoTorch

Latest version: v0.12.0

0.12.0

Major changes
* Update most models to use dimension-scaled log-normal hyperparameter priors by
default, which makes performance much more robust to dimensionality. See
discussion 2451 for details. The only models _not_ changed are the fully
Bayesian models and `PairwiseGP`; for models that use a composite kernel,
such as multi-fidelity/task/context, this change only affects the base
kernel (2449, 2450, 2507).
* Use `Standardize` by default in all models that use the upgraded priors. Besides
reducing the boilerplate needed to initialize a model, this change was motivated
by the new default priors, which work less well when the data is not standardized.
Users who do not want to use transforms should explicitly pass in `None` (2458, 2532).
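
As a rough illustration of the new defaults (a minimal sketch; it assumes only the standard `SingleTaskGP` constructor arguments), opting out of the default outcome transform now requires an explicit `None`:

```python
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 3, dtype=torch.double)
train_Y = 50 + 100 * torch.randn(20, 1, dtype=torch.double)  # far from zero mean / unit variance

# As of 0.12.0 this applies a Standardize outcome transform by default,
# which keeps the dimension-scaled default priors well calibrated.
model = SingleTaskGP(train_X, train_Y)

# To train on raw, unstandardized targets, opt out explicitly.
model_raw = SingleTaskGP(train_X, train_Y, outcome_transform=None)
```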

Compatibility
* Unpin NumPy (2459).
* Require PyTorch>=2.0.1, GPyTorch==1.13, and linear_operator==0.5.3 (2511).

New features
* Introduce `PathwiseThompsonSampling` acquisition function (2443).
* Enable `qBayesianActiveLearningByDisagreement` to accept a posterior
transform, and improve its implementation (2457).
* Enable `SaasPyroModel` to sample via NUTS when training data is empty (2465).
* Add multi-objective `qBayesianActiveLearningByDisagreement` (2475).
* Add input constructor for `qNegIntegratedPosteriorVariance` (2477).
* Introduce `qLowerConfidenceBound` (2517); see the sketch after this list.
* Add input constructor for `qMultiFidelityHypervolumeKnowledgeGradient` (2524).
* Add `posterior_transform` to `ApproximateGPyTorchModel.posterior` (2531).
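
A minimal sketch of the new `qLowerConfidenceBound`, assuming it mirrors the existing `qUpperConfidenceBound` constructor and is importable from `botorch.acquisition`:

```python
import torch
from botorch.acquisition import qLowerConfidenceBound
from botorch.models import SingleTaskGP
from botorch.sampling import SobolQMCNormalSampler

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.randn(10, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

# The lower-confidence-bound counterpart of qUCB, useful for minimization.
sampler = SobolQMCNormalSampler(sample_shape=torch.Size([256]))
acqf = qLowerConfidenceBound(model=model, beta=0.2, sampler=sampler)

# Evaluate 5 candidate batches of q = 3 points each.
values = acqf(torch.rand(5, 3, 2, dtype=torch.double))
```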

Bug fixes
* Fix `batch_shape` default in `OrthogonalAdditiveKernel` (2473).
* Ensure all tensors are on CPU in `HitAndRunPolytopeSampler` (2502).
* Fix duplicate logging in `generation/gen.py` (2504).
* Raise exception if `X_pending` is set on the underlying `AcquisitionFunction`
in prior-guided `AcquisitionFunction` (2505).
* Make affine input transforms error with data of incorrect dimension, even in
eval mode (2510).
* Use fidelity-aware `current_value` in input constructor for `qMultiFidelityKnowledgeGradient` (2519).
* Apply input transforms when computing MLL in model closures (2527).
* Detach `fval` in `torch_minimize` to remove an opportunity for memory leaks
(2529).
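
The `torch_minimize` fix follows the usual PyTorch pattern of detaching values that are stored only for bookkeeping; a BoTorch-independent sketch of the idea:

```python
import torch

x = torch.randn(1000, requires_grad=True)
history = []
for step in range(100):
    fval = (x.sin() ** 2).sum()  # builds an autograd graph over all 1000 elements
    # Appending `fval` as-is would keep every iteration's graph (and the tensors
    # it saved for backward) alive for as long as `history` exists; detaching
    # retains only the scalar value.
    history.append(fval.detach())
```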

Documentation
* Clarify incompatibility of inter-point constraints with `get_polytope_samples`
(2469).
* Update tutorials to use the log variants of EI-family acquisition functions,
remove unnecessary `Standardize` usage from tutorials, and make other
simplifications and cleanups (2462, 2463, 2490, 2495, 2496, 2498, 2499).
* Remove deprecated `FixedNoiseGP` (2536).

Other changes
* More informative warnings about failure to standardize or normalize data
(2489).
* Suppress irrelevant warnings in `qHypervolumeKnowledgeGradient` helpers
(2486).
* Cleaner `botorch/acquisition/multi_objective` directory structure (2485).
* With `AffineInputTransform`, always require data to have at least two
dimensions (2518).
* Remove deprecated argument `data_fidelity` to `SingleTaskMultiFidelityGP` and
deprecated model `FixedNoiseMultiFidelityGP` (2532).
* Raise an `OptimizationGradientError` when optimization produces NaN gradients (2537).
* Improve numerics by replacing `torch.log(1 + x)` with `torch.log1p(x)`
and `torch.exp(x) - 1` with `torch.special.expm1` (2539, 2540, 2541).
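
The numerics change is the standard `log1p`/`expm1` trick; a small demonstration of why the fused versions are more accurate near zero:

```python
import torch

x = torch.tensor(1e-10)

# In float32, 1 + x rounds to exactly 1.0, so both naive forms return 0.
naive_log = torch.log(1 + x)         # 0.0
naive_exp = torch.exp(x) - 1         # 0.0

# The fused versions avoid forming 1 + x and recover the correct ~1e-10.
stable_log = torch.log1p(x)
stable_exp = torch.special.expm1(x)
```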

0.11.3

Compatibility
* Pin NumPy to <2.0 (2382).
* Require GPyTorch 1.12 and LinearOperator 0.5.2 (2408, 2441).

New features
* Support evaluating the posterior predictive in `MultiTaskGP` (2375); see the sketch after this list.
* Infinite width BNN kernel (2366) and the corresponding tutorial (2381).
* An improved elliptical slice sampling implementation (2426).
* Add a helper for producing a `DeterministicModel` using a Matheron path (2435).
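
A minimal sketch of evaluating the posterior predictive with `MultiTaskGP`; the data layout (task index in the last training column) follows the standard constructor, everything else is illustrative:

```python
import torch
from botorch.models import MultiTaskGP

# Training inputs carry the task index (here tasks 0 and 1) in the last column.
X = torch.rand(20, 2, dtype=torch.double)
task = torch.randint(0, 2, (20, 1)).to(X)
train_X = torch.cat([X, task], dim=-1)
train_Y = torch.randn(20, 1, dtype=torch.double)

model = MultiTaskGP(train_X, train_Y, task_feature=-1)

# observation_noise=True yields the posterior predictive (i.e. with noise added),
# which 2375 enables for MultiTaskGP.
test_X = torch.rand(5, 2, dtype=torch.double)
posterior = model.posterior(test_X, observation_noise=True)
mean, variance = posterior.mean, posterior.variance
```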

Deprecations and Deletions
* Stop allowing some arguments to be ignored in acqf input constructors (2356).
* Reap deprecated `**kwargs` argument from `optimize_acqf` variants (2390).
* Delete `DeterministicPosterior` and `DeterministicSampler` (2391, 2409, 2410).
* Remove deprecated `CachedCholeskyMCAcquisitionFunction` (2399).
* Deprecate model conversion code (2431).
* Deprecate `gp_sampling` module in favor of pathwise sampling (2432).

Bug Fixes
* Fix observation noise shape for batched models (2377).
* Fix `sample_all_priors` to not sample one value for all lengthscales (2404).
* Make `(Log)NoisyExpectedImprovement` create a correct fantasy model with
non-default `SingleTaskGP` (2414).

Other Changes
* Various documentation improvements (2395, 2425, 2436, 2437, 2438).
* Clean up `**kwargs` arguments in `qLogNEI` (2406).
* Add a `NumericsWarning` for Legacy EI implementations (2429).

0.11.2

See the 0.11.3 release notes; this release failed due to mismatched GPyTorch and LinearOperator versions.

0.11.1

New Features
* Implement `qLogNParEGO` (2364).
* Support picking best of multiple fit attempts in `fit_gpytorch_mll` (2373).

Deprecations
* Many functions that used to silently ignore arbitrary keyword arguments will now
raise an exception when passed unsupported arguments (2327, 2336).
* Remove `UnstandardizeMCMultiOutputObjective` and `UnstandardizePosteriorTransform` (2362).

Bug Fixes
* Remove correlation between the step size and the step direction in `sample_polytope` (2290).
* Fix pathwise sampler bug (2337).
* Explicitly check timeout against `None` so that `0.0` isn't ignored (2348).
* Fix boundary handling in `sample_polytope` (2353).
* Avoid division by zero in `normalize` & `unnormalize` when lower & upper bounds are equal (2363); see the sketch after this list.
* Update `sample_all_priors` to support wider set of priors (2371).
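
A minimal sketch of the `normalize`/`unnormalize` round trip that the fix concerns; `bounds` is the usual `2 x d` tensor of lower and upper bounds:

```python
import torch
from botorch.utils.transforms import normalize, unnormalize

# The last dimension has equal lower and upper bounds (a degenerate dimension).
bounds = torch.tensor([[0.0, -5.0, 2.0], [10.0, 5.0, 2.0]])
X = torch.tensor([[5.0, 0.0, 2.0]])

X_unit = normalize(X, bounds)      # maps into the unit cube without dividing by zero
X_orig = unnormalize(X_unit, bounds)
```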

Other Changes
* Clarify `is_non_dominated` behavior with NaN (2332).
* Add input constructor for `qEUBO` (2335).
* Add `LogEI` as a baseline in the `TuRBO` tutorial (2355).
* Update polytope sampling code and add thinning capability (2358).
* Add initial objective values to initial state for sample efficiency (2365).
* Clarify behavior on standard deviations with <1 degree of freedom (2357).

0.11.0

Compatibility
* Require Python >= 3.10 (2293).

New Features
* SCoreBO and Bayesian Active Learning acquisition functions (2163).

Bug Fixes
* Fix non-None constraint noise levels in some constrained test problems (2241).
* Fix inverse cost-weighted utility behavior for non-positive acquisition values (2297).

Other Changes
* Don't allow unused keyword arguments in `Model.construct_inputs` (2186).
* Re-map task values in MTGP if they are not contiguous integers starting from zero (2230).
* Unify `ModelList` and `ModelListGP` `subset_output` behavior (2231).
* Ensure `mean` and `interior_point` of `LinearEllipticalSliceSampler` have correct shapes (2245).
* Speed up task covariance of `LCEMGP` (2260).
* Improvements to `batch_cross_validation`, support for model init kwargs (2269).
* Support custom `all_tasks` for MTGPs (2271).
* Error out if scipy optimizer does not support bounds / constraints (2282).
* Support diagonal covariance root with fixed indices for `LinearEllipticalSliceSampler` (2283).
* Make `qNIPV` a subclass of `AcquisitionFunction` rather than `AnalyticAcquisitionFunction` (2286); see the sketch after this list.
* Increase code-sharing of `LCEMGP` & define `construct_inputs` (2291).
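
A minimal sketch of `qNegIntegratedPosteriorVariance` (qNIPV), whose base class changed above; `mc_points` are the integration points over which the posterior variance is averaged:

```python
import torch
from botorch.acquisition.active_learning import qNegIntegratedPosteriorVariance
from botorch.models import SingleTaskGP

train_X = torch.rand(15, 2, dtype=torch.double)
train_Y = torch.randn(15, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

# Integration points for the posterior variance (a coarse random grid here).
mc_points = torch.rand(128, 2, dtype=torch.double)
acqf = qNegIntegratedPosteriorVariance(model=model, mc_points=mc_points)

# Evaluate 4 candidate batches of q = 2 points each.
values = acqf(torch.rand(4, 2, 2, dtype=torch.double))
```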

Deprecations
* Remove deprecated args from base `MCSampler` (2228).
* Remove deprecated `botorch/generation/gen/minimize` (2229).
* Remove `fit_gpytorch_model` (2250).
* Remove `requires_grad_ctx` (2252).
* Remove `base_samples` argument of `GPyTorchPosterior.rsample` (2254).
* Remove deprecated `mvn` argument to `GPyTorchPosterior` (2255).
* Remove deprecated `Posterior.event_shape` (2320).
* Remove `**kwargs` & deprecated `indices` argument of `Round` transform (2321).
* Remove `Standardize.load_state_dict` (2322).
* Remove `FixedNoiseMultiTaskGP` (2323).

0.10.0

New Features
* Introduce updated guidelines and a new directory for community contributions (2167).
* Add `qEUBO` preferential acquisition function (2192).
* Add Multi Information Source Augmented GP (2152).

Bug Fixes
* Fix `condition_on_observations` in fully Bayesian models (2151).
* Fix a bug that occurs when splitting single-element bins and use the default BoTorch kernel for BAxUS (2165).
* Fix a bug when non-linear constraints are used with `q > 1` (2168).
* Remove unsupported `X_pending` from `qMultiFidelityLowerBoundMaxValueEntropy` constructor (2193).
* Don't allow `data_fidelities=[]` in `SingleTaskMultiFidelityGP` (2195).
* Fix `EHVI`, `qEHVI`, and `qLogEHVI` input constructors (2196).
* Fix input constructor for `qMultiFidelityMaxValueEntropy` (2198).
* Add ability to not deduplicate points in `_is_non_dominated_loop` (2203).

Other Changes
* Minor improvements to `MVaR` risk measure (2150).
* Add support for multitask models to `ModelListGP` (2154).
* Support unspecified noise in `ContextualDataset` (2155).
* Update `HVKG` sampler to reflect the number of model outputs (2160).
* Release restriction in `OneHotToNumeric` that the categoricals are the trailing dimensions (2166).
* Standardize broadcasting logic of `q(Log)EI`'s `best_f` and `compute_best_feasible_objective` (2171).
* Use regular inheritance instead of dispatcher to special-case `PairwiseGP` logic (2176).
* Support `PBO` in `EUBO`'s input constructor (2178).
* Add `posterior_transform` to `qMaxValueEntropySearch`'s input constructor (2181).
* Do not normalize or standardize dimension if all values are equal (2185).
* Reap deprecated support for objective with 1 arg in `GenericMCObjective` (2199).
* Consistent signature for `get_objective_weights_transform` (2200).
* Update context order handling in `ContextualDataset` (2205).
* Update contextual models for use in MBM (2206).
* Remove `(Identity)AnalyticMultiOutputObjective` (2208).
* Reap deprecated support for `soft_eval_constraint` (2223). Please use `botorch.utils.sigmoid` instead.
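
The suggested replacement amounts to pushing the constraint slack through a steep sigmoid; a sketch using `torch.sigmoid` for illustration (the `eta` temperature and helper name are illustrative, not the exact deprecated semantics):

```python
import torch

def soft_feasibility(lhs: torch.Tensor, eta: float = 1e-3) -> torch.Tensor:
    # lhs <= 0 means feasible; a steep sigmoid yields a differentiable 0/1 indicator.
    return torch.sigmoid(-lhs / eta)

slack = torch.tensor([-0.5, 0.0, 0.5])
soft_feasibility(slack)  # ~[1.0, 0.5, 0.0]
```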

Compatibility
* Pin `mpmath <= 1.3.0` to avoid CI breakages due to removed modules in the
latest alpha release (2222).
