BoTorch

0.5.0

Compatibility
* Require PyTorch >=1.8.1 (832).
* Require GPyTorch >=1.5 (848).
* Changes to how input transforms are applied: `transform_inputs` is applied in `model.forward` if the model is in `train` mode, otherwise it is applied in the `posterior` call (819, 835).

New Features
* Improved multi-objective optimization capabilities (see the sketch after this list):
  * `qNoisyExpectedHypervolumeImprovement` acquisition function that improves on `qExpectedHypervolumeImprovement` in terms of tolerating observation noise and speeding up computation for large `q`-batches (797, 822).
  * `qMultiObjectiveMaxValueEntropy` acquisition function (913aa0e510dde10568c2b4b911124cdd626f6905, 760).
  * Heuristic for reference point selection (830).
  * `FastNondominatedPartitioning` for Hypervolume computations (699).
  * `DominatedPartitioning` for partitioning the dominated space (726).
  * `BoxDecompositionList` for handling box decompositions of varying sizes (712).
  * Direct, batched dominated partitioning for the two-outcome case (739).
  * `get_default_partitioning_alpha` utility providing a heuristic for selecting the approximation level for partitioning algorithms (793).
  * New method for computing Pareto frontiers with less memory overhead (842, 846).
* New `qLowerBoundMaxValueEntropy` acquisition function (a.k.a. GIBBON), a lightweight variant of Multi-fidelity Max-Value Entropy Search using a Determinantal Point Process approximation (724, 737, 749).
* Support for discrete and mixed input domains:
  * `CategoricalKernel` for categorical inputs (771).
  * `MixedSingleTaskGP` for mixed search spaces (containing both categorical and ordinal parameters) (772, 847).
  * `optimize_acqf_discrete` for optimizing acquisition functions over fully discrete domains (777).
  * Extend `optimize_acqf_mixed` to allow batch optimization (804).
* Support for robust / risk-aware optimization:
  * Risk measures for robust / risk-averse optimization (821).
  * `AppendFeatures` transform (820).
  * `InputPerturbation` input transform for risk-averse BO with implementation errors (827).
  * Tutorial notebook for Bayesian Optimization of risk measures (823).
  * Tutorial notebook for risk-averse Bayesian Optimization under input perturbations (828).
* More scalable multi-task modeling and sampling:
  * `KroneckerMultiTaskGP` model for efficient multi-task modeling for block-design settings (all tasks observed at all inputs) (637).
  * Support for transforms in Multi-Task GP models (681).
  * Posterior sampling based on Matheron's rule for Multi-Task GP models (841).
* Various changes to simplify and streamline integration with Ax:
  * Handle non-block designs in `TrainingData` (794).
  * Acquisition function input constructor registry (788, 802, 845).
* Random Fourier Feature (RFF) utilities for fast (approximate) GP function sampling (750).
* `DelaunayPolytopeSampler` for fast uniform sampling from (simple) polytopes (741).
* Add `evaluate` method to `ScalarizedObjective` (795).
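
As a rough illustration of the headline `qNoisyExpectedHypervolumeImprovement` addition, the sketch below sets up a synthetic two-objective problem and optimizes the acquisition function. The data, reference point, and optimizer settings are illustrative assumptions, not part of the release itself.

```python
# Hedged sketch: qNEHVI on a synthetic two-objective problem (all data made up).
import torch
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 2, dtype=torch.double)
# Two synthetic objectives, both to be maximized.
train_Y = torch.stack(
    [train_X.sum(dim=-1), (train_X ** 2).sum(dim=-1).neg()], dim=-1
)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, -2.0],  # assumed reference point, dominated by the front
    X_baseline=train_X,     # noisy observations are integrated over, not filtered
)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidates, _ = optimize_acqf(
    acqf, bounds=bounds, q=2, num_restarts=10, raw_samples=128
)
```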

Bug Fixes
* Handle the case when all features are fixed in `optimize_acqf` (770).
* Pass `fixed_features` to initial candidate generation functions (806).
* Handle batched empty Pareto frontiers in `FastPartitioning` (740).
* Handle an empty Pareto set in `is_non_dominated` (743).
* Handle the edge case of zero or one observation in `get_chebyshev_scalarization` (762).
* Fix an issue in `gen_candidates_torch` that caused problems with acquisition functions using fantasy models (766).
* Fix `HigherOrderGP` `dtype` bug (728).
* Normalize before clamping in `Warp` input warping transform (722).
* Fix bug in GP sampling (764).

Other Changes
* Modify input transforms to support one-to-many transforms (819, 835).
* Make initial conditions for acquisition function optimization honor parameter constraints (752).
* Perform optimization only over unfixed features if `fixed_features` is passed (839); see the sketch after this list.
* Refactor Max Value Entropy Search Methods (734).
* Use Linear Algebra functions from the `torch.linalg` module (735).
* Use PyTorch's `Kumaraswamy` distribution (746).
* Improved capabilities and some bugfixes for batched models (723, 767).
* Pass the `callback` argument to `scipy.optimize.minimize` in `gen_candidates_scipy` (744).
* Modify behavior of `X_pending` in multi-objective acquisition functions (747).
* Allow multi-dimensional batch shapes in test functions (757).
* Utility for converting batched multi-output models into batched single-output models (759).
* Explicitly raise `NotPSDError` in `_scipy_objective_and_grad` (787).
* Make `raw_samples` optional if `batch_initial_conditions` is passed (801).
* Use powers of 2 in qMC docstrings & examples (812).
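
The sketch below illustrates the `fixed_features` behavior described above: the dict maps feature indices to pinned values, and the optimizer searches only over the remaining dimensions. The model and acquisition function are illustrative assumptions.

```python
# Hedged sketch: optimizing an acquisition function with one feature pinned.
import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)  # hyperparameters left at defaults
ucb = UpperConfidenceBound(model, beta=0.1)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
# Feature 0 is fixed at 0.5; optimization runs only over feature 1.
candidate, value = optimize_acqf(
    ucb, bounds=bounds, q=1, num_restarts=5, raw_samples=32,
    fixed_features={0: 0.5},
)
```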

0.4.0

Compatibility
* Require PyTorch >=1.7.1 (714).
* Require GPyTorch >=1.4 (714).

New Features
* `HigherOrderGP` - High-Order Gaussian Process (HOGP) model for high-dimensional output regression (631, 646, 648, 680).
* `qMultiStepLookahead` acquisition function for general look-ahead optimization approaches (611, 659).
* `ScalarizedPosteriorMean` and `project_to_sample_points` for more advanced MFKG functionality (645).
* Large-scale Thompson sampling tutorial (654, 713).
* Tutorial for optimizing mixed continuous/discrete domains (application to multi-fidelity KG with discrete fidelities) (716).
* `GPDraw` utility for sampling from (exact) GP priors (655); see the sketch after this list.
* Add `X` as an optional argument to the call signature of `MCAcquisitionObjective` (487).
* `OSY` synthetic test problem (679).
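
A minimal sketch of the `GPDraw` utility (655), which draws a single self-consistent sample path from a GP by conditioning on previous queries. The model and query points are illustrative assumptions.

```python
# Hedged sketch: drawing one consistent sample path from a GP with GPDraw.
import torch
from botorch.models import SingleTaskGP
from botorch.utils.gp_sampling import GPDraw

train_X = torch.rand(5, 1, dtype=torch.double)
train_Y = torch.sin(6 * train_X)
model = SingleTaskGP(train_X, train_Y)

f = GPDraw(model, seed=0)
X_test = torch.linspace(0, 1, 50, dtype=torch.double).unsqueeze(-1)
# Repeated calls condition on earlier evaluations, so the draw stays consistent.
y_sample = f(X_test)
```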

Bug Fixes
* Fix matrix multiplication in `scalarize_posterior` (638).
* Set `X_pending` in `get_acquisition_function` in `qEHVI` (662).
* Make contextual kernel device-aware (666).
* Do not use an `MCSampler` in `MaxPosteriorSampling` (701).
* Add ability to subset outcome transforms (711).

Performance Improvements
* Batchify box decomposition for 2d case (642).

Other Changes
* Use scipy distribution in MES quantile bisect (633).
* Use new closure definition for GPyTorch priors (634).
* Allow enabling of approximate root decomposition in `posterior` calls (652).
* Support for upcoming 21201-dimensional PyTorch `SobolEngine` (672, 674).
* Refactored various MOO utilities to allow future additions (656, 657, 658, 661).
* Support `input_transform` in `PairwiseGP` (632).
* Output shape checks for `t_batch_mode_transform` (577).
* Check for NaN in `gen_candidates_scipy` (688).
* Introduce `base_sample_shape` property to `Posterior` objects (718).

0.3.3

Contextual Bayesian Optimization, Input Warping, TuRBO, sampling from polytopes.

Compatibility
* Require PyTorch >=1.7 (614).
* Require GPyTorch >=1.3 (614).

New Features
* Models (LCE-A, LCE-M, and SAC) for Contextual Bayesian Optimization (581).
  * Implements core models from [High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization](https://proceedings.neurips.cc/paper/2020/hash/faff959d885ec0ecf70741a846c34d1d-Abstract.html). Q. Feng, B. Letham, H. Mao, E. Bakshy. NeurIPS 2020.
  * See Ax for usage of these models.
* Hit-and-run sampler for uniform sampling from a polytope (592).
* Input warping (see the sketch after this list):
  * Core functionality (607).
  * Kumaraswamy distribution (606).
  * Tutorial (8f34871652042219c57b799669a679aab5eed7e3).
* TuRBO-1 tutorial (598).
  * Implements the method from [Scalable Global Optimization via Local Bayesian Optimization](https://proceedings.neurips.cc/paper/2019/file/6c990b7aca7bc7058f5e98ea909e924b-Paper.pdf). D. Eriksson, M. Pearce, J. Gardner, R. D. Turner, M. Poloczek. NeurIPS 2019.
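
A hedged sketch of the input-warping feature (606, 607): a `Warp` transform learns Kumaraswamy CDF parameters per input dimension and is attached to the model via `input_transform`. This assumes the `indices`-based constructor from this era; later releases may use a different signature.

```python
# Hedged sketch: Kumaraswamy input warping attached to a SingleTaskGP.
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Warp

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.prod(dim=-1, keepdim=True)
# Learn warping parameters for both input dimensions (era-specific signature).
warp = Warp(indices=[0, 1])
model = SingleTaskGP(train_X, train_Y, input_transform=warp)
```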

Bug fixes
* Fix bounds of `HolderTable` synthetic function (596).
* Fix `device` issue in MOO tutorial (621).

Other changes
* Add `train_inputs` option to `qMaxValueEntropy` (593).
* Enable `gpytorch` settings to override BoTorch defaults for `fast_pred_var` and `debug` (595).
* Rename `set_train_data_transform` -> `preprocess_transform` (575).
* Modify `_expand_bounds()` shape checks to work with >2-dim bounds (604).
* Add `batch_shape` property to models (588).
* Modify `qMultiFidelityKnowledgeGradient.evaluate()` to work with `project`, `expand` and `cost_aware_utility` (594).
* Add list of papers using BoTorch to website docs (617).

0.3.2

Maintenance Release

New Features
* Add `PenalizedAcquisitionFunction` wrapper (585); see the sketch after this list
* Input transforms
  * Reversible input transform (550)
  * Rounding input transform (562)
  * Log input transform (563)
  * Differentiable approximate rounding for integers (561)
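
A hedged sketch of the `PenalizedAcquisitionFunction` wrapper (585), which subtracts a weighted penalty from a base acquisition function; the `L2Penalty` helper and all settings here are illustrative assumptions.

```python
# Hedged sketch: regularizing UCB toward a reference point via an L2 penalty.
import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.acquisition.penalized import L2Penalty, PenalizedAcquisitionFunction
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

raw_acqf = UpperConfidenceBound(model, beta=0.2)
penalty = L2Penalty(init_point=torch.zeros(2, dtype=torch.double))
acqf = PenalizedAcquisitionFunction(
    raw_acqf=raw_acqf,
    penalty_func=penalty,
    regularization_parameter=0.1,  # weight on the penalty term
)
```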

Bug fixes
* Fix sign error in UCB when `maximize=False` (a4bfacbfb2109d3b89107d171d2101e1995822bb)
* Fix batch_range sample shape logic (574)

Other changes
* Better support for two-stage sampling in preference learning (0cd13d0cb49b1ac8d0971e42f1f0e9dd6126fd9a)
* Remove noise term in `PairwiseGP` and add `ScaleKernel` by default (571)
* Rename `prior` to `task_covar_prior` in `MultiTaskGP` and `FixedNoiseMultiTaskGP` (8e42ea82856b165a7df9db2a9b6f43ebd7328fc4)
* Support only transforming inputs on training or evaluation (551)
* Add `equals` method for `InputTransform` (552)

0.3.1

Maintenance Release

New Features
* Constrained Multi-Objective tutorial (493)
* Multi-fidelity Knowledge Gradient tutorial (509)
* Support for batch qMC sampling (510)
* New `evaluate` method for `qKnowledgeGradient` (515); see the sketch after this list
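
A hedged sketch of the new `qKnowledgeGradient.evaluate` method (515), which scores given q-batches by explicitly solving the inner optimization problem; the model, fantasy count, and inner-optimization settings are illustrative assumptions.

```python
# Hedged sketch: evaluating qKG on explicit q-batches of candidates.
import torch
from botorch.acquisition import qKnowledgeGradient
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

qKG = qKnowledgeGradient(model, num_fantasies=16)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
X = torch.rand(3, 2, 2, dtype=torch.double)  # 3 q-batches with q=2
# kwargs control the inner optimization of the posterior-mean value function.
values = qKG.evaluate(X, bounds=bounds, num_restarts=4, raw_samples=64)
```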

Compatibility
* Require PyTorch >=1.6 (535)
* Require GPyTorch >=1.2 (535)
* Remove deprecated `botorch.gen` module (532)

Bug fixes
* Fix bad backward-indexing of `task_feature` in `MultiTaskGP` (485)
* Fix bounds in constrained Branin-Currin test function (491)
* Fix max_hv for C2DTLZ2 and make Hypervolume always return a float (494)
* Fix bug in `draw_sobol_samples` that did not use the proper effective dimension (505)
* Fix constraints for `q>1` in `qExpectedHypervolumeImprovement` (c80c4fdb0f83f0e4f12e4ec4090d0478b1a8b532)
* Only use feasible observations in partitioning for `qExpectedHypervolumeImprovement` in `get_acquisition_function` (523)
* Improved GPU compatibility for `PairwiseGP` (537)

Performance Improvements
* Reduce memory footprint in `qExpectedHypervolumeImprovement` (522)
* Add `(q)ExpectedHypervolumeImprovement` to nonnegative functions [for better initialization] (496)

Other changes
* Support batched `best_f` in `qExpectedImprovement` (487)
* Allow returning the full tree of solutions in `OneShotAcquisitionFunction` (488)
* Added `construct_inputs` class method to models to programmatically construct the inputs to the constructor from a standardized `TrainingData` representation (477, 482, 3621198d02195b723195b043e86738cd5c3b8e40)
* Acquisition function constructors now accept catch-all `**kwargs` options (478, e5b69352954bb10df19a59efe9221a72932bfe6c)
* Use `psd_safe_cholesky` in `qMaxValueEntropy` for better numerical stability (518)
* Added `WeightedMCMultiOutputObjective` (81d91fd2e115774e561c8282b724457233b6d49f)
* Add ability to specify `outcomes` to all multi-output objectives (524)
* Return optimization output in `info_dict` for `fit_gpytorch_scipy` (534)
* Use `setuptools_scm` for versioning (539)

0.3.0

Multi-Objective Bayesian Optimization

New Features
* Multi-Objective Acquisition Functions (466)
  * q-Expected Hypervolume Improvement
  * q-ParEGO
  * Analytic Expected Hypervolume Improvement with auto-differentiation
* Multi-Objective Utilities (466); see the sketch after this list
  * Pareto computation
  * Hypervolume calculation
  * Box decomposition algorithm
* Multi-Objective Test Functions (466)
  * Suite of synthetic test functions for multi-objective, constrained optimization
* Multi-Objective Tutorial (468)
* Abstract ConstrainedBaseTestProblem (454)
* Add `optimize_acqf_list` method for sequentially, greedily optimizing one candidate from each provided acquisition function (d10aec911b241b208c59c192beb9e4d572a092cd)
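
A minimal sketch of the new multi-objective utilities (466): Pareto filtering with `is_non_dominated` followed by a hypervolume computation; the random outcomes and reference point are illustrative assumptions (both objectives maximized).

```python
# Hedged sketch: Pareto computation and hypervolume on synthetic outcomes.
import torch
from botorch.utils.multi_objective.hypervolume import Hypervolume
from botorch.utils.multi_objective.pareto import is_non_dominated

Y = torch.rand(50, 2, dtype=torch.double)  # 50 points, 2 objectives
pareto_Y = Y[is_non_dominated(Y)]          # keep only non-dominated points
hv = Hypervolume(ref_point=torch.zeros(2, dtype=torch.double))
volume = hv.compute(pareto_Y)
```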

Bug fixes
* Fixed re-arranging mean in MultiTask MO models (450).

Other changes
* Move `gpt_posterior_settings` into `models.utils` (449)
* Allow specifications of batch dims to collapse in samplers (457)
* Remove outcome transform before model-fitting for sequential model fitting in MO models (458)
