BoTorch

Latest version: v0.12.0

Page 7 of 8

0.2.5

Bugfix Release

Bug fixes
* Fixed issue with broken wheel build (444).

Other changes
* Changed code style to use absolute imports throughout (443).

0.2.4

Bugfix Release

Bug fixes
* There was a mysterious issue with the 0.2.3 wheel on PyPI, where part of the
`botorch/optim/utils.py` file was not included, resulting in an `ImportError` for
many central components of the code. Interestingly, the source dist (built with the
same command) did not have this issue.
* Preserve order in ChainedOutcomeTransform (440).

New Features
* Utilities for estimating the feasible volume under outcome constraints (437).

0.2.3

Pairwise GP for Preference Learning, Sampling Strategies.

Compatibility
* Require PyTorch >=1.5 (423).
* Require GPyTorch >=1.1.1 (425).

New Features
* Add `PairwiseGP` for preference learning with pair-wise comparison data (388).
* Add `SamplingStrategy` abstraction for sampling-based generation strategies, including
`MaxPosteriorSampling` (i.e. Thompson Sampling) and `BoltzmannSampling` (218, 407).

Deprecations
* The existing `botorch.gen` module is moved to `botorch.generation.gen` and imports
from `botorch.gen` will raise a warning (an error in the next release) (218).

Bug fixes
* Fix & update a number of tutorials (394, 398, 393, 399, 403).
* Fix CUDA tests (404).
* Fix sobol maxdim limitation in `prune_baseline` (419).

Other changes
* Better stopping criteria for stochastic optimization (392).
* Improve numerical stability of `LinearTruncatedFidelityKernel` (409).
* Allow batched `best_f` in `qExpectedImprovement` and `qProbabilityOfImprovement`
(411).
* Introduce new logger framework (412).
* Faster indexing in some situations (414).
* More generic `BaseTestProblem` (9e604fe2188ac85294c143d249872415c4d95823).

0.2.2

Requires PyTorch >=1.4 and Python >=3.7; adds new features for active learning
and multi-fidelity optimization, plus a number of bug fixes.

Compatibility
* Require PyTorch >=1.4 (379).
* Require Python >=3.7 (378).

New Features
* Add `qNegIntegratedPosteriorVariance` for Bayesian active learning (377).
* Add `FixedNoiseMultiFidelityGP`, analogous to `SingleTaskMultiFidelityGP` (386).
* Support `scalarize_posterior` for m>1 and q>1 posteriors (374).
* Support `subset_output` method on multi-fidelity models (372).
* Add utilities for sampling from simplex and hypersphere (369).

Bug fixes
* Fix `TestLoader` local test discovery (376).
* Fix batch-list conversion of `SingleTaskMultiFidelityGP` (370).
* Validate tensor args before checking input scaling for more
informative error messages (368).
* Fix flaky `qNoisyExpectedImprovement` test (362).
* Fix test function in closed-loop tutorial (360).
* Fix `num_output` attribute in BoTorch/Ax tutorial (355).

Other changes
* Require output dimension in `MultiTaskGP` (383).
* Update code of conduct (380).
* Remove deprecated `joint_optimize` and `sequential_optimize` (363).

0.2.1

Minor bug fix release.

New Features
* Add a static method for getting batch shapes for batched MO models (346).

Bug fixes
* Revamp qKG constructor to avoid issue with missing objective (351).
* Make sure MVES can support sampled costs like KG (352).

Other changes
* Allow custom module-to-array handling in `fit_gpytorch_scipy` (341).

0.2.0

Max-value entropy acquisition function, cost-aware / multi-fidelity optimization,
subsetting models, outcome transforms.

Compatibility
* Require PyTorch >=1.3.1 (313).
* Require GPyTorch >=1.0 (342).

New Features
* Add cost-aware KnowledgeGradient (`qMultiFidelityKnowledgeGradient`) for
multi-fidelity optimization (292).
* Add `qMaxValueEntropy` and `qMultiFidelityMaxValueEntropy` max-value entropy
search acquisition functions (298).
* Add `subset_output` functionality to (most) models (324).
* Add outcome transforms and input transforms (321).
* Add `outcome_transform` kwarg to model constructors for automatic outcome
transformation and un-transformation (327).
* Add cost-aware utilities for cost-sensitive acquisition functions (289).
* Add `DeterministicModel` and `DeterministicPosterior` abstractions (288).
* Add `AffineFidelityCostModel` (f838eacb4258f570c3086d7cbd9aa3cf9ce67904).
* Add `project_to_target_fidelity` and `expand_trace_observations` utilities for
use in multi-fidelity optimization (1ca12ac0736e39939fff650cae617680c1a16933).

Performance Improvements
* New `prune_baseline` option for pruning `X_baseline` in
`qNoisyExpectedImprovement` (287).
* Do not use approximate MLL computation for deterministic fitting (314).
* Avoid re-evaluating the acquisition function in `gen_candidates_torch` (319).
* Use CPU where possible in `gen_batch_initial_conditions` to avoid memory
issues on the GPU (323).

Bug fixes
* Properly register `NoiseModelAddedLossTerm` in `HeteroskedasticSingleTaskGP`
(671c93a203b03ef03592ce322209fc5e71f23a74).
* Fix batch mode for `MultiTaskGPyTorchModel` (316).
* Honor `propagate_grads` argument in `fantasize` of `FixedNoiseGP` (303).
* Properly handle `diag` arg in `LinearTruncatedFidelityKernel` (320).

Other changes
* Consolidate and simplify multi-fidelity models (308).
* New license header style (309).
* Validate shape of `best_f` in `qExpectedImprovement` (299).
* Support specifying observation noise explicitly for all models (256).
* Add `num_outputs` property to the `Model` API (330).
* Validate output shape of models upon instantiating acquisition functions (331).

Tests
* Silence warnings outside of explicit tests (290).
* Enforce full sphinx docs coverage in CI (294).

© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.