AEPsych

Latest version: v0.6.2


0.6.0

Major changes:
**Warning: the model API has changed. Live experiments using configs should not break, but custom code used in post-hoc analysis may not be compatible.**

* Models no longer possess bounds: the `lb`/`ub` initialization arguments and the corresponding attributes have been removed from the API.
* Models now require the `dim` argument at initialization (i.e., `dim` is no longer optional).
* Models can evaluate points outside of the bounds, which define the search space, not the model; the only thing a model needs to know is the dimensionality of the space.
* Methods that should not be bound directly to models (e.g., `dim_grid()` or `get_max()`) have been removed; they are replaced by new functions in the `model.utils` submodule that accept a model and the bounds to work on.
* Note that these bounds can differ from the search space's bounds, affording extra flexibility.
* While it is still possible to access these functions through the Strategy class, we recommend that post-hoc analysis simply load the model and the data and use these separate functions.
* We are looking to improve the ergonomics of post-hoc analysis with a simplified API for loading data and models from DBs without needing to replay; the next release will bring more changes toward this goal.
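To illustrate the pattern (with hypothetical names, not the real AEPsych API): helpers like `get_max()` stop being model methods and become free functions that take a model plus explicit, caller-supplied bounds.

```python
# Hypothetical sketch of the new pattern; none of these names are the
# actual AEPsych API. Helpers become free functions taking (model, bounds).
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class ToyModel:
    # Under the new API a model only knows its dimensionality; it has no
    # lb/ub attributes and can be evaluated anywhere in the space.
    dim: int
    predict: Callable[[List[float]], float]


def get_max(
    model: ToyModel, bounds: List[Tuple[float, float]], n: int = 101
) -> Tuple[float, List[float]]:
    # Free-function analogue of the old model.get_max(): grid-search over
    # caller-supplied bounds, which may differ from the search-space bounds.
    assert model.dim == len(bounds) == 1  # 1D only, for brevity
    lo, hi = bounds[0]
    grid = [[lo + (hi - lo) * i / (n - 1)] for i in range(n)]
    return max((model.predict(x), x) for x in grid)


model = ToyModel(dim=1, predict=lambda x: -((x[0] - 2.0) ** 2))
best_val, best_x = get_max(model, bounds=[(0.0, 5.0)])
```

The design point is that the bounds travel with the query, not with the model, so the same fitted model can be interrogated over any sub-region.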
* Approximate GP models (like `GPClassificationModel`) now accept an inducing point allocator class to determine the inducing points, instead of selecting the algorithm with a string argument.
* If the inducing point method was not previously modified by the config, nothing needs to change. To change it, the `inducing_point_method` option in configs needs to name the exact `InducingPointAllocator` class (e.g., `GreedyVarianceReduction` or `KMeansAllocator`).
* The new default inducing point allocator for models is `GreedyVarianceReduction`.
* This should yield models that are at least as good as before while generally being more efficient to fit. To revert to the old default, use `KMeansAllocator`.
* Fixed parameters can now be defined as strings, and the server will handle this seamlessly.
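As an illustrative (not verbatim) config fragment for the allocator change above, assuming the option lives in the model's own block as in existing AEPsych ini configs:

```ini
# Sketch only: check section and option names against your existing config.
# The allocator is now named by class, not by a string like "pivoted_chol".
[GPClassificationModel]
inducing_point_method = GreedyVarianceReduction
```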

Bug fixes:
* Query messages to the server can now handle models that would return values with gradients.
* Query responses will now correctly unpack dimensions.
* Query responses now respect transforms.
* Prediction queries can now actually predict in `probability_space`.
* Whitespace is no longer meaningful when defining lists in configs.
* The greedy variance allocator (previously the `"pivoted_chol"` option) now works with models that augment the dimensionality.
* `MonotonicRejectionGP` now respects the inducing point options from the config.

0.5.1

Features:
* Support for discrete parameters, binary parameters, and fixed parameters
* Optimizer options can now be set from config and in models to manipulate the underlying SciPy optimizer options
* Manual generators now support multi-stimuli studies

Bug fixes:
* `dim_grid` now returns the right shapes

**Full Changelog**: https://github.com/facebookresearch/aepsych/compare/v0.5.0...0.5.1

0.5.0

New feature release:
* GPU support for GPClassificationModel and GPRegressionModel alongside GPU support for generating points with OptimizeAcqfGenerator with any acquisition function.
* Models that are subclasses of GPClassificationModel and GPRegressionModel should also have GPU support.
* This should allow the use of the better acquisition functions while maintaining practical live active learning trial generation speeds.
* GPU support will also speed up post-hoc analysis when fitting on a lot of data. Models have a `model.device` attribute like tensors in PyTorch do and can be smoothly moved between devices using the same API (e.g., `model.cuda()` or `model.cpu()`), just like tensors.
* We wrote a document on speeding up AEPsych, especially for live experiments with active learning: https://aepsych.org/docs/speed.
* More models and generators will gain GPU support soon.
* New parameter configuration format and parameter transformations
* The settings for parameters should now be set in parameter-specific blocks. Old configs will still work but will not support new parameter features going forward.
* We added a log scale transformation and the ability to disable the normalize scale transformation; these can be set at a parameter-specific level.
* Take a look at our documentation about the new parameter options: https://aepsych.org/docs/parameters
* More parameter transforms to come!
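A hedged sketch of what a parameter-specific block might look like under the new format (option names here should be verified against the parameters documentation linked above):

```ini
# Sketch only: parameter names and option spellings are illustrative.
[common]
parnames = [contrast]

[contrast]
par_type = continuous
lower_bound = 0.001
upper_bound = 1
log_scale = True         # enable the new log-scale transform
normalize_scale = False  # opt out of the normalize transform
```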

Please raise an issue if you find any bugs with the new features or if you have any feature requests that would help you run your next experiment using AEPsych.

0.4.4

Minor bug fixes

* Revert tensor changes for LSE contour plotting
* Ensure manual generators don't hang strategies in replay
* Set default inducing size to 99; be aware that an inducing size >= 100 can significantly slow down the model on very specific hardware setups

0.4.3

* Float64 is now the default data type for all tensors from AEPsych.
* Many functions have been ported to use only PyTorch tensors and no longer accept NumPy arrays
* Fixed `ManualGenerator`s not knowing when they are finished.

0.4.2

* BoTorch version bumped to latest at 0.12.0.
* Numpy pinned below v2.0 to ensure compatibility with Intel Macs
* Only Python 3.10+ is supported now (matching BoTorch requirements)
