AEPsych

Latest version: v0.6.3

0.6.3

* Pinned SciPy to 1.14.1; the latest SciPy (1.15.0) causes intermittent model-fitting failures from BoTorch. We will remove this pin once the problem is resolved.

The previous release was also a bug-fix patch, but its notes were missed. Version 0.6.1 was skipped.

0.6.2

* The acqf initialization method correctly handles bounds again
* Plotting functions work again and no longer call missing methods/attributes on models
* Query constraints work again, fixed by using dims to make dummy points
* MyPy version pinned; copyright headers re-added.

0.6.0

Major changes:
**Warning: the model API has changed. Live experiments using configs should not break, but custom code used in post-hoc analysis may not be compatible.**

* Models no longer possess bounds (the lb/ub initialization arguments and the corresponding attributes have been removed from the API).
* Models require the dim argument for initialization (i.e., dim is no longer optional).
* Models can evaluate points outside of the bounds (the bounds define the search space and are not a property of the model). The only thing a model needs to know is the dimensionality of the space.
* Models no longer carry methods that should not be directly bound to them (e.g., `dim_grid()` or `get_max()`). These are replaced by new functions in the `model.utils` submodule that accept a model and the bounds to work over (see the sketch after this list).
* Note that these bounds can differ from the search space's bounds, affording extra flexibility.
* While it is still possible to access these functions through the Strategy class, we recommend that post-hoc analysis simply load the model and the data and use these standalone functions.
* We are looking to improve the ergonomics of post-hoc analysis with a simplified API for loading data and models from DBs without needing to replay; the next release will bring further changes toward this goal.
* Approximate GP models (like GPClassificationModel) now accept an inducing point allocator class to determine the inducing points, instead of selecting the algorithm with a string argument.
* If the inducing point method was not previously modified via the config, nothing needs to change. To change the inducing point method, the `inducing_point_method` option in configs needs to name the exact InducingPointAllocator (e.g., GreedyVarianceReduction or KMeansAllocator); see the config sketch after this list.
* The new default inducing point allocator for models is GreedyVarianceReduction.
* This should yield models that are at least as good as before while generally being more efficient to fit. To revert to the old default, use KMeansAllocator.
* Fixed parameters can now be defined as strings, and the server will handle this seamlessly.
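
As a rough illustration of how the standalone utilities are meant to be used, here is a minimal post-hoc analysis sketch. The import path and the `get_max` signature below are assumptions based on the notes above (functions in `model.utils` that take a model plus explicit bounds), not a verbatim copy of the released API; check the documentation for the exact names.

```python
import torch

from aepsych.models import GPClassificationModel
# Assumed location/signature of the standalone utilities mentioned above;
# the exact module path and arguments may differ in the released package.
from aepsych.models.utils import get_max

# Stand-ins for data you would normally reload from an experiment DB.
train_x = torch.rand(50, 2)
train_y = torch.randint(0, 2, (50,)).float()

# The model only knows its dimensionality; it no longer carries bounds.
model = GPClassificationModel(dim=2)
model.fit(train_x, train_y)

# Bounds are supplied per call and may differ from the original search space.
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # rows: lower bounds, upper bounds
fmax, argmax = get_max(model, bounds)  # assumed return: (max value, location)
```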
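
Similarly, a hedged sketch of selecting an inducing point allocator from a config. The `inducing_point_method` option name comes from the notes above; the section name and the other option shown are illustrative and may differ from the released config schema.

```python
from aepsych.config import Config

# Illustrative config fragment: the allocator is named in the model's block.
config_str = """
[GPClassificationModel]
inducing_size = 100
# Name the allocator class directly; use KMeansAllocator to revert to the old default.
inducing_point_method = GreedyVarianceReduction
"""
config = Config(config_str=config_str)
```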

Bug fixes:
* Query messages to the server can now handle models that would return values with gradients.
* Query responses will now correctly unpack dimensions.
* Query responses now respect transforms.
* Prediction queries can now actually predict in probability_space.
* Whitespace is no longer meaningful when defining lists in configs.
* The greedy variance allocator (previously the "pivoted_chol" option) now works with models that augment the dimensionality.
* MonotonicRejectionGP now respects the inducing point options from config.

0.5.1

Features:
* Support for discrete parameters, binary parameters, and fixed parameters
* Optimizer options can now be set from config and on models to control the underlying SciPy optimizer options (see the sketch after this list)
* Manual generators now support multi-stimuli studies
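
A hedged sketch of what passing optimizer options to a model might look like. The `optimizer_options` keyword and the mapping of its keys onto SciPy's L-BFGS-B options are assumptions based on this note; check the current documentation for the exact spelling.

```python
from aepsych.models import GPClassificationModel

# Assumed keyword: a dict of SciPy optimizer options forwarded to the
# optimizer used during model fitting (key names follow
# scipy.optimize.minimize(method="L-BFGS-B")).
# dim is required from 0.6.0 on; older versions take lb/ub instead.
model = GPClassificationModel(
    dim=2,
    optimizer_options={"maxfun": 200, "ftol": 1e-6},
)
```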

Bug fixes:
* `dim_grid` now returns the right shapes

**Full Changelog**: https://github.com/facebookresearch/aepsych/compare/v0.5.0...0.5.1

0.5.0

New feature release:
* GPU support for GPClassificationModel and GPRegressionModel, alongside GPU support for generating points with OptimizeAcqfGenerator using any acquisition function.
* Models that are subclasses of GPClassificationModel and GPRegressionModel should also have GPU support.
* This should allow the use of the better acquisition functions while maintaining practical live active learning trial generation speeds.
* GPU support will also speed up post-hoc analysis when fitting on a lot of data. Models have a `model.device` attribute, like tensors in PyTorch, and can be smoothly moved between devices using the same API as tensors (e.g., `model.cuda()` or `model.cpu()`); see the sketch after this list.
* We wrote a document on speeding up AEPsych, especially for live experiments with active learning: https://aepsych.org/docs/speed.
* More models and generators will gain GPU support soon.
* New parameter configuration format and parameter transformations
* The settings for parameters should now be set in parameter-specific blocks; old configs will still work but will not support new parameter features going forward.
* We added a log scale transformation and the ability to disable the normalize scale transformation; these can be set at a parameter-specific level (see the config sketch after this list).
* Take a look at our documentation about the new parameter options: https://aepsych.org/docs/parameters
* More parameter transforms to come!
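
A minimal sketch of the device API described above. The `model.device`, `model.cuda()`, and `model.cpu()` calls follow the note; the constructor arguments and data are illustrative.

```python
import torch

from aepsych.models import GPClassificationModel

# Illustrative data; in practice this comes from your experiment.
train_x = torch.rand(200, 2)
train_y = torch.randint(0, 2, (200,)).float()

model = GPClassificationModel(dim=2)  # dim is required from 0.6.0 on; older versions take lb/ub
if torch.cuda.is_available():
    model.cuda()       # move the model to the GPU, mirroring the tensor API
print(model.device)    # device attribute, like a PyTorch tensor

model.fit(train_x, train_y)  # fitting benefits from the GPU on large datasets
model.cpu()                  # move back to the CPU when needed
```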
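
And a hedged sketch of the new parameter-specific config blocks with the transforms mentioned above. The option names used here (`par_type`, `lower_bound`, `upper_bound`, `log_scale`, `normalize_scale`) are assumptions drawn from the linked documentation; see https://aepsych.org/docs/parameters for the authoritative format.

```python
from aepsych.config import Config

# Illustrative fragment: each parameter gets its own block.
# Strategy, model, and generator sections are omitted for brevity.
config_str = """
[common]
parnames = [contrast, spatial_freq]
stimuli_per_trial = 1
outcome_types = [binary]

[contrast]
par_type = continuous
lower_bound = 0.001
upper_bound = 1
# apply the new log-scale transform to this parameter
log_scale = True

[spatial_freq]
par_type = continuous
lower_bound = 0.5
upper_bound = 8
# opt out of the default normalize-scale transform
normalize_scale = False
"""
config = Config(config_str=config_str)
```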

Please raise an issue if you find any bugs with the new features or if you have any feature requests that would help you run your next experiment using AEPsych.

0.4.4

Minor bug fixes

* Revert tensor changes for LSE contour plotting
* Ensure manual generators don't hang strategies in replay
* Set the default inducing size to 99; be aware that an inducing size >= 100 can significantly slow down the model on very specific hardware setups
