GPax

Latest version: v0.1.8


0.1.1

What's Changed
* Add notebook smoke tests, QOL changes, part 2 by matthewcarbone in https://github.com/ziatdinovmax/gpax/pull/39
* Add heteroskedastic Gaussian process by ziatdinovmax in https://github.com/ziatdinovmax/gpax/pull/44. See [tutorial](https://colab.research.google.com/github/ziatdinovmax/gpax/blob/main/examples/heteroskedasticGP.ipynb).
* Utilities for simplifying assignment of priors by ziatdinovmax in https://github.com/ziatdinovmax/gpax/pull/45
* Utilities for easy specification of custom kernels by ziatdinovmax in https://github.com/ziatdinovmax/gpax/pull/47
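
To illustrate the new kernel utilities, here is a minimal sketch of supplying a user-defined kernel; the expected signature and the parameter names (k_scale, k_length) are assumptions modeled on the built-in kernels, so see PR #47 for the actual helpers:

```python3
import jax.numpy as jnp
import gpax

def my_rbf_kernel(X, Z, params, noise=0, jitter=1e-6):
    """Hypothetical custom RBF kernel; the signature is an assumption."""
    # Squared Euclidean distances scaled by the learned lengthscale
    sq_dist = jnp.sum((X[:, None] - Z[None, :]) ** 2, axis=-1) / params["k_length"] ** 2
    k = params["k_scale"] * jnp.exp(-0.5 * sq_dist)
    if X.shape == Z.shape:
        # Add noise and jitter on the diagonal of the train-train covariance
        k += (noise + jitter) * jnp.eye(X.shape[0])
    return k

# Pass the callable in place of a built-in kernel name
gp_model = gpax.ExactGP(input_dim=1, kernel=my_rbf_kernel)
```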

New Contributors
* matthewcarbone made their first contribution in https://github.com/ziatdinovmax/gpax/pull/39

**Full Changelog**: https://github.com/ziatdinovmax/gpax/compare/0.1.0...0.1.1

0.1.0

What's Changed
* Add batched acquisition functions and knowledge gradient by ziatdinovmax in https://github.com/ziatdinovmax/gpax/pull/37 (see the sketch below)
* Improved documentation
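
The knowledge gradient can be called like the other acquisition functions; the following minimal sketch assumes it shares their (rng_key, model, X) signature, and X, y, and X_candidates are placeholder arrays:

```python3
import gpax

rng_key, rng_key_predict = gpax.utils.get_keys()

gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')
gp_model.fit(rng_key, X, y)  # X, y: data measured so far (placeholders)

# Score a grid of candidate points with the knowledge gradient
# (the exact keyword arguments are assumptions; see PR #37 for the actual API)
acq = gpax.acquisition.KG(rng_key_predict, gp_model, X_candidates)
next_point = X_candidates[acq.argmax()]
```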


**Full Changelog**: https://github.com/ziatdinovmax/gpax/compare/0.0.8...0.1.0

0.0.8

What's Changed
* Multi-fidelity/task DKL and GP by ziatdinovmax in https://github.com/ziatdinovmax/gpax/pull/29
* Add acquisition function penalties by ziatdinovmax in https://github.com/ziatdinovmax/gpax/pull/31 (see the sketch after this list)
* Minor bug fixes for viDKL and viGP by ziatdinovmax in https://github.com/ziatdinovmax/gpax/pull/32
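
A minimal sketch of how the distance-based penalties might be applied is shown below; the penalty and recent_points keyword names are assumptions, gp_model is a fitted GP, and X_recent holds the latest measured inputs (placeholders), so see PR #31 for the actual API:

```python3
import gpax

# Penalize acquisition values near recently measured points to discourage
# re-sampling the same region (keyword names are assumptions; see PR #31)
acq = gpax.acquisition.UCB(
    rng_key_predict, gp_model, X_candidates,
    beta=4, penalty="delta", recent_points=X_recent)
next_point = X_candidates[acq.argmax()]
```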


**Full Changelog**: https://github.com/ziatdinovmax/gpax/compare/0.0.7...0.0.8

0.0.7

Minor release:
- Optional distance-based penalty term for the acquisition functions
- Variational inference approximation of GP for fast image/spectral reconstruction (see the sketch after this list)
- Minor bug fixes and updated tutorials
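
A minimal sketch of the variational approximation, assuming viGP mirrors the ExactGP interface and that fit accepts a num_steps argument (X, y, and X_new are placeholders):

```python3
import gpax

rng_key, rng_key_predict = gpax.utils.get_keys()

# Variational inference is much faster than fully Bayesian NUTS inference,
# at the cost of point estimates for the kernel hyperparameters
gp_model = gpax.viGP(input_dim=1, kernel='Matern')
gp_model.fit(rng_key, X, y, num_steps=1000)  # num_steps is an assumption
y_pred, y_var = gp_model.predict(rng_key_predict, X_new)
```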

0.0.6

- Add utility functions for hypothesis learning based on the [arXiv:2112.06649](https://arxiv.org/abs/2112.06649) paper.
The exploration phase of hypothesis learning can now be implemented as follows:
```python3
import numpy as np
import jax.numpy as jnp
import gpax

# Lists with physical models and probabilistic priors over their parameters
models = [model1, model2, model3]
model_priors = [model1_priors, model2_priors, model3_priors]

# Initialize the reward and predictive uncertainty records
record = np.zeros((len(models), 2))
obj_history = []

def compute_reward(obj_history):
    """Simple reward function"""
    r = 1 if obj_history[-1] < obj_history[-2] else -1
    return r

# Run active hypothesis learning for 15 steps
for e in range(15):

    # Sample a model according to the softmax or epsilon-greedy selection policy
    idx = gpax.hypo.sample_next(
        rewards=record[:, 1], method="softmax", temperature=1.2)

    # Derive fully Bayesian predictive uncertainty with the selected model
    obj, _ = gpax.hypo.step(
        models[idx], model_priors[idx],
        X_measured, y_measured, X_unmeasured,
        gp_wrap=True, gp_kernel='Matern'  # wrap the sampled model into a Gaussian process
    )

    # Update predictive uncertainty records
    obj_history.append(jnp.nanmedian(obj).item())
    if e < 1:
        continue

    # Compute reward and update reward records
    r = compute_reward(obj_history)
    record = gpax.hypo.update_record(record, idx, r)

    # Evaluate the function at the suggested point
    next_point_idx = obj.argmax()
    measured_point = measure(next_point_idx)  # your actual measurement function goes here

    # Update arrays with measured and unmeasured data
    X_measured, y_measured, X_unmeasured = update_datapoints(X_measured, y_measured, X_unmeasured)  # user-defined helper
```

- Minor bug fixes
- Documentation updates
- Test updates

0.0.5

- Allow specifying a CPU or GPU device for training and prediction via the device keyword argument. This can be useful in small-data regimes where model inference with NUTS runs faster on the CPU, while the computation of predictive means and variances is faster on the GPU. Example:
```python3
import jax
import gpax

# Specify devices for training and prediction
device_train = jax.devices("cpu")[0]  # training on CPU
device_predict = jax.devices("gpu")[0]  # prediction on GPU

# Initialize model
gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')

# Run HMC with the iterative No-U-Turn Sampler on the CPU to infer GP model parameters
gp_model.fit(rng_key, X, y, device=device_train)  # X and y are small arrays

# Make a prediction on new inputs using the GPU
y_pred, y_sampled = gp_model.predict(rng_key_predict, X_new, device=device_predict)
```

- Add a utility function to visualize numpyro's distributions. Example:
```python3
import numpyro
import gpax

d = numpyro.distributions.Gamma(2, 5)
gpax.utils.dviz(d, samples=10000)
```

![image](https://user-images.githubusercontent.com/34245227/189211132-cb4a508f-37d8-45ea-ba95-b64c195f8c69.png)

- Add the option to pass a custom jitter value (a small positive term added to the diagonal of the covariance matrix for better numerical stability) to all models. Example:
```python3
gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')
gp_model.fit(rng_key, X, y, jitter=1e-5)
y_pred, y_sampled = gp_model.predict(rng_key_predict, X_new, jitter=1e-5)
```

- Add an example on [Bayesian optimization](https://colab.research.google.com/github/ziatdinovmax/gpax/blob/main/examples/gpax_GPBO.ipynb) and expand descriptions in markdown cells for the existing examples (a minimal sketch of a single optimization step follows below)
- Improve documentation
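
For reference, a single step of such a Bayesian optimization loop might look like the sketch below; measure, X_measured, y_measured, and X_candidates are placeholders, and the full walkthrough lives in the linked notebook:

```python3
import gpax

rng_key, rng_key_predict = gpax.utils.get_keys()

# Fit an exact GP to the data measured so far
gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')
gp_model.fit(rng_key, X_measured, y_measured)

# Score candidate points with the upper confidence bound acquisition function
acq = gpax.acquisition.UCB(rng_key_predict, gp_model, X_candidates, beta=4)

# Measure the most promising candidate and grow the training set
next_point = X_candidates[acq.argmax()]
y_next = measure(next_point)  # placeholder for the actual experiment/objective
```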
