Pyoperon

Latest version: v0.4.0

0.4.0

What's Changed

This release is based on [Operon](https://github.com/heal-research/operon) rev. [4a93f98](https://github.com/heal-research/operon/commit/4a93f98af108dbb98eb1cc10efe0f057b723293c)

- minor bugfix related to lexicographical sorting in NSGA2
- best order sort ([DOI](https://doi.org/10.1145/2908961.2931684)) implementation, now Operon contains all well-known non-dominated sorting algorithms
- refactored dispatch table using generic backend interface (based on mdspan), support for other math backends (Blaze, Eve, etc.)
- improved likelihoods (Gaussian, Poisson) which can also be used as objective functions
- many other small improvements and fixes
- support for SGD and L-BFGS algorithms for parameter tuning


The `scikit-learn` interface has been updated with some fixes and additional parameters:

- `local_iterations` parameter has been renamed to `optimizer_iterations`
- `optimizer` parameter accepts `lm`, `sgd` or `lbfgs` values to choose the optimization method
- `optimizer_likelihood` parameter specifies the likelihood used by the optimizer
- `optimizer_batch_size` controls the batch size for gradient descent
- `local_search_probability` controls the probability of applying local search to an individual
- `lamarckian_probability` controls the probability of writing optimized coefficients back into the genotype
- parameters `add_model_scale_term` and `add_model_intercept_term` control linear scaling of the final model
- `uncertainty` parameter specifies the variance of the error (taken into account inside the likelihood)
- `sgd_update_rule`, `sgd_learning_rate`, `sgd_beta`, `sgd_beta2`, `sgd_epsilon` can be used to configure the SGD algorithm
- `model_selection_criterion` parameter can be used to specify which model from the final Pareto front is returned (NSGA2)
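
A hypothetical configuration using the parameters above (the keys come from this changelog, but the values are illustrative only; exact defaults and accepted values should be checked against the pyoperon documentation):

```python
# Hypothetical parameter set for the updated scikit-learn interface.
# Keys are taken from the changelog; values are illustrative guesses.
params = {
    "optimizer": "lbfgs",               # 'lm', 'sgd' or 'lbfgs'
    "optimizer_iterations": 10,         # formerly 'local_iterations'
    "optimizer_likelihood": "gaussian", # likelihood used by the optimizer
    "optimizer_batch_size": 32,         # batch size for gradient descent
    "local_search_probability": 1.0,    # chance of applying local search
    "lamarckian_probability": 1.0,      # write coefficients back to genotype
    "add_model_scale_term": True,       # linear scaling of the final model
    "add_model_intercept_term": True,
}

# Requires pyoperon to be installed:
# from pyoperon.sklearn import SymbolicRegressor
# reg = SymbolicRegressor(**params)
# reg.fit(X, y)
```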

0.3.6

Changelog

This release is based on [Operon](https://github.com/heal-research/operon) rev. [88a15c3](https://github.com/heal-research/operon/commit/88a15c3f93a4784159d9ed3db12e69feee6361d1) and includes the following features:

- hand-crafted reverse-mode automatic differentiation module for symbolic expression trees, with much better runtime performance
- the ability to optimize all tree node coefficients via nonlinear least squares (previously, only leaf nodes were possible)
- slightly faster interpreter performance (+5-10%)
- a selection of new evaluators:
  * `AggregateEvaluator`: aggregates multiple objectives into a single scalar (min, max, median, mean, harmonic mean, sum)
  * `BayesianInformationCriterionEvaluator`: computes the value of the [Bayesian Information Criterion](https://en.wikipedia.org/wiki/Bayesian_information_criterion) (BIC) for a symbolic regression model
  * `AkaikeInformationCriterionEvaluator`: computes the value of the [Akaike Information Criterion](https://en.wikipedia.org/wiki/Akaike_information_criterion) (AIC) for a symbolic regression model
  * `MinimumDescriptionLengthEvaluator`: computes the [Minimum Description Length](https://en.wikipedia.org/wiki/Minimum_description_length) (MDL) of a symbolic regression model
- various other fixes and improvements
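
Reverse-mode differentiation over the tree representation is what makes optimizing every node coefficient tractable. A toy pure-Python sketch (purely illustrative; Operon's module is a hand-optimized C++ implementation) of the idea:

```python
# Toy reverse-mode autodiff over an expression tree, accumulating
# gradients of the output with respect to every constant (coefficient).

class Node:
    def __init__(self, op, children=(), value=None):
        self.op = op               # 'const', 'var', 'add' or 'mul'
        self.children = list(children)
        self.value = value         # constant or variable value

def evaluate(node):
    if node.op in ("const", "var"):
        return node.value
    vals = [evaluate(c) for c in node.children]
    return sum(vals) if node.op == "add" else vals[0] * vals[1]

def backprop(node, adjoint, grads):
    """Accumulate d(output)/d(coefficient) for every 'const' node."""
    if node.op == "const":
        grads[id(node)] = grads.get(id(node), 0.0) + adjoint
    elif node.op == "add":
        for c in node.children:
            backprop(c, adjoint, grads)
    elif node.op == "mul":
        a, b = node.children
        backprop(a, adjoint * evaluate(b), grads)
        backprop(b, adjoint * evaluate(a), grads)

# f(x) = c0 * x + c1, with x = 2, c0 = 3, c1 = 1
c0 = Node("const", value=3.0)
c1 = Node("const", value=1.0)
x  = Node("var", value=2.0)
f  = Node("add", [Node("mul", [c0, x]), c1])

grads = {}
backprop(f, 1.0, grads)
print(evaluate(f))    # 7.0
print(grads[id(c0)])  # df/dc0 = x = 2.0
print(grads[id(c1)])  # df/dc1 = 1.0
```

A single reverse pass yields the gradient with respect to all coefficients at once, which is exactly what a nonlinear least-squares solver needs per iteration.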

The scikit-learn module now defaults to the minimum description length when selecting the best model from the Pareto front; the selection criterion is configurable between MSE, BIC, AIC and MDL.
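
These criteria trade accuracy against model size. A minimal sketch using the standard Gaussian-likelihood forms of AIC and BIC (note that Operon's MDL evaluator uses a fuller description-length formula, not shown here) illustrates how one model is picked from a Pareto front:

```python
import math

# Standard Gaussian-likelihood forms of the information criteria
# (illustrative; Operon's MDL evaluator is more involved).
def aic(mse, k, n):
    return n * math.log(mse) + 2 * k

def bic(mse, k, n):
    return n * math.log(mse) + k * math.log(n)

def select(front, criterion, n):
    """front: list of (mse, num_parameters) per Pareto-optimal model."""
    return min(range(len(front)),
               key=lambda i: criterion(front[i][0], front[i][1], n))

# Three Pareto-optimal models as (mse, parameter count) pairs:
front = [(0.50, 2), (0.10, 5), (0.09, 20)]
print(select(front, bic, n=100))  # 1: the 20-parameter model is penalized
```

Both criteria here reject the largest model because its tiny accuracy gain does not justify the extra parameters, which is the behaviour a complexity-penalizing default gives you.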
