Fairlearn

Latest version: v0.11.0

0.6.1

* Bugfix for `_create_group_metric_set()`. Fixes the list of metrics computed for regression
and adds a set of metrics for 'probability' problems [PR](https://github.com/fairlearn/fairlearn/pull/727)
* Updated 'Credit Card' notebook [PR](https://github.com/fairlearn/fairlearn/pull/713)
* Added some overlooked `MetricFrame` tests [PR](https://github.com/fairlearn/fairlearn/pull/701)
* Various documentation tweaks and enhancements

0.6.0

* Add `CorrelationRemover` preprocessing technique. This removes correlations
between sensitive and non-sensitive features while retaining as much
information as possible (see the sketch after this list).
* Add `control_features` to the classification moments. These allow for data
stratification, with fairness constraints enforced within each stratum, but
not between strata.
* Update `make_derived_metric()` to use `MetricFrame`
* Assorted small documentation fixes
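
A minimal sketch of the new `CorrelationRemover` transformer; the toy
`pandas.DataFrame`, its column names, and the printed shape are invented
here for illustration:

```python
import pandas as pd

from fairlearn.preprocessing import CorrelationRemover

# Toy data: one sensitive column and two non-sensitive columns.
X = pd.DataFrame({
    "gender": [0, 1, 0, 1, 1, 0],
    "hours_per_week": [40, 35, 50, 38, 42, 45],
    "years_experience": [3, 2, 7, 4, 5, 6],
})

# Project out the linear correlation between the sensitive column and the
# remaining features; alpha=1.0 applies the full correction.
remover = CorrelationRemover(sensitive_feature_ids=["gender"], alpha=1.0)
X_transformed = remover.fit_transform(X)

# The sensitive column is dropped from the output, so only the two
# decorrelated non-sensitive columns remain.
print(X_transformed.shape)  # (6, 2)
```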

0.5.0

* Adjust classes to abide by naming conventions for attributes.
* Change `ExponentiatedGradient` signature by renaming argument `T` to
`max_iter`, `eta_mul` to `eta0`, and by adding `run_linprog_step`.
* API refactoring to separate out different uses of `eps` within
  `ExponentiatedGradient`. It is now solely responsible for setting the L1
  norm bound in the optimization (which controls the excess constraint
  violation beyond what is allowed by the `constraints` object).
  The other usage of `eps` as the right-hand side of constraints is
  now captured directly in the moment classes as follows:
  * Classification moments: `ConditionalSelectionRate` renamed to
    `UtilityParity` and its subclasses have new arguments on the constructor:
    * `difference_bound` - for difference-based constraints such as
      demographic parity difference
    * `ratio_bound_slack` - for ratio-based constraints such as demographic
      parity ratio
    * Additionally, there's a `ratio_bound` argument which represents the
      argument previously called `ratio`.
  * Regression moments: `ConditionalLossMoment` and its subclasses have a new
    argument `upper_bound` with the same purpose for newly enabled regression
    scenarios on `ExponentiatedGradient`.
  For a comprehensive overview of available constraints refer to the new [user
  guide on fairness constraints for reductions methods](https://fairlearn.github.io/user_guide/mitigation.html#reductions).
* Renamed several constraints to create a uniform naming convention according
  to the accepted [metric harmonization proposal](https://github.com/fairlearn/fairlearn-proposals/blob/master/api/METRICS.md):
  * `ErrorRateRatio` renamed to `ErrorRateParity`, and
    `TruePositiveRateDifference` renamed to `TruePositiveRateParity` since the
    desired pattern is `<metric name>Parity` with the exception of
    `EqualizedOdds` and `DemographicParity`.
  * `ConditionalSelectionRate` renamed to `UtilityParity`.
  * `GroupLossMoment` renamed to `BoundedGroupLoss` in order to have a
    descriptive name and for consistency with the paper. Similarly,
    `AverageLossMoment` renamed to `MeanLoss`.
  For a comprehensive overview of available constraints refer to the new [user
  guide on fairness constraints for reductions methods](https://fairlearn.github.io/user_guide/mitigation.html#reductions).
* Added `TrueNegativeRateParity` to provide the opposite constraint of
`TruePositiveRateParity` to be used with reductions techniques.
* Add new constraints and objectives in `ThresholdOptimizer`
* Add class `InterpolatedThresholder` to represent the fitted
`ThresholdOptimizer`
* Add `fairlearn.datasets` module.
* Change the method used to make copies of the estimator in
`ExponentiatedGradient` from `pickle.dump` to `sklearn.clone`.
* Add an argument `sample_weight_name` to `GridSearch` and
`ExponentiatedGradient` to control how `sample_weight` is supplied to
`estimator.fit`.
* Large changes to the metrics API. A new class `MetricFrame` has been
introduced, and `make_group_summary()` removed (along with related
functions). Please see the documentation and examples for more information;
a brief usage sketch follows this list.
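
A minimal sketch of the 0.5.0 API changes described above, using small
made-up data: the right-hand side of the demographic-parity constraint is
now set via `difference_bound` on the moment object, and disaggregated
evaluation goes through the new `MetricFrame` (shown here with the
positional signature from this release; later versions switch to the
`metrics=` keyword):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

from fairlearn.metrics import MetricFrame
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

# Toy data, invented for illustration.
X = [[0, 1], [1, 1], [2, 0], [3, 1], [4, 0], [5, 1], [6, 0], [7, 1]]
y = [0, 0, 1, 0, 1, 1, 0, 1]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# The constraint's right-hand side now lives on the moment object,
# not on ExponentiatedGradient's `eps`.
constraint = DemographicParity(difference_bound=0.05)
mitigator = ExponentiatedGradient(
    LogisticRegression(),
    constraints=constraint,
    max_iter=50,  # formerly the `T` argument
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_pred = mitigator.predict(X)

# Disaggregated evaluation with the new MetricFrame.
mf = MetricFrame(accuracy_score, y, y_pred, sensitive_features=sensitive)
print(mf.overall)       # accuracy over the whole dataset
print(mf.by_group)      # accuracy per sensitive-feature group
print(mf.difference())  # largest between-group gap
```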

0.4.6

* Handle case where reductions relabeling results in a single class
* Refactor metrics:
  * Remove `GroupMetricResult` type in favor of a `Bunch`.
  * Rename and slightly update signatures:
    * `metric_by_group` changed to `group_summary`
    * `make_group_metric` changed to `make_metric_group_summary`
  * Add group summary transformers
    `{difference,ratio,group_min,group_max}_from_group_summary`.
  * Add factory `make_derived_metric`.
* Add new metrics (a brief usage sketch follows this list):
  * base metrics `{true,false}_{positive,negative}_rate`
  * group summary metrics `<metric>_group_summary`
  * derived metrics `<metric>_{difference,ratio,group_min,group_max}`
  * disparity metrics `{demographic_parity,equalized_odds}_{difference,ratio}`
* Remove metrics:
  * `fallout_rate` in favor of `false_positive_rate`
  * `miss_rate` in favor of `false_negative_rate`
  * `specificity_score` in favor of `true_negative_rate`
* Change from public to private:
  * `mean_{over,under}prediction` and `{balanced_,}root_mean_squared_error`
    changed to the versions with a leading underscore
* Fix warning due to changing default `dtype` when creating an empty
`pandas.Series`.
* Enable `GridSearch` for more than two sensitive feature values.
* Add new disparity constraints for reductions methods as moments in
  `fairlearn.reductions` including:
  * `TruePositiveRateDifference`
  * ratio options for all existing constraints in addition to the default,
    i.e., difference between groups w.r.t. the relevant metric.
* Make `ExponentiatedGradient` require 0-1 labels for classification problems,
pending a better solution for Issue 339.
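
A minimal sketch of the disparity and base-rate metrics added in this
release; the toy labels and group memberships below are invented for
illustration:

```python
from fairlearn.metrics import (
    demographic_parity_difference,
    equalized_odds_difference,
    false_positive_rate,
)

# Toy labels and group memberships.
y_true = [0, 1, 1, 0, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Largest gap in selection rate between sensitive-feature groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive))

# Largest gap in true/false positive rates between groups.
print(equalized_odds_difference(y_true, y_pred, sensitive_features=sensitive))

# Base rate metric that replaces the removed `fallout_rate`.
print(false_positive_rate(y_true, y_pred))
```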

0.4.5

* Changes to `ThresholdOptimizer` (a brief usage sketch follows this list):
  * Separate plotting for `ThresholdOptimizer` into its own plotting function.
  * `ThresholdOptimizer` now performs validations during `fit`, and not during
    `__init__`. It also stores the fitted estimator in the `estimator_`
    attribute.
  * `ThresholdOptimizer` is now a scikit-learn meta-estimator, and accepts
    an estimator through the `estimator` parameter. To use a pre-fitted
    estimator, pass `prefit=True`.
* Made `_create_group_metric_set_()` private by prepending an underscore.
Also changed the arguments, so that this routine requires
dictionaries for the predictions and sensitive features. This is a
breaking change.
* Remove `Reduction` base class for reductions methods and replace it with
`sklearn.base.BaseEstimator` and `sklearn.base.MetaEstimatorMixin`.
* Remove `ExponentiatedGradientResult` and `GridSearchResult` in favor of
storing the values and objects resulting from fitting the meta-estimator
directly in the `ExponentiatedGradient` and `GridSearch` objects,
respectively.
* Fix regression in input validation that dropped metadata from `X` if it is
provided as a `pandas.DataFrame`.
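
A minimal sketch of the meta-estimator usage described in the first item
above; the toy data is invented, and `"demographic_parity"` is one of the
supported constraint options:

```python
from sklearn.linear_model import LogisticRegression

from fairlearn.postprocessing import ThresholdOptimizer

# Toy data, invented for illustration.
X = [[0], [1], [2], [3], [4], [5], [6], [7]]
y = [0, 0, 1, 1, 0, 1, 1, 1]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Pass an unfitted estimator; ThresholdOptimizer fits it during `fit`.
# (Use prefit=True to wrap an estimator that is already fitted.)
postprocessor = ThresholdOptimizer(
    estimator=LogisticRegression(),
    constraints="demographic_parity",
)
postprocessor.fit(X, y, sensitive_features=sensitive)
y_adjusted = postprocessor.predict(X, sensitive_features=sensitive)
```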

0.4.4

* Remove `GroupMetricSet` in favour of a `create_group_metric_set` method
* Add basic support for multiple sensitive features
* Refactor `ThresholdOptimizer` to use mixins from scikit-learn
