FAIR
FAIR (Fairness Assessment and Inequality Reduction) helps AI developers assess the fairness of their machine learning applications and mitigate any observed bias. It provides methods for computing fairness metrics as well as a set of bias mitigation algorithms.
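FAIR's own API is not reproduced here; as a minimal illustration of the kind of fairness metric such a toolkit computes, the sketch below calculates the demographic parity difference (the gap in positive-prediction rates between two groups) in plain Python. The function name and the toy data are hypothetical.

```python
def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    y_pred: iterable of 0/1 model predictions
    group:  iterable of 0/1 group-membership flags (hypothetical encoding:
            1 = privileged group, 0 = unprivileged group)
    """
    rates = {}
    for g in (0, 1):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates[g] = sum(preds) / len(preds)
    # 0.0 means both groups receive positive predictions at the same rate.
    return rates[1] - rates[0]


# Toy example: group 1 gets a positive prediction 3/4 of the time, group 0 only 1/4.
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A mitigation algorithm would then try to drive this difference toward zero, e.g. by reweighing training samples or post-processing predictions.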
DiscriminationThreshold
The DiscriminationThreshold class helps decision makers determine the optimal discrimination threshold for a binary classification model. The discrimination threshold is the probability value that separates the positive and negative classes. The commonly used threshold is 0.5; however, adjusting it changes the model's sensitivity to false positives, since precision and recall move in opposite directions as the threshold varies.
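The class's actual interface is not shown here; as a sketch of the underlying idea, the snippet below sweeps candidate thresholds over a model's predicted probabilities and picks the one that maximizes F1. F1 is used purely as an example objective — a decision maker with asymmetric costs could substitute any other criterion. All names are hypothetical.

```python
def best_threshold(y_true, y_prob):
    """Return the discrimination threshold (and its F1) that maximizes F1.

    Every distinct predicted probability is tried as a candidate threshold;
    precision and recall are computed from the resulting hard predictions.
    """
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(y_prob)):
        y_pred = [1 if p >= t else 0 for p in y_prob]
        tp = sum(1 for yt, yp in zip(y_true, y_pred) if yt == 1 and yp == 1)
        fp = sum(1 for yt, yp in zip(y_true, y_pred) if yt == 0 and yp == 1)
        fn = sum(1 for yt, yp in zip(y_true, y_pred) if yt == 1 and yp == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1


# Toy example: the default 0.5 is not optimal here; 0.35 yields a higher F1.
t, f1 = best_threshold([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(t, f1)  # 0.35 0.8
```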
Paired_ttest
The goal of this function is to perform statistical significance tests for classifier comparison. Two methods are provided: McNemar's test and the 5x2cv paired t-test.
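The function's actual signature is not reproduced here; as an illustration of the first of the two methods, the sketch below implements McNemar's test (with continuity correction) from scratch. It compares two classifiers on the same test set by counting the cases where exactly one of them is correct; under the null hypothesis of equal error rates the statistic follows a chi-squared distribution with one degree of freedom.

```python
import math


def mcnemar_test(y_true, pred_a, pred_b):
    """McNemar's test (continuity-corrected) for comparing two classifiers.

    b = cases classifier A got right and classifier B got wrong
    c = cases classifier B got right and classifier A got wrong
    Returns (chi-squared statistic, p-value).
    """
    b = sum(1 for y, pa, pb in zip(y_true, pred_a, pred_b) if pa == y and pb != y)
    c = sum(1 for y, pa, pb in zip(y_true, pred_a, pred_b) if pa != y and pb == y)
    if b + c == 0:
        return 0.0, 1.0  # the classifiers never disagree
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-squared with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value


# Toy example: A is right where B is wrong in 10 cases, the reverse in 2.
y_true = [1] * 12
pred_a = [1] * 10 + [0] * 2
pred_b = [0] * 10 + [1] * 2
chi2, p = mcnemar_test(y_true, pred_a, pred_b)
print(p < 0.05)  # True: the difference is significant at the 5% level
```

The second method, the 5x2cv paired t-test (Dietterich, 1998), instead repeats 2-fold cross-validation five times and tests the fold-wise score differences, which accounts for variance due to the choice of training data.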
What's Changed
Continuous integration by JoaoGranja in https://github.com/EqualityAI/EqualityML/pull/1
Refactor fair pkg by JoaoGranja in https://github.com/EqualityAI/EqualityML/pull/2
Rename project by JoaoGranja in https://github.com/EqualityAI/EqualityML/pull/4
Add r modules by JoaoGranja in https://github.com/EqualityAI/EqualityML/pull/5
Combine metrics mitigation classes by JoaoGranja in https://github.com/EqualityAI/EqualityML/pull/6
Sources of harm by nyujwc331 in https://github.com/EqualityAI/EqualityML/pull/7
Refactor r code by JoaoGranja in https://github.com/EqualityAI/EqualityML/pull/8