Evidently

Latest version: v0.4.29


0.1.55.dev0

Not secure
Updates:
- added TPR, TNR, FPR, FNR Tests for Binary Classification Model Performance
- Renamed status "No Group" to "Dataset-level tests" in TestSuites filtering menu
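
A minimal usage sketch for the new rate tests in a Test Suite. This assumes the test classes follow the library's `Test<Metric>` naming (`TestTPR`, etc.), that `reference_df` and `current_df` are placeholder pandas DataFrames with `target` and `prediction` columns, and that module paths match this release line; details may differ slightly between dev releases.

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestTPR, TestTNR, TestFPR, TestFNR

# Without explicit conditions, each test compares the rate on current data
# against defaults derived from the reference dataset.
performance_suite = TestSuite(tests=[TestTPR(), TestTNR(), TestFPR(), TestFNR()])
performance_suite.run(reference_data=reference_df, current_data=current_df)
performance_suite.save_html("binary_classification_tests.html")
```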

Fixes:
- 207
- 265
- 256
- fixed unit tests for different versions of Python and pandas

0.1.54.dev0

Not secure
**Updates:**
1. Updated the UI to let users group tests by the following properties:
- All tests
- By status
- By feature
- By test type
- By test group

2. New Tests:
- Added tests for binary probabilistic classification models
- Added tests for multiclass classification models
- Added tests for multiclass probabilistic classification models
The full list of tests will be available in the docs.

3. New Tests Presets:
- Regression
- MulticlassClassification
- BinaryClassificationTopK
- BinaryClassification
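
A hedged sketch of running one of the new presets. The class names below follow the preset names listed above (later releases rename them with a `TestPreset` suffix, e.g. `MulticlassClassificationTestPreset`); `reference_df` and `current_df` are placeholder DataFrames with `target` and `prediction` columns.

```python
from evidently.test_suite import TestSuite
from evidently.test_preset import MulticlassClassification

# The preset expands into the individual classification tests added in this release.
suite = TestSuite(tests=[MulticlassClassification()])
suite.run(reference_data=reference_df, current_data=current_df)
suite.save_html("multiclass_tests.html")
```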

0.1.53.dev0

Not secure
* added default configurations for Data Quality Tests
* added default configurations for Data Integrity Tests
* added visualisation for Data Quality Tests
* added visualisation for Data Integrity Tests
* Test descriptions are updated (column names are highlighted)
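
With the default configurations, data quality and data integrity tests can be added without explicit thresholds, and the expectations are derived from the reference dataset. A minimal sketch, assuming `reference_df` and `current_df` are placeholder DataFrames and `age` is a hypothetical column name:

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestMostCommonValueShare, TestNumberOfConstantColumns

suite = TestSuite(tests=[
    TestMostCommonValueShare(column_name="age"),  # no threshold given: default derived from reference_df
    TestNumberOfConstantColumns(),                # default: no more constant columns than in reference_df
])
suite.run(reference_data=reference_df, current_data=current_df)
```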

0.1.52.dev0

Not secure
Implemented a new interface to test data and models in a batch: the Test Suite.

Implemented the following Individual tests:
- TestNumberOfColumns()
- TestNumberOfRows()
- TestColumnNANShare()
- TestShareOfOutRangeValues()
- TestNumberOfOutListValues()
- TestMeanInNSigmas()
- TestMostCommonValueShare()
- TestNumberOfConstantColumns()
- TestNumberOfDuplicatedColumns()
- TestNumberOfDuplicatedRows()
- TestHighlyCorrelatedFeatures()
- TestTargetFeaturesCorrelations()
- TestShareOfDriftedFeatures()
- TestValueDrift()
- TestColumnsType()

Implemented the following test presets:
- Data Quality. This preset is focused on data quality issues like duplicate rows or null values.
- Data Stability. This preset identifies changes in the data or differences between the batches.
- Data Drift. This preset compares feature distributions using statistical tests and distance metrics.
- NoTargetPerformance. This preset combines several checks to run when there are model predictions but no actuals or ground truth labels. It includes a check for prediction drift and some of the data quality and stability checks.
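
And a sketch using a preset instead of hand-picking tests. The class name follows the preset name above (later releases add a `TestPreset` suffix), and the import path is assumed to be `evidently.test_preset`.

```python
from evidently.test_suite import TestSuite
from evidently.test_preset import NoTargetPerformance

# Runs prediction drift plus selected data quality and stability checks when no labels are available.
no_target_suite = TestSuite(tests=[NoTargetPerformance()])
no_target_suite.run(reference_data=reference_df, current_data=current_df)
```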

0.1.51.dev0

Not secure
**Updates:**
- Updated DataDriftTab: added target and prediction rows to the DataDrift Table widget
- Updated CatTargetDriftTab: added widgets for probabilistic cases in both binary and multiclass probabilistic classification, in particular a widget for label drift and class probability distributions.
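
A hedged sketch of building a dashboard with the updated tabs, using the Dashboard API of this release line; `reference_df` and `current_df` are placeholder DataFrames, and the default column mapping assumes `target` and `prediction` columns.

```python
from evidently.dashboard import Dashboard
from evidently.dashboard.tabs import DataDriftTab, CatTargetDriftTab

dashboard = Dashboard(tabs=[DataDriftTab(), CatTargetDriftTab()])
dashboard.calculate(reference_df, current_df)
dashboard.save("drift_dashboard.html")
```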

**Fixes:**
- 233
- fixed previews in the DataDrift Table widget. Histogram previews for reference and current data now share an x-axis, so the bin order is the same in the reference and current histograms, which makes visual comparison of the distributions easier.

0.1.50.dev0

Not secure
**Release scope:**
1. Stat test auto-selection algorithm update: https://docs.evidentlyai.com/reports/data-drift#how-it-works

For small data with <= 1000 observations in the reference dataset:
* For numerical features (n_unique > 5): [two-sample Kolmogorov-Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test).
* For categorical features or numerical features with n_unique <= 5: [chi-squared test](https://en.wikipedia.org/wiki/Chi-squared_test).
* For binary categorical features (n_unique <= 2), we use the proportion difference test for independent samples based on Z-score.
All tests use a 0.95 confidence level by default.

For larger data with > 1000 observations in the reference dataset:
* For numerical features (n_unique > 5): [Wasserstein Distance](https://en.wikipedia.org/wiki/Wasserstein_metric).
* For categorical features or numerical features with n_unique <= 5: [Jensen–Shannon divergence](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence).
All tests use a threshold = 0.1 by default.
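
The selection rule above can be restated as a small decision function. This is a plain-Python paraphrase of the documented logic, not Evidently's internal code, and the returned labels are descriptive only.

```python
def pick_drift_stattest(feature_type: str, n_unique: int, reference_rows: int) -> str:
    """Mirror of the auto-selection rule; feature_type is 'num' or 'cat'."""
    is_numerical = feature_type == "num" and n_unique > 5
    if reference_rows <= 1000:
        # small data: statistical tests at a 0.95 confidence level (p_value vs 0.05)
        if is_numerical:
            return "two-sample Kolmogorov-Smirnov"
        if n_unique <= 2:
            return "proportion difference test (Z-score)"
        return "chi-squared"
    # larger data: distance metrics compared against a 0.1 threshold
    return "Wasserstein distance" if is_numerical else "Jensen-Shannon divergence"
```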

2. Added options for setting custom statistical test for Categorical and Numerical Target Drift Dashboard/Profile:
- cat_target_stattest_func: Defines a custom statistical test to detect target drift in CatTargetDrift.
- num_target_stattest_func: Defines a custom statistical test to detect target drift in NumTargetDrift.

3. Added options for setting custom threshold for drift detection for Categorical and Numerical Target Drift Dashboard/Profile:
- cat_target_threshold: Optional[float] = None
- num_target_threshold: Optional[float] = None

These thresholds depend strongly on the selected stattest: generally, it is either a threshold for the p_value or a threshold for a distance.
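
A hedged sketch of passing these options, assuming they are fields of `DataDriftOptions` and that a custom stattest is a callable taking the reference and current series and returning a p-value (the exact callable signature may differ between versions). The Anderson-Darling wrapper below is a hypothetical example, and `reference_df`/`current_df` are placeholder DataFrames.

```python
from scipy import stats

from evidently.dashboard import Dashboard
from evidently.dashboard.tabs import CatTargetDriftTab
from evidently.options import DataDriftOptions

def anderson_stat_test(reference_data, current_data):
    # hypothetical custom test: k-sample Anderson-Darling, returns an approximate p-value
    return stats.anderson_ksamp([reference_data, current_data]).significance_level

options = DataDriftOptions(
    cat_target_stattest_func=anderson_stat_test,
    cat_target_threshold=0.05,  # interpreted as a p_value threshold for this stattest
)
dashboard = Dashboard(tabs=[CatTargetDriftTab()], options=[options])
dashboard.calculate(reference_df, current_df)
```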

**Fixes:**
- 207
