Evidently

Latest version: v0.4.40

0.1.54.dev0

**Updates:**
1. Updated the UI to let users group tests by the following properties:
- All tests
- By status
- By feature
- By test type
- By test group

2. New Tests:
- Added tests for binary probabilistic classification models
- Added tests for multiclass classification models
- Added tests for multiclass probabilistic classification models
The full list of tests will be available in the docs.

3. New Test Presets (a usage sketch follows the list):
- Regression
- MulticlassClassification
- BinaryClassificationTopK
- BinaryClassification
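
These presets plug into the Test Suite interface introduced in 0.1.52 (below). A minimal sketch, assuming the class naming of later releases, where the presets are exposed as e.g. MulticlassClassificationTestPreset in evidently.test_preset:

```python
import pandas as pd

from evidently.test_preset import MulticlassClassificationTestPreset
from evidently.test_suite import TestSuite

# Toy multiclass data; "target" and "prediction" are the default column names.
reference = pd.DataFrame(
    {"target": ["a", "b", "c", "a", "b", "c"], "prediction": ["a", "b", "c", "a", "c", "b"]}
)
current = pd.DataFrame(
    {"target": ["a", "b", "c", "a", "b", "c"], "prediction": ["a", "a", "c", "b", "c", "c"]}
)

suite = TestSuite(tests=[MulticlassClassificationTestPreset()])
suite.run(reference_data=reference, current_data=current)
suite.save_html("multiclass_tests.html")
```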

0.1.53.dev0

* Added default configurations for Data Quality Tests (see the sketch after this list)
* Added default configurations for Data Integrity Tests
* Added visualisations for Data Quality Tests
* Added visualisations for Data Integrity Tests
* Updated test descriptions (column names are now highlighted)
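
With default configurations, a test can be added without an explicit condition and derive one from the reference data. A sketch (TestColumnNANShare is taken from the 0.1.52 list below; the column_name argument is an assumption based on later releases):

```python
import pandas as pd

from evidently.test_suite import TestSuite
from evidently.tests import TestColumnNANShare

reference = pd.DataFrame({"age": [23, 31, 45, 52]})
current = pd.DataFrame({"age": [25, None, 40, 61]})

# No explicit condition: the default configuration infers one from the reference data.
suite = TestSuite(tests=[TestColumnNANShare(column_name="age")])
suite.run(reference_data=reference, current_data=current)
suite.save_html("data_quality_tests.html")
```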

0.1.52.dev0

Implemented a new interface for testing data and models in a batch: the Test Suite.

Implemented the following individual tests (a usage sketch follows the list):
- TestNumberOfColumns()
- TestNumberOfRows()
- TestColumnNANShare()
- TestShareOfOutRangeValues()
- TestNumberOfOutListValues()
- TestMeanInNSigmas()
- TestMostCommonValueShare()
- TestNumberOfConstantColumns()
- TestNumberOfDuplicatedColumns()
- TestNumberOfDuplicatedRows()
- TestHighlyCorrelatedFeatures()
- TestTargetFeaturesCorrelations()
- TestShareOfDriftedFeatures()
- TestValueDrift()
- TestColumnsType()
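
A minimal sketch of a Test Suite combining several of these tests (import paths follow the later stable layout; this dev release may have exposed the classes under a different module path):

```python
import pandas as pd

from evidently.test_suite import TestSuite
from evidently.tests import (
    TestNumberOfColumns,
    TestNumberOfDuplicatedRows,
    TestShareOfOutRangeValues,
)

reference = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0, 5.0]})
current = pd.DataFrame({"feature": [2.0, 3.0, 4.0, 120.0, 130.0]})

# Run the whole batch of checks against the reference data at once.
suite = TestSuite(
    tests=[
        TestNumberOfColumns(),
        TestNumberOfDuplicatedRows(),
        TestShareOfOutRangeValues(column_name="feature"),  # column_name is an assumed argument
    ]
)
suite.run(reference_data=reference, current_data=current)
suite.save_html("test_suite.html")  # or display the suite object inline in a notebook
```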

Implemented the following test presets (a usage sketch follows the list):
- Data Quality. This preset focuses on data quality issues such as duplicate rows or null values.
- Data Stability. This preset identifies changes in the data or differences between the batches.
- Data Drift. This one compares feature distributions using statistical tests and distance metrics.
- NoTargetPerformance. This preset combines several checks to run when model predictions are available but actuals or ground truth labels are not. It includes prediction drift checks and some of the data quality and stability checks.
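
A preset bundles the matching individual tests behind a single object. A sketch with NoTargetPerformance (assuming the later class name NoTargetPerformanceTestPreset):

```python
import pandas as pd

from evidently.test_preset import NoTargetPerformanceTestPreset
from evidently.test_suite import TestSuite

# Batches with model predictions but no ground truth labels.
reference = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0], "prediction": [0, 1, 0, 1]})
current = pd.DataFrame({"feature": [2.0, 3.0, 4.0, 5.0], "prediction": [1, 1, 1, 0]})

suite = TestSuite(tests=[NoTargetPerformanceTestPreset()])
suite.run(reference_data=reference, current_data=current)
suite.save_html("no_target_performance.html")
```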

0.1.51.dev0

**Updates:**
- Updated DataDriftTab: added target and prediction rows to the DataDrift Table widget
- Updated CatTargetDriftTab: added widgets for probabilistic cases in both binary and multiclass probabilistic classification, in particular a label drift widget and class probability distribution plots (see the sketch below)
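
A sketch of building a dashboard with the updated tabs, using the Dashboard API of the 0.1.x line (for probabilistic cases, the prediction column holds class probabilities; multiclass setups may also need a ColumnMapping):

```python
import pandas as pd

from evidently.dashboard import Dashboard
from evidently.dashboard.tabs import CatTargetDriftTab, DataDriftTab

# Binary probabilistic case: "prediction" holds the positive-class probability.
reference = pd.DataFrame(
    {"feature": [1.0, 2.0, 3.0, 4.0], "target": [0, 1, 0, 1], "prediction": [0.1, 0.8, 0.3, 0.7]}
)
current = pd.DataFrame(
    {"feature": [2.0, 3.0, 4.0, 5.0], "target": [1, 1, 0, 1], "prediction": [0.6, 0.9, 0.4, 0.8]}
)

dashboard = Dashboard(tabs=[DataDriftTab(), CatTargetDriftTab()])
dashboard.calculate(reference, current)
dashboard.save("drift_dashboard.html")
```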

**Fixes:**
- #233
- Fixed previews in the DataDrift Table widget: histogram previews for reference and current data now share an x-axis, so the bin order is the same in the reference and current histograms, which makes visual comparison of the distributions easier.

0.1.50.dev0

**Release scope:**
1. Stat test auto-selection algorithm update: https://docs.evidentlyai.com/reports/data-drift#how-it-works

For small data with <= 1000 observations in the reference dataset:
* For numerical features (n_unique > 5): [two-sample Kolmogorov-Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test).
* For categorical features or numerical features with n_unique <= 5: [chi-squared test](https://en.wikipedia.org/wiki/Chi-squared_test).
* For binary categorical features (n_unique <= 2), we use the proportion difference test for independent samples based on Z-score.
All tests use a 0.95 confidence level by default.

For larger data with > 1000 observations in the reference dataset:
* For numerical features (n_unique > 5): [Wasserstein Distance](https://en.wikipedia.org/wiki/Wasserstein_metric).
* For categorical features or numerical features with n_unique <= 5: [Jensen–Shannon divergence](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence).
All tests use a threshold = 0.1 by default.
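
Paraphrased as code, the selection rule looks roughly like this (a sketch of the logic above, not Evidently's actual implementation; the returned names match Evidently's stattest identifiers):

```python
def choose_default_stattest(n_reference_rows: int, is_numerical: bool, n_unique: int) -> str:
    """Pick the default drift stattest per the rules above."""
    small_data = n_reference_rows <= 1000
    if is_numerical and n_unique > 5:
        # numerical feature: K-S test for small data, Wasserstein distance otherwise
        return "ks" if small_data else "wasserstein"
    if small_data:
        # categorical (or low-cardinality numerical): Z-test if binary, else chi-squared
        return "z" if n_unique <= 2 else "chisquare"
    return "jensenshannon"
```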

2. Added options for setting a custom statistical test for the Categorical and Numerical Target Drift Dashboard/Profile:
* cat_target_stattest_func: defines a custom statistical test to detect target drift in CatTargetDrift.
* num_target_stattest_func: defines a custom statistical test to detect target drift in NumTargetDrift.

3. Added options for setting a custom threshold for drift detection for the Categorical and Numerical Target Drift Dashboard/Profile:
* cat_target_threshold: Optional[float] = None
* num_target_threshold: Optional[float] = None

The right value depends strongly on the selected stattest: generally, it is either a threshold for the p_value or a threshold for a distance (see the sketch below).
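
A sketch of passing both kinds of options, assuming (as in later 0.1.x releases) that they live on DataDriftOptions and that the dashboard accepts an options list:

```python
import pandas as pd

from evidently.dashboard import Dashboard
from evidently.dashboard.tabs import CatTargetDriftTab
from evidently.options import DataDriftOptions

reference = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0], "target": ["a", "b", "a", "b"]})
current = pd.DataFrame({"feature": [2.0, 3.0, 4.0, 5.0], "target": ["a", "a", "a", "b"]})

# Assumption: built-in stattest names are accepted in place of a callable.
options = DataDriftOptions(
    cat_target_stattest_func="psi",
    cat_target_threshold=0.2,  # interpreted against the chosen stattest: p_value or distance
)

dashboard = Dashboard(tabs=[CatTargetDriftTab()], options=[options])
dashboard.calculate(reference, current)
dashboard.save("cat_target_drift.html")
```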

**Fixes:**
- #207

0.1.49.dev0

**StatTests**
The following statistical tests can now be used for both numerical and categorical features (a usage sketch follows the list):
* 'jensenshannon'
* 'kl_div'
* 'psi'
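
A sketch of applying one of these stattests to all features via the options object (the field name feature_stattest_func is an assumption, in line with the target-level options named in the 0.1.50 notes above):

```python
import pandas as pd

from evidently.dashboard import Dashboard
from evidently.dashboard.tabs import DataDriftTab
from evidently.options import DataDriftOptions

reference = pd.DataFrame({"num_feature": [1.0, 2.0, 3.0, 4.0], "cat_feature": ["a", "b", "a", "b"]})
current = pd.DataFrame({"num_feature": [2.0, 3.0, 4.0, 5.0], "cat_feature": ["a", "a", "a", "b"]})

# "psi" now applies to numerical and categorical features alike.
options = DataDriftOptions(feature_stattest_func="psi")

dashboard = Dashboard(tabs=[DataDriftTab()], options=[options])
dashboard.calculate(reference, current)
dashboard.save("data_drift.html")
```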

**Grafana monitoring example**
* Updated the example to be used with several ML models
* Added monitors for NumTargetDrift, CatTargetDrift
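
The new monitors plug into the model-monitoring interface that the Grafana example is built around. A sketch (class and method names follow the evidently.model_monitoring module of the 0.1.x line; exact paths in this release may differ):

```python
import pandas as pd

from evidently.model_monitoring import ModelMonitoring, NumTargetDriftMonitor

reference = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0], "target": [0.5, 1.5, 2.5, 3.5]})
current = pd.DataFrame({"feature": [2.0, 3.0, 4.0, 5.0], "target": [1.0, 2.0, 3.0, 4.0]})

# Use CatTargetDriftMonitor instead for a categorical target.
monitoring = ModelMonitoring(monitors=[NumTargetDriftMonitor()], options=[])
monitoring.execute(reference, current)

# Each metric is a labeled value that the example service exports to Prometheus/Grafana.
for metric, value, labels in monitoring.metrics():
    print(metric.name, value, labels)
```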
