Model-tuner

Latest version: v0.0.20a0

0.0.20a

- added flexibility to accept either `boolean` or `None` for stratification inputs
- added a custom exception for non-pandas inputs in `return_bootstrap_metrics`
- enforced the required `model_type` input to be specified as `"classification"` or `"regression"`
- removed the extraneous `"="` print below `pipeline_steps`
- handled missing `pipeline_steps` when using `imbalance_sampler`
- updated requirements for `python==3.11`
- fixed SMOTE for early stopping
- removed the extra `model_type` input from `xgb_early_test.py`
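
The `model_type` enforcement could be implemented roughly as below. This is a hypothetical sketch for illustration only, not the library's actual code; the function and constant names are assumptions.

```python
# Hypothetical sketch of an enforced model_type check; the actual
# validation inside model_tuner may differ.
VALID_MODEL_TYPES = ("classification", "regression")

def validate_model_type(model_type):
    """Raise ValueError unless model_type is one of the accepted strings."""
    if model_type not in VALID_MODEL_TYPES:
        raise ValueError(
            f"model_type must be one of {VALID_MODEL_TYPES}, got {model_type!r}"
        )
    return model_type
```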

0.0.19a

- Requirements updated again to be compatible with Google Colab out of the box.
- Fixed a bug in the `fit()` method where `best_params` was not defined when no score was specified.
- Threshold bug now actually fixed; specificity and other metrics should reflect this. (Defaults to `0.5` if `optimal_threshold` is not specified.)
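
The default-threshold behaviour can be illustrated with a minimal standalone sketch (independent of the library's internals; the function name is an assumption):

```python
# Minimal sketch: map predicted positive-class probabilities to 0/1 labels
# using a decision threshold that defaults to 0.5.
def apply_threshold(y_proba, threshold=0.5):
    """Return 1 where the probability meets the threshold, else 0."""
    return [1 if p >= threshold else 0 for p in y_proba]

print(apply_threshold([0.2, 0.5, 0.9]))  # → [0, 1, 1]
```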

0.0.18a

- Updated requirements to pin `numpy` to versions `<1.26` for Python 3.8-3.11.

This should stop a rerun from occurring when using the library on Google Colab.
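
A pin like this can be expressed in a requirements file with a PEP 508 environment marker. The line below is illustrative of the form, not the package's exact requirements entry:

```
numpy<1.26; python_version >= "3.8" and python_version < "3.12"
```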

0.0.17a

Major fixes:
- The verbosity variable is now popped from the parameters before the fit
- Fixed a bug with `ColumnTransformer` early stopping (the validation set is now transformed correctly)
- Returned metrics now follow a consistent naming convention
- `report_model_metrics` now uses the correct threshold in all cases
- Default values updated for `train_val_test_split`
- `tune_threshold_Fbeta` is now called with the correct number of parameters in all cases
- Requirements updated: `XGBoost` bumped to `2.1.2` for later Python versions.
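
As a rough illustration of the kind of F-beta threshold tuning `tune_threshold_Fbeta` performs, here is a standalone sketch; it is not the library's implementation, and the function names and signatures are assumptions:

```python
# Standalone sketch of F-beta threshold tuning (illustrative only).
def fbeta_score(y_true, y_pred, beta=1.0):
    """F-beta from binary labels; returns 0.0 when there are no true positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def tune_threshold(y_true, y_proba, beta=1.0, steps=101):
    """Grid-search the decision threshold that maximizes F-beta."""
    best_t, best_s = 0.5, -1.0
    for i in range(steps):
        t = i / (steps - 1)
        y_pred = [1 if p >= t else 0 for p in y_proba]
        s = fbeta_score(y_true, y_pred, beta)
        if s > best_s:
            best_t, best_s = t, s
    return best_t, best_s
```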

Minor changes:
- `help(model_tuner)` should now be correctly formatted on Google Colab

0.0.16a

- Custom pipeline steps updated: pipeline handling has been completely reworked so that steps now order themselves and unnamed steps are supported, always ensuring the correct order.
- This fixed multiple other issues that were occurring related to logging in `imbalanced-learn`.
- Reporting model metrics now works.
- `AutoKeras` code deprecated and removed.
- Fixed a `KFold` bug introduced by `CatBoost` support.
- Added pretty printing of the pipeline.
- The boosting variable has been renamed.
- Version constraints have been updated and refactored.
- `tune_threshold_Fbeta` has been cleaned up to remove unused parameters.
- Removed an unnecessary `self` from `train_val_test` and moved it outside the class method.
- Deprecated `setup.py` in favor of `pyproject.toml` per the forthcoming `pip 25` update.
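
A minimal `pyproject.toml` skeleton of the kind that replaces `setup.py` might look like this; the values shown are illustrative placeholders, not the project's actual file:

```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "model_tuner"    # illustrative
version = "0.0.16a0"    # illustrative
```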

0.0.15a

Contains all previous fixes relating to:

- `CatBoost` support (early stopping, and support involving resetting estimators).
- Pipeline steps now support hyperparameter tuning of the resamplers (`SMOTE`, `ADASYN`, etc.).
- Removed older implementations of imputation and scaling, moving to support only custom `pipeline_steps`.
- Fixed bugs in stratification with regard to length mismatches of the dependent variable when using column names to stratify.
- Cleaned up and removed multiple lines of unused code and unused initialization parameters.
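
Tuning a resampler inside the pipeline steps can be sketched as data, with sklearn-style `"<step>__<param>"` keys addressing each named step. Every name below (`pipeline_steps`, `resampler`, `model`, the parameter keys) is an illustrative placeholder, not the library's actual API:

```python
# Illustrative only: how resampler hyperparameters might be addressed
# through named pipeline steps; names are placeholders, not model_tuner's API.
pipeline_steps = [
    ("resampler", "SMOTE()"),      # would be a SMOTE/ADASYN instance
    ("model", "XGBClassifier()"),  # would be the estimator instance
]

# sklearn-style "<step>__<param>" keys reach into each named step.
param_grid = {
    "resampler__k_neighbors": [3, 5],  # SMOTE neighbourhood size
    "model__max_depth": [3, 6],
}
```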
