Model-tuner

Latest version: v0.0.29b1


0.0.19a

- Requirements updated again for out-of-the-box compatibility with Google Colab.
- Fixed a bug in the `fit()` method where `best_params` wasn't defined if no scoring metric was specified.
- Threshold bug now actually fixed; specificity and other metrics should reflect this. (Defaults to 0.5 if `optimal_threshold` is not specified.)
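As a sketch of the corrected behavior (function and variable names here are illustrative, not the library's API): predictions are labeled at `optimal_threshold` when it is given, falling back to 0.5 otherwise, and specificity is computed from the resulting labels.

```python
import numpy as np

def specificity_at_threshold(y_true, y_proba, optimal_threshold=None):
    """Illustrative sketch: threshold predicted probabilities (default 0.5)
    and compute specificity = TN / (TN + FP)."""
    threshold = 0.5 if optimal_threshold is None else optimal_threshold
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_proba) >= threshold).astype(int)
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tn / (tn + fp)

y_true = [0, 0, 0, 1, 1]
y_proba = [0.2, 0.4, 0.6, 0.7, 0.9]
print(specificity_at_threshold(y_true, y_proba))        # default threshold 0.5
print(specificity_at_threshold(y_true, y_proba, 0.65))  # explicit threshold
```

Raising the threshold turns borderline positives back into negatives, which is why specificity changes with it.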

0.0.18a

- Updated requirements to pin `numpy` to versions `<1.26` for Python 3.8-3.11.

This should prevent a rerun from occurring when using the library on Google Colab.

0.0.17a

Major fixes:
- Verbosity variable is now popped from the parameters before the fit
- Bug with Column Transformer early stopping fixed (valid set is now transformed correctly)
- Returned metrics now follow a consistent naming convention
- `report_model_metrics` is now using the correct threshold in all cases
- Default values updated for `train_val_test_split`
- `tune_threshold_Fbeta` is now called with the correct number of parameters in all cases
- Requirements updates: `XGBoost` updated to `2.1.2` for later Python versions.

Minor changes:
- `help(model_tuner)` should now be correctly formatted in Google Colab
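For context on what `tune_threshold_Fbeta` does conceptually, here is a minimal sketch of F-beta threshold tuning (an illustration using scikit-learn, not the library's implementation):

```python
import numpy as np
from sklearn.metrics import fbeta_score

def tune_threshold_fbeta_sketch(y_true, y_proba, beta=1.0, n_grid=101):
    """Scan a grid of candidate thresholds and return the one that
    maximizes the F-beta score of the resulting hard labels."""
    thresholds = np.linspace(0.0, 1.0, n_grid)
    scores = [
        fbeta_score(y_true, (y_proba >= t).astype(int), beta=beta, zero_division=0)
        for t in thresholds
    ]
    return thresholds[int(np.argmax(scores))]

y_true = np.array([0, 0, 0, 1, 1, 1])
y_proba = np.array([0.1, 0.3, 0.55, 0.6, 0.8, 0.9])
best = tune_threshold_fbeta_sketch(y_true, y_proba, beta=2.0)
```

With `beta > 1` recall is weighted more heavily, so the tuned threshold tends to sit lower than the F1-optimal one.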

0.0.16a

- Custom pipeline steps reworked: our pipeline usage has been completely changed; the pipeline now orders itself, supports unnamed steps, and always ensures correct step order
- This fixed multiple other issues that were occurring with logging in `imbalanced-learn`
- Reporting model metrics now works.
- `AutoKeras` code deprecated and removed.
- `KFold` bug introduced because of `CatBoost`. This has now been fixed.
- Pretty print of pipeline.
- Boosting variable has been renamed.
- Version constraints have been updated and refactored.
- `tune_threshold_Fbeta` has been cleaned up to remove unused parameters.
- Removed an unnecessary `self` from `train_val_test` and moved the function outside the class.
- Deprecated `setup.py` in favor of `pyproject.toml` per the forthcoming `pip 25` update.

0.0.15a

Contains all previous fixes relating to:

- `CatBoost` support (early stopping, and support involving resetting estimators).
- Pipeline steps now support hyperparameter tuning of the resamplers (`SMOTE`, `ADASYN`, etc.).
- Removed older implementations of imputation and scaling; only custom `pipeline_steps` are now supported.
- Fixed bugs in stratification with regards to length mismatch of dependent variable when using column names to stratify.
- Cleaned up and removed multiple lines of unused code and unused initialization parameters.

0.0.14a

In previous versions, the `train_val_test_split` method allowed for stratification either by y (`stratify_y`) or by specified columns (`stratify_cols`), but not both at the same time. There are use cases where stratification by both the target variable (y) and specific columns is necessary to ensure a balanced and representative split across different data segments.

**Enhancement**

Modified the `train_val_test_split` method to support simultaneous stratification by both `stratify_y` and `stratify_cols`. This was achieved inside the method by building a combined stratification key, ensuring both y and the specified columns are considered during the stratification process.

```python
stratify_key = pd.concat([X[stratify_cols], y], axis=1)

strat_key_val_test = pd.concat(
    [X_valid_test[stratify_cols], y_valid_test], axis=1
)
```
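A sketch of how such a combined key can drive a split (using scikit-learn's `train_test_split` directly; the data and variable names are illustrative, not from the library):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

X = pd.DataFrame({
    "site": ["A", "A", "B", "B"] * 10,
    "feat": range(40),
})
y = pd.Series([0, 1] * 20, name="target")

# Combine the stratification columns with y into a single key so the
# split is balanced across both simultaneously.
stratify_key = pd.concat([X[["site"]], y], axis=1)

X_train, X_valid_test, y_train, y_valid_test = train_test_split(
    X, y, test_size=0.5, stratify=stratify_key, random_state=0
)
```

Each (`site`, `target`) combination appears 10 times here, so a 50/50 stratified split places exactly 5 rows of each combination on each side.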
