In this release of Citrine Python, we are shipping a new tool while paving the way for future functionality. We are proud to introduce the Holdout Set Evaluator, which allows users to evaluate model performance on a user-defined holdout set in lieu of our typical cross validation strategy. We have also included updates to our Python SDK that will eventually allow users to select different algorithms in their AutoML Predictors and to handle archival of predictors once they have been properly versioned. Both of those features are still in development for most production deployments, but these changes allow us to test and iterate before full deployment.
## What's New
* Preparation for algorithm selection in AutoML Predictors. The new `estimators` field will eventually allow users to select additional algorithms to be considered during training; full backend functionality is still to come. See the estimators sketch after this list. (#780)
* Introduction of the Holdout Set Evaluator for generating model error metrics on a customizable holdout set in lieu of typical cross validation. The [`HoldoutSetEvaluator`](https://github.com/CitrineInformatics/citrine-python/blob/984df3d6ff399fe22dd001086b3989e2711f5f17/src/citrine/informatics/predictor_evaluator.py#L138), which can be used in a Predictor Evaluation Workflow alongside or instead of a `CrossValidationEvaluator`, takes a Data Source as an argument; the model predicts on that data during workflow execution. The same set of model performance metrics, such as RMSE and PvA results, is then calculated and returned in the execution results. See the evaluator sketch after this list. (#768)
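The estimators sketch below shows what selecting an algorithm could look like once the backend work lands. It assumes an `AutoMLEstimator` enum importable alongside `AutoMLPredictor`, and the descriptor names are purely illustrative; treat the specific names and availability as provisional until the feature ships.

```python
from citrine.informatics.descriptors import RealDescriptor
from citrine.informatics.predictors import AutoMLEstimator, AutoMLPredictor

# Illustrative input/output descriptors; keys must match your training data.
x = RealDescriptor("x", lower_bound=0.0, upper_bound=100.0, units="")
y = RealDescriptor("y", lower_bound=0.0, upper_bound=100.0, units="")

predictor = AutoMLPredictor(
    name="AutoML predictor with an explicit estimator",
    description="Requests a random forest, matching the current default behavior",
    inputs=[x],
    output=y,
    # The new field: a set of algorithms to consider during training.
    # Only the existing default is meaningful until backend support arrives.
    estimators={AutoMLEstimator.RANDOM_FOREST},
)
```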
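The evaluator sketch below wires a `HoldoutSetEvaluator` into a Predictor Evaluation Workflow. The GEM Table ID, the response key, and the metric choices are placeholders, and the constructor arguments (`data_source`, `responses`, `metrics`) are assumptions based on the linked source; check the class definition before relying on them.

```python
from uuid import UUID

from citrine.informatics.data_sources import GemTableDataSource
from citrine.informatics.predictor_evaluation_metrics import PVA, RMSE
from citrine.informatics.predictor_evaluator import HoldoutSetEvaluator
from citrine.informatics.workflows import PredictorEvaluationWorkflow

# Placeholder GEM Table holding the rows reserved as the holdout set.
holdout_data = GemTableDataSource(
    table_id=UUID("00000000-0000-0000-0000-000000000000"),
    table_version=1,
)

evaluator = HoldoutSetEvaluator(
    name="holdout evaluation",
    description="Score the predictor against a fixed holdout set",
    data_source=holdout_data,
    responses={"Shear modulus"},  # output descriptor keys to score (illustrative)
    metrics={RMSE(), PVA()},      # same metrics available as in cross validation
)

# The evaluator can run alongside or instead of a CrossValidationEvaluator.
workflow = PredictorEvaluationWorkflow(
    name="holdout workflow",
    description="Evaluates a predictor on a user-defined holdout set",
    evaluators=[evaluator],
)
```

Registering the workflow to a project and triggering an execution against a predictor then returns the familiar metric payloads, such as RMSE and PvA, in the execution results.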
## Improvements
* Updated the reported predictor archival status to prepare for an upcoming change to predictor versioning. (#782)
**Full Changelog**: https://github.com/CitrineInformatics/citrine-python/compare/v1.37.1...v1.40.0