snowflake-ml-python

Latest version: v1.7.1


1.7.1

Bug Fixes

- Registry: Null values are now allowed in the dataframe used for model signature inference. Null values are ignored,
and the remaining values are used to infer the signature.
- Registry: Pandas extension dtypes (`pandas.StringDtype()`, `pandas.BooleanDtype()`, etc.) are now supported in model
signature inference.
- Registry: Null values are now allowed in the dataframe passed to `predict`.
- Data: Fix missing `snowflake.ml.data.*` module exports in the wheel.
- Dataset: Fix missing `snowflake.ml.dataset.*` module exports in the wheel.
- Registry: Fix an issue where `tf_keras.Model` was not recognized as a Keras model when logging.
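The nullable-input fixes above can be illustrated with plain pandas: nullable extension dtypes carry `pd.NA` alongside concrete values, and dataframes like this are now acceptable inputs (a minimal sketch of the input data only, not of signature inference itself):

```python
import pandas as pd

# A DataFrame mixing nullable extension dtypes and missing values -- the
# kind of input that signature inference and predict now accept.
df = pd.DataFrame(
    {
        "name": pd.array(["a", None, "c"], dtype=pd.StringDtype()),
        "flag": pd.array([True, pd.NA, False], dtype=pd.BooleanDtype()),
    }
)

print(df["name"].isna().tolist())  # [False, True, False]
print(isinstance(df["flag"].dtype, pd.BooleanDtype))  # True
```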

New Features

- Registry: The `enable_monitoring` option now defaults to False. This gates access to the preview Model Monitoring features.
- Model Monitoring: `show_model_monitors` Registry method. This feature is still in Private Preview.
- Registry: Support `pd.Series` in input and output data.
- Model Monitoring: `add_monitor` Registry method. This feature is still in Private Preview.
- Model Monitoring: `resume` and `suspend` ModelMonitor. This feature is still in Private Preview.
- Model Monitoring: `get_monitor` Registry method. This feature is still in Private Preview.
- Model Monitoring: `delete_monitor` Registry method. This feature is still in Private Preview.

1.7.0

Behavior Change

- Generic: Require python >= 3.9.
- Data Connector: Update `to_torch_dataset` and `to_torch_datapipe` to add a dimension for scalar data.
This allows for more seamless integration with PyTorch `DataLoader`, which creates batches by stacking inputs of each batch.

Examples:

```python
ds = connector.to_torch_dataset(shuffle=False, batch_size=3)
```

- Input: "col1": [10, 11, 12]
- Previous batch: array([10., 11., 12.]) with shape (3,)
- New batch: array([[10.], [11.], [12.]]) with shape (3, 1)

- Input: "col2": [[0, 100], [1, 110], [2, 200]]
- Previous batch: array([[ 0, 100], [ 1, 110], [ 2, 200]]) with shape (3, 2)
- New batch: No change
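The reshaping behind this change can be sketched with NumPy alone: 1-D (scalar-per-row) batches gain a trailing dimension so that stacking, as PyTorch's default collate does, yields `(batch, 1)` rather than `(batch,)`. The helper name below is illustrative, not part of the library:

```python
import numpy as np

def add_scalar_dim(batch: np.ndarray) -> np.ndarray:
    """Give 1-D (scalar-per-row) batches a trailing feature dimension."""
    return batch[:, np.newaxis] if batch.ndim == 1 else batch

col1 = np.array([10.0, 11.0, 12.0])              # scalar column, shape (3,)
col2 = np.array([[0, 100], [1, 110], [2, 200]])  # already 2-D, shape (3, 2)

print(add_scalar_dim(col1).shape)  # (3, 1)
print(add_scalar_dim(col2).shape)  # (3, 2) -- unchanged
```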

- Model Registry: External access integrations are optional when creating a model inference service in
Snowflake >= 8.40.0.
- Model Registry: Deprecate `build_external_access_integration` in favor of `build_external_access_integrations` in
`ModelVersion.create_service()`.

Bug Fixes

- Registry: Updated the `log_model` API to accept both `signatures` and `sample_input_data` parameters.
- Feature Store: `ExampleHelper` now uses a fully qualified path for the table name. Change weather feature aggregation from 1d to 1h.
- Data Connector: Return a numpy array with the appropriate object type, instead of a list, for multi-dimensional
data from `to_torch_dataset` and `to_torch_datapipe`.
- Model explainability: Resolved the incompatibility between SHAP 0.42.1 and XGBoost 2.1.1 by using the latest SHAP 0.46.0.

New Features

- Registry: `ModelContext` now accepts a variable number of keyword arguments. Example usage:

```python
import json

import pandas as pd

from snowflake.ml.model import custom_model

# model1: a previously loaded model object
mc = custom_model.ModelContext(
    config='local_model_dir/config.json',
    m1=model1,
)

class ExamplePipelineModel(custom_model.CustomModel):
    def __init__(self, context: custom_model.ModelContext) -> None:
        super().__init__(context)
        v = open(self.context['config']).read()
        self.bias = json.loads(v)['bias']

    @custom_model.inference_api
    def predict(self, input: pd.DataFrame) -> pd.DataFrame:
        model_output = self.context['m1'].predict(input)
        return pd.DataFrame({'output': model_output + self.bias})
```

- Model Development: Upgrade scikit-learn in the UDTF backend for the `log_loss` metric. As a result, the `eps` argument is now ignored.
- Data Connector: Add the option of passing a `None` sized batch to `to_torch_dataset` for better
interoperability with PyTorch DataLoader.
- Model Registry: Support [pandas.CategoricalDtype](https://pandas.pydata.org/docs/reference/api/pandas.CategoricalDtype.html#pandas-categoricaldtype)
- Registry: It is now possible to pass `signatures` and `sample_input_data` at the same time, to capture background
data for explainability and data lineage.
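The newly supported `pandas.CategoricalDtype` looks like this in plain pandas (illustrating only the input data, not the `log_model` call):

```python
import pandas as pd

# An ordered categorical column of the kind the registry now supports.
size = pd.CategoricalDtype(categories=["S", "M", "L"], ordered=True)
df = pd.DataFrame({"size": pd.array(["S", "L", "M"], dtype=size)})

print(df["size"].cat.categories.tolist())  # ['S', 'M', 'L']
print(df["size"].cat.codes.tolist())       # [0, 2, 1]
```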

1.6.4

Bug Fixes

- Registry: Fix an issue that led to an incident when using `ModelVersion.run` with a service.

1.6.3

- Model Registry (PrPr) has been removed.

Bug Fixes

- Registry: Fix a bug where a package name that does not follow PEP 508 was unexpectedly normalized when logging
a model.
- Registry: Fix the `not a valid remote uri` error when logging MLflow models.
- Registry: Fix a bug when `ModelVersion.run` is called in a nested way.
- Registry: Fix an issue that caused `log_model` to fail when a local package version contains parts other than
the base version.
- Fix an issue where `sample_weights` were not being applied to search estimators.
- Model explainability: Fix a bug that created `explain` as a regular function instead of a table function when enabled by default.
- Model explainability: Based on customer feedback, update LightGBM binary classification explanations to return non-JSON values.
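For context on the PEP-508 fix: standard distribution-name normalization (PEP 503 style) lowercases the name and collapses runs of `-`, `_`, and `.`; a name that is not even valid under PEP 508 should be rejected rather than silently normalized this way. A minimal sketch of that normalization:

```python
import re

def normalize_name(name: str) -> str:
    # Standard distribution-name normalization (PEP 503): collapse runs
    # of '-', '_', '.' into a single '-' and lowercase the result.
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize_name("My_Package..Name"))  # my-package-name
```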

New Features

- Data: Improve `DataConnector.to_pandas()` performance when loading from Snowpark DataFrames.
- Model Registry: Allow users to set a model task while using `log_model`.
- Feature Store: `FeatureView` supports `ON_CREATE` or `ON_SCHEDULE` initialize mode.

1.6.2

Bug Fixes

- Modeling: Support XGBoost versions greater than 2.
- Data: Fix multiple epoch iteration over `DataConnector.to_torch_datapipe()` DataPipes.
- Generic: Fix a bug where an invalid name passed to an argument expecting a fully qualified name was parsed
incorrectly; it now raises an exception.
- Model Explainability: Handle explanations for multiclass XGBoost classification models.
- Model Explainability: Add workarounds and better error handling for XGBoost > 2.1.0 not working with SHAP 0.42.1.
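The fully-qualified-name fix can be pictured with a hypothetical validator: a Snowflake fully qualified name has the form `database.schema.object`, and anything that does not split into exactly three non-empty parts should raise rather than be parsed loosely (this helper ignores quoted identifiers and is illustrative only, not the library's parser):

```python
def parse_fully_qualified_name(name: str) -> tuple:
    """Split 'db.schema.object' into parts, raising on invalid input."""
    parts = name.split(".")
    if len(parts) != 3 or not all(parts):
        raise ValueError(f"'{name}' is not a fully qualified name")
    db, schema, obj = parts
    return (db, schema, obj)

print(parse_fully_qualified_name("MY_DB.MY_SCHEMA.MY_MODEL"))
# ('MY_DB', 'MY_SCHEMA', 'MY_MODEL')
```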

New Features

- Data: Add top-level exports for `DataConnector` and `DataSource` to `snowflake.ml.data`.
- Data: Add native batching support via `batch_size` and `drop_last_batch` arguments to `DataConnector.to_torch_dataset()`.
- Feature Store: `update_feature_view()` supports taking a feature view object as an argument.
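The `batch_size`/`drop_last_batch` semantics can be sketched in plain Python (a sketch of the behavior, not the library's implementation):

```python
from typing import Iterable, Iterator, List

def batched(rows: Iterable[int], batch_size: int,
            drop_last_batch: bool) -> Iterator[List[int]]:
    """Yield fixed-size batches; optionally drop a partial final batch."""
    batch: List[int] = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last_batch:
        yield batch  # partial final batch

print(list(batched(range(7), 3, drop_last_batch=False)))
# [[0, 1, 2], [3, 4, 5], [6]]
print(list(batched(range(7), 3, drop_last_batch=True)))
# [[0, 1, 2], [3, 4, 5]]
```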

1.6.1

Bug Fixes

- Feature Store: Support large metadata blobs when generating a dataset.
- Feature Store: Added a hidden knob in `FeatureView` as kwargs for setting a customized `refresh_mode`.
- Registry: Fix an error message in `ModelVersion.run` when `function_name` is not specified and the model has multiple
target methods.
- Cortex inference: `snowflake.cortex.Complete` now uses only the REST API for streaming; the `use_rest_api_experimental`
flag is no longer needed.
- Feature Store: Add a new API, `FeatureView.list_columns()`, which lists all column information.
- Data: Fix `DataFrame` ingestion with `ArrowIngestor`.

New Features

- Enable `set_params` to set the parameters of the underlying sklearn estimator if the snowflake-ml model has been fit.
- Data: Add `snowflake.ml.data.ingestor_utils` module with utility functions helpful for `DataIngestor` implementations.
- Data: Add new `to_torch_dataset()` connector to `DataConnector` to replace deprecated DataPipe.
- Registry: The `enable_explainability` option now defaults to True for XGBoost, LightGBM, and CatBoost as a PuPr feature.
- Registry: Option to `enable_explainability` when registering SHAP-supported sklearn models.
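The `set_params` behavior mirrors plain scikit-learn, where hyperparameters can be updated after construction; snowflake-ml models now forward this to the underlying sklearn estimator even after `fit`. A plain-sklearn analogue (assuming scikit-learn is installed):

```python
from sklearn.linear_model import LogisticRegression

# set_params updates hyperparameters on an already-fitted estimator;
# snowflake-ml models now forward this call to the wrapped sklearn object.
est = LogisticRegression(C=1.0)
est.fit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
est.set_params(C=0.5)
print(est.get_params()["C"])  # 0.5
```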
