Pynb-dag-runner

Latest version: v0.0.9

0.5.1
------------------

MLflow 0.5.1 is a patch release on top of 0.5.0 containing only bug fixes and no breaking changes or features.

Bug fixes:

- Fix ``with mlflow.start_run() as run`` to actually set ``run`` to the created Run (previously, it was None; see the sketch after this list) (322, tomasatdatabricks)
- Fixes to DBFS artifactory to throw an exception if logging an artifact fails (309) and to mimic FileStore's behavior of logging subdirectories (347, andrewmchen)
- Fix for Python 3.7 support with tarfiles (329, tomasatdatabricks)
- Fix spark.load_model not to delete the DFS tempdir (335, aarondav)
- MLflow UI now appropriately shows entrypoint if it's not main (345, aarondav)
- Make Python API forward-compatible with newer server versions of protos (348, aarondav)
- Improved API docs (305, 284, smurching)
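
A minimal sketch of the ``start_run`` fix above; the parameter name and value are illustrative, and this assumes the 0.5.x ``Run`` entity exposes ``info.run_uuid``::

    import mlflow

    # With 0.5.1, the context manager binds the created run object;
    # in 0.5.0, ``run`` was None here.
    with mlflow.start_run() as run:
        mlflow.log_param("alpha", 0.5)
        print(run.info.run_uuid)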

0.5.0
------------------

MLflow 0.5.0 offers some major improvements, including Keras and PyTorch first-class support as models, SFTP support as an artifactory, a new scatterplot visualization to compare runs, and a more complete Python SDK for experiment and run management.

Breaking changes:

- The Tracking API has been split into two pieces, a "basic logging" API and a "tracking service" API (see the sketch after this list). The "basic logging" API deals with logging metrics, parameters, and artifacts to the currently active run, and is accessible in ``mlflow`` (e.g., ``mlflow.log_param``). The tracking service API allows managing experiments and runs (especially historical runs) and is available in ``mlflow.tracking``. The tracking service API is analogous to the upcoming R and Java Tracking Service SDKs. Please be aware of the following breaking changes:

- ``mlflow.tracking`` no longer exposes the basic logging API, only ``mlflow``. So, code that was written like ``from mlflow.tracking import log_param`` will have to be ``from mlflow import log_param`` (note that almost all examples were already doing this).
- Access to the service API goes through the ``mlflow.tracking.get_service()`` function, which relies on the same tracking server set by either the environment variable ``MLFLOW_TRACKING_URI`` or by code with ``mlflow.tracking.set_tracking_uri()``. So code that used to look like ``mlflow.tracking.get_run()`` will now have to do ``mlflow.tracking.get_service().get_run()``. This does not apply to the basic logging API.
- ``mlflow.ActiveRun`` has been converted into a lightweight wrapper around ``mlflow.entities.Run`` to enable the Python ``with`` syntax. This means that there are no longer any special methods on the object returned when calling ``mlflow.start_run()``. Code that relied on those methods should be migrated to the tracking service API.

- The Python entities returned by the tracking service API are now accessible in ``mlflow.entities`` directly. Where previously you may have used ``mlflow.entities.experiment.Experiment``, you would now just use ``mlflow.entities.Experiment``. The previous version still exists, but is deprecated and may be hidden in a future version.
- REST API endpoint `/ajax-api/2.0/preview/mlflow/artifacts/get` has been moved to `$static_prefix/get-artifact`. This change is co-versioned with the JavaScript, so it should not be noticeable unless you were calling the REST API directly (293, andrewmchen)
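
A minimal sketch of the split described above; the server URI and run ID are hypothetical, and this assumes ``MLFLOW_TRACKING_URI`` is otherwise unset::

    import mlflow
    import mlflow.tracking

    # Basic logging API: top-level functions on the mlflow module.
    mlflow.log_param("alpha", 0.5)

    # Tracking service API: obtained via get_service(), which uses the
    # URI from MLFLOW_TRACKING_URI or set_tracking_uri().
    mlflow.tracking.set_tracking_uri("http://localhost:5000")  # hypothetical
    service = mlflow.tracking.get_service()
    experiments = service.list_experiments()
    run = service.get_run("some-run-uuid")  # hypothetical run ID

    # Entities are now importable directly from mlflow.entities.
    from mlflow.entities import Experiment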

Features:

- [Models] Keras integration: we now support logging Keras models directly in the log_model API, model format, and serving APIs (see the sketch after this list) (280, ToonKBC)
- [Models] PyTorch integration: we now support logging PyTorch models directly in the log_model API, model format, and serving APIs (264, vfdev-5)
- [UI] Scatterplot added to "Compare Runs" view to help compare runs using any two metrics as the axes (268, ToonKBC)
- [Artifacts] SFTP artifactory store added (260, ToonKBC)
- [Sagemaker] Users can specify a custom VPC when deploying SageMaker models (304, dbczumar)
- Pyfunc serialization now includes the Python version, and warns if the major version differs (can be suppressed by using ``load_pyfunc(suppress_warnings=True)``) (230, dbczumar)
- Pyfunc serve/predict will activate the conda environment stored in MLModel. This can be disabled by adding ``--no-conda`` to ``mlflow pyfunc serve`` or ``mlflow pyfunc predict`` (225, 0wu)
- Python SDK formalized in ``mlflow.tracking``. This includes adding SDK methods for ``get_run``, ``list_experiments``, ``get_experiment``, and ``set_terminated``. (299, aarondav)
- ``mlflow run`` can now be run against projects with no ``conda.yaml`` specified. By default, an empty conda environment will be created -- previously, it would just fail. You can still pass ``--no-conda`` to avoid entering a conda environment altogether (218, smurching)
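
A minimal sketch of the Keras integration and the new version warning; ``model`` is assumed to be a trained Keras model, the paths are hypothetical, and ``load_pyfunc`` is assumed to follow its 0.5.0-era signature::

    import mlflow.keras
    import mlflow.pyfunc

    # Log a trained Keras model under the active run's artifacts;
    # "keras-model" is an illustrative artifact path.
    mlflow.keras.log_model(model, "keras-model")

    # Load a saved pyfunc model from a local path, silencing the
    # Python-version mismatch warning added in 0.5.0.
    loaded = mlflow.pyfunc.load_pyfunc("/path/to/model", suppress_warnings=True)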

Bug fixes:

- Fix numpy array serialization for int64 and other related types, allowing pyfunc to return such results (240, arinto)
- Fix DBFS artifactory calling ``log_artifacts`` with binary data (295, aarondav)
- Fix Run Command shown in UI to reproduce a run when the original run is targeted at a subdirectory of a Git repo (294, adrian555)
- Filter out ubiquitous dtype/ufunc warning messages (317, aarondav)
- Minor bug fixes and documentation updates (261, stbof; 279, dmatrix; 313, rbang1; 320, yassineAlouini; 321, tomasatdatabricks; 266, 282, 289, smurching; 267, 265, aarondav; 256, 290, ToonKBC; 273, 263, mateiz; 272, 319, adrian555; 277, aadamson; 283, 296, andrewmchen)

0.4.2
------------------

Breaking changes: None

Features:

- MLflow experiments REST API and ``mlflow experiments create`` now support providing ``--artifact-location`` (232, aarondav)
- [UI] Runs can now be sorted by columns, and a Select All button has been added (227, ToonKBC)
- Databricks File System (DBFS) artifactory support added (226, andrewmchen)
- databricks-cli version upgraded to >= 0.8.0 to support new DatabricksConfigProvider interface (257, aarondav)

Bug fixes:

- MLflow client sends REST API calls using snake_case instead of camelCase field names (232, aarondav)
- Minor bug fixes (243, 242, aarondav; 251, javierluraschi; 245, smurching; 252, mateiz)

0.4.1
------------------

Breaking changes: None

Features:

- [Projects] MLflow will use the conda installation directory given by $MLFLOW_CONDA_HOME, if specified (e.g., running conda commands by invoking "$MLFLOW_CONDA_HOME/bin/conda"), defaulting to running "conda" otherwise; see the sketch after this list (231, smurching)
- [UI] Show GitHub links in the UI for projects run from http(s):// GitHub URLs (235, smurching)
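
An illustrative sketch of the lookup described above (not MLflow's actual code)::

    import os

    # Prefer the conda binary under $MLFLOW_CONDA_HOME; otherwise fall
    # back to whatever "conda" resolves to on the PATH.
    conda_home = os.environ.get("MLFLOW_CONDA_HOME")
    conda_bin = os.path.join(conda_home, "bin", "conda") if conda_home else "conda"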

Bug fixes:

- Fix GCSArtifactRepository issue when calling list_artifacts on a path containing nested directories (233, jakeret)
- Fix Spark model support when saving/loading models to/from distributed filesystems (180, tomasatdatabricks)
- Add missing mlflow.version import to sagemaker module (229, dbczumar)
- Validate metric, parameter and run IDs in file store and Python client (224, mateiz)
- Validate that the tracking URI is a remote URI for Databricks project runs (234, smurching)
- Fix bug where we'd fetch git projects at SSH URIs into a local directory with the same name as the URI, instead of into a temporary directory (236, smurching)

0.4.0
------------------

Breaking changes:

- [Projects] Removed the ``use_temp_cwd`` argument to ``mlflow.projects.run()`` (``--new-dir`` flag in the ``mlflow run`` CLI). Runs of local projects now use the local project directory as their working directory, while Git projects are still fetched into temporary directories; see the sketch after this list (215, smurching)
- [Tracking] GCS artifact storage is now a pluggable dependency (no longer installed by default). To enable GCS support, install ``google-cloud-storage`` on both the client and tracking server via pip (202, smurching)
- [Tracking] Clients running MLflow 0.4.0 and above require a server running MLflow 0.4.0 or above, due to a fix that ensures clients no longer double-serialize JSON into strings when sending data to the server (200, aarondav). However, the MLflow 0.4.0 server remains backwards-compatible with older clients (216, aarondav)
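
A minimal sketch of the new working-directory behavior, with a hypothetical local project path::

    import mlflow.projects

    # As of 0.4.0, a local project runs in its own directory; the old
    # use_temp_cwd argument (and --new-dir CLI flag) no longer exists.
    mlflow.projects.run("./my-project", entry_point="main")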


Features:

- [Examples] Add a more advanced tracking example: using MLflow with PyTorch and TensorBoard (203)
- [Models] H2O model support (170, ToonKBC)
- [Projects] Support for running projects in subdirectories of Git repos (153, juntai-zheng)
- [SageMaker] Support for specifying a compute specification when deploying to SageMaker (185, dbczumar)
- [Server] Added a ``--static-prefix`` option to the MLflow UI and server so the UI can be served from a specified URL prefix (116, andrewmchen)
- [Tracking] Azure blob storage support for artifacts (206, mateiz)
- [Tracking] Add support for Databricks-backed RestStore (200, aarondav)
- [UI] Enable productionizing frontend by adding CSRF support (199, aarondav)
- [UI] Update metric and parameter filters to let users control column order (186, mateiz)

Bug fixes:

- Fixed incompatible file structure returned by GCSArtifactRepository (173, jakeret)
- Fixed metric values going out of order on x axis (204, mateiz)
- Fixed occasional hanging behavior when using the projects.run API (193, smurching)
- Miscellaneous bug and documentation fixes from aarondav, andrewmchen, arinto, jakeret, mateiz, smurching, stbof

0.3.0
------------------

Breaking changes:

- [MLflow Server] Renamed ``--artifact-root`` parameter to ``--default-artifact-root`` in ``mlflow server`` to better reflect its purpose (165, aarondav)

Features:

- Spark MLlib integration: we now support logging SparkML Models directly in the log_model API, model format, and serving APIs (see the sketch after this list) (72, tomasatdatabricks)
- Google Cloud Storage is now supported as an artifact storage root (152, bnekolny)
- Support asynchronous/parallel execution of MLflow runs (82, smurching)
- [SageMaker] Support for deleting, updating applications deployed via SageMaker (145, dbczumar)
- [SageMaker] Pushing the MLflow SageMaker container now includes the MLflow version that it was published with (124, sueann)
- [SageMaker] Simplify parameters to SageMaker deploy by providing sane defaults (126, sueann)
- [UI] One-element metrics are now displayed as a bar chart (118, cryptexis)
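
A minimal sketch of the Spark MLlib integration, assuming ``pipeline_model`` is a fitted ``pyspark.ml`` model logged under an active run::

    import mlflow.spark

    # Log a fitted Spark MLlib model; "spark-model" is an illustrative
    # artifact path under the current run.
    mlflow.spark.log_model(pipeline_model, "spark-model")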

Bug fixes:

- Require gitpython>=2.1.0 (98, aarondav)
- Fixed TensorFlow model loading so that columns match the output names of the exported model (94, smurching)
- Fix SparkUDF when number of columns >= 10 (97, aarondav)
- Miscellaneous bug and documentation fixes from emres, dmatrix, stbof, gsganden, dennyglee, anabranch, mikehuston, andrewmchen, juntai-zheng
