Zenml

Latest version: v0.70.0

Page 20 of 22

0.3.8

Not secure

0.3.7.1

0.3.7.1rc5

0.3.7

Not secure
For those upgrading from an older version of ZenML, please delete your old `pipelines` dir and `.zenml` folders and start afresh with `zenml init`.

If you are only working locally, this is as simple as:


```bash
cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/
```


Then upgrade and re-initialize ZenML:


```bash
pip install --upgrade zenml
cd zenml_enabled_repo
zenml init
```


New Features
* The inner workings of the `BaseDatasource` have been modified along with the concrete implementations. There is now no relation between a `DataStep` and a `Datasource`: a `Datasource` holds all the logic to version and track itself via the new `commit` paradigm.

* Introduced a new interface for datasources: the `process` method, which is responsible for ingesting data and writing TFRecords to be consumed by later steps.

* Datasource versions (snapshots) can be accessed directly via the `commits` paradigm: Every commit is a new version of data.

* Added `JSONDatasource` and `TFRecordsDatasource`.
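The commit paradigm described above can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the actual ZenML `BaseDatasource` API: it only shows the idea that every commit snapshots the data and becomes an addressable version.

```python
import hashlib
import json


class VersionedDatasource:
    """Hypothetical sketch of a commit-style datasource: each commit
    snapshots the current data and is addressable by its commit id."""

    def __init__(self, name):
        self.name = name
        self.commits = {}   # commit_id -> snapshot of the data
        self._order = []    # commit ids in creation order

    def commit(self, data):
        """Snapshot `data` and return a deterministic commit id."""
        payload = json.dumps(data, sort_keys=True).encode()
        commit_id = hashlib.sha1(payload).hexdigest()[:8]
        self.commits[commit_id] = data
        self._order.append(commit_id)
        return commit_id

    def latest(self):
        """Return the data snapshot of the most recent commit."""
        return self.commits[self._order[-1]]


ds = VersionedDatasource("my_csv")
c1 = ds.commit([{"x": 1}])
c2 = ds.commit([{"x": 1}, {"x": 2}])  # a second commit = a new version
```

Each call to `commit` leaves the previous snapshot untouched, so older versions stay accessible via `ds.commits[c1]` even after new data arrives.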

Bug Fixes + Refactor
A big thanks to our new contributor aak7912 for the help in this release with issue 71 and PR 75.

* Added an example for [regression](https://github.com/maiot-io/zenml/tree/main/examples/regression).
* `compare_training_runs()` now takes an optional `datasource` parameter to filter by datasource.
* `Trainer` interface refined to focus on `run_fn` rather than other helper functions.
* New docs released with a streamlined vision and coherent storyline: https://docs.zenml.io
* Got rid of unnecessary Torch dependency with base ZenML version.

0.3.6

Not secure
New Features
* The inner workings of the `BaseTrainerStep`, the `BaseEvaluatorStep` and the `BasePreprocesserStep` have been modified along with their respective components to work with the new `split_mapping`. Users can now define arbitrary splits (not just train/eval); for example, a `train/eval/test` split is possible.

* Within an instance of a `TrainerStep`, the user has access to `input_patterns` and `output_patterns`, which provide the required URIs with respect to their splits for the input and output (test results) examples.

* The built-in trainers are modified to work with the new changes.
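The arbitrary-split idea above can be sketched in a few lines. This is a hypothetical illustration of the concept behind the new `split_mapping`, not ZenML's actual interface: split names map to fractions, and the data is partitioned accordingly.

```python
def apply_split_mapping(examples, split_map):
    """Partition `examples` into arbitrary named splits.

    `split_map` maps split names to fractions summing to 1.0, e.g.
    {"train": 0.7, "eval": 0.2, "test": 0.1}.
    """
    assert abs(sum(split_map.values()) - 1.0) < 1e-9
    splits, start, n = {}, 0, len(examples)
    for i, (name, frac) in enumerate(split_map.items()):
        # The last split takes whatever remains, to avoid rounding gaps.
        end = n if i == len(split_map) - 1 else start + int(n * frac)
        splits[name] = examples[start:end]
        start = end
    return splits


data = list(range(10))
splits = apply_split_mapping(data, {"train": 0.7, "eval": 0.2, "test": 0.1})
```

Because the split names are just dictionary keys, nothing restricts users to the classic train/eval pair.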

Bug Fixes + Refactor
A big thanks to our new super supporter zyfzjsc988 for most of the feedback that led to bug fixes and enhancements in this release:

* 63: Now one can specify which ports ZenML opens for its add-on applications.
* 64: Now there is a way to list integrations with the following code:

```python
from zenml.utils.requirements_utils import list_integrations

list_integrations()
```

* Fixed 61: `view_anomalies()` breaking in the quickstart.
* Analytics is now `opt-in` by default, to get rid of the unnecessary prompt at `zenml init`. Users can still freely opt out by using the CLI:


```bash
zenml config analytics opt-out
```


Again, the telemetry data is fully anonymized and used only to improve the product. Read more [here](https://docs.zenml.io/misc/usage-analytics.html).

0.3.5

Not secure
This release finally brings model-agnostic automatic evaluation to ZenML! Now you can easily use [TFMA](https://github.com/tensorflow/model-analysis) with any model type to produce evaluation visualizations. This means you can now use TFMA with PyTorch or scikit-learn, a big win for automated sliced evaluation! It also introduces new terminology to differentiate between features, raw features, labels and predictions, in addition to solving a few big bugs in the `examples` directory. Read more below.

As has been the case in the last few releases, this release is yet another **breaking upgrade**.

For those upgrading from an older version of ZenML, please delete your old `pipelines` dir and `.zenml` folders and start afresh with `zenml init`.

If you are only working locally, this is as simple as:


```bash
cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/
```


Then upgrade and re-initialize ZenML:


```bash
pip install --upgrade zenml
cd zenml_enabled_repo
zenml init
```


New Features
* Added a new interface to the trainer step called [`test_fn`](https://github.com/maiot-io/zenml/blob/b333b0bba7602e40a49168cc21c6405294386262/zenml/steps/trainer/base_trainer.py#L121), which is used to produce model predictions and save them as test results.

* Implemented a new evaluator step called [`AgnosticEvaluator`](https://github.com/maiot-io/zenml/blob/b333b0bba7602e40a49168cc21c6405294386262/zenml/steps/evaluator/agnostic_evaluator.py), which is designed to work regardless of the model type, as long as you run the `test_fn` in your trainer step.

* The first two changes allow torch trainer steps to be followed by an agnostic evaluator step, see the example [here](https://github.com/maiot-io/zenml/blob/main/examples/pytorch/run.py).

* Proposed a new naming scheme, which is now integrated into the built-in steps, in order to make it easier to handle feature/label names.

* Modified the [`TorchFeedForwardTrainer`](https://github.com/maiot-io/zenml/blob/b333b0bba7602e40a49168cc21c6405294386262/zenml/steps/trainer/pytorch_trainers/torch_ff_trainer.py) to showcase how to use TensorBoard in conjunction with PyTorch


Bug Fixes + Refactor
* Refactored how ZenML treats relative imports for custom steps. Previously, absolute imports were required:

```python
from examples.scikit.step.trainer import MyScikitTrainer
```

Now one can also do the following:

```python
from step.trainer import MyScikitTrainer
```

ZenML automatically figures out the absolute path of the module based on the root of the directory.
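One way such root-relative resolution can work is sketched below. This is a hypothetical illustration, not ZenML's implementation: it walks upward from the step's directory until it finds the repo root (marked here by the `.zenml` folder that `zenml init` creates) and puts that root on `sys.path`, after which `from step.trainer import ...` resolves against the repo root.

```python
import os
import sys


def add_repo_root_to_path(start_dir, marker=".zenml"):
    """Walk upward from `start_dir` until a directory containing
    `marker` is found, then prepend it to sys.path and return it."""
    current = os.path.abspath(start_dir)
    while True:
        if os.path.isdir(os.path.join(current, marker)):
            if current not in sys.path:
                sys.path.insert(0, current)
            return current
        parent = os.path.dirname(current)
        if parent == current:  # reached the filesystem root without a match
            raise FileNotFoundError(f"no {marker} directory found above {start_dir}")
        current = parent
```

The design choice here is to anchor imports at a well-known marker directory rather than at the current working directory, so the same short import works no matter where the pipeline is launched from.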

* Updated the [Scikit Example](https://github.com/maiot-io/zenml/tree/main/examples/scikit), [PyTorch Lightning Example](https://github.com/maiot-io/zenml/tree/main/examples/pytorch_lightning) and [GAN Example](https://github.com/maiot-io/zenml/tree/main/examples/gan) accordingly. Now they should work according to their READMEs.

Big shout out to SaraKingGH in issue 55 for raising the above issues!


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.