ClearML

Latest version: v1.16.5



1.1.6

Not secure
Features

- Add `Task.force_store_standalone_script()` to force storing standalone script instead of a Git repository reference (340)
- Add `Logger.set_default_debug_sample_history()` and `Logger.get_default_debug_sample_history()` to allow controlling maximum debug samples programmatically
- Add storing of function argument types as part of the hyperparameters when populating a Task
- Add `status_message` argument to `Task.mark_stopped()`
- Change `HTTP` driver timeout and retry codes (connection timeout will now trigger a retry)

Bug Fixes

- Fix and upgrade the SlackMonitor (533)
- Fix network issues causing Task to stop on status change when no status change has occurred (535)
- Fix Pipeline controller function support for dict as input argument
- Fix uploading the same metric/variant from multiple processes in threading mode - create a unique file per process, since the global counter is not shared between subprocesses
- Fix resource monitoring should only run in the main process when using threaded logging mode
- Fix fork patching so that the signal handler (`at_exit`) will be called on time
- Fix fork (process pool) hangs or drops reports when reports are at the end of the forked function in both threaded and subprocess mode reporting
- Fix Multi-pipeline support
- Fix delete artifacts after upload
- Fix artifact preview has no truth value
- Fix storage cache cleanup does not remove all entries on a silent fail
- Fix always store session cache in `~/.clearml` (regardless of the cache folder)
- Fix `StorageManager.download_folder()` fails on Windows path
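The per-process metric file fix above comes down to a basic property of forked processes: module-level state is copied, not shared. A minimal pure-Python sketch (not ClearML's actual implementation) of why a single global counter cannot coordinate reports across subprocesses:

```python
import multiprocessing as mp

counter = 0  # module-level counter, as a metric/variant reporter might use

def bump(queue):
    # After fork, each child owns a private copy of the module globals,
    # so this increment is invisible to the parent and to sibling processes
    global counter
    counter += 1
    queue.put(counter)

def run_demo(n=3):
    ctx = mp.get_context("fork")  # assumes a POSIX platform
    queue = ctx.Queue()
    procs = [ctx.Process(target=bump, args=(queue,)) for _ in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter, sorted(queue.get() for _ in range(n))

parent_counter, child_counters = run_demo()
# Every child saw counter == 1, and the parent's counter is untouched,
# so a shared output file keyed by that counter would collide
```

Hence the fix: each subprocess writes to its own uniquely named file instead of relying on a shared counter.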

1.1.5

Not secure
Features
- Add support for `jsonargparse` (403)
- Add `HyperParameterOptimizer.get_top_experiments_details()` returning the hyperparameters and metrics of the top performing experiments of an HPO (473)
- Allow overriding initial iteration offset using environment variable (`CLEARML_SET_ITERATION_OFFSET`) or `Task.init(continue_last_task=<offset>)` (496)
- Add better input handling for `clearml-init` in colab (515)
- Add environment variable for default request method (521)
- Add `LocalClearmlJob` as possible option for HPO (525)
- Add convenience functionality to `clearml-data` (526)
- Add support for `vscode-jupyter` (https://github.com/microsoft/vscode-jupyter/pull/8531)
- Improve detection of running reporting subprocess (including zombie state)
- Support controlling S3/Google Cloud Storage `_stream_download_pool_connections` using the `stream_connections` configuration setting in `clearml.conf` (default 128)
- Add warning when losing reporting subprocess
- Add `Model.remove()` to allow removing a model from the model repository
- Add HTTP download timeout control (change default connection timeout to 30 seconds)
- Add initial setup callback to monitoring class
- Add `Task.get_reported_plots()`
- Allow `Monitor.get_query_parameters` to override defaults
- Add support for Google Cloud Storage `pool_connections` and `pool_maxsize` overrides
- Add last worker time to `AutoScaler`
- Add warning when opening an aborted Dataset
- Store multi-pipeline execution plots on the master pipeline Task
- Support pipeline return value stored on pipeline Task
- Add `PipelineDecorator.multi_instance_support`
- Add `PipelineDecorator` to `clearml` and `clearml.automation` namespaces
- Documentation and examples
* Update docstrings (501)
* Add Markdown in pipeline jupyter notebooks (502)
* Update pipeline example (494)
* Add abseil example (509)
* Change README to dark theme (513)
* Update XGBoost example (524)
* Change example name (528)
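The iteration-offset override above can be set without touching code. A hedged sketch of the mechanism (the environment variable name is from this release; the loop only mimics how the offset shifts reported iteration numbers, it is not ClearML internals):

```python
import os

# Set the offset before the experiment starts; Task.init() (clearml >= 1.1.5)
# reads CLEARML_SET_ITERATION_OFFSET, or accepts continue_last_task=<offset>
os.environ["CLEARML_SET_ITERATION_OFFSET"] = "1000"

offset = int(os.environ.get("CLEARML_SET_ITERATION_OFFSET", "0"))

# Reported iterations then continue from the offset instead of from zero
reported_iterations = [offset + step for step in range(1, 4)]
```

This is useful when resuming a run so scalar plots continue from the previous iteration count rather than overwriting earlier points.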

Bug Fixes
- Fix `TriggerScheduler` on Dataset change (491)
- Fix links in Jupyter Notebooks (505)
- Fix `pandas` delta datetime conversion (510)
- Fix `matplotlib` auto-magic detect bar graph series name (518)
- Fix path limitation on storage services (posix, object storage) when storing target artifacts by limiting length of project name (full path) and task name used for object path (516)
- Fix multi-processing context block catching exception
- Fix Google Cloud Storage with no default project causes a crash
- Fix lost reporting subprocess in the main process by switching back to thread mode
- Fix forked `StorageHelper` should use its own `ThreadExecuter`
- Fix local `StorageHelper.delete()` raising an exception on a non-existing file instead of returning `False`
- Fix `StorageHelper` rename partial file throwing errors on multiple access
- Fix resource monitor fails on permission issues (skip over parts)
- Fix reusing Task does not reset it
- Fix support `clearml` PyCharm Plugin 1.0.2 (support partial pycharm git repo sync)
- Fix `Task.reset()` `force` argument ineffective
- Fix Python 3.5 compatibility
- Fix validation error causes infinite loop
- Fix tasks schema prevents sending `null` container parts
- Fix missing `CLEARML_SET_ITERATION_OFFSET` definition
- Fix `Model.get_weights_package()` returns `None` on error
- Fix download progress bar based on `sdk.storage.log.report_download_chunk_size_mb` configuration
- Fix Conda lists the `CudaToolkit` version installed (for the agent to reproduce)
- Fix Jupyter kernel shutdown causing nested atexit callbacks leaving Task in running state
- Fix multi-subprocess can cause Task to hang at close
- Fix TF 2.7 support (get `logdir` with multiple TB writers)

1.1.4

Not secure
Bug Fixes

- Fix duplicate keyword argument (affects `clearml-data` `Dataset.get()`) https://github.com/allegroai/clearml/issues/490, [ClearML Slack Channel \#1](https://clearml.slack.com/archives/CTK20V944/p1635411658177900), [\#2](https://clearml.slack.com/archives/CTK20V944/p1635925696270900), [\#3](https://clearml.slack.com/archives/CTK20V944/p1635323823155700), [\#4](https://clearml.slack.com/archives/CTK20V944/p1635921934264300)
- Fix session raises missing host error when in offline mode (489)
- Fix `Task.get_task()` does not load `output_uri` from stored Task
- Fix `Task.get_models()['input']` returns string instead of `clearml.Model`
- Fix `tf.saved_model.load()` binding for TensorFlow >=2.0
- Fix hyperparams with `None` value converted to empty string causes inferred type to change to `str` in consecutive `Task.connect()` calls
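The last fix above concerns type inference on reconnect. A minimal sketch of the corrected behavior (this helper is hypothetical, written only to illustrate the rule; it is not the ClearML implementation):

```python
def infer_param_type(value, backend_type=None):
    """Sketch of the fixed Task.connect() behavior: a None value is no
    longer converted to an empty string (which made the inferred type
    'str'); instead the type already stored on the backend is kept."""
    if value is None:
        return backend_type  # preserve the previously stored type, if any
    return type(value).__name__

# First connect stored the param as an int; a later connect with None
# keeps that type instead of silently switching to 'str'
preserved = infer_param_type(None, backend_type="int")
fresh = infer_param_type(0.5)
```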

1.1.3

Not secure
Features

- Add support for MegEngine with examples (455)
- Add `TaskTypes` to main namespace (453)
- Add `LogUniformParameterRange` for hyperparameter optimization with Optuna (462)
- Add joblib (equivalent to scikit) to `Task.init(auto_connect_frameworks)` argument
- Log environment variables starting with `*` in `environ_bind.py` (459)
- Pipeline
* Add eager decorated pipeline execution
* Support pipeline monitoring for scalars/models/artifacts
* Add `PipelineController.upload_model()`
* Add `PipelineController.add_step(configuration_overrides)` argument allowing to override Task configuration objects
* Change `PipelineController.start_locally()` default `run_pipeline_steps_locally=False`
* Add `PipelineController.stop(mark_failed, mark_aborted)` arguments
* Add `PipelineController.run_locally` decorator
* Add `PipelineController.continue_on_fail` property
* Add `PipelineController.__init__(abort_on_failure)` argument
* Add `ClearmlJob` state cache (refresh every second)
- Datasets
* Add `clearml-data` multi-chunk support
* Change `clearml-data` default chunk size to 512MB
* Change `Dataset.create()` now automatically reverts to using the current Task if no project/name provided
- Add `Optimizer.get_top_experiments_id_metrics_pair()` for top performing experiments
- Add support for setting default value to auto connected argparse arguments
- Add `Task.get_script()` and `Task.set_script()` for getting and setting task's script properties for execution
- Add `Task.mark_completed()` `force` and `status_message` arguments
- Add `Task.stopped()` `reason` argument
- Add `Task.query_tasks()`, `Task.get_task()` and `Task.get_tasks()` `tags` argument
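The multi-chunk dataset support above amounts to cutting a dataset archive into fixed-size pieces before upload. A tiny illustrative helper (hypothetical, not the `clearml-data` implementation; a 4-byte chunk size stands in for the real 512MB default):

```python
def split_into_chunks(payload: bytes, chunk_size: int):
    """Cut a payload into fixed-size chunks, mirroring the idea behind
    clearml-data multi-chunk uploads (default chunk size 512MB as of 1.1.3)."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

chunks = split_into_chunks(b"x" * 10, chunk_size=4)
chunk_sizes = [len(c) for c in chunks]  # last chunk holds the remainder
```

Chunking lets large datasets be uploaded and downloaded piecewise, so a partial fetch does not require pulling the whole archive.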

Bug Fixes

- Fix PyJWT resiliency support
- Fix xgb train overload (456)
- Fix `http://` throws `OSError` in Windows by using `pathlib2` instead of `os` (463)
- Fix local diff should include staged commits, otherwise applying git diff fails (457)
- Fix `task.upload_artifact` non-standard dictionary will now revert to `pickle` (452)
- Fix `S3BucketConfig.is_valid()` for EC2 environments with `use_credentials_chain` (478)
- Fix audio classifier example when training with a custom dataset (484)
- Fix `clearml-task` diff was corrupted by Windows drive letter and separator (483)
- Fix TQDM "line cleanup" not using `CR` but rather arrow-up escape sequence (181)
- Fix `task.connect(dict)` value casting - if `None` is the default value, use backend stored type
- Fix Jupyter notebook should always set Task as completed/stopped, never failed (exceptions are caught in interactive session)
- Fix Pipeline support
- Fix `LocalClearmlJob` setting failed status
- Fix pipeline stopping all running steps
- Fix nested pipeline component parent point to pipeline Task
- Fix `PipelineController.start()` should not kill the process when done
- Fix pipeline failing to create Step Task should cause the pipeline to be marked failed
- Fix nested pipeline components missing pipeline tags
- Fix images reported past the history size limit were not sent if the report frequency was too high
- Fix git detectors missing git repository without origin
- Fix support for upload `LazyEvalWrapper` artifacts
- Fix duplicate task dataset tags
- Fix FileLock create target folder
- Fix crash inside forked subprocess might leave SafeQueue in a locked state, causing `task.close()` to hang
- Fix PyTorch distributed example `TimeoutSocket` issue in Windows
- Fix broken `Dataset.finalize()`
- Fix Python 3.5 compatibility
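The "FileLock create target folder" fix above follows a common pattern: ensure the parent directory exists before creating the lock file. A small sketch of that pattern (the helper name is hypothetical; it only mirrors the idea of the fix):

```python
import os
import tempfile

def create_lock_file(lock_path):
    # Make sure the parent folder exists before touching the lock file,
    # otherwise open() fails with FileNotFoundError on a fresh cache dir
    os.makedirs(os.path.dirname(lock_path), exist_ok=True)
    with open(lock_path, "a"):
        pass
    return os.path.exists(lock_path)

tmp_root = tempfile.mkdtemp()
lock_created = create_lock_file(os.path.join(tmp_root, "locks", "task.lock"))
```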

1.1.2

Not secure
Bug Fixes

- Fix PyJWT issue (limit dependency to `<2.2.0`)

1.1.1

Not secure
Bug Fixes

- Fix `Logger.report_image()` throws warning
- Fix TensorBoard `add_image()` not being reported


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.