Horovod

Latest version: v0.28.1


0.22.0

Not secure
Added

- Added `pytorch_lightning` Spark estimator, which enables training `pytorch_lightning` models. ([2713](https://github.com/horovod/horovod/pull/2713))
- Added NVTX tracing hooks for profiling with Nsight Systems. ([2723](https://github.com/horovod/horovod/pull/2723))
- Added a generic `num_workers` API for ``RayExecutor`` ([2870](https://github.com/horovod/horovod/pull/2870))
- Added support for Ray Client without code changes. ([2882](https://github.com/horovod/horovod/pull/2882))
- Added in-memory cache option for the Keras Estimator. ([2896](https://github.com/horovod/horovod/pull/2896))
- Added FP16 support for GPU tensors in MXNet. ([2915](https://github.com/horovod/horovod/pull/2915))
- Added response caching for allgather operations. ([2872](https://github.com/horovod/horovod/pull/2872))
- Estimator: added Petastorm `reader_pool_type` to the constructor. ([2903](https://github.com/horovod/horovod/pull/2903))

Changed

- Changed `alltoall` to return the received splits as a second return value if non-uniform splits are sent. ([2631](https://github.com/horovod/horovod/pull/2631))
- Changed ``RayExecutor`` to use [Ray Placement Groups](https://docs.ray.io/en/master/placement-group.html) for worker colocation. ([2824](https://github.com/horovod/horovod/pull/2824))
- Changed in-memory dataloader usage for the Torch Estimator to match the Petastorm v0.11.0 release. ([2896](https://github.com/horovod/horovod/pull/2896))
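
The non-uniform `alltoall` change above can be illustrated without Horovod: each rank `r` sends `splits[r][i]` elements to rank `i`, so the "received splits" that `alltoall` now returns as a second value correspond to one column of the global splits matrix. A minimal plain-Python sketch of that semantics (the function name is illustrative, not Horovod API):

```python
# Illustrative sketch of non-uniform alltoall split semantics (plain Python,
# not the Horovod API): splits[r][i] = number of elements rank r sends to rank i.
def received_splits(splits, rank):
    """Return the splits each peer sends to `rank` -- conceptually the second
    return value hvd.alltoall() yields when non-uniform splits are used."""
    return [row[rank] for row in splits]

splits = [
    [1, 2],  # rank 0 sends 1 element to rank 0, 2 elements to rank 1
    [3, 4],  # rank 1 sends 3 elements to rank 0, 4 elements to rank 1
]
print(received_splits(splits, 0))  # [1, 3]
print(received_splits(splits, 1))  # [2, 4]
```

The received splits let each rank re-partition the flat received tensor without a separate metadata exchange.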

Fixed

- Changed RayExecutor to use Ray node ID to enable multi-container:single-host setups. ([2883](https://github.com/horovod/horovod/pull/2883))
- Fixed sparse gradient aggregation in TF1 Keras. ([2879](https://github.com/horovod/horovod/pull/2879))
- Respect `global_step` parameter for LegacyOptimizers when aggregating gradients. ([2879](https://github.com/horovod/horovod/pull/2879))
- Fixed compatibility with PyTorch 1.9.0. ([2829](https://github.com/horovod/horovod/pull/2829))

0.21.3

Not secure
Added

- Added `groups` parameter in `DistributedOptimizer` for custom allreduce groups. ([2523](https://github.com/horovod/horovod/pull/2523))
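
The `groups` parameter accepts either an explicit list of parameter groups or an integer group count. A minimal plain-Python sketch of how an integer value might be normalized into explicit groups, assuming round-robin bucketing (the actual fusion strategy is internal to Horovod):

```python
# Illustrative sketch (not Horovod internals): normalize the `groups` argument
# of DistributedOptimizer into explicit allreduce groups. The round-robin
# bucketing for the integer form is an assumption for this sketch.
def normalize_groups(params, groups):
    if isinstance(groups, int):
        buckets = [[] for _ in range(groups)]
        for i, p in enumerate(params):
            buckets[i % groups].append(p)  # spread params across N groups
        return buckets
    return groups  # already an explicit list of parameter groups

print(normalize_groups(["w1", "w2", "b1", "b2"], 2))
# [['w1', 'b1'], ['w2', 'b2']]
```

Grouping bounds how many allreduce ops are fused together, trading fusion efficiency against determinism.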

Removed

- Removed `num_groups` parameter in `DistributedOptimizer`, replaced with `groups`. ([2523](https://github.com/horovod/horovod/pull/2523))

Fixed

- Fixed worker desynchronization deadlock issue in TensorFlow 2.4. ([2647](https://github.com/horovod/horovod/pull/2647))
- Deduped Keras `LearningRateWarmupCallback` log after gradual learning rate warmup. ([2661](https://github.com/horovod/horovod/pull/2661))

0.21.2

Not secure
Added

- Added support for Intel(R) MPI in horovodrun. ([2374](https://github.com/horovod/horovod/pull/2374))
- Added support for callbacks in the Ray Elastic Executor. ([2639](https://github.com/horovod/horovod/pull/2639))
- Added forwarding of stdout/stderr captured to driver over Gloo. ([2646](https://github.com/horovod/horovod/pull/2646))

Fixed

- Fixed broadcast_optimizer_state to handle NoneType params for PyTorch 1.8. ([2624](https://github.com/horovod/horovod/pull/2624))
- Fixed `local_rank` support for Ray. ([2596](https://github.com/horovod/horovod/pull/2596))
- Fixed DL estimators to obtain the output df schema without sampling the input. ([2611](https://github.com/horovod/horovod/pull/2611))
- Fixed wrong default for the `horovod.tensorflow.keras.allreduce` `average` argument. ([2627](https://github.com/horovod/horovod/pull/2627))
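
The `average` default matters because it changes the reduction result. A plain-Python sketch of the semantics (not Horovod's implementation, which reduces across workers):

```python
# Illustrative sketch of allreduce reduction semantics (plain Python, not
# Horovod): with average=True the summed values are divided by the number of
# participants; with average=False the raw sum is returned.
def allreduce(values, average=True):
    total = sum(values)
    return total / len(values) if average else total

print(allreduce([2.0, 4.0]))         # 3.0 (averaged)
print(allreduce([2.0, 4.0], False))  # 6.0 (summed)
```

A wrong default silently scales gradients by the worker count, which is why this fix was needed.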

0.21.1

Not secure
Added

- Added in-memory dataset caching param to `TorchEstimator`. ([2434](https://github.com/horovod/horovod/pull/2434))
- Added `val_batch_size` param to the Estimator API. ([2505](https://github.com/horovod/horovod/pull/2505))
- Added support for TorchScript modules when using `TorchEstimator`. ([2494](https://github.com/horovod/horovod/pull/2494))

Changed

- Migrated to oneCCL aligned with oneAPI specification v1.0. ([2513](https://github.com/horovod/horovod/pull/2513))
- Added knob to set cache hint for oneCCL allreduce. ([2560](https://github.com/horovod/horovod/pull/2560))
- Renamed `horovodrun` arg `--ccl-bgt-affinity` to `--thread-affinity`. ([2562](https://github.com/horovod/horovod/pull/2562))
- Changed default build parallelism from `-j8` to `-j1` to address potential race condition. ([2572](https://github.com/horovod/horovod/pull/2572))

Fixed

- Fixed building Horovod for ROCm PyTorch with newer hipify script. ([2360](https://github.com/horovod/horovod/pull/2360))
- Fixed "Executable class" support for Ray. ([2510](https://github.com/horovod/horovod/pull/2510))
- Fixed TorchEstimator returning model without switching to eval mode. ([2517](https://github.com/horovod/horovod/pull/2517))
- Removed SSH reliance for Ray elastic training. ([2528](https://github.com/horovod/horovod/pull/2528))
- Fixed error handling for changing framework without reinstalling horovod. ([2529](https://github.com/horovod/horovod/pull/2529))
- Fixed "Intermediate path does not exist" error with DBFSLocalStore. ([2526](https://github.com/horovod/horovod/pull/2526))
- Avoided synchronization when the worker set only shrinks in elastic mode. ([2514](https://github.com/horovod/horovod/pull/2514))
- Fixed Ray resource test. ([2575](https://github.com/horovod/horovod/pull/2575))
- Fixed usage of env variable `HOROVOD_GLOO_TIMEOUT_SECONDS` with `horovodrun`. ([2571](https://github.com/horovod/horovod/pull/2571))
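
The `HOROVOD_GLOO_TIMEOUT_SECONDS` fix concerns honoring the variable set in the launching shell. A minimal sketch of reading such a variable with a fallback (the 30-second default here is an assumption for the sketch, not a documented Horovod value):

```python
import os

# Illustrative sketch (not Horovod's code): resolve the Gloo timeout from the
# environment, falling back to an assumed default when unset.
def gloo_timeout(env=os.environ):
    return int(env.get("HOROVOD_GLOO_TIMEOUT_SECONDS", "30"))

print(gloo_timeout({"HOROVOD_GLOO_TIMEOUT_SECONDS": "120"}))  # 120
print(gloo_timeout({}))                                       # 30
```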

0.21.0

Not secure
Added

- Added support for backward_passes_per_step > 1 for TF Keras graph mode. ([2346](https://github.com/horovod/horovod/pull/2346))
- Added support for backward_passes_per_step > 1 for TF Keras eager execution. ([2371](https://github.com/horovod/horovod/pull/2371))
- Added support for backward_passes_per_step > 1 for TF LegacyOptimizer in graph mode. ([2401](https://github.com/horovod/horovod/pull/2401))
- Added grouped allreduce to enable more efficient tensor fusion and deterministic training. ([2453](https://github.com/horovod/horovod/pull/2453))
- Added support for specifying `op` and `compression` in `horovod.tensorflow.keras.allreduce()`. ([2423](https://github.com/horovod/horovod/pull/2423))
- Added support for a batched D2D memcpy kernel on GPU. ([2435](https://github.com/horovod/horovod/pull/2435))
- Added schema inference in Spark Estimator without sampling. ([2373](https://github.com/horovod/horovod/pull/2373))
- Added `Store.create("dbfs:/")` mapping to `DBFSLocalStore("/dbfs/...")`. ([2376](https://github.com/horovod/horovod/pull/2376))
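
The `backward_passes_per_step` entries above are all about gradient accumulation: local gradients are accumulated over N backward passes and communicated once per optimizer step, reducing allreduce frequency. A plain-Python sketch of the accumulation semantics (not Horovod internals):

```python
# Illustrative gradient-accumulation sketch (plain Python, not Horovod
# internals): with backward_passes_per_step = N, per-parameter gradients from
# N backward passes are summed before the single allreduce/apply per step.
def accumulated_grads(per_pass_grads, backward_passes_per_step):
    """Sum each parameter's gradients over one accumulation window."""
    assert len(per_pass_grads) == backward_passes_per_step
    num_params = len(per_pass_grads[0])
    return [sum(pass_grads[i] for pass_grads in per_pass_grads)
            for i in range(num_params)]

# Two backward passes, two parameters:
print(accumulated_grads([[0.5, 1.0], [1.5, 3.0]], 2))  # [2.0, 4.0]
```

This trades a larger effective batch size for fewer communication rounds per step.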

Changed

- Changed Keras callbacks to require parameter `initial_lr` of `LearningRateScheduleCallback` and `LearningRateWarmupCallback`. ([2459](https://github.com/horovod/horovod/pull/2459))
- Changed default cycle time from 5ms to 1ms and fusion threshold from 64MB to 128MB. ([2468](https://github.com/horovod/horovod/pull/2468))
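
The now-required `initial_lr` is the target learning rate the warmup callback ramps up to. A simplified linear per-epoch warmup sketch (an assumption; Horovod's callback ramps per batch, following the Goyal et al. warmup scheme):

```python
# Simplified linear warmup sketch (plain Python, an assumption -- not the
# exact schedule of LearningRateWarmupCallback): scale the learning rate up
# to initial_lr over warmup_epochs, then hold it there.
def warmup_lr(initial_lr, warmup_epochs, epoch):
    frac = min((epoch + 1) / warmup_epochs, 1.0)
    return initial_lr * frac

print(warmup_lr(0.4, 4, 0))  # 0.1 (first warmup epoch)
print(warmup_lr(0.4, 4, 3))  # 0.4 (warmup complete)
```

Requiring `initial_lr` explicitly avoids silently inheriting a stale learning rate from the wrapped optimizer.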

Fixed

- Fixed support for TensorFlow v2.4.0. ([2381](https://github.com/horovod/horovod/pull/2381))
- Fixed averaging using the CUDA half2 implementation for one-element half buffers. ([2375](https://github.com/horovod/horovod/pull/2375))
- Fixed `HOROVOD_THREAD_AFFINITY` when using oneCCL. ([2350](https://github.com/horovod/horovod/pull/2350))
- Added timeout to SSH check in horovodrun to prevent hanging. ([2448](https://github.com/horovod/horovod/pull/2448))
- Added `HOROVOD_GLOO_TIMEOUT_SECONDS` value to error messages. ([2436](https://github.com/horovod/horovod/pull/2436))
- Fixed race condition in dynamic timeline API. ([2341](https://github.com/horovod/horovod/pull/2341))
- Fixed `--log-hide-timestamp` to apply to driver logs with Gloo. ([2388](https://github.com/horovod/horovod/pull/2388))
- Fixed the search order of Eigen and Flatbuffers paths. ([2473](https://github.com/horovod/horovod/pull/2473))
- Fixed type checks in `TorchEstimator` to correctly use `isinstance()`. ([2480](https://github.com/horovod/horovod/pull/2480))

0.20.3

Not secure
Added

- Added Elastic Ray integration. ([2291](https://github.com/horovod/horovod/pull/2291))

Changed

- Removed dependency on SSH access for Ray. ([2275](https://github.com/horovod/horovod/pull/2275))
