Accelerate


0.3.0

Notebook launcher

After doing all the data preprocessing in your notebook, you can launch your training loop using the new `notebook_launcher` functionality. This is especially useful for Colab or Kaggle with TPUs! [Here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/accelerate/simple_nlp_example.ipynb) is an example on Colab (don't forget to select a TPU runtime).

This launcher also works if you have multiple GPUs on your machine. You just have to pass along `num_processes=your_number_of_gpus` in the call to `notebook_launcher`.
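For reference, here is a minimal sketch of the call, where `training_function` and its arguments stand in for your own code:

```python
from accelerate import notebook_launcher

def training_function(learning_rate, num_epochs):
    # Your usual Accelerate training loop goes here, unchanged.
    ...

# On a machine with 2 GPUs; on a Colab/Kaggle TPU runtime, num_processes
# can be omitted and the launcher detects the number of cores itself.
notebook_launcher(training_function, args=(1e-4, 3), num_processes=2)
```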

- Notebook launcher #44 (sgugger)
- Add notebook/colab example #52 (sgugger)
- Support for multi-GPU in notebook_launcher #56 (sgugger)

Multi-node training

Our multi-node training test setup was flawed, and previous releases of 🤗 Accelerate did not work for multi-node distributed training. This is all fixed now, and we have put more robust tests in place!

- fix cluster.py indent error #35 (JTT94)
- Set all defaults from config in launcher #38 (sgugger)
- Fix port in config creation #50 (sgugger)

Various bug fixes

- Fix typos in examples README #28 (arjunchandra)
- Fix load from config #31 (sgugger)
- docs: minor spelling tweaks #33 (brettkoonce)
- Add `set_to_none` to AcceleratedOptimizer.zero_grad #43 (sgugger)
- fix #53 #54 (Guitaricet)
- update launch.py #58 (Jesse1eung)

0.2.1

Fix a bug that prevented loading a config with `accelerate launch`.

0.2.0

SageMaker launcher

It's now possible to launch your training script on AWS instances using SageMaker via `accelerate launch`.
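A sketch of the workflow; `train.py` is a placeholder for your own script:

```bash
accelerate config          # select AWS (SageMaker) as the compute environment when prompted
accelerate launch train.py # submits the training job to SageMaker
```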

- Launch script on SageMaker #26 (philschmid)
- Add defaults for compute_environment #23 (sgugger)
- Add Configuration setup for SageMaker #17 (philschmid)

Kwargs handlers

To customize how the objects used for mixed precision or distributed training are instantiated, a new API called `KwargsHandler` has been added. It lets the user pass along the kwargs that will be forwarded to those objects if they are used (and it is ignored if they are not used in the current setup, so the same script can still run on any kind of setup).
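For example, here is a minimal sketch that passes `find_unused_parameters=True` through to `torch.nn.parallel.DistributedDataParallel` when DDP is in use:

```python
from accelerate import Accelerator, DistributedDataParallelKwargs

ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
# The handler is silently ignored when the script runs on a single device.
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```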

- Add KwargsHandlers #15 (sgugger)

Pad across processes

Trying to gather tensors that were not of the same size across processes resulted in a process hang. A new method, `Accelerator.pad_across_processes`, has been added to help with that.
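For illustration, a minimal sketch in which each process produces a tensor of a different length along dimension 1 (the shapes here are made up for the example):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
# Each process may end up with a different length along dim 1.
logits = torch.randn(8, 10 + accelerator.process_index, 2)
# Pad to the max length across processes, then gather without hanging.
logits = accelerator.pad_across_processes(logits, dim=1, pad_index=0)
all_logits = accelerator.gather(logits)
```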

- Add utility to pad tensor across processes to max length #19 (sgugger)

Various bug fixes

- added thumbnail #25 (philschmid)
- Cleaner diffs in README and index #22 (sgugger)
- Use proper size #21 (sgugger)
- Alternate diff #20 (sgugger)
- Add YAML config support #16 (sgugger)
- Don't error on non-Tensors objects in move to device #13 (sgugger)
- Add CV example #10 (sgugger)
- Readme clean-up #9 (thomwolf)
- More flexible RNG synchronization #8 (sgugger)
- Fix typos and tighten grammar in README #7 (lewtun)
- Update README.md #6 (voidful)
- Fix TPU training in example #4 (thomwolf)
- Fix example name in README #3 (LysandreJik)

0.1.49

- Nothing changed, but the PyPI upload for the previous release broke due to old credentials, and a new release was the easiest fix.

0.1.48

What's Changed
* Bump package versions May 2024 by bepuca in https://github.com/Chris-hughes10/pytorch-accelerated/pull/60


**Full Changelog**: https://github.com/Chris-hughes10/pytorch-accelerated/compare/v0.1.47...v0.1.48

0.1.47

What's Changed
* Fix an undesired warning when all batches are full for all processes by bepuca in https://github.com/Chris-hughes10/pytorch-accelerated/pull/54
* Fix run config for evaluate run by bepuca in https://github.com/Chris-hughes10/pytorch-accelerated/pull/55


**Full Changelog**: https://github.com/Chris-hughes10/pytorch-accelerated/compare/v0.1.46...v0.1.47
