Accelerate


0.17.0

PyTorch 2.0 support

This release fully supports the upcoming PyTorch 2.0 release. You can choose whether to use `torch.compile` and customize its options either through `accelerate config` or via a `TorchDynamoPlugin`.

* update support for torch dynamo compile by pacman100 in 1150
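
For illustration, the dynamo options end up in your Accelerate config file roughly like this (a sketch only; exact key names can vary between versions, so check what `accelerate config` writes on your install):

```yaml
# fragment of ~/.cache/huggingface/accelerate/default_config.yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
dynamo_config:
  dynamo_backend: INDUCTOR
  dynamo_mode: default
  dynamo_use_dynamic: false
  dynamo_use_fullgraph: false
```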

Process Control Enhancements

This release adds a new `PartialState`, which contains most of the capabilities of the `AcceleratorState` but is designed to be used directly for any process-control mechanisms around it. With this, users also no longer need to guard code with `if accelerator.state.is_main_process` when utilizing classes such as the `Tracking` API, as these now automatically run only on the main process by default.

* Refactor process executors to be in AcceleratorState by muellerzr in 1039

TPU Pod Support (Experimental)

Launching from TPU pods is now supported; please see [this issue](https://github.com/huggingface/accelerate/issues/501#issuecomment-1424614540) for more information.

* Introduce TPU Pod launching to `accelerate launch` by muellerzr in 1049

FP8 mixed precision training (Experimental)

This release adds experimental support for FP8 mixed precision training, which requires the [transformer-engine](https://github.com/NVIDIA/TransformerEngine) library as well as a Hopper GPU (or higher).

* Fp8 integration by sgugger in 1086
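
In an Accelerate config file this is a single entry (a sketch; `mixed_precision` is the same key used for fp16/bf16, and `fp8` additionally requires transformer-engine and a Hopper-class GPU at runtime):

```yaml
# fragment of an accelerate config file
mixed_precision: fp8
```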

What's new?

* v0.17.0.dev0 by sgugger (direct commit on main)
* Deepspeed param check by dhar174 in 1015
* enabling `mps` device by default and removing related config by pacman100 in 1030
* fix: links to gradient synchronization by prassanna-ravishankar in 1035
* do not scale gradient in bf16 mode by kashif in 1036
* Pass keywords arguments of backward function deeper to DeepSpeed by DistinctVision in 1037
* Add daily slack notifier for nightlies by muellerzr in 1042
* Make sure direct parameters are properly set on device by sgugger in 1043
* Add `cpu_offload_with_hook` by sgugger in 1045
* Update quality tools to 2023 by sgugger in 1046
* Load tensors directly on device by sgugger in 1028
* Fix cpu_offload_with_hook code snippet by pcuenca in 1047
* Use create_task by muellerzr in 1052
* Fix args by adding in the defaults by muellerzr in 1053
* deepspeed `hidden_size` auto value default fixes by pacman100 in 1060
* Introduce PartialState by muellerzr in 1055
* Flag for deprecation by muellerzr in 1061
* Try with this by muellerzr in 1062
* Update integrations by muellerzr in 1063
* Swap utils over to use PartialState by muellerzr in 1065
* update fsdp docs and removing deepspeed version pinning by pacman100 in 1059
* Fix/implement process-execution decorators on the Accelerator by muellerzr in 1070
* Refactor state and make `PartialState` first class citizen by muellerzr in 1071
* Add error if passed --config_file does not exist by muellerzr in 1074
* SageMaker image_uri is now optional by <NOT FOUND> in 1077
* Allow custom SageMaker Estimator arguments by <NOT FOUND> in 1080
* Fix tpu_cluster arg by muellerzr in 1081
* Update complete_cv_example.py by fcossio in 1082
* Added SageMaker local mode config section by <NOT FOUND> in 1084
* Fix config by muellerzr in 1090
* adds missing "lfs" in pull by CSchoel in 1091
* add multi_cpu support to reduce by alex-hh in 1094
* Update README.md by BM-K in 1100
* Tracker rewrite and lazy process checker by muellerzr in 1079
* Update performance.mdx by fcossio in 1107
* Attempt to unwrap tracker. by pcuenca in 1109
* TensorBoardTracker: wrong arg def by stas00 in 1111
* Actually raise if exception by muellerzr in 1124
* Add test for ops and fix reduce by muellerzr in 1122
* Deep merge SageMaker `additional_args`, allowing more flexible configuration and `env` variable support by dbpprt in 1113
* Move dynamo.optimize to the end of model preparation by ymwangg in 1128
* Refactor `launch` for greater extensibility by Yard1 in 1123
* [Big model loading] Correct GPU only loading by patrickvonplaten in 1121
* Add tee and role to launch by muellerzr in 1132
* Expand warning and grab all GPUs available by default by muellerzr in 1134
* Fix multinode with GPU ids when each node has 1 by muellerzr in 1127
* deepspeed dataloader prepare fix by pacman100 in 1126
* fix ds dist init kwargs issue by pacman100 in 1138
* fix lr scheduler issue by pacman100 in 1140
* fsdp bf16 enable autocast by pacman100 in 1125
* Fix notebook_launcher by muellerzr in 1141
* fix partial state by pacman100 in 1144
* FSDP enhancements and fixes by pacman100 in 1145
* Fixed typos in notebook by SamuelLarkin in 1146
* Include a note in the gradient synchronization docs on "what can go wrong" and show the timings by muellerzr in 1153
* [Safetensors] Relax missing metadata constraint by patrickvonplaten in 1151
* Solve arrow keys being environment dependant for accelerate config by p1atdev (direct commit on main)
* Load custom state to cpu by Guangxuan-Xiao in 1156
* :memo: add a couple more trackers to the docs by nateraw in 1158
* Let GradientState know active dataloaders and reset the remainder by muellerzr in 1162
* Attempt to fix import error when PyTorch is build without `torch.distributed` module by mfuntowicz in 1108
* [`Accelerator`] Fix issue with 8bit models by younesbelkada in 1155
* Document skip_first_batches in the checkpoint usage guides by muellerzr in 1164
* Fix what files get deleted through `total_limit` by muellerzr in 1165
* Remove outdated command directions and use in tests by muellerzr in 1166

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* Yard1
* Refactor `launch` for greater extensibility (1123)

0.16.0

New code exploration doc tool

A new interactive tool has been introduced in the documentation to help users quickly learn how to utilize features of the framework before diving into the full details, as shown below:
![image](https://user-images.githubusercontent.com/7831895/215847833-3434ec6f-1f8d-41b8-9368-48f76fb7783d.png)

Not only does it provide a code diff, but it also includes an explanation and links to more resources the user should check out to learn more:

![image](https://user-images.githubusercontent.com/7831895/215847922-5879f5d4-fa11-44de-9550-1e986da57bcb.png)

Try it out today in the [docs](https://huggingface.co/docs/accelerate/en/usage_guides/explore).

* Add in code exploration tool to docs by muellerzr in 1014
* Light vs dark theme based on pick by muellerzr in 1023

Skip batches in dataloaders

When resuming training, you can more efficiently skip batches in your dataloader with the new `skip_first_batches` function (also available as a method on your `Accelerator`).

* Efficiently skip batches in a dataloader by sgugger in 1002

DeepSpeed integration enhancements

A new ZeRO-3 init context manager is added to provide granular control to users in situations involving nested/multiple models. DeepSpeed config file support has also been refactored to remove ambiguity between it and the Accelerate config.

This release also adds support for `auto` entries in the DeepSpeed config file, which are filled in via the `accelerate launch` command. Try it out today by referring to the section [Things to note when using DeepSpeed Config File](https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#deepspeed-config-file)

* ds zero-3 init context manager by pacman100 in 932
* raise error for duplicate accelerate config values when using `deepspeed_config_file` by pacman100 in 941
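
As an illustration, a DeepSpeed config file can leave values as `auto` so that `accelerate launch` fills them in from your training script and Accelerate config (a sketch; see the linked docs for the authoritative list of fields that support `auto`):

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "zero_optimization": {
    "stage": 3,
    "stage3_gather_16bit_weights_on_model_save": "auto"
  },
  "bf16": {
    "enabled": "auto"
  }
}
```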


What's new?

* Flag to silence subprocess.CalledProcessError in launch by Cyberes in 902
* Add usage examples by muellerzr in 904
* Expand sanity checks by muellerzr in 905
* Fix conditional by muellerzr in 907
* fix issue that amp bf16 does not work for cpu in env with cuda. by sywangyi in 906
* fsdp enhancements by pacman100 in 911
* Fix typos accelerate -> accelerator by pcuenca in 915
* 🚨🚨🚨 Act on deprecations 🚨🚨🚨 by muellerzr in 917
* fix accelerate test failure with cpu config by sywangyi in 909
* Introduce `project_dir` and limit the number of saved checkpoints by muellerzr in 916
* Specify inference by muellerzr in 921
* Support `init_on_device` by thomasw21 in 926
* ds-z3-init and prepending ds env variables with `ACCELERATE_` by pacman100 in 928
* Honor model dtype in `load_checkpoint` by sgugger in 920
* ds zero-3 init context manager by pacman100 in 932
* Fix silly typo by tornikeo in 939
* add `mixed_precision_type` property to `AcceleratorState` by pacman100 in 935
* fix batch size in prepare_dataloader for iterable datasets by sanderland in 937
* fix mp related test fails by pacman100 in 943
* Fix tracker by muellerzr in 942
* Fix offload when weights are on the GPU by sgugger in 945
* raise error for duplicate accelerate config values when using `deepspeed_config_file` by pacman100 in 941
* Add is_initialized method and refactor by muellerzr in 949
* Fix DeepSpeed tests by muellerzr in 950
* Don't automatically offload buffers when loading checkpoints by sgugger in 951
* Typo fix in src/accelerate/utils/modeling.py by ryderwishart in 955
* support master port when using ds multi-node launcher by pacman100 in 959
* Allowing encoded configuration for DeepSpeed by cli99 in 895
* Update README.md by Don9wanKim in 968
* Raise minimum version for distrib launch by muellerzr in 978
* Fix tied parameters test in big model inference by sgugger in 979
* Fix type error on line 36 by dhar174 in 981
* Ensure that last batch doesn't get dropped if perfectly even in gather_for_metrics by muellerzr in 982
* Skip wandb test for now by muellerzr in 984
* Fix test for converting tensor to proper dtype by sgugger in 983
* in sync with trfs, removing style_doc utils and using doc-builder instead by pacman100 in 988
* Add new release_memory util by muellerzr in 990
* adding support for kwargs in `load_state` by pacman100 in 989
* Fix scheduler incorrect steps when gradient accumulation enabled by markovalexander in 999
* Fix parameters tying in dispatch_model by sgugger in 1000
* improve deepspeed notes by stas00 in 1003
* Update toctree by muellerzr in 1008
* Add styleguide by muellerzr in 1007
* Maintain accumulation steps by muellerzr in 1011
* Saving and loading state hooks by patrickvonplaten in 991
* Fix test introduced in PR and introduce AcceleratorTestCase by muellerzr in 1016
* Allow the torch device to be set with an env var by Yard1 in 1009
* Fix import of LrScheduler by sgugger in 1017
* Don't force mixed precision as no in examples by sgugger in 1018
* Include steppage in performance docs by muellerzr in 1013
* Fix env var by muellerzr in 1024
* Change default for keep_fp32_wrapper by muellerzr in 1025
* Fix slow test by keeping tied weights on the same GPU by sgugger in 1026
* Start of adding examples by muellerzr in 1001
* More improvements to docstrings + examples by muellerzr in 1010
* With example by muellerzr in 1027
* sagemaker launcher fixes by pacman100 in 1031

0.15.0

PyTorch 2.0 stack support

We are very excited by the newly announced PyTorch 2.0 stack, and you can try it with Accelerate on any model by using the `dynamo_backend` argument of the `Accelerator`, or when filling out your config with `accelerate config`.

Note that to get the best performance, we recommend:
- using an Ampere GPU (or more recent)
- sticking to fixed shapes for now

* Add support for torch dynamo by sgugger in 829

New CLI commands

* Added two new commands, `accelerate config update` and `accelerate config default`. The first updates a config file to include the latest keys added in later releases of Accelerate, and the second creates a default configuration file automatically, mimicking the `write_default_config()` introduced in 851 and 853 by muellerzr
* Also introduced is a filterable help for `accelerate launch`, which shows only the options relevant to the flags passed; for example, `accelerate launch --multi_gpu` will show just the launch parameters relevant to multi-GPU training.

What's new?

* fix 🐛 by pacman100 in 836
* Deepspeed example should use gather_for_metrics by HammadB in 821
* Highlight selection with pretty colors by muellerzr in 839
* Add `join_uneven_inputs` context manager to Accelerator by Chris-hughes10 in 820
* Introduce `default-config` command by muellerzr in 840
* Fix log error and add log level to get_logger by muellerzr in 842
* Fix if/else by muellerzr in 849
* Fix complete_cv example by muellerzr in 848
* Refactor Accelerate config and introduce a multi-argument CLI interface by muellerzr in 851
* Clean up, add update command by muellerzr in 853
* Revert "Update pr docs actions" by mishig25 in 827
* Switch default log to warn by muellerzr in 859
* Remove mixed precision hook as part of the unwrap_model by muellerzr in 860
* update deepspeed error message wrt `batch_size` by pacman100 in 861
* fix failing deepspeed test by pacman100 in 868
* Even more log level refined, leave alone if not explicitly set by muellerzr in 871
* Solve pickling issues by muellerzr in 872
* Spring cleaning by muellerzr in 865
* fixing lr_scheduler prepare issue when using pytorch nightly by pacman100 in 878
* fix fsdp state_dict_config because of PyTorch changes by pacman100 in 877
* Update deprecated logging warn by SHi-ON in 881
* fix a bug by xiaohu2015 in 887
* Allow safetensors offload by sgugger in 873
* fixing lr scheduler for pytorch nightly by pacman100 in 884
* Prefix all accelerate env vars with ACCELERATE by muellerzr in 890
* fix prefix issues in tests by pacman100 in 891
* Fix windows cli selector by muellerzr in 893
* Better description for improper kwargs by muellerzr in 894
* Support bfloat16 in load_offloaded_weight by sgugger in 892

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* Chris-hughes10
* Add `join_uneven_inputs` context manager to Accelerator (820)

0.14.0

Megatron LM integration

Accelerate now supports Megatron-LM for the three model classes (BERT, GPT-2 and T5). You can learn more in the [documentation](https://huggingface.co/docs/accelerate/usage_guides/megatron_lm).

* Megatron-LM integration by pacman100 in 667
* ensure megatron is 2.2.0+ by jeffra in 755
* updating docs to use fork of megatron-lm and minor example/docs fix by pacman100 in 766
* adding support to return logits and generate for Megatron-LM GPT models by pacman100 in 819

PyTorch 1.13 support

Fixes a bug that returned SIGKILL errors on Windows.

* Isolate distrib_run by muellerzr in 828

Kaggle support with the `notebook_launcher`

With Kaggle now offering instances with two T4 GPUs, Accelerate can leverage them to perform multi-GPU training from a notebook.

* Work in kaggle! by muellerzr in 783

What's new?

* Add `non_blocking` kwarg to `send_to_device()` by NouamaneTazi in 607
* [ds launcher] un-hijack PYTHONPATH by stas00 in 741
* Fix num_processes is not defined by muellerzr in 746
* [Device map] nn.Parameter don't have children by patrickvonplaten in 747
* Use HTML relative paths for tiles by lewtun in 749
* Add gpu_ids to SageMakerConfig though it should never be set by muellerzr in 751
* Change num_cpu_threads_per_process default by muellerzr in 753
* Return unclipped gradient from grad_clip_norm_ by samuelstevens in 756
* refactor by pacman100 in 758
* update docs by pacman100 in 759
* Only wrap modules in DDP if they require grad by samuelstevens in 761
* Move io_same_device hook to before attach_align_device hook on cpu_offload and disk_offload. by piEsposito in 768
* Regression cli tests by muellerzr in 772
* Fix number of devices in get_balanced_memory by sgugger in 774
* Fix all github actions issues + depreciations by muellerzr in 773
* Fix flakey wandb test by muellerzr in 775
* Add defaults for launchers by muellerzr in 778
* Allow BatchSamplerShard to not even out batches by sgugger in 776
* Make rich toggleable and seperate out a new environment utility file by muellerzr in 779
* Add same_network + docs by muellerzr in 780
* fix transformers tests by ArthurZucker in 777
* Add Dev Container configuration by Chris-hughes10 in 782
* separate dataloader generator from sampler generator by pacman100 in 789
* Consider top-level buffers when computing `infer_auto_device_map` by younesbelkada in 792
* Add `even_batches` keyword to Accelerator by Chris-hughes10 in 781
* Fix device_map="auto" on CPU-only envs by sgugger in 797
* Fix extraction of state dict in offload by sgugger in 795
* fix: add pdsh as default launcher by zanussbaum in 800
* Deal with optimizer.differentiable in PyTorch 1.13.0 by comaniac in 803
* Introduce a pod-config command by muellerzr in 802
* Refactor CLI to improve readability by muellerzr in 810
* adding support to pickle and unpickle `AcceleratedOptimizer` by pacman100 in 811
* add `recurse` argument in `remove_hook_from_module` by younesbelkada in 812
* Act on deprecations by muellerzr in 813
* Mlflow-tracker-v2 🔥 by nbroad1881 in 794
* Update CLI docs and use mps rather than mps_device by muellerzr in 814
* Rename pod-config to tpu-config + docs by muellerzr in 818
* Update docs by muellerzr in 823
* rename sklearn to proper dep by muellerzr in 825
* Rename by muellerzr in 824
* Update pr docs actions by mishig25 in 827

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* Chris-hughes10
* Add Dev Container configuration (782)
* Add `even_batches` keyword to Accelerator (781)

0.13.2

- [Device map] nn.Parameter don't have children in 747 by patrickvonplaten

0.13.1

- Fix num_processes is not defined 746 by muellerzr
