Accelerate

Latest version: v1.3.0

1.3.0

Torch 2.0

As it has been roughly two years since torch 2.0 was first released, we now require it as the **minimum version for Accelerate**, mirroring the same change made in `transformers` in its latest release.
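
Concretely, this only raises the install requirement. A quick, purely illustrative way to check whether an existing environment already satisfies the new floor is sketched below (this is not Accelerate's own import-time guard, and the error message is made up):

```python
# Illustrative check only: verify the environment meets the new torch>=2.0 floor
# before upgrading to accelerate v1.3.0.
from packaging import version
import torch

if version.parse(torch.__version__) < version.parse("2.0.0"):
    raise RuntimeError(
        f"Found torch=={torch.__version__}; accelerate v1.3.0 requires torch>=2.0. "
        "Upgrade torch or pin accelerate<1.3.0."
    )
```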

Core
* [docs] no hard-coding cuda by faaany in https://github.com/huggingface/accelerate/pull/3270
* fix load_state_dict for npu by ji-huazhong in https://github.com/huggingface/accelerate/pull/3211
* Add `keep_torch_compile` param to `unwrap_model` and `extract_model_from_parallel` for distributed compiled model. by ggoggam in https://github.com/huggingface/accelerate/pull/3282 (usage sketch after this list)
* [tests] make cuda-only test case device-agnostic by faaany in https://github.com/huggingface/accelerate/pull/3340
* latest bnb no longer has optim_args attribute on optimizer by winglian in https://github.com/huggingface/accelerate/pull/3311
* add torchdata version check to avoid "in_order" error by faaany in https://github.com/huggingface/accelerate/pull/3344
* [docs] fix typo, change "backoff_filter" to "backoff_factor" by suchot in https://github.com/huggingface/accelerate/pull/3296
* dataloader: check that in_order is in kwargs before trying to drop it by dvrogozh in https://github.com/huggingface/accelerate/pull/3346
* feat(tpu): remove nprocs from xla.spawn by tengomucho in https://github.com/huggingface/accelerate/pull/3324
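
For the new `keep_torch_compile` flag mentioned above, here is a minimal usage sketch; the tiny model is a placeholder, and the flag values shown are for illustration (check the PR or the docs for the exact default):

```python
# Sketch: unwrap a prepared, torch.compile'd model with or without its compile wrapper.
import torch
import torch.nn as nn
from accelerate import Accelerator

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    def forward(self, x):
        return self.linear(x)

accelerator = Accelerator()
model = accelerator.prepare(torch.compile(TinyModel()))

# Keep the compiled wrapper (e.g. to keep running the optimized graph) ...
compiled = accelerator.unwrap_model(model, keep_torch_compile=True)
# ... or strip it to get back the plain nn.Module (e.g. before saving a state dict).
plain = accelerator.unwrap_model(model, keep_torch_compile=False)
```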

Big Modeling
* Fix test_nested_hook by SunMarc in https://github.com/huggingface/accelerate/pull/3289
* correct the return statement of _init_infer_auto_device_map by Nech-C in https://github.com/huggingface/accelerate/pull/3279
* Use torch.xpu.mem_get_info for XPU by dvrogozh in https://github.com/huggingface/accelerate/pull/3275
* Ensure that tied parameter is children of module by pablomlago in https://github.com/huggingface/accelerate/pull/3327
* Fix for offloading when using TorchAO >= 0.7.0 by a-r-r-o-w in https://github.com/huggingface/accelerate/pull/3332
* Fix offload generate tests by SunMarc in https://github.com/huggingface/accelerate/pull/3334

Examples
* Give example on how to handle gradient accumulation with cross-entropy by ylacombe in https://github.com/huggingface/accelerate/pull/3193
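
The example added in that PR is a full tutorial; the core pattern it builds on is the standard `accelerator.accumulate` loop sketched below. The toy model and data are placeholders, and the PR itself goes further by normalizing the cross-entropy loss over all items in the accumulated batches rather than per micro-batch:

```python
# Minimal gradient-accumulation sketch with a cross-entropy loss (toy data).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 4, (64,)))
dataloader = DataLoader(dataset, batch_size=8)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    # Inside `accumulate`, gradients are only synced and the optimizer only truly
    # steps every `gradient_accumulation_steps` batches.
    with accelerator.accumulate(model):
        loss = nn.functional.cross_entropy(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```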

Full Changelog
* [docs] no hard-coding cuda by faaany in https://github.com/huggingface/accelerate/pull/3270
* fix load_state_dict for npu by ji-huazhong in https://github.com/huggingface/accelerate/pull/3211
* Fix test_nested_hook by SunMarc in https://github.com/huggingface/accelerate/pull/3289
* correct the return statement of _init_infer_auto_device_map by Nech-C in https://github.com/huggingface/accelerate/pull/3279
* Give example on how to handle gradient accumulation with cross-entropy by ylacombe in https://github.com/huggingface/accelerate/pull/3193
* Use torch.xpu.mem_get_info for XPU by dvrogozh in https://github.com/huggingface/accelerate/pull/3275
* Add `keep_torch_compile` param to `unwrap_model` and `extract_model_from_parallel` for distributed compiled model. by ggoggam in https://github.com/huggingface/accelerate/pull/3282
* Ensure that tied parameter is children of module by pablomlago in https://github.com/huggingface/accelerate/pull/3327
* Bye bye torch <2 by muellerzr in https://github.com/huggingface/accelerate/pull/3331
* Fixup docker build err by muellerzr in https://github.com/huggingface/accelerate/pull/3333
* feat(tpu): remove nprocs from xla.spawn by tengomucho in https://github.com/huggingface/accelerate/pull/3324
* Fix offload generate tests by SunMarc in https://github.com/huggingface/accelerate/pull/3334
* [tests] make cuda-only test case device-agnostic by faaany in https://github.com/huggingface/accelerate/pull/3340
* latest bnb no longer has optim_args attribute on optimizer by winglian in https://github.com/huggingface/accelerate/pull/3311
* Fix for offloading when using TorchAO >= 0.7.0 by a-r-r-o-w in https://github.com/huggingface/accelerate/pull/3332
* add torchdata version check to avoid "in_order" error by faaany in https://github.com/huggingface/accelerate/pull/3344
* [docs] fix typo, change "backoff_filter" to "backoff_factor" by suchot in https://github.com/huggingface/accelerate/pull/3296
* dataloader: check that in_order is in kwargs before trying to drop it by dvrogozh in https://github.com/huggingface/accelerate/pull/3346

New Contributors
* ylacombe made their first contribution in https://github.com/huggingface/accelerate/pull/3193
* ggoggam made their first contribution in https://github.com/huggingface/accelerate/pull/3282
* pablomlago made their first contribution in https://github.com/huggingface/accelerate/pull/3327
* tengomucho made their first contribution in https://github.com/huggingface/accelerate/pull/3324
* suchot made their first contribution in https://github.com/huggingface/accelerate/pull/3296

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v1.2.1...v1.3.0

1.2.1

* fix: add max_memory to _init_infer_auto_device_map's return statement in https://github.com/huggingface/accelerate/pull/3279 by Nech-C
* fix load_state_dict for npu in https://github.com/huggingface/accelerate/pull/3211 by statelesshz

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v1.2.0...v1.2.1

1.2.0

Core
* enable `find_executable_batch_size` on XPU by faaany in https://github.com/huggingface/accelerate/pull/3236 (usage sketch after this list)
* Use `numpy._core` instead of `numpy.core` by qgallouedec in https://github.com/huggingface/accelerate/pull/3247
* Add warnings and fallback for unassigned devices in infer_auto_device_map by Nech-C in https://github.com/huggingface/accelerate/pull/3066
* Allow for full dynamo config passed to Accelerator by muellerzr in https://github.com/huggingface/accelerate/pull/3251
* [WIP] FEAT Decorator to purge accelerate env vars by BenjaminBossan in https://github.com/huggingface/accelerate/pull/3252
* [`data_loader`] Optionally also propagate set_epoch to batch sampler by tomaarsen in https://github.com/huggingface/accelerate/pull/3246
* use XPU instead of GPU in the `accelerate config` prompt text by faaany in https://github.com/huggingface/accelerate/pull/3268
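
For context on the `find_executable_batch_size` item above, a minimal usage sketch; the starting batch size and the body of the training function are placeholders:

```python
# Sketch: the decorator retries the wrapped function with a smaller batch size
# whenever it catches an out-of-memory error (now also recognized on XPU).
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size

accelerator = Accelerator()

@find_executable_batch_size(starting_batch_size=128)
def train(batch_size):
    accelerator.free_memory()  # drop references/caches left over from a failed attempt
    # ... build the dataloader and model with `batch_size` and run the training loop ...
    print(f"training with batch_size={batch_size}")

train()  # call without arguments; the decorator supplies `batch_size`
```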

Big Modeling
* Fix `align_module_device`, ensure only cpu tensors for `get_state_dict_offloaded_model` by kylesayrs in https://github.com/huggingface/accelerate/pull/3217
* Remove hook for bnb 4-bit by SunMarc in https://github.com/huggingface/accelerate/pull/3223
* [docs] add instruction to install bnb on non-cuda devices by faaany in https://github.com/huggingface/accelerate/pull/3227
* Take care of case when "_tied_weights_keys" is not an attribute by fabianlim in https://github.com/huggingface/accelerate/pull/3226
* Update deferring_execution.md by max-yue in https://github.com/huggingface/accelerate/pull/3262
* Revert default behavior of `get_state_dict_from_offload` by kylesayrs in https://github.com/huggingface/accelerate/pull/3253
* Fix: Resolve 3060, `preload_module_classes` is lost for nested modules by wejoncy in https://github.com/huggingface/accelerate/pull/3248

DeepSpeed
* Select the DeepSpeedCPUOptimizer based on the original optimizer class. by eljandoubi in https://github.com/huggingface/accelerate/pull/3255
* support for wrapped schedulefree optimizer when using deepspeed by winglian in https://github.com/huggingface/accelerate/pull/3266

Documentation

* Update code in tracking documentation by faaany in https://github.com/huggingface/accelerate/pull/3235
* Replaced set/check breakpoint with set/check trigger in the troubleshooting documentation by relh in https://github.com/huggingface/accelerate/pull/3259
* Update set-seed by faaany in https://github.com/huggingface/accelerate/pull/3228
* Fix typo by faaany in https://github.com/huggingface/accelerate/pull/3221
* Use real path for `checkpoint` by faaany in https://github.com/huggingface/accelerate/pull/3220
* Fixed multiple typos for Tutorials and Guides docs by henryhmko in https://github.com/huggingface/accelerate/pull/3274

New Contributors
* winglian made their first contribution in https://github.com/huggingface/accelerate/pull/3266
* max-yue made their first contribution in https://github.com/huggingface/accelerate/pull/3262
* as12138 made their first contribution in https://github.com/huggingface/accelerate/pull/3261
* relh made their first contribution in https://github.com/huggingface/accelerate/pull/3259
* wejoncy made their first contribution in https://github.com/huggingface/accelerate/pull/3248
* henryhmko made their first contribution in https://github.com/huggingface/accelerate/pull/3274


Full Changelog
* Fix `align_module_device`, ensure only cpu tensors for `get_state_dict_offloaded_model` by kylesayrs in https://github.com/huggingface/accelerate/pull/3217
* remove hook for bnb 4-bit by SunMarc in https://github.com/huggingface/accelerate/pull/3223
* enable `find_executable_batch_size` on XPU by faaany in https://github.com/huggingface/accelerate/pull/3236
* take care of case when "_tied_weights_keys" is not an attribute by fabianlim in https://github.com/huggingface/accelerate/pull/3226
* [docs] update code in tracking documentation by faaany in https://github.com/huggingface/accelerate/pull/3235
* Add warnings and fallback for unassigned devices in infer_auto_device_map by Nech-C in https://github.com/huggingface/accelerate/pull/3066
* [`data_loader`] Optionally also propagate set_epoch to batch sampler by tomaarsen in https://github.com/huggingface/accelerate/pull/3246
* [docs] add instruction to install bnb on non-cuda devices by faaany in https://github.com/huggingface/accelerate/pull/3227
* Use `numpy._core` instead of `numpy.core` by qgallouedec in https://github.com/huggingface/accelerate/pull/3247
* Allow for full dynamo config passed to Accelerator by muellerzr in https://github.com/huggingface/accelerate/pull/3251
* [WIP] FEAT Decorator to purge accelerate env vars by BenjaminBossan in https://github.com/huggingface/accelerate/pull/3252
* use XPU instead of GPU in the `accelerate config` prompt text by faaany in https://github.com/huggingface/accelerate/pull/3268
* support for wrapped schedulefree optimizer when using deepspeed by winglian in https://github.com/huggingface/accelerate/pull/3266
* Update deferring_execution.md by max-yue in https://github.com/huggingface/accelerate/pull/3262
* Fix: Resolve 3257 by as12138 in https://github.com/huggingface/accelerate/pull/3261
* Replaced set/check breakpoint with set/check trigger in the troubleshooting documentation by relh in https://github.com/huggingface/accelerate/pull/3259
* Select the DeepSpeedCPUOptimizer based on the original optimizer class. by eljandoubi in https://github.com/huggingface/accelerate/pull/3255
* Revert default behavior of `get_state_dict_from_offload` by kylesayrs in https://github.com/huggingface/accelerate/pull/3253
* Fix: Resolve 3060, `preload_module_classes` is lost for nested modules by wejoncy in https://github.com/huggingface/accelerate/pull/3248
* [docs] update set-seed by faaany in https://github.com/huggingface/accelerate/pull/3228
* [docs] fix typo by faaany in https://github.com/huggingface/accelerate/pull/3221
* [docs] use real path for `checkpoint` by faaany in https://github.com/huggingface/accelerate/pull/3220
* Fixed multiple typos for Tutorials and Guides docs by henryhmko in https://github.com/huggingface/accelerate/pull/3274

Code Diff
Release diff: https://github.com/huggingface/accelerate/compare/v1.1.1...v1.2.0

1.1.0

Internals
* Allow for a `data_seed` argument in https://github.com/huggingface/accelerate/pull/3150
* Trigger `weights_only=True` by default for all compatible objects when checkpointing and saving with `torch.save` in https://github.com/huggingface/accelerate/pull/3036
* Handle negative values for `dim` input in `pad_across_processes` in https://github.com/huggingface/accelerate/pull/3114 (see the sketch after this list)
* Enable cpu bnb distributed lora finetune in https://github.com/huggingface/accelerate/pull/3159
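
As a quick illustration of the `pad_across_processes` change referenced above, a hedged sketch (the tensor shapes are made up):

```python
# Sketch: pad ragged per-process tensors to a common length before gathering.
# With PR 3114, `dim` may now also be negative (e.g. -1 for the last axis).
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Each process may produce a different sequence length.
seq = torch.randn(4, 10 + accelerator.process_index)
padded = accelerator.pad_across_processes(seq, dim=-1, pad_index=0)
gathered = accelerator.gather(padded)
```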

DeepSpeed
* Support torch dynamo for deepspeed>=0.14.4 in https://github.com/huggingface/accelerate/pull/3069

Megatron
* update Megatron-LM plugin code to version 0.8.0 or higher in https://github.com/huggingface/accelerate/pull/3174

Big Model Inference
* New `has_offloaded_params` utility added in https://github.com/huggingface/accelerate/pull/3188
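
A hedged sketch of how the new utility can be used; the model and the `cpu_offload` call are illustrative, and whether a given module reports `True` depends on where the offload hooks end up attached:

```python
# Sketch: `has_offloaded_params` reports whether a module's weights are managed by
# an offloading hook (i.e. moved on/off the execution device around forward).
import torch.nn as nn
from accelerate import cpu_offload
from accelerate.utils import has_offloaded_params

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
print(has_offloaded_params(model[0]))  # False: ordinary resident parameters

cpu_offload(model)  # attach offload hooks; execution device defaults to the weights' device
print(has_offloaded_params(model[0]))  # True once the leaf layer carries an offload hook
```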

Examples
* Florence2 distributed inference example in https://github.com/huggingface/accelerate/pull/3123

Full Changelog
* Handle negative values for `dim` input in `pad_across_processes` by mariusarvinte in https://github.com/huggingface/accelerate/pull/3114
* Fixup DS issue with weakref by muellerzr in https://github.com/huggingface/accelerate/pull/3143
* Refactor scaler to util by muellerzr in https://github.com/huggingface/accelerate/pull/3142
* DS fix, continued by muellerzr in https://github.com/huggingface/accelerate/pull/3145
* Florence2 distributed inference example by hlky in https://github.com/huggingface/accelerate/pull/3123
* POC: Allow for a `data_seed` by muellerzr in https://github.com/huggingface/accelerate/pull/3150
* Adding multi gpu speech generation by dame-cell in https://github.com/huggingface/accelerate/pull/3149
* support torch dynamo for deepspeed>=0.14.4 by oraluben in https://github.com/huggingface/accelerate/pull/3069
* Fixup Zero3 + `save_model` by muellerzr in https://github.com/huggingface/accelerate/pull/3146
* Trigger `weights_only=True` by default for all compatible objects by muellerzr in https://github.com/huggingface/accelerate/pull/3036
* Remove broken dynamo test by oraluben in https://github.com/huggingface/accelerate/pull/3155
* fix version check bug in `get_xpu_available_memory` by faaany in https://github.com/huggingface/accelerate/pull/3165
* enable cpu bnb distributed lora finetune by jiqing-feng in https://github.com/huggingface/accelerate/pull/3159
* [Utils] `has_offloaded_params` by kylesayrs in https://github.com/huggingface/accelerate/pull/3188
* fix bnb by eljandoubi in https://github.com/huggingface/accelerate/pull/3186
* [docs] update neptune API by faaany in https://github.com/huggingface/accelerate/pull/3181
* docs: fix a wrong word in comment in src/accelerate/accelerate.py:1255 by Rebornix-zero in https://github.com/huggingface/accelerate/pull/3183
* [docs] use nn.module instead of tensor as model by faaany in https://github.com/huggingface/accelerate/pull/3157
* Fix typo by kylesayrs in https://github.com/huggingface/accelerate/pull/3191
* MLU devices : Checks if mlu is available via an cndev-based check which won't trigger the drivers and leave mlu by huismiling in https://github.com/huggingface/accelerate/pull/3187
* update Megatron-LM plugin code to version 0.8.0 or higher. by eljandoubi in https://github.com/huggingface/accelerate/pull/3174
* 🚨 🚨 🚨 Goodbye Python 3.8! 🚨 🚨 🚨 by muellerzr in https://github.com/huggingface/accelerate/pull/3194
* Update transformers.deepspeed references from transformers 4.46.0 release by loadams in https://github.com/huggingface/accelerate/pull/3196
* eliminate dead code by statelesshz in https://github.com/huggingface/accelerate/pull/3198
* take `torch.nn.Module` model into account when moving to device by faaany in https://github.com/huggingface/accelerate/pull/3167
* [docs] add xpu part and fix bug in `torchrun` by faaany in https://github.com/huggingface/accelerate/pull/3166
* Models With Tied Weights Need Re-Tieing After FSDP Param Init by fabianlim in https://github.com/huggingface/accelerate/pull/3154
* add the missing xpu for local sgd by faaany in https://github.com/huggingface/accelerate/pull/3163
* typo fix in big_modeling.py by a-r-r-o-w in https://github.com/huggingface/accelerate/pull/3207
* [Utils] `align_module_device` by kylesayrs in https://github.com/huggingface/accelerate/pull/3204

New Contributors
* mariusarvinte made their first contribution in https://github.com/huggingface/accelerate/pull/3114
* hlky made their first contribution in https://github.com/huggingface/accelerate/pull/3123
* dame-cell made their first contribution in https://github.com/huggingface/accelerate/pull/3149
* kylesayrs made their first contribution in https://github.com/huggingface/accelerate/pull/3188
* eljandoubi made their first contribution in https://github.com/huggingface/accelerate/pull/3186
* Rebornix-zero made their first contribution in https://github.com/huggingface/accelerate/pull/3183
* loadams made their first contribution in https://github.com/huggingface/accelerate/pull/3196

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v1.0.1...v1.1.0

1.0.1

Bugfixes

* Fixes an issue where the `auto` values were no longer being parsed when using [deepspeed](https://github.com/huggingface/accelerate/pull/3143)
* Fixes a broken test in the deepspeed tests related to the [auto values](https://github.com/huggingface/accelerate/pull/3145)

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v1.0.0...v1.0.1
