As it has been roughly two years since torch 2.0 was first released, it is now the **minimum required version for Accelerate**, mirroring the same change made in `transformers` in its latest release.
## Core
* [docs] no hard-coding cuda by faaany in https://github.com/huggingface/accelerate/pull/3270
* fix load_state_dict for npu by ji-huazhong in https://github.com/huggingface/accelerate/pull/3211
* Add `keep_torch_compile` param to `unwrap_model` and `extract_model_from_parallel` for distributed compiled model. by ggoggam in https://github.com/huggingface/accelerate/pull/3282 (a usage sketch follows this list)
* [tests] make cuda-only test case device-agnostic by faaany in https://github.com/huggingface/accelerate/pull/3340
* latest bnb no longer has optim_args attribute on optimizer by winglian in https://github.com/huggingface/accelerate/pull/3311
* add torchdata version check to avoid "in_order" error by faaany in https://github.com/huggingface/accelerate/pull/3344
* [docs] fix typo, change "backoff_filter" to "backoff_factor" by suchot in https://github.com/huggingface/accelerate/pull/3296
* dataloader: check that in_order is in kwargs before trying to drop it by dvrogozh in https://github.com/huggingface/accelerate/pull/3346
* feat(tpu): remove nprocs from xla.spawn by tengomucho in https://github.com/huggingface/accelerate/pull/3324
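
For context on the `keep_torch_compile` addition above, here is a minimal sketch (not taken from the PR) of how the flag can be used, assuming it is exposed on `Accelerator.unwrap_model` with a default that preserves the compiled wrapper:

```python
# Hedged sketch: controlling whether unwrapping strips the torch.compile
# wrapper from a prepared (possibly distributed) model. Assumes torch>=2.0
# and accelerate>=1.3.0; exact defaults may differ.
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.compile(torch.nn.Linear(8, 8)))

# Keep the compiled graph after removing the distributed wrapper,
# e.g. to keep running optimized inference on the unwrapped model.
compiled = accelerator.unwrap_model(model, keep_torch_compile=True)

# Strip the compile wrapper as well, recovering the plain nn.Module,
# e.g. before saving a clean state dict.
plain = accelerator.unwrap_model(model, keep_torch_compile=False)
print(type(compiled).__name__, type(plain).__name__)
```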
## Big Modeling
* Fix test_nested_hook by SunMarc in https://github.com/huggingface/accelerate/pull/3289
* correct the return statement of _init_infer_auto_device_map by Nech-C in https://github.com/huggingface/accelerate/pull/3279
* Use torch.xpu.mem_get_info for XPU by dvrogozh in https://github.com/huggingface/accelerate/pull/3275
* Ensure that tied parameter is children of module by pablomlago in https://github.com/huggingface/accelerate/pull/3327
* Fix for offloading when using TorchAO >= 0.7.0 by a-r-r-o-w in https://github.com/huggingface/accelerate/pull/3332
* Fix offload generate tests by SunMarc in https://github.com/huggingface/accelerate/pull/3334
## Examples
* Give example on how to handle gradient accumulation with cross-entropy by ylacombe in https://github.com/huggingface/accelerate/pull/3193 (a simplified sketch follows below)
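
The cross-entropy example referenced above lives in the repository; the following is a simplified, self-contained sketch of one part of the idea (not the example itself): sum the per-token cross-entropy and normalize by the number of valid tokens inside `accelerator.accumulate`, so micro-batches with different token counts are weighted sensibly. The toy model, data, and `PAD_ID` are placeholders, and the full repository example additionally handles normalization across the whole accumulation window and across processes.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

PAD_ID, VOCAB = 0, 32

# Toy next-token-style setup; stand-ins for a real language model and dataset.
model = torch.nn.Sequential(torch.nn.Embedding(VOCAB, 16), torch.nn.Linear(16, VOCAB))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randint(1, VOCAB, (64, 12))), batch_size=4)

accelerator = Accelerator(gradient_accumulation_steps=4)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for (input_ids,) in dataloader:
    labels = input_ids  # toy target: predict the token itself
    with accelerator.accumulate(model):
        logits = model(input_ids)  # (batch, seq_len, VOCAB)
        # Sum per-token losses rather than averaging per micro-batch, then
        # normalize by the number of non-padding tokens so micro-batches with
        # more valid tokens contribute proportionally to the accumulated grad.
        loss = F.cross_entropy(
            logits.reshape(-1, VOCAB),
            labels.reshape(-1),
            ignore_index=PAD_ID,
            reduction="sum",
        ) / (labels != PAD_ID).sum().clamp(min=1)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```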
## Full Changelog
### What's Changed
* [docs] no hard-coding cuda by faaany in https://github.com/huggingface/accelerate/pull/3270
* fix load_state_dict for npu by ji-huazhong in https://github.com/huggingface/accelerate/pull/3211
* Fix test_nested_hook by SunMarc in https://github.com/huggingface/accelerate/pull/3289
* correct the return statement of _init_infer_auto_device_map by Nech-C in https://github.com/huggingface/accelerate/pull/3279
* Give example on how to handle gradient accumulation with cross-entropy by ylacombe in https://github.com/huggingface/accelerate/pull/3193
* Use torch.xpu.mem_get_info for XPU by dvrogozh in https://github.com/huggingface/accelerate/pull/3275
* Add `keep_torch_compile` param to `unwrap_model` and `extract_model_from_parallel` for distributed compiled model. by ggoggam in https://github.com/huggingface/accelerate/pull/3282
* Ensure that tied parameter is children of module by pablomlago in https://github.com/huggingface/accelerate/pull/3327
* Bye bye torch <2 by muellerzr in https://github.com/huggingface/accelerate/pull/3331
* Fixup docker build err by muellerzr in https://github.com/huggingface/accelerate/pull/3333
* feat(tpu): remove nprocs from xla.spawn by tengomucho in https://github.com/huggingface/accelerate/pull/3324
* Fix offload generate tests by SunMarc in https://github.com/huggingface/accelerate/pull/3334
* [tests] make cuda-only test case device-agnostic by faaany in https://github.com/huggingface/accelerate/pull/3340
* latest bnb no longer has optim_args attribute on optimizer by winglian in https://github.com/huggingface/accelerate/pull/3311
* Fix for offloading when using TorchAO >= 0.7.0 by a-r-r-o-w in https://github.com/huggingface/accelerate/pull/3332
* add torchdata version check to avoid "in_order" error by faaany in https://github.com/huggingface/accelerate/pull/3344
* [docs] fix typo, change "backoff_filter" to "backoff_factor" by suchot in https://github.com/huggingface/accelerate/pull/3296
* dataloader: check that in_order is in kwargs before trying to drop it by dvrogozh in https://github.com/huggingface/accelerate/pull/3346
### New Contributors
* ylacombe made their first contribution in https://github.com/huggingface/accelerate/pull/3193
* ggoggam made their first contribution in https://github.com/huggingface/accelerate/pull/3282
* pablomlago made their first contribution in https://github.com/huggingface/accelerate/pull/3327
* tengomucho made their first contribution in https://github.com/huggingface/accelerate/pull/3324
* suchot made their first contribution in https://github.com/huggingface/accelerate/pull/3296
**Full Changelog**: https://github.com/huggingface/accelerate/compare/v1.2.1...v1.3.0