Accelerate

Latest version: v0.30.0

0.30.0

Core
* We've simplified the `tqdm` wrapper to make it fully passthrough: no need for `tqdm(main_process_only, *args)`, it is now just `tqdm(*args)` and you can pass `main_process_only` as a keyword argument.
* We've added support for advanced optimizer usage:
  * Schedule-free optimizer introduced by [Meta](https://github.com/facebookresearch/schedule_free/tree/main) by muellerzr in https://github.com/huggingface/accelerate/pull/2631
  * LOMO optimizer introduced by [OpenLMLab](https://github.com/OpenLMLab/LOMO) by younesbelkada in https://github.com/huggingface/accelerate/pull/2695
* Enable BF16 autocast to everything during FP8 and enable FSDP by muellerzr in https://github.com/huggingface/accelerate/pull/2655
* Support non-blocking dataloader `send_to_device` calls by drhead in https://github.com/huggingface/accelerate/pull/2685
* allow gather_for_metrics to be more flexible by SunMarc in https://github.com/huggingface/accelerate/pull/2710
* Add `cann` version info to command accelerate env for NPU by statelesshz in https://github.com/huggingface/accelerate/pull/2689
* Add MLU rng state setter by ArthurinRUC in https://github.com/huggingface/accelerate/pull/2664
* device agnostic testing for hooks&utils&big_modeling by statelesshz in https://github.com/huggingface/accelerate/pull/2602
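
The new passthrough shape of the `tqdm` wrapper can be sketched with a minimal stand-in (this is plain illustrative Python, not Accelerate's actual implementation; `make_tqdm` and `is_main_process` are names invented for the sketch):

```python
# Stand-in for the simplified wrapper: every positional argument is
# forwarded untouched, and the process gate is now a keyword argument.
def make_tqdm(is_main_process):
    def tqdm(*args, main_process_only=True, **kwargs):
        iterable = args[0] if args else ()
        if main_process_only and not is_main_process:
            # Off the main process: hand the iterable back without a bar.
            return iterable
        # On the main process, the real wrapper would delegate to
        # tqdm.auto.tqdm(iterable, **kwargs) here.
        return iterable
    return tqdm

bar = make_tqdm(is_main_process=True)
print(list(bar(range(3))))  # old form was tqdm(True, range(3))
```

The point of the change is that call sites no longer need to thread a positional flag ahead of the iterable.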

Documentation
* Through collaboration between fabianlim (lead contributor), stas00, pacman100, and muellerzr, we have a new concept guide for FSDP and DeepSpeed that explicitly details how the two interoperate and explains fully and clearly how each of them works. This was a monumental effort by fabianlim to ensure that everything is as accurate as possible for users. I highly recommend visiting this new documentation, available [here](https://huggingface.co/docs/accelerate/concept_guides/fsdp_and_deepspeed)
* New distributed inference examples have been added thanks to SunMarc in https://github.com/huggingface/accelerate/pull/2672
* Fixed some docs for using internal trackers by brentyi in https://github.com/huggingface/accelerate/pull/2650

DeepSpeed
* Accelerate can now handle MoE models when using deepspeed, thanks to pacman100 in https://github.com/huggingface/accelerate/pull/2662
* Allow "auto" for gradient clipping in YAML by regisss in https://github.com/huggingface/accelerate/pull/2649
* Introduce a `deepspeed`-specific Docker image by muellerzr in https://github.com/huggingface/accelerate/pull/2707. To use, pull the `gpu-deepspeed` tag: `docker pull huggingface/accelerate:gpu-deepspeed-nightly`

Megatron
* Megatron plugin can support NPU by zhangsheng377 in https://github.com/huggingface/accelerate/pull/2667


Big Modeling
* Add strict arg to load_checkpoint_and_dispatch by SunMarc in https://github.com/huggingface/accelerate/pull/2641

Bug Fixes
* Fix up state with xla + performance regression by muellerzr in https://github.com/huggingface/accelerate/pull/2634
* Parenthesis on xpu_available by muellerzr in https://github.com/huggingface/accelerate/pull/2639
* Fix `is_train_batch_min` type in DeepSpeedPlugin by yhna940 in https://github.com/huggingface/accelerate/pull/2646
* Fix backend check by jiqing-feng in https://github.com/huggingface/accelerate/pull/2652
* Fix the rng states of sampler's generator to be synchronized for correct sharding of dataset across GPUs by pacman100 in https://github.com/huggingface/accelerate/pull/2694
* Block AMP for MPS device by SunMarc in https://github.com/huggingface/accelerate/pull/2699
* Fixed issue when doing multi-gpu training with bnb when the first gpu is not used by SunMarc in https://github.com/huggingface/accelerate/pull/2714
* Fixup `free_memory` to deal with garbage collection by muellerzr in https://github.com/huggingface/accelerate/pull/2716
* Fix sampler serialization failing by SunMarc in https://github.com/huggingface/accelerate/pull/2723
* Fix deepspeed offload device type in the arguments to be more accurate by yhna940 in https://github.com/huggingface/accelerate/pull/2717

Full Changelog
* Schedule free optimizer support by muellerzr in https://github.com/huggingface/accelerate/pull/2631
* Fix up state with xla + performance regression by muellerzr in https://github.com/huggingface/accelerate/pull/2634
* Parenthesis on xpu_available by muellerzr in https://github.com/huggingface/accelerate/pull/2639
* add third-party device prefix to `execution_device` by faaany in https://github.com/huggingface/accelerate/pull/2612
* add strict arg to load_checkpoint_and_dispatch by SunMarc in https://github.com/huggingface/accelerate/pull/2641
* device agnostic testing for hooks&utils&big_modeling by statelesshz in https://github.com/huggingface/accelerate/pull/2602
* Docs fix for using internal trackers by brentyi in https://github.com/huggingface/accelerate/pull/2650
* Allow "auto" for gradient clipping in YAML by regisss in https://github.com/huggingface/accelerate/pull/2649
* Fix `is_train_batch_min` type in DeepSpeedPlugin by yhna940 in https://github.com/huggingface/accelerate/pull/2646
* Don't use deprecated `Repository` anymore by Wauplin in https://github.com/huggingface/accelerate/pull/2658
* Fix test_from_pretrained_low_cpu_mem_usage_measured failure by yuanwu2017 in https://github.com/huggingface/accelerate/pull/2644
* Add MLU rng state setter by ArthurinRUC in https://github.com/huggingface/accelerate/pull/2664
* fix backend check by jiqing-feng in https://github.com/huggingface/accelerate/pull/2652
* Megatron plugin can support NPU by zhangsheng377 in https://github.com/huggingface/accelerate/pull/2667
* Revert "fix backend check" by muellerzr in https://github.com/huggingface/accelerate/pull/2669
* `tqdm`: `*args` should come ahead of `main_process_only` by rb-synth in https://github.com/huggingface/accelerate/pull/2654
* Handle MoE models with DeepSpeed by pacman100 in https://github.com/huggingface/accelerate/pull/2662
* Fix deepspeed moe test with version check by pacman100 in https://github.com/huggingface/accelerate/pull/2677
* Pin DS...again.. by muellerzr in https://github.com/huggingface/accelerate/pull/2679
* fix backend check by jiqing-feng in https://github.com/huggingface/accelerate/pull/2670
* Deprecate tqdm args + slight logic tweaks by muellerzr in https://github.com/huggingface/accelerate/pull/2673
* Enable BF16 autocast to everything during FP8 + some tweaks to enable FSDP by muellerzr in https://github.com/huggingface/accelerate/pull/2655
* Fix the rng states of sampler's generator to be synchronized for correct sharding of dataset across GPUs by pacman100 in https://github.com/huggingface/accelerate/pull/2694
* Simplify test logic by pacman100 in https://github.com/huggingface/accelerate/pull/2697
* Add source code for DataLoader Animation by muellerzr in https://github.com/huggingface/accelerate/pull/2696
* Block AMP for MPS device by SunMarc in https://github.com/huggingface/accelerate/pull/2699
* Do a pip freeze during workflows by muellerzr in https://github.com/huggingface/accelerate/pull/2704
* add cann version info to command accelerate env by statelesshz in https://github.com/huggingface/accelerate/pull/2689
* Add version checks for the import of DeepSpeed moe utils by pacman100 in https://github.com/huggingface/accelerate/pull/2705
* Change dataloader send_to_device calls to non-blocking by drhead in https://github.com/huggingface/accelerate/pull/2685
* add distributed examples by SunMarc in https://github.com/huggingface/accelerate/pull/2672
* Add diffusers to req by muellerzr in https://github.com/huggingface/accelerate/pull/2711
* fix bnb multi gpu training by SunMarc in https://github.com/huggingface/accelerate/pull/2714
* allow gather_for_metrics to be more flexible by SunMarc in https://github.com/huggingface/accelerate/pull/2710
* Add Upcasting for FSDP in Mixed Precision. Add Concept Guide for FSPD and DeepSpeed. by fabianlim in https://github.com/huggingface/accelerate/pull/2674
* Segment out a deepspeed docker image by muellerzr in https://github.com/huggingface/accelerate/pull/2707
* Fixup `free_memory` to deal with garbage collection by muellerzr in https://github.com/huggingface/accelerate/pull/2716
* fix sampler serialization by SunMarc in https://github.com/huggingface/accelerate/pull/2723
* Fix sampler failing test by SunMarc in https://github.com/huggingface/accelerate/pull/2728
* Docs: Fix build main documentation by SunMarc in https://github.com/huggingface/accelerate/pull/2729
* Fix Documentation in FSDP and DeepSpeed Concept Guide by fabianlim in https://github.com/huggingface/accelerate/pull/2725
* Fix deepspeed offload device type by yhna940 in https://github.com/huggingface/accelerate/pull/2717
* FEAT: Add LOMO optimizer by younesbelkada in https://github.com/huggingface/accelerate/pull/2695
* Fix tests on main by muellerzr in https://github.com/huggingface/accelerate/pull/2739

New Contributors
* brentyi made their first contribution in https://github.com/huggingface/accelerate/pull/2650
* regisss made their first contribution in https://github.com/huggingface/accelerate/pull/2649
* yhna940 made their first contribution in https://github.com/huggingface/accelerate/pull/2646
* Wauplin made their first contribution in https://github.com/huggingface/accelerate/pull/2658
* ArthurinRUC made their first contribution in https://github.com/huggingface/accelerate/pull/2664
* jiqing-feng made their first contribution in https://github.com/huggingface/accelerate/pull/2652
* zhangsheng377 made their first contribution in https://github.com/huggingface/accelerate/pull/2667
* rb-synth made their first contribution in https://github.com/huggingface/accelerate/pull/2654
* drhead made their first contribution in https://github.com/huggingface/accelerate/pull/2685

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v0.29.3...v0.30.0

0.29.3

* Fixes issue with backend refactor not working on CPU-based distributed environments by jiqing-feng: https://github.com/huggingface/accelerate/pull/2670
* Fixes issue where `load_checkpoint_and_dispatch` needs a `strict` argument by SunMarc: https://github.com/huggingface/accelerate/pull/2641

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v0.29.2...v0.29.3

0.29.2

* Fixes xpu missing parenthesis https://github.com/huggingface/accelerate/pull/2639
* Fixes XLA and performance degradation on init with the state https://github.com/huggingface/accelerate/pull/2634

0.29.1

Fixed an import that would cause the `accelerate` CLI to fail if `pytest` wasn't installed

0.29.0

Core
* Accelerate can now optimize NUMA affinity, which can help increase throughput on NVIDIA multi-GPU systems. To enable it, either follow the prompt during `accelerate config`, set the `ACCELERATE_CPU_AFFINITY=1` env variable, or set it manually with the following:

```python
from accelerate.utils import set_numa_affinity

# For GPU 0
set_numa_affinity(0)
```

Big thanks to stas00 for the recommendation, request, and feedback during development
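
For the environment-variable route, usage might look like the following one-liner (`train.py` is a placeholder for your own script):

```shell
# Enable NUMA affinity for this launch only, without editing any code
ACCELERATE_CPU_AFFINITY=1 accelerate launch train.py
```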

* Allow for setting deterministic algorithms in `set_seed` by muellerzr in https://github.com/huggingface/accelerate/pull/2569
* Fixed the test script for TPU v2/v3 by vanbasten23 in https://github.com/huggingface/accelerate/pull/2542
* Cambricon MLU device support introduced by huismiling in https://github.com/huggingface/accelerate/pull/2552
* A big refactor of `PartialState` and `AcceleratorState` allows for easier future-proofing and simplifies adding support for new devices, by muellerzr in https://github.com/huggingface/accelerate/pull/2576
* Fixed a reproducibility issue in distributed environments with Dataloader shuffling when using `BatchSamplerShard` by universuen in https://github.com/huggingface/accelerate/pull/2584
* `notebook_launcher` can use multiple GPUs in Google Colab if using a custom instance that supports multiple GPUs by StefanTodoran in https://github.com/huggingface/accelerate/pull/2561

Big Model Inference
* Add a log message for the RTX 4000 series when performing multi-GPU inference with `device_map`, which can otherwise lead to hanging, by SunMarc in https://github.com/huggingface/accelerate/pull/2557
* Fix `load_checkpoint_in_model` behavior when unexpected keys are in the checkpoint by fxmarty in https://github.com/huggingface/accelerate/pull/2588

DeepSpeed
* Fix issue with the mapping of `main_process_ip` and `master_addr` when not using the standard deepspeed launcher by asdfry in https://github.com/huggingface/accelerate/pull/2495
* Improve deepspeed env gen by checking for bad keys, by muellerzr and ricklamers in https://github.com/huggingface/accelerate/pull/2565
* We now support custom deepspeed env files. As with standalone `deepspeed`, set it with the `DS_ENV_FILE` environment variable by muellerzr in https://github.com/huggingface/accelerate/pull/2566
* Resolve ZeRO-3 Initialization Failure in already-started distributed environments by sword865 in https://github.com/huggingface/accelerate/pull/2578
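
A sketch of the custom env-file usage, with the file path and script name as placeholders:

```shell
# As with standalone deepspeed, point DS_ENV_FILE at a custom env file
export DS_ENV_FILE=.deepspeed_env
accelerate launch train.py
```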

What's Changed
* Fix test_script.py on TPU v2/v3 by vanbasten23 in https://github.com/huggingface/accelerate/pull/2542
* Add mapping `main_process_ip` and `master_addr` when not using standard as deepspeed launcher by asdfry in https://github.com/huggingface/accelerate/pull/2495
* split_between_processes for Dataset by geronimi73 in https://github.com/huggingface/accelerate/pull/2433
* Include working driver check by muellerzr in https://github.com/huggingface/accelerate/pull/2558
* 🚨🚨🚨Move to using tags rather than latest for docker images and consolidate image repos 🚨 🚨🚨 by muellerzr in https://github.com/huggingface/accelerate/pull/2554
* Add Cambricon MLU accelerator support by huismiling in https://github.com/huggingface/accelerate/pull/2552
* Add NUMA affinity control for NVIDIA GPUs by muellerzr in https://github.com/huggingface/accelerate/pull/2535
* Add log message for RTX 4000 series when performing multi-gpu inference with device_map by SunMarc in https://github.com/huggingface/accelerate/pull/2557
* Improve deepspeed env gen by muellerzr in https://github.com/huggingface/accelerate/pull/2565
* Allow for setting deterministic algorithms by muellerzr in https://github.com/huggingface/accelerate/pull/2569
* Unpin deepspeed by muellerzr in https://github.com/huggingface/accelerate/pull/2570
* Rm uv install by muellerzr in https://github.com/huggingface/accelerate/pull/2577
* Allow for custom deepspeed env files by muellerzr in https://github.com/huggingface/accelerate/pull/2566
* [docs] Missing functions from API by stevhliu in https://github.com/huggingface/accelerate/pull/2580
* Update data_loader.py to Ensure Reproducibility in Multi-Process Environments with Dataloader Shuffle by universuen in https://github.com/huggingface/accelerate/pull/2584
* Refactor affinity and make it stateful by muellerzr in https://github.com/huggingface/accelerate/pull/2579
* Refactor and improve model estimator tool by muellerzr in https://github.com/huggingface/accelerate/pull/2581
* Fix `load_checkpoint_in_model` behavior when unexpected keys are in the checkpoint by fxmarty in https://github.com/huggingface/accelerate/pull/2588
* Guard stateful objects by muellerzr in https://github.com/huggingface/accelerate/pull/2572
* Expound PartialState docstring by muellerzr in https://github.com/huggingface/accelerate/pull/2589
* [docs] Fix kwarg docstring by stevhliu in https://github.com/huggingface/accelerate/pull/2590
* Allow notebook_launcher to launch to multiple GPUs from Colab by StefanTodoran in https://github.com/huggingface/accelerate/pull/2561
* Fix warning log for unused checkpoint keys by fxmarty in https://github.com/huggingface/accelerate/pull/2594
* Resolve ZeRO-3 Initialization Failure in Pre-Set Torch Distributed Environments (huggingface/transformers#28803) by sword865 in https://github.com/huggingface/accelerate/pull/2578
* Refactor PartialState and AcceleratorState by muellerzr in https://github.com/huggingface/accelerate/pull/2576
* Allow for force unwrapping by muellerzr in https://github.com/huggingface/accelerate/pull/2595
* Pin hub for tests by muellerzr in https://github.com/huggingface/accelerate/pull/2608
* Default false for trust_remote_code by muellerzr in https://github.com/huggingface/accelerate/pull/2607
* fix llama example for pippy by SunMarc in https://github.com/huggingface/accelerate/pull/2616
* Fix links in Quick Tour by muellerzr in https://github.com/huggingface/accelerate/pull/2617
* Link to bash in env reporting by muellerzr in https://github.com/huggingface/accelerate/pull/2623
* Unpin hub by muellerzr in https://github.com/huggingface/accelerate/pull/2625

New Contributors
* asdfry made their first contribution in https://github.com/huggingface/accelerate/pull/2495
* geronimi73 made their first contribution in https://github.com/huggingface/accelerate/pull/2433
* huismiling made their first contribution in https://github.com/huggingface/accelerate/pull/2552
* universuen made their first contribution in https://github.com/huggingface/accelerate/pull/2584
* StefanTodoran made their first contribution in https://github.com/huggingface/accelerate/pull/2561
* sword865 made their first contribution in https://github.com/huggingface/accelerate/pull/2578

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v0.28.0...v0.29.0

0.28.0

Core
* Introduce a `DataLoaderConfiguration` and begin deprecation of arguments in the `Accelerator`

```diff
+from accelerate import DataLoaderConfiguration
+dl_config = DataLoaderConfiguration(split_batches=True, dispatch_batches=True)
-accelerator = Accelerator(split_batches=True, dispatch_batches=True)
+accelerator = Accelerator(dataloader_config=dl_config)
```

* Allow gradients to be synced each data batch while performing gradient accumulation, useful when training in FSDP, by fabianlim in https://github.com/huggingface/accelerate/pull/2531

```diff
 from accelerate import GradientAccumulationPlugin
 plugin = GradientAccumulationPlugin(
+    num_steps=2,
     sync_each_batch=sync_each_batch
 )
 accelerator = Accelerator(gradient_accumulation_plugin=plugin)
```



Torch XLA
* Support for XLA on the GPU by anw90 in https://github.com/huggingface/accelerate/pull/2176
* Enable gradient accumulation on TPU in https://github.com/huggingface/accelerate/pull/2453

FSDP
* Support downstream FSDP + QLoRA through tweaks allowing configuration of buffer precision by pacman100 in https://github.com/huggingface/accelerate/pull/2544

`launch` changes
* Support `mpirun` for multi-cpu training by dmsuehir in https://github.com/huggingface/accelerate/pull/2493

What's Changed
* Fix model metadata issue check by muellerzr in https://github.com/huggingface/accelerate/pull/2435
* Use py 3.9 by muellerzr in https://github.com/huggingface/accelerate/pull/2436
* Fix seedable sampler logic and expound docs by muellerzr in https://github.com/huggingface/accelerate/pull/2434
* Fix tied_pointers_to_remove type by fxmarty in https://github.com/huggingface/accelerate/pull/2439
* Make test assertions more idiomatic by akx in https://github.com/huggingface/accelerate/pull/2420
* Prefer `is_torch_tensor` over `hasattr` for torch.compile. by PhilJd in https://github.com/huggingface/accelerate/pull/2387
* Enable more Ruff lints & fix issues by akx in https://github.com/huggingface/accelerate/pull/2419
* Fix warning when dispatching model by SunMarc in https://github.com/huggingface/accelerate/pull/2442
* Make torch xla available on GPU by anw90 in https://github.com/huggingface/accelerate/pull/2176
* Include pippy_file_path by muellerzr in https://github.com/huggingface/accelerate/pull/2444
* [Big deprecation] Introduces a `DataLoaderConfig` by muellerzr in https://github.com/huggingface/accelerate/pull/2441
* Check for None by muellerzr in https://github.com/huggingface/accelerate/pull/2452
* Fix the pytest version to be less than 8.0.1 by BenjaminBossan in https://github.com/huggingface/accelerate/pull/2461
* Fix wrong `is_namedtuple` implementation by fxmarty in https://github.com/huggingface/accelerate/pull/2475
* Use grad-accum on TPU by muellerzr in https://github.com/huggingface/accelerate/pull/2453
* Add pre-commit configuration by akx in https://github.com/huggingface/accelerate/pull/2451
* Replace `os.path.sep.join` path manipulations with a helper by akx in https://github.com/huggingface/accelerate/pull/2446
* DOC: Fixes to Accelerator docstring by BenjaminBossan in https://github.com/huggingface/accelerate/pull/2443
* Context manager fixes by akx in https://github.com/huggingface/accelerate/pull/2450
* Fix TPU with new `XLA` device type by will-cromar in https://github.com/huggingface/accelerate/pull/2467
* Free mps memory by SunMarc in https://github.com/huggingface/accelerate/pull/2483
* [FIX] allow `Accelerator` to detect distributed type from the "LOCAL_RANK" env variable for XPU by faaany in https://github.com/huggingface/accelerate/pull/2473
* Fix CI tests due to pathlib issues by muellerzr in https://github.com/huggingface/accelerate/pull/2491
* Remove all cases of torchrun in tests and centralize as `accelerate launch` by muellerzr in https://github.com/huggingface/accelerate/pull/2498
* Fix link typo by SunMarc in https://github.com/huggingface/accelerate/pull/2503
* [docs] Accelerator API by stevhliu in https://github.com/huggingface/accelerate/pull/2465
* Docstring fixup by muellerzr in https://github.com/huggingface/accelerate/pull/2504
* [docs] Divide training and inference by stevhliu in https://github.com/huggingface/accelerate/pull/2466
* add custom dtype INT2 by SunMarc in https://github.com/huggingface/accelerate/pull/2505
* quanto compatibility for cpu/disk offload by SunMarc in https://github.com/huggingface/accelerate/pull/2481
* [docs] Quicktour by stevhliu in https://github.com/huggingface/accelerate/pull/2456
* Check if hub down by muellerzr in https://github.com/huggingface/accelerate/pull/2506
* Remove offline stuff by muellerzr in https://github.com/huggingface/accelerate/pull/2509
* Fixed 0MiB bug in convert_file_size_to_int by StoyanStAtanasov in https://github.com/huggingface/accelerate/pull/2507
* Fix edge case in infer_auto_device_map when dealing with buffers by SunMarc in https://github.com/huggingface/accelerate/pull/2511
* [docs] Fix typos by omahs in https://github.com/huggingface/accelerate/pull/2490
* fix typo in launch.py (`----main_process_port` to `--main_process_port`) by DerrickWang005 in https://github.com/huggingface/accelerate/pull/2516
* Add copyright + some ruff lint things by muellerzr in https://github.com/huggingface/accelerate/pull/2523
* Don't manage `PYTORCH_NVML_BASED_CUDA_CHECK` when calling `accelerate.utils.imports.is_cuda_available()` by luiscape in https://github.com/huggingface/accelerate/pull/2524
* Quanto compatibility with QBitsTensor by SunMarc in https://github.com/huggingface/accelerate/pull/2526
* Remove unnecessary `env=os.environ.copy()`s by akx in https://github.com/huggingface/accelerate/pull/2449
* Launch mpirun from accelerate launch for multi-CPU training by dmsuehir in https://github.com/huggingface/accelerate/pull/2493
* Enable using dash or underscore for CLI args by muellerzr in https://github.com/huggingface/accelerate/pull/2527
* Update the default behavior of `zero_grad(set_to_none=None)` to align with PyTorch by yongchanghao in https://github.com/huggingface/accelerate/pull/2472
* Update link to dynamo/compile doc by WarmongeringBeaver in https://github.com/huggingface/accelerate/pull/2533
* Check if the buffers fit GPU memory after device map auto inferred by notsyncing in https://github.com/huggingface/accelerate/pull/2412
* [Refactor] Refactor send_to_device to treat tensor-like first by vmoens in https://github.com/huggingface/accelerate/pull/2438
* Overdue email change... by muellerzr in https://github.com/huggingface/accelerate/pull/2534
* [docs] Troubleshoot by stevhliu in https://github.com/huggingface/accelerate/pull/2538
* Remove extra double-dash in error message by drscotthawley in https://github.com/huggingface/accelerate/pull/2541
* Allow Gradients to be Synced Each Data Batch While Performing Gradient Accumulation by fabianlim in https://github.com/huggingface/accelerate/pull/2531
* Update FSDP mixed precision setter to enable fsdp+qlora by pacman100 in https://github.com/huggingface/accelerate/pull/2544
* Use uv instead of pip install for github CI by muellerzr in https://github.com/huggingface/accelerate/pull/2546

New Contributors
* anw90 made their first contribution in https://github.com/huggingface/accelerate/pull/2176
* StoyanStAtanasov made their first contribution in https://github.com/huggingface/accelerate/pull/2507
* omahs made their first contribution in https://github.com/huggingface/accelerate/pull/2490
* DerrickWang005 made their first contribution in https://github.com/huggingface/accelerate/pull/2516
* luiscape made their first contribution in https://github.com/huggingface/accelerate/pull/2524
* dmsuehir made their first contribution in https://github.com/huggingface/accelerate/pull/2493
* yongchanghao made their first contribution in https://github.com/huggingface/accelerate/pull/2472
* WarmongeringBeaver made their first contribution in https://github.com/huggingface/accelerate/pull/2533
* vmoens made their first contribution in https://github.com/huggingface/accelerate/pull/2438
* drscotthawley made their first contribution in https://github.com/huggingface/accelerate/pull/2541
* fabianlim made their first contribution in https://github.com/huggingface/accelerate/pull/2531

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v0.27.2...v0.28.0
