Optimum-amd

Latest version: v0.1.0

v0.1.0

We are glad to release the first version of Optimum-AMD, which extends Hugging Face libraries to support AMD ROCm GPUs and Ryzen AI laptops. More to come in the coming weeks!

`RyzenAIModelForImageClassification` for Ryzen AI NPU

Optimum-AMD lets you leverage the [Ryzen AI NPU](https://www.amd.com/en/products/processors/business-systems/ryzen-ai.html) (Neural Processing Unit) for image classification through the [`RyzenAIModelForImageClassification`](https://huggingface.co/docs/optimum/amd/ryzenai/package_reference) class, enabling faster local inference. Check out the [documentation](https://huggingface.co/docs/optimum/amd/ryzenai/overview) for more details!
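A minimal sketch of how this could look, with the NPU-dependent calls left as comments since they require Ryzen AI hardware; the checkpoint and config names below are placeholders, not verified identifiers:

```python
# Sketch, assuming Ryzen AI hardware and optimum-amd installed (untested here;
# checkpoint and vaip_config names are placeholders):
# from optimum.amd.ryzenai import RyzenAIModelForImageClassification
# model = RyzenAIModelForImageClassification.from_pretrained(
#     "<onnx-checkpoint>", vaip_config="<vaip_config.json>")
# logits = model(pixel_values=pixel_values).logits[0]

# Turning the returned logits into a predicted class index works as usual:
def predict(logits):
    return max(range(len(logits)), key=lambda i: logits[i])

print(predict([0.1, 2.3, -0.5]))  # -> 1
```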

`amdrun` wrapper on `torchrun` to dispatch on the most optimal GPUs

When using multiple GPUs that need to communicate (tensor parallelism, data parallelism, etc.), the choice of devices [is crucial](https://huggingface.co/docs/optimum/main/en/amd/amdgpu/perf_hardware) for optimal performance. The `amdrun` command-line tool that ships with Optimum-AMD automatically dispatches a `torchrun` job on a single node to the optimal devices:


```
amdrun --ngpus <num_gpus> <script> <script_args>
```

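To illustrate the problem `amdrun` automates (this is not its actual implementation, and the bandwidth figures are made up): given pairwise link bandwidths between devices, pick the subset whose GPUs are best connected to each other.

```python
from itertools import combinations

# Hypothetical link bandwidths (GB/s) between 4 GPUs on one node; GPUs 0-1 and
# 2-3 share fast direct links, the rest go through a slower hop.
bandwidth = {
    (0, 1): 50, (0, 2): 16, (0, 3): 16,
    (1, 2): 16, (1, 3): 16, (2, 3): 50,
}

def best_gpu_subset(num_gpus, n_devices=4):
    """Pick the num_gpus devices with the highest aggregate pairwise bandwidth."""
    def score(subset):
        return sum(bandwidth[tuple(sorted(p))] for p in combinations(subset, 2))
    return max(combinations(range(n_devices), num_gpus), key=score)

print(best_gpu_subset(2))  # -> (0, 1): one of the directly linked pairs
```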
ONNX Runtime `ROCMExecutionProvider` support

Optimum ONNX Runtime integration supports ROCm natively: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/amdgpu
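With a ROCm-enabled ONNX Runtime build installed (an assumption; the session creation below is left commented for that reason), provider selection follows the usual prefer-ROCm-then-fall-back ordering, sketched here with a small illustrative helper:

```python
# Illustrative helper: prefer the ROCm provider when available, fall back to CPU.
def pick_providers(available):
    preferred = ["ROCMExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# With a ROCm-enabled ONNX Runtime installed (assumption, untested here):
# import onnxruntime as ort
# session = ort.InferenceSession(
#     "model.onnx", providers=pick_providers(ort.get_available_providers()))

print(pick_providers(["CPUExecutionProvider", "ROCMExecutionProvider"]))
# -> ['ROCMExecutionProvider', 'CPUExecutionProvider']
```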

Text Generation Inference library for LLM inference supports ROCm

Text Generation Inference supports ROCm natively: https://huggingface.co/docs/text-generation-inference/quicktour
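Once a ROCm-enabled TGI server is running (the address below is an assumption), clients talk to its `/generate` HTTP endpoint. This sketch only builds the JSON request body; the network call is left commented since it needs a live server:

```python
import json

# Request body for TGI's /generate endpoint (server address is an assumption).
payload = {"inputs": "What is ROCm?", "parameters": {"max_new_tokens": 32}}
body = json.dumps(payload).encode("utf-8")

# With a server running locally (untested here):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/generate", data=body,
#     headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```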

GPTQ quantization support

AutoGPTQ library supports ROCm natively: https://github.com/PanQiWei/AutoGPTQ#quick-installation
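For intuition, the arithmetic underneath 4-bit weight quantization looks like the round-to-nearest sketch below. GPTQ itself goes further, compensating quantization error layer by layer against calibration data; this only shows the basic quantize/dequantize step:

```python
# Minimal round-to-nearest int4 symmetric quantization sketch (not GPTQ's
# error-compensated algorithm, just the per-group quantize/dequantize step).
def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7  # int4 symmetric range is [-8, 7]
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.7, 0.33, 0.04]
q, scale = quantize_4bit(w)     # q = [1, -7, 3, 0], scale = 0.1
w_hat = dequantize(q, scale)    # close to w, within one quantization step
```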

Flash Attention 2 support for ROCm

Transformers natively supports Flash Attention 2 on ROCm: https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2
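Flash Attention 2 computes exactly the standard softmax attention, just with a fused, memory-efficient kernel, so outputs match the plain reference below (single-query case, pure Python for clarity):

```python
import math

# Plain reference for the softmax attention that Flash Attention 2 accelerates;
# the results agree, only speed and memory usage differ.
def attention(q, keys, values):
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

out = attention([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], [[2.0, 0.0], [4.0, 0.0]])
print(out)  # -> [3.0, 0.0]: identical keys give uniform weights, i.e. the mean
```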

What's Changed
* Use MIT license instead of Apache 2.0 by fxmarty in https://github.com/huggingface/optimum-amd/pull/11
* Pip installable version of optimum-amd by fxmarty in https://github.com/huggingface/optimum-amd/pull/12
* Add dockerfile for Transformers + flash attention by fxmarty in https://github.com/huggingface/optimum-amd/pull/27
* Add RyzenAIModelForImageClassification by mht-sharma in https://github.com/huggingface/optimum-amd/pull/16
* Add `amdrun` by IlyasMoutawwakil in https://github.com/huggingface/optimum-amd/pull/26
* Add documentation by fxmarty in https://github.com/huggingface/optimum-amd/pull/28
* [Benchmark] adding `optimum-benchmark` compatible config files by IlyasMoutawwakil in https://github.com/huggingface/optimum-amd/pull/14
* Precise documentation by fxmarty in https://github.com/huggingface/optimum-amd/pull/29
* Improve documentation and add ort dockerfile by fxmarty in https://github.com/huggingface/optimum-amd/pull/30
* Fix makefile by fxmarty in https://github.com/huggingface/optimum-amd/pull/31
* Update ORT docker base image by mht-sharma in https://github.com/huggingface/optimum-amd/pull/32
* Update README.md by mht-sharma in https://github.com/huggingface/optimum-amd/pull/33
* Udpate documentation by echarlaix in https://github.com/huggingface/optimum-amd/pull/34
* Update Ryzen description by mht-sharma in https://github.com/huggingface/optimum-amd/pull/35
* Add pip installation by fxmarty in https://github.com/huggingface/optimum-amd/pull/36

New Contributors
* fxmarty made their first contribution in https://github.com/huggingface/optimum-amd/pull/11
* mht-sharma made their first contribution in https://github.com/huggingface/optimum-amd/pull/16
* IlyasMoutawwakil made their first contribution in https://github.com/huggingface/optimum-amd/pull/26
* echarlaix made their first contribution in https://github.com/huggingface/optimum-amd/pull/34

**Full Changelog**: https://github.com/huggingface/optimum-amd/commits/v0.1.0
