Nexaai

Latest version: v0.0.9.4


Qwen2.5 1.5B:

```bash
nexa run -hf Qwen/Qwen2.5-1.5B-Instruct-GGUF
```


Qwen2.5 3B:
```bash
nexa run -hf Qwen/Qwen2.5-3B-Instruct-GGUF
```


Qwen2.5 7B:
```bash
nexa run -hf Qwen/Qwen2.5-7B-Instruct-GGUF
```


Qwen2.5 14B:
```bash
nexa run -hf Qwen/Qwen2.5-14B-Instruct-GGUF
```


The CLI will prompt you to select one file from several quantization options; enter the number of your choice. If you're unsure which one to pick, start with "q4_0.gguf".

You will then have Qwen2.5 running locally on your computer.
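Launching a model can also be scripted. A minimal sketch, assuming the `nexa` CLI is installed and on your PATH (the `nexa_run_argv` helper and the size table are illustrative, not part of the SDK):

```python
# Size-to-repo mapping mirroring the commands listed above.
QWEN25_GGUF_REPOS = {
    "0.5B": "Qwen/Qwen2.5-0.5B-Instruct-GGUF",
    "1.5B": "Qwen/Qwen2.5-1.5B-Instruct-GGUF",
    "3B": "Qwen/Qwen2.5-3B-Instruct-GGUF",
    "7B": "Qwen/Qwen2.5-7B-Instruct-GGUF",
    "14B": "Qwen/Qwen2.5-14B-Instruct-GGUF",
}

def nexa_run_argv(size: str) -> list[str]:
    """Build the argv for `nexa run -hf <repo>` for a given Qwen2.5 size."""
    return ["nexa", "run", "-hf", QWEN25_GGUF_REPOS[size]]
```

Passing the result to `subprocess.run` launches the same interactive quantization picker described above.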

**Note:** For Qwen2.5-Coder and Qwen2.5-Math, no official GGUF files are available. Please use Option 2 for these models.

Option 2: Pull and Run Qwen2.5, Qwen2.5-Coder, Qwen2.5-Math from the Nexa Model Hub 🐙

We have converted and uploaded the following models to the Nexa Model Hub:

| Model | Nexa Run Command |
|-------|------------------|
| [Qwen2.5 0.5B](https://nexaai.com/Qwen/Qwen2.5-0.5B-Instruct/gguf-q4_0/readme) | `nexa run Qwen2.5-0.5B-Instruct:q4_0` |
| [Qwen2.5 1.5B](https://nexaai.com/Qwen/Qwen2.5-1.5B-Instruct/gguf-q4_0/readme) | `nexa run Qwen2.5-1.5B-Instruct:q4_0` |
| [Qwen2.5 3B](https://nexaai.com/Qwen/Qwen2.5-3B-Instruct/gguf-q4_0/readme) | `nexa run Qwen2.5-3B-Instruct:q4_0` |
| [Qwen2.5-Coder 1.5B](https://nexaai.com/Qwen/Qwen2.5-Coder-1.5B-Instruct/gguf-q4_0/readme) | `nexa run Qwen2.5-Coder-1.5B-Instruct:q4_0` |
| [Qwen2.5-Math 1.5B](https://nexaai.com/Qwen/Qwen2.5-Math-1.5B-Instruct/gguf-q4_0/readme) | `nexa run Qwen2.5-Math-1.5B-Instruct:q4_0` |

Visit the model pages to choose your preferred parameter size and quantization. We will continue to upload and support more models in the Qwen2.5 family.
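The hub commands in the table all share one shape, `nexa run <model>:<quant>`. A small sketch composing it (the `hub_run_command` helper is illustrative, not part of the SDK):

```python
def hub_run_command(model: str, quant: str = "q4_0") -> str:
    """Compose a Nexa Model Hub run command, e.g. `nexa run Qwen2.5-3B-Instruct:q4_0`."""
    return f"nexa run {model}:{quant}"
```

For example, `hub_run_command("Qwen2.5-Math-1.5B-Instruct")` reproduces the last row of the table.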

Please feel free to share your feedback and feature/model requests on the [issue page](https://github.com/NexaAI/nexa-sdk/issues/new/choose).

v0.0.8.3-metal
What's New ✨
* Added image generation model support: [SD3](https://nexaai.com/StabilityAI/stable-diffusion-3-medium/gguf-q4_0/file) and [Flux](https://nexaai.com/BlackForestLabs/FLUX.1-schnell/gguf-q4_0/file) ([#75](https://github.com/NexaAI/nexa-sdk/issues/75))
* Added NLP model support: [OpenELM](https://nexaai.com/apple/OpenELM-3B/gguf-q4_K_M/file) and [Phi3.5](https://nexaai.com/microsoft/Phi-3.5-mini-instruct/gguf-q4_0/file)
* Exposed logits through the API ([#67](https://github.com/NexaAI/nexa-sdk/pull/67))

Improvements 🔧
* Added more SDK development examples ([examples](https://github.com/NexaAI/nexa-sdk/tree/main/examples))
* Added ROCm support for AMD GPUs ([#90](https://github.com/NexaAI/nexa-sdk/pull/90))

Fixes 🐞
* Fixed a server issue when executing `curl` commands on Windows ([#79](https://github.com/NexaAI/nexa-sdk/issues/79))
* Fixed a nanoLLaVA file mapping issue ([#68](https://github.com/NexaAI/nexa-sdk/issues/68))

Upgrade Guide 📝

To upgrade the NexaAI SDK for GPU use with Metal on macOS, follow these steps:

1. Open a terminal.

2. Run the following command:

```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```


This command will upgrade your existing NexaAI SDK installation to the latest Metal-compatible version.
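If you script the upgrade (for example in CI), the one-liner above decomposes into an environment variable plus a pip argv. A sketch, assuming `pip` belongs to the environment you want to upgrade:

```python
import os
import shlex

# The Metal upgrade command above, split into env + argv for subprocess use.
METAL_CMAKE_ARGS = "-DGGML_METAL=ON -DSD_METAL=ON"
METAL_PIP_CMD = (
    "pip install -U nexaai --prefer-binary "
    "--index-url https://nexaai.github.io/nexa-sdk/whl/metal "
    "--extra-index-url https://pypi.org/simple --no-cache-dir"
)

def metal_upgrade() -> tuple[dict, list[str]]:
    """Return (environment, argv) equivalent to the shell one-liner above."""
    env = dict(os.environ, CMAKE_ARGS=METAL_CMAKE_ARGS)
    return env, shlex.split(METAL_PIP_CMD)
```

The returned pair can be handed to `subprocess.run(argv, env=env)`.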

Note
If you encounter any issues or want to ensure a clean installation:

1. Uninstall the current version:
```bash
pip uninstall nexaai
```


2. Reinstall the package using this command:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```


For more detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.2...v0.0.8.3](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.2...v0.0.8.3)

v0.0.8.3-cu124
What's New ✨
* Added image generation model support: [SD3](https://nexaai.com/StabilityAI/stable-diffusion-3-medium/gguf-q4_0/file) and [Flux](https://nexaai.com/BlackForestLabs/FLUX.1-schnell/gguf-q4_0/file) ([#75](https://github.com/NexaAI/nexa-sdk/issues/75))
* Added NLP model support: [OpenELM](https://nexaai.com/apple/OpenELM-3B/gguf-q4_K_M/file) and [Phi3.5](https://nexaai.com/microsoft/Phi-3.5-mini-instruct/gguf-q4_0/file)
* Exposed logits through the API ([#67](https://github.com/NexaAI/nexa-sdk/pull/67))

Improvements 🔧
* Added more SDK development examples ([examples](https://github.com/NexaAI/nexa-sdk/tree/main/examples))
* Added ROCm support for AMD GPUs ([#90](https://github.com/NexaAI/nexa-sdk/pull/90))

Fixes 🐞
* Fixed a server issue when executing `curl` commands on Windows ([#79](https://github.com/NexaAI/nexa-sdk/issues/79))
* Fixed a nanoLLaVA file mapping issue ([#68](https://github.com/NexaAI/nexa-sdk/issues/68))

Upgrade Guide 📝
To upgrade the NexaAI SDK for GPU use with CUDA, follow these steps based on your operating system:

For Linux:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For Windows:
- **PowerShell:**
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

- **Command Prompt:**
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

- **Git Bash:**
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

These commands will upgrade your existing NexaAI SDK installation to the latest CUDA-compatible version.
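The shell variants above differ only in how the `CMAKE_ARGS` variable is set. A sketch that returns the right one-liner per shell (the `cuda_upgrade_command` helper is illustrative, not part of the SDK):

```python
_PIP_TAIL = (
    "pip install -U nexaai --prefer-binary "
    "--index-url https://nexaai.github.io/nexa-sdk/whl/cu124 "
    "--extra-index-url https://pypi.org/simple --no-cache-dir"
)

# How each shell sets CMAKE_ARGS before invoking pip.
_PREFIXES = {
    "bash": 'CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" ',
    "powershell": '$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; ',
    "cmd": 'set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & ',
}

def cuda_upgrade_command(shell: str) -> str:
    """Return the CUDA upgrade one-liner for 'bash', 'powershell', or 'cmd'."""
    return _PREFIXES[shell] + _PIP_TAIL
```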

Note
If you encounter any issues or want to ensure a clean installation:

1. Uninstall the current version:
```bash
pip uninstall nexaai
```


2. Reinstall the package using the appropriate command for your system as listed above, but without the `-U` flag.

For more detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.2...v0.0.8.3](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.2...v0.0.8.3)

Qwen2.5 0.5B:

```bash
nexa run -hf Qwen/Qwen2.5-0.5B-Instruct-GGUF
```

0.0.9.3

v0.0.9.2-vulkan1.3.261.1
v0.0.9.2-rocm621
v0.0.9.2-metal
v0.0.9.2-cu124
v0.0.9.1-rocm621
v0.0.9.1-metal
v0.0.9.1-cu124

0.0.9.1

v0.0.9.0-vulkan1.3.261.1
v0.0.9.0-rocm621
v0.0.9.0-metal
v0.0.9.0-cu124

0.0.9.0

v0.0.8.9-rocm621
Improvements 🚀
* Added multiprocessing support to speed up model evaluation tasks ([#175](https://github.com/NexaAI/nexa-sdk/pull/175))
  * Use the `--num_workers` flag to specify the number of parallel processes
  * Example: `nexa eval phi3 --tasks ifeval --num_workers 4`
* Added support for Python 3.13 ([#172](https://github.com/NexaAI/nexa-sdk/pull/172))

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.8...v0.0.8.9](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.8...v0.0.8.9)

v0.0.8.9-metal
Improvements 🚀
* Added multiprocessing support to speed up model evaluation tasks ([#175](https://github.com/NexaAI/nexa-sdk/pull/175))
  * Use the `--num_workers` flag to specify the number of parallel processes
  * Example: `nexa eval phi3 --tasks ifeval --num_workers 4`
* Added support for Python 3.13 ([#172](https://github.com/NexaAI/nexa-sdk/pull/172))

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.8...v0.0.8.9](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.8...v0.0.8.9)

v0.0.8.9-cu124
Improvements 🚀
* Added multiprocessing support to speed up model evaluation tasks ([#175](https://github.com/NexaAI/nexa-sdk/pull/175))
  * Use the `--num_workers` flag to specify the number of parallel processes
  * Example: `nexa eval phi3 --tasks ifeval --num_workers 4`
* Added support for Python 3.13 ([#172](https://github.com/NexaAI/nexa-sdk/pull/172))

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.8...v0.0.8.9](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.8...v0.0.8.9)

0.0.8.9

Improvements 🚀
* Added multiprocessing support to speed up model evaluation tasks ([#175](https://github.com/NexaAI/nexa-sdk/pull/175))
  * Use the `--num_workers` flag to specify the number of parallel processes
  * Example: `nexa eval phi3 --tasks ifeval --num_workers 4`
* Added support for Python 3.13 ([#172](https://github.com/NexaAI/nexa-sdk/pull/172))

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.8...v0.0.8.9](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.8...v0.0.8.9)

v0.0.8.8-rocm621
Improvements 🚀
* `nexa eval` command now supports evaluating memory usage, latency, and energy consumption ([166](https://github.com/NexaAI/nexa-sdk/pull/166))
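What "memory usage" and "latency" mean for a single call can be sketched with the standard library. This is an illustration of the metrics only, not the SDK's implementation (`nexa eval` profiles the model process itself, and energy measurement needs platform-specific counters):

```python
import time
import tracemalloc

def profile_call(fn, *args):
    """Return (result, latency_seconds, peak_heap_bytes) for one call."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    latency = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, latency, peak
```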

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.7...v0.0.8.8](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.7...v0.0.8.8)

v0.0.8.8-metal
Improvements 🚀
* `nexa eval` command now supports evaluating memory usage, latency, and energy consumption ([166](https://github.com/NexaAI/nexa-sdk/pull/166))

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.7...v0.0.8.8](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.7...v0.0.8.8)

v0.0.8.8-cu124
Improvements 🚀
* `nexa eval` command now supports evaluating memory usage, latency, and energy consumption ([166](https://github.com/NexaAI/nexa-sdk/pull/166))

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```powershell
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```cmd
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.7...v0.0.8.8](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.7...v0.0.8.8)


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.