nexaai

Latest version: v0.0.9.4


0.0.8.8

Improvements 🚀
* The `nexa eval` command now supports evaluating memory usage, latency, and energy consumption ([#166](https://github.com/NexaAI/nexa-sdk/pull/166)); a usage sketch follows
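
A minimal usage sketch; the model name is an arbitrary example and the exact options are not listed in these notes, so check `nexa eval --help` for what your build actually supports:

```bash
# Evaluate a model's resource usage (memory, latency, energy) on this device.
# The model name below is illustrative, not taken from the release notes.
nexa eval llama3.2
```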

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```


GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```


GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For **Windows PowerShell**:
```bash
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For **Windows Command Prompt**:
```bash
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.7...v0.0.8.8](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.7...v0.0.8.8)

v0.0.8.7-rocm621
Release notes identical to the 0.0.8.7 release below.

v0.0.8.7-metal
Release notes identical to the 0.0.8.7 release below.

v0.0.8.7-cu124
Release notes identical to the 0.0.8.7 release below.

0.0.8.7

What's New ✨
* Support for running models from a local path ([#151](https://github.com/NexaAI/nexa-sdk/pull/151))
  * See details in the [CLI doc](https://github.com/NexaAI/nexa-sdk/blob/main/CLI.md#run-a-model) and the [Server doc](https://github.com/NexaAI/nexa-sdk/blob/main/SERVER.md#start-local-server)
  * Run an NLP model from a local path: `nexa run ../models/gemma-1.1-2b-instruct-q4_0.gguf -lp -mt NLP`
  * Start a multimodal model server from a local directory: `nexa server ../models/llava-v1.6-vicuna-7b/ -lp -mt MULTIMODAL`
* Embedding model support ([#159](https://github.com/NexaAI/nexa-sdk/pull/159))
  * List of supported embedding models: [model hub embedding models](https://nexa.ai/models?tasks=Text+Embedding)
  * See details in [**nexa embed**](https://github.com/NexaAI/nexa-sdk/blob/main/CLI.md#generate-embeddings)
  * Quick example: `nexa embed nomic "Advancing on-device AI, together." >> generated_embeddings.txt` generates embeddings for the quoted text with the Nomic model and appends the result to `generated_embeddings.txt`

* VLM model support in `/v1/chat/completions` ([#154](https://github.com/NexaAI/nexa-sdk/pull/154)); see the request sketch below
  * See details in the [Server doc](https://github.com/NexaAI/nexa-sdk/blob/main/SERVER.md#2-chat-completions-v1chatcompletions)
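
A minimal request sketch, assuming a locally running `nexa server` and an OpenAI-style message body; the host, port, and image URL are illustrative, and the exact schema is the one documented in SERVER.md:

```bash
# Illustrative request only; consult SERVER.md for the exact request schema.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}}
      ]
    }]
  }'
```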

* (Beta) Support for running model evaluation on your device ([#150](https://github.com/NexaAI/nexa-sdk/pull/150))

Improvements 🚀
* Customizable maximum context window (`--nctx`) for NLP and VLM models ([#155](https://github.com/NexaAI/nexa-sdk/pull/155) and [#158](https://github.com/NexaAI/nexa-sdk/pull/158)); for example:
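
A quick sketch; the model name and context size are arbitrary illustrations:

```bash
# Run an NLP model with a larger context window (4096 tokens here).
nexa run llama3.2 --nctx 4096
```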

* CV models are now supported when running with the `-hf` flag ([#151](https://github.com/NexaAI/nexa-sdk/pull/151))
  * Pull and run a CV model from Hugging Face: `nexa run -hf Steward/lcm-dreamshaper-v7-gguf -mt COMPUTER_VISION`


Fixes 🐞
* Fixed streaming issues with `/v1/chat/completions` ([#152](https://github.com/NexaAI/nexa-sdk/pull/152))

* Resolved download problems on macOS and Windows ([#146](https://github.com/NexaAI/nexa-sdk/pull/146))

Upgrade Guide 📝
To upgrade the Nexa SDK, use the command for your system:
CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows PowerShell**:
```bash
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Command Prompt**:
```bash
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```

For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.6.1...v0.0.8.7](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.6.1...v0.0.8.7)

v0.0.8.6-rocm621
Release notes identical to the 0.0.8.6 release below.

v0.0.8.6-metal
Release notes identical to the 0.0.8.6 release below.

v0.0.8.6-cu124
Release notes identical to the 0.0.8.6 release below.

0.0.8.6

What's New ✨
* Added support for new models ([#135](https://github.com/NexaAI/nexa-sdk/pull/135))
  * [AMD-Llama-135m](https://nexaai.com/AMD/AMD-Llama-135m/gguf-fp16/file): `nexa run AMD-Llama-135m:fp16`
  * [whisper-large-v3-turbo](https://nexaai.com/Systran/faster-whisper-large-v3-turbo/bin-cpu-fp16/readme): `nexa run faster-whisper-large-turbo`
* Included a new demo, [local file organizer v0.0.2](https://github.com/NexaAI/nexa-sdk/tree/main/examples/local_file_organization) ([#132](https://github.com/NexaAI/nexa-sdk/pull/132))
* Implemented support for concurrent downloads ([#125](https://github.com/NexaAI/nexa-sdk/pull/125))
* Added a few 🌶️ [uncensored models](https://nexaai.com/models?tasks=Uncensored)

Improvements 🔧
* Added a prebuilt wheel for AMD ROCm

Fixes 🐞
* Fixed the progress bar not showing during image generation

Upgrade Guide 📝

To upgrade the Nexa SDK, use the command for your system:

CPU
```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```


GPU (Metal)
For the GPU version supporting **Metal (macOS)**:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```


GPU (CUDA)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For **Windows PowerShell**:
```bash
$env:CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON"; pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For **Windows Command Prompt**:
```bash
set CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" & pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For **Windows Git Bash**:
```bash
CMAKE_ARGS="-DGGML_CUDA=ON -DSD_CUBLAS=ON" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cu124 --extra-index-url https://pypi.org/simple --no-cache-dir
```


GPU (ROCm)
For **Linux**:
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/rocm621 --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.

[Full Changelog - v0.0.8.5...v0.0.8.6](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.5...v0.0.8.6)

v0.0.8.5-metal
Release notes identical to the 0.0.8.5 release below.

v0.0.8.5-cu124
Release notes identical to the 0.0.8.5 release below.

0.0.8.5

What's New ✨
Added support for Llama3.2 models:

| Model | Command to Run |
|-------|----------------|
| [Llama3.2 3B](https://nexaai.com/meta/Llama3.2-3B-Instruct/gguf-q4_0/readme) | `nexa run llama3.2` |
| [Llama3.2 1B](https://nexaai.com/meta/Llama3.2-1B-Instruct/gguf-q4_0/readme) | `nexa run Llama3.2-1B-Instruct:q4_0` |

Update Nexa SDK 🛠️

CPU Version
To update the CPU version of Nexa SDK, run:
```bash
pip install nexaai -U --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```


GPU Version (Metal - macOS)
For the GPU version supporting Metal (macOS), run:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai -U --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```


Other GPU Support
For detailed installation instructions of Nexa SDK for **CUDA** and **AMD GPU** support, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the main README.

**Note:** To update an existing installation to v0.0.8.5, use the same installation command with the `-U` flag added to `pip install`.


v0.0.8.4-metal
Release notes identical to the 0.0.8.4 release below.

0.0.8.4

What's New ✨
* Added support for Qwen2.5, Qwen2.5-code, and Qwen2.5-Math

Install Nexa SDK 🛠️

CPU Installation

To install the CPU version of Nexa SDK, run:

```bash
pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```

GPU Installation (Metal - macOS)

For the GPU version supporting Metal (macOS), run:

```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```


For detailed installation instructions of Nexa SDK for **CUDA** and **AMD GPU** support, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the main README.

To update an existing installation to v0.0.8.4, use the same installation command with the `-U` flag added to `pip install`.

Run Qwen2.5 with Nexa SDK

Option 1: Run official GGUF files from Qwen HuggingFace Page 🤗

You can use the following command to pull and run language models in GGUF format from 🤗 Hugging Face: `nexa run -hf <hf model id>`. Choose a command based on your preferred model size; an illustrative invocation is sketched below:
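
For example, assuming one of the official Qwen2.5 GGUF repositories on Hugging Face (the repository ID below is an illustration, not taken from the release notes):

```bash
# Illustrative: pull and run a Qwen2.5 GGUF model from Hugging Face.
# Swap the repo ID for the model size you prefer.
nexa run -hf Qwen/Qwen2.5-1.5B-Instruct-GGUF
```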

0.0.8.3

What's New ✨
* Added image generation model support: [SD3](https://nexaai.com/StabilityAI/stable-diffusion-3-medium/gguf-q4_0/file) and [Flux](https://nexaai.com/BlackForestLabs/FLUX.1-schnell/gguf-q4_0/file) ([#75](https://github.com/NexaAI/nexa-sdk/issues/75))
* Added NLP model support: [OpenELM](https://nexaai.com/apple/OpenELM-3B/gguf-q4_K_M/file) and [Phi3.5](https://nexaai.com/microsoft/Phi-3.5-mini-instruct/gguf-q4_0/file)
* Exposed logits through the API ([#67](https://github.com/NexaAI/nexa-sdk/pull/67))

Improvements 🔧
* Added more SDK development examples ([examples](https://github.com/NexaAI/nexa-sdk/tree/main/examples))
* Added ROCm support for AMD GPUs ([#90](https://github.com/NexaAI/nexa-sdk/pull/90))

Fixes 🐞
* Fixed a server issue when executing `curl` commands on Windows ([#79](https://github.com/NexaAI/nexa-sdk/issues/79))
* Fixed a nanoLlava file-mapping issue ([#68](https://github.com/NexaAI/nexa-sdk/issues/68))

Upgrade Guide 📝

To upgrade the CPU version of the Nexa SDK, follow these steps:

1. Open a terminal or command prompt.

2. Run the following command:

```bash
pip install -U nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```


This command will upgrade your existing NexaAI SDK installation to the latest CPU-compatible version.

Note
If you encounter any issues or want to ensure a clean installation:

1. Uninstall the current version:
```bash
pip uninstall nexaai
```


2. Reinstall the package using this command:
```bash
pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```


For more detailed installation instructions, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the README.


[Full Changelog - v0.0.8.2...v0.0.8.3](https://github.com/NexaAI/nexa-sdk/compare/v0.0.8.2...v0.0.8.3)

v0.0.8.2-metal


v0.0.8.2-cu124
