# Ollama for Linux
<img src="https://github.com/jmorganca/ollama/assets/251292/89f8526e-866a-4e19-a73c-3ff850d45c76" height="220">
Ollama for Linux is now available, with GPU acceleration enabled out-of-the-box for Nvidia GPUs.
* 💯 Ollama will run on cloud servers with multiple GPUs attached
* 🤖 Ollama will run on WSL 2 with GPU support
* 😍 Ollama loads as many model layers onto the GPU as memory allows, maximizing performance without crashing
* 🤩 Ollama supports everything from CPU-only machines and small hobby gaming GPUs to powerful workstation graphics cards like the H100
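The automatic layer-offloading behavior can be sketched roughly as follows. This is an illustrative assumption, not Ollama's actual implementation: the function name, per-layer size, and VRAM reserve are hypothetical, and the real heuristic also has to account for things like the KV cache and scratch buffers.

```python
def layers_to_offload(num_layers: int, layer_bytes: int, free_vram_bytes: int,
                      reserve_bytes: int = 512 * 1024 * 1024) -> int:
    """Estimate how many model layers fit in free GPU memory.

    Hypothetical sketch: keep `reserve_bytes` of headroom so loading the
    model never exhausts VRAM, then offload as many whole layers as fit.
    """
    usable = max(0, free_vram_bytes - reserve_bytes)
    return min(num_layers, usable // layer_bytes)

# e.g. a 32-layer model at ~400 MiB per layer on an 8 GiB card:
print(layers_to_offload(32, 400 * 1024 * 1024, 8 * 1024 ** 3))  # → 19
```

The idea is simply to offload whole layers until the GPU is nearly full, keeping a safety margin, and run the remainder on the CPU.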
## Download

```shell
curl https://ollama.ai/install.sh | sh
```
Manual [install steps](https://github.com/jmorganca/ollama/blob/main/docs/linux.md) are also available.
## Changelog
* Ollama now automatically offloads as much of the running model as your GPU supports, for maximum performance without crashes
* Fix issue where characters would be erased when running `ollama run`
* Added a new community project by TwanLuttik in https://github.com/jmorganca/ollama/pull/574
## New Contributors
* TwanLuttik made their first contribution in https://github.com/jmorganca/ollama/pull/574
**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.21...v0.1.0