Ollama


0.1.0

Ollama for Linux

Ollama for Linux is now available, with GPU acceleration enabled out-of-the-box for Nvidia GPUs.

💯 Ollama will run on cloud servers with multiple GPUs attached
🤖 Ollama will run on WSL 2 with GPU support
😍 Ollama maximizes the number of GPU layers to load to increase performance without crashing
🤩 Ollama will run everywhere from CPU-only machines and small hobby gaming GPUs up to super powerful workstation graphics cards like the H100

Download


```sh
curl https://ollama.ai/install.sh | sh
```


Manual [install steps](https://github.com/jmorganca/ollama/blob/main/docs/linux.md) are also available.
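
After installation, pulling and running a model is a single command. A quick smoke test, using `llama2` as an example model:

```sh
# Downloads the model on first use, then starts an interactive session
ollama run llama2
```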

Changelog
* Ollama will now automatically offload as much of the running model as is supported by your GPU for maximum performance without any crashes
* Fix issue where characters would be erased when running `ollama run`
* Added a new community project by TwanLuttik in https://github.com/jmorganca/ollama/pull/574

New Contributors
* TwanLuttik made their first contribution in https://github.com/jmorganca/ollama/pull/574

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.21...v0.1.0

0.0.21

* Fixed an issue where empty responses would be returned if `template` was provided in the API request but not `prompt` (a request that sets both is sketched after this list)
* Fixed an issue where the "Send a message" placeholder would show when writing multi-line prompts with `ollama run`
* Fixed an issue where multi-line prompts in `ollama run` wouldn't be submitted when pressing Return
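
For reference, a minimal sketch of a request that sets both fields (the model name and template string are illustrative):

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "template": "{{ .Prompt }}"
}'
```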

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.20...v0.0.21

0.0.20

What's Changed
* `ollama run` has a new & improved experience:
  * Models are now loaded immediately, making even the first prompt much faster
  * Added hint text
  * Output is now wrapped to the available width of the terminal for better readability
* `OLLAMA_HOST` now supports IPv6 hostnames
* `ollama run` will now automatically pull models if they don't exist when using a remote instance of Ollama
* Sending an empty `prompt` field to `/api/generate` will now load the model so the next request is fast (see the sketch after this list)
* Fixed an issue where `ollama create` would not correctly detect falcon model sizes
* Added a simple Python client for Ollama in `api/client.py` by pdevine
* Improvements to showing progress on `ollama pull` and `ollama push`
* Fixed an issue where empty layers would be added with `ollama create`
* Fixed an issue with running Ollama on Windows (compiled from source)
* Fixed an error when running `ollama push`
* Readable community projects by jamesbraza
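
A minimal sketch of the preload behavior and the IPv6 host support, assuming a local server on the default port and `llama2` as an example model:

```sh
# An empty prompt loads the model into memory so the next request is fast
curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": ""}'

# OLLAMA_HOST now also accepts IPv6 hostnames (bracketed loopback as an example)
OLLAMA_HOST=[::1]:11434 ollama list
```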


New Contributors
* jamesbraza made their first contribution in https://github.com/jmorganca/ollama/pull/550

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.19...v0.0.20

0.0.19

What's Changed
* Updated Docker image for Ollama: `docker pull ollama/ollama`
* Ability to import and use GGUF format models (see the sketch after this list)
* Fixed issue where `ollama push` would error on long-running uploads
* Ollama will now automatically clean up unused data locally
* Improved build instructions by apepper
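
A minimal sketch of the GGUF import flow, with a hypothetical file name and model name:

```sh
# Point a Modelfile at a local GGUF file (path is hypothetical)
echo "FROM ./llama-2-7b.Q4_0.gguf" > Modelfile

# Create a model from it and run it
ollama create my-llama2 -f Modelfile
ollama run my-llama2
```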

New Contributors
* apepper made their first contribution in https://github.com/jmorganca/ollama/pull/482

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.18...v0.0.19

0.0.18

What's Changed
* New `ollama show` command for viewing details about a model:
  * See a system prompt for a model: `ollama show --system orca-mini`
  * View a model's parameters: `ollama show --parameters codellama`
  * View a model's default prompt template: `ollama show --template llama2`
  * View a Modelfile for a model: `ollama show --modelfile llama2`
* Minor improvements to model loading and generation time
* Fixed an issue where large prompts would cause `codellama` and similar models to show an error
* Fixed compatibility issues with macOS 11 Big Sur
* Fixed an issue where characters in prompts would be escaped, causing escaped sequences like `&amp;` to appear in the output
* Fixed several issues with building from source on Windows and Linux
* New sentiments example by technovangelist
* Fixed the `num_keep` parameter not working properly
* Fixed an issue where `Modelfile` parameters would not be honored at runtime (see the sketch after this list)
* Added missing `options` params to the embeddings docs by herrjemand
* Fixed issue where `ollama list` would error when there were no models to show
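
A minimal sketch of the `Modelfile` parameters and the embeddings `options` params, assuming `llama2` as an example base model and illustrative values:

```sh
# Modelfile parameters such as num_keep are now honored at runtime
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER num_keep 24
PARAMETER temperature 0.8
EOF
ollama create my-llama2 -f Modelfile

# Embeddings request using the options params covered in the docs
curl http://localhost:11434/api/embeddings -d '{
  "model": "llama2",
  "prompt": "Here is an article about llamas...",
  "options": {"temperature": 0.8}
}'
```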

When building from source, Ollama will require running `go generate` to generate dependencies:


```sh
git clone https://github.com/jmorganca/ollama
cd ollama
go generate ./...
go build .
```


Note: `cmake` is required to build dependencies. On macOS it can be installed with `brew install cmake`, and on other platforms via the [installer](https://cmake.org/install/) or well-known package managers.

New Contributors
* callmephilip made their first contribution in https://github.com/jmorganca/ollama/pull/448
* herrjemand made their first contribution in https://github.com/jmorganca/ollama/pull/472

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.17...v0.0.18

0.0.17

What's Changed
* Multiple models can be removed together: `ollama rm mario:latest orca-mini:3b`
* `ollama list` will now show a unique ID for each model based on its contents
* Fixed a bug where a prompt wasn't set by default, causing an error when running a model created with `ollama create`
* Fixed a crash when running 34B parameter models on hardware without enough memory to run them
* Fixed issue where non-quantized f16 models would not run
* Improved network performance of `ollama push`
* Fixed an issue where stop sequences (such as `\n`) wouldn't be honored (see the sketch below)
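
A minimal sketch of passing a stop sequence through the API, with `llama2` as an example model:

```sh
# Generation halts as soon as the stop sequence is produced
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Name three colors, one per line:",
  "options": {"stop": ["\n"]}
}'
```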

New Contributors
* sqs made their first contribution in https://github.com/jmorganca/ollama/pull/415

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.16...v0.0.17
