Ollama


0.1.29

![AMD Preview](https://github.com/ollama/ollama/assets/3325447/d282a022-cf86-4feb-8c35-e5139b97d8e3)

AMD Preview

Ollama now supports AMD graphics cards in preview on Windows and Linux. All of Ollama's features are now accelerated by AMD graphics cards, and support is included by default in Ollama for [Linux](https://ollama.com/download/linux), [Windows](https://ollama.com/download/windows) and [Docker](https://hub.docker.com/r/ollama/ollama).

Supported cards and accelerators

| Family | Supported cards and accelerators |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| AMD Radeon RX | `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7600 XT` `7600` <br>`6950 XT` `6900 XTX` `6900 XT` `6800 XT` `6800`<br>`Vega 64` `Vega 56` |
| AMD Radeon PRO | `W7900` `W7800` `W7700` `W7600` `W7500` <br>`W6900X` `W6800X Duo` `W6800X` `W6800`<br>`V620` `V420` `V340` `V320`<br>`Vega II Duo` `Vega II` `VII` `SSG` |
| AMD Instinct | `MI300X` `MI300A` `MI300`<br>`MI250X` `MI250` `MI210` `MI200`<br>`MI100` `MI60` `MI50` |

What's Changed
* `ollama <command> -h` will now show documentation for supported environment variables
* Fixed issue where generating embeddings with `nomic-embed-text`, `all-minilm` or other embedding models would hang on Linux
* Experimental support for importing Safetensors models using the `FROM <directory with safetensors model>` command in the Modelfile (a minimal sketch follows this list)
* Fixed issues where Ollama would hang when using JSON mode
* Fixed issue where `ollama run` would error when piping output to `tee` and other tools
* Fixed an issue where memory would not be released when running vision models
* Ollama will no longer show an error message when piping to stdin on Windows
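
For reference, a minimal sketch of the Safetensors import flow; the directory and model names below are placeholders, not names from this release:

```shell
# Point a Modelfile at a local directory containing Safetensors weights
cat > Modelfile <<'EOF'
FROM ./my-safetensors-model
EOF

# Build an Ollama model from the Modelfile, then run it
ollama create my-model -f Modelfile
ollama run my-model
```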

New Contributors
* tgraupmann made their first contribution in https://github.com/ollama/ollama/pull/2582
* andersrex made their first contribution in https://github.com/ollama/ollama/pull/2909
* leonid20000 made their first contribution in https://github.com/ollama/ollama/pull/2440
* hishope made their first contribution in https://github.com/ollama/ollama/pull/2973
* mrdjohnson made their first contribution in https://github.com/ollama/ollama/pull/2759
* mofanke made their first contribution in https://github.com/ollama/ollama/pull/3077
* racerole made their first contribution in https://github.com/ollama/ollama/pull/3073
* Chris-AS1 made their first contribution in https://github.com/ollama/ollama/pull/3094

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.28...v0.1.29

0.1.28

New models
* [StarCoder2](https://ollama.com/library/starcoder2): the next generation of transparently trained open code LLMs that comes in three sizes: 3B, 7B and 15B parameters.
* [DolphinCoder](https://ollama.com/library/dolphincoder): a chat model based on StarCoder2 15B that excels at writing code.

What's Changed
* Vision models such as `llava` should now respond better to text prompts
* Improved support for `llava` 1.6 models
* Fixed issue where switching between models repeatedly would cause Ollama to hang
* Installing Ollama on Windows no longer requires a minimum of 4GB disk space
* Ollama on macOS will now more reliably determine available VRAM
* Fixed issue where running Ollama in `podman` would not detect Nvidia GPUs
* Ollama will correctly return an empty embedding when calling `/api/embeddings` with an empty `prompt` instead of hanging (see the example below)
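
A minimal embeddings request for reference, assuming an embedding model such as `nomic-embed-text` has already been pulled; the prompt is arbitrary:

```shell
# Request an embedding vector from a locally running Ollama server
curl http://localhost:11434/api/embeddings -d '{
    "model": "nomic-embed-text",
    "prompt": "The sky is blue because of Rayleigh scattering"
}'
```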

New Contributors
* Bin-Huang made their first contribution in https://github.com/ollama/ollama/pull/1706
* elthommy made their first contribution in https://github.com/ollama/ollama/pull/2737
* peanut256 made their first contribution in https://github.com/ollama/ollama/pull/2354
* tylinux made their first contribution in https://github.com/ollama/ollama/pull/2827
* fred-bf made their first contribution in https://github.com/ollama/ollama/pull/2780
* bmwiedemann made their first contribution in https://github.com/ollama/ollama/pull/2836

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.27...v0.1.28

0.1.27

[![306900613-01333db3-c27b-4044-88b3-9b2ffbe06415](https://github.com/ollama/ollama/assets/251292/fd975189-15bb-4c66-a16f-0f34c61ccba3)](https://ollama.com/library/gemma)

Gemma
[Gemma](https://ollama.com/library/gemma) is a new, top-performing family of lightweight open models built by Google. Available in `2b` and `7b` parameter sizes:
* `ollama run gemma:2b`
* `ollama run gemma:7b` (default)

What's Changed
* Performance improvements (up to 2x) when running [Gemma](https://ollama.com/library/gemma) models
* Fixed performance issues on Windows without GPU acceleration. Systems with AVX and AVX2 instruction sets should be 2-4x faster.
* Reduced likelihood of false-positive Windows Defender alerts

New Contributors
* joshyan1 made their first contribution in https://github.com/ollama/ollama/pull/2657
* pfrankov made their first contribution in https://github.com/ollama/ollama/pull/2138
* adminazhar made their first contribution in https://github.com/ollama/ollama/pull/2686
* b-tocs made their first contribution in https://github.com/ollama/ollama/pull/2510
* Yuan-ManX made their first contribution in https://github.com/ollama/ollama/pull/2249
* langchain4j made their first contribution in https://github.com/ollama/ollama/pull/1690
* logancyang made their first contribution in https://github.com/ollama/ollama/pull/1918

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.26...v0.1.27

0.1.26

What's Changed
* Support for `bert` and `nomic-bert` embedding models
* Fixed issue where system prompt and prompt template would not be updated when loading a new model
* Quotes will now be trimmed around the value of `OLLAMA_HOST` on Windows (see the example after this list)
* Fixed duplicate button issue on the Windows taskbar menu
* Fixed issue where system prompt would be overridden when using the `/api/chat` endpoint
* Hardened AMD driver lookup logic
* Fixed issue where two versions of Ollama on Windows would run at the same time
* Fixed issue where memory would not be released after a model is unloaded with modern CUDA-enabled GPUs
* Fixed issue where AVX2 was incorrectly required for GPU acceleration on Windows
* Fixed issue where `/bye` or `/exit` would not work with trailing spaces or characters after them
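
To illustrate the `OLLAMA_HOST` fix: on Windows, a value set with literal surrounding quotes (as in the cmd.exe sketch below, using an example address) is now trimmed to a plain host:port before use:

```shell
:: cmd.exe keeps the quotes as part of the variable's value; Ollama now trims them
set OLLAMA_HOST="127.0.0.1:11434"
ollama serve
```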

New Contributors
* tristanbob made their first contribution in https://github.com/ollama/ollama/pull/2545
* justinh-rahb made their first contribution in https://github.com/ollama/ollama/pull/2563
* gerazov made their first contribution in https://github.com/ollama/ollama/pull/2188
* eddumelendez made their first contribution in https://github.com/ollama/ollama/pull/2164
* lulzshadowwalker made their first contribution in https://github.com/ollama/ollama/pull/2381
* jakobhoeg made their first contribution in https://github.com/ollama/ollama/pull/2466
* jdetroyes made their first contribution in https://github.com/ollama/ollama/pull/1673
* djcopley made their first contribution in https://github.com/ollama/ollama/pull/1767
* pythops made their first contribution in https://github.com/ollama/ollama/pull/2329
* ttsugriy made their first contribution in https://github.com/ollama/ollama/pull/2511
* medoror made their first contribution in https://github.com/ollama/ollama/pull/2180
* nikeshparajuli made their first contribution in https://github.com/ollama/ollama/pull/1775
* n4ze3m made their first contribution in https://github.com/ollama/ollama/pull/2447

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.25...v0.1.26

0.1.25

[![ollama_windows](https://github.com/ollama/ollama/assets/3325447/ed02fdfe-17fc-4288-aff1-e58193f0b71a)](https://github.com/ollama/ollama/releases/download/v0.1.25/OllamaSetup.exe)

Windows Preview

Ollama is now available on Windows in preview. Download it [here](https://github.com/ollama/ollama/releases/download/v0.1.25/OllamaSetup.exe). Ollama on Windows makes it possible to pull, run and create large language models in a new native Windows experience. It includes built-in GPU acceleration, access to the full [model library](https://ollama.com/library), and the Ollama API including [OpenAI compatibility](https://ollama.com/blog/openai-compatibility).

What's Changed
* Ollama on Windows is now available in preview
* Fixed an issue where requests would hang after being repeated several times
* Ollama will now correctly error when provided an unsupported image format
* Fixed issue where `ollama serve` wouldn't immediately quit when receiving a termination signal
* Fixed issues with prompt templating for the `/api/chat` endpoint, such as where Ollama would omit the second system prompt in a series of messages
* Fixed issue where providing an empty list of messages would return a non-empty response instead of loading the model
* Setting a negative `keep_alive` value (e.g. `-1`) will now correctly keep the model loaded indefinitely (see the example below)
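
A short example of the `keep_alive` parameter on the `/api/generate` endpoint; the model name is a placeholder:

```shell
# keep_alive: -1 keeps the model loaded in memory indefinitely after the request
curl http://localhost:11434/api/generate -d '{
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "keep_alive": -1
}'
```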

New Contributors
* lebrunel made their first contribution in https://github.com/ollama/ollama/pull/2477
* bnorick made their first contribution in https://github.com/ollama/ollama/pull/2480

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.24...v0.1.25

0.1.24

OpenAI Compatibility

![openai](https://github.com/ollama/ollama/assets/251292/da36abcd-c929-4806-b957-5adf41ac641a)


This release adds initial compatibility support for the OpenAI [Chat Completions API](https://platform.openai.com/docs/api-reference/chat).

* [Documentation](https://github.com/ollama/ollama/blob/main/docs/openai.md)
* [Examples](https://ollama.ai/blog/openai-compatibility)

Usage with cURL

```shell
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama2",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
```


New Models
* [Qwen 1.5](https://ollama.ai/library/qwen): Qwen 1.5 is a new family of large language models by Alibaba Cloud, spanning from 0.5B to 72B parameters (see the example below)
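
As with other library models, a Qwen 1.5 variant can be pulled and run by tag; the tags below are illustrative and may not match the library's exact tag names:

```shell
# Run two Qwen 1.5 sizes (tags are examples)
ollama run qwen:0.5b
ollama run qwen:7b
```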

What's Changed
* Fixed issue where requests to `/api/chat` would hang when providing empty `user` messages repeatedly
* Fixed issue on macOS where Ollama would return a missing library error after being open for a long period of time

New Contributors
* easp made their first contribution in https://github.com/ollama/ollama/pull/2340
* mraiser made their first contribution in https://github.com/ollama/ollama/pull/1849

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.23...v0.1.24
