Ollama


0.5.7

What's Changed
* Fixed issue that occurred when using two `FROM` commands in a `Modelfile`
* Support importing Command R and Command R+ architectures from safetensors
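For context on the `FROM` fix, a minimal `Modelfile` sketch (model names here are illustrative): a `Modelfile` normally declares a single base model with `FROM`, and 0.5.7 fixes the behavior when two `FROM` commands appear.

```
# Hypothetical Modelfile: one FROM declaring the base model
FROM llama3.2
SYSTEM "You are a concise assistant."
```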

New Contributors
* Gloryjaw made their first contribution in https://github.com/ollama/ollama/pull/8438

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.5.6...v0.5.7

0.5.6

What's Changed
* Fixed errors that would occur when running `ollama create` on Windows and when using absolute paths

New Contributors
* steveberdy made their first contribution in https://github.com/ollama/ollama/pull/8352

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.5.5...v0.5.6

0.5.5

![ollama 2025](https://github.com/user-attachments/assets/d8e8aa47-f7e8-4531-a062-eb48d124b8db)


New models
- [Phi-4](https://ollama.com/library/phi4): Phi 4 is a 14B parameter, state-of-the-art open model from Microsoft.
- [Command R7B](https://ollama.com/library/command-r7b): the smallest model in Cohere's R series delivers top-tier speed, efficiency, and quality to build powerful AI applications on commodity GPUs and edge devices.
- [DeepSeek-V3](https://ollama.com/library/deepseek-v3): A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
- [OLMo 2](https://ollama.com/library/olmo2): a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
- [Dolphin 3](https://ollama.com/library/dolphin3): the next generation of the Dolphin series of instruct-tuned models designed to be the ultimate general purpose local model, enabling coding, math, agentic, function calling, and general use cases.
- [SmallThinker](https://ollama.com/library/smallthinker): A new small reasoning model fine-tuned from the Qwen 2.5 3B Instruct model.
- [Granite 3.1 Dense](https://ollama.com/library/granite3.1-dense): 2B and 8B text-only dense LLMs trained on over 12 trillion tokens of data, demonstrating significant improvements over their predecessors in performance and speed in IBM’s initial testing.
- [Granite 3.1 MoE](https://ollama.com/library/granite3.1-moe): 1B and 3B long-context mixture of experts (MoE) Granite models from IBM designed for low latency usage.

What's Changed
* The `/api/create` API endpoint that powers `ollama create` has been changed to improve conversion time and also accept a JSON object. **Note: this change is not backwards compatible**. If importing models, make sure you're using version `0.5.5` or later for both Ollama and the `ollama` CLI when running `ollama create`. If using `ollama.create` in the Python or JavaScript libraries, make sure to update to the latest version.
* Fixed runtime error that would occur when filling the model's context window
* Fixed crash that would occur when quotes were used in `/save`
* Fixed errors that would occur when sending x-stainless headers from OpenAI clients
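The reworked `/api/create` endpoint takes a JSON body describing the model to create rather than a raw `Modelfile`. A minimal sketch of building such a request, assuming the `model`, `from`, and `system` fields from the Ollama API documentation (the actual request requires a running Ollama server at the default port):

```python
import json
import urllib.request

def build_create_request(model: str, base: str, system: str) -> bytes:
    """Build the JSON body for POST /api/create (0.5.5+ schema)."""
    payload = {
        "model": model,    # name of the model to create
        "from": base,      # existing model to build from
        "system": system,  # system prompt for the new model
    }
    return json.dumps(payload).encode("utf-8")

def create_model(body: bytes, host: str = "http://localhost:11434") -> None:
    # Requires a running Ollama server; the server streams back
    # newline-delimited JSON status objects as it converts the model.
    req = urllib.request.Request(
        f"{host}/api/create",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            print(line.decode("utf-8").strip())

body = build_create_request("mario", "llama3.2", "You are Mario from Super Mario Bros.")
```

Because the schema change is not backwards compatible, a 0.5.4-or-earlier client sending the old `Modelfile`-text body to a 0.5.5 server (or vice versa) will fail, hence the version-matching note above.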

New Contributors
* Squishedmac made their first contribution in https://github.com/ollama/ollama/pull/8172
* erusev made their first contribution in https://github.com/ollama/ollama/pull/7950
* olumolu made their first contribution in https://github.com/ollama/ollama/pull/8227
* paradoxical-dev made their first contribution in https://github.com/ollama/ollama/pull/8242
* belfie13 made their first contribution in https://github.com/ollama/ollama/pull/8215
* Docteur-RS made their first contribution in https://github.com/ollama/ollama/pull/7259
* anxkhn made their first contribution in https://github.com/ollama/ollama/pull/8082
* ubaldus made their first contribution in https://github.com/ollama/ollama/pull/8307
* isamu made their first contribution in https://github.com/ollama/ollama/pull/8343

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.5.4...v0.5.5

0.5.4

New models

- [Falcon3](https://ollama.com/library/falcon3): A family of efficient AI models under 10B parameters performant in science, math, and coding through innovative training techniques.

What's Changed
* Fixed issue where providing `null` to `format` would result in an error
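The `format` field on generate/chat requests is optional; the 0.5.4 fix means an explicit `null` is now tolerated rather than rejected. A sketch of building a request body with and without the field (model name is illustrative; field names follow the Ollama API documentation):

```python
import json

def build_generate_request(prompt: str, fmt=None) -> bytes:
    """Build a JSON body for POST /api/generate; `format` is optional."""
    payload = {"model": "llama3.2", "prompt": prompt}
    if fmt is not None:
        # e.g. "json", or a JSON-schema dict for structured output;
        # as of 0.5.4 an explicit null no longer causes an error either
        payload["format"] = fmt
    return json.dumps(payload).encode("utf-8")

plain = build_generate_request("Why is the sky blue?")
constrained = build_generate_request("List three colors as JSON.", fmt="json")
```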

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.5.3...v0.5.4

0.5.3

What's Changed
* Fixed runtime errors on older Intel Macs
* Fixed issue where setting the `format` field to `""` would cause an error

New Contributors
* Askir made their first contribution in https://github.com/ollama/ollama/pull/8028

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.5.2...v0.5.3

0.5.2

New models

- [EXAONE 3.5](https://ollama.com/library/exaone3.5): a collection of instruction-tuned bilingual (English and Korean) generative models ranging from 2.4B to 32B parameters, developed and released by LG AI Research.

What's Changed
* Fixed issue where whitespace would get trimmed from prompt when images were provided
* Improved memory estimation when scheduling models
* `OLLAMA_ORIGINS` will now check hosts in a case insensitive manner

> Note: the Linux `ollama-linux-amd64.tgz` directory structure has changed – if you manually install Ollama on Linux, make sure to retain the new directory layout and contents of the tar file.

New Contributors
* yannickgloster made their first contribution in https://github.com/ollama/ollama/pull/7960
* stweil made their first contribution in https://github.com/ollama/ollama/pull/7021
* AidfulAI made their first contribution in https://github.com/ollama/ollama/pull/8024
* taozuhong made their first contribution in https://github.com/ollama/ollama/pull/7948
* philffm made their first contribution in https://github.com/ollama/ollama/pull/7202

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.5.1...v0.5.2
