Ollama


0.1.32

![picture of ollama levelling up](https://github.com/ollama/ollama/assets/3325447/05172bee-65a2-43f1-b6c9-db71cf8edd53)

New models
* [WizardLM 2](https://ollama.com/library/wizardlm2): State-of-the-art large language model from Microsoft AI with improved performance on complex chat, multilingual, reasoning, and agent use cases.
  * `wizardlm2:8x22b`: large 8x22B model based on Mixtral 8x22B
  * `wizardlm2:7b`: fast, high-performing model based on Mistral 7B
* [Snowflake Arctic Embed](https://ollama.com/library/snowflake-arctic-embed): A suite of text embedding models by Snowflake, optimized for performance.
* [Command R+](https://ollama.com/library/command-r-plus): a powerful, scalable large language model purpose-built for RAG use cases
* [DBRX](https://ollama.com/library/dbrx): A large 132B open, general-purpose LLM created by Databricks.
* [Mixtral 8x22B](https://ollama.com/library/mixtral:8x22b): the new leading Mixture of Experts (MoE) base model by Mistral AI.

What's Changed
* Ollama now makes better use of available VRAM, leading to fewer out-of-memory errors and better GPU utilization
* When running larger models that don't fit into VRAM on macOS, Ollama will now split the model between GPU and CPU to maximize performance.
* Fixed several issues where Ollama would hang upon encountering an error
* Fix issue where using quotes in `OLLAMA_ORIGINS` would cause an error

New Contributors
* sugarforever made their first contribution in https://github.com/ollama/ollama/pull/3400
* yaroslavyaroslav made their first contribution in https://github.com/ollama/ollama/pull/3378
* Nagi-ovo made their first contribution in https://github.com/ollama/ollama/pull/3423
* ParisNeo made their first contribution in https://github.com/ollama/ollama/pull/3436
* philippgille made their first contribution in https://github.com/ollama/ollama/pull/3437
* cesto93 made their first contribution in https://github.com/ollama/ollama/pull/3461
* ThomasVitale made their first contribution in https://github.com/ollama/ollama/pull/3515
* writinwaters made their first contribution in https://github.com/ollama/ollama/pull/3539
* alexmavr made their first contribution in https://github.com/ollama/ollama/pull/3555

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.31...v0.1.32

0.1.31

[![ollama embedding](https://github.com/ollama/ollama/assets/3325447/0258e96a-a703-489a-80be-6caa97cd3f81)](https://ollama.com/blog/embedding-models)

Ollama supports embedding models. Bring your existing documents or other data and combine them with text prompts to build RAG (retrieval-augmented generation) apps using the Ollama [REST API](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings), [Python](https://github.com/ollama/ollama-python), or [JavaScript](https://github.com/ollama/ollama-js) libraries.
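As a minimal sketch of the REST API side, using only the Python standard library and assuming a local Ollama server on the default port 11434 with an embedding model such as `all-minilm` already pulled:

```python
import json
import urllib.request

def build_embedding_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/embeddings endpoint."""
    return json.dumps({"model": model, "prompt": prompt}).encode("utf-8")

def get_embedding(prompt: str, model: str = "all-minilm",
                  host: str = "http://localhost:11434") -> list:
    """POST to /api/embeddings and return the embedding vector."""
    req = urllib.request.Request(
        f"{host}/api/embeddings",
        data=build_embedding_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# Example (requires a running Ollama server):
#   vec = get_embedding("Llamas are members of the camelid family")
#   print(len(vec))  # dimensionality depends on the model
```

The returned vectors can be stored in any vector database and compared against embedded queries to retrieve relevant context for a RAG prompt.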

New models
* [Qwen 1.5 32B](https://ollama.com/library/qwen:32b): A new 32B multilingual model competitive with larger models such as Mixtral
* [StarlingLM Beta](https://ollama.com/library/starling-lm:beta): A 7B model that ranks highly on popular benchmarks and carries a permissive Apache 2.0 license.
* [DolphinCoder StarCoder 7B](https://ollama.com/library/dolphincoder:7b): A 7B uncensored variant of the Dolphin model family that excels at coding, based on StarCoder2 7B.
* [StableLM 1.6 Chat](https://ollama.com/library/stablelm2:chat): A new version of StableLM 1.6 tuned for instruction following

What's Changed
* Fixed issue where Ollama would hang when using certain Unicode characters in the prompt, such as emojis

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.30...v0.1.31

0.1.30

<img alt="Ollama now supports Cohere's Command R model" src="https://github.com/ollama/ollama/assets/3325447/ba99059d-2397-4fb9-84b7-d45c71518b4e" width="640" />

New models
* [Command R](https://ollama.com/library/command-r): a Large Language Model optimized for conversational interaction and long context tasks.
* [mxbai-embed-large](https://ollama.com/library/mxbai-embed-large): A new state-of-the-art large embedding model

What's Changed
* Fixed various issues with `ollama run` on Windows
  * History now works when pressing the up and down arrow keys
  * Right and left arrow keys now move the cursor appropriately
  * Pasting multi-line strings now works on Windows
* Fixed issue where mounting or sharing files between Linux and Windows (e.g. via WSL or Docker) would cause errors due to having `:` in the filename.
* Improved support for AMD MI300 and MI300X Accelerators
* Improved cleanup of temporary files resulting in better space utilization

**Important change**

For filesystem compatibility, Ollama has changed model data filenames to use `-` instead of `:`. This change will be applied automatically. If downgrading from 0.1.30 to 0.1.29 or lower (on Linux or macOS only), first run:

```shell
find ~/.ollama/models/blobs -type f -exec bash -c 'mv "$0" "${0//-/:}"' {} \;
```

New Contributors
* alitrack made their first contribution in https://github.com/ollama/ollama/pull/3111
* drazdra made their first contribution in https://github.com/ollama/ollama/pull/3338
* rapidarchitect made their first contribution in https://github.com/ollama/ollama/pull/3288
* yusufcanb made their first contribution in https://github.com/ollama/ollama/pull/3274
* jikkuatwork made their first contribution in https://github.com/ollama/ollama/pull/3178
* timothycarambat made their first contribution in https://github.com/ollama/ollama/pull/3145
* fly2tomato made their first contribution in https://github.com/ollama/ollama/pull/2946
* enoch1118 made their first contribution in https://github.com/ollama/ollama/pull/2927
* danny-avila made their first contribution in https://github.com/ollama/ollama/pull/2918
* mmo80 made their first contribution in https://github.com/ollama/ollama/pull/2881
* anaisbetts made their first contribution in https://github.com/ollama/ollama/pull/2428
* marco-souza made their first contribution in https://github.com/ollama/ollama/pull/1905
* guchenhe made their first contribution in https://github.com/ollama/ollama/pull/1944
* herval made their first contribution in https://github.com/ollama/ollama/pull/1873
* Npahlfer made their first contribution in https://github.com/ollama/ollama/pull/1623
* remy415 made their first contribution in https://github.com/ollama/ollama/pull/2279

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.29...v0.1.30

0.1.29

<img src="https://github.com/ollama/ollama/assets/3325447/d282a022-cf86-4feb-8c35-e5139b97d8e3" width="100%" />

AMD Preview

Ollama now supports AMD graphics cards in preview on Windows and Linux. All of Ollama's features can now be accelerated by AMD graphics cards, and support is included by default in Ollama for [Linux](https://ollama.com/download/linux), [Windows](https://ollama.com/download/windows) and [Docker](https://hub.docker.com/r/ollama/ollama).

Supported cards and accelerators

| Family | Supported cards and accelerators |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| AMD Radeon RX | `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7600 XT` `7600` <br>`6950 XT` `6900 XTX` `6900 XT` `6800 XT` `6800`<br>`Vega 64` `Vega 56` |
| AMD Radeon PRO | `W7900` `W7800` `W7700` `W7600` `W7500` <br>`W6900X` `W6800X Duo` `W6800X` `W6800`<br>`V620` `V420` `V340` `V320`<br>`Vega II Duo` `Vega II` `VII` `SSG` |
| AMD Instinct | `MI300X` `MI300A` `MI300`<br>`MI250X` `MI250` `MI210` `MI200`<br>`MI100` `MI60` `MI50` |

What's Changed
* `ollama <command> -h` will now show documentation for supported environment variables
* Fixed issue where generating embeddings with `nomic-embed-text`, `all-minilm` or other embedding models would hang on Linux
* Experimental support for importing Safetensors models using the `FROM <directory with safetensors model>` command in the Modelfile
* Fixed issues where Ollama would hang when using JSON mode.
* Fixed issue where `ollama run` would error when piping output to `tee` and other tools
* Fixed an issue where memory would not be released when running vision models
* Ollama will no longer show an error message when piping to stdin on Windows
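The experimental Safetensors import mentioned above works through the Modelfile's `FROM` instruction. A minimal sketch, assuming a local directory of Safetensors weights (the directory path and model name here are hypothetical):

```
# Modelfile — point FROM at a directory containing the safetensors weights
FROM ./my-safetensors-model
```

The model can then be built with `ollama create my-model -f Modelfile` and run with `ollama run my-model`.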

New Contributors
* tgraupmann made their first contribution in https://github.com/ollama/ollama/pull/2582
* andersrex made their first contribution in https://github.com/ollama/ollama/pull/2909
* leonid20000 made their first contribution in https://github.com/ollama/ollama/pull/2440
* hishope made their first contribution in https://github.com/ollama/ollama/pull/2973
* mrdjohnson made their first contribution in https://github.com/ollama/ollama/pull/2759
* mofanke made their first contribution in https://github.com/ollama/ollama/pull/3077
* racerole made their first contribution in https://github.com/ollama/ollama/pull/3073
* Chris-AS1 made their first contribution in https://github.com/ollama/ollama/pull/3094

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.28...v0.1.29

0.1.28

New models
* [StarCoder2](https://ollama.com/library/starcoder2): the next generation of transparently trained open code LLMs that come in three sizes: 3B, 7B, and 15B parameters.
* [DolphinCoder](https://ollama.com/library/dolphincoder): a chat model based on StarCoder2 15B that excels at writing code.

What's Changed
* Vision models such as `llava` should now respond better to text prompts
* Improved support for `llava` 1.6 models
* Fixed issue where switching between models repeatedly would cause Ollama to hang
* Installing Ollama on Windows no longer requires a minimum of 4GB disk space
* Ollama on macOS will now more reliably determine available VRAM
* Fixed issue where running Ollama in `podman` would not detect Nvidia GPUs
* Ollama will correctly return an empty embedding when calling `/api/embeddings` with an empty `prompt` instead of hanging

New Contributors
* Bin-Huang made their first contribution in https://github.com/ollama/ollama/pull/1706
* elthommy made their first contribution in https://github.com/ollama/ollama/pull/2737
* peanut256 made their first contribution in https://github.com/ollama/ollama/pull/2354
* tylinux made their first contribution in https://github.com/ollama/ollama/pull/2827
* fred-bf made their first contribution in https://github.com/ollama/ollama/pull/2780
* bmwiedemann made their first contribution in https://github.com/ollama/ollama/pull/2836

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.27...v0.1.28

0.1.27

[![Gemma](https://github.com/ollama/ollama/assets/251292/fd975189-15bb-4c66-a16f-0f34c61ccba3)](https://ollama.com/library/gemma)

Gemma
[Gemma](https://ollama.com/library/gemma) is a new, top-performing family of lightweight open models built by Google. Available in `2b` and `7b` parameter sizes:
* `ollama run gemma:2b`
* `ollama run gemma:7b` (default)

What's Changed
* Performance improvements (up to 2x) when running [Gemma](https://ollama.com/library/gemma) models
* Fixed performance issues on Windows without GPU acceleration. Systems with AVX and AVX2 instruction sets should be 2-4x faster.
* Reduced likelihood of false-positive Windows Defender alerts.

New Contributors
* joshyan1 made their first contribution in https://github.com/ollama/ollama/pull/2657
* pfrankov made their first contribution in https://github.com/ollama/ollama/pull/2138
* adminazhar made their first contribution in https://github.com/ollama/ollama/pull/2686
* b-tocs made their first contribution in https://github.com/ollama/ollama/pull/2510
* Yuan-ManX made their first contribution in https://github.com/ollama/ollama/pull/2249
* langchain4j made their first contribution in https://github.com/ollama/ollama/pull/1690
* logancyang made their first contribution in https://github.com/ollama/ollama/pull/1918

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.26...v0.1.27
