Ollama

0.1.44

What's Changed
* Fixed issue where unicode characters such as emojis would not be loaded correctly when running `ollama create`
* Fixed certain cases where Nvidia GPUs would not be detected and reported as compute capability 1.0 devices

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.43...v0.1.44

0.1.43

![Ollama honest work](https://github.com/ollama/ollama/assets/3325447/06b05d79-1872-45d9-bed3-76f72afa2baf)

What's Changed
* New [import.md](https://github.com/ollama/ollama/blob/main/docs/import.md) guide for converting and importing models to Ollama
* Fixed issue where embedding vectors resulting from `/api/embeddings` would not be accurate
* JSON mode responses will no longer include invalid escape characters
* Removing a model will no longer show incorrect `File not found` errors
* Fixed issue where running `ollama create` would result in an error on Windows with certain file formatting
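The `/api/embeddings` fix above is easy to exercise end to end. A minimal sketch of building and sending such a request with the Python standard library (the model name `all-minilm` is illustrative; any pulled embedding-capable model works, and a local Ollama server is assumed at the default port):

```python
import json
from urllib import request

OLLAMA_HOST = "http://localhost:11434"

def embeddings_request(prompt: str, model: str = "all-minilm") -> request.Request:
    """Build a POST request for /api/embeddings (model name is illustrative)."""
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    return request.Request(
        f"{OLLAMA_HOST}/api/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def embed(prompt: str, model: str = "all-minilm") -> list:
    """Send the request to a running Ollama server and return the vector."""
    with request.urlopen(embeddings_request(prompt, model)) as resp:
        return json.load(resp)["embedding"]
```

Comparing the vectors returned for identical prompts before and after upgrading is one way to confirm the accuracy fix.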

New Contributors
* erhant made their first contribution in https://github.com/ollama/ollama/pull/4854
* nischalj10 made their first contribution in https://github.com/ollama/ollama/pull/4612
* dcasota made their first contribution in https://github.com/ollama/ollama/pull/4852
* Napuh made their first contribution in https://github.com/ollama/ollama/pull/4084
* hughescr made their first contribution in https://github.com/ollama/ollama/pull/3782
* jimscard made their first contribution in https://github.com/ollama/ollama/pull/3382

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.42...v0.1.43

0.1.42

New models
* [Qwen 2](https://ollama.com/library/qwen2): a new series of large language models from Alibaba group

What's Changed
* Fixed issue where `qwen2` would output erroneous text such as `GGG` on Nvidia and AMD GPUs
* `ollama pull` is now faster if it detects a model is already downloaded
* `ollama create` will now automatically detect prompt templates for popular model architectures such as Llama, Gemma, Phi and more.
* Ollama can now be accessed from local apps built with Electron and Tauri, as well as from apps under development served from local HTML files
* Updated the welcome prompt on Windows to use `llama3`
* Fixed issues where `/api/ps` and `/api/tags` would show invalid timestamps in responses
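The timestamp fix can be sanity-checked by parsing the `expires_at` values that `/api/ps` returns; they are RFC 3339 strings, which the Python standard library parses directly (the sample value below is illustrative, not taken from a real response):

```python
from datetime import datetime

def parse_expiry(expires_at: str) -> datetime:
    """Parse an RFC 3339 `expires_at` value from /api/ps into an aware datetime.

    Note: fromisoformat on Python < 3.11 accepts at most six fractional-second
    digits, so longer fractions may need truncating on older interpreters.
    """
    return datetime.fromisoformat(expires_at)

# Illustrative value in the shape /api/ps reports:
sample = parse_expiry("2024-06-04T14:38:31.837530-07:00")
```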

New Contributors
* shoebham made their first contribution in https://github.com/ollama/ollama/pull/4766
* kartikm7 made their first contribution in https://github.com/ollama/ollama/pull/4719
* royjhan made their first contribution in https://github.com/ollama/ollama/pull/4822

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.41...v0.1.42

0.1.41

What's Changed
* Fixed an error that Ollama would encounter on Windows 10 and 11 systems with Intel CPUs and integrated GPUs

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.40...v0.1.41

0.1.40

![ollama continuing to capture bugs](https://github.com/ollama/ollama/assets/3325447/d3aba466-40cc-4878-b2bb-34ecbae977d3)

New models
* [Codestral](https://ollama.com/library/codestral): Mistral AI’s first-ever code model, designed for code generation tasks.
* [IBM Granite Code](https://ollama.com/library/granite-code): now in [3B](https://ollama.com/library/granite-code:3b) and [8B](https://ollama.com/library/granite-code:8b) parameter sizes.
* [Deepseek V2](https://ollama.com/library/deepseek-v2): A Strong, Economical, and Efficient Mixture-of-Experts Language Model

What's Changed
* Fixed out of memory and incorrect token issues when running Codestral on 16GB Macs
* Fixed issue where full-width characters (e.g. Japanese, Chinese, Russian) were deleted at end of the line when using `ollama run`

New Examples
* [Use open-source models as coding assistant with Continue](https://ollama.com/blog/continue-code-assistant)

New Contributors
* zhewang1-intc made their first contribution in https://github.com/ollama/ollama/pull/3278

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.39...v0.1.40

0.1.39

New models
- [Cohere Aya 23](https://ollama.com/library/aya): A new state-of-the-art, multilingual LLM covering 23 different languages.
- [Mistral 7B 0.3](https://ollama.com/library/mistral:v0.3): A new version of Mistral 7B with initial support for function calling.
- [Phi-3 Medium](https://ollama.com/library/phi3:medium): a 14B-parameter, lightweight, state-of-the-art open model by Microsoft.
- [Phi-3 Mini 128K](https://ollama.com/library/phi3:mini-128k) and [Phi-3 Medium 128K](https://ollama.com/library/phi3:medium-128k): versions of the Phi-3 models that support a context window size of 128K
- [Granite code](https://ollama.com/library/granite-code): A family of open foundation models by IBM for Code Intelligence

Llama 3 import

It is now possible to import and quantize Llama 3 and its finetunes from Safetensors format to Ollama.

First, clone a Hugging Face repo with a Safetensors model:


```
git clone https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
cd Meta-Llama-3-8B-Instruct
```


Next, create a `Modelfile`:


```
FROM .

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>
```


Then, create and quantize a model:


```
ollama create --quantize q4_0 -f Modelfile my-llama3
ollama run my-llama3
```
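Besides `ollama run`, the imported model can be called over the REST API. A minimal non-streaming sketch (the model name `my-llama3` comes from the `ollama create` step above; a local server at the default port is assumed):

```python
import json
from urllib import request

def generate_payload(prompt: str, model: str = "my-llama3") -> dict:
    """Body for a non-streaming /api/generate call (model name from the steps above)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send the request to a running Ollama server and return the generated text."""
    req = request.Request(
        f"{host}/api/generate",
        data=json.dumps(generate_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With `"stream": False` the server returns a single JSON object instead of newline-delimited chunks, which keeps the client code short.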


What's Changed
* Fixed display issues with wide characters such as those used in Chinese, Korean, Japanese and Russian text
* Added new `OLLAMA_NOHISTORY=1` environment variable that can be set to disable history when using `ollama run`
* New experimental `OLLAMA_FLASH_ATTENTION=1` flag for `ollama serve` that improves token generation speed on Apple Silicon Macs and NVIDIA graphics cards
* Fixed error that would occur on Windows running `ollama create -f Modelfile`
* `ollama create` can now create models from I-Quant GGUF files
* Fixed `EOF` errors when resuming downloads via `ollama pull`
* Added a `Ctrl+W` shortcut to `ollama run`
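Both `OLLAMA_NOHISTORY` and `OLLAMA_FLASH_ATTENTION` are plain environment variables, so they can be scoped to a single process launch rather than exported globally. A sketch of building such an environment and starting the server with it (assumes `ollama` is on `PATH`; note `OLLAMA_NOHISTORY` affects `ollama run`, while the flash-attention flag applies to `ollama serve`):

```python
import os
import subprocess

def ollama_env(flash_attention: bool = True, no_history: bool = False) -> dict:
    """Copy the current environment and apply the flags described above."""
    env = dict(os.environ)
    if flash_attention:
        env["OLLAMA_FLASH_ATTENTION"] = "1"
    if no_history:
        env["OLLAMA_NOHISTORY"] = "1"
    return env

def start_server() -> subprocess.Popen:
    """Launch `ollama serve` in the background with flash attention enabled."""
    return subprocess.Popen(["ollama", "serve"], env=ollama_env())
```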


New Contributors
* rapmd73 made their first contribution in https://github.com/ollama/ollama/pull/4467
* sammcj made their first contribution in https://github.com/ollama/ollama/pull/4120
* likejazz made their first contribution in https://github.com/ollama/ollama/pull/4535

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.1.38...v0.1.39
