Ollama


0.1.8

New models
* [CodeBooga](https://ollama.ai/library/codebooga): A high-performing code instruct model created by merging two existing code models.
* [Dolphin 2.2 Mistral](https://ollama.ai/library/dolphin2.2-mistral): An instruct-tuned model based on Mistral. Version 2.2 is fine-tuned for improved conversation and empathy.
* [MistralLite](https://ollama.ai/library/mistrallite): a fine-tuned model based on Mistral with enhanced capabilities for processing long contexts.
* [Yarn Mistral](https://ollama.ai/library/yarn-mistral): an extension of [Mistral](https://ollama.ai/library/mistral) to support a context window of up to 128k tokens
* [Yarn Llama 2](https://ollama.ai/library/yarn-llama2): an extension of [Llama 2](https://ollama.ai/library/llama2) to support a context window of up to 128k tokens

What's Changed
* Ollama will now honour large context sizes on models such as `codellama` and `mistrallite` (see the sketch after this list)
* Fixed issue where repeated characters would be output on long contexts
* `ollama push` is now much faster. 7B models will push up to ~100MB/s and large models (70B+) up to 1GB/s if network speeds permit
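
As a rough illustration of using a larger context window, here is a minimal sketch that overrides `num_ctx` in a `Modelfile`. The model name and the value `16384` are illustrative; check the model's card for the context length it actually supports:

```shell
# Build a long-context variant of mistrallite (values are illustrative)
cat > Modelfile <<'EOF'
FROM mistrallite
PARAMETER num_ctx 16384
EOF

ollama create mistrallite-16k -f Modelfile
ollama run mistrallite-16k
```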

New Contributors
* dloss made their first contribution in https://github.com/jmorganca/ollama/pull/948
* noahgitsham made their first contribution in https://github.com/jmorganca/ollama/pull/983

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.7...v0.1.8

0.1.7

What's Changed
* Fixed an issue when running `ollama run` where certain key combinations such as Ctrl+Space would lead to an unresponsive prompt
* Fixed issue in `ollama run` where retrieving the previous prompt from history would require two up arrow key presses instead of one
* Exiting `ollama run` with Ctrl+D will now put cursor on the next line

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.6...v0.1.7

0.1.6

New models
* [Dolphin 2.1 Mistral](https://ollama.ai/library/dolphin2.1-mistral): an instruct-tuned model based on Mistral and trained on a dataset filtered to remove alignment and bias.
* [Zephyr Beta](https://ollama.ai/library/zephyr): the second model in the Zephyr series based on Mistral, with strong performance that matches, and in several categories even exceeds, Llama 2 70B. It’s trained on a distilled dataset, improving grammar and yielding even better chat results.

What's Changed
* Pasting multi-line strings in `ollama run` is now possible
* Fixed various issues when writing prompts in `ollama run`
* The library models have been refreshed and revamped, including `llama2`, `codellama`, and more:
  * All `chat` or `instruct` models now support setting the `system` parameter, or the `SYSTEM` command in the `Modelfile` (see the sketch after this list)
  * Parameters (`num_ctx`, etc.) have been updated for library models
  * Slight performance improvements for all models
* Model storage can now be configured with `OLLAMA_MODELS`. See the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-change-where-ollama-stores-models) for more info on how to configure this.
* `OLLAMA_HOST` will now default to port `443` when `https://` is specified, and port `80` when `http://` is specified
* Fixed trailing slashes causing an error when using `OLLAMA_HOST`
* Fixed issue where `ollama pull` would retry multiple times when out of space
* Fixed various `out of memory` issues when using Nvidia GPUs
* Fixed performance issue previously introduced on AMD CPUs
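
A minimal sketch of the configuration points above. The `SYSTEM` command and both environment variables are the features named in this release; the system prompt, storage path, and host name are illustrative assumptions:

```shell
# Set a system prompt with the SYSTEM command (prompt text is illustrative)
cat > Modelfile <<'EOF'
FROM llama2
SYSTEM """You are a concise assistant. Answer in one or two sentences."""
EOF
ollama create llama2-concise -f Modelfile
ollama run llama2-concise

# Store models under a custom directory; the server reads this on startup
# (path is illustrative)
OLLAMA_MODELS=/data/ollama/models ollama serve

# Point the client at a remote server; https:// now implies port 443
OLLAMA_HOST=https://ollama.example.com ollama run llama2-concise
```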

New Contributors
* ajayk made their first contribution in https://github.com/jmorganca/ollama/pull/855

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.5...v0.1.6

0.1.5

What's Changed
* Fixed an issue where an error would occur when running `falcon` or `starcoder` models


**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.4...v0.1.5

0.1.4

New models

* [OpenHermes 2 Mistral](https://ollama.ai/library/openhermes2-mistral): a new fine-tuned model based on Mistral, trained on open datasets totalling over 900,000 instructions. This model has strong multi-turn chat skills, surpassing previous Hermes 13B models and even matching 70B models on some benchmarks.

What's Changed
* Faster model switching: models will now stay loaded between requests when using different parameters (e.g. `temperature`) or system prompts
* `starcoder`, `sqlcoder` and `falcon` models now have unicode support. Note: they will need to be re-pulled (e.g. `ollama pull starcoder`)
* New documentation guide on [importing existing models](https://github.com/jmorganca/ollama/blob/main/docs/import.md) to Ollama (GGUF, PyTorch, etc.) — a rough sketch follows this list
* `ollama serve` will now print the current version of Ollama on start
* `ollama run` will now show more descriptive errors when encountering runtime issues (such as insufficient memory)
* Fixed an issue where Ollama on Linux would fall back to the CPU alone, instead of using both the CPU and GPU, on GPUs with limited memory
* Fixed architecture check in Linux install script
* Fixed issue where leading whitespaces would be returned in responses
* Fixed issue where `ollama show` would show an empty `SYSTEM` prompt (instead of omitting it)
* Fixed issue where the `/api/tags` endpoint would return `null` instead of `[]` if no models were found
* Fixed an issue where `ollama show` wouldn't work when connecting remotely by using `OLLAMA_HOST`
* Fixed issue where GPU/Metal would be used on macOS even with `num_gpu` set to `0`
* Fixed issue where certain characters would be escaped in responses
* Fixed `ollama serve` logs to report the proper amount of GPU memory (VRAM) being used
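
As a rough sketch of the import flow described in the guide above — the GGUF file name is hypothetical, and the linked documentation covers the full steps, including PyTorch conversion:

```shell
# Import a local GGUF file into Ollama (file name is hypothetical)
cat > Modelfile <<'EOF'
FROM ./my-model.Q4_0.gguf
EOF
ollama create my-model -f Modelfile
ollama run my-model
```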

Note: the `EMBED` keyword in `Modelfile` is being revisited and is deferred until a future version of Ollama. Join [the discussion](https://github.com/jmorganca/ollama/issues/834) on how we can make it better.

New Contributors
* vieux made their first contribution in https://github.com/jmorganca/ollama/pull/810
* s-kostyaev made their first contribution in https://github.com/jmorganca/ollama/pull/801
* ggozad made their first contribution in https://github.com/jmorganca/ollama/pull/794
* awaescher made their first contribution in https://github.com/jmorganca/ollama/pull/811
* deichbewohner made their first contribution in https://github.com/jmorganca/ollama/pull/799

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.3...v0.1.4

0.1.3

What's Changed
* Improved various API error messages to be easier to read
* Improved GPU allocation for older GPUs to fix "out of memory" errors
* Fixed issue where setting `num_gpu` to `0` would result in an error (see the sketch after this list)
* Ollama for macOS will now always update to the latest version, even if earlier updates were downloaded beforehand
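
A minimal sketch of forcing CPU-only inference with `num_gpu`, which this fix makes work without error. The model, prompt, and default server address are illustrative assumptions:

```shell
# Request a completion with the GPU disabled (num_gpu set to 0)
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 0 }
}'
```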

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.2...v0.1.3
