Ollama

0.1.11

New models
* [Orca 2](https://ollama.ai/library/orca2): A fine-tuned version of Meta's Llama 2 model, designed to excel particularly in reasoning.
* [DeepSeek Coder](https://ollama.ai/library/deepseek-coder): A capable coding model trained from scratch. Available in 1.3B, 6.7B and 33B parameter counts.
* [Alfred](https://ollama.ai/library/alfred): A robust conversational model designed to be used for both chat and instruct use cases.

What's Changed
* Improved progress bar design
* Fixed issue where `ollama create` would error with `invalid cross-device link`
* Fixed issue where `ollama run` would exit with an error on macOS Big Sur and Monterey
* `q5_0` and `q5_1` models will now use the GPU
* Fixed several `max retries exceeded` errors when running `ollama pull` or `ollama push`
* Fixed issue where `ollama create` would result in a "file not found" error when `FROM` referred to a local file
* Fixed issue where resizing the terminal while running `ollama pull` would cause repeated progress bar messages
* Minor performance improvements on Intel Macs
* Improved error messages on Linux when using Nvidia GPUs

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.10...v0.1.11

0.1.10

New models
* [OpenChat](https://ollama.ai/library/openchat): An open-source chat model trained on a wide variety of data, surpassing ChatGPT on various benchmarks.
* [Neural-chat](https://ollama.ai/library/neural-chat): A new chat model from Intel.
* [Goliath](https://ollama.ai/library/goliath): A large chat model created by combining two fine-tuned versions of Llama 2 70B.

What's Changed
* JSON mode can now be used with `ollama run` (see the sketch after this list):
  * Pass the `--format json` flag, or
  * Use `/set format json` to switch the current chat session to JSON mode
* Prompts can now be passed in via standard input to `ollama run`. For example: `head -30 README.md | ollama run codellama "how do I install Ollama on Linux?"`
* `ollama create` now works with `OLLAMA_HOST` to build models using Ollama running on a remote machine
* Fixed crashes on Intel Macs
* Fixed issue where `ollama pull` progress would reverse when re-trying a failed connection
* Fixed issue where `ollama show --modelfile` would show an incorrect `FROM` command
* Fixed issue where word wrap wouldn't work when piping in data to `ollama run` via standard input
* Fixed permission denied issues when running `ollama create` on Linux
* Added FAQ [entry](https://github.com/jmorganca/ollama/blob/main/docs/faq.md) for proxy support on Linux
* Fixed installer error on Debian 12
* Fixed issue where `ollama push` would result in a 405 error
* `ollama push` will now return a better error when trying to push to a namespace the current user does not have access to
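
A minimal sketch of how these pieces fit together on the command line, assuming models such as `llama2` and `codellama` have already been pulled; the remote host and model name in the last example are placeholders:

```shell
# JSON mode: pass the flag up front (or use /set format json inside the session)
ollama run llama2 --format json "List three colors as a JSON array."

# Pipe a prompt in via standard input
cat main.py | ollama run codellama "explain what this code does"

# Build a model on a remote Ollama instance by pointing OLLAMA_HOST at it
OLLAMA_HOST=ollama.example.com:11434 ollama create mymodel -f Modelfile
```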

New Contributors
* dhiltgen made their first contribution in https://github.com/jmorganca/ollama/pull/1075
* dansreis made their first contribution in https://github.com/jmorganca/ollama/pull/1055
* breitburg made their first contribution in https://github.com/jmorganca/ollama/pull/1106
* enricoros made their first contribution in https://github.com/jmorganca/ollama/pull/1078
* huynle made their first contribution in https://github.com/jmorganca/ollama/pull/1115
* bnodnarb made their first contribution in https://github.com/jmorganca/ollama/pull/1098
* danemadsen made their first contribution in https://github.com/jmorganca/ollama/pull/1120
* pieroit made their first contribution in https://github.com/jmorganca/ollama/pull/1124
* yanndegat made their first contribution in https://github.com/jmorganca/ollama/pull/1151

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.9...v0.1.10

0.1.9

New models
* [Yi](https://ollama.ai/library/yi): A high-performing, bilingual model supporting both English and Chinese.

What's Changed
* [JSON mode](https://github.com/jmorganca/ollama/blob/main/docs/api.md#json-mode): instruct models to always return valid JSON when calling `/api/generate` by setting the `format` parameter to `json`
* Raw mode: bypass any templating done by Ollama by passing `{"raw": true}` to `/api/generate` (both modes are sketched after this list)
* Better error descriptions when downloading and uploading models with `ollama pull` and `ollama push`
* Fixed issue where Linux installer would encounter an error when running as the `root` user
* Improved progress bar design when running `ollama pull` and `ollama push`
* Fixed issue where running on a machine with less than 2GB of VRAM would be slow
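
A sketch of both modes against the local API, assuming the server is on the default `localhost:11434` and the models shown have been pulled; prompts are illustrative:

```shell
# JSON mode: the "response" field of the reply will be valid JSON
curl http://localhost:11434/api/generate -d '{
  "model": "yi",
  "prompt": "Name three primary colors. Respond using JSON.",
  "format": "json",
  "stream": false
}'

# Raw mode: Ollama applies no prompt template, so supply your own
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "[INST] Why is the sky blue? [/INST]",
  "raw": true,
  "stream": false
}'
```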

New Contributors
* pepperoni21 made their first contribution in https://github.com/jmorganca/ollama/pull/995
* lgrammel made their first contribution in https://github.com/jmorganca/ollama/pull/1020
* ej52 made their first contribution in https://github.com/jmorganca/ollama/pull/999
* David-Kunz made their first contribution in https://github.com/jmorganca/ollama/pull/996
* tjbck made their first contribution in https://github.com/jmorganca/ollama/pull/943
* omagdy7 made their first contribution in https://github.com/jmorganca/ollama/pull/1029
* upchui made their first contribution in https://github.com/jmorganca/ollama/pull/1034
* kevinhermawan made their first contribution in https://github.com/jmorganca/ollama/pull/1043
* amithkoujalgi made their first contribution in https://github.com/jmorganca/ollama/pull/1044
* mpldr made their first contribution in https://github.com/jmorganca/ollama/pull/1042
* aashish2057 made their first contribution in https://github.com/jmorganca/ollama/pull/992
* nickanderson made their first contribution in https://github.com/jmorganca/ollama/pull/1062

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.8...v0.1.9

0.1.8

New models
* [CodeBooga](https://ollama.ai/library/codebooga): A high-performing code instruct model created by merging two existing code models.
* [Dolphin 2.2 Mistral](https://ollama.ai/library/dolphin2.2-mistral): An instruct-tuned model based on Mistral. Version 2.2 is fine-tuned for improved conversation and empathy.
* [MistralLite](https://ollama.ai/library/mistrallite): A fine-tuned model based on Mistral with enhanced capabilities for processing long contexts.
* [Yarn Mistral](https://ollama.ai/library/yarn-mistral): An extension of [Mistral](https://ollama.ai/library/mistral) to support a context window of up to 128k tokens
* [Yarn Llama 2](https://ollama.ai/library/yarn-llama2): An extension of [Llama 2](https://ollama.ai/library/llama2) to support a context window of up to 128k tokens

What's Changed
* Ollama will now honour large context sizes on models such as `codellama` and `mistrallite` (see the sketch after this list)
* Fixed issue where repeated characters would be output on long contexts
* `ollama push` is now much faster. 7B models will push up to ~100MB/s and large models (70B+) up to 1GB/s if network speeds permit
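
For example, a larger context can be requested per call through the `num_ctx` option; a sketch, with the value and document placeholder illustrative:

```shell
# Ask mistrallite to use a 16k context window for this generation
curl http://localhost:11434/api/generate -d '{
  "model": "mistrallite",
  "prompt": "Summarize the following report: ...",
  "options": { "num_ctx": 16384 },
  "stream": false
}'
```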

New Contributors
* dloss made their first contribution in https://github.com/jmorganca/ollama/pull/948
* noahgitsham made their first contribution in https://github.com/jmorganca/ollama/pull/983

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.7...v0.1.8

0.1.7

What's Changed
* Fixed an issue when running `ollama run` where certain key combinations such as Ctrl+Space would lead to an unresponsive prompt
* Fixed issue in `ollama run` where retrieving the previous prompt from history would require two up arrow key presses instead of one
* Exiting `ollama run` with Ctrl+D will now put the cursor on the next line

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.6...v0.1.7

0.1.6

New models
* [Dolphin 2.1 Mistral](https://ollama.ai/library/dolphin2.1-mistral): an instruct-tuned model based on Mistral and trained on a dataset filtered to remove alignment and bias.
* [Zephyr Beta](https://ollama.ai/library/zephyr): The second model in the series based on Mistral, with strong performance that matches and in several categories exceeds Llama 2 70B. It's trained on a distilled dataset, improving grammar and yielding even better chat results.

What's Changed
* Pasting multi-line strings in `ollama run` is now possible
* Fixed various issues when writing prompts in `ollama run`
* The library models have been refreshed and revamped, including `llama2`, `codellama`, and more:
  * All `chat` or `instruct` models now support setting the `system` parameter, or the `SYSTEM` command in the `Modelfile` (see the sketch after this list)
  * Parameters (`num_ctx`, etc.) have been updated for library models
  * Slight performance improvements for all models
* Model storage can now be configured with `OLLAMA_MODELS`. See the [FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md#how-can-i-change-where-ollama-stores-models) for more info on how to configure this.
* `OLLAMA_HOST` will now default to port `443` when `https://` is specified, and port `80` when `http://` is specified
* Fixed trailing slashes causing an error when using `OLLAMA_HOST`
* Fixed issue where `ollama pull` would retry multiple times when out of space
* Fixed various `out of memory` issues when using Nvidia GPUs
* Fixed performance issue previously introduced on AMD CPUs
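
A sketch tying these together; the paths, hostname, and system prompt below are placeholders:

```shell
# Set a system prompt with the SYSTEM command in a Modelfile
cat > Modelfile <<'EOF'
FROM llama2
SYSTEM You are a concise assistant that answers in one sentence.
EOF
ollama create my-assistant -f Modelfile

# Store models somewhere other than the default directory
OLLAMA_MODELS=/data/ollama/models ollama serve

# With no explicit port, https:// now implies 443 and http:// implies 80
OLLAMA_HOST=https://ollama.example.com ollama run llama2
```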

New Contributors
* ajayk made their first contribution in https://github.com/jmorganca/ollama/pull/855

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.5...v0.1.6
