Ollama


0.1.2

New Models
* [Zephyr](https://ollama.ai/library/zephyr): a fine-tuned 7B version of Mistral, trained on a mix of publicly available synthetic datasets, that performs as well as Llama 2 70B on many benchmarks
* [Mistral OpenOrca](https://ollama.ai/library/mistral-openorca): a 7-billion-parameter model fine-tuned on top of Mistral 7B using the OpenOrca dataset

Examples
Ollama's [examples](https://github.com/jmorganca/ollama/tree/main/examples) have been updated with some new examples:
* [Ask the mentors](https://github.com/jmorganca/ollama/tree/main/examples/typescript-mentors): a multi-user conversation app written in TypeScript
* [TypeScript LangChain](https://github.com/jmorganca/ollama/tree/main/examples/langchain-typescript-simple): a simple example of using Ollama with LangChainJS and TypeScript


What's Changed
* Download speeds for `ollama pull` have been significantly improved, from 60MB/s to over 1.5GB/s (25x faster) on fast network connections
* The API now supports non-streaming responses. Set the `stream` parameter to `false` and endpoints will return all data in a single response:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```
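
With streaming disabled, the whole completion arrives as one JSON object, so the generated text can be read straight from its `response` field. A minimal sketch, assuming `jq` is installed:

```
curl -s -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}' | jq -r '.response'
```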

* Ollama can now be used with HTTP proxies (using `HTTP_PROXY=http://<proxy>`) and HTTPS proxies (using `HTTPS_PROXY=https://<proxy>`); see the sketch after this list
* Fixed `token too long` error when generating a response
* `q8_0`, `q5_0`, `q5_1`, and `f32` models will now use GPU on Linux
* Revised help text in `ollama run` to make it easier to read
* Renamed the runner subprocess to `ollama-runner`
* `ollama create` will now show feedback when reading model metadata
* Fixed a `not found` error that would show when running `ollama pull`
* Improved video memory allocation on Linux to fix errors when using Nvidia GPUs
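
A brief sketch of the proxy setup: the variable must be visible to the Ollama server process, since it is the server that downloads models. The proxy address below is hypothetical:

```
# Hypothetical proxy address; substitute your own host and port.
HTTPS_PROXY=https://proxy.example.com:3128 ollama serve
```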

New Contributors
* xyproto made their first contribution in https://github.com/jmorganca/ollama/pull/705
* konsalex made their first contribution in https://github.com/jmorganca/ollama/pull/741

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.1...v0.1.2

0.1.1

What's Changed
* Cancellable responses: `Ctrl+C` will now cancel responses when running `ollama run`
* Exit `ollama run` sessions with `Ctrl+D` or `/bye`
* Improved error messages for unknown `/slash` commands when using `ollama run`
* Various improvements to the Linux install script for distro compatibility and to fix bugs
* Fixed install issues on Fedora
* Fixed issue where specifying the `library/` prefix in `ollama run` would cause an error
* Fixed highlight color for placeholder text in `ollama run`
* Fixed issue where auto updater would not restart when clicking "Restart to Update"
* Ollama will now clean up subdirectories in `~/.ollama/models`
* Ollama will now show a default message when `ollama show` results in an empty message

New Contributors
* aaroncoffey made their first contribution in https://github.com/jmorganca/ollama/pull/629
* lstep made their first contribution in https://github.com/jmorganca/ollama/pull/621
* JayNakrani made their first contribution in https://github.com/jmorganca/ollama/pull/632
* Jimexist made their first contribution in https://github.com/jmorganca/ollama/pull/664
* hallh made their first contribution in https://github.com/jmorganca/ollama/pull/663

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.1.0...v0.1.1

0.1.0

Ollama for Linux
<img src="https://github.com/jmorganca/ollama/assets/251292/89f8526e-866a-4e19-a73c-3ff850d45c76" height="220">

Ollama for Linux is now available, with GPU acceleration enabled out of the box for Nvidia GPUs.

💯 Ollama will run on cloud servers with multiple GPUs attached
🤖 Ollama will run on WSL 2 with GPU support
😍 Ollama maximizes the number of GPU layers to load to increase performance without crashing
🤩 Ollama supports everything from CPU-only machines and small hobby gaming GPUs up to powerful workstation cards like the H100

Download


```
curl https://ollama.ai/install.sh | sh
```


Manual [install steps](https://github.com/jmorganca/ollama/blob/main/docs/linux.md) are also available.

Changelog
* Ollama will now automatically offload as much of the running model as is supported by your GPU for maximum performance without any crashes
* Fix issue where characters would be erased when running `ollama run`
* Added a new community project by TwanLuttik in https://github.com/jmorganca/ollama/pull/574

New Contributors
* TwanLuttik made their first contribution in https://github.com/jmorganca/ollama/pull/574

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.21...v0.1.0

0.0.21

* Fixed an issue where empty responses would be returned if `template` was provided in the API, but not `prompt` (see the sketch after this list)
* Fixed an issue where the "Send a message" placeholder would show when writing multi line prompts with `ollama run`
* Fixed an issue where multi-line prompts in `ollama run` wouldn't be submitted when pressing Return
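
As a sketch of the request shape this fix covers, the call below supplies `template` without `prompt`; before the fix it could return an empty response. The model name and template text are illustrative:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "template": "You are a concise assistant. Explain why the sky is blue."
}'
```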

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.20...v0.0.21

0.0.20

What's Changed
* `ollama run` has a new & improved experience:
  * Models will now be loaded immediately, making even the first prompt much faster
  * Added hint text
  * Ollama will now fit words within the available width of the terminal for better readability
* `OLLAMA_HOST` now supports IPv6 hostnames (see the sketch after this list)
* `ollama run` will now automatically pull models if they don't exist when using a remote instance of Ollama
* Sending an empty `prompt` field to `/api/generate` will now load the model so the next request is fast (also shown in the sketch after this list)
* Fixed an issue where `ollama create` would not correctly detect Falcon model sizes
* Added a simple Python client for Ollama in `api/client.py` by pdevine
* Improvements to showing progress on `ollama pull` and `ollama push`
* Fixed an issue for adding empty layers with `ollama create`
* Fixed an issue for running Ollama on Windows (compiled from source)
* Fixed an error when running `ollama push`
* Improved the readability of the community projects list, by jamesbraza
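
A short sketch tying a few of the items above together; the bracketed IPv6 loopback address and the model name are assumptions:

```
# Point the CLI at a server on the IPv6 loopback; brackets separate
# the address from the port.
export OLLAMA_HOST=[::1]:11434

# On a remote instance, the model is pulled automatically if missing.
ollama run llama2

# Preload a model via the API: an empty prompt loads it so the next
# request responds quickly.
curl http://[::1]:11434/api/generate -d '{"model": "llama2"}'
```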


New Contributors
* jamesbraza made their first contribution in https://github.com/jmorganca/ollama/pull/550

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.19...v0.0.20

0.0.19

What's Changed
* Updated Docker image for Ollama: `docker pull ollama/ollama` (see the example after this list)
* Ability to import and use models in the GGUF file format
* Fixed issue where `ollama push` would error on long-running uploads
* Ollama will now automatically clean up unused data locally
* Improved build instructions, by apepper
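
A minimal sketch of running the updated image, persisting pulled models in a named Docker volume; the model name is illustrative:

```
# Start the server, keeping models in the `ollama` volume.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run a model inside the container.
docker exec -it ollama ollama run llama2
```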

New Contributors
* apepper made their first contribution in https://github.com/jmorganca/ollama/pull/482

**Full Changelog**: https://github.com/jmorganca/ollama/compare/v0.0.18...v0.0.19
