Ollama

Latest version: v0.4.7

0.3.7

New Models
* [Hermes 3](https://ollama.com/library/hermes3): Hermes 3 is the latest version of the flagship Hermes series of LLMs by Nous Research, which includes support for tool calling.
* [Phi 3.5](https://ollama.com/library/phi3.5): A lightweight AI model with 3.8 billion parameters, with performance overtaking similarly sized and larger models.
* [SmolLM](https://ollama.com/library/smollm): A family of small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset.

What's Changed
* CUDA 12 support: improving performance by up to 10% on newer NVIDIA GPUs
* Improved performance of `ollama pull` and `ollama push` on slower connections
* Fixed issue where setting `OLLAMA_NUM_PARALLEL` would cause models to be reloaded on lower VRAM systems
* Ollama on Linux is now distributed as a `tar.gz` file, which contains the `ollama` binary along with required libraries.
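
The new archive installs by extracting it into a prefix such as `/usr`. A sketch of the flow (download URL illustrative; check the release assets for the exact filename), with the extraction simulated locally using stand-in files so the resulting layout is visible without downloading anything:

```shell
# Real install (needs network and root), shown as a comment:
#   curl -L https://ollama.com/download/ollama-linux-amd64.tgz | sudo tar -C /usr -xzf -

# Local simulation of the same extraction:
mkdir -p pkg/bin pkg/lib/ollama prefix
touch pkg/bin/ollama pkg/lib/ollama/libggml.so   # stand-ins for the binary and its libraries
tar -czf ollama-linux-amd64.tgz -C pkg .
tar -xzf ollama-linux-amd64.tgz -C prefix        # yields prefix/bin/ollama and prefix/lib/ollama/
ls prefix/bin prefix/lib/ollama
```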

New Contributors
* pamelafox made their first contribution in https://github.com/ollama/ollama/pull/6345
* eust-w made their first contribution in https://github.com/ollama/ollama/pull/5964

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.3.6...v0.3.7

0.3.6

What's Changed
* Fixed issue where `/api/embed` would return an error instead of loading the model when the `input` field was not provided.
* `ollama create` can now import Phi-3 models from Safetensors
* Added progress information to `ollama create` when importing GGUF files
* Ollama will now import GGUF files faster by minimizing file copies
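
For reference, the GGUF import path that these changes speed up starts from a one-line Modelfile pointing at a local file (filename illustrative):

```
FROM ./mistral-7b.Q4_0.gguf
```

Running `ollama create my-model -f Modelfile` then builds the model, displaying the new progress information while the file is imported.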


**Full Changelog**: https://github.com/ollama/ollama/compare/v0.3.5...v0.3.6

0.3.5

What's Changed
* Fixed `Incorrect function` error when downloading models on Windows
* Fixed issue where temporary files would not be cleaned up
* Fixed a rare startup error caused by invalid model data
* Ollama will now provide an error instead of crashing on Windows when running models that are too large to fit into total memory

New Contributors
* jessegross made their first contribution in https://github.com/ollama/ollama/pull/6145
* rgbkrk made their first contribution in https://github.com/ollama/ollama/pull/5985
* Nicholas42 made their first contribution in https://github.com/ollama/ollama/pull/6235
* cognitivetech made their first contribution in https://github.com/ollama/ollama/pull/6305

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.3.4...v0.3.5

0.3.4

<img width="1548" alt="Screenshot 2024-08-06 at 8 16 44 PM" src="https://github.com/user-attachments/assets/9a53a40e-4649-4d67-8433-052a5941a5b6">


New embedding models
* [BGE-M3](https://ollama.com/library/bge-m3): a large embedding model from BAAI distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
* [BGE-Large](https://ollama.com/library/bge-large): a large embedding model trained on English text.
* [Paraphrase-Multilingual](https://ollama.com/library/paraphrase-multilingual): A multilingual embedding model trained on parallel data for 50+ languages.

New embedding API with batch support

Ollama now supports a new API endpoint `/api/embed` for embedding generation:


```shell
curl http://localhost:11434/api/embed -d '{
  "model": "all-minilm",
  "input": ["Why is the sky blue?", "Why is the grass green?"]
}'
```
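
A successful request returns the embeddings along with timing metrics; the response has roughly this shape (values illustrative, vectors abridged):

```json
{
  "model": "all-minilm",
  "embeddings": [
    [0.010, -0.009, "..."],
    [0.004, 0.022, "..."]
  ],
  "total_duration": 14143917,
  "load_duration": 1019500,
  "prompt_eval_count": 8
}
```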


This API endpoint supports new features:
* **Batches**: generate embeddings for several documents in one request
* **Normalized embeddings**: embeddings are now normalized, improving similarity results
* **Truncation**: inputs longer than the model's context length are now truncated by default; setting the new `truncate` parameter to `false` returns an error instead
* **Metrics**: responses include `load_duration`, `total_duration` and `prompt_eval_count` metrics

See the [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings) for more details and examples.
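
Because the embeddings come back normalized (unit length), cosine similarity between two of them reduces to a plain dot product. A minimal sketch in pure Python, using toy 3-D vectors in place of real `/api/embed` output:

```python
import math

def normalize(v):
    """Scale a vector to unit length, as /api/embed now does for its output."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Toy 3-D vectors standing in for embeddings (real ones have hundreds of dimensions).
a = normalize([0.2, 0.1, 0.9])
b = normalize([0.3, 0.0, 0.8])

# For unit-length vectors, cosine similarity is just the dot product.
similarity = sum(x * y for x, y in zip(a, b))
print(similarity)
```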

What's Changed
* Fixed initial slow download speeds on Windows
* NUMA support will now be autodetected by Ollama to improve performance
* Fixed issue where the `/api/embed` endpoint would sometimes return embedding results out of order

New Contributors
* av made their first contribution in https://github.com/ollama/ollama/pull/6147
* sryu1 made their first contribution in https://github.com/ollama/ollama/pull/6151
* rick-github made their first contribution in https://github.com/ollama/ollama/pull/6154

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.3.3...v0.3.4

0.3.3

What's Changed
* The `/api/embed` endpoint now returns statistics: `total_duration`, `load_duration`, and `prompt_eval_count`
* Added usage metrics to the `/v1/embeddings` OpenAI compatibility API
* Fixed issue where `/api/generate` would respond with an empty string if provided a `context`
* Fixed issue where `/api/generate` would return an incorrect value for `context`
* `/show modelfile` will now render `MESSAGE` commands correctly

New Contributors
* slouffka made their first contribution in https://github.com/ollama/ollama/pull/6115

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.3.2...v0.3.3

0.3.2

What's Changed
* Fixed issue where `ollama pull` would not resume download progress
* Fixed issue where `phi3` would report an error on older versions

New Contributors
* longseespace made their first contribution in https://github.com/ollama/ollama/pull/6096

**Full Changelog**: https://github.com/ollama/ollama/compare/v0.3.1...v0.3.2
