Miniogre

Latest version: v0.5.0

0.5.0

Introducing support for Ollama as an LLM inference provider (API)

Now the generation of requirements and README files can be done locally by setting the provider option to `ollama`:


```bash
miniogre run --provider ollama
```


Note that, by default, Ollama uses the `mistral:7b` model, whose generated output (requirements and README) is lower in quality than what OpenAI's GPT-4 delivers (`--provider openai`, the default option).
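
If the model is not already cached locally, pull it first. A minimal sketch, assuming Ollama is installed and its server is running (see https://ollama.com):

```bash
# Fetch the default model miniogre uses with the ollama provider
ollama pull mistral:7b

# Generate requirements and README locally, with no OpenAI API key needed
miniogre run --provider ollama
```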

This release introduces the following features; a short usage sketch follows the list:

- [feat: Add with_readme option to miniogre run](https://github.com/ogre-run/miniogre/commit/1f9e2d649f67dd312a04cf741d55f7cb4b35b658)
- [feat: Add ollama to provider options for miniogre readme](https://github.com/ogre-run/miniogre/commit/4816786f5e8524ea66002afd0e06379b843c4221)
- [Update gptify version to 0.3.2.](https://github.com/ogre-run/miniogre/commit/126d95b6f116eb5888753dcd2705241580d9254d)
- [feat: Add command to display version.](https://github.com/ogre-run/miniogre/commit/d53535c359a591589f9674ba6c2ff1a2db3b6813)
- Merge [pull request #10](https://github.com/ogre-run/miniogre/pull/10) from ogre-run/feat-ollama ([commit](https://github.com/ogre-run/miniogre/commit/a23d1de8daac0d31bf5957a2659eb52cc2264efa))
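
Based on the commit messages above, a hedged usage sketch: `miniogre readme --provider ollama` follows directly from the commits, while the `--with_readme` flag and the `version` subcommand spellings are assumptions inferred from the commit messages, not confirmed names:

```bash
# Generate only the README, using a local Ollama model
miniogre readme --provider ollama

# Assumption: flag spelling inferred from "Add with_readme option to miniogre run"
miniogre run --with_readme

# Assumption: subcommand name inferred from "Add command to display version"
miniogre version
```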

0.4.2

Patches for:

- SBOM generation
- verbose output for Docker builds

0.4.1

Fix: add `--no-container` option.

It should have been included in the previous release.

0.4.0

This release implements:

- A fix for [issue #6](https://github.com/ogre-run/miniogre/issues/6)
- Better handling of requirements installation using uv (packages are now installed into the container's system Python; no venv)
- A `--verbose` option to visualize the Docker build log in real time
- A `--no-container` option that lets the user control whether the container is spun up. This is useful when the intention is only to generate the files (requirements.txt, Dockerfile, SBOM); see the example below.
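
A short sketch of the two new options in practice:

```bash
# Only generate requirements.txt, Dockerfile, and the SBOM; skip the container
miniogre run --no-container

# Build and start the container, streaming the Docker build log in real time
miniogre run --verbose
```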

**Full Changelog**: https://github.com/ogre-run/miniogre/compare/v0.3.0...v0.4.0

0.3.0

Add support for three new LLM inference providers besides OpenAI; see the usage example after the list:

- `mistral`: [mistral.ai](https://mistral.ai)
- `groq`: [groq](https://groq.com)
- `octoai`: [octoai](https://octo.ai)
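
Each is selected the same way as `openai`, via `--provider`. A sketch, assuming the matching API credentials are already exported; the exact environment variable names are assumptions, so check the miniogre docs:

```bash
# Assumed variable names; verify against the miniogre documentation
export MISTRAL_API_KEY="..."   # or GROQ_API_KEY / OCTOAI_TOKEN

miniogre run --provider mistral
miniogre run --provider groq
miniogre run --provider octoai
```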

0.2.0

- New requirements generation pipeline: a rough estimate is generated locally, then sent to an LLM (currently OpenAI) to be cleaned up and fixed.
