Introducing support for Ollama as an LLM inference provider (API)
Requirements and README files can now be generated locally by setting the provider option to `ollama`:
`miniogre run --provider ollama`
Note that, by default, Ollama uses the `mistral:7b` model, whose generated output (requirements and README) is of lower quality than that produced by OpenAI's GPT-4 (`--provider openai`, the default option).
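For readers curious about what a local Ollama call looks like under the hood, here is a minimal sketch that talks to a locally running Ollama server via its REST API. This is illustrative only: the prompt, function name, and model argument are assumptions, and miniogre's own provider plumbing and prompts differ.

```python
import requests

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def generate_readme_section(source_summary: str, model: str = "mistral:7b") -> str:
    """Ask a local Ollama model to draft a README overview.

    Illustrative sketch only; not miniogre's actual implementation.
    """
    payload = {
        "model": model,
        "prompt": f"Write a short README overview for this project:\n{source_summary}",
        "stream": False,  # return the full completion as one JSON response
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    # Assumes Ollama is running locally and `mistral:7b` has been pulled.
    print(generate_readme_section("A CLI that generates requirements and README files."))
```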
This release introduces the following features:
- [feat: Add with_readme option to miniogre run](https://github.com/ogre-run/miniogre/commit/1f9e2d649f67dd312a04cf741d55f7cb4b35b658)
- [feat: Add ollama to provider options for miniogre readme](https://github.com/ogre-run/miniogre/commit/4816786f5e8524ea66002afd0e06379b843c4221)
- [Update gptify version to 0.3.2.](https://github.com/ogre-run/miniogre/commit/126d95b6f116eb5888753dcd2705241580d9254d)
- [feat: Add command to display version.](https://github.com/ogre-run/miniogre/commit/d53535c359a591589f9674ba6c2ff1a2db3b6813)
- [Merge pull request #10 from ogre-run/feat-ollama](https://github.com/ogre-run/miniogre/commit/a23d1de8daac0d31bf5957a2659eb52cc2264efa) ([PR #10](https://github.com/ogre-run/miniogre/pull/10))