First Release: https://pypi.org/project/oshepherd/0.0.3/
* Basic **API** behavior: serve a single HTTP endpoint mirroring the original `generate()` Ollama server [endpoint](https://github.com/ollama/ollama/blob/114c932a8e872846fc714353c65d041feb886027/server/routes.go#L996). On each incoming request, the API queues a message with the request parameters in RabbitMQ through Celery, waits for the Celery task to finish, extracts the returned response from the Redis backend, and relays the remote Ollama server's response back to the HTTP Ollama client.
* Basic **WORKER** behavior: consume messages queued in RabbitMQ through Celery, fire a `generate()` request against the local Ollama server running on the worker instance (using the Ollama Python client), and store the response in the Redis backend.