- All models now expose ``<model>.distributed_chat_async``, which can be used in servers without blocking the main event loop. This is a much needed UX improvement for async server code.
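A minimal sketch of the non-blocking pattern this enables. The ``distributed_chat_async``-style coroutine and ``blocking_chat`` stand-in below are hypothetical stand-ins, not tuneapi's actual implementation; the point is that a blocking chat call moved off the event loop lets other requests keep being served.

```python
import asyncio

def blocking_chat(prompt: str) -> str:
    # stand-in for a synchronous network call to an LLM API
    return f"echo: {prompt}"

async def distributed_chat_async(prompt: str) -> str:
    # run the blocking call in a worker thread so the event loop stays free
    return await asyncio.to_thread(blocking_chat, prompt)

async def main() -> list[str]:
    # several requests make progress concurrently instead of serially
    return await asyncio.gather(
        *(distributed_chat_async(p) for p in ["a", "b"])
    )

print(asyncio.run(main()))  # ['echo: a', 'echo: b']
```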
0.6.3
-----
- ``<model>.distributed_chat`` now accepts extra arguments that are forwarded to ``post_logic``.
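A sketch of the forwarding pattern, under assumed names: ``distributed_chat``, ``post_logic``, and ``tag_response`` here are illustrative stand-ins, not tuneapi's real signatures. Extra positional arguments given to the chat helper are handed on to the post-processing callback for every response.

```python
from typing import Callable

def distributed_chat(prompts: list, post_logic: Callable, *args) -> list:
    # the same extra args are passed to post_logic for each model response
    return [post_logic(f"resp:{p}", *args) for p in prompts]

def tag_response(resp: str, tag: str) -> str:
    # example post_logic: prefix each response with a caller-supplied tag
    return f"[{tag}] {resp}"

print(distributed_chat(["hi"], tag_response, "v0.6.3"))  # ['[v0.6.3] resp:hi']
```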
0.6.2
-----
- New set of utils in ``tuneapi.utils`` called ``prompt`` to help with the basics of prompting.
0.6.1
-----
- Package now uses ``fire==0.7.0``
0.6.0
-----
- Added ``distributed_chat`` functionality in ``tuneapi.apis.turbo``. In all APIs, look for the ``model.distributed_chat()`` method. This enables **fault tolerant LLM API calls**.
- Moved ``tuneapi.types.experimental`` to ``tuneapi.types.evals``.
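An illustrative sketch of the idea behind fault-tolerant distributed calls, not tuneapi's implementation: fan out chat requests over a thread pool and retry each request a few times so transient API failures do not fail the whole batch. All names below are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def call_with_retry(fn, arg, retries: int = 3):
    # retry transient failures before giving up on this one request
    last_err = None
    for _ in range(retries):
        try:
            return fn(arg)
        except Exception as e:
            last_err = e
    raise last_err

def distributed_chat(fn, prompts, retries: int = 3, workers: int = 4):
    # fan out over a thread pool; results come back in prompt order
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(call_with_retry, fn, p, retries) for p in prompts]
        return [f.result() for f in futures]

# toy flaky model: fails once per prompt, then succeeds
seen = set()
def flaky(prompt):
    if prompt not in seen:
        seen.add(prompt)
        raise RuntimeError("transient API error")
    return prompt.upper()

print(distributed_chat(flaky, ["a", "b"]))  # ['A', 'B']
```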
0.5.13
------
- ``tuneapi.types.ModelInterface`` has an ``extra_headers`` attribute in it.