- fixed streaming
- multiple little bugfixes
- option to set the expected generated token count as a hint for the LLM selector
- add argument schema option for llm_function (see the sketch below)
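
The argument schema option is easiest to picture as an explicit JSON schema in the OpenAI function-calling `parameters` format supplied alongside the function, instead of being inferred from its signature. The snippet below only illustrates that idea; how the schema is actually attached to `llm_function` is an assumption here, not the library's documented API.

```python
# Illustration only: an explicit argument schema in the OpenAI
# function-calling "parameters" format. Passing it to llm_function
# (e.g. via a hypothetical schema= keyword) is an assumption.
import json

get_weather_schema = {
    "type": "object",
    "properties": {
        "location": {
            "type": "string",
            "description": "City and country, e.g. 'Prague, CZ'",
        },
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["location"],
}

def get_weather(location: str, unit: str = "celsius") -> str:
    """Return a dummy forecast for the given location."""
    return f"Sunny, 22 degrees {unit} in {location}"

# A model that decides to call the function returns its arguments as a JSON
# string shaped by the schema above:
raw_arguments = '{"location": "Prague, CZ"}'
print(get_weather(**json.loads(raw_arguments)))
```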
0.0.10
- async streaming callback support
- LlmSelector for automatic selection of the LLM based on the model context window and prompt length (see the sketch below)
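
Conceptually, such a selector measures the prompt, adds headroom for the expected generation, and picks the smallest model whose context window fits. The sketch below is a simplified stand-in for that logic, not the actual LlmSelector implementation; the model names, window sizes, and the 256-token default are assumptions.

```python
# Simplified stand-in for context-window-based model selection; not the
# library's LlmSelector, just the idea behind it.
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    context_window: int  # maximum number of tokens the model accepts

def select_model(
    options: list[ModelOption],
    prompt_tokens: int,
    expected_generation_tokens: int = 256,  # the "hint" added in the newer release
) -> ModelOption:
    """Pick the smallest model whose window fits prompt + expected output."""
    needed = prompt_tokens + expected_generation_tokens
    for option in sorted(options, key=lambda o: o.context_window):
        if option.context_window >= needed:
            return option
    raise ValueError("no model has a large enough context window")

models = [
    ModelOption("gpt-3.5-turbo", 4096),        # assumed example models
    ModelOption("gpt-3.5-turbo-16k", 16384),
]
print(select_model(models, prompt_tokens=5000).name)  # -> gpt-3.5-turbo-16k
```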
0.0.9
- fix some scenarios where the LLM response raised an error
- save AIMessage with a function call in the output wrapper
- fix logging that we are out of stream context when streaming is not on
0.0.8
- support for parsing via OpenAI functions 🚀
- support for controlling function_call (see the sketch below)
- add BIG_CONTEXT prompt type
- ton of bugfixes
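
For orientation, controlling function_call means telling the model whether it may, must not, or must call one specific function. The sketch below shows the underlying OpenAI chat-completions request shape this builds on; it is not this library's wrapper API, and the function and model names are made up.

```python
# Sketch of the raw OpenAI "functions" interface that the parsing support
# builds on; not this library's wrapper API.
import json

functions = [{
    "name": "extract_person",
    "description": "Extract a person's name and age from a piece of text",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name", "age"],
    },
}]

request_payload = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "John is 30 years old."}],
    "functions": functions,
    # function_call controls the behaviour: "auto", "none",
    # or {"name": ...} to force a call to one specific function.
    "function_call": {"name": "extract_person"},
}

# The reply's message.function_call.arguments is a JSON string that can be
# parsed directly into structured output, e.g.:
arguments = '{"name": "John", "age": 30}'
print(json.loads(arguments))
```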
0.0.7
- fixed streaming capture
- better handling of missing docs for llm_function