* Fix `ChainlitTaskCallbacks` -- use classmethod rather than staticmethod, so derived versions propagate to subtasks
* Handle deviant OpenAI LLM function-call generation (especially with gpt-4o), e.g. functions named "functions" or "functions xyz"
0.1.245
* Improve `LanceDocChatAgent` and Query Planner to handle deviations.
* Handle function-call oddities in OpenAI LLMs -- they can generate an unnecessary "name" field, which we override with the "request" field from the arguments.
0.1.244
- Tweaks to `LanceDocChatAgent` and related Agents (QueryPlanner, Critic, etc.) to accommodate deviations
- In multi-agent chats, show the total cost across all agents, in addition to the cumulative cost of the current agent
0.1.243
Update `pyproject.toml` to the latest DuckDuckGo search package (6.0.0), so we no longer hit the rate-limit error
0.1.242
Support OpenAI GPT-4o -- this is now the default LLM when no model is specified.
To explicitly specify the LLM, you can do, for example:
```python
import langroid.language_models as lm

llm = lm.OpenAIGPT(
    lm.OpenAIGPTConfig(chat_model=lm.OpenAIChatModel.GPT4o)
)
```
Recall that you can run most of the example scripts with a model specified via `-m gpt-4o`, and the pytests with `--m gpt-4o`
0.1.241
Gemini support via `litellm`. See the docs: https://langroid.github.io/langroid/tutorials/non-openai-llms/
Essentially, you need to:
- install `langroid` with the `litellm` extra, e.g. `pip install "langroid[litellm]"`
- set your `GEMINI_API_KEY` in the `.env` file or shell environment
- specify `chat_model="litellm/gemini/gemini-1.5-pro-latest"`