Set default tokenizer in case tiktoken.encoding_for_model() fails
0.1.247
Agent: during init, set `config.parsing.token_encoding_model` to the LLM's model name, so we use the tokenizer specific to that LLM, which helps with accurate token-cost computation (currently this only affects OpenAI LLMs).
0.1.246
* Fix ChainlitTaskCallbacks: use a classmethod rather than a staticmethod, so any derived versions propagate to subtasks
* Handle deviant OpenAI LLM function-call generation (especially with gpt-4o), e.g. functions named "functions" or "functions xyz"
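The classmethod-vs-staticmethod point can be seen in a small sketch (class and attribute names here are illustrative stand-ins, not the actual ChainlitTaskCallbacks code):

```python
class Task:
    """Minimal stand-in for a task with subtasks (illustrative only)."""
    def __init__(self, subtasks=None):
        self.subtasks = subtasks or []
        self.callbacks = None

class TaskCallbacks:
    @classmethod
    def install(cls, task: "Task") -> None:
        # Because this is a classmethod, `cls` is the *derived* class when a
        # subclass calls install(), so subtasks receive the derived callbacks
        # too -- the point of the staticmethod -> classmethod fix.
        task.callbacks = cls()
        for sub in task.subtasks:
            cls.install(sub)

class MyCallbacks(TaskCallbacks):
    pass
```

With a staticmethod, the base class would be hard-coded in the recursive call, and `MyCallbacks.install(root)` would silently attach base-class callbacks to every subtask.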
0.1.245
* Improve LanceDocChatAgent and Query Planner to handle deviations.
* Handle function-call oddities in OpenAI LLMs -- they can generate an unnecessary "name" field, which we override with the "request" field from the arguments.
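The name-override fix in the second bullet can be sketched as a small normalization step; the dict layout mirrors the OpenAI function-call message format, but the function name and exact conditions are assumptions rather than the actual Langroid logic:

```python
import json

def normalize_function_call(fc: dict) -> dict:
    """If the LLM emitted a bogus function name (empty, "functions",
    or "functions xyz"), override it with the "request" field from the
    call's arguments (sketch; real field handling may differ)."""
    args = fc.get("arguments", {})
    if isinstance(args, str):
        args = json.loads(args)  # OpenAI sends arguments as a JSON string
    name = fc.get("name", "")
    if name == "" or name == "functions" or name.startswith("functions "):
        request = args.get("request")
        if request:
            fc = {**fc, "name": request}
    return fc
```

Well-formed calls pass through untouched; only the deviant shapes get rewritten.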
0.1.244
- Tweaks to LanceDocChatAgent and related agents (QueryPlanner, Critic, etc.) to accommodate deviations
- In multi-agent chats, show the total cost across all agents, in addition to the cumulative cost of the current agent
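The chat-wide total in the second bullet is a simple aggregation over per-agent cumulative costs; a minimal sketch with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class AgentCost:
    name: str
    cumulative_cost: float  # cost this agent has accrued so far (USD)

def chat_total_cost(agents: list[AgentCost]) -> float:
    # Chat-wide total: sum of every agent's cumulative cost,
    # reported alongside the current agent's own cumulative cost.
    return sum(a.cumulative_cost for a in agents)
```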
0.1.243
Update `pyproject.toml` to the latest DuckDuckGo package (6.0.0), so we no longer get the rate-limit error