* Upgrade from OpenAI 0.x to 1.1.1
* litellm is temporarily disabled due to the openai version conflict
0.1.109
DocChatAgent/VectorStore: Improve handling of retrieved context-window overlap when n_neighbor_chunks > 0: compute connected components of overlapping windows correctly, and avoid splitting connected groups, so surrounding context is retained
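The idea above can be sketched in plain Python (a simplified illustration, not langroid's actual implementation; the function name and index-based representation are assumptions): each retrieved chunk index is expanded by n_neighbor_chunks on each side, and windows that overlap or touch are merged into one connected group instead of being split.

```python
def merge_neighbor_windows(hit_indices, n_neighbors, num_chunks):
    """Expand each retrieved chunk index by n_neighbors on each side,
    then merge overlapping or adjacent windows into connected groups,
    so no contiguous run of context is split."""
    windows = sorted(
        (max(0, i - n_neighbors), min(num_chunks - 1, i + n_neighbors))
        for i in hit_indices
    )
    merged = []
    for lo, hi in windows:
        if merged and lo <= merged[-1][1] + 1:
            # window overlaps or touches the previous group: extend it
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [tuple(g) for g in merged]
```

For example, hits at chunks 2, 4, and 10 with one neighbor on each side yield the groups (1, 5) and (9, 11), since the windows around 2 and 4 share chunk 3.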
DocChatAgent: new re-rankers:
* rerank_with_diversity: increase diversity among top chunks
* rerank_to_periphery: place the best chunks at the periphery (mitigates the lost-in-the-middle effect)
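A minimal sketch of what these two re-rankers might do (assumed, simplified logic; not langroid's actual code — the similarity callable and diversity-only selection are assumptions): periphery re-ranking alternately assigns best-first chunks to the front and back of the list, and diversity re-ranking greedily picks the chunk least similar to those already chosen.

```python
def rerank_to_periphery(chunks):
    """Reorder best-first chunks so the strongest land at the start and
    end, and the weakest in the middle ('lost in the middle' mitigation)."""
    front, back = [], []
    for i, chunk in enumerate(chunks):
        (front if i % 2 == 0 else back).append(chunk)
    return front + back[::-1]

def rerank_with_diversity(chunks, similarity, top_k):
    """MMR-style greedy selection: start from the best-ranked chunk, then
    repeatedly add the chunk least similar to those already selected."""
    if not chunks:
        return []
    selected = [chunks[0]]
    remaining = list(chunks[1:])
    while remaining and len(selected) < top_k:
        best = min(remaining, key=lambda c: max(similarity(c, s) for s in selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With five best-first chunks, rerank_to_periphery yields [1st, 3rd, 5th, 4th, 2nd]: the top two end up at the two edges of the context window.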
0.1.106
Class method langroid.language_models.base.LanguageModel.usage_cost_summary(): returns a summary of token usage and cost, aggregated by LLM model
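A plain-Python sketch of the kind of per-model aggregation such a summary involves (hypothetical class and field names; not langroid's actual implementation):

```python
from collections import defaultdict

class UsageTracker:
    """Hypothetical sketch: accumulate token usage and cost per model,
    then report a summary aggregated by model name."""

    def __init__(self):
        self.usage = defaultdict(
            lambda: {"prompt_tokens": 0, "completion_tokens": 0, "cost": 0.0}
        )

    def record(self, model, prompt_tokens, completion_tokens, cost):
        entry = self.usage[model]
        entry["prompt_tokens"] += prompt_tokens
        entry["completion_tokens"] += completion_tokens
        entry["cost"] += cost

    def usage_cost_summary(self):
        # plain dict keyed by model name, with aggregated totals
        return dict(self.usage)
```

Each LLM call records its usage; the summary then shows totals per model across all calls.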
0.1.105
Globally track and report token usage & cost by model name. See for example:
* examples/docqa/doc-chat-multi-llm.py
* tests/main/test_token_usage.py