Langroid

Latest version: v0.50.1


Page 43 of 71

0.1.193

Support `ollama` [OpenAI API compatibility](https://ollama.com/blog/openai-compatibility), i.e., the ollama LLM server now mimics the OpenAI API, so any code that worked with OpenAI LLMs will now work with a simple change of `api_base`.

Langroid takes care of setting the `api_base` behind the scenes when you specify the local LLM using `chat_model = "ollama/mistral"`, e.g.:

```python
import langroid.language_models as lm
import langroid as lr

llm_config = lm.OpenAIGPTConfig(
    chat_model="ollama/mistral:7b-instruct-v0.2-q8_0",
    chat_context_length=16_000,  # adjust based on model
)
agent = lr.ChatAgent(lr.ChatAgentConfig(llm=llm_config))
...
```
See more in this tutorial: [Local LLM Setup with Langroid](https://langroid.github.io/langroid/tutorials/local-llm-setup/).
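The `"provider/model"` naming convention above can be sketched as follows. This is a hypothetical helper, not Langroid's actual implementation; the only facts assumed from outside this page are ollama's default local endpoint (`http://localhost:11434`) and its OpenAI-compatible `/v1` path, per the ollama blog post linked above.

```python
def resolve_api_base(
    chat_model: str,
    default_base: str = "https://api.openai.com/v1",
) -> tuple[str, str]:
    """Map a 'provider/model' string to (api_base, model_name).

    Illustrative sketch only: a chat_model prefixed with 'ollama/'
    is routed to the local ollama server's OpenAI-compatible
    endpoint, with the prefix stripped off the model name.
    """
    if chat_model.startswith("ollama/"):
        # ollama serves an OpenAI-compatible API locally by default
        return "http://localhost:11434/v1", chat_model.removeprefix("ollama/")
    # anything else is treated as a regular OpenAI model name
    return default_base, chat_model


print(resolve_api_base("ollama/mistral:7b-instruct-v0.2-q8_0"))
```

With this convention, user code never sets `api_base` directly; switching between a hosted OpenAI model and a local ollama model is just a change of the `chat_model` string.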

0.1.192

LanceQueryPlanAgent: fix the fallback method to detect `curr_query_plan`

0.1.191

Enhanced LanceDocChatAgent ingestion: handle new metadata fields, and better schema extraction for the SQL filter query in LanceRagTask

0.1.190

* Chainlit - fix log level
* `ChainlitCallbackConfig` can now be passed to `ChainlitAgentCallbacks` and `ChainlitTaskCallbacks` (see [example](https://github.com/langroid/langroid/blob/main/examples/chainlit/chat-with-task.py))

0.1.189

Chainlit: new callback `show_start_response()` to display a spinner at the start of a long-running process

0.1.188

Minor modifications to Chainlit rendering of LLM and Agent responses

