Langroid

Latest version: v0.2.5


0.1.195

Make batch utilities available via `lr`:

- `run_batch_tasks`
- `llm_response_batch`
- `agent_response_batch`

E.g. you can now do:


```python
import langroid as lr
...
lr.run_batch_tasks(...)
...
```
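As a conceptual illustration only (not Langroid's actual implementation or signatures), a batch-task utility like `run_batch_tasks` boils down to running one task per input concurrently and collecting the results in input order. A minimal stdlib-only sketch, with a stand-in coroutine `fake_task` in place of a real agent task:

```python
import asyncio

# Illustrative stand-in for an agent task: here, just a function of the input.
async def fake_task(item: str) -> str:
    await asyncio.sleep(0)  # yield control, as a real LLM call would
    return item.upper()

async def run_batch(items: list[str]) -> list[str]:
    """Run one task per item concurrently; results keep input order."""
    return await asyncio.gather(*(fake_task(x) for x in items))

results = asyncio.run(run_batch(["hello", "world"]))
print(results)  # ['HELLO', 'WORLD']
```

`asyncio.gather` preserves the order of its arguments, which is why the batch results line up with the inputs even though the tasks run concurrently.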

0.1.194

Minor:
* Fix in `Task.py`: when `interactive=True`, set `agent.default_human_response = None`
* `openai_gpt.py`: when fetching available models, catch exceptions (e.g. during an OpenAI outage) so we don't fail when we only want to use a local LLM

0.1.193

Support `ollama` [OpenAI API compatibility](https://ollama.com/blog/openai-compatibility): the ollama LLM server now mimics the OpenAI API, so any code that used to work with OpenAI LLMs will now work with a simple change of `api_base`.

Langroid takes care of setting the `api_base` behind the scenes when you specify the local LLM with `chat_model = "ollama/mistral"`, e.g.:

```python
import langroid.language_models as lm
import langroid as lr

llm_config = lm.OpenAIGPTConfig(
    chat_model="ollama/mistral:7b-instruct-v0.2-q8_0",
    chat_context_length=16_000,  # adjust based on model
)
agent = lr.ChatAgent(lr.ChatAgentConfig(llm=llm_config))
...
```

See more in this tutorial on [Local LLM Setup with Langroid](https://langroid.github.io/langroid/tutorials/local-llm-setup/).

0.1.192

`LanceQueryPlanAgent`: fix the fallback method to detect `curr_query_plan`

0.1.191

Enhanced `LanceDocChatAgent` ingestion: handle new metadata fields, and better schema extraction for the SQL filter query in `LanceRagTask`

0.1.190

* Chainlit: fix log level
* `ChainlitCallbackConfig` can now be passed to `ChainlitAgentCallbacks` and `ChainlitTaskCallbacks` (see [this example](https://github.com/langroid/langroid/blob/main/examples/chainlit/chat-with-task.py))
