Langroid

Latest version: v0.2.5



0.1.252

Minor: validate `agent.name` format; friendlier task infinite-loop warning that points out where to configure the detection.

0.1.251

* (Exact) Infinite-loop detection; see [Task._maybe_infinite_loop](https://github.com/langroid/langroid/blob/main/langroid/agent/task.py).
An `InfiniteLoopException` is thrown when a possible infinite loop is detected.
By default, loops of up to 10 messages are detected (configurable). Note that "exact" loop detection means this only detects exactly-repeating cycles of k messages (k <= 10), e.g. `a n i m a l m a l m a l m a l ...`, where the cycle `m a l` repeats.
In other words, we do not detect cases where the LLM (or another entity) repeatedly generates messages that are "essentially the same, but slightly different".

Configs for infinite-loop detection are in `TaskConfig` in `task.py`.

* The global `settings.max_turns` (default -1, meaning disabled) can additionally be used to guard against infinite loops. In pytest's `conftest.py` it is set to 100, so any task runs at most 100 turns.

* Tolerant tool detection when `request` field is inside `properties` field

* "" message addressing: any entity can address any other entity of the agent ("llm", "user", "agent"), or any other sub-task by name. This is an alternative to using "SEND_TO:<entity_name>", or using the `RecipientTool` to address messages to a specific recipient. The advantage of using RecipientTool is that since it is a tool, the tool handler fallback method can detect that the tool is not being used, and send a reminder to the LLM to clarify who it is addressing the message to.

* In non-interactive mode, wait for user input if "user" is explicitly addressed

* Misc improvements:
- `ToolMessage` instructions now remind the LLM to use the `request` field (gpt-4o often forgets it).
- `RecipientTool`: allow a default recipient.
- Bug fix: chainlit examples with modifiable LLM settings were not using the changed LLM; now they do.
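As an illustration of the "exact" cycle detection described for 0.1.251 above, here is a minimal, self-contained sketch; the function name, parameters, and the minimum-repeat threshold are assumptions for illustration, not Langroid's actual implementation in `Task._maybe_infinite_loop`:

```python
def detect_exact_loop(msgs, max_cycle_len=10, min_repeats=3):
    """Return the cycle length k (k <= max_cycle_len) if the tail of
    `msgs` is an exactly-repeating cycle of k messages, repeated at
    least `min_repeats` times; otherwise return None.

    Only *exact* repeats are caught; messages that are "essentially
    the same, but slightly different" are not detected.
    """
    for k in range(1, max_cycle_len + 1):
        window = k * min_repeats
        if len(msgs) < window:
            break  # not enough history for this (or any longer) cycle
        cycle = msgs[-k:]
        if msgs[-window:] == cycle * min_repeats:
            return k
    return None
```

On the example stream `a n i m a l m a l m a l m a l`, this returns 3 (the cycle `m a l`); a caller could then raise an `InfiniteLoopException`.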

0.1.250

Added `RetrievalTool`: used by `DocChatAgent` (or subclasses) to simply retrieve relevant passages,
skipping the final LLM answer generation.
This enables designing agents that are instructed to take other actions based on these passages, besides generating an answer summary.

0.1.249

* Improves RAG (`DocChatAgent`) citations. See https://github.com/langroid/langroid/issues/477
* Update chainlit `config.toml` to show the file-upload button in chats (their API for this changed, yet again)
* Fix edge cases involving rendering LLM output in non-streaming mode and using the rich spinner

0.1.248

Set a default tokenizer in case `tiktoken.encoding_for_model()` fails.

0.1.247

Agent: during init, set `config.parsing.token_encoding_model` to the LLM's model name,
so that we use the tokenizer specific to that LLM, which helps with accurate token-cost computation
(currently only affects OpenAI LLMs).
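To see why matching the tokenizer to the LLM matters for cost accounting, here is a hedged sketch; the model name and per-token prices are hypothetical placeholders, not Langroid's actual pricing tables:

```python
# Hypothetical per-1K-token prices, for illustration only.
HYPOTHETICAL_PRICE_PER_1K = {
    "model-a": (0.005, 0.015),  # (prompt, completion) $ per 1K tokens
}

def token_cost(n_prompt: int, n_completion: int, model: str) -> float:
    """Compute dollar cost from token counts.

    The counts are only accurate if the text was tokenized with the
    model's own tokenizer, which is why the agent sets
    `token_encoding_model` to the LLM's model name.
    """
    p_in, p_out = HYPOTHETICAL_PRICE_PER_1K[model]
    return n_prompt / 1000 * p_in + n_completion / 1000 * p_out
```

A tokenizer for a different model can over- or under-count tokens, skewing the computed cost proportionally.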

