Langroid


0.16.5

fix: Further enhancements to JSON parsing from tool-generation with weak LLMs

0.16.4

fix: Improve JSON parsing (e.g. code) from weak LLMs

Uses the excellent, lightweight [json-repair lib](https://github.com/mangiucugna/json_repair).

One example where this helps: when using tools, weak LLMs sometimes generate JSON containing un-escaped newlines within strings.
Simply discarding these newlines is problematic when the strings contain newline-sensitive code (e.g. Python, TOML).
Instead, we should escape them, but ONLY the newlines that appear within string-valued fields in the JSON
(newlines that appear outside of these should definitely NOT be escaped, or the resulting JSON is inaccurate).
The json-repair lib has a good solution for this and other pesky JSON issues, using context-free grammars.
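As an illustration, here is a minimal sketch of the kind of input this repairs (the `run_code` tool name and its fields are hypothetical; `repair_json` is json-repair's top-level function):

```python
# pip install json-repair
import json

from json_repair import repair_json

# Simulated tool-call JSON from a weak LLM: the "code" string value
# contains a literal (un-escaped) newline, which is invalid JSON.
bad = '{"request": "run_code", "code": "x = 1\nprint(x)"}'

fixed = repair_json(bad)   # returns a repaired, valid JSON string
parsed = json.loads(fixed)
print(parsed["code"])      # the newline should survive inside the string value
```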

0.16.3

fix: in logging.py, escape markup when using rich.console.print(...)
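The general pattern (a sketch of the idea, not Langroid's exact code) is to escape square-bracket markup before printing, so rich does not interpret it:

```python
from rich.console import Console
from rich.markup import escape

console = Console()
line = "log text with [brackets] that rich would otherwise parse as markup"
console.print(escape(line))  # brackets are printed literally
```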

0.16.2

fix: switch to Langroid-native tools for o1 models, since they don't yet support tools/functions in the API

When using o1 models, the `ChatAgent` automatically sets `ChatAgentConfig.use_functions_api` to False
and `ChatAgentConfig.use_tools` to True, so that Langroid's prompt-based `ToolMessage` mechanism is used instead.
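This automatic switch is equivalent to the following explicit configuration (a minimal sketch using Langroid's top-level exports):

```python
import langroid as lr

# With o1 models this switch happens automatically; shown explicitly for illustration.
agent = lr.ChatAgent(
    lr.ChatAgentConfig(
        use_functions_api=False,  # o1 models lack the OpenAI functions/tools API
        use_tools=True,           # fall back to prompt-based ToolMessage tools
    )
)
```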

feat: `TaskConfig.recognize_string_signals` bool flag (default True); can be set to False to disallow string-based signals like DONE, PASS, etc.

This is useful when we want to avoid "accidental prompt injection" (e.g. "DONE" may appear in normal text, and we don't want that to trigger task completion).
In general it is preferable to use the task orchestration tools (`DoneTool` etc) rather than string-based signals.
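For example (a minimal sketch; import paths assume a recent Langroid version):

```python
import langroid as lr
from langroid.agent.task import Task, TaskConfig
from langroid.agent.tools.orchestration import DoneTool

agent = lr.ChatAgent(lr.ChatAgentConfig())
agent.enable_message(DoneTool)  # completion must be signaled via DoneTool

task = Task(
    agent,
    interactive=False,
    # "DONE", "PASS", etc. in plain LLM text no longer end the task
    config=TaskConfig(recognize_string_signals=False),
)
```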

0.16.1

fix: handle the max_tokens/max_completion_tokens variation to support groq, o1, and other LLMs

0.16.0

feat: Support OpenAI o1-preview, o1-mini

To use these, you can set the LLM config as follows:

```python
import langroid.language_models as lm

config = lm.OpenAIGPTConfig(
    chat_model=lm.OpenAIChatModel.O1_MINI,  # or lm.OpenAIChatModel.O1_PREVIEW
)
```

Or, in many example scripts, you can directly specify the model using `-m o1-preview` or `-m o1-mini`, e.g.:

```bash
python3 examples/basic/chat.py -m o1-mini
```


Also, any pytest that runs against a real (i.e. non-MockLM) LLM can be run with these models using `--m o1-preview` or `--m o1-mini`, e.g.:

```bash
pytest -xvs tests/main/test_llm.py --m o1-mini
```
Note that these models (as of Sep 12, 2024):
- do not support streaming, so Langroid sets `stream` to `False` even if you try to stream,
- do not support a system message, so Langroid maps any supplied system message to a message with role `user`, and
- do not allow setting the temperature, so any temperature setting is ignored (the models use the default temperature of 1).
