Langroid

Latest version: v0.2.2


0.1.211

LanceDB: when creating a schema, sort fields to prevent variable field ordering and the resulting mismatch between the stored table and new docs during ingestion.
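As an illustration of the technique (a minimal sketch using pyarrow directly; the field names and types below are made up, not Langroid's actual schema):

```python
import pyarrow as pa

# Hypothetical fields derived from a document class; plain dict/iteration
# order can differ between the code path that created the table and the
# code path that ingests new docs.
fields = {
    "id": pa.string(),
    "vector": pa.list_(pa.float32()),
    "content": pa.string(),
    "metadata": pa.string(),
}

# Sorting by field name makes the schema deterministic, so the stored table
# and newly ingested batches always agree on column order.
schema = pa.schema([pa.field(name, dtype) for name, dtype in sorted(fields.items())])
print(schema)
```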

0.1.210

LanceDB: allow adding docs with new metadata fields that were not in the original `config.document_class`.
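A rough usage sketch of what this enables (the import paths, config fields, and `add_documents` call below are assumptions about Langroid's vector-store API, not taken from this release note):

```python
from langroid.mytypes import DocMetaData, Document
from langroid.vector_store.lancedb import LanceDB, LanceDBConfig

# Collection configured with the plain Document class...
vecdb = LanceDB(LanceDBConfig(collection_name="notes", document_class=Document))

class RichMeta(DocMetaData):
    author: str = ""  # metadata field not present in the configured document_class

class RichDoc(Document):
    metadata: RichMeta

# ...can now accept docs that carry the extra "author" metadata field.
vecdb.add_documents([RichDoc(content="Some text", metadata=RichMeta(author="Ada"))])
```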

0.1.209

Fix the regex in the Relevance Extractor, used in the [`extract_numbered_segments` function](https://github.com/langroid/langroid/blob/main/langroid/parsing/utils.py).
The old pattern captured only the portion of a segment before a newline, so for certain documents the
extraction dropped large portions of the text between sentence/segment markers. This fix should vastly
improve the relevance extractor and eliminate the issues seen when using it in DocChat.
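To illustrate the class of bug (the patterns below are simplified stand-ins, not the actual regex in `utils.py`): `.` does not match newlines by default, so a capture that relies on it stops at the end of a line.

```python
import re

text = (
    "<#1#> Segment one, first line.\n"
    "Segment one continues on a second line.\n"
    "<#2#> Segment two."
)

# '.' does not match '\n' by default, so each captured segment is cut off
# at the end of the line containing its marker.
print(re.findall(r"<#\d+#>\s*(.*)", text))
# ['Segment one, first line.', 'Segment two.']

# Matching lazily up to the next marker (or end of text) with re.DOTALL
# keeps multi-line segments intact.
print(re.findall(r"<#\d+#>\s*(.*?)\s*(?=<#\d+#>|\Z)", text, flags=re.DOTALL))
# ['Segment one, first line.\nSegment one continues on a second line.', 'Segment two.']
```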

0.1.208

* DocChat fixes to ensure we avoid extending `content` when there are other fields in the Document
  - e.g., when working with a subclass of `Document` that has fields other than just `content` and `metadata`, we skip parts of the DocChatAgent pipeline, such as adding context windows and adding fuzzy-match context, since we cannot know how to set the other fields when `content` changes (and arbitrarily retaining the field values of one of the chunks could be wrong); see the sketch after this list.
* LanceDB: don't complain when a doc with a given id is not found
  - this should not occur normally
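A sketch of the situation being guarded against (the subclass and its extra field are hypothetical):

```python
from langroid.mytypes import DocMetaData, Document

# A Document subclass with a field beyond `content` and `metadata`.
class QAPair(Document):
    question: str = ""

doc = QAPair(
    content="Beethoven was born in 1770.",
    metadata=DocMetaData(source="notes"),
    question="When was Beethoven born?",
)

# If the DocChatAgent pipeline concatenated neighboring chunks into `content`
# (context windows, fuzzy-match context, ...), there would be no principled
# value to assign to `question` on the combined document, so those steps are
# now skipped for such subclasses.
```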

0.1.207

Small improvements in OpenAI API key handling:
* Do not allow an empty `OpenAIGPTConfig.api_key`, since that leads to a Connection error (and hence triggers retries) rather than an Auth error, which exits immediately without retries.
* When using an OpenAI model, explicitly get `OPENAI_API_KEY` from the environment, rather than relying entirely on Pydantic BaseSettings to do it (see the sketch below).
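The sketch is illustrative only: it shows explicit env handling in user code, not the library's internal change.

```python
import os

import langroid.language_models as lm

# Read the key from the environment explicitly instead of relying solely on
# Pydantic BaseSettings; an empty key is rejected up front, so we fail with a
# clear error instead of connection errors and retries.
api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key:
    raise ValueError("OPENAI_API_KEY is not set")

llm_config = lm.OpenAIGPTConfig(api_key=api_key)  # chat_model left at its default
llm = lm.OpenAIGPT(llm_config)
```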

0.1.206

Upgrade LiteLLM to 1.30.1 for Claude 3 support.

Use it like this (a one-shot example; generalize from here):

First install the `litellm` extra, e.g. `pip install "langroid[litellm]"`.

Ensure you have `ANTHROPIC_API_KEY` set in your environment (e.g. in the `.env` file or via an env var setting).

Then specify `chat_model` as `litellm/<model_name>`:


```python
import langroid as lr
import langroid.language_models as lm

# Claude 3 Sonnet via LiteLLM, used through the usual OpenAIGPTConfig
llm_config = lm.OpenAIGPTConfig(
    chat_model="litellm/claude-3-sonnet-20240229",
    chat_context_length=16_000,  # adjust based on the model
)
llm = lm.OpenAIGPT(llm_config)
llm.chat("What is 4+5?")

# The same config works for a ChatAgent...
agent_config = lr.ChatAgentConfig(llm=llm_config)
agent = lr.ChatAgent(agent_config)
agent.llm_response("When was Beethoven born?")
agent.llm_response("And Chopin?")

# ...and for a Task wrapping the agent
task = lr.Task(agent, interactive=True)
task.run()
```


See the LiteLLM docs for the syntax to use for other Claude model names:
https://docs.litellm.ai/docs/providers/anthropic

Note that many of the example scripts accept a `-m <model_name>` option to override the default `GPT4Turbo`. For Claude models you can specify `-m litellm/claude-3-sonnet-20240229`.

