LLM-IE

Latest version: v0.4.6

0.4.0

Documentation

New features
- **Concurrent extraction** for extractors that require multiple inference calls: `SentenceFrameExtractor`, `SentenceReviewFrameExtractor`, `SentenceCoTFrameExtractor`, `BinaryRelationExtractor`, and `MultiClassRelationExtractor`. We use Python `asyncio` for concurrent, high-throughput inferencing. On a 4×A100 GPU server running vLLM, extraction is about 10× faster than synchronous extraction.

To use concurrent mode for sentence-level frame extraction, pass `concurrent=True`. Setting `concurrent_batch_size=32` processes 32 sentences at a time:

```python
from llm_ie.extractors import SentenceFrameExtractor

extractor = SentenceFrameExtractor(inference_engine, prompt_temp)
frames = extractor.extract_frames(text_content=text, entity_key="Diagnosis", concurrent=True, concurrent_batch_size=32)
```

The same applies to relation extraction, where `concurrent_batch_size=32` processes 32 frame pairs at a time:

```python
from llm_ie.extractors import MultiClassRelationExtractor

extractor = MultiClassRelationExtractor(inference_engine, prompt_template=re_prompt_template, possible_relation_types_func=possible_relation_types_func)
relations = extractor.extract_relations(doc, concurrent=True, concurrent_batch_size=32)
```

- Added support for 🚅 [LiteLLM](https://github.com/BerriAI/litellm):

```python
from llm_ie.engines import LiteLLMInferenceEngine

inference_engine = LiteLLMInferenceEngine(model="openai/Llama-3.3-70B-Instruct", base_url="http://localhost:8000/v1", api_key="EMPTY")
```

- The `PromptEditor` LLM agent now accepts `prompt_guide` for customized prompt guidelines.

```python
from llm_ie import PromptEditor, BasicFrameExtractor, OllamaInferenceEngine

# Define an LLM inference engine
inference_engine = OllamaInferenceEngine(model_name="llama3.1:8b-instruct-q8_0")

# Define the editor with a custom prompt guideline
editor = PromptEditor(inference_engine, BasicFrameExtractor, prompt_guide="<a customized guideline>")

# Start the interactive chat
editor.chat()
```

0.3.5

Documentation

New features
- Added [json_repair](https://github.com/mangiucugna/json_repair) as a dependency
- Adopted `json_repair` in post-processing. When the LLM's output is not valid JSON (e.g., contains an un-escaped double quote or a raw newline character), `json_repair` is used to fix it. If the repair succeeds, the warning **"JSONDecodeError detected, fixed with repair_json"** is raised; if the output still cannot be parsed, the warning **"JSONDecodeError could not be fixed"** is raised (sketched after this list).
- For `FrameExtractors`, frames with broken JSON that cannot be fixed are discarded.
- For `RelationExtractors`, relations with broken JSON that cannot be fixed are discarded.
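
For illustration, here is a minimal sketch of this repair fallback using `json_repair.repair_json` (the warning text follows the changelog; the exact hook inside the extractors may differ):

```python
import json
import warnings

from json_repair import repair_json

# LLM output with a raw newline inside a string value: invalid JSON
llm_output = '{"entity_text": "shortness\nof breath"}'

try:
    frame = json.loads(llm_output)
except json.JSONDecodeError:
    # Mirror the fallback described above: warn, repair, and re-parse
    warnings.warn("JSONDecodeError detected, fixed with repair_json")
    frame = json.loads(repair_json(llm_output))

print(frame["entity_text"])
```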

0.3.4

Documentation

Changes
- Fixed issues in fuzzy search.

0.3.3

Documentation

New features
- Added fuzzy search for entity text, so an extracted entity can still be located in the document when the LLM-quoted span differs slightly from the source (see the sketch below)
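
For illustration only, not llm-ie's internal implementation: a sliding-window fuzzy match built on the standard library's `difflib` conveys the idea of locating a slightly mismatched entity span.

```python
from difflib import SequenceMatcher

def fuzzy_locate(entity_text: str, doc: str, threshold: float = 0.8):
    """Return the (start, end) character span of the best fuzzy match, or None."""
    n = len(entity_text)
    best_score, best_start = 0.0, 0
    # Slide a window of the entity's length over the document
    for start in range(len(doc) - n + 1):
        score = SequenceMatcher(None, entity_text, doc[start:start + n]).ratio()
        if score > best_score:
            best_score, best_start = score, start
    return (best_start, best_start + n) if best_score >= threshold else None

# The LLM quoted "Type-2 diabetes"; the note reads "type 2 diabetes"
print(fuzzy_locate("Type-2 diabetes", "Patient has type 2 diabetes mellitus."))  # -> (12, 27)
```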

Changes
- Fixed the UTF-8 encoding issue on Windows systems.

0.3.1

Documentation

New features
- Added `SentenceReviewFrameExtractor` and `SentenceCoTFrameExtractor`
- Added default review prompts. If a custom review prompt is not supplied, the default is used.

Changes
- Fixed a bug in post-processing

0.3.0

Documentation

New features
- Added interactive chat to the `PromptEditor` LLM agent.
- Attributes now support nested structures (dictionaries of dictionaries/lists); see the sketch after this list.
- Added more prompt templates and examples to the prompting guideline.
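
For illustration, nested attributes let a frame carry structured metadata. A hypothetical example (the field names are illustrative, not a fixed llm-ie schema):

```python
# Hypothetical frame attributes: a dictionary of dictionaries/lists
frame_attributes = {
    "Status": "active",
    "Medications": ["metformin", "insulin"],  # list value
    "A1c": {"value": 8.2, "unit": "%"},       # nested dictionary
}
```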

Changes
- Fixed the printing issue with `OpenAIInferenceEngine`
