Raglight

Latest version: v1.6.2


1.2.0

Add LMStudio Provider

You can now use both Ollama and LMStudio for LLM inference.

Switch between the two providers:

```python
from raglight.rag.simple_rag_api import RAGPipeline
from raglight.models.data_source_model import FolderSource, GitHubSource
from raglight.config.settings import Settings

Settings.setup_logging()

# Default provider is Ollama: provider = Settings.OLLAMA
pipeline = RAGPipeline(knowledge_base=[
    FolderSource(path="<path to your folder with pdf>/knowledge_base"),
    GitHubSource(url="https://github.com/Bessouat40/RAGLight")
], model_name="llama3", reasoning_model_name="deepseek-r1:1.5b", reflection=1, provider=Settings.OLLAMA)

# Or run inference through LMStudio instead:
# pipeline = RAGPipeline(knowledge_base=[...], model_name="llama3",
#     reasoning_model_name="deepseek-r1:1.5b", reflection=1, provider=Settings.LMSTUDIO)

pipeline.build()

response = pipeline.generate("How can I create an easy RAGPipeline using the raglight framework? Give me the easiest python implementation")
print(response)
```

1.1.5

New feature:

Add RAT (Retrieval Augmented Thinking) as another available pipeline; a usage sketch follows below.
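
The release note does not show the RAT API, so here is a minimal sketch that assumes it mirrors the RAGPipeline example from 1.2.0; the import path raglight.rat.simple_rat_api and the class name RATPipeline are assumptions, not confirmed by this changelog:

```python
from raglight.rat.simple_rat_api import RATPipeline  # assumed import path
from raglight.models.data_source_model import FolderSource, GitHubSource
from raglight.config.settings import Settings

Settings.setup_logging()

# Assumption: RATPipeline takes the same constructor arguments as
# RAGPipeline, with reasoning_model_name driving the "thinking" passes
# and reflection setting how many of those passes run.
pipeline = RATPipeline(knowledge_base=[
    FolderSource(path="<path to your folder with pdf>/knowledge_base"),
    GitHubSource(url="https://github.com/Bessouat40/RAGLight")
], model_name="llama3", reasoning_model_name="deepseek-r1:1.5b", reflection=1)

pipeline.build()
print(pipeline.generate("Summarize the knowledge base"))
```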

1.1.4

This version allows communication between the framework and Ollama when the library runs inside Docker.
Example here: https://github.com/Bessouat40/LLMChat
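
For illustration, a common way to reach a host-side Ollama server from inside a container is to point the client at the Docker host gateway. A minimal sketch: OLLAMA_HOST is the variable the official ollama Python client reads; whether RAGLight picks it up the same way is an assumption here.

```python
import os

# Inside a Docker container, localhost refers to the container itself,
# so an Ollama server running on the host is not reachable there.
# host.docker.internal resolves to the host on Docker Desktop (and on
# Linux when started with --add-host=host.docker.internal:host-gateway).
os.environ["OLLAMA_HOST"] = "http://host.docker.internal:11434"
```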

1.1.1

First real release!
You can use both text files and code files as a knowledge base.
You can easily declare GitHub repositories to use as a knowledge base.

1.0.1

First version.
Usable with Ollama only.

0.1.5

First version after the code refactoring.

