Intelli

Latest version: v0.5.3


0.5.3

New Features 🌟

- Support NVIDIA hosted models (Deepseek and Llama 3.3) via a unified chatbot interface.
- Add streaming responses when calling NVIDIA models.
- Add new embedding provider.
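Embedding providers return numeric vectors; downstream, those vectors are usually compared with cosine similarity. A provider-agnostic sketch of that comparison (plain Python, no intelli APIs):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# identical directions score 1.0, orthogonal vectors score 0.0
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # 0.0
```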

Using NVIDIA Chat Features 💻

```python
from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

# get your API key from https://build.nvidia.com/
nvidia_bot = Chatbot("YOUR_NVIDIA_KEY", ChatProvider.NVIDIA.value)

# prepare the input
input_obj = ChatModelInput("You are a helpful assistant.", model="deepseek-ai/deepseek-r1", max_tokens=1024, temperature=0.6)
input_obj.add_user_message("What do you think is the secret to balanced life?")
```

Synchronous response example:

```python
response = nvidia_bot.chat(input_obj)
```


Streaming response example:

```python
async def stream_nvidia():
    for i, chunk in enumerate(nvidia_bot.stream(input_obj)):
        print(chunk, end="")  # print each chunk as it arrives
        if i >= 4:  # print only the first 5 chunks
            break
```

In an async context, you can run:

```python
result = await stream_nvidia()
```

For more details, check [the docs](https://docs.intellinode.ai/docs/python).

0.5.1

Offline Whisper Transcription 🎤

Load and use OpenAI's Whisper model offline for audio transcription.
The Intellinode module supports an initial prompt to improve transcription quality.

Load the audio:

```python
import soundfile as sf

audio_data, sample_rate = sf.read(file_name)
```

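Whisper-style models generally expect mono audio, while `sf.read` may return stereo frames. A minimal stdlib-only mixdown sketch (averaging the two channels; independent of intelli):

```python
def stereo_to_mono(frames):
    """Average the two channels of [(left, right), ...] float frames."""
    return [(left + right) / 2.0 for left, right in frames]

stereo = [(0.2, 0.4), (-0.5, 0.5), (1.0, 0.0)]
mono = stereo_to_mono(stereo)
print(mono)
```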

Inference:

```python
from intelli.wrappers.keras_wrapper import KerasWrapper

wrapper = KerasWrapper(model_name="whisper_large_multi_v2")
result = wrapper.transcript(audio_data, user_prompt="medical content")
```


For more details, check the [documentation](https://docs.intellinode.ai/docs/python/offline-chatbot/whisper).

0.4.2

New Features 🌟

- Update the agent to support the Llama 3.1 offline model.
- Add offline model capability to the chatbot.
- Unify Keras loader under a dedicated wrapper `KerasWrapper`.

Using the New Features 💻
- [Gemma 2 chatbot with RAG](https://docs.intellinode.ai/docs/python/offline-chatbot/gemma).
- [Llama 3 chatbot with RAG](https://docs.intellinode.ai/docs/python/offline-chatbot/llama).
- [Mistral chatbot with RAG](https://docs.intellinode.ai/docs/python/offline-chatbot/mistral).

0.2.3

New Features 🌟

- **Support for ANTHROPIC Models**: Our chatbot integration now supports advanced ANTHROPIC models, including those with large context windows.
- **Chatbot Provider Enumeration**: The selection of AI providers has been simplified through the use of enumerators.
- **Minor Bug Fixes**: Adjust the parameter order for the controllers.

Using the New Features 💻

- The `ChatProvider` enum simplifies selecting providers:

```python
from intelli.function.chatbot import ChatProvider

# check available chatbot providers
for provider in ChatProvider:
    print(provider.name)
```


- Check the [chatbot documentation](https://docs.intellinode.ai/docs/python/chatbot/model-switching) to use the claude-3 model.
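The provider iteration above is standard `enum.Enum` behavior; a self-contained sketch with a hypothetical provider enum mirroring `ChatProvider`'s shape:

```python
from enum import Enum

class DemoProvider(Enum):
    """Hypothetical provider enum for illustration only."""
    OPENAI = "openai"
    ANTHROPIC = "anthropic"

# iterating an Enum yields its members in definition order
names = [provider.name for provider in DemoProvider]
print(names)  # ['OPENAI', 'ANTHROPIC']
```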

Contributors
- gutyoh
- Barqawiz

0.2.0

New Features 🌟

- **Add Keras Agents**: Intelli now supports the loading of offline open-source models using `KerasAgent`.
- **Supported Offline Models**: `gemma_2b_en`, `gemma_instruct_2b_en`, `gemma_7b_en`, `gemma_instruct_7b_en`, `mistral_7b_en`, `mistral_instruct_7b_en`.

Using the New Features 💻
To use the new Keras Agent, instantiate the `KerasAgent` class with the appropriate parameters:
```python
from intelli.flow.agents.kagent import KerasAgent

# set up a Gemma agent
gemma_params = {
    "model": "gemma_instruct_2b_en",
    "max_length": 200
}
gemma_agent = KerasAgent(agent_type="text",
                         mission="writing assistant",
                         model_params=gemma_params,
                         log=True)
```

Prepare the tasks with the user instructions:
```python
from intelli.flow.input.task_input import TextTaskInput
from intelli.flow.tasks.task import Task

# sample task to write a blog post
task1 = Task(
    TextTaskInput("write blog post about electric cars"), gemma_agent, log=True
)

# create more tasks as needed
```

Execute tasks using `SequenceFlow`. The example below shows a single task, but you can include additional tasks for text, image, or vision:
```python
from intelli.flow.sequence_flow import SequenceFlow

# start the sequence flow
flow = SequenceFlow([task1], log=True)
final_result = flow.start()
```
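The sequential execution that `SequenceFlow` provides can be sketched independently of intelli: run each task in order and feed the previous output into the next. All names below (`StubTask`, `run_sequence`) are hypothetical illustrations, not intelli's implementation:

```python
class StubTask:
    """Hypothetical task: applies a function to its input text."""
    def __init__(self, fn):
        self.fn = fn

    def execute(self, text):
        return self.fn(text)

def run_sequence(tasks, initial=""):
    """Run tasks in order, chaining each output into the next task."""
    output = initial
    for task in tasks:
        output = task.execute(output)
    return output

draft = StubTask(lambda _: "electric cars draft")
polish = StubTask(lambda text: text.upper())
final_result = run_sequence([draft, polish])
print(final_result)  # ELECTRIC CARS DRAFT
```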


For more details, check [the docs](https://docs.intellinode.ai/docs/python/flows/kagent).

0.1.5

What's New 🌟

- Add a function to generate a visual image of the flow:

```python
flow.generate_graph_img()
```


- Add a remote speech model to generate synthesized speech using OpenAI or Google models.

- Fix a minor bug in the semantic search functionality.
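Flow-graph images like the one above typically come from serializing the task graph first. A minimal Graphviz-DOT sketch of that idea (an illustration, not intelli's implementation):

```python
def to_dot(edges):
    """Render task-graph edges as a Graphviz DOT string."""
    lines = ["digraph flow {"]
    for src, dst in edges:
        lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot([("task1", "task2"), ("task2", "task3")])
print(dot)
```

The resulting string can be fed to any Graphviz renderer to produce the image.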
