ClientAI

Latest version: v0.5.0


0.5.0

Pydantic Validation for each Step

To see a more detailed guide, [check the docs here](https://igorbenav.github.io/clientai/usage/agent/validation/).

You now have three ways to handle agent outputs in ClientAI:

1. Regular text output (default)
2. JSON-formatted output (`json_output=True`)
3. Validated output with Pydantic models (`json_output=True` with `return_type`)

Let's look at when to use each approach.

Simple Text Output

When you just need text responses, use the default configuration:

```python
class SimpleAgent(Agent):
    @think("analyze")
    def analyze_text(self, input_text: str) -> str:
        return f"Please analyze this text: {input_text}"

# Usage
agent = SimpleAgent(client=client, default_model="gpt-4")
result = agent.run("Hello world")  # Returns plain text
```

This is perfect for general text generation, summaries, or when you don't need structured data.

JSON-Formatted Output

When you need structured data but don't want strict validation, use `json_output=True`:

```python
class StructuredAgent(Agent):
    @think(
        name="analyze",
        json_output=True  # Ensures JSON output
    )
    def analyze_data(self, input_data: str) -> str:
        return f"""
        Analyze this data and return as JSON with these fields:
        - summary: brief overview
        - key_points: list of main points
        - sentiment: positive, negative, or neutral

        Data: {input_data}
        """

# Usage
agent = StructuredAgent(client=client, default_model="gpt-4")
result = agent.run("Great product, highly recommend!")
# Returns parsed JSON like:
# {
#     "summary": "Positive product review",
#     "key_points": ["Strong recommendation", "General satisfaction"],
#     "sentiment": "positive"
# }
```


This approach gives you structured data while maintaining flexibility in the output format.
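Conceptually, `json_output=True` instructs the model to reply in JSON and parses the raw text before handing it back to you. A rough stdlib sketch of that parsing step (illustrative only, not ClientAI's actual implementation; the fence-stripping heuristic is an assumption):

```python
import json

def parse_model_output(raw: str) -> dict:
    """Parse a model's raw text reply as JSON, stripping common markdown fences."""
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Drop a leading ```json (or bare ```) fence line and the trailing fence
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)

raw = '```json\n{"sentiment": "positive", "summary": "Positive review"}\n```'
data = parse_model_output(raw)
# data["sentiment"] == "positive"
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is why the Pydantic-validated path below adds a stricter contract on top.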

Validated Output with Pydantic

When you need guaranteed output structure and type safety, combine `json_output` with Pydantic models:

```python
from pydantic import BaseModel, Field, ValidationError
from typing import List, Optional

class ProductAnalysis(BaseModel):
    summary: str = Field(
        min_length=10,
        description="Brief overview of the analysis"
    )
    key_points: List[str] = Field(
        min_items=1,
        description="Main points from the analysis"
    )
    sentiment: str = Field(
        pattern="^(positive|negative|neutral)$",
        description="Overall sentiment"
    )
    confidence: float = Field(
        ge=0, le=1,
        description="Confidence score between 0 and 1"
    )
    categories: Optional[List[str]] = Field(
        default=None,
        description="Product categories if mentioned"
    )

class ValidatedAgent(Agent):
    @think(
        name="analyze",
        json_output=True,  # Required for validation
        return_type=ProductAnalysis  # Enables Pydantic validation
    )
    def analyze_review(self, review: str) -> ProductAnalysis:
        return f"""
        Analyze this product review and return a JSON object with:
        - summary: at least 10 characters
        - key_points: non-empty list of strings
        - sentiment: exactly "positive", "negative", or "neutral"
        - confidence: number between 0 and 1
        - categories: optional list of product categories

        Review: {review}
        """

# Usage
agent = ValidatedAgent(client=client, default_model="gpt-4")
try:
    result = agent.run("This laptop is amazing! Great battery life and performance.")
    print(f"Summary: {result.summary}")
    print(f"Sentiment: {result.sentiment}")
    print(f"Confidence: {result.confidence}")
    for point in result.key_points:
        print(f"- {point}")
except ValidationError as e:
    print("Output validation failed:", e)
```
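The same Pydantic model can be exercised on its own, independent of any agent, to see exactly what the validation layer enforces. A minimal standalone sketch (the `Review` model here is a cut-down stand-in, using only `Field` constraints that behave the same across Pydantic versions):

```python
from pydantic import BaseModel, Field, ValidationError

class Review(BaseModel):
    summary: str = Field(min_length=10)
    confidence: float = Field(ge=0, le=1)

# A conforming payload passes and is exposed as typed attributes
ok = Review(summary="Positive product review", confidence=0.9)

# An out-of-range payload raises ValidationError
try:
    Review(summary="Short-ish summary", confidence=2.0)  # confidence > 1
    raised = False
except ValidationError:
    raised = True
```

This is the failure mode the `except ValidationError` branch above catches: the model replied with valid JSON, but the JSON broke the declared contract.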

0.4.4

What's Changed
* improved example by igorbenav in https://github.com/igorbenav/clientai/pull/31
* rename 'full' extra to 'all' in pyproject.toml by jpwieland in https://github.com/igorbenav/clientai/pull/32
* version bumped to 0.4.4 by igorbenav in https://github.com/igorbenav/clientai/pull/33

New Contributors
* jpwieland made their first contribution in https://github.com/igorbenav/clientai/pull/32 🎉

**Full Changelog**: https://github.com/igorbenav/clientai/compare/v0.4.3...v0.4.4

0.4.3

What's Changed
* fix number of passed parameters bug by igorbenav in https://github.com/igorbenav/clientai/pull/29
* bump for version by igorbenav in https://github.com/igorbenav/clientai/pull/30


**Full Changelog**: https://github.com/igorbenav/clientai/compare/v0.4.2...v0.4.3

0.4.2

What's Changed
* version update by igorbenav in https://github.com/igorbenav/clientai/pull/24
* Docs improvement by igorbenav in https://github.com/igorbenav/clientai/pull/25
* correct url for template by igorbenav in https://github.com/igorbenav/clientai/pull/26
* fix for httpx proxies deprecation by igorbenav in https://github.com/igorbenav/clientai/pull/27
* package version bump by igorbenav in https://github.com/igorbenav/clientai/pull/28


**Full Changelog**: https://github.com/igorbenav/clientai/compare/v0.4.1...v0.4.2

0.4.1

Small bug fix

What's Changed
* add previously removed _current_agent by igorbenav in https://github.com/igorbenav/clientai/pull/14


**Full Changelog**: https://github.com/igorbenav/clientai/compare/v0.4.0...v0.4.1

0.4.0

ClientAI Agent Module

The Agent module provides a flexible framework for building AI agents that execute multi-step workflows with automated tool selection and LLM integration. It lets developers compose sophisticated agents from configurable steps, automatic tool selection, and built-in state management.

For complete API reference, see [Agent API Documentation](https://igorbenav.github.io/clientai/api/agent/core/agent/).

Core Features

- Multi-step workflow execution with LLM integration
- Automated tool selection and execution
- Configurable execution steps (think, act, observe, synthesize)
- State and context management across steps
- Streaming response support
- Comprehensive error handling and retry logic
- Tool scope management and validation

For detailed examples, see [Agent Examples](https://igorbenav.github.io/clientai/examples/agent/simple_qa/).

Quick Start

For a complete guide on creating agents, see [Creating Agents](https://igorbenav.github.io/clientai/usage/agent/creating_agents/).

```python
from clientai import ClientAI
from clientai.agent import Agent, create_agent, think, tool

# Create a simple translation agent
translator = create_agent(
    client=client,
    role="translator",
    system_prompt="You are a helpful translation assistant. Translate input to French.",
    model="gpt-4"
)

result = translator.run("Hello world!")  # Returns: "Bonjour le monde!"

# Create an agent with tools
class AnalysisAgent(Agent):
    @think("analyze")
    def analyze_data(self, input_data: str) -> str:
        return f"Please analyze this data: {input_data}"

    @tool(name="Calculator")
    def calculate(self, x: int, y: int) -> int:
        """Performs basic arithmetic."""
        return x + y

agent = AnalysisAgent(
    client=client,
    default_model="gpt-4",
    tool_confidence=0.8
)
```


Core Components

Agent Class

The main `Agent` class provides the foundation for creating AI agents. See [Agent API Reference](https://igorbenav.github.io/clientai/api/agent/core/agent/) for complete details.

```python
class Agent:
    def __init__(
        self,
        client: ClientAI,
        default_model: Union[str, Dict[str, Any], ModelConfig],
        tools: Optional[List[ToolConfig]] = None,
        tool_selection_config: Optional[ToolSelectionConfig] = None,
        tool_confidence: Optional[float] = None,
        tool_model: Optional[Union[str, Dict[str, Any], ModelConfig]] = None,
        max_tools_per_step: Optional[int] = None,
        max_history_size: Optional[int] = None,
        **default_model_kwargs: Any
    ) -> None: ...
```


Step Decorators

The module provides decorators for defining workflow steps. For more information, see [Workflow Steps](https://igorbenav.github.io/clientai/usage/agent/workflow_steps/) and [Step Decorators API](https://igorbenav.github.io/clientai/api/agent/steps/decorators/).

```python
from clientai.agent import think, act, observe, synthesize

class MyAgent(Agent):
    @think("analyze")
    def analyze_data(self, input_data: str) -> str:
        return f"Analyze this data: {input_data}"

    @act("process")
    def process_results(self, analysis: str) -> str:
        return f"Process these results: {analysis}"

    @observe("gather")
    def gather_data(self, query: str) -> str:
        return f"Gathering data for: {query}"

    @synthesize("summarize")
    def summarize_results(self, data: str) -> str:
        return f"Summary of: {data}"
```


Tool Management

Tools can be registered and managed using several approaches. For complete documentation, see [Tools and Tool Selection](https://igorbenav.github.io/clientai/usage/agent/tools/).

```python
# Using the tool decorator
@tool(name="Calculator", description="Performs calculations")
def calculate(x: int, y: int) -> int:
    return x + y

# Direct registration
agent.register_tool(
    utility_function,
    name="Utility",
    description="Utility function",
    scopes=["think", "act"]
)

# Using ToolConfig
tool_config = ToolConfig(
    tool=calculate,
    scopes=["think", "act"],
    name="Calculator",
    description="Performs calculations"
)
```


Configuration

ModelConfig

Configure model behavior. See [Agent Models](https://igorbenav.github.io/clientai/api/agent/core/agent/#model-configuration) for details.

```python
from clientai.agent.config import ModelConfig

model_config = ModelConfig(
    name="gpt-4",
    temperature=0.7,
    stream=True,
    json_output=False
)
```


StepConfig

Configure step execution behavior. See [Step Configuration](https://igorbenav.github.io/clientai/api/agent/steps/step/#configuration) for details.

```python
from clientai.agent.config import StepConfig

step_config = StepConfig(
    enabled=True,
    retry_count=3,
    timeout=30.0,
    required=True,
    pass_result=True
)
```


ToolSelectionConfig

Configure tool selection behavior. See [Tool Selection](https://igorbenav.github.io/clientai/api/agent/tools/selector/) for details.

```python
from clientai.agent.config import ToolSelectionConfig

tool_config = ToolSelectionConfig(
    confidence_threshold=0.8,
    max_tools_per_step=3
)
```
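Conceptually, `confidence_threshold` filters candidate tools and `max_tools_per_step` caps how many survive. A rough stdlib sketch of that selection logic (illustrative only, not ClientAI's actual selector; the candidate names and scores are made up):

```python
# Hypothetical (tool, confidence) candidates, as a selection model might score them
candidates = [("Calculator", 0.92), ("WebSearch", 0.55), ("Translator", 0.81)]

confidence_threshold = 0.8
max_tools_per_step = 3

# Keep the highest-confidence tools above the threshold, capped per step
selected = [
    name
    for name, confidence in sorted(candidates, key=lambda c: c[1], reverse=True)
    if confidence >= confidence_threshold
][:max_tools_per_step]
# selected == ["Calculator", "Translator"]
```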


Advanced Usage

Streaming Responses

For detailed information about streaming, see [Creating Custom Run](https://igorbenav.github.io/clientai/advanced/agent/creating_run/).

```python
# Enable streaming for specific runs
for chunk in agent.run("Process this data", stream=True):
    print(chunk, end="", flush=True)

# Configure streaming at the step level
class StreamingAgent(Agent):
    @think("analyze", stream=True)
    def analyze_data(self, input: str) -> str:
        return f"Analyzing: {input}"
```


Context Management

For complete context management documentation, see [Context Management](https://igorbenav.github.io/clientai/usage/agent/context/) and [AgentContext API](https://igorbenav.github.io/clientai/api/agent/core/context/).

```python
# Access and manipulate agent context
agent.context.set_input("New input")
agent.context.state["key"] = "value"
agent.context.set_step_result("analyze", "Analysis result")

# Reset context
agent.reset_context()  # Clears current state
agent.reset()  # Complete reset including workflow
```


Custom Run Methods

For advanced run method customization, see [Creating Custom Run](https://igorbenav.github.io/clientai/advanced/agent/creating_run/).

```python
class CustomAgent(Agent):
    @run(description="Custom workflow execution")
    def custom_run(self, input_data: str) -> str:
        # Custom workflow implementation
        result = self.analyze_step(input_data)
        return self.process_step(result)
```


Tool Scopes

Tools can be restricted to specific workflow steps. See [Tool Registry API](https://igorbenav.github.io/clientai/api/agent/tools/registry/) for complete details.

- `think`: Analysis and reasoning steps
- `act`: Decision-making and action steps
- `observe`: Data gathering steps
- `synthesize`: Summary and integration steps
- `all`: Available in all steps

```python
# Register a tool with specific scopes
agent.register_tool(
    calculate,
    name="Calculator",
    scopes=["think", "act"]
)

# Get tools for a specific scope
think_tools = agent.get_tools("think")
```


Error Handling

For comprehensive error handling information, see [Error Handling](https://igorbenav.github.io/clientai/advanced/error_handling/).

```python
from clientai.agent.exceptions import (
    AgentError,     # Base exception for agent-related errors
    StepError,      # Errors in step execution
    WorkflowError,  # Errors in workflow management
    ToolError,      # Errors in tool execution
)

try:
    result = agent.run("Process this")
except StepError as e:
    print(f"Step execution failed: {e}")
except ToolError as e:
    print(f"Tool execution failed: {e}")
except WorkflowError as e:
    print(f"Workflow execution failed: {e}")
```


Best Practices

For more detailed best practices and guidelines, see [Advanced Overview](https://igorbenav.github.io/clientai/advanced/overview/).

1. **Step Organization**: Organize workflow steps logically, with clear progression from analysis to action.

2. **Tool Design**:
- Provide clear type hints for tool functions
- Include descriptive docstrings
- Keep tools focused on single responsibilities
- Use appropriate scopes to restrict tool availability

3. **Error Handling**:
- Configure step requirements appropriately
- Use retry counts for potentially flaky operations
- Implement proper error recovery in custom run methods

4. **Performance**:
- Use streaming for long-running operations
- Configure appropriate model parameters for each step type
- Manage context size to prevent memory issues

5. **Context Management**:
- Clear context when appropriate
- Use state dictionary for temporary data
- Maintain appropriate history size
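The history-size point above maps to the `max_history_size` constructor argument. The underlying bounded-buffer idea can be sketched with a stdlib deque (illustrative only; ClientAI manages this internally):

```python
from collections import deque

# A history capped at 3 entries: old entries fall off as new ones arrive
history = deque(maxlen=3)
for step in ["gather", "analyze", "process", "summarize"]:
    history.append(step)

# Only the 3 most recent steps remain
assert list(history) == ["analyze", "process", "summarize"]
```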

Testing

For detailed testing guidelines, see [Contributing Guide](https://igorbenav.github.io/clientai/community/CONTRIBUTING/).

When testing agents:

1. Mock the ClientAI instance for testing
2. Test tool selection logic independently
3. Verify step execution order
4. Test error handling paths
5. Validate context management
6. Check streaming behavior

Example test setup:

```python
from unittest.mock import Mock

def test_agent():
    mock_client = Mock()
    mock_client.generate_text.return_value = "Test response"

    agent = MyAgent(
        client=mock_client,
        default_model="gpt-4"
    )

    result = agent.run("Test input")
    assert result == "Test response"
    assert mock_client.generate_text.called
```


What's Changed
* Agent support by igorbenav in https://github.com/igorbenav/clientai/pull/12
* Pyproject bump by igorbenav in https://github.com/igorbenav/clientai/pull/13


**Full Changelog**: https://github.com/igorbenav/clientai/compare/v0.3.3...v0.4.0
