Agently

Latest version: v3.2.2.8


```yaml
set_tool_proxy: http://127.0.0.1:7890
instruct:
  output language: Chinese
output:
  page_topic:
    $type: str
    $desc: ""
  summary:
    $type: str
    $desc: ""
```


- Python file:

```python
import Agently

agent_factory = (
    Agently.AgentFactory()
    .set_settings("model.Google.auth.api_key", "")
    .set_settings("current_model", "Google")
)

agent = agent_factory.create_agent()

print(
    agent
        .load_yaml_prompt(
            path="./yaml_prompt.yaml",
            # You can also pass a YAML-formatted string directly:
            # yaml=yaml_str,
            variables={
                "user_input": "http://Agently.tech",
            }
        )
        .start()
)
```


- Result:

```json
{
  "page_topic": "An easy-to-use, flexible, and efficient open-source framework for LLM application development",
  "summary": "Agently is an open-source framework for LLM application development that lets developers easily build applications on top of large language models. Its highlights include:\n\n* Simple syntax that takes 5 minutes to learn\n* Easy installation via pip install -U Agently\n* Flexible usage: specify the model, authentication info, and more in just a few lines of code\n* Chained calls: interact with an agent instance as if calling a function\n* Designed for engineers, with high flexibility for application development\n* Structured data for expressing requests flexibly, management of agent instance settings, and custom functions\n* Streaming output listeners, and Agently Workflow for splitting complex tasks into chunks\n* A deeply deconstructed architecture for LLM-driven agents, maintaining atomic building blocks such as the pre/post-request information-flow pipelines\n* Ability plugins and workflow management options that enrich what developers can express at the application layer"
}
```


Manage your workflow with YAML-formatted data

`[Agently Workflow: YAML Flow]`

> `[🧪Beta] The usage or syntax of this feature may be adjusted later`

We provide an experimental way to manage workflows with YAML-formatted data. With it, you can manage the definitions of workflow chunks and the connections between them more conveniently and intuitively. This feature presents our initial idea; we will keep improving this capability and strengthening the expressiveness of this format.

We also preset three basic chunks in this new feature, `Start`, `UserInput`, and `Print`, to help you build your own workflows faster. Reading how these three chunks are defined can also give you ideas for creating custom chunks.

Basic usage

- YAML file / YAML string:

```yaml
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]: '
  print:
    type: Print
connections:
  - start->user_input->print
```


- Python file:

```python
import Agently

workflow = Agently.Workflow()
# You can set draw=True to output the workflow's Mermaid code instead of running it
print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
```


- Result:

```shell
[User Input]: 1+2
>>> 1+2
```


Customize your own chunk executors

- YAML file / YAML string:

```yaml
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]:'
  # Here we declare a new chunk named calculate
  calculate:
    # Then we attach a calculate executor to compute the user's input,
    # specifying the executor id 'calc' in `executor`
    executor: calc
  print:
    type: Print
connections:
  # Then put the calculate chunk into the workflow
  - start->user_input->calculate->print
```


- Python file:

```python
import Agently

workflow = Agently.Workflow()

# Use the function decorator `@workflow.executor_func(<executor_id>)`
# to declare an executor function with executor id 'calc'
@workflow.executor_func("calc")
def calculate_executor(inputs, storage):
    result = eval(inputs["input"])
    return str(result)

print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
```


- Result:

```shell
[User Input]: 1+2
>>> 3
```
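The decorator usage above follows a simple registry pattern: `workflow.executor_func(<executor_id>)` maps an id to a function that the matching chunk later calls. As an illustration only (the `MiniWorkflow` class and `run_chunk` method are invented names for this sketch, not Agently's API), the pattern looks like this:

```python
class MiniWorkflow:
    """Toy executor registry mimicking the decorator usage shown above.
    (Illustration only; this is not Agently's internal implementation.)"""

    def __init__(self):
        self.executors = {}

    def executor_func(self, executor_id):
        # Returns a decorator that registers `func` under `executor_id`
        def decorator(func):
            self.executors[executor_id] = func
            return func
        return decorator

    def run_chunk(self, executor_id, inputs, storage=None):
        # Look up the executor by id and call it the way a chunk would
        return self.executors[executor_id](inputs, storage or {})

wf = MiniWorkflow()

@wf.executor_func("calc")
def calculate_executor(inputs, storage):
    return str(eval(inputs["input"]))

print(wf.run_chunk("calc", {"input": "1+2"}))  # prints: 3
```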


Understand different prompt life cycles through the basic prompt management methods

We have added a series of prompt management methods to help developers directly manage the prompt information set on an agent instance or on a single request. Depending on the target it is set on, this prompt information has a different life cycle.

When you set prompt information on an agent instance with `agent.set_agent_prompt()`, the information is passed into and stored in the agent instance's structure and is **carried on every model request made by that agent instance** until the instance is destroyed or recycled.

- `agent.set_agent_prompt(<slot_name>, <value>)`
- `agent.get_agent_prompt(<slot_name>)`
- `agent.remove_agent_prompt(<slot_name>)`

When you set prompt information on the single-request instance inside an agent with `agent.set_request_prompt()`, the information is **only passed to the model on the next request**; once that request finishes, the information is erased and no longer kept.

- `agent.set_request_prompt(<slot_name>, <value>)`
- `agent.get_request_prompt(<slot_name>)`
- `agent.remove_request_prompt(<slot_name>)`

Among the agent instructions we provided before, methods supplied by agent component plugins, such as the `.set_role()` method from the Role plugin, use a mechanism similar to `.set_agent_prompt()`. Information set via `.set_role()` is therefore kept across requests.

Basic instructions such as `.input()`, `.instruct()`, and `.output()` use a mechanism similar to `.set_request_prompt()`. Information set via these methods is cleared once the current request (marked by the `.start()` command) finishes, and must be set again for the next request.

Read [Development Handbook - Basic Instruction List](http://www.agently.tech/guide.html#_9) to learn about the basic instructions we support.

Updates

- `[Agently Workflow]`: Made many optimizations to make complex workflows more stable and reliable. [See details](https://github.com/Maplemx/Agently/pull/64)
- `[Framework Core]`: Renamed the basic prompt slots so they match the basic instruction names. [See details](https://github.com/Maplemx/Agently/commit/3303aa1f7083d3ac9ddcc744f40c4adc56610939)
- `[Facility]`: Added `Agently.lib` as an alias of `Agently.facility` for convenience.
- `[Tool: browse]`: Removed the dependency on the newspaper3k package, using BeautifulSoup4 for the browse tool instead. [See details](https://github.com/Maplemx/Agently/commit/df8c69a990578ec064a3c69d15ba185623d67100)

Bug fixes

- `[Request plugin: OpenAI]`: Fixed a bug that caused an `await can not use on response` error when using a proxy. [See details](https://github.com/Maplemx/Agently/commit/7643cfe159f57ee05afd55a23fbe2b594a556d53)
- `[Request plugin: OAIClient]`: Fixed a bug that prevented the proxy setting from taking effect. [See details](https://github.com/Maplemx/Agently/commit/7643cfe159f57ee05afd55a23fbe2b594a556d53)
- `[Request plugin: OAIClient]`: Fixed a bug that prevented the system prompt from working correctly. [See details](https://github.com/Maplemx/Agently/commit/1f9d275c9c415b5eef439b95f796bb617164b0cf)
- `[Agent component: Tool]`: Fixed a bug, caused by the prompt slot renaming, that prevented tool calling from taking effect. [See details](https://github.com/Maplemx/Agently/commit/48b80f85c8690e94658e5795e9191a643f663ac3)

3.5202220212021

If we want the LLM-based agents we use to keep up with a changing world in certain respects, what can we do? Perhaps adding some skills that let the agent interact with the real world would be a good idea.

---

HOW TO INSTALL?

npm: `npm install agently`

yarn: `yarn add agently`

HOW TO USE?

README:[English](https://github.com/Maplemx/Agently/blob/main/README.md) | [中文](https://github.com/Maplemx/Agently/blob/main/README_CN.md)

---

🤵 Agently is a framework that helps developers create amazing LLM-based applications.

🎭 You can use it to create an LLM based agent instance with role set and memory easily.

⚙️ You can use Agently agent instance just like an async function and put it anywhere in your code.

🧩 With the easy-to-plug-in design, you can easily append new LLM API/private API/memory management methods/skills to your Agently agent instance.

> ⚠️ Notice: Agently is a Node.js package that only works on the server side.

🥷 Author: Maplemx | 📧 Email: [maplemx@gmail.com](mailto:maplemx@gmail.com) | 💬 WeChat: moxinapp

⁉️ [Report bugs or post your ideas here](https://github.com/Maplemx/Agently/issues)

⭐️ Star this repo if you like it, thanks!

🤵 Agently is a lightweight framework that aims to help developers of large language model (LLM) applications build great LLM-based applications

🎭 With Agently you can quickly and easily create and manage LLM-based agent instances, along with their role settings and memories, which makes building and managing customer-service bots, role-playing bots, and game agents much more convenient

⚙️ You can use the agents and sessions that Agently creates like async functions, which makes it easier to build LLM-powered automated workflows. You can even keep your existing business code and, in the parts that need NLP algorithms, complex reasoning, or manual operations, try dropping an Agently agent or session in as an async function, integrating it into your business flow almost seamlessly

🧩 Agently is designed with replaceable components along the main request pipeline, and you can easily swap or customize them, for example: adding new LLM request methods, switching to private/forwarded model API endpoints, adjusting the agent's memory management, or customizing your own model message parsing scheme

🔀 Agently's unique chunking and multi-downstream dispatch scheme for streaming messages within a single request lets you keep the fast-feedback agility that streaming responses provide while doing more within one request

> ⚠️ Notice: Agently is for the Node.js server side, not the web front end

🥷 Author: Maplemx | 📧 Email: [maplemx@gmail.com](mailto:maplemx@gmail.com) | 💬 WeChat: moxinapp

⁉️ [If you find a bug or have a good idea, please submit it here](https://github.com/Maplemx/Agently/issues)

⭐️ If you find this project helpful, please star the repo; thanks for your support!

3.2.2.3

New Features

- `[Agent.load_yaml_prompt()]`:

We provide developers with a new way to manage request prompt templates in a YAML file!

- HOW TO USE:

- YAML file:

```yaml
input: ${user_input}
use_public_tools:
  - browse
set_tool_proxy: http://127.0.0.1:7890
instruct:
  output language: English
output:
  page_topic:
    $type: str
    $desc: ""
  summary:
    $type: str
    $desc: ""
```


- Python file:
```python
import Agently

agent_factory = (
    Agently.AgentFactory()
    .set_settings("model.Google.auth.api_key", "")
    .set_settings("current_model", "Google")
)

agent = agent_factory.create_agent()

print(
    agent
        .load_yaml_prompt(
            path="./yaml_prompt.yaml",
            # or just pass a YAML string like this:
            # yaml=yaml_str,
            variables={
                "user_input": "http://Agently.tech",
            }
        )
        .start()
)
```


- Result:
```shell
{'page_topic': 'Agently - Artificial Intelligence for the Enterprise', 'summary': 'Agently is a leading provider of AI-powered solutions for the enterprise. Our platform enables businesses to automate tasks, improve efficiency, and gain insights from their data. We offer a range of services, including:\n\n* **AI-powered automation:** Automate repetitive tasks, such as data entry and customer service, to free up your team to focus on more strategic initiatives.\n* **Machine learning:** Use machine learning to improve the accuracy of your predictions and decisions. We can help you identify trends and patterns in your data, and develop models that can predict future outcomes.\n* **Natural language processing:** Use natural language processing to understand and generate human language. This can be used for a variety of applications, such as chatbots, text analysis, and sentiment analysis.\n\nAgently is committed to helping businesses succeed in the digital age. We believe that AI is a powerful tool that can be used to improve efficiency, innovation, and customer satisfaction. We are excited to partner with you to explore the possibilities of AI for your business.'}
```
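The `variables` mapping fills the `${...}` placeholders in the YAML template (such as `${user_input}` above). Conceptually this is a simple substitution, sketched below as an illustration (this is not Agently's actual implementation):

```python
import re

def fill_variables(template: str, variables: dict) -> str:
    """Replace ${name} placeholders with values from `variables`,
    leaving unknown placeholders untouched."""
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

print(fill_variables("input: ${user_input}", {"user_input": "http://Agently.tech"}))
# prints: input: http://Agently.tech
```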


- `[Agently Workflow: YAML Flow]`: `[🧪beta] This feature may change in the future`

We try to provide a simple way to help developers manage workflows more easily with YAML files, so we are publishing a beta feature, **YAML Flow**, to present the idea.

With this new feature, you can use YAML files to state chunks and manage the connections between chunks.

Also, we preset some basic chunks (`Start`, `UserInput`, and `Print`) to help you build your own workflows more quickly.

- BASIC USE:

- YAML file:

```yaml
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]:'
  print:
    type: Print
connections:
  - start->user_input->print
```


- Python file:

```python
import Agently

workflow = Agently.Workflow()
# You can use draw=True to output workflow Mermaid code instead of running it
print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
```


- Result:

```shell
[User Input]: 1+2
>>> 1+2
```


- ADD YOUR OWN EXECUTORS:

- YAML file:

```yaml
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]:'
  # We state a new chunk named 'calculate'
  calculate:
    # We add a calculate executor to calculate the user input,
    # with executor_id = 'calculate'
    executor: calculate
  print:
    type: Print
connections:
  # Then add the 'calculate' chunk into the workflow
  - start->user_input->calculate->print
```


- Python file:

```python
import Agently

workflow = Agently.Workflow()

# Use the decorator `@workflow.executor_func(<executor_id>)`
# to state an executor function
@workflow.executor_func("calculate")
def calculate_executor(inputs, storage):
    result = eval(inputs["input"])
    return str(result)

print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
```


- Result:

```shell
[User Input]: 1+2
>>> 3
```


- `[Basic Prompt Management Methods]`:

We added a series of prompt management methods to help developers directly manage the prompt information in agent instances or single requests, with different information life cycles.

The methods below manage prompt information in the agent instance; this prompt information is passed to the model on every request **until the agent instance is dropped**.

- `agent.set_agent_prompt(<slot_name>, <value>)`
- `agent.get_agent_prompt(<slot_name>)`
- `agent.remove_agent_prompt(<slot_name>)`

The methods below manage prompt information in a single request, which **uses the prompt information only once**! When the request is finished, all of that prompt information is erased.

- `agent.set_request_prompt(<slot_name>, <value>)`
- `agent.get_request_prompt(<slot_name>)`
- `agent.remove_request_prompt(<slot_name>)`

[Read Development Handbook - Standard Request Slots to learn more](https://github.com/Maplemx/Agently/blob/main/docs/guidebook/application_development_handbook.ipynb)
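The two life cycles can be modeled with a toy sketch. The method names below mirror the real methods, but this class is an illustration of the behavior described above, not Agently's implementation:

```python
class PromptLifecycleDemo:
    """Toy model of the two prompt life cycles described above.
    (Illustration only; not Agently's real implementation.)"""

    def __init__(self):
        self.agent_prompt = {}    # agent-level: survives across requests
        self.request_prompt = {}  # request-level: erased after each request

    def set_agent_prompt(self, slot_name, value):
        self.agent_prompt[slot_name] = value

    def set_request_prompt(self, slot_name, value):
        self.request_prompt[slot_name] = value

    def start(self):
        # Every request carries agent-level slots plus the one-shot request slots
        carried = {**self.agent_prompt, **self.request_prompt}
        self.request_prompt = {}  # request-level info is cleared after the request
        return carried

demo = PromptLifecycleDemo()
demo.set_agent_prompt("role", "a concise assistant")
demo.set_request_prompt("input", "Say hi.")
print(demo.start())  # carries both "role" and "input"
print(demo.start())  # only "role" remains; "input" was erased
```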

Updates:

- `[Agently Workflow]`: Made some changes to make complex flows more stable. https://github.com/Maplemx/Agently/pull/64
- `[Framework Core]`: Renamed the variables of the basic prompt slots to keep them in unison with the instruction names. https://github.com/Maplemx/Agently/commit/3303aa1f7083d3ac9ddcc744f40c4adc56610939
- `[Facility]`: Use `Agently.lib` as an alias of `Agently.facility`
- `[Tools: browse]`: Removed newspaper3k and replaced it with BeautifulSoup4 https://github.com/Maplemx/Agently/commit/df8c69a990578ec064a3c69d15ba185623d67100

Bug fixes:

- `[Request: OpenAI]`: Fixed a bug that reported the error `await can not use on response` when using a proxy https://github.com/Maplemx/Agently/commit/7643cfe159f57ee05afd55a23fbe2b594a556d53
- `[Request: OAIClient]`: Fixed a bug where the proxy could not work correctly https://github.com/Maplemx/Agently/commit/7643cfe159f57ee05afd55a23fbe2b594a556d53
- `[Request: OAIClient]`: Fixed a bug where the system prompt could not work correctly https://github.com/Maplemx/Agently/commit/1f9d275c9c415b5eef439b95f796bb617164b0cf
- `[Agent Component: Tool]`: Fixed a bug that made tool calling not work correctly https://github.com/Maplemx/Agently/commit/48b80f85c8690e94658e5795e9191a643f663ac3

---

New features

Manage a single agent request template with YAML-formatted data

`[Agent.load_yaml_prompt()]`

We provide a brand-new way of expressing requests in YAML syntax to help you better manage single agent requests. Besides making it convenient for developers to decouple modules, we also hope this standardized-configuration style can express the capabilities Agently provides across programming languages, or be handed to non-developers.

How to use

- YAML file / YAML string:

```yaml
input: ${user_input}
use_public_tools:
  - browse
set_tool_proxy: http://127.0.0.1:7890
instruct:
  output language: Chinese
output:
  page_topic:
    $type: str
    $desc: ""
  summary:
    $type: str
    $desc: ""
```

3.2.1.3

New Features

- `[Request: OAIClient]` Added a new request plugin for models whose API format is similar to OpenAI's but which have additional rules, such as not supporting multiple system messages or requiring a strict user-assistant message order. It is very useful for local models served by a local model-serving library like [Xinference](https://github.com/xorbitsai/inference).

HOW TO USE:

```python
import Agently

agent_factory = (
    Agently.AgentFactory(is_debug=True)
    .set_settings("current_model", "OAIClient")
    # Mixtral for example
    .set_settings("model.OAIClient.url", "https://api.mistral.ai/v1")
    # if you want to use Moonshot Kimi:
    # .set_settings("model.OAIClient.url", "https://api.moonshot.cn/v1")
    # set model name
    # Mistral model list: https://docs.mistral.ai/platform/endpoints/
    .set_settings("model.OAIClient.options", { "model": "open-mistral-7b" })
    # Moonshot model list: https://platform.moonshot.cn/docs/pricing#文本生成模型-moonshot-v1
    # set API-KEY if needed
    .set_settings("model.OAIClient.auth.api_key", "")
    # set proxy if needed
    .set_proxy("http://127.0.0.1:7890")
    # you can also change message rules
    .set_settings("model.OAIClient.message_rules", {
        "no_multi_system_messages": True,  # True by default; combines multiple system messages into one
        "strict_orders": True,  # True by default; transforms message order into strict "User-Assistant-User-Assistant"
        "no_multi_type_messages": True,  # True by default; only allows text messages
    })
)

agent = agent_factory.create_agent()

(
    agent
    .set_role("You love EMOJI very much and try to use EMOJI in every sentence.")
    .chat_history([
        { "role": "user", "content": "It's a beautiful day, isn't it?" },
        { "role": "assistant", "content": "Right, shine and bright!☀️" }
    ])
    .input("What do you suggest us to do today?")
    # use .start("completions") if your model is a completion model
    .start("chat")
)
```
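As an illustration of what the `no_multi_system_messages` rule does conceptually (a sketch under stated assumptions, not the plugin's actual code), merging multiple system messages into a single leading one could look like this:

```python
def combine_system_messages(messages):
    """Merge all system messages into one leading system message,
    keeping the other messages in their original order."""
    system_texts = [m["content"] for m in messages if m["role"] == "system"]
    others = [m for m in messages if m["role"] != "system"]
    if not system_texts:
        return others
    return [{"role": "system", "content": "\n".join(system_texts)}] + others

messages = [
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi"},
    {"role": "system", "content": "Use emoji."},
]
print(combine_system_messages(messages))
```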


Updates

- `[Request: ERNIE]` Added support for the `system` parameter in the new API reference; the system prompt is now passed to the `system` parameter instead of being transformed into a user chat message. https://github.com/Maplemx/Agently/commit/dc52bdc9dfe829675b403e478c28464297fbdcd1
- `[Request]` Optimized the prompt for lists with multiple items https://github.com/Maplemx/Agently/commit/9f378c771a99796845dbbe835f0cbac4c9e0271f

Bug fixes

- `[Request Alias]` Fixed some bugs that caused `.general()` and `.abstract()` to not work https://github.com/Maplemx/Agently/commit/5f6dd5e7e14bf46e5b25b4898fc4767c1d5e7829
- `[Agent Component: Segment]` Fixed a bug that caused the streaming handler to not work https://github.com/Maplemx/Agently/commit/8ad370c531366d52f1c217d424c0ffc74a42f400
- `[Request: ERNIE]` Fixed some quotation-mark conflicts https://github.com/Maplemx/Agently/commit/fcdcdf04476ac932a6199d20ea63eb4b4d64c408

---

New features

- `[Request plugin: OAIClient]` Added a new request plugin, `OAIClient`, to support models whose APIs look a lot like the OpenAI API format (but usually come with some hidden rules that differ from the OpenAI API). This plugin can also be used to request local model services started by local model-serving libraries such as [Xinference](https://github.com/xorbitsai/inference).

How to use:

```python
import Agently

agent_factory = (
    Agently.AgentFactory(is_debug=True)
    .set_settings("current_model", "OAIClient")
    # Mixtral as an example
    .set_settings("model.OAIClient.url", "https://api.mistral.ai/v1")
    # if you want to use Moonshot's Kimi, use this url instead:
    # .set_settings("model.OAIClient.url", "https://api.moonshot.cn/v1")
    # set the exact model you want to use
    # Mistral model list: https://docs.mistral.ai/platform/endpoints/
    .set_settings("model.OAIClient.options", { "model": "open-mistral-7b" })
    # Moonshot model list: https://platform.moonshot.cn/docs/pricing#文本生成模型-moonshot-v1
    # set an API-KEY if needed (local models may not need one)
    .set_settings("model.OAIClient.auth.api_key", "")
    # set a proxy if needed
    .set_proxy("http://127.0.0.1:7890")
    # you can also change the message-processing rules
    .set_settings("model.OAIClient.message_rules", {
        "no_multi_system_messages": True,  # on by default; multiple system messages are merged into one
        "strict_orders": True,  # on by default; forces the message list into "user-assistant-user-assistant" order
        "no_multi_type_messages": True,  # on by default; keeps only text messages and puts the text value directly into content
    })
)

agent = agent_factory.create_agent()

(
    agent
    .set_role("You love EMOJI very much and try to use EMOJI in every sentence.")
    .chat_history([
        { "role": "user", "content": "It's a beautiful day, isn't it?" },
        { "role": "assistant", "content": "Right, shine and bright!☀️" }
    ])
    .input("What do you suggest us to do today?")
    # use .start("completions") to support completion models!
    .start("chat")
)
```

Updates

- `[Request plugin: ERNIE]` Added direct support for the `system` parameter in the new API reference. The system prompt for ERNIE is now passed directly to the API's `system` parameter instead of being converted into a user chat message;
- `[Request optimization]` Optimized the prompting method for lists that may contain multiple items.

Bug fixes

- `[Request alias]` Fixed a bug that caused `.general()` and `.abstract()` to not work;
- `[Agent component plugin: Segment]` Fixed a bug that caused handlers to not work during streaming output;
- `[Request plugin: ERNIE]` Fixed some quotation-mark conflicts.

3.2.1.0

New Features:

1. `[Request]` Two new models are supported!

- **Claude**:

```python
import Agently

agent_factory = Agently.AgentFactory()

(
    agent_factory
    .set_settings("current_model", "Claude")
    .set_settings("model.Claude.auth", { "api_key": "" })
    # switch model
    # model list: https://docs.anthropic.com/claude/docs/models-overview
    # default: claude-3-sonnet-20240229
    .set_settings("model.Claude.options", { "model": "claude-3-opus-20240229" })
)

# Test
agent = agent_factory.create_agent()
agent.input("Print 'It works'.").start()
```


- **MiniMax**:

```python
import Agently

agent_factory = Agently.AgentFactory()

(
    agent_factory
    .set_settings("current_model", "MiniMax")
    .set_settings("model.MiniMax.auth", {
        "group_id": "",
        "api_key": ""
    })
    # switch model
    # model list: https://www.minimaxi.com/document/guides/chat-model/V2?id=65e0736ab2845de20908e2dd
    # default: abab5.5-chat
    .set_settings("model.MiniMax.options", { "model": "abab6-chat" })
)

# Test
agent = agent_factory.create_agent()
agent.input("Print 'It works'.").start()
```


2. `[Agently Workflow]` Added a new method **.draw()** that generates Mermaid code presenting the current workflow graph!

```python
# after connecting all chunks
mermaid_code = workflow.draw()

# you can use the mermaid-python package to draw the graph in Colab:
# !pip install -q -U mermaid-python
from mermaid import Mermaid
Mermaid(mermaid_code)
```


Bug Fixed:

1. `[Framework]`: https://github.com/Maplemx/Agently/issues/49<br />Added try except when request event loop is not in debug mode to avoid error `Event loop is closed`.<br />添加了try except逻辑,来减少在非debug模式下`Event loop is closed`的报错;
2. `[Agently Workflow]`: https://github.com/Maplemx/Agently/issues/48<br />Removed unnecessary print when workflow start running.<br />移除了workflow启动时会出现的一个不必要的print。

3.2.0.1

New features:

1. `[Agently Workflow]`
We're glad to introduce a brand-new feature of Agently v3.2 to you all: `Agently Workflow`!

With this new feature, you can arrange and manage your LLM-based application workflow in just 3 simple and easy steps:

1. Define and program your application logic into different workflow chunks;
2. Connect the chunks in order using `chunk.connect_to()` (loops and condition judgments are supported);
3. Start the workflow using `workflow.startup()`.

[Visit Agently Workflow Showcase Page to Explore More](https://colab.research.google.com/github/Maplemx/Agently/blob/main/playground/workflow_series_01_building_a_multi_round_chat.ipynb)

2. `[Agent Component: Decorator]: agent.tool(tool_info:dict={})`

Now you can use `agent.tool()` to decorate a function and register it as an agent tool:

```python
from datetime import datetime

@agent.tool()
def get_current_date():
    """Get current date"""
    return datetime.now().date().strftime("%Y-%B-%d")
```


You can also pass other parameters to `agent.tool()`, in the same way as when using `agent.register_tool()`.

Updates:

1. `[Framework]` Updated the inheritance logic of settings and added an `is_debug` parameter to `.create_agent()`;
2. `[Agent Component: Role]` Changed `.set_role_name()` to `.set_role_id()`; the role id is no longer passed with agent requests;
3. `[Facility: RoleManager]` Changed `.set_name()` to `.set_id()`;
4. `[Agent Component: Tool]` Updated the tool-using prompt;

Bug fixes:

1. `[Agent Component: Segment]` Clear segments' prompt cache earlier and added try/except logic to avoid errors caused by an unclean runtime;

----

New features:

1. `[Agently Workflow]`

We are very glad to introduce the brand-new feature of Agently v3.2: Agently Workflow!

With this new feature, in just three steps you can easily arrange and manage your LLM application workflows:

1. Write the logic of each unit of work in your application (input, condition judgment, request execution, data access, etc.) into workflow chunks;
2. Connect the chunks in the working order you want with `chunk.connect_to()` (circular connections, conditional branches, and other complex relationships are supported);
3. Run the workflow with `workflow.startup()`.

Showcase: [see this example to learn how to use the new feature](https://colab.research.google.com/github/Maplemx/Agently/blob/main/playground/workflow_series_01_building_a_multi_round_chat.ipynb)

2. `[Agent Component: Decorator]: agent.tool(tool_info:dict={})`

Now you can use the `agent.tool()` decorator to register a function as a tool for the agent, like this:

```python
from datetime import datetime

@agent.tool()
def get_current_date():
    """Get current date"""
    return datetime.now().date().strftime("%Y-%B-%d")
```

You can pass other parameters to `agent.tool()`; the parameter requirements are the same as those of `agent.register_tool()`.

Major updates

1. `[Framework Core]` Improved the inheritance logic of some `settings` items within the framework, and `.create_agent()` now accepts an `is_debug` parameter;
2. `[Agent Component: Role]` Renamed `.set_role_name()` to `.set_role_id()` to decouple it from business semantics; the configured role id is no longer passed to the model on agent requests and serves only as the agent's identity marker in code;
3. `[Facility: RoleManager]` Likewise, renamed `.set_name()` to `.set_id()`;
4. `[Agent Component: Tool]` Updated some tool-calling prompts to improve the Tool plugin's output quality;

Bug fixes

1. `[Agent Component: Segment]` Clear the Segments prompt settings cache earlier, and added error-listening logic so that the current Segments settings are cleared when a run errors out, avoiding interference with the next run (especially in Colab).
