Agently

Latest version: v3.4.0.5


3.4.0.1

What's Changed
* Add `delta` to realtime response data by Maplemx in https://github.com/Maplemx/Agently/pull/171


**Full Changelog**: https://github.com/Maplemx/Agently/compare/v3.4.0.0...v3.4.0.1

3.4.0.0

(Will update soon)

What's Changed
* Update README.md by moyueheng in https://github.com/Maplemx/Agently/pull/167
* keep up by Maplemx in https://github.com/Maplemx/Agently/pull/169
* Dev by Maplemx in https://github.com/Maplemx/Agently/pull/170

New Contributors
* moyueheng made their first contribution in https://github.com/Maplemx/Agently/pull/167

**Full Changelog**: https://github.com/Maplemx/Agently/compare/v3.3.4.8...v3.4.0.0

3.3.4.8

Update

1. `[AppConnector]` Support Gradio additional inputs.
2. `[DataGenerator]` Add new methods `.future()` and `.join()` to support pre-defining a generator handler before `.add()` is called.

3.3.4.7

Yeah... I'm sorry that I didn't keep this release document up to date. As you know, just like a small snowball rolling down a slope, it grew bigger and bigger, and it got harder and harder for me to catch this document up...

In fact, from v3.2.2.3 to v3.3.4.7 we made a lot of progress, such as:

- We improved Agently Workflow to make it both powerful and easy to use; read the [Agently Workflow Official Document](http://agently.cn/guides/workflow/index.html) to explore more.
- We added support for many more models; read the [Agently AnyModel Official Document](http://agently.cn/guides/model_settings/index.html)
- We added a new feature named `AppConnector` that helps developers create an application with a web UI really quickly, and lets them switch the application UI framework between `gradio`, `streamlit`, and `shell` without changing any application logic code. We will continue to improve this feature in the 3.3.4.x versions.
- And of course we fixed a lot of bugs and did a lot of optimization work.

As a really small team with just one full-time developer and one part-time developer, maybe we haven't done as well as teams with more people and funding, but we keep fighting and keep going, because we believe the Agently AI application development framework can really help developers create wonderful LLM-based applications faster and more easily.

We'll do better! Fighting!

3.2.2.3

New Features

- `[Agent.load_yaml_prompt()]`:

We provide developers a new way to manage your request prompt templates in YAML files!

- HOW TO USE:

- YAML file:

```yaml
input: ${user_input}
use_public_tools:
  - browse
set_tool_proxy: http://127.0.0.1:7890
instruct:
  output language: English
output:
  page_topic:
    $type: str
    $desc: ""
  summary:
    $type: str
    $desc: ""
```


- Python file:
```python
import Agently

agent_factory = (
    Agently.AgentFactory()
        .set_settings("model.Google.auth.api_key", "")
        .set_settings("current_model", "Google")
)

agent = agent_factory.create_agent()

print(
    agent
        .load_yaml_prompt(
            path="./yaml_prompt.yaml",
            # or just pass a YAML string instead: yaml=yaml_str
            variables={
                "user_input": "http://Agently.tech",
            },
        )
        .start()
)
```


- Result:
```shell
{'page_topic': 'Agently - Artificial Intelligence for the Enterprise', 'summary': 'Agently is a leading provider of AI-powered solutions for the enterprise. Our platform enables businesses to automate tasks, improve efficiency, and gain insights from their data. We offer a range of services, including:\n\n* **AI-powered automation:** Automate repetitive tasks, such as data entry and customer service, to free up your team to focus on more strategic initiatives.\n* **Machine learning:** Use machine learning to improve the accuracy of your predictions and decisions. We can help you identify trends and patterns in your data, and develop models that can predict future outcomes.\n* **Natural language processing:** Use natural language processing to understand and generate human language. This can be used for a variety of applications, such as chatbots, text analysis, and sentiment analysis.\n\nAgently is committed to helping businesses succeed in the digital age. We believe that AI is a powerful tool that can be used to improve efficiency, innovation, and customer satisfaction. We are excited to partner with you to explore the possibilities of AI for your business.'}
```
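
The `${user_input}` placeholder substitution used in the YAML prompt template above can be illustrated in plain Python. This is only a minimal sketch of the idea using the standard library's `string.Template` (which happens to use the same `${name}` placeholder style), not Agently's actual implementation:

```python
from string import Template

# a YAML prompt template with a ${user_input} placeholder, as in the example above
yaml_template = """\
input: ${user_input}
use_public_tools:
  - browse
"""

def fill_yaml_prompt(template_text, variables):
    # substitute ${name} placeholders before the YAML is parsed
    return Template(template_text).substitute(variables)

filled = fill_yaml_prompt(yaml_template, {"user_input": "http://Agently.tech"})
print(filled)
```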


- `[Agently Workflow: YAML Flow]`: `[🧪beta] This feature may change in the future`

We want to provide a simple way to help developers manage workflows more easily with YAML files, so we are publishing a beta feature, **YAML Flow**, to present the idea.

With this new feature, you can use YAML files to declare chunks and manage the connections between them.

Also, we preset some basic chunks (`Start`, `UserInput`, and `Print`) to help you build your own workflow quicker.

- BASIC USE:

- YAML file:

```yaml
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]:'
  print:
    type: Print
connections:
  - start->user_input->print
```


- Python file:

```python
import Agently

workflow = Agently.Workflow()
# You can use draw=True to output the workflow's Mermaid code instead of running it
print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
```


- Result:

```shell
[User Input]: 1+2
>>> 1+2
```
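
The `start->user_input->print` connection syntax above chains chunks into a pipeline. As an illustration of the idea (a hypothetical parser, not Agently's code), each `->` arrow can be read as one directed edge between two chunks:

```python
def parse_connections(connection_specs):
    # expand each "a->b->c" spec into directed edges [("a", "b"), ("b", "c")]
    edges = []
    for spec in connection_specs:
        nodes = [n.strip() for n in spec.split("->")]
        edges.extend(zip(nodes, nodes[1:]))
    return edges

print(parse_connections(["start->user_input->print"]))
# [('start', 'user_input'), ('user_input', 'print')]
```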


- ADD YOUR OWN EXECUTORS:

- YAML file:

```yaml
chunks:
  start:
    type: Start
  user_input:
    type: UserInput
    placeholder: '[User Input]:'
  # We state a new chunk named 'calculate'
  calculate:
    # We add a calculate executor to calculate user input
    # with executor_id = 'calculate'
    executor: calculate
  print:
    type: Print
connections:
  # Then add the 'calculate' chunk into the workflow
  - start->user_input->calculate->print
```


- Python file:

```python
import Agently

workflow = Agently.Workflow()

# use decorator `workflow.executor_func(<executor_id>)`
# to state the executor function
@workflow.executor_func("calculate")
def calculate_executor(inputs, storage):
    result = eval(inputs["input"])
    return str(result)

print(workflow.start_yaml(path="./yaml_file.yaml", draw=True))
workflow.start_yaml(path="./yaml_file.yaml")
```


- Result:

```shell
[User Input]: 1+2
>>> 3
```
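
The executor mechanism above can be pictured as a decorator-based registry: the decorator maps an `executor_id` to a function, and the workflow looks that function up when a chunk with the matching id runs. A minimal sketch of the pattern (illustrative only, not Agently's internals):

```python
class MiniWorkflow:
    def __init__(self):
        self.executors = {}

    def executor_func(self, executor_id):
        # decorator factory: register `func` under `executor_id` and return it unchanged
        def register(func):
            self.executors[executor_id] = func
            return func
        return register

wf = MiniWorkflow()

@wf.executor_func("calculate")
def calculate_executor(inputs, storage):
    return str(eval(inputs["input"]))

print(wf.executors["calculate"]({"input": "1+2"}, {}))  # 3
```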


- `[Basic Prompt Management Methods]`:

Add a series of prompt management methods to help developers directly manage the prompt information in agent instances or in single requests, with different information life cycles.

The methods below manage prompt information in the agent instance; this prompt information will be passed to the model on every request **until the agent instance is dropped**.

- `agent.set_agent_prompt(<slot_name>, <value>)`
- `agent.get_agent_prompt(<slot_name>)`
- `agent.remove_agent_prompt(<slot_name>)`

The methods below manage prompt information in a single request and will **only use the prompt information once**! When the request is finished, all of this prompt information is erased.

- `agent.set_request_prompt(<slot_name>, <value>)`
- `agent.get_request_prompt(<slot_name>)`
- `agent.remove_request_prompt(<slot_name>)`

[Read Development Handbook - Standard Request Slots to learn more](https://github.com/Maplemx/Agently/blob/main/docs/guidebook/application_development_handbook.ipynb)
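
The two life cycles can be sketched as two dictionaries with different clearing rules: agent-level slots persist across requests, while request-level slots are merged in once and then erased. A hypothetical illustration of the behavior described above (not Agently's implementation):

```python
class PromptSlots:
    def __init__(self):
        self.agent_prompt = {}    # lives until the agent instance is dropped
        self.request_prompt = {}  # lives for a single request only

    def set_agent_prompt(self, slot, value):
        self.agent_prompt[slot] = value

    def set_request_prompt(self, slot, value):
        self.request_prompt[slot] = value

    def build_request(self):
        # request-level slots override agent-level slots, then are erased
        merged = {**self.agent_prompt, **self.request_prompt}
        self.request_prompt.clear()
        return merged

slots = PromptSlots()
slots.set_agent_prompt("role", "translator")
slots.set_request_prompt("input", "Hello!")
print(slots.build_request())  # both slots present
print(slots.build_request())  # only the agent-level slot remains
```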

Updates:

- `[Agently Workflow]`: Made some changes to make complex flows more stable. https://github.com/Maplemx/Agently/pull/64
- `[Framework Core]`: Renamed variables of basic prompt slots to keep them consistent. https://github.com/Maplemx/Agently/commit/3303aa1f7083d3ac9ddcc744f40c4adc56610939
- `[Facility]`: Use `Agently.lib` as an alias of `Agently.facility`
- `[Tools: browse]`: Removed newspaper3k and replaced it with BeautifulSoup4 https://github.com/Maplemx/Agently/commit/df8c69a990578ec064a3c69d15ba185623d67100

Bug Fixes:

- `[Request: OpenAI]`: Fixed a bug that raised the error `await can not use on response` when using a proxy https://github.com/Maplemx/Agently/commit/7643cfe159f57ee05afd55a23fbe2b594a556d53
- `[Request: OAIClient]`: Fixed a bug where the proxy could not work correctly https://github.com/Maplemx/Agently/commit/7643cfe159f57ee05afd55a23fbe2b594a556d53
- `[Request: OAIClient]`: Fixed a bug where the system prompt could not work correctly https://github.com/Maplemx/Agently/commit/1f9d275c9c415b5eef439b95f796bb617164b0cf
- `[Agent Component: Tool]`: Fixed a bug that made tool calling not work correctly https://github.com/Maplemx/Agently/commit/48b80f85c8690e94658e5795e9191a643f663ac3

---

New Features

Manage single-request prompt templates with YAML-format data

`[Agent.load_yaml_prompt()]`

We provide a brand-new YAML syntax to help you manage single agent requests better. Besides making it convenient for developers to decouple different modules, we also hope this standardized configuration style can express the abilities Agently provides across programming languages, or be handed to non-developers to use.

How to use

- YAML file / YAML string content:

```yaml
input: ${user_input}
use_public_tools:
  - browse
```

3.2.1.3

New Features

- `[Request: OAIClient]` Add a new request plugin for models whose API format is similar to OpenAI's but which have some additional rules, such as not supporting multiple system messages or requiring a strict user-assistant message order. It's very useful for local model services started by a local model serving library like [Xinference](https://github.com/xorbitsai/inference).

HOW TO USE:

```python
import Agently

agent_factory = (
    Agently.AgentFactory(is_debug=True)
        .set_settings("current_model", "OAIClient")
        # Mixtral for example
        .set_settings("model.OAIClient.url", "https://api.mistral.ai/v1")
        # if you want to use Moonshot Kimi instead:
        # .set_settings("model.OAIClient.url", "https://api.moonshot.cn/v1")
        # set the model name
        # Mixtral model list: https://docs.mistral.ai/platform/endpoints/
        # Moonshot model list: https://platform.moonshot.cn/docs/pricing#文本生成模型-moonshot-v1
        .set_settings("model.OAIClient.options", { "model": "open-mistral-7b" })
        # set the API key if needed
        .set_settings("model.OAIClient.auth.api_key", "")
        # set a proxy if needed
        .set_proxy("http://127.0.0.1:7890")
        # you can also change the message rules
        .set_settings("model.OAIClient.message_rules", {
            "no_multi_system_messages": True,  # True by default: combine multiple system messages into one
            "strict_orders": True,  # True by default: transform the message order into strict "User-Assistant-User-Assistant"
            "no_multi_type_messages": True,  # True by default: only allow text messages
        })
)

agent = agent_factory.create_agent()

(
    agent
        .set_role("You love EMOJI very much and try to use EMOJI in every sentence.")
        .chat_history([
            { "role": "user", "content": "It's a beautiful day, isn't it?" },
            { "role": "assistant", "content": "Right, shine and bright!☀️" }
        ])
        .input("What do you suggest us to do today?")
        # use .start("completions") if your model is a completion model
        .start("chat")
)
```
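
To see what a rule like `no_multi_system_messages` has to deal with, consider an OpenAI-style message list containing several system messages. A hypothetical normalizer (illustrative only, not the plugin's actual code) could merge them into a single leading system message:

```python
def merge_system_messages(messages):
    # combine all system messages into one leading system message,
    # keeping the other messages in their original order
    system_text = "\n".join(m["content"] for m in messages if m["role"] == "system")
    rest = [m for m in messages if m["role"] != "system"]
    merged = [{"role": "system", "content": system_text}] if system_text else []
    return merged + rest

messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi!"},
    {"role": "system", "content": "Use EMOJI."},
]
print(merge_system_messages(messages))
```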


Update

- `[Request: ERNIE]` Add support for the `system` parameter in the new API reference; the system prompt is now passed to the `system` parameter instead of being transformed into one of the user chat messages. https://github.com/Maplemx/Agently/commit/dc52bdc9dfe829675b403e478c28464297fbdcd1
- `[Request]` Optimized the prompt for lists that may contain multiple items https://github.com/Maplemx/Agently/commit/9f378c771a99796845dbbe835f0cbac4c9e0271f

Bug Fixes

- `[Request Alias]` Fixed some bugs that caused `.general()` and `.abstract()` to not work https://github.com/Maplemx/Agently/commit/5f6dd5e7e14bf46e5b25b4898fc4767c1d5e7829
- `[Agent Component: Segment]` Fixed a bug that caused the streaming handler to not work https://github.com/Maplemx/Agently/commit/8ad370c531366d52f1c217d424c0ffc74a42f400
- `[Request: ERNIE]` Fixed some quotation mark conflicts https://github.com/Maplemx/Agently/commit/fcdcdf04476ac932a6199d20ea63eb4b4d64c408

---

New Features

- `[Request Plugin: OAIClient]` Added a new request plugin `OAIClient` to support models whose API looks a lot like the OpenAI API format (but usually has some hidden rules that differ from the OpenAI API). This plugin can also be used to request local model services started by a local model serving library such as [Xinference](https://github.com/xorbitsai/inference).

How to use:

```python
import Agently

agent_factory = (
    Agently.AgentFactory(is_debug=True)
        .set_settings("current_model", "OAIClient")
        # Mixtral as an example
        .set_settings("model.OAIClient.url", "https://api.mistral.ai/v1")
        # if you want to use Moonshot's Kimi, use this url instead:
        # .set_settings("model.OAIClient.url", "https://api.moonshot.cn/v1")
        # set the specific model you want to use
        # Mixtral model list: https://docs.mistral.ai/platform/endpoints/
        # Moonshot model list: https://platform.moonshot.cn/docs/pricing#文本生成模型-moonshot-v1
        .set_settings("model.OAIClient.options", { "model": "open-mistral-7b" })
        # set the API key (if needed; local models may not need one)
        .set_settings("model.OAIClient.auth.api_key", "")
        # set a proxy (if needed)
        .set_proxy("http://127.0.0.1:7890")
        # you can also change the message handling rules
        .set_settings("model.OAIClient.message_rules", {
            "no_multi_system_messages": True,  # on by default: multiple system messages will be merged into one
            "strict_orders": True,  # on by default: forces the message list into strict "user-assistant-user-assistant" order
            "no_multi_type_messages": True,  # on by default: keeps only text messages and puts the text value directly into content
        })
)

agent = agent_factory.create_agent()

(
    agent
        .set_role("You love EMOJI very much and try to use EMOJI in every sentence.")
        .chat_history([
            { "role": "user", "content": "It's a beautiful day, isn't it?" },
            { "role": "assistant", "content": "Right, shine and bright!☀️" }
        ])
        .input("What do you suggest us to do today?")
        # use .start("completions") to support completion models!
        .start("chat")
)
```


Update
- `[Request Plugin: ERNIE]` Added direct support for the `system` parameter in the new API reference. The system prompt sent to ERNIE is now passed straight to the API's `system` parameter instead of being transformed into a user chat message.
- `[Request]` Optimized the prompting method for lists that may contain multiple items.

Bug Fixes

- `[Request Alias]` Fixed issues that caused `.general()` and `.abstract()` to not work.
- `[Agent Component Plugin: Segment]` Fixed a bug that caused the handler to not work during streaming output.
- `[Request Plugin: ERNIE]` Fixed some quotation mark conflicts.
