Lagent

Latest version: v0.2.4


0.5.0rc1

Abstract

The current landscape of agent frameworks predominantly focuses on low-code development (using static diagrams or pipelines) or on specific domain tasks, which often leads to difficulties in debugging and rigid workflows. **L**anguage **Agent** (Lagent) addresses these challenges by offering an imperative, Pythonic programming style that treats code as an agent. This approach makes debugging easier and streamlines the development of agent workflows. Additionally, Lagent allows straightforward deployment as an HTTP service, supporting the construction of distributed multi-agent applications through centralized programming. This enhances development efficiency while maintaining effectiveness.

In this paper, we detail the principles that drove the implementation of Lagent and how they are reflected in its architecture. We emphasize that every aspect of Lagent is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.

Usability-centric design

Consequently, agents themselves evolved rapidly from a single Plan-Action-Iteration or Plan-Then-Act agent into highly varied programs, often composed of many loops and recursive functions.
To support this growing complexity, Lagent foregoes the potential benefits of a graph-metaprogramming-based or event-driven approach in order to preserve the imperative programming model of Python. This design is inspired by PyTorch, and Lagent extends it to all aspects of agent workflows. Defining LLMs, tools, and memories, deploying an HTTP service, distributing multi-agents, and making the inference process asynchronous are all expressed using the familiar concepts developed for general-purpose programming.
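To make the imperative style concrete, the "loops and recursive functions" shape of agent programs can be written directly as ordinary Python control flow. The sketch below is purely illustrative: `call_llm`, `run_tool`, and `solve` are hypothetical stubs, not part of the Lagent API.

```python
# Plain-Python sketch of imperative agent control flow; `call_llm` and
# `run_tool` are hypothetical stand-ins for an LLM call and a tool call.
def call_llm(prompt: str) -> str:
    # Stub: pretend the model asks for one tool call, then answers.
    return 'FinalAnswer: 4' if 'Observation' in prompt else 'Action: add 2 2'

def run_tool(action: str) -> str:
    # Stub tool execution producing an observation for the model.
    return 'Observation: 4'

def solve(prompt: str, depth: int = 0) -> str:
    # Recursion and an ordinary `if` replace a static graph or event loop.
    if depth > 3:
        return 'gave up'
    reply = call_llm(prompt)
    if reply.startswith('FinalAnswer'):
        return reply
    return solve(prompt + '\n' + run_tool(reply), depth + 1)

print(solve('What is 2 + 2?'))  # FinalAnswer: 4
```

Because the loop is just Python, a breakpoint or `print` inside `solve` inspects the full agent state, which is the debugging benefit the imperative model buys.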

This solution ensures that any new agent architecture can be easily implemented with Lagent. For instance, agents (commonly understood in agent research as Instruction + LLM + Memory + Plan/Action based on the current state) are typically expressed as Python classes whose constructors create and initialize their parameters, and whose forward methods process an input. Similarly, multi-agents are usually represented as classes that compose single agents, but let us state again that nothing forces the user to structure their code in that way. The listings below demonstrate how ReAct (a commonly used agent) and TranslateAgent (a translation agent pipeline) can be built with Lagent. Note that ReAct is of course part of the library, but we show an example implementation to highlight how simple it is.

```python
class ReAct(Agent):
    def __init__(self, tools, max_turn: int = 4):
        llm = LLM()
        self.tools = tools
        self.max_turn = max_turn
        instruction = react_instruction.format(
            action_info=get_tools_description(self.tools))
        self.select_agent = Agent(
            llm=llm, template=instruction)
        # Stop once the model emits a final answer.
        self.finish_condition = (
            lambda m: 'FinalAnswer' in m.content)
        super().__init__()

    def forward(
        self, message: AgentMessage, **kwargs
    ) -> AgentMessage:
        for _ in range(self.max_turn):
            message = self.select_agent(message)
            if self.finish_condition(message):
                return message
            message = self.tools(message)
        return message
```
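For illustration, the same Plan-Action-Iteration loop can be exercised end to end with stubbed components. Everything below (`AgentMessage`, the stub agent, the stub tools) is a minimal hypothetical stand-in written for this sketch, not the real Lagent API.

```python
# Minimal stubs exercising the ReAct-style loop with no LLM involved.
class AgentMessage:
    def __init__(self, content: str):
        self.content = content

class StubSelectAgent:
    """Pretends to be the LLM-backed agent: answers after one tool call."""
    def __init__(self):
        self.calls = 0

    def __call__(self, message):
        self.calls += 1
        if self.calls < 2:
            return AgentMessage('Action: search[weather]')
        return AgentMessage('FinalAnswer: sunny')

class StubTools:
    """Pretends to be the tool executor."""
    def __call__(self, message):
        return AgentMessage('Observation: it is sunny')

def react_loop(message, select_agent, tools, max_turn=4):
    # Same control flow as the forward method above.
    for _ in range(max_turn):
        message = select_agent(message)
        if 'FinalAnswer' in message.content:
            return message
        message = tools(message)
    return message

result = react_loop(AgentMessage('What is the weather?'),
                    StubSelectAgent(), StubTools())
print(result.content)  # FinalAnswer: sunny
```

Swapping the stubs for real LLM-backed agents changes nothing about the loop itself, which is the point of keeping the architecture as plain Python.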


```python
class TranslateAgent(Agent):
    def __init__(self):
        llm = LLM()
        self.initial_agent = Agent(
            template=initial_trans_template, llm=llm)
        self.reflection_agent = Agent(
            template=reflection_template, llm=llm)
        self.improve_agent = Agent(
            template=improve_translation_template, llm=llm)
        super().__init__()

    def forward(
        self, message: AgentMessage, **kwargs
    ) -> AgentMessage:
        initial_message = self.initial_agent(message)
        reflection_message = self.reflection_agent(
            message, initial_message)
        response_message = self.improve_agent(
            message, initial_message, reflection_message)
        return response_message
```
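The dataflow of this three-stage translate-reflect-improve pipeline can be sketched with stub stages in place of LLM-backed agents. The names below (`AgentMessage`, `make_stage`, `translate`) are illustrative stand-ins, not Lagent API.

```python
class AgentMessage:
    def __init__(self, content: str):
        self.content = content

def make_stage(prefix: str):
    # Each stage is just a callable over messages, mirroring Agent.__call__;
    # it concatenates its inputs so the dataflow is visible in the output.
    def stage(*messages):
        return AgentMessage(prefix + ' | '.join(m.content for m in messages))
    return stage

initial = make_stage('draft: ')
reflect = make_stage('critique: ')
improve = make_stage('final: ')

def translate(message):
    # Same dataflow as the forward method above: each later stage sees the
    # original message plus the earlier stages' outputs.
    draft = initial(message)
    critique = reflect(message, draft)
    return improve(message, draft, critique)

out = translate(AgentMessage('Bonjour'))
print(out.content)  # final: Bonjour | draft: Bonjour | critique: ...
```

Because each stage is an ordinary callable, reordering or inserting stages is a one-line change in `forward` rather than a pipeline-graph edit.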

0.2.4

What's Changed
* Pop invalid gen params for openai api by liujiangning30 in https://github.com/InternLM/lagent/pull/217
* Fix: event loop for DuckDuckGoSearch by liujiangning30 in https://github.com/InternLM/lagent/pull/220
* Feat: GPTAPI supports qwen by liujiangning30 in https://github.com/InternLM/lagent/pull/218
* Ensure completeness of responses of qwen model by liujiangning30 in https://github.com/InternLM/lagent/pull/225
* Fix unclosed event loop by liujiangning30 in https://github.com/InternLM/lagent/pull/235
* Fix: timeout for ddgs by liujiangning30 in https://github.com/InternLM/lagent/pull/236
* [Fix] Fix griffe version by fanqiNO1 in https://github.com/InternLM/lagent/pull/237
* Fix KeyError by liujiangning30 in https://github.com/InternLM/lagent/pull/226
* [feature] support brave search api and refactor google serper api in BingBrowser by tackhwa in https://github.com/InternLM/lagent/pull/233
* Add support for SenseTime's SenseNova series LLMs (tested together with the MindSearch project) by winer632 in https://github.com/InternLM/lagent/pull/234
* [docs] fix some bugs in docs.md by MING-ZCH in https://github.com/InternLM/lagent/pull/249
* Update requirements by jamiechoi1995 in https://github.com/InternLM/lagent/pull/245
* update requirement by Harold-lkk in https://github.com/InternLM/lagent/pull/257
* Compatible with lmdeploy by lvhan028 in https://github.com/InternLM/lagent/pull/258
* [Version] v0.2.4 by Harold-lkk in https://github.com/InternLM/lagent/pull/261

New Contributors
* winer632 made their first contribution in https://github.com/InternLM/lagent/pull/234
* MING-ZCH made their first contribution in https://github.com/InternLM/lagent/pull/249
* jamiechoi1995 made their first contribution in https://github.com/InternLM/lagent/pull/245
* lvhan028 made their first contribution in https://github.com/InternLM/lagent/pull/258

**Full Changelog**: https://github.com/InternLM/lagent/compare/v0.2.3...v0.2.4

0.2.3

What's Changed
* Fix chat return of `GPTAPI` by braisedpork1964 in https://github.com/InternLM/lagent/pull/166
* Fix bug of ppt and googlescholar by liujiangning30 in https://github.com/InternLM/lagent/pull/167
* fix typo "ablility " in overview.md by tackhwa in https://github.com/InternLM/lagent/pull/175
* Fix errmsg: cast dict to str by liujiangning30 in https://github.com/InternLM/lagent/pull/172
* feat: support vllm by RangiLyu in https://github.com/InternLM/lagent/pull/177
* support demo with hf by liujiangning30 in https://github.com/InternLM/lagent/pull/179
* fix bug of Internlm2Protocol.parse by liujiangning30 in https://github.com/InternLM/lagent/pull/180
* Fix generation parameters in API models by braisedpork1964 in https://github.com/InternLM/lagent/pull/181
* support batch by Harold-lkk in https://github.com/InternLM/lagent/pull/182
* fix deprecated top_k for GPTAPI by Iiji in https://github.com/InternLM/lagent/pull/185
* support json mode and proxy by Harold-lkk in https://github.com/InternLM/lagent/pull/189
* Allow access to code from interpreter results by braisedpork1964 in https://github.com/InternLM/lagent/pull/191
* Fix: typo for lmdeploy_wrapper by fanqiNO1 in https://github.com/InternLM/lagent/pull/171
* Feat: stream chat for GPTAPI by liujiangning30 in https://github.com/InternLM/lagent/pull/194
* align streaming return format for GPTAPI by liujiangning30 in https://github.com/InternLM/lagent/pull/196
* stream chat for GPTAPI by liujiangning30 in https://github.com/InternLM/lagent/pull/197
* Mind search by Harold-lkk in https://github.com/InternLM/lagent/pull/208
* Fix: update requirements by Liqu1d-G in https://github.com/InternLM/lagent/pull/214
* role with name by liujiangning30 in https://github.com/InternLM/lagent/pull/215
* Bump to v0.3.0 by liujiangning30 in https://github.com/InternLM/lagent/pull/213
* Bump to v0.2.3 by liujiangning30 in https://github.com/InternLM/lagent/pull/216

New Contributors
* tackhwa made their first contribution in https://github.com/InternLM/lagent/pull/175
* Iiji made their first contribution in https://github.com/InternLM/lagent/pull/185
* fanqiNO1 made their first contribution in https://github.com/InternLM/lagent/pull/171
* Liqu1d-G made their first contribution in https://github.com/InternLM/lagent/pull/214

**Full Changelog**: https://github.com/InternLM/lagent/compare/v0.2.2...v0.2.3

0.2.2

What's Changed
* Fix bug of LMDeployClient by liujiangning30 in https://github.com/InternLM/lagent/pull/140
* fix bug of TritonClient by liujiangning30 in https://github.com/InternLM/lagent/pull/141
* update readme demo by Harold-lkk in https://github.com/InternLM/lagent/pull/143
* Fix type annotation by braisedpork1964 in https://github.com/InternLM/lagent/pull/144
* [Enhance] lazy import for actions by Harold-lkk in https://github.com/InternLM/lagent/pull/146
* Fix: skip start_token by liujiangning30 in https://github.com/InternLM/lagent/pull/145
* Fix: filter_suffix in TritonClient by liujiangning30 in https://github.com/InternLM/lagent/pull/150
* Fix: gen_config in lmdeploypipeline updated by input gen_params by liujiangning30 in https://github.com/InternLM/lagent/pull/151
* max_tokens to max_new_tokens by liujiangning30 in https://github.com/InternLM/lagent/pull/149
* support inference for pad_token & chatglm chat by zehuichen123 in https://github.com/InternLM/lagent/pull/157
* Feat: no_skip_speicial_token by liujiangning30 in https://github.com/InternLM/lagent/pull/148
* fix batch generate by Harold-lkk in https://github.com/InternLM/lagent/pull/158
* fix bug caused by static model_name by liujiangning30 in https://github.com/InternLM/lagent/pull/156
* update version by liujiangning30 in https://github.com/InternLM/lagent/pull/161


**Full Changelog**: https://github.com/InternLM/lagent/compare/v0.2.1...v0.2.2

0.2.1

What's Changed
* Fix docstring format of `GoogleScholar` by braisedpork1964 in https://github.com/InternLM/lagent/pull/138
* [Version] Bump v0.2.1 by braisedpork1964 in https://github.com/InternLM/lagent/pull/139


**Full Changelog**: https://github.com/InternLM/lagent/compare/v0.2.0...v0.2.1

0.2.0

What's Changed
- Stream Output: provides the `stream_chat` interface for streaming output, allowing cool streaming demos right at your local setup.
- Unified interfacing, with a comprehensive design upgrade for enhanced extensibility:
  - Model: whether it's the OpenAI API, Transformers, or the LMDeploy inference acceleration framework, you can seamlessly switch between models.
  - Action: simple inheritance and decoration allow you to create your own personal toolkit, adaptable to both InternLM and GPT.
  - Agent: consistent with the Model's input interface, the transformation from model to intelligent agent takes only one step, facilitating the exploration and implementation of various agents.
- Documentation has been thoroughly upgraded, with full API documentation coverage.
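The "inheritance and decoration" toolkit pattern mentioned in the Action item can be sketched as follows. `BaseAction` and `tool_api` below are illustrative stand-ins written from scratch for this sketch; they should not be taken as Lagent's exact API.

```python
import inspect

def tool_api(func):
    """Mark a method as a callable tool and record its description."""
    func.is_tool = True
    func.description = inspect.getdoc(func) or ''
    return func

class BaseAction:
    def tools(self):
        # Collect every decorated method defined on the subclass.
        return {name: method
                for name, method in inspect.getmembers(self, inspect.ismethod)
                if getattr(method, 'is_tool', False)}

class Calculator(BaseAction):
    @tool_api
    def add(self, a: float, b: float) -> float:
        """Return the sum of two numbers."""
        return a + b

calc = Calculator()
print(sorted(calc.tools()))       # ['add']
print(calc.tools()['add'](2, 3))  # 5
```

Registering tools by introspecting decorated methods keeps a custom toolkit down to one subclass, which matches the one-step extensibility these release notes describe.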

Watch our demo at https://www.youtube.com/watch?v=YAelRLi0Zak
