Custom LLM Chat Agent (with a ChatModel)
This section covers how to create your own custom agent based on a chat model. An LLM chat agent consists of four parts (a minimal sketch wiring them together follows this list):
- PromptTemplate: the prompt template that tells the language model what to do
- ChatModel: the language model that drives the agent
- Stop sequence: tells the LLM to stop generating as soon as this string is produced
- OutputParser: determines how to parse the LLM output into an AgentAction or AgentFinish object
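The sketch below wires these four pieces together using the legacy LangChain (0.0.x) interfaces (`LLMSingleActionAgent`, `BaseChatPromptTemplate`, `AgentOutputParser`). The template text, the `CustomPromptTemplate` and `CustomOutputParser` class names, and the single `Search` tool are illustrative assumptions, not part of the original example:

```python
import re
from typing import List, Union

from langchain import LLMChain, SerpAPIWrapper
from langchain.agents import AgentOutputParser, LLMSingleActionAgent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.prompts import BaseChatPromptTemplate
from langchain.schema import AgentAction, AgentFinish, HumanMessage

# Illustrative ReAct-style template; {tools}, {tool_names}, {input} and
# {agent_scratchpad} are filled in by CustomPromptTemplate below.
template = """Answer the question using the provided tools:

{tools}

Use this format:
Question: the input question
Thought: what to do next
Action: one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the final answer

Question: {input}
{agent_scratchpad}"""


class CustomPromptTemplate(BaseChatPromptTemplate):
    """PromptTemplate: turns the input and previous steps into chat messages."""

    template: str
    tools: List[Tool]

    def format_messages(self, **kwargs) -> List[HumanMessage]:
        # Render previous (action, observation) pairs into the scratchpad.
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        kwargs["tools"] = "\n".join(f"{t.name}: {t.description}" for t in self.tools)
        kwargs["tool_names"] = ", ".join(t.name for t in self.tools)
        return [HumanMessage(content=self.template.format(**kwargs))]


class CustomOutputParser(AgentOutputParser):
    """OutputParser: maps the raw LLM text to AgentAction or AgentFinish."""

    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        match = re.search(
            r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)",
            llm_output,
            re.DOTALL,
        )
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        return AgentAction(
            tool=match.group(1).strip(),
            tool_input=match.group(2).strip(" ").strip('"'),
            log=llm_output,
        )


search = SerpAPIWrapper()  # assumes SERPAPI_API_KEY is set
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # intermediate_steps is supplied by the executor, not by the user.
    input_variables=["input", "intermediate_steps"],
)

agent = LLMSingleActionAgent(
    llm_chain=LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt),  # the ChatModel
    output_parser=CustomOutputParser(),
    stop=["\nObservation:"],  # the stop sequence
    allowed_tools=[t.name for t in tools],
)
```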
The LLMAgent is used inside an AgentExecutor, which can largely be thought of as a loop (a short continuation of the sketch above, wiring up the executor, follows this list):
- Pass the user input and any previous steps to the agent (here, the LLMAgent)
- If the agent returns an AgentFinish, return it directly to the user
- If the agent returns an AgentAction, use it to call a tool and get an Observation, then pass both back to the agent on the next iteration, until an AgentFinish is produced
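Continuing the sketch above, this loop is what `AgentExecutor` provides; wiring the agent and tools into it and running a question looks like this:

```python
from langchain.agents import AgentExecutor

# The executor runs the loop described above: call the agent, execute the chosen
# tool, feed the Observation back, and stop once an AgentFinish is returned.
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
```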
Custom MultiAction Agent
This section goes through how to create your own custom agent.
An agent consists of two parts:
- Tools: the tools the agent has available to use.
- The agent class itself: this decides which action to take.
Below we walk through how to create a custom agent that can predict/take multiple steps at a time:
```python
from langchain.agents import Tool, AgentExecutor, BaseMultiActionAgent
from langchain import OpenAI, SerpAPIWrapper


def random_word(query: str) -> str:
    """A trivial custom tool that ignores its input and returns a fixed word."""
    print("\nNow I'm doing this!")
    return "foo"


search = SerpAPIWrapper()  # requires the SERPAPI_API_KEY environment variable
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    ),
    Tool(
        name="RandomWord",
        func=random_word,
        description="call this to get a random word.",
    ),
]
```
```python
from typing import List, Tuple, Any, Union
from langchain.schema import AgentAction, AgentFinish


class FakeAgent(BaseMultiActionAgent):
    """Fake Custom Agent."""

    @property
    def input_keys(self):
        return ["input"]

    def plan(
        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
    ) -> Union[List[AgentAction], AgentFinish]:
        """Given input, decide what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date,
                along with observations
            **kwargs: User inputs.

        Returns:
            Actions specifying what tools to use.
        """
        # On the first pass there are no intermediate steps yet, so return two
        # actions at once: both tools run before plan() is called again.
        if len(intermediate_steps) == 0:
            return [
                AgentAction(tool="Search", tool_input=kwargs["input"], log=""),
                AgentAction(tool="RandomWord", tool_input=kwargs["input"], log=""),
            ]
        else:
            return AgentFinish(return_values={"output": "bar"}, log="")

    async def aplan(
        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
    ) -> Union[List[AgentAction], AgentFinish]:
        """Given input, decide what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date,
                along with observations
            **kwargs: User inputs.

        Returns:
            Actions specifying what tools to use.
        """
        if len(intermediate_steps) == 0:
            return [
                AgentAction(tool="Search", tool_input=kwargs["input"], log=""),
                AgentAction(tool="RandomWord", tool_input=kwargs["input"], log=""),
            ]
        else:
            return AgentFinish(return_values={"output": "bar"}, log="")
```
```python
agent = FakeAgent()
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
```
Output (the first `plan()` call returns both actions, so both tools run; the second call then returns the `AgentFinish`, which is why the final result is `'bar'`):
```
> Entering new AgentExecutor chain...
The current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data.
Now I'm doing this!
foo
> Finished chain.
'bar'
```