A Concise LangChain Tutorial (10)

Table of contents for the "A Concise LangChain Tutorial" series

  1. A Concise LangChain Tutorial (1)
  2. A Concise LangChain Tutorial (2)
  3. A Concise LangChain Tutorial (3)
  4. A Concise LangChain Tutorial (4)
  5. A Concise LangChain Tutorial (5)
  6. A Concise LangChain Tutorial (6)
  7. A Concise LangChain Tutorial (7)
  8. A Concise LangChain Tutorial (8)
  9. A Concise LangChain Tutorial (9)

This article continues the coverage of LCEL, walking through a series of application examples.

RAG (Retrieval-Augmented Generation)

LCEL can be used to create retrieval-augmented generation chains, which combine retrieval and language-generation steps. Here is an example:

from operator import itemgetter

from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from langchain.vectorstores import FAISS

# Create a vector store and retriever
vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

# Define templates for prompts
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

model = ChatOpenAI()

# Create a retrieval-augmented generation chain
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

result = chain.invoke("where did harrison work?")
print(result)

In this example, the chain retrieves the relevant context from the vector store and generates an answer to the question based only on that context.
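The chain above takes a bare string as input (RunnablePassthrough forwards it as the question). If you prefer dict inputs, as the later examples in this article use, a minimal variant built with the already-imported itemgetter looks like this (a sketch; the behavior is assumed to be equivalent):

chain_with_dict = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
    }
    | prompt
    | model
    | StrOutputParser()
)
print(chain_with_dict.invoke({"question": "where did harrison work?"}))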

Conversational Retrieval Chain

You can easily add conversation history to your chains. Here is an example of a conversational retrieval chain:

from langchain.schema.runnable import RunnableMap
from langchain.schema import format_document

from langchain.prompts.prompt import PromptTemplate

# Helper: render the (human, ai) tuples of the chat history as plain text
def _format_chat_history(chat_history):
    buffer = ""
    for human, ai in chat_history:
        buffer += f"\nHuman: {human}\nAssistant: {ai}"
    return buffer

# Helper: join the retrieved documents into a single context string
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template("{page_content}")

def _combine_documents(
    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
    doc_strings = [format_document(doc, document_prompt) for doc in docs]
    return document_separator.join(doc_strings)

# Define templates for prompts
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""

CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)

template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
ANSWER_PROMPT = ChatPromptTemplate.from_template(template)

# Define input map and context
_inputs = RunnableMap(
    standalone_question=RunnablePassthrough.assign(
        chat_history=lambda x: _format_chat_history(x["chat_history"])
    )
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
)
_context = {
    "context": itemgetter("standalone_question") | retriever | _combine_documents,
    "question": lambda x: x["standalone_question"],
}
conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()

result = conversational_qa_chain.invoke(
    {
        "question": "where did harrison work?",
        "chat_history": [],
    }
)
print(result)

In this example, the chain first condenses the follow-up question into a standalone question, then retrieves context and answers it, allowing it to handle questions asked within a conversation.
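With a non-empty history, the condensing step rewrites the follow-up into a standalone question before retrieval. A minimal follow-up call might look like this (the history entries are (human, ai) tuples, matching what _format_chat_history above expects):

result = conversational_qa_chain.invoke(
    {
        "question": "where did he work?",
        "chat_history": [("Who is Harrison?", "Harrison worked at Kensho.")],
    }
)
print(result)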

Using Memory and Returning Source Documents

LCEL also supports memory and returning source documents. Here is how to use memory in a chain while returning the documents the answer was based on:

from operator import itemgetter
from langchain.memory import ConversationBufferMemory

# Create a memory instance
memory = ConversationBufferMemory(
    return_messages=True, output_key="answer", input_key="question"
)

# Define steps for the chain
loaded_memory = RunnablePassthrough.assign(
    chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("history"),
)

standalone_question = {
    "standalone_question": {
        "question": lambda x: x["question"],
        "chat_history": lambda x: _format_chat_history(x["chat_history"]),
    }
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
}

retrieved_documents = {
    "docs": itemgetter("standalone_question") | retriever,
    "question": lambda x: x["standalone_question"],
}

final_inputs = {
    "context": lambda x: _combine_documents(x["docs"]),
    "question": itemgetter("question"),
}

answer = {
    "answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(),
    "docs": itemgetter("docs"),
}

# Create the final chain by combining the steps
final_chain = loaded_memory | standalone_question | retrieved_documents | answer

inputs = {"question": "where did harrison work?"}
result = final_chain.invoke(inputs)
print(result)

In this example, memory stores and retrieves the conversation history, and the chain returns the source documents alongside the answer.
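Note that ConversationBufferMemory does not update itself when the chain runs; to let the next question build on this turn, save it explicitly (result["answer"] is the message produced by the answer step):

# The chain does not write to memory automatically; record the turn yourself
memory.save_context(inputs, {"answer": result["answer"].content})
memory.load_memory_variables({})  # now contains both the question and the answer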

Multiple Chains

You can string multiple chains together using Runnables. Here is an example:

from operator import itemgetter

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")
prompt2 = ChatPromptTemplate.from_template(
    "what country is the city {city} in? respond in {language}"
)

model = ChatOpenAI()

chain1 = prompt1 | model | StrOutputParser()

chain2 = (
    {"city": chain1, "language": itemgetter("language")}
    | prompt2
    | model
    | StrOutputParser()
)

result = chain2.invoke({"person": "obama", "language": "spanish"})
print(result)

In this example, two chains are composed to generate information about a city and its country in the specified language.
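When composing chains like this, it can help to invoke a sub-chain on its own to inspect the intermediate value that feeds the next step (the printed answer will vary from run to run):

# chain1's string output becomes the {city} variable in prompt2
print(chain1.invoke({"person": "obama"}))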

Branching and Merging

LCEL lets you split and merge chains using RunnableMaps. Here is a branching-and-merging example:

from operator import itemgetter

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

planner = (
    ChatPromptTemplate.from_template("Generate an argument about: {input}")
    | ChatOpenAI()
    | StrOutputParser()
    | {"base_response": RunnablePassthrough()}
)

arguments_for = (
    ChatPromptTemplate.from_template(
        "List the pros or positive aspects of {base_response}"
    )
    | ChatOpenAI()
    | StrOutputParser()
)
arguments_against = (
    ChatPromptTemplate.from_template(
        "List the cons or negative aspects of {base_response}"
    )
    | ChatOpenAI()
    | StrOutputParser()
)

final_responder = (
    ChatPromptTemplate.from_messages(
        [
            ("ai", "{original_response}"),
            ("human", "Pros:\n{results_1}\n\nCons:\n{results_2}"),
            ("system", "Generate a final response given the critique"),
        ]
    )
    | ChatOpenAI()
    | StrOutputParser()
)

chain = (
    planner
    | {
        "results_1": arguments_for,
        "results_2": arguments_against,
        "original_response": itemgetter("base_response"),
    }
    | final_responder
)

result = chain.invoke({"input": "scrum"})
print(result)

In this example, a branch-and-merge chain generates an argument, critiques its pros and cons in parallel branches, and merges the results into a final response.
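Because planner ends by wrapping its string output as {"base_response": ...}, each branch receives a dict it can format into its prompt, and itemgetter("base_response") can pull the original argument back out. You can confirm the intermediate shape by running the planner alone:

# The planner's output feeds both argument branches and the final merge
print(planner.invoke({"input": "scrum"}))  # -> {"base_response": "<generated argument>"}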

Writing Python Code with LCEL

One powerful application of the LangChain Expression Language (LCEL) is writing Python code to solve a user's problem. Here is an example:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain_experimental.utilities import PythonREPL

template = """Write some python code to solve the user's problem. Return only python code in Markdown format, e.g.:
```python
....
```"""
prompt = ChatPromptTemplate.from_messages([("system", template), ("human", "{input}")])

model = ChatOpenAI()

def _sanitize_output(text: str):
    # Keep only the code between the ```python and closing ``` markers
    _, after = text.split("```python")
    return after.split("```")[0]

chain = prompt | model | StrOutputParser() | _sanitize_output | PythonREPL().run

result = chain.invoke({"input": "what's 2 plus 2"})
print(result)

In this example, the model returns Python code in Markdown format in response to the user's input; _sanitize_output strips the Markdown fences, and the Python REPL then executes the extracted code, so the chain's output is the result of running that code.
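To see what the sanitizer does, here is the transformation on a toy model reply (an illustrative string, not real model output):

sample = "Sure!\n```python\nprint(2 + 2)\n```"
print(_sanitize_output(sample))  # -> "\nprint(2 + 2)\n" (only the code, fences stripped)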

Note that the Python REPL can execute arbitrary code, so use it with caution.

Adding Memory to a Chain

Memory is crucial in many conversational AI applications. Here is how to add memory to an arbitrary chain:

from operator import itemgetter
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful chatbot"),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)

memory = ConversationBufferMemory(return_messages=True)

# Initialize memory
memory.load_memory_variables({})

chain = (
    RunnablePassthrough.assign(
        history=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
    )
    | prompt
    | model
)

inputs = {"input": "hi, I'm Bob"}
response = chain.invoke(inputs)
print(response)

# Save the conversation in memory
memory.save_context(inputs, {"output": response.content})

# Load memory to see the conversation history
print(memory.load_memory_variables({}))

In this example, memory stores and retrieves the conversation history, allowing the chatbot to maintain context and respond appropriately.
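A second turn shows the memory at work: because the saved history is injected through the MessagesPlaceholder, the model can recall the name given in the first turn:

# The history placeholder now contains the first exchange
inputs = {"input": "what's my name?"}
response = chain.invoke(inputs)
print(response)  # the model should answer "Bob" from the stored history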

Using External Tools with Runnables

LCEL lets you integrate external tools into Runnables seamlessly. Here is an example using the DuckDuckGo search tool:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()

template = """Turn the following user input into a search query for a search engine:
{input}"""
prompt = ChatPromptTemplate.from_template(template)

model = ChatOpenAI()

chain = prompt | model | StrOutputParser() | search

search_result = chain.invoke({"input": "I'd like to figure out what games are tonight"})
print(search_result)

In this example, LCEL integrates the DuckDuckGo search tool into the chain, which turns the user's input into a search query and retrieves the results.

This flexibility makes it straightforward to incorporate a variety of external tools and services into your language-processing pipelines, extending their capabilities.

Adding Moderation to LLM Applications

To ensure an LLM application complies with content policies, you can integrate moderation checks into the chain. Here is how to add content moderation with LangChain:

from langchain.chains import OpenAIModerationChain
from langchain.llms import OpenAI
from langchain.prompts import ChatPromptTemplate

moderate = OpenAIModerationChain()

model = OpenAI()
prompt = ChatPromptTemplate.from_messages([("system", "repeat after me: {input}")])

chain = prompt | model

# Original response without moderation
response_without_moderation = chain.invoke({"input": "you are stupid"})
print(response_without_moderation)

moderated_chain = chain | moderate

# Response after moderation
response_after_moderation = moderated_chain.invoke({"input": "you are stupid"})
print(response_after_moderation)

In this example, OpenAIModerationChain adds content moderation to the response generated by the LLM. The moderation chain checks the response against OpenAI's content policy; if it finds a violation, it flags the response accordingly (by default, replacing the text with a policy notice).
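If you would rather have the chain raise an exception than return the policy notice, OpenAIModerationChain also supports a strict mode (the error flag exists in the langchain versions this series targets; verify against your installed version):

# Raises an error when the moderated text violates the policy
moderate_strict = OpenAIModerationChain(error=True)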

Routing by Semantic Similarity

LCEL lets you implement custom routing logic based on the semantic similarity of the user's input. Here is an example that chooses the chain's logic dynamically from the input:

from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
from langchain.utils.math import cosine_similarity

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.
Here is a question:
{query}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{query}"""

embeddings = OpenAIEmbeddings()
prompt_templates = [physics_template, math_template]
prompt_embeddings = embeddings.embed_documents(prompt_templates)

def prompt_router(input):
    query_embedding = embeddings.embed_query(input["query"])
    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]
    most_similar = prompt_templates[similarity.argmax()]
    print("Using MATH" if most_similar == math_template else "Using PHYSICS")
    return PromptTemplate.from_template(most_similar)

chain = (
    {"query": RunnablePassthrough()}
    | RunnableLambda(prompt_router)
    | ChatOpenAI()
    | StrOutputParser()
)

# Pass a plain string; the {"query": RunnablePassthrough()} map wraps it into a dict
print(chain.invoke("What's a black hole"))
print(chain.invoke("What's a path integral"))

In this example, the prompt_router function computes the cosine similarity between the user's input and the predefined physics and math prompt templates. Based on the similarity scores, the chain dynamically selects the more relevant template, so the model answers the question with the appropriate persona.

Using Agents with Runnables

LangChain lets you create agents by combining Runnables, prompts, models, and tools. Here is an example that builds an agent and runs it:

from langchain.agents import XMLAgent, tool, AgentExecutor
from langchain.chat_models import ChatAnthropic

model = ChatAnthropic(model="claude-2")

@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"

tool_list = [search]

# Get prompt to use
prompt = XMLAgent.get_default_prompt()

# Logic for going from intermediate steps to a string to pass into the model
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log

# Logic for converting tools to a string to go in the prompt
def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])

agent = (
    {
        "question": lambda x: x["question"],
        "intermediate_steps": lambda x: convert_intermediate_steps(
            x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgent.get_default_output_parser()
)

agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)

result = agent_executor.invoke({"question": "What's the weather in New York?"})
print(result)

In this example, an agent is created by combining a model, tools, a prompt, and custom logic for converting intermediate steps and tool descriptions into strings. The agent is then run through an AgentExecutor to answer the user's query.

Querying a SQL Database

You can use LangChain to query a SQL database, generating the SQL statements from the user's question. Here is an example:

from langchain.prompts import ChatPromptTemplate

template = """Based on the table schema below, write a SQL query that would answer the user's question:
{schema}
Question: {question}
SQL Query:"""

prompt = ChatPromptTemplate.from_template(template)

from langchain.utilities import SQLDatabase

# Initialize the database (you'll need the Chinook sample DB for this example)
db = SQLDatabase.from_uri("sqlite:///./Chinook.db")

def get_schema(_):
    return db.get_table_info()

def run_query(query):
    return db.run(query)

from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

model = ChatOpenAI()

sql_response = (
    RunnablePassthrough.assign(schema=get_schema)
    | prompt
    | model.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)

result = sql_response.invoke({"question": "How many employees are there?"})
print(result)

template = """Based on the table schema below, question, SQL query, and SQL response, write a natural language response:
{schema}

Question: {question}
SQL Query: {query}
SQL Response: {response}"""
prompt_response = ChatPromptTemplate.from_template(template)

full_chain = (
    RunnablePassthrough.assign(query=sql_response)
    | RunnablePassthrough.assign(
        schema=get_schema,
        response=lambda x: db.run(x["query"]),
    )
    | prompt_response
    | model
)

response = full_chain.invoke({"question": "How many employees are there?"})
print(response)

In this example, LangChain generates a SQL query from the user's question, runs it against the database, and phrases the result as a natural-language answer, enabling natural-language interaction with the database.
