Elasticsearch: RAG with OpenAI and LangChain - Retrieval Augmented Generation (Part 4)

This article shows how to pass Elasticsearch search results to a large language model via OpenAI and LangChain to improve document retrieval and answer quality. Using the RAG approach, it combines Elasticsearch's search capability with model-assisted generation and demonstrates two usage patterns: with a Retriever and without a Retriever.


This blog is a continuation of the previous articles in this series. In this post, we will learn how to pass the results retrieved from Elasticsearch to a large language model to get better answers.

If you have not set up your environment yet, please refer to the first article in this series for detailed installation instructions.

For documents containing large amounts of text, we can use the following architecture:
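The flow can be sketched end to end in plain Python, with the retrieval and generation steps stubbed out. This is an illustration only: `split`, `retrieve`, and `answer` are hypothetical stand-ins, not real Elasticsearch or OpenAI calls.

```python
# A minimal, stubbed sketch of the RAG flow for large documents:
# split -> index -> retrieve -> build prompt -> generate.

def split(text, chunk_size=50):
    # break a long document into fixed-size chunks
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def retrieve(index, question, k=2):
    # stand-in for a vector search: naive keyword-overlap scoring
    scored = sorted(index, key=lambda c: -sum(w in c for w in question.split()))
    return scored[:k]

def answer(question, index):
    context = "\n".join(retrieve(index, question))
    # a real system would send this prompt to an LLM; here we just return it
    return (f"Answer the question based only on the following context:\n"
            f"{context}\n\nQuestion: {question}")

index = split("Employees may work from home two days per week. "
              "Vacation must be approved by a manager.", chunk_size=60)
prompt = answer("work from home policy", index)
print(prompt)
```

The rest of this post replaces each stub with the real component: `CharacterTextSplitter` for splitting, `ElasticsearchStore` for indexing and retrieval, and `ChatOpenAI` for generation.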

Create the application and demonstrate it

Install packages

#!pip3 install langchain

Import packages

from dotenv import load_dotenv
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticsearchStore
from langchain.text_splitter import CharacterTextSplitter
from langchain.prompts import ChatPromptTemplate
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.runnable import RunnableLambda
from langchain.schema import HumanMessage
from urllib.request import urlopen
import os, json
 
load_dotenv()
 
openai_api_key=os.getenv('OPENAI_API_KEY')
elastic_user=os.getenv('ES_USER')
elastic_password=os.getenv('ES_PASSWORD')
elastic_endpoint=os.getenv("ES_ENDPOINT")
elastic_index_name='langchain-rag'

Add documents and split them into passages

with open('workplace-docs.json') as f:
   workplace_docs = json.load(f)
 
print(f"Successfully loaded {len(workplace_docs)} documents")

metadata = []
content = []
 
for doc in workplace_docs:
  content.append(doc["content"])
  metadata.append({
      "name": doc["name"],
      "summary": doc["summary"],
      "rolePermissions":doc["rolePermissions"]
  })
 
text_splitter = CharacterTextSplitter(chunk_size=50, chunk_overlap=0)
docs = text_splitter.create_documents(content, metadatas=metadata)
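To see what `chunk_size` and `chunk_overlap` control, here is a simplified fixed-width splitter. It is an illustration only (`naive_split` is a hypothetical helper); `CharacterTextSplitter` actually splits on a separator and then merges pieces up to `chunk_size`.

```python
def naive_split(text, chunk_size=50, chunk_overlap=0):
    # slide a window of chunk_size characters; each window starts
    # chunk_overlap characters before the previous one ended
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = naive_split("x" * 120, chunk_size=50, chunk_overlap=10)
print([len(c) for c in chunks])  # three windows: 50, 50, 40
```

A small `chunk_size` like the 50 used above keeps each indexed passage focused, at the cost of producing many documents.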

Index Documents using ELSER - SparseVectorRetrievalStrategy()

from elasticsearch import Elasticsearch

url = f"https://{elastic_user}:{elastic_password}@{elastic_endpoint}:9200"
connection = Elasticsearch(url, ca_certs = "./http_ca.crt", verify_certs = True)

es = ElasticsearchStore.from_documents(
    docs,
    es_url = url,
    es_connection = connection,
    es_user=elastic_user,
    es_password=elastic_password,
    index_name=elastic_index_name,
    strategy=ElasticsearchStore.SparseVectorRetrievalStrategy()
)

If you have not configured ELSER yet, please refer to the previous article "Elasticsearch: RAG with OpenAI and LangChain - Retrieval Augmented Generation (Part 3)".

After running the command above, we can inspect the index in Kibana:

Show the results

def showResults(output):
  print("Total results: ", len(output))
  for index in range(len(output)):
    print(output[index])

r = es.similarity_search("work from home policy")
showResults(r)

RAG with Elasticsearch - Method 1 (Using Retriever)

retriever = es.as_retriever(search_kwargs={"k": 4})

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

chain = (
    {"context": retriever, "question": RunnablePassthrough()} 
    | prompt 
    | ChatOpenAI() 
    | StrOutputParser()
)

chain.invoke("vacation policy")
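The dict at the head of the chain fans the input question out to both keys: the retriever receives the question and returns documents for `{context}`, while `RunnablePassthrough()` forwards the question unchanged to `{question}`. A plain-Python stand-in of that fan-out step (`fake_retriever`, `fan_out`, and `build_prompt` are illustrative, not LangChain APIs):

```python
def fake_retriever(question):
    # stand-in for es.as_retriever(): returns canned "documents"
    return ["Employees get 20 vacation days per year."]

def fan_out(question):
    # mirrors {"context": retriever, "question": RunnablePassthrough()}
    return {"context": "\n".join(fake_retriever(question)),
            "question": question}

def build_prompt(inputs):
    # mirrors ChatPromptTemplate.from_template(template)
    return ("Answer the question based only on the following context:\n"
            f"{inputs['context']}\n\nQuestion: {inputs['question']}\n")

prompt = build_prompt(fan_out("vacation policy"))
print(prompt)
```

In the real chain, this prompt is then piped into `ChatOpenAI()`, and `StrOutputParser()` extracts the plain-text answer from the model's message.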

RAG with Elasticsearch - Method 2 (Without Retriever)

Add Context

def add_context(question: str):
    r = es.similarity_search(question)
    
    context = "\n".join(x.page_content for x in r)
    
    return context
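`add_context` flattens the search hits into one newline-joined string. With stand-in documents (`SimpleNamespace` mimics the `.page_content` attribute of LangChain's `Document`; the texts are made up):

```python
from types import SimpleNamespace

hits = [SimpleNamespace(page_content="Remote work is allowed."),
        SimpleNamespace(page_content="Managers approve vacation.")]

# same join as inside add_context
context = "\n".join(x.page_content for x in hits)
print(context)
```

This string is what gets substituted for `{context}` in the prompt template below.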

Chain

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""

prompt = ChatPromptTemplate.from_template(template)

chain = (
    {"context": RunnableLambda(add_context), "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

chain.invoke("canada employees guidelines")

Compare responses with and without RAG

q = input("Ask Question: ")

# Question to OpenAI

chat = ChatOpenAI()

messages = [
    HumanMessage(
        content=q
    )
]

gpt_res = chat(messages)

# Question with RAG

gpt_rag_res = chain.invoke(q)


# Responses

s = f"""
ChatGPT Response:

{gpt_res.content}

ChatGPT with RAG Response:

{gpt_rag_res}
"""

print(s)

The Jupyter notebook for the code above can be downloaded from https://github.com/liu-xiao-guo/semantic_search_es/blob/main/RAG-langchain-elasticsearch.ipynb.
