Prompts
LangChain provides several utilities to help manage prompts for language models, including prompts for chat models.
Prompt Templates
A PromptTemplate lets you generate prompts from a template. This is useful when you want to reuse the same prompt outline in multiple places while changing certain values.
Prompt templates support both LLMs and chat models, as shown below:
import {
ChatPromptTemplate,
HumanMessagePromptTemplate,
PromptTemplate,
SystemMessagePromptTemplate,
} from "langchain/prompts";
export const run = async () => {
// A `PromptTemplate` consists of a template string and a list of input variables.
const template = "What is a good name for a company that makes {product}?";
const promptA = new PromptTemplate({ template, inputVariables: ["product"] });
// We can use the `format` method to format the template with the given input values.
const responseA = await promptA.format({ product: "colorful socks" });
console.log({ responseA });
/*
{
responseA: 'What is a good name for a company that makes colorful socks?'
}
*/
// We can also use the `fromTemplate` method to create a `PromptTemplate` object.
const promptB = PromptTemplate.fromTemplate(
"What is a good name for a company that makes {product}?"
);
const responseB = await promptB.format({ product: "colorful socks" });
console.log({ responseB });
/*
{
responseB: 'What is a good name for a company that makes colorful socks?'
}
*/
// For chat models, we provide a `ChatPromptTemplate` class that can be used to format chat prompts.
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
SystemMessagePromptTemplate.fromTemplate(
"You are a helpful assistant that translates {input_language} to {output_language}."
),
HumanMessagePromptTemplate.fromTemplate("{text}"),
]);
// The result can be formatted as a string using the `format` method.
const responseC = await chatPrompt.format({
input_language: "English",
output_language: "French",
text: "I love programming.",
});
console.log({ responseC });
/*
{
responseC: '[{"text":"You are a helpful assistant that translates English to French."},{"text":"I love programming."}]'
}
*/
// The result can also be formatted as a list of `ChatMessage` objects by returning a `PromptValue` object and calling the `toChatMessages` method.
// More on this below.
const responseD = await chatPrompt.formatPromptValue({
input_language: "English",
output_language: "French",
text: "I love programming.",
});
const messages = responseD.toChatMessages();
console.log({ messages });
/*
{
messages: [
SystemChatMessage {
text: 'You are a helpful assistant that translates English to French.'
},
HumanChatMessage { text: 'I love programming.' }
]
}
*/
};
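Formatted prompts are typically passed straight to a model. The sketch below uses the same OpenAI and ChatOpenAI classes that appear in later examples and assumes an OpenAI API key is set in the environment; the runWithModels function name is just for illustration. It shows the string form going to an LLM and the message form going to a chat model:
import { OpenAI } from "langchain/llms/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";
import { HumanChatMessage } from "langchain/schema";
export const runWithModels = async () => {
// LLMs take the formatted prompt as a plain string...
const prompt = PromptTemplate.fromTemplate(
"What is a good name for a company that makes {product}?"
);
const llm = new OpenAI({ temperature: 0 });
const llmResult = await llm.call(
await prompt.format({ product: "colorful socks" })
);
console.log({ llmResult });
// ...while chat models take a list of chat messages.
const chat = new ChatOpenAI({ temperature: 0 });
const chatResult = await chat.call([
new HumanChatMessage("Say hello in French."),
]);
console.log({ chatResult });
};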
Additional Functionality: Prompt Templates
We offer a number of additional features for prompt templates, as shown below:
Prompt Values
A PromptValue is the object returned by formatPromptValue on a PromptTemplate. It can be converted to a string or to a list of ChatMessage objects.
import {
ChatPromptTemplate,
HumanMessagePromptTemplate,
PromptTemplate,
SystemMessagePromptTemplate,
} from "langchain/prompts";
export const run = async () => {
const template = "What is a good name for a company that makes {product}?";
const promptA = new PromptTemplate({ template, inputVariables: ["product"] });
// The `formatPromptValue` method returns a `PromptValue` object that can be used to format the prompt as a string or a list of `ChatMessage` objects.
const responseA = await promptA.formatPromptValue({
product: "colorful socks",
});
const responseAString = responseA.toString();
console.log({ responseAString });
/*
{
responseAString: 'What is a good name for a company that makes colorful socks?'
}
*/
const responseAMessages = responseA.toChatMessages();
console.log({ responseAMessages });
/*
{
responseAMessages: [
HumanChatMessage {
text: 'What is a good name for a company that makes colorful socks?'
}
]
}
*/
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
SystemMessagePromptTemplate.fromTemplate(
"You are a helpful assistant that translates {input_language} to {output_language}."
),
HumanMessagePromptTemplate.fromTemplate("{text}"),
]);
// `formatPromptValue` also works with `ChatPromptTemplate`.
const responseB = await chatPrompt.formatPromptValue({
input_language: "English",
output_language: "French",
text: "I love programming.",
});
const responseBString = responseB.toString();
console.log({ responseBString });
/*
{
responseBString: '[{"text":"You are a helpful assistant that translates English to French."},{"text":"I love programming."}]'
}
*/
const responseBMessages = responseB.toChatMessages();
console.log({ responseBMessages });
/*
{
responseBMessages: [
SystemChatMessage {
text: 'You are a helpful assistant that translates English to French.'
},
HumanChatMessage { text: 'I love programming.' }
]
}
*/
};
Partial Values
Like other methods, it can make sense to "partial" a prompt template, for example by passing in a subset of the required values in order to create a new prompt template that expects only the remaining subset of values.
LangChain supports this in two ways:
- Partial formatting with string values.
- Partial formatting with functions that return string values.
These two different ways support different use cases. In the examples below, we go over the motivation for both use cases as well as how to do it in LangChain.
import { PromptTemplate } from "langchain/prompts";
export const run = async () => {
// The `partial` method returns a new `PromptTemplate` object that can be used to format the prompt with only some of the input variables.
const promptA = new PromptTemplate({
template: "{foo}{bar}",
inputVariables: ["foo", "bar"],
});
const partialPromptA = await promptA.partial({ foo: "foo" });
console.log(await partialPromptA.format({ bar: "bar" }));
// foobar
// You can also explicitly specify the partial variables when creating the `PromptTemplate` object.
const promptB = new PromptTemplate({
template: "{foo}{bar}",
inputVariables: ["foo"],
partialVariables: { bar: "bar" },
});
console.log(await promptB.format({ foo: "foo" }));
// foobar
// You can also use partial formatting with function inputs instead of string inputs.
const promptC = new PromptTemplate({
template: "Tell me a {adjective} joke about the day {date}",
inputVariables: ["adjective", "date"],
});
const partialPromptC = await promptC.partial({
date: () => new Date().toLocaleDateString(),
});
console.log(await partialPromptC.format({ adjective: "funny" }));
// Tell me a funny joke about the day 3/22/2023
const promptD = new PromptTemplate({
template: "Tell me a {adjective} joke about the day {date}",
inputVariables: ["adjective"],
partialVariables: { date: () => new Date().toLocaleDateString() },
});
console.log(await promptD.format({ adjective: "funny" }));
// Tell me a funny joke about the day 3/22/2023
};
Few-Shot Prompt Templates
A few-shot prompt template is a prompt template that you can build with examples.
import { FewShotPromptTemplate, PromptTemplate } from "langchain/prompts";
export const run = async () => {
// First, create a list of few-shot examples.
const examples = [
{ word: "happy", antonym: "sad" },
{ word: "tall", antonym: "short" },
];
// Next, we specify the template to format the examples we have provided.
const exampleFormatterTemplate = "Word: {word}\nAntonym: {antonym}\n";
const examplePrompt = new PromptTemplate({
inputVariables: ["word", "antonym"],
template: exampleFormatterTemplate,
});
// Finally, we create the `FewShotPromptTemplate`
const fewShotPrompt = new FewShotPromptTemplate({
/* These are the examples we want to insert into the prompt. */
examples,
/* This is how we want to format the examples when we insert them into the prompt. */
examplePrompt,
/* The prefix is some text that goes before the examples in the prompt. Usually, this consists of instructions. */
prefix: "Give the antonym of every input",
/* The suffix is some text that goes after the examples in the prompt. Usually, this is where the user input will go */
suffix: "Word: {input}\nAntonym:",
/* The input variables are the variables that the overall prompt expects. */
inputVariables: ["input"],
/* The example_separator is the string we will use to join the prefix, examples, and suffix together with. */
exampleSeparator: "\n\n",
/* The template format is the formatting method to use for the template. Should usually be f-string. */
templateFormat: "f-string",
});
// We can now generate a prompt using the `format` method.
console.log(await fewShotPrompt.format({ input: "big" }));
/*
Give the antonym of every input
Word: happy
Antonym: sad
Word: tall
Antonym: short
Word: big
Antonym:
*/
};
Output Parsers
Language models output text. But many times you may want to get more structured information back than just text. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
- getFormatInstructions(): string. A method that returns a string containing instructions for how the output of a language model should be formatted.
- parse(raw: string): any. A method that takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
- parseWithPrompt(text: string, prompt: BasePromptValue): any. A method that takes in a string (assumed to be the response from a language model) and the formatted prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in case the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
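Taken together, a parser's surface looks roughly like the following TypeScript sketch. It is for illustration only and is not LangChain's actual base class; the built-in parsers below implement this surface for you.
// Structural sketch of the methods described above (illustration only).
interface OutputParserLike {
// Returns instructions describing how the language model should format its output.
getFormatInstructions(): string;
// Parses the raw model response into some structure.
parse(raw: string): any;
// Optional: also receives the formatted prompt (a BasePromptValue in LangChain),
// e.g. so the parser can retry or fix a bad output.
parseWithPrompt?(text: string, prompt: unknown): any;
}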
Structured Output Parser
This output parser can be used when you want to return multiple fields.
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";
// With a `StructuredOutputParser` we can define a schema for the output.
const parser = StructuredOutputParser.fromNamesAndDescriptions({
answer: "answer to the user's question",
source: "source used to answer the user's question, should be a website.",
});
const formatInstructions = parser.getFormatInstructions();
const prompt = new PromptTemplate({
template:
"Answer the users question as best as possible.\n{format_instructions}\n{question}",
inputVariables: ["question"],
partialVariables: { format_instructions: formatInstructions },
});
const model = new OpenAI({ temperature: 0 });
const input = await prompt.format({
question: "What is the capital of France?",
});
const response = await model.call(input);
console.log(input);
/*
Answer the users question as best as possible.
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
`
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
`
What is the capital of France?
*/
console.log(response);
/*
{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}
*/
console.log(await parser.parse(response));
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }
Structured Output Parser with Zod Schema
This output parser can also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs to be parseable from a JSON string, so for example z.date() is not allowed, but z.coerce.date() is.
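For instance, a field holding a date can be declared with coercion so that the string value in the model's JSON output is converted on parse. This is a minimal sketch; the eventSchema name and publishedAt field are just placeholders:
import { z } from "zod";
// `z.coerce.date()` converts the string value found in the model's JSON
// output into a Date; plain `z.date()` would reject it.
const eventSchema = z.object({
publishedAt: z.coerce.date().describe("publication date of the source"),
});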
import { z } from "zod";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";
// We can use zod to define a schema for the output using the `fromZodSchema` method of `StructuredOutputParser`.
const parser = StructuredOutputParser.fromZodSchema(
z.object({
answer: z.string().describe("answer to the user's question"),
sources: z
.array(z.string())
.describe("sources used to answer the question, should be websites."),
})
);
const formatInstructions = parser.getFormatInstructions();
const prompt = new PromptTemplate({
template:
"Answer the users question as best as possible.\n{format_instructions}\n{question}",
inputVariables: ["question"],
partialVariables: { format_instructions: formatInstructions },
});
const model = new OpenAI({ temperature: 0 });
const input = await prompt.format({
question: "What is the capital of France?",
});
const response = await model.call(input);
console.log(input);
/*
Answer the users question as best as possible.
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
`
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
`
What is the capital of France?
*/
console.log(response);
/*
{"answer": "Paris", "sources": ["https://en.wikipedia.org/wiki/Paris"]}
*/
console.log(await parser.parse(response));
/*
{ answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }
*/
Output Fixing Parser
This output parser wraps another output parser and, in the event that the first one fails, calls out to another LLM to fix any errors.
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
StructuredOutputParser,
OutputFixingParser,
} from "langchain/output_parsers";
export const run = async () => {
const parser = StructuredOutputParser.fromZodSchema(
z.object({
answer: z.string().describe("answer to the user's question"),
sources: z
.array(z.string())
.describe("sources used to answer the question, should be websites."),
})
);
/** This is a bad output because sources is a string, not a list */
const badOutput = `\`\`\`json
{
"answer": "foo",
"sources": "foo.com"
}
\`\`\``;
try {
await parser.parse(badOutput);
} catch (e) {
console.log("Failed to parse bad output: ", e);
/*
Failed to parse bad output: OutputParserException [Error]: Failed to parse. Text: ```json
{
"answer": "foo",
"sources": "foo.com"
}
```. Error: [
{
"code": "invalid_type",
"expected": "array",
"received": "string",
"path": [
"sources"
],
"message": "Expected array, received string"
}
]
at StructuredOutputParser.parse (/Users/ankushgola/Code/langchainjs/langchain/src/output_parsers/structured.ts:71:13)
at run (/Users/ankushgola/Code/langchainjs/examples/src/prompts/fix_parser.ts:25:18)
at <anonymous> (/Users/ankushgola/Code/langchainjs/examples/src/index.ts:33:22)
*/
}
const fixParser = OutputFixingParser.fromLLM(
new ChatOpenAI({ temperature: 0 }),
parser
);
const output = await fixParser.parse(badOutput);
console.log("Fixed output: ", output);
// Fixed output: { answer: 'foo', sources: [ 'foo.com' ] }
};
Comma-Separated List Parser
This output parser can be used when you want to return a list of comma-separated items.
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { CommaSeparatedListOutputParser } from "langchain/output_parsers";
export const run = async () => {
// With a `CommaSeparatedListOutputParser`, we can parse a comma separated list.
const parser = new CommaSeparatedListOutputParser();
const formatInstructions = parser.getFormatInstructions();
const prompt = new PromptTemplate({
template: "List five {subject}.\n{format_instructions}",
inputVariables: ["subject"],
partialVariables: { format_instructions: formatInstructions },
});
const model = new OpenAI({ temperature: 0 });
const input = await prompt.format({ subject: "ice cream flavors" });
const response = await model.call(input);
console.log(input);
/*
List five ice cream flavors.
Your response should be a list of comma separated values, eg: `foo, bar, baz`
*/
console.log(response);
// Vanilla, Chocolate, Strawberry, Mint Chocolate Chip, Cookies and Cream
console.log(await parser.parse(response));
/*
[
'Vanilla',
'Chocolate',
'Strawberry',
'Mint Chocolate Chip',
'Cookies and Cream'
]
*/
};
Custom List Parser
This output parser can be used when you want to return a list of items with a specific length and separator.
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { CustomListOutputParser } from "langchain/output_parsers";
// With a `CustomListOutputParser`, we can parse a list with a specific length and separator.
const parser = new CustomListOutputParser({ length: 3, separator: "\n" });
const formatInstructions = parser.getFormatInstructions();
const prompt = new PromptTemplate({
template: "Provide a list of {subject}.\n{format_instructions}",
inputVariables: ["subject"],
partialVariables: { format_instructions: formatInstructions },
});
const model = new OpenAI({ temperature: 0 });
const input = await prompt.format({
subject: "great fiction books (book, author)",
});
const response = await model.call(input);
console.log(input);
/*
Provide a list of great fiction books (book, author).
Your response should be a list of 3 items separated by "\n" (eg: `foo\n bar\n baz`)
*/
console.log(response);
/*
The Catcher in the Rye, J.D. Salinger
To Kill a Mockingbird, Harper Lee
The Great Gatsby, F. Scott Fitzgerald
*/
console.log(await parser.parse(response));
/*
[
'The Catcher in the Rye, J.D. Salinger',
'To Kill a Mockingbird, Harper Lee',
'The Great Gatsby, F. Scott Fitzgerald'
]
*/
Combining Output Parsers
Output parsers can be combined using CombiningOutputParser. This output parser takes in a list of output parsers and will ask for (and parse) a combined output that contains the fields of all of them.
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import {
StructuredOutputParser,
RegexParser,
CombiningOutputParser,
} from "langchain/output_parsers";
const answerParser = StructuredOutputParser.fromNamesAndDescriptions({
answer: "answer to the user's question",
source: "source used to answer the user's question, should be a website.",
});
const confidenceParser = new RegexParser(
/Confidence: (A|B|C), Explanation: (.*)/,
["confidence", "explanation"],
"noConfidence"
);
const parser = new CombiningOutputParser(answerParser, confidenceParser);
const formatInstructions = parser.getFormatInstructions();
const prompt = new PromptTemplate({
template:
"Answer the users question as best as possible.\n{format_instructions}\n{question}",
inputVariables: ["question"],
partialVariables: { format_instructions: formatInstructions },
});
const model = new OpenAI({ temperature: 0 });
const input = await prompt.format({
question: "What is the capital of France?",
});
const response = await model.call(input);
console.log(input);
/*
Answer the users question as best as possible.
Return the following outputs, each formatted as described below:
Output 1:
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
`
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
`
Output 2:
Your response should match the following regex: /Confidence: (A|B|C), Explanation: (.*)/
What is the capital of France?
*/
console.log(response);
/*
Output 1:
{"answer":"Paris","source":"https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"}
Output 2:
Confidence: A, Explanation: The capital of France is Paris.
*/
console.log(await parser.parse(response));
/*
{
answer: 'Paris',
source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html',
confidence: 'A',
explanation: 'The capital of France is Paris.'
}
*/
Example Selectors
If you have a large number of examples, you may need to programmatically select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so. The base interface is defined below.
class BaseExampleSelector {
addExample(example: Example): Promise<void | string>;
selectExamples(input_variables: Example): Promise<Example[]>;
}
It needs to expose a selectExamples method, which takes in the input variables and returns a list of examples, and an addExample method, which saves an example for later selection. It is up to each specific implementation to decide how those examples are saved and selected; a minimal structural sketch is shown below, followed by the built-in selectors.
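As an illustration only, a selector could keep its examples in memory and always return the first k of them. The FirstKExampleSelector class below is hypothetical and not part of LangChain; a real custom selector would extend the library's base class rather than stand alone like this:
// Minimal structural sketch of the interface above (illustration only).
type Example = Record<string, string>;
class FirstKExampleSelector {
private examples: Example[] = [];
constructor(private k: number) {}
// Save an example for later selection.
async addExample(example: Example): Promise<void> {
this.examples.push(example);
}
// Ignore the input variables and simply return the first `k` examples.
async selectExamples(_inputVariables: Example): Promise<Example[]> {
return this.examples.slice(0, this.k);
}
}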
Select by Length
This ExampleSelector selects which examples to use based on length. It is useful when you are worried that a constructed prompt will exceed the length of the context window. For longer inputs it will select fewer examples to include, while for shorter inputs it will select more.
import {
LengthBasedExampleSelector,
PromptTemplate,
FewShotPromptTemplate,
} from "langchain/prompts";
export async function run() {
// Create a prompt template that will be used to format the examples.
const examplePrompt = new PromptTemplate({
inputVariables: ["input", "output"],
template: "Input: {input}\nOutput: {output}",
});
// Create a LengthBasedExampleSelector that will be used to select the examples.
const exampleSelector = await LengthBasedExampleSelector.fromExamples(
[
{ input: "happy", output: "sad" },
{ input: "tall", output: "short" },
{ input: "energetic", output: "lethargic" },
{ input: "sunny", output: "gloomy" },
{ input: "windy", output: "calm" },
],
{
examplePrompt,
maxLength: 25,
}
);
// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
// We provide an ExampleSelector instead of examples.
exampleSelector,
examplePrompt,
prefix: "Give the antonym of every input",
suffix: "Input: {adjective}\nOutput:",
inputVariables: ["adjective"],
});
// An example with small input, so it selects all examples.
console.log(await dynamicPrompt.format({ adjective: "big" }));
/*
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output:
*/
// An example with long input, so it selects only one example.
const longString =
"big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else";
console.log(await dynamicPrompt.format({ adjective: longString }));
/*
Give the antonym of every input
Input: happy
Output: sad
Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
*/
}
Select by Similarity
The SemanticSimilarityExampleSelector selects examples based on which ones are most similar to the input. It does this by finding the examples whose embeddings have the greatest cosine similarity with the input.
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import {
SemanticSimilarityExampleSelector,
PromptTemplate,
FewShotPromptTemplate,
} from "langchain/prompts";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
export async function run() {
// Create a prompt template that will be used to format the examples.
const examplePrompt = new PromptTemplate({
inputVariables: ["input", "output"],
template: "Input: {input}\nOutput: {output}",
});
// Create a SemanticSimilarityExampleSelector that will be used to select the examples.
const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
[
{ input: "happy", output: "sad" },
{ input: "tall", output: "short" },
{ input: "energetic", output: "lethargic" },
{ input: "sunny", output: "gloomy" },
{ input: "windy", output: "calm" },
],
new OpenAIEmbeddings(),
HNSWLib,
{ k: 1 }
);
// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
// We provide an ExampleSelector instead of examples.
exampleSelector,
examplePrompt,
prefix: "Give the antonym of every input",
suffix: "Input: {adjective}\nOutput:",
inputVariables: ["adjective"],
});
// Input is about the weather, so should select eg. the sunny/gloomy example
console.log(await dynamicPrompt.format({ adjective: "rainy" }));
/*
Give the antonym of every input
Input: sunny
Output: gloomy
Input: rainy
Output:
*/
// Input is a measurement, so should select the tall/short example
console.log(await dynamicPrompt.format({ adjective: "large" }));
/*
Give the antonym of every input
Input: tall
Output: short
Input: large
Output:
*/
}