LangChain Reference
Module · Since v1.1

testing

import { ... } from "@langchain/core/testing";

Functions

function fakeModel → FakeBuiltModel
function toContainToolCall → ExpectationResult
function toHaveBeenInterrupted → ExpectationResult
function toHaveStructuredResponse → ExpectationResult
function toHaveToolCallCount → ExpectationResult
function toHaveToolCalls → ExpectationResult
function toHaveToolMessages → ExpectationResult

Classes

class FakeBuiltModel

Interfaces

interface LangChainMatchers

Variables

variable langchainMatchers: __type
variable toBeAIMessage: (this: ExpectExtendThis, received: unknown, expected?: string | Record<string, unknown>)
variable toBeHumanMessage: (this: ExpectExtendThis, received: unknown, expected?: string | Record<string, unknown>)
variable toBeSystemMessage: (this: ExpectExtendThis, received: unknown, expected?: string | Record<string, unknown>)
variable toBeToolMessage: (this: ExpectExtendThis, received: unknown, expected?: string | Record<string, unknown>)

Creates a new FakeBuiltModel for testing.

Returns a chainable builder — queue responses, then pass the model anywhere a chat model is expected. Responses are consumed in FIFO order, one per invoke() call.

API summary

| Method | Description |
| --- | --- |
| `fakeModel()` | Creates a new fake chat model. Returns a chainable builder. |
| `.respond(message)` | Queue an `AIMessage` (or any `BaseMessage`) to return on the next invocation. |
| `.respond(error)` | Queue an `Error` to throw on the next invocation. |
| `.respond(factory)` | Queue a function `(messages) => BaseMessage \| Error` for dynamic responses. |
| `.respondWithTools(toolCalls)` | Shorthand for `.respond()` with tool calls. Each entry needs `name` and `args`; `id` is optional. |
| `.alwaysThrow(error)` | Make every invocation throw this error, regardless of the queue. |
| `.structuredResponse(value)` | Set the value returned by `.withStructuredOutput()`. |
| `.bindTools(tools)` | Bind tools to the model. Returns a `RunnableBinding` that shares the response queue and call recording. |
| `.withStructuredOutput(schema)` | Returns a runnable that produces the `.structuredResponse()` value. |
| `.calls` | Read-only array of `{ messages, options }` for every invocation. |
| `.callCount` | Number of times the model has been invoked. |

A fake chat model for testing, created via fakeModel.

Queue responses with .respond() and .respondWithTools(), then pass the instance directly wherever a chat model is expected. Responses are consumed in first-in-first-out order — one per invoke() call. When all queued responses are consumed, further invocations throw.

All matcher functions bundled for convenient use with expect.extend().