LangChain Reference
langgraph.store.postgres.aio
Module (since v1.0)

    aio

Attributes

• PLACEHOLDER
• logger

Classes

• BasePostgresStore
• PoolConfig
• PostgresIndexConfig
• Row
• AsyncPostgresStore

PoolConfig

Connection pool settings for PostgreSQL connections. Controls connection lifecycle and resource utilization:

• Small pools (1-5) suit low-concurrency workloads
• Larger pools handle concurrent requests but consume more resources
• Setting max_size prevents resource exhaustion under load
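The tradeoff above can be made concrete with a plain-dict sketch. The `min_size`/`max_size` field names come from the pooling example later on this page; treat the specific values here as illustrative only:

```python
# Sketch only: a small pool for low-concurrency workloads, with max_size
# capped to bound resource use under load.
low_concurrency_pool = {"min_size": 1, "max_size": 5}

# A larger pool for handling concurrent requests, at higher resource cost.
high_concurrency_pool = {"min_size": 5, "max_size": 20}
```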

PostgresIndexConfig

Configuration for vector embeddings in a PostgreSQL store, with pgvector-specific options. Extends EmbeddingConfig with additional configuration for the pgvector index and vector types.

AsyncPostgresStore

Asynchronous Postgres-backed store with optional vector search using pgvector.

    Examples

    Basic setup and usage:

    from langgraph.store.postgres import AsyncPostgresStore
    
    conn_string = "postgresql://user:pass@localhost:5432/dbname"
    
    async with AsyncPostgresStore.from_conn_string(conn_string) as store:
        await store.setup()  # Run migrations. Done once
    
        # Store and retrieve data
        await store.aput(("users", "123"), "prefs", {"theme": "dark"})
        item = await store.aget(("users", "123"), "prefs")

    Vector search using LangChain embeddings:

    from langchain.embeddings import init_embeddings
    from langgraph.store.postgres import AsyncPostgresStore
    
    conn_string = "postgresql://user:pass@localhost:5432/dbname"
    
    async with AsyncPostgresStore.from_conn_string(
        conn_string,
        index={
            "dims": 1536,
            "embed": init_embeddings("openai:text-embedding-3-small"),
            "fields": ["text"]  # specify which fields to embed. Default is the whole serialized value
        }
    ) as store:
        await store.setup()  # Run migrations. Done once
    
        # Store documents
        await store.aput(("docs",), "doc1", {"text": "Python tutorial"})
        await store.aput(("docs",), "doc2", {"text": "TypeScript guide"})
        await store.aput(("docs",), "doc3", {"text": "Other guide"}, index=False)  # don't index
    
        # Search by similarity
        results = await store.asearch(("docs",), query="programming guides", limit=2)
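The `fields` option controls which parts of the stored value are embedded. A hypothetical helper showing the idea (this is not the library's code, and it handles only top-level keys):

```python
# Hypothetical sketch of selecting which fields of a stored value get
# embedded; joins the chosen top-level fields into one string.
def text_to_embed(value: dict, fields: list[str]) -> str:
    """Join the chosen top-level fields into one string for embedding."""
    return " ".join(str(value[f]) for f in fields if f in value)

# With fields=["text"], only the "text" field contributes to the embedding;
# other keys such as "lang" are ignored.
text_to_embed({"text": "Python tutorial", "lang": "en"}, ["text"])
```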

    Using connection pooling for better performance:

    from langgraph.store.postgres import AsyncPostgresStore, PoolConfig
    
    conn_string = "postgresql://user:pass@localhost:5432/dbname"
    
    async with AsyncPostgresStore.from_conn_string(
        conn_string,
        pool_config=PoolConfig(
            min_size=5,
            max_size=20
        )
    ) as store:
        await store.setup()  # Run migrations. Done once
        # Use store with connection pooling...