# Build Agents on Cloudflare
URL: https://developers.cloudflare.com/agents/
import {
CardGrid,
Description,
Feature,
LinkButton,
LinkTitleCard,
PackageManagers,
Plan,
RelatedProduct,
Render,
TabItem,
Tabs,
TypeScriptExample,
} from "~/components";
The Agents SDK enables you to build and deploy AI-powered agents that can autonomously perform tasks, communicate with clients in real time, call AI models, persist state, schedule tasks, run asynchronous workflows, browse the web, query data from your database, support human-in-the-loop interactions, and [a lot more](/agents/api-reference/).
### Ship your first Agent
To use the Agent starter template and create your first Agent with the Agents SDK:
```sh
# install it
npm create cloudflare@latest agents-starter -- --template=cloudflare/agents-starter
# and deploy it
npx wrangler@latest deploy
```
Head to the guide on [building a chat agent](/agents/getting-started/build-a-chat-agent) to learn how the starter project is built and how to use it as a foundation for your own agents.
If you're already building on [Workers](/workers/), you can install the `agents` package directly into an existing project:
```sh
npm i agents
```
And then define your first Agent by creating a class that extends the `Agent` class:
```ts
import { Agent, AgentNamespace } from 'agents';
export class MyAgent extends Agent {
// Define methods on the Agent:
// https://developers.cloudflare.com/agents/api-reference/agents-api/
//
// Every Agent has built-in state via this.setState and this.sql
// Built-in scheduling via this.schedule
// Agents support WebSockets, HTTP requests, state synchronization and
// can run for seconds, minutes or hours: as long as the tasks need.
}
```
Dive into the [Agent SDK reference](/agents/api-reference/agents-api/) to learn more about using the Agents SDK package and defining an `Agent`.
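To make these built-ins concrete, here is a minimal sketch of an Agent that exercises state, SQL, and scheduling. The class name, the `report()` callback, and the `Env` shape are illustrative rather than part of the SDK:
```ts
import { Agent } from "agents";

// The Env interface describes your Worker's bindings; empty here for illustration.
interface Env {}

interface CounterState {
	count: number;
}

export class CounterAgent extends Agent<Env, CounterState> {
	// State the Agent starts with, before any setState call.
	initialState: CounterState = { count: 0 };

	async onRequest(request: Request): Promise<Response> {
		// Update built-in state; changes sync automatically to connected clients.
		this.setState({ count: this.state.count + 1 });

		// Query the Agent's private, embedded SQLite database.
		this.sql`CREATE TABLE IF NOT EXISTS visits (ts INTEGER)`;
		this.sql`INSERT INTO visits (ts) VALUES (${Date.now()})`;

		// Schedule the report() method below to run in 60 seconds.
		await this.schedule(60, "report", { reason: "periodic check" });

		return Response.json(this.state);
	}

	async report(payload: { reason: string }) {
		console.log("Scheduled run:", payload.reason);
	}
}
```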
### Why build agents on Cloudflare?
We built the Agents SDK with a few things in mind:
- **Batteries (state) included**: Agents come with [built-in state management](/agents/api-reference/store-and-sync-state/), with the ability to automatically sync state between an Agent and clients, trigger events on state changes, and read+write to each Agent's SQL database.
- **Communicative**: You can connect to an Agent via [WebSockets](/agents/api-reference/websockets/) and stream updates back to clients in real time. Handle a long-running response from a reasoning model or the results of an [asynchronous workflow](/agents/api-reference/run-workflows/), or build a chat app on top of the `useAgent` hook included in the Agents SDK.
- **Extensible**: Agents are code. Use the [AI models](/agents/api-reference/using-ai-models/) you want, bring your own headless browser service, pull data from your database hosted in another cloud, and add your own methods to your Agent and call them.
Agents built with the Agents SDK can be deployed directly to Cloudflare and run on top of [Durable Objects](/durable-objects/) — which you can think of as stateful micro-servers that can scale to tens of millions of instances — and are able to run wherever they need to. Run your Agents close to a user for low-latency interactivity, close to your data for throughput, and/or anywhere in between.
---
### Build on the Cloudflare Platform
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.
Build full-stack AI applications with Vectorize, Cloudflare's vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM.
Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.
Build stateful agents that guarantee executions, including automatic retries, persistent state that runs for minutes, hours, days, or weeks.
---
# Changelog
URL: https://developers.cloudflare.com/ai-gateway/changelog/
import { ProductReleaseNotes } from "~/components";
---
# Architectures
URL: https://developers.cloudflare.com/ai-gateway/demos/
import { GlossaryTooltip, ResourcesBySelector } from "~/components";
Learn how you can use AI Gateway within your existing architecture.
## Reference architectures
Explore the following reference architectures that use AI Gateway:
---
# Getting started
URL: https://developers.cloudflare.com/ai-gateway/get-started/
import { Details, DirectoryListing, LinkButton, Render } from "~/components";
In this guide, you will learn how to create your first AI Gateway. You can create multiple gateways to control different applications.
## Prerequisites
Before you get started, you need a Cloudflare account.
Sign up
## Create gateway
Then, create a new AI Gateway.
## Choosing gateway authentication
When setting up a new gateway, you can choose between an authenticated and unauthenticated gateway. Enabling an authenticated gateway requires each request to include a valid authorization token, adding an extra layer of security. We recommend using an authenticated gateway when storing logs to prevent unauthorized access and protect against invalid requests that can inflate log storage usage and make it harder to find the data you need. Learn more about setting up an [Authenticated Gateway](/ai-gateway/configuration/authentication/).
## Connect application
Next, connect your AI provider to your gateway.
AI Gateway offers multiple endpoints for each Gateway you create: one endpoint per provider, and one Universal Endpoint. To use AI Gateway, you will need to create your own account with each provider and provide your API key. AI Gateway acts as a proxy for these requests, enabling observability, caching, and more.
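As a hedged sketch of what that looks like: the request below goes to OpenAI through your gateway, where `{account_id}` and `{gateway_id}` are placeholders for your own values and the model name is illustrative:
```ts
// Your own OpenAI key; in a real application, store it as a secret.
const OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>";

const response = await fetch(
	"https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai/chat/completions",
	{
		method: "POST",
		headers: {
			"Content-Type": "application/json",
			// Provider credentials remain yours; AI Gateway proxies the request.
			Authorization: `Bearer ${OPENAI_API_KEY}`,
			// If you enabled an authenticated gateway, also send:
			// "cf-aig-authorization": "Bearer <YOUR_AI_GATEWAY_TOKEN>",
		},
		body: JSON.stringify({
			model: "gpt-4o-mini", // illustrative model name
			messages: [{ role: "user", content: "What is Cloudflare?" }],
		}),
	},
);

console.log(await response.json());
```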
Additionally, AI Gateway has a [WebSockets API](/ai-gateway/configuration/websockets-api/) which provides a single persistent connection, enabling continuous communication. This API supports all AI providers connected to AI Gateway, including those that do not natively support WebSockets.
Below is a list of our supported model providers:
If you do not have a provider preference, start with one of our dedicated tutorials:
- [OpenAI](/ai-gateway/integrations/aig-workers-ai-binding/)
- [Workers AI](/ai-gateway/tutorials/create-first-aig-workers/)
## View analytics
Now that your provider is connected to the AI Gateway, you can view analytics for requests going through your gateway.
:::note[Note]
The cost metric is an estimation based on the number of tokens sent and received in requests. While this metric can help you monitor and predict cost trends, refer to your provider’s dashboard for the most accurate cost details.
:::
## Next steps
- Learn more about [caching](/ai-gateway/configuration/caching/) for faster requests and cost savings and [rate limiting](/ai-gateway/configuration/rate-limiting/) to control how your application scales.
- Explore how to specify model or provider [fallbacks](/ai-gateway/configuration/fallbacks/) for resiliency.
- Learn how to use low-cost, open source models on [Workers AI](/ai-gateway/providers/workersai/) - our AI inference service.
---
# Header Glossary
URL: https://developers.cloudflare.com/ai-gateway/glossary/
import { Glossary } from "~/components";
AI Gateway supports a variety of headers to help you configure, customize, and manage your API requests. This page provides a complete list of all supported headers, along with a short description of each.
## Configuration hierarchy
Settings in AI Gateway can be configured at three levels: **Provider**, **Request**, and **Gateway**. Since the same settings can be configured in multiple locations, the following hierarchy determines which value is applied:
1. **Provider-level headers**:
Relevant only when using the [Universal Endpoint](/ai-gateway/providers/universal/), these headers take precedence over all other configurations.
2. **Request-level headers**:
Apply if no provider-level headers are set.
3. **Gateway-level settings**:
Act as the default if no headers are set at the provider or request levels.
This hierarchy ensures consistent behavior, prioritizing the most specific configurations. Use provider-level and request-level headers for more fine-tuned control, and gateway settings for general defaults.
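To make the hierarchy concrete, here is a hedged sketch of a request-level override using the `cf-aig-skip-cache` header: the gateway may have caching enabled as a default, but this single request bypasses it. The endpoint placeholders and model name are illustrative:
```ts
const OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>";

await fetch(
	"https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai/chat/completions",
	{
		method: "POST",
		headers: {
			"Content-Type": "application/json",
			Authorization: `Bearer ${OPENAI_API_KEY}`,
			// Request-level setting: wins over the gateway-level default,
			// but would itself be overridden by a provider-level header
			// on the Universal Endpoint.
			"cf-aig-skip-cache": "true",
		},
		body: JSON.stringify({
			model: "gpt-4o-mini",
			messages: [{ role: "user", content: "Give me a fresh answer." }],
		}),
	},
);
```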
---
# Cloudflare AI Gateway
URL: https://developers.cloudflare.com/ai-gateway/
import {
CardGrid,
Description,
Feature,
LinkTitleCard,
Plan,
RelatedProduct,
} from "~/components";
Observe and control your AI applications.
Cloudflare's AI Gateway allows you to gain visibility and control over your AI apps. By connecting your apps to AI Gateway, you can gather insights on how people are using your application with analytics and logging, and then control how your application scales with features such as caching, rate limiting, request retries, model fallback, and more. Better yet, it only takes one line of code to get started.
Check out the [Get started guide](/ai-gateway/get-started/) to learn how to configure your applications with AI Gateway.
## Features
View metrics such as the number of requests, tokens, and the cost it takes to run your application.
Gain insight on requests and errors.
Serve requests directly from Cloudflare's cache instead of the original model provider for faster requests and cost savings.
Control how your application scales by limiting the number of requests your application receives.
Improve resilience by defining request retry and model fallbacks in case of an error.
Workers AI, OpenAI, Azure OpenAI, HuggingFace, Replicate, and more work with AI Gateway.
---
## Related products
Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network.
Build full-stack AI applications with Vectorize, Cloudflare's vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM.
## More resources
Connect with the Workers community on Discord to ask questions, show what you
are building, and discuss the platform with other developers.
Learn how you can build and deploy ambitious AI applications to Cloudflare's
global network.
Follow @CloudflareDev on Twitter to learn about product announcements, and
what is new in Cloudflare Workers.
---
# Getting started
URL: https://developers.cloudflare.com/autorag/get-started/
AutoRAG allows developers to create fully managed retrieval-augmented generation (RAG) pipelines to power AI applications with accurate and up-to-date information without needing to manage infrastructure.
## 1. Upload data or use existing data in R2
AutoRAG integrates with R2 for data import. Create an R2 bucket if you do not have one and upload your data.
:::note
Before you create your first bucket, you must purchase R2 from the Cloudflare dashboard.
:::
To create and upload objects to your bucket from the Cloudflare dashboard:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2) and select **R2**.
2. Select **Create bucket**, name the bucket, and select **Create bucket**.
3. Choose to either drag and drop your file into the upload area or **select from computer**. Review the [file limits](/autorag/configuration/data-source/) when creating your knowledge base.
_If you need inspiration for what document to use to make your first AutoRAG, try downloading and uploading the [RSS](/changelog/rss/index.xml) of the [Cloudflare Changelog](/changelog/)._
## 2. Create an AutoRAG
To create a new AutoRAG:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/ai/autorag) and select **AI** > **AutoRAG**.
2. Select **Create AutoRAG**, configure the AutoRAG, and complete the setup process.
3. Select **Create**.
## 3. Monitor indexing
Once created, AutoRAG will create a Vectorize index in your account and begin indexing the data.
To monitor the indexing progress:
1. From the **AutoRAG** page in the dashboard, locate and select your AutoRAG.
2. Navigate to the **Overview** page to view the current indexing status.
## 4. Try it out
Once indexing is complete, you can run your first query:
1. From the **AutoRAG** page in the dashboard, locate and select your AutoRAG.
2. Navigate to the **Playground** page.
3. Select **Search with AI** or **Search**.
4. Enter a **query** to test out its response.
## 5. Add to your application
There are multiple ways you can add AutoRAG to your applications:
- [Workers Binding](/autorag/usage/workers-binding/) (see the sketch after this list)
- [REST API](/autorag/usage/rest-api/)
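As a sketch of the Workers Binding route, assuming a Worker with an AI binding named `AI` and an AutoRAG instance named `my-autorag` (both names are illustrative; refer to the Workers Binding page above for the canonical API):
```ts
interface Env {
	AI: Ai; // Workers AI binding configured in your Wrangler file
}

export default {
	async fetch(request: Request, env: Env): Promise<Response> {
		// Retrieve matching chunks from the index and generate an answer.
		const answer = await env.AI.autorag("my-autorag").aiSearch({
			query: "How do I create my first AutoRAG?",
		});
		return Response.json(answer);
	},
} satisfies ExportedHandler<Env>;
```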
---
# Overview
URL: https://developers.cloudflare.com/autorag/
import {
CardGrid,
Description,
LinkTitleCard,
Plan,
RelatedProduct,
LinkButton,
Feature,
} from "~/components";
Create fully-managed RAG pipelines to power your AI applications with accurate
and up-to-date information.
AutoRAG lets you create fully-managed, retrieval-augmented generation (RAG) pipelines that continuously update and scale on Cloudflare. With AutoRAG, you can integrate context-aware AI into your applications without managing infrastructure.
You can use AutoRAG to build:
- **Product Chatbot:** Answer customer questions using your own product content.
- **Docs Search:** Make documentation easy to search and use.
Get started
Watch AutoRAG demo
---
## Features
Automatically and continuously index your data source, keeping your content fresh without manual reprocessing.
Create multitenancy by scoping search to each tenant’s data using folder-based metadata filters.
Call your AutoRAG instance for search or AI Search directly from a Cloudflare Worker using the native binding integration.
Cache repeated queries and results to improve latency and reduce compute on repeated requests.
---
## Related products
Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network.
Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.
Build full-stack AI applications with Vectorize, Cloudflare’s vector database.
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
---
## More resources
Build and deploy your first Workers AI application.
Connect with the Workers community on Discord to ask questions, share what you
are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and
what is new in Cloudflare Workers.
---
# Changelog
URL: https://developers.cloudflare.com/browser-rendering/changelog/
import { ProductReleaseNotes } from "~/components";
---
# FAQ
URL: https://developers.cloudflare.com/browser-rendering/faq/
import { GlossaryTooltip } from "~/components";
Below you will find answers to our most commonly asked questions. If you cannot find the answer you are looking for, join the [Cloudflare Developers Discord](https://discord.cloudflare.com) to explore additional resources.
##### Uncaught (in response) TypeError: Cannot read properties of undefined (reading 'fetch')
Make sure that you are passing your Browser binding to the `puppeteer.launch` API and that you are on the [Workers Paid plan](/workers/platform/pricing/).
##### Will browser rendering bypass Cloudflare's Bot Protection?
No, Browser Rendering requests are always identified as bots by Cloudflare and do not bypass Bot Protection. Additionally, Browser Rendering respects the robots.txt protocol, ensuring that any disallowed paths specified for user agents are not accessed during rendering.
If you are attempting to scan your **own zone** and need Browser Rendering to access areas protected by Cloudflare’s Bot Protection, you can create a [WAF skip rule](/waf/custom-rules/skip/) to bypass the bot protection using a header or a custom user agent.
## Puppeteer
##### Code generation from strings disallowed for this context while using an XPath selector
Currently, it is not possible to use XPath to select elements, since this poses a security risk to Workers.
As an alternative, use a CSS selector or `page.evaluate`. For example:
```ts
const innerHtml = await page.evaluate(() => {
return (
// @ts-ignore this runs on browser context
new XPathEvaluator()
.createExpression("/html/body/div/h1")
// @ts-ignore this runs on browser context
.evaluate(document, XPathResult.FIRST_ORDERED_NODE_TYPE).singleNodeValue
.innerHTML
);
});
```
:::note
Keep in mind that `page.evaluate` can only return primitive types like strings, numbers, etc.
Returning an `HTMLElement` will not work.
:::
---
# Get started
URL: https://developers.cloudflare.com/browser-rendering/get-started/
Browser Rendering can be used in two ways:
- [Workers Binding API](/browser-rendering/workers-binding-api) for complex scripts.
- [REST API](/browser-rendering/rest-api/) for simple actions.
---
# Browser Rendering
URL: https://developers.cloudflare.com/browser-rendering/
import {
CardGrid,
Description,
LinkTitleCard,
Plan,
RelatedProduct,
} from "~/components";
Browser automation for [Cloudflare Workers](/workers/) and [quick browser actions](/browser-rendering/rest-api/).
Browser Rendering enables developers to programmatically control and interact with headless browser instances running on Cloudflare’s global network. This facilitates tasks such as automating browser interactions, capturing screenshots, generating PDFs, and extracting data from web pages.
## Integration Methods
You can integrate Browser Rendering into your applications using one of the following methods:
- **[REST API](/browser-rendering/rest-api/)**: Ideal for simple, stateless tasks like capturing screenshots, generating PDFs, extracting HTML content, and more.
- **[Workers Binding API](/browser-rendering/workers-binding-api/)**: Suitable for advanced browser automation within [Cloudflare Workers](/workers/). This method provides greater control, enabling more complex workflows and persistent sessions.
Choose the method that best fits your use case. For example, use the [REST API endpoints](/browser-rendering/rest-api/) for straightforward tasks from external applications and the [Workers Binding API](/browser-rendering/workers-binding-api/) for complex automation within the Cloudflare ecosystem.
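As a sketch of the Workers Binding API path, assuming a browser binding named `MYBROWSER` in your Wrangler configuration (the binding name is your choice):
```ts
import puppeteer from "@cloudflare/puppeteer";

interface Env {
	MYBROWSER: Fetcher; // browser binding declared in your Wrangler file
}

export default {
	async fetch(request: Request, env: Env): Promise<Response> {
		// Launch a headless browser session on Cloudflare's network.
		const browser = await puppeteer.launch(env.MYBROWSER);
		const page = await browser.newPage();
		await page.goto("https://example.com/");
		const screenshot = await page.screenshot();
		await browser.close();
		return new Response(screenshot, {
			headers: { "Content-Type": "image/png" },
		});
	},
} satisfies ExportedHandler<Env>;
```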
## Use Cases
Browser Rendering can be utilized for various purposes, including:
- Fetch HTML content of a page.
- Capture screenshot of a webpage.
- Convert a webpage into a PDF document.
- Take a webpage snapshot.
- Scrape specified HTML elements from a webpage.
- Retrieve data in a structured format.
- Extract Markdown content from a webpage.
- Gather all hyperlinks found on a webpage.
## Related products
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
A globally distributed coordination API with strongly consistent storage.
Build and deploy AI-powered agents that can autonomously perform tasks.
## More resources
Deploy your first Browser Rendering project using Wrangler and Cloudflare's
version of Puppeteer.
New to Workers? Get started with the Workers Learning Path.
Learn about Browser Rendering limits.
Connect with the Workers community on Discord to ask questions, show what you
are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and
what is new in Cloudflare Workers.
---
# Cloudflare for Platforms
URL: https://developers.cloudflare.com/cloudflare-for-platforms/
import { Description, Feature } from "~/components"
Build your own multitenant platform using Cloudflare as infrastructure
Cloudflare for Platforms lets you run untrusted code written by your customers, or by AI, in a secure, hosted sandbox, and give each customer their own subdomain or custom domain.
You can think of Cloudflare for Platforms as the exact same products and functionality that Cloudflare offers its own customers, structured so that you can offer it to your own customers, embedded within your own product. This includes:
- **Isolation and multitenancy** — each of your customers runs code in their own Worker — a [secure and isolated sandbox](/workers/reference/how-workers-works/)
- **Programmable routing, ingress, egress and limits** — you write code that dispatches requests to your customers' code (see the sketch after this list), and can control [ingress](/cloudflare-for-platforms/workers-for-platforms/get-started/dynamic-dispatch/) and [egress](/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/), and set [per-customer limits](/cloudflare-for-platforms/workers-for-platforms/configuration/custom-limits/)
- **Databases and storage** — you can provide [databases, object storage and more](/workers/runtime-apis/bindings/) to your customers as APIs they can call directly, without API tokens, keys, or external dependencies
- **Custom Domains and Subdomains** — you [call an API](/cloudflare-for-platforms/cloudflare-for-saas/) to create custom subdomains or configure custom domains for each of your customers
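As a sketch of the routing piece, here is a dispatch Worker that forwards each request to a customer's Worker. The binding name `DISPATCHER` and the hostname-to-Worker mapping are illustrative, not prescribed:
```ts
interface Env {
	DISPATCHER: DispatchNamespace; // dispatch namespace binding
}

export default {
	async fetch(request: Request, env: Env): Promise<Response> {
		// Hypothetical mapping: the first hostname label names the customer
		// Worker. A real platform would look this up in a routing table.
		const customerName = new URL(request.url).hostname.split(".")[0];
		const customerWorker = env.DISPATCHER.get(customerName);
		return customerWorker.fetch(request);
	},
} satisfies ExportedHandler<Env>;
```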
Cloudflare for Platforms is used by leading platforms big and small to:
- Build application development platforms tailored to specific domains, like ecommerce storefronts or mobile apps
- Power AI coding platforms that let anyone build and deploy software
- Customize product behavior by allowing any user to write a short code snippet
- Offer every customer their own isolated database
- Provide each customer with their own subdomain
***
## Products
Let your customers build and deploy their own applications to your platform, using Cloudflare's developer platform.
Give your customers their own subdomain or custom domain, protected and accelerated by Cloudflare.
---
# Overview
URL: https://developers.cloudflare.com/constellation/
import { CardGrid, Description, LinkTitleCard } from "~/components"
Run machine learning models with Cloudflare Workers.
Constellation allows you to run fast, low-latency inference tasks on pre-trained machine learning models natively on Cloudflare Workers. It supports some of the most popular machine learning (ML) and AI runtimes and multiple classes of models.
Cloudflare provides a curated list of verified models, or you can train and upload your own.
Functionality you can deploy to your application with Constellation:
* Content generation, summarization, or similarity analysis
* Question answering
* Audio transcription
* Image or audio classification
* Object detection
* Anomaly detection
* Sentiment analysis
***
## More resources
Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.
---
# Demos and architectures
URL: https://developers.cloudflare.com/d1/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use D1 within your existing application and architecture.
## Featured Demos
- [Starter code for D1 Sessions API](https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template): An introduction to D1 Sessions API. This demo simulates purchase orders administration.
[Deploy to Cloudflare](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template)
:::note[Tip: Place your database further away for the read replication demo]
To simulate how read replication can improve a worst-case latency scenario, choose a primary database location in a region far away from you (one of the deployment steps).
You can find this in the **Database location hint** dropdown.
:::
## Demos
Explore the following demo applications for D1.
## Reference architectures
Explore the following reference architectures that use D1:
---
# Getting started
URL: https://developers.cloudflare.com/d1/get-started/
import { Render, PackageManagers, Steps, FileTree, Tabs, TabItem, TypeScriptExample, WranglerConfig } from "~/components";
This guide instructs you through:
- Creating your first database using D1, Cloudflare's native serverless SQL database.
- Creating a schema and querying your database via the command line.
- Connecting a [Cloudflare Worker](/workers/) to your D1 database to query your D1 database programmatically.
You can perform these tasks through the CLI or through the Cloudflare dashboard.
:::note
If you already have an existing Worker and an existing D1 database, follow this tutorial from [3. Bind your Worker to your D1 database](/d1/get-started/#3-bind-your-worker-to-your-d1-database).
:::
## Prerequisites
## 1. Create a Worker
Create a new Worker as the means to query your database.
1. Create a new project named `d1-tutorial` by running:
This creates a new `d1-tutorial` directory as illustrated below.
- d1-tutorial
- node_modules/
- test/
- src
- **index.ts**
- package-lock.json
- package.json
- testconfig.json
- vitest.config.mts
- worker-configuration.d.ts
- **wrangler.jsonc**
Your new `d1-tutorial` directory includes:
- A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) in `index.ts`.
- A [Wrangler configuration file](/workers/wrangler/configuration/). This file is how your `d1-tutorial` Worker accesses your D1 database.
:::note
If you are familiar with Cloudflare Workers, or initializing projects in a Continuous Integration (CI) environment, initialize a new project non-interactively by setting `CI=true` as an environment variable when running `create cloudflare@latest`.
For example: `CI=true npm create cloudflare@latest d1-tutorial --type=simple --git --ts --deploy=false` creates a basic "Hello World" project ready to build on.
:::
1. Log in to your [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Go to your account > **Workers & Pages** > **Overview**.
3. Select **Create**.
4. Select **Create Worker**.
5. Name your Worker. For this tutorial, name your Worker `d1-tutorial`.
6. Select **Deploy**.
## 2. Create a database
A D1 database is conceptually similar to many other databases: a database may contain one or more tables, the ability to query those tables, and optional indexes. D1 uses the familiar [SQL query language](https://www.sqlite.org/lang.html) (as used by SQLite).
To create your first D1 database:
1. Change into the directory you just created for your Workers project:
```sh
cd d1-tutorial
```
2. Run the following `wrangler d1` command and give your database a name. In this tutorial, the database is named `prod-d1-tutorial`:
```sh
npx wrangler d1 create prod-d1-tutorial
```
```sh output
✅ Successfully created DB 'prod-d1-tutorial'
[[d1_databases]]
binding = "DB" # available in your Worker on env.DB
database_name = "prod-d1-tutorial"
database_id = ""
```
This creates a new D1 database and outputs the [binding](/workers/runtime-apis/bindings/) configuration needed in the next step.
:::note
The `wrangler` command-line interface is Cloudflare's tool for managing and deploying Workers applications and D1 databases in your terminal. It was installed when you used `npm create cloudflare@latest` to initialize your new project.
:::
1. Go to **Storage & Databases** > **D1**.
2. Select **Create**.
3. Name your database. For this tutorial, name your D1 database `prod-d1-tutorial`.
4. (Optional) Provide a location hint. Location hint is an optional parameter you can provide to indicate your desired geographical location for your database. Refer to [Provide a location hint](/d1/configuration/data-location/#provide-a-location-hint) for more information.
5. Select **Create**.
:::note
For reference, a good database name:
- Uses ASCII characters, is shorter than 32 characters, and uses dashes (-) instead of spaces.
- Is descriptive of the use-case and environment. For example, "staging-db-web" or "production-db-backend".
- Only describes the database, and is not directly referenced in code.
:::
## 3. Bind your Worker to your D1 database
You must create a binding for your Worker to connect to your D1 database. [Bindings](/workers/runtime-apis/bindings/) allow your Workers to access resources, like D1, on the Cloudflare developer platform.
To bind your D1 database to your Worker:
You create bindings by updating your Wrangler file.
1. Copy the lines obtained from [step 2](/d1/get-started/#2-create-a-database) from your terminal.
2. Add them to the end of your Wrangler file.
```toml
[[d1_databases]]
binding = "DB" # available in your Worker on env.DB
database_name = "prod-d1-tutorial"
database_id = ""
```
Specifically:
- The value (string) you set for `binding` is the **binding name**, and is used to reference this database in your Worker. In this tutorial, name your binding `DB`.
- The binding name must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding.
- Your binding is available in your Worker at `env.<BINDING_NAME>` (`env.DB` for this tutorial), and the D1 [Workers Binding API](/d1/worker-api/) is exposed on this binding.
:::note
When you execute the `wrangler d1 create` command, the client API package (which implements the D1 API and database class) is automatically installed. For more information on the D1 Workers Binding API, refer to [Workers Binding API](/d1/worker-api/).
:::
You can also bind your D1 database to a [Pages Function](/pages/functions/). For more information, refer to [Functions Bindings for D1](/pages/functions/bindings/#d1-databases).
You create bindings by adding them to the Worker you have created.
1. Go to **Workers & Pages** > **Overview**.
2. Select the `d1-tutorial` Worker you created in [step 1](/d1/get-started/#1-create-a-worker).
3. Select **Settings**.
4. Scroll to **Bindings**, then select **Add**.
5. Select **D1 database**.
6. Name your binding in **Variable name**, then select the `prod-d1-tutorial` D1 database you created in [step 2](/d1/get-started/#2-create-a-database) from the dropdown menu. For this tutorial, name your binding `DB`.
7. Select **Deploy** to deploy your binding. When deploying, there are two options:
- **Deploy:** Immediately deploy the binding to 100% of your audience.
- **Save version:** Save a version of the binding which you can deploy in the future.
For this tutorial, select **Deploy**.
## 4. Run a query against your D1 database
### Configure your D1 database
After correctly preparing your [Wrangler configuration file](/workers/wrangler/configuration/), set up your database. Use the example `schema.sql` file below to initialize your database.
1. Copy the following code and save it as a `schema.sql` file in the `d1-tutorial` Worker directory you created in step 1:
```sql
DROP TABLE IF EXISTS Customers;
CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT);
INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name');
```
2. Initialize your database to run and test locally first. Bootstrap your new D1 database by running:
```sh
npx wrangler d1 execute prod-d1-tutorial --local --file=./schema.sql
```
3. Validate your data is in your database by running:
```sh
npx wrangler d1 execute prod-d1-tutorial --local --command="SELECT * FROM Customers"
```
```sh output
🌀 Mapping SQL input into an array of statements
🌀 Executing on local database prod-d1-tutorial (5f092302-3fbd-4247-a873-bf1afc5150b) from .wrangler/state/v3/d1:
┌────────────┬─────────────────────┬───────────────────┐
│ CustomerId │ CompanyName │ ContactName │
├────────────┼─────────────────────┼───────────────────┤
│ 1 │ Alfreds Futterkiste │ Maria Anders │
├────────────┼─────────────────────┼───────────────────┤
│ 4 │ Around the Horn │ Thomas Hardy │
├────────────┼─────────────────────┼───────────────────┤
│ 11 │ Bs Beverages │ Victoria Ashworth │
├────────────┼─────────────────────┼───────────────────┤
│ 13 │ Bs Beverages │ Random Name │
└────────────┴─────────────────────┴───────────────────┘
```
Use the Dashboard to create a table and populate it with data.
1. Go to **Storage & Databases** > **D1**.
2. Select the `prod-d1-tutorial` database you created in [step 2](/d1/get-started/#2-create-a-database).
3. Select **Console**.
4. Paste the following SQL snippet.
```sql
DROP TABLE IF EXISTS Customers;
CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT);
INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name');
```
5. Select **Execute**. This creates a table called `Customers` in your `prod-d1-tutorial` database.
6. Select **Tables**, then select the `Customers` table to view the contents of the table.
### Write queries within your Worker
After you have set up your database, run an SQL query from within your Worker.
1. Navigate to your `d1-tutorial` Worker and open the `index.ts` file. The `index.ts` file is where you configure your Worker's interactions with D1.
2. Clear the content of `index.ts`.
3. Paste the following code snippet into your `index.ts` file:
```typescript
export interface Env {
// If you set another name in the Wrangler config file for the value for 'binding',
// replace "DB" with the variable name you defined.
DB: D1Database;
}
export default {
async fetch(request, env): Promise<Response> {
const { pathname } = new URL(request.url);
if (pathname === "/api/beverages") {
// If you did not use `DB` as your binding name, change it here
const { results } = await env.DB.prepare(
"SELECT * FROM Customers WHERE CompanyName = ?",
)
.bind("Bs Beverages")
.all();
return Response.json(results);
}
return new Response(
"Call /api/beverages to see everyone who works at Bs Beverages",
);
},
} satisfies ExportedHandler<Env>;
```
In the code above, you:
1. Define a binding to your D1 database in your TypeScript code. This binding matches the `binding` value you set in the [Wrangler configuration file](/workers/wrangler/configuration/) under `[[d1_databases]]`.
2. Query your database using `env.DB.prepare` to issue a [prepared query](/d1/worker-api/d1-database/#prepare) with a placeholder (the `?` in the query).
3. Call `bind()` to safely and securely bind a value to that placeholder. In a real application, you would allow a user to define the `CompanyName` they want to list results for. Using `bind()` prevents users from executing arbitrary SQL (known as "SQL injection") against your application and deleting or otherwise modifying your database.
4. Execute the query by calling `all()` to return all rows (or none, if the query returns none).
5. Return your query results, if any, in JSON format with `Response.json(results)`.
After configuring your Worker, you can test your project locally before you deploy globally.
You can query your D1 database using your Worker.
1. Go to **Workers & Pages** > **Overview**.
2. Select the `d1-tutorial` Worker you created.
3. Select **Edit Code**.
4. Clear the contents of the `worker.js` file, then paste the following code:
```js
export default {
async fetch(request, env) {
const { pathname } = new URL(request.url);
if (pathname === "/api/beverages") {
// If you did not use `DB` as your binding name, change it here
const { results } = await env.DB.prepare(
"SELECT * FROM Customers WHERE CompanyName = ?"
)
.bind("Bs Beverages")
.all();
return new Response(JSON.stringify(results), {
headers: { 'Content-Type': 'application/json' }
});
}
return new Response(
"Call /api/beverages to see everyone who works at Bs Beverages"
);
},
};
```
5. Select **Save**.
## 5. Deploy your database
Deploy your database on Cloudflare's global network.
To deploy your Worker to production using Wrangler, you must first repeat the [database configuration](/d1/get-started/#configure-your-d1-database) steps with the `--remote` flag in place of `--local`, so that your deployed Worker has data to read. This creates the database tables and imports the data into the production version of your database.
1. Bootstrap your database with the `schema.sql` file you created in step 4:
```sh
npx wrangler d1 execute prod-d1-tutorial --remote --file=./schema.sql
```
2. Validate the data is in production by running:
```sh
npx wrangler d1 execute prod-d1-tutorial --remote --command="SELECT * FROM Customers"
```
3. Deploy your Worker to make your project accessible on the Internet. Run:
```sh
npx wrangler deploy
```
```sh output
Outputs: https://d1-tutorial..workers.dev
```
You can now visit the URL for your newly created project to query your live database.
For example, if the URL of your new Worker is `d1-tutorial..workers.dev`, accessing `https://d1-tutorial..workers.dev/api/beverages` sends a request to your Worker that queries your live database directly.
4. Test your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `https://d1-tutorial..workers.dev/api/beverages`.
1. Go to **Workers & Pages** > **Overview**.
2. Select your `d1-tutorial` Worker.
3. Select **Deployments**.
4. From the **Version History** table, select **Deploy version**.
5. From the **Deploy version** page, select **Deploy**.
This deploys the latest version of the Worker code to production.
## 6. (Optional) Develop locally with Wrangler
If you are using D1 with Wrangler, you can test your database locally. While in your project directory:
1. Run `wrangler dev`:
```sh
npx wrangler dev
```
When you run `wrangler dev`, Wrangler provides a URL (most likely `localhost:8787`) to review your Worker.
2. Go to the URL.
The page displays `Call /api/beverages to see everyone who works at Bs Beverages`.
3. Test your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `localhost:8787/api/beverages`.
If successful, the browser displays your data.
:::note
You can only develop locally if you are using Wrangler. You cannot develop locally through the Cloudflare dashboard.
:::
## 7. (Optional) Delete your database
To delete your database:
Run:
```sh
npx wrangler d1 delete prod-d1-tutorial
```
1. Go to **Storage & Databases** > **D1**.
2. Select your `prod-d1-tutorial` D1 database.
3. Select **Settings**.
4. Select **Delete**.
5. Type the name of the database (`prod-d1-tutorial`) to confirm the deletion.
If you want to delete your Worker:
Run:
```sh
npx wrangler delete d1-tutorial
```
1. Go to **Workers & Pages** > **Overview**.
2. Select your `d1-tutorial` Worker.
3. Select **Settings**.
4. Scroll to the bottom of the page, then select **Delete**.
5. Type the name of the Worker (`d1-tutorial`) to confirm the deletion.
## Summary
In this tutorial, you have:
- Created a D1 database
- Created a Worker to access that database
- Deployed your project globally
## Next steps
If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com).
- See supported [Wrangler commands for D1](/workers/wrangler/commands/#d1).
- Learn how to use [D1 Worker Binding APIs](/d1/worker-api/) within your Worker, and test them from the [API playground](/d1/worker-api/#api-playground).
- Explore [community projects built on D1](/d1/reference/community-projects/).
---
# Cloudflare D1
URL: https://developers.cloudflare.com/d1/
import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct } from "~/components"
Create new serverless SQL databases to query from your Workers and Pages projects.
D1 is Cloudflare's managed, serverless database with SQLite's SQL semantics, built-in disaster recovery, and Worker and HTTP API access.
D1 is designed for horizontal scale out across multiple, smaller (10 GB) databases, such as per-user, per-tenant, or per-entity databases. D1 allows you to build applications with thousands of databases at no extra cost, as isolating data across multiple databases carries no premium: D1 pricing is based only on query and storage costs.
Create your first D1 database by [following the Get started guide](/d1/get-started/), learn how to [import data into a database](/d1/best-practices/import-export-data/), and how to [interact with your database](/d1/worker-api/) directly from [Workers](/workers/) or [Pages](/pages/functions/bindings/#d1-databases).
***
## Features
Create your first D1 database, establish a schema, import data and query D1 directly from an application [built with Workers](/workers/).
Execute SQL with SQLite's SQL compatibility and D1 Client API.
Time Travel is D1's approach to backups and point-in-time recovery, and allows you to restore a database to any minute within the last 30 days.
***
## Related products
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
Deploy dynamic front-end applications in record time.
***
## More resources
Learn about D1's pricing and how to estimate your usage.
Learn about what limits D1 has and how to work within them.
Browse what developers are building with D1.
Learn more about the storage and database options you can build on with Workers.
Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Developer Platform.
---
# Wrangler commands
URL: https://developers.cloudflare.com/d1/wrangler-commands/
import { Render, Type, MetaInfo } from "~/components"
D1 Wrangler commands use REST APIs to interact with the control plane. This page lists the Wrangler commands for D1.
## Global commands
## Experimental commands
### `insights`
Returns statistics about your queries.
```sh
npx wrangler d1 insights --
```
For more information, see [Query `insights`](/d1/observability/metrics-analytics/#query-insights).
---
# Application guide
URL: https://developers.cloudflare.com/developer-spotlight/application-guide/
If you use Cloudflare's developer products and would like to share your expertise, then Cloudflare's Developer Spotlight program is for you. Whether you use Cloudflare in your profession, as a student, or as a hobbyist, let us spotlight your creativity. Write a tutorial for our documentation and earn credits for your Cloudflare account, along with having your name credited on your work.
The Developer Spotlight program is open for applicants until Thursday, the 24th of October 2024.
## Who can apply?
The following is required in order to be an eligible applicant for the Developer Spotlight program:
- You must not be an employee of Cloudflare.
- You must be 18 or older.
- All participants must agree to the [Developer Spotlight terms](/developer-spotlight/terms/).
## Submission rules
Your tutorial must be:
1. Easy for anyone to follow.
2. Technically accurate.
3. Entirely original, written only by you.
4. Written following Cloudflare's documentation style guide. For more information, please visit our [style guide documentation](/style-guide/) and our [tutorial style guide documentation](/style-guide/documentation-content-strategy/content-types/tutorial/#template).
5. About how to use [Cloudflare's Developer Platform products](/products/?product-group=Developer+platform) to create a project or solve a problem.
6. Complete, not an unfinished draft.
## How to apply
To apply to the program, submit an application through the [Developer Spotlight signup form](https://forms.gle/anpTPu45tnwjwXsk8). Successful applicants will be contacted by email.
## Account credits
Account credits can be used towards recurring monthly charges for Cloudflare plans or add-on services. Once a tutorial submission has been approved and published, we will add 350 credits to your Cloudflare account. Credits are only valid for three years. Valid payment details must be stored on the receiving account before credits can be added.
## FAQ
### How many tutorial topic ideas can I submit?
You may submit as many tutorial topic ideas as you like in your application.
### When will I be compensated for my tutorial?
We will add the account credits to your Cloudflare account after your tutorial has been approved and published under the Developer Spotlight program.
### If my tutorial is accepted and published on Cloudflare's Developer Spotlight program, can I republish it elsewhere?
We ask that you do not republish any tutorials that have been published under the Cloudflare Developer Spotlight program.
### Will I be credited for my work?
You will be credited as the author of any tutorial you submit that is successfully published through the Cloudflare Developer Spotlight program. We will add your details to your work after it has been approved.
### What happens if my topic of choice gets accepted but the tutorial submission gets rejected?
Our team will do our best to help you edit your tutorial's pull request to be ready for submission; however, in the unlikely chance that your tutorial's pull request is rejected, you are still free to publish your work elsewhere.
---
# Developer Spotlight program
URL: https://developers.cloudflare.com/developer-spotlight/
import { LinkTitleCard } from "~/components";
Find examples of how our community of developers are getting the most out of our products.
Applications are currently open until Thursday, the 24th of October 2024. To apply, please read the [application guide](/developer-spotlight/application-guide/).
## View latest contributions
By Mackenly Jones
By Rajeev R. Sharma
By Hidetaka Okamoto
By Ivan Buendia
By Vasyl
By Aleksej Komnenovic
By John Siciliano
By Hidetaka Okamoto
By Dominik Fuerst
By Cody Walsh
---
# Developer Spotlight Terms
URL: https://developers.cloudflare.com/developer-spotlight/terms/
These Developer Spotlight Terms (the “Terms”) govern your participation in the Cloudflare Developer Spotlight Program (the “Program”). As used in these Terms, "Cloudflare", "us" or "we" refers to Cloudflare, Inc. and its affiliates.
THESE TERMS DO NOT APPLY TO YOUR ACCESS AND USE OF THE CLOUDFLARE PRODUCTS AND SERVICES THAT ARE PROVIDED UNDER THE [SELF-SERVE SUBSCRIPTION AGREEMENT](https://www.cloudflare.com/terms/), THE [ENTERPRISE SUBSCRIPTION AGREEMENT](https://www.cloudflare.com/enterpriseterms/), OR OTHER WRITTEN AGREEMENT SIGNED BETWEEN YOU AND CLOUDFLARE (IF APPLICABLE).
1. Eligibility. By agreeing to these Terms, you represent and warrant to us: (i) that you are at least eighteen (18) years of age; (ii) that you have not previously been suspended or removed from the Program and (iii) that your participation in the Program is in compliance with any and all applicable laws and regulations.
2. Submissions. From time-to-time, Cloudflare may accept certain tutorials, blogs, and other content submissions from its developer community (“Dev Content”) for consideration for publication on a Cloudflare blog, developer documentation, social media platform or other website. You grant us a worldwide, perpetual, irrevocable, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Dev Content in any and all media or distribution methods now known or later developed.
a. Likeness. You hereby grant to Cloudflare the royalty free right to use your name and likeness and any trademarks you include in the Dev Content in any and all manner, media, products, means, or methods, now known or hereafter created, throughout the world, in perpetuity, in connection with Cloudflare’s exercise of its rights under these Terms, including Cloudflare’s use of the Dev Content. Notwithstanding any other provision of these Terms, nothing herein will obligate Cloudflare to use the Dev Content in any manner. You understand and agree that you will have no right to any proceeds derived by Cloudflare or any third party from the use of the Dev Content.
b. Representations & Warranties. By submitting Dev Content, you represent and warrant that (1) you are the author and sole owner of all rights to the Dev Content; (2) the Dev Content is original and has not in whole or in part previously been published in any form and is not in the public domain; (3) your Dev Content is accurate and not misleading; (4) your Dev Content does not: (i) infringe, violate, or misappropriate any third-party right, including any copyright, trademark, patent, trade secret, moral right, privacy right, right of publicity, or any other intellectual property or proprietary right; or (ii) slander, defame, or libel any third-party; and (5) no payments will be due from Cloudflare to any third party for the exercise of any rights granted under these Terms.
c. Compensation. Unless otherwise agreed by Cloudflare in writing, you understand and agree that Cloudflare will have no obligation to you or any third-party for any compensation, reimbursement, or any other payments in connection with your participation in the Program or publication of Dev Content.
3. Termination. These Terms will continue in full force and effect until either party terminates upon 30 days’ written notice to the other party. The provisions of Sections 2, 4, and 5 shall survive any termination or expiration of this agreement.
4. Indemnification. You agree to defend, indemnify, and hold harmless Cloudflare and its officers, directors, employees, consultants, affiliates, subsidiaries and agents (collectively, the "Cloudflare Entities") from and against any and all claims, liabilities, damages, losses, and expenses, including reasonable attorneys' fees and costs, arising out of or in any way connected with your violation of any third-party right, including without limitation any intellectual property right, publicity, confidentiality, property or privacy right. We reserve the right, at our own expense, to assume the exclusive defense and control of any matter otherwise subject to indemnification by you (and without limiting your indemnification obligations with respect to such matter), and in such case, you agree to cooperate with our defense of such claim.
5. Limitation of Liability. IN NO EVENT WILL THE CLOUDFLARE ENTITIES BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, OR PUNITIVE DAMAGES ARISING OUT OF OR RELATING TO YOUR PARTICIPATION IN THE PROGRAM, WHETHER BASED ON WARRANTY, CONTRACT, TORT (INCLUDING NEGLIGENCE), STATUTE, OR ANY OTHER LEGAL THEORY, WHETHER OR NOT THE CLOUDFLARE ENTITIES HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGE.
6. Independent Contractor. The parties acknowledge and agree that you are an independent contractor, and nothing in these Terms will create a relationship of employment, joint venture, partnership or agency between the parties. Neither party will have the right, power or authority at any time to act on behalf of, or represent the other party. Cloudflare will not obtain workers’ compensation or other insurance on your behalf, and you are solely responsible for all payments, benefits, and insurance required for the performance of services hereunder, including, without limitation, taxes or other withholdings, unemployment, payroll disbursements, and other related expenses. You hereby acknowledge and agree that these Terms are not governed by any union or collective bargaining agreement and Cloudflare will not pay you any union-required residuals, reuse fees, pension, health and welfare benefits or other benefits/payments.
7. Governing Law. These Terms will be governed by the laws of the State of California without regard to conflict of law principles. To the extent that any lawsuit or court proceeding is permitted hereunder, you and Cloudflare agree to submit to the personal and exclusive jurisdiction of the state and federal courts located within San Francisco County, California for the purpose of litigating all such disputes.
8. Modifications. Cloudflare reserves the right to make modifications to these Terms at any time. Revised versions of these Terms will be posted publicly online. Unless otherwise specified, any modifications to the Terms will take effect the day they are posted publicly online. If you do not agree with the revised Terms, your sole and exclusive remedy will be to discontinue your participation in the Program.
9. General. These Terms, together with any applicable product limits, disclaimers, or other terms presented to you on a Cloudflare controlled website (e.g., www.cloudflare.com, as well as the other websites that Cloudflare operates and that link to these Terms) or documentation, each of which are incorporated by reference into these Terms, constitute the entire and exclusive understanding and agreement between you and Cloudflare regarding your participation in the Program. Use of section headers in these Terms is for convenience only and will not have any impact on the interpretation of particular provisions. You may not assign or transfer these Terms or your rights hereunder, in whole or in part, by operation of law or otherwise, without our prior written consent. We may assign these Terms at any time without notice. The failure to require performance of any provision will not affect our right to require performance at any time thereafter, nor will a waiver of any breach or default of these Terms or any provision of these Terms constitute a waiver of any subsequent breach or default or a waiver of the provision itself. In the event that any part of these Terms is held to be invalid or unenforceable, the unenforceable part will be given effect to the greatest extent possible and the remaining parts will remain in full force and effect. Upon termination of these Terms, any provision that by its nature or express terms should survive will survive such termination or expiration.
---
# Demos and architectures
URL: https://developers.cloudflare.com/durable-objects/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use a Durable Object within your existing application and architecture.
## Demos
Explore the following demo applications for Durable Objects.
## Reference architectures
Explore the following reference architectures that use Durable Objects:
---
# Getting started
URL: https://developers.cloudflare.com/durable-objects/get-started/
import { Render, TabItem, Tabs, PackageManagers, WranglerConfig, TypeScriptExample } from "~/components";
This guide will instruct you through:
- Writing a JavaScript class that defines a Durable Object.
- Using the Durable Objects SQL API to query a Durable Object's private, embedded SQLite database.
- Instantiating and communicating with a Durable Object from another Worker.
- Deploying a Durable Object and a Worker that communicates with a Durable Object.
If you wish to learn more about Durable Objects, refer to [What are Durable Objects?](/durable-objects/what-are-durable-objects/).
## Quick start
If you want to skip the steps and get started quickly, click on the button below.
[Deploy to Cloudflare](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/staging/hello-world-do-template)
This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. Use this option if you are familiar with Cloudflare Workers, and wish to skip the step-by-step guidance.
You may wish to manually follow the steps if you are new to Cloudflare Workers.
## Prerequisites
## 1. Create a Worker project
You will access your Durable Object from a [Worker](/workers/). Your Worker application is an interface to interact with your Durable Object.
To create a Worker project, run:
Running `create cloudflare@latest` will install [Wrangler](/workers/wrangler/install-and-update/), the Workers CLI. You will use Wrangler to test and deploy your project.
This will create a new directory, which will include either a `src/index.js` or `src/index.ts` file to write your code and a [`wrangler.jsonc`](/workers/wrangler/configuration/) configuration file.
Move into your new directory:
```sh
cd durable-object-starter
```
## 2. Write a Durable Object class using SQL API
Before you create and access a Durable Object, its behavior must be defined by an ordinary exported JavaScript class.
:::note
If you do not use JavaScript or TypeScript, you will need a [shim](https://developer.mozilla.org/en-US/docs/Glossary/Shim) to translate your class definition to a JavaScript class.
:::
Your `MyDurableObject` class will have a constructor with two parameters. The first parameter, `ctx`, contains state specific to the Durable Object, including methods for accessing storage. The second parameter, `env`, contains any bindings you have associated with the Worker when you uploaded it.
```ts
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    // Required, as we're extending the base class.
    super(ctx, env);
  }
}
```
Workers communicate with a Durable Object using [remote procedure calls (RPC)](/workers/runtime-apis/rpc/#_top). Public methods on a Durable Object class are exposed as [RPC methods](/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/) to be called by another Worker.
Your file should now look like:
```ts
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    // Required, as we're extending the base class.
    super(ctx, env);
  }

  async sayHello(): Promise<string> {
    let result = this.ctx.storage.sql
      .exec("SELECT 'Hello, World!' as greeting")
      .one();
    return result.greeting;
  }
}
```
In the code above, you have:
1. Defined an RPC method, `sayHello()`, that can be called by a Worker to communicate with the Durable Object.
2. Accessed the Durable Object's attached storage, a private SQLite database only accessible to the object, using the [SQL API](/durable-objects/api/storage-api/#exec) method (`sql.exec()`) available on `ctx.storage`.
3. Returned an object representing the single-row query result using `one()`, which checks that the query result has exactly one row.
4. Returned the `greeting` column from the row object.
## 3. Instantiate and communicate with a Durable Object
:::note
Durable Objects do not receive requests directly from the Internet. Durable Objects receive requests from Workers or other Durable Objects.
This is achieved by configuring a binding in the calling Worker for each Durable Object class that you would like it to be able to talk to. These bindings must be configured at upload time. Methods exposed by the binding can be used to communicate with particular Durable Objects.
:::
A Worker is used to [access Durable Objects](/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/).
To communicate with a Durable Object, the Worker's fetch handler should look like the following:
```ts
export default {
  async fetch(request, env, ctx): Promise<Response> {
    const id: DurableObjectId = env.MY_DURABLE_OBJECT.idFromName(new URL(request.url).pathname);
    const stub = env.MY_DURABLE_OBJECT.get(id);
    const greeting = await stub.sayHello();
    return new Response(greeting);
  },
} satisfies ExportedHandler<Env>;
```
In the code above, you have:
1. Exported your Worker's main event handlers, such as the `fetch()` handler for receiving HTTP requests.
2. Passed `env` into the `fetch()` handler. Bindings are delivered as a property of the environment object passed as the second parameter when an event handler or class constructor is invoked. By calling the `idFromName()` function on the binding, you use a string-derived object ID. You can also ask the system to [generate random unique IDs](/durable-objects/api/namespace/#newuniqueid). System-generated unique IDs have better performance characteristics, but require you to store the ID somewhere to access the Object again later.
3. Derived an object ID from the URL path. `MY_DURABLE_OBJECT.idFromName()` always returns the same ID when given the same string as input (and called on the same class), but never the same ID for two different strings (or for different classes). In this case, you are creating a new object for each unique path.
4. Constructed the stub for the Durable Object using the ID. A stub is a client object used to send messages to the Durable Object.
5. Called the Durable Object by invoking its RPC method `sayHello()`, which returns a `Hello, World!` string greeting.
6. Returned an HTTP response to the client by constructing an HTTP `Response` with `return new Response()`.
Refer to [Access a Durable Object from a Worker](/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/) to learn more about communicating with a Durable Object.
## 4. Configure Durable Object bindings
[Bindings](/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare developer platform. The Durable Object bindings in your Worker project's [Wrangler configuration file](/workers/wrangler/configuration/) will include a binding name (for this guide, use `MY_DURABLE_OBJECT`) and the class name (`MyDurableObject`).
```toml
[[durable_objects.bindings]]
name = "MY_DURABLE_OBJECT"
class_name = "MyDurableObject"
```
The `bindings` section contains the following fields:
- `name` - Required. The binding name to use within your Worker.
- `class_name` - Required. The class name you wish to bind to.
- `script_name` - Optional. Defaults to the current [environment's](/durable-objects/reference/environments/) Worker code.
## 5. Configure Durable Object class with SQLite storage backend
A migration is a mapping process from a class name to a runtime state. You perform a migration when creating a new Durable Object class, or when renaming, deleting or transferring an existing Durable Object class.
Migrations are performed through the `[[migrations]]` configurations key in your Wrangler file.
The Durable Object migration to create a new Durable Object class with SQLite storage backend will look like the following in your Worker's Wrangler file:
```toml
[[migrations]]
tag = "v1" # Should be unique for each entry
new_sqlite_classes = ["MyDurableObject"] # Array of new classes
```
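The equivalent migration in `wrangler.jsonc`:
```json
{
  "migrations": [
    {
      "tag": "v1",
      "new_sqlite_classes": ["MyDurableObject"]
    }
  ]
}
```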
Refer to [Durable Objects migrations](/durable-objects/reference/durable-objects-migrations/) to learn more about the migration process.
## 6. Develop a Durable Object Worker locally
To test your Durable Object locally, run [`wrangler dev`](/workers/wrangler/commands/#dev):
```sh
npx wrangler dev
```
In your console, you should see a `Hello, World!` string returned by the Durable Object.
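For example, assuming the default local port `8787`, you can exercise the Worker with `curl`. The request path becomes the Durable Object's name, so each distinct path is served by its own object:
```sh
curl http://localhost:8787/my-first-object
# Hello, World!
```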
## 7. Deploy your Durable Object Worker
To deploy your Durable Object Worker:
```sh
npx wrangler deploy
```
Once deployed, you should be able to see your newly created Durable Object Worker on the [Cloudflare dashboard](https://dash.cloudflare.com/), **Workers & Pages** > **Overview**.
Preview your Durable Object Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.
## Summary and final code
Your final code should look like this:
```ts title="index.ts"
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    // Required, as we are extending the base class.
    super(ctx, env);
  }

  async sayHello(): Promise<string> {
    let result = this.ctx.storage.sql
      .exec("SELECT 'Hello, World!' as greeting")
      .one();
    return result.greeting;
  }
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const id: DurableObjectId = env.MY_DURABLE_OBJECT.idFromName(new URL(request.url).pathname);
    const stub = env.MY_DURABLE_OBJECT.get(id);
    const greeting = await stub.sayHello();
    return new Response(greeting);
  },
} satisfies ExportedHandler<Env>;
```
By finishing this tutorial, you have:
- Successfully created a Durable Object
- Called the Durable Object by invoking an [RPC method](/workers/runtime-apis/rpc/)
- Deployed the Durable Object globally
## Related resources
- [Create Durable Object stubs](/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/)
- [Access Durable Objects Storage](/durable-objects/best-practices/access-durable-objects-storage/)
- [Miniflare](https://github.com/cloudflare/workers-sdk/tree/main/packages/miniflare) - Helpful tools for mocking and testing your Durable Objects.
---
# Cloudflare Durable Objects
URL: https://developers.cloudflare.com/durable-objects/
import { Render, CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, LinkButton } from "~/components"
Create AI agents, collaborative applications, real-time interactions like chat, and more without needing to coordinate state, have separate storage, or manage infrastructure.
Durable Objects provide a building block for stateful applications and distributed systems.
Use Durable Objects to build applications that need coordination among multiple clients, like collaborative editing tools, interactive chat, multiplayer games, live notifications, and deep distributed systems, without requiring you to build serialization and coordination primitives on your own.
Get started
:::note
SQLite-backed Durable Objects are now available on the Workers Free plan with these [limits](/durable-objects/platform/pricing/).
[SQLite storage](/durable-objects/best-practices/access-durable-objects-storage/) and corresponding [Storage API](/durable-objects/api/storage-api/) methods like `sql.exec` have moved from beta to general availability. New Durable Object classes should use wrangler configuration for [SQLite storage](/durable-objects/best-practices/access-durable-objects-storage/#wrangler-configuration-for-sqlite-durable-objects).
:::
### What are Durable Objects?
For more information, refer to the full [What are Durable Objects?](/durable-objects/what-are-durable-objects/) page.
***
## Features
Learn how Durable Objects coordinate connections among multiple clients or events.
Learn how Durable Objects provide transactional, strongly consistent, and serializable storage.
Learn how WebSocket Hibernation allows you to manage the connections of multiple clients at scale.
Learn how to use alarms to trigger a Durable Object and perform compute in the future at customizable intervals.
***
## Related products
Cloudflare Workers provides a serverless execution environment that allows you to create new applications or augment existing ones without configuring or maintaining infrastructure.
D1 is Cloudflare's SQL-based native serverless database. Create a database by importing data or defining your tables and writing your queries within a Worker or through the API.
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
***
## More resources
Browse what other developers are building with Durable Objects.
Learn about Durable Objects limits.
Learn about Durable Objects pricing.
Learn more about storage and database options you can build with Workers.
Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Developer Platform.
---
# Release notes
URL: https://developers.cloudflare.com/durable-objects/release-notes/
import { ProductReleaseNotes } from "~/components";
---
# Videos
URL: https://developers.cloudflare.com/durable-objects/video-tutorials/
import { CardGrid, LinkCard } from "~/components";
---
# What are Durable Objects?
URL: https://developers.cloudflare.com/durable-objects/what-are-durable-objects/
import { Render } from "~/components";
## Durable Objects highlights
Durable Objects have properties that make them a great fit for distributed stateful scalable applications.
**Serverless compute, zero infrastructure management**
- Durable Objects are built on top of the Workers runtime, so they support exactly the same code (JavaScript and WASM), and similar memory and CPU limits.
- Each Durable Object is [implicitly created on first access](/durable-objects/api/namespace/#get). User applications are not concerned with their lifecycle, creating them or destroying them. Durable Objects migrate among healthy servers, and therefore applications never have to worry about managing them.
- Each Durable Object stays alive as long as requests are being processed, and remains alive for several seconds after being idle before hibernating, allowing applications to [exploit in-memory caching](/durable-objects/reference/in-memory-state/) while handling many consecutive requests and boosting their performance.
**Storage colocated with compute**
- Each Durable Object has its own [durable, transactional, and strongly consistent storage](/durable-objects/api/storage-api/) (up to 10 GB[^1]), persisted across requests, and accessible only within that object.
**Single-threaded concurrency**
- Each [Durable Object instance has an identifier](/durable-objects/api/id/), either randomly-generated or user-generated, which allows you to globally address which Durable Object should handle a specific action or request.
- Durable Objects are single-threaded and cooperatively multi-tasked, just like code running in a web browser. For more details on how safety and correctness are achieved, refer to the blog post ["Durable Objects: Easy, Fast, Correct — Choose three"](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/).
**Elastic horizontal scaling across Cloudflare's global network**
- Durable Objects can be spread around the world, and you can [optionally influence where each instance should be located](/durable-objects/reference/data-location/#provide-a-location-hint). Durable Objects are not yet available in every Cloudflare data center; refer to the [where.durableobjects.live](https://where.durableobjects.live/) project for live locations.
- Each Durable Object type (or ["Namespace binding"](/durable-objects/api/namespace/) in Cloudflare terms) corresponds to a JavaScript class implementing the actual logic. There is no hard limit on how many Durable Objects can be created for each namespace.
- Durable Objects scale elastically as your application creates millions of objects. There is no need for applications to manage infrastructure or plan ahead for capacity.
## Durable Objects features
### In-memory state
Each Durable Object has its own [in-memory state](/durable-objects/reference/in-memory-state/). Applications can use this in-memory state to optimize performance by keeping important information in memory, avoiding the need to access durable storage at all.
Useful cases for in-memory state include batching and aggregating information before persisting it to storage, or immediately rejecting or handling incoming requests that meet certain criteria.
In-memory state is reset when the Durable Object hibernates after being idle for some time. Therefore, it is important to persist any in-memory data to the durable storage if that data will be needed at a later time when the Durable Object receives another request.
### Storage API
The [Durable Object Storage API](/durable-objects/api/storage-api/) allows Durable Objects to access fast, transactional, and strongly consistent storage. A Durable Object's attached storage is private to its unique instance and cannot be accessed by other objects.
There are two flavors of the storage API, a [key-value (KV) API](/durable-objects/api/storage-api/#kv-api) and an [SQL API](/durable-objects/api/storage-api/#sql-api).
When using the [new SQLite in Durable Objects storage backend](/durable-objects/reference/durable-objects-migrations/#enable-sqlite-storage-backend-on-new-durable-object-class-migration), you have access to both APIs. However, if you use the previous storage backend, you only have access to the key-value API.
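As a minimal sketch of both flavors inside a single Durable Object method (the `Counter` class is illustrative, assuming a SQLite-backed class so that both APIs are available):
```ts
import { DurableObject } from "cloudflare:workers";

export class Counter extends DurableObject {
  async increment(): Promise<number> {
    // KV API: get/put a single value by key
    const current = (await this.ctx.storage.get<number>("count")) ?? 0;
    await this.ctx.storage.put("count", current + 1);

    // SQL API: the same private storage, queried relationally
    this.ctx.storage.sql.exec("CREATE TABLE IF NOT EXISTS events (at INTEGER)");
    this.ctx.storage.sql.exec("INSERT INTO events (at) VALUES (?)", Date.now());

    return current + 1;
  }
}
```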
### Alarms API
Durable Objects provide an [Alarms API](/durable-objects/api/alarms/) which allows you to schedule the Durable Object to be woken up at a time in the future. This is useful when you want to do certain work periodically, or at some specific point in time, without having to manually manage infrastructure such as job scheduling runners on your own.
You can combine Alarms with in-memory state and the durable storage API to build batch and aggregation applications such as queues, workflows, or advanced data pipelines.
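A minimal sketch of this batching pattern (the `Batcher` class and the 10-second window are illustrative):
```ts
import { DurableObject } from "cloudflare:workers";

export class Batcher extends DurableObject {
  async enqueue(item: string): Promise<void> {
    await this.ctx.storage.put(`item:${Date.now()}`, item);
    // Schedule a wake-up in 10 seconds, unless an alarm is already pending
    if ((await this.ctx.storage.getAlarm()) === null) {
      await this.ctx.storage.setAlarm(Date.now() + 10_000);
    }
  }

  async alarm(): Promise<void> {
    // Runs when the alarm fires, even if the object had gone idle
    const items = await this.ctx.storage.list({ prefix: "item:" });
    // ... flush the batched items downstream here ...
    await this.ctx.storage.deleteAll();
  }
}
```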
### WebSockets
WebSockets are long-lived TCP connections that enable bi-directional, real-time communication between client and server. Because WebSocket sessions are long-lived, applications commonly use Durable Objects to accept either the client or server connection.
Because Durable Objects provide a single-point-of-coordination between Cloudflare Workers, a single Durable Object instance can be used in parallel with WebSockets to coordinate between multiple clients, such as participants in a chat room or a multiplayer game.
Durable Objects support the [WebSocket Standard API](/durable-objects/best-practices/websockets/#websocket-standard-api), as well as the [WebSockets Hibernation API](/durable-objects/best-practices/websockets/#websocket-hibernation-api) which extends the Web Standard WebSocket API to reduce costs by not incurring billing charges during periods of inactivity.
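A minimal sketch of accepting a WebSocket with the Hibernation API (the `ChatRoom` class and broadcast logic are illustrative):
```ts
import { DurableObject } from "cloudflare:workers";

export class ChatRoom extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);
    // Hibernation API: the runtime may evict the object while keeping
    // the connection open, so no duty-cycle charges accrue while idle
    this.ctx.acceptWebSocket(server);
    return new Response(null, { status: 101, webSocket: client });
  }

  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    // Broadcast each incoming message to every connected client
    for (const socket of this.ctx.getWebSockets()) {
      socket.send(message);
    }
  }
}
```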
### RPC
Durable Objects support Workers [Remote-Procedure-Call (RPC)](/workers/runtime-apis/rpc/) which allows applications to use JavaScript-native methods and objects to communicate between Workers and Durable Objects.
Using RPC for communication makes applications easier to develop, simpler to reason about, and more efficient.
## Actor programming model
Another way to describe and think about Durable Objects is through the lens of the [Actor programming model](https://en.wikipedia.org/wiki/Actor_model). There are several popular examples of the Actor model supported at the programming language level through runtimes or library frameworks, like [Erlang](https://www.erlang.org/), [Elixir](https://elixir-lang.org/), [Akka](https://akka.io/), or [Microsoft Orleans for .NET](https://learn.microsoft.com/en-us/dotnet/orleans/overview).
The Actor model simplifies many problems in distributed systems by abstracting away communication between actors behind RPC calls (or message passing) that can be implemented on top of any transport protocol. It also avoids most of the concurrency pitfalls of shared-memory concurrency, such as race conditions when multiple processes or threads access the same data in memory.
Each Durable Object instance can be seen as an Actor instance, receiving messages (incoming HTTP/RPC requests), executing some logic in its own single-threaded context using its attached durable storage or in-memory state, and finally sending messages to the outside world (outgoing HTTP/RPC requests or responses), even to another Durable Object instance.
Each Durable Object has certain capabilities in terms of [how much work it can do](/durable-objects/platform/limits/#how-much-work-can-a-single-durable-object-do), which should influence the application's [architecture to fully take advantage of the platform](/reference-architecture/diagrams/storage/durable-object-control-data-plane-pattern/).
Durable Objects are natively integrated into Cloudflare's infrastructure, giving you the ultimate serverless platform to build distributed stateful applications exploiting the entirety of Cloudflare's network.
## Durable Objects in Cloudflare
Many of Cloudflare's products use Durable Objects. Some of our technical blog posts showcase real-world applications and use-cases where Durable Objects make building applications easier and simpler.
These blog posts may also serve as inspiration on how to architect scalable applications using Durable Objects, and how to integrate them with the rest of Cloudflare Developer Platform.
- [Durable Objects aren't just durable, they're fast: a 10x speedup for Cloudflare Queues](https://blog.cloudflare.com/how-we-built-cloudflare-queues/)
- [Behind the scenes with Stream Live, Cloudflare's live streaming service](https://blog.cloudflare.com/behind-the-scenes-with-stream-live-cloudflares-live-streaming-service/)
- [DO it again: how we used Durable Objects to add WebSockets support and authentication to AI Gateway](https://blog.cloudflare.com/do-it-again/)
- [Workers Builds: integrated CI/CD built on the Workers platform](https://blog.cloudflare.com/workers-builds-integrated-ci-cd-built-on-the-workers-platform/)
- [Build durable applications on Cloudflare Workers: you write the Workflows, we take care of the rest](https://blog.cloudflare.com/building-workflows-durable-execution-on-workers/)
- [Building D1: a Global Database](https://blog.cloudflare.com/building-d1-a-global-database/)
- [Billions and billions (of logs): scaling AI Gateway with the Cloudflare Developer Platform](https://blog.cloudflare.com/billions-and-billions-of-logs-scaling-ai-gateway-with-the-cloudflare/)
- [Indexing millions of HTTP requests using Durable Objects](https://blog.cloudflare.com/r2-rayid-retrieval/)
Finally, the following blog posts may help you learn some of the technical implementation aspects of Durable Objects, and how they work.
- [Durable Objects: Easy, Fast, Correct — Choose three](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/)
- [Zero-latency SQLite storage in every Durable Object](https://blog.cloudflare.com/sqlite-in-durable-objects/)
- [Workers Durable Objects Beta: A New Approach to Stateful Serverless](https://blog.cloudflare.com/introducing-workers-durable-objects/)
## Get started
Get started now by following the ["Get started" guide](/durable-objects/get-started/) to create your first application using Durable Objects.
[^1]: Storage per Durable Object with SQLite is currently 1 GB. This will be raised to 10 GB for general availability.
---
# Cloudflare Email Routing
URL: https://developers.cloudflare.com/email-routing/
import { Description, Feature, Plan, RelatedProduct, Render } from "~/components"
Create custom email addresses for your domain and route incoming emails to your preferred mailbox.
It is available to all Cloudflare customers [using Cloudflare as an authoritative nameserver](/dns/zone-setups/full-setup/).
***
## Features
Leverage the power of Cloudflare Workers to implement any logic you need to process your emails. Create rules as complex or simple as you need.
With Email Routing you can have many custom email addresses to use for specific situations.
Email Routing includes metrics to help you check on your email traffic history.
***
## Related products
Cloudflare Email Security is a cloud based service that stops phishing attacks, the biggest cybersecurity threat, across all traffic vectors - email, web and network.
Email Routing is available to customers using Cloudflare as an authoritative nameserver.
---
# Limits
URL: https://developers.cloudflare.com/email-routing/limits/
import { Render } from "~/components"
## Email Workers size limits
When you process emails with Email Workers and you are on [Workers’ free pricing tier](/workers/platform/pricing/) you might encounter an allocation error. This may happen due to the size of the emails you are processing and/or the complexity of your Email Worker. Refer to [Worker limits](/workers/platform/limits/#worker-limits) for more information.
You can use the [log functionality for Workers](/workers/observability/logs/) to look for messages related to CPU limits (such as `EXCEEDED_CPU`) and troubleshoot any issues regarding allocation errors.
If you encounter these error messages frequently, consider upgrading to the [Workers Paid plan](/workers/platform/pricing/) for higher usage limits.
## Message size
Currently, Email Routing does not support messages bigger than 25 MiB.
## Rules and addresses
| Feature | Limit |
| -------------------------------------------------------------------------------- | ----- |
| [Rules](/email-routing/setup/email-routing-addresses/) | 200 |
| [Addresses](/email-routing/setup/email-routing-addresses/#destination-addresses) | 200 |
## Email Routing summary for emails sent through Workers
Emails sent through Workers will show up in the Email Routing summary page as dropped even if they were successfully delivered.
---
# Postmaster
URL: https://developers.cloudflare.com/email-routing/postmaster/
This page provides technical information about Email Routing to professionals who administer email systems and to other email providers.
Here you will find best practices, rules, guidelines, and troubleshooting tools, as well as known limitations for Email Routing.
## Postmaster
### Authenticated Received Chain (ARC)
Email Routing supports [Authenticated Received Chain (ARC)](http://arc-spec.org/). ARC is an email authentication system designed to allow an intermediate email server (such as Email Routing) to preserve email authentication results. Google also supports ARC.
### Contact information
The best way to contact us is using our [community forum](https://community.cloudflare.com/new-topic?category=Feedback/Previews%20%26%20Betas&tags=email) or our [Discord server](https://discord.com/invite/cloudflaredev).
### DKIM signature
[DKIM (DomainKeys Identified Mail)](https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail) ensures that email messages are not altered in transit between the sender and the recipient's SMTP servers through public-key cryptography.
Through this standard, the sender publishes its public key to a domain's DNS once, and then signs the body of each message before it leaves the server. The recipient server reads the message, gets the domain public key from the domain's DNS, and validates the signature to ensure the message was not altered in transit.
Email Routing adds two new signatures to the emails in transit, one on behalf of the Cloudflare domain used for sender rewriting `email.cloudflare.net`, and another one on behalf of the customer's recipient domain.
Below is the DKIM key for `email.cloudflare.net`:
```sh
dig TXT cf2024-1._domainkey.email.cloudflare.net +short
```
```sh output
"v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiweykoi+o48IOGuP7GR3X0MOExCUDY/BCRHoWBnh3rChl7WhdyCxW3jgq1daEjPPqoi7sJvdg5hEQVsgVRQP4DcnQDVjGMbASQtrY4WmB1VebF+RPJB2ECPsEDTpeiI5ZyUAwJaVX7r6bznU67g7LvFq35yIo4sdlmtZGV+i0H4cpYH9+3JJ78k" "m4KXwaf9xUJCWF6nxeD+qG6Fyruw1Qlbds2r85U9dkNDVAS3gioCvELryh1TxKGiVTkg4wqHTyHfWsp7KD3WQHYJn0RyfJJu6YEmL77zonn7p2SRMvTMP3ZEXibnC9gz3nnhR6wcYL8Q7zXypKTMD58bTixDSJwIDAQAB"
```
You can find the DKIM key for the customer's `example.com` domain by querying the following:
```sh
dig TXT cf2024-1._domainkey.example.com +short
```
### DMARC enforcing
Email Routing enforces Domain-based Message Authentication, Reporting & Conformance (DMARC). Depending on the sender's DMARC policy, Email Routing will reject emails when there is an authentication failure. Refer to [dmarc.org](https://dmarc.org/) for more information on this protocol.
### IPv6 support
Currently, Email Routing will connect to the upstream SMTP servers using IPv6 if they provide AAAA records for their MX servers, and fall back to IPv4 if that is not possible.
Below is an example of a popular provider that supports IPv6:
```sh
dig mx gmail.com
```
```sh output
gmail.com. 3084 IN MX 5 gmail-smtp-in.l.google.com.
gmail.com. 3084 IN MX 20 alt2.gmail-smtp-in.l.google.com.
gmail.com. 3084 IN MX 40 alt4.gmail-smtp-in.l.google.com.
gmail.com. 3084 IN MX 10 alt1.gmail-smtp-in.l.google.com.
gmail.com. 3084 IN MX 30 alt3.gmail-smtp-in.l.google.com.
```
```sh
dig AAAA gmail-smtp-in.l.google.com
```
```sh output
gmail-smtp-in.l.google.com. 17 IN AAAA 2a00:1450:400c:c09::1b
```
Email Routing also supports IPv6 through Cloudflare’s inbound MX servers.
### MX, SPF, and DKIM records
Email Routing automatically adds a few DNS records to the zone when our customers enable Email Routing. If we take `example.com` as an example:
```txt
example.com. 300 IN MX 13 amir.mx.cloudflare.net.
example.com. 300 IN MX 86 linda.mx.cloudflare.net.
example.com. 300 IN MX 24 isaac.mx.cloudflare.net.
example.com. 300 IN TXT "v=spf1 include:_spf.mx.cloudflare.net ~all"
```
[The MX (mail exchange) records](https://www.cloudflare.com/learning/dns/dns-records/dns-mx-record/) tell the Internet where the inbound servers receiving email messages for the zone are. In this case, anyone who wants to send an email to `example.com` can use the `amir.mx.cloudflare.net`, `linda.mx.cloudflare.net`, or `isaac.mx.cloudflare.net` SMTP servers.
### Outbound prefixes
Email Routing sends its traffic using both IPv4 and IPv6 prefixes, when supported by the upstream SMTP server.
If you are a postmaster and are having trouble receiving Email Routing's emails, allow the following outbound IP addresses in your server configuration:
**IPv4**
`104.30.0.0/19`
**IPv6**
`2405:8100:c000::/38`
_Ranges last updated: December 13th, 2023_
### Outbound hostnames
In addition to the outbound prefixes, Email Routing will use the following outbound domains for the `HELO/EHLO` command:
- `cloudflare-email.net`
- `cloudflare-email.org`
- `cloudflare-email.com`
PTR records (reverse DNS) ensure that each hostname has a corresponding IP address. For example:
```sh
dig a-h.cloudflare-email.net +short
```
```sh output
104.30.0.7
```
```sh
dig -x 104.30.0.7 +short
```
```sh output
a-h.cloudflare-email.net.
```
### Sender rewriting
Email Routing rewrites the SMTP envelope sender (`MAIL FROM`) to the forwarding domain to avoid issues with [SPF](#spf-record). Email Routing uses the [Sender Rewriting Scheme](https://en.wikipedia.org/wiki/Sender_Rewriting_Scheme) to achieve this.
This has no effect on the end user's experience, though. The message headers will still report the original sender's `From:` address.
### SMTP errors
In most cases, Email Routing forwards the upstream SMTP errors back to the sender client in-session.
### Realtime Block Lists
Email Routing uses an internal Domain Name System Blocklist (DNSBL) service to check if the sender's IP is present in one or more Realtime Block Lists (RBLs). When the system detects an abusive IP, it blocks the email and returns an SMTP error:
```txt
554 found on one or more RBLs (abusixip). Refer to https://developers.cloudflare.com/email-routing/postmaster/#spam-and-abusive-traffic/
```
We update our RBLs regularly. You can use combined block list lookup services like [MxToolbox](https://mxtoolbox.com/blacklists.aspx) to check if your IP matches other RBLs. IP reputation blocks are usually temporary, but if you feel your IP should be removed immediately, please contact the RBL's maintainer mentioned in the SMTP error directly.
### Anti-spam
In addition to DNSBL, Email Routing uses advanced heuristic and statistical analysis of the email's headers and text to calculate a spam score. We inject the score in the custom `X-Cf-Spamh-Score` header:
```
X-Cf-Spamh-Score: 2
```
This header is visible in the forwarded email. The higher the score, 5 being the maximum, the more likely the email is spam. Currently, this system is experimental and passive; we do not act on it and suggest that upstream servers and email clients don't act on it either.
We will update this page with more information as we fine-tune the system.
### SPF record
An SPF DNS record is an anti-spoofing mechanism that is used to specify which IP addresses and domains are allowed to send emails on behalf of your zone.
The Internet Engineering Task Force (IETF) tracks the SPFv1 specification [in RFC 7208](https://datatracker.ietf.org/doc/html/rfc7208). Refer to the [SPF Record Syntax](http://www.open-spf.org/SPF_Record_Syntax/) to learn the SPF syntax.
Email Routing's SPF record contains the following:
```txt
v=spf1 include:_spf.mx.cloudflare.net ~all
```
In the example above:
- `spf1`: Refers to SPF version 1, the most widely adopted version of SPF.
- `include`: Include a second query to `_spf.mx.cloudflare.net` and allow its contents.
- `~all`: Otherwise [`SoftFail`](http://www.open-spf.org/SPF_Record_Syntax/) on all other origins. `SoftFail` means NOT allowed to send, but in transition. This instructs the upstream server to accept the email but mark it as suspicious if it came from any IP addresses outside of those defined in the SPF records.
If we do a TXT query to `_spf.mx.cloudflare.net`, we get:
```txt
_spf.mx.cloudflare.net. 300 IN TXT "v=spf1 ip4:104.30.0.0/20 ~all"
```
This response means:
- Allow all IPv4 IPs coming from the `104.30.0.0/20` subnet.
- Otherwise, `SoftFail`.
You can read more about SPF, DKIM, and DMARC in our [Tackling Email Spoofing and Phishing](https://blog.cloudflare.com/tackling-email-spoofing/) blog.
---
## Known limitations
Below, you will find information regarding known limitations for Email Routing.
### Email address internationalization (EAI)
Email Routing does not support [internationalized email addresses](https://en.wikipedia.org/wiki/International_email). Email Routing only supports [internationalized domain names](https://en.wikipedia.org/wiki/Internationalized_domain_name).
This means that you can have email addresses with an internationalized domain, but not an internationalized local-part (the first part of your email address, before the `@` symbol). Refer to the following examples:
- `info@piñata.es` - Supported.
- `piñata@piñata.es` - Not supported.
### Non-delivery reports (NDRs)
Email Routing does not forward non-delivery reports to the original sender. This means the sender will not receive a notification indicating that the email did not reach the intended destination.
### Restrictive DMARC policies can make forwarded emails fail
Due to the nature of email forwarding, restrictive DMARC policies might make forwarded emails fail to be delivered. Refer to [dmarc.org](https://dmarc.org/wiki/FAQ#My_users_often_forward_their_emails_to_another_mailbox.2C_how_do_I_keep_DMARC_valid.3F) for more information.
### Sending or replying to an email from your Cloudflare domain
Email Routing does not support sending or replying from your Cloudflare domain. When you reply to emails forwarded by Email Routing, the reply will be sent from your destination address (like `my-name@gmail.com`), not your custom address (like `info@my-company.com`).
### Signs such as `+` and `.` are treated as normal characters for custom addresses
Email Routing does not have advanced routing options. Characters such as `+` or `.`, which perform special actions in email providers like Gmail and Outlook, are currently treated as normal characters on custom addresses. More flexible routing options are in our roadmap.
---
# Demos and architectures
URL: https://developers.cloudflare.com/hyperdrive/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use Hyperdrive within your existing application and architecture.
## Demos
Explore the following demo applications for Hyperdrive.
## Reference architectures
Explore the following reference architectures that use Hyperdrive:
---
# Getting started
URL: https://developers.cloudflare.com/hyperdrive/get-started/
import { Render, PackageManagers, Tabs, TabItem } from "~/components";
Hyperdrive accelerates access to your existing databases from Cloudflare Workers, making even single-region databases feel globally distributed.
By maintaining a connection pool to your database within Cloudflare's network, Hyperdrive eliminates up to seven round-trips to your database before a query can even be sent: the TCP handshake (1x), TLS negotiation (3x), and database authentication (3x).
Hyperdrive understands the difference between read and write queries to your database, and caches the most common read queries, improving performance and reducing load on your origin database.
This guide will instruct you through:
- Creating your first Hyperdrive configuration.
- Creating a [Cloudflare Worker](/workers/) and binding it to your Hyperdrive configuration.
- Establishing a database connection from your Worker to a public database.
:::note
Hyperdrive currently works with PostgreSQL, MySQL and many compatible databases. This includes CockroachDB and Materialize (which are PostgreSQL-compatible), and Planetscale.
Learn more about the [databases that Hyperdrive supports](/hyperdrive/reference/supported-databases-and-features).
:::
## Prerequisites
Before you begin, ensure you have completed the following:
1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [`Node.js`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm). Use a Node version manager like [nvm](https://github.com/nvm-sh/nvm) or [Volta](https://volta.sh/) to avoid permission issues and change Node.js versions. [Wrangler](/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.
3. Have **a publicly accessible** PostgreSQL/MySQL (or compatible) database.
## 1. Log in
Before creating your Hyperdrive binding, log in with your Cloudflare account by running:
```sh
npx wrangler login
```
You will be directed to a web page asking you to log in to the Cloudflare dashboard. After you have logged in, you will be asked if Wrangler can make changes to your Cloudflare account. Scroll down and select **Allow** to continue.
## 2. Create a Worker
:::note[New to Workers?]
Refer to [How Workers works](/workers/reference/how-workers-works/) to learn how the Workers serverless execution model works. Go to the [Workers Get started guide](/workers/get-started/guide/) to set up your first Worker.
:::
Create a new project named `hyperdrive-tutorial` by running:
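For example, with npm (the exact prompt wording may vary by version):
```sh
# Select the "Hello World" Worker template when prompted
npm create cloudflare@latest -- hyperdrive-tutorial
```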
This will create a new `hyperdrive-tutorial` directory. Your new `hyperdrive-tutorial` directory will include:
- A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) at `src/index.ts`.
- A [`wrangler.jsonc`](/workers/wrangler/configuration/) configuration file. `wrangler.jsonc` is how your `hyperdrive-tutorial` Worker will connect to Hyperdrive.
### Enable Node.js compatibility
[Node.js compatibility](/workers/runtime-apis/nodejs/) is required for database drivers, and needs to be configured for your Workers project.
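Concretely, this means adding the `nodejs_compat` compatibility flag to your `wrangler.jsonc`:
```json
{
  "compatibility_flags": [
    "nodejs_compat"
  ]
}
```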
## 3. Connect Hyperdrive to a database
Hyperdrive works by connecting to your database, pooling database connections globally, and speeding up your database access through Cloudflare's network.
Hyperdrive provides a secure connection string, accessible only from your Worker, that you can use to connect to your database through Hyperdrive.
This means that you can use the Hyperdrive connection string with your existing drivers or ORM libraries without needing significant changes to your code.
To create your first Hyperdrive database configuration, change into the directory you just created for your Workers project:
```sh
cd hyperdrive-tutorial
```
To create your first Hyperdrive, you will need:
- The IP address (or hostname) and port of your database.
- The database username (for example, `hyperdrive-demo`).
- The password associated with that username.
- The name of the database you want Hyperdrive to connect to. For example, `postgres` or `mysql`.
Hyperdrive accepts the combination of these parameters in the common connection string format used by database drivers:
```txt
postgres://USERNAME:PASSWORD@HOSTNAME_OR_IP_ADDRESS:PORT/database_name
```
Most database providers will provide a connection string you can copy-and-paste directly into Hyperdrive.
To create a Hyperdrive connection, run the `wrangler` command, replacing the placeholder values passed to the `--connection-string` flag with the values of your existing database:
```sh
npx wrangler hyperdrive create --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```
```txt
mysql://USERNAME:PASSWORD@HOSTNAME_OR_IP_ADDRESS:PORT/database_name
```
Most database providers will provide a connection string you can copy-and-paste directly into Hyperdrive.
To create a Hyperdrive connection, run the `wrangler` command, replacing the placeholder values passed to the `--connection-string` flag with the values of your existing database:
```sh
npx wrangler hyperdrive create --connection-string="mysql://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```
:::note[Manage caching]
By default, Hyperdrive will cache query results. If you wish to disable caching, pass the flag `--caching-disabled`.
Alternatively, you can use the `--max-age` flag to specify the maximum duration (in seconds) for which items should persist in the cache before they are evicted. The default value is 60 seconds.
Refer to [Hyperdrive Wrangler commands](/hyperdrive/reference/wrangler-commands/) for more information.
:::
If successful, the command will output your new Hyperdrive configuration:
```json
{
"hyperdrive": [
{
"binding": "HYPERDRIVE",
"id": ""
}
]
}
```
Copy the `id` field: you will use this in the next step to make Hyperdrive accessible from your Worker script.
:::note
Hyperdrive will attempt to connect to your database with the provided credentials to verify they are correct before creating a configuration. If you encounter an error when attempting to connect, refer to Hyperdrive's [troubleshooting documentation](/hyperdrive/observability/troubleshooting/) to debug possible causes.
:::
## 4. Bind your Worker to Hyperdrive
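Add a Hyperdrive binding to your `wrangler.jsonc`, using the `id` you copied in the previous step (a sketch; `<YOUR_HYPERDRIVE_ID>` is the placeholder for your configuration's ID):
```json
{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<YOUR_HYPERDRIVE_ID>"
    }
  ]
}
```
The `binding` name (`HYPERDRIVE`) is how the configuration is exposed on the `env` object in your Worker code.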
## 5. Run a query against your database
Once you have created a Hyperdrive configuration and bound it to your Worker, you can run a query against your database.
### Install a database driver
To connect to your database, you will need a database driver which allows you to authenticate and query your database. For this tutorial, you will use [Postgres.js](https://github.com/porsager/postgres), one of the most widely used PostgreSQL drivers.
To install `postgres`, ensure you are in the `hyperdrive-tutorial` directory. Open your terminal and run the following command:
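For example, with npm:
```sh
npm install postgres
```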
With the driver installed, you can now create a Worker script that queries your database.
To connect to your database, you will need a database driver which allows you to authenticate and query your database. For this tutorial, you will use [mysql2](https://github.com/sidorares/node-mysql2), one of the most widely used MySQL drivers.
To install `mysql2`, ensure you are in the `hyperdrive-tutorial` directory. Open your terminal and run the following command:
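For example, with npm:
```sh
npm install mysql2
```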
With the driver installed, you can now create a Worker script that queries your database.
### Write a Worker
After you have set up your database, you will run a SQL query from within your Worker.
Go to your `hyperdrive-tutorial` Worker and open the `index.ts` file.
The `index.ts` file is where you configure your Worker's interactions with Hyperdrive.
Populate your `index.ts` file with the following code:
```typescript
// Postgres.js 3.4.5 or later is recommended
import postgres from "postgres";
export interface Env {
// If you set another name in the Wrangler config file as the value for 'binding',
// replace "HYPERDRIVE" with the variable name you defined.
HYPERDRIVE: Hyperdrive;
}
export default {
async fetch(request, env, ctx): Promise<Response> {
// Create a connection using the Postgres.js driver (or any supported driver, ORM or query builder)
// with the Hyperdrive credentials. These credentials are only accessible from your Worker.
const sql = postgres(env.HYPERDRIVE.connectionString, {
// Workers limit the number of concurrent external connections, so be sure to limit
// the size of the local connection pool that postgres.js may establish.
max: 5,
// If you are not using array types in your Postgres schema,
// disabling this will save you an extra round-trip every time you connect.
fetch_types: false,
});
try {
// Sample query
const results = await sql`SELECT * FROM pg_tables`;
// Clean up the client after the response is returned, before the Worker is killed
ctx.waitUntil(sql.end());
// Return result rows as JSON
return Response.json(results);
} catch (e) {
console.error(e);
return Response.json(
{ error: e instanceof Error ? e.message : e },
{ status: 500 },
);
}
},
} satisfies ExportedHandler<Env>;
```
Upon receiving a request, the code above does the following:
1. Creates a new database client configured to connect to your database via Hyperdrive, using the Hyperdrive connection string.
2. Initiates a query via `await sql` that outputs all tables (user and system created) in the database (as an example query).
3. Returns the response as JSON to the client.
After you have set up your database, you will run a SQL query from within your Worker.
Go to your `hyperdrive-tutorial` Worker and open the `index.ts` file.
The `index.ts` file is where you configure your Worker's interactions with Hyperdrive.
Populate your `index.ts` file with the following code:
```typescript
// mysql2 v3.13.0 or later is required
import { createConnection } from "mysql2/promise";

export interface Env {
  // If you set another name in the Wrangler config file as the value for 'binding',
  // replace "HYPERDRIVE" with the variable name you defined.
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Create a connection using the mysql2 driver (or any supported driver, ORM or query builder)
    // with the Hyperdrive credentials. These credentials are only accessible from your Worker.
    const connection = await createConnection({
      host: env.HYPERDRIVE.host,
      user: env.HYPERDRIVE.user,
      password: env.HYPERDRIVE.password,
      database: env.HYPERDRIVE.database,
      port: env.HYPERDRIVE.port,
      // The following line is needed for mysql2 compatibility with Workers
      // mysql2 uses eval() to optimize result parsing for rows with > 100 columns
      // Configure mysql2 to use static parsing instead of eval() parsing with disableEval
      disableEval: true,
    });
    try {
      // Sample query
      const [results, fields] = await connection.query("SHOW tables;");
      // Clean up the client after the response is returned, before the Worker is killed
      ctx.waitUntil(connection.end());
      // Return result rows as JSON
      return new Response(JSON.stringify({ results, fields }), {
        headers: {
          "Content-Type": "application/json",
          "Access-Control-Allow-Origin": "*",
        },
      });
    } catch (e) {
      console.error(e);
      return Response.json(
        { error: e instanceof Error ? e.message : e },
        { status: 500 },
      );
    }
  },
} satisfies ExportedHandler<Env>;
```
Upon receiving a request, the code above does the following:
1. Creates a new database client configured to connect to your database via Hyperdrive, using the Hyperdrive connection string.
2. Initiates a query via `await connection.query` that outputs all tables (user and system created) in the database (as an example query).
3. Returns the response as JSON to the client.
## 6. Deploy your Worker
You can now deploy your Worker to make your project accessible on the Internet. To deploy your Worker, run:
```sh
npx wrangler deploy
# Outputs: https://hyperdrive-tutorial.<YOUR_SUBDOMAIN>.workers.dev
```
You can now visit the URL for your newly created project to query your live database.
For example, if the URL of your new Worker is `hyperdrive-tutorial.<YOUR_SUBDOMAIN>.workers.dev`, accessing `https://hyperdrive-tutorial.<YOUR_SUBDOMAIN>.workers.dev/` will send a request to your Worker that queries your database directly.
By finishing this tutorial, you have created a Hyperdrive configuration, created a Worker to access that database, and deployed your project globally.
## Next steps
- Learn more about [how Hyperdrive works](/hyperdrive/configuration/how-hyperdrive-works/).
- How to [configure query caching](/hyperdrive/configuration/query-caching/).
- [Troubleshooting common issues](/hyperdrive/observability/troubleshooting/) when connecting a database to Hyperdrive.
If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com).
---
# Hyperdrive
URL: https://developers.cloudflare.com/hyperdrive/
import {
CardGrid,
Description,
Feature,
LinkTitleCard,
Plan,
RelatedProduct,
Tabs,
TabItem,
LinkButton,
} from "~/components";
Turn your existing regional database into a globally distributed database.
Hyperdrive is a service that accelerates queries you make to existing databases, making it faster to access your data from across the globe from [Cloudflare Workers](/workers/), irrespective of your users' location.
Hyperdrive supports any Postgres or MySQL database, including those hosted on AWS, Google Cloud, Azure, Neon and Planetscale. Hyperdrive also supports Postgres-compatible databases like CockroachDB and Timescale.
You do not need to write new code or replace your favorite tools: Hyperdrive works with your existing code and tools you use.
Use Hyperdrive's connection string from your Cloudflare Workers application with your existing Postgres drivers and object-relational mapping (ORM) libraries:
```ts
import postgres from 'postgres';

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Hyperdrive provides a unique generated connection string to connect to
    // your database via Hyperdrive that can be used with your existing tools
    const sql = postgres(env.HYPERDRIVE.connectionString);
    try {
      // Sample SQL query
      const results = await sql`SELECT * FROM pg_tables`;
      // Close the client after the response is returned
      ctx.waitUntil(sql.end());
      return Response.json(results);
    } catch (e) {
      return Response.json({ error: e instanceof Error ? e.message : e }, { status: 500 });
    }
  },
} satisfies ExportedHandler<{ HYPERDRIVE: Hyperdrive }>;
```
```json
{
"$schema": "node_modules/wrangler/config-schema.json",
"name": "WORKER-NAME",
"main": "src/index.ts",
"compatibility_date": "2025-02-04",
"compatibility_flags": [
"nodejs_compat"
],
"observability": {
"enabled": true
},
"hyperdrive": [
{
"binding": "HYPERDRIVE",
"id": "",
"localConnectionString": ""
}
]
}
```
```ts
import { createConnection } from 'mysql2/promise';

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const connection = await createConnection({
      host: env.HYPERDRIVE.host,
      user: env.HYPERDRIVE.user,
      password: env.HYPERDRIVE.password,
      database: env.HYPERDRIVE.database,
      port: env.HYPERDRIVE.port,
      // This is needed to use mysql2 with Workers
      // This configures mysql2 to use static parsing instead of eval() parsing (not available on Workers)
      disableEval: true
    });
    const [results, fields] = await connection.query(
      'SHOW tables;'
    );
    return new Response(JSON.stringify({ results, fields }), {
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': '*',
      },
    });
  },
} satisfies ExportedHandler<{ HYPERDRIVE: Hyperdrive }>;
```
```json
{
"$schema": "node_modules/wrangler/config-schema.json",
"name": "WORKER-NAME",
"main": "src/index.ts",
"compatibility_date": "2025-02-04",
"compatibility_flags": [
"nodejs_compat"
],
"observability": {
"enabled": true
},
"hyperdrive": [
{
"binding": "HYPERDRIVE",
"id": "",
"localConnectionString": ""
}
]
}
```
Get started
---
## Features
Connect Hyperdrive to your existing database and deploy a [Worker](/workers/) that queries it.
Hyperdrive allows you to connect to any PostgreSQL or PostgreSQL-compatible database.
Hyperdrive allows you to connect to any MySQL database.
Use Hyperdrive to cache the most popular queries executed against your database.
---
## Related products
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
Deploy dynamic front-end applications in record time.
---
## More resources
Learn about Hyperdrive's pricing.
Learn about Hyperdrive limits.
Learn more about the storage and database options you can build on with
Workers.
Connect with the Workers community on Discord to ask questions, show what you
are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and
what is new in Cloudflare Developer Platform.
---
# Demos and architectures
URL: https://developers.cloudflare.com/images/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use Images within your existing architecture.
## Demos
Explore the following demo applications for Images.
## Reference architectures
Explore the following reference architectures that use Images:
---
# Getting started
URL: https://developers.cloudflare.com/images/get-started/
In this guide, you will get started with Cloudflare Images and make your first API request.
## Prerequisites
Before you make your first API request, ensure that you have a Cloudflare Account ID and an API token.
Refer to [Find zone and account IDs](/fundamentals/setup/find-account-and-zone-ids/) for help locating your Account ID and [Create an API token](/fundamentals/api/get-started/create-token/) to learn how to create and access your API token.
## Make your first API request
```bash
curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/images/v1 \
  --header 'Authorization: Bearer <API_TOKEN>' \
  --header 'Content-Type: multipart/form-data' \
  --form 'file=@./<YOUR_IMAGE.IMG>'
```
## Enable transformations on your zone
You can dynamically optimize images that are stored outside of Cloudflare Images and deliver them using [transformation URLs](/images/transform-images/transform-via-url/).
Cloudflare will automatically cache every transformed image on our global network so that you store only the original image at your origin.
To enable transformations on your zone:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/login) and select your account.
2. Go to **Images** > **Transformations**.
3. Go to the specific zone where you want to enable transformations.
4. Select **Enable for zone**. This will allow you to optimize and deliver remote images.
:::note
With **Resize images from any origin** unchecked, only the initial URL passed will be checked. Any redirect returned will be followed, including if it leaves the zone, and the resulting image will be transformed.
:::
:::note
If you are using transformations in a Worker, you need to include the appropriate logic in your Worker code to prevent resizing images from any origin. Unchecking this option in the dash does not apply to transformation requests coming from Cloudflare Workers.
:::
---
# Cloudflare Images
URL: https://developers.cloudflare.com/images/
import { CardGrid, Description, Feature, LinkTitleCard, Plan } from "~/components"
Store, transform, optimize, and deliver images at scale
Cloudflare Images provides an end-to-end solution designed to help you streamline your image infrastructure from a single API and runs on [Cloudflare's global network](https://www.cloudflare.com/network/).
There are two different ways to use Images:
- **Efficiently store and deliver images.** You can upload images into Cloudflare Images and dynamically deliver multiple variants of the same original image.
- **Optimize images that are stored outside of Images.** You can make transformation requests to optimize any publicly available image on the Internet.
Cloudflare Images is available on both [Free and Paid plans](/images/pricing/). By default, all users have access to the Images Free plan, which includes limited usage of the transformations feature to optimize images in remote sources.
:::note[Image Resizing is now available as transformations]
All Image Resizing features are available as transformations with Images. Each unique transformation is billed only once per 30 days.
If you are using a legacy plan with Image Resizing, visit the [dashboard](https://dash.cloudflare.com/) to switch to an Images plan.
:::
***
## Features
Use Cloudflare’s edge network to store your images.
Accept uploads directly and securely from your users by generating a one-time token.
Add up to 100 variants to specify how images should be resized for various use cases.
Control access to your images by using signed URL tokens.
***
## More resources
Engage with other users and the Images team on Cloudflare support forum.
---
# Pricing
URL: https://developers.cloudflare.com/images/pricing/
By default, all users are on the Images Free plan. The Free plan includes access to the transformations feature, which lets you optimize images stored outside of Images, like in R2.
The Paid plan allows transformations, as well as access to storage in Images.
Pricing is dependent on which features you use. The table below shows which metrics are used for each use case.
| Use case | Metrics | Availability |
|----------|---------|--------------|
| Optimize images stored outside of Images | Images Transformed | Free and Paid plans |
| Optimize images that are stored in Cloudflare Images | Images Stored, Images Delivered | Only Paid plans |
## Images Free
On the Free plan, you can request up to 5,000 unique transformations each month for free.
Once you exceed 5,000 unique transformations:
- Existing transformations in cache will continue to be served as expected.
- New transformations will return a `9422` error. If your source image is from the same domain where the transformation is served, then you can use the [`onerror` parameter](/images/transform-images/transform-via-url/#onerror) to redirect to the original image.
- You will not be charged for exceeding the limits in the Free plan.
To request more than 5,000 unique transformations each month, you can purchase an Images Paid plan.
## Images Paid
When you purchase an Images Paid plan, you can choose your own storage or add storage in Images.
| Metric | Pricing |
|--------|---------|
| Images Transformed | First 5,000 unique transformations included + $0.50 / 1,000 unique transformations / month |
| Images Stored | $5 / 100,000 images stored / month |
| Images Delivered | $1 / 100,000 images delivered / month |
If you optimize an image stored outside of Images, then you will be billed only for Images Transformed.
In contrast, Images Stored and Images Delivered apply only to images that are stored in Images. When you optimize an image that is stored in Images, then this counts toward Images Delivered — not Images Transformed.
## Metrics
### Images Transformed
A unique transformation is a request to transform an original image based on a set of [supported parameters](/images/transform-images/transform-via-url/#options). This metric is used only when optimizing images that are stored outside of Images.
For example, if you transform `thumbnail.jpg` as 100x100, then this counts as 1 unique transformation. If you transform the same `thumbnail.jpg` as 200x200, then this counts as a separate unique transformation.
You are billed for the number of unique transformations that are counted during each billing period.
Unique transformations are counted over a 30-day sliding window. For example, if you request `width=100/thumbnail.jpg` on June 30, then this counts once for that billing period. If you request the same transformation on July 1, then this will not count as a billable request, since the same transformation was already requested within the last 30 days.
The `format` parameter counts as only 1 billable transformation, even if multiple copies of an image are served. In other words, if `width=100,format=auto/thumbnail.jpg` is served to some users as AVIF and to others as WebP, then this counts as 1 unique transformation instead of 2.
#### Example
A retail website has 1,000 original product images that each get served in 5 different sizes. This results in 5,000 unique transformations each month, which at $0.50 per 1,000 works out to $2.50 per month (note that on the Paid plan, the first 5,000 unique transformations each month are included at no extra charge).
### Images Stored
Storage in Images is available only with an Images Paid plan. You can purchase storage in increments of $5 for every 100,000 images stored per month.
You can create predefined variants to specify how an image should be resized, such as `thumbnail` as 100x100 and `hero` as 1600x500.
Only uploaded images count toward Images Stored; defining variants will not impact your storage limit.
### Images Delivered
For images that are stored in Images, you will incur $1 for every 100,000 images delivered per month. This metric does not include transformed images that are stored in remote sources.
Every image requested by the browser counts as 1 billable request.
#### Example
A retail website has a product page that uses Images to serve 10 images. If the page was visited 10,000 times this month, then this results in 100,000 images delivered — or $1.00 in billable usage.
---
# Demos and architectures
URL: https://developers.cloudflare.com/kv/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use KV within your existing application and architecture.
## Demo applications
Explore the following demo applications for KV.
## Reference architectures
Explore the following reference architectures that use KV:
---
# Getting started
URL: https://developers.cloudflare.com/kv/get-started/
import { Render, PackageManagers, Steps, FileTree, Details, Tabs, TabItem, WranglerConfig } from "~/components";
Workers KV provides low-latency, high-throughput global storage to your [Cloudflare Workers](/workers/) applications. Workers KV is ideal for storing user configuration data, routing data, A/B testing configurations and authentication tokens, and is well suited for read-heavy workloads.
This guide instructs you through:
- Creating a KV namespace.
- Writing key-value pairs to your KV namespace from a Cloudflare Worker.
- Reading key-value pairs from a KV namespace.
You can perform these tasks through the CLI or through the Cloudflare dashboard.
## Prerequisites
## 1. Create a Worker project
:::note[New to Workers?]
Refer to [How Workers works](/workers/reference/how-workers-works/) to learn how the Workers serverless execution model works. Go to the [Workers Get started guide](/workers/get-started/guide/) to set up your first Worker.
:::
Create a new Worker to read and write to your KV namespace.
1. Create a new project named `kv-tutorial` by running:
This creates a new `kv-tutorial` directory, illustrated below.
- kv-tutorial/
- node_modules/
- test/
- src/
- **index.ts**
- package-lock.json
- package.json
- tsconfig.json
- vitest.config.mts
- worker-configuration.d.ts
- **wrangler.jsonc**
Your new `kv-tutorial` directory includes:
- A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) in `index.ts`.
- A [`wrangler.jsonc`](/workers/wrangler/configuration/) configuration file. `wrangler.jsonc` is how your `kv-tutorial` Worker accesses your KV namespace.
2. Change into the directory you just created for your Worker project:
```sh
cd kv-tutorial
```
:::note
If you are familiar with Cloudflare Workers, or initializing projects in a Continuous Integration (CI) environment, initialize a new project non-interactively by setting `CI=true` as an environment variable when running `create cloudflare@latest`.
For example: `CI=true npm create cloudflare@latest kv-tutorial --type=simple --git --ts --deploy=false` creates a basic "Hello World" project ready to build on.
:::
1. Log in to your Cloudflare dashboard and select your account.
2. Go to [your account > **Workers & Pages** > **Overview**](https://dash.cloudflare.com/?to=/:account/workers-and-pages).
3. Select **Create**.
4. Select **Create Worker**.
5. Name your Worker. For this tutorial, name your Worker `kv-tutorial`.
6. Select **Deploy**.
## 2. Create a KV namespace
A [KV namespace](/kv/concepts/kv-namespaces/) is a key-value database replicated to Cloudflare’s global network.
[Wrangler](/workers/wrangler/) allows you to put, list, get, and delete entries within your KV namespace.
:::note
KV operations are scoped to your account.
:::
To create a KV namespace via Wrangler:
1. Open your terminal and run the following command:
```sh
npx wrangler kv namespace create
```
The `npx wrangler kv namespace create` subcommand takes a new binding name as its argument. A KV namespace is created using a concatenation of your Worker's name (from your Wrangler file) and the binding name you provide. A `BINDING_ID` is randomly generated for you.
For this tutorial, use the binding name `BINDING_NAME`.
```sh
npx wrangler kv namespace create BINDING_NAME
```
```sh output
🌀 Creating namespace with title kv-tutorial-BINDING_NAME
✨ Success!
Add the following to your configuration file:
[[kv_namespaces]]
binding = "BINDING_NAME"
id = ""
```
1. Go to [**Storage & Databases** > **KV**](https://dash.cloudflare.com/?to=/:account/workers/kv/namespaces).
2. Select **Create a namespace**.
3. Enter a name for your namespace. For this tutorial, use `kv_tutorial_namespace`.
4. Select **Add**.
## 3. Bind your Worker to your KV namespace
You must create a binding to connect your Worker with your KV namespace. [Bindings](/workers/runtime-apis/bindings/) allow your Workers to access resources, like KV, on the Cloudflare developer platform.
To bind your KV namespace to your Worker:
1. In your Wrangler file, add the following with the values generated in your terminal from [step 2](/kv/get-started/#2-create-a-kv-namespace):
```toml
[[kv_namespaces]]
binding = ""
id = ""
```
Binding names do not need to correspond to the namespace you created. Binding names are only a reference. Specifically:
- The value (string) you set for `` is used to reference this KV namespace in your Worker. For this tutorial, this should be `BINDING_NAME`.
- The binding must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_KV"` or `binding = "routingConfig"` would both be valid names for the binding.
- Your binding is available at `env.` from within your Worker.
:::note[Bindings]
A binding is how your Worker interacts with external resources such as [KV namespaces](/kv/concepts/kv-namespaces/). A binding is a runtime variable that the Workers runtime provides to your code. You can declare a variable name in your Wrangler file that binds to these resources at runtime, and interact with them through this variable. Every binding's variable name and behavior is determined by you when deploying the Worker.
Refer to [Environment](/kv/reference/environments/) for more information.
:::
1. Go to [**Workers & Pages** > **Overview**](https://dash.cloudflare.com/?to=/:account/workers-and-pages).
2. Select the `kv-tutorial` Worker you created in [step 1](/kv/get-started/#1-create-a-worker-project).
3. Select **Settings**.
4. Scroll to **Bindings**, then select **Add**.
5. Select **KV namespace**.
6. Name your binding (`BINDING_NAME`) in **Variable name**, then select the KV namespace (`kv_tutorial_namespace`) you created in [step 2](/kv/get-started/#2-create-a-kv-namespace) from the dropdown menu.
7. Select **Deploy** to deploy your binding.
## 4. Interact with your KV namespace
You can interact with your KV namespace via [Wrangler](/workers/wrangler/install-and-update/) or directly from your [Workers](/workers/) application.
### Write a value
To write a value to your empty KV namespace using Wrangler:
1. Run the `wrangler kv key put` subcommand in your terminal, and input your key and value respectively. `` and `` are values of your choice.
```sh
npx wrangler kv key put --binding= "" ""
```
```sh output
Writing the value "" to key "" on namespace .
```
Instead of using `--binding`, you can also use `--namespace-id` to specify which KV namespace should receive the operation:
```sh
npx wrangler kv key put --namespace-id= "" ""
```
```sh output
Writing the value "" to key "" on namespace .
```
To create a key and a value in local mode, add the `--local` flag at the end of the command:
```sh
npx wrangler kv key put --namespace-id=xxxxxxxxxxxxxxxx "" "" --local
```
1. Go to [**Storage & Databases** > **KV**](https://dash.cloudflare.com/?to=/:account/workers/kv/namespaces).
2. Select the KV namespace you created (`kv_tutorial_namespace`), then select **View**.
3. Select **KV Pairs**.
4. Enter a `` of your choice.
5. Enter a `` of your choice.
6. Select **Add entry**.
### Get a value
To access the value using Wrangler:
1. Run the `wrangler kv key get` subcommand in your terminal, and input your key:
```sh
# Replace [OPTIONS] with --binding or --namespace-id
npx wrangler kv key get [OPTIONS] ""
```
A KV namespace can be specified in two ways:
```sh
npx wrangler kv key get --binding= ""
```
```sh
npx wrangler kv key get --namespace-id= ""
```
You can add a `--preview` flag to interact with a preview namespace instead of a production namespace.
:::caution
Exactly **one** of `--binding` or `--namespace-id` is required.
:::
:::note
To view the value directly within the terminal, add the `--text` flag.
:::
Refer to the [`kv bulk` documentation](/kv/reference/kv-commands/#kv-bulk) to write a file of multiple key-value pairs to a given KV namespace.
You can view key-value pairs directly from the dashboard.
1. Go to your account > **Storage & Databases** > **KV**.
2. Go to the KV namespace you created (`kv_tutorial_namespace`), then select **View**.
3. Select **KV Pairs**.
## 5. Access your KV namespace from your Worker
:::note
When using [`wrangler dev`](/workers/wrangler/commands/#dev) to develop locally, Wrangler defaults to using a local version of KV to avoid interfering with any of your live production data in KV. This means that reading keys that you have not written locally returns null.
To have `wrangler dev` connect to your Workers KV namespace running on Cloudflare's global network, call `wrangler dev --remote` instead. This uses the `preview_id` of the KV binding configuration in the Wrangler file. Refer to the [KV binding docs](/kv/concepts/kv-bindings/#use-kv-bindings-when-developing-locally) for more information.
:::
1. In your Worker script, add your KV binding in the `Env` interface:
```ts
interface Env {
	BINDING_NAME: KVNamespace;
	// ... other binding types
}
```
2. Use the `put()` method on `BINDING_NAME` to create a new key-value pair, or to update the value for a particular key:
```ts
await env.BINDING_NAME.put("KEY", "VALUE");
```
3. Use the KV `get()` method to fetch the data you stored in your KV database:
```ts
let value = await env.BINDING_NAME.get("KEY");
```
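Both methods accept optional arguments beyond the key and value. As a minimal sketch (the 60-second TTL and JSON payload are illustrative, not part of this tutorial), you can expire a value automatically on write and parse JSON on read:
```ts
// Write a JSON value that KV deletes automatically 60 seconds after the write.
await env.BINDING_NAME.put("KEY", JSON.stringify({ hello: "world" }), {
	expirationTtl: 60,
});

// Read the value back, parsed as JSON instead of returned as text.
const data = await env.BINDING_NAME.get("KEY", { type: "json" });
```
Refer to the [KV API](/kv/api/) for the full set of options.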
Your Worker code should look like this:
```ts
export interface Env {
	BINDING_NAME: KVNamespace;
}

export default {
	async fetch(request, env, ctx): Promise<Response> {
		try {
			await env.BINDING_NAME.put("KEY", "VALUE");
			const value = await env.BINDING_NAME.get("KEY");
			if (value === null) {
				return new Response("Value not found", { status: 404 });
			}
			return new Response(value);
		} catch (err) {
			// In a production application, you could instead choose to retry your KV
			// read or fall back to a default code path.
			console.error(`KV returned error: ${err}`);
			return new Response(String(err), { status: 500 });
		}
	},
} satisfies ExportedHandler<Env>;
```
The code above:
1. Writes a key to `BINDING_NAME` using KV's `put()` method.
2. Reads the same key using KV's `get()` method, and returns a `404` error if the key is not found.
3. Uses JavaScript's [`try...catch`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) exception handling to catch potential errors. When writing or reading from any service, such as Workers KV or external APIs using `fetch()`, you should expect to handle exceptions explicitly.
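As a sketch of the retry-or-fallback approach mentioned in the comments above (the helper name, single retry, and default value are illustrative, not part of this tutorial):
```ts
// Hypothetical helper: attempt a KV read up to two times, returning a
// default value if the read throws or the key is missing both times.
async function getWithFallback(
	kv: KVNamespace,
	key: string,
	fallback: string,
): Promise<string> {
	for (let attempt = 0; attempt < 2; attempt++) {
		try {
			const value = await kv.get(key);
			if (value !== null) {
				return value;
			}
		} catch (err) {
			console.error(`KV read failed (attempt ${attempt + 1}): ${err}`);
		}
	}
	return fallback;
}
```
Inside the `fetch` handler above, you could then call `await getWithFallback(env.BINDING_NAME, "KEY", "default-value")` instead of the bare `get()`.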
To run your project locally, enter the following command within your project directory:
```sh
npx wrangler dev
```
When you run `wrangler dev`, Wrangler provides a URL (usually `http://localhost:8787`) where you can review your Worker. When you visit that URL, the browser should return the `VALUE` corresponding to the `KEY` you specified with the `get()` method.
1. Go to **Workers & Pages** > **Overview**.
2. Go to the `kv-tutorial` Worker you created.
3. Select **Edit Code**.
4. Clear the contents of the `worker.js` file, then paste the following code.
```js
export default {
	async fetch(request, env, ctx) {
		try {
			await env.BINDING_NAME.put("KEY", "VALUE");
			const value = await env.BINDING_NAME.get("KEY");
			if (value === null) {
				return new Response("Value not found", { status: 404 });
			}
			return new Response(value);
		} catch (err) {
			// In a production application, you could instead choose to retry your KV
			// read or fall back to a default code path.
			console.error(`KV returned error: ${err}`);
			return new Response(err.toString(), { status: 500 });
		}
	},
};
```
The code above:
1. Writes a key to `BINDING_NAME` using KV's `put()` method.
2. Reads the same key using KV's `get()` method, and returns a `404` error if the key is not found.
3. Uses JavaScript's [`try...catch`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) exception handling to catch potential errors. When writing or reading from any service, such as Workers KV or external APIs using `fetch()`, you should expect to handle exceptions explicitly.
When you visit your Worker's URL, the browser should return the `VALUE` corresponding to the `KEY` you specified with the `get()` method.
2. Select **Save**.
## 6. Deploy your KV
1. Run the following command to deploy your Worker to Cloudflare's global network:
```sh
npx wrangler deploy
```
2. Visit the URL for your newly created Workers KV application.
For example, if the URL of your new Worker is `kv-tutorial..workers.dev`, accessing `https://kv-tutorial..workers.dev/` sends a request to your Worker that writes (and reads) from Workers KV.
1. Go to **Workers & Pages** > **Overview**.
2. Select your `kv-tutorial` Worker.
3. Select **Deployments**.
4. From the **Version History** table, select **Deploy version**.
5. From the **Deploy version** page, select **Deploy**.
This deploys the latest version of the Worker code to production.
## Summary
By finishing this tutorial, you have:
1. Created a KV namespace.
2. Created a Worker that writes and reads from that namespace.
3. Deployed your project globally.
## Next steps
If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com).
- Learn more about the [KV API](/kv/api/).
- Understand how to use [Environments](/kv/reference/environments/) with Workers KV.
- Read the Wrangler [`kv` command documentation](/kv/reference/kv-commands/).
---
# Glossary
URL: https://developers.cloudflare.com/kv/glossary/
import { Glossary } from "~/components"
Review the definitions for terms used across Cloudflare's KV documentation.
---
# Cloudflare Workers KV
URL: https://developers.cloudflare.com/kv/
import {
CardGrid,
Description,
Feature,
LinkTitleCard,
Plan,
RelatedProduct,
Tabs,
TabItem,
LinkButton,
} from "~/components";
Create global, low-latency, key-value data storage.
Workers KV is a data store that allows you to store and retrieve data globally. With Workers KV, you can build dynamic and performant APIs and websites that support high read volumes with low latency.
For example, you can use Workers KV for:
- Caching API responses.
- Storing user configurations / preferences.
- Storing user authentication details.
Access your Workers KV namespace from Cloudflare Workers using [Workers Bindings](/workers/runtime-apis/bindings/) or from your external application using the REST API:
```ts
export default {
	async fetch(request, env, ctx): Promise<Response> {
		// write a key-value pair
		await env.KV_BINDING.put('KEY', 'VALUE');

		// read a key-value pair
		const value = await env.KV_BINDING.get('KEY');

		// list all key-value pairs
		const allKeys = await env.KV_BINDING.list();

		// delete a key-value pair
		await env.KV_BINDING.delete('KEY');

		// return a Workers response
		return new Response(
			JSON.stringify({
				value: value,
				allKeys: allKeys,
			}),
		);
	},
} satisfies ExportedHandler<{ KV_BINDING: KVNamespace }>;
```
```json
{
"$schema": "node_modules/wrangler/config-schema.json",
"name": "",
"main": "src/index.ts",
"compatibility_date": "2025-02-04",
"observability": {
"enabled": true
},
"kv_namespaces": [
{
"binding": "KV_BINDING",
"id": ""
}
]
}
```
See the full [Workers KV binding API reference](/kv/api/read-key-value-pairs/).
```sh
curl https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/$KEY_NAME \
-X PUT \
-H 'Content-Type: multipart/form-data' \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
-d '{
"value": "Some Value"
}'
curl https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/$KEY_NAME \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY"
```
```ts
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
	apiEmail: process.env['CLOUDFLARE_EMAIL'], // This is the default and can be omitted
	apiKey: process.env['CLOUDFLARE_API_KEY'], // This is the default and can be omitted
});

// write a key-value pair
await client.kv.namespaces.values.update('', 'KEY', {
	account_id: '',
	value: 'VALUE',
});

// read a key-value pair
const value = await client.kv.namespaces.values.get('', 'KEY', {
	account_id: '',
});

// delete a key-value pair
await client.kv.namespaces.values.delete('', 'KEY', {
	account_id: '',
});

// Automatically fetches more pages as needed.
for await (const namespace of client.kv.namespaces.list({ account_id: '' })) {
	console.log(namespace.id);
}
```
See the full Workers KV [REST API and SDK reference](/api/resources/kv/subresources/namespaces/methods/list/) for details on using the REST API from external applications, with pre-generated SDKs for TypeScript, Python, and Go.
---
## Features
Learn how Workers KV stores and retrieves data.
The Workers command-line interface, Wrangler, allows you to [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#publish) your Workers projects.
Bindings allow your Workers to interact with resources on the Cloudflare developer platform, including [R2](/r2/), [Durable Objects](/durable-objects/), and [D1](/d1/).
---
## Related products
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Cloudflare Durable Objects allows developers to access scalable compute and permanent, consistent storage.
Built on SQLite, D1 is Cloudflare’s first queryable relational database. Create an entire database by importing data or defining your tables and writing your queries within a Worker or through the API.
---
### More resources
Learn about KV limits.
Learn about KV pricing.
Ask questions, show off what you are building, and discuss the platform with other developers.
Learn about product announcements, new tutorials, and what is new in the Cloudflare Developer Platform.
---
# Demos and architectures
URL: https://developers.cloudflare.com/pages/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use Pages within your existing application and architecture.
## Demos
Explore the following demo applications for Pages.
## Reference architectures
Explore the following reference architectures that use Pages:
---
# Cloudflare Pages
URL: https://developers.cloudflare.com/pages/
import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, Render } from "~/components"
Create full-stack applications that are instantly deployed to the Cloudflare global network.
Deploy your Pages project by connecting to [your Git provider](/pages/get-started/git-integration/), uploading prebuilt assets directly to Pages with [Direct Upload](/pages/get-started/direct-upload/) or using [C3](/pages/get-started/c3/) from the command line.
***
## Features
Use Pages Functions to deploy server-side code to enable dynamic functionality without running a dedicated server.
Rollbacks allow you to instantly revert your project to a previous production deployment.
Set up redirects for your Cloudflare Pages project.
***
## Related products
Cloudflare Workers provides a serverless execution environment that allows you to create new applications or augment existing ones without configuring or maintaining infrastructure.
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
D1 is Cloudflare’s native serverless database. Create a database by importing data or defining your tables and writing your queries within a Worker or through the API.
Offload third-party tools and services to the cloud and improve the speed and security of your website.
***
## More resources
Learn about limits that apply to your Pages project (500 deploys per month on the Free plan).
Migrate to Pages from your existing hosting provider.
Deploy popular frameworks such as React, Hugo, and Next.js on Pages.
Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.
---
# Getting started
URL: https://developers.cloudflare.com/pipelines/getting-started/
import { Render, PackageManagers, Details } from "~/components";
Cloudflare Pipelines allows you to ingest high volumes of real-time streaming data and load it into [R2 Object Storage](/r2/), without managing any infrastructure.
By following this guide, you will:
1. Set up an R2 bucket.
2. Create a pipeline, with HTTP as a source, and an R2 bucket as a sink.
3. Send data to your pipeline's HTTP ingestion endpoint.
4. Verify the output delivered to R2.
:::note
Pipelines is in **public beta**, and any developer with a [paid Workers plan](/workers/platform/pricing/#workers) can start using Pipelines immediately.
:::
***
## Prerequisites
To use Pipelines, you will need:
## 1. Set up an R2 bucket
Create a bucket by following the [get started guide for R2](/r2/get-started/), or by running the command below:
```sh
npx wrangler r2 bucket create my-bucket
```
Save the bucket name for the next step.
## 2. Create a Pipeline
To create a pipeline using Wrangler, run the following command in a terminal, and specify:
- The name of your pipeline
- The name of the R2 bucket you created in step 1
```sh
npx wrangler pipelines create my-clickstream-pipeline --r2-bucket my-bucket --batch-max-seconds 5 --compression none
```
After running this command, you will be prompted to authorize Cloudflare Workers Pipelines to create an R2 API token on your behalf. These tokens are used by your pipeline when loading data into your bucket. You can approve the request through the browser link, which will open automatically.
When choosing a name for your pipeline:
- Ensure it is descriptive and relevant to the type of events you intend to ingest. You cannot change the name of the pipeline after creating it.
- The pipeline name must be between 1 and 63 characters long.
- The name cannot contain special characters outside dashes (`-`).
- The name must start and end with a letter or a number.
You will notice two optional flags are set while creating the pipeline: `--batch-max-seconds` and `--compression`. These flags are added to make it faster for you to see the output of your first pipeline. For production use cases, we recommend keeping the default settings.
Once you create your pipeline, you will receive a summary of your pipeline's configuration, as well as an HTTP endpoint which you can post data to:
```sh
🌀 Authorizing R2 bucket "my-bucket"
🌀 Creating pipeline named "my-clickstream-pipeline"
✅ Successfully created pipeline my-clickstream-pipeline
Id: [PIPELINE-ID]
Name: my-clickstream-pipeline
Sources:
HTTP:
Endpoint: https://[PIPELINE-ID].pipelines.cloudflare.com/
Authentication: off
Format: JSON
Worker:
Format: JSON
Destination:
Type: R2
Bucket: my-bucket
Format: newline-delimited JSON
Compression: GZIP
Batch hints:
Max bytes: 100 MB
Max duration: 300 seconds
Max records: 100,000
🎉 You can now send data to your Pipeline!
Send data to your Pipeline's HTTP endpoint:
curl "https://[PIPELINE-ID].pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'
To send data to your Pipeline from a Worker, add the following configuration to your config file:
{
"pipelines": [
{
"pipeline": "my-clickstream-pipeline",
"binding": "PIPELINE"
}
]
}
```
## 3. Post data to your pipeline
Use a curl command in your terminal to post an array of JSON objects to the endpoint you received in step 2.
```sh
curl "https://[PIPELINE-ID].pipelines.cloudflare.com/" \
  -H "Content-Type: application/json" \
  -d '[{"event":"viewedCart", "timestamp": "2025-04-03T15:42:30Z"},{"event":"cartAbandoned", "timestamp": "2025-04-03T15:42:37Z"}]'
```
Once the pipeline successfully accepts the data, you will receive a success message.
You can continue posting data to the pipeline. The pipeline will automatically buffer ingested data. Based on the batch settings (`--batch-max-seconds`) specified in step 2, a batch will be generated every 5 seconds, turned into a file, and written out to your R2 bucket.
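You can also send data to your pipeline from a Worker instead of over HTTP, using the binding shown in the configuration snippet that `wrangler pipelines create` printed in step 2. A minimal sketch (the `PIPELINE` binding name matches that snippet; the record shape is illustrative, and the `send()` method and `Pipeline` type are assumed from the [Workers API documentation](/pipelines/build-with-pipelines/sources/workers-apis)):
```ts
// Forward a clickstream event from a Worker to the pipeline binding.
export default {
	async fetch(request, env, ctx): Promise<Response> {
		await env.PIPELINE.send([
			{ event: "viewedCart", timestamp: new Date().toISOString() },
		]);
		return new Response("Event accepted for ingestion", { status: 202 });
	},
} satisfies ExportedHandler<{ PIPELINE: Pipeline }>;
```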
## 4. Verify in R2
Open the [R2 dashboard](https://dash.cloudflare.com/?to=/:account/r2/overview), and navigate to the R2 bucket you created in step 1. You will see a directory labeled with today's date (such as `event_date=2025-04-05`). Click on the directory, and you will see a sub-directory with the current hour (such as `hr=04`). You should see a newline-delimited JSON file containing the data you posted in step 3. Download the file and open it in a text editor of your choice to verify that the data is present.
***
## Next steps
* Learn how to [set up authentication or CORS settings](/pipelines/build-with-pipelines/sources/http) on your HTTP endpoint.
* Send data to your Pipeline from a Cloudflare Worker using the [Workers API documentation](/pipelines/build-with-pipelines/sources/workers-apis).
If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com).
---
# Overview
URL: https://developers.cloudflare.com/pipelines/
import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct } from "~/components";
Ingest real time data streams and load into R2, using Cloudflare Pipelines.
Cloudflare Pipelines lets you ingest high volumes of real time data, without managing any infrastructure. A single pipeline can ingest up to 100 MB of data per second. Ingested data is automatically batched, written to output files, and delivered to an [R2 bucket](/r2/) in your account. You can use Pipelines to build a data lake of clickstream data, or to store events from a Worker.
## Create your first pipeline
You can set up a pipeline to ingest data via HTTP and deliver output to R2 with a single command:
```sh
$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket
🌀 Authorizing R2 bucket "my-bucket"
🌀 Creating pipeline named "my-clickstream-pipeline"
✅ Successfully created pipeline my-clickstream-pipeline
Id: 0e00c5ff09b34d018152af98d06f5a1xvc
Name: my-clickstream-pipeline
Sources:
HTTP:
Endpoint: https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/
Authentication: off
Format: JSON
Worker:
Format: JSON
Destination:
Type: R2
Bucket: my-bucket
Format: newline-delimited JSON
Compression: GZIP
Batch hints:
Max bytes: 100 MB
Max duration: 300 seconds
Max records: 100,000
🎉 You can now send data to your pipeline!
Send data to your pipeline's HTTP endpoint:
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'
To send data to your pipeline from a Worker, add the following configuration to your config file:
{
"pipelines": [
{
"pipeline": "my-clickstream-pipeline",
"binding": "PIPELINE"
}
]
}
```
Refer to the [getting started guide](/pipelines/getting-started) to start building with pipelines.
:::note
While in beta, you will not be billed for Pipelines usage. You will be billed only for [R2 usage](/r2/pricing/).
:::
***
## Features
Each pipeline generates a globally scalable HTTP endpoint, which supports authentication and CORS settings.
Send data to a pipeline directly from a Cloudflare Worker.
Define batch sizes and enable compression to generate output files that are efficient to query.
***
## Related products
Cloudflare R2 Object Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Cloudflare Workers allows developers to build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
***
## More resources
Learn about pipelines limits.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.
Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
---
# FAQs
URL: https://developers.cloudflare.com/pub-sub/faq/
## What messaging systems are similar?
Messaging systems that also implement or strongly align to the "publish-subscribe" model include AWS SNS (Simple Notification Service), Google Cloud Pub/Sub, Redis' PUBLISH-SUBSCRIBE features, and RabbitMQ. If you have used one of these systems before, you will notice that Pub/Sub shares similar foundations (topics, subscriptions, fan-in/fan-out models) and is easy to migrate to.
## How is Pub/Sub priced?
Cloudflare is still exploring pricing models for Pub/Sub and will share more with developers prior to GA. Users will be given prior notice, and beta users will be required to explicitly opt in to any paid plan.
## Does Pub/Sub show data in the Cloudflare dashboard?
Pub/Sub cannot currently be managed from the Cloudflare dashboard. You can set up Pub/Sub through Wrangler by following [these steps](/pub-sub/guide/).
## Where can I speak with other like-minded developers about Pub/Sub?
Try the #pubsub-beta channel on the [Cloudflare Developers Discord](https://discord.com/invite/cloudflaredev).
## What limits does Pub/Sub have?
Refer to [Limits](/pub-sub/platform/limits) for more details on client, broker, and topic-based limits.
---
# Get started
URL: https://developers.cloudflare.com/pub-sub/guide/
import { Render } from "~/components";
:::note
Pub/Sub is currently in private beta. You can [sign up for the waitlist](https://www.cloudflare.com/cloudflare-pub-sub-lightweight-messaging-private-beta/) to register your interest.
:::
Pub/Sub is a flexible, scalable messaging service built on top of the MQTT messaging standard, allowing you to publish messages from tens of thousands of devices (or more), deploy code to filter, aggregate and transform messages using Cloudflare Workers, and/or subscribe to topics for fan-out messaging use cases.
This guide will:
- Instruct you through creating your first Pub/Sub Broker using the Cloudflare API.
- Create a `..cloudflarepubsub.com` endpoint ready to publish and subscribe to using any MQTT v5.0 compatible client.
- Help you send your first message to the Pub/Sub Broker.
Before you begin, you should be familiar with using the command line and running basic terminal commands.
## Prerequisite: Create a Cloudflare account
In order to use Pub/Sub, you need a [Cloudflare account](/fundamentals/setup/account/). If you already have an account, you can skip this step.
## 1. Enable Pub/Sub
During the private beta, your account needs to be explicitly granted access. If you have not been granted access yet, sign up for the waitlist, and we will contact you when you are granted access.
## 2. Install Wrangler (Cloudflare CLI)
:::note
Pub/Sub support in Wrangler requires wrangler `2.0.16` or above. If you're using an older version of Wrangler, ensure you [update the installed version](/workers/wrangler/install-and-update/#update-wrangler).
:::
Installing `wrangler`, the Workers command-line interface (CLI), allows you to [`init`](/workers/wrangler/commands/#init), [`dev`](/workers/wrangler/commands/#dev), and [`publish`](/workers/wrangler/commands/#publish) your Workers projects.
To install [`wrangler`](https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler), ensure you have [`npm` installed](https://docs.npmjs.com/getting-started), preferably using a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm). Using a version manager helps avoid permission issues and allows you to easily change Node.js versions. Then run:
Validate that you have a version of `wrangler` that supports Pub/Sub:
```sh
wrangler --version
```
```sh output
2.0.16 # should show 2.0.16 or greater - e.g. 2.0.17 or 2.1.0
```
With `wrangler` installed, we can now create a Pub/Sub API token for `wrangler` to use.
## 3. Fetch your credentials
To use Wrangler with Pub/Sub, you'll need an API Token that has permissions to both read and write for Pub/Sub. The `wrangler login` flow does not issue you an API Token with valid Pub/Sub permissions.
:::note
This API token requirement will be lifted prior to Pub/Sub becoming Generally Available.
:::
1. From the [Cloudflare dashboard](https://dash.cloudflare.com), click on the profile icon and select **My Profile**.
2. Under **My Profile**, click **API Tokens**.
3. On the [**API Tokens**](https://dash.cloudflare.com/profile/api-tokens) page, click **Create Token**.
4. Choose **Get Started** next to **Create Custom Token**.
5. Name the token, for example, "Pub/Sub Write Access".
6. Under the **Permissions** heading, choose **Account**, select **Pub/Sub** from the first drop-down, and **Edit** as the permission.
7. Select **Add More** below the newly created permission. Choose **User** > **Memberships** from the first dropdown and **Read** as the permission.
8. Select **Continue to Summary** at the bottom of the page, where you should see _All accounts - Pub/Sub:Edit_ as the permission.
9. Select **Create Token** and copy the token value.
In your terminal, configure a `CLOUDFLARE_API_TOKEN` environment variable with your Pub/Sub token. When this variable is set, `wrangler` will use it to authenticate against the Cloudflare API.
```sh
export CLOUDFLARE_API_TOKEN="pasteyourtokenhere"
```
:::caution[Warning]
This token should be kept secret and not committed to source code or placed in any client-side code.
:::
With this environment variable configured, you can now create your first Pub/Sub Broker!
## 4. Create your first namespace
A namespace represents a collection of Pub/Sub Brokers. Namespaces can be used to separate different environments (production vs. staging), infrastructure teams, and, in the future, permissions.
Before you begin, consider the following:
- **Choose your namespace carefully**. Although it can be changed later, it will be used as part of the hostname for your Brokers. You should not use secrets or other data that cannot be exposed on the Internet.
- Namespace names are globally unique.
- Namespaces must be valid DNS names per RFC 1035. In most cases, this means only a-z, 0-9, and hyphens are allowed. Names are case-insensitive.
For example, a namespace of `my-namespace` and a broker of `staging` would create a hostname of `staging.my-namespace.cloudflarepubsub.com` for clients to connect to.
With this in mind, create a new namespace. This example will use `my-namespace` as a placeholder:
```sh
wrangler pubsub namespace create my-namespace
```
```json output
{
"id": "817170399d784d4ea8b6b90ae558c611",
"name": "my-namespace",
"description": "",
"created_on": "2022-05-11T23:13:08.383232Z",
"modified_on": "2022-05-11T23:13:08.383232Z"
}
```
If you receive an HTTP 403 (Forbidden) response, check that your credentials are correct and that you have not pasted erroneous spaces or characters.
## 5. Create a broker
A broker, in MQTT terms, is a collection of connected clients that publish messages to topics, and clients that subscribe to those topics and receive messages. The broker acts as a relay, and with Cloudflare Pub/Sub, a Cloudflare Worker can be configured to act on every message published to it.
This broker will be configured to accept `TOKEN` authentication. In MQTT terms, this is typically defined as username:password authentication. Pub/Sub uses JSON Web Tokens (JWT) that are unique to each client, and that can be revoked, to make authentication more secure.
Broker names must be:
- Chosen carefully. Although it can be changed later, the name will be used as part of the hostname for your brokers. Do not use secrets or other data that cannot be exposed on the Internet.
- Valid DNS names (per RFC 1035). In most cases, this means only `a-z`, `0-9` and hyphens are allowed. Names are case-insensitive.
- Unique per namespace.
To create a new MQTT Broker called `example-broker` in the `my-namespace` namespace from the example above:
```sh
wrangler pubsub broker create example-broker --namespace=my-namespace
```
```json output
{
"id": "4c63fa30ee13414ba95be5b56d896fea",
"name": "example-broker",
"authType": "TOKEN",
"created_on": "2022-05-11T23:19:24.356324Z",
"modified_on": "2022-05-11T23:19:24.356324Z",
"expiration": null,
"endpoint": "mqtts://example-broker.namespace.cloudflarepubsub.com:8883"
}
```
In the example above, a broker is created with an endpoint of `mqtts://example-broker.my-namespace.cloudflarepubsub.com`. This means:
- Our Pub/Sub (MQTT) Broker is reachable over MQTTS (MQTT over TLS) - port 8883
- The hostname is `example-broker.my-namespace.cloudflarepubsub.com`
- [Token authentication](/pub-sub/platform/authentication-authorization/) is required for clients to connect.
## 6. Create credentials for your broker
In order to connect to a Pub/Sub Broker, you need to securely authenticate. Credentials are scoped to each broker and credentials issued for `broker-a` cannot be used to connect to `broker-b`.
Note that:
- You can generate multiple credentials at once (up to 100 per API call), which can be useful when configuring multiple clients (such as IoT devices).
- Credentials are associated with a specific Client ID and encoded as a signed JSON Web Token (JWT).
- Each token has a unique identifier (a `jti` - or `JWT ID`) that you can use to revoke a specific token.
- Tokens are prefixed with the name of the broker they are associated with (for example, `my-broker`) to make identifying tokens across multiple Pub/Sub brokers easier.
:::note
Ensure you do not commit your credentials to source control, such as GitHub. A valid token allows anyone to connect to your broker and publish or subscribe to messages. Treat credentials as secrets.
:::
To generate two tokens for a broker called `example-broker` with a 48 hour expiry:
```sh
wrangler pubsub broker issue example-broker --namespace=NAMESPACE_NAME --number=2 --expiration=48h
```
You should receive a success response that resembles the example below, which is a map of Client IDs and their associated tokens.
```json
{
"01G3A5GBJE5P3GPXJZ72X4X8SA": "eyJhbGciOiJFZERTQSIsImtpZCI6IkpEUHVZSnFIT3Zxemxha2tORlE5a2ZON1dzWXM1dUhuZHBfemlSZG1PQ1UifQ.
not-a-real-token.ZZL7PNittVwJOeMpFMn2CnVTgIz4AcaWXP9NqMQK0D_iavcRv_p2DVshg6FPe5xCdlhIzbatT6gMyjMrOA2wBg",
"01G3A5GBJECX5DX47P9RV1C5TV": "eyJhbGciOiJFZERTQSIsImtpZCI6IkpEUHVZSnFIT3Zxemxha2tORlE5a2ZON1dzWXM1dUhuZHBfemlSZG1PQ1UifQ.also-not-a-real-token.WrhK-VTs_IzOEALB-T958OojHK5AjYBC5ZT9xiI_6ekdQrKz2kSPGnvZdUXUsTVFDf9Kce1Smh-mw1sF2rSQAQ",
}
```
Each token allows you to publish or subscribe to the associated broker.
## 7. Subscribe and publish messages to a topic
Your broker is now created and ready to accept messages from authenticated clients. Because Pub/Sub is based on the MQTT protocol, there are client libraries for most popular programming languages. Refer to the list of [recommended client libraries](/pub-sub/learning/client-libraries/).
:::note
You can view a live demo available at [demo.mqtt.dev](http://demo.mqtt.dev) that allows you to use your own Pub/Sub Broker and a valid token to subscribe to a topic and publish messages to it. The `JWT` field in the demo accepts a valid token from your Broker.
:::
The example below uses [MQTT.js](https://github.com/mqttjs/MQTT.js) with Node.js to subscribe to a topic on a broker and publish a very basic "hello world" style message. You will need to have a [supported Node.js](https://nodejs.org/en/download/current/) version installed.
```sh
# Check that Node.js is installed
which node
# Install MQTT.js
npm i mqtt --save
```
Set your environment variables.
```sh
export CLOUDFLARE_API_TOKEN="YourAPIToken"
export CLOUDFLARE_ACCOUNT_ID="YourAccountID"
export DEFAULT_NAMESPACE="TheNamespaceYouCreated"
export BROKER_NAME="TheBrokerYouCreated"
```
We can now generate an access token for Pub/Sub. We will need both the client ID and the token (a JSON Web Token) itself to authenticate from our MQTT client:
```sh
curl -s -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" -H "Content-Type: application/json" "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pubsub/namespaces/${DEFAULT_NAMESPACE}/brokers/${BROKER_NAME}/credentials?type=TOKEN&topicAcl=#" | jq '.result | to_entries | .[0]'
```
This will output a `key` representing the `clientId`, and a `value` representing our (secret) access token, resembling the following:
```json
{
"key": "01HDQFD5Y8HWBFGFBBZPSWQ22M",
"value": "eyJhbGciOiJFZERTQSIsImtpZCI6IjU1X29UODVqQndJbjlFYnY0V3dzanRucG9ycTBtalFlb1VvbFZRZDIxeEUifQ....NVpToBedVYGGhzHJZmpEG1aG_xPBWrE-PgG1AFYcTPEBpZ_wtN6ApeAUM0JIuJdVMkoIC9mUg4vPtXM8jLGgBw"
}
```
Copy the `value` field and set it as the `BROKER_TOKEN` environment variable:
```sh
export BROKER_TOKEN=""
```
Create a file called `index.js`, making sure that:
- `brokerEndpoint` is set to the address of your Pub/Sub broker.
- `clientId` is the `key` from your newly created access token.
- The `BROKER_TOKEN` environment variable is populated with your access token.
:::note
Your `BROKER_TOKEN` is sensitive, and should be kept secret to avoid unintended access to your Pub/Sub broker. Avoid committing it to source code.
:::
```js
const mqtt = require("mqtt");
const brokerEndpoint = "mqtts://my-broker.my-namespace.cloudflarepubsub.com";
const clientId = "01HDQFD5Y8HWBFGFBBZPSWQ22M"; // Replace this with your client ID
const options = {
	port: 8883,
	username: clientId, // MQTT.js requires this, but Pub/Sub does not
	clientId: clientId, // Required by Pub/Sub
	password: process.env.BROKER_TOKEN,
	protocolVersion: 5, // MQTT 5
};

const client = mqtt.connect(brokerEndpoint, options);

client.subscribe("example-topic");

client.publish(
	"example-topic",
	`message from ${client.options.clientId}: hello at ${Date.now()}`,
);

client.on("message", function (topic, message) {
	console.log(`received message on ${topic}: ${message}`);
});
```
Run the example. You should see the output written to your terminal (stdout).
```sh
node index.js
```
```sh output
> received message on example-topic: message from 01HDQFD5Y8HWBFGFBBZPSWQ22M: hello at 1652102228
```
Your client ID and timestamp will be different from the above, but you should see a very similar message. You can also try subscribing to multiple topics and publishing to them by passing the relevant topic name to `client.publish`, as shown in the sketch below. Provided they have permission, clients can publish to multiple topics at once.
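Continuing the `index.js` example above (which defines `client` and `clientId`), a minimal sketch of working with multiple topics, using illustrative topic names:
```js
// Subscribe to several topics in a single call, then publish to each one.
const topics = ["example-topic", "another-topic"];
client.subscribe(topics);
for (const topic of topics) {
	client.publish(topic, `hello ${topic} from ${clientId}`);
}
```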
If you do not see the message you published, or you are receiving error messages, ensure that:
- The `BROKER_TOKEN` environment variable is not empty. Try `echo $BROKER_TOKEN` in your terminal.
- You updated the `brokerEndpoint` to match the broker you created. The **Endpoint** field of your broker will show this address and port.
- You correctly [installed MQTT.js](https://github.com/mqttjs/MQTT.js#install).
## Next steps
- [Connect a Worker to your broker](/pub-sub/learning/integrate-workers/) to programmatically read, parse, and filter messages as they are published to a broker.
- [Learn how Pub/Sub and the MQTT protocol work](/pub-sub/learning/how-pubsub-works).
- [See example client code](/pub-sub/examples) for publishing or subscribing to a Pub/Sub broker.
---
# Pub/Sub
URL: https://developers.cloudflare.com/pub-sub/
:::note
Pub/Sub is currently in private beta. Browse the documentation to understand how Pub/Sub works and integrates with our broader Developer Platform, and [sign up for the waitlist](https://www.cloudflare.com/cloudflare-pub-sub-lightweight-messaging-private-beta/) to get access in the near future.
:::
Pub/Sub is Cloudflare's distributed MQTT messaging service. MQTT is one of the most popular messaging protocols used for consuming sensor data from thousands (or tens of thousands) of remote, distributed Internet of Things clients; publishing configuration data or remote commands to fleets of devices in the field; and even for building notification or messaging systems for online games and mobile apps.
Pub/Sub is ideal for cases where you have many (from a handful to tens of thousands of) clients sending small, sub-1MB messages — such as event, telemetry or transaction data — into a centralized system for aggregation, or where you need to push configuration updates or remote commands to remote clients at scale.
Pub/Sub:
* Scales automatically. You do not have to provision "vCPUs" or "memory", or set autoscaling parameters to handle spikes in message rates.
* Is global. Cloudflare's Pub/Sub infrastructure runs in [hundreds of cities worldwide](https://www.cloudflare.com/network/). Every edge location is part of one, globally distributed Pub/Sub system.
* Is secure by default. Clients must authenticate and connect over TLS, and clients are issued credentials that are scoped to a specific broker.
* Allows you to create multiple brokers to isolate clients or use cases, for example, staging vs. production or customers A vs. B vs. C — as needed. Each broker is addressable by a unique DNS hostname.
* Integrates with Cloudflare Workers to enable programmable messaging capabilities: parse, filter, aggregate, and re-publish MQTT messages directly from your serverless code.
* Supports MQTT v5.0, the most recent version of the MQTT specification, and one of the most ubiquitous messaging protocols in use today.
If you are new to the MQTT protocol, visit the [How Pub/Sub works](/pub-sub/learning/how-pubsub-works/) page to better understand how MQTT differs from other messaging protocols.
---
# Get started
URL: https://developers.cloudflare.com/privacy-gateway/get-started/
Privacy Gateway implementation consists of three main parts:
1. Application Gateway Server/backend configuration (operated by you).
2. Client configuration (operated by you).
3. Connection to a Privacy Gateway Relay Server (operated by Cloudflare).
***
## Before you begin
Privacy Gateway is currently in closed beta. If you are interested, [contact us](https://www.cloudflare.com/lp/privacy-edge/).
***
## Step 1 - Configure your server
As a customer of Privacy Gateway, you need to add server support for OHTTP by implementing an application gateway server. The application gateway is responsible for decrypting incoming requests, forwarding the inner requests to their destination, and encrypting the corresponding response back to the client.
The [server implementation](#resources) will handle incoming requests and produce responses, and it will also advertise its public key configuration for clients to access. The public key configuration is generated securely and made available via an API. Refer to the [README](https://github.com/cloudflare/privacy-gateway-server-go#readme) for details about configuration.
Applications can also implement this functionality themselves. Details about [public key configuration](https://datatracker.ietf.org/doc/html/draft-ietf-ohai-ohttp-05#section-3), HTTP message [encryption and decryption](https://datatracker.ietf.org/doc/html/draft-ietf-ohai-ohttp-05#section-4), and [server-specific details](https://datatracker.ietf.org/doc/html/draft-ietf-ohai-ohttp-05#section-5) can be found in the OHTTP specification.
### Resources
Use the following resources for help with server configuration:
* **Go**:
* [Sample gateway server](https://github.com/cloudflare/privacy-gateway-server-go)
* [Gateway library](https://github.com/chris-wood/ohttp-go)
* **Rust**: [Gateway library](https://github.com/martinthomson/ohttp/tree/main/ohttp-server)
* **JavaScript / TypeScript**: [Gateway library](https://github.com/chris-wood/ohttp-js)
***
## Step 2 - Configure your client
As a customer of the Privacy Gateway, you need to set up client-side support for the gateway. Clients are responsible for encrypting requests, sending them to the Cloudflare Privacy Gateway, and then decrypting the corresponding responses.
Additionally, app developers need to [configure the client](#resources-1) to fetch or otherwise discover the gateway’s public key configuration. How this is done depends on how the gateway makes its public key configuration available. If you need help with this configuration, [contact us](https://www.cloudflare.com/lp/privacy-edge/).
### Resources
Use the following resources for help with client configuration:
* **Objective C**: [Sample application](https://github.com/cloudflare/privacy-gateway-client-demo)
* **Rust**: [Client library](https://github.com/martinthomson/ohttp/tree/main/ohttp-client)
* **JavaScript / TypeScript**: [Client library](https://github.com/chris-wood/ohttp-js)
***
## Step 3 - Review your application
After you have configured your client and server, review your application to make sure you are only sending intended data to Cloudflare and the application backend. In particular, application data should not contain anything unique to an end-user, as this would invalidate the benefits that OHTTP provides.
* Applications should scrub identifying user data from requests forwarded through the Privacy Gateway. This includes, for example, names, email addresses, phone numbers, etc.
* Applications should encourage users to disable crash reporting when using Privacy Gateway. Crash reports can contain sensitive user information and data, including email addresses.
* Where possible, application data should be encrypted on the client device with a key known only to the client. For example, iOS generally has good support for [client-side encryption (and key synchronization via the KeyChain)](https://developer.apple.com/documentation/security/certificate_key_and_trust_services/keys). Android likely has similar features available.
***
## Step 4 - Relay requests through Cloudflare
Before sending any requests, you need to first set up your account with Cloudflare. That requires [contacting us](https://www.cloudflare.com/lp/privacy-edge/) and providing the URL of your application gateway server.
Then, make sure you are forwarding requests to a mutually agreed URL with the following conventions.
```txt
https://.privacy-gateway.cloudflare.com/
```
---
# Cloudflare Privacy Gateway
URL: https://developers.cloudflare.com/privacy-gateway/
import { Description, Feature, Plan } from "~/components"
Implements the Oblivious HTTP IETF standard to improve client privacy.
[Privacy Gateway](https://blog.cloudflare.com/building-privacy-into-internet-standards-and-how-to-make-your-app-more-private-today/) is a managed service deployed on Cloudflare’s global network that implements part of the [Oblivious HTTP (OHTTP) IETF](https://www.ietf.org/archive/id/draft-thomson-http-oblivious-01.html) standard. The goal of Privacy Gateway and Oblivious HTTP is to hide the client's IP address when interacting with an application backend.
OHTTP introduces a trusted third party between client and server, called a relay, whose purpose is to forward encrypted requests and responses between client and server. These messages are encrypted between client and server such that the relay learns nothing of the application data, beyond the length of the encrypted message and the server the client is interacting with.
***
## Availability
Privacy Gateway is currently in closed beta – available to select privacy-oriented companies and partners. If you are interested, [contact us](https://www.cloudflare.com/lp/privacy-edge/).
***
## Features
Learn how to set up Privacy Gateway for your application.
Learn about the different parties and data shared in Privacy Gateway.
Learn about how to query Privacy Gateway metrics.
---
# Demos and architectures
URL: https://developers.cloudflare.com/queues/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use Queues within your existing application and architecture.
## Demos
Explore the following demo applications for Queues.
## Reference architectures
Explore the following reference architectures that use Queues:
---
# Glossary
URL: https://developers.cloudflare.com/queues/glossary/
import { Glossary } from "~/components"
Review the definitions for terms used across Cloudflare's Queues documentation.
---
# Getting started
URL: https://developers.cloudflare.com/queues/get-started/
import { Render, PackageManagers, WranglerConfig } from "~/components";
Cloudflare Queues is a flexible messaging queue that allows you to queue messages for asynchronous processing. By following this guide, you will create your first queue, a Worker to publish messages to that queue, and a consumer Worker to consume messages from that queue.
## Prerequisites
To use Queues, you will need:
## 1. Create a Worker project
You will access your queue from a Worker, the producer Worker. You must create at least one producer Worker to publish messages onto your queue. If you are using [R2 Bucket Event Notifications](/r2/buckets/event-notifications/), then you do not need a producer Worker.
To create a producer Worker, run:
This will create a new directory, which will include both a `src/index.ts` Worker script, and a [`wrangler.jsonc`](/workers/wrangler/configuration/) configuration file. After you create your Worker, you will create a Queue to access.
Move into the newly created directory:
```sh
cd producer-worker
```
## 2. Create a queue
To use queues, you need to create at least one queue to publish messages to and consume messages from.
To create a queue, run:
```sh
npx wrangler queues create
```
Choose a name that is descriptive and relates to the types of messages you intend to use this queue for. Descriptive queue names look like: `debug-logs`, `user-clickstream-data`, or `password-reset-prod`.
Queue names must be 1 to 63 characters long. Queue names cannot contain special characters outside dashes (`-`), and must start and end with a letter or number.
You cannot change your queue name after you have set it. After you create your queue, you will set up your producer Worker to access it.
## 3. Set up your producer Worker
To expose your queue to the code inside your Worker, you need to connect your queue to your Worker by creating a binding. [Bindings](/workers/runtime-apis/bindings/) allow your Worker to access resources, such as Queues, on the Cloudflare developer platform.
To create a binding, open your newly generated `wrangler.jsonc` file and add the following:
```jsonc
{
  "queues": {
    "producers": [
      {
        "queue": "MY-QUEUE-NAME",
        "binding": "MY_QUEUE"
      }
    ]
  }
}
```
Replace `MY-QUEUE-NAME` with the name of the queue you created in [step 2](/queues/get-started/#2-create-a-queue). Next, replace `MY_QUEUE` with the name you want for your `binding`. The binding must be a valid JavaScript variable name. This is the variable you will use to reference this queue in your Worker.
### Write your producer Worker
You will now configure your producer Worker to create messages to publish to your queue. Your producer Worker will:
1. Take a request it receives from the browser.
2. Transform the request to JSON format.
3. Write the request directly to your queue.
In your Worker project directory, open the `src` folder and add the following to your `index.ts` file:
```ts null {8}
export default {
  async fetch(request, env, ctx): Promise<Response> {
    let log = {
      url: request.url,
      method: request.method,
      headers: Object.fromEntries(request.headers),
    };
    await env.MY_QUEUE.send(log);
    return new Response('Success!');
  },
} satisfies ExportedHandler<Env>;
```
Replace `MY_QUEUE` with the name you have set for your binding from your `wrangler.jsonc` file.
Also add the queue to the `Env` interface in `index.ts`.
```ts null {2}
export interface Env {
  MY_QUEUE: Queue<any>;
}
```
If this write fails, your Worker will return an error (raise an exception). If the write succeeds, it will return `Success` with an HTTP `200` status code to the browser.
In a production application, you would likely use a [`try...catch`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) statement to catch the exception and handle it directly (for example, return a custom error or even retry).
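A minimal sketch of that pattern, replacing the `send` and `return` lines inside the `fetch` handler above:
```ts
try {
  await env.MY_QUEUE.send(log);
  return new Response('Success!');
} catch (err) {
  // The write failed: log it, retry, or surface a custom error to the caller
  return new Response('Failed to queue message', { status: 500 });
}
```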
### Publish your producer Worker
With your Wrangler file and `index.ts` file configured, you are ready to publish your producer Worker. To publish your producer Worker, run:
```sh
npx wrangler deploy
```
You should see output that resembles the below, with a `*.workers.dev` URL by default.
```
Uploaded <YOUR-WORKER-NAME> (0.76 sec)
Published <YOUR-WORKER-NAME> (0.29 sec)
  https://<YOUR-WORKER-NAME>.<YOUR-SUBDOMAIN>.workers.dev
```
Copy your `*.workers.dev` subdomain and paste it into a new browser tab. Refresh the page a few times to start publishing requests to your queue. Your browser should return the `Success` response after writing the request to the queue each time.
You have built a queue and a producer Worker to publish messages to the queue. You will now create a consumer Worker to consume the messages published to your queue. Without a consumer Worker, the messages will stay on the queue until they expire, which defaults to four (4) days.
## 4. Create your consumer Worker
A consumer Worker receives messages from your queue. When the consumer Worker receives your queue's messages, it can write them to another source, such as a logging console or storage objects.
In this guide, you will create a consumer Worker and use it to log and inspect the messages with [`wrangler tail`](/workers/wrangler/commands/#tail). You will create your consumer Worker in the same Worker project that you created your producer Worker.
:::note
Queues also supports [pull-based consumers](/queues/configuration/pull-consumers/), which allows any HTTP-based client to consume messages from a queue. This guide creates a push-based consumer using Cloudflare Workers.
:::
To create a consumer Worker, open your `index.ts` file and add the following `queue` handler to your existing `fetch` handler:
```ts null {11}
export default {
  async fetch(request, env, ctx): Promise<Response> {
    let log = {
      url: request.url,
      method: request.method,
      headers: Object.fromEntries(request.headers),
    };
    await env.MY_QUEUE.send(log);
    return new Response('Success!');
  },
  async queue(batch, env): Promise<void> {
    let messages = JSON.stringify(batch.messages);
    console.log(`consumed from our queue: ${messages}`);
  },
} satisfies ExportedHandler<Env>;
```
Replace `MY_QUEUE` with the name you have set for your binding from your `wrangler.jsonc` file.
Every time messages are published to the queue, your consumer Worker's `queue` handler (`async queue`) is called and it is passed one or more messages.
In this example, your consumer Worker transforms the queue's JSON formatted message into a string and logs that output. In a real world application, your consumer Worker can be configured to write messages to object storage (such as [R2](/r2/)), write to a database (like [D1](/d1/)), further process messages before calling an external API (such as an [email API](/workers/tutorials/)) or a data warehouse with your legacy cloud provider.
When performing asynchronous tasks from within your consumer handler, use `waitUntil()` to ensure they complete before the handler returns. Other asynchronous methods are not supported within the scope of this method.
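For example, a minimal sketch of deferring an asynchronous write from the `queue` handler, assuming a hypothetical R2 binding named `LOGS_BUCKET`:
```ts
interface Env {
  LOGS_BUCKET: R2Bucket; // hypothetical binding for illustration
}

export default {
  // The queue handler also accepts an ExecutionContext as its third argument
  async queue(batch, env, ctx): Promise<void> {
    for (const message of batch.messages) {
      // waitUntil() keeps the Worker running until this write settles
      ctx.waitUntil(
        env.LOGS_BUCKET.put(`logs/${message.id}.json`, JSON.stringify(message.body))
      );
    }
  },
} satisfies ExportedHandler<Env>;
```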
### Connect the consumer Worker to your queue
After you have configured your consumer Worker, you are ready to connect it to your queue.
Each queue can only have one consumer Worker connected to it. If you try to connect multiple consumers to the same queue, you will encounter an error when attempting to publish that Worker.
To connect your queue to your consumer Worker, open your `wrangler.jsonc` file and add the following:
```jsonc
{
  "queues": {
    "consumers": [
      {
        // Required: this must match the name of the queue you created in step 2.
        // If you misspell the name, you will receive an error when attempting to publish your Worker.
        "queue": "MY-QUEUE-NAME",
        "max_batch_size": 10, // optional: defaults to 10
        "max_batch_timeout": 5 // optional: defaults to 5 seconds
      }
    ]
  }
}
```
Replace `MY-QUEUE-NAME` with the queue you created in [step 2](/queues/get-started/#2-create-a-queue).
In your consumer Worker, you are using Queues to automatically batch messages using the `max_batch_size` and `max_batch_timeout` options. The consumer Worker will receive messages in batches of `10` or every `5` seconds, whichever happens first.
`max_batch_size` (defaults to 10) helps to reduce the number of times your consumer Worker needs to be called. Instead of being called for every message, it will only be called after 10 messages have entered the queue.
`max_batch_timeout` (defaults to 5 seconds) helps to reduce wait time. If the producer Worker has not sent 10 messages to the queue, the consumer Worker will be called every 5 seconds to receive any messages that are waiting in the queue.
### Publish your consumer Worker
With your Wrangler file and `index.ts` file configured, publish your consumer Worker by running:
```sh
npx wrangler deploy
```
## 5. Read messages from your queue
After you set up the consumer Worker, you can read messages from the queue.
Run `wrangler tail` to start waiting for our consumer to log the messages it receives:
```sh
npx wrangler tail
```
With `wrangler tail` running, open the Worker URL you opened in [step 3](/queues/get-started/#3-set-up-your-producer-worker).
You should receive a `Success` message in your browser window.
If you receive a `Success` message, refresh the URL a few times to generate messages and push them onto the queue.
With `wrangler tail` running, your consumer Worker will start logging the requests generated by refreshing.
If you refresh less than 10 times, it may take a few seconds for the messages to appear, because the batch timeout is configured as 5 seconds. After 5 seconds, any waiting messages should arrive in your terminal.
If you get errors when you refresh, check that the name of the queue you created in [step 2](/queues/get-started/#2-create-a-queue) and the queue you referenced in your Wrangler file are the same. You should also ensure that your producer Worker is returning `Success` and not an error.
By completing this guide, you have now created a queue, a producer Worker that publishes messages to that queue, and a consumer Worker that consumes those messages from it.
## Related resources
- Learn more about [Cloudflare Workers](/workers/) and the applications you can build on Cloudflare.
---
# Cloudflare Queues
URL: https://developers.cloudflare.com/queues/
import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct } from "~/components"
Send and receive messages with guaranteed delivery and no charges for egress bandwidth.
Cloudflare Queues integrate with [Cloudflare Workers](/workers/) and enable you to build applications that can [guarantee delivery](/queues/reference/delivery-guarantees/), [offload work from a request](/queues/reference/how-queues-works/), [send data from Worker to Worker](/queues/configuration/configure-queues/), and [buffer or batch data](/queues/configuration/batching-retries/).
***
## Features
Cloudflare Queues allows you to batch, retry and delay messages.
Redirect your messages when a delivery failure occurs.
Configure pull-based consumers to pull from a queue over HTTP from infrastructure outside of Cloudflare Workers.
***
## Related products
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Cloudflare Workers allows developers to build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
***
## More resources
Learn about pricing.
Learn about Queues limits.
Try Cloudflare Queues which can run on your local machine.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.
Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.
Learn how to configure Cloudflare Queues using Wrangler.
Learn how to use JavaScript APIs to send and receive messages to a Cloudflare Queue.
---
# Demos and architectures
URL: https://developers.cloudflare.com/r2/demos/
import {
ExternalResources,
GlossaryTooltip,
ResourcesBySelector,
} from "~/components";
Learn how you can use R2 within your existing application and architecture.
## Demos
Explore the following demo applications for R2.
## Reference architectures
Explore the following reference architectures that use R2:
---
# Getting started
URL: https://developers.cloudflare.com/r2/get-started/
import { Render } from "~/components"
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
## 1. Install and authenticate Wrangler
:::note
Before you create your first bucket, you must purchase R2 from the Cloudflare dashboard.
:::
1. [Install Wrangler](/workers/wrangler/install-and-update/) within your project using npm and Node.js or Yarn.
2. [Authenticate Wrangler](/workers/wrangler/commands/#login) to enable deployments to Cloudflare. When Wrangler automatically opens your browser to display Cloudflare's consent screen, select **Allow** to send the API Token to Wrangler.
```sh
wrangler login
```
## 2. Create a bucket
To create a new R2 bucket from the Cloudflare dashboard:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select **R2**.
2. Select **Create bucket**.
3. Enter a name for the bucket and select **Create bucket**.
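Alternatively, if you prefer the command line, you can create a bucket with Wrangler (the bucket name here is a placeholder):
```sh
npx wrangler r2 bucket create <BUCKET_NAME>
```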
## 3. Upload your first object
1. From the **R2** page in the dashboard, locate and select your bucket.
2. Select **Upload**.
3. Choose to either drag and drop your file into the upload area or **select from computer**.
You will receive a confirmation message after a successful upload.
## Bucket access options
Cloudflare provides multiple ways for developers to access their R2 buckets:
* [Workers Runtime API](/r2/api/workers/workers-api-usage/)
* [S3 API compatibility](/r2/api/s3/api/)
* [Public buckets](/r2/buckets/public-buckets/)
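For example, a minimal sketch of serving an object from a Worker via the Workers Runtime API, assuming a bucket binding named `MY_BUCKET` in your Wrangler configuration:
```ts
interface Env {
  MY_BUCKET: R2Bucket; // the binding name is an assumption; configure it in your Wrangler file
}

export default {
  async fetch(request, env): Promise<Response> {
    // get() resolves to null when the key does not exist
    const object = await env.MY_BUCKET.get("hello.txt");
    if (object === null) {
      return new Response("Object not found", { status: 404 });
    }
    return new Response(object.body);
  },
} satisfies ExportedHandler<Env>;
```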
---
# Cloudflare R2
URL: https://developers.cloudflare.com/r2/
import {
CardGrid,
Description,
Feature,
LinkButton,
LinkTitleCard,
Plan,
RelatedProduct,
} from "~/components";
Object storage for all your data.
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
You can use R2 for multiple scenarios, including but not limited to:
- Storage for cloud-native applications
- Cloud storage for web content
- Storage for podcast episodes
- Data lakes (analytics and big data)
- Cloud storage output for large batch processes, such as machine learning model artifacts or datasets
Get started
Browse the examples
---
## Features
Location Hints are optional parameters you can provide during bucket creation to indicate the primary geographical location you expect data will be accessed from.
Configure CORS to interact with objects in your bucket and configure policies on your bucket.
Public buckets expose the contents of your R2 bucket directly to the Internet.
Create bucket scoped tokens for granular control over who can access your data.
---
## Related products
A [serverless](https://www.cloudflare.com/learning/serverless/what-is-serverless/) execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure.
Upload, store, encode, and deliver live and on-demand video with one API, without configuring or maintaining infrastructure.
A suite of products tailored to your image-processing needs.
---
## More resources
Understand pricing for free and paid tier rates.
Ask questions, show off what you are building, and discuss the platform
with other developers.
Learn about product announcements, new tutorials, and what is new in
Cloudflare Workers.
---
# Pricing
URL: https://developers.cloudflare.com/r2/pricing/
import { InlineBadge } from "~/components";
R2 charges based on the total volume of data stored, along with two classes of operations on that data:
1. [Class A operations](#class-a-operations) which are more expensive and tend to mutate state.
2. [Class B operations](#class-b-operations) which tend to read existing state.
For the Infrequent Access storage class, [data retrieval](#data-retrieval) fees apply. There are no charges for egress bandwidth for any storage class.
All included usage is on a monthly basis.
:::note
To learn about potential cost savings from using R2, refer to the [R2 pricing calculator](https://r2-calculator.cloudflare.com/).
:::
## R2 pricing
| | Standard storage | Infrequent Access storage |
| ---------------------------------- | ------------------------ | ------------------------------------------------------- |
| Storage | $0.015 / GB-month | $0.01 / GB-month |
| Class A Operations | $4.50 / million requests | $9.00 / million requests |
| Class B Operations | $0.36 / million requests | $0.90 / million requests |
| Data Retrieval (processing) | None | $0.01 / GB |
| Egress (data transfer to Internet) | Free [^1] | Free [^1] |
### Free tier
You can use the following amount of storage and operations each month for free. The free tier only applies to Standard storage.
| | Free |
| ---------------------------------- | --------------------------- |
| Storage | 10 GB-month / month |
| Class A Operations | 1 million requests / month |
| Class B Operations | 10 million requests / month |
| Egress (data transfer to Internet) | Free [^1] |
### Storage usage
Storage is billed using gigabyte-month (GB-month) as the billing metric. A GB-month is calculated by averaging the _peak_ storage per day over a billing period (30 days).
For example:
- Storing 1 GB constantly for 30 days will be charged as 1 GB-month.
- Storing 3 GB constantly for 30 days will be charged as 3 GB-month.
- Storing 1 GB for 5 days, then 3 GB for the remaining 25 days will be charged as `1 GB * 5/30 month + 3 GB * 25/30 month = 2.66 GB-month`
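The same averaging, sketched in TypeScript (assuming a 30-day billing period):
```ts
// GB-month: average of the peak storage (in GB) per day over a 30-day billing period
function gbMonths(dailyPeaksGB: number[]): number {
  const billingDays = 30;
  return dailyPeaksGB.reduce((sum, gb) => sum + gb, 0) / billingDays;
}

// 1 GB for 5 days, then 3 GB for the remaining 25 days
const peaks = [...Array(5).fill(1), ...Array(25).fill(3)];
console.log(gbMonths(peaks).toFixed(2)); // "2.67" (the example above truncates to 2.66)
```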
For objects stored in Infrequent Access storage, you will be charged for the object for the minimum storage duration even if the object was deleted or moved before the duration specified.
### Class A operations
Class A Operations include `ListBuckets`, `PutBucket`, `ListObjects`, `PutObject`, `CopyObject`, `CompleteMultipartUpload`, `CreateMultipartUpload`, `LifecycleStorageTierTransition`, `ListMultipartUploads`, `UploadPart`, `UploadPartCopy`, `ListParts`, `PutBucketEncryption`, `PutBucketCors` and `PutBucketLifecycleConfiguration`.
### Class B operations
Class B Operations include `HeadBucket`, `HeadObject`, `GetObject`, `UsageSummary`, `GetBucketEncryption`, `GetBucketLocation`, `GetBucketCors` and `GetBucketLifecycleConfiguration`.
### Free operations
Free operations include `DeleteObject`, `DeleteBucket` and `AbortMultipartUpload`.
### Data retrieval
Data retrieval fees apply when you access or retrieve data from the Infrequent Access storage class. This includes any time objects are read or copied.
### Minimum storage duration
For objects stored in Infrequent Access storage, you will be charged for the object for the minimum storage duration even if the object was deleted, moved, or replaced before the specified duration.
| Storage class | Minimum storage duration |
| ------------------------------------------------------ | ------------------------ |
| Standard storage | None |
| Infrequent Access storage | 30 days |
## R2 Data Catalog pricing
R2 Data Catalog is in **public beta**, and any developer with an [R2 subscription](/r2/pricing/) can start using it. Currently, outside of standard R2 storage and operations, you will not be billed for your use of R2 Data Catalog. We will provide at least 30 days' notice before we make any changes or start charging for usage.
To learn more about our thinking on future pricing, refer to the [R2 Data Catalog announcement blog](https://blog.cloudflare.com/r2-data-catalog-public-beta).
## Data migration pricing
### Super Slurper
Super Slurper is free to use. You are only charged for the Class A operations that Super Slurper makes to your R2 bucket. Objects with sizes < 100MiB are uploaded to R2 in a single Class A operation. Larger objects use multipart uploads to increase transfer success rates and will perform multiple Class A operations. Note that your source bucket might incur additional charges as Super Slurper copies objects over to R2.
Once migration completes, you are charged for storage & Class A/B operations as described in previous sections.
### Sippy
Sippy is free to use. You are only charged for the operations Sippy makes to your R2 bucket. If a requested object is not present in R2, Sippy will copy it over from your source bucket. Objects with sizes < 200MiB are uploaded to R2 in a single Class A operation. Larger objects use multipart uploads to increase transfer success rates, and will perform multiple Class A operations. Note that your source bucket might incur additional charges as Sippy copies objects over to R2.
As objects are migrated to R2, they are served from R2, and you are charged for storage & Class A/B operations as described in previous sections.
## Pricing calculator
To learn about potential cost savings from using R2, refer to the [R2 pricing calculator](https://r2-calculator.cloudflare.com/).
## R2 billing examples
### Data storage example 1
If a user writes 1,000 objects in R2 for 1 month with an average size of 1 GB and requests each 1,000 times per month, the estimated cost for the month would be:
| | Usage | Free Tier | Billable Quantity | Price |
| ------------------ | ------------------------------------------- | ------------ | ----------------- | ---------- |
| Class B Operations | (1,000 objects) \* (1,000 reads per object) | 10 million | 0 | $0.00 |
| Class A Operations | (1,000 objects) \* (1 write per object) | 1 million | 0 | $0.00 |
| Storage | (1,000 objects) \* (1 GB per object) | 10 GB-months | 990 GB-months | $14.85 |
| **TOTAL** | | | | **$14.85** |
| | | | | |
### Data storage example 2
If a user writes 10 objects in R2 for 1 month with an average size of 1 GB and requests each object 1,000 times per month, the estimated cost for the month would be:
| | Usage | Free Tier | Billable Quantity | Price |
| ------------------ | ------------------------------------------- | ------------ | ----------------- | --------- |
| Class B Operations | (10 objects) \* (1,000 reads per object) | 10 million | 0 | $0.00 |
| Class A Operations | (10 objects) \* (1 write per object) | 1 million | 0 | $0.00 |
| Storage | (10 objects) \* (1 GB per object) | 10 GB-months | 0 | $0.00 |
| **TOTAL** | | | | **$0.00** |
| | | | | |
### Asset hosting
If a user writes 100,000 files with an average size of 100 KB object and reads 10,000,000 objects per day, the estimated cost in a month would be:
| | Usage | Free Tier | Billable Quantity | Price |
| ------------------ | --------------------------------------- | ------------ | ----------------- | ----------- |
| Class B Operations | (10,000,000 reads per day) \* (30 days) | 10 million | 290,000,000 | $104.40 |
| Class A Operations | (100,000 writes) | 1 million | 0 | $0.00 |
| Storage | (100,000 objects) \* (100KB per object) | 10 GB-months | 0 GB-months | $0.00 |
| **TOTAL** | | | | **$104.40** |
| | | | | |
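To sanity-check estimates like these, here is a small sketch using the Standard storage rates and monthly free tier from the tables above:
```ts
// Standard storage rates and monthly free tier, as listed in the pricing tables above
const RATE = { storagePerGBMonth: 0.015, classAPerMillion: 4.5, classBPerMillion: 0.36 };
const FREE = { storageGBMonths: 10, classAOps: 1_000_000, classBOps: 10_000_000 };

function estimateMonthlyCost(storageGBMonths: number, classAOps: number, classBOps: number): number {
  const storage = Math.max(0, storageGBMonths - FREE.storageGBMonths) * RATE.storagePerGBMonth;
  const classA = (Math.max(0, classAOps - FREE.classAOps) / 1_000_000) * RATE.classAPerMillion;
  const classB = (Math.max(0, classBOps - FREE.classBOps) / 1_000_000) * RATE.classBPerMillion;
  return storage + classA + classB;
}

// Asset hosting example: ~10 GB stored, 100,000 writes, 300 million reads per month
console.log(estimateMonthlyCost(10, 100_000, 300_000_000).toFixed(2)); // "104.40"
```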
## Cloudflare billing policy
To learn more about how usage is billed, refer to [Cloudflare Billing Policy](/support/account-management-billing/cloudflare-billing-policy/).
## Frequently asked questions
### Will I be charged for unauthorized requests to my R2 bucket?
No. You are not charged for operations when the caller does not have permission to make the request (HTTP 401 `Unauthorized` response status code).
[^1]: Egressing directly from R2, including via the [Workers API](/r2/api/workers/), [S3 API](/r2/api/s3/), and [`r2.dev` domains](/r2/buckets/public-buckets/#enable-managed-public-access) does not incur data transfer (egress) charges and is free. If you connect other metered services to an R2 bucket, you may be charged by those services.
---
# Realtime vs Regular SFUs
URL: https://developers.cloudflare.com/realtime/calls-vs-sfus/
## Cloudflare Realtime vs. Traditional SFUs
Cloudflare Realtime represents a paradigm shift in building real-time applications by leveraging a distributed real-time data plane. It creates a seamless experience in real-time communication, transcending traditional geographical limitations and scalability concerns. Realtime is designed for developers looking to integrate WebRTC functionalities in a server-client architecture without delving deep into the complexities of regional scaling or server management.
### The Limitations of Centralized SFUs
Selective Forwarding Units (SFUs) play a critical role in managing WebRTC connections by selectively forwarding media streams to participants in a video call. However, their centralized nature introduces inherent limitations:
- **Regional Dependency:** A centralized SFU requires a specific region for deployment, leading to latency issues for global users except for those in proximity to the selected region.
- **Scalability Concerns:** Scaling a centralized SFU to meet global demand can be challenging and inefficient, often requiring additional infrastructure and complexity.
### How is Cloudflare Realtime different?
Cloudflare Realtime addresses these limitations by leveraging Cloudflare's global network infrastructure:
- **Global Distribution Without Regions:** Unlike traditional SFUs, Cloudflare Realtime operates on a global scale without regional constraints. It utilizes Cloudflare's extensive network of over 250 locations worldwide to ensure low-latency video forwarding, making it fast and efficient for users globally.
- **Decentralized Architecture:** There are no dedicated servers for Realtime. Every server within Cloudflare's network contributes to handling Realtime, ensuring scalability and reliability. This approach mirrors the distributed nature of Cloudflare's products such as 1.1.1.1 DNS or Cloudflare's CDN.
## How Cloudflare Realtime Works
### Establishing Peer Connections
To initiate a real-time communication session, an end user's client establishes a WebRTC PeerConnection to the nearest Cloudflare location. This connection benefits from anycast routing, optimizing for the lowest possible latency.
### Signaling and Media Stream Management
- **HTTPS API for Signaling:** Cloudflare Realtime simplifies signaling with a straightforward HTTPS API. This API manages the initiation and coordination of media streams, enabling clients to push new MediaStreamTracks or request these tracks from the server.
- **Efficient Media Handling:** Unlike traditional approaches that require multiple connections for different media streams from different clients, Cloudflare Realtime maintains a single PeerConnection per client. This streamlined process reduces complexity and improves performance by handling both the push and pull of media through a singular connection.
### Application-Level Management
Cloudflare Realtime delegates the responsibility of state management and participant tracking to the application layer. Developers are empowered to design their logic for handling events such as participant joins or media stream updates, offering flexibility to create tailored experiences in applications.
## Getting Started with Cloudflare Realtime
Integrating Cloudflare Realtime into your application promises a straightforward and efficient process, removing the hurdles of regional scalability and server management so you can focus on creating engaging real-time experiences for users worldwide.
---
# Changelog
URL: https://developers.cloudflare.com/realtime/changelog/
import { ProductReleaseNotes } from "~/components";
{/* */}
---
# DataChannels
URL: https://developers.cloudflare.com/realtime/datachannels/
DataChannels are a way to send arbitrary data, not just audio or video, between clients with low latency. DataChannels are useful for scenarios like chat, game state, or any other data that does not need to be encoded as audio or video but still needs to be sent between clients in real time.
While it is possible to send audio and video over DataChannels, it is not optimal, because audio and video transfer includes media-specific optimizations that DataChannels do not have, such as simulcast, forward error correction, and better caching across the Cloudflare network for retransmissions.
```mermaid
graph LR
A[Publisher] -->|Arbitrary data| B[Cloudflare Realtime SFU]
B -->|Arbitrary data| C@{ shape: procs, label: "Subscribers"}
```
DataChannels on Cloudflare Realtime can scale to many subscribers per publisher; there is no limit to the number of subscribers per publisher.
### How to use DataChannels
1. Create two Realtime sessions, one for the publisher and one for the subscribers.
2. Create a DataChannel by calling /datachannels/new with the location set to "local" and the dataChannelName set to the name of the DataChannel.
3. Create a DataChannel by calling /datachannels/new with the location set to "remote" and the sessionId set to the sessionId of the publisher.
4. Use the DataChannel to send data from the publisher to the subscribers.
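A hedged `curl` sketch of steps 2 and 3, using the field names above. The base URL, bearer-token auth, and the `$APP_ID`, `$APP_SECRET`, and session ID variables are assumptions; substitute the values from your own Realtime App and sessions:
```sh
# Step 2: create a "local" DataChannel on the publisher's session
curl -X POST "https://rtc.live.cloudflare.com/v1/apps/$APP_ID/sessions/$PUBLISHER_SESSION_ID/datachannels/new" \
  -H "Authorization: Bearer $APP_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"dataChannels": [{"location": "local", "dataChannelName": "game-updates"}]}'

# Step 3: create a "remote" DataChannel on a subscriber's session, pointing at the publisher
curl -X POST "https://rtc.live.cloudflare.com/v1/apps/$APP_ID/sessions/$SUBSCRIBER_SESSION_ID/datachannels/new" \
  -H "Authorization: Bearer $APP_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"dataChannels": [{"location": "remote", "sessionId": "'"$PUBLISHER_SESSION_ID"'", "dataChannelName": "game-updates"}]}'
```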
### Unidirectional DataChannels
Cloudflare Realtime SFU DataChannels are one way only. This means that you can only send data from the publisher to the subscribers. Subscribers cannot send data back to the publisher. While regular MediaStream WebRTC DataChannels are bidirectional, this introduces a problem for Cloudflare Realtime because the SFU does not know which session to send the data back to. This is especially problematic for scenarios where you have multiple subscribers and you want to send data from the publisher to all subscribers at scale, such as distributing game score updates to all players in a multiplayer game.
To send data in a bidirectional way, you can use two DataChannels, one for sending data from the publisher to the subscribers and one for sending data the opposite direction.
## Example
An example of DataChannels in action can be found in the [Realtime Examples github repo](https://github.com/cloudflare/calls-examples/tree/main/echo-datachannels).
---
# Demos
URL: https://developers.cloudflare.com/realtime/demos/
import { ExternalResources, GlossaryTooltip } from "~/components"
Learn how you can use Realtime within your existing architecture.
## Demos
Explore the following demo applications for Realtime.
---
# Quickstart guide
URL: https://developers.cloudflare.com/realtime/get-started/
:::note[Before you get started:]
You must first [create a Cloudflare account](/fundamentals/setup/account/create-account/).
:::
## Create your first app
Every Realtime App is a separate environment, so you can make one each for the development, staging, and production versions of your product.
Create a Realtime App using either the [Dashboard](https://dash.cloudflare.com/?to=/:account/calls) or the [API](/api/resources/calls/subresources/sfu/methods/create/). When you create a Realtime App, you will get:
* App ID
* App Secret
These two combined allow you to make Realtime API calls from your backend server.
---
# Example architecture
URL: https://developers.cloudflare.com/realtime/example-architecture/

1. Clients connect to the backend service
2. Backend service manages the relationship between the clients and the tracks they should subscribe to
3. Backend service contacts the Cloudflare Realtime API to pass the SDP from the clients to establish the WebRTC connection.
4. Realtime API relays back the Realtime API SDP reply and renegotiation messages.
5. If desired, headless clients can be used to record the content from other clients or publish content.
6. Admin manages the rooms and room members.
---
# Connection API
URL: https://developers.cloudflare.com/realtime/https-api/
Cloudflare Realtime simplifies the management of peer connections and media tracks through HTTPS API endpoints. These endpoints allow developers to efficiently manage sessions, add or remove tracks, and gather session information.
## API Endpoints
- **Create a New Session**: Initiates a new session on Cloudflare Realtime, which can be modified with other endpoints below.
- `POST /apps/{appId}/sessions/new`
- **Add a New Track**: Adds a media track (audio or video) to an existing session.
- `POST /apps/{appId}/sessions/{sessionId}/tracks/new`
- **Renegotiate a Session**: Updates the session's negotiation state to accommodate new tracks or changes in the existing ones.
- `PUT /apps/{appId}/sessions/{sessionId}/renegotiate`
- **Close a Track**: Removes a specified track from the session.
- `PUT /apps/{appId}/sessions/{sessionId}/tracks/close`
- **Retrieve Session Information**: Fetches detailed information about a specific session.
- `GET /apps/{appId}/sessions/{sessionId}`
[View full API and schema (OpenAPI format)](/realtime/static/calls-api-2024-05-21.yaml)
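For example, creating a new session might look like the following `curl` call (the base URL `https://rtc.live.cloudflare.com/v1` and bearer-token authentication are assumptions; substitute your own App ID and App secret):
```sh
curl -X POST "https://rtc.live.cloudflare.com/v1/apps/$APP_ID/sessions/new" \
  -H "Authorization: Bearer $APP_SECRET"
```
The response contains a session ID, which the other endpoints above take in their paths.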
## Handling Secrets
It is vital to manage App ID and its secret securely. While track and session IDs can be public, they should be protected to prevent misuse. An attacker could exploit these IDs to disrupt service if your backend server does not authenticate request origins properly, for example by sending requests to close tracks on sessions other than their own. Ensuring the security and authenticity of requests to your backend server is crucial for maintaining the integrity of your application.
## Using STUN and TURN Servers
Cloudflare Realtime is designed to operate efficiently without the need for TURN servers in most scenarios, as Cloudflare exposes a publicly routable IP address for Realtime. However, integrating a STUN server can be necessary for facilitating peer discovery and connectivity.
- **Cloudflare STUN Server**: `stun.cloudflare.com:3478`
Utilizing Cloudflare's STUN server can help the connection process for Realtime applications.
## Lifecycle of a Simple Session
This section provides an overview of the typical lifecycle of a simple session, focusing on audio-only applications. It illustrates how clients are notified by the backend server as new remote clients join or leave. Incorporating video would introduce additional tracks and considerations into the session.
```mermaid
sequenceDiagram
participant WA as WebRTC Agent
participant BS as Backend Server
participant CA as Realtime API
Note over BS: Client Joins
WA->>BS: Request
BS->>CA: POST /sessions/new
CA->>BS: newSessionResponse
BS->>WA: Response
WA->>BS: Request
BS->>CA: POST /sessions/{sessionId}/tracks/new (Offer)
CA->>BS: newTracksResponse (Answer)
BS->>WA: Response
WA-->>CA: ICE Connectivity Check
Note over WA: iceconnectionstatechange (connected)
WA-->>CA: DTLS Handshake
Note over WA: connectionstatechange (connected)
WA<<->>CA: *Media Flow*
Note over BS: Remote Client Joins
WA->>BS: Request
BS->>CA: POST /sessions/{sessionId}/tracks/new
CA->>BS: newTracksResponse (Offer)
BS->>WA: Response
WA->>BS: Request
BS->>CA: PUT /sessions/{sessionId}/renegotiate (Answer)
CA->>BS: OK
BS->>WA: Response
Note over BS: Remote Client Leaves
WA->>BS: Request
BS->>CA: PUT /sessions/{sessionId}/tracks/close
CA->>BS: closeTracksResponse
BS->>WA: Response
Note over BS: Client Leaves
WA->>BS: Request
BS->>CA: PUT /sessions/{sessionId}/tracks/close
CA->>BS: closeTracksResponse
BS->>WA: Response
```
---
# Cloudflare Realtime
URL: https://developers.cloudflare.com/realtime/
import { Description, LinkButton } from "~/components";
Build real-time serverless video, audio and data applications.
Cloudflare Realtime is infrastructure for real-time audio/video/data applications. It allows you to build real-time apps without worrying about scaling or regions. It can act as a selective forwarding unit (WebRTC SFU), as a fanout delivery system for broadcasting (WebRTC CDN) or anything in between.
Cloudflare Realtime runs on [Cloudflare's global cloud network](https://www.cloudflare.com/network/) in hundreds of cities worldwide.
Get started
Realtime dashboard
Orange Meets demo app
---
# Introduction
URL: https://developers.cloudflare.com/realtime/introduction/
Cloudflare Realtime can be used to add realtime audio, video and data into your applications. Cloudflare Realtime uses WebRTC, which is the lowest latency way to communicate across a broad range of platforms like browsers, mobile, and native apps.
Realtime integrates with your backend and frontend application to add realtime functionality.
## Why Cloudflare Realtime exists
* **It is difficult to scale WebRTC**: Many teams struggle to scale WebRTC servers. Operators run into limits on how many users can be in the same "room", or want to build unique solutions that do not fit the concepts exposed by high-level APIs.
* **High egress costs**: WebRTC is expensive to use because managed solutions charge a high premium on cloud egress, and running your own servers incurs system administration and scaling overhead. Cloudflare already has 300+ locations with upwards of 1,000 servers in some locations. Cloudflare Realtime scales easily on top of this architecture and can offer the lowest WebRTC usage costs.
* **WebRTC is growing**: Developers are realizing that WebRTC is not just for video conferencing. WebRTC is supported on many platforms, it is mature and well understood.
## What makes Cloudflare Realtime unique
* **Unopinionated**: Cloudflare Realtime does not offer an SDK. It instead gives you access to raw WebRTC to solve unique problems that might not fit into existing concepts. The API is deliberately simple.
* **No rooms**: Unlike other WebRTC products, Cloudflare Realtime lets you be in charge of each track (audio/video/data) instead of offering abstractions such as rooms. You define the presence protocol on top of simple pub/sub. Each end user can publish and subscribe to audio/video/data tracks as they wish.
* **No lock-in**: You can use Cloudflare Realtime to solve scalability issues with your SFU. You can use it in combination with a peer-to-peer architecture. You can use Cloudflare Realtime standalone. To what extent you use Cloudflare Realtime is up to you.
## What exactly does Cloudflare Realtime do?
* **SFU**: Realtime is a special kind of pub/sub server that is good at forwarding media data to clients that subscribe to certain data. Each client connects to Cloudflare Realtime via WebRTC and either sends data, receives data or both using WebRTC. This can be audio/video tracks or DataChannels.
* **It scales**: All Cloudflare servers act as a single server so millions of WebRTC clients can connect to Cloudflare Realtime. Each can send data, receive data or both with other clients.
## How most developers get started
1. Get started with the echo example, which you can download from the Cloudflare dashboard when you create a Realtime App or from [demos](/realtime/demos/). This will show you how to send and receive audio and video.
2. Understand how you can manipulate who can receive what media by passing around session and track ids. Remember, you control who receives what media. Each media track is represented by a unique ID. It is your responsibility to save and distribute this ID.
:::note[Realtime is not a presence protocol]
Realtime does not know what a room is. It only knows media tracks. It is up to you to make a room by saving who is in a room along with the track IDs that uniquely identify each media track. If each participant publishes their audio/video, and receives audio/video from each other, you have got yourself a video conference!
:::
3. Create an app where you manage each connection to Cloudflare Realtime and the track IDs created by each connection. You can use any tool to save and share tracks. Check out the example apps at [demos](/realtime/demos/), such as [Orange Meets](https://github.com/cloudflare/orange), which is a full-fledged video conferencing app that uses [Workers Durable Objects](/durable-objects/) to keep track of track IDs.
---
# Limits, timeouts and quotas
URL: https://developers.cloudflare.com/realtime/limits/
Understanding the limits and timeouts of Cloudflare Realtime is crucial for optimizing the performance and reliability of your applications. This section outlines the key constraints and behaviors you should be aware of when integrating Cloudflare Realtime into your app.
## Free
* Each account gets 1,000GB/month of data transfer from Cloudflare to your client for free.
* Data transfer from your client to Cloudflare is always free of charge.
## Limits
* **API Calls per Session**: You can make up to 50 API calls per second for each session. There is no rate limit per App, only per session.
* **Tracks per API Call**: Up to 64 tracks can be added with a single API call. If you need to add more tracks to a session, you should distribute them across multiple API calls.
* **Tracks per Session**: There is no upper limit to the number of tracks a session can contain; the practical limit is governed by your connection's bandwidth to and from Cloudflare.
## Inactivity Timeout
* **Track Timeout**: Tracks will automatically timeout and be garbage collected after 30 seconds of inactivity, where inactivity is defined as no media packets being received by Cloudflare. This mechanism ensures efficient use of resources and session cleanliness across all Sessions that use a track.
## PeerConnection Requirements
* **Session State**: For any operation on a session (e.g., pulling or pushing tracks), the PeerConnection state must be `connected`. Operations will block for up to 5 seconds awaiting this state before timing out. This ensures that only active and viable sessions are engaged in media transmission.
## Handling Connectivity Issues
* **Internet Connectivity Considerations**: The potential for internet connectivity loss between the client and Cloudflare is an operational reality that must be addressed. Implementing a detection and reconnection strategy is recommended to maintain session continuity. This could involve periodic 'heartbeat' signals to your backend server to monitor connectivity status. Upon detecting connectivity issues, automatically attempting to reconnect and establish a new session is advised. Sessions and tracks will remain available for reuse for 30 seconds before timing out, providing a brief window for reconnection attempts.
Adhering to these limits and understanding the timeout behaviors will help ensure that your applications remain responsive and stable while providing a seamless user experience.
---
# Pricing
URL: https://developers.cloudflare.com/realtime/pricing/
Cloudflare Realtime billing is based on data sent from Cloudflare edge to your application.
Cloudflare Realtime SFU and TURN services cost $0.05 per GB of data egress.
There is a free tier of 1,000 GB before any charges start. This free tier includes usage from both SFU and TURN services, not two independent free tiers. Cloudflare Realtime billing appears as a single line item on your Cloudflare bill, covering both SFU and TURN.
Traffic between Cloudflare Realtime TURN and Cloudflare Realtime SFU or Cloudflare Stream (WHIP/WHEP) does not get double charged, so if you are using both SFU and TURN at the same time, you will get charged for only one.
### TURN
Please see the [TURN FAQ page](/realtime/turn/faq), where there is additional information on specifically which traffic path from RFC8656 is measured and counts towards billing.
### SFU
Only traffic originating from Cloudflare towards clients incurs charges. Traffic pushed to Cloudflare incurs no charge, even if there is no client pulling the same traffic from Cloudflare.
---
# Sessions and Tracks
URL: https://developers.cloudflare.com/realtime/sessions-tracks/
Cloudflare Realtime offers a simple yet powerful framework for building real-time experiences. At the core of this system are three key concepts: **Applications**, **Sessions** and **Tracks**. Familiarizing yourself with these concepts is crucial for using Realtime.
## Application
A Realtime Application is an environment within which different Sessions and Tracks can interact. Examples could be production, staging, or other environments where you would want separation between Sessions and Tracks. Cloudflare Realtime usage can be queried at the Application, Session, or Track level.
## Sessions
A **Session** in Cloudflare Realtime correlates directly to a WebRTC PeerConnection. It represents the establishment of a communication channel between a client and the nearest Cloudflare data center, as determined by Cloudflare's anycast routing. Typically, a client will maintain a single Session, encompassing all communications between the client and Cloudflare.
* **One-to-One Mapping with PeerConnection**: Each Session is a direct representation of a WebRTC PeerConnection, facilitating real-time media data transfer.
* **Anycast Routing**: The client connects to the closest Cloudflare data center, optimizing latency and performance.
* **Unified Communication Channel**: A single Session can handle all types of communication between a client and Cloudflare, ensuring streamlined data flow.
## Tracks
Within a Session, there can be one or more **Tracks**.
* **Tracks map to MediaStreamTrack**: Tracks align with the MediaStreamTrack concept, facilitating audio, video, or data transmission.
* **Globally Unique IDs**: When you push a track to Cloudflare, it is assigned a unique ID, which can then be used to pull the track into another session elsewhere.
* **Available globally**: The ability to push and pull tracks is central to what makes Realtime a versatile tool for real-time applications. Each track is available globally to be retrieved from any Session within an App.
## Realtime as a Programmable "Switchboard"
The analogy of a switchboard is apt for understanding Realtime. Historically, switchboard operators connected calls by manually plugging in jacks. Similarly, Realtime allows for the dynamic routing of media streams, acting as a programmable switchboard for modern real-time communication.
## Beyond "Rooms", "Users", and "Participants"
While many SFUs utilize concepts like "rooms" to manage media streams among users, this approach has scalability and flexibility limitations. Cloudflare Realtime opts for a more granular and flexible model with Sessions and Tracks, enabling a wide range of use cases:
* Large-scale remote events, like 'fireside chats' with thousands of participants.
* Interactive conversations with the ability to bring audience members "on stage."
* Educational applications where an instructor can present to multiple virtual classrooms simultaneously.
### Presence Protocol vs. Media Flow
Realtime distinguishes between the presence protocol and media flow, allowing for scalability and flexibility in real-time applications. This separation enables developers to craft tailored experiences, from intimate calls to massive, low-latency broadcasts.
---
# Simulcast
URL: https://developers.cloudflare.com/realtime/simulcast/
Simulcast is a feature of WebRTC that allows a publisher to send multiple video streams of the same media at different qualities. For example, this is useful for scenarios where you want to send a high quality stream for desktop users and a lower quality stream for mobile users.
```mermaid
graph LR
A[Publisher] -->|Low quality| B[Cloudflare Realtime SFU]
A -->|Medium quality| B
A -->|High quality| B
B -->|Low quality| C@{ shape: procs, label: "Subscribers"}
B -->|Medium quality| D@{ shape: procs, label: "Subscribers"}
B -->|High quality| E@{ shape: procs, label: "Subscribers"}
```
### How it works
Simulcast in WebRTC allows a single video source, like a camera or screen share, to be encoded at multiple quality levels and sent simultaneously, which is beneficial for subscribers with varying network conditions and device capabilities. The video source is encoded into multiple streams, each identified by RIDs (RTP Stream Identifiers) for different quality levels, such as low, medium, and high. These simulcast streams are described in the SDP you send to Cloudflare Realtime SFU. It's the responsibility of the Cloudflare Realtime SFU to ensure that the appropriate quality stream is delivered to each subscriber based on their network conditions and device capabilities.
Cloudflare Realtime SFU will automatically handle the simulcast configuration based on the SDP you send to it from the publisher. The SFU will then automatically switch between the different quality levels based on the subscriber's network conditions, or the quality level can be controlled manually via the API. You can control the quality switching behavior using the `simulcast` configuration object when you send an API call to start pulling a remote track.
### Quality Control
The `simulcast` configuration object in the API call when you start pulling a remote track allows you to specify:
- `preferredRid`: The preferred quality level for the video stream (RID for the simulcast stream. [RIDs can be specified by the publisher.](https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/setParameters#encodings))
- `priorityOrdering`: Controls how the SFU handles bandwidth constraints.
- `none`: Keep sending the preferred layer, set via the preferredRid, even if there's not enough bandwidth.
- `asciibetical`: Use alphabetical ordering (a-z) to determine priority, where 'a' is most desirable and 'z' is least desirable.
- `ridNotAvailable`: Controls what happens when the preferred RID is no longer available, for example when the publisher stops sending it.
- `none`: Do nothing.
- `asciibetical`: Switch to the next available RID based on the priority ordering, where 'a' is most desirable and 'z' is least desirable.
You will likely want to order the asciibetical RIDs based on your desired metric, such as highest resolution to lowest, or highest bandwidth to lowest.
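As a hedged sketch, a pull request body with this configuration might look like the following (the field names around the `simulcast` object are assumptions beyond the options documented above):
```jsonc
{
  "tracks": [
    {
      "location": "remote",
      "sessionId": "<publisher session ID>",
      "trackName": "<track name from the publisher>",
      "simulcast": {
        "preferredRid": "f",                // prefer the full-resolution layer
        "priorityOrdering": "asciibetical", // fall back down the a-z ordering under bandwidth pressure
        "ridNotAvailable": "asciibetical"   // switch layers if "f" stops being published
      }
    }
  ]
}
```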
### Bandwidth Management across media tracks
Cloudflare Realtime treats all media tracks equally at the transport level. For example, if you have multiple video tracks (cameras, screen shares, etc.), they all have equal priority for bandwidth allocation. This means:
1. Each track's simulcast configuration is handled independently
1. The SFU performs automatic bandwidth estimation and layer switching based on network conditions independently for each track
### Layer Switching Behavior
When a layer switch is requested (through updating `preferredRid`) with the `/tracks/update` API:
1. The SFU will automatically generate a Full Intraframe Request (FIR)
2. PLI generation is debounced to prevent excessive requests
### Publisher Configuration
For publishers (local tracks), you only need to include the simulcast attributes in your SDP. The SFU will automatically handle the simulcast configuration based on the SDP. For example, the SDP should contain a section like this:
```sdp
a=simulcast:send f;h;q
a=rid:f send
a=rid:h send
a=rid:q send
```
If the publisher endpoint is a browser you can include these by specifying `sendEncodings` when creating the transceiver like this:
```js
const transceiver = peerConnection.addTransceiver(track, {
direction: "sendonly",
sendEncodings: [
{ scaleResolutionDownBy: 1, rid: "f" },
{ scaleResolutionDownBy: 2, rid: "h" },
{ scaleResolutionDownBy: 4, rid: "q" }
]
});
```
## Example
Here's an example of how to use simulcast with Cloudflare Realtime:
1. Create a new local track with simulcast configuration. There should be a section in the SDP with `a=simulcast:send`.
2. Use the [Cloudflare Realtime API](/realtime/https-api) to push this local track, by calling the /tracks/new endpoint.
3. Use the [Cloudflare Realtime API](/realtime/https-api) to start pulling a remote track (from another browser or device), by calling the /tracks/new endpoint and specifying the `simulcast` configuration object along with the remote track ID you get from step 2.
For more examples, check out the [Realtime Examples GitHub repository](https://github.com/cloudflare/calls-examples/tree/main/simulcast).
---
# FAQ
URL: https://developers.cloudflare.com/stream/faq/
import { GlossaryTooltip } from "~/components"
## Stream
### What formats and quality levels are delivered through Cloudflare Stream?
Cloudflare decides which bitrate, resolution, and codec are best for you. We deliver all videos using the industry-standard H.264 codec. We use a few different adaptive streaming levels from 360p to 1080p to ensure smooth streaming for your audience watching on different devices and bandwidth constraints.
### Can I download original video files from Stream?
You cannot download the *exact* input file that you uploaded. However, depending on your use case, you can use the [Downloadable Videos](/stream/viewing-videos/download-videos/) feature to get encoded MP4s for use cases like offline viewing.
### Is there a limit to the amount of videos I can upload?
* By default, a video upload can be at most 30 GB.
* By default, you can have up to 120 videos queued or being encoded simultaneously. Videos in the `ready` status are playable but may still be encoding certain quality levels until the `pctComplete` reaches 100. Videos in the `error`, `ready`, or `pendingupload` state do not count toward this limit. If you need the concurrency limit raised, [contact Cloudflare support](/support/contacting-cloudflare-support/) explaining your use case and why you would like the limit raised.
:::note
The limit to the number of videos only applies to videos being uploaded to Cloudflare Stream. This limit is not related to the number of end users streaming videos.
:::
* An account cannot upload videos if the total video duration exceeds the video storage capacity purchased.
Limits apply to Direct Creator Uploads at the time of upload URL creation.
Uploads over these limits will receive a [429 (Too Many Requests)](/support/troubleshooting/http-status-codes/4xx-client-error/#429-too-many-requests) or [413 (Payload too large)](/support/troubleshooting/http-status-codes/4xx-client-error/#413-payload-too-large) HTTP status code, with more information in the response body. Please write to Cloudflare support or your customer success manager for higher limits.
### Can I embed videos on Stream even if my domain is not on Cloudflare?
Yes. Stream videos can be embedded on any domain, even domains not on Cloudflare.
### What input file formats are supported?
Users can upload video in the following file formats:
MP4, MKV, MOV, AVI, FLV, MPEG-2 TS, MPEG-2 PS, MXF, LXF, GXF, 3GP, WebM, MPG, QuickTime
### Does Stream support High Dynamic Range (HDR) video content?
When HDR videos are uploaded to Stream, they are re-encoded and delivered in SDR format, to ensure compatibility with the widest range of viewing devices.
### What frame rates (FPS) are supported?
Cloudflare Stream supports video file uploads for any FPS, however videos will be re-encoded for 70 FPS playback. If the original video file has a frame rate lower than 70 FPS, Stream will re-encode at the original frame rate.
If the frame rate is variable, we will drop frames (for example, if there is more than 1 frame within 1/30 seconds, we will drop the extra frames within that period).
### What browsers does Stream work on?
You can embed the Stream player on the following platforms:
| Browser | Version |
| ------- | ----------------------------------- |
| Chrome | Supported since Chrome version 88+ |
| Firefox | Supported since Firefox version 87+ |
| Edge | Supported since Edge 89+ |
| Safari | Supported since Safari version 14+ |
| Opera | Supported since Opera version 75+ |
:::note[Note]
Cloudflare Stream is not available on Chromium, as Chromium does not support H.264 videos.
:::
| Mobile Platform | Version |
| --------------------- | ------------------------------------------------------------------------ |
| Chrome on Android | Supported on Chrome 90 |
| UC Browser on Android | Supported on version 12.12+ |
| Samsung Internet | Supported on 13+ |
| Safari on iOS | Supported on iOS 13.4+. Speed selector supported when not in fullscreen. |
### What are the recommended upload settings for video uploads?
If you are producing a brand new file for Cloudflare Stream, we recommend you use the following settings:
* MP4 containers, AAC audio codec, H264 video codec, 30 or below frames per second
* moov atom should be at the front of the file (Fast Start)
* H264 progressive scan (no interlacing)
* H264 high profile
* Closed GOP
* Content should be encoded and uploaded in the same frame rate it was recorded
* Mono or Stereo audio (Stream will mix audio tracks with more than 2 channels down to stereo)
Below are bitrate recommendations for encoding new videos for Stream:
| Resolution | Recommended bitrate |
| ---------- | ------------------- |
| 1080p | 8 Mbps |
| 720p | 4.8 Mbps |
| 480p | 2.4 Mbps |
| 360p | 1 Mbps |
### If I cancel my stream subscription, are the videos deleted?
Videos are removed if the subscription is not renewed within 30 days.
### I use Content Security Policy (CSP) on my website. What domains do I need to add to which directives?
If your website uses Content Security Policy (CSP) directives, depending on your configuration, you may need to add Cloudflare Stream's domains to particular directives, in order to allow videos to be viewed or uploaded by your users.
If you use the provided [Stream Player](/stream/viewing-videos/using-the-stream-player/), `videodelivery.net` and `*.cloudflarestream.com` must be included in the `frame-src` or `default-src` directive to allow the player's `