Worker Runtime
The worker runtime allows you to run server-side JavaScript to intercept and handle HTTP requests.
Include a _worker.js file in your deploy zip to use a Cloudflare Workers-compatible
runtime instead of the static file pipeline.
Overview
Workers give you full control over how requests are handled. Instead of serving static files directly, you write JavaScript that receives incoming requests and returns responses. This enables dynamic content, API endpoints, authentication, server-side rendering, and more — all while maintaining the simplicity of a single-command deploy.
The runtime supports two JavaScript engines: V8 (JIT-compiled, recommended for Linux and macOS) and QuickJS (interpreted, available on all platforms including Windows). Both engines provide the same Workers API — choose V8 for best performance, or QuickJS for maximum portability.
Basic example
Create a _worker.js file in the root of your site directory. It must export a default
object with a fetch() handler:
export default {
async fetch(request, env, ctx) {
// request: Request object (method, url, headers, body)
// env: bindings (env vars, secrets, KV, storage buckets, ASSETS)
// ctx: execution context — waitUntil() and passThroughOnException() are accepted but are currently no-ops
return new Response("Hello from worker!");
},
async scheduled(event, env, ctx) {
// event: { scheduledTime, cron }
// Called by cron triggers, not HTTP requests
console.log("Cron triggered:", event.cron);
}
};
The fetch() handler is invoked for every HTTP request to your site's subdomain.
The scheduled() handler is optional and is called by cron triggers.
The tail() handler is optional and receives log events from other worker executions
for log forwarding or analytics.
Available Web APIs
Workers have access to a Cloudflare Workers-compatible subset of standard Web APIs:
- Request / Response — Standard constructors with instance methods `.text()`, `.json()`, `.arrayBuffer()`, `.clone()`; static methods `Response.json(data, init?)` and `Response.redirect(url, status?)`
- Headers — Full `get`/`set`/`has`/`delete`/`append`/`forEach`/`entries`/`keys`/`values` API
- URL / URLSearchParams — Standard URL parsing and manipulation
- TextEncoder / TextDecoder — UTF-8 encoding and decoding
- fetch() — Outbound HTTP requests (rate-limited, SSRF-protected — blocks private/loopback IPs)
- console.log / info / warn / error / debug — All five levels are captured to per-site log storage, viewable via API
- atob() / btoa() — Base64 decode and encode (Latin-1 strings)
- setTimeout / setInterval / clearTimeout / clearInterval — Timer functions backed by a real Go event loop with actual delays (see note below)
- AbortController / AbortSignal — Request cancellation; includes `AbortSignal.abort()` and `AbortSignal.timeout(ms)`
- Event / EventTarget / DOMException — DOM event primitives used internally by AbortSignal and custom event emitters
- crypto.getRandomValues(typedArray) — Fill a typed array with cryptographically random bytes (up to 65536 bytes)
- crypto.randomUUID() — Generate a random UUID v4 string
- crypto.subtle — Web Crypto API subset:
  - `digest(algorithm, data)` — SHA-1, SHA-256, SHA-384, SHA-512
  - `generateKey(algorithm, extractable, usages)` — HMAC, AES-GCM, AES-CBC, ECDSA (P-256, P-384), ECDH (P-256, P-384, P-521), X25519, RSA-OAEP, RSASSA-PKCS1-v1_5, RSA-PSS, Ed25519
  - `importKey(format, keyData, algorithm, extractable, usages)` — Formats: `raw`, `jwk`, `pkcs8` (RSA), `spki` (RSA). Algorithms: HMAC, AES-GCM, AES-CBC, ECDSA, ECDH, X25519, RSA-OAEP, RSASSA-PKCS1-v1_5, RSA-PSS, Ed25519
  - `exportKey(format, key)` — Formats: `raw`, `jwk`, `pkcs8` (RSA), `spki` (RSA); key must be extractable
  - `sign(algorithm, key, data)` — HMAC, ECDSA, RSASSA-PKCS1-v1_5, RSA-PSS, Ed25519
  - `verify(algorithm, key, signature, data)` — HMAC, ECDSA, RSASSA-PKCS1-v1_5, RSA-PSS, Ed25519
  - `encrypt(algorithm, key, data)` — AES-GCM (12-byte IV), AES-CBC (16-byte IV), RSA-OAEP
  - `decrypt(algorithm, key, data)` — AES-GCM (12-byte IV), AES-CBC (16-byte IV), RSA-OAEP
  - `deriveBits(algorithm, baseKey, length)` — HKDF, PBKDF2, ECDH, X25519
  - `deriveKey(algorithm, baseKey, derivedKeyAlgorithm, extractable, usages)` — HKDF, PBKDF2, ECDH, X25519
  - `wrapKey(format, key, wrappingKey, wrapAlgorithm)` — Wraps via exportKey + encrypt
  - `unwrapKey(format, wrappedKey, unwrappingKey, unwrapAlgo, unwrappedKeyAlgo, extractable, usages)` — Unwraps via decrypt + importKey
- ReadableStream / WritableStream / TransformStream — Web Streams API with reader/writer locks, async iteration, and identity transform passthrough
- Blob / File / FormData — Binary data construction and multipart form handling; `Blob` and `File` support `.text()` and `.arrayBuffer()`
- WebSocket / WebSocketPair — Server-side WebSocket support (Cloudflare Workers compatible). Create a `WebSocketPair` to get a client/server pair, call `server.accept()`, and return a 101 Response with the client as the `webSocket` property to upgrade the connection
- HTMLRewriter — Streaming HTML transformation (Cloudflare Workers compatible). Chain `.on(selector, handler)` and `.onDocument(handler)` calls, then `.transform(response)` to rewrite HTML. Element handlers support `getAttribute`, `setAttribute`, `removeAttribute`, `before`, `after`, `prepend`, `append`, `setInnerContent`, `remove`, and `tagName` mutation
- CompressionStream / DecompressionStream — Streaming compression and decompression. Supported formats: `gzip`, `deflate`, `deflate-raw`, `br` (Brotli)
- structuredClone(value) — Deep-clone JSON-serializable values; functions, symbols, Map, Set, WeakMap, and WeakSet are not supported
- queueMicrotask(fn) — Schedule a callback on the next microtask tick
- performance.now() — Milliseconds elapsed since the runtime was initialized (Go-backed, sub-millisecond precision)
- navigator.userAgent — Returns `"hostedat-worker/1.0"`
- URLPattern — URL pattern matching with named groups and wildcards. Supports `test(input)` and `exec(input)` methods for matching URLs against patterns with `:param` and `*` placeholders
- EventSource — Server-Sent Events (SSE) client. Connects to an SSE endpoint and dispatches `message`, `open`, and `error` events. Supports `close()` and `readyState`. SSRF-protected
- connect(address, options?) — TCP socket connections (Cloudflare Workers compatible). Returns a socket with `readable`/`writable` streams, `opened`/`closed` promises, and `startTls()` for TLS upgrade. SSRF-protected — blocks private/loopback IPs
- TextEncoderStream / TextDecoderStream — Streaming UTF-8 encoding and decoding as TransformStreams. Pipe bytes through `TextDecoderStream` to get strings, or strings through `TextEncoderStream` to get bytes
- IdentityTransformStream — A no-op TransformStream that passes chunks through unchanged
- crypto.DigestStream — A WritableStream that computes a hash digest (SHA-1, SHA-256, SHA-384, SHA-512) as data is written. Access the result via the `.digest` promise after closing
- MessageChannel / MessagePort — In-memory message passing between ports. Create a `MessageChannel` to get a `port1`/`port2` pair, then use `postMessage(data)` and `onmessage` to communicate
- Cache API (caches / CacheStorage) — HTTP response caching backed by SQLite. Use `caches.default` or `await caches.open(name)` to get a `Cache` instance with `match(request)`, `put(request, response)`, and `delete(request)`. TTL derived from `Cache-Control: max-age`
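Several of the streaming entries above compose directly. As a quick sketch (plain JavaScript, no worker bindings assumed), here is a gzip round trip through `CompressionStream` and `DecompressionStream`:

```javascript
// Compress a string with gzip, then decompress it back, using only the
// streaming APIs listed above (Blob, CompressionStream, DecompressionStream,
// Response).
async function gzipRoundTrip(text) {
  const compressed = new Blob([text])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  const restored = compressed.pipeThrough(new DecompressionStream("gzip"));
  // Response.text() drains the stream and decodes UTF-8.
  return await new Response(restored).text();
}
```

The same shape works on response bodies, for example piping an upstream `response.body` through a `DecompressionStream` before reading it.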
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url);
if (url.pathname === "/api/hello") {
return new Response(JSON.stringify({ message: "Hello, world!" }), {
headers: { "Content-Type": "application/json" }
});
}
return new Response("Not found", { status: 404 });
}
};
Example: Crypto
Generate UUIDs, hash data, sign requests, or encrypt values using the built-in
crypto global. All crypto.subtle operations return Promises.
export default {
async fetch(request, env) {
// Generate a random UUID
const id = crypto.randomUUID();
// Fill a typed array with random bytes
const buf = new Uint8Array(16);
crypto.getRandomValues(buf);
// SHA-256 hash of a string
const encoder = new TextEncoder();
const data = encoder.encode("hello world");
const hashBuffer = await crypto.subtle.digest("SHA-256", data);
const hashArray = Array.from(new Uint8Array(hashBuffer));
const hashHex = hashArray.map(b => b.toString(16).padStart(2, "0")).join("");
return Response.json({ id, hashHex });
}
};
export default {
async fetch(request, env) {
const encoder = new TextEncoder();
// Import a raw HMAC key (use env.SECRET in practice)
const keyData = encoder.encode("my-secret-key");
const key = await crypto.subtle.importKey(
"raw",
keyData,
{ name: "HMAC", hash: "SHA-256" },
false, // not extractable
["sign", "verify"]
);
// Sign a payload
const payload = encoder.encode("user:42");
const signature = await crypto.subtle.sign("HMAC", key, payload);
// Verify the signature
const valid = await crypto.subtle.verify("HMAC", key, signature, payload);
return Response.json({ valid });
}
};
export default {
async fetch(request, env) {
const encoder = new TextEncoder();
// AES-GCM requires a 128, 192, or 256-bit key (16, 24, or 32 bytes)
const rawKey = new Uint8Array(32);
crypto.getRandomValues(rawKey);
const key = await crypto.subtle.importKey(
"raw",
rawKey,
{ name: "AES-GCM" },
false,
["encrypt", "decrypt"]
);
// AES-GCM requires a 12-byte IV
const iv = new Uint8Array(12);
crypto.getRandomValues(iv);
const plaintext = encoder.encode("secret message");
const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
const decrypted = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
return Response.json({
ciphertextBytes: new Uint8Array(ciphertext).length,
decrypted: new TextDecoder().decode(decrypted)
});
}
};
Example: Timers and encoding
setTimeout and setInterval use a Go-backed event loop with real
wall-clock delays. A setTimeout(fn, 500) call will actually wait 500 ms before
firing. All timers are bounded by the worker execution timeout — any pending timers are
cancelled when the timeout is reached.
export default {
async fetch(request) {
const encoded = btoa("Hello, worker!"); // "SGVsbG8sIHdvcmtlciE="
const decoded = atob(encoded); // "Hello, worker!"
return Response.json({ encoded, decoded });
}
};
export default {
async fetch(request) {
const result = await new Promise((resolve) => {
setTimeout(() => resolve("done after 100ms"), 100);
});
return new Response(result);
}
};
Example: FormData and Blob
Use FormData to construct multipart form bodies for upstream requests.
Blob and File can be used anywhere binary data or file uploads
are needed, including storage put() calls.
export default {
async fetch(request, env) {
// Build a multipart/form-data body to send to an upstream API
const form = new FormData();
form.append("username", "alice");
form.append("avatar", new File(["<svg/>"], "avatar.svg", { type: "image/svg+xml" }));
const response = await fetch("https://api.example.com/upload", {
method: "POST",
body: form,
});
return Response.json({ status: response.status });
}
};
export default {
async fetch(request, env) {
// Store a Blob directly in a storage bucket
const blob = new Blob(['{"hello":"world"}'], { type: "application/json" });
await env.MY_BUCKET.put("data.json", blob, {
httpMetadata: { contentType: blob.type }
});
return new Response("stored", { status: 200 });
}
};
Example: Cache API
Use the Cache API to store and retrieve HTTP responses. Each cache is backed by
SQLite and supports TTL via Cache-Control: max-age headers. Use caches.default
for the default cache or await caches.open(name) for named caches.
export default {
async fetch(request, env) {
const url = new URL(request.url);
const cacheKey = url.pathname;
// Check the default cache first.
const cached = await caches.default.match(cacheKey);
if (cached) {
return new Response(await cached.text(), {
headers: { "X-Cache": "HIT", "Content-Type": "application/json" }
});
}
// Fetch fresh data from upstream.
const data = { time: new Date().toISOString(), path: url.pathname };
const response = new Response(JSON.stringify(data), {
headers: {
"Content-Type": "application/json",
"Cache-Control": "max-age=60" // TTL: 60 seconds
}
});
// Store a copy in the cache before returning the original.
await caches.default.put(cacheKey, response.clone());
return response;
}
};
Example: D1 database
D1 databases are Cloudflare Workers-compatible SQL databases backed by isolated SQLite instances.
Each D1 binding gets its own database file, completely separate from the application database.
Use prepare() to create parameterized statements and .all(), .first(),
.run(), or .raw() to execute them.
export default {
async fetch(request, env) {
const url = new URL(request.url);
if (url.pathname === "/users") {
// Query all users.
const { results } = await env.DB.prepare("SELECT id, name, email FROM users").all();
return Response.json(results);
}
if (url.pathname === "/users/create" && request.method === "POST") {
const { name, email } = await request.json();
// Parameterized insert — safe from SQL injection.
const result = await env.DB.prepare(
"INSERT INTO users (name, email) VALUES (?, ?)"
).bind(name, email).run();
return Response.json({ success: true, meta: result.meta });
}
return new Response("Not found", { status: 404 });
}
};
export default {
async fetch(request, env) {
// Run raw SQL to set up tables (useful for migrations).
await env.DB.exec(`
CREATE TABLE IF NOT EXISTS posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL,
body TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
)
`);
// Batch multiple prepared statements in a single call.
const results = await env.DB.batch([
env.DB.prepare("INSERT INTO posts (title, body) VALUES (?, ?)").bind("Hello", "First post"),
env.DB.prepare("INSERT INTO posts (title, body) VALUES (?, ?)").bind("World", "Second post"),
env.DB.prepare("SELECT * FROM posts"),
]);
return Response.json({
inserted: results.slice(0, 2).map(r => r.meta),
posts: results[2].results
});
}
};
Example: Durable Objects
Durable Objects provide strongly consistent, per-object key-value storage backed by SQLite.
Each object has a unique ID (deterministic from a name or randomly generated) and its own
isolated storage with get, put, delete, deleteAll,
and list methods. Storage operations return Promises.
export default {
async fetch(request, env) {
const url = new URL(request.url);
const userId = url.searchParams.get("user") || "anonymous";
// Get a deterministic ID from the user name.
const id = env.COUNTERS.idFromName(userId);
// Get a stub to interact with this specific object.
const stub = env.COUNTERS.get(id);
// Read and increment the counter in storage.
const current = await stub.storage.get("count") || 0;
await stub.storage.put("count", current + 1);
return Response.json({
user: userId,
objectId: id.toString(),
count: current + 1
});
}
};
export default {
async fetch(request, env) {
const id = env.SESSIONS.idFromName("demo-session");
const stub = env.SESSIONS.get(id);
// Put multiple values at once.
await stub.storage.put({ name: "Alice", role: "admin", loginAt: Date.now() });
// Get multiple values (returns a Map).
const data = await stub.storage.get(["name", "role"]);
// List all keys with a prefix.
const allKeys = await stub.storage.list({ prefix: "", limit: 100 });
// Delete a single key.
await stub.storage.delete("loginAt");
return Response.json({
name: data.get("name"),
role: data.get("role"),
totalKeys: allKeys.size
});
}
};
Environment bindings
The env parameter provides access to environment variables, secrets, KV namespaces,
D1 databases, Durable Object namespaces, Queues, Service Bindings,
storage bucket bindings, and the static asset pipeline.
Environment variables & secrets
Set environment variables via the API or dashboard. Both are accessed the same way in code, but secrets are masked in API list responses. Use secrets for API keys and sensitive values.
export default {
async fetch(request, env, ctx) {
const apiKey = env.MY_API_KEY; // secret
const apiUrl = env.API_URL; // plain env var
// Use environment bindings in your requests
const headers = new Headers();
headers.set("Authorization", "Bearer " + apiKey);
const response = await fetch(apiUrl, { headers });
return response;
}
};
KV Namespaces
KV namespaces provide persistent key-value storage backed by SQLite. Each namespace is isolated and accessed via a simple async API. Maximum value size is 1 MB.
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url);
if (url.pathname === "/counter") {
// Get current count
const count = await env.MY_KV.get("counter") || "0";
const newCount = parseInt(count) + 1;
// Store new count with metadata
await env.MY_KV.put("counter", String(newCount), {
metadata: { lastUpdated: new Date().toISOString() }
});
return new Response("Count: " + newCount);
}
return new Response("Not found", { status: 404 });
}
};
KV API reference
- `env.NAMESPACE.get(key)` → `Promise<string|null>` — Retrieve a value by key
- `env.NAMESPACE.put(key, value, options?)` → `Promise<void>` — Store a value. Options: `{ metadata?, expirationTtl? }`
- `env.NAMESPACE.delete(key)` → `Promise<void>` — Delete a key
- `env.NAMESPACE.list(options?)` → `Promise<{ keys: [{ name, metadata? }] }>` — List keys. Options: `{ prefix?, limit? }`
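A common pattern built from these methods is a read-through cache. The sketch below assumes a KV-style binding is passed in; the helper name and the compute callback are illustrative, not part of the runtime:

```javascript
// Read-through cache: return the stored value if present, otherwise compute
// it, store it with a TTL via the put() options above, and return it.
async function getCached(kv, key, ttlSeconds, compute) {
  const hit = await kv.get(key);
  if (hit !== null) return { value: hit, cached: true };
  const value = await compute();
  await kv.put(key, value, { expirationTtl: ttlSeconds });
  return { value, cached: false };
}
```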
Storage bucket bindings (R2-compatible)
Storage bindings expose S3 buckets as Worker bindings (for example, env.DOWNLOADS) with
an R2-style API. Create them via the CLI, the dashboard, or the API.
# Create a public bucket binding
hostedat storage create <site> --name DOWNLOADS --bucket <site-id>-downloads --public
# List buckets to get bucket IDs
hostedat storage list <site>
# Toggle visibility later
hostedat storage update <site> <bucket-id> --private
hostedat storage update <site> <bucket-id> --public

curl -X POST https://your-hostedat-domain/api/v1/sites/<site-id>/storage/buckets \
-H "Authorization: Bearer <api-key>" \
-H "Content-Type: application/json" \
-d '{
"name": "DOWNLOADS",
"bucket_name": "<site-id>-downloads",
"public": true
}'
Naming rules: `name` must match `^[A-Z][A-Z0-9_]{0,63}$` (for example,
`DOWNLOADS`), and `bucket_name` must be 3-63 characters of lowercase letters/digits/dot/hyphen,
must not be an IP address, and must start with `{site-id}-`.
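These rules are simple enough to check client-side before calling the API. A sketch, assuming the rules as stated above (the helper name and example ids are illustrative):

```javascript
// Validate a binding name and bucket name against the naming rules above.
const BINDING_NAME_RE = /^[A-Z][A-Z0-9_]{0,63}$/;

function isValidBucketName(siteId, bucketName) {
  if (bucketName.length < 3 || bucketName.length > 63) return false;
  if (!/^[a-z0-9.-]+$/.test(bucketName)) return false;          // lowercase letters/digits/dot/hyphen only
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(bucketName)) return false; // must not look like an IP address
  return bucketName.startsWith(siteId + "-");                   // must start with {site-id}-
}
```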
export default {
async fetch(request, env) {
const path = new URL(request.url).pathname;
if (path === "/storage/write") {
await env.DOWNLOADS.put("meta/build.json", JSON.stringify({
builtAt: new Date().toISOString()
}), {
httpMetadata: {
contentType: "application/json",
cacheControl: "public, max-age=60"
},
customMetadata: {
source: "worker"
}
});
return new Response("ok");
}
if (path === "/storage/read") {
const obj = await env.DOWNLOADS.get("meta/build.json");
if (!obj) return new Response("Not found", { status: 404 });
return new Response(await obj.text(), {
headers: {
"Content-Type": obj.httpMetadata.contentType || "application/json"
}
});
}
return new Response("Not found", { status: 404 });
}
};
get() returns an object with metadata and one-shot body readers:
.text(), .arrayBuffer(), and .json(). After one body read,
further reads reject with body already consumed.
export default {
async fetch(request, env) {
const url = new URL(request.url);
const key = url.searchParams.get("key") || "hostedat-linux-amd64";
if (url.pathname === "/downloads/private") {
const signedUrl = await env.DOWNLOADS.createSignedUrl(key, { expiresIn: 900 });
return Response.redirect(signedUrl, 302);
}
if (url.pathname === "/downloads/public") {
const objectUrl = env.DOWNLOADS.publicUrl(key);
return Response.redirect(objectUrl, 302);
}
return new Response("Not found", { status: 404 });
}
};
- `createSignedUrl()` generates a SigV4 URL for private access. Keep the full query string (`X-Amz-*`) intact.
- `publicUrl()` builds `https://storage.<domain>/<bucket>/<key>` for direct object links. This is intended for buckets with public read enabled.
- `createSignedUrl()` and `publicUrl()` are hostedat extensions; they are not part of Cloudflare's native R2 API.
- `publicUrl()` does not bypass bucket policy. If the bucket is private, unauthenticated requests to that URL will be denied.
- `put()` supports `string`, `ArrayBuffer`, typed arrays (for example `Uint8Array`), `DataView`, `Blob`, and `File`. For very large uploads, use `/storage/buckets/:bucketId/upload-url` or S3 credentials.
Storage binding API reference
| Method | Return type | Notes |
|---|---|---|
| `env.BUCKET.get(key)` | `Promise<R2ObjectBody\|null>` | Returns object body + metadata, or `null` if missing. |
| `env.BUCKET.head(key)` | `Promise<R2Object\|null>` | Metadata only. Returns `null` when not found. |
| `env.BUCKET.put(key, value, options?)` | `Promise<R2Object>` | `value` supports string, ArrayBuffer, TypedArray, DataView, Blob, and File. Supports `httpMetadata` and `customMetadata`. |
| `env.BUCKET.delete(keyOrKeys)` | `Promise<void>` | Accepts a single key string or an array of keys. |
| `env.BUCKET.list(options?)` | `Promise<{ objects, truncated, cursor, delimitedPrefixes }>` | Options: `{ prefix?, cursor?, delimiter?, limit? }`. Default limit is 1000. |
| `env.BUCKET.createSignedUrl(key, options?)` | `Promise<string>` | Options: `{ expiresIn? }` seconds (clamped to 1-604800). |
| `env.BUCKET.publicUrl(key)` | `string` | Builds a direct object URL for public buckets. |
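list() returns one page at a time; the `truncated` and `cursor` fields in the table drive pagination. A sketch of draining a full key listing, where `bucket` stands in for a binding such as `env.DOWNLOADS`:

```javascript
// Collect every object key by following list()'s cursor until
// truncated is false.
async function listAllKeys(bucket) {
  const keys = [];
  let cursor;
  do {
    const page = await bucket.list({ limit: 1000, cursor });
    for (const obj of page.objects) keys.push(obj.key);
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
  return keys;
}
```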
ASSETS binding
The special env.ASSETS binding provides access to the static file pipeline.
Use it to serve static files from your worker, allowing you to add authentication, logging,
or other middleware while still serving your site's static content.
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url);
// Require auth for /admin paths
if (url.pathname.startsWith("/admin")) {
const authHeader = request.headers.get("Authorization");
if (authHeader !== "Bearer secret-token") {
return new Response("Unauthorized", { status: 401 });
}
}
// Pass through to static files (respects _redirects, _headers, SPA mode, 404.html)
return env.ASSETS.fetch(request);
}
};
env.ASSETS.fetch(request) serves files as if no worker existed — it respects
_redirects, _headers, SPA mode, and custom 404.html.
D1 databases
D1 bindings provide isolated SQLite databases accessible via a Cloudflare Workers-compatible API.
Each D1 binding gets its own database file stored separately from the application database.
Configure D1 bindings via the API or dashboard and access them as env.DB (or whatever
binding name you choose).
D1 API reference
- `env.DB.prepare(sql)` → `D1PreparedStatement` — Create a parameterized statement
- `stmt.bind(...params)` → `D1PreparedStatement` — Bind positional parameters (returns a new statement)
- `stmt.all()` → `Promise<{ results: object[], success: boolean, meta }>` — Execute and return all rows as objects
- `stmt.first(column?)` → `Promise<object|value|null>` — Return the first row (or a specific column value)
- `stmt.run()` → `Promise<{ success: boolean, meta }>` — Execute a write statement (INSERT, UPDATE, DELETE)
- `stmt.raw(options?)` → `Promise<any[][]>` — Return rows as arrays. Pass `{ columnNames: true }` to include column names as the first row
- `env.DB.batch(statements)` → `Promise<D1Result[]>` — Execute multiple prepared statements in a single batch
- `env.DB.exec(sql)` → `Promise<{ count: number }>` — Execute raw SQL (multiple statements separated by `;`)
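Building on `prepare`/`bind`/`first` above, a small helper that turns a missing row into an error. The table, column names, and helper are illustrative; `db` stands in for a binding such as `env.DB`:

```javascript
// Fetch one row with first(); first() resolves to null when no row matches.
async function getUserOrThrow(db, id) {
  const row = await db
    .prepare("SELECT id, name FROM users WHERE id = ?")
    .bind(id)
    .first();
  if (row === null) throw new Error("user " + id + " not found");
  return row;
}
```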
Durable Object namespaces
Durable Object bindings provide strongly consistent, per-object key-value storage. Each namespace contains uniquely identified objects with their own isolated storage. Values are JSON-serialized automatically. Configure bindings via the API or dashboard.
Durable Object namespace API reference
- `env.NS.idFromName(name)` → `DurableObjectId` — Get a deterministic ID from a string name (SHA-256 based)
- `env.NS.idFromString(hex)` → `DurableObjectId` — Reconstruct an ID from its hex string representation
- `env.NS.newUniqueId()` → `DurableObjectId` — Generate a random unique ID
- `env.NS.get(id)` → `DurableObjectStub` — Get a stub for the given ID, providing access to `stub.storage` and `stub.fetch()`
Durable Object storage API reference
- `stub.storage.get(key)` → `Promise<any|null>` — Get a single value by key
- `stub.storage.get([keys])` → `Promise<Map>` — Get multiple values (returns a Map)
- `stub.storage.put(key, value)` → `Promise<void>` — Store a single key-value pair
- `stub.storage.put(entries)` → `Promise<void>` — Store multiple entries from an object `{ key: value, ... }`
- `stub.storage.delete(key)` → `Promise<boolean>` — Delete a single key
- `stub.storage.delete([keys])` → `Promise<number>` — Delete multiple keys, returns count deleted
- `stub.storage.deleteAll()` → `Promise<void>` — Delete all entries for this object
- `stub.storage.list(options?)` → `Promise<Map>` — List entries. Options: `{ prefix?, limit?, reverse? }`. Default limit is 128
Queues
Queue bindings provide message queuing backed by SQLite. Workers can produce messages via
send() or sendBatch(). Configure bindings via the API or dashboard.
export default {
async fetch(request, env) {
// Send a single message.
await env.MY_QUEUE.send(JSON.stringify({ event: "page_view", path: "/home" }));
// Send a batch of messages.
await env.MY_QUEUE.sendBatch([
{ body: JSON.stringify({ event: "click", target: "signup" }) },
{ body: JSON.stringify({ event: "click", target: "login" }), contentType: "json" }
]);
return new Response("Messages sent");
}
};
Queue API reference
- `env.QUEUE.send(body, options?)` → `Promise<void>` — Send a single message. Options: `{ contentType? }` (default: `"json"`)
- `env.QUEUE.sendBatch(messages)` → `Promise<void>` — Send multiple messages. Each item: `{ body, contentType? }`
Service Bindings
Service Bindings allow one worker to call another worker's fetch() handler directly
without going through the network. The target worker is identified by its site ID and deploy key.
Configure bindings via the API or dashboard.
export default {
async fetch(request, env) {
// Call the auth worker's fetch handler directly.
const authResponse = await env.AUTH_SERVICE.fetch(
"https://fake-host/verify",
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ token: request.headers.get("Authorization") })
}
);
if (authResponse.status !== 200) {
return new Response("Unauthorized", { status: 401 });
}
return new Response("Authenticated!");
}
};
Service Binding API reference
- `env.BINDING.fetch(urlOrRequest, init?)` → `Promise<Response>` — Call the target worker's fetch handler. Accepts a URL string or Request object, plus optional init options
Cron triggers
Cron schedules can be created via the API or dashboard. Each cron trigger calls the worker's
scheduled() handler at the specified interval.
export default {
async scheduled(event, env, ctx) {
// event.cron contains the cron expression (e.g., "0 * * * *")
// event.scheduledTime is the Unix timestamp (ms) when this should have run
console.log("Running scheduled task:", event.cron);
// Example: purge KV entries (note: this deletes every key in the namespace)
const keys = await env.MY_KV.list();
for (const key of keys.keys) {
await env.MY_KV.delete(key.name);
}
}
};
Cron format
Standard 5-field cron format: minute hour day month weekday
- `*` — Any value
- `*/N` — Every N units (e.g., `*/15` = every 15 minutes)
- `0-30` — Ranges
- `0,15,30` — Comma-separated lists
Examples:
- `0 * * * *` — Every hour at minute 0
- `*/15 * * * *` — Every 15 minutes
- `0 0 * * *` — Daily at midnight
- `0 9 * * 1` — Every Monday at 9:00 AM
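The field syntax above is easy to check mechanically. As a sketch, here is a matcher for a single cron field covering the `*`, `*/N`, range, and list forms; this is illustrative only, not the server's actual scheduler:

```javascript
// Return true if one cron field (minute, hour, day, month, or weekday)
// matches the given numeric value.
function fieldMatches(field, value) {
  return field.split(",").some((part) => {
    if (part === "*") return true;
    const step = part.match(/^\*\/(\d+)$/);       // */N form
    if (step) return value % Number(step[1]) === 0;
    const range = part.match(/^(\d+)-(\d+)$/);    // A-B form
    if (range) return value >= Number(range[1]) && value <= Number(range[2]);
    return Number(part) === value;                // plain number
  });
}
```

A full schedule check would apply this to each of the five fields of the current time.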
Tail handler
The optional tail() handler receives log events from other worker executions,
enabling log forwarding, alerting, or analytics pipelines. It is called with an array of
events, each containing script name, logs, exceptions, outcome, and timestamp.
export default {
async fetch(request) {
return new Response("ok");
},
async tail(events, env, ctx) {
// events: array of { scriptName, logs, exceptions, outcome, timestamp }
for (const event of events) {
console.log("Tail:", event.scriptName, "outcome:", event.outcome);
for (const log of event.logs) {
console.log(" [" + log.level + "]", log.message);
}
}
}
};
Worker lifecycle
Understanding the worker lifecycle helps optimize performance and troubleshoot issues:
- Deploy — When you upload a site with `_worker.js`, the server detects it and validates the script for V8 execution
- Source caching — The script source is cached to disk for fast restarts
- Runtime pool — A pool of pre-warmed JavaScript runtimes is created (size configurable via `worker.pool_size`)
- Request handling — Incoming requests are routed to an available runtime from the pool, minimizing cold start latency
- Redeploy — On a new deployment, the old pool is invalidated and new bytecode is compiled
- Server restart — Bytecode is reloaded from disk (or recompiled from source as fallback)
API routes
Worker runtime endpoints are under /api/v1/sites/:id/worker/. Storage bindings are under
/api/v1/sites/:id/storage/buckets. All endpoints below require authentication and site ownership.
| Method | Path | Description |
|---|---|---|
| POST | `/worker/env` | Set environment variable (upsert by name) |
| GET | `/worker/env` | List environment variables (secrets masked) |
| DELETE | `/worker/env/:varId` | Delete environment variable |
| POST | `/worker/kv` | Create KV namespace |
| GET | `/worker/kv` | List KV namespaces |
| DELETE | `/worker/kv/:nsId` | Delete KV namespace and all entries |
| POST | `/worker/crons` | Create cron schedule |
| GET | `/worker/crons` | List cron schedules |
| DELETE | `/worker/crons/:cronId` | Delete cron schedule |
| GET | `/worker/logs` | Get recent worker logs (last 100, newest first) |
| POST | `/worker/d1` | Create D1 database binding |
| GET | `/worker/d1` | List D1 databases |
| DELETE | `/worker/d1/:d1Id` | Delete D1 database and its SQLite file |
| POST | `/worker/do` | Create Durable Object namespace binding |
| GET | `/worker/do` | List Durable Object namespaces |
| DELETE | `/worker/do/:doId` | Delete Durable Object namespace and all entries |
| POST | `/storage/buckets` | Create a storage bucket binding |
| GET | `/storage/buckets` | List storage bucket bindings for the site |
| PATCH | `/storage/buckets/:bucketId` | Update bucket settings (for example, public read toggle) |
| DELETE | `/storage/buckets/:bucketId` | Delete bucket binding and bucket data |
| POST | `/storage/buckets/:bucketId/upload-url` | Generate a presigned PUT URL for direct uploads |
Next: CLI Reference to learn how to deploy workers from the command line.