Zero → production.
Ship safely with fast feedback loops, ephemeral test stacks, type-checked IAM policies, and built-in observability.
Stand up infrastructure. Tear it down just as fast.
Your whole cloud as one TypeScript program. `plan` previews the diff, `deploy` applies it, `destroy` unwinds it — predictably, every time. Learn more →
export default Alchemy.Stack(
  "MyApp",
  { providers: Cloudflare.providers() },
  Effect.gen(function* () {
    const Photos = yield* Cloudflare.R2Bucket("Photos");
    const Sessions = yield* Cloudflare.KVNamespace("Sessions");
    const api = yield* Cloudflare.Worker("Api", {
      main: "./src/worker.ts",
      bindings: { Photos, Sessions },
    });
    return { url: api.url };
  }),
);
Compose Effects and Layers into type-safe cloud applications.
export default your function and Alchemy derives the rest: .bind() generates the IAM and env vars, .stream(…).process(…) wires up the event sources. No more missing policies or subscriptions. Learn more →
export default AWS.Lambda.Function(
  "JobApi",
  Effect.gen(function* () {
    const getPhoto = yield* S3.GetObject.bind(Photos);
    const putJob = yield* DynamoDB.PutItem.bind(Jobs);
    return {
      fetch: Effect.gen(function* () {
        const req = yield* HttpServerRequest;
        return yield* getPhoto({ key: req.url.slice(1) });
      }),
    };
  }),
);
Drop down to async fetch when you need it.
Effect is optional. Declare resources, attach them as bindings, and Alchemy infers a fully typed env for your async worker — no codegen, no `wrangler types`, no manual interfaces. Learn more →
// alchemy.run.ts — declare resources, attach as bindings.
export const Bucket = Cloudflare.R2Bucket("Bucket");
export const Worker = Cloudflare.Worker("Worker", {
  main: "./src/worker.ts",
  bindings: { Bucket },
});
export type WorkerEnv = Cloudflare.InferEnv<typeof Worker>;
// src/worker.ts — keep your existing async handler.
import type { WorkerEnv } from "../alchemy.run.ts";
export default {
  async fetch(req: Request, env: WorkerEnv) {
    const obj = await env.Bucket.get("hello.txt");
    return new Response(obj?.body ?? "Not found");
  },
};
Bring any frontend. Hot-reload the cloud underneath it.
Vite, Astro, TanStack, SolidStart — frontend, backend, and infra hot-reload together against real R2 / KV / D1. Learn more →
$ alchemy dev
Verify against live resources: deploy → assert → destroy.
Each suite spins up its own isolated stack — live R2, DynamoDB, Workers — runs your assertions, then tears it back down. No mocks to babysit, no drift between fixture and production. Learn more →
// test/api.test.ts — isolated stack per suite.
const stack = beforeAll(deploy(Stack)); // real R2, real DynamoDB
afterAll(destroy(stack)); // torn down at the end

test("PUT + GET round-trips through R2", Effect.gen(function* () {
  const { url } = yield* stack;
  const res = yield* HttpClient.get(`${url}/object/hello.txt`);
  expect(yield* res.text).toBe("hi!");
}));
Preview every PR with an ephemeral environment.
An Alchemy program is just TypeScript — drop one into GitHub Actions to deploy a per-PR stack for review, then let merge or close tear it down. Learn more →
1. PR opened
2. Deploy
3. Comment
4. Observe
5. Destroyed
$ # pull_request opened — STAGE=pr-147
# workflow queued…
Dashboards and alarms ship with the resources.
Effect emits logs, spans, and metrics by default. Wire dashboards and alarms in the same alchemy.run.ts as the workers they watch — swap the OTLP exporter for Axiom, Datadog, or anywhere else. Learn more →
// src/Api.ts — Effect emits OpenTelemetry. Exporter is a Layer.
const requestsTotal = Metric.counter("requests_total");

export default Cloudflare.Worker(
  "Api",
  Effect.gen(function* () {
    yield* Effect.logInfo("request received");
    yield* Metric.increment(requestsTotal);
    return { fetch: handler };
  }).pipe(Effect.provide(Otel)), // ← Axiom, Datadog, OTLP …
);
Effect already emits logs, spans, and metrics. Swap the OTLP Layer to ship them to Axiom, Datadog, CloudWatch, or any OTLP endpoint. Your worker code doesn’t change.
// alchemy.run.ts — dashboards and alarms, in code.
export const Dashboard = AWS.CloudWatch.Dashboard("ApiHealth", {
  widgets: [
    Widget.line({ title: "p99 latency", metric: api.metrics.p99 }),
    Widget.line({ title: "requests / sec", metric: api.metrics.rps }),
    Widget.number({ title: "5xx ratio", metric: api.metrics.errorRate }),
  ],
});

export const P99Alarm = AWS.CloudWatch.Alarm("p99Latency", {
  metric: api.metrics.p99,
  threshold: 500,
  evaluationPeriods: 5,
  alarmActions: [pagerDuty, slackWebhook],
});
Declared next to the resources they watch. `deploy` ships them; `destroy` tears them down with the rest.
Paste this into your coding agent to get started.
Help me build an Alchemy app on Cloudflare. Start by reading https://v2.alchemy.run/getting-started and follow it exactly: scaffold a fresh project, install the dependencies, create the `alchemy.run.ts` Stack with a single Cloudflare R2 Bucket (no Worker yet), and run `alchemy deploy` so I sign in to Cloudflare and provision the Bucket. Confirm the Bucket is live before moving on.
Then STOP and ASK ME what I want to build. From there, consult only the docs you need for what I asked for — don't march me through every tutorial. A Worker only gets added later if what I want to build needs one (the tutorial covers that in part-2).
Reference material (read on demand, skip the rest):
Tutorial — foundations, work through whichever parts I haven't touched:
https://v2.alchemy.run/tutorial/part-1 First Stack (state store + first resource)
https://v2.alchemy.run/tutorial/part-2 Add a Worker
https://v2.alchemy.run/tutorial/part-3 Testing
https://v2.alchemy.run/tutorial/part-4 Local Dev (`alchemy dev`)
https://v2.alchemy.run/tutorial/part-5 CI/CD (per-PR previews from GitHub Actions)
Cloudflare deep-dives — mix and match:
https://v2.alchemy.run/tutorial/cloudflare/durable-objects per-key state, RPC, Effect Streams
https://v2.alchemy.run/tutorial/cloudflare/hibernatable-websockets WebSockets that survive hibernation
https://v2.alchemy.run/tutorial/cloudflare/vite-spa Vite SPA frontend (TanStack / SolidStart / Vue / etc.)
https://v2.alchemy.run/tutorial/cloudflare/containers long-lived process per Durable Object
https://v2.alchemy.run/tutorial/cloudflare/workflows durable multi-step orchestration
Guides — cross-cutting how-tos:
https://v2.alchemy.run/guides/effect-http-api schema-validated HTTP API
https://v2.alchemy.run/guides/effect-rpc typed RPC
https://v2.alchemy.run/guides/frontends framework-specific Vite setup
https://v2.alchemy.run/concepts/testing integration testing patterns
https://v2.alchemy.run/guides/ci alternative CI setups
https://v2.alchemy.run/guides/circular-bindings two services that reference each other
https://v2.alchemy.run/guides/migrating-from-v1 migrating from v1 (async/await)
https://v2.alchemy.run/guides/cli CLI reference
Important:
- Confirm with me before each deploy. Don't batch.
- Do NOT instruct me to export CLOUDFLARE_ACCOUNT_ID or CLOUDFLARE_API_TOKEN. Alchemy stores credentials in profiles — `alchemy login` (or the first `alchemy deploy`) prompts interactively for OAuth or an API token and saves it to ~/.alchemy/profiles.json.
- Use `bun alchemy deploy` (or the npm/pnpm/yarn equivalent).
- If I'm migrating from Alchemy v1 (async/await), read https://v2.alchemy.run/guides/migrating-from-v1 first.