cloudflare-workers
Cloudflare Workers is a Skill that supports deploying to Cloudflare, enabling edge computing, serverless functions, and global distribution.
📜 Original English description (for reference)
Cloudflare Workers for edge computing, serverless functions, and global deployment. Use when user mentions "cloudflare workers", "wrangler", "edge functions", "serverless edge", "cloudflare pages", "D1 database", "R2 storage", "KV store", "workers AI", "edge computing", or deploying to Cloudflare.
🇯🇵 Notes for Japanese creators
※ This is supplementary commentary by the jpskill.com editorial team for Japanese business users. It is reference information, independent of the Skill's actual behavior.
Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). It handles download → extract → install fully automatically.
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o cloudflare-workers.zip https://jpskill.com/download/6071.zip && unzip -o cloudflare-workers.zip && rm cloudflare-workers.zip
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/6071.zip -OutFile "$d\cloudflare-workers.zip"; Expand-Archive "$d\cloudflare-workers.zip" -DestinationPath $d -Force; ri "$d\cloudflare-workers.zip"
When it finishes, restart Claude Code → then just make a normal request such as "Deploy this to Cloudflare Workers" and the Skill activates automatically.
💾 Manual download (if the commands feel difficult)
- 1. Click the blue button below to download cloudflare-workers.zip
- 2. Double-click the ZIP file to extract it → this creates a cloudflare-workers folder
- 3. Move that folder to C:\Users\あなたの名前\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
- 4. Restart Claude Code
⚠️ Download and use at your own risk. This site takes no responsibility for the content, behavior, or safety of the Skill.
🎯 What this Skill can do
The description below explains what this Skill does for you. It activates automatically when you give Claude a request in this area.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Change the file extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder in .claude/skills/ under your home folder:
  - macOS / Linux: ~/.claude/skills/
  - Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you are done. Even without saying "use this Skill", it is invoked automatically for related requests.
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Bundled files: 1
📖 SKILL.md (original English, as read by Claude)
Cloudflare Workers
Setup
npm install -g wrangler && wrangler login
wrangler init my-worker
wrangler.toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-01-01"
[vars]
ENVIRONMENT = "production"
[[kv_namespaces]]
binding = "MY_KV"
id = "abc123"
[[d1_databases]]
binding = "DB"
database_name = "my-db"
database_id = "def456"
[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-bucket"
Worker Basics
interface Env {
MY_KV: KVNamespace; DB: D1Database; BUCKET: R2Bucket;
ENVIRONMENT: string; API_KEY: string;
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === "/api/health") return Response.json({ status: "ok" });
if (request.method === "POST" && url.pathname === "/api/data") {
const body = await request.json();
return new Response("Created", { status: 201 });
}
return new Response("Not Found", { status: 404 });
},
};
Wrangler CLI
wrangler dev # local dev server on localhost:8787
wrangler dev --remote # dev against real Cloudflare infrastructure
wrangler deploy # deploy to production
wrangler tail # stream live logs from deployed worker
wrangler secret put API_KEY # set an encrypted secret
wrangler secret list # list configured secrets
wrangler delete # remove the deployed worker
Routing
Manual routing with a map, or use hono/itty-router for path params:
// Manual
const routes: Record<string, () => Promise<Response>> = {
"/api/users": () => handleUsers(request),
"/api/posts": () => handlePosts(request),
};
const handler = routes[new URL(request.url).pathname];
if (handler) return handler();
// Hono (recommended for complex routing)
import { Hono } from "hono";
const app = new Hono<{ Bindings: Env }>();
app.get("/users/:id", (c) => c.json({ id: c.req.param("id") }));
export default app;
KV Store
Global, low-latency key-value store. Eventually consistent. Best for read-heavy data.
wrangler kv namespace create MY_KV
wrangler kv namespace create MY_KV --preview
await env.MY_KV.put("user:123", JSON.stringify({ name: "Alice" }), {
expirationTtl: 3600, metadata: { created: Date.now() },
});
const value = await env.MY_KV.get("user:123", "json");
const list = await env.MY_KV.list({ prefix: "user:", limit: 100 });
await env.MY_KV.delete("user:123");
D1 Database
SQLite at the edge with queries, transactions, and migrations.
wrangler d1 create my-db
wrangler d1 execute my-db --command "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"
wrangler d1 migrations create my-db init
wrangler d1 migrations apply my-db
const { results } = await env.DB.prepare("SELECT * FROM users WHERE id = ?").bind(userId).all();
await env.DB.prepare("INSERT INTO users (name) VALUES (?)").bind("Alice").run();
await env.DB.batch([
env.DB.prepare("INSERT INTO users (name) VALUES (?)").bind("Bob"),
env.DB.prepare("INSERT INTO users (name) VALUES (?)").bind("Carol"),
]);
const user = await env.DB.prepare("SELECT * FROM users WHERE id = ?").bind(1).first();
R2 Storage
S3-compatible object storage with no egress fees.
wrangler r2 bucket create my-bucket
// Upload
await env.BUCKET.put("images/photo.jpg", imageData, {
httpMetadata: { contentType: "image/jpeg" },
});
// Download
const object = await env.BUCKET.get("images/photo.jpg");
if (object) {
return new Response(object.body, {
headers: { "Content-Type": object.httpMetadata?.contentType ?? "application/octet-stream" },
});
}
// List, delete
const listed = await env.BUCKET.list({ prefix: "images/", limit: 50 });
await env.BUCKET.delete("images/photo.jpg");
For large files use createMultipartUpload(), uploadPart(), complete().
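A sketch of that flow, with minimal structural types standing in for @cloudflare/workers-types so it is self-contained (the 10 MiB part size and the helper names are illustrative; R2 requires every part except the last to be at least 5 MiB):

```typescript
// Minimal structural types standing in for @cloudflare/workers-types.
interface R2UploadedPart { partNumber: number; etag: string }
interface R2MultipartUploadLike {
  uploadPart(partNumber: number, value: ArrayBuffer): Promise<R2UploadedPart>;
  complete(parts: R2UploadedPart[]): Promise<unknown>;
}
interface R2BucketLike {
  createMultipartUpload(key: string): Promise<R2MultipartUploadLike>;
}

const PART_SIZE = 10 * 1024 * 1024; // 10 MiB per part

// How many parts a payload of `byteLength` bytes splits into.
function partCount(byteLength: number, partSize: number): number {
  return Math.max(1, Math.ceil(byteLength / partSize));
}

async function multipartUpload(bucket: R2BucketLike, key: string, data: ArrayBuffer) {
  const upload = await bucket.createMultipartUpload(key);
  const parts: R2UploadedPart[] = [];
  for (let i = 0; i < partCount(data.byteLength, PART_SIZE); i++) {
    // slice() copies one part's byte range; part numbers are 1-based.
    const chunk = data.slice(i * PART_SIZE, (i + 1) * PART_SIZE);
    parts.push(await upload.uploadPart(i + 1, chunk));
  }
  return upload.complete(parts);
}
```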
Durable Objects
Strongly consistent, stateful compute. Each object has a unique ID and private storage. Use for rate limiters, WebSocket coordination, collaborative editing, session state.
[[durable_objects.bindings]]
name = "COUNTER"
class_name = "Counter"
[[migrations]]
tag = "v1"
new_classes = ["Counter"]
export class Counter implements DurableObject {
constructor(private state: DurableObjectState, private env: Env) {}
async fetch(request: Request): Promise<Response> {
const current = (await this.state.storage.get<number>("count")) ?? 0;
await this.state.storage.put("count", current + 1);
return Response.json({ count: current + 1 });
}
}
// Calling from a worker:
const id = env.COUNTER.idFromName("my-counter");
const response = await env.COUNTER.get(id).fetch(request);
Workers AI
Run ML models at the edge. Add [ai] with binding = "AI" to wrangler.toml.
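The corresponding wrangler.toml fragment looks like this ("AI" is the conventional binding name):

```toml
[ai]
binding = "AI"
```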
// Text generation
const resp = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
messages: [{ role: "user", content: "Summarize this article." }],
});
// Embeddings
const emb = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: ["document to embed"] });
// Image classification
const cls = await env.AI.run("@cf/microsoft/resnet-50", { image: await request.arrayBuffer() });
Environment Variables and Secrets
Non-sensitive values go in wrangler.toml under [vars]. Set secrets with wrangler secret put API_KEY. Both accessed through env.API_KEY, env.ENVIRONMENT, etc.
Multiple environments:
[env.staging]
name = "my-worker-staging"
vars = { ENVIRONMENT = "staging" }
[env.production]
name = "my-worker-production"
vars = { ENVIRONMENT = "production" }
Deploy with wrangler deploy --env staging or --env production.
Cron Triggers
[triggers]
crons = ["0 */6 * * *", "30 8 * * 1"]
export default {
async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext): Promise<void> {
ctx.waitUntil(doCleanup(env));
},
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
return new Response("OK");
},
};
Test locally: curl "http://localhost:8787/__scheduled?cron=0+*/6+*+*+*"
Middleware Patterns
CORS
function corsHeaders(origin: string): HeadersInit {
return {
"Access-Control-Allow-Origin": origin,
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type, Authorization",
};
}
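A sketch of wiring this into a handler that answers preflight OPTIONS requests and decorates normal responses (the helper is repeated here so the sketch stands alone; the "*" fallback origin and the JSON body are illustrative choices):

```typescript
// Same shape as the helper above, repeated so this sketch is self-contained.
function corsHeaders(origin: string): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
  };
}

function handleWithCors(request: Request): Response {
  const origin = request.headers.get("Origin") ?? "*";
  if (request.method === "OPTIONS") {
    // Preflight: reply with the CORS headers and no body.
    return new Response(null, { status: 204, headers: corsHeaders(origin) });
  }
  // Normal request: build the real response, then attach the CORS headers.
  const response = Response.json({ ok: true });
  for (const [k, v] of Object.entries(corsHeaders(origin))) {
    response.headers.set(k, v);
  }
  return response;
}
```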
Auth
async function requireAuth(request: Request, env: Env): Promise<Response | null> {
const token = request.headers.get("Authorization")?.replace("Bearer ", "");
if (!token || token !== env.API_KEY) {
return Response.json({ error: "Unauthorized" }, { status: 401 });
}
return null; // proceed
}
Rate Limiting
Use a Durable Object to track request timestamps per key. Store timestamps in storage, filter to the current window, reject if over limit, append and persist otherwise.
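The window logic can be sketched as a pure function; a Durable Object's fetch() would load the stored array for the client's key, call this, persist the returned timestamps, and answer 429 when not allowed (the function and field names are illustrative):

```typescript
// Pure sliding-window check. A Durable Object would keep `timestamps`
// in this.state.storage under a per-client key.
function checkRateLimit(
  timestamps: number[],
  now: number,
  windowMs: number,
  limit: number,
): { allowed: boolean; timestamps: number[] } {
  // Drop timestamps that have aged out of the current window.
  const recent = timestamps.filter((t) => now - t < windowMs);
  if (recent.length >= limit) {
    return { allowed: false, timestamps: recent }; // over limit: reject
  }
  recent.push(now); // record this request and allow it
  return { allowed: true, timestamps: recent };
}
```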
Local Development
wrangler dev # local server with miniflare runtime
wrangler dev --persist-to=./data # persist KV/D1/R2 data locally
wrangler dev --port 3000 # custom port
wrangler dev --remote # proxy to Cloudflare (real bindings)
Miniflare simulates KV, D1, R2, Durable Objects, and caches locally.
Deployment
wrangler deploy # deploy to production
wrangler deploy --env staging # named environment
wrangler deploy --dry-run # validate without deploying
wrangler versions list # list deployed versions
wrangler rollback # rollback to previous version
Custom domains:
routes = [{ pattern = "api.example.com/*", zone_name = "example.com" }]
Common Patterns
API Proxy
async function proxyRequest(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
url.hostname = "api.upstream.com";
return fetch(new Request(url.toString(), {
method: request.method,
headers: { ...Object.fromEntries(request.headers), "X-API-Key": env.UPSTREAM_KEY },
body: request.body,
}));
}
Edge Cache
async function cachedFetch(request: Request, ctx: ExecutionContext): Promise<Response> {
const cache = caches.default;
let response = await cache.match(request);
if (response) return response;
response = await fetch("https://api.origin.com" + new URL(request.url).pathname);
const cached = new Response(response.body, response);
cached.headers.set("Cache-Control", "s-maxage=300");
ctx.waitUntil(cache.put(request, cached.clone()));
return cached;
}
Webhook Handler
// Helper added here for completeness (not part of any Workers API):
// decode the hex signature string into bytes for crypto.subtle.verify.
function hexToBytes(hex: string): Uint8Array {
  const bytes = new Uint8Array(hex.length / 2);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = parseInt(hex.slice(i * 2, i * 2 + 2), 16);
  }
  return bytes;
}
async function handleWebhook(request: Request, env: Env): Promise<Response> {
  const signature = request.headers.get("X-Signature-256") ?? "";
  const body = await request.text();
  const key = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(env.WEBHOOK_SECRET),
    { name: "HMAC", hash: "SHA-256" }, false, ["verify"]
  );
  const valid = await crypto.subtle.verify(
    "HMAC", key, hexToBytes(signature.replace("sha256=", "")),
    new TextEncoder().encode(body)
  );
  if (!valid) return new Response("Invalid signature", { status: 401 });
  return new Response("OK", { status: 200 });
}
URL Shortener
app.post("/shorten", async (c) => {
const { url } = await c.req.json();
const id = crypto.randomUUID().slice(0, 8);
await c.env.MY_KV.put(`url:${id}`, url, { expirationTtl: 86400 * 30 });
return c.json({ short: `${new URL(c.req.url).origin}/${id}` });
});
app.get("/:id", async (c) => {
const target = await c.env.MY_KV.get(`url:${c.req.param("id")}`);
if (!target) return c.text("Not found", 404);
return c.redirect(target, 302);
});
Limits
| Resource | Free | Paid |
|---|---|---|
| CPU time/request | 10 ms | 30 s (Unbound) / 50 ms (legacy Bundled) |
| Memory | 128 MB | 128 MB |
| Worker size | 1 MB | 10 MB |
| Subrequests (fetch) | 50 | 1000 |
| KV reads/day | 100K | 10M+ |
| KV writes/day | 1K | 1M+ |
| D1 rows read/day | 5M | 50B |
| D1 rows written/day | 100K | 50M |
| R2 Class A ops/month | 1M | $4.50/M |
| R2 storage | 10 GB | $0.015/GB-mo |
| Cron triggers | 3 | 3+ |
| Request body size | 100 MB | 100 MB |
Key constraints: no raw TCP/UDP sockets (use WebSockets or Tunnels). crypto.subtle available. Node.js built-ins partially supported via nodejs_compat flag. Globals persist within an isolate but not across cold starts. ctx.waitUntil() extends execution after response for background work.
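For example, the partial Node.js support mentioned above is enabled with a one-line wrangler.toml flag:

```toml
compatibility_flags = ["nodejs_compat"]
```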