🛠️ Firecrawl Automation
Automate website crawling with Firecrawl
📜 Original English description (for reference)
Automate web crawling and data extraction with Firecrawl -- scrape pages, crawl sites, extract structured data, batch scrape URLs, and map website structures through the Composio Firecrawl integration.
🇯🇵 Commentary for Japanese creators
※ Supplementary notes from the jpskill.com editorial team for Japanese business users. This is reference information, independent of the Skill's actual behavior.
Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). Download, extraction, and placement are fully automatic.
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o firecrawl-automation.zip https://jpskill.com/download/1637.zip && unzip -o firecrawl-automation.zip && rm firecrawl-automation.zip
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/1637.zip -OutFile "$d\firecrawl-automation.zip"; Expand-Archive "$d\firecrawl-automation.zip" -DestinationPath $d -Force; ri "$d\firecrawl-automation.zip"
When it finishes, restart Claude Code. Then just ask naturally, e.g. "Scrape this page for me", and the Skill triggers automatically.
💾 Manual download (if the command line is not for you)
- 1. Click the blue button below to download firecrawl-automation.zip
- 2. Double-click the ZIP file to extract it; a firecrawl-automation folder is created
- 3. Move that folder to C:\Users\your-name\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
- 4. Restart Claude Code
⚠️ Download and use at your own risk. This site accepts no responsibility for the Skill's content, behavior, or safety.
🎯 What this Skill can do
The description below explains what this Skill does for you. It triggers automatically when you ask Claude for help in this area.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder in .claude/skills/ under your home folder
  - · macOS / Linux: ~/.claude/skills/
  - · Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you are done. You don't need to say "use this Skill"; it is invoked automatically for related requests.
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Bundled files: 1
💬 Just ask — sample prompts
- › Show me how to use Firecrawl Automation
- › Show me concrete examples of what Firecrawl Automation can do
- › Walk a first-time user through the steps for Firecrawl Automation
Paste any of these into Claude Code and the Skill triggers automatically.
📖 The original SKILL.md that Claude reads (expanded)
This body text is the original (English or Chinese) that the AI (Claude) reads. Japanese translations are being added progressively.
Firecrawl Automation
Run Firecrawl web crawling and extraction directly from Claude Code. Scrape individual pages, crawl entire sites, extract structured data with AI, batch process URL lists, and map website structures without leaving your terminal.
Toolkit docs: composio.dev/toolkits/firecrawl
Setup
- Add the Composio MCP server to your configuration: https://rube.app/mcp
- Connect your Firecrawl account when prompted. The agent will provide an authentication link.
- Be mindful of credit consumption -- scope your crawls tightly and test on small URL sets before scaling.
Core Workflows
1. Scrape a Single Page
Fetch content from a URL in multiple formats with optional browser actions for dynamic pages.
Tool: FIRECRAWL_SCRAPE
Key parameters:
- `url` (required) -- fully qualified URL to scrape
- `formats` -- output formats: `markdown` (default), `html`, `rawHtml`, `links`, `screenshot`, `json`
- `onlyMainContent` (default true) -- extract main content only, excluding nav/footer/ads
- `waitFor` -- milliseconds to wait for JS rendering (default 0)
- `timeout` -- max wait in ms (default 30000)
- `actions` -- browser actions before scraping (click, write, wait, press, scroll)
- `includeTags` / `excludeTags` -- filter by HTML tags
- `jsonOptions` -- for structured extraction with `schema` and/or `prompt`
Example prompt: "Scrape the main content from https://example.com/pricing as markdown"
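The scrape parameters above can be sketched as a plain argument payload. This is a hypothetical helper for illustration, not a real Composio client; the key names mirror the documented parameter list.

```python
def build_scrape_args(url, formats=None, only_main_content=True,
                      wait_for=0, timeout=30000):
    """Assemble an argument dict for a FIRECRAWL_SCRAPE call (illustrative)."""
    if not url.startswith(("http://", "https://")):
        raise ValueError("url must be fully qualified")
    return {
        "url": url,
        "formats": formats or ["markdown"],   # markdown is the default format
        "onlyMainContent": only_main_content, # strip nav/footer/ads
        "waitFor": wait_for,                  # ms to wait for JS rendering
        "timeout": timeout,                   # max wait in ms
    }

args = build_scrape_args("https://example.com/pricing")
```

For JS-heavy pages you would raise `wait_for` (see Known Pitfalls below); the defaults here match the parameter list above.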
2. Crawl an Entire Site
Discover and scrape multiple pages from a website with configurable depth, path filters, and concurrency.
Tool: FIRECRAWL_CRAWL_V2
Key parameters:
- `url` (required) -- starting URL for the crawl
- `limit` (default 10) -- max pages to crawl
- `maxDiscoveryDepth` -- depth limit from the root page
- `includePaths` / `excludePaths` -- regex patterns for URL paths
- `allowSubdomains` -- include subdomains (default false)
- `crawlEntireDomain` -- follow sibling/parent links, not just children (default false)
- `sitemap` -- `include` (default), `skip`, or `only`
- `prompt` -- natural language to auto-configure crawler settings
- `scrapeOptions_formats` -- output format for each page
- `scrapeOptions_onlyMainContent` -- main content extraction per page
Example prompt: "Crawl the docs section of firecrawl.dev, max 50 pages, only paths matching docs"
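A crawl like the example prompt above can be expressed as an argument dict. Again a hypothetical builder, not a real client; note that `includePaths` takes regex patterns, so it is worth validating them before launching a crawl that consumes credits.

```python
import re

def build_crawl_args(url, limit=10, include_paths=None, sitemap="include"):
    """Assemble FIRECRAWL_CRAWL_V2 arguments (illustrative only)."""
    if sitemap not in ("include", "skip", "only"):
        raise ValueError("sitemap must be include, skip, or only")
    args = {"url": url, "limit": limit, "sitemap": sitemap}
    if include_paths:
        for pattern in include_paths:
            re.compile(pattern)  # fail fast on an invalid regex
        args["includePaths"] = include_paths
    return args

# "Crawl the docs section of firecrawl.dev, max 50 pages, only docs paths"
args = build_crawl_args("https://firecrawl.dev", limit=50,
                        include_paths=[r"^/docs/.*"])
```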
3. Extract Structured Data
Extract structured JSON data from web pages using AI with a natural language prompt or JSON schema.
Tool: FIRECRAWL_EXTRACT
Key parameters:
- `urls` (required) -- array of URLs to extract from (max 10 in beta); supports wildcards like `https://example.com/blog/*`
- `prompt` -- natural language description of what to extract
- `schema` -- JSON Schema defining the desired output structure
- `enable_web_search` -- allow crawling links outside the initial domains (default false)
At least one of prompt or schema must be provided.
Check extraction status with FIRECRAWL_EXTRACT_GET using the returned job id.
Example prompt: "Extract company name, pricing tiers, and feature lists from https://example.com/pricing"
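A JSON Schema for the pricing-page extraction in the example prompt might look like the sketch below. The field names are illustrative assumptions, and the builder enforces the two documented constraints: at least one of `prompt`/`schema`, and at most 10 URLs in beta.

```python
# Illustrative JSON Schema for "company name, pricing tiers, feature lists"
pricing_schema = {
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "pricing_tiers": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price": {"type": "string"},
                    "features": {"type": "array", "items": {"type": "string"}},
                },
            },
        },
    },
    "required": ["company_name", "pricing_tiers"],
}

def build_extract_args(urls, prompt=None, schema=None):
    """Assemble FIRECRAWL_EXTRACT arguments (illustrative only)."""
    if prompt is None and schema is None:
        raise ValueError("provide at least one of prompt or schema")
    if len(urls) > 10:
        raise ValueError("max 10 URLs per extract call in beta")
    args = {"urls": urls}
    if prompt:
        args["prompt"] = prompt
    if schema:
        args["schema"] = schema
    return args

args = build_extract_args(["https://example.com/pricing"], schema=pricing_schema)
```

Freezing a schema like this before scaling up is also the remedy for the "extraction schema precision" pitfall noted later.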
4. Batch Scrape Multiple URLs
Scrape many URLs concurrently with shared configuration for efficient bulk data collection.
Tool: FIRECRAWL_BATCH_SCRAPE
Key parameters:
- `urls` (required) -- array of URLs to scrape
- `formats` -- output format for all pages (default `markdown`)
- `onlyMainContent` (default true) -- main content extraction
- `maxConcurrency` -- parallel scrape limit
- `ignoreInvalidURLs` (default true) -- skip bad URLs instead of failing the batch
- `location` -- geolocation settings with a `country` code
- `actions` -- browser actions applied to each page
- `blockAds` (default true) -- block advertisements
Example prompt: "Batch scrape these 20 product page URLs as markdown with ad blocking"
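For large URL lists it helps to split the input into fixed-size batches before issuing calls, which also keeps rate limiting in check. A minimal sketch, assuming the parameter names above; the batch size of 20 is an arbitrary example, not a documented limit.

```python
def chunk_urls(urls, size=20):
    """Split a URL list into fixed-size batches for FIRECRAWL_BATCH_SCRAPE."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]

def build_batch_args(urls, formats=None, max_concurrency=5):
    """Assemble shared FIRECRAWL_BATCH_SCRAPE arguments (illustrative)."""
    return {
        "urls": urls,
        "formats": formats or ["markdown"],
        "onlyMainContent": True,
        "maxConcurrency": max_concurrency,  # cap parallel scrapes
        "ignoreInvalidURLs": True,          # skip bad URLs, don't fail the batch
        "blockAds": True,
    }

batches = chunk_urls([f"https://example.com/p/{i}" for i in range(45)], size=20)
# 45 URLs -> batches of 20, 20, and 5
```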
5. Map Website Structure
Discover all URLs on a website from a starting URL, useful for planning crawls or auditing site structure.
Tool: FIRECRAWL_MAP_MULTIPLE_URLS_BASED_ON_OPTIONS
Key parameters:
- `url` (required) -- starting URL (must be `https://` or `http://`)
- `search` -- guide URL discovery toward specific page types
- `limit` (default 5000, max 100000) -- max URLs to return
- `includeSubdomains` (default true) -- include subdomains
- `ignoreQueryParameters` (default true) -- dedupe URLs differing only by query params
- `sitemap` -- `include`, `skip`, or `only`
Example prompt: "Map all URLs on docs.example.com, focusing on API reference pages"
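The `ignoreQueryParameters` behavior — deduping URLs that differ only by query string — can be reproduced locally when post-processing mapped URLs. A sketch using the standard library, not part of the Firecrawl tooling itself:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    """Drop query string and fragment, mimicking ignoreQueryParameters."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def dedupe_mapped_urls(urls):
    """Keep one entry per query-stripped URL, preserving order."""
    seen, out = set(), []
    for u in urls:
        key = strip_query(u)
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out

urls = ["https://docs.example.com/api?page=1",
        "https://docs.example.com/api?page=2",
        "https://docs.example.com/guide"]
# dedupe_mapped_urls(urls) collapses the two /api entries into one
```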
6. Monitor and Manage Crawl Jobs
Track crawl progress, retrieve results, and cancel runaway jobs.
Tools: FIRECRAWL_CRAWL_GET, FIRECRAWL_GET_THE_STATUS_OF_A_CRAWL_JOB, FIRECRAWL_CANCEL_A_CRAWL_JOB
- `FIRECRAWL_CRAWL_GET` -- get status, progress, credits used, and crawled page data
- `FIRECRAWL_CANCEL_A_CRAWL_JOB` -- stop an active or queued crawl
Both require the crawl job id (UUID) returned when the crawl was initiated.
Example prompt: "Check the status of crawl job 019b0806-b7a1-7652-94c1-e865b5d2e89a"
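Since crawl and extract jobs are asynchronous, a polling loop is the usual pattern. The sketch below is generic: `get_status` stands in for whatever wrapper issues the FIRECRAWL_CRAWL_GET (or FIRECRAWL_EXTRACT_GET) call, and the terminal status names are assumptions, not a documented enum.

```python
import time

def poll_job(get_status, interval=2.0, max_wait=60.0):
    """Poll an async job until it reaches a terminal status.

    get_status: any zero-arg callable returning a status dict.
    Raises TimeoutError if the job doesn't finish within max_wait seconds,
    at which point cancelling the job avoids wasting credits.
    """
    waited = 0.0
    while waited < max_wait:
        status = get_status()
        if status.get("status") in ("completed", "failed", "cancelled"):
            return status
        time.sleep(interval)
        waited += interval
    raise TimeoutError("job did not finish; consider cancelling it")

# Simulated status sequence standing in for real FIRECRAWL_CRAWL_GET calls:
states = iter([{"status": "scraping"}, {"status": "completed", "total": 12}])
result = poll_job(lambda: next(states), interval=0.01)
# result -> {"status": "completed", "total": 12}
```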
Known Pitfalls
- Rate limiting: Firecrawl can trigger "Rate limit exceeded" errors (429). Prefer `FIRECRAWL_BATCH_SCRAPE` over many individual `FIRECRAWL_SCRAPE` calls, and implement backoff on 429/5xx responses.
- Credit consumption: `FIRECRAWL_EXTRACT` can fail with "Insufficient credits." Scope tightly and avoid broad homepage URLs that yield sparse fields. Test on small URL sets first.
- Nested error responses: Per-page failures may be nested in `response.data.code` (e.g., `SCRAPE_DNS_RESOLUTION_ERROR`) even when the outer API call succeeds. Always validate inner status/error fields.
- JS-heavy pages: Non-rendered fetches may miss key content. Use `waitFor` (e.g., 1000-5000 ms) for dynamic pages, or configure `scrapeOptions_actions` to interact with the page before scraping.
- Extraction schema precision: Vague or shifting schemas/prompts produce noisy, inconsistent output. Freeze your schema and test on a small sample before scaling to many URLs.
- Crawl jobs are async: `FIRECRAWL_CRAWL_V2` returns immediately with a job ID. Use `FIRECRAWL_CRAWL_GET` to poll for results. Cancel stuck crawls with `FIRECRAWL_CANCEL_A_CRAWL_JOB` to avoid wasting credits.
- Extract job polling: `FIRECRAWL_EXTRACT` is also async for larger jobs. Retrieve final output with `FIRECRAWL_EXTRACT_GET`.
- URL batching for extract: Keep extract URL batches small (~10 URLs) to avoid 429 rate limit errors.
- Deeply nested responses: Results are often nested under `data.data` or deeper. Inspect the returned shape rather than assuming flat keys.
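The "nested error responses" and "deeply nested responses" pitfalls can be handled together with one small unwrapping helper. The response shape here is an assumption based on the pitfalls above (`data.code` for per-page errors, results under `data.data` or deeper), not a documented schema:

```python
def unwrap_result(response):
    """Surface per-page errors, then dig out the innermost payload.

    Assumes the shapes described in the pitfalls: an outer-success
    response may still carry a per-page error code at data.code, and
    results are often nested under data.data or deeper.
    """
    inner = response.get("data", {})
    if isinstance(inner, dict) and (inner.get("code") or "").endswith("_ERROR"):
        raise RuntimeError(f"per-page failure: {inner['code']}")
    while isinstance(inner, dict) and "data" in inner:
        inner = inner["data"]  # keep unwrapping nested data keys
    return inner

payload = unwrap_result({"data": {"data": {"markdown": "# Hello"}}})
# payload -> {"markdown": "# Hello"}
```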
Quick Reference
| Tool Slug | Description |
|---|---|
| `FIRECRAWL_SCRAPE` | Scrape a single URL with format/action options |
| `FIRECRAWL_CRAWL_V2` | Crawl a website with depth/path control |
| `FIRECRAWL_EXTRACT` | Extract structured data with AI prompt/schema |
| `FIRECRAWL_BATCH_SCRAPE` | Batch scrape multiple URLs concurrently |
| `FIRECRAWL_MAP_MULTIPLE_URLS_BASED_ON_OPTIONS` | Discover/map all URLs on a site |
| `FIRECRAWL_CRAWL_GET` | Get crawl job status and results |
| `FIRECRAWL_GET_THE_STATUS_OF_A_CRAWL_JOB` | Check crawl job progress |
| `FIRECRAWL_CANCEL_A_CRAWL_JOB` | Cancel an active crawl job |
| `FIRECRAWL_EXTRACT_GET` | Get extraction job status and results |
| `FIRECRAWL_CRAWL_PARAMS_PREVIEW` | Preview crawl parameters before starting |
| `FIRECRAWL_SEARCH` | Web search + scrape top results |
Powered by Composio