
playwright-scraper

Playwright web scraping: dynamic content, auth flows, pagination, data extraction, screenshots

⚡ Recommended: one-command install (60 seconds)

Copy the command below and paste it into your terminal (Mac/Linux) or PowerShell (Windows). Download, extraction, and placement are fully automated.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o playwright-scraper.zip https://jpskill.com/download/22142.zip && unzip -o playwright-scraper.zip && rm playwright-scraper.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/22142.zip -OutFile "$d\playwright-scraper.zip"; Expand-Archive "$d\playwright-scraper.zip" -DestinationPath $d -Force; ri "$d\playwright-scraper.zip"

Once it finishes, restart Claude Code. Then just talk to it normally — for example, ask it to scrape a page — and the skill activates automatically.

💾 Prefer to download manually? (for those uncomfortable with the command line)
  1. Click the blue button below to download playwright-scraper.zip
  2. Double-click the ZIP file to extract it — a playwright-scraper folder will appear
  3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site accepts no responsibility for the skill's content, behavior, or safety.

🎯 What this skill can do

The description below explains what this skill will do for you. It activates automatically whenever you give Claude a request in this area.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't need to say "use this skill" — it is invoked automatically for related requests.

See the full usage guide →
Last updated: 2026-05-18
Retrieved: 2026-05-18
Bundled files: 1
📖 Original SKILL.md as read by Claude (contents expanded)

The text below is the original (English or Chinese) that the AI (Claude) reads. Japanese translations are being added gradually.

playwright-scraper

Purpose

This skill enables web scraping using Playwright, a Node.js library for browser automation. It focuses on handling dynamic content, authentication flows, pagination, data extraction, and screenshots to reliably scrape modern websites.

When to Use

Use this skill for scraping sites with JavaScript-rendered content (e.g., React or Angular apps), sites requiring login (e.g., dashboards), handling multi-page results (e.g., search results), or capturing visual data (e.g., screenshots for verification). Avoid for static HTML sites where simpler tools like requests suffice.

Key Capabilities

  • Dynamically load and interact with content using Playwright's browser control.
  • Manage authentication flows, such as logging in via forms or API tokens.
  • Handle pagination by navigating pages, clicking "next" buttons, or parsing URLs.
  • Extract data using selectors, with options for JSON output or file saves.
  • Capture screenshots or full-page PDFs for debugging or reporting.
  • Run in headless or headed (visible) browser mode for flexibility.

Usage Patterns

Always initialize a browser context first, then create pages for navigation. Use async patterns for reliability. For authenticated scraping, handle cookies or sessions per context. Structure scripts to loop through pages for pagination and use try-catch for flaky elements. Pass configurations via JSON files or environment variables for reusability.
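The pattern above — one context per task, a page inside it, and cleanup even on failure — can be sketched as a small helper. This is a minimal sketch, not the skill's own code: the `browser` argument is assumed to come from `playwright.chromium.launch()`, and the URL and selector are placeholders.

```javascript
// Sketch of the context-per-task pattern described above. The browser
// object is assumed to come from playwright.chromium.launch(); the URL
// and selector passed in are illustrative placeholders.
async function scrapeOne(browser, url, selector) {
  const context = await browser.newContext(); // isolated cookies/session
  const page = await context.newPage();
  try {
    await page.goto(url, { waitUntil: 'networkidle' });
    await page.waitForSelector(selector, { timeout: 10000 });
    return await page.evaluate(
      sel => document.querySelector(sel).innerText,
      selector
    );
  } finally {
    await context.close(); // always release the context, even on failure
  }
}
```

Because each call gets a fresh context, authenticated and unauthenticated scrapes can run side by side without sharing cookies.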

Common Commands/API

Use Playwright's Node.js API. Install via npm install playwright. Key methods include:

  • Launch browser: const browser = await playwright.chromium.launch({ headless: true });
  • Navigate page: const page = await browser.newPage(); await page.goto('https://example.com');
  • Handle auth: await page.fill('#username', process.env.USERNAME); await page.fill('#password', process.env.PASSWORD); await page.click('#login');
  • Extract data: const data = await page.evaluate(() => document.querySelector('#target').innerText); console.log(data);
  • Pagination: while (await page.$('#next-button')) { await page.click('#next-button'); await page.waitForSelector('.item'); }
  • Take screenshot: await page.screenshot({ path: 'screenshot.png' });
  • CLI flags for running scripts: use npx playwright test with flags like --headed for visible mode or --timeout 30000 for extended waits.

Integration Notes

Integrate by importing Playwright in Node.js projects. For auth, use environment variables like $PLAYWRIGHT_USERNAME and $PLAYWRIGHT_PASSWORD to avoid hardcoding. Configuration format: Use a JSON file for settings, e.g., { "url": "https://target.com", "selector": "#data-element" }. Pass it via script args: node scraper.js --config config.json. For larger systems, chain with tools like Puppeteer (if migrating) or export data to databases via page.evaluate results. Ensure compatibility with Node.js 14+ and handle proxy settings with browser.launch({ proxy: { server: 'http://myproxy.com:8080' } }).

Error Handling

Anticipate common errors like timeout on dynamic loads or selector failures. Use page.waitForSelector with timeouts: await page.waitForSelector('#element', { timeout: 10000 }).catch(err => console.error('Element not found:', err));. For network issues, wrap page.goto in try-catch: try { await page.goto(url, { waitUntil: 'networkidle' }); } catch (e) { console.error('Navigation failed:', e.message); await browser.close(); }. Handle authentication failures by checking for error elements: if (await page.$('#error-message')) { throw new Error('Login failed'); }. Log errors with details and retry up to 3 times using a loop.
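The "retry up to 3 times using a loop" advice generalizes to a small helper. This is a generic sketch, not part of the skill: it wraps any async step (page.goto, waitForSelector, etc.), logs each failure, and rethrows the last error if every attempt fails.

```javascript
// Generic retry helper for the "retry up to 3 times" advice above.
// fn is any async step, e.g. () => page.goto(url, { waitUntil: 'networkidle' }).
async function withRetry(fn, { attempts = 3, delayMs = 1000 } = {}) {
  let lastError;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      console.error(`Attempt ${i}/${attempts} failed:`, err.message);
      if (i < attempts) {
        await new Promise(resolve => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

Usage: const data = await withRetry(() => page.goto(url, { waitUntil: 'networkidle' }));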

Concrete Usage Examples

  1. Scraping a logged-in dashboard: First, set env vars: export PLAYWRIGHT_USERNAME='user@example.com' and export PLAYWRIGHT_PASSWORD='securepass'. Then, run: const browser = await playwright.chromium.launch(); const page = await browser.newPage(); await page.goto('https://dashboard.com/login'); await page.fill('#username', process.env.PLAYWRIGHT_USERNAME); await page.fill('#password', process.env.PLAYWRIGHT_PASSWORD); await page.click('#submit'); const data = await page.evaluate(() => document.querySelector('#dashboard-data').innerText); console.log(data); await browser.close(); This extracts data from a protected page.
  2. Handling pagination on a search site: Script: const browser = await playwright.chromium.launch(); const page = await browser.newPage(); await page.goto('https://search.com?q=query'); let items = []; while (true) { items.push(...await page.$$eval('.result-item', elements => elements.map(el => el.innerText))); const nextButton = await page.$('#next-page'); if (!nextButton) break; await nextButton.click(); await page.waitForTimeout(2000); } console.log(items); await browser.close(); This collects results across multiple pages.
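The pagination loop in example 2 can be factored into a reusable function. This is a sketch under the same assumptions as the example: `page` is a Playwright Page, and the item/next selectors are illustrative. It waits for the item selector after each click instead of a fixed timeout, which is generally more reliable than waitForTimeout.

```javascript
// Reusable version of the pagination loop from example 2. The page
// argument is assumed to be a Playwright Page; selectors are illustrative.
async function collectAllPages(page, itemSelector, nextSelector) {
  const items = [];
  while (true) {
    // Gather every item on the current page.
    items.push(
      ...(await page.$$eval(itemSelector, els => els.map(el => el.innerText)))
    );
    const next = await page.$(nextSelector);
    if (!next) break; // no "next" button: last page reached
    await next.click();
    await page.waitForSelector(itemSelector); // wait for new results to render
  }
  return items;
}
```

Called as collectAllPages(page, '.result-item', '#next-page'), it returns one flat array of item texts across all pages.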

Graph Relationships

  • Related to: "selenium-automation" (alternative browser automation tool)
  • Depends on: "node-runtime" (for Playwright execution)
  • Complements: "data-extraction" (for post-processing scraped data)
  • In cluster: "community" (shared with other open-source tools)