🛠️ Scrapling
A Skill for web scraping with Scrapling: HTTP fetching, stealth browser automation, Cloudflare bypass, and spider crawling via CLI and Python, for efficiently collecting information from websites.
📺 Watch a video first (YouTube)
▶ [Shocking] The ultimate AI agent "Claude Code": latest features, usage, and super-practical techniques for streamlining programming with AI! ↗
Note: a video chosen for reference by the jpskill.com editorial team. The video's content may not exactly match the Skill's behavior.
Copy the command below and paste it into Terminal (Mac/Linux) or PowerShell (Windows). Download → extract → place, fully automated.
macOS / Linux:
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o scrapling.zip https://jpskill.com/download/1161.zip && unzip -o scrapling.zip && rm scrapling.zip

Windows (PowerShell):
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/1161.zip -OutFile "$d\scrapling.zip"; Expand-Archive "$d\scrapling.zip" -DestinationPath $d -Force; ri "$d\scrapling.zip"
When it finishes, restart Claude Code → then just ask in plain language (see the sample prompts below) and the Skill activates automatically.
💾 Manual download (if the commands feel too hard)
1. Click the blue button below to download scrapling.zip
2. Double-click the ZIP file to extract it → a scrapling folder appears
3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
4. Restart Claude Code
⚠️ Download and use at your own risk. This site takes no responsibility for the content, behavior, or safety of the Skill.
🎯 What this Skill can do
The description below shows what this Skill can do for you. When you give Claude a request in this area, it activates automatically.
📦 Installation (3 steps)
1. Click the "Download" button above to get the .skill file
2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
3. Place the extracted folder in .claude/skills/ under your home folder:
   - macOS / Linux: ~/.claude/skills/
   - Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you're done. Even without saying "use this Skill...", it is invoked automatically for related requests.
See the detailed usage guide →

- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Included files: 1
💬 Sample prompts: just talk to it like this
- › Using Scrapling, show me a minimal working code sample
- › Explain the main ways to use Scrapling and what to watch out for
- › Show me how to integrate Scrapling into an existing project
Just paste one of these into Claude Code and the Skill activates automatically.
📖 The original SKILL.md that Claude reads
The text below is the original (in English or Chinese) written for the AI (Claude) to read. Japanese translations are being added over time.
Scrapling
Scrapling is a web scraping framework with anti-bot bypass, stealth browser automation, and a spider framework. It provides three fetching strategies (HTTP, dynamic JS, stealth/Cloudflare) and a full CLI.
This skill is for educational and research purposes only. Users must comply with local/international data scraping laws and respect website Terms of Service.
When to Use
- Scraping static HTML pages (faster than browser tools)
- Scraping JS-rendered pages that need a real browser
- Bypassing Cloudflare Turnstile or bot detection
- Crawling multiple pages with a spider
- When the built-in `web_extract` tool does not return the data you need
Installation
pip install "scrapling[all]"
scrapling install
Minimal install (HTTP only, no browser):
pip install scrapling
With browser automation only:
pip install "scrapling[fetchers]"
scrapling install
Quick Reference
| Approach | Class | Use When |
|---|---|---|
| HTTP | `Fetcher` / `FetcherSession` | Static pages, APIs, fast bulk requests |
| Dynamic | `DynamicFetcher` / `DynamicSession` | JS-rendered content, SPAs |
| Stealth | `StealthyFetcher` / `StealthySession` | Cloudflare, anti-bot protected sites |
| Spider | `Spider` | Multi-page crawling with link following |
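As a quick orientation, here is a minimal sketch showing that the three single-page strategies differ only in the class you use; each call reuses an API demonstrated in its own section later in this file.

```python
# Minimal sketch of the three single-page strategies from the table above;
# each class is demonstrated in detail later in this file.
from scrapling.fetchers import Fetcher, DynamicFetcher, StealthyFetcher

static_page = Fetcher.get('https://example.com')        # HTTP: static pages, APIs
spa_page = DynamicFetcher.fetch('https://example.com')  # Dynamic: JS-rendered content
guarded_page = StealthyFetcher.fetch('https://example.com',
                                     solve_cloudflare=True)  # Stealth: anti-bot sites
```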
CLI Usage
Extract Static Page
scrapling extract get 'https://example.com' output.md
With CSS selector and browser impersonation:
scrapling extract get 'https://example.com' output.md \
--css-selector '.content' \
--impersonate 'chrome'
Extract JS-Rendered Page
scrapling extract fetch 'https://example.com' output.md \
--css-selector '.dynamic-content' \
--disable-resources \
--network-idle
Extract Cloudflare-Protected Page
scrapling extract stealthy-fetch 'https://protected-site.com' output.html \
--solve-cloudflare \
--block-webrtc \
--hide-canvas
POST Request
scrapling extract post 'https://example.com/api' output.json \
--json '{"query": "search term"}'
Output Formats
The output format is determined by the file extension:
- `.html` -- raw HTML
- `.md` -- converted to Markdown
- `.txt` -- plain text
- `.json` / `.jsonl` -- JSON
Python: HTTP Scraping
Single Request
from scrapling.fetchers import Fetcher
page = Fetcher.get('https://quotes.toscrape.com/')
quotes = page.css('.quote .text::text').getall()
for q in quotes:
    print(q)
Session (Persistent Cookies)
from scrapling.fetchers import FetcherSession
with FetcherSession(impersonate='chrome') as session:
    page = session.get('https://example.com/', stealthy_headers=True)
    links = page.css('a::attr(href)').getall()
    for link in links[:5]:
        sub = session.get(link)
        print(sub.css('h1::text').get())
POST / PUT / DELETE
page = Fetcher.post('https://api.example.com/data', json={"key": "value"})
page = Fetcher.put('https://api.example.com/item/1', data={"name": "updated"})
page = Fetcher.delete('https://api.example.com/item/1')
With Proxy
page = Fetcher.get('https://example.com', proxy='http://user:pass@proxy:8080')
Python: Dynamic Pages (JS-Rendered)
For pages that require JavaScript execution (SPAs, lazy-loaded content):
from scrapling.fetchers import DynamicFetcher
page = DynamicFetcher.fetch('https://example.com', headless=True)
data = page.css('.js-loaded-content::text').getall()
Wait for Specific Element
page = DynamicFetcher.fetch(
    'https://example.com',
    wait_selector=('.results', 'visible'),
    network_idle=True,
)
Disable Resources for Speed
Blocks fonts, images, media, stylesheets (~25% faster):
from scrapling.fetchers import DynamicSession
with DynamicSession(headless=True, disable_resources=True, network_idle=True) as session:
    page = session.fetch('https://example.com')
    items = page.css('.item::text').getall()
Custom Page Automation
from playwright.sync_api import Page
from scrapling.fetchers import DynamicFetcher
def scroll_and_click(page: Page):
    page.mouse.wheel(0, 3000)
    page.wait_for_timeout(1000)
    page.click('button.load-more')
    page.wait_for_selector('.extra-results')
page = DynamicFetcher.fetch('https://example.com', page_action=scroll_and_click)
results = page.css('.extra-results .item::text').getall()
Python: Stealth Mode (Anti-Bot Bypass)
For Cloudflare-protected or heavily fingerprinted sites:
from scrapling.fetchers import StealthyFetcher
page = StealthyFetcher.fetch(
    'https://protected-site.com',
    headless=True,
    solve_cloudflare=True,
    block_webrtc=True,
    hide_canvas=True,
)
content = page.css('.protected-content::text').getall()
Stealth Session
from scrapling.fetchers import StealthySession
with StealthySession(headless=True, solve_cloudflare=True) as session:
    page1 = session.fetch('https://protected-site.com/page1')
    page2 = session.fetch('https://protected-site.com/page2')
Element Selection
All fetchers return a Selector object with these methods:
CSS Selectors
page.css('h1::text').get() # First h1 text
page.css('a::attr(href)').getall() # All link hrefs
page.css('.quote .text::text').getall() # Nested selection
XPath
page.xpath('//div[@class="content"]/text()').getall()
page.xpath('//a/@href').getall()
Find Methods
page.find_all('div', class_='quote') # By tag + attribute
page.find_by_text('Read more', tag='a') # By text content
page.find_by_regex(r'\$\d+\.\d{2}') # By regex pattern
Similar Elements
Find elements with similar structure (useful for product listings, etc.):
first_product = page.css('.product')[0]
all_similar = first_product.find_similar()
Navigation
el = page.css('.target')[0]
el.parent # Parent element
el.children # Child elements
el.next_sibling # Next sibling
el.prev_sibling # Previous sibling
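As a hedged example combining CSS selection with the navigation attributes above (assuming the quotes.toscrape.com markup used in the HTTP example):

```python
# Sketch: walk the tree around a selected element using the attributes above.
# Assumes the quotes.toscrape.com page structure from the earlier HTTP example.
from scrapling.fetchers import Fetcher

page = Fetcher.get('https://quotes.toscrape.com/')
text_el = page.css('.quote .text')[0]
quote = text_el.parent                   # the enclosing .quote element
print(quote.css('.author::text').get())  # author inside the same quote
print(text_el.next_sibling)              # whatever follows the quote text
```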
Python: Spider Framework
For multi-page crawling with link following:
from scrapling.spiders import Spider, Request, Response
class QuotesSpider(Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]
    concurrent_requests = 10
    download_delay = 1

    async def parse(self, response: Response):
        for quote in response.css('.quote'):
            yield {
                "text": quote.css('.text::text').get(),
                "author": quote.css('.author::text').get(),
                "tags": quote.css('.tag::text').getall(),
            }
        next_page = response.css('.next a::attr(href)').get()
        if next_page:
            yield response.follow(next_page)

result = QuotesSpider().start()
print(f"Scraped {len(result.items)} quotes")
result.items.to_json("quotes.json")
Multi-Session Spider
Route requests to different fetcher types:
from scrapling.fetchers import FetcherSession, AsyncStealthySession
class SmartSpider(Spider):
    name = "smart"
    start_urls = ["https://example.com/"]

    def configure_sessions(self, manager):
        manager.add("fast", FetcherSession(impersonate="chrome"))
        manager.add("stealth", AsyncStealthySession(headless=True), lazy=True)

    async def parse(self, response: Response):
        for link in response.css('a::attr(href)').getall():
            if "protected" in link:
                yield Request(link, sid="stealth")
            else:
                yield Request(link, sid="fast", callback=self.parse)
Pause/Resume Crawling
spider = QuotesSpider(crawldir="./crawl_checkpoint")
spider.start() # Ctrl+C to pause, re-run to resume from checkpoint
Pitfalls
- Browser install required: run `scrapling install` after pip install -- without it, `DynamicFetcher` and `StealthyFetcher` will fail
- Timeouts: `DynamicFetcher`/`StealthyFetcher` timeout is in milliseconds (default 30000), `Fetcher` timeout is in seconds (see the sketch after this list)
- Cloudflare bypass: `solve_cloudflare=True` adds 5-15 seconds to fetch time -- only enable when needed
- Resource usage: `StealthyFetcher` runs a real browser -- limit concurrent usage
- Legal: always check robots.txt and website ToS before scraping. This library is for educational and research purposes
- Python version: requires Python 3.10+
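A minimal sketch of the timeout pitfall, assuming both fetchers accept a `timeout` keyword as the unit note above implies; verify the keyword against your installed Scrapling version.

```python
# Assumption: both fetchers take a `timeout` keyword, per the pitfall above;
# the unit differs between them, which is the easy mistake to make.
from scrapling.fetchers import Fetcher, DynamicFetcher

page = Fetcher.get('https://example.com', timeout=30)              # seconds
page = DynamicFetcher.fetch('https://example.com', timeout=30000)  # milliseconds (30 s)
```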