
seo-drift

SEO drift monitoring: capture baselines of SEO-critical elements, detect changes, and track regressions over time. Git for SEO — baseline, diff, and track changes to your on-page SEO. Use when user says "SEO drift", "baseline", "track changes", "did anything break", "SEO regression", "compare SEO", "before and after", "monitor SEO changes", or "deployment check".

⚡ Recommended: one-command install (60 seconds)

Copy the command below and paste it into your terminal (Mac/Linux) or PowerShell (Windows). Download, extraction, and placement are fully automatic.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o seo-drift.zip https://jpskill.com/download/10568.zip && unzip -o seo-drift.zip && rm seo-drift.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/10568.zip -OutFile "$d\seo-drift.zip"; Expand-Archive "$d\seo-drift.zip" -DestinationPath $d -Force; ri "$d\seo-drift.zip"

When it finishes, restart Claude Code. Then just ask in plain language, e.g. "did anything break after the deploy?", and the skill triggers automatically.

💾 Manual download (if the command route is too tricky)
  1. Click the blue button below to download seo-drift.zip
  2. Double-click the ZIP file to extract it; a seo-drift folder appears
  3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site takes no responsibility for the content, behavior, or safety of any skill.

🎯 What this skill can do

The description below explains what this skill will do for you. It fires automatically whenever you give Claude a request in this area.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't need to say "use this skill"; it is invoked automatically for related requests.

Last updated: 2026-05-18
Retrieved: 2026-05-18
Bundled files: 1
📖 Original SKILL.md as read by Claude

The text below is the original source (English or Chinese) that the AI (Claude) reads. Japanese translations are being added over time.

SEO Drift Monitor (April 2026)

Git for your SEO. Capture baselines, detect regressions, track changes over time.


Commands

| Command | Purpose |
| --- | --- |
| /seo drift baseline <url> | Capture current SEO state as a "known good" snapshot |
| /seo drift compare <url> | Compare current page state to stored baseline |
| /seo drift history <url> | Show change history and past comparisons |

What It Captures

Every baseline records these SEO-critical elements:

| Element | Field | Source |
| --- | --- | --- |
| Title tag | title | parse_html.py |
| Meta description | meta_description | parse_html.py |
| Canonical URL | canonical | parse_html.py |
| Robots directives | meta_robots | parse_html.py |
| H1 headings | h1 (array) | parse_html.py |
| H2 headings | h2 (array) | parse_html.py |
| H3 headings | h3 (array) | parse_html.py |
| JSON-LD schema | schema (array) | parse_html.py |
| Open Graph tags | open_graph (dict) | parse_html.py |
| Core Web Vitals | cwv (dict) | pagespeed_check.py |
| HTTP status code | status_code | fetch_page.py |
| HTML content hash | html_hash (SHA-256) | Computed |
| Schema content hash | schema_hash (SHA-256) | Computed |
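
The two hash fields let a comparison flag that something changed even before any field-level rule fires. A minimal sketch of how they could be computed (assumed logic; the actual scripts may normalize their input differently):

```python
# Hypothetical sketch, not the project's actual hashing code.
import hashlib
import json

def compute_hashes(html_body: str, schema_blocks: list) -> dict:
    # Hash the raw HTML body so any markup change is detectable.
    html_hash = hashlib.sha256(html_body.encode("utf-8")).hexdigest()
    # Serialize JSON-LD blocks deterministically before hashing, so that
    # key ordering alone cannot produce a false positive.
    canonical = json.dumps(schema_blocks, sort_keys=True, ensure_ascii=False)
    schema_hash = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return {"html_hash": html_hash, "schema_hash": schema_hash}
```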

How Comparison Works

The comparison engine applies 17 rules across 3 severity levels. Load references/comparison-rules.md for the full rule set with thresholds, recommended actions, and cross-skill references.

Severity Levels

| Level | Meaning | Response Time |
| --- | --- | --- |
| CRITICAL | SEO-breaking change, likely traffic loss | Immediate |
| WARNING | Potential impact, needs investigation | Within 1 week |
| INFO | Awareness only, may be intentional | Review at convenience |

Storage

All data is stored locally in SQLite:

~/.cache/claude-seo/drift/baselines.db

Tables

  • baselines: Captured snapshots with all SEO elements
  • comparisons: Diff results with triggered rules and severities
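
As a rough illustration, the two tables could be shaped like this. Only the table names and the fields named elsewhere in this document come from the source; the remaining columns and the payload blob are assumptions. The INSERT also demonstrates the parameterized placeholders required by the Security section below.

```python
# Illustrative schema sketch; the real baselines.db layout may differ.
import json
import sqlite3

conn = sqlite3.connect("baselines.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS baselines (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    url         TEXT NOT NULL,          -- normalized URL
    captured_at TEXT NOT NULL,          -- ISO 8601 timestamp
    status_code INTEGER,
    html_hash   TEXT,
    schema_hash TEXT,
    payload     TEXT                    -- full snapshot as JSON (assumed)
);
CREATE TABLE IF NOT EXISTS comparisons (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    baseline_id INTEGER REFERENCES baselines(id),
    compared_at TEXT NOT NULL,
    findings    TEXT                    -- triggered rules and severities, as JSON
);
""")

# Parameterized placeholders (?), never string interpolation:
snapshot = {"title": "Example", "h1": ["Example"]}
conn.execute(
    "INSERT INTO baselines (url, captured_at, payload) VALUES (?, ?, ?)",
    ("https://example.com", "2026-04-01T00:00:00Z", json.dumps(snapshot)),
)
conn.commit()
```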

URL normalization ensures consistent matching: lowercase scheme/host, strip default ports (80/443), sort query parameters, remove UTM parameters, strip trailing slashes.
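
A sketch of those normalization rules in Python (the helper name normalize_url is hypothetical):

```python
# Implements the five rules above; the real code may differ in edge cases.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url: str) -> str:
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()
    # Keep the port only when it is not the scheme's default (80/443).
    if parts.port and (scheme, parts.port) not in (("http", 80), ("https", 443)):
        host = f"{host}:{parts.port}"
    # Drop UTM parameters, then sort the remainder for stable matching.
    query = sorted(
        (k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if not k.lower().startswith("utm_")
    )
    # Strip trailing slashes from the path.
    path = parts.path.rstrip("/")
    return urlunsplit((scheme, host, path, urlencode(query), parts.fragment))
```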


Command: baseline

Captures the current state of a page and stores it.

Steps:

  1. Validate URL (SSRF protection via google_auth.validate_url())
  2. Fetch page via scripts/fetch_page.py
  3. Parse HTML via scripts/parse_html.py
  4. Optionally fetch CWV via scripts/pagespeed_check.py (use --skip-cwv to skip)
  5. Hash HTML body and schema content (SHA-256)
  6. Store snapshot in SQLite

Execution:

python scripts/drift_baseline.py <url>
python scripts/drift_baseline.py <url> --skip-cwv

Output: JSON with baseline ID, timestamp, URL, and summary of captured elements.


Command: compare

Fetches the current page state and diffs it against the most recent baseline.

Steps:

  1. Validate URL
  2. Load most recent baseline from SQLite (or specific --baseline-id)
  3. Fetch and parse current page state
  4. Run all 17 comparison rules
  5. Classify findings by severity
  6. Store comparison result
  7. Output JSON diff report
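
For illustration, here are two of the kinds of checks the engine applies, written as a self-contained diff function. The rule names and severities below are examples only, not the actual 17-rule set from references/comparison-rules.md:

```python
# Illustrative sketch of steps 4-5: run rules, classify by severity.
def diff_snapshots(baseline: dict, current: dict) -> list[dict]:
    findings = []
    # Example rule: title changed (potential impact, needs investigation).
    if baseline.get("title") != current.get("title"):
        findings.append({
            "rule": "title_changed",
            "severity": "WARNING",
            "old": baseline.get("title"),
            "new": current.get("title"),
        })
    # Example rule: noindex newly added (SEO-breaking, likely traffic loss).
    old_robots = baseline.get("meta_robots") or ""
    new_robots = current.get("meta_robots") or ""
    if "noindex" in new_robots and "noindex" not in old_robots:
        findings.append({
            "rule": "noindex_added",
            "severity": "CRITICAL",
            "old": old_robots,
            "new": new_robots,
        })
    return findings
```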

Execution:

python scripts/drift_compare.py <url>
python scripts/drift_compare.py <url> --baseline-id 5
python scripts/drift_compare.py <url> --skip-cwv

Output: JSON with all triggered rules, old/new values, severity, and actions.

After comparison, offer to generate an HTML report:

python scripts/drift_report.py <comparison_json_file> --output drift-report.html

Command: history

Shows all baselines and comparisons for a URL.

Execution:

python scripts/drift_history.py <url>
python scripts/drift_history.py <url> --limit 10

Output: JSON array of baselines (newest first) with timestamps and comparison summaries.


Cross-Skill Integration

When drift is detected, recommend the appropriate specialized skill:

| Finding | Recommendation |
| --- | --- |
| Schema removed or modified | Run /seo schema <url> for full validation |
| CWV regression | Run /seo technical <url> for performance audit |
| Title or meta description changed | Run /seo page <url> for content analysis |
| Canonical changed or removed | Run /seo technical <url> for indexability check |
| Noindex added | Run /seo technical <url> for crawlability audit |
| H1/heading structure changed | Run /seo content <url> for E-E-A-T review |
| OG tags removed | Run /seo page <url> for social sharing analysis |
| Status code changed to error | Run /seo technical <url> for full diagnostics |

Error Handling

| Scenario | Action |
| --- | --- |
| URL unreachable | Report error from fetch_page.py. Do not guess state. Suggest user verify URL. |
| No baseline exists for URL | Inform user and suggest running baseline first. |
| SSRF blocked (private IP) | Report validate_url() rejection. Never bypass. |
| SQLite database missing | Auto-create on first use. No error. |
| CWV fetch fails (no API key) | Store null for CWV fields. Skip CWV rules during comparison. |
| Page returns 4xx/5xx | Still capture as baseline (status code IS a tracked field). |
| Multiple baselines exist | Use most recent unless --baseline-id specified. |

Security

  • All URL fetching goes through scripts/fetch_page.py which enforces SSRF protection (blocks private IPs, loopback, reserved ranges, GCP metadata endpoints)
  • No curl, no subprocess HTTP calls -- only the project's validated fetch pipeline
  • All SQLite queries use parameterized placeholders (?), never string interpolation
  • TLS always verified -- no verify=False anywhere in the pipeline
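
A sketch of the kind of check validate_url() performs, based only on the guarantees listed above (the real implementation lives in the project's validated fetch pipeline, and this function name and shape are assumptions):

```python
# Hypothetical SSRF guard: reject URLs whose host resolves to a
# private, loopback, link-local, or reserved address.
import ipaddress
import socket
from urllib.parse import urlsplit

def is_safe_target(url: str) -> bool:
    host = urlsplit(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Link-local covers the GCP metadata endpoint (169.254.169.254).
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```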

Typical Workflows

Pre/Post Deployment Check

/seo drift baseline https://example.com     # Before deploy
# ... deploy happens ...
/seo drift compare https://example.com      # After deploy

Ongoing Monitoring

/seo drift baseline https://example.com     # Initial capture
# ... weeks later ...
/seo drift compare https://example.com      # Check for drift
/seo drift history https://example.com      # Review all changes

Investigating a Traffic Drop

/seo drift compare https://example.com      # What changed?
/seo drift history https://example.com      # When did it change?