🛠️ Dogfood
QA-test your own company's web app from a real user's perspective.
📺 Watch the video first (YouTube)
▶ 【衝撃】最強のAIエージェント「Claude Code」の最新機能・使い方・プログラミングをAIで効率化する超実践術を解説! ↗
※ A video selected for reference by the jpskill.com editorial team. The video's content may not exactly match the Skill's behavior.
📜 Original English description (for reference)
Exploratory QA of web apps: find bugs, evidence, reports.
🇯🇵 Commentary for Japanese creators
QA-test your own company's web app from a real user's perspective.
※ Supplementary notes by the jpskill.com editorial team for Japanese business settings. Reference information independent of the Skill's actual behavior.
Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). Download → extract → install, fully automated.
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o dogfood.zip https://jpskill.com/download/1201.zip && unzip -o dogfood.zip && rm dogfood.zip
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/1201.zip -OutFile "$d\dogfood.zip"; Expand-Archive "$d\dogfood.zip" -DestinationPath $d -Force; ri "$d\dogfood.zip"
When it finishes, restart Claude Code → just ask something relevant, e.g. "QA-test my web app", and the Skill activates automatically.
💾 Manual download (if the command feels difficult)
1. Click the blue button below to download dogfood.zip
2. Double-click the ZIP file to extract it → a dogfood folder appears
3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
4. Restart Claude Code
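If you want to confirm the files landed in the right place, a minimal Python check might look like this. `skill_installed` is a hypothetical helper written for this page, not part of the Skill itself; it only assumes the layout described above (a `dogfood` folder containing `SKILL.md` inside your skills directory).

```python
from pathlib import Path

def skill_installed(skills_dir: Path, name: str = "dogfood") -> bool:
    """Return True if the skill folder contains a SKILL.md manifest."""
    return (skills_dir / name / "SKILL.md").is_file()
```

For example, `skill_installed(Path.home() / ".claude" / "skills")` should return `True` after a successful install.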
⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of the Skill.
🎯 What this Skill can do
Read the description below to see what this Skill will do for you. It activates automatically when you ask Claude for work in this area.
📦 Installation (3 steps)
1. Click the "Download" button above to get the .skill file
2. Rename the extension from .skill to .zip and extract it (macOS can auto-extract)
3. Put the extracted folder in .claude/skills/ under your home folder
   - macOS / Linux: ~/.claude/skills/
   - Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you're done. You don't need to say "use this Skill" — it is invoked automatically for relevant requests.
See the detailed usage guide →
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Bundled files: 2
💬 Just say this — sample prompts
- › Show me how to use Dogfood
- › Show me concrete examples of what Dogfood can do
- › Walk a first-time user through Dogfood step by step
Paste any of these into Claude Code and the Skill activates automatically.
📖 The original SKILL.md that Claude reads (expanded below)
This body is the original text (English or Chinese) that the AI (Claude) reads. Japanese translations are being added progressively.
Dogfood: Systematic Web Application QA Testing
Overview
This skill guides you through systematic exploratory QA testing of web applications using the browser toolset. You will navigate the application, interact with elements, capture evidence of issues, and produce a structured bug report.
Prerequisites
- Browser toolset must be available (browser_navigate, browser_snapshot, browser_click, browser_type, browser_vision, browser_console, browser_scroll, browser_back, browser_press)
- A target URL and testing scope from the user
Inputs
The user provides:
- Target URL — the entry point for testing
- Scope — what areas/features to focus on (or "full site" for comprehensive testing)
- Output directory (optional) — where to save screenshots and the report (default: ./dogfood-output)
Workflow
Follow this 5-phase systematic workflow:
Phase 1: Plan
- Create the output directory structure:
  ```
  {output_dir}/
  ├── screenshots/   # Evidence screenshots
  └── report.md      # Final report (generated in Phase 5)
  ```
- Identify the testing scope based on user input.
- Build a rough sitemap by planning which pages and features to test:
- Landing/home page
- Navigation links (header, footer, sidebar)
- Key user flows (sign up, login, search, checkout, etc.)
- Forms and interactive elements
- Edge cases (empty states, error pages, 404s)
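The directory setup in Phase 1 can be sketched in Python. This is illustrative only — in practice Claude performs this step with its own tools, and `prepare_output_dir` is a hypothetical name:

```python
from pathlib import Path

def prepare_output_dir(output_dir: str = "./dogfood-output") -> Path:
    """Create the Phase 1 layout: {output_dir}/screenshots/.
    report.md itself is written later, in Phase 5."""
    root = Path(output_dir)
    (root / "screenshots").mkdir(parents=True, exist_ok=True)
    return root
```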
Phase 2: Explore
For each page or feature in your plan:
1. Navigate to the page:
   `browser_navigate(url="https://example.com/page")`
2. Take a snapshot to understand the DOM structure:
   `browser_snapshot()`
3. Check the console for JavaScript errors:
   `browser_console(clear=true)`
   Do this after every navigation and after every significant interaction. Silent JS errors are high-value findings.
4. Take an annotated screenshot to visually assess the page and identify interactive elements:
   `browser_vision(question="Describe the page layout, identify any visual issues, broken elements, or accessibility concerns", annotate=true)`
   The `annotate=true` flag overlays numbered `[N]` labels on interactive elements. Each `[N]` maps to ref `@eN` for subsequent browser commands.
5. Test interactive elements systematically:
   - Click buttons and links: `browser_click(ref="@eN")`
   - Fill forms: `browser_type(ref="@eN", text="test input")`
   - Test keyboard navigation: `browser_press(key="Tab")`, `browser_press(key="Enter")`
   - Scroll through content: `browser_scroll(direction="down")`
   - Test form validation with invalid inputs
   - Test empty submissions
6. After each interaction, check for:
   - Console errors: `browser_console()`
   - Visual changes: `browser_vision(question="What changed after the interaction?")`
   - Expected vs actual behavior
Phase 3: Collect Evidence
For every issue found:
1. Take a screenshot showing the issue:
   `browser_vision(question="Capture and describe the issue visible on this page", annotate=false)`
   Save the `screenshot_path` from the response — you will reference it in the report.
2. Record the details:
   - URL where the issue occurs
   - Steps to reproduce
   - Expected behavior
   - Actual behavior
   - Console errors (if any)
   - Screenshot path
3. Classify the issue using the issue taxonomy (see `references/issue-taxonomy.md`):
   - Severity: Critical / High / Medium / Low
   - Category: Functional / Visual / Accessibility / Console / UX / Content
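The issue record described above maps naturally to a small data structure. A minimal Python sketch — `make_issue` is a hypothetical helper, not part of the skill, assuming plain dicts as the record format:

```python
SEVERITIES = ("Critical", "High", "Medium", "Low")
CATEGORIES = ("Functional", "Visual", "Accessibility", "Console", "UX", "Content")

def make_issue(title, url, severity, category, steps=(), expected="",
               actual="", console_errors=(), screenshot_path=""):
    """Build one issue record with every field Phase 3 asks for,
    validating severity and category against the taxonomy."""
    if severity not in SEVERITIES:
        raise ValueError(f"unknown severity: {severity}")
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return {
        "title": title, "url": url,
        "severity": severity, "category": category,
        "steps_to_reproduce": list(steps),
        "expected": expected, "actual": actual,
        "console_errors": list(console_errors),
        "screenshot_path": screenshot_path,
    }
```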
Phase 4: Categorize
- Review all collected issues.
- De-duplicate — merge issues that are the same bug manifesting in different places.
- Assign final severity and category to each issue.
- Sort by severity (Critical first, then High, Medium, Low).
- Count issues by severity and category for the executive summary.
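The de-duplicate and sort steps above can be sketched as follows. This is a hedged illustration, not the skill's own code: it assumes each issue is a plain dict with `title`, `category`, and `severity` keys, and treats "same title + same category" as the same bug seen in several places; `triage` is a hypothetical name.

```python
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def triage(issues):
    """De-duplicate issues and sort the survivors Critical-first."""
    unique = {}
    for issue in issues:
        key = (issue["title"], issue["category"])
        # keep the first sighting; later duplicates are merged away
        unique.setdefault(key, issue)
    return sorted(unique.values(), key=lambda i: SEVERITY_ORDER[i["severity"]])
```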
Phase 5: Report
Generate the final report using the template at templates/dogfood-report-template.md.
The report must include:
- Executive summary with total issue count, breakdown by severity, and testing scope
- Per-issue sections with:
- Issue number and title
- Severity and category badges
- URL where observed
- Description of the issue
- Steps to reproduce
- Expected vs actual behavior
  - Screenshot references (use `MEDIA:<screenshot_path>` for inline images)
  - Console errors if relevant
- Summary table of all issues
- Testing notes — what was tested, what was not, any blockers
Save the report to {output_dir}/report.md.
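The executive-summary counts can be derived as in this sketch (again illustrative, assuming each issue is a dict with a `severity` key; `executive_summary` is a hypothetical helper, and the exact report layout comes from the bundled template):

```python
from collections import Counter

def executive_summary(issues, scope="full site"):
    """Render the executive-summary block: total issue count,
    breakdown by severity, and the testing scope."""
    by_severity = Counter(i["severity"] for i in issues)
    lines = [f"Total issues: {len(issues)}"]
    for sev in ("Critical", "High", "Medium", "Low"):
        lines.append(f"- {sev}: {by_severity.get(sev, 0)}")
    lines.append(f"Scope: {scope}")
    return "\n".join(lines)
```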
Tools Reference
| Tool | Purpose |
|---|---|
| `browser_navigate` | Go to a URL |
| `browser_snapshot` | Get DOM text snapshot (accessibility tree) |
| `browser_click` | Click an element by ref (`@eN`) or text |
| `browser_type` | Type into an input field |
| `browser_scroll` | Scroll up/down on the page |
| `browser_back` | Go back in browser history |
| `browser_press` | Press a keyboard key |
| `browser_vision` | Screenshot + AI analysis; use `annotate=true` for element labels |
| `browser_console` | Get JS console output and errors |
Tips
- Always check `browser_console()` after navigating and after significant interactions. Silent JS errors are among the most valuable findings.
- Use `annotate=true` with `browser_vision` when you need to reason about interactive element positions or when the snapshot refs are unclear.
- Test with both valid and invalid inputs — form validation bugs are common.
- Scroll through long pages — content below the fold may have rendering issues.
- Test navigation flows — click through multi-step processes end-to-end.
- Check responsive behavior by noting any layout issues visible in screenshots.
- Don't forget edge cases: empty states, very long text, special characters, rapid clicking.
- When reporting screenshots to the user, include `MEDIA:<screenshot_path>` so they can see the evidence inline.
Bundled files
※ List of files included in the ZIP. In addition to the main `SKILL.md`, it may also contain reference material, samples, and scripts.
- 📄 SKILL.md (6,270 bytes)
- 📎 references/issue-taxonomy.md (3,682 bytes)