jpskill.com
🛠️ Development / MCP · Community · 🟡 Some learning curve · 👤 Broad user base

🛠️ Spike

spike

Validate whether your idea is feasible before starting full-scale development.

⏱ Incident postmortem: 1 day → 1 hour

📺 Watch the video first (YouTube)

▶ [Shocking] The ultimate AI agent "Claude Code": latest features, how to use it, and super-practical techniques for streamlining programming with AI! ↗

Note: A video selected for reference by the jpskill.com editorial team. The video's content may not exactly match the Skill's behavior.

📜 Original English description (for reference)

Throwaway experiments to validate an idea before build.

🇯🇵 Guide for Japanese creators

In a nutshell

Validate whether your idea is feasible before committing to full-scale development.

Note: Supplementary commentary by the jpskill.com editorial team for Japanese business settings. It is reference information independent of the Skill's actual behavior.

⚡ Recommended: install with one command (60 seconds)

Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). It downloads, extracts, and places the files fully automatically.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o spike.zip https://jpskill.com/download/1250.zip && unzip -o spike.zip && rm spike.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/1250.zip -OutFile "$d\spike.zip"; Expand-Archive "$d\spike.zip" -DestinationPath $d -Force; ri "$d\spike.zip"

When it finishes, restart Claude Code → just ask naturally, e.g. "is this even possible?", and the Skill activates automatically.

💾 Manual download (if the command feels difficult)
  1. Click the blue button below to download spike.zip
  2. Double-click the ZIP file to extract it → a spike folder appears
  3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site accepts no responsibility for the Skill's content, behavior, or safety.

🎯 What this Skill can do

Read the description below to see what this Skill will do for you. When you ask Claude for work in this area, it activates automatically.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Change the extension from .skill to .zip and extract it (macOS can auto-extract)
  3. Put the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. Even without saying "use this Skill...", it is invoked automatically for related requests.

See the detailed usage guide →
Last updated: 2026-05-17
Retrieved: 2026-05-17
Bundled files: 1

💬 Just ask like this — sample prompts

  • Show me how to use Spike
  • Show me what Spike can do, with concrete examples
  • Walk me through the steps as a first-time Spike user

Paste one of these into Claude Code and the Skill activates automatically.

📖 The original SKILL.md that Claude reads (contents expanded)

This body is the original text (English or Chinese) that the AI (Claude) reads. A Japanese translation is being added progressively.

Spike

Use this skill when the user wants to feel out an idea before committing to a real build — validating feasibility, comparing approaches, or surfacing unknowns that no amount of research will answer. Spikes are disposable by design. Throw them away once they've paid their debt.

Load this when the user says things like "let me try this", "I want to see if X works", "spike this out", "before I commit to Y", "quick prototype of Z", "is this even possible?", or "compare A vs B".

When NOT to use this

  • The answer is knowable from docs or reading code — just do research, don't build
  • The work is production path — use writing-plans / plan instead
  • The idea is already validated — jump straight to implementation

If the user has the full GSD system installed

If gsd-spike shows up as a sibling skill (installed via npx get-shit-done-cc --hermes), prefer gsd-spike when the user wants the full GSD workflow: persistent .planning/spikes/ state, MANIFEST tracking across sessions, Given/When/Then verdict format, and commit patterns that integrate with the rest of GSD. This skill is the lightweight standalone version for users who don't have (or don't want) the full system.

Core method

Regardless of scale, every spike follows this loop:

decompose  →  research  →  build  →  verdict
   ↑__________________________________________↓
                  iterate on findings

1. Decompose

Break the user's idea into 2-5 independent feasibility questions. Each question is one spike. Present them as a table with Given/When/Then framing:

| # | Spike | Validates (Given/When/Then) | Risk |
|---|-------|-----------------------------|------|
| 001 | websocket-streaming | Given a WS connection, when LLM streams tokens, then client receives chunks < 100ms | High |
| 002a | pdf-parse-pdfjs | Given a multi-page PDF, when parsed with pdfjs, then structured text is extractable | Medium |
| 002b | pdf-parse-camelot | Given a multi-page PDF, when parsed with camelot, then structured text is extractable | Medium |

Spike types:

  • standard — one approach answering one question
  • comparison — same question, different approaches (shared number, letter suffix a/b/c)

Good spike questions: specific feasibility with observable output (e.g. "Given a multi-page PDF, when parsed with pdfjs, then structured text is extractable"). Bad spike questions: too broad, no observable output, or just "read the docs about X".

Order by risk. The spike most likely to kill the idea runs first. No point prototyping the easy parts if the hard part doesn't work.

Skip decomposition only if the user already knows exactly what they want to spike and says so. Then take their idea as a single spike.

2. Align (for multi-spike ideas)

Present the spike table. Ask: "Build all in this order, or adjust?" Let the user drop, reorder, or re-frame before you write any code.

3. Research (per spike, before building)

Spikes are not research-free — you research enough to pick the right approach, then you build. Per spike:

  1. Brief it. 2-3 sentences: what this spike is, why it matters, key risk.

  2. Surface competing approaches if there's real choice:

    | Approach | Tool/Library | Pros | Cons | Status |
    |----------|--------------|------|------|--------|
    | ... | ... | ... | ... | maintained / abandoned / beta |
  3. Pick one. State why. If 2+ are credible, build quick variants within the spike.

  4. Skip research for pure logic with no external dependencies.

Use Hermes tools for the research step:

  • web_search("python websocket streaming libraries 2025") — find candidates
  • web_extract(urls=["https://websockets.readthedocs.io/..."]) — read the actual docs (returns markdown)
  • terminal("pip show websockets | grep Version") — check what's installed in the project's venv

For libraries without docs pages, clone and read their README.md / examples/ via read_file. Context7 MCP (if the user has it configured) is also a good source — mcp_*_resolve-library-id then mcp_*_query-docs.

4. Build

One directory per spike. Keep it standalone.

spikes/
├── 001-websocket-streaming/
│   ├── README.md
│   └── main.py
├── 002a-pdf-parse-pdfjs/
│   ├── README.md
│   └── parse.js
└── 002b-pdf-parse-camelot/
    ├── README.md
    └── parse.py

Bias toward something the user can interact with. Spikes fail when the only output is a log line that says "it works." The user wants to feel the spike working. Default choices, in order of preference:

  1. A runnable CLI that takes input and prints observable output
  2. A minimal HTML page that demonstrates the behavior
  3. A small web server with one endpoint
  4. A unit test that exercises the question with recognizable assertions

Depth over speed. Never declare "it works" after one happy-path run. Test edge cases. Follow surprising findings. The verdict is only trustworthy when the investigation was honest.

Avoid unless the spike specifically requires it: complex package management, build tools/bundlers, Docker, env files, config systems. Hardcode everything — it's a spike.
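
For example, a hardcoded, stdlib-only main.py for the 001 streaming spike might look like the sketch below. The simulated token stream is a hypothetical stand-in for the real WS connection; the 100ms threshold comes from the spike table above.

import asyncio
import time

async def fake_llm_stream():
    # Hypothetical stand-in for tokens arriving over the real WS connection.
    for token in "the quick brown fox jumps over the lazy dog".split():
        await asyncio.sleep(0.02)  # hardcoded fake model latency
        yield token

async def main():
    worst_ms, last = 0.0, time.monotonic()
    async for token in fake_llm_stream():
        now = time.monotonic()
        gap_ms = (now - last) * 1000
        worst_ms, last = max(worst_ms, gap_ms), now
        print(f"chunk={token!r} gap={gap_ms:.1f}ms")  # observable output
    # Verdict criterion from the spike table: chunks arrive < 100ms apart.
    print(f"worst gap: {worst_ms:.1f}ms ->",
          "VALIDATED" if worst_ms < 100 else "INVALIDATED")

asyncio.run(main())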

Building one spike — a typical tool sequence:

terminal("mkdir -p spikes/001-websocket-streaming")
write_file("spikes/001-websocket-streaming/README.md", "# 001: websocket-streaming\n\n...")
write_file("spikes/001-websocket-streaming/main.py", "...")
terminal("cd spikes/001-websocket-streaming && python3 main.py")
# Observe output, iterate.

Parallel comparison spikes (002a / 002b) — delegate. When two approaches can run in parallel and both need real engineering (not 10-line prototypes), fan out with delegate_task:

delegate_task(tasks=[
    {"goal": "Build 002a-pdf-parse-pdfjs: ...", "toolsets": ["terminal", "file", "web"]},
    {"goal": "Build 002b-pdf-parse-camelot: ...", "toolsets": ["terminal", "file", "web"]},
])

Each subagent returns its own verdict; you write the head-to-head.

5. Verdict

Each spike's README.md closes with:

## Verdict: VALIDATED | PARTIAL | INVALIDATED

### What worked
- ...

### What didn't
- ...

### Surprises
- ...

### Recommendation for the real build
- ...

VALIDATED = the core question was answered yes, with evidence. PARTIAL = it works under constraints X, Y, Z — document them. INVALIDATED = doesn't work, for this reason. This is a successful spike.
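
For instance, a PARTIAL verdict for the 002b camelot spike, consistent with the head-to-head below, might close its README like this (details illustrative):

## Verdict: PARTIAL

### What worked
- Table extraction across the 100-page sample, including rotated text

### What didn't
- Non-table body text is not extracted (table-only output)

### Surprises
- Needs a system-level ghostscript install, not just pip
- 18s on the 100-page sample vs 3s for pdfjs

### Recommendation for the real build
- Viable if extraction is table-first; prefer pdfjs for general structured text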

Comparison spikes

When two approaches answer the same question (002a / 002b), build them back to back, then do a head-to-head comparison at the end:

## Head-to-head: pdfjs vs camelot

| Dimension | pdfjs (002a) | camelot (002b) |
|-----------|--------------|----------------|
| Extraction quality | 9/10 structured | 7/10 table-only |
| Setup complexity | npm install, 1 line | pip + ghostscript |
| Perf on 100-page PDF | 3s | 18s |
| Handles rotated text | no | yes |

**Winner:** pdfjs for our use case. Camelot if we need table-first extraction later.

Frontier mode (picking what to spike next)

If spikes already exist and the user says "what should I spike next?", walk the existing directories and look for:

  • Integration risks — two validated spikes that touch the same resource but were tested independently
  • Data handoffs — spike A's output was assumed compatible with spike B's input; never proven
  • Gaps in the vision — capabilities assumed but unproven
  • Alternative approaches — different angles for PARTIAL or INVALIDATED spikes

Propose 2-4 candidates as Given/When/Then. Let the user pick.
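
A minimal sketch of that directory walk, assuming each spike's README.md closes with the ## Verdict: line defined above (the spikes/ root and the parsing regex are the only other assumptions):

import re
from pathlib import Path

# Collect each spike's verdict so frontier candidates are easy to spot:
# PARTIAL / INVALIDATED spikes suggest alternative-approach spikes, and
# pairs of VALIDATED spikes touching the same resource suggest
# integration-risk spikes.
def spike_verdicts(root="spikes"):
    verdicts = {}
    for readme in sorted(Path(root).glob("*/README.md")):
        m = re.search(r"^## Verdict:\s*(\w+)", readme.read_text(), re.M)
        verdicts[readme.parent.name] = m.group(1) if m else "UNKNOWN"
    return verdicts

for name, verdict in spike_verdicts().items():
    print(f"{name}: {verdict}")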

Output

  • Create spikes/ (or .planning/spikes/ if the user is using GSD conventions) in the repo root
  • One dir per spike: NNN-descriptive-name/
  • README.md per spike captures question, approach, results, verdict
  • Keep the code throwaway — a spike that takes 2 days to "clean up for production" was a bad spike

Attribution

Adapted from the GSD (Get Shit Done) project's /gsd-spike workflow — MIT © 2025 Lex Christopherson (gsd-build/get-shit-done). The full GSD system offers persistent spike state, MANIFEST tracking, and integration with a broader spec-driven development pipeline; install with npx get-shit-done-cc --hermes --global.