
🛠️ Stress Test

stress-test

A Skill that tests how much stress your business assumptions can withstand.

⏱ Boilerplate implementation: half a day → 30 minutes


📜 Original English Description (Reference)

/em:stress-test — Business Assumption Stress Testing

🇯🇵 Commentary for Japanese Creators

In a nutshell

A Skill that tests how much stress your business assumptions can withstand.

※ This is supplementary commentary by the jpskill.com editorial team for Japanese business settings. It is reference information, independent of the Skill's actual behavior.

⚠️ Download and use at your own risk. This site takes no responsibility for the Skill's content, behavior, or safety.

🎯 What This Skill Can Do

Read the description below to see what this Skill will do for you. When you give Claude a request in this domain, the Skill activates automatically.

📦 Installation (3 Steps)

  1. Press the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't need to say "use this Skill..."; it is invoked automatically for any relevant request.

Last updated: 2026-05-17
Retrieved: 2026-05-17
Bundled files: 1

💬 Just Say This: Sample Prompts

  • Using Stress Test, show me minimal sample code
  • Tell me the main ways to use Stress Test and what to watch out for
  • Tell me how to integrate Stress Test into an existing project

Just paste one of these into Claude Code and the Skill activates automatically.

📖 The Original SKILL.md That Claude Reads

This body is the original text (in English or Chinese) that the AI (Claude) reads. Japanese translations are being added progressively.

/em:stress-test — Business Assumption Stress Testing

Command: /em:stress-test <assumption>

Take any business assumption and break it before the market does. Revenue projections. Market size. Competitive moat. Hiring velocity. Customer retention.


Why Most Assumptions Are Wrong

Founders are optimists by nature. That's a feature — you need optimism to start something from nothing. But it becomes a liability when the assumptions in your business model get inflated by the same optimism that got you started.

The most dangerous assumptions are the ones everyone agrees on.

When the whole team believes the $50M market is real, when every investor call goes well so you assume the round will close, when your model shows $2M ARR by December and nobody questions it — that's when you're most exposed.

Stress testing isn't pessimism. It's calibration.


The Stress-Test Methodology

Step 1: Isolate the Assumption

State it explicitly. Not "our market is large" but "the total addressable market for B2B spend management software in German SMEs is €2.3B."

The more specific the assumption, the more testable it is. Vague assumptions are unfalsifiable — and therefore useless.

Common assumption types:

  • Market size — TAM, SAM, SOM; growth rate; customer segments
  • Customer behavior — willingness to pay, churn, expansion, referrals
  • Revenue model — conversion rates, deal size, sales cycle, CAC
  • Competitive position — moat durability, competitor response speed, switching cost
  • Execution — team velocity, hire timeline, product timeline, operational scaling
  • Macro — regulatory environment, economic conditions, technology availability

Step 2: Find the Counter-Evidence

For every assumption, actively search for evidence that it's wrong.

Ask:

  • Who has tried this and failed?
  • What data contradicts this assumption?
  • What does the bear case look like?
  • If a smart skeptic were looking at this, what would they point to?
  • What's the base rate for assumptions like this?

Sources of counter-evidence:

  • Comparable companies that failed in adjacent markets
  • Customer churn data from similar businesses
  • Historical accuracy of similar forecasts
  • Industry reports with conflicting data
  • What competitors who tried this found

The goal isn't to find a reason to stop — it's to surface what you don't know.

Step 3: Model the Downside

Most plans model the base case and the upside. Stress testing means modeling the downside explicitly.

For quantitative assumptions (revenue, growth, conversion):

| Scenario     | Assumption Value | Probability | Impact |
|--------------|------------------|-------------|--------|
| Base case    | [Original value] | ?           | ?      |
| Bear case    | -30%             | ?           | ?      |
| Stress case  | -50%             | ?           | ?      |
| Catastrophic | -80%             | ?           | ?      |

Key question at each level: Does the business survive? Does the plan make sense?
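
To make the quantitative version concrete, here is a minimal Python sketch of that table for a revenue assumption. Every number (the ARR figure, burn, cash, and the 12-month survival threshold) is a hypothetical placeholder, not part of the Skill:

```python
# Hypothetical inputs for illustration only; substitute your own model.
BASE_ARR = 2_000_000    # assumed ARR by December ($)
MONTHLY_BURN = 250_000  # gross monthly burn ($)
CASH = 2_500_000        # cash on hand ($)

SCENARIOS = {
    "Base case": 0.00,
    "Bear case": -0.30,
    "Stress case": -0.50,
    "Catastrophic": -0.80,
}

for name, haircut in SCENARIOS.items():
    arr = BASE_ARR * (1 + haircut)
    net_burn = MONTHLY_BURN - arr / 12  # revenue offsets gross burn
    runway = CASH / net_burn if net_burn > 0 else float("inf")
    survives = runway >= 12  # crude survival threshold: a year of runway
    print(f"{name:<13} ARR ${arr:>9,.0f}  runway {runway:5.1f} mo  "
          f"{'survives' if survives else 'AT RISK'}")
```

Each printed row answers the key question directly: at this haircut, does the business still survive?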

For qualitative assumptions (moat, product-market fit, team capability):

  • What's the earliest signal this assumption is wrong?
  • How long would it take you to notice?
  • What happens between when it breaks and when you detect it?

Step 4: Calculate Sensitivity

Some assumptions matter more than others. Sensitivity analysis answers: if this one assumption changes, how much does the outcome change?

Example:

  • If CAC doubles, how does that change runway?
  • If churn goes from 5% to 10%, how does that change NRR in 24 months?
  • If the deal cycle is 6 months instead of 3, how does that affect Q3 revenue?

High sensitivity = the assumption is a key lever. If it's wrong, the problem is big.
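
As a sketch of the churn example above (hypothetical figures throughout), NRR is simplified here to monthly retention plus expansion, compounded over 24 months:

```python
# Hypothetical cohort model; real NRR calculations are messier than this.
def nrr_after(months: int, monthly_churn: float, monthly_expansion: float) -> float:
    """Net revenue retention of one cohort, compounded monthly."""
    return (1 - monthly_churn + monthly_expansion) ** months

base = nrr_after(24, monthly_churn=0.05, monthly_expansion=0.03)
stressed = nrr_after(24, monthly_churn=0.10, monthly_expansion=0.03)
print(f"24-month NRR at 5% churn: {base:.2f}, at 10% churn: {stressed:.2f}")
# Doubling churn drops 24-month NRR from ~0.62 to ~0.18 in this toy model:
# high sensitivity, so churn is a key lever worth hedging.
```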

Step 5: Propose the Hedge

For every high-risk assumption, there should be a hedge:

  • Validation hedge — test it before betting on it (pilot, customer conversation, small experiment)
  • Contingency hedge — if it's wrong, what's plan B?
  • Early warning hedge — what's the leading indicator that would tell you it's breaking before it's too late to act?

Stress Test Patterns by Assumption Type

Revenue Projections

Common failures:

  • Bottom-up model assumes 100% of pipeline converts
  • Doesn't account for deal slippage, churn, seasonality
  • New channel assumed to work before tested at scale

Stress questions:

  • What's your actual historical win rate on pipeline?
  • If your top 3 deals slip to next quarter, what happens to the number?
  • What does the model look like if your new sales rep takes 4 months to ramp, not 2?
  • If expansion revenue doesn't materialize, what's the growth rate?

Test: Build the revenue model from historical win rates, not hoped-for ones.
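
A minimal sketch of that test; the pipeline and both win rates below are hypothetical stand-ins for your CRM data:

```python
# Hypothetical pipeline; replace with your CRM export.
pipeline = [
    {"deal": "Deal A", "value": 120_000},
    {"deal": "Deal B", "value": 80_000},
    {"deal": "Deal C", "value": 200_000},
]

HOPED_WIN_RATE = 0.60       # the rate in the optimistic model
HISTORICAL_WIN_RATE = 0.22  # the rate your closed-won data actually shows

total = sum(d["value"] for d in pipeline)
print(f"Pipeline value:   ${total:,.0f}")
print(f"Hoped-for model:  ${total * HOPED_WIN_RATE:,.0f}")
print(f"Historical model: ${total * HISTORICAL_WIN_RATE:,.0f}")
```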

Market Size

Common failures:

  • TAM calculated top-down from industry reports without bottom-up validation
  • Conflating total market with serviceable market
  • Assuming 100% of SAM is reachable

Stress questions:

  • How many companies in your ICP actually exist and can you name them?
  • What's your serviceable obtainable market in years 1-3?
  • What percentage of your ICP is currently spending on any solution to this problem?
  • What does "winning" look like and what market share does that require?

Test: Build a list of target accounts. Count them. Multiply by ACV. That's your SAM.
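
In code the test is deliberately trivial (the account list and ACV below are placeholders); the hard part is that you have to be able to name the accounts:

```python
# Placeholder ICP list; the exercise only works with real, nameable accounts.
target_accounts = ["Account A", "Account B", "Account C"]  # ...your full list
ACV = 25_000  # hypothetical average contract value ($)

sam = len(target_accounts) * ACV
print(f"{len(target_accounts)} nameable accounts x ${ACV:,} ACV = SAM of ${sam:,}")
```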

Competitive Moat

Common failures:

  • Moat is technology advantage that can be built in 6 months
  • Network effects that haven't yet materialized
  • Data advantage that requires scale you don't have

Stress questions:

  • If a well-funded competitor copied your best feature in 90 days, what do customers do?
  • What's your retention rate among customers who have tried alternatives?
  • Is the moat real today or theoretical at scale?
  • What would it cost a competitor to reach feature parity?

Test: Ask churned customers why they left and whether a competitor could have kept them.

Hiring Plan

Common failures:

  • Time-to-hire assumes standard recruiting cycle, not current market
  • Ramp time not modeled (3-6 months before full productivity)
  • Key hire dependency: plan only works if specific person is hired

Stress questions:

  • What happens if the VP Sales hire takes 5 months, not 2?
  • What does execution look like if you only hire 70% of planned headcount?
  • Which single person, if they left tomorrow, would most damage the plan?
  • Is the plan achievable with current team if hiring freezes?

Test: Model the plan with 0 net new hires. What still works?
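
A sketch of that test under stated assumptions: each hypothetical rep carries the same quota and ramps linearly to full productivity over four months, and a hiring freeze means only the existing rep sells.

```python
# Hypothetical hiring plan; quota is $/month at full productivity.
RAMP_MONTHS = 4
reps = [
    {"start_month": -RAMP_MONTHS, "quota": 50_000},  # existing rep, fully ramped
    {"start_month": 2, "quota": 50_000},             # planned hire
    {"start_month": 5, "quota": 50_000},             # planned hire
]

def monthly_capacity(month: int, team: list[dict]) -> float:
    total = 0.0
    for rep in team:
        tenure = month - rep["start_month"]
        if tenure < 0:
            continue  # not hired yet
        total += rep["quota"] * min(1.0, tenure / RAMP_MONTHS)  # linear ramp
    return total

with_hires = sum(monthly_capacity(m, reps) for m in range(12))
frozen = sum(monthly_capacity(m, reps[:1]) for m in range(12))  # 0 net new hires
print(f"12-month capacity with hires: ${with_hires:,.0f}")
print(f"12-month capacity, freeze:    ${frozen:,.0f}")
```

Comparing the two totals shows how much of the plan depends on hires that have not happened yet.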

Competitive Response

Common failures:

  • Assumes incumbents won't respond (they will if you're winning)
  • Underestimates speed of response
  • Doesn't model resource asymmetry

Stress questions:

  • If the market leader copies your product in 6 months, how does pricing change?
  • What's your response if a competitor raises $30M to attack your space?
  • Which of your customers have vendor relationships with your competitors?

The Stress Test Output

ASSUMPTION: [Exact statement]
SOURCE: [Where this came from — model, investor pitch, team gut feel]

COUNTER-EVIDENCE
• [Specific evidence that challenges this assumption]
• [Comparable failure case]
• [Data point that contradicts the assumption]

DOWNSIDE MODEL
• Bear case (-30%): [Impact on plan]
• Stress case (-50%): [Impact on plan]
• Catastrophic (-80%): [Impact on plan — does the business survive?]

SENSITIVITY
This assumption has [HIGH / MEDIUM / LOW] sensitivity.
A 10% change → [X] change in outcome.

HEDGE
• Validation: [How to test this before betting on it]
• Contingency: [Plan B if it's wrong]
• Early warning: [Leading indicator to watch — and at what threshold to act]