atomise
A Skill that supports advanced reasoning: it decomposes complex problems into minimal units and derives solutions while tracking confidence.
📜 Original English description (for reference)
Atom of Thoughts (AoT) reasoning - decompose complex problems into atomic units with confidence tracking and backtracking. For genuinely complex reasoning, not everyday questions. Triggers on: atomise, complex reasoning, decompose problem, structured thinking, verify hypothesis.
🎯 What this Skill can do
The description below explains what this Skill will do for you. When you ask Claude for work in this area, it triggers automatically.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder in the .claude/skills/ directory in your home folder:
 - · macOS / Linux: ~/.claude/skills/
 - · Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you're done. You don't need to say "use this Skill..."; it is invoked automatically for relevant requests.
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Included files: 1
📖 Original SKILL.md (as read by Claude)
Atomise - Atom of Thoughts Reasoning
Decompose complex problems into minimal, verifiable "atoms" of thought. Unlike chain-of-thought (linear, error-accumulating), AoT treats each step as independently verifiable and backtracks when confidence drops.
Use for: Security analysis, architectural decisions, complex debugging, multi-step proofs. Don't use for: Simple questions, trivial calculations, information lookup.
/atomise "<problem>" [--light | --deep] [--math | --code | --security | --design]
The Core Loop
1. DECOMPOSE -> Break into atomic subquestions (1-2 sentences each)
2. SOLVE -> Answer leaf nodes first, propagate up
3. VERIFY -> Test each hypothesis (counterexample, consistency, domain check)
4. CONTRACT -> Summarize verified state in 2 sentences (drop history)
5. EVALUATE -> Confident enough? Done. Too uncertain? Backtrack and try another path.
Repeat until confident or all paths exhausted.
Atoms
Each atom is a minimal unit:
{id, type, content, depends_on[], confidence, verified}
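The skill itself is a prompt specification, not a library, but the atom record above could be sketched in Python roughly as follows (field names are taken from the record; the defaults are assumptions based on the starting-confidence table):

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    # Minimal unit of thought: {id, type, content, depends_on[], confidence, verified}
    id: str                  # e.g. "P1", "R1", "H1"
    type: str                # premise | reasoning | hypothesis | verification | conclusion
    content: str             # 1-2 sentence statement
    depends_on: list[str] = field(default_factory=list)  # ids of parent atoms
    confidence: float = 0.6  # heuristic 0.0-1.0; premises start at 1.0, assumptions at 0.6
    verified: bool = False
```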
| Type | Purpose | Starting Confidence |
|---|---|---|
| premise | Given facts | 1.0 |
| reasoning | Logical inference | Inherited from parents |
| hypothesis | Claim to test | Max 0.7 until verified |
| verification | Test result | Based on test outcome |
| conclusion | Final answer | Propagated from chain |
Confidence propagates: A child can't be more confident than its least-confident parent.
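The propagation rule above can be sketched as a minimal function (the min-over-parents cap is from the spec; treating an empty parent list as "no cap" is an assumption):

```python
def propagate(own_confidence: float, parent_confidences: list[float]) -> float:
    """Cap an atom's confidence at that of its least-confident parent."""
    if not parent_confidences:
        return own_confidence  # premises and root atoms have no parents to cap them
    return min(own_confidence, min(parent_confidences))
```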
Confidence (Honest Caveat)
These numbers are heuristic, not calibrated probabilities. They're useful for tracking relative certainty, not for actual risk assessment.
| Threshold | Meaning |
|---|---|
| > 0.85 | Confident enough to conclude |
| 0.6 - 0.85 | Needs more verification |
| < 0.6 | Decompose further or backtrack |
| < 0.5 | Backtrack - this path isn't working |
Verification adjusts confidence:
- Confirmed -> maintain or slight boost
- Partial -> reduce ~15%
- Refuted -> major reduction, likely backtrack
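These adjustments can be sketched as a function. The spec only says "slight boost" and "major reduction", so the 1.05 and 0.4 multipliers below are illustrative assumptions; only the ~15% partial reduction is stated:

```python
def adjust(confidence: float, outcome: str) -> float:
    """Adjust an atom's confidence after a verification step."""
    if outcome == "confirmed":
        return min(1.0, confidence * 1.05)  # maintain or slight boost (factor assumed)
    if outcome == "partial":
        return confidence * 0.85            # reduce ~15%, per the spec
    if outcome == "refuted":
        return confidence * 0.4             # major reduction (factor assumed); likely backtrack
    raise ValueError(f"unknown verification outcome: {outcome}")
```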
Modes
Depth:
- --light: Fast - max 3 levels, 0.70 confidence threshold
- (default): Standard - max 5 levels, 0.85 confidence threshold
- --deep: Exhaustive - max 7 levels, 0.90 confidence threshold
Domain (adjusts verification style):
- --math: Arithmetic checks, proof validation, boundary tests
- --code: Type checking, invariant verification, test generation
- --security: Threat modeling, attack surface, adversarial thinking
- --design: Tradeoff analysis, constraint satisfaction, feasibility
Output
ANSWER: {result}
CONFIDENCE: {0.0-1.0} - {why}
KEY CHAIN: P1 -> R1 -> H1 -> V1 -> C1
ATOMS:
| id | type | content | conf | verified |
|----|------|---------|------|----------|
| P1 | premise | Given: ... | 1.0 | Y |
| R1 | reasoning | Therefore: ... | 0.95 | Y |
| ... | ... | ... | ... | ... |
RISKS: {what could change this}
Add --verbose for full trace, --quiet for just the answer.
Execution Guide
Phase 0: Setup
- Restate the problem in one sentence
- Extract premises as atoms (given facts = 1.0, assumptions = 0.6)
- Sketch approaches: Direct solve? Decompose? Reframe? Pick best.
Phase 1+: Iterate
- Atomicity gate: Can you answer from verified atoms? Yes -> solve. No -> decompose.
- Decompose: Build dependency tree of atomic subquestions
- Solve + Verify: Leaves first, propagate up. Every hypothesis needs verification.
- Contract: Summarize in <=2 sentences. Drop everything else.
- Evaluate:
- Confident? -> Terminate
- Uncertain but viable? -> Continue
- Low confidence? -> Backtrack, try alternative
Backtracking
When a path yields confidence < 0.5 after verification:
- Prune that branch
- Restore to last contracted state
- Try alternative from initial sketch
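The evaluate/backtrack decision follows directly from the confidence thresholds. A sketch under the default 0.85 threshold (the function name and exact branch order are illustrative):

```python
def next_action(confidence: float, threshold: float = 0.85) -> str:
    """Map current confidence to the next step of the core loop."""
    if confidence > threshold:
        return "terminate"    # confident enough to conclude
    if confidence < 0.5:
        return "backtrack"    # this path isn't working; prune and restore
    if confidence < 0.6:
        return "decompose"    # decompose further (or backtrack)
    return "verify_more"      # 0.6-threshold: needs more verification
```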
Examples
# Complex debugging
/atomise "Why does this function return null on the second call?" --code
# Security review
/atomise "Is this authentication flow vulnerable to session fixation?" --security
# Architecture decision
/atomise "Should we use event sourcing for this domain?" --deep --design
# Quick decision (light mode)
/atomise "Redis vs Memcached for this cache layer?" --light
Anti-Patterns
BAD: /atomise "What's 2+2?" -> Just answer it
BAD: /atomise "Rewrite this function" -> That's implementation, not reasoning
BAD: Forcing conclusion despite low confidence -> Let it backtrack
GOOD: /atomise for genuine uncertainty requiring structured decomposition
Remember
- Atomic = minimal. 1-2 sentences per atom.
- Verify everything. Hypotheses need tests.
- Contract aggressively. Keep only what's needed for next step.
- Backtrack freely. Low confidence means try another path.
- Confidence is heuristic. Useful for structure, not actual probabilities.