jpskill.com

🛠️ Multi-Agent Task Orchestration

multi-agent-task-orchestrator

For when you delegate work to multiple AI agents

⏱ Test plan creation: 2 hours → 20 minutes


📜 Original English description (for reference)

Route tasks to specialized AI agents with anti-duplication, quality gates, and 30-minute heartbeat monitoring

🇯🇵 Commentary for Japanese creators

In one line

For when you delegate work to multiple AI agents

※ Supplementary notes by the jpskill.com editorial team for Japanese business use. This is reference information independent of the Skill's actual behavior.

⚠️ Download and use at your own risk. This site takes no responsibility for the content, behavior, or safety of the Skill.

🎯 What this Skill can do

The description below shows what this Skill will do for you. When you give Claude a request in this area, it activates automatically.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Change the file extension from .skill to .zip and extract it (macOS can extract automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you are done. Even without saying "use this Skill…", it is invoked automatically on related requests.

Last updated: 2026-05-17
Retrieved: 2026-05-17
Bundled files: 1

💬 Sample prompts (just say this)

  • Using Multi Agent Task Orchestrator, show me sample code for a minimal setup
  • Tell me the main ways to use Multi Agent Task Orchestrator and its caveats
  • Tell me how to integrate Multi Agent Task Orchestrator into an existing project

Paste any of these into Claude Code and the Skill activates automatically.

📖 The original SKILL.md that Claude reads (expanded)

This body is the original text (English or Chinese) read by the AI (Claude). A Japanese translation is being added progressively.

Multi-Agent Task Orchestrator

Overview

A production-tested pattern for coordinating multiple AI agents through a single orchestrator. Instead of letting agents work independently (and conflict), one orchestrator decomposes tasks, routes them to specialists, prevents duplicate work, and verifies results before marking anything done. Battle-tested across 10,000+ tasks over 6 months.

When to Use This Skill

  • Use when you have 3+ specialized agents that need to coordinate on complex tasks
  • Use when agents are doing duplicate or conflicting work
  • Use when you need audit trails showing who did what and when
  • Use when agent output quality is inconsistent and needs verification gates

How It Works

Step 1: Define the Orchestrator Identity

The orchestrator must know what it IS and what it IS NOT. This prevents it from doing work instead of delegating:

You are the Task Orchestrator. You NEVER do specialized work yourself.
You decompose tasks, delegate to the right agent, prevent conflicts,
and verify quality before marking anything done.

WHAT YOU ARE NOT:
- NOT a code writer — delegate to code agents
- NOT a researcher — delegate to research agents
- NOT a tester — delegate to test agents

This "NOT-block" pattern reduces task drift by ~35% in production.
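As a sketch, the identity and NOT-block above can be assembled programmatically before being handed to the orchestrator as a system prompt. The constant and helper names here (ROLE, NOT_BLOCK, build_system_prompt) are illustrative, not part of the skill:

```python
# Sketch: assembling the orchestrator identity prompt with an explicit
# NOT-block. All names here are illustrative assumptions.

ROLE = (
    "You are the Task Orchestrator. You NEVER do specialized work yourself.\n"
    "You decompose tasks, delegate to the right agent, prevent conflicts,\n"
    "and verify quality before marking anything done."
)

NOT_BLOCK = {
    "code writer": "delegate to code agents",
    "researcher": "delegate to research agents",
    "tester": "delegate to test agents",
}

def build_system_prompt(role: str, not_block: dict) -> str:
    """Append one '- NOT a ...' line per forbidden role."""
    lines = [role, "", "WHAT YOU ARE NOT:"]
    lines += [f"- NOT a {what}: {instead}" for what, instead in not_block.items()]
    return "\n".join(lines)

print(build_system_prompt(ROLE, NOT_BLOCK))
```

Keeping the NOT-block in data rather than prose makes it easy to give every specialist agent its own refusal list, as the Best Practices section below suggests.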

Step 2: Build a Task Registry

Before assigning work, check if anyone is already doing this task:

import sqlite3
from difflib import SequenceMatcher

def check_duplicate(description, threshold=0.55):
    """Return an open task whose description is similar, else None."""
    conn = sqlite3.connect("task_registry.db")
    try:
        c = conn.cursor()
        c.execute(
            "SELECT id, description, agent, status FROM tasks "
            "WHERE status IN ('pending', 'in_progress')"
        )
        for task_id, desc, agent, _status in c.fetchall():
            # Fuzzy-match against every open task; 0.55 tolerates paraphrases.
            ratio = SequenceMatcher(None, description.lower(), desc.lower()).ratio()
            if ratio >= threshold:
                return {"id": task_id, "description": desc, "agent": agent}
        return None
    finally:
        conn.close()  # close even when returning early
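To see the registry in action, here is a self-contained sketch: a small variant of check_duplicate that takes an open connection (so it can run against an in-memory database), plus an illustrative schema and seed row. The table and column names follow the query in the skill:

```python
import sqlite3
from difflib import SequenceMatcher

# Variant of the skill's check_duplicate() that accepts a connection,
# so this example can run entirely in memory. Schema is illustrative.

def check_duplicate(conn, description, threshold=0.55):
    rows = conn.execute(
        "SELECT id, description, agent, status FROM tasks "
        "WHERE status IN ('pending', 'in_progress')"
    ).fetchall()
    for task_id, desc, agent, _status in rows:
        ratio = SequenceMatcher(None, description.lower(), desc.lower()).ratio()
        if ratio >= threshold:
            return {"id": task_id, "description": desc, "agent": agent}
    return None

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, description TEXT, "
    "agent TEXT, status TEXT)"
)
conn.execute(
    "INSERT INTO tasks (description, agent, status) VALUES (?, ?, ?)",
    ("Fix authentication bug", "security-reviewer", "in_progress"),
)

# A near-duplicate request is caught before any new assignment happens:
dup = check_duplicate(conn, "Fix the authentication bug")
print(dup)
```

An unrelated request (say, a documentation task) scores well below the 0.55 threshold and returns None, so it proceeds to routing.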

Step 3: Route Tasks to Specialists

Use keyword scoring to match tasks to the best agent:

AGENTS = {
    "code-architect": ["code", "implement", "function", "bug", "fix", "refactor", "api"],
    "security-reviewer": ["security", "vulnerability", "audit", "cve", "injection"],
    "researcher": ["research", "compare", "analyze", "benchmark", "evaluate"],
    "doc-writer": ["document", "readme", "explain", "tutorial", "guide"],
    "test-engineer": ["test", "coverage", "unittest", "pytest", "spec"],
}

def route_task(description):
    scores = {}
    for agent, keywords in AGENTS.items():
        scores[agent] = sum(1 for kw in keywords if kw in description.lower())
    return max(scores, key=scores.get) if max(scores.values()) > 0 else "code-architect"
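For instance, a request that touches several keyword lists resolves to the highest-scoring specialist (route_task is repeated below so the snippet runs standalone):

```python
# route_task() and AGENTS from above, repeated to keep this self-contained.
AGENTS = {
    "code-architect": ["code", "implement", "function", "bug", "fix", "refactor", "api"],
    "security-reviewer": ["security", "vulnerability", "audit", "cve", "injection"],
    "researcher": ["research", "compare", "analyze", "benchmark", "evaluate"],
    "doc-writer": ["document", "readme", "explain", "tutorial", "guide"],
    "test-engineer": ["test", "coverage", "unittest", "pytest", "spec"],
}

def route_task(description):
    scores = {}
    for agent, keywords in AGENTS.items():
        scores[agent] = sum(1 for kw in keywords if kw in description.lower())
    return max(scores, key=scores.get) if max(scores.values()) > 0 else "code-architect"

# "audit" and "injection" (2 hits) outscore the lone "api" hit (1):
print(route_task("Audit the API for injection vulnerabilities"))  # security-reviewer
```

One caveat of plain substring matching: plural forms do not match singular keywords (here "vulnerabilities" does not count as a hit for "vulnerability"), so a production router might stem or tokenize the description first.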

Step 4: Enforce Quality Gates

Agent output is a CLAIM. Test output is EVIDENCE.

After agent reports completion:
1. Were files actually modified? (git diff --stat)
2. Do tests pass? (npm test / pytest)
3. Were secrets introduced? (grep for API keys, tokens)
4. Did the build succeed? (npm run build)
5. Were only intended files touched? (scope check)

Mark done ONLY after ALL checks pass.
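The checklist above can be sketched as a two-part gate: a pure decision function over the gathered evidence, and a thin shell layer that collects it. The commands mirror the checklist; the secret-scan regex is illustrative only, and a real setup should use a dedicated scanner:

```python
import subprocess

# Sketch: evidence-based quality gate. evaluate_gate() is pure so the
# decision logic is testable without a git repo.

def evaluate_gate(diff: str, tests_exit: int, secret_hits: str) -> bool:
    """Done ONLY if all evidence checks pass."""
    if not diff.strip():
        return False      # no files modified: the claim has no evidence
    if tests_exit != 0:
        return False      # failing tests veto completion
    if secret_hits.strip():
        return False      # possible API key / token introduced
    return True

def quality_gate() -> bool:
    """Gather evidence from the working tree and evaluate it."""
    def sh(cmd):
        return subprocess.run(cmd, shell=True, capture_output=True, text=True)
    diff = sh("git diff --stat HEAD").stdout
    tests = sh("pytest -q").returncode
    secrets = sh("git diff HEAD | grep -E 'api[_-]?key|secret|token'").stdout
    return evaluate_gate(diff, tests, secrets)

# The decision logic on its own:
print(evaluate_gate(" 1 file changed", 0, ""))  # True: all evidence present
print(evaluate_gate("", 0, ""))                 # False: no actual changes
```

Build and scope checks (steps 4 and 5 of the checklist) would slot in as further arguments to evaluate_gate in the same style.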

Step 5: Run 30-Minute Heartbeats

Every 30 minutes, ask:
1. "What have I DELEGATED in the last 30 minutes?"
2. If nothing → open the task backlog and assign the next task
3. Check for idle agents (no message in >30min on assigned task)
4. Nudge idle agents or reassign their tasks
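The staleness check in the heartbeat can be sketched against the registry, assuming a last_update column holding a Unix timestamp (an extension of the schema shown earlier):

```python
import sqlite3
import time

# Sketch: find in-progress tasks with no update for over 30 minutes.
# Assumes a last_update column (not in the minimal schema above).

STALE_AFTER = 30 * 60  # seconds

def find_stale_tasks(conn, now=None):
    now = time.time() if now is None else now
    rows = conn.execute(
        "SELECT id, agent, last_update FROM tasks WHERE status = 'in_progress'"
    ).fetchall()
    return [(tid, agent) for tid, agent, ts in rows if now - ts > STALE_AFTER]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, agent TEXT, "
             "status TEXT, last_update REAL)")
now = time.time()
conn.execute("INSERT INTO tasks (agent, status, last_update) VALUES (?,?,?)",
             ("code-architect", "in_progress", now - 45 * 60))  # idle 45 min
conn.execute("INSERT INTO tasks (agent, status, last_update) VALUES (?,?,?)",
             ("doc-writer", "in_progress", now - 5 * 60))       # recently active

print(find_stale_tasks(conn, now))  # only the 45-minute-idle task
```

Each stale hit would then trigger step 4: nudge the agent or put the task back in the backlog for reassignment.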

Examples

Example 1: Delegating a Code Task

[ORCHESTRATOR -> code-architect] TASK: Add rate limiting to /api/users
SCOPE: src/middleware/rate-limit.ts only
VERIFICATION: npm test -- --grep "rate-limit"
DEADLINE: 30 minutes

Example 2: Handling a Duplicate

User asks: "Fix the login bug"
Registry check: Task #47 "Fix authentication bug" is IN_PROGRESS by security-reviewer
Decision: SKIP — similar task already assigned (78% match)
Action: Notify user of existing task, wait for completion

Best Practices

  • Always define NOT-blocks for every agent (what they must refuse to do)
  • Use SQLite for the task registry (lightweight, no server needed)
  • Set similarity threshold at 55% for anti-duplication (lower = too many false positives)
  • Require evidence-based quality gates (not just agent claims)
  • Log every delegation with: task ID, agent, scope, deadline, verification command
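The delegation log in the last practice can be as simple as one JSON line per assignment. The field names and file path below are illustrative:

```python
import json
import time

# Sketch: append-only delegation log, one JSON object per line.
# Field names and the .jsonl path are illustrative assumptions.

def log_delegation(path, task_id, agent, scope, deadline, verify_cmd):
    entry = {
        "ts": time.time(),
        "task_id": task_id,
        "agent": agent,
        "scope": scope,
        "deadline": deadline,
        "verify": verify_cmd,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Logging the delegation from Example 1:
log_delegation("delegations.jsonl", 48, "code-architect",
               "src/middleware/rate-limit.ts",
               "30 minutes", 'npm test -- --grep "rate-limit"')
```

A JSON-lines file keeps the audit trail greppable and trivially parseable, which pairs well with the SQLite registry used for live state.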

Common Pitfalls

  • Problem: Orchestrator starts doing work instead of delegating.
    Solution: Add explicit NOT-blocks and role boundaries.

  • Problem: Two agents modify the same file simultaneously.
    Solution: Task registry with file-level locking and a queue system.

  • Problem: Agent claims "done" without actual changes.
    Solution: Quality gate checks git diff before accepting completion.

  • Problem: Tasks pile up without progress.
    Solution: The 30-minute heartbeat catches stale assignments and reassigns them.

Related Skills

  • @code-review - For reviewing code changes after delegation
  • @test-driven-development - For ensuring quality in agent output
  • @project-management - For tracking multi-agent project progress

Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.