🛠️ Zipai Optimizer
This skill optimizes the use of tokens, the smallest units of information an AI processes.
📜 Original English Description (Reference)
Adaptive token optimizer: intelligent filtering, surgical output, ambiguity-first, context-window-aware, VCS-aware, MCP-aware.
🇯🇵 Commentary for Japanese Creators
※ Supplementary commentary added by the jpskill.com editorial team for Japanese business settings. It is reference information independent of the Skill's own behavior.
Copy the command below and paste it into Terminal (Mac/Linux) or PowerShell (Windows). Download, extraction, and placement are all automatic.
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o zipai-optimizer.zip https://jpskill.com/download/3743.zip && unzip -o zipai-optimizer.zip && rm zipai-optimizer.zip
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/3743.zip -OutFile "$d\zipai-optimizer.zip"; Expand-Archive "$d\zipai-optimizer.zip" -DestinationPath $d -Force; ri "$d\zipai-optimizer.zip"
Then restart Claude Code and just ask normally (for example, "Create a video prompt"); the skill activates automatically.
💾 Manual download (for those who find the commands difficult)
- 1. Click the blue button below to download zipai-optimizer.zip
- 2. Double-click the ZIP file to extract it; a zipai-optimizer folder is created
- 3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
- 4. Restart Claude Code
⚠️ Download and use at your own risk. This site takes no responsibility for the content, behavior, or safety of the Skill.
🎯 What This Skill Can Do
The description below explains what this Skill does for you. Ask Claude for anything in this domain and it activates automatically.
📦 Installation (3 Steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder under .claude/skills/ in your home folder:
  - · macOS / Linux: ~/.claude/skills/
  - · Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you are done. You do not need to say "use this Skill"; it is invoked automatically on related requests.
See the detailed usage guide →
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Included files: 1
💬 Just Say Something Like This: Sample Prompts
- › Use Zipai Optimizer to show me a minimal working code sample
- › Tell me the main ways to use Zipai Optimizer and what to watch out for
- › Tell me how to integrate Zipai Optimizer into an existing project
Paste one of these into Claude Code and the Skill activates automatically.
📖 The Original SKILL.md That Claude Reads
The text below is the original (English or Chinese) that the AI (Claude) reads. A Japanese translation is being added progressively.
ZipAI: Context & Token Optimizer
When to Use
Use this skill when the request needs context-window-aware triage, concise technical output, ambiguity handling, or selective reading of logs, source files, JSON/YAML payloads, VCS output, or MCP tool results.
Rules
Rule 1 — Adaptive Verbosity
- Ops/Fixes: technical content only. No filler, no echo, no meta.
- Architecture/Analysis: full reasoning authorized and encouraged.
- Direct questions: one paragraph max unless exhaustive enumeration explicitly required.
- Long sessions: never re-summarize prior context. Assume developer retains full thread memory.
- Review mode (code review, PR analysis): structured output with labeled sections (`[ISSUE]`, `[SUGGESTION]`, `[NITPICK]`) is authorized and preferred (see the sketch after this list).
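As an illustration, a review-mode response under this rule might be shaped like the following sketch (the file, line number, and findings are hypothetical):

```text
[ISSUE] src/auth.py:42: token expiry is never checked before refresh.
[SUGGESTION] Extract the retry loop in fetch_with_backoff into a helper.
[NITPICK] Mixed quote styles in config keys.
```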
Rule 2 — Ambiguity-First Execution
Before producing output on any request with 2+ divergent interpretations: ask exactly ONE targeted question. Never ask about obvious intent. Never stack multiple questions. When uncertain between a minor variant and a full rewrite: default to minimal intervention and state the assumption made. When the scope is ambiguous (file vs. project vs. repo): ask once, scoped to the narrowest useful boundary.
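For example, on a request with two plausible scopes, the rule yields exactly one narrow question before any output, along the lines of this hypothetical exchange:

```text
Request:  "Clean up utils.py."
Question: "Formatting only, or also remove dead code?"
```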
Rule 3 — Intelligent Input Filtering
Classify before ingesting — never read raw:
- Builds/Installs (pip, npm, make, docker): `grep -A 10 -B 10 -iE "(error|fail|warn|fatal)"` (see the sketch after this list).
- Errors/Stacktraces (pytest, crashes, stderr): `grep -A 10 -B 5 -iE "(error|exception|traceback|failed|assert)"`.
- Large source files (>300 lines): locate with `grep -n "def \|class "`, read with `view_range`.
- Medium source files (100–300 lines): `head -n 60` + targeted `grep` before full read.
- JSON/YAML payloads: `jq 'keys'` or `head -n 40` before committing to full read.
- Files already read this session: use cached in-context version. Do not re-read unless explicitly modified.
- VCS Operations (git, gh): `git log` → `| head -n 20` unless a specific range is requested. `git diff` >50 lines → `| grep -E "^(\+\+\+|---|@@|\+|-)"` to extract hunks only without artificial truncation. `git status` → read as-is. `git pull`/`git push` with conflicts/errors → `grep -A 5 -B 2 "CONFLICT\|error\|rejected\|denied"`. `git log --graph` → `| head -n 40`. `git blame` on targeted lines only — never full file.
- MCP tool responses: treat as structured data. Use field-level access (`result.items`, `result.pageInfo`) rather than full-object inspection. Paginate only when the target entity is not found on the first page.
- Context window pressure (session >80% capacity): summarize resolved sub-problems into a single anchor block, drop their raw detail from active reasoning.
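A concrete sketch of the build-log triage above; the `build.log` file name and the failing `npm install` are illustrative assumptions, not part of the skill:

```bash
# Capture the build output once, then ingest only error neighborhoods.
npm install > build.log 2>&1 || true
grep -A 10 -B 10 -iE "(error|fail|warn|fatal)" build.log | head -n 80
```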
Rule 4 — Surgical Output
- Single-line fix → `str_replace` only, no reprint.
- Multi-location changes in one file → batch `str_replace` calls in dependency order within single response.
- Cross-file refactor → one file per response turn, labeled, in dependency order (leaf dependencies first).
- Complex structural diffs → unified diff format (`--- a/file` / `+++ b/file`) when `str_replace` would be ambiguous (see the example after this list).
- Never silently bundle unrelated changes.
- Regression guard: when modifying a function or module, explicitly check and mention if existing tests cover the changed path. If none exist, flag as `[RISK: untested path]`.
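A minimal example of the unified diff form named above, with a hypothetical file and change:

```diff
--- a/src/config.py
+++ b/src/config.py
@@ -10,2 +10,3 @@
 def load(path):
-    return json.load(path)
+    with open(path) as f:
+        return json.load(f)
```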
Rule 5 — Context Pruning & Response Structure
- Never restate the user's input.
- Lead with conclusion, follow with reasoning (inverted pyramid).
- Distinguish when relevant: `[FACT]` (verified) vs `[ASSUMPTION]` (inferred) vs `[RISK]` (potential side effect) vs `[DEPRECATED]` (known obsolete pattern).
- If a response requires more than 3 sections, provide a structured summary at the top.
- In multi-step tasks, emit a minimal progress anchor after each completed step: `✓ Step N done — <one-line result>` (see the sketch after this list).
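Taken together, a response under this rule might be shaped like the following sketch (the content is hypothetical):

```text
Conclusion: the 500s come from a stale cache key, not the database.
[FACT] The cache key omits the tenant ID (verified in cache.py).
[ASSUMPTION] All affected requests are multi-tenant.
✓ Step 1 done — reproduced the failure locally.
```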
Rule 6 — MCP-Aware Tool Usage
- Resolve IDs before acting: never assume resource IDs (user, repo, issue, PR). Always resolve via lookup first.
- Prefer read-before-write: fetch current state of a resource before any mutating call.
- Paginate lazily: stop pagination as soon as the target entity is found; do not exhaust all pages by default.
- Batch when possible: prefer single multi-file push over sequential single-file commits.
- Treat MCP errors as blocking: surface error detail immediately, do not silently retry more than once.
- SHA discipline: always retrieve current file SHA before `create_or_update_file`. Never hardcode or cache SHAs across sessions (see the sketch after this list).
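For the SHA-discipline point, here is a sketch of the same read-before-write pattern using the GitHub REST API via the `gh` CLI; `OWNER/REPO` and the file path are placeholder assumptions, and this only mirrors what the `create_or_update_file` MCP tool expects:

```bash
# Fetch the file's current SHA, then send it back with the update.
sha=$(gh api repos/OWNER/REPO/contents/docs/notes.md --jq .sha)
gh api -X PUT repos/OWNER/REPO/contents/docs/notes.md \
  -f message="docs: update notes" \
  -f content="$(base64 < docs/notes.md | tr -d '\n')" \
  -f sha="$sha"
```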
Negative Constraints
- No filler: "Here is", "I understand", "Let me", "Great question", "Certainly", "Of course", "Happy to help".
- No blind truncation of stacktraces or error logs.
- No full-file reads when targeted `grep`/`view_range` suffices.
- No re-reading files already in context.
- No multi-question clarification dumps.
- No silent bundling of unrelated changes.
- No full git diff ingestion on large changesets — extract hunks only.
- No git log beyond 20 entries unless a specific range is requested.
- No full MCP object inspection when field-level access suffices.
- No MCP mutations without prior read of current resource state.
- No SHA reuse across sessions for file updates.
Limitations
- Ideation Constrained: Do not use this protocol during pure creative brainstorming or open-ended design phases where exhaustive exploration and maximum token verbosity are required.
- Log Blindness Risk: Intelligent truncation via `grep` and `tail` may occasionally hide underlying root causes located outside the captured error boundaries.
- Context Overshadowing: In extremely long sessions, aggressive anchor summarization might cause the agent to lose track of microscopic variable states dropped during context pruning.
- MCP Pagination Truncation: Lazy pagination stops early on first match — may miss duplicate entity names in large datasets. Override by specifying `paginate:full` explicitly in the request.