🐍 Mamba Architecture (O(n) complexity, 5× faster)
A Skill that explains Mamba, a state-space model alternative to Transformers. Handles sequences of up to a million tokens. Aimed at researchers.
📺 Watch a video first (YouTube)
▶ [Shocking] The ultimate AI agent "Claude Code": latest features, usage, and super-practical techniques for streamlining programming with AI, explained! ↗
※ A video selected for reference by the jpskill.com editorial team. Its content may not exactly match the Skill's behavior.
📜 Original English description (for reference)
State-space model with O(n) complexity vs Transformers' O(n²). 5× faster inference, million-token sequences, no KV cache. Selective SSM with hardware-aware design. Mamba-1 (d_state=16) and Mamba-2 (d_state=128, multi-head). Models 130M-2.8B on HuggingFace.
🇯🇵 Commentary for Japanese creators
A Skill that explains Mamba, a state-space model alternative to Transformers. Handles sequences of up to a million tokens. Aimed at researchers.
※ Supplementary commentary by the jpskill.com editorial team for Japanese business settings. It is reference information independent of the Skill's actual behavior.
⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety.
🎯 What this Skill can do
The description below shows what this Skill will do for you. It activates automatically when you give Claude a request in this domain.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Change the file extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder in .claude/skills/ under your home folder:
  - macOS / Linux: ~/.claude/skills/
  - Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you're done. Even without saying "use this Skill...", it is invoked automatically for related requests.
See the detailed usage guide →
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Bundled files: 4
💬 Sample prompts: just ask like this
- › Use Mamba Architecture (O(n) complexity, 5× faster) to show me a minimal working code sample
- › Explain the main usage and caveats of Mamba Architecture (O(n) complexity, 5× faster)
- › Show me how to integrate Mamba Architecture (O(n) complexity, 5× faster) into an existing project
Just paste these into Claude Code and this Skill activates automatically.
📖 The original SKILL.md that Claude reads (contents expanded below)
This body text is the original (in English or Chinese) for the AI (Claude) to read. Japanese translations are being added progressively.
Mamba - Selective State Space Models
Quick start
Mamba is a state-space model architecture achieving O(n) linear complexity for sequence modeling.
Installation:
# Install causal-conv1d (optional, for efficiency); quote the spec so the shell doesn't treat > as a redirect
pip install "causal-conv1d>=1.4.0"
# Install Mamba
pip install mamba-ssm
# Or both together (quotes keep the shell from expanding the brackets)
pip install "mamba-ssm[causal-conv1d]"
Prerequisites: Linux, NVIDIA GPU, PyTorch 1.12+, CUDA 11.6+
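A quick sanity check that the environment meets these prerequisites (a minimal sketch; assumes PyTorch is already installed):
import torch
print(torch.__version__)          # want 1.12+
print(torch.version.cuda)         # want 11.6+
print(torch.cuda.is_available())  # want True (NVIDIA GPU visible)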
Basic usage (Mamba block):
import torch
from mamba_ssm import Mamba

batch, length, dim = 2, 64, 16
x = torch.randn(batch, length, dim).to("cuda")
model = Mamba(
    d_model=dim,  # Model dimension
    d_state=16,   # SSM state dimension
    d_conv=4,     # Conv1d kernel size
    expand=2,     # Expansion factor
).to("cuda")
y = model(x)  # O(n) complexity!
assert y.shape == x.shape
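Because the block maps a (batch, length, d_model) tensor to a tensor of the same shape, it stacks like any other sequence layer. A minimal sketch continuing from the snippet above (real Mamba models also interleave residual connections and normalization, omitted here):
import torch.nn as nn
# Four Mamba blocks back to back; each preserves the input shape
stack = nn.Sequential(
    *[Mamba(d_model=dim, d_state=16, d_conv=4, expand=2) for _ in range(4)]
).to("cuda")
assert stack(x).shape == x.shape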
Common workflows
Workflow 1: Language model with Mamba-2
Complete LM with generation:
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
from mamba_ssm.models.config_mamba import MambaConfig
import torch

# Configure Mamba-2 LM
config = MambaConfig(
    d_model=1024,        # Hidden dimension
    n_layer=24,          # Number of layers
    vocab_size=50277,    # Vocabulary size
    ssm_cfg=dict(
        layer="Mamba2",  # Use Mamba-2
        d_state=128,     # Larger state for Mamba-2
        headdim=64,      # Head dimension
        ngroups=1,       # Number of groups
    ),
)
model = MambaLMHeadModel(config, device="cuda", dtype=torch.float16)

# Generate text
input_ids = torch.randint(0, 1000, (1, 20), device="cuda", dtype=torch.long)
output = model.generate(
    input_ids=input_ids,
    max_length=100,
    temperature=0.7,
    top_p=0.9,
)
Workflow 2: Use pretrained Mamba models
Load from HuggingFace:
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

# Load pretrained model
model_name = "state-spaces/mamba-2.8b"
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")  # Mamba checkpoints use the GPT-NeoX tokenizer
model = MambaLMHeadModel.from_pretrained(model_name, device="cuda", dtype=torch.float16)

# Generate
prompt = "The future of AI is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
output_ids = model.generate(
    input_ids=input_ids,
    max_length=200,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.2,
)
generated_text = tokenizer.decode(output_ids[0])
print(generated_text)
Available models:
- state-spaces/mamba-130m
- state-spaces/mamba-370m
- state-spaces/mamba-790m
- state-spaces/mamba-1.4b
- state-spaces/mamba-2.8b
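For a first smoke test, the 130M checkpoint keeps download time and VRAM needs low; the loading call is the same as above (a sketch):
model = MambaLMHeadModel.from_pretrained(
    "state-spaces/mamba-130m", device="cuda", dtype=torch.float16
)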
Workflow 3: Mamba-1 vs Mamba-2
Mamba-1 (smaller state):
from mamba_ssm import Mamba

model = Mamba(
    d_model=256,
    d_state=16,  # Smaller state dimension
    d_conv=4,
    expand=2,
).to("cuda")
Mamba-2 (multi-head, larger state):
from mamba_ssm import Mamba2

model = Mamba2(
    d_model=256,
    d_state=128,  # Larger state dimension
    d_conv=4,
    expand=2,
    headdim=64,   # Head dimension for multi-head
    ngroups=1,    # Parallel groups
).to("cuda")
Key differences:
- State size: Mamba-1 (d_state=16) vs Mamba-2 (d_state=128)
- Architecture: Mamba-2 has multi-head structure
- Normalization: Mamba-2 uses RMSNorm
- Distributed: Mamba-2 supports tensor parallelism
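Both blocks expose the same (batch, length, d_model) interface, so the two variants can be swapped and compared directly. A rough side-by-side sketch (the parameter count is purely illustrative; assumes a CUDA device):
import torch
from mamba_ssm import Mamba, Mamba2

x = torch.randn(2, 256, 256, device="cuda")
m1 = Mamba(d_model=256, d_state=16, d_conv=4, expand=2).to("cuda")
m2 = Mamba2(d_model=256, d_state=128, d_conv=4, expand=2, headdim=64, ngroups=1).to("cuda")
for name, m in [("Mamba-1", m1), ("Mamba-2", m2)]:
    n_params = sum(p.numel() for p in m.parameters())
    print(f"{name}: {n_params:,} params, output shape {tuple(m(x).shape)}")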
Workflow 4: Benchmark vs Transformers
Generation speed comparison (the benchmark script ships in the state-spaces/mamba repository, so run these from a clone of it):
# Benchmark Mamba
python benchmarks/benchmark_generation_mamba_simple.py \
    --model-name "state-spaces/mamba-2.8b" \
    --prompt "The future of machine learning is" \
    --topp 0.9 --temperature 0.7 --repetition-penalty 1.2
# Benchmark a Transformer of the same size
python benchmarks/benchmark_generation_mamba_simple.py \
    --model-name "EleutherAI/pythia-2.8b" \
    --prompt "The future of machine learning is" \
    --topp 0.9 --temperature 0.7 --repetition-penalty 1.2
Expected results:
- Mamba: 5× faster inference
- Memory: No KV cache needed
- Scaling: Linear with sequence length
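To eyeball the linear-scaling claim in a few lines rather than via the benchmark scripts, a rough timing sketch (model size and lengths are arbitrary; not a rigorous benchmark):
import time
import torch
from mamba_ssm import Mamba

model = Mamba(d_model=512, d_state=16, d_conv=4, expand=2).to("cuda")
with torch.no_grad():
    model(torch.randn(1, 256, 512, device="cuda"))  # warm up CUDA kernels
    for length in (1024, 2048, 4096, 8192):
        x = torch.randn(1, length, 512, device="cuda")
        torch.cuda.synchronize()
        t0 = time.time()
        model(x)
        torch.cuda.synchronize()
        # forward time should grow roughly linearly with length
        print(f"length={length}: {time.time() - t0:.4f}s")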
When to use vs alternatives
Use Mamba when:
- Need long sequences (100K+ tokens)
- Want faster inference than Transformers
- Memory-constrained (no KV cache)
- Building streaming applications
- Linear scaling important
Advantages:
- O(n) complexity: Linear vs quadratic
- 5× faster inference: No attention overhead
- No KV cache: Lower memory usage
- Million-token sequences: Hardware-efficient
- Streaming: Constant memory per token
Use alternatives instead:
- Transformers: Need best-in-class performance, have compute
- RWKV: Want RNN+Transformer hybrid
- RetNet: Need retention-based architecture
- Hyena: Want convolution-based approach
Common issues
Issue: CUDA out of memory
Reduce the batch size or sequence length, or load the model in fp16:
model = MambaLMHeadModel(config, device="cuda", dtype=torch.float16)
For training-time savings, gradient checkpointing (e.g. torch.utils.checkpoint over the backbone blocks) trades compute for memory; note that MambaLMHeadModel is a plain nn.Module, so the transformers-style gradient_checkpointing_enable() helper should not be assumed to exist on it.
Issue: Slow installation
pip fetches a prebuilt wheel when one matches your environment; if it falls back to a source build that cannot see the already-installed PyTorch, pass --no-build-isolation:
pip install mamba-ssm --no-build-isolation
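The repository README also documents build-control environment variables; for example, to force compiling from source when no wheel matches your CUDA/PyTorch combination:
MAMBA_FORCE_BUILD=TRUE pip install mamba-ssm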
Issue: Missing causal-conv1d
Install it separately (quoted so the shell doesn't interpret >):
pip install "causal-conv1d>=1.4.0"
Issue: Model not loading from HuggingFace
Use MambaLMHeadModel.from_pretrained (not AutoModel):
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
model = MambaLMHeadModel.from_pretrained("state-spaces/mamba-2.8b")
Advanced topics
Selective SSM: See references/selective-ssm.md for mathematical formulation, state-space equations, and how selectivity enables O(n) complexity.
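For orientation, the core recurrence is the zero-order-hold discretized state-space update from the Mamba paper, where "selective" means that $\Delta$, $B$, and $C$ are computed from the input $x_t$ rather than fixed:
$$h_t = \bar{A}\,h_{t-1} + \bar{B}\,x_t, \qquad y_t = C\,h_t$$
$$\bar{A} = \exp(\Delta A), \qquad \bar{B} = (\Delta A)^{-1}\bigl(\exp(\Delta A) - I\bigr)\,\Delta B$$
Because the state $h_t$ has a fixed size, each generated token costs O(1) time and memory, which is where the O(n) total complexity and the "no KV cache" property come from.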
Mamba-2 architecture: See references/mamba2-details.md for multi-head structure, tensor parallelism, and distributed training setup.
Performance optimization: See references/performance.md for hardware-aware design, CUDA kernels, and memory efficiency techniques.
Hardware requirements
- GPU: NVIDIA with CUDA 11.6+
- VRAM:
  - 130M model: 2 GB
  - 370M model: 4 GB
  - 790M model: 8 GB
  - 1.4B model: 14 GB
  - 2.8B model: 28 GB (FP16)
- Inference: 5× faster than Transformers
- Memory: No KV cache (lower than Transformers)
Performance (vs Transformers):
- Speed: 5× faster inference
- Memory: 50% less (no KV cache)
- Scaling: Linear vs quadratic
Resources
- Paper (Mamba-1): https://arxiv.org/abs/2312.00752 (Dec 2023)
- Paper (Mamba-2): https://arxiv.org/abs/2405.21060 (May 2024)
- GitHub: https://github.com/state-spaces/mamba ⭐ 13,000+
- Models: https://huggingface.co/state-spaces
- Docs: Repository README and wiki
Bundled files
※ List of files contained in the ZIP. Besides the main `SKILL.md`, it may include reference materials, samples, and scripts.
- 📄 SKILL.md (7,368 bytes)
- 📎 references/architecture-details.md (5,456 bytes)
- 📎 references/benchmarks.md (8,120 bytes)
- 📎 references/training-guide.md (9,012 bytes)