minimind
Train a small GPT model from scratch in 2 hours — understand LLM architecture by building one. Use when: learning how LLMs work internally, training custom small language models, educational AI projects, fine-tuning experiments.
Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). It handles download, extraction, and placement automatically.
macOS / Linux:
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o minimind.zip https://jpskill.com/download/15126.zip && unzip -o minimind.zip && rm minimind.zip
Windows (PowerShell):
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/15126.zip -OutFile "$d\minimind.zip"; Expand-Archive "$d\minimind.zip" -DestinationPath $d -Force; ri "$d\minimind.zip"
When done, restart Claude Code. You don't need to invoke the skill explicitly; any related request triggers it automatically.
💾 Manual download (if the commands are inconvenient)
- 1. Use the download button on this page to get minimind.zip
- 2. Double-click the ZIP file to extract it, producing a minimind folder
- 3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
- 4. Restart Claude Code
🎯 What this Skill does
The description below explains what this Skill will do for you. It activates automatically when you give Claude a request in this domain.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically)
- 3. Put the extracted folder in .claude/skills/ under your home folder
- · macOS / Linux: ~/.claude/skills/
- · Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you're done. You don't need to say "use this Skill"; it is invoked automatically for related requests.
- Last updated: 2026-05-18
- Retrieved: 2026-05-18
- Bundled files: 1
📖 SKILL.md (the original text Claude reads)
The text below is the original SKILL.md bundled with the skill.
MiniMind
Overview
Train a 64M parameter GPT language model from scratch in ~2 hours on a single GPU. Understand every component of LLM architecture by building one yourself — from tokenizer training to RLHF alignment. The architecture tracks Qwen3/Qwen3-MoE, so patterns you learn here apply directly to full-scale models.
Source: jingyaogong/minimind (45k+ stars)
Instructions
1. Clone and install
git clone https://github.com/jingyaogong/minimind.git
cd minimind
pip install -r requirements.txt
python download_data.py
2. Pretrain on raw text
python train_pretrain.py \
--data_path ./data/pretrain_data.jsonl \
--model_config ./config/minimind-3.yaml \
--epochs 2 \
--batch_size 32 \
--learning_rate 5e-4
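Conceptually, pretraining minimizes next-token cross-entropy: at every position, the model's logits over the vocabulary are scored against the token that actually comes next. A minimal sketch in plain Python with a toy vocabulary (illustrative only, not MiniMind's training loop):

```python
import math

def next_token_loss(logits, target_id):
    """Cross-entropy for one position: -log softmax(logits)[target]."""
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target_id]

# A training step averages this over every position in the batch:
# for position t, logits = model(tokens[:t]) and target = tokens[t].
logits = [2.0, 0.5, -1.0]            # scores over a toy 3-token vocabulary
loss = next_token_loss(logits, 0)    # low loss: token 0 is already likely
```

Lowering this loss across billions of positions is all that "learns language patterns" means in step 2.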
3. Supervised fine-tuning (SFT)
python train_sft.py \
--pretrained_model ./checkpoints/pretrain/best.pt \
--data_path ./data/sft_data.jsonl \
--epochs 3 \
--batch_size 16 \
--learning_rate 1e-5
4. RLHF alignment (optional)
python train_dpo.py \
--model_path ./checkpoints/sft/best.pt \
--preference_data ./data/dpo_pairs.jsonl
Multiple alignment methods: DPO (simplest), PPO (classic), GRPO (DeepSeek-style).
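The DPO objective behind step 4 fits in a few lines. This is a simplified scalar version (the real loss sums per-token log-probabilities over each full response):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO: push the policy's (chosen - rejected) margin above the frozen
    reference model's margin, scaled by beta; loss is -log sigmoid(margin)."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

At initialization the policy equals the reference, the margin is zero, and the loss starts at log 2; training lowers it by making the preferred response relatively more likely.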
5. Run inference
python inference.py --model_path ./checkpoints/dpo/best.pt
Or start an OpenAI-compatible API server:
python api_server.py --model_path ./checkpoints/sft/best.pt --port 8000
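Assuming the server follows the standard OpenAI chat-completions convention (the exact route and model name here are assumptions, not taken from the repo), a client request can be built like this:

```python
import json
import urllib.request

def build_chat_request(prompt, model="minimind", base_url="http://localhost:8000"):
    """Build an OpenAI-style chat-completions request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("What is attention?")
# urllib.request.urlopen(req) would send it once api_server.py is running.
```

Because the API shape is OpenAI-compatible, existing OpenAI client libraries should also work by pointing their base URL at port 8000.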
Examples
Example 1: Full training pipeline on a single GPU
Train a 64M parameter model from scratch on an NVIDIA RTX 3090:
git clone https://github.com/jingyaogong/minimind.git && cd minimind
pip install -r requirements.txt
python download_data.py
python train_pretrain.py # ~1h — learns language patterns
python train_sft.py # ~30min — learns to follow instructions
python train_dpo.py # ~20min — aligns with human preferences
python inference.py --model_path ./checkpoints/dpo/best.pt
# > What is machine learning?
# Machine learning is a subset of artificial intelligence where systems
# learn patterns from data rather than being explicitly programmed...
Total cost: ~$3 in GPU rental or ~2 hours on your own hardware.
Example 2: LoRA fine-tuning for a specific domain
Fine-tune the base model on medical Q&A data using LoRA (parameter-efficient):
python train_lora.py \
--base_model ./checkpoints/sft/best.pt \
--data_path ./data/medical_qa.jsonl \
--lora_rank 8 \
--lora_alpha 16
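The idea behind these flags: the frozen base weight W is augmented with a low-rank update B·A scaled by alpha/rank, and B is zero-initialized so training starts exactly at the base model. A toy sketch of the math (not MiniMind's implementation):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d, r = 4, 2        # hidden size and LoRA rank (the command above uses rank 8)
alpha = 16
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]     # frozen base
A = [[random.gauss(0, 0.02) for _ in range(d)] for _ in range(r)]  # trained, r x d
B = [[0.0] * r for _ in range(d)]                                  # trained, zero-init

delta = matmul(B, A)                    # d x d update, rank at most r
scale = alpha / r
W_eff = [[w + scale * dw for w, dw in zip(wr, dr)] for wr, dr in zip(W, delta)]
```

Only A and B (2·d·r values) are trained instead of the full d·d matrix, which is why LoRA fits domain fine-tuning into much less memory.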
SFT data format:
{
"conversations": [
{"role": "user", "content": "What are the symptoms of Type 2 diabetes?"},
{"role": "assistant", "content": "Common symptoms include increased thirst, frequent urination, blurred vision, slow-healing wounds, and unexplained weight loss..."}
]
}
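A quick validator for one line of this JSONL format (the user-first / assistant-last check is an assumption about what the trainer expects):

```python
import json

def parse_sft_line(line):
    """Parse one SFT example and sanity-check its conversation structure."""
    example = json.loads(line)
    convo = example["conversations"]
    roles = [turn["role"] for turn in convo]
    assert roles[0] == "user" and roles[-1] == "assistant", "bad turn order"
    return convo

line = json.dumps({"conversations": [
    {"role": "user", "content": "What are the symptoms of Type 2 diabetes?"},
    {"role": "assistant", "content": "Increased thirst, frequent urination..."},
]})
convo = parse_sft_line(line)
```

Running a check like this over a custom dataset before training catches malformed rows early, when they are cheap to fix.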
Example 3: Python API for programmatic inference
from model import MiniMindModel
from tokenizer import MiniMindTokenizer
model = MiniMindModel.from_pretrained("./checkpoints/sft/best.pt")
tokenizer = MiniMindTokenizer("./tokenizer/tokenizer.model")
prompt = "Explain how attention works in transformers"
input_ids = tokenizer.encode(prompt)
output = model.generate(input_ids, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(output))
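The temperature argument to generate controls how sharply the model samples. A plain-Python sketch of temperature plus optional top-k sampling (illustrative, not MiniMind's internal sampler):

```python
import math
import random

def sample_next(logits, temperature=0.7, top_k=None, rng=random):
    """Sample one token id from a logit vector with temperature and top-k."""
    if top_k is not None:
        cutoff = sorted(logits, reverse=True)[top_k - 1]
        logits = [l if l >= cutoff else float("-inf") for l in logits]
    scaled = [l / temperature for l in logits]   # lower temp = sharper
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    r, acc = rng.random(), 0.0
    for i, w in enumerate(weights):
        acc += w / total
        if r < acc:
            return i
    return len(weights) - 1
```

Temperatures below 1.0 concentrate probability on the top tokens; top-k hard-limits the candidate set before sampling.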
Guidelines
- GPU requirements: 64M model needs ~4GB VRAM (RTX 3090 recommended); even a GTX 1660 works (slower)
- MoE variant available at 198M total / 64M active parameters — needs ~8GB VRAM
- Pre-built datasets on HuggingFace: pretrain corpus, SFT pairs, DPO preference pairs, tool-use examples
- Multi-GPU supported via DDP and DeepSpeed for larger variants
- Not production-grade: MiniMind is an educational tool. It won't match GPT-4, but it teaches you every component: tokenizer, embeddings, attention, FFN, training dynamics, RLHF
- Architecture uses GQA attention, SwiGLU FFN, RMSNorm, and RoPE with YaRN — matching production model patterns
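Two of those components, RMSNorm and the SwiGLU FFN gate, are simple enough to sketch in plain Python (a conceptual sketch, not MiniMind's code):

```python
import math

def rmsnorm(x, weight, eps=1e-6):
    """RMSNorm: divide by the root-mean-square; unlike LayerNorm,
    no mean subtraction and no bias."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

def silu(v):
    return v / (1.0 + math.exp(-v))

def swiglu(x, W_gate, W_up):
    """SwiGLU gate: silu(x @ W_gate) * (x @ W_up), elementwise."""
    gate = [sum(xi * wi for xi, wi in zip(x, col)) for col in zip(*W_gate)]
    up = [sum(xi * wi for xi, wi in zip(x, col)) for col in zip(*W_up)]
    return [silu(g) * u for g, u in zip(gate, up)]
```

These are the same normalization and FFN choices used by Llama- and Qwen-family models, which is why patterns learned here transfer to full-scale architectures.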