🛠️ ML Pipeline Workflow
A Skill that automates the entire machine-learning workflow, from data preparation through model deployment, for efficient operations.
📜 Original English Description (Reference)
Complete end-to-end MLOps pipeline orchestration from data preparation through model deployment.
🇯🇵 Notes for Japanese Creators
A Skill that automates the entire machine-learning workflow, from data preparation through model deployment, for efficient operations.
※ This supplementary explanation was written by the jpskill.com editorial team for Japanese business users. It is reference information, independent of the Skill's actual behavior.
Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). Download → extract → install is fully automatic.
macOS / Linux:

```bash
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o ml-pipeline-workflow.zip https://jpskill.com/download/3165.zip && unzip -o ml-pipeline-workflow.zip && rm ml-pipeline-workflow.zip
```

Windows (PowerShell):

```powershell
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/3165.zip -OutFile "$d\ml-pipeline-workflow.zip"; Expand-Archive "$d\ml-pipeline-workflow.zip" -DestinationPath $d -Force; ri "$d\ml-pipeline-workflow.zip"
```
When it finishes, restart Claude Code → then just make a normal request such as "Set up an ML pipeline" and the Skill activates automatically.
💾 Manual Download (if the commands are too difficult)
1. Click the blue button below to download ml-pipeline-workflow.zip
2. Double-click the ZIP file to extract it → an ml-pipeline-workflow folder appears
3. Move that folder to C:\Users\あなたの名前\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
4. Restart Claude Code
⚠️ Download and use at your own risk. This site accepts no responsibility for the Skill's content, behavior, or safety.
🎯 What This Skill Can Do
The description below explains what this Skill will do for you. Claude invokes it automatically when you make a request in this domain.
📦 Installation (3 Steps)
1. Click the "Download" button above to get the .skill file
2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
3. Place the extracted folder in .claude/skills/ under your home folder:
   - macOS / Linux: ~/.claude/skills/
   - Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't have to say "use this Skill" — it is invoked automatically for related requests.
See the detailed usage guide →
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Bundled files: 1
💬 Just Say This — Sample Prompts
- › Use ML Pipeline Workflow to show me minimal sample code
- › Explain the main ways to use ML Pipeline Workflow and its caveats
- › Show me how to integrate ML Pipeline Workflow into an existing project
Just paste one of these into Claude Code and the Skill activates automatically.
📖 The Original SKILL.md That Claude Reads (Expanded)
This body text is the original (English or Chinese) that the AI (Claude) reads. Japanese translations are being added progressively.
ML Pipeline Workflow
Complete end-to-end MLOps pipeline orchestration from data preparation through model deployment.
Do not use this skill when
- The task is unrelated to ML pipeline workflows
- You need a different domain or tool outside this scope
Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open resources/implementation-playbook.md.
Overview
This skill provides comprehensive guidance for building production ML pipelines that handle the full lifecycle: data ingestion → preparation → training → validation → deployment → monitoring.
Use this skill when
- Building new ML pipelines from scratch
- Designing workflow orchestration for ML systems
- Implementing data → model → deployment automation
- Setting up reproducible training workflows
- Creating DAG-based ML orchestration
- Integrating ML components into production systems
What This Skill Provides
Core Capabilities
Pipeline Architecture
- End-to-end workflow design
- DAG orchestration patterns (Airflow, Dagster, Kubeflow)
- Component dependencies and data flow
- Error handling and retry strategies

Data Preparation
- Data validation and quality checks
- Feature engineering pipelines
- Data versioning and lineage
- Train/validation/test splitting strategies

Model Training
- Training job orchestration
- Hyperparameter management
- Experiment tracking integration
- Distributed training patterns

Model Validation
- Validation frameworks and metrics
- A/B testing infrastructure
- Performance regression detection
- Model comparison workflows

Deployment Automation
- Model serving patterns
- Canary deployments
- Blue-green deployment strategies
- Rollback mechanisms
Reference Documentation
See the references/ directory for detailed guides:
- data-preparation.md - Data cleaning, validation, and feature engineering
- model-training.md - Training workflows and best practices
- model-validation.md - Validation strategies and metrics
- model-deployment.md - Deployment patterns and serving architectures
Assets and Templates
The assets/ directory contains:
- pipeline-dag.yaml.template - DAG template for workflow orchestration
- training-config.yaml - Training configuration template
- validation-checklist.md - Pre-deployment validation checklist
Usage Patterns
Basic Pipeline Setup
```python
# 1. Define pipeline stages
stages = [
    "data_ingestion",
    "data_validation",
    "feature_engineering",
    "model_training",
    "model_validation",
    "model_deployment",
]

# 2. Configure dependencies
# See assets/pipeline-dag.yaml.template for full example
```
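One way to wire up the dependencies for the stage names above is a plain Python mapping that mirrors the YAML template. This is a hypothetical sketch, not the template's actual schema:

```python
# Hypothetical in-code mirror of assets/pipeline-dag.yaml.template:
# each stage maps to the list of stages it depends on.
dependencies = {
    "data_ingestion": [],
    "data_validation": ["data_ingestion"],
    "feature_engineering": ["data_validation"],
    "model_training": ["feature_engineering"],
    "model_validation": ["model_training"],
    "model_deployment": ["model_validation"],
}

# Sanity check: every dependency must itself be a declared stage.
for stage, deps in dependencies.items():
    for dep in deps:
        assert dep in dependencies, f"unknown dependency {dep!r} for {stage!r}"
```

Keeping the dependency graph in one data structure makes it easy to validate before handing it to an orchestrator.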
Production Workflow
Data Preparation Phase
- Ingest raw data from sources
- Run data quality checks
- Apply feature transformations
- Version processed datasets

Training Phase
- Load versioned training data
- Execute training jobs
- Track experiments and metrics
- Save trained models

Validation Phase
- Run validation test suite
- Compare against baseline
- Generate performance reports
- Approve for deployment

Deployment Phase
- Package model artifacts
- Deploy to serving infrastructure
- Configure monitoring
- Validate production traffic
Best Practices
Pipeline Design
- Modularity: Each stage should be independently testable
- Idempotency: Re-running stages should be safe
- Observability: Log metrics at every stage
- Versioning: Track data, code, and model versions
- Failure Handling: Implement retry logic and alerting
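The idempotency and failure-handling principles above can be sketched as a small stage runner. All names here (`run_stage`, `output_exists`) are hypothetical helpers for illustration, not part of this skill's assets:

```python
import time


def run_stage(name, fn, output_exists, retries=3, backoff_s=1.0):
    """Run one pipeline stage idempotently with simple retry logic.

    `output_exists` lets a re-run skip work that already succeeded,
    which is what makes re-execution safe (idempotency).
    """
    if output_exists():
        print(f"[{name}] output present, skipping (idempotent re-run)")
        return
    for attempt in range(1, retries + 1):
        try:
            fn()
            print(f"[{name}] succeeded on attempt {attempt}")
            return
        except Exception as exc:
            print(f"[{name}] attempt {attempt} failed: {exc}")
            if attempt == retries:
                raise  # surface the failure so alerting can fire
            time.sleep(backoff_s * attempt)  # linear backoff
```

A real orchestrator provides this (Airflow's `retries`, for example), but the same shape is useful when testing a stage in isolation.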
Data Management
- Use data validation libraries (Great Expectations, TFX)
- Version datasets with DVC or similar tools
- Document feature engineering transformations
- Maintain data lineage tracking
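The versioning idea behind tools like DVC can be illustrated with content hashing: the same data always yields the same version id, and any change yields a new one. This is a toy sketch over in-memory rows; real tools hash files and track them in git:

```python
import hashlib
import json


def dataset_version(rows):
    """Derive a stable version id from dataset content (a stand-in for
    DVC-style versioning; real tools hash files, not in-memory rows)."""
    payload = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]


v1 = dataset_version([{"x": 1}, {"x": 2}])
v2 = dataset_version([{"x": 1}, {"x": 2}])
v3 = dataset_version([{"x": 1}, {"x": 3}])
assert v1 == v2  # same content -> same version
assert v1 != v3  # any change -> new version
```

Content-addressed versions also give you lineage for free: a model tagged with the version id of its training data is reproducible.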
Model Operations
- Separate training and serving infrastructure
- Use model registries (MLflow, Weights & Biases)
- Implement gradual rollouts for new models
- Monitor model performance drift
- Maintain rollback capabilities
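The registry and rollback points above can be made concrete with a toy registry. This is a hypothetical stand-in for MLflow or W&B, showing only the key property: rollback is a metadata change, not a redeploy of artifacts:

```python
class ModelRegistry:
    """Toy registry: tracks versions per model and which one is serving,
    so rollback is just a pointer update."""

    def __init__(self):
        self.versions = {}  # model name -> ordered list of version ids
        self.serving = {}   # model name -> currently served version

    def register(self, name, version):
        self.versions.setdefault(name, []).append(version)

    def promote(self, name, version):
        assert version in self.versions.get(name, []), "unknown version"
        self.serving[name] = version

    def rollback(self, name):
        history = self.versions[name]
        idx = history.index(self.serving[name])
        assert idx > 0, "no earlier version to roll back to"
        self.serving[name] = history[idx - 1]
```

Separating "registered" from "serving" is what enables gradual rollouts: a new version can exist in the registry long before it takes traffic.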
Deployment Strategies
- Start with shadow deployments
- Use canary releases for validation
- Implement A/B testing infrastructure
- Set up automated rollback triggers
- Monitor latency and throughput
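A canary release needs a stable traffic split: the same request id should always hit the same model, so results are comparable. A minimal deterministic router (hypothetical helper; in production this usually lives in the load balancer or service mesh):

```python
import hashlib


def route(request_id, canary_fraction=0.05):
    """Deterministically send a small, stable slice of traffic to the
    canary model by hashing the request id into 100 buckets."""
    bucket = int(hashlib.md5(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_fraction * 100 else "stable"
```

Because the split is a pure function of the id, a given user sees a consistent model, and the canary's metrics can be compared against the stable cohort.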
Integration Points
Orchestration Tools
- Apache Airflow: DAG-based workflow orchestration
- Dagster: Asset-based pipeline orchestration
- Kubeflow Pipelines: Kubernetes-native ML workflows
- Prefect: Modern dataflow automation
Experiment Tracking
- MLflow for experiment tracking and model registry
- Weights & Biases for visualization and collaboration
- TensorBoard for training metrics
Deployment Platforms
- AWS SageMaker for managed ML infrastructure
- Google Vertex AI for GCP deployments
- Azure ML for Azure cloud
- Kubernetes + KServe for cloud-agnostic serving
Progressive Disclosure
Start with the basics and gradually add complexity:
- Level 1: Simple linear pipeline (data → train → deploy)
- Level 2: Add validation and monitoring stages
- Level 3: Implement hyperparameter tuning
- Level 4: Add A/B testing and gradual rollouts
- Level 5: Multi-model pipelines with ensemble strategies
Common Patterns
Batch Training Pipeline
```yaml
# See assets/pipeline-dag.yaml.template
stages:
  - name: data_preparation
    dependencies: []
  - name: model_training
    dependencies: [data_preparation]
  - name: model_evaluation
    dependencies: [model_training]
  - name: model_deployment
    dependencies: [model_evaluation]
```
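The batch DAG above can be executed with a simple topological walk. A minimal sketch, not tied to any particular orchestrator; `topo_order` and `pipeline` are illustrative names:

```python
def topo_order(stages):
    """Return an execution order in which every stage runs after its
    dependencies. `stages` mirrors the YAML: name -> list of deps."""
    order, done = [], set()

    def visit(name, path=()):
        if name in done:
            return
        if name in path:
            raise ValueError(f"cycle involving {name!r}")
        for dep in stages[name]:
            visit(dep, path + (name,))
        done.add(name)
        order.append(name)

    for name in stages:
        visit(name)
    return order


pipeline = {
    "data_preparation": [],
    "model_training": ["data_preparation"],
    "model_evaluation": ["model_training"],
    "model_deployment": ["model_evaluation"],
}
```

Real orchestrators add scheduling, retries, and parallelism on top, but the ordering contract is the same.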
Real-time Feature Pipeline
```yaml
# Stream processing for real-time features
# Combined with batch training
# See references/data-preparation.md
```
Continuous Training
```yaml
# Automated retraining on schedule
# Triggered by data drift detection
# See references/model-training.md
```
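The drift trigger for continuous training can be as simple as a mean-shift check. This is a crude stand-in for a proper statistical test (KS test, PSI); `drift_detected` is a hypothetical helper:

```python
import statistics


def drift_detected(baseline, current, threshold=2.0):
    """Flag drift when the current batch mean moves more than `threshold`
    baseline standard deviations away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against zero spread
    return abs(statistics.mean(current) - mu) > threshold * sigma
```

When this returns True for a monitored feature, the pipeline would enqueue a retraining run against the latest versioned dataset.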
Troubleshooting
Common Issues
- Pipeline failures: Check dependencies and data availability
- Training instability: Review hyperparameters and data quality
- Deployment issues: Validate model artifacts and serving config
- Performance degradation: Monitor data drift and model metrics
Debugging Steps
- Check pipeline logs for each stage
- Validate input/output data at boundaries
- Test components in isolation
- Review experiment tracking metrics
- Inspect model artifacts and metadata
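"Validate input/output data at boundaries" from the steps above can start as a lightweight presence check before reaching for heavier tooling like Great Expectations. A minimal sketch with illustrative names:

```python
def check_boundary(stage, records, required_fields):
    """Validate records crossing a stage boundary: field presence only,
    a cheap first check that catches most wiring mistakes."""
    problems = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if f not in rec]
        if missing:
            problems.append(f"{stage} record {i}: missing {missing}")
    return problems
```

Running this at both the producer's output and the consumer's input quickly localizes whether a failure is in a stage or in the handoff between stages.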
Next Steps
After setting up your pipeline:
- Explore hyperparameter-tuning skill for optimization
- Learn experiment-tracking-setup for MLflow/W&B
- Review model-deployment-patterns for serving strategies
- Implement monitoring with observability tools
Related Skills
- experiment-tracking-setup: MLflow and Weights & Biases integration
- hyperparameter-tuning: Automated hyperparameter optimization
- model-deployment-patterns: Advanced deployment strategies
Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.