jpskill.com

spark

Distributed processing framework for large-scale data sets using in-memory computing.

⚡ Recommended: one-line install (60 seconds)

Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). It downloads, extracts, and installs everything automatically.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o spark.zip https://jpskill.com/download/22186.zip && unzip -o spark.zip && rm spark.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/22186.zip -OutFile "$d\spark.zip"; Expand-Archive "$d\spark.zip" -DestinationPath $d -Force; ri "$d\spark.zip"

When it finishes, restart Claude Code → then just ask normally, e.g. "process this large dataset with Spark", and the skill activates automatically.

💾 Manual download (if the command is too difficult)
  1. Click the blue button below to download spark.zip
  2. Double-click the ZIP file to extract it → a spark folder appears
  3. Move that folder to C:\Users\あなたの名前\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of this skill.

🎯 What this Skill can do

The description below explains what this Skill will do for you. It activates automatically whenever you ask Claude for help in this area.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't need to say "use this Skill" — it is invoked automatically for related requests.

See the detailed usage guide →
Last updated
2026-05-18
Retrieved
2026-05-18
Bundled files
1
📖 Original SKILL.md read by Claude (contents expanded)

The text below is the original (English or Chinese) read by the AI (Claude). Japanese translations are being added gradually.

spark

Purpose

Apache Spark is a fast, distributed processing framework for handling large-scale data sets using in-memory computing. It enables efficient batch processing, real-time analytics, machine learning, and graph processing on clusters.

When to Use

Use Spark for processing datasets larger than a single machine's memory, such as analyzing terabytes of log data or running ETL jobs. Apply it in scenarios requiring fast iterative computations, like machine learning algorithms, or when integrating with big data ecosystems like Hadoop. Avoid it for small-scale tasks where simpler tools like Pandas suffice.

Key Capabilities

  • In-memory caching for speeding up iterative algorithms, e.g., via persist(StorageLevel.MEMORY_ONLY).
  • Fault-tolerant distributed computing with RDDs (Resilient Distributed Datasets) for automatic recovery.
  • Support for multiple languages: Scala, Python, Java, R; e.g., use PySpark for data frames with from pyspark.sql import SparkSession.
  • Built-in libraries: Spark SQL for structured data queries, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for real-time data.
  • Scalability to thousands of nodes, with dynamic resource allocation via YARN or Kubernetes.

Usage Patterns

To process data with Spark, start by creating a SparkSession in your code. For batch jobs, submit via spark-submit; for interactive work, use Spark shells. Always specify the master URL, like "yarn" for cluster mode. Handle data sources by reading from files or databases, transforming with DataFrames, and writing outputs. For streaming, use Structured Streaming to process Kafka topics in real-time.

Example 1: Word count in PySpark.

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("WordCount").getOrCreate()
# Split each line into words, then count occurrences per word.
words = spark.read.text("hdfs://path/to/file.txt").rdd.flatMap(lambda x: x[0].split(" "))
counts = words.map(lambda x: (x, 1)).reduceByKey(lambda a, b: a + b)
counts.saveAsTextFile("hdfs://output/path")
spark.stop()

Example 2: ETL job from CSV to Parquet.

from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").appName("ETL").getOrCreate()
# Read CSV with a header row, then write it out as Parquet.
df = spark.read.format("csv").option("header", "true").load("s3://bucket/data.csv")
df.write.format("parquet").mode("overwrite").save("hdfs://processed/data.parquet")
spark.stop()

To run these, use: spark-submit --master yarn --executor-memory 4g your_script.py.

Common Commands/API

Use spark-submit for running applications: spark-submit --class MainClass --master yarn --deploy-mode cluster --driver-memory 2g your.jar arg1 arg2. For interactive sessions, run pyspark or spark-shell. Key API calls include creating a SparkSession: SparkSession.builder.appName("App").master("local").getOrCreate(). Read data with spark.read.csv("path", header=True, inferSchema=True). Transform data using DataFrame APIs, e.g., df.filter(df['age'] > 30).groupBy('department').count(). For configurations, use SparkConf: conf = SparkConf().set("spark.executor.cores", "2"). Set env vars for cluster access, like $SPARK_MASTER_URL for the master node.

Integration Notes

Integrate Spark with Hadoop by setting $HADOOP_CONF_DIR env var to your Hadoop config path, then use YARN as the master. For Kafka, add the connector via --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 in spark-submit, and read streams with spark.readStream.format("kafka").option("kafka.bootstrap.servers", "host:port").load(). Connect to databases using JDBC: df.write.jdbc(url="jdbc:postgresql://host/db", table="table", mode="append"), requiring JDBC drivers in the classpath. Use config files like spark-defaults.conf for settings, e.g., spark.sql.shuffle.partitions 200.

Error Handling

Handle OutOfMemory errors by increasing memory: add --driver-memory 4g --executor-memory 8g to spark-submit. For failed tasks, check Spark UI at http://driver-host:4040 for logs, and use spark.task.maxFailures config to set retry limits. Common serialization issues (e.g., NotSerializableException) are fixed by making classes serializable, like implementing Serializable in Java. For data skew, repartition data with df.repartition(100).write.... Always wrap code in try-except for API calls, e.g., try: df = spark.read.csv("path") except Exception as e: print(e).

Graph Relationships

Connected to: data-engineering cluster (e.g., hadoop for storage, airflow for orchestration). Related tags: big-data, distributed-computing. Links: integrates with kafka for streaming, uses hadoop file systems for input/output.