jpskill.com

arch-event-driven

Event-driven: Kafka/RabbitMQ, event sourcing, CQRS, pub/sub, dead letter queues, schema registry

⚡ Recommended: one-command install (60 seconds)

Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). Download → extract → placement, all automatic.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o arch-event-driven.zip https://jpskill.com/download/22272.zip && unzip -o arch-event-driven.zip && rm arch-event-driven.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/22272.zip -OutFile "$d\arch-event-driven.zip"; Expand-Archive "$d\arch-event-driven.zip" -DestinationPath $d -Force; ri "$d\arch-event-driven.zip"

When it finishes, restart Claude Code → just make a related request in plain language, e.g. "design an event-driven architecture for me," and the skill activates automatically.

💾 Manual download (if the command feels intimidating)
  1. Click the blue button below to download arch-event-driven.zip
  2. Double-click the ZIP to extract it → an arch-event-driven folder appears
  3. Move that folder to C:\Users\あなたの名前\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of any skill.

🎯 What this Skill does

The description below explains what this Skill will do for you. When you ask Claude for help in this area, it activates automatically.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't need to say "use this Skill" — it is invoked automatically for related requests.

See the detailed usage guide →
Last updated: 2026-05-18
Retrieved: 2026-05-18
Included files: 1
📖 Original SKILL.md (read by Claude)

This text is the original (English or Chinese) read by the AI (Claude). Japanese translations are being added over time.

arch-event-driven

Purpose

This skill implements event-driven architectures using Kafka, RabbitMQ, and related patterns like event sourcing, CQRS, pub/sub, dead letter queues, and schema registries. It helps design scalable, decoupled systems for real-time event processing in microservices environments.

When to Use

Use this skill for scenarios requiring asynchronous communication, such as microservices that need to react to events without direct dependencies. Apply it in high-volume data pipelines, real-time analytics, or when decoupling producers and consumers is essential, like in e-commerce order processing or IoT data streams. Avoid it for simple synchronous operations where polling suffices.

Key Capabilities

  • Set up Kafka topics and partitions for event streaming.
  • Implement event sourcing by storing events in Kafka for state reconstruction.
  • Apply CQRS to separate read and write models, using Kafka for commands and queries.
  • Manage pub/sub with Kafka consumer groups for fan-out scenarios.
  • Handle failures via dead letter queues in Kafka or RabbitMQ.
  • Enforce schema validation using Confluent Schema Registry for Avro schemas.
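As a minimal illustration of the event-sourcing capability above, the following sketch rebuilds state by folding over an ordered event log, the same way a consumer replaying a Kafka topic from offset 0 would. The string event format and class name are illustrative, not any Kafka API:

```java
import java.util.List;

// Sketch: rebuild current state by replaying an ordered event log,
// as a consumer would after reading a topic from the beginning.
public class EventSourcingSketch {
    // Fold each event ("deposit:100", "withdraw:30") into the running
    // balance; replaying the full log always reproduces the same state.
    public static int replay(List<String> events) {
        int balance = 0;
        for (String event : events) {
            String[] parts = event.split(":");
            int amount = Integer.parseInt(parts[1]);
            if (parts[0].equals("deposit")) balance += amount;
            else if (parts[0].equals("withdraw")) balance -= amount;
        }
        return balance;
    }

    public static void main(String[] args) {
        System.out.println(replay(List.of("deposit:100", "withdraw:30")));
    }
}
```

Because state is derived purely from the log, snapshots and new read models can be built at any time by re-running the fold.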

Usage Patterns

To implement pub/sub, create a Kafka topic and have producers publish events; consumers subscribe via groups. For event sourcing, store all state changes as events in a Kafka stream and replay them to build current state. In CQRS, route commands to a write service (e.g., via Kafka producer) and queries to a read service (e.g., from a materialized view). Use dead letter queues by configuring Kafka topics to redirect failed messages. Always define event schemas in JSON or Avro format for consistency.
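To make the last point concrete, a hypothetical Avro schema for an order-placed event (field names are illustrative) might look like:

```json
{
  "type": "record",
  "name": "OrderPlaced",
  "namespace": "com.example.orders",
  "fields": [
    {"name": "orderId", "type": "int"},
    {"name": "status", "type": "string"},
    {"name": "placedAt", "type": {"type": "long", "logicalType": "timestamp-millis"}}
  ]
}
```

Registering such a schema with the schema registry lets producers and consumers evolve independently while rejecting incompatible payloads.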

Common Commands/API

Use the Kafka CLI for topic management:

kafka-topics.sh --bootstrap-server localhost:9092 --create --topic orders --partitions 3 --replication-factor 2

To produce events, use:

kafka-console-producer.sh --bootstrap-server localhost:9092 --topic orders
{"orderId": 123, "status": "placed"}

Consume events with:

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic orders --from-beginning

For RabbitMQ, declare exchanges and queues via the CLI: rabbitmqadmin declare exchange name=events type=fanout. API endpoints: use the Kafka REST Proxy, producing with a POST to /topics/{topic}, e.g.:

curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" --data '{"records":[{"value":{"orderId":123}}]}' http://localhost:8082/topics/orders

Authenticate with an environment variable in the headers, e.g. -H "Authorization: Bearer $KAFKA_API_KEY". Config formats: use a Kafka properties file, e.g. key.serializer=org.apache.kafka.common.serialization.StringSerializer in producer configs.

Integration Notes

Integrate Kafka with applications by adding the Kafka client library (e.g., in Java: org.apache.kafka:kafka-clients:3.0.0). Set environment variables for credentials: export RABBITMQ_URL=amqp://user:$RABBITMQ_PASSWORD@localhost. For the schema registry, point producer configs at Confluent's endpoint: schema.registry.url=http://localhost:8081. When linking with databases, use Kafka Connect for JDBC sources, configured with a JSON file such as:

{"name": "jdbc-source", "config": {"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "connection.url": "jdbc:postgresql://localhost:5432/db"}}

Ensure producers handle retries on transient errors.

Error Handling

Configure dead letter queues in Kafka by setting up a separate topic for failures: in consumer code, catch exceptions per record and produce the failed payload to "dead-letter-topic". Example:

for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(100))) {
    try { process(record); /* application-specific handling */ } catch (Exception e) { producer.send(new ProducerRecord<>("dead-letter-topic", record.value())); }
}

In RabbitMQ, bind a queue to a dead letter exchange. Always log errors with details like error code and timestamp. Use schema registry to validate events and reject invalid ones, e.g., via SchemaRegistryClient API. Monitor with tools like Kafka's JMX for lag and errors; set up alerts if consumer lag exceeds 1000 messages.
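The dead-letter flow above can be sketched without a broker. In this hypothetical example, a list stands in for the dead-letter topic and the handler stands in for deserialization plus business logic; in production the catch branch would call producer.send instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of per-record dead-letter routing: process each record, and on
// failure divert it instead of crashing the whole consumer loop.
public class DeadLetterSketch {
    // Returns the successfully processed records; failures land in deadLetters.
    public static List<String> process(List<String> records,
                                       Consumer<String> handler,
                                       List<String> deadLetters) {
        List<String> succeeded = new ArrayList<>();
        for (String record : records) {
            try {
                handler.accept(record);   // e.g. deserialize + apply
                succeeded.add(record);
            } catch (Exception e) {
                deadLetters.add(record);  // stand-in for producer.send("dead-letter-topic", ...)
            }
        }
        return succeeded;
    }

    public static void main(String[] args) {
        List<String> dlq = new ArrayList<>();
        List<String> ok = process(List.of("{\"orderId\":1}", "not-json"),
                r -> { if (!r.startsWith("{")) throw new IllegalArgumentException(r); },
                dlq);
        System.out.println("ok=" + ok + " dlq=" + dlq);
    }
}
```

Keeping the catch inside the per-record loop is the key point: one poison message is quarantined for later inspection rather than blocking the partition.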

Concrete Usage Examples

Example 1: Basic Kafka Pub/Sub Setup
To set up a pub/sub for user events: First, create a topic: kafka-topics.sh --bootstrap-server localhost:9092 --create --topic user-events. Produce an event:

kafka-console-producer.sh --bootstrap-server localhost:9092 --topic user-events
{"userId": 1, "action": "login"}

Consume it: kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic user-events --group mygroup. This decouples producers from consumers for scalable event handling.

Example 2: Implementing CQRS with Event Sourcing
For an e-commerce app, use Kafka for commands: Produce to "commands-topic" with producer.send(new ProducerRecord("commands-topic", "{\"command\": \"placeOrder\", \"orderId\": 123}")). For queries, maintain a read model by consuming events and updating a database. Example consumer code:

consumer.subscribe(Arrays.asList("events-topic"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    for (ConsumerRecord<String, String> record : records) {
        updateReadModel(record.value());
    }
}

This ensures write operations are handled separately from reads, improving performance.
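As a broker-free sketch of that read side, the following hypothetical updateReadModel folds events into an in-memory map standing in for the materialized view; the event format and class name are assumptions, not a Kafka API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a CQRS read model: consume events and fold them into a
// query-optimized view (here an in-memory map; in production this would
// be a database table or materialized view).
public class ReadModelSketch {
    private final Map<Integer, String> orderStatus = new HashMap<>();

    // Apply one event of the assumed form "orderId=123;status=placed".
    // Later events for the same order overwrite earlier status values.
    public void updateReadModel(String event) {
        int orderId = 0;
        String status = "";
        for (String pair : event.split(";")) {
            String[] kv = pair.split("=");
            if (kv[0].equals("orderId")) orderId = Integer.parseInt(kv[1]);
            else if (kv[0].equals("status")) status = kv[1];
        }
        orderStatus.put(orderId, status);
    }

    // Query side: constant-time lookup, no write-path involvement.
    public String statusOf(int orderId) {
        return orderStatus.get(orderId);
    }
}
```

Because the view is derived from events, it can be dropped and rebuilt at any time by replaying the topic, which is what makes the read and write sides safely independent.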

Graph Relationships

  • Related to cluster: se-architecture
  • Connected tags: event-driven, kafka, eventsourcing, cqrs
  • Links to: se-deployment (for Kafka cluster setup), se-data-pipelines (for event streaming integrations)