mem0
You are an expert in Mem0, the memory infrastructure for AI applications. You help developers add persistent, personalized memory to LLM-powered apps and agents — storing user preferences, conversation history, facts, and context that persists across sessions, enabling AI that remembers users, learns from interactions, and provides increasingly personalized responses.
Copy the command below for your platform and paste it into Terminal (Mac/Linux) or PowerShell (Windows). It downloads, extracts, and places the skill automatically.

macOS / Linux:
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o mem0.zip https://jpskill.com/download/15117.zip && unzip -o mem0.zip && rm mem0.zip

Windows (PowerShell):
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/15117.zip -OutFile "$d\mem0.zip"; Expand-Archive "$d\mem0.zip" -DestinationPath $d -Force; ri "$d\mem0.zip"
After it completes, restart Claude Code. Then just make a related request in plain language and the skill activates automatically.
💾 Manual download (if the command is difficult for you)
- 1. Click the blue button below to download mem0.zip
- 2. Double-click the ZIP file to extract it; a mem0 folder appears
- 3. Move that folder to C:\Users\あなたの名前\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
- 4. Restart Claude Code
⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of the skill.
🎯 What this Skill can do
The description below explains what this Skill does for you. When you ask Claude for something in this domain, it activates automatically.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Change the file extension from .skill to .zip and extract it (macOS can extract it automatically)
- 3. Place the extracted folder in .claude/skills/ under your home folder:
  · macOS / Linux: ~/.claude/skills/
  · Windows: %USERPROFILE%\.claude\skills\
Restart Claude Code and you're done; even without saying "use this Skill", related requests invoke it automatically.
- Last updated: 2026-05-18
- Retrieved: 2026-05-18
- Included files: 1
📖 The original SKILL.md that Claude reads (expanded below)
This is the original text (English or Chinese) that the AI (Claude) reads. A Japanese translation is being added.
Mem0 — Memory Layer for AI Agents
Core Capabilities
Memory Management
```python
# memory_service.py — Add persistent memory to any AI app
from mem0 import Memory

# Initialize with vector store
memory = Memory.from_config({
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini"},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333, "collection_name": "memories"},
    },
})

# Add memories from conversation
messages = [
    {"role": "user", "content": "I'm allergic to peanuts and I'm training for a marathon"},
    {"role": "assistant", "content": "I'll keep your peanut allergy in mind! For marathon training, nutrition is key..."},
]
memory.add(messages, user_id="user_42")
# Mem0 extracts: "User is allergic to peanuts", "User is training for a marathon"

# Add explicit memory
memory.add("User prefers Python over JavaScript for backend work", user_id="user_42")

# Search memories
results = memory.search("What dietary restrictions?", user_id="user_42")
# → [{"memory": "User is allergic to peanuts", "score": 0.94}]

# Get all memories for a user
all_memories = memory.get_all(user_id="user_42")

# Update memory
memory.update(memory_id="mem_abc123", data="User completed their first marathon in March 2026")

# Delete specific memory
memory.delete(memory_id="mem_abc123")

# Delete all user memories (GDPR compliance)
memory.delete_all(user_id="user_42")
```
AI Chat with Memory
```python
from openai import OpenAI
from mem0 import Memory

client = OpenAI()
memory = Memory()

def chat_with_memory(user_id: str, user_message: str) -> str:
    # Retrieve relevant memories
    relevant = memory.search(user_message, user_id=user_id, limit=5)
    memory_context = "\n".join([f"- {m['memory']}" for m in relevant])

    # Generate response with memory context
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"""You are a personal assistant.
You know these things about the user:
{memory_context}
Use this context to personalize your responses."""},
            {"role": "user", "content": user_message},
        ],
    )
    assistant_message = response.choices[0].message.content

    # Store new memories from this conversation
    memory.add(
        [
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": assistant_message},
        ],
        user_id=user_id,
    )
    return assistant_message

# Session 1
chat_with_memory("user_42", "I just moved to Berlin and I love Italian food")
# Stores: "User lives in Berlin", "User loves Italian food"

# Session 2 (days later)
chat_with_memory("user_42", "Recommend a restaurant for tonight")
# → Remembers Berlin + Italian food → suggests Italian restaurants in Berlin
```
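Injecting every retrieved memory into the prompt can add noise. A minimal, hypothetical refinement in plain Python (no Mem0 calls; `hits` mimics the search-result shape shown earlier, and `build_memory_context` is a helper introduced here for illustration) filters hits by relevance score before building the context block:

```python
def build_memory_context(results, min_score=0.7, max_items=5):
    """Keep only high-confidence memories, best first, capped at max_items."""
    kept = sorted(
        (r for r in results if r.get("score", 0.0) >= min_score),
        key=lambda r: r["score"],
        reverse=True,
    )[:max_items]
    return "\n".join(f"- {r['memory']}" for r in kept)

# Same shape as the search results shown earlier
hits = [
    {"memory": "User is allergic to peanuts", "score": 0.94},
    {"memory": "User prefers Python over JavaScript", "score": 0.41},
]
print(build_memory_context(hits))
# → - User is allergic to peanuts
```

The threshold (0.7 here) is a tuning knob: raise it for precision, lower it for recall.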
Organization-Level Memory
```python
# Shared knowledge across an organization
memory.add(
    "Our refund policy allows returns within 30 days with receipt",
    user_id="agent_support",
    metadata={"type": "policy", "department": "support"},
)

# Agent-specific memory
memory.add(
    "Customer prefers email over phone for follow-ups",
    user_id="user_42",
    agent_id="support_agent",
)

# Search with filters
results = memory.search(
    "refund policy",
    user_id="agent_support",
    filters={"type": "policy"},
)
```
Installation

```shell
pip install mem0ai
```
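A note on setup, assuming the defaults used in this guide: the out-of-the-box `Memory()` configuration is OpenAI-backed, so an `OPENAI_API_KEY` must be set, and the Qdrant vector-store config shown earlier expects a local Qdrant instance:

```shell
# Required by the default OpenAI-backed LLM and embedder config
export OPENAI_API_KEY="sk-..."

# Optional: local Qdrant for the self-hosted vector-store config (assumes Docker)
docker run -d -p 6333:6333 qdrant/qdrant
```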
Best Practices
- User-scoped memories — Always pass `user_id`; memories are isolated per user for privacy
- Automatic extraction — Pass full conversations; Mem0 extracts facts automatically using an LLM
- Search before generate — Query relevant memories before the LLM call; inject them as system-prompt context
- Memory hygiene — Periodically review and prune outdated memories; users' preferences change
- GDPR compliance — Use `delete_all(user_id=...)` for right-to-erasure requests
- Metadata for filtering — Add metadata tags (type, department, source) for precise memory retrieval
- Conflict resolution — Mem0 handles contradictions (e.g., "moved from NYC to Berlin" updates location)
- Self-hosted option — Use Qdrant/Chroma locally for data sovereignty; no data leaves your infrastructure
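The memory-hygiene point above can be sketched as a periodic pruning pass. This plain-Python helper is hypothetical: it assumes each record carries an `id` and an ISO-8601 `created_at` field (field names in your Mem0 deployment may differ), and it only selects candidates rather than deleting anything:

```python
from datetime import datetime, timedelta, timezone

def stale_memory_ids(memories, max_age_days=180, now=None):
    """Return ids of memories older than max_age_days, as candidates for review or deletion."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        m["id"]
        for m in memories
        if datetime.fromisoformat(m["created_at"]) < cutoff
    ]

records = [
    {"id": "mem_1", "created_at": "2024-01-01T00:00:00+00:00"},
    {"id": "mem_2", "created_at": "2026-05-01T00:00:00+00:00"},
]
fixed_now = datetime(2026, 5, 18, tzinfo=timezone.utc)
print(stale_memory_ids(records, max_age_days=180, now=fixed_now))
# → ['mem_1']
```

In practice you would feed the candidate ids to `memory.delete(memory_id=...)` after a human or LLM review step, since an old memory ("allergic to peanuts") is not necessarily an outdated one.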