📦 Open Notebook
An open-source tool for AI-assisted research and document analysis: it organizes diverse sources into notebooks and provides AI summarization, chat, and search, all while keeping your data private.
📜 Original English Description (for reference)
Self-hosted, open-source alternative to Google NotebookLM for AI-powered research and document analysis. Use when organizing research materials into notebooks, ingesting diverse content sources (PDFs, videos, audio, web pages, Office documents), generating AI-powered notes and summaries, creating multi-speaker podcasts from research, chatting with documents using context-aware AI, searching across materials with full-text and vector search, or running custom content transformations. Supports 16+ AI providers including OpenAI, Anthropic, Google, Ollama, Groq, and Mistral with complete data privacy through self-hosting.
⚠️ Download and use at your own risk. This site takes no responsibility for the content, behavior, or safety of this Skill.
🎯 What This Skill Can Do
The description below explains what this Skill will do for you. It activates automatically whenever you give Claude a request in this area.
📦 Installation (3 Steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder in the skills directory under your home folder:
  - macOS / Linux: ~/.claude/skills/
  - Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you are done. You do not need to say "use this Skill"; it is invoked automatically for any related request.
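As a quick sanity check after step 3, you can confirm the Skill landed in the right place with a short script. This is only a sketch; the folder name under `skills/` is whatever your unzipped Skill folder happens to be called:

```python
from pathlib import Path

def skill_installed(skills_dir: Path, folder_name: str) -> bool:
    """Return True if the Skill folder contains a SKILL.md file."""
    return (skills_dir / folder_name / "SKILL.md").is_file()

# Check under the default Claude Code skills directory
skills_dir = Path.home() / ".claude" / "skills"
print(skill_installed(skills_dir, "open-notebook"))
```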
See the full usage guide →
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Included files: 9
💬 Sample Prompts (just say this)
- › Show me how to use Open Notebook
- › Show me concrete examples of what Open Notebook can do
- › Walk me through the steps for a first-time Open Notebook user
Paste any of these into Claude Code and the Skill activates automatically.
📖 The Original SKILL.md That Claude Reads (contents expanded)
This body is the original text (English or Chinese) that the AI (Claude) reads. Japanese translations are being added over time.
Open Notebook
Overview
Open Notebook is an open-source, self-hosted alternative to Google's NotebookLM that enables researchers to organize materials, generate AI-powered insights, create podcasts, and have context-aware conversations with their documents — all while maintaining complete data privacy.
Unlike NotebookLM, which has no publicly available API outside of its Enterprise version, Open Notebook provides a comprehensive REST API, supports 16+ AI providers, and runs entirely on your own infrastructure.
Key advantages over NotebookLM:
- Full REST API for programmatic access and automation
- Choice of 16+ AI providers (not locked to Google models)
- Multi-speaker podcast generation with 1-4 customizable speakers (vs. 2-speaker limit)
- Complete data sovereignty through self-hosting
- Open source and fully extensible (MIT license)
Repository: https://github.com/lfnovo/open-notebook
Quick Start
Prerequisites
- Docker Desktop installed
- API key for at least one AI provider (or local Ollama for free local inference)
Installation
Deploy Open Notebook using Docker Compose:
```bash
# Download the docker-compose file
curl -o docker-compose.yml https://raw.githubusercontent.com/lfnovo/open-notebook/main/docker-compose.yml

# Set the required encryption key
export OPEN_NOTEBOOK_ENCRYPTION_KEY="your-secret-key-here"

# Launch the services
docker-compose up -d
```
Access the application:
- Frontend UI: http://localhost:8502
- REST API: http://localhost:5055
- API Documentation: http://localhost:5055/docs
Configure AI Provider
After startup, configure at least one AI provider:
- Navigate to Settings > API Keys in the UI
- Add credentials for your preferred provider (OpenAI, Anthropic, etc.)
- Test the connection and discover available models
- Register models for use across the platform
Or configure via the REST API:
```python
import requests

BASE_URL = "http://localhost:5055/api"

# Add a credential for an AI provider
response = requests.post(f"{BASE_URL}/credentials", json={
    "provider": "openai",
    "name": "My OpenAI Key",
    "api_key": "sk-..."
})
credential = response.json()

# Discover available models
response = requests.post(
    f"{BASE_URL}/credentials/{credential['id']}/discover"
)
discovered = response.json()

# Register discovered models
requests.post(
    f"{BASE_URL}/credentials/{credential['id']}/register-models",
    json={"model_ids": [m["id"] for m in discovered["models"]]}
)
```
Core Features
Notebooks
Organize research into separate notebooks, each containing sources, notes, and chat sessions.
```python
import requests

BASE_URL = "http://localhost:5055/api"

# Create a notebook
response = requests.post(f"{BASE_URL}/notebooks", json={
    "name": "Cancer Genomics Research",
    "description": "Literature review on tumor mutational burden"
})
notebook = response.json()
notebook_id = notebook["id"]
```
Sources
Ingest diverse content types including PDFs, videos, audio files, web pages, and Office documents. Sources are processed for full-text and vector search.
```python
# Add a web URL source
response = requests.post(f"{BASE_URL}/sources", data={
    "url": "https://arxiv.org/abs/2301.00001",
    "notebook_id": notebook_id,
    "process_async": "true"
})
source = response.json()

# Upload a PDF file
with open("paper.pdf", "rb") as f:
    response = requests.post(
        f"{BASE_URL}/sources",
        data={"notebook_id": notebook_id},
        files={"file": ("paper.pdf", f, "application/pdf")}
    )
```
Notes
Create and manage notes (human or AI-generated) associated with notebooks.
```python
# Create a human note
response = requests.post(f"{BASE_URL}/notes", json={
    "title": "Key Findings",
    "content": "TMB correlates with immunotherapy response in NSCLC...",
    "note_type": "human",
    "notebook_id": notebook_id
})
```
Context-Aware Chat
Chat with your research materials using AI that cites sources.
```python
# Create a chat session
session = requests.post(f"{BASE_URL}/chat/sessions", json={
    "notebook_id": notebook_id,
    "title": "TMB Discussion"
}).json()

# Send a message with context from sources
response = requests.post(f"{BASE_URL}/chat/execute", json={
    "session_id": session["id"],
    "message": "What are the key biomarkers for immunotherapy response?",
    "context": {"include_sources": True, "include_notes": True}
})
```
Search
Search across all materials using full-text or vector (semantic) search.
```python
# Vector search across the knowledge base
results = requests.post(f"{BASE_URL}/search", json={
    "query": "tumor mutational burden immunotherapy",
    "search_type": "vector",
    "limit": 10
}).json()

# Ask a question with an AI-powered answer
answer = requests.post(f"{BASE_URL}/search/ask/simple", json={
    "query": "How does TMB predict checkpoint inhibitor response?"
}).json()
```
Podcast Generation
Generate professional multi-speaker podcasts from research materials with 1-4 customizable speakers.
```python
# Generate a podcast episode
job = requests.post(f"{BASE_URL}/podcasts/generate", json={
    "notebook_id": notebook_id,
    "episode_profile_id": episode_profile_id,
    "speaker_profile_ids": [speaker1_id, speaker2_id]
}).json()

# Check generation status
status = requests.get(f"{BASE_URL}/podcasts/jobs/{job['job_id']}").json()

# Download audio when ready
audio = requests.get(
    f"{BASE_URL}/podcasts/episodes/{status['episode_id']}/audio"
)
```
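Since generation is asynchronous, the status check is typically wrapped in a polling loop. The sketch below is a generic helper, not part of the Open Notebook API; it assumes the job payload carries a `status` field that eventually reads `"completed"` or `"failed"` (field names and values are assumptions):

```python
import time

def poll_until_done(fetch_status, interval=5.0, timeout=600.0):
    """Call fetch_status() until it reports a terminal state or timeout.

    fetch_status: a zero-argument callable returning the job dict, e.g.
    lambda: requests.get(f"{BASE_URL}/podcasts/jobs/{job_id}").json()
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        # Assumed terminal states; adjust to the actual job schema
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("podcast generation did not finish in time")
```

Injecting the fetcher as a callable keeps the retry logic independent of any particular endpoint, so the same helper works for async source processing jobs too.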
Content Transformations
Apply custom AI-powered transformations to content for summarization, extraction, and analysis.
```python
# Create a custom transformation
transform = requests.post(f"{BASE_URL}/transformations", json={
    "name": "extract_methods",
    "title": "Extract Methods",
    "description": "Extract methodology details from papers",
    "prompt": "Extract and summarize the methodology section...",
    "apply_default": False
}).json()

# Execute the transformation on text
result = requests.post(f"{BASE_URL}/transformations/execute", json={
    "transformation_id": transform["id"],
    "input_text": "...",
    "model_id": "model_id_here"
}).json()
```
Supported AI Providers
Open Notebook supports 16+ AI providers through the Esperanto library:
| Provider | LLM | Embedding | Speech-to-Text | Text-to-Speech |
|---|---|---|---|---|
| OpenAI | Yes | Yes | Yes | Yes |
| Anthropic | Yes | No | No | No |
| Google GenAI | Yes | Yes | No | Yes |
| Vertex AI | Yes | Yes | No | Yes |
| Ollama | Yes | Yes | No | No |
| Groq | Yes | No | Yes | No |
| Mistral | Yes | Yes | No | No |
| Azure OpenAI | Yes | Yes | No | No |
| DeepSeek | Yes | No | No | No |
| xAI | Yes | No | No | No |
| OpenRouter | Yes | No | No | No |
| ElevenLabs | No | No | Yes | Yes |
| Perplexity | Yes | No | No | No |
| Voyage | No | Yes | No | No |
Environment Variables
Key configuration variables for Docker deployment:
| Variable | Description | Default |
|---|---|---|
| `OPEN_NOTEBOOK_ENCRYPTION_KEY` | **Required.** Secret key for encrypting stored credentials | None |
| `SURREAL_URL` | SurrealDB connection URL | `ws://surrealdb:8000/rpc` |
| `SURREAL_NAMESPACE` | Database namespace | `open_notebook` |
| `SURREAL_DATABASE` | Database name | `open_notebook` |
| `OPEN_NOTEBOOK_PASSWORD` | Optional password protection for the UI | None |
API Reference
The REST API is available at `http://localhost:5055/api`, with interactive documentation at `/docs`.
Core endpoint groups:
- `/api/notebooks` - Notebook CRUD and source association
- `/api/sources` - Source ingestion, processing, and retrieval
- `/api/notes` - Note management
- `/api/chat/sessions` - Chat session management
- `/api/chat/execute` - Chat message execution
- `/api/search` - Full-text and vector search
- `/api/podcasts` - Podcast generation and management
- `/api/transformations` - Content transformation pipelines
- `/api/models` - AI model configuration and discovery
- `/api/credentials` - Provider credential management
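When scripting against several of these endpoint groups, a small URL helper keeps paths consistent. This is a convenience sketch for your own scripts, not part of the API:

```python
BASE_URL = "http://localhost:5055/api"

def endpoint(*parts):
    """Join path segments onto the API base URL.

    e.g. endpoint("notebooks") or endpoint("podcasts", "jobs", job_id)
    """
    return "/".join([BASE_URL, *map(str, parts)])
```

For example, `endpoint("chat", "sessions")` yields `http://localhost:5055/api/chat/sessions`, ready to pass to `requests.post`.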
For complete API reference with all endpoints and request/response formats, see references/api_reference.md.
Architecture
Open Notebook uses a modern stack:
- Backend: Python with FastAPI
- Database: SurrealDB (document + relational)
- AI Integration: LangChain with the Esperanto multi-provider library
- Frontend: Next.js with React
- Deployment: Docker Compose with persistent volumes
Important Notes
- Open Notebook requires Docker for deployment
- At least one AI provider must be configured for AI features to work
- For free local inference without API costs, use Ollama
- The `OPEN_NOTEBOOK_ENCRYPTION_KEY` must be set before first launch and kept consistent across restarts
- All data is stored locally in Docker volumes for complete data sovereignty
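One way to produce a suitable encryption key is to generate a high-entropy random string with the Python standard library. The 32-byte length here is a reasonable default assumption, not a documented requirement:

```python
import secrets

# Generate a 32-byte (64 hex character) random key for
# OPEN_NOTEBOOK_ENCRYPTION_KEY, printed as a shell export line.
key = secrets.token_hex(32)
print(f'export OPEN_NOTEBOOK_ENCRYPTION_KEY="{key}"')
```

Store the generated key somewhere durable (e.g. a password manager or an `.env` file), since it must stay consistent across restarts.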
Included Files
※ List of files in the ZIP. In addition to the `SKILL.md` itself, it may include reference material, samples, and scripts.
- 📄 SKILL.md (9,627 bytes)
- 📎 references/api_reference.md (10,325 bytes)
- 📎 references/architecture.md (7,441 bytes)
- 📎 references/configuration.md (5,313 bytes)
- 📎 references/examples.md (8,417 bytes)
- 📎 scripts/chat_interaction.py (6,227 bytes)
- 📎 scripts/notebook_management.py (4,170 bytes)
- 📎 scripts/source_ingestion.py (4,901 bytes)
- 📎 scripts/test_open_notebook_skill.py (15,600 bytes)