🛠️ Autoskill
Observes your on-screen work, detects repeated research workflows, and drafts new skills (or composition recipes) for patterns not yet covered.
📺 Watch the video first (YouTube)
▶ [Must-see] The strongest AI agent "Claude Code": latest features, usage, and super-practical techniques for streamlining programming with AI! ↗
※ A video selected for reference by the jpskill.com editorial team. The video's content may not exactly match the Skill's behavior.
📜 Original English description (for reference)
Observe the user's screen via screenpipe, detect repeated research workflows, match them against existing scientific-agent-skills, and draft new skills (or composition recipes that chain existing ones) for the patterns not yet covered. Use when the user asks to analyze their recent work and propose skills based on what they actually do. Requires the screenpipe daemon (https://github.com/screenpipe/screenpipe) running locally on port 3030 — the skill has no other data source and will refuse to run if screenpipe is unreachable. All detection runs locally; only redacted cluster summaries reach the LLM.
🇯🇵 Commentary for Japanese creators
Observes your on-screen work, detects repeated research workflows, and drafts new skills (or composition recipes) for patterns not yet covered.
※ Commentary added by the jpskill.com editorial team for Japanese business users. It is reference information, independent of the Skill's actual behavior.
⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of the Skill.
🎯 What this Skill can do
Read the description below to see what this Skill will do for you. It fires automatically whenever you give Claude a request in this domain.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder under `.claude/skills/` in your home folder:
  - macOS / Linux: `~/.claude/skills/`
  - Windows: `%USERPROFILE%\.claude\skills\`
Restart Claude Code and you're done. You don't need to say "use this Skill…" — it is invoked automatically for any related request.
See the detailed usage guide →
- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Bundled files: 13
💬 Just talk to it — sample prompts
- › Using Autoskill, show me a minimal working example
- › Tell me the main ways to use Autoskill and what to watch out for
- › Show me how to integrate Autoskill into an existing project
Paste any of these into Claude Code and the Skill fires automatically.
📖 The original SKILL.md that Claude reads (full contents below)
This body is the original text (English or Chinese) that the AI (Claude) reads. Japanese translations are being added progressively.
autoskill
Requires a running screenpipe daemon. This skill has no alternate data source — it reads exclusively from the local screenpipe HTTP API (default `http://localhost:3030`). If the daemon isn't running, `run()` raises `ScreenpipeUnreachable` with install instructions.
Network access & environment variables. This skill makes authenticated HTTP requests to (a) the user's local screenpipe daemon on loopback, and (b) the user-configured LLM backend — one of `http://localhost:1234/v1` (LM Studio, default), `https://api.anthropic.com` (opt-in Claude), or a user-supplied BYOK Foundry gateway. The skill reads three environment variables — `SCREENPIPE_TOKEN`, `ANTHROPIC_API_KEY`, `FOUNDRY_API_KEY` — and uses each only to authenticate to the single endpoint its name implies. No other network destinations, no telemetry, no data egress to any third party.
Overview
Turn the user's own workflow history — captured passively by the local screenpipe daemon — into new skills. This skill is on-demand: the user invokes it with a time window, it queries screenpipe's local HTTP API, clusters repeated workflow patterns, compares each pattern against the existing skills in this repo, and produces a staged folder of proposals the user can review, edit, and promote.
When to Use This Skill
Invoke this skill when the user asks to:
- "Analyze my last 4 hours / day / week and propose new skills."
- "Look at what I've been doing and tell me what's not covered yet."
- "Draft a skill from my recent workflow."
- "Find composition recipes for workflows I repeat."
Do not invoke it for one-off questions about screenpipe itself, for real-time screen queries, or without an explicit user request — the skill analyzes sensitive local content and must stay explicitly user-triggered.
Privacy Posture
- Screenpipe handles app/window filtering at capture time. Install a starter deny-list by copying `references/screenpipe-config.yaml` into the user's screenpipe config. Sensitive apps (password managers, messaging, banking) are never OCR'd in the first place.
- Raw OCR never leaves the machine. `scripts/fetch_window.py` pulls data over localhost HTTP. `scripts/cluster.py` reduces the timeline to app/duration/title summaries. `scripts/redact.py` strips emails, API keys, bearer tokens, and phone numbers as defense-in-depth before any cluster summary reaches the LLM.
- LLM backend defaults to `local`. The recommended setup is LM Studio running `Gemma-4-31B-it` — strong reasoning at a size that fits on most workstation GPUs, and no data ever leaves your machine. Cloud backends (`claude`, `foundry`) are opt-in and documented in `config.yaml` for users who explicitly want them. Detection and embeddings always run locally regardless of backend choice.
- Dry-run mode (`--plan`) prints the exact timeline that will be analyzed before any LLM call.
- TLS for localhost (optional, for corporate policy): see `references/https-proxy.md` for the Caddy pattern.
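To make the redaction layer concrete, here is a minimal sketch in the spirit of `scripts/redact.py` — the patterns below are assumptions for illustration, not the skill's actual rules:

```python
# Hypothetical regex scrub: typed placeholders for bearer tokens, emails,
# API-key-shaped strings, and phone numbers. Order matters: BEARER runs
# first so "Bearer sk-..." is caught whole.
import re

PATTERNS = {
    "BEARER": re.compile(r"(?i)bearer\s+[a-z0-9._~+/-]+=*"),
    "EMAIL":  re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "APIKEY": re.compile(r"\b(?:sk|pk|api|key)[-_][A-Za-z0-9]{16,}\b"),
    "PHONE":  re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```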
Prerequisites
1. Screenpipe daemon
Either install the official release or build from source. Either way the daemon binds HTTP on localhost:3030 by default.
From source (recommended if you want the CLI daemon without the desktop GUI):
```bash
git clone --depth 1 https://github.com/mediar-ai/screenpipe.git
cd screenpipe
cargo build -p screenpipe-engine --release
# System deps (macOS): cmake + full Xcode.app (not just Command Line Tools).
#   brew install cmake
#   if xcodebuild plug-ins error: sudo xcodebuild -runFirstLaunch
./target/release/screenpipe doctor   # confirm permissions + ffmpeg
./target/release/screenpipe record --disable-audio --use-pii-removal
```
First run will prompt for macOS Screen Recording permission. Grant it and relaunch.
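If you want to verify reachability from Python before a full run, a short probe like the one below works (it assumes screenpipe exposes a `/health` route; adjust for your build):

```python
# Quick loopback probe; httpx is installed in step 3 below.
import httpx

def screenpipe_up(url: str = "http://localhost:3030") -> bool:
    try:
        return httpx.get(f"{url}/health", timeout=2.0).status_code == 200
    except httpx.TransportError:
        return False
```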
2. Screenpipe API token
The local API now requires bearer auth. Retrieve your token and export it:
```bash
export SCREENPIPE_TOKEN=$(screenpipe auth token)
```
(Or set `screenpipe.token` directly in `config.yaml` — the env var is preferred since it keeps secrets out of version control.)
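Once the token is exported, every request to the daemon carries it as a bearer header. A minimal authenticated query (the parameter names are assumptions based on this doc's description of `/search` — check your screenpipe version's API reference):

```python
import os
import httpx

resp = httpx.get(
    "http://localhost:3030/search",
    headers={"Authorization": f"Bearer {os.environ['SCREENPIPE_TOKEN']}"},
    params={"content_type": "ocr", "limit": 50, "offset": 0},
    timeout=30.0,
)
resp.raise_for_status()
print(resp.json())
```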
3. Python environment
Via pipenv from the repo root:
```bash
pipenv install httpx pyyaml sentence-transformers
```
The embedding model (`sentence-transformers/all-MiniLM-L6-v2`, ~80 MB) downloads on first run.
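That model is the core of the local matching step. A self-contained sketch of the ranking primitive (the cluster and skill strings are invented for illustration; the model name is from this doc):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

cluster = "Repeated PubMed searches followed by Zotero imports"
skills = [
    "citation-management: organize and deduplicate references in Zotero",
    "scientific-writing: draft and revise manuscript sections",
]

# Cosine similarity between the cluster summary and each skill description.
scores = util.cos_sim(model.encode(cluster), model.encode(skills))[0].tolist()
for name, score in sorted(zip(skills, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {name}")
```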
4. Local LLM (default path) — LM Studio
- Install LM Studio.
- Download `Gemma-4-31B-it` (or another strong reasoning model; adjust `local.model` in `config.yaml`).
- Load it via the CLI for headless use (no GUI required):
```bash
lms load gemma-4-31b-it --context-length 131072 --gpu max -y
lms status   # confirm server running on :1234
```
5. Cloud LLM backends (optional, opt-in)
Only if you explicitly opt out of local:
- `claude`: set `ANTHROPIC_API_KEY`, flip `backend: claude` in `config.yaml`.
- `foundry`: set `FOUNDRY_API_KEY`, flip `backend: foundry`, set `foundry.endpoint` to your corporate gateway URL.
Architecture
```
screenpipe daemon (user-installed)
        │ HTTP on localhost:3030
        ▼
scripts/fetch_window.py  → normalized timeline events
scripts/redact.py        → regex scrub (defense-in-depth)
scripts/cluster.py       → sessions + clusters (local only)
scripts/match_skills.py  → top-k vs existing 135 skills (local embeddings)
scripts/synthesize.py    → LLM judge: reuse / compose / novel
        │
        ▼
~/.autoskill/proposed/<timestamp>/   (default; override with --out)
├── report.md
├── composition-recipes/<name>/SKILL.md
└── new-skills/<name>/SKILL.md

scripts/promote.py → user-approved proposal → scientific-skills/<name>/
```
Workflow
The skill ships a unified CLI at `scripts/autoskill.py` with three subcommands:
```bash
python scripts/autoskill.py doctor --config config.yaml --skills-dir ../
python scripts/autoskill.py run --start ... --end ... --config config.yaml
python scripts/autoskill.py promote --proposed ~/.autoskill/proposed/<ts> --skills-dir ../ --name <skill>
```
0. Preflight with doctor
Before a full run, verify every dependency in one shot:
```bash
python scripts/autoskill.py doctor \
  --config scientific-skills/autoskill/config.yaml \
  --skills-dir scientific-skills
```
The report covers `config` (backend choice valid), `skills_dir` (exists), `screenpipe` (reachable + authed), and `llm` (LM Studio serving or API key present). Non-zero exit on any failure, with the offending line marked `error`.
1. Run the pipeline
```bash
export SCREENPIPE_TOKEN=$(screenpipe auth token)
python scripts/autoskill.py run \
  --start "2026-04-17T00:00:00Z" \
  --end "2026-04-17T23:59:59Z" \
  --config scientific-skills/autoskill/config.yaml \
  --skills-dir scientific-skills
```
Proposals land in `~/.autoskill/proposed/<timestamp>/` by default, keeping experimental output out of the skills repo. Pass `--out PATH` to override.
Internally:
- Fetch — `fetch_window` paginates screenpipe's `/search` endpoint and normalizes events to `{ts, app, window_title, text, content_type}`.
- Redact — `redact` scrubs emails, API keys, bearer tokens, and phone numbers from OCR text and window titles, as defense-in-depth over screenpipe's own PII removal.
- Cluster — `segment_sessions` splits on idle gaps (default 10 min) and drops short sessions; `cluster_sessions` groups sessions by app-signature and keeps clusters of size ≥ `min_cluster_size` (default 2). A sketch of the segmentation step follows this list.
- Match — `load_skill_descriptions` reads frontmatter from every `SKILL.md` in `scientific-skills/`; `top_k_matches` ranks each cluster against all skills using local `sentence-transformers` embeddings (cosine similarity).
- Synthesize — `synthesize` prompts the configured LLM backend to classify each cluster as `reuse`, `compose`, or `novel` and emit a SKILL.md body where appropriate.
- Report — writes `<out_dir>/<ts>/report.md`, plus `new-skills/<name>/SKILL.md` or `composition-recipes/<name>/SKILL.md` for each proposal.
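The idle-gap segmentation is simple enough to show in full — a minimal sketch under the defaults above (the real `segment_sessions` signature may differ):

```python
from datetime import timedelta

def segment_sessions(events,
                     idle_gap=timedelta(minutes=10),
                     min_session=timedelta(minutes=5)):
    """Split a ts-sorted list of event dicts into sessions at idle gaps,
    then drop sessions shorter than min_session."""
    sessions, current = [], []
    for ev in events:
        if current and ev["ts"] - current[-1]["ts"] > idle_gap:
            sessions.append(current)
            current = []
        current.append(ev)
    if current:
        sessions.append(current)
    return [s for s in sessions if s[-1]["ts"] - s[0]["ts"] >= min_session]
```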
Add `--dry-run` to stop after clustering; this skips the LLM (and the `sentence-transformers` load), writing only `plan.md` for inspection.
2. Review and promote
Open `~/.autoskill/proposed/<ts>/report.md`, edit drafts in place, and delete anything you don't want. Then:
```bash
python scripts/autoskill.py promote \
  --proposed ~/.autoskill/proposed/2026-04-17T14-30-00 \
  --skills-dir scientific-skills \
  --name zotero-pubmed-helper
```
`promote` moves the directory into `scientific-skills/<name>/`, refusing to overwrite an existing skill. It exits non-zero with a friendly error if the proposal isn't found or the target already exists.
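Those semantics fit in a few lines — a sketch, not `scripts/promote.py` itself (whether drafts sit under `new-skills/` inside the proposal dir is an assumption from the layout above):

```python
import shutil
import sys
from pathlib import Path

def promote(proposed: Path, skills_dir: Path, name: str) -> None:
    src = proposed / "new-skills" / name   # assumed proposal layout
    dst = skills_dir / name
    if not src.is_dir():
        sys.exit(f"error: no proposal named {name!r} under {proposed}")
    if dst.exists():
        sys.exit(f"error: {dst} already exists; refusing to overwrite")
    shutil.move(str(src), str(dst))
```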
Configuration
See `config.yaml` for the full shape. Default values (local-first):
```yaml
backend: local
local:
  endpoint: http://localhost:1234/v1   # LM Studio's Developer server
  model: Gemma-4-31B-it
screenpipe:
  url: http://localhost:3030           # or https://screenpipe.local via Caddy
cluster:
  min_session_minutes: 5
  idle_gap_minutes: 10
  min_cluster_size: 2
```
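Scripts can read this with `pyyaml` (already a dependency); a minimal loader that falls back to the local defaults might look like:

```python
import yaml

with open("scientific-skills/autoskill/config.yaml") as f:
    cfg = yaml.safe_load(f) or {}

backend = cfg.get("backend", "local")
endpoint = cfg.get(backend, {}).get("endpoint", "http://localhost:1234/v1")
```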
To opt into a cloud backend:
```yaml
backend: claude   # or foundry
claude:
  model: claude-opus-4-7
```
Composition recipes vs new skills
- `compose`: the LLM judged that chaining existing skills covers the workflow. The emitted SKILL.md is intentionally thin — frontmatter plus a "Workflow" section that invokes existing skills in order. The same agent runtime that discovered the skill can then invoke it end-to-end.
- `novel`: no combination of existing skills covers it. A fuller SKILL.md is drafted, still following repo conventions (frontmatter, Overview, When to Use, Workflow). The user should always review new-skill drafts before promoting.
Testing
The skill is covered by a small pytest suite at `tests/`. Each script is unit-tested in isolation with dependency injection (mock HTTP transport, stub backend, stub embedder):
```bash
cd scientific-skills/autoskill
python -m pytest tests/ -v
```
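The injection style is worth a concrete flavor — the function and stub below are illustrative, not copied from `tests/`:

```python
# Each stage takes its collaborators as arguments, so a test can pass a
# stub instead of a live LLM backend.
def synthesize(clusters, backend):
    return [backend.classify(c) for c in clusters]

class StubBackend:
    def classify(self, cluster):
        return {"verdict": "reuse", "matched": cluster["top_match"]}

def test_synthesize_uses_injected_backend():
    out = synthesize([{"top_match": "citation-management"}], StubBackend())
    assert out == [{"verdict": "reuse", "matched": "citation-management"}]
```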
Composition with other skills in this repo
The autoskill's embedding index covers all 135 sibling skills. Workflows that look like scientific writing will match `scientific-writing` / `literature-review` / `citation-management`; figure work will match `scientific-schematics` / `generate-image` / `infographics`; slide prep matches `scientific-slides` / `pptx`; and so on. When a cluster scores high against two or three sibling skills, the emitted composition recipe names them explicitly, so the user's future agent invocations use the optimized paths already documented in this repo.
Bundled files
※ List of files included in the ZIP. In addition to `SKILL.md` itself, it may contain reference material, samples, and scripts.
- 📄 SKILL.md (11,480 bytes)
- 📎 references/https-proxy.md (1,518 bytes)
- 📎 references/screenpipe-config.yaml (1,567 bytes)
- 📎 scripts/autoskill.py (1,169 bytes)
- 📎 scripts/backends.py (2,380 bytes)
- 📎 scripts/cluster.py (1,723 bytes)
- 📎 scripts/doctor.py (3,512 bytes)
- 📎 scripts/fetch_window.py (1,172 bytes)
- 📎 scripts/match_skills.py (1,316 bytes)
- 📎 scripts/promote.py (1,581 bytes)
- 📎 scripts/redact.py (1,634 bytes)
- 📎 scripts/run.py (7,381 bytes)
- 📎 scripts/synthesize.py (2,378 bytes)