jpskill.com
💬 Communication / Community

last30days-skill

Deep research engine covering the last 30 days across 10+ sources -- Reddit, X/Twitter, YouTube, TikTok, HackerNews, Polymarket, Bluesky, and the web. Synthesizes findings into grounded, cited reports. Use when: researching trending topics, competitive intelligence, understanding what people are saying about a subject right now.

⚡ Recommended: one-line command install (60 seconds)

Copy the command below and paste it into your terminal (Mac/Linux) or PowerShell (Windows). Download → extraction → placement, fully automatic.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o last30days-skill.zip https://jpskill.com/download/15058.zip && unzip -o last30days-skill.zip && rm last30days-skill.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/15058.zip -OutFile "$d\last30days-skill.zip"; Expand-Archive "$d\last30days-skill.zip" -DestinationPath $d -Force; ri "$d\last30days-skill.zip"

Once finished, restart Claude Code → just make a related request in plain language, such as "What are people saying about X right now?", and the skill activates automatically.

💾 Prefer a manual download (for those uncomfortable with the command line)
  1. Click the blue button below to download last30days-skill.zip
  2. Double-click the ZIP file to extract it → a last30days-skill folder is created
  3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of the skill.

🎯 What this skill can do

The description below explains what this skill will do for you. When you ask Claude for something in this area, the skill activates automatically.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't need to say "use this skill" — it is invoked automatically for related requests.

See the detailed usage guide →
Last updated: 2026-05-18
Retrieved: 2026-05-18
Included files: 1
📖 Original SKILL.md that Claude reads (contents expanded below)

The text below is the original (in English or Chinese) that the AI (Claude) reads. A Japanese translation is being added gradually.

Last 30 Days Research

Overview

Research any topic across Reddit, X/Twitter, Bluesky, Truth Social, YouTube, TikTok, Instagram, Hacker News, Polymarket, and the web. Surfaces what people are actually discussing, recommending, betting on, and debating right now. Synthesizes findings into a grounded report with citations and engagement stats.

Instructions

Step 1: Parse User Intent

Before running research, extract from the user's input:

  • TOPIC: What they want to learn about
  • TARGET_TOOL: Where they will use the results (if specified, otherwise "unknown")
  • QUERY_TYPE: One of RECOMMENDATIONS, NEWS, COMPARISON, PROMPTING, or GENERAL

Display your parsing to the user before calling any tools:

I'll research {TOPIC} across Reddit, X, Bluesky, YouTube, TikTok, and the web.

Parsed intent:
- TOPIC = {TOPIC}
- TARGET_TOOL = {TARGET_TOOL or "unknown"}
- QUERY_TYPE = {QUERY_TYPE}

Research typically takes 2-8 minutes. Starting now.
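The Step 1 parsing could be sketched roughly as follows. This is a hypothetical illustration, not the skill's actual logic: the function name `parse_intent` and the keyword lists used to pick a QUERY_TYPE are assumptions.

```python
def parse_intent(user_input: str) -> dict:
    """Classify a research request into the skill's intent fields (sketch)."""
    text = user_input.lower()
    # Keyword heuristics are illustrative; a real parser would be richer.
    if " vs " in text or "compare" in text:
        query_type = "COMPARISON"
    elif any(w in text for w in ("best", "recommend", "most popular")):
        query_type = "RECOMMENDATIONS"
    elif any(w in text for w in ("news", "announcement", "update")):
        query_type = "NEWS"
    elif "prompt" in text:
        query_type = "PROMPTING"
    else:
        query_type = "GENERAL"
    return {
        "TOPIC": user_input.strip(),
        "TARGET_TOOL": "unknown",  # filled in only if the user names one
        "QUERY_TYPE": query_type,
    }
```

For example, "Plaud vs Granola for AI meeting notes" would classify as COMPARISON, while "What are developers saying about Bun runtime?" falls through to GENERAL.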

Step 2: Resolve X Handle (Optional)

If the topic could have its own X/Twitter account (people, brands, products, companies), do a quick WebSearch to find their handle. Pass it as --x-handle={handle} to search their posts directly. Skip for generic concepts.

Step 3: Run the Research Script

Run in foreground with a 5-minute timeout:

for dir in "." "${CLAUDE_PLUGIN_ROOT:-}" "$HOME/.claude/skills/last30days" \
  "$HOME/.agents/skills/last30days" "$HOME/.codex/skills/last30days"; do
  [ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done

python3 "${SKILL_ROOT}/scripts/last30days.py" $ARGUMENTS --emit=compact --no-native-web --save-dir=~/Documents/Last30Days

Read the entire output -- it contains Reddit, X, YouTube, TikTok, Instagram, HN, Polymarket, and web sections.

Step 4: Supplement with WebSearch

After the script finishes, run WebSearch queries based on QUERY_TYPE:

  • RECOMMENDATIONS: best {TOPIC} recommendations, most popular {TOPIC}
  • NEWS: {TOPIC} news 2026, {TOPIC} announcement update
  • PROMPTING: {TOPIC} prompts examples 2026
  • GENERAL: {TOPIC} 2026, {TOPIC} discussion
  • COMPARISON: Run three passes (TOPIC_A alone, TOPIC_B alone, "A vs B")

Exclude reddit.com and x.com (already covered by script).
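A minimal sketch of how Step 4's queries could be built from the templates above. The function name `build_queries` and the use of `-site:` operators to express the reddit.com/x.com exclusion are assumptions; the skill only specifies the query wording and the exclusion itself.

```python
# Templates mirror the QUERY_TYPE list above; {t} is the topic placeholder.
QUERY_TEMPLATES = {
    "RECOMMENDATIONS": ["best {t} recommendations", "most popular {t}"],
    "NEWS": ["{t} news 2026", "{t} announcement update"],
    "PROMPTING": ["{t} prompts examples 2026"],
    "GENERAL": ["{t} 2026", "{t} discussion"],
}

def build_queries(topic: str, query_type: str) -> list[str]:
    """Expand a topic into the WebSearch queries for its QUERY_TYPE (sketch)."""
    # COMPARISON runs three passes: each side alone, then the head-to-head.
    if query_type == "COMPARISON" and " vs " in topic:
        a, b = topic.split(" vs ", 1)
        return [a.strip(), b.strip(), topic]
    # Exclude reddit.com and x.com, which the script has already covered.
    return [f"{q.format(t=topic)} -site:reddit.com -site:x.com"
            for q in QUERY_TEMPLATES[query_type]]
```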

Step 5: Synthesize and Present

Weight sources by engagement signals: Reddit/X highest (upvotes, likes), YouTube high (views, transcripts), TikTok high (viral signal), web lowest (no engagement data). Cross-platform signals are strongest evidence.
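One way to make that weighting concrete is a simple scoring function. The numeric weights, the log scaling, and the cross-platform bonus below are all assumptions for illustration — the skill only specifies the relative ordering (Reddit/X highest, web lowest, cross-platform strongest).

```python
import math

# Illustrative weights; the skill specifies only the ordering, not values.
PLATFORM_WEIGHT = {
    "reddit": 1.0, "x": 1.0,        # engagement-backed: upvotes, likes
    "youtube": 0.8, "tiktok": 0.8,  # views, viral signal
    "web": 0.3,                     # no engagement data
}

def score(finding: dict) -> float:
    """Rank a finding by platform weight times log-scaled engagement."""
    base = PLATFORM_WEIGHT.get(finding["platform"], 0.3)
    engagement = finding.get("engagement", 0)  # upvotes, likes, or views
    # Cross-platform corroboration is the strongest evidence, so add a bonus.
    bonus = 0.5 if finding.get("cross_platform") else 0.0
    return base * math.log1p(engagement) + bonus
```

Log scaling keeps one viral post from drowning out a consistent pattern across many mid-engagement threads.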

Present in this order:

  1. "What I learned" synthesis with citations (prefer @handles and r/subreddits over web sources)
  2. Stats block showing counts per platform (omit platforms with 0 results)
  3. Invitation with 2-3 specific follow-up suggestions based on actual findings

Cite sources sparingly: 1 source per pattern, short format ("per @handle" or "per r/sub"). Never paste raw URLs. Use publication names for web sources.

Step 6: Handle Follow-ups

After presenting results, stay in expert mode. Answer follow-up questions from your research without new searches. Only run new research if the user asks about a different topic. Write prompts only when explicitly requested.

Options: --days=N (lookback period), --quick (fewer sources), --deep (comprehensive), --agent (non-interactive output).

Examples

Example 1: Research a Developer Framework

User asks: "What are developers saying about Bun runtime?"

Parsed intent:
- TOPIC = Bun runtime
- TARGET_TOOL = unknown
- QUERY_TYPE = GENERAL

Script returns 22 Reddit threads (1,840 upvotes), 35 X posts (4,200 likes), 8 YouTube videos (120K views). Key findings: developers praise startup speed (per @jaraborner), Bun 1.2 announcement drove Reddit discussion (per r/javascript), YouTube benchmarks show 3x faster cold starts vs Node (per Fireship). Pattern: adoption growing in CLI tools but not production servers yet (per r/node).

Example 2: Competitive Comparison

User asks: "Plaud vs Granola for AI meeting notes"

QUERY_TYPE = COMPARISON. Run three research passes: "Plaud" alone, "Granola" alone, "Plaud vs Granola". Synthesize as side-by-side comparison with community sentiment, strengths, weaknesses, and head-to-head table. Present specific odds and mention counts: "Plaud mentioned 18x across Reddit/X with mixed sentiment on hardware quality; Granola mentioned 12x with strong praise for transcript accuracy (per @sarahk_ai)."

Guidelines

  • Always display parsed intent before running any tools
  • Run the research script in foreground, never in background
  • Read the entire script output -- missing sections produce incomplete stats
  • Weight engagement-backed sources (Reddit, X, YouTube) over web articles
  • Never paste raw URLs in output -- use publication/site names
  • For RECOMMENDATIONS queries, extract specific product/tool names, not generic advice
  • Polymarket odds are high-signal data -- weave them into narrative as supporting evidence
  • Omit any platform line from stats that returned 0 results
  • Stay in expert mode after presenting results -- answer follow-ups from existing research
  • Only credential used is SCRAPECREATORS_API_KEY; X/Bluesky/Truth Social tokens are optional
  • The skill reads public data only and does not post, like, or modify content on any platform