rig
A Skill that uses Rig, the Rust AI framework, to streamline LLM application development.
📜 Original English description (for reference)
Build LLM-powered applications with Rig, the Rust AI framework. Use when creating agents, RAG pipelines, tool-calling workflows, structured extraction, or streaming completions. Covers all providers with a unified API.
🇯🇵 Notes for Japanese creators
※ Supplementary commentary by the jpskill.com editorial team for Japanese business settings. It is reference information independent of the Skill's actual behavior.
⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of the Skill.
🎯 What this Skill can do
The description below explains what this Skill will do for you. It activates automatically when you give Claude a request in this area.
📦 Installation (3 steps)
- 1. Click the "Download" button above to get the .skill file
- 2. Change the file extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder in .claude/skills/ under your home folder
  - macOS / Linux: ~/.claude/skills/
  - Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. Even without saying "Use this Skill...", it will be invoked automatically for relevant requests.
See the detailed usage guide →

- Last updated: 2026-05-17
- Retrieved: 2026-05-17
- Bundled files: 1
📖 Original SKILL.md read by Claude
This body is the original text (English or Chinese) that the AI (Claude) reads. A Japanese translation is being added progressively.
Building with Rig
Rig is a Rust library for building LLM-powered applications with a provider-agnostic API. All patterns use the builder pattern and async/await via tokio.
Quick Start
```rust
use rig::completion::Prompt;
use rig::providers::openai;

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    let client = openai::Client::from_env();

    let agent = client
        .agent(openai::GPT_4O)
        .preamble("You are a helpful assistant.")
        .build();

    let response = agent.prompt("Hello!").await?;
    println!("{}", response);
    Ok(())
}
```
Core Patterns
1. Simple Agent
```rust
let agent = client.agent(openai::GPT_4O)
    .preamble("System prompt")
    .temperature(0.7)
    .max_tokens(2000)
    .build();

let response = agent.prompt("Your question").await?;
```
2. Agent with Tools
Define a tool by implementing the Tool trait, then attach it:
```rust
let agent = client.agent(openai::GPT_4O)
    .preamble("You can use tools.")
    .tool(MyTool)
    .build();
```
See references/tools.md for the full Tool trait signature.
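As a rough illustration, a complete tool might look like the sketch below. The `Adder` tool, its argument struct, and `MathError` are hypothetical, and the associated types and async methods follow the rig-core calculator example from memory; confirm the exact signature in references/tools.md for your rig version.

```rust
use rig::completion::ToolDefinition;
use rig::tool::Tool;
use serde::Deserialize;
use serde_json::json;

// Hypothetical argument struct, deserialized from the model's JSON tool call.
#[derive(Deserialize)]
struct AddArgs {
    a: i32,
    b: i32,
}

// Hypothetical error type; the Tool trait requires a real error type.
#[derive(Debug, thiserror::Error)]
#[error("math error")]
struct MathError;

struct Adder;

impl Tool for Adder {
    const NAME: &'static str = "add";
    type Error = MathError;
    type Args = AddArgs;
    type Output = i32;

    // Describes the tool to the model, including a JSON Schema for its arguments.
    async fn definition(&self, _prompt: String) -> ToolDefinition {
        ToolDefinition {
            name: Self::NAME.to_string(),
            description: "Add two numbers".to_string(),
            parameters: json!({
                "type": "object",
                "properties": {
                    "a": { "type": "integer" },
                    "b": { "type": "integer" }
                },
                "required": ["a", "b"]
            }),
        }
    }

    // Executed when the model calls the tool with parsed arguments.
    async fn call(&self, args: Self::Args) -> Result<Self::Output, Self::Error> {
        Ok(args.a + args.b)
    }
}
```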
3. RAG (Retrieval-Augmented Generation)
```rust
let embedding_model = client.embedding_model(openai::TEXT_EMBEDDING_ADA_002);
let index = vector_store.index(embedding_model);

let agent = client.agent(openai::GPT_4O)
    .preamble("Answer using the provided context.")
    .dynamic_context(5, index) // top-5 similar docs per query
    .build();
```
See references/rag.md for vector store setup and the Embed derive macro.
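For a self-contained setup, the in-memory store bundled with rig-core can stand in for `vector_store` above. The sketch below follows the rig-core RAG example from memory; the `Word` struct and its document text are made up, and the `Embed` derive and `EmbeddingsBuilder` details are covered in references/rag.md.

```rust
use rig::embeddings::EmbeddingsBuilder;
use rig::vector_store::in_memory_store::InMemoryVectorStore;
use rig::Embed;

// Hypothetical document type; the #[embed] field is what gets vectorized.
#[derive(Embed, serde::Serialize, Clone, Debug, Eq, PartialEq, Default)]
struct Word {
    id: String,
    #[embed]
    definition: String,
}

let embedding_model = client.embedding_model(openai::TEXT_EMBEDDING_ADA_002);

// Embed the documents up front, then load them into the in-memory store.
let embeddings = EmbeddingsBuilder::new(embedding_model.clone())
    .documents(vec![
        Word { id: "0".into(), definition: "A glarb is a fictional tool.".into() },
    ])?
    .build()
    .await?;

let vector_store = InMemoryVectorStore::from_documents(embeddings);
let index = vector_store.index(embedding_model);
```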
4. Streaming
```rust
use futures::StreamExt;
use rig::streaming::StreamedAssistantContent;
use rig::agent::prompt_request::streaming::MultiTurnStreamItem;

let mut stream = agent.stream_prompt("Tell me a story").await?;

while let Some(chunk) = stream.next().await {
    match chunk? {
        MultiTurnStreamItem::StreamAssistantItem(
            StreamedAssistantContent::Text(text)
        ) => print!("{}", text.text),
        MultiTurnStreamItem::FinalResponse(resp) => {
            println!("\n{}", resp.response());
        }
        _ => {}
    }
}
```
5. Structured Extraction
```rust
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(Deserialize, Serialize, JsonSchema)]
struct Person {
    pub name: Option<String>,
    pub age: Option<u8>,
}

let extractor = client.extractor::<Person>(openai::GPT_4O).build();
let person = extractor.extract("John is 30 years old.").await?;
```
6. Chat with History
```rust
use rig::completion::{Chat, Message};

let history = vec![
    Message::from("Hi, I'm Alice."),
    // ...previous messages
];

let response = agent.chat("What's my name?", history).await?;
```
Agent Builder Methods
| Method | Description |
|---|---|
| `.preamble(str)` | Set system prompt |
| `.context(str)` | Add static context document |
| `.dynamic_context(n, index)` | Add RAG with top-n retrieval |
| `.tool(impl Tool)` | Attach a callable tool |
| `.tools(Vec<Box<dyn ToolDyn>>)` | Attach multiple tools |
| `.temperature(f64)` | Set temperature (0.0-1.0) |
| `.max_tokens(u64)` | Set max output tokens |
| `.additional_params(json!{...})` | Provider-specific params |
| `.tool_choice(ToolChoice)` | Control tool usage |
| `.build()` | Build the agent |
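As a rough sketch of how several of these methods compose (the glossary text and the `top_p` parameter are illustrative; check your provider's documentation for which extra params it accepts):

```rust
use serde_json::json;

let agent = client
    .agent(openai::GPT_4O)
    .preamble("You are a concise assistant.")
    .context("Glossary: 'ARR' means annual recurring revenue.") // static context doc
    .temperature(0.2)
    .max_tokens(1024)
    .additional_params(json!({ "top_p": 0.9 })) // provider-specific
    .build();
```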
Available Providers
Create a client with `ProviderName::Client::from_env()` or `ProviderName::Client::new("key")`, where `ProviderName` is the module from the table below.
| Provider | Module | Example Model Constant |
|---|---|---|
| OpenAI | `openai` | `GPT_4O`, `GPT_4O_MINI` |
| Anthropic | `anthropic` | `CLAUDE_4_OPUS`, `CLAUDE_4_SONNET` |
| Cohere | `cohere` | `COMMAND_R_PLUS` |
| Mistral | `mistral` | `MISTRAL_LARGE` |
| Gemini | `gemini` | model string |
| Groq | `groq` | model string |
| Ollama | `ollama` | model string |
| DeepSeek | `deepseek` | model string |
| xAI | `xai` | model string |
| Together | `together` | model string |
| Perplexity | `perplexity` | model string |
| OpenRouter | `openrouter` | model string |
| HuggingFace | `huggingface` | model string |
| Azure | `azure` | deployment string |
| Hyperbolic | `hyperbolic` | model string |
| Galadriel | `galadriel` | model string |
| Moonshot | `moonshot` | model string |
| Mira | `mira` | model string |
| Voyage AI | `voyageai` | embeddings only |
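For example, switching the Quick Start to Anthropic only changes the client and model constant. A sketch: `from_env` reads the provider's API-key environment variable (`ANTHROPIC_API_KEY` here), and `max_tokens` is set because the Anthropic API requires it.

```rust
use rig::providers::anthropic;

let client = anthropic::Client::from_env(); // expects ANTHROPIC_API_KEY

let agent = client
    .agent(anthropic::CLAUDE_4_SONNET)
    .preamble("You are a helpful assistant.")
    .max_tokens(1024) // required by the Anthropic API
    .build();
```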
Vector Store Crates
| Backend | Crate |
|---|---|
| In-memory | `rig-core` (built-in) |
| MongoDB | `rig-mongodb` |
| LanceDB | `rig-lancedb` |
| Qdrant | `rig-qdrant` |
| SQLite | `rig-sqlite` |
| Neo4j | `rig-neo4j` |
| Milvus | `rig-milvus` |
| SurrealDB | `rig-surrealdb` |
Key Rules
- All async code runs on tokio.
- Use `WasmCompatSend`/`WasmCompatSync` instead of raw `Send`/`Sync` for WASM compatibility.
- Use proper error types with `thiserror`; never `Result<(), String>`.
- Avoid `.unwrap()`; use the `?` operator.
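A minimal sketch of the error-type rule, using a hypothetical tool error; `thiserror` derives `std::error::Error`, so the type also satisfies the `Error` bound on the Tool trait.

```rust
use thiserror::Error;

// Hypothetical error type for a tool; each variant carries a display message.
#[derive(Debug, Error)]
enum SearchToolError {
    #[error("query was empty")]
    EmptyQuery,
    #[error("backend unavailable: {0}")]
    Backend(String),
}

fn validate(query: &str) -> Result<(), SearchToolError> {
    if query.is_empty() {
        return Err(SearchToolError::EmptyQuery); // typed error, not a bare String
    }
    Ok(())
}
```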
Further Reference
Detailed API documentation (available when installed via Claude Code skills):
- tools — Tool trait, ToolDefinition, ToolEmbedding, attachment patterns
- rag — Vector stores, Embed derive, EmbeddingsBuilder, search requests
- providers — Provider-specific initialization, model constants, env vars
- patterns — Multi-agent, hooks, streaming details, chaining, extraction
For the full reference, see the Rig examples at rig-core/examples/ or https://docs.rig.rs