
🛠️ AWS Cost Cleanup

aws-cost-cleanup

Automated cleanup of unused AWS (Amazon Web Services) resources to reduce costs

⏱ Library research + integration: half a day → 1 hour


📜 Original English description (for reference)

Automated cleanup of unused AWS resources to reduce costs

🇯🇵 Commentary for Japanese creators

In a nutshell

Automatically finds and cleans up unused AWS (Amazon Web Services) resources to reduce costs

※ Supplementary commentary by the jpskill.com editorial team for Japanese business use. It is reference information, independent of the Skill's actual behavior.

⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of the Skill.

🎯 What this Skill can do

The description below explains what this Skill can do for you. When you ask Claude for work in this area, the Skill is invoked automatically.

📦 Installation (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. Even without saying "use this Skill to…", it is called automatically for related requests.

See the detailed usage guide →

Last updated: 2026-05-17
Retrieved: 2026-05-17
Bundled files: 1

💬 Just say something like this: sample prompts

  • Use AWS Cost Cleanup to show me a minimal working example
  • Explain the main ways to use AWS Cost Cleanup and what to watch out for
  • Show me how to integrate AWS Cost Cleanup into an existing project

Paste any of these into Claude Code and the Skill will fire automatically.

📖 The original SKILL.md that Claude reads (contents expanded)

This body is the original text (in English or Chinese) that the AI (Claude) reads. A Japanese translation is being added over time.

AWS Cost Cleanup

Automate the identification and removal of unused AWS resources to eliminate waste.

When to Use This Skill

Use this skill when you need to automatically clean up unused AWS resources to reduce costs and eliminate waste.

Automated Cleanup Targets

Storage

  • Unattached EBS volumes
  • Old EBS snapshots (>90 days)
  • Incomplete multipart S3 uploads
  • Old S3 versions in versioned buckets

Compute

  • Stopped EC2 instances (>30 days)
  • Unused AMIs and associated snapshots
  • Unused Elastic IPs

Networking

  • Unused Elastic Load Balancers
  • Unused NAT Gateways
  • Orphaned ENIs

Cleanup Scripts

Safe Cleanup (Dry-Run First)

#!/bin/bash
# cleanup-unused-ebs.sh

echo "Finding unattached EBS volumes..."
VOLUMES=$(aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].VolumeId' \
  --output text)

for vol in $VOLUMES; do
  echo "Would delete: $vol"
  # Uncomment to actually delete:
  # aws ec2 delete-volume --volume-id $vol
done

#!/bin/bash
# cleanup-old-snapshots.sh

CUTOFF_DATE=$(date -d '90 days ago' --iso-8601)

aws ec2 describe-snapshots --owner-ids self \
  --query "Snapshots[?StartTime<='$CUTOFF_DATE'].[SnapshotId,StartTime,VolumeSize]" \
  --output text | while read snap_id start_time size; do

  echo "Snapshot: $snap_id (Created: $start_time, Size: ${size}GB)"
  # Uncomment to delete:
  # aws ec2 delete-snapshot --snapshot-id $snap_id
done

#!/bin/bash
# release-unused-eips.sh

aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].[AllocationId,PublicIp]' \
  --output text | while read alloc_id public_ip; do

  echo "Would release: $public_ip ($alloc_id)"
  # Uncomment to release:
  # aws ec2 release-address --allocation-id $alloc_id
done
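
The scripts above cover volumes, snapshots, and Elastic IPs. The other targets listed earlier (long-stopped instances, orphaned ENIs) can be surveyed the same dry-run way. Below is a minimal, report-only boto3 sketch; it makes no changes, and region/credentials come from your default AWS configuration:

#!/usr/bin/env python3
# report-stopped-and-orphaned.py (report only, deletes nothing)

import boto3

ec2 = boto3.client('ec2')

# Stopped EC2 instances: StateTransitionReason usually embeds the stop time,
# e.g. "User initiated (2024-01-01 12:00:00 GMT)"; review it manually.
paginator = ec2.get_paginator('describe_instances')
for page in paginator.paginate(
        Filters=[{'Name': 'instance-state-name', 'Values': ['stopped']}]):
    for reservation in page['Reservations']:
        for inst in reservation['Instances']:
            print(f"Stopped instance: {inst['InstanceId']} "
                  f"({inst.get('StateTransitionReason', 'unknown stop time')})")

# Orphaned ENIs: network interfaces in the "available" state are attached to nothing.
enis = ec2.describe_network_interfaces(
    Filters=[{'Name': 'status', 'Values': ['available']}]
)
for eni in enis['NetworkInterfaces']:
    print(f"Orphaned ENI: {eni['NetworkInterfaceId']} ({eni.get('Description', '')})")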

S3 Lifecycle Automation

# Apply lifecycle policy to transition old objects to cheaper storage
cat > lifecycle-policy.json <<EOF
{
  "Rules": [
    {
      "Id": "Archive old objects",
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 90,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 180,
          "StorageClass": "GLACIER"
        }
      ],
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30
      },
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle-policy.json
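
The AbortIncompleteMultipartUpload rule above takes care of future incomplete uploads; to see how many are currently pending in a bucket, a minimal boto3 sketch (the bucket name is a placeholder) could look like this:

#!/usr/bin/env python3
# list-incomplete-multipart-uploads.py (report only)

import boto3

s3 = boto3.client('s3')
BUCKET = 'my-bucket'  # placeholder: replace with your bucket name

count = 0
for page in s3.get_paginator('list_multipart_uploads').paginate(Bucket=BUCKET):
    for upload in page.get('Uploads', []):
        count += 1
        print(f"Incomplete upload: {upload['Key']} (started {upload['Initiated']})")
        # To abort immediately instead of waiting for the lifecycle rule:
        # s3.abort_multipart_upload(Bucket=BUCKET, Key=upload['Key'],
        #                           UploadId=upload['UploadId'])

print(f"Total incomplete multipart uploads: {count}")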

Cost Impact Calculator

#!/usr/bin/env python3
# calculate-savings.py

import boto3
from datetime import datetime, timedelta

ec2 = boto3.client('ec2')

# Calculate EBS volume savings
volumes = ec2.describe_volumes(
    Filters=[{'Name': 'status', 'Values': ['available']}]
)

total_size = sum(v['Size'] for v in volumes['Volumes'])
monthly_cost = total_size * 0.10  # ~$0.10/GB-month for gp2; gp3 is ~$0.08 in us-east-1

print(f"Unattached EBS Volumes: {len(volumes['Volumes'])}")
print(f"Total Size: {total_size} GB")
print(f"Monthly Savings: ${monthly_cost:.2f}")

# Calculate Elastic IP savings
addresses = ec2.describe_addresses()
unused = [a for a in addresses['Addresses'] if 'AssociationId' not in a]

eip_cost = len(unused) * 3.65  # $0.005/hour * 730 hours
print(f"\nUnused Elastic IPs: {len(unused)}")
print(f"Monthly Savings: ${eip_cost:.2f}")

print(f"\nTotal Monthly Savings: ${monthly_cost + eip_cost:.2f}")
print(f"Annual Savings: ${(monthly_cost + eip_cost) * 12:.2f}")

Automated Cleanup Lambda

import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Delete unattached volumes older than 7 days
    volumes = ec2.describe_volumes(
        Filters=[{'Name': 'status', 'Values': ['available']}]
    )

    cutoff = datetime.now() - timedelta(days=7)
    deleted = 0

    for vol in volumes['Volumes']:
        create_time = vol['CreateTime'].replace(tzinfo=None)
        if create_time < cutoff:
            try:
                ec2.delete_volume(VolumeId=vol['VolumeId'])
                deleted += 1
                print(f"Deleted volume: {vol['VolumeId']}")
            except Exception as e:
                print(f"Error deleting {vol['VolumeId']}: {e}")

    return {
        'statusCode': 200,
        'body': f'Deleted {deleted} volumes'
    }
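
To run this handler on a schedule (for example, a weekly cleanup), one option is an EventBridge rule. A minimal sketch, assuming the function is already deployed; the function name, region, and account ID below are placeholders:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Placeholders: replace with your function's actual name and ARN
FUNCTION_NAME = 'aws-cost-cleanup'
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:aws-cost-cleanup'

# Trigger the cleanup Lambda once a week
rule_arn = events.put_rule(
    Name='weekly-cost-cleanup',
    ScheduleExpression='rate(7 days)',
    State='ENABLED',
)['RuleArn']

events.put_targets(
    Rule='weekly-cost-cleanup',
    Targets=[{'Id': 'cleanup-lambda', 'Arn': FUNCTION_ARN}],
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId='weekly-cost-cleanup-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)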

Cleanup Workflow

  1. Discovery Phase (Read-only)

    • Run all describe commands
    • Generate cost impact report
    • Review with team
  2. Validation Phase

    • Verify resources are truly unused
    • Check for dependencies
    • Notify resource owners
  3. Execution Phase (Dry-run first)

    • Run cleanup scripts with dry-run (see the DryRun sketch after this list)
    • Review proposed changes
    • Execute actual cleanup
  4. Verification Phase

    • Confirm deletions
    • Monitor for issues
    • Document savings
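
For the dry-run step, many mutating EC2 API calls accept a DryRun flag that validates permissions and the request without making any change. A minimal boto3 sketch (the volume ID is a placeholder):

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client('ec2')
VOLUME_ID = 'vol-0123456789abcdef0'  # placeholder

try:
    # DryRun=True checks the request without actually deleting the volume
    ec2.delete_volume(VolumeId=VOLUME_ID, DryRun=True)
except ClientError as e:
    if e.response['Error']['Code'] == 'DryRunOperation':
        print(f"Dry run OK: {VOLUME_ID} would be deleted")
    else:
        raise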

Safety Checklist

  • [ ] Run in dry-run mode first
  • [ ] Verify resources have no dependencies
  • [ ] Check resource tags for ownership
  • [ ] Notify stakeholders before deletion
  • [ ] Create snapshots of critical data (see the sketch after this checklist)
  • [ ] Test in non-production first
  • [ ] Have rollback plan ready
  • [ ] Document all deletions
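
One way to back the "create snapshots" item with code: take a final snapshot and delete the volume only after the snapshot completes. A minimal boto3 sketch (the volume ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')
VOLUME_ID = 'vol-0123456789abcdef0'  # placeholder

# Take a final snapshot before deleting the volume
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description=f'Final snapshot before cleanup of {VOLUME_ID}',
)

# Wait until the snapshot is complete, then delete the volume
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot['SnapshotId']])
ec2.delete_volume(VolumeId=VOLUME_ID)
print(f"Deleted {VOLUME_ID} after snapshot {snapshot['SnapshotId']}")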

Example Prompts

Discovery

  • "Find all unused resources and calculate potential savings"
  • "Generate a cleanup report for my AWS account"
  • "What resources can I safely delete?"

Execution

  • "Create a script to cleanup unattached EBS volumes"
  • "Delete all snapshots older than 90 days"
  • "Release unused Elastic IPs"

Automation

  • "Set up automated cleanup for old snapshots"
  • "Create a Lambda function for weekly cleanup"
  • "Schedule monthly resource cleanup"

Integration with AWS Organizations

# Run cleanup across multiple accounts
for account in $(aws organizations list-accounts \
  --query 'Accounts[*].Id' --output text); do

  echo "Checking account: $account"
  aws ec2 describe-volumes \
    --filters Name=status,Values=available \
    --profile account-$account
done
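
The loop above assumes a named CLI profile exists for every member account. An alternative is to assume a role in each account instead. A minimal boto3 sketch, assuming the default OrganizationAccountAccessRole (or another role you have provisioned) exists in every account:

import boto3

ROLE_NAME = 'OrganizationAccountAccessRole'  # assumption: adjust to your role

org = boto3.client('organizations')
sts = boto3.client('sts')

for page in org.get_paginator('list_accounts').paginate():
    for account in page['Accounts']:
        if account['Status'] != 'ACTIVE':
            continue
        # Assume the cross-account role and build an EC2 client with its credentials
        creds = sts.assume_role(
            RoleArn=f"arn:aws:iam::{account['Id']}:role/{ROLE_NAME}",
            RoleSessionName='cost-cleanup-audit',
        )['Credentials']

        ec2 = boto3.client(
            'ec2',
            aws_access_key_id=creds['AccessKeyId'],
            aws_secret_access_key=creds['SecretAccessKey'],
            aws_session_token=creds['SessionToken'],
        )
        volumes = ec2.describe_volumes(
            Filters=[{'Name': 'status', 'Values': ['available']}]
        )['Volumes']
        print(f"Account {account['Id']}: {len(volumes)} unattached volumes")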

Monitoring and Alerts

# Create CloudWatch alarm on estimated charges
# Note: AWS/Billing metrics are published only in us-east-1 and require
# billing alerts to be enabled in the account settings
aws cloudwatch put-metric-alarm \
  --alarm-name high-cost-alert \
  --alarm-description "Alert when estimated charges exceed threshold" \
  --metric-name EstimatedCharges \
  --namespace AWS/Billing \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 86400 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold

Best Practices

  • Schedule cleanup during maintenance windows
  • Always create final snapshots before deletion
  • Use resource tags to identify cleanup candidates (see the sketch after this list)
  • Implement approval workflow for production
  • Log all cleanup actions for audit
  • Set up cost anomaly detection
  • Review cleanup results weekly
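
For the tag-based practice above, one simple convention is to treat unattached volumes without an owner tag as cleanup candidates. A minimal boto3 sketch; the Owner tag key is an assumption, substitute whatever convention your team uses:

import boto3

ec2 = boto3.client('ec2')
OWNER_TAG = 'Owner'  # assumption: use your own tagging convention

volumes = ec2.describe_volumes(
    Filters=[{'Name': 'status', 'Values': ['available']}]
)['Volumes']

for vol in volumes:
    tags = {t['Key']: t['Value'] for t in vol.get('Tags', [])}
    if OWNER_TAG not in tags:
        print(f"Cleanup candidate (no {OWNER_TAG} tag): {vol['VolumeId']}")
    else:
        print(f"Notify {tags[OWNER_TAG]} before touching {vol['VolumeId']}")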

Risk Mitigation

Medium Risk Actions:

  • Deleting unattached volumes (ensure no planned reattachment)
  • Removing old snapshots (verify no compliance requirements)
  • Releasing Elastic IPs (check DNS records)

Always:

  • Maintain 30-day backup retention
  • Use AWS Backup for critical resources
  • Test restore procedures
  • Document cleanup decisions

Kiro CLI Integration

# Analyze and cleanup in one command
kiro-cli chat "Use aws-cost-cleanup to find and remove unused resources"

# Generate cleanup script
kiro-cli chat "Create a safe cleanup script for my AWS account"

# Schedule automated cleanup
kiro-cli chat "Set up weekly automated cleanup using aws-cost-cleanup"

Additional Resources

Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.