Media Pipelines on S3: Presigned Uploads, Lifecycle Policies, and Cost Controls at Scale
Eliminate server upload bottlenecks with presigned URLs, enable massive file uploads via multipart, and automate storage archival for significant ongoing savings.

Context: The Challenge of Media Processing at Scale

Building a media-heavy application requires handling large files (videos, images, documents) efficiently without bottlenecking your application servers. Whether you're processing video content, managing user-generated media, or handling document storage, the traditional server-proxied approach creates multiple points of failure.

The Breaking Point

Traditional architectures face several challenges:

  • Server memory pressure from concurrent large file uploads
  • Timeout issues on larger files during peak usage
  • No resume capability for failed uploads
  • Rising infrastructure costs from vertical scaling
  • Unreliable uploads for clients with intermittent connectivity

What Changed

After migrating to direct S3 uploads with presigned URLs and implementing intelligent lifecycle policies:

  • Dramatically reduced upload infrastructure costs
  • Enabled seamless uploads of massive files via multipart with resume capability
  • Achieved significantly faster upload speeds by eliminating the server proxy hop
  • Implemented automatic archival: recent content stays hot, older content moves to cold storage
  • Provided clients reliable upload mechanisms that handle network interruptions gracefully

This guide shares a production-ready architecture, including code, cost analysis, and lessons learned from processing media at scale.


Overview

This guide covers direct-to-S3 uploads, multipart uploads for large files, automated lifecycle management, and cost optimization for production-grade media pipelines.

Target: Media platforms, content management systems, video streaming services
Scale: High-volume uploads, multi-GB files, petabyte-scale storage
Cost Impact: Significant reduction in upload costs plus substantial storage cost savings over time


Table of Contents

  1. The Problem with Server-Proxied Uploads
  2. Presigned URLs for Direct Upload
  3. Multipart Upload for Large Files
  4. Lifecycle Policies for Cost Optimization
  5. Production Deployment

1. The Problem with Server-Proxied Uploads

Current Implementation Issues

Your typical controller proxies file uploads through the server:

def upload_files(self):
    files = request.files.getlist('files')
    # Entire file payload is buffered in server memory - BOTTLENECK
    result = self.media_service.upload_media_files(files)
    return jsonify(result), 200

Critical Problems

Issue | Impact | Example
Memory Pressure | Large file = large server RAM footprint | Server OOM crashes under concurrent uploads
Network Double-Hop | Client → Server → S3 | 2x bandwidth costs + 2x latency
Upload Limits | Framework request size limits | Large files fail at the proxy layer
Timeout Risks | Long uploads exceed gateway timeouts | Massive files fail after the timeout limit
Vertical Scaling Trap | Must scale servers for I/O, not compute | Significant cost increase for traffic growth

Cost Impact

Traditional proxied uploads require:

  • Server egress charges (Server → S3 transfer)
  • Oversized compute instances for I/O handling
  • Load balancer data processing fees

Direct uploads eliminate:

  • Server egress (Client → S3 direct)
  • I/O-driven compute scaling
  • Most load balancer processing

Key Insight: The cost differential depends on your architecture. Cross-region or internet gateway deployments see maximum savings. Same-region with VPC endpoint deployments still save significantly through compute reduction.
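
To estimate the differential for your own traffic, a back-of-the-envelope sketch can help. The per-GB rates below are placeholders, not AWS prices, and the model ignores S3 request charges, which apply in both cases:

def monthly_transfer_cost(gb_uploaded: float, proxied: bool,
                          egress_rate: float = 0.10,  # placeholder $/GB for Server -> S3 over the internet
                          lb_rate: float = 0.01) -> float:  # placeholder $/GB for LB data processing
    """Rough upload transfer cost model (placeholder rates, not AWS pricing)"""
    if proxied:
        # Client -> Server ingress is free; Server -> S3 transfer and load balancer
        # processing are billed per GB (same-region traffic over a VPC endpoint
        # avoids the transfer portion, as noted above)
        return gb_uploaded * (egress_rate + lb_rate)
    return 0.0  # direct Client -> S3: ingress is free

# Example: 10 TB of uploads per month
print(monthly_transfer_cost(10_240, proxied=True))   # pays for the extra hop
print(monthly_transfer_cost(10_240, proxied=False))  # 0.0 transfer cost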


2. Presigned URLs for Direct Upload

How It Works

Traditional Flow:
  Step 1: Client sends file to Server (large payload)
  Step 2: Server holds file in memory (BOTTLENECK)
  Step 3: Server uploads file to S3 (large payload)
  Total hops: 2 | Server load: full file I/O | Bandwidth cost: 2x

Direct Upload Flow:
  Step 1: Client requests presigned URL from Server
  Step 2: Server generates signed URL (small response)
  Step 3: Client uploads directly to S3 (large payload)
  Total hops: 1 | Server load: small response | Bandwidth cost: 1x

Implementation

Generate Presigned URL

import time
import uuid
from datetime import datetime
from functools import wraps

import boto3
from flask import Flask, request, jsonify
from flask_limiter import Limiter

app = Flask(__name__)

class QuotaExceededError(Exception):
    """Raised when an upload would push a user past their storage quota"""
    pass

# Rate limiter setup
limiter = Limiter(
    key_func=lambda: request.headers.get('X-User-ID', 'anonymous'),
    app=app,
    default_limits=["1000 per day"]
)

def require_auth(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        token = request.headers.get('Authorization')
        user_id = verify_token(token)  # Your auth logic
        if not user_id:
            return jsonify({'error': 'Unauthorized'}), 401
        request.user_id = user_id
        return f(*args, **kwargs)
    return decorated

def generate_presigned_upload(self, user_id: str, filename: str, 
                              content_type: str, max_size_mb: int = 5120,
                              public: bool = True) -> dict:
    """Generate presigned POST URL for direct S3 upload"""
    
    # Check user quota
    self._enforce_quota(user_id, max_size_mb)
    
    # Generate unique S3 key
    unique_id = str(uuid.uuid4())
    timestamp = datetime.utcnow().strftime('%Y/%m/%d')
    safe_filename = self._sanitize_filename(filename)
    file_key = f"media/{timestamp}/{user_id}/{unique_id}/{safe_filename}"
    
    # Security constraints
    max_size_bytes = max_size_mb * 1024 * 1024
    
    # Build fields
    fields = {
        'Content-Type': content_type,
        'x-amz-server-side-encryption': 'AES256',
        'success_action_status': '200',
        'Cache-Control': 'max-age=31536000, immutable'
    }
    
    # Build conditions - every form field (except the file itself) must be
    # covered by the policy, or S3 rejects the POST
    conditions = [
        {'bucket': self.bucket},
        ['content-length-range', 1, max_size_bytes],
        {'Content-Type': content_type},
        {'x-amz-server-side-encryption': 'AES256'},
        {'success_action_status': '200'},
        ['starts-with', '$Cache-Control', '']
    ]
    
    # Add ACL if public
    if public:
        fields['acl'] = 'public-read'
        conditions.append({'acl': 'public-read'})
    
    presigned = self.s3_client.generate_presigned_post(
        Bucket=self.bucket,
        Key=file_key,
        Fields=fields,
        Conditions=conditions,
        ExpiresIn=3600  # 1 hour
    )
    
    # Track pending upload for quota enforcement
    self._track_pending_upload(user_id, file_key, max_size_mb)
    
    return {
        'upload_url': presigned['url'],
        'fields': presigned['fields'],
        'file_key': file_key,
        'expires_at': int(time.time()) + 3600
    }

def _sanitize_filename(self, filename: str) -> str:
    """Remove path traversal attacks"""
    import re
    safe = filename.replace('../', '').replace('..\\', '')
    safe = re.sub(r'[^a-zA-Z0-9._-]', '_', safe)
    return safe[:255]

def _enforce_quota(self, user_id: str, size_mb: int):
    """Check if user has quota for upload"""
    current = db.get_user_storage_mb(user_id)
    quota = db.get_user_quota_mb(user_id)
    
    if current + size_mb > quota:
        raise QuotaExceededError(
            f"Upload would exceed quota: {current + size_mb} MB > {quota} MB"
        )
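
The service above also calls _track_pending_upload, which isn't shown. A minimal sketch, assuming a db helper with an insert_pending_upload method (hypothetical; adapt to your own persistence layer):

def _track_pending_upload(self, user_id: str, file_key: str, size_mb: int):
    """Record an in-flight upload so quota checks account for reserved space (sketch)"""
    db.insert_pending_upload(          # hypothetical helper, not a real library call
        user_id=user_id,
        file_key=file_key,
        reserved_mb=size_mb,
        created_at=datetime.utcnow()
    )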

Flask Endpoint

@app.route('/v1/media/presigned-upload', methods=['POST'])
@limiter.limit("10 per minute")
@require_auth
def get_presigned_upload():
    """
    Generate presigned upload URL
    
    Headers:
        Authorization: Bearer <token>
    
    Body:
        {
            "filename": "video.mp4",
            "content_type": "video/mp4",
            "max_size_mb": 500,
            "public": true
        }
    
    Response:
        {
            "upload_url": "https://...",
            "fields": {...},
            "file_key": "media/2025/11/04/uuid/video.mp4",
            "expires_at": 1699123456
        }
    """
    try:
        data = request.get_json()
        
        if not data.get('filename'):
            return jsonify({'error': 'filename required'}), 400
        if not data.get('content_type'):
            return jsonify({'error': 'content_type required'}), 400
        
        result = media_service.generate_presigned_upload(
            user_id=request.user_id,
            filename=data['filename'],
            content_type=data['content_type'],
            max_size_mb=data.get('max_size_mb', 5120),
            public=data.get('public', True)
        )
        
        return jsonify(result), 200
        
    except QuotaExceededError as e:
        return jsonify({'error': str(e)}), 403
    except Exception as e:
        app.logger.error(f"Presigned upload error: {e}")
        return jsonify({'error': 'Internal server error'}), 500

Client-Side Upload

async function uploadMedia(file) {
    // Step 1: Get presigned URL
    const resp = await fetch('/v1/media/presigned-upload', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${token}`
        },
        body: JSON.stringify({
            filename: file.name,
            content_type: file.type
        })
    });
    
    if (!resp.ok) {
        throw new Error(`Failed to get presigned URL: ${resp.status}`);
    }
    
    const {upload_url, fields, file_key} = await resp.json();
    
    // Step 2: Upload directly to S3
    const formData = new FormData();
    
    // Add fields BEFORE file
    Object.entries(fields).forEach(([key, value]) => {
        formData.append(key, value);
    });
    
    // File MUST be last
    formData.append('file', file);
    
    const uploadResp = await fetch(upload_url, {
        method: 'POST',
        body: formData
    });
    
    if (!uploadResp.ok) {
        const text = await uploadResp.text();
        throw new Error(`S3 upload failed: ${uploadResp.status} ${text}`);
    }
    
    console.log('Upload complete:', file_key);
    return file_key;
}

3. Multipart Upload for Large Files

When to Use Multipart

File Size | Method | Reason
< 100 MB | Single presigned POST | Simple, fast
100 MB - 5 GB | Multipart (recommended) | Better reliability
> 5 GB | Multipart (required) | S3 limit is 5 GB per single PUT

AWS Limits

  • Min part size: 5 MB
  • Max part size: 5 GB
  • Max parts per upload: 10,000 (hard limit)
  • Max object size: 5 TB

Benefits

  • Resumability: Failed parts can be retried individually
  • Parallel uploads: Multiple parts upload simultaneously
  • Better reliability: Network issues don't fail entire upload

Implementation

Initiate Multipart Upload

def initiate_multipart_upload(self, user_id: str, filename: str, 
                               content_type: str) -> dict:
    """Start multipart upload session"""
    
    # Check quota
    self._enforce_quota(user_id, max_size_mb=5120)
    
    unique_id = str(uuid.uuid4())
    timestamp = datetime.utcnow().strftime('%Y/%m/%d')
    safe_filename = self._sanitize_filename(filename)
    file_key = f"media/{timestamp}/{user_id}/{unique_id}/{safe_filename}"
    
    response = self.s3_client.create_multipart_upload(
        Bucket=self.bucket,
        Key=file_key,
        ContentType=content_type,
        ServerSideEncryption='AES256'
    )
    
    return {
        'upload_id': response['UploadId'],
        'file_key': file_key,
        'part_size': 100 * 1024 * 1024  # 100MB chunks (adjustable)
    }
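
Initiate Endpoint (Flask)

The client code later in this section calls /v1/media/multipart/initiate, which wraps the service method above. A minimal sketch of that route, reusing the auth decorator, limiter, and media_service from section 2:

@app.route('/v1/media/multipart/initiate', methods=['POST'])
@limiter.limit("10 per minute")
@require_auth
def initiate_multipart():
    """Start a multipart upload session (sketch; mirrors the presigned-upload endpoint)"""
    data = request.get_json()
    
    if not data.get('filename') or not data.get('content_type'):
        return jsonify({'error': 'filename and content_type required'}), 400
    
    try:
        result = media_service.initiate_multipart_upload(
            user_id=request.user_id,
            filename=data['filename'],
            content_type=data['content_type']
        )
        return jsonify(result), 200
    except QuotaExceededError as e:
        return jsonify({'error': str(e)}), 403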

Generate Presigned URLs for Parts (Batch Mode)

@app.route('/v1/media/multipart/get-part-urls', methods=['POST'])
@limiter.limit("100 per minute")
@require_auth
def get_part_urls():
    """
    Generate presigned URLs for a batch of parts
    
    Body:
        {
            "file_key": "...",
            "upload_id": "...",
            "start_part": 1,
            "end_part": 100
        }
    
    Response:
        {
            "parts": [
                {"part_number": 1, "presigned_url": "https://..."},
                ...
            ]
        }
    """
    data = request.get_json()
    
    start = data['start_part']
    end = data['end_part']
    
    # Validate part range and batch size
    if start < 1 or end < start:
        return jsonify({'error': 'Invalid part range'}), 400
    
    if end - start + 1 > 100:
        return jsonify({'error': 'Max batch size is 100 parts'}), 400
    
    if end > 10_000:
        return jsonify({'error': 'Part number exceeds AWS limit of 10,000'}), 400
    
    urls = []
    for part_number in range(start, end + 1):
        url = s3_client.generate_presigned_url(
            ClientMethod='upload_part',
            Params={
                'Bucket': bucket,
                'Key': data['file_key'],
                'UploadId': data['upload_id'],
                'PartNumber': part_number
            },
            ExpiresIn=3600
        )
        urls.append({
            'part_number': part_number,
            'presigned_url': url
        })
    
    return jsonify({'parts': urls}), 200

Complete Upload

@app.route('/v1/media/multipart/complete', methods=['POST'])
@require_auth
def complete_multipart_upload():
    """
    Finalize multipart upload
    
    Body:
        {
            "file_key": "...",
            "upload_id": "...",
            "parts": [{"PartNumber": 1, "ETag": "..."}]
        }
    """
    data = request.get_json()
    
    try:
        s3_client.complete_multipart_upload(
            Bucket=bucket,
            Key=data['file_key'],
            UploadId=data['upload_id'],
            MultipartUpload={'Parts': data['parts']}
        )
        
        return jsonify({'success': True}), 200
        
    except Exception as e:
        return jsonify({'error': str(e)}), 500

Abort Endpoint

@app.route('/v1/media/multipart/abort', methods=['POST'])
@require_auth
def abort_multipart_upload():
    """
    Abort failed/cancelled multipart upload
    
    Body:
        {
            "file_key": "...",
            "upload_id": "..."
        }
    """
    data = request.get_json()
    
    try:
        s3_client.abort_multipart_upload(
            Bucket=bucket,
            Key=data['file_key'],
            UploadId=data['upload_id']
        )
        
        return jsonify({'success': True}), 200
        
    except Exception as e:
        return jsonify({'error': str(e)}), 500

Client-Side Multipart Upload

async function uploadLargeFile(file) {
    // Calculate optimal part size to stay under part limit
    const MAX_PARTS = 10_000;
    const MIN_PART_SIZE = 5 * 1024 * 1024;  // AWS minimum
    
    let partSize = 100 * 1024 * 1024;  // Start with 100 MB
    let totalParts = Math.ceil(file.size / partSize);
    
    // Adjust if exceeds part limit
    if (totalParts > MAX_PARTS) {
        partSize = Math.ceil(file.size / MAX_PARTS);
        totalParts = Math.ceil(file.size / partSize);
        
        if (partSize < MIN_PART_SIZE) {
            throw new Error('Computed part size is below the 5 MB S3 minimum');
        }
    }
    
    console.log(`Uploading ${file.size} bytes in ${totalParts} parts of ${partSize} bytes`);
    
    // Step 1: Initiate
    const initResp = await fetch('/v1/media/multipart/initiate', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${token}`
        },
        body: JSON.stringify({
            filename: file.name,
            content_type: file.type
        })
    });
    
    if (!initResp.ok) throw new Error(`Init failed: ${initResp.status}`);
    
    const {upload_id, file_key} = await initResp.json();
    
    // Step 2: Upload parts in batches
    const BATCH_SIZE = 100;
    const CONCURRENT_UPLOADS = 3;
    const uploadedParts = [];
    
    try {
        for (let batchStart = 1; batchStart <= totalParts; batchStart += BATCH_SIZE) {
            const batchEnd = Math.min(batchStart + BATCH_SIZE - 1, totalParts);
            
            // Fetch batch of presigned URLs
            const urlsResp = await fetch('/v1/media/multipart/get-part-urls', {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                    'Authorization': `Bearer ${token}`
                },
                body: JSON.stringify({
                    file_key,
                    upload_id,
                    start_part: batchStart,
                    end_part: batchEnd
                })
            });
            
            if (!urlsResp.ok) {
                throw new Error(`URL generation failed: ${urlsResp.status}`);
            }
            
            const {parts: partUrls} = await urlsResp.json();
            
            // Upload this batch with concurrency control
            for (let i = 0; i < partUrls.length; i += CONCURRENT_UPLOADS) {
                const batch = partUrls.slice(i, i + CONCURRENT_UPLOADS);
                
                const results = await Promise.all(
                    batch.map(part => uploadPartWithRetry(file, part, partSize))
                );
                
                uploadedParts.push(...results);
                
                // Progress reporting
                const progress = (uploadedParts.length / totalParts) * 100;
                console.log(`Progress: ${progress.toFixed(1)}%`);
            }
        }
        
        // Step 3: Complete upload
        const completeResp = await fetch('/v1/media/multipart/complete', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Bearer ${token}`
            },
            body: JSON.stringify({
                file_key,
                upload_id,
                parts: uploadedParts.sort((a, b) => a.PartNumber - b.PartNumber)
            })
        });
        
        if (!completeResp.ok) {
            throw new Error(`Complete failed: ${completeResp.status}`);
        }
        
        return {file_key, success: true};
        
    } catch (error) {
        // Abort on failure
        await fetch('/v1/media/multipart/abort', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Bearer ${token}`
            },
            body: JSON.stringify({file_key, upload_id})
        });
        
        throw error;
    }
}

// Helper: Upload single part with exponential backoff
async function uploadPartWithRetry(file, part, partSize, maxRetries = 3) {
    const {part_number, presigned_url} = part;
    
    const start = (part_number - 1) * partSize;
    const end = Math.min(start + partSize, file.size);
    const blob = file.slice(start, end);
    
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            const resp = await fetch(presigned_url, {
                method: 'PUT',
                body: blob,
                headers: {
                    'Content-Type': 'application/octet-stream'
                }
            });
            
            if (!resp.ok) {
                throw new Error(`Upload failed: ${resp.status} ${resp.statusText}`);
            }
            
            // Extract ETag and strip quotes
            let etag = resp.headers.get('ETag');
            if (!etag) {
                throw new Error('No ETag in response');
            }
            
            // Remove quotes: "abc123" → abc123
            etag = etag.replace(/^"(.*)"$/, '$1');
            
            return {
                PartNumber: part_number,
                ETag: etag
            };
            
        } catch (error) {
            if (attempt === maxRetries) throw error;
            
            // Exponential backoff
            const delay = Math.min(1000 * Math.pow(2, attempt - 1), 10000);
            console.warn(`Part ${part_number} failed (attempt ${attempt}), retrying in ${delay}ms`);
            await new Promise(resolve => setTimeout(resolve, delay));
        }
    }
}

Smart Upload Strategy

async function smartUpload(file) {
    const SIZE_THRESHOLD = 100 * 1024 * 1024;  // 100MB
    
    if (file.size > SIZE_THRESHOLD) {
        return await uploadLargeFile(file);
    } else {
        return await uploadMedia(file);
    }
}

4. Lifecycle Policies for Cost Optimization

Storage Class Economics

Storage Class | Cost Characteristics | Retrieval | Use Case
Standard | Highest storage cost | Free, instant | Active media (recent)
Standard-IA | Lower storage cost | Small retrieval fee | Warm (occasional access)
Glacier Instant Retrieval | Significant savings | Higher retrieval fee, instant | Cold with instant access
Deep Archive | Lowest cost | Retrieval fee + delay | Compliance (rarely accessed)

Cost Analysis

Over time, lifecycle policies shift older content to cheaper storage tiers, creating compounding savings:

  • Early months: Moderate savings as content begins transitioning
  • Mid-term: Substantial savings as bulk of content reaches cold storage
  • Steady-state: Maximum ongoing savings with optimal distribution
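
A rough way to quantify this is a blended per-GB rate weighted by how your data is distributed across tiers. The figures below are relative placeholders, not AWS prices; substitute current pricing for your region:

def blended_storage_rate(distribution: dict, rates: dict) -> float:
    """Weighted average storage rate across classes (distribution fractions sum to 1.0)"""
    return sum(fraction * rates[storage_class]
               for storage_class, fraction in distribution.items())

# Hypothetical steady-state spread after the lifecycle rules below have run for a while
rates = {'STANDARD': 1.00, 'STANDARD_IA': 0.55, 'GLACIER_IR': 0.20, 'DEEP_ARCHIVE': 0.05}  # relative, not $
distribution = {'STANDARD': 0.15, 'STANDARD_IA': 0.20, 'GLACIER_IR': 0.40, 'DEEP_ARCHIVE': 0.25}
print(blended_storage_rate(distribution, rates))  # well below the all-Standard baseline of 1.00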

Implementation

def configure_lifecycle_policies(self):
    """Configure automatic archival rules"""
    
    self.s3_client.put_bucket_lifecycle_configuration(
        Bucket=self.bucket,
        LifecycleConfiguration={
            'Rules': [
                {
                    'Id': 'large-files-lifecycle',
                    'Status': 'Enabled',
                    'Filter': {
                        'And': {
                            'Prefix': 'media/',
                            'ObjectSizeGreaterThan': 131072  # 128 KB (IA/Glacier minimum)
                        }
                    },
                    'Transitions': [
                        {'Days': 30, 'StorageClass': 'STANDARD_IA'},
                        {'Days': 90, 'StorageClass': 'GLACIER_IR'},
                        {'Days': 365, 'StorageClass': 'DEEP_ARCHIVE'}
                    ]
                },
                {
                    'Id': 'noncurrent-version-cleanup',
                    'Status': 'Enabled',
                    'Filter': {'Prefix': 'media/'},
                    'NoncurrentVersionTransitions': [
                        {
                            'NoncurrentDays': 30,
                            'StorageClass': 'STANDARD_IA'
                        },
                        {
                            'NoncurrentDays': 90,
                            'StorageClass': 'GLACIER_IR'
                        }
                    ],
                    'NoncurrentVersionExpiration': {
                        'NoncurrentDays': 365
                    }
                },
                {
                    'Id': 'abort-incomplete-multipart',
                    'Status': 'Enabled',
                    'Filter': {'Prefix': ''},
                    'AbortIncompleteMultipartUpload': {
                        'DaysAfterInitiation': 7
                    }
                }
            ]
        }
    )

Critical: Abort Incomplete Multipart Uploads

Problem: Incomplete multipart uploads consume storage but aren't visible in normal listings. They accumulate silently and cost money.

Solution: Lifecycle policy to abort after 7 days (shown above).
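
Before the policy takes effect, you can audit what is already sitting in the bucket. A short sketch using boto3's list_multipart_uploads paginator:

import boto3

s3 = boto3.client('s3')

def list_incomplete_uploads(bucket: str):
    """Print in-flight multipart uploads that the abort rule will eventually clean up"""
    paginator = s3.get_paginator('list_multipart_uploads')
    for page in paginator.paginate(Bucket=bucket):
        for upload in page.get('Uploads', []):
            print(upload['Key'], upload['UploadId'], upload['Initiated'])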


5. Production Deployment

Environment Setup

# .env
AWS_REGION=us-east-1
S3_BUCKET=my-media-bucket
MAX_UPLOAD_SIZE_MB=5120
MULTIPART_THRESHOLD_MB=100

S3 Bucket Configuration

# Enable versioning
aws s3api put-bucket-versioning \
  --bucket my-media-bucket \
  --versioning-configuration Status=Enabled

# Enable encryption
aws s3api put-bucket-encryption \
  --bucket my-media-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }]
  }'

CORS Configuration

{
  "CORSRules": [{
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["GET", "POST", "PUT"],
    "AllowedHeaders": [
      "Content-Type",
      "Content-MD5",
      "Cache-Control",
      "x-amz-*"
    ],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }]
}
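
The JSON above can be applied from the console, or pushed programmatically; a sketch using boto3's put_bucket_cors. Note that ExposeHeaders must include ETag, otherwise the multipart client cannot read part ETags in the browser:

import boto3

s3 = boto3.client('s3')

cors_configuration = {
    'CORSRules': [{
        'AllowedOrigins': ['https://yourdomain.com'],
        'AllowedMethods': ['GET', 'POST', 'PUT'],
        'AllowedHeaders': ['Content-Type', 'Content-MD5', 'Cache-Control', 'x-amz-*'],
        'ExposeHeaders': ['ETag'],  # required so the browser can read part ETags
        'MaxAgeSeconds': 3000
    }]
}

s3.put_bucket_cors(Bucket='my-media-bucket', CORSConfiguration=cors_configuration)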

IAM Policy (Least Privilege)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPresignedGeneration",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::my-media-bucket/media/*"
    },
    {
      "Sid": "AllowMultipartManagement",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::my-media-bucket"
    }
  ]
}

S3 Event Notification (Post-Processing)

# Trigger Lambda on object creation
s3_client.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'LambdaFunctionArn': 'arn:aws:lambda:region:account:function:process-media',
                'Events': ['s3:ObjectCreated:*'],
                'Filter': {
                    'Key': {
                        'FilterRules': [
                            {'Name': 'prefix', 'Value': 'media/'},
                            {'Name': 'suffix', 'Value': '.mp4'}
                        ]
                    }
                }
            }
        ]
    }
)
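
On the Lambda side, the handler receives standard S3 event records. A minimal skeleton (the processing call is a placeholder for your own transcode or thumbnail pipeline):

import urllib.parse

def handler(event, context):
    """Entry point for the process-media Lambda (sketch)"""
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys arrive URL-encoded in S3 event notifications
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        print(f"New media object: s3://{bucket}/{key}")
        # process_media(bucket, key)  # placeholder for your pipeline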

CloudWatch Cost Alarm

import boto3

# Billing metrics are only published to CloudWatch in us-east-1
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='S3-Storage-Cost-Alert',
    ComparisonOperator='GreaterThanThreshold',
    EvaluationPeriods=1,
    MetricName='EstimatedCharges',
    Namespace='AWS/Billing',
    Period=86400,
    Statistic='Maximum',
    Threshold=1000.0,
    ActionsEnabled=True,
    AlarmActions=['arn:aws:sns:region:account:billing-alerts'],
    AlarmDescription='Alert if S3 costs exceed threshold',
    Dimensions=[
        {'Name': 'ServiceName', 'Value': 'AmazonS3'},
        {'Name': 'Currency', 'Value': 'USD'}
    ]
)

Summary

Key Benefits

  • Dramatically reduced upload costs: Eliminate server proxy bottleneck
  • Massive file support: Handle files up to 5TB with multipart uploads
  • Significantly faster uploads: Direct client-to-S3 connection
  • Substantial storage savings: Automatic lifecycle management reduces costs over time
  • Improved reliability: Resume capability and parallel uploads

Key Takeaways

  • Presigned URLs eliminate server bottlenecks and reduce infrastructure costs
  • Multipart uploads enable reliable large file transfers with resume capability (required for files >5GB)
  • Lifecycle policies reduce storage costs progressively over time
  • Abort incomplete uploads to prevent storage leaks
  • Authentication and rate limiting are critical for production deployment
  • Batch URL generation prevents memory issues with large multipart uploads

Implementation Steps

  1. Replace proxied upload with presigned URL generation
  2. Add authentication and rate limiting to all endpoints
  3. Implement multipart upload with batch URL generation for large files
  4. Configure lifecycle policies (run once)
  5. Set up IAM least privilege policies
  6. Configure S3 event notifications for post-processing
  7. Enable CloudWatch cost alarms
