AWS Cost Optimization: Two Real-World Examples from My DevOps Journey


Bharat Kasera / June 09, 2025

4 min read

Cost optimization is a hot topic in every organization, and as a DevOps engineer, it’s become a core part of my role. Recently, I faced an interview question that really put my skills to the test: “Explain a cost optimization activity you performed in your current organization.” I wanted to share my experience with you, breaking down two practical examples of how I saved costs in AWS. This post is my way of passing along what I’ve learned, with step-by-step clarity and a bit of enthusiasm for the craft!


The Big Picture

Cloud costs can spiral if left unchecked, especially when resources pile up over time. In my current role, I tackled two cost-saving tasks:

  1. Cleaning up unused EBS volumes to eliminate wasted storage costs.
  2. Upgrading GP2 volumes to GP3 for better performance and lower costs.

Both tasks used AWS tools like Lambda and Boto3, plus some automation to keep things tidy long-term. Let’s dive in!


Example 1: Cleaning Up Unused EBS Volumes

The Problem

Our developers frequently create Amazon Elastic Block Store (EBS) volumes for projects, and over the past couple of years, we’d accumulated a ton of them. Some were attached to stopped EC2 instances, others had snapshots taken, and many were just sitting there, unused, racking up costs. My task? Find and delete these unused volumes to save money.

My Approach

I started by using the AWS CLI to list all EBS volumes:

aws ec2 describe-volumes

This gave me a raw list, but manually sorting through hundreds of volumes was a nightmare. I realized a programmatic approach would be smarter, so I turned to AWS Lambda and Python’s Boto3 library. I wrote a Lambda function to identify volumes in the “available” state (meaning they weren’t attached to any instance) and delete them after confirming they weren’t needed.
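For a quick spot-check, the same filter works directly in the CLI, so you can eyeball the candidates before any automation touches them. This one-liner is a sketch of that, not what ran in production:

aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,SizeGiB:Size,Type:VolumeType}' \
  --output table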

Here’s the code I used:

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    # "available" status means the volume is not attached to any instance;
    # for very large fleets, use the describe_volumes paginator instead
    volumes = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['available']}])
    for volume in volumes['Volumes']:
        print(f"Unused volume: {volume['VolumeId']}")
        ec2.delete_volume(VolumeId=volume['VolumeId'])
        print(f"Deleted volume: {volume['VolumeId']}")
    return {"status": "success"}

Safety First

Before deleting anything, I worked with the development team to ensure no critical data would be lost. For volumes with potential value, I took snapshots as a backup. This cautious approach saved us from any “oops” moments.
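In code, "snapshot first, delete second" is only a few extra lines. Here's a minimal sketch of the pattern (the helper name is mine, not from our production code); the waiter makes sure the snapshot finishes before the volume disappears:

import boto3

ec2 = boto3.client('ec2')

def backup_then_delete(volume_id):
    # Snapshot the volume so its data survives the deletion
    snap = ec2.create_snapshot(VolumeId=volume_id,
                               Description=f"Pre-delete backup of {volume_id}")
    # Wait until the snapshot has fully completed
    ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])
    ec2.delete_volume(VolumeId=volume_id)

One caveat: waiters can run for minutes, so if you use this inside Lambda, raise the function timeout well above the 3-second default.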

Automation for the Win

To keep costs down long-term, I scheduled the Lambda function to run automatically on the fourth Friday of every month using Amazon EventBridge. This ensured unused volumes wouldn't pile up again. The result? Significant cost savings and a cleaner AWS environment.
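Here's a sketch of that schedule in Boto3. The rule name and Lambda ARN are placeholders, and "FRI#4" is EventBridge's cron notation for the fourth Friday of the month:

import boto3

events = boto3.client('events')

# Run at 09:00 UTC on the fourth Friday of every month
events.put_rule(
    Name='ebs-cleanup-monthly',  # hypothetical rule name
    ScheduleExpression='cron(0 9 ? * FRI#4 *)',
)
# Point the rule at the cleanup Lambda (ARN is a placeholder)
events.put_targets(
    Rule='ebs-cleanup-monthly',
    Targets=[{'Id': 'ebs-cleanup',
              'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:ebs-cleanup'}],
)

EventBridge also needs permission to invoke the function (via lambda add-permission), which I've left out for brevity.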

Pro Tip: Always take snapshots before deleting volumes, and involve stakeholders to avoid surprises. Automation is your friend for ongoing maintenance!


Example 2: Upgrading EBS Volumes from GP2 to GP3

The Opportunity

While working on EBS volumes, I noticed another cost-saving opportunity. Many of our volumes were still on the older GP2 type. The newer GP3 type costs about 20% less per gigabyte and delivers a baseline of 3,000 IOPS (Input/Output Operations Per Second) regardless of volume size, whereas GP2's IOPS scale with capacity. Upgrading made sense for both savings and performance.
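To put rough numbers on the opportunity: assuming us-east-1 list prices of $0.10 per GB-month for GP2 and $0.08 for GP3 (verify current pricing for your region), the storage line item alone drops by 20%. A quick back-of-the-envelope:

# Assumed us-east-1 list prices in USD per GB-month; verify for your region
GP2_PRICE = 0.10
GP3_PRICE = 0.08

total_gp2_gib = 5000  # hypothetical fleet: 5 TB of GP2 storage
monthly_saving = total_gp2_gib * (GP2_PRICE - GP3_PRICE)
print(f"Estimated saving: ~${monthly_saving:.0f}/month")  # ~$100/month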

My Solution

I wrote another Lambda function to automate the upgrade process:

  1. Identify GP2 volumes using Boto3.
  2. Create a snapshot of each GP2 volume.
  3. Provision a new GP3 volume from the snapshot.
  4. Detach the old GP2 volume and attach the new GP3 volume to the instance.

Here’s a simplified version of the script:

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    # Find every volume still on the older GP2 type
    volumes = ec2.describe_volumes(Filters=[{'Name': 'volume-type', 'Values': ['gp2']}])
    for volume in volumes['Volumes']:
        volume_id = volume['VolumeId']
        # Create a snapshot of the GP2 volume
        snapshot = ec2.create_snapshot(VolumeId=volume_id, Description=f"Snapshot for {volume_id}")
        # Wait for the snapshot to complete before building a volume from it
        ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot['SnapshotId']])
        # Provision a GP3 volume from the snapshot, in the same AZ
        new_volume = ec2.create_volume(
            SnapshotId=snapshot['SnapshotId'],
            VolumeType='gp3',
            AvailabilityZone=volume['AvailabilityZone']
        )
        # Detach old volume and attach the new one (simplified: real logic reads
        # InstanceId and Device from volume['Attachments'], then calls
        # detach_volume and attach_volume once the new volume is available)
        print(f"Upgraded {volume_id} to GP3 as {new_volume['VolumeId']}")
    return {"status": "success"}

The Impact

This upgrade reduced our storage costs meaningfully (GP3's list price runs about 20% below GP2's per gigabyte) while boosting performance for our applications. It was a win-win, and the automation ensured we could handle large numbers of volumes efficiently.


Key Takeaways

These projects taught me that cost optimization isn’t just about cutting corners—it’s about working smarter with the right tools. Using AWS CLI, Lambda, and Boto3, I was able to tackle immediate cost issues and set up systems to prevent future waste. Sharing these examples in an interview shows you’re not just reactive but proactive about delivering value.

If you’re prepping for a similar question, have a couple of specific examples ready. Highlight the problem, your solution, and the impact, and you’ll leave a strong impression. Keep experimenting, and happy optimizing! 😄
