I opened a client's account last month and found three dev servers running 24/7, a handful of unattached Elastic IPs, and a database no one had touched in eight months. Around $200 in monthly waste, none of it obvious until you look. These are the seven places I check on every account.
This is not about running a cost optimization project for six months. These are the fast checks: the things that take 20 minutes to find and 20 minutes to fix. Most small teams have at least three of them.
1. Public IPv4 addresses you forgot about
AWS started charging for public IPv4 addresses in February 2024. Every public IP now costs $0.005 per hour, which comes to about $3.65 per month per address. That sounds small, but it adds up fast on accounts that were never cleaned up.
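The per-address math, assuming a 730-hour average month (AWS bills the hourly rate for every hour the address exists):

```shell
# Monthly cost of one public IPv4 address at $0.005/hour,
# assuming a 730-hour average month
awk 'BEGIN { printf "$%.2f/month per address\n", 0.005 * 730 }'
# → $3.65/month per address
```

Ten forgotten addresses is roughly $36.50 a month for nothing.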
The common culprit: Elastic IPs that were allocated for a dev server that no longer exists. The server was terminated. The IP was never released. It has been billing you for months. An unattached Elastic IP delivers nothing for its $3.65 a month, so terminated instances quietly leave a bill behind.
In the AWS console, go to EC2 → Elastic IPs. Look at the Associated With column. Anything that says "-" is unattached and billing you. Release it.
```shell
# List all Elastic IPs and whether they're attached
aws ec2 describe-addresses --query 'Addresses[*].[PublicIp,AssociationId]' --output table
```

If you use Azure, check Public IP Addresses in the portal and look for any that are not associated with a running resource.
2. Oversized compute instances running 24/7
Most small teams pick an instance size when they set something up and never touch it again. An m5.xlarge that was provisioned for a load test two years ago is still running, now serving five users a day. The CPU graph shows 3% average utilization.
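To put a number on that idle m5.xlarge: a rough sketch assuming the us-east-1 Linux On-Demand rate of $0.192/hour (an assumption here; verify current pricing for your region and OS):

```shell
# Monthly On-Demand cost of an m5.xlarge left running 24/7
# (assumed rate: $0.192/hour, us-east-1 Linux; check your region's pricing)
awk 'BEGIN { printf "m5.xlarge 24/7: $%.2f/month\n", 0.192 * 730 }'
# → m5.xlarge 24/7: $140.16/month
```

Dropping one size to an m5.large roughly halves that, and stopping it outside business hours cuts it further.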
AWS has a tool called Compute Optimizer that looks at your CloudWatch metrics and tells you which instances are oversized. It is free to use. Azure has Advisor. Both will give you a direct recommendation: this instance could be downsized from X to Y with no expected impact.
```shell
# AWS CLI: get rightsizing recommendations
aws compute-optimizer get-ec2-instance-recommendations --query 'instanceRecommendations[*].[instanceArn,finding,recommendationOptions[0].instanceType]' --output table
```

Compute accounts for about 35% of total cloud waste per Flexera's 2026 report. It is the single biggest category.
3. Snapshots and old AMIs nobody needs
Every time you create a snapshot or build an AMI, it sits in storage billing you until you delete it. Developers snapshot instances before an upgrade, forget about them, and leave them. CI/CD pipelines create AMIs on every build and never rotate old ones. A busy pipeline can accumulate hundreds of snapshots.
EBS snapshot storage in AWS costs between $0.05 and $0.10 per GB per month depending on region. A 50GB snapshot costs $2.50 to $5 per month. A hundred of them is $250 to $500 per month, sitting quietly in your account.
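Those numbers are straight multiplication, with the caveat that snapshots are incremental, so the billed size can be smaller than the volume size:

```shell
# Monthly cost of one 50GB snapshot, and of a hundred of them,
# at the $0.05 and $0.10 per GB-month bounds
# (snapshots are incremental; billed size may be less than volume size)
awk 'BEGIN {
  printf "one 50GB snapshot: $%.2f to $%.2f/month\n", 50 * 0.05, 50 * 0.10
  printf "a hundred of them: $%.0f to $%.0f/month\n", 100 * 50 * 0.05, 100 * 50 * 0.10
}'
```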
```shell
# List all snapshots owned by your account
aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[*].[SnapshotId,VolumeSize,StartTime,Description]' --output table
```

Sort by date. Delete anything older than your backup retention policy. If you do not have a retention policy, anything older than 90 days is a reasonable starting point.
4. Dev and staging environments left running nights and weekends
A developer starts a staging environment on Monday morning. They work on it through the week. Friday afternoon they push to production and the sprint ends. The staging environment keeps running through the weekend. Keeps running the next week. Nobody touches it. It bills the entire time.
Non-production environments should not run outside business hours. A Lambda function or EventBridge rule that stops instances at 7pm, starts them at 8am, and keeps them off over the weekend cuts their cost by roughly two thirds.
```shell
# Example: stop tagged instances on a schedule using the AWS CLI
# Tag your dev instances with Environment=dev, then:
# (note: stop-instances errors if no instances match the filter)
aws ec2 stop-instances --instance-ids $(
  aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=dev" "Name=instance-state-name,Values=running" \
    --query 'Reservations[*].Instances[*].InstanceId' \
    --output text
)
```

The proper way to do this is with AWS's Instance Scheduler solution or a cron-triggered Lambda, but even manual stop-start discipline saves real money.
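An 8am to 7pm weekday schedule saves roughly two thirds of compute hours, which falls straight out of the arithmetic:

```shell
# 11 running hours/day x 5 weekdays = 55 of the week's 168 hours
awk 'BEGIN { printf "%.0f%% of compute hours saved\n", (1 - (11 * 5) / 168) * 100 }'
# → 67% of compute hours saved
```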
5. Data transfer and egress charges
Egress is one of the most misunderstood parts of cloud billing. Sending data out of AWS to the internet costs money. Sending data between two regions costs money. Even sending data from one AWS service to another in the same region can cost money if they are in different availability zones.
The most common issue I find: an application in us-east-1 that talks to an RDS instance in us-east-2. Every query crosses a region boundary and is billed as inter-region transfer. Move the database into the same region as the app. Another frequent one: a misconfigured CDN that re-fetches content from the origin on every request, which defeats the point of the CDN and drives up origin egress costs.
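A hedged sketch of what that cross-region chatter costs, assuming 100GB a day of traffic at an illustrative $0.02/GB inter-region rate (actual rates vary by region pair; some US pairs are cheaper):

```shell
# Monthly transfer cost for an app querying a database in another region,
# assuming 100GB/day at an illustrative $0.02/GB inter-region rate
awk 'BEGIN { printf "$%.0f/month in transfer charges\n", 100 * 0.02 * 30 }'
# → $60/month in transfer charges
```

That is pure waste relative to a same-region deployment, which would pay nothing for the same queries.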
In the AWS Cost Explorer, set the dimension to "Usage Type" and filter for anything starting with "DataTransfer". If those line items are large, the architecture needs a look.
6. Databases you are over-provisioned for
RDS and Azure Database instances tend to be over-provisioned at setup and then left alone. A db.m5.large running a startup's side project at 8% CPU utilization is a candidate for downsizing to db.t3.medium.
Also check: Multi-AZ. Multi-AZ doubles your database cost by maintaining a synchronous standby in another availability zone. For production databases handling real users, it is worth it. For development environments, staging databases, or internal tools, it is almost never justified. I regularly find dev RDS instances running Multi-AZ because someone copied the production Terraform config and never changed the setting.
```shell
# Check which RDS instances have Multi-AZ enabled
aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceClass,MultiAZ,DBInstanceStatus]' --output table
```

7. Reserved Instance coverage gaps
Reserved Instances and Savings Plans on AWS (and Reservations on Azure) offer significant discounts in exchange for a one- or three-year commitment. If your workloads are stable and you have been running on On-Demand pricing for more than six months, you are paying a premium for flexibility you are not using.
A 1-year Reserved Instance for an m5.large in us-east-1 saves about 36% compared to On-Demand. A 3-year commitment saves about 58%. If your staging servers run 24/7 on the same instance type for months, there is no reason not to have a reservation covering them.
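Translating those percentages into dollars, assuming the us-east-1 Linux On-Demand rate of $0.096/hour for an m5.large (an assumption; check current pricing) and a 730-hour month:

```shell
# Monthly savings on one m5.large from a Reserved Instance,
# assuming $0.096/hour On-Demand (us-east-1 Linux) and a 730-hour month
awk 'BEGIN {
  od = 0.096 * 730
  printf "On-Demand: $%.2f/month\n", od
  printf "1-year RI (36%% off): save $%.2f/month\n", od * 0.36
  printf "3-year RI (58%% off): save $%.2f/month\n", od * 0.58
}'
# → On-Demand: $70.08/month
# → 1-year RI (36% off): save $25.23/month
# → 3-year RI (58% off): save $40.65/month
```

Run the Cost Explorer recommendation before buying anything; it accounts for your actual usage pattern, which this back-of-the-envelope does not.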
In the AWS console, go to Cost Explorer → Savings Plans → Recommendations. AWS will calculate exactly how much you could save based on your actual usage history.
The order to check them
If I am looking at an account for the first time, I go:
- Unattached Elastic IPs (takes 5 minutes, immediate savings)
- Orphaned snapshots (look at the oldest ones first)
- Dev environments running 24/7 (talk to the team about stop schedules)
- Oversized compute (run Compute Optimizer, compare recommendations against actual load)
- Database Multi-AZ in non-production environments
- Egress anomalies (if there are big DataTransfer line items)
- Reserved Instance coverage (only after the above are cleaned up)
The first three usually find something within 30 minutes. The savings compound every month from that point on.
The seven spots in this post are the first things I check when I open a new account. If you want that done on yours, I can go through it and tell you what is actually costing you.