AWS Migration: 3 Major Cost Optimization Strategies to Use
Most businesses start considering cloud migration when their in-house systems pose more challenges than they deliver benefits. Most of these challenges take the form of serious investments: adding hardware, updating software, mitigating disaster risk, or protecting against breaches. The AWS cost optimization model offers a practical way out of that cycle.
The benefits of cloud computing in regards to those hurdles are well-established:
- Reduced TCO (total cost of ownership) for infrastructure
- Less downtime/higher SLA levels
- Instant scalability and capacity for upgrading/downgrading
- Business continuity and faster disaster recovery
- Enhanced security
What’s even more important is that even small businesses with tight budgets can obtain the same world-class cloud solutions as the industry giants – cloud storage, computing, management, and more – and quite literally “pay as they go.” This will give your business unprecedented agility and the ability to compete with larger players in terms of time-to-market and service levels.
The only question becomes which cloud platform provider to use. While a number of new entrants are getting into the business of cloud services, the three main contenders are Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS) – and AWS is currently the most mature of the three.
Why AWS Cost Optimization Model?
AWS is considered the “flagship” of cloud services companies. Since launching in 2006, it has consistently added new features and services and remains the most comprehensive platform in terms of offerings (over 100 of them). Among the many benefits are:
- Elastic Compute Cloud (EC2) – scalable computing on demand
- Flexibility to choose your operating system, programming language, and services
- Scalability – the ability to change resources rapidly, and use those resources only when needed
- S3: Pay only for the storage you use
- Redshift – a managed data warehouse with automated, no-downtime backups
- The latest in security measures, including protection against natural disasters
- Automated backups that are quick and easy to set up
- Company IT support 24/7
- Serverless solutions (Lambda) – pay only while an app is actually working
- CodeBuild – enabling developers to automate builds; AWS can run unit tests during the build process
Of course, there are too many benefits to list here, but until the other cloud platforms (Google and Azure) catch up, AWS is the most comprehensive solution, especially if you want to keep the TCO down.
How to Optimize AWS Costs
The key to the most cost-effective use of AWS lies in the responsible use of the services you choose. For example, you may need to scale up to 1,000 instances (servers) to handle a short-term load, and that can get expensive. But if you don’t have a process in place to scale those instances back down during periods of low usage, you are paying for services you don’t need.
One of the best places to start, in terms of cost-effective use of AWS, is with AWS itself. The company has developed, and continually updates, a document called the Well-Architected Framework – a compilation of cloud computing best practices that includes hundreds of tips and strategies for cost savings.
Below you will find three comprehensive cost optimization strategies based on those best practices, plus our extensive experience as a managed AWS support partner.
1. Turning Off What is Not in Use
This is the number one cause of avoidable costs. It’s fast and easy to spin up lots of infrastructure, and just as easy for developers to forget what they have spun up – all while the meter keeps running.
Locating and then eliminating all of this unused stuff and, even better, preventing the build-up in the first place is ideal. Fortunately, AWS has a lot of tools to do this. Here are a few that will have the greatest impact on your price tag:
- Breaking down the bill with user-defined tags: For every project, developers should create a tag such as “project=1234” and attach it to every resource the project uses. Tools such as Graffiti Monkey or Janitor Monkey can automate tagging and delete resources that aren’t tagged correctly. Then, using the IAM service, you can give specific staff members access to the invoices and even send alerts when a bill exceeds a predefined threshold.
- S3 Lifecycle: S3 is essentially a dumping ground for files; a bucket expands to hold an almost unlimited number of objects. Using AWS Lifecycle policies, you can manage your “bucket” of files to save costs. You might write logs to a bucket and need them only for a specific period, after which they should be deleted. Other files may be accessed heavily for a while and then rarely; those you will want to archive for a set time and eventually delete. Still others (e.g. database backups) you may want to archive from the start to meet regulatory compliance requirements. With lifecycle configurations, you tell AWS when and how to transition files into less expensive storage classes, archive them, or delete them entirely – the details are explained in the AWS documentation on setting a lifecycle configuration on a bucket. If you don’t do this, you’ll end up paying for junk you no longer need.
- Auto-Scaling: Several AWS services (e.g., EC2 and DynamoDB) let you set up auto-scaling. The development team defines when capacity should expand or shrink, based on predefined metrics, and AWS adjusts resources – and therefore costs – without the need for user intervention.
- Optimize Big Data: Redshift offers a huge data warehouse for big data so that this data can be operationalized and analyzed (as well as backed up, of course). But once the useful insights have been extracted, there is no need to keep the data there – move it to cold storage to secure a cheaper rate. NB: Don’t forget to set up appropriate public cloud configurations to protect all the crucial data you store in the cloud.
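The tag-based cost breakdown described above can be sketched in a few lines of Python. The resource records, tag names, and dollar figures here are hypothetical – in practice they would come from the AWS Cost and Usage Report or the Cost Explorer API:

```python
from collections import defaultdict

# Hypothetical monthly cost records, shaped like a simplified Cost and
# Usage Report export; "project" is the user-defined tag from the article.
records = [
    {"service": "EC2",      "tags": {"project": "1234"}, "cost": 420.00},
    {"service": "S3",       "tags": {"project": "1234"}, "cost": 35.50},
    {"service": "DynamoDB", "tags": {"project": "5678"}, "cost": 112.75},
    {"service": "EC2",      "tags": {},                  "cost": 88.00},  # untagged!
]

def costs_by_project(records):
    """Sum costs per 'project' tag; untagged resources land in 'UNTAGGED'."""
    totals = defaultdict(float)
    for record in records:
        project = record["tags"].get("project", "UNTAGGED")
        totals[project] += record["cost"]
    return dict(totals)

print(costs_by_project(records))
```

The “UNTAGGED” bucket is exactly what tools like Janitor Monkey help you drive to zero: anything landing there is spend nobody owns.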
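A minimal sketch of the S3 lifecycle policy discussed above, expressed as the policy document AWS expects. The bucket name, prefix, and day counts are illustrative placeholders, not recommendations:

```python
# Illustrative lifecycle rules for a log bucket: move objects to cheaper
# storage after 30 days, archive to Glacier after 90, delete after a year.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# Applying it with boto3 requires credentials, so it is only sketched here:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-example-bucket", LifecycleConfiguration=lifecycle)

# Sanity check: expiration must come after the last transition.
rule = lifecycle["Rules"][0]
assert rule["Expiration"]["Days"] > max(t["Days"] for t in rule["Transitions"])
```

Tune the prefixes and day counts to your own retention and compliance requirements; the structure stays the same.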
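The auto-scaling bullet above boils down to simple proportional math: scale capacity so the tracked metric returns to its target. This is a rough sketch of the target-tracking idea, not AWS’s actual algorithm, and the function name and numbers are illustrative:

```python
import math

def desired_capacity(current, metric, target, min_cap=1, max_cap=100):
    """Target-tracking sketch: scale capacity proportionally so the metric
    (e.g. CPU % or consumed throughput) returns toward its target value,
    clamped to the configured minimum and maximum."""
    wanted = math.ceil(current * metric / target)
    return max(min_cap, min(max_cap, wanted))

# 10 instances running at 80% CPU against a 50% target -> scale out to 16.
print(desired_capacity(10, 80, 50))   # -> 16
# 10 instances at 20% CPU -> scale in to 4, and stop paying for the other 6.
print(desired_capacity(10, 20, 50))   # -> 4
```

The clamp is the cost-control part: `max_cap` puts a hard ceiling on what a traffic spike can cost you, while `min_cap` keeps the service alive during quiet periods.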
2. Use AWS Tools Correctly
The sheer number of tools offered by AWS can be a bit daunting. And often, the obvious solution (e.g. EC2) is not the cheapest, so looking at alternatives can pay off. Here are a few examples:
- Load Balancing: Before AWS existed, there were any number of open-source load-balancing tools, such as Apache. An IT team can still use these tools by simply installing them on EC2. But consider this: even with what is considered a moderate load, you may need up to 20 instances over the course of a month, and you can rack up a hefty bill that way. The AWS ELB service, on the other hand, costs a fraction of that, and it includes automatic scaling, patching, and monitoring/alerts. In other words, when a managed solution exists, building your own on EC2 is rarely cost-effective.
- EC2 vs. Lambda: Lambda is one of the most popular AWS tools, primarily because it can cost far less than EC2. If a developer has custom code to run on AWS, Lambda may supply a cheaper solution. For many such workloads, EC2 is increasingly reserved for standalone applications, such as WordPress.
- Use “Reservations”: Most AWS compute services let customers pay via “reservations” – committing to usage in advance at heavily discounted rates. So, if you know that you are going to need a certain number of instances of a certain type over the next several years, you can purchase them now and save a bundle.
- Bandwidth Costs: Bandwidth through AWS is not necessarily cheaper than what you would pay in-house, so it is important to do the research and keep this cost as low as possible. Costs vary with the AWS services you use: Redshift, for example, has relatively expensive bandwidth, while ELB has some of the cheapest. AWS also offers tools to help keep costs down, including Direct Connect, CloudFront geo-restriction, and Snowball.
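The load-balancing trade-off above is easy to put into numbers. This back-of-the-envelope comparison uses hypothetical hourly rates (not current AWS pricing) to show why a managed ELB usually beats a fleet of self-managed EC2 load balancers:

```python
HOURS_PER_MONTH = 730  # common monthly-hours convention

def monthly_self_managed(instances, hourly_rate):
    """Cost of running your own open-source load balancers on EC2."""
    return instances * hourly_rate * HOURS_PER_MONTH

def monthly_elb(base_hourly, capacity_unit_rate, avg_capacity_units):
    """ALB-style pricing sketch: a base hourly charge plus capacity units."""
    return (base_hourly + capacity_unit_rate * avg_capacity_units) * HOURS_PER_MONTH

# Hypothetical rates: 20 small instances at $0.023/h vs one managed balancer.
diy = monthly_self_managed(20, 0.023)       # ~ $336/month
managed = monthly_elb(0.0225, 0.008, 3)     # ~ $34/month
print(f"self-managed: ${diy:.2f}, managed ELB: ${managed:.2f}")
```

Even if the real rates differ in your region, the shape of the comparison holds: the managed service amortizes patching, monitoring, and scaling into one small hourly fee.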
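The EC2 vs. Lambda point above can likewise be sketched with a simple cost model. Lambda bills per request plus compute time (GB-seconds); the rates below are illustrative assumptions, and the free tier is ignored for simplicity:

```python
def lambda_monthly_cost(invocations, avg_ms, memory_mb,
                        per_request=0.20 / 1_000_000,
                        per_gb_second=0.0000166667):
    """Lambda pricing sketch: request charge plus GB-seconds of compute."""
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return invocations * per_request + gb_seconds * per_gb_second

def ec2_monthly_cost(hourly_rate, hours=730):
    """An always-on instance bills for every hour, busy or idle."""
    return hourly_rate * hours

# A low-traffic API: 1M requests/month, 200 ms each at 512 MB of memory,
# vs. one small always-on instance at a hypothetical $0.023/h.
print(f"Lambda: ${lambda_monthly_cost(1_000_000, 200, 512):.2f}")  # ~ $1.87
print(f"EC2:    ${ec2_monthly_cost(0.023):.2f}")                   # ~ $16.79
```

The crossover matters: for spiky or low-volume workloads Lambda wins, but a workload that is busy around the clock can flip the comparison, which is why it pays to run the numbers per workload.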
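Finally, the reservation bullet above is just discounted pre-commitment, and the savings are easy to estimate. The rates here are made up for illustration; real reserved-instance discounts depend on instance type, term, and payment option:

```python
def reservation_savings(on_demand_hourly, reserved_hourly, instances, years=3):
    """Return (dollars saved, fractional discount) of reserving vs. on-demand."""
    hours = 8760 * years  # 8,760 hours per year
    on_demand = on_demand_hourly * instances * hours
    reserved = reserved_hourly * instances * hours
    return on_demand - reserved, 1 - reserved / on_demand

# Hypothetical rates: $0.10/h on demand vs $0.06/h with a 3-year reservation,
# for a steady fleet of 10 instances.
saved, discount = reservation_savings(0.10, 0.06, instances=10)
print(f"${saved:,.0f} saved ({discount:.0%} discount)")
```

The catch, of course, is the commitment: a reservation only saves money if you actually run those instances, so reserve the steady baseline and leave spiky capacity on demand.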
3. Be Realistic with Planning
While planning can sometimes be the “bane of an organization’s existence,” this is not the case when using AWS cloud services.
Because AWS is heavily focused on self-service, organizations must establish and implement standards and plans – and the larger the enterprise, the more important this is. There must be cost controls and complete visibility into costs. Here are a few “must-do’s.”
- Establish a company-wide tagging topology. Tags are simple metadata that can be attached to every resource. The key to tagging is that it lets you separate costs out and see where they are higher than they should be – and then you can begin the analysis of how to cut them.
- Plans for Deployment: One of the best things about AWS is that there are so many ways to do the same thing – so everyone must be on the same page if you are to track costs and spot which cost thresholds are being exceeded. You can use tools such as Jenkins for deployment and Chef, Puppet, or Ansible for maintenance.
- Backup and Data Recovery: Any plan must include backup and recovery simulations for testing. This has always been done on-site, and it must also be done with AWS. Because AWS infrastructure changes rapidly, recovery plans should be reviewed on a regular basis – there may be cost savings in changing out your backup and recovery options.
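A company-wide tagging topology, as described in the first bullet above, is only useful if it is enforced. A minimal sketch of such a check, assuming a hypothetical three-tag schema (`project`, `owner`, `environment`):

```python
# Example company-wide schema; the required tag names are an assumption
# chosen for illustration, not an AWS requirement.
REQUIRED_TAGS = {"project", "owner", "environment"}

def missing_tags(resource_tags):
    """Return the required tag keys that a resource is missing."""
    return REQUIRED_TAGS - set(resource_tags)

# This resource would be flagged during a tagging audit:
print(missing_tags({"project": "1234", "owner": "dev-team"}))  # -> {'environment'}
```

In practice a nightly job would run a check like this across resources pulled from the AWS APIs and alert (or, like Janitor Monkey, clean up) whatever fails it.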
More on this topic: Top 10 AWS services
How Can We Help
Optimizing costs through AWS use is, as you can see, a challenging task. It requires a consistent review of services and new options, and an ongoing comparison of costs across AWS services and tools. Indeed, a small team could spend most of its days on this task alone.
Romexsoft is here to provide a deep audit of your cloud infrastructure and show you where cost savings can be achieved. We provide custom solutions based on your needs and budget – tagging topologies, billing alerts, monitoring and adjusting all strategies, architectures, and plans, and full backup and recovery testing, as well as 24/7 DevOps support services. Get in touch today to discover how we can reduce your AWS bill!
Written by Romexsoft on April 23rd, 2019