Six Pillars of the AWS Well-Architected Framework: The Impact of Its Usage When Building SaaS Applications
Even if you consider the possible improvements that mechanisms like the Framework could bring to a solution impractical or incompatible with everyday realities, by the end of this article you will be able to fully appreciate the benefits the Framework could introduce to your existing software product, or to the software-to-be.
Getting the hang of even the essentials of Amazon Web Services, and of the AWS Well-Architected Framework in particular, may seem perplexing to technical and non-technical professionals alike. So we have taken care of the complexity and information density of the topic and produced a comprehensible, digestible set of facts about the AWS Well-Architected Framework.
What is the AWS Well-Architected Framework? Who can use it & how?
Over the years of building solutions for various vertical markets, AWS architects have gained considerable experience: not only in architecting properly but also in helping thousands of customers design and review their own architectures on the cloud computing platform. Building on this expertise, AWS professionals have distilled the most effective strategies for architecting systems on Amazon Web Services, which now constitute the AWS Well-Architected Framework. The Framework represents a consistent approach to assessing AWS application architectures and implementing designs that scale with the fluctuating needs of an application. It has already been put to wide use: the reviews conducted with it number in the five digits.
The AWS Well-Architected Framework provides a set of foundational questions used to discover whether a particular architecture is on par with the expected qualities of modern cloud-based systems. The Framework's review procedure juxtaposes a given solution against the features dictated by cloud best practices and suggests the adjustments needed to align with them.
Those who stand to benefit most from the Framework are chief technology officers, architects, developers, and operations team members. People in technology roles will profit from AWS cloud architecture best practices and patterns while designing and operating services in the cloud, as well as from its references to further implementation details.
Constructing cloud-based software involves making many crucial decisions whose trade-offs and hazards aren't always transparent. Carefully weighing both is key to building a solution that suits your needs. Here is where the AWS Well-Architected Framework comes in handy, providing a thorough evaluation of your architecture and operating patterns against tried-and-tested practices selected on the basis of AWS-certified staff expertise. Having identified the areas for remediation in such a comparison, and armed with the most relevant data the Framework supplies, you can produce a reliable, time- and cost-efficient workload on AWS.
The Six Pillars of the AWS Well-Architected Framework
The AWS Well-Architected Framework has six pillars at its core, comprehensively described in the subsections of this article:
- Operational Excellence. The pillar provides insights into the operations and procedures for continuous support and improvement at each phase of developing, deploying, and operating a workload that delivers business value.
- Security. It allows making the most of what modern cloud technologies offer for cybersecurity readiness.
- Reliability. Applying the pillar ensures the software operates smoothly and performs its function correctly whenever expected, by testing the workload throughout its Software Development Life Cycle (SDLC).
- Performance Efficiency. It enables the efficient use of computing resources to satisfy system requirements, even under the unstable circumstances of technological advancement or varying demand.
- Cost Optimization. This pillar focuses on trimming unnecessary spending by striking a balance between the most convenient price point and effective solutions that deliver business value.
- Sustainability. The recently introduced pillar offers a way to address the business's environmental, economic, and societal footprint by boosting the efficiency of every software component, amplifying outcomes, and getting the best out of the resources available.
What is AWS Well-Architected Tool & what’s the role of this framework in SaaS App Development?
The AWS Well-Architected Tool, built on the AWS Well-Architected Framework, was created to enable reviews of your systems. It is guided by, yet not restricted to, the architectural best practices established by AWS, which ensure the development of application infrastructures characterized by high efficiency, operational resilience, reliability, and security.
At its core, the Tool is a questionnaire with a list of suggested answers you can choose from. Upon assessing your cloud application, service, or capability with the AWS Well-Architected Tool, you receive a thorough AWS Well-Architected Framework review of the workload, with an outline of possible remediation and/or improvement suggestions based on your current shortcomings and requirements. The Tool is therefore perfectly suitable for client-vendor cooperation, both when building software from scratch and when upgrading an existing cloud workload.
An additional feature alongside the AWS Well-Architected Tool that enables closer adherence to the best practices is AWS Well-Architected Lenses. AWS provides various types of lenses, each of which applies to a defined type of workload with a special set of questions, comments, and recommendations for improvement in a particular area. The AWS Well-Architected Framework goes above and beyond to offer tailored lenses for particular domains, for instance, machine learning, data analytics, serverless, high-performance computing, IoT, SAP, streaming media, the games industry, hybrid networking, and financial services.
How AWS Well-Architected Tool Works
With the most customized and comprehensive review of the cloud workload in mind, AWS advises clients not only to apply the available AWS lenses but also to create their own custom lenses. The six pillars of the AWS Well-Architected Framework, described in detail below, serve as the cornerstone for developing your own questionnaire and assessing your software against the best strategies established by your own organization.
The AWS Well-Architected Tool uses your data to develop an action plan for building a cloud service that conforms to the best practices defined either by you or by AWS Solutions Architects. To have the AWS Well-Architected Tool review your software, a few simple steps need to be taken. Once your workload is defined, you can apply one or more AWS Well-Architected Lenses, or alternatively your own custom lens, and launch the evaluation.
Usually, AWS Well-Architected Tool works in the following way:
Step 1 – Define workload
Inform the Tool of the type of software, from simple workloads (e.g., a static website) to complex ones (e.g., a microservices architecture with multiple data stores and many components), to lay the foundation for the review.
Step 2 – Conduct architectural review
Responding to a set of primary questions from the questionnaire prompted by the appropriate lens, evaluate your workloads against the best practices selected by you or by AWS.
Step 3 – Apply best practices
Having completed the review, receive a full account of the issues detected in your workloads, together with a detailed guide to the desired improvements.
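As a rough illustration of these three steps, here is a schematic Python model of the review flow. The class names, risk ratings, and questions below are invented for illustration only; the real Tool exposes this flow through the AWS console and the `wellarchitected` service API, not through these names.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    pillar: str
    text: str
    best_practices: list          # choices the lens suggests
    selected: set = field(default_factory=set)

    @property
    def risk(self) -> str:
        """No best practices followed -> high risk; some -> medium; all -> none."""
        if not self.selected:
            return "HIGH"
        if self.selected < set(self.best_practices):
            return "MEDIUM"
        return "NONE"

@dataclass
class Workload:                        # Step 1 - define workload
    name: str
    kind: str                          # e.g. "static website" or "microservices"
    questions: list = field(default_factory=list)

    def answer(self, text, choices):   # Step 2 - conduct architectural review
        for q in self.questions:
            if q.text == text:
                q.selected = set(choices) & set(q.best_practices)

    def improvement_plan(self):        # Step 3 - apply best practices
        """Questions that still carry risk, worst first."""
        order = {"HIGH": 0, "MEDIUM": 1, "NONE": 2}
        return sorted((q for q in self.questions if q.risk != "NONE"),
                      key=lambda q: order[q.risk])

wl = Workload("demo-app", "microservices", [
    Question("Reliability", "How do you back up data?",
             ["automated backups", "tested restores"]),
    Question("Security", "How do you manage identities?",
             ["least privilege", "MFA"]),
])
wl.answer("How do you back up data?", ["automated backups"])
print([(q.text, q.risk) for q in wl.improvement_plan()])
```

The output surfaces the unanswered security question as high-risk first, mirroring how the Tool's improvement plan prioritizes findings.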
Operational Excellence Pillar
The first pillar on the list addresses the launch and monitoring of developed systems. At the same time, the Operational Excellence Pillar involves the constant improvement of operational procedures, which include, but aren't limited to, responding to operational events, automating changes, and defining standards to manage daily operations.
Undoubtedly, there is little prospect of attaining operational excellence for an organization that isolates workload operations from its lines of business or its development teams. Business objectives are easier to accomplish when the teams, the operational processes, and the underlying business vision are aligned. The Operational Excellence Pillar helps restore that link, offering practices for architecting systems with effective responses to events (both planned and unplanned), efficient operation, timely status reporting, and continuous improvement in service of your business goals.
Aiming at operational excellence in the cloud necessarily entails understanding what it comprises. In this respect, four areas are distinguished:
Organization
Understanding the internal processes of workload creation, the contribution every team member makes to their team, and, in turn, the teams' contribution to fulfilling the goals of the business they work for is essential to establishing a mutually beneficial relationship that produces outcomes. This idea underlies setting shared priorities and standards and defining the organizational structure and responsibilities that govern well-coordinated, goal-oriented work, while still allowing amendments for evolving business and market needs.
Prepare
A properly conducted preparation stage provides an exhaustive understanding of the workload's possible planned and unplanned behaviors. It thus reduces the emergence of unforeseen threats during the development, running, and use of a SaaS app.
Operate
Keeping abreast of the processes and metrics running within the workload clarifies the app's current functioning, operational risks, newly released features, and the overall monitoring of a SaaS app. This leads to a more precise analysis of end-user satisfaction, including the probable reasons for disengagement. Another merit is the increased efficiency of development teams, which also reduces operational overhead, followed by a lower TCO.
Evolve
- Set the direction for improving your SaaS application based on the data received and analyzed in the Operate sub-section, and stick to it in order to arrive at the ultimate version of your workload;
- Employ the latest AWS services that cut development expenditure and/or facilitate the programming of new software features.
Security Pillar
Needless to say, your security posture is directly linked to customer confidence and brand value, beyond the more obvious risks of data breaches, financial losses, spyware, or system failure. The AWS Well-Architected Framework Security Pillar puts exactly this at its central point. The pillar allows you to make use of cloud technologies to predict, prevent, and respond to threats, as well as to enforce privacy, protect data integrity, guard assets, and enhance the detection of security events within a software environment.
There are six areas that comprise AWS security, namely: security foundations, identity and access management, detection, infrastructure protection, data protection, and incident response.
Cloud security foundations include:
- Shared Responsibility Model
The model distributes responsibilities between AWS and its customers. It frees the latter from burdensome responsibilities that are more effectively managed by AWS, for instance, protection of the infrastructure (the hardware, software, networking, and facilities that run cloud services). In turn, customers' responsibilities depend mainly on the type of cloud service they opt for.
- AWS Response to Abuse and Compromise
AWS helps identify potentially abusive activities, whether offensive, illegal, hacking-related, corrupting, or misusing resources, that could compromise the integrity, authentication, or availability of the software, thus posing a threat to your business or to other internet resources.
- AWS Account Management and Separation
AWS strongly recommends isolating workloads by arranging them in separate or grouped accounts according to function, a common set of controls, or compliance requirements.
Identity and access management
Running workloads on cloud services demands identity management and a permissions setup that allocates access only to authorized identities that comply with the established conditions. AWS Identity and Access Management (IAM) ensures you have complete control over your resources, granting full or partial access to your AWS accounts under defined circumstances. It also offers a wide variety of capabilities for managing the permissions of both human and machine identities.
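To make the idea of granting partial, conditional access concrete, here is a sketch of an IAM identity policy expressed as a Python dictionary. The bucket name, path, and statement ID are illustrative placeholders; the overall shape (Version, Statement, Effect, Action, Resource, Condition) follows the standard IAM JSON policy document format.

```python
import json

# Least-privilege sketch: a hypothetical application role may read and
# write one S3 prefix only, and only when the caller authenticated with
# MFA. Bucket and prefix names are placeholders, not real resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AppDataReadWrite",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/app-data/*",
            "Condition": {
                "Bool": {"aws:MultiFactorAuthPresent": "true"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A document like this would typically be attached to a role or user through the IAM console or CLI; anything not explicitly allowed stays denied by default.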
Detection
Protecting a system while ignorant of the dangers it faces is ineffective, if not pointless. That's why threat detection plays a key role in ensuring security, meeting legal and compliance obligations, quality assurance, threat identification, and incident response. Recognizing a potential malicious attack, unexpected behavior, or security misconfiguration is what detection is all about. Detection media come in different types; one example is scrutinizing logs from your workload for exploits in use. Meeting internal and external policies and requirements is imperative, and to this end you should consistently review the workload's detection mechanisms. Mechanisms such as automated alerting and notifications, triggered by defined conditions, prompt the investigation needed to identify the existence and scope of malicious activity.
Infrastructure protection
As a chief part of an information security program, infrastructure protection guarantees the safeguarding of your systems and services in the cloud from unintended and unauthorized access, as well as the reduction of vulnerabilities. Following the most reliable AWS practices, you will be able to establish trust boundaries (e.g., network and account boundaries), system security configuration and maintenance (e.g., system hardening, minimization, and vulnerability patching), and operating system authentication and authorization (e.g., users, keys, and access levels), among other relevant policy-enforcement points (e.g., API gateways and web application firewalls). Infrastructure protection also offers such methodologies as defense in depth, a multi-layered approach that helps satisfy organizational or regulatory obligations. This, along with the other methodologies on the list, is vital to operating cloud software effectively.
Data protection
Safeguarding data from privacy compromise or loss largely depends on the foundational practices followed at the initial stage of workload architecting. AWS offers numerous means you can choose from to prevent mishandling and to abide by regulatory obligations. For instance, data encryption renders information unintelligible to unauthorized parties, thus protecting it, while data classification organizes information according to its level of sensitivity.
Incident response
Dealing with security incidents through ad hoc procedures is barely reliable when it comes to your security posture. Beyond assuring that preventive and detective controls are in place, you most definitely need to prepare to respond to attacks and to mitigate their potential threat to your organization. Your team's ability to keep operating during a security event, isolate and contain incidents, conduct forensics on issues, minimize damage, and restore the system to its previous state is an immediate consequence of that preparation. Plan ahead, take care of the tools and mechanisms before any security threat occurs, and regularly practice incident response through game days, and your software will recover from threats with minimal business disruption.
Reliability Pillar
Reliability is what best reflects the quality of software functioning from the customer's point of view: precise, fault-free operation, with the ability to operate and test the workload throughout its lifecycle. The AWS Well-Architected Framework Reliability Pillar ensures the failure-free performance of the workload's tasks within a predefined time period and a predetermined environment. Should failures still occur, reliability focuses on swift recovery and restoring the system to its pre-error state so it can continue fulfilling its objectives. Beyond recovery planning, reliability in the cloud covers adapting to changing requirements and distributed system design.
The Reliability Pillar offers AWS best practices for the following four areas:
Foundations
Foundational requirements are a concept much broader than a single system, and the foundations that bear on reliability should be established at the very outset of architecting a workload. For instance, sufficient network bandwidth has to be foreseen and pre-planned for your data center; otherwise, inadequate resources may bring on long lead times. Taking such requirements into account at the preliminary planning stage underpins sufficient networking and compute capacity, resulting in flexible resource allocation and scalability in line with changing needs. AWS incorporates most of these foundational requirements into the cloud itself, ready to be adopted as appropriate.
The Reliability Pillar also centers on managing the quotas and limits present in the AWS cloud, including their regional peculiarities, to prevent unanticipated, undesirable outcomes (e.g., an unmet need to scale compute capacity because no more IP addresses are available). The same applies to planning the topology of the network infrastructure (both cloud and hybrid): it is crucial to secure service connections by anticipating potential issues and pre-planning their prevention or resolution.
Workload architecture
Careful forethought should be given not only to foundational requirements; your software and infrastructure reliability also depend heavily on the architectural decisions you make. To get a consistently reliable workload, certain patterns in Amazon cloud architecture design should be observed, including the following:
- service-oriented (a.k.a. microservice) aspect – separating workloads into isolated services according to the business domains or tasks these workloads perform;
- service communication aspect – separate services call for communication, which can be either synchronous or asynchronous;
- failure-response behavior aspect – should a failure occur in a separate service, or a group of services, there are three system-design options to cope:
- fail-fast: failure is promptly reported and isolated.
- stateless services: user sessions are kept separately, so server failures won't affect the app or the user experience. This is quite unlike stateful services, where all data is held on the server and, in case of failure, sessions are lost and users are logged out.
- retry attempts: should any failure arise, a retry mechanism has to be in place. Nonetheless, there should also be a limit on the number of retry attempts.
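The capped-retry idea above can be sketched in a few lines of Python. The function name and the use of `ConnectionError` as the transient failure are illustrative assumptions; real AWS SDKs ship their own configurable retry modes, so this is a sketch of the principle, not a replacement for them.

```python
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.1, sleep=time.sleep):
    """Retry a failing call with a hard cap on attempts and exponential
    backoff with jitter. `operation` is any zero-argument callable; `sleep`
    is injectable so the backoff can be skipped in tests."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:   # fail fast once the cap is hit
                raise
            # Full jitter: sleep a random amount up to the exponential cap.
            sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))

calls = {"n": 0}
def flaky():
    """Simulated service that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky, sleep=lambda s: None))   # → ok
```

The cap matters as much as the retries themselves: without it, a persistent outage would keep hammering the failing service instead of surfacing the error.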
Change management
As changes in business or technology are inevitable, any system you develop should be ready for them. Among the prerequisites for the reliable functioning of a workload environment is thorough preparation to anticipate and accommodate probable workload changes. These include both external changes, such as a traffic influx, and internal ones, such as new feature deployments, microservice design, and integration. The system should therefore be designed to monitor these changes and react by scaling resources, and also to update seamlessly, without disrupting end users.
Failure management
There is every chance that at some point your cloud software, irrespective of the provider, will fail. Most types of failure, though, can be anticipated and taken into account when designing the system. Such preparation facilitates failure identification, response, and system recovery with as little harm to result delivery as possible. It includes designing policies for backups, disaster recovery, high availability, and the isolation or replacement of damaged services, along with a rollback policy for returning the AWS infrastructure to its previous stable state.
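As a small illustration of what a backup policy review might automate, here is a Python sketch that checks whether a series of backup timestamps satisfies a recovery point objective (RPO). The function name and thresholds are hypothetical, not part of any AWS API.

```python
from datetime import datetime, timedelta

def meets_rpo(backup_times, rpo: timedelta, now: datetime) -> bool:
    """True if no gap between consecutive backups, nor the time elapsed
    since the last backup, exceeds the recovery point objective."""
    times = sorted(backup_times)
    if not times:
        return False
    gaps = [b - a for a, b in zip(times, times[1:])]
    gaps.append(now - times[-1])          # data at risk since the last backup
    return max(gaps) <= rpo

now = datetime(2024, 1, 2, 12, 0)
hourly = [now - timedelta(hours=h) for h in range(1, 25)]
print(meets_rpo(hourly, timedelta(hours=1), now))       # hourly backups meet a 1 h RPO
print(meets_rpo(hourly[::4], timedelta(hours=1), now))  # 4-hourly ones do not
```

The same shape of check, run on a schedule, turns a written backup policy into something the system can actually verify.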
In pursuit of reliability, then, certain measures should be taken to implement cloud system resiliency. An indispensable upfront step, before applying any recommended practices, is making sure that the team involved with your workloads is familiar with, and conscious of, the reliability targets you have set in order to achieve your business goals.
Performance Efficiency Pillar
One of the most salient tasks in supporting your organization's strategic planning goals is optimal resource allocation: attaining your objectives by making the most of the available resources. The AWS Well-Architected Framework Performance Efficiency Pillar addresses the selection, distribution, and optimization of IT and computing resources, with the intention of preserving efficiency through performance monitoring. These measures serve evolving business needs while keeping up with advancing technology on the one hand and shifts in demand on the other.
Performance Efficiency is realized through the areas described below:
Selection
One and the same task can be addressed in different ways with varying services, depending on particular requirements or limitations (these typically center around budget, performance speed, and stability). Balancing these, the ultimate solution for a given workload may incorporate numerous approaches. Through the Performance Efficiency Pillar, AWS offers resources of manifold types and configurations, so you are sure to select the approach that corresponds to your demands exactly.
AWS Well-Architected Framework lists the following sets:
- Performance Architecture Selection
- Compute Architecture Selection
- Storage Architecture Selection
- Database Architecture Selection
- Network Architecture Selection
Review
Though the inventory of tools used in architecting is limited, technology isn't static, and new approaches are constantly being developed, offering a new perspective on cumbersome and resource-consuming tasks. Experimenting with the new services and features that advancing technology has to offer can now be conducted more easily than ever: through infrastructure as code in the cloud. To keep up to date and implement cutting-edge practices that boost the system's overall efficiency in a timely manner, it is critical to systematically review the workload at every level: from infrastructure code and deployment strategies to specific implementations and app frameworks.
Monitoring
Upon implementation, your system needs constant monitoring. Monitoring covers two aspects: passive monitoring, which consists in observing the workload in real life and tracking down under- or overutilization, and active monitoring, which helps prevent or address issues before they can affect your end users.
Monitoring at AWS comprises five distinct phases:
- Generation – scope of monitoring, metrics, and thresholds;
- Aggregation – creating a complete view from multiple sources;
- Real-time processing and alarming – recognizing and responding;
- Storage – data management and retention policies;
- Analytics – dashboards, reporting, and insights.
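The five phases can be sketched end to end on a toy latency metric. All names, numbers, and thresholds below are illustrative, and this is a schematic model of the pipeline rather than the CloudWatch API.

```python
from statistics import mean

THRESHOLD_MS = 200                         # Generation: the metric and its threshold

samples = [                                # raw (minute, latency_ms) datapoints
    (0, 120), (0, 130), (1, 180), (1, 260), (2, 300), (2, 310),
]

def aggregate(samples):
    """Aggregation: combine raw datapoints into a per-minute average."""
    by_minute = {}
    for minute, value in samples:
        by_minute.setdefault(minute, []).append(value)
    return {m: mean(vs) for m, vs in sorted(by_minute.items())}

def alarms(aggregated):
    """Real-time processing and alarming: minutes that breach the threshold."""
    return [m for m, avg in aggregated.items() if avg > THRESHOLD_MS]

store = aggregate(samples)                 # Storage: the retained datapoints
print("breaching minutes:", alarms(store)) # Analytics: report the insight
```

In a real deployment each phase maps to managed tooling (agents emit metrics, a service aggregates and retains them, alarms fire on thresholds, dashboards report), but the data flow is the same.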
Trade-offs
There is no silver bullet when it comes to architecting solutions. Aiming to meet a certain requirement inevitably entails giving up on others; the tension typically builds around response speed, recovery speed, global availability, software maintainability or complexity, and the overall cost of development. To avoid ending up in a deadlock while balancing these, give careful thought to trade-offs. Their impact grows with architecture complexity, so load testing will be needed to measure the benefit you actually receive.
Cost Optimization Pillar
Business value can and should be delivered while avoiding expensive budget overruns. Guided by the AWS Well-Architected Framework Cost Optimization Pillar, you can create a perfectly functional service that corresponds to your demands and expectations while helping you achieve a better return on your overall technology investments. The Framework offers the best AWS practices in architecting, delivering, and maintaining cloud environments, so that you can build your workload's capabilities, configure and operate services, improve profitability, and avoid needless spending.
With the pillar's business-focused approach, you will gain insight into fund allocation, spending over time, resource selection, and scaling to serve your business needs within the optimal budget.
Cost Optimization on AWS leans on these five key areas:
Practice Cloud Financial Management
The primary task of Cloud Financial Management (CFM) is maximizing business value while optimizing scaling and expenditure. Financial management is nowhere near mindless resource throttling: it is about continuous discipline in allocating and managing assets. AWS suggests designating a separate position, or even a team, to control spending and cost reduction, and to raise the issue of cost optimization on a regular basis.
Expenditure and usage awareness
As a rule, organizations distribute workloads across different teams, each in a separate organizational unit with its own revenue stream. Given this, mere finance tracking is not enough; it is essential to understand the cost structure in terms of both the teams and the cloud services those teams use. Accurate cost and usage control can rely on access policies, such as AWS IAM policies and Service Control Policies (SCPs), which prevent accidental launches of unplanned, costly resources, or on properly configured expense alerts, which notify you when the budget is exceeded. Competent resource monitoring across workloads and organizational units helps eliminate waste, identify profitable organizational units and products, handle mismanaged resources, and allocate them according to needs and the budgetary framework.
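As an illustration of such a guardrail, here is a sketch of a Service Control Policy that denies launching any EC2 instance type outside a small allow-list. The allowed types are placeholder assumptions; the document shape (Deny effect plus a `StringNotEquals` condition on `ec2:InstanceType`) follows the standard SCP format.

```python
import json

# Guardrail sketch: deny ec2:RunInstances for any instance type outside
# the allow-list, so a stray launch of a costly instance is blocked at
# the organization level. The allow-list below is illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLargeInstances",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:InstanceType": ["t3.micro", "t3.small"]
                }
            },
        }
    ],
}
print(json.dumps(scp, indent=2))
```

Attached to an organizational unit, a policy like this caps what any account in that unit can spend on compute, complementing budget alerts that only report spending after the fact.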
Cost-effective resources
As already mentioned, there is no single solution to a task or requirement. Any business need can be met with more than one cloud service, and the choice is determined by your functional and non-functional priorities. While focusing on cost-efficiency, though, you shouldn't neglect the other aspects that directly affect your workload's performance. You will strike the right balance and find the most appropriate services, resources, and configurations provided that you carefully consider the following categories:
- Evaluate Cost When Selecting Services
- Select the Correct Resource Type, Size, and Number
- Select the Best Pricing Model
- Plan for Data Transfer
Manage demand and resource supply
It is generally accepted that the economic principle of meeting end users' needs with minimal resource expenditure should be observed in order to improve overall business performance while conserving resources. As demand is prone to change, your cloud infrastructure should be flexible enough to adjust accordingly. This is easily attainable on AWS thanks to pay-as-you-go billing, which allows you to bypass costly overprovisioning. At the same time, while some resources can be supplied just in time, others need a buffer to accommodate resource failures, high availability, and provisioning time.
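The way demand-based supply works can be illustrated with a simplified, target-tracking-style capacity rule. The function and its limits are hypothetical, though the proportional idea mirrors how target-tracking autoscaling sizes a fleet.

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, min_cap: int = 1,
                     max_cap: int = 20) -> int:
    """Size the fleet so the per-instance metric returns to its target,
    clamped to the fleet's minimum and maximum. Simplified sketch of the
    proportional rule behind target tracking."""
    raw = math.ceil(current_capacity * metric_value / target_value)
    return max(min_cap, min(max_cap, raw))

# Demand doubles: average CPU hits 80 % against a 40 % target, so a
# 4-instance fleet grows to 8; when demand fades, the fleet shrinks back.
print(desired_capacity(4, 80.0, 40.0))    # scale out
print(desired_capacity(8, 10.0, 40.0))    # scale in
```

Because you pay per instance-hour under pay-as-you-go billing, every scale-in step this rule takes translates directly into cost saved, which is exactly the overprovisioning the pillar warns against.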
Optimize over time
Evidently, technology never stands still: new solutions evolve that are cheaper and also compare favorably on indicators other than cost. For this reason, it is sensible to review your current cloud decisions to ensure their effectiveness, in budgeting and otherwise. Regularly scheduled reviews are important for keeping up with newly released AWS features and disposing of workloads that are no longer viable.
Sustainability Pillar
The sixth pillar of the Framework stems from the growing concerns of businesses with long-term goals and ethical standards. The AWS Well-Architected Framework Sustainability Pillar emphasizes the environmental dimension of the impact that running cloud services exerts. AWS puts forward the concepts of awareness, a shared responsibility model for sustainability, maximum use of minimal resources, and decreasing downstream effects. Applying the pillar's cloud architecture principles, operational guidance, potential trade-offs, and improvement plans to arrive at your sustainability objectives yields enhanced software efficiency as well as reduced environmental waste and harm.
The Sustainability Pillar is a recently introduced facet of the AWS Well-Architected Framework. As of this writing, AWS is still refining its purposes and functions to suggest tangible, application-oriented ways of employing the pillar in the development of cloud solutions. Follow our updates to stay informed.
Well done on reaching the end!
Certainly, the AWS Well-Architected Framework is not the easiest concept to grasp, so it is only logical that some grey areas remain even for those involved in IT. This article does its utmost to render the overall picture of this aspect of Amazon Web Services and to help readers of various backgrounds achieve a fundamental understanding. Still, it is alright if the ideas don't fully take shape upon a single reading; comprehension comes with experience and expertise. Romexsoft possesses plenty of both and is ready to assist with your projects whenever needed.