How to Evolve Application Architecture When Constantly Adding New Features: Romexsoft Experience

A pivotal step in modernizing application architecture is migrating from a monolith to microservices. While the benefits of containerized application architecture and microservices are well known, the actual process of migrating can be a perplexing task, especially on the first attempt. In this article, we explore our own experience of making the shift, the challenges we faced, and strategies for a successful migration, using the example of an application we developed from scratch. Reading on, you will find out:
  • how we developed a web-based app on monolithic architecture;
  • what changes we implemented in preparation for migration to the microservices architecture;
  • what we applied to comply with HIPAA;
  • what lessons engineers can learn from our trials and errors.

The overall problem with application architecture for evolving software

Monoliths enjoyed their momentum in software architecture for quite a while. Thanks to the ease of their construction, deployment, and maintenance, monolithic applications, typically developed as a single, tightly coupled unit running within one environment, used to be the preferred choice.

Today, the more recent microservices application architecture surpasses its predecessor even by the most obvious measures of scale and complexity, and the formerly well-suited monolithic architecture creates more obstacles than possibilities, particularly when it comes to developing, deploying, and supporting an application that is reliable and secure yet scalable and flexible.

As a company with considerable experience in software engineering, Romexsoft encountered the challenges of monolithic architecture early on while designing and maintaining applications built on this pattern. In this article, we share the story of our transition from a monolith to microservices, illustrated by a project developed from scratch that has now lasted for 12 years. Companies in the process of, or just considering, the shift to microservices will benefit from our conclusions comparing the advantages and disadvantages of both architecture patterns, as well as our insights on handling the pitfalls on the way from a monolith to a more modern application architecture.

About the application we develop under the project

TherapyBOSS is a customized Software-as-a-Service solution owned by Pragma-IT, based in Chicago, United States, designed specifically for therapy companies and health agencies. The application supports a range of services, including Early Intervention, Physical Therapy, Speech Therapy, Skilled Nursing, and the assistance of Medical Social Workers for patients receiving home treatment.

The TherapyBOSS app enables home health professionals to effectively coordinate patient care and ensure compliance, inter alia through the management of referrals, including scheduling, electronic documentation charting, tracking reassessments, and communication within the care team. The clinician application is a time- and cost-saving solution that streamlines operations and ensures full regulatory compliance for patients treated remotely. TherapyBOSS consists of a web-based and a mobile application, allowing clinicians to record treatment from any device, even without a WiFi connection. The app serves thousands of daily visitors, with more than 3,000 providers signed up at the time of writing.

What challenges in the healthcare industry this solution solves

The TherapyBOSS solution is a much-needed response to home health care needs around the management of:

  • Workflows between all stakeholders (e.g. home health agencies, therapy providers and clinicians).
  • Patients and referrals tracking.
  • Clinical staff management.
  • Clinical documentation.
  • Invoices, billing and payroll.
  • Scheduling and appointments.

Application evolution as the project progressed

The beginning of the project and its inherent challenges

The TherapyBOSS app was a project built from scratch. Having started in 2010, we developed the UI and both the front-end and back-end architecture, database design included, and released the solution to production two years later. We continue developing new features and maintaining the product to this day.

In the early stages of the project, the team comprised two full-stack engineers, one UI designer, and one QA expert. The initial approach involved a monolithic architecture in which all the services were hosted on a single on-premise server and operated using a single database. One of the primary benefits of a monolithic web application architecture is its ability to accelerate development by keeping the solution in a single code base. Although this approach may have been effective a decade ago, it no longer aligns with modern development practices and our current requirements, in particular with cloud application architecture.

Present-day technological demands make the once reliable and sufficient monolithic application architecture obsolete. Our practical experience reveals that this architectural approach is increasingly burdened with shortcomings, which we present in detail below.

Monolithic application architecture disadvantages

Single point of failure
A single point of failure is a critical component of the system whose malfunction would bring down the entire system. In other words, a monolithic system lacks the redundancy needed to maintain functionality when a key component fails. This makes monolithic application architecture less reliable and less resilient in the face of disruptions, often leading to a complete shutdown of the system.

Performance issues
Performance issues in monoliths can be mitigated by scaling the entire service. Still, all services share a single database, so bottlenecks or limitations there can have far-reaching consequences. Optimizing database queries to streamline the system-database interaction is one way out. Another is using read replicas to distribute the load across multiple instances of the database, reducing the strain on any one of them. That said, there are inherent limits to the efficacy of such optimization measures.

Infrastructure cost
A monolithic service recurrently encounters performance issues, and, as stated above, its efficient operation can be maintained only by scaling the entire system. Such a step typically entails increased expenditure on the application's infrastructure, such as additional servers, storage space, and networking equipment, which adds up over time. Without additional resources and infrastructure to support the expanded service, it will not be able to meet users' needs.

Problems with deployment
In order to keep up with user requirements and hold a competitive standing, monolithic solutions are bound to have frequent changes implemented. As monolithic application architecture comprises tightly coupled and interdependent components, any modification made to one component can impact the entire system’s functionality, rendering it inoperable. Even a minor alteration to an application can necessitate the complete re-deployment of the entire application, which is both time-consuming and resource-intensive.

Less scalability
Not only does tightly coupled architecture pose a challenge to app deployment, it also impedes scalability. As the codebase expands, for example when new features are incorporated, the entire architecture must be adjusted to accommodate the changes. Even a small modification to a single function requires altering the entire solution. Apart from the intense resource consumption mentioned above, implementing changes and scaling disrupt your continuous delivery process, leading to delays and other complications.

High dependence between functionalities
Monolithic software applications with tightly coupled functionalities can create maintenance and scalability challenges. Changes in one area can lead to unforeseen effects in other parts, complicating debugging and issue isolation. Furthermore, introducing new features may require changes in multiple components, resulting in longer development and release times. As a result of this close dependency, monolithic applications may experience software engineering problems and downtime.

Monolith vs Microservices Architecture

Initial features vs architecture development through the lens of application users growth

Mobile application part

We launched a web-based project to assist home health agencies and therapy companies with streamlining their daily workflow, patient management, billing automation, and clinical documentation processing. The idea behind the solution was to eliminate the need for clinicians to manually record clinical information on paper, instead providing them with an electronic platform for managing all their documentation.

However, due to inconsistent internet connection in some regions of the US, we developed a separate laptop application that operated in offline mode. This way, clinicians could access all the application’s features, such as scheduling patient visits, filling out clinical documentation, and communicating with other members of the care team, even without internet access.

The main challenges we experienced concerned:

  • ensuring that the app operates in offline mode;
  • establishing HIPAA compliance: the ePHI stored in the app has to be encrypted;
  • maintaining correct synchronization of locally added data without loss or distortion;
  • developing a synchronization algorithm to reconcile records added simultaneously on different devices;
  • creating a calendar to enable clinicians to manage their scheduled appointments.

How we handled these challenges:

  • In order to guarantee offline app operation, we stored data in an SQLite database on the local device. Upon first login, all the required information was loaded, so a server connection was not critical for operation. All tasks and computations were carried out using the data stored locally.
  • To ensure user privacy, we utilized encrypted mode in SQLite, which encrypts all data stored within the database. This meant that no one, aside from our application, could access user information. Additionally, our application logic was designed to only retrieve data from the server that the current user was authorized to access, for instance, a clinician would only be able to see data for the patients they were currently treating.
  • To synchronize locally added data correctly, we equipped our synchronization algorithm with robust validation logic that prevented records from syncing for patients to whom the current user had no relation.
  • Aiming to merge clinical records added simultaneously on various devices and resolve the resulting conflicts, we decided to first replace the local ID with a synchronization_key that uniquely identifies records across devices, and then use last_updated_on to synchronize only the records modified after the last sync-up (see the sketch after this list).
  • With a view to creating a full calendar with an agenda list feature and facilitating the management of scheduled appointments, we implemented recurrence rules (RRULE) from the iCalendar specification.
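
To make this concrete, here is a minimal Java sketch of the timestamp-based sync just described, assuming a local table with synchronization_key and last_updated_on columns; the table, column, and API names are illustrative rather than the actual TherapyBOSS schema:

```java
import java.sql.*;
import java.time.Instant;
import java.util.UUID;

/**
 * Minimal sketch of timestamp-based sync between a local SQLite store and
 * the server. Table, column, and API names are illustrative only.
 */
public class RecordSynchronizer {

    /** Push local records modified since the last successful sync-up. */
    public void pushChanges(Connection local, RemoteApi server, Instant lastSyncedAt)
            throws SQLException {
        String sql = "SELECT synchronization_key, payload, last_updated_on "
                   + "FROM clinical_record WHERE last_updated_on > ?";
        try (PreparedStatement ps = local.prepareStatement(sql)) {
            ps.setTimestamp(1, Timestamp.from(lastSyncedAt));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // The globally unique key lets the server upsert without
                    // relying on device-local auto-increment IDs.
                    server.upsert(rs.getString("synchronization_key"),
                                  rs.getString("payload"),
                                  rs.getTimestamp("last_updated_on").toInstant());
                }
            }
        }
    }

    /** New records get a key that is unique across all devices. */
    public static String newSynchronizationKey() {
        return UUID.randomUUID().toString();
    }

    /** Placeholder for the server-side API; illustrative only. */
    public interface RemoteApi {
        void upsert(String synchronizationKey, String payload, Instant updatedOn);
    }
}
```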

In 2012, our primary goal with TherapyBOSS was to launch an MVP application at the earliest opportunity in order to test as soon as possible whether the desktop app would work regardless of the internet connection. At that time, our company had experience building apps on the Adobe AIR platform, so it was our preferred choice for the desktop version of the solution. The decision was also motivated by the fact that smartphones were not yet prevalent, though in a few years it would have to be reconsidered. As mobile platforms took off, we turned to Adobe once more, as their renewed toolkit allowed smooth conversion of the solution to both Android and iOS.

As mentioned before, we were striving to test an MVP so that the stakeholders could decide on the feasibility of further investment. Although a cross-platform solution suited this initial objective perfectly, as the market response proved favorable and the app attracted ever more users, it became outdated rather soon. We ran up against the cross-platform app's non-native UI, single-threaded operation, and upgrade limitations; as a result, TherapyBOSS now has a native iOS app with a modern, intuitive UI and special features to facilitate the work of clinicians, such as optimized map routes for planning visits and the iCloud calendar for arranging them.

Web application part

The main challenges we experienced concerned implementing:

  • billing and payment systems between various business entities;
  • automated processing of ERA/EDI transactions;
  • integration of various payment gateways;
  • distributed cache and session management;
  • an integrated reporting solution;
  • HIPAA compliance for the web app;
  • robust personal information security policies;
  • the maximum possible restriction of personal data access;
  • regular backups;
  • disaster recovery;
  • CI/CD processes;
  • automated testing;
  • integration with AWS.

Initial architecture

Objective: launch an MVP and base the further development on the actual user feedback.
Timeframe: two years to build the first version.
Team: two developers at the outset, and a QA expert hired before deployment.
Infrastructure: both PROD and DEV environments, the latter accessible to the customer to track progress.
Capacity: the web application architecture of the initial server sufficed for about 2 years of smooth operation until the exponential growth in the number of users caused regular lags.
Architecture: a monolith consisting of a single web server operating on Windows OS which also hosted the database.
Stack: Java, Struts, Spring Security, Hibernate, FreeMarker, jQuery, HTML/CSS, and MySQL.
Automation: no CI/CD processes, all builds and deployments were done manually.

Here is what the initial application architecture looked like:

Initial app architecture

Interim application architecture

Lacking the resources for drastic changes to the solution, we resorted to a few temporary measures which nonetheless boosted performance and enhanced the user experience. These included changes to the:

Server: we added one more web server and balanced the load between the two servers with Nginx.
Database: we moved from MySQL to MariaDB to speed up the database and set up Master-Slave replication to distribute the load across different nodes.
OS: we migrated from Windows Server to Linux (Ubuntu) to strengthen security and automate infrastructure management.

Here is the schema of our interim app architecture:

Interim app architecture

The renewed architecture with these interim measures met the requirements of the market at the time and for a couple of years to come, until its capabilities were exhausted and the need for more radical transformation grew more pressing. Deeper modernization required automation, in particular:

  • Implementation of CI/CD mechanisms and practices based on Jenkins.
  • Integration of automated testing, using Selenium and Cucumber, prompted by a number of critical modifications to the solution that entailed a considerable amount of regression testing.

Current application architecture

Technological advances and market demands prepared us for more radical changes within the application; in particular, we had matured enough for major architectural changes and the first steps in migrating the monolith to microservices.

We began breaking the monolith into microservices by extracting certain features into separate services and allocating resources to them. One example is extracting the logic responsible for synchronization between the mobile and web apps. This relieved the load on the core app and distributed it among microservices, which are also easier to scale. We also boosted performance and reliability in case of an outage by providing redundancy: two instances per microservice with a load balancer distributing requests. As for the framework, we first migrated from Struts to Spring MVC and then completely to Spring Boot.
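
To illustrate the extraction step, here is a hedged sketch of the kind of Spring Boot endpoint such a synchronization microservice might expose; the path, parameter, and DTO names are hypothetical, not our production API:

```java
import java.time.Instant;
import java.util.List;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.web.bind.annotation.*;

// Hypothetical endpoint of the extracted synchronization microservice.
@RestController
@RequestMapping("/api/sync")
public class SyncController {

    // The mobile client pulls only the records changed since its last
    // sync-up instead of re-downloading the full data set.
    @GetMapping("/changes")
    public List<RecordDto> changesSince(
            @RequestParam("since")
            @DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME) Instant since) {
        return List.of(); // delegate to a MongoDB-backed service in practice
    }

    public record RecordDto(String synchronizationKey, String payload, Instant updatedOn) {}
}
```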

Servers
We have purchased five new servers on which we rolled out the infrastructure for the DEV and PROD environments: one server is allocated wholly to DEV environments, encompassing web application and database instances as well as Jenkins for CI/CD; two more are committed to the core web application and microservices; and another two are assigned to our database infrastructure, specifically MariaDB and MongoDB. An important aspect of our current architecture is that each service is duplicated on two distinct physical servers to prevent downtime if one physical server fails.

Infrastructure management
We utilize Proxmox for administering our virtual machines. With the expansion of our architecture, monitoring microservices and infrastructure has become an acute need. To address it, we decided to use Zabbix, which monitors not only microservices but also web applications and hardware. The tool has been configured to spot potential incidents before they occur so that we can prevent them proactively.

Database
We have added new nodes to MariaDB, keeping it for the web app, while for the microservices we opted for the more agile and scalable MongoDB. We also employ Apache Kafka to manage real-time data streams for async operations. As the major part of the business logic was implemented using SQL queries, the database became a bottleneck. To address this, we deployed MariaDB Galera Cluster along with MaxScale to accelerate request processing. We now have a fault-tolerant multi-master cluster comprising six nodes.
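
For the async operations mentioned above, here is a minimal sketch of a Kafka producer in plain Java; the broker addresses, topic name, and payload are placeholders, not our production configuration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SyncEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-1:9092,kafka-2:9092"); // placeholder brokers
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish the event; the consuming microservice processes it
            // asynchronously, off the web request path.
            producer.send(new ProducerRecord<>("record-updated", "patient-42", "{...}"));
        }
    }
}
```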

Here is the schema of our current “on-premises” architecture for the whole application:

Current on-premises architecture

How we are planning to develop the app’s architecture in the future

While working on the project, we are fully committed to continuously improving our solutions with various methods and approaches. Anticipating increasingly stringent technological demands, we have developed a strategy for upgrading the architecture of the TherapyBOSS app.

Acknowledging the constraints that physical servers may impose on the architecture, we have resolved to set AWS web application architecture as the main direction for our forthcoming journey from monolith to microservices.

The initial steps we have already taken toward migrating the monolith to microservices include adopting AWS S3 as a backup for all files uploaded to our application and establishing a disaster recovery environment powered exclusively by AWS services.
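
As an illustration of the S3 backup step, here is a hedged sketch using the AWS SDK for Java v2; the bucket name and region are assumptions for the example:

```java
import java.nio.file.Path;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class S3BackupClient {
    private final S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build();

    /** Write a copy of an uploaded file to the backup bucket. */
    public void backup(Path file, String key) {
        s3.putObject(PutObjectRequest.builder()
                        .bucket("therapyboss-backup") // illustrative bucket name
                        .key(key)
                        .build(),
                     file);
    }
}
```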

HIPAA compliance: how we managed it

The TherapyBOSS application must comply with HIPAA requirements because we collect, process, and store patients' sensitive data, including electronic and paper clinical documentation related to patients and information about the services received, or to be received, by the patient.

Implementing HIPAA compliance evidently has a significant impact on the architecture of the application (both the web and mobile versions), as we must encrypt patient data, be meticulous about application security, and implement logic that grants access to patient data only to those users who need it for their work, as well as:

  • use only HIPAA-compliant AWS services;
  • implement additional data backups (to prevent possible data loss);
  • implement additional data recovery procedures and plans.

HIPAA, also known as the Health Insurance Portability and Accountability Act, is something each health tech solution functioning in the US is supposed to conform to. As an official federal law document, HIPAA sets the standards that are mandatory to observe for safeguarding patient privacy. Thus, any system dealing with protected health information must be constructed in accordance with HIPAA in order to be able to guarantee sensitive data protection.

HIPAA compliance is a complex undertaking and necessitates numerous steps. The scope of work for a health tech solution to become HIPAA-compliant is determined by the specifics of how personal health-related data are collected, stored, transmitted, and displayed. However, there are a few widespread scenarios one may consider as guidance when carrying out a strategy for HIPAA compliance; we recount them below.

Access control requirements implementation

Healthcare solutions are nowadays built as complex systems so that they can handle multiple requests from a great number of users across various organizations for all kinds of health-related data.

Speaking of data access, in spite of technological advancements in sensitive information privacy and access control, enacting robust Access Control, a key point within the HIPAA rules, still presents difficulties due to the complexity of data access in modern solutions. As one of the objectives of health tech apps is effective and timely treatment, solutions tend to grant users broader access privileges, which can create room for a data breach.

To tackle this challenge and establish access control successfully, we took the two basic HIPAA requirements into account. First, we prioritized the assignment of a unique user ID for tracking user identity as crucial to the authentication system. Second, we put in place protocols for obtaining access to electronic health information.
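
As a minimal sketch of how such a protocol can be expressed in code, here is a Spring Security method-security example; the accessChecker bean and service names are hypothetical, not our actual implementation:

```java
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

// Assumes method security is enabled (e.g. @EnableMethodSecurity).
@Service
public class PatientRecordService {

    // Only clinicians related to the patient may read the record;
    // 'accessChecker' is a hypothetical bean encapsulating that rule.
    @PreAuthorize("@accessChecker.canView(authentication.name, #patientId)")
    public PatientRecord loadRecord(long patientId) {
        return null; // stubbed for brevity
    }

    public record PatientRecord(long patientId, String chart) {}
}
```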

Person or entity authentication requirements implementation

Ensuring Person or Entity Authentication is another major requirement for health tech apps on the way to complying with HIPAA standards. The procedure essentially verifies that the user requesting electronic protected health information (ePHI) actually has the necessary access privileges, regardless of the verification level. When implementing Person or Entity Authentication for TherapyBOSS, we relied on the following four verification features proposed by HHS:

  • “biometric” identification system
  • “password” system
  • “personal identification number” (PIN)
  • “telephone callback” or a “token” system that uses a physical device for user authentication.

Transmission security requirements implementation

The policy of Transmission Security is relevant for information transmissions across electronic communications and networks, wired or wireless connections, as well as within an application, and applies not only to information mandated under HIPAA regulations but to all forms of individual health data stored and conveyed.

The security requirements do not dictate certain specific technologies or approaches to be applied; as the businesses vary, so do their individual needs and the technological solutions that accommodate them.

To ensure the security of the TherapyBOSS web-based solution, we at Romexsoft decided to obtain an SSL certificate. Secure Sockets Layer (SSL) encrypts all input and output traffic to and from the app. We were governed by the principle that every page that contains or collects protected health information or transmits authorization cookies, in addition to user login pages, is to be encrypted with SSL. One more step we took to ensure ePHI security was eliminating alternative insecure versions of the aforementioned pages that could be accessible to users.
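
At the framework level, this "no insecure pages" principle can be enforced in one place rather than per page; here is a minimal Spring Security sketch (the configuration class is illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class TransportSecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        // Redirect every request to its HTTPS equivalent, so no page can
        // be served over an insecure channel.
        http.requiresChannel(channel -> channel.anyRequest().requiresSecure());
        return http.build();
    }
}
```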

The data backup and storage implementation

Backup services for electronic protected health information (ePHI) are one of the fundamental requirements on the way to HIPAA compliance. According to HIPAA regulations, all electronic protected health information collected, stored, processed, and displayed is to be backed up, with the backup copies kept in various secure locations in line with established best practices.

Both developers and product owners distinctly see the benefits of backups in the event of outages or disasters, though backups must be taken care of well in advance. Such measures prevent information loss under unforeseen circumstances, for instance, damage to an on-premises data center from causes such as natural disasters.

So we have taken full advantage of the procedure, making the backup copies easily retrievable by authorized persons from different secured environments in case one physical location fails. In addition, our experience with TherapyBOSS suggests that the scale and size of the data should be considered before creating backup copies, as they affect the implementation process.

Integrity as a feature

For organizations that collect, process, store and present sensitive data, a great threat is posed by information tampering. With unencrypted and digitally unsigned information, it is virtually impossible to detect or prevent data tampering. Depending on the specific business needs and requests, the application may or may not need tamper-proofing; the methods and approaches to achieve it also vary according to the organization’s specifics. In most cases, implementing PGP, SSL, or AES encryption can be an effective solution to ensure data integrity and security, which also addresses the next point of concern.

Encryption and decryption implementation

§164.312 of the HIPAA Security Rule mandates that a covered entity have a mechanism in place for the encryption and decryption of electronic protected health information. To ensure compliance with HIPAA policies and procedures, it is essential to determine the appropriate technology for the healthcare application. Encryption is widely considered the industry standard for sensitive data protection. The process involves algorithms that convert data into indecipherable symbols, which can only be decrypted using a security key. Data encryption is particularly important when ePHI is stored or backed up in locations accessible to non-staff users.

With TherapyBOSS, we strove for the highest level of security, so we encrypted all collected and stored ePHI and made it accessible exclusively to authorized individuals with the proper security keys. This has not only ensured compliance with HIPAA regulations but also safeguarded all health-related sensitive data against unauthorized access in all cases short of a security key compromise.
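
For illustration, here is a minimal AES-GCM sketch using the standard JDK crypto API, one common way to encrypt ePHI at rest; key management (storage, rotation, access) is deliberately out of scope:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class EphiCrypto {
    private static final int IV_BYTES = 12;  // recommended GCM nonce size
    private static final int TAG_BITS = 128; // authentication tag length

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        // Prepend the IV so the decrypting side can recover it.
        byte[] out = new byte[IV_BYTES + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, IV_BYTES);
        System.arraycopy(ciphertext, 0, out, IV_BYTES, ciphertext.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, ivAndCiphertext, 0, IV_BYTES));
        return cipher.doFinal(ivAndCiphertext, IV_BYTES, ivAndCiphertext.length - IV_BYTES);
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }
}
```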

Audit controls implementation

Entities subject to the HIPAA Privacy and Security Rules are expected to have audit controls for tracking and monitoring activity involving ePHI in their electronic systems. The audit controls should be supplemented by reviews of the audit records to verify that the systems' activity conforms to the established norms; the records typically cover events such as logins and logouts and file access, updates, and edits.

Following the requirements of HIPAA, we have implemented appropriate audit controls across the hardware, software, and procedural mechanisms to track and scrutinize activities within the information systems of TherapyBOSS.

We have discovered that when it comes to establishing audit controls for ePHI, there are two points to bear in mind. First, it is vital not only to detect security incidents in a timely manner but also to take prompt and proper corrective action to tackle them; to this end, we recommend adopting real-time reviews of audit trails. Second, there is no point in tracking and discovering security incidents if your application is not governed by appropriate policies and procedures. Proper audit control standards acknowledged by HIPAA will largely shape the outcomes of the app's risk analysis, so to take preventive or corrective measures, one had better set up the policies and procedures accordingly. For TherapyBOSS, we implemented logic that enables full traceability of a patient's documentation.
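
Here is a minimal sketch of the audit-event shape such traceability implies; the field and action names are illustrative, not the TherapyBOSS audit schema:

```java
import java.time.Instant;

// Illustrative audit event: who did what to which resource, and when.
public record AuditEvent(String userId, String action, String resourceId, Instant at) {

    public static AuditEvent of(String userId, String action, String resourceId) {
        return new AuditEvent(userId, action, resourceId, Instant.now());
    }
}

// Typical usage at an access point, e.g. when a clinician opens a document:
//   auditLog.append(AuditEvent.of("clinician-17", "DOCUMENT_VIEWED", "doc-123"));
```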

Automatic logoff implementation

Automatic logoff is a common security feature that automatically ends a login session after a preset period of user inactivity and requires authorized users to re-enter a password to access their ePHI anew. This measure ensures that access to sensitive data is terminated when a user walks away from their device.

Unlike some of the required HIPAA specifications, timeout and logoff belong to the addressable specifications. This allows some flexibility in how the requirement is implemented, depending on the particulars of a given application. For this reason, the timeout-before-logoff period ranges from as little as 2 minutes in high-traffic areas to much longer periods for electronic information systems safeguarded by controlled, limited access (e.g. laboratories). The most widespread choice for the timeout period is 10 minutes, though it is best determined on the basis of the covered entity's risk analysis and its policies and procedures.

Our decisions for TherapyBOSS were informed by the solution's particular features and requirements. While the web application automatically logs the user out after an hour of idleness, the timeout-before-logoff period for our mobile application amounts to 24 hours of user inactivity. Additionally, the mobile app automatically closes screens containing private data (such as the screens for changing passwords and personal information) after a minute without user input.
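
For the web application's one-hour logoff, here is a minimal servlet-level sketch; the listener is illustrative, as the same timeout can equally be set in container configuration:

```java
import jakarta.servlet.annotation.WebListener;
import jakarta.servlet.http.HttpSessionEvent;
import jakarta.servlet.http.HttpSessionListener;

@WebListener
public class SessionTimeoutListener implements HttpSessionListener {

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        // Idle sessions expire after 60 minutes; the user must log in again.
        se.getSession().setMaxInactiveInterval(60 * 60);
    }
}
```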

Learning from the app development failures

Developing a health tech application surely is a complex process, and mistakes are bound to happen. As promised, hereunder we are going to share our own failures alongside valuable lessons for developers and entrepreneurs alike that we have accumulated from Romexsoft’s experience in developing TherapyBOSS.

Maintain documentation during each project stage (architecture & infrastructure)

At the outset of a project, keeping thorough documentation may seem a waste of time, although as development unfolds, it proves time and again to be a time- and effort-saving investment. While relying on a few engineers as repositories of essential technical details may seem justifiable, it is necessary to adopt a longer-term perspective. Lacking scrupulous records is likely to cost your organization dearly when a key engineer resigns, when new employees are onboarded, or when someone has to dive into the logic of functionality developed at the start.

As for our experience with TherapyBOSS, we have already come up against some of the aforementioned challenges brought about by insufficient documentation. That is why we now mitigate such risks by keeping track of the essentials of our architecture and application in Confluence.

Build application on one technological stack (try to avoid completely different frameworks)

As mentioned in the section on the current architecture of TherapyBOSS, we initially used the Struts and Spring frameworks in developing the solution. As for the framework shift, certain vulnerability issues in Struts led us to transition fully to Spring. Using a single technological stack reduces not only the effort of integrating frameworks and other technologies but also the likelihood of issues occurring.

Have AT scenarios for at least the most critical functionality

When it comes to investing, it can be quite a challenge to justify certain funding priorities to app owners. That is typically the case with automated testing, which most clients believe to be less financially reasonable than manual testing. This held true for us with TherapyBOSS, so for several years into the development process we employed manual testers. Meanwhile, we kept running into changes in crucial functionality that required alterations and retesting of whole features. Manual retesting turned out to cost a great deal of time, effort, and money, so in the end we introduced automated testing to TherapyBOSS after all. By starting small and implementing automated scenarios for the most critical functionality, we managed to strike the right balance between efficient testing and optimized costs.

Misunderstanding the requirements

Before commencing any development, it is crucial to establish a thorough understanding of the customer's requirements and to clarify any ambiguities that may arise. Misunderstandings mostly happen to inexperienced engineers who hesitate to admit they do not fully understand a task, which leads to bugs being discovered during testing. We have resolved this issue by conducting regular planning sessions with the customer or their representative before each sprint to ensure clear communication and understanding; we also encourage engineers to ask questions and offer their interpretation of tasks and features. It is important to remember that the later a bug is discovered in the development process, the more it costs to fix.

Overflow data size

Another lesson we’ve learned from our own experience is choosing the appropriate data type for the ID column in the database, and using them sensibly. We once stumbled upon an issue stemming from database neglect: some of the IDs in our tables reached the maximum allowed value for INTEGER. We undertook a database examination which revealed gaps between IDs since the code removed and reinserted the same records whenever an entity was edited. What we thereafter did was generate a new ID for each record and changed our code to update the corresponding record instead of deleting and reinserting it.

Create flexible application architecture at the very beginning

When starting the design of a new software solution, devoting sufficient time to considering all possible future aspects of the product's growth proves to be of paramount importance. Below we list the points that need to be taken into account at the initial stage so that the solution can be easily scaled and supplemented with new features as needed.

Architecture
The more decoupled the app's components are, the more freedom engineering teams have to develop and maintain separate modules of the solution, in addition to designing, running, testing, and troubleshooting distinct components of each module. Structuring your application into distinct layers and modules makes sense if you wish to keep open the possibility of replacing a layer or a module at a later stage of the app's development. Another noteworthy remark concerns choosing common architecture patterns rather than inventing your own, to ensure that your architecture is comprehensible to engineers outside your current development and DevOps team.

Database
When designing the database, use clear and understandable table and column names to avoid confusion in the future, as databases tend to be significantly extended; e.g. the TherapyBOSS primary database has grown beyond 200 tables. Grouping fields with similar meaning into separate tables and linking them helps avoid having too much information in one table and, as a result, speeds up record updates. For microservices, it is better to have smaller databases that contain only information related to the specific service, which facilitates horizontal scaling.

Implement MVP and then incrementally add new features having feedback from real customers

This is common sense rather than exclusively our experience, yet ignoring the practicality of an MVP is an unfortunate error. An MVP essentially avoids wasting considerable resources on developing a fully featured, functional product only to discover an unfavourable market response. The more justified approach is to launch a minimally equipped product, expose it to real users, and, having gathered feedback from the market, implement only the useful, value-generating features.

Set up CI/CD for releases

While at the beginning of the project manual work may suffice, as the project expands and various modules and services need to be integrated, establishing a strategy and processes for continuous integration and continuous delivery (CI/CD) makes all the more sense, especially for the release turnaround time. We automate our building, testing, packaging and deploying with Jenkins.

Integrate tools to monitor infrastructure

Preemptively identifying and addressing potential issues, rather than simply reacting to incidents as they arise, is an advantageous strategy regardless of individual specifics. Regrettably, we came to practice this only after experiencing several incidents in our production environment. Effective management of a large infrastructure with multiple services calls for visibility into resource utilization, including available space, memory, and processor load. Monitoring and measuring precede any improvement.

Have staging infrastructure that is almost the same as production infrastructure to test new changes before the release

We resolved to commit resources to a staging environment, admittedly after some performance issues. We discovered that the resources and data set in the development environment were vastly different from those in production, which led to an unsatisfactory performance gap between the two. To ensure quality and performance in a production-like environment, we established staging environments to test any new modifications before deploying them. This has minimized the likelihood of performance issues and, ultimately, created a more reliable product with an improved user experience.

Evolving application architecture FAQ

How can the principles of evolving application architecture be applied to other industries outside of healthcare?

The principles of evolving application architecture are universal and can be applied across various industries. For instance, in the finance industry, these principles can help in managing complex transactions and ensuring data security. In the retail industry, they can aid in handling large inventories and enhancing customer experiences. The key is to understand the specific needs and challenges of each industry and adapt the architecture to meet those needs.

What are some potential pitfalls or challenges that one might face when evolving architecture in a highly regulated industry?

Evolving architecture in a highly regulated industry can present several challenges. These may include ensuring compliance with industry-specific regulations, managing sensitive data securely, and maintaining system integrity during the transition. It's crucial to have a clear understanding of the regulations, involve stakeholders early in the process, and implement robust security measures to mitigate these challenges.

How can organizations ensure that their modern application architecture remains flexible and adaptable for future changes?

Organizations can ensure the flexibility and adaptability of their modern application architecture by adopting principles such as modularity, scalability, and loose coupling. Using cloud-based services can also provide flexibility as they can be easily scaled up or down based on demand. Regularly updating the technology stack and adopting DevOps practices like continuous integration and continuous delivery (CI/CD) can also help in maintaining adaptability.

What strategies can be employed to minimize disruption during the transition from a monolithic architecture to a microservices architecture?

To minimize disruption during the transition, organizations can adopt a phased approach, gradually replacing parts of the monolithic system with microservices. This allows the existing system to continue functioning while the new system is being built. Additionally, using containerization technologies like Docker and orchestration tools like Kubernetes can help manage microservices more efficiently and reduce disruption.
