Are you the poor guy who gets to move legacy applications to the cloud or other modern platforms? The path to modernizing legacy applications is paved with containers and microservices, as well as new tooling and development processes. The trick to getting through this process is making the right choices about how to refactor your applications to take advantage of the best approaches and technologies.
Most of us don’t have the luxury of building “net new” applications for cloud-based platforms. Instead, we must migrate existing or legacy applications to the cloud, and these applications could be anywhere from 5 to 20 years old.
Some applications can take a “lift and shift” path to the cloud, meaning that few if any code modifications are made. Most, however, will need to be significantly refactored to take advantage of cloud features. These applications are redesigned, recoded, and repurposed for the specific cloud platform. This gives the legacy application a new life, and a new purpose.
As you might expect, this is not a straightforward process, and there are new technologies that need to be considered. The enablement of the application to externalize APIs and microservices is the best path for applications to provide the best functionality on cloud platforms, and the containerization of the applications ensures an easily distributed architecture and cloud-to-cloud portability.
Here are my best practices for modernizing and refactoring legacy applications, including understanding how to identify the right places for containers and microservices, as well as how rebuilding should occur:
Basics of containers and microservices
The use of containers to “wrap” or containerize existing legacy applications comes with a few advantages, including the ability to reduce complexity by leveraging container abstractions. The containers remove the dependencies on the underlying infrastructure services, which reduces the complexity of dealing with those platforms. This means we can abstract access to resources, such as storage, away from the legacy application itself. That makes the application portable, and it also speeds the refactoring of legacy applications, since the containers handle much of the access to native cloud resources.
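To make the abstraction idea concrete, here is a minimal sketch in Python. It assumes the container platform injects resource locations as environment variables, so the code never hard-codes a cloud-specific storage path; the variable names and default values are my own illustrative choices, not a standard.

```python
# Hypothetical sketch: the container platform (Docker, Kubernetes, etc.)
# injects STORAGE_ENDPOINT and STORAGE_BUCKET at deploy time, so the
# same image runs unchanged on any cloud. Names are assumptions.
import os


def storage_config() -> dict:
    """Read storage settings from the container environment instead of
    hard-coding them in the application."""
    return {
        "endpoint": os.environ.get("STORAGE_ENDPOINT", "file:///var/data"),
        "bucket": os.environ.get("STORAGE_BUCKET", "legacy-app-data"),
    }


# Simulate what the platform would do at deploy time.
os.environ["STORAGE_ENDPOINT"] = "https://storage.example-cloud.com"
config = storage_config()
```

The application asks for “the storage” abstractly; which cloud actually provides it is decided at deployment, not in the code.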
There are other advantages:
- Containers offer the ability to leverage automation to maximize their portability and, with portability, their value. Automation lets us script tasks we could otherwise do manually, such as migrating containers from one cloud to another.
- Also, the ability to provide better security and governance services, by placing those services around rather than within containers, is huge. In many instances, security and governance services are platform-specific, not application-specific. For example, legacy applications tend not to have security and governance functions within them. The ability to place security and governance services outside the application domain provides better portability and less complexity when refactoring.
- Containers can provide better distributed-computing capabilities as well. A legacy application can be divided into many different domains, all residing within containers. These containers can run on any number of different cloud platforms, including those that provide the highest cost and performance efficiencies. So legacy applications can be distributed and optimized according to how they utilize the platform from within the container. For example, one could place an I/O-intensive portion of the application on a bare-metal cloud that provides the best performance, place a compute-intensive portion on a public cloud that can provide the proper scaling and load balancing, and perhaps even place a portion of the application on traditional hardware and software. All of these elements work together to form the application, and the application has been separated into components that can be optimized.
The architecture of microservices
Microservices are an architecture as well as a mechanism. They’re an architectural pattern in which complex applications are composed of small, independent processes that communicate with each other using language-agnostic APIs. This, at its essence, is service-oriented computing, which decomposes the legacy application down to the functional primitive and builds it as sets of services that can be leveraged by other applications or the application itself.
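A brief sketch of what “language-agnostic APIs” means in practice: the service accepts and returns plain JSON, so a caller written in any language can use it. The handler, its name, and its fields are illustrative assumptions, and the validation logic is a trivial stand-in for real business rules.

```python
# Minimal microservice-style handler: request and response are plain
# JSON strings, so any language can call it over HTTP or a queue.
# Function and field names are illustrative, not a real API.
import json


def handle_validate_address(request_body: str) -> str:
    """Decode a JSON request, apply one small piece of business logic,
    and return a JSON response."""
    request = json.loads(request_body)
    address = request.get("address", "")
    # Trivial stand-in for real validation: non-empty and contains a digit.
    valid = bool(address.strip()) and any(ch.isdigit() for ch in address)
    return json.dumps({"address": address, "valid": valid})


response = json.loads(handle_validate_address('{"address": "10 Main St"}'))
```

Because the contract is just JSON in and JSON out, the same service can be consumed by the rebuilt legacy application or by any other application in the enterprise.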
The benefits of this approach include efficiencies through reuse of microservices. As we rebuild the legacy application for the cloud, we modify it to expose services that are accessible by other applications. More importantly, we can consume services from the rebuilt legacy application so we don’t have to build functionality from scratch.
For instance, some legacy systems have built-in systems such as credit validations, mapping, and address validation services that must be maintained. This costs hundreds of thousands of dollars a year, or more. The service-based approach allows us to reach out and consume remote services that provide this functionality — which allows you to get out of the business of maintaining services that can be found in other places. This also allows us to expose services for use within the enterprise by other applications, or even sell services to other enterprises over the open Internet.
How to select the right applications for modernization
Not all legacy applications will be right for the cloud. This is especially true when you consider containerizing and service-enabling the applications. Here is my advice:
- Don’t try to refactor very old applications that are built using very old languages and databases — for instance, applications that are currently mainframe-based, written in languages such as COBOL or Fortran, or built using proprietary languages and technology, such as a 4GL. While tools are on the way to help modernize these applications, it’s typically more economical to rebuild them from scratch in the cloud.
- Applications that are poorly designed will take a greater degree of work to get them ready for the cloud. Containers and services require that the application architectures follow a specific set of patterns. If the applications require a complete gutting, it may be easier and cheaper to start anew.
- Applications that tightly couple to the data store are another poor choice for migration to the cloud using containers and services. You’ll have to decouple the data from the application layer, and thus redo most of the application. Again, as with the previous two cases, this is more pain than gain.
Refactoring legacy applications for containers and microservices
The process of containerizing an application and service-enabling it at the same time is more art than science at this point. However, certain success patterns are beginning to emerge as enterprises migrate legacy applications to the cloud using containers and service orientation as the architecture.
Pattern One is to decide quickly how the application will be broken into components that run inside containers in a distributed environment. This means breaking the application down to its functional primitives and building it back up as component pieces, to minimize the amount of code that has to be changed.
Pattern Two is to build data access as a service for use by the application, with all data calls going through those data services. This decouples the data from the application components (containers) and lets you change the data without breaking the application. Moreover, you’re putting data complexity into its own domain, which will be another container that is accessed using data-oriented microservices.
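Pattern Two can be sketched as a thin service interface that application components call instead of touching the store directly. The class and method names below are illustrative assumptions; the in-memory dictionary stands in for whatever physical store the data service actually wraps.

```python
# Sketch of "data access as a service": all reads and writes go through
# one interface, so the backing store can change without touching the
# application components. Names are illustrative assumptions.


class DataService:
    """Decouples application components from the physical data store."""

    def __init__(self, backend=None):
        # The backend could be swapped for a cloud database client
        # without changing any caller.
        self._backend = backend if backend is not None else {}

    def get(self, key):
        return self._backend.get(key)

    def put(self, key, value):
        self._backend[key] = value


# Application components call the service, never the store directly.
orders = DataService()
orders.put("order-1001", {"status": "shipped"})
record = orders.get("order-1001")
```

When the data moves to a different cloud database, only the service’s internals change; every caller keeps the same `get`/`put` contract.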
Pattern Three is to splurge on testing. While many will point to the stability of containers as a way around black-box and white-box testing, the application now exists in a new architecture with new service dependencies. There could be a lot of debugging that has to occur up front, before deployment.
Operational considerations: CloudOps
Modernized legacy applications must be managed differently in production than they were before migration. This phase is known as CloudOps: the operation of the application containers in the cloud.
When put into production, those charged with CloudOps should take advantage of the architecture of containers. Manage them as distributed components that function together to form the applications but that can also be scaled separately. For instance, the container that manages the user interface can be replicated across servers as demand for that container goes up when users log on in the morning. This provides a handy way for CloudOps to build auto-scaling features around the application, to expand and contract the use of cloud resources as needs change.
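The auto-scaling idea above can be sketched as a small decision function: pick a replica count for a container from its current demand. The sessions-per-replica figure and the bounds are assumed tuning parameters for illustration, not real platform defaults.

```python
# Hedged sketch of an auto-scaling decision: scale the UI container up
# as morning logins arrive and back down as demand falls, within fixed
# bounds. The tuning numbers are assumptions, not platform defaults.
import math


def desired_replicas(active_sessions,
                     sessions_per_replica=100,
                     min_replicas=1,
                     max_replicas=20):
    """Return how many container replicas should be running for the
    current number of active user sessions."""
    needed = math.ceil(active_sessions / sessions_per_replica)
    return max(min_replicas, min(max_replicas, needed))


morning_peak = desired_replicas(850)  # heavy login traffic -> 9 replicas
overnight = desired_replicas(0)       # idle -> stays at the minimum of 1
```

A real platform would evaluate a function like this on a loop against live metrics; the point is that each containerized component gets its own scaling decision, independent of the rest of the application.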
Most enterprises believe the cloud will become the new home for legacy applications. However, not all legacy applications are a fit for the cloud, at least not yet. Care must be taken to select the right applications to make the move.
The use of containers and microservices makes things easier. This approach forces the developer charged with refactoring to think about how best to redesign the application to become containerized and service-oriented. In essence, you’re taking a monolithic application and turning it into something more complex and distributed. However, it should also be more productive, agile, and cost-effective. That’s the real objective here.