When migrating applications to the cloud, most enterprises focus on one of two approaches:
1) Lift-and-shift, in which the application is ported directly with minimal code modifications.
2) Refactoring, where you customize the application to run on a cloud platform. A partial refactoring modifies only specific portions of the application to take advantage of the cloud platform, whereas a complete refactoring changes most of the application.
The pros and cons of each approach can be summarized as follows:
Applications that employ a well-defined architecture, where the data is tightly coupled to the application logic and difficult to separate, are ideal candidates for lift-and-shift. The cost of substantially modifying or refactoring these types of applications is prohibitive. Additionally, if the application already runs well on the cloud, there is no compelling reason to refactor it.
For business-critical applications that are poorly designed, there is significant risk in lift-and-shifting them to the cloud. Without refactoring, these applications will consume cloud resources inefficiently, generating a much higher public cloud bill, and may even create performance and stability problems. In this case, given the importance of the application, it is well worth the investment to undertake a partial or complete refactor in order to take full advantage of the cloud platform.
In addition to lift-and-shift and application refactoring, we now have the option to leverage containers for new or existing cloud application migrations and development. The concept of containers is nothing new; we’ve been using them for years to compartmentalize whole systems, abstracting them from the physical platform so they can be moved from platform to platform, or in this case, cloud to cloud. Of the modern container technologies, Docker is the best known, but other flavors exist, such as CoreOS’s, along with container cluster managers, including those from Google and Mesos.
Let’s do a quick Docker container review. Containers share the host’s Linux kernel, which provides resource isolation (CPU, memory, I/O, network, etc.) without starting any virtual machines. Docker originally extended a common container format, Linux Containers (LXC), with a high-level API to provide a lightweight virtualization solution that runs processes in isolation. Docker also uses kernel namespaces to completely isolate an application’s view of the operating environment, including process trees, network interfaces, user IDs, and file systems.
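As a minimal sketch of how an application gets packaged this way (the base image, file names, and port below are illustrative assumptions, not from any specific project), a Dockerfile bundles an application and its dependencies into a portable image:

```dockerfile
# Hypothetical Dockerfile: packages a small Python service into an image.
# Base image, file names, and port are illustrative assumptions.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

Running the resulting image with, say, `docker run --memory=256m --cpus=0.5 myapp` asks the kernel’s cgroups to enforce those resource limits directly; no virtual machine is started.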
The use of this technology is rather exciting. It solves two obvious and expansive problems: how to provide true application portability among cloud platforms, and how to build cloud applications using standard approaches and architecture patterns. Workloads can certainly be placed in virtual machines, but the use of containers is a much better approach, with a better chance of success, as cloud computing moves from simple architectures to complex and distributed architectures.
The ability to provide lightweight platform abstraction within the container, without using virtualization, is much more efficient for creating workload bundles that are transportable from cloud-to-cloud. In many cases, virtualization is just too cumbersome for workload migration. Thus, containers provide a real foundation for moving workloads around within hybrid or multi-clouds, without having to alter much or any of the application.
The concept of containers has a few basic features and advantages, including:
The ability to reduce complexity by leveraging container abstractions.
Containers remove dependencies on the underlying infrastructure services, reducing the complexity of dealing with those platforms. They are, in effect, small platforms supporting an application or application services within a very well-defined domain: the container.
The ability to leverage automation with containers to maximize their portability, and thus their value.
Through the use of automation, we’re basically scripting things we could also do manually, such as migrating containers from one cloud to another, or reconfiguring communications between containers, such as tiered services or data service access. However, today it’s much harder to guarantee portability and application behavior when using automation; automation often relies on many external dependencies that can break at any time. This remains a problem we need to solve, but it’s definitely solvable.
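As a rough sketch of that kind of scripted migration (the host names and image tag below are hypothetical placeholders), a container image can be streamed from one cloud host to another with `docker save` and `docker load`:

```shell
#!/bin/sh
# Hypothetical migration helper: streams a Docker image from a source
# host to a destination host. Hosts and image tag are placeholders.
migrate_image() {
  image="$1"; src="$2"; dst="$3"
  cmd="ssh $src docker save $image | ssh $dst docker load"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    # Print the plan rather than executing it (no ssh/docker required).
    echo "$cmd"
  else
    eval "$cmd"
  fi
}

migrate_image myapp:1.2 cloud-a.example.com cloud-b.example.com
```

This is exactly the kind of step automation fragilizes: the script assumes SSH reachability, matching Docker versions, and credentials on both ends, any of which can break at any time.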
The ability to provide better security and governance by placing those services around, rather than within, containers.
In many instances, security and governance services are platform-specific, not application-specific. Thus, the ability to place security and governance services outside of the application domain provides better portability, and less complexity during implementation and operations.
The ability to provide better distributed computing capabilities, considering that an application can be divided into many different domains, all residing within containers.
Containers can be run on any number of different cloud platforms, including those that provide the most cost and performance efficiencies. Therefore, applications can be distributed and optimized, based on their utilization of the platform from within the container. For example, you could place an I/O-intensive portion of the application on a bare metal cloud to provide the best performance, a compute-intensive portion of the application on a public cloud to provide the proper scaling and load balancing, and perhaps even place a portion of the application on traditional hardware and software. They can all work together to form the application, which has been separated into components that can each be optimized.
The ability to leverage automation services that provide policy-based optimization and self-configuration.
None of this works without an automation layer that “auto-magically” finds the best place to run each container, manages configuration changes, and handles other specifics of the cloud platforms where the containers reside.
Making the Comparison
For our purposes, we elected to compare containers with refactoring (combining all forms) and lift-and-shift. The idea is to evaluate each approach against core attributes, or disruptive vectors, that most enterprises consider value points for cloud application development. These include:
- Code portability: the ability to move code from platform to platform, cloud or not.
- Data portability: the ability to move data from platform to platform, cloud or not.
- Cloud native features: the ability to leverage native features of the cloud platform to support better performance.
- Application performance: the ability to run applications at optimal performance.
- Data performance: the ability to receive and send data at optimal performance.
- Use of services: the ability to leverage services and micro-services within the application, as well as expose the application as sets of services and micro-services that can be consumed by other applications.
- Governance and security: the ability to provide core governance and security services from within the application.
- Business agility: the ability to quickly change the application around the needs of the business, or to compress the time-to-market, that is, the time it takes to get the application into production.
Table 1 lists the disruptive vectors, and the weighting that we placed on each. Note that Use of Services is high because it drives value in almost all of the other disruptive vectors.
Table 1: Disruptive vectors with weighting.
Table 2 (below) shows the ranking of each approach to application migration, as set against the disruptive vectors. The rankings are based on findings within enterprises as to the value delivered by each approach.
Generally speaking, when considering the disruptive vectors, container-enabling applications as they migrate to the cloud is the best approach. The application is forced into an architecture that’s naturally distributed, with isolated components, and naturally service-oriented. Cost is not considered here; both container-enablement and refactoring cost much more than a simple lift-and-shift.
Table 2: Three approaches to development ranked against the disruptive vectors.
Finally, Table 3 (below) shows the index score, which considers the vectors and the weighting we used. Containers provide the most value, with refactoring second, and lift-and-shift third. Again, we’re not considering costs with this analysis.
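To make the scoring mechanics concrete, assuming the index is a weighted sum of each approach’s rankings (which the text implies), a one-liner computes it. The weights and rankings below are invented purely for illustration; the actual values are in Tables 1 through 3.

```shell
# Hypothetical example: weight each disruptive vector's ranking and sum.
# Input lines are "weight ranking" pairs; all numbers here are made up.
compute_index() {
  awk '{ total += $1 * $2 } END { print total }'
}

printf '%s\n' "3 5" "2 4" "1 3" | compute_index   # 3*5 + 2*4 + 1*3 = 26
```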
Table 3: The index score, considering the vectors and leveraged weighting.
Figure 1 sums up the findings in a spider diagram. The surprise with these results is that, while containers are indeed an older approach, the use of Docker, CoreOS, and other technologies in this space provides a sound set of container platforms that offer guidance around how to port applications and convert them for use within containers. What does this mean? The path to containers is getting easier. Although refactoring can mean almost any type of modification to the application, container-enablement is a standard and pretty straightforward process. However, keep in mind that containers are still relatively new. The value points will change as the technology evolves, likely for the better.
Figure 1: The results in a spider diagram, with containers leading the rankings.
And the Winner Is…
The tendency is to think that new ways of building systems will be the way we build systems for years to come. While that has not been the case in the past, it could be the case with containers.
Containers deliver a standard, useful enabling technology, and provide a path to application architecture that offers both managed distribution and service orientation. We’ve been trying to reach this state for years.
Perhaps what’s most compelling is the portability advantage of containers, which remains the battle cry of container technology providers these days. That being said, it will be years before we really understand the true value of containers, as we move container-based applications from cloud to cloud. It’s also important to note that while containers are designed to make portability simple and easy, there can be another side to this story. Container security boundaries can introduce risks, and some applications aren’t a good fit for containers in the first place.
I suspect that, if momentum continues, containers will become a part of most IT shops, whether or not they are moving to the cloud. The viability of this technology, as well as its versatility, will be something that we continue to explore and exploit over the next several years. Count on the fact that a few mistakes will be made, but the overall positive impact of containers is a foregone conclusion.