Once you’ve ported an application to a cloud-based platform such as Google, Amazon Web Services (AWS), IBM, or Microsoft, it’s tough, risky, and expensive to move that application from one cloud to another.
This isn’t by design. The market moved so quickly that public and private cloud providers couldn’t build portability into their platforms and still keep pace with demand. There’s also the fact that portability isn’t in the best interests of cloud providers.
Enter a new approach built on an old one: containers, and with them Docker and container cluster managers such as Google’s Kubernetes, as well as hundreds of upstarts. The promise is a common abstraction layer that localizes an application within the container, which can then be ported to any other public or private cloud provider that supports the container standard.
RightScale’s new State of the Cloud Report confirms that containers (exemplified by Docker and CoreOS) are undergoing rapid growth. The quick uptake of containers makes a lot of sense given what they offer. At a high level, containers provide lightweight platform abstraction without using virtualization.
Containers are also much more efficient for creating workload bundles that are transportable from cloud to cloud. In many cases, virtualization is too cumbersome for workload migration. Thus, containers provide a real foundation for moving workloads around hybrid clouds and multiclouds without having to alter much, if any, of the application.
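To make the idea concrete, here’s a minimal sketch using the Docker SDK for Python (the `docker` package), assuming a local Docker daemon and a Dockerfile in the working directory; the image name is illustrative. The point is that the resulting bundle runs unchanged on any Docker-compatible host:

```python
# Minimal sketch: package and run a workload with the Docker SDK for
# Python (pip install docker). Assumes a local Docker daemon and a
# Dockerfile in the current directory; the image tag is illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image that bundles the application and all its dependencies.
image, build_logs = client.images.build(path=".", tag="inventory-app:1.0")

# The same bundle runs unchanged on any Docker-compatible host:
# a laptop, a private cloud, or a public cloud VM.
container = client.containers.run("inventory-app:1.0", detach=True)
print(container.short_id, container.status)
```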
More specifically, containers provide these advantages:
- Reduced complexity through container abstractions
- The ability to use automation with containers to maximize their portability
- Better security and governance from placing services around, rather than inside, containers
- Better distributed computing capabilities, because an application can be divided into many separate domains, all residing within containers
- The ability to provide automation services that offer policy-based optimization and self-configuration
Containers provide something we’ve been trying to achieve for years: a standard application architecture that offers both managed distribution and service orientation.
Most compelling right now is containers’ portability advantage. However, I suspect we’ll discover more value over time. In fact, containers will likely become part of most IT shops, whether or not they’re moving to the cloud.
Defining a New Value for Containers
Containers are predicated on deploying and managing n-tier application designs. By their nature, containers manage n-tier application components, such as database servers, application servers, and web servers, at the operating system level. Indeed, portability is inherent because all operating system and application configuration dependencies are packaged and delivered inside a container to any other operating system platform. Containers are preferable to virtual machines here because they share compute platform resources well, whereas virtual machine platforms tend to acquire and hold resources on a machine-by-machine basis.
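As a hedged illustration, a two-tier slice of such an application might be wired together with the Docker SDK for Python; the image names, network name, and settings below are examples, not a prescribed setup:

```python
# Illustrative sketch: a two-tier application (database plus web tier)
# as containers on a shared network, via the Docker SDK for Python.
# Image names, network name, and settings are examples only.
import docker

client = docker.from_env()
client.networks.create("app-net", driver="bridge")

# Database tier: its configuration travels inside the container.
client.containers.run(
    "postgres:13", detach=True, name="db", network="app-net",
    environment={"POSTGRES_PASSWORD": "example"},
)

# Web tier: finds the database by container name on the shared
# network, so the wiring is identical on any Docker host.
client.containers.run(
    "inventory-web:1.0",  # hypothetical application image
    detach=True, name="web", network="app-net",
    environment={"DB_HOST": "db"},
    ports={"8080/tcp": 8080},
)
```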
In essence, containers can move from cloud to cloud and system to system, and that movement can itself be automated. In other words, we can not only leverage containers, but also have them automatically “live migrate” from cloud to cloud as needed to support the application’s requirements.
At the center of the container evolution is a cloud orchestration layer that can provision the infrastructure required to support the containers, as well as perform the live migration and monitor their health after the migration occurs (see Figure 1).
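A conceptual sketch of that orchestration loop follows. Every class and method here (Provider, provision, healthy, migrate) is a hypothetical placeholder standing in for a provider’s management API, not a real library:

```python
# Conceptual sketch of the orchestration layer in Figure 1: provision
# infrastructure for a workload, monitor its health, and live migrate
# on failure. All names here are hypothetical placeholders.
import time

class Provider:
    """Hypothetical stand-in for a cloud provider's management API."""
    def __init__(self, name):
        self.name = name

    def provision(self, requirements):
        print(f"provisioning {requirements} on {self.name}")

    def healthy(self, workload):
        return True  # stub; a real check would probe the running workload

def migrate(workload, source, target):
    print(f"live migrating {workload} from {source.name} to {target.name}")

def orchestrate(workload, requirements, primary, fallback):
    """Provision, then monitor health and migrate on failure."""
    primary.provision(requirements)
    while True:
        if not primary.healthy(workload):
            fallback.provision(requirements)
            migrate(workload, primary, fallback)
            primary, fallback = fallback, primary
        time.sleep(60)  # re-check health once a minute

# Example (runs indefinitely):
# orchestrate("inventory-app", {"cpu": 4, "memory_gb": 16},
#             Provider("aws"), Provider("google"))
```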
The concepts of autoprovisioning and automigration are often promoted within modern cloud computing development but are elusive in practice. These concepts have a few basic features and advantages.
First is the ability to reduce complexity by leveraging container abstractions. Containers remove dependencies on the underlying infrastructure services, which reduces the complexity of dealing with those platforms. Containers are, in effect, small platforms that support an application, or an application’s services, inside a well-defined domain.
The second advantage is the ability to leverage automation with containers to maximize their portability, and thus their value. Through automation, we script things we could also do manually, such as migrating containers from one cloud to another, or reconfiguring communications between containers, such as tiered services or data service access (see the sketch below). Today, however, it’s much harder to guarantee portability and application behavior when using automation: automation often relies on external dependencies that can break at any time. That remains a problem, but a solvable one.
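As a small, hedged example of such reconfiguration, the sketch below keeps service endpoints in a plain registry and rewires a tier after it moves; the hostnames are made up, and a real system would more likely use DNS or a service-discovery store:

```python
# Hedged sketch: after a migration, automation rewires how containers
# find each other. Endpoints live in a plain dict here; a real system
# would more likely use DNS or a service-discovery store.

endpoints = {
    "database": "db.aws.internal:5432",   # hypothetical hostnames
    "cache":    "cache.aws.internal:6379",
}

def rewire(service, new_endpoint):
    """Point a tier at its new location after the workload moves."""
    old = endpoints[service]
    endpoints[service] = new_endpoint
    print(f"{service}: {old} -> {new_endpoint}")

# Example: the database tier has migrated to another cloud.
rewire("database", "db.gcp.internal:5432")
```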
Another advantage is the ability to provide better security and governance services by placing those services around, rather than within, containers. In many instances, security and governance services are platform-specific, not application-specific. Placing security and governance services outside of the application domain provides better portability and less complexity during implementation and operations.
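A minimal sketch of that idea, with made-up policy rules and container metadata, might gate deployment on platform-level policy rather than on application code:

```python
# Illustrative sketch: governance enforced around the container rather
# than inside it. Policy rules and container metadata are made up.

POLICY = {
    "allowed_regions": {"us-east", "eu-west"},
    "require_encrypted_storage": True,
}

def compliant(meta):
    """Gate deployment on platform-level policy, not application code."""
    if meta["region"] not in POLICY["allowed_regions"]:
        return False
    if POLICY["require_encrypted_storage"] and not meta["encrypted"]:
        return False
    return True

print(compliant({"region": "us-east", "encrypted": True}))   # True
print(compliant({"region": "ap-south", "encrypted": True}))  # False
```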
Better distributed computing capabilities can also be provided, because an application can be divided into many different domains, all residing within containers. These containers can run on any number of cloud platforms, including those that offer the best cost and performance efficiencies, so each part of the application can be optimized for the platform it runs on. For example, an I/O-intensive portion of the application could run on a bare-metal cloud that provides the best performance, while a compute-intensive portion runs on a public cloud that provides the proper scaling and load balancing. A portion of the application might even run on traditional hardware and software. The components all work together to form the application, and each can be optimized independently.
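A placement sketch along those lines, with illustrative platform names and workload profiles, could be as simple as:

```python
# Sketch of profile-driven placement: each containerized component is
# matched to the platform class that suits it best. Platform names
# and workload profiles are illustrative assumptions.

PLACEMENT = {
    "io-intensive":      "bare-metal-cloud",  # best raw I/O performance
    "compute-intensive": "public-cloud",      # elastic scaling, load balancing
    "legacy":            "on-premises",       # traditional hardware/software
}

def place(components):
    """Map each component to a platform class by its profile."""
    return {name: PLACEMENT[profile] for name, profile in components.items()}

app = {"storage-engine": "io-intensive",
       "analytics": "compute-intensive",
       "billing": "legacy"}
print(place(app))
```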
Finally, there’s the ability to provide automation services that offer policy-based optimization and self-configuration. None of this works without an automation layer that can “automagically” find the best place to run the container, as well as handle configuration changes and other specifics of the cloud platforms where the containers reside. However, we’ve learned that n-tier applications have inherent limitations. “They are designed to scale up with very little focus paid on scaling down and no attention paid to scaling out or in. They typically are rife with single points of failure and tend to manage their own state via the use of cluster-style computing. Each tier of the n-tiered architecture must be scaled independently of the other tiers.” Also, keep in mind that the automation/orchestration required will not always be portable. Indeed, that’s likely the new lock-in layer: once you’ve built out the operational side, how easy is it to migrate from cloud to cloud? As Lori MacVittie of F5.com noted in an email, portability of container clustering and orchestration is likely to quickly become the bottleneck.
Making the Business Case
The problem with technical assertions is that they must define a business benefit before the industry accepts them as best practice. The technical benefits I’ve defined need to be translated into direct business benefits that provide a quick return on investment.
One business benefit is the ability to automatically find least-cost cloud providers. Part of the benefit of moving from cloud to cloud is that you can leverage this portability to find the least-cost provider. Assuming most things are equal, the applications within a set of containers can live migrate to a cloud that offers price advantages for similar types of cloud services, such as storage.
For example, an inventory control application that exists within a dozen or so containers might have some storage-intensive components that cost $100,000 a month on AWS, while Google charges $50,000 a month for the same types of resources. If the orchestration layer understands this configuration possibility, the containers can automigrate/live migrate to the new cloud for a 50 percent savings. If Google raises its prices and AWS lowers theirs, the reverse could occur.
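The arithmetic behind that decision is straightforward; here it is as a worked example using the hypothetical figures above, not real provider quotes:

```python
# Worked example of the least-cost comparison above. The prices are
# the hypothetical figures from the text, not real provider quotes.

monthly_cost = {"aws": 100_000, "google": 50_000}
current = "aws"

cheapest = min(monthly_cost, key=monthly_cost.get)
if cheapest != current:
    savings = monthly_cost[current] - monthly_cost[cheapest]
    pct = 100 * savings / monthly_cost[current]
    print(f"migrate {current} -> {cheapest}: "
          f"save ${savings:,}/month ({pct:.0f}%)")
# Output: migrate aws -> google: save $50,000/month (50%)
```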
These automation concepts also support better reliability. We’ve all done business cases around uptime and downtime. In some instances, businesses can lose as much as $1 million an hour when systems aren’t operating. Even if a performance issue lasts only an hour or two, the lost productivity can push costs well into thousands of dollars per minute.
The architecture shown in Figure 1 can help avoid outages and related performance issues by opening up other cloud platforms to which the container workloads can relocate if issues occur on the primary cloud. For example, if AWS suffers an outage, the containers can relocate to Google in a matter of minutes and operate there until the problem is resolved. You might even choose to run redundant versions of the containers on both clouds, supporting an active/active recovery platform.
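A hedged sketch of that failover pattern follows; the outage check is a stub that simulates an AWS outage for the example, and the cloud names are illustrative:

```python
# Hedged sketch of active/passive failover: if the primary cloud has
# an outage, relocate the container workloads to the secondary.

def cloud_is_up(cloud):
    """Stubbed outage check; a real probe would hit the provider's API."""
    return cloud != "aws"  # simulate an AWS outage for this example

def place_workloads(workloads, primary="aws", secondary="google"):
    """Relocate everything to the secondary cloud if the primary is down."""
    active = primary if cloud_is_up(primary) else secondary
    return {w: active for w in workloads}

print(place_workloads(["web", "app", "db"]))
# {'web': 'google', 'app': 'google', 'db': 'google'}
```

In the active/active variant, redundant copies would already be running on both clouds, so recovery needs no relocation step at all.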
Containers might sound like distributed application nirvana. They certainly offer a better way to utilize emerging cloud-based platforms. However, there are many roadblocks in front of us and a lot of work to be done.
We need to accept that current technology can’t yet provide this type of automation. It can certainly manage machine instances, and even containers, using basic policy and scripting approaches. However, automatically moving containers from cloud to cloud using policy-driven automation, including autoconfiguration and autolocalization, isn’t there yet.
Also, we’ve only just begun our Docker container journey. We still have a lot to learn about the technology’s potential as well as its limitations. As we learned from containers and distributed objects years ago, the only way this technology can provide value is through coordinating clouds that support containers. Although having a standard here is great, history shows that vendors and providers tend to march off in their own proprietary directions for the sake of market share. If that occurs, all is lost.
The final issue is complexity. It only seems like we’re making things less complex. Over time, the use of containers as the means of platform abstraction will result in applications that morph toward architectures that are far more complex and distributed. Moving forward, it might not be unusual to find applications that exist in hundreds of containers, running on dozens of different models and brands of cloud platforms. The more complex these systems become, the more vulnerable they are to operational issues.
All Things Considered, Containers Might Be a Much Better Approach to Building Applications on the Cloud
PaaS and IaaS clouds will still provide the platform foundations and even development capabilities. However, these offerings will likely commoditize over time, moving from true platforms to good container hosts. It will be interesting to see whether the larger providers want to take on that role. Considering provider interest in Docker, that could indeed be their direction.
The core question now: if this is the destination of this technology and of application hosting on cloud-based platforms, should I redirect resources toward this new vision? I suspect that most enterprises already have their hands full with the great cloud migration. However, as we get better at cloud application architectures using approaches that better account for both automation and portability, we’ll eventually land on containers.