However, this hype has not led to the IT community’s universal acceptance of Docker containers as the Next Big Thing; far from it. In defiance of the classic Gartner hype cycle sequence, the container discussion has seemingly jumped straight from the technology trigger to the trough of disillusionment, bypassing the peak of inflated expectations altogether. It seems that each day brings a new critical appraisal of containers in general, and Docker in particular. These critiques often focus on the supposedly stark contrast between Docker container technology’s potential and its immaturity and unreadiness for actual production use.
Experience in the engineering trenches with Docker has given me a quite different perspective on the current and future state of containers. I readily concede that Docker containers are a relatively untested technology, and that major functional gaps in networking, storage, and security must be bridged in order for containers to become a mainline infrastructure component. However, I will argue that mainstream acceptance and widespread use of containers are closer than most people think, and that containers are already well on their way to becoming ubiquitous. The discussion below highlights and “busts” commonly believed myths about Docker and containerization.
Myth 1: Docker Containers Are Best Understood as Small VMs
Like many IT professionals with extensive virtualization experience, I found the concept of Docker containers as virtual machine “mini-me’s” a logical first frame of reference. The equivalence of containers and virtual machines is a simple, elegant, easily understood, and absolutely wrong construct. In fact, Docker architects make sure to debunk this idea during bootstrap training sessions, stating flatly that containers are not virtual machines and should not be treated like them. Docker containers do serve the same purpose as VMs, presenting a subset of resources such as compute, storage, libraries, and networking to an application process. However, there are key differences, especially in one area.
A virtual machine runs a full operating system on an abstracted hardware-as-software layer, provisioned for the application process in toto. In contrast, a container provisions only the specific operating system resources necessary to run its particular application process. Installing an application in a VM first requires building and configuring a full instance of a specific OS version on the virtual hardware supplied by a specific hypervisor version. Provisioning a “containerized” application basically requires a Docker engine running on a common kernel, a Docker image (instruction set), and access to required libraries (which can be built on demand).
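The provisioning contrast can be made concrete with a short sketch. The commands below are a hypothetical illustration (the image and application names are examples, not drawn from any specific deployment); the point is that no OS installation or hypervisor configuration step appears anywhere:

```shell
# Hypothetical provisioning of a containerized application.
# The Docker engine shares the host kernel; the image supplies
# libraries and the application process, nothing more.

docker pull nginx            # fetch the image (the "instruction set")
docker run -d \
  --name web \
  -p 8080:80 \
  nginx                      # the application process is now running

# Contrast with a VM: the equivalent steps would begin with
# "install a guest OS on hypervisor-provided virtual hardware"
# before the application could even be considered.
```

The entire sequence typically completes in seconds, which is the architectural root of the speed advantage discussed next.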
This one basic architectural difference translates to the key advantage Docker containers hold over virtual machines: speed. Simply put, for almost any standard development or operational task – build, configure, test, deploy, update, migrate – Docker containers are faster and cost less. The promise of faster time to market and lower TCO is the holy grail driving many a container quest. While it is unlikely that containers will fully replace virtual machines, it is conceivable that containers may end up being one of the primary VM workloads.
Myth 1 Status: BUSTED!
Myth 2: Docker Containers Aren’t Suitable for Critical Workloads
There is currently a pervasive (and faulty) perception that Docker containers are only being utilized in dev-test and proof-of-concept projects. In fact, the question I am most often asked by IT colleagues and customers goes like this: “Is anyone using Docker containers for critical workloads, or even in production?” The answer is an unequivocal “Yes” – critical workloads are being run in Docker containers, and much more pervasively than is commonly understood.
Here are a few examples:
- Global financial services corporation ING is using Docker containers to help accelerate its continuous delivery process and drive 500 deployments/week, meeting speed to market goals
- Global investment bank Goldman Sachs uses Docker containers to centralize application builds and deployments
- Streaming music leader Spotify uses Docker containers to make software deployments repeatable, straightforward, and fault-tolerant
- Application performance management leader New Relic is using Docker containers to solve its most challenging deployment issues
Even more telling is the fact that some very large-scale Docker container deployments are unpublicized, precisely because these enterprises are realizing competitive advantage.
Myth 2 Status: BUSTED!
Myth 3: Containers Won’t Impact Enterprise IT for Several Years
Many IT industry observers cite the traditionally slow enterprise adoption rate of new technologies as a reason that containers won’t have a major impact on enterprise IT in the short term. While this viewpoint has some historical basis, it does not take into account the drastically altered environment that enterprise IT works in today. While enterprise IT projects have traditionally spanned months and years, the capabilities of cloud technologies combined with DevOps approaches have irrevocably changed that model.
Key leaders in many IT organizations are grasping containers’ potential for accelerating development cycles and lowering operational costs, especially when combined with cloud and DevOps. These capabilities can translate to tremendous competitive advantage, and if anything can light a fire under enterprise IT, it is existential factors like direct competition. I saw this reality playing out during recent container engagements I participated in with Docker architects Aaron Huslage and Matt Bentley. Both engagements were with enterprise IT organizations in longstanding Fortune 50 corporations, each carrying tons of legacy infrastructure and processes. Nevertheless, both were pushing extremely hard to incorporate containers into their application delivery processes and architectures, leveraging Docker’s expertise.
Myth 3 Status: BUSTED!
Myth 4: Docker Containers Aren’t as Secure as Traditional Infrastructure
This myth is often phrased more directly (and less accurately) as “containers aren’t secure.” Useful security assessments entirely depend on measurement against accepted standards – after all, total security can only be achieved by total inaccessibility. If the standard is a virtual machine like VMware’s, there is no arguing the fact that containers do not offer the same level of isolation, and that Docker containers have security gaps that must be addressed.
However, a fact that is often overlooked is that container deployment typically brings a corresponding reduction in deployed operating system instances. A modern virtual machine presents a constrained attack surface, but each virtual machine also runs a full operating system instance, so each instance’s total attack surface is the combination of the VM and the OS. If a virtualized operating system is compromised, there is little incentive for the attacker to continue on to the hypervisor layer – an entire operating system has already been captured. If a Docker container is compromised, then as long as standard hardening practices have been followed, the attacker gains access only to that single application process, not an entire operating system.
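The “standard hardening practices” mentioned above include, for example, dropping unneeded kernel capabilities, mounting the container filesystem read-only, and running the process as an unprivileged user. A hypothetical hardened invocation (image and user ID are illustrative, not from any specific deployment) might look like this:

```shell
# Hypothetical hardened container launch: a compromised process
# would hold no extra kernel capabilities, have no write access
# to its own filesystem, and run without root privileges.

docker run -d \
  --cap-drop ALL \     # drop all Linux capabilities
  --read-only \        # mount the container filesystem read-only
  -u 1000 \            # run as an unprivileged user, not root
  myapp                # illustrative image name
```

With flags like these applied, the blast radius of a compromised container is confined to the single application process, which is exactly the point of the argument above.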
The takeaway here is that even at this stage of container evolution, where security is not yet a mature feature, the aggregate attack surface of a containerized application stack may not differ appreciably from that of a virtualized stack.
Myth 4 Status: Seems Plausible, but BUSTED!
Myth 5: Containers Cannot Be Deployed and Orchestrated At Scale
This container myth is perhaps the easiest to bust. The statement is only true when amended to read “Containers cannot be orchestrated at scale… out of the box with prepackaged solutions.” Even this amended statement will soon be outdated, as Docker plans to make Machine, Swarm, and Compose generally available later this year as part of its offering.
Notwithstanding the Docker contributions, there are several production-tested container orchestration examples currently available, led by Google’s own engineering experience and the resulting Kubernetes. Following the acquisition of Orchard, the company behind Fig, the Docker API now provides a clear path for the development of container orchestration tools – a path followed not just by Fig/Compose but also by Spotify’s Helios and New Relic’s Centurion.
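To give a flavor of what Fig/Compose-style orchestration looks like, here is a hypothetical two-tier application definition and workflow (the service names and file contents are illustrative examples, not taken from any of the deployments mentioned above):

```shell
# Hypothetical Compose workflow: declare a multi-container
# application in one file, then launch and scale it with
# single commands.

cat > docker-compose.yml <<'EOF'
web:
  build: .          # build the web tier from a local Dockerfile
  ports:
    - "8000:8000"
  links:
    - redis         # connect the web tier to the redis container
redis:
  image: redis      # off-the-shelf backing service
EOF

docker-compose up -d          # start both containers
docker-compose scale web=10   # scale the web tier to ten containers
```

The declarative file plus a handful of commands is the whole orchestration interface, which is why tools built on the Docker API have proliferated so quickly.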
Myth 5 Status: BUSTED!
Myth 6: Rocket and Other Competitors Will Slow Docker Adoption
The fact and timing of CoreOS’s introduction of Rocket as a competing container standard led many casual observers to question Docker’s viability. “If Docker was so great”, the reasoning went, “why would a competing standard emerge so soon? After all, it took several years for a serious challenge to VMware and its hypervisor to emerge.” The answer is quite simple – Docker the technology is an open-source project with an open API, a project being driven forward by the open-source development community, not just Docker the company. VMware chose a closed, Microsoft-style model for its virtualization project, and not surprisingly found itself with Microsoft as a direct (and eventually competent) competitor.
Docker has so far stayed true to its open-source roots, and while Docker the company may or may not stay at the forefront of the container movement, Docker the technology almost assuredly will maintain its position and relevance for the foreseeable future. Rocket, LXD, Spoonium, and others will all have their auditions and followers within the open-source container community, but ultimately the majority of developers will gravitate toward the best of breed – and for now, Docker wears that crown.
Myth 6 Status: BUSTED!
Common IT wisdom holds that Docker containers aren’t ready for prime time, and that a “wait and see” stance is most appropriate. To the casual IT observer, the Docker container standard seems relatively new and untested, even though it is based on Linux container technology (LXC) which has been under development since 2006.
My take is that container ubiquity is not all that far off, and that many of the perceived obstacles are already being addressed. Both new and traditional IT organizations are overcoming the inherent challenges of new tech and leveraging Docker containers for competitive advantage NOW. Organizations that adopt a passive stance on containers do so at their own risk.