Turtles All The Way Down: What the Community is Building on the Backs of Containers

By Wayland Jeong, GM Hybrid Cloud Business

“My opponent’s reasoning reminds me of the heathen, who, being asked on what the world stood, replied, ‘On a tortoise.’ ‘But on what does the tortoise stand?’ ‘On another tortoise.’ With Mr. Barker, too, there are tortoises all the way down.” (Vehement and vociferous applause.)

– Second Evening: Remarks of Rev. Dr. Berg

You may have heard of containers and think of them as yet another way to virtualize and decouple applications from hardware. More importantly, though, containers represent another layer of an ever-evolving stack of tools and technologies that most modern cloud native application developers use to speed the delivery of value to customers. Eric Pearson, the CIO of InterContinental Hotels Group, sums it up well:

“The battle is no longer about large organizations outperforming the small; it’s about the fast beating the slow.”

As anyone who has built cloud native applications knows, moving to the cloud is not about optimizing cost. It is instead about the speed and agility of delivery. Containers are fueling the way modern applications are built, but they are only the tip of the iceberg: open source is fueling innovation like never before. As an IT professional, you should understand why containers are important, how they relate to virtualization, and what ecosystem is being built on the shoulders of giants.

VMs and Containers

Virtual machines (VMs) have indeed revolutionized the IT industry. Significant offerings include the VMware ESX hypervisor; the Linux KVM hypervisor, which is the foundation of OpenStack; Microsoft Hyper-V; and the Xen hypervisor, which is largely the foundation of the AWS IaaS. These have allowed organizations to run an entire distributed operating environment independent of the underlying hardware. But virtualization is nothing new. For decades, mainframes have employed virtualization, with time-sharing systems developed by IBM in the late 1960s and early 1970s. Before then, mainframe computers were single-use systems. But innovative products, such as the IBM System/360 and System/370 and the CP/CMS time-sharing operating system, brought breakthrough technologies to market and marked the shift from single-use computers to multi-user, multi-tasking systems. Then, the introduction of x86 processors with MMUs and the commercialization of hypervisor technology from VMware brought mainframe-style virtualization to the masses.

Docker and container technology are now all the rage, but many confuse containers with replacements for virtual machines. On the contrary, containers can run happily on any operating system, whether that OS is running in a VM or booted on a bare-metal server. Containers are instead a method for decoupling the application runtime environment and its dependencies from the underlying OS kernel; in doing so, they address the issue of software portability (see Figure 1). With containers, you build once and run anywhere.

At its core, Docker is built on Linux Containers (LXC). A Linux container combines cgroups (control groups), which provide limit controls for a set of resources (memory limits, CPU prioritization, etc.), with namespaces. Together these give a set of processes its own sandbox through constructs like PID isolation, a rooted file system (i.e., a writable snapshot chroot’d into the container) and private user/group ID spaces. On top of these Linux kernel capabilities, Docker builds an application-centric packaging system that allows you to bundle your code and all its dependencies into a single artifact that is portable across any Docker-enabled machine. Moreover, container images can be layered and reused much like objects in object-oriented programming languages: a base image, like a base class, can be reused and extended by stacking layers that further refine its functionality. Virtual machines have had a profound impact on increasing hardware utilization, while Docker containers have revolutionized application development by increasing speed and agility through application reuse and portability.
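
Those cgroup limit controls surface directly in everyday Docker tooling. As a minimal sketch, assuming the Compose v2 file format (the service name and image are illustrative, not from the article), each key below maps onto one of the kernel controls just described:

# docker-compose.yml — every limit here is enforced by a cgroup on the host
version: "2.4"
services:
  web:
    image: nginx:alpine   # stand-in image: build once, run anywhere
    mem_limit: 256m       # cgroup memory limit for the sandbox
    cpu_shares: 512       # cgroup CPU weight: half the default priority of 1024
    pids_limit: 100       # caps how many processes the container may spawn

Running docker-compose up then starts the container inside its own PID, network and mount namespaces with these limits applied.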

Microservices

Containers not only provide a way to distribute hermetically sealed applications in a space-efficient and compact way, but they also encourage decomposing large applications into smaller independent chunks, or what the industry now calls “microservices.” Instead of a monolithic application deployed within a single operating environment, a new generation of highly distributed and loosely coupled applications is now built on smaller, self-contained services that can be scheduled across a sea of compute, memory and storage. These microservices scale independently and communicate only when necessary, through well-defined, lightweight interfaces: REST over HTTP, or Remote Procedure Calls (RPCs) using wire formats like Google’s Protocol Buffers (protobuf).

For example, one can easily build a simple monolithic service that stands up a Sock Shop on the web, such as the Weaveworks demo (Figure 2). Such a website could be built with an integrated front-end web server, back-end database, authentication system, ordering service, etc., packaged together on a single physical server. This might be perfectly fine as you introduce your store to the public, but as it gains popularity, how can you scale it to meet ever-increasing demand? And as you scale your website, how can you ensure it remains highly available?

A microservice architecture for the Sock Shop (Figure 2) achieves scale and resiliency by decoupling each subsystem into individual services packaged in a set of containers distributed across a cluster of compute instances. Each of the smaller services performs an essential task in your Sock Shop website. For example, your Sock Shop has a cluster of front-end web servers which serve up dynamic pages and redirect client requests to the appropriate back-end services, such as a microservice for handling logins and credentials, or a service for handling shopping carts and ordering. All of these can scale to accommodate increased demand. Each of these “tiers” can be “auto-scaled” independently and adapted to your pre-defined SLAs, meeting the dynamic demands of the overall system. You can see how even the simplest website could be decomposed into tens or hundreds of individual microservices running in containers. This raises the question: how do you schedule all these containers and ensure that the service remains resilient?

Kubernetes

Enter Kubernetes, the container orchestration system originally conceived and developed by Google and now maintained by the Cloud Native Computing Foundation. Kubernetes, or “K8s” (pronounced “kates”), is a container-centric management platform. Other container orchestration solutions exist, such as Docker Swarm and Marathon on Mesos, but Kubernetes has significant traction and only looks to be picking up steam. Three main concepts comprise a K8s system. The first is the Pod, a set of related containers guaranteed to run on the same node and the smallest deployable unit that can be created, scheduled and managed. The second is the controller, embodied within the K8s master node, which controls the lifecycle of Pods running on a cluster of worker nodes and manages the lifecycle of a “deployment”; this includes scheduling, maintaining defined state and orchestrating updates to Pods and containers. The third is the K8s “service,” which defines how to access the application (e.g., through a load balancer), so that a user of the service need not know anything about the configuration specifics of the underlying Pods.

A service is defined and deployed using a configuration file written in YAML (YAML Ain’t Markup Language). The deployment is defined in a declarative style where the end state is specified and K8s does its best to maintain this state. For example, you can define a service where the deployment must have eight Pods running at all times across the cluster. If a Pod fails, K8s will do its best to schedule another Pod to maintain the defined end state of the deployment (i.e., eight running Pods). The K8s master node will also try to schedule Pods on worker nodes as efficiently as possible across the cluster, ensuring even load distribution and resiliency.
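
As a minimal sketch of that declarative style, the manifest below asks for eight replicas of a hypothetical front-end Pod; the names and the container image are illustrative, not taken from the article:

# deployment.yaml — declares the desired end state; K8s works to keep it true
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  replicas: 8                  # eight Pods running at all times
  selector:
    matchLabels:
      app: front-end
  template:                    # the Pod template the controller stamps out
    metadata:
      labels:
        app: front-end
    spec:
      containers:
        - name: front-end
          image: nginx:alpine  # stand-in image
          ports:
            - containerPort: 80

Applying this with kubectl apply -f deployment.yaml hands the desired state to the master node; if a Pod, or even a whole worker node, fails, the controller schedules replacements until eight Pods are running again.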

Helm

So we have identified a system for solving the problem of automatically scheduling and ensuring resiliency in this distributed cluster, but how would you describe the application you want to deploy? You can imagine that the YAML configuration can get rather complex, even for a relatively simple website like the Sock Shop, where you need to specify ConfigMaps, services, Pods, persistent volumes, etc. This is where Helm comes in. Think of Helm as a package manager for K8s, analogous to what yum and apt are to Linux. Similarly, Helm Charts are K8s packages analogous to debs and rpms. Helm gives developers a way to package their entire application into Charts, which can be installed on a K8s cluster with a single command. Not only does Helm give you an effective package manager for K8s, but it is also a way to package and publish your applications to public repositories for others to leverage. Developers can find hundreds of curated, stable Charts packaging a variety of useful applications, ranging from databases (MySQL, MongoDB) to web publishing applications such as WordPress.
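
A Chart is just a directory of YAML files. As a minimal sketch, assuming the Helm 2 Chart.yaml schema of the time (the chart name and contents are illustrative):

# Chart.yaml — the metadata Helm uses to identify and version the package
apiVersion: v1
name: sock-shop
version: 0.1.0
description: Front end, catalogue, carts and orders packaged as one release

Alongside Chart.yaml sit the templated K8s manifests and a values.yaml of defaults; a single helm install of the Chart then renders the templates against those values and creates every Deployment, Service and ConfigMap in one step.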

Service Discovery

You need to manage the high degree of complexity that comes with these highly scalable and decoupled architectures. You no longer have a single, monolithic, straightforward application to reason about. You now have a highly distributed system with complex interactions that must be scheduled efficiently, monitored for failures and brokered to ensure conflicts do not arise between resources, such as ports and IP addresses. We can let Docker, for example, assign random ports for us, but how do other services know which ports to use? Moreover, a distributed system running across hundreds or thousands of individual computers is a highly dynamic environment. At that scale, failure is assumed as the norm rather than the exception. As physical nodes come and go, container Pods fail and get reassigned to other nodes in the cluster. Additional problems include avoiding conflicts, discovering services and ensuring efficient scheduling.

Service discovery tools are necessary to coordinate all these services without tedious manual configuration across the entire cluster. At its core, a discovery service is a persistent key-value store, which itself must be scalable and fault tolerant so that it does not become a bottleneck or single point of failure. Service discovery tools can be built on top of distributed key-value stores, such as ZooKeeper and etcd, where services register themselves so that other services can find their endpoints. Consul, by HashiCorp, is a batteries-included service discovery system that is horizontally scalable and includes health checks, notifications and a persistent registry.
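
Kubernetes layers service discovery on etcd in exactly this way: a Service object gives a shifting set of Pods one stable virtual IP and DNS name, so clients never have to track individual endpoints or ports. A minimal sketch, with illustrative names:

# service.yaml — a stable endpoint for whichever Pods carry the label below;
# the cluster DNS publishes it (e.g., catalogue.<namespace>.svc.cluster.local),
# and the registry behind it lives in etcd
apiVersion: v1
kind: Service
metadata:
  name: catalogue
spec:
  selector:
    app: catalogue      # matches Pods regardless of node or restart count
  ports:
    - port: 80          # the port clients call
      targetPort: 8080  # the port the container actually listens on

As Pods fail and are rescheduled onto other nodes, the Service’s endpoint list updates automatically; callers simply keep using the DNS name.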

Serverless

Serverless architectures, or what some call Functions as a Service (“FaaS”), provide a platform for ephemeral services that run for short durations, triggered by well-defined events. The best known FaaS is AWS Lambda. Instead of having a set of containers running in a reserved EC2 instance, which you pay for regardless of the activity of the code within it, you can deploy your code in Lambda, where it is scheduled and run only when necessary (i.e., triggered by an event queue). With Lambda, you define the event queue you want to attach to your function; when the event fires, Lambda schedules and runs your function, which may in turn create events in queues that trigger other functions, and terminates it upon exit.
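
As a minimal sketch of that wiring, assuming the AWS SAM template format (the resource names, handler and code location are hypothetical, not from the article), the template below attaches a queue to a function so that each message schedules a short-lived run:

# template.yaml — an SQS queue whose messages trigger a function; the
# function is metered only while it actually runs
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  TriggerQueue:
    Type: AWS::SQS::Queue            # the event queue attached to the function
  QuoteFunction:                     # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler           # module.function inside the code bundle
      Runtime: python3.9
      CodeUri: ./src                 # assumed location of the function code
      Events:
        OnMessage:
          Type: SQS                  # fire on each message arriving in the queue
          Properties:
            Queue: !GetAtt TriggerQueue.Arn

SAM supports other event sources the same way, including an AlexaSkill event type, which is how the custom skill described next can be bound to a function.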

The AWS Alexa service, combined with custom skills implemented in AWS Lambda, is a great practical example of FaaS. Through the Alexa service, you can register your Alexa device and create a custom skill invoked through a series of defined voice commands, allowing you to personalize your device. For example, you can easily create a custom skill where, when prompted with “Alexa, tell me something inspiring,” Alexa replies with an inspiring quotation for the day. Such a personalization could be implemented by defining a skill composed of “utterances,” or voice fragments (e.g., “tell me something inspiring”), that map to an intent; the intent triggers a function that selects a random quotation and returns a speech directive to Alexa. The function is linked to AWS Lambda through an associated Amazon Resource Name (ARN). Since AWS Lambda is a metered service (you only pay for what you use), it is far more cost-effective for running your Alexa skill than a reserved EC2 instance.

At its core, a Serverless architecture consists of a language-specific function that is run on demand: scheduled, executed and terminated. Serverless is being hyped as the post-VM and post-container way to build and deploy applications, but even Serverless functions need to run on some platform, and it just so happens that there is a perfectly good one available: Kubernetes! In fact, there are at least two open source projects using K8s as the underlying platform for Serverless: Kubeless and Fission. The Fission project is a great example of how K8s can be used for Serverless. Fission consists of an API server, a service router, a pool manager and a set of function-specific Pods layered on top of K8s (see Figure 4). Fission manages a pool of Pods hosting functions loaded into Fission. The controller keeps track of the functions and manages event triggers and container images configured with defined functions. The pool manager manages a set of containers for running functions.

The pool manager keeps these containers “warm” and schedules them on demand when triggered by specific events. The router receives HTTP requests and forwards them to the appropriate function Pod, requesting that a Pod be scheduled by the pool manager if necessary. The pool manager is crucial to Fission, since it allows a function to run with near-instant start-up time, avoiding the latency of loading containers.

Managing Complexity

Many more open-source projects are available to the developer of modern cloud native applications. You can use HashiCorp Vault for credential and secret management. HashiCorp Terraform and AWS CloudFormation both automate deployment for true infrastructure as code. The open-source project Habitat, maintained by Chef, gives you an anti-corruption pattern with which to build new applications or modernize existing ones. And we have not even scratched the surface! But how, as an enterprise, can you manage this complexity? A plethora of challenges emerges from these modern tools and technologies. Developers in the line of business (LoB) create “shadow IT” with siloed and unmonitored spend. Because each application team adopts its own set of tools, there is a lack of consistency and commonality. Security and compliance end up being best effort by each individual LoB. This developer freedom yields speed and agility, but at the cost of sprawling complexity with little or no governance. Furthermore, an enterprise of any reasonable size typically has some combination of public and private clouds to manage, and often strategically engages with more than one public cloud vendor to mitigate lock-in.

A number of very innovative products have emerged from small, nimble companies and organizations, and executives managing any significant amount of cloud complexity should pay attention to them. Sophisticated products, such as those from Densify, which focus on optimizing cloud native applications using machine learning and multi-variable constraint solvers, can substantially reduce cloud spend and help broker workloads between different public and private cloud platforms. Cloud security can be tricky, and identifying real threat surfaces to properly evaluate your security posture is challenging. Look at companies like RedLock that go beyond static rules applied to AWS CloudWatch and CloudTrail logs; they instead attempt to understand the meaning of your infrastructure and perform higher-level analysis to determine whether threats are real. Finally, look at products like HPE OneSphere, a single portal that gives developers the freedom to access the APIs and services they need, with the cost controls, governance and security the enterprise demands. Point solutions, or “sidecar” approaches, can be effective, although many organizations want a single framework that gives them a view across their hybrid estate. HPE OneSphere aspires to be such a solution. Look for more inherently hybrid portals to emerge as the sprawl of hybrid IT picks up steam.

The pace of innovation fueled by open-source communities is staggering, and managing that complexity for the enterprise is paramount. The modern application developer and the entire full-stack team stand on the backs of turtles that are truly turtles, all the way down.
