If you have embraced the concept of cloud-native computing, your organization is set up for success in today’s competitive IT environment. You have already made significant progress. You have signed on to the basic tenets of modern IT management practices like DevOps, continuous delivery, microservices and containers. You have set a goal to create applications that are purpose-built for the cloud model. And you are buying into Cloud Native Computing Foundation (CNCF) core principles and values – isolating container-packaged applications, steering clear of vendor lock-in, providing unlimited scalability, optimizing resource utilization and stressing resiliency.
Now what? The next step on your journey is to choose a platform – or platforms – to host your cloud-native applications. The platform itself needs to dovetail with all the concepts listed above, departing from traditional enterprise application design and enabling a new agile approach.
What platforms are out there that would accomplish these goals? What are the pros and cons of each? Can platforms be used together? What platforms should be used when, and why? These are all good questions, and all are important to the future of your cloud implementation.
Weighing Your Options
It is commonly agreed that there are three development and deployment platform models you can follow to achieve a cloud-native approach. Each has its own benefits and drawbacks. All three typically run on container technology under the hood, though developers are not always required to manage containers directly. They are: Platform as a Service (PaaS), Container Orchestration (or Containers as a Service, CaaS) and Serverless (or Functions as a Service, FaaS). Let us look at them, one by one.
PaaS — Although some experts still list PaaS as an option for cloud-native platforms, we would argue that it is not. PaaS was seen as a popular, useful development platform when it burst onto the scene in the mid-to-late 2000s. But in recent years the PaaS approach has been superseded, to the point where many in the industry are declaring it dead. Building on a PaaS tends to lock developers into one environment, and workloads cannot be shared across platforms. This runs counter to the CNCF’s core principle of frictionless sharing.
That leaves Container Orchestration and Serverless – two legitimate platforms that are positioned to handle the demands cloud native imposes on IT environments, now and in the future.
Container Orchestration — These platforms – such as Kubernetes, Swarm and Mesos – give developers power they never had on PaaS or other conventional development platforms. They can build and deploy portable applications, and run them anywhere, without having to reconfigure and redeploy for different environments.
This capability gives developers a tremendous amount of flexibility and control over which exact image versions to deploy, and where. They can essentially oversee the whole infrastructure – giving them the final say over runtimes, reusability of images and movements of containerized apps to the cloud.
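One reason containerized applications stay portable is that deployment-specific settings live outside the image. A minimal sketch of that pattern, assuming hypothetical variable names such as `DATABASE_URL` and `LOG_LEVEL`:

```python
import os

def load_config(env=os.environ):
    """Read deployment-specific settings from the environment, so the
    same container image runs unchanged in any cluster or cloud.
    The variable names and defaults here are illustrative."""
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost:5432/app"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "replicas_hint": int(env.get("REPLICAS_HINT", "1")),
    }

# In dev, staging and production the orchestrator injects different
# values; the application code itself never changes.
config = load_config({"LOG_LEVEL": "DEBUG", "REPLICAS_HINT": "3"})
```

Because the orchestrator, not the image, supplies these values, the same artifact moves between environments without a rebuild.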
The downside to Container Orchestration? It introduces a new level of complexity to the process. Building a highly available Kubernetes cluster is a complicated task. Adding more container orchestrators requires developers to pay more attention to things like service discovery, load balancing, capacity management, monitoring, logging, version upgrades and other common services. So, developers will have their work cut out for them, or else they will have to rely on managed Kubernetes services, such as EKS, Fargate, AKS or GKE, which come with a certain degree of vendor lock-in and often run a few versions behind the latest release.
Serverless — Serverless platforms involve much less hands-on care than container orchestrators. Using tools such as AWS Lambda, Azure Functions or IBM OpenWhisk, development teams can write logic as small pieces of code that respond to specific events. Serverless is essentially a managed service. Developers can focus on applications that respond to triggers, and let the platform take care of all the incidentals – autoscaling, patching, elastic load balancing and so on.
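The "small pieces of code that respond to specific events" pattern can be sketched with a minimal Lambda-style function. The event shape (a `name` field) is a made-up example; the handler signature follows the common convention of receiving an event payload and a runtime context:

```python
import json

def handler(event, context=None):
    """Minimal serverless function: react to one event, return one
    response. Scaling, patching and load balancing are the
    platform's job, not the developer's."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform invokes this function once per trigger (an HTTP request, a queue message, a file upload) and disposes of the execution environment when it is done.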
This is great for developers who want to leverage a pay-as-you-go model, which charges only for the time that code is actually running. This works well for event-driven and unpredictable workloads, such as IoT, big data and messages. Middleware layers can be optimized to improve application performance over time. And Serverless also allows for easy integration to third-party APIs and plug-ins.
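The pay-as-you-go economics can be made concrete with a back-of-the-envelope calculation. The rates below are illustrative placeholders, not any provider's actual price list:

```python
def invocation_cost(duration_ms, memory_mb,
                    price_per_gb_second=0.0000166667,
                    price_per_request=0.0000002):
    """Rough pay-per-use cost of one function invocation: you pay
    for memory-seconds consumed plus a flat per-request fee, and
    nothing at all while the code is idle. Rates are illustrative."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * price_per_gb_second + price_per_request

# A short, small invocation costs a tiny fraction of a long, large one,
# which is why bursty, unpredictable workloads fit this model well.
cheap = invocation_cost(duration_ms=50, memory_mb=128)
heavy = invocation_cost(duration_ms=5000, memory_mb=1024)
```

Contrast this with a container cluster, where you pay for provisioned capacity whether or not requests arrive.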
But there are some downsides. Serverless is a less mature computing model, so there are fewer comprehensive and stable samples, tools, best practices and pieces of documentation. It is harder to debug than other platforms. Because of the on-demand structure, “cold starts” after the system sits idle can introduce noticeable latency. Serverless workload runtimes are also capped at five minutes; anything longer requires additional orchestration or refactoring into multiple microservices.
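One common workaround for cold starts is to have a scheduler ping the function periodically and short-circuit those pings before doing any real work. The `warmup` flag in the event payload is a hypothetical convention; adapt it to whatever your scheduler actually sends:

```python
def handler(event, context=None):
    """Keep-warm pattern (a common mitigation, not a platform
    feature): scheduled pings return immediately so they cost almost
    nothing, while genuine events run the normal logic."""
    if event.get("warmup"):
        # Touching the function keeps its execution environment
        # alive, avoiding a cold start for the next real request.
        return {"warmed": True}
    # ... real work for genuine events goes here ...
    return {"processed": event.get("id")}
```

Similarly, a job that would exceed the runtime cap is usually split into several smaller functions chained through a queue or workflow service.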
Arriving at a Platform Solution
So, which platform do you choose? There is no single right answer – no one-size-fits-all solution. Certain types of workloads fit better in containers, and others are more attuned to serverless environments. You can split workloads up, depending on their characteristics and the organization’s needs. The important thing is to prepare for the project and ask the right questions, so you can make the best match.
Bottom line: Serverless environments tend to be the best fit for greenfield apps, apps being moved to the cloud, event-driven workloads and anything that requires a lot of scaling. IoT apps, data streams and workloads that need file transfers are also good candidates for serverless. In addition, modern-day ops teams are looking at how to leverage serverless technology to automate tasks such as log parsing, auto-tagging, security-event remediation, alarms, auto-scaling and more. On the other hand, legacy workloads that need more control and have to be moved around, in and out of the cloud, do better in containers. The more granular the requirements, the more inviting containers become.
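The rules of thumb above can be expressed as a toy matching function. The workload attributes and the precedence of the rules are illustrative only, not a real assessment methodology:

```python
def suggest_platform(workload):
    """Toy rule-of-thumb matcher reflecting the guidance above:
    event-driven or greenfield work leans serverless, while legacy
    workloads needing control and portability lean containers."""
    if workload.get("event_driven") or workload.get("greenfield"):
        return "serverless"
    if workload.get("legacy") or workload.get("needs_portability"):
        return "containers"
    return "evaluate further"

suggest_platform({"event_driven": True})   # a serverless candidate
suggest_platform({"legacy": True})         # a container candidate
```

A real portfolio assessment would weigh far more criteria, but even a crude first pass like this helps sort workloads into buckets for deeper evaluation.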
That is a general blueprint to follow. But situations vary, and organizations need to go through an evaluation process before locking down a platform strategy for cloud-native computing.
This doesn’t happen overnight. It’s best to do a thorough portfolio assessment, over four to six weeks. Look at the whole picture. Classify the workloads based on different criteria – what can be ported easily, what needs to be rebuilt, and what it would cost to manage certain development tasks. Run this through a Cloud Business Office function, and you will emerge with a plan based on the business value a particular platform strategy will deliver.
Under the covers, serverless platforms such as Lambda and Azure Functions themselves rely on container orchestrators with advanced schedulers and hypervisors to pre-warm containers and launch them at scale. Today, serverless imposes limitations in terms of language support, execution duration and deployment models. But it is just a matter of time before serverless schedulers such as Virtual Kubelet can sit on top of Kubernetes, standardizing the packaging and deployment models across both serverless and container platforms.
Cloud-native computing is changing the landscape – forcing organizations to think about new models and new delivery methods. Choosing the right platform is critical to the long-term success of a cloud-native implementation. Settling on a strategy is the first step. Take time to understand your options and make the decision that is right for you.