Over the past few years, serverless and containers have emerged as the winners for building modern infrastructure. Each of these technologies has its own sweet spots and challenges. Highly customized container workloads may require a sophisticated, highly reliable orchestration layer. Serverless cloud services are limited when it comes to application portability, barring some recent advancements such as OpenFaaS.
With the rise of managed Kubernetes, modern application developers are growing comfortable with containers. They build and package applications on local machines and distribute them to different environments as container images, without worrying about performance, scalability or the specifics of a cloud provider's Kubernetes implementation.
But with serverless deployments, the developer experience is not as smooth when it comes to application packaging, portability and debugging. Developers may still need to deal with cloud infrastructure operations, and with the nitty-gritty of vendor-specific offerings such as AWS SAM or Azure Functions plugins, when troubleshooting coding errors and exceptions, cold start times, and so on.
Container-based functions are one attempt at addressing this problem. They package applications into short-lived, transient containers, ideal for serverless-style workloads. Every major cloud provider is innovating on its hypervisors and processors to support these transient containers, which can be created within milliseconds. However, the implementations differ across the major cloud providers.
AWS Firecracker is a virtual machine monitor written in Rust that uses KVM to run transient container workloads as microVMs. Its fast startup times and low memory overhead make it possible to pack thousands of microVMs onto a single machine. Both Lambda and Fargate use Firecracker behind the scenes.
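To get a feel for how lightweight the model is, here is a rough sketch of driving Firecracker's REST API over a Unix socket to boot a single microVM, based on its getting-started workflow. The socket path, kernel image and rootfs file names are placeholders you would substitute with your own artifacts.

```shell
# Start the Firecracker VMM listening on a Unix socket (path is illustrative)
firecracker --api-sock /tmp/firecracker.socket &

# Point the microVM at a kernel image and boot arguments
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/boot-source' \
  -H 'Content-Type: application/json' \
  -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}'

# Attach a root filesystem
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/drives/rootfs' \
  -H 'Content-Type: application/json' \
  -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'

# Boot the microVM
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/actions' \
  -H 'Content-Type: application/json' \
  -d '{"action_type": "InstanceStart"}'
```

Lambda and Fargate drive this same API programmatically, at massive scale, rather than via curl.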
With the release of Azure Functions 2.0, the Functions Core Tools can generate a Dockerfile for a function project, so you can package a function into a container and deploy it to Azure Container Instances (ACI). ACI fires up in a few seconds and provides hypervisor-level security to keep the application hardened and isolated at the instance level.
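The flow looks roughly like the following sketch. The `--docker` flag on `func init` is what emits the Dockerfile; the project, registry and resource-group names here are placeholders.

```shell
# Scaffold a Functions project along with a generated Dockerfile
func init MyFunctionProj --worker-runtime python --docker
cd MyFunctionProj
func new --name HttpExample --template "HTTP trigger"

# Build the image and push it to a registry you control (names are illustrative)
docker build -t myregistry.azurecr.io/httpexample:v1 .
docker push myregistry.azurecr.io/httpexample:v1

# Run the same image on Azure Container Instances
az container create --resource-group my-rg --name httpexample \
  --image myregistry.azurecr.io/httpexample:v1 --ports 80
```

The same image can also run locally with `docker run`, which is exactly the portability story that vanilla serverless deployments lack.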
While Microsoft and Amazon may choose to run these transient containers as VMs, Google has built Cloud Run based on Knative to deploy transient container stacks on top of Kubernetes and Istio. Knative focuses on an idiomatic developer experience and is designed to plug easily into existing build and CI/CD toolchains.
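For comparison, a Knative deployment is just a Kubernetes manifest. A minimal Knative Service looks something like this sketch, where the image and names are placeholders:

```yaml
# Minimal Knative Service; Knative scales the revision to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello:v1
          env:
            - name: TARGET
              value: "world"
```

On fully managed Cloud Run, the same container can be deployed without writing YAML at all, via `gcloud run deploy hello --image gcr.io/my-project/hello:v1`.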
Though implementations differ, the common thread is management via Kubernetes. Virtual Kubelet integrates ACI with Kubernetes, Firecracker runs OCI-compliant container images through its containerd integration, and Knative is built on Kubernetes from the start.
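With Virtual Kubelet, for example, scheduling a pod onto ACI is a matter of targeting the virtual node. The node selector and toleration below follow the conventions used by the ACI connector, but treat the exact labels and image as illustrative:

```yaml
# Pod scheduled onto a Virtual Kubelet node backed by ACI
apiVersion: v1
kind: Pod
metadata:
  name: aci-example
spec:
  containers:
    - name: app
      image: myregistry.azurecr.io/httpexample:v1
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists
```

To the rest of the cluster this is an ordinary pod; only the scheduling constraints reveal that it runs on serverless infrastructure.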
Adopting Kubernetes as a unified platform for both serverless and container workloads not only lets you fully embrace modern, cloud-native architectures, but also streamlines your DevOps practices.