In the past two years, Kubernetes has claimed the lion’s share of the PaaS and CaaS markets with the promise of application portability across multicloud and hybrid cloud environments. In reality, however, multi-cluster management across multiple clouds is still maturing. This blog post explores some Kubernetes deployment patterns for multi-cluster applications that can scale across clouds.
Kubernetes Cluster Federation (KubeFed) is the official Kubernetes project for resource federation across clusters. In a typical multi-cluster environment, the federation control plane is installed on one of the clusters, which functions as the federation host; the remaining member clusters are then federated through the host cluster.
A KubeFed install enables additional APIs and features on the host cluster to implement cluster federation, such as:
- kubefedctl: In the same vein as kubectl, kubefedctl is a command-line utility for creating and managing federated resources.
- Push propagation: The KubeFed sync controller propagates any resource types or custom resource definitions (CRDs) that require federation across remote clusters and maintains the desired state of those resources across member clusters. For example, running kubefedctl enable service enables the FederatedService resource type across the member clusters.
- Multi-cluster DNS: By default, Kubernetes deploys a name server, called KubeDNS, to provide name resolution within a cluster. For name resolution between services across clusters, KubeFed relies on the ExternalDNS controller, which watches DNSEndpoint resources and creates A and TXT records in the configured DNS provider for each target federated ingress and federated service resource type, enabling cross-cluster resolution. ExternalDNS can be configured with CoreDNS as the back end in your host cluster, or integrated with cloud DNS services such as Amazon Route 53, Google Cloud DNS, and Microsoft Azure DNS.
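To make the bootstrap workflow concrete, joining a member cluster and enabling a federated type looks roughly like the following. This is a sketch, assuming the KubeFed control plane is already installed on the host cluster; the cluster and kubeconfig context names are placeholders.

```shell
# Join a member cluster to the federation; the host cluster acts as
# the federation control plane (context names come from your kubeconfig).
kubefedctl join cluster-b \
    --cluster-context cluster-b \
    --host-cluster-context cluster-a

# Enable federation of the Service type. This creates the
# FederatedService CRD on the host cluster, which the sync controller
# then uses to propagate services to member clusters.
kubefedctl enable service
```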
For example, deploying a federated ingress service using kubefedctl triggers the push propagator to enable federated ingress resource types across member clusters. Upon deployment, the KubeFed controller manages resource types and implements resource synchronization, DNS management and cross-cluster orchestration.
The diagram below shows an example of multi-cluster topology using KubeFed v2. Federated deployment utilizes FederatedService and Placement configurations to distribute workloads across the clusters. As a result, fed-svc-1 is deployed on clusters A, B, D, and fed-svc-2 on clusters A, C, D.
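As a sketch of what such a placement looks like, a FederatedService matching the topology above (fed-svc-1 placed on clusters A, B, and D) could be written as follows. The namespace, selector, and port are assumptions for illustration.

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  name: fed-svc-1
  namespace: demo          # namespace must itself be federated
spec:
  template:                # the Service spec pushed to each member cluster
    spec:
      selector:
        app: fed-svc-1
      ports:
        - port: 80
  placement:               # which member clusters receive this resource
    clusters:
      - name: cluster-a
      - name: cluster-b
      - name: cluster-d
```

The sync controller reconciles this object on the host cluster and creates a plain Service named fed-svc-1 in each cluster listed under placement.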
Service mesh model
While KubeFed uses federated resource types to propagate resources across member clusters, products like Google Anthos or Banzai Cloud PKE use Istio's multi-cluster features and config management operators to enable multi-cluster pod connectivity. With the service mesh model, deployments are managed on the individual clusters, but service-to-service cross-cluster traffic is managed by the Istio control plane. Istio deploys a sidecar proxy with each pod, and Istio multi-cluster enables pod-to-pod connectivity across clusters. Istio multi-cluster can be deployed with a single shared control plane, or with a separate control plane for each environment.
Istio with a single control plane
A shared (single) control plane is ideal for multicloud environments connected via VPN or transit gateways with flat, non-overlapping IP ranges. Multiple Kubernetes clusters are remotely connected to a central control plane by integrating the remote Istio installations with the primary Istio Pilot, telemetry, and policy pods. Once connected, the Envoy proxies communicate with the single control plane and form a mesh network across the clusters, while cross-cluster communication is managed through Istio ingress gateways.
Istio with multiple control planes
For multicloud networks without VPN connectivity or with overlapping IP ranges, Istio replicated control planes can be used to connect services across the clusters. Instead of a shared Istio control plane managing the mesh, in this configuration each cluster has its own Istio control plane installation, each managing its own endpoints. The IP address of the Istio ingress gateway service in each cluster must be accessible from every other cluster, ideally via L4 network load balancers (NLBs). A service in a given cluster that needs to be accessed from a different remote cluster requires a ServiceEntry configuration in the remote cluster, using a hostname in the *.global format. CoreDNS, installed with Istio, provides name resolution for the .global domain on port 53, resolving these entries for services outside the local mesh.
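For illustration, a ServiceEntry for reaching a service in a remote cluster might look like the following. This is a sketch in the style of Istio's replicated control planes setup; the service name, namespace, port, and gateway address are placeholders.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
    # Must be of the form <name>.<namespace>.global, so the
    # Istio-installed CoreDNS can resolve it.
    - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
    - name: http1
      number: 8000
      protocol: http
  resolution: DNS
  addresses:
    # Virtual, non-routable IP used only as a stable identity
    # for sidecar routing within this cluster.
    - 240.0.0.2
  endpoints:
      # Replace with the routable address of the remote cluster's
      # Istio ingress gateway (e.g., an NLB in front of it).
    - address: <remote-ingress-gateway-address>
      ports:
        # Traffic is sent to the gateway's TLS auto-passthrough port.
        http1: 15443
```

With this entry applied, clients in the local cluster can call httpbin.bar.global:8000, and the sidecar routes the request through the remote cluster's ingress gateway to the actual workload.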
Running Kubernetes in multi-cluster and multicloud environments is gaining a lot of attention among enterprises. This is because it enables advanced Kubernetes functionality, such as app portability, multi-region deployments, and high availability, and avoids vendor lock-in. In fact, enterprise Kubernetes platforms such as OpenShift 4.0 already support KubeFed, and this, along with Google's backing of Istio, is making Kubernetes federation and multi-cluster apps more of a reality.