“Let us proceed with a multi-cloud strategy, as we do with our current hardware providers.” – unnamed CIO
On the surface, a multi-cloud strategy may seem like a safe approach. But as organizations execute this strategy, it often brings unintended consequences, including greater management complexity, heavier personnel-enablement demands, and potentially higher operating costs.
Let us review some of the decisions organizations face when considering multi-cloud approaches, and why revisiting the choice to pursue this strategy may be worthwhile.
Cost Management Strategy
Many organizations have an existing procurement strategy in which they use multiple technology vendors for similar services (servers, storage, etc.) in an attempt to reduce pricing through the competitive bid process. When selecting a public cloud vendor, however, the same procurement strategy is less effective. Public cloud platforms already offer a variety of mechanisms that help manage spend, such as volume discounts and reserved instances. Furthermore, higher service consumption with one public cloud provider earns higher volume discounts, while splitting consumption across two or more providers will reduce your discount levels. Additionally, there are services, such as CTP’s Continuous Cost Controls, which help manage and optimize ongoing spend by providing recommendations for aligning application and resource requirements.
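The effect of splitting spend on volume discounts can be sketched with a simple calculation. The discount tiers and dollar figures below are purely hypothetical, not any provider's actual pricing:

```python
# Hypothetical tiered volume discounts (illustrative only; real cloud
# pricing agreements vary by provider and by negotiated contract).
TIERS = [
    (1_000_000, 0.15),  # >= $1M annual spend: 15% discount
    (500_000, 0.10),    # >= $500k annual spend: 10% discount
    (0, 0.00),          # below that: no discount
]

def discounted_cost(annual_spend):
    """Apply the highest discount tier the spend qualifies for."""
    for threshold, rate in TIERS:
        if annual_spend >= threshold:
            return annual_spend * (1 - rate)

# Consolidating $1M of spend with one provider vs. splitting it 50/50
# across two providers under the same (assumed) tier structure:
single_provider = discounted_cost(1_000_000)  # qualifies for 15% off
two_providers = discounted_cost(500_000) + discounted_cost(500_000)  # 10% off each

print(single_provider)  # 850000.0
print(two_providers)    # 900000.0
```

Under these assumed tiers, the same $1M of consumption costs $50,000 more when split across two providers, because neither half reaches the top discount tier.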
Vendor lock-in is another serious consideration. Concerns about future price increases and long-term vendor viability (“will my provider be in business five years from now?”) still exist among technology leaders. One way to mitigate this risk is to limit your agreements to the three primary public cloud providers: AWS, Microsoft and Google. While there is always some risk, it is relatively safe to assume these public cloud vendors will be in business for decades to come.
Operations Management Strategy
Legacy, on-premises environments can generally be managed centrally with a handful of tools that provide capabilities across multiple hardware platforms. In the public cloud, organizations are still looking for that “single pane of glass” to manage multiple cloud platforms and, for hybrid architectures, on-prem platforms too. However, because of frequent service updates and changing APIs, you will need distinct management and ancillary tools for each cloud platform. A “single pane of glass” that provides full management capability across the major cloud platforms therefore remains a distant prospect.
Containerized applications have brought about application portability, but their widespread use diminishes one of public cloud’s key value drivers: the ability to leverage cloud-native services. By refactoring applications to use cutting-edge cloud-native services, such as managed databases, serverless architectures, voice/image recognition and artificial intelligence/machine learning platforms, organizations can increase agility by offloading middleware and platform management requirements.
Personnel/Resource Management Strategy
Each cloud platform is different, with its own service catalog, APIs and tooling. With cloud engineers and architects both scarce and expensive, staffing a multi-cloud organization can be a costly endeavor. To properly execute a multi-cloud strategy, an organization needs resources with deep expertise in each cloud platform in its portfolio. Tools such as HashiCorp’s Terraform assist by providing a vendor-agnostic configuration language, but gaps remain, since each provider still requires its own resource definitions and platform knowledge.
Data-Driven Decisions – Data Gravity
Sometimes, the physical location of the data that supports an application drives the decision toward a multi-cloud strategy. Data gravity, the pull to locate applications close to the data repositories they consume, is never more of an issue than in public cloud architecture. Running an analytics platform on one cloud while the data it uses is stored in another presents latency challenges and additional data-egress costs. Understand that the majority of your applications need to co-reside with your data repositories, which may ultimately drive the adoption of a single cloud provider.
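The egress-cost side of data gravity is easy to estimate. The per-gigabyte rate and daily volume below are assumptions for illustration only; actual cross-cloud transfer pricing varies by provider, region and contract:

```python
# Hypothetical figures (illustrative; check your provider's actual pricing).
EGRESS_RATE_PER_GB = 0.09  # assumed charge for data leaving the cloud, $/GB
GB_PER_DAY = 500           # assumed daily volume an analytics platform pulls

def monthly_egress_cost(gb_per_day, rate_per_gb, days=30):
    # Data leaving one cloud for another is billed on the way out;
    # intra-cloud traffic between co-resident services typically is not.
    return gb_per_day * days * rate_per_gb

print(monthly_egress_cost(GB_PER_DAY, EGRESS_RATE_PER_GB))  # 1350.0
```

Even at these modest assumed volumes, keeping the analytics platform and its data on separate clouds would add roughly $1,350 per month in transfer charges alone, before accounting for latency. Co-residing the application with its data eliminates that line item.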
What Should Your Organization Do?
Pay Attention to the 80/20 Rule
While there are some significant differences between the leading public cloud platform providers, an organization should engage the one that fits the majority (the 80 percent) of their use cases. For those unique use cases that might drive an organization to a different or second public cloud provider, be very strategic in how that new provider would be integrated into your operations. Ask yourself two questions. Is introducing another public cloud provider worth the investment in personnel enablement and tool integration? Can this special use case be addressed in some manner by my primary provider?
To ensure that your strategy aligns with your business needs, invest your time in planning and research. Your team must understand the cloud provider’s capabilities and roadmaps, and have a plan to match up those capabilities with your projected requirements and needs.
It is worth noting that the United States Government is set to invest $10 billion in a single cloud provider to support their Joint Enterprise Defense Infrastructure (JEDI) initiative. Department of Defense spokesperson Heather Bobb put it this way: “The single award is advantageous because, among other things, it improves security, improves data accessibility and simplifies the Department’s ability to adopt and use cloud services.”
Can a single provider address your needs? Probably, but you can only be confident in that decision by doing your homework.