Consumers of the public cloud want to understand what their future cloud costs will be and when they will reap the benefits of their migration. Whether inspired by the opportunity to exit a data center, separate from expensive licensors, or reduce application support overhead, companies starting their cloud journey often look to a full business case to guide their decisions. The logical first step is to find a fast way to evaluate the cloud economics accurately, specifically the Total Cost of Ownership (TCO) of the public cloud against data center costs. But when speed takes priority and these models are reduced to a Rough Order of Magnitude (ROM) cost exercise, executives are often left wondering where their team’s estimates went awry.
Rough Order of Magnitude
ROMs rest on two main simplifying assumptions that allow a cost model to be produced quickly: infrastructure utilization and a lift-and-shift migration to the public cloud. Teams building a ROM cost exercise are all too prone at this point to match instance types based on CPU and memory utilization rates, apply discounts for prepaid commitments, and estimate savings from shutting down environments. The focus of a ROM is to show the potential savings on what is often the largest part of the future-state bill: infrastructure/compute. While ROMs will show savings more often than not, they speak more to the high cost of remaining in a data center or co-location facility than to what it costs to operate efficiently in today’s public cloud.
Since there is no time allowance for delving into the details of specific application requirements, the lift-and-shift assumption lets the ROM be an exercise done within the finance team with little to no input from the application teams. It does not take into account that some applications will be consolidated into others or replaced outright; we’ve seen on average 25% of an enterprise’s applications fall into this category. Identifying such applications reduces the work and cost a ROM would otherwise estimate. For the rest of the footprint, however, it can be a daunting task to build out architectures for each application before deciding how, or even whether, to migrate. It becomes even more unmanageable when an enterprise has hundreds or thousands of highly complex applications, or application teams that are already at full capacity supporting them.
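To make the arithmetic concrete, the Python sketch below shows the kind of calculation a ROM typically reduces to: right-size each server to its utilized capacity, apply a prepay discount, and trim the total by the share of applications expected to be retired or consolidated. Every figure in it is a hypothetical placeholder, not a real rate card or inventory.

```python
# Illustrative only: a minimal sketch of the arithmetic a ROM reduces to.
# The server inventory, instance catalog, discount, and retire/consolidate
# fraction are all hypothetical placeholders, not a real provider's rate card
# or a real enterprise's footprint.

# Hypothetical on-prem servers with peak CPU and memory utilization.
servers = [
    {"name": "app-01", "vcpus": 16, "mem_gib": 64,  "cpu_util": 0.20, "mem_util": 0.35},
    {"name": "db-01",  "vcpus": 32, "mem_gib": 256, "cpu_util": 0.55, "mem_util": 0.70},
    {"name": "web-01", "vcpus": 8,  "mem_gib": 32,  "cpu_util": 0.10, "mem_util": 0.25},
]

# Hypothetical instance catalog: vCPUs, memory, and an hourly on-demand price.
instance_catalog = [
    {"type": "small",   "vcpus": 2,  "mem_gib": 8,   "hourly": 0.10},
    {"type": "medium",  "vcpus": 4,  "mem_gib": 16,  "hourly": 0.20},
    {"type": "large",   "vcpus": 8,  "mem_gib": 32,  "hourly": 0.40},
    {"type": "xlarge",  "vcpus": 16, "mem_gib": 128, "hourly": 0.80},
    {"type": "2xlarge", "vcpus": 32, "mem_gib": 256, "hourly": 1.60},
]

PREPAY_DISCOUNT = 0.30   # assumed discount for a prepaid commitment
RETIRE_FRACTION = 0.25   # assumed share of apps consolidated or replaced, per the text
HOURS_PER_YEAR = 8760

def right_size(server):
    """Pick the cheapest instance that covers the server's *utilized* capacity."""
    needed_vcpus = server["vcpus"] * server["cpu_util"]
    needed_mem = server["mem_gib"] * server["mem_util"]
    fits = [i for i in instance_catalog
            if i["vcpus"] >= needed_vcpus and i["mem_gib"] >= needed_mem]
    return min(fits, key=lambda i: i["hourly"])

# Annual compute cost: right-size each server, apply the prepay discount,
# then trim the total by the share of apps expected to be retired or consolidated.
# (A fuller ROM would also credit savings from shutting non-production
# environments down outside business hours.)
gross = sum(right_size(s)["hourly"] * HOURS_PER_YEAR * (1 - PREPAY_DISCOUNT)
            for s in servers)
rom_estimate = gross * (1 - RETIRE_FRACTION)
print(f"ROM annual compute estimate: ${rom_estimate:,.0f}")
```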
Achieving the “Total” part of a TCO
To get to the “Total” in a TCO, some important factors must be included that often cannot be assessed on shorter timelines. First, you must account for the new tools and shared services needed to fully operate in the public cloud. Many industries and enterprises do not allow open source tools, so the cost of implementing best-of-breed tools and shared services needs to be accounted for. This is why we at Cloud Technology Partners have spent time generating a recommended technology stack for a Minimum Viable Cloud (MVC).
We’ve also seen costs vary greatly between clients, depending on a team’s readiness to use the cloud and the time it will take to train people in the skills needed to operate it, should they opt to conduct the migration themselves.
Finally, and most importantly, there is cloud-first culture. Change can be difficult, and even though the public cloud has been in use for over a decade, old habits need to be cast aside in this new world. Failing to foster an environment where innovation, creativity, and agency are rewarded appropriately will prolong a migration, if not undo it completely.
Starting in Shifts
We recommend taking a waved approach to building and maintaining your TCO, similar to the model you would use when migrating to the cloud. Select and assess the applications for the first wave of migration and create their architectures. Then calculate a TCO for those applications, including the cost of labor to migrate them into the cloud. These architectures can then be used to estimate costs for similar applications in future waves, when a deeper assessment will be completed and the costs refined. This waved approach infuses a culture of collaboration and innovation that extends from the enterprise’s financial planning through deployment to the inevitable exodus from the world of the data center.
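As a rough illustration of the bookkeeping behind this approach, the Python sketch below keeps a small catalog of wave-one architectures and reuses their run-cost estimates, plus per-application migration labor, to produce a per-wave TCO. All of the applications, patterns, and dollar figures are hypothetical.

```python
# A minimal sketch of the waved TCO bookkeeping described above, with entirely
# hypothetical applications, costs, and wave assignments. Wave-one architectures
# form a catalog used to rough out costs for similar applications in later waves.

# Annual run-cost estimates for architectures assessed in detail during wave one.
architecture_catalog = {
    "three-tier-web":  42_000,
    "batch-analytics": 65_000,
}

# Applications tagged with the closest catalog pattern, their wave, and a
# one-time estimate of migration labor.
applications = [
    {"name": "orders",    "pattern": "three-tier-web",  "wave": 1, "migration_labor": 30_000},
    {"name": "reporting", "pattern": "batch-analytics", "wave": 1, "migration_labor": 45_000},
    {"name": "catalog",   "pattern": "three-tier-web",  "wave": 2, "migration_labor": 25_000},
    {"name": "billing",   "pattern": "three-tier-web",  "wave": 2, "migration_labor": 35_000},
]

def wave_tco(wave, years=3):
    """One-time migration labor plus several years of estimated run cost for a wave."""
    apps = [a for a in applications if a["wave"] == wave]
    labor = sum(a["migration_labor"] for a in apps)
    run = sum(architecture_catalog[a["pattern"]] for a in apps) * years
    return labor + run

for wave in (1, 2):
    print(f"Wave {wave} estimated 3-year TCO: ${wave_tco(wave):,.0f}")
```

As later waves are assessed in detail, their refined architectures replace the catalog-based estimates, keeping the enterprise-wide TCO current.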
This waved approach ensures that your architecture drives costs, not the other way around. It breaks down silos across the enterprise as new teams look to the catalog of configurations that came before them for an accurate starting point for their own applications. Most importantly, it gives everyone an understanding of what each wave of applications will cost in the cloud. Leaving the data center will save your enterprise money, but exactly how much can only be determined once an application is fully understood.
CTP’s latest offering, Continuous Cost Controls, was created to solve these common challenges. It enables enterprises to maintain a catalog of application configurations, track actual versus projected spend during and after migration, and identify additional ways to optimize cost in the cloud. Once an application is in the cloud, there are often further opportunities to optimize costs, and as new technologies emerge, people and processes will shift to match these advances.