For this article, I was inspired by the American comedian Jeff Foxworthy’s “You Might Be a Redneck If…” album. It started me thinking about the misconceptions, misunderstandings and misdeeds that affect our clients on their journey to the cloud. So, here is my own version of Jeff’s routine, with a new spin:
“You Might Not Be Ready for the Cloud If…”
- You ask, “What is the biggest server I can get in the cloud?”
Sure, it is important to know how much RAM and how many CPUs you can get on a cloud server. However, focusing only on those details distracts from the real objective of what you are trying to achieve. In your own data center, where it might take six weeks to stand up a server, you learn to request the biggest box you can possibly get away with; spinning up a server in the cloud is very different. If your environment is architected correctly, you should be able to simply launch a new server with a different configuration, either from your service catalog or by changing the parameters in your automation framework. That change from waiting six weeks to waiting five minutes frees you to focus on the real task at hand: delivering value faster.
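The shift from procurement to parameters can be sketched in a few lines of Python. The template fields and instance types below are hypothetical illustrations, not any provider's real API:

```python
# A minimal sketch: "getting a bigger server" in the cloud is a parameter
# change in your automation, not a six-week procurement cycle.
# All field names and instance types here are illustrative.

def build_launch_request(template: dict, instance_type: str) -> dict:
    """Return a new launch request with only the size changed."""
    request = dict(template)               # copy the baseline definition
    request["instance_type"] = instance_type
    return request

baseline = {"image": "web-server-v42", "subnet": "app-tier",
            "instance_type": "m5.large"}

# Need more capacity? Change one parameter and relaunch.
bigger = build_launch_request(baseline, "m5.4xlarge")
```

Everything else about the server definition stays identical, which is exactly why resizing stops being a big decision.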
- You want to be able to walk through the vendors’ (AWS, Azure, Google) data centers
We are emotionally attached to our data centers, and we want to be able to see where our servers and data are located in the cloud as well. This topic comes up particularly often when talking to Legal and Compliance personnel. The majority of heavily regulated companies have some sort of “right to audit” clause in their contracts that entitles them to walk through their vendors’ data centers, and they insist the same concept should apply when they move to the cloud. But cloud providers simply do not let you walk through their data centers, nor is there any need to do so. Let us be honest: if your clients come to your data center and you show them “their” server, do they really have any idea whether it is indeed their server or not? Cloud providers spread workloads across many data centers, and you have to rely on their third-party audits and various reports (SOC 1, SOC 2, ISO, etc.) for assurance that they are operating appropriately. This is no different from trusting Bank of America to keep your money safe without going to the local branch to inspect the vault.
- You try to replicate your on-premises network topology to the cloud
We have spent years perfecting our networking craft in data centers. We take incredible pride in how we architected our DMZ, how we divided the network into Class B and Class C blocks, and how effectively we manage routers between subnets. So when we start designing our cloud footprint, our first reaction is to apply all of that knowledge to the cloud. Yet that is plain wrong: the cloud is not just another data center in a galaxy far, far away. For one thing, networking design in the cloud should take advantage of the software-defined networking (SDN) capabilities the cloud provides. It should also take into consideration your organizational structure and the application workloads you plan to move to the cloud.
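With software-defined networking, address planning becomes a few lines of code rather than a hardware exercise. Here is a sketch using Python's standard `ipaddress` module; the CIDR block and tier names are assumptions for illustration:

```python
# Carve a VPC CIDR into per-tier subnets driven by workload needs, rather
# than replicating on-premises Class B/C blocks. The 10.20.0.0/16 range
# and the tier names are illustrative assumptions.
import ipaddress

vpc = ipaddress.ip_network("10.20.0.0/16")

# One /24 per application tier, allocated in software, not in hardware.
tiers = ["public", "app", "data"]
subnets = dict(zip(tiers, vpc.subnets(new_prefix=24)))

for name, net in subnets.items():
    print(f"{name}: {net}")
```

Because the layout lives in code, adding a tier or re-sizing a subnet is a small change and a redeploy, not a re-cabling project.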
- You think cloud is where the pictures are stored, and even those are not secure
If you still have concerns about public cloud security, then you are not aware of all of the capabilities public cloud has to offer. As much as we think we are more protected in our own data centers, the reality is that when it comes to security, it is virtually impossible to match the capabilities of the likes of AWS, Azure and Google. That does not mean you should blindly trust your local public cloud provider; instead, you should understand the shared responsibility model and take advantage of all your provider’s capabilities. In fact, the majority of high-profile breaches have been against on-premises data centers, and when implemented correctly, public cloud is perfectly safe for storing even the most sensitive data. You might be surprised at some of the highly sensitive workloads currently operating in cloud environments.
- You want to put your firewall between every subnet
Time and again we hear requests from companies to route all “north-south” traffic between subnets through a firewall so it can be inspected. In addition, companies insist on implementing their own firewall (Check Point, Palo Alto or equivalent) instead of relying on security groups to address some of these concerns. These requests come from outdated policies that state “stateful firewall with packet inspection capabilities must be deployed between subnets.” Or sometimes the justification is simply that network administrators already know their on-premises firewalls and will find matching ones easier to manage in the cloud. Neither reason is good enough for lifting on-premises firewalls into the cloud and routing all traffic through them; it just does not make sense. Use cloud capabilities to their full potential and design your networks and routing to allow traffic between subnets and instances only as necessary. You can still implement a third-party firewall if needed, but make sure there is a good reason for it.
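The security-group mindset can be sketched as a default-deny rule set attached to each tier, where only the traffic an application actually needs is ever allowed. The tier names and ports below are illustrative, not a real provider configuration:

```python
# A sketch of security-group-style thinking: explicit allow rules per
# destination tier with an implicit default deny. Tier names and ports
# are illustrative assumptions.

RULES = {
    "app":  [{"from": "public", "port": 443}],   # web traffic into the app tier
    "data": [{"from": "app", "port": 5432}],     # only the app tier reaches the DB
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule matches."""
    return any(r["from"] == src and r["port"] == port
               for r in RULES.get(dst, []))
```

Nothing routes through a central choke point; the policy travels with the workload, which is the property a lifted-and-shifted firewall appliance gives up.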
- You distribute temporary passwords/SSH keys to your admins in order to access instances and update software packages
Remember the old days, say, pre-Sarbanes-Oxley, when pretty much everyone had access to the servers? Then things changed, and people had to request temporary authorization to log into the servers to make changes (patches, fixes, etc.), and those changes were monitored via PowerBroker or an equivalent. If you still follow the same approach in the cloud, you have not embraced automation and cloud concepts to their full extent. Patching the servers should not involve logging in, but rather updating your server definition template and relaunching the server. As a matter of fact, in a well-architected cloud environment, there should be very few reasons why you would need humans logging into servers.
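The "patch by relaunch" idea can be sketched in a few lines. Nobody logs in; the definition changes and a fresh server is launched from it. The field names and image labels are hypothetical:

```python
# A sketch of immutable patching: instead of SSH-ing in to apply patches,
# bump the image version in the server definition and relaunch from it.
# The fields and image names are illustrative, not a provider's API.

def patch_by_relaunch(definition: dict, patched_image: str) -> dict:
    """Return a new server definition pointing at the patched image."""
    new_def = dict(definition)
    new_def["image"] = patched_image
    new_def["generation"] = definition["generation"] + 1  # audit trail
    return new_def

current = {"image": "base-2024-05", "generation": 7}
patched = patch_by_relaunch(current, "base-2024-06")  # then relaunch from this
```

The old definition is never mutated, so rolling back is just relaunching from the previous generation.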
- You say “multi-cloud” within two seconds after someone asks you about your cloud strategy
Multi-cloud itself is not a bad idea or strategy, but you have to understand what it really means to you. The multi-cloud term has been somewhat overused and seems to be a knee-jerk reaction to concerns about cloud or vendor lock-in. In addition, people have a misconception that they will be able to move workloads between on-premises, Azure and AWS at will, with ease and with no operational overhead. The reality is quite different. It takes time to get good at one cloud. It will take even more time to get good at several. You should definitely have a multi-cloud strategy, but it should be based on the business objectives you are trying to achieve. For example, you might want to run certain workloads in Google because of its capabilities; in Azure because of proximity to your corporate offices; or in AWS because of the breadth of offerings. But most certainly do not select multiple clouds because you are afraid AWS will raise the rates on you. After all, when you bought Oracle to power your workloads, you did not also purchase Sybase just in case Oracle changed its licensing agreements.
- You want to know when it is a good time in the cloud to take servers down to patch them
The good old Patch Tuesdays simply do not work in the cloud. If you really need to ask that question, you are not ready for the cloud. Your infrastructure should be designed to be self-healing and to withstand failure: if a server dies, another one launches automatically from the defined template. If that is how you have it set up, patching servers is as simple as updating the template definition and relaunching. Once the new server passes the necessary tests in an automated fashion, it can safely be put into use.
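The self-healing behavior is essentially a reconciliation loop: compare desired capacity against healthy servers and launch replacements from the current template, the way an autoscaling group does. This sketch is illustrative; the fields and template names are assumptions:

```python
# A sketch of a self-healing reconciliation pass: unhealthy servers are
# dropped and replacements are launched from the current template, so a
# dead server never pages a human. All names are illustrative.

def reconcile(servers: list, desired: int, template: str) -> list:
    """Keep healthy servers; launch replacements until capacity is met."""
    healthy = [s for s in servers if s["healthy"]]
    while len(healthy) < desired:
        healthy.append({"image": template, "healthy": True})  # "launch"
    return healthy

fleet = [{"image": "web-v1", "healthy": True},
         {"image": "web-v1", "healthy": False}]   # this one just died
fleet = reconcile(fleet, desired=2, template="web-v2")
```

Note the convenient side effect: because replacements come from the current template, a fleet heals itself onto the patched image over time.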
- You submit a ticket to your InfoSec team to check your server for vulnerabilities after you spin it up
This just smells of manual effort, ITIL processes and delays. Not that checking for vulnerabilities is bad; on the contrary, it is absolutely essential, but it should be done automatically in your deployment pipeline. Your InfoSec team should be responsible for defining the processes and gates needed to ensure that servers are secure, but they should absolutely get out of the way when those servers are launched. The InfoSec team should be an enabler of security, not a gatekeeper preventing things from being deployed quickly.
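A pipeline gate of this kind boils down to: InfoSec owns the policy, the pipeline enforces it on every launch, and no ticket is filed. The severity labels and scan results here are illustrative assumptions:

```python
# A sketch of an automated security gate: InfoSec defines the blocking
# policy once; the deployment pipeline evaluates every scan against it
# with no human in the loop. Severities and findings are illustrative.

BLOCKING_SEVERITIES = {"critical", "high"}   # the policy, owned by InfoSec

def security_gate(findings: list) -> bool:
    """Deploy only if no blocking vulnerabilities were found."""
    return not any(f["severity"] in BLOCKING_SEVERITIES for f in findings)

scan = [{"id": "CVE-2024-0001", "severity": "low"}]
deploy_ok = security_gate(scan)   # pipeline proceeds automatically
```

Tightening security then means changing the policy set, not adding another approval step to the queue.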
- Your Operations group is learning how to run CheckPoints, NetScaler, F5s, SolarWinds, Cisco routers and other data center tools in the cloud
Just do not do it. I am not against any of those tools per se, but do not blindly run to implement them in the cloud just because you have them in the data center and are comfortable with them. The whole point of tools is to help you achieve certain objectives, and there might be different ways to achieve those objectives in the cloud. Some of the old tools were never built for the cloud and its dynamic nature, and trying to force them into your cloud architecture could prove more detrimental than beneficial. Figure out what you are trying to do first, and work toward that. Case in point: if you use a monitoring tool in the data center to tell you when a server goes down so you can launch a new one, consider designing your cloud architecture to relaunch the server automatically without paging your Operations staff to do the same.
- You compare the cost of an on-premises server to a cloud instance to identify the savings
It is very tempting to do that, since most public cloud providers list the price of various instances online. After all, multiply X cents per hour by 720 hours per month and you have your cost. Match that against what on-premises costs you, and you can identify your savings or additional costs depending on the result. However, thinking that way is extremely shortsighted for several reasons. First, server costs alone do not represent the true cost of running IT. Second, even if the server in the cloud is more expensive, that might be OK if you get more out of it. Simply look at our mobile devices: we pay a lot more for them now than we did a few years ago, but we buy them anyway because they offer features and convenience we never had before. Last but not least, the mentality of “server A here is cheaper than server B there” places the focus on comparing widgets instead of looking at the true promise of cloud, which is agility, innovation and the ability to deliver value faster.
- You submit a ticket to ServiceNow to request IT to launch you a cloud instance
One of the essential cloud characteristics is on-demand self-service. Should you not be able to just do it yourself? Nothing else needs to be said here.
- You ask your auditors to check compliance every six months
You might be fully aware of the shared responsibility model and have already implemented the necessary controls. You might have your internal and even external auditors conduct periodic assessments of your cloud environment, but that is still a point-in-time exercise. If you have not implemented a process to continuously monitor your security and compliance in the cloud, you have some work to do. Cloud is dynamic, and checking your compliance posture just once in a while is simply not good enough.
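Continuous compliance is essentially a scheduled job that evaluates every resource against your rules on each run and flags drift immediately, instead of waiting for the next audit. This sketch checks a single hypothetical encryption-at-rest rule; the inventory and rule are illustrative:

```python
# A sketch of continuous compliance: a recurring job evaluates the full
# resource inventory against policy on every run, so drift is caught in
# hours, not at the next six-month audit. Resources are illustrative.

def check_compliance(resources: list) -> list:
    """Return the resources that violate the encryption-at-rest rule."""
    return [r["name"] for r in resources if not r.get("encrypted", False)]

inventory = [
    {"name": "customer-db", "encrypted": True},
    {"name": "scratch-bucket", "encrypted": False},  # drifted last night
]
violations = check_compliance(inventory)   # caught on the next run
```

In practice you would run many such rules and alert or auto-remediate on each violation; the point is that the check is code, so it runs as often as the environment changes.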
- You send a few people to training and think your enterprise is ready for cloud
Operating in the cloud requires different skills. The most common mistake companies make is to not give enough attention to what is necessary to truly operate production environments in the cloud. Your people will need to be upskilled, retrained and possibly repositioned in order to support your business with cloud as the enabler. This goes far beyond just IT operations and should include other groups, such as developers, project management, finance, vendor management, legal/compliance, audit and even the most senior management. Sending Jim, John and Jackie to AWS class does not cut it. Putting in place a comprehensive cloud talent enablement program does.
- You get your bill and cannot figure where the costs come from
Oops, we did it again: we got lost in the cloud spend. This happens more often than you think. Cloud makes it easy to launch and use resources, which can surprise you at the end of the month. However, that is not the cloud provider’s fault, and it is not a reason to declare that cloud is a sham and costs more than expected. More likely, you did not do a good job setting up your environment from the start. Initially, you should conduct a high-level TCO analysis to understand the rough magnitude of your spend. After that, you should ensure that you have appropriate fiscal governance in place (resource tagging, reporting, cross-charging, budget notifications, etc.) to be in full control of your cloud spend, instead of just hoping for savings.
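Tag-based cost allocation is the core of that fiscal governance: roll the bill up by a cost-center tag so every dollar is attributable, and surface untagged spend instead of letting it hide in the total. The line items and tag names below are illustrative assumptions:

```python
# A sketch of tag-based fiscal governance: sum monthly spend per
# cost-center tag and flag untagged resources explicitly. The billing
# line items here are illustrative, not a real provider export.
from collections import defaultdict

def cost_by_tag(line_items: list) -> dict:
    """Sum spend per cost-center tag; untagged spend is flagged separately."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item.get("cost_center", "UNTAGGED")] += item["cost"]
    return dict(totals)

bill = [
    {"resource": "web-1", "cost_center": "marketing", "cost": 120.0},
    {"resource": "db-1", "cost_center": "marketing", "cost": 310.0},
    {"resource": "mystery-vm", "cost": 95.0},   # no tag!
]
report = cost_by_tag(bill)
```

A growing `UNTAGGED` bucket is your early warning that governance is slipping, long before the invoice becomes a mystery.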
Did any of the above hit close to home? If it did, do not worry, you are not alone. The observations above are just some of the struggles we have seen our clients go through in their cloud journeys. If you are hitting similar roadblocks in your organization, give CTP a call. We can definitely help.