Artificial intelligence (AI) is a hot topic these days. We’re told everyone can use it. We all have our own ideas of what AI can do: some very accurate, some a bit, well, paranoid. If we set aside the AI fiction depicted in the Terminator movies, what is the reality of AI, and how does it apply to today’s enterprises?
Artificial intelligence is an old concept. As far back as 1959, Arthur Samuel defined machine learning, the discipline at the core of modern AI, as the ability of computers to learn without being explicitly reprogrammed. Today, AI refers to approaches and technologies built on learning models rooted in pattern recognition and computational learning. AI studies algorithms that learn from patterns in data and applies that learning to make predictions about new data. Rather than relying on static program instructions, these systems make data-driven predictions or decisions that improve over time, without human intervention or additional programming.
The applications of artificial intelligence have been widely promoted as the ultimate way to build systems that provide increased value to enterprises. A more practical concept, machine learning, a subcategory of AI that includes neural network technologies, has risen to prominence as well.
The AI in Your Pocket
AI is pervasive these days. We speak to Siri on our iPhones to find out who texted us, or where the best burger can be had. We use Amazon Echo’s Alexa to have complete conversations about what movies are playing, or who won yesterday’s baseball game. Indeed, we even trust our lives to self-driving systems now found in Teslas and other vehicles.
Of course, the downside of having AI in our pockets, cars and computers thinking for us all the time is that we may become too dependent on it. For example, while AI fail-safes may make it much harder to crash your car, we should take care not to lose the ability to react quickly enough to avoid a crash when AI isn’t available.
It’s Not Applicable for Everything
Artificial intelligence is best leveraged for the specific types of applications that can benefit the most from this technology, such as fraud detection, predictive marketing, machine monitoring (IoT) and inventory management. Many enterprises that use AI do not effectively leverage it, and thus waste money. Keep in mind that AI technology can be costly in terms of processing time and storage, whether or not it’s in the cloud.
As artificial intelligence becomes more affordable through the use of cloud platforms, one of the biggest concerns is that the technology will be misapplied. This already seems to be a pattern, as cloud providers are promoting artificial intelligence as having wide value. However, that value won’t be realized if artificial intelligence is applied to systems that don’t benefit from making predictions derived from patterns found in data.
Artificial intelligence has the ability to provide tremendous value to businesses if correctly applied. While the technology was once beyond most enterprise budgets, public cloud providers’ ability to offer AI now makes it affordable. Enterprises that look for applications for this technology can, in some cases, find game changers for their businesses.
To that point, we at CTP created a table to put artificial intelligence into proper perspective. Figure 1 identifies the patterns of use, interfaces, and industries that can benefit most from AI technology.
While this diagram could be much more complex, given the different variations and use cases of artificial intelligence, the objective was to provide a practical guide for business-oriented use cases that many enterprises will encounter.
Moving to an Intelligent Future
AI can learn on its own and is cheaper to run than ever. So what can we actually do with these systems? In a recent MIT Sloan Management Review article, the Accenture Institute for High Performance studied a group of enterprises that were using artificial intelligence to increase sales growth. The survey showed that 76% of respondents planned to use artificial intelligence to gain greater predictive accuracy and thus align sales resources accordingly.
Google Cloud AI and Amazon AI are the best examples of public cloud artificial intelligence options. Both apply AI technology within their respective clouds to drive interest in application development on their cloud services. Their offerings often pair the ability to efficiently leverage artificial intelligence services with big data management systems that provide the source of the data, and thus the source of the patterns.
It’s important to consider all aspects of your requirements, and how the public cloud provider can best meet them. This goes beyond artificial intelligence, to the way in which data, middleware and analytical services work together to solve real business problems.
Artificial intelligence systems offered by public cloud providers include SDKs (software development kits) and APIs that allow developers to embed AI within their applications. This bridges the gap between the capabilities of artificial intelligence and the real-world use of this technology. An example would be the ability to determine whether a loan application is fraudulent, based on past and current patterns, as applied to the data within the loan application.
There are downsides to AI on a public cloud. First, these services are largely native to a single provider, so moving your applications to another cloud or back on premises means porting the data as well, which could be problematic. Second, many enterprises have a tendency to overuse AI, leveraging it for applications that don’t actually need its capabilities. For instance, AI is overkill for simple business processes that are more procedural in nature.
Artificial Intelligence Usage Patterns
Not all artificial intelligence models are the same. All are designed to learn, but they address different solution patterns. Most cloud providers, including AWS, Google and Microsoft, provide support for three types of predictions. They go by different names, but they boil down to three types: binary prediction, category prediction and value prediction (Figure 2). Let’s explore the potential use cases of each.
Binary predictions answer questions that elicit a yes or no response. For example, “Does the order contain data that the artificial intelligence application has previously flagged as fraudulent?” Or, “Based on data that comes from an AI-enabled recommendation engine, will a customer be likely to buy an ‘up sell’ product?”
More applications use this type of prediction than the other types because the responses are far less complex: yes or no. Thus, these artificial intelligence use cases often find themselves in typical business processes, such as order processing, credit check systems and recommendation engines that suggest videos, music or other products to users based upon gathered data and learned responses. We found that the finance and manufacturing verticals tend to benefit most from this aspect of artificial intelligence (Figure 2).
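To make the idea concrete, here is a minimal sketch of a binary prediction: learn from labeled past orders, then answer yes or no for new ones. The order amounts, labels and nearest-centroid approach are all invented for illustration; a real system would use a trained model from a cloud AI service or a machine learning library rather than this toy heuristic.

```python
def train_centroids(orders):
    """Compute the average amount of past fraudulent and legitimate orders."""
    by_label = {True: [], False: []}
    for amount, fraudulent in orders:
        by_label[fraudulent].append(amount)
    return {label: sum(v) / len(v) for label, v in by_label.items()}

def predict_fraud(centroids, amount):
    """Binary prediction: is this order closer to past fraud than to past legitimate orders?"""
    return abs(amount - centroids[True]) < abs(amount - centroids[False])

# Hypothetical training data: (order amount, was it fraudulent?)
history = [(120, False), (95, False), (4200, True), (3900, True), (150, False)]
centroids = train_centroids(history)

print(predict_fraud(centroids, 3500))  # near past fraudulent amounts -> True
print(predict_fraud(centroids, 110))   # near past legitimate amounts -> False
```

The yes/no answer is what makes this pattern so easy to drop into an order-processing or credit-check workflow: the calling code only needs a single branch on the result.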
Category predictions involve looking at a data set and then, based upon learned information, placing that information into a particular category. This is useful when very different types of data are being analyzed, and categories need to be applied so the data can be better understood and processed. For instance, insurance companies place different instances of claims in specific categories, based on what they’ve learned over the years.
An example would be to define the likely cause of an accident, even if such information is not a part of the data. Thus, the AI system can make assignments such as “alcohol likely involved,” “likely fraudulent,” or “likely weather related,” based on past learning, such as the time of day that the accident occurred, as well as location, the type of damage done, age of driver, etc.
Category predictions have many different types of applications, such as when we need to place additional meaning around data, but direct correlation data is not in the existing database. Finance, manufacturing, and retail are all verticals that can use this category prediction type of technology.
We found that the finance and healthcare verticals can especially benefit from this aspect of artificial intelligence (Figure 2). For instance, a financial user may need to classify transactions into categories, such as “likely fraudulent” or “likely non-compliant.” Or, hospitals may need to categorize test results that come back from the lab.
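The insurance example above can be sketched as a toy category prediction: assign a claim to the most likely cause category by comparing it to examples learned earlier. The features (hour of the accident, road temperature) and category centers here are hypothetical stand-ins for what a production system would learn from real claim data.

```python
import math

# Hypothetical "learned" category centers: (hour of day, road temperature in Celsius)
CATEGORY_CENTERS = {
    "alcohol likely involved": (23.0, 15.0),   # late night, mild weather
    "likely weather related":  (7.0, -3.0),    # morning commute, icy roads
    "likely fraudulent":       (14.0, 18.0),   # midday, no obvious external cause
}

def predict_category(hour, temp):
    """Return the category whose learned center is nearest this claim."""
    return min(CATEGORY_CENTERS,
               key=lambda c: math.dist((hour, temp), CATEGORY_CENTERS[c]))

print(predict_category(22, 14))  # -> alcohol likely involved
print(predict_category(6, -5))   # -> likely weather related
```

Note that the category the model assigns, such as the likely cause of an accident, is information that was never explicitly present in the claim data; it is inferred from patterns in past claims.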
Value predictions are more complex, but also more insightful. They quantify likely outcomes, using learning models to find patterns in the data. Let’s say we want to find out how many units of a product are likely to sell in the next month. This can be good information to know. It can permit tighter manufacturing planning. It can tell you whether additional revenue needs to be generated to meet quarterly objectives. It can even reduce the travel costs of salespeople following up on leads.
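A minimal sketch of a value prediction: fit a straight-line trend to past monthly unit sales and extrapolate one month ahead. The sales figures are invented, and a simple least-squares line stands in for the far richer forecasting models a real system would use.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

months = [1, 2, 3, 4, 5]
units = [100, 110, 125, 130, 145]   # hypothetical units sold per month
a, b = fit_line(months, units)

print(round(a + b * 6))  # predicted units for month 6 -> 155
```

Unlike the yes/no and category cases, the output here is a number, which is exactly what planning processes such as manufacturing schedules and revenue forecasts consume.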
We found that all verticals can benefit from this aspect of artificial intelligence (see Figure 2), but especially manufacturing, for production optimization, and government, for defense operations such as threat assessments. Both open source and proprietary artificial intelligence systems that support these types of predictions have been around for years. But in the past, the hardware and software costs of these systems were prohibitive for most enterprises. Moreover, even a company that could afford them likely did not have the artificial intelligence talent required to design the prediction models and manage the data science, too.
Enter cloud-based artificial intelligence solutions from the big three public cloud providers: AWS, Google and Microsoft. All are very different, but they share some common advantages and limitations.
These systems are cheap to operate. You only pay a few dollars per hour, on average, to drive your very own AI application. Public clouds also provide cheap data storage. You can leverage true databases or storage systems for the data input into your AI-enabled applications.
Finally, they all provide SDKs and APIs that allow you to embed AI functionality directly into your applications while supporting most programming languages.
The real value of AI technology is its use from within applications. For instance, the ability to determine in real time if a loan application is likely fraudulent, and to then provide a process to immediately deal with the issue, perhaps allowing an applicant to fix any errors and re-submit. These types of predictions are more focused on operations and transactions.
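The loan-application flow described above can be sketched as an application that scores a submission in real time and routes it accordingly. The `score_fraud_risk` function, its crude income-to-loan heuristic, and the 0.5 threshold are all hypothetical placeholders for a call to a cloud provider’s AI service through its SDK.

```python
def score_fraud_risk(application):
    """Placeholder for a real model call; here, a crude invented heuristic."""
    return 0.9 if application["income"] < application["loan_amount"] / 100 else 0.1

def process_application(application):
    """Embed the prediction in the workflow: flag risky applications for re-submission."""
    risk = score_fraud_risk(application)
    if risk > 0.5:
        return "flagged: please review and re-submit"
    return "approved for underwriting"

print(process_application({"income": 80_000, "loan_amount": 250_000}))
# -> approved for underwriting
```

The design point is that the prediction is consumed inside the transaction itself, so the applicant can fix errors and resubmit immediately rather than waiting on a batch review.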
Artificial intelligence systems that reside on particular public clouds are fairly bound to those clouds. Therefore, if you use an artificial intelligence system on cloud A, then the data storage mechanism on cloud A will typically be natively supported. However, your enterprise database is not supported, unless you provide data integration between your on premises data storage system and those in the cloud.
The key value for the cloud provider occurs when you take advantage of the native artificial intelligence system. It will then be in your best interest to take advantage of the native storage systems and native databases as well. Also, the applications live better on the cloud platform if they can frequently talk to the artificial intelligence models, which, in turn, often talk to the data. Get the hook?
Of course, if you are already looking to move data, applications and other processes to the cloud, you’re fine. The artificial intelligence system can be accessed as a native cloud service. But if you’re working with hybrid or multicloud deployments, as most are, the separation of the data from the artificial intelligence engine will be problematic, in terms of performance, cost and usability. Clearly, AI could be offered as a cloud provider’s loss leader, designed to attach more enterprises to that cloud.
Although artificial intelligence is being sold as the shiny new tool, the technology has been evolving for years. It’s today’s IT economics that finally makes AI affordable enough to provide value to the enterprise.
Artificial intelligence is now a reality because of cloud computing. The danger here is the overuse of AI technology for applications that frankly don’t need it. Perhaps we all should also use our natural intelligence to do some predictive learning.