From IBM’s chess-playing supercomputer to self-driving cars, Artificial Intelligence (AI) is frothing up towards the crest of the hype cycle. With remarkable progress in big data, better algorithms and deep neural networks, politicians and academics alike fret that robots will take control and that human intelligence will become an artifact of a slower, more arcane era.
Before you brace yourself against an onslaught of non-humans, consider this: AI is still a toddler, technologically speaking. Hype thrives on misinformation, and what many outsiders call AI today is really machine learning, a subset of a much larger field. While machine learning powers now-familiar products like Netflix recommendations and Nest’s self-programming thermostat, the AI industry as a whole has a long way to go before robots replace people on a large scale.
Where the Hype is Coming From
Progress in machine learning is asynchronous, and to know where we really stand, it’s worth looking at the areas that are growing most rapidly. Augmented reality, driverless cars and personal assistants like Alexa and, in the near future, Facebook’s Jarvis, have grown out of advanced machine learning applications in image recognition, speech and text translations, and driving.
As the saying goes, ‘data is the new oil’: the fuel for rapid machine learning growth is huge libraries of data that can be refined into relevant, clean, large datasets. While sensitive industries such as healthcare and security have to collect and protect their own proprietary data, social data such as photos and language are a different story. Facebook can automatically tag individuals in photos, companies are launching wearable technology to translate foreign languages in real time, and Tesla uses a repository of around 800 million miles of driving data to power its hands-free driving mode.
But large amounts of publicly available data do not make AI—rather, they facilitate the rapid growth of specific machine learning applications. In order to pierce through the hype and discern where machine learning is most helpful, and how to differentiate it from AI, it is important to understand the distinctions between the two.
Artificial Intelligence and Machine Learning are Not the Same
The difference is subtle, but important.
An advanced form of pattern recognition, machine learning happens when computers are given access to vast amounts of information and use statistical interpretation to develop predictive models. In recent years, a “perfect storm” of low-cost, powerful computation and a large amount of publicly available information in the cloud has enabled computers to make sense of highly complex and nonlinear relationships. As a result, machine learning algorithms allow computers to predict, with some accuracy, which products you want to buy or what music you’d like to listen to.
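To make that concrete, here is a minimal, hedged sketch of the kind of predictive model described above: a tiny collaborative filter that scores whether a user might want a product based on the purchase histories of similar users. The data and function names are hypothetical, invented purely for illustration.

```python
from math import sqrt

# Toy purchase history (hypothetical data): each user is a vector of
# 0/1 flags, one per product, where 1 means "bought it".
history = {
    "alice": [1, 1, 0, 1],
    "bob":   [1, 0, 0, 1],
    "carol": [0, 1, 1, 0],
}

def similarity(a, b):
    """Cosine similarity between two purchase vectors (0.0 to 1.0)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict(user, product_index):
    """Score how likely `user` is to want a product they haven't bought,
    as a similarity-weighted vote of the other users' purchases."""
    target = history[user]
    num = den = 0.0
    for other, vec in history.items():
        if other == user:
            continue
        w = similarity(target, vec)
        num += w * vec[product_index]
        den += w
    return num / den if den else 0.0

# Bob hasn't bought product 1; his closest neighbour (alice) has.
print(predict("bob", 1))
```

Real recommenders use far richer models, but the statistical core — infer a prediction from patterns in historical data — is the same.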
In contrast, Artificial Intelligence is defined not just by the complexity of the calculations but by the extent that it can mimic human decision-making processes. A truly intelligent machine would almost be like C-3PO in Star Wars, demonstrating qualities like reasoning, judgment and imagination. Machine learning is therefore often seen as a subset of or even a prerequisite to Artificial Intelligence, as a machine will have to be able to analyze and interpret large amounts of information to inform its resulting decisions and actions. Algorithms alone, though, are not enough.
Reality Check: How Businesses Can Actually Benefit
Businesses that want to be at the forefront of new technologies would be wise to strategically incorporate machine learning into their current processes with a focus on identifying areas where large amounts of data can be collected and enhanced. This way, they will be ready to quickly deploy new AI technologies as they arrive.
Driverless cars might garner the most public attention, but for the average business, the most transformative capabilities will come when machine learning is applied to security applications, content management and virtual assistants.
Security: Hacking represents one of the most innovative areas of technology, for better and worse. Machine intelligence provides an essential counterpoint to technologically advanced hacks, many of which themselves are driven by bots and other forms of machine intelligence. As data becomes more valuable and online activity increases, companies should expect to battle an increasing amount of malware. Security issues can be addressed with behavioral modeling techniques, which will likely be another focus of the industry in the near-term as deploying solutions takes on an increased urgency.
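As a hedged sketch of what behavioral modeling can look like, the example below flags activity whose volume deviates sharply from an account's historical baseline — a simple z-score test. The data and the threshold are assumptions for illustration, not a production design.

```python
from statistics import mean, stdev

# Hypothetical daily login counts for one account over two weeks.
baseline = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 12, 11, 10, 13]

def is_anomalous(observed, history, threshold=3.0):
    """Flag an observation whose z-score against the historical
    baseline exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(11, baseline))   # an ordinary day
print(is_anomalous(250, baseline))  # a burst typical of bot-driven abuse
```

Production systems layer many such signals (time of day, geography, device fingerprints) into learned models, but each one follows this pattern: model normal behavior, then alert on deviation.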
Content management: Rather than collecting data solely for auditing purposes, businesses can extend their analysis to usage patterns, using the resulting insights to better serve customers. This creates a large opportunity for companies to implement standardized services that make use of that data. Predictive modeling in content management can tell IT managers which types of content are accessed most frequently, informing content governance efforts and making it easier to decide which kinds of storage and security each type of content needs. Intelligent alerts about potential compliance violations and unauthorized access requests are additional potential use cases.
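A minimal sketch of that access-frequency analysis — with entirely hypothetical log data and names — could start by ranking content types from an access log, a first step before any predictive model:

```python
from collections import Counter

# Hypothetical access log: (user, content_type) events.
log = [
    ("u1", "contracts"), ("u2", "marketing"), ("u1", "contracts"),
    ("u3", "contracts"), ("u2", "hr"), ("u1", "marketing"),
]

def access_ranking(events):
    """Rank content types by access frequency, most-used first."""
    counts = Counter(ctype for _, ctype in events)
    return counts.most_common()

for ctype, n in access_ranking(log):
    print(ctype, n)
```

From a ranking like this, an IT manager can see at a glance which content deserves fast storage and tighter access controls.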
Virtual assistants: While a C-3PO still represents the distant future, there are very real types of automation that businesses should keep a close eye on, as they save workers from mundane tasks. Instead of manually entering contact data into a spreadsheet after receiving an email, an intelligent CRM could automatically populate a database without human intervention. A Slack-enabled chatbot can automatically schedule conference calls based on a simple command; an Uber could be ordered automatically based on a calendared flight time; a natural language application could write simple updates or take care of internal communications.
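A toy sketch of the chatbot idea — the command syntax and field names below are hypothetical — is to parse a scheduling command into a structured event that a calendar API could then consume:

```python
import re
from datetime import datetime

def parse_schedule_command(text):
    """Parse a command like
    'schedule call with alice at 2025-03-04 15:00'
    into a structured event dict, or return None if it doesn't match.
    The command grammar here is invented for illustration."""
    m = re.match(
        r"schedule call with (\w+) at (\d{4}-\d{2}-\d{2} \d{2}:\d{2})", text
    )
    if not m:
        return None
    who, when = m.groups()
    return {"attendee": who, "when": datetime.strptime(when, "%Y-%m-%d %H:%M")}

event = parse_schedule_command("schedule call with alice at 2025-03-04 15:00")
print(event)
```

Real assistants replace the regular expression with machine-learned natural language understanding, but the end product is the same: free-form text turned into a structured action.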
Businesses of the Future
The tech industry has proven in principle that machines can make sense of inputs and outputs even when the relationships between them are nonlinear and difficult to codify. Progress in neural networks for advanced machine learning leaves scientists and engineers well equipped to move to the next phase of AI development.
There is still a lot of work to do and the pace of change may vary across industries, but everyone should be prepared. While AI has not yet fully arrived, watching the advancements in machine learning unfold portends an exciting future.
Vineet Jain, Co-Founder & CEO, Egnyte
Image Credit: John Williams RUS / Shutterstock