
AI or Artificial Intelligence: What businesses need to know

Artificial Intelligence (AI) has quickly become a hot topic in the last couple of years, thanks primarily to the rapid progression of software and the development of new and exciting (or new and scary, depending on your view) technologies.

Movies and TV have long made us aware of the potential of AI, but many of the seemingly far-fetched ideas are now closer to becoming a reality. And, as investment continues to grow, so too will the interest in and success of such projects.

It's also an area that has generated a huge amount of debate. Some of the most prominent figures in the technology industry, such as Tesla CEO Elon Musk and Microsoft co-founder Bill Gates, have publicly voiced their (rather differing) opinions on the future of AI.

The current state of AI

In terms of existing AI applications, there have been plenty of recent developments, even if not many of them are in mainstream use just yet. IBM’s Watson supercomputer, for example, has shown that via a combination of big data and machine learning it can beat humans on a TV quiz show – as well as having more practical applications. Plus, in 2014 a computer program called Eugene Goostman was hailed by some as the first example of a machine passing the Turing test, designed as a benchmark for whether a computer is displaying intelligent behaviour. On a more rudimentary scale, personal assistants like Cortana and Siri are able to “learn” from their prior interactions with the user and alter their responses accordingly.

However, the most prominent recent examples of Artificial Intelligence making the news have been largely negative, with many outlets choosing to focus on the dangers of unchecked AI development. This is partly a result of media sensationalism, but it also reflects the thoughts and feelings of some of the most highly respected scientists, technologists and entrepreneurs. Stephen Hawking has been a prominent critic of Artificial Intelligence and the risk it poses to the human race, and last year signed an open letter warning against potential AI "pitfalls," alongside the likes of Elon Musk and Steve Wozniak.

But is there any substance to these claims or are they nothing more than scaremongering?

Certainly, some trends in Artificial Intelligence development give cause for concern, including the possibility of autonomous weaponry designed to acquire and destroy targets, and the use of hugely powerful computers to carry out mass data mining.

Most observers agree that AI is a "dual use" technology, capable of both great good and great harm, and it is this dichotomy that we must wrestle with, and hopefully overcome. As Stephen Hawking said back in 2014, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last."

How's the future looking?

Listening to some industry experts, the future for the human race looks bleak, and AI is largely to blame. Much of this revolves around a cut-off point, sometimes referred to as the "singularity," whereby artificial intelligence exceeds that of humans. It is difficult to predict how the world will change after this point and whether we would be able to control the AI devices that we have created. Theories for when the singularity is likely to occur range from 2045 to the distant future – as yet no one can give a definitive answer.

However, we can forecast a number of AI applications likely to emerge in the near future. Autonomous vehicles are being developed by the likes of Google, Mercedes-Benz and Audi, and could eventually see human-driven cars become a thing of the past. The technology, of course, still has a long way to go, but legislative approval and test runs are already underway.

It is also likely that Artificial Intelligence will cause mass disruption to a number of industries. Just as the advent of industrial machinery led to mass unemployment in the manufacturing sector, the growth of AI systems could provoke a similar impact in the service industry. Many processes will become automated, meaning current employees will need to be re-trained and repurposed elsewhere.

Perhaps the most significant future impact of AI, however, will be a more subtle one. In the coming years, AI will slowly but surely become an everyday aspect of our lives, omnipresent but unobtrusive. It will be second nature to interact with digital or robotic assistants, as we subsume the technology in much the same way as we have the Internet and smartphones.

AI in business

Although killer robots make all the headlines, it is the more practical business applications of AI that are funding the industry. Artificial Intelligence startups received $310 million of investment last year, and the likes of Google, Facebook and Apple have set their sights on playing a key role in the industry. At the moment, business uses for AI include virtual agents that can resolve customer issues without the need for human involvement, making sense of huge data volumes, and developing more intelligent forecasting that helps business leaders make better informed decisions.

Many businesses may be worried about the negative implications that AI could have, including disrupting traditional ways of working and causing large-scale unemployment. However, it is more likely that agile businesses will be able to use AI as an opportunity for growth, and will see just as many jobs created as destroyed. In fact, according to a recent study titled "State of Artificial Intelligence & Big Data in the Enterprise," 80 per cent of enterprise executives believe that AI improves worker performance and creates jobs. In the near future at least, human performance is more likely to be enhanced by Artificial Intelligence than replaced by it.


Barclay has been writing about technology for a decade, starting out as a freelancer with IT Pro Portal covering everything from London’s start-up scene to comparisons of the best cloud storage services. After that, he spent some time as the managing editor of an online outlet focusing on cloud computing, furthering his interest in virtualization, Big Data, and the Internet of Things.