In the summer of 1956, the Dartmouth Summer Research Project on Artificial Intelligence legitimised the field, turning AI from the stuff of science fiction into a scientific pursuit that looked set to change the world. And yet, 60 years on, it feels as though AI is only just beginning to reveal its true potential.
Recent advances in connectivity and the Internet of Things (IoT) have triggered an explosion in the volume of data being generated, and AI is becoming an essential tool for handling it.
Just two years ago, Stephen Hawking told The Independent that creating successful AI would be “the greatest event in human history”. However, he went on to say that “unfortunately, it might also be the last”. Concerns about the autonomous power given to AI technology, as well as the sensitive data it is able to access, have raised serious questions about how AI should be managed.
And so, as AI celebrates its 60th birthday, we have gathered a panel of IT industry experts to discuss how AI has the potential to help or harm the technology industry.
“As companies like Facebook launch Artificial Intelligence (AI) projects that enable businesses to deliver automated customer support, online shopping guidance, content and interactive experiences for their users through chatbots, AI is fast moving beyond the realms of science fiction and entering the workplace,” said Michael Hack, SVP of EMEA Operations at Ipswitch.
In fact, Hack argues that AI will change the day-to-day role of the IT team. “In the future, intelligent networks will be able to constantly scan all data and every aspect of the business infrastructure. There will be no need for IT to search through log files. Instead, IT teams will be engaged in monitoring and managing security scripts in much the same way as they oversee today’s automated solutions. In theory, life for the IT manager should become much more about ensuring the enterprise infrastructure is designed to be secure by default, rather than engaging in the reactive ‘patch and pray’ behaviours of today.”
“Ultimately,” Hack concludes, “embracing AI will involve an interesting learning curve, but organisations must ensure they are prepared to answer the tricky questions it raises, or the benefits could be short-lived.”
For Simon Moffatt, EMEA Director, Advanced Customer Engineering at ForgeRock, “one of the most interesting real-life developments in cognitive computing is the creation of digital assistant services. These self-learning systems typically use a combination of language processing, pattern recognition and data mining to attempt to respond to humans and complete tasks in a conversational manner.”
“As these applications become more mainstream,” Moffatt explains, “they will collect and manage highly personal data. For example, some digital assistants, such as Microsoft’s Cortana and Amazon’s Alexa, are being programmed to interact with other bots on behalf of humans to do things like buy a pizza, book a hotel room or submit medical information. While this is a big step forward for personal computing, the more intelligent cognitive technologies become, the more damaging the impact could be if they were compromised. Consider that digital assistants are being designed to handle not only menial tasks, but also more sophisticated lifestyle and medical applications. Their access to sensitive data and financial information increases the possibility of misuse exponentially.”
It is for this reason that AI has to be designed with security in mind. Moffatt argues that “for both human-to-bot and digital assistant-to-bot interactions, it is essential for every party to have a validated identity, particularly when the robots and digital assistants are handling sensitive information. If a user or bot can be identified, it becomes far easier to ensure that the commands being given are genuine and can be trusted. Importantly, this means that if a bot or digital assistant attempts to do something it is not permitted to do, it can be identified and prevented.”
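Moffatt's principle, first validating the identity behind every command and then checking it against what that identity is allowed to do, can be sketched in a few lines of Python. The bot names, shared secrets and permission sets below are entirely hypothetical, and a real deployment would rely on a dedicated identity platform rather than a hard-coded registry; this is only an illustration of the signature-then-authorisation pattern.

```python
import hashlib
import hmac

# Hypothetical registry of shared secrets for validated bot identities.
BOT_SECRETS = {
    "pizza-bot": b"s3cret-key-1",
    "hotel-bot": b"s3cret-key-2",
}

# Hypothetical permissions granted to each validated identity.
BOT_PERMISSIONS = {
    "pizza-bot": {"order_food"},
    "hotel-bot": {"book_room"},
}

def verify_command(bot_id: str, action: str, payload: bytes, signature: str) -> bool:
    """Accept a command only if the sender's signature proves its identity
    AND that identity is permitted to perform the requested action."""
    secret = BOT_SECRETS.get(bot_id)
    if secret is None:
        return False  # unknown identity: reject outright
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # signature mismatch: command may be forged
    return action in BOT_PERMISSIONS.get(bot_id, set())
```

An impersonated bot fails the signature check, while a genuine bot that oversteps its remit fails the permission check, which is the “identified and prevented” behaviour Moffatt describes.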
Cyber security is also top of mind for Wieland Alge, VP and GM EMEA at Barracuda Networks, who believes that whilst the security industry might be resistant towards AI in the short term, it has the potential to improve the efficiency of cyber security solutions. “Traditionally, one of the paradigms of IT security has been to establish predictability in an increasingly unpredictable world. Now, this age must come to an end. We have to embrace change, and AI will certainly be one of the most important elements of the next digital transformation.”
For Alge, AI will influence security in two ways. “First, AI systems will need a lot of data collected from many different sources, including those that have not yet been connected to the network. As more and more objects become networked, this creates a significant challenge to establish and maintain privacy, reliability, access to and integrity of that data.”
“Secondly, the industry also stands to profit from AI methods and algorithms,” Alge explains. “This will especially be the case when it comes to creating more agile responses to advanced attacks. Early steps have already been made, but there is still some way to go. I expect that AI applications will first focus on effective human-to-machine interaction and on making mass tasks like driving easier. All that said, the future is notoriously hard to predict!”
For Thomas Fischer, Global Cyber Security Advocate at Digital Guardian, AI will only be as successful as the integrity of the data it draws from. “AI systems and machine learning algorithms rely on being able to learn from and build upon vast amounts of data. This volume of data is what differentiates AI systems and makes them better able to identify patterns in anything from medical virus outbreaks to events and alerts in information security. The security and integrity of the data will be a key consideration in the successful application of AI technologies.”
“For instance, AI systems are typically not standalone entities. Most AI and machine learning engines require access to multiple data sets and other computing systems. It’s possible that they even share information across the Internet.” Fischer explains that “this presents a challenge in how to ensure that no sensitive intellectual property or personal data is leaked or stored in insecure locations. If the data underlying the AI is erroneous or incomplete, what might be the quality of the results? A malicious party could exploit weaknesses in baseline data to target a company or person, implementing a ‘denial of service’ through the corruption of data.”
“Data is the key underlying asset, so AI implementers must consider and address the security risks and issues in their data sets,” Fischer concludes, “because AI will only be as successful as the integrity of the data it is based upon.”
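Fischer's point about baseline-data integrity translates into a simple engineering habit: record a digest of every data set at validation time, and refuse to learn from anything that no longer matches. A minimal sketch, using hypothetical file names and data purely for illustration:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a data set's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical manifest built when the data set was last validated.
clean_data = b"age,diagnosis\n34,flu\n"
trusted_manifest = {"patients.csv": digest(clean_data)}

def is_untampered(filename: str, data: bytes, manifest: dict) -> bool:
    """Reject any data set whose content no longer matches the digest
    recorded at validation time; corrupted or poisoned data is refused
    before it can skew the model's results."""
    expected = manifest.get(filename)
    return expected is not None and digest(data) == expected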
Image Credit: Mopic / Shutterstock