The age-old war of man versus machine has pitted artificially intelligent robots against humans in stories that catastrophise the demise of civilisation. While that may be the plot of many Hollywood movies, it couldn’t be further removed from reality.
Unfortunately, many people, including prominent figures such as Elon Musk and Stephen Hawking, believe artificial intelligence (AI) poses one of the greatest threats to humanity in the 21st century. In the UK specifically, over half of British consumers are concerned about the impact robots and AI will have on their everyday lives.
In one sense, it’s easy to see why. Daily news headlines and industry reports continually claim that millions of jobs will be replaced by machines over the next three to five years, especially in healthcare, energy, manufacturing and financial services.
However, the technology is also projected to revolutionise our daily lives, create new jobs, and make the planet far more sustainable and efficient.
As with every industrial revolution that came before, the way people live, work and play will change for the better - if we let it.
The case for AI
The reality is that machines are better than humans at repetitive, monotonous tasks. They compute numbers in seconds, they don’t tire of repetition, and they can be programmed to perform such elementary duties without mistakes.
Embracing this can create incredible efficiencies, especially in today’s on-demand digital era - an effect already seen widely across a number of sectors.
For example, online supermarket Ocado has automated its warehouses, using 4G-enabled robots to help complete consumers’ orders. The company also plans to introduce humanoid robots to help technicians with maintenance work.
But retail isn’t the only industry affected. AI is transforming every sector, from financial services to military defence to gaming.
In the financial services industry, high-street banks and wealth management companies are investing in algorithms and automation to augment their human advisers. AI-powered robo-advisers and robo-traders can determine the best funds for customers to invest in and automatically recommend banking products such as credit cards, personal loans and mortgages based on a few details entered by the applicant.
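To make the idea concrete, here is a minimal, purely illustrative sketch of how a few applicant details might be mapped to product suggestions. The thresholds and product names are assumptions invented for this example, not any bank’s real logic, and a production robo-adviser would use statistical models rather than hand-written rules.

```python
# Hypothetical rule-based "robo-adviser" sketch. All thresholds and
# product names below are illustrative assumptions, not real criteria.

def recommend_products(age, income, risk_tolerance):
    """Map a few applicant details to example product suggestions."""
    products = []

    # Match a fund to the applicant's stated appetite for risk.
    if risk_tolerance == "low":
        products.append("index-tracking bond fund")
    elif risk_tolerance == "high":
        products.append("global equity fund")
    else:
        products.append("balanced multi-asset fund")

    # Simple income- and age-based cross-sell rules (invented numbers).
    if income > 30000:
        products.append("rewards credit card")
    if age < 40 and income > 45000:
        products.append("first-time-buyer mortgage")

    return products

print(recommend_products(age=32, income=50000, risk_tolerance="medium"))
```

Even in this toy form, the design point of the article is visible: a human chose every rule and threshold, which is exactly where bias can creep in - and exactly why human oversight of the logic matters.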
Even the international military organisation NATO is turning to AI to help in its decision-making. It is increasingly using the technology to strengthen its anti-access/area-denial systems, designed to deter or prevent enemy forces from entering restricted sea, land or air spaces. As the technology evolves, it could reach the point of recommending strategic decisions to human NATO officials on vital issues.
All of these examples of AI illustrate the fundamental need for human involvement, whether at the programming level or working alongside machines.
The future of AI depends on an important mindset shift: it can no longer be about man versus machine, but man and machine.
The human touch
The human brain will always be the original and most powerful supercomputer. It created AI, in all of its permutations, and nurtured it to the point where it can understand language and learn from itself.
If society as a whole is able to embrace a culture of openness and collaboration, humans will always be in control of AI rather than letting it control us. This can be achieved through sharing research, making algorithms more open, or creating shared data sets that are consciously free of the biases that their human counterparts might embed in them.
The foremost voices in the tech industry are stressing the need for algorithms and data to be as open - and for standards to be as transparent and uniform - as possible. If we cannot see what kind of data is driving these new models of working, then we cannot understand what outcomes they will drive, or what factors they may exclude.
It’s time the industry moved past the euphoria surrounding AI and recognised that advancing the technology in any significant way requires skill in crafting experiments, domain expertise in selecting the data that matters for each experiment, and care in ensuring the eventual application is fair, inclusive and beneficial to all parties involved.
Take the navigation and traffic app Waze: it’s only as good as the people who contribute to it. It benefits every driver on the road, alleviates congestion and is updated in real time. That may be simplifying things, but if the industry applied the same thinking to how it builds AI applications, it would put human need at the epicentre and help eliminate problems like AI bias and math-washing.
Artificially intelligent chatbots, for example, have previously justified these concerns by allowing human bias to infiltrate their behaviour, particularly when left to learn for themselves. When AI is deployed in industries where accurate decision-making is fundamental to operations, it’s critical to ensure humans remain at its heart, working collaboratively.
Experts predict that artificial intelligence will contribute $15.7 trillion to the global economy by 2030. That level of investment and reliance on AI only reinforces the need for more initiatives like Google PAIR.
Efforts like this will go a long way in democratising the technology and ensuring that it benefits everyone – via new skillsets, applications and insights – for the greater human good, not just a chosen few.
Suman Nambiar is head of the AI Practice at Mindtree