Perception of artificial intelligence (AI) has fluctuated dramatically over the years, from one extreme to the other: from the “biggest existential threat” to humanity - as autonomous car pioneer Elon Musk claimed back in 2014 - to the great hope and saviour of medicine, the environment, industry and even interplanetary travel. The volatility that for so long defined perceptions of the technology, however, is quickly giving way to a new and more accurate consensus - that of cautious optimism.
This shift is driven by two distinct developments: the first, a better understanding of AI’s actual role within society, both today and tomorrow; the second, an acceptance of its value without the need to know every line of code. Musk himself changed tack to more closely reflect the times, later stating that “we need to be careful” about AI’s development, rather than fearing it unendingly. This change of emphasis encapsulates the broader discussion around AI ethics, as well as the growing confidence in real-world applications of the technology.
In order to look forward with confidence, however, and to build on the foundations, we must first look back to assess the distrust that has hung over AI’s social status to date. Only then will the perception pivot from negative to positive.
The secret society
Ask the average person to explain the inner workings of AI, or a specific machine learning algorithm, and nearly every one would respond: I haven’t got a clue. That is not to say that people do not possess the capacity to learn, or that AI as a concept is too far-fetched to grasp - confined to those with a PhD in astrophysics or quantum mechanics. In fact, when you boil it down to its roots, AI today is pretty much just simple maths - classification and prediction.
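To make the "simple maths" point concrete, here is a minimal sketch of classification: a nearest-centroid classifier, which predicts a label using nothing more than averages and distances. The data, labels and function names are invented for illustration - this is not any particular product's algorithm.

```python
# A toy nearest-centroid classifier: classification and prediction
# reduced to averages and squared distances. All data is made up.

def centroid(points):
    """Average each coordinate across a list of points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(point, centroids):
    """Predict the label whose centroid is closest to the point."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(point, centroids[label]))

# Two made-up classes of 2D points.
training = {
    "small": [(1, 1), (1, 2), (2, 1)],
    "large": [(8, 9), (9, 8), (9, 9)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((2, 2), centroids))  # prints "small"
```

Nothing here is beyond school-level arithmetic, which is precisely the point: the mathematics is simple; it is the secrecy around deployed systems, not the maths, that makes AI feel impenetrable.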
And herein lies the fundamental issue: why ethical uses have been so few to date, and why distrust is the main emotion associated with the technology. People think they do not understand it, and you always fear what you don’t understand. But in reality, the confusion stems from the secrecy of the businesses that develop AI algorithms today, and their tendency to overcomplicate the solutions that result from them.
The majority of algorithms operate inside a black box; in other words, a closed system that masks the inner workings of the “simple” mathematics. Operating in silos, these lines of code are solely owned by the software companies and large corporations that want to grow and make money. They in turn ask people to trust them as they introduce novel concepts that impact daily lives, such as mortgage offerings or schooling. It is, therefore, an issue of opacity rather than complexity. And here is the proof.
When you undergo an operation and meet the surgeon, you do not ask to see the individual neurons of their brain. You may want to know where they studied or the success rate they have performing this particular procedure, but it would be highly irregular to want a breakdown of actual brain function.
This ought to be the same for AI. Because showing an algorithm will not make people understand; giving them tangible use cases and success stories will.
AI is slowly becoming all-pervasive, which will only help to change perceptions for the better. Look no further than the ability to unlock a phone with your face or convert handwritten words into text on tablets and computers. The quickest and easiest way to convince people of AI is to show them that it works - no matter how small the success. Because people want ease of use.
Ultimately, it boils down to the data. The more on offer, the more opportunities there are to build solutions. But it must be collected and acted on with transparency; otherwise, AI will never fully be embraced by a cautious public. Data collaboration, therefore, is the key, with multiple sources feeding into the overall result.
Take the city of the future, for example. Optimising mobility cannot be the work of a single AI because one business simply does not have access to the variety of data needed to build smarter urban spaces. Traffic lights, road works, public transport, green areas - the list goes on. But bring all these data points together and AI - alongside a human lens - can begin to develop the next generation of cities.
Using city planning as a model for wider AI use, you begin to realise how important transparency is, because there will always be an emotional quotient that AI cannot imitate. Neighbourhood sentiment or the historical significance of a building will not immediately be apparent to a machine. But benchmarking data-driven models against human inference can negate potential bias: if the AI says a road should become a one-way route to increase efficiency, but local residents know it will then act as a cut-through, the reasoning behind the recommendation becomes transparent and a more informed decision can be made.
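The human-in-the-loop benchmarking described above can be sketched in a few lines. Everything here is hypothetical - the function names, the objection threshold and the survey figure are invented purely to illustrate the pattern of deferring to human review when local knowledge contradicts the model.

```python
# A hypothetical sketch of checking an AI recommendation against
# human input before acting on it. Names and thresholds are invented.

def decide(ai_recommendation, resident_objections, objection_threshold=0.3):
    """Accept the AI's recommendation only if the share of objecting
    residents stays below the threshold; otherwise flag for human review."""
    if resident_objections < objection_threshold:
        return ai_recommendation
    return "human review"

# The model proposes a one-way street; 45% of surveyed residents object,
# so the decision is escalated rather than automated.
print(decide("one-way street", resident_objections=0.45))  # prints "human review"
```

The design choice is the important part: the model's output is a proposal, not a verdict, and the threshold makes explicit exactly when human judgment overrides it.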
AI will be accepted in time, but its success is dependent on open and trustworthy design. All technology is met by its critics; history has taught us that. But the AI conversation today is already more positive than it was 12 months ago. It will only continue to improve as businesses, governments and individuals collaborate and share data more.
Above all, it is an education piece. Contact breeds familiarity, and familiarity promotes trust. There is no better time to teach AI values than early in people’s development. Get schools and younger generations engaged with AI in order to promote healthy conversations; only then will solutions derived from AI become second nature. Expose the AI to more inclusive data sets and get diverse groups in contact with AI - the two will not only be mutually reinforcing, but accelerate the rate at which the perception of AI changes for the better.
Michael Kopp, Director of Data Science, HERE Technologies