AI is like a hotdog: it’s not really what it says it is

It’s hard enough to find organic intelligence on this planet, let alone artificial intelligence, so can we please stop talking about AI as if it’s going to get smart and take over the world? People need to think of AI like a hotdog. It’s not really what it says it is. It’s a frankfurter sausage in a cheap bread roll. It’s not a dog that is hot, just as artificial intelligence, a computer programmed by a human to perform a task, is not intelligent.

We shouldn’t be giving AI complicated human jobs. The only thing we really have to fear is our own stupidity in entrusting algorithms with human complexities, because not only is AI unintelligent, it doesn’t even have common sense.

Yet we hear terms like “deep AI” and “super-smart computers” so often that we’re convinced beyond any doubt that computers are smarter than any human. But a deep learning AI that has been fed thousands of human conversations still can’t acquire language with the sophistication of a toddler. People learn by extrapolating and generalising. The brain is capable of more than just recognising patterns in large amounts of data: it can acquire deeper abstractions from little data. AI can’t.

There is learning from trial and error and from experience, and there is learning from being creative, from trying something new. Computers cannot create something new. They do not have the motivation to invent. They can be programmed to have a goal, but that is not the same as motivation and desire. Desire is a uniquely organic trait.

That’s not to say there is no danger attached to AI, or that we shouldn’t tread carefully. It’s just that the source of the danger has been wildly misjudged. Stephen Hawking warned that AI could be the worst event in the history of civilisation, the real risk being not malice but competence. “A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble,” he said. I mostly agree with Hawking here, but the danger isn’t the competence of the AI; it’s the incompetence of those who programmed it.

Huge concerns

The more immediate worry is how people are trusting algorithms before trusting other people. AI is being given decidedly human jobs. We’re told doctors will no longer be needed because AI can scan a massive database and find a diagnosis better than a doctor can. Call me old fashioned, but you’ll still find me visiting human doctors in times of illness. They have the competitive edge of being people too, and therefore understand the ailment in a way that AI never could.

Yet in California, a new law slated to take effect in October 2019 will let AI decide on the freedom of citizens. The law requires the state’s criminal justice system to replace cash bail with an algorithmic pretrial risk assessment. If someone receives a “high” risk score, that person must be detained prior to arraignment, effectively placing crucial decisions about a person’s freedom into the hands of AI.

This is a huge concern. Algorithms can reinforce existing inequity in the criminal justice system. Researchers at Dartmouth College found in January that one widely used tool, COMPAS, wrongly classified black defendants as being at risk of committing a misdemeanour within two years at a rate of 40 per cent, versus 25.4 per cent for white defendants. This is hardly unexpected. AI can’t even be trusted with a Twitter account.

Remember Microsoft’s racist AI on Twitter? The tech giant enthusiastically gave its super-smart AI, Tay, a Twitter account and let it tweet its little battery heart out. They waited with bated breath. Perhaps it would learn the meaning of life and tweet it to the world? Maybe it would start a revolution? Or maybe it would just retweet cat pics? No, no, and no. The account was promptly shut down and Microsoft was left apologising for its social learning AI, which became a trash-talking racist in less than a day. Surely that is a lesson if ever there was one. AI is not neutral. AI follows the crowd and creates a feedback loop.
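
To see how that feedback loop arises, here is a minimal sketch in Python. The data is entirely synthetic and the model is an ordinary logistic regression, not the actual COMPAS system, whose internals are proprietary; the point is only that a model trained on records shaped by uneven policing reproduces that unevenness as unequal false positive rates, even when it is never told anyone’s demographic group.

```python
# Minimal synthetic sketch of a bias feedback loop -- NOT the real
# COMPAS model or data; every number here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)               # two demographic groups
prior_arrests = rng.poisson(1 + group, n)   # group 1 is policed more
                                            # heavily, so more arrests
reoffend = rng.random(n) < 0.25             # true rate identical in both

# The historical label is "was re-arrested", which mixes real behaviour
# with the biased policing signal the arrest counts already carry.
label = reoffend | (rng.random(n) < 0.05 * prior_arrests)

X = prior_arrests.reshape(-1, 1)            # group itself is NOT a feature
model = LogisticRegression().fit(X, label)
risk = model.predict_proba(X)[:, 1]         # learned "risk score"
flagged = risk > 0.35                       # a "high risk" cut-off

for g in (0, 1):
    innocent = (group == g) & ~reoffend     # people who did not reoffend
    print(f"group {g}: false positive rate = {flagged[innocent].mean():.1%}")
```

In this toy setup the model flags innocent people in the more heavily policed group at several times the rate of the other group, because the arrest history it learns from already encodes the disparity.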

Making better decisions

But we still let it make decisions, thinking it comes with no prejudices. Would you feel comfortable with AI deciding whether you can have a mortgage? At this year’s Money20/20, the largest finance tradeshow in the world, PayPal President and CEO Dan Schulman talked about algorithmic credit scoring, where payments and social media data coupled with machine learning will make lending decisions. The idea is noble: AI does not have the same prejudice and bias as a human. Except it does, because AI can only scan a database that has been made by humans. It doesn’t create what is in the database. It’s a supportive system, and academics have pointed out that this algorithmic “weblining” reproduces the same old credit inequalities. The systems learn from existing data sets, so they follow the shape of existing bias.
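
As a hypothetical illustration of why this happens even when the protected attribute is deleted, consider the sketch below: the model never sees the applicant’s group, but a correlated proxy (a stand-in for something like a postcode) lets it reconstruct the old lending pattern anyway. All names and numbers are invented for the example; this is not PayPal’s or anyone’s actual scoring system.

```python
# Hypothetical sketch of "weblining": the protected attribute is removed
# from the features, yet a correlated proxy lets the model relearn the
# old lending bias. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)                 # protected attribute
proxy = (group + (rng.random(n) < 0.1)) % 2   # e.g. postcode: agrees
                                              # with group 90% of the time
income = rng.normal(55, 10, n)                # same distribution for all

# Historical approvals were biased: group 0 got an 8-point head start
# at identical incomes.
approved = income + 8 * (group == 0) + rng.normal(0, 5, n) > 55

X = np.column_stack([income, proxy])          # group itself is excluded
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.1%}")
```

The model is never shown the protected attribute, but in this toy setup the proxy carries enough of it that the approval gap survives, which is exactly the old inequality reproduced in new clothes.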

Perhaps the cherry on the cake of our manmade AI problem is China’s “social credit” system. China plans to rank all its citizens based on their social credit by 2020, rewarding or punishing people according to their scores. If you drive badly or smoke in a non-smoking zone, you might find yourself barred from the best dating websites. If you behave yourself, you could get reduced energy bills. The scheme is already being piloted for millions of people across the country and will be mandatory when fully rolled out. Every Chinese citizen will have a searchable file of amalgamated data from public and private sources tracking their social credit. All this data will be scanned by AI, which could see Chinese society effectively run by AI. Let’s hope that model doesn’t sweep west.

If we know and understand that there is really no such thing as artificial intelligence, we know there is nothing to be afraid of. What we need to tackle is the human decision-makers, to make sure they make better decisions when programming AI and using it.

Oliver Wessling, founder, NOS Microsystems
Image Credit: Alex Knight / Unsplash