The concept of machines learning, thinking and acting as a human might has been the stuff of dystopian science fiction for at least 200 years. Prophecies abound of human subservience to superintelligent android overlords, while well-intended but ill-informed political oratory guides a debate that frequently runs decades ahead of itself.
“War to the death should be instantly proclaimed against them,” warned Samuel Butler in his 1863 polemic, Darwin among the Machines. “Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race.”
While we can laugh at his haughty pessimism today, in terms of message, little has changed in present discourse. Laser-eyed automatons grace red-topped newspapers with stark warnings of approaching servitude more often than not. But the reality is that most applications of machine learning are not just mundane but entirely invisible.
It’s less a case of ‘blink and you’ll miss it’ than a series of automated tasks aiding humans in everyday decision-making. Consider my industry, advertising. The decisions consumers make—from what to watch to where to eat—are aided by computers trained to learn from the data they exchange for services online. Few in adland will be surprised to learn that. Knowing what a consumer might want, when, and how, is perhaps the most useful weapon in the marketer’s armoury.
These invisible decisions go beyond advertising, though. Consider the last time you rode in an Uber. You stepped out of the car and money was exchanged on your behalf, priced by an algorithm that learned the length of your journey. A decision was made for your convenience. Likewise, Spotify learns what you’re interested in to curate playlists, while Chip from Barclays learns your spending habits and sets your money aside, without you noticing.
These invisible decisions transcend the digital and physical realms. For example, when you are half an hour away from your smart home, your heating might come on and adjust to your optimum temperature. Lights might flick on based on your proximity to home. Leave the lounge lights on at night, and the decision is made, by a machine, to switch them off to conserve energy.
Many of the banal decisions we make—the heavy lifting, at least—are, for a combination of convenience and efficiency, being quickly supplanted by machines that understand and learn from our behaviours. It’s no sci-fi dystopia; call off the frame-breakers for now. But it does underscore the fact that, as the world travels in this direction, this unconscious decision-making presents both problems and opportunities for brands.
For argument’s sake, let’s ponder Coca-Cola. As a multinational producing the world’s most widely consumed soda, the brand is on the cusp of invisible decision-making. Consider this: you, a Coke-drinking consumer with a smart fridge, allow your fridge to make the decision to re-order Coke when stocks deplete. But the algorithm learns from historical data, and given that the fridge has made the decision countless times before, it will keep making that decision until no teeth remain in your gums.
It’s like a brand filter bubble. You might be perfectly happy having your purchase decisions reinforced to the point that you only consume one brand of soda. If you like something, be prepared to like it a lot more.
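That self-reinforcing loop can be sketched in a few lines of toy Python. The `SmartFridge` class, the brand names and the re-order rule here are hypothetical illustrations of the idea, not any real smart-fridge system:

```python
# Toy sketch of a purchase-decision loop: a "smart fridge" that re-orders
# whichever brand dominates its purchase history. All names here are
# hypothetical illustrations, not a real appliance API.
from collections import Counter

class SmartFridge:
    def __init__(self, purchase_history):
        # Past purchases are the algorithm's only training signal.
        self.history = Counter(purchase_history)

    def reorder(self):
        # Choose the brand bought most often in the past...
        brand = self.history.most_common(1)[0][0]
        # ...then record that choice, reinforcing it for next time.
        self.history[brand] += 1
        return brand

fridge = SmartFridge(["Coke", "Coke", "Pepsi"])
print([fridge.reorder() for _ in range(5)])  # the majority brand wins every time
```

Because past behaviour is the only input, the minority brand can never win another order: the brand filter bubble in miniature.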
Good news for Coca-Cola, no doubt. But for brands that aren’t deeply ingrained in the hive mind of consumer consciousness, this of course presents a problem. If algorithms are making consumers’ purchase choices, reinforced by historical data, how do you sell your product?
This goes beyond cut-through or shelf space. Unseen decision-making can give brands a razor-sharp competitive edge, and should be utilised, no doubt. But to counter it, there are two avenues of approach to combat the machines.
The first is, naturally, to fight machine with machine; AI to AI, mano a mano. Purchasing habits produce reams of data. If someone’s fridge is purchasing litres of Coca-Cola on their behalf, and if that data is available for sale, competitor brands can use it to persuade otherwise. There’s reverse engineering in play, too: by figuring out how the AI in a consumer’s fridge learns, one could, in theory, guide it towards learning to favour your product.
The second is more human. As smart algorithms make (often better) choices for us, our relationship with originator brands is in danger of dying out. Brands that are human—that appeal to our innate human nature, our values and our desire to interact—will thrive in a world of detached decision-making. Advertisers need to ask: ‘How do we bypass the tech and get back to the person we want to convert?’ Advertising should go back to focusing on thinking and creativity. “The customer is not a moron. She’s your wife,” as the adman David Ogilvy quipped in a less enlightened era.
Treating people like people, rather than just another number on a screen, another metric, another conversion to meet KPIs, seems obvious. But most brands miss the mark. Brand choice, while often a chore, is also a reflection of individuals’ belief systems and values. Young people are already ‘escaping the algorithm’ in their social media choices; what’s to say the same won’t happen with brand decisions?
Convenience and efficiency are no doubt desirable, but even with detached decision-making, appealing to base humanity through hyper-targeting—understanding individuals holistically—will go a long way in setting a brand apart from the one the algorithm picked.
It is ironic that machine learning in its present form, by reinforcing our past behaviour, chips away at things innate to us as individuals. Self-determination, choice, trial and error make us human. Take those away, and what do we have left? A lot of Coca-Cola, perhaps.
Shorful Islam, Chief Data Scientist at Tribal Worldwide London