AI, privacy and data ethics

(Image credit: Enzozo / Shutterstock)

Technology, especially machine learning and the growth of Artificial Intelligence (AI) applications, is already making a tremendous difference to the modern marketer’s day-to-day role. Seen as a way to overcome the challenges posed by the constant flow of customer data and insights across a growing range of touchpoints, marketing automation is being deployed both to enhance the customer experience at the front of house and to analyse it behind the scenes. Siri, Alexa and Cortana are gradually embedding themselves in people’s lives as home-based digital personal assistants; in the US alone, an estimated one in five homes has an Amazon smart speaker within its walls.

This gives the providers of these devices a great opportunity, through a potentially unparalleled data source, yet it also poses a risk to brands, who may find themselves further removed from the customer. Consider ‘Hey personal assistant, order some ketchup’: should the assistant choose any ketchup, the one last bought by default, the highest rated, the most discounted, or the one whose maker paid a premium to come first?
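The ambiguity above is, at heart, a ranking-strategy choice. As a minimal sketch (the product names, fields and strategies here are hypothetical, not any real assistant’s logic), the same vague order can resolve to different products depending on which rule the platform applies:

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    rating: float      # average customer rating
    discount: float    # fractional discount, e.g. 0.3 = 30% off
    sponsored: bool    # has the brand paid for placement?
    last_bought: bool  # was this the customer's previous purchase?

# Hypothetical catalogue of ketchup options
catalogue = [
    Product("BrandA", 4.6, 0.0, False, True),
    Product("BrandB", 4.8, 0.1, False, False),
    Product("BrandC", 4.1, 0.3, True, False),
]

def pick(products, strategy):
    """Resolve an ambiguous order ('some ketchup') under a given strategy."""
    if strategy == "default":
        return next(p for p in products if p.last_bought)
    if strategy == "highest_rated":
        return max(products, key=lambda p: p.rating)
    if strategy == "most_discounted":
        return max(products, key=lambda p: p.discount)
    if strategy == "sponsored_first":
        return next(p for p in products if p.sponsored)
    raise ValueError(f"unknown strategy: {strategy}")

print(pick(catalogue, "default").name)          # BrandA
print(pick(catalogue, "highest_rated").name)    # BrandB
print(pick(catalogue, "sponsored_first").name)  # BrandC
```

Each strategy favours a different party – the customer, the top-rated brand, or the highest bidder – which is precisely why who sets the rule matters.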

Before launching these assistants, the businesses behind them had to compile and curate enough data to hone the artificial intelligence algorithms that power their platforms; much of the improvement, however, is expected to happen once they are live and learning. There has been talk that stricter consumer privacy legislation will hamper innovation and the deployment of AI in many applications. Others believe these new regulations will create a foundation on which AI applications become more trusted and reliable, set against a backdrop of increased focus on consumer privacy and tightened guidelines on data use.

Today, more than ever, “customer first” inevitably must also mean “data first”. In many ways, the updated legislation and the current AI revolution are born of the same trend: the need to understand, manage and protect the data now available. AI’s self-learning capabilities and its ability to make sense of huge data volumes certainly open up new business opportunities. Not only will these tools enable performance marketing to match people to the items they want – from trainers to electrical goods – more quickly and efficiently, but automating this process will also allow the humans in it, the marketers, to focus on providing more value to customers, in the same way a proven autopilot frees pilots from the mundane and lets them intervene where they can make the biggest difference. The inevitable flip side is the spectre of over-automation, which highlights the need for strong controls that safeguard consumers – both their personal data and their individual experiences with brands.

Riding the digital revolution 

All organisations that handle personal data need to be GDPR compliant, but the risks and exposure clearly vary significantly from business to business. The regulation obliges entities to adopt the best available cybersecurity measures and internal IT-hygiene procedures at every stage of their processes. In a nutshell, the human, ‘common sense’ element becomes more important, not obsolete.

Current privacy legislation also specifically gives individuals the right to contest automated processing of their personal data. However, the technology already exists for machines to make what we might understand as judgment calls on data – whether to delete it or move it – based on machine learning. The happy medium lies in technologies that keep the decision with a human while automating the execution. In these cases, the human isn’t automating the decision, they are automating the action: we decide the rules for how the data is managed and then let the technology implement them. We’re leveraging the technology, but at the end of the day it’s humans telling it what to do.
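The split described above – humans author the policy, machines execute it – can be sketched in a few lines. This is a simplified illustration, not a real retention system; the rules, field names and thresholds are assumptions for the example:

```python
from datetime import date, timedelta

# Human-defined policy: ordered (predicate, action) rules.
# The machine only executes these; it never invents policy of its own.
RULES = [
    (lambda r: not r["consented"], "delete"),                                # no consent: delete
    (lambda r: r["last_used"] < date.today() - timedelta(days=730), "archive"),  # stale: archive
    (lambda r: True, "keep"),                                                # otherwise: keep
]

def apply_rules(record):
    """Return the action for a record under the first matching rule."""
    for predicate, action in RULES:
        if predicate(record):
            return action

records = [
    {"id": 1, "consented": False, "last_used": date.today()},
    {"id": 2, "consented": True,  "last_used": date.today() - timedelta(days=800)},
    {"id": 3, "consented": True,  "last_used": date.today()},
]

for r in records:
    print(r["id"], apply_rules(r))
# prints: 1 delete / 2 archive / 3 keep
```

The decision (the rule list) stays human-authored and auditable; only the repetitive action is automated.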

Because of this human element, data ethics should be a central consideration for companies and individuals developing or deploying AI. Although ethical data practices require organisations to establish policies and processes, the purpose of a data ethics programme is simple: to build into the decision-making process a step that asks, “Is our data use legal, fair, proportionate and just?” Those who have considered the implications of their data use, and who make this question a foundation of their process when acting on AI-generated insights, should maintain a competitive edge while staying on the right side of compliance and consumer privacy. An ethical approach from the outset ensures the public trusts this technology, and the companies behind it.
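That “step in the decision-making process” amounts to a gate that a proposed data use must pass before proceeding. A minimal sketch, assuming a hypothetical sign-off where each of the four questions is answered explicitly:

```python
# The four questions from the data ethics programme described above.
QUESTIONS = ("legal", "fair", "proportionate", "just")

def ethics_gate(assessment):
    """assessment maps each question to True/False; all must hold to proceed."""
    failed = [q for q in QUESTIONS if not assessment.get(q, False)]
    return ("approved", []) if not failed else ("blocked", failed)

print(ethics_gate({"legal": True, "fair": True, "proportionate": True, "just": True}))
print(ethics_gate({"legal": True, "fair": False, "proportionate": True, "just": True}))
```

Note that an unanswered question counts as a failure – the gate defaults to blocking, so the question cannot simply be skipped.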

Despite riding the digital revolution for some time now, economically and technologically we are entering a new era. The opportunity that today’s landscape of enhanced privacy presents for AI is an intelligent engine that can look across data sources and the many ways information is processed, take some basic rules, and automate much of the work of tagging data and associating consent. Innovation will therefore not be hampered but directed, shaped by the backdrop of consumer privacy and robust data ethics. This focus on consumers and their data privacy will in turn give rise to new applications, technologies and businesses that deliver value to us as people while helping organisations achieve and maintain data compliance. AI can help fuel a brave new data-driven world, but it is the humans in it who must put this into practice in the right way, building the right relationships that last.
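The tagging-and-consent engine described above can be illustrated with a toy classifier. This is a deliberately crude sketch – the keyword list, schema and consent labels are hypothetical, and a real system would use far richer classification – but it shows the shape of the idea: scan a source’s fields, tag what looks personal, and attach the consent basis it was collected under:

```python
# Hypothetical keywords suggesting a field holds personal data
SENSITIVE_KEYWORDS = {"email", "phone", "address", "dob"}

def tag_field(field_name):
    """Crude rule-based tag: 'personal' if the name hints at personal data."""
    name = field_name.lower()
    return "personal" if any(k in name for k in SENSITIVE_KEYWORDS) else "general"

def tag_source(schema, consent_basis):
    """Attach a tag and a consent basis to every field in a source schema."""
    return {f: {"tag": tag_field(f), "consent": consent_basis} for f in schema}

crm_schema = ["customer_email", "phone_number", "order_total"]
tagged = tag_source(crm_schema, consent_basis="marketing-opt-in")
print(tagged["customer_email"])  # {'tag': 'personal', 'consent': 'marketing-opt-in'}
print(tagged["order_total"])     # {'tag': 'general', 'consent': 'marketing-opt-in'}
```

Once every field carries a tag and a consent basis, downstream processes – deletion requests, audits, access controls – can be automated against those labels rather than handled ad hoc.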

Jed Mole, Vice President of Marketing, Acxiom