Will GDPR hinder or harness the power of AI?


The coming into force of the General Data Protection Regulation is a sea change for digital technology companies, and indeed any business dealing online with customers in the European Union. Whereas the behind-the-scenes decision making and internet profiling and tracking done by software have almost universally been hidden from internet users, the new era of GDPR will inject radical transparency into what have been derided as black box algorithms. 

Yet the impact of the GDPR extends beyond requiring additional consent and explanations from marketing and advertising systems, or making data breach disclosures a top priority for enterprise IT. It seeks to establish a new relationship between user and system, one where transparency and a standard of privacy are non-negotiable. This will have major implications for the development and use of artificial intelligence, and has led some to warn that GDPR will prove a major setback for the creation and deployment of AI in the countries it covers. 

Given the primacy of AI for internet technology going forward -- AI, to an even greater extent than now, will be a part of all digital systems -- this discussion on GDPR and AI has not gotten the attention it deserves, as the GDPR headlines focus on user tracking, social media advertising and enterprise compliance efforts.

While it is true that GDPR’s application to the usage of AI is still under debate, an outline of how AI may be affected and how companies can successfully adapt AI to the new world of GDPR is becoming increasingly clear. In short, while GDPR does present some challenges around making AI decisions transparent, it also brings opportunities for firms to build AI that is future-proof and more human-accessible. Enterprises that are serious about making AI a major or even minor part of the way they do business will be able to make their AI systems compliant with GDPR without hobbling AI’s performance or possibility.

GDPR is a huge regulation, covering a range of online privacy measures empowering users. Amidst the various new rules, there is one section with major significance for AI. Article 22 of the GDPR states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Translated from legalese, this passage has been interpreted to require an avenue of appeal for decisions made with no human involvement, such as when a machine learning algorithm decides whether you are eligible for a loan, in order to prevent discrimination. 

A second prong of this is a perceived new GDPR power giving individuals the right to demand an explanation of how an AI system made a decision that affects them. There is discussion in legal circles of whether this “right to explanation” does in fact exist under GDPR, or to what extent it will apply. There’s a lot at stake here. Obviously, building explainability into AI decision-making models is no small task, involving devising methods of clearly establishing how input data is interpreted and making the logic traceable. 
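To make this concrete, consider a minimal sketch of an explainable decision model. This is an illustrative hand-written logistic scorer for the loan-eligibility scenario above, not any real lending system: the feature names, weights, and threshold are all assumptions, chosen to show how per-feature contributions can be recorded so that each automated decision carries its own explanation.

```python
import math

# Illustrative weights for a loan-eligibility score; a real model would
# learn these from data. All names and values here are hypothetical.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "years_employed": 0.5}
BIAS = -0.3

def score_with_explanation(applicant):
    # Each feature's contribution = weight * value; the sum feeds a sigmoid.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    # Returning the per-feature contributions alongside the decision makes
    # the logic traceable: an affected individual (or auditor) can see
    # exactly which inputs pushed the score up or down.
    return {
        "approved": probability >= 0.5,
        "probability": round(probability, 3),
        "contributions": contributions,
    }

result = score_with_explanation(
    {"income": 1.0, "debt_ratio": 0.4, "years_employed": 2.0}
)
```

The point of the sketch is the shape of the output, not the model: because every decision is returned with its contributing factors, the "explanation" exists at decision time rather than being reconstructed after a complaint or audit.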

Many will prefer not to confront this problem and instead complain about the vagueness of much of the GDPR, especially its application to AI, a vagueness some have argued is a feature, not a bug, of the new privacy protections. However, smart companies should take the regulations seriously and commit to ensuring that systems that could fall under GDPR, including AI, will be compliant. The threat of sizeable fines of €20 million or 4% of global turnover provides a sharp incentive, yet surveys show a large percentage of enterprise companies expect not to be compliant by the GDPR deadline, with many resigned to being hit with GDPR fines and assuming the brace for impact position. 

For businesses using AI, the uncertainty is compounded, and a definite answer to the question of a right to explanation will probably not be known until Article 22 is tested in court -- a process that will take years. But despite this uncertainty the writing is on the wall, and companies using AI decision systems and related analytics and profiling models should at the very least ensure they have a roadmap to achieve compliance, and are building into their systems clear data management frameworks and visibility into each stage and factor of their decision-making logic. 

These systems will also have to be brought in line with another GDPR principle related to AI-based decisions and data processing, namely obtaining explicit informed consent from individuals to use their data. Companies need a robust and compliant data management framework in which user consent and the data linked to that consent are tagged and associated, so that if consent is revoked or an external audit occurs, the consent chain can be clearly established -- and revoked consent and privacy audits will become very real events once GDPR takes effect. 
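The tagging-and-association idea above can be sketched in a few lines. This is a minimal in-memory consent ledger, assuming a toy store; the class and method names (ConsentLedger, record, tag, revoke, audit_trail) are hypothetical, and a production system would of course persist this to durable, access-controlled storage.

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Toy consent ledger: links stored data records to the consent
    that covers them, so revocation and audits can walk the chain."""

    def __init__(self):
        self.consents = {}   # consent_id -> consent metadata
        self.data_tags = {}  # data_record_id -> consent_id

    def record(self, consent_id, user_id, purpose):
        # Log who consented, for what purpose, and when.
        self.consents[consent_id] = {
            "user": user_id,
            "purpose": purpose,
            "granted_at": datetime.now(timezone.utc).isoformat(),
            "revoked": False,
        }

    def tag(self, data_record_id, consent_id):
        # Associate each stored data record with its covering consent.
        self.data_tags[data_record_id] = consent_id

    def revoke(self, consent_id):
        # On revocation, mark the consent and return every data record
        # that is no longer covered and must be purged or re-consented.
        self.consents[consent_id]["revoked"] = True
        return [d for d, c in self.data_tags.items() if c == consent_id]

    def audit_trail(self, data_record_id):
        # For an external audit: trace a data record back to its consent.
        consent_id = self.data_tags[data_record_id]
        return {"consent_id": consent_id, **self.consents[consent_id]}

ledger = ConsentLedger()
ledger.record("c1", "user42", "marketing")
ledger.tag("rec1", "c1")
ledger.tag("rec2", "c1")
affected = ledger.revoke("c1")  # every record tied to the pulled consent
```

The design choice worth noting is the direction of the links: data points to consent, not the other way around, so a single revocation query finds every affected record, which is exactly what a pulled-consent event or privacy audit demands.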

For AI technology specifically, GDPR provides a prime opportunity to position AI systems for the incoming age of data transparency, as high-profile data abuse scandals make new and wide-ranging digital privacy regimes inevitable in other countries. Indeed, many expect GDPR to serve as a model that will be copied. This means GDPR rules affecting AI should be seen as part of a greater transition away from the current black box decision making of AI applications to the forthcoming era of explainable AI, which prioritizes human understanding of AI models. Importantly for companies wary of the cost of redesigning existing AI and machine learning systems, researchers are finding that explainable AI does not hinder the performance of AI models, and also builds trust between humans and AI, which is key for its growth, acceptance and adoption. 

So GDPR will not be the end of AI. While companies may face a notable upfront cost to bring their systems in line with GDPR, the new privacy rules are not a fatal blow for AI technology and applications. Instead, firms should see the new GDPR rules as an opportunity to adapt their systems to the new age of data and user-privacy transparency, which will soon be the norm worldwide. The cost of implementing the required changes will only grow if delayed, owing to the ever-growing and increasingly complex stores of user data that AI interacts with, the ongoing major advancements in AI’s capability and applicability, and inevitable GDPR fines. The sooner companies advance to this next stage of privacy-compliant, regulated AI, the better. 

Roy Pereira, CEO and Founder of Zoom.ai 

Image Credit: Harakir / Pixabay