We need to set rules and limits around face data

(Image credit: Zapp2Photo / Shutterstock)

This week, the ACLU sued the Justice Department, the DEA, and the FBI to gain access to information on these agencies' facial recognition practices amid fears they are secretly tracking US citizens across the country. It's the latest in a string of concerns that powerful organisations are using facial recognition and emotion analytics in unethical ways.

In August, for example, we heard that the King's Cross area of London had rolled out facial recognition, attracting Black Mirror-esque comparisons.

And Amazon is arguably first in the firing line after news broke this summer that its Rekognition technology, part of the company's Web Services division, is harvesting facial data from unwitting participants with little transparency over what it's being used for. Amazon is reportedly working with several US public institutions to run data collection projects at scale in order to improve public safety and security. Collection points include public spaces such as airports and border control.

The problem with Amazon's practice is that it recognises, classifies, and tracks facial data on a continuous basis, without consent and without a defined number of subjects in each study or data collection project. It is harvesting data from participants who were neither recruited nor compensated, which is both unfair and unethical.
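To make concrete what "classifying" facial data means in practice, here is a minimal sketch of a call to Rekognition's publicly documented detect_faces API via boto3. The image file name and region are placeholder assumptions; with Attributes=['ALL'], the service returns per-face estimates such as emotions and age range for every face in an image:

```python
# Minimal sketch: what a single Rekognition detect_faces call returns.
# Assumptions: AWS credentials are configured, and 'crowd.jpg' is a
# placeholder image of a public space.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("crowd.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request the full attribute set, incl. emotions
    )

for face in response["FaceDetails"]:
    # Each detected face comes with emotion scores, an age estimate, etc.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top_emotion["Type"], face["AgeRange"])
```

The point is not the call itself but the output: every face in the frame yields a structured emotional profile, and none of the people pictured ever opted in.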

Amazon's facial-data projects are concerning not just because of the questionable ethics. These projects will result in a massive data set that will give the company a monopoly in the emotional AI race. As with all AI and ML technologies, whoever has the largest data set wins. Through Amazon's rumoured deal with police, the company will get unparalleled access to data sets that no one else in the US market can match. Hikvision already holds a similar position in the Chinese market and has been heavily criticised for that reason. Giving Amazon such a deal, paired with its current market power and access to cheap capital, will cement a monopolistic position. Without access to the vast data sets needed to improve this technology, other companies will simply not be competing on a level playing field.

What is the solution?

Amazon CEO Jeff Bezos admits facial recognition regulation "makes a lot of sense" as there is "potential for abuses of that kind of technology." But Amazon's solution -- for its public policy team to draft its own laws for legislators and regulators to adopt -- leaves a lot to be desired. To ensure responsible use of these technologies, regulation should instead be based on the following three principles:

1. When dealing with third-party partners -- companies that want access to the raw datasets but were not originally involved -- it is imperative to require the same ethical approvals as the original research, from local authorities, universities, and clients' internal ethics committees, so that all legislation and data protection rights are respected. Without this, third-party companies could harvest the data and use it to improve their own algorithms, as happened with Aleksandr Kogan, who received approval for his academic research but later gave Cambridge Analytica access to the datasets he had acquired, leading to a major data privacy scandal.

2. Facial data studies must be controlled experiments where participants are (a) formally recruited, (b) fully informed of the research objectives and the data being collected, and (c) financially compensated for their data.

3. Antitrust authorities should understand the danger of data monopolies in this field, where big tech players dominate because of their data sets and no other company or institution can realistically compete in building emotional AI algorithms.

So why not just ban it?

Given the potential for abuse, it's tempting to simply ban facial recognition and emotion analytics technologies altogether. But they do have their uses.

From a commercial perspective, facial data can help get past inherent biases and reveal how people really feel, helping businesses better cater to their customers. This is especially important in use cases where customers in certain demographics are not in a position to express themselves; for example, senior citizens trying to explain when and why tech products are difficult to understand or use. And the use of biomarkers can have a profound impact on society, for example in healthcare data analytics and research, where access to high-frequency data can enable timely public health responses.

The technology is still young and we are just scratching the surface of its potential, but we must ensure it is not abused -- with no exceptions; not even for Amazon.

Josipa Majic is founder and CEO, Tacit