When automating government functions, bake in ethics from the beginning

(Image credit: MNBB Studio / Shutterstock)

To serve its citizens well, a government needs to make the best decisions possible, and quality decision-making relies on quality information. For thousands of years, governments relied on human bureaucracies to collect, organise and process the information they need to function. But today, the volume of information that governments deal with has outstripped what unassisted human analysis can handle.

One solution presents itself: artificial intelligence (AI). AI systems can process more complex information at a faster rate, introducing scale and precision that augment human decision-making and give us a greater ability to solve acute problems. Practical applications are numerous, from fraud detection at financial institutions to inventory management at large retail organisations, and it is clear that computers are already helping businesses to make better decisions.

For UK government departments and public bodies facing shrinking budgets and demand for improved services, AI technology has huge potential. In fact, according to the Government AI Readiness Index 2019 compiled by Oxford Insights, the UK ranks second, behind only Singapore, in readiness to take advantage of this technology. We can expect more government functions to implement AI technologies to tap into the vast volumes of data available to them, to gain access to real-time information and to generate sophisticated insights for improved decision-making.

However, as we move into an era in which government functions rely more and more on machine-enabled decision making, we must confront the ethical questions raised by AI head on. Should machines be responsible for making decisions, and if so, can the technology be controlled to avoid unintended or adverse outcomes? In my opinion, aligning human values with machine-enabled decision making is only possible when a code of ethics is built in from the very start, and that depends entirely on data.

It all begins with good data

Speaking on the opening day of London Tech Week 2019, the Mayor of London Sadiq Khan warned that local and national government "must tread extremely carefully," given the many ethical issues that the advancement of AI raises. He has a point – after all, AI systems are only as good as the data we put into them.

Data bias is a very real concern that businesses and governments alike must confront proactively. Data sets can be skewed, and, if biases are present, algorithms can actually amplify them. There are many examples, with facial recognition being one of the most contentious: researchers found that leading facial recognition programmes are highly accurate for white men, but noticeably less so when analysing people of other genders and ethnicities.
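To illustrate, the kind of disparity described above can be surfaced with a very simple audit of model accuracy broken down by demographic group. The sketch below is a minimal, hypothetical example in Python; the record layout, group labels and numbers are assumptions for illustration, not a reference to any particular system.

```python
# Minimal sketch: audit a classifier's accuracy per demographic group.
# The (group, prediction, label) record layout is hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    """Return accuracy for each group from (group, prediction, label) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        if prediction == label:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy evaluation set with two groups of equal size.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 1),
]
print(accuracy_by_group(records))  # {'group_a': 0.75, 'group_b': 0.5} -- a gap worth investigating
```

A large gap between groups does not by itself prove the training data was biased, but it is exactly the kind of signal that should trigger a review of how that data was collected.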

Many tech companies are self-regulating in an attempt to establish guidelines for ethical AI practices, but the public and policymakers have their own ideas about how AI and humanity should converge. As a result, we have seen the establishment of a new Centre for Data Ethics and Innovation, which aims to strengthen the UK’s ability to lead the world in the safe and ethical use of data in an AI and data-driven economy.

However, the best way to prevent bias in AI systems is to apply ethical principles at the data collection phase. Often this means starting with a sample of data large enough to yield trustworthy insights and minimise subjectivity. A robust system capable of collecting and processing the richest and most complex sets of information, spanning both structured data and unstructured content such as text, is therefore necessary to generate the most accurate insights.

Additionally, data collection principles should be overseen by teams representing a rich blend of views, backgrounds, and characteristics (race, gender, etc.). To ensure perspectives span the fullest possible spectrum, it may also be prudent to include personnel from various departments, levels, and teams. In addition, government bodies should consider having an HR or ethics specialist working in tandem with data scientists to ensure that AI recommendations align with the government’s cultural values.

Of course, even a preventive approach like the one outlined above can never safeguard data entirely against bias. It is therefore critical that results are examined for signs of prejudice after the fact as well. Any noteworthy correlations among race, sexuality, age, gender, religion and similar factors should be investigated. If a bias is detected, mitigation strategies such as adjusting sample distributions can be implemented; Wimbledon, for example, tweaked its highlight algorithm to remove bias for or against certain players. Like data collection, however, these mitigation efforts should also be properly vetted with input from diverse perspectives.
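As a rough illustration of the "adjusting sample distributions" mitigation mentioned above, the sketch below upsamples under-represented groups until every group contributes the same number of records. It is a simplified, hypothetical example; the grouping key and record structure are assumptions, and any real mitigation would need to be vetted by the kind of diverse review teams described earlier.

```python
# Minimal sketch: rebalance a data set by upsampling smaller groups
# (with replacement) to the size of the largest group. Field names are illustrative.
import random

def rebalance_by_group(records, key, seed=0):
    """Return a copy of `records` in which every group identified by `key`
    is represented as often as the largest group."""
    rng = random.Random(seed)
    groups = {}
    for record in records:
        groups.setdefault(record[key], []).append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw extra samples (with replacement) to close the gap to the largest group.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

sample = [{"group": "a", "label": 1}] * 8 + [{"group": "b", "label": 0}] * 2
print(len(rebalance_by_group(sample, key="group")))  # 16 -- both groups now contribute 8 records
```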

Ensuring public safety

Questions have also been raised around the use of AI outside of traditional practices in government. Of course, government agencies and public bodies must adopt strategies that make security a top priority, but the providers of AI tools should also play an integral role in safeguarding personal data. Security must be a core feature ingrained in AI technology, not a last-minute add-on. Tech companies have a responsibility to build AI systems in a way that minimises their risk of malfunction or being hacked, which could jeopardise the privacy of the government departments that use them as well as that of citizens.

However, if industry pressure alone does not steer AI providers toward effective security, governments may intervene in the name of public safety. This is already happening in areas such as self-driving vehicles, where governments naturally have a purview as stakeholders in public safety. Looking ahead, it is likely that both law and self-regulation will be needed to create enforceable blanket policies for protecting the personal privacy of the public and what the Asilomar AI Principles refer to as “people’s real or perceived liberty”.

Looking ahead

Ethical alignment is especially important for government organisations, whose decisions have a long-term, and often profound, impact on the lives of citizens, the economy and the culture of a country. With the stakes so high, it is vital that government departments and public bodies start out with a clear goal that aligns with ethical values and routinely monitor AI practices and outcomes. This will be the most effective means of putting AI to its best possible use in government functions.

In many ways, AI has already begun to support the betterment of humanity. For instance, AI-driven solutions such as asset performance optimisation help deliver outcomes like fewer train accidents and less aircraft downtime by bringing predictive maintenance alerts to the attention of responsible personnel before serious issues occur. Aligning AI practices with human values thus becomes possible when goals for machine-enabled decision making are grounded in a code of ethics from the very start.

Zachary Jarvinen, head of product marketing, AI and Analytics, OpenText