Machine learning: the not-so-secret way of boosting the public sector

Machine learning is by no means a new phenomenon. It has been used in various forms for decades, but it is very much a technology of the present due to the massive increase in the data upon which it thrives. It has been widely adopted by businesses, cutting the time it takes to distil insight from large volumes of customer data and improving the value of that insight.

However, in the public sector the story is different. Despite being championed by some in government, machine learning has often been met with concern and confusion. This is not intended as a general criticism; in many cases it reflects the greater value that civil servants place on being ethical and fair than some commercial sectors do.

One fear is that, if the technology is used in place of humans, unfair judgements might go unnoticed or costly mistakes might creep into the process. Furthermore, because many decisions made by government can dramatically affect people’s lives and livelihoods, those decisions are often highly subjective and require discretionary judgement. There are also those still scarred by films such as I, Robot, but that’s a discussion for another time.

Fear of the unknown is human nature, so wariness of unfamiliar technology is common. But such fears are often unfounded, and an understanding of what the technology actually does is an essential first step in overcoming them. For digital transformation to succeed, not only do the civil servants considering these technologies need to become comfortable with their use, but the general public also need reassurance that the technology is there to assist, not replace, the human decisions affecting their future health and well-being.

Human assistants, not human alternatives

There’s a strong case to be made for greater adoption of machine learning across a diverse range of activities. The basic premise of machine learning is that a computer can derive a formula from looking at lots of historical data that enables the prediction of certain things the data describes. This formula is often termed an algorithm or a model. We use this algorithm with new data to make decisions for a specific task, or we use the additional insight that the algorithm provides to enrich our understanding and drive better decisions.
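To make that concrete, here is a minimal sketch of the idea, assuming Python with pandas and scikit-learn; the data, column names and figures are invented purely for illustration.

```python
# A minimal sketch of deriving a "formula" (model) from historical data.
# Assumes pandas and scikit-learn; data and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical records: each row describes a past case and its known outcome.
history = pd.DataFrame({
    "age":                [34, 51, 67, 45, 72, 29, 58, 63],
    "prior_visits":       [1,  4,  6,  2,  8,  0,  5,  7],
    "successful_outcome": [1,  1,  0,  1,  0,  1,  0,  0],
})
X, y = history[["age", "prior_visits"]], history["successful_outcome"]

# "Training" derives the formula (the algorithm, or model) from the history.
model = LogisticRegression().fit(X, y)

# The same formula is then applied to a new case to inform, not replace, a decision.
new_case = pd.DataFrame({"age": [48], "prior_visits": [3]})
print("predicted chance of a successful outcome:", model.predict_proba(new_case)[0, 1])
```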

For example, machine learning can analyse patients’ interactions with the healthcare system and highlight which combinations of therapies, in which sequence, offer the highest success rates, and how this regime differs across age ranges. When combined with decisioning logic that incorporates resources (availability, effectiveness, budget and so on), it becomes possible to model how scarce resources could be deployed with maximum efficiency to deliver the best tailored regime for each patient.
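As a rough illustration of that decisioning layer, the sketch below (plain Python, with entirely hypothetical scores, costs and budget) ranks candidate regimes by predicted benefit per unit of cost and funds them until the budget runs out; a real system would use far richer constraints.

```python
# A toy sketch of decisioning logic layered on model output: allocate a scarce
# budget to the therapy regimes with the highest predicted benefit per cost.
# All figures and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Option:
    patient_id: str
    regime: str
    predicted_success: float  # score produced by a trained model
    cost: float               # resource cost of delivering the regime

def allocate(options, budget):
    """Greedy allocation: fund the best predicted-benefit-per-cost options first."""
    chosen, spent = [], 0.0
    for opt in sorted(options, key=lambda o: o.predicted_success / o.cost, reverse=True):
        if spent + opt.cost <= budget:
            chosen.append(opt)
            spent += opt.cost
    return chosen

options = [
    Option("p1", "therapy A then B", 0.81, 1200.0),
    Option("p2", "therapy B only",   0.64,  400.0),
    Option("p3", "therapy C then A", 0.72,  900.0),
]
for opt in allocate(options, budget=1500.0):
    print(opt.patient_id, opt.regime)
```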

When we then automate some of this, machine learning can even identify areas for improvement in real time and far faster than humans – and it can do so without bias, ulterior motives or fatigue-driven error. So, rather than being a threat, it should perhaps be viewed as a reinforcement for human effort in creating fairer and more consistent service delivery.

Machine learning is an iterative process: as the machine is exposed to new data and information, it adapts through a continuous feedback loop, which in turn drives continuous improvement. As a result, it produces more reliable results over time and ever more finely tuned decision-making. Ultimately, it’s a tool for driving better outcomes.
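As a rough sketch of that feedback loop, the snippet below (assuming scikit-learn and synthetic data; nothing here comes from a real system) updates an incremental model with each new batch of evidence rather than retraining from scratch.

```python
# A toy sketch of the continuous feedback loop: the model is updated with each
# new batch of (synthetic) data instead of being rebuilt from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
rng = np.random.default_rng(0)

for month in range(6):
    # Each month brings fresh observations and their eventual outcomes.
    X_new = rng.normal(size=(200, 3))
    y_new = (X_new[:, 0] + 0.5 * X_new[:, 1] > 0).astype(int)

    # partial_fit folds the new evidence into the existing model.
    model.partial_fit(X_new, y_new, classes=[0, 1])
    print(f"month {month}: accuracy on this batch = {model.score(X_new, y_new):.2f}")
```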

The true value of AI

The opportunities for AI to enhance service delivery are many. A further example in healthcare is computer vision (another branch of AI), which is being used in cancer screening and diagnosis. We’re already at the stage where AI, trained on huge libraries of images of cancerous growths, is better at detecting cancer than human radiologists. There are numerous applications of this kind, such as the work being done at Amsterdam UMC to increase the speed and accuracy of tumour evaluations.

But let’s not get this picture wrong. Here, the true value is in giving the clinician more accurate insight, or a second opinion, that informs their diagnosis and, ultimately, the patient’s final decision regarding treatment. The machine is there to do the legwork, but the decision to start a programme of cancer treatment remains with humans.

Acting with this enhanced insight enables doctors to become more efficient as well as more effective. By combining the results of CT scans with advanced genomics, analytics can assess how patients will respond to certain treatments. This means clinicians can avoid the stress, side effects and cost of putting patients through procedures with limited efficacy, while reducing waiting times for those patients whose condition would respond well. Yet full-scale automation could run the risk of creating a lot more VOMIT.

VOMIT: What are the risks?

Victims Of Modern Imaging Technology (VOMIT) is a recent phenomenon in which a condition such as a malignant tumour is detected by imaging and, at first glance, it would seem wise to remove it. However, the procedure to remove it may carry a morbidity risk greater than the risk the tumour presents over the patient’s likely lifespan. Here, ignorance could be bliss for the patient, and doctors would examine the patient holistically, weighing mental health, emotional state, family support and many other factors that remain well beyond the grasp of AI to assimilate into an ethical decision.

All decisions like these have a direct impact on people’s health and wellbeing. With cancer, the faster and more accurate these decisions are, the better.  However, whenever cost and effectiveness are combined there is an imperative for ethical judgement rather than financial arithmetic.

Unlocking the potential of AI

Healthcare is a rich seam for AI, but its application is far wider. For instance, machine learning could also support policymakers in planning housebuilding and social housing allocation initiatives, both reducing the time taken to reach a decision and making it more robust. Using AI in infrastructure departments could allow road surface inspections to be continuously updated via cheap sensors or cameras in council vehicles (or crowdsourced in some way). The AI could not only optimise repair work (human or robot) but also identify causes, and then determine where strengthened roadways would cost less over their whole life than regular repairs, or where a different road layout would reduce wear.

In the US, government researchers are already using machine learning to help officials make quick and informed policy decisions on housing. Using analytics, they assess the impact of housing programmes on millions of lower-income citizens, drilling down into factors such as quality of life, education, health and employment. This instantly generates insightful, accessible reports for the government officials making the decisions, who can then enact policy changes sooner for the benefit of residents.

Balancing ethics with efficiency

While some of the fears about AI are fanciful, there is genuine cause for concern about the ethical deployment of such technology. In our healthcare example, allocating resources based on gender, sexuality, race or income wouldn’t be appropriate unless these factors specifically affected the prescribed treatment or its potential side-effects. This is self-evident to a human, but a machine would need it to be explicitly defined. Left to itself, a machine would likely display bias towards those groups whose historical data showed better outcomes, thus perpetuating any inequality present in the training data.
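One simple way to surface that kind of skew is to compare a model’s decision rates across groups. The sketch below assumes pandas and uses invented figures purely to show the shape of such a check.

```python
# A minimal fairness check: compare approval rates across groups in a set of
# (hypothetical) model-driven decisions. Large gaps warrant human scrutiny.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("largest gap between groups:", rates.max() - rates.min())
```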

The recent review by the Committee on Standards in Public Life into AI and its ethical use by government and other public bodies concluded that there are “serious deficiencies” in regulation relating to the issue, although it stopped short of recommending the establishment of a new regulator.

The review was chaired by crossbench peer Lord Jonathan Evans, who commented:

“Explaining AI decisions will be the key to accountability – but many have warned of the prevalence of ‘Black Box’ AI. However our review found that explainable AI is a realistic and attainable goal for the public sector, so long as government and private companies prioritise public standards when designing and building AI systems.”

Fears of machine learning replacing all human decision-making need to be debunked as myth: that is not the purpose of the technology. Instead, it should be used to augment human decision-making, unburdening people from the time-consuming job of managing and analysing huge volumes of data. Once its role is made clear to all those with responsibility for implementing it, machine learning can be applied across the public sector, contributing to life-changing decisions in the process.

Find out more on the use of AI and machine learning in government.

Simon Dennis, Director of AI & Analytics Innovation, SAS UK
