
A spotlight on the EU’s AI legislation – Realizing the full potential of AI


The EU’s proposed AI legislation, published in April, sparked debate about the true impact the new AI rules would have on businesses. The prevailing view is that the legislation has the potential to benefit society as a whole, but could ultimately constrain companies and how they use AI in the long term. 

However, O’Reilly’s recent ‘AI in the Enterprise’ research found that, while more businesses are continuing to use AI or are considering implementing it in the near future, only 52 percent of these companies are checking their AI systems for issues of fairness or bias. 

One of the major roadblocks to AI’s advancement has been a lack of trust in the technology. This is especially true in the public sector, where AI-assisted choices may have a significant influence on people’s lives. The EU’s AI legislation aims to correct this by assisting organizations in navigating ethical AI usage. This will help to establish trust over time, allowing businesses to ultimately realize AI’s full potential. Businesses must now change how they use and implement AI to ensure that they always fall on the right side of the line.

Struggling to keep up  

The AI train has been rapidly gaining momentum in recent years, both in terms of business usage and results. The technology is now being used for cancer detection, climate change analysis, traffic control and business marketing. Globally, a quarter (26 percent) of businesses have reached the ‘mature’ stage of AI usage, meaning they have revenue-yielding AI products in production. In the UK, this figure is even higher, with 36 percent classifying their AI usage as ‘mature’. 

Looking at the industry breakdown, retail came out on top, with 40 percent claiming that their usage of AI was mature. This was closely followed by financial services (38 percent) and telecommunications (37 percent). Comparatively, education (10 percent) and government (16 percent) were the least mature in their usage of AI. 

The stats suggest that, while AI adoption in the private sector is snowballing, the public sector is struggling to keep up. The question is: why?

Releasing the handbrake  

There is likely more than one factor as to why the public sector is struggling in its uptake of AI. Budgetary concerns could certainly be a key issue, but perhaps not enough to account for such a large difference between the public and private sector. The other glaring issue is public trust. 

The general public already had their guard up against the use of AI in the public sector. Their worst fears were then seemingly confirmed in 2020, when A-Level and GCSE grades were predicted using an AI algorithm that faced accusations of bias. This led to the results being scrapped and replaced by predicted grades given by teachers. It’s examples like these that damage public trust in AI.

In terms of checking AI models for bias, the UK is ahead of the global standard. Across the globe, just 52 percent of companies are checking their algorithms for bias; in the UK, this figure rises to 56 percent. However, when it comes to decisions that impact people’s lives and their futures, a little better than half isn’t enough. This applies to both the public and the private sector: private sector companies, such as banks, also have the power to make decisions that can impact people’s lives. 

The EU’s AI legislation, which focuses heavily on AI ethics, should force companies to confront these shortcomings and be the starting point for organizations to build public trust and, in time, release the handbrake which is holding AI back. A more educated approach to AI will be key to achieving this. 

Focus on education and training  

It’s clear that not enough businesses are checking for bias in their AI models. However, research suggests that this isn’t necessarily negligence but, instead, a lack of training and skills. Globally, the biggest bottlenecks to AI adoption are a lack of skilled people (19 percent) and data quality (18 percent). In the UK, a quarter (25 percent) labeled a lack of data/data quality as a major hindrance and 14 percent said the same about skills within the organization. 

This skills gap is already having a huge impact on the adoption of AI and, with the introduction of the EU’s AI legislation, will have an even greater impact if businesses do not act soon. Half of UK businesses admitted that only about 50 percent of their AI projects are actually completed. Meanwhile, as we’ve seen, those that are completed run the risk of being biased. Moving forward, neither of these outcomes will be profitable for companies. 

To close this skills gap, businesses must ensure that they are providing adequate training for their AI-handling employees. This means equipping them with the necessary knowledge to develop and train an algorithm that is highly functional and ethical. Feeding the algorithm with high-quality and unbiased data is the first step, but employees must also be trained to consistently check the algorithm for bias or inconsistencies and make the necessary changes.
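In practice, a routine bias check can start with something as simple as comparing a model’s positive-decision rates across demographic groups. The sketch below is a minimal, hypothetical illustration (the function name and the loan-approval data are invented for this example, not drawn from any particular framework) of one common fairness measure, the demographic parity gap:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two groups (0.0 means perfectly even rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions for two applicant groups:
# group A is approved 4 times out of 5, group B only once.
preds = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A"] * 5 + ["B"] * 5
gap = demographic_parity_gap(preds, groups)  # 0.8 vs 0.2, a gap of 0.6
```

A large gap does not prove the model is unfair on its own, but it is exactly the kind of signal that trained employees should be checking for regularly and investigating when it appears.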

With the introduction of the new AI laws, some employees may be nervous about making mistakes. Businesses can take this fear away by empowering their employees to learn in the flow of work. This means allowing them to ask questions and receive quick answers, based on the most up-to-date guidance, which they can apply to their work. The learning platforms to enable this exist, and it’s now time for employers to start leaning on them; otherwise, they could be among the first organizations to feel the sting of the new AI legislation. 

Businesses and organizations may be tempted to interpret the new AI regulation as a restriction on their technological ambitions. Instead, it should be viewed as guidance that will help them make the most of AI. Companies can roll out AI initiatives without fear of public backlash if they stay inside the confines of the new legislation, which will in turn enable them to test new AI technologies with greater confidence in the long term. However, to build this trust, businesses need to keep getting it right when it comes to AI. This means no more instances of AI bias or technologies that push the boundaries of privacy. Regular education and training is the only way to achieve this level of continued excellence.

Rachel Roumeliotis, Vice President of Data and AI, O’Reilly

Rachel Roumeliotis is a vice president of content strategy at O'Reilly Media, covering a wide variety of programming topics, ranging from data and AI, to open source in the enterprise, to emerging programming languages.