AI is a Wild West - and proactive governance is needed


For some time, there has been an acute need for a legal framework to govern artificial intelligence (AI), largely because of the longstanding regulatory and ethical concerns that have surrounded the technology since its inception. I am a firm believer that we need to govern AI properly to prevent issues such as unethical bias, the undermining of legal and regulatory norms, and blurred lines of organizational accountability.

These problems can seriously harm users, businesses, and citizens, yet they would be largely avoidable if proper governance for AI were in place. So, earlier this year, when the EU Commission put forward the idea of a world-first legal framework for AI, it marked real progress. However, the biggest step forward to date came in May, when the EU shared its proposal on what the rules for responsible AI should look like - and it made for very interesting reading.

AI is front and center in our society today, and while the technology's growing use across areas such as healthcare, lending, fraud detection, and hiring is positive, we must be mindful of the increased level of risk that consumers currently face. We need to resolve how AI is governed to ensure that both businesses and the general public are fully protected from sub-optimal AI. The industry and its investors also need confidence in the future of this technology if it is to succeed at scale, and proper governance through an ethical framework is the only way to achieve such assurance.

It’s for this reason that organizations across the globe, including both the Linux Foundation’s AI & Data group and the World Economic Forum’s Global AI Action Alliance, are uniting with the EU Commission to work out what a world in which AI is properly governed should look like. These organizations agree on several key ethical and legal issues, which will undoubtedly prove pivotal in establishing a governing framework - most notably that AI’s decisions must remain transparent, explainable, reproducible, and, crucially, private at all times.

However, the major question is less about the defining factors of the governance itself than about how an AI framework will actually be enforced and implemented. One might reasonably assume that the law will ensure regulations are enforced, but designers and technologists should not rely on that being the case. In fact, it may be more rational for industry to fast-track this process rather than rely solely on enforcement through law.

We must remember that AI today is something of a Wild West, riddled with risks that can hurt consumers and businesses alike. We cannot afford for that to continue, and we must take it upon ourselves to govern AI in the meantime, while a fully developed legal framework is ratified - which could be years away. Ideally, self-regulation will complement the law, and in years to come it will, but right now industry must be proactive in governing the ethical issues that surround AI in order to build the necessary confidence in the technology among the public, investors, and the sector itself. This is achievable; we just need to ensure that organizations working with AI have a definitive standard to follow.

Self-regulation hasn’t always been successful in the past, though, and the failings of tech multinationals such as Facebook and Google show how not to police yourself. It’s for this reason that we need an unbiased third party to step forward and govern AI in the open, ensuring transparency at all times. A system of certification would undoubtedly be the way forward: it could set out industry standards against which organizations are tested as a means of certifying their use of AI. Here, artificial intelligence can take a leaf out of another industry’s book; both the LEED and BREEAM green building and sustainability certification systems are used across the globe to hold building and construction firms to account. So why not adopt a similar certification model for artificial intelligence?

Leveling the playing field

An independent certification would level the playing field for AI and stimulate best practice in organizations worldwide. Straightforward tiers of certification would reflect how well an AI system meets the ethical and regulatory standards set out by the third party, and would be easy enough to issue. For example, a gold-standard certification would be reserved for an organization that had been particularly proactive and thorough in its approach to AI, whereas a firm operating at more of a baseline level would receive a lower-level certification.

As certification for AI became commonplace, we would begin to see its knock-on benefits. Protection of the public from subpar AI systems would improve markedly, while exposure to liability would be reduced or removed for developers and compliance officers. The legal and ethical uncertainty that currently surrounds AI would largely disappear, meaning new artificial intelligence systems could be developed and deployed at scale and with confidence, thanks to renewed trust in the technology from consumers and industry alike. Only an independent certification system can enable this and support AI’s long-term growth.

As an adviser to the forward-thinking team at the Responsible AI Institute, I am delighted to be part of the journey towards an industry-first AI certification system. We want the artificial intelligence sector to truly thrive, and by pioneering the industry’s first self-certification system, we believe we can help the technology make a real difference to society. The OECD has set out five principles for responsible AI, and it will be important to draw on these in our work as we establish the gold standard and best practice for the responsible use of artificial intelligence. At the Responsible AI Institute, the team is turning theory into practice, allowing AI to flourish and society as a whole to benefit.

Mark Rolston, Founder & Chief Creative Officer, argodesign