
Getting serious about AI foundations

(Image credit: Geralt / Pixabay)

AI is rapidly making its mark in the workplace. Gartner predicts that by 2024, AI technologies will influence nearly 69 per cent of managers' workloads. It might seem like a high percentage, but the truth is that businesses and governments are increasingly looking to AI for automated, smart decision making.

AI is, by its very nature, one of the most disruptive technologies of the digital age. Having a strong ethical framework in place to underpin its use is imperative. It is therefore essential that we have the infrastructure to map out our AI supply chain. We need to be able to see the connections and context between our data. We need to know how it is harvested and processed. We also need to be clear on any assumptions and biases that could be codified or amplified. As with any new technology, AI can mirror the biases of those who created it.

Smart decision making and accountability

As the use of AI accelerates, it will become far more difficult to assign responsibility for decisions. If errors are made which result in prejudice or trauma, for example, how will we know, and who will be held accountable?

The link between AI intent and outcomes needs to be audited. We can begin by aligning our success metrics for AI to the expected outcomes, and by answering questions that allow us to pinpoint where an AI system went wrong. AI is a very powerful technology, but with this power comes responsibility.

Towards AI transparency

If a seemingly impenetrable AI system has been used to make significant decisions, it may be extremely difficult to unravel the causes behind a specific course of action.

In supply chains, tracking is critical to understanding. We should work to use interpretable models whenever possible to provide clear insight into the decision process.

Erasing bias in machine learning (ML)

One of the real risks with AI is in intensifying and reinforcing existing human biases in decision making systems. Some of these biases can be due to a lack of diverse perspectives when training the system, for example. Decision making can also be slanted by reliance on incomplete data or the use of historical data that does not match the values of society today.

It is paramount, therefore, that AI develops in a non-discriminatory way. To mitigate these dangers, everyone involved in our AI supply chain needs to fully understand the data that is being used for training and testing. We must be able to answer questions on how, and by whom, data was acquired. We must also know that the data is representative of how the AI model will be applied.
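One way to make the representativeness question concrete is to compare category proportions in the training data against those expected in deployment. The sketch below is illustrative only; the category names and the 10 per cent threshold are assumptions, not part of any specific product or standard.

```python
from collections import Counter

def proportions(labels):
    """Fraction of each category in a list of labels."""
    counts = Counter(labels)
    total = len(labels)
    return {k: v / total for k, v in counts.items()}

def representativeness_gaps(train_labels, deployment_labels, threshold=0.10):
    """Flag categories whose share differs between training and
    deployment by more than `threshold` (an illustrative cut-off)."""
    train_p = proportions(train_labels)
    deploy_p = proportions(deployment_labels)
    gaps = {}
    for category in set(train_p) | set(deploy_p):
        diff = abs(train_p.get(category, 0.0) - deploy_p.get(category, 0.0))
        if diff > threshold:
            gaps[category] = round(diff, 2)
    return gaps

# Hypothetical example: training data skews heavily to one group,
# while the deployed system will see a balanced population.
train = ["group_a"] * 80 + ["group_b"] * 20
deploy = ["group_a"] * 50 + ["group_b"] * 50
gaps = representativeness_gaps(train, deploy)  # flags a 0.3 gap for each group
```

A check like this will not catch every bias, but it turns "is the data representative?" from an opinion into a measurable question.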

Data lineage and protection against the manipulation of data is fundamental in creating trustworthy AI. Data lineage provides a complete audit trail of data, which is essential for compliance and data regulations.

Data lineage plays a vital role in understanding data, which makes it a core principle of AI. This doesn’t just refer to tracking data, but also the lineage of changes. For example, how is data cleansed and what exactly has been added and removed from this data, if anything.
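To illustrate what a lineage of changes might record, here is a minimal sketch in Python. The record structure, field names and example dataset are all hypothetical, not a real system's API; the point is that every cleansing step, with its additions and removals, is logged in order.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LineageStep:
    """One recorded transformation in a dataset's history."""
    operation: str   # e.g. "cleanse", "augment"
    added: int = 0   # rows added by this step
    removed: int = 0 # rows removed by this step
    note: str = ""

@dataclass
class DatasetLineage:
    """A complete, ordered audit trail for one dataset."""
    source: str
    steps: List[LineageStep] = field(default_factory=list)

    def record(self, operation, added=0, removed=0, note=""):
        self.steps.append(LineageStep(operation, added, removed, note))

    def audit_trail(self):
        """Human-readable summary of every change, in order."""
        return [f"{s.operation}: +{s.added}/-{s.removed} rows ({s.note})"
                for s in self.steps]

# Hypothetical history for one training dataset.
lineage = DatasetLineage(source="customer_survey_2019.csv")
lineage.record("cleanse", removed=120, note="dropped rows with missing age")
lineage.record("augment", added=300, note="merged third-party demographics")
```

Calling `lineage.audit_trail()` then answers exactly the questions above: what was cleansed, and what was added or removed.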

With AI increasingly independently making decisions, it is imperative you understand your data lineage and its complete journey.

Gaining momentum

In 2019, we suddenly saw public, private and government interest grow in guidelines for AI systems that have major repercussions on both cultural values and society as a whole. 

Last year the European Union (EU) published ethics guidelines for trustworthy AI. It says that AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness and accountability. The principles encompass seven key requirements any AI system should meet in order to be deemed “trustworthy”.

The high-level EU expert group on AI advised that AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. It also advised that AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Industry watchers believe that the guidelines are both timely and informed, but that there is still a long way to go in creating overarching guidelines.

In 2020, however, be prepared to see momentum pick up, with new and updated responsible AI checklists scheduled for publication. In addition, organisations are beginning to look to independent risk assessors for support, and to track the human elements in their AI supply chains, such as the diversity (for example, in race and gender) of the teams working on AI projects, which may influence system outcomes.

In addition, AI performs better, and is more easily tracked, when context is added to the mix. To this end, AI needs to be furnished with related data to draw on when solving problems. This will ultimately allow AI to handle more complex decision making.

Fostering trustworthiness in AI

Putting algorithms to one side, understanding exactly how, and why, data was used to train our AI model is crucial to validating its classifications and predictions.

Data lineage traces data's journey from its origin to its current location, noting every move, such as how it was harvested and processed. It is critical to regulatory compliance, and it is very straightforward to encode in a graph representation.

Graphs have already proved their power in managing supply chains, where they co-ordinate, track and reveal patterns in complex interdependencies, making those interdependencies easy to read.
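The graph encoding itself can be very simple. In the sketch below (a hedged illustration in plain Python, with made-up dataset names; a production system would more likely use a graph database), each edge records that one dataset was derived from another, and walking the graph backwards recovers every origin a model ultimately depends on.

```python
# Hypothetical lineage graph: each edge reads "dataset -> derived dataset".
lineage_edges = {
    "raw_survey": ["cleaned_survey"],
    "cleaned_survey": ["training_set"],
    "third_party_demographics": ["training_set"],
    "training_set": ["fraud_model_v1"],
}

def upstream_sources(target, edges):
    """Walk the graph backwards to find every dataset feeding `target`."""
    # Build the reverse adjacency list: derived dataset -> its parents.
    reverse = {}
    for src, dsts in edges.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    # Depth-first traversal over the reversed edges.
    found, stack = set(), [target]
    while stack:
        node = stack.pop()
        for parent in reverse.get(node, []):
            if parent not in found:
                found.add(parent)
                stack.append(parent)
    return found

sources = upstream_sources("fraud_model_v1", lineage_edges)
# sources now contains the training set, the cleaned survey, the raw
# survey and the third-party feed: the model's full provenance.
```

This is the same pattern supply chain tracking uses: the question "where did this come from?" becomes a traversal, and the answer is a complete, auditable trail.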

Today you can purchase an eco-friendly t-shirt that claims to be ethically sourced, or a bar of fair-trade chocolate. This is essentially enabled by effective supply chain management. But we have no idea whether an AI system has been trained on ethically sourced data, or whether the data it is being fed isn't somehow biased. This is unsatisfactory, and will eventually create major issues between companies and their consumers, especially if the AI proves to be wrong or biased.

A commitment to ethical AI is only valuable if it is actually implemented correctly. We predict that in 2020, AI and ethics will become a major issue. Ethical AI will need to be built into the product design, development and release framework. Questions will need to be answered about data collection, transparency and values. AI will need to transparently track all elements of the supply chain and incorporate the right context for smart, accurate decision making.

AI has to be fair, accountable and easy to understand. The sooner we make this possible the sooner we can tap into the real power of this exciting new technology.

Amy Hodler, Director, Analytics and AI Program, Neo4j