IBM launches AI bias tool


In an effort to fight bias in artificial intelligence, IBM is launching a new tool capable of analysing how and why algorithms make the decisions they do in real time.

The company's Trust and Transparency capabilities will also scan for signs of bias in AI software and recommend adjustments to address it.

The software service runs on the IBM Cloud and will help organisations manage AI systems built with a number of industry frameworks, including the firm's own Watson, TensorFlow, SparkML, AWS SageMaker and AzureML. IBM will also work with businesses to help them get the most out of the new software.

Additionally, IBM Research will release an AI bias detection and mitigation toolkit, the AI Fairness 360 Kit, to the open source community, providing the tools and education needed to encourage global collaboration on addressing bias in AI.
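The open-source toolkit is published as the aif360 Python package. Below is a minimal sketch of the kind of workflow it supports: measuring group disparity in a dataset and then applying a preprocessing mitigation. The toy DataFrame, column names and group definitions are illustrative, not taken from IBM's documentation.

```python
# Bias detection and mitigation sketch using the open-source AI Fairness 360
# (aif360) package. The data here is a made-up example.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'approved' is the favourable outcome we check for bias.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [50, 60, 55, 40, 45, 30, 35, 50],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Detection: measure how far favourable outcomes diverge between the groups.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# Mitigation: reweigh training examples to even out the two groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
repaired = rw.fit_transform(dataset)
repaired_metric = BinaryLabelDatasetMetric(
    repaired, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("After reweighing:", repaired_metric.statistical_parity_difference())
```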

Beth Smith, General Manager of Watson AI at IBM, explained in a blog post why the company chose to release an AI bias detection tool:

"IBM led the industry in establishing Trust and Transparency principles for the development of new AI technologies. It's time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making." 

The firm's AI bias tool not only explains decision-making and detects bias in AI models at runtime, but also automatically recommends data to add to the model to help mitigate any bias it has detected. The explanations are provided in easy-to-understand terms and show which factors led to one decision or another.
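To make the idea of a runtime check concrete, here is a conceptual sketch, not IBM's actual service API, of the sort of metric such a tool monitors: the ratio of favourable-prediction rates between groups for a deployed model's output. The threshold, group labels and prediction data are illustrative.

```python
# Conceptual runtime bias check on a deployed model's predictions.
import numpy as np

def disparate_impact(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favourable-prediction rates: unprivileged / privileged.

    A value near 1.0 suggests parity; values well below ~0.8 are a common
    red flag for adverse impact.
    """
    rate_unpriv = predictions[group == 0].mean()
    rate_priv = predictions[group == 1].mean()
    return rate_unpriv / rate_priv

# Predictions scored by a live model (1 = favourable outcome) and the
# protected attribute for the same requests (1 = privileged group).
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])

ratio = disparate_impact(preds, group)
print(f"Disparate impact at runtime: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias detected: consider adding training data "
          "for the under-served group.")
```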

With the announcement of its new software service, IBM is giving companies and the tech industry as a whole the chance to address the issue of AI bias before it leads to any more controversies that could tarnish their reputations and put off customers.

Image Credit: Ricochet64 / Shutterstock