IBM Debuts Tools to Make AI More Fair

Mitch Wagner
9/19/2018

Bias and a lack of transparency can be big problems for artificial intelligence.

It's difficult for humans to figure out whether AI is making decisions for unfair reasons. AI sometimes incorporates racial and gender biases inherited from flawed data sets. And AI isn't good at explaining why it made a decision, which is a big deal if you're rejecting someone for a job or a loan, or deciding whether they should go to prison.

IBM Corp. (NYSE: IBM) is looking to help fix those problems with technologies announced Wednesday. The company introduced a software service, running on the IBM Cloud, that automatically detects bias and explains how AI makes decisions in real time. And IBM Research is open sourcing an AI bias detection and mitigation toolkit.

Although 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology, IBM says. IDC released a report Wednesday saying it expects worldwide spending on cognitive and AI systems to reach $77.6 billion in 2022.


IBM's new Trust and Transparency capabilities on the IBM Cloud are compatible with most of the popular AI frameworks used by enterprises, including Watson, TensorFlow, SparkML, AWS SageMaker and AzureML. The software service "explains decision-making and detects bias in AI models at runtime -- as decisions are being made -- capturing potentially unfair outcomes as they occur. Importantly, it also automatically recommends data to add to the model to help mitigate any bias it has detected," IBM said.
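
IBM hasn't published the service's internals, but the core idea of runtime bias detection can be sketched in a few lines of Python. Everything below is invented for illustration (the RuntimeBiasMonitor class and its window and threshold parameters are not IBM's API): it simply tracks favorable-outcome rates per group over a sliding window of live decisions and flags the model when the gap grows too wide.

# Purely illustrative sketch of runtime bias detection -- not IBM's
# actual API. Tracks favorable-outcome rates per group over a sliding
# window of live decisions and flags the model when they diverge.
from collections import deque

class RuntimeBiasMonitor:
    def __init__(self, window=1000, threshold=0.1):
        self.decisions = deque(maxlen=window)  # (group, favorable) pairs
        self.threshold = threshold

    def record(self, group, favorable):
        """Log one live decision for the given group."""
        self.decisions.append((group, bool(favorable)))

    def disparity(self, group_a, group_b):
        """Difference in favorable-outcome rates between two groups."""
        def rate(g):
            outcomes = [fav for grp, fav in self.decisions if grp == g]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        return rate(group_a) - rate(group_b)

    def is_biased(self, group_a, group_b):
        """True when the outcome-rate gap exceeds the threshold."""
        return abs(self.disparity(group_a, group_b)) > self.threshold

# Usage: feed each model decision to the monitor as it is made.
monitor = RuntimeBiasMonitor()
monitor.record('group_a', True)
monitor.record('group_b', False)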

The open source AI Fairness 360 toolkit is a library of algorithms, code and tutorials designed to help the open source community build fairer AI.
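
For developers who want to kick the tires, the toolkit (aif360 on PyPI) follows a measure-then-mitigate pattern. Here's a minimal sketch using an invented toy hiring dataset; the metric and the Reweighing algorithm are real parts of the toolkit, though this is an illustration rather than a production workflow.

# A minimal sketch of the AI Fairness 360 workflow: measure bias in a
# toy hiring dataset (invented for illustration), then mitigate it with
# the Reweighing pre-processing algorithm.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favorable outcome we want to check for bias.
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [0.9, 0.8, 0.6, 0.7, 0.9, 0.5, 0.6, 0.4],
    'hired': [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=['hired'],
    protected_attribute_names=['sex'],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Statistical parity difference:
# P(hired | unprivileged) - P(hired | privileged). Near 0 suggests the
# outcome is independent of the protected attribute.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged)
print('Before reweighing:', metric.statistical_parity_difference())

# Reweighing assigns instance weights so that outcome and group
# membership become statistically independent in the training data.
rw = Reweighing(privileged_groups=privileged, unprivileged_groups=unprivileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, privileged_groups=privileged,
    unprivileged_groups=unprivileged)
print('After reweighing:', metric_transf.statistical_parity_difference())

Run as-is, the statistical parity difference moves from -0.5 (the privileged group is hired at three times the rate of the unprivileged group in the toy data) toward zero after reweighing.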

Bias is a significant concern for AI developers, as artificial intelligence gets used for literal life-and-death decisions such as medical decision support. AI also advises judges on sentencing and parole decisions; a ProPublica investigation found evidence that one widely used risk-assessment model may be biased against minorities.

The problem is that AI is only as good as the data it's fed; if the data is biased, the AI's decisions will be biased too. In one outrageous example, Google's photo-identification service was identifying black people as gorillas.
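
A few lines of Python make the "bias in, bias out" mechanism concrete. The data below is invented: train a standard classifier on historical hiring decisions in which one group faced a much higher bar, and the model dutifully learns to reproduce that disparity.

# A minimal sketch (invented data) of "biased data in, biased decisions
# out": a classifier trained on skewed historical hiring decisions
# reproduces the skew for new candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)   # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)     # legitimate qualification

# Historical labels: group 0 needed a much higher skill bar to be hired.
hired = (skill > np.where(group == 0, 1.0, -0.5)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two equally skilled candidates, differing only in group membership:
print(model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1])
# The model assigns a much lower hiring probability to group 0.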

IBM's new software service is designed to help resolve those problems and make AI more fair. Accenture launched its own fairness toolkit in June.

Algorithms designed to make AI more fair are a great step forward. But my colleague Jamie Davies at Telecoms.com raises the intriguing question: Who decides whether the fairness algorithms are fair?

— Mitch Wagner, Executive Editor, Light Reading
