Bias and transparency can be big problems for artificial intelligence.
It's difficult for humans to figure out whether AI is making decisions for unfair reasons. AI sometimes incorporates racial and gender biases inherited from flawed data sets. And AI isn't good at explaining why it made a decision, which is a big deal if you're rejecting someone for a job or a loan, or deciding whether they should go to prison.
IBM Corp. (NYSE: IBM) is looking to help fix those problems with technologies announced Wednesday. The company introduced a software service that automatically detects bias and explains how AI makes decisions in real time, running on the IBM Cloud. And IBM Research is open sourcing an AI bias detection and mitigation toolkit.
Although 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology, IBM says. IDC released a report Wednesday saying it expects worldwide spending on cognitive and AI systems to reach $77.6 billion in 2022.
IBM's new Trust and Transparency capabilities on the IBM Cloud are compatible with most of the popular AI frameworks used by enterprises, including Watson, TensorFlow, SparkML, AWS SageMaker and AzureML. The software service "explains decision-making and detects bias in AI models at runtime -- as decisions are being made -- capturing potentially unfair outcomes as they occur. Importantly, it also automatically recommends data to add to the model to help mitigate any bias it has detected," IBM said.
The open source AI Fairness 360 toolkit is a library of algorithms, code and tutorials designed to help the open source community build fairer AI.
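To make the idea of a fairness toolkit concrete, here is a minimal sketch of the kind of group-fairness metrics such libraries compute over a model's decisions. This is an illustration in plain Python, not the AI Fairness 360 API, and the decision records are invented for the example; the two metrics shown, statistical parity difference and the disparate impact ratio (the "80% rule"), are standard fairness measures.

```python
# Illustrative sketch (hypothetical data, not the AI Fairness 360 API):
# two common group-fairness metrics computed over a model's decisions.
# Each record is (group, decision), where decision 1 = favorable outcome
# (e.g. loan approved).
decisions = [
    ("privileged", 1), ("privileged", 1), ("privileged", 1), ("privileged", 0),
    ("unprivileged", 1), ("unprivileged", 0), ("unprivileged", 0), ("unprivileged", 0),
]

def favorable_rate(records, group):
    """Share of favorable decisions received by one group."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

priv = favorable_rate(decisions, "privileged")      # 0.75
unpriv = favorable_rate(decisions, "unprivileged")  # 0.25

# Statistical parity difference: 0 means perfectly balanced outcomes.
spd = unpriv - priv
# Disparate impact ratio: the "80% rule" flags values below 0.8.
di = unpriv / priv

print(f"statistical parity difference: {spd:+.2f}")
print(f"disparate impact: {di:.2f} ({'flagged' if di < 0.8 else 'ok'})")
```

A real toolkit goes further than measuring: it also offers mitigation algorithms that reweight or transform the data so retrained models produce more balanced outcomes.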
Bias is a significant concern for AI developers as artificial intelligence gets used for literal life-and-death decisions, such as medical decision support. AI is also advising judges on whether inmates should be sent to prison; a ProPublica investigation found evidence that one such model may be biased against minorities.
The problem is that AI is only as good as the data it's fed; if the data is biased, the AI's decisions will be biased too. In one outrageous example, Google's photo-identification service identified black people as gorillas.
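The "biased data in, biased decisions out" dynamic can be shown with a toy example. The data below is invented for illustration: a trivial model "trained" on historically skewed hiring records reproduces the skew, giving equally qualified candidates very different odds depending on their group, because that is exactly what the labels reward.

```python
# Toy illustration (hypothetical data): a model fit to historically biased
# hiring decisions simply reproduces that bias. Each record is
# (group, qualified, hired); group B candidates were hired far less often
# than equally qualified group A candidates.
from collections import defaultdict

history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def train(records):
    """'Learn' the historical hire rate for each (group, qualified) pair."""
    stats = defaultdict(lambda: [0, 0])  # (hires, total)
    for group, qualified, hired in records:
        stats[(group, qualified)][0] += int(hired)
        stats[(group, qualified)][1] += 1
    return {key: hires / total for key, (hires, total) in stats.items()}

model = train(history)

# Two equally qualified applicants get very different predicted odds:
print(model[("A", True)])  # 1.0
print(model[("B", True)])  # ~0.33
```

Nothing in the model mentions group membership as a rule; the disparity comes entirely from the labels it was trained on, which is why fixing the data matters as much as fixing the algorithm.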
Algorithms designed to make AI more fair are a great step forward. But my colleague Jamie Davies at Telecoms.com raises the intriguing question: Who decides whether the fairness algorithms are fair?
- Microsoft Takes Aim at Salesforce With Dynamics AI
- AT&T's Gilbert: AI Critical to 5G Infrastructure
- Microsoft Buys AI Startup Lobe, to Feed Human Brains to Machines
— Mitch Wagner Executive Editor, Light Reading