IBM Introduces Software to Alleviate AI Bias

Sept. 24, 2018

By: Michael Feldman

In an effort to improve AI transparency, IBM has introduced an open source toolkit of algorithms that detect and mitigate bias in machine learning applications.

The problem of AI bias and transparency has become an impediment to the deployment of machine learning models as they move from the research lab into commercial settings. While many organizations recognize the potential of these models to greatly improve the speed and accuracy of tasks now performed by humans, those same organizations worry about the legal ramifications of biased implementations.

In a study IBM conducted to gauge the value of artificial intelligence to companies, the company found that 60 percent of business leaders worried about exposure to liability as a result of AI adoption. A cancer treatment recommendation powered by machine learning may be right 98 percent of the time, but what are the liabilities of the software maker when a misdiagnosis results in death? A less fatal but still undesirable type of bias comes into play when statistical discrimination produces decisions that favor one group over another, as can happen in algorithms that compute creditworthiness.

Despite those kinds of issues, 82 percent of enterprises are looking to move ahead with AI adoption, with increased revenue as the motivating factor. In the IBM study, business leaders saw the greatest value for the technology in areas such as customer service, innovation, information security, and, ironically, risk management.

To deal with AI’s potential for bias, IBM has come up with the AI Fairness 360 toolkit, aka AIF360. In a blog post, IBM’s Kush Varshney says the toolkit began as a summer project earlier this year. The initial release contains nine bias mitigation algorithms and 30 fairness metrics, which Varshney says were developed by the broader algorithmic fairness research community.
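
To give a sense of how those fairness metrics are used, here is a minimal sketch that computes a simple group-fairness measure on the German credit dataset bundled with the toolkit. The class and method names (GermanDataset, BinaryLabelDatasetMetric, mean_difference) follow AIF360's published Python API around the time of its release; treat the details as an approximation of typical usage rather than a canonical recipe.

```python
# Sketch: measuring group bias with AIF360 (API as of the initial
# release; details may differ across versions).
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Load the bundled German credit dataset, treating age as the
# protected attribute (age >= 25 counts as the privileged group).
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=['personal_status', 'sex'])

privileged = [{'age': 1}]
unprivileged = [{'age': 0}]

# Mean difference (statistical parity difference): the gap in
# favorable-outcome rates between the unprivileged and privileged
# groups; a value of 0 means no gap.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Mean difference:', metric.mean_difference())
```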

At runtime, the AIF360 software can detect bias, offer confidence metrics, and explain the decision-making behind the machine learning model. It can also recommend data to add to the model to help mitigate detected bias.
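
On the mitigation side, the sketch below applies one of the toolkit's pre-processing algorithms, Reweighing, which adjusts instance weights in the training data so that the group-fairness gap measured above moves toward zero. Again, this assumes AIF360's Python API and is illustrative rather than an official workflow.

```python
# Sketch: mitigating bias with AIF360's Reweighing pre-processing
# algorithm (assumed API as of the initial release).
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=['personal_status', 'sex'])

privileged = [{'age': 1}]
unprivileged = [{'age': 0}]

# Reweighing leaves features and labels intact but assigns each
# (group, label) combination a weight chosen so that the weighted
# favorable-outcome rates match across groups.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

# After reweighing, the group-fairness gap should be near zero.
metric = BinaryLabelDatasetMetric(dataset_transf,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Mean difference after reweighing:', metric.mean_difference())
```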

Since AIF360 is an open source project, IBM isn't looking to monetize it, at least not directly. The idea is to make AI more generally usable, so that customers will spend money on IBM's own AI services. To that end, the company is encouraging the community to contribute additional functionality to the toolkit.

“One of the reasons we decided to make AIF360 an open source project as a companion to the adversarial robustness toolbox is to encourage the contribution of researchers from around the world to add their metrics and algorithms,” writes Varshney. “It would be really great if AIF360 becomes the hub of a flourishing community.”

Details of the toolkit and how to use it are provided here.