Companies and governments increasingly rely on algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from product recommendations to dating matches.

In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But transparency can backfire and cause confusion if not implemented carefully.

Fortunately, there is a smart way forward. Users should be able to demand the data behind the algorithmic decisions made about them, including in recommendation systems, credit and insurance risk systems, advertising programs, and social networks.

This tackles “intentional concealment” by corporations. But it doesn’t address the technical challenges associated with transparency in modern algorithms. Here, a movement called explainable AI (xAI) might be helpful.

xAI systems work by analyzing various inputs used by a decision-making algorithm, measuring the impact of each of the inputs individually and in groups, and finally reporting the set of inputs that had the biggest impact on the final decision.
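The input-impact analysis described above can be sketched as follows. This is a minimal illustration, not any particular xAI library's method: it treats a decision model as a black box, replaces each input with a baseline value one at a time, and ranks inputs by how much the decision changes. The model, its weights, and the feature names (`score_loan`, `income`, etc.) are all hypothetical.

```python
def score_loan(applicant):
    # Stand-in black-box decision model; the weights are made up
    # purely for illustration.
    return (0.5 * applicant["income"]
            + 0.3 * applicant["credit_history"]
            - 0.2 * applicant["debt"])

def input_impacts(model, applicant, baseline=0.0):
    """Measure each input's impact by swapping it for a baseline
    value and recording how far the model's output moves."""
    original = model(applicant)
    impacts = {}
    for feature in applicant:
        perturbed = dict(applicant)
        perturbed[feature] = baseline
        impacts[feature] = abs(original - model(perturbed))
    # Report inputs ordered by impact on the final decision.
    return sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)

applicant = {"income": 1.0, "credit_history": 0.8, "debt": 0.4}
for feature, impact in input_impacts(score_loan, applicant):
    print(feature, round(impact, 2))
```

Real xAI techniques such as LIME or SHAP are far more sophisticated (they also measure inputs in combination, as the paragraph above notes), but the core idea of perturbing inputs and observing the output is the same.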
