The social network’s engineers have a tool called Fairness Flow to find bias in their algorithms. Also: Facebook open-sourced an AI that plays StarCraft.

Photo caption: Facebook research scientist Isabel Kloumann, speaking at F8, discusses the company’s efforts to ensure its AIs behave ethically.

As Facebook’s artificial intelligence technology gets smarter and more important to the social network’s sprawling business, the company is working to keep its AI systems from ethical lapses.

The company has built a system called Fairness Flow that can measure potential biases for or against particular groups of people, research scientist Isabel Kloumann said at Facebook’s F8 conference on Wednesday. “We wanted to ensure jobs recommendations weren’t biased against some groups over others,” Kloumann said. The tool checks for differences in how an algorithm treats, for example, men versus women, or people under 40 versus those 40 and older.
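Facebook hasn’t published how Fairness Flow works internally, but a common way to run this kind of check is to compare outcome rates across demographic groups. The Python sketch below shows one such measure, the demographic-parity gap; the function names and sample data are hypothetical illustrations, not anything from Facebook’s tool.

```python
# Hypothetical sketch of a group-fairness check in the spirit of what the
# article describes. Fairness Flow's actual internals are not public.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive outcomes (e.g., a job recommendation
    being shown) per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two
    groups; a large gap suggests the model favors some groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: did a job-recommendation model favor one age bracket?
preds = [1, 0, 1, 1, 0, 1, 0, 0]
ages  = ["under_40", "under_40", "under_40", "under_40",
         "40_plus", "40_plus", "40_plus", "40_plus"]
print(demographic_parity_gap(preds, ages))  # 0.75 - 0.25 = 0.5
```

A gap near zero means the model recommends at similar rates across groups; how large a gap counts as “biased” is a policy threshold the team would have to set.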

That’s probably helpful, given Silicon Valley’s well-documented struggles with sex and age discrimination. It’s also a timely moment for Facebook to be investing in AI ethics.

The company is under fire for lapses of its own: failing to protect its 2 billion users’ privacy and letting Russian operatives manipulate US elections through its platform. Chief Executive Mark Zuckerberg on Tuesday pledged to do better, and on Wednesday Facebook announced it’s using AI to remove posts from its social network that involve nudity, graphic violence, terrorist content, hate speech, spam, fake accounts and suicide. Read more from cnet.com…
