It’s probably happened to you. You’re having a chat with someone online (on social media, via email, in Slack) when things take a nasty turn.

The conversation starts out civil, but before you know it, you're trading personal insults with a stranger, co-worker, or family friend. Well, we have some good news: scientists are looking into it, and with a little help from machine learning, they could help us stop arguments online before they even happen.

The work comes from researchers at Cornell University, Google Jigsaw, and Wikimedia, who teamed up to create software that scans a conversation for verbal tics and predicts whether it will end acrimoniously or amiably. Notably, the software was trained and tested on a hotbed of high-stakes discussion: the "talk pages" of Wikipedia articles, where editors discuss changes to phrasing, the need for better sources, and so on.

Using a type of machine learning known as logistic regression, the researchers worked out how best to weigh these verbal cues when their software made its judgments. At the end of the training period, when given a pair of conversations that both started out friendly but only one of which ended in personal insults, the software was able to predict which was which just under 65 percent of the time.
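To make the approach concrete, here is a minimal sketch of how logistic regression can turn conversational cues into a "will this derail?" score and then handle the pairwise task described above. The features (a politeness-marker count and a count of direct "you"-statements), the toy data, and the training setup are all illustrative assumptions for this sketch, not details from the actual study.

```python
import math

# Toy training data: each conversation's opening exchange is reduced to two
# hypothetical features: (politeness-marker count, direct "you"-statement count).
# Label 1 = the conversation later derailed into personal attacks, 0 = it stayed civil.
X = [(3, 0), (2, 1), (4, 0), (0, 3), (1, 4), (0, 2)]
y = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a logistic regression model with plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for (x1, x2), label in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - label          # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def p_derail(features):
    """Predicted probability that a conversation will turn hostile."""
    x1, x2 = features
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

# The pairwise task from the article: given two friendly-looking openings,
# pick the one more likely to end in insults.
civil_opening = (3, 1)   # polite, few direct accusations
tense_opening = (0, 3)   # no politeness markers, several "you"-statements
print(p_derail(tense_opening) > p_derail(civil_opening))  # expect: True
```

The key design point is that the model never reads the whole argument: it scores only the opening exchange, which is why even the real system tops out well below human accuracy.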

That's pretty good, although some major caveats apply. First, the test was done on a limited data set: Wikipedia talk pages, where, unusually for online discussions, participants share the goal of improving the quality of an article. Second, humans still performed better on the same task, making the right call 72 percent of the time.