Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work Series, we explore social media moderation, looking at what works and what doesn’t, while examining possibilities for improvement.

“A.I. can pick up offensive language and it can recognize images very well. The power of identifying the image is there,” says Winston Binch, the chief digital officer of Deutsch, a creative agency that uses A.I. in creating digital campaigns for brands from Target to Taco Bell. “The gray area becomes the intent.”

Using natural language processing, A.I. can be trained to recognize text across multiple languages. A program designed to spot posts that violate community guidelines, for example, can be taught to detect racial slurs or terms associated with extremist propaganda.
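To make that idea concrete, here is a minimal sketch of such a text classifier in Python using scikit-learn. The tiny inline dataset and the “ok”/“flagged” labels are placeholders for illustration only, not anything the companies or tools mentioned in this article actually use; a production moderation system would rely on a large, carefully curated multilingual corpus and far more robust modeling.

```python
# Minimal sketch: training a text classifier to flag posts that may violate
# community guidelines. The inline examples and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (post text, label) pairs.
posts = [
    "I love this new phone, great camera",      # ok
    "join our movement and fight the enemy",    # flagged
    "what a beautiful sunset tonight",          # ok
    "these people are subhuman and should go",  # flagged
]
labels = ["ok", "flagged", "ok", "flagged"]

# TF-IDF features plus a linear classifier; real multilingual support would
# need language-aware tokenization or a multilingual embedding model instead.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def review(post: str) -> str:
    """Return the predicted label for a new post."""
    return model.predict([post])[0]

print(review("fight the enemy with us"))  # likely "flagged" on this toy data
```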

A.I. can also be trained to recognize images, to prevent some forms of nudity or to recognize symbols like the swastika.
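As a rough illustration of the image side, the sketch below adapts a general-purpose pretrained vision model (torchvision’s ResNet-18, chosen here as an assumption rather than anything named in the article) with a two-class head for allowed versus disallowed imagery. A real system would fine-tune it on a labeled image dataset and route flagged results to human reviewers.

```python
# Minimal sketch: adapting a pretrained image model to flag disallowed imagery
# (e.g. banned symbols or nudity). The two-class head is untrained and the
# random example tensor is a placeholder; real use requires labeled images,
# fine-tuning, and human review of anything the model flags.
import torch
import torch.nn as nn
from torchvision import models

# Start from a general-purpose pretrained backbone and replace the final
# layer with a two-class head: "allowed" vs. "disallowed".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

# Placeholder input standing in for a preprocessed image batch
# (3-channel, 224x224, normalized as the backbone expects).
example = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    logits = model(example)
    probs = torch.softmax(logits, dim=1)

labels = ["allowed", "disallowed"]
print(labels[int(probs.argmax(dim=1))], probs.tolist())
```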

Read more from digitaltrends.com…