It’s no secret that YouTube has struggled to moderate the videos on its platform over the past year. The company has faced repeated scandals over its inability to rid itself of inappropriate and disturbing content, including some videos aimed at children.

Often missing from the discussion of YouTube’s shortcomings, though, are the employees directly tasked with removing content like porn and graphic violence, as well as the contractors who help train AI to detect unwelcome uploads. But a Mechanical Turk task shared with WIRED offers a glimpse of what training one of YouTube’s machine learning tools looks like at ground level.

MTurk is an Amazon-owned marketplace where corporations and academic researchers pay individual contractors to perform microtasks, called Human Intelligence Tasks (HITs), in exchange for a small sum, usually less than a dollar. MTurk workers help keep the internet running by completing jobs like identifying objects in a photo, transcribing an audio recording, or helping to train an algorithm.
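For a sense of the mechanics, here is a minimal sketch of how a requester might post such a labeling microtask programmatically through Amazon's boto3 MTurk client. The task content, reward, and form fields below are all hypothetical, not YouTube's actual task, and the sketch targets MTurk's sandbox rather than the live marketplace.

```python
import boto3

# Sketch only: posts a hypothetical video-labeling HIT to the MTurk
# *sandbox* (no real workers, no real payment). Every task detail
# below is invented for illustration.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A simple HTML form for the worker; HTMLQuestion is one of the
# standard question formats MTurk accepts.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <form method="post" action="https://workersandbox.mturk.com/mturk/externalSubmit">
        <input type="hidden" name="assignmentId" value="" id="assignmentId"/>
        <p>Watch the clip and choose the label that fits best:</p>
        <label><input type="radio" name="label" value="appropriate"/> Appropriate</label>
        <label><input type="radio" name="label" value="adult"/> Adult content</label>
        <label><input type="radio" name="label" value="violent"/> Graphic violence</label>
        <input type="submit" value="Submit"/>
      </form>
      <script>
        // MTurk passes the assignment ID in the page URL; copy it into the form.
        document.getElementById('assignmentId').value =
          new URLSearchParams(window.location.search).get('assignmentId');
      </script>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Label a short video clip (hypothetical example)",
    Description="Watch a 30-second clip and pick the label that fits best.",
    Keywords="video, labeling, moderation",
    Reward="0.10",                    # dollars, as a string
    MaxAssignments=3,                 # collect 3 independent judgments
    LifetimeInSeconds=24 * 3600,      # HIT stays listed for one day
    AssignmentDurationInSeconds=600,  # worker gets 10 minutes per assignment
    Question=question_xml,
)
print("HIT ID:", hit["HIT"]["HITId"])
```

Requesting several assignments per item, as above, is common practice: disagreements among workers can then be resolved by majority vote before the labels are used downstream.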

And while MTurk workers don’t make content moderation decisions directly, they do routinely help train YouTube’s machine learning tools in all sorts of ways. The machine learning tools they help train also do more than find inappropriate videos; they feed other parts of YouTube’s system, like its recommendation algorithm.
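To make that training step concrete, here is a toy sketch, assuming majority-voted worker labels over video titles, of how human judgments could feed a classifier. The data, features, and model are entirely invented and bear no relation to YouTube's actual pipeline.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical worker judgments: each video title was labeled by three
# MTurk workers; a majority vote produces the training label. This is
# an illustrative toy pipeline, not YouTube's actual system.
judgments = {
    "Nursery rhymes compilation for kids": ["appropriate", "appropriate", "appropriate"],
    "Uncensored fight caught on camera": ["violent", "violent", "appropriate"],
    "Late night talk show monologue": ["appropriate", "appropriate", "appropriate"],
    "Shock clip you won't believe": ["violent", "adult", "violent"],
}

titles = list(judgments)
labels = [Counter(votes).most_common(1)[0][0] for votes in judgments.values()]

# A minimal classifier over title text alone; a production system would
# use far richer signals (frames, audio, channel history) and far more data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(titles, labels)

print(model.predict(["Brutal street fight compilation"]))
```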

“YouTube and Google have been posting tasks on Mechanical Turk for years,” says Rochelle LaPlante, the Mechanical Turk worker who shared the specific assignment with WIRED. “It’s been all different kinds of stuff—tagging content types, looking for adult content, flagging content that is conspiracy theory-type stuff, marking if titles are appropriate, marking if titles match the video, identifying if a video is from a VEVO account.” LaPlante says that the tasks and guidelines often change.