Social media companies are under tremendous pressure to police their platforms. National security officials press for takedowns of “terrorist content,” parents call for removal of “disturbing videos” masquerading as content for kids, and users lobby for more aggressive approaches to hateful or abusive content. So it’s not surprising that YouTube’s first-ever Community Guidelines Enforcement Report, released this week, boasts that 8,284,039 videos were removed in the last quarter of 2017, thanks to a “combination of people and technology” that flags content violating YouTube policies.

But the report raises more questions about YouTube’s removal policies than it answers, particularly with regard to the use of machine-learning algorithms that flag and remove content when they detect, for example, “pornography, incitement to violence, harassment, or hate speech.” These flagging and removal policies are increasingly consequential: because so much speech has migrated onto major social platforms, the decisions those platforms make about limiting content have huge implications for freedom of expression worldwide.

The platforms, as private companies, are not constrained by the First Amendment, but they have a unique and growing role in upholding free speech as a value as well as a right.

YouTube’s report describes its own approach this way: “We’ve developed powerful machine learning that detects content that may violate our policies and sends it for human review. In some cases, that same machine learning automatically takes an action, like removing spam videos.”
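Neither YouTube’s report nor the ACLU post explains how that pipeline works internally, but the division of labor it describes (automatic action for a few categories, human review for the rest) can be sketched. The Python below is a minimal illustration under stated assumptions: the labels, thresholds, and function names are invented for the example and are not YouTube’s actual system.

```python
from dataclasses import dataclass

# Hypothetical policy labels; YouTube's real taxonomy is not public at this level of detail.
POLICY_LABELS = {"spam", "pornography", "incitement", "harassment", "hate_speech"}

# Hypothetical thresholds: only very confident detections in "auto-removable"
# categories are acted on automatically; everything else above a lower bar
# is queued for a human reviewer.
AUTO_REMOVE_THRESHOLD = 0.98
HUMAN_REVIEW_THRESHOLD = 0.60

@dataclass
class Flag:
    video_id: str
    label: str    # one of POLICY_LABELS
    score: float  # classifier confidence in [0.0, 1.0]

def route_flag(flag: Flag, auto_removable: frozenset = frozenset({"spam"})) -> str:
    """Route a machine-generated flag to one of three outcomes.

    Returns "auto_removed", "human_review", or "no_action". Only labels in
    `auto_removable` are ever removed without a person looking at the video.
    """
    if flag.label in auto_removable and flag.score >= AUTO_REMOVE_THRESHOLD:
        return "auto_removed"
    if flag.score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"

# A near-certain spam detection is removed automatically; a possible
# harassment detection goes to a human reviewer instead.
print(route_flag(Flag("vid-001", "spam", 0.99)))        # auto_removed
print(route_flag(Flag("vid-002", "harassment", 0.85)))  # human_review
```

The design choice implied by the report’s language is the asymmetry: automatic removal is reserved for categories like spam where machines are presumed reliable, while judgment calls about speech are supposed to reach human reviewers.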

There are no easy solutions. Companies like YouTube face government and public pressure to shut down content or be shut down themselves, and some companies are trying to develop nuanced ways to address the issue. Read more from aclu.org…
