[Editor’s note: This story originally was published by Real Clear Politics.]
By Kalev Leetaru
Real Clear Politics
Social media was supposed to usher in a new era of freedom of expression worldwide, eliminating government censorship. Instead, the web has not only entrenched that censorship, but granted a handful of companies nearly absolute control over what constitutes “truth” and “acceptable ideas” online. Facebook’s new “hateful memes” initiative is a case in point, reminding us just how quickly Silicon Valley is moving to redefine our concept of free speech.
Earlier this week, Facebook announced a new competition in which it released a dataset of memes it considers “hate speech” and encouraged AI researchers to build algorithms that can automatically recognize such memes and delete them from its platform.
For the purposes of its competition, Facebook defines hate speech as “a direct or indirect attack on people based on characteristics, including ethnicity, race, nationality, immigration status, religion, caste, sex, gender identity, sexual orientation, and disability or disease. We define attack as violent or dehumanizing (comparing people to non-human things, e.g. animals) speech, statements of inferiority, and calls for exclusion or segregation. Mocking hate crime is also considered hate speech.”
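To give a sense of what the competition asks entrants to build, the sketch below shows one simple way such a detector could be structured: a model that fuses an embedding of the meme image with an embedding of its overlaid text and scores the pair as hateful or not. This is purely an illustration under assumptions of my own about model choice, vocabulary size and preprocessing; it is not Facebook’s published baseline, and any real moderation system would be considerably more elaborate.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MemeClassifier(nn.Module):
    """Toy two-branch model: one branch encodes the meme image, the other the
    meme's overlaid text; the fused features feed a hateful / not-hateful head."""

    def __init__(self, vocab_size=30000, text_dim=128, hidden_dim=256):
        super().__init__()
        # Image branch: a ResNet-18 with its classification layer removed
        # (in practice one would start from pretrained weights and fine-tune).
        resnet = models.resnet18()
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])
        # Text branch: average the embeddings of the meme's text tokens.
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim, mode="mean")
        # Fusion head: concatenate both modalities and score two classes.
        self.classifier = nn.Sequential(
            nn.Linear(512 + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, images, token_ids, offsets):
        img_feat = self.image_encoder(images).flatten(1)   # (batch, 512)
        txt_feat = self.text_encoder(token_ids, offsets)   # (batch, text_dim)
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))

# Dummy forward pass: four 224x224 images and their tokenized captions.
model = MemeClassifier()
images = torch.randn(4, 3, 224, 224)
token_ids = torch.randint(0, 30000, (40,))     # all captions' tokens, flattened
offsets = torch.tensor([0, 10, 20, 30])        # where each caption starts
logits = model(images, token_ids, offsets)     # shape (4, 2)
```

Even in a toy version like this, the hard part is not the wiring of the model but the labels it is trained on, which brings us back to who decides what counts as hateful.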
At first glance, identifying such “hate speech” would seem relatively straightforward; Facebook notes that its competition’s content raters were trained in just four hours. At the same time, the company acknowledged that its raters frequently disagreed and that in many cases it had to defer to professional hate speech reviewers to render judgment.
The problem is that in a culturally diverse world, one group’s call for human rights can be another group’s hate speech. Take the example of a women’s rights group that, on Facebook, condemns Islamic and Hasidic gender segregation and calls for economic and legal action against members of those religions. On the one hand, such a post could be seen as a plea for women’s rights. On the other hand, under Facebook’s rules it would appear to constitute prohibited hate speech targeting religion.
Similarly, a minority rights organization might post a so-called “Karen” meme assailing the privilege of white women and calling for reparations, sanctions and protests targeting this group. Under Facebook’s rules this would appear to constitute hate speech given its targeting of a specific race and gender, but at the same time, banning such posts would silence minority discussion of privilege.
In a reflection of just how hard it can be to draw the line between permitted and prohibited posts, Facebook declined to comment when asked whether it would consider either of these examples “hate speech” or permissible under its rules. That the company would not readily issue a verdict shows just how fraught and complex these decisions are.
In a multicultural online world spanning the entirety of the earth’s population, almost any viewpoint will strike someone, somewhere, as offensive or hateful. Liberal democracies have engaged in centuries of spirited debate trying to reconcile the desire for freedom of expression with their societies’ differences, and each country’s laws reflect its distinct societal and cultural norms.
Why, then, should private entities be permitted, in the space of just a decade and a half, to upend these often hard-won freedoms? More importantly, why does Facebook believe it is acceptable to replace laws enacted by elected officials answerable to the public with its own set of arbitrary rules prohibiting otherwise legal speech? Asked this question, the company pointed only to Mark Zuckerberg’s call last year for greater government intervention in content moderation, to its belief that online censorship is necessary to “keep people safe,” and to its assertion that it “continuously engages with outside experts” on these topics.
As Facebook rushes to build its army of AI content moderators to eliminate any trace of nonconformity from its platform, the decisions it is making about what beliefs to permit within the digital public square will have profound implications for the future of democracy itself.
RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.