It took almost half an hour and thousands of views before the video of the New Zealand massacre was reported to Facebook and finally removed. Fewer than 200 users watched the live stream of the mosque attacks in Christchurch, yet within hours images, and even the complete video, had spread around the world through Twitter, Facebook and YouTube.
Every social media platform immediately apologized and sent its condolences to the affected families; yet the episode exposes, once again, how much these platforms struggle to deal with hateful content.
A quick look through the comment section of any racially charged video shows how serious and pervasive the problem of hate speech is. In the United States, as in many other developed democracies, hate speech is usually treated as protected free speech unless it incites imminent lawless action.
The Supreme Court has protected the rights of Nazis, anti-Semites, cross-burning Klansmen and an anti-gay Baptist sect to openly denigrate others with almost no restrictions. Hate speech is protected by law unless it is “likely to produce a clear and present danger of a serious substantive evil that rises far above public inconvenience, annoyance, or unrest…”
In practice, this standard reaches only comments that incite violence; yet any comment that demeans or denigrates a person or a group of people is harmful. The damage comes not only from the harassment and violence such comments lead to, but also from the hatred and discrimination they foster, promote and enable in our society.
One key reason hate speech is protected is to prevent a corrupt government from silencing unpopular opinions that clash with its political views. As should be expected in any real democracy, people must be free to express their judgements. However, freedom of speech comes with limitations and responsibilities, and criticizing the government is not the same as targeting a group of people because of their race, ethnicity, gender or sexuality.
Our personal rights end where someone else’s begin, which means that hateful acts or statements aimed at individuals or groups out of prejudice should not be protected. Confronting hate speech is not about controlling or censoring the conversation but about promoting tolerance and inclusivity.
What we say or do on social media usually goes unpunished, and thanks to their worldwide reach and anonymity, these platforms have become the main setting for hate speech.
According to a study from the Pew Research Center, 41 percent of American adults have experienced some form of online harassment, and minorities are particularly exposed. One in four African Americans and one in ten Hispanics say they have been harassed online because of their race or ethnicity, and women are twice as likely as men to be targeted because of their gender.
Because hate is a social phenomenon rooted in our culture, it is unrealistic to expect that prohibitions on social media alone would make it disappear. In other words, establishing consequences for hate speech will not solve the problems of hate and discrimination. It would, however, contribute to a proper education in respect, equality and embracing diversity.
Most social media companies, such as Google, Facebook and Twitter, have policies defining what kinds of hate speech are allowed on their platforms; however, those policies are often applied inconsistently. For example, Charlotte Spencer-Smith, a project assistant at the Institute for Comparative Media and Communication Studies, explains that under some moderation rules, combining a protected category such as sex or ethnicity with a non-protected category such as age produces a non-protected category. This means that “black children” would be unprotected, and an attack against black children would not be removed.
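To make the logic of that rule concrete, here is a minimal sketch in Python, assuming a hypothetical moderation rule in which a group counts as protected only if every category describing it is itself protected. The category names and the rule are an illustration based on Spencer-Smith’s description, not any platform’s actual policy engine.

```python
# Minimal sketch of the category-combination rule described above.
# Purely illustrative: the category sets and the rule are assumptions
# based on the description in the text, not a real platform's policy.

PROTECTED = {"race", "ethnicity", "sex", "religion"}     # protected categories
NON_PROTECTED = {"age", "occupation", "social class"}    # qualifiers that strip protection

def is_protected(categories):
    """A group is protected only if every category describing it is
    itself protected; one non-protected qualifier removes protection."""
    return bool(categories) and categories <= PROTECTED

# "black people" = ethnicity alone: protected, so an attack would be removed
print(is_protected({"ethnicity"}))         # True

# "black children" = ethnicity + age: protection is lost, the attack stays up
print(is_protected({"ethnicity", "age"}))  # False
```

The two calls show the counterintuitive consequence: the narrower and arguably more vulnerable group, “black children”, ends up with less protection than the broader group it belongs to.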
The issue of online abuse is part of a much bigger problem: the relationship between governments, social media platforms and users, which raises challenges including data protection and privacy, criminal activity, fake news and hate speech. Governments have to take action and become more responsive to the capabilities and consequences of developing technology.
In January 2018, Germany’s Network Enforcement Act (NetzDG) came into full effect, requiring social media companies with more than 2 million registered users to delete or block “manifestly unlawful” content within 24 hours of a complaint, or within a week in more complicated cases. With this law, Germany is taking a different approach, making companies themselves legally responsible in the fight against hate speech.
Getting rid of hate speech on social media also requires aware and informed citizens. For this reason, governments should be investing in education about digital rights and digital responsibility.
Under the guise of free speech, harassers, racists, sexists and xenophobes can say whatever they want to whomever they want with total impunity. It is time for us to make sure that social media is a safe place for everyone.