Striking a Balance: Freedom of Speech and Institutional Censorship Online

Note: I have censored words from social media posts that could be perceived as culturally offensive or inappropriate.

According to Jessie Daniels, cyber racism refers to “a range of white supremacist movements in Europe and North America and to the new horizons the Internet and digital media have opened for expression of whiteness across national boundaries.” (4) As the two readings for this week demonstrate, the Internet is both an empowering tool for personal expression and a lawless haven for racism and bigotry. To ensure a safer online experience, many people are pressuring companies like Facebook and Twitter to devise better censorship algorithms capable of detecting and eliminating offensive content, such as the racist tweets that followed Amandla Stenberg’s casting as Rue in the 2012 Hunger Games movie.

[Image: screenshot of racist tweets reacting to Amandla Stenberg’s casting, via Jezebel]

Cyber racism, however, is not always easy to identify. As Irene Kwok and Yuzhou Wang explain, the presence of “racist tweets against blacks…may not be obvious against a backdrop of half a billion tweets a day.” (1621) In their research, Kwok and Wang demonstrate the difficulties online companies will face in their efforts to systematically censor racially charged comments. The filter designed for their study, for instance, captured offensive language only 76% of the time and could not identify relationships between words, causing it to erroneously censor innocent language. Their study also revealed the added complication of determining which words are appropriate only within certain communities. In other words, according to Kwok and Wang, an effective filtering system will need to recognize statements as racist or non-racist depending on the racial identity of the person who made them. The following two tweets provide a good example. Should Case A be allowed in the Twitter community, or should the company take an all-or-nothing stance on the use of certain words? How do you think Facebook and Twitter should treat repeat offenders?

Case A: An African-American female quoting a contemporary cultural icon:

[Embedded tweet: a user quoting Muhammad Ali, 1966: “I ain’t got no quarrel with them Viet Cong … they never called me n****r.”]

Case B: A white male attacking an African-American teenager:

[Embedded tweet: “Because you can’t afford air conditioning because you’re a n****r.”]
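Before comparing the two, consider how a simple filter actually “sees” them. Below is a minimal sketch of my own (the word list and tweet texts are illustrative placeholders, not Kwok and Wang’s actual code or data) of a unigram, bag-of-words check like the one their study describes:

```python
# Minimal illustrative sketch: a unigram (bag-of-words) filter sees only
# WHICH words occur, not who wrote them or in what context.
import re
from collections import Counter

# Placeholder lexicon, censored per the note above; not from the paper.
OFFENSIVE_UNIGRAMS = {"n****r"}

def unigram_features(tweet: str) -> Counter:
    """Lowercase the tweet and count its individual word tokens."""
    tokens = re.findall(r"[a-z*']+", tweet.lower())
    return Counter(tokens)

def naive_flag(tweet: str) -> bool:
    """Flag a tweet if any single token appears in the offensive lexicon."""
    return any(token in OFFENSIVE_UNIGRAMS for token in unigram_features(tweet))

case_a = "they never called me n****r - Muhammad Ali, 1966"                   # quotation
case_b = "because you can't afford air conditioning because you're a n****r"  # attack

# Both print True: the model cannot distinguish a historical quotation
# from a direct personal attack.
print(naive_flag(case_a), naive_flag(case_b))
```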

The difference between the two statements is obvious to any person consciously searching for online slurs, but the two appear equally offensive to an algorithm that is unable to analyze the context of the conversation. Computers, and even humans, face a similar predicament in identifying the cloaked comments and websites discussed in Jessie Daniels’ book. She provides an extensive array of examples in her well-researched and wide-ranging writing. I was particularly captivated by her analysis of the cultural impact of cyber racism. According to the author, the “least recognized—and, hence, most insidious—threat posed by white supremacy online is the epistemological menace to our accumulation and production of knowledge about race, racism, and civil rights in the digital era.” (8) In one example, Daniels describes how a user attempted to employ “moderate-sounding rhetoric and an appeal to the nation’s founding ideals to make a point that runs counter to the democratic ideals of equality for all.” (53) The excerpt is included below:

[Excerpt from Daniels, p. 53]

I found this user’s comment particularly troubling because, unlike personal attacks made over Twitter, it is virtually undetectable to filtering systems and has the potential to indoctrinate unsuspecting readers. As Daniels points out, the regulation of such websites is a highly controversial and polarizing topic of debate. She persuasively argues that the United States must first recognize the racial realities of its history and then embrace the urgent need to restrict hate speech online. According to the author, the United States tends to “ignore and downplay the formative effects of colonialism, slavery, ongoing and systemic racism, and the white racial frame on the acceptance of white supremacy online.” (179) That is particularly worrying given the amount of influence the United States wields online. After reading this chapter, I asked myself two questions: Can governmental entities regulate the Internet in an effective manner? If so, should regulation be crafted at the national level, despite the borderless nature of cyberspace?

The questions raised by these readings are crucial to the integrity and sustainability of the Internet as a productive platform for exchanging information. However, both users and governmental actors must accept that free speech cannot take precedence in every online situation; with free speech online, individuals must assume a greater responsibility to use it properly. As stated in response to a recent ruling in Australia, “free speech is not absolute…there is a point where it comes into conflict with other rights and should be legally curtailed.” (The Australian) Indeed, in the case of Twitter, I believe the company should enforce a filtering system in a transparent and user-friendly manner, communicating to users why their tweets are being blocked and signaling which words to avoid. Twitter could employ a bigram system, as Kwok and Wang suggest, to analyze the relationships between words and minimize the risk of blocking non-racist tweets. Similarly, Twitter should provide ample warning before deactivating the account of an alleged repeat offender, and those users should have access to a resolution center to appeal their case. Many other sites, such as PayPal and eBay, already offer such services in their customer service sections.
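As a rough illustration of the bigram idea (the word pairs and weights below are my own invented placeholders, not values from Kwok and Wang’s paper), scoring adjacent word pairs lets a filter separate an attack pattern like “you’re a <slur>” from a quotation pattern like “called me <slur>”:

```python
# Illustrative bigram scorer: adjacent word pairs carry context that
# single words do not. Weights below are invented for demonstration.
import re

ATTACK_BIGRAMS = {("you're", "a"): 0.4, ("a", "n****r"): 0.9}        # attack-like
QUOTE_BIGRAMS = {("never", "called"): -0.3, ("called", "me"): -0.6}  # quote-like

def bigrams(tweet: str):
    """Split a tweet into lowercase tokens and pair each with its neighbor."""
    tokens = re.findall(r"[a-z*']+", tweet.lower())
    return zip(tokens, tokens[1:])

def attack_score(tweet: str) -> float:
    """Sum attack-like (positive) and quote-like (negative) bigram weights."""
    return sum(
        ATTACK_BIGRAMS.get(pair, 0.0) + QUOTE_BIGRAMS.get(pair, 0.0)
        for pair in bigrams(tweet)
    )

# A transparent filter could block only above a threshold and show the
# user exactly which word pairs triggered the decision.
print(attack_score("you're a n****r"))              # 1.3 -> warn or block
print(attack_score("they never called me n****r"))  # -0.9 -> allow
```

A real classifier would learn thousands of such weights from labeled data rather than hand-coding them, but the principle is the same: much of the context lives in word pairs rather than single words.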

I am also confident that national governments need to play a central role in the effort to combat racism online. Despite a lack of results at the judicial level, past cases have provided great insight into how national governments could regulate the Internet. In her discussion of the 2000 French lawsuit against Yahoo, for instance, Daniels mentions the development of a new technology, geo-ID, which can “identify and screen Internet content on the basis of geographical source.” (177) This tool could not only help restrict the flow of racist information onto public sites but also, more importantly, enable governments to match IP addresses to known hate groups, as catalogued on the Southern Poverty Law Center’s website.
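Daniels does not describe how geo-ID is implemented, so the following is only a hedged sketch of one crude building block such screening might start from: checking whether a visitor’s IP address falls inside published network ranges. The ranges below are reserved documentation addresses (RFC 5737), stand-ins for whatever ranges an actual registry would supply.

```python
# Illustrative sketch only: match an incoming IP address against a list of
# flagged network ranges. The ranges are RFC 5737 documentation addresses,
# used here purely as placeholders.
import ipaddress

FLAGGED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_flagged(ip_string: str) -> bool:
    """Return True if the address falls inside any flagged network range."""
    address = ipaddress.ip_address(ip_string)
    return any(address in network for network in FLAGGED_NETWORKS)

print(is_flagged("203.0.113.42"))  # True: inside the first flagged range
print(is_flagged("192.0.2.1"))     # False: not in any listed range
```

Mapping address ranges to geography or to specific groups is far harder in practice, since addresses are reassigned and easily masked, which is part of why this kind of regulation remains contested.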

Class Discussion Questions

  1. Would you be willing to have your Twitter and Facebook posts screened in order to guard against hate speech online, despite knowing you would never engage in such behavior?
  2. As an online user, do you feel there are adequate mechanisms currently in place to report abusive or hurtful language? Which social media platform do you feel is best at doing this? Which do you feel is the worst?
  3. Do you think it is more important to prosecute individual actors online who bully others based on their race, or established organizations that create cloaked websites to misrepresent historical information? Please describe what you deem would be the most effective mechanism for regulating the option you picked.

Works Cited

“Balotelli Tweet: The ‘Ugly Side’ Of Social Media.” YouTube. YouTube, 22 Sept. 2014. Web. 29 Sept. 2014.

Daniels, Jessie. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Lanham, Md.: Rowman & Littlefield, 2009. Print.

“Facebook Can’t See the Problem with Horrible Racism.” Us Vs Th3m. 2 Apr. 2014. Web. 29 Sept. 2014.

“Global Internet Map 2006.” TeleGeography. Web. 29 Sept. 2014. http://www.telegeography.com/telecom-resources/map-gallery/global-internet-map-2006

“Hate Map.” Southern Poverty Law Center. Web. 29 Sept. 2014.

Jasmine. “‘I Ain’t Got No Quarrel with Them Viet Cong … They Never Called Me Nigger.’ Muhammad Ali, 1966.” Twitter. Twitter, 29 Sept. 2014. Web. 29 Sept. 2014.

Kwok, Irene, and Yuzhou Wang. “Locate the Hate: Detecting Tweets against Blacks.” Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013. Print.

Liftoffs. “Because You Can’t Afford Air Conditioning Because You’re a Nigger.” Twitter. Twitter, 29 Sept. 2014. Web. 29 Sept. 2014.

“Racist Hunger Games Fans Are Very Disappointed.” Jezebel. 26 Mar. 2012. Web. 29 Sept. 2014.

“What Does the Online Hate Prevention Institute (OHPI) Do?” YouTube, 19 Aug. 2014. Web. 29 Sept. 2014.

Wilson, Tim. “Free Speech Is Best Medicine for the Bigotry Disease.” The Australian. 26 Mar. 2014. Web. 29 Sept. 2014.
