The Digital Frontier


The two readings from this week, “Cyber Racism” and “Locate the Hate: Detecting Tweets against Blacks,” focus on the implications of a society heavily engrossed in social media. The authors of both sources identify this as especially problematic when material online carries both implicit and explicit anti-minority messages.

Irene Kwok and Yuzhou Wang’s study, “Locate the Hate: Detecting Tweets against Blacks,” investigates the evolving nature of digital racism through the lens of the relatively new social networking service Twitter. Kwok and Wang note that while African Americans make up only 14 percent of the American population, they account for a full 25 percent of Twitter users. With that in mind, it is striking that the research they cite shows 51 percent of Americans expressing anti-black sentiments. Yet amid the roughly half a billion tweets posted every day, many anti-black tweets simply go undetected.

Using Twitter as a digital platform, Kwok and Wang collected tweets from multiple and diverse Twitter accounts, coded them for specific “hate” words, and sorted each tweet into one of two binary categories: “racist” or “nonracist.”


An example of a “racist” tweet on Kwok and Wang’s binary scale, as it contains the “hate word,” “nigger.”

The Twitter user above, Shelly, would be coded as “racist” in Kwok and Wang’s study. This classification process had an accuracy rate of 76 percent. Although 76 percent may seem high, the system still came nowhere close to catching all of the racist tweets on Twitter. The authors further mention that some tweets did not contain the “hate words” they were searching for yet were still extremely racist. One example from their article is, “Why did Obama’s great granddaddy cross the road? Because my great granddaddy yanked his neck chain in that direction” (Kwok and Wang, 1622). This tweet contains none of the coded “hate words” and is a perfect illustration of how many tweets went undetected in their study. Still, the authors suggest that even though their method was less accurate than they had hoped, the results they did find should provide a driving incentive for further research.

The results of Kwok and Wang’s study connect to the conceptual theories in “Cyber Racism” by Jessie Daniels, as the two readings interplay with each other. Both lead their readers to conclude that racism online is an issue that may be inevitable unless serious action is taken. As an educated audience, citizens must recognize that there is currently a chain of racism perpetuated by negative online behavior. This chain begins with the surplus of online negativity (recall Kwok and Wang’s citation that 51 percent of Americans express anti-black sentiments). This massive amount of input leads people to internalize these messages, and the effect is especially visible in children. As seen in the short video below, children at a very young age have already absorbed the negative connotations associated with blackness, and this effect can cause them to grow up with hatred of minorities. They then put the learned message back out into the world through social media, and the cycle continues, affecting more and more individuals.

In this video, the effects Daniels, Kwok, and Wang describe can be seen: people are deeply affected by the messages of their society. Almost every child in the video associated negative qualities (such as bad, ugly, or impolite) with the black dolls and positive qualities (such as smart, nice, pretty) with the white dolls. This pattern directly reflects the messages of our society that these children constantly hear around them.

This video further underscores the importance of “Cyber Racism” and “Locate the Hate: Detecting Tweets against Blacks.” The digital era dominates our world more each day, and it is imperative that we understand and are aware of its effects on people. As Daniels states, the representation of white supremacy online is a demonstration of the actual world around us. This means that in a matter of seconds, with the click of a button, one can see through the digital world how deeply white supremacy runs in the real world we live in.

When using Twitter, or any form of social media, everyone has a voice regardless of his or her identity. As the digital frontier becomes ever more integrated into our everyday lives, it is essential that these negative attitudes be addressed. Online material no longer just stays online; it enters the collective consciousness of the everyday real world. Although Daniels, Kwok, and Wang’s findings may appear small, analysis shows they have large and serious implications.

Discussion Questions:

1) Although Daniels, Kwok, and Wang imply that cyber racism may be inevitable, what steps do we need to take as a society to overcome this problem?

2) Kwok and Wang’s study had significant limitations, yet it was still able to gather serious data, and further research should test whether its implications hold. How might they conduct a second study to capture most or all of the racist activity on social media?

3) Although cyber racism originates with specific individuals, should we hold Twitter, Facebook, and other major social media conglomerates accountable for giving these people a platform? Furthermore, what steps should be taken in an ideal world? Should the state or U.S. law take steps toward rectifying cyber racism?

Works Cited

Daniels, Jessie. 2009. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Lanham, Md.: Rowman & Littlefield Publishers. (selections)

“Doll Test.” YouTube. YouTube, 7 February 2012. Web. 14 Sept. 2014.

Kwok, Irene, and Yuzhou Wang. 2013. “Locate the Hate: Detecting Tweets against Blacks.” Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, 1621–22.

“Racist Tweet Screenshot.” 2014. JPG file. http://republicansareracists.com/2012/11/13/gop-meltdown-2012-mapping-racist-tweets-from-the-2012-election/

“The Impacts of Technology.” YouTube. YouTube, 15 April 2013. Web. 14 Sept. 2014.

“Twitter Screenshot.” 2014. JPG file. http://3qdigital.com/category/socialmedia/twitter/

How the Dynamics of Cyber Racism are seen in the Debate over Facebook Name Regulation

Recent news has circulated around Facebook’s enforcement of its policy requiring “real” names. Although it may not seem directly related to cyber racism and this week’s readings, I still see common themes showing how cyber racism is at play in this recent debate. The Guardian reports that hundreds of drag queens were recently informed that their Facebook profiles would be deactivated unless they changed their names to “real” names. In the discussion of authority, white supremacy, and globalization below, I hope to shed some light on the key issues I see with the enforcement of this policy.

Safety in the Internet world

In the same Guardian article, Facebook justified its actions by saying that “we’ve seen situations where people have used fake names to engage in bad behavior online”, and the author notes that this “completely misses the point. Seeing ‘situations’ where fake names enable bad behavior isn’t a reason to crack down on pseudonyms. It’s a reason to crack down on bad behavior.” Getting to the root of one problem is better than creating another. Facebook’s own policy on “changing your name and birthday” echoes its justification: “Facebook is a community where people use their real identities. We require everyone to provide their real names, so you always know who you’re connecting with. This helps keep our community safe.” But hate speech online occurs regardless of whether you can be identified by name. In a blog post on NPR, Chris Wolf, national chairman of the Anti-Defamation League’s Civil Rights Committee, tried to justify the real-name policy: “As someone who has studied online hate for 20 years, I know that a real name policy works to prevent hate speech and harassment. … On balance, the benefits on anonymity for one group needs to be balanced against the potential harm anonymity can cause everyone.”

The problem here lies with the anonymity of a name. In this case, people aren’t trying to hide; they are trying to belong. A name is the ultimate signifier of identity: it is how people know you and how you represent yourself to the world. Nicknames and chosen names can often be more important signifiers of belonging to certain communities than one’s birth name. Internet communities of belonging, for example your circle of Facebook friends, are thus regulated and patrolled under the guise of public safety. Where is the regulation of usernames on white supremacist sites that enable hate speech and blogging? Why aren’t those users forced to give their real names, to own up to their speech, and not hide behind the screen of anonymity and a fake username? Jessie Daniels even discusses in Cyber Racism how people on these sites choose their usernames, and how cultural, racial, class, and gendered specificities go into picking a username that fully represents the individual who registers with the site (Daniels 63). If these people are allowed so much freedom and thought in defining parts of their identity with a name, why aren’t people allowed to use their desired names on Facebook?

Further along the lines of safety, some people change their Facebook names to protect themselves from an abusive past; perhaps there is someone they don’t want to find them or know about their life. Changing a name gives people control over how they represent themselves on the Internet and offers a comfortable level of anonymity managed within a small circle of friends. In the NPR article the author notes that often, “in a violent situation, control over one’s self is taken away. One of the ways to restore it, she says, ‘is really having the freedom of choosing whatever name you want to use, whatever gender you really are and want to be’”. By being forced to use “real” names, those who once felt safe are now going to be exposed.


The concept of “real” is itself very troubling. Who gets to dictate what a real name is? How will this be regulated? In the “Help” section, Facebook says that “The name you use should be your real name as it would be listed on your credit card, driver’s license or student ID”, and a very broad statement proclaims: “Pretending to be anything or anyone isn’t allowed”. This framing implies that those who don’t use their birth names are somehow “pretending” to be something they’re not: any deviation from the norm is not real, is an act, and should not be taken seriously.

How white supremacy reigns

The white supremacy behind cyber racism is discussed at length in the Jessie Daniels reading. It is the foundation upon which all the tenets of Internet racism and policing rest, and the lens through which established thought comes to be. Just think about who gets to make all the rules, who is designing these sites, and who holds the leadership positions in the companies that make decisions like this one. The authority in charge has the power to institute the ideals of white supremacy and acts as a catalyst for maintaining the current power dynamics. The author of the Guardian article notes that, “A ‘real name’ policy is fundamentally an appeal to authority – we outsource the ability to determine “real names” to parents and parent figures in the government. So when Facebook allows users to report “fake names”, it is giving them permission to enforce other people’s identities that they think are not sufficiently ratified by existing government authorities”. The population in power is largely white and male. Laws have been institutionalized to privilege certain parties and make it practically very hard to live comfortably as anything but white. Systemic racism uses institutionalized practices to invoke and enforce racist outcomes. In this case, the regulation of self-proclaimed identity is an attack on personal liberties and a way of systemically controlling populations.

Furthermore, Facebook is a site for people all over the world and thus projects the ideals of white supremacy to a global audience. Daniels notes that white supremacy is not confined to US boundaries, although the majority of hate websites do come from American domains (Daniels 45). The “white is right” mentality is insidious and pervasive in all facets of life around the globe, appearing covertly in everyday life. Facebook’s “Community Standards” page says, regarding hate speech, that “Facebook does not permit hate speech, but distinguishes between serious and humorous speech.” I will not discuss the problems with this statement at length for the sake of brevity, but again, putting the judgment of harmful speech in the hands of outside parties disregards the effects that many covert, subtle forms of cyber racism have on large populations of people. Just look at popular media: the people featured on magazine covers and in leading roles on television. We are saturated daily with images of successful white heteronormativity (examples below) and thus told that this way of life is “normal” and “right”. Everything else is some deviation from that, and thus subject to regulation to realign with those frameworks.

[Screenshots of Google image search results showing magazine covers and television casts]

 

Gender Dynamics at play

One doesn’t have to look far to see how gender dynamics are affected in this case. First, people who may be transitioning, or who are drag queens, are directly targeted: Facebook is taking away people’s liberty to say who they really are and to define how they represent themselves to the world. Furthermore, men or women with abusive partners may be searching for a safe haven in the anonymity of names on Facebook and will now be forced to reveal themselves to a potentially dangerous partner. Thinking about who is targeted by this new enforcement also reveals gender roles at play. I personally know many high school girls (mostly white) who change their Facebook names during senior year so that college admissions offices can’t search their names and find pictures of them at some high school party with a beer in hand or wearing a “slutty nurse” Halloween costume. These young white girls are not the target of Facebook’s new enforcement of the policy, and yet one could argue they have more to hide than many people who are.

Racial dynamics at play

Beyond the racial elements already mentioned, we need to consider what we really mean when we say “real” name. In practice this means a “normal, white-sounding name,” which reinforces racial divides in yet another way. Many studies have examined the effect of the name on a résumé on hiring outcomes; as a 2009 New York Times article notes, “Research has shown that applicants with black-sounding names get fewer callbacks than those with white-sounding names, even when they have equivalent credentials.” Since whiteness equates to being hired and being more successful, it is no wonder that applicants try to downplay aspects of their blackness (like going by a different name). The article further notes that “if playing down blackness is a common strategy born of necessity, perceived or real, it still takes a psychic toll, maybe a greater one now, as people calibrate identity more carefully.” The regulation of Facebook names is just a more institutionalized version of the self-regulation that already, sadly, happens in many non-white communities. As one man quoted in the article put it, “In some ways, they are denying who and what they are. They almost have to pretend themselves away.”

Immigrant and other dynamics at play                 

The policy is also very US-citizen-centric. First, names cannot include characters from other languages, enforcing English as the universal language and othering anyone who writes or speaks anything else. Furthermore, Facebook only accepts certain forms of ID as valid:

  • “Birth certificate
  • Driver’s license
  • Passport
  • Marriage certificate
  • Official name change paperwork
  • Personal or vehicle insurance card
  • Non-driver’s government ID (ex: disability, SNAP card, national ID card)
  • Green card, residence permit or immigration papers
  • Voter ID card”

Many of these IDs are specific to American citizenship, and the tests, fees, and overall process of acquiring them often reflect institutionalized practices meant to privilege white Americans and shut out, or make life difficult for, anyone who threatens the status quo.

If we want Facebook to be a truly free space for people to express themselves, connect with friends, and ultimately feel safe, then having a “fake” name in no way threatens this. More often than not, the names used are more “real” to the user than their birth names anyway. What matters is thinking about who has the authority to regulate in these situations, who the victims are, and how those power dynamics speak to larger disparities in gender and race relations.

Sources:

“What Names Are Allowed on Facebook?” Facebook Help Center. Facebook, n.d. Web. 29 Sept. 2014.

Farrington, Dana. “Facebook Requires Real Names. What Does That Mean For Drag Queens?” NPR. NPR, 28 Sept. 2014. Web. 28 Sept. 2014.

“What Types of ID Does Facebook Accept?” Facebook Help Center. Facebook, n.d. Web. 29 Sept. 2014.

“Facebook Community Standards.” Facebook. N.p., n.d. Web. 28 Sept. 2014.

Luo, Michael. “‘Whitening’ the Résumé.” The New York Times. The New York Times, 5 Dec. 2009. Web. 28 Sept. 2014.

“ByL-frtCYAA_ebH.jpg.” 2014. JPEG file. https://pbs.twimg.com/media/ByL-frtCYAA_ebH.jpg

Screen shots of google searches. 2014. JPEG file.

Zimmerman, Jess. “Facebook’s Real Name Policy Is a Drag, and Not Just for the Performers It Outs.” The Guardian. The Guardian, 24 Sept. 2014. Web.

Striking a Balance: Freedom of Speech and Institutional Censorship Online

Note: I have censored words from social media posts that could be perceived as culturally offensive or inappropriate

According to Jessie Daniels, cyber racism refers to “a range of white supremacist movements in Europe and North America and to the new horizons the Internet and digital media have opened for expression of whiteness across national boundaries.” (4) As demonstrated in the two readings for this week, the Internet is both an empowering tool for personal expression and a lawless haven for racism and bigotry. In order to ensure a safe online experience, many people are pressuring companies like Facebook and Twitter to devise better censorship algorithms capable of detecting and eliminating offensive behavior, such as the racist tweets following Amandla Stenberg’s casting as Rue in the 2012 Hunger Games movie.

[Screenshot: racist tweets following Amandla Stenberg’s casting as Rue]

Cyber racism, however, is not always easy to identify. As Irene Kwok and Yuzhou Wang explain, the presence of “racist tweets against blacks…may not be obvious against a backdrop of half a billion tweets a day.” (1621) In their research, Kwok and Wang demonstrate the difficulties online companies will face in their efforts to systematically censor racially charged comments. The filter designed for the study, for instance, captured offensive language only 76% of the time and was unable to identify relationships between words, causing it to erroneously censor innocent language. Their study also revealed the added complication of determining which words are appropriate only within certain communities. In other words, according to Kwok and Wang, an effective filtration system will need to recognize statements as racist or non-racist depending on the racial identity of the speaker. The following two tweets provide a good example. Should Case A be allowed in the Twitter community, or should the company take an all-or-nothing stance on the use of certain words? How do you think Facebook and Twitter should treat repeat offenders?

 Case A: African-American female quoting a contemporary cultural icon:

[Screenshot of tweet]

Case B: White male attacking an African-American teenager:

[Screenshot of tweet]

The difference between the two statements is obvious to any person consciously searching for online slurs, but the two appear equally offensive to an algorithm that cannot analyze the context of the conversation. Computers, and even humans, face a similar predicament in identifying the cloaked comments and websites discussed in Jessie Daniels’ book. She provides an extensive array of examples in her well-researched and encompassing writing; I was particularly captivated by her analysis of the cultural impact of cyber racism. According to the author, the “least recognized—and, hence, most insidious—threat posed by white supremacy online is the epistemological menace to our accumulation and production of knowledge about race, racism, and civil rights in the digital era.” (8) In one example, Daniels describes how a user attempted to employ “moderate-sounding rhetoric and an appeal to the nation’s founding ideals to make a point that runs counter to the democratic ideals of equality for all.” (53) The excerpt is included below:

[Excerpt of the cloaked comment quoted in Daniels]

I found this user’s comment particularly troubling because, unlike personal attacks made over Twitter, it is virtually undetectable to filtration systems and has the potential to indoctrinate unsuspecting readers. As Daniels points out, the regulation of such websites is a highly controversial and polarizing topic of debate. She persuasively argues that the United States must first recognize the racial realities of its history and then embrace the urgent need to restrict hate-speech online. According to the author, the United States tends to “ignore and downplay the formative effects of colonialism, slavery, ongoing and systemic racism, and the white racial frame on the acceptance of white supremacy online.”(179) That is particularly worrying given the amount of influence the United States wields online. After reading this chapter, I asked myself two questions: Can governmental entities regulate the Internet in an effective manner? If so, should regulation be crafted at the national level, despite the “border-less” nature of cyber-space?

The questions generated from these readings are crucial to the integrity and sustainability of the Internet as a productive platform for exchanging information. However, both users and governmental actors must accept that free speech cannot take precedence in every online situation. With free speech online, individuals must assume a greater responsibility in order to ensure it is used properly. As stated in response to a recent ruling in Australia, “free speech is not absolute…there is a point where it comes into conflict with other rights and should be legally curtailed.” (The Australian) Indeed, in the case of Twitter I believe the company should begin to enforce a filtration system in a transparent and user-friendly manner—communicating to users why their tweets are being blocked and signaling which words to avoid. Twitter could employ a bigram system, as Kwok and Wang suggest, to analyze the relationship between words and minimize the risk of blocking non-racist tweets. Similarly, Twitter should provide ample warning before deactivating the account of an alleged repeat offender. Those users should also have access to a resolution center in order to appeal their case. Many other sites, such as PayPal and eBay, already include such services on their customer service section.
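The bigram idea mentioned above can be sketched in a few lines. This is a minimal illustration of feature extraction only, not Kwok and Wang's actual system: adjacent word pairs become features, so a downstream classifier can learn that some words are offensive only in certain combinations.

```python
# Sketch of bigram feature extraction: instead of treating each word
# independently (unigrams), adjacent word pairs are used as features,
# capturing some relationships between words. Training a classifier on
# these features is out of scope here.
def bigrams(text: str) -> list[tuple[str, str]]:
    tokens = text.lower().split()
    return list(zip(tokens, tokens[1:]))

print(bigrams("free speech is not absolute"))
# -> [('free', 'speech'), ('speech', 'is'), ('is', 'not'), ('not', 'absolute')]
```

A unigram model sees only isolated words; a bigram model at least distinguishes, say, a quoted phrase from a direct attack by the company a word keeps.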

Similarly, I am confident that national government officials need to play a central role in the effort to combat racism online. Despite a lack of results at the judicial level, past cases have provided great insight into how national governments could potentially regulate the Internet. In her discussion of the French lawsuit against Yahoo in 2000, for instance, Daniels mentions the development of a new technology, geo-ID, which can “identify and screen Internet content on the basis of geographical source.” (177) This tool could not only help restrict the flow of racist information onto public sites, but also, more importantly, could enable the government to match IP addresses to known hate groups, as catalogued in the Southern Poverty Law Center website.

Class Discussion Questions

  1. Would you be willing to have your Twitter and Facebook posts screened in order to guard against hate speech online, despite knowing you would never engage in such behavior?
  2. As an online user, do you feel there are adequate mechanisms currently in place to report abusive or hurtful language? Which social media platform do you feel does this best? Which does it worst?
  3. Do you think it is more important to prosecute individual actors online who bully others based on their race, or established organizations that create cloaked websites to misrepresent historical information? Please describe what you deem the most effective mechanism for regulating the option you picked.

Works Cited

“Balotelli Tweet: The ‘Ugly Side’ Of Social Media.” YouTube. YouTube, 22 Sept. 2014. Web. 29 Sept. 2014.

Daniels, Jessie. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Lanham, Md.: Rowman & Littlefield, 2009. Print.

“Facebook Can’t See the Problem with Horrible Racism.” Us Vs Th3m. 2 Apr. 2014. Web. 29 Sept. 2014.

“Global Internet Map 2006.” Global Internet Map 2006. Web. 29 Sept. 2014. http://www.telegeography.com/telecom-resources/map-gallery/global-internet-map-2006

“Hate Map.” Southern Poverty Law Center. Web. 29 Sept. 2014.

Jasmine. “I Ain’t Got No Quarrel with Them Viet Cong … They Never Called Me Nigger.” Muhammad Ali, 1966.” Twitter, 29 Sept. 2014. Web. 29 Sept. 2014.

Kwok, Irene, and Yuzhou Wang. “Locate the Hate: Detecting Tweets against Blacks.” Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013. 1621–22. Print.

Liftoffs. “Because You Can’t Afford Air Conditioning Because You’re a Nigger.” Twitter. Twitter, 29 Sept. 2014. Web. 29 Sept. 2014.

“Racist Hunger Games Fans Are Very Disappointed.” Jezebel. 26 Mar. 2013. Web. 29 Sept. 2014.

“What Does the Online Hate Prevention Institute (OHPI) Do?” YouTube, 19 Aug. 2014. Web. 29 Sept. 2014.

Wilson, Tim. “Free Speech Is Best Medicine for the Bigotry Disease.” The Australian. 26 Mar. 2014. Web. 29 Sept. 2014.

California — the not-so-golden, kind-of-hateful state

In his work, Cyber Racism, Jessie Daniels discusses the intersection of race and digital technology, usually considered two separate entities.  He begins with a powerful quote from white supremacist David Duke, “I believe that the Internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.”  We typically focus on the positive effects of the internet — increased interconnectedness, faster dissemination of information, and globalization of conversation.  However, it is precisely these aspects that white supremacists have taken advantage of to bring their conversation and ideology online.  The danger of cyber racism and supremacist propaganda online extends beyond harassment and physical harm, especially as youth increasingly turn to the Internet for civil rights information but are unable to easily discern fact from fiction.

Without critical digital literacy, it is nearly impossible for the unwitting user to realize they have stumbled upon a white supremacist site that detracts from every hard-earned win in racial equality. Despite the fifty years that have passed since the civil rights movement, the global white identity Les Back terms “translocal whiteness” fosters a racist cyberculture that is much harder to fight head-on. A small consolation is that these sites are not successful recruitment tools, as supremacist groups still rely on face-to-face interactions to bring in new members. Still, every online community provides a forum where participants can associate their ideology with that of our founding fathers and thus “engage in a self-perpetuating cycle of validating (their) knowledge claims.”

It’s important to note the bigger-picture struggle between freedom of speech and protection of equality in America. The American predisposition toward ensuring the former makes it very difficult to convict offenders, as demonstrated by the lone case of Richard Machado. His email crime was only possible because of seemingly benign technological capabilities: email, the cc function, searchable online directories, and online aliases. At the time of this book’s publication, Machado was the only person ever convicted of an Internet hate crime in the United States. Given the sheer number of cloaked supremacist sites, this raises the question of whether we too often turn a blind eye to cyber racism. Filtering programs are simply not enough, especially paired with the inconsistent application of content rules on sites like Google and AOL. Google may shift blame to its search algorithm to explain why supremacist sites appear among top results, but perhaps we should think about better moderating what is displayed. Regardless, if free speech considerations continue to weigh heavier in this debate, then it is absolutely necessary to increase digital literacy and educate our youth about cyber racism as the newest form of oppression.

[Image: “Hate speech is not free speech”]

This week’s second reading, “Locate the Hate: Detecting Tweets against Blacks,” discusses the use of labeled data from Twitter accounts to monitor hate speech, particularly of an anti-black nature. In the constant struggle between free speech and censorship, Twitter is unique in the intensity of its “racially charged dialogues,” especially in comparison to other social media platforms like Facebook (aptly described in class as a platform of positivity). The authors point out that hate speech on Twitter is “not always evident given Twitter’s instant feeds,” which helps anti-black users with large followings gain a surprising amount of traction on the site.

The authors’ initial approach was to compile 100 tweets containing hate speech, which three students of different races then classified as offensive or not and, if offensive, rated on a 1-5 scale of offensiveness. The raters agreed on only 33% of the classifications, suggesting machines would have an even harder time identifying racist tweets. The authors then turned to a Naïve Bayes classifier trained on roughly 25,000 racist tweets (self-classified or categorized through news sources). Labels recorded the reasons each tweet was deemed racist, such as “contains offensive words” and “threatening.” Since 86% of the anti-black tweets contained offensive words, those words became the basis of the unique-word features in the racist/nonracist training sets.
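Kwok and Wang do not publish their classifier’s code, but a unigram Naïve Bayes model of the kind they describe can be sketched in a few lines of Python. The toy tweets and labels below are invented for illustration; the authors trained on tens of thousands of real labeled tweets.

```python
import math
from collections import Counter

# Invented toy corpus standing in for the authors' labeled tweets.
train = [
    ("racist", "hateful slur example tweet"),
    ("racist", "another hateful slur tweet"),
    ("nonracist", "lovely weather in boston today"),
    ("nonracist", "great game last night go team"),
]

def fit(examples):
    """Estimate per-class priors and per-class unigram counts."""
    word_counts = {}
    class_counts = Counter()
    vocab = set()
    for label, text in examples:
        class_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.split():
            wc[w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Return the class with the highest log posterior for a tweet."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        n = sum(word_counts[label].values())
        for w in text.split():
            # Laplace (add-one) smoothing so unseen words don't zero out
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict("hateful slur", *model))        # classified as "racist"
print(predict("weather in boston", *model))   # classified as "nonracist"
```

The intuition matches the paper’s finding: since most anti-black tweets contained offensive words, word frequencies alone carry most of the signal, which is exactly why the coded-joke tweets without any “hate words” slip through.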

An interesting finding was that “niggers” and “nigger” were most prominent in the racist sphere, whereas “niggas” and “nigga” appeared in informal speech within the black Twitter community. The authors concluded that “acceptable usage of these words is restricted to blacks and approved allies of blacks,” given that “nigga” has become synonymous with “person of male gender.” Thus, the race of the tweeter is an important nuance that adds to the complexity of any analysis of racism.

The Southern Poverty Law Center is an internationally reaching organization whose “innovative Teaching Tolerance program produces and distributes…documentary films, books, lesson plans, and other materials that promote tolerance and respect in our nation’s schools.” Take a look at the YouTube video below for an example of how Laurence Tan, a fifth-grade teacher in LA, centers his curriculum on five values (engage, educate, experience, empower, and enact) and includes families and parents in the conversation to help his students become “socially critical and responsible individuals.”

The SPLC also offers resources like its Hate Map, which visually displays the number of hate organizations in each state. Looking at the map holistically, it makes sense that the South is heavily concentrated with hate groups. However, I was genuinely surprised that California has the highest number of recorded hate organizations, at 77 total. As a California native, I have always perceived the state as both liberal and extremely welcoming toward diverse communities. I hadn’t considered the friction this diverse population might cause, and we see the pattern duplicated in New York, an even more diverse state (often described as a melting pot), which still sits on the higher end of the spectrum with 42 hate groups. This complicates my assumption that a racially heterogeneous population naturally increases understanding and communication between racial subgroups. At the same time, racially homogeneous states in the Midwest have an almost nonexistent number of hate groups, so under what conditions does increasing diversity in communities actually help reduce racial tensions?

Discussion Questions for the Week

1) In “Locate the Hate,” the authors state that “acceptable usage of these words (niggas/nigga) is restricted to blacks and approved allies of blacks.” However, I don’t believe there is ample justification for using the term “nigga,” even if it is accepted as casual speech. By this line of reasoning, someone could argue that Asians are allowed to use the term “chink” freely, whereas any non-Asian would be considered racist for communicating an ethnic slur. Why is it sometimes okay for racial subgroups to use slurs against themselves? Isn’t the use of these slurs detrimental to our efforts to combat racism?


<Funny if an Asian says it, unacceptable for anyone else to say it?>

2) According to Facebook’s Help Center, “hate speech, credible threats, or direct attacks on an individual or group” are not allowed on the site. Twitter’s posting rules on violence and threats state only that “you may not publish or post direct, specific threats of violence against others.” Unlike Facebook, Twitter does not explicitly address hate speech. This is likely a factor in the amplification of hate speech that the authors of “Locate the Hate” describe in their article. In your opinion, should Twitter also prevent the posting of hate speech on its site, and if so, who or what should determine what constitutes hate speech?

3) The Southern Poverty Law Center focuses on education as the means of reducing the existence and potency of racial hate in the United States. To me, this implies a focus on educating the nation’s youth during the formative years when racial biases take shape. According to this New York Times article, “we are living at an unusual moment when the rate of progress has been dizzying from one generation to the next, such that Americans older than 60, say, are rooted in a radically different sense of society from those younger than 40.” Given this generational tension, what back-channels exist for educating older generations with stronger racial biases? Do any of you have stories about how your perspective has helped shift that of your parents and/or grandparents?

Works Cited

Bai, M. (2010, July 17). Beneath Divides Seemingly About Race Are Generational Fault Lines. Retrieved September 24, 2014, from http://www.nytimes.com/2010/07/18/us/politics/18bai.html?_r=0

Daniels, J. (2009). Cyber racism: White supremacy online and the new attack on civil rights. Lanham, Md.: Rowman & Littlefield.

Hate Map. (2014, January 1). Retrieved September 24, 2014, from http://www.splcenter.org/get-informed/hate-map#s=CA

Kwok, I., & Wang, Y. (2013). Locate the Hate: Detecting Tweets against Blacks. Association for the Advancement of Artificial Intelligence.

Laurence Tan -Teaching Tolerance Awards. (n.d.). Retrieved September 28, 2014, from  http://www.youtube.com/watch?v=fVeNxOQPKMc

The Twitter Rules. (n.d.). Retrieved September 24, 2014, from https://support.twitter.com/articles/18311-the-twitter-rules

Warnings | Facebook Help Center | Facebook. (n.d.). Retrieved September 24, 2014, from https://www.facebook.com/help/101389386674555/

Anyone can become a successful hotelier…as long as you are white: Racism and Airbnb

Airbnb is a service that allows anyone to become a hotelier overnight. For any short period of time, members of Airbnb can list rooms, or an entire house or apartment, for rent to complete strangers. Users set up a profile that lets potential guests see their name, picture, and images of the space available. “As of 2013, Airbnb has 300,000 listings, comparable in total size to Marriott’s 535,000 rooms worldwide” (Edelman and Luca). Ben Edelman and Michael Luca analyzed information about Airbnb hosts in New York. What they found not only paints a picture of significant racial biases on websites like Airbnb, but also reflects racism in our present society. Edelman and Luca found that “non-black hosts are able to charge approximately 12% more than black hosts, holding location, rental characteristics, and quality constant. Moreover, black hosts receive a larger price penalty for having a poor location score relative to non-black hosts” (Edelman and Luca, 2014).

Stats on Airbnb

On Airbnb’s home page, the company states that racism is prohibited and that Airbnb is an “open market place” (Airbnb). Yet the problem persists. The problem is implicit racial bias. Implicit biases are stereotypes and judgments toward certain people or groups that unconsciously affect the way we think and act. Implicit biases can cause a person to act favorably toward a certain group of people, or very unfavorably, without the person even being aware of their actions. Because these biases are buried deep in our subconscious, a person cannot find them, hide them, or control them (Kirwan Institute, 2014).


What do you see?

Mahzarin R. Banaji and Anthony G. Greenwald explore this phenomenon in their book Blindspot: Hidden Biases of Good People. In one test, Banaji and Greenwald examined whether people preferred Oprah or Martha Stewart. They found that people showed a preference for Stewart even when they consciously insisted they preferred Oprah (Banaji and Greenwald, xiii). This example shows us that our minds’ preferences are often out of our control: even a diehard Oprah fan will implicitly favor Martha Stewart because she is white. Though this idea seems far-fetched, Banaji and Greenwald offer another example of how easily our minds can trick us. In the diagram pictured below, Banaji and Greenwald assert that squares A and B are the same color. This seems completely implausible. Yet, if you take a piece of paper and cut holes where the squares are, hiding the rest of the image, the squares match perfectly (Banaji and Greenwald, 8).

In the case of Airbnb, the power of implicit bias is revealed, along with the potential fallout of these sneaky biases. Users of the site may not intend to hold these biases, yet because the biases are clearly there, they are impeding the livelihoods of African-Americans who depend on revenue from Airbnb. Implicit biases do not only affect the lives of people using Airbnb; they affect the lives of students and everyday people (see the YouTube video below). This is why the issue must be addressed. Unfortunately, Airbnb has not done much about it. This could be because the company has been enduring quite a bit of heat in the media lately. Last fall, the company faced a crackdown from New York’s Attorney General, Eric Schneiderman, who claimed the company’s use of short-term rentals violated New York City laws (Stanley). The only response Airbnb has offered is to direct people to its website’s policy on racism (seen below).


Airbnb policy on racism

It is hard to say exactly what can be done about this issue, because it would be hard to do away with host profiles, and it is even more difficult to control implicit biases. When it comes to implicit biases, I think the best thing Airbnb could do is inform people that these biases exist. The discrimination page on its website simply states that discrimination of any kind is prohibited, but what if users are not aware they hold biases? Perhaps Airbnb could link to the implicit bias test (https://implicit.harvard.edu/implicit/) so users could see their biases for themselves. I also think it would be helpful to change Airbnb’s current profile settings. Instead of showing the host’s profile picture up front, Airbnb could display information about the person (i.e., their reviews) and the property before users are allowed to see the host’s picture. Potential guests would then be choosing their host based on quality of hospitality, not on looks.

Works Cited

Banaji, Mahzarin R., and Anthony G. Greenwald. Blindspot: Hidden Biases of Good People. New York: Delacorte, 2013. Print.

Edelman, Benjamin, and Michael Luca. “Digital Discrimination: The Case of Airbnb.com.” Harvard Business School (2014): n. pag. Web. 19 Jan. 2014. <http://www.hbs.edu/faculty/Publication%20Files/14-054_e3c04a43-c0cf-4ed8-91bf-cb0ea4ba59c6.pdf>.

“Life Cycles of Inequity: A Series on Black Men.” YouTube. YouTube, n.d. Web. 22 Sept. 2014.

State of the Science: Implicit Bias Review 2014. Kirwan Institute for the Study of Race and Ethnicity, 2014. Web. 19 Sept. 2014.

Stanley, Chuck. “Racial Profiling on AirBnB.” Reality Check NYC. N.p., 5 Feb. 2014. Web. 19 Sept. 2014.

“What Is Airbnb’s Position on Discrimination?” Vacation Rentals, Homes, Apartments & Rooms for Rent. N.p., n.d. Web. 22 Sept. 2014.


Axes and Breasts: Gender Depiction in MMORPGs

One of my first experiences connecting to the Internet in a social capacity was playing EverQuest. EverQuest, or EQ for short, was a Massively Multiplayer Online Role-Playing Game (MMORPG) that revolutionized the genre when it was released in 1999. A friend of my parents, who frequently babysat my brother and me when they were out of town, was an early player of the game. She took her gaming seriously, subscribed to multiple accounts, and had multiple computers on which she ran the game. Often these secondary accounts were left to run automatically as trading mules, but when we stayed over, she let us make our own characters and play on them.

Wallpaper available from the EverQuest website. The original box art was a cutout of this, focusing on the woman on the left in the blue and yellow.

EQ, from before the game is even installed, presents questionable ethics. The box art features a scantily clad, white-skinned, blond, human female. Also notable is her apparent class: she is a magic user (other archetypes include the sword-wielding warrior and the nimble, knife-using rogue). Magic users across the genre stand far from combat, casting spells outside the mayhem of the melee, and as a general rule they are limited to cloth armor. This lends itself to especially skimpy outfits, although in my day I’ve seen some pretty revealing plate armor on female character models.

The point is that the game design of EQ presents a pretty poor model of leadership for a genre-defining game. While the box art frequently promises far more epic gameplay than can readily be found, the fashion choices are pretty consistent: male models get big steel pauldrons while female models get loincloths and sports bras. (The model distinction is critical, as will become clear later on. There is nothing stopping a male from playing a character that looks female, and vice versa.) While EQ is still around, it’s far from the most popular thing on the market.

In 2004, World of Warcraft launched. Like EQ, it is a fantasy MMORPG, but it took off in a way even EQ never did. The above video is the promotional cinematic used to market the game. I’ve broken down the characters that appear in it:

  1. Dwarf male hunter, who gets a gun and a giant bear pet
  2. Night Elf female druid, who gets to turn into a feline to run through pretty nature and who, when she fights, is on the defensive against a physical onslaught
  3. Undead male warlock, he gets some pretty sweet 40 foot tall infernal stone buddies
  4. Tauren male druid (a race with, as an aside to the main point, some questionable Native American appropriation going on), who gets to fight with a giant tree-trunk club
  5. Orc male warrior, who gets to look ripped and go on the offensive in combat

Not the best step forward. There is a 1:4 female-to-male representation ratio, and the only female gets to wear clothing that wouldn’t get her through a Cambridge winter while she fends off an attack from a clearly juicing male. As a final note on looks, the first expansion introduced a new race, the Draenei, who get a bicep gender difference that looks like this.

An in-game screenshot available on the Draenei information page on the World of Warcraft website

Inside the game itself, there’s another serious representation gap. As Bergstrom et al. note, in-game professions are unequally represented by gender (31:2011). While they use a statistical tool I, as a humanities concentrator, don’t really understand, they show that the gender breakdown of trainers (non-player characters who are accessed to further advance skill) is skewed. The only two professions where female trainers outnumber males to a statistically significant degree are first aid and herbalism (32:2011). Compared to male-trainer-dominated professions such as blacksmithing, engineering, and notably mining (where males outnumber females 24-to-2, p-value 0.0016), flower picking and putting on band-aids become pretty obviously stereotypical female pastimes: soft, passive, support roles.
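For fellow humanities concentrators, here is a rough sketch of what a significance claim like that means. The exact binomial test below asks how likely a 24-to-2 split would be if each trainer were equally likely to be modeled male or female. This is an illustration of the idea, not necessarily the test Bergstrom et al. ran, so its p-value need not match their reported 0.0016.

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Two-sided exact binomial p-value (doubled smaller tail, capped at 1)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    upper = sum(pmf[k:])        # P(X >= k)
    lower = sum(pmf[:k + 1])    # P(X <= k)
    return min(1.0, 2 * min(upper, lower))

# 24 male mining trainers out of 26 total, null hypothesis of equal odds
p_val = binom_two_sided(24, 26)
print(f"p = {p_val:.2e}")  # far below 0.05: such a skew is very unlikely by chance
```

Either way, the small p-value is the point: a 24-to-2 gender split among mining trainers is wildly improbable if gender were assigned evenly, so the imbalance looks like a design choice rather than noise.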

But the game makers aren’t the only ones at fault in portraying female bodies in traditional, weak, squishy roles, because when it comes down to it, the only thing that makes a particular string of code male as opposed to female is a graphics choice. Gaming, and non-player characters in particular, takes the sex-gender distinction back to before it had been articulated. There is no gender. There is only “biological” (or digital, as the case may be) sex.

Players know this. While most people I remember playing with chose characters whose models matched their offline sex/gender (nobody I ever played with outed themselves as transgender), not all player characters did. In another paper, Bergstrom et al. note that while avatar gender choice does not divide along stereotypical healer-warrior roles among novice players, it likely becomes a learned norm through the game design, one reflected at the upper skill levels. For example, while “female priest avatars account for 85.2% of the priests […] only 51.9% of priests were played by women” (102:2012). The priest is a primarily healing class, and a particularly squishy one too.

What can be done about this? Remodel some of the NPCs. There is no reason, plot-based or otherwise, why many of the currently male-bodied NPCs can’t be female-bodied. Mining trainers can, and should, more frequently be female. There is literally nothing stopping this change other than a lack of caring about equitable representation. And while they’re at it, the designers can give female models robes similar to the males’. Instead of adding a low neckline and some slits in the skirt, the same item that currently includes these features on the female model and not the male can have these objectifying features removed. Femininity is not inherently bad, and it doesn’t have to be erased, but it can be presented in a non-objectifying and non-exploitative manner. If Bergstrom et al. (2012) are correct in asserting that gender-normed avatars are a learned practice, the game makers can get together and do their small part in fighting stereotypes.


Bergstrom, K., McArthur, V., Jenson, J., & Peyton, T. (2011). “All in a Day’s Work: A study of World of Warcraft NPCs comparing gender to professions”. In Proceedings of the 2011 ACM SIGGRAPH Symposium on Video Games (Sandbox ’11). ACM, New York, NY, USA, 31-35.

Bergstrom, K., Jenson, J., & de Castell, S. “What’s ‘Choice’ Got to Do With It? Avatar Selection Differences Between Novice and Expert players of World of Warcraft and Rift”. In Proceedings of the International Conference on the Foundations of Digital Games (FDG ’12). ACM, New York, NY, USA, 97-104.

Blizzard Entertainment. World of Warcraft Cinematic Trailer. Jan. 22, 2010. youtu.be/vlVSJ0AvZe0?list=UUbLj9QP9FAaHs_647QckGtg

Blizzard Entertainment. Untitled screenshot of Draenei. Dec. 16, 2010. http://us.battle.net/wow/en/media/screenshots/races?view=draenei02&keywords=draenei

Sony Online Entertainment. Untitled wallpaper [cropped]. n.d. https://www.everquest.com/media

Limited Control Over Unlimited Outflow of Privacy

I first used a computer when I was around 6 years old. It was my mother’s computer, and I used it to draw digital pictures using art software for children. When I was around 10, I started to play a simulation game called “Sim Park,” in which you create your own virtual park. As my parents did not allow me to play any kind of console games, such as the Super Famicom, Sim Park was the only digital game I could enjoy during my childhood. However, the game was not connected to the Internet and I was not competing against other players, so what I could do was limited to the offline software. My interest in computer games therefore did not go further.

When I was 14, my use of the computer shifted from playing games to emailing and creating digital documents. As I had friends in the US, I exchanged emails with them. I also spent time in front of the computer typing the script for a theater play I was involved in. In addition, I took a “digital information class” in junior high school and learned how to create a website. Still, at this point my use of the computer was basically limited to the offline sphere; my only online activity was email exchange.

When I entered college, I started to use the Internet to search for information online, mainly related to my studies. I got my first laptop at this time and also used it to write papers and create PowerPoint slides. I also created an account on an SNS called “mixi,” a Japanese version of Facebook, although I had never heard of Facebook at that time. It was not until I moved to Canada as an exchange student in 2006 that I started to use the real Facebook. Just a few days after my arrival in Canada, my Canadian roommate suggested I sign up for Facebook. There was no Japanese language option on Facebook at that time, and none of my Japanese friends had a Facebook account. I also started to use MSN Messenger while I was in Canada, but only to talk with my non-Japanese friends or with Japanese friends who were also studying abroad. I remember that the instructions for MSN Messenger were all in English as well.

The Japanese version of Facebook, “mixi,” was the first SNS I used.

Recalling my past use of SNS, I think that if I were not an English speaker, I would have less access to information online, as the information available in Japanese is limited and it is not common for most Japanese people to use English in everyday life, including in the digital world.

If Japan shared a language with other countries and engaged more in the development of the digital world, our access to the global information sphere would be broader and deeper. In other words, Japan has lost many chances to bring diversity into the digital world due to the language barrier, and perhaps due to Silicon Valley’s white-centered environment, which resulted in the “digital divide” (Nakamura, 2014). However, the “digital divide” Japan experiences is not imposed by isolation from Internet access, but rather by Japan actively isolating itself from the global world by creating “Japan(ese) only” digital devices, computer software, and web applications. We ironically call this isolated development of globally available products the “Galapagos syndrome,” and it greatly limited the quality and quantity of information accessible to me before I became comfortable using English in my daily life.

Isolating the country (Japan) from the rest of the world and developing original technology: we call this the Galapagos syndrome.

There is almost no day when I am not using the Internet. Many of the online pages I use require me to sign up by providing personal information, and I sign in to those pages every day. As Scholz (2013) states, I recognize that “We, the ‘users,’ are sold as the product,” as I receive ever more emails and advertisements I never requested. However, since so far I am not losing anything beyond being annoyed by those unrequested emails and advertisements, I have not cared too much about what the unknown owners of the pages I signed up for do with my information. Even if I delete my accounts on those pages, I believe the information I have already provided is stored for an unlimited period of time and could be abused at any moment. Thus, the extent to which I have control over my personal information is already limited. I can still try not to disclose too much about my personal life, political opinions, and so on, but this does not keep my privacy from the owners of the web pages. It only prevents me from providing some personal information to the actual readers (of Facebook, mixi, blogs, etc.), who are also selling themselves as “products” of the digital world, just as I am.

The control I have over my privacy online is limited.

The Interwebs for a Noob: A Source of Stress

Objectively, I can appreciate the many advantages, pleasures, and conveniences of our digitalized lives. My generation has, at its phone-flipping fingertips, access to the largest database ever known to mankind. For this, I am often grateful. I cannot imagine my life without the ability to instantaneously “google” (*verb*) any question that pops into my head or arises from a conversation. I cannot imagine my life without the ability to condescendingly fact-check and correct a person mid-conversation without even having to leave the table. I cannot imagine what it would be like to feel disconnected from my family all over the world. I cannot imagine what it would be like to not see pictures of my baby nephew, or of my cousin’s wedding, or of my uncle’s edited, filtered, and ultimately “instagrammed” dinner. Simply put, I cannot imagine my life without the Internet. But sometimes I wish I could.

My first real connection with the Internet stemmed from my brother’s love of gaming. In 2002, at the ripe age of 7, I wandered into the mysterious realm of Runescape with a sword-swinging older brother (and idol) by my side. At this time, Runescape was just beginning to garner worldwide interest and the user base was rapidly increasing. My experience began on the docks fishing for lobsters (a time-consuming but simple process) while my brother took breaks for snacks and made vague promises about compensation. For hours I would stare at the screen, clicking on the water and then on the lobster cage. Not exactly most people’s definition of fun, but there was something thrilling about it. Text bubbles appeared over the people around me as they traded and bargained and socialized and I had a vague concept that these people were from all over the world. Sitting in my small home in the North of England, I felt universally connected to another world, perhaps a more adult world, in which no one had to know I was a “noob” at both gaming and life.

But it soon became clear that my youth (“noob status”) would make itself known online. Not only did my brother effectively gain free fishing labor from me, but online strangers also recognized and acted upon my naivety. I vividly remember having a conversation with another user during which he complimented me on my rune armor. Having worked very hard to buy it, I found myself blushing behind the screen. I was subsequently talked into letting him “try it on” in exchange for a single coin. Once the trade was complete, his character disappeared, signaling that he had gone offline (and taken my beautiful rune armor with him). I cried for hours, mourning my loss and cursing the cruel world of Runescape. My parents couldn’t believe that someone could be so mean. My brother laughed.

In many ways this experience (and unfortunately/embarrassingly similar ones thereafter) shaped how I feel about the Internet today. While I now consider myself computer literate, I still feel vulnerable to scams. So much of my information is stored online and my online presence is only growing by the day. While I readily sign up and provide information for a growing number of apps, my fundamental lack of Internet IQ bothers me constantly. Do I really know how safe my banking information is? Can I trust this website to keep my personal information private? These are questions to which I have to assume answers without really having the time, motivation, or knowledge to make educated decisions. Will I ever not be a noob?

But I’ve skipped over a very important transition from my online gaming days to my online social days. Upon moving to Canada, I entered the world of “MSN Messenger”. My time spent on Runescape waned and my time spent awkwardly messaging friends after school began. I mean, how much did 10-year-old me have to tell my classmates only a few hours after seeing them? “The ride home was super smooth today and my mum’s cheese and crackers were on point”. Yet it seemed so important at the time: it made the cool kids “cool”, earned shyer people brownie points, and spurred on young love. So even though it seemed pointless, it served a greater social function in the school and I had no choice but to make some time for it.

This designated online “social” time grew with the emergence of Facebook. I made an account at the age of 12, which meant I had to lie about my age (and it’s been a struggle changing my birthday ever since). Facebook has recorded my life since then, leaving me susceptible to embarrassing 2007 Facebook statuses resurfacing, but also providing a living, breathing, multimedia diary of sorts. I worry about my account crashing because so much of my life is stored on there. I also worry about my life being permanently stored on Facebook and how much of it could come back to haunt me.

All in all, this makes molding my Facebook “presence” particularly difficult. Of course, Facebook is a social tool used to impress other people with just how amazing the lives of its users are. This is, in itself, a source of stress, especially with the rise of new social media tools such as Instagram and Twitter. “It’s Thursday today…I should really Instagram a throwback. I haven’t tweeted in days…I should do something funny before people unfollow me. Sophomore year has nearly started and I still haven’t uploaded any photos from Freshman year! Wow, there are so many photos that never made it to Facebook. I should really dedicate a month to getting caught up on the last three years.” The thought process is ridiculous, yet so, so real. It is a constant nag, both internal and external, with friends and family wanting you to upload photos from this event, while you also try to perfectly time your profile picture upload so it gets the maximum number of likes (while still living life and being a human being, of course). If this isn’t enough to worry about, our generation also has to be concerned about maintaining a professional image on our online accounts (or having really amazing privacy settings). Balancing the two can be extremely difficult and leads to even more questions: “Should I upload this funny photo of me chugging beer in London? Everyone would think that’s funny. But would that look bad to a future employer?” The struggle is real.

Yes, it is true that the Internet has made some things much easier. But it is also true that the Internet leaves many people vulnerable to deception and fraud (as illustrated by my Runescape incident). While it is true that this mirrors “offline” life, I would argue that the Internet has the ability to more explicitly target socio-economically challenged communities, youth (noobs), and the elderly. Furthermore, the Internet has made social life more complicated than ever before and, in doing so, has meant that the “online” population has to spend an ever-increasing amount of time, effort, and thought on their image. Sometimes I try and imagine my life without these stresses but then I realize I haven’t refreshed my email in 30 minutes and I probably have to do that ASAP.

 

*Side Note: I later had my Runescape character hacked and stolen. The new user went on to make my character one of the top 25 foresters in the world. Wut.*

 

From Jump Start to Late Starter

As with most kids, in the US and apparently overseas as well, my experience with technology and computers centered largely around video games. I can’t recall exactly when we got our first personal computer for the house, but it had to be somewhere between 2002 and 2003. While I used my PlayStation and Nintendo 64 mainly for fun, leisurely games such as Spiderman, Capcom’s Street Fighter, Super Smash Bros., and others, the computer was largely reserved for educational games. Before the personal computer entered our world, my mom would buy me and my cousins various learning books, from the Pre-K level onward, to practice the things we were learning. The computer allowed her to simply continue this trend digitally.

My childhood

Continued…

To this day I have stacks of computer games for several grade levels in Math/Reading Blaster, typing games, and even a game that was meant to teach me about my asthma. It was these games that really shaped my initial experience with this type of technology. At this period in my life, computers were meant almost exclusively for the transference of knowledge, or tied somehow to my education. The only other places I used computers were at school for computer classes (mainly learning how to type) and at my after-school program where, once you had finished all of your homework and reading, you were permitted to play games on the computer.

The next stage of my interactions with technology came in the form of social media. When it comes to social media, it seems I have always been consistently behind the game. All of my friends had AIM accounts well before a friend of mine took pity on me in the 6th grade and created my “ggoodggurl132” persona. AIM was useful because, before I had a phone, it was the way I communicated with many of my friends after school or during winter and summer breaks. I had created an email account when I was about 10, but it seemed weird to email my friends all the time. For some reason I remember still communicating with a select few people on AIM in high school, but heaven knows why. The next step on my social media ladder was MySpace, another party which I, once again, arrived late to. All throughout middle school my classmates were using MySpace as a way to connect with one another (and apparently other random people…) and I felt left out. At the time, my middle school classmates were also connected to BlackPlanet, Xanga, and MiGente; social media sites meant exclusively for Black, Asian, and Latino people, respectively. I can’t recall my reaction to the existence of these sites, but I don’t think it was negative. I think I just took each of them as another social media site, one specific to connecting with people from your community. Nevertheless, my mom watched a lot of Dateline back then and was very averse to social media, so I had to choose wisely. When I was 13 I begged my mom to let me make a MySpace account and she finally relented. I was so excited to get to create my sparkly background and put together the playlist that would play when people visited my page, and I remember what a big deal it was to be in/at the top of someone’s top 5. I can’t remember it very well, but I think that was also my first introduction to mild coding, as I remember having to use codes to tell the program what color I wanted various things to be on my profile.
Nevertheless, this excitement didn’t last very long: as soon as I got to high school, I realized that everyone had moved on to Facebook. In the spring of my sophomore year my friends once again took pity on me and helped me create a Facebook account. This time I didn’t ask my mom’s permission, and she was not very happy about it. She told me to use a fake last name, because she didn’t want me putting my information out on the internet. She would often ask to see my Facebook and ask me how I knew the various people listed as my friends. It was completely unacceptable to friend anyone I didn’t actually know in real life.

My mom’s apprehension came less from fears about privacy, or a lack of rights to what you put on the internet, and more from a fear that I would talk to people that I didn’t know and, for some incomprehensible reason, try to meet them in person. This was never my goal, but I think for many of us our early fear of technology came largely from stories of young children and teenagers, usually girls, joining chatrooms or talking to people who misrepresented themselves for malicious purposes. While I’m sure this is still a legitimate fear, it seems that our concerns have now moved largely to privacy and being able to protect our own information. It’s also a matter of how closely our online identities are now tied to our real-life identities, such that we need to be careful what persona our social media sites give off to potential employers or school admissions committees or even our mothers who may have recently friended us.

As Nathan Jurgenson notes in his article “The Disconnectionists,” many scholars are concerned with the topic of the “online self”: the inauthentic digital persona we create for social media that restricts us from interacting with people IRL (in real life) as our true, authentic selves. As Jurgenson presents them, the disconnectionists believe that if we could just disconnect from social media we could get back to this true version of ourselves. But Erving Goffman, in his classic *The Presentation of Self in Everyday Life*, laid out the sociological theory that every second of our lives is a performance, put on for various audiences using various props and performative strategies in various settings. If this is the case, is there ever really one, true, “real” self?

It might seem that our ability to present one uniform and acceptable self would decrease with the number of social media sites we participate in (Facebook, Twitter, Tumblr, blog posts, etc.), but I prefer to view it as increasing our ability to present our multiple selves. All these sites can be used for different purposes and in some ways can exist in entirely different worlds. In other words, they all have different audiences, and I think this is a good thing. There is a feeling of freedom that should come with not being tied down to one identity, or persona, and that is the attitude with which I approach my recent interaction with the digital world. That interaction comes largely through social media, not only as a way to connect with friends or to gather valuable information, but also as a form of self-expression. In this way, my experience with technology is currently shaping who I am, largely through Twitter, which keeps me updated on what’s going on in popular culture, the world, and particularly the black community. As opposed to what seems to be the general consensus, I feel much more connected to the world than I did before the Internet came into my life.