The Promises and Perils of Peer Production

Psychological studies are often criticized for their use of undergraduate students as research subjects. As the argument goes, it is difficult to generalize findings based primarily on the attitudes of college students at a small number of universities in the United States. There is certainly truth to such criticism. But there’s also another truth the critics often ignore: recruiting a large number of randomly selected research subjects from diverse demographic backgrounds is rarely feasible in terms of either money or time.

But there’s potentially a new method of quickly recruiting research subjects at minimal cost. Several websites allow both private businesses and academic researchers to hire large numbers of workers to complete short, simple tasks. The workers are independent contractors and the pay is low—often five to twenty cents for five to ten minutes of work; the result is that individual workers, each providing a small amount of labor, collectively complete a large task for a single employer. It’s called peer production, and the largest service providing this labor is Amazon’s Mechanical Turk. It’s potentially a system that allows researchers to quickly and cheaply recruit thousands of research subjects from around the world—the famous “college sophomore problem” may have finally found a solution.

Mechanical Turk, however, creates its own problems for researchers. A survey of Mechanical Turk users by New York University professor Panos Ipeirotis found that approximately 50% of the site’s workers come from the United States, with another 40% coming from India. Within the United States, the average Mechanical Turk user is a young, female worker who holds a bachelor’s degree and has an income below the U.S. household median. This does not reflect the demographic makeup of the United States, so problems of generalizability and accuracy may remain. However, this may not be as large a problem as it first appears; several studies provide evidence that researchers can limit the population of Mechanical Turk users they choose from and adjust their results, making findings more generalizable than those of traditional undergraduate studies.
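One common form of such adjustment is post-stratification weighting. The sketch below is purely illustrative—the demographic cells and every number in it are made up, not Ipeirotis’ figures—but it shows the mechanics: respondents are reweighted so that each demographic cell counts in proportion to its share of the target population rather than its share of the sample.

```python
# Post-stratification sketch: reweight survey respondents so that
# demographic cells match their share of the target population.
# All cell names and numbers below are illustrative, not real data.
population_share = {"young_female": 0.15, "young_male": 0.15,
                    "older_female": 0.35, "older_male": 0.35}
sample_share = {"young_female": 0.40, "young_male": 0.20,
                "older_female": 0.25, "older_male": 0.15}

# Each cell's weight is its population share divided by its sample share,
# so over-represented cells (young women here) are weighted down.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

# Reweighted estimate of some survey outcome, given per-cell sample means:
cell_means = {"young_female": 0.6, "young_male": 0.5,
              "older_female": 0.4, "older_male": 0.3}
weighted_mean = sum(sample_share[c] * weights[c] * cell_means[c]
                    for c in cell_means)
```

Because each cell’s sample share times its weight reduces to its population share, the reweighted estimate is simply the population-weighted average of the cell means.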

The larger risk for academics is not generalizability or accuracy, but rather the quality of work Turkers provide. In my own experience with a survey experiment that was put on Mechanical Turk, most survey responses were complete and all quality control questions were answered accurately. However, a large number of surveys were completed in an extremely short time period and some responses were incoherent or appeared to involve little thought. Ipeirotis’ survey provides some clues as to why this might be the case: according to his research, 15% of Mechanical Turk users in the United States use the site as their primary source of income; an additional 30% of users report that they use the site because they are unemployed or underemployed.

If a significant portion of workers use Mechanical Turk primarily as a means of generating income, their incentive is to game the system to earn as much money as possible. This leads to surveys being completed quickly and without careful attention. Even quality control questions may not be enough: online communities of Mechanical Turk workers, such as Turker Nation, have developed techniques for identifying quality control questions, and skilled users can likely answer them accurately while still breezing through the rest of the survey. Researchers interested in the quality of survey responses would do well to build further quality checks into their surveys; one potential method would be to ask several specific questions about the experimental manipulation test subjects were given. This would at least ensure that survey respondents read everything they were supposed to.
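A minimal sketch of that kind of screening might look like the following. Every field name and threshold here is hypothetical—they are not drawn from any actual survey platform—but the logic combines the three signals discussed above: implausibly fast completion, missed attention checks, and inability to answer questions about the experimental manipulation.

```python
# Hypothetical response records; field names and thresholds are illustrative.
responses = [
    {"id": 1, "seconds": 38,  "attention_correct": 2, "manipulation_correct": 0},
    {"id": 2, "seconds": 412, "attention_correct": 2, "manipulation_correct": 3},
    {"id": 3, "seconds": 95,  "attention_correct": 1, "manipulation_correct": 1},
]

def flag_low_quality(r, min_seconds=120, min_attention=2, min_manipulation=2):
    """Flag a response that was too fast, missed attention checks,
    or cannot answer specific questions about the manipulation."""
    return (r["seconds"] < min_seconds
            or r["attention_correct"] < min_attention
            or r["manipulation_correct"] < min_manipulation)

kept = [r for r in responses if not flag_low_quality(r)]
print([r["id"] for r in kept])  # respondents 1 and 3 fail at least one screen
```

The point of combining screens is that a worker who has learned to spot attention checks can still be caught by a speed cutoff or by substantive questions about the stimulus itself.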

Articles about Amazon’s Mechanical Turk often reference the inspiration for the service’s name. As the story goes, Amazon took the name from an 18th-century machine that was claimed to be able to beat any chess player. The machine toured the world and dumbfounded amateur and professional chess players alike. Years after its success, the entire contraption was revealed to be a hoax—inside was a skilled chess player making all the moves. It is indeed an apt metaphor for the site. Though Mechanical Turk quickly delivers cheap survey responses, we can’t forget that it is ultimately real people taking the surveys. And these people have just as much of an incentive to make money as researchers have to save it; academics must adjust their research methods accordingly.

Works cited:

Ipeirotis, Panagiotis G. “Demographics of Mechanical Turk.” (2010).

Mason, Winter, and Siddharth Suri. “Conducting Behavioral Research on Amazon’s Mechanical Turk.” Behavior Research Methods 44.1 (2012): 1-23.

Income and Internet Usage

When I initially started to do research on Americans’ social media habits, I operated under the assumption that almost all Americans had access to the internet. As such, I believed that most individuals could easily access Facebook or Twitter from their computer; if someone didn’t use social media it was due to personal preference, not lack of access.

And it is true that a large majority of Americans live in homes with a computer connected to the internet. As of 2012, 74.8% of households had an internet connection at home (Census Bureau). Among the 25.2% of American households without home internet access, cost is a major barrier: 7.3% of all households report not having a connection because it is too expensive (Census Bureau). That’s a small minority of the population, but the United States is a large country with over 117 million households; that 7.3% represents over 8.5 million families that can’t afford the internet.
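The arithmetic behind that last figure is easy to verify (the household count here is rounded):

```python
# 7.3% of roughly 117 million U.S. households (rounded figures).
households = 117_000_000
cost_barrier_share = 0.073
affected = households * cost_barrier_share
print(f"{affected:,.0f} households")  # a bit over 8.5 million
```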

I had to take into account a new variable when examining America’s social media use: class. And indeed, regular internet usage is positively correlated with a person’s income. 73% of people making less than $30,000 a year report using the internet compared to 99% of people making over $75,000 a year (Madden).

I expected social media use to likewise be positively correlated with income. But surprisingly, people in lower income brackets are actually more likely to use social media than wealthier individuals. 72% of people making under $30,000 a year use social media. Only 65% of people in higher income brackets report using social networking sites. In addition, lower income groups are more likely to use both Facebook and Twitter (Madden).

The increasing use of cell phones as an internet browsing device might explain poorer people’s propensity to use social media. 43% of people making less than $30,000 a year do most of their internet browsing on their cellphone as compared to 21% of people making over $75,000 a year. Cellphones lower barriers to entry by allowing people to cheaply access the internet even when they don’t have access to a computer connected to the internet at home.

I was surprised that having a lower income—despite making a home computer with internet access less likely—doesn’t prevent people from using social media. Perhaps social media truly is a democratizing force that allows a greater portion of the population to participate in our country’s political discourse.

Given the increasing importance of social media in geopolitics, it would also be interesting to explore the relationship between income and internet usage in other countries. Are richer households more likely to have access to the internet in other countries? Are poor people in Hong Kong or Tehran able to access social networking sites? The answers to these questions may help us understand the demographic makeup of protest movements around the world.

“Computer & Internet Trends in America.” Measuring America. United States Census Bureau, 3 Feb. 2014. Web. 19 Oct. 2014. <http://www.census.gov/hhes/computer/files/2012/Computer_Use_Infographic_FINAL.pdf>.

Madden, Mary. “Technology Use by Different Income Groups.” Pew Research Internet Project. Pew Research, 29 May 2013. Web. 19 Oct. 2014. <http://www.pewinternet.org/2013/05/29/technology-use-by-different-income-groups/>.

Facebook in History

Earlier this week I spent time in Swem’s Special Collections looking at the notebook of a 19th-century businessman. The Prince William County resident’s dated cursive was nearly illegible, the pages were faded and torn, and the text was overfilled with marginalia and curious attempts at multiplication. But the notebook is worth the trouble. For historians, such writings provide critical insights into the daily anxieties and hopes of middle-class Virginians: the excitement over the world’s fair, the religious commodities purchased, the debts owed. From such documents we can historicize the experience of everyday life.

All of which led to a question: a hundred years after my death, what documents will historians use to historicize the ordinary, daily affects of my generation? Postings on social network sites are one immediate answer. Within my age cohort, 89% of people use social networking sites and 46% of internet users report posting original photos and videos (Pew 2014). On social media, we work out the inane realities of day-to-day living—information of little use to us, but of value to historians trying to figure out exactly what we were up to.

Historians no longer have to sift through mildewed, incomplete documents. Instead, we have a perfectly preserved ledger of the social lives of an entire generation. Paucity has been replaced with overabundance and researchers will struggle to construct coherent historical narratives out of a nearly infinite reserve of readily available data. In a matter of decades, the field of history stands to be revolutionized and historians may find themselves pining for the days when one had to crank through microfilm for a glimpse into the past.

But despite the radical changes social media threatens to impose upon Herodotus’ old discipline, there’s a relative lack of understanding as to what structures our online behavior. What motivates a post? Which topics are discussed? Who does the posting? Which is to say, we have all the more reason to begin an inquiry into the ways people discuss politics online. No discipline stands alone, and history may find itself in need of political theory and computer science sooner rather than later.

“Social Networking Fact Sheet.” Pew Research Center’s Internet & American Life Project. Jan. 2014. Web. 7 Oct. 2014. <http://www.pewinternet.org/fact-sheets/social-networking-fact-sheet/>.

Technology Can’t Change Politics

A surprising number of academic articles about the internet–particularly those written in the early 2000s–refer to internet technology as a transformative tool with the potential to fundamentally alter American politics. Unfortunately, it seems that technological innovation isn’t sufficient to spur political reform. The past 30 years have seen enormous technological change, including the widespread adoption of personal computers, the internet, and cell phones. These technologies have had a profound impact on the ways in which we interact with others and perform daily tasks. And yet our political systems remain unchanged. The political debates and challenges of 1994—or even 1984—seem remarkably similar to those of 2014.

This, I believe, is due to the powerful effect of institutions. America’s political institutions create an incentive structure for politicians. New technologies can change how we elect people, whom we elect, how we interact with elected officials, and so on. But in some sense these changes are superficial. Whether you use social media to help elect Barack Obama or print media to put George H. W. Bush in the White House, there will still be two dominant parties. Money will drive political outcomes. Capital will accrue wealth faster than labor, leading to inequality. One senator will have the ability to block entire pieces of legislation. Organized political minorities will have a greater influence than apathetic majorities. Technology, however great its effect on our personal lives, is largely unable to alter the incentive structures America’s political institutions create.

The fight to reform institutions is not a battle that can be resolved through technology. Rather, it is a struggle that involves political and philosophic debate. Technology cannot alter the fundamental inequalities of power and wealth that distort political outcomes and make change so challenging. To pretend otherwise merely obfuscates the real issues and makes effecting positive social change more difficult for everyone involved.

What’s So Bad About Polarization?

The increasing use of the internet as a communications tool has fundamentally changed the way Americans discuss politics. Where people once bashed politicians in local barbershops, they can now do so on social media sites with people around the world. Some have posited that this will naturally diversify citizens’ political networks, thus enhancing the democratic process. A review of the literature, however, casts doubt upon this optimistic view. Instead, the future looks much like the past: research indicates that the internet likely increases polarization by allowing citizens to more easily self-select into ideologically homogenous groups (Bienenstock et al., 1990; Garner and Palmer, 2011).

Take for example Facebook, America’s most popular social network site. Facebook is designed so that you only see content from people whose pages you visit the most, i.e. your closest friends. Your closest friends tend to be very similar to you, including in their political beliefs (McPherson et al., 2001). As such, if it is true that people self-select into homogenous networks on Facebook, online political discussion will lead to greater partisanship and polarization.

But could increased polarization actually be a positive development for American democracy? While closed discussion among a partisan, polarized group of people might seem like a negative thing, studies indicate that polarization actually leads to more informed and consistent voters (Levendusky 2010). Despite its negative connotations, polarization motivates citizens to become more politically engaged and knowledgeable; it also serves as a powerful heuristic that allows ordinary citizens to easily understand complex political issues. Perhaps the danger to our polity lies not so much in polarization itself as in broken political institutions that are unable to accommodate polarized parties. Unfortunately, that is a problem whose solution can only come from political imagination and will; an internet connection will not suffice.

Against the Quantitative Turn in Political Science

In doing readings and conducting literature reviews for both the SNaPP Lab and other government courses, one dominant trend in contemporary political science emerges: Theory is out, quantitative research is in.

At first glance, this quantitative turn in political science would seem to be a positive development. Theory-guided works often appear impenetrable, given to subjective interpretation, and unable to provide practical policy solutions. The rise of quantitative research in political science offers the emancipatory hope that the pernicious subjectivity so endemic to the social sciences can be partially overcome. Speculative theory can be replaced by an objective and systematic method of inquiry that allows the discipline to better address the practical, everyday needs of people.

It sounds nice, but an actual look through contemporary political science research leaves one a little less optimistic about the discipline’s new passion for empiricism. Studies replete with advanced statistical techniques and computational models have produced, as Ziliak and McCloskey call it, a “cult of statistical significance” that has allowed the profession to skirt the fundamental questions in favor of limited debates over measurement or method that fail to tell us much at all.

This isn’t to bash all empirical research in the social sciences. Quantitative research can be endlessly illuminating and the advent of computers has given academics the ability to quantify and interpret a greater breadth of information than at any time in human history. But without any underlying theory or principle to guide the questions being asked, researchers will continue to find themselves awash with data but powerless to make it mean anything.

Political scientists would do well to pay attention to the warning Hannah Arendt put forth some forty years ago: “The need of reason is not inspired by the quest for truth but by the quest for meaning. And truth and meaning are not the same.” As humans, we find value and meaning in the political. The power of political science is that it allows us to better understand the human condition by examining the ways in which we forge self-knowledge and purpose through our engagement in public discourse and the political process. A political science discipline which ignores this fact does so at its own peril.

Gender in Online Political Discussions

In conducting research on how and why people engage in political discussion on social media sites, I found myself continually wondering about the question of gender. What interested me wasn’t the demographics of social media users–we know that the average user of social media is female and in the 18-24 age bracket–but rather the ways in which gender might structure political debate online.

A wide body of literature exists which suggests that women in positions of power are expected to conform to traditional conceptions of femininity. This often causes women to be evaluated differently than men for performing the same task. For example, a 1979 study by Rhoda Unger showed that female professors who were considered harsh graders were not perceived as being “nice” and “nurturant” and as such received lower evaluation scores. Men who were perceived as harsh graders, however, did not experience a commensurate drop in their evaluation scores. In addition, other studies have shown that female instructors who are perceived as being unsociable and unfriendly receive substantially worse evaluations than female instructors who are considered warm and friendly. In contrast, male instructors were given better evaluations when they were perceived as being unfriendly (Kierstead et al., 1988). These pernicious stereotypes regarding what constitutes appropriate behavior for each gender likewise manifest themselves in public political discourse; numerous studies have shown that female leaders are devalued when they employ direct and assertive governing tactics that are traditionally associated with masculinity (Eagly, Makhijani and Klonsky, 1992).

As such, it seems only natural to ask whether these disparities in how men and women are evaluated persist on social media sites. Are women who engage in heated political discourse online viewed differently than men who do the same? Are impassioned political arguments made by women online considered less authoritative? Less informed? What is the general perception of women who are opinion leaders within a particular online social network?

Unfortunately, there is very little available research on the topic. That, however, only makes the issue of gender in online political debate a more interesting and fruitful area for us to explore through further reading, surveying, and experimentation!