Web-scraping

Earlier in the semester, I was trying to collect newspaper articles from online archives. When the structure of the archives changed from PDF to webpage links, I needed to find a way to automate the retrieval process.

Professor Settle then introduced me to Professor Van Der Veen, who uses web-scraping in his own research. He also held a workshop that went through the web-scraping tutorial, which can be found here.

Web scraping entails “automatically get[ting] some information from a website instead of manually copying it.” There are several ways to go about doing this. We used Python along with several other packages and tried it out on the William & Mary Government Department website.

One part of the process involves Firebug, a Mozilla Firefox add-on that gives access to a variety of web development tools. Once you open a webpage and click on Firebug, you can see which part of the webpage corresponds to which HTML code in the web development window at the bottom of the screen, as shown below.
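To give a flavor of the fetch-and-parse workflow the tutorial builds toward, here is a minimal sketch in Python. The requests and BeautifulSoup packages, the starting URL, and the choice to grab every link are my own illustrative assumptions, not the tutorial's actual code; in practice you would use a tool like Firebug to find the specific HTML elements to target.

```python
# A minimal fetch-and-parse sketch -- illustrative, not the tutorial's exact code.
import requests
from bs4 import BeautifulSoup

url = "https://www.wm.edu/as/government/"  # hypothetical starting page

response = requests.get(url)
response.raise_for_status()  # stop early if the page didn't load

soup = BeautifulSoup(response.text, "html.parser")

# Print the text and destination of every link on the page.
for link in soup.find_all("a", href=True):
    print(link.get_text(strip=True), "->", link["href"])
```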

Personally, I have yet to make it all the way through the tutorial because no attempt at programming really ever goes smoothly, no matter what I try.

Although I’m no longer interested in retrieving the newspaper articles that originally led me to want to learn how to web-scrape, web-scraping is a useful skill that is worth learning.

References

http://www.sciedupress.com/journal/index.php/air/article/view/1390

http://stair.wm.edu/scraping.html

http://getfirebug.com/

Are We Really More Alike Than Unalike? Anchoring Vignettes in Cross-Cultural Survey Research

You are conducting an expansive international public health survey, with an ultimate goal of cross-cultural comparison. You ask your respondents the following question, adapted from a World Health Organization survey:

“Overall in the last 30 days, how much of a problem have you had with energy and vitality?”

The response categories are: None, Mild, Moderate, Severe, and Extreme/Cannot Do.

A 27-year-old woman who comes home fatigued during a particularly hard few weeks at work answers, “Severe.” An 85-year-old woman who can get out of bed in the morning and dress herself with minimal assistance answers, “Mild.” Does the younger woman have more of a problem than the older woman with energy and vitality, or are the two respondents applying differing standards for energy and vitality?

Because of their ages, you can assume that it is very likely that the two women do not possess the same latent level of “energy and vitality” – the older woman probably has objectively less. In addition, your survey spans different countries, so these two women are not only of different ages but also come from different cultures.
Classic anthropological as well as clinical studies suggest that culture influences perceptions of pain. In some countries, self-reports of health even correlate negatively with objective measures of health (King 2009, Sen 2002). This problem is called differential item functioning (DIF). While it has been studied most extensively in the public health literature, it poses a problem for political science survey research, too – especially in cross-cultural comparisons of political attitudes (on engagement, efficacy, corruption).

Anchoring vignettes represent one possible solution to DIF. By presenting a set of hypothetical scenarios that correspond to each value of a variable, researchers establish absolute variable thresholds for all respondents. Establishing these thresholds allows for interpersonal comparability across cultures.
Anchoring vignettes rest on the following two assumptions, however:

1. Response consistency: Despite the hypothetical nature of the vignette scenarios, respondents apply the same absolute scale to evaluating the vignette characters as they would to evaluating themselves.

2. Vignette equivalence: Although respondents have differing life experiences, socioeconomic backgrounds, and personalities, they use the same absolute scale to judge the levels of the variables presented in the vignettes (King et al. 2004).

Researchers rarely test the assumptions of response consistency and vignette equivalence, even though they do not always hold, especially in cross-cultural survey research.

When researchers do test them and the assumptions fail to hold, they tend to conclude by questioning the validity of the anchoring vignettes method as a correction for DIF and interpersonal incomparability.

Rather than discount the method altogether, however, why not establish, as Kapteyn et al. (2011) suggest, a “systematic experimental approach to the design of anchoring vignettes”?

References
Kapteyn, Arie, et al. “Anchoring Vignettes and Response Consistency.” RAND. (2011).

King, Gary, et al. “Enhancing the Validity and Cross-cultural Comparability of Measurement in Survey Research.” American Political Science Review 98.1 (2004): 191-207.

King, Gary. “The Anchoring Vignettes Website.” http://gking.harvard.edu/vign (2009).

Sen, Amartya. 2002. “Health: Perception versus Observation.” BMJ 324:860–861.

A Brief History of Press Freedom in Egypt

As an Egyptian-American daughter of journalists, I have been fascinated by the Egyptian media over the past few years. Growing up in America, where media bias at least attempts to be subtle, I was shocked at the blatant pro-government stance I saw on TV and in print media while in Egypt. For my own research, I’ll be looking at state-run and independent newspaper coverage of events over the last few years. But to do that, it’s important to understand the history of print media in Egypt.

The development of the print press spans a long period of Egyptian history. During the first phase of British control, from 1882 to 1914, the press was used as a mechanism for political participation. The newspaper was “a key organ of political expression and criticism” and “played a critical role in the crystallization of a new Egyptian national identity” as Egyptians struggled to break free of British colonial rule.[1] The 2013 overthrow of Morsi was not Egypt’s first coup. In fact, Egypt’s post-colonial history begins with Nasser’s 1952 coup, which overthrew the monarchy and expelled the remnants of the British occupation. Sadat succeeded Nasser in 1970, and when he was assassinated, Mubarak was sworn in as president in 1981. Despite each president’s different leadership style, all three presidencies were characterized by strong media censorship.

Mass Media Under Nasser

When Nasser and his Free Officers overthrew King Farouk’s monarchy, he outlawed censorship but reinstated it just one month later, warning the media to be “approving of the government’s activities or to be noncommittal.”[2] Censorship greatly increased under Nasser, not only due to his strict rule, but also as a result of self-censoring journalists who saw themselves as part of Nasser’s cause and thus voluntarily became his puppets. Those who chose to defy Nasser faced torture in prison or, if they were lucky, were merely silenced. Mustafa Amin, for example, a liberal journalist who favored Western democracy over Nasser’s close ties with the Soviet Union, was accused of being an American spy, imprisoned, and tortured. In a BBC interview, Amin described being attacked by dogs to the point of collapse “seven days a week.” In 1960, Nasser nationalized the press, giving ownership of the private press to the National Union and effectively crushing any semblance of independence in the media.[3]

Mass Media Under Sadat

Sadat’s eleven-year presidency continued the censorship tradition, though with a less heavy hand than Nasser’s. Where Nasser would arrest and detain, Sadat would fire and end careers. In 1973, over one hundred journalists had their professional licenses revoked for six months as a reminder “of their dependence on the regime for their livelihood.”[4] Then in 1979, Sadat passed a “Law of Shame” which prevented the press from publishing antigovernment content, and in 1980, his appointment of members of the Press Syndicate put “the press in the hands of non-journalists and government employees.”[5] Public criticism did exist in Sadat’s Egypt, however. Journalists were able to criticize the economy, traffic, or any other problem so long as government officials were not explicitly named.

Mass Media Under Mubarak

The 1981 assassination of Sadat led to the “election” of Mubarak as president. Mubarak exhibited a markedly more lenient approach to press freedom than Nasser or Sadat, but was extremely repressive nonetheless. Although they needed government approval to operate, independent and opposition newspapers emerged during this time.[6] Many newspapers were privatized, and the Internet and satellite television offered new platforms for criticism. But despite this growing pluralism in the media scene, “arrests and abuse of journalists—police assaults and raids, detentions, even torture—continued.”[7] In 1995, Mubarak’s Parliament passed Press Law 93, which imposed anything from fines to prison sentences on journalists “publishing false information with the aim of attacking the economy” in order to accuse industrialists and politicians; another bill passed in 1996 made it possible to charge journalists who criticized Mubarak or his family.[8] Although a diverse media emerged in Mubarak’s era, critics and opponents were too often silenced and repressed, making Mubarak only a minuscule improvement on his predecessors.

The Media After the January 2011 Uprising

The 2011 uprising brought about enormous hope and potential for change, both for Egypt as a whole and for the journalism industry. During the mass protests that called for Mubarak’s resignation, social media played a large role in mobilizing the masses and disseminating information at a time when state-run media refused to accurately portray what was happening in the streets. Many Egyptians relied on the Internet and Al Jazeera for accurate information during the protests, which led to the revocation of Al Jazeera’s broadcasting license, the detainment of its bureau chief, and eventually a nearly week-long, nationwide Internet and phone blackout.[9] Since the restoration of the Internet and the fall of Mubarak, many steps have been taken to reform the media, including the disbandment of the Ministry of Information.

Media Under Morsi

In early October 2012, Morsi pardoned detainees who had been arrested for participating in protests since January 2011 and cleared many journalists of their charges. Later in October, however, Morsi’s government came under scrutiny for its reactions to criticism. Bassem Youssef, commonly known as “the Jon Stewart of Egypt,” famously ridiculed and criticized Morsi on his show El Bernameg. He was arrested and jailed for insulting the president and “showing contempt toward Islam,” but later released on bail and allowed to resume his show.[10] (He’s giving a talk in DC tomorrow!) Following the Mubarak pattern of repressing journalists and critics, “criminals” under Morsi included Alber Saber Ayad, charged with “defamation of religion”; a Shi’a man accused of desecrating a mosque; two Muslim men “charged with defaming Christianity for burning the Bible”; and a Christian man dealt a six-year sentence for posting photos to the Internet that were considered offensive to Islam.[11] Editors-in-chief of many large state newspapers were replaced and popular news hosts were investigated. Still, Morsi’s presidency saw an opposition media “more vocal and critical than ever.”[12]

Media Today

The past three years of tumult have resulted in an increasingly polarized political atmosphere, visible in the discourse of journalism and the mass media. Set against an extremely poor education system, journalism in Egypt remains in shambles. Journalists perceive themselves to be “activists instead of watchdogs,” writing biased and often propagandistic news articles rather than objective, fact-based content.[13] This bias is also due in part to an Egyptian journalistic phenomenon known as al-maktab, “the desk,” whereby the well-educated, “top-dog” journalists write and re-write other journalists’ articles, ultimately producing uniform narratives in the media.[14]

In a recent study, Egypt ranked among the top ten jailers of journalists in the world. There is still a long way to go in ensuring press freedom in Egypt, and understanding the implications of this repression is critical to understanding the events and public attitudes that have developed over the past four years.

 


[1] Gershoni, Israel. (1995). Review of Contemporary Egypt: Through Egyptian Eyes. Essays in Honour of Professor P.J. Vatikiotis, by Charles Tripp. Middle Eastern Studies, 31(1), 174-180.

[2] Zajackowski, D. (1989). A Comparison of Censorship, Control, and Freedom of the Press in Israel and Egypt: An Update From the Journalists’ Perspective.

[3] Elkamel, S. (2013, May 1). Objectivity in the Shadows of Political Turmoil: A Comparative Content Analysis of News Framing in Post-Revolution Egypt’s Press.

[4] Zajackowski, D. (1989).

[5] Alianak, S. (2007). Middle Eastern leaders and Islam: A precarious equilibrium (p. 180). New York: Peter Lang.

[6] ElMasry, M., Basiony, D., & Elkamel, S. (2014). Egyptian Journalistic Professionalism in the Context of Revolution: Comparing Survey Results from Before and After the January 25, 2011 Uprising. International Journal of Communication, 8.

[7] Khamis, S. (2011). The Transformative Egyptian Media Landscape: Changes, Challenges and Comparative Perspectives. International Journal of Communication, 5

[8] Alianak, S. (2007). Middle Eastern leaders and Islam: A precarious equilibrium (p. 180). New York: Peter Lang.

[9] Khamis, S. (2011).

[10] Kalin, S. (2013, April 8). Here are the jokes that got Bassem Youssef, the “Jon Stewart of Egypt,” arrested.

[11] Amnesty International Public Statement. Egypt: Broadcaster’s conviction for “insulting the President” another blow to freedom of expression. (2012, October 23).

[12] Proposed Egyptian constitution to ‘limit’ media freedom – International Media Support (IMS). (2012, December 1).

[13] Elmasry, M. (2014, January 29). Egypt and the Struggle for Democracy. Conference at Georgetown University, Washington, DC.

[14] Elmasry, M.

The Downstream Effects of Recurring Daily Exposure to Online Emotional Content: A Self-Experiment

Several months ago, I came across a Chrome browser extension developed by Lauren McCarthy, an artist and programmer based in Brooklyn, NY, called the Facebook Mood Manipulator. With a cheeky nod to the Facebook data science team’s massive-scale emotional contagion experiment (published in 2014), the extension asks you to choose how you’d like to feel — the four options are positive, emotional, aggressive, and open — and filters your feed accordingly through the text and sentiment analysis program Linguistic Inquiry and Word Count (LIWC).

It looks like this: [Image source: http://lauren-mccarthy.com/moodmanipulator/]

I downloaded the extension and reactivated my Facebook account for a week to test it out on my own. The sentiment analysis component is definitely imperfect, perhaps because LIWC is less accurate with shorter bits of text like Facebook status updates and comments than with longer excerpts. Nonetheless, I got some interesting results. Setting the manipulator to “positive” promoted a couple of personal status updates about friends and family, but “open” did not change my feed at all. Setting it to “aggressive” once caused my whole feed to go blank, and then to refresh with no change in the initial distribution of posts. The “emotional” filter, however, was particularly strong — it consistently brought some contentious political and social discussion threads to the top of my feed, as well as certain breaking news stories. After about a week of checking my Facebook News Feed daily on the “emotional” setting, I noticed that after logging off, the effects of the “treatment” lingered — I found myself downright concerned about some of the commentary I had come across, and I initiated face-to-face conversations about the news stories I saw posted more often than I typically do. Recurring daily exposure to “emotional” posts thus seemed not only to produce some sort of downstream effect on my mood, but also to affect my social behavior offline.*

I am wary of drawing any conclusions from this self-experiment — because I knew I was being “studied,” I probably overestimated my responses to the mood manipulator — but nonetheless, assessing my self-report results highlighted the utility of tracking downstream effects in social media experiments. Initial exposure to a disagreeable online political post or a contentious comment thread may trigger an initial emotional response or encourage someone to engage with the online content, for example, but recurring daily exposure to such content may also alter mood, lead people to reconsider opinions, or affect their motivation to engage in more traditional forms of political action,  such as face-to-face deliberation or protest. For political scientists and social psychologists alike, it is these downstream effects on individuals’ attitudes and behavior, rather than individuals’ initial reactions to treatments, that matter most — social science experiments should thus take particular care to capture them.

*Note: I don’t think the “emotional” filter accounts for valence — when it works correctly, it displays both “positive” and “negative” posts. I wish the “positive” and “aggressive” filters had worked properly, so I could test whether repeated exposure to valence-charged posts affected my offline behavior differently than exposure to mixed-valence posts.

The Promises and Perils of Peer Production

Psychological studies are often criticized for their use of undergraduate students as research subjects. As the argument goes, it is difficult to generalize findings that are primarily based on the attitudes of college students at a small number of universities in the United States. And there is certainly truth to such criticism. But there’s also another truth the critics often ignore: finding a large number of randomly selected research subjects who come from diverse demographic backgrounds is rarely feasible, either economically or time-wise.

But there’s potentially a new method of quickly choosing research subjects at a minimal cost. Several websites allow both private businesses and academic researchers to hire a large number of workers to complete short, simple tasks. The workers are independent contractors and the cost is cheap—often five to twenty cents for five to ten minutes of work; the result is that individual workers who each provide a small amount of labor are able to collectively complete a large task for a single employer. It’s called peer production, and the largest service providing this labor is Amazon’s Mechanical Turk. And it’s potentially a system that allows researchers to quickly and cheaply recruit thousands of research subjects from around the world—the famous “college sophomore problem” may have finally found a solution.

Mechanical Turk, however, creates its own problems for researchers. A survey of Mechanical Turk users by New York University professor Panos Ipeirotis found that approximately 50% of Mechanical Turk’s workers come from the United States. The other major country represented is India, whose workers make up 40% of the site’s workforce. Within the United States, the average Mechanical Turk user is a young female worker who holds a bachelor’s degree and has an income below the U.S. household median. This does not reflect the demographic makeup of the United States. As such, there may still be problems of generalizability and accuracy. However, this may not be as large a problem as it first appears; several studies provide evidence that researchers can limit the population of Mechanical Turk users they choose from and adjust results so as to make their findings more generalizable than those of traditional undergraduate studies.

The larger risk for academics is not generalizability or accuracy, but rather the quality of work Turkers provide. In my own experience with a survey experiment that was put on Mechanical Turk, most survey responses were complete and all quality control questions were answered accurately. However, a large number of surveys were completed in an extremely short time period and some responses were incoherent or appeared to involve little thought. Ipeirotis’ survey provides some clues as to why this might be the case: according to his research, 15% of Mechanical Turk users in the United States use the site as their primary source of income; an additional 30% of users report that they use the site because they are unemployed or underemployed.

If a significant portion of workers use Mechanical Turk primarily as a means of generating income, their incentive is to game the system to get as much money as possible. This causes surveys to be taken quickly and without careful attention. Even quality control questions may not be enough. Online communities of Mechanical Turk workers such as Turker Nation have developed techniques for identifying quality control questions, and skilled Mechanical Turk users can likely answer quality control questions accurately while still breezing through the rest of the survey. Researchers interested in the quality of survey responses would do well to build further quality checks into their surveys; one potential method would be to ask several specific questions about the experimental manipulation test subjects were given. This would at least ensure that survey respondents read everything they were supposed to.

Articles about Amazon’s Mechanical Turk often reference the inspiration for the service’s name. As the story goes, Amazon took the name from an 18th-century machine that was claimed to be able to beat any chess player. The machine toured around the world and dumbfounded amateur and professional chess players alike. Years after the machine’s success, it was revealed that the entire contraption was a hoax—inside was just a skilled chess player making all the moves. It is indeed an apt metaphor for the site. Though Mechanical Turk quickly delivers cheap, accurate survey responses, we can’t forget that it is ultimately real people taking the surveys. And these people have just as much of an incentive to make money as researchers have to save it; academics must adjust their research methods accordingly.

Works cited:

Ipeirotis, Panagiotis G. “Demographics of Mechanical Turk.” (2010).

Mason, Winter, and Siddharth Suri. “Conducting Behavioral Research on Amazon’s Mechanical Turk.” Behavior Research Methods 44.1 (2012): 1-23.

This Could Be The Start Of Something New

LIWC is great, all hail LIWC. Except when it stops working.

The Linguistic Inquiry and Word Count (LIWC) program is a text analysis program that provides a numerical representation of a number of dimensions of speech (positive emotion, anger, pronouns, the list goes on). The way it works is pretty simple: we start with the dictionary.

[Image: a sample of the LIWC dictionary; it really doesn't make much sense on its own like that]

If the word “boy” is in your text, categories 121 and 124 each get incremented by one (those category numbers correspond to “social” and “human”). The output file then reports, for each category, the percentage of the sample’s total words that matched that category.

[Image: sample LIWC output in which no words matched any of the words coded as “friend” in the dictionary, because Sahil has no friends]
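To make that counting logic concrete, here is a minimal sketch of it in Python. The tiny dictionary and the category numbered 125 are invented for illustration: the real LIWC dictionary is proprietary, and it also supports wildcard entries like “friend*”, which this sketch ignores.

```python
from collections import Counter

# Illustrative stand-in for the LIWC dictionary: word -> category numbers.
# Categories 121 ("social") and 124 ("human") come from the example above;
# 125 ("friend") is a hypothetical number for this sketch.
DICTIONARY = {
    "boy": [121, 124],
    "friend": [121, 125],
}
CATEGORY_NAMES = {121: "social", 124: "human", 125: "friend"}

def liwc_percentages(text):
    """Percentage of words in `text` that match each dictionary category."""
    words = text.lower().split()  # naive tokenization (more on this below)
    counts = Counter()
    for word in words:
        for category in DICTIONARY.get(word, []):
            counts[category] += 1
    return {CATEGORY_NAMES[c]: 100.0 * n / len(words) for c, n in counts.items()}

print(liwc_percentages("The boy met a friend at the park"))
# -> {'social': 25.0, 'human': 12.5, 'friend': 12.5}
```

Note that every percentage depends on the total word count in the denominator, so the definition of a “word” matters as much as the dictionary itself.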

The LIWC program is great at this kind of stuff, and can do a bunch of cool things, like process multiple files at a time. Except for the fact that sometimes it breaks. Specifically, when you’re dealing with really big text files.

The project I worked on last semester involved analyzing tweets through LIWC. And, boy, people sure do tweet a lot. 7.6 megabytes doesn’t sound like a lot. A high quality picture will probably be larger than that. But 7.6 megabytes of text is a lot of text. In this case, it was over a million words. LIWC, the program, crashes when you put in this file.

Thankfully, we had the dictionary from LIWC, so I began working on our own version of LIWC, coded in Python. It didn’t seem that hard: we knew how LIWC was doing the calculations, and while we wouldn’t have any of the fancy features the commercial software provides, for our purposes it seemed good enough.

What’s the progress? Well, it turns out that just counting the total number of words is a pretty complicated task. I have a version of the text analysis program working that produces numerical outputs, but they do not match what the LIWC program provides. This is probably a fine-tuning issue. Over the coming weeks, I’ll be giving LIWC and my program the same inputs and adjusting my program until the results match.
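One reason the totals are hard to pin down: different, equally defensible tokenization rules give different word counts for the same sentence. The example below is my own illustration of the kind of discrepancy involved, not LIWC's documented behavior.

```python
import re

text = "It's a state-of-the-art solution... or isn't it?"

# Three defensible tokenizers, three different word counts:
print(len(text.split()))                     # 7  (split on whitespace)
print(len(re.findall(r"[a-zA-Z]+", text)))   # 12 (runs of letters: "It", "s", ...)
print(len(re.findall(r"[a-zA-Z']+", text)))  # 10 (letters plus apostrophes)
```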

How long will this take? Hopefully not long. Famous last words.

Day in the Life of a SNaPP Lab Researcher

When looking for a topic for my individual research project, I knew I wanted to look at gender differences in politics. This is a large field in political science, and my research could go in several different directions. Since I am part of the online political discussion team within the SNaPP Lab, and since many researchers have pointed to the rising importance of online political discussion to democratic deliberation (Papacharissi 2004), I decided to look at gender differences in online political discussion. The development of my research project has taken months; these months have been filled with Qualtrics programming, literature analysis, question formulation, experimental design, and even more Qualtrics programming.

Along the way it was only natural for me to question my decision to study gender differences in political science. I remember one time when I was getting particularly frustrated with the experimental design portion, I decided to take a break and watch The Daily Show with Jon Stewart. This episode just so happened to feature a beautiful segment called “The Broads Must Be Crazy,” in which Jon Stewart showed the double standard in evaluations of aggression and emotionality for women and men in politics. I remember laughing so hard when he pointed to newscasters criticizing Hillary Clinton for almost crying on television, yet when male politicians such as Mitch McConnell and John Boehner shed a few tears, reporters deemed it an act of courage. Watching this segment reminded me of the importance of my individual research project. It also reminded me of two articles that stuck out to me throughout the literature review process: (1) Mendez and Osborn 2010 and (2) Herring 1994.

In the literature overall, I found that politics is subconsciously assumed to be a male-dominated arena. Mendez and Osborn conducted a study of gender differences and perceived credibility within face-to-face political discussions. They used 1996 data in which individuals identified their objective knowledge of politics, the identity and gender of their main partner in political discussion, and their perceptions of their partner’s credibility in political discussion. Mendez and Osborn (2010) found that both men and women who identified a woman as their discussion partner perceived her as not having much political knowledge—even if that woman had the most objective political knowledge in the room. Even when various factors are controlled for—such as age, partisanship, and education level—women are perceived to be less credible in political discussion than men (Mendez & Osborn 2010). While this study applies to face-to-face political discussion, gender differences have been observed in online discussion boards as well. In online discussion boards, women overall post less than men (Herring 1994). Critics may point out that women are the ones holding themselves back and should simply increase the frequency of their posting online. However, it was found that when women participate equally in online discussion, they become censored, as their postings are either ignored or delegitimized (Herring 1994). Also, when women do post, their posts are significantly shorter than men’s, and women are more likely to contribute to discussions of a personal nature rather than of issues (Herring 1994). When men introduce adversarial comments, women report being more concerned than men and are more likely to avoid the conversation altogether than respond. However, when women do respond to an adversarial comment, they are extremely likely to adopt a typically male, confrontational response (Herring 1994).

In my study, I will be looking at questions similar to those Mendez and Osborn and Herring examined in their research. By creating a mock Facebook News Feed conversation with one aggressive discussant and one neutral discussant, I will manipulate the gender of the aggressor and the neutral party. This will allow me to look at how subjects rate the credibility, argumentativeness, professionalism, and emotionality of both the aggressor and the neutral party across several different cells. The research design portion is almost over, and I am in the final stages before the project is launched. Stay tuned, as the data analysis portion of this experiment is coming soon.

Works Cited:
Herring, Susan. “Gender Differences in Computer-Mediated Communication: Bringing Familiar Baggage to the New Frontier.” (1994). Web.
Mendez, Jeanette, and Tracy Osborn. “Gender and the Perception of Knowledge in Political Discussion.” Political Research Quarterly 63.2 (2010): 269-79. Sage Journals. Web.
“The Broads Must Be Crazy.” The Daily Show with Jon Stewart. Comedy Central. New York, 22 Apr. 2014. Television.

Income and Internet Usage

When I initially started to do research on Americans’ social media habits, I operated under the assumption that almost all Americans had access to the internet. As such, I believed that most individuals could easily access Facebook or Twitter from their computer; if someone didn’t use social media it was due to personal preference, not lack of access.

And it is true that a large majority of Americans live in homes with a computer that’s connected to the internet. As of 2012, 74.8% of households have an internet connection at home (Census Bureau). Of the 25.2% of American households that don’t have internet access, some cite cost: 7.3% of all households report not having an internet connection because it is too expensive (Census Bureau). That’s a small minority of the population, but the United States is a large country with over 117 million households; that 7.3% represents over 8.5 million families that can’t afford the internet.

I had to take into account a new variable when examining America’s social media use: class. And indeed, regular internet usage is positively correlated with a person’s income. 73% of people making less than $30,000 a year report using the internet compared to 99% of people making over $75,000 a year (Madden).

I expected social media use to likewise be positively correlated with income. But surprisingly, people in lower income brackets are actually more likely to use social media than wealthier individuals. 72% of people making under $30,000 a year use social media. Only 65% of people in higher income brackets report using social networking sites. In addition, lower income groups are more likely to use both Facebook and Twitter (Madden).

The increasing use of cell phones as an internet browsing device might explain poorer people’s propensity to use social media. 43% of people making less than $30,000 a year do most of their internet browsing on their cellphone as compared to 21% of people making over $75,000 a year. Cellphones lower barriers to entry by allowing people to cheaply access the internet even when they don’t have access to a computer connected to the internet at home.

I was surprised that having a lower income—despite making it harder to have a computer with internet access at home—doesn’t prohibit people from using social media. Perhaps social media truly is a democratizing force that allows a greater portion of the population to participate in our country’s political discourse.

Given the increasing importance of social media in geopolitics, it would also be interesting to explore the relationship between income and internet usage in other countries. Are richer households more likely to have access to the internet in other countries? Are poor people in Hong Kong or Tehran able to access social networking sites? The answers to these questions may help us understand the demographic makeup of protest movements around the world.

“Computer & Internet Trends in America.” Measuring America. United States Census Bureau, 3 Feb. 2014. Web. 19 Oct. 2014. <http://www.census.gov/hhes/computer/files/2012/Computer_Use_Infographic_FINAL.pdf>.

Madden, Mary. “Technology Use by Different Income Groups.” Pew Research Internet Project. Pew Research, 29 May 2013. Web. 19 Oct. 2014. <http://www.pewinternet.org/2013/05/29/technology-use-by-different-income-groups/>.

Facebook in History

Earlier this week I spent time in Swem’s Special Collections looking at the notebook of a 19th-century businessman. The Prince William County resident’s dated cursive was nearly illegible, the pages were faded and torn, and the text was overfilled with marginalia and curious attempts at multiplication. But the notebook is worth the trouble. For historians, such writings provide critical insights into the daily anxieties and hopes of middle-class Virginians: the excitement over the world’s fair, the religious commodities purchased, the debts owed. From such documents we can historicize the experience of everyday life.

All of which led to a question: a hundred years after my death, what documents will historians utilize to historicize the ordinary, daily affects of my generation? Postings on social network sites are one immediate answer. Within my age cohort, 89% of people use social networking sites and 46% of internet users report posting original photos and videos (Pew 2014). On social media, we work out the inane realities of day to day living—information of little use to us, but of value to historians trying to figure out exactly what we were up to.

Historians no longer have to sift through mildewed, incomplete documents. Instead, we have a perfectly preserved ledger of the social lives of an entire generation. Paucity has been replaced with overabundance and researchers will struggle to construct coherent historical narratives out of a nearly infinite reserve of readily available data. In a matter of decades, the field of history stands to be revolutionized and historians may find themselves pining for the days when one had to crank through microfilm for a glimpse into the past.

But despite the radical changes social media threatens to impose upon Herodotus’ old discipline, there’s a relative lack of understanding as to what structures our online behavior. What motivates a post? Which topics are discussed? Who does the posting? This is to say, we have all the more reason to begin an inquiry into the ways people discuss politics online. No discipline stands alone, and history may find itself in need of political theory and computer science sooner rather than later.

“Social Networking Fact Sheet.” Pew Research Center’s Internet & American Life Project. Jan. 2014. Web. 7 Oct. 2014. <http://www.pewinternet.org/fact-sheets/social-networking-fact-sheet/>.

Health Care Policy in the United States: A Historical Perspective

This semester, I am taking my Government senior seminar in 20th Century American Social Policy. We have just begun covering developments in U.S. health policy from 1900 to 1950. While doing the reading for the class, I was shocked by how similar much of the discourse surrounding national health insurance in this period was to the accusations and criticisms used during the debate over the Affordable Care Act (ACA). Events from as early as the 1900s have had a profound impact on the way opponents of national health insurance characterize federal programs. The historical context of the first debates over government regulation of health care created a set of frames that are still used whenever discussions of health care arise.

For example, when California and New York began debating comprehensive health care bills in the early 20th century, opponents of state involvement in health care decried these programs as “socialized medicine” (“The Lie Factory,” Lepore). The “Red Scare,” or fear of latent communist or socialist elements among the American population, provided a ready-made set of criticisms to deploy against proponents of national or state health insurance programs. The most fervent opponents of national health insurance, physicians and private insurance companies, capitalized on these widespread fears and labeled any form of government involvement “creeping socialism” (Hamovitch 281). Public opinion on government regulation of health services was generally positive; however, the public was unsure as to what form it wanted this involvement to take. Organizations such as the American Medical Association (AMA) and congressional Republicans manipulated public opinion and frightened significant portions of the populace.

The rise of political consulting firms also helped opponents of national health care mount effective public relations campaigns. Early health care battles represented some of the earliest instances of special interest campaigning. The AMA assessed a $25 fee on all of its members to pay “Campaign Inc.,” one of the first political consulting firms (Quadagno). Campaign Inc. had no qualms about quote misattribution, out-of-context “facts,” and outright falsified statistics. Its efforts successfully derailed at least five attempts at either state or national health insurance plans. Physicians who dared publicly support reform efforts were expelled from the AMA and lost their admitting privileges at local hospitals.

As seen through the ongoing debate over the Affordable Care Act (ACA), rhetoric surrounding government intervention in health care has remained remarkably similar.  The coinciding of the “Red Scare” and nascent efforts to reform health care allowed opponents of reform to permanently link communism and socialism to questions surrounding government healthcare provisions.

 

Works Cited

Hamovitch, Maurice B. “History of the Movement for Compulsory Health Insurance in the United States.” Social Service Review 27, no. 3 (Sept. 1953): 281-99. Available from JSTOR.

Quadagno, Jill.  One Nation Uninsured: Why the U.S. Has No National Health Insurance (New York: Oxford University Press, 2005), ch. 1 (Blackboard).

Lepore, Jill. “The Lie Factory.” The New Yorker (Sept. 24, 2012), available at http://www.newyorker.com/reporting/2012/09/24/120924fa_fact_lepore.