End of the Year Reflection

After spending most of the fall semester engaged in what the Obamacare team has come to affectionately term “data-hazing,” I was looking forward to starting my independent data project and moving on to more engaging elements of the research process.  Then I met R.  R is a free software program that gives researchers a great deal of freedom in data analysis.  So much freedom, I might add, that learning the program brings to mind the well-known saying, “give ’em enough rope and they’ll hang themselves.” The beauty of R is that you can tell it to do anything, as long as you know the command.  However, therein lies the greatest challenge as well.  The first few sessions were a haze of parentheses, brackets, and red error messages.  With the patient help of Professor Settle, Meg, and Taylor, the lab gradually became more acclimated to R.  Although the learning process could be tedious, successfully entering commands felt like a huge victory.

One of the major lessons I have learned about the research process is that it is often long, disappointing, and painfully slow.  However, these characteristics also make the small pay-offs along the way incredibly satisfying.  Over the course of the past year, I’ve had to grow used to scaling back my expectations, then scaling them back a little more, and then adjusting them perhaps one more time.  There are tangible things I’ve learned working in the lab, such as how to create a bar graph in R, but there are also many intangibles that I may not be able to neatly fit on a line of a resume.  Growing used to a slow and obstacle-riddled research process has been one of those invaluable intangibles.  As I prepare to begin my senior year of college, I will need to remember the importance of remaining flexible and keeping an open mind about the future. While I am excited to start putting my new-found R skills to use for my independent research project this summer, I am even happier about undertaking an independent project (and senior year!) with a better, continually evolving attitude about the process of research itself.

Reflection on the Lab Experiment Team

This semester, I was on the lab experiment team. My job was the preparation of the video stimulus. This turned out to be a more difficult job than I was expecting.

The idea is that this stimulus will consist of a set of political videos and a set of apolitical videos. While the stimulus is presented, each subject's physiological reactions will be monitored with the BioPac hardware.

The process obviously began with selecting videos. The main difficulties were finding videos that were practically equivalent in their levels of contention while varying in their political leanings, and matching them with equivalently contentious apolitical videos. It was easy, for instance, to find contentious videos about Obamacare, but considerably more difficult to find direct confrontation on, say, abortion.

An additional, unexpected hurdle this semester has been file compatibility. Video is a tricky medium, file-type-wise. The world is just barely getting over the .avi file, and copyright holders are pushing users away from old filetypes so that they have to buy new copies of old content in newer formats. But not all systems (web-based systems especially) are equipped to handle the newer filetypes that are replacing .avi and its older companions. In the end, we worked around this issue by uploading the videos to YouTube (which is up to date) and embedding our YouTube uploads instead of embedding the files natively.

Anyway, the political set we ended up with consisted of two Obamacare clips, a clip from Occupy Wall Street, and a clip from a pro-choice rally; the apolitical set consisted of two Jerry Springer Show clips, an altercation between UC Berkeley students and police, and an atheism vs. intelligent design debate.

To get the strongest, most measurable results, we had people watch the videos and code the most contentious portions. We cut the videos down to just these segments (hoping to conserve participant time and prevent physiological responses from dwindling over the course of the stimulus). These snippets were pilot tested for equivalence on MTurk, and the most evenly matched three videos from each set were chosen.


As it stands, we now have six videos embedded in PowerPoint presentations in both political-first and apolitical-first orders. These presentations are ready for pilot testing.

Here are the videos, if you want to check them out: https://www.youtube.com/channel/UCOzJ6wqmouI5VJpYv5ir6NQ/videos

The PowerPoint presentations are on the shared drive.

On Quantitative Data, or How to Confound a Philosopher

“Suppose, now, that we wished so to organize our moral discourse that we did not accept the must implies ought principle…
In that case we would have both
Np
and either
O~p
or
P~p
where “P” represents permission and is connected to obligatoriness by the rule
Op ≡ ~P~p” (Wilson, 1984, p. 54).

Until this lab, this was my idea of analysis. Coming out of high school with policy debate under my belt and a philosophy major in my sights, I had no idea that I would be manipulating numbers.  The closest I had gotten (or ever planned to get) to quantitative data was dealing in passing with “utility,” which even the most steadfast utilitarians will readily admit is only quantifiable in principle.

When I found the website, I was excited. “Working in the SNaPP Lab is a great way to get experience conducting research to prepare you to conduct your own project. If you are interested in political behavior—and specifically in the role of innate dispositions, social networks, or social media to influence political behavior—you should consider getting involved in the lab” (“Projects,” 2014, Fostering Research Opportunities for Undergraduates section, para. 3). Innate dispositions! Social networks! Political behavior! RESEARCH EXPERIENCE!!! It was everything I wanted to explore academically that wasn’t strictly philosophy. It couldn’t have been better.

Somehow, though, I missed just how quantitative it all was. I missed every mention of R, every mention of data, every mention of statistical analysis. Honestly, I don’t know what I thought the lab did; I was just sort of blindly excited about it. Had I read deeper, and had I realized the data focus, I may have been too scared to apply.

For maybe the first time in my life, I’m glad I didn’t read very deeply. Missing out on this would have been a horrible mistake.

While statistical analysis isn’t exactly my passion now, I find myself engaging with scholarship I never would have before. Quantitative linguistics papers about word distribution in childhood input, papers about quantitative analysis of incidence of cosmological properties in possible string-theory worlds… The list goes on. This experience has opened me up to a whole new type of scholarship, across disciplines, which prior to participation in this lab, I was not capable of appreciating.

While the year in SNaPP Lab wasn’t at all what I was expecting (due to my failure to read), I am glad it turned out the way it did, and I’m glad to have the year of experience. It’s been a great one.

References:

Projects. (2014). Retrieved 4 May 2014 from http://snapp-lab.wm.edu/projects.html

Wilson, F. (1984). Hume’s cognitive stoicism. Hume Studies, 10, 52-68. Retrieved from http://www.humesociety.org/hs/issues/10th-ann/wilson/wilson-10th-ann.pdf

Technology Can’t Change Politics

A surprising number of academic articles about the internet–particularly those written in the early 2000s–refer to internet technology as a transformative tool that has the potential to fundamentally alter American politics. Unfortunately, it seems as if technological innovation isn’t sufficient to spur political reform. The past 30 years have seen enormous technological change, including widespread adoption of personal computers, the internet, and cell phones. These technologies have had a profound impact on the ways in which we interact with others and perform daily tasks. And yet our political systems remain unchanged. The political debates and challenges of 1994—or even 1984—seem remarkably similar to those of 2014.

This, I believe, is due to the powerful effect of institutions. America’s political institutions create an incentive structure for politicians. New technologies can change how we elect people, whom we elect, how we interact with elected officials, etc. But in some sense these changes are superficial. Whether you use social media to help elect Barack Obama or print media to put George H. W. Bush in the White House, there will still be two dominant parties. Money will drive political outcomes. Capital will accrue wealth faster than labor, leading to inequality. One Senator will have the ability to block entire pieces of legislation. Organized political minorities will have a greater influence than apathetic majorities. Technology, however great its effect on our personal lives, is largely unable to alter the incentive structures America’s political institutions create.

The fight to reform institutions is not a battle that can be resolved through technology. Rather, it is a struggle that involves political and philosophic debate. Technology cannot alter the fundamental inequalities of power and wealth that distort political outcomes and make change so challenging. To pretend otherwise merely obfuscates the real issues and makes effecting positive social change more difficult for everyone involved.

What’s So Bad About Polarization?

The increasing use of the internet as a communications tool has fundamentally changed the way Americans discuss politics. Whereas people once bashed politicians in local barbershops, they can now do so on social media sites with people around the world.  Some have posited that this will naturally diversify citizens’ political networks, thus enhancing the democratic process. A review of the literature, however, casts doubt upon this optimistic view. Instead, the future looks much like the past: research indicates that the internet likely increases polarization by allowing citizens to more easily self-select into ideologically homogenous groups (Bienenstock et al., 1990; Garner and Palmer, 2011).

Take for example Facebook, America’s most popular social network site. Facebook is designed so that you only see content from people whose pages you visit the most, i.e. your closest friends. Your closest friends tend to be very similar to you, including in their political beliefs (McPherson et al., 2001). As such, if it is true that people self-select into homogenous networks on Facebook, online political discussion will lead to greater partisanship and polarization.

But could increased polarization actually be a positive development for American democracy? While closed discussion among a partisan, polarized group of people might seem like a negative thing, studies indicate that polarization actually leads to more informed and consistent voters (Levendusky, 2010). Despite its negative connotations, polarization motivates citizens to become more politically engaged and knowledgeable; it also serves as a powerful heuristic that allows ordinary citizens to easily understand complex political issues. Perhaps the danger to our polity lies not so much in polarization itself as it does in broken political institutions that are unable to accommodate polarized parties. Unfortunately, that is a problem whose solution can only come from political imagination and will; an internet connection will not suffice.

Obamacare Team Update and Looking Back on my time in SNaPP

Well, the semester has come to a close, and the Obamacare Team is happy to report that we had a very productive semester. Between Joanna, Will, and myself, we completed our article collection project, collected troves of new data, and made lots of headway on our individual projects.

As we reported before, we spent much of this semester collecting new data for use in group and individual research. This data spans three categories of variables: policy (collected by Joanna; measures what states did in response to the ACA), health/demographics (collected by Will; includes a number of measures of the health and characteristics of state populations), and political (collected by me; includes measures of the ideological climate of states).

Once we completed that, we set about working on our individual projects. Will and Joanna will report on theirs more in the future, as both are spending time this summer on their projects. As a senior, however, I completed my project on measuring state ideology (see my earlier post for more details).

My departure from the SNaPP Lab (and from W&M) has made me want to reflect on my time in the lab. I don’t want to just take this opportunity to tell you I learned a lot, though, or that I completed some really interesting research projects (even if they were super interesting). Instead, I want to encourage all current and future liberal arts students out there with a will to learn and a topic they’re passionate about to get out there and find research to do! Even if your final product or result isn’t what you expected (and trust me, it usually isn’t), you can learn so much about your field and about scholarly work in general by rolling your sleeves up and doing research. The transferable skills you learn are invaluable, and you may find your efforts turning into an honors thesis, a published article, or even a job.

I also want to encourage those students doing research now or in the future to stick with it. There were times these past few years where I ran into giant brick walls I was sure were insurmountable. I remember distinctly the day this past summer I learned that my project as I had designed it was completely infeasible. Yet I stuck with it, adapted to the challenges I ran into, and ended up learning so much about research and more.

Finally, a word of advice to my future and current Government/Public Policy lovers: do research! I know there is a temptation amongst students in our field to avoid methods courses and research work like the plague. Oftentimes we would rather read Politico articles and talk about Democrats and Republicans than sit down and complete a research project. The benefits of doing projects like these, however, are tremendous, and even if you don’t do research, please consider taking as many methods courses as you can. I am so glad that I took the research courses and completed the projects I did, because they taught me an immense amount about not just research, but also how to approach complex problems more methodically and successfully.

So if you know a favorite professor of yours is looking for research assistants, or you have the opportunity to apply for a summer research grant, don’t hesitate because you’re worried it would be too hard or that you wouldn’t learn from the experience. If you approach your research with enthusiasm and dedication you will learn an incredible amount, I promise.

A Reflection on the Year and Research Plans for the Future

As the semester comes to a close, I have been reflecting on my first year as a SNaPP lab RA. I can certainly say that my experience in the lab has provided me with an incredible skill set that will surely come in handy throughout the rest of college and in the real world. I collected data for the Obamacare Team and have learned how to wade through databases to find important information. I have developed a working knowledge of R that I will continue to build on. I have written a grant proposal and developed my own unique research project. In addition to these tangible results, I have noticed myself developing better analytical and problem-solving skills. Working in the lab has provided a wealth of opportunities to learn in a unique, hands-on way that allows me to learn by doing.

I also began work on my individual project this semester. My project aims to explore the relationship between biased newspaper coverage of the ACA and the corresponding newspaper readership’s ideology. Thanks to generous funding from the Charles Center, I will be able to continue work on my project throughout the summer! I will also be helping the Social Anxiety Team this summer by helping proctor their lab experiment. Overall, I have developed numerous new skill sets because of the SNaPP lab, and this summer promises to be an invaluable opportunity to get my hands dirty in a project that I have developed on my own! Below I have put a copy of my working abstract for my summer research project. I will continue to blog about my summer research experience on the Charles Center Summer Research Blog.

Working Abstract for Summer Research:

I will be studying the relationship between biased local newspaper coverage and the political ideology of newspaper readerships. I will aim to answer the question: does biased local newspaper coverage of ideologically contentious legislation correlate with an ideologically biased newspaper readership? To explore this question, I will analyze newspaper coverage of the Affordable Care Act in California, Texas, and Florida. Using original data collected from newspaper articles written in August 2009 (the height of the healthcare debate) covering the Affordable Care Act, I will analyze the recurrence of ideologically charged keywords. An abundance of specific ideologically biased keywords in an article will indicate the ideological biases of the publishing newspaper. Examples of these keywords are “ration” and “public option.” Throughout the healthcare debate, conservatives have emphasized the potential “rationing” of healthcare, while liberals have avoided the term because of its negative connotation. Therefore, “ration” will likely appear more in conservative newspapers, aiming to highlight problems with the ACA and promote a conservative argument. The same concept applies to the keyword “public option,” a frequent element of the liberal argument in support of the ACA.
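The keyword-counting step described above can be sketched roughly as follows. This is a minimal Python illustration, not the project's actual pipeline (which will be built in R); only the two example keywords come from the abstract, and the scoring rule is an assumption for demonstration purposes.

```python
import re

# "ration" and "public option" are the examples named in the abstract;
# real keyword lists would be longer and validated against the literature.
CONSERVATIVE_KEYWORDS = ["ration"]
LIBERAL_KEYWORDS = ["public option"]

def count_keyword(text, keyword):
    """Count whole-word occurrences of a keyword, case-insensitively."""
    pattern = r"\b" + re.escape(keyword) + r"\b"
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

def keyword_bias_score(article_text):
    """Positive = more conservative keywords; negative = more liberal."""
    conservative = sum(count_keyword(article_text, k) for k in CONSERVATIVE_KEYWORDS)
    liberal = sum(count_keyword(article_text, k) for k in LIBERAL_KEYWORDS)
    return conservative - liberal

article = ("Critics warn the bill would ration care, while supporters "
           "tout the public option as a path to coverage.")
print(keyword_bias_score(article))  # one of each keyword, so the score is 0
```

Summing these scores over all articles a newspaper published in August 2009 would give one crude measure of that paper's ideological lean, to be compared later against readership ideology.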

After determining the ideological biases of specific newspapers in Texas, California, and Florida, I will focus on understanding the relationship between these newspapers and the ideology of their readerships. I will analyze readership ideology by using local election results and DW-Nominate scores for representatives with districts that overlap with the readership area. By analyzing a liberal (CA), conservative (TX), and moderate (FL) state, I will better understand the relationship between local newspaper ideology and readership ideology across the American political spectrum.

Measuring State Ideology and My Research Journey this Semester

My research journey this semester was one full of twists, turns, and surprises. I began the semester finishing up article collection for the Obamacare media project, and before long had transitioned into collecting group data for my team’s project. My original intention was to pursue a project investigating framing of the Affordable Care Act (ACA) by elites and average Americans, but I terminated that project when I ran into the brick wall of unavailable data.

Then Professor Settle steered me towards a new project, one that I embraced and made my focus for the semester: measuring the ideology of states.

This was a subject I had experience with. Throughout the Obamacare team’s quest to identify a coherent research focus, we kept stumbling across the need to figure out how to measure the ideology of states. I had read several articles on the matter and found the subject interesting. To me, the challenge of measuring the political climate of a state represented a fascinating opportunity to test my ability to take a complex phenomenon and condense it down into a working measure. I embraced the challenge and ran with it.

The first step was to look at how scholars had operationalized state-level ideology in the past. My review of the literature turned up, among others, two landmark studies that coincide well with the competing theories on how to develop a good measure. The first of these studies was published by Robert Erikson and his colleagues in the September 1987 issue of the American Political Science Review. This article laid out an approach that relied on disaggregating national polling results to get state-level indicators of partisanship in states. The second, published in 1998 in the American Journal of Political Science by William Berry and his colleagues, presented a methodology focused on aggregating various indicators of elite ideology within states. These indicators include interest group evaluations of elites, partisanship of the state legislature, and more. I chose to follow a methodology modeled after Berry and his colleagues, largely because of the data available and the kind of analysis I wanted to conduct.

Having determined my procedure, I gathered my data. I chose to focus on four indicators of state ideology: party of the governor, the partisan makeup of each state’s upper and lower houses, and the average DW-Nominate score of each state’s U.S. Senators (all data was from August 2009, the time frame for the articles collected by the Obamacare team). I had collected data for these variables earlier in the semester, and they seemed to be relatively strong predictors of ideology. The measures were structured/operationalized so that each score fell between -1 and 1, with -1 indicating the most liberal and 1 the most conservative (exception: the party of governor was coded such that a state with a Democratic governor received a -.25 and a state with a Republican governor received a .25).  I then aggregated the four measures for each state and divided by four to arrive at my ideology score for each state.
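The aggregation step is simple enough to sketch in a few lines. This Python example uses invented numbers for a hypothetical state; only the coding scheme (four indicators averaged, Democratic governor = -.25, Republican = .25) comes from the post, and the analysis itself was done in R.

```python
def state_ideology_score(governor_party, upper_house, lower_house, senator_dw_nominate):
    """Average four indicators, each scaled so -1 is most liberal and 1 most conservative.

    governor_party: -0.25 for a Democratic governor, 0.25 for a Republican.
    upper_house / lower_house: partisan makeup of each chamber, scaled to [-1, 1].
    senator_dw_nominate: mean DW-Nominate score of the state's U.S. Senators.
    """
    indicators = [governor_party, upper_house, lower_house, senator_dw_nominate]
    return sum(indicators) / len(indicators)

# A hypothetical Republican-leaning state:
score = state_ideology_score(0.25, 0.4, 0.3, 0.45)
print(round(score, 3))  # (0.25 + 0.4 + 0.3 + 0.45) / 4 = 0.35
```

Because the governor indicator is capped at ±0.25 while the others range over ±1, a simple average implicitly down-weights the governor relative to the legislature and Senate delegation, which is one design choice a future version of the measure might revisit.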

To verify my results, I compared them with my comparison variable, Presidential vote share in the 2008 election (as this election was closest to the 2009 time frame of my other data). My comparison variable was coded such that states that went blue in 2008 would receive a negative score between -1 and 0, while those that went red would receive a positive score between 0 and 1. The scatterplot, which I unfortunately was unable to upload, showed a strong positive correlation between the two variables.

I arrived at the conclusion that my methodology, while not perfect, was a step in the right direction in terms of measuring ideology within a state. Granted, there is significant room for improvement in this research design. For instance, my analysis relies on the assumption that Presidential vote share in 2008 serves as a valid comparison measure for the ideology scores I came up with. I believe, however, that my work this semester can serve future members of the SNaPP Lab, and the Obamacare team specifically, in future research.

Real or Fake? Everyone Wants to be Facebook Friends

During this semester in the SNaPP Lab, I spent time playing around with R and analyzing data from Facebook. A neat little Facebook package allowed me to see almost all of my friends’ information that is present on Facebook. Once it was neatly put into a table, I could run a series of tests using this data. I was interested in what it means to be a close friend with someone, and whether certain factors on Facebook could predict that.

I started to think about what factors “allow” me to be friends with someone. We usually have mutual friends, common interests, and similar upbringings. I went through and rated all 1,000+ of my friends on a scale of 0 to 100, with 100 being a very close friend and 0 being not friends at all. I decided that looking at mutual friends, likes, and cluster groups on Facebook would allow me to determine whether the same people I would consider close friends offline would be close friends with me online.

Mutual friends turned out not to be as big an indicator of closeness as I had hoped. Many of the people I had over 250 mutual friends with were friends I grew up with from lower to upper school (that’s what we private school kids call elementary and high school). I’m not too close with them, but it makes sense that we would have many mutual friends. One person among those 250+ mutual-friend connections was my brother…and I suppose I would consider him a close friend!

What truly helped identify close friendships, online and off, were the likes. Going through everyone’s likes one by one was absolutely ridiculous; I sat at my computer for 5 minutes and knew there had to be a different way to do it. After talking with Meg, she told me to pick 8 to 10 words from pages that I visit often on Facebook and that interested me. I picked pages containing the words:
Obama
Gossip Girl
Beyonce
Alpha Kappa Alpha
William and Mary
Z104 (radio station in Hampton Roads)
Les Miserables

These were things that I was interested in, and (hopefully) things that my close friends would be interested in as well. After running a few tests in R, it turned out that friends who like pages containing the words William and Mary or Alpha Kappa Alpha were more likely to be my close friends. This made sense because each is such a small group of people with common and specific interests.
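The flavor of the test described above can be sketched as a simple comparison: do friends who like a given page have higher closeness ratings, on average, than friends who don't? This Python example uses entirely invented friends and ratings for illustration; the real analysis was run in R on actual Facebook data.

```python
# Each friend has a hand-assigned closeness rating (0-100) and a set of
# liked pages. All of these records are made up for the sketch.
friends = [
    {"closeness": 90, "likes": {"William and Mary", "Alpha Kappa Alpha"}},
    {"closeness": 85, "likes": {"William and Mary", "Beyonce"}},
    {"closeness": 20, "likes": {"Gossip Girl"}},
    {"closeness": 10, "likes": {"Beyonce"}},
    {"closeness": 5,  "likes": set()},
]

def mean_closeness_by_like(friends, page):
    """Return (avg closeness of friends who like the page, avg of those who don't)."""
    likers = [f["closeness"] for f in friends if page in f["likes"]]
    others = [f["closeness"] for f in friends if page not in f["likes"]]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(likers), avg(others)

likers_avg, others_avg = mean_closeness_by_like(friends, "William and Mary")
print(likers_avg > others_avg)  # True: W&M likers rate as closer friends here
```

A gap between the two averages for a page like "William and Mary" or "Alpha Kappa Alpha" is the kind of pattern the R tests picked up, though a proper analysis would also check whether the difference is statistically significant.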

Online social media sites such as Facebook allow us to connect with people from all over the world. Having the most connections has become somewhat of a contest between people. Having the most friends or followers gives people such a confidence boost, and a strange appreciation for what they post, even if they are not offline friends with this audience. This phenomenon is actually quite interesting, and it leads me to question why people need to feel connected to those they only know through a computer screen. I definitely understand that social networks in our offline environment can enhance someone’s quality of life, and that social networks online can help us stay connected to close friends who are far away, but they can also lead to some dangerous interactions with strangers. (Okay, I’m sounding like my mother right now.) I guess what I’m trying to say is: why must we be so connected to everyone all the time? What happened to having a perfectly happy, normal life without Facebook, Twitter, Instagram, Vine, SnapChat, and all of those things? Doing this project has definitely taught me that yes, social networks are good for staying connected with those around us, but they can also be damaging. I’m friends with so many people that I don’t talk to, but I won’t unfriend them. I’ve already started my social network detox: I deleted the Twitter app off my phone, and I think Facebook is next. I’m keeping Instagram, because who doesn’t love a good picture?! (And I’m really nosy.) The trend of social networks moving from offline to online needs to keep being researched so we can learn how to use them for our benefit!