Several months ago, I came across a Chrome browser extension called the Facebook Mood Manipulator, developed by Lauren McCarthy, an artist and programmer based in Brooklyn, NY. With a cheeky nod to the Facebook data science team’s massive-scale emotional contagion experiment (published in 2014), the extension asks you to choose how you’d like to feel (the four options are positive, emotional, aggressive, and open) and filters your feed accordingly using the text and sentiment analysis program Linguistic Inquiry and Word Count (LIWC).
[Screenshot: the Mood Manipulator interface. Source: http://lauren-mccarthy.com/moodmanipulator/]
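To make the mechanism concrete, here is a minimal sketch of how a LIWC-style mood filter might rank posts: count how many of a post’s words fall in the chosen category’s dictionary, then surface the highest-scoring posts first. The category word lists below are toy stand-ins I made up for illustration (the real LIWC dictionaries are proprietary and far larger), and the extension’s actual implementation may differ.

```python
import re

# Illustrative stand-in dictionaries; real LIWC categories contain
# thousands of entries and word stems.
CATEGORY_WORDS = {
    "positive": {"happy", "love", "great", "wonderful", "glad"},
    "emotional": {"angry", "sad", "afraid", "happy", "love", "hate"},
    "aggressive": {"fight", "hate", "attack", "furious"},
    "open": {"maybe", "perhaps", "wonder", "curious"},
}

def category_score(text, category):
    """Fraction of a post's words that appear in the category dictionary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in CATEGORY_WORDS[category])
    return hits / len(words)

def rerank_feed(posts, category):
    """Sort posts so the highest-scoring ones for the chosen mood come first."""
    return sorted(posts, key=lambda p: category_score(p, category), reverse=True)

# Example: three hypothetical status updates.
feed = [
    "Meeting notes attached for tomorrow.",
    "I am so happy, I love this wonderful day!",
    "This makes me furious. I hate everything about it.",
]
positive_first = rerank_feed(feed, "positive")
```

With the toy dictionaries above, the “positive” setting moves the happy status update to the top, while “aggressive” would promote the angry one; the short-text weakness mentioned below also shows here, since a one-line post with a single dictionary word gets an outsized score.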
I downloaded the extension and reactivated my Facebook account for a week to test it out myself. The sentiment analysis component is definitely imperfect, perhaps because LIWC is less accurate with short bits of text like Facebook status updates and comments than with longer excerpts. Nonetheless, I got some interesting results. Setting the manipulator to “positive” promoted a couple of personal status updates about friends and family, but “open” did not change my feed at all. Setting it to “aggressive” once caused my whole feed to go blank, then refresh with no change in the initial distribution of posts. The “emotional” filter, however, was particularly strong: it consistently brought contentious political and social discussion threads to the top of my feed, along with certain breaking news stories. After about a week of checking my News Feed daily on the “emotional” setting, I noticed that the effects of the “treatment” lingered after logging off. I found myself downright concerned about some of the commentary I had come across, and I initiated face-to-face conversations about the news stories I saw posted more often than I typically do. Recurring daily exposure to “emotional” posts thus seemed not only to produce some sort of downstream effect on my mood but also to alter my social behavior offline.*
I am wary of drawing any conclusions from this self-experiment; because I knew I was being “studied,” I probably overestimated my responses to the mood manipulator. Nonetheless, assessing my self-reported results highlighted the utility of tracking downstream effects in social media experiments. Exposure to a disagreeable online political post or a contentious comment thread may trigger an immediate emotional response or encourage someone to engage with the online content, for example, but recurring daily exposure to such content may also alter mood, lead people to reconsider opinions, or affect their motivation to engage in more traditional forms of political action, such as face-to-face deliberation or protest. For political scientists and social psychologists alike, it is these downstream effects on individuals’ attitudes and behavior, rather than individuals’ initial reactions to treatments, that matter most; social science experiments should thus take particular care to capture them.
*Note: I don’t think the “emotional” filter accounts for valence — when it works correctly, it displays both “positive” and “negative” posts. I wish the “positive” and “aggressive” filters had worked properly, so I could test whether repeated exposure to valence-charged posts affected my offline behavior differently than exposure to mixed-valence posts.