Health Care Policy in the United States: A Historical Perspective

This semester, I am taking my Government senior seminar in 20th Century American Social Policy. We have just begun covering developments in U.S. health policy from 1900 to 1950. While doing the reading for the class, I was shocked by how similar much of the discourse surrounding national health insurance in this period was to the accusations and criticisms leveled during the debate over the Affordable Care Act (ACA). Events from as early as the 1900s have had a profound impact on the way opponents of national health insurance characterize federal programs. The historical context of the first debates over government regulation of health care created a set of frames that are still used whenever discussions of health care arise.

For example, when California and New York began debating comprehensive health care bills in the early 20th century, opponents of state involvement in health care decried these programs as “socialized medicine” (Lepore, “The Lie Factory”). The “Red Scare,” or fear of latent communist or socialist elements among the American population, provided a ready-made set of criticisms to deploy against proponents of national or state health insurance programs. The most fervent opponents of national health insurance, physicians and private insurance companies, capitalized on these widespread fears and labeled any form of government involvement “creeping socialism” (Hamovitch 281). Public opinion on government regulation of health services was generally positive; however, the public was unsure what form it wanted this involvement to take. Organizations such as the American Medical Association (AMA) and congressional Republicans manipulated public opinion and frightened significant portions of the populace.

The rise of political consulting firms also helped opponents of national health care mount effective public relations campaigns. Early health care battles represented one of the earliest instances of special interest campaigning. The AMA assessed a $25 fee on all of its members to pay “Campaigns, Inc.,” one of the first political consulting firms (Quadagno). Campaigns, Inc. had no qualms about quote misattribution, out-of-context “facts,” and outright falsified statistics. Its efforts successfully derailed at least five attempts at either state or national health insurance plans. Physicians who dared to publicly support reform efforts were expelled from the AMA and lost their admitting privileges at local hospitals.

As seen in the ongoing debate over the Affordable Care Act, rhetoric surrounding government intervention in health care has remained remarkably similar. The coincidence of the “Red Scare” with nascent efforts to reform health care allowed opponents of reform to permanently link communism and socialism to questions of government health care provision.


Works Cited

Hamovitch, Maurice B. “History of the Movement for Compulsory Health Insurance in the United States.” Social Service Review 27.3 (Sept. 1953): 281-99. JSTOR.

Lepore, Jill. “The Lie Factory.” The New Yorker (Sept. 24, 2012), available at http://www.newyorker.com/reporting/2012/09/24/120924fa_fact_lepore.

Quadagno, Jill. One Nation Uninsured: Why the U.S. Has No National Health Insurance. New York: Oxford University Press, 2005. Ch. 1 (Blackboard).


Girl Talk: How to identify gender by online speech patterns

Do patterns of online political discussion differ based on the gender of the writer? One of the keys to answering this question may be LIWC, or Linguistic Inquiry and Word Count, “a computerized text analysis program that categorizes and quantifies language use” (Kahn 263). LIWC analyzes text by recognizing words and grouping them into different categories. For example, “I” and “me” are grouped into the “self-referential words” category, while verbs like “think” and “believe” are grouped into the “cognitive processes” category. These categories range in specificity from broad language descriptors like “affect” to specific emotions and topics like “sadness” and “occupation.”
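To make the mechanics concrete, here is a minimal sketch in R of dictionary-based category counting in the spirit of LIWC. This is not the LIWC program or its licensed dictionaries; the word lists and sample sentence are illustrative only.

# Toy category dictionaries (illustrative, not LIWC's actual word lists)
categories <- list(
  self_referential    = c("i", "me", "my", "mine"),
  cognitive_processes = c("think", "believe", "know", "consider")
)

text <- "I think the bill will pass, but I believe my senator disagrees"
words <- strsplit(tolower(text), "[^a-z']+")[[1]]

# Count how many words in the sample fall into each category
sapply(categories, function(dict) sum(words %in% dict))

The real LIWC dictionaries are far larger and hierarchically organized, but the counting logic is the same in spirit.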

LIWC will be especially useful for the Online Political Discussion Computer Science team as we begin working with our 2008 Twitter data set. We will use hashtags that co-occur with #politics to create a social network diagram of political discourse. For example, each node will be a tweet, and it will be connected to every tweet with which it shares a hashtag. Overlaying LIWC data on the social network diagram will show how the language content of tweets maps onto the network. Specifically, I hope to use LIWC to focus on the relationship between gender and online political discussion. However, Twitter metadata does not disclose the gender of tweet authors. Instead, I will use LIWC to analyze the language patterns of tweets to infer the gender of Twitter users.
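As a rough illustration of the network construction, here is a minimal sketch in R using the igraph package; the tweet IDs and hashtags are made up, not drawn from our data set.

library(igraph)

# Each tweet is represented by the set of hashtags it contains
tweets <- list(
  t1 = c("politics", "obamacare"),
  t2 = c("politics", "economy"),
  t3 = c("obamacare", "healthcare"),
  t4 = c("economy")
)

# Connect every pair of tweets that shares at least one hashtag
pairs  <- combn(names(tweets), 2)
shared <- apply(pairs, 2, function(p)
  length(intersect(tweets[[p[1]]], tweets[[p[2]]])) > 0)

g <- graph_from_edgelist(t(pairs[, shared]), directed = FALSE)
degree(g)  # a simple centrality measure for each tweet
plot(g)

Running a centrality measure like degree() on the real network would be a first cut at the facilitator question raised below.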
How do we differentiate the language patterns of males and females? This is a question that both linguists and feminists have confronted for years. Second-wave feminist writers tackled this question using the language of power and powerlessness. In “Discourse Competence: Or How to Theorize Strong Women Speakers,” Sara Mills argues that the linguistic elements that make women’s speech different from men’s, like expressions of uncertainty and reliance on verbal fillers, are not unique to women but are expressions of submissiveness (Mills 4). At the same time, Mills writes that women act as the facilitators of conversation. Instead of steering the course of conversation, women tend to do the “repair-work” of the conversation by asking questions and avoiding awkward silences (Mills 5). It should be noted, however, that some of the feminist writings of the 1970s are more theoretical than quantitative. In Language and Woman’s Place, a text on the linguistics of gender that was ground-breaking in the 1970s, the author admits that “the data on which she bases her claims have been gathered mainly through introspection: she examined her own speech and that of her acquaintances, and used her own intuitions in analyzing it” (Lakoff 46). Nonetheless, these theories of the linguistics of gender create a useful framework for discussing online political discourse. For example, if women truly are the “facilitators” of conversation, will female-authored tweets have higher measures of centrality? Or does the nature of online communication eliminate the need for conversation facilitators, in which case one might predict the marginalization of female-authored tweets? Or does Twitter, a female-dominated social media site, represent a completely different paradigm for female speech?
While these questions form a good framework for theorizing about gender in online political discussion, there is still the issue of analyzing tweets for gender. For that, I look to Koppel et al.’s work on automatically categorizing written texts by author gender (Koppel 401-412). Koppel and his team used a comprehensive list of words and grammatical patterns to create an algorithm that could predict the gender of a text’s author with roughly eighty percent accuracy. Although Koppel did not use LIWC in his algorithm, his team’s methods will inform how I manipulate LIWC, which allows users to add words or expressions to its dictionaries.
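To give a flavor of this kind of approach, here is a minimal sketch in R of a toy linear scoring rule over a few function-word counts. The feature words and weights are entirely hypothetical and are not Koppel et al.’s actual feature set or model.

# Hypothetical weights: positive leans "female," negative leans "male" (toy rule)
weights <- c(you = 1.2, with = 0.5, the = -0.8, of = -0.4)

score_author <- function(text) {
  words  <- strsplit(tolower(text), "[^a-z']+")[[1]]
  counts <- sapply(names(weights), function(w) sum(words == w))
  sum(counts * weights)  # sign of the score gives the toy prediction
}

score_author("The point of the essay is clear")  # -2: the toy rule guesses "male"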

Works Cited

Kahn, Jeffrey H., Renée M. Tobin, Audra E. Massey, and Jennifer A. Anderson. “Measuring Emotional Expression with the Linguistic Inquiry and Word Count.” The American Journal of Psychology 120.2 (2007): 263. Print.

Koppel, M. “Automatically Categorizing Written Texts by Author Gender.” Literary and Linguistic Computing 17.4 (2002): 401-412. Print.

Lakoff, Robin Tolmach. Language and Woman’s Place. New York: Octagon Books, 1976. Print.

Mills, Sara. “Discourse Competence: Or How to Theorize Strong Women Speakers.” Hypatia 7.2 (1992): 4-17. Print.


End of the Year Reflection

After spending most of the fall semester engaged in what the Obamacare team has come to affectionately term “data-hazing,” I was looking forward to starting my independent data project and moving on to more engaging elements of the research process. Then I met R. R is a free software program that allows researchers a great deal of data analysis freedom. So much freedom, I might add, that learning the program aptly reflects the well-known saying, “give ’em enough rope and they’ll hang themselves.” The beauty of R is that you can tell it to do anything, as long as you know the command. However, therein lies the greatest challenge as well. The first few sessions were a haze of parentheses, brackets, and red error messages. With the patient help of Professor Settle, Meg, and Taylor, the lab gradually became more acclimated to R. Although the learning process could be tedious, successfully entering commands felt like a huge victory.

One of the major lessons I have learned about the research process is that it is often long, disappointing, and painfully slow. However, these characteristics also make the small pay-offs along the way incredibly satisfying. Over the course of the past year, I’ve had to grow used to scaling back my expectations, then scaling them back a little more, and then adjusting them perhaps one more time. There are tangible things I’ve learned working in the lab, such as how to create a bar graph in R, but there are also so many intangibles that I may not be able to neatly fit on a line in a resume. Growing used to the slow and obstacle-riddled research process has been one of those invaluable intangibles. As I prepare to begin my senior year of college, I will need to remember the importance of remaining flexible and keeping an open mind about the future. While I am excited to start putting my new-found R skills to use for my independent research project this summer, I am even happier about undertaking an independent project (and senior year!) with a better, continually evolving attitude about the process of research itself.
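For the curious, the bar graph skill mentioned above amounts to something like the following minimal sketch in base R; the party counts here are made up for illustration, not lab data.

# Hypothetical counts of states by governor's party
party_counts <- c(Democrat = 26, Republican = 22, Independent = 2)

barplot(party_counts,
        main = "Governors by Party (hypothetical data)",
        ylab = "Number of states",
        col  = "steelblue")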

Reflection on the Lab Experiment Team

This semester, I was on the lab experiment team. My job was the preparation of the video stimulus. This turned out to be more difficult than I expected.

The idea is that this stimulus will consist of a set of political videos and a set of apolitical videos. While the stimulus is presented to the subject, the subject’s physiological reactions will be monitored with the BioPac hardware.

The process obviously began with selecting videos. The main difficulties were finding videos that were practically equivalent in their levels of contention while varied in their political leanings, and matching them with equivalently contentious apolitical videos. It was easy, for instance, to find contentious videos about Obamacare, but considerably more difficult to find direct confrontation on, say, abortion.

An additional, unexpected hurdle this semester has been file compatibility. Videos are a tricky medium, file-type-wise. The world is just barely getting over the .avi file. Copyright holders are scrambling to prevent users from using old file types, so users have to convert to newer ones and buy new copies of old content. But not all systems (web-based systems especially) are equipped to handle the newer file types that are replacing .avi and its older companions. In the end, we had to work around this issue by uploading to YouTube (which is up-to-date) and embedding our YouTube uploads instead of embedding the files natively.

Anyway, we ended up with: on the political side, two Obamacare clips, a clip from Occupy Wall Street, and a clip from a pro-choice rally; on the apolitical side, two Jerry Springer Show clips, an altercation between UC Berkeley students and police, and an atheism vs. intelligent design debate.

To get the strongest, most measurable results, we had people watch the videos and code for the most contentious portions. We cut the videos down to just these segments (to conserve participant time and prevent physiological responses from dwindling over the course of the stimulus). These snippets were pilot tested for equivalence on MTurk, and the three most evenly matched videos from each set were chosen.


As it stands, we now have six videos embedded in PowerPoint presentations, in both political-first and apolitical-first orders. These presentations are ready for pilot testing.


Here are the videos, if you want to check them out: https://www.youtube.com/channel/UCOzJ6wqmouI5VJpYv5ir6NQ/videos

The PowerPoint presentations are on the shared drive.

On Quantitative Data, or How to Confound a Philosopher

“Suppose, now, that we wished so to organize our moral discourse that we did not accept the must implies ought principle…
In that case we would have both
Np
and either
O~p
or
P~p
where “P” represents permission and is connected to obligatoriness by the rule
Op ≡ ~P~p” (Wilson, 1984, p. 54).

Until this lab, this was my idea of analysis. Coming out of high school with policy debate under my belt and a philosophy major in my sights, I had no idea that I would be manipulating numbers. The closest I had gotten (or ever planned to get) to quantitative data was dealing in passing with “utility,” which even the most steadfast utilitarians will readily admit is only quantifiable in principle.

When I found the website, I was excited. “Working in the SNaPP Lab is a great way to get experience conducting research to prepare you to conduct your own project. If you are interested in political behavior—and specifically in the role of innate dispositions, social networks, or social media to influence political behavior—you should consider getting involved in the lab” (“Projects,” 2014, Fostering Research Opportunities for Undergraduates section, para. 3). Innate dispositions! Social networks! Political behavior! RESEARCH EXPERIENCE!!! It was everything I wanted to explore academically that wasn’t strictly philosophy. It couldn’t have been better.

Somehow, though, I missed just how quantitative it all was. I missed every mention of R, every mention of data, every mention of statistical analysis. Honestly, I don’t know what I thought the lab did; I was just sort of blindly excited about it. Had I read deeper, and had I realized the data focus, I might have been too scared to apply.

For maybe the first time in my life, I’m glad I didn’t read very deeply. Missing out on this would have been a horrible mistake.

While statistical analysis isn’t exactly my passion now, I find myself engaging with scholarship I never would have before. Quantitative linguistics papers about word distribution in childhood input, papers about quantitative analysis of incidence of cosmological properties in possible string-theory worlds… The list goes on. This experience has opened me up to a whole new type of scholarship, across disciplines, which prior to participation in this lab, I was not capable of appreciating.


While the year in SNaPP Lab wasn’t at all what I was expecting (due to my failure to read), I am glad it turned out the way it did, and I’m glad to have the year of experience. It’s been a great one.


References:

Projects. (2014). Retrieved May 4, 2014, from http://snapp-lab.wm.edu/projects.html

Wilson, F. (1984). Hume’s cognitive stoicism. Hume Studies, 10, 52-68. Retrieved from http://www.humesociety.org/hs/issues/10th-ann/wilson/wilson-10th-ann.pdf

Technology Can’t Change Politics

A surprising number of academic articles about the internet, particularly those written in the early 2000s, refer to internet technology as a transformative tool with the potential to fundamentally alter American politics. Unfortunately, it seems that technological innovation isn’t sufficient to spur political reform. The past 30 years have seen enormous technological change, including the widespread adoption of personal computers, the internet, and cell phones. These technologies have had a profound impact on the ways in which we interact with others and perform daily tasks. And yet our political systems remain unchanged. The political debates and challenges of 1994, or even 1984, seem remarkably similar to those of 2014.

This, I believe, is due to the powerful effect of institutions. America’s political institutions create an incentive structure for politicians. New technologies can change how we elect people, who we elect, how we interact with elected officials, etc. But in some sense these changes are superfluous. Whether you use social media to help elect Barack Obama or print media to put George H. W. Bush in the White House, there will still be two dominant parties. Money will drive political outcomes. Capital will accrue wealth faster than labor, leading to inequality. One Senator will have the ability to block entire pieces of legislation. Organized political minorities will have a greater influence than apathetic majorities. Technology, however great its effect on our personal lives, is largely unable to alter the incentive structures America’s political institutions create.

The fight to reform institutions is not a battle that can be resolved through technology. Rather, it is a struggle that involves political and philosophic debate. Technology cannot alter the fundamental inequalities of power and wealth that distort political outcomes and make change so challenging. To pretend otherwise merely obfuscates the real issues and makes effecting positive social change more difficult for everyone involved.

What’s So Bad About Polarization?

The increasing use of the internet as a communications tool has fundamentally changed the way Americans discuss politics. Whereas people once bashed politicians in the local barbershop, they can now do so on social media sites with people around the world. Some have posited that this will naturally diversify citizens’ political networks, thus enhancing the democratic process. A review of the literature, however, casts doubt upon this optimistic view. Instead, the future looks much like the past: research indicates that the internet likely increases polarization by allowing citizens to more easily self-select into ideologically homogeneous groups (Bienenstock et al. 1990; Garner and Palmer 2011).

Take, for example, Facebook, America’s most popular social networking site. Facebook is designed so that you see content primarily from the people whose pages you visit the most, i.e., your closest friends. Your closest friends tend to be very similar to you, including in their political beliefs (McPherson et al. 2001). As such, if it is true that people self-select into homogeneous networks on Facebook, online political discussion will lead to greater partisanship and polarization.

But could increased polarization actually be a positive development for American democracy? While closed discussion among a partisan, polarized group of people might seem like a negative thing, studies indicate that polarization actually leads to more informed and consistent voters (Levendusky 2010). Despite its negative connotations, polarization motivates citizens to become more politically engaged and knowledgeable; it also serves as a powerful heuristic that allows ordinary citizens to easily understand complex political issues. Perhaps the danger to our polity lies not so much in polarization itself as in broken political institutions that are unable to accommodate polarized parties. Unfortunately, that is a problem whose solution can only come from political imagination and will; an internet connection will not suffice.

Obamacare Team Update and Looking Back on my time in SNaPP

Well, the semester has come to a close, and the Obamacare Team is happy to report that we had a very productive semester. Together, Joanna, Will, and I completed our article collection project, collected troves of new data, and made lots of headway on our individual projects.

As we reported before, we spent much of this semester collecting new data for use in group and individual research. This data set includes three categories of variables: policy (collected by Joanna; measures what states did in response to the ACA), health/demographics (collected by Will; includes a number of measures of the health and characteristics of state populations), and political (collected by me; includes measures of the ideological climate of states).

Once we completed that, we set about working on our individual projects. Will and Joanna will report more on theirs in the future, as both are spending time this summer on their projects. As a senior, however, I completed my project on measuring state ideology (see my earlier post for more details).

My departure from the SNaPP Lab (and from W&M) has made me want to reflect on my time in the lab. I don’t want to just take this opportunity to tell you I learned a lot, though, or that I completed some really interesting research projects (even if they were super interesting). Instead, I want to encourage all current and future liberal arts students out there with a will to learn and a topic they’re passionate about to get out there and find research to do! Even if your final product or result isn’t what you expected (and trust me, it usually isn’t), you can learn so much about your field and about scholarly work in general by rolling your sleeves up and doing research. The transferable skills you learn are invaluable, and you may find your efforts turning into an honors thesis, a published article, or even a job.

I also want to encourage those students doing research now or in the future to stick with it. There were times these past few years when I ran into giant brick walls I was sure were insurmountable. I remember distinctly the day this past summer I learned that my project as I had designed it was completely infeasible. Yet I stuck with it, adapted to the challenges I ran into, and ended up learning so much about research and more.

Finally, a word of advice to my fellow current and future Government/Public Policy lovers: do research! I know there is a temptation among students in our field to avoid methods courses and research work like the plague. Oftentimes we would rather read Politico articles and talk about Democrats and Republicans than sit down and complete a research project. The benefits of doing projects like these, however, are tremendous, and even if you don’t pursue a project of your own, please consider taking as many methods courses as you can. I am so glad that I took the research courses and completed the projects I did, because they taught me an immense amount not just about research, but also about how to approach complex problems more methodically and successfully.

So if you know a favorite professor of yours is looking for research assistants, or you have the opportunity to apply for a summer research grant, don’t hesitate because you’re worried it would be too hard or that you wouldn’t learn from the experience. If you approach your research with enthusiasm and dedication you will learn an incredible amount, I promise.

A Reflection on the Year and Research Plans for the Future

As the semester comes to a close, I have been reflecting on my first year as a SNaPP Lab RA. I can certainly say that my experience in the lab has provided me with an incredible skill set that will surely come in handy not only throughout the rest of college but also in the real world. I collected data for the Obamacare Team and have learned how to wade through databases to find important information. I have developed a working knowledge of R that I will continue to build on. I have written a grant proposal and developed my own unique research project. In addition to these tangible results, I have noticed myself developing better analytical and problem-solving skills. Working in the lab has provided a wealth of opportunities to learn in a unique, hands-on way that allows me to learn by doing.

I also began work on my individual project this semester. My project aims to explore the relationship between biased newspaper coverage of the ACA and the corresponding newspaper readership’s ideology. Thanks to generous funding from the Charles Center, I will be able to continue work on my project throughout the summer! I will also be helping the Social Anxiety Team this summer by helping to proctor their lab experiment. Overall, I have developed numerous new skills because of the SNaPP Lab, and this summer promises to be an invaluable opportunity to get my hands dirty in a project that I have developed on my own! Below is a copy of the working abstract for my summer research project. I will continue to blog about my summer research experience on the Charles Center Summer Research Blog.

Working Abstract for Summer Research:

I will be studying the relationship between biased local newspaper coverage and the political ideology of newspaper readerships. I aim to answer the question: does biased local newspaper coverage of ideologically contentious legislation correlate with an ideologically biased newspaper readership? To explore this question, I will analyze newspaper coverage of the Affordable Care Act in California, Texas, and Florida. Using original data collected from newspaper articles covering the Affordable Care Act written in August 2009 (the height of the health care debate), I will analyze the recurrence of ideologically charged keywords. An abundance of specific ideologically biased keywords in an article will indicate the ideological biases of the publishing newspaper. Examples of these keywords are “ration” and “public option.” Throughout the health care debate, conservatives emphasized the potential “rationing” of health care, while liberals avoided the term because of its negative connotation. Therefore, “ration” will likely appear more often in conservative newspapers aiming to highlight problems with the ACA and promote a conservative argument. The same concept applies to the keyword “public option,” a frequent element of the liberal argument in support of the ACA.

After determining the ideological biases of specific newspapers in Texas, California, and Florida, I will focus on understanding the relationship between these newspapers and the ideology of their readerships. I will analyze readership ideology using local election results and DW-NOMINATE scores for representatives whose districts overlap with the readership area. By analyzing a liberal (CA), a conservative (TX), and a moderate (FL) state, I will better understand the relationship between local newspaper ideology and readership ideology across the American political spectrum.
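As a rough illustration of the keyword-counting approach described in the abstract, here is a minimal sketch in R. The keyword lists and sample article are illustrative only, not my actual coding scheme.

# Hypothetical keyword lists for each ideological direction
conservative_keywords <- c("ration", "government takeover")
liberal_keywords      <- c("public option", "uninsured")

# Count total occurrences of a keyword list in an article's text
count_keywords <- function(article, keywords) {
  text <- tolower(article)
  sum(sapply(keywords, function(k) {
    hits <- gregexpr(k, text, fixed = TRUE)[[1]]
    if (hits[1] == -1) 0 else length(hits)
  }))
}

article <- "Critics warn the bill will ration care; supporters point to the public option."
count_keywords(article, conservative_keywords)  # 1
count_keywords(article, liberal_keywords)       # 1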


Measuring State Ideology and My Research Journey this Semester

My research journey this semester was full of twists, turns, and surprises. I began the semester finishing up article collection for the Obamacare media project, and before long I had transitioned into collecting group data for my team’s project. My original intention was to pursue a project investigating framing of the Affordable Care Act (ACA) by elites and average Americans, but I terminated that project when I ran into the brick wall of unavailable data.

Then Professor Settle steered me towards a new project, one that I embraced and made my focus for the semester: measuring the ideology of states.

This subject was one I had experience with. Throughout the Obamacare team’s quest to identify a coherent research focus, we kept stumbling across the need to figure out how to measure the ideology of states. I had read several articles on the matter and found the subject interesting. To me, the challenge of measuring the political climate of a state represented a fascinating opportunity to test my ability to take a complex phenomenon and condense it into a working measure. I embraced the challenge and ran with it.


The first step was to look at how scholars had operationalized state-level ideology in the past. My review of the literature turned up, among others, two landmark studies that correspond to the competing theories of how to develop a good measure. The first was published by Robert Erikson and his colleagues in the September 1987 issue of the American Political Science Review. This article laid out an approach that relied on disaggregating national polling results to get state-level indicators of partisanship. The second, published in 1998 in the American Journal of Political Science by William Berry and his colleagues, presented a methodology focused on aggregating various indicators of elite ideology within states, including interest group evaluations of elites, the partisanship of the state legislature, and more. I chose to follow a methodology modeled after Berry and his colleagues, largely because of the data available and the kind of analysis I wanted to conduct.

Having determined my procedure, I gathered my data. I chose to focus on four indicators of state ideology: the party of the governor, the partisan makeup of each state’s upper and lower houses, and the average DW-NOMINATE score of each state’s U.S. Senators (all data were from August 2009, the time frame for the articles collected by the Obamacare team). I had collected data for these variables earlier in the semester, and they seemed to be relatively strong predictors of ideology. The measures were structured so that each score fell between -1 and 1, with negative values indicating a more liberal (Democratic) lean and positive values a more conservative (Republican) lean (exception: the party of the governor was coded such that a state with a Democratic governor received a -.25 and a state with a Republican governor received a .25). I then summed the four measures for each state and divided by four to arrive at my ideology score for each state.
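A minimal sketch in R of this averaging, using hypothetical values for a single state (the four indicator values below are made up for illustration):

# Hypothetical indicator values for one state, each on the scales described above
governor    <- -0.25  # Democratic governor
upper_house <- -0.40  # partisan makeup of the state's upper house
lower_house <- -0.20  # partisan makeup of the state's lower house
dw_nominate <- -0.35  # average DW-NOMINATE score of the state's U.S. Senators

# Composite ideology score: negative leans liberal, positive leans conservative
ideology_score <- (governor + upper_house + lower_house + dw_nominate) / 4
ideology_score  # -0.3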

To verify my results, I compared them with my comparison variable, presidential vote share in the 2008 election (as this election was closest to the 2009 time frame of my other data). My comparison variable was coded such that states that went blue in 2008 received a negative score between -1 and 0, while those that went red received a positive score between 0 and 1. The scatterplot, which I unfortunately was unable to upload, showed a strong positive correlation between the two variables.
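Since the actual scatterplot could not be uploaded, here is a minimal sketch in R of how such a plot and correlation check can be produced; the five state scores below are made up, not my actual results.

# Hypothetical composite ideology scores and signed 2008 vote codings
states <- data.frame(
  ideology  = c(-0.45, -0.10, 0.05, 0.30, 0.50),
  vote_2008 = c(-0.60, -0.15, 0.10, 0.25, 0.55)
)

plot(states$ideology, states$vote_2008,
     xlab = "Composite ideology score",
     ylab = "2008 presidential vote coding",
     main = "Ideology measure vs. 2008 vote")

cor(states$ideology, states$vote_2008)  # strength of the relationship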

I arrived at the conclusion that my methodology, while not perfect, was a step in the right direction in terms of measuring ideology within a state. Granted, there is significant room for improvement in this research design. For instance, my analysis relies on the assumption that presidential vote share in 2008 serves as a valid comparison measure for the ideology scores I came up with. I believe, however, that my work this semester can serve future members of the SNaPP Lab, and the Obamacare team specifically, in their future research.