Discussion Questions for Approaches and Methods

 

        I.  Approaches and Methods:  How Can Public Opinion Be Measured and Studied?

 

Experiments:

  1. James Druckman, et al. (Eds.), Cambridge Handbook of Experimental Political Science. Cambridge, 2011. Chs. 1-4, 36. 
  2. David Sears. 1988. “College sophomores in the laboratory: Influences of a narrow data base on psychologists' views of human nature.” In Letitia Peplau, et al. (eds.), Readings in Social Psychology. (A classic critique)

Opinion Research:

  1. Roger Tourangeau. 2004. “Survey Research and Societal Change.” Annual Review of Psychology, 55: 775–801.
  2. John Zaller and Stanley Feldman. 1992. "A Simple Theory of the Survey Response: Answering Questions or Revealing Preferences?" American Journal of Political Science, 36(3): 579-616. (Focus on the theory and the Summary & Discussion; skim the analysis [pp. 587-606]). (On the findings, see Ansolabehere et al. 2008. APSR, below).
  3. The Handbook of Attitudes, Chs. 2, 5, 18.
  4. Brian Gaines, James Kuklinski, and Paul Quirk. 2007. “The Logic of the Survey Experiment Reexamined.” Political Analysis 15(1): 1-20.

       

        Additional Readings:

 

Intro comments: Donald Kinder. “Attitude and Action in the Realm of Politics.”

1.   Kinder’s Handbook chapter is a useful summary of broader trends and questions in the study of public opinion and political behavior, and it will serve as a review when we take up various topics in more detail later in the course. Note the social psychological bent of Kinder’s perspective, and the more limited treatment of self-interest or rational choice perspectives in his summary of the literature. One might ask: why do rational choice models appear less useful as the analysis becomes more micro? When might rational choice models be more useful, even necessary, in explaining political behavior?

James Druckman, et al., Cambridge Handbook of Experimental Political Science. Cambridge, 2011. Ch. 1, “Experimentation in Political Science.”

1.     Wow! What a great book, at least the chapters I’ve assigned!

2.     As a mental warm-up, think of an area of research that you’d like to investigate, and propose four different research designs to shed new light on the phenomena of interest: an observational study (not using any type of experiment), a lab experiment, a survey experiment, and a field experiment. What are some of the strengths and weaknesses of each design? Which is best, in your view, and which is worst? (PS: Don’t use media effects, since this was the running example throughout the chapter.)

3.     What do the authors mean by “[t]he growing interest in experimentation reflects the increasing value that the discipline places on causal inference and empirically-guided theoretical refinement”? How do experiments help achieve these two goals?

4.     How often do we hear political scientists say, “Boy, I wish PS were more like Psychology” or “I wish it were more like Economics”? Those fields have rigorous theories and methods and seem to have more credibility with the public. For example, Senator Tom Coburn, Republican of Oklahoma, known as Dr. No, didn’t introduce an Amendment to Eliminate Economics or Psychology from NSF Funding, only Political Science. Although APSA bragged that Coburn's amendment was soundly defeated by a margin of 36-62, a lot of Republicans and both KY senators voted “yea.” Setting aside the question of whether the Amendment was politically motivated, to what extent are any self-perceived weaknesses of political science due to method, theory, or substance, or some combination of the three (e.g., the match between theory and method)? Or is this characterization completely unfounded? (Note the comments of Jeff Gill last year.)

5.     What other special functions do experiments serve, according to Roth, and why are they able to provide them?   

6.     What do Druckman and Lupia (2006) mean when they say that “[c]ontext, not methodology, is what unites our discipline...,” and how does this help or hurt the case for encouraging experiments?

7.     What are the methodological challenges of each type of experiment and how severe are they?

8.     How do experiments typically differ in psychology and economics? (Remind me to tell you about Eileen Braman’s experience at IU.)  

James Druckman, et al., Ch. 2, “Experiments: An Introduction to Core Concepts”

1.     What is the fundamental problem of causal inference?  In the example of studying the effects of watching a presidential debate, why can’t we just use observational research to determine the effect of debate-watching by comparing the post-debate behaviors of viewers and non-viewers? Why aren’t the statistical controls introduced by observational methods such as multiple regression analysis or matching adequate to eliminate other potential causes of the observed differences? 

2.     What’s the difference between random assignment and random sampling? Between a between-subjects design and a within-subjects design? Between internal and external validity?

3.     In the section on “Documenting and Reporting Relationships,” the authors discuss the difficulties of “pinpointing mediators and moderators” using experiments.  Is doing this harder or easier when using experiments and why?

4.     What is the problem of publication bias discussed by the authors, and how does it interfere with the accumulation of reliable knowledge?

5.     Has anyone dealt with the IRB at UK?

6.     Can someone please point out the inadequacies of the Neyman-Rubin Causal Model?

Rose McDermott, “Internal and External Validity,” Ch. 3

1.     How do different social science disciplines weight the importance of internal versus external validity, and why? Why are political scientists obsessed with external validity?

2.     If you had to boil down McDermott’s warnings about ways to improve internal and external validity to a handful of dictums, what would they be? 

3.     Make an argument for what’s more important—internal or external validity? Which should political scientists care more about, and why? Is there always a trade-off between the two types of validity, or is it possible to have the best of both worlds?  What are the benefits of placing a greater premium on internal validity, according to McDermott?     

4.     What’s the difference between experimental realism and mundane realism, and which is more important? Why?

David Sears. 1988. “College sophomores in the laboratory…”

1.   According to Sears, a social psychologist who uses survey data, what are some of the problems associated with the heavy reliance on experiments in social psychology? Is the problem Sears is addressing due more to the subject population, or the method of experimental research? Are the problems more severe for political scientists who study certain types of political behavior? Why?

2.   To turn Sears’ analysis around, how does an exclusive reliance on survey research limit the theories and explanations that can be developed in political behavior?

3.   I’m not sure Sears himself has ever done an experiment. Even when some of his ideas (e.g., symbolic racism) would be better tested with experiments, he always uses correlational methods. The result is that his measures and findings are lacking in internal validity.

 

Druckman & Kam, “Students as Experimental Participants: A Defense of the ‘Narrow Data Base,’” Ch. 4

1.     Most political scientists have memorized Sears’ critique of using college sophomores in experiments because it squares with their own training and biases, which emphasize external validity and observational methods and allow them to dismiss such experiments as lacking external validity. Is this fair?

2.     What arguments can be made in defense of using student participants? 

3.     Again, how should one assess different aspects of external validity and realism, and how should this influence the way political scientists evaluate experiments?

4.     Under what conditions should we be more or less worried about experiments that use student participants? What kinds of treatment effects, under what types of conditions, are more or less worrisome? Make an argument for why “the burden of proof should be shifted from the experimenter to the critic” when using student participants.

5.     Why should researchers who rely on survey or field experiments not be so sanguine about the superiority of their method over student experiments?

Donald Kinder, “Campbell’s Ghost” 

1.     Remind me to tell you about my experience with student experiments, to help prepare you for what Campbell called a “poverty of results” because experiments can’t compensate for poor measurement or muddy conceptualization.

2.     Is Kinder a hypocrite, a curmudgeon (his term), or does he just prefer to impede progress by calling for caution in using experiments in political behavior research? In another article on the study of framing, Kinder has said: “Enough already with the experiments!” (“Curmudgeonly Advice,” Journal of Communication 57 (2007): 155–162).

3.     According to Kinder and Palfrey (1993), what is “triangulation across multiple methods” and what are some of its advantages?

4.     Remind me to tell you about reviewers’ comments for Political Behavior.    

 

Tourangeau. 2004. “Survey Research and Societal Change”

 

1.     Why does Tourangeau describe “falling response rates as the greatest threat survey researchers have faced in the past 10 years”? What are some of the causes of nonresponse? Is it really a problem?

2.     Note some of the advantages and disadvantages of face-to-face interviews, CATI (computer-assisted telephone interviewing), CAPI (computer-assisted personal interviewing), CASI (computer-assisted self-interviewing), etc.

3.     What exactly are the problems with web surveys, and why do so many political scientists seem to be using them?  How successful have survey outfits like Knowledge Networks been in overcoming the problems of web surveys?  Note free access to web survey software (Qualtrics). For comparisons of Internet and RDD survey responses, see Malhotra and Krosnick (2007) and Chang and Krosnick (2008)—not required. 

4.     For more on theories of mode effects, see Krosnick’s chapter (Ch. 2) in The Handbook of Attitudes.

 

Zaller and Feldman, "A Simple Theory of the Survey Response…”

 

1.     This is a classic and heavily cited article that seeks to provide a new theory of the survey response and, in the process, offers something of a compromise between two views of response stability: Converse’s (the errors are in respondents) and Achen’s (the errors are in the measures). In developing their theory of the survey response, Zaller and Feldman distinguish between Converse’s and Achen’s explanations of response instability. What are the differences between these two explanations and the problems with each? In what ways does Zaller and Feldman’s model agree with and yet depart from each of these two explanations?

2.     What are the three axioms of Zaller and Feldman’s theory of the survey response and where do they come from?

3.     What are some of the broader implications of the theory for the way public opinion should be studied, for studying response stability, persuasion, and democracy? Are survey responses “real,” or just epiphenomenal constructions? How malleable or fixed is public opinion?

4.     Interestingly, some analysts have taken Z&F’s theory of the survey response as a model of how citizens form their opinions in the real world, not just in an interview. Is this reasonable? How does the model help to put the political environment and “politics” back into the study of public opinion? 

5.     Questions to ponder now and later:

a)    Pick an issue on which public opinion has moved or hasn’t moved and do your best to apply this theory to explain public opinion on this issue.

b)    How might you critique this theory? Does it have enough axioms? Do the deductions follow directly from the axioms?  Can it be tested rigorously? Can it be falsified?  

c)   How does the model help explain issue framing by elites? What implications does the model have for the fluidity of building coalitions of support or opposition among the public? What implications does it have for helping to explain media influence on public opinion?

d)    The model, which is admittedly sparse, borrows selectively from theories of information processing, attitude change, framing and so on.  If one advantage of the model is parsimony, what are some of the costs of relying on this more abbreviated model? More generally, what are some of the major problems with the model, as you see them, both theoretically and in its application?

6.     Note parallels between Zaller & Feldman and different theories of attitudes, such as…