Studying EU Attitude Formation Using Experiments

Chapter contribution to: Crossroad in European Union Studies: Research Methods in European Union Studies (eds. K. Lynggaard, I. Manners & K. Löfgren) (Palgrave Macmillan - forthcoming)

Author: Assistant Professor Julie Hassing Nielsen, University of Copenhagen

Introduction

The past decades have witnessed a true revolution in the use of experiments in political science (e.g. Druckman et al., 2011; Druckman et al., 2006; McDermott, 2002; Morton and Williams, 2010). Whereas experimentation has been used extensively as a methodological tool in the neighbouring disciplines of psychology and economics, political scientists have until now only reluctantly endorsed its opportunities. Concerns about external validity and representative sampling have been held against experimentation, leaving the approach second-ranked amongst research methods.

But what insights can an experimental research design yield in EU studies? And in what ways does it contribute further to the exploration of questions of interest to EU scholars? Experiments can, more precisely, help us explore questions of political psychology that might otherwise be endogenously embedded (i.e. there is a loop of causality between the independent and the dependent variables). These include, for example, how citizens react to EU politics, or how other psychological aspects play a role in opinions about the EU. These are amongst the questions I explore in this chapter. In particular, I focus on the new perspectives experiments give us for exploring questions of relevance to political psychology, providing an example of deliberation experiments from my own research. Additionally, I discuss the benefits and pitfalls of using experimentation as a research tool, emphasising how experiments can help us overcome problems of endogeneity.

The remainder of this chapter is organised as follows. First, I review the state-of-the-art use of experiments in EU studies. Then I conceptualise experiments by introducing Rubin's Causal Model and exploring which research questions are best answered by experiments. I then introduce the most commonly used experimental designs. In the last part of the chapter, I turn to the empirical investigation of the concept of deliberation, and show how an experimental design yields new insights into this particular EU debate. I then briefly discuss the benefits and pitfalls of experimentation, and finally suggest guidelines for new research designs exploring causality from the perspective of experimental research.

State-of-the-Art: The Use of Experimentation in EU studies

The use of experimentation within EU studies has so far been limited to a few specialised research areas. Experimentation in EU studies, as in political science in general, has mainly focused on political psychology, where election studies and studies of news framing in particular stand out as the predominant examples (in addition to deliberation studies).

Election studies and voter behaviour are core areas of interest in EU studies, where European Parliament (EP) elections continuously suffer from very low voter turnout and a general disinterest in EU affairs among the populations of the EU Member States. The use of experimentation in election studies has a wide span, encompassing candidate evaluation and advertisement (for example, Gadarian and Lau, 2011; McGraw, 2011), legislative voting (for example, Miller, 2011), as well as electoral systems and strategic voting (for example, Morton and Williams, 2011). It is only recently that a wider range of experimental designs, including survey, natural and laboratory experiments, has been applied in EU studies to gain more knowledge about political behaviour in EU elections in particular.

Examining the second-order election model of EP elections at the micro-level, Hobolt and Wittrock (2011) use experiments to explore individual-level voting choice in a controlled laboratory setting. Furthermore, relying on real-world party communication stimuli from the 2009 EP election campaign, Maier, Adam and Maier (2012) use cross-country survey experiments to explore the effect of party communication on EU support. Exploring the Danish (2000) and Swedish (2003) referendums on the adoption of the Euro as a natural experiment, Jupille and Leblang (2007) find cross-country differences in whether ‘economic calculations’ and ‘political community’ play a role in attitudes towards the policy.

Another area of research where experiments have been used within EU studies is news framing. Zaller’s (1992) pioneering study of the significance of news framing for political attitudes resulted in a wave of framing studies that have maintained forefront prominence. Within EU studies, framing has been used to understand the role of the media in shaping attitudes towards EU integration. As an example, Schuck and de Vreese (2011) explore why politically disaffected individuals are more supportive of referendums. Using a laboratory experiment, they find that negative tabloid news fosters support for referendums. Along similar lines, Schuck and de Vreese (2012) find that positive news framing tends to mobilise Eurosceptics to vote in referendums. In an earlier framing experiment, de Vreese (2004) finds that exposure to strategic news framing activates negative associations towards EU enlargement, just as de Vreese and Kandyla (2009) show that framing EU Common Foreign and Security Policy (CFSP) in terms of either risk or opportunity significantly impacts people’s support for CFSP integration. In a framing experiment on the Europeanisation of social welfare, Kumlin (2011) confirms the main “blame-avoidance” hypothesis: subjects receiving a positive EU frame not only become more EU-positive, they also become increasingly negative towards domestic politics. Finally, we know from a cross-country panel survey experiment from 2009 that it is not only exposure to positive or negative EU news that matters for the public’s alignment with the EU. Over time the effect of news accumulates; this accumulation effect is strong enough to be labelled the “time bomb effect of news” by Bruter (2009). In sum, political behaviour in EU elections and especially the framing of the EU in the news have been studied a fair amount in EU studies, with much of the research relying on experimentation as the preferred research method.

Research Questions and Theoretical Positions: Why Experiments?

Because many scholars and students currently explore the prospects and boundaries of the experimental design, it is useful to conceptualise what an experiment is, which types of experiments exist, and what types of research questions may benefit from being approached through experimental research designs. In this section, I account for these aspects, enabling the reader to grasp experiments more clearly as a research methodology, along with its philosophical (ontological) premises, and I present innovative avenues for its use within EU studies.

What is an Experiment?

The experimental method relies on the logic of causality. Causal questions are questions about cause and effect, and experiments are restricted to answering research questions of causality. Consequently, if we choose to use experiments, we get answers to questions such as what would have happened in the hypothetical counterfactual world where X did not lead to the observed outcome Y. Or, in other words, what would have been the outcome of the dependent variable if certain causes or conditions had not been in place. Applying the counterfactual logic to an example from EU studies, it would, for example, be interesting to know whether the Netherlands would have rejected the Constitutional Treaty in a popular referendum in 2005 had the French population not rejected the Treaty a few days prior to the Dutch referendum. Intuitively, most of us would assume that the context of a French rejection impacted the outcome of the Dutch referendum. Yet we do not know if this intuition is indeed true. We only know that the Dutch did indeed reject the Treaty in the context of the French “non”. In a hypothetical counterfactual world, we would be able to measure the outcome of the Dutch vote independently of the context of the French “no”.

At first glance, the counterfactual approach to causality implies an impossible research mission. If a certain cause X led to a certain outcome Y in real life, it is against the logic of nature to measure what the state of Y would have been had X remained ‘un-happened’. This is also known as the fundamental problem of causal inference (for example, Druckman et al., 2011, p. 16). Yet it is on this logic, formally known as Rubin’s Causal Model (RCM), that experiments are based. In RCM the causal effect of a particular cause (the independent variable) for each individual is measured as the difference between the individual’s outcomes in “two states of the world”: one state of the world where the condition was present, and one where it was not (for example, Morton and Williams, 2010, pp. 84-85). Theoretically, this is stated:

(1) δi = Yi1 – Yi0

Equation (1) shows RCM. The treatment effect δi for individual i is the difference between the outcome of individual i in one state of the world (Yi1) and the outcome of individual i in a state of the world where a certain condition or cause did not occur (Yi0). The cause, or the condition, is the independent variable you manipulate in experiments (called the experimental treatment). Equation (1) summarises RCM for one individual. Assuming more individuals - in experimental terms labelled subjects - take part in the same experiment, we are not only able to measure the discrepancy between an individual’s outcomes in two states of the world, as depicted in equation (1); we are also able to calculate the average treatment effect (ATE). The ATE is the most frequently reported experimental statistic. Extending equation (1), the ATE is as follows:

(2) ATE = E(δi) = E(Yi1) – E(Yi0)

The concept of ATE implicitly acknowledges variation in treatment effects across individuals (Druckman et al., 2011, p. 23). This variation highlights the problem of subjects’ self-selection into certain treatments, and hence the possibility of bias in the clean estimate assumed in RCM. To ensure that no such self-selection biases, or other kinds of biases, influence the treatment effect, experiments crucially rely on the principle of randomisation of treatments (for example, Druckman et al., 2011; Morton and Williams, 2010). Thus, what distinguishes experiments from observational data is the fact that the studied entities are randomly assigned to different treatments. Random assignment means that each entity (often these entities are individual subjects, but they can also be other entities or groups) has an equal chance of exposure to a particular treatment, so self-selection into treatment is avoided. This way, random assignment aims to avoid confounders or biases that might affect our estimates in equations (1) and (2).
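To make the logic of equations (1) and (2) and the role of randomisation concrete, the following minimal Python sketch simulates potential outcomes for a pool of subjects. It is purely illustrative: the outcome distributions, the assumed effect size and the self-selection rule are invented for demonstration and do not stem from any study cited here.

```python
import random

random.seed(42)

# Minimal simulation of Rubin's Causal Model. All numbers below are
# illustrative assumptions, not estimates from any real study.
N = 10_000

# Potential outcomes for each subject i: Yi0 (state of the world
# without the treatment) and Yi1 (state with the treatment). The
# individual treatment effect delta_i = Yi1 - Yi0 varies across
# subjects, as the ATE concept acknowledges.
subjects = []
for _ in range(N):
    y0 = random.gauss(50, 10)    # baseline outcome
    delta = random.gauss(5, 3)   # heterogeneous treatment effect
    subjects.append((y0, y0 + delta))

true_ate = sum(y1 - y0 for y0, y1 in subjects) / N

# Random assignment: every subject has an equal chance of treatment,
# so the difference in group means estimates the ATE (equation 2).
treated, control = [], []
for y0, y1 in subjects:
    if random.random() < 0.5:
        treated.append(y1)   # only ONE potential outcome is observed
    else:                    # per subject: the fundamental problem
        control.append(y0)   # of causal inference

est_ate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"true ATE {true_ate:.2f}, randomised estimate {est_ate:.2f}")

# Self-selection instead of randomisation: subjects with high baseline
# outcomes opt into treatment, confounding the naive comparison.
self_treated = [y1 for y0, y1 in subjects if y0 > 55]
self_control = [y0 for y0, y1 in subjects if y0 <= 55]
biased = (sum(self_treated) / len(self_treated)
          - sum(self_control) / len(self_control))
print(f"self-selected estimate {biased:.2f} (biased upwards)")
```

Under the self-selection rule the naive difference in means grossly overstates the true effect, which is exactly the bias the randomisation principle guards against.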

But how do experiments and their underlying counterfactual logic work in practice? Applying the logic to the question from before, we know that a majority of Dutch voters rejected the Constitutional Treaty in the early summer of 2005 - only three days after the French rejected the same Treaty under similar circumstances. What is interesting to know is whether the French “non” influenced the Dutch voters, and perhaps even determined the crucial outcome of a Dutch rejection of the Constitutional Treaty. To explore this experimentally, we need a group of Dutch voters prior to the Dutch referendum (i.e. we need to generate a mock referendum, as the real one has already taken place). We then develop a treatment based on the independent variable in question (i.e. information about the negative French outcome), after which we randomly assign the treatment to one group of the Dutch voters, while the un-treated group acts as a control, being provided with neutral or no information. To estimate the influence of the negative French information on the Dutch voting choice, we compare the voting choice of the voters who received the information treatment with that of those who did not. The outcome of this comparison is the ATE. Crucially, the treatment must be randomly assigned to the two groups of subjects. If not, we are unable to say whether the subjects self-selected into the treatment, and thus whether the treatment effect is confounded by other factors.
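A hypothetical version of this mock-referendum design can be sketched as follows. The vote probabilities and the sample size are invented assumptions for illustration, not figures from the actual 2005 referendums.

```python
import random

random.seed(2005)

# Hypothetical mock-referendum experiment. The probabilities below are
# assumptions made for illustration only.
def votes_no(receives_french_info: bool) -> int:
    """Return 1 if the subject votes 'no' in the mock referendum."""
    p_no = 0.55 if receives_french_info else 0.45  # assumed probabilities
    return 1 if random.random() < p_no else 0

treatment_group, control_group = [], []
for _ in range(2000):              # 2,000 mock Dutch voters
    if random.random() < 0.5:      # random assignment is crucial
        treatment_group.append(votes_no(True))   # told of the French 'non'
    else:
        control_group.append(votes_no(False))    # neutral or no information

# The ATE is the difference in the share voting 'no' between the groups.
ate = (sum(treatment_group) / len(treatment_group)
       - sum(control_group) / len(control_group))
print(f"Estimated ATE on the 'no' share: {ate:.3f}")
```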

Which Research Questions do Experiments Answer?

So experiments are suitable for answering questions of causality. Furthermore, exploring the relationship between causes and conditions through experimentation enables us to overcome the endemic problem of endogeneity. By using experimental methods we generate a research design that overcomes endogeneity problems: the experimenter allocates the treatment variable (the independent variable) and, through randomisation of treatments and the use of placebo and/or control groups, is able to assume that differences in outcome between two groups are due to the imposed treatment.

Certain research areas are more challenged by endogeneity concerns than others. In particular, questions related to political psychology have benefited from exploration through experimental methods. These include, for example, emotions and political attitude formation, which are often highly endogenous to the context in which one wishes to explore them (for more, see, for example, Druckman et al., 2011).

Experimentation has other benefits that distinguish it from other political science research methods. Where regular observational data like surveys rely on respondents’ self-reported answers, experiments rely on subjects’ actual measured behaviour. Under certain circumstances, measured behaviour may prove a more reliable source than self-reported behaviour for exploring respondents’ behaviour, psychological characteristics and personality traits.

Different Forms of Experiments

The logic of RCM and the principle of treatment randomisation are the core elements of experiments. Yet within the experimental realm different designs coexist, including natural experiments, field experiments, survey experiments, and laboratory experiments (henceforth lab experiments). These four designs differ in terms of (1) the setting in which the experiment is carried out, (2) the extent to which the researcher is involved with treatment allocation, and (3) the researcher’s ability to control intervening factors.

A natural experiment is a study where the assignment of treatments to subjects or entities is haphazard, occurring by natural chance. As a result, treatment randomisation and allocation are not conducted by the researcher, but assumed. Because the mechanisms assigning treatments are naturally occurring, natural experiments rely on treatment randomisation assumptions labelled ‘as if random’ (for example, Dunning, 2008). Consequently, natural experiments rely on post hoc treatment identification and on control groups that might be comparable according to the treatment randomisation principle. Yet, as these groups are constructed post hoc, the randomisation assumptions have to be extensively accounted for (Sekhon and Titiunik, 2012). The fact that natural experiments are situated in the subjects’ natural environment imposes both weaknesses and strengths on the design. In the natural experiment, treatment randomisation cannot be strictly controlled and, consequently, it needs to be theoretically (and convincingly) accounted for. Still, natural experiments rank highly on the different validity concerns normally held against experiments, including mundane realism (i.e. how realistic the experiment is vis-à-vis the real-world events it measures). I will return to this below.

Field experiments are also situated in the subjects’ natural environment. But here the researcher allocates treatments, and thus secures treatment randomisation (for example, Gerber, 2011). Hence, field experiments are attractive because researchers are in control of the treatment randomisation, while the treatments are still allocated in the subjects’ natural environment. If, for example, we wish to study the effect of different ways of canvassing on the propensity to vote, the best way would probably be to allocate treatments (i.e. canvassing in various forms) to households while including non-treated households as controls. Here, we would expect a better estimate than in, for example, a natural experiment: the field experiment still takes place in the subjects’ own environment, yet the treatment allocation is randomised by the researcher, who is then in a better position to control the randomisation process and the treatments (for an example, see Gerber et al., 2010).

Survey experiments enclose treatments in a survey questionnaire, often relying on software randomisation techniques that assign respondents to alternative versions of questionnaire items. Consequently, treatments are stealthily hidden in the survey, encompassing, for example, different frames of a political topic (for example, Sniderman, 2011). With the recent advancement of easy-to-manage software, the number of survey experiments has grown dramatically, as this convenient method combines the benefits of a controlled experimental design with (often) a large-N representative sample. That being said, survey experiments are also criticised. Often the duration of the treatment effect goes unreported, treatments are not reiterated as they would have been in real life, and survey experiments often contain no non-treated control group but simply multiple treatments (Gaines et al., 2006). Additionally, empirical studies show that the ATE in survey experiments is higher than in natural experiments examining the same research questions (Barabas and Jerit, 2010). This last finding is perhaps unsurprising, as treatments in survey experiments ensure direct exposure of subjects, whereas real-life treatment exposure is only one of the many contextual impulses subjects constantly receive.
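As a rough illustration of such software randomisation, the sketch below routes each respondent to one version of a questionnaire item. The frame wordings and the three-arm design (including a non-treated control group, in line with the Gaines et al. critique) are invented for illustration, not items from any study cited in this chapter.

```python
import random

random.seed(7)

# A minimal sketch of the software randomisation used in survey
# experiments: each respondent is routed to one version of an item.
# The frame wordings below are invented examples.
frames = {
    "risk": "CFSP integration risks drawing the EU into conflicts...",
    "opportunity": "CFSP integration can strengthen the EU's voice...",
    "control": None,  # non-treated control group (cf. Gaines et al., 2006)
}

def assign_frame() -> str:
    """Randomly assign a respondent to one questionnaire version."""
    return random.choice(list(frames))

# Route five mock respondents; real survey software would log the
# assignment alongside the recorded answers.
assignments = {respondent_id: assign_frame() for respondent_id in range(1, 6)}
print(assignments)
```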

In lab experiments, subjects are moved from their real-life environment to a closed lab setting. As in field and survey experiments, treatments are randomly allocated by the experimenter - but this time the experiment takes place in a lab. The artificial lab setting is beneficial because it enables extensive control and the exclusion of potential intervening or confounding factors. Simultaneously, however, lab experiments are accused of generating artificial treatment effects that would perhaps not occur in real life, precisely because they take place in an artificial setting (for example, Wilson et al., 2010). Consequently, lab experiments rank highly on the researcher’s ability to control potential intervening factors, yet this comes at the expense of the more naturally occurring surroundings found in, for example, field experiments. Table 1 summarises the four experimental designs and how they differ on the three crucial dimensions mentioned above.

Table 1: Natural, Field, Survey and Laboratory Experiments: Setting, Treatments and Control

Design / Setting / Treatment allocation / Ability to control intervening factors
Natural experiments / Natural / Allocated naturally / Poor
Field experiments / Natural / Allocated by experimenter / Poor
Survey experiments / Natural or artificial / Allocated by experimenter / Medium
Laboratory experiments / Artificial / Allocated by experimenter / Good

Research Designs: Exploring Deliberation using Experiments