POST PRINT VERSION.

Accepted by International Journal of Educational Research on 27 August 2012.

*Note – this is a copy of the final draft version submitted on 4 July 2012 after peer review.

International Journal of Educational Research 56 (2012) 23–34

Author details:

Corresponding author: Adrian Cherney

*Dr Adrian Cherney

School of Social Science

The University of Queensland

Brisbane, St Lucia 4072

ph + 61 7 3365 3236

fax + 61 7 3365 1544

email:

Dr Jenny Povey

Professor Brian Head

Professor Paul Boreham

Michele Ferguson

Institute for Social Science Research

The University of Queensland

Brisbane, St Lucia.

Acknowledgements: This project is supported through ARC Linkage project: LP100100380.

What influences the utilisation of educational research by policy-makers and practitioners? – The perspectives of academic educational researchers

Abstract

In the field of education much has been made of the need for academics to engage more closely with policy-makers and practitioners in the process of knowledge production and research uptake. This paper reports results from a survey of academic educational researchers in Australia on their experience of research uptake and engagement with policy-makers and practitioners. We examine a range of variables to understand factors influencing the use of educational research. The results indicate that while research uptake is enhanced through mechanisms that improve the intensity of interactions between academics and end-users, the dynamics of research collaborations have a significant bearing on research use. Our findings provide insights into the challenges that academics can confront when engaging in research aimed at influencing policy or practice.

Keywords: research utilisation, research collaborations, education, knowledge translation, policy impact, academic research

1: Introduction

In the field of education the observation has been made that academic research rarely has a policy impact and often fails to meet the needs of policy-makers and practitioners (Coburn & Talbert, 2006; Hemsley-Brown & Sharp, 2003; Hess, 2008; Hillage et al., 1998; Levin & Edelstein, 2010; Oancea, 2005). This disjunction is partly seen as originating in communication problems between policy-makers, practitioners and academic researchers, drawing on the argument that they live in different worlds with differing languages, values and professional rewards (Bell et al., 2010; Kirst, 2000; Levin, 2011; Orland, 2009; Vanderlinde & van Braak, 2010). There is some validity to this argument, with studies in the field of education indicating that educational researchers, bureaucrats and teachers often have different priorities and perceptions about what constitutes useful and valid research, the role of theory, data quality and research methods, project outcomes, brevity of results and the practicality of research recommendations (Bell et al., 2010; Coburn & Talbert, 2006; Cousins & Leithwood, 1993; Honig & Coburn, 2008; Levin & Edelstein, 2010; Saha, Biddle, & Anderson, 1995; Vanderlinde & van Braak, 2010; Wilkins, 1988; Zeulie, 1994).

Greater collaboration between academic research producers and users, or consumers of research, is seen as one way of addressing the dissonance between knowledge production and its transfer or translation to policy and practitioner contexts. There is some evidence that when academic researchers and policy-makers or practitioners work closely in the formulation and execution of research projects, the research is more likely to have an influence on policy or practice (Cordingley, 2008; Cousins et al., 1996; Cousins & Simon, 1996; Huberman, 1990; Nutley et al., 2007). However, closer collaboration alone is insufficient to ensure that research has a policy or practice impact, with studies demonstrating that a range of variables influence the uptake and use of academic social research by non-academic end-users (Bell et al., 2010; Bogenschneider & Corbett, 2010; Cherney & McGee, 2011; Huberman, 1990; Landry et al., 2001a, 2001b; Weiss & Bucuvalas, 1980). Moreover, research collaborations can be inherently problematic, with participants' involvement shaped by individual and institutional constraints and contingencies (Bell et al., 2010; Bogenschneider & Corbett, 2010; Coburn & Talbert, 2006; Cousins & Simon, 1996; Edwards et al., 2007).

The issue of closer synergies between educational researchers and non-academic end-users raises important questions concerning the role that educational research should play in relation to policy and practice. As Hammersley (2007) points out, one's position on this issue is influenced by judgements about whether academic educational research should be integral to practice or ought to be valued in its own right. While more nuanced positions are often adopted that recognise the multi-dimensional value of educational research (e.g. see Cooper, Levin, & Campbell, 2009), this remains a highly contested issue (Burkhardt & Schoenfeld, 2003; Lingard, 2011). Debate about the value of academic research has also intensified through university research assessment exercises, such as the Research Excellence Framework in the UK, which require academics to demonstrate the impact of their research. Academic researchers in Australia are subject to similar pressures (e.g. through the Excellence in Research for Australia initiative and, at the time of writing, the Excellence in Innovation for Australia trial).

While our aim here is not to elaborate upon all these issues, our paper does provide new analyses relevant to the debate around research impact and evidence-based policy in the field of education. Using survey data from academic university researchers in Australia who engage in research collaborations with external partners, the paper principally aims to examine factors that influence the uptake of social research, as interpreted through the experience of knowledge producers in the field of education. Using the scale of research utilisation (Knott & Wildavsky, 1980), we examine factors that appear to influence reported levels of research impact.

The paper is organised as follows. Firstly, the explanatory model underpinning this study (i.e. the scale or ladder of utilisation) will be discussed. Secondly, the paper outlines the data collection methods used for the survey administered to Australian academics. Thirdly, key results from the sample of educational researchers are provided, focusing on reported levels of research utilisation and variables that appear to influence knowledge transfer and application. Finally, the paper discusses the results (and some data limitations) and concludes with broader observations about the study of research uptake in the field of education.

2: Literature review

2.1: Measuring Research Use

When it comes to measuring research utilisation, no single conceptual model has been unanimously adopted (Belkhodja et al., 2007; Lester, 1993). One reason for this is the methodological problem of specifying the dependent variable of research use, given that it can be defined either as a process or an outcome. Furthermore, the uses of social research range from providing answers to technical questions, such as "did this program work?", to helping policy-makers or practitioners interpret problems in ways that change their understanding of issues or choices (Biesta, 2007; Nutley et al., 2007). Hence the complexity surrounding the use of academic social research can make it difficult to measure, particularly when attempting to quantify its interruptive function (Biesta, 2007). Scales of research use can be particularly helpful in capturing the breadth of social research use, ranging from practices that encompass transmission through to actual application (Cherney & McGee, 2011). Such scales are valuable because they can be used to identify how utilisation is related to various decision-making processes, particularly concerning the actions of the producers and consumers of social research (Landry et al., 2001b).

Methodologically this study replicated a modified version of the Knott and Wildavsky (1980) research use (RU) scale, similar to that adopted in the study by Landry et al. (2001a). This scale was adopted because it has been frequently cited in the literature, has been used to measure research use among government officials and academics, and has been shown to be reliable (Landry et al., 2001a; Lester & Wilds, 1990; Lester, 1993)[1]. Conceptually the research use scale can be referred to as the "ladder of utilisation", and Table 1 provides the descriptions for each stage of research use (or rung of the ladder), as presented in our questionnaire to Australian social scientists. The benefit of this scale is that it operationalises research use as a cumulative process that progresses through a number of stages: transmission, cognition, reference, effort, influence and application. The scale is cumulative in the sense that cognition builds on transmission, reference builds on cognition, effort on reference, influence on effort, and application on influence. The RU scale has been criticised for perpetuating a linear understanding of research utilisation (Davies & Nutley, 2008). However, it does recognise that the research utilisation process spans a range of activities involving knowledge transfer and uptake (Cherney & McGee, 2011; Knott & Wildavsky, 1980; Lester, 1993).

<Insert Table 1>

2.2: Independent Variables Influencing Research Use

Just as there is no agreed conceptual model of research utilisation, there is no definitive list of variables developed to help predict knowledge use (Lester, 1993). Most studies have categorised variables under broad groups relating to supply-side and demand-pull factors, as well as dissemination and interaction variables. Supply-side factors include research outputs and the context in which the researcher works. These can include the types of research outputs produced by academics (e.g. qualitative or quantitative studies[2]), whether research is focused on non-academic users, the importance of internal or external funding sources, and the institutional drivers that influence the initiation of collaborations with external partners and end-users (Bogenschneider & Corbett, 2010; Cherney et al., 2011). Demand-pull factors concern whether end-users consider research to be pertinent, whether it coincides with end-users' needs, whether users accord it credibility, and whether it reaches users at the right time to influence decision-making. Added to this are organisational factors, such as the level of skills needed to apply research knowledge, that can inhibit the uptake of research and thus influence the overall demand for academic research within end-user organisations (Belkhodja et al., 2007; Coburn & Talbert, 2006; Ouimet et al., 2009). Dissemination variables relate to efforts to adapt and tailor research products (e.g. reports) for end-users and to develop strategies focused on the communication of research (Huberman, 1990). The assumption is that the more researchers invest in adaptation and dissemination, the more likely research-based knowledge will be adopted. Adaptation includes efforts to make reports more readable and easier to understand, to make conclusions and recommendations more specific or more operational, to focus on variables amenable to intervention by users, and to make reports appealing (Cherney & McGee, 2011). Dissemination efforts include strategies aimed at communicating research to targeted end-users, such as when researchers use different social media to communicate their research messages, hold meetings to discuss the scope and results of their projects with specific users or partners, and target particular forums, e.g. reporting on their research to government committees. Finally, interaction variables focus on the intensity of the relationships between knowledge producers and potential users. The types of factors considered relevant include informal personal contacts, participation in committees, and experience with research partnerships, e.g. the number of research partnerships an academic has engaged in (Huberman, 1990; Landry et al., 2001a; Lomas, 2000).

3: Current Study

The data used in this research were drawn from a broader study examining evidence-based policy and practice (Cherney et al., 2011). The study has four phases, and the data reported here were obtained from Phase 1, which used a purposive sampling technique to target academic social scientists in Australian universities[3]. The final sample comprised 693 respondents, an overall response rate of 32 per cent. For the purpose of this analysis, only data pertaining to academics who identified their primary research discipline as education have been used (n = 156). The academic survey was partially based on existing items or scales (Bogenschneider & Corbett, 2010; Landry et al., 2001a, 2001b), with additional items included to gauge the dynamics of research partnerships.

3.1: Dependent variable

Knowledge utilisation was measured using a validated version of the Knott and Wildavsky (1980) research use scale. As indicated, the scale comprises six stages: transmission, cognition, reference, effort, influence and application. For each of these six stages respondents were asked to estimate what had become of their research on a 5-point scale: 1 (never), 2 (rarely), 3 (sometimes), 4 (usually) and 5 (always).

Previous researchers (Cherney & McGee, 2011; Landry et al., 2001a) have used this scale cumulatively (with each stage building upon the previous one), assigning a value of 1 when respondents replied always, usually or sometimes, with all other responses assigned a value of 0 (a fail). There are two ways that this cumulative approach can be analysed. The first is to run a separate logistic regression for each stage of research utilisation, as Landry et al. (2001a) did in their study. Hence respondents who pass all six stages would be represented in each stage or regression model (see Figure 1). This is particularly problematic with our sample, because the majority (75%) of respondents reported passing all six stages. Hence the question arises whether such a method would really be determining what predicts movement from one stage to the next, or whether progression across each stage is masked by the dominant group. In order to address this criticism, a second approach is to create an ordinal variable with seven levels, placing in each level only those individuals whose highest stage reached was that level. Thus respondents in each level would be unique. Table 2 presents the number of respondents categorised in each level according to these progression criteria. For instance, two per cent of the sample passed the transmission stage but did not progress further. However, an ordinal logistic regression analysis was not possible due to our small sample size and the number of cases in each level.
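To make the coding concrete, the sketch below shows one way the pass/fail recoding and the seven-level progression variable described above could be derived. It is an illustrative reconstruction only, not the authors' code; the DataFrame and column names are hypothetical.

import pandas as pd

STAGES = ["transmission", "cognition", "reference", "effort", "influence", "application"]

def recode_pass_fail(df: pd.DataFrame) -> pd.DataFrame:
    # A stage is 'passed' (1) if the respondent answered sometimes, usually or
    # always (a rating of 3 or more on the 5-point scale); otherwise it is a fail (0).
    return (df[STAGES] >= 3).astype(int)

def progression_level(passed: pd.DataFrame) -> pd.Series:
    # Ordinal level 0-6: the number of consecutive stages passed, starting at
    # transmission. A respondent who fails reference sits at level 2 even if
    # later stages were passed, mirroring the cumulative reading of the scale.
    consecutive = passed[STAGES].cumprod(axis=1)
    return consecutive.sum(axis=1)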

The next possible option was to examine whether these stages are in fact exclusive. Does failure in one stage preclude academic researchers from progressing to other stages? Or should these stages comprise an index? Descriptive statistics, as presented in Figure 2, illustrate that failure in one stage does not preclude academic researchers from passing subsequent stages. This is an important consideration because the process of research utilisation has been argued to be non-linear, and the data in Figure 2 indicate that one does not necessarily have to traverse each rung of the research utilisation ladder in sequence to reach the ultimate stage, i.e. application. A factor analysis of the items (or stages) revealed a one-factor solution, with a Cronbach's alpha coefficient of 0.77 (Table 3). Thus, the results indicate that these items measure one construct and that the index is reliable. It was therefore decided to use the items as an index to measure research use. A mean index score was calculated across all six stages; the mean score for the research utilisation index is presented in Table 3.
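For readers who wish to reproduce the index construction, the following sketch computes Cronbach's alpha and the mean research-use score from the six stage items. It assumes the same hypothetical DataFrame as the previous sketch and is offered as an illustration of the general procedure, not the analysis code actually used.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def research_use_index(df: pd.DataFrame, stages: list) -> pd.Series:
    # Mean of the six stage ratings (1-5) for each respondent.
    return df[stages].mean(axis=1)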

<Insert Figure 1>

<Insert Table 2>

<Insert Figure 2>

3.2: Independent variables

A number of indices were created and included in our model as independent variables. The items used in each index were determined by factor analyses, with each set of items yielding a one-factor solution. The Cronbach's alpha coefficients for these independent variables are presented in Table 3 and detailed descriptions of index compositions are presented in Appendix 1.
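As a rough illustration of how a one-factor solution might be checked, the sketch below applies the Kaiser criterion (counting eigenvalues of the item correlation matrix greater than 1). This is a simplified stand-in for the factor analyses reported in Table 3, and the item sets passed to it are again hypothetical.

import numpy as np
import pandas as pd

def n_factors_kaiser(items: pd.DataFrame) -> int:
    # Number of factors suggested by the Kaiser criterion: eigenvalues of the
    # item correlation matrix that exceed 1. A return value of 1 is consistent
    # with a one-factor solution.
    corr = items.corr().values
    eigenvalues = np.linalg.eigvalsh(corr)
    return int((eigenvalues > 1).sum())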

<Insert Table 3>

Descriptive statistics for each independent variable are presented in Table 4. Academic researchers from the discipline of education indicated that academic funding (i.e. national competitive grants such as Australian Research Council grants, and internal university funds) was more important than funding from government and non-government agencies in ensuring their research is conducted. Academic researchers indicated that the 'relevance' of the research is given a higher priority by end-users than other features such as the quality or feasibility of the research. A higher level of importance was attributed to tailoring research to meet the needs of end-users and to meetings to discuss findings with end-users. Table 4 also illustrates a high level of agreement among academic educational researchers that they encounter barriers in the transfer and uptake of their research. A very high level of importance is accorded by academic educational researchers to refereed publications as a method of disseminating their research. The number of research partners with whom researchers engaged ranged between 0 and 35, with an average of 6 research partners per researcher. The number of grants received by these academic educational researchers varied between 0 and 44, with the average researcher having received 8 grants. In general, our sample has a high level of experience in engaging in research partnerships and securing research grants.

<Insert Table 4>

3.3: Data analysis

Given that our dependent variable is approximately continuous, an Ordinary Least Squares (OLS) regression model was used to estimate the associations between research utilisation (our dependent variable) and a number of explanatory variables, such as the benefits and barriers associated with engaging in research with policy-makers and practitioners. As a preliminary check, we examined the correlations between all variables in the model. They ranged between .002 and .68, suggesting that multicollinearity was unlikely to be a problem (the correlation matrix was too large to include in the Appendix). This was confirmed by a relatively low mean Variance Inflation Factor (VIF) of 1.66, with individual variables' VIFs ranging from 1.18 to 2.48. The four highest correlations were between problems relating to the orientation of research partnerships and 'consequences' of investing in research partnerships (0.68); 'consequences' of investing in research partnerships and barriers academics experience in the transfer and uptake of research by end-users (0.57); importance of using contacts, seminars and reports to present research to policy-makers and practitioners and importance of meetings and dissemination activities with end-users (0.55); and importance of meetings and dissemination activities with end-users and importance of tailoring research when end-users are the focus (0.55). All four correlations were statistically significant.
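A hedged sketch of this modelling step is given below, using statsmodels to fit the OLS model and to compute VIFs for each predictor. The predictor names in the usage note are placeholders for the indices listed in Table 3; the sketch illustrates the general approach rather than the authors' analysis script.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def fit_research_use_model(df: pd.DataFrame, outcome: str, predictors: list):
    # Listwise deletion of missing responses, then an OLS regression of the
    # research utilisation index on the predictor indices.
    data = df[[outcome] + list(predictors)].dropna()
    X = sm.add_constant(data[predictors])
    model = sm.OLS(data[outcome], X).fit()
    # Variance Inflation Factor for each predictor (the constant is skipped).
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return model, vifs

# Hypothetical usage:
# model, vifs = fit_research_use_model(survey, "research_use_index",
#                                      ["benefits", "barriers", "tailoring"])
# print(model.summary()); print(vifs)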

3.4: Regression Results

The regression results are presented in Table 5. The results indicate that eight variables were significantly related to the utilisation of educational research. The more academic researchers perceived collaboration with external partners as beneficial, the more likely they were to report utilisation. As the number of grants increases, so does the likelihood of research utilisation. The more negative the perceived consequences for academic educational researchers of engaging in research partnerships, the less likely they were to report that their research would be utilised by policy-makers or practitioners. The importance of tailoring research for end-users was positively and significantly associated with reported levels of research use. Academic educational researchers also reported that when end-users felt research was relevant, it was more likely to lead to utilisation. Academic researchers indicated that the more policy-makers or practitioners prioritise the 'feasibility' of research (i.e. place greater emphasis on research being economically and politically feasible), the less likely academics were to perceive that end-users would use academic social research. Finally, when academic researchers perceived there to be problems associated with research partnerships, they were less likely to report research uptake by external agencies.