Running Head: Career Efficacy Synthesis
The Efficacy of Career Development Interventions:
A Synthesis of Research
Kris Magnusson, Ph.D.
The University of Lethbridge
Allison Roest
The University of Lethbridge
March, 2004
The Efficacy of Career Development Interventions: A Synthesis of Research
Background
Two recent symposia have highlighted the need for public policy to be guided by evidence pertaining to the efficacy of career development practice. The first was an international symposium, “Career Guidance and Public Policy: Bridging the Gap”, held in Toronto in October of 2003 with 43 countries represented. The second was “Working Connections: A Pan-Canadian Symposium on Career Development, Lifelong Learning and Workforce Development”, held in Toronto in November of 2003. A consistent theme that emerged from both symposia was the need to develop effective systems of gathering data that attest to the impact that career development/career guidance services have at a number of levels, such as individual well-being, social inclusion, and economic development. Furthermore, discussants at both symposia indicated that such data were needed to inform and influence public policy related to the provision of career services.
The experiences of the participants at these symposia echo a growing call among researchers for more comprehensive efficacy assessment of career practices. Herr (2003) calls for the development of cost-benefit analyses to document the results of career services and for the creation of national research databases to collect and distribute such information. Watts (2002, 2004) likewise calls for efficacy research to link career practices to economic efficiency, social equity and sustainability. In Canada, Hiebert (1994) has made similar calls for increased and more precise efficacy assessment in career counselling. Currently, a number of Canadian researchers, including Bryan Hiebert and Vivian Lalonde at the University of Calgary and Bill Borgen and Norm Amundson at the University of British Columbia, among others, are working on the problem of accountability and efficacy measurement in career services.
Despite an increased awareness of the need to better understand how and why career services are effective, the number of outcome research studies has actually decreased in the last 20 years (Whiston, Brecheisen and Stephens, 2003). This decline may be attributable, in part, to the growing recognition of the complexity of career planning. Hughes (2004), for example, commenting on the difficulties associated with assessing the impact of career interventions, notes three major challenges to efficacy research: the range of factors influencing individual choice; the wide variance in client groups, issues and concerns that makes comparison of evidence difficult; and the lack of common outcome measures in the field of career development.
It is clear that a framework for generating, collecting and evaluating evidence of the efficacy of career services is needed. An initial step in that process was taken with the compilation of the Annotated Bibliography of Current Research on the Efficacy of Career Development Interventions and Programs (Roest and Magnusson, 2004). The primary focus of the annotated bibliography was on articles examining the efficacy of career development services and interventions that had been published in English-language career journals over the past 10 years. A parallel initiative, led by Michel Turcotte, is examining articles published in French-language journals. Time constraints did not permit a comprehensive review; however, the articles included provide a representative sampling of research in the field. The central themes and observations from the review of 53 English-language articles are presented here under the following categories: target audience, populations and samples, research methods, general efficacy findings, and diverging theoretical assumptions.
Target Audience
The majority of articles reviewed spoke to an academic or research audience and, to a lesser extent, to practitioners. Given that the review was focused on academic journals, this is hardly surprising. However, in the context of providing evidence to better inform practice, it does pose a few problems. For the most part, descriptions and results are not presented in a manner that would be accessible to many practitioners. Thus, even when specific positive results are found, they may not find their way into general practice. This in turn creates a situation in which there may be frequent replication of efficacy research efforts and little systematic building upon known data.
The academic nature of the articles reviewed poses a secondary problem for practitioners. Even when positive treatment effects are found, very little description of the nature of the program, service or intervention is provided. Practitioners are thus left on their own to locate more detailed descriptions of what exactly proved to be effective. Furthermore, the majority of the reports focus on holistic program or intervention effects; there is very little analysis of the impact or efficacy of specific treatment or program components.
Practitioners are not the only ones who may not be deriving the full benefit of extant efficacy research. Research, as published in academic journals, rarely makes reference to the implications of the research for public policy. This is somewhat surprising because, as Herr (2003) noted, “career counseling, in its many manifestations, is largely a creature of public policy” (p. 8). It would seem reasonable that increased attention would be paid to the constituency upon which most of the funding for career services depends. Herr’s cautions regarding too close a linkage between career services and public policy are well worth noting; however, the fact remains that little focused research that would support or better inform policy is available.
Populations and Samples
The primary participants in career efficacy research have been students of educational institutions. It may be said that the articles contained in the Annotated Bibliography illustrate the principle of “convenience sampling”. A total of 34 of the 41 specific research studies described intervention effects on students, mostly Caucasian, within educational settings. Of these, 20 studies were conducted with university or college students, 9 utilized high school students, and 5 were conducted with middle school students. This pattern is common in psychological research in general; most studies are done where it is convenient to gain access to participants. Although in one sense this is a reasonable and understandable approach, it still leaves large gaps in our knowledge about the differential effects that career services may have on other groups, such as women, members of varying ethnic or cultural groups, or people from differing educational or socio-economic backgrounds. Based on the findings of this review, it is clear that the focus of research needs to be expanded to include a much broader spectrum of human experience.
Research Methods
The majority of the studies employed quantitative methodology, and some used mixed-method designs (i.e., quantitative analysis supplemented by qualitative analysis). The most commonly employed research designs were variations on pretest-posttest, treatment-group versus control-group experimental designs. In some cases, treatment group/control group posttest-only designs were employed. Depending on the sophistication of the study, one or more predictor variables were related to one or two criterion variables. In general, the studies attempted to isolate specific treatment effects (e.g., computer-assisted guidance systems) on specific outcome measures (e.g., occupational decision-making).
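To make the modal design concrete, the following minimal sketch shows how a treatment effect might be estimated in a simple pretest-posttest, treatment/control study. The scores, group sizes and variable names are hypothetical and purely illustrative; none are drawn from the studies reviewed.

```python
# Illustrative sketch (hypothetical data): a pretest-posttest,
# treatment/control design analyzed with gain scores.
from scipy import stats

# Hypothetical career decision-making scores for two groups
treatment_pre  = [42, 38, 45, 40, 37, 44, 41, 39]
treatment_post = [51, 47, 52, 49, 45, 53, 50, 46]
control_pre    = [41, 39, 44, 38, 42, 40, 43, 37]
control_post   = [43, 40, 45, 39, 43, 41, 44, 38]

# Gain scores capture change from pretest to posttest
treatment_gain = [post - pre for pre, post in zip(treatment_pre, treatment_post)]
control_gain   = [post - pre for pre, post in zip(control_pre, control_post)]

# Independent-samples t-test on the gains: did the treatment
# group improve more than the control group?
t, p = stats.ttest_ind(treatment_gain, control_gain)
print(f"t = {t:.2f}, p = {p:.4f}")
```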
A major concern with the interpretation of the efficacy data is the imprecision of the outcome measures. Often, instruments with questionable standards of reliability and validity serve as the specific outcome measure. For example, studies of youth often employ measures of career maturity, despite the difficulties associated with measuring the career maturity construct. It is quite possible that even stronger efficacy results would be obtained with more accurate outcome measures.
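The point about imprecise measures can be illustrated with the classical attenuation formula from test theory: measurement error shrinks the effect a study is able to observe. The reliability values in this sketch are hypothetical, chosen only to show the direction of the bias.

```python
# Illustrative sketch: classical attenuation shows why an unreliable
# outcome measure understates a true treatment effect.
# r_observed = r_true * sqrt(reliability_x * reliability_y)
import math

def attenuated(r_true: float, rel_predictor: float, rel_outcome: float) -> float:
    """Observed correlation after measurement error (classical test theory)."""
    return r_true * math.sqrt(rel_predictor * rel_outcome)

# Hypothetical values: a true effect of r = .40 assessed with a
# career-maturity scale of modest reliability (alpha = .60)
print(attenuated(0.40, 1.0, 0.60))   # ~0.31: the effect looks smaller
print(attenuated(0.40, 1.0, 0.90))   # ~0.38: better instrument, truer estimate
```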
A second concern with efficacy interpretation pertains to the assumptions related to the outcome measures. Often, specific outcomes are used on the assumption that they are linked to positive career planning. For example, increases in occupational exploration behaviours are commonly used as outcome measures, even though we have little evidence to support the assumption that such increases are related to making sound occupational decisions. An equally plausible hypothesis could be that increasing engagement in meaningful activities, regardless of occupational context, will lead to the discovery of satisfying opportunities. It would seem that the majority of the efficacy research published is rooted in what Weinrach (1979) calls the “structural approach” to career development. However, the underlying assumptions governing the selection and subsequent measurement of appropriate outcomes are rarely made explicit.
Methods of establishing experimental conditions and of measuring aggregate outcomes are problematic for career efficacy research. Very little attention has been paid to the differential effects interventions may have on sub-groups within the sample or on diverse samples. Furthermore, there are few studies that compare interventions and their treatment effects; one of the most commonly reported types of study is an assessment of a specific intervention or treatment (e.g., “the effects of treatment program A on outcome measure X”). Such studies usually reveal positive, but modest, support for the intervention; however, there are few studies that compare the efficacy of interventions with similar goals (e.g., “is treatment program A any more effective than treatment program B?”). An exception to this pattern can be found in studies that attempt to assess the effects of computerized systems of guidance; the impact of these types of programs is frequently compared to individual counselling and/or to combined counselling and computerized interventions. More comparisons of this kind are needed. Furthermore, as Brown and Ryan Krane (2000) note, more attention needs to be paid to the combined effect of interventions.
Methods of data aggregation are also problematic for career efficacy research, particularly in the analysis of the efficacy of programs of intervention. While many program evaluation studies provide multiple outcome measures, very few analyze the differential impact of specific program components. The focus on global outcome measures does not help us understand what components, and in what combination, contributed to the outcome. Furthermore, unless process variables are specifically attended to, there is no way of knowing whether poor results are related to actual program content or simply to a lack of adherence to program design. Although Hiebert (1994) called for both process and outcome assessment components in program evaluations nearly a decade ago, it would seem that few such comprehensive evaluations are making their way into academic publications.
There have been a few attempts to conduct meta-analyses of career efficacy research (e.g., Sexton, 1996; Whiston, Sexton and Lasoff, 1998; Whiston, Brecheisen and Stephens, 2003). Most of these attempts were hampered by questionable research methodology, insufficient information, or lack of integrity in the reporting of the data in the original studies. Furthermore, there is very little consistency in the choice of outcome measures, even when measuring identical constructs. Therefore, it is very difficult to draw conclusions pertaining to career intervention efficacy across studies. Despite these problems, most of the authors of the meta-analyses and literature reviews agreed that career development interventions are indeed effective. The problem is that little is known about why, how, or for whom they work. Overall, research in career intervention efficacy is piecemeal, fragmented and unsystematic.
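For readers unfamiliar with the mechanics, a meta-analysis pools a standardized effect size from each study, weighting each by its precision. The sketch below, using entirely hypothetical study summaries, shows the basic fixed-effect computation; the inconsistent outcome measures across the real studies are precisely what makes this pooling step difficult in practice.

```python
# Illustrative sketch (hypothetical study summaries): pooling
# standardized mean differences (Cohen's d) across studies.
import math

def cohens_d(mean_t: float, mean_c: float, sd_pooled: float) -> float:
    """Standardized mean difference between treatment and control."""
    return (mean_t - mean_c) / sd_pooled

def d_variance(d: float, n_t: int, n_c: int) -> float:
    """Approximate sampling variance of d (Hedges & Olkin)."""
    return (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))

# Hypothetical studies: (treatment mean, control mean, pooled SD, n_t, n_c)
studies = [(52.0, 48.0, 10.0, 30, 30),
           (47.5, 45.0,  8.0, 25, 25),
           (60.0, 55.5, 12.0, 40, 40)]

# Fixed-effect pooling: weight each study by the inverse of its variance
effects, weights = [], []
for m_t, m_c, sd, n_t, n_c in studies:
    d = cohens_d(m_t, m_c, sd)
    effects.append(d)
    weights.append(1.0 / d_variance(d, n_t, n_c))

pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
print(f"per-study d: {[round(d, 2) for d in effects]}, pooled d = {pooled:.2f}")
```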
Efficacy of Career Interventions
Given the limitations of audience, population samples and research methodology, one might wonder what, if anything, can be concluded from career efficacy research. Despite these limitations, a few trends did emerge among the studies reported. The most common finding in the efficacy research was that career interventions or programs had a positive effect on participant satisfaction. For example, even in studies that demonstrated no specific treatment effects, the authors would report that clients were satisfied with the processes or interventions, or that they “reacted positively” to the different treatments. It can be concluded that participants generally express satisfaction with career interventions.
Much of the evidence for the efficacy of career interventions pertains to changes in client competence (37 of 41 studies) or client behaviour (8 of 41 studies). Even though a broad spectrum of interventions is represented in the studies reviewed, career interventions in general have been shown to have a significant effect in two main areas. First, career interventions increase client exploratory behaviours. Participants are more likely to engage in activities that broaden their range of information and knowledge of career options after engaging in some form of career intervention. Second, participants in the studies presented are more likely to make career decisions after engaging in a career intervention. Unfortunately, there is little evidence to suggest whether the range of interventions has differential effects; we do not know if one form of intervention is more effective than another for producing these effects.
Very little attention has been paid to aspects of career planning or career development processes other than exploration and decision-making behaviours. Examples of gaps include the role that engagement plays in career planning (e.g., the use of personal meaning in career planning, or the identification of sources of personal meaning as a motivator/guide for career exploration), the development of prerequisite and planning skills needed to actualize a decision, and the development of systems of social support and/or feedback when implementing career decisions. Overall, the research may be characterized by a central assumption that career planning is largely a cognitive process, and that once a decision is made, it can and will be implemented.
Even this cursory review of the literature reveals another problem with the assessment of career intervention efficacy: scant attention has been paid to broader outcomes of career interventions. There is little follow-up data on whether clients who use career services later attain greater levels of job satisfaction, work performance or life satisfaction compared with those who do not access the services; more longitudinal studies are necessary. Given that most agencies and services find themselves in an era of fiscal restraint, research into global outcomes is essential for sustaining existing programs and for providing evidence of the need for the development of new ones.
Finally, the global impact of career interventions remains virtually unknown. For example, it is very difficult to determine the economic benefits of career interventions. As Hughes (2004) reported, “research findings highlighted that measuring the economic benefits of guidance is problematic, mainly because guidance effectiveness research in the United Kingdom is usually short-term and focused on immediate results” (p. 2). The same observation could be applied to studies conducted in North America. Even less is known about the social impact of career interventions. While it may be reasonable to speculate that good occupational decisions would lead to stronger, more stable families, increased connection with community, and decreased isolation or alienation, no studies have been found that address such possibilities. Longitudinal research that builds upon multiple sources of research evidence and addresses multiple factors is clearly needed.