EVALUATION OF INTERPROFESSIONAL EDUCATION
STATE OF THE ART
THE IPE JET STUDY
BERA BELFAST 27.8.-30.8.98 CONFERENCE SYMPOSIUM:
Interprofessional Education in Health and Social Care
Ivan Koppel: University of Westminster
on behalf of
Hugh Barr: University of Westminster
Marilyn Hammick: Oxford Brookes University
Scott Reeves: City University
EVALUATING EVALUATION OF INTERPROFESSIONAL EDUCATION
INTRODUCTION
This paper is divided into four sections. The first describes the context for the rising importance of interprofessional education (IPE) and the need to evaluate its effectiveness. The second section focuses on the debate about educational outcomes and the approaches to their evaluation. The third section describes the project itself, while the last reports on the current state of the findings.
Definitions
We need to define our terms, as this field of education is bedevilled by a variety of definitions. For the purpose of this study, interprofessional education (IPE) is concerned with shared learning activities in which learners from at least two professions learn with and from each other, while multiprofessional education covers a much wider enterprise in which learners simply share the same educational facilities. Whilst in the first case it is the interchange between learners that is important, the second represents a trend towards a more economical use of scarce educational resources. In this study we focus on the first modality, where learners benefit from learning about each other with the aim of working better together.
CONTEXT OF THE STUDY
Context of IPE
The education and training of health and social care professionals are changing in response to the challenge inherent in the altered context within which they provide care. The major factors influencing the profile of professionals' work can be summarised as follows:
CHANGES IN WELFARE STATE
- increase in demand due to a higher proportion of dependent groups, such as the elderly, disabled people and single mothers,
- enhanced expectations of the welfare state,
- enhanced expectation of medicine due to technological advances,
- increased costs of health and social care,
- diminishing resources as economic growth slows,
- altered political philosophy;
CHANGES IN PROFESSIONS
- new professional groups,
- blurring of role boundaries,
- increased public scrutiny of professional conduct,
- increased specialisation;
ORGANISATION OF CARE
- more care delivered in primary care,
- more responsibility for organising care devolved to primary care;
POLICY CHANGES
- deficiencies in health and social care in vulnerable groups linked to inadequate teamworking,
- support for primary care professionals taking on delivery and planning of health care.
Developments in the education of professionals in response to these changes are reflected in alterations to its organisation, objectives, content, location and method of delivery. The underlying intention behind these changes is to prepare professionals for the fast-changing world of professional work, to allow them to develop sufficient flexibility to thrive in such an environment, and to equip them with the skills and attitudes for lifelong learning.
It is against this background of changes in professionals’ work and education that we can consider the place of IPE. Four main arguments for IPE are that it can:
- promote interprofessional collaboration,
- improve quality of care,
- create a flexible workforce,
- contribute to cost-effectiveness in education institutions.
The integrative aspect of IPE can be seen as an opportunity to counter the fragmentation that characterises the organisational and professional aspects of professionals' work. The devolution of responsibilities to smaller organisational units, the reliance on teams to monitor their own work, the blurring of professional boundaries and increased specialisation all require means of improving cooperation between the professionals involved.
A number of problem areas have been identified where IPE can be seen as having the potential to contribute to their resolution. These include understanding each other's roles, the ability to work in a cohesive fashion, and overcoming professional stereotypes and prejudices (Pietroni, 1991; Jones, 1986; Horder, Soothill, Mackay and Webb (eds.), 1995).
IPE has become a legitimate aspect of professional development because of the above drivers for change. It has been gaining ground in recent years in the thinking and practice of educational providers (Barr and Waterton, 1996; Pietroni and Spratley, 1993), in policy makers' pronouncements (Baraclough et al., 1983; Department of Health and Social Services Inspectorate, 1991; PREPP, 1991; GMC, 1993) and in the purchasing intentions of health care managers (NHS Executive, 1997; Hennessy, 1994). These stakeholders recognise interprofessional education as one of the principal means of achieving better collaboration between professionals in health and social care.
The major stimulus for the development of such courses has been the changes in the provision of health and social care that demand better cooperation between diverse professional groups (Secretary of State for Health, 1989; Secretary of State for Social Services, 1989). This is even more evident in the recent proposals for restructuring the health service around primary care groups (Secretary of State for Health, 1997), under which a substantial part of education will need to be based in the workplace (Calman, 1998).
The need for evidence of the effectiveness of IPE
However, in the current climate of financial restraint, funders of education are interested in value for money and need to be persuaded that a particular educational offering will result in a desired outcome. There is an observable shift towards educational provision that fulfils a specific need and equips a professional with the desired knowledge and skills. Education funders now exert much greater control over the organisation and content of education, and they require evaluation to provide adequate evidence of the effectiveness of their investment (Tovey, 1994).
In any educational field, demonstrating a clear, linear link between an educational intervention and its outcome has been difficult. Oxman (1994), in his review, pointed out that education contributes anything between 20 and 50% towards a change in professionals' behaviour; in other words, other factors - the environment, the individual's make-up, motivation - also shape the altered behaviour. In the literature concerned with IPE there appears to be no overview of its evaluation. As for any other educational endeavour, there is a need to review its value, its quality and its impact. Publications on IPE initiatives have multiplied in the last 10-15 years and there have been some attempts to evaluate IPE activities (Barr and Shaw, 1995), but so far none has been sufficiently systematic.
Expert review of the current state of knowledge, long influential in shaping professional opinion, has been joined in recent years by meta-analysis, in which carefully selected trials are subjected to statistical analysis, and by systematic overview, to produce the best available evidence of the effectiveness of a therapeutic or preventive intervention (Sackett et al., 1997). The move towards decision making based on the best available evidence has been occasioned by the awareness that idiosyncratic, individualised professional behaviour does not always result in the best outcomes for patients or clients (Anon, 1994).
The need for good evidence of IPE effectiveness should be seen in the light of this recent move towards evidence-based medicine (EBM). The values and philosophies that converge in the debate about EBM will not be explored in detail here; it is sufficient to point out a further shift from considering effectiveness to considering efficiency. In other words, the question is not only whether a particular educational initiative works, but whether it works better than other educational interventions.
Cochrane systematic review
Background
The Cochrane Centre was established in Oxford in 1992 (Chalmers, 1997) to provide an opportunity for professionals keen to investigate the strength of evidence for specific interventions in health care. Its principle of collaboration is enshrined in the title of the Cochrane Collaboration, under whose aegis a number of subsidiary centres support the work of systematic reviews. International collaborative review groups conduct the actual work of systematic review, each looking at one specific topic within a broader area of interest. One of these groups deals with issues outside the strictly biomedical remit and is concerned with professional practice in health care: the Cochrane Effective Practice and Organisation of Care (EPOC) group, whose scope includes the evaluation of interventions designed to improve professional performance and, ultimately, patient care. These cover educational interventions, quality assurance and audit, and organisational interventions. Other issues of interest include interprofessional collaboration, revision of professional roles and consumer participation. It is within this group that a review group for evaluating IPE has been established.
Within the EPOC group it has been accepted that studies other than those using a randomised controlled trial (RCT) design can be considered when analysing the impact of interventions on professional behaviour and performance. Two additional designs are currently accepted as providing sufficient strength of evidence: interrupted time series (ITS) and controlled before-and-after (CBA) studies. An ITS study needs to show at least three measures before and three after the intervention; a CBA study needs a control group, albeit with non-random allocation.
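To make these inclusion criteria concrete, the sketch below (in Python) screens a candidate study against the three accepted designs. It is a minimal illustration of the criteria as stated above; the function and parameter names are hypothetical and do not belong to any EPOC tool.

def meets_epoc_criteria(design, pre_measures=0, post_measures=0, has_control_group=False):
    """Screen a candidate study against the three designs accepted by EPOC."""
    if design == "RCT":    # randomised controlled trial
        return True
    if design == "ITS":    # interrupted time series: at least three
        return pre_measures >= 3 and post_measures >= 3    # measures each side
    if design == "CBA":    # controlled before-and-after: a control group is
        return has_control_group    # required, though allocation may be non-random
    return False           # all other designs are excluded

# An uncontrolled before-and-after study is excluded; a well-measured ITS is not.
print(meets_epoc_criteria("CBA", has_control_group=False))          # False
print(meets_epoc_criteria("ITS", pre_measures=4, post_measures=3))  # True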
It is under the umbrella of this EPOC collaborative group that a systematic review of the impact of IPE on patient outcomes was embarked upon in 1997. The accompanying paper by Marilyn Hammick (Interprofessional Education and Systematic Review: a new initiative in evaluation) describes in detail the process and achievements of that study to date.
The parallel study - the IPE JET (Joint Evaluation Team) study
The study described here takes on a much wider remit, as it was felt that the focus of the EPOC study was too restrictive in terms of the evidence and the types of study admitted for consideration. We present an argument, developed later, that it is valuable to review studies using a wider variety of designs than those mentioned above, and that a whole range of outcomes should be analysed besides those representing a direct impact on patients or clients.
The first intention behind this study is to provide information for organisers of IPE about which evaluative methodologies to employ. Secondly, both organisers and purchasers of education may be interested in which IPE approaches achieve particular outcomes.
Before describing the study design in detail, we need to review the debate on learning outcomes and approaches to their evaluation.
EVALUATION OF EDUCATION
Here we provide a short review of approaches to evaluative theory and methodologies. Perspectives on learning outcomes are integrated into this debate to inform the consideration of summative, outcome-centred evaluation.
It will be suggested at the end that there is no single ideal approach to IPE evaluation; instead, contextual factors will determine the choice of approach. These factors include the stage, duration, location, purpose and type of the IPE, and the type of evaluation.
Definitions
A problem with definitions is commonly encountered when considering evaluation. Two concepts, evaluation and assessment, need to be defined and their use clarified: it is common for 'evaluation' to be used where 'assessment' would be the more correct term. We can turn to two distinguished authors to help with these definitions.
EVALUATION - ‘is the act of examining and judging, concerning the worth, quality, significance, amount, degree and/or condition of something. In short, evaluation is the ascertainment of merit’ (Stufflebeam, 1975, p8).
ASSESSMENT - ‘is a value-free ascertainment of the extent to which objectives determined at the outset of the program have been attained by the participants’ (Brookfield, 1986, p264).
Thus, strictly speaking, evaluation of a program stands for the much wider enterprise of examining the whole program and its context, considering its value not just in relation to its outcomes but so that future developments can be planned. It addresses issues such as the learning environment and aspects of organisation, for instance integration within the wider structure of a teaching institution or, in the case of work-based learning, the interplay between learning and working.
Following Scriven (in Tyler, Gagne and Scriven (eds.), 1967), it is possible to delineate two main types of evaluation: formative evaluation, contributing to the development of the program, and summative evaluation, concerned with the effectiveness of the educational program.
Assessment, on the other hand, is more focussed on students and how they have changed as a result of the learning experience. Depending on the intention of the evaluation, assessment of outcomes can be an integral part of the process (Rowntree, 1987); in summative evaluation it contributes to establishing the actual value of the teaching provision.
Philosophies of evaluation
As in other fields of the social sciences, we can observe a polarity between two worldviews in approaches to curriculum design (Cohen and Manion, 1994): the positivist, scientific and the antipositivist, interpretive. The key movement has been from predetermined objectives, representing the positivist, reductionist approach to education, towards learning outcomes, which stem from the interpretive stance and allow consideration of the interaction of numerous unplanned-for factors that produce, among other things, unintended outcomes.
In evaluation we can discern corresponding stances (Shadish et al., 1995). The first is based on the natural science perception of reality and argues for scientific, objective measurement. The second rejects the objective approach as untenable because of its neglect of individual perceptions, meanings and interpretations of values.
Lastly, an integrative stance based on contingency theory suggests choosing evaluative approaches according to the actual needs of the situation, whereby either approach can be used separately or both can be used together.
Scientific paradigm
The positivist paradigm has given rise to the movement concerned with behavioural objectives that can be predetermined by the program designer. Tyler (1949) initiated this movement. His approach to curriculum design and evaluation was based on the notion that the required behaviour and knowledge can be described with precision. Measurement of achievement can thus be seen as scientific because of the logical progression ascribed to the process from learning to assessment of that learning. He argued for clearly specified, attainable and measurable objectives that a learner could demonstrate behaviourally.
Because it is derived from the school experience, this approach is deemed unsuitable for adult learning: it does not cater for differences between learners or for unexpected outcomes, and it is authoritarian. It is more applicable to settings such as training and is reflected to a large extent in NVQ assessment.
Within the same paradigm, Kirkpatrick (in Craig and Bittel (eds.), 1967) developed a hierarchy of evaluation. Its four stages progress from learners' reactions to the program, through assessment of learning (e.g. the acquisition of knowledge and skills), to the transfer of behaviour to the actual setting (note the similarity to NVQ assessment), i.e. demonstrating the application of skills in the workplace, and finally to assessing the impact on the community or organisation, such as patient outcomes. This model does not take into account unintended outcomes of the program.
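Kirkpatrick's hierarchy can be summarised as an ordered structure. The Python sketch below is a hypothetical illustration using the conventional level names (Reaction, Learning, Behaviour, Results); the example measures paraphrase the paragraph above.

# A hypothetical, minimal rendering of Kirkpatrick's four-level hierarchy;
# note that, as discussed above, it has no place for unintended outcomes.
KIRKPATRICK_LEVELS = [
    (1, "Reaction",  "learners' reactions to the program"),
    (2, "Learning",  "acquisition of knowledge and skills"),
    (3, "Behaviour", "application of skills in the actual workplace"),
    (4, "Results",   "impact on community or organisation, e.g. patient outcomes"),
]

for level, name, measure in KIRKPATRICK_LEVELS:
    print(f"Level {level} ({name}): {measure}")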
Research methodologies
Experimental or quasi-experimental methods are deemed to provide the most precise information about a program's effects and effectiveness. The difficulties of designing true experiments in education should not be underestimated: it is difficult to control for all extraneous factors, and it may not be possible in a natural setting to achieve truly randomised allocation of learners to intervention and control groups.
An interpretive paradigm
The debate about the appropriateness of behavioural objectives has been fuelled by two considerations. Firstly, attempts to evaluate objectively, with the required degree of precision, the higher levels of a student's achievement in the cognitive domain (Bloom et al. (eds.), 1956), such as critical thinking and creativity, and in the affective domain (Krathwohl, Bloom and Masia (eds.), 1964), such as resolving conflict between differing values, have proved difficult. Secondly, the approach became restrictive for curriculum designers, as it negated teachers' intentions to promote individuals' growth in more general terms.
Thus, we begin to see a movement towards learning outcomes.
Eisner (1972) introduced the terms expressive objectives and expressive outcomes to describe outcomes that include unplanned yet valuable ones, which build on the technical mastery of instructional objectives but relate to individual personality and value systems. Expressive outcomes arise not only from teaching intentions as specified in learning objectives, but also from the impact on students of the hidden curriculum and of the other learning opportunities they encounter. These are therefore a more appropriate class of learning outcomes to consider for adult learners.
Allen (1996) developed the notion of student outcomes further to include other discernible outcomes of the educational experience, those going beyond a specific content area. There is an inherent dichotomy in considering this group of outcomes. On the one hand, the concept acknowledges the liberal orientation in education, in which personal, individualised learning and the resultant, sometimes unpredictable, growth and development are paramount. On the other hand, Allen pointed out their relevance in the current skills-orientated society, as they cover the so-called transferable skills. A number of IPE outcomes fall into this category, such as the ability to work in teams, the ability to interact with different groups, and the critical reflection so necessary for group learning. This again begins to point a way towards the discernment of appropriate and relevant outcomes in IPE evaluation.
Research methodology
Qualitative data collection and interpretation methods are more appropriate in this interpretive paradigm. The results of an investigation do not aspire to be universally true, but are meant to represent the local reality as agreed by those participating in the evaluation. The evaluation process can then be more flexible, taking into account the developing aims of learners and teachers and the changing contexts of the program. A number of approaches have emerged, such as goal-free evaluation (Scriven, 1972), naturalistic evaluation (Guba and Lincoln, 1981) and participative evaluation (Kinsey, 1981).
Contingency approach to evaluation
The last strand in evaluation research is more inclusive and does not subscribe to a single theoretical paradigm. Drawing on contingency theory, which supports the use of methodology according to need, it allows for multiple realities and for multiple approaches to the same problem. Proponents of this way of thinking suggest that it is necessary to accept that different methods can be used for different needs under different circumstances.