THE LEARNING ENVIRONMENT PREFERENCES:

AN INSTRUMENT MANUAL

All rights reserved

1988 (Revised, 1990, 2000)

William S. Moore, Ph.D.

Center for the Study of Intellectual Development

1505 Farwell Ct. NW

Olympia, Washington 98502

360-786-5094 360-528-1809 (cell)


MANUAL FOR THE

LEARNING ENVIRONMENT PREFERENCES

TABLE OF CONTENTS

PREFACE

GENERAL CONTEXT OF INSTRUMENT

The Perry Scheme

Assessment Approaches

DEVELOPMENT OF THE INSTRUMENT

ADMINISTRATION AND SCORING

Administration/Uses

Scoring Procedures

PILOT STUDIES

RELIABILITY AND VALIDITY STUDIES

RECENT WORK AND CURRENT STATUS: SUMMER, 1990 UPDATE

SUMMARY

BIBLIOGRAPHY

TABLES

APPENDIX A: THE LEARNING ENVIRONMENT PREFERENCES


UPDATED PREFACE, 2000

The LEP is a copyrighted instrument (© Moore, 1987), and is available to researchers by contacting me at the following address:

Dr. William S. Moore

Center for the Study of Intellectual Development

1505 Farwell Ct. NW

Olympia, WA 98502

360-786-5094

I developed the Learning Environment Preferences, or LEP (Moore, 1987), as my dissertation research at the University of Maryland, primarily to explore the feasibility of an "objective-style" measure of William Perry's scheme of intellectual and ethical development (1970, 1981). [Perry's book has just been re-issued by Jossey-Bass after long being out of print.] I was frankly skeptical at first, even though Jim Rest's work with the Defining Issues Test suggested that such a format could be applied successfully to complex developmental phenomena. Over the past 10 years, the LEP has proven to be a solid research instrument and has been used fairly widely at a variety of educational institutions throughout the U.S. and Canada (I can provide a comprehensive list upon request).

Since the work reported in this manual, I have completed one follow-up round of psychometric analyses on the instrument; the factor analyses confirmed the patterns reported here. While the instrument is by no means flawless, it has held up fairly well over the years, and researchers continue to find the present format useful. I still believe improvements can be made in individual items, particularly in sharpening the differences between the position 3 and position 5 items. Unfortunately, I currently do not have the time or support to pursue this kind of additional development work on the instrument; if you know of graduate students interested in intellectual development and assessment, please let me know. I also welcome any feedback researchers have about their experiences with the instrument, whether in terms of administration issues, approaches to data analysis, or the utility of the findings.

Also, feel free to contact me with any questions about the instrument, other Perry scheme instrumentation, or general issues related to research in intellectual development. For more information about other aspects of the Center, call me or refer to the Center brochure available from the address above.

One final note: this manual presumes a basic understanding of the Perry scheme. For more details, see Perry’s book and my recent book chapters addressing the scheme; I have appended an updated bibliography that includes those references.


GENERAL CONTEXT OF INSTRUMENT

The Perry Scheme

Since it is unlikely that anyone completely unfamiliar with the Perry scheme would find their way to this manual, there is no need to go into great detail about the model itself; a brief overview should suffice. For a more thorough introduction to the scheme, see Perry (1981) or my own review available from the Center (Moore, 1982).

William Perry's (1970, 1981) model of intellectual and ethical development in college students has become increasingly significant to higher education in the past several years in terms of both teaching/learning issues (Knefelkamp, 1974, 1981; Knefelkamp & Cornfeld, 1978; Mason, 1978; Touchton, Wertheimer, Cornfeld & Harrison, 1978; Widick, 1975) and outcomes assessment in colleges and universities (Mentkowski & Strait, 1983; Woditsch, Schlesinger, & Giardina, 1987). The forms of meaning-making described in Perry's work depict a nine-position (i.e., stage) progression of thought moving from a black-and-white view of the world--dualism--into a world of essentially nothing but shades of gray, in which one makes meaning by making judgments--contextual relativism. This progression closely mirrors the vision of an educated person embedded in Western higher education (Bok, 1982; Gray, 1982), underscoring the importance of the Perry scheme as a major college outcomes measure.

Assessment Approaches

As I have noted elsewhere (Moore, 1982), a wide variety of assessment approaches to the Perry scheme have been attempted over the years (see King, 1978; Mines, 1982; and Perry, 1981 for overviews of specific Perry scheme measures). In Perry's original research (1970), and in early replication studies (e.g., Clinchy & Zimmerman, 1975), interviews were used to assess students' cognition. The original interview format used in the longitudinal studies at Harvard was almost completely unstructured, largely because, by design, Perry and his colleagues had no specific focus of inquiry for their study; the model ultimately emerged from the qualitative analyses of the interviews. Because interviews were impractical for classroom intervention studies, Knefelkamp (1974) and Widick (1975) developed the first major alternative to the interview format: a production-task measure consisting of sentence stems and semi-structured essay tasks, now titled the Measure of Intellectual Development (Moore, 1982).

While in some respects the interview approach remains the richest source of information about a person's thinking, it has drawbacks of its own, including being expensive and time-consuming. Interview projects continue to be done (Benack, 1982; Murrell & Moore, 1987; Slepitza, 1983), and have evolved into more structured forms than the original work, but for the most part, Perry scheme researchers have pursued alternative assessment formats since the scheme was "discovered" by Knefelkamp (1974) and Widick (1975). Kurfiss (1977), for example, used a paraphrase or restatement task in her Perry scheme research, while more recently Taylor (1983) and Porterfield developed a variation of the essay stem approach based on Gibbs and Widaman's (1982) work in sociomoral development. However, there have been few significant efforts to create an objective-style measure of the Perry scheme, and the measures that do exist--e.g., the Scale of Intellectual Development (Erwin, 1983), the Parker Cognitive Developmental Inventory (Parker, 1984), and the Learning Context Questionnaire (Griffith & Chapman, 1982)--are not well-grounded in the ongoing theoretical refinements of the model (Baxter Magolda, 1987; Moore, 1986).

Thus essay-style production tasks like the Measure of Intellectual Development and the Measure of Epistemological Reflection continue to be the most common approaches to measuring the Perry scheme. The problem is that both the MID and the MER require trained raters for scoring, thus limiting the extent of their use. Training raters is feasible (Moore & Taylor, 1986; Baxter Magolda, 1985, 1987), but the process is lengthy, and it is extremely difficult for raters to achieve research levels of interrater reliability without extensive practice. As a result, the costs involved with such measures, while not unreasonable compared to the costs of interviews, are high compared to the more standardized instruments available for other models (e.g., Rest's Defining Issues Test, 1979). With these limitations in mind, and with colleges and universities more concerned than ever about outcomes assessment, developing a Perry measure that is valid, reliable, objectively scored, and theoretically well-grounded in research on the scheme becomes a critical goal: the Perry scheme is an excellent framework for analyzing cognitive development in college, arguably the central focus of college outcomes.

____________________________________________________

DEVELOPMENT OF THE INSTRUMENT

The current version of the LEP consists of 65 items across five different content domains:

• view of knowledge/learning

• role of the instructor

• role of the student/peers

• classroom atmosphere/activities

• role of evaluation/grading

These domains focus on student preferences for specific aspects of the classroom learning environment shown to be associated with increasing complexity on the Perry scheme of intellectual development, and they reflect the major cue categories used in rating the Measure of Intellectual Development (Knefelkamp & Cornfeld, 1978; Knefelkamp, Fitch, Taylor, & Moore, 1982; Moore, 1987). Thus the LEP reflects the same perspective as the MID--namely, that the central issue of the Perry scheme is one's epistemology with respect to learning and related concerns. The features of thinking addressed reflect the major aspects of classroom learning, and as such represent a fairly narrow cognitive focus. The DIT, on the other hand, addresses thinking about moral judgments across a range of moral dilemmas; the Reflective Judgment model (Kitchener, 1977; King, 1977), a variation of the Perry scheme, uses dilemmas and a structured interview to measure a general cognitive approach to making judgments. The LEP and the MID narrow their focus to thinking about learning as a way of defining more clearly the rating criteria and/or salient cognitive issues involved. If one accepts Piaget's (1970) notion of cognitive decalage--variations in one's thinking across content domains--as it applies to adult thinking, this narrow focus seems crucial for more precise assessment.

Again like the MID, the LEP focuses exclusively on the primarily intellectual portion of the Perry scheme, positions one through five. Some research (Slepitza, 1983) has suggested that cognitive-structural change does not extend beyond position five; the issues "beyond" position five seem to involve finding meaningful ways to take personal stands in a contextually relativistic world. In any case, I feel that the complexity of positions six through nine can best be captured by qualitative research methods. Position one is not included because it has never been adequately verified empirically; even in the original study it was largely a hypothetical extension of the forms of thought found with freshmen.
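To make this design concrete, the sketch below (in Python) represents the item space as the five content domains crossed with Perry positions two through five. It is purely illustrative: the domain labels come from the list above, but the item lists are left as empty placeholders, since the actual LEP items are copyrighted and not reproduced here.

    # Illustrative representation of the LEP design space: five content
    # domains crossed with Perry positions 2-5 (position 1 is omitted,
    # as discussed above). Item texts are placeholders, not LEP items.

    DOMAINS = [
        "view of knowledge/learning",
        "role of the instructor",
        "role of the student/peers",
        "classroom atmosphere/activities",
        "role of evaluation/grading",
    ]
    PERRY_POSITIONS = (2, 3, 4, 5)

    # Items are grouped into (domain, position) cells.
    item_bank: dict[tuple[str, int], list[str]] = {
        (d, p): [] for d in DOMAINS for p in PERRY_POSITIONS
    }

    print(len(item_bank))  # 20 domain-by-position cells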

Even with a narrow focus, both in terms of the Perry positions and the issues covered, constructing items that reflect the sequence of Perry positions--or of any developmental scheme, for that matter--is not as straightforward as it may appear. For one thing, the hierarchical nature of such schemes, including Perry's, means, for instance, that a position 4 perspective integrates within it the earlier positions 1-3; it is also conceivable that someone reasoning from a "solid" position 4 perspective may well prefer some concerns associated with position 5. More significantly, the nature of the Perry scheme (and, of course, of the students' world views from which the scheme was derived) is such that positions 2 and 4 reflect similar perspectives, as do positions 3 and 5, with only subtle contextual differences. Such contextual differences are difficult to represent in relatively simple statements, especially as one attempts to make items across the positions reasonably parallel in language and structure. This process is further complicated by the fact that "language" and "structure" are themselves important dimensions of the developmental change described by the scheme. Clearly, while a measure like the LEP avoids the cost and complexity of trained raters, it demands extraordinary time and care in its construction and refinement.

The first step in constructing the LEP involved an analysis of the most frequently used cues, based on raters' evaluations and ratings over several years of research, as well as a review of actual MID essays collected over the past several years. The original item pool thus consisted of 134 statements based on significant MID rating criteria as well as essay excerpts and quotes reflecting those criteria. Once the original item pool was defined, the second step was to assign individual items to specific Perry positions two through five. The items were independently assigned to Perry positions by two raters trained in the Perry scheme, and those items rated more than one position apart by the two raters (6% of the item pool) were discarded. Items classified in adjacent positions by the two raters, or classified as transitional or ambiguous (for example, as a "position three/four" item), were reviewed by both raters and either reworded to clarify the position assignment or discarded. Through this editing process 54 items were rejected as relatively ambiguous or unclear; the first pilot version of the instrument thus contained 80 items, four for each position per domain.
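The two-rater screening rule just described can be summarized in a brief sketch; the function name and labels below are illustrative, not part of the LEP materials, but the decision logic follows the procedure described above.

    # Hedged sketch of the two-rater screening rule: each item receives
    # two independent Perry-position ratings (2-5) from trained raters.

    def screen_item(rating_a: int, rating_b: int) -> str:
        """Classify an item by how closely the two raters agree."""
        gap = abs(rating_a - rating_b)
        if gap > 1:
            return "discard"  # more than one position apart: dropped
        if gap == 1:
            return "review"   # adjacent/transitional: reword or discard
        return "keep"         # exact agreement on a single position

    print(screen_item(3, 4))  # review (a "position three/four" item)
    print(screen_item(2, 4))  # discard
    print(screen_item(5, 5))  # keep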

Finally, a series of pilot tests produced further editing based on empirical item performance and student comments, resulting in 60 scored items on the final research version of the measure. In addition, the research form of the LEP includes five items, one per domain, not derived from the rating criteria or original item pool but based entirely on direct quotes from actual Measure of Intellectual Development essays. These items parallel the "M," or "meaningless," items on Rest's Defining Issues Test (1979); they are complex-sounding but are intended to be basically incomprehensible upon further reflection. As with the similar items on the DIT, these items provide a check on whether respondents are choosing preferences simply because they sound complex. The current version of the instrument, 6.0, is shown in Appendix A.
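This manual does not specify a numeric cutoff for the M items, but a screening check in the spirit described might look like the sketch below. The 1-4 preference scale and the threshold are assumptions for illustration only, not the documented LEP response format or scoring rule.

    # Hypothetical consistency check using the five "M" (meaningless)
    # items. The 1-4 scale and the cutoff of 3.0 are illustrative only.

    def flag_m_items(m_ratings: list[float], threshold: float = 3.0) -> bool:
        """Flag a protocol whose mean preference for the meaningless
        items is high, suggesting choices driven by how complex the
        statements sound rather than by their content."""
        return sum(m_ratings) / len(m_ratings) >= threshold

    print(flag_m_items([4, 3, 4, 3, 4]))  # True: flag as possibly invalid
    print(flag_m_items([1, 2, 1, 1, 2]))  # False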

The Learning Environment Preferences measure, then, as the title suggests, is a preference task styled after the DIT. In assessing developmental phenomena, the data collection format one uses is a particularly critical concern, since there is reason to expect that different formats may well produce qualitatively different results. Production tasks like the MID or Taylor (now Baxter Magolda) & Porterfield's Measure of Epistemological Reflection (Taylor, 1983; Baxter Magolda & Porterfield, 1985) pose a more difficult challenge than instruments like Rest's DIT or the LEP; the former require generating a response, the latter only the ability to recognize and indicate agreement. Joanne Kurfiss' work (1977) focused on a third kind of format, comprehension, using stage descriptions students are asked to paraphrase. These three tasks--generating, preferring, and comprehending--correspond to Rest's (1979) discussion of the possible ways researchers can use stage-prototypic statements to assess moral judgment: rating/ranking, paraphrasing, or recalling. Furthermore, as I have noted elsewhere (Moore, 1986), these tasks may in fact be three distinct substages in the acquisition of a given cognitive position; at this point, however, research in the area is too sketchy to be anything but speculative.

____________________________________________________

ADMINISTRATION & SCORING OF THE INSTRUMENT

Administration & Uses

The Learning Environment Preferences is designed for use with student populations, primarily in colleges and universities. It can be used to measure patterns of longitudinal intellectual development across various subgroups of students or for pre-post evaluations of specific courses or groups of courses. Theoretically, there are no age restrictions for potential populations; to date, however, the instrument has not been extensively validated with non-traditional-aged students (over 25). Moreover, while it is potentially applicable to all levels of higher education, so far the LEP has been used primarily with undergraduate students. The format and relatively low cost of the instrument should encourage its use with a wider range of student populations.