A MODEL FOR CONDUCTING A FUNCTIONAL ANALYSIS OF ACADEMIC PERFORMANCE PROBLEMS

Abstract: The purpose of this article is to describe a simple conceptual framework for academic intervention that extends functional analysis procedures to basic academic skills. We organized the empirical research on academic intervention into five hypotheses that can guide the selection of interventions. Treatment protocols for six academic interventions and procedures for quickly testing their effectiveness are presented. The description of procedures for simple tests includes discussion of design issues, measuring outcomes, and guidelines for selecting treatments.

There is an expectation in our society that children will attend school and that school, at the very least, will impart important academic skills. Children are expected to be able to perform basic skills such as reading, mathematics, and writing. Some fail to accomplish what is expected of them. When what a child can actually do is at variance with what is expected of that child, there is often an interest in understanding or explaining the deficits in academic performance. The goals of this article are (a) to suggest some explanations for the failure of some children to perform, (b) to offer a series of direct "tests" for each of the common reasons children fail, and (c) to provide some simple interventions that are linked directly to the outcomes of those tests.

The reasons why children fail are complex. Reading, for example, involves translating symbols from a page into meaningful language, and innumerable parts or combinations of this visual and neurological process can malfunction. When school psychologists are asked by a parent or teacher why a child does not learn, often the correct answer is, "I don't know." This response, however, is not very satisfying to either the speaker or the listener. Therefore, we often attempt to explain student failure using diagnostic terms such as dyslexia or learning disability or mental retardation. Relating academic performance to other observed or inferred student characteristics represents a structural approach to understanding failure (Nelson & Hayes, 1986). Structural explanations rely upon previous correlational research to help school psychologists organize what is often a complex picture of student strengths and weaknesses, and structural labels typically occasion major changes in a child's life such as special education placement. From an intervention perspective, however, structural explanations for student failure are limited in two respects. First, student performance as well as the traits inferred from such performance cannot be manipulated directly (Nelson & Hayes, 1986). Rather, student behavior can only be altered indirectly by manipulating one or more factors external to the child (e.g., what is taught or the amount of teacher assistance provided). Second, because structural explanations emphasize student traits as causal agents, they do not identify those factors external to the child that may be contributing to academic failure.

Relating academic performance to aspects of classroom instruction that precede and follow student performance represents a functional approach to understanding failure. Functional explanations appeal to factors external to the child that have been shown experimentally to affect academic performance such as time for learning, feedback from the teacher, and reinforcement for correct responding. Because these factors are external to the child and subject to direct manipulation, functional explanations have the added advantage of identifying simple, practical targets for instructional programming.

Functional explanations of poor academic performance will perforce be related to what teachers do to teach students: arranging the instructional environment, sequencing how instruction is delivered as students progress through the curriculum, providing sufficient opportunities to respond, and/or structuring contingencies. Each of these teaching actions occurs as a result of instructional decision making. When students are not learning, the task is to analyze how these factors affect student performance to make explicit decisions about how best to teach the child. In this article, five common reasons why students fail will be described. These reasons are related to what teachers should be doing in the classroom to teach students effectively. Next, some simple methods for testing these hypotheses quickly and efficiently will be presented. This section will include protocols for effective academic interventions, how to structure these simple tests (i.e., design issues), and important outcome measures that can be obtained reliably. The goal of this article is to demonstrate that it is possible to apply the logic and procedures of functional analysis to academic skills, provide a framework for doing so, and offer protocols for effective academic interventions.

Five Common Reasons Why Students Fail and What You Can Do About Them

The field of behavior analysis has been particularly successful at linking assessment to effective interventions by identifying the functions of behavior. When the contingencies maintaining behavior are known, it is possible to rearrange the contingencies so that appropriate and adaptive choices are more likely and inappropriate responses are less likely (Martens, Witt, Daly, & Vollmer, in press). For example, when it is known that adult attention maintains tantrum behavior in a child (i.e., tantrum behavior functions to gain attention), the intervention might consist of not providing attention while a child is having a tantrum, but providing attention when it is sought in an appropriate manner. This line of research has been applied largely to aberrant social behaviors and has not yet been applied to academic skills in a way that systematically compares variables functionally related to student learning. A functional analysis of academic behaviors would provide information about the relative effects of different teaching strategies (e.g., modeling, error correction, or practice) on student performance. When the effects of different instructional strategies are known (e.g., modeling versus practice), it is possible to alter instruction to maximize the likelihood that the intervention will be successful.

Five common factors known to affect student academic performance can be used as a starting point for generating hypotheses that lead to interventions. Five of the most common reasons students perform academic work poorly are that (a) they do not want to do it, (b) they have not spent enough time doing it, (c) they have not had enough help to do it, (d) they have not had to do it that way before, or (e) it is too hard (see Figure 1).

In Figure 1, each factor is referred to as a "Reasonable Hypothesis" because all are factors over which educators have control and all are functional reasons why students fail to succeed in school. More detail is provided about hypothesis no. 3 because of the different forms of assistance that students may need. These hypotheses are not intended to be independent of each other. Academic skills require a sequence of instructional activities that build upon one another to increase response rates in the presence of curricular materials, so the strategies that teachers use are often interrelated (e.g., modeling and error correction are always confounded with opportunities to respond). The hypotheses are therefore intended to reflect increasing levels of intrusiveness of academic intervention rather than independent explanations.

Time is a precious commodity, and educators need to be efficient when problem solving. Under many circumstances, the most efficient approach is to test the easiest hypothesis first, implement an intervention, and monitor and evaluate outcomes. If that approach fails to improve student performance, then something progressively more time-intensive can be attempted until the probable cause of failure is identified. Easier solutions are also more likely to be implemented consistently, whereas solutions that are more time consuming or technically difficult for teachers and support personnel are less likely to be implemented correctly (Gresham, 1989). Therefore, the "reasonable hypotheses" presented in Figure 1 are ordered from those requiring the least intervention to those requiring the most intensive instructional intervention, based on logical considerations of classroom environments in general.
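The sequential logic just described (test the least intensive hypothesis first, evaluate the outcome, and escalate only if performance does not improve) can be summarized in pseudocode. The sketch below is illustrative only; the 20% improvement criterion and the `probe` function are assumptions introduced here for clarity, not procedures specified in this article.

```python
from typing import Callable, List, Tuple

# The five reasonable hypotheses, ordered from least to most intensive
# intervention (labels paraphrase Figure 1).
HYPOTHESES: List[str] = [
    "performance deficit: they do not want to do it",
    "insufficient practice: not enough time doing it",
    "insufficient help: feedback or instruction is needed",
    "generalization failure: they have not done it that way before",
    "instructional mismatch: it is too hard",
]

def brief_analysis(baseline: float,
                   probe: Callable[[str], float],
                   min_gain: float = 0.20) -> Tuple[str, float]:
    """Test hypotheses in order of intrusiveness; stop at the first
    matched intervention whose brief probe clearly beats baseline.
    `probe` stands in for administering the matched intervention and
    scoring the student's performance (e.g., words correct per minute)."""
    for hypothesis in HYPOTHESES:
        score = probe(hypothesis)
        if score >= baseline * (1.0 + min_gain):
            return hypothesis, score  # probable cause identified
    return "no hypothesis confirmed; reassess", baseline
```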

This sequence is intended to be heuristic and not an invariant sequence to be implemented by practitioners. When conducting assessment and intervention for academic problems, practitioners might want to consider factors such as teacher skill, classroom routines, time required for implementing an intervention, and student skill level in the hypothesis generation phase. Therefore, practitioners might consider alternate sequences of the proposed hypotheses depending upon the unique characteristics of the settings in which they work (e.g., changing instructional materials might be very acceptable in some settings and not in others). In some cases, practitioners might already have assessment data indicating that one hypothesis is more likely than another (e.g., variable student performance might suggest a performance deficit; alternatively, many errors might suggest an accuracy problem). A direct test might then be conducted to confirm or disconfirm the hypothesis.

We will examine the empirical support for the role of each hypothesis in student academic performance and related interventions before we describe strategies to test for each hypothesis. Figure 2 contains examples of interventions that have been effective at improving student performance, and each is associated with presumed functions of the target behavior. Appendix A contains references that can be useful in developing interventions for each of these areas.

They Do Not Want To Do It

Is the student not able to perform the skill (a skill deficit) or is the student able to perform the skill, but "just doesn't want to?" The distinction between skill and performance deficits was clarified by Lentz (1988, p. 354) who stated, "Skill problems will require interventions that produce new behavior; performance problems may require interventions involving manipulation of 'motivation' through contingency management." It is relatively easy to test the hypothesis of a performance deficit. Incentives for reading (Ayllon & Roberts, 1974; Staats & Butterfield, 1964; Staats, Minke, Finley, Wolf, & Brooks, 1964) and math (Broughton & Lahey, 1978) have been effective in improving students' motivation and performance (i.e., increasing active participation and decreasing disruptive behaviors). If a student fails to respond to incentives for increased academic performance, then either the wrong incentives were used or the student does not have the skills to perform the task.
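As a rough illustration of this incentive test, the sketch below compares performance with and without an incentive. The words-correct-per-minute metric and the 20% gain criterion are illustrative assumptions introduced here, not values taken from the studies cited above.

```python
def classify_deficit(baseline_wcpm: float,
                     incentive_wcpm: float,
                     gain_criterion: float = 0.20) -> str:
    """Compare oral reading fluency (words correct per minute) with
    and without an incentive for faster, more accurate reading."""
    if incentive_wcpm >= baseline_wcpm * (1.0 + gain_criterion):
        # Performance jumped when responding "paid off": a
        # performance ("won't do") deficit is likely.
        return "performance deficit: manage contingencies"
    # No meaningful change: either a skill ("can't do") deficit
    # or the wrong incentive was chosen.
    return "skill deficit or ineffective incentive: teach the skill"

print(classify_deficit(baseline_wcpm=45, incentive_wcpm=62))
```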

The literature contains numerous examples of interventions that can be used to test this hypothesis. For example, Lovitt, Eaton, Kirkwood, and Pelander (1971) improved students' oral reading fluency by offering incentives for reading faster. Another strategy for improving students' motivation that is relatively easy for most teachers to implement is offering students a choice of work to be performed (e.g., a story about baseball versus a fairy tale) or the order in which work is performed (e.g., allowing the child the choice of a vocabulary drill or silent reading first). Students' on-task behavior has been improved by giving students a choice of instructional activities (Dunlap et al., 1994; Seybert, Dunlap, & Ferro, 1996), a strategy that can be easily adapted to most instructional formats. It is noteworthy that in some of this research students displayed high rates of on-task behavior on the very assignments they had previously refused to do. The only difference between assignments completed and assignments refused was that students were allowed to choose among several instructional assignments during seatwork. When students were allowed to choose the assignment, their compliance and on-task behavior improved.

They Have Not Spent Enough Time Doing It

A student may not be progressing academically because he or she simply has not spent enough time actively practicing the skill. There are large differences in the amount of time students spend actively engaged in academic responding (Rosenshine, 1980). Large differences also have been observed across socioeconomic levels (Greenwood, 1991). For instance, longitudinal studies conducted by researchers at the Juniper Gardens Children's Project have identified large cumulative differences across socioeconomic levels in the amount of time students are actively responding. These differences amount to more than 1.5 years of additional schooling by the end of middle school for students of higher socioeconomic levels relative to students of lower socioeconomic levels (Greenwood, Hart, Walker, & Risley, 1994).

In a review of the literature on academic engaged time, Ysseldyke and Christenson (1993) concluded that variability across classrooms and schools leads to large differences in the amount of time that students are academically engaged. These differences increase the salience of engaged time as an important variable in the investigation of a student's academic problems and underscore the importance of examining this factor on an individual basis. The implications for intervention are obvious. As a first step, a student's current rate of active responding in the problematic subject area or time of day should be estimated. This task can be accomplished through recent advances in observation techniques such as the Ecobehavioral Assessment Systems Software (Greenwood, Carta, Kamps, & Delquadri, 1995) and the Behavior Observation of Students in Schools (Shapiro, 1996), two observation tools that provide estimates of student active engagement. The second step involves increasing the student's active responding. Various strategies such as providing highly structured tasks, allocating sufficient time for instruction, providing continuous and active instruction, maintaining high success rates, and providing immediate feedback have been shown to improve student engagement rates (Denham & Lieberman, 1980; Stallings, 1975; Ysseldyke & Christenson, 1993). Even simpler solutions may be equally effective; for instance, allocating more time for student responding and decreasing intrusions (e.g., transition time) into instructional time (Gettinger, 1995; Rosenshine, 1980).
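Whatever observation system is used, the arithmetic behind an engagement estimate is simple. The sketch below assumes momentary time sampling with fixed intervals; it is not the EBASS or BOSS software itself, only an illustration of the underlying computation.

```python
from typing import Sequence

def engagement_rate(intervals: Sequence[bool]) -> float:
    """Percent of observation intervals scored as actively engaged
    (True = student was engaged when the interval was sampled)."""
    if not intervals:
        raise ValueError("no observation intervals recorded")
    return 100.0 * sum(intervals) / len(intervals)

# e.g., twenty 15-second intervals during independent seatwork
seatwork = [True, True, False, True, False] * 4
print(f"Actively engaged in {engagement_rate(seatwork):.0f}% of intervals")
```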

They Have Not Had Enough Help To Do It

Feedback. Ysseldyke and Christenson (1993) warn that engaged time is only moderately (though significantly) related to student achievement. Increasing time for engagement may not be sufficient if a student needs more help to perform instructional tasks successfully. Feedback for student responses may be necessary to help a student respond accurately and quickly (Heward, 1994). Feedback is an integral part of the learning trial, which consists of an instructional antecedent (e.g., "Who was the first president of the United States?"), an active student response (e.g., "George Washington."), and a consequence (e.g., "Correct!"). When teachers actively provide feedback to students for responding, they increase the likelihood of student achievement (Rosenshine & Berliner, 1978).

Belfiore, Skinner, and Ferkis (1995) showed that complete learning trials were more effective in helping students to master sight words than merely having students repeatedly say the correct response. A learning trial consists of an antecedent (e.g., a flashcard with "3 x 3") prior to a response and a consequence (e.g., "Correct!" or "No, the correct answer is 9.") following a response. Another strategy for increasing feedback via complete learning trials is choral responding. Choral responding involves having all students respond verbally during group lessons. Choral responding has been shown to improve learning rates for diverse groups of students, including preschool children with developmental disabilities, children identified as Severe Behavior Handicap, first grade Chapter 1 students, general education students, and students identified as Developmentally Handicapped in special education classrooms (Heward, 1994). Choral responding has been shown to be more effective at improving learning rates when compared to on-task instruction in which the teacher praised students for paying attention while asking the same number and type of questions (Sterling, Barbetta, Heron, & Heward, 1997).
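A complete learning trial, as described above, has three parts: an instructional antecedent, an active student response, and an immediate consequence. The console drill below is a minimal sketch of that three-term structure; the flashcard items and the tallying are illustrative, not a protocol from this article.

```python
FLASHCARDS = {"3 x 3": "9", "4 x 2": "8", "6 x 7": "42"}

def learning_trial(prompt: str, answer: str) -> bool:
    """One complete learning trial: antecedent, response, consequence."""
    response = input(f"{prompt} = ").strip()       # antecedent -> active response
    if response == answer:
        print("Correct!")                          # reinforcing consequence
        return True
    print(f"No, the correct answer is {answer}.")  # immediate error correction
    return False

def drill(cards: dict) -> None:
    correct = sum(learning_trial(p, a) for p, a in cards.items())
    print(f"{correct}/{len(cards)} correct")

drill(FLASHCARDS)
```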

Another strategy for increasing feedback to students for nonverbal responses is response cards. To use response cards, teachers instruct students to write the correct response on laminated cards during group instruction for math, spelling, or content lessons. When the teacher asks a question, the students are expected to write their answers on the cards and to hold up their responses. The teacher scans students' responses and provides feedback. Heward (1994) provided guidelines for implementing the use of response cards. Response cards have been shown to improve (a) rates of responding and quiz scores relative to hand-raising during fourth-grade social studies recitation lessons (Narayan, Heward, Gardner, Courson, & Omness, 1990), (b) on-task behavior of students with disruptive and off-task behavior during social studies lessons (Gardner, Heward, & Grossi, 1994), and (c) quiz scores in earth science classes for high school students (Cavanaugh, Heward, & Donelson, 1996). The common strand of these strategies is that they increase the amount of feedback given to students immediately following responding, creating an opportunity to provide positive feedback for correct responses and to correct errors immediately rather than allowing a student to practice the wrong answer.

The Instructional Hierarchy. In the event that increasing feedback during time allocated for instruction is not sufficient for improving student performance, it may be necessary to look more carefully at a student's skill level as a basis for developing instructional interventions. How much assistance a student requires depends upon his or her level of skill mastery. Mastery, in turn, develops in a sequence of stages that lead to proficiency and use of the skill across time and contexts (Daly, Lentz, & Boyer, 1996; Haring, Lovitt, Eaton, & Hansen, 1978; Howell, Fox, & Morehead, 1993). Initially, effective instruction promotes accurate performance of the skill. At this stage, modeling the skill and observing the learner are critical components of good instruction, and explicit feedback about performance is necessary. After the learner becomes accurate, the next step is to become fluent in the use of the skill. For a skill (e.g., "4 x 2 = 8") to be useful in the future (e.g., with long division), the learner must be able to respond rapidly when presented with the problem. Practice improves skill fluency. Accurate and fluent skill use increases the chances that the learner will generalize the skill across time, settings, and other skills (Daly, Martens, Kilmer, & Massie, 1996; LaBerge & Samuels, 1974; Wolery, Bailey, & Sugai, 1988).

Daly, Lentz, and Boyer (1996) used the heuristic notion of an "instructional hierarchy" developed by Haring et al. (1978) to show that in many studies of academic interventions the effectiveness of instructional strategies for improving student accuracy and fluency could be predicted based on the strategies used (e.g., modeling versus practice). Although other instructional hierarchies have been developed during the past century, Haring et al.'s (1978) model is particularly useful because it explains patterns of results and allows us to predict which interventions are most likely to be effective based on the components of instruction and where students are in the learning sequence. The instructional hierarchy suggests that strategies that incorporate modeling, prompting, and error correction can be expected to improve accuracy and that strategies that incorporate practice and reinforcement for rapid responding can be expected to improve fluency. In a demonstration of the predictive power of this particular instructional hierarchy, Daly and Martens (1994) accurately predicted the pattern of results of three reading interventions based on the instructional strategies incorporated by each.
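One way to make the hierarchy's predictions concrete is to express the stage-to-strategy mapping directly, as in the sketch below. The mastery criteria shown (90% accuracy, 40 correct responses per minute) are illustrative placeholders, not values from Haring et al. (1978).

```python
# Stages of the instructional hierarchy mapped to the instructional
# components expected to produce growth at each stage.
INSTRUCTIONAL_HIERARCHY = {
    "acquisition": ["modeling", "prompting", "error correction"],
    "fluency": ["practice", "reinforcement for rapid responding"],
    "generalization": ["practice across settings, times, and related skills"],
}

def recommend_strategies(accuracy: float, rate: float,
                         accuracy_criterion: float = 0.90,
                         rate_criterion: float = 40.0) -> list:
    """Select strategies for the earliest stage the learner has not
    yet mastered (accuracy before fluency before generalization)."""
    if accuracy < accuracy_criterion:
        return INSTRUCTIONAL_HIERARCHY["acquisition"]
    if rate < rate_criterion:
        return INSTRUCTIONAL_HIERARCHY["fluency"]
    return INSTRUCTIONAL_HIERARCHY["generalization"]

print(recommend_strategies(accuracy=0.75, rate=22))  # -> acquisition strategies
```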