Wolfskill, L. A.

ALEC 601

Reaction to a Research Report on Teaching

Miller, G., & Pilcher, C. L. (2002). Can selected learning strategies influence the success of adult distance learners in agriculture? Journal of Agricultural Education, 43(2), 34-43.

Miller and Pilcher begin their paper with a brief review of the literature on the teaching environment of distance learning programs. They conclude from the literature that this environment is complex, leading agriculture faculty to request specialized training and assistance in areas such as teaching techniques and course delivery. From there, the authors transition to the learning side, noting that the learning environment presents its own unique challenges. Drawing on the literature (primarily their own journal articles), the authors develop the concept of aiding learners by supplying them with a professionally produced videotape demonstrating three metacognitive learning strategies and three resource management strategies, with the aim of increasing success in distance-delivered courses. Generally, the authors provide an appropriate knowledge base in the introduction, and their use of the literature to lay the theoretical groundwork for the current experimental study is well developed.

Miller and Pilcher clearly identify their research purpose. They use a section entitled “Purpose” to state it: determining “whether providing information about learning strategies to agricultural distance learners through a professionally developed videotape would result in increased satisfaction with the distance learning experience” (p. 35). They continue with two enumerated objectives for the research project. However, both objectives begin with the word “Describe,” which to me would indicate that they are not testing hypotheses in this study but performing a descriptive analysis. Their purpose, however, is to apply an intervention and to draw statistically supported conclusions about its effectiveness relative to a control group. The researchers enumerate three research hypotheses: 1) the experimental group would use the strategies highlighted on the videotape more than the control group; 2) the experimental group would earn higher grades in distance education (DE) courses than the control group; and 3) the experimental group would be more satisfied with their distance learning experience than the control group.

The authors do a good job describing the population for the study, and they clearly indicate the sampling frame. They also go into detail on how and why they pared the sampling frame list to protect the validity of their conclusions. For example, the researchers ensured that if several candidates were residing together, or sharing course material videotapes, only one of them could be included in the study, whether in the experimental or the control group. They also excluded all Agricultural Education courses, since those students could have been involved in instrument development. In addition, selected Agronomy classes were disqualified so as not to unduly influence other ongoing research from the Education department. Generally, the protections they included seemed well thought out.

Data were collected through a questionnaire designed by the researchers specifically for this study. The instrument was checked for content and face validity by a panel of nine graduate students taking a course in distance learning in agriculture. All nine agreed that the instrument possessed the requisite validity, and the authors apparently made no changes based on comments from the panel. The instrument was then pilot-tested on ten other graduate students. Cronbach’s alpha reliability coefficients were calculated for Parts 2, 3, and 4 of the instrument, yielding values of 0.89, 0.84, and 0.91, respectively. These levels are considered more than adequate in the field.
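
The article does not reproduce the item-level pilot data, but the reliability statistic itself is straightforward. As a minimal sketch only, assuming a four-item scale scored on five points by a ten-person pilot group (all data values here are hypothetical, not taken from the study), Cronbach’s alpha could be computed like this:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items in the scale
    item_var = item_scores.var(axis=0, ddof=1)        # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical 5-point responses from a ten-person pilot group (four items)
pilot = np.array([
    [4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 5],
    [3, 4, 3, 3], [5, 4, 5, 5], [2, 2, 3, 2], [4, 4, 4, 4], [3, 3, 3, 4],
])
print(f"alpha = {cronbach_alpha(pilot):.2f}")
```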

Key results (whether and to what degree the intervention actually occurred) were based on data self-reported by the subjects. There is no indication in the article that the researchers verified or sampled these data as a check. Standard nonrespondent follow-up was conducted and described; the final response rates were 65% for the experimental group and 62% for the control group. The researchers analyzed nonresponse using a technique reported by Miller and Smith (1983), that of comparing respondents and nonrespondents on known characteristics. From this analysis, they determined that nonresponse was not completely random and could be strongly correlated with GPA (measured two different ways).
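
The article does not report the underlying figures, but the Miller and Smith (1983) approach amounts to comparing respondents and nonrespondents on characteristics already on record. A hedged sketch of that comparison, using hypothetical GPA values and an independent-samples t-test from SciPy, might look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical GPAs taken from institutional records (values are assumptions)
respondent_gpa    = np.array([3.6, 3.8, 3.4, 3.9, 3.7, 3.5, 3.8, 3.6])
nonrespondent_gpa = np.array([3.1, 3.3, 2.9, 3.2, 3.0, 3.4])

# Compare the two groups on a known characteristic; a significant difference
# suggests nonresponse is not random with respect to that characteristic.
t_stat, p_value = stats.ttest_ind(respondent_gpa, nonrespondent_gpa, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```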

Based on the instrument results, only 50% (36 of 72) of the members of the experimental group actually watched the videotape, so the researchers considered only those 36 to be in the experimental group for purposes of statistical analysis. They apparently transferred the remaining 36 to the control group, since those subjects reported that they had not watched the video. I suppose they did this in an attempt to increase sample sizes and gain statistical power, and so as not to throw away information contained in that group’s other responses on the instrument. I question this decision, since merely knowing about the videotape, and having seen the bookmark that contained reminders of the key techniques, could have influenced both their responses on the instrument and their results in the coursework they attempted. After taking such pains to limit the sampling frame and eliminate threats to validity, it seems that they could have done better here.

What the authors did do was convert the study from a posttest-only true experimental design to a quasi-experimental design using a static-group comparison. They then had to assess additional threats to internal validity, which they describe in the paper. In particular, experimental mortality could have severely hurt internal validity, but their tests demonstrate that its effect was negligible. They also assess threats to external validity and correctly conclude that the findings of this study are not generalizable to the population or beyond; the results should be applied only to the respondents.

The authors used the commonly accepted Type I error level of 0.05 for statistical significance. For their first hypothesis, whether the treatment group used the highlighted learning strategies to a greater extent than the control group, they conclude that there is no statistically significant difference in actual use of the strategies attributable to the intervention, with a p-value of 0.99.

For the second hypothesis, concerning actual grade results, there was no statistically significant difference in GPA between the groups; the reported p-value for this t-test was 0.67.
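
For clarity, the decision rule the authors applied reduces to comparing each reported p-value against the 0.05 Type I error level. A short illustrative sketch, using only the p-values reported in the article for the first two hypotheses:

```python
# Decision rule at the 0.05 Type I error level, applied to the
# p-values reported in the article for the first two hypotheses
alpha = 0.05
reported = {"strategy use (H1)": 0.99, "course GPA (H2)": 0.67}
for hypothesis, p_value in reported.items():
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"{hypothesis}: p = {p_value:.2f} -> {decision}")
```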

The third hypothesis concerned satisfaction levels with the distance learning experience. The researchers hypothesized that students who had viewed the videotapes would report greater levels of satisfaction with their distance learning experience than those who did not view the tapes. This hypothesis was also not supported by the data. All in all, it was a difficult day for those wishing to affect distance education by using learning strategy videotapes.

To me, these results were very counterintuitive. I would have expected at least the first hypothesis to be supported, and I had hoped the second would be as well. Had it been, I would be ordering up a videotape for my own viewing enjoyment! I would imagine that the attitudes about DE in hypothesis three might be tied to success in the DE courses, so if there were no improvement in results, maybe we shouldn’t expect to see much increase in satisfaction. My own satisfaction is probably pretty closely tied to results, especially if the results are an A.

I think that part of the problem with this experiment had to do with the fact that distance students (probably in general, but at least in this study) were already top students. They had significantly higher grades than non-distance learners, and so had probably already self-selected into a class of students that used the six techniques. Any student who would voluntarily view a videotape on study skills was probably already head and shoulders above the typical student. Perhaps replicating the experiment with a remedial-type class or with non-degree-credit leveling classes would put the intervention in front of students who had not already adopted these strategies. Overall, however, I thought the experiment was fairly well designed, and to me it has plenty of face validity.

On some of the technical aspects of the paper, it is somewhat refreshing to see a published paper that didn’t even come close to a statistically significant result! Such findings are important too, but they don’t seem to be as publishable. It also strikes me as a little odd that, of the six journal articles cited in the references section, four are by the authors themselves. I would have expected them to be able to find more on the subject from others. They also reference nine other non-journal sources, including an SPSS manual.

REFERENCES

Miller, L., & Smith, K. (1983). Handling non-response issues. Journal of Extension, 21(5), 45-50.