Running head: ARTICLE REVIEW: CYBERCHEATS

Article Review: Cybercheats: Is information and communication technology fuelling academic dishonesty?

Thomas Calabrese

Educational Technology Comprehensive Examination Portfolio Component

University of Connecticut

Abstract

This paper provides a critical analysis and review of the research paper Cybercheats: Is Information and Communication Technology fuelling academic dishonesty? by Szabo and Underwood (2004). Various perspectives are considered for improving the quality of this type of research, ranging from adding rigor to the scientific approach and method to rethinking the underlying theoretical perspective. The Szabo and Underwood article is compared and contrasted with similar investigations recently conducted by researchers at the University of Connecticut and at the Center for Academic Integrity. This document concludes with a discussion of potential technology-based investigative techniques. These techniques may be helpful in uncovering more reliable and extensible data, which could be triangulated with existing research to target the causes of “cybercheating” and potential interventions.

KEYWORDS: Academic Dishonesty, Internet, Plagiarism, Moral Judgment, Student.

Article Review: Cybercheats: Is information and communication technology fuelling academic dishonesty?

The Szabo and Underwood paper (2004) describes the results of a UK-based research effort aimed at determining relevant factors contributing to plagiarism at the university level. A sample of freshman, sophomore, and junior year science students (N=291) from a large university in the UK was administered a 12-item, anonymous questionnaire in the presence of a host speaker and a researcher. The investigator gave a verbal summary of the study and instructions for completing the questionnaire. Three questions addressed Internet competency, six questions focused on attitudes toward Internet misuse and the risk associated with cheating, and three questions addressed the culture of cheating. All questions were administered using a nominal scale (3-, 4-, or 5-point Likert scales) and the results were statistically examined using a chi-squared procedure. Descriptive statistics are presented by the authors, supported by anecdotal discussion. Major findings include:

  1. Students are technologically capable and Internet savvy.
  2. Students use the Internet more frequently as they progress in school (especially in the case of completing an assignment).
  3. 32% of respondents admitted to plagiarism using the Internet and 50% would plagiarize rather than face failure.
  4. Males cheat more frequently than females.
  5. Level of academic task difficulty is correlated with cheating behaviors.
  6. 60% of students felt teachers would be unable to detect cheating.
  7. 30% stated that the benefits of plagiarism outweighed the risk of discovery.

The authors present a theoretical framework consisting of environmental and personal push factors (likelihood of failure promotes cheating) and pull factors (ease of use and a low level of work make cheating more desirable), either promoting or discouraging cheating behavior (Graham & Hart, 2005). They interpret their results in light of this framework, concluding that the fear of being caught is a powerful deterrent, but that the perception of lax enforcement of university policy by faculty and the minimal university penalties associated with academic dishonesty mitigate the effectiveness of this powerful moral deterrent.

Szabo and Underwood’s paper is representative of a collection of similar papers (Ercegovac & Richardson, 2004) providing striking data, minimal statistical analysis, and a theoretical lens through which the data are interpreted. Given the highly volatile nature of the topic (academic dishonesty), papers of this type present the data in a dramatic light, emphasizing its immediate shock value (e.g., 50% of technically capable students use the Internet for academically dishonest activities). The purpose of this critical review is to provide alternative perspectives on their research. The focus is on the scientific merit of the research conducted, the theoretical framework provided by the authors, and the strength of the discussion. This will help to better characterize the data and avoid misinterpretation.

Scientific Merit and Critical Analysis of the Research Conducted

After careful analysis of the research method espoused by Szabo and Underwood, it appears to have several significant weaknesses in experimental design, statistical treatment, and the conclusions implied.

The experimental design considers a relatively small population (n=291) of science students from 7 classes at a single university. This presents several problems unaccounted for in the paper. First, this is a cluster sample, not a random sample. Thus, it is imperative that the researcher provide details of each class and its particular implications for the results (e.g., has the instructor raised the topic of plagiarism with the class in the past?). A better design would be to open the survey to all students of the university outside of the structured class environment, neutralizing the effects of any one teacher or classroom environment on the survey results (Hinkle, Wiersma, & Jurs, 2003). Second, all of the students were “science” students. This will affect the conclusions that can be drawn. For example, the authors state that “more than 50% of students indicated an acceptance of using the internet for academically dishonest purposes” (Szabo & Underwood, 2004, p. 180). This in fact should read, ‘more than 50% of science students from a large UK university indicated…’, as it is possible that something in the science program is driving the behavior.

The researchers used chi-squared tests to evaluate their data. This is a common practice for non-parametric data obtained on nominal scales. If the differences between the observed and expected frequencies of the responses are too great to be attributed to sampling fluctuation, the test is considered significant. To determine which categories were major contributors to the statistical significance, the researchers should have calculated the standardized residual for each of the categories (Hinkle et al., 2003). Only the p-values of the chi-squared tests are reported. Conclusions regarding major contributing factors seem to be drawn in the absence of the standardized residuals or any other variance accountability metric. Chi-squared tests can only determine whether the findings are correlated and should not be used to make causal statements (Hinkle et al., 2003). The authors need to take extra care not to make these types of statements, as doing so leads to speculative assertions.
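
As an illustration of the missing step, the sketch below uses hypothetical counts (not Szabo and Underwood’s data) to show how standardized residuals, computed alongside a chi-squared test, reveal which cells drive a significant result.

```python
# Hypothetical 2x2 table of counts (gender x self-reported cheating); not study data.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[48, 92],    # male:   cheated, did not cheat
                     [30, 121]])  # female: cheated, did not cheat

chi2, p, dof, expected = chi2_contingency(observed)
std_residuals = (observed - expected) / np.sqrt(expected)

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
print("standardized residuals:\n", np.round(std_residuals, 2))
# Cells with |residual| greater than about 2 are conventionally treated as the
# major contributors to a significant overall result (Hinkle et al., 2003).
```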

The questions being pursued by the researchers are a natural fit for a multivariate analysis of variance (MANOVA) design. The researcher defines some number of dependent variables (gender, year in school, risk avoidance score, etc.) and two between-subjects factors (the particular class tested, 1–7, and self-reported cheating status: cheaters and non-cheaters). This would have been a more careful and comprehensive statistical design, allowing the researcher to answer more interesting questions, such as how participation in a particular class affected the scores, or how self-reported cheaters scored on a particular dependent variable. It would also have allowed the researchers to test the significance of the main effects, all possible interaction effects, and group effects. Further, the researchers could have run a factor analysis to determine the relative importance of each component in a person’s decision to cheat or not cheat. This type of rigorous statistical analysis would give more credibility to the conclusions drawn.
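
A hypothetical sketch of how such a design could be specified follows; the column names, data file, and use of statsmodels are assumptions for illustration, not anything the original researchers employed.

```python
# Sketch of a multivariate design with two between-subjects factors; all column
# names (risk_score, attitude_score, internet_use, class_id, cheater_status) and
# the data file are hypothetical placeholders.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("survey_responses.csv")  # assumed survey data

model = MANOVA.from_formula(
    "risk_score + attitude_score + internet_use ~ C(class_id) * C(cheater_status)",
    data=df,
)
# Wilks' lambda, Pillai's trace, etc. for main effects and the interaction.
print(model.mv_test())
```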

Another area of concern is the 12-item questionnaire. Given the complexity of the underlying model (Szabo & Underwood, 2004) in terms of the number of possible factors theorized to affect the cheating decision, it is unlikely that they could all be tested with only a 12-item survey. The authors did not report Cronbach’s alpha, a coefficient of reliability reflecting the consistency of items in a scale; reporting it would allow the reader to understand how well the test instrument was constructed. The standardized formula for Cronbach’s alpha is written as

α = N r̄ / (1 + (N − 1) r̄),

where N is the number of items and r̄ is the average inter-item correlation among the items (Cronbach’s Alpha, 2007). We see that by increasing the number of items one can increase the alpha score. Likewise, a very well constructed survey with a high r̄ will also produce a higher alpha. It is possible to create a reliable test instrument with only 12 items, as Politi, Piccinelli, and Wilkinson (1994) did to assess young men’s health in Italy, obtaining an alpha of .81 for an instrument measuring two factors. Without the alpha score, it is difficult to determine the reliability of Szabo and Underwood’s questionnaire.
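
To make the interaction between N and r̄ concrete, the short computation below plugs assumed (hypothetical) average inter-item correlations into the standardized formula; the values are illustrative, not drawn from the study.

```python
def cronbach_alpha_standardized(n_items: int, mean_inter_item_r: float) -> float:
    """Standardized Cronbach's alpha: alpha = N*r / (1 + (N - 1)*r)."""
    return (n_items * mean_inter_item_r) / (1 + (n_items - 1) * mean_inter_item_r)

# A 12-item scale with a modest assumed average inter-item correlation of .25
print(round(cronbach_alpha_standardized(12, 0.25), 2))   # 0.8
# A longer instrument inflates alpha at the same inter-item correlation
print(round(cronbach_alpha_standardized(40, 0.25), 2))   # 0.93
```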

The use of in-person monitoring of the survey may cause problems by prompting students to give socially acceptable answers. The effects of this approach have been studied and are well documented. According to the research of Wil Dijkstra (1987), respondents who are approached in a personal style during data collection “would be more inclined to attempt to ingratiate themselves with the interviewer, leading to more socially desirable responses, conformity, and irrelevant information” (p. 309). This shortcoming in the research design can be removed by adopting an online survey with standardized instructions. Technology exists so that the researcher can still control for survey abuse (e.g., multiple surveys from one person).
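
One minimal sketch of such a control, assuming each respondent is issued a one-time token (an assumed mechanism, offered for illustration only), is to store only a hash of each redeemed token so duplicate submissions can be rejected without identifying the student.

```python
import hashlib

used_tokens: set[str] = set()      # hashes of tokens already redeemed
stored_responses: list[dict] = []  # stand-in for a survey database table

def accept_submission(token: str, responses: dict) -> bool:
    """Accept a response only if its one-time token has not been redeemed before."""
    digest = hashlib.sha256(token.encode()).hexdigest()  # keep the hash only, preserving anonymity
    if digest in used_tokens:
        return False  # duplicate or replayed token; discard the submission
    used_tokens.add(digest)
    stored_responses.append(responses)
    return True
```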

Finally, Szabo and Underwood’s description of the test instrument (the instrument is never shown, nor are the individual questions discussed) and the reporting of results are confusing. This could lead a reader to misunderstand the authors’ position on various aspects of the research. For example, in their conclusions the authors reference the increased propensity of males to cheat compared with females. This statement is fine, since the chi-squared test for these variables had a p-value of .0001. However, the authors then go on to say “such misuses are triggered by push factors…and also pull factors…” (Szabo & Underwood, 2004, p. 197), with the word “triggered” implying causality. Based on the evidence presented, there is no statistical basis upon which to make such a causal claim. This style of writing pervades the paper. A strong correlation between variables does not imply causality; establishing causality requires a more rigorous statistical analysis (described above) coupled with a disciplined interpretation of those results by the researcher in light of the context (Hinkle et al., 2003).

Theoretical framework

The authors take a behaviorist approach to understanding academic misconduct, with a focus on positive and negative reinforcements (Szabo & Underwood, 2004). They adopted a variant of this approach by adapting Love and Simmons’ (1998) model to their own to account for several mediating factors such as gender and academic status. This framework seems too simplistic in terms of its very direct cause-and-effect relationships. While the model does discuss some aspects of context and the environment, it relegates them to the periphery, suggesting that they are background noise against which students decide whether or not to cheat.

There are other models which allow for a fuller interpretation of the data. They consider the person (including their moral development), the environment, the context, and the behavior (Ercegovac & Richardson, 2004). Models based on Bandura’s social cognitive approach hypothesize that both cognitive and environmental factors affect moral reasoning (Nadelson, 2007). McCabe, Trevino, et al. (2001) illustrate this point by theorizing, based on the principles of social learning theory, that “academic dishonesty not only is learned from observing the behavior of peers, but that peer’s behavior provides a kind of normative support for cheating” (p. 222).

Models based on Kohlberg’s theories of moral reasoning are viewed by many as the de facto standard upon which to build theories of why students cheat (Ercegovac & Richardson, 2004). Kohlberg extends this model to include the role of the teacher (a role significantly limited in the Szabo and Underwood model) as the translator of moral ideology into a working social atmosphere (Ercegovac & Richardson, 2004).

Research on academic misconduct and digital technology at the University of Connecticut (Calabrese, Stephens, & Young, 2005) embraces an eco-psychological perspective. This perspective considers the situated nature of the act of cheating as occurring in the moment, and not subject to much, if any, reasoning, planning, or reflection (Young, 2004).

The strength of discussion in the Szabo and Underwood paper is weakened because it is based on a cause-and-effect model. Their arguments are largely unsupported by rigorous statistical analysis. Many of their results, while statistically significant, seem parochial and inconsistent with other studies on the topic (see McCabe, Trevino, et al., 2001; Berry, Thornton, & Baker, 2006; and Stephens, Young, & Calabrese, in press, for specific examples). The Szabo and Underwood study relies on a statistically biased sample and questionable implementation methods. Thus, the conclusions offered may seem logical and straightforward, but as Furedi (2004) states, in direct rebuttal to the ‘law of effect’ line of reasoning presented in Szabo and Underwood, “technological explanations of social and moral problems are highly suspect…Academics ought to exercise a degree of skepticism towards such simplistic claims” (p. 2).

Discussion

The analysis presented necessitates a discussion of improved experimental design for this type of research. Obvious improvements include using appropriate statistical analysis, using large and diverse random samples, removing human intervention from the survey process, and standardizing the test instrument. Beyond these items, there are certainly many additional elements of the experimental design that could be strengthened to obtain additional data (beyond self-report) and to identify more plausible explanations of why students cheat.

The survey concept has always been a basic component of social and educational research. Improving the self-report reliability and validity of the test instrument can be achieved through statistical methods that utilize pilot study data, “using principle component and factor analysis as analytic strategies [to] generate new factors, including a set of questions with better content validity” (Afshinnia & Afshinnia, 2002, p. 2). Given the number of survey-based pilot studies (on digital academic dishonesty) available, there should be ample data to construct such a standardized and perfected instrument. Once completed, such an instrument could be made accessible through organizations such as the Center for Academic Integrity and routinely completed online by students of their member schools. This addresses the concerns for data quality, standardization, and large random sampling.
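
A hypothetical sketch of that analytic strategy is given below; the data file, item column names, and use of scikit-learn are all assumptions for illustration, not the procedure Afshinnia and Afshinnia describe.

```python
# Pilot-data item analysis: principal components to gauge dimensionality, then a
# factor analysis to see which items cluster. File and column names are made up.
import pandas as pd
from sklearn.decomposition import PCA, FactorAnalysis

pilot = pd.read_csv("pilot_responses.csv")  # rows: respondents; columns: item_01 ... item_30
items = pilot.filter(like="item_")

pca = PCA().fit(items)
print(pca.explained_variance_ratio_.round(2))  # scree-style check of dimensionality

fa = FactorAnalysis(n_components=3, random_state=0).fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns)
print(loadings.round(2))  # items with weak or cross loadings are candidates for revision
```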

Data beyond direct self-report could be obtained by taking advantage of the technology-rich environments in which students spend considerable time. Hernandez, Ochoa, Munoz, and Burlak (2006) propose the use of data mining and knowledge discovery processes (KDPs) applied to online assessments to identify and develop operational behavior patterns of cheaters (e.g., number of visits, times of visits, length of visits, attempts to be trained, most frequently viewed training materials, etc.). Similar KDPs are used to detect fraudulent credit card behavior and to trigger proactive interventions. Distance learning environments, where considerable student data is archived online, could be used as test sites for KDP development.
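
As a sketch of how such behavioral features might be derived, the example below assumes a hypothetical assessment log schema (student_id, session_id, timestamp, page); it illustrates the general idea rather than the specific KDP Hernandez et al. describe.

```python
# Turn raw per-page log events into per-student behavioral features (visit counts,
# session durations, late-night activity). Log file and columns are assumptions.
import pandas as pd

logs = pd.read_csv("assessment_logs.csv", parse_dates=["timestamp"])

sessions = logs.groupby(["student_id", "session_id"]).agg(
    start=("timestamp", "min"),
    end=("timestamp", "max"),
    pages_viewed=("page", "count"),
)
sessions["duration_min"] = (sessions["end"] - sessions["start"]).dt.total_seconds() / 60

features = sessions.groupby("student_id").agg(
    n_visits=("pages_viewed", "size"),
    mean_duration=("duration_min", "mean"),
    late_night_visits=("start", lambda s: (s.dt.hour >= 23).sum()),
)
print(features.head())  # a table a clustering or classification step could mine for patterns
```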

Ercegovac and Richardson (2004) note Kohlberg’s thoughts on creating moral dilemmas for college-level students to elicit beliefs and opinions about plagiarism and cheating. By combining this theory with an anchor-based instructional vignette, similar to those created by the Cognition and Technology Group at Vanderbilt (1993), it would be possible to stimulate a virtual dialogue regarding the decision-making process of a fictitious character faced with cheating dilemmas. If presented online, this would also address the issue of the socially acceptable answer.

Finally, the use of online, monitored, anonymous group discussion, real-time or asynchronous (e.g., blogs, chat, wikis, or threaded discussion), would allow for the collection of significant qualitative data. Similarly, the use of virtual reality spaces may allow researchers to interview, role play, or engage in games that elicit cheating behaviors. Plagiarism detection tools could assist researchers in tracking and monitoring common sources of plagiarism activity.
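
As a toy illustration of the underlying mechanism (not a description of any particular commercial detector), overlap between a submission and a candidate source passage can be flagged by counting shared word n-grams, as sketched below.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """All n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

# Ratios near 1.0 against any candidate source suggest copied text worth a closer look.
# print(overlap_ratio(student_essay_text, web_page_text))
```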

Conclusion

The popular perspective on the use of technology and academic misconduct is misleading. Many research efforts point to a sharp rise in digital cheating and online plagiarism and correlate this with the rise in technical competency of modern college students. While these results are startling, they should be tempered by comparison with conventional cheating trends (specifically plagiarism) that have existed for decades. Our research at the University of Connecticut, while confirming that cyber-plagiarism is a sizable problem, demonstrates that technology is merely a different means to an end. The moral judgment of cheaters seems to be consistent regardless of the form of the academic misconduct (Stephens et al., in press).

There are other studies similar to Szabo and Underwood’s that contradict their findings (Berry et al., 2006). This draws into question the research methods, construction of questionnaires, and rigor of statistical practices used by some researchers. These contradictions suggest the need for studies involving greater numbers of subjects and institutions, based on a perfected and standardized instrument.

As many institutions expand distance learning environments, facilitate online libraries, and offer online learning opportunities for many students, perceptions of ‘digital cheating as rampant’ may affect the perceived worthiness of students in these programs, instructor performance, assessment methods, and the perceived strength of programs. Research should focus on perfecting and unifying test instruments (e.g., questionnaires), data mining Internet-based testing sites for characteristics of online cheaters, digitally based information gathering, and the inclusion of technology-based intervention and analysis tools. As more learning gravitates toward our online culture, it is imperative that educators understand the nature of cybercheaters, their methods, and potential interventions.

References

Afshinnia, M., & Afshinnia, F. (2002, November). Principle component and factor analysis: An analytic strategy to increase content validity of questionnaire factors. Paper presented at the meeting of the International Conference On Questionnaire Development and Testing Methods (QDET), Charleston, SC.

Berry, P., Thornton, B., & Baker, R. K. (2006, March). Demographics of digital cheating: Who cheats, and what can we do about it! In M. Murray (Ed.), Ninth Annual Conference of the Southern Association for Information Systems. Jacksonville, FL: Jacksonville University.