Running head: CRITIQUE OF “HOW TECHNOLOGY CHEATS GIRLS”

Critique of

How Technology Cheats Girls

JVT2 Task 1

Jay P. Marlowe

October 20, 2013

A Written Project Presented to the Faculty of the Teachers College

of Western Governors University

Methodology

Carolyn Csongradi’s initial aim was to determine whether male or female high school students would perform better on a project in the domain of philosophy and history. Her hypothesis was that girls would do better on this type of project because of the subject matter. Before the study began, her students completed a chart comparing Aristotle and Plato; they then completed a project on the origin of various ideas in the history of science, followed by a multiple-choice exam. While students were permitted to use outside sources, including those from the Internet, none were required, nor was any class time spent on the research process. Students worked alone or in pairs, and the pairs were in some cases single-gender and in other cases mixed-gender. The researcher used quantitative data, specifically the students' grades on the final exam and the project, to determine which group performed better, and thus to confirm or refute the hypothesis that female students would do better on projects involving history, philosophy, and/or writing.

Appropriateness of Methodology

The procedural steps do not align with the design of the study. If the goal was to determine whether girls would outperform boys on such a project, then other variables, such as whether students worked with partners and which resources they were allowed to use, should have been controlled. As the study is designed, it is impossible to know whether it was working in a pair or alone, rather than gender, that influenced the results. Moreover, a factor that affected one part of the study, such as partnering on the project, might or might not have affected the final exam grade.

Furthermore, students were given too much freedom on the project for the results to tell the researcher much. Collecting data from both the project and the test is also a problem: the two assessments measure entirely different skills. For the test, students could have studied in any number of ways, some more useful than others; this variable is not controlled for, nor does the study specify much about the content of the test. Students' prior knowledge of the topic, which might have affected their scores on the test or, especially, on the project, was never taken into account, and is thus another uncontrolled variable. While the study, as designed, might show that one group of students mastered the content of this particular unit better than another, nothing in the methodology supports correlating the results with gender.

Some students completed the pre-project chart assignment, while others did not. Those who did and who did well on it would likely have been more prepared than their peers for the rest of the assignments, but this is not taken into account at any point in the study.

Finally, the methodology of the study is suspect in that some of the data measures are not sufficiently nuanced or detailed. One data point for each student was performance on the 100-question final exam. Unless the content of the final was based entirely on this project, this measure includes data that has nothing to do with the project's hypothesis and might therefore skew the results.

Data Analysis

Csongradi found that there was little data available to support the hypothesis. There seemed to be, according to her work, no correlation between gender and how well students did on the project. However, she did find that girls who worked alone did best overall. One significant finding was that girls and boys who worked together in a partnership tended to do the worst on the project. She also found that boys were more likely to use technology on the project, which led to the secondary focus of her study on the idea that, as the title of the study states, technology cheats girls, since they have less facility with it and a lower comfort level.

Reliability of Data Collection Strategies and Process

Reliability refers to whether the data collection measures are likely to produce stable, consistent results, and it is unclear whether the data collected in this study are reliable. For one thing, test scores, a key piece of data in this study, may not reflect students' true knowledge of the material, because not all students test equally well. Since nothing about how the test was written was included in or controlled by the study, it is also possible that the test did not measure the type of material the study sought to examine, or at least did not measure it exclusively. Because it was the final exam for the whole class, comprising 100 questions, it likely covered information beyond that directly related to the hypothesis.

The project data were not definitively reliable either, because some of the students worked in pairs. A student's choice of partner and the pair's interpersonal interactions could have influenced the grade, and no safeguard ensured that one student in a pair did not do all the work while both received the same grade. This could skew the results: students who did little of the work could earn a good grade, while students who knew the material could suffer because a partner did not do his or her share.

Data Analysis Techniques

The author of the study used basic statistical analysis to analyze the data collected. She averaged grades on the main project, the pre-project chart assignment, and the final exam, and broke these down by gender. In the case of the project, she also noted which students worked alone and which worked together, later coupling this with the gender of their partners. She looked for trends in the data in an attempt to understand whether gender correlated with a specific level of performance, after first noting that boys and girls had statistically similar average grades in the class to begin with. She did not include information on the pre-study chart grades, only the number of students of each gender who had turned in the project.
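The grouping-and-averaging analysis described above can be sketched in a few lines of code. The records below are invented for illustration (the study's actual grades are not reported); the sketch only shows the kind of breakdown by gender and partnering arrangement that the author describes.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical records: (gender, worked_alone, project_grade, exam_grade).
# These values are invented for illustration, not taken from the study.
records = [
    ("F", True, 92, 88),
    ("F", False, 85, 90),
    ("M", True, 80, 86),
    ("M", False, 84, 82),
    ("F", False, 78, 91),
    ("M", True, 88, 85),
]

# Group project grades by (gender, worked_alone), mirroring the author's
# breakdown of average grades by gender and partnering arrangement.
groups = defaultdict(list)
for gender, alone, project, exam in records:
    groups[(gender, alone)].append(project)

averages = {key: round(mean(grades), 1) for key, grades in groups.items()}
for (gender, alone), avg in sorted(averages.items()):
    label = "alone" if alone else "paired"
    print(f"{gender} {label}: mean project grade = {avg}")
```

Even this simple sketch makes the critique's point visible: the grouping key captures gender and partnering together, so without controlling one, averages by the other cannot be attributed to gender alone.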

Appropriateness of Data Analysis Techniques

Inferential statistics were used to draw conclusions in this study. The author compared students by gender and, for the project, also by whether they worked with a partner and by the gender of that partner. These statistics were appropriate, but descriptive statistics reporting the mean grade on the final exam, the project, and the pre-project activity for each of these groups would also have been useful.

Interpretation of Analysis Results

The major problem with the study lies in the interpretation of the results. The author concluded that the data did not support her hypothesis that gender has anything to do with how well students perform on projects in history, philosophy, and writing. However, whether this is true is unclear because of the large number of unaccounted-for variables. Furthermore, since all students scored relatively high on all measures of success, it is hard to draw any conclusions about gender at all. The author notes that mixed-gender pairs scored the worst on the project but does not connect that to the final exam scores; in fact, no connections are made among students' scores on the pre-study assignment, the project, and the final exam.

The author also seemingly switches the focus of her study at this point. She notes that some students used technology to complete the project while others did not, and draws conclusions about girls and their use of technology from this information, even though it does not pertain to the research questions. Moreover, no data were collected or reported regarding students' use of technology. Csongradi mentions a follow-up survey, but it is unclear where she gets the idea that technology in this class is a “male” domain, and she has no data correlating higher grades with the use of technology.

While the tables she includes are basic, they are easy to understand; however, they do not seem to account for all the data. There is no explanation of how the three data measures compare to one another, nor any data on the use of technology in the project, which the author finds so significant in her conclusions.

Ultimately, the researcher appears to have interpreted the results through the lens of explaining why women do worse in science classes, even though this was never backed by data: student averages before the study were similar, and so, overall, were the results of the students' work during the study itself. The generalizations the researcher makes are not consistent with the results, and the conclusions section introduces an entirely new variable, the use of technology, that is otherwise unaccounted for in the project. Students' overall facility with technology is not addressed, nor is the technology available to them (especially since the project was done outside of school), and these factors would be essential to any study of students' comfort level with technology. Drawing conclusions about factors outside the study's hypothesis, including students' overall performance on the final exam or in the class, is not supported by the data.

"How Technology Cheats Girls" by Carolyn Csongradi