Critical Thinking Assessment Test (CAT) Scoring Session

SPC EPI Tech Campus, Room 1-324

July 10, 2009

Attendees: Janice Thiel, Carol Weideman, Maggie Tymms, Christy Powers, Katie Woods, Cathy King, Anna Neuzil, Sarah Moseley, Chad Mairn, Jane Till, Mika Nelson, Arlene Gillis, Darlene Westberg, Holly Hoopes, Irene Hayrapetian

“The CAT instrument is a unique tool designed to assess and promote the improvement of critical thinking and real-world problem solving skills. The instrument is the product of extensive development, testing, and refinement with a broad range of institutions, faculty, and students across the country. The National Science Foundation has provided support for many of these activities. The CAT Instrument is designed to assess a broad range of skills that faculty across the country feel are important components of critical thinking and real world problem solving. The test was designed to be interesting and engaging for students. All of the questions are derived from real world situations. Most of the questions require short answer essay responses and a detailed scoring guide helps insure good scoring reliability” (Tennessee Tech University, Critical Thinking Assessment Test Overview).

In collaboration with Tennessee Technological University and with support from the National Science Foundation, St. Petersburg College received a grant to administer the Critical Thinking Assessment Test (CAT) instrument to a representative sample of approximately 100 students enrolled in the College during 2008. In 2009, St. Petersburg College conducted a second administration of the CAT instrument to a representative sample of students enrolled during that year. One SPC administrator attended a regional training workshop at Tennessee Technological University in March 2009. Subsequently, sixty-six CAT assessments were administered to SPC students enrolled in the courses listed below.

Course / Discipline / Completed CAT Assessments
PLA 3474 / Legal / 9
PHI 2624 / Ethics / 15
SPC 1600 / Communications / 23
RED 4511 / COE / 12
PCB 3043C / COE / 7

SPC faculty members were invited to participate in the CAT scoring session at various meetings held between January and May.

The CAT Scoring Session was held on July 10, 2009, at the EPI Tech Campus of St. Petersburg College. Although one hundred and five CAT assessments were originally purchased from Tennessee Tech University, only sixty-six were administered; sixty-three of those were scored on this day, as three were not fully completed.

The majority of the scoring faculty (12) and facilitators (3) arrived by 8:15 a.m. and participated in a breakfast buffet. When most participants had arrived, Carol Weideman welcomed everyone, thanked them for participating, and asked for individual introductions. The CAT overview was presented using an overhead projector, and then five or six assessments and a scoring rubric were placed in front of each faculty member.

The CAT overview consisted of a history and synopsis of the CAT development process, the purpose of creating the assessment as a tool for improving student success, best practices, and the importance of assessing critical thinking skills. Figures 1, 2, and 3 below are taken from the CAT overview presented to the scoring faculty.

Figure 1. History of CAT Development.

Source: CAT Overview, Center for Assessment & Improvement of Learning, Tennessee Tech University 2008.

Figure 2. Development of the CAT Instrument.

Source: CAT Overview, Center for Assessment & Improvement of Learning, Tennessee Tech University 2008.

Figure 3. Best Practices for Improving Critical Thinking.

Source: CAT Overview, Center for Assessment & Improvement of Learning, Tennessee Tech University 2008.

Following the CAT overview presentation and some group discussion, the CAT scoring sessions began. During each session, the procedure listed below was followed for each question, beginning with test item number one.

  1. The CAT Training Module, presented on a projection screen, provided the criterion and scoring rubric for a specific test item.
  2. Next, a sample test item was presented on the screen, and the presenter on the training module discussed and scored various responses based on the scoring rubric for that item.
  3. Each scorer then reviewed the response to that item on his/her first assessment and scored it based on the scoring rubric. This process was repeated for each of the five or six assessments the scorer had been given.
  4. Scorers who encountered a response that did not clearly fit the rubric discussed it with the group for clarification.
  5. Each scorer then passed the scored assessments to the person to their left, and the same test item on all assessments was scored by the second scorer.
  6. In the event that two scores differed, the assessment was provided to a third scorer, and a third score was recorded.
  7. When all scoring for the specific test item on all assessments was completed, the assessments were collected and redistributed randomly.
  8. Finally, steps 1 through 7 were repeated for each test item until all questions were scored.

A fifteen-minute break was offered in the morning and another in the afternoon, while a one-hour working lunch provided ample time for discussion and review of the morning scoring sessions.

Once the scoring of all assessments was complete, a forty-five minute review and discussion session ensued. The day came to a close at approximately 3:00 p.m.

The sixty-three graded assessments, the three partially completed ones, and the remaining unused assessments were returned to Tennessee Tech University, together with all the scoring material, as required.

References:

Tennessee Tech University. Critical Thinking Assessment Test Overview. Retrieved July 17, 2008, from
