General Education Course Review - Submission Form

The purpose of the General Education Committee (GEC) is to provide guidance and direction to the VCAAR to improve the quality and relevance of the University's general education curriculum. The GEC considers proposals for modification of the general education curriculum and reviews each course in the general education program once every four years to determine its acceptability as a general education course. The GEC will review assessment data on the general education program provided by the Assessment Office and make recommendations to the VCAAR.

The GEC is a University Shared Governance Committee composed of members of the faculty with representatives from every college. The “general education program develops a foundation and motivation for the lifelong pursuit of learning in undergraduate students at Arkansas State University by introducing them to a broad range of essential areas of knowledge that will enable them to participate in our democratic nation and in a global society” (Undergraduate Bulletin).

General Education Goal: Developing a strong foundation in the social sciences

The following course has been accepted into the General Education Curriculum to promote the goal of training students to “develop a strong foundation in the social sciences,” and has set as the appropriate learning outcome that students should be able to

  • Explain the processes and effects of individual and group behavior;
  • Analyze events in terms of the concepts and relational propositions generated by the social science tradition
  1. Title of Course

POSC 1003 Introduction to Politics

  2. Course description as it appears in the Undergraduate Bulletin

INTRODUCTION TO POLITICS. An introduction to the use of politics for the resolution of conflict in communities, nations, and the international system through the study of political concepts and relationships, with applications to current problems. Fall, Spring, Summer

  3. All prerequisites

None

  4. Course Frequency (e.g. fall, spring, summer)

Fall, Spring, Summer

  5. General education outcome the Department has chosen to assess for all sections of this course (check only one)

Explain the processes and effects of individual and group behavior

Analyze events in terms of the concepts and relational propositions generated by the social science tradition

  6. Core content currently taught across all sections of the course

  7. Please detail the staffing for all sections of this course for the previous fall and spring semesters.

Section / Instructor delivering 50% or more content / Terminal degree / Discipline of terminal degree / Department issuing terminal degree / Instructor of record if different

Fall 2013
1 / P. Hilson / MPA / Political Science / Political Science
2 / S. Lee / PhD / Public Administration / Political Science
3 / B. Warner / PhD / Political Science / Political Science
4 / S. Childress / MA / Public Administration / Political Science

Spring 2014
1 / P. Hilson / MPA / Public Administration / Political Science
2 / S. Lee / PhD / Political Science / Political Science
3 / B. Warner / PhD / Political Science / Political Science
4 / H. Hacker / PhD / Political Science / Political Science

Please attach to this form an assessment report including the following:

  • Method or methods used to measure the level of proficiency of all students completing the course.
  • Documentation of how the resulting assessment data from two or more successive years were used to improve student learning outcomes. (You may list “See Tracdat” if documentation already has been sent to the Office of Assessment – Student Learning Outcomes.)

Please submit a copy of every syllabus from the current semester with faculty identifiers removed. If a faculty member teaches more than one section of the course using the same syllabus and delivery method, a single syllabus may be submitted. [The Committee will include an analysis of syllabi by accessing documents submitted to the Office of Assessment and Student Learning Outcomes.]

Assessment Coordinator ______Date ______

(if appropriate)

Department Chair ______Date______

Dean ______Date______

Received by GEC Chair ______Date______

Approved 10/30/2012; Revised 12/11/2012; Customized 8/29/2014

Quadrennial General Education Review: POSC 1003 Intro to Politics

In the Fall semester of 2012, DOPS instituted a standard pre-test/post-test assessment of POSC 1003, Introduction to Politics.

The members of the DOPS faculty worked together to design an instrument which would assess student abilities to

  • Explain the processes and effects of individual and group behavior;
  • Analyze events in terms of the concepts and relational propositions generated by the social science tradition

Ultimately, this resulted in an instrument containing 26 multiple-choice items and a short essay question. (The essay question is graded pass/fail and counts the same in the total score as each of the multiple-choice questions.) The items were designed to cover a range of difficulty in order to capture the full breadth of student abilities. In other words, we were interested in examining change and improvement over the course of the semester, not in designing a measure of whether a student should pass the class. Assessment, as the saying goes, is distinct from grading. To make this distinction clear, students were told that their performance on the assessment would not affect their grade in the class. To encourage participation in the ungraded assessment, they were also told that their overall class grade would suffer if they did not participate: students lost 5 points on their overall class grade if they did not take the pre-test and 5 points if they did not take the post-test.
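The scoring scheme just described (26 multiple-choice items plus one pass/fail essay, each contributing one point toward a 27-point total) can be sketched as follows. This is an illustrative reconstruction, not the department's actual scoring code; the function and variable names are invented.

```python
def score_assessment(mc_answers, mc_key, essay_passed):
    """Score one student: 26 multiple-choice items plus a pass/fail essay,
    each worth one point of the 27-point total."""
    assert len(mc_answers) == len(mc_key) == 26
    mc_score = sum(1 for given, correct in zip(mc_answers, mc_key)
                   if given == correct)
    return mc_score + (1 if essay_passed else 0)

# Illustrative use: a student who gets 12 items right and passes the essay
key = ["a"] * 26
answers = ["a"] * 12 + ["b"] * 14
print(score_assessment(answers, key, essay_passed=True))  # 13
```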

Table 1. Overview of Assessment Scores and Improvements

Pre-Test / Post-test / Change
Raw (of 27) / Percent / Raw (of 27) / Percent / Raw / Percentage Points
Mean / 12.7 / 47.0 / 16.7 / 61.9 / 4.4 / 14.9
Median / 13 / 48.1 / 17 / 63.0 / 4 / 14.9

As shown in Table 1, the typical student answered about half of the questions correctly on the pre-test, which we believe confirms that the instrument serves our purpose well. The mean score on the post-test (using the same questions) rose 4.4 points, or about 15 percentage points. This improvement will serve as a baseline against which future assessment results may be compared.

Table 2 confirms the efficacy of the instrument design, with pre-test scores ranging from 3 to 22 points (11 to 81 percent). The instrument captures the full range of student preparation, with very little clustering of students at any point in the range.

Table 2. Detailed Pre-test Results

# Correct / % Correct / Frequency / Percent / Valid Percent
3 / 11 / 1 / .8 / .9
4 / 15 / 2 / 1.6 / 1.7
5 / 19 / 3 / 2.5 / 2.6
6 / 22 / 1 / .8 / .9
7 / 26 / 4 / 3.3 / 3.4
8 / 30 / 8 / 6.6 / 6.8
9 / 33 / 11 / 9.0 / 9.4
10 / 37 / 6 / 4.9 / 5.1
11 / 41 / 8 / 6.6 / 6.8
12 / 44 / 12 / 9.8 / 10.3
13 / 48 / 11 / 9.0 / 9.4
14 / 52 / 14 / 11.5 / 12.0
15 / 56 / 9 / 7.4 / 7.7
16 / 59 / 7 / 5.7 / 6.0
17 / 63 / 4 / 3.3 / 3.4
18 / 67 / 5 / 4.1 / 4.3
19 / 70 / 2 / 1.6 / 1.7
20 / 74 / 7 / 5.7 / 6.0
22 / 81 / 2 / 1.6 / 1.7
Valid total / 117 / 95.9 / 100.0
Missing / 5 / 4.1
Total / 122 / 100.0
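The Percent and Valid Percent columns in Tables 2 and 3 follow the usual statistical-software convention: Percent divides each frequency by all 122 enrolled students, while Valid Percent divides by only the students with a recorded score (117 on the pre-test). A minimal sketch of that calculation, applied to the pre-test frequencies:

```python
def percent_columns(frequencies, total_enrolled):
    """Return (percent, valid_percent) pairs for a frequency distribution.
    Percent uses all enrolled students as the base; valid percent uses
    only students with a recorded score."""
    valid_n = sum(frequencies)
    return [(round(100 * f / total_enrolled, 1), round(100 * f / valid_n, 1))
            for f in frequencies]

# Pre-test frequencies from Table 2 (scores 3 through 22)
freqs = [1, 2, 3, 1, 4, 8, 11, 6, 8, 12, 11, 14, 9, 7, 4, 5, 2, 7, 2]
print(percent_columns(freqs, 122)[0])  # first row: (0.8, 0.9)
```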

Table 3. Detailed Post-Test Results

# Correct / % Correct / Frequency / Percent / Valid Percent
3 / 11 / 1 / .8 / 1.0
5 / 19 / 1 / .8 / 1.0
6 / 22 / 1 / .8 / 1.0
7 / 26 / 1 / .8 / 1.0
10 / 37 / 2 / 1.6 / 2.1
11 / 41 / 2 / 1.6 / 2.1
12 / 44 / 3 / 2.5 / 3.1
13 / 48 / 6 / 4.9 / 6.2
14 / 52 / 9 / 7.4 / 9.3
15 / 56 / 13 / 10.7 / 13.4
16 / 59 / 7 / 5.7 / 7.2
17 / 63 / 7 / 5.7 / 7.2
18 / 67 / 9 / 7.4 / 9.3
19 / 70 / 8 / 6.6 / 8.2
20 / 74 / 7 / 5.7 / 7.2
21 / 78 / 7 / 5.7 / 7.2
22 / 81 / 7 / 5.7 / 7.2
23 / 85 / 5 / 4.1 / 5.2
25 / 93 / 1 / .8 / 1.0
Valid total / 97 / 79.5 / 100.0
Missing / 25 / 20.5
Total / 122 / 100.0

Table 3 shows the frequency distribution of scores on the post-test. While we do not wish to make too much of the scores themselves, one interesting observation emerges from comparing Tables 2 and 3. If we take a score of 70 percent (19 or more correct answers) as indicating basic proficiency, there is an important difference between the two distributions: whereas roughly one in ten students (9.4 percent) scored 70 percent or more on the pre-test, well over one-third (36 percent) fell in this range on the post-test. Again, this demonstrates that students experience considerable and noteworthy gains during the class. It also establishes another baseline against which future performance may be compared.

One explanation for the increase in scores might be that the poorest-performing students dropped the class or simply did not take the post-test (an example of the so-called ‘mortality’ threat to internal validity). This appears not to be the case. Among the 23 students who took the pre-test but not the post-test, the mean was virtually identical (12.7 correct), with scores ranging from 4 to 22.
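The attrition check described above amounts to comparing the pre-test means of students who did and did not complete the post-test. A minimal sketch, with invented scores standing in for the real data:

```python
def attrition_check(pre_scores, post_scores):
    """Compare pre-test means of students who did and did not take the
    post-test; post_scores holds None for students who skipped it.
    A large gap would suggest a 'mortality' threat to internal validity."""
    completers = [pre for pre, post in zip(pre_scores, post_scores)
                  if post is not None]
    dropouts = [pre for pre, post in zip(pre_scores, post_scores)
                if post is None]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(completers), mean(dropouts)

# Hypothetical data: two of five students skipped the post-test
pre = [10, 14, 12, 16, 12]
post = [14, 18, None, 20, None]
print(attrition_check(pre, post))
```

Here the similar means would indicate that attrition was not concentrated among low scorers, which is the pattern the report finds in the actual data.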

Tables 4 and 5 examine changes at the individual level. For each student, we subtracted the pre-test score from the post-test score to obtain an Individual Change Score. The mean individual change was 4.4 points, with the median student answering 4 additional questions correctly on the post-test. Expressed as percentages of 27 possible points, this is roughly a sixteen-point change in scores.

Table 4. Summary of Individual Level Changes Pre/Post

Individual Change in
Raw Score / Percentage Points
Mean / 4.4 / 16.3
Median / 4 / 14.8
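The Individual Change Score computation summarized in Table 4 (post-test minus pre-test, restricted to students who took both tests) can be sketched as follows; the sample scores are invented for illustration.

```python
from statistics import mean, median

def change_scores(pre, post):
    """Individual change scores: post-test minus pre-test, keeping only
    students who took both tests (None marks a missing score)."""
    return [b - a for a, b in zip(pre, post)
            if a is not None and b is not None]

# Illustrative data: four students, one missing a post-test score
pre = [12, 13, 15, 10]
post = [16, 17, None, 14]
changes = change_scores(pre, post)
print(changes, mean(changes), median(changes))  # [4, 4, 4] 4 4
```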

Table 5. Detailed Individual Change Scores

Change in Raw Score
Pre to Post / Frequency / Percent / Valid Percent
-17 / 1 / .8 / 1.0
-9 / 1 / .8 / 1.0
-8 / 1 / .8 / 1.0
-3 / 1 / .8 / 1.0
-1 / 6 / 4.9 / 6.3
0 / 9 / 7.4 / 9.4
1 / 6 / 4.9 / 6.3
2 / 8 / 6.6 / 8.3
3 / 12 / 9.8 / 12.5
4 / 9 / 7.4 / 9.4
5 / 5 / 4.1 / 5.2
6 / 3 / 2.5 / 3.1
7 / 8 / 6.6 / 8.3
8 / 8 / 6.6 / 8.3
9 / 4 / 3.3 / 4.2
10 / 3 / 2.5 / 3.1
11 / 3 / 2.5 / 3.1
12 / 3 / 2.5 / 3.1
13 / 3 / 2.5 / 3.1
15 / 2 / 1.6 / 2.1
Total / 96 / 78.7 / 100.0
Missing / System / 26 / 21.3
Total / 122 / 100.0

Table 5 provides a detailed view of these changes. First, note that approximately four out of every five students increased their score on the post-test. As with any ungraded assessment, it is difficult to know how much effort students put into it when they had little to lose by simply checking boxes to ensure they had ‘taken’ the assessment and would not lose points in the class. The distribution of change scores suggests, however, that most students took it seriously and were not just checking boxes.

Table 6. Assessment Scores and Improvements by Grade

Final Class Grade / Pre-Test Mean / Post-Test Mean / Difference (Raw) / Difference (% Change in Mean)
A / 15.1 / 19.2 / 4.1 / 27.2
B / 13.0 / 17.2 / 4.2 / 32.3
C / 11.0 / 15.4 / 4.4 / 40.0
D / 11.5 / 14.6 / 3.1 / 27.0
F / 10.5 / 15.8 / 5.3 / 50.5

Finally, Table 6 examines the link between a student’s final grade in the class and the assessment scores. Note first that there is a clear linear relationship between class grade and scores on both the pre-test and the post-test: students who knew more coming into the class did better in the class.
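The comparison in Table 6 amounts to grouping paired scores by final letter grade and reporting, for each group, the pre- and post-test means, their raw difference, and that difference as a percent of the pre-test mean. A sketch of that calculation under invented records:

```python
from collections import defaultdict
from statistics import mean

def change_by_grade(records):
    """records: (grade, pre, post) triples. Returns, per grade, the
    pre-test mean, post-test mean, raw difference, and difference as a
    percent of the pre-test mean (the columns of Table 6)."""
    groups = defaultdict(list)
    for grade, pre, post in records:
        groups[grade].append((pre, post))
    out = {}
    for grade, pairs in groups.items():
        pre_m = mean(p for p, _ in pairs)
        post_m = mean(q for _, q in pairs)
        out[grade] = (pre_m, post_m, post_m - pre_m,
                      round(100 * (post_m - pre_m) / pre_m, 1))
    return out

# Hypothetical records, not the actual assessment data
data = [("A", 15, 19), ("A", 15, 20), ("C", 11, 15), ("C", 11, 16)]
print(change_by_grade(data))
```

Because the percent column divides by the group's pre-test mean, equal raw gains show up as larger percentages for lower-scoring groups, which is the pattern discussed below.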

Policy change to be made based on this evidence: Beginning in the Spring 2015 semester, instructors teaching POSC 1003 will be asked to engage in an outreach effort to students performing especially poorly on the pre-test, notifying each individually of additional resources available on campus (tutoring and study-habits help). The department is also engaged in discussion regarding other methods of targeting these students and their needs.

It is interesting, though perhaps not unexpected, that when we compare changes in students’ knowledge over the course of the semester, those who started with the least knowledge showed the greatest improvement on the assessment measure. Students earning a C or an F showed greater improvement as a percentage of their pre-test scores than those who eventually earned A’s or B’s. Both groups improved by roughly the same raw amount (4 to 5 points), but with lower baselines those gains appear as larger percentage increases. It is therefore important for all of us to remember that even though a student might end up with an F in a class, this does not mean that the teacher has failed to teach that student. This may be a bitter pill for some involved in assessment to swallow.

Implications for teaching:

  1. Instructors should not feel demoralized when faced with students who fail a class. Many, if not most, students who fail a class do learn, and often exhibit as much change (measured either absolutely or relatively) as students who pass. The problem is that they need to learn so many new facts or skills that they cannot catch up and cannot meet the standards set for passing the class.
  2. This shifts the question, making it more specific: the search for improvements to teaching becomes one of identifying students who need to catch up more quickly. In some sense, this is a form of remediation, something not normally considered part of the duties assigned to an instructor in a general education class.

Policy changes: No policy changes have been implemented as of the writing of this report. However, the department is engaged in a discussion of possible changes, and especially of the resources needed to support any such change.