First-Year English Committee Assessment Report

2006-07

Prepared by Kevin Brooks, Writing Program Administrator,

For the First Year English Committee

Submitted: August 2nd, 2007


Table of Contents

Executive Summary

Direct assessment: Early and Late Semester Analyses

Understanding literacy: Fall Direct Assessment

Indirect measure: Students report on their own change

Rhetorical analysis and course concepts

Understanding Leadership: Spring Direct Assessment

Indirect measure: Students report on their own change

Rhetorical analysis and course concepts

SROI Data

Fall 2006

Spring 2007

Instructor Surveys

What do we do with this information?

Executive Summary

The assessment of English 110 and 120 for school year 2006-07 consisted of one direct measure and two indirect means of assessment.

  1. A direct assessment of student work through an early- and late-semester set of tasks related to the content goals of “Understanding Literacy” (110) and “Understanding Leadership” (120).
  2. A student rating of general education goals and supporting program philosophies.
  3. A survey of English 110 and 120 instructors.

1. The early- and late-semester assessment tasks asked students to define “literacy” and “leadership” at both points in the semester, and then to rhetorically analyze a commentary about each topic. We were generally disappointed with students’ late-semester performance on both tasks; we have determined that key concepts in the courses, like “genre,” “voice/style,” and “social context,” need greater attention, and we think some changes to assessment procedures will perhaps elicit better work from students in the future.

We continue to get good instructor participation in the process, we increased the percentage of student work read, which should increase the reliability of our assessment, and our instructors continue to show a high level of assessment agreement.

Fall 2006 / Spring 2007

34 instructors participated (89%) / 30 instructors participated (86%)

192 students’ work was read (14%) / 224 students’ work was read (15%)

Reader agreement: 92% / Reader agreement: 95%

2. The student rating of general education goals was conducted as part of the required end-of-semester Student Rating of Instruction. This survey presents a numerical picture of how well students perceive themselves to be meeting the course goals, and the numbers will serve as a baseline for annual assessment. Over the last four years, students have consistently perceived that they are meeting the program goals (just under 4.0 on a 5.0 scale, where 4.0 is defined as “agree”). We don’t perceive any need for programmatic change based on these numbers.

The only survey number that stands out is the “neutral” score for the prompt: “The course textbook supported me in completing my assignments,” scoring 3.1 or 3.2 each semester.

In 2007-08, our English 110 instructors are going to try a new combination of texts based on a pilot study this year that indicated students responded favorably to Thinking for Yourself and The Sundance Reader, while English 120 instructors will continue to use The Call to Write as the primary text.

3. Instructor surveys are important to our overall assessment plan. The success of our first-year English program depends on instructors, so we survey them every semester in order to gather information about how they are teaching their courses and how well they perceive their students to be meeting the program goals. Instructors both semesters reported being satisfied with the overall program approach (2.0 and 2.1, where 2.0 = “satisfied”), but they continue to report the least confidence in teaching the goals we assessed this year, “Understanding Literacy” and “Understanding Leadership.” This set of goals is more abstract than our other program goals, “To communicate effectively in a variety of genres, for various purposes and audiences” and “Integrate knowledge and ideas in a coherent and meaningful manner,” so we are not surprised to see that our instructors feel least comfortable with these goals. Our first-year English program will continue to work on ways to support “understanding leadership” and “understanding literacy.”

Ongoing plans

Assessment in 2007-08 will focus on “Communicate effectively in a variety of genres for various audiences and purposes.” We will be returning to a program goal for the first time since we began systematic program assessment in 2004-05, so it will be interesting to see if we and our students can make any improvements on the benchmarks we set that year. We will need to make some adjustments to our assessment schedule: we will assess both 110 and 120 in the fall, and then 120 only in the spring. It will be interesting to see if there are noticeable skill differences next year when we place students in English 110 and 120.

Direct assessment: Early and Late Semester Analyses

At the end of each semester, instructors of English 110 (32/36) and instructors of English 120 (30/35) participated in an assessment of students’ ability to show their understanding of literacy (fall) and leadership (spring), as well as their ability to rhetorically analyze short commentaries. “Understanding literacy” and “understanding leadership” are departmental goals for English 110 and 120 respectively, and the assessment activities were designed not only to see how students defined literacy and leadership early and late in the semester, but also to see how well students were learning key course concepts like audience, purpose, genre, social context, and style. More generally, this year’s assessment was an attempt to look at students’ meta-cognition, rather than their performance as writers. Our program emphasizes meta-cognitive development—we believe that students need to have good conceptual understanding of writing in order to be good writers—so looking at meta-cognitive knowledge every three years is going to be an important part of our assessment plan.

Students were asked to complete roughly the same two tasks early and late each semester:

  1. Define literacy and leadership; identify your literacy and leadership skill set.
  2. Analyze a commentary about “literacy” and “leadership,” employing the concepts of rhetorical analysis taught in English 110 and 120.

In the fall, we used two different commentaries, but that approach compared apples and oranges and complicated the assessment procedures. In the spring we used the same commentary early and late in the semester; this was a good procedural adjustment, although we would like to use a more challenging leadership essay in the future.

Assessment readers were asked to score students’ early- and late-semester work based on scoring guides generated by the first-year English committee. One hour of each assessment session was devoted to explaining the assessment process and norming the assessment group via two sample responses; two hours were devoted to reading and scoring student work, as well as collecting qualitative observations about both student work and the assessment process.

We have the following procedural numbers to report from the two-day process.

Fall 2006 / Spring 2007

34 instructors participated (89%) / 30 instructors read portfolios (86%)

192 students’ work was read (14%) / 224 students’ work was read (15%)

Reader agreement: 92% / Reader agreement: 95%

These numbers indicate high participation on the part of the department’s teaching staff, without whom we could neither teach all these sections nor complete this kind of assessment. The numbers also indicate, just as we have found the last two years via portfolio assessment, that our teaching staff, and our guest readers, consistently make the same kinds of evaluation of student work. This is the first year we have been able to assess 15% of our students; that number is considered a reliable sample in the scholarship of assessment.

Understanding literacy: Fall Direct Assessment

The English department has identified “understanding literacy” as a course goal for English 110, and that goal is defined as follows:

  • Students should come to understand that “literacy” and “being literate” mean being able to use their reading and writing skills in dynamic and critical ways. Reading and writing are not only fundamental skills for success in school and life, but they are skills that are flexible, varied, and require life-long practice and development. In order to achieve this content goal, students will be asked to:
  1. Reflect on, and do research on, the meaning of “literacy” beyond the basic skills definition.
  2. Reflect on the work they have done in the course as a means of reflecting on their development of increasingly specialized and sophisticated literacy skills.

Definitions + Skill self-assessment

Scores / Early / Late
4.0 (excellent) / 1 / 4
3.5 (very good) / 5 / 12
3.0 (good) / 11 / 24
2.5 (between acceptable and good) / 35 / 34
2.0 (acceptable) / 99 / 57
1.5 (between acceptable and incorrect or unacceptable) / 33 / 40
1.0 (incorrect or unacceptable) / 7 / 21
X (no definition) / 1 / 0
Total / 192 / 192

The early and late semester assessment strategy was to have students define literacy and their literacy skill set within the first two weeks of the semester, and then, based on course readings, assignments, and activities, define literacy and skill sets again during the last two weeks of the semester. The results showed that the bulk of students performed both the early and late definition tasks at an acceptable, or slightly above acceptable level (2.0 and 2.5), although our readers found that many more students were able to provide good to excellent definitions (40 students) late in the semester than were able to provide good to excellent definitions early in the semester (17 students). Our readers also found, however, that more students wrote definitions that were below acceptable (61) late in the semester than early in the semester (40).

We think many of the poor scores for late-semester definitions were due to the fact that our assessment instrument was slightly flawed; our expectations for late-semester definitions were higher than our expectations for early-semester definitions. For example, an adequate early-semester definition was defined as, “Basic definition of literacy means being able to read and write. Student can identify current skills, not likely to identify additional / future skills.” An adequate late-semester definition was defined as, “No significant change in definition, but adequate use of source. Basic list of skill set, but no development.” Most problematic in our late-semester definitions was the fact that we asked students to refer to course readings to support their definition, and many of the 61 students who scored 1.5 or 1.0 on late-semester definitions simply did not refer to any sources. Some of these students had definitions that were otherwise an improvement on their early-semester definition, but to follow our own assessment key, we had to score those definitions as unacceptable.

Overall, the progress that we did see is appropriate, but we recognize that programmatically there is a need to clarify “understanding literacy” and we need to improve the assessment of this goal.

Indirect measure: Students report on their own change

Scores / Student change
4.0 / 46
3.5 / 20
3.0 / 104
2.5 / 9
2.0 / 4
1.5 / 1
1.0 / 2
X (did not answer) / 6
Total / 192

As part of the late-definition task, we also asked students to indicate whether they felt like their skills had:

4. Improved significantly.

3. Improved a little bit.

2. Did not change.

1. Got worse.

We asked students to respond to this prompt in writing, rather than offer us a numerical answer, so in 30 cases, the assessment readers did not perfectly agree on what a student was saying.

While the program can be pleased that so many students (170/192) indicated that their skills improved anywhere from a little to quite a bit, our assessment readers were a little bit skeptical about “assumed improvement” (i.e. students are invested in saying they have improved, otherwise a semester has been wasted; students being students also tend to give teachers what they think teachers want to hear). Assessment readers were also often quite skeptical of reported “4”s when little evidence of improvement could be found in the assessment documents.

Having just raised concerns about the successes reported by our students, it is also valuable to put this score in the context of some recent scholarship on first-year students and how they learn. Nancy Sommers and Laura Saltz (2004) have found, through a four-year longitudinal study of 400 students’ development as writers, that the most significant changes for first-year students will not appear on the page, but will be changes in attitude towards and thinking about writing. If we take our students’ self-assessment at face value, rather than with a grain of salt, the program and the university can be pleased to see that students are recognizing improvement in their attitude towards and thinking about writing, and these students should continue to improve over the course of their college careers.

Sommers and Saltz also reported that improvement is a slow and uneven process, sometimes going through periods of regression. As one of our readers said after the session, our program might be so different from the high schools’ approach to English that it takes students a long time to understand what we are asking them to do. Such uncertainty or confusion could lead to a sense that one’s skills have indeed gotten worse during the first semester of college.

Rhetorical analysis and course concepts

The first-year English committee asked students to perform a rhetorical analysis of a short commentary about literacy early and late in the semester to further assess students’ understanding of literacy, but more importantly, to assess their reading and rhetorical analysis skills. Almost every instructor in English 110 assigns a rhetorical analysis or a similar analytical assignment, so we would expect significant improvement over the course of the semester. The rhetorical analysis is organized around what our program’s common textbook, The Call to Write, calls the “Five Factors” that writers and readers should consider in any communicative act: What is the purpose for communicating? Who is the audience? What is the genre being used? What is the social context for the communicative act? And what style or voice is used to communicate?

Rhetorical Analysis

Scores / Early / Late
4.0 (excellent) / 0 / 4
3.5 / 0 / 7
3.0 (good) / 2 / 17
2.5 / 7 / 38
2.0 (acceptable) / 18 / 42
1.5 / 67 / 51
1.0 (unacceptable or incomplete) / 98 / 33
X / 0 / 0
Total / 192 / 192

Students’ performance on these tasks follows a predictable pattern of general improvement. What is surprising, and will be an issue the program has to address, is the fact that 84 of 192 students were still performing below an acceptable level on the late-semester assessment. Assessment readers were particularly surprised to find so few students able to identify the late-semester reading sample as an example of the genre of commentary, even though many students read and wrote commentaries during the semester. Students tended to rely on what might be a high school vocabulary: the genre was frequently identified as “informative” or “persuasive,” which are purposes for writing (to inform, to persuade), but most students could not recognize the features of these commentaries: using labels to understand a complex issue, pointing out patterns and trends, offering a perspective on an issue without offering a clear proposal.

Less surprising was an almost complete inability to identify a social context for either piece of writing. Understanding the social context of a piece of writing requires students to understand how a single commentary is part of a larger conversation about a topic, in this case literacy, that is being researched, debated, and re-defined by teachers, administrators, librarians, parents, and others with a stake in education. Understanding a social context requires the ability to grasp the big picture of any issue, and first-year students are neither knowledgeable enough nor mature enough to see the social context that has shaped and influenced individual pieces of writing. One assessment reader doubted whether many instructors would be able to identify the social context of the sample commentaries.

Students were not particularly strong at identifying style and voice, either, although students generally seemed to have a better grasp of this concept than genre or social context.

The results here indicate that the planned emphasis on teaching reading in English 110 is going to be the right emphasis for that course. The program, however, will have to be careful not to assume strong reading skills in those students who are placed directly into 120—with close to 44% of students scoring below acceptable on the late-semester assessment, we have strong evidence that there will be a need to continue teaching good reading and analytical skills in English 120.

Understanding Leadership: Spring Direct Assessment

The English department has identified “Understanding leadership” as a course goal for English 120, and that goal is defined as follows:

The English department has adopted “understanding leadership” as a content goal for English 120 because part of being an effective writer and communicator can also mean being an effective leader or collaborator. While civic leaders are often examples of good communicators, students should come to see through the collaborative assignments and explorations of leadership in this course that leadership can take many forms, and individuals who communicate well can either take leadership roles or support strong teams throughout college, into their careers, and within their communities. In order to achieve this goal, students will:

  1. Work collaboratively on at least one writing assignment and reflect on their experiences as a collaborator as a means of understanding their own experiences in a group, as a leader or member.
  2. Reflect on, and in some cases do research on, the concept of leadership generally, and the concept of “transformative leadership” specifically. Transformative leadership, as defined in Leadership Reconsidered, is leadership that seeks positive change and understands leadership as a collaborative process. All students have leadership potential; being a leader does not require holding a title.
  3. Address an issue or issues of social consequence in their writing, with the goal of increasing their own understanding of that issue and/or formulating a plan of action to address a problem.

The early and late semester assessment strategy was to have students define leadership within the first two weeks of the semester, and then, based on course readings, assignments, and activities, define leadership again during the last two weeks of the semester. Students in both instances were encouraged to draw on texts, ideas, quotations, experiences, and models that shaped their understanding of leadership. Answers were scored based on the following definitions: