Academic Affairs: Assessment, August 2010

CENTRAL WASHINGTON UNIVERSITY

2009-2010 Assessment of Student Learning Report

Feedback for the Department of Computer Science

Degree Award: BS
Program: Computer Science

  1. What student learning outcomes were assessed this year, and why?

Guidelines for Assessing a Program’s Reporting of Student Learning Outcomes (Target = 2)

Program Score: 2

Value / Demonstrated Characteristics
4 / Outcomes are written in clear, measurable terms and include knowledge, skills, and attitudes. All outcomes are linked to department, college and university mission and goals.
3 / Outcomes are written in clear, measurable terms and include knowledge, skills, and attitudes. Some outcomes are linked to department, college and university mission and goals.
2 / Outcomes are written in clear, measurable terms and include knowledge, skills, or attitudes. Outcomes may be linked to department, college and university mission and goals.
1 / Some outcomes may be written as general, broad, or abstract statements. Outcomes include knowledge, skills, or attitudes. Outcomes may be linked to department, college and university mission and goals.
0 / Outcomes are not identified.

Comments:

The program assessed nine (9) student learning outcomes for the BS degree in Computer Science. In the very lengthy document provided, Table 12 addresses this area. The outcomes are written in clear, measurable terms and are linked to college and university goals.

The program addresses knowledge and skills but does not seem, in this section, to address any attitudinal behaviors. The most likely location for an attitudinal outcome, Outcome #5 regarding ethics, is developed along the lines of knowledge rather than attitude.

The program is commended for producing a very detailed document; however, it is recommended that only relevant data be included in future reviews.

Having to go to page 50 of the document for information related to the first question is, to say the least, difficult.

Again, the program is encouraged to include, or to develop from this strong list of well-crafted outcomes, outcomes that could be identified as attitudinal in nature.

Additionally, it is noted that this material is dated May 2009; updating materials for the current review would be helpful.

  2. How were they assessed?
     a. What methods were used?
     b. Who was assessed?
     c. When was it assessed?

Guidelines for Assessing a Program’s Reporting of Assessment Methods (Target = 2)

Program Score: 1

Value / Demonstrated Characteristics
4 / A variety of methods, both direct and indirect, are used for assessing each outcome. Reporting of assessment methods includes population assessed, number assessed, and, when applicable, survey response rate. Each method has a clear standard of mastery (criterion) against which results will be assessed.
3 / Some outcomes may be assessed using a single method, which may be either direct or indirect. All assessment methods are described in terms of population assessed, number assessed, and, when applicable, survey response rate. Each method has a clear standard of mastery (criterion) against which results will be assessed.
2 / Some outcomes may be assessed using a single method, which may be either direct or indirect. All assessment methods are described in terms of population assessed, number assessed, and, when applicable, survey response rate. Some methods may have a clear standard of mastery (criterion) against which results will be assessed.
1 / Each outcome is assessed using a single method, which may be either direct or indirect. Some assessment methods may be described in terms of population assessed, number assessed, and, when applicable, survey response rate. Some methods may have a clear standard of mastery (criterion) against which results will be assessed.
0 / Assessment methods are nonexistent, not reported, or include grades, student/faculty ratios, program evaluations, or other “non-measures” of actual student performance or satisfaction.

Comments:

The program identifies a number of direct measures (Major Field Tests, meeting course outcomes) and indirect measures (group participation, SOURCE participation) for assessing learning outcomes.

Some outcomes are measured by a single method. The population and time of assessment are clear.

However, not all methods have clearly established standards of mastery (e.g., increasing participation at SOURCE). Also, course grades and overall major GPAs are very indefinite measures of student achievement.

The program is recognized for its use of rubrics on certain assignments.

Previous reviews note that course grades and GPAs are very problematic assessment tools.

Because this appears to be a report from the previous year, comments from last year's review remain valid for the current review. Again, the course is not the method but rather the place where the measurement is collected. Does more than one faculty member apply the rubrics? Slightly more clarity would help here. How many employers and internship employers are included in the report?

  3. What was learned (assessment results)?

Guidelines for Assessing a Program’s Reporting of Assessment Results (Target = 2)

Program Score: 2

Value / Demonstrated Characteristics
4 / Results are presented in specific quantitative and/or qualitative terms. Results are explicitly linked to outcomes and compared to the established standard of mastery. Reporting of results includes interpretation and conclusions about the results.
3 / Results are presented in specific quantitative and/or qualitative terms and are explicitly linked to outcomes and compared to the established standard of mastery.
2 / Results are presented in specific quantitative and/or qualitative terms, although they may not all be explicitly linked to outcomes and compared to the established standard of mastery.
1 / Results are presented in general statements.
0 / Results are not reported.

Comments:

At the end of the document, the program identifies a number of staffing and infrastructure issues that are not related to the identified student learning outcomes.

The review does present a number of specific results. However, these results are neither linked to student learning outcomes nor compared to established standards of mastery.

Where the program did assess a general education requirement, the results were presented as an examination of course grades, which has already been identified as a less-than-desirable method of assessing student learning outcomes.

While it is clear that the program is producing a large amount of data across many areas of assessment, it is not clear from the report that there is a steady linkage among student assessment goals, the collection of data, and the analysis of and action on that data.

  4. What will the department or program do as a result of that information (feedback/program improvement)?

Guidelines for Assessing a Program’s Reporting of Planned Program Improvements (Target = 2)

Program Score: 1

Value / Demonstrated Characteristics
2 / Program improvement is related to pedagogical or curricular decisions described in specific terms congruent with assessment results. The department reports the results and changes to internal and/or external constituents.
1 / Program improvement is related to pedagogical or curricular decisions described only in global or ambiguous terms, or plans for improvement do not match assessment results. The department may report the results and changes to internal or external constituents.
NA / Program improvement is not indicated by assessment results.
0 / Program improvement is not addressed.

Comments:

Again, there is discussion of program improvement, but much of this section relates to non-assessable student learning outcomes. In the areas that did identify course change or improvement, there was no linkage to the results of the assessed learning outcomes; instead, changes were tied to the IEEE curriculum, which is not identified as a student learning outcome.

The program is acknowledged for its discussion of the increased use of surveys and outreach.

  5. How did the department or program make use of the feedback from last year’s assessment?

Guidelines for Assessing a Program’s Reporting of Previous Feedback (Target = 2)

Program Score: 2

Value / Demonstrated Characteristics
2 / Discussion of feedback indicates that assessment results and feedback from previous assessment reports are being used for long-term curricular and pedagogical decisions.
1 / Discussion of feedback indicates that assessment results and feedback from previous assessment reports are acknowledged.
NA / This is a first-year report.
0 / There is no discussion of assessment results or feedback from previous assessment reports.

Comments:

There is a listing of five action items from previous assessments. The report identifies areas of improvement.

Overall, the program's report could have been updated from 2009 and significantly reduced in length to focus on the student learning outcomes.

A strengthening of these outcomes through more measurable means, and a clearer linkage of the results to proposed changes in the curriculum, would have improved the scoring.

Dr. Tracy Pellett and Dr. Ian Quitadamo, Academic Assessment Committee Co-chairs
