
“Value Added Assessment of Programs of Intense Student-Faculty Interaction: Developing Intentional Learners”

A Teagle Foundation Project

Report on Year 1: 2006-2007

Project Goals

In this project, five national liberal arts institutions—Drew University, Moravian College (administrator), Muhlenberg College, Roanoke College, and Susquehanna University—are assessing selected programs of intense student-faculty interaction for their contributions to the liberal education goal of developing intentional learners. The members of this collaborative have established a variety of academic programs intended to intensify the liberal arts experience, including first-year seminars, writing-across-the-curriculum, undergraduate research, and capstone experiences. Each institution will assess program-specific outcomes, and in collaboration the institutions will develop methods and tools to assess the value of such programs in promoting intentional learning.

Project Activities, Year 1

Developing intentional learning definitions and rubrics:

At our October workshop, held at Muhlenberg College, we considered the meaning of intentional learning as described by the AAC&U[1] and distinguished between our understanding of intentional learning and other desirable characteristics of graduates discussed by the AAC&U within this context. For us, intentional learners are self-aware, self-directed, and aware of multiple perspectives. They make connections and apply skills and knowledge to different contexts.

Intentional Learning Outcomes

Learners who are self-aware and self-directed can:

1. Articulate their reasons for study within the context of a liberal arts education

2. Describe and evaluate their learning processes

3. Develop plans for pursuing learning goals

4. Set, pursue, and reflect upon their learning goals

Learners who are aware of multiple perspectives can:

5. Identify diverse or conflicting concepts, viewpoints, and/or priorities

6. Articulate the value of considering multiple perspectives

7. Examine phenomena from multiple viewpoints

Learners who make connections can:

8. See connections in seemingly disparate information

9. Recognize links among topics and concepts presented in different courses

10. Synthesize disparate facts, theories, and concepts

11. Work within a context of diverse or conflicting concepts, viewpoints, and/or priorities

Learners who apply skills and knowledge to different contexts can:

12. Adapt what is learned in one situation to problems encountered in another

13. Connect intellectual study to personal life

14. Draw on a wide range of knowledge to make decisions

We also developed a scoring rubric that has guided our work. (See Appendix A.) Our rubric is based on operational definitions for each of four categories of achievement (a brief illustrative sketch of how ratings on these categories might be tabulated follows the definitions):

I. Below Basic: The student does not demonstrate a basic understanding of the learning outcome being assessed. Work at this level does not meet expectations for a college student.

II. Basic: The student appears to grasp some elements of the learning outcome being assessed, but does not demonstrate proficiency. Work at this level may be acceptable for lower-division students, but does not meet expectations for a student about to graduate from college.

III. Proficient: The student demonstrates understanding of the learning outcome being assessed. Work at this level meets expectations for a student about to graduate from college.

IV. Advanced: The student demonstrates sophisticated or exemplary mastery of the learning outcome. Work at this level exceeds expectations for a student about to graduate from college.
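
For campuses that tabulate rubric ratings electronically, the following minimal Python sketch shows one way ratings on these four categories could be recorded and summarized by rubric dimension. It is offered only as an illustration: the dimension names, essay identifiers, and ratings in the example are hypothetical and are not project data.

    # Hypothetical illustration: the sample dimensions and ratings below are
    # invented; only the four achievement categories come from the rubric.
    from collections import Counter

    # Achievement categories from the rubric, in order, mapped to scores 1-4.
    LEVELS = ["Below Basic", "Basic", "Proficient", "Advanced"]
    LEVEL_SCORES = {name: i + 1 for i, name in enumerate(LEVELS)}

    # Each record: (essay identifier, rubric dimension, assigned category).
    ratings = [
        ("essay01", "Considers multiple perspectives", "Basic"),
        ("essay01", "Makes connections", "Proficient"),
        ("essay02", "Considers multiple perspectives", "Proficient"),
        ("essay02", "Makes connections", "Advanced"),
        ("essay03", "Considers multiple perspectives", "Below Basic"),
        ("essay03", "Makes connections", "Basic"),
    ]

    def summarize(ratings):
        """Print the category distribution and mean score for each dimension."""
        by_dimension = {}
        for _essay, dimension, level in ratings:
            by_dimension.setdefault(dimension, []).append(level)
        for dimension, levels in sorted(by_dimension.items()):
            counts = Counter(levels)
            mean = sum(LEVEL_SCORES[lvl] for lvl in levels) / len(levels)
            distribution = ", ".join(f"{name}: {counts.get(name, 0)}" for name in LEVELS)
            print(f"{dimension} (mean {mean:.1f} of 4): {distribution}")

    summarize(ratings)

A summary of this kind reports results by dimension and category rather than by individual student or rater, which is consistent with our emphasis on program-level rather than individual evaluation.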

We agreed that while no institution would likely assess progress on all of the characteristics on our list, the list would serve as a starting point for intentional learning assessment on each campus and as a basis for the development of common assessment tools. Moravian College, for example, assessed outcomes 1, 4, 7, and 9. Muhlenberg College adopted a four-dimension rubric for its assessment of student essays (See Appendix B):

1. Considers multiple perspectives

2. Makes connections

3. Can create new knowledge, theories, and representations

4. Can critically evaluate knowledge, theories, and representations

Our workshop leader, Dr. Mary Allen, developed an additional scoring rubric with six student learning outcome dimensions:

1. Self-awareness as a learner

2. Self-directed learning

3. Considers multiple perspectives

4. Makes connections

5. Can deal with conflicting ideas

6. Adapts learning to new situations

Assessment Activities

As a result of a Teagle Foundation planning grant in 2005-2006, we began the 2006-2007 year having asked selected students on each campus to complete essays related to intentional learning. To gain experience with essay prompts, each institution gave its students a different prompt, but every prompt was designed to elicit evidence of how far students had progressed in becoming intentional learners. Each institution also used different methods to solicit the essays. At the time we collected the essays, we had not yet developed the categories or rubrics described above, but we nonetheless applied aspects of the rubrics to the essays and found the exercise worthwhile.

In the spring, we experimented further with writing prompts: some institutions used new versions of the fall prompts, Moravian developed a writing exercise that required students to respond to a variety of information about an issue (multiple perspectives), and Muhlenberg used its four-dimension rubric to assess its essays.

We also analyzed the NSSE survey to identify questions that provide information about student progress in intentional learning. In discussions with NSSE, we considered administering either the entire survey or our selected questions to a sample of students at Muhlenberg and Roanoke, the two institutions that did not administer the NSSE this year. We decided that the personnel cost of administering the survey was too great. At our May workshop, we agreed instead to use Muhlenberg 2008 data and Roanoke 2006 data. In the fall of 2007, we will decide how we will share the data.

The following summary lists the Teagle Foundation assessment activities of individual institutions for the year, grouped by type of activity (focus groups, writing prompts, other direct assessment, local surveys, and NSSE/other):

Drew University
  • Writing prompts: direct assessment of planning and intentionality; administered a writing prompt three times during the year in First-Year Seminars; scored essays in FYS instructor training using two dimensions of the rubric; revised the prompt and rubric after scoring.
  • Local surveys: administered the survey of adapted NSSE and CIRP items developed last year; compiled the data and shared it with FYS instructors.
  • NSSE/other: provided FYS instructors with an advising model in response to NSSE data from the prior year; selected NSSE questions to be analyzed once data becomes available in fall 2007.

Moravian College
  • Focus groups: two student-led focus groups of about-to-graduate seniors on writing, revising, and intentional learning.
  • Writing prompts: prompts about intentional learning for first-year students and seniors; prompts asking first-year students and seniors to analyze material from multiple perspectives; reflective letters from first-year and senior students in writing courses.
  • Other direct assessment: multiple drafts of papers from first-year and senior students in writing courses.
  • NSSE/other: selected NSSE questions to be analyzed once data becomes available in fall 2007.

Muhlenberg College
  • Writing prompts: prompts about direct experiences administered in capstone courses; scored essays in capstone instructor training using broad intentional learning categories.
  • NSSE/other: analyzed 2005 NSSE data to compare responses from capstone and non-capstone students.

Roanoke College
  • Focus groups: five staff-led focus groups with second-semester first-year students.
  • Writing prompts: thirty writing samples collected from three first-year writing classes based on two intentional learning prompts.
  • Local surveys: the same thirty students surveyed on integrated and intentional learning.

Susquehanna University
  • Focus groups: data collected by the Core Perspectives Steering Committee.
  • Writing prompts: analyzed essays from the 2006 administration.
  • Other direct assessment: developed a plan to collect student work samples from capstone courses and adapted the Teagle rubrics to analyze them.
  • Local surveys: developed, administered, and analyzed pre- and post-course surveys in most sections of Core Perspectives.
  • NSSE/other: selected NSSE questions to be analyzed once data becomes available in fall 2007.

Summary of Assessment Methodology Lessons Learned

Rubrics

  • The group process of delineating dimensions of intentional learning and devising rubrics was exceptionally engaging and promoted inter- and intra-institutional cooperation.
  • When writing rubrics, think about the kind of evidence you’ll be assessing. Revise rubrics when necessary to assess the data you have.
  • Construct rubrics that do more than count instances of things—more emphasis on quality than quantity.
  • Revise rubrics to avoid the “ceiling effect,” where students too easily reach the advanced category.

Focus Groups

  • At Moravian College, selected students did an excellent job leading focus groups. They received training and a list of prepared questions. One student led the discussion and a second acted as recorder, taking notes and operating a digital recorder. The students provided transcripts (without attributions) and a summary of major points.
  • The focus groups were especially useful for raising questions and providing context for what we learned from direct assessment and surveys.

Student Essays

  • Choose writing prompts carefully; very direct instructions seem to work the best.
  • When students complete an in-class essay, have them write in computer labs to avoid handwriting problems.

Surveys

  • Develop a plan for sharing NSSE and other data.
  • Developing common survey questions requires significant effort.

Ways to get data from students

  • Where available, course-embedded assessment is easier to gather than “add-on” assessment using student volunteers, but it will likely elicit different responses than volunteers provide.
  • Offer a direct incentive—e.g., money, food cards, pizza, a “nominal gift.”
  • Collect data under controlled conditions—e.g., use CLA model.
  • Offer a lottery entry to possibly win a prize.
  • Recruit from existing groups of students—e.g., athletic teams, clubs, etc.
  • Include data collection during required exit interviews.
  • Catch students right after a class—go to where they are.
  • Personal invitation by a professor, with explanation of why it’s important.
  • Give students a week so they can think about their responses, then offer a “nominal gift” for responding.
  • Use email to distribute writing prompt, etc. in addition to instructor announcement and encouragement.
  • Data collection timing can be important—e.g., collect final projects in the last week of classes.

Faculty involvement

  • Clarify the role of assessment—we are not evaluating faculty members.
  • Don’t analyze data in ways that identify individual faculty members or departments. Focus on the overall program’s effectiveness—e.g., the impact of first-year seminars or capstone experiences.
  • Involve relevant faculty members in assessment—faculty members whose teaching/professional life may be affected by results.
  • Encourage faculty members to make use of and participate in this project, including course and curriculum design.
  • In calibration before reading student work, choose exemplars that are unambiguously weak, medium, and strong.
  • Secure faculty cooperation in data collection. Ensure that faculty members understand their role in collecting embedded assessment data.
  • Assessment participation leads to faculty development—e.g., faculty members learn about and think about the outcomes, student learning of those outcomes, and their own courses.

Other

  • Consider ethical issues in assessment, including informed consent and Institutional Review Board review.
  • Don’t over-survey. Collaborate with others interested in surveying the same populations.
  • Perhaps over-sample some populations to get a full range of data.
  • Integrate rubrics into relevant classes—share the rubrics with your students.
  • Give data back to the class, both faculty and students, for follow-up.

Plans

Common Essay Prompt

Before our October 2007 workshop, we will administer a pilot of a common essay prompt that will assess Intentional Learning Outcomes 7 and 11 (examine phenomena from multiple viewpoints; work within a context of diverse or conflicting concepts, viewpoints, and/or priorities). We agreed at our May 2007 workshop and through subsequent e-mails that the prompt would be something like the following:

You are the president of [insert your college name]. The student government has presented you with a proposal to abolish letter grades in all courses and replace them with written evaluations, as is the practice in some other liberal arts colleges. In your position as President, your first task is to examine this proposal from the viewpoints of various groups (such as students, professors, administrators, parents, employers, etc.). Your second task is to write your own response to the proposal, taking these various perspectives and priorities into account.

At the October 2007 workshop, we will create assessment rubrics, spend some time working with the pilot essays, and revise the prompt. In late fall 2007 and spring 2008, individual campuses will collect additional essays. We are considering a spring 2008 assessment workshop that would include faculty from each campus who are not yet involved in this project. At that workshop, we would create new rubrics and assess the additional essays.

The expected outcomes for this effort are that we will (i) test the efficacy of a common rubric, (ii) answer an important question relevant to at least one of our student learning outcomes, (iii) give our faculty opportunities to learn methodology and to share ideas and perspectives across institutions (faculty development), and (iv) collect first-year and senior data on each campus to demonstrate value added.

Focus Groups

If possible, we will experiment with faculty members from one campus conducting a focus group for another campus.

Surveys

In 2007-2008 we will develop a common survey based on the larger list of desirable student learning outcomes. In addition, we will work toward an agreement regarding the sharing of NSSE data. In the fall, each campus will be asked to share its latest data on the 40 NSSE items we selected, using 4-year private norms. We will also attempt to add 5-6 common questions regarding intentional learning to our 2008 administration of the NSSE survey (Muhlenberg and Susquehanna), and the other institutions will collect a sample of responses to these questions from first-year and senior students. We need to pilot the questions.

Other Assessment

Each institution is working on additional assessments of student work related to its individual projects. One goal is to increase the direct assessment of student work.

Workshops

At the fall workshop at Roanoke College, we will share our work, as we have in the past; work on the projects listed here; and have a presentation on effective syllabi by Dr. Barbara Tewksbury of Hamilton College.

In the spring, we may schedule an additional workshop for assessing student essays, as described above. The program for our regular workshop, in late May or early June at Moravian College, remains open, but will likely concentrate on taking stock and planning.

Conference Presentation

We have proposed to present our project at the January 2008 AAC&U conference in Washington, DC. The theme of the conference is intentional learning, although the organizers define the topic somewhat differently than the authors of Greater Expectations do.

Website

We are constructing a public website for our project so we can share key documents and findings, including this report.

Overall Conclusions

We have found great value in learning from each other’s experiences and perspectives. Indeed, we have learned more from each other than was anticipated in our grant proposal. Much of our success has been due to the leadership of our consultant, Dr. Mary Allen, who has helped us understand what we need to do to achieve our goals. Several of our institutions have invited Dr. Allen to present assessment workshops to their faculty members and to consult on other projects. Our collaboration has also brought attention to differences among our institutions and thus has revealed conflicts that we have had to acknowledge and resolve before taking action. In this regard, we found that defining intentional learning as a group was especially important. We also found it very important to be open to differences in the value-added characteristics developed at different campuses and within different disciplines.

We are still early in the process of data collection, but we are already developing insights into our teaching and our students. At Moravian, for example, faculty learned that while students grow significantly in their understanding of how liberal education fosters their personal and professional engagement, they show very little growth in connecting liberal education to civic engagement. Similarly, the Moravian faculty members involved in the project discovered that an unacceptable number of students do not understand how to revise their written work beyond correcting surface errors. Thus we are organizing a faculty workshop, led by Dr. Carol Rutz of Carleton College, on teaching students how to revise. At each institution, we have a sense that we are asking important questions and finding useful answers.

We have also had many early methodological insights as we collaborate, read, and practice. It is apparent that doing assessment gets easier with practice. At the same time, we recognize that we still have a way to go before we complete our project. We want to improve our collaboration so that more of our data are comparable across institutions. And we want to be sure that we keep a focus on assessing the impact our institutions have on intentional learning and on how we can improve student learning in this regard. The group is discovering that we may need longer-term, longitudinal studies to understand fully how to improve intentional learning among our students.

On each campus we have found that while it is important to improve our collection and analysis of data, we can have excellent faculty discussions based on imperfect assessment studies. If the task of assessment is to demonstrate to outsiders that our institutions perform well, or even that they perform better than others, then assessment studies must be meticulously scientific. But if assessment is aimed at improving student learning, then studies must meet a different standard, one that is accessible and interesting to our busy colleagues. There is still resistance in various campus quarters to the idea of assessment, so relevance and methodological ease are important. Moreover, we have found that it is especially important to keep our college stakeholders (faculty members who worked on the project, relevant faculty committees, administrators, trustees, etc.) apprised of our findings.