Conference Evaluation Summary

Adding Value to the Mathematics and Science Partnership Evaluations

Wisconsin Center for Education Research, University of Wisconsin

March 4-5, 2004

On March 4-5, 2004, at the Wisconsin Center for Education Research, 30 participants met for the third semiannual Adding Value to the Mathematics and Science Partnership (MSP) Evaluations Conference. Participants included 25 evaluators of the comprehensive and targeted MSPs, one principal investigator of the MSPs, and four Research, Evaluation, and Technical Assistance (RETA) representatives, including three from the Adding Value Evaluation Project.

The Conference covered important aspects of evaluation, including growth modeling and needed analytic tools, with practical exercises in both qualitative and quantitative research. Following an introduction and a site round robin by each participant, Norman Webb gave an overview of evaluation, partnership, and reporting issues, and participants raised questions and concerns regarding evaluation. The simultaneous short course sessions on qualitative and quantitative research were followed by a session on assessment of content and a debriefing session to identify procedures for continued work and topics to cover at the fourth semiannual conference in September 2004. The notes from the conference are available on the Adding Value web site under Conference and Minutes. The discussion and networking opportunities aided the Conference in achieving its overall goal of clarifying evaluation needs through networking and intellectually rich conversations among MSP evaluators and RETA representatives.

Summary of Responses

What follows is a summary of the evaluations completed by the conference participants. The response rate was 67 percent, with 18 of the 27 participants (not including the 3 Adding Value Team members) completing evaluations; not all participants responded to all questions. Participants were asked to indicate how much they gained from each of the following aspects of the conference:

Session I: Evaluation, Partnership and Reporting Issues

Course A: Design Issues, Reliability and Validity

Course B: Data Set Analysis

Course C: Data Collection and Correction

Course D: Data Set Analysis

Session II: Assessment of Content

Session III: Debriefing Session

The small group discussions

Large group sharing of projects

Other opportunities for networking

The conference overall

Participants were asked to rate each of these items on a scale from 1 to 5, with 1 signifying the lowest rating, “none,” and 5 signifying the highest rating, “a great deal.”

How much did you gain from Session I: Evaluation, Partnership and Reporting Issues?

The average rating for Session I was 3.5 (between somewhat and a lot), with 94 percent giving Session I a rating of 3 or higher.

How much did you gain from Course A: Design Issues, Reliability and Validity?

The average rating for Course A was 3.7 (between somewhat and a lot), with 100 percent giving Course A a rating of 3 or higher. The following comment was made regarding Course A:

No new information, but rephrasing or explaining information in a different way was useful. I enjoyed the interactive nature of the session.

How much did you gain from Course B: Data Set Analysis?

The average rating for Course B was 4.1 (a lot), with 100 percent giving Course B a rating of 3 or higher.

How much did you gain from Course C: Data Collection and Correction?

The average rating for Course C was 4.4 (a lot), with 100 percent giving Course C a rating of 4 or higher.

How much did you gain from Course D: Data Set Analysis?

The average rating for Course D was 3.9 (nearly a lot), with 100 percent giving Course D a rating of 3 or higher.

How much did you gain from Session II: Assessment of Content?

The average rating for Session II was 3.5 (between somewhat and a lot), with 91 percent giving Session II a rating of 3 or higher.

How much did you gain from Session III: Debriefing Session?

The average rating for Session III was 3.0 (somewhat), with 83 percent giving Session III a rating of 3 or higher.

How much did you gain from attending the small group discussions?

The average rating on attending the small group discussions was 3.8 (between somewhat and a lot), with 100 percent giving a rating of 3 or higher.

How much did you gain from the large group sharing of projects?

The average rating on the large group sharing of projects was 3.4 (between somewhat and a lot), with 89 percent giving a rating of 3 or higher.

How much did you gain from other opportunities for networking?

The average rating on opportunities for networking was 4.2 (a lot), with 100 percent giving a rating of 3 or higher.

How much did you gain from the conference overall?

The average rating on how much was gained from the conference overall was 4.2 (a lot), with 100 percent giving a rating of 3 or higher.

How would you rate the conference overall?

The average rating on the conference overall was 4.1 (high), with 100 percent giving a rating of 3 or higher.

What are the most important and relevant topics to be discussed at the next meeting on September 16 and 17, 2004?

Participants provided the following responses:

Options for testing/assessing teacher content knowledge

Development and use of case studies

Evaluation design, especially as related to controls – options and pros/cons of each

Refocusing evaluation and midcourse corrections – what can be done, what to watch out for

Maximizing mixed-methods

Review of qualitative software packages and how they work

Identifying and reporting results of value-added evaluation

Use of Proc Mixed

Methods for error correction and bias

Assessing partnerships

Creative ways to overcome lack of control groups

Closer examination of existing resources (tools, software, etc.) and evaluating appropriateness of application to program

Online instruments/issues of anonymity

Internal/external balance and collaboration

Partnership building – there are a number of questions about whether “true” partnerships are being built and the value-added of partnerships between IHE and K-12

Measuring program effects in a meaningful way. Most MSPs are struggling with how to measure the effects of professional development of teachers

I would like to see more strategies for project management. That promises to be trickier as we go along

Other quantitative measures beyond test scores

Integrative ideas for reporting – integration of qualitative and quantitative

Modeling of data in particular MSP cases to study project effects

Qualitative evaluation – computer programs for qualitative data analysis (N-Vivo, Nud*ist)

Strategies to organize data vs. analysis

Identifying common issues in evaluation design/analysis and discussing them together

Emerging issues – for example, what does collaboration in IHE look like? What is the incentive?

How to address the continuing pressure to and from NSF to retrofit experimental models on existing projects

Continuation of the courses, with demonstrations of software for both quantitative and qualitative analysis

Display/discussion of evaluation implementation plans from other NSF projects (the big picture)

Participants had several additional comments to add:

The session was extremely valuable! Thanks for the organization and opportunities.

Please improve communication beforehand. Nice job.

Great conference.

Great opportunity to learn. Thank you for your attention to detail! (Food, etc.).

The short courses should be defined by research questions rather than methodology. For example, participants could split into groups and work together on a model case based on typical MSP issues and put together an evaluation. Each team would present the methods they would use for the evaluation, led by research staff proficient in both qualitative and quantitative methods, followed by a discussion of the various methodological issues in the evaluation approach. Multiple staff would probably be needed for this format.

I appreciated the community building a great deal. I also appreciated the attention to the differing expertise of the participants. As a relative newcomer, I didn’t feel lost.

In the data set analysis course it was sometimes very difficult for me to follow the presentation; the handouts were not in an order or organization that facilitated it. Also, it was sometimes difficult to understand some of the issues or concepts that Rob was using. Comments by Norman in the second-day session helped in this regard.

It was a great experience and a very productive one. The contributions of other colleagues complemented the presentations given.

The course sessions were wonderful; I wish there had been the opportunity to attend both. Lots of information and “take home” ideas, references, and techniques. The sharing-out sessions are fine, but so many of the projects run together that I am not sure how much we gain. Norman is a great facilitator.

Summary

The third semiannual Adding Value Conference provided a variety of opportunities for both formal and informal conversations to identify critical evaluation needs in the review of the Mathematics and Science Partnerships. A summary of the evaluation responses indicates that the participants valued the conference and gained new information.
