This week, you will retrieve an article of current literature that reports on the evaluation of a public health program. Review the article completely and discuss the type of evaluation used and the process described. Determine if the evaluation completed made sense based on the "Step-By-Step Guide to Effective Evaluation" described in Chapter 13.

According to our textbook, an impact evaluation asks whether the program had a direct effect on health behaviors, and an outcome evaluation asks whether the program had a direct effect on indicators of disease or an actual reduction in morbidity or mortality (DiClemente, Salazar, & Crosby, 2013). According to Hughes, Black, and Kennedy (2008), impact evaluation is concerned with assessing the immediate effects of the intervention and usually corresponds to the measurement of the intervention objective, while outcome evaluation is concerned with measuring the longer-term effects of the intervention, which usually correspond to the intervention goal. Therefore, the evaluation of the childhood obesity program Kids – Go for Your Life (K-GFYL) in Victoria, Australia, is an impact evaluation. The article, however, claims to be both an impact and an outcome evaluation. Based on the descriptions above, it may not be accurate to call this an outcome evaluation (or at least not yet), as the long-term goal of obesity prevention is not actually mentioned in the goals/aims of the evaluation. The only goals mentioned relate to comparing the health environments and behaviors of “member-only” schools and “award” schools.

The aim of this program is to reduce the risk of childhood obesity by improving the socio-cultural, policy, and physical environments in childcare and educational settings. Membership in the K-GFYL program is open to all primary schools, preschools, and early childhood services across the state of Victoria. Once in the program, member schools and services are centrally supported to undertake the health promotion (intervention) activities, and when program 'criteria' are reached, the school/service is assessed and 'awarded.' The evaluation of the K-GFYL program aims to (Silva-Sanigorski, Prosser, Carpenter, Honisett, Gibbs, Moodie, Sheppard, Swinburn, & Waters, 2010):

  • Determine if K-GFYL award status is associated with more health promoting environments in schools/services compared to those who are members only.
  • Determine if children attending K-GFYL award schools/services have higher levels of healthy eating and physical activity-related behaviors compared to those who are members only.
  • Examine the barriers to implementing and achieving the K-GFYL award.
  • Determine the economic cost of implementing K-GFYL in primary schools.

Again, the evaluation claims to be both impact- and outcome-based; however, I do not feel this program evaluation currently evaluates outcomes. Per Figure 1, the evaluation claims that the impacts of the program will result in the following outcomes for the child: increased healthy weight, decreased obesity, and increased quality of life. However, the article did not describe any method to measure these outcomes or any intent to follow these children in the years to come. Therefore, I am not going to try to explain why this is an outcome evaluation. I will, however, explain why this is an impact evaluation: the researchers are assessing whether there is a difference in the “amount” of health-promoting programs between “award” schools and member-only schools, as well as whether the “award” schools show better (i.e., higher levels of) healthy behaviors than member-only schools. As described by DiClemente et al. and Hughes et al., an impact evaluation looks at the direct effects of the intervention with respect to the program's goals. This particular evaluation also examines the cost of implementing the program and the barriers to implementation, which, if addressed, would increase the impact (i.e., the difference between “award” schools and member-only schools).

In my opinion, the effectiveness of this program evaluation is good overall. However, there are some factors that limit its effectiveness. For example:

  • Defining the research population
      ◦ To ensure consistency across the sample and simplify the study design, sampling will be restricted to government schools.
      ◦ 80 primary schools will be sampled: 30 “member-only” schools, 30 “newly awarded” schools, and 20 “long-time awarded” schools.
      ◦ 50 preschools and early childhood centers will be sampled: 20 “member-only” and 30 “award” schools.
      ◦ Approximately 20-30 students will be sampled per school.
  • Identifying stakeholders and collaborators
      ◦ All preschools, primary schools, and early childhood services across the state of Victoria, Australia, are eligible to participate.
      ◦ Schools and services are supported in their implementation of the program by a state-wide coordination team and a local government coordinator.
      ◦ Parents and caregivers will be asked for information via surveys.
  • Defining the evaluation objective
      ◦ The aims/goals of the evaluation are listed above. These goals are both precise and specific, as they compare two specific statuses: “award” schools and “member-only” schools.
  • Selecting a research design that meets the evaluation objective (according to Silva-Sanigorski et al., 2010):
      ◦ When determining the appropriate evaluation design, the research team considered a number of contextual and limiting factors. These included:
          - the implementation of the intervention program across the entire state and the current high-level recruitment drive
          - the funding of this evaluation more than 12 months after state-wide implementation began
          - the requirement for the evaluation data to be available in a short time frame (all data were to be collected within one school term of 12 weeks)
          - the limited evaluation funding for this large-scale impact and outcome evaluation in all of the settings targeted by the intervention
      ◦ Taking these factors and the study aims into consideration, a mixed-method, cross-sectional evaluation design was deemed the most appropriate to determine differences in impact and outcome measures between those settings/services that were K-GFYL members only and those that were K-GFYL awarded.
      ◦ In the school setting, there will be two groups of awarded schools: newly awarded (<12 months) and longer-term awarded (≥12 months), to enable an examination of the sustainability of program impacts after the award is conferred.
  • Selecting variables for measurement
      ◦ The nature of the setting, data collection costs, and accessibility to the setting/service and participants will vary from setting to setting.
      ◦ The School Environment Questionnaire, Child Healthcare Questionnaire, Economic Resource Questionnaire, a child lunch box survey, and environmental questionnaires will be used to assess effectiveness.
      ◦ All materials will be filled out by an adult, but demographic factors, willingness to fill out a survey/questionnaire, attitudes about the intervention (parents, staff, child), and/or educational levels can affect results.
  • Selecting the sampling procedure
      ◦ A random sample of schools will be drawn from each stratum.
      ◦ The samples taken should be representative of the whole in order to preserve external validity.
      ◦ For example, a set of open-ended questions will be given to preschool parents (from 5-7 participating kindergartens) to examine their experiences with the K-GFYL intervention.
      ◦ This sample will be purposively drawn from the participating kindergartens to include member and awarded kindergartens from socio-economically disadvantaged areas and, where possible, from at least one area with a high culturally and linguistically diverse population.
  • Implementing the research plan
      ◦ The evaluation describes how samples will be chosen, how data will be analyzed, and how assessments will be administered, and it notes that limited funding and time do not allow for a control group.
      ◦ The specific data analysis can be found in the paper.
      ◦ The plan is designed to preserve internal validity.
  • Analyzing the data
      ◦ The questionnaires/surveys have distinct answer choices (i.e., never, sometimes, often, always), which allows for consistent scoring.
      ◦ The article does not explain how the data will be cleaned, but it does say that “cleaning” will occur before analysis (see the data analysis section of the article).
  • Communicating the findings
      ◦ The program does not define how it will communicate its findings, but it concludes that the results will inform future public health policies and the development and implementation of health promotion programs to prevent childhood obesity at a population level.
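The stratified sampling plan described above (fixed targets of 30/30/20 primary schools across the three award-status strata) can be sketched in code. This is only an illustration of the sampling logic, not part of the published protocol; the sampling frame, school names, and the `stratified_sample` helper are all hypothetical.

```python
import random

# Target sample sizes per stratum, as stated in the evaluation design.
# (Names are illustrative; the actual sampling frame is not public.)
PRIMARY_TARGETS = {"member_only": 30, "newly_awarded": 30, "long_term_awarded": 20}

def stratified_sample(frame, targets, seed=None):
    """Draw a simple random sample of the target size from each stratum.

    frame   -- dict mapping stratum name -> list of eligible schools
    targets -- dict mapping stratum name -> number of schools to draw
    """
    rng = random.Random(seed)
    sample = {}
    for stratum, n in targets.items():
        eligible = frame[stratum]
        if n > len(eligible):
            raise ValueError(f"Stratum '{stratum}' has only {len(eligible)} eligible schools")
        sample[stratum] = rng.sample(eligible, n)
    return sample

# Example with a made-up sampling frame of 60 eligible schools per stratum
frame = {s: [f"{s}_school_{i}" for i in range(60)] for s in PRIMARY_TARGETS}
drawn = stratified_sample(frame, PRIMARY_TARGETS, seed=1)
print({s: len(v) for s, v in drawn.items()})  # each stratum hits its target size
```

Sampling within strata like this guarantees the planned balance between member-only, newly awarded, and long-term awarded schools, which a single random draw from the pooled list would not.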

As I stated previously, I do believe this evaluation will be effective. However, as one can tell when looking at the nine steps to effective evaluation, there are areas that could be improved to strengthen data quality.

DiClemente, R. J., Salazar, L. F., & Crosby, R. A. (2013). Health behavior theory for public health. Burlington, MA: Jones and Bartlett Learning.

Hughes, R., Black, C., & Kennedy, N. P. (2008). Public health nutrition intervention management: Impact and outcome evaluation. JobNut Project, Trinity College Dublin. Retrieved from

Silva-Sanigorski, A., Prosser, L., Carpenter, L., Honisett, S., Gibbs, L., Moodie, M., ... Waters, E. (2010). Evaluation of the childhood obesity prevention program Kids -- 'Go for your life'. BMC Public Health, 10, 288-295. Retrieved from