Lying With Statistics Assignment

RSCH 6110

Guidelines for the Class Presentation and Paper on Critiquing a Use or Misuse of Statistics

Dr. Richard Lambert

UNC Charlotte

3135 Colvard

704-547-3735

email:

website: http://education.uncc.edu/rglamber

“There are three kinds of lies: lies, damned lies, and statistics.”

Benjamin Disraeli

“You can weigh manure with a jeweler's scale,

You can slice bologna with a laser,

But in the end

All you have is manure and bologna.”

Find an example of a use of statistics in the popular or professional press. Ideally, find an example that includes some graphical presentation of data. Try to make the example relevant to your field. Feel free to select an example from a journal or trade publication in your discipline. If you are having trouble finding something to critique, you may use one of the “stat. boxes” from the front page of USA Today, just like the ones we have been critiquing in class. They are archived on the web.

Simply click on the “stat. box” at the bottom of the front page and you will be linked to the archive. Please check with me at least one week in advance of your presentation date to verify that the example you have selected is appropriate. Please prepare a 10-minute presentation to the class that addresses the seven questions listed below, as well as any other deceptive or misleading aspects of the example. You will need to project your example on the screen for the whole class to see. If you need help scanning it, see me before class and we can convert it to an electronic format that can be displayed on the projection panel. Your paper should be 3-5 pages in length and should also include your answers to the questions listed below.

Questions to Ask as You Critique an Example of the Use of Statistics

in the Popular or Professional Press

  1. What statistics were used?
  2. What graphical presentation techniques were used?
  3. Were the statistics and graphical presentation techniques appropriate given the nature of the data represented?
  4. Toward what conclusions are the presenters trying to point the reader? Are any causal inferences being made?
  5. Are these conclusions or inferences really justifiable?
  6. What additional information would you want to have in order to properly interpret the data? Would you run any additional statistical procedures to enhance interpretation?
  7. What qualifications or limitations do you think should be added to the presentation of results?

Use the following pages to take notes as we critique examples from USA Today throughout the semester.

Some Issues to Consider when Evaluating the Use of Statistics in the Popular or Professional Press

Does the study have defined objectives?

Does the study have well-defined outcome measures or variables?

Were the outcomes measured well, with minimal measurement error?

Have appropriate adjustments been made to the statistics presented to enhance interpretation?

For example (a worked sketch follows this list):

Rates to adjust for unequal time periods

Rates to adjust for unequal population or sample sizes

Rates or Percentages rather than raw counts or totals

Fixed and clearly explained time periods of equal length

Statistical adjustment for other relevant factors
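
Each of those adjustments is simple arithmetic, but skipping them can reverse the story. Here is a minimal Python sketch, using invented numbers, of how a raw-count comparison flips once counts are converted to rates per year and per 100,000 people:

```python
# Hypothetical illustration: raw counts vs. adjusted rates.
# All numbers are invented for this example.

# Fatality counts for two states, over different time periods.
state_a = {"fatalities": 1200, "population": 8_000_000, "years": 2}
state_b = {"fatalities": 900, "population": 3_000_000, "years": 1}

for name, s in [("State A", state_a), ("State B", state_b)]:
    # Adjust for unequal time periods: events per year.
    per_year = s["fatalities"] / s["years"]
    # Adjust for unequal population sizes: rate per 100,000 people per year.
    rate = per_year / s["population"] * 100_000
    print(f"{name}: {s['fatalities']} total, "
          f"{per_year:.0f} per year, {rate:.1f} per 100,000 per year")

# State A has the larger raw count (1200 vs. 900), but after adjustment
# State B's rate (30.0) is four times State A's (7.5).
```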

Can you judge the precision of the results presented? Are you given information about sampling error or confidence intervals?
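
When a margin of error is not reported, a reader can often reconstruct a rough one. Here is a minimal Python sketch, assuming a simple random sample and using the standard normal approximation for a proportion; the poll numbers are invented:

```python
import math

# Hypothetical poll: 540 of 1,000 respondents favor a proposal.
n = 1000
p_hat = 540 / n

# Standard error of a sample proportion under simple random sampling.
se = math.sqrt(p_hat * (1 - p_hat) / n)

# 95% confidence interval (z = 1.96).
margin = 1.96 * se
low, high = p_hat - margin, p_hat + margin

print(f"Estimate: {p_hat:.1%}, margin of error: +/-{margin:.1%}")
print(f"95% CI: ({low:.1%}, {high:.1%})")
# A headline reading "54% favor" hides roughly +/-3 points of sampling error.
```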

Was there a defined target population for the purpose of making conclusions or inferences?

Can you judge whether a representative sample was used?

What about the influence of the wording of survey questions?

What about the possible influence of Response Bias?

What about the possible influence of Non-Response Bias?

Are there Order Effects? That is, does something about the order in which the questions were asked present a possible alternative explanation for the results?

Sponsorship / Funding / Motivation Issues

Incomplete Information. What is not provided to the reader that would enhance interpretation?

Can you check the match between the study’s defined goals and what was actually done? Or are you really looking at serendipitous findings?

Consider all the issues related to the graphical presentation of data from the handout that we covered in class. Have any of these guidelines been violated?
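
One of the most common graphical violations is easy to demonstrate yourself. Here is a minimal matplotlib sketch, with invented sales figures, showing how a truncated y-axis makes a roughly 2% change look dramatic:

```python
import matplotlib.pyplot as plt

# Hypothetical data: about a 2% change across three years.
years = ["2020", "2021", "2022"]
sales = [98.0, 99.0, 100.0]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(years, sales)
ax1.set_ylim(97, 100.5)   # truncated axis: the change looks dramatic
ax1.set_title("Truncated axis")

ax2.bar(years, sales)
ax2.set_ylim(0, 110)      # axis starting at zero: the change looks modest
ax2.set_title("Zero-based axis")

plt.tight_layout()
plt.show()
```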

Is premature peeking at the data, or a lack of blinding, influencing the results?

Are the writers implying causality from correlational or observational data?

If the study presented itself as an experimental study, did it conform to the characteristics of a Randomized Clinical Trial according to the guidelines from the handout that we covered in class?

If the study was quasi-experimental in nature, remember to consider Cook & Campbell’s Threats to the Internal, External, Construct, and Statistical Conclusion Validity of a study from the handout we covered in class. For example:

Attrition / Differential Attrition

Pre-Existing Differences Between Groups / Non-Equivalence of Groups

History

Maturation

Practice Effect

Hawthorne Effect

Implementation Effects

Experimenter Expectancies, etc.

Have the writers implied that statistical significance directly translates into Practical or Clinical Significance?

Remember that the best a statistical significance test can do for you is tell you that the observed results are unlikely to have arisen from sampling error alone, assuming that the null hypothesis, as you have specified it, is exactly true. It can never tell you that you have an important result.
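
To make that concrete, here is a minimal Python sketch with invented numbers: with half a million subjects per group, a trivial 0.1-point difference in mean test scores is highly statistically significant, yet the effect size is negligible.

```python
import math

# Hypothetical illustration of statistical vs. practical significance.
n1 = n2 = 500_000            # very large groups
mean1, mean2 = 100.0, 100.1  # a 0.1-point difference in mean scores
sd = 15.0                    # common standard deviation

# Two-sample z test for the difference in means.
se = sd * math.sqrt(1 / n1 + 1 / n2)
z = (mean2 - mean1) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Cohen's d: the difference in standard-deviation units.
d = (mean2 - mean1) / sd

print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # p is well below .05
print(f"Cohen's d = {d:.4f}")                 # about 0.007: negligible
```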

Remember the questions a p value cannot answer:

Is this finding important?

Was a statistical significance test necessary at all?

Was the right test used?

Were the assumptions of the test met?

What is the effect size?

What about cost benefit or cost effectiveness issues related to the treatment?

What are the side effects of the treatment?

What confounding variables were present in the study?

Things project officers say about Federal program evaluations that the public never gets to hear.

“The treatment really wasn’t completely implemented as designed.”

“All the subjects didn’t really get the same treatment.”

“They didn’t really do anything meaningful with their money over in that part of the country.”

“The contractor started collecting the data too late in the game.”

“Nobody told the evaluator about that issue.”

“We really don’t know how to measure the effects we’re looking for.”

“No one will ever get a grant to go back and analyze the data the right way.”

“We can always explain away findings that don’t show the program in a positive light.”