Educational Research

250:205

Writing Chapter 3 – Part III

Chapters 10-12: Statistical Analysis

  1. A Systematic Process for Writing the Methods Chapter:
  2. Subjects.
  3. Instrumentation.
  4. Procedures.
  5. Experimental Design.
  6. Statistical Analysis.
  7. Introduction:
  8. Methods of Reporting Information:
  9. Words: Qualitative studies.
  10. Figures or tables: Qualitative or quantitative studies.
  11. Numbers: Quantitative studies.
  12. Methods of analyzing quantitative data:
  13. Descriptive statistics.
  14. Inferential statistics.
  15. Correlational statistics.
  16. Statistics are scores or indices taken from a sample that predict/estimate parameters of a population.
  17. Methods of Displaying Data:
  18. Displaying Categorical Data:
  19. Frequency tables: Table 10.8.
  1. Bar graphs and pie charts: Figures 10.23, 10.24.
  1. Cross-break tables: Tables 10.9 – 10.16.
  1. Examples: Table 10.9.
  1. Example: Table 10.10.
  1. Example: Table 10.11.
  1. Examples: Tables 10.12-10.14.
  1. Displaying Ordinal, Interval and Ratio Data:
  2. Frequency polygons:
  1. Histograms:
  1. Stem-leaf plots:
  1. Scatterplots:
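
A minimal sketch (not from the original notes, using made-up data) of two of the displays named above, a histogram and a scatterplot, drawn with matplotlib:

```python
# Hypothetical test-score data plotted as a histogram and a scatterplot.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=75, scale=10, size=200)   # hypothetical test scores
hours = rng.uniform(0, 10, size=200)              # hypothetical hours studied

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(scores, bins=15)                          # histogram of one variable
ax1.set(title="Histogram", xlabel="Score", ylabel="Frequency")
ax2.scatter(hours, scores)                         # scatterplot of two variables
ax2.set(title="Scatterplot", xlabel="Hours studied", ylabel="Score")
plt.tight_layout()
plt.show()
```
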
  1. Descriptive Statistics:
  2. Measures of Central Tendency:
  3. Mode:
  1. Median:
  1. Mean:
  1. Note: In a perfectly normal distribution, the mode = median = mean.
  1. Measures of Variability (Spread):
  2. Frequency:
  1. Range:
  1. Variance (s²):
  1. Standard deviation:
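
A minimal sketch (hypothetical scores, not from the notes) computing the descriptive statistics listed above with Python's standard library:

```python
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]       # hypothetical raw scores

mode = statistics.mode(scores)               # most frequent score
median = statistics.median(scores)           # middle score
mean = statistics.mean(scores)               # arithmetic average
score_range = max(scores) - min(scores)      # range (spread)
variance = statistics.variance(scores)       # sample variance (s²)
sd = statistics.stdev(scores)                # sample standard deviation

print(mode, median, mean, score_range, variance, sd)
```
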
  1. Derived Scores:
  2. Derived scores convert raw scores onto a common scale so that they indicate how an individual compares to other individuals in a group.
  3. Several types:
  4. Age and grade-level equivalent:
  5. Percentile ranks:
  6. Standard scores:
  7. Particularly useful when creating standardized tests, where raw scores are converted into standard scores. The most common standard scores in education are the z-score and the T-score.
  8. Z-score:
  1. T-score:
  1. Advantages of standard scores:
  2. Comparison of performances between different tests.
  • Prediction and probability:

Probability: A percentage, stated in decimal form, that refers to the likelihood of an event occurring.

In a normal distribution, the 68/95/99.7 rule applies (Figure 10.12):
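
A minimal sketch (hypothetical numbers) converting a raw score to the two standard scores named above and using the normal curve to obtain probabilities:

```python
from scipy.stats import norm

mean, sd, raw = 100, 15, 115          # hypothetical test mean, SD, and raw score

z = (raw - mean) / sd                 # z-score: distance from the mean in SD units
t = 50 + 10 * z                       # T-score: scale with mean 50 and SD 10

# 68/95/99.7 rule: proportion of scores within 1, 2, and 3 SDs of the mean.
for k in (1, 2, 3):
    print(k, norm.cdf(k) - norm.cdf(-k))   # ~0.68, ~0.95, ~0.997

# Probability (in decimal form) of scoring below the raw score.
print(z, t, norm.cdf(z))
```
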

  1. Inferential Statistics:
  2. Terminology:
  3. Inferential statistics:
  1. Sampling error:
  1. Sampling distribution:
  1. Example: Assume a population of 1, 2 and 3 (population mean = 2).
  1. Standard Error of the Mean (SEM):
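
A minimal sketch of the example above, assuming samples of size 2 drawn with replacement from the population {1, 2, 3}; it lists the sampling distribution of the mean and shows that its standard deviation equals the SEM (σ/√n):

```python
from itertools import product
import math
import statistics

population = [1, 2, 3]
sample_means = [statistics.mean(s) for s in product(population, repeat=2)]

print(sorted(sample_means))                # the sampling distribution of the mean
print(statistics.mean(sample_means))       # equals the population mean (2)

sigma = statistics.pstdev(population)      # population standard deviation
print(sigma / math.sqrt(2))                # theoretical SEM for n = 2
print(statistics.pstdev(sample_means))     # SD of the sampling distribution = SEM
```
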
  1. Applications of Inferential Statistics:
  2. Prediction of the mean of a single population → Confidence intervals.
  3. Comparison of means → Hypothesis testing.
  4. Prediction of relationships → Correlations.
  1. Application of Inferential Statistics: Prediction of Population Means → Confidence Intervals:
  2. A confidence interval is a range of values within which the population mean is estimated to lie.
  3. Recall the 68/95/99.7 rule and the estimations:
  1. The exact values for the 95% and 99% confidence intervals are as follows:
  1. 95% CI = mean ± 1.96 × SEM; 99% CI = mean ± 2.58 × SEM
  2. Example: A researcher wants to predict the average math achievement score of 6th-grade female students in Black Hawk County. The researcher randomly samples 101 female 6th-grade students from across the county and determines the following:
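
A minimal sketch of the confidence-interval calculation for this example; because the slide's actual sample statistics are not shown here, the sample mean and standard deviation below are hypothetical placeholders:

```python
import math

n = 101
sample_mean = 80.0                   # hypothetical sample mean
sample_sd = 12.0                     # hypothetical sample standard deviation

sem = sample_sd / math.sqrt(n)       # standard error of the mean

ci95 = (sample_mean - 1.96 * sem, sample_mean + 1.96 * sem)
ci99 = (sample_mean - 2.58 * sem, sample_mean + 2.58 * sem)
print(ci95, ci99)
```
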
  1. Application of Inferential Statistics: Comparison of Means → Hypothesis Testing.
  2. Introduction:
  1. Basic types of comparisons:
  2. Compare a mean to a value:
  3. Example: In the 1980s, the mean math achievement score of 3rd graders in Iowa was 75 (reference, 1980). As a researcher, you suspect that, for whatever reason, the math achievement of today's 3rd graders in Iowa is less than 75.
  1. Compare the means between two different independent samples:
  2. Example: You are curious whether 3rd graders in Black Hawk County have greater or lesser math achievement than 3rd graders in another county.
  1. Compare the means between two different dependent/related measurements within the same sample (pre-post set-up/paired-samples):
  2. Example: You are curious whether a specific curriculum would increase math achievement in 3rd graders in Black Hawk County.
  1. Compare the means between three or more different independent samples:
  1. Compare the means between three or more different dependent/related measurements within the same sample (repeated measures):
  1. The Process:
  2. State the null hypothesis:
  3. Null hypothesis: The null hypothesis is stated as “no difference” or “no effect.”
  1. It can be written out or it can be written as a mathematical statement.
  2. The statistical test will lead you either to reject or to retain (fail to reject) the null hypothesis.
  3. Examples:
  4. Compare a mean to a value:
  • Compare the means between two different independent samples:
  • Compare the means between two different dependent/related measurements within the same sample (pre-post set-up):
  • Compare the means between three or more different independent samples:
  • Compare the means between three or more different dependent/related measurements within the same sample (repeated measures):
  1. Select alpha: Alpha (α) is the significance level, the acceptable probability that the results of your study are due to chance.
  1. Select the appropriate statistic:
  2. Parametric Statistics: Used when the data are normally distributed and interval/ratio:
  3. Compare sample to known value:

Single sample t-test.
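
A minimal sketch (hypothetical scores) of the single-sample t-test above, comparing a sample mean to the known value of 75 with scipy:

```python
from scipy.stats import ttest_1samp

scores = [68, 72, 74, 70, 76, 69, 73, 71, 75, 70]   # hypothetical 3rd-grade scores

t_stat, p_value = ttest_1samp(scores, popmean=75)
print(t_stat, p_value)      # reject H0 if p_value < alpha (e.g., 0.05)
```
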

  • Compare two samples from independent populations:

Independent t-test or t-test.

Design: Static-Group Comparison Design (a single-shot case study plus an untreated comparison group):

X O

 O

Static-Group Pretest-Posttest Design:

O X O

O    O
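
A minimal sketch (hypothetical data) of the independent t-test above, comparing two independent groups such as two counties:

```python
from scipy.stats import ttest_ind

county_a = [78, 82, 75, 80, 85, 79, 81]   # hypothetical scores, county A
county_b = [72, 74, 70, 76, 73, 71, 75]   # hypothetical scores, county B

t_stat, p_value = ttest_ind(county_a, county_b)
print(t_stat, p_value)
```
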

  • Compare two dependent/related samples:

Paired samples t-test or repeated measures t-test or dependent t-test.

Design: One-Group Pretest-Posttest Design:

O X O

Matched Design:

M O X O

M O    O
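
A minimal sketch (hypothetical pre/post scores) of the paired-samples (dependent) t-test above for a one-group pretest-posttest design:

```python
from scipy.stats import ttest_rel

pre  = [70, 68, 75, 72, 74, 69, 71]   # hypothetical pretest scores
post = [74, 71, 79, 75, 78, 72, 76]   # hypothetical posttest scores (same students)

t_stat, p_value = ttest_rel(pre, post)
print(t_stat, p_value)
```
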

  • Compare three or more samples from independent populations:

ANOVA.

Designs:
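
A minimal sketch (hypothetical data) of the one-way ANOVA named above, comparing three independent groups with scipy:

```python
from scipy.stats import f_oneway

group1 = [70, 72, 68, 75, 71]   # hypothetical scores, group 1
group2 = [78, 80, 76, 82, 79]   # hypothetical scores, group 2
group3 = [74, 73, 77, 75, 76]   # hypothetical scores, group 3

f_stat, p_value = f_oneway(group1, group2, group3)
print(f_stat, p_value)
```
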

  • Compare three or more dependent/related samples:

Repeated measures ANOVA.

Designs:
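
A minimal sketch (hypothetical data; the variable names are assumptions, not from the notes) of a repeated-measures ANOVA using statsmodels' AnovaRM:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: each student measured at three time points.
data = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["pre", "post1", "post2"] * 4,
    "score":   [70, 74, 78, 68, 71, 75, 72, 76, 79, 69, 73, 77],
})

result = AnovaRM(data, depvar="score", subject="student", within=["time"]).fit()
print(result)
```
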

  1. Nonparametric Statistics: Used if the data are non-normal or not interval/ratio (a combined code sketch follows the list of tests below).
  2. Categorical data:

Chi-square.

  • Compare two samples from independent populations:

Mann-Whitney U.

  • Compare two dependent/related samples:

Wilcoxon.

  • Compare three or more samples from independent populations:

Kruskal-Wallis.

  • Compare three or more dependent/related samples:

Friedman.
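
A minimal sketch (hypothetical data) of the nonparametric tests listed above, using scipy; each call is the nonparametric counterpart of a parametric test:

```python
from scipy.stats import (chi2_contingency, mannwhitneyu, wilcoxon,
                         kruskal, friedmanchisquare)

# Chi-square: categorical data in a contingency table (e.g., group x pass/fail).
chi2, p_chi, dof, expected = chi2_contingency([[20, 10], [15, 25]])

# Mann-Whitney U: two independent samples (counterpart of the independent t-test).
u_stat, p_u = mannwhitneyu([78, 82, 75, 80, 85], [72, 74, 70, 76, 73])

# Wilcoxon: two dependent/related samples (counterpart of the paired t-test).
w_stat, p_w = wilcoxon([70, 68, 75, 72, 74], [71, 71, 79, 78, 82])

# Kruskal-Wallis: three or more independent samples (counterpart of the ANOVA).
h_stat, p_h = kruskal([70, 72, 68], [78, 80, 76], [74, 73, 77])

# Friedman: three or more related samples (counterpart of repeated-measures ANOVA).
f_stat, p_f = friedmanchisquare([70, 68, 75, 72], [74, 71, 79, 76], [75, 72, 80, 77])

print(p_chi, p_u, p_w, p_h, p_f)
```
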

  1. Interpret the p-value(s):
  2. The statistical output associated with each of these statistical tests is the p-value:
  3. p-value: The probability of obtaining results at least as extreme as those observed if the null hypothesis is true; rejecting the null hypothesis based on a small p-value therefore limits the risk of a type I error.
  4. Bottom line:
  5. If p-value < alpha → reject the null hypothesis.
  6. If p-value ≥ alpha → retain (fail to reject) the null hypothesis.
  1. Post-hoc analysis:
  2. Only used when there are three or more samples being compared.
  3. The first null hypothesis tests the overall experiment. That is, is there an effect somewhere?
  1. Example: Ho: pre = post1 = post2, p = 0.02.
  • Post hoc analysis:

Ho: pre = post1, p = 0.03

Ho: pre = post2, p = 0.74

Ho: post1 = post2, p = 0.04
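
A minimal sketch (hypothetical data) of the post-hoc step above: after a significant overall result, run the pairwise comparisons as paired t-tests. The Bonferroni correction shown here is an assumed choice, not specified in the notes:

```python
from itertools import combinations
from scipy.stats import ttest_rel

measurements = {
    "pre":   [70, 68, 75, 72, 74],
    "post1": [74, 71, 79, 76, 78],
    "post2": [75, 72, 80, 77, 79],
}

pairs = list(combinations(measurements, 2))
alpha = 0.05 / len(pairs)                     # Bonferroni-adjusted alpha
for a, b in pairs:
    t_stat, p = ttest_rel(measurements[a], measurements[b])
    print(f"H0: {a} = {b}: p = {p:.3f}, reject = {p < alpha}")
```
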

  1. More statistical variations:
  2. Measure the effect of multiple factors on the dependent variable (Factorial Design).
  3. Example:
  4. In the simple ANOVA, you compare multiple levels within one factor.

Factor = grade level with 4 levels (1st, 2nd, 3rd and 4th grade).

  • In the factorial ANOVA, you compare the effect of multiple factors on the dependent variable (reading enjoyment).
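
A minimal sketch (hypothetical data; the second factor, sex, is an assumption for illustration) of a factorial ANOVA using statsmodels, testing the effects of two factors on reading enjoyment:

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "grade":     ["1st", "1st", "2nd", "2nd", "3rd", "3rd", "4th", "4th"] * 3,
    "sex":       ["F", "M"] * 12,
    "enjoyment": [7, 6, 6, 5, 5, 5, 4, 4,
                  8, 6, 7, 5, 6, 4, 5, 3,
                  7, 5, 6, 6, 5, 4, 4, 3],
})

model = ols("enjoyment ~ C(grade) * C(sex)", data=data).fit()
print(anova_lm(model, typ=2))   # main effects of each factor plus their interaction
```
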
  1. Add multiple dependent variables (Multivariate Design):
  2. Examples:
  3. Compare 1st, 2nd, 3rd, 4th graders in regards to two dependent variables (reading enjoyment, reading frequency).
  1. Application of Inferential Statistics: Prediction of Relationships → Correlational Statistics:
  2. Introduction:
  3. Used when the researcher is interested in quantifying the relationship between two or more quantitative variables.
  4. Interpretation: The r value.
  1. Techniques:
  2. Pearson:
  1. Spearman rho:
  1. Linear regression:
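
A minimal sketch (hypothetical data) of the three correlational techniques listed above, using scipy:

```python
from scipy.stats import pearsonr, spearmanr, linregress

hours  = [1, 2, 3, 4, 5, 6, 7, 8]            # hypothetical hours of reading per week
scores = [52, 55, 61, 60, 68, 70, 74, 79]    # hypothetical achievement scores

r, p_r = pearsonr(hours, scores)             # Pearson r: linear relationship
rho, p_rho = spearmanr(hours, scores)        # Spearman rho: rank-order relationship
reg = linregress(hours, scores)              # simple linear regression (prediction)

print(r, rho, reg.slope, reg.intercept, reg.rvalue)
```
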