Independent-samples t test

The independent-samples t test is used when you want to compare two independent groups. Unlike simply comparing subgroups with the split-file command explained in chapter 5, the independent-samples t test compares the means of the two groups while taking error into account.

The t statistic is calculated as follows:

t = (M1 − M2) / SE(M1 − M2)

where M1 and M2 are the two group means and SE(M1 − M2) is the standard error of the difference between the means.

Another way to say this is:

t = Between-group differences / Within-group differences (or error)
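To make the arithmetic concrete, here is a minimal Python sketch (an illustration, not SPSS) that computes t by hand using the pooled-variance formula SPSS applies when equal variances are assumed. The scores are hypothetical, not data from the study below.

```python
import numpy as np

# Hypothetical scores for two independent groups (not the study's data).
group1 = np.array([18, 21, 20, 22, 19])
group2 = np.array([24, 22, 25, 23, 26])

n1, n2 = len(group1), len(group2)
m1, m2 = group1.mean(), group2.mean()
v1, v2 = group1.var(ddof=1), group2.var(ddof=1)   # sample variances

# Pooled variance, then the standard error of the difference between means.
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
se_diff = np.sqrt(sp2 * (1 / n1 + 1 / n2))

t = (m1 - m2) / se_diff   # between-group difference divided by error
print(f"t({n1 + n2 - 2}) = {t:.2f}")
```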

SPSS will calculate t for you, and will also provide you with the degrees of freedom (df), which is an index of sample size essential in statistical significance testing.

To compute df for an independent-samples t test:

df = Number of participants – number of groups; OR df = N - 2

In the example below, N = 20 participants in two groups, so df = 20 - 2 = 18.

Critical Assumptions of the independent-samples t test:

1) Groups are independent

2) IV (or grouping variable) is nominal & dichotomous

3) DV (or outcome variable) is interval or ratio

Other, less critical, assumptions:

1) DV is normally distributed. (But you can still run a t test if this assumption is violated.)

2) The variability (or SD) in the two groups is about equal. (Levene’s test will tell you if this assumption is violated; if it is, you simply need to report a different t and df.)

A Simple Experiment Example:

A nonprofit agency has an annual fundraiser in which they sell candy bars. The fundraising chair wonders if attire will impact sales. She randomly assigns the 20 men selling candy bars to one of two groups. One group is told to wear jeans and a t-shirt, the other group is told to wear khakis, a dress-shirt, and a tie.

The Independent Variable is attire (jeans vs. khakis).

The Dependent Variable is number of candy bars sold.

Null hypothesis: Attire will not affect the number of candy bars sold.

Alternative hypothesis (directional): The men wearing khakis and ties will sell more candy bars than the men wearing jeans.

Data Entry:

You need at least two variables to run an independent-samples t test. One variable is the grouping variable (or IV if you have an experiment); the other is the test variable (or DV).

For the simple experiment example, we have a variable called “group” that is coded as 0 = jeans and 1 = khakis. We also have the dependent variable, called “candy,” which is the number of candy bars sold.

In the variable view, you would see the two variables: group (with value labels 0 = jeans and 1 = khakis) and candy. In the data view, each of the 20 participants has his own row, containing a group code (0 or 1) and the number of candy bars he sold.
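If you want to mirror this layout outside SPSS, here is a minimal pandas sketch of the same two-column structure. The candy values are illustrative placeholders, not the study’s actual data.

```python
import pandas as pd

# One row per participant: group (0 = jeans, 1 = khakis) and candy (bars sold).
# The values below are illustrative placeholders, not the study's actual data.
data = pd.DataFrame({
    "group": [0, 0, 0, 1, 1, 1],
    "candy": [20, 22, 19, 24, 23, 25],
})
print(data)
```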

Conducting an independent-samples t test

On the SPSS menu bar, click

Analyze → Compare Means → Independent-Samples T Test

In the dialog box, move candy into the Test Variable(s) box and group into the Grouping Variable box, then click Define Groups. The Define Groups box will open; enter the codes for the two groups (0 and 1 in our example), click Continue, and then OK.

SPSS OUTPUT

The output includes two tables: Group Statistics, which reports the n, mean, SD, and standard error for each group, and Independent Samples Test, which reports Levene’s test alongside two rows of t test results (equal variances assumed and equal variances not assumed).

*What is Levene’s test?

Levene’s test checks the assumption that the groups have equal variances; in other words, that one group doesn’t have a much higher variance (or SD) than the other. You want Levene’s test to be nonsignificant (ns), because that allows you to use the more powerful test. If Levene’s test is significant, you have to use a less powerful test with fewer df, which may impact your ability to find a statistically significant difference between the two groups.

Remember that interpreting the independent-samples t test is a two-step process.

Step 1: Look at Levene’s test to tell you which row to use. The p value of Levene’s test compares the variance in each group, NOT the mean.

Step 2: Look at the t test results in the appropriate row. The p value of the t test tells you whether or not there is a statistically significant difference between the means of the two groups. In other words, the t test is what tests your hypothesis.
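Outside SPSS, the same two-step logic can be sketched with SciPy (an illustration under assumed, hypothetical data; SPSS does all of this in one dialog): run Levene’s test first, then read the pooled (equal-variances) or Welch (unequal-variances) t test accordingly.

```python
import numpy as np
from scipy import stats

jeans  = np.array([20, 22, 19, 21, 18, 23, 20, 21, 22, 19])   # hypothetical scores
khakis = np.array([24, 23, 26, 22, 25, 24, 23, 25, 21, 26])   # hypothetical scores

# Step 1: Levene's test compares the groups' variances, not their means.
# center="mean" matches the classic Levene statistic that SPSS reports.
lev_stat, lev_p = stats.levene(jeans, khakis, center="mean")

# Step 2: read the appropriate t-test "row". If Levene's is nonsignificant,
# assume equal variances (pooled t); otherwise use Welch's t.
equal_var = lev_p > .05
t_stat, t_p = stats.ttest_ind(jeans, khakis, equal_var=equal_var)

print(f"Levene's p = {lev_p:.3f} -> equal variances assumed: {equal_var}")
print(f"t = {t_stat:.2f}, p = {t_p:.3f}")
```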

Effect Size

One drawback of statistical significance testing is that it doesn’t tell you the magnitude of an effect. That information is provided by an effect size. For a t test, an effect size gives you a way of describing how strong the difference between the two groups is. Another advantage is that effect sizes can be compared across different studies.

Keep in mind that some researchers prefer to only report the effect size for a t test if the results are statistically significant. Check with your professor to see what he or she prefers.

A correlation coefficient is an effect size. A t test involves a nominal variable and an interval/ratio variable, so we would use a point-biserial correlation (rpb).

If you square a correlation, you get the proportion of variance accounted for. The rpb² tells you the percentage of variance in the interval/ratio variable (e.g., the DV) that can be accounted for by the nominal or grouping variable (e.g., the IV).

Use the following equation to convert your t statistic to a squared correlation coefficient:

rpb² = t² / (t² + df)

Using our example:

rpb² = (-2.26)² / ((-2.26)² + 18) = 5.11 / (5.11 + 18) = .22
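The same conversion is a one-liner in Python; a small sketch using the t and df reported in the output above:

```python
def r_squared_from_t(t, df):
    """Convert a t statistic to a squared point-biserial correlation."""
    return t**2 / (t**2 + df)

print(round(r_squared_from_t(-2.26, 18), 2))  # 0.22
```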

Alternatively, you can run a bivariate correlation in SPSS just as you would run a Pearson r.

                              group    candy
group   Pearson Correlation   1.000    .471*
        Sig. (2-tailed)                .036
        N                     20       20
candy   Pearson Correlation   .471*    1.000
        Sig. (2-tailed)       .036
        N                     20       20
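SciPy can compute the point-biserial correlation directly as well; a minimal sketch with hypothetical group codes and sales figures:

```python
from scipy import stats

group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]            # hypothetical group codes
candy = [20, 22, 19, 21, 18, 24, 23, 26, 22, 25]  # hypothetical sales

r_pb, p = stats.pointbiserialr(group, candy)
print(f"r_pb = {r_pb:.3f}, r_pb squared = {r_pb**2:.2f}, p = {p:.3f}")
```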

Writing up Results

Include the following:

The type of analysis you conducted and the variables examined.

A comparison of the mean and SD for each group

  • Here are a couple examples from the candy sales study:
  • The khakis group (M = 23.20, SD = 2.85) sold more candy bars than the jeans group (M = 20.80, SD = 1.75).
  • The men who wore jeans sold an average of 20.8 candy bars (SD = 1.75) whereas the men in khakis sold an average of 23.2 candy bars (SD = 2.85).

The t value, df, p value, and proportion of variance accounted for.

  • Examples:
  • t(18) = -2.26, p < .05. Twenty-two percent of the variance in sales was accounted for by attire.
  • t(18) = -2.26, p = .036, rpb² = .22.

A statement about whether or not the difference between the two groups was statistically significant. You can simply add a word or two to one of the above statements.

  • Examples:
  • The khakis group (M = 23.20, SD = 2.85) sold significantly more candy bars than the jeans group (M = 20.80, SD = 1.75).
  • The difference between the groups was statistically significant, t(18) = -2.26, p = .036, rpb² = .22.

If Levene’s test was not significant, you don’t need to mention it at all. If Levene’s was significant, explain that you had to use a more stringent test because Levene’s was significant (or because the assumption of equal variances was violated).