SURVEY – MID-TERM EXAM STUDY GUIDE

1)  Validity – What are the 3 types and how is it assessed?

a)  General definition: an evaluative judgment about the extent to which the measurement of constructs and their relationships is a relatively accurate portrayal of the constructs of interest.

b)  Three types:

i)  Content Validity: the extent to which a measure adequately collects data concerning the construct domain (i.e., does the measure have the content necessary to measure the construct of interest?)

(1)  Assessed by content validation: including Q-sort, content adequacy, or ANOVA approach to content validation (Hinkin & Tracey).

ii)  Construct Validity: the extent to which the measured construct relates to similar or related constructs (convergent) and does not relate, or relates negatively, to dissimilar or opposite constructs (discriminant).

(1)  Assessed by measuring relationships between measures of the target construct and measures of other constructs, especially via MTMM (multitrait-multimethod matrix) or SEM.

iii)  Criterion Validity: the extent to which the target construct can predict scores on a future measured construct.

(1)  Assessed by longitudinal studies
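The convergent/discriminant logic behind construct validity can be sketched with plain correlations. This is an illustrative sketch with made-up scores, not a full MTMM analysis:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scale scores for the same six respondents (fabricated data):
target     = [3.1, 4.0, 2.5, 3.8, 4.4, 2.9]   # target construct
similar    = [3.0, 4.2, 2.7, 3.5, 4.6, 3.1]   # theoretically related construct
dissimilar = [4.1, 2.0, 3.9, 2.2, 1.8, 3.6]   # theoretically opposite construct

print(round(pearson_r(target, similar), 2))     # convergent: expect strong positive
print(round(pearson_r(target, dissimilar), 2))  # discriminant: expect near-zero or negative
```

A high target-similar correlation and a low or negative target-dissimilar correlation is the pattern construct validation looks for.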

2)  What are the accepted (conventional) levels of reliability?

a)  For applied work, .80 is desired; for basic research, .70 is the usual norm.

3)  Reliability – What are the 4 main types and what error types are they concerned with measuring?

a)  Test-retest: designed to measure errors associated with the passage of time between administrations.

b)  Internal consistency: error associated with the items of a measure

c)  Interrater agreement: errors associated with different raters

d)  Parallel forms: errors associated with different forms of the same measure.
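Internal consistency is usually summarized with coefficient (Cronbach's) alpha, which can then be checked against the .70/.80 conventions from item 2. A minimal sketch with fabricated item data:

```python
def cronbach_alpha(items):
    """Coefficient alpha from a list of item-score columns (one list per item)."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def variance(xs):       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical 4-item scale, six respondents (made-up data):
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [4, 5, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))
print("meets .70 basic-research norm" if alpha >= .70 else "below .70")
```

Because the four items move together across respondents, alpha here comes out well above the .80 applied-work threshold.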

4)  What are the purposes of survey research?

a)  Description: describing the characteristics of a population; age, gender, % employed, % that agree/disagree, etc.

b)  Explanation: attempting to explain why the population (or sample) has the characteristics it does. Why do women receive lower pay for the same work level, educational status, etc.?

c)  Exploration: when you don’t understand something, use survey research to investigate what may be going on.

5)  Hinkin’s Procedures for Creating a Strong Survey:

a)  Stage I: ITEM GENERATION

i)  Literature review

ii)  define construct

iii)  generate items

iv)  assess adequacy of items: content analysis

(1)  Q-sort: no definitions are given; sorters generate their own categories for the items

(2)  content adequacy: definitions are given; raters indicate how well each item matches each definition; analyzed via EFA or an ANOVA approach

b)  Stage II: SCALE DEVELOPMENT:

i)  Sample: must be representative and large enough to accommodate intended analysis

(1)  Minimum sample sizes: EFA ≈ 150, CFA ≈ 200

(2)  Respondent-to-item ratio = 5:1 to 10:1.

ii)  Reverse-scored items: avoid; they can reduce reliability and create artifactual factors

iii)  Negatively worded items: avoid, for the same reasons

iv)  Scale size: between 3 and 7 items per scale is best

v)  Scaling: scale should provide sufficient variance

vi)  Reliability Assessment: internal consistency (coefficient alpha)

vii) Factor Structure: examine via factor analysis
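The Stage II sample guidance above can be expressed as a quick adequacy check. The function name and structure are illustrative; the minimums and the 5:1 ratio floor come from the notes above:

```python
def sample_size_ok(n_respondents, n_items, analysis="EFA"):
    """Rough Stage II check: minimum N (~150 EFA, ~200 CFA) and a 5:1 floor
    on the respondent-to-item ratio (5:1 to 10:1 is the recommended range)."""
    minimum = 150 if analysis == "EFA" else 200
    ratio = n_respondents / n_items
    return n_respondents >= minimum and ratio >= 5

print(sample_size_ok(180, 20, "EFA"))   # 9:1 ratio, N above 150 -> True
print(sample_size_ok(180, 40, "EFA"))   # 4.5:1 ratio falls short -> False
```

Note that both conditions must hold: a large N with too many items still fails the ratio test, and a good ratio with a small N still fails the minimum.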

c)  Stage III: SCALE EVALUATION:

i)  construct validity: MTMM, CFA, etc.

ii)  criterion validity

6)  What are the procedures used to validate a survey?

a)  Define your variables

b)  Demonstrate content validity

c)  Demonstrate construct validity

d)  Demonstrate criterion validity

7)  What are the different types of method effects?

a)  Common rater:

i)  consistency motif, social desirability, leniency, acquiescence, mood, etc.

b)  Item characteristic:

i)  Item social desirability, item demand characteristics, item ambiguity, common scale formats, common scale anchors, positive and negative wording.

c)  Item context:

i)  priming, embeddedness, context-induced mood, scale length, grouping of items

d)  Measurement Context:

i)  predictor and criterion measured at same time

ii)  measured at same location

iii)  using same medium