Module 6: Behavioral Transfer Final Exam
Please select the one BEST answer for each question.
1.Describe how on-the-job behavior is viewed from both the traditional and new
training evaluation models.
a.On-the-job behavior is just another measure under the traditional training
evaluation model, whereas under the new model actual behavior is the key
to any evaluation.
b.On-the-job behavior is viewed as the primary focus of any evaluation in
the traditional training evaluation model, whereas in the new model
on-the-job behavior or actual behavior is one of many training outcomes.
c.The traditional training evaluation model is in hierarchical order and so
too is the new evaluation model. Thus, both view behavior as fundamental
to any evaluation.
d.Both evaluation models view actual behavior as one of many evaluation
outcomes. This fact is illustrated by the new model's Tier 4 which is
actual behavior and behavioral intentions.
2.Name three types of data collection methods used to measure on-the-job behavior.
a.inter-rater reliability, observation, and interview
b.self-assessment, interview, and focus group
c.observation, interview, and focus group
d.observation, self-assessment, and interview
3.List the benefits of the three data collection methods.
a.Inter-rater reliability correlates the data better than other methods;
observation may produce more objective analyses; and interview
generates more details.
b.Interview generates more details; self-assessment is easier to collect; and
focus group provides a well-rounded perspective.
c.Focus group provides a well-rounded perspective; observation may
produce more objective analyses; and interview generates more details.
d.Self-assessment is easier to collect; observation may produce more
objective analyses; and interview generates more details.
4.List the limitations of the three data collection methods.
a.Inter-rater reliability cannot determine consistency for more than two
raters; observations are time consuming; and interview data may be
difficult to validate and determine if it is reliable.
b.Interview data may be difficult to validate and determine if it is reliable;
self-assessments may not accurately evaluate trainees' performances; and
focus group participants may report only the positives to save face.
c.Observations are time consuming; interview data may be difficult to
validate and determine if it is reliable; and self-assessments may not
accurately evaluate trainees' performances.
d.Focus group participants may report only the positives to save face;
interview data may be difficult to validate and determine if it is reliable;
and observations are time consuming.
5.How would you support your behavioral instrument in terms of content
validity?
a.I would tape the behaviors three times and then have two raters score
them. Next, I would correlate the results to see if the two raters have
high agreement. If there is high agreement, then I would conclude that
the skills test is content valid.
b.I would show the behavioral instrument to a group of people and ask
them what they think the instrument is measuring. If 90% or more of
them respond that the instrument is measuring what it is supposed to,
then I would have support for content validity.
c.I would make sure the behavioral instrument items parallel the
training performances, eliminate any irrelevant information, and weight
the behavioral items so that the items are balanced to the training topic
emphasis.
d.I would make sure the behavioral instrument items are parallel to the
training performances, and I would eliminate any irrelevant information.
6.How would you support your behavioral instrument in terms of face
validity?
a.I would make sure the behavioral instrument items parallel the
training performances, eliminate any irrelevant information, and weight
the behavioral items so that the items are balanced to the training topic
emphasis.
b.I would show the behavioral instrument to a group of people and ask
them what they think the instrument is measuring. If 90% or more of
them respond that the instrument is measuring what it is supposed to,
then I would have support for face validity.
c.I would tape the behaviors three times and then have two raters score
them. Next, I would correlate the results to see if the two raters have
high agreement. If there is high agreement, then I would conclude that
the skills test is face valid.
d.I would make sure the behavioral instrument items are parallel to the
training performances, and I would eliminate any irrelevant information.
7.How would you support your behavioral instrument in terms of test-retest
reliability?
a.I would make sure the behavioral instrument items parallel the
training performances, eliminate any irrelevant information, and weight
the behavioral items so that the items are balanced to the training topic
emphasis.
b.I would gather a group of subjects and ask them to take the behavioral
instrument. The following day, I would ask the same group of subjects to
take the same behavioral instrument again. I would then correlate the
results. If the correlation is high, then I would have support for test-retest
reliability.
c.I would tape the behaviors three times and then have two raters score
them. Next, I would correlate the results to see if the two raters have
high agreement. If there is high agreement, then I would conclude that
the skills test has test-retest reliability.
d.I would gather a group of subjects and ask them to take the behavioral
instrument. I would then run a reliability analysis using Alpha. If Alpha is
above .80, then I would have support for test-retest reliability.
8.How would you support your behavioral instrument in terms of internal
consistency reliability?
a.I would make sure the behavioral instrument items parallel the
training performances, eliminate any irrelevant information, and weight
the behavioral items so that the items are balanced to the training topic
emphasis.
b.I would gather a group of subjects and ask them to take the behavioral
instrument. The following day, I would ask the same group of subjects to
take the same behavioral instrument again. I would then correlate the
results. If the correlation is high, then I would have support for internal
consistency reliability.
c.I would tape the behaviors three times and then have two raters score
them. Next, I would correlate the results to see if the two raters have
high agreement. If there is high agreement, then I would conclude that
the skills test has internal consistency reliability.
d.I would gather a group of subjects and ask them to take the behavioral
instrument. I would then run a reliability analysis using Alpha. If Alpha is
above .80, then I would have support for internal consistency reliability.
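For illustration only, the internal consistency procedure described in option d can be sketched in code. This is a minimal hand-rolled computation of Cronbach's alpha; the subject-by-item score matrix below is hypothetical, not data from the course.

```python
# Sketch: Cronbach's alpha for internal consistency reliability.
# The behavioral-instrument scores below are hypothetical.

def cronbach_alpha(scores):
    """scores: list of subjects, each a list of item scores."""
    k = len(scores[0])  # number of items on the instrument

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item's scores across subjects
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    # Variance of each subject's total score
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical scores: 4 subjects x 3 behavioral items
data = [[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 3, 2]]
alpha = cronbach_alpha(data)
print(round(alpha, 2))  # prints 0.96
```

An Alpha above .80, as option d states, would support internal consistency reliability; the made-up data here happen to produce a high value.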
9.Define intra- and inter-rater reliability.
a.Intra-rater reliability involves two raters while inter-rater reliability
involves three or more raters.
b.Inter-rater reliability involves two raters, while intra-rater reliability
involves one rater.
c.Inter-rater reliability involves at least two raters, while intra-rater
reliability always involves one rater.
d.Intra-rater reliability is similar to self-assessment, whereas inter-rater
reliability is similar to observation.
10.Illustrate how you would calculate intra- and inter-rater reliability.
a.In calculating intra- and inter-rater reliability, you would correlate
the scores from the multiple assessments.
b.In calculating intra- and inter-rater reliability, you would run a t-test
on the scores from the multiple assessments.
c.In calculating intra- and inter-rater reliability, you would check the mean
scores from the multiple assessments to determine if the mean scores
were statistically significant.
d.In calculating intra- and inter-rater reliability, you would check the
standard deviation to determine if the scores were statistically significant.
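For illustration only, the calculation named in option a (correlating the scores from multiple assessments) can be sketched as a Pearson correlation between two raters' scores. The rater scores below are hypothetical.

```python
# Sketch: inter-rater reliability as a Pearson correlation between
# two raters' scores on the same assessments. Scores are hypothetical.

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two raters scoring the same five taped behaviors
rater1 = [4, 3, 5, 2, 4]
rater2 = [5, 3, 4, 2, 4]
r = pearson_r(rater1, rater2)
print(round(r, 2))  # prints 0.81
```

The same correlation applies to intra-rater reliability (one rater scoring the same behaviors on two occasions) and to test-retest reliability (the same subjects taking the instrument twice); only the source of the two score lists changes.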
Module 6 Test. Copyright © 2012 Third House Inc.