Assessing Learning Outcomes

COMS 4322--Chapter 11

The Importance of Assessment: Training assessment refers to the process of evaluating the programs you taught to see whether they accomplished what you wanted them to do. Early on, the trainer assesses the trainees’ needs and sets objectives (Chapters 3 and 4). At the end, the trainer should determine whether what was taught fulfilled those needs. A central question is why assessment matters. Below are two reasons:

1. Assessment Ensures Survival of the Fittest—In today’s business world, the only departments that survive are the ones that contribute to the “bottom line” in some measurable way. Trainers must be able to show corporate management that training makes a measurable, positive impact on earnings.

2. Assessment Ensures Quality Training—This argues that training needs to be done right the first time. Total quality management holds that doing the job right is preferred to doing it quickly. Training from this perspective means that you gather and use feedback from your trainees to ensure you are meeting the training objectives.

Measuring Learning Outcomes:

The approaches discussed here are the Kirkpatrick model, the Bell System Approach, the IBM Approach, the Saratoga Institute Approach, and the Xerox Approach.

1. Donald Kirkpatrick’s “Levels of Training Evaluation” is probably the best-known framework for assessing training outcomes. There is a set of measuring instruments for each of the three domains of learning:

· Assessing Cognitive Learning: Did they learn it? Exams are commonly used to measure this domain. The measures must be objective and quantifiable as to how well the participants understood and absorbed the material. Common exam question types include multiple-choice and matching. Pages 242-243 provide some advantages and disadvantages of using multiple-choice questions, and the same is true of matching and essay questions. What is important is choosing the right way to measure the learning objectives, and that often depends on the trainees. Ideally, we would design questions best suited to each trainee, but that is often not possible.

· Assessing Affective Learning: Did they like it? Appreciate it? You want them to like what you teach and to respect you in the process. Positive reactions to a training program may make it easier to encourage employees to attend future training; if the trainees did not like it, that may limit future attendance. The biggest drawback to this level of learning is that the information you gather from the trainees does not indicate whether the program met its objectives beyond ensuring participant satisfaction. Surveys are commonly used to measure this domain via Likert scales (of which there are many); pages 247-250 offer several different scales designed to measure feelings. A minimal scoring sketch appears after this list.

· Assessing Behavioral Learning: Can they do it? Just because someone has been trained to do something does not mean they can actually perform the activity or skill. They need to be able to take the training and turn it into something productive in the workplace. You will want to measure at one of the following levels:

1. atomistic assessment—the lowest level, concerned only with whether or not the behaviors were performed. How well they were performed, and the depth or refinement of the skill, is not important.

2. analytic assessment—more concerned with how well the behaviors were performed. Looks not just at whether the behaviors were performed, but at what degree of skill they reflect.

3. holistic assessment—the concern is with the finished product and less with the steps taken to reach it. Ideally, you want the trainees to follow the prescribed steps in the process, but in the end what matters is the result. Ultimately, and most important, does the training positively impact the corporation’s productivity and profitability?

Trainers will want to decide which of these behavioral measures to use and then design an instrument or survey to capture them. Critique sheets for class assignments are examples of this, and the text offers other samples on pages 252-253.
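To make survey-based affective measurement concrete, here is a minimal sketch in Python for summarizing five-point Likert responses. The survey items and scores below are hypothetical placeholders, not material from the text; a real assessment would use your own questionnaire.

    from statistics import mean

    # Hypothetical post-training survey: each inner list is one trainee's answers
    # on a 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree).
    responses = [
        [5, 4, 4, 5],   # trainee 1
        [3, 4, 2, 4],   # trainee 2
        [4, 5, 4, 4],   # trainee 3
    ]

    items = [
        "The content was relevant to my job.",
        "The trainer presented the material clearly.",
        "The materials will be useful in my work.",
        "I would recommend this training to others.",
    ]

    # Mean score per item shows which parts of the program were liked most/least.
    for i, item in enumerate(items):
        print(f"{mean(r[i] for r in responses):.2f}  {item}")

    # Overall satisfaction across all items and all trainees.
    overall = mean(score for r in responses for score in r)
    print(f"Overall mean rating: {overall:.2f} / 5")

Keep in mind the drawback noted above: scores like these capture reactions only, not whether the training objectives were actually met.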

---------------------------------------------------

There are many other approaches to evaluating training. Below are a few of the more commonly used variations on the Kirkpatrick model:

2. The Bell System Approach—Training program results are measured via the following outcomes:

· Reaction outcomes—These are the participants’ opinions of the training’s content, materials, methods, etc. In short, did they like the training or not? Did they accept the program?

· Capability outcomes—This covers what the trainees are expected to know, think, do, or produce by the end of the program.

· Application outcomes—This involves what the trainees know, think, do, or produce in the real-world settings the training program has prepared them for.

· Worth outcomes—This is the most significant result because it shows the value of the training in relation to its cost: how much the organization benefits from the training compared with what it costs to present it. A small worked example follows below.

The first two outcomes represent the immediate goals of the training, while the last two are longer-term results.
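As a worked illustration of the worth outcome, here is a minimal sketch of a value-versus-cost calculation. The dollar figures are hypothetical placeholders; a real analysis would use the organization’s own cost and benefit estimates.

    # Hypothetical figures for a single training program.
    program_cost = 25_000.00      # design, delivery, materials, trainee time
    annual_benefit = 40_000.00    # estimated yearly gain attributed to the training

    net_benefit = annual_benefit - program_cost
    roi_percent = (net_benefit / program_cost) * 100

    print(f"Net benefit: ${net_benefit:,.2f}")   # $15,000.00
    print(f"ROI: {roi_percent:.1f}%")            # 60.0% return on the training dollar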

3. The IBM Approach—This approach is a variation on the popular Kirkpatrick model. IBM spends more than a billion dollars annually on training and evaluates its efforts in four general categories:

· Reaction—This simply asks the trainees what they thought of the training and how valuable they found the program.

· Testing—Pre- and post-program measurements that assess the knowledge and skill improvement resulting from the training program.

· Application—The extent to which the participants applied the new skills on their jobs and achieved the results the training aimed for.

· Business Results—What IBM expected from the program in the form of a return that can be converted into a dollar value.

4. The Saratoga Institute Approach—This approach, also similar to the Kirkpatrick model, evaluates training on the following levels:

· Training Satisfaction—The degree to which the participants are satisfied with the training they received.

· Learning Change—The actual learning that has occurred, measured with pre- and post-test instruments.

· Behavior Change—The on-the-job change in behavior as a result of the training program.

· Organizational Change—The improvements in the organization as a result of the training program. This needs to be measured in concrete or quantitative terms.

5. The Xerox Approach—This model focuses on four levels:

· Entry capability—The evaluation of the trainees at the time they enter a program to determine if the prerequisites for the program are satisfied.

· End-of-Course Performance—Addresses the issue of whether or not trainees achieved the desired training outcomes. This is linked to the training objectives.

· Mastery Job Performance—Focuses on the question of whether or not graduates of the program exhibit mastery performance under normal job conditions after a practical period of on-the-job experience.

· Organizational Performance—Focuses on whether participants meet or exceed organizational targets after a practical period of on-the-job experience.

This model is similar to the others except for the addition of the entry-capability assessment.

-----------------------------------------------------

Assessment Designs: It is never easy to determine exactly what the training has accomplished; you must measure carefully to make an educated assessment. Below are the basic designs for measuring how effective the training was:

1. Pre/Post Test Design—The assessment instrument is given to the trainees before the training takes place to find out what they knew going in, and the same test is given again after the training. The key limitation of this design is that you cannot be certain the training itself caused any change in the trainees; other factors could have contributed. How much of a problem this is varies: sometimes it is significant, other times minimal. Another concern is the amount of time that passes between the pre-test and the post-test. (An analysis sketch for this design and the next appears after this list.)

2. Pre/Post Test with Control Group Design—Adding a control group (one that does not receive the training) strengthens the design above. Comparing the trained group’s change against the control group’s helps eliminate the maturation effect present in a simple pre/post design.

3. Post-Test Only with Control Group Design—Give only a post-test assessment. With only one test, there is no way to know whether the two groups (control and trainees) were equal to begin with, but it does help control for maturation and testing effects (page 257).

4. Repeated Measure Design—Both a pre-test and a post-test are given, along with tests at various intervals during the training program. This is helpful for seeing where behaviors and/or knowledge start to change during the training, and it can help in determining how effective the training will be for the class in the long term.

5. Qualitative Assessment Design—Quantitative assessment employs numbers and statistics to measure training results. Qualitative assessment, by contrast, features descriptions of the learning outcomes, without numerical interpretation. The best ways to get these descriptions are through (a) focus groups and (b) personal interviews. Using both qualitative and quantitative assessment is ideal for trainers, giving them what is referred to as triangulation. The end result: you get the best picture of how well your training worked.
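To show how the first two designs might be analyzed in practice, here is a minimal sketch in Python. The test scores are hypothetical, and it assumes the scipy library is available for the paired t-test; it is an illustration of the idea, not a method from the text.

    from statistics import mean
    from scipy import stats

    # Design 1: pre/post test on the trained group only (hypothetical scores).
    pre  = [62, 70, 58, 65, 71, 60]
    post = [78, 84, 70, 80, 85, 74]

    t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test: did scores change?
    print(f"Trained group gain: {mean(post) - mean(pre):.1f} points (p = {p_value:.3f})")

    # Design 2: add a control group that took both tests but received no training.
    ctrl_pre  = [63, 69, 59, 66, 70, 61]
    ctrl_post = [65, 70, 62, 67, 72, 62]

    # Difference-in-differences: the trained group's gain minus the control
    # group's gain. Change common to both groups (maturation, test familiarity)
    # is subtracted out, leaving the change attributable to the training.
    did = (mean(post) - mean(pre)) - (mean(ctrl_post) - mean(ctrl_pre))
    print(f"Gain attributable to training: {did:.1f} points")

A post-test-only design (item 3) would compare just post against ctrl_post, for example with stats.ttest_ind, accepting the caveat above that the groups may not have been equal to begin with.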

Interpreting Assessment Information: Finally, you need to determine what all your assessment data mean and ensure that your bosses know it as well:

· Analyzing Assessment Data—You will want to compare the learning objectives you set (Chapter 4) with how well the trainees did. If you have no objectives to start with, it is impossible to know whether you have reached them.

· Using Assessment Data—With the data from your assessment, you can use the feedback to improve and/or refine future training sessions. All too often this does not get done due to a lack of time and resources, but using feedback in constructive ways is smart if you can possibly do it.

· Reporting Assessment Data—Essentially, this means making sure the decision-makers, the top bosses in your organization, are aware of the success of your training. As the book notes, if you cannot justify your existence in today’s corporate environment, there is little chance your training efforts will be continued.

· Refer to the article from Time magazine, “The Diversity Delusion.” It is a good example of what can happen when companies either do not or cannot measure the effectiveness of their training programs. Read the article and be ready to discuss it in class.