Primer: Research Foundations of Organizational Behavior

Organizational behavior, or “OB” as we call it, is an applied social science that combines basic theory and practical applications. Because the discipline deals with people in organizations, you will find many sources that provide commentary and advice on OB-related issues. Some are based on solid scientific methods and concepts, while others are speculative. You may find it difficult to decide what to believe and what to dismiss. In order to make good decisions about OB insights and applications, you must understand the elements of good theory and be able to ask good questions.

Theory in OB

In a very broad sense, a theory is simply a story of what to look for, how the things you are looking at are related, and why the pieces do or do not fit together into some meaningful tale. The purpose of a theory is to explain and predict. The better the theory, the better the explanation and prediction. More formally stated, a theory is a set of systematically interrelated concepts and hypotheses that are advanced to explain and predict phenomena.1

In OB some scholars also incorporate an applications aspect. That is, a good theory also can be applied with confidence. John Miner is one of those who has outlined some bases for judging theory in OB.2 These include:

1. It should aid in understanding, permit prediction, and facilitate influence.

2. There should be clear boundaries for application.

3. It should direct efforts toward important, high-priority items.

4. It should produce generalizable results beyond a single setting.

5. It should be tested using clearly defined concepts and operational measures.

6. It should be both internally consistent and consistent with studies derived from it.

7. It should be stated in understandable terms.

Now that is a very tall order for any theory, and we know of no theory in OB that passes muster on all counts. Clearly, some are better than others. Some theories are pretty good at explanation but lousy at prediction, while others do a reasonable job of prediction but do not facilitate influence. For example, if circumstances are highly similar, predicting that an individual will repeat a behavior is a sound bet. Unfortunately, this prediction is rarely supported by a theory explaining why the individual acted in a given manner in the first place. As a manager, even if you know that an individual will repeat a behavior, you also need to know how to change it. And so it goes.

The bottom line is that theory and research go together. The theory tells us what to look for, and the research tells us what was found. What was found also tells us what to look for again. It is important to realize that we may not see what we do not conceptualize. But it is equally important to note that for an acceptable theory, others must understand, see, and verify what we see and understand. Among OB researchers this process of seeing, understanding, and verifying is generally accomplished through the scientific method.

Scientific Method

A key part of OB research foundations is the scientific method, which involves four steps. First, a research question or problem is specified. Then one or more hypotheses or explanations of what the research parties expect to find are formulated. These may come from many sources, including previous experience and careful review of the literature covering the problem area. The next step is the creation of a research design—an overall plan or strategy for conducting the research to test the hypothesis(es). Finally, data gathering, analysis, and interpretation are carried out.3

The Vocabulary of Research

The previous discussion conveyed a quick summary of the scientific method. It’s important to go beyond that summation and further develop a number of aspects of the scientific method. Before doing that, we consider the vocabulary of research. Knowing that vocabulary can help you feel comfortable with several terms used in OB research as well as help in our later discussion.4

Variable A variable is a measure used to describe a real-world phenomenon. For example, a researcher may count the number of parts produced by workers in a week’s time as a measure of the workers’ individual productivity.

Hypothesis Building on our earlier use of the term, we can define a hypothesis as a tentative explanation about the relationship between two or more variables. For example, OB researchers have hypothesized that an increase in supervisory participation will increase productivity. Hypotheses are “predictive” statements. Once supported through empirical research, a hypothesis can be a source of direct action implications. Confirmation of the above hypothesis would lead to the following implication: If you want to increase individual productivity in a work unit, increase the level of supervisory participation.

Dependent Variable The dependent variable is the event or occurrence expressed in a hypothesis that indicates what the researcher is interested in explaining. In the previous example, individual productivity was the dependent variable of interest. OB researchers often try to determine what factors appear to predict increases in productivity.

Independent Variable An independent variable is the event or occurrence that is presumed by a hypothesis to affect one or more other events or occurrences as dependent variables. In the example of individual performance, supervisory participation is the independent variable.

Intervening Variable An intervening variable is an event or occurrence that provides the linkage through which an independent variable is presumed to affect a dependent variable. It has been hypothesized, for instance, that participative supervisory practices (independent variable) improve worker satisfaction (intervening variable) and thereby increase performance (dependent variable).

Moderator Variable A moderator variable is an event or occurrence that, when systematically varied, changes the relationship between an independent variable and a dependent variable. The relationship between these two variables differs depending on the level—for instance, high/low, young/old, male/female—of the moderator variable. To illustrate, consider again the earlier hypothesis that participative supervision leads to increased productivity. It may well be that this relationship holds true only when the employees feel that their participation is real and legitimate—a moderator variable. Likewise, it may be that participative supervision leads to increased performance for Canadian workers but not for those from Brazil—here, country is a moderator variable.
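The moderator idea can be made concrete with a short Python sketch. All of the numbers below are invented for illustration: for workers who see their participation as legitimate, participation and productivity rise together; for workers who do not, the relationship disappears.

```python
# Illustration of a moderator variable with invented data: the
# participation-productivity relationship holds only for workers who
# feel their participation is legitimate (the moderator).

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (participation score, productivity, participation seen as legitimate?)
data = [
    (1, 50, True), (2, 55, True), (3, 61, True), (4, 64, True), (5, 70, True),
    (1, 52, False), (2, 51, False), (3, 53, False), (4, 50, False), (5, 52, False),
]

for level in (True, False):
    xs = [p for p, _, m in data if m == level]
    ys = [q for _, q, m in data if m == level]
    print(f"legitimacy={level}: r = {pearson(xs, ys):+.2f}")
```

Run as is, the correlation is strongly positive for the “legitimate” group and near zero for the other, which is what a moderator effect looks like in data.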

Validity Validity is concerned with the degree of confidence one can have in the results of a research study. It is focused on limiting research errors so that results are accurate and usable.5 There are two key types of validity: internal and external. Internal validity is the degree to which the results of a study can be relied upon to be correct. It is strongest when alternative interpretations of the study’s findings can be ruled out.6 To illustrate, if performance improves with more participative supervisory practices, these results have a higher degree of internal validity if we can rule out the effects of differences between old and new machines.

External validity is the degree to which the study’s results can be generalized across the entire population of people, settings, and other similar conditions.7 We cannot have external validity unless we first have internal validity; that is, we must have confidence that the results are caused by what the study says they are before we can generalize to a broader context.

Reliability Reliability is the consistency and stability of a score from a measurement scale. There must be reliability for there to be validity or accuracy. Think of shooting at a bull’s-eye. If the shots land all over the target, there is neither reliability (consistency) nor validity (accuracy). If the shots are clustered close together but outside the outer ring of the target, they are reliable but not valid. If they are grouped together within the bull’s-eye, they are both reliable and valid.8
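The bull’s-eye analogy can be expressed numerically. In this hypothetical Python sketch, reliability is the spread of the shots around their own center, and validity is the distance of that center from the bull’s-eye; the three shot groups are invented to match the three cases above.

```python
import math

def spread(shots):
    """Mean distance of each shot from the group's own center (low = reliable)."""
    cx = sum(x for x, _ in shots) / len(shots)
    cy = sum(y for _, y in shots) / len(shots)
    return sum(math.dist((x, y), (cx, cy)) for x, y in shots) / len(shots)

def offset(shots, bullseye=(0.0, 0.0)):
    """Distance of the group's center from the bull's-eye (low = valid)."""
    cx = sum(x for x, _ in shots) / len(shots)
    cy = sum(y for _, y in shots) / len(shots)
    return math.dist((cx, cy), bullseye)

# Invented shot coordinates; the bull's-eye is at the origin.
scattered     = [(-4, 3), (5, -2), (1, 6), (-3, -5)]    # neither reliable nor valid
tight_but_off = [(7.0, 7.1), (7.2, 6.9), (6.9, 7.0)]    # reliable but not valid
tight_on      = [(0.1, -0.1), (-0.1, 0.0), (0.0, 0.1)]  # reliable and valid

for name, shots in [("scattered", scattered),
                    ("tight but off-center", tight_but_off),
                    ("tight on bull's-eye", tight_on)]:
    print(f"{name}: spread={spread(shots):.2f}, offset={offset(shots):.2f}")
```

Low spread with high offset is the “consistent but inaccurate” case: the measure is reliable yet not valid.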

Causality Causality is the assumption that change in the independent variable caused change in the dependent variable. This assumption is very difficult to prove in OB research. Three types of evidence are necessary to demonstrate causality: (1) the variables must show a linkage or association; (2) one variable must precede the other in time; and (3) there must be an absence of other causal factors.9 For example, say we note that participation and performance increase together—there is an association. If we can then show that an increase in participation has preceded an increase in performance and that other factors, such as new machinery, haven’t been responsible for the increased performance, we can say that participation probably has caused performance.
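The first two kinds of evidence lend themselves to simple computation. The Python sketch below uses an invented monthly series to check association (correlation) and temporal precedence (this month’s participation correlated with next month’s productivity); the third requirement, ruling out other causes, is a matter of research design rather than calculation.

```python
# Invented monthly series for checking two of the three kinds of
# evidence for causality. The third kind (absence of other causal
# factors) comes from research design, e.g., control groups.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

participation = [2, 2, 3, 4, 4, 5, 6, 6, 7, 8]            # participation level
productivity  = [50, 51, 50, 53, 56, 57, 60, 63, 64, 67]  # units per worker

# Evidence 1: association -- the two series move together.
same_month = pearson(participation, productivity)

# Evidence 2: temporal precedence -- this month's participation is
# correlated with NEXT month's productivity.
leading = pearson(participation[:-1], productivity[1:])

print(f"same-month association: r = {same_month:.2f}")
print(f"participation leading by one month: r = {leading:.2f}")
```

Even strong correlations in both checks would not by themselves establish causality; they only satisfy the first two conditions.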

Research Designs

As noted earlier, a research design is an overall plan or strategy for conducting the research to test the hypothesis(es). Four of the most popular research designs are laboratory experiments, field experiments, case studies, and field surveys.10

Laboratory Experiments

Laboratory experiments are conducted in an artificial setting in which the researcher intervenes and manipulates one or more independent variables in a highly controlled situation. Although this high degree of control encourages internal validity, these studies are done in an artificial setting and so may suffer from a lack of external validity.

To illustrate, assume we are interested in the impact of three different incentive systems on employee absenteeism: (1) a lottery with a monetary reward; (2) a lottery with a compensatory time-off reward; and (3) a lottery with a large prize, such as a car. The researcher randomly selects individuals in an organization to come to an office to take part in the study. This randomization is important because it means that variables that are not measured are randomly distributed across the subjects so that unknown variables shouldn’t be causing whatever is found. However, often it is not possible to obtain subjects randomly in organizations since they may be needed elsewhere by management.

The researcher next randomly assigns each worker to one of the three incentive systems or to a control group with no incentive system. The employees report to work in their new work stations under highly artificial but controlled conditions, and their absenteeism is measured at both the beginning and end of the experiment. Statistical comparisons are made across each group, considering before and after measures.

Ultimately, the researcher tests hypotheses about the effects of each of the lottery treatments on absenteeism. Given support for these hypotheses, the researcher could feel with a high degree of confidence that a given incentive condition caused less absenteeism than did the others, since randomized subjects, pre- and posttest measures, and a comparison with a control group were used. However, since the work stations were artificial and the lottery conditions were highly simplified to provide control, external validity could be questioned. Ideally, the researcher would conduct a follow-up study with another design to check for external validity.
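The logic of this design can be sketched in a few lines of Python. Everything here is simulated: the baseline absenteeism rates, the treatment effects, and the noise are all invented purely to show how random assignment, pre/post measures, and a control group fit together.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

CONDITIONS = ["cash lottery", "time-off lottery", "prize lottery", "control"]

# 40 hypothetical workers with a baseline absenteeism rate (days/quarter).
workers = [{"id": i, "before": random.uniform(2, 6)} for i in range(40)]

# Random assignment spreads unmeasured differences evenly across groups.
random.shuffle(workers)
for i, w in enumerate(workers):
    w["group"] = CONDITIONS[i % len(CONDITIONS)]

# Invented treatment effects for the sketch: each lottery reduces
# absenteeism somewhat; the control group does not change.
EFFECT = {"cash lottery": -1.5, "time-off lottery": -1.0,
          "prize lottery": -2.0, "control": 0.0}
for w in workers:
    w["after"] = max(0.0, w["before"] + EFFECT[w["group"]] + random.gauss(0, 0.3))

# Compare the mean before/after change in each group.
for g in CONDITIONS:
    grp = [w for w in workers if w["group"] == g]
    change = sum(w["after"] - w["before"] for w in grp) / len(grp)
    print(f"{g:18s} mean change = {change:+.2f} days")
```

Because assignment was random, a markedly larger drop in one lottery group than in the control group can be attributed to that incentive rather than to preexisting differences among the workers.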

Field Experiments

Field experiments are research studies that are conducted in a realistic setting. Here, the researcher intervenes and manipulates one or more independent variables and controls the situation as carefully as the situation permits.

Applying the same research question as before, the researcher obtains management permission to assign one incentive treatment to each of three organizational departments that are similar in terms of the characteristics of their people. A fourth control department keeps the current payment plan. The rest of the experiment is similar to the laboratory study except that the lottery treatments are more realistic but also less controlled. Also, it may be particularly difficult to obtain random assignment in this case since it may disrupt day-to-day work schedules, and so on. When random assignment is not possible, the other manipulations may still be possible. An experimental research design without any randomization is called a quasi-experimental design and does not control for unmeasured variables as well as a randomized design.

Case Studies

Case studies are in-depth analyses of one or a small number of settings. Case studies often are used when little is known about a phenomenon and the researcher wants to examine relevant concepts intensely and thoroughly. They can sometimes be used to help develop theory that can then be tested with one of the other research designs. Returning to the participation and performance example, one might look at one or more organizations and intensely study organizational success or failure in designing or implementing participation. You might look for differences in how employees and managers define participation. This information could provide insights to be investigated further with additional case studies or other research designs.

A major strength of case studies is their realism and the richness of data and insights they can provide. Some disadvantages are their lack of control by the researcher, the difficulty of interpreting the results because of their richness, and the large amount of time and cost that may be involved.

Field Surveys

Field surveys typically depend on the use of some form of questionnaire for the primary purpose of describing and/or predicting some phenomenon. They usually utilize a sample drawn from some large population. A key objective of field surveys is to look for relationships between or among variables. Two major advantages are their ability to examine and describe large populations quickly and inexpensively, and their flexibility. They can be used to do many kinds of OB research, such as testing hypotheses and theories and evaluating programs. Field surveys assume that the researcher has enough knowledge of the problem area to know the kinds of questions to ask; sometimes, earlier case studies help provide this knowledge.

A key disadvantage of field surveys is the lack of control. The researcher does not manipulate variables; even such things as who completes the surveys and their timing may not be under the researcher’s control. Another disadvantage is the lack of depth of the standardized responses; thus, sometimes the data obtained are superficial.

Data Gathering and Analysis

Once the research design has been established, we are ready for data gathering, analysis, and interpretation—the final step in the scientific method. Four common OB data-gathering approaches are interviews, observation, questionnaires, and nonreactive measures.11

Interviews

Interviews involve face-to-face, telephone, or computer-assisted interactions to ask respondents questions of interest. Structured interviews ask the respondents the same questions in the same sequence. Unstructured interviews are more spontaneous and do not require the same format. Often a mixture of structured and unstructured formats is used. Interviews allow for in-depth responses and probing. They are generally time consuming, however, and require increasing amounts of training and skill, depending on their depth and amount of structure.