Randomized Clinical Trials?

When Should Clinical Trials Be Randomized?

Statistical, Logistic, and Ethical Issues in Nursing Care Research

Richard A. Zeller

Dean L. Zeller

Kent State University

Summer, 2006

Corresponding Authors:


Richard A. Zeller, Ph.D.

College of Nursing

Kent State University

P.O. Box 5190

Kent, Ohio 44242-0001

330-308-0924 (phone)

330-672-2433 (fax)

Richard A. Zeller, Ph.D., earned his B.A. degree in 1966 from LaVerne College, California; his M.A. degree in 1967 and his Ph.D. degree in 1972 from the University of Wisconsin, Madison. All of his degrees are in the discipline of Sociology. He was a visiting scholar at the Inter-university Consortium for Political and Social Research, University of Michigan, Ann Arbor, Michigan, in 1979, and worked on sabbatical leaves in 1987 and 1994-1995. Dr. Zeller is a visiting professor of Nursing at Kent State University; he recently retired from a 31-year professional career in Sociology at the University of Minnesota, Morris; the State University of New York at Buffalo; and Bowling Green State University in Ohio.

Zeller has authored or coauthored 3 books, 3 manuals, and more than 90 professional articles. He has made more than 90 professional presentations in the U.S.A. and around the world (England, Germany, Yugoslavia, etc.), has given numerous interviews and seminars, and has received numerous awards. Dr. Zeller has served on more than 50 thesis and dissertation committees; he has served more than 100 professional clients in universities (University of Miami, University of Pittsburgh, Case Western Reserve University, Indiana University, University of Utah, etc.), governmental organizations (Development Research Centre, Ottawa, Canada, etc.), and private industry (Owens Corning Fiberglas, Akron General Hospital, etc.). He recently sold his partnership in AZG Research, Inc. Zeller's research interests focus on social statistics, measurement, and the use of experimental, survey, and field research designs. His areas of application include nursing research, communication, sports, social psychology, human sexuality, and political correctness.


Dean L. Zeller

Department of Computer Science

Kent State University

P.O. Box 5190

Kent, Ohio 44242-0001

330-871-2365 (phone)

330-672-7824 (fax)

Dean L. Zeller is in the computer science Ph.D. program at Kent State University. He earned his Master of Science in computer science from Bowling Green State University in 1996 and his Bachelor of Science in mathematics and computer science in 1992.

He has over ten years of teaching experience, eight of them in higher education, teaching courses ranging from introductions to computers, through mid-level programming courses such as data structures and computer architecture, to advanced courses on compiler design, artificial intelligence, and computability theory. He also has two years of experience teaching high- and middle-school students in the Kansas City, Missouri, School District. He published an article on the Minimization statistical algorithm in the Journal of Nursing Research in 1997 and presented at the Ohio Collaborative Conference on Bioinformatics (OCCBIO) in 2006. He is the chair of the pedagogy presentation track for OCCBIO ’07 and is team leader of the Computer Science Education (CSEd) team at KSU. His research interests include bioinformatics, graph theory, statistical algorithms, user interface design, and computer science education.


When Should Clinical Trials Be Randomized?

Statistical, Logistic, and Ethical Issues in Nursing Care Research

Table of Contents

I. Introduction 3

1.1. Randomized vs. Non-Randomized Trials 4

1.2. Research Design Standards for Credible Causal Inference 4

II. Causation 5

2.1 Correlation 6

2.2 Temporal Priority 7

2.3 Non-Spuriousness 8

III. Strategies for Controlling the Effects of Covariates 9

3.1 Selection and Specification 9

3.2 Residualization 9

3.3 Random Assignment 11

3.4 A Comparison of Control Strategies 11

IV. Implications for Clinical Trials 12

4.1. Logistic and Practical Challenges of the Randomized Trial 12

4.2. The Ethical Dilemma of the Randomized Trial 13

V. Discussion 14

Figures 15

References 18


When Should Clinical Trials Be Randomized?

Statistical, Logistic, and Ethical Issues in Nursing Care Research

Abstract – The purpose of clinical trials research is to demonstrate causal effects. Clinical trials research that requires random assignment of the independent variable is more credible in demonstrating causation than protocols that merely call for the measurement of independent variables. The purpose of this paper is to explore why this is so. To do so, the criteria for demonstrating causation (correlation, temporal priority, and non-spuriousness) are discussed. Strategies for controlling the effects of covariates include selection/specification, residualization, and random assignment. Implications of using these strategies include logistic and practical challenges and ethical dilemmas. Quasi-experimentation is presented as an alternative research design.

Index terms – clinical trials, ethical issues, cause, spurious, residuals, random assignment, ceteris paribus, quasi-experimentation

For thousands of years man’s everyday experience with falling objects did not suffice to bring him to a correct theory of gravity.

Kurt Lewin (1890-1947)

Field Theory Social Psychologist

The discovery and use of scientific reasoning by Galileo ... taught us that intuitive conclusions based on immediate observations are not always to be trusted. Galileo’s contribution was to destroy the intuitive view and replace it by a new one.

Albert Einstein (1879-1955)

German/U.S. physicist

I. Introduction

Most research hypotheses assert a causal effect between two events, i.e., X causes Y. The search for a causal effect starts with a theoretical hypothesis. A theoretical hypothesis is a theory-driven, educated “guess” that the hypothesized effect, Y, is caused by the hypothesized cause, X. The hypothesis is based on derivation from theory, observation, prior research, or a researcher’s “hunch.” The purpose of clinical trials research is to evaluate the causal hypothesis by observation. The result of clinical trials research is to increase or decrease the confidence with which the discipline asserts that the theoretically alleged causal effect is “true.” One of the most common research designs created to evaluate the credibility of causal effects is the randomized trial.

Clinical trials research is usually conducted using two or more groups. Table 1 presents a common clinical trials research design. Each group receives a “usual care” protocol. One group, identified as the “treatment” group, receives usual care plus the treatment. The other group, identified as the “control” group, receives usual care only. Some research designs compare different treatments; in such designs, the groups are called “treatment” groups. Differences in post-treatment outcome measures are used to evaluate the effect of the treatment. Alternatively, the researcher may wish to measure “improvement” by comparing “pre” and “post” measures on the outcome variable. Research statisticians can determine the statistical significance of the difference between groups. Using “pre-post” repeated measures designs, research statisticians can detect these effects with far fewer subjects than a comparison of post “treatment-control” differences alone. If the “improvement” is greater in the treatment group than in the control group, the causal inference is that the treatment caused the greater level of improvement.

------

Insert Table 1 Here

------

Table 1 provides the symbols commonly used in clinical trials research design. In the Treatment row (row 1), O1 is the pre-treatment observation while O3 is the post-treatment observation. The Control group receives no treatment; hence, in the Control row (row 2), O2 is the pre-observation while O4 is the post-observation. A comparison of O1 and O2 with a finding of “no difference” shows that the two groups were equal prior to the treatment. A comparison of O3 and O4 with a “significant difference” shows that the treatment worked. A comparison of O1 and O3 with a significant difference shows the change in the Treatment group over time. A comparison of O2 and O4 with a significant difference shows the change in the Control group over time.

Proper statistical analysis will capitalize on the strong positive correlation between the pre-scores and the post-scores by using a “repeated measures” design. This analysis will be illustrated later in this paper. We now turn to a discussion of randomized vs. non-randomized trials.
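The four O1–O4 comparisons described above can be sketched in a short simulation. This is an illustrative example only: the sample size, population means, and the treatment effect of +5 points are all invented numbers, not values from any actual trial.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

N = 50  # hypothetical number of subjects per group

# Pre-treatment scores: both groups drawn from the same population
o1 = [random.gauss(50, 10) for _ in range(N)]  # Treatment group, pre
o2 = [random.gauss(50, 10) for _ in range(N)]  # Control group, pre

# Post scores: everyone changes by about +2 over time;
# the treatment group also receives an invented treatment effect of +5
o3 = [x + 2 + 5 + random.gauss(0, 3) for x in o1]  # Treatment group, post
o4 = [x + 2 + random.gauss(0, 3) for x in o2]      # Control group, post

# The four comparisons from the text
print("O1 vs O2 (pre-treatment equivalence):", statistics.mean(o1) - statistics.mean(o2))
print("O3 vs O4 (post-treatment difference):", statistics.mean(o3) - statistics.mean(o4))
print("O1 vs O3 (change in Treatment group):", statistics.mean(o3) - statistics.mean(o1))
print("O2 vs O4 (change in Control group):  ", statistics.mean(o4) - statistics.mean(o2))
```

Because each post score is paired with the same subject's pre score, the change scores have far less variance than the raw post scores, which is why the repeated measures design needs fewer subjects than a post-only comparison.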

1.1. Randomized vs. Non-Randomized Clinical Trials

The most credible methodology for evaluating causal hypotheses is the randomized clinical trial design. In order to demonstrate causation, the researcher randomly splits the subjects into two or more groups, assigning the “treatment” to some but not all subjects. In a non-randomized clinical trial, there are control and treatment groups but subjects are not randomly assigned to those groups. Instead, subjects usually decide which group they wish to be in.
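The random split described above can be sketched as follows. The subject identifiers and group sizes are invented for illustration; in a non-randomized trial, the assignment step below would be replaced by each subject's own choice of group.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# 20 hypothetical subjects
subjects = [f"subject_{i:02d}" for i in range(1, 21)]

# Randomized assignment: shuffle the roster, then split it in half
shuffled = subjects[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
treatment_group = shuffled[:half]  # receives usual care plus the treatment
control_group = shuffled[half:]    # receives usual care only

print("Treatment:", sorted(treatment_group))
print("Control:  ", sorted(control_group))
```

The point of the shuffle is that no characteristic of a subject (severity of illness, motivation, age) can influence which group that subject lands in, so the groups differ only by chance before the treatment is applied.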

The credibility of the causal inference is much stronger when using randomized clinical trials. In a non-randomized trial, even if the “improvement” is greater in the treatment group than in the control group, the inference that the treatment caused the greater level of improvement may or may not be warranted!

Why? Why can we make a credible causal inference from a randomized clinical trial but not from a non-randomized clinical trial? And why, if the randomized clinical trial results in a credible causal inference but a non-randomized clinical trial does not support such an inference, would we ever want to use non-randomized clinical trials? Answers to these questions are the purpose of this paper.

Research design is the art of getting the maximum credible causal inference from the minimum use of resources. Randomized and non-randomized clinical trials each have advantages and disadvantages. Randomized clinical trials maximize credible causal inference; non-randomized clinical trials minimize the use of resources. Neither technique does both. The researcher will wish to consider these assets and liabilities when deciding which research design to use. Generally speaking, causal inferences from randomized trials are more credible than similar causal inferences from non-randomized trials. At the same time, randomized trials are more cumbersome and difficult to administer and more ethically ambiguous. Non-randomized clinical trials are simpler to administer and are less likely to place the researcher on the horns of ethical dilemmas, but they do not provide the same credibility of causal inference.

Why is the credibility of the causal inference stronger for randomized clinical trials than for non-randomized clinical trials? Answer: Credible causal inference comes from meeting research design standards for causal inference, and randomized clinical trials are superior to non-randomized clinical trials at meeting these standards. When non-randomized clinical trials are used, there are legitimate alternative hypotheses concerning why the improvement occurred. Specifically, post-treatment differences between treatment and control groups could be due to selection, subject dropout, variation in subject participation levels, and a variety of interaction effects between the treatment and the research setting.

Why, if the randomized clinical trial results in a credible causal inference but a non-randomized clinical trial does not support such an inference, would we ever want to use non-randomized clinical trials? Answer: We use non-randomized clinical trials because we cannot or should not use randomized clinical trials. We may not have the resources. We may not wish to be as invasive in the lives of our subjects as randomized clinical trials require. Randomized clinical trials may place the researcher and/or the subjects in ethically unacceptable positions. We may not be able to get the required sample size to achieve statistically significant results.

We now turn to a discussion of research design standards necessary for making credible causal inference.

1.2. Research Design Standards for Credible Causal Inference

Clinical trials research attempts to demonstrate whether or not there is a causal treatment effect. But we do not know in advance whether there is a causal treatment effect. We deal with evidence consistent with or inconsistent with the existence of a causal treatment effect. Negative evidence contradicts, but does not disprove, a causal effect. Negative evidence decreases, but does not eliminate, the credibility of the belief in that causal effect. That is, the researcher can find negative evidence even when there is a causal treatment effect in the universe. Such evidence is called a “False Negative.”

Similarly, positive evidence is consistent with, but does not prove, the causal effect. Positive evidence increases the credibility of the causal effect. But positive evidence does not guarantee that there is a causal effect. That is, the researcher can find positive evidence for an alleged causal effect that does not exist. Such evidence is called a “False Positive.” Table 2 illustrates these outcomes of causal inference from clinical trials research.
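The false-positive risk can be illustrated by simulation. In the sketch below there is truly no treatment effect (both groups are drawn from the same invented population), yet a conventional two-sided test at the 0.05 level still declares a “significant difference” in roughly 5% of simulated trials. The sample size, standard deviation, and number of trials are all assumptions chosen for illustration.

```python
import math
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

N = 30          # hypothetical subjects per group
SIGMA = 10.0    # assumed population standard deviation
TRIALS = 2000   # number of simulated experiments

false_positives = 0
for _ in range(TRIALS):
    # Both groups drawn from the SAME population: no true treatment effect
    treatment = [random.gauss(50, SIGMA) for _ in range(N)]
    control = [random.gauss(50, SIGMA) for _ in range(N)]

    diff = statistics.mean(treatment) - statistics.mean(control)
    se = math.sqrt(2 * SIGMA**2 / N)  # standard error of the mean difference
    if abs(diff) > 1.96 * se:         # two-sided test at alpha = 0.05
        false_positives += 1

print(f"False-positive rate: {false_positives / TRIALS:.3f}")
```

The observed rate hovers near 0.05 by construction: the significance level is precisely the false-positive rate the researcher agrees to tolerate, which is why a single positive result increases credibility but does not prove the causal effect.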

------

Insert Table 2 Here

------

The credibility of a causal assertion is a judgment made by informed observers. Members of a discipline make this judgment based on the credibility of the evidence from clinical trials research. The credibility of this evidence depends upon the characteristics of the research design (e.g., random assignment), the care used to implement the protocol specified in the research design, the handling and statistical analysis of the data, etc. Examples of professional judgments about causal effects are:

a)  antihistamines reduce itch in burn patients

b)  calcium prevents bone loss in osteoporosis patients

c)  confusion causes nurses to restrain patients

d)  loneliness causes depression among widows and widowers

e)  diversion decreases post-operative pain

f)  education increases salary of workers

Clinical researchers would like to find or create data that allow them to evaluate the credibility of these and many other causal assertions. There are methods to establish causation credibly. However, these methods are difficult to understand, to design, to implement, and to analyze. The purpose of this paper is to address these issues. We now turn to a discussion of causation and the three criteria for enhancing the credibility of causal assertions: correlation, temporal priority, and non-spuriousness.

II. Causation

Causal statements are “strong” statements. The causal statement “X causes Y” means that a change in X forces a change in Y. For a discipline to consider a relationship causal, there must be a credible conceptual reason why a change in X forces a change in Y that meets the causation criteria. Causal theoretical statements can be illustrated as follows:

(a)  An increase in a patient’s antihistamine level will reduce a burn patient’s burn itch.

(b)  An increase in calcium consumption will prevent bone loss that would have occurred had calcium consumption been low.