Randomized Controlled Trials and Experimental Evidence

I am surprised to find that there is so much material on the limitations of randomized controlled trials (RCTs). It is curious that this knowledge has not reached the consciousness of many specialists in the medical community.

In order to understand randomized controlled trials, it helps to understand experimental research methods in general. There are big issues pertaining to research designs, controls, confounds, randomization, statistics, bias, incentives and deception which anyone using scientific evidence to make a judgment or buttress an argument should understand. The discussions become very technical. All scientific evidence is underdetermined in some manner; that is another way of saying that our understanding is confounded. One study does not establish a case; it only points in a certain direction.

A Cheat Sheet on Key Ideas Underlying Research

Below is a cheat sheet on some key scientific ideas, of relevance to research, including randomized controlled trials. It may seem peripheral to the topic of RCT, but I think that understanding some basic issues around research and inference will help in better understanding the documents discussed in the second part of the essay.


Epistemology is the study of knowledge. The philosophy of science applies epistemological thinking to scientific research. Those espousing methods of research should really have some familiarity with this discipline, since it underpins science.


Inference is the process of making a judgment or buttressing an argument from available evidence. It has three varieties:

  1. Deduction is the method of applying the established patterns of formal logic to assertions to assess whether conclusions follow correctly from the premises. There is no guarantee that the original premises are true, that is, that the argument is sound.
  2. Induction is the process of generalizing from past experience to predict future events based on perceived patterns, perceived regular occurrences.
  3. Abduction is sometimes called “inference to the best explanation”: from the available evidence, we adopt the hypothesis which, if true, would best account for it.

Scientific evidence

One study does not establish a case; it only points in a certain direction. All scientific evidence is underdetermined. Evidence follows these dictates:

  1. It must be obtained, either produced or found
  2. It must be evaluated for reliability, correctness and provenance
  3. It must be interpreted: the implications made clear, the fit within an existing body of knowledge examined

Is there scientific objectivity? All of this assessment of evidence happens within the context of current beliefs, scientific and other, and biases. It could not be otherwise.


Probabilities quantify the chances of events happening. They can be based on deduction and counting, or on intuition and conjecture, possibly well informed, but subjective.

Statistics are generally classed as descriptive or inferential. In descriptive statistics, data is collected, sorted, categorized and summarized in various mathematical ways. In inferential statistics, statistics based on samples are generalized to larger populations of interest, based on the probability that the observed results arose by chance.

Frequentist thinking looks at chances as frequencies, or proportions, in similar events or items.

Bayesian thinking looks at prior probabilities, determined in some fashion, and applies the rules of conditional probability to assess fresh probabilities.
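The Bayesian idea can be made concrete with a small worked example. The sketch below applies the rule of conditional probability to a hypothetical diagnostic test; the prevalence, sensitivity and false-positive figures are illustrative assumptions, not drawn from any real test.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Illustrative numbers (assumptions): a condition with 1% prevalence,
# a test with 90% sensitivity and a 5% false-positive rate.
prior = 0.01            # P(condition), before seeing the test result
sensitivity = 0.90      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

# Total probability of a positive result, by the law of total probability
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: probability of the condition given a positive test
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.154: a positive test is far from conclusive
```

Even with a fairly accurate test, the low prior pulls the posterior down: most positives come from the much larger healthy group.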

Inferential statistics reasons from samples to populations. From samples we can make predictions about the broader population from which the samples are drawn. This has problematic aspects: the inference depends on the sample being representative and on assumptions about the underlying distribution.
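The sample-to-population step can be illustrated with a simple confidence interval. The sketch below draws a random sample from a made-up population and estimates the population mean; the population itself and the use of the 1.96 normal approximation are illustrative assumptions.

```python
import math
import random

random.seed(1)

# A hypothetical population (illustrative: values roughly normal around 170)
population = [random.gauss(170, 10) for _ in range(100_000)]
sample = random.sample(population, 100)

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)  # standard error of the sample mean

# Approximate 95% confidence interval for the population mean
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"estimate {mean:.1f}, 95% CI ({low:.1f}, {high:.1f})")
```

The interval quantifies sampling uncertainty only; it says nothing about whether the sample was drawn fairly in the first place, which is exactly the problematic part.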


Association lets us see that certain things seem to occur together with regularity.

Correlation gives us a measure of how strongly, how consistently this association holds. Mathematically, a correlation of zero says that there is no linear relationship, a correlation of positive one says that two factors are perfectly correlated, always going together, and a correlation of negative one says that they are perfectly inversely correlated.
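These three cases can be shown with a small Pearson correlation function; the `pearson` helper and the data below are illustrative, not drawn from any study.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient: covariance over the product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

x = [1, 2, 3, 4, 5]
print(pearson(x, [2, 4, 6, 8, 10]))   # 1.0: perfectly correlated
print(pearson(x, [10, 8, 6, 4, 2]))   # -1.0: perfectly inversely correlated
print(pearson(x, [3, 5, 2, 5, 3]))    # ~0.0: no linear relationship
```

Note that a zero correlation only rules out a linear relationship; two factors can still be related in some nonlinear way.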


Causation gives us the idea that this association is one in which one factor depends, to a lesser or greater extent, on a second factor. By varying the first factor, the independent factor, we can produce a repeatable change in the second factor, the dependent factor. We can quantify these factors and abstract them, or operationalize them, and call them variables.


Scientific research methods do not necessarily involve experimental studies, or laboratory work. There are many fields where experiment plays a secondary role.

However, in many fields, research does involve the manipulation of various independent factors in order to see the effects on assumed dependent factors. In general, the experiment is designed to test a research hypothesis, which may be part of a larger theory. In order to conduct experiments, certain standard methods have been developed in order to design experiments which have a good chance of providing correct results and allowing for interpretation. This interpretation is always done within the context of a broader set of beliefs about the nature of the research area.

One of the key aspects of all research is that the factors being manipulated are not necessarily the factors determining the experimental outcomes. We call factors which might be affecting the outcome and which are not being manipulated confounders, or confounds. These possible confounds may be identified and controlled for to some extent through good research design. However, not all confounds will be identified, and cannot be controlled for so easily. Matching and stratifying across subgroups on known factors and randomization across subgroups are mechanisms used to control for confounds. There is a lot more to research design than this, but these ideas are basic.
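Stratified randomization, one of the mechanisms mentioned above, can be sketched briefly. In the example below the subjects, the `sex` stratum and the `stratified_assign` helper are all hypothetical, invented for illustration: subjects are shuffled within each stratum so that a known confounder stays balanced across arms.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical subjects with one known confounder (sex)
subjects = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(20)]

def stratified_assign(subjects, key):
    """Randomize within each stratum so the known confounder stays balanced."""
    strata = defaultdict(list)
    for s in subjects:
        strata[s[key]].append(s)
    control, treatment = [], []
    for group in strata.values():
        random.shuffle(group)           # random assignment within the stratum
        half = len(group) // 2
        control.extend(group[:half])
        treatment.extend(group[half:])
    return control, treatment

control, treatment = stratified_assign(subjects, "sex")
# Each arm now has the same number of F and M subjects
print(sum(s["sex"] == "F" for s in control),
      sum(s["sex"] == "F" for s in treatment))
```

This only balances the factors we thought to stratify on; unknown confounders are left to the randomization itself.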

It is sometimes said that randomization controls for unconscious bias and for biasing factors independent of human judgment.  This is at least partially true. We can regard bias as a set of confounding factors. Bias can occur at all stages of a study, from sample selection, to recording of data, analysis of data, and interpretation of results. Randomization will not help with all of these sorts of bias.

By making the subjects, if human, unaware of the experimental group that they are in, a certain sort of bias can be controlled. This is called a blind trial. By making the experimenters unaware of the groups, we can remove some experimenter bias. This, combined with the first method, is called a double blind trial. There are cases where this works, but there is a gap between theory and practice.

Research Samples

In performing an experiment, subjects are required. This means that they must be selected in some fashion. There will be at least two groups in a well-designed experiment, a control group and a treatment group. If the sample control group differs on some significant confounding factor from the sample treatment group, we can end up with worthless results due to this bias. We can help control for this confound by matching across groups on factors considered important, such as age, sex, weight and so on. We can also assign subjects to the different groups on some randomized basis. We can combine these strategies. In order to have some confidence in our results, we should use the largest samples we can obtain. Randomization is not particularly effective as a strategy for small groups.
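The weakness of randomization in small groups can be demonstrated by simulation. The sketch below is hypothetical: a binary confounder is carried by half the subjects, and we estimate how often simple randomization leaves one arm with a lopsided share of it.

```python
import random

random.seed(42)

def lopsided_fraction(n_subjects, trials=10_000):
    """Fraction of simulated trials in which one arm ends up with a
    lopsided share (over 70% or under 30%) of a binary confounder
    carried by half the subjects."""
    lopsided = 0
    for _ in range(trials):
        confounder = ([True] * (n_subjects // 2) +
                      [False] * (n_subjects - n_subjects // 2))
        random.shuffle(confounder)
        arm = confounder[: n_subjects // 2]   # first half forms one arm
        share = sum(arm) / len(arm)
        if share > 0.7 or share < 0.3:
            lopsided += 1
    return lopsided / trials

# Small trials are far more likely to end up imbalanced on the confounder
print(lopsided_fraction(10), lopsided_fraction(100))
```

With ten subjects, a badly imbalanced split happens in roughly a fifth of trials; with a hundred subjects it is vanishingly rare, which is why sample size matters so much to randomization.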

We can have unconscious bias in our sample selection, and end up with groups that are not equivalent in key dimensions. We do not want to use ad hoc methods of assigning subjects to a group, or in obtaining subjects for the sample, but real life contingencies such as funding, time frames, available subjects, and other things often get in the way of experimental rigour.

Integrity and Research Design

Experimental design cannot control for dishonesty, for deception. This is worse than bias, for it cannot be detected in any reliable fashion, and no research design can control for it.

RCT Major Themes

There seem to be a number of significant considerations for the use and interpretation of randomized controlled trials in research, and arguments for the use of other methods in many cases. This is not to say that RCTs do not have an important role to play in research, but they are just one of various methods. For further reading, see “Medicine’s fundamentalists” and “Randomized controlled trials.”

I will not attempt to summarize such a voluminous literature on RCTs, but instead will focus on a few articles, published in journals by respected researchers. The first is by Thomas R. Frieden, M.D., M.P.H., a former director of the Centers for Disease Control and Prevention. In the New England Journal of Medicine he writes “Evidence for Health Decision Making – Beyond Randomized, Controlled Trials.”

The second is by Angus Deaton, FBA, Dwight D. Eisenhower Professor of Economics and International Affairs Emeritus at the Princeton School of Public and International Affairs and the Economics Department at Princeton University and a winner of the Nobel Memorial Prize in Economic Sciences and by co-author Nancy Cartwright, FBA FAcSS, Professor of Philosophy at Durham University and a Distinguished Professor at the University of California, San Diego (UCSD). In Social Science & Medicine, they write, “Understanding and misunderstanding randomized controlled trials.”

I also reference in the Bibliography a piece by Alexander Krauss in the Annals of Medicine, “Why all randomised controlled trials produce biased results,” and a response in the same journal by Andrew D. Althouse, Kaleab Z. Abebe, Gary S. Collins and Frank E. Harrell Jr, “Response to ‘Why all randomized controlled trials produce biased results,’” without attempting commentary.

Evidence for Health Decision Making – Beyond Randomized, Controlled Trials

In this paper by a healthcare professional, randomized controlled trials are discussed with respect to benefits and limitations, and other methods of gathering evidence for medical research are presented, along with considerations for use.

From the paper by Frieden (above), the author makes the following points (précised here):

  1. There is no single, best approach to the study of health interventions
  2. Clinical and public health decisions are almost always made with imperfect data
  3. We should be promoting transparency in study methods
  4. We should be ensuring standardized data collection for key outcomes
  5. We need to use new approaches to improve data synthesis
  6. Improved data synthesis provides critical steps in the interpretation of findings
  7. Improved data synthesis allows us to better identify data for action
  8. It must be recognized that conclusions may change over time
  9. There will always be an argument for more research and for better data
  10. Waiting for more data is often an implicit decision not to act or to act on the basis of past practice rather than best available evidence
  11. The goal must be actionable data — data that are sufficient for clinical and public health action
  12. We need methods which produce data that have been derived openly and objectively
  13. We need data which enable us to say, “Here’s what we recommend and why.”

Understanding and misunderstanding randomized controlled trials

In this paper, a Nobel prize-winning economist and a co-author who is a professor of philosophy discuss various issues in research and in the use of randomized controlled trials.

From the paper by Deaton and Cartwright (above), the authors make the following points:

  1. Randomization does not balance confounders in any single trial.
  2. Unbiasedness is of limited practical value compared with precision.
  3. Asymmetric distributions of treatment effects pose threats to significance testing.
  4. The best method depends on hypothesis tested, what’s known, and cost of mistakes.
  5. RCT results can serve science but are weak ground for inferring ‘what works’.


Bibliography

Althouse, A. D., Abebe, K. Z., Collins, G. S., & Harrell, F. E. (2018). Response to “Why all randomized controlled trials produce biased results.” Annals of Medicine, 50(7), 545–548. https://doi.org/10.1080/07853890.2018.1514529
Deaton, A., & Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. Social Science & Medicine, 210, 2–21. https://doi.org/10.1016/j.socscimed.2017.12.005
Frieden, T. R. (2017). Evidence for Health Decision Making—Beyond Randomized, Controlled Trials. New England Journal of Medicine, 377(5), 465–475. https://doi.org/10.1056/NEJMra1614394
Krauss, A. (2018). Why all randomised controlled trials produce biased results. Annals of Medicine, 50(4), 312–322. https://doi.org/10.1080/07853890.2018.1453233

