This year’s Sveriges Riksbank prize in economic sciences in memory of Alfred Nobel has gone to three American development economists—Abhijit Banerjee, Esther Duflo and Michael Kremer (BDK henceforth). The prize recognises the importance of experimental approaches in global poverty research and development economics, chiefly randomised controlled trials (RCTs), also known as field experiments. The Abdul Latif Jameel Poverty Action Lab (J-PAL), co-founded by Banerjee and Duflo, has been funding, organising and supervising many field experiments, and many researchers now use RCTs.
Economics is an inexact science of social behaviour and, true to that reputation, every theory or study is subject to biases, critiques and prejudices.
Nowadays, there are many variations and extensions of RCT methodology. RCTs measure the comparative effect of a cause, i.e., a policy intervention randomly administered to a selected group (the treatment group), against a comparison group, known as the control group, that does not receive the intervention. A popular estimation approach in such settings is the ‘difference in differences’ (DID) regression method. Esther Duflo’s early influential paper on the effect of school construction in Indonesia on education outcomes used this DID method. Duflo’s most cited work to date, co-authored with Marianne Bertrand and Sendhil Mullainathan in 2004, analysed the reliability of DID estimates. Beyond DID, methods such as propensity score matching and regression discontinuity design have also become popular within the field experiment literature.
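The DID logic can be seen in a small simulation. This is a minimal illustrative sketch, not any of the studies cited above: the numbers (a hypothetical programme that raises test scores by 5 points, a common upward trend of 2 points) are invented for exposition, and the estimator is simply the difference of before–after differences between the two groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a programme raises test scores by 5 points.
n = 1000
true_effect = 5.0
trend = 2.0  # both groups drift upward over time (parallel trends)

# Pre-period outcomes share a common baseline.
treat_pre = rng.normal(50, 10, n)
ctrl_pre = rng.normal(50, 10, n)

# Post-period: both groups gain the trend; only the treated gain the effect.
treat_post = rng.normal(50 + trend + true_effect, 10, n)
ctrl_post = rng.normal(50 + trend, 10, n)

# The DID estimator: the control group's change nets out the common trend.
did = (treat_post.mean() - treat_pre.mean()) - (ctrl_post.mean() - ctrl_pre.mean())
print(f"DID estimate: {did:.1f}")
```

A naive before–after comparison of the treated group alone would conflate the trend with the effect; subtracting the control group’s change is what recovers something close to the true 5-point effect.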
This is also an opportune time to remember that experimental approaches are neither the only guiding light nor always the best method for studying development phenomena. Owing to several methodological pitfalls, RCTs are not the best approach to development questions in many settings. Another Nobel laureate, Angus Deaton, among others, has been a vocal critic, and rightly so, of the often erroneous lessons or approaches we derive from RCTs. Angus Deaton and Nancy Cartwright (DC henceforth) have written a seminal critique of RCTs titled “Understanding and misunderstanding randomised controlled trials”, published in the journal Social Science & Medicine in 2018.
A finding from an RCT conducted in one region or for one population may not hold for other regions or populations because of contextual differences; RCTs are prone to this lack of external validity. In many circumstances, randomising an intervention also raises ethical and social concerns. Moreover, RCTs are very expensive.
In another article titled “Reflections on RCTs” published in the same journal, DC remarked, “Experiments are sometimes the best that can be done, but they are often not. Hierarchies that privilege RCTs over any other evidence irrespective of context or quality are indefensible and can lead to harmful policies. Different methods have different relative advantages and disadvantages.”
All estimation methods require that their assumptions be met, and causal inference methods are no exception. If the causal assumptions hold, many methods, including RCTs, can provide credible parameter estimates; in this sense, RCTs are not special. One assumption of an RCT is that the treatment or intervention is independent of (orthogonal to) all causes of the outcome other than its own downstream effects. Many ‘randomistas’ (believers in RCTs) cite randomisation as the source of this orthogonality, but DC argue that it is not. There are concerns about whether the treatment assignment was truly random or merely made to look random under some statistical scheme. Further, post-randomisation problems can occur: owing to a lack of blinding, or to varying time, place or length of treatment across the two groups, an RCT cannot rule out correlations with other factors affecting the outcome. Many ignore the fact that full blinding of subjects in field experiments is impossible and take RCT results at face value. Even where full blinding is assumed, an RCT must ensure, not assume, that no other relevant systematic differences arose between treatment and the reporting of results. This orthogonality condition must be demonstrated in any case, not merely assumed.
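A narrower, purely statistical point behind DC’s argument can be checked numerically: randomisation balances covariates only in expectation, not in any single draw. The sketch below, an invented illustration rather than anything from DC’s paper, repeatedly randomises a small trial of 20 units and counts how often a baseline covariate that also drives the outcome ends up substantially imbalanced between the two groups.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20  # a small trial: 10 treated, 10 control
n_draws = 2000
imbalances = []

for _ in range(n_draws):
    # Hypothetical baseline covariate (e.g., household income) that
    # also affects the outcome of interest.
    x = rng.normal(0.0, 1.0, n)
    # Random assignment: exactly half the units are treated.
    treated = rng.permutation(n) < n // 2
    # Covariate imbalance between the realised groups.
    imbalances.append(abs(x[treated].mean() - x[~treated].mean()))

imbalances = np.array(imbalances)
# Share of randomisations where the groups differ by over half a
# standard deviation on the covariate.
share_imbalanced = (imbalances > 0.5).mean()
print(f"Share of draws with large imbalance: {share_imbalanced:.2f}")
```

In roughly a quarter of the random assignments here, the two groups differ by more than half a standard deviation on the covariate, so a single small RCT can easily produce treatment and control groups that are not comparable on factors that matter for the outcome.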
Researchers who use RCTs are observed signing agreements with provincial and national governments to obtain funds and approvals for such studies. Since RCTs are very expensive, some critics point to an incentive structure that encourages tailoring results. Then there is the ‘publication bias’ that afflicts some researchers.
To sum up, we quote DC, “They show that the treatment worked somewhere, usually a very special somewhere (often made more special by the stringent requirements of doing a good RCT) and it is a long and often difficult evidential road from that to ‘It will work here’. And for this endpoint, the RCT is not a particularly natural starting point. Indeed, this is one of the misunderstandings that we are most concerned about, that a well-done RCT can be automatically transported, simply by virtue of being an RCT.” It’s a good time to listen to Deaton and put the RCT studies in perspective.
Assistant professor (Economics), IIT Bhilai. Views are personal.