Randomised controlled trials: A non-random approach to policymaking

Published: October 26, 2019 2:34:27 AM

By Madhav Malhotra

This year’s Sveriges Riksbank Prize in Economic Sciences, awarded in memory of Alfred Nobel, went to Abhijit Banerjee, Esther Duflo and Michael Kremer for their pioneering use of experimental methods in development economics, and it was undoubtedly well-deserved. While some academics would argue that the committee has overlooked veterans better suited to the prestigious award, both in age and in contributions that chronologically precede those of the awardees, it is difficult to downplay the impact that experimental methods have had on the discipline of economics. And when has there ever been unanimous approval of the Nobel committee’s decision? It may be more worthwhile to take a constructive approach and use this opportunity to address some important questions that this year’s prize has raised: where do experimental methods fit within the discipline of economics, and how should they be used to formulate better policies for development?

One of the most commonly used experimental methods in economics is the Randomised Controlled Trial, or RCT. In its simplest form, an RCT involves randomly assigning a sample of individuals, or any economic agents, into two subgroups: a treatment group and a control group. Not unlike in medicine, the former is then administered a ‘treatment’, which can be any intervention, such as prenatal care for expecting mothers or cash vouchers for households below the poverty line. Subsequently, the difference in averages of the outcomes of interest between the two groups is recorded, with the aim of identifying the causal effect of the intervention on those outcomes. The popularity of RCTs can perhaps be attributed to the fact that the methods used earlier often captured confounding factors that could obfuscate the causal relationship and, therefore, produced misleading results. Randomisation ensures that the effect of the intervention is independent of possible confounding factors, to the extent the experimental design permits.
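The logic described above can be illustrated with a minimal simulation. The sketch below is purely hypothetical: it assumes a made-up outcome (a baseline score averaging 50) and an invented true treatment effect of 5 points, then shows how random assignment lets the simple difference in group means recover that effect, because confounders average out across the two arms.

```python
import random
import statistics

def run_rct(n, true_effect, seed=0):
    """Simulate a simple two-arm RCT and estimate the treatment
    effect as the difference in group means (all numbers illustrative)."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        baseline = rng.gauss(50, 10)   # unobserved individual variation
        if rng.random() < 0.5:         # random assignment to treatment
            treated.append(baseline + true_effect)
        else:
            control.append(baseline)
    return statistics.mean(treated) - statistics.mean(control)

estimate = run_rct(n=10_000, true_effect=5.0)
print(round(estimate, 1))  # close to the true effect of 5.0
```

With a large enough sample, the estimate converges on the true effect; in a small sample, chance imbalances between the groups can still distort it, which is one reason real trials report confidence intervals rather than a bare difference.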

The success of RCTs as a methodological tool in economics is in no small part due to research and policy advocacy organisations like J-PAL (Abdul Latif Jameel Poverty Action Lab) and IPA (Innovations for Poverty Action). Their networks of researchers and surveyors, along with several others in the academic community, have contributed insights on how to design robust experiments and tackle problems that emerge in practice. Banerjee, Duflo and Kremer have been at the forefront of this ‘Randomista’ movement and have popularised the use of these methods, particularly in the context of public interventions in health and education. Indeed, it is nothing short of a revolution if one looks at the sheer volume of economics research using RCTs published in the last few decades. Martin Ravallion, a notable economist, offers numbers to substantiate this claim: in 2015, there were 333 published research papers using randomisation as a tool for impact evaluation in economics, with an annual growth rate of 20%, almost double that of all scientific publishing since World War II (Ravallion, CGD Working Paper 492, 2018).

The use of these methods in economics helps respond to a classic criticism of the discipline: that economic policies are often shaped by outdated theoretical models or rules of thumb, and not founded in empirical reality. By importing from the discipline of medicine, economists have been able to develop tools that approximate scientific experiments and obtain informed answers to specific policy questions.

As an academic exercise, this is right on the mark in an applied economist’s never-ending pursuit of exact causal relationships in an otherwise complex world. It would also not be surprising to observe an exponential rise in the use of these methods in other social sciences in the coming years. As Banerjee himself put it, the approach allows the researcher to design an experiment and tailor the questions she wants to ask to the relevant context, rather than be restricted by the available data.

However, RCTs are extremely costly to run, often costing millions of dollars, even in the narrow contexts in which they are usually administered. That many, if not most, RCTs are not scaled up and do not translate into policy at the national or sub-national level raises the question of whether this is really a worthwhile exercise. Critics argue that the failure to scale is often due to issues of external validity, i.e. the effect of an intervention may be context-specific and inapplicable in a broader setting. Another reason, closer to home, is possibly the lack of political will to engage in evidence-based policymaking. Irrespective of the rationale, there seems to be an insufficient return on these experimental methods in the realm of policymaking.

RCTs are also consistent with a general trend in economic research that gives primacy to the study of individuals and firms as the primary decision-making agents, with a focus on understanding the micro-foundations of relationships and the precise mechanisms by which effects are transmitted between economic agents. Any normative implication of such research would typically be, at worst, non-revelatory or, at best, revelatory but contextual. This seems at odds with the game-changing macro-level trade and industrial policy initiatives witnessed in the development transformation of East Asian countries in the late 20th century and, more recently, in China, which have been credited with improving living standards and eradicating poverty at an unprecedented scale and pace.

This apparent disconnect has been acknowledged by this year’s Nobel awardees. In a paper prepared for the World Bank’s ‘The State of Economics, The State of the World’ conference in 2016, they admit: “The view is that the ‘academic’ desire to come up with the cleverest research design may not line up with the practitioners need to identify scalable innovations (the next cellphone), or ‘change systems’ (healthcare) or reform institutions (democracy).” The question, then, is where the middle ground lies in the debate on the role of RCTs in policymaking.

The answer may depend on the camp one belongs to, academics or policy practitioners, and how these groups envision their role in society. The privilege afforded to academics in running the experiments they do bestows upon them a responsibility to generate returns beyond the incentives given to the sample of participants involved in their studies. Such considerations should feature significantly in the determination of their research agendas and research designs.
At the same time, practitioners should be subject to stricter ‘perform or perish’ standards, not unlike their academic counterparts. They must think of causality as the gold standard to aspire to, but need not be beholden to strict adherence to that objective. Indeed, collectively, they need to provide the best possible answers, with ‘best’ signifying proximity to causal identification and ‘possible’ the need to stay as close as feasible to the relevant context. There is also a stockpile of existing research (both experimental and observational) that is often ignored in the formulation of new policies. Alongside it is an equally large supply of young researchers and graduate students in economics and closely related fields who can be hired to help consult this overlooked material. Importantly, even if research in development economics veers towards randomisation, the collective approach must move beyond trial-and-error and involve a non-random process of policy formulation and implementation.

Observational and experimental research approaches need to be seen as complementary and their use in policy formulation non-negotiable. Indeed, where experimental methods are infeasible or highly contextualised, the use of large observational datasets is vital for decision-making and priority-setting, particularly at national and sub-national levels. This contention rests on two underlying requirements.

First, practitioners in the government should have the requisite analytical expertise and a workplace environment conducive to consulting rigorous evidence when taking policies from the drawing board to the real world. Second, the integrity of grant-making institutions in the arena of policy-oriented research should be uncompromised by ideological considerations and dedicated to the goal of encouraging academic research that responds to what developing countries actually need.

If these institutions cooperate in their pursuit of ‘truth’ and help bridge this gap between policymaking and academia, it will make the former more effective, and give the latter a better name.
