From the Washington Post (Mar 16):
How empirical studies of political violence (can) help policymakers
U.S. and British soldiers talk at the site of a suicide car bomb attack in Kabul, Afghanistan, on Jan. 5. (AP Photo/Massoud Hossaini)
In a recent New York Times opinion piece, “Where Terrorism Research Goes Wrong,” social psychologist Anthony Biglan argues that, given the importance of antiterrorism programs and the huge resources devoted to them, far too few are subjected to randomized controlled trials (RCTs) evaluating their efficacy. To his knowledge only two such studies have been undertaken: one evaluating the effects of aid in Afghanistan, the other evaluating the effects of an anti-violence campaign preceding a Nigerian election. While we heartily agree with his appeal for more RCTs, his evaluation of the state of the field is inaccurate, and it’s worth discussing why.
As members of the Empirical Studies of Conflict Project (ESOC), a research collective dedicated to the study of violence and what can be done to reduce it, we agree with Biglan’s central thesis. If a larger portion of antiterrorism budgets were devoted to evaluation, we would have better information to guide policymakers and a better chance at decreasing violence. Also, since rigorous evaluations can identify cheaper interventions that outperform more expensive ones, funding more of them might save money in the long run – and hopefully save lives, as well.
But as an overview of the field, Biglan’s count of RCTs is simply off the mark. First and foremost, breaking off “terrorism research” as a field distinct from broader research into political violence is like studying diabetes without thinking about people’s eating habits. Most terrorism happens within the context of broader conflicts and is often fed by them. The complex web of interconnections between the Afghan civil wars, terrorism in Pakistan, and the threat of transnational terrorism being organized in Pakistan’s Federally Administered Tribal Areas (FATA) is but one example of these links.
And once we widen our scope beyond the strict boundaries of terrorism studies, then, as readers of the Monkey Cage well know, there is more RCT-based work.
Chris Blattman has used randomized trials in such countries as Liberia to study policies intended to keep former combatants from returning to crime and violence.
Michael Callen and James Long successfully used one to demonstrate how to reduce corruption in Afghan elections, and in a recent follow-up (with Eli Berman and Clark Gibson) showed how that intervention improved attitudes toward the government.
Christine Fair, Neil Malhotra and Jake Shapiro have worked with coauthors to conduct surveys and experiments in Pakistan that address the complexity of local attitudes towards militancy, as have Jason Lyall and colleagues in Afghanistan.
These results should be celebrated rather than overlooked, given the logistical and bureaucratic difficulties of implementing RCTs and primary data collection in conflict-affected environments. Implementers and funders often resist evaluation and the provision of data to academic researchers – typically citing security concerns when they do so. Government agencies likewise resist starting studies, regularly arguing that research could endanger human subjects and staff, or that uncertainty about the security environment could shut studies down before they are completed.
To be sure, sometimes implementer resistance is well founded. The ethical guidelines governing RCTs can be hard to meet when assessing programs intended to combat terrorism. Still, these challenges can and should be overcome by incorporating good research design into costly programming. At one point Biglan asks, “Do we know whether drones are increasing or decreasing the rate of terrorists’ attacks?” Using a randomized trial to find out would be impossible for both scientific and ethical reasons. Scientifically, the drone program is designed to affect a small number of outcomes: attacks by al-Qaeda and the small number of other targeted groups. In medical terms, the number of doses is large, but it all goes to a few patients, so we have an insufficient sample to establish the treatment contrast.
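To make the “few patients” point concrete, here is a minimal sketch of a power calculation with entirely made-up numbers; the effect size and group counts are illustrative assumptions, not estimates about any real program.

```python
# A rough illustration, with made-up numbers, of why a handful of treated units
# cannot establish a treatment contrast: statistical power collapses when the
# "dose" is concentrated in a few "patients." Nothing here describes real data.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (3, 5, 10, 100):
    # power to detect a conventionally "large" effect (Cohen's d = 0.8) at alpha = 0.05
    power = analysis.power(effect_size=0.8, nobs1=n_per_group, ratio=1.0, alpha=0.05)
    print(f"units per arm: {n_per_group:>3}   power: {power:.2f}")
```

The lesson is generic: however large the number of strikes, the effective sample is the number of treated groups, and that is far too small.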
Ethically, the proposal to use RCTs here is even more dubious. In medical trials, it is considered unethical to withhold a drug from the control group once researchers believe the treatment works. The U.S. government is not capacity-constrained for drone strikes, at least not once a target is under sufficient surveillance to decide a strike is worthwhile, and officials do not order strikes unless they strongly believe the target is valuable. Withholding them for research purposes necessarily fails a basic ethical test. But as Patrick Johnston and Anoop Sarbahi show, there are quasi-experimental routes to better understanding how drone strikes work.
More broadly, while we can celebrate the quality of evidence that randomized trials have been producing in the field of economic development, they are by no means giving a comprehensive picture of how development works. Several years ago we conducted a survey of all the randomized trials of social programs conducted outside the U.S. and either published in top journals or still underway. The 640 we came up with are surely an undercount, but a representative one. Strikingly, 30 percent took place in only three countries (India, Kenya and Mexico), suggesting how scholarship gravitates toward poor but research-friendly nations. Only 20 percent of the studies took place in autocracies, although 35 percent of the world’s population lives in such countries.
This is not surprising; people in power in autocracies and conflict-ridden countries make it difficult to conduct studies and produce reliable results. To begin with, they tend to be reluctant to share data. Their budgets for evaluation and after-the-fact learning are tiny. And these are precisely the leaders and countries we would need to work with to conduct randomized controlled trials of antiterrorism programs.
There are serious questions about how much randomized trials of social programs tell us about how a program that tests as successful will work on a bigger group (say, if it is rolled out nationwide) or a different group (say, if it is exported to a different country). Programs trialed in collaboration with researchers, as they usually are, often work much differently when taken over by government ministries. In counterterrorism, extrapolation is even more suspect, since organizations are both heterogeneous and strategic: when air travel was secured after a wave of hostage-taking in the 1970s, terrorist groups switched to other kinds of attacks; as profiling of terrorists improves at our borders, they switch to recruiting within our borders.
Luckily, strict randomization is not the only way to establish causal links. “Natural experiments” are created when an outside force creates assignment into treatment and control groups that is independent of programs’ expected impacts. For example, Nathan Nunn and Nancy Qian published a study last year in which they used random fluctuations in U.S. wheat production to show that, broadly speaking, food aid can stoke violence in conflict-ridden countries.
David Yanagizawa-Drott used the distribution of preexisting geographical features that blocked radio transmissions to quantify how hate-mongering broadcasts fueled the Rwandan Genocide.
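For readers who want to see the mechanics, here is a minimal sketch of the instrumental-variables logic behind natural experiments like the ones above, using entirely synthetic data; the variable names (wheat_shock, food_aid, conflict) are illustrative assumptions, not the authors’ datasets or code.

```python
# A minimal, synthetic-data sketch of the natural-experiment (instrumental
# variables) idea: use an outside shock that moves the "treatment" but is
# unrelated to local conditions, and compare it with a naive regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

wheat_shock = rng.normal(size=n)        # instrument: weather-driven supply fluctuations
local_conditions = rng.normal(size=n)   # unobserved confound driving both aid and conflict
food_aid = 0.8 * wheat_shock + 0.5 * local_conditions + rng.normal(size=n)
conflict = 0.3 * food_aid + 0.7 * local_conditions + rng.normal(size=n)

# Naive OLS of conflict on aid is biased by the unobserved confound.
naive = sm.OLS(conflict, sm.add_constant(food_aid)).fit()

# Two-stage least squares: isolate the part of aid driven only by the outside shock.
stage1 = sm.OLS(food_aid, sm.add_constant(wheat_shock)).fit()
stage2 = sm.OLS(conflict, sm.add_constant(stage1.fittedvalues)).fit()

print(f"naive OLS estimate: {naive.params[1]:.2f}")   # pulled away from the truth by the confound
print(f"2SLS estimate:      {stage2.params[1]:.2f}")  # close to the true effect of 0.30
```

A hand-rolled second stage like this yields the right point estimate but not valid standard errors, so in practice one would use a proper IV estimator; the point here is only the identification logic: the outside shock, not local conditions, drives the variation being compared.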
In the Philippines, our group exploited the government’s arbitrary poverty threshold that determined communities’ eligibility for a community-based aid program. Aside from being just-eligible or just-ineligible, the villages we examined showed no major differences. This, along with detailed conflict data from the Armed Forces of the Philippines, allowed us to show how the influx of targeted development projects into communities affected violence in several dimensions – against whom and at whose hands.
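As a companion to the sketch above, here is a minimal regression-discontinuity illustration of the just-eligible versus just-ineligible comparison, again with synthetic data; the names (poverty_score, eligible, violence) and all numbers are hypothetical, not the actual Philippine data.

```python
# A synthetic-data sketch of a regression-discontinuity comparison around an
# eligibility cutoff: villages just above and just below the threshold are
# similar, so a jump in the outcome at the cutoff is attributed to the program.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 4000
cutoff = 0.0

poverty_score = rng.uniform(-1, 1, n)               # running variable, centered at the cutoff
eligible = (poverty_score >= cutoff).astype(float)  # program eligibility
# Violence trends smoothly in poverty except for a jump at the cutoff (the program effect).
violence = 2.0 * poverty_score - 0.4 * eligible + rng.normal(scale=0.5, size=n)

# Keep only villages in a narrow bandwidth around the threshold, then allow
# separate linear trends on each side and estimate the jump at the cutoff.
bandwidth = 0.25
near = np.abs(poverty_score - cutoff) <= bandwidth
X = np.column_stack([
    eligible[near],
    poverty_score[near],
    eligible[near] * poverty_score[near],
])
rd = sm.OLS(violence[near], sm.add_constant(X)).fit()
print(f"estimated jump at the cutoff: {rd.params[1]:.2f}  (true effect: -0.40)")
```

The bandwidth and the trend specification are the key judgment calls; the estimate is only as credible as the claim that just-eligible and just-ineligible villages are otherwise comparable.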
If you widen the horizon beyond randomized or quasi-randomized trials, you’ll find even more interesting research that exposes causal relationships behind insurgent violence and the programs which could combat it.
Reed Wood in Africa, Oeindrila Dube and Juan Vargas in Colombia, and Oliver Vanden Eynde in India have shown how economic fluctuations affect conflict.
Efraim Benmelech, Claude Berrebi and Esteban Klor have shown how punitive home demolition affects suicide bombings in Palestine and which kinds of terrorists get sent against which targets.
Melissa Dell has shown how changes in government policy affect gang violence in Mexico.
Jeff Clemens has illuminated the role of opium eradication policies in stoking the Afghan insurgency, and Jason Lyall has demonstrated how indiscriminate violence by government forces can sometimes quell violence in some places even as it stokes civilian sympathy with insurgents in others.
Admittedly, most of these studies do not meet Biglan’s strict requirement of randomized evaluation as the evidentiary standard, but they all meet high standards for program evaluation. Until governments and aid organizations make the right move, attaching evaluation budgets to more programs and providing the quality of data necessary for this research, an ad hoc scholarly effort may have to do. And while it is second best to what could be done with systematic support from implementing agencies, this body of work should be recognized.
As in other fields of research, RCTs alone cannot address our crucial questions. If we want to know “what works” to reduce violence in fragile and conflict-affected states, we need to thoughtfully triage between experimental and observational work.
[Eli Berman is a professor in the department of economics at the University of California at San Diego. Joseph Felter is a senior research scholar at the Center for International Security and Cooperation at Stanford University. Ethan B. Kapstein is the Arizona Centennial Professor at Arizona State University and a senior research fellow at the University of Oxford and the UK Department for International Development. Jake Shapiro is an associate professor of politics and international affairs at Princeton University. The authors direct the Empirical Studies of Conflict program, a research initiative based at Princeton, UCSD and Stanford, funded by grants from a range of organizations including the U.S. Department of Defense through the Minerva Research Initiative.]
http://www.washingtonpost.com/blogs/monkey-cage/wp/2015/03/16/how-empirical-studies-of-political-violence-can-help-policymakers/