Berkeley Initiative for Transparency in the Social Sciences

Tag Archives: EGAP

Replicate it! A Proposal to Foster Knowledge Accumulation

Thad Dunning and Susan D. Hyde in the Washington Post:

Like many social scientists, we take it almost as an article of faith that scientific methods will advance our knowledge about how the world works. The growing use by social scientists of strong research designs — for example, randomized controlled experiments or natural experiments — increases the reliability of causal claims in individual studies. Yet building scientific knowledge across studies is much more difficult than many acknowledge. As The Economist has recently summarized, if science is based on a principle of “trust but verify,” there is a growing realization that there is too much “trust” and not enough “verify.”

What can be done about this?

One potential solution is to change the incentives researchers face, in part by funding new research in a manner that requires replication. If incentives are improved, important studies can be replicated across contexts, and enough scholars may be willing to build in additional research time to coordinate across studies such that their work better contributes to the accumulation of knowledge. This is exactly what the Experiments in Governance and Politics (EGAP) network, in conjunction with UC Berkeley’s Center on the Politics of Development (CPD), is attempting to do as it pilots its first research “regranting” round. EGAP is now soliciting proposals on a focused question from researchers around the world, with a goal of making four to six awards, each within the $200,000-$300,000 range.

The deadline for submitting proposals is June 16, 2014.


Panel on Transparency and Replication @ EGAP 11 (Berkeley, CA — Friday 4/11)

Experiments in Governance and Politics (EGAP) will hold its eleventh biannual meeting in Berkeley, CA this Friday and Saturday (April 11-12). In addition to research design workshops and presentations of recent papers, the meeting will feature an interdisciplinary panel on transparency and replication (Friday, 3:50-6:00 PM):

  • 3:50 – 4:10 PM Thad Dunning “EGAP Regranting Initiative”
  • 4:10 – 4:30 PM Ted Miguel “Berkeley Initiative for Transparency in the Social Sciences (BITSS)”
  • 4:30 – 4:50 PM Macartan Humphreys “White Paper on Journal Best Practices”
  • 4:50 – 5:10 PM Annette Brown “3ie Internal Replication Project”
  • 5:10 – 5:30 PM Discussion and wrap-up

More details & the full agenda are available on the EGAP website.

Funding Opportunity for Coordinated Research on Political Accountability

The Experiments in Governance and Politics (EGAP) network is requesting statements of interest for leading-edge experimental research projects on political accountability in developing countries. This grant round is specifically designed to foster the accumulation of knowledge across studies. Successful applicants will engage in closely related projects and adhere to a common set of research standards. The $1.8 million pool will support 4-6 research projects that address a common theme and/or one or more “grouped” applications that link 2-3 individual projects across different research sites. This request for short statements of interest seeks to identify clusters of research projects with comparable interventions and outcome measures, which will form the basis of the main call. The deadline for submission of statements is March 17, 2014. More info here.

BITSS Affiliates Advocate for Higher Transparency Standards in Science Magazine

In the January 3, 2014 edition of Science Magazine, an interdisciplinary group of 19 BITSS affiliates reviews recent efforts to promote transparency in the social sciences and makes the case for more stringent norms and practices to help boost the quality and credibility of research findings.

The authors, led by UC Berkeley economist Ted Miguel, deplore a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results get published more easily than null, complicated, or replication results. This misalignment between scholarly incentives and scholarly values, the authors argue, spurs researchers to present their data in a way that is more “publishable” – at the expense of accuracy.

Coupled with limited accountability for researchers’ errors, this problem has produced a somewhat distorted body of evidence that exaggerates the effectiveness of social and economic programs. The stakes are high because policy decisions based on flawed research affect millions of people’s lives.


Monkey Business

By Macartan Humphreys (Political Science, Columbia & EGAP)

I am sold on the idea of research registration. Two things convinced me.

First, I have been teaching courses in which each week we try to replicate prominent results produced by political scientists and economists working on the political economy of development. I advise against doing this because it is very depressing. In many cases the data are not available, or the results cannot be replicated even when they are. But even when results can be replicated, they often turn out to be extremely fragile. Look at them sideways and they fall over. The canon is a lot more delicate than it lets on.

Second, I have tried out registration myself. That was also depressing, this time because of what I learned about how I usually work. Before doing the real analysis on data from a big field experiment on development aid in Congo, we (Raul Sanchez de la Sierra, Peter van der Windt and I) wrote up a “mock report” using fake data on our outcome variables. Doing this forced us to make myriad decisions about how to do our analysis without the benefit of seeing how the analyses would play out. We did this partly for political reasons: a lot of people had a lot invested in this study, and if they had different ideas about what constituted evidence, we wanted to know that upfront and not after the results came in. But what really surprised us was how hard it was to do. I found that not having access to the results made it all the more obvious how much I am used to drawing on them when crafting analyses and writing: for simple decisions such as which exact measure to use for a given concept, which analyses to deepen, and which results to emphasize. More broadly, that’s how our discipline works: the most important peer feedback we receive, from reviewers or in talks, generally comes after our main analyses are complete and after our peers are exposed to the patterns in the data. For some purposes that’s fine, but it is not hard to see how it could produce just the kind of fragility I was seeing in published work.
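
To make the “mock report” idea concrete, here is a minimal sketch of how such a dry run might be set up. This is an illustration only, not the code from our Congo study: the variable names, the placebo outcome drawn as pure noise, and the simple pre-specified regression are all assumptions made for the example.

```python
# Sketch of a "mock report" workflow: the full analysis is written and committed
# to against placebo outcomes, then rerun unchanged once real outcomes arrive.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=2012)

# Hypothetical design: treatment assignment and baseline covariates are known
# before endline data collection, so they can be real; outcomes cannot be.
n = 600
design = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),
    "baseline_participation": rng.normal(0.4, 0.1, size=n),
})

# Placebo outcome: pure noise, so seeing "results" cannot shape analysis choices.
design["participation_endline"] = rng.normal(0.0, 1.0, size=n)

# Pre-specified estimating equation, fixed before the real outcome data exist.
fit = smf.ols("participation_endline ~ treated + baseline_participation",
              data=design).fit(cov_type="HC2")
print(fit.summary())

# Later, the only change is swapping the placebo column for the measured
# outcome and rerunning this script as written.
```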

These experiences convinced me that our current system is flawed. Registration offers one possible solution.
