Berkeley Initiative for Transparency in the Social Sciences


Tag Archives: Pre-analysis Plans

Emerging Researcher Perspectives: Replication as a Credible Pre-Analysis Plan

Guest post by Raphael Calel, Ciriacy-Wantrup Postdoctoral Fellow at the Department of Agricultural and Resource Economics at the University of California, Berkeley.


One of the most important tools for enhancing the credibility of research is the pre-analysis plan, or PAP. Simply put, we feel more confident in someone’s inferences if we can verify that they weren’t data mining, engaging in motivated reasoning, or otherwise manipulating their results, knowingly or unknowingly. By publishing a PAP before collecting data, and then closely following that plan, researchers can credibly demonstrate to us skeptics that their analyses were not manipulated in light of the data they collected.

Still, PAPs are credible only when the researcher can anticipate the collection of new data and wait for it. The vast majority of social science research, however, does not satisfy these conditions. For instance, while it is perfectly reasonable to test new hypotheses about the causes of the recent financial crisis, it is unreasonable to expect researchers to have pre-specified their analyses before the crisis hit. To give another example, no one analysing a time series spanning more than a couple of years can reasonably be expected to publish a PAP and then wait years or decades before implementing the study. Most observational studies face this problem in one form or another.

(more…)

Emerging Researcher Perspectives: Get it Right the First Time!

Guest post by Olivia D’Aoust, Ph.D. in Economics from Université libre de Bruxelles, and former Fulbright Visiting Ph.D. student at the University of California, Berkeley.


As a Fulbright PhD student in development economics from Brussels, I found this past year on the Berkeley campus eye-opening. In particular, I discovered a new movement to improve the standards of openness and integrity in economics, political science, psychology, and related disciplines, led by the Berkeley Initiative for Transparency in the Social Sciences (BITSS).

When I first discovered BITSS, it struck me how little I knew about research on research in the social sciences, the pervasiveness of fraud in science in general (from data cleaning and specification searching to outright fabrication of data), and the basic lack of consensus on the right and wrong ways to do research. These issues are essential, yet too often they are left by the wayside. Transparency, reproducibility, replicability, and integrity are the building blocks of scientific research.

(more…)

Three Transparency Working Papers You Need to Read

Garret Christensen, BITSS Project Scientist


Several great working papers on transparency and replication in economics have been released in the last few months. Two of them, both about pre-analysis plans, are intended for a symposium in the Journal of Economic Perspectives, to which I am very much looking forward. The first, by Muriel Niederle and Lucas Coffman, doesn’t pull any punches with its title: “Pre-Analysis Plans are not the Solution, Replication Might Be.” Niederle and Coffman claim that PAPs don’t decrease the rate of false positives enough to be worth the effort, and that replication may be a better way to get at the truth. Part of their concern stems from the assumption “[t]hat one published paper is the result of one pre-registered hypothesis, and that one pre-registered hypothesis corresponds to one experimental protocol. Neither can be guaranteed.” They’re also not crazy about design-based publications (or “registered reports”). Instead, they offer a proposal to help replication take off: establish a Journal of Replication Studies, and have researchers cite replications, both positive and negative, whenever they cite an original work. If these changes were made, they claim, researchers would come to expect replications, and the value of writing and publishing them would rise.

Another working paper on PAPs in economics, titled simply “Pre-Analysis Plans in Economics,” was released recently by Ben Olken. Olken gives a lot of useful background on the origins of PAPs and discusses in detail what should go into them. A reference I found particularly informative is “E9 Statistical Principles for Clinical Trials,” the FDA’s official guidance for trials, especially section V on Data Analysis Considerations. A lot of the transparency practices we’re trying to adopt in economics and the social sciences come from medicine, so it’s nice to see the original source. Olken weighs the benefits of PAPs (increasing confidence in results, making full use of statistical power, and improving relationships with partners such as governments or corporations that may have vested interests in the outcomes of trials) against the costs (the complexity and challenge of writing all possible papers in advance, a push toward simple, less interesting papers with less nuance, and a reduced ability to learn ex post from your data). Citing Brodeur et al. to argue that the problem of false positives isn’t that large, he concludes that, with the exception of trials involving parties with vested interests, the costs outweigh the benefits.

(more…)

Call for Papers: Working Group in African Political Economy (WGAPE)

BITSS is co-sponsoring the 4th Working Group in African Political Economy (WGAPE) Annual Meeting, taking place May 29-30 at the Watson Institute for International and Public Affairs, Brown University.


WGAPE brings together faculty and advanced graduate students in Economics and Political Science who combine field research experience in Africa with training in political economy methods. Paper submissions must reflect WGAPE’s broad research agenda on core issues in the political economy of African development; works-in-progress are encouraged.

In addition, this year’s call for papers invites submissions of Pre-Analysis Plans (PAPs). One session at the meeting will be dedicated to the presentation and discussion of a PAP.

  • Please find the full call for papers here.
  • Papers must be uploaded here by 11:59pm PT on April 19th.
  • Successful applicants will be notified by May 1st and will be invited to attend the full symposium.
  • WGAPE will cover the cost of economy travel, accommodation, and dining (capped).

For further information, please contact Elisa Cascardi (CEGA) at wgapeworkshop@gmail.com.

Creating Standards for Reproducible Research: Overview of COS Meeting

By Garret Christensen (BITSS)


Representatives from BITSS (CEGA Faculty Director Ted Miguel, CEGA Executive Director Temina Madon, and me, BITSS Assistant Project Scientist Garret Christensen) spent Monday and Tuesday of this week at a very interesting workshop at the Center for Open Science aimed at creating standards for promoting reproducible research in the social-behavioral sciences. Perhaps the workshop could have used a catchier name or acronym for wider awareness, but we seemed to accomplish a great deal. Attendees spanned disciplines (economics, political science, psychology, sociology, medicine) and included funders (NIH, NSF, Laura and John Arnold Foundation, Sloan Foundation), publishers (Science/AAAS, APA, Nature Publishing Group), editors (American Political Science Review, Psychological Science, Perspectives on Psychological Science, Science), data archivists (ICPSR), and researchers from over 40 leading institutions (UC Berkeley, MIT, University of Michigan, University of British Columbia, UVA, UPenn, and Northwestern, among many others). Together we pushed forward on specific action items researchers and publishers can take to promote transparent and reproducible research.

The work was divided into five subcommittees:

1) Reporting standards in research design

2) Reporting standards in analysis

3) Replications

4) Pre-Registration/Registered Reports

5) Sharing data, code, and materials

(more…)

First Swedish Graduate Student Training in Transparency in the Social Sciences

Guest Post by Anja Tolonen (University of Gothenburg, Sweden)


(PHOTO CREDIT: www.gu.se)

Seventeen excited graduate students in Economics met at the University of Gothenburg on a Monday in September to initiate an ongoing discussion about transparency practices in Economics. The students came from all over the world: from Kenya, Romania, Hong Kong, Australia, and, of course, Sweden. The initiative itself came from across an ocean as well: Berkeley, California. The students had different interests within Economics: many of us focus on Environmental or Development Economics, but there were also Financial Economists and Macroeconomists present.

The teaching material, mostly drawn from the first Summer Institute organized by BITSS in June 2014, quickly prompted many questions. “Is it feasible to pre-register analysis on survey data?”, “Are graduate students more at risk of P-hacking than their senior peers?”, “Are some problems intrinsic to the publishing industry?”, and “Does this really relate to my field?” several students asked. Some thought yes:

(more…)

New Study Sheds Light on File Drawer Problem

A new study recently published in Science provides striking insights into publication bias in the social sciences:

Stanford political economist Neil Malhotra and two of his graduate students examined every study since 2002 that was funded by a competitive grants program called TESS (Time-sharing Experiments for the Social Sciences). TESS allows scientists to order up Internet-based surveys of a representative sample of U.S. adults to test a particular hypothesis […] Malhotra’s team tracked down working papers from most of the experiments that weren’t published, and for the rest asked grantees what had happened to their results.

What did they find?

There is a strong relationship between the results of a study and whether it was published, a pattern indicative of publication bias […] While around half of the total studies in [the] sample were published, only 20% of those with null results appeared in print. In contrast, roughly 60% of studies with strong results and 50% of those with mixed results were published […] However, what is perhaps most striking is not that so few null results are published, but that so many of them are never even written up (65%).
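
To see what these conditional publication rates imply for the composition of the published literature, here is a minimal sketch in Python. The even split of studies across result types is my assumption for illustration, not a figure from the study:

    # A minimal sketch, not from the study itself: assume a hypothetical
    # batch of 300 studies split evenly across result types, then apply
    # the publication rates quoted above.
    pub_rates = {"null": 0.20, "mixed": 0.50, "strong": 0.60}
    counts = {"null": 100, "mixed": 100, "strong": 100}  # hypothetical split

    published = {k: counts[k] * pub_rates[k] for k in counts}
    total = sum(published.values())
    for k, n in published.items():
        print(f"{k}: {n:.0f} published ({n / total:.0%} of the published record)")
    # Null results are a third of all studies here, but only ~15% of what
    # gets published -- the file drawer in action.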

(more…)