Garret Christensen, BITSS Project Scientist
Several great working papers on transparency and replication in economics have been released in the last few months. Two of them are about pre-analysis plans (PAPs) and are intended for a symposium in The Journal of Economic Perspectives, to which I am very much looking forward. The first, by Muriel Niederle and Lucas Coffman, doesn't pull any punches with its title: "Pre-Analysis Plans are not the Solution, Replication Might Be." Niederle and Coffman claim that PAPs don't decrease the rate of false positives enough to be worth the effort, and that replication may be a better way to get at the truth. Part of their skepticism stems from the assumption "[t]hat one published paper is the result of one pre-registered hypothesis, and that one pre-registered hypothesis corresponds to one experimental protocol. Neither can be guaranteed." They're also not crazy about design-based publications (or "registered reports"). They instead offer a proposal to get replication to take off, calling for the establishment of a Journal of Replication Studies and for researchers to cite replications, both positive and negative, whenever they cite an original work. They claim that if these changes were made, researchers would begin to expect replications, and the value of writing and publishing them would increase.
Another working paper on PAPs in economics, titled simply "Pre-Analysis Plans in Economics," was recently released by Ben Olken. Olken gives a lot of useful background on the origin of PAPs and discusses in detail what should go into them. A reference I found particularly informative is "E9 Statistical Principles for Clinical Trials," the FDA's official guidance for trials, especially Section V on Data Analysis Considerations. Since many of the transparency practices we're trying to adopt in economics and the social sciences come from medicine, it's nice to see the original source. Olken weighs the benefits of PAPs (increasing confidence in results, making full use of statistical power, and improving relationships with partners such as governments or corporations that may have vested interests in trial outcomes) against the costs (the complexity of writing all possible papers in advance, a push toward simpler, less nuanced papers, and a reduced ability to learn ex post from your data). Citing Brodeur et al. to argue that the problem of false positives isn't that large, he concludes that, except for trials involving parties with vested interests, the costs outweigh the benefits.
The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):
A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.
In the following interview with the Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.
As I argued in a white paper, […] it is still too easy for publication bias to creep in to decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.
The University of California’s publishing house, UC Press, has announced the launch of two new publications, Collabra and Luminos. Collabra will publish academic articles across many academic disciplines including the life, environmental, social and behavioral sciences. Luminos will publish monographs across all fields of study. As the UC Press website indicates, the publications will “not only [share] the research but also the value created by the academic community.”
With low up-front APCs [Article Processing Charges], a sponsorship fund for authors unable to pay, and sharing actual revenue with editors and reviewers, Collabra builds a fair and welcoming ecosystem. (Collabra Website)
Luminos is also based on an innovative publishing model:
[It] shares the cost burden of publishing in manageable amounts across the academic community. For each title, UC Press makes a significant contribution, augmented by membership funds from supporting libraries. Authors will then be asked to secure a title publication fee to cover the remaining costs. Additional revenue from supporting libraries and print sales will help to support an author waiver fund. (UCSD Library Blog)
The launch of the two publications coincides with a much-needed push toward greater access and openness in academic publishing, and will hopefully increase the benefits of the public goods provided by researchers and their affiliated academic institutions.
For more information on either publication, contact Lorraine Weston at email@example.com.
Yesterday, January 29th, Science Magazine released a new Special Issue entitled The End of Privacy. In line with its theme, the edition will be made available online at no cost for the first week following publication. Take this chance to look through!
For scientists, the vast amounts of data that people shed every day offer great new opportunities but new dilemmas as well. New computational techniques can identify people or trace their behavior by combining just a few snippets of data. There are ways to protect the private information hidden in big data files, but they limit what scientists can learn; a balance must be struck.
Boldly declaring "Privacy as we have known it is ending and we're only beginning to fathom the consequences," the Special Issue deals with a host of topics related to different facets of data deposition, use, and confidentiality. Included in the publication are formal reports and news articles.
Dalson Britto Figueiredo Filho, Adjunct Professor of Political Science at the Federal University of Pernambuco in Recife, Brazil, who attended the BITSS Summer Institute in June 2014, recently published a paper on the importance of replications in Revista Política Hoje.
“The BITSS experience really changed my mind on how to do good science”, said Figueiredo Filho. “Now I am working to diffuse both replication and transparency as default procedures of scientific inquiry among Brazilian undergraduate and graduate students. I am very thankful to the 2014 BITSS workshop for a unique opportunity to become a better scientist.”
The paper, written in Portuguese, is available here. Below is an abstract in English.
A new study recently published in Science provides striking insights into publication bias in the social sciences:
Stanford political economist Neil Malhotra and two of his graduate students examined every study since 2002 that was funded by a competitive grants program called TESS (Time-sharing Experiments for the Social Sciences). TESS allows scientists to order up Internet-based surveys of a representative sample of U.S. adults to test a particular hypothesis […] Malhotra’s team tracked down working papers from most of the experiments that weren’t published, and for the rest asked grantees what had happened to their results.
What did they find?
There is a strong relationship between the results of a study and whether it was published, a pattern indicative of publication bias […] While around half of the total studies in [the] sample were published, only 20% of those with null results appeared in print. In contrast, roughly 60% of studies with strong results and 50% of those with mixed results were published […] However, what is perhaps most striking is not that so few null results are published, but that so many of them are never even written up (65%).
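The rates above lend themselves to a quick back-of-the-envelope simulation. The sketch below (a toy model of my own, not the TESS data or the authors' analysis; the function name, parameters, and assumed base rates are all hypothetical) shows how publishing roughly 60% of significant results but only 20% of null results distorts the published record relative to what researchers actually find:

```python
import random

random.seed(42)

def simulate_file_drawer(n_studies=10_000, true_effect_rate=0.10, alpha=0.05,
                         pub_prob_sig=0.60, pub_prob_null=0.20):
    """Toy model of selective publication. Assumes 10% of studies test a real
    effect (detected with certainty, for simplicity) and the rest yield false
    positives at rate alpha. Publication probabilities mirror the ~60%/~20%
    rates reported for strong vs. null results."""
    published_sig = published_null = 0
    for _ in range(n_studies):
        significant = (random.random() < true_effect_rate
                       or random.random() < alpha)
        if significant and random.random() < pub_prob_sig:
            published_sig += 1
        elif not significant and random.random() < pub_prob_null:
            published_null += 1
    # Share of the *published* record that reports a significant result
    return published_sig / (published_sig + published_null)

print(f"significant share among published: {simulate_file_drawer():.0%}")
```

Under these assumed parameters only about 14.5% of all studies come out significant, yet significant results make up roughly a third of what gets published — the file drawer more than doubles their apparent prevalence, before any selective write-up is even considered.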
A new paper by Jennifer Ware and Marcus Munafò (University of Bristol, UK) examines the causes of poor reproducibility in the published literature. From the abstract:
Background and Aims
The low reproducibility of findings within the scientific literature is a growing concern. This may be due to many findings being false positives which, in turn, can misdirect research effort and waste money.
We review factors that may contribute to poor study reproducibility and an excess of ‘significant’ findings within the published literature. Specifically, we consider the influence of current incentive structures and the impact of these on research practices.