Berkeley Initiative for Transparency in the Social Sciences

Influential Paper on Gay Marriage Might Be Marred by Fraudulent Data

Harsh scrutiny of an influential political science experiment highlights the importance of transparency in research.


The paper, from UCLA graduate student Michael LaCour and Columbia University Professor Donald Green, was published in Science in December 2014. It asserted that short conversations with gay canvassers could not only change people’s minds on a divisive social issue like same-sex marriage, but could also have a contagious effect on the relatives of those in contact with the canvassers. The paper received wide attention in the press.

Yet three days ago, two graduate students from UC Berkeley, David Broockman and Joshua Kalla, published a response to the study, pointing to a number of statistical oddities and to discrepancies between how the experiment was described in the paper and how it appears to have actually been conducted. Earlier in the year, impressed by the paper's findings, Broockman and Kalla had attempted an extension of the study, building on the original data set. It was then that they became aware of irregularities in the study's methodology and decided to notify Green.

Reviewing the comments from Broockman and Kalla, Green, who was not involved in the original data collection, quickly became convinced that something was wrong. On Tuesday, he submitted a letter to Science requesting that the paper be retracted. Green shared his view on the controversy in a recent interview, reflecting on what the episode means for the broader practice of social science and underscoring the importance of integrity in research.

(more…)

P-values are Just the Tip of the Iceberg

Roger Peng and Jeffrey Leek of Johns Hopkins University argue that “ridding science of shoddy statistics will require scrutiny of every step, not merely the last one.”


This blog post originally appeared in Nature on April 28, 2015 (see here).

There is no statistic more maligned than the P value. Hundreds of papers and blogposts have been written about what some statisticians deride as ‘null hypothesis significance testing’ (NHST; see, for example, go.nature.com/pfvgqe). NHST deems whether the results of a data analysis are important on the basis of whether a summary statistic (such as a P value) has crossed a threshold. Given the discourse, it is no surprise that some hailed as a victory the banning of NHST methods (and all of statistical inference) in the journal Basic and Applied Social Psychology in February.

Such a ban will in fact have scant effect on the quality of published science. There are many stages to the design and analysis of a successful study. The last of these steps is the calculation of an inferential statistic such as a P value, and the application of a ‘decision rule’ to it (for example, P < 0.05). In practice, decisions that are made earlier in data analysis have a much greater impact on results — from experimental design to batch effects, lack of adjustment for confounding factors, or simple measurement error. Arbitrary levels of statistical significance can be achieved by changing the ways in which data are cleaned, summarized or modelled.
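To see how much these upstream choices matter, consider a quick simulation (our own illustrative sketch, not from the Nature piece). Two groups are drawn from the same distribution, so any “significant” difference is a false positive; yet letting the analyst pick among a few defensible outlier-trimming rules pushes the false-positive rate well past the nominal 5%, even though the final test and its threshold never change.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n = 5000, 40
    false_positives = 0

    for _ in range(n_sims):
        # Two samples from the SAME distribution: the null hypothesis is true.
        a, b = rng.normal(size=n), rng.normal(size=n)
        # "Analyst degrees of freedom": try several defensible cleaning rules
        # and (implicitly) keep the most favorable p-value.
        pvals = []
        for cutoff in (np.inf, 2.5, 2.0):  # no trimming, then two outlier rules
            p = stats.ttest_ind(a[np.abs(a) < cutoff],
                                b[np.abs(b) < cutoff]).pvalue
            pvals.append(p)
        if min(pvals) < 0.05:
            false_positives += 1

    # Prints a rate noticeably above the nominal 0.05, despite no true effect.
    print(f"False-positive rate: {false_positives / n_sims:.3f}")

Only the data-cleaning step varied here, which is exactly the authors’ point: scrutiny of the final P value alone cannot catch this.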

(more…)

Announcing the Leamer-Rosenthal Prizes for Open Social Science

New prizes will recognize and reward transparency in social science research.


BERKELEY, CA (May 13, 2015) – Transparent research is integral to the validity of science. Openness is especially important in such social science disciplines as economics, political science and psychology, because this research shapes policy and influences clinical practices that affect millions of lives. To encourage openness in research and the teaching of best practices, the Berkeley Initiative for Transparency in the Social Sciences (BITSS) has established The Leamer-Rosenthal Prizes for Open Social Science. BITSS is an initiative of the Center for Effective Global Action (CEGA) at the University of California, Berkeley. The prizes, which provide recognition, visibility and cash awards to both the next generation of researchers and senior faculty, are generously supported by the John Templeton Foundation.

The competition is open to scholars and educators worldwide.

“In academia, career advances and research funding are usually awarded on the basis of how many journal articles a scientist publishes. This incentive structure can encourage researchers to dramatize their findings in ways that increase the probability of publication, sometimes even at the expense of transparency and integrity,” said Edward Miguel, PhD, Professor of Economics at UC Berkeley and Faculty Director of CEGA. “The Leamer-Rosenthal Prizes will help speed the adoption of transparent practices by recognizing and rewarding researchers and educators whose work and teaching exemplify the best in open social science.”

(more…)

Recent BITSS Presentations

Garret Christensen, BITSS Project Scientist


BITSS recently took part in a pair of conferences and workshops worth reporting on. First, BITSS was part of a research transparency conference in Washington, DC, put together by the Laura and John Arnold Foundation. Many of the presentations from the conference can be found here. The idea was to bring together academics, researchers on federal contracts, and federal government research sponsors and policy makers. A few things that were new to me or that stood out:

(more…)

A Rough Guide to Spotting Bad Science

Twelve points that will help separate the science from the pseudoscience, written by Compound Interest (see here).


[Infographic: A Rough Guide to Spotting Bad Science, by Compound Interest]


Arnold Foundation Launches New Evidence-Based Policy Division

The Coalition for Evidence-Based Policy, effectively CEGA’s domestic counterpart and a leading force working to institutionalize evidence-based policy making, will merge with one of its funders, the Laura and John Arnold Foundation (LJAF). LJAF, which also funds BITSS, will integrate the Coalition’s staff into its newly established Evidence-Based Policy and Innovation division. The new division’s mission will closely mirror that of the Coalition, which will wind down its operations and transition its staff to LJAF in the coming weeks.

According to an LJAF press release, the evidence-based policy subdivision, which will be led by Jon Baron, the former president of the Coalition, will focus on “strategic investments in rigorous evaluations, collaborations with policy officials to advance evidence-based reforms, and evidence reviews to identify promising and proven programs” (LJAF). The innovation subdivision, to be led by Kathy Stack, former adviser for evidence-based innovation at the White House Office of Management and Budget, “will bring policymakers, researchers, and data experts from the public and private sectors together to strengthen the infrastructure and processes needed to support evidence-based decision making” (LJAF).

(more…)

Three Transparency Working Papers You Need to Read

Garret Christensen, BITSS Project Scientist


Several great working papers on transparency and replication in economics have been released in the last few months. Two of them are intended for a symposium in the Journal of Economic Perspectives, to which I am very much looking forward, and are about pre-analysis plans (PAPs). The first of these, by Muriel Niederle and Lucas Coffman, doesn’t pull any punches with its title: “Pre-Analysis Plans are not the Solution, Replication Might Be.” Niederle and Coffman argue that PAPs don’t decrease the rate of false positives enough to be worth the effort, and that replication may be a better way to get at the truth. Some of their concern about PAPs stems from the assumption “[t]hat one published paper is the result of one pre-registered hypothesis, and that one pre-registered hypothesis corresponds to one experimental protocol. Neither can be guaranteed.” They’re also not crazy about design-based publications (or “registered reports”). They instead offer a proposal to get replication to take off: establish a Journal of Replication Studies, and have researchers cite replications, both positive and negative, whenever they cite an original work. If these changes were made, they argue, researchers would come to expect replications, and the value of writing and publishing them would rise accordingly.

Another working paper on PAPs in economics, titled simply “Pre-Analysis Plans in Economics,” was released recently by Ben Olken. Olken gives a lot of useful background on the origin of PAPs and discusses in detail what should go into them. A reference I found particularly informative is “E9 Statistical Principles for Clinical Trials,” the FDA’s official guidance for trials, especially section V on Data Analysis Considerations. A lot of the transparency practices we’re trying to adopt in economics and the social sciences come from medicine, so it’s nice to see the original source. Olken weighs the benefits of PAPs (increasing confidence in results, making full use of statistical power, and improving relationships with partners such as governments or corporations that may have vested interests in trial outcomes) against the costs (the complexity of writing all possible papers in advance, a push toward simple, less interesting papers with less nuance, and a reduced ability to learn ex post from your data). Citing Brodeur et al. as evidence that the problem of false positives isn’t that large, he concludes that, except for trials involving parties with vested interests, the costs outweigh the benefits.
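To put a rough number on the statistical-power benefit, here’s a back-of-the-envelope sketch (our own, not from Olken’s paper; the effect size and the ten-outcome scenario are assumptions for illustration). Pre-specifying a single primary outcome lets you test at alpha = 0.05, while fishing across ten candidate outcomes and then Bonferroni-correcting requires alpha = 0.005 per test, and considerably more subjects for the same power.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    effect, target_power = 0.3, 0.8  # assumed effect size (Cohen's d) and power

    # One pre-specified primary outcome, tested at alpha = 0.05.
    n_prespecified = analysis.solve_power(effect_size=effect,
                                          power=target_power, alpha=0.05)

    # Ten candidate outcomes, Bonferroni-corrected to alpha = 0.05 / 10 each.
    n_corrected = analysis.solve_power(effect_size=effect,
                                       power=target_power, alpha=0.05 / 10)

    print(f"n per arm, pre-specified primary outcome: {n_prespecified:.0f}")  # ~175
    print(f"n per arm, ten-outcome correction: {n_corrected:.0f}")            # ~296

The gap between the two sample sizes is one concrete way to read Olken’s “making full use of statistical power” benefit.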

(more…)
