Berkeley Initiative for Transparency in the Social Sciences

Tag Archives: Publication Bias

Registered Reports to the Rescue?

After writing an article for The Upshot, Brendan Nyhan (Assistant Professor at Dartmouth) was interviewed by The Washington Post.


The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):

A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.

In the following interview with The Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.

As I argued in a white paper, […] it is still too easy for publication bias to creep in to decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.


Win a prize guessing how much trial registration reduces publication bias!

Does trial registration have an impact on publication bias? Knowing the answer could earn you a cash prize!


Macartan Humphreys (Columbia, Political Science) and collaborators Albert Fang and Grant Gordon are doing research on how publication (and publication bias) changed after the introduction of registration in clinical trials. They also want you to guess what the changes were. The entrant with the closest guess will win a $200 cash prize. Click here to read more and enter a guess.

Enthusiastic supporters of research transparency often advocate for trial registration, but in the social sciences the practice remains fairly rare and its impact on publication bias is largely unknown. Fortunately, social scientists can learn from their peers in the medical sciences, who have been required to register clinical trials since 2005. Humphreys et al. will examine whether the share of p-values falling just below 0.01 and 0.05 in published medical trials changed after 2005. Their results will provide valuable insight into whether registration should be a high priority on the transparency agenda.
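The sketch below illustrates the kind of before-and-after comparison described above: it computes the share of published p-values falling in a narrow band just below a significance threshold, separately for studies published before and after 2005. The data format, function names, and band width are illustrative assumptions, not the authors' actual design.

```python
# Illustrative sketch (not the Humphreys et al. analysis): compare the share of
# published p-values falling just below conventional thresholds before vs. after
# the 2005 registration requirement. Records are assumed to be (year, p_value) pairs.

def share_just_below(p_values, threshold, window=0.005):
    """Fraction of p-values in the narrow band (threshold - window, threshold]."""
    if not p_values:
        return float("nan")
    in_band = [p for p in p_values if threshold - window < p <= threshold]
    return len(in_band) / len(p_values)

def compare_periods(records, cutoff_year=2005, thresholds=(0.05, 0.01)):
    """Print the just-below-threshold share for pre- and post-cutoff publications."""
    pre = [p for year, p in records if year < cutoff_year]
    post = [p for year, p in records if year >= cutoff_year]
    for t in thresholds:
        print(f"threshold {t}: pre share = {share_just_below(pre, t):.3f}, "
              f"post share = {share_just_below(post, t):.3f}")

# Example with made-up records:
# compare_periods([(2003, 0.048), (2004, 0.012), (2006, 0.21), (2007, 0.049)])
```

A drop in the just-below-threshold share after 2005 would be consistent with registration curbing selective reporting, which is the pattern the study is designed to detect.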

Scientific Irreproducibility and the Prospects of Meta-Research

A recent article from The Economist features John Ioannidis’ Meta-Research Innovation Center (METRICS), whose work to advance the credibility of research will be presented next week at the BITSS Annual Meeting.


“Why most published research findings are false” is not, as the title of an academic paper, likely to win friends in the ivory tower. But it has certainly influenced people (including journalists at The Economist). The paper it introduced was published in 2005 by John Ioannidis, an epidemiologist who was then at the University of Ioannina, in Greece, and is now at Stanford. It exposed the ways, most notably the overinterpreting of statistical significance in studies with small sample sizes, that scientific findings can end up being irreproducible—or, as a layman might put it, wrong […] Dr Ioannidis has been waging war on sloppy science ever since, helping to develop a discipline called meta-research (ie, research about research).
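The arithmetic behind that claim can be made concrete with the positive predictive value calculation at the heart of the 2005 paper: when only a small fraction of tested hypotheses are true and studies are underpowered, most statistically significant findings are false. Below is a simplified rendering of that calculation; it omits the paper's bias term, and the example parameter values are illustrative.

```python
def ppv(prior_odds, power, alpha=0.05):
    """Positive predictive value: the share of statistically significant findings
    that reflect true relationships. prior_odds is the ratio of true to false
    relationships among those tested; power is 1 - beta. Simplified version of
    Ioannidis (2005), without the bias term."""
    expected_true_positives = prior_odds * power
    expected_false_positives = alpha
    return expected_true_positives / (expected_true_positives + expected_false_positives)

# Long-shot hypotheses (1 true per 10 false tested) studied with low power:
print(f"PPV = {ppv(prior_odds=0.1, power=0.2):.2f}")  # ~0.29: most "findings" are false
```

Raising power or the pre-study odds pushes the PPV back above one half, which is the sense in which small, underpowered studies of unlikely hypotheses drive the problem.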

METRICS’ mission is to “identify and minimize persistent threats to medical-research quality.” These include irreproducibility of research findings (the inability of external researchers to reproduce someone else’s work, most often because research data is not shared or data manipulations are not correctly detailed), funding inefficiencies (supporting flawed research), and publication bias (not all studies that are conducted get published, and those that do tend to show significant results, leaving a skewed impression of the evidence).


Can Greater Transparency Lead to Better Social Science?

In a recent article on the Monkey Cage, professors Mike Findley, Nathan Jensen, Edmund Malesky, and Tom Pepinsky discuss publication bias, the “file drawer problem,” and how a special issue of the journal Comparative Political Studies will help address these problems.


Echoing a recent article by Brendan Nyhan reposted on the BITSS blog, the authors assert:

[S]cholars may think strategically about what editors will want […] this means that “boring” findings, or findings that fail to support an author’s preferred hypotheses, are unlikely to be published — the so-called “file drawer problem.” More perniciously, it can incentivize scholars to hide known problems in their research or even encourage outright fraud, as evinced by the recent cases of psychologist Diederik Stapel and acoustician Peter Chen.

To address these problems, the authors have worked with the journal Comparative Political Studies to release a special issue in which:

[A]uthors will submit manuscripts with all mention of the results eliminated […] Other authors will submit manuscripts with full descriptions of research projects that have yet to be executed […] In both cases, reviewers and editors must judge manuscripts solely on the coherence of their theories, the quality of their design, the appropriateness of their empirical methods, and the importance of their research question.


To Get More Out of Science, Show the Rejected Research


In a recent opinion piece on The Upshot, the New York Times’ news site, Brendan Nyhan, an assistant professor of government at Dartmouth College, comments on a host of transparency-related issues.

Closely echoing the mission of BITSS, Nyhan identifies the potential of research transparency to improve the rigor, and ultimately the benefits, of federally funded scientific research, writing:

The problem is that the research conducted using federal funds is driven — and distorted — by the academic publishing model. The intense competition for space in top journals creates strong pressures for novel, statistically significant effects. As a result, studies that do not turn out as planned or find no evidence of effects claimed in previous research often go unpublished, even though their findings can be important and informative.


New Study Sheds Light on File Drawer Problem

A new study recently published in Science provides striking insights into publication bias in the social sciences:

Stanford political economist Neil Malhotra and two of his graduate students examined every study since 2002 that was funded by a competitive grants program called TESS (Time-sharing Experiments for the Social Sciences). TESS allows scientists to order up Internet-based surveys of a representative sample of U.S. adults to test a particular hypothesis […] Malhotra’s team tracked down working papers from most of the experiments that weren’t published, and for the rest asked grantees what had happened to their results.

What did they find?

There is a strong relationship between the results of a study and whether it was published, a pattern indicative of publication bias […] While around half of the total studies in [the] sample were published, only 20% of those with null results appeared in print. In contrast, roughly 60% of studies with strong results and 50% of those with mixed results were published […] However, what is perhaps most striking is not that so few null results are published, but that so many of them are never even written up (65%).
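As a rough illustration of how large those gaps are, the percentages quoted above imply that a study with strong results was about three times as likely to be published as one with null results. The small tabulation below simply re-expresses the figures reported in the excerpt; the numbers come from the quote, not from the underlying data.

```python
# Publication rates by result type, as reported in the excerpt above (approximate).
published_share = {"strong": 0.60, "mixed": 0.50, "null": 0.20}
null_never_written_up = 0.65  # share of null-result studies never written up at all

for result, share in published_share.items():
    print(f"{result:>6} results: ~{share:.0%} published")

ratio = published_share["strong"] / published_share["null"]
print(f"strong vs. null publication ratio: ~{ratio:.0f}x")
print(f"null results never written up: ~{null_never_written_up:.0%}")
```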


Call for Papers on Research Transparency

BITSS will be holding its 3rd annual conference at UC Berkeley on December 11-12, 2014. The goal of the meeting is to bring together leaders from academia, scholarly publishing, and policy to strengthen the standards of openness and integrity across social science disciplines.

This Call for Papers focuses on work that elaborates new tools and strategies to increase the transparency and reproducibility of research. A committee of reviewers will select a limited number of papers to be presented and discussed. Topics for papers include, but are not limited to:

  • Pre-registration and the use of pre-analysis plans;
  • Disclosure and transparent reporting;
  • Replicability and reproducibility;
  • Data sharing;
  • Methods for detecting and reducing publication bias or data mining.

Papers or long abstracts must be submitted by Friday, October 10th (midnight Pacific time) through CEGA’s Submission Platform. Travel funds may be provided for presenters. Eligible submissions include completed papers or works in progress.

The 2014 BITSS Conference is organized by the Center for Effective Global Action and co-sponsored by the Alfred P. Sloan Foundation and the Laura and John Arnold Foundation.