Tag Archives: Publication Bias
The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):
A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.
In the following interview with the Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.
As I argued in a white paper, […] it is still too easy for publication bias to creep in to decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.
Does trial registration have an impact on publication bias? Knowing the answer could earn you a cash prize!
Macartan Humphreys (Columbia, Political Science) and collaborators Albert Fang and Grant Gordon are doing research on how publication (and publication bias) changed after the introduction of registration in clinical trials. They also want you to guess what the changes were. The bidder with the closest guess will win a $200 cash prize. Click here to read more and enter a guess.
Enthusiastic supporters of research transparency often advocate for the registration of experimental trials. In the social sciences, however, the practice remains rare and its impact on publication bias is largely unknown. Fortunately, social scientists can learn from their peers in the medical sciences, who have been required to register clinical trials since 2005. The research of Humphreys et al. will examine whether the share of p-values falling just below 0.01 and 0.05 in published medical trials changed after 2005. Their results will provide valuable insight into whether registration should be a high priority on the transparency agenda.
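The before-and-after comparison described above boils down to a simple calculation: what fraction of reported p-values lands just under a significance threshold in each period? The sketch below illustrates the idea; the p-values and the window width are invented for demonstration and are not the authors' data or analysis.

```python
# Illustrative sketch: compare the share of reported p-values falling
# just below a significance threshold before vs. after mandatory trial
# registration (2005). All p-values here are invented for demonstration.

def share_just_below(p_values, threshold, window=0.005):
    """Fraction of p-values landing in [threshold - window, threshold)."""
    hits = [p for p in p_values if threshold - window <= p < threshold]
    return len(hits) / len(p_values)

pre_2005 = [0.049, 0.048, 0.12, 0.046, 0.03, 0.21, 0.047, 0.002]
post_2005 = [0.18, 0.04, 0.33, 0.06, 0.049, 0.51, 0.02, 0.11]

for label, ps in [("pre-2005", pre_2005), ("post-2005", post_2005)]:
    print(f"{label}: {share_just_below(ps, 0.05):.2f} of p-values in [0.045, 0.05)")
```

A pile-up of p-values just under 0.05 in the pre-registration period that thins out afterward would be consistent with registration curbing selective reporting.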
In a recent article on the Monkey Cage, professors Mike Findley, Nathan Jensen, Edmund Malesky and Tom Pepinsky discuss publication bias, the “file drawer problem” and how a special issue of the journal Comparative Political Studies will help address these problems.
[S]cholars may think strategically about what editors will want […] this means that “boring” findings, or findings that fail to support an author’s preferred hypotheses, are unlikely to be published — the so-called “file drawer problem.” More perniciously, it can incentivize scholars to hide known problems in their research or even encourage outright fraud, as evinced by the recent cases of psychologist Diederik Stapel and acoustician Peter Chen.
To address these problems, the authors of the article have worked with the journal Comparative Political Studies to release a special issue in which:
[A]uthors will submit manuscripts with all mention of the results eliminated […] Other authors will submit manuscripts with full descriptions of research projects that have yet to be executed […] In both cases, reviewers and editors must judge manuscripts solely on the coherence of their theories, the quality of their design, the appropriateness of their empirical methods, and the importance of their research question.
Closely echoing the mission of BITSS, Nyhan identifies the potential of research transparency to improve the rigor, and ultimately the benefits, of federally funded scientific research, writing:
The problem is that the research conducted using federal funds is driven — and distorted — by the academic publishing model. The intense competition for space in top journals creates strong pressures for novel, statistically significant effects. As a result, studies that do not turn out as planned or find no evidence of effects claimed in previous research often go unpublished, even though their findings can be important and informative.
A new study recently published in Science provides striking insights into publication bias in the social sciences:
Stanford political economist Neil Malhotra and two of his graduate students examined every study since 2002 that was funded by a competitive grants program called TESS (Time-sharing Experiments for the Social Sciences). TESS allows scientists to order up Internet-based surveys of a representative sample of U.S. adults to test a particular hypothesis […] Malhotra’s team tracked down working papers from most of the experiments that weren’t published, and for the rest asked grantees what had happened to their results.
What did they find?
There is a strong relationship between the results of a study and whether it was published, a pattern indicative of publication bias […] While around half of the total studies in [the] sample were published, only 20% of those with null results appeared in print. In contrast, roughly 60% of studies with strong results and 50% of those with mixed results were published […] However, what is perhaps most striking is not that so few null results are published, but that so many of them are never even written up (65%).
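The pattern Malhotra's team reports is easy to see once publication outcomes are tabulated by result type. The sketch below mirrors the reported shares with invented counts (they are not the study's data) to show how such a table makes the bias visible.

```python
# Illustrative sketch: publication rates by result type, echoing the
# TESS analysis described above. Counts are invented to roughly mirror
# the reported shares; they are not the study's data.

studies = {
    # result_type: (published, unpublished)
    "strong": (60, 40),
    "mixed":  (50, 50),
    "null":   (20, 80),
}

for result, (pub, unpub) in studies.items():
    rate = pub / (pub + unpub)
    print(f"{result:>6}: {rate:.0%} published")
```

If results had no bearing on publication, the three rates would be roughly equal; the steep drop-off for null results is the signature of publication bias.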
BITSS will be holding its 3rd annual conference at UC Berkeley on December 11-12, 2014. The goal of the meeting is to bring together leaders from academia, scholarly publishing, and policy to strengthen the standards of openness and integrity across social science disciplines.
This Call for Papers focuses on work that develops new tools and strategies to increase the transparency and reproducibility of research. A committee of reviewers will select a limited number of papers to be presented and discussed. Topics for papers include, but are not limited to:
- Pre-registration and the use of pre-analysis plans;
- Disclosure and transparent reporting;
- Replicability and reproducibility;
- Data sharing;
- Methods for detecting and reducing publication bias or data mining.