Tag Archives: Study Registration
The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):
A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.
In the following interview with the Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.
As I argued in a white paper, […] it is still too easy for publication bias to creep into decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.
Closely echoing the mission of BITSS, Nyhan identifies the potential of research transparency to improve the rigor, and ultimately the benefits, of federally funded scientific research, writing:
The problem is that the research conducted using federal funds is driven — and distorted — by the academic publishing model. The intense competition for space in top journals creates strong pressures for novel, statistically significant effects. As a result, studies that do not turn out as planned or find no evidence of effects claimed in previous research often go unpublished, even though their findings can be important and informative.
A new study recently published in Science provides striking insights into publication bias in the social sciences:
Stanford political economist Neil Malhotra and two of his graduate students examined every study since 2002 that was funded by a competitive grants program called TESS (Time-sharing Experiments for the Social Sciences). TESS allows scientists to order up Internet-based surveys of a representative sample of U.S. adults to test a particular hypothesis […] Malhotra’s team tracked down working papers from most of the experiments that weren’t published, and for the rest asked grantees what had happened to their results.
What did they find?
There is a strong relationship between the results of a study and whether it was published, a pattern indicative of publication bias […] While around half of the total studies in [the] sample were published, only 20% of those with null results appeared in print. In contrast, roughly 60% of studies with strong results and 50% of those with mixed results were published […] However, what is perhaps most striking is not that so few null results are published, but that so many of them are never even written up (65%).
A new paper by Jennifer Ware and Marcus Munafò (University of Bristol, UK) examines how current incentive structures contribute to poor reproducibility:
Background and Aims
The low reproducibility of findings within the scientific literature is a growing concern. This may be due to many findings being false positives, which, in turn, can misdirect research effort and waste money.
We review factors that may contribute to poor study reproducibility and an excess of ‘significant’ findings within the published literature. Specifically, we consider the influence of current incentive structures and the impact of these on research practices.
Guest post by Jamie Monogan (University of Georgia)
A conversation is emerging in the social sciences over the merits of study registration and whether it should be the next step we take in raising research transparency. The notion of study registration is that, prior to observing outcome data, a researcher can publicly release a data analysis plan that enables a correct test of a hypothesis.
Proponents argue that preregistration can curb four causes of publication bias, or the disproportionate publishing of positive, rather than null, findings:
- Preregistration would make evaluating the research design more central to the review process, reducing the importance of significance tests in publication decisions. The journal Cortex now allows for a publication decision based strictly on the research design (Chambers 2013), taking significance tests out of the publication decision. Whether the decision is made before or after observing results, releasing a design early would emphasize study quality.
- Preregistration would mitigate the file-drawer problem of null findings that never leave the author's desk, because the discipline would at least have a record of the registered study even if no publication emerged. This would convey where past research was conducted that may not have been fruitful (Monogan 2013, 23).
- Preregistration would reduce the ability to add observations to achieve significance because the registered design would signal in advance the appropriate sample size. Without a preregistered stopping rule, it is possible to monitor the analysis and stop data collection only once a positive result emerges.
- Preregistration can prevent fishing, or manipulating the model to achieve a desired result, because the researcher must describe the model specification ahead of time (Humphreys, de la Sierra, and van der Windt 2013). By sorting out the best specification of a model using theory and past work ahead of time, a researcher can commit to the results of a well-reasoned model.
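The optional-stopping problem in the second point above can be made concrete with a quick simulation. The sketch below (illustrative only; the function names and parameters are my own, not from any cited paper) tests a true null hypothesis in batches, stopping as soon as a nominal p < 0.05 appears, and shows that repeated peeking inflates the false positive rate well past 5%:

```python
import random
import math

def z_test_p(successes, n, p0=0.5):
    """Two-sided z-test p-value for a sample proportion against p0."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = abs(successes / n - p0) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def optional_stopping_trial(batch=20, max_n=200, alpha=0.05):
    """Collect fair-coin data in batches under a TRUE null; stop and
    declare 'significant' the first time p < alpha (i.e., p-hack)."""
    successes, n = 0, 0
    while n < max_n:
        successes += sum(random.random() < 0.5 for _ in range(batch))
        n += batch
        if z_test_p(successes, n) < alpha:
            return True  # false positive: the null is true by construction
    return False

random.seed(0)
trials = 2000
fp = sum(optional_stopping_trial() for _ in range(trials))
print(f"False positive rate with optional stopping: {fp / trials:.2f}")
# substantially above the nominal 0.05
```

A preregistered design that fixes the sample size in advance (one test at n = 200) would keep the error rate at the nominal 5%, which is exactly the discipline the bullet point describes.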
Important changes are underway in psychology. Transparency, reliability, and adherence to scientific methods are the key words for 2014, says a recent article in The Guardian.
A growing number of psychologists – particularly the younger generation – are fed up with results that don’t replicate, journals that value story-telling over truth, and an academic culture in which researchers treat data as their personal property. Psychologists are realising that major scientific advances will require us to stamp out malpractice, face our own weaknesses, and overcome the ego-driven ideals that maintain the status quo.
Leif Nelson making the case for pre-registration:
I recently joined a large group of academics in co-authoring a paper looking at how political science, economics, and psychology are working to increase transparency in scientific publications. Psychology is leading, by the way.
Working on that paper (and the figure below) actually changed my mind about something. A couple of years ago, when Joe, Uri, and I wrote False Positive Psychology, we were not really advocates of preregistration (à la clinicaltrials.gov). We saw it as an implausible superstructure of unspecified regulation. Now I am an advocate. What changed?