Category Archives: Reblogged
Roger Peng and Jeffrey Leek of Johns Hopkins University claim that “ridding science of shoddy statistics will require scrutiny of every step, not merely the last one.”
This blog post originally appeared in Nature on April 28, 2015.
There is no statistic more maligned than the P value. Hundreds of papers and blog posts have been written about what some statisticians deride as ‘null hypothesis significance testing’ (NHST; see, for example, go.nature.com/pfvgqe). NHST deems whether the results of a data analysis are important on the basis of whether a summary statistic (such as a P value) has crossed a threshold. Given the discourse, it is no surprise that some hailed as a victory the banning of NHST methods (and all of statistical inference) in the journal Basic and Applied Social Psychology in February.
Such a ban will in fact have scant effect on the quality of published science. There are many stages to the design and analysis of a successful study. The last of these steps is the calculation of an inferential statistic such as a P value, and the application of a ‘decision rule’ to it (for example, P < 0.05). In practice, decisions that are made earlier in data analysis have a much greater impact on results, from experimental design to batch effects, lack of adjustment for confounding factors, or simple measurement error. Arbitrary levels of statistical significance can be achieved by changing the ways in which data are cleaned, summarized or modelled.
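The point about analysis choices can be made concrete with a small simulation. The sketch below (my own illustration, not from the Nature piece, and assuming NumPy and SciPy are available) compares two groups with no true difference, but allows the analyst to pick among three equally defensible outlier-trimming rules after seeing the data and keep whichever yields the smallest P value. The trimming thresholds are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def min_p_over_choices(x, y):
    """Try several defensible 'cleaning' rules and keep the best p-value:
    no trimming, then trimming observations beyond 2.5 or 2.0 SDs."""
    pvals = []
    for k in (None, 2.5, 2.0):
        if k is None:
            xs, ys = x, y
        else:
            xs = x[np.abs(x - x.mean()) < k * x.std()]
            ys = y[np.abs(y - y.mean()) < k * y.std()]
        pvals.append(stats.ttest_ind(xs, ys).pvalue)
    return min(pvals)

n_sims, n = 2000, 30
naive = hacked = 0
for _ in range(n_sims):
    x, y = rng.normal(size=n), rng.normal(size=n)  # no true effect
    naive += stats.ttest_ind(x, y).pvalue < 0.05   # one pre-specified analysis
    hacked += min_p_over_choices(x, y) < 0.05      # analyst picks after the fact

print(f"pre-specified false-positive rate: {naive / n_sims:.3f}")   # near 0.05
print(f"pick-the-best false-positive rate: {hacked / n_sims:.3f}")  # inflated
```

The decision rule (P < 0.05) is identical in both arms; only the earlier cleaning step differs, which is exactly why banning the final inferential statistic would not fix the problem.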
The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):
A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.
In the following interview with the Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.
As I argued in a white paper, […] it is still too easy for publication bias to creep in to decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.
Originally posted on the Open Science Collaboration by Denny Borsboom
This train won’t stop anytime soon.
That’s what I kept thinking during the two-day sessions in Charlottesville, where a diverse array of scientific stakeholders worked hard to reach agreement on new journal standards for open and transparent scientific reporting. The proposed standards are intended to specify practices for authors, reviewers, and editors to follow in order to achieve higher levels of openness than currently exist. The leading idea is that a journal, funding agency, or professional organization could take these standards off the shelf and adopt them in its policies. So that when, say, The Journal for Previously Hard To Get Data moves to a more open data practice, it doesn’t have to puzzle over how to implement the change, but can simply copy the data-sharing guideline out of the new standards and post it on its website.
The organizers of the sessions, which were presided over by Brian Nosek of the Center for Open Science, had approached methodologists, funding agencies, journal editors, and representatives of professional organizations to achieve a broad set of perspectives on what open science means and how it should be institutionalized. As a result, the meeting felt almost like a political summit. It included high officials from professional organizations like the American Psychological Association (APA) and the Association for Psychological Science (APS), programme directors from the National Institutes of Health (NIH) and the National Science Foundation (NSF), editors of a wide variety of psychological, political, economic, and general science journals (including Science and Nature), and a loose collection of open science enthusiasts and methodologists (that would be me).
In a recent post on Data Colada, University of Pennsylvania professor Uri Simonsohn discusses what to do if you (a researcher) are accused of having altered your data to increase statistical significance.
It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was obtained via p-hacking.
For example, a Slate.com post by Andrew Gelman suspected p-hacking in a paper that collected data on 10 colors of clothing but analyzed red and pink as a single color (the authors later responded to the accusation), and a statistics blog suspected p-hacking after noticing that a paper studying the number of hurricane deaths relied on the somewhat unusual negative-binomial regression.
Instinctively, Simonsohn says, a researcher may react to accusations of p-hacking by attempting to justify the specifics of the research design, but if that justification is ex post, the explanation will not be good enough. In fact:
P-hacked findings are by definition justifiable. Unjustifiable research practices involve incompetence or fraud, not p-hacking.
In a recent interview on The Signal, a Library of Congress blog, Richard Ball (economics professor at Haverford College and presenter at the 2014 BITSS Summer Institute) and Norm Medeiros (associate librarian at Haverford College) discussed Project TIER (Teaching Integrity in Empirical Research) and their experience teaching students how to document their empirical analyses.
What is Project TIER?
For close to a decade, we have been teaching our students how to assemble comprehensive documentation of the data management and analysis they do in the course of writing an original empirical research paper. Project TIER is an effort to reach out to instructors of undergraduate and graduate statistical methods classes in all the social sciences to share with them lessons we have learned from this experience.
What is the TIER documentation protocol?
We gradually developed detailed instructions describing all the components that should be included in the documentation and how they should be formatted and organized. We now refer to these instructions as the TIER documentation protocol. The protocol specifies a set of electronic files (including data, computer code and supporting information) that would be sufficient to allow an independent researcher to reproduce, easily and exactly, all the statistical results reported in the paper.
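A protocol like this lends itself to automated checking. The sketch below is a hypothetical illustration, not the official TIER specification: the folder and file names are my own illustrative choices. It verifies that a replication package contains the broad kinds of components the protocol describes (original data, processed data, command files, and a readme).

```python
from pathlib import Path

# Illustrative component names, in the spirit of the TIER protocol;
# the official protocol defines its own precise structure.
REQUIRED = [
    "original-data",   # raw data exactly as obtained
    "processed-data",  # cleaned data produced by the command files
    "command-files",   # scripts that transform raw data into results
    "readme.txt",      # how to run everything, and in what order
]

def check_replication_package(root):
    """Return the list of required components missing from the package."""
    root = Path(root)
    return [name for name in REQUIRED if not (root / name).exists()]

# Example: build a minimal package and verify it passes the check.
pkg = Path("demo-package")
for name in REQUIRED:
    target = pkg / name
    if name.endswith(".txt"):
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text("Run the command-files scripts in numeric order.\n")
    else:
        target.mkdir(parents=True, exist_ok=True)

print(check_replication_package(pkg))  # [] when all components are present
```

A check like this could run before submission, so that "easily and exactly reproducible" is enforced mechanically rather than left to memory.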
That’s an important question, with an answer that may help determine how much attention some people pay to research misconduct. But it’s one that hasn’t been rigorously addressed.
Seeking some clarity, Andrew Stern, Arturo Casadevall, Grant Steen, and Ferric Fang looked at cases in which the Office of Research Integrity had determined there was misconduct in particular papers. Their study was published today in eLife.