Berkeley Initiative for Transparency in the Social Sciences

Category Archives: Reblogged

P-values are Just the Tip of the Iceberg

Roger Peng and Jeffrey Leek of Johns Hopkins University claim that “ridding science of shoddy statistics will require scrutiny of every step, not merely the last one.”


This blog post originally appeared in Nature on April 28, 2015 (see here).

There is no statistic more maligned than the P value. Hundreds of papers and blogposts have been written about what some statisticians deride as ‘null hypothesis significance testing’ (NHST; see, for example, go.nature.com/pfvgqe). NHST deems whether the results of a data analysis are important on the basis of whether a summary statistic (such as a P value) has crossed a threshold. Given the discourse, it is no surprise that some hailed as a victory the banning of NHST methods (and all of statistical inference) in the journal Basic and Applied Social Psychology in February.

Such a ban will in fact have scant effect on the quality of published science. There are many stages to the design and analysis of a successful study. The last of these steps is the calculation of an inferential statistic such as a P value, and the application of a ‘decision rule’ to it (for example, P < 0.05). In practice, decisions that are made earlier in data analysis have a much greater impact on results — from experimental design to batch effects, lack of adjustment for confounding factors, or simple measurement error. Arbitrary levels of statistical significance can be achieved by changing the ways in which data are cleaned, summarized or modelled.
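To make that last point concrete, here is a minimal simulation sketch in Python (not from the Nature article; the sample sizes and cleaning rules are illustrative). Two groups are drawn from the same distribution, so there is no true effect, and several defensible outlier-exclusion rules are tried until one yields P < 0.05:

```python
# Minimal sketch (illustrative, not from the Nature article): two groups drawn
# from the same distribution, so there is no true effect. Several outlier-
# exclusion rules are tried in turn and the first "significant" one is kept,
# the kind of analytic flexibility the authors warn about.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
n_per_group = 30
cutoffs = [np.inf, 3.0, 2.5, 2.0]  # candidate outlier-exclusion rules (in SD units)

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    for c in cutoffs:
        p = stats.ttest_ind(a[np.abs(a) < c], b[np.abs(b) < c]).pvalue
        if p < 0.05:          # report whichever cleaning rule "works"
            false_positives += 1
            break

print(f"Realized false-positive rate: {false_positives / n_experiments:.3f} "
      f"(nominal level: 0.05)")
```

Because the first cleaning rule that “works” is the one that gets reported, the realized false-positive rate comes out well above the nominal 5%, even though no single rule is unreasonable on its own.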

(more…)

Registered Reports to the Rescue?

After writing an article for The Upshot, Brendan Nyhan (Assistant Professor at Dartmouth) was interviewed by The Washington Post.


The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):

A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.

In the following interview with the Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.

As I argued in a white paper, […] it is still too easy for publication bias to creep in to decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.

(more…)

Facilitating Radical Change in Publication Standards: Overview of COS Meeting Part II

Originally posted on the Open Science Collaboration by Denny Borsboom


This train won’t stop anytime soon.

That’s what I kept thinking during the two-day sessions in Charlottesville, where a diverse array of scientific stakeholders worked hard to reach agreement on new journal standards for open and transparent scientific reporting. The envisioned standards are intended to specify practices for authors, reviewers, and editors to follow in order to achieve higher levels of openness than currently exist. The leading idea is that a journal, funding agency, or professional organization could take these standards off the shelf and adopt them in its policy, so that when, say, The Journal for Previously Hard To Get Data starts to move toward a more open data practice, it doesn’t have to puzzle over how to implement this, but can instead simply copy the data-sharing guideline out of the new standards and post it on its website.

The organizers of the sessions, which were presided over by Brian Nosek of the Center for Open Science, had approached methodologists, funding agencies, journal editors, and representatives of professional organizations to achieve a broad set of perspectives on what open science means and how it should be institutionalized. As a result, the meeting felt almost like a political summit. It included high officials from professional organizations like the American Psychological Association (APA) and the Association for Psychological Science (APS), programme directors from the National Institutes of Health (NIH) and the National Science Foundation (NSF), editors of a wide variety of psychological, political, economic, and general science journals (including Science and Nature), and a loose collection of open science enthusiasts and methodologists (that would be me).

(more…)

What to Do If You Are Accused of P-Hacking

In a recent post on Data Colada, University of Pennsylvania Professor Uri Simonsohn discusses what to do in the event that you (a researcher) are accused of having selectively analyzed your data to achieve statistical significance.


Simonsohn states:

It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was obtained via p-hacking.

For example, “a Slate.com post by Andrew Gelman suspected p-hacking in a paper that collected data on 10 colors of clothing, but analyzed red & pink as a single color” (see the authors’ response to the accusation), or “a statistics blog suspected p-hacking after noticing a paper studying number of hurricane deaths relied on the somewhat unusual Negative-Binomial Regression.”

Instinctively, Simonsohn says, a researcher may react to accusations of p-hacking by attempting to justify the specifics of the research design, but if that justification is ex post, the explanation will not be good enough. In fact:

P-hacked findings are by definition justifiable. Unjustifiable research practices involve incompetence or fraud, not p-hacking.

(more…)

Teaching Integrity in Empirical Research

In a recent interview on The Signal, a Library of Congress blog, Richard Ball (Economics Professor at Haverford College and presenter at the 2014 BITSS Summer Institute) and Norm Medeiros (Associate Librarian at Haverford College) discussed Project TIER (Teaching Integrity in Empirical Research) and their experience teaching students how to document their empirical analyses.


What is Project TIER?

For close to a decade, we have been teaching our students how to assemble comprehensive documentation of the data management and analysis they do in the course of writing an original empirical research paper. Project TIER is an effort to reach out to instructors of undergraduate and graduate statistical methods classes in all the social sciences to share with them lessons we have learned from this experience.

What is the TIER documentation protocol?

We gradually developed detailed instructions describing all the components that should be included in the documentation and how they should be formatted and organized. We now refer to these instructions as the TIER documentation protocol. The protocol specifies a set of electronic files (including data, computer code and supporting information) that would be sufficient to allow an independent researcher to reproduce, easily and exactly, all the statistical results reported in the paper.
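The interview excerpt does not reproduce the protocol itself, so the sketch below is purely illustrative: a hypothetical project layout and a single Python entry point that re-runs every step from raw data to reported results. The folder and script names are assumptions, not the official TIER specification.

```python
# Hypothetical layout for a reproducible empirical paper (illustrative only;
# folder and script names are assumptions, not the official TIER protocol):
#
#   original-data/     raw data files, never edited by hand
#   command-files/     scripts that perform every data-management and analysis step
#   analysis-data/     cleaned data produced by the scripts
#   output/            tables and figures exactly as reported in the paper
#   readme.txt         what to run, and in what order
#
# A single entry point rebuilds every reported result from the raw data.
import subprocess

STEPS = [
    "command-files/01_clean_raw_data.py",
    "command-files/02_construct_variables.py",
    "command-files/03_estimate_models.py",
    "command-files/04_make_tables_and_figures.py",
]

for script in STEPS:
    print(f"Running {script} ...")
    subprocess.run(["python", script], check=True)  # stop at the first error

print("Done: everything in output/ was regenerated from original-data/.")
```

The point of such a layout is that an independent researcher needs nothing beyond the raw data, the scripts, and the readme to regenerate every statistic in the paper.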

(more…)

“Research misconduct accounts for a small percentage of total funding”: Study

Retraction Watch

How much money does scientific fraud waste?

That’s an important question, with an answer that may help determine how much attention some people pay to research misconduct. But it’s one that hasn’t been rigorously addressed.

Seeking some clarity, Andrew Stern, Arturo Casadevall, Grant Steen, and Ferric Fang looked at cases in which the Office of Research Integrity had determined there was misconduct in particular papers. In their study, published today in eLife:

View original post 774 more words

Your Question for the Day — What Is “Peer Review”?

Scientific Misconduct in the Middle East

On plagiarism and fraud in the Middle Eastern research community (by Rayna Stamboliyska):

Gallons of digital ink have been spilt discussing depressing laundry lists of misconduct cases in the west (and more recently, in China). There is, however, very little on unethical behaviour in the Arab world, despite the wide number of Mid-Eastern students from local and foreign universities who work and publish, both at home and abroad, prior to entering academia […] Editors cannot and must not, however, be solely held to account for frauds, forgery and plagiarism. Yet nothing suggests that research institutions and universities in the Arab world have engaged in actual policy-making to prevent misconduct […] One cannot speak of policy-making and public oversight without mentioning one very worrisome trend in the MENA [Middle East and North Africa]: the politicisation of science. The most recent example that springs to mind is of course the Egyptian army’s miraculous cure for hepatitis C and AIDS.

Read the full article here.

“Replication Bullying:” Who replicates the replicators?

Political Science Replication

A recent special issue in Social Psychology adds fuel to the debate on data transparency and faulty research. Following an innovative approach, the journal published failed and successful replications instead of typical research papers. A Cambridge scholar, whose paper could not be replicated, now feels treated unfairly by the “data detectives.” She says that the replicators had aimed to “declare the verdict” that they failed to reproduce her results. Her response raises important questions for replications, reproducibility and research transparency.

View original post 1,413 more words

What can be done to prevent the proliferation of errors in academic publications?

Every now and again a paper is published on the number of errors made in academic articles. These papers document the frequency of conceptual errors, factual errors, errors in abstracts, errors in quotations, and errors in reference lists. James Hartley reports that the data are alarming, but suggests a possible way of reducing such errors: perhaps, in the future, a single computer program could match references in the text against correct (pre-stored) references as one writes.
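Hartley’s suggested program is, at bottom, a matching problem between in-text citations and a pre-stored reference list. A minimal Python sketch of the idea (a hypothetical checker with made-up citations, not an existing tool) might look like this:

```python
# Minimal sketch of the idea (hypothetical checker, not an existing program):
# compare "Author, Year" citations in a manuscript against a pre-stored
# reference list, flagging mismatches in both directions.
import re

stored_references = {
    ("Hartley", "2012"),
    ("Peng", "2015"),
}

manuscript = """
Errors in reference lists are common (Hartley, 2012), and shoddy statistics
compound the problem (Peng, 2015; Leek, 2015).
"""

# Very rough pattern for parenthetical "Author, Year" citations.
cited = set(re.findall(r"([A-Z][a-z]+),\s*(\d{4})", manuscript))

print("Citations with no stored reference:", sorted(cited - stored_references))
print("Stored references never cited:    ", sorted(stored_references - cited))
```

A real tool would need to handle multiple authors, “et al.”, and numbered citation styles, but the core check, reconciling the text against a trusted reference store as one writes, is exactly what Hartley proposes.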

(more…)