Tag Archives: Pre-Specification
The original Upshot article advocates for a new publishing structure called Registered Reports (RRs):
A research publishing format in which protocols and analysis plans are peer reviewed and registered prior to data collection, then published regardless of the outcome.
In the following interview with the Washington Post, Nyhan explains in greater detail why RRs are more effective than other tools at preventing publication bias and data mining. He begins by explaining the limitations of preregistration.
As I argued in a white paper, […] it is still too easy for publication bias to creep into decisions by authors to submit papers to journals as well as evaluations by reviewers and editors after results are known. We’ve seen this problem with clinical trials, where selective and inaccurate reporting persists even though preregistration is mandatory.
“Interdisciplinary initiatives are where most progress happens and where exciting innovations come from”
Maya Petersen, Perspective from Medicine
This is the fourth post of a video series in which we ask leading social science academics and experts to discuss research transparency in their discipline. The interview was recorded on December 13, 2013 at the University of California, Berkeley.
Leif Nelson making the case for pre-registration:
I recently joined a large group of academics in co-authoring a paper looking at how political science, economics, and psychology are working to increase transparency in scientific publications. Psychology is leading, by the way.
Working on that paper (and the figure below) actually changed my mind about something. A couple of years ago, when Joe, Uri, and I wrote False Positive Psychology, we were not really advocates of preregistration (a la clinicaltrials.gov). We saw it as an implausible superstructure of unspecified regulation. Now I am an advocate. What changed?
By Guillaume Kroll (CEGA)
BITSS held a session on research transparency at the 2014 Annual Meeting of the American Economic Association on January 5 in Philadelphia. UC Berkeley economist Ted Miguel, who presided over the session, kicked off the discussion by highlighting the growing interest in transparency across social science disciplines. Drawing on influential work in economics, psychology, political science, and medical trials, Miguel argued that the use of rigorous experimental research designs, which has become more widespread over the last decade, may not be enough to ensure credible bodies of scientific evidence on which policy decisions can be based.
“The incentives, norms and institutions that govern social science research contribute to these problems”, says Miguel. “There is ample evidence of publication bias, with a large number of studies reporting p-values just below 0.05”. “Statistically significant, novel and theoretically ‘tidy’ results are published more easily than null, replication, and perplexing results, even conditional on the quality of the research design”. In addition, “social science journals too rarely require data-sharing or adherence to reporting standards”.
Miguel proposes the adoption of a new set of practices by the scientific community. Based on a paper that was recently published in Science, in which he was a co-author, he puts forward three mechanisms to increase transparency in scientific reporting: the disclosure of key details about data collection and analysis, the registration of pre-analysis plans, and open access to research data and material. “The emerging registration system in social science may become a model for medical trials, where research plans have traditionally been much less detailed”, says Miguel. “We need to foster the adoption of new practices to improve the quality and credibility of the social science enterprise”.
In the January 3, 2014 edition of Science Magazine, an interdisciplinary group of 19 BITSS affiliates reviews recent efforts to promote transparency in the social sciences and makes the case for more stringent norms and practices to help boost the quality and credibility of research findings.
The authors, led by UC Berkeley economist Ted Miguel, deplore a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results get published more easily than null, complicated, or replication outcomes. This misalignment between scholarly incentives and scholarly values, the authors argue, spurs researchers to present their data in a way that is more “publishable” – at the expense of accuracy.
Coupled with limited accountability for researchers’ errors, this problem has produced a somewhat distorted body of evidence that exaggerates the effectiveness of social and economic programs. The stakes are high because policy decisions based on flawed research affect millions of people’s lives.
An eight-step strategy to increase the integrity and credibility of social science research using the new statistics, by Geoff Cumming.
We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial.
The full article is available here.
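To make Cumming’s estimation-based approach concrete, here is a minimal sketch of reporting an effect size with a confidence interval instead of a bare significance test. The data and group names are hypothetical, and the interval uses a normal approximation rather than a t distribution for simplicity; this is an illustration of the idea, not code from the article.

```python
# Hypothetical two-group comparison, reported "new statistics" style:
# an effect size (Cohen's d) and a 95% CI for the mean difference,
# rather than a lone p-value from a null-hypothesis significance test.
from statistics import NormalDist, mean, stdev

treatment = [5.1, 6.3, 4.8, 7.0, 5.9, 6.4, 5.5, 6.8]  # made-up scores
control = [4.2, 5.0, 4.6, 5.3, 4.1, 5.7, 4.9, 4.4]

n1, n2 = len(treatment), len(control)
m1, m2 = mean(treatment), mean(control)
s1, s2 = stdev(treatment), stdev(control)

# Cohen's d, standardizing the mean difference by the pooled SD
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (m1 - m2) / pooled_sd

# 95% CI for the raw mean difference (normal approximation)
se_diff = (s1**2 / n1 + s2**2 / n2) ** 0.5
z = NormalDist().inv_cdf(0.975)  # ~1.96
ci = (m1 - m2 - z * se_diff, m1 - m2 + z * se_diff)

print(f"mean difference = {m1 - m2:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"Cohen's d = {cohens_d:.2f}")
```

The point of reporting in this form is that the interval conveys both the magnitude and the uncertainty of the effect, and intervals from multiple studies can feed directly into a meta-analysis — the third component of the practices Cumming recommends.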