Berkeley Initiative for Transparency in the Social Sciences


Tag Archives: Academic Journals

Transparency and Openness Promotion Guidelines

By Garret Christensen (BITSS)


BITSS is proud to announce the publication of the Transparency and Openness Promotion Guidelines in Science. The Guidelines are a set of standards in eight areas of research publication:

  • Citation Standards
  • Data Transparency
  • Analytic Methods (Code) Transparency
  • Research Materials Transparency
  • Design and Analysis Transparency
  • Preregistration of Studies
  • Preregistration of Analysis Plans
  • Replication

(more…)

Three Transparency Working Papers You Need to Read

Garret Christensen, BITSS Project Scientist


Several great working papers on transparency and replication in economics have been released in the last few months. Two of them, both about pre-analysis plans, are intended for a symposium in the Journal of Economic Perspectives that I am very much looking forward to. The first, by Muriel Niederle and Lucas Coffman, doesn’t pull any punches with its title: “Pre-Analysis Plans are not the Solution, Replication Might Be.” Niederle and Coffman claim that PAPs don’t decrease the rate of false positives enough to be worth the effort, and that replication may be a better way to get at the truth. Some of their concern stems from the assumption “[t]hat one published paper is the result of one pre-registered hypothesis, and that one pre-registered hypothesis corresponds to one experimental protocol. Neither can be guaranteed.” They’re also not crazy about design-based publications (or “registered reports”). They instead offer a proposal to get replication to take off, calling for the establishment of a Journal of Replication Studies and for researchers to cite replications, both positive and negative, whenever they cite an original work. If these changes were made, they argue, researchers would come to expect replications, and the value of writing and publishing them would rise.

Another working paper on PAPs in economics, titled simply “Pre-Analysis Plans in Economics,” was released recently by Ben Olken. Olken gives a lot of useful background on the origin of PAPs and discusses in detail what should go into them. A reference I found particularly informative is “E9 Statistical Principles for Clinical Trials,” the FDA’s official guidance for trials, especially section V on Data Analysis Considerations. A lot of the transparency practices we’re trying to adopt in economics and the social sciences come from medicine, so it’s nice to see the original source. Olken weighs the benefits of PAPs (increasing confidence in results, making full use of statistical power, and improving relationships with partners such as governments or corporations that may have vested interests in trial outcomes) against the costs: the complexity and challenge of writing all possible papers in advance, the push toward simple, less interesting papers with less nuance, and the reduced ability to learn ex post from your data. He cites Brodeur et al. to argue that the problem of false positives isn’t that large, and concludes that, except for trials involving parties with vested interests, the costs outweigh the benefits.
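For intuition about the false-positive problem that PAPs are meant to address, here is a minimal simulation in Python (our illustration, not code from either paper): with no true treatment effect at all, testing ten outcomes at the 5% level turns up at least one “significant” result in roughly 40% of experiments, which is exactly the fishing that a pre-specified primary outcome rules out.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_subjects, n_outcomes, alpha = 2000, 200, 10, 0.05

experiments_with_false_positive = 0
for _ in range(n_sims):
    # Random treatment assignment with NO true effect on any outcome
    treated = rng.integers(0, 2, n_subjects).astype(bool)
    outcomes = rng.normal(size=(n_subjects, n_outcomes))
    # Test every outcome, as a researcher without a PAP is free to do
    pvals = [stats.ttest_ind(outcomes[treated, j], outcomes[~treated, j]).pvalue
             for j in range(n_outcomes)]
    experiments_with_false_positive += min(pvals) < alpha

# Expect roughly 1 - 0.95**10, i.e. about 0.40
print(f"Share of pure-noise experiments with a 'significant' result: "
      f"{experiments_with_false_positive / n_sims:.2f}")
```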

(more…)

The End of p-values?

Psychology Professors David Trafimow and Michael Marks of New Mexico State University discuss the implications of banning p-values from appearing in published articles.


To combat the practice of p-hacking, the editors of Basic and Applied Social Psychology (BASP) will no longer publish p-values in articles submitted to the journal. The unprecedented move by the journal’s editorial board signals that publishing norms may be changing faster than previously believed, but it also raises certain issues. In a recent article published by Routledge, the editors of BASP, David Trafimow and Michael Marks, raise three key questions associated with the banning of the null hypothesis significance testing procedure (NHSTP).

Question 1: Will manuscripts with p-values be desk rejected automatically?

Answer 1: No […] But prior to publication, authors will have to remove all vestiges of the NHSTP (p-values, t-values, F-values, statements about “significant” differences or lack thereof, and so on).

Question 2: What about other types of inferential statistics such as confidence intervals or Bayesian methods?

Answer 2: Analogous to how the NHSTP fails to provide the probability of the null hypothesis, […] confidence intervals do not provide a strong case for concluding that the population parameter of interest is likely to be within the stated interval. Therefore, confidence intervals also are banned from BASP.
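For concreteness, here is a minimal sketch in Python with invented data (our illustration, not from the editorial) of the quantities at stake: the t-statistic and p-value that authors will have to strip out, the confidence interval banned alongside them, and the kind of descriptive effect size the editorial instead emphasizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 30)    # invented control-group scores
treatment = rng.normal(0.4, 1.0, 30)  # invented treatment-group scores

# Vestiges of the NHSTP that BASP will no longer print
res = stats.ttest_ind(treatment, control)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")

# Also banned: a 95% confidence interval for the mean difference
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / 30 + control.var(ddof=1) / 30)
print(f"difference = {diff:.2f}, "
      f"95% CI ≈ [{diff - 1.96*se:.2f}, {diff + 1.96*se:.2f}]")

# Descriptive statistics and effect sizes remain acceptable
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
print(f"Cohen's d = {diff / pooled_sd:.2f}")
```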

(more…)

The Disturbing Influence of Flawed Research on Your Living Habits

Last year, we featured a story on our blog about the so-called cardiovascular benefits of fish oil, a claim largely based on a seminal research study that had more to do with hearsay than with actual science. After your diet, flawed research is now coming for your exercise routine.


A Danish study published in the Journal of the American College of Cardiology recently made headlines for suggesting that too much jogging could have a negative impact on life expectancy. In a recent New York Times post, economist Justin Wolfers carefully analyzes the study and provides a brilliant response discrediting this overly confident claim:

The researchers asked Danish runners about the speed, frequency and duration of their workouts, categorizing 878 of them as light, moderate or strenuous joggers. Ten years later, the researchers checked government records to see how many of them had died […] Happily, only 17 had. While this was good news for the surviving runners, it was bad news for the researchers, because 17 was clearly too few deaths to discern whether the risk of death was related to running intensity.

Nonetheless, the study claimed that too much jogging was associated with a higher mortality rate […] The evidentiary basis for this claim is weak. It is based on 40 people who were categorized as “strenuous joggers” — among whom only two died. That’s right: The conclusions that received so much attention were based on a grand total of two deaths among strenuous joggers. As Alex Hutchinson of Runner’s World wrote, “Thank goodness a third person didn’t die, or public health authorities would be banning jogging.”

Because the sample size was so small, this difference is not statistically significant. You may have heard the related phrase “absence of evidence does not equal evidence of absence,” and it is particularly relevant here […] In fact, the main thing the study shows is that small samples yield unreliable estimates that cannot be reliably discerned from the effects of chance.
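Wolfers’s point is easy to check. A minimal sketch in Python (our illustration): an exact 95% confidence interval for a mortality rate estimated from 2 deaths among 40 people is so wide that it is consistent with almost any conclusion about strenuous jogging.

```python
from scipy import stats

deaths, n = 2, 40  # strenuous joggers in the Danish study

# Exact (Clopper-Pearson) 95% interval for the underlying mortality rate
ci = stats.binomtest(deaths, n).proportion_ci(confidence_level=0.95)
print(f"point estimate: {deaths / n:.1%}, "
      f"95% CI: [{ci.low:.1%}, {ci.high:.1%}]")
# Roughly 0.6% to 17%: anywhere from near-zero risk to several
# times the rate seen among the lighter joggers
```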

Wolfers goes on to highlight other weaknesses in the Danish study. This latest case of an unreliable research finding receiving wide media coverage brings out a couple of important points that are central to our work at BITSS:

(more…)

UC Press Launches New Open Access Publications

The University of California’s publishing house, UC Press, has announced the launch of two new publications, Collabra and Luminos. Collabra will publish academic articles across many academic disciplines including the life, environmental, social and behavioral sciences. Luminos will publish monographs across all fields of study. As the UC Press website indicates, the publications will “not only [share] the research but also the value created by the academic community.”

With low up-front APCs [Article Processing Charges], a sponsorship fund for authors unable to pay, and sharing actual revenue with editors and reviewers, Collabra builds a fair and welcoming ecosystem. (Collabra Website)

Luminos is also based on an innovative publishing model:

[It] shares the cost burden of publishing in manageable amounts across the academic community. For each title, UC Press makes a significant contribution, augmented by membership funds from supporting libraries. Authors will then be asked to secure a title publication fee to cover the remaining costs. Additional revenue from supporting libraries and print sales will help to support an author waiver fund. (UCSD Library Blog)

The launch of the two publications coincides with a much-needed push toward greater access and openness in academic publishing, and will hopefully increase the benefits of the public goods produced by researchers and their affiliated academic institutions.

For more information on either publication, contact Lorraine Weston at lweston@ucpress.edu.

Science Magazine Releases Special Issue on Digital Privacy

Yesterday, January 29th, Science Magazine released a new Special Issue entitled The End of Privacy. In line with its theme, the issue will be available online at no cost for the first week following publication. Take this chance to look through it!

For scientists, the vast amounts of data that people shed every day offer great new opportunities but new dilemmas as well. New computational techniques can identify people or trace their behavior by combining just a few snippets of data. There are ways to protect the private information hidden in big data files, but they limit what scientists can learn; a balance must be struck.
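The “few snippets of data” worry is concrete. In the classic linkage attack, an “anonymized” research file is joined to a public record on shared quasi-identifiers such as ZIP code, birth year, and sex. A minimal sketch in Python with invented data (our illustration, not an example from the issue):

```python
import pandas as pd

# An "anonymized" research release: names removed, quasi-identifiers kept
medical = pd.DataFrame({
    "zip":        ["94704", "94704", "10001"],
    "birth_year": [1975, 1988, 1975],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["A", "B", "C"],
})

# A public record (e.g., a voter roll) with the same snippets plus a name
voters = pd.DataFrame({
    "name":       ["J. Doe"],
    "zip":        ["94704"],
    "birth_year": [1975],
    "sex":        ["F"],
})

# Joining on just three attributes re-attaches a name to a diagnosis
print(voters.merge(medical, on=["zip", "birth_year", "sex"]))
```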

Boldly declaring that “[p]rivacy as we have known it is ending and we’re only beginning to fathom the consequences,” the Special Issue deals with a host of topics related to different facets of data deposition, use, and confidentiality. The publication includes formal reports and news articles.

(more…)

Facilitating Radical Change in Publication Standards: Overview of COS Meeting Part II

Originally posted on the Open Science Collaboration by Denny Borsboom


This train won’t stop anytime soon.

That’s what I kept thinking during the two-day sessions in Charlottesville, where a diverse array of scientific stakeholders worked hard to reach agreement on new journal standards for open and transparent scientific reporting. The standards are intended to specify practices for authors, reviewers, and editors to follow in order to achieve higher levels of openness than currently exist. The leading idea is that a journal, funding agency, or professional organization could take these standards off the shelf and adopt them in its policy. That way, when, say, The Journal for Previously Hard To Get Data decides to move to a more open data practice, its editors don’t have to puzzle over how to implement the change; they can simply copy the data-sharing guideline out of the new standards and post it on their website.

The organizers of the sessions, which were presided over by Brian Nosek of the Center for Open Science, had approached methodologists, funding agencies, journal editors, and representatives of professional organizations to achieve a broad set of perspectives on what open science means and how it should be institutionalized. As a result, the meeting felt almost like a political summit. It included high officials from professional organizations like the American Psychological Association (APA) and the Association for Psychological Science (APS), program directors from the National Institutes of Health (NIH) and the National Science Foundation (NSF), editors of a wide variety of psychological, political, economic, and general science journals (including Science and Nature), and a loose collection of open science enthusiasts and methodologists (that would be me).

(more…)