Berkeley Initiative for Transparency in the Social Sciences

Tag Archives: Integrity

Influential Paper on Gay Marriage Might Be Marred by Fraudulent Data

Harsh scrutiny of an influential political science experiment highlights the importance of transparency in research.


The paper, from UCLA graduate student Michael LaCour and Columbia University Professor Donald Green, was published in Science in December 2014. It asserted that short conversations with gay canvassers could not only change people’s minds on a divisive social issue like same-sex marriage, but could also have a contagious effect on the relatives of those in contact with the canvassers. The paper received wide attention in the press.

Yet three days ago, two graduate students from UC Berkeley, David Broockman and Joshua Kalla, published a response to the study, pointing to a number of statistical oddities and to discrepancies between how the experiment was described in the paper and how it appears to have actually been conducted. Earlier in the year, impressed by the paper's findings, Broockman and Kalla had attempted an extension of the study, building on the original data set. It was then that they became aware of irregularities in the study's methodology and decided to notify Green.

Reviewing the comments from Broockman and Kalla, Green, who was not involved in the original data collection, quickly became convinced that something was wrong – and on Tuesday, he submitted a letter to Science requesting the retraction of the paper. Green shared his view on the controversy in a recent interview, reflecting on what it meant for the broader practice of social science and highlighting the importance of integrity in research.


P-values are Just the Tip of the Iceberg

Roger Peng and Jeffrey Leek of Johns Hopkins University claim that “ridding science of shoddy statistics will require scrutiny of every step, not merely the last one.”


This blog post originally appeared in Nature on April 28, 2015 (see here).

There is no statistic more maligned than the P value. Hundreds of papers and blogposts have been written about what some statisticians deride as ‘null hypothesis significance testing’ (NHST; see, for example, go.nature.com/pfvgqe). NHST deems whether the results of a data analysis are important on the basis of whether a summary statistic (such as a P value) has crossed a threshold. Given the discourse, it is no surprise that some hailed as a victory the banning of NHST methods (and all of statistical inference) in the journal Basic and Applied Social Psychology in February.

Such a ban will in fact have scant effect on the quality of published science. There are many stages to the design and analysis of a successful study. The last of these steps is the calculation of an inferential statistic such as a P value, and the application of a ‘decision rule’ to it (for example, P < 0.05). In practice, decisions that are made earlier in data analysis have a much greater impact on results — from experimental design to batch effects, lack of adjustment for confounding factors, or simple measurement error. Arbitrary levels of statistical significance can be achieved by changing the ways in which data are cleaned, summarized or modelled.
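To make the point concrete, here is a minimal simulation (our own illustration, with assumed numbers of outcomes and subjects, not taken from the Nature piece). Even when every null hypothesis is true, measuring twenty unrelated outcomes and reporting only the best p-value produces “significance” in roughly 64% of experiments (1 - 0.95^20):

    # A minimal sketch of specification searching under a true null.
    # All quantities (20 outcomes, 50 subjects per group) are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_outcomes, n = 1000, 20, 50
    hits = 0
    for _ in range(n_sims):
        treat = rng.normal(size=(n, n_outcomes))     # treatment group, no true effect
        ctrl = rng.normal(size=(n, n_outcomes))      # control group
        pvals = stats.ttest_ind(treat, ctrl).pvalue  # one t-test per outcome
        hits += pvals.min() < 0.05                   # report only the "best" result
    print(f"share of null experiments with some p < 0.05: {hits / n_sims:.2f}")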


Recap of Research Integrity in Economics Session at ASSA 2015

By Garret Christensen (BITSS)


BITSS just got back from the ASSA conference, the major annual gathering of economists. The conference largely serves to help new PhD economists find jobs, but there are of course sessions of research presentations, a media presence, and sometimes big names like the Chair of the Federal Reserve in attendance. BITSS faculty member Ted Miguel presided over a session on research transparency. The session featured presentations by Eva Vivalt (NYU), Brian Nosek (UVA), and Richard Ball (Haverford College).

Vivalt presented part of her job market paper, which shows that, at least in development economics, randomized trials seem to result in less publication bias and/or specification searching than other types of evaluations.

Nosek’s presentation covered a broad range of transparency topics, from his perspective as a psychologist. His discussant, economist Justin Wolfers, concurred completely and focused on how Nosek’s lessons could apply to economics.

As an economist myself, I thought a few of Wolfers's points were interesting:

  1. The Quarterly Journal of Economics should really have a data-sharing requirement.
  2. Economists don't do enough meta-analysis (Ashenfelter et al.'s paper on the estimates of the returns to education is a great example of the work we could and should be doing; a minimal sketch of the underlying computation follows this list).
  3. Somewhat tongue-in-cheek (I think), Wolfers discussed the fool/cheat paradox: whenever researchers are caught with a mistake in their work, they can admit either to having made an honest mistake or to having cheated. Almost everyone chooses the “fool” option, even though there is not much one can do to change one's own intelligence. Why does nobody cop to having cheated? It would be easier to make a credible case for mending your ways if you admitted to cheating.
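For readers unfamiliar with the mechanics behind point 2, here is a minimal sketch of a fixed-effect meta-analysis using inverse-variance weighting; the numbers are hypothetical and are not taken from Ashenfelter et al.:

    # Pool per-study estimates by inverse-variance (precision) weighting.
    # Estimates and standard errors below are made up for illustration.
    import numpy as np

    estimates = np.array([0.08, 0.10, 0.06, 0.12, 0.09])   # e.g., returns to a year of schooling
    std_errors = np.array([0.02, 0.03, 0.01, 0.04, 0.02])

    weights = 1.0 / std_errors**2                  # more precise studies get more weight
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    print(f"pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")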


This Monday at AEA2015: Transparency and Integrity in Economic Research Panel

This January 5th at 10:15am, at the American Economic Association Annual Meeting in Boston, MA (Sheraton Hotel, Commonwealth Room).


Session: Promoting New Norms for Transparency and Integrity in Economic Research

Presiding: Edward Miguel (UC Berkeley)

Panelists:

  • Brian Nosek (University of Virginia): “Scientific Utopia: Improving Openness and Reproducibility in Scientific Research”
  • Richard Ball (Haverford College): “Replicability of Empirical Research: Classroom Instruction and Professional Practice”
  • Eva Vivalt (New York University): “Bias and Research Method: Evidence from 600 Studies”

Discussants:

  • Aprajit Mahajan (UC Berkeley)
  • Justin Wolfers (University of Michigan)
  • Kate Casey (Stanford University)

More info here. Plus don’t miss the BITSS/COS Exhibition Booth at the John B. Hynes Convention Center (Level 2, Exhibition Hall D).

The New Statistics: A Pathway to Research Integrity

An eight-step strategy to increase the integrity and credibility of social science research using the new statistics, by Geoff Cumming.

We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial.
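As a concrete illustration of the estimation approach (a minimal sketch with simulated data, not an example from Cumming's article), the snippet below reports a standardized effect size and a 95% confidence interval rather than a significance verdict:

    # Estimate an effect size (Cohen's d) and a 95% CI for the mean difference.
    # The data are simulated; a real analysis would use the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    treatment = rng.normal(0.4, 1.0, size=60)    # simulated treatment group
    control = rng.normal(0.0, 1.0, size=60)      # simulated control group

    diff = treatment.mean() - control.mean()
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    cohens_d = diff / pooled_sd                  # standardized effect size

    se = pooled_sd * np.sqrt(2 / 60)             # SE of the mean difference
    t_crit = stats.t.ppf(0.975, df=2 * 60 - 2)
    ci_low, ci_high = diff - t_crit * se, diff + t_crit * se
    print(f"d = {cohens_d:.2f}, 95% CI for the difference: ({ci_low:.2f}, {ci_high:.2f})")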

The full article is available here.