What to Do If You Are Accused of P-Hacking

In a recent post on Data Colada, University of Pennsylvania Professor Uri Simonsohn discusses what to do in the event that you (a researcher) are accused of having analyzed your data in ways chosen to increase statistical significance.


Simonsohn states:

It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was obtained via p-hacking.

For example, “a Slate.com post by Andrew Gelman suspected p-hacking in a paper that collected data on 10 colors of clothing, but analyzed red & pink as a single color” (see the authors’ response to the accusation), and “a statistics blog suspected p-hacking after noticing a paper studying number of hurricane deaths relied on the somewhat unusual Negative-Binomial Regression.”
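
To make the second kind of accusation concrete, a reader or author can simply refit the analysis under the more standard alternative and compare. Below is a minimal sketch of that check in Python, using simulated count data and made-up variable names (nothing here comes from the hurricane study): it fits the same predictor with a Poisson GLM and a negative-binomial GLM and prints both coefficients and p-values.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs = 92                                  # hypothetical number of observations
severity = rng.normal(size=n_obs)           # hypothetical predictor
mu = np.exp(1.0 + 0.15 * severity)          # weak true effect on the count outcome
deaths = rng.negative_binomial(n=1.5, p=1.5 / (1.5 + mu))  # overdispersed counts

X = sm.add_constant(severity)
fits = {
    "Poisson": sm.GLM(deaths, X, family=sm.families.Poisson()).fit(),
    "Negative binomial": sm.GLM(deaths, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit(),
}
for label, fit in fits.items():
    print(f"{label}: coef = {fit.params[1]:.3f}, p = {fit.pvalues[1]:.3f}")
```

If the effect is significant only under one family, the finding hinges on the modeling choice; if both agree, the choice of negative-binomial regression is not doing the work.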

Instinctively, Simonsohn says, a researcher may react to an accusation of p-hacking by attempting to justify the specifics of his or her research design, but if that justification is ex post, the explanation will not be good enough. In fact:

P-hacked findings are by definition justifiable. Unjustifiable research practices involve incompetence or fraud, not p-hacking.

Simonsohn describes three appropriate ways to respond to an accusation of p-hacking:

  • Right Response #1.  “We decided in advance” 

    If a research design decision that increased the statistical significance of your findings was made before the results were known, then you have not p-hacked.

  • Right Response #2.  “We didn’t decide in advance, but the results are robust” 

    If you cannot show that the methodological choice in question was made ex ante, but can demonstrate that the significance of the results was not dramatically increased by that specific design choice, then you did not p-hack (a minimal sketch of such a check appears after this list).

  • Right Response #3.  “We didn’t decide in advance, and the results are not robust. So we run a direct replication.” 

    Run a direct replication, and if the results are not significant, consider reporting the null results.
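
As a concrete illustration of Right Response #2, here is a minimal sketch of such a robustness check in Python. The 2x2 counts are made up for illustration (they are not the original study’s data): the same test is run under the contested coding (red & pink pooled) and under the narrower coding (red only), and both results are reported.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 tables: rows = wore the target color (yes / no), columns = group A / group B
pooled_red_pink = [[14, 5], [26, 35]]   # contested coding: red & pink counted together
red_only = [[9, 4], [31, 36]]           # narrower coding: red only

for label, table in [("red & pink pooled", pooled_red_pink), ("red only", red_only)]:
    odds_ratio, p_value = fisher_exact(table)
    print(f"{label}: odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

If both codings give similar results, the pooling decision did not manufacture the finding; if the effect survives only under the pooled coding, a direct replication (Response #3) is the better answer.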

Read “False-Positive Psychology” to learn how “flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates.”
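
The paper’s central claim is easy to reproduce in a small simulation. The sketch below uses arbitrary parameters of our own choosing (not the paper’s exact scenarios): it generates data with no true effect, then “p-hacks” each simulated study by testing three outcome variables and re-testing after collecting more observations, keeping the smallest p-value. The resulting false-positive rate lands well above the nominal 5%.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_sims, n_per_group = 5000, 20
false_positives = 0

for _ in range(n_sims):
    # Three outcome variables, no true group difference anywhere
    a = rng.normal(size=(n_per_group, 3))
    b = rng.normal(size=(n_per_group, 3))
    p_values = [ttest_ind(a[:, k], b[:, k]).pvalue for k in range(3)]
    # Flexible data collection: peek, add 10 more observations per group, re-test
    a2 = np.vstack([a, rng.normal(size=(10, 3))])
    b2 = np.vstack([b, rng.normal(size=(10, 3))])
    p_values += [ttest_ind(a2[:, k], b2[:, k]).pvalue for k in range(3)]
    # Flexible reporting: keep whichever analysis "worked"
    if min(p_values) < 0.05:
        false_positives += 1

print(f"False-positive rate with flexible analysis: {false_positives / n_sims:.3f}")
# A single pre-specified test would stay near .05; the flexibility above pushes it far higher.
```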

Find the full post here.

