An interesting piece on p-fishing and what we can do about it.
There has been a lot of attention recently to the perils of p-fishing in political science. P-fishing refers to the selective reporting of statistical results that cross some threshold of statistical significance (customarily, p < .05). The problem arises because researchers may run many more analyses than they actually report, and given model or specification uncertainty, it is easy to report only those results that are consistent with the inferences the analyst wants to draw. A good discussion of p-fishing may be found in this CEGA blog forum. The issue of p-fishing isn't new, but the attention it's getting is, alongside renewed attention to publication bias and related concerns.
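To see why this inflates false positives, consider a minimal simulation (not from the post itself, just an illustrative sketch): under the null hypothesis, each test's p-value is uniform on (0, 1), so an analyst who runs several independent specifications and reports only the smallest p-value will cross the .05 threshold far more than 5% of the time.

```python
import random

random.seed(0)

def fished_significance_rate(n_specs, n_sims=100_000, alpha=0.05):
    """Monte Carlo estimate of how often at least one of n_specs
    independent null tests comes out 'significant' at alpha.
    Under the null, each p-value is Uniform(0, 1)."""
    hits = 0
    for _ in range(n_sims):
        # p-fishing: keep only the smallest p-value across specifications
        if min(random.random() for _ in range(n_specs)) < alpha:
            hits += 1
    return hits / n_sims

for k in (1, 5, 20):
    print(f"{k:>2} specifications: {fished_significance_rate(k):.3f}")
```

With one specification the rate is about .05, as advertised; with 5 it rises to roughly 1 − .95⁵ ≈ .23, and with 20 to roughly .64. The analyst's nominal significance level no longer describes the actual error rate.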
Solutions to the p-fishing problem come in two flavors: one targets temptations, the other incentives. The first is pre-registration (see, e.g., this EGAP initiative): specify publicly how you intend to analyze your data…