Is there really a "credibility crisis" in the sciences that use statistics, as some seem to fear these days? I think not; generally I'm on board with Deming's "In God we trust, all others bring data." Of course there are issues, but they're hardly new. Some simply reflect poor understanding of statistics. For example, a Bayesian calculation of post-study probability, \( P(H_0~\text{true} \mid \text{data}) \), is very different from a classical \(p\)-value, \( P(\text{data} \mid H_0~\text{true}) \). The former can be large even when the latter is very small, yet the two are often naively confused. Other issues are real -- like the effects of "searching for asterisks" (data mining, in the bad sense) and the corresponding "file-drawer problem," in which "insignificant" results languish in file drawers, unsubmitted, unpublished, and unseen -- but lots of existing and ongoing work is helping us to confront them.
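To see concretely how the two can diverge, here's a back-of-the-envelope Bayes calculation (the prior, size, and power below are illustrative assumptions, not estimates from any study). Suppose \( P(H_0~\text{true}) = 0.9 \), tests are run at size \( \alpha = 0.05 \), and power against the alternative is \( 0.5 \). Then, conditional on a "significant" result,
\[
P(H_0~\text{true} \mid \text{reject}) = \frac{0.05 \times 0.9}{0.05 \times 0.9 + 0.5 \times 0.1} = \frac{0.045}{0.095} \approx 0.47,
\]
so under these assumptions nearly half of the asterisk-bearing results come from true nulls, even though each has \( p < .05 \).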
It's important, however, always to be on alert. Here's some reading on the issue of \( P(H_0~\text{true} \mid \text{data}) \) vs. \( P(\text{data} \mid H_0~\text{true}) \), which has gotten fresh attention recently. Cohen (1994) is classic, as is its title, "The Earth Is Round (\(p < .05\))." Fast-forwarding twenty years, the Maniadis et al. (2014) AER piece is very interesting (also see the January 2014 issue of Econ Journal Watch, which arrived as spam but turned out to contain an interesting comment with a rejoinder by Maniadis et al.). Last but not least, see Dick Startz's 2013 working paper.