The term "file drawer problem" was coined long ago. It refers to the bias in published empirical studies toward "large", or "significant", or "good" estimates. That is, "small"/"insignificant"/"bad" estimates remain unpublished, in file drawers (or, in modern times, on hard drives). Correcting the bias is a tough nut to crack, since little is known about the nature or number of unpublished studies. For the latest, together with references to the relevant earlier literature, see the interesting new NBER working paper, IDENTIFICATION OF AND CORRECTION FOR PUBLICATION BIAS, by Isaiah Andrews and Maximilian Kasy. There's an ungated version and appendix here, and a nice set of slides here.
Abstract: Some empirical results are more likely to be published than others. Such selective publication leads to biased estimators and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study's results, the first based on systematic replication studies and the second based on meta-studies. For known conditional publication probabilities, we propose median-unbiased estimators and associated confidence sets that correct for selective publication. We apply our methods to recent large-scale replication studies in experimental economics and psychology, and to meta-studies of the effects of minimum wages and de-worming programs.
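To see the mechanism the abstract describes, here is a minimal simulation sketch (not the paper's estimators) of how selective publication biases published estimates. All numbers (true_effect, se, the 5% significance cutoff) are illustrative assumptions: every "study" estimates the same true effect with noise, but only statistically significant results are "published", so the average published estimate overshoots the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2        # hypothetical common true effect across studies
se = 0.1                 # hypothetical standard error of each study's estimate
n_studies = 100_000

# Each "study" produces a noisy estimate of the true effect.
estimates = rng.normal(true_effect, se, n_studies)

# Stylized selective publication: only results significant at the 5% level
# (|estimate / se| > 1.96) make it out of the file drawer.
published = estimates[np.abs(estimates / se) > 1.96]

print(f"Mean of all estimates:       {estimates.mean():.3f}")   # close to 0.20
print(f"Mean of published estimates: {published.mean():.3f}")   # noticeably larger
print(f"Share published:             {published.size / n_studies:.2%}")
```

The published mean is biased upward because selection truncates away the small and insignificant draws. Andrews and Kasy's contribution is to identify the publication probability as a function of results (from replications or meta-studies) and then invert that selection to recover median-unbiased estimates and valid confidence sets.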