Why are journals so obsessed with "impact factors"? (The five-year impact factor is the average number of citations per article over a five-year window.) They're often calculated to three decimal places, and publishers trumpet victory when they go from (say) 1.225 to 1.311! It's hard to think of a dumber statistic, or a dumber over-interpretation. Are the digits after the decimal point anything more than noise, and for that matter, are the digits before the decimal much more than noise?
Why don't journals instead use the same citation indexes used for individuals? The leading index seems to be the h-index, which is the largest integer h such that an individual has h papers, each cited at least h times. I don't know who cooked up the h-index, and surely it has issues too, but the gurus love it, and in my experience it tells the truth.
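For concreteness, here is a minimal sketch (mine, not anything official) of how the h-index falls out of that definition, given a plain list of per-paper citation counts:

```python
# Sketch only: compute the h-index from per-paper citation counts.
# Sort descending; h is the largest rank r such that the r-th most
# cited paper has at least r citations.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3, 1, 0]))  # 4: four papers with at least 4 citations each
```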
Even better, why not stop obsessing over clearly insufficient statistics of any kind? I propose instead looking at what I'll call a "citation signature plot" (CSP): simply plot the number of citations for the most-cited paper, the number for the second-most-cited paper, and so on. (Use whatever window(s) you want.) The CSP reveals everything, instantly and visually. How high is the CSP for the top papers? How quickly, and with what pattern, does it approach zero? And so on. It's all there.
Google Scholar CSPs are easy to make for individuals, and they're tremendously informative. They'd be only slightly harder to make for journals. I'd love to see some.