Monday, September 26, 2016
Fascinating Conference at Chicago
I just returned from the University of Chicago conference, "Machine Learning: What's in it for Economics?" Lots of cool things percolating. I'm teaching a Penn Ph.D. course later this fall on aspects of the ML/econometrics interface. Feeling really charged.
By the way, I hadn't yet been to the new Chicago economics "cathedral" (Saieh Hall for Economics) and the Becker Friedman Institute. Wow. What an institution, both intellectually and physically.
Tuesday, September 20, 2016
On "Shorter Papers"
Journals should not corral shorter papers into sections like "Shorter Papers". Doing so sends a subtle (actually unsubtle) message that shorter papers are basically second-class citizens, somehow less good, or less important, or less something -- not just less long -- than longer papers. If a paper is above the bar, then it's above the bar, and regardless of its length it should be published simply as a paper, not a "shorter paper", or a "note", or anything else. Many shorter papers are much more important than the vast majority of longer papers.
Monday, September 12, 2016
Time-Series Econometrics and Climate Change
It's exciting to see time series econometrics contributing to the climate change discussion.
Check out the upcoming CREATES conference, "Econometric Models of Climate Change", here.
Here are a few good examples of recent time-series climate research, in chronological order. (There are many more. Look through the reference lists, for example, in the 2016 and 2017 papers below.)
Pierre Perron et al. (2013) in Nature.
Peter Phillips et al. (2016) in Nature.
Proietti and Hillebrand (2017), forthcoming in Journal of the Royal Statistical Society.
Tuesday, September 6, 2016
Inane Journal "Impact Factors"
Why are journals so obsessed with "impact factors"? (The five-year impact factor is average citations per article in a five-year window.) They're often calculated to three decimal places, and publishers trumpet victory when they go from (say) 1.225 to 1.311! It's hard to think of a dumber statistic, or a dumber over-interpretation. Are the numbers after the decimal point anything more than noise? For that matter, are the numbers before the decimal much more than noise?
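To get a feel for how much of an impact factor is noise, here is a quick back-of-the-envelope simulation. It is illustrative only: the lognormal citation distribution is assumed, not fitted to any real journal. With a couple hundred articles in the window, the sampling standard deviation of the mean easily swamps the decimal places.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, not fitted: citation counts are heavy-tailed, so model a
# journal's five-year window of articles with lognormal draws.
n_articles = 200
sim_ifs = [
    rng.lognormal(mean=0.5, sigma=1.2, size=n_articles).mean()
    for _ in range(1000)
]
print(f"simulated impact factor: {np.mean(sim_ifs):.3f}")
print(f"sampling std of the mean: {np.std(sim_ifs):.3f}")  # roughly 0.4 -- the decimals are noise
```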
Why don't journals instead use the same citation indexes used for individuals? The leading index seems to be the h-index, which is the largest integer h such that an individual has h papers, each cited at least h times. I don't know who cooked up the h-index, and surely it has issues too, but the gurus love it, and in my experience it tells the truth.
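As a concrete check on that definition, here is a minimal sketch (the function name is mine):

```python
def h_index(citations):
    """Largest h such that at least h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, 3 times: four papers have >= 4 cites, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```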
Even better, why not stop obsessing over clearly insufficient statistics of any kind? I propose instead looking at what I'll call a "citation signature plot" (CSP): simply plot the number of cites for the most-cited paper, the number of cites for the second-most-cited paper, and so on. (Use whatever window(s) you want.) The CSP reveals everything, instantly and visually. How high is the CSP for the top papers? How quickly, and with what pattern, does it approach zero? Etc., etc. It's all there.
Google Scholar CSPs are easy to make for individuals, and they're tremendously informative. They'd be only slightly harder to make for journals. I'd love to see some.
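In that spirit, a minimal CSP sketch in Python with matplotlib. The citation counts below are hypothetical, standing in for one author's scraped Google Scholar record:

```python
import matplotlib.pyplot as plt

def citation_signature_plot(citations, label=None):
    """Plot cites for the most-cited paper, the second-most-cited, and so on."""
    ranked = sorted(citations, reverse=True)
    plt.plot(range(1, len(ranked) + 1), ranked, marker="o", label=label)
    plt.xlabel("Paper rank (most-cited first)")
    plt.ylabel("Citations")
    plt.title("Citation signature plot")

# Hypothetical citation counts, standing in for one author's record.
citation_signature_plot([850, 420, 390, 210, 150, 90, 60, 40, 22, 15, 9, 4, 1],
                        label="example author")
plt.legend()
plt.show()
```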