Sunday, September 27, 2015
Check out this "Retraction Watch" post, forwarded by a reader:
http://retractionwatch.com/2014/11/11/overly-honest-references-should-we-cite-the-crappy-gabor-paper-here/
Really funny. Except that it's a little close to home. I suspect that we've all had a few such accidents, or at least near-accidents, and with adjectives significantly stronger than "crappy". I know I have.
Thursday, September 24, 2015
Coolest Paper at 2015 Jackson Hole
The Faust-Leeper paper is wild and wonderful. The friend who emailed it said, "Be prepared, it’s very different but a great picture of real-time forecasting..." He got it right.
Actually his full email was, "Be prepared, it’s very different but a great picture of real-time forecasting, and they quote Zarnowitz." (He and I always liked and admired Victor Zarnowitz. But that's another post.)
The paper shines its light all over the place, and different people will read it differently. I did some spot checks with colleagues. My interpretation below resonated with some, while others wondered if we had read the same paper. Perhaps, as with Keynes, we'll never know exactly what Faust-Leeper really, really, really meant.
I read Faust-Leeper as speaking to factor analysis in macroeconomics and finance, arguing that dimensionality reduction via factor structure, at least as typically implemented and interpreted, is of limited value to policymakers, although the paper never uses wording like "dimensionality reduction" or "factor structure".
If Faust-Leeper are doubting factor structure itself, then I think they're way off base. It's no accident that factor structure is at the center of both modern empirical/theoretical macro and modern empirical/theoretical finance. It's really there and it really works.
Alternatively, if they're implicitly saying something like this, then I'm interested:
Small-scale factor models involving just a few variables and a single common factor (or even two factors like "real activity" and "inflation") are likely missing important things, and are therefore incomplete guides for policy analysis.
Or, closely related and more constructively:
We should cast a wide net in terms of the universe of observables from which we extract common factors, and the number of factors that we extract. Moreover we should examine and interpret not only common factors, but also allegedly "idiosyncratic" factors, which may actually be contemporaneously correlated, time dependent, or even trending, due to mis-specification.
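To fix ideas on that second reading, here is a minimal sketch in Python (my own toy illustration on simulated data, emphatically not anything from the paper): extract principal-components factors from a wide panel, then inspect the allegedly "idiosyncratic" residuals for leftover cross-correlation and persistence, exactly the symptoms of an under-specified factor space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated wide panel: T periods, N observables, k_true persistent factors
T, N, k_true = 200, 50, 3
F = np.zeros((T, k_true))
for t in range(1, T):
    F[t] = 0.8 * F[t - 1] + rng.standard_normal(k_true)   # AR(1) factors
Lam = rng.standard_normal((N, k_true))                    # loadings
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))         # common + noise

# Standardize, then extract k factors by principal components (via SVD)
Z = (X - X.mean(0)) / X.std(0)
k = 2                                 # deliberately one factor too few
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :k] * s[:k]            # T x k estimated common factors
loadings = Vt[:k].T                   # N x k estimated loadings

# "Idiosyncratic" residuals: whatever the k factors fail to explain
resid = Z - factors @ loadings.T

# Diagnostics: with too few factors the residuals are anything but
# idiosyncratic -- the omitted factor leaves them cross-correlated
# and persistent
offdiag = np.corrcoef(resid.T)[np.triu_indices(N, 1)]
acf1 = np.mean([np.corrcoef(resid[:-1, i], resid[1:, i])[0, 1]
                for i in range(N)])
print(f"max |cross-correlation| among residuals: {np.abs(offdiag).max():.2f}")
print(f"mean lag-1 autocorrelation of residuals: {acf1:.2f}")
```

Re-running with k = 3 makes both diagnostics collapse toward zero, which is the point: whether factor structure "works" depends on casting the net, and the factor count, widely enough.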
Enough. Read it for yourself.
[General note: My use of terms like "factor modeling" throughout this post should be broadly interpreted to include not only explicit reduced-form statistical/econometric dynamic factor modeling, but also structural DSGE modeling.]
Wednesday, September 16, 2015
Warning Problem Hopefully Solved
If during the last month you got a warning when accessing No Hesitations, I may have found and fixed the problem, finally. (This happened once before.) There were a couple of clearly bogus comments, posted anonymously, containing links that may have been phishing. I have now deleted the offending comments and banned their posters, and No Hesitations has now been removed from any/all blacklists, as far as I know.
If for some reason you still get a warning -- now or ever -- please email me with as much information as possible (browser, any add-ons like Microsoft SmartScreen, any other security software on your machine or institution-wide, etc.). And if you're offered a way to report the warning as incorrect, please do.
Thanks for your support.
Monday, September 14, 2015
Cochrane on Point vs. Density Forecasting
I recently blogged on Manski's call for uncertainty estimates for economic statistics. Of course we should also acknowledge the uncertainty in economic forecasts (with or without acknowledgment of data uncertainty, and with is better than without).
Some of us have been pushing applied interval and density forecasting for years, and of course Bayesians have been pushing for centuries. The quantitative finance and risk management communities have been largely receptive, whereas macroeconomics has been slower, notwithstanding the Bank of England's justly-famous "fan charts."
From a recent post on John Cochrane's Grumpy Economist:
... conditioning decisions on a forecast, cranked out to two decimal places, is a bad idea. Economic policy should embrace uncertainty! ... This is really a big deal. ... All forecasts ... should have error bars. ... Knowing what you don't know is real knowledge.
Hear, hear!
Surely statistical "error bars" as conventionally calculated are themselves often too tight, as they rely on a host of assumptions and in any event fail to capture unknown and unknowable sources of forecast uncertainty. But they're certainly a step in the right direction.
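To make the point concrete, here is a minimal sketch (my own toy example, nothing from Cochrane's post) of the cheapest possible density forecast: the h-step-ahead predictive distribution from a Gaussian AR(1) with parameters treated as known. Real fan charts must also confront parameter and model uncertainty, which these bands ignore, so they understate uncertainty for exactly the reasons just mentioned.

```python
import numpy as np

# Toy density forecast from a Gaussian AR(1): y_t = c + phi * y_{t-1} + eps_t,
# eps_t ~ N(0, sigma^2). All parameters are invented for illustration; in
# practice they are estimated, and parameter uncertainty widens the bands.
c, phi, sigma = 0.5, 0.8, 1.0
y_last = 2.0     # last observed value
H = 8            # forecast horizon

m, v = y_last, 0.0
for h in range(1, H + 1):
    m = c + phi * m              # h-step-ahead point forecast
    v = sigma**2 + phi**2 * v    # h-step-ahead forecast-error variance
    lo, hi = m - 1.645 * np.sqrt(v), m + 1.645 * np.sqrt(v)
    print(f"h={h}: point {m:5.2f}, 90% interval [{lo:5.2f}, {hi:5.2f}]")
```

Stacking such intervals at several coverage levels across horizons is all a fan chart is.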
Thursday, September 10, 2015
The Econ Ph.D. Placement Network
Speaking of interesting applications of network connectedness measures, check out Ricky Vohra's latest post on Leisure of the Theory Class. It reports on a fascinating paper by Chu Kin ("Roy") Chan, a talented undergrad who has been visiting Penn for the past year. Congratulations, Roy!
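For readers who haven't seen connectedness measures in this setting, here is a minimal sketch on an invented toy network (not the paper's data or methods): eigenvector centrality, under which a department ranks highly when its graduates are hired by departments that themselves rank highly.

```python
import numpy as np

# Hypothetical mini placement network. A[i, j] = number of Ph.D.s produced by
# department i and hired by department j. The data are invented; the actual
# paper's data, network, and ranking method may differ.
depts = ["Dept A", "Dept B", "Dept C", "Dept D"]
A = np.array([[0, 3, 2, 1],
              [1, 0, 2, 0],
              [0, 1, 0, 2],
              [0, 0, 1, 0]], dtype=float)

# Eigenvector centrality by power iteration: department i scores highly when
# its graduates are placed at departments that themselves score highly.
x = np.ones(len(depts))
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

for dept, score in sorted(zip(depts, x), key=lambda t: -t[1]):
    print(f"{dept}: {score:.3f}")
```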
Monday, September 7, 2015
BEA to Resume Provision of NSA GDP
In an earlier post, I argued for publication of non-seasonally-adjusted (NSA) series. Thanks to a helpful communication from Jonathan Wright, I recently learned (as did he) that BEA will resume compilation and publication of NSA U.S. GDP.
The official announcement is simply, "BEA will develop a NSA GDP that will be released in parallel with BEA’s quarterly GDP estimates." It's buried at the end of the box on p. 5 of "Preview of the 2015 Annual Revision of the National Income and Product Accounts," by Stephanie H. McCulla and Shelly Smith in the June 2015 Survey of Current Business. Rumor has it that we should look for the new NSA series to appear starting in late 2016 or early 2017.
Obviously my No Hesitations post was too late to have influenced the BEA's decision, but other academic work may have played a role, notably Jonathan Wright's 2013 Brookings Papers piece (which stresses "overadjustment" in seasonally-adjusted data) and Chuck Manski's forthcoming 2015 Journal of Economic Literature piece (which stresses conceptual difficulties with seasonally-adjusted data).
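One concrete reason NSA data matters: with the raw series in hand, users can roll their own adjustment and gauge how sensitive their conclusions are to it. Below is a minimal sketch on invented quarterly data, using the crudest possible dummy-variable adjustment; official procedures, such as the Census Bureau's X-13ARIMA-SEATS, are far more elaborate, and Wright's point is precisely that elaborate adjustment can remove more than the seasonal.

```python
import numpy as np

# Invented quarterly NSA-style series: trend + seasonal pattern + noise.
rng = np.random.default_rng(1)
n = 4 * 10                                       # ten years of quarters
t = np.arange(n)
q = t % 4                                        # quarter indicator
seasonal = np.array([3.0, -1.0, -2.5, 0.5])[q]   # Q1..Q4 pattern
nsa = 100 + 0.5 * t + seasonal + rng.normal(0, 0.5, n)

# Crudest possible user-side adjustment: detrend with a linear fit, estimate
# quarter-specific means, and subtract them from the raw series.
trend_hat = np.polyval(np.polyfit(t, nsa, 1), t)
season_hat = np.array([(nsa - trend_hat)[q == j].mean() for j in range(4)])
sa = nsa - season_hat[q]

print("estimated seasonal factors (Q1..Q4):", np.round(season_hat, 2))
```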
Thanks BEA, for resuscitating NSA GDP. It’s the right thing to do.