Sunday, November 26, 2017
Here's a bit more related to the FRB St. Louis conference.
The fully correct approach to mixed-frequency time-series modeling is: (1) write out the state-space system at the highest available data frequency or higher (e.g., even if your highest frequency is weekly, you might want to write the system daily to account for different numbers of days in different months), and (2) treat the lower-frequency data as mostly missing at that frequency, handling the missing values optimally with the appropriate filter (e.g., the Kalman filter in the linear-Gaussian case). My favorite example (no surprise) is here.
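For concreteness, here's a minimal sketch of the missing-data treatment in the linear-Gaussian case, using a toy local-level model at a monthly frequency with a "quarterly" series observed only every third month. The model and all parameter values are illustrative, not taken from any of the papers mentioned; the point is simply that the Kalman prediction step always runs, while the measurement update is skipped whenever an observation is missing.

```python
import numpy as np

# Toy local-level model at the monthly frequency (illustrative):
#   state:  x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
#   obs:    y_t = x_t + v_t,      v_t ~ N(0, r)
# A "quarterly" series is encoded by setting non-quarter-end months to NaN.

def kalman_filter_missing(y, q=0.1, r=0.5, x0=0.0, p0=10.0):
    """Kalman filter treating NaN observations as missing: the
    prediction step always runs; the measurement update is skipped
    when y[t] is NaN (the optimal handling in the linear-Gaussian case)."""
    x, p = x0, p0
    xs = np.empty(len(y))
    for t, yt in enumerate(y):
        p = p + q                  # prediction (random-walk state)
        if not np.isnan(yt):
            k = p / (p + r)        # Kalman gain
            x = x + k * (yt - x)   # measurement update
            p = (1.0 - k) * p
        xs[t] = x
    return xs

rng = np.random.default_rng(0)
T = 36
x = np.cumsum(rng.normal(0, 0.3, T))    # latent monthly state
y = x + rng.normal(0, 0.7, T)           # noisy measurements
y[np.arange(T) % 3 != 2] = np.nan       # keep only quarter-end months
print(kalman_filter_missing(y)[:6])
```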
Until recently, however, the prescription above was limited in practice to low-dimensional linear-Gaussian environments, and even there it can be tedious to implement if one insists on MLE. Hence the well-deserved popularity of the MIDAS approach to approximating the prescription, recently also in high-dimensional environments.
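As a rough illustration of the MIDAS idea (a stylized sketch with simulated data, not any particular paper's specification): a quarterly variable is regressed on a parsimoniously weighted sum of monthly lags, with a two-parameter exponential Almon weight function replacing an unrestricted lag polynomial.

```python
import numpy as np
from scipy.optimize import minimize

# Stylized MIDAS: regress quarterly y on a weighted sum of the last
# K monthly observations of x, with exponential Almon weights
#   w_k(theta) proportional to exp(theta1*k + theta2*k^2),  k = 1..K,
# so only (beta0, beta1, theta1, theta2) are estimated rather than
# K unrestricted lag coefficients. All values below are illustrative.

def almon_weights(theta, K):
    k = np.arange(1, K + 1)
    w = np.exp(theta[0] * k + theta[1] * k**2)
    return w / w.sum()

def midas_sse(params, y, X):
    b0, b1, t1, t2 = params
    w = almon_weights((t1, t2), X.shape[1])
    resid = y - b0 - b1 * (X @ w)
    return resid @ resid

rng = np.random.default_rng(1)
K, n = 12, 80                        # 12 monthly lags, 80 quarters
X = rng.normal(size=(n, K))          # each row: monthly lags for one quarter
w_true = almon_weights((0.1, -0.05), K)
y = 1.0 + 2.0 * (X @ w_true) + rng.normal(0, 0.2, n)

fit = minimize(midas_sse, x0=[0, 1, 0, 0], args=(y, X), method="Nelder-Mead")
print(fit.x)   # estimates of (beta0, beta1, theta1, theta2)
```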
But now the sands are shifting. Recent work enables exact Bayesian posterior mixed-frequency analysis even in high-dimensional structural models. I've known Schorfheide-Song (2015, JBES; 2013 working paper version here) for a long time, but I never fully appreciated the breakthrough that it represents -- that is, how straightforward exact mixed-frequency estimation is becoming -- until I saw the stimulating Justiniano presentation at FRBSL (older 2016 version here). And now it's working its way into important substantive applications, as in Schorfheide-Song-Yaron (2017, forthcoming in Econometrica).
Monday, November 20, 2017
More on Path Forecasts
I blogged on path forecasts yesterday. A reader just forwarded this interesting paper, of which I was unaware. Lots of ideas and up-to-date references.
Thursday, November 16, 2017
Forecasting Path Averages
Consider two standard types of \(h\)-step forecast:
(a). \(h\)-step forecast, \(y_{t+h,t}\), of \(y_{t+h}\)
(b). \(h\)-step path forecast, \(p_{t+h,t}\), of \(p_{t+h} = \{ y_{t+1}, y_{t+2}, ..., y_{t+h} \}\).
Clive Granger used to emphasize the distinction between (a) and (b).
As regards path forecasts, lately there's been some focus not on forecasting the entire path \(p_{t+h}\), but rather on forecasting the path average:
(c). \(h\)-step path average forecast, \(a_{t+h,t}\), of \(a_{t+h} = \frac{1}{h} \sum_{i=1}^{h} y_{t+i}\).
The leading case is forecasting "average growth", as in Mueller and Watson (2016).
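To fix ideas, here's a tiny sketch under an illustrative Gaussian AR(1) with assumed parameter values: the point path forecast (b) is available in closed form, and the path average forecast (c) is just a linear functional of it.

```python
import numpy as np

# Illustrative Gaussian AR(1): y_t = phi * y_{t-1} + eps_t.
# The h-step path forecast (b) and the path average forecast (c)
# are both available in closed form; (c) averages (b).

phi, h, y_t = 0.8, 4, 1.5            # assumed values, purely illustrative

# (a)/(b): point forecasts y_{t+i,t} = phi^i * y_t, i = 1..h
path_forecast = phi ** np.arange(1, h + 1) * y_t

# (c): path average forecast a_{t+h,t} = mean of the path forecast
avg_forecast = path_forecast.mean()

print(path_forecast)   # [1.2, 0.96, 0.768, 0.6144]
print(avg_forecast)    # 0.8856
```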
Forecasting path averages (c) never fully resonated with me. After all, (b) is sufficient for (c), but not conversely -- the average is just one aspect of the path, and additional aspects (overall shape, etc.) might be of interest.
Then I listened to Ken West's FRBSL talk, and my eyes opened. Of course the path average is insufficient for the whole path, but it's surely the path's most important aspect -- if you could know just one thing about the path, you'd almost surely ask for the average. Moreover -- and this is important -- it might be much easier to provide credible point, interval, and density forecasts of \(a_{t+h}\) than of \(p_{t+h}\).
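Continuing the illustrative AR(1) sketch (all parameter values assumed): the path average is a scalar linear combination of future shocks, so its predictive distribution is univariate normal -- one mean, one variance -- whereas a density forecast for the full path requires the joint \(h\)-dimensional distribution.

```python
import numpy as np

# Illustrative Gaussian AR(1): a density forecast of the path average
# needs only a scalar mean and variance, while the full path needs an
# h x h joint covariance matrix.

phi, sigma, h, y_t = 0.8, 1.0, 4, 1.5   # assumed values

# MA form: y_{t+i} = phi^i y_t + sum_{j=1}^{i} phi^{i-j} eps_{t+j},
# so Cov(y_{t+i}, y_{t+k}) = sigma^2 * sum_{j=1}^{min(i,k)} phi^{i-j} phi^{k-j}.
Sigma = np.empty((h, h))
for i in range(1, h + 1):
    for k in range(1, h + 1):
        Sigma[i-1, k-1] = sigma**2 * sum(phi**(i-j) * phi**(k-j)
                                         for j in range(1, min(i, k) + 1))

w = np.ones(h) / h
mean_avg = (phi ** np.arange(1, h + 1) * y_t).mean()
var_avg = w @ Sigma @ w                  # scalar variance of the average

lo = mean_avg - 1.96 * np.sqrt(var_avg)
hi = mean_avg + 1.96 * np.sqrt(var_avg)
print(f"95% interval for the path average: [{lo:.2f}, {hi:.2f}]")
```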
So I still prefer full path forecasts when feasible/credible, but I'm now much more appreciative of path averages.
Wednesday, November 15, 2017
FRB St. Louis Forecasting Conference
Got back a couple days ago. Great lineup. Wonderful to see such sharp focus. Many thanks to FRBSL and the organizers (Domenico Giannone, George Kapetanios, and Mike McCracken). I'll hopefully blog on one or two of the papers shortly. Meanwhile, the program is here.
Wednesday, November 8, 2017
Artificial Intelligence, Machine Learning, and Productivity
As Bob Solow famously quipped, "You can see the computer age everywhere but in the productivity statistics". That was in 1987. The new "Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics," NBER w.p. 24001, by Brynjolfsson, Rock, and Syverson, brings us up to 2017. Still a puzzle. Fascinating. Ungated version here.
Sunday, November 5, 2017
Regression on Term Structures
An important insight regarding the use of dynamic Nelson-Siegel (DNS) and related term-structure modeling strategies (see here and here) is that they facilitate regression on an entire term structure. Regressing something on a curve might initially sound strange, or ill-posed. The insight, of course, is that DNS distills curves into level, slope, and curvature factors; hence if you know the factors, you know the whole curve. And those factors can be estimated and included in regressions, effectively enabling regression on a curve.
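Here's a minimal two-step sketch of that idea with simulated data (the decay parameter lam = 0.0609 for maturities in months follows Diebold-Li 2006; everything else, including the outcome regression, is purely illustrative): cross-sectional OLS extracts level, slope, and curvature at each date, and those factor series then enter an ordinary regression.

```python
import numpy as np

# Two-step DNS sketch: (1) at each date, regress yields on the
# Nelson-Siegel loadings to get level, slope, and curvature factors;
# (2) use the factor series as regressors. Data are simulated.

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel loadings for maturities tau (in months)."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau),      # level
                            slope,                  # slope
                            slope - np.exp(-x)])    # curvature

rng = np.random.default_rng(2)
tau = np.array([3., 12., 24., 60., 120.])           # maturities, months
L = ns_loadings(tau)

T = 200
F_true = rng.normal([5, -1, 0.5], 0.3, size=(T, 3)) # latent factors
yields = F_true @ L.T + rng.normal(0, 0.05, (T, len(tau)))

# Step 1: cross-sectional OLS at each date -> factor estimates
F_hat, *_ = np.linalg.lstsq(L, yields.T, rcond=None)
F_hat = F_hat.T                                     # T x 3 factor series

# Step 2: "regression on the curve" = regression on the three factors
z = 0.4 * F_hat[:, 0] - 0.8 * F_hat[:, 1] + rng.normal(0, 0.1, T)
X = np.column_stack([np.ones(T), F_hat])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
print(beta)   # intercept + coefficients on level, slope, curvature
```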
In a stimulating new paper, “The Time-Varying Effects of Conventional and Unconventional Monetary Policy: Results from a New Identification Procedure”, Atsushi Inoue and Barbara Rossi put that insight to very good use. They use DNS yield curve factors to explore the effects of monetary policy during the Great Recession. That monetary policy is often dubbed "unconventional" insofar as it involved the entire yield curve, not just a very short "policy rate".
I recently saw Atsushi present it at NBER-NSF and Barbara present it at Penn's econometrics seminar. It was posted today, here.