Monday, June 25, 2018

Peter Christoffersen and Forecast Evaluation

For obvious reasons Peter Christoffersen has been on my mind. Here's an example of how his influence extended in important ways. Hopefully it's also an entertaining and revealing story.

Everyone knows Peter's classic 1998 "Evaluating Interval Forecasts" paper, which was part of his Penn dissertation. The key insight was that correct conditional calibration requires not only that the 0-1 "hit sequence" of course have the right mean (\(1-\alpha\) for a nominal \(1-\alpha\) interval), but also that it be iid (assuming 1-step-ahead forecasts). More precisely, it must be iid Bernoulli(\(1-\alpha\)).
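
Just to make the mechanics concrete, here's a minimal sketch of the hit-sequence idea. The Gaussian AR(1) data-generating process, the 90% nominal level, and the simple moment checks are all my own illustrative choices -- this is not Peter's likelihood-ratio test machinery, just the flavor of it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate an AR(1) and form correct 1-step-ahead (1 - alpha) interval forecasts.
T, phi, sigma, alpha = 1000, 0.6, 1.0, 0.10
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + sigma * rng.standard_normal()

z = stats.norm.ppf(1 - alpha / 2)          # Gaussian critical value
lower = phi * y[:-1] - z * sigma           # interval forecasts for y[1], ..., y[T-1]
upper = phi * y[:-1] + z * sigma

# The 0-1 hit sequence: 1 if the realization falls inside the interval.
hits = ((y[1:] >= lower) & (y[1:] <= upper)).astype(int)

# Correct conditional calibration requires mean roughly 1 - alpha ...
print("empirical coverage:", hits.mean(), "nominal:", 1 - alpha)

# ... and no serial dependence in the hits (here, just the first-order autocorrelation).
print("first-order autocorrelation of hits:",
      np.corrcoef(hits[:-1], hits[1:])[0, 1])
```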

Around the same time I naturally became interested in going all the way to density forecasts and managed to get some more students interested (Todd Gunther and Anthony Tay). Initially it seemed hopeless, as correct density forecast conditional calibration requires correct conditional calibration of all possible intervals that could be constructed from the density, of which there are uncountably many.

Then it hit us. Peter had effectively found the right notion of an optimal forecast error for interval forecasts. And just as optimal point forecast errors generally must be independent, so too must optimal interval forecast errors (the Christoffersen hit sequence). Both the point and interval versions are manifestations of "the golden rule of forecast evaluation": Errors from optimal forecasts can't be forecastable. The key to moving to density forecasts, then, would be to uncover the right notion of forecast error for a density forecast. That is, to uncover the function of the density forecast and realization that must be independent under correct conditional calibration. The answer turns out to be the Probability Integral Transform, \(PIT_t=\int_{-\infty}^{y_t} p_t(y)\,dy\), as discussed in Diebold, Gunther and Tay (1998), who show that correct density forecast conditional calibration implies \(PIT \sim iid\ U(0,1)\).
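
And to make the PIT recipe equally concrete, here's a companion sketch -- again my own toy example, with the true Gaussian AR(1) conditional density standing in for the forecast density: compute \(PIT_t\) as the forecast CDF evaluated at the realization, then check uniformity and independence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate an AR(1); use the true 1-step-ahead conditional density as the density forecast.
T, phi, sigma = 1000, 0.6, 1.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + sigma * rng.standard_normal()

# PIT_t: the forecast CDF evaluated at the realization, here N(phi * y_{t-1}, sigma^2).
pit = stats.norm.cdf(y[1:], loc=phi * y[:-1], scale=sigma)

# Under correct conditional calibration, PIT ~ iid U(0,1).
print("KS test against U(0,1):", stats.kstest(pit, "uniform"))
print("first-order autocorrelation of PITs:",
      np.corrcoef(pit[:-1], pit[1:])[0, 1])
```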


The meta-result that emerges is coherent and beautiful: optimality of point, interval, and density forecasts implies, respectively, independence of forecast error, hit, and \(PIT\) sequences. The overarching point is that a large share of the last two-thirds of the three-part independence result -- not just the middle third -- is due to Peter. He not only cracked the interval forecast evaluation problem, but also supplied key ingredients for cracking the density forecast evaluation problem.

Wonderfully and appropriately, Peter's paper and ours were published together, indeed contiguously, in the International Economic Review. Each is one of the IER's ten most cited since its founding in 1960, but Peter's is clearly in the lead!

Friday, June 22, 2018

In Memoriam Peter Christoffersen

It brings me great sadness to report that Peter Christoffersen passed away this morning after a long and valiant struggle with cancer. (University of Toronto page here, personal page here.) He departed peacefully, surrounded by loving family. I knew Peter and worked closely with him for nearly thirty years. He was the finest husband, father, and friend imaginable. He was also the finest scholar imaginable, certainly among the leading financial economists and financial econometricians of his generation. I will miss him immensely, both personally and professionally.

Monday, June 18, 2018

10th ECB Workshop on Forecasting Techniques, Frankfurt

Starts now; program here. Looks like a great lineup. Most of the papers are posted, and the organizers also plan to post presentation slides following the conference. Presumably in future weeks I'll blog on some of the presentations.

Monday, June 11, 2018

Deep Neural Nets for Volatility Dynamics

There doesn't seem to be much need for nonparametric nonlinear modeling in empirical macro and finance. Not that lots of smart people haven't tried. The two key nonlinearities (volatility dynamics and regime switching) just seem to be remarkably well handled by tightly-parametric customized models (GARCH/SV and Markov-switching, respectively). 

But the popular volatility models are effectively linear (ARMA) in squares. Maybe that's too rigidly constrained. Volatility dynamics seem like something that could be nonlinear in ways much richer than just ARMA in squares. 
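
To see the sense in which the standard models are "ARMA in squares," take GARCH(1,1). Defining \(v_t = r_t^2 - \sigma_t^2\), which is a martingale difference under correct specification, the squared return follows an ARMA(1,1) -- a textbook calculation, sketched here only for completeness:

\[
\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2
\;\;\Longrightarrow\;\;
r_t^2 = \omega + (\alpha + \beta)\, r_{t-1}^2 + v_t - \beta\, v_{t-1}.
\]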

Here's an attempt using deep neural nets. I'm not convinced by the paper -- much more thorough analysis and results are required than the 22 numbers reported in the "GARCH" and "stocvol" columns of its Table 1 -- but I'm intrigued.
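
For the flavor of the idea -- and this is my own toy sketch, not the paper's architecture -- one can feed lagged squared returns into a small multilayer perceptron and let it learn a possibly nonlinear map to next-period volatility. Scikit-learn's MLPRegressor is enough for a first pass; everything below (the simulated GARCH data, the lag length, the layer sizes) is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Simulate returns from a GARCH(1,1) so the data have volatility clustering.
T, omega, alpha, beta = 3000, 0.05, 0.10, 0.85
r = np.zeros(T)
sig2 = np.full(T, omega / (1 - alpha - beta))
for t in range(1, T):
    sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Features: p lagged squared returns; target: next-period squared return (a noisy variance proxy).
p = 10
X = np.column_stack([r[p - k - 1 : T - k - 1] ** 2 for k in range(p)])
y = r[p:] ** 2

# A small "deep" net; tanh layers allow richer-than-ARMA nonlinearity in the squares.
net = MLPRegressor(hidden_layer_sizes=(32, 16), activation="tanh",
                   max_iter=2000, random_state=0)
net.fit(X[:2000], y[:2000])
print("out-of-sample R^2 against squared returns:", net.score(X[2000:], y[2000:]))
```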

It's quite striking that neural nets, which have been absolutely transformative in other areas of predictive modeling, have thus far contributed so little in economic / financial contexts. Maybe the "deep" versions will change that, at least for volatility modeling. Or maybe not. 

Thursday, June 7, 2018

Machines Learning Finance

FRB Atlanta recently hosted a meeting on "Machines Learning Finance". Kind of an ominous, threatening (Orwellian?) title, but there were lots of (non-threatening...) pieces. I found the surveys by Ryan Adams and John Cunningham particularly entertaining. A clear theme on display throughout the meeting was that "supervised learning" -- the main strand of machine learning -- is just function estimation, and in particular, conditional mean estimation. That is, regression. It may involve high dimensions, non-linearities, binary variables, etc., but at the end of the day it's still just regression. If you're a regular No Hesitations reader, the "insight" that supervised learning = regression will hardly be novel to you, but still it's good to see it disseminating widely.
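
In code the equivalence is nearly tautological. Here's a throwaway sketch (mine, not from the meeting) in which ordinary least squares and a neural net are fit to the very same conditional-mean problem; they differ only in how flexibly they approximate \(E(y|x)\).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# One nonlinear conditional-mean problem: y = sin(3x) + noise.
x = rng.uniform(-2, 2, size=(2000, 1))
y = np.sin(3 * x[:, 0]) + 0.3 * rng.standard_normal(2000)

# "Supervised learning" and "regression" both estimate E(y | x), just with different function classes.
ols = LinearRegression().fit(x[:1500], y[:1500])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(x[:1500], y[:1500])

print("linear regression out-of-sample R^2:", ols.score(x[1500:], y[1500:]))
print("neural net out-of-sample R^2:", net.score(x[1500:], y[1500:]))
```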