Thursday, October 31, 2013

On the Wastefulness of (Pseudo-) Out-of-Sample Predictive Model Comparisons

Peter Hansen and Allan Timmermann have a fantastic new paper, "Equivalence Between Out-of-Sample Forecast Comparisons and Wald Statistics."

The finite-sample wastefulness of (pseudo-) out-of-sample model comparisons seems obvious, as they effectively discard the (pseudo-) in-sample observations. The intuition should hold for both nested and non-nested comparisons, but it is clearest in the nested case: how could anything systematically dominate full-sample Wald, LR, or LM tests of nested hypotheses? Hansen and Timmermann consider the nested case and verify the intuition with elegance and precision. In doing so they greatly clarify the misguided nature of most (pseudo-) out-of-sample model comparisons.

Consider the predictive regression model with \(h\)-period forecast horizon
$$
y_{t}=\beta_{1}^{\prime}X_{1,t-h}+\beta_{2}^{\prime}X_{2,t-h}+\varepsilon_{t},
$$ \(t=1,\ldots,n\), where \(X_{1t}\in\mathbb{R}^{k}\) and \(X_{2t}\in\mathbb{R}^{q}\). We obtain out-of-sample forecasts with recursively estimated parameter values by regressing \(y_{s}\) on \(X_{s-h}=(X_{1,s-h}^{\prime},X_{2,s-h}^{\prime})^{\prime}\) for \(s=1,\ldots,t\) (resulting in the least squares estimate \(\hat{\beta}_{t}=(\hat{\beta}_{1t}^{\prime},\hat{\beta}_{2t}^{\prime})^{\prime}\)) and using
$$\hat{y}_{t+h|t}(\hat{\beta}_{t})=\hat{\beta}_{1t}^{\prime}X_{1t}+\hat{\beta}_{2t}^{\prime}X_{2t}$$ to forecast \(y_{t+h}\).

Now consider a smaller (nested) regression model,
$$
y_{t}=\delta^{\prime}X_{1,t-h}+\eta_{t}.
$$ In similar fashion we proceed by regressing \(y_{s}\) on \(X_{1,s-h}\)  for \(s=1,\ldots,t\) (resulting in the least squares estimate \(\hat{\delta}_t\)) and using
$$\tilde{y}_{t+h|t}(\hat{\delta}_{t})=\hat{\delta}_{t}^{\prime}X_{1t}$$ to forecast \(y_{t+h}\).
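
To make the recursive scheme concrete, here is a minimal simulation sketch in Python (my own illustration, not from the paper; all names and settings are assumptions). It generates data from the large model with \(\beta_{2}=0\), so the two models are equally accurate in population, and computes expanding-window forecasts from both regressions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, k, q = 500, 1, 2, 1                 # sample size, horizon, dim(X1), dim(X2)
X = rng.standard_normal((n, k + q))       # columns 0..k-1 are X1, the rest X2
beta = np.r_[np.ones(k), np.zeros(q)]     # beta2 = 0: the small model suffices
y = np.r_[rng.standard_normal(h), X[:-h] @ beta + rng.standard_normal(n - h)]

def recursive_forecasts(y, X, h, t0):
    """Expanding-window OLS: forecast y[t] from X[t-h] for t = t0,...,n-1,
    re-estimating each time from all pairs (X[s-h], y[s]) with s < t."""
    fc = np.full(len(y), np.nan)
    for t in range(t0, len(y)):
        b, *_ = np.linalg.lstsq(X[: t - h], y[h:t], rcond=None)
        fc[t] = X[t - h] @ b
    return fc

rho = 0.5                                 # split fraction
t0 = int(np.floor(rho * n))               # initial estimation sample size n_rho
fc_big = recursive_forecasts(y, X, h, t0)              # uses X1 and X2
fc_small = recursive_forecasts(y, X[:, :k], h, t0)     # nested model: X1 only
```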

In a representative and leading contribution to the (pseudo-) out-of-sample model comparison literature in the tradition of West (1996), McCracken (2007) suggests comparing such nested models via expected loss evaluated at population parameters. Under quadratic loss the null hypothesis is
$$
H_{0}:\mathrm{E}\big[(y_{t}-\hat{y}_{t|t-h}(\beta))^{2}\big]=\mathrm{E}\big[(y_{t}-\tilde{y}_{t|t-h}(\delta))^{2}\big].$$ McCracken considers the test statistic
$$
T_{n}=\frac{\sum_{t=n_{\rho}+1}^{n}\left[(y_{t}-\tilde{y}_{t|t-h}(\hat{\delta}_{t-h}))^{2}-(y_{t}-\hat{y}_{t|t-h}(\hat{\beta}_{t-h}))^{2}\right]}{\hat{\sigma}_{\varepsilon}^{2}},
$$ where \(\hat{\sigma}_{\varepsilon}^{2}\) is a consistent estimator of \(\sigma_{\varepsilon}^{2}=\mathrm{var}(\varepsilon_{t+h})\) and \(n_{\rho}\) is the number of observations set aside for the initial estimation of \(\beta\), taken to be a fraction \(\rho\in(0,1)\) of the full sample \(n\), i.e., \(n_{\rho}=\lfloor n\rho\rfloor\). The asymptotic null distribution of \(T_{n}\) turns out to be rather complicated: McCracken shows that it is a convolution of \(q\) independent random variables, each distributed as \(2\int_{\rho}^{1}u^{-1}B(u)\mathrm{d}B(u)-\int_{\rho}^{1}u^{-2}B(u)^{2}\mathrm{d}u\), where \(B\) denotes a standard Brownian motion.
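
Continuing the sketch above, \(T_{n}\) is then a few lines of code (here \(\hat{\sigma}_{\varepsilon}^{2}\) is estimated from full-sample residuals of the large model; any consistent estimator would do):

```python
# Continuing the simulation sketch: compute McCracken's T_n.
b_full, *_ = np.linalg.lstsq(X[:-h], y[h:], rcond=None)
resid = y[h:] - X[:-h] @ b_full
sigma2 = resid @ resid / (n - h - (k + q))   # consistent estimate of var(eps)

e_small = y[t0:] - fc_small[t0:]             # recursive forecast errors, small model
e_big = y[t0:] - fc_big[t0:]                 # recursive forecast errors, large model
T_n = ((e_small**2).sum() - (e_big**2).sum()) / sigma2
```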

Hansen and Timmermann show that \(T_{n}\) is simply the difference between two Wald statistics for the hypothesis that \(\beta_{2}=0\): one based on the full sample, the other based on the initial estimation sample. That is, \(T_{n}\) is just the increase in the Wald statistic obtained by moving from the initial estimation sample to the full sample. Hence the power of \(T_{n}\) derives entirely from the post-split observations, so the test must be less powerful than a Wald test that uses the entire sample. Indeed, Hansen and Timmermann show that power decreases as \(\rho\) increases.
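
The equivalence is easy to check numerically, again continuing the sketch (the finite-sample match is approximate; the paper's algebra handles the remainder terms from recursive estimation). Here `wald_beta2` is a hypothetical helper computing the standard homoskedastic Wald statistic for \(\beta_{2}=0\) from the first \(t\) observations:

```python
def wald_beta2(y, X, h, t, k):
    """Wald statistic for beta2 = 0, using observations s = h,...,t-1."""
    Z, yy = X[: t - h], y[h:t]
    b, *_ = np.linalg.lstsq(Z, yy, rcond=None)
    e = yy - Z @ b
    s2 = e @ e / (len(yy) - Z.shape[1])      # residual variance estimate
    V = s2 * np.linalg.inv(Z.T @ Z)          # homoskedastic OLS covariance
    return b[k:] @ np.linalg.solve(V[k:, k:], b[k:])

W_full = wald_beta2(y, X, h, n, k)           # Wald statistic, full sample
W_init = wald_beta2(y, X, h, t0, k)          # Wald statistic, initial sample
print(f"W_full - W_init = {W_full - W_init:.2f} vs. T_n = {T_n:.2f}")
```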

On the one hand, the Hansen-Timmermann results render trivial the calculation of \(T_{n}\) and greatly clarify its limit distribution (that of the difference between two independent \(\chi^{2}\) random variables, and convolutions thereof). So if one insists on doing \(T_{n}\)-type tests, then the Hansen-Timmermann results are good news. On the other hand, the real news is bad: the Hansen-Timmermann results make clear that, at least in the environments they consider, (pseudo-) out-of-sample model comparison comes at high cost (power reduction) and delivers no offsetting benefit.

[By the way, my paper, "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," makes many related points. Drafts are here. The final (?) version will be delivered as the JBES Invited Lecture at the January 2014 ASSA meetings in Philadelphia. Commentary at the meeting will be by Andrew Patton and Allan Timmermann. The JBES published version will contain the Patton and Timmermann remarks, plus those of Atsushi Inoue, Lutz Kilian, and Jonathan Wright. Should be entertaining!]

Friday, October 25, 2013

NBER/NSF Time-Series Conference: Retrospect and Prospect

I recently reported here on the Barigozzi-Brownlees paper, "Network Estimation for Time Series." I heard it presented a few weeks ago at the 2013 NBER/NSF Time Series Conference, hosted this year by the Federal Reserve Board in Washington (a sign, by the way, of the Fed's ongoing research commitment, notwithstanding my earlier-posted doubts).

I hadn't attended the NBER/NSF Time Series meeting in several years. Attending reminded me of how special it is and jogged me into this post on NBER/NSF more generally. What's unique is the way the conference spans so many different communities, all of which do top work in time series but not all of which communicate regularly. For some reason my mind groups into pairs many of the great researchers who participated regularly over the years: Rob Engle and Clive Granger, George Tiao and Arnold Zellner, Jim Stock and Mark Watson, Ted Hannan and Manfred Deistler, Torben Andersen and Tim Bollerslev, Peter Brockwell and Richard Davis, Ron Gallant and George Tauchen, David Findley and Bill Bell, and on and on.

General ongoing info about the conference is here (including upcoming 2014-2016 meetings in St. Louis, Vienna and New York), and an interesting brief history -- including year-by-year locations -- is here. Programs for recent years appear here. Does anyone know whether a complete set of conference programs is available? It would be fascinating to watch the parade of paper titles and authors marching forward from the earliest times.

FYI this year's program follows.

2013 NBER-NSF Time Series Conference


A conference hosted by the Federal Reserve Board
September 26-27, 2013, Washington, D.C.

Thursday, September 26, 2013

Conference Registration and Box Lunch: 12:00 – 1:15
Opening Remarks: 1:15 – 1:30
Main Program Session: Factor Models and Latent Variables: 1:30 – 3:00
"Generalized Method of Moments with Latent Variables"
A. Ronald Gallant, Raffaella Giacomini, Giuseppe Ragusa
"Shrinkage Estimation of Dynamic Factor Models with Structural Instabilities"
Xu Cheng, Zhipeng Liao, Frank Schorfheide
"Structural FECM: Cointegration in Large-scale Structural FAVAR Models"
Anindya Banerjee, Massimiliano Marcellino, Igor Masten
Coffee Break: 3:00 – 3:30
Main Program Session: Forecasting and Model Evaluation: 3:30 – 5:00
"Alternative Tests for Correct Specification of Conditional Predictive Densities"
Barbara Rossi, Tatevik Sekhposyan
"Non-nested Model Comparisons for Time Series via the Gaussian Likelihood Ratio Statistic"
Tucker McElroy, Christopher Blakely
"Efficient Test for Long-Run Predictability: Hybrid of the Q-test and Long-Horizon Regressions"
Natalia Sizova
Cocktail Reception and Poster Session 1: 5:00 – 6:30
Conference Dinner: 6:30 – 8:30
Dinner Speaker: Professor George Tiao, University of Chicago, Booth School of Business, "A Tribute to Professor George E.P. Box"

Friday, September 27, 2013

Continental Breakfast: 8:00 – 9:00
Main Program Session: Time Series Analysis: 9:00 – 10:30
"Thresholded Multivariate Regression with Application to Robust Forecasting"
Ranye Sun, Mohsen Pourahmadi
"Detecting Seasonality in Unadjusted and Seasonally Adjusted Time Series"
David F. Findley, Demetra P. Lytras
"Approximate Bias in Time Series Regressions"
Kenneth D. West
Coffee Break: 10:30 – 11:00
Main Program Session: Macroeconomics: 11:00 – 12:30
"Reverse Kalman Filtering US Inflation with Sticky Professional Forecasts"
James M. Nason, Gregor W. Smith
"Improving GDP Measurement: A Measurement-Error Perspective"
Boragan Aruoba, Francis X. Diebold, Jeremy Nalewaik, Frank Schorfheide, Dongho Song
"Systemic Risk and the Macroeconomy: An Empirical Evaluation"
Stefano Giglio, Bryan Kelly, Seth Pruitt, Xiao Qiao
Lunch and Poster Session 2: 12:30 – 2:00
Main Program Session: Macro/Finance: 2:00 – 3:30
"Daily House Price Indexes: Construction, Modeling, and Longer-Run Predictions"
Tim Bollerslev, Andrew Patton, Wenjing Wang
"Estimation of non-Gaussian Affine Term Structure Models"
Drew D. Creal, Jing Cynthia Wu
"Robust joint Models of Yield Curve Dynamics and Euro Area (non-)standard Monetary Policy"
Geert Mesters , Berd Schwaab, Siem Jan Koopman
Coffee Break: 3:30 – 4:00
Main Program Session: Estimation: 4:00 – 5:30
"Nets: Network Estimation for Time Series"
Matteo Barigozzi, Christian Brownlees
"A Parameter Driven Logit Regression Model for Binary time Series"
Rongning Wu, Yunwei Cui
"Definitions and representations of multivariate long-range dependent time series"
Stefanos Kechagias, Vladas Pipiras 
Poster Session 1
"Extended Yule-Walker Identification of a VARMA Model Using Single- or Mixed-Frequency Data"
Peter A. Zadrozny
"Testing for Cointegration with Temporally Aggregated and Mixed-frequency Time Series"
Eric Ghysels, J. Isaac Miller
"Co-summability: From Linear to Non-linear Co-integration"
Vanessa Berenguer-Rico, Jesus Gonzalo
"An Asymptotically Normal Out-Of-Sample Test of Equal Predictive Accuracy for Nested Models"
Gray Calhoun
"Nonparametric HAC Estimation for Time Series Data with Missing Observations"
Deepa Dhume Datta, Wenxin Du
"Evaluating Forecasts from Bayesian Vector Autoregressions Conditional on Policy Paths"
Todd E. Clark, Michael W. McCracken
"Marcenko-Pastur Law for Time Series"
Haoyang Liu, Alexander Aue, Debashis Paul
"Dynamic Compositional Regression in Financial Time Series and Application in Portfolio Decisions"
Zoey Yi Zhao, Mike West
"Diagnosing the Distribution of GARCH Innovations"
Pengfei Sun, Chen Zhou
"Nonlinearity, Breaks, and Long-Range Dependence in Time-Series Models"
Eric Hillebrand, Marcelo C. Medeiros
"Measuring Nonlinear Granger Causality in Mean"
Xiaojun Song, Abderrahim Taamouti
"Penalized Forecasting in Panel Data Models: Predicting Household Electricity Demand from Smart Meter Data"
Matthew Harding, Carlos Lamarche, M. Hashem Pesaran
Poster Session 2
"What is the Chance that the Equity Premium Varies over Time? Evidence from Regressions on the Dividend-Price Ratio"
Jessica A. Wachter, Missaka Warusawitharana
"Forecasting with Many Models: Model Confidence Sets and Forecast Combination"
Jon D. Samuels, Rodrigo M. Sekkel
"Modelling Financial Markets Comovements: A Dynamic Multi Factor Approach"
Martin Belvisi, Riccardo Pianeti, Giovanni Urga
"On the Reliability of Output-Gap Estimates in Realtime"
Elmar Mertens
"Testing for Granger Causality with Mixed Frequency Data"
Eric Ghysels, Jonathan B. Hill, Kaiji Motegi
"Testing Stationarity for Unobserved Components Models"
James Morley, Irina B. Panovska, Tara M. Sinclair
"Symmetry and Separability in Two–Country Cointegrated VAR Models: Representation and Testing"
Hans–Martin Krolzig, Reinhold Heinlein
"Detecting and Forecasting Large Deviations and Bubbles in a Near-Explosive Random Coefficient Model"
Anurag Banerjee, Guillaume Chevillon, Marie Kratz
"A Spatio-Temporal Mixture Model for Point Processes with Application to Ambulance Demand"
David Matteson
"Empirical Evidence on Inflation Expectations in the New Keynesian Phillips Curve"
Sophocles Mavroeidis, Mikkel Plagborg-Moller, James H. Stock
"A Non-Gaussian Asymmetric Volatility Model"
Geert Bekaert, Eric Engstrom
"Gaussian Term Structure Models and Bond Risk Premia"
Bruno Feunou, Jean-Sebastien Fontaine

Monday, October 21, 2013

Lawrence R. Klein, 1920-2013


I am sad to report that Lawrence R. Klein has passed away. He was in many respects the father of modern econometrics and empirical macroeconomics; indeed his 1980 Nobel Prize citation was "for the creation of econometric models and their application to the analysis of economic fluctuations and economic policies." He was also a dear friend and mentor to legions of Penn faculty and students, including me. I am grateful to him for many things, including his serving on my Penn Ph.D. dissertation committee nearly thirty years ago.

You can find a beautiful and fascinating autobiographical essay written in 1980, and updated in 2005, here.

Check back during the coming days as I update this post with additional links and materials.

Update 1: KLEIN LAWRENCE, October 20, 2013, of Gladwyne, Pa. Husband of Sonia (nee Adelson). Father of Hannah Klein, Rebecca (James) Kennedy, Rachel (Lyle) Klein and Jonathan (Blandina) Klein. Also survived by 7 grandchildren and 4 great-grandchildren. Services and Interment are private. Relatives and friends are invited to the residence of Mrs. Sonia Klein Wednesday, October 23, 2-4 P.M. AND Saturday, October 26, 2-4 P.M. (only). Contributions in his memory may be made to the University of Pennsylvania Department of Economics.

Update 2: Extensive New York Times obituary here.

Update 3: Penn Economics memorial statement here.

Update 4: Saturday 26 October Financial Times Weekend will contain an extensive obituary.

Wednesday, October 16, 2013

Network Estimation for Time Series

Matteo Barigozzi and Christian Brownlees have a fascinating new paper, "Network Estimation for Time Series," connecting the econometric time-series literature and the statistical graphical-modeling (network) literature. It's not only useful but also elegant: they obtain a beautiful decomposition of network connectedness into contemporaneous and dynamic components. Granger causality and "long-run covariance matrices" (spectra at frequency zero), centerpieces of modern time-series econometrics, feature prominently. The approach also incorporates sparsity, allowing analysis of very high-dimensional networks.

If I could figure out how to get LaTeX/MathJax running inside Blogger, I could show you some details, but I had no luck after five minutes of fiddling last week, and I haven't yet gotten a chance to return to it. (Anyone know? Maybe Daughter 1 is right and I should switch to WordPress?) For now you'll just have to click on the Barigozzi-Brownlees paper above and see for yourself.
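
In the meantime, plain code needs no MathJax, so here is a heavily caricatured toy sketch of the two ingredients in Python (my own illustration under strong simplifying assumptions, not the paper's estimator, which among other things adds lasso-type sparsity): read the dynamic (Granger-causal) network off VAR coefficients, and the contemporaneous network off the inverse residual covariance, i.e., partial correlations.

```python
import numpy as np

def toy_networks(Y):
    """Toy two-layer network from a VAR(1) fit; Y is a (T, N) data matrix.
    Dynamic layer: nonzero A[i, j] suggests series j Granger-causes series i.
    Contemporaneous layer: partial correlations of the VAR residuals."""
    T, N = Y.shape
    Z, W = Y[:-1], Y[1:]
    A = np.linalg.lstsq(Z, W, rcond=None)[0].T   # VAR(1) coefficients: W ~ Z @ A.T
    U = W - Z @ A.T                              # residuals
    K = np.linalg.inv(U.T @ U / (T - 1))         # concentration (inverse covariance)
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)                      # off-diagonals: partial correlations
    np.fill_diagonal(P, 1.0)
    return A, P
```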

It's interesting to see that Granger causality is alive and well after all these years, still contributing to new research advances. And Barigozzi-Brownlees is hardly alone in that regard, as the recent biomedical imaging literature illustrates. Some of Vic Solo's recent work is a great example.

Finally, it's also interesting to note that both the Barigozzi-Brownlees and Diebold-Yilmaz approaches to network connectedness work in vector-autoregressive frameworks, yet they proceed in very different, complementary, ways.

Monday, October 14, 2013

A Nobel for Financial Econometrics


First it was Engle and Granger (2003); now it's Fama, Hansen and Shiller.

A central issue in the economics of financial markets is whether and how those markets process information efficiently, to arrive at fair prices. Inextricably linked to that central issue is a central tension: certain lines of argument suggest that financial markets should be highly efficient, yet other lines of argument suggest limits to market efficiency. Gene Fama, Lars Hansen and Bob Shiller have individually and collectively made landmark contributions that now shape both academic and practitioner thinking as regards that tension. In so doing they've built much of the foundations of modern financial economics and financial econometrics. Fama empirically championed the efficient markets hypothesis, which in many respects represents the pinnacle of neoclassical financial economics. Shiller countered with additional empirical evidence that seemingly indicated the failure of market efficiency, setting the stage for several decades of subsequent work. Throughout, Hansen supplied both powerful economic theory that brought asset pricing in closer touch with macroeconomics, and powerful econometric theory (GMM) that proved invaluable for empirical asset pricing, where moment conditions are often available but likelihoods are not.

If today we celebrate, then tomorrow we return to work -- obviously there's more to be done. But for today, a resounding bravo to the three deserving winners!

Monday, October 7, 2013

Why You Should Join Twitter

Sounds silly, but it's not. I got talked into joining a few weeks ago, and I'm glad I did. I rarely tweet (except to announce new No Hesitations posts), but I follow others. Several times in the last few weeks alone, various pieces of valuable information arrived. Great stuff.

FYI here are some random things that I'm currently following. (In total I follow about 25, but for some reason Google Blogger crashes if I try to paste them all here.)

[By the way, I will soon stop posting announcements of new blog posts to Facebook groups, instead announcing exclusively with a tweet. SERIOUSLY. So join Twitter and follow @FrancisDiebold.]

Saturday, October 5, 2013

Pure Brilliance From FRB St. Louis: EconomicAcademics.org

This just in from Christian Zimmermann and the RePEc Team at FRB St. Louis:

"Congratulations, you made the list! .. The Federal Reserve Bank of St. Louis is launching a blog aggregator, EconomicAcademics.org, to highlight and promote the discussion of economics research. Your blog is part of this effort. This email explains why and how you can help promote the discussion of economic research in the blogosphere ... EconAcademics.org lives at http://econacademics.org/ and aggregates blog posts that discuss economic research. The aggregator looks through blog posts for a link to some research indexed on a RePEc service, currently EconPapers, IDEAS and NEP. IDEAS then also links back from the abstract page to the blog posts ... This blog aggregator is provided by the Federal Reserve Bank of St. Louis, which also offers the FRED database and graphing tool as useful resources for bloggers. Feel free to use the graphs on your blog, best done by embedding them so that readers can click on them to get more details about the data. FRED lives at http://research.stlouisfed.org/fred2/."

This is totally brilliant. First, it's a brilliant public service. I am grateful. Everyone should be grateful. But second, and this is really what I want to emphasize, ya gotta love the brilliant business/marketing move. Instantly, every blogger now has a strong incentive to report on (and link to) RePEc papers whenever possible -- in case you missed it above, blog posts get noticed by the aggregator only if/when they link to a RePEc paper -- and hence authors have a correspondingly strong incentive to put their papers on RePEc. And it's all tangled up with the wonderful FRED. The idea may not make billions for FRBSL/RePEc/FRED, but in its own way it's as brilliant as Google's PageRank (and cynics will say as obvious -- just sour grapes).

Oh wait. I forgot to mention a RePEc paper above, so this post won't get picked up by EconAcademics. Hmmm... In the future I'll have to change that...

Anyway, SSRN et al. must be reeling! Of course it will be interesting to see how they and others respond. FRBSL/RePEc/FRED have scored a significant first-mover blow, but surely the fight isn't over. And as usual with healthy competition, everyone will benefit.

Friday, October 4, 2013

Federal Reserve Research: Wake Up Before It's Too Late

I am familiar with the U.S. Federal Reserve System. Long ago I spent the first three (wonderful) years of my working life as an economist at the Board of Governors in DC, 1986-1989. Most recently I chaired the Fed's Model Validation Council, 2012-2013. In the intervening years I've had many engagements with the System, and I've sent many of my Ph.D. students, perhaps twenty, to work there.

So believe me when I say that during the last half-century, there were few better places in the world for a research economist to work, universities included. The research staff quality and esprit de corps were unmatched. Runner-up institutions, world-wide, were miles behind. And believe me as well when I say that I'm now worried.

When I read the recent Huffington Post piece, "Federal Reserve Employees Afraid To Speak Put Financial System At Risk," I was pretty alarmed. I figured it must be strongly negatively biased, so I made some personal inquiries. No -- pretty accurate. Wow. Well, I noticed, it focuses mostly on the Board's division of Supervision and Regulation (Sup&Reg), filled with lawyers. Surely the Board's key research divisions (Research and Statistics, Monetary Affairs, International Finance), filled with economists, are as healthy as ever. So I made some more inquiries. Not yet a Sup&Reg situation, but lots of bewilderment, concern, and top talent looking, or thinking of looking, for greener pastures. Wow.

I understand that we just went through the worst recession since the Great Depression, and that enforcing the ensuing legislation requires a major effort. But I also understand that effective institutions and stellar reputations take half-centuries to build but can collapse quickly, and moreover that, at a deep level, the Fed's research prowess is largely responsible for its respect and effectiveness. So if a new Fed regulatory culture must be built, then build it, but Fed senior management needs simultaneously to preserve and promote the serious research culture that drives Fed effectiveness. Related, people who don't deeply understand and appreciate serious research should never, ever, be promoted to senior management in divisions like Research and Statistics, Monetary Affairs, and International Finance.

Tuesday, October 1, 2013

Big Data the Big Hassle

The hype surrounding "Big Data" has escalated to borderline-nauseating levels. Is it just a sham?

Yes, I know, I have earlier gushed about the wonders of Big Data. But that was then, and now is now, and I hear my inner contrarian alarm sounding.

One thing is clear: Big Data the phenomenon is not a sham. It's here, it's real, and it must be taken seriously. The ongoing explosion in the quantity of available data, largely the result of recent and unprecedented advancements in data recording and storage technology, is not going away. It's emerging as one of the defining characteristics of our time.

Big Data the business isn't really a sham either, even if it's impossible not to smirk when told, for example, that major firms are rushing to create new executive titles like "Vice President for Big Data." (I'm not making this up. See Steve Lohr's New York Times piece.) Big Data consultants and software peddlers smell Big Money, and they're salivating profusely. But there's nothing necessarily wrong with that, even if it isn't pretty.

But what about Big Data the scientific field? What is it? Where's the beef?

What's really new, for example, statistically? Of course Big Data has stimulated much fine new work in dimensionality reduction, shrinkage, selection, sparsity, regularization, etc. But are those not traditional areas? In what sense is the scientific Big Data whole truly greater than the sum of its earlier-existing parts?

But primarily: Why all the endless optimistic Big Data buzz about endless Big Data opportunities? What about pitfalls? Isn't Big Data in many respects just a hassle? Aren't we still searching for needles in a haystack, except that the haystack is now growing much more quickly than the needle-discovering technology is improving? Why is that cause for celebration?