The finite-sample wastefulness of (pseudo-) out-of-sample model comparisons seems obvious, as they effectively discard the (pseudo-) in-sample observations. That intuition should be true for both nested and non-nested comparisons, but it seems most obvious in the nested case: How could anything systematically dominate full-sample Wald, LR or LM for testing nested hypotheses? Hansen and Timmermann consider the nested case and verify the intuition with elegance and precision. In doing so they greatly clarify the misguided nature of most (pseudo-) out-of-sample model comparisons.

Consider the predictive regression model with \(h\)-period forecast horizon

$$
y_{t}=\beta_{1}^{\prime}X_{1,t-h}+\beta_{2}^{\prime}X_{2,t-h}+\varepsilon_{t},
$$

\(t=1,\ldots,n\), where \(X_{1t}\in\mathbb{R}^{k}\) and \(X_{2t}\in\mathbb{R}^{q}\). We obtain out-of-sample forecasts with recursively estimated parameter values by regressing \(y_{s}\) on \(X_{s-h}=(X_{1,s-h}^{\prime},X_{2,s-h}^{\prime})^{\prime}\) for \(s=1,\ldots,t\) (resulting in the least squares estimate \(\hat{\beta}_{t}=(\hat{\beta}_{1t}^{\prime},\hat{\beta}_{2t}^{\prime})^{\prime}\)) and using

$$\hat{y}_{t+h|t}(\hat{\beta}_{t})=\hat{\beta}_{1t}^{\prime}X_{1t}+\hat{\beta}_{2t}^{\prime}X_{2t}$$ to forecast \(y_{t+h}\).

Now consider a smaller (nested) regression model,

$$
y_{t}=\delta^{\prime}X_{1,t-h}+\eta_{t}.
$$

In similar fashion we proceed by regressing \(y_{s}\) on \(X_{1,s-h}\) for \(s=1,\ldots,t\) (resulting in the least squares estimate \(\hat{\delta}_t\)) and using

$$\tilde{y}_{t+h|t}(\hat{\delta}_{t})=\hat{\delta}_{t}^{\prime}X_{1t}$$ to forecast \(y_{t+h}\).
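To fix ideas, here is a minimal simulation sketch of the recursive scheme just described, for the scalar case \(k=q=1\) with horizon \(h=1\). All numerical choices (sample size, coefficients, split point) are illustrative assumptions, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a predictive regression with horizon h = 1:
#   y_t = beta1 * x1_{t-1} + beta2 * x2_{t-1} + eps_t
n, h = 200, 1
x = rng.standard_normal((n, 2))   # columns: X1, X2
beta = np.array([0.5, 0.0])       # beta2 = 0: the nested model is true here
y = np.empty(n)
y[:h] = rng.standard_normal(h)
y[h:] = x[:-h] @ beta + rng.standard_normal(n - h)

def recursive_forecast(y, X, h, t0):
    """Recursive forecasts yhat_{t+h|t} for t = t0, ..., n-h-1:
    at each t, regress y_s on X_{s-h} for s = h,...,t, then
    forecast y_{t+h} from X_t with the estimated coefficients."""
    preds = []
    for t in range(t0, len(y) - h):
        coef, *_ = np.linalg.lstsq(X[:t - h + 1], y[h:t + 1], rcond=None)
        preds.append(X[t] @ coef)
    return np.array(preds)

t0 = 50                                              # initial estimation sample
yhat_big   = recursive_forecast(y, x, h, t0)         # larger model: X1 and X2
yhat_small = recursive_forecast(y, x[:, :1], h, t0)  # nested model: X1 only
```

Each forecast uses only data through time \(t\), so the parameter estimates are re-computed on an expanding window, exactly as in the recursive scheme above.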

In a representative and leading contribution to the (pseudo-) out-of-sample model comparison literature in the tradition of West (1996), McCracken (2007) suggests comparing such nested models via expected loss evaluated at population parameters. Under quadratic loss the null hypothesis is

$$
H_{0}:\mathrm{E}\big[(y_{t}-\hat{y}_{t|t-h}(\beta))^{2}\big]=\mathrm{E}\big[(y_{t}-\tilde{y}_{t|t-h}(\delta))^{2}\big].
$$

McCracken considers the test statistic

$$
T_{n}=\frac{\sum_{t=n_{\rho}+1}^{n}\left[(y_{t}-\tilde{y}_{t|t-h}(\hat{\delta}_{t-h}))^{2}-(y_{t}-\hat{y}_{t|t-h}(\hat{\beta}_{t-h}))^{2}\right]}{\hat{\sigma}_{\varepsilon}^{2}},
$$

where \(\hat{\sigma}_{\varepsilon}^{2}\) is a consistent estimator of \(\sigma_{\varepsilon}^{2}=\mathrm{var}(\varepsilon_{t+h})\) and \(n_{\rho}\) is the number of observations set aside for the initial estimation of \(\beta\), taken to be a fraction \(\rho\in(0,1)\) of the full sample \(n\), i.e., \(n_{\rho}=\lfloor n\rho\rfloor\). The asymptotic null distribution of \(T_{n}\) turns out to be rather complicated; McCracken shows that it is a convolution of \(q\) independent random variables, each with a distribution of \(2\int_{\rho}^{1}u^{-1}B(u)\,\mathrm{d}B(u)-\int_{\rho}^{1}u^{-2}B(u)^{2}\,\mathrm{d}u\).
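A minimal sketch of computing \(T_{n}\) by simulation may help make the statistic concrete. The data-generating process, \(\rho=0.5\), and \(h=1\) are illustrative assumptions; \(\hat{\sigma}_{\varepsilon}^{2}\) is estimated here from full-sample residuals of the larger model, one of several consistent choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, rho = 300, 1, 0.5
n_rho = int(np.floor(n * rho))   # initial estimation sample, n_rho = floor(n * rho)

# Simulate y_t = 0.5 * x1_{t-1} + eps_t, so beta2 = 0 and H0 holds
x = rng.standard_normal((n, 2))
y = np.empty(n)
y[0] = rng.standard_normal()
y[1:] = 0.5 * x[:-1, 0] + rng.standard_normal(n - 1)

def rec_forecasts(y, X, h, t0):
    """Recursive forecasts yhat_{t+h|t} for t = t0, ..., n-h-1."""
    out = []
    for t in range(t0, len(y) - h):
        b, *_ = np.linalg.lstsq(X[:t - h + 1], y[h:t + 1], rcond=None)
        out.append(X[t] @ b)
    return np.array(out)

big    = rec_forecasts(y, x, h, n_rho)          # larger model: X1 and X2
small  = rec_forecasts(y, x[:, :1], h, n_rho)   # nested model: X1 only
actual = y[n_rho + h:]

# Consistent estimate of sigma_eps^2 from full-sample residuals of the big model
bhat, *_ = np.linalg.lstsq(x[:-h], y[h:], rcond=None)
sigma2 = np.mean((y[h:] - x[:-h] @ bhat) ** 2)

# McCracken's statistic: scaled out-of-sample loss difference, small minus big
T_n = np.sum((actual - small) ** 2 - (actual - big) ** 2) / sigma2
```

Note that \(T_{n}\) can be negative in finite samples: the nested model often forecasts better out of sample when \(\beta_{2}=0\), since it avoids estimating the redundant parameters.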

Hansen and Timmermann show that \(T_{n}\) is just the difference between two Wald statistics of the hypothesis that \(\beta_{2}=0\), the first based on the full sample and the second based on the initial estimation sample. That is, \(T_{n}\) is just the increase in the Wald statistic obtained by using the full sample as opposed to the initial estimation sample. Hence \(T_{n}\) derives its power entirely from the post-split sample, so it must be less powerful than a test that uses the entire sample. Indeed Hansen and Timmermann show that power decreases as \(\rho\) increases.
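The two Wald statistics in question are easy to compute directly. The sketch below assumes homoskedastic errors, \(h=1\), and an illustrative data-generating process; the correspondence between \(T_{n}\) and the Wald increase is asymptotic, so the two need not coincide exactly in a finite sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 400, 0.5
n_rho = int(np.floor(n * rho))

# Simulate under beta2 = 0 (h = 1), so the nested model is correctly specified
x = rng.standard_normal((n, 2))
y = np.empty(n)
y[0] = rng.standard_normal()
y[1:] = 0.7 * x[:-1, 0] + rng.standard_normal(n - 1)

def wald_beta2_zero(y, X1, X2):
    """Homoskedastic Wald statistic for H0: beta2 = 0 in y = X1 b1 + X2 b2 + e,
    computed as (RSS_restricted - RSS_unrestricted) / (RSS_unrestricted / T)."""
    Xf = np.column_stack([X1, X2])
    rss_u = np.sum((y - Xf @ np.linalg.lstsq(Xf, y, rcond=None)[0]) ** 2)
    rss_r = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
    return (rss_r - rss_u) / (rss_u / len(y))

# Wald statistic on the full sample and on the initial estimation sample
W_full = wald_beta2_zero(y[1:], x[:-1, :1], x[:-1, 1:])
W_init = wald_beta2_zero(y[1:n_rho + 1], x[:n_rho, :1], x[:n_rho, 1:])
wald_increase = W_full - W_init   # the quantity T_n corresponds to
```

The split-sample test throws away the information in \(W_{n_{\rho}}\), which is exactly the power loss the Hansen-Timmermann results quantify.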

On the one hand, the Hansen-Timmermann results render trivial the calculation of \(T_{n}\) and greatly clarify its limit distribution (that of the difference between two independent \(\chi^{2}\)-distributed random variables and their convolutions). So if one insists on doing \(T_{n}\)-type tests, then the Hansen-Timmermann results are good news. On the other hand, the *real* news is bad: the Hansen-Timmermann results make clear that, at least in the environments they consider, *(pseudo-) out-of-sample model comparison comes at high cost (power reduction) and delivers no extra benefit*.

[By the way, my paper, "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," makes many related points. Drafts are here. The final (?) version will be delivered as the *JBES Invited Lecture* at the January 2014 ASSA meetings in Philadelphia. Commentary at the meeting will be by Andrew Patton and Allan Timmermann. The *JBES* published version will contain the Patton and Timmermann remarks, plus those of Atsushi Inoue, Lutz Kilian, and Jonathan Wright. Should be entertaining!]
