### Measuring Predictability

A friend writes the following (I have edited very slightly for clarity):

> Based on forecasts you've seen, what would you say is a "reasonable" ratio of the standard deviation of the forecast error to the standard deviation of a covariance-stationary series being forecast? ... It would be great if you can tell me "I'd consider x reasonable and y too high."

The problem is that the premise underlying the question (namely, that there *is* such a "reasonable" value of the ratio \(r\) of forecast-error standard deviation to unconditional standard deviation) is false. That is, there's no small value \(c\) such that \(r < c\) means that we've done a good forecasting job. Equivalently, there's no large value \(c'\) of the predictive \(R^2 = 1 - r^2\) such that \(R^2 > c'\) means that we've done a good forecasting job. Instead, "good" \(c\) or \(c'\) values depend critically on the dynamic nature of the series being forecast.

Consider, for example, a covariance-stationary AR(1) process, \(y_t = \phi y_{t-1} + \varepsilon_t\), where \(\varepsilon_t \sim iid(0, \sigma^2)\). The innovation variance is \(\sigma^2\) and the unconditional variance is \(\sigma^2 / (1 - \phi^2)\), so the lower bound on \(r\) is \(\sqrt{1 - \phi^2}\) (and hence the upper bound on \(R^2\) is \(\phi^2\)). It depends entirely on \(\phi\) and can be anywhere in the unit interval!

This is an important lesson: "predictability" can (and does) differ greatly across economic series. For more than you ever wanted to know, see Diebold and Kilian (2001), "Measuring Predictability: Theory and Macroeconomic Applications".
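The AR(1) point is easy to verify by simulation. The sketch below (my illustration, not from the original post; the function name `ar1_r2` and the simulation settings are my own choices) simulates the process for several values of \(\phi\), forms the optimal one-step forecast \(\phi y_{t-1}\), and estimates the predictive \(R^2\) from the ratio of the forecast-error standard deviation to the series standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_r2(phi, n=100_000, sigma=1.0):
    """Simulate y_t = phi*y_{t-1} + eps_t, eps_t ~ iid N(0, sigma^2),
    and estimate the predictive R^2 of the optimal 1-step forecast
    phi*y_{t-1}.  Theory says R^2 = phi^2."""
    eps = rng.normal(0.0, sigma, n)
    y = np.empty(n)
    # draw y_0 from the stationary distribution, variance sigma^2/(1-phi^2)
    y[0] = eps[0] / np.sqrt(1.0 - phi**2)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eps[t]
    # the optimal 1-step forecast error is just eps_t, so
    r = eps[1:].std() / y.std()   # sd(forecast error) / sd(series)
    return 1.0 - r**2             # predictive R^2

for phi in (0.1, 0.5, 0.9):
    print(f"phi = {phi}: simulated R^2 = {ar1_r2(phi):.3f}, theory = {phi**2:.3f}")
```

Even with the true model in hand, the attainable \(R^2\) ranges from near zero (\(\phi = 0.1\)) to near one (\(\phi \to 1\)), so no single cutoff for \(r\) or \(R^2\) can separate good forecasts from bad ones.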