Monday, August 27, 2018

Long Memory / Scaling Laws in Return Volatility

The 25-year accumulation of evidence for long memory / fractional integration / self-similarity / scaling laws in financial asset return volatility continues unabated. For the latest see this nice new paper from the Bank of Portugal, in particular its key Table 6. Of course the interval estimates of the fractional integration parameter "d" are massively far from both 0 and 1 -- that's the well-known long memory. But what's new and interesting is the systematic difference in the intervals depending on whether one uses absolute or range-based volatility. The absolute-return d intervals tend to lie entirely below 1/2 (0<d<1/2 corresponds to covariance-stationary dynamics), whereas the range-based d intervals tend to include 1/2 (1/2<d<1 corresponds to mean-reverting but not covariance-stationary dynamics, due to infinite unconditional variance).
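For concreteness, here is a minimal sketch of one standard way to get such interval estimates of d: the Geweke / Porter-Hudak (GPH) log-periodogram regression, applied to, say, absolute returns. (This is just the textbook estimator with a common bandwidth choice; the Bank of Portugal paper's exact procedure may differ.)

```python
import numpy as np

def gph_d(x, m=None):
    """Geweke / Porter-Hudak log-periodogram estimate of the
    fractional-integration parameter d, with its asymptotic std. error.
    Textbook version; bandwidth m = floor(sqrt(T)) is a common choice."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))
    w = 2 * np.pi * np.arange(1, m + 1) / n               # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = np.log(4 * np.sin(w / 2) ** 2)                    # regressor
    slope = np.polyfit(X, np.log(I), 1)[0]                # slope = -d
    return -slope, np.pi / np.sqrt(24 * m)                # (d_hat, std. error)
```

Applied to |r_t|, an interval d_hat +/- 1.96*se lying entirely below 1/2 is exactly the "absolute" pattern described above.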

Realized vol based on the range is less noisy than realized vol based on absolute returns. But least noisy of all, and not considered in the paper above, is realized vol calculated directly from high-frequency return data (HFD-vol), as done by numerous authors in recent decades. Interestingly, recent work on HFD-vol also reports d intervals that tend to poke above 1/2. See this earlier post.
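To fix ideas, here is a minimal sketch of the two less-noisy volatility proxies, with Parkinson's estimator standing in for "range-based" and the usual sum of squared intraday returns for HFD-vol (the specific estimators used in the literature vary):

```python
import numpy as np
import pandas as pd

def hfd_realized_vol(intraday_prices: pd.Series) -> float:
    """Daily realized volatility from high-frequency data:
    square root of the sum of squared intraday log returns."""
    r = np.log(intraday_prices).diff().dropna()
    return float(np.sqrt((r ** 2).sum()))

def parkinson_range_vol(high: float, low: float) -> float:
    """Range-based daily volatility (Parkinson, 1980):
    ln(H/L) / sqrt(4 ln 2)."""
    return float(np.log(high / low) / np.sqrt(4 * np.log(2)))
```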

Monday, August 20, 2018

More on the New U.S. GDP Series

BEA's new publication of NSA GDP is a massive step forward. Now it should take one more step, if it insists on continuing to publish SA GDP.

Publishing only indirect SA GDP ("adjust the components and add them up") lends the indirect approach an undeserved "official" stamp of credibility, so BEA should also publish a complementary official direct SA GDP ("adjust the aggregate directly"), which is now possible.
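A toy illustration of why the two can differ, with a crude multiplicative adjustment standing in for X-13 and made-up components (the key point is that multiplicative adjustment is nonlinear in aggregation, so adjusting-then-summing differs from summing-then-adjusting):

```python
import numpy as np
import pandas as pd

def sa_mult(x: pd.Series) -> pd.Series:
    """Crude multiplicative seasonal adjustment: remove quarter-specific
    deviations of log(x) from its overall mean. A stand-in for X-13,
    nonlinear in aggregation, which is what creates the wedge."""
    logx = np.log(x)
    factors = logx.groupby(x.index.quarter).transform("mean") - logx.mean()
    return np.exp(logx - factors)

idx = pd.period_range("2000Q1", periods=80, freq="Q")
rng = np.random.default_rng(0)
trend = np.linspace(100.0, 180.0, 80)
seas1 = np.tile([1.03, 0.96, 1.00, 1.01], 20)   # component 1 seasonal pattern
seas2 = np.tile([0.97, 1.04, 1.00, 0.99], 20)   # component 2 seasonal pattern
c1 = pd.Series(trend * seas1 + rng.normal(0, 1, 80), index=idx)
c2 = pd.Series(0.5 * trend * seas2 + rng.normal(0, 1, 80), index=idx)

indirect = sa_mult(c1) + sa_mult(c2)   # "adjust the components and add them up"
direct = sa_mult(c1 + c2)              # "adjust the aggregate directly"
gap = 400 * (np.log(indirect).diff() - np.log(direct).diff())
print(gap.abs().mean())                # wedge in annualized growth rates
```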

This is a really big deal. Real GDP is undoubtedly the most important data series in all of macroeconomics, and indirect vs. direct SA GDP growth estimates can differ greatly: their average absolute difference is about one percentage point, while average real GDP growth itself is only about two percent! Which series you use also has large implications for important issues, such as the widely-discussed puzzle of weak first-quarter growth (see Rudebusch et al.).

How do we know all this about properties of indirect vs. direct SA GDP growth estimates, since BEA doesn't provide direct SA GDP? You can now take the newly-provided NSA GDP and directly adjust it yourself. See Jonathan Wright's wonderful new paper. (Ungated version here.)
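If you want to do the direct adjustment yourself, one route is the statsmodels wrapper around the Census Bureau's X-13ARIMA-SEATS program. A sketch, assuming the X-13 binary is installed and on X13PATH, and assuming nsa_gdp is a quarterly pd.Series of NSA real GDP levels you have pulled from BEA:

```python
import numpy as np
import statsmodels.api as sm

# nsa_gdp: quarterly pd.Series of NSA real GDP levels (assumed available).
res = sm.tsa.x13_arima_analysis(nsa_gdp, log=True)
direct_sa = res.seasadj                          # directly adjusted aggregate
direct_growth = 400 * np.log(direct_sa).diff()   # annualized quarterly growth
```

Comparing direct_growth with BEA's published (indirect) SA growth reproduces the kind of discrepancies documented in Wright's paper.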

Of course direct SA has many issues of its own. Ultimately, significant parts of both direct and indirect SA GDP are likely spurious artifacts of the particular SA assumptions and methods employed.

So another, more radical, idea is simply to stop publishing SA GDP in any form, instead publishing only NSA GDP (and its NSA components). Sound crazy? Why, exactly? Are official government attempts to define and remove "seasonality" any less dubious than, say, official attempts to define and remove "trend"? (The latter is, mercifully, not attempted...)

Tuesday, August 7, 2018

Factor Model with Time-Varying Loadings

Markus Pelger has a nice paper on factor modeling with time-varying loadings in high dimensions. There are many possible applications. He applies it to level-slope-curvature yield-curve models. 

For me another really interesting application would be measuring connectedness in financial markets, as a way of tracking systemic risk. The Diebold-Yilmaz (DY) connectedness framework is based on a high-dimensional VAR with time-varying coefficients, but without factor structure. An obvious alternative in financial markets, which we used to discuss a lot but never pursued, is factor structure with time-varying loadings, exactly as in Pelger!

It would seem, however, that any reasonable connectedness measure in a factor environment would need to be based not only on time-varying loadings but also on time-varying idiosyncratic shock variances, or more precisely on a time-varying noise/signal ratio (e.g., in a 1-factor model, the ratio of the idiosyncratic shock variance to the factor innovation variance). That is, connectedness in factor environments is driven by BOTH the size of the loadings on the factor(s) AND the amount of variation in the data explained by the factor(s). Time-varying loadings don't really change anything if the factors are swamped by massive noise.
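In a 1-factor model with the factor innovation variance normalized (say, to 1), the relevant object is the time-t share of variance explained by the factor. A back-of-the-envelope sketch:

```python
def common_share(lam, sig2_f, sig2_e):
    """Time-t fraction of y's variance explained by the factor in
    y_t = lambda_t f_t + e_t:  lam^2 sig2_f / (lam^2 sig2_f + sig2_e).
    Connectedness rises with the loading only insofar as this share
    rises; large sig2_e ("massive noise") keeps it pinned near zero."""
    num = lam ** 2 * sig2_f
    return num / (num + sig2_e)

# Loading doubles, but noise quadruples -> the share is unchanged (~0.09),
# so loadings alone are not enough to track connectedness.
print(common_share(1.0, 1.0, 10.0), common_share(2.0, 1.0, 40.0))
```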

Typically one might fix the factor innovation variance for identification, but allow for time-varying idiosyncratic shock variance in addition to time-varying factor loadings. It seems that Pelger's framework does allow for that. Crudely, and continuing the 1-factor example, consider y_t = lambda_t f_t + e_t. His methods deliver estimates of the time series of loadings lambda_t and factor f_t, robust to heteroskedasticity in the idiosyncratic shock e_t. Then in a second step one could back out an estimate of the time series of e_t and fit a volatility model to it. The entire system would then be estimated, and one could calculate connectedness measures based, for example, on variance decompositions as in the DY framework.
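A sketch of how that two-step procedure might look, assuming first-step estimates lam_hat and f_hat (arrays of length T) from Pelger's method are already in hand, and using the arch package's GARCH(1,1) as the volatility model (any reasonable volatility model would do):

```python
import numpy as np
from arch import arch_model

# Assumed available from step 1: y, lam_hat, f_hat (NumPy arrays, length T)
e_hat = y - lam_hat * f_hat                      # back out idiosyncratic shocks

# Step 2: fit GARCH(1,1) to the residuals (rescaled for numerical stability)
fit = arch_model(100 * e_hat, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")
sig2_e = np.asarray(fit.conditional_volatility) ** 2 / 100 ** 2

# Time-varying common-variation share (factor innovation variance fixed at 1):
# the raw ingredient for DY-style variance-decomposition connectedness.
share = lam_hat ** 2 / (lam_hat ** 2 + sig2_e)
```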