Check out the new paper, "Regression Discontinuity in Time [RDiT]: Considerations for Empirical Applications", by Catherine Hausman and David S. Rapson. (NBER Working Paper No. 23602, July 2017. Ungated copy here.)
It's interesting in part because it documents and contributes to the largely cross-sectional regression discontinuity design literature's awakening to time series. But the elephant in the room is the large time-series "event study" (ES) literature, which Hausman and Rapson mention but don't emphasize. [In a one-sentence nutshell, here's how an ES works: model the pre-event period, use the fitted pre-event model to predict the post-event period, and ascribe any systematic forecast error to the causal impact of the event.] ESs trace to the classic Fama et al. (1969). Among many others, MacKinlay's 1997 overview is still fresh, and Gürkaynak and Wright (2013) provide additional perspective.
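The one-sentence nutshell above can be sketched in a few lines of code. This is a deliberately minimal version, using a constant-mean pre-event "model" and a simulated setup of my own; any real application would fit a richer pre-event model (market model, AR, etc.):

```python
import numpy as np

def event_study(y_pre, y_post):
    """Minimal event study: model the pre-event period (here, just its
    mean), forecast the post-event period with the fitted model, and
    test whether the average forecast error is systematically nonzero."""
    forecast = y_pre.mean()                # fitted pre-event "model"
    errors = y_post - forecast             # post-event forecast errors
    # t-statistic for the mean forecast error, scaled by pre-event variability
    t = errors.mean() / (y_pre.std(ddof=1) / np.sqrt(len(y_post)))
    return errors.mean(), t                # (estimated effect, t-stat)
```

The estimated effect is simply the average post-event forecast error; a large t-statistic is what licenses ascribing it to the event (conditional, of course, on the pre-event model being adequate).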
One question is what the RDiT approach adds to the ES approach and, relatedly, what it adds to the well-developed time-series toolkit of other methods for assessing structural change. At present, and notwithstanding the Hausman-Rapson paper, my view is "little or nothing". Indeed in most respects it would seem that an RDiT study *is* an ES, and conversely. So call it what you will, "ES" or "RDiT".
But there are important open issues in ES / RDiT, and Hausman-Rapson correctly emphasize one of them: the difficulties associated with "wide" pre- and post-event windows, often the relevant case in time series.
Things are generally "easy" in cross sections, where we can usually take narrow windows (e.g., in the classic scholarship exam example, we use only test scores very close to the scholarship threshold). Things are similarly "easy" in time series *IF* we can take similarly narrow windows (e.g., high-frequency asset return data facilitate taking narrow pre- and post-event windows in financial applications). In such cases it's comparatively easy to credibly ascribe a post-event break to the causal impact of the event.
But in other time-series areas like macro and environmental, we might want (or need) to use wide pre- and post-event windows. Then the trick becomes modeling the pre- and post-event periods successfully enough so that we can credibly assert that any structural change is due exclusively to the event -- very challenging, but not hopeless.
Hats off to Hausman and Rapson for beginning to bridge the ES and regression discontinuity literatures, and for implicitly helping to push the ES literature forward.
I'll have something to say in next week's post. Meanwhile, check out the interesting new paper, "Regression Discontinuity in Time: Considerations for Empirical Applications", by Catherine Hausman and David S. Rapson, NBER Working Paper No. 23602, July 2017. (Ungated version here.)
Efron and Hastie note that the "frequentist" term "seems to have been suggested by Neyman as a statistical analogue of Richard von Mises' frequentist theory of probability, the connection being made explicit in his 1977 paper, 'Frequentist Probability and Frequentist Statistics'". It strikes me that I may have always subconsciously assumed that the term originated with one or another Bayesian, in an attempt to steer toward something more neutral than "classical", which could be interpreted as "canonical" or "foundational" or "the first and best". Quite fascinating that the ultimate "classical" statistician, Neyman, seems to have initiated the switch to "frequentist".
Here are my slides from yesterday.
I want to clarify an aspect of the Diebold-Yilmaz framework (e.g., here or here). It is simply a method for summarizing and visualizing dynamic network connectedness, based on a variance decomposition matrix. The variance decomposition is not a part of our technology; rather, it is the key input to our technology. Calculation of a variance decomposition of course requires an identified model. We have nothing new to say about that; numerous models/identifications have appeared over the years, and it's your choice (but you will of course have to defend your choice).
For certain reasons (e.g., comparatively easy extension to high dimensions) Yilmaz and I generally use a vector-autoregressive model and Koop-Pesaran-Shin "generalized identification". Again, however, if you don't find that appealing, you can use whatever model and identification scheme you want. As long as you can supply a credible / defensible variance decomposition matrix, the network summarization / visualization technology can then take over.
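Taking the variance decomposition matrix as given, the summarization step can be sketched roughly as follows. The function name and details here are my illustration, not code from the Diebold-Yilmaz papers, which define a fuller set of measures:

```python
import numpy as np

def connectedness(D):
    """Given an N x N forecast-error variance decomposition matrix D,
    with D[i, j] the share of variable i's H-step forecast-error variance
    attributable to shocks to variable j, compute basic Diebold-Yilmaz
    connectedness measures. D can come from any identified model;
    generalized decompositions require the row normalization below."""
    D = D / D.sum(axis=1, keepdims=True)   # normalize so each row sums to one
    off = D - np.diag(np.diag(D))          # off-diagonal (cross-variable) shares
    from_others = off.sum(axis=1)          # directional: received by i from others
    to_others = off.sum(axis=0)            # directional: transmitted by i to others
    total = off.sum() / D.shape[0]         # total connectedness index
    return from_others, to_others, total
```

The point of the post in code form: everything here operates on D after the fact; the identification that produced D is entirely the user's responsibility.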
In Ch. 3 of their brilliant book, Efron and Hastie (EH) assert that:

"Jeffreys’ brand of Bayesianism [i.e., "uninformative" Jeffreys priors] had a dubious reputation among Bayesians in the period 1950-1990, with preference going to subjective analysis of the type advocated by Savage and de Finetti. The introduction of Markov chain Monte Carlo methodology was the kind of technological innovation that changes philosophies. MCMC ... being very well suited to Jeffreys-style analysis of Big Data problems, moved Bayesian statistics out of the textbooks and into the world of computer-age applications."
Interestingly, the situation in econometrics strikes me as rather the opposite. Pre-MCMC, much of the leading work emphasized Jeffreys priors (RIP Arnold Zellner), whereas post-MCMC I see uniform priors at best (still hardly uninformative, as is well known and as noted by EH), and often Gaussian or Wishart or whatever. MCMC of course still came to dominate modern Bayesian econometrics, but for a different reason: it facilitates calculation of the marginal posteriors of interest, in contrast to the conditional posteriors of old-style analytical calculations. (In an obvious notation and for an obvious normal-gamma regression problem, for example, one wants posterior(beta), not posterior(beta | sigma).) So MCMC has moved us toward marginal posteriors, but moved us away from uninformative priors.
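For the normal-gamma regression example, a minimal Gibbs sketch makes the point concrete: each step draws from a *conditional* posterior, yet the retained beta draws approximate the *marginal* posterior(beta). Function name, priors, and tuning constants below are illustrative choices of mine, not a canonical setup:

```python
import numpy as np

def gibbs_regression(y, X, n_draws=2000, burn=500, tau2=100.0, a=2.0, b=1.0, seed=0):
    """Gibbs sampler for y = X beta + e, e ~ N(0, sigma2), with
    illustrative priors beta ~ N(0, tau2 I) and sigma2 ~ Inverse-Gamma(a, b).
    Alternates draws from posterior(beta | sigma2) and posterior(sigma2 | beta);
    the retained beta draws are draws from the marginal posterior(beta)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta, sigma2 = np.zeros(k), 1.0
    draws = []
    for it in range(n_draws):
        # beta | sigma2, y  ~  N(m, V)
        V = np.linalg.inv(XtX / sigma2 + np.eye(k) / tau2)
        m = V @ (Xty / sigma2)
        beta = rng.multivariate_normal(m, V)
        # sigma2 | beta, y  ~  Inverse-Gamma(a + n/2, b + SSR/2)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a + n / 2, 1.0 / (b + 0.5 * resid @ resid))
        if it >= burn:
            draws.append(beta)
    return np.array(draws)  # approximate draws from marginal posterior(beta)
```

Note that the machinery requires proper conditional posteriors at every step, which is exactly why Gaussian / inverse-gamma (rather than Jeffreys-style) priors are the path of least resistance here.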