Tuesday, July 26, 2016
An Important Example of Simultaneously Wide and Dense Data
By the way, related to my last post on wide and dense data: an important example of analyzing data that are both wide and dense is the high-frequency high-dimensional factor modeling of Pelger and of Ait-Sahalia and Xiu. Effectively they treat wide sets of realized volatilities, each of which is constructed from underlying dense data.
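To fix ideas, here is a minimal simulated sketch (illustrative only, not their estimator): every entry of a wide realized-volatility panel is distilled from dense underlying intraday returns.

```python
import numpy as np

# Simulated stand-in for dense intraday data (illustrative, not the actual estimator):
# N assets ("wide"), D days, M intraday returns per day ("dense").
rng = np.random.default_rng(0)
N, D, M = 100, 250, 390                      # e.g., 100 names, one year of 1-minute returns
r = rng.normal(scale=1e-3, size=(N, D, M))   # intraday log returns

# Daily realized variance is the sum of squared intraday returns;
# realized volatility is its square root.  The result is a wide N x D panel,
# each entry of which is built from dense underlying data.
rv = np.sqrt((r ** 2).sum(axis=-1))
print(rv.shape)                              # (100, 250)
```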
Monday, July 25, 2016
The Action is in Wide and/or Dense Data
I recently blogged on varieties of Big Data: (1) tall, (2) wide, and (3) dense.
Presumably tall data are the least interesting insofar as the only way to get a long calendar span is to sit around and wait, in contrast to wide and dense data, which now appear routinely.
But it occurs to me that tall data are also the least interesting for another reason: wide data make tall data impossible from a certain perspective. In particular, non-parametric estimation in high dimensions (that is, with wide data) is always subject to the fundamental and inescapable "curse of dimensionality": the rate at which estimation error vanishes gets hopelessly slow, very quickly, as dimension grows. [Wonks will recall that the Stone-optimal rate in \(d\) dimensions is \( \sqrt{T^{1- \frac{d}{d+4}}}\).]
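Note that \( \sqrt{T^{1- \frac{d}{d+4}}} = T^{\frac{2}{d+4}} \), so a quick back-of-the-envelope calculation (a sketch, nothing more) shows how fast width eats height: to match the accuracy delivered by \(T\) observations when \(d=1\), one needs roughly \(T^{\frac{d+4}{5}}\) observations in \(d\) dimensions.

```python
def matched_sample_size(T_univariate, d):
    """Observations needed in d dimensions to match the Stone-rate accuracy
    T**(2/(d+4)) achieved with T_univariate observations when d = 1."""
    return T_univariate ** ((d + 4) / 5)

# With 1,000 observations in one dimension, the equivalent requirement explodes:
for d in (1, 2, 5, 10, 20):
    print(d, f"{matched_sample_size(1000, d):,.0f}")
# d = 10 already demands roughly 250 million observations; d = 20, roughly 2.5e14.
```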
The upshot: As our datasets get wider, they also implicitly get less tall. That's all the more reason to downplay tall data. The action is in wide and dense data (whether separately or jointly).
Monday, July 18, 2016
The HAC Emperor has no Clothes: Part 2
The time-series kernel-HAC literature seems to have forgotten about pre-whitening. But most of the action is in the pre-whitening, as stressed in my earlier post. In time-series contexts, a parametric allowance for good old ARMA-GARCH disturbances (with AIC order selection, say) is likely to be all that's needed, cleaning out whatever conditional-mean and conditional-variance dynamics are operative, after which there's little or no need for anything else. (And although I say "parametric" ARMA-GARCH, it's actually fully non-parametric from a sieve perspective.)
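Here is a minimal sketch of the idea, using a pure AR sieve with AIC order selection in place of full ARMA-GARCH (a simplification made purely to keep the code short): fit an autoregression to the disturbances and read the implied long-run variance straight off the fitted parameters. No kernel and no truncation lag appear anywhere.

```python
import numpy as np

def ar_sieve_lrv(u, p_max=12):
    """Long-run variance of u via an AIC-selected AR(p) sieve:
    fit u_t = phi_1 u_{t-1} + ... + phi_p u_{t-p} + e_t by OLS,
    then LRV = sigma_e^2 / (1 - sum(phi))^2."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    T = len(u)
    best = None
    for p in range(1, p_max + 1):
        Y = u[p:]
        X = np.column_stack([u[p - j:T - j] for j in range(1, p + 1)])
        phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
        e = Y - X @ phi
        sigma2 = e @ e / len(Y)
        aic = len(Y) * np.log(sigma2) + 2 * p          # Gaussian AIC up to a constant
        if best is None or aic < best[0]:
            best = (aic, phi, sigma2)
    _, phi, sigma2 = best
    return sigma2 / (1.0 - phi.sum()) ** 2

# Example: AR(1) disturbances with phi = 0.8, so the true LRV is 1/(1-0.8)^2 = 25.
rng = np.random.default_rng(1)
u = np.zeros(5000)
for t in range(1, 5000):
    u[t] = 0.8 * u[t - 1] + rng.normal()
print(ar_sieve_lrv(u))   # roughly 25
```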
Instead, people focus on kernel-HAC sans pre-whitening and obsess over truncation-lag selection. Truncation-lag selection is indeed very important when pre-whitening is neglected, as too short a lag can lead to seriously distorted inference, as emphasized in the brilliant early work of Kiefer-Vogelsang and in important recent work by Lewis, Lazarus, Stock, and Watson. But all of that becomes much less important when pre-whitening is successfully implemented.
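For contrast, here is a bare-bones Bartlett-kernel (Newey-West) long-run variance with no pre-whitening at all -- again just a sketch -- showing how much the answer moves with the truncation lag on persistent data.

```python
import numpy as np

def newey_west_lrv(u, L):
    """Bartlett-kernel (Newey-West) long-run variance with truncation lag L."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    T = len(u)
    lrv = u @ u / T                                    # gamma(0)
    for j in range(1, L + 1):
        gamma_j = u[j:] @ u[:-j] / T                   # gamma(j)
        lrv += 2.0 * (1.0 - j / (L + 1.0)) * gamma_j   # Bartlett weight
    return lrv

# Persistent AR(1) disturbances (phi = 0.8, true LRV = 25): with no pre-whitening,
# the kernel estimate depends heavily on the truncation lag.
rng = np.random.default_rng(1)
u = np.zeros(5000)
for t in range(1, 5000):
    u[t] = 0.8 * u[t - 1] + rng.normal()
for L in (4, 12, 48, 96):
    print(L, round(newey_west_lrv(u, L), 1))   # short lags badly understate the LRV
```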
[Of course spectra need not be rational, so ARMA is just an approximation to a more general Wold representation (and remember, GARCH(1,1) is just an ARMA(1,1) in squares). But is that really a problem? In econometrics don't we feel comfortable with ARMA approximations 99.9 percent of the time? The only econometrically interesting process I can think of that doesn't admit a finite-order ARMA representation is long memory (fractional integration). But even that can be handled parametrically by introducing just one more parameter, moving from ARMA(p,q) to ARFIMA(p,d,q).]
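In symbols, the one extra parameter is the fractional-differencing parameter \(d\):
\[
\Phi(L)\,(1-L)^d\,(y_t - \mu) \;=\; \Theta(L)\,\varepsilon_t ,
\qquad
(1-L)^d \;=\; \sum_{k=0}^{\infty} \binom{d}{k} (-L)^k ,
\]
where \(\Phi(L)\) and \(\Theta(L)\) are the usual AR and MA lag polynomials of orders \(p\) and \(q\), and non-integer \(d \in (0, \tfrac{1}{2})\) delivers stationary long memory.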
My earlier post linked to the key early work of Den Haan and Levin, which remains unpublished. I am confident that their basic message remains intact. Indeed recent work revisits and amplifies it in important ways; see Kapetanios and Psaradakis (2016) and new work in progress by Richard Baillie to be presented at the September 2016 NBER/NSF time-series meeting at Columbia ("Is Robust Inference with OLS Sensible in Time Series Regressions?").
Sunday, July 10, 2016
Contemporaneous, Independent, and Complementary
You've probably been in a situation where you and someone else discovered something "contemporaneously and independently". Despite the initial sinking feeling, I've come to realize that there's usually nothing to worry about.
First, normal-time science has a certain internal momentum -- it simply must evolve in certain ways -- so people often identify and pluck the low-hanging fruit more-or-less simultaneously.
Second, and crucially, such incidents are usually not just the same discovery made twice. Rather, although intimately-related, the two contributions usually differ in subtle but important ways, rendering them complements, not substitutes.
Here's a good recent example in financial econometrics, working out asymptotics for high-frequency high-dimensional factor models. On the one hand, consider Pelger, and on the other hand consider Ait-Sahalia and Xiu. There's plenty of room in the world for both, and the whole is even greater than the sum of the (individually-impressive) parts.
Sunday, July 3, 2016
DAG Software
Some time ago I mentioned the DAG (directed acyclic graph) primer by Judea Pearl et al. As noted in Pearl's recent blog post, a manual will be available with software solutions based on the DAGitty R package. See http://dagitty.net/primer/.
More generally -- that is, quite apart from the Pearl et al. primer -- check out DAGitty at http://dagitty.net. Click on "launch" and play around for a few minutes. Very cool.