Monday, April 15, 2019

Hedging Realized vs. Expected Volatility

Not all conferences can be above average, let alone in the extreme right tail of the distribution, so it's wonderful when it happens, as with last week's AP conference. Fine papers all -- timely, thought-provoking, and empirically sophisticated.  Thanks to Jan Eberly and Konstantin Milbradt for assembling the program, here (including links to papers). 

I keep thinking about the Dew-Becker-Giglio-Kelly paper. For returns r, they produce evidence that (1) investors are willing to pay a lot to insure against movements in realized volatility, r_t^2, but (2) investors are not willing to pay to insure against movements in expected future realized volatility (the conditional variance, E_t(r_{t+1}^2)). On the one hand, as a realized volatility guy I'm really intrigued by (1). On the other hand, it seems hard to reconcile (1) and (2), a concern that was raised at the meeting. On the third hand, maybe it's not so hard.  Hmmm...
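
To fix the shorthand (mine, and it need not match the paper's exact definitions), the key distinction is ex post vs. ex ante:

\[
r_t^2 \ \ (\text{ex post: the realized quantity})
\qquad \text{vs.} \qquad
\mathrm{E}\!\left[ r_{t+1}^2 \mid I_t \right] = \mathrm{Var}\!\left( r_{t+1} \mid I_t \right) \ \text{ if } \mathrm{E}\!\left[ r_{t+1} \mid I_t \right] = 0 \ \ (\text{ex ante: the conditional variance}).
\]

So (1) concerns claims that pay off on the realization r_t^2 itself, while (2) concerns claims that pay off on movements in the expectation E_t(r_{t+1}^2).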

Wednesday, April 10, 2019

Bad News for IV Estimation

Alwyn Young has an eye-opening recent paper, "Consistency without Inference: Instrumental Variables in Practical Application".  There's a lot going on in his Monte Carlo worth thinking about:  OLS vs. IV; robust/clustered s.e.'s vs. not; testing/accounting for weak instruments vs. not; jackknife/bootstrap vs. "conventional" inference; etc.  IV as typically implemented comes up looking, well, dubious.

Alwyn's related analysis of published studies is even more striking.  He shows that, in a sample of 1359 IV regressions in 31 papers published in the journals of the American Economic Association,
"... statistically significant IV results generally depend upon only one or two observations or clusters, excluded instruments often appear to be irrelevant, there is little statistical evidence that OLS is actually substantively biased, and IV confidence intervals almost always include OLS point estimates." 
Wow.

Perhaps the high leverage is Alwyn's most striking result, particularly as many empirical economists seem to have skipped class on the day when leverage assessment was taught.  Decades ago, Marjorie Flavin attempted some remedial education in her 1991 paper, "The Joint Consumption/Asset Demand Decision: A Case Study in Robust Estimation".  She concluded that
"Compared to the conventional results, the robust instrumental variables estimates are more stable across different subsamples, more consistent with the theoretical specification of the model, and indicate that some of the most striking findings in the conventional results were attributable to a single, highly unusual observation." 
Sound familiar?  The non-robustness of conventional IV seems disturbingly robust, from Flavin to Young.
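
For readers who want to kick the tires on their own IV regressions, here is a minimal drop-one sensitivity check in Python. It uses simulated data and the just-identified case (one endogenous regressor, one instrument); it illustrates the leverage point and is not Young's or Flavin's procedure.

import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated just-identified IV setup: z instruments x, u is the structural error.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.5 * z + 0.8 * u + rng.normal(size=n)   # x is endogenous (correlated with u)
y = 1.0 * x + u

def iv_slope(y, x, z):
    """Just-identified IV slope estimate: cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

beta_full = iv_slope(y, x, z)

# Drop-one sensitivity: re-estimate leaving out each observation in turn.
drop_one = np.array([
    iv_slope(np.delete(y, i), np.delete(x, i), np.delete(z, i))
    for i in range(n)
])

print(f"full-sample IV slope: {beta_full:.3f}")
print(f"range of drop-one estimates: [{drop_one.min():.3f}, {drop_one.max():.3f}]")
# If a single observation (or cluster) moves the estimate or its significance
# materially, that is exactly the leverage problem Young and Flavin document.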

Flavin's paper evidently fell on deaf ears and remains unpublished. Hopefully Young's will not meet the same fate.

Monday, April 8, 2019

Identification via the ZLB and More

Sophocles Mavroeidis at Oxford has a very nice paper on using the nominal interest rate zero lower bound (ZLB) to identify VARs.  Effectively, hitting the ZLB is a form of (endogenous) structural change that can be exploited for identification.  He has results showing whether/when one has point identification, set identification, or no identification. Really good stuff.

An interesting question is whether there may be SETS of bounds that can be hit. Suppose so, and suppose that we don't know whether/when they'll be hit, but we do know that if/when one bound is hit, all bounds are hit. An example might be nominal short rates in two countries with tightly-integrated money markets.

Now recall the literature on testing for multivariate structural change, which reveals large power increases in such situations (Bai, Lumsdaine and Stock). In Sophocles' case, it suggests the potential for greatly sharpened set ID.  Of course it all depends on the truth/relevance of my supposition...

Friday, April 5, 2019

Inference with Social Network Dependence

I'm running behind as usual. I meant to post this right after the seminar, about two weeks ago.  Really interesting stuff -- spatial correlation due to network dependence.  A Google search will find the associated paper(s) instantly. Again, really good stuff.  BUT I would humbly suggest that the biostat people need to read more econometrics. A good start is this survey (itself four years old, and itself a distillation for practitioners; the basic insights were known and published decades ago). The cool question moving forward is whether/when/how network structure can be used to determine/inform clustering.
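
On the econometrics side, the now-standard starting point is cluster-robust inference. Here is a minimal sketch in Python with statsmodels, using simulated data and an assumed cluster assignment; how best to map network structure into clusters is exactly the open question.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_clusters, per_cluster = 30, 20
n = n_clusters * per_cluster

groups = np.repeat(np.arange(n_clusters), per_cluster)    # cluster (e.g., community) labels
cluster_effect = rng.normal(size=n_clusters)[groups]      # within-cluster dependence

x = rng.normal(size=n) + 0.5 * cluster_effect
y = 1.0 + 0.5 * x + cluster_effect + rng.normal(size=n)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                   # default (iid) standard errors
clustered = sm.OLS(y, X).fit(cov_type="cluster",
                             cov_kwds={"groups": groups})  # cluster-robust standard errors

print("iid standard errors:      ", ols.bse)
print("clustered standard errors:", clustered.bse)
# With within-cluster dependence the clustered standard errors are typically
# noticeably larger -- the kind of correction that network dependence calls for.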


Elizabeth L. Ogburn
Department of Biostatistics
Johns Hopkins University

Social network dependence, the replication crisis, and (in)valid inference

Abstract:
In the first part of this talk, I will show that social network structure can result in a new kind of structural confounding, confounding by network structure, potentially contributing to replication crises across the health and social sciences.  Researchers in these fields frequently sample subjects from one or a small number of communities, schools, hospitals, etc., and while many of the limitations of such convenience samples are well-known, the issue of statistical dependence due to social network ties has not previously been addressed. A paradigmatic example of this is the Framingham Heart Study (FHS). Using a statistic that we adapted to measure network dependence, we test for network dependence and for possible confounding by network structure in several of the thousands of influential papers published using FHS data. Results suggest that some of the many decades of research on coronary heart disease, other health outcomes, and peer influence using FHS data may be biased (away from the null) and anticonservative due to unacknowledged network structure.

But data with network dependence abounds, and in many settings researchers are explicitly interested in learning about social network dynamics.  Therefore, there is high demand for methods for causal and statistical inference with social network data. In the second part of the talk, I will describe recent work on causal inference for observational data from a single social network, focusing on (1) new types of causal estimands that are of interest in social network settings, and (2) conditions under which central limit theorems hold and inference based on approximate normality is licensed.

Monday, March 25, 2019

Ensemble Methods for Causal Prediction

Great to see ensemble learning methods (i.e., forecast combination) moving into areas of econometrics beyond time series / macro-econometrics, where they have thrived ever since Bates and Granger (1969), generating a massive and vibrant literature.  (For a recent contribution, including historical references, see Diebold and Shin, 2019.)  In particular, the micro-econometric / panel / causal literature is coming on board.  See for example this new and interesting paper by Susan Athey et al.
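
For the uninitiated, the core idea is embarrassingly simple. Here is a minimal sketch in Python with simulated forecasts and weights in the spirit of Bates-Granger (inversely proportional to in-sample squared error); it is an illustration of forecast combination, not the Athey et al. procedure.

import numpy as np

rng = np.random.default_rng(2)
T = 500
y = rng.normal(size=T)                       # target series

# Two noisy "forecasts" of y with different error variances.
f1 = y + rng.normal(scale=1.0, size=T)
f2 = y + rng.normal(scale=2.0, size=T)

# Estimate weights on a training sample, evaluate on a holdout.
train, test = slice(0, 300), slice(300, T)
mse1 = np.mean((y[train] - f1[train]) ** 2)
mse2 = np.mean((y[train] - f2[train]) ** 2)

# Bates-Granger-style weights: inversely proportional to in-sample MSE.
w1 = (1 / mse1) / (1 / mse1 + 1 / mse2)
combo = w1 * f1[test] + (1 - w1) * f2[test]

for name, f in [("f1", f1[test]), ("f2", f2[test]), ("combo", combo)]:
    print(name, "out-of-sample MSE:", round(float(np.mean((y[test] - f) ** 2)), 3))
# The combination typically beats the worse forecast and often both --
# the same logic that ensemble methods exploit in the causal/ML setting.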

Monday, March 18, 2019

Alan Krueger RIP

Very sad to report that Alan Krueger has passed away.  He was a tremendously gifted empirical economist, with a fine feel for identifying issues that were truly important, and for designing novel and powerful empirical strategies to address them.

The Housing Risk Premium is Huge

Earlier I blogged on Jordà et al.'s fascinating paper, "The Rate of Return on Everything".  Now they're putting their rich dataset to good use.  Check out the new paper, NBER w.p. 25653.

The Total Risk Premium Puzzle
Òscar Jordà, Moritz Schularick, and Alan M. Taylor

Abstract:
The risk premium puzzle is worse than you think. Using a new database for the U.S. and 15 other advanced economies from 1870 to the present that includes housing as well as equity returns (to capture the full risky capital portfolio of the representative agent), standard calculations using returns to total wealth and consumption show that: housing returns in the long run are comparable to those of equities, and yet housing returns have lower volatility and lower covariance with consumption growth than equities. The same applies to a weighted total-wealth portfolio, and over a range of horizons. As a result, the implied risk aversion parameters for housing wealth and total wealth are even larger than those for equities, often by a factor of 2 or more. We find that more exotic models cannot resolve these even bigger puzzles, and we see little role for limited participation, idiosyncratic housing risk, transaction costs, or liquidity premiums. 
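
For readers who want the back-of-the-envelope behind "implied risk aversion": in the standard log-linearized consumption-CAPM calculation (my rendering, not necessarily the paper's exact setup),

\[
\mathrm{E}\!\left[r^{e}\right] \approx \gamma \, \mathrm{cov}\!\left(r^{e}, \Delta c\right)
\quad \Longrightarrow \quad
\hat{\gamma} = \frac{\mathrm{E}\!\left[r^{e}\right]}{\mathrm{cov}\!\left(r^{e}, \Delta c\right)},
\]

where r^e is the excess return on the risky portfolio (equity, housing, or total wealth) and Delta c is consumption growth. Housing delivers roughly equity-sized E[r^e] with smaller cov(r^e, Delta c), so the implied gamma is even larger. Hence "worse than you think."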

Friday, March 15, 2019

Neyman-Pearson Classification

Neyman-Pearson (NP) hypothesis testing insists on fixed asymptotic test size (5%, say) and then takes whatever power it can get. Bayesian hypothesis assessment, in contrast, treats type I and II errors symmetrically, with size approaching 0 and power approaching 1 asymptotically. 

Classification tends to parallel Bayesian hypothesis assessment, again treating type I and II errors symmetrically.  For example, I might run a logit regression and classify cases with fitted P(y=1) < 1/2 as group 0 and cases with fitted P(y=1) > 1/2 as group 1.  The classification threshold of 1/2 produces a "Bayes classifier".  
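
In code, the Bayes classifier is just a thresholded fitted probability. A minimal sketch in Python (simulated data, scikit-learn logit; an illustration, nothing more):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))   # true P(y = 1 | x)
y = rng.binomial(1, p_true)

logit = LogisticRegression().fit(x, y)
phat = logit.predict_proba(x)[:, 1]                      # fitted P(y = 1 | x)

yhat_bayes = (phat > 0.5).astype(int)                    # Bayes classifier: threshold at 1/2
print("type I error (false 1s among true 0s): ",
      round(float(np.mean(yhat_bayes[y == 0] == 1)), 3))
print("type II error (false 0s among true 1s):",
      round(float(np.mean(yhat_bayes[y == 1] == 0)), 3))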

Bayes classifiers seem natural, and in many applications they are.  But an interesting insight is that some classification problems may have hugely different costs of type I and II errors, in which case an NP classification approach may be entirely natural, not clumsy.  (Consider, for example, deciding whether to convict someone of a crime that carries the death penalty.  Many people would view the cost of a false declaration of "guilty" as much greater than the cost of a false "innocent".) 

This leads to the idea and desirability of NP classifiers.  The issue is how to bound the type I classification error probability at some small chosen value.  Obviously it involves moving the classification threshold away from 1/2, but figuring out exactly what to do turns out to be a challenging problem.  Xin Tong and co-authors have made good progress; a toy sketch of the basic threshold idea appears after the reference list below.  Here are some of his papers (from his USC site):
  1. Chen, Y., Li, J.J., and Tong, X.* (2019) Neyman-Pearson criterion (NPC): a model selection criterion for asymmetric binary classification. arXiv:1903.05262.
  2. Tong, X., Xia, L., Wang, J., and Feng, Y. (2018) Neyman-Pearson classification: parametrics and power enhancement. arXiv:1802.02557v3.
  3. Xia, L., Zhao, R., Wu, Y., and Tong, X.* (2018) Intentional control of type I error over unconscious data distortion: a Neyman-Pearson approach to text classification. arXiv:1802.02558.
  4. Tong, X.*, Feng, Y. and Li, J.J. (2018) Neyman-Pearson (NP) classification algorithms and NP receiver operating characteristics (NP-ROC). Science Advances, 4(2):eaao1659.
  5. Zhao, A., Feng, Y., Wang, L., and Tong, X.* (2016) Neyman-Pearson classification under high-dimensional settings. Journal of Machine Learning Research, 17:1-39.
  6. Li, J.J. and Tong, X. (2016) Genomic applications of the Neyman-Pearson classification paradigm. Chapter in Big Data Analytics in Genomics. Springer (New York). DOI: 10.1007/978-3-319-41279-5; eBook ISBN: 978-3-319-41279-5.
  7. Tong, X.*, Feng, Y. and Zhao, A. (2016) A survey on Neyman-Pearson classification and suggestions for future research. Wiley Interdisciplinary Reviews: Computational Statistics, 8:64-81.
  8. Tong, X.* (2013). A plug-in approach to Neyman-Pearson classification. Journal of Machine Learning Research, 14:3011-3040.
  9. Rigollet, P. and Tong, X. (2011) Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12:2825-2849.
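
To make the threshold idea concrete, here is a toy sketch in the same spirit: fit a logit, then choose the cutoff as a high empirical quantile of the fitted probabilities among held-out class-0 cases, so that the type I error is (approximately) bounded at alpha. This conveys only the flavor of the plug-in idea; the papers above treat the finite-sample probability of violating the bound with far more care.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 4000
x = rng.normal(size=(n, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1]))))

# Split: half to fit the logit, half (calibration) to choose and assess the threshold.
fit, cal = slice(0, n // 2), slice(n // 2, n)
logit = LogisticRegression().fit(x[fit], y[fit])

alpha = 0.05                                         # target bound on type I error
scores0 = logit.predict_proba(x[cal][y[cal] == 0])[:, 1]
threshold = np.quantile(scores0, 1 - alpha)          # (1 - alpha) quantile of class-0 scores

phat = logit.predict_proba(x[cal])[:, 1]
yhat_np = (phat > threshold).astype(int)
print("NP threshold:", round(float(threshold), 3))
print("empirical type I error: ", round(float(np.mean(yhat_np[y[cal] == 0] == 1)), 3))
print("empirical type II error:", round(float(np.mean(yhat_np[y[cal] == 1] == 0)), 3))
# The type I error is held near alpha by construction; the type II error is the
# price paid in power, which is what the NP classification literature works to minimize.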

Machine Learning and Alternative Data for Predicting Economic Indicators

I discussed an interesting paper by Chen et al. today at the CRIW.  My slides are here.