Monday, March 25, 2019
Great to see ensemble learning methods (i.e., forecast combination) moving into areas of econometrics beyond time series / macro-econometrics, where they have thrived ever since Bates and Granger (1969), generating a massive and vibrant literature. (For a recent contribution, including historical references, see Diebold and Shin, 2019.) In particular, the micro-econometric / panel / causal literature is coming on board. See, for example, this new and interesting paper by Susan Athey et al.
Saturday, March 23, 2019
Big Data in Dynamic Predictive Modeling
Our Journal of Econometrics issue, Big Data in Dynamic Predictive Econometric Modeling, is now in press. It is partly based on a Penn conference, generously supported by Penn's Warren Center for Network and Data Sciences, University of Chicago's Stevanovich Center for Financial Mathematics, and Penn's Institute for Economic Research. The intro is here and the paper list is here.
Monday, March 18, 2019
Alan Krueger RIP
Very sad to report that Alan Krueger has passed away. He was a tremendously gifted empirical economist, with a fine feel for identifying issues that were truly important, and for designing novel and powerful empirical strategies to address them.
The Housing Risk Premium is Huge
Earlier I blogged on Jordà et al.'s fascinating paper, "The Rate of Return on Everything". Now they're putting their rich dataset to good use. Check out their new paper, NBER w.p. 25653.
The Total Risk Premium Puzzle
Òscar Jordà, Moritz Schularick, and Alan M. Taylor
Abstract:
The risk premium puzzle is worse than you think. Using a new database for the U.S. and 15 other advanced economies from 1870 to the present that includes housing as well as equity returns (to capture the full risky capital portfolio of the representative agent), standard calculations using returns to total wealth and consumption show that: housing returns in the long run are comparable to those of equities, and yet housing returns have lower volatility and lower covariance with consumption growth than equities. The same applies to a weighted total-wealth portfolio, and over a range of horizons. As a result, the implied risk aversion parameters for housing wealth and total wealth are even larger than those for equities, often by a factor of 2 or more. We find that more exotic models cannot resolve these even bigger puzzles, and we see little role for limited participation, idiosyncratic housing risk, transaction costs, or liquidity premiums.
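(For readers who wonder where such implied-risk-aversion numbers come from, here is a minimal sketch of the textbook consumption-based calculation, E[excess return] ≈ γ × cov(return, consumption growth), solved for γ. The moments below are made-up illustrative values, not the paper's data.)

```python
# Minimal sketch of the standard implied-risk-aversion calculation:
#   E[excess return] ~= gamma * cov(return, consumption growth)
# => gamma ~= E[excess return] / (corr * sd_return * sd_cgrowth).
# All numbers below are made up for illustration; they are NOT the
# moments estimated in the Jorda-Schularick-Taylor paper.

def implied_gamma(mean_excess, sd_return, sd_cgrowth, corr):
    cov = corr * sd_return * sd_cgrowth
    return mean_excess / cov

# equity-like moments: large premium, high volatility, higher correlation
gamma_equity = implied_gamma(mean_excess=0.06, sd_return=0.17,
                             sd_cgrowth=0.02, corr=0.30)

# housing-like moments: comparable premium, but lower volatility and lower
# correlation with consumption growth, so the implied gamma is even larger
gamma_housing = implied_gamma(mean_excess=0.06, sd_return=0.08,
                              sd_cgrowth=0.02, corr=0.15)

print(f"implied gamma, equity-like moments:  {gamma_equity:.0f}")
print(f"implied gamma, housing-like moments: {gamma_housing:.0f}")
```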
Friday, March 15, 2019
Neyman-Pearson Classification
Neyman-Pearson (NP) hypothesis testing insists on fixed asymptotic test size (5%, say) and then takes whatever power it can get. Bayesian hypothesis assessment, in contrast, treats type I and II errors symmetrically, with size approaching 0 and power approaching 1 asymptotically.
Classification tends to parallel Bayesian hypothesis assessment, again treating type I and II errors symmetrically. For example, I might do a logit regression and classify cases with fitted P(I=1) < 1/2 as group 0 and cases with fitted P(I=1) > 1/2 as group 1. The classification threshold of 1/2 produces a "Bayes classifier".
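To make that concrete, here is a minimal sketch (simulated data and a scikit-learn logit, my own illustration, not code from any of the papers below) of a Bayes classifier that thresholds the fitted probability at 1/2:

```python
# Bayes classifier sketch: fit a logit, assign group 1 iff fitted P(y=1|x) > 1/2.
# Data are simulated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.5 + 1.5 * x[:, 0] - 1.0 * x[:, 1])))  # true logit probabilities
y = rng.binomial(1, p)

model = LogisticRegression().fit(x, y)
p_hat = model.predict_proba(x)[:, 1]
y_hat = (p_hat > 0.5).astype(int)          # threshold at 1/2: the Bayes classifier

type1 = np.mean(y_hat[y == 0] == 1)        # misclassified true 0s
type2 = np.mean(y_hat[y == 1] == 0)        # misclassified true 1s
print(f"type I error: {type1:.3f}, type II error: {type2:.3f}")
```

With symmetric costs the two error rates are treated alike; nothing controls the type I rate at any pre-chosen level.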
Bayes classifiers seem natural, and in many applications they are. But an interesting insight is that some classification problems may have hugely different costs of type I and II errors, in which case an NP classification approach may be entirely natural, not clumsy. (Consider, for example, deciding whether to convict someone of a crime that carries the death penalty. Many people would view the cost of a false declaration of "guilty" as much greater than the cost of a false "innocent".)
This leads to the idea and desirability of NP classifiers. The issue is how to bound the type I classification error probability at some small chosen value. Obviously it involves moving the classification threshold away from 1/2, but figuring out exactly what to do turns out to be a challenging problem. Xin Tong and co-authors have made good progress; a small numerical sketch of the basic idea appears after the list. Here are some of his papers (from his USC site):
- Chen, Y., Li, J.J., and Tong, X.* (2019) Neyman-Pearson criterion (NPC): a model selection criterion for asymmetric binary classification. arXiv:1903.05262.
- Tong, X., Xia, L., Wang, J., and Feng, Y. (2018) Neyman-Pearson classification: parametrics and power enhancement. arXiv:1802.02557v3.
- Xia, L., Zhao, R., Wu, Y., and Tong, X.* (2018) Intentional control of type I error over unconscious data distortion: a Neyman-Pearson approach to text classification. arXiv:1802.02558.
- Tong, X.*, Feng, Y. and Li, J.J. (2018) Neyman-Pearson (NP) classification algorithms and NP receiver operating characteristics (NP-ROC). Science Advances, 4(2):eaao1659.
- Zhao, A., Feng, Y., Wang, L., and Tong, X.* (2016) Neyman-Pearson classification under high-dimensional settings. Journal of Machine Learning Research, 17:1−39.
- Li, J.J. and Tong, X. (2016) Genomic applications of the Neyman-Pearson classification paradigm. Chapter in Big Data Analytics in Genomics. Springer (New York). DOI: 10.1007/978-3-319-41279-5; eBook ISBN: 978-3-319-41279-5.
- Tong, X.*, Feng, Y. and Zhao, A. (2016) A survey on Neyman-Pearson classification and suggestions for future research. Wiley Interdisciplinary Reviews: Computational Statistics, 8:64-81.
- Tong, X.* (2013). A plug-in approach to Neyman-Pearson classification. Journal of Machine Learning Research, 14:3011-3040.
- Rigollet, P. and Tong, X. (2011) Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12:2825-2849.
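And here is a rough sketch of the NP flavor (again my own illustration, not the algorithms in the papers above): same logit, but the threshold is calibrated on held-out class-0 scores so that the type I error is approximately bounded at a chosen α. The published NP umbrella / plug-in algorithms use a more careful order-statistic rule with a high-probability guarantee; this only conveys the basic idea.

```python
# NP-style classification sketch: bound the type I error at alpha by choosing
# the threshold as the (1 - alpha) empirical quantile of held-out class-0 scores,
# instead of using the Bayes threshold 1/2.  Illustrative only; the formal
# guarantees in the NP literature come from a sharper order-statistic rule.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 6000
x = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.5 + 1.5 * x[:, 0] - 1.0 * x[:, 1])))
y = rng.binomial(1, p)

# three-way split: fit the model, calibrate the threshold, evaluate
x_tr, x_rest, y_tr, y_rest = train_test_split(x, y, test_size=0.6, random_state=0)
x_cal, x_te, y_cal, y_te = train_test_split(x_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression().fit(x_tr, y_tr)

alpha = 0.05                                           # desired type I error bound
s0_cal = model.predict_proba(x_cal[y_cal == 0])[:, 1]  # class-0 calibration scores
threshold = np.quantile(s0_cal, 1 - alpha)             # NP threshold, moved away from 1/2

s_te = model.predict_proba(x_te)[:, 1]
y_hat = (s_te > threshold).astype(int)

type1 = np.mean(y_hat[y_te == 0] == 1)
type2 = np.mean(y_hat[y_te == 1] == 0)
print(f"threshold {threshold:.2f}: type I {type1:.3f}, type II {type2:.3f}")
```

The price of holding the type I error near α is, of course, a higher type II error.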
Machine Learning and Alternative Data for Predicting Economic Indicators
I discussed an interesting paper by Chen et al. today at the CRIW (NBER Conference on Research in Income and Wealth). My slides are here.
Wednesday, March 6, 2019
Significance Testing as a Noise Amplifier
See this insightful post on why statistical significance testing is effectively a noise amplifier. I find it interesting along the lines of "something not usually conceptualized in terms of XX is revealed to be very much about XX". In this case XX is noise amplification / reduction. Like many good insights, it seems obvious ex post, but no one recognized it before the "eureka moment".
So significance testing is really a filter: the input is data and the output is an accept/reject decision for some hypothesis. But what a nonlinear, imprecisely defined filter it is; we're a long way from the gain functions of simple linear filters in classical frequency-domain filter analysis!
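A quick back-of-the-envelope Monte Carlo (my own illustration with made-up parameters, not taken from the linked post) shows the amplification: when power is low, the estimates that survive the 5% filter are systematically exaggerated relative to the true effect.

```python
# Monte Carlo sketch of the "noise amplifier": a small true effect, a noisy
# estimator, and a two-sided 5% significance filter.  Parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
true_effect, se, reps = 0.1, 0.1, 100_000      # small effect, noisy estimator

estimates = true_effect + se * rng.normal(size=reps)
t_stats = estimates / se
significant = np.abs(t_stats) > 1.96           # pass/fail of the 5% filter

print(f"power: {significant.mean():.2f}")
print(f"mean estimate, all draws:          {estimates.mean():.3f}")
print(f"mean |estimate|, significant only: {np.abs(estimates[significant]).mean():.3f}")
# Conditional on passing the filter, the surviving estimates are roughly
# 2-3 times the true effect: the filter selects and amplifies noise.
```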
See also this earlier post on significance testing.
Sunday, March 3, 2019
Standard Errors for Things that Matter
Many times in applied / empirical seminars I have seen something like this:
The paper estimates a parameter vector b and dutifully reports asymptotic s.e.'s. But then the ultimate object of interest turns out not to be b, but rather some nonlinear but continuous function of its elements, say c = f(b). So the paper calculates and reports an estimate of c as c_hat = f(b_hat). Fine as far as consistency goes: c_hat is consistent if b_hat is, by the continuous mapping theorem. But then the paper forgets to calculate an asymptotic s.e. for c_hat.
So c is the object of interest, and hundreds, maybe thousands, of person-hours are devoted to producing a point estimate of c, but then no one remembers (cares?) to assess its estimation uncertainty. Geez. Of course one could use the delta method, simulation, etc.
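For the record, here is a minimal delta-method sketch. The function f, the estimate b_hat, and its covariance matrix are made up purely for illustration.

```python
# Delta-method sketch: if b_hat is asymptotically normal with covariance V_hat,
# then c_hat = f(b_hat) has asymptotic variance grad(f)' V_hat grad(f).
import numpy as np

def f(b):
    return b[0] / b[1]                 # example nonlinear object of interest

b_hat = np.array([1.2, 0.8])           # made-up point estimates
V_hat = np.array([[0.04, 0.01],
                  [0.01, 0.09]])       # made-up estimated asy. covariance of b_hat

# numerical (central-difference) gradient of f at b_hat
eps = 1e-6
grad = np.array([(f(b_hat + eps * np.eye(2)[j]) - f(b_hat - eps * np.eye(2)[j])) / (2 * eps)
                 for j in range(2)])

c_hat = f(b_hat)
se_c = np.sqrt(grad @ V_hat @ grad)    # delta-method standard error
print(f"c_hat = {c_hat:.3f}, s.e.(c_hat) = {se_c:.3f}")
```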