Saturday, March 23, 2019

Big Data in Dynamic Predictive Modeling

Our Journal of Econometrics issue, Big Data in Dynamic Predictive Econometric Modeling, is now in press.  It is partly based on a Penn conference, generously supported by Penn's Warren Center for Network and Data Sciences, University of Chicago's Stevanovich Center for Financial Mathematics, and Penn's Institute for Economic Research.  The intro is here and the paper list is here.

Monday, March 18, 2019

Alan Krueger RIP

Very sad to report that Alan Krueger has passed away.  He was a tremendously gifted empirical economist, with a fine feel for identifying issues that were truly important, and for designing novel and powerful empirical strategies to address them.

The Housing Risk Premium is Huge

Earlier I blogged on Jordà et al.'s fascinating paper, "The Rate of Return on Everything".  Now they're putting their rich dataset to good use.  Check out the new paper, NBER w.p. 25653.

The Total Risk Premium Puzzle
Òscar Jordà, Moritz Schularick, and Alan M. Taylor

Abstract:
The risk premium puzzle is worse than you think. Using a new database for the U.S. and 15 other advanced economies from 1870 to the present that includes housing as well as equity returns (to capture the full risky capital portfolio of the representative agent), standard calculations using returns to total wealth and consumption show that: housing returns in the long run are comparable to those of equities, and yet housing returns have lower volatility and lower covariance with consumption growth than equities. The same applies to a weighted total-wealth portfolio, and over a range of horizons. As a result, the implied risk aversion parameters for housing wealth and total wealth are even larger than those for equities, often by a factor of 2 or more. We find that more exotic models cannot resolve these even bigger puzzles, and we see little role for limited participation, idiosyncratic housing risk, transaction costs, or liquidity premiums. 

Friday, March 15, 2019

Neyman-Pearson Classification

Neyman-Pearson (NP) hypothesis testing insists on fixed asymptotic test size (5%, say) and then takes whatever power it can get. Bayesian hypothesis assessment, in contrast, treats type I and II errors symmetrically, with size approaching 0 and power approaching 1 asymptotically. 

Classification tends to parallel Bayesian hypothesis assessment, again treating type I and II errors symmetrically.  For example, I might do a logit regression and classify cases with fitted P(I=1)<1/2 as group 0 and cases with fitted P(I=1)>1/2 as group 1.  The classification threshold of 1/2 produces a "Bayes classifier".
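For concreteness, here is a minimal sketch of such a Bayes classifier in Python (the simulated data and variable names are mine, purely for illustration):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=(n, 2))
    p = 1 / (1 + np.exp(-(0.5 + x @ np.array([1.0, -2.0]))))
    y = rng.binomial(1, p)

    # Logit regression, then classify by comparing fitted P(y=1) to 1/2.
    logit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    p_hat = logit.predict(sm.add_constant(x))
    group = (p_hat > 0.5).astype(int)   # the "Bayes classifier" threshold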

Bayes classifiers seem natural, and in many applications they are.  But an interesting insight is that some classification problems may have hugely different costs of type I and II errors, in which case an NP classification approach may be entirely natural, not clumsy.  (Consider, for example, deciding whether to convict someone of a crime that carries a 50-year sentence.  Many people would view the cost of a false declaration of "guilty" as much greater than the cost of a false "innocent".) 

This leads to the idea and desirability of NP classifiers.  The issue is how to bound the type I classification error probability at some small chosen value.  Obviously it involves moving the classification threshold away from 1/2, but figuring out exactly what to do turns out to be a challenging problem.  Xin Tong and co-authors have made good progress (a small threshold-adjustment sketch appears after the list below).  Here are some of his papers (from his USC site):
  1. Chen, Y., Li, J.J., and Tong, X.* (2019) Neyman-Pearson criterion (NPC): a model selection criterion for asymmetric binary classification. arXiv:1903.05262.
  2. Tong, X., Xia, L., Wang, J., and Feng, Y. (2018) Neyman-Pearson classification: parametrics and power enhancement. arXiv:1802.02557v3.
  3. Xia, L., Zhao, R., Wu, Y., and Tong, X.* (2018) Intentional control of type I error over unconscious data distortion: a Neyman-Pearson approach to text classification. arXiv:1802.02558.
  4. Tong, X.*, Feng, Y. and Li, J.J. (2018) Neyman-Pearson (NP) classification algorithms and NP receiver operating characteristics (NP-ROC). Science Advances, 4(2):eaao1659.
  5. Zhao, A., Feng, Y., Wang, L., and Tong, X.* (2016) Neyman-Pearson classification under high-dimensional settings. Journal of Machine Learning Research, 17:1−39.
  6. Li, J.J. and Tong, X. (2016) Genomic applications of the Neyman-Pearson classification paradigm. Chapter in Big Data Analytics in Genomics. Springer (New York). DOI: 10.1007/978-3-319-41279-5; eBook ISBN: 978-3-319-41279-5.
  7. Tong, X.*, Feng, Y. and Zhao, A. (2016) A survey on Neyman-Pearson classification and suggestions for future research. Wiley Interdisciplinary Reviews: Computational Statistics, 8:64-81.
  8. Tong, X.* (2013). A plug-in approach to Neyman-Pearson classification. Journal of Machine Learning Research, 14:3011-3040.
  9. Rigollet, P. and Tong, X. (2011) Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12:2825-2849.
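To make the threshold-adjustment idea concrete, here is a minimal sketch continuing the logit example above: choose the cutoff as a high quantile of the class-0 fitted probabilities, so that the empirical type I error is bounded by a chosen alpha.  This only illustrates the idea, not the algorithms in the papers above (which, among other things, deliver high-probability rather than merely empirical control of the type I error), and the names are mine.

    import numpy as np

    def np_threshold(scores_class0, alpha=0.05):
        # Smallest cutoff such that at most an alpha fraction of the
        # class-0 scores exceed it (empirical type I error <= alpha).
        return np.quantile(scores_class0, 1 - alpha)

    # Ideally use a held-out sample of class-0 cases; in-sample here for brevity.
    t = np_threshold(p_hat[y == 0], alpha=0.05)
    group_np = (p_hat > t).astype(int)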

Machine Learning and Alternative Data for Predicting Economic Indicators

I discussed an interesting paper by Chen et al. today at the CRIW.  My slides are here.

Wednesday, March 6, 2019

Significance Testing as a Noise Amplifier

See this insightful post on why statistical significance testing is effectively a noise amplifier. I find it interesting along the lines of "something not usually conceptualized in terms of XX is revealed to be very much about XX".  In this case XX is noise amplification / reduction.  Like many good insights, it seems obvious ex post, but no one recognized it before the "eureka moment".

So significance testing is really a filter:  The input is data and the output is an accept/reject decision for some hypothesis.  But what a nonlinear, imprecisely defined filter -- we're a long way from the gain functions of simple linear filters in classical frequency-domain filter analysis!
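A quick simulation sketch of the noise-amplification point (the numbers are mine and purely illustrative): start with unbiased but noisy estimates of a small true effect, keep only the "significant" ones, and the survivors exaggerate the effect severalfold.

    import numpy as np

    rng = np.random.default_rng(0)
    true_effect, se, reps = 0.1, 0.5, 100_000      # noisy setting: effect << s.e.
    est = rng.normal(true_effect, se, reps)        # unbiased estimates across "studies"
    significant = np.abs(est / se) > 1.96          # two-sided 5% z-test

    print(est.mean())                # about 0.1: unconditionally unbiased
    print(est[significant].mean())   # several times larger: the filter amplifies noise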

See also this earlier post on significance testing.

Sunday, March 3, 2019

Standard Errors for Things that Matter

Many times in applied / empirical seminars I have seen something like this:

The paper estimates a parameter vector b and dutifully reports asymptotic s.e.'s.  But then the ultimate object of interest turns out not to be b, but rather some nonlinear but continuous function of the elements of b, say c = f(b). So the paper calculates and reports an estimate of c as c_hat = f(b_hat).  Fine, insofar as c_hat is consistent if b_hat is consistent.  But then the paper forgets to calculate an asymptotic s.e. for c_hat.

So c is the object of interest, and hundreds, maybe thousands, of person-hours are devoted to producing a point estimate of c, but then no one remembers (cares?) to assess its estimation uncertainty.  Geez.  Of course one could use the delta method, simulation, etc.
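For completeness, a minimal delta-method sketch (the function names and numbers are placeholders, not from any particular paper):

    import numpy as np

    def delta_method_se(f, b_hat, V_hat, eps=1e-6):
        # Asymptotic s.e. of c_hat = f(b_hat) given the estimated covariance
        # matrix V_hat of b_hat, using a numerical gradient of f at b_hat.
        b_hat = np.asarray(b_hat, dtype=float)
        grad = np.empty_like(b_hat)
        for i in range(b_hat.size):
            step = np.zeros_like(b_hat)
            step[i] = eps
            grad[i] = (f(b_hat + step) - f(b_hat - step)) / (2 * eps)
        return np.sqrt(grad @ V_hat @ grad)

    # Example: c = b1 / b2, a ratio of coefficients.
    b_hat = np.array([2.0, 4.0])
    V_hat = np.array([[0.04, 0.01], [0.01, 0.09]])
    se_c = delta_method_se(lambda b: b[0] / b[1], b_hat, V_hat)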


Monday, February 25, 2019

Big Data for 21st Century Economic Statistics


I earlier posted here when the call for papers was announced for the NBER's CRIW meeting on Big Data for 21st Century Economic Statistics. The wheels have been turning, and the meeting will soon transpire. The program is here, with links to papers. [For general info on the CRIW's impressive contributions over the decades, see here.]

Wednesday, February 20, 2019

Modified CRLB with Differential Privacy

It turns out that with differential privacy the Cramér-Rao lower bound (CRLB) is not achievable (too bad for MLE), but you can figure out what *is* achievable, and find estimators that do the trick. (See the interesting talk here by Feng Ruan, and the associated papers on his web site.) The key point is that estimation efficiency is degraded by privacy. The new frontier seems to me to be this: Let's go beyond stark "privacy" or "no privacy" situations, because in reality there is a spectrum of "epsilon-strengths" of "epsilon-differential" privacy.  (Right?)  Then there is a tension: I like privacy, but I also like estimation efficiency, and the two trade off against each other. So there is a choice to be made, and the optimum depends on preferences.
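One way to see the tension, in a toy setting that has nothing to do with Ruan's estimators: release a sample mean through the standard Laplace mechanism, and watch the variance of the released estimate blow up as the privacy budget epsilon shrinks.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, size=1_000)          # data bounded in [0, 1]
    sensitivity = 1.0 / x.size                 # max change in the mean from one record

    def private_mean(x, epsilon):
        # epsilon-differentially private mean via the Laplace mechanism
        return x.mean() + rng.laplace(scale=sensitivity / epsilon)

    for epsilon in [0.01, 0.1, 1.0, 10.0]:
        draws = [private_mean(x, epsilon) for _ in range(10_000)]
        print(epsilon, np.var(draws))          # more privacy (small epsilon), more variance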

Tuesday, February 19, 2019

Berk-Nash Equilibrium and Pseudo MLE

The Berk-White statistics/econometrics tradition is alive and well, appearing now as Berk-Nash equilibrium in cutting-edge economic theory. See for example Kevin He's Harvard job-market paper here and the references therein, and the slides from yesterday's lunch talk by my Penn colleague Yuichi Yamamoto. But the connection between the Berk-Nash equilibrium of economic theory and the KLIC-minimizing pseudo-MLE of econometric theory is under-developed. When the Berk-Nash people get better acquainted with the Berk-White people, good things may happen. Effectively Yuichi is pushing in that direction, working toward characterizing the long-run behavior of likelihood maximizers rather than beliefs.
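A toy illustration of the Berk-White side of the connection (simulated, with numbers of my choosing): fit a misspecified exponential model to lognormal data, and the pseudo-MLE converges to the KLIC-minimizing pseudo-true parameter, here lambda* = 1/E[X].

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)   # true DGP: lognormal
    lambda_mle = 1.0 / x.mean()             # MLE of the misspecified exponential model
    lambda_star = 1.0 / np.exp(0.5)         # KLIC minimizer: 1 / E[X], E[X] = exp(1/2)
    print(lambda_mle, lambda_star)          # close for large n, as Berk-White predicts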