Monday, May 20, 2019

Climate Change Heterogeneity

One can only go so far in climate econometrics studying time series like the proverbial "global average temperature", just as one can only go so far in macroeconomics with the proverbial "representative agent".  Disaggregation will be key to additional progress, as different people in different places experience different climate "treatments" and different economic outcomes.  The impressive new paper below begins to confront the massive tasks of data collection, manipulation, analysis, and visualization, in the context of a disaggregated analysis of the effects of temperature change on aggregate output.

"Climatic Constraints on Aggregate Economic Output", by Marshall Burke and Vincent Tanutama, NBER Working Paper No. 25779, 2019.

Abstract:  Efficient responses to climate change require accurate estimates of both aggregate damages and where and to whom they occur. While specific case studies and simulations have suggested that climate change disproportionately affects the poor, large-scale direct evidence of the magnitude and origins of this disparity is lacking. Similarly, evidence on aggregate damages, which is a central input into the evaluation of mitigation policy, often relies on country-level data whose accuracy has been questioned. Here we assemble longitudinal data on economic output from over 11,000 districts across 37 countries, including previously nondigitized sources in multiple languages, to assess both the aggregate and distributional impacts of warming temperatures. We find that local-level growth in aggregate output responds non-linearly to temperature across all regions, with output peaking at cooler temperatures (<10°C) than estimated in earlier country analyses and declining steeply thereafter. Long difference estimates of the impact of longer-term (decadal) trends in temperature on income are larger than estimates from an annual panel model, providing additional evidence for growth effects. Impacts of a given temperature exposure do not vary meaningfully between rich and poor regions, but exposure to damaging temperatures is much more common in poor regions. These results indicate that additional warming will exacerbate inequality, particularly across countries, and that economic development alone will be unlikely to reduce damages, as commonly hypothesized. We estimate that since 2000, warming has already cost both the US and the EU at least $4 trillion in lost output, and tropical countries are >5% poorer than they would have been without this warming.

Monday, May 13, 2019

Understanding the Bad News for IV Estimation

In an earlier post I discussed Alwyn Young's bad news for IV estimation, obtained by Monte Carlo. Immediately thereafter, Narayana Kocherlakota sent his new paper, "A Near-Exact Finite Sample Theory for an Instrumental Variable Estimator", which provides complementary analytic insights. Really nice stuff.





Monday, April 15, 2019

Hedging Realized vs. Expected Volatility

Not all conferences can be above average, let alone in the extreme right tail of the distribution, so it's wonderful when it happens, as with last week's AP conference. Fine papers all -- timely, thought provoking, and empirically sophisticated.  Thanks to Jan Eberly and Konstantin Milbradt for assembling the program, here (including links to papers). 

I keep thinking about the Dew-Becker-Giglio-Kelly paper. For returns r, they produce evidence that (1) investors are willing to pay a lot to insure against movements in realized volatility, r^2_{t}, but (2) investors are not willing to pay to insure against movements in expected future realized volatility (conditional variance), E(r^2_{t+1} | I_t). On the one hand, as a realized volatility guy I'm really intrigued by (1). On the other hand, it seems hard to reconcile (1) and (2), a concern that was raised at the meeting. On the third hand, maybe it's not so hard.  Hmmm...
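To keep the two objects straight, here's a minimal simulated sketch (my own toy, with made-up names and numbers, not the paper's methodology): the ex-post realized variance versus a simple forecast of next period's realized variance.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy illustration of the distinction (simulated daily returns, not real data).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2000-01-01", "2019-12-31")
r = pd.Series(0.01 * rng.standard_normal(len(dates)), index=dates)

# (1) Realized variance: the ex-post object, squared returns summed within each month.
rv = (r ** 2).groupby(r.index.to_period("M")).sum()

# (2) Expected future realized variance: a simple AR(1) forecast of RV_{t+1},
#     standing in for the conditional variance E(RV_{t+1} | I_t).
y = rv.shift(-1).dropna()                 # RV_{t+1}
X = sm.add_constant(rv.loc[y.index])      # RV_t, the forecaster's information
fit = sm.OLS(y.values, X.values).fit()
expected_rv = fit.predict(X.values)       # fitted E(RV_{t+1} | RV_t)

print(fit.params)                         # intercept and persistence of realized variance
print("last forecast of next-month RV:", expected_rv[-1])
```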

Wednesday, April 10, 2019

Bad News for IV Estimation

Alwyn Young has an eye-opening recent paper, "Consistency without Inference: Instrumental Variables in Practical Application".  There's a lot going on worth thinking about in his Monte Carlo:  OLS vs. IV; robust/clustered s.e.'s vs. not; testing/accounting for weak instruments vs. not; jackknife/bootstrap vs. "conventional" inference; etc.  IV as typically implemented comes up looking, well, dubious.

Alwyn's related analysis of published studies is even more striking.  He shows that, in a sample of 1359 IV regressions in 31 papers published in the journals of the American Economic Association,
"... statistically significant IV results generally depend upon only one or two observations or clusters, excluded instruments often appear to be irrelevant, there is little statistical evidence that OLS is actually substantively biased, and IV confidence intervals almost always include OLS point estimates." 
Wow.

Perhaps the high-leverage finding is Alwyn's most striking result, particularly as many empirical economists seem to have skipped class on the day when leverage assessment was taught.  Decades ago, Marjorie Flavin attempted some remedial education in her 1991 paper, "The Joint Consumption/Asset Demand Decision: A Case Study in Robust Estimation".  She concluded that
"Compared to the conventional results, the robust instrumental variables estimates are more stable across different subsamples, more consistent with the theoretical specification of the model, and indicate that some of the most striking findings in the conventional results were attributable to a single, highly unusual observation." 
Sound familiar?  The non-robustness of conventional IV seems disturbingly robust, from Flavin to Young.
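Here's a toy simulation of the leverage point (my own illustration, not Young's design): in a just-identified IV regression, a single extreme instrument value can carry much of the weight behind the estimate.

```python
import numpy as np

# In just-identified IV the estimate is cov(z, y) / cov(z, x), and one
# high-leverage observation in the instrument can dominate both covariances.
rng = np.random.default_rng(1)
n = 200
z = rng.standard_normal(n)
z[0] = 12.0                                      # one extreme instrument value
u = rng.standard_normal(n)
x = 0.2 * z + u + 0.5 * rng.standard_normal(n)   # weak-ish first stage; x endogenous via u
y = 1.0 * x + u                                  # true coefficient is 1

iv = lambda y, x, z: np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
h = (z - z.mean()) ** 2 / ((z - z.mean()) ** 2).sum()   # first-stage leverage

print("leverage of obs 0       :", h[0])          # one point carries much of the weight
print("IV estimate, full sample:", iv(y, x, z))
print("IV estimate, drop obs 0 :", iv(y[1:], x[1:], z[1:]))
```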

Flavin's paper evidently fell on deaf ears and remains unpublished. Hopefully Young's will not meet the same fate.

Monday, April 8, 2019

Identification via the ZLB and More

Sophocles Mavroeidis at Oxford has a very nice paper on using the nominal interest rate zero lower bound (ZLB) to identify VARs.  Effectively, hitting the ZLB is a form of (endogenous) structural change that can be exploited for identification.  He has results showing whether/when one has point identification, set identification, or no identification. Really good stuff.

An interesting question is whether there may be SETS of bounds that can be hit. Suppose so, and suppose that we don't know whether/when they'll be hit, but we do know that if/when one bound is hit, all bounds are hit. An example might be nominal short rates in two countries with tightly-integrated money markets.

Now recall the literature on testing for multivariate structural change, which reveals large power increases in such situations (Bai, Lumsdaine and Stock). In Sophocles' case, it suggests the potential for greatly sharpened set ID.  Of course it all depends on the truth/relevance of my supposition...




Friday, April 5, 2019

Inference with Social Network Dependence

I'm running behind as usual. I meant to post this right after the seminar, about two weeks ago.  Really interesting stuff -- spatial correlation due to network dependence.  A Google search will find the associated paper(s) instantly. Again, really good stuff.  BUT I would humbly suggest that the biostat people need to read more econometrics. A good start is this survey (itself four years old, and itself a distillation for practitioners of basic insights known and published decades ago). The cool question moving forward is whether/when/how network structure can be used to determine/inform clustering.


Elizabeth L. Ogburn
Department of Biostatistics
Johns Hopkins University

Social Network Dependence, the Replication Crisis, and (In)valid Inference

Abstract:
In the first part of this talk, I will show that social network structure can result in a new kind of structural confounding, confounding by network structure, potentially contributing to replication crises across the health and social sciences.  Researchers in these fields frequently sample subjects from one or a small number of communities, schools, hospitals, etc., and while many of the limitations of such convenience samples are well-known, the issue of statistical dependence due to social network ties has not previously been addressed. A paradigmatic example of this is the Framingham Heart Study (FHS). Using a statistic that we adapted to measure network dependence, we test for network dependence and for possible confounding by network structure in several of the thousands of influential papers published using FHS data. Results suggest that some of the many decades of research on coronary heart disease, other health outcomes, and peer influence using FHS data may be biased (away from the null) and anticonservative due to unacknowledged network structure.

But data with network dependence abounds, and in many settings researchers are explicitly interested in learning about social network dynamics.  Therefore, there is high demand for methods for causal and statistical inference with social network data. In the second part of the talk, I will describe recent work on causal inference for observational data from a single social network, focusing on (1) new types of causal estimands that are of interest in social network settings, and (2) conditions under which central limit theorems hold and inference based on approximate normality is licensed.
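For readers who want to see the sort of statistic involved, here's a generic Moran's-I-style check for network dependence on a simulated network (my sketch on hypothetical data, not the authors' implementation):

```python
import numpy as np
import networkx as nx

# Generic Moran's I on a network adjacency matrix: the flavor of dependence
# statistic at issue, computed on simulated outcomes.
def morans_i(y, W):
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    return (len(y) / W.sum()) * (d @ W @ d) / (d @ d)

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=0)   # a small-world "community"
W = nx.to_numpy_array(G)

y_indep = rng.standard_normal(500)                       # no network dependence
y_dep = y_indep + W @ y_indep / W.sum(axis=1)            # outcome correlated with neighbors

print("Moran's I, independent outcome:", morans_i(y_indep, W))  # ~ -1/(n-1), near 0
print("Moran's I, dependent outcome  :", morans_i(y_dep, W))    # clearly positive
```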

Monday, March 25, 2019

Ensemble Methods for Causal Prediction

Great to see ensemble learning methods (i.e., forecast combination) moving into areas of econometrics beyond time series / macro-econometrics, where they have thrived ever since Bates and Granger (1969), generating a massive and vibrant literature.  (For a recent contribution, including historical references, see Diebold and Shin, 2019.)  In particular, the micro-econometric / panel / causal literature is coming on board.  See for example this new and interesting paper by Susan Athey et al.
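For the uninitiated, the Bates-Granger idea fits in a few lines: choose combination weights to minimize the variance of the combined forecast error. Here's a minimal sketch on made-up forecast errors (equal weights remain the famously hard-to-beat benchmark):

```python
import numpy as np

# Classic forecast combination a la Bates-Granger (1969):
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1), with Sigma estimated from past forecast errors.
def combination_weights(errors):
    # errors: T x k matrix of past forecast errors from k forecasters
    Sigma = np.cov(errors, rowvar=False)
    ones = np.ones(errors.shape[1])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

# toy data: two correlated forecasters, the second one noisier
rng = np.random.default_rng(0)
e = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 2.0]], size=200)

print("estimated weights:", combination_weights(e))  # more weight on the less noisy forecast
print("equal weights    :", [0.5, 0.5])              # the simple benchmark
```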


Monday, March 18, 2019

Alan Krueger RIP

Very sad to report that Alan Krueger has passed away.  He was a tremendously gifted empirical economist, with a fine feel for identifying issues that were truly important, and for designing novel and powerful empirical strategies to address them.

The Housing Risk Premium is Huge

Earlier I blogged on Jorda et al.'s fascinating paper, "The Rate of Return on Everything".  Now they're putting their rich dataset to good use.  Check out the new paper, NBER w.p. 25653.

The Total Risk Premium Puzzle
Òscar Jordà, Moritz Schularick, and Alan M. Taylor

Abstract:
The risk premium puzzle is worse than you think. Using a new database for the U.S. and 15 other advanced economies from 1870 to the present that includes housing as well as equity returns (to capture the full risky capital portfolio of the representative agent), standard calculations using returns to total wealth and consumption show that: housing returns in the long run are comparable to those of equities, and yet housing returns have lower volatility and lower covariance with consumption growth than equities. The same applies to a weighted total-wealth portfolio, and over a range of horizons. As a result, the implied risk aversion parameters for housing wealth and total wealth are even larger than those for equities, often by a factor of 2 or more. We find that more exotic models cannot resolve these even bigger puzzles, and we see little role for limited participation, idiosyncratic housing risk, transaction costs, or liquidity premiums. 
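The mechanics behind "even larger implied risk aversion" follow the textbook consumption-CAPM calculation (a back-of-the-envelope sketch of the standard logic, not the paper's exact computation):

```latex
% Textbook CCAPM sketch: with CRRA utility and (approximate) lognormality,
% the excess return on any risky asset i satisfies
\[
  \mathbb{E}[r^{e}_{i}] \;\approx\; \gamma\,\mathrm{Cov}(r^{e}_{i}, \Delta c),
  \qquad\text{so}\qquad
  \hat{\gamma}_{i} \;=\; \frac{\mathbb{E}[r^{e}_{i}]}{\mathrm{Cov}(r^{e}_{i}, \Delta c)} .
\]
% Housing delivers equity-like premia with lower covariance with consumption growth,
% so the implied \(\gamma\) for housing, and for the total-wealth portfolio, is even larger.
```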

Friday, March 15, 2019

Neyman-Pearson Classification

Neyman-Pearson (NP) hypothesis testing insists on fixed asymptotic test size (5%, say) and then takes whatever power it can get. Bayesian hypothesis assessment, in contrast, treats type I and II errors symmetrically, with size approaching 0 and power approaching 1 asymptotically. 

Classification tends to parallel Bayesian hypothesis assessment, again treating type I and II errors symmetrically.  For example, I might do a logit regression and classify cases with fitted P(I=1)<1/2 as group 0 and cases with fitted P(I=1)>1/2 as group 1.  The classification threshold of 1/2 produces a "Bayes classifier".

Bayes classifiers seem natural, and in many applications they are.  But an interesting insight is that some classification problems may have hugely different costs of type I and II errors, in which case an NP classification approach may be entirely natural, not clumsy.  (Consider, for example, deciding whether to convict someone of a crime that carries the death penalty.  Many people would view the cost of a false declaration of "guilty" as much greater than the cost of a false "innocent".) 

This leads to the idea and desirability of NP classifiers.  The issue is how to bound the type I classification error probability at some small chosen value.  Obviously it involves moving the classification threshold away from 1/2, but figuring out exactly what to do turns out to be a challenging problem.  Xin Tong and co-authors have made good progress; a toy sketch of the threshold idea appears after the reference list below.  Here are some of his papers (from his USC site):
  1. Chen, Y., Li, J.J., and Tong, X.* (2019) Neyman-Pearson criterion (NPC): a model selection criterion for asymmetric binary classification. arXiv:1903.05262.
  2. Tong, X., Xia, L., Wang, J., and Feng, Y. (2018) Neyman-Pearson classification: parametrics and power enhancement. arXiv:1802.02557v3.
  3. Xia, L., Zhao, R., Wu, Y., and Tong, X.* (2018) Intentional control of type I error over unconscious data distortion: a Neyman-Pearson approach to text classification. arXiv:1802.02558.
  4. Tong, X.*, Feng, Y. and Li, J.J. (2018) Neyman-Pearson (NP) classification algorithms and NP receiver operating characteristics (NP-ROC). Science Advances, 4(2):eaao1659.
  5. Zhao, A., Feng, Y., Wang, L., and Tong, X.* (2016) Neyman-Pearson classification under high-dimensional settings. Journal of Machine Learning Research, 17:1−39.
  6. Li, J.J. and Tong, X. (2016) Genomic applications of the Neyman-Pearson classification paradigm. Chapter in Big Data Analytics in Genomics. Springer (New York). DOI: 10.1007/978-3-319-41279-5; eBook ISBN: 978-3-319-41279-5.
  7. Tong, X.*, Feng, Y. and Zhao, A. (2016) A survey on Neyman-Pearson classification and suggestions for future research. Wiley Interdisciplinary Reviews: Computational Statistics, 8:64-81.
  8. Tong, X.* (2013). A plug-in approach to Neyman-Pearson classification. Journal of Machine Learning Research, 14:3011-3040.
  9. Rigollet, P. and Tong, X. (2011) Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12:2825-2849.
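
As promised, here's a toy sketch of the basic threshold idea (my illustration, not the NP umbrella algorithm in the papers above): fit any scorer, then choose the classification threshold from held-out class-0 scores so that the empirical type I error is at most alpha.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Neyman-Pearson-flavored classification: bound the type I error (misclassifying
# class 0 as class 1) at alpha by moving the threshold away from the Bayes value 1/2.
rng = np.random.default_rng(0)
n = 4000
X = rng.standard_normal((n, 2))
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, p)

train, hold = np.arange(n // 2), np.arange(n // 2, n)
clf = LogisticRegression().fit(X[train], y[train])

alpha = 0.05
scores0 = clf.predict_proba(X[hold][y[hold] == 0])[:, 1]   # class-0 scores on held-out data
threshold = np.quantile(scores0, 1 - alpha)                 # instead of the Bayes threshold 1/2

yhat = (clf.predict_proba(X)[:, 1] > threshold).astype(int)
print("type I error:", yhat[y == 0].mean())   # close to alpha by construction
print("power       :", yhat[y == 1].mean())
```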

Machine Learning and Alternative Data for Predicting Economic Indicators

I discussed an interesting paper by Chen et al. today at the CRIW.  My slides are here.

Wednesday, March 6, 2019

Significance Testing as a Noise Amplifier

See this insightful post on why statistical significance testing is effectively a noise amplifier. I find it interesting along the lines of "something not usually conceptualized in terms of XX is revealed to be very much about XX".  In this case XX is noise amplification / reduction.  Like many good insights, it seems obvious ex post, but no one recognized it before the "eureka moment".

So significance testing is really a filter:  The input is data and the output is an accept/reject decision for some hypothesis.  But what a nonlinear, imprecisely defined filter -- we're a long way from the gain functions of simple linear filters in classical frequency-domain filter analysis!
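A few lines of simulation (my own toy, not from the linked post) make the amplification concrete: when power is low, estimates that survive the significance filter systematically exaggerate the true effect.

```python
import numpy as np

# Condition on statistical significance and the surviving estimates are inflated.
rng = np.random.default_rng(0)
true_effect, se, n_sims = 0.1, 0.1, 100_000
est = true_effect + se * rng.standard_normal(n_sims)   # estimate = truth + sampling noise
significant = np.abs(est / se) > 1.96

print("power                      :", significant.mean())            # roughly 0.17
print("mean estimate, all draws   :", est.mean())                    # ~ 0.1, unbiased
print("mean estimate, significant :", est[significant].mean())       # ~ 0.24, more than double the truth
print("share of wrong-signed sig. :", (est[significant] < 0).mean())
```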

See also this earlier post on significance testing.

Sunday, March 3, 2019

Standard Errors for Things that Matter

Many times in applied / empirical seminars I have seen something like this:

The paper estimates a parameter vector b and dutifully reports asymptotic s.e.'s.  But then the ultimate object of interest turns out not to be b, but rather some nonlinear but continuous function of the elements of b, say c = f(b). So the paper calculates and reports an estimate of c as c_hat = f(b_hat).  Fine, insofar as c_hat is consistent if b_hat is consistent (by the continuous mapping theorem).  But then the paper forgets to calculate an asymptotic s.e. for c_hat.

So c is the object of interest, and hundreds, maybe thousands, of person-hours are devoted to producing a point estimate of c, but then no one remembers (cares?) to assess its estimation uncertainty.  Geez.  Of course one could do delta method, simulation, etc.
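For the record, the delta-method fix takes about ten lines. Here's a generic sketch (hypothetical numbers, with the object of interest taken to be a ratio of coefficients):

```python
import numpy as np

# Delta-method s.e. for c_hat = f(b_hat): with b_hat asymptotically normal with
# covariance V, c_hat has asymptotic variance grad_f(b_hat)' V grad_f(b_hat).
def delta_method_se(f, b_hat, V, eps=1e-6):
    b_hat = np.asarray(b_hat, dtype=float)
    grad = np.array([
        (f(b_hat + eps * e) - f(b_hat - eps * e)) / (2 * eps)   # central-difference gradient
        for e in np.eye(len(b_hat))
    ])
    return np.sqrt(grad @ V @ grad)

# example: the object of interest is a ratio of coefficients, c = b1 / b2
b_hat = np.array([1.2, 0.4])
V = np.array([[0.04, 0.01], [0.01, 0.02]])
f = lambda b: b[0] / b[1]
print("c_hat:", f(b_hat), " s.e.:", delta_method_se(f, b_hat, V))
```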

Monday, February 25, 2019

Big Data for 21st Century Economic Statistics


I earlier posted here when the call for papers was announced for the NBER's CRIW meeting on Big Data for 21st Century Economic Statistics. The wheels have been turning, and the meeting will soon transpire. The program is here, with links to papers. [For general info on the CRIW's impressive contributions over the decades, see here.]

Wednesday, February 20, 2019

Modified CRLB with Differential Privacy

It turns out that with differential privacy the Cramer-Rao lower bound (CRLB) is not achievable (too bad for MLE), but you can figure out what *is* achievable, and find estimators that do the trick. (See the interesting talk here by Feng Ruan, and the associated papers on his web site.) The key point is that estimation efficiency is degraded by privacy. The new frontier seems to me to be this: Let's go beyond stark "privacy" or "no privacy" situations, because in reality there is a spectrum of "epsilon-strengths" of "epsilon-differential" privacy.  (Right?)  Then there is a tension: I like privacy, but I also like estimation efficiency, and the two trade off against each other. So there is a choice to be made, and the optimum depends on preferences.
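To see the tension in its simplest form, here's a toy illustration using the standard Laplace mechanism (my sketch, not the estimators in the talk or papers): for the mean of n observations in [0,1], epsilon-differential privacy can be achieved by adding Laplace noise with scale 1/(n*epsilon), so the efficiency loss grows as epsilon (the privacy loss) shrinks.

```python
import numpy as np

# Privacy/efficiency tradeoff via the Laplace mechanism: the mean of n values in
# [0,1] has sensitivity 1/n, so epsilon-DP adds Laplace(1/(n*epsilon)) noise,
# with extra variance 2/(n*epsilon)^2.
rng = np.random.default_rng(0)
x = rng.uniform(size=1000)
n = len(x)

for eps in [10.0, 1.0, 0.1, 0.01]:
    scale = 1.0 / (n * eps)                        # noise scale for epsilon-DP
    private_mean = x.mean() + rng.laplace(scale=scale)
    print(f"eps={eps:5}:  private mean = {private_mean: .4f},  "
          f"added std = {np.sqrt(2) * scale:.4f}")  # efficiency loss grows as eps shrinks
```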

Tuesday, February 19, 2019

Berk-Nash Equilibrium and Pseudo MLE

The Berk-White statistics/econometrics tradition is alive and well, appearing now as Berk-Nash equilibrium in cutting-edge economic theory. See for example Kevin He's Harvard job-market paper here and the references therein, and the slides from yesterday's lunch talk by my Penn colleague Yuichi Yamamoto. But the connection between the Berk-Nash equilibrium of economic theory and the KLIC-minimizing pseudo-MLE of econometric theory is under-developed. When the Berk-Nash people get better acquainted with the Berk-White people, good things may happen. Effectively Yuichi is pushing in that direction, working toward characterizing long-run behavior of likelihood maximizers rather than beliefs.
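The econometric object in question, in a line (standard White-style notation; my gloss, not Yuichi's):

```latex
% The pseudo-true parameter of quasi-MLE (Berk, White): under misspecification the
% likelihood maximizer converges to the KLIC minimizer, the same kind of object
% to which Berk-Nash equilibrium ties beliefs.
\[
  \theta^{*}
  \;=\; \arg\min_{\theta}\; \mathrm{KLIC}\!\left(p_0 \,\|\, f_\theta\right)
  \;=\; \arg\min_{\theta}\; \mathbb{E}_{p_0}\!\left[\log\frac{p_0(y)}{f_\theta(y)}\right]
  \;=\; \arg\max_{\theta}\; \mathbb{E}_{p_0}\!\left[\log f_\theta(y)\right].
\]
```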

Sunday, January 27, 2019

Mixed-Frequency Big Data

Of course I have blogged on this earlier, e.g. here and here, and I am a fan. The latest is Andreou, Gagliardini, Ghysels, and Rubin, "Inference in Group Factor Models with an Application to Mixed Frequency Data". The latest revision, available here, is now forthcoming in Econometrica.

Friday, January 25, 2019

Score-Driven and Nonlinear Time-Series Models

Check out the upcoming conference here.  Definitely worth reading through the program.  Earlier related post here.




Network Data and Machine Learning

This just arrived, announcing an upcoming conference on the ML/networks interface.  It's definitely worth reading through the synopsis and topics and titles and authors.

"An exciting workshop on Machine Learning for Network Data is taking place at New York University on January 29. The event will discuss emerging challenges on generalizing the successes of image and speech processing to information domains with irregular structure. The workshop includes highlight talks by Yann LeCun and Brian Sadler as well as short talks by a collection of national leaders in the development of machine learning techniques for processing network data. The event is free to attend and open to the public but registration is required because of space limitations. Please visit the workshop site to access the registration form."

Monday, January 21, 2019

Machine Learning for Economists

My Penn colleague Jesus Fernandez-Villaverde has a nice slide deck here.  He asked me to warn you that this is a highly preliminary version (0.1!), and to thank, without implicating, Stephen Hansen, as the deck draws on joint work.

Monday, January 7, 2019

Papers of the Moment

Happy New Year!

I was surprised at the interest generated when I last listed a few new intriguing working papers that I'm reading and enjoying.  Maybe another such posting is a good way to start the new year.  Here are three:

Understanding Regressions with Observations Collected at High Frequency over Long Span

Yoosoon Chang, Ye Lu, and Joon Y. Park

Abstract:
In this paper, we analyze regressions with observations collected at small time interval over long period of time. For the formal asymptotic analysis, we assume that samples are obtained from continuous time stochastic processes, and let the sampling interval δ shrink down to zero and the sample span T increase up to infinity. In this setup, we show that the standard Wald statistic diverges to infinity and the regression becomes spurious as long as δ → 0 sufficiently fast relative to T → ∞. Such a phenomenon is indeed what is frequently observed in practice for the type of regressions considered in the paper. In contrast, our asymptotic theory predicts that the spuriousness disappears if we use the robust version of the Wald test with an appropriate longrun variance estimate. This is supported, strongly and unambiguously, by our empirical illustration.

http://d.repec.org/n?u=RePEc:syd:wpaper:2018-10&r=ets
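
The fix the abstract points to is a Wald/t statistic studentized by a long-run (HAC) variance estimate. Here's a generic sketch of that comparison on simulated data (my illustration, not the paper's estimator):

```python
import numpy as np
import statsmodels.api as sm

# Persistent regressor plus strongly autocorrelated errors: conventional (iid)
# standard errors overstate precision, while HAC standard errors use a long-run
# variance estimate, which is the "robust Wald" idea.
rng = np.random.default_rng(0)
T = 2000
x = 0.05 * np.cumsum(rng.standard_normal(T))     # persistent regressor
u = np.empty(T)
u[0] = rng.standard_normal()
for t in range(1, T):                            # AR(1) errors with coefficient 0.9
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()
y = 1.0 + 0.0 * x + u                            # x is in fact irrelevant

X = sm.add_constant(x)
conventional = sm.OLS(y, X).fit()                                    # iid-based s.e.'s
robust = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 20})  # long-run variance

print("t-stat on x, conventional:", conventional.tvalues[1])
print("t-stat on x, HAC         :", robust.tvalues[1])
```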

-------------

Equity Concerns are Narrowly Framed

Christine L. Exley and Judd B. Kessler

Abstract:
We show that individuals narrowly bracket their equity concerns. Across four experiments including 1,600 subjects, individuals equalize components of payoffs rather than overall payoffs. When earnings are comprised of "small tokens" worth 1 cent and "large tokens" worth 2 cents, subjects frequently equalize the distribution of small (or large) tokens rather than equalizing total earnings. When payoffs are comprised of time and money, subjects similarly equalize the distribution of time (or money) rather than total payoffs. In addition, subjects are more likely to equalize time than money. These findings can help explain a variety of behavioral phenomena including the structure of social insurance programs, patterns of public good provision, and why transactions that turn money into time are often deemed repugnant. 

https://www.nber.org/papers/w25326?utm_campaign=ntwh&utm_medium=email&utm_source=ntwg9

----------------------

Shackling the Identification Police?

Christopher J. Ruhm

Abstract:
This paper examines potential tradeoffs between research methods in answering important questions versus providing more cleanly identified estimates on problems that are potentially of lesser interest. The strengths and limitations of experimental and quasi-experimental methods are discussed and it is postulated that confidence in the results obtained may sometimes be overvalued compared to the importance of the topics addressed. The consequences of this are modeled and several suggestions are provided regarding possible steps to encourage greater focus on questions of fundamental importance. 

http://papers.nber.org/tmp/51337-w25320.pdf