Monday, January 7, 2019

Papers of the Moment

Happy New Year!

I was surprised by the interest generated when I last listed a few intriguing new working papers that I'm reading and enjoying.  Maybe another such posting is a good way to start the new year.  Here are three:

Understanding Regressions with Observations Collected at High Frequency over Long Span

Chang, Yoosoon; Lu, Ye; Park, Joon Y.

Abstract:
In this paper, we analyze regressions with observations collected at small time interval over long period of time. For the formal asymptotic analysis, we assume that samples are obtained from continuous time stochastic processes, and let the sampling interval δ shrink down to zero and the sample span T increase up to infinity. In this setup, we show that the standard Wald statistic diverges to infinity and the regression becomes spurious as long as δ → 0 sufficiently fast relative to T → ∞. Such a phenomenon is indeed what is frequently observed in practice for the type of regressions considered in the paper. In contrast, our asymptotic theory predicts that the spuriousness disappears if we use the robust version of the Wald test with an appropriate long-run variance estimate. This is supported, strongly and unambiguously, by our empirical illustration.

http://d.repec.org/n?u=RePEc:syd:wpaper:2018-10&r=ets

-------------

Equity Concerns are Narrowly Framed

Christine L. Exley and Judd B. Kessler

Abstract:
We show that individuals narrowly bracket their equity concerns. Across four experiments including 1,600 subjects, individuals equalize components of payoffs rather than overall payoffs. When earnings are comprised of "small tokens" worth 1 cent and "large tokens" worth 2 cents, subjects frequently equalize the distribution of small (or large) tokens rather than equalizing total earnings. When payoffs are comprised of time and money, subjects similarly equalize the distribution of time (or money) rather than total payoffs. In addition, subjects are more likely to equalize time than money. These findings can help explain a variety of behavioral phenomena including the structure of social insurance programs, patterns of public good provision, and why transactions that turn money into time are often deemed repugnant. 

https://www.nber.org/papers/w25326?utm_campaign=ntwh&utm_medium=email&utm_source=ntwg9

----------------------

Shackling the Identification Police?

Christopher J. Ruhm

Abstract:
This paper examines potential tradeoffs between research methods in answering important questions versus providing more cleanly identified estimates on problems that are potentially of lesser interest. The strengths and limitations of experimental and quasi-experimental methods are discussed and it is postulated that confidence in the results obtained may sometimes be overvalued compared to the importance of the topics addressed. The consequences of this are modeled and several suggestions are provided regarding possible steps to encourage greater focus on questions of fundamental importance. 

http://papers.nber.org/tmp/51337-w25320.pdf

Friday, December 21, 2018

Holiday Haze

Happy holidays! 

Your blogger is about to vanish, returning in the new year. Many thanks for your past, present, and future support. 

If you're at ASSA Atlanta, I hope you'll come to the Penn Economics and Finance parties.

Sunday, December 16, 2018

Causality as Robust Prediction

I like thinking about causal estimation as a type of prediction (e.g., here). Here's a very nice slide deck from Peter Buhlmann at ETH Zurich detailing his group's recent and ongoing work in that tradition.

Thursday, December 13, 2018

More on Google Dataset Search

Some months ago I blogged on Google's development of a new dataset search tool.  Evidently it's coming along.  Check out the beta version here. Also, on the dataset supply side as opposed to the demand side, see here for how to maximize the visibility of your datasets to the search engine.

[With thanks to the IIF's Oracle newsletter for alerting me.]


Monday, December 10, 2018

Greater New York Area Econometrics Colloquium

Last week's 13th annual Greater New York Area Econometrics Colloquium, generously hosted by Princeton, was a great success, with strong papers throughout. The program is below. I found two papers especially interesting. I already blogged on Spady and Stouli's “Simultaneous Mean-Variance Regression”. The other was "Nonparametric Sample Splitting", by Lee and Wang.

Think of a nonlinear classification problem. In general the decision boundary is of course a highly nonlinear surface, but it's a supervised learning situation, so it's "easy" to learn the surface using standard nonlinear regression methods. Lee and Wang, in contrast, study an unsupervised learning situation, effectively a threshold regression model, where the threshold is determined by an unknown nonparametric relation. And they have very cool applications to things like estimating effective economic borders, gerrymandering, etc. 
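To fix ideas, here is a toy simulation of the kind of model involved. This is my own invented example (the border function, variable names, and all numbers are made up), not Lee and Wang's estimator; it just shows why the observed-regime case is easy and the unobserved-regime case is hard.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
s1 = rng.uniform(0, 1, n)                        # e.g., longitude
s2 = rng.uniform(0, 1, n)                        # e.g., latitude
g = lambda s: 0.5 + 0.3 * np.sin(2 * np.pi * s)  # the unknown nonlinear border
regime = s2 < g(s1)                              # unobserved in the Lee-Wang setup

x = rng.normal(size=n)
y = (1.0 + 2.0 * regime) * x + 0.1 * rng.normal(size=n)  # slope 1 vs. slope 3

# If the regime labels WERE observed (the supervised case), recovering the
# two slopes would be trivial OLS within each regime:
b_out = np.sum(x[~regime] * y[~regime]) / np.sum(x[~regime] ** 2)
b_in  = np.sum(x[regime] * y[regime]) / np.sum(x[regime] ** 2)
print(b_out, b_in)  # roughly 1 and 3
# The hard (unsupervised) problem is recovering g and the regimes jointly.
```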

The 13th Greater New York Metropolitan Area Econometrics Colloquium

Princeton University, Saturday, December 1, 2018

9.00am-10.30am: Session 1
“Simple Inference for Projections and Linear Programs” by Hiroaki Kaido (BU), Francesca Molinari (Cornell), and Jörg Stoye (Cornell)
“Clustering for multi-dimensional heterogeneity with application to production function estimation” by Xu Cheng (UPenn), Peng Shao (UPenn), and Frank Schorfheide (UPenn)
“Adaptive Bayesian Estimation of Mixed Discrete-Continuous Distributions Under Smoothness and Sparsity” by Andriy Norets (Brown) and Justinas Pelenis (Vienna IAS)

11.00am-12.30pm: Session 2
“Factor-Driven Two-Regime Regression” by Sokbae Lee (Columbia), Yuan Liao (Rutgers), Myung Hwan Seo (Cowles), and Youngki Shin (McMaster)
“Semiparametric Estimation in Continuous-Time: Asymptotics for Integrated Volatility Functionals with Small and Large Bandwidths” by Xiye Yang (Rutgers)
“Nonparametric Sample Splitting” by Yoonseok Lee (Syracuse) and Yulong Wang (Syracuse)

2.00pm-3.30pm: Session 3
“Counterfactual Sensitivity and Robustness” by Timothy Christensen (NYU) and Benjamin Connault (IEX Group)
“Dynamically Optimal Treatment Allocation Using Reinforcement Learning” by Karun Adusumilli (UPenn), Friedrich Geiecke (LSE), and Claudio Schilter (LSE)
“Simultaneous Mean-Variance Regression” by Richard Spady (Johns Hopkins) and Sami Stouli (Bristol)

4.00pm-5.30pm: Session 4
“Semi-parametric instrument-free demand estimation: relaxing optimality and equilibrium assumptions” by Sungjin Cho (Seoul National), Gong Lee (Georgetown), John Rust (Georgetown), and Mengkai Yu (Georgetown)
“Nonparametric analysis of monotone choice” by Natalia Lazzati (UCSC), John Quah (Johns Hopkins), and Koji Shirai (Kwansei Gakuin)
“Discrete Choice under Risk with Limited Consideration” by Levon Barseghyan (Cornell), Francesca Molinari (Cornell), and Matthew Thirkettle (Cornell)

Organizing Committee
Bo Honoré, Michal Kolesár, Ulrich Müller, and Mikkel Plagborg-Møller

Participants

Karun Adusumilli (UPenn)
Lukas Althoff (Princeton)
Rachel Anderson (Princeton)
Jushan Bai (Columbia)
Arie Beresteanu (Pitt)
Brantly Callaway (Temple)
John Chao (Maryland)
Xu Cheng (UPenn)
Jungjun Choi (Rutgers)
Sung Hoon Choi (Rutgers)
Gregory Cox (Columbia)
Timothy Christensen (NYU)
Frank Diebold (UPenn)
Liyu Dou (Princeton)
Wayne Gao (Yale)
Abhishek Gaurav (Princeton)
Marc Henry (Penn State)
Paul Ho (Princeton)
Bo Honoré (Princeton)
Yingyao Hu (Johns Hopkins)
Michal Kolesar (Princeton)
Natalia Lazzati (UCSC)
Simon Lee (Columbia)
Dake Li (Princeton)
Lixiong Li (Penn State)
Yuan Liao (Rutgers)
Konrad Menzel (NYU)
Francesca Molinari (Cornell)
José Luis Montiel Olea (Columbia)
Ulrich Müller (Princeton)
Andriy Norets (Brown)
Mikkel Plagborg-Møller (Princeton)
Alexandre Poirier (Georgetown)
John Quah (Johns Hopkins)
John Rust (Georgetown)
Frank Schorfheide (UPenn)
Myung Seo (SNU & Cowles)
Youngki Shin (McMaster)
Christopher Sims (Princeton)
Richard Spady (Johns Hopkins)
Jörg Stoye (Cornell)
Larry Taylor (Lehigh)
Hrishikesh Vinod (Fordham)
Yulong Wang (Syracuse)
Xiye Yang (Rutgers)
Andrei Zeleneev (Princeton)

Monday, December 3, 2018

Dual Regression and Prediction

Richard Spady and Sami Stouli have an interesting new paper, “Dual Regression". They change the usual OLS loss function from quadratic to something related but different, as per their equation (2.2), and they get impressive properties for estimation under correct specification. They also have some results under misspecification.

I'd like to understand more regarding dual regression's properties for prediction under misspecification. Generally we're comfortable with quadratic loss, in which case OLS delivers the goods (the conditional mean or linear projection) in large samples under great generality (e.g., see here). The dual regression estimator, in contrast, has a different probability limit under misspecification -- it's not providing a KLIC-optimal approximation.

If the above sounds negative, note well that the issue raised may be an opportunity, not a pitfall! Certainly there is nothing sacred about quadratic loss, even if the conditional mean is usually a natural predictor. We sometimes move to absolute-error loss (conditional median predictor), check-function loss (conditional quantile predictor), or all sorts of other predictive loss functions depending on the situation. But movements away from conditional mean or median prediction generally require some justification and interpretation. Equivalently, movements away from quadratic or absolute predictive loss generally require some justification and interpretation. I look forward to seeing that for the loss function that drives dual regression.
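To make the point concrete, here is a minimal self-contained sketch (not the dual regression estimator, just an illustration of how the choice of predictive loss determines what you're estimating): minimizing quadratic loss delivers the mean, absolute-error loss delivers the median, and check-function loss delivers a quantile.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=50_000)  # skewed data, so mean != median

def pinball(c, y, tau):
    """Check-function (pinball) loss of issuing the constant forecast c."""
    e = y - c
    return np.mean(np.where(e >= 0, tau * e, (tau - 1.0) * e))

grid = np.linspace(0.0, 5.0, 501)  # candidate constant forecasts
quad_opt = grid[np.argmin([np.mean((y - c) ** 2) for c in grid])]
abs_opt  = grid[np.argmin([np.mean(np.abs(y - c)) for c in grid])]
q90_opt  = grid[np.argmin([pinball(c, y, 0.9) for c in grid])]

print(quad_opt)  # close to np.mean(y): quadratic loss targets the mean
print(abs_opt)   # close to np.median(y): absolute loss targets the median
print(q90_opt)   # close to np.quantile(y, 0.9): pinball(0.9) targets the 90th percentile
```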

Friday, November 16, 2018

Nearest-Neighbor Prediction

The beautiful idea has been around for ages. Find the N closest H-histories to the current H-history (you choose/tune N and H), for each H-history see what followed, take an average, and use that as your forecast. Of course there are many variations and extensions. Interesting new work by Dendramis, Kapetanios, and Marcellino is in exactly that tradition, except that Dendramis et al. don't show much awareness of the tradition, or attempt to stand on its shoulders, which I find odd. I find myself hungry for tighter connections, for example to my favorite old nearest-neighbor prediction piece, Sid Yakowitz's well-known "Nearest-Neighbor Methods for Time Series Analysis," Journal of Time Series Analysis, 1987.
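For readers who want the flavor, the basic recipe above can be sketched in a few lines (a minimal version with plain Euclidean matching and no tuning; the function and variable names are mine):

```python
import numpy as np

def nn_forecast(x, H=10, N=5):
    """One-step nearest-neighbor forecast: find the N past H-histories
    closest (in Euclidean distance) to the current H-history, look up
    the value that followed each, and average."""
    x = np.asarray(x, dtype=float)
    current = x[-H:]                              # the history we want to extend
    # All past H-histories x[t-H:t] whose successor x[t] is observed:
    hists = np.array([x[t - H:t] for t in range(H, len(x))])
    succ = x[H:]
    dist = np.linalg.norm(hists - current, axis=1)
    nearest = np.argsort(dist)[:N]                # indices of the N closest histories
    return succ[nearest].mean()                   # average of what followed them

# Toy usage: a noisy cycle; the forecast should continue the pattern.
t = np.arange(400)
x = np.sin(2 * np.pi * t / 20) + 0.05 * np.random.default_rng(1).normal(size=400)
print(nn_forecast(x, H=10, N=7))
```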

Thursday, November 15, 2018

JFEC Special Issue for Peter Christoffersen

No, I have not gone into seclusion. Well actually I have, but not intentionally and certainly not for lack of interest in the blog. Just the usual crazy time of year, only worse this year for some reason. Anyway I'll be back very soon, with lots to say! But here's something important and timely, so it can't wait:

Journal of Financial Econometrics

Call for Papers

Special Issue in Honor of Peter Christoffersen

The Journal of Financial Econometrics is organizing a special issue in memory of Professor Peter Christoffersen, our friend and colleague, who passed away in June 2018. Peter held the TMX Chair in Capital Markets and a Bank of Canada Fellowship and had been a widely respected member of the Rotman School at the University of Toronto since 2010. Prior to 2010, Peter was a valued member of the Desautels Faculty of Management at McGill University. In addition to his transformative work in econometrics and volatility models, financial risk and financial innovation had been the focus of Peter's work in recent years.

We invite paper submissions on topics related to Peter’s contributions to Finance and Econometrics. We are particularly interested in papers related to the following topics:

1)   The use of option-implied information for forecasting; Rare disasters and portfolio management; Factor structures in derivatives and futures markets.

2)   Volatility, correlation, extreme events, systemic risk and Value-at-Risk modeling for financial market risk management.

3)   The econometrics of digital assets; Big data and Machine Learning.

To submit a paper, authors should log in to the Journal of Financial Econometrics online submission system and follow the submission instructions as per journal policy.  The due date for submissions is June 30, 2019.  It is important to specify in the cover letter that the paper is submitted to the special issue in honor of Peter Christoffersen; otherwise your paper will not be assigned to the guest editors.

Guest Editors

•    Francis X. Diebold, University of Pennsylvania

•    René Garcia, Université de Montréal and Toulouse School of Economics

•    Kris Jacobs, University of Houston

Monday, October 29, 2018

Becker Friedman Expectations Conference

I just returned from a great BFI Conference at U Chicago, Developing and Using Business Expectations Data, organized by Nick Bloom and Steve Davis.

Wonderfully, density survey forecasts, as opposed to point forecasts, were featured throughout. There was the latest on central bank surveys (e.g., Binder et al.), but most informative (to me) was the emphasis on surveys that I'm less familiar with, typically soliciting density expectations from hundreds or thousands of C-suite types at major firms. Examples include Germany's important IFO survey (e.g., Bachmann et al.), the U.S. Census Management and Organizational Practices Survey (e.g., Bloom et al.), and fascinating work in progress at FRB Atlanta.

The Census survey is especially interesting due to its innovative structuring of histogram bins. There are no fixed bins. Instead, respondents supply five bins of their own choosing and five corresponding probabilities (which sum to 1). This solves the problem in fixed-bin surveys of (lazy? behaviorally-biased?) respondents routinely and repeatedly assigning zero probability to subsequently-realized events.
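As a concrete illustration of the design, here is what one respondent's answer might look like, and how a researcher would summarize it. The bin ranges and probabilities below are entirely made up for illustration; only the structure (five self-chosen bins, five probabilities summing to 1) follows the Census design described above.

```python
# One hypothetical respondent's density forecast of, say, sales growth (%):
# five self-chosen bins plus five probabilities summing to 1.
bins  = [(-10, -5), (-5, 0), (0, 5), (5, 10), (10, 20)]
probs = [0.05, 0.15, 0.40, 0.30, 0.10]

assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"

# Summarize under the usual bin-midpoint convention:
mids = [(lo + hi) / 2 for lo, hi in bins]
mean = sum(p * m for p, m in zip(probs, mids))
sd   = sum(p * (m - mean) ** 2 for p, m in zip(probs, mids)) ** 0.5
print(mean, sd)  # implied subjective mean and standard deviation
```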

Sunday, October 28, 2018

Expansions Don't Die of Old Age

As the expansion ages, there's progressively more discussion of whether its advanced age makes it more likely to end. The answer is no. More formally, postwar U.S. expansion hazards are basically flat, in contrast to contraction hazards, which are sharply increasing. Of course the present expansion will eventually end, and it may even end soon, but its age is unrelated to its probability of ending.

All of this is very clear in Diebold, Rudebusch and Sichel (1992). See Figure 6.2 on p. 271. (Sorry for the poor photocopy quality.) The flat expansion hazard result has held up well (e.g., Rudebusch (2016)), and moreover it would only be strengthened by the current long expansion.

[I blogged on flat expansion hazards before, but the message bears repeating as the expansion continues to age.]