Monday, December 3, 2018

Dual Regression and Prediction

Richard Spady and Sami Stouli have an interesting new paper, “Dual Regression”. They change the usual OLS loss function from quadratic to something related but different (their equation (2.2)), and they get impressive properties for estimation under correct specification. They also have some results under misspecification.

I'd like to understand more about dual regression's properties for prediction under misspecification. Generally we're comfortable with quadratic loss, in which case OLS delivers the goods (the conditional mean or linear projection) in large samples under great generality. Recall that OLS coincides with Gaussian quasi-MLE, so even under misspecification its probability limit minimizes the Kullback-Leibler divergence between the fitted model and the true data-generating process. The dual regression estimator, in contrast, has a different probability limit under misspecification -- it's not providing a KLIC-optimal approximation.
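
To fix ideas, here is the standard asymptotic display I have in mind (textbook material, not from the Spady-Stouli paper; the notation is mine):

\[
\hat{\beta}_{OLS} \;\overset{p}{\longrightarrow}\; \beta^{*} \;=\; \arg\min_{\beta}\, E\big[(Y - X'\beta)^{2}\big] \;=\; \big(E[XX']\big)^{-1} E[XY],
\]

the linear projection coefficient. So the OLS limit retains a clean best-linear-approximation interpretation even when the conditional mean is not linear. The question is what the analogous pseudo-true parameter looks like for the dual regression objective.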

If the above sounds negative, note well that the issue raised may be an opportunity, not a pitfall! Certainly there is nothing sacred about quadratic loss, even if the conditional mean is usually a natural predictor. We sometimes move to absolute-error loss (conditional median predictor), check-function loss (conditional quantile predictor), or all sorts of other predictive loss functions, depending on the situation. But movements away from conditional mean or median prediction -- equivalently, away from quadratic or absolute-error predictive loss -- generally require some justification and interpretation. I look forward to seeing that for the loss function that drives dual regression.
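
As a small illustration of the loss-determines-predictor point (my own sketch, not from the paper; the lognormal data and all names below are just for exposition), numerically minimizing quadratic, absolute-error, and check-function loss over a constant forecast recovers the sample mean, median, and 0.9-quantile, respectively:

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # skewed, so mean != median

def argmin_loss(loss):
    # Constant forecast c minimizing average loss over the sample.
    return minimize_scalar(lambda c: loss(y - c).mean(),
                           bounds=(0.0, 10.0), method="bounded").x

quad = lambda e: e ** 2                        # quadratic loss -> mean
absl = lambda e: np.abs(e)                     # absolute-error loss -> median
tau = 0.9
check = lambda e: e * (tau - (e < 0))          # check-function loss -> tau-quantile

print(argmin_loss(quad), y.mean())             # approximately equal
print(argmin_loss(absl), np.median(y))         # approximately equal
print(argmin_loss(check), np.quantile(y, tau)) # approximately equal

Because the simulated distribution is skewed, the three optimal forecasts differ substantially, which is precisely why a switch of predictive loss needs justification and interpretation.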
