Sunday, November 22, 2020

Classification Under Asymmetric Loss

I just read the stimulating new paper by Babii et al. on binary choice / classification with asymmetric loss, https://arxiv.org/abs/2010.08463.

It led me to recall some work of mine with Peter Christoffersen that may be related in interesting ways; the papers are linked below. We study optimal prediction under asymmetric loss, focusing not only on how the degree of loss asymmetry drives the optimal bias (of course, as in Granger's seminal work), but also on how heteroskedasticity, interacting with loss asymmetry, drives the optimal bias. (The optimal bias increases as the conditional variance increases, and conversely.)
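As a quick illustration (my own sketch, not code from the papers): under linex loss, exp(a*e) - a*e - 1 with forecast error e, the expected-loss-minimizing forecast of a Gaussian variable is the conditional mean plus a*sigma^2/2, so the optimal bias scales directly with the variance. A small Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)

def linex(e, a=1.0):
    # Linex loss: asymmetric in the error e; for a > 0 it penalizes
    # under-prediction (positive e) more heavily than over-prediction.
    return np.exp(a * e) - a * e - 1.0

def optimal_forecast(mu, sigma, a=1.0, n=100_000):
    # Monte Carlo grid search: find the point forecast minimizing
    # average linex loss for y ~ N(mu, sigma^2).
    y = rng.normal(mu, sigma, n)
    grid = np.linspace(mu - 1.0, mu + 3.0 * sigma**2, 300)
    losses = [linex(y - f, a).mean() for f in grid]
    return grid[int(np.argmin(losses))]

for sigma in (0.5, 1.0, 2.0):
    f_star = optimal_forecast(mu=0.0, sigma=sigma, a=1.0)
    print(f"sigma={sigma}: optimal forecast ~ {f_star:.2f}, "
          f"theory mu + a*sigma^2/2 = {0.5 * sigma**2:.2f}")
```

The optimal forecast tracks mu + a*sigma^2/2: quadrupling the variance quadruples the optimal bias, which is exactly the heteroskedasticity / loss-asymmetry interaction.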

We focus on time-series heteroskedasticity, but cross-sectional heteroskedasticity is of course massively relevant as well, so I wonder how it would all play out, in theory and practice, in the Babii et al. cross-sectional classification environment. Everyone talks about heteroskedasticity destroying consistency in logit and related models, but that concerns deeper econometric consistency for marginal effects and the like. I don't see why it would destroy consistency of the optimal prediction / classification, which is automatically induced by virtue of the estimation criterion, as routinely exploited in the ML literature.

In any event, the key recognition is that heteroskedasticity and asymmetric loss interact. Asymmetric loss of course influences the optimal prediction / classification, but it influences it more in regions (cross section) or periods (time series) where variance is high.
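A stylized example of the classification side (my own hypothetical setup, not from Babii et al.): the Bayes classifier under asymmetric loss cuts the conditional probability at c_fp / (c_fp + c_fn) rather than 1/2, and in a latent-variable model the conditional probabilities are pulled toward 1/2 exactly where the latent scale is large, so the asymmetric cutoff flips classifications mostly in the high-variance region:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent-variable model: y = 1{x + sigma(x)*eps > 0}, with a
# larger conditional scale near x = 0, so P(y=1|x) is pulled toward 1/2
# precisely where the variance is high.
n = 100_000
x = rng.normal(0.0, 1.0, n)
sigma = np.where(np.abs(x) < 1.0, 3.0, 0.5)   # high variance for |x| < 1
p = 1.0 / (1.0 + np.exp(-x / sigma))          # logistic P(y=1 | x)

# Bayes cutoffs: symmetric loss classifies as 1 when p > 1/2; with
# false-positive cost c_fp and false-negative cost c_fn, the cutoff
# becomes c_fp / (c_fp + c_fn).
c_fp, c_fn = 3.0, 1.0
tau = c_fp / (c_fp + c_fn)                    # = 0.75
sym = p > 0.5
asym = p > tau

flips = sym != asym                           # where asymmetry changes the call
high_var = np.abs(x) < 1.0
print("flip rate, high-variance region:", flips[high_var].mean())
print("flip rate, low-variance region: ", flips[~high_var].mean())
```

In this sketch essentially all of the flipped classifications land in the high-variance region, mirroring the time-series result: asymmetric loss matters most where/when variance is high.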


Christoffersen, P.F. and Diebold, F.X. (1997), "Optimal Prediction Under Asymmetric Loss," Econometric Theory, 13, 808-817.


Christoffersen, P.F. and Diebold, F.X. (1996), "Further Results on Forecasting and Model Selection Under Asymmetric Loss," Journal of Applied Econometrics, 11, 561-571.

(Somewhat) related earlier No Hesitations post:
https://fxdiebold.blogspot.com/search/label/Classification
