Monday, December 18, 2017

Holiday Haze

[Image: Happy Holidays (5318408861).jpg]
Your blogger is about to vanish, returning in the new year. Meanwhile, all best wishes for the holidays, and many thanks for your wonderful support. If you're in Philadelphia for the January meetings, please come to the Penn reception (joint Economics, Finance, etc.), Friday, 6-8:30, Center for Architecture and Design, 1218 Arch Street.

[Photo credit:  Public domain, by Marcus Quigmire, from Florida, USA (Happy Holidays  Uploaded by Princess Mérida) [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons]

Sunday, December 10, 2017

More on the Problem with Bayesian Model Averaging

I blogged earlier on a problem with Bayesian model averaging (BMA) and gave some links to new work that chips away at it. The interesting thing about that new work is that it stays very close to traditional BMA while acknowledging that all models are misspecified.

But there are also other Bayesian approaches to combining density forecasts, such as prediction pools whose weights are chosen to optimize a predictive score. (See, e.g., Amisano and Geweke (2017) and the references therein. Ungated final draft, and code, here.)
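To make the pooling idea concrete, here is a minimal sketch of a log-score-optimized pool in the spirit of that literature. Everything in it (the `optimal_pool_weights` helper, the density matrix `dens`, the toy distributions) is my own illustrative construction, not the Amisano-Geweke code.

```python
# Minimal sketch: choose simplex weights for a linear pool of predictive densities
# by maximizing the average log predictive score on realized outcomes.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def optimal_pool_weights(dens):
    """dens[t, k] = model k's predictive density evaluated at realization y_t."""
    T, K = dens.shape

    def neg_log_score(w):
        pool = dens @ w                       # pooled density at each realization
        return -np.mean(np.log(pool + 1e-300))

    # optimize over the unit simplex: nonnegative weights summing to one
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    bnds = [(0.0, 1.0)] * K
    w0 = np.full(K, 1.0 / K)
    return minimize(neg_log_score, w0, bounds=bnds, constraints=cons).x

# illustrative use: realizations from N(0,1); candidate densities N(0,1) and N(1,2)
rng = np.random.default_rng(0)
y = rng.normal(size=300)
dens = np.column_stack([norm(0, 1).pdf(y), norm(1, 2).pdf(y)])
print(optimal_pool_weights(dens))   # weight should pile onto the first density
```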

Another relevant strand of new work, less familiar to econometricians, is "Bayesian predictive synthesis" (BPS), which builds on the expert-opinions-analysis literature. The framework, which traces to Lindley et al. (1979), concerns a Bayesian faced with multiple priors coming from multiple experts, and explores how to obtain a posterior distribution that uses all of the available information. Earlier work by Genest and Schervish (1985) and West and Crosse (1992) develops the basic theory, and new work (McAlinn and West, 2017) extends it to density forecast combination.

Thanks to Ken McAlinn for reminding me about BPS. Mike West gave a nice presentation at the FRBSL forecasting meeting. [Parts of this post are adapted from private correspondence with Ken.]

Sunday, December 3, 2017

The Problem With Bayesian Model Averaging...

The problem is that, because the prior model probabilities sum to one, one of the models considered is implicitly (or explicitly) assumed to be true. Hence all posterior weight gets placed on a single model asymptotically -- just what you don't want when constructing a portfolio of surely-misspecified models. The earliest paper I know that makes and explores this point is one of mine, here. Recent and ongoing research is starting to address it much more thoroughly, for example here and here. (Thanks to Veronika Rockova for sending.)
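For intuition, here is a small illustrative simulation of that asymptotic concentration (the distributions and numbers are made up for the demo, not taken from any of the papers linked above): two fixed, misspecified Gaussian "models", data generated from a Student-t, and BMA-style posterior model probabilities computed at increasing sample sizes.

```python
# Illustration: with misspecified candidate models, BMA posterior model probabilities
# concentrate on the single model closest to the truth (in KL divergence) as T grows,
# rather than maintaining a genuine portfolio of models.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_dgp = lambda n: rng.standard_t(df=3, size=n)              # truth: Student-t(3)
models = [norm(loc=0.0, scale=1.0), norm(loc=0.5, scale=1.5)]  # both misspecified
prior = np.array([0.5, 0.5])

for T in (50, 500, 5000):
    y = true_dgp(T)
    loglik = np.array([m.logpdf(y).sum() for m in models])
    logpost = np.log(prior) + loglik
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    print(T, np.round(post, 4))   # weights drift toward (1, 0) or (0, 1) as T grows
```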