Monday, August 5, 2013

Still More on the Strange American Estimator: Indirect Inference, MLE and the Particle Filter

In my last post I praised indirect inference (IE) for its ease of use: just simulate the model and fit a simple auxiliary model to the simulated and real-world data, after which evaluation of the objective is immediate. In contrast, likelihood analysis and MLE can be challenging, as the likelihood may be difficult to derive and evaluate.
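As a concrete sketch of that recipe, here is a toy Python example, with an AR(1) standing in for a structural model whose likelihood we pretend is intractable. The auxiliary model, the grid search, and all names are illustrative assumptions, not a canonical implementation:

```python
import numpy as np

def simulate_ar1(theta, shocks):
    """Simulate an AR(1) path from a given shock sequence. Here the AR(1)
    stands in for a structural model that is easy to simulate but whose
    likelihood we pretend is hard to evaluate."""
    y = np.zeros(len(shocks))
    for t in range(1, len(y)):
        y[t] = theta * y[t - 1] + shocks[t]
    return y

def fit_auxiliary(y):
    """Auxiliary model: OLS slope from regressing y_t on y_{t-1}."""
    return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

rng = np.random.default_rng(0)
T = 2000

# "Real-world" data, generated here with (hypothetical) true parameter 0.6.
y_data = simulate_ar1(0.6, rng.standard_normal(T))
beta_data = fit_auxiliary(y_data)

# Indirect inference: fix the simulation draws (common random numbers),
# then pick theta so the auxiliary estimate from simulated data matches
# the auxiliary estimate from the real data.
shocks = np.random.default_rng(1).standard_normal((10, T))

def objective(theta):
    betas = [fit_auxiliary(simulate_ar1(theta, e)) for e in shocks]
    return (np.mean(betas) - beta_data) ** 2

grid = np.linspace(0.1, 0.9, 81)  # crude grid search for illustration
theta_hat = min(grid, key=objective)
```

Note the two ingredients the post emphasizes: the only model-specific work is plain model simulation, and the objective is immediate once the auxiliary model is fit to both data sets.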

Some might wonder whether that’s a completely fair assessment in modern time-series contexts. In particular, one might claim that evaluation of the likelihood is now as trivial as simulation. As Andrew Harvey and others have emphasized for decades, for any linear model cast in finite-dimensional state-space form one can simply run the Kalman filter and then evaluate the Gaussian likelihood via a prediction-error decomposition. And much more recently, thanks to path-breaking work by Arnaud Doucet and others (e.g., JRSS B, 2010, 1-33), filtering now also provides full likelihood analysis in general non-linear / non-Gaussian environments. In particular, so-called "particle MCMC" -- a simulation method! -- does the trick. So it would seem that likelihood analysis is made trivial by simulation, just as IE is made trivial by simulation.
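To make the Kalman-filter claim concrete, here is a minimal Python sketch of the prediction-error decomposition for a local-level model. The model choice, the rough diffuse initialization, and the parameter values are illustrative assumptions, not from the post:

```python
import numpy as np

def kalman_loglik(y, q, r):
    """Gaussian log-likelihood of a local-level model,
        y_t  = mu_t + eps_t,      eps_t ~ N(0, r)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, q),
    via the Kalman filter's prediction-error decomposition."""
    a, p = 0.0, 1e7  # rough diffuse initialization of the state
    ll = 0.0
    for yt in y:
        v = yt - a                                   # prediction error
        f = p + r                                    # its variance
        ll -= 0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                                    # Kalman gain
        a += k * v                                   # filtered state mean
        p = (1 - k) * p + q                          # next-period variance
    return ll

# Simulated data from the same local-level model (true q = 0.5, r = 1.0)
rng = np.random.default_rng(42)
mu = np.cumsum(np.sqrt(0.5) * rng.standard_normal(500))
y = mu + rng.standard_normal(500)

ll_true = kalman_loglik(y, 0.5, 1.0)
ll_bad = kalman_loglik(y, 0.001, 0.001)
```

With the likelihood available this cheaply at any parameter point, MLE reduces to handing `kalman_loglik` to a numerical optimizer; particle MCMC plays the analogous role when the state-space model is non-linear or non-Gaussian.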

Hence we can dispense with comparatively inefficient IE, right?

Whoaaa…not so fast. The points I made in an earlier post remain valid.

First, IE simulation is good old “model simulation,” typically simple and always a good check of model understanding. Successful particle MCMC, in contrast, is a different and often-recalcitrant simulation beast.

Second, even if particle MCMC does make MLE as mechanical as simple model simulation (and again, that’s not at all clear), desirable consistency properties under misspecification are generally more easily achieved for IE. Under misspecification, the necessity of thinking hard about which moments to match, or which auxiliary model to use, is a good thing.