Monday, July 29, 2013

More on the Strange American Estimator: GMM, Simulation, and Misspecification

What's so interesting, then, about GMM? For me there are two key things: its implementation by simulation, and its properties under misspecification.

First consider the implementation of GMM by simulation (so-called simulated method of moments, SMM).

GMM is widely advertised as potentially useful when a likelihood is unavailable. In other cases the likelihood may be "available" but very difficult to derive or evaluate. But model moments may also be seemingly unavailable (i.e., analytically intractable). SMM recognizes that model moments are effectively never intractable, because they can be calculated arbitrarily accurately from an arbitrarily long model simulation. That's really exciting, because simulation ability is a fine litmus test of model understanding. If you can't figure out how to simulate pseudo-data from a given probabilistic model, then you don't really understand the model (or the model is ill-posed). Assembling everything: If you understand a model you can simulate it, and if you can simulate it you can estimate it consistently by SMM, choosing parameters to minimize divergence between data moments and (simulated) model moments. Eureka! No need to work out complex likelihoods, even if they are in principle "available," and in this age of Big Data, MLE efficiency lost may be a small price for SMM tractability gained.
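
To make the recipe concrete, here is a minimal SMM sketch in Python (my illustration, not from the post): it estimates the MA(1) coefficient by matching the variance and first autocovariance of the data to those of a long simulated path. The model, the two moments, the identity weighting matrix, and the optimizer are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def moments(y):
    """Sample variance and lag-1 autocovariance (the features we choose to match)."""
    y = y - y.mean()
    return np.array([np.mean(y * y), np.mean(y[1:] * y[:-1])])

# "Observed" data: an MA(1) with true theta = 0.5, i.e. y_t = e_t + theta * e_{t-1}.
rng = np.random.default_rng(0)
e = rng.standard_normal(1_001)
y_data = e[1:] + 0.5 * e[:-1]
m_data = moments(y_data)

# Fixed simulation shocks, much longer than the data, reused at every evaluation.
e_sim = np.random.default_rng(1).standard_normal(100_001)

def smm_criterion(theta):
    y_sim = e_sim[1:] + theta * e_sim[:-1]   # simulate the model at this theta
    g = moments(y_sim) - m_data              # moment discrepancies
    return g @ g                             # identity weighting matrix

fit = minimize_scalar(smm_criterion, bounds=(-0.99, 0.99), method="bounded")
print(f"SMM estimate of theta: {fit.x:.3f}")   # should land near 0.5
```

Even this toy displays the two standard SMM practicalities: the simulated sample is much longer than the data, and the same simulation draws are reused at every criterion evaluation so that the objective is smooth in the parameter.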

Now consider the properties of GMM/SMM under misspecification, which is what intrigues me the most.

All econometric models are approximations to a true but unknown data-generating process (DGP), and hence likely misspecified. GMM/SMM has special appeal from that perspective. Under correct specification any consistent estimator (e.g., MLE or GMM/SMM) unambiguously gets you to the right place asymptotically, and MLE has the extra benefit of efficiency, so it's preferable. But under misspecification, what each estimator is consistent for distinguishes the estimators, quite apart from the secondary issue of efficiency. In particular, under misspecification the best asymptotic DGP approximation for one purpose may be very different from the best for another. GMM/SMM is appealing in such situations, because it forces you to think about which features of the data (moments, M) you'd like to match, and then by construction it's consistent for the M-optimal approximation.
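
In standard notation (mine, not the post's), with data moments m_hat, model moments m(theta), and weighting matrix W, the GMM estimator and its probability limit under misspecification are, under the usual regularity conditions:

```latex
\hat{\theta}_T
  = \arg\min_{\theta}\,
    \bigl[\hat{m}_T - m(\theta)\bigr]'\, W \,\bigl[\hat{m}_T - m(\theta)\bigr],
\qquad
\hat{\theta}_T \;\xrightarrow{\;p\;}\;
  \theta^{*}
  = \arg\min_{\theta}\,
    \bigl[\mu_0 - m(\theta)\bigr]'\, W \,\bigl[\mu_0 - m(\theta)\bigr]
```

Here mu_0 is the population value of the chosen moments M under the true DGP, so the pseudo-true value theta* -- the approximation you converge to -- depends on which M (and which W) you pick.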

In contrast to GMM/SMM, pseudo-MLE ties your hands. Gaussian pseudo-MLE, for example, may be consistent for the KLIC-optimal approximation, but KLIC optimality may not be of maximal relevance. From a predictive perspective, for example, the KLIC-optimal approximation minimizes 1-step-ahead mean-squared prediction error, but 1-step quadratic loss may not be the relevant loss function. The bottom line: under misspecification MLE may not be consistent for what you want, whereas by construction GMM is consistent for what you want (once you decide what you want).
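
For concreteness, the KLIC-optimal approximation referred to above is, in standard notation (again mine, not the post's):

```latex
\theta^{\dagger}
  = \arg\min_{\theta}\, \mathrm{KLIC}\!\left(f_0 \,\|\, f_\theta\right)
  = \arg\min_{\theta}\, E_0\!\left[\log \frac{f_0(y)}{f_\theta(y)}\right]
  = \arg\max_{\theta}\, E_0\!\left[\log f_\theta(y)\right]
```

That is, pseudo-MLE maximizes expected log-likelihood under the true DGP, whether or not the resulting approximation is the one your loss function cares about.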

So, at least in part, GMM/SMM continues to intrigue me. It's hard to believe that it's been three decades since Lars Hansen's classic GMM paper (1982, Econometrica), and two decades since the similarly classic indirect inference papers of Tony Smith (1990, Duke Ph.D. Dissertation, and 1993, J. Applied Econometrics) and Christian Gourieroux, Alain Monfort, and Eric Renault (1993, J. Applied Econometrics). (SMM is a special case of indirect inference.) If by now the Hansen-Smith-Gourieroux-Monfort-Renault insights seem obvious, it's only because many good insights are obvious, ex post.

Monday, July 22, 2013

GMM, the "Strange American Estimator"

At three separate recent non-American conferences, I heard three separate European econometricians refer to generalized method of moments (GMM) as a "strange American estimator." Needless to say, that raised my eyebrows. One doesn't hear that phrase too often in, say, Stanford or Chicago or Cambridge (Massachusetts, that is).

Although I am American, I have some sympathy for the European view (if I may be so bold as to assert that my sample of size three has indeed uncovered a "view"). I may even have significantly more sympathy than do most Americans.  But ultimately my feelings are mixed.

On the one hand, it seems clear that frequentist statisticians dismissed method-of-moments and minimum chi-squared (their term for GMM) ages ago as inefficient relative to MLE, and that Bayesian statisticians never dismissed them because they never paid them any attention in the first place. Instead, both communities have always thoroughly and intentionally focused on the likelihood -- frequentists on the location of its max and its curvature in an epsilon-neighborhood of the max, and Bayesians on its entire shape.

Surely this historical background is what drives the European view.  And against that background, I too am always a bit perplexed by the GMM phenomenon, as distilled for example in Hayashi's classic econometrics text, which reads in significant part as something of a prayer book for the GMM congregation. (Never mind that my friend and former colleague Hayashi is Japanese; his econometrics training and style are thoroughly American.)

That is, I must admit that, in part, I too am rather skeptical. Somehow my community just never got the religion. My belief is probably restrained significantly by the fact that my interest centers on dynamic predictive econometric modeling, which is often best done in reduced-form (see No Hesitations, June 12, 2013). Hence one of the grand sources of GMM moment conditions -- orthogonality between instruments and disturbances in estimating causal effects -- is, for me, typically neither here nor there.

On the other hand, my sympathy for the European view is far from complete. For example, some important classes of economic models produce moment restrictions but not full likelihoods. Despite the GMM crowd's repeating that mantra ad nauseam, it's as true now as ever. But if the story of GMM's appeal ended with its usefulness when a model fails to produce a likelihood, I'd be underwhelmed. Maybe I'd even move to Europe.

What then do I find so additionally impressive about GMM?  Stay tuned for the next post.

Tuesday, July 16, 2013

The Latest in Statistical Graphics

In a recent gushing review, The Economist made Data Points: Visualization that Means Something by Nathan Yau (Wiley, 2013) sound like the elusive "Tufte for the 21st-Century" discussed in an earlier post (Statistical Graphics: The Good, The Bad, and the Ugly, June 21, 2013). Alas, it's not. Much of it is just an inferior rehash of old 20th-century Tufte. Nevertheless I like it and I'm glad I bought it. Among other things, there are some really cute examples, like this one showing the available colors of Crayola crayons over time.

Published by Stephen Von Worley. Designed by Velociraptor. See links below.

(Yes, I know it's not original to Yau, and I know it's comparatively easy to use clever color in a Crayola graphic, but still...) Crayola also brings back good memories: I was a user/fan in the 64-color days of 1958-1972, not only for the awesome 64 colors but also for the totally-cool tiered box with built-in sharpener!

Perhaps most interesting is Yau's final chapter, where he offers opinions on graphics software environments. (After all, he spends his life doing this stuff, so it's interesting to learn his preferences.) At a high "canned" level, he likes Tableau, the "Tableau Public" version of which is free. Well, nothing is really free, and Tableau Public follows an interesting paradigm: the price of using the web-based software is that users must upload their data so that others can use it.

But readers of this blog will be more interested in lower-level scientific software that allows for significant graph customization.  In that regard, and not surprisingly, Yau is an R fan. (See my earlier post, Research Computing / Data / Writing Environments, May 31, 2013.) He basically does all his graphics in R, but quite interestingly, he doesn't like to tune his graphs completely in R. Instead, he finalizes them using illustration software like the open-source Inkscape. Hmmm...

Yau's book also introduced me to his blog, FlowingData, which is interesting and entertaining. Also see Data Pointed, a fine blog by scientist and artist Stephen Von Worley, the author of the Crayola graphic above. And if you're really a Crayola maven, see his post, Somewhere Over the Crayon-Bow.

Finally, and ironically, the most interesting thing about Yau's book is not explicitly discussed in it, yet it lurks massively throughout: Big Data and its interaction with graphics. More on that soon.

Wednesday, July 10, 2013

Financial Regulation, Part Three: The Known, the Unknown and the Unknowable

Dick Herring, Neil Doherty and I recently worked on an eye-opening research project that resulted in our book, The Known, the Unknown and the Unknowable in Financial Risk Management. I always liked the cover art (thanks to Princeton University Press, which did its usual fine job throughout).  I feel bad for the poor guy in the necktie, in the maze. But that's life in financial markets.

[Book jacket image: Poor Little Guy]

We abbreviate the known, the unknown and the unknowable by K, u and U, respectively. Roughly, K is risk (known outcomes, known probabilities), u is uncertainty (known outcomes, unknown probabilities), and U is ignorance (unknown outcomes, unknown probabilities).

The book blurb on my web page reads as follows:

"On the successes and failures of various parts of modern financial risk management, emphasizing the known (K), the unknown (u) and the unknowable (U). We illustrate a KuU-based perspective for conceptualizing financial risks and designing effective risk management strategies. Sometimes we focus on K, and sometimes on U, but most often our concerns blend aspects of K and u and U. Indeed K and U are extremes of a smooth spectrum, with many of the most interesting and relevant situations interior."

The blurb continues:

"Statistical issues emerge as central to risk measurement, and we push toward additional progress. But economic issues of incentives and strategic behavior emerge as central for risk management, as we illustrate in a variety of contexts."

The book's table of contents reveals the breadth of that insight, from risk management, asset allocation, and asset pricing, to insurance, crisis management, real estate, corporate governance, monetary policy, and private investing:

TABLE OF CONTENTS

Chapter 1: Introduction by Francis X. Diebold, Neil A. Doherty, and Richard J. Herring 1

Chapter 2: Risk: A Decision Maker's Perspective by Sir Clive W. J. Granger 31

Chapter 3: Mild vs. Wild Randomness: Focusing on Those Risks That Matter by Benoit B. Mandelbrot and Nassim Nicholas Taleb 47

Chapter 4: The Term Structure of Risk, the Role of Known and Unknown Risks, and Nonstationary Distributions by Riccardo Colacito and Robert F. Engle 59

Chapter 5: Crisis and Non-crisis Risk in Financial Markets: A Unified Approach to Risk Management by Robert H. Litzenberger and David M. Modest 74

Chapter 6: What We Know, Don't Know, and Can't Know about Bank Risk: A View from the Trenches by Andrew Kuritzkes and Til Schuermann 103

Chapter 7: Real Estate through the Ages: The Known, the Unknown, and the Unknowable by Ashok Bardhan and Robert H. Edelstein 145

Chapter 8: Reflections on Decision-making under Uncertainty by Paul R. Kleindorfer 164

Chapter 9: On the Role of Insurance Brokers in Resolving the Known, the Unknown, and the Unknowable by Neil A. Doherty and Alexander Muermann 194

Chapter 10: Insuring against Catastrophes by Howard Kunreuther and Mark V. Pauly 210

Chapter 11: Managing Increased Capital Markets Intensity: The Chief Financial Officer's Role in Navigating the Known, the Unknown, and the Unknowable by Charles N. Bralver and Daniel Borge 239

Chapter 12: The Role of Corporate Governance in Coping with Risk and Unknowns by Kenneth E. Scott 277

Chapter 13: Domestic Banking Problems by Charles A. E. Goodhart 286

Chapter 14: Crisis Management: The Known, The Unknown, and the Unknowable by Donald L. Kohn 296

Chapter 15: Investing in the Unknown and Unknowable by Richard J. Zeckhauser 304


I say that the project was "eye-opening" (for me) because I went into it thinking about econometrics, but I came out of it thinking about economics.  Econometrics is invaluable for risk measurement, systemic aspects of which are embodied for example in the Diebold-Yilmaz network connectedness framework, or in parts of Rob Engle's V-Lab. But risk management, in particular risk prevention, is equally (or more) about creating incentives to guide strategic behavior, particularly in situations of u and U.

The real question, then, is how to write contracts (design organizations, design policies, design rules) that incent firms to "do the right thing," whatever that might mean, in myriad situations, many of which may be inconceivable at present -- that is, across the KuU spectrum.

How to do it?

To be continued...

Tuesday, July 2, 2013

Financial Regulation, Part Two: Rules

I concluded my last post with the question, "Do you really believe that ... this time we've fixed the too-big-to-fail (TBTF) incentive problem, that this time is different?" Presumably your answer depends on your feelings regarding the efficacy of Dodd-Frank's (DF's) increased capital requirements and intensified scrutiny of financial institutions.

Needless to say, I have my doubts.

Left to their own devices, lawyerly types (and politicians and regulators come disproportionately from that realm) tend to aspire to write exhaustive sets of rules that dictate what can and can't be done, when, and by whom. Economists call that a "complete contract." DF, at 2000+ pages, is an example of an attempt at such a complete contract, which financial institutions were forced to "sign."

One can entertain the idea of a complete contract in principle, but the idea of making rules to govern all possible contingencies is preposterous in practice. (Note that many important possible contingencies are surely not even remotely conceivable now -- more on that in the next post.) No one is so naive as to believe that DF is truly a complete contract, but the spirit of the attempt is one of complete contracting. Let's call it "rules-based" regulation.

Of course all legislation, regulatory or otherwise, must be rules-based. Rules, after all, are the essence of law. And rules, even massive sets of rules, can be very good things. There is little doubt, for example, that rules enforcing contractual and property rights can play a large role in generating economic prosperity. And there are some good entries in the DF rulebook.

But there are three key related problems with naive implementations of rules-based regulation, and it's important to be aware of them vis-à-vis DF. First, naive rules-based regulation is mostly backward-looking, effectively regulating earlier crises, with potentially very little relevance for future crises. It's terribly hard, as they say, to drive forward when looking only in the rear-view mirror. When the next major financial crisis hits, it will likely arrive via avenues that DF missed, and then DF will be augmented with another 2000+ pages of rules looking backward at that crisis, and so on, and on and on.

Second, naive rules-based regulation invites regulatory arbitrage. That is, as soon as rules are announced, firms start devising ways to skirt them. Indeed modern finance is in many respects an industry of very smart people whose job, for a given set of rules, is to work furiously to reverse-engineer those rules (they're doing it right now with respect to DF), devising clever ways to legally bear as much risk as possible while holding as little capital as possible, by finding and taking risks missed by the rules.

Third, naive rules-based regulation invites regulatory capture. That is, the regulated and the regulators get cozy, and the regulated eventually "capture" the regulator. First the regulators and regulated work side by side, implicitly or explicitly as in DF. Next the regulated are making "suggestions" for creative rule interpretation. Before long the regulated are helping to write the rules, crafting the very loopholes that they'll later exploit.

What to do? How to deal with the fundamental incompleteness of the regulatory contract? More specifically, given the long-term impotence of naive implementations of rules-based regulation, how really to thwart the adverse incentives of TBTF? If rules are unavoidable, and if naive rule implementations are problematic, are there better, sophisticated, implementations?

To be continued...