Monday, December 22, 2014

Holiday Haze

File:Happy Holidays (5318408861).jpg

Your dedicated blogger is about to vanish in the holiday haze, presumably returning early in the new year. I hope to see you at the Boston ASSA Penn party. (I promise to show up this time. Seriously.) Meanwhile, all best wishes for the holidays.

[Photo credit: Marcus Quigmire, from Florida, USA, "Happy Holidays" (uploaded by Princess Mérida) [CC-BY-SA-2.0], via Wikimedia Commons]

Monday, December 15, 2014

Causal Modeling Update

In an earlier post on predictive modeling and causal inference, I mentioned my summer "reading list" for causal modeling:
Re-read Pearl, and read the Heckman-Pinto critique.
Re-read White et al. on settable systems and testing conditional independence.
Read Angrist-Pischke.
Read Wolpin, and Rust's review.
Read Dawid 2007 and Dawid 2014.
Get and read Imbens-Rubin (available early 2015).
I'm sure I'll blog on some of the above in due course. Meanwhile I want to add something to the list: Pearl's response to Heckman-Pinto. (I just learned of it.)

One of its themes resonates with me: Econometrics needs to examine more thoroughly the statistical causal modeling literature vis-à-vis standard econometric approaches. (The irreplaceable Hal White started building some magnificent bridges, but alas, he was taken from us much too soon.) Reasonable people will have sharply different views as to what's of value there, and the discussion is far from over, but I'm grateful to the likes of Pearl, White, Heckman, and Pinto for starting it.

Monday, December 8, 2014

A Tennis Match Graphic

I know you're not thinking about tennis in December (at least those of you north of the equator). I'm generally not either. But this post is really about graphics, and I may have something that will interest you. And remember, the Australian Open and the 2015 season will soon be here.

Tennis scoring is tricky, and quite different from scoring in other sports. A 2008 New York Times piece, "In Tennis, the Numbers Sometimes Don't Add Up," is apt:
If you were told that in a particular match, Player A won more points and more games and had a higher first-serve percentage, fewer unforced errors and a higher winning percentage at the net, you would deduce that Player A was the winner. But leaping to that conclusion would be a mistake. ... In tennis, it is not the events that constitute a match, but the timing of those events. In team sports like baseball, basketball and football, and even in boxing, the competitor who scores first or last may have little bearing on the outcome. In tennis, the player who scores last is always the winner.
Tricky tennis scoring makes for tricky match summarization, whether graphical or otherwise. Not that people haven't tried, with all sorts of devices in use. See, for example, another good 2014 New York Times piece, "How to Keep Score: However You Like," and the fascinating "A blog devoted to maps about tennis," which emphasizes spatial aspects but goes farther on occasion.

Glenn Rudebusch and I have been working on a graphic for tennis match summarization. We have a great team of Penn undergraduate research assistants, including Bas Bergmans, Joonyup Park, Hong Teoh, and Han Tian. We don't want a graphic that keeps score per se, or a graphic that emphasizes spatial aspects. Rather, we simply want a graphic that summarizes a match's evolution, drama, and outcome. We want it to convey a wealth of information, instantaneously and intuitively, yet also to repay longer study. Hopefully we're getting close.

Here's an example, for the classic Federer-Monfils 2014 U.S. Open match. I'm not going to explain it, because it should be self-explanatory -- if it's not, we're off track. (But of course see the notes below the graph. Sorry if they're hard to read; we had to reduce the graphic to fit the blog layout.)

Does it resonate with you? How might we improve it? This is version 1; we hope to post a version 2, already in the works, during the Australian Open in early 2015. Again, interim suggestions are most welcome.

Monday, December 1, 2014

Quantum Computing and Annealing

My head is spinning. The quantum computing thing is really happening. Or not. Or most likely it's happening in small but significant part and continuing to advance slowly but surely. The Slate piece from last May still seems about right (but read on). 

Optimization by simulated annealing cum quantum computing is amazing. It turns out that the large and important class of problems that map into global optimization by simulated annealing is marvelously well-suited to quantum computing, so much so that the D-Wave machine is explicitly and almost exclusively designed for solving "quantum annealing" problems. We're used to doing simulated annealing on deterministic "classical" computers, where the simulation is done in software, and the randomness is fake (generated with deterministic pseudo-random deviates). In quantum annealing the randomization is in the hardware, and it's real.
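To make the classical benchmark concrete, here's a minimal simulated-annealing sketch (the routine, the toy energy landscape, and all parameter values are my own illustration, not D-Wave's): the randomness driving the accept/reject step is just a deterministic pseudo-random stream, which is exactly what quantum annealing replaces with physical randomness in hardware.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=42):
    """Classical simulated annealing with a deterministic pseudo-random stream."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        x_new = neighbor(x, rng)
        e_new = energy(x_new)
        # Accept downhill moves always; uphill moves with Boltzmann probability,
        # which lets the walk escape local minima while the temperature is high.
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# Toy multimodal landscape with many local minima; global minimum at x = 0
energy = lambda x: x**2 + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
neighbor = lambda x, rng: x + rng.gauss(0.0, 0.5)
x_star, e_star = simulated_annealing(energy, neighbor, x0=3.0)
```

The classical walk can only hop over the barriers between minima probabilistically; the hoped-for quantum advantage is tunneling through them.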

From the D-Wave site:
Quantum computing uses an entirely different approach than classical computing. A useful analogy is to think of a landscape with mountains and valleys. Solving optimization problems can be thought of as trying to find the lowest point on this landscape. Every possible solution is mapped to coordinates on the landscape, and the altitude of the landscape is the "energy" or "cost" of the solution at that point. The aim is to find the lowest point on the map and read the coordinates, as this gives the lowest energy, or optimal solution to the problem. Classical computers running classical algorithms can only "walk over this landscape". Quantum computers can tunnel through the landscape making it faster to find the lowest point.
Remember the old days of "math co-processors"? Soon you may have a "quantum co-processor" for those really tough optimization problems! And you thought you were cool if you had a GPU or two.

Except that your quantum co-processor may not work. Or it may not work well. Or at any rate today's version (the D-Wave machine; never mind that it occupies a large room) may not work, or work well. And it's annoyingly hard to tell. In any event, even if it works, the workings are subtle and still poorly understood -- the D-Wave tunneling description above is not only simplistic, but also potentially incorrect.

Here's the latest, an abstract of a lecture to be given at Penn on 4 December 2014 by one of the world's leading quantum computing researchers, Umesh Vazirani of UC Berkeley, titled "How 'Quantum' is the D-Wave Machine?":
A special purpose "quantum computer" manufactured by the Canadian company D-Wave has led to intense excitement in the mainstream media (including a Time magazine cover dubbing it "the infinity machine") and the computer industry, and a lively debate in the academic community. Scientifically it leads to the interesting question of whether it is possible to obtain quantum effects on a large scale with qubits that are not individually well protected from decoherence.

We propose a simple and natural classical model for the D-Wave  machine - replacing their superconducting qubits with classical magnets, coupled with nearest neighbor interactions whose strength is taken from D-Wave's specifications. The behavior of this classical model agrees remarkably well with posted experimental data about the input-output behavior of the D-Wave machine.

Further investigation of our classical model shows that despite its simplicity, it exhibits novel algorithmic properties. Its behavior is fundamentally different from that of its close cousin, classical heuristic simulated annealing. In particular, the major motivation behind the D-Wave machine was the hope that it would tunnel through local minima in the energy landscape, minima that simulated annealing got stuck in. The reproduction of D-Wave's behavior by our classical model demonstrates that tunneling on a large scale may be a more subtle phenomenon than was previously understood...
Wow. I'm there.

All this raises the issue of how to test untrusted quantum devices, which brings us to the very latest, Vazirani's second lecture on 5 December, "Science in the Limit of Exponential Complexity." Here's the abstract:
One of the remarkable discoveries of the last quarter century is that quantum many-body systems violate the extended Church-Turing thesis and exhibit exponential complexity -- describing the state of such a system of even a few hundred particles would require a classical memory larger than the size of the Universe. This means that a test of quantum mechanics in the limit of high complexity would require scientists to experimentally study systems that are inherently exponentially more powerful than human thought itself! 
A little reflection shows that the standard scientific method of "predict and verify" is no longer viable in this regime, since a calculation of the theory's prediction is rendered computationally infeasible. Does this mean that it is impossible to do science in this regime? A remarkable connection with the theory of interactive proof systems (the crown jewel of computational complexity theory) suggests a potential way around this impasse: interactive experiments. Rather than carry out a single experiment, the experimentalist performs a sequence of experiments, and rather than predicting the outcome of each experiment, the experimentalist checks a posteriori that the outcomes of the experiments are consistent with each other and the theory to be tested. Whether this approach will formally work is intimately related to the power of a certain kind of interactive proof system; a question that is currently wide open. Two natural variants of this question have been recently answered in the affirmative, and have resulted in a breakthrough in the closely related area of testing untrusted quantum devices. 
Wow. Now my head is really spinning.  I'm there too, for sure.

Monday, November 24, 2014

More on Big Data

An earlier post, "Big Data the Big Hassle," waxed negative. So let me now give credit where credit is due.

What's true in time-series econometrics is that it's very hard to list the third-most-important, or even second-most-important, contribution of Big Data. Which makes all the more remarkable the mind-boggling -- I mean completely off-the-charts -- success of the first-most-important contribution: volatility estimation from high-frequency trading data. Yacine Ait-Sahalia and Jean Jacod give a masterful overview in their new book, High-Frequency Financial Econometrics.

What do financial econometricians learn from high-frequency data? Although largely uninformative for some purposes (e.g., trend estimation), high-frequency data are highly informative for others (volatility estimation), an insight that traces at least to Merton's early work. Roughly put: as we sample returns arbitrarily finely, we can infer underlying volatility arbitrarily well. Accurate volatility estimation and forecasting, in turn, are crucial for financial risk management, asset pricing, and portfolio allocation. And it's all facilitated by the trade-by-trade data captured in modern electronic markets.
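Here's the rough idea in a stylized simulation (my own illustration with hypothetical numbers, not the estimators from the book): realized variance, the sum of squared intraday returns, pins down daily volatility ever more precisely as we sample more finely.

```python
import numpy as np

# Simulate one day of intraday returns at increasingly fine sampling,
# and estimate daily volatility by realized variance (sum of squared
# intraday returns).
rng = np.random.default_rng(0)
true_daily_vol = 0.01                        # hypothetical 1% daily volatility

for n in (10, 100, 10_000):                  # intraday observations per day
    r = rng.normal(0.0, true_daily_vol / np.sqrt(n), size=n)
    realized_var = np.sum(r**2)              # realized variance estimator
    print(n, np.sqrt(realized_var))          # estimate sharpens as n grows
```

By contrast, the sample mean of the intraday returns estimates the drift no better than a single daily return would, matching the point above that high-frequency data are largely uninformative about trend.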

In stressing "high frequency" financial data, I have thus far implicitly stressed only the massive time-series dimension, with its now nearly-continuous record. But of course we're ultimately concerned with covariance matrices, not just scalar variances, for tens of thousands of assets, so the cross-section dimension is huge as well. (A new term: "Big Big Data"? No, please, no.) Indeed multivariate now defines significant parts of both the theoretical and applied research frontiers; see Andersen et al. (2013).

Monday, November 17, 2014

Quantitative Tools for Macro Policy Analysis

Penn's First Annual PIER Workshop on Quantitative Tools for Macroeconomic Policy Analysis will take place in May 2015.  The poster appears below (and here if the one below is a bit too small), and the website is here. We are interested in contacting anyone who might benefit from attending. Research staff at central banks and related organizations are an obvious focal point, but all are welcome. Please help spread the word, and of course, please consider attending. We hope to see you there!

Monday, November 10, 2014

Penn Econometrics Reading Group Materials Online

Locals who come to the Friday research/reading group will obviously be interested in this post, but others may also be interested in following and influencing the group's path.

The schedule has been online here for a while. Starting now, it will contain not only paper titles but also links to papers when available. (Five are there now.) We'll leave the titles and papers up, instead of deleting them as was the earlier custom. We'll also try to post presenters' slides as we move forward.

Don't hesitate to suggest new papers that would be good for the Group.

Sunday, November 2, 2014

A Tribute to Lawrence R. Klein

(Remarks given at the Klein Legacy Dinner, October 24, 2014, Lower Egyptian Gallery, University of Pennsylvania Museum of Archaeology and Anthropology.)

I owe an immense debt of gratitude to Larry Klein, who helped guide, support and inspire my career for more than three decades. Let me offer just a few vignettes.

Circa 1979 I was an undergraduate studying finance and economics at Penn's Wharton School, where I had my first economics job: research assistant at Larry's firm, Wharton Econometric Forecasting Associates (WEFA). I didn't know Larry at the time; I got the job via a professor whose course I had taken, who was a friend of a friend of Larry's. I worked for a year or so, perhaps ten or fifteen hours per week, on regional electricity demand modeling and forecasting. Down the hall were the U.S. quarterly and annual modeling groups, where I eventually moved and spent another year. Lots of fascinating people roamed the maze of cubicles, from eccentric genius-at-large Mike McCarthy, to Larry and Sonia Klein themselves, widely revered within WEFA as god and goddess. During fall of 1980 I took Larry's Wharton graduate macro-econometrics course and got to know him. He won the Nobel Prize that semester, on a class day, resulting in a classroom filled with television cameras. What a heady mix!

I stayed at Penn for graduate studies, moving in 1981 from Wharton to Arts and Sciences, home of the Department of Economics and Larry Klein. I have no doubt that my decision to stay at Penn, and to move to the Economics Department, was heavily dependent on Larry's presence there. During the summer following my first year of the Ph.D. program, I worked on a variety of country models for Project LINK, under the supervision of Larry and another leading modeler in the Klein tradition, Peter Pauly. It turned out that the LINK summer job pushed me over the annual salary cap for a graduate student -- $6,000 or so in 1982 dollars, if I remember correctly -- so Larry and Peter paid me the balance in kind, taking me to the Project LINK annual meeting in Wiesbaden, Germany. More excitement, and also my first trip abroad.

Both Larry and Peter helped supervise my 1986 Penn Ph.D. dissertation, on ARCH modeling of asset return volatility. I couldn't imagine a better trio of advisors: Marc Nerlove as main advisor, with committee members Larry and Peter (who introduced me to ARCH). I took a job at the Federal Reserve Board, with the Special Studies Section led by Peter Tinsley, a pioneer in optimal control of macro-econometric models. Circa 1986 Larry had more Ph.D. students at the Board than anyone else, by a wide margin. Surely that helped me land the Special Studies job. Another Klein student, Glenn Rudebusch, also went from Penn to the Board that year, and we wound up co-authoring a dozen articles and two books over nearly thirty years. My work and lasting friendship with Glenn trace in significant part to our melding in the Klein crucible.

I returned to Penn in 1989 as an assistant professor. Although I have no behind-the-scenes knowledge, it's hard to imagine that Larry's input didn't contribute to my invitation to return. Those early years were memorable for many things, including econometric socializing. During the 1990's my wife Susan and I had lots of parties at our home for faculty and students. The Kleins were often part of the group, as were Bob and Anita Summers, Herb and Helene Levine, Bobby and Julie Mariano, Jere Behrman and Barbara Ventresco, Jerry Adams, and many more. I recall a big party on one of Penn's annual Economics Days, which that year celebrated The Keynesian Revolution, Larry's landmark 1947 monograph.

The story continues, but I'll mention just one more thing. I was honored and humbled to deliver the Lawrence R. Klein Lecture at the 2005 Project LINK annual meeting in Mexico City, some 25 years after Larry invited a green 22-year-old to observe the 1982 meeting in Wiesbaden.

I have stressed guidance and support, but in closing let me not forget inspiration, which Larry also provided for three decades, in spades. He was the ultimate scholar, focused and steady, and the ultimate gentleman, remarkably gracious under pressure.

A key point, of course, is that it's not about what Larry provided me, whether guidance, support or inspiration -- I'm just one member of this large group. Larry generously provided for all of us, and for thousands of others who couldn't be here tonight, enriching all our lives. Thanks Larry. We look forward to working daily to honor and advance your legacy.


(For more, see the materials here.)

Tuesday, October 21, 2014

Rant: Academic "Letterhead" Requirements

(All rants, including this one, are here.)

Countless times, from me to Chair/Dean xxx at Some Other University: 

I am happy to help with your evaluation of Professor zzz. This email will serve as my letter. [email here]...
Countless times, from Chair/Dean xxx to me: 
Thanks very much for your thoughtful evaluation. Can you please put it on your university letterhead and re-send?
Fantasy response from me to Chair/Dean xxx:
Sure, no problem at all. My time is completely worthless, so I'm happy to oblige, despite the fact that email conveys precisely the same information and is every bit as legally binding (whatever that even means in this context) as a "signed" "letter" on "letterhead." So now I’ll copy my email, try to find some dusty old Word doc letterhead on my hard drive, paste the email into the Word doc, try to beat it into submission depending on how poor the formatting / font / color / blocking looks when first pasted, print from Word to pdf, attach the pdf to a new email, and re-send it to you. How 1990’s.
Actually last week I did send something approximating the fantasy email to a dean at a leading institution. I suspect that he didn't find it amusing. (I never heard back.) But as I also said at the end of that email,
"Please don’t be annoyed. I...know that these sorts of 'requirements' have nothing to do with you per se. Instead I’m just trying to push us both forward in our joint battle with red tape."

Monday, October 13, 2014

Lawrence R. Klein Legacy Colloquium

In Memoriam

The Department of Economics of the University of Pennsylvania, with kind support from the School of Arts and Sciences, the Wharton School, PIER and IER, is pleased to host a colloquium, "The Legacy of Lawrence R. Klein: Macroeconomic Measurement, Theory, Prediction and Policy," on Penn’s campus, Saturday, October 25, 2014. The full program and related information are here. We look forward to honoring Larry’s legacy throughout the day. Please join us if you can.

  • Olav Bjerkholt, Professor of Economics, University of Oslo
  • Harold L. Cole, Professor of Economics and Editor of International Economic Review, University of Pennsylvania
  • Thomas F. Cooley, Paganelli-Bull Professor of Economics, New York University 
  • Francis X. Diebold, Paul F. Miller, Jr. and E. Warren Shafer Miller Professor of Economics, University of Pennsylvania
  • Jesus Fernandez-Villaverde, Professor of Economics, University of Pennsylvania
  • Dirk Krueger, Professor and Chair of the Department of Economics, University of Pennsylvania
  • Enrique G. Mendoza, Presidential Professor of Economics and Director of Penn Institute for Economic Research, University of Pennsylvania
  • Glenn D. Rudebusch, Executive Vice President and Director of Research, Federal Reserve Bank of San Francisco
  • Frank Schorfheide, Professor of Economics, University of Pennsylvania
  • Christopher A. Sims, John F. Sherrerd ‘52 University Professor of Economics, Princeton University 
  • Ignazio Visco, Governor of the Bank of Italy

Monday, October 6, 2014

Intuition for Prediction Under Bregman Loss

Elements of the Bregman family of loss functions, denoted \(B(y, \hat{y})\), take the form
$$B(y, \hat{y}) = \phi(y) - \phi(\hat{y}) - \phi'(\hat{y}) (y-\hat{y}),$$
where \(\phi: \mathcal{Y} \rightarrow \mathbb{R}\) is any strictly convex function, and \(\mathcal{Y}\) is the support of \(Y\).

Several readers have asked for intuition for the equivalence between the predictive optimality of \( E[y|\mathcal{F}]\) and the Bregman loss family \(B(y, \hat{y})\). The simplest answers come from the proof itself, which is straightforward.

First consider \(B(y, \hat{y}) \Rightarrow E[y|\mathcal{F}]\).  The derivative of expected Bregman loss with respect to \(\hat{y}\) is
$$\frac{\partial}{\partial \hat{y}} E[B(y, \hat{y})] = \frac{\partial}{\partial \hat{y}} \int B(y,\hat{y}) \; f(y|\mathcal{F}) \; dy$$
$$= \int \frac{\partial}{\partial \hat{y}} \left ( \phi(y) - \phi(\hat{y}) - \phi'(\hat{y}) (y-\hat{y}) \right ) \; f(y|\mathcal{F}) \; dy$$
$$= \int \left ( -\phi'(\hat{y}) - \phi''(\hat{y}) (y-\hat{y}) + \phi'(\hat{y}) \right ) \; f(y|\mathcal{F}) \; dy$$
$$= -\phi''(\hat{y}) \left( E[y|\mathcal{F}] - \hat{y} \right).$$
Hence the first order condition is
$$-\phi''(\hat{y}) \left(E[y|\mathcal{F}] - \hat{y} \right) = 0,$$
so the optimal forecast is the conditional mean, \( E[y|\mathcal{F}] \).

Now consider \( E[y|\mathcal{F}] \Rightarrow B(y, \hat{y}) \). It's a simple task of reverse-engineering. We need the f.o.c. to be of the form
$$\text{const} \times \left(E[y|\mathcal{F}] - \hat{y} \right) = 0,$$
so that the optimal forecast is the conditional mean, \( E[y|\mathcal{F}] \). Inspection reveals that \( B(y, \hat{y}) \) (and only \( B(y, \hat{y}) \)) does the trick.
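The equivalence is also easy to check numerically. Here's a minimal sketch (my own illustration): take \( \phi(x) = e^x \), a strictly convex choice whose implied Bregman loss punishes over- and under-prediction very differently, and verify that the expected-loss minimizer is nonetheless the mean.

```python
import numpy as np

# Numerical check of the f.o.c. for an asymmetric Bregman member.
# With phi(x) = exp(x), phi and phi' coincide, and the loss is
# clearly asymmetric -- yet the risk minimizer is still E[y].
rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=200_000)       # draws from the predictive density

phi = np.exp
def bregman(y, yhat):
    return phi(y) - phi(yhat) - phi(yhat) * (y - yhat)

grid = np.linspace(-1.0, 2.0, 301)           # candidate forecasts
risk = [bregman(y, g).mean() for g in grid]  # simulated expected loss
yhat_star = grid[int(np.argmin(risk))]
print(yhat_star)                             # close to E[y] = 0.5
```

Swapping in a non-Bregman asymmetric loss (linlin, say) would move the minimizer away from the mean, which is exactly the distinction at issue.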

One might still want more intuition for the optimality of the conditional mean under Bregman loss, despite its asymmetry.  The answer, I conjecture, is that the Bregman family is not asymmetric! At least not for an appropriate definition of asymmetry in the general \(L(y, \hat{y})\) case, which is more complicated and subtle than the \(L(e)\) case.  Asymmetric loss plots like those in Patton (2014), on which I reported last week, are for fixed \(y\) (in Patton's case, \(y=2\) ), whereas for a complete treatment we need to look across all \(y\). More on that soon.

[I would like to thank -- without implicating -- Minchul Shin for helpful discussions.]

Monday, September 29, 2014

A Mind-Blowing Optimal Prediction Result

I concluded my previous post with:
Consider, for example, the following folk theorem: "Under asymmetric loss, the optimal prediction is conditionally biased." The folk theorem is false. But how can that be?
What's true is this: The conditional mean is the L-optimal forecast if and only if the loss function L is in the Bregman family, given by
$$L(y, \hat{y}) = \phi (y) - \phi (\hat{y}) - \phi ' ( \hat{y}) (y - \hat{y}).$$ Quadratic loss is in the Bregman family, so the optimal prediction is the conditional mean.  But the Bregman family has many asymmetric members, for which the conditional mean remains optimal despite the loss asymmetry. It just happens that the most heavily-studied asymmetric loss functions are not in the Bregman family (e.g., linex, linlin), so the optimal prediction is not the conditional mean.

So the Bregman result (basically unseen in econometrics until Patton's fine new 2014 paper) is not only (1) a beautiful and perfectly-precise (necessary and sufficient) characterization of optimality of the conditional mean, but also (2) a clear statement that the conditional mean can be optimal even under highly-asymmetric loss.

Truly mind-blowing! Indeed it sounds bizarre, if not impossible. You'd think that such asymmetric Bregman families must be somehow pathological or contrived. Nope. Consider, for example, Gneiting's (2011) "homogeneous" Bregman family, obtained by taking \( \phi (x; k) = |x|^k \) for \( k>1 \), and Patton's (2014) "exponential" Bregman family, obtained by taking \( \phi (x; a) =  2 a^{-2} \exp(ax) \) for \(a \ne 0  \). Patton (2014) plots them (see Figure 1 from his paper, reproduced below with his kind permission). The Gneiting homogeneous Bregman family has a few funky plateaus on the left, but certainly nothing bizarre, and the Patton exponential Bregman family has nothing funky whatsoever. Look, for example, at the upper right element of Patton's figure. Perfectly natural looking -- and highly asymmetric.

For your reading pleasure, see: Bregman (1967); Savage (1971); Christoffersen and Diebold (1997); Gneiting (2011); Patton (2014).

Monday, September 22, 2014

Prelude to a Mind-Blowing Result

A mind-blowing optimal prediction result will come next week. This post sets the stage.

My earlier post, "Musings on Prediction Under Asymmetric Loss," got me thinking and re-thinking about the predictive conditions under which the conditional mean is optimal, in the sense of minimizing expected loss.

To strip things to the simplest case possible, consider a conditionally-Gaussian process.

(1) Under quadratic loss, the conditional mean is of course optimal. But the conditional mean is also optimal under other loss functions, like absolute-error loss (in general the conditional median is optimal under absolute-error loss, but by symmetry of the conditionally-Gaussian process, the conditional median is the conditional mean).

(2) Under asymmetric loss like linex or linlin, the conditional mean is generally not the optimal prediction. One would naturally expect the optimal forecast to be biased, to lower the probability of making errors of the more hated sign. That intuition is generally correct. More precisely, the following result from Christoffersen and Diebold (1997) obtains:
If \(y_{t}\) is a conditionally Gaussian process and \( L(e_{t+h|t}) \) is any loss function defined on the \(h\)-step-ahead prediction error \(e_{t+h|t} = y_{t+h} - y_{t+h|t}\), then the \(L\)-optimal predictor is of the form \begin{equation} y_{t+h | t} = \mu _{t+h,t} +  \alpha _{t}, \end{equation}where \( \mu _{t+h,t} = E(y_{t+h} | \Omega_t) \), \( \Omega_t = \{ y_t, y_{t-1}, \ldots \} \), and \(\alpha _{t}\) depends only on the loss function \(L\) and the conditional prediction-error variance \( var(e_{t+h|t} | \Omega _{t} )\).
That is, the optimal forecast is a "shifted" version of the conditional mean, where the generally time-varying bias depends only on the loss function (no explanation needed) and on the conditional variance (explanation: when the conditional variance is high, you're more likely to make a large error, including an error of the sign you hate, so under asymmetric loss it's optimal to inject more bias at such times).
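A minimal numerical sketch (my own illustration, with hypothetical parameter values) makes the shift concrete for linlin loss. With weight \(a\) on positive errors and \(b\) on negative errors, the optimal predictor is the \(a/(a+b)\) conditional quantile, i.e., the conditional mean plus a bias \( \alpha = \sigma \, \Phi^{-1}(a/(a+b)) \) that depends only on the loss and the conditional standard deviation:

```python
import numpy as np
from statistics import NormalDist

# Linlin loss: L(e) = a*e for e > 0, -b*e for e <= 0, with e = y - yhat.
a, b = 3.0, 1.0            # under-predictions hated three times as much
mu, sigma = 0.0, 2.0       # conditional mean and std of the Gaussian process

alpha = sigma * NormalDist().inv_cdf(a / (a + b))   # closed-form bias

# Numerical check: simulate y and grid-search the expected-loss minimizer
rng = np.random.default_rng(2)
y = rng.normal(mu, sigma, size=200_000)
grid = np.linspace(-1.0, 3.0, 401)
risk = [np.mean(np.where(y - g > 0, a * (y - g), b * (g - y))) for g in grid]
yhat_star = grid[int(np.argmin(risk))]
print(yhat_star, mu + alpha)   # numerical optimum is close to the shifted mean
```

Doubling sigma doubles alpha, matching the intuition above: more conditional variance, more optimal bias.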

(1) and (2) are true. A broad and correct lesson emerging from them is that the conditional mean is the central object for optimal prediction under any loss function. Either it is the optimal prediction, or it's a key ingredient.

But casual readings of (1) and (2) can produce false interpretations. Consider, for example, the following folk theorem: "Under asymmetric loss, the optimal prediction is conditionally biased." The folk theorem is false. But how can that be? Isn't the folk theorem basically just (2)?

Things get really interesting.

To be continued...

Monday, September 15, 2014

1976 NBER-Census Time Series Conference

What a great blast from the past -- check out the program of the 1976 NBER-Census Time-Series Conference. (Thanks to Bill Wei for forwarding, via Hang Kim.)

The 1976 conference was a pioneer in bridging time-series econometrics and statistics. Econometricians at the table included Zellner, Engle, Granger, Klein, Sims, Howrey, Wallis, Nelson, Sargent, Geweke, and Chow. Statisticians included Tukey, Durbin, Bloomfield, Cleveland, Watts, and Parzen. Wow!

The 1976 conference also clearly provided the model for the subsequent long-running and hugely-successful NBER-NSF Time-Series Conference, the hallmark of which is also bridging the time-series econometrics and statistics communities. An historical listing is here, and the tradition continues with the upcoming 2014 NBER-NSF meeting at the Federal Reserve Bank of St. Louis. (Registration deadline Wednesday!)

Monday, September 8, 2014

Network Econometrics at Dinner

At a seminar dinner at Duke last week, I asked the leading young econometrician at the table for his forecast of the Next Big Thing, now that the partial-identification set-estimation literature has matured. The speed and forcefulness of his answer -- network econometrics -- raised my eyebrows, and I agree with it. (Obviously I've been working on network econometrics, so maybe he was just stroking me, but I don't think so.) Related, the Acemoglu-Jackson 2014 NBER Methods Lectures, "Theory and Application of Network Models," are now online (both videos and slides). Great stuff!

Tuesday, September 2, 2014

Connectedness Site Now Up

The Financial and Macroeconomic Connectedness site is now up, thanks largely to the hard work of Kamil Yilmaz and Mert Demirer. It implements the Diebold-Yilmaz framework for network connectedness measurement in global stock, sovereign bond, FX and CDS markets, both statically and dynamically (in real time). It includes results, data, code, bibliography, etc. Presently it's all financial markets and no macro (e.g., no global business cycle connectedness), but macro is coming soon. Check back in the coming months as the site grows and evolves.

Monday, August 25, 2014

Musings on Prediction Under Asymmetric Loss

As has been known for more than a half-century, linear-quadratic-Gaussian (LQG) decision/control problems deliver certainty equivalence (CE). That is, in LQG situations we can first predict/extract (form a conditional expectation) and then simply plug the result into the rest of the problem. Hence the huge literature on prediction under quadratic loss, without specific reference to the eventual decision environment.

But two-step forecast-decision separation (i.e., CE) is very special. Situations of asymmetric loss, for example, immediately diverge from LQG, so certainty equivalence is lost. That is, the two-step CE prescription of “forecast first, and then make a decision conditional on the forecast” no longer works under asymmetric loss.

Yet forecasting under asymmetric loss -- again without reference to the decision environment -- seems to pass the market test. People are interested in it, and a significant literature has arisen. (See, for example, Elliott and Timmermann, "Economic Forecasting," Journal of Economic Literature, 46, 3-56.)

What gives? Perhaps the implicit hope is that CE two-step procedures might be acceptably-close approximations to fully-optimal procedures even in non-CE situations. Maybe they are, sometimes. Or perhaps we haven't thought enough about non-CE environments, and the literature on prediction under asymmetric loss is misguided. Maybe it is, sometimes. Maybe it's a little of both.

Monday, August 18, 2014

Models Didn't Cause the Crisis

Some of the comments engendered by the Black Swan post remind me of something I've wanted to say for a while: In sharp contrast to much popular perception, the financial crisis wasn't caused by models or modelers.

Rather, the crisis was caused by huge numbers of smart, self-interested people involved with the financial services industry -- buy-side industry, sell-side industry, institutional and retail customers, regulators, everyone -- responding rationally to the distorted incentives created by too-big-to-fail (TBTF), sometimes consciously, often unconsciously. Of course modelers were part of the crowd looking the other way, but that misses the point: TBTF coaxed everyone into looking the other way. So the key to financial crisis management isn't as simple as executing the modelers, who perform invaluable and ongoing tasks. Instead it's credibly committing to end TBTF, but no one has found a way. Ironically, Dodd-Frank steps backward, institutionalizing TBTF, potentially making the financial system riskier now than ever. Need it really be so hard to end TBTF? As Nick Kiefer once wisely said (as the cognoscenti rolled their eyes), "If they're too big to fail, then break them up."

[For more, see my earlier financial regulation posts: part 1, part 2, and part 3.]

Monday, August 11, 2014

You Can Now Browse by Topic

You can now browse No Hesitations by topic.  Check it out -- just look in the right column, scrolling down a bit. I hope it's useful.

On Rude and Risky "Calls for Papers"

You have likely seen calls for papers that include this script, or something similar: 
You will not hear from the organizers unless they decide to use your paper.
It started with one leading group's calls, which go even further:
You will not hear from the organizers unless they decide to use your paper.  They are not journal editors or program committee chairmen for a society. 
Now it's spreading.

Bad form, folks.

(1) It's rude. Submissions are not spam to be acted upon by the organizers if interesting, and deleted otherwise. On the contrary, they're solicited, so the least the organizer can do is acknowledge receipt and outcome with costless "thanks for your submission" and "sorry but we couldn't use your paper" emails (which, by the way, are automatically sent in leading software like Conference Maker). As for gratuitous additions like "They are not journal editors or program committee chairmen...," well, I'll hold my tongue.

(2) It's risky. Consider an author whose fine submission somehow fails to reach the organizer, which happens surprisingly often. The lost opportunity hurts everyone -- the author whose career would have been enhanced, the organizer whose reputation would have been enhanced, and the conference participants whose knowledge would have been enhanced, not to mention the general advancement of science -- and no one is the wiser. That doesn't happen when the announced procedure includes acknowledgement of submissions, in which case the above author would simply email the organizer saying, "Hey, where's my acknowledgement? Didn't you receive my submission?"

(Note the interplay between (1) and (2). Social norms like "courtesy" arise in part to promote efficiency.)

Monday, August 4, 2014

The Black Swan Spectrum

Speaking of the newly-updated draft of Econometrics, now for some fun. Here's a question from the Chapter 6 EPC (exercises, problems and complements). Where does your reaction fall on the A-B spectrum below?
Nassim Taleb is a financial markets trader (and Wharton graduate) turned pop author. His book, The Black Swan, deals with many of the issues raised in this chapter. "Black swans" are seemingly impossible or very low-probability events -- after all, swans are supposed to be white -- that occur with annoying regularity in reality. Read his book. Where does your reaction fall on the A-B spectrum below?
A. Taleb offers crucial lessons for econometricians, heightening awareness in ways otherwise difficult to achieve. After reading Taleb, it's hard to stop worrying about non-normality, model misspecification, and so on.
B. Taleb belabors the obvious for hundreds of pages, arrogantly "informing" us that non-normality is prevalent, that all models are misspecified, and so on. Moreover, it takes a model to beat a model, and Taleb offers nothing new.
The book is worth reading, regardless of where your reaction falls on the A-B spectrum.

Thursday, July 31, 2014

Open Econometrics Text Updated for Fall Use

I have just posted an update of my introductory undergraduate Econometrics (book, slides, R code, EViews code, data, etc.). Warning: although it is significantly improved, it nevertheless remains highly (alas, woefully) preliminary and incomplete.

I intend to keep everything permanently "open," freely available on the web, continuously evolving and improving.

If you use the materials in your teaching this fall (and even if you don't), I would be grateful for feedback.

Monday, July 28, 2014

A Second NBER Econometrics Group?

The NBER is a massive consumer of econometrics, so it needs at least a group or two devoted to producing econometrics. Hence I'm thrilled that the "Forecasting and Empirical Methods in Macroeconomics and Finance" group, now led by Allan Timmermann and Jonathan Wright, continues to thrive. Timmermann-Wright is strongly and appropriately time-series in flavor, focusing on developing econometric methods for macroeconomics, financial economics, and other areas that feature time series prominently.

In my view, there's a strong and obvious argument favoring creation of a second NBER working group in econometrics, focusing on micro-econometrics. Quite simply, econometric methods are central to the NBER's mission, which has both macro/finance and micro components. Timmermann-Wright addresses the former, but there's still no explicit group addressing the latter. (The Bureau's wonderful and recently-instituted Econometrics Methods Lectures include micro, but the Methods Lectures are surveys/tutorials and hence fill a very different void.) An ongoing working group led by Chernozhukov-Imbens-Wooldridge (for example -- I'm just making this up) would be a fine addition and would nicely complement Timmermann-Wright.

Tuesday, July 22, 2014

Chinese Diebold-Rudebusch Yield Curve Modeling and Forecasting

A Chinese edition of Diebold-Rudebusch, Yield Curve Modeling and Forecasting: The Dynamic Nelson-Siegel Approach, just arrived. (I'm traveling -- actually at IMF talking about Diebold-Rudebusch among other things -- but Glenn informed me that he received it in San Francisco.) I'm not even sure that I knew it was in the works. Anyway, totally cool. I love the "DNS" ("Dynamic Nelson-Siegel") in the Chinese subtitle. Not sure how/where to buy it. In any event, the English first chapter is available free from Princeton University Press, and the English complete book is available almost for free (USD 39.50 -- as they used to say in MAD Magazine: Cheap!).

Sunday, July 20, 2014

Some History of NBER Econometrics

My last post led me to reminisce. So, for history buffs, here's a bit on the origins and development of the NBER working group on Forecasting and Empirical Methods in Macro and Finance. (No promises of complete accuracy -- some of my memory may be fuzzy.)

In the early 1990's Steve Durlauf had an idea for an "Empirical Methods in Macro" NBER group, and he asked me to join him in leading it. Bob Hall kindly supported the idea, so we launched. Some years later Steve stepped down, Ken West joined, and we decided to add "Finance." I was also leading a "Forecasting" group with highly-related interests, so we merged the two, and Diebold-Durlauf "Empirical Methods in Macro" then became Diebold-West, "Forecasting and Empirical Methods in Macro and Finance." Quite a mouthful, but it worked!

We met at least once per year at the NBER Summer Institute, sometimes more. Papers drawn from the meetings sometimes appeared as journal symposia. I'm particularly fond of those in International Economic Review (1998, 811-1144), which contains Andersen-Bollerslev on realized volatility from underlying diffusions, Rudebusch on measuring monetary policy in VAR's (with discussion by Sims and a feisty rejoinder by Rudebusch), Christoffersen on interval forecast calibration, Diebold-Gunther-Tay on density forecast calibration, among others, and Review of Economics and Statistics (1999, 553-673), which contains Baxter-King on bandpass filters for business-cycle measurement, Kim-Nelson on measuring changes in business-cycle stability using Bayesian dynamic-factor Markov-switching models, and Gallant-Hsu-Tauchen on range-based asset return volatility estimation, among others.

I eventually stepped down around 2005, and Mark Watson joined. (Mark and I had earlier edited another group symposium in Journal of Applied Econometrics (1996, 453-593).) So Diebold-West became Watson-West, and the group continued to thrive. In 2013, Mark and Ken passed their batons to Allan Timmermann and Jonathan Wright, who are off and running. This summer's program was one of the best ever, and the meeting was heavily over-subscribed.

Tuesday, July 15, 2014

Time to Re-Think NBER Programs?

Check out John Cochrane's recent NBER post if you didn't already. It ends with:
A last thought. Economic Fluctuations [an NBER program] merged with Growth [another NBER program] in the mid 1990s. At the time there was a great confluence of method as well as interest. Growth theorists were studying growth with Bellman equations, dynamic general equilibrium models of innovation and transmission of ideas, thinking about where productivity shocks came from. Macroeconomists were using Bellman equations and studying dynamic general equilibrium models with stochastic technology, along with various frictions and other propagation mechanisms. 
That confluence has now diverged. ... When Daron Acemoglu, who seems to know everything about everything, has to preface his comments on macro papers with repeated disclaimers of lack of expertise, it's clear that the two fields [fluctuations and growth] really have gone their separate ways. Perhaps it's time to merge fluctuations with finance, where we seem to be talking about the same issues and using the same methods, and to merge growth with institutions and political or social economics.

[Material in square brackets and bold added by me.]
I agree. Fluctuations and finance belong together. (I'm talking about asset pricing broadly defined, not corporate finance.) Yes, the methods are basically the same, and moreover, the substance is inextricably linked. Aspects of fluctuations are effectively the fundamentals priced in financial markets, and conversely, financial markets can most definitely impact fluctuations. (Remember that little recession a few years back?)

The NBER Summer Institute group in which I most actively participate, Forecasting and Empirical Methods in Macroeconomics and Finance (one of several so-called working groups under the umbrella of the NBER's program in Economic Fluctuations and Growth), has been blending fluctuations and finance for decades. But we're intentionally narrowly focused on applied econometric aspects. It would be wonderful and appropriate to see broader fluctuations-finance links formalized at the NBER, not just at the working group level, but also at the program level.

Sunday, June 29, 2014

ADS Perspective on the First-Quarter Contraction

Following on my last post about the first-quarter GDP contraction, now look at the FRB Philadelphia's Aruoba-Diebold-Scotti (ADS) Index. 2014Q1 is the rightmost downward blip. It's due mostly to the huge drop in expenditure-side GDP (GDP_E), which is one of the indicators in the ADS index. But it's just a blip, nothing to be too worried about. [Perhaps one of these days we'll get around to working with FRB Philadelphia to replace GDP_E with GDPplus in the ADS Index, or simply to include income-side GDP (GDP_I) directly as an additional indicator in the ADS Index.]

Plot of ADS Business Conditions Index

Source: FRB Philadelphia

One might wonder why the huge drop in measured GDP_E didn't cause a bigger drop in the ADS Index. The reason is that all real activity indicators are noisy (GDP_E is just one), and by averaging across them, as in ADS, we can eliminate much of the noise. Moreover, most of the other ADS component indicators fared much better. (See the component indicator plots.)
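The noise-averaging logic is easy to verify in simulation. A stylized sketch (six indicators with equal, independent noise is my simplifying assumption, not the actual ADS setup):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 6                         # periods and number of indicators (illustrative)
truth = rng.normal(0.0, 1.0, T)       # latent real-activity factor
indicators = truth + rng.normal(0.0, 1.0, (N, T))  # each indicator = truth + idiosyncratic noise

mse_single = np.mean((indicators[0] - truth) ** 2)             # roughly 1
mse_average = np.mean((indicators.mean(axis=0) - truth) ** 2)  # roughly 1/N
print(mse_single, mse_average)
```

With independent noise, averaging N indicators cuts the noise variance by a factor of N, which is why one noisy component's bad quarter moves the index only modestly.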

Note well the important lesson: both the ADS Index (designed for real-time analysis of broad real activity) and GDPplus (designed mostly for historical analysis of real GDP, an important part of real activity) reduce, if not eliminate, measurement error by "averaging it out."

All told, ADS paints a clear picture: conditional on the underlying indicator data available now, real growth appears to be typical (ADS is constructed so that 0 corresponds to average growth) -- not especially strong, but simultaneously, not especially weak.

Friday, June 27, 2014

The First Quarter GDP Contraction was Less Severe than you Think

As discussed in an earlier post, my co-authors and I believe that our "GDPplus," obtained by optimally blending the noisy expenditure- and income-side GDP estimates, provides a superior U.S. GDP measure. (Check it out online; the Federal Reserve Bank of Philadelphia now calculates and reports it.) A few days ago we revised and re-posted the working paper on which it's based (Aruoba, Diebold, Nalewaik, Schorfheide, and Song, "Improving GDP Measurement: A Measurement Error Perspective," Manuscript, University of Maryland, Federal Reserve Board and University of Pennsylvania, Revised June 2014).

It's important to note that GDPplus is not simply a convex combination of the expenditure- and income-side estimates; rather, it is produced via the Kalman filter, which averages optimally over both space and time. Hence, although GDPplus is usually between the expenditure- and income-side estimates, it need not be. Presently we're in just such a situation, as shown in the graph below. 2014Q1 real growth as measured by GDPplus (in red) is well above both of the corresponding expenditure- and income-side GDP growth estimates (in black), which are almost identical. 
Plot of GDPplus
Source:  FRB Philadelphia
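The over-both-space-and-time point can be seen in a stylized scalar state-space sketch. To be clear, the AR(1) state and all parameters below are illustrative stand-ins of my own, not the actual GDPplus specification:

```python
import numpy as np

rng = np.random.default_rng(2)
T, rho = 2000, 0.8                     # sample size and persistence (illustrative)
sig_eta, sig_e, sig_i = 1.0, 0.7, 0.7  # state and measurement noise s.d.'s (illustrative)

# Latent "true" growth follows an AR(1); we see two noisy measurements of it
g = np.zeros(T)
for t in range(1, T):
    g[t] = rho * g[t - 1] + sig_eta * rng.normal()
gdp_e = g + sig_e * rng.normal(size=T)  # stand-in for expenditure-side GDP growth
gdp_i = g + sig_i * rng.normal(size=T)  # stand-in for income-side GDP growth

# Kalman filter: scalar state, 2-d observation [gdp_e, gdp_i]
H = np.array([1.0, 1.0])                # both series measure the same state
R = np.diag([sig_e**2, sig_i**2])       # measurement-error covariance
a, P = 0.0, sig_eta**2 / (1 - rho**2)   # prior = unconditional moments
filtered = np.empty(T)
for t in range(T):
    y = np.array([gdp_e[t], gdp_i[t]])
    S = P * np.outer(H, H) + R          # innovation covariance
    K = P * H @ np.linalg.inv(S)        # Kalman gain
    a = a + K @ (y - H * a)             # update: blend prediction with both measurements
    P = P - (K @ H) * P
    filtered[t] = a
    a, P = rho * a, rho**2 * P + sig_eta**2  # predict next period

avg = 0.5 * (gdp_e + gdp_i)             # naive contemporaneous average
filtered_mse = np.mean((filtered - g) ** 2)
avg_mse = np.mean((avg - g) ** 2)
print(filtered_mse, avg_mse)
```

Because the filtered estimate also exploits the state's persistence (averaging over time, not just across the two measurements), it beats the naive contemporaneous average, and it can land outside the interval spanned by the two current measurements when past information pulls it there.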

Thursday, June 19, 2014

Fine Work by Mueller and Watson at ECB

The ECB's Eurotower in Frankfurt, Germany
ECB Eurotower

Ulrich Mueller and Mark Watson's "Measuring Uncertainty About Long-Run Predictions" is important and original. I like it more (and understand it more) every time I see it. The latest was last week in Frankfurt at the ECB's Eighth Annual Workshop on Forecasting. No sense transcribing my discussion; just view it directly.

Wednesday, June 18, 2014

Windows File Copy

Of course we've all wondered for decades, but during the usual summertime cleanup I recently had to copy massive numbers of files, so it's on my mind. Seriously, what is going on with the Windows file copy "remaining time" estimate? Could an average twelve-year-old not code a better algorithm? (Comic from XKCD.)

Saturday, June 14, 2014

Another 180 on Piketty's Measurement

My first Piketty Post unabashedly praised Piketty's measurement (if not his theory):
"Piketty's book truly shines on the data side. ... Its tables and figures...provide a rich and jaw-dropping image, like a new high-resolution photo of a previously-unseen galaxy. I'm grateful to Piketty for sending it our way, for heightening awareness, and for raising important questions."  
Measurement endorsements don't come much stronger.

Then I did a 180. Upon belatedly reading the Financial Times' Piketty piece, I felt I'd been had, truly had. Out poured my second Piketty Post, written in a near-rage, without time to digest Piketty's response.

Now, with the benefit of more time to read, re-read, and reflect, yes, I'm doing another 180. The bulk of the evidence suggests that the FT, not Piketty, is guilty of sloppiness. Piketty's response is convincing, and all told, his book appears to remain a model of careful social-science measurement (thoughtful discussion, meticulous footnotes, detailed online technical appendix, freely-available datasets, etc. -- see his website).

Ironically, then, as the smoke clears, my first Piketty post remains an accurate statement of my views.

Monday, June 9, 2014

Piketty's Empirics Are as Bad as His Theory

In my earlier Piketty post, I wrote, "If much of its 'reasoning' is little more than neo-Marxist drivel, much of its underlying measurement is nevertheless marvelous." The next day, recognizing the general possibility of a Reinhart-Rogoff error, but with no suspicion that anything was actually amiss, I added "(assuming of course that it's trustworthy)."

Perhaps I really should read some newspapers. Thanks to Boragan Aruoba for noting this, and for educating me. Turns out that the Financial Times -- clearly a centrist publication with no ax to grind -- got hold of Piketty's data (underlying source data, constructed series, etc.) and published a scathing May 23 indictment.

The chart above -- just one example -- is from The Economist, reporting on the FT piece. Somehow Piketty managed to fit the dark blue curves to the light blue dots of source data. Huh? Sure looks like he conveniently ignored a boatload of recent data that happen to work against him. Put differently, his fits appear much more revealing of his sharp prior view than of data-based information. Evidently he forgot to talk about that in his book.

In my view, Reinhart-Rogoff was a one-off and innocent (if unfortunate) mistake, whereas the FT analysis clearly suggests that Piketty's "mistakes," in contrast, are systematic and egregious.

Saturday, May 31, 2014

More on Piketty -- Oh God No, Please, No...


Piketty, Piketty, Piketty! How did the Piketty phenomenon happen? Surely Piketty must be one of the all-time great economists. Maybe even as great as Marx.
Yes, parts of the emerging backlash against Piketty's Capital resonate with me. Guido Menzio nails its spirit in a recent post, announcing to the Facebook universe that he'll "send you $10 and a nice Hallmark card with kitties if you refrain from talking/writing about Piketty's book for the next six months." (The irony of my now writing this Piketty post has not escaped me.)
As I see it, the problem is that Piketty's book is popularly viewed as a landmark contribution to economic theory, which it most definitely is not. In another Facebook post, leading economic theorist David Levine gets it right:
People keep referring to economists who have favorable views of Piketty's book. Leaving aside Krugman, I would be interested in knowing the name of any economist who asserts that Piketty's reasoning ... is other than gibberish.

So the backlash is focused on dubious "reasoning" touted as penetrating by a book-buying public that unfortunately can't tell scientific wheat from chaff. I'm there.

But what of Piketty's data and conclusion? I admire Piketty's data -- more on that below. I also agree with his conclusion, which I interpret broadly to be that the poor in developed countries have apparently become relatively much poorer since 1980, and that we should care, and that we should try to understand why.
In my view, Piketty's book truly shines on the data side. If much of its "reasoning" is little more than neo-Marxist drivel, much of its underlying measurement is nevertheless marvelous (assuming of course that it's trustworthy). Its tables and figures -- there's no need to look at anything else -- provide a rich and jaw-dropping image, like a new high-resolution photo of a previously-unseen galaxy. I'm grateful to Piketty for sending it our way, for heightening awareness, and for raising important questions. Now we just need those questions answered.

Tuesday, May 27, 2014

Absent No More

Hello my friends. I'm back. It's been a crazy couple of weeks, with end-of-year travel, crew regattas, graduations, etc.

A highlight was lecturing at the European University Institute (EUI) in Florence. I tortured a pan-European audience of forty or so Ph.D.'s, mostly from central bank research departments, with nine two-hour seminars on almost every paper I've ever written. (The syllabus is here, and related information is here.) But seriously, nothing is so exhilarating as a talented and advanced group completely interested in one's work. Above I show some of the participants with me, on the porch of our villa looking outward, and at right I show part of the storybook Tuscan scene on which they're gazing (just to make you insanely jealous).

Anyway, great things are happening in econometrics these days at EUI / Florence. Full-time EUI Economics faculty include Fabio Canova and Peter Hansen, frequent EUI visitors include Christian Brownlees (Universitat Pompeu Fabra, Barcelona) and Max Marcellino (Bocconi University, Milan), and just down the road is Giampiero Gallo (University of Florence). Wow!

Monday, May 12, 2014

Student Advice III: Succeeding in Academia

Lasse Pedersen's advice is wonderful. Study it. Of course there's something or another for everyone to quibble with. My pet quibble is that it's rather long. Lasse correctly suggests roughly twenty pages for a ninety minute talk, so presumably this slide deck is for a talk approaching three hours. But who cares? Study it, carefully. (And thanks to Glenn Rudebusch for calling it to my attention.)

Tuesday, May 6, 2014

Predictive Modeling, Causal Inference, and Imbens-Rubin (Among Others)

When most people (including me) say predictive modeling, they mean non-causal predictive modeling, i.e., addressing questions of "What will likely happen if the gears keep grinding in the usual way?" Examples are ubiquitous and tremendously important in economics, finance, business, etc., and that's just my little neck of the woods.

So-called causal modeling is of course also predictive (so more accurate terms would be non-causal predictive modeling and causal predictive modeling), but the questions are very different: "What will likely happen if a certain treatment (or intervention, or policy -- call it what you want) is applied?" Important examples again abound.

Credible non-causal predictive modeling is much easier to obtain than credible causal predictive modeling. (See my earlier related post.) That's why I usually stay non-causal, even if causal holds the potential for deeper science. I'd rather tackle simpler problems that I can actually solve, in my lifetime.

The existence of competing ferocious causal predictive modeling tribes, even just within econometrics, testifies to the unresolved difficulties of causal analysis. As I see it, the key issue in causal econometrics is what one might call instrument-generating mechanisms.

One tribe at one end of the spectrum, call it the "Deep Structural Modelers," relies almost completely on an economic theory to generate instruments. But will fashionable theory ten years hence resemble fashionable theory today, and generate the same instruments?

Another tribe at the other end of the spectrum, call it the "Natural Experimenters," relies little on theory, but rather on natural experiments, among other things, to generate instruments. But are the instruments so-generated truly exogenous and strong? And what if there's no relevant natural experiment available?

A variety of other instrument-generating mechanisms lie interior, but they're equally fragile.

Of course the above sermon may simply be naive drivel from a non-causal modeler lost in causal territory. We'll see. In any event I need more education (who doesn't?), and I have some causal reading plans for the summer:

Re-read Pearl, and read Heckman's critique.

Read White on settable systems and testing conditional independence.

Read Angrist-Pischke. (No, I haven't read it. It's been sitting on the shelf next to me for years, but the osmosis thing just doesn't seem to work.)

Read Wolpin, and Rust's review.

Read Dawid 2007 and Dawid 2014.

Last and hardly least, get and read Imbens-Rubin (not yet available but likely a blockbuster).

Wednesday, April 30, 2014

Student Advice II: How to Give a Seminar

Check out Jesse Shapiro's view. He calls it "How to Give an Applied Micro Talk," but it's basically relevant everywhere. Of course reasonable people might quibble with a few things said, or regret the absence of a few things unsaid, but overall it's apt and witty. I love the "No pressure though" (you'll have to find it for yourself).

Monday, April 28, 2014

More on Kaggle Forecasting Competitions: Performance Assessment and Forecast Combination

Here are a few more thoughts on Kaggle competitions, continuing my earlier Kaggle post.

It's a shame that Kaggle doesn't make available (post-competition) the test-sample data and the set of test-sample forecasts submitted. If they did, then lots of interesting things could be explored. For example:

(1) Absolute aspects of performance. What sorts of out-of-sample accuracy are attainable across different fields of application? How are the forecast errors distributed, both over time and across forecasters within a competition, and also across competitions -- Gaussian, fat-tailed, skewed? Do the various forecasts pass Mincer-Zarnowitz tests?

(2) Relative aspects of performance. Within and across competitions, what is the distribution of accuracy across forecasters? Are accuracy differences across forecasters statistically significant? Related, is the winner statistically significantly more accurate than a simple benchmark?

(3) Combination. What about combining forecasts? Can one regularly find combinations that outperform all or most individual forecasts? What combining methods perform best? (Simple averages or medians could be explored instantly. Exploration of "optimal" combinations would require estimating weights from part of the test sample.)
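Several of these checks are cheap to run once realizations and forecasts are in hand. The Mincer-Zarnowitz test in (1), for instance, is just a regression of realizations on forecasts: an ideal forecast delivers intercept 0 and slope 1. A minimal sketch on simulated data (the two forecasters below are hypothetical, constructed so that one is well calibrated and one is biased and inefficient):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
mu = rng.normal(0.0, 1.0, T)         # conditional mean of the target
y = mu + rng.normal(0.0, 0.5, T)     # realizations
good_f = mu                          # well-calibrated forecast
bad_f = 0.5 * mu + 0.3               # biased, inefficient forecast

def mincer_zarnowitz(y, f):
    """Regress y on a constant and f; return (intercept, slope)."""
    X = np.column_stack([np.ones_like(f), f])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # ideal forecast: intercept near 0, slope near 1

print(mincer_zarnowitz(y, good_f))   # close to [0, 1]
print(mincer_zarnowitz(y, bad_f))    # far from [0, 1]
```

In practice one would add standard errors and jointly test intercept = 0, slope = 1; the point here is only that, were the test-sample forecasts released, questions like (1)-(3) would take minutes to answer.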

Friday, April 25, 2014

Yield Curve Modeling Update

An earlier post, DNS/AFNS Yield Curve Modeling FAQs, ended with:

"What next? Job 1 is flexible incorporation of stochastic volatility, moving from \(A_0(N)\) to \(A_x(N)\) for \(x>0\), as bond yields are most definitely conditionally heteroskedastic. Doing so is important for everything from estimating time-varying risk premia to forming correctly-calibrated interval and density forecasts. Work along those lines is starting to appear. Christensen-Lopez-Rudebusch (2010), Creal-Wu (2013) and Mauabbi (2013) are good recent examples."

Good news. Creal-Wu (2013) is now Creal-Wu (2014), revised and extended to allow both spanned and unspanned stochastic volatility. Really nice stuff.

Monday, April 21, 2014

On Kaggle Forecasting Competitions, Part 1: The Hold-Out Sample(s)

Kaggle competitions are potentially pretty cool. Kaggle supplies in-sample data ("training data"), and you build a model and forecast out-of-sample data that they withhold ("test data"). The winner gets a significant prize, often $100,000.00 or more. Kaggle typically runs several such competitions simultaneously.

The Kaggle paradigm is clever because it effectively removes the ability for modelers to peek at the test data, which is a key criticism of model-selection procedures that claim to insure against finite-sample over-fitting by use of split samples. (See my earlier post, Comparing Predictive Accuracy, Twenty Years Later, and the associated paper of the same name.)

Well, sort of. Actually, Kaggle reveals part of the test data. In the time before a competition deadline, participants are typically allowed to submit one forecast per day, which Kaggle scores against part of the test data. Then, when the deadline arrives, forecasts are scored against the remaining test data. Suppose, for example, that there are 100 observations in total. Kaggle gives you 1, ..., 60 (training) and holds out 61, ..., 100 (test). But each day before the deadline, you can submit a forecast for 61, ..., 75, which they score against the held-out realizations of 61, ..., 75 and use to update the "leaderboard." Then when the deadline arrives, you submit your forecast for 61, ..., 100, but they score it only against the truly held-out realizations 76, ..., 100. So honesty is enforced for 76, ..., 100 (good), but convoluted games are played with 61, ..., 75 (bad). Is having a leaderboard really that important? Why not cut the games? Simply give people 1, ..., 75 and ask them to forecast 76, ..., 100.
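In code, the two-layer holdout in the hypothetical 100-observation example is nothing more than three index ranges:

```python
obs = list(range(1, 101))   # observations 1..100

train   = obs[:60]          # 1..60:   released as training data
public  = obs[60:75]        # 61..75:  scores the daily leaderboard submissions
private = obs[75:]          # 76..100: scores the final, binding rankings

print(len(train), len(public), len(private))
```

The public/private terminology is mine for clarity; the structural point is that only the final slice stays truly out of sample.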

To be continued.

Friday, April 18, 2014

Monday, April 14, 2014

Frequentists vs. Bayesians on the Exploding Sun

Time for something light. Check out xkcd, "A webcomic of romance, sarcasm, math, and language," written by a literate former NASA engineer. Really fine stuff. Thanks to my student M.D. for introducing me to it. Here's one on Fisher vs. Bayes:

Frequentists vs. Bayesians

Monday, April 7, 2014

Point Forecast Accuracy Evaluation

Here's a new one for your reading pleasure. Interesting history. Minchul and I went in trying to escape the expected loss minimization paradigm. We came out realizing that we hadn't escaped, but simultaneously, that not all loss functions are created equal. In particular, there's a direct and natural connection between our stochastic error divergence (SED) and absolute-error loss, elevating the status of absolute-error loss in our minds and perhaps now making it our default benchmark of choice. Put differently, "quadratic loss is for squares." (Thanks to Roger Koenker for the cute mantra.)

Diebold, F.X. and Shin, M. (2014), "Assessing Point Forecast Accuracy by Stochastic Divergence from Zero," PIER Working Paper 14-011, Department of Economics, University of Pennsylvania.

Abstract: We propose point forecast accuracy measures based directly on the divergence of the forecast-error c.d.f. F(e) from the unit step function at 0, and we explore several variations on the basic theme. We also provide a precise characterization of the relationship between our approach of stochastic error divergence (SED) minimization and the conventional approach of expected loss minimization. The results reveal a particularly strong connection between SED and absolute-error loss and generalizations such as the "check function" loss that underlies quantile regression.
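The SED/absolute-error connection has a quick numerical illustration. Assuming the divergence is measured as the L1 distance between the empirical error c.d.f. and the unit step at zero (my reading of the basic setup; the paper explores several variations), that distance reproduces mean absolute error:

```python
import numpy as np

rng = np.random.default_rng(4)
e = rng.normal(0.3, 1.0, 50_000)      # illustrative forecast errors

# L1 divergence of the empirical cdf F from the unit step at 0:
# integral of F over e < 0 plus integral of (1 - F) over e > 0.
grid = np.linspace(e.min(), e.max(), 200_001)
F = np.searchsorted(np.sort(e), grid, side="right") / e.size
step = (grid >= 0).astype(float)
gap = np.abs(F - step)
sed_l1 = np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(grid))  # trapezoid rule

print(sed_l1, np.mean(np.abs(e)))     # the two agree up to grid error
```

The identity is just integration by parts: the area between F and the step function splits into E[max(-e, 0)] plus E[max(e, 0)], i.e., E|e|, which is one way to see why absolute-error loss falls naturally out of the SED perspective.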