Monday, December 23, 2013

Holiday Haze

Your dedicated blogger is about to vanish in the holiday haze, presumably stumbling back sometime early in the new year.

Random thought: Obviously I guessed that I'd enjoy writing this blog, or I wouldn't have started, but I had no idea how truly satisfying it would be, or, for that matter, that anyone would actually read it! Many thanks, my friends. I look forward to returning soon. Meanwhile, all best wishes for the holidays.

[Photo credit: Public domain, by Marcus Quigmire, from Florida, USA (Happy Holidays, uploaded by Princess Mérida) [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons]

Monday, December 16, 2013

FRB St. Louis is Far Ahead of the Data Pack

The email below arrived recently from the Federal Reserve Bank of St. Louis. It reminds me of something that's hardly a secret, but that nevertheless merits applause: FRBSL's Research Department is a wonderful provider of economic and financial data (FRED and much more...) and of related information broadly defined (RePEc and much more...).

FRED, ALFRED, GeoFRED, RePEc, FRASER, etc. -- wow!  FRBSL supplies not only the data, but also intuitive and seamless delivery interfaces. They're very much on the cutting edge, constantly innovating and leading.

Other Feds of course supply some great data as well. To take just one example close to home, the Real-Time Data Research Center within FRB Philadelphia's Research Department maintains a widely-respected Real-Time Dataset and Survey of Professional Forecasters (and of course my favorites, the ADS Index and GDPplus).

But FRBSL is in a league of its own. Maybe there's been an implicit decision within the System that FRBSL will be the de facto data guru? Or maybe it's just me, not looking around thoroughly enough? I suspect it's a bit of both.

In any event I applaud FRBSL for a job marvelously well done.

Subject: Come visit the St. Louis Fed at the 2014 AEA Conference in Philadelphia

Please join the Federal Reserve Bank of St. Louis at the American Economic Association meeting in Philadelphia, Jan. 3-5, 2014, Philadelphia Marriott Downtown, Franklin Hall. Stop by our booths, B322 and B321, to talk to St. Louis Fed experts and learn more about our free data toolkit available to researchers, teachers, journalists and bloggers. The toolkit includes:

RePEc: Representatives of the popular bibliographic database will be available to discuss the various websites, answer questions and take suggestions.
FRED® (Federal Reserve Economic Data): our signature database, with 150,000 data series from 59 regional, national and international sources.
ALFRED® (Archival Federal Reserve Economic Data): retrieve versions of economic data that were available on specific dates in history. Test economic forecasting models and analyze the decisions made by policymakers.
GeoFRED®: map U.S. economic data at the state, county or metropolitan statistical area (MSA) level.
FRASER® (Federal Reserve Archival System for Economic Research): a digital library for economic, financial and banking materials covering the economic and financial history of the United States and the Federal Reserve System.
FRED add-in for Microsoft Excel, plus mobile apps for iPad, iPhone and Android devices.

Also, take the opportunity to learn more about EconLowdown, our award-winning, FREE classroom resources for K-16 educators and consumers. Learn about money and banking, economics, personal finance, and the Federal Reserve. See you there.
Federal Reserve Bank of St. Louis | www.stlouisfed.org


Monday, December 9, 2013

Comparing Predictive Accuracy, Twenty Years Later

I have now posted the final pre-meeting draft of the "Use and Abuse" paper (well, more-or-less "final").

I'll present it as the JBES Lecture, January 2014 ASSA meetings, Philadelphia. Please join if you're around. It's Friday January 3, 2:30, Pennsylvania Convention Center Room 2004-C (I think).

By the way, the 2010 Peter Hansen paper that I now cite in my final paragraph, "A Winners Curse for Econometric Models: On the Joint Distribution of In-Sample Fit and Out-of-Sample Fit and its Implications for Model Selection," is tremendously insightful. I saw Peter present it a few years ago at a Stanford summer workshop, but I didn't fully appreciate it and had forgotten about it until he reminded me when he visited Penn last week. He's withheld the 2010 and later revisions from general circulation evidently because one section still needs work. Let's hope that he gets it revised and posted soon! (A more preliminary 2009 version remains online from a University of Chicago seminar.) One of Peter's key points is that although split-sample model comparisons can be "tricked" by data mining in finite samples, just as can all model comparison procedures, split-sample comparisons appear to be harder to trick, in a sense that he makes precise. That's potentially a very big deal.

Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests

Abstract: The Diebold-Mariano (DM) test was intended for comparing forecasts; it has been, and remains, useful in that regard. The DM test was not intended for comparing models. Much of the large ensuing literature, however, uses DM-type tests for comparing models, in (pseudo-) out-of-sample environments. In that case, simpler yet more compelling full-sample model comparison procedures exist; they have been, and should continue to be, widely used. The hunch that (pseudo-) out-of-sample analysis is somehow the "only," or "best," or even necessarily a "good" way to provide insurance against in-sample over-fitting in model comparisons proves largely false. On the other hand, (pseudo-) out-of-sample analysis remains useful for certain tasks, most notably for providing information about comparative predictive performance during particular historical episodes.

Monday, December 2, 2013

The e-Writing Jungle Part 3: Web-Based e-books Using Python / Sphinx

In the previous Parts 1 and 2, I essentially dealt with two extremes: (1) LaTeX to pdf to web, and (2) raw HTML (however arrived at) with math rendered by MathJax. Now let's look at something of a middle ground: the Python package, Sphinx, for producing e-books.

Part 3: Python / Sphinx

Parts 1 and 2 of Quantitative Economics, by Stachurski and Sargent, are great routes into Python for economists. There's lots of good comparative discussion of Python vs. Matlab or Julia, the benefits of public-domain, open-source code, etc. And it's always up to the minute, because it's an on-line e-book! Just check it out.

Of course we're interested here in e-books, not Python per se. It turns out, however, that Stachurski and Sargent is also a cutting-edge example of a beautiful e-book. It's effectively written in Python using Sphinx, which is a Python package that started as a vehicle for writing software manuals. But a manual is just a book, and one can fill a book with whatever one wants.

Sphinx is instantly downloadable, beautifully documented (the documentation is written in Sphinx, of course!), and open source (BSD-licensed). reStructuredText is the powerful markup language. (You can learn all you need in ten minutes, since math is the only complicated thing, and math stays in LaTeX, rendered either by JavaScript via MathJax or as png images, your choice.) In addition to publishing to HTML, you can publish to LaTeX or pdf.
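For a flavor of the markup, here's a tiny hypothetical reStructuredText fragment of the sort Sphinx compiles to HTML or LaTeX (the section name and equation are mine, purely illustrative):

```rst
Forecast Evaluation
===================

Plain text becomes HTML (or LaTeX/pdf); math stays in LaTeX:

.. math::

   y_t = \beta' x_{t-h} + \varepsilon_t .

Inline math such as :math:`\hat{\beta}` also works, and
``make html`` or ``make latexpdf`` builds the whole book.
```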

Want to see how Sphinx performs with math even more dense than Stachurski and Sargent's? Just check, for example, the Sphinx book Theoretical Physics Reference. Want to see how it performs with graphics even more slick than Stachurski and Sargent's? Just check the Matplotlib documentation. It's all done in Sphinx.

Sphinx is a totally class act. In my humble opinion, nothing else in its genre comes close.

Monday, November 25, 2013

Collaboration Distance and the Math Genealogy Project

The American Mathematical Society has a fun site on "collaboration distance" between various mathematicians. The idea is simple: If, for example, I wrote with X, and X wrote with Z, then my collaboration distance to Z is two. There's a good description here, and the actual calculator is here.

You can track your collaboration distance not only to Erdős (of course), but also to all-time giants like Gauss or Laplace. The calculator reveals, for example, that my collaboration distance to Gauss is just eight:

I co-authored with Marc Nerlove
Marc Nerlove co-authored with Kenneth J. Arrow
Kenneth J. Arrow co-authored with Theodore E. Harris
Theodore E. Harris co-authored with Richard E. Bellman
Richard E. Bellman co-authored with Ernst G. Straus
Ernst G. Straus co-authored with Albert Einstein
Albert Einstein co-authored with Hermann Minkowski
Hermann Minkowski co-authored with Carl Friedrich Gauss.

Wow -- and some great company along the way, quite apart from the origin at old Carl Friedrich!

Of course I understand the "small-world" network phenomenon, but it's nevertheless hard not to be astounded at first.

So how truly astounding is my eight-step connection to Gauss? Let's do a back-of-the-envelope calculation. For a benchmark Erdős-Rényi network we have:

$$d_{\max} \approx \frac{\ln N}{\ln \mu},$$
where $$d_{\max}$$ is the maximum collaboration distance, $$N$$ is the number of authors in the network, and $$\mu$$ is the mean number of co-authors. Suppose there are 1,000,000 authors ($$N=1,000,000$$), each with 5 co-authors (so, trivially, $$\mu=5$$). Then we have $$d_{\max} \approx 9$$.
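The back-of-the-envelope number is easy to verify; here's a quick sketch using the benchmark formula and the illustrative values above:

```python
import math

def benchmark_distance(n_authors: int, mean_coauthors: float) -> float:
    """Erdos-Renyi benchmark: maximum collaboration distance ~ ln(N) / ln(mu)."""
    return math.log(n_authors) / math.log(mean_coauthors)

d = benchmark_distance(1_000_000, 5)  # N = 1,000,000 authors, mu = 5 co-authors
print(round(d, 2))                    # ln(1e6)/ln(5) is about 8.58, i.e., roughly 9
```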

Hmmm...I'm no longer feeling so special.

Monday, November 18, 2013

The e-Writing Jungle Part 2: The MathML Impasse and the MathJax Solution

Back to LaTeX and MathJax and MathML and Python and Sphinx and IPython and R and knitr and Firefox and Chrome and ...

In Part 1, I praised e-books done as LaTeX to pdf to the web, perhaps surprisingly. Now let's go the other way, to an e-book done natively on the web as HTML. Each approach is worth considering, depending on the application, as each has different costs and benefits.

Part 2: The MathML Impasse and the MathJax Solution

All we want is an HTML version with native support and beautiful rendering of mathematics. That's what HTML5 promises, except for a small detail: many browsers (IE, Chrome, ...) won't render its math. The real problem is MathML, which is embedded in HTML5, and which is the key to math fonts in HTML5 or anywhere else. It's not just a question of browser suppliers finally waking up and flipping on the MathML switch; rather, successful MathML integration turns out to be really hard (seriously, although I don't really know why), and there are also security issues (again seriously, and again I don't really know why). For those reasons, the good folks at Microsoft and Google, for example, have now basically decided that they'll never support MathML. There's a lot of noise about all this swirling around right now -- some of it quite bitter -- but a single recent informative and entertaining piece will catapult you to the cutting edge: "Google Subtracts MathML from Chrome, and Anger Multiplies," by Stephen Shankland.

The bottom line: Math has now been officially sentenced to an eternity of second-class web citizenship, in the sense that native and broad math browser support is not going to happen. But that brings us to MathJax, a JavaScript app that works with HTML. You simply type in LaTeX and MathJax finds any math expressions and renders them beautifully. (For an example see my recent post On the Wastefulness of (Pseudo-) Out-of-Sample Predictive Model Comparisons, which was done in LaTeX and rendered using MathJax.) Note well that MathJax is not just pasting graphics images; hence its output scales nicely and works well on mobile devices too. For all you need to know, check out "MathML Forges On," by Peter Krautzberger.
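For the curious, here is roughly what MathJax-on-HTML looks like in practice: a minimal page that loads the MathJax script and lets it find and render the embedded LaTeX. (The script URL and config name reflect my understanding of the current MathJax distribution; check the MathJax docs for specifics.)

```html
<!DOCTYPE html>
<html>
<head>
  <title>MathJax demo</title>
  <!-- Load MathJax from its CDN; it scans the page for math delimiters. -->
  <script type="text/javascript"
          src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML">
  </script>
</head>
<body>
  <p>Type LaTeX directly in the page and MathJax renders it:
     $$DM = \frac{\bar{d}}{\hat{\sigma}_{\bar{d}}} \rightarrow N(0,1).$$</p>
</body>
</html>
```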

So what's the big problem? Doesn't HTML plus MathJax basically equal HTML5, with the major additional benefit that it actually works? Of course it's somewhat insulting to us math folk, and certainly it's aesthetically unappealing, to have to overlay something on HTML just to get it to display math. (I'm reminded of the old days of PC hardware, with separate "math co-processors.") And there are other issues. For example, MathJax loads from the cloud (unless it's on your machine(s), which requires installations and updates, and which can't be done for mobile devices), and the MathJax math rendering may take a few seconds or more, depending on the speed of your connection and the complexity/length of your math.

But are any of the above "problems" truly serious? I don't think so. On the contrary, MathJax strikes me as a versatile and long-overdue solution for web-based math. And its future looks very bright, with official supporters now ranging from the American Mathematical Society to Springer to Matlab. (Not that I'm a fan of Matlab any longer -- please join the resistance, purge Matlab from your life, and replace it with Python and R -- but that's a topic for another day.)

[Next: Python, Sphinx, ...]

Monday, November 11, 2013

A New Center to Watch for Predictive Macroeconomic and Financial Modeling

Check out USC's fine new Center for Applied Financial Economics, led by the indefatigable Hashem Pesaran. The first event is a fascinating conference, "Recent Developments on Forecasting Techniques for Macro and Finance."  Lots of information here, and program below.

PROGRAM

Wednesday, November 20th, 2013

8:00-8:45 a.m. Registration and Continental Breakfast

8:45-9:00 a.m. WELCOME
Hashem Pesaran, John E. Elliott Distinguished Chair of Economics and Director of the Centre for Applied Financial Economics (CAFE), USC Dornsife

9:00-9:50 a.m. SESSION I Chair: Robert Dekle
Speaker: Òscar Jordà
Title: Semiparametric Estimates of Monetary Policy Effects: String Theory Revisited. With Joshua D. Angrist and Guido Kuersteiner.
Discussant: Eleonora Granziera

9:50-10:40 a.m. SESSION II Chair: Yu-Wei Hsieh
Speaker: Michael W. McCracken
Title: Evaluating Forecasts from Vector Autoregressions Conditional on Policy Paths. With Todd E. Clark.
Discussant: Andreas Pick

11:00-11:50 a.m. SESSION III Chair: Michael Magill
Speaker: Jose A. Lopez
Title: A Probability-Based Stress Test of Federal Reserve Assets and Income. With Jens H.E. Christensen and Glenn D. Rudebusch.
Discussant: Wayne Ferson

11:50-12:40 p.m. SESSION IV Chair: Yilmaz Kocer
Speaker: Tae-Hwy Lee
Title: Density and Risk Forecast of Financial Returns Using Decomposition and Maximum Entropy. With Zhou Xi and Ru Zhang.
Discussant: Hyungsik Roger Moon

2:00-2:50 p.m. SESSION V Chair: Juan D. Carrillo
Speaker: Allan Timmermann
Title: Equivalence Between Out-of-Sample Forecast Comparisons and Wald Statistics. With Peter Reinhard Hansen.
Discussant: Hashem Pesaran

2:50-3:40 p.m. SESSION VI Chair: Jeffrey B. Nugent
Speaker: Gloria Gonzalez-Rivera
Title: In-Sample and Out-of-Sample Performance of Autocontour-Testing in Unstable Environments. With Yingying Sun.
Discussant: Cheng Hsiao

4:00-4:50 p.m. SESSION VII Chair: Giorgio Coricelli
Speaker: Gareth M. James
Title: Functional Response Additive Model Estimation with Online Virtual Stock Markets. With Yingying Fan, Natasha Foutz, and Wolfgang Jank.
Discussant: Dalia A. Ghanem

4:50-5:40 p.m. SESSION VIII Chair: Joel David
Speaker: Marcelle Chauvet
Title: Nowcasting of Nominal GDP. With William A. Barnett and Danilo Leiva-Leon.
Discussant: Michael Bauer

5:40 p.m. Concluding Remarks

Thursday, November 7, 2013

The e-Writing Jungle Part 1: LaTeX to pdf to the Web

LaTeX and MathML and MathJax and Python and Sphinx and IPython and R and knitr and Firefox and Chrome and ...

My head is spinning with all this stuff. Maybe yours is too.

One thing is clear: The traditional academic book publishing paradigm (broadly defined) is cracking and will soon be crumbling. In the emerging e-paradigm there will be essentially no difference among books, courses, e-books, e-courses, web sites, blogs, and so on. With no loss of generality, then, let's just call it all "e-books," filled with text, color graphics, audio/video, animations, interactive learning tools, massive numbers of internal and external hyper-links, etc.

An interesting question is how to create ("write"?) and distribute such e-books. The amazing thing is that the answer remains unclear. Both pitfalls and opportunities abound. Here are some thoughts.

Part 1:  LaTeX to pdf to the Web

One obvious e-book creation and distribution route is traditional LaTeX, compiled to pdf and posted on the web. Effete insiders now sneer at that, viewing it as little more than posting page photos of an old-fashioned B&W paper book. I beg to differ. What's true is that most people still fail to use the e-capabilities of LaTeX, so of course their pdf product is little more than an e-copy of an old paper book, but that's their fault. All of the above-mentioned e-desiderata are readily available in LaTeX/pdf/web; one just has to use them!

Moreover, LaTeX/pdf/web has at least two extra benefits relative to a website (say). First, trivially, the pdf is instantly printable on demand as a beautiful traditional book, which is sometimes useful. Second, and more importantly, the linear beginning-to-end layout of a "book" -- in contrast to the non-linear jumble of links that is a website -- is pedagogically invaluable when done well. That is, good authors put things in a precise order for a reason, and readers benefit by reading in that order.

OK, you say, but how to restrict access only to those who pay for a LaTeX/pdf/web e-book? (It's true, a pdf web post is basically impossible to copy-protect.) My present view is very simple: Just get over it and forget the chump change. Scholarly monographs and texts are labors of love; the real compensation is satisfaction from helping to advance and spread knowledge. And if that's not quite enough, rest assured that if you write a great book you'll reap handsome monetary rewards in subtle but nevertheless very real ways, even if you post it gratis.

[To be continued. Next: HTML and MathML and LaTeXtoHTML5 and MathJax and ...]

Monday, November 4, 2013

Federal Reserve Bank of Philadelphia Launches Improved U.S. GDP Growth Series

Exciting news for empirical macroeconomics and finance: The Federal Reserve Bank of Philadelphia today released a new and improved $$GDP$$ growth series, $$GDPplus$$. It's an optimal blend of the BEA's expenditure-side and income-side estimates (call them $$GDP_E$$ and $$GDP_I$$, respectively). The $$GDPplus$$ web page contains extensive background information and will be updated whenever new or revised data for $$GDP_E$$ and/or $$GDP_I$$, and hence $$GDPplus$$, are released.

$$GDPplus$$ (developed in Aruoba, Diebold, Nalewaik, Schorfheide and Song, "Improving GDP Measurement: A Measurement-Error Perspective," NBER Working Paper 18954, 2013) is based on a dynamic-factor model,

$$\begin{pmatrix} GDP_{Et} \\ GDP_{It} \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} GDP_t + \begin{pmatrix} \epsilon_{Et} \\ \epsilon_{It} \end{pmatrix}$$
$$GDP_{t} = \mu (1- \rho) + \rho GDP_{t-1} + \epsilon_{Gt},$$
where $$GDP_E$$ and $$GDP_I$$ are noisy indicators of latent true $$GDP$$, $$\epsilon_{E}$$ and $$\epsilon_{I}$$ are expenditure- and income-side stochastic measurement errors, and $$\epsilon_{G}$$ is a stochastic shock to true $$GDP$$. The Kalman smoother provides an optimal estimate of $$GDP$$ based on the noisy indicators $$GDP_{E}$$ and $$GDP_{I}$$. That optimal estimate is $$GDPplus$$. Note that $$GDPplus$$ is not just a period-by-period simple average, or even a weighted average, of $$GDP_E$$ and $$GDP_I$$, because optimal signal extraction averages not only across the $$GDP_E$$ and $$GDP_I$$ series, but also over time.
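To make the signal extraction concrete, here is a minimal Kalman filter/smoother sketch for the two-equation model above. It is my own illustrative code with made-up parameter values (the actual GDPplus parameters are estimated; see the paper), not the Philadelphia Fed's implementation:

```python
import numpy as np

def smooth_gdp(y, mu=3.0, rho=0.5, q=1.0, r=(1.0, 1.0)):
    """Kalman filter plus RTS smoother for the two-indicator model:
       y_t = (GDP_Et, GDP_It)' = (1, 1)' GDP_t + (eps_Et, eps_It)',
       GDP_t = mu(1 - rho) + rho GDP_{t-1} + eps_Gt.
    y: (T, 2) array of (GDP_E, GDP_I). Returns smoothed GDP, shape (T,)."""
    T = y.shape[0]
    c, H, R = mu * (1.0 - rho), np.array([1.0, 1.0]), np.diag(r)
    x_p, p_p = np.zeros(T), np.zeros(T)   # one-step predictions
    x_f, p_f = np.zeros(T), np.zeros(T)   # filtered estimates
    x, p = mu, q / (1.0 - rho ** 2)       # unconditional initialization
    for t in range(T):
        if t > 0:                         # predict from last filtered state
            x, p = c + rho * x, rho ** 2 * p + q
        x_p[t], p_p[t] = x, p
        S = p * np.outer(H, H) + R        # innovation covariance (2x2)
        K = p * H @ np.linalg.inv(S)      # Kalman gain (1x2)
        x = x + K @ (y[t] - H * x)        # update with both noisy indicators
        p = p * (1.0 - K @ H)
        x_f[t], p_f[t] = x, p
    x_s = x_f.copy()                      # backward (RTS) smoothing pass
    for t in range(T - 2, -1, -1):
        J = p_f[t] * rho / p_p[t + 1]
        x_s[t] = x_f[t] + J * (x_s[t + 1] - x_p[t + 1])
    return x_s

# Simulate the model, then extract the latent series from the two indicators
rng = np.random.default_rng(0)
T, mu, rho = 200, 3.0, 0.5
true_gdp = np.empty(T); true_gdp[0] = mu
for t in range(1, T):
    true_gdp[t] = mu * (1 - rho) + rho * true_gdp[t - 1] + rng.standard_normal()
y = true_gdp[:, None] + rng.standard_normal((T, 2))  # noisy GDP_E, GDP_I
gdp_plus = smooth_gdp(y)
```

The point of the exercise: because the smoother pools information over time as well as across the two indicators, the smoothed series typically tracks latent GDP better than the period-by-period average of the indicators does.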

The historical perspective on $$GDP$$ provided by $$GDPplus$$ complements the real-time perspective on the overall business cycle provided by the Aruoba-Diebold-Scotti (ADS) Index, also published by the Federal Reserve Bank of Philadelphia.

Moving forward, $$GDPplus$$ will be updated at 2 PM on every day that new and/or revised $$GDP_E$$ and/or $$GDP_I$$ data are released. The next update will be November 7, the day of BEA's NIPA release for Q3 (delayed due to the government shutdown).

Friday, November 1, 2013

LaTeX/MathJax Rendering in Blog Posts

It seems that LaTeX/MathJax is working fine with my blog, including with mobile devices, which is great (see, for example, my recent post On the Wastefulness of (Pseudo-) Out-of-Sample Predictive Model Comparisons). However, a problem exists for those with email delivery, who just get raw LaTeX dumped into the email. If that happens to you, simply click on the blog post title in the email. Then you'll be taken to the actual blog, and it should render well.

Please let me know of any other problems!

Thursday, October 31, 2013

On the Wastefulness of (Pseudo-) Out-of-Sample Predictive Model Comparisons

Peter Hansen and Allan Timmermann have a fantastic new paper, "Equivalence Between Out-of-Sample Forecast Comparisons and Wald Statistics."

The finite-sample wastefulness of (pseudo-) out-of-sample model comparisons seems obvious, as they effectively discard the (pseudo-) in-sample observations. That intuition should be true for both nested and non-nested comparisons, but it seems most obvious in the nested case: How could anything systematically dominate full-sample Wald, LR or LM for testing nested hypotheses? Hansen and Timmermann consider the nested case and verify the intuition with elegance and precision. In doing so they greatly clarify the misguided nature of most (pseudo-) out-of-sample model comparisons.

Consider the predictive regression model with $$h$$-period forecast horizon
$$y_{t}=\beta_{1}^{\prime}X_{1,t-h}+\beta_{2}^{\prime}X_{2,t-h}+\varepsilon_{t},$$ $$t=1,\ldots,n$$, where $$X_{1t}\in\mathbb{R}^{k}$$ and $$X_{2t}\in\mathbb{R}^{q}$$. We obtain out-of-sample forecasts with recursively estimated parameter values by regressing $$y_{s}$$ on $$X_{s-h}=(X_{1,s-h}^{\prime},X_{2,s-h}^{\prime})^{\prime}$$ for $$s=1,\ldots,t$$ (resulting in the least squares estimate $$\hat{\beta}_{t}=(\hat{\beta}_{1t}^{\prime},\hat{\beta}_{2t}^{\prime})^{\prime}$$) and using
$$\hat{y}_{t+h|t}(\hat{\beta}_{t})=\hat{\beta}_{1t}^{\prime}X_{1t}+\hat{\beta}_{2t}^{\prime}X_{2t}$$ to forecast $$y_{t+h}$$.

Now consider a smaller (nested) regression model,
$$y_{t}=\delta^{\prime}X_{1,t-h}+\eta_{t}.$$ In similar fashion we proceed by regressing $$y_{s}$$ on $$X_{1,s-h}$$  for $$s=1,\ldots,t$$ (resulting in the least squares estimate $$\hat{\delta}_t$$) and using
$$\tilde{y}_{t+h|t}(\hat{\delta}_{t})=\hat{\delta}_{t}^{\prime}X_{1t}$$ to forecast $$y_{t+h}$$.

In a representative and leading contribution to the (pseudo-) out-of-sample model comparison literature in the tradition of West (1996), McCracken (2007) suggests comparing such nested models via expected loss evaluated at population parameters. Under quadratic loss the null hypothesis is
$$H_{0}:\mathrm{E}[y_{t}-\hat{y}_{t|t-h}(\beta)]^{2}=\mathrm{E}[y_{t}-\tilde{y}_{t|t-h}(\delta)]^{2}.$$ McCracken considers the test statistic
$$T_{n}=\frac{\sum_{t=n_{\rho}+1}^{n}(y_{t}-\tilde{y}_{t|t-h}(\hat{\delta}_{t-h}))^{2}-(y_{t}-\hat{y}_{t|t-h}(\hat{\beta}_{t-h}))^{2}}{\hat{\sigma}_{\varepsilon}^{2}},$$ where $$\hat{\sigma}_{\varepsilon}^{2}$$ is a consistent estimator of $$\sigma_{\varepsilon}^{2}=\mathrm{var}(\varepsilon_{t+h})$$ and $$n_{\rho}$$ is the number of observations set aside for the initial estimation of $$\beta$$, taken to be a fraction $$\rho\in(0,1)$$ of the full sample $$n$$, i.e., $$n_{\rho}=\lfloor n\rho\rfloor$$. The asymptotic null distribution of $$T_{n}$$ turns out to be rather complicated; McCracken shows that it is a convolution of $$q$$ independent random variables, each with distribution $$2\int_{\rho}^{1}u^{-1}B(u)\mathrm{d}B(u)-\int_{\rho}^{1}u^{-2}B(u)^{2}\mathrm{d}u$$.

Hansen and Timmermann show that $$T_{n}$$ is just the difference between two Wald statistics of the hypothesis that $$\beta_{2}=0$$, the first based on the full sample and the second based on the initial estimation sample. That is, $$T_{n}$$ is just the increase in the Wald statistic obtained by using the full sample as opposed to the initial estimation sample. Hence the power of $$T_{n}$$ derives entirely from the post-split sample, so it must be less powerful than using the entire sample.  Indeed Hansen and Timmermann show that power decreases as $$\rho$$ increases.
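To make the Wald-difference characterization concrete, here is a small simulation sketch (my own illustrative code, with $$h=1$$, an intercept-only small model, and $$q=2$$ irrelevant predictors, simulated under the null): compute the full-sample and initial-sample Wald statistics for $$\beta_{2}=0$$ using a common residual-variance estimate; per the Hansen-Timmermann result, $$T_{n}$$ behaves like their difference.

```python
import numpy as np

def wald_beta2_zero(y, X1, X2, sigma2):
    """Wald statistic for H0: beta_2 = 0 in y = X1 b1 + X2 b2 + e,
    using a common residual-variance estimate sigma2."""
    X = np.column_stack([X1, X2])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    k = X1.shape[1]
    b2 = beta[k:]                         # coefficients on the extra regressors
    V22 = sigma2 * XtX_inv[k:, k:]        # covariance block of b2-hat
    return float(b2 @ np.linalg.inv(V22) @ b2)

# Simulate under the null (beta_2 = 0): X2 holds q = 2 irrelevant predictors
rng = np.random.default_rng(1)
n, rho = 500, 0.5
X1 = np.ones((n, 1))                      # k = 1: intercept only
X2 = rng.standard_normal((n, 2))
y = 1.0 + rng.standard_normal(n)

# Common residual-variance estimate from the full model, as in T_n
X = np.column_stack([X1, X2])
sigma2 = float(np.mean((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2))

n_rho = int(n * rho)                      # initial estimation sample
W_full = wald_beta2_zero(y, X1, X2, sigma2)
W_init = wald_beta2_zero(y[:n_rho], X1[:n_rho], X2[:n_rho], sigma2)
T_n = W_full - W_init                     # the Hansen-Timmermann equivalence
```

The decomposition makes the power result transparent: $$T_{n}$$ keeps only the increment in evidence accumulated after the split point, so the full-sample Wald statistic must dominate.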

On the one hand, the Hansen-Timmermann results render trivial the calculation of $$T_{n}$$ and greatly clarify its limit distribution (that of the difference between two independent $$\chi^{2}$$-distributed random variables and their convolutions). So if one insists on doing $$T_{n}$$-type tests, then the Hansen-Timmermann results are good news. On the other hand, the real news is bad: the Hansen-Timmermann results make clear that, at least in the environments they consider, (pseudo-) out-of-sample model comparison comes at high cost (power reduction) and delivers no extra benefit.

[By the way, my paper, "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," makes many related points. Drafts are here. The final (?) version will be delivered as the JBES Invited Lecture at the January 2014 ASSA meetings in Philadelphia. Commentary at the meeting will be by Andrew Patton and Allan Timmermann. The JBES published version will contain the Patton and Timmermann remarks, plus those of Atsushi Inoue, Lutz Kilian, and Jonathan Wright. Should be entertaining!]

Friday, October 25, 2013

NBER/NSF Time-Series Conference: Retrospect and Prospect

I recently reported here on the Barigozzi-Brownlees paper, "Network Estimation for Time Series." I heard it presented a few weeks ago at the 2013 NBER/NSF Time Series Conference, hosted this year by the Federal Reserve Board in Washington (a sign, by the way, of the Fed's ongoing research commitment, notwithstanding my earlier-posted doubts).

I hadn't attended the NBER/NSF Time Series meeting in several years. Attending reminded me of how special it is and jogged me into this post on NBER/NSF more generally. What's most distinctive is the way the conference spans so many different communities, all of which do top work in time series but not all of which communicate regularly. For some reason my mind groups into pairs many of the great researchers who participated regularly over the years: Rob Engle and Clive Granger, George Tiao and Arnold Zellner, Jim Stock and Mark Watson, Ted Hannan and Manfred Deistler, Torben Andersen and Tim Bollerslev, Peter Brockwell and Richard Davis, Ron Gallant and George Tauchen, David Findley and Bill Bell, and on and on.

General ongoing info about the conference is here (including upcoming 2014-2016 meetings in St. Louis, Vienna and New York), and an interesting brief history -- including year-by-year locations -- is here. Programs for recent years appear here. Does anyone know whether a complete set of conference programs is available? It would be fascinating to watch the parade of paper titles and authors marching forward from the earliest times.

FYI this year's program follows.

2013 NBER-NSF Time Series Conference

A conference hosted by the Federal Reserve Board
September 26-27, 2013, Washington, D.C.

Thursday, September 26, 2013

Conference Registration and Box Lunch: 12:00 – 1:15
Opening Remarks: 1:15 – 1:30
Main Program Session: Factor Models and Latent Variables: 1:30 – 3:00
"Generalized Method of Moments with Latent Variables"
A. Ronald Gallant, Raffaella Giacomini, Giuseppe Ragusa
"Shrinkage Estimation of Dynamic Factor Models with Structural Instabilities"
Xu Cheng, Zhipeng Liao, Frank Schorfheide
"Structural FECM: Cointegration in Large-scale Structural FAVAR Models"
Anindya Banerjee, Massimiliano Marcellino, Igor Masten
Coffee Break: 3:00 – 3:30
Main Program Session: Forecasting and Model Evaluation: 3:30 – 5:00
"Alternative Tests for Correct Specification of Conditional Predictive Densities"
Barbara Rossi, Tatevik Sekhposyan
"Non-nested Model Comparisons for Time Series via the Gaussian Likelihood Ratio Statistic"
Tucker McElroy, Christopher Blakely
"Efficient Test for Long-Run Predictability: Hybrid of the Q-test and Long-Horizon Regressions"
Natalia Sizova
Cocktail Reception and Poster Session 1: 5:00 – 6:30
Conference Dinner: 6:30 – 8:30
Dinner Speaker: Professor George Tiao, University of Chicago, Booth School of Business, "A Tribute to Professor George E.P. Box"

Friday, September 27, 2013

Continental Breakfast: 8:00 – 9:00
Main Program Session: Time Series Analysis: 9:00 – 10:30
"Thresholded Multivariate Regression with Application to Robust Forecasting"
David F. Findley, Demetra P. Lytras
"Approximate Bias in Time Series Regressions"
Kenneth D. West
Coffee Break: 10:30 – 11:00
Main Program Session: Macroeconomics: 11:00 – 12:30
"Reverse Kalman Filtering US Inflation with Sticky Professional Forecasts"
James M. Nason, Gregor W. Smith
"Improving GDP Measurement: A Measurement-Error Perspective"
Boragan Aruoba, Francis X. Diebold, Jeremy Nalewaik, Frank Schorfheide, Dongho Song
"Systemic Risk and the Macroeconomy: An Empirical Evaluation"
Stefano Giglio, Bryan Kelly, Seth Pruitt, Xiao Qiao
Lunch and Poster Session 2: 12:30 – 2:00
Main Program Session: Macro/Finance: 2:00 – 3:30
"Daily House Price Indexes: Construction, Modeling, and Longer-Run Predictions"
Tim Bollerslev, Andrew Patton, Wenjing Wang
"Estimation of non-Gaussian Affine Term Structure Models"
Drew D. Creal, Jing Cynthia Wu
"Robust joint Models of Yield Curve Dynamics and Euro Area (non-)standard Monetary Policy"
Geert Mesters, Bernd Schwaab, Siem Jan Koopman
Coffee Break: 3:30 – 4:00
Main Program Session: Estimation: 4:00 – 5:30
"Nets: Network Estimation for Time Series"
Matteo Barigozzi, Christian Brownlees
"A Parameter-Driven Logit Regression Model for Binary Time Series"
Rongning Wu, Yunwei Cui
"Definitions and representations of multivariate long-range dependent time series"
Poster Session 1
"Extended Yule-Walker Identification of a VARMA Model Using Single- or Mixed-Frequency Data"
"Testing for Cointegration with Temporally Aggregated and Mixed-frequency Time Series"
Eric Ghysels, J. Isaac Miller
"Co-summability: From Linear to Non-linear Co-integration"
Vanessa Berenguer-Rico, Jesus Gonzalo
"An Asymptotically Normal Out-Of-Sample Test of Equal Predictive Accuracy for Nested Models"
Gray Calhoun
"Nonparametric HAC Estimation for Time Series Data with Missing Observations"
Deepa Dhume Datta, Wenxin Du
"Evaluating Forecasts from Bayesian Vector Autoregressions Conditional on Policy Paths"
Todd E. Clark, Michael W. McCracken
"Marcenko-Pastur Law for Time Series"
Haoyang Liu, Alexander Aue, Debashis Paul
"Dynamic Compositional Regression in Financial Time Series and Application in Portfolio Decisions"
Zoey Yi Zhao, Mike West
"Diagnosing the Distribution of GARCH Innovations"
Pengfei Sun, Chen Zhou
"Nonlinearity, Breaks, and Long-Range Dependence in Time-Series Models"
Eric Hillebrand, Marcelo C. Medeiros
"Measuring Nonlinear Granger Causality in Mean"
Xiaojun Song, Abderrahim Taamouti
"Penalized Forecasting in Panel Data Models: Predicting Household Electricity Demand from Smart Meter Data"
Matthew Harding, Carlos Lamarche, M. Hashem Pesaran
Poster Session 2
"What is the Chance that the Equity Premium Varies over Time? Evidence from Regressions on the Dividend-Price Ratio"
Jessica A. Wachter, Missaka Warusawitharana
"Forecasting with Many Models: Model Confidence Sets and Forecast Combination"
Jon D. Samuels, Rodrigo M. Sekkel
"Modelling Financial Markets Comovements: A Dynamic Multi Factor Approach"
Martin Belvisi, Riccardo Pianeti, Giovanni Urga
"On the Reliability of Output-Gap Estimates in Realtime"
Elmar Mertens
"Testing for Granger Causality with Mixed Frequency Data"
Eric Ghysels, Jonathan B. Hill, Kaiji Motegi
"Testing Stationarity for Unobserved Components Models"
James Morley, Irina B. Panovska, Tara M. Sinclair
"Symmetry and Separability in Two-Country Cointegrated VAR Models: Representation and Testing"
Hans-Martin Krolzig, Reinhold Heinlein
"Detecting and Forecasting Large Deviations and Bubbles in a Near-Explosive Random Coefficient Model"
Anurag Banerjee, Guillaume Chevillon, Marie Kratz
"A Spatio-Temporal Mixture Model for Point Processes with Application to Ambulance Demand"
David Matteson
"Empirical Evidence on Inflation Expectations in the New Keynesian Phillips Curve"
Sophocles Mavroeidis, Mikkel Plagborg-Moller, James H. Stock
"A Non-Gaussian Asymmetric Volatility Model"
Geert Bekaert, Eric Engstrom
"Gaussian Term Structure Models and Bond Risk Premia"
Bruno Feunou, Jean-Sébastien Fontaine

Monday, October 21, 2013

Lawrence R. Klein, 1920-2013

I am sad to report that Lawrence R. Klein has passed away. He was in many respects the father of modern econometrics and empirical macroeconomics; indeed his 1980 Nobel Prize citation was "for the creation of econometric models and their application to the analysis of economic fluctuations and economic policies." He was also a dear friend and mentor to legions of Penn faculty and students, including me. I am grateful to him for many things, including his serving on my Penn Ph.D. dissertation committee nearly thirty years ago.

You can find a beautiful and fascinating autobiographical essay written in 1980, and updated in 2005, here.

Check back during the coming days as I update this post with additional links and materials.

Update 1: KLEIN LAWRENCE, October 20, 2013, of Gladwyne, Pa. Husband of Sonia (nee Adelson). Father of Hannah Klein, Rebecca (James) Kennedy, Rachel (Lyle) Klein and Jonathan (Blandina) Klein. Also survived by 7 grandchildren and 4 great-grandchildren. Services and Interment are private. Relatives and friends are invited to the residence of Mrs. Sonia Klein Wednesday, October 23, 2-4 P.M. AND Saturday, October 26, 2-4 P.M. (only). Contributions in his memory may be made to the University of Pennsylvania Department of Economics.

Update 2: Extensive New York Times obituary here.

Update 3: Penn Economics memorial statement here.

Update 4: Saturday 26 October Financial Times Weekend will contain an extensive obituary.

Wednesday, October 16, 2013

Network Estimation for Time Series

Matteo Barigozzi and Christian Brownlees have a fascinating new paper, "Network Estimation for Time Series" that connects the econometric time series literature and the statistical graphical modeling (network) literature. It's not only useful, but also elegant: they get a beautiful decomposition into contemporaneous and dynamic aspects of network connectedness. Granger causality and "long-run covariance matrices" (spectra at frequency zero), centerpieces of modern time-series econometrics, feature prominently. It also incorporates sparsity, allowing analysis of very high-dimensional networks.

If I could figure out how to get LaTeX/MathJax running inside Blogger, I could show you some details, but I had no luck after five minutes of fiddling last week, and I haven't yet gotten a chance to return to it. (Anyone know? Maybe Daughter 1 is right and I should switch to WordPress?) For now you'll just have to click on the Barigozzi-Brownlees paper above, and see for yourself.

It's interesting to see that Granger causality is alive and well after all these years, still contributing to new research advances. And Barigozzi-Brownlees is hardly alone in that regard, as the recent biomedical imaging literature illustrates. Some of Vic Solo's recent work is a great example.
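For readers who want a concrete sense of what a Granger causality test does, here is a minimal sketch in Python on simulated data. Everything here is illustrative (the function name, lag length, and simulated process are my own, not drawn from any of the papers above): the classic bivariate F-test asks whether adding lags of x to an autoregression of y reduces the residual sum of squares by more than chance would predict.

```python
import numpy as np

def granger_f_test(y, x, p=2):
    """F-statistic for the null that p lags of x do not help predict y,
    given p lags of y (a simple bivariate Granger-causality check)."""
    T = len(y)
    Y = y[p:]
    # Lag matrices: column k holds the k-th lag, aligned with Y.
    own = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    other = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    Xr = np.column_stack([np.ones(T - p), own])         # restricted model
    Xu = np.column_stack([np.ones(T - p), own, other])  # unrestricted model
    ssr_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    ssr_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    # F-statistic: p restrictions; unrestricted model has 2p + 1 parameters.
    return ((ssr_r - ssr_u) / p) / (ssr_u / (T - p - 2 * p - 1))

# Simulate a system in which x Granger-causes y but not vice versa.
rng = np.random.default_rng(0)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

print(granger_f_test(y, x))  # large: lags of x help predict y
print(granger_f_test(x, y))  # near its null distribution: y does not predict x
```

Compared against F(p, T - 3p - 1) critical values, the first statistic rejects the no-causality null decisively while the second typically does not, matching how the data were generated.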

Finally, it's also interesting to note that both the Barigozzi-Brownlees and Diebold-Yilmaz approaches to network connectedness work in vector-autoregressive frameworks, yet they proceed in very different, complementary, ways.

Monday, October 14, 2013

A Nobel for Financial Econometrics

First it was Engle and Granger (2003); now it's Fama, Hansen and Shiller.

A central issue in the economics of financial markets is whether and how those markets process information efficiently, to arrive at fair prices. Inextricably linked to that central issue is a central tension: certain lines of argument suggest that financial markets should be highly efficient, yet other lines of argument suggest limits to market efficiency. Gene Fama, Lars Hansen and Bob Shiller have individually and collectively made landmark contributions that now shape both academic and practitioner thinking as regards that tension. In so doing they've built much of the foundations of modern financial economics and financial econometrics. Fama empirically championed the efficient markets hypothesis, which in many respects represents the pinnacle of neoclassical financial economics. Shiller countered with additional empirical evidence that seemingly indicated the failure of market efficiency, setting the stage for several decades of subsequent work. Throughout, Hansen supplied both powerful economic theory that brought asset pricing in closer touch with macroeconomics, and powerful econometric theory (GMM) that proved invaluable for empirical asset pricing, where moment conditions are often available but likelihoods are not.
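Hansen's GMM insight, estimating parameters by driving sample analogues of population moment conditions toward zero, can be sketched in a few lines. The toy example below (entirely illustrative, not from any of the laureates' work) estimates an AR(1) coefficient from two instrument-based moment conditions with an identity weighting matrix; because the moments are linear in the parameter, the minimizer has a closed form.

```python
import numpy as np

# Simulate an AR(1): y_t = rho * y_{t-1} + e_t.
rng = np.random.default_rng(1)
rho_true, T = 0.6, 2000
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

# Two moment conditions, using lagged values as instruments:
#   E[(y_t - rho*y_{t-1}) * y_{t-1}] = 0
#   E[(y_t - rho*y_{t-1}) * y_{t-2}] = 0
y0, y1, y2 = y[2:], y[1:-1], y[:-2]
a = np.array([np.mean(y0 * y1), np.mean(y0 * y2)])  # sample E[y_t * z_t]
b = np.array([np.mean(y1 * y1), np.mean(y1 * y2)])  # sample E[y_{t-1} * z_t]

# One-step GMM with identity weighting: minimize (a - rho*b)'(a - rho*b),
# whose closed-form solution is a projection because g(rho) is linear in rho.
rho_hat = (b @ a) / (b @ b)
print(rho_hat)  # close to 0.6
```

With more moments than parameters, efficient two-step GMM would replace the identity weight with the inverse of an estimated moment covariance matrix; the one-step version above keeps the sketch short.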

If today we celebrate, then tomorrow we return to work -- obviously there's more to be done. But for today, a resounding bravo to the three deserving winners!