Monday, December 23, 2013

Holiday Haze



Your dedicated blogger is about to vanish in the holiday haze, presumably stumbling back sometime early in the new year.

Random thought: Obviously I guessed that I'd enjoy writing this blog, or I wouldn't have started, but I had no idea how truly satisfying it would be, or, for that matter, that anyone would actually read it! Many thanks, my friends. I look forward to returning soon. Meanwhile, all best wishes for the holidays.


[Photo credit: Marcus Quigmire, from Florida, USA, "Happy Holidays" (uploaded by Princess Mérida), CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0), via Wikimedia Commons]

Monday, December 16, 2013

FRB St. Louis is Far Ahead of the Data Pack

The email below arrived recently from the Federal Reserve Bank of St. Louis. It reminds me of something that's hardly a secret but that nevertheless merits applause: FRBSL's Research Department is a wonderful provider of economic and financial data (FRED and much more) and of related information broadly defined (RePEc and much more).

FRED, ALFRED, GeoFRED, RePEc, FRASER, etc. -- wow!  FRBSL supplies not only the data, but also intuitive and seamless delivery interfaces. They're very much on the cutting edge, constantly innovating and leading.
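
(For the programmatically inclined: pulling a FRED series into Python takes just a few lines. Here's a minimal sketch, assuming the pandas-datareader package; the series ID GDPC1, real GDP, is just an example.)

    import datetime
    import pandas_datareader.data as web  # assumes pandas-datareader is installed

    # Pull real GDP (FRED series GDPC1) from 2000 onward.
    start = datetime.datetime(2000, 1, 1)
    gdp = web.DataReader("GDPC1", "fred", start)
    print(gdp.tail())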

Other Feds of course supply some great data as well. To take just one example close to home, the Real-Time Data Research Center within FRB Philadelphia's Research Department maintains a widely-respected Real-Time Dataset and Survey of Professional Forecasters (and of course my favorites, the ADS Index and GDPplus).

But FRBSL is in a league of its own. Maybe there's been an implicit decision within the System that FRBSL will be the de facto data guru? Or maybe it's just me, not looking around thoroughly enough? I suspect it's a bit of both.

In any event I applaud FRBSL for a job marvelously well done.

Subject: Come visit the St. Louis Fed at the 2014 AEA Conference in Philadelphia


AEA 2014
Please join the Federal Reserve Bank of St. Louis at the
American Economic Association meeting in Philadelphia
Jan. 3-5, 2014
Philadelphia Marriott Downtown | Franklin Hall
Stop by our booths, B322 and B321, to talk to St. Louis Fed experts and learn more about our free data toolkit available to researchers, teachers, journalists and bloggers. The toolkit includes:
  • RePEc: representatives of the popular bibliographic database will be available to discuss the various websites, answer questions, and take suggestions;
  • FRED® (Federal Reserve Economic Data): our signature database, with 150,000 data series from 59 regional, national, and international sources;
  • ALFRED® (Archival Federal Reserve Economic Data): retrieve versions of economic data that were available on specific dates in history, test economic forecasting models, and analyze the decisions made by policymakers;
  • GeoFRED®: map U.S. economic data at the state, county, or metropolitan statistical area (MSA) level;
  • FRASER® (Federal Reserve Archival System for Economic Research): a digital library of economic, financial, and banking materials covering the economic and financial history of the United States and the Federal Reserve System;
  • FRED add-in for Microsoft Excel, plus mobile apps for iPad, iPhone, and Android devices.
Also, take the opportunity to learn more about EconLowdown, our award-winning, FREE classroom resources for K-16 educators and consumers. Learn about money and banking, economics, personal finance, and the Federal Reserve.
See you there.
Federal Reserve Bank of St. Louis | www.stlouisfed.org



Monday, December 9, 2013

Comparing Predictive Accuracy, Twenty Years Later

I have now posted the final pre-meeting draft of the "Use and Abuse" paper (well, more or less "final").

I'll present it as the JBES Lecture at the January 2014 ASSA meetings in Philadelphia. Please join us if you're around. It's Friday, January 3, at 2:30, in Pennsylvania Convention Center Room 2004-C (I think).

By the way, the 2010 Peter Hansen paper that I now cite in my final paragraph, "A Winner's Curse for Econometric Models: On the Joint Distribution of In-Sample Fit and Out-of-Sample Fit and its Implications for Model Selection," is tremendously insightful. I saw Peter present it a few years ago at a Stanford summer workshop, but I didn't fully appreciate it and had forgotten about it until he reminded me when he visited Penn last week. He has withheld the 2010 and later revisions from general circulation, evidently because one section still needs work. Let's hope he gets it revised and posted soon! (A more preliminary 2009 version remains online from a University of Chicago seminar.) One of Peter's key points is that although split-sample model comparisons can be "tricked" by data mining in finite samples, just as all model comparison procedures can, split-sample comparisons appear to be harder to trick, in a sense that he makes precise. That's potentially a very big deal.

Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests

Abstract: The Diebold-Mariano (DM) test was intended for comparing forecasts; it has been, and remains, useful in that regard. The DM test was not intended for comparing models. Much of the large ensuing literature, however, uses DM-type tests for comparing models, in (pseudo-) out-of-sample environments. In that case, simpler yet more compelling full-sample model comparison procedures exist; they have been, and should continue to be, widely used. The hunch that (pseudo-) out-of-sample analysis is somehow the "only," or "best," or even necessarily a "good" way to provide insurance against in-sample over-fitting in model comparisons proves largely false. On the other hand, (pseudo-) out-of-sample analysis remains useful for certain tasks, most notably for providing information about comparative predictive performance during particular historical episodes.
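
(For readers who want the mechanics: the DM statistic is just the standardized mean loss differential between two forecasts. Here's a minimal sketch in Python/NumPy, using squared-error loss and the standard rectangular lag window at horizon h; the function and variable names are mine, not from the paper.)

    import numpy as np

    def dm_stat(e1, e2, h=1):
        """Diebold-Mariano statistic for equal predictive accuracy.

        e1, e2: forecast-error series from two competing forecasts.
        h: forecast horizon; autocovariances up to lag h-1 enter the
           long-run variance (the standard rectangular-window choice).
        Returns the DM statistic, asymptotically N(0,1) under the null.
        """
        d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2  # squared-error loss differential
        T = d.size
        dbar = d.mean()
        # Long-run variance from autocovariances of d up to lag h-1.
        gamma = [((d[k:] - dbar) * (d[:T - k] - dbar)).sum() / T for k in range(h)]
        lrv = gamma[0] + 2.0 * sum(gamma[1:])
        return dbar / np.sqrt(lrv / T)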

Monday, December 2, 2013

The e-Writing Jungle Part 3: Web-Based e-books Using Python / Sphinx

In Parts 1 and 2, I dealt with two extremes: (1) LaTeX to pdf to web, and (2) raw HTML (however arrived at), with math rendered by MathJax. Now let's look at something of a middle ground: Sphinx, a Python package for producing e-books.

Part 3: Python / Sphinx

Parts 1 and 2 of Quantitative Economics, by Stachurski and Sargent, are great routes into Python for economists. There's lots of good comparative discussion of Python vs. Matlab or Julia, the benefits of public-domain, open-source code, etc. And it's always up to the minute, because it's an on-line e-book! Just check it out.

Of course we're interested here in e-books, not Python per se. It turns out, however, that Stachurski and Sargent is also a cutting-edge example of a beautiful e-book. It's effectively written in Python using Sphinx, which is a Python package that started as a vehicle for writing software manuals. But a manual is just a book, and one can fill a book with whatever one wants.

Sphinx is instantly downloadable, beautifully documented (the documentation is written in Sphinx, of course!), and open source (BSD-licensed). Its markup language is reStructuredText, which is powerful yet simple. (You can learn all you need in ten minutes, since math is the only complicated thing, and math stays in LaTeX, rendered either by JavaScript via MathJax or as png images, your choice.) In addition to publishing to HTML, you can publish to LaTeX or pdf.
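
(As a concrete, if minimal, sketch: a Sphinx project is configured by a single Python file, conf.py, and the MathJax-vs-png choice is one line. The project names below are placeholders, not a real project.)

    # conf.py -- minimal Sphinx configuration (names are placeholders)
    project = 'My e-Book'
    master_doc = 'index'  # the top-level source file, index.rst
    extensions = ['sphinx.ext.mathjax']  # or 'sphinx.ext.pngmath' for png images
    html_theme = 'default'
    # Build the HTML version with:  sphinx-build -b html . _build/html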

Want to see how Sphinx performs with math even more dense than Stachurski and Sargent's? Check, for example, the Sphinx book Theoretical Physics Reference. Want to see how it performs with graphics even more slick than Stachurski and Sargent's? Check the Matplotlib documentation. It's all done in Sphinx.

Sphinx is a total class act. In my humble opinion, nothing else in its genre comes close.