Friday, May 29, 2015

Three Reasons to Prefer GDPplus to Simple GDP Averages

Let's start with some notation. GDPe is expenditure-side GDP from BEA. GDPi is income-side GDP from BEA. GDPavg is the average of GDPe and GDPi recently introduced by BEA. GDPplus is the Kalman-smoother extraction of GDP from GDPe and GDPi, produced and published to the web by FRB Philadelphia.

The key insight is that the optimal Kalman-smoother extraction that underlies GDPplus involves averaging not only over series (i.e., GDPe and GDPi), but also over time. Hence:

(1) GDPplus can be calculated for the most recent quarter for which GDPe data are available, even if GDPi data are not yet available for that quarter, because the Kalman smoother optimally interpolates the missing GDPi data and includes that prediction in its assessment (see the simulation sketch after this list). In contrast, GDPavg simply cannot be calculated if GDPi is unavailable.

(2) Desirably, GDPplus is not constrained to lie between the expenditure- and income-side estimates, let alone exactly midway between them, as GDPavg is. Look, for example, at 2014Q1 in the FRB Philadelphia plot here.

(3) Relatedly, GDPplus is robust to the problem of spuriously low Q1 GDP reported in a nice recent NYT piece by Justin Wolfers. For example, the much-discussed mysterious apparent GDP collapse of 2014Q1, based on GDPe, is largely absent from GDPplus, or at least much less pronounced. (Again see the FRB Philadelphia plot here, as well as Tom Stark's fascinating recent FRB Philadelphia "Research Rap.") Evidently GDPplus doesn't suffer as much from the Q1 anomaly for two reasons. You guessed it: (a) it blends GDPe with GDPi, which is not as influenced by the Q1 distortion, and (b) it smooths over time.
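To see the mechanics, here is a deliberately simplified sketch in Python. It is emphatically not the FRB Philadelphia code: it strips the idea down to a single latent AR(1) "true GDP growth" state measured twice with independent errors (ADNSS2's model is richer), and every parameter value below is invented. The point is just that the Kalman smoother averages over both series and time, and keeps running when the latest GDPi observation is missing.

```python
# Minimal sketch of the GDPplus idea (not FRB Philadelphia's code):
# latent true GDP growth follows an AR(1); GDPe and GDPi are noisy
# measurements of it. All parameter values are invented.
import numpy as np

rng = np.random.default_rng(0)

# Simulate latent growth and the two noisy measurements
T, mu, phi = 60, 2.5, 0.5
sig_eta, sig_e, sig_i = 1.0, 0.7, 0.7
x = np.empty(T)
x[0] = mu
for t in range(1, T):
    x[t] = mu + phi * (x[t - 1] - mu) + sig_eta * rng.standard_normal()
gdpe = x + sig_e * rng.standard_normal(T)
gdpi = x + sig_i * rng.standard_normal(T)
gdpi[-1] = np.nan  # income side not yet released for the latest quarter

def kalman_smooth(gdpe, gdpi, mu, phi, q, re, ri):
    """Filter, then RTS-smooth, a scalar AR(1) state observed through
    one or two noisy measurements; NaNs in gdpi are simply skipped."""
    T = len(gdpe)
    a_f, p_f = np.empty(T), np.empty(T)  # filtered means and variances
    a, p = mu, q / (1 - phi ** 2)        # start at the stationary distribution
    for t in range(T):
        if t > 0:                        # prediction step
            a = mu + phi * (a - mu)
            p = phi ** 2 * p + q
        obs = [(gdpe[t], re)]
        if not np.isnan(gdpi[t]):        # use GDPi only when available
            obs.append((gdpi[t], ri))
        for y, r in obs:                 # sequential scalar updates
            k = p / (p + r)
            a += k * (y - a)
            p *= 1 - k
        a_f[t], p_f[t] = a, p
    a_s = a_f.copy()                     # backward (smoothing) pass:
    for t in range(T - 2, -1, -1):       # this is the averaging over time
        p_pred = phi ** 2 * p_f[t] + q
        g = phi * p_f[t] / p_pred
        a_s[t] = a_f[t] + g * (a_s[t + 1] - (mu + phi * (a_f[t] - mu)))
    return a_s

gdp_plus_like = kalman_smooth(gdpe, gdpi, mu, phi,
                              sig_eta ** 2, sig_e ** 2, sig_i ** 2)
print(gdp_plus_like[-1])  # well-defined even though gdpi[-1] is missing
```

Note that the final smoothed value is perfectly well defined even though the final GDPi observation is NaN, which is exactly why a GDPplus-type extraction can be produced before GDPi is released, whereas GDPavg cannot.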

OK, I'll Continue With Google Blogger

First, apologies for the posts about my Google Blogger situation. I fear that they border on spam, but at the same time I need to communicate the situation.

Anyway, wow, what a flow of communications regarding the last post, from more routes and with more concerns than I knew existed (e.g., I had no idea that so many people were using RSS feeds). OK, I'll leave things on Google Blogger, just as always, as it's true that many of the Google problems seem now to have been resolved. (Let me know if you're still getting warnings or whatever.) One remaining thing that's really annoying is that Twitter won't let me tweet anything containing the URL "fxdiebold.blogspot.com". (Evidently Twitter shares blacklists with Google but is negligent in refreshing the list.) That's why the tweets now contain "fxdiebold dot blogspot dot com" instead, which makes things harder for you, since you have to edit the "dots" rather than just tapping directly on a live hyperlink. Thanks for putting up with such nonsense; hopefully that too will soon be fixed.

And finally, many thanks for your support. It's most gratifying to know that I'm serving you well. I view No Hesitations as one of the most important things that I do. Thanks again.

Thursday, May 28, 2015

Cutting Ties With Google Blogger

The nonsense reported in my May 23 post actually goes well beyond what I reported, to the point where I've lost patience. (I'm sure I'll eventually blog further on it.) So henceforth I plan to disintermediate Google, instead hosting the No Hesitations blog on my University of Pennsylvania site, at http://www.ssc.upenn.edu/~fdiebold/NH.html. A crude skeleton site is there now. I will continue to tweet announcements of new posts (again, they'll be at the new URL), which should be automatically forwarded to Facebook. As for Google+ followers, I'm not yet sure what I'll do, but don't worry, I'll figure out a way to keep you alerted. As for people who get automatic email alerts, first-best is to switch to following on Twitter; second-best is to email me, and I'll put you on a new blog-notification emailing list if I create one. (But please, if at all possible, go the first-best route.) I'm sure it will be a bit of a rough ride for a couple of months, but we'll get there. Thanks very, very much for your support. Should be interesting.

Tuesday, May 26, 2015

New GDP Series From BEA

BEA's "new product" (see below) -- a U.S. GDP estimate that's a simple average of expenditure- and income-side GDP estimates -- is not yet at the cutting edge of historical GDP estimation.

On the benefits of blending the expenditure- and income-side historical GDP estimates, see ADNSS1 for a forecast-combination perspective and ADNSS2 for a Kalman-filtering signal-extraction perspective. The ADNSS1 "combined" GDP estimate is a convex combination of the expenditure- and income-side GDP estimates, and BEA's equal-weight average is a very special case that is generally sub-optimal (a small illustration follows below). Moreover, ADNSS2's Kalman-filter approach is likely superior to ADNSS1's convex-combination approach, for reasons detailed by ADNSS2, and for some years now it has been implemented and published to the web by FRB Philadelphia as "GDPplus".
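For intuition on why equal weights are generally sub-optimal, here's a back-of-the-envelope Python sketch of the classic Bates-Granger combination problem, using invented error variances (not ADNSS1's estimates): the variance-minimizing convex weight equals 1/2 only in the special case where the two measurement-error variances coincide.

```python
# Variance-minimizing convex combination of two noisy estimates of the
# same object, with invented error variances. Equal weights (0.5) are
# optimal only when var_e == var_i (given the error covariance).

def optimal_weight(var_e, var_i, cov_ei=0.0):
    """Bates-Granger weight on the expenditure-side estimate."""
    return (var_i - cov_ei) / (var_e + var_i - 2 * cov_ei)

def combined_variance(lam, var_e, var_i, cov_ei=0.0):
    """Error variance of lam * GDPe + (1 - lam) * GDPi."""
    return lam ** 2 * var_e + (1 - lam) ** 2 * var_i + 2 * lam * (1 - lam) * cov_ei

var_e, var_i = 0.6, 1.0                      # hypothetical error variances
lam = optimal_weight(var_e, var_i)
print(lam)                                   # 0.625, not 0.5
print(combined_variance(lam, var_e, var_i))  # 0.375
print(combined_variance(0.5, var_e, var_i))  # 0.400: equal weights lose
```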


Nevertheless, I applaud BEA's new averaged GDP. If it's not at the cutting edge, it's still much superior to the standard approach of doing nothing -- that is, using expenditure-side GDP alone -- and it's an official acknowledgment of the wastefulness of doing so. Hence it's a significant step in the right direction. Hopefully its publication by BEA will nudge people away from uncritical and exclusive reliance on expenditure-side GDP.

May 14, 2015
Twitter: @BEA_News
www.bea.gov

Coming in July: 
BEA to Launch New Tools for Analyzing Economic Growth

WASHINGTON – The Bureau of Economic Analysis plans to launch two new statistics that will serve as tools to help businesses, economists, policymakers and the American public better analyze the performance of the U.S. economy. These tools will be available on July 30 and emerge from an annual BEA process where improvements and revisions to GDP data are implemented. BEA created these two new tools in response to demand from our customers.

Average of Gross Domestic Product (GDP) and Gross Domestic Income (GDI)

-- BEA will launch a new series that is an average of GDP and GDI, giving users another way to track U.S. economic growth.

-- BEA will present a nominal (or current-dollar) measure of the series and an inflation-adjusted (or chained-dollar) measure of the series.

-- For current dollars, the new measure will be a simple, equally weighted average of GDP and GDI for any given quarter or year.

-- For chained dollars, the new measure will be the current-dollar value deflated by the GDP price index.

-- The new series will be available back to 1929 on an annual basis and to 1947 on a quarterly basis.

-- The new series will not only provide users with another barometer on the U.S. economy but also make available series that several independent experts have recommended using in their analysis of the nation’s economic growth.

-- The new series could help account for known measurement inconsistencies between the two statistics. Those may include timing differences, gaps in underlying source data, and survey measurement errors.

-- The new statistics will be available in BEA’s interactive database as well as in the GDP news release tables.
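To make the release's recipe concrete, here is a toy calculation in Python with invented numbers (the actual series, recall, will run annually back to 1929 and quarterly back to 1947 in BEA's interactive database):

```python
# Toy illustration of BEA's two-step recipe; all numbers are invented.
gdp_nominal = 17000.0    # hypothetical current-dollar GDP, $ billions
gdi_nominal = 17200.0    # hypothetical current-dollar GDI, $ billions
gdp_price_index = 108.5  # hypothetical GDP price index (base period = 100)

# Step 1: current dollars -- simple, equally weighted average of GDP and GDI
avg_nominal = (gdp_nominal + gdi_nominal) / 2

# Step 2: chained dollars -- deflate the current-dollar value by the GDP price index
avg_chained = avg_nominal / (gdp_price_index / 100)

print(avg_nominal)            # 17100.0
print(round(avg_chained, 1))  # 15760.4
```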

Saturday, May 23, 2015

No Hesitations is Not a Phishing Site!

Google's "automatic system" a few days ago "determined" that No Hesitations (fxdiebold.blogspot.com) was a phishing site. (Phishing sites attempt to scam users into revealing credit card numbers, etc. No Hesitations is not a phishing site! Indeed that's obviously impossible, as readers are never asked for any information of any kind.) So Google shut it down, with only a terse and uninformative machine-generated "no-reply" email to me. Literally, No Hesitations just vanished! I then found a way to request a human review, which Google did. They immediately agreed that they were mistaken, and they restored the site. HOWEVER, they neglected to remove No Hesitations from Google's "Safe Browsing" blacklist, so that a warning may appear when you attempt to access the site, depending on your browser and its settings. Obviously I have notified Google of this remaining problem, which they'll hopefully fix soon (although it's not obvious that they will, as all the automated Google stuff is very much a black box). Meanwhile, if you get the bogus warning, just click on "details" and then "visit this infected site," and you'll be in. (Regarding "this infected site": thanks a lot, Google. Totally insulting, and surely harmful to the site's traffic. Isn't your motto "Don't be evil"?)

Wednesday, May 20, 2015

Bond Yields, Macro Fundamentals, and Policy

Greetings my friends from Eurovision in Vienna. Yes, OK, that's not exactly the real reason I'm here, but still...

As I said in an earlier post that stressed DNS/AFNS yield-curve modeling with the zero lower bound imposed, "although Nelson-Siegel is almost thirty years old, and DNS/AFNS is almost a teenager, interesting and useful new variations keep coming." Another intriguing DNS/AFNS literature strand concerns the interaction of bond yields and macro fundamentals. That's hardly a new area, but recent work has some interesting twists.

Mesters, Schwaab and Koopman (2015) (MSK) focus on the effects of central bank policy on bond yields. There's lots of interesting new tech (stochastic volatility in measurement errors, interactions with non-Gaussian variables, a novel importance sampler for likelihood evaluation, ...). But most interestingly, MSK explore not only conventional policy tools like the overnight lending rate, but also direct measures of bond purchases.

MSK build on Diebold, Rudebusch and Aruoba (2006) (DRA), but the DRA emphasis is different. DRA were interested in whether and how the yield curve is linked to "standard" macro fundamentals. So DRA emphasized inflation, with an eye toward the yield curve level, and real activity, with an eye toward the yield curve slope. DRA also included an overnight lending rate, but certainly no measures of bond purchases.


Lots of interesting MSK-style work remains to be done. For example, someone needs to do an MSK-style analysis in a shadow-rate model that respects the zero lower bound and imposes no-arb. Also, someone needs to explore both causal directions more thoroughly. (Of course central bank bond purchases might influence the yield curve, but so too does the yield curve influence central bank bond purchases.)

Thursday, May 14, 2015

Interesting New Work on Yield Curve Modeling

Loved last week's PIER lectures at Penn. Good people, good times, good spring weather. (Please join us next year in May 2016! More information in due course.) On Thursday we did yield curves, which had me thinking about what's new that I like in that area. Not surprisingly, I'm a fan of dynamic Nelson-Siegel (DNS), arbitrage-free Nelson-Siegel (AFNS), and the many variations. (See the Diebold-Rudebusch 2013 book.) What's more surprising is that although Nelson-Siegel is almost thirty years old, and DNS/AFNS is almost a teenager, interesting and useful new variations keep coming along.
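For readers who haven't seen it, the DNS curve is just the three-factor Nelson-Siegel form, with the factors interpreted as level, slope, and curvature. A minimal Python sketch (the factor values and decay parameter below are made up purely for illustration):

```python
# Nelson-Siegel yield curve in its DNS three-factor form.
# Factor values and the decay parameter lam are invented for illustration.
import numpy as np

def dns_yield(tau, level, slope, curvature, lam=0.5):
    """Yield at maturity tau (years) given level, slope, and curvature factors."""
    h = (1 - np.exp(-lam * tau)) / (lam * tau)   # slope factor's loading
    return level + slope * h + curvature * (h - np.exp(-lam * tau))

taus = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])
print(dns_yield(taus, level=4.0, slope=-2.0, curvature=1.0))
```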

The most important new work concerns imposition of the zero lower bound (ZLB). Fischer Black's "shadow rate" approach has influenced me most. Recently it's been taken to new heights by Glenn Rudebusch and coauthors at the Federal Reserve Bank of San Francisco (e.g., Christensen and Rudebusch 2015 -- just published in Journal of Financial Econometrics), and Leo Krippner at the Reserve Bank of New Zealand (see his wonderful 2015 book). The amazing thing is that one can stay in the DNS/AFNS framework -- the key tractable subclass of Gaussian affine models -- and still respect the ZLB by appropriately truncating simple simulations. The figure below, assembled from some of Krippner's, says it all. Also see these slides.   
[Figure: shadow-rate construction respecting the ZLB, assembled from some of Krippner's figures.]
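The truncation idea is simple enough to convey in a few lines. Below is a stylized Python sketch -- emphatically not Krippner's or Christensen-Rudebusch's models, and covering only the expected-short-rate piece rather than full bond pricing -- with invented parameters. Simulate Gaussian shadow-rate paths, apply Black's observed rate max(shadow, 0), and average across paths; the resulting expected path respects the ZLB even though the shadow rate itself is Gaussian.

```python
# Stylized shadow-rate truncation: Gaussian AR(1) shadow-rate paths,
# truncated at zero a la Black. All parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_months = 20000, 120
kappa, theta, sigma = 0.02, 3.0, 0.25   # slow monthly reversion toward 3 percent
s = np.full(n_paths, -1.0)              # shadow rate starts below zero

expected_short_rate = np.empty(n_months)
for m in range(n_months):
    s = s + kappa * (theta - s) + sigma * rng.standard_normal(n_paths)
    expected_short_rate[m] = np.maximum(s, 0.0).mean()  # truncate at the ZLB

print(expected_short_rate[[0, 11, 59, 119]])  # lifts off zero only gradually
```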
I'm also partial to shadow-rate ZLB work by Cynthia Wu and coauthors at Chicago and San Diego (e.g. Wu and Xia, 2014). (Thanks to Jim Hamilton, her Ph.D. advisor, for reminding me!) See the monthly Wu-Xia shadow short rate series, produced and published to the web by FRB Atlanta.


Last and not at all least is the recent "ARG0" work of Monfort et al., which imposes the ZLB in a very different and elegant way. Again see these slides.   


Another interesting strand of recent DNS/AFNS progress concerns modeling the interaction of bond yield factors, macro fundamentals, and central bank policy.  More on that sometime soon.

Sunday, May 10, 2015

JPMorgan, Data-Rich Analyses, and the Public Good

I recently received an invitation to the JPMorgan Chase event below.

Reaction 1: JPMC should stick to its business, which is business, working to maximize the shareholder wealth with which it is entrusted, leaving to others (like me) the "provision of data-rich analyses and expert insights for promotion of the public good."

Reaction 2: JPMC is sticking to business, maximizing shareholder wealth, but not in appropriate ways.  Seriously, is it just me, or does this absolutely reek of Wall Street financiers working to capture Pennsylvania Avenue regulators? (I love that the event is actually on Pennsylvania Avenue.) By the way, I was wondering what Tony Blair knows about "provision of data-rich analyses." I still have no idea, but a quick Googling of "Tony Blair JPMorgan Chase" reveals that he's now very prominently on the JPMC payroll.

The silver lining: after this post, I doubt I'll ever be invited again.



Jamie Dimon
Chairman and CEO of JPMorgan Chase & Co.

and

Diana Farrell
Founding President and CEO of the JPMorgan Chase Institute 

Invite you to the launch of the JPMorgan Chase Institute –
a global think tank dedicated to delivering data-rich analyses and expert insights for the public good.


Discussion
A preview of the JPMorgan Chase Institute's consumer data asset and groundbreaking first research report on individual income and consumption volatility
Speakers
Jamie Dimon, Chairman and CEO of JPMorgan Chase & Co.
Diana Farrell, Founding President and CEO of the JPMorgan Chase Institute
Tony Blair, Quartet Representative and Former Prime Minister of Great Britain and Northern Ireland
Panel
David Wessel, Senior Fellow at the Brookings Institution, Former Economics Editor at The Wall Street Journal
Zoƫ Baird, CEO and President of the Markle Foundation
Heather Boushey, Executive Director of Washington Center for Equitable Growth
Robert Groves, Provost of Georgetown University, Former Director of US Census Bureau
Location
The Newseum
Knight Conference Center
555 Pennsylvania Ave NW
Washington, DC 20001

This invitation is non-transferrable.
JPMorgan Chase seeks to comply with applicable rules concerning meals, gifts and entertainment offered to public officials and employees, including related disclosure requirements. We estimate the cost of hospitality to be provided at JPMorgan Chase & Co. Institute Launch to be $27.00 per person. To the extent you wish to pay the cost of, or to decline, the hospitality to be provided at this event please contact Kathryn Kulp at kathryn.kulp@jpmchase.com to make the necessary arrangements.
©JPMorgan Chase & Co. All Rights Reserved. JPMorgan Chase Bank, N.A. Member FDIC. All services are subject to applicable laws and regulations and service terms. Not all products and services are available in all geographic areas. Eligibility for particular products and services is subject to final determination by J.P. Morgan and/or its affiliates/subsidiaries.


Monday, May 4, 2015

Measuring Predictability

A friend writes the following.  (I have edited very slightly for clarity.)
Based on forecasts you've seen, what would you say is a "reasonable" ratio of the standard deviation of the forecast error to the standard deviation of a covariance-stationary series being forecast? ... It would be great if you can tell me "I'd consider x reasonable and y too high."

The problem is that the premise underlying the question (namely, that there is such a "reasonable" value of the ratio \(r\) of the innovation standard deviation to the unconditional standard deviation) is false. That is, there's no small value \(c\) of \(r\) such that \(r < c\) means that we've done a good forecasting job. Equivalently, there's no large value \(c'\) of the predictive \(R^2\) (where \(R^2 = 1 - r^2\)) such that \(R^2 > c'\) means that we've done a good forecasting job. Instead, "good" \(c\) or \(c'\) values depend critically on the dynamic nature of the series being forecast. Consider, for example, a covariance-stationary AR(1) process, \(y_t = \phi y_{t-1} + \varepsilon_t\), where \(\varepsilon_t \sim iid~(0, \sigma^2)\). The innovation variance is \(\sigma^2\) and the unconditional variance is \(\sigma^2 / (1 - \phi^2)\), so for one-step-ahead forecasts \(r = \sqrt{1 - \phi^2}\) and \(R^2 = \phi^2\). Hence the lower bound on \(r\) (and the upper bound on \(R^2\)) depends entirely on \(\phi\) and can be anywhere in the unit interval! This is an important lesson: "predictability" can (and does) differ greatly across economic series. For more than you ever wanted to know, see Diebold and Kilian (2001), "Measuring Predictability: Theory and Macroeconomic Applications".
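A few lines of Python make the AR(1) arithmetic concrete:

```python
# One-step-ahead predictability of a covariance-stationary AR(1):
# r = innovation sd / unconditional sd = sqrt(1 - phi^2), so R^2 = phi^2.
import numpy as np

for phi in (0.1, 0.5, 0.9, 0.99):
    r = np.sqrt(1 - phi ** 2)
    print(f"phi = {phi:5.2f}:  r = {r:.3f},  one-step R^2 = {1 - r ** 2:.3f}")
```

No single cutoff \(c\) could classify all four cases sensibly: an excellent one-step forecast of a weakly dependent series necessarily has \(r\) near one.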
The problem is that the premise underlying the question (namely, that there is such a "reasonable" value of the ratio \(r\) of innovation variance to unconditional  variance) is false.  That is, there's no small value \(c\) of \(r\) such that \(r<c\) means that we've done a good forecasting job.  Equivalently, there's no large value \(c'\) of the predictive \( R^2~ (R^2 = 1 - r^2) \) such that \(R^2 > c'\) means that we've done a good forecasting job.  Instead, "good" \(c\) or \(c'\) values depend critically on the dynamic nature of the series being forecast.  Consider, for example, a covariance-stationary AR(1) process, \(y_t = \phi y_{t-1} + \varepsilon_t\), where \(\varepsilon_t \sim iid (0, \sigma^2)\). The innovation variance is \(\sigma^2\) and the unconditional variance is \(\sigma^2 / (1 - \phi^2)\), so the lower bound on  \(r\) (and hence the upper bound on  \(R^2\)) depends entirely on \(\phi\) and can be anywhere in the unit interval! This is an important lesson: "predictability" can (and does) differ greatly across economic series. For more than you ever wanted to know, see Diebold and Kilian (2001), "Measuring Predictability: Theory and Macroeconomic Applications".