Monday, October 29, 2018

Becker Friedman Expectations Conference

I just returned from a great BFI Conference at U Chicago, Developing and Using Business Expectations Data, organized by Nick Bloom and Steve Davis.

Wonderfully, density forecasts, as opposed to point forecasts, were featured throughout. There was the latest on central bank surveys (e.g., Binder et al.), but most informative (to me) was the emphasis on surveys that I'm less familiar with, typically soliciting density expectations from hundreds or thousands of C-suite types at major firms. Examples include Germany's important IFO survey (e.g., Bachmann et al.), the U.S. Census Management and Organizational Practices Survey (e.g., Bloom et al.), and fascinating work in progress at FRB Atlanta.

The Census survey is especially interesting due to its innovative structuring of histogram bins. There are no fixed bins. Instead, respondents supply five bins of their own choosing, together with five corresponding probabilities (which sum to 1). This solves a problem that plagues fixed-bin surveys: (lazy? behaviorally-biased?) respondents routinely and repeatedly assigning 0 probability to subsequently-realized events.
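For concreteness, here's a minimal sketch (my illustration, not the Census instrument itself) of how one might summarize such a respondent-chosen histogram, treating the five bins as support points with attached probabilities:

```python
import numpy as np

def subjective_moments(support, probs):
    """Implied subjective mean and std. dev. from a five-point histogram."""
    support = np.asarray(support, dtype=float)
    probs = np.asarray(probs, dtype=float)
    assert support.size == 5 and probs.size == 5, "five bins, five probabilities"
    assert np.isclose(probs.sum(), 1.0), "probabilities must sum to 1"
    mean = probs @ support
    std = np.sqrt(probs @ (support - mean) ** 2)
    return mean, std

# Hypothetical respondent: five sales-growth scenarios (percent) and probabilities.
m, s = subjective_moments([-5.0, 0.0, 3.0, 6.0, 10.0],
                          [0.05, 0.15, 0.40, 0.30, 0.10])
print(f"subjective mean = {m:.2f}%, std. dev. = {s:.2f}%")
```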

Sunday, October 28, 2018

Expansions Don't Die of Old Age

As the expansion ages, there's progressively more discussion of whether its advanced age makes it more likely to end. The answer is no. More formally, postwar U.S. expansion hazards -- where the hazard at age t is the probability that an expansion ends at age t, given that it has survived to age t -- are basically flat, in contrast to contraction hazards, which are sharply increasing. Of course the present expansion will eventually end, and it may even end soon, but its age is unrelated to its probability of ending.
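To fix ideas, here's a minimal sketch of the object in question -- the empirical discrete hazard -- using hypothetical durations (not the actual NBER chronology):

```python
import numpy as np

def empirical_hazard(durations):
    """hazard(t) = (# spells ending at age t) / (# spells surviving to age t)."""
    durations = np.asarray(durations)
    return {t: (durations == t).sum() / (durations >= t).sum()
            for t in range(1, int(durations.max()) + 1)}

# Hypothetical expansion durations in months, for illustration only.
hazard = empirical_hazard([12, 24, 36, 45, 58, 73, 92, 106, 120])
```

A flat expansion hazard means those ratios show no systematic trend as t grows.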

All of this is very clear in Diebold, Rudebusch and Sichel (1992). See Figure 6.2 on p. 271. (Sorry for the poor photocopy quality.) The flat expansion hazard result has held up well (e.g., Rudebusch (2016)), and moreover it would only be strengthened by the current long expansion.

[I blogged on flat expansion hazards before, but the message bears repeating as the expansion continues to age.]

Thursday, October 4, 2018

In Memoriam Herman Stekler

I am sad to report that Herman Stekler passed away last month; I learned of it only now. He was a very early, important, and colorful -- indeed unique -- personage in the forecasting community, making especially noteworthy contributions to forecast evaluation.
https://forecasters.org/herman-stekler_oracle-oct-2018/

Tuesday, October 2, 2018

Tyranny of the Top 5 Econ Journals

Check out:

PUBLISHING AND PROMOTION IN ECONOMICS: THE TYRANNY OF THE TOP FIVE 
by
James J. Heckman and Sidharth Moktan 
NBER Working Paper 25093
http://www.nber.org/papers/w25093

Heckman and Moktan examine a range of data from a variety of perspectives, analyze them thoroughly, and pull no punches in describing their striking results.

It's a great paper. There's a lot I could add, maybe in a future post, but my blood pressure is already high enough for today. So I'll just leave you with a few choice quotes from the paper ["T5" means "top-5 economics journals"]:

"The results ... support the hypothesis that the T5 influence operates through channels that are independent of article quality."

"Reliance on the T5 to screen talent incentivizes careerism over creativity."

"Economists at highly ranked departments with established reputations are increasingly not publishing in T5 or field journals and more often post papers online in influential working paper series, which are highly cited, but not counted as T5s."

"Many non-T5 articles are better cited than many articles in T5 journals. ...  Indeed, many of the most important papers published in the past 50 years have been too innovative to survive the T5 gauntlet."

"The [list of] most cited non-T5 papers reads like an honor roll of economic analysis."

"The T5 ignores publication of books. Becker’s Human Capital
(1964) has more than 4 times the number of citations of any paper listed on RePEc. The exclusion of books from citation warps incentives against broad and integrated research and towards writing bite-sized fragments of ideas."

Saturday, September 29, 2018

RCT's vs. RDD's

Art Owen and Hal Varian have an eye-opening new paper, "Optimizing the Tie-Breaker Regression Discontinuity Design".

Randomized controlled trials (RCT's) are clearly the gold standard in terms of statistical efficiency for teasing out causal effects. Assume that you really can do an RCT. Why then would you ever want to do anything else?

Answer: There may be important considerations beyond statistical efficiency. Take the famous "scholarship example". (You want to know whether receipt of an academic scholarship causes enhanced academic performance among strong scholarship test performers.) In an RCT approach you're going to give lots of academic scholarships to lots of randomly-selected people, many of whom are not strong performers. That's wasteful. In a regression discontinuity design (RDD) approach ("give scholarships only to strong performers who score above X in the scholarship exam, and compare the performances of students who scored just above and below X"), you don't give any scholarships to weak performers. So it's not wasteful -- but the resulting inference is statistically inefficient. 

"Tie breakers" implement a middle ground: Definitely don't give scholarships to bottom performers, definitely do give scholarships to top performers, and randomize for a middle group. So you gain some efficiency relative to pure RDD (but you're a little wasteful), and you're less wasteful than a pure RCT (but you lose some efficiency).

Hence there's a trade-off, and your location on it depends on the size of your middle group. Owen and Varian characterize the trade-off and show how to optimize the size of the middle group. Really nice, clean, and useful.

[Sorry but I'm running way behind. I saw Hal present this work a few months ago at a fine ECB meeting on predictive modeling.]

Sunday, September 23, 2018

NBER WP's Hit 25,000

A few weeks ago the NBER released working paper number 25000. What a great NBER service -- there have been 7.6 million downloads of NBER WP's in the last year alone.


This milestone is of both current and historical interest. The history is especially interesting. As Jim Poterba notes in a recent communication:
This morning's "New this Week" email included the release of the 25000th NBER working paper, a study of the intergenerational transmission of human capital by David Card, Ciprian Domnisoru, and Lowell Taylor.  The NBER working paper series was launched in 1973, at the inspiration of Robert Michael, who sought a way for NBER-affiliated researchers to share their findings and obtain feedback prior to publication.  The first working paper was "Education, Information, and Efficiency" by Finis Welch.  The design for the working papers -- which many will recall appeared with yellow covers in the pre-digital age -- was created by H. Irving Forman, the NBER's long-serving chart-maker and graphic artist.
Initially there were only a few dozen working papers per year, but as the number of NBER-affiliated researchers grew, particularly after Martin Feldstein became NBER president in 1977, the NBER working paper series also expanded.  In recent years, there have been about 1150 papers per year.  Over the 45 year history of the working paper series, the Economic Fluctuations and Growth Program has accounted for nearly twenty percent (4916) of the papers, closely followed by Labor Studies (4891) and Public Economics (4877).

Wednesday, September 19, 2018

Wonderful Network Connectedness Piece

Very cool NYT graphics summarizing U.S. Facebook network connectedness. Check it out:
https://www.nytimes.com/interactive/2018/09/19/upshot/facebook-county-friendships.html


They get the same result that Kamil Yilmaz and I have gotten for years in our analyses of economic and financial network connectedness: there is a strong "gravity effect" -- that is, even in the electronic age, physical proximity is the key ingredient to network relationships.
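To fix ideas, here's a minimal sketch of checking for a gravity effect -- regressing log pairwise connectedness on log distance -- using simulated data (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pairwise links: log connectedness declining in log distance.
n = 200
log_dist = rng.uniform(2.0, 8.0, size=n)                   # log km, hypothetical
log_conn = 3.0 - 0.7 * log_dist + rng.normal(0.0, 0.5, n)  # built-in gravity

# OLS of log connectedness on log distance; a negative slope = gravity effect.
X = np.column_stack([np.ones(n), log_dist])
beta, *_ = np.linalg.lstsq(X, log_conn, rcond=None)
print(f"estimated distance elasticity: {beta[1]:.2f}")     # roughly -0.7
```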

Maybe not as surprising for Facebook friends as for financial institutions (say). But still...

Sunday, September 16, 2018

Banque de France’s Open Data Room

See below for the announcement of a useful new product from the Banque de France and its Representative Office in New York.
Banque de France has for many years been at the forefront of disseminating statistical data to academics and other interested parties. Through Banque de France’s dedicated public portal, http://webstat.banque-france.fr/en/, we offer a large set of freely downloadable series (about 40,000 mainly aggregated series).

Banque de France has expanded this service further, launching an “Open Data Room” in Paris in November 2016 that provides researchers with free access to granular data. We are glad to announce that the “Open Data Room” service is now also available to US researchers through the Banque de France Representative Office in New York City.

Saturday, September 15, 2018

An Open Letter to Tren Griffin

[I tried quite hard to email this privately. I post it here only because Griffin has, as far as I can tell, been very successful in scrubbing his email address from the web. Please forward it to him if you can figure out how.]

Mr. Griffin:

A colleague forwarded me your post, https://25iq.com/2018/09/08/risk-uncertainty-and-ignorance-in-investing-and-business-lessons-from-richard-zeckhauser/.  I enjoyed it, and Zeckhauser definitely deserves everyone's highest praise. 

However, your post misses the bigger picture. Diebold, Doherty, and Herring conceptualized and promoted the "Known, Unknown, Unknowable" (KuU) framework for financial risk management, which runs blatantly throughout your twelve "Lessons From Richard Zeckhauser". Indeed the key Zeckhauser article on which you draw appeared in our book, "The Known, the Unknown and the Unknowable in Financial Risk Management", https://press.princeton.edu/titles/9223.html, which we also conceptualized, and for which we solicited the papers and authors and mentored them as regards integrating their thoughts into the KuU framework. The book was published almost a decade ago by Princeton University Press.

I say all this not only to reveal my surprise and annoyance at your apparent unawareness, but also, and more constructively, because you and your readers may be interested in our KuU book, which has many other interesting parts (great as the Zeckhauser part may be), and which, moreover, is more than the sum of its parts. A pdf of the first chapter has been available for many years at http://assets.press.princeton.edu/chapters/s9223.pdf.

Sincerely,

Friday, September 14, 2018

Machine Learning for Forecast Combination

How could I have forgotten to announce my latest paper, "Machine Learning for Regularized Survey Forecast Combination: Partially-Egalitarian Lasso and its Derivatives"? (Actually it's a heavily-revised version of an earlier paper, including a new title.) It came out as an NBER working paper a week or two ago.
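For readers who want the flavor: partially-egalitarian lasso shrinks combination weights toward equality as well as toward zero. Here's a minimal two-step sketch in that spirit -- select forecasters with lasso, then shrink the survivors' weights toward equal weights -- an illustration, not the paper's exact estimator (function name and penalty values are mine):

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_then_shrink(F, y, lam_select=0.1, lam_egal=0.1):
    """F: T x K matrix of individual forecasts; y: length-T realizations."""
    # Step 1: lasso selection discards some forecasters entirely.
    keep = np.flatnonzero(Lasso(alpha=lam_select, fit_intercept=False)
                          .fit(F, y).coef_)
    if keep.size == 0:
        return np.zeros(F.shape[1])
    # Step 2: write w = 1/k + d for the k survivors; lasso on the
    # deviations d then shrinks the weights toward equality (1/k).
    k, Fk = keep.size, F[:, keep]
    d = Lasso(alpha=lam_egal, fit_intercept=False).fit(Fk, y - Fk.mean(axis=1)).coef_
    w = np.zeros(F.shape[1])
    w[keep] = 1.0 / k + d
    return w
```

The second step works because the equal-weight combination is just the row mean of the surviving forecasts, so lasso-shrinking the deviations d toward zero is exactly shrinkage of the weights toward 1/k.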