Monday, March 12, 2018

Sims on Bayes

Here's a complementary and little-known set of slide decks from Chris Sims, deeply insightful as always. Together they address some tensions associated with Bayesian analysis and sketch some resolutions. The titles are nice, and revealing. The first is "Why Econometrics Should Always and Everywhere Be Bayesian". The second is "Limits to Probability Modeling" (with Chris' suggested possible sub-title: "Why are There no Real Bayesians?").

Thursday, March 8, 2018

H-Index for Journals

In an earlier rant, I suggested that journals move from tracking inane citation "impact factors" to citation "H-indexes" or similar, just as is routinely done when evaluating individual authors. It turns out that RePEc already does it, here. Many thousands of journals are ranked; I show the top 25 below. Interestingly, four "field" journals actually make the top 10, effectively making them "super (uber?) field journals" (J. Finance, J. Financial Economics, J. Monetary Economics, and J. Econometrics). For example, J. Econometrics is basically indistinguishable from Review of Economic Studies.
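For reference, the mechanics are simple: a journal's H-index is the largest h such that h of its items have at least h citations each. A minimal Python sketch (the citation counts are hypothetical, purely for illustration):

```python
def h_index(citations):
    """Largest h such that h items each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts for a journal's articles.
print(h_index([10, 8, 5, 4, 3]))  # 4: four articles with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: one blockbuster paper doesn't move the index much
```

Unlike an impact factor, which averages citations over a short recent window, the H-index rewards a deep body of highly cited work and is insensitive to a few outliers.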

The rankings (rank, journal, and H-index, with RePEc's accompanying citation and item counts in parentheses):

1. American Economic Review, American Economic Association -- 262 (389753, 9641, 395549)
2. Journal of Political Economy, University of Chicago Press -- 235 (227534, 2978, 229129)
3. Econometrica, Econometric Society -- 233 (266031, 3530, 267883)
4. The Quarterly Journal of Economics, Oxford University Press -- 231 (211638, 2311, 212892)
5. Journal of Finance, American Finance Association -- 210 (213076, 4558, 215655)
6. Journal of Financial Economics, Elsevier -- 165 (138073, 2654, 149337)
7. Journal of Monetary Economics, Elsevier (also covers Carnegie-Rochester Conference Series on Public Policy, Elsevier) -- 158 (123705, 3334, 128056)
8. Review of Economic Studies, Oxford University Press -- 157 (114359, 2317, 115072)
9. Journal of Econometrics, Elsevier -- 149 (131823, 4160, 141376)
10. Journal of Economic Literature, American Economic Association -- 145 (72916, 891, 73201)
11. Journal of Economic Perspectives, American Economic Association -- 145 (77073, 1673, 77758)
12. The Review of Economics and Statistics, MIT Press -- 140 (109330, 3953, 109953)
13. Economic Journal, Royal Economic Society -- 137 (103763, 3663, 104399)
14. Journal of International Economics, Elsevier -- 122 (76446, 2983, 81228)
15. Review of Financial Studies, Society for Financial Studies -- 119 (66261, 1657, 67063)
16. Journal of Public Economics, Elsevier -- 117 (90038, 3722, 95659)
17. Journal of Development Economics, Elsevier -- 113 (65314, 3111, 68204)
18. European Economic Review, Elsevier -- 111 (74123, 3870, 75847)
19. Journal of Economic Theory, Elsevier -- 108 (83540, 4238, 90223)
20. Journal of Business & Economic Statistics, Taylor & Francis Journals (also covers Journal of Business & Economic Statistics, American Statistical Association) -- 105 (44499, 1726, 44729)
21. Journal of Money, Credit and Banking, Blackwell Publishing -- 104 (51991, 2955, 52610)
22. Management Science, INFORMS -- 95 (70575, 6891, 76945)
23. Journal of Banking & Finance, Elsevier -- 93 (61760, 4829, 72982)
24. International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association -- 91 (48542, 2537, 48823)
25. Journal of Labor Economics, University of Chicago Press -- 91 (37921, 1084, 38993)

Wednesday, February 28, 2018

The Rate of Return on Everything

Jorda, Knoll, Kuvshinov, Schularick and Taylor deliver more than just a memorable title, "The Rate of Return on Everything, 1870-2015". (Dec 2017 NBER version here; earlier ungated June 2017 version here.) Their paper is a fascinating exercise in data construction and analysis. It goes well beyond, say, the earlier and also-fascinating Dimson et al. (2002) book, by including housing, among other things.

Caveat emptor: In this case two words suffice -- survivorship bias. Jorda et al. are well aware of it, and they work hard to assess and address it. But still.

Monday, February 26, 2018


I recently discussed how the nonparametric consistency of wide NN's proved underwhelming, which is partly why econometricians lost interest in NN's in the 1990s.

The other thing was the realization that NN objective surfaces are notoriously bumpy, so that arrival at a local optimum (e.g., by the stochastic gradient descent popular in NN circles) offered little comfort.

So econometricians' interest declined on both counts. But now both issues are being addressed. The new focus on NN depth as opposed to width is bearing much fruit. And recent advances in "reinforcement learning" methods effectively promote global as opposed to just local optimization, by experimenting (injecting randomness) in clever ways. (See, e.g., Taddy section 6, here.)
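To see why bumpy surfaces plus purely local search are a problem, here is a minimal Python sketch -- not reinforcement learning itself, just the simplest possible randomness-injection device (random restarts) applied to a toy double-well objective:

```python
import random

def f(x):   # a bumpy (double-well) objective
    return x**4 - 8*x**2 + 3*x

def df(x):  # its gradient
    return 4*x**3 - 16*x + 3

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

# A single run from an unlucky start converges to the nearby local minimum...
x_local = gradient_descent(2.0)

# ...while injecting randomness (here, plain random restarts) explores
# the whole surface and finds the lower, global minimum.
random.seed(0)
starts = [random.uniform(-3.0, 3.0) for _ in range(20)]
x_best = min((gradient_descent(s) for s in starts), key=f)

print(f(x_local) > f(x_best))  # True: the randomized search does strictly better
```

The clever RL-style methods do this exploration far more intelligently than blind restarts, but the payoff is the same: a credible claim on the global, not just a local, optimum.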

All told, it seems like quite an exciting new time for NN's. I've been away for 15 years. Time to start following again...

Wednesday, February 21, 2018

Larry Brown

Larry Brown has passed away. Larry was a giant of modern statistics and a towering presence at Penn. At the same time, everyone who knew him liked him, immensely. He will be missed dearly, both professionally and personally.

I received the obituary below from Penn's Statistics Department.

Lawrence David Brown

Lawrence D. Brown died peacefully at 6:30 a.m. on Feb. 21, 2018, at the age of 77. Larry preserved his unfailing fortitude and good humor to his last day.

Larry was born on Dec. 16, 1940, in Los Angeles, California. His parents moved to Alexandria, VA, during World War II, then returned to California. His father, Louis Brown, was a successful tax lawyer and later a professor of law at the University of Southern California, where he worked tirelessly on behalf of client services and conflict prevention, for which he coined the phrase "preventive law." His mother, Hermione Kopp Brown, studied law in Virginia and then in Los Angeles and became one of the leading women lawyers in Los Angeles in the field of entertainment law, with emphasis on estate planning. Larry inherited their dedication to service, their mental acuity and resourcefulness, and their selfless good spirits.

Larry graduated from Beverly Hills High School in 1957 and from the California Institute of Technology in 1961 and earned his Ph.D. in mathematics from Cornell University three years later. Initially hired at the University of California, Berkeley, he then taught in the mathematics department at Cornell University from 1966-72 and 1978-94 and in the statistics department at Rutgers University from 1972-78; he moved to the Wharton School at the University of Pennsylvania in 1994 and taught his last course there as the Miers Busch Professor of Statistics in the fall of 2017.

One of the leading statisticians of his generation, he was the recipient of many honors, including devoted service as a member of the National Academy of Sciences, election to the American Academy of Arts and Sciences, the presidency of the Institute of Mathematical Statistics, and an honorary doctorate from Purdue University. He was much loved by his colleagues and his students, many of whom hold leading positions in the United States and abroad.
His passion for his work was matched by his devotion to his family. His wife Linda Zhao survives him, as do their sons Frank and Louie, their daughter Yiwen Zhao, his daughters from his first marriage, Yona Alpers and Sarah Ackman, his brothers Marshall and Harold and their wives Jane and Eileen, and 19 grandchildren.

Monday, February 19, 2018

More on Neural Nets and ML

I earlier mentioned Matt Taddy's "The Technological Elements of Artificial Intelligence" (ungated version here).

Among other things the paper has good perspective on the past and present of neural nets. (Read:  his views mostly, if not exactly, match mine...)  

Here's my personal take on some of the history vis a vis econometrics:

Econometricians lost interest in NN's in the 1990s. The celebrated proof by Hal White et al. of NN nonparametric consistency as NN width (the number of neurons) grows at an appropriate rate was ultimately underwhelming, insofar as it merely established for NN's what had been known for decades for various other nonparametric estimators (kernel, series, nearest-neighbor, tree, spline, etc.). That is, there seemed to be nothing special about NN's, so why bother?
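For reference, the flavor of that result (a paraphrase, not the exact statement): take a single-hidden-layer network of width N with sigmoid activation,

```latex
% Width-N single-hidden-layer network, activation \sigma:
f_N(x) \;=\; \beta_0 \;+\; \sum_{j=1}^{N} \beta_j \,\sigma\!\left(\gamma_j' x + c_j\right).
```

Universal approximation says that such networks can approximate any continuous function uniformly on compact sets as N grows, and letting N grow at a suitable rate with the sample size then delivers nonparametric consistency -- precisely the property long established for kernel, series, and nearest-neighbor methods.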

But the non-parametric consistency focus was all on NN width; no one thought or cared much about NN depth. Then, more recently, people noticed that adding NN depth (more hidden layers) could be seriously helpful, and the "deep learning" boom took off. 

Here are some questions/observations on the new "deep learning":

1.  Adding NN depth often seems helpful, insofar as deep learning often seems to "work" in various engineering applications, but where/what are the theorems? What can be said rigorously about depth?

2. Taddy emphasizes what might be called two-step deep learning. In the first step, "pre-trained" hidden-layer nodes are obtained by unsupervised learning (e.g., principal components (PC)) from various sets of variables. The second step then proceeds as usual. That's very similar to the age-old idea of PC regression. Or, in multivariate dynamic environments and econometrics language, "factor-augmented vector autoregression" (FAVAR), as in Bernanke et al. (2005). So, are modern implementations of deep NN's effectively just nonlinear FAVAR's? If so, doesn't that also seem underwhelming, in the sense of -- dare I say it -- there being nothing really new about deep NN's?

3. Moreover, PC regressions and FAVAR's have issues of their own relative to one-step procedures like ridge or LASSO. See this and this.
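The two-step structure in point 2 can be sketched in a few lines of Python (simulated data; an illustration of PC regression generally, not of any particular paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 observations of 50 predictors driven by 3 latent factors.
n, p, k = 200, 50, 3
F = rng.standard_normal((n, k))                # latent factors
L = rng.standard_normal((k, p))                # factor loadings
X = F @ L + 0.1 * rng.standard_normal((n, p))  # observed predictors
y = F @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# Step 1 (unsupervised): extract k principal components of X via SVD.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:k].T                            # first k PC scores

# Step 2 (supervised): regress y on the estimated factors.
Z = np.column_stack([np.ones(n), pcs])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta
r2 = 1 - resid.var() / y.var()
print(round(r2, 3))  # close to 1: three PC's recover the latent factors
```

A deep NN's pre-trained first hidden layer plays the role of `pcs` here; the analogy to PC regression/FAVAR is that the dimension reduction in step 1 never looks at y.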

Tuesday, February 13, 2018

Neural Nets, ML and AI

"The Technological Elements of Artificial Intelligence", by Matt Taddy, is packed with insight on the development of neural nets and ML as related to the broader development of AI. I have lots to say, but it will have to wait until next week. For now I just want you to have the paper. Ungated version at


We have seen in the past decade a sharp increase in the extent that companies use data to optimize their businesses. Variously called the "Big Data" or "Data Science" revolution, this has been characterized by massive amounts of data, including unstructured and nontraditional data like text and images, and the use of fast and flexible Machine Learning (ML) algorithms in analysis. With recent improvements in Deep Neural Networks (DNNs) and related methods, application of high-performance ML algorithms has become more automatic and robust to different data scenarios. That has led to the rapid rise of an Artificial Intelligence (AI) that works by combining many ML algorithms together -- each targeting a straightforward prediction task -- to solve complex problems.

We will define a framework for thinking about the ingredients of this new ML-driven AI.  Having an understanding of the pieces that make up these systems and how they fit together is important for those who will be building businesses around this technology. Those studying the economics of AI can use these definitions to remove ambiguity from the conversation on AI's projected productivity impacts and data requirements.  Finally, this framework should help clarify the role for AI in the practice of modern business analytics and economic measurement.

Monday, February 12, 2018

ML, Forecasting, and Market Design

Nice stuff from Milgrom and Tadelis. Better machine learning yields better forecasting, which in turn helps us design more effective markets -- better anticipating consumer/producer demand/supply movements, more finely segmenting and targeting consumers/producers, more accurately setting auction reserve prices, etc. Presumably full density forecasts, not just the point forecasts on which ML tends to focus, should soon move to center stage.
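On the point-versus-density distinction: minimizing the quantile ("pinball") loss at each of a grid of probabilities traces out an entire predictive distribution rather than a single number. A minimal Python illustration with simulated data (no claim about any particular ML system):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=20_000)  # simulated outcomes

def pinball(forecast, y, tau):
    """Quantile ('pinball') loss, minimized by the tau-quantile."""
    e = y - forecast
    return np.mean(np.where(e >= 0, tau * e, (tau - 1) * e))

# Point forecast: a single number (the mean, optimal under squared error).
point = y.mean()

# Density forecast, summarized by a grid of quantiles: minimize pinball
# loss at each tau to trace out the predictive distribution.
grid = np.linspace(-2.0, 6.0, 401)
quantiles = {tau: grid[np.argmin([pinball(g, y, tau) for g in grid])]
             for tau in (0.1, 0.5, 0.9)}
print(point, quantiles)  # median near 2; 10% and 90% quantiles near 2 -/+ 1.28
```

For market design the tails matter as much as the center -- a reserve price set off a 90th percentile is a different object from one set off a point forecast.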

Monday, February 5, 2018

Big Data, Machine Learning, and Economic Statistics

Greetings from a very happy Philadelphia celebrating the Eagles' victory!

The following is adapted from the "background" and "purpose" statements for a planned 2019 NBER/CRIW conference, "Big Data for 21st Century Economic Statistics". Prescient and fascinating reading. (The full call for papers is here.)

Background: The coming decades will witness significant changes in the production of the social and economic statistics on which government officials, business decision makers, and private citizens rely. The statistical information currently produced by the federal statistical agencies rests primarily on “designed data” -- that is, data collected through household and business surveys. The increasing cost of fielding these surveys, the difficulty of obtaining survey responses, and questions about the reliability of some of the information collected, have raised concerns about the sustainability of that model. At the same time, the potential for using “big data” -- very large data sets built to meet governments’ and businesses’ administrative and operational needs rather than for statistical purposes -- in the production of official statistics has grown.

These naturally-occurring data include not only administrative data maintained by government agencies but also scanner data, data scraped from the Web, credit card company records, data maintained by payroll providers, medical records, insurance company records, sensor data, and the Internet of Things. If the challenges associated with their use can be satisfactorily resolved, these emerging sorts of data could allow the statistical agencies not only to supplement or replace the survey data on which they currently depend, but also to introduce new statistics that are more granular, more up-to-date, and of higher quality than those currently being produced.

Purpose: The purpose of this conference is to provide a forum where economists, data providers, and data analysts can meet to present research on the use of big data in the production of federal social and economic statistics. Among other things, this involves discussing:

1. Methods for combining multiple data sources, whether they be carefully designed surveys or experiments, large government administrative datasets, or private sector big data, to produce economic and social statistics.

2. Case studies illustrating how big data can be used to improve or replace existing statistical data series or create new statistical data series.

3. Best practices for characterizing the quality of big data sources and blended estimates constructed using data from multiple sources.

Monday, January 29, 2018

Structural VAR Analysis

Kilian and Lutkepohl's Structural Vector Autoregressive Analysis is now out. The back-cover blurbs below are not hyperbole. Indeed Harald Uhlig's is an understatement in certain respects -- to his list of important modern topics covered I would certainly add the "external instrument" approach. For more on that, beyond K&L, which went to press some time ago, see Stock and Watson's masterful 2018 external-instrument survey and extension, just now released as an NBER working paper. (Ungated K&L draft here; ungated S&W draft here.)