Monday, February 19, 2018

More on Neural Nets and ML

I earlier mentioned Matt Taddy's "The Technological Elements of Artificial Intelligence" (ungated version here).

Among other things the paper has good perspective on the past and present of neural nets. (Read:  his views mostly, if not exactly, match mine...)  

Here's my personal take on some of the history vis a vis econometrics:

Econometricians lost interest in NN's in the 1990's. The celebrated Hal White et al. proof of NN non-parametric consistency as NN width (number of neurons) gets large at an appropriate rate was ultimately underwhelming, insofar as it merely established for NN's what had been known for decades for various other non-parametric estimators (kernel, series, nearest-neighbor, trees, spline, etc.). That is, it seemed that there was nothing special about NN's, so why bother? 

But the non-parametric consistency focus was all on NN width; no one thought or cared much about NN depth. Then, more recently, people noticed that adding NN depth (more hidden layers) could be seriously helpful, and the "deep learning" boom took off. 
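To make the width/depth distinction concrete, here is a minimal sketch (mine, not Taddy's; the data and layer sizes are purely hypothetical) contrasting a wide single-hidden-layer net, the setting of the White et al. consistency results, with a deep multi-layer net, the setting of modern deep learning. I use scikit-learn's MLPRegressor purely for illustration.

```python
# Hypothetical illustration: "wide" (one hidden layer, many neurons)
# versus "deep" (several stacked hidden layers) on simulated data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(1000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(1000)

# Width: one hidden layer with many neurons (the White et al. setting)
wide = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0).fit(X, y)

# Depth: several stacked hidden layers (the "deep learning" setting)
deep = MLPRegressor(hidden_layer_sizes=(32, 32, 32), max_iter=2000, random_state=0).fit(X, y)

print("wide in-sample R^2:", round(wide.score(X, y), 3))
print("deep in-sample R^2:", round(deep.score(X, y), 3))
```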

Here are some questions/observations on the new "deep learning":

1. Adding NN depth often seems helpful, insofar as deep learning appears to "work" in various engineering applications, but where/what are the theorems? What can be said rigorously about depth?

2. Taddy emphasizes what might be called two-step deep learning. In the first step, "pre-trained" hidden-layer nodes are obtained via unsupervised learning (e.g., principal components (PC)) from various sets of variables. The second step then proceeds as usual. That's very similar to the age-old idea of PC regression (see the sketch after this list). Or, in multivariate dynamic environments and econometrics language, "factor-augmented vector autoregression" (FAVAR), as in Bernanke et al. (2005). So, are modern implementations of deep NN's effectively just nonlinear FAVAR's? If so, doesn't that also seem underwhelming, in the sense of -- dare I say it -- there being nothing really new about deep NN's?

3. Moreover, PC regressions and FAVAR's have issues of their own relative to one-step procedures like ridge or LASSO. See this and this.
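For concreteness, here is a minimal sketch of the two-step idea in point 2: an unsupervised first step compresses many predictors into a few principal components, and a supervised second step regresses the target on them -- i.e., plain PC regression. Everything here (data, dimensions, variable names) is hypothetical, and the illustration is mine rather than anything from Taddy's paper.

```python
# Two-step "PC regression" sketch on simulated data:
# step 1 (unsupervised) extracts a few factors; step 2 (supervised) regresses on them.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical "wide" predictor set: T observations on N candidate variables
T, N, k = 200, 50, 3
X = rng.standard_normal((T, N))
beta = rng.standard_normal(N) / np.sqrt(N)
y = X @ beta + 0.5 * rng.standard_normal(T)

# Step 1 (unsupervised): extract a handful of principal components / "factors"
pca = PCA(n_components=k)
F = pca.fit_transform(X)          # T x k estimated factors

# Step 2 (supervised): regress the target on the estimated factors
pc_reg = LinearRegression().fit(F, y)
print("In-sample R^2 of two-step PC regression:", round(pc_reg.score(F, y), 3))
```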

Tuesday, February 13, 2018

Neural Nets, ML and AI

"The Technological Elements of Artificial Intelligence", by Matt Taddy, is packed with insight on the development of neural nets and ML as related to the broader development of AI. I have lots to say, but it will have to wait until next week. For now I just want you to have the paper. Ungated version at http://www.nber.org/chapters/c14021.pdf.

Abstract:

We have seen in the past decade a sharp increase in the extent that companies use data to optimize their businesses.  Variously called the `Big Data' or `Data Science' revolution, this has been characterized by massive amounts of data, including unstructured and nontraditional data like text and images, and the use of fast and flexible Machine Learning (ML) algorithms in analysis.  With recent improvements in Deep Neural Networks (DNNs) and related methods, application of high-performance ML algorithms has become more automatic and robust to different data scenarios.  That has led to the rapid rise of an Artificial Intelligence (AI) that works by combining many ML algorithms together - each targeting a straightforward prediction task - to solve complex problems.  

We will define a framework for thinking about the ingredients of this new ML-driven AI.  Having an understanding of the pieces that make up these systems and how they fit together is important for those who will be building businesses around this technology. Those studying the economics of AI can use these definitions to remove ambiguity from the conversation on AI's projected productivity impacts and data requirements.  Finally, this framework should help clarify the role for AI in the practice of modern business analytics and economic measurement.

Monday, February 12, 2018

ML, Forecasting, and Market Design

Nice stuff from Milgrom and Tadelis. Better machine learning yields better forecasts, which in turn help us design more effective markets -- better anticipating consumer/producer demand/supply movements, more finely segmenting and targeting consumers/producers, more accurately setting auction reserve prices, etc. Presumably full density forecasts, not just the point forecasts on which ML tends to focus, should soon move to center stage.
http://www.nber.org/chapters/c14008.pdf
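On that last point, here is a minimal sketch of what moving beyond point forecasts might look like with an off-the-shelf ML toolkit: the usual conditional-mean fit alongside a few conditional quantiles that trace out a crude predictive density. The data and tuning choices are hypothetical, and the example is mine, not Milgrom and Tadelis's.

```python
# Point forecast vs. a crude predictive "density" via quantile gradient boosting.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(2000, 1))
y = 2 * X[:, 0] + (0.2 + X[:, 0]) * rng.standard_normal(2000)  # heteroskedastic noise

x_new = np.array([[0.8]])  # hypothetical point at which we want a forecast

# The usual ML point forecast: a conditional-mean fit
point = GradientBoostingRegressor(random_state=0).fit(X, y)
print("point forecast:", point.predict(x_new))

# A rough predictive density: conditional quantiles at several probability levels
for q in (0.1, 0.5, 0.9):
    qfit = GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(X, y)
    print(f"conditional {int(100 * q)}th percentile:", qfit.predict(x_new))
```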

Monday, February 5, 2018

Big Data, Machine Learning, and Economic Statistics

Greetings from a very happy Philadelphia celebrating the Eagles' victory!

The following is adapted from the "background" and "purpose" statements for a planned 2019 NBER/CRIW conference, "Big Data for 21st Century Economic Statistics". Prescient and fascinating reading. (The full call for papers is here.)

Background: The coming decades will witness significant changes in the production of the social and economic statistics on which government officials, business decision makers, and private citizens rely. The statistical information currently produced by the federal statistical agencies rests primarily on “designed data” -- that is, data collected through household and business surveys. The increasing cost of fielding these surveys, the difficulty of obtaining survey responses, and questions about the reliability of some of the information collected have raised concerns about the sustainability of that model. At the same time, the potential for using “big data” -- very large data sets built to meet governments’ and businesses’ administrative and operational needs rather than for statistical purposes -- in the production of official statistics has grown.

These naturally-occurring data include not only administrative data maintained by government agencies but also scanner data, data scraped from the Web, credit card company records, data maintained by payroll providers, medical records, insurance company records, sensor data, and the Internet of Things. If the challenges associated with their use can be satisfactorily resolved, these emerging sorts of data could allow the statistical agencies not only to supplement or replace the survey data on which they currently depend, but also to introduce new statistics that are more granular, more up-to-date, and of higher quality than those currently being produced.

Purpose: The purpose of this conference is to provide a forum where economists, data providers, and data analysts can meet to present research on the use of big data in the production of federal social and economic statistics. Among other things, this involves discussing:

1. Methods for combining multiple data sources, whether they be carefully designed surveys or experiments, large government administrative datasets, or private sector big data, to produce economic and social statistics;

2. Case studies illustrating how big data can be used to improve or replace existing statistical data series or create new statistical data series;

3. Best practices for characterizing the quality of big data sources and blended estimates constructed using data from multiple sources.