Friday, November 11, 2022

Eight steps to Gauss

 Just eight co-authorship steps to Gauss! Small world indeed. And the route backward is not too shabby… 
--> Marc Nerlove --> Kenneth Arrow --> David Blackwell --> Richard Bellman --> Ernst Straus --> Albert Einstein --> Hermann Minkowski --> Carl Friedrich Gauss

Thursday, November 10, 2022

Something May Be Wrong With Me

It strikes me that something may be wrong with me.  

In a new paper in progress I wanted to cite the famous and beautiful Sims, Stock and Watson (1990). I found the bibtex on Jim Stock's Harvard site. Fine. Then I noticed that it listed the authors as Stock, Sims, and Watson. OK, fine, I changed it to the correct alphabetical order of Sims, Stock and Watson. (Probably just Jim's administrative assistant aggrandizing on his behalf.)

Anyway I also noticed that the bibtex omitted middle initials, just giving C. Sims, J. Stock, and M. Watson. The amazing thing, and why something may be wrong with me, is that I was instantly able to supply from memory the full C.A. Sims, J.H. Stock, and M.W. Watson. Do I not have anything better with which to fill my head?!

Indeed it gets worse.  Not only do I have burned into my memory C.W.J. Granger and P.C.B. Phillips, but also their full names, Clive William John Granger and Peter Charles Bonest Phillips.  I really don't know how I learned them, or why I retain them. Of course people like Granger, Phillips, Sims, Stock, and Watson are my heroes, among the very greatest of the past sixty years of econometrics, but still...  

Sunday, October 30, 2022

The Econometrics of Macroeconomic and Financial Data

Last week I received the full published special issue of Journal of Econometrics, 231(2), 2022 (The Econometrics of Macroeconomic and Financial Data). I am deeply grateful and humbled. What a wonderful gesture. Heartfelt thanks to the J. Econometrics Editorial Board, and to all the students, co-authors, and colleagues who contributed. Special thanks to Atsushi Inoue, Lutz Kilian and Andrew Patton for their thoughtful introduction and meticulous editing, and for so generously attempting (twice) to host the associated 60th birthday conference. Clearly COVID did not defeat us!

Thursday, October 27, 2022

Moral Hazard in Climate Change Adaptation

Fascinating color on sea level rise in Jakarta, and good insight into the moral hazard associated with certain types of adaptation.

Abstract:  Sea level rise poses an existential threat to Jakarta, which faces frequent and worsening flooding. The government has responded with a proposed sea wall. In this setting, I study how government intervention complicates long-run adaptation to climate change. I show that government intervention creates coastal moral hazard, and I quantify this force with a dynamic spatial model in which developers and residents act with flood risk in mind. I find that moral hazard generates severe lock-in and limits migration inland, even over the long run.

Wednesday, October 12, 2022

Machine Learning and Central Banking

 Of course machine learning (ML) is everywhere now.  The time-series analysis perspective has matched that of ML for decades (parsimonious predictive modeling allowing for misspecification; out-of-sample evaluation; ensemble averaging; etc.), so there are many areas of overlap even if there are also many differences.

It's interesting to see ML emerging as particularly useful in central banking contexts.  The Federal Reserve Bank of Philadelphia, for example, now explicitly recruits and hires "Machine Learning Economists".  Presently they have three, and they're looking for a fourth!

In that regard it's especially interesting to learn of a call for papers for a special themed issue of Journal of Econometrics on "Machine Learning for Economic Policy", with guest editors from a variety of leading central banks and universities.

See below.


Machine learning techniques are increasingly being evaluated in the academic community and at the same time leveraged by practitioners at policy institutions, like central banks or governments.  A themed issue in the Journal of Econometrics aims to present frontier research that sits at the intersection of machine learning and economic policy.

There are good reasons for policy makers to embrace these new techniques. Tree-based models or artificial neural networks, often in conjunction with novel and rich data sources, like text or high-frequency indicators, can provide prediction accuracy and information that standard models cannot.  For example, machine learning can uncover potentially unknown but important nonlinearities in the data generating process.  Moreover, natural language processing − made possible by advances in machine learning − is increasingly being applied to better understand the economic landscape that policymakers must survey.

The upsides of these new techniques come with a downside: it is often unclear through what mechanism a machine learning model operates, i.e., the black-box critique. The black-box critique arose largely because machine learning models evolved with a single-minded focus on accuracy. That focus can be particularly problematic in decision-making situations, where all stakeholders have an interest in understanding every piece of information that enters the decision-making process, irrespective of model accuracy. The tools of economics and econometrics can help to address this problem, thereby building bridges between disciplines.

Tuesday, October 4, 2022

The Latest in Observation-Driven TVP Models

Check this out.  The implicit stochastic-gradient update seems very appealing relative to the "standard" GAS/DCS explicit update.

"Robust Observation-Driven Models Using Proximal-Parameter Updates", by Rutger-Jan Lange, Bram van Os, and Dick van Dijk.
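For intuition, here is a toy sketch of the explicit-vs-implicit distinction for a time-varying Gaussian mean.  Everything here (function names, learning rate, data) is my own illustrative choice, not the authors' implementation:

```python
# Toy comparison of an explicit (GAS/DCS-style) score update vs an
# implicit (proximal) update for a time-varying Gaussian mean.
# Illustrative sketch only -- not the Lange/van Os/van Dijk implementation.

def explicit_update(f, y, lr=0.5, sigma2=1.0):
    # Standard explicit score step: f' = f + lr * score(y | f)
    return f + lr * (y - f) / sigma2

def implicit_update(f, y, lr=0.5, sigma2=1.0):
    # Proximal step: f' = argmax_g [ log N(y | g, sigma2) - (g - f)^2 / (2*lr) ]
    # For the Gaussian this has the closed form below; the shrinkage toward
    # the previous f automatically damps the response to outliers.
    return (sigma2 * f + lr * y) / (sigma2 + lr)

ys = [0.1, -0.2, 0.0, 10.0, 0.1]  # note the outlier in fourth position
fe = fi = 0.0
for y in ys:
    fe = explicit_update(fe, y)
    fi = implicit_update(fi, y)
# The implicit path reacts less violently to the outlier than the explicit path.
```

For the Gaussian both updates are linear and the proximal step has a closed form; for heavy-tailed observation densities the implicit step generally requires a one-dimensional root-find, which is where its robustness advantage over the explicit score step becomes most interesting.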

Sunday, September 18, 2022

Factor Network Autoregressions

 Check this out, by Barigozzi, Cavaliere, and Moramarco:

Very cool methods for dynamic "multilayer networks".  In a standard N-dim net there's one NxN adjacency matrix.  But richer nets may have many kinds of connections, each governed by its own adjacency matrix.  (What a great insight -- so natural and obvious once you hear it.  A nice "ah-ha moment"!)  So perhaps there are K operative NxN adjacency matrices.  Then the grand adjacency object is actually three-dimensional (NxNxK) -- a cube rather than a square.  Parsimonious modeling then becomes absolutely crucial, and in that regard BCM effectively propose a modeling framework with a "factor structure" for the set of adjacency matrices.  Really eye-opening.  Lots to think about.
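To fix ideas, here is a schematic of the three-dimensional adjacency array and a factor structure on its layers, with hypothetical dimensions and randomly generated entries.  This is just the data structure and the parsimony arithmetic, not the BCM estimator:

```python
import numpy as np

# Schematic of a multilayer network whose K layer adjacency matrices
# load on R << K common "factor" adjacency matrices.
# Hypothetical illustration only -- not the Barigozzi/Cavaliere/Moramarco method.

rng = np.random.default_rng(0)
N, K, R = 5, 4, 2          # nodes, layers (connection types), common factors

F = rng.random((R, N, N))  # R common factor adjacency matrices
Lam = rng.random((K, R))   # layer-specific loadings on the R factors

# Grand N x N x K adjacency array: A[:, :, k] = sum_r Lam[k, r] * F[r]
A = np.einsum('kr,rij->ijk', Lam, F)

# Parsimony: K*N*N free entries are described by only R*N*N + K*R parameters.
```

The payoff is exactly the one in the post: the unrestricted cube has K·N² entries, while the factor structure needs only R·N² + K·R parameters, a large saving when K is big and R is small.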

Saturday, September 3, 2022

Memories of Ted Anderson

Ted is among the very greatest statisticians/econometricians of the 20th century.  I feel very close to him, as my former Penn colleague, Larry Klein, worked closely with him at Cowles in the 1940s, and another former colleague, Bobby Mariano, was his student at Stanford before coming to Penn around 1970.  I recall a Penn seminar he gave late in his career, on unit moving-average roots.  He started painfully slowly, defining, for example, things like "time series" and "covariance stationarity".  Some eyes were rolling.  Ten minutes later, he was far beyond the frontier.  No eyes were rolling.  Indeed jaws were dropping.  When I visited Stanford in the 1990s for a seminar, he rolled out the red carpet for me.  Amazing, him doing that for me.  What a gentleman.

Check out this fascinating new take from Peter Phillips:

By: Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton)
Abstract: T. W. Anderson did pathbreaking work in econometrics during his remarkable career as an eminent statistician. His primary contributions to econometrics are reviewed here, including his early research on estimation and inference in simultaneous equations models and reduced rank regression. Some of his later works that connect in important ways to econometrics are also briefly covered, including limit theory in explosive autoregression, asymptotic expansions, and exact distribution theory for econometric estimators. The research is considered in the light of its influence on subsequent and ongoing developments in econometrics, notably confidence interval construction under weak instruments and inference in mildly explosive regressions.


Equal-weight HAR combination

This just blows me away.  So full of great insight.  Equal-weight combinations rule, in yet another context!  See also my papers with Minchul Shin that clearly lead to equal weights for point and density forecasts, respectively:

Diebold, F.X. and Shin, M. (2019), "Machine Learning for Regularized Survey Forecast Combination: Partially-Egalitarian Lasso and its Derivatives," International Journal of Forecasting, 35, 1679-1691. 

Diebold, F.X., Shin, M. and Zhang, B. (2022), “On the Aggregation of Probability Assessments: Regularized Mixtures of Predictive Densities for Eurozone Inflation and Real Interest Rates,” Journal of Econometrics, forthcoming.  Working paper at arXiv:2012.11649.

 Forecast combination puzzle in the HAR model

By: Adam Clements and Andrey Vasnev
Abstract: The Heterogeneous Autoregressive (HAR) model of Corsi (2009) has become the benchmark model for predicting realized volatility given its simplicity and consistent empirical performance. Many modifications and extensions to the original model have been proposed that often only provide incremental forecast improvements. In this paper, we take a step back and view the HAR model as a forecast combination that combines three predictors: previous day realization (or random walk forecast), previous week average, and previous month average. When Ordinary Least Squares (OLS) is used to combine the predictors, the HAR model uses optimal weights that are known to be problematic in the forecast combination literature. In fact, the simple average forecast often outperforms the optimal combination in many empirical applications. We investigate the performance of the simple average forecast for the realized volatility of the Dow Jones Industrial Average equity index. We find dramatic improvements in forecast accuracy across all horizons and different time periods. This is the first time the forecast combination puzzle is identified in this context.
Keywords: Realized volatility, forecast combination, HAR model
JEL: C53 C58
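To see the reinterpretation concretely, here is a small sketch of the HAR regressors viewed as three competing forecasts and their simple average.  The data are a simulated stand-in for realized volatility, and the code is my own illustration, not the authors':

```python
import numpy as np

# The HAR model regresses next-day RV on three predictors: yesterday's RV,
# the past-week average, and the past-month average.  Viewed as a forecast
# combination, OLS estimates the combining weights; the "puzzle" finding is
# that equal weights often do better.  Illustrative sketch only.

def har_predictors(rv, t):
    """Daily, weekly, and monthly RV averages available at day t (0-indexed)."""
    daily = rv[t]
    weekly = np.mean(rv[t - 4:t + 1])    # last 5 trading days
    monthly = np.mean(rv[t - 21:t + 1])  # last 22 trading days
    return np.array([daily, weekly, monthly])

rng = np.random.default_rng(1)
rv = np.abs(rng.standard_normal(100))  # simulated stand-in for realized volatility

t = 50
preds = har_predictors(rv, t)

# OLS-HAR would weight these predictors via estimated betas;
# the equal-weight alternative simply averages them:
equal_weight_forecast = preds.mean()
```

In combination language, OLS-HAR uses "optimal" weights estimated from the data, and the Clements-Vasnev finding is that the fixed weights (1/3, 1/3, 1/3) often forecast better, the classic combination-puzzle pattern.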

Long memory and weak ID

I've never been a big fan of the weak ID literature.  It always seemed to me that if you wind up with weak ID, it's time to think harder about the underlying economics rather than fancier econometrics.  But this opened my eyes and changed my mind.  Totally cool.

 Weak Identification of Long Memory with Implications for Inference

By: Jia Li (Singapore Management University); Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton); Shuping Shi (Macquarie University); Jun Yu (Singapore Management University)
Abstract: This paper explores weak identification issues arising in commonly used models of economic and financial time series. Two highly popular configurations are shown to be asymptotically observationally equivalent: one with long memory and weak autoregressive dynamics, the other with antipersistent shocks and a near-unit autoregressive root. We develop a data-driven semiparametric and identification-robust approach to inference that reveals such ambiguities and documents the prevalence of weak identification in many realized volatility and trading volume series. The identification-robust empirical evidence generally favors long memory dynamics in volatility and volume, a conclusion that is corroborated using social-media news flow data.
Keywords: Realized volatility, weak identification, disjoint confidence sets, trading volume, long memory
JEL: C12 C13 C58
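A toy calculation conveys the observational-equivalence point: the MA(infinity) weights of pure long memory, (1-L)^(-d), are nearly matched by a near-unit autoregressive root driving antipersistent shocks, (1-rho L)^(-1) (1-L)^(-(d-1)).  This is an illustrative sketch with parameter values of my own choosing, not the paper's inference procedure:

```python
import numpy as np

# Long memory (1-L)^{-d} vs near-unit AR root with antipersistent shocks,
# (1-rho*L)^{-1}(1-L)^{-(d-1)}: as rho -> 1 the two moving-average expansions
# coincide, which is the source of the weak identification.  Toy sketch only.

def frac_weights(d, n):
    """MA(infinity) weights of (1-L)^{-d}: psi_0 = 1, psi_j = psi_{j-1}*(j-1+d)/j."""
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return psi

n, d, rho = 200, 0.4, 0.99
psi_long = frac_weights(d, n)          # pure long memory with d = 0.4

# Near-unit AR(1) root driving antipersistent shocks with exponent d - 1 = -0.6
ar = rho ** np.arange(n)
anti = frac_weights(d - 1.0, n)
psi_equiv = np.convolve(ar, anti)[:n]

# At rho = 1 the two weight sequences are identical; at rho = 0.99 they are
# nearly indistinguishable at short lags.
```

At rho exactly 1 the identity (1-L)^(-1)(1-L)^(-(d-1)) = (1-L)^(-d) makes the match exact, so in finite samples the data can barely distinguish the two configurations.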