Sunday, September 18, 2022

Factor Network Autoregressions

Check this out, by Barigozzi, Cavaliere, and Moramarco:
http://d.repec.org/n?u=RePEc:arx:papers:2208.02925&r=

Very cool methods for dynamic "multilayer networks".  In a standard N-dim net there's one NxN adjacency matrix.  But richer nets may have many kinds of connections, each governed by its own adjacency matrix.  (What a great insight -- so natural and obvious once you hear it.  A nice "ah-ha moment"!)  So perhaps there are K operative NxN adjacency matrices, which stack into a grand 3-dim adjacency array (NxNxK) -- a cube rather than a square.  Parsimonious modeling then becomes absolutely crucial, and in that regard BCM effectively propose a modeling framework with a "factor structure" for the set of adjacency matrices.  Really eye-opening.  Lots to think about.
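The factor idea is easy to see in a toy simulation.  Here's a minimal sketch (all dimensions and the recovery-by-rank step are illustrative assumptions of mine, not BCM's estimator): K layer adjacency matrices generated from R << K latent factor matrices, stacked into an NxNxK array whose unfolding reveals the low-rank structure.

```python
import numpy as np

# Hypothetical sketch of a multilayer network with a factor structure.
# K layers of NxN adjacency matrices are exact linear combinations of
# R << K latent "factor" adjacency matrices.  Illustrative only.
rng = np.random.default_rng(0)
N, K, R = 10, 6, 2

factors = rng.random((R, N, N))        # latent adjacency factors
loadings = rng.random((K, R))          # how each layer loads on them
A = np.einsum('kr,rij->ijk', loadings, factors)   # N x N x K adjacency array

# Unfold the array into an (N*N) x K matrix; its rank reveals R
unfolded = A.reshape(N * N, K)
rank = np.linalg.matrix_rank(unfolded)
print(rank)   # equals R when the loadings have full column rank
```

With noise added to each layer, one would instead estimate R from the leading singular values of the unfolded array.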


Saturday, September 3, 2022

Memories of Ted Anderson

Ted was among the very greatest statisticians/econometricians of the 20th century.  I feel very close to him, as my former Penn colleague, Larry Klein, worked closely with him at Cowles in the 1940s, and another former colleague, Bobby Mariano, was his student at Stanford before coming to Penn around 1970.  I recall a Penn seminar he gave late in his career, on unit moving-average roots.  He started painfully slowly, defining, for example, things like "time series" and "covariance stationarity".  Some eyes were rolling.  Ten minutes later, he was far beyond the frontier.  No eyes were rolling.  Indeed jaws were dropping.  When I visited Stanford in the 1990s for a seminar, he rolled out the red carpet for me.  Amazing, him doing that for me.  What a gentleman.

Check out this fascinating new take from Peter Phillips:

By: Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton)
Abstract: T. W. Anderson did pathbreaking work in econometrics during his remarkable career as an eminent statistician. His primary contributions to econometrics are reviewed here, including his early research on estimation and inference in simultaneous equations models and reduced rank regression. Some of his later works that connect in important ways to econometrics are also briefly covered, including limit theory in explosive autoregression, asymptotic expansions, and exact distribution theory for econometric estimators. The research is considered in the light of its influence on subsequent and ongoing developments in econometrics, notably confidence interval construction under weak instruments and inference in mildly explosive regressions.

URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2333&r=

Equal-weight HAR combination

This just blows me away.  So full of great insight.  Equal-weight combinations rule, in yet another context!  See also my papers with Minchul Shin that clearly lead to equal weights for point and density forecasts, respectively:

Diebold, F.X. and Shin, M. (2019), "Machine Learning for Regularized Survey Forecast Combination: Partially-Egalitarian Lasso and its Derivatives," International Journal of Forecasting, 35, 1679-1691. 

Diebold, F.X., Shin, M. and Zhang, B. (2022), “On the Aggregation of Probability Assessments: Regularized Mixtures of Predictive Densities for Eurozone Inflation and Real Interest Rates,” Journal of Econometrics, forthcoming.  Working paper at arXiv:2012.11649.

Forecast combination puzzle in the HAR model

By: Clements, Adam; Vasnev, Andrey
Abstract: The Heterogeneous Autoregressive (HAR) model of Corsi (2009) has become the benchmark model for predicting realized volatility given its simplicity and consistent empirical performance. Many modifications and extensions to the original model have been proposed that often only provide incremental forecast improvements. In this paper, we take a step back and view the HAR model as a forecast combination that combines three predictors: previous day realization (or random walk forecast), previous week average, and previous month average. When applying Ordinary Least Squares (OLS) to combine the predictors, the HAR model uses optimal weights that are known to be problematic in the forecast combination literature. In fact, the simple average forecast often outperforms the optimal combination in many empirical applications. We investigate the performance of the simple average forecast for the realized volatility of the Dow Jones Industrial Average equity index. We find dramatic improvements in forecast accuracy across all horizons and different time periods. This is the first time the forecast combination puzzle is identified in this context.
Keywords: Realized volatility, forecast combination, HAR model
JEL: C53 C58
Date: 2021–02–24
URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/25045&r=
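The HAR-as-combination view is easy to see in code.  Here's a minimal sketch on simulated data (the toy "realized volatility" series and sample sizes are my illustrative assumptions, not the paper's Dow Jones application): the three predictors are the previous day, the previous-week average, and the previous-month average, and the OLS fit is just an "optimally" weighted combination of them versus the simple average.

```python
import numpy as np

# Sketch of the HAR model viewed as a forecast combination of three
# predictors: previous day, previous-week average, previous-month
# average.  OLS estimates "optimal" combining weights; the
# equal-weight alternative simply averages the three predictors.
rng = np.random.default_rng(1)
T = 500
rv = np.abs(rng.standard_normal(T)).cumsum() * 0.01 + 1.0  # toy RV series

day = rv[21:-1]                                            # previous day
week = np.array([rv[t - 5:t].mean() for t in range(22, T)])
month = np.array([rv[t - 22:t].mean() for t in range(22, T)])
y = rv[22:]

X = np.column_stack([np.ones_like(day), day, week, month])
beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS ("optimal") weights
har_fit = X @ beta
equal = (day + week + month) / 3.0             # equal-weight combination

print(np.mean((y - har_fit) ** 2), np.mean((y - equal) ** 2))
```

In-sample, OLS necessarily fits at least as well (equal weights lie in its span); the combination puzzle is that equal weights so often win out-of-sample.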

Long memory and weak ID

I've thus far never been a big fan of the weak ID literature.  Always seemed to me that if you wind up with weak ID, it's time to think harder about the underlying economics rather than fancier econometrics.  But this opened my eyes and changed my mind.  Totally cool.  

Weak Identification of Long Memory with Implications for Inference

By: Jia Li (Singapore Management University); Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton); Shuping Shi (Macquarie University); Jun Yu (Singapore Management University)
Abstract: This paper explores weak identification issues arising in commonly used models of economic and financial time series. Two highly popular configurations are shown to be asymptotically observationally equivalent: one with long memory and weak autoregressive dynamics, the other with antipersistent shocks and a near-unit autoregressive root. We develop a data-driven semiparametric and identification-robust approach to inference that reveals such ambiguities and documents the prevalence of weak identification in many realized volatility and trading volume series. The identification-robust empirical evidence generally favors long memory dynamics in volatility and volume, a conclusion that is corroborated using social-media news flow data.
Keywords: Realized volatility; Weak identification; Disjoint confidence sets; Trading volume; Long memory
JEL: C12 C13 C58
Date: 2022–06
URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2334&r=
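A quick way to see why the two configurations are hard to tell apart is to compare impulse responses.  Here's a small sketch (the values of d and the autoregressive root are my illustrative choices, not the paper's): long memory gives hyperbolic decay, a near-unit root gives geometric decay, and over moderate horizons the two can look very similar in finite samples.

```python
import numpy as np

# Impulse responses of two nearly observationally equivalent processes:
# an ARFIMA(0, d, 0) with d = 0.4 (long memory, hyperbolic decay) and
# an AR(1) with a near-unit root (geometric decay).
d, rho, J = 0.4, 0.98, 200

psi_lm = np.empty(J)
psi_lm[0] = 1.0
for j in range(1, J):                  # psi_j = psi_{j-1} * (j - 1 + d) / j
    psi_lm[j] = psi_lm[j - 1] * (j - 1 + d) / j

psi_ar = rho ** np.arange(J)           # AR(1) impulse responses

# Long-memory responses decay like j^(d-1); AR responses like rho^j.
print(psi_lm[50], psi_ar[50])
```

The long-memory ratio psi_j / psi_{j-1} = (j - 1 + d)/j tends to one as j grows, while the AR ratio stays fixed at rho; distinguishing the two from a finite stretch of data is exactly the weak-identification problem.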

Tuesday, August 30, 2022

How Did I Miss This??

Great stuff, forthcoming JBES (2022).  

TIME SERIES APPROACH TO THE EVOLUTION OF NETWORKS: PREDICTION AND ESTIMATION 

ANNA BYKHOVSKAYA 

Abstract. The paper analyzes non-negative multivariate time series which we interpret as weighted networks. We introduce a model where each coordinate of the time series represents a given edge across time. The number of time periods is treated as large compared to the size of the network. The model specifies the temporal evolution of a weighted network that combines classical autoregression with non-negativity, a positive probability of vanishing, and peer effect interactions between weights assigned to edges in the process. The main results provide criteria for stationarity vs. explosiveness of the network evolution process and techniques for estimation of the parameters of the model and for prediction of its future values.

https://abykhovskaya.files.wordpress.com/2021/07/networks_jbes_3.pdf
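The flavor of the model is easy to simulate.  Here's a toy sketch in its spirit (all parameter values and the exact censoring mechanism are my illustrative assumptions, not Bykhovskaya's specification): each edge weight follows an autoregression with a peer effect from the other edges, and censoring at zero gives a positive probability of vanishing edges.

```python
import numpy as np

# Toy weighted-network evolution: each of E edge weights follows an
# autoregression with a peer effect (average of the other edges),
# censored at zero so edges can vanish with positive probability.
rng = np.random.default_rng(2)
E, T = 12, 300
alpha, beta, gamma = 0.1, 0.6, 0.2   # beta + gamma < 1: stationary regime

W = np.zeros((T, E))
W[0] = rng.random(E)
for t in range(1, T):
    peer = (W[t - 1].sum() - W[t - 1]) / (E - 1)   # average of other edges
    latent = alpha + beta * W[t - 1] + gamma * peer \
        + 0.3 * rng.standard_normal(E)
    W[t] = np.maximum(latent, 0.0)                 # censoring: exact zeros

share_zero = (W == 0).mean()
print(share_zero)   # positive share of vanished edges
```

Pushing beta + gamma toward (or past) one moves the simulation from the stationary to the explosive regime that the paper's criteria distinguish.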

See also https://annabykhovskaya.com

Wednesday, August 24, 2022

The Complexity Principle (!)

Continuing the previous post, I'm sorry if I seem to be gushing over the recent Kelly et al. program (indeed I am), but it just blows me away.  The famous "parsimony" and "KISS (keep it sophisticatedly simple)" principles turned on their heads!  George Box and Arnold Zellner must be rolling in their graves...

The Virtue of Complexity Everywhere

Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute); Kangying Zhou (Yale School of Management)

We investigate the performance of non-linear return prediction models in the high complexity regime, i.e., when the number of model parameters exceeds the number of observations. We document a "virtue of complexity" in all asset classes that we study (US equities, international equities, bonds, commodities, currencies, and interest rates). Specifically, return prediction R2 and optimal portfolio Sharpe ratio generally increase with model parameterization for every asset class. The virtue of complexity is present even in extremely data-scarce environments, e.g., for predictive models with less than twenty observations and tens of thousands of predictors. The empirical association between model complexity and out-of-sample model performance exhibits a striking consistency with theoretical predictions.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4171581
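The high complexity regime is easy to demonstrate mechanically.  Here's a minimal sketch (dimensions and the data-generating process are my illustrative assumptions, scaled down from the paper's, and this is plain min-norm least squares, not the Kelly et al. methodology): with P >> T, the minimum-norm least squares fit interpolates the training data exactly, yet its out-of-sample behavior remains well defined.

```python
import numpy as np

# "Ridgeless" regression with many more parameters than observations:
# the minimum-norm least squares solution (via the pseudo-inverse)
# interpolates the T training points exactly even though P >> T.
rng = np.random.default_rng(3)
T, P = 20, 2000
signal = rng.standard_normal(P) / np.sqrt(P)

X_train = rng.standard_normal((T, P))
y_train = X_train @ signal + 0.1 * rng.standard_normal(T)
X_test = rng.standard_normal((1000, P))
y_test = X_test @ signal + 0.1 * rng.standard_normal(1000)

beta = np.linalg.pinv(X_train) @ y_train   # minimum-norm interpolator
in_sample_err = np.max(np.abs(X_train @ beta - y_train))
oos_r2 = 1 - np.mean((y_test - X_test @ beta) ** 2) / np.var(y_test)
print(in_sample_err, oos_r2)
```

The surprise documented by Kelly et al. is that out-of-sample performance can keep improving as P grows past T, which this toy setup lets one explore by varying P.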


Friday, August 19, 2022

Complexity in Prediction

Really glad to see that Kelly et al. are keeping at it, moving well into the "double descent" zone and adding regularization.

The Virtue of Complexity in Return Prediction (2022)

Bryan T. Kelly, Semyon Malamud, Kangying Zhou

The extant literature predicts market returns with “simple” models that use only a few parameters. Contrary to conventional wisdom, we theoretically prove that simple models severely understate return predictability compared to “complex” models in which the number of parameters exceeds the number of observations. We empirically document the virtue of complexity in US equity market return prediction. Our findings establish the rationale for modeling expected returns through machine learning.

http://d.repec.org/n?u=RePEc:nbr:nberwo:30217&r=

Wednesday, August 10, 2022

Instrumental Variables in Practical Application

I have always been fascinated by Alwyn Young's paper,  "Consistency without Inference:  Instrumental Variables in Practical Application."   On-line appendix.  Glad to see that it's now published in the European Economic Review.  Note the key role of non-white disturbances.

From the intro:

The economics profession is in the midst of a “credibility revolution” (Angrist and Pischke 2010) in which careful research design has become firmly established as a necessary characteristic of applied work.  A key element in this revolution has been the use of instruments to identify causal effects free of the potential biases carried by endogenous ordinary least squares regressors.  The growing emphasis on research design has not gone hand in hand, however, with equal demands on the quality of inference.  Despite the widespread use of Eicker (1963)-Hinkley (1977)-White (1980) heteroskedasticity robust covariance estimates and their clustered extensions, the implications of non-iid error processes for the quality of inference, and their interaction in this regard with regression and research design, has not received the attention it deserves.  Heteroskedastic and correlated errors in highly leveraged regressions produce test statistics whose dispersion is typically much greater than believed, exaggerating the statistical significance of both 1st and 2nd stage tests, while lowering power to detect meaningful alternatives.  Furthermore, the bias of 2SLS relative to OLS rises as predicted second stage values are increasingly determined by the realization of a few errors, thereby eliminating much of the benefit of IV.  This paper shows that these problems exist in a substantial fraction of published work. 

Saturday, June 11, 2022

Great Summer Courses in Glasgow


Summer School Sept 5-9, Adam Smith Business School, University of Glasgow:

Kamil Yilmaz will teach a two-day Network Connectedness course Sept 5-6, covering both methods and applications ("Financial and Macroeconomic Connectedness: A Network Approach to Measurement and Monitoring").

Refet Gurkaynak will teach a two-day High-Frequency Finance course Sept 7-8, again covering both methods and applications ("Asset Price Reactions to News: High Frequency Methods and Applications").

Both courses will be helpful for researchers and policy analysts at universities, central banks, international policy institutes, and think tanks.

Looks great!

Monday, February 28, 2022

New and Novel ARCH Model Application (Seriously)

The Variability and Volatility of Sleep: An Archetypal Approach

By: Hamermesh, Daniel S. (Barnard College); Pfann, Gerard A. (Maastricht University)
Abstract: Using Dutch time-diary data from 1975-2005 covering over 10,000 respondents for 7 consecutive days each, we show that individuals' sleep time exhibits both variability and volatility characterized by stationary autoregressive conditional heteroscedasticity: The absolute values of deviations from a person's average sleep on one day are positively correlated with those on the next day. Sleep is more variable on weekends and among people with less education, who are younger and who do not have young children at home. Volatility is greater among parents with young children, slightly greater among men than women, but independent of other demographics. A theory of economic incentives to minimize the dispersion of sleep predicts that higher-wage workers will exhibit less dispersion, a result demonstrated using extraneous estimates of earnings equations to impute wage rates. Volatility in sleep spills over onto volatility in other personal activities, with no reverse causation onto sleep. The results illustrate a novel dimension of economic inequality and could be applied to a wide variety of human behavior and biological processes.
Keywords: time use, ARCH, economic incentives in biological processes, volatility
JEL: C22 J22 I14
Date: 2022–01
URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp15001&r=
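The ARCH property they document -- |deviation| today positively correlated with |deviation| tomorrow -- is easy to reproduce in a toy simulation (parameter values are illustrative, not estimates from the sleep data):

```python
import numpy as np

# Toy ARCH(1): deviations from average sleep whose conditional variance
# depends on yesterday's squared deviation, producing positive
# autocorrelation in absolute deviations.
rng = np.random.default_rng(5)
T = 5000
omega, alpha = 0.5, 0.5    # alpha < 1 keeps the process stationary

e = np.zeros(T)
for t in range(1, T):
    sigma2 = omega + alpha * e[t - 1] ** 2    # conditional variance
    e[t] = np.sqrt(sigma2) * rng.standard_normal()

a = np.abs(e)
corr = np.corrcoef(a[1:], a[:-1])[0, 1]       # lag-1 autocorr of |e|
print(corr)   # positive, as in the sleep data
```

The series itself is serially uncorrelated; only its magnitude clusters, which is exactly the variability-vs-volatility distinction the abstract draws.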