Friday, November 11, 2022

Eight steps to Gauss

 Just eight co-authorship steps to Gauss! Small world indeed. And the route backward is not too shabby… 
--> Marc Nerlove --> Kenneth Arrow --> David Blackwell --> Richard Bellman --> Ernst Straus --> Albert Einstein --> Hermann Minkowski --> Carl Friedrich Gauss


Thursday, November 10, 2022

Something May Be Wrong With Me

It strikes me that something may be wrong with me.  

In a new paper in progress I wanted to cite the famous and beautiful Sims, Stock and Watson (1990). I found the bibtex on Jim Stock's Harvard site. Fine. Then I noticed that it listed the authors as Stock, Sims, and Watson. OK, fine, I changed it to the correct alphabetical order of Sims, Stock and Watson. (Probably just Jim's administrative assistant aggrandizing on his behalf.)

Anyway I also noticed that the bibtex omitted middle initials, just giving C. Sims, J. Stock, and M. Watson. The amazing thing, and why something may be wrong with me, is that I was instantly able to supply from memory the full C.A. Sims, J.H. Stock, and M.W. Watson. Do I not have anything better with which to fill my head?!

Indeed it gets worse.  Not only do I have burned into my memory C.W.J. Granger and P.C.B. Phillips, but also their full names, Clive William John Granger and Peter Charles Bonest Phillips.  I really don't know how I learned them, or why I retain them. Of course people like Granger, Phillips, Sims, Stock, and Watson are my heroes, among the very greatest of the past sixty years of econometrics, but still...  

Sunday, October 30, 2022

The Econometrics of Macroeconomic and Financial Data

Last week I received the full published special issue of Journal of Econometrics, 231(2), 2022 (The Econometrics of Macroeconomic and Financial Data). I am deeply grateful and humbled. What a wonderful gesture. Heartfelt thanks to the J. Econometrics Editorial Board, and to all the students, co-authors, and colleagues who contributed. Special thanks to Atsushi Inoue, Lutz Kilian and Andrew Patton for their thoughtful introduction and meticulous editing, and for so generously attempting (twice) to host the associated 60th birthday conference. Clearly COVID did not defeat us!

Thursday, October 27, 2022

Moral Hazard in Climate Change Adaptation

Fascinating color on sea level rise in Jakarta, and good insight into the moral hazard associated with certain types of adaptation. 

 https://allanhsiao.github.io/files/Hsiao_jakarta.pdf

https://allanhsiao.github.io/

Abstract:  Sea level rise poses an existential threat to Jakarta, which faces frequent and worsening flooding. The government has responded with a proposed sea wall. In this setting, I study how government intervention complicates long-run adaptation to climate change. I show that government intervention creates coastal moral hazard, and I quantify this force with a dynamic spatial model in which developers and residents act with flood risk in mind. I find that moral hazard generates severe lock-in and limits migration inland, even over the long run.


Wednesday, October 12, 2022

Machine Learning and Central Banking

Of course machine learning (ML) is everywhere now.  Time-series econometrics has embodied much of the ML perspective for decades (parsimonious predictive modeling allowing for misspecification, out-of-sample evaluation, ensemble averaging, etc.), so there are many areas of overlap, even if there are also many differences.

It's interesting to see ML emerging as particularly useful in central banking contexts.  The Federal Reserve Bank of Philadelphia, for example, now explicitly recruits and hires "Machine Learning Economists".  Presently they have three, and they're looking for a fourth!

In that regard it's especially interesting to learn of a call for papers for a special themed issue of Journal of Econometrics on "Machine Learning for Economic Policy", with guest editors from a variety of leading central banks and universities.

See https://www.bankofengland.co.uk/events/2022/october/call-for-papers-machine-learning-for-economic-policy and below.

----------

Machine learning techniques are increasingly being evaluated in the academic community and at the same time leveraged by practitioners at policy institutions, like central banks or governments.  A themed issue in the Journal of Econometrics aims to present frontier research that sits at the intersection of machine learning and economic policy.

There are good reasons for policy makers to embrace these new techniques. Tree-based models or artificial neural networks, often in conjunction with novel and rich data sources, like text or high-frequency indicators, can provide prediction accuracy and information that standard models cannot.  For example, machine learning can uncover potentially unknown but important nonlinearities in the data generating process.  Moreover, natural language processing, made possible by advances in machine learning, is increasingly being applied to better understand the economic landscape that policymakers must survey.

The upsides of these new techniques come with a downside: it is often unclear through what mechanism a machine learning model operates, i.e., the black box critique. The black box critique arises largely because machine learning models evolved with a singular focus on accuracy. That single focus can be particularly problematic in decision-making situations, where all stakeholders have an interest in understanding all pieces of information that enter the decision-making process, irrespective of model accuracy. The tools of economics and econometrics can help to address this problem, thereby building bridges between disciplines.
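
Just to make the "unknown nonlinearities" point concrete (my own toy illustration, not part of the call for papers), here's a minimal sketch on a simulated nonlinear DGP, with a boosted-tree model against a linear benchmark, judged out of sample:

```python
# Minimal sketch (simulated data, made-up DGP): a tree ensemble vs. a linear
# benchmark when the true relationship has thresholds and interactions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))
# nonlinear DGP: a threshold effect and an interaction a linear model misses
y = (0.5 * X[:, 0] + np.where(X[:, 1] > 0, 1.0, -1.0)
     + X[:, 2] * X[:, 3] + rng.normal(scale=0.5, size=n))

X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

for name, model in [("linear", LinearRegression()),
                    ("boosted trees", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "out-of-sample R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```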

Tuesday, October 4, 2022

The Latest in Observation-Driven TVP Models

Check this out.  The implicit stochastic-gradient update seems very appealing relative to the "standard" GAS/DCS explicit update.

"Robust Observation-Driven Models Using Proximal-Parameter Updates", by Rutger-Jan Lange, Bram van Os, and Dick van Dijk.

https://www.tinbergen.nl/discussion-paper/6188/22-066-iii-robust-observation-driven-models-using-proximal-parameter-updates
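
To see the appeal, here's a stylized toy comparison (mine, not the authors' model or code) of an explicit score-driven step vs. an implicit proximal step for tracking a time-varying mean under squared-error loss. The closed-form proximal step is automatically damped, so it cannot overshoot an observation the way an aggressive explicit step can:

```python
# Stylized toy: explicit vs. implicit (proximal) observation-driven updates
# for a time-varying mean under squared-error loss.  Made-up numbers only.
import numpy as np

def explicit_update(theta, y, eta):
    # standard score-driven (GAS/DCS-style) step: theta + eta * score
    return theta + eta * (y - theta)

def implicit_update(theta, y, eta):
    # proximal step: argmin_t { 0.5*(y - t)**2 + (t - theta)**2 / (2*eta) };
    # the closed form shows the step is damped by eta/(1+eta) < 1
    return theta + eta / (1.0 + eta) * (y - theta)

rng = np.random.default_rng(1)
y = rng.normal(size=200)
y[100] = 25.0                      # one gross outlier
eta = 1.5                          # deliberately aggressive learning rate

th_e = th_i = 0.0
for t, yt in enumerate(y):
    th_e = explicit_update(th_e, yt, eta)
    th_i = implicit_update(th_i, yt, eta)
    if t == 100:
        print("right after the outlier: explicit =", round(th_e, 1),
              " implicit =", round(th_i, 1))
print("end of sample:           explicit =", round(th_e, 1),
      " implicit =", round(th_i, 1))
```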


Sunday, September 18, 2022

Factor Network Autoregressions

 Check this out, by Barigozzi, Cavaliere, and Moramarco:
http://d.repec.org/n?u=RePEc:arx:papers:2208.02925&r=

Very cool methods for dynamic "multilayer networks".  In a standard N-node net there's one NxN adjacency matrix.  But richer nets may have many kinds of connections, each governed by its own adjacency matrix.  (What a great insight -- so natural and obvious once you hear it.  A nice "ah-ha moment"!)  So perhaps there are K operative NxN adjacency matrices, in which case there is actually a grand 3-dim adjacency array (NxNxK) operative -- a cube rather than a square.  Parsimonious modeling then becomes absolutely crucial, and in that regard BCM effectively propose a modeling framework with a "factor structure" for the set of adjacency matrices.  Really eye-opening.  Lots to think about.
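
To fix ideas, here's a tiny sketch of the object itself (my illustration, not BCM's model or estimator): an NxNxK adjacency cube whose K layers load on a small number of common factor matrices, which a plain SVD of the matricized cube readily picks up:

```python
# Minimal sketch of a multilayer adjacency cube with a factor structure:
# each layer's N x N adjacency matrix is a linear combination of a few
# common N x N factor matrices.  Illustrative only, not BCM's estimator.
import numpy as np

rng = np.random.default_rng(0)
N, K, R = 10, 6, 2                        # nodes, layers, number of factors

F = rng.normal(size=(R, N, N))            # common factor matrices
lam = rng.normal(size=(K, R))             # layer-specific loadings
A = np.einsum("kr,rij->kij", lam, F)      # K x N x N adjacency cube
A += 0.05 * rng.normal(size=A.shape)      # idiosyncratic noise

# matricize (each layer becomes a row) and check the effective rank
M = A.reshape(K, N * N)
s = np.linalg.svd(M, compute_uv=False)
print("singular values:", np.round(s, 2))  # roughly R dominant values
```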


Saturday, September 3, 2022

Memories of Ted Anderson

Ted is among the very greatest statisticians/econometricians of the 20th century.  I feel very close to him, as my former Penn colleague, Larry Klein, worked closely with him at Cowles in the 1940s, and another former colleague, Bobby Mariano, was his student at Stanford before coming to Penn around 1970.  I recall a Penn seminar he gave late in his career, on unit moving-average roots.  He started painfully slowly, defining, for example, things like "time series" and "covariance stationarity".  Some eyes were rolling.  Ten minutes later, he was far beyond the frontier.  No eyes were rolling.  Indeed jaws were dropping.  When I visited Stanford in the 1990s for a seminar, he rolled out the red carpet for me.  Amazing, him doing that for me.  What a gentleman.

Check out this fascinating new take from Peter Phillips:

By:Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton)
Abstract:T. W. Anderson did pathbreaking work in econometrics during his remarkable career as an eminent statistician. His primary contributions to econometrics are reviewed here, including his early research on estimation and inference in simultaneous equations models and reduced rank regression. Some of his later works that connect in important ways to econometrics are also briefly covered, including limit theory in explosive autoregression, asymptotic expansions, and exact distribution theory for econometric estimators. The research is considered in the light of its influence on subsequent and ongoing developments in econometrics, notably confidence interval construction under weak instruments and inference in mildly explosive regressions.

URL:http://d.repec.org/n?u=RePEc:cwl:cwldpp:2333&r=

Equal-weight HAR combination

This just blows me away.  So full of great insight.  Equal-weight combinations rule, in yet another context!  See also my papers with Minchul Shin that clearly lead to equal weights for point and density forecasts, respectively:

Diebold, F.X. and Shin, M. (2019), "Machine Learning for Regularized Survey Forecast Combination: Partially-Egalitarian Lasso and its Derivatives," International Journal of Forecasting, 35, 1679-1691. 

Diebold, F.X., Shin, M. and Zhang, B. (2022), “On the Aggregation of Probability Assessments: Regularized Mixtures of Predictive Densities for Eurozone Inflation and Real Interest Rates,” Journal of Econometrics, forthcoming.  Working paper at arXiv:2012.11649.

 Forecast combination puzzle in the HAR model

By:Clements, Adam; Vasnev, Andrey
Abstract:The Heterogeneous Autoregressive (HAR) model of Corsi (2009) has become the benchmark model for predicting realized volatility given its simplicity and consistent empirical performance. Many modifications and extensions to the original model have been proposed that often only provide incremental forecast improvements. In this paper, we take a step back and view the HAR model as a forecast combination that combines three predictors: previous day realization (or random walk forecast), previous week average, and previous month average. When applying the Ordinary Least Squares (OLS) to combine the predictors, the HAR model uses optimal weights that are known to be problematic in the forecast combination literature. In fact, the simple average forecast often outperforms the optimal combination in many empirical applications. We investigate the performance of the simple average forecast for the realized volatility of the Dow Jones Industrial Average equity index. We find dramatic improvements in forecast accuracy across all horizons and different time periods. This is the first time the forecast combination puzzle is identified in this context.
Keywords:Realized volatility, forecast combination, HAR model
JEL:C53 C58
Date:2021–02–24
URL:http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/25045&r=
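
Just to make the combination reading of HAR concrete, here's a sketch of the mechanics on a made-up persistent series (not the authors' Dow Jones RV data, so no claim here about which weighting wins): build the three HAR predictors, estimate OLS weights on a training sample, and compare with the simple average out of sample:

```python
# Sketch of the HAR regression viewed as a forecast combination of three
# predictors (yesterday's RV, last week's average, last month's average),
# comparing OLS weights with a simple equal-weight average.  Simulated data.
import numpy as np

rng = np.random.default_rng(0)
T = 3000
rv = np.empty(T); rv[0] = 1.0            # crude persistent positive "RV" series
for t in range(1, T):
    rv[t] = 0.3 + 0.7 * rv[t - 1] + 0.3 * np.sqrt(rv[t - 1]) * abs(rng.normal())

day = rv[21:-1]
week = np.array([rv[t - 4:t + 1].mean() for t in range(21, T - 1)])
month = np.array([rv[t - 21:t + 1].mean() for t in range(21, T - 1)])
y = rv[22:]                              # next-day target

X = np.column_stack([np.ones_like(day), day, week, month])
cut = len(y) // 2
beta = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)[0]   # HAR / OLS weights

ols_fc = X[cut:] @ beta
eqw_fc = (day[cut:] + week[cut:] + month[cut:]) / 3.0      # equal weights
print("OLS-weight MSE  :", round(np.mean((y[cut:] - ols_fc) ** 2), 4))
print("Equal-weight MSE:", round(np.mean((y[cut:] - eqw_fc) ** 2), 4))
```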

Long memory and weak ID

I've never been a big fan of the weak ID literature.  It always seemed to me that if you wind up with weak ID, it's time to think harder about the underlying economics rather than reach for fancier econometrics.  But this opened my eyes and changed my mind.  Totally cool.

 Weak Identification of Long Memory with Implications for Inference

By:Jia Li (Singapore Management University); Peter C. B. Phillips (Cowles Foundation, Yale University, University of Auckland, Singapore Management University, University of Southampton); Shuping Shi (Macquarie University); Jun Yu (Singapore Management University)
Abstract:This paper explores weak identification issues arising in commonly used models of economic and financial time series. Two highly popular configurations are shown to be asymptotically observationally equivalent: one with long memory and weak autoregressive dynamics, the other with antipersistent shocks and a near-unit autoregressive root. We develop a data-driven semiparametric and identification-robust approach to inference that reveals such ambiguities and documents the prevalence of weak identification in many realized volatility and trading volume series. The identification-robust empirical evidence generally favors long memory dynamics in volatility and volume, a conclusion that is corroborated using social-media news flow data.
Keywords:Realized volatility, weak identification, disjoint confidence sets, trading volume, long memory
JEL:C12 C13 C58
Date:2022–06
URL:http://d.repec.org/n?u=RePEc:cwl:cwldpp:2334&r=
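
A crude way to see the flavor of the problem (my own sketch, nothing like the paper's identification-robust procedure): simulate a long-memory series and a near-unit-root AR(1) driven by antipersistent shocks, and note how slowly both sample ACFs decay:

```python
# Crude illustration of the weak-identification flavor: long memory and a
# near-unit-root AR(1) with antipersistent shocks both produce slowly
# decaying sample autocorrelations.  Not the authors' procedure.
import numpy as np

rng = np.random.default_rng(0)
T, J = 2000, 500

def acf(x, nlags):
    x = x - x.mean()
    return np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x) for k in range(1, nlags + 1)])

# (i) long memory: ARFIMA(0, d, 0) with d = 0.4, via truncated MA(inf) weights
d = 0.4
psi = np.ones(J)
for j in range(1, J):
    psi[j] = psi[j - 1] * (j - 1 + d) / j
eps = rng.normal(size=T + J)
long_mem = np.array([psi @ eps[t:t + J][::-1] for t in range(T)])

# (ii) near-unit-root AR(1) driven by antipersistent (negative-MA) shocks
u = rng.normal(size=T + 1)
shocks = u[1:] - 0.7 * u[:-1]
ar = np.empty(T); ar[0] = 0.0
for t in range(1, T):
    ar[t] = 0.98 * ar[t - 1] + shocks[t]

print("sample ACF at lags 1, 5, 20, 50:")
print("  long memory   :", np.round(acf(long_mem, 50)[[0, 4, 19, 49]], 2))
print("  near unit root:", np.round(acf(ar, 50)[[0, 4, 19, 49]], 2))
```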

Tuesday, August 30, 2022

How Did I Miss This??

Great stuff, forthcoming JBES (2022).  

TIME SERIES APPROACH TO THE EVOLUTION OF NETWORKS: PREDICTION AND ESTIMATION 

ANNA BYKHOVSKAYA 

Abstract. The paper analyzes non-negative multivariate time series which we interpret as weighted networks. We introduce a model where each coordinate of the time series represents a given edge across time. The number of time periods is treated as large compared to the size of the network. The model specifies the temporal evolution of a weighted network that combines classical autoregression with non-negativity, a positive probability of vanishing, and peer effect interactions between weights assigned to edges in the process. The main results provide criteria for stationarity vs. explosiveness of the network evolution process and techniques for estimation of the parameters of the model and for prediction of its future values.

https://abykhovskaya.files.wordpress.com/2021/07/networks_jbes_3.pdf

See also https://annabykhovskaya.com
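
Here's a stylized simulation loosely following the abstract's verbal description (autoregression, non-negativity, a positive probability of edges vanishing, peer effects); all parameter values are made up, and this is not the paper's exact specification or estimator:

```python
# Stylized simulation of a non-negative weighted-network autoregression with
# peer effects, truncated at zero so edges can vanish.  Made-up parameters.
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 200                        # nodes, time periods (T large relative to N)
W = rng.uniform(0, 1, size=(N, N))   # initial weighted network
np.fill_diagonal(W, 0.0)

alpha, beta, gamma = 0.1, 0.6, 0.05
history = [W.copy()]
for t in range(T):
    # peer effect taken here as the average outgoing weight of the sending node
    peer = W.mean(axis=1, keepdims=True) * np.ones((1, N))
    shock = rng.normal(scale=0.2, size=(N, N))
    W = np.maximum(0.0, alpha + beta * W + gamma * peer + shock)  # edges may hit 0
    np.fill_diagonal(W, 0.0)
    history.append(W.copy())

share_zero = np.mean([np.mean(w[~np.eye(N, dtype=bool)] == 0.0) for w in history])
print("average share of vanished edges:", round(share_zero, 3))
```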





Wednesday, August 24, 2022

The Complexity Principle (!)

Continuing the previous post, I'm sorry if I seem to be gushing over the recent Kelly et al. program (indeed I am), but it just blows me away.  The famous "parsimony" and "KISS (keep it sophisticatedly simple)" principles turned on their heads!  George Box and Arnold Zellner must be rolling in their graves...

 The Virtue of Complexity Everywhere


Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute); Kangying Zhou (Yale School of Management)


We investigate the performance of non-linear return prediction models in the high complexity regime, i.e., when the number of model parameters exceeds the number of observations. We document a "virtue of complexity" in all asset classes that we study (US equities, international equities, bonds, commodities, currencies, and interest rates). Specifically, return prediction R2 and optimal portfolio Sharpe ratio generally increase with model parameterization for every asset class. The virtue of complexity is present even in extremely data-scarce environments, e.g., for predictive models with less than twenty observations and tens of thousands of predictors. The empirical association between model complexity and out-of-sample model performance exhibits a striking consistency with theoretical predictions.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4171581
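
For a feel for the mechanics, here's a toy version of the high-complexity regime (illustrative only, not the Kelly-Malamud-Zhou implementation): thousands of random nonlinear features of a few underlying predictors, fit by ridge with far more parameters than observations, evaluated out of sample:

```python
# Toy high-complexity regime: random nonlinear features with P >> n, fit by
# ridge in the cheap n x n dual form.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d, P = 60, 1000, 3, 5000          # P far exceeds n_train

X = rng.normal(size=(n_train + n_test, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.5 * rng.normal(size=n_train + n_test)

# random-feature expansion (random weights and phases, cosine activation)
W = rng.normal(size=(d, P)); b = rng.uniform(0, 2 * np.pi, size=P)
Z = np.cos(X @ W + b) / np.sqrt(P)

Ztr, ytr, Zte, yte = Z[:n_train], y[:n_train], Z[n_train:], y[n_train:]
lam = 1e-3
alpha = np.linalg.solve(Ztr @ Ztr.T + lam * np.eye(n_train), ytr)  # dual ridge
pred = Zte @ (Ztr.T @ alpha)

r2 = 1 - np.mean((yte - pred) ** 2) / np.var(yte)
print("out-of-sample R^2 with P =", P, "and n =", n_train, ":", round(r2, 3))
```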


Friday, August 19, 2022

Complexity in Prediction

Really glad to see that Kelly et al. are keeping at it, moving well into the "double descent" zone and adding regularization.

 The Virtue of Complexity in Return Prediction (2022)


Bryan T. Kelly, Semyon Malamud, Kangying Zhou




The extant literature predicts market returns with “simple” models that use only a few parameters. Contrary to conventional wisdom, we theoretically prove that simple models severely understate return predictability compared to “complex” models in which the number of parameters exceeds the number of observations. We empirically document the virtue of complexity in US equity market return prediction. Our findings establish the rationale for modeling expected returns through machine learning.



http://d.repec.org/n?u=RePEc:nbr:nberwo:30217&r=
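
And here's a toy complexity sweep in the same spirit (again illustrative, not their implementation): out-of-sample MSE of minimum-norm least squares on random features as the number of features P passes through the number of observations n, the interpolation point:

```python
# Toy complexity sweep: minimum-norm least squares on random features as P
# passes through n.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, n_test, d = 100, 2000, 3
X = rng.normal(size=(n + n_test, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n + n_test)

W = rng.normal(size=(d, 3000)); b = rng.uniform(0, 2 * np.pi, size=3000)
Zfull = np.cos(X @ W + b)

for P in [10, 50, 90, 100, 110, 200, 1000, 3000]:
    Z = Zfull[:, :P]
    beta = np.linalg.pinv(Z[:n]) @ y[:n]        # minimum-norm LS solution
    mse = np.mean((y[n:] - Z[n:] @ beta) ** 2)
    print(f"P = {P:5d}   out-of-sample MSE = {mse:8.3f}")
```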

Wednesday, August 10, 2022

Instrumental Variables in Practical Application

I have always been fascinated by Alwyn Young's paper, "Consistency without Inference: Instrumental Variables in Practical Application" (there is also an on-line appendix). Glad to see that it's now published in the European Economic Review. Note the key role of non-white disturbances.

From the intro:

The economics profession is in the midst of a “credibility revolution” (Angrist and Pischke 2010) in which careful research design has become firmly established as a necessary characteristic of applied work.  A key element in this revolution has been the use of instruments to identify causal effects free of the potential biases carried by endogenous ordinary least squares regressors.  The growing emphasis on research design has not gone hand in hand, however, with equal demands on the quality of inference.  Despite the widespread use of Eicker (1963)-Hinkley (1977)-White (1980) heteroskedasticity robust covariance estimates and their clustered extensions, the implications of non-iid error processes for the quality of inference, and their interaction in this regard with regression and research design, has not received the attention it deserves.  Heteroskedastic and correlated errors in highly leveraged regressions produce test statistics whose dispersion is typically much greater than believed, exaggerating the statistical significance of both 1st and 2nd stage tests, while lowering power to detect meaningful alternatives.  Furthermore, the bias of 2SLS relative to OLS rises as predicted second stage values are increasingly determined by the realization of a few errors, thereby eliminating much of the benefit of IV.  This paper shows that these problems exist in a substantial fraction of published work. 
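
For readers who want the mechanics in front of them, here's a minimal by-hand 2SLS with Eicker-Huber-White robust standard errors on simulated heteroskedastic data; it only illustrates the objects Young stress-tests (first stage, second stage, robust covariance), not his Monte Carlo or bootstrap analysis:

```python
# Minimal by-hand 2SLS with heteroskedasticity-robust (White) standard errors
# on simulated data with an endogenous regressor.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                            # instrument
u = rng.normal(size=n) * (1 + 0.8 * np.abs(z))    # heteroskedastic structural error
v = 0.5 * u + rng.normal(size=n)                  # endogeneity: corr(x, u) != 0
x = 0.4 * z + v                                   # endogenous regressor
y = 1.0 + 2.0 * x + u                             # true slope = 2

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])

# 2SLS: project X on Z (first stage), then regress y on the projection
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]

# White covariance: (Xhat'X)^-1 [sum e_i^2 xhat_i xhat_i'] (X'Xhat)^-1
e = y - X @ beta
A = np.linalg.inv(Xhat.T @ X)
V = A @ (Xhat.T * e**2) @ Xhat @ A.T
print("2SLS slope:", round(beta[1], 3), " robust s.e.:", round(np.sqrt(V[1, 1]), 3))
```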

Saturday, June 11, 2022

Great Summer Courses in Glasgow


Summer School Sept 5-9, Adam Smith Business School, University of Glasgow:

Kamil Yilmaz will teach a two-day Network Connectedness course Sept 5-6, covering both methods and applications ("Financial and Macroeconomic Connectedness: A Network Approach to Measurement and Monitoring").

Refet Gurkaynak will teach a two-day High-Frequency Finance course Sept 7-8, again covering both methods and applications ("Asset Price Reactions to News: High Frequency Methods and Applications").

Both courses will be helpful for researchers and policy analysts at universities, central banks, international policy institutes, and think tanks.

Looks great!

Monday, February 28, 2022

New and Novel ARCH Model Application (Seriously)

 The Variability and Volatility of Sleep: An Archetypal Approach

By:Hamermesh, Daniel S. (Barnard College); Pfann, Gerard A. (Maastricht University)
Abstract:Using Dutch time-diary data from 1975-2005 covering over 10,000 respondents for 7 consecutive days each, we show that individuals' sleep time exhibits both variability and volatility characterized by stationary autoregressive conditional heteroscedasticity: The absolute values of deviations from a person's average sleep on one day are positively correlated with those on the next day. Sleep is more variable on weekends and among people with less education, who are younger and who do not have young children at home. Volatility is greater among parents with young children, slightly greater among men than women, but independent of other demographics. A theory of economic incentives to minimize the dispersion of sleep predicts that higher-wage workers will exhibit less dispersion, a result demonstrated using extraneous estimates of earnings equations to impute wage rates. Volatility in sleep spills over onto volatility in other personal activities, with no reverse causation onto sleep. The results illustrate a novel dimension of economic inequality and could be applied to a wide variety of human behavior and biological processes.
Keywords:time use, ARCH, economic incentives in biological processes, volatility
JEL:C22 J22 I14
Date:2022–01
URL:http://d.repec.org/n?u=RePEc:iza:izadps:dp15001&r=&r=ets
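
The ARCH property the abstract describes is easy to see in a minimal simulation (simulated data, obviously not the authors' time-diary data): absolute deviations on one "day" are positively correlated with those on the next:

```python
# Minimal ARCH(1) simulation: absolute deviations from the mean on one day
# are positively correlated with those on the next.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, omega, alpha1 = 5000, 0.5, 0.5

eps = np.empty(T)
sigma2 = omega / (1 - alpha1)              # start at the unconditional variance
for t in range(T):
    eps[t] = np.sqrt(sigma2) * rng.normal()
    sigma2 = omega + alpha1 * eps[t] ** 2  # next period's conditional variance

a = np.abs(eps)
corr = np.corrcoef(a[:-1], a[1:])[0, 1]
print("corr(|dev_t|, |dev_t+1|) =", round(corr, 3))  # positive under ARCH
```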

Long-Memory Neural Nets

 Fractional SDE-Net: Generation of Time Series Data with Long-term Memory

By:Kohei Hayashi; Kei Nakagawa
Abstract:In this paper, we focus on generation of time-series data using neural networks. It is often the case that input time-series data, especially taken from real financial markets, is irregularly sampled, and its noise structure is more complicated than i.i.d. type. To generate time series with such a property, we propose fSDE-Net: neural fractional Stochastic Differential Equation Network. It generalizes the neural SDE model by using fractional Brownian motion with Hurst index larger than half, which exhibits long-term memory property. We derive the solver of fSDE-Net and theoretically analyze the existence and uniqueness of the solution to fSDE-Net. Our experiments demonstrate that the fSDE-Net model can replicate distributional properties well.
Date:2022–01
URL:http://d.repec.org/n?u=RePEc:arx:papers:2201.05974&r=&r=ets
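
The key ingredient is fractional Brownian motion with Hurst index above one-half. Here's a minimal exact simulation of it via the Cholesky factor of the fractional-Gaussian-noise covariance (just the driving noise, not the fSDE-Net architecture):

```python
# Exact simulation of fractional Brownian motion via the Cholesky factor of
# the fractional-Gaussian-noise covariance; increments have long memory for
# Hurst H > 0.5.  Driving-noise sketch only, not fSDE-Net.
import numpy as np

def fbm(T, H, rng):
    k = np.arange(T)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))       # fGn autocovariance
    cov = gamma[np.abs(k[:, None] - k[None, :])]     # Toeplitz covariance
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(T))
    increments = L @ rng.normal(size=T)
    return np.cumsum(increments), increments

rng = np.random.default_rng(0)
path, inc = fbm(1500, H=0.7, rng=rng)
for lag in (1, 10):
    r = np.corrcoef(inc[:-lag], inc[lag:])[0, 1]
    print(f"increment autocorrelation at lag {lag}: {r:.3f}")  # positive for H > 0.5
```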

Tuesday, February 22, 2022

Range-Based ("Candlestick") Volatility Estimation Slides



 

Reading the Candlesticks: An OK Estimator for Volatility
Paper by J. Li, D. Wang and Q. Zhang (LWZ)
Discussion by F.X. Diebold
Society for Financial Econometrics, February 21, 2022

(***) Consider a different title…

Classic Traditions: University of Chicago, Journal of Business, Al Madansky, …
- CLI, CCI analyses related to modern macro/BC nowcasting (Zarnowitz, Neftci, ...)
- Range-based volatility estimation related to modern financial volatility nowcasting

The Extreme Value Method for Estimating the Variance of the Rate of Return
Michael Parkinson, The Journal of Business, Vol. 53, No. 1 (Jan. 1980), pp. 61-65

On the Estimation of Security Price Volatilities from Historical Data
Mark B. Garman and Michael J. Klass, The Journal of Business, Vol. 53, No. 1 (Jan. 1980), pp. 67-78

Others have extended:
- HLC-based estimation (e.g., Beckers, 1983; Rogers and Satchell, 1991)
- HLOC-based estimation (e.g., Yang and Zhang, 2000)
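
For concreteness (not part of the slides), here's a quick simulation comparing the daily squared return with the Parkinson and Garman-Klass range-based estimators, using the standard published formulas on simulated zero-drift log-price days; the efficiency gains of the range show up immediately:

```python
# Compare daily variance proxies on simulated log-price days: squared return,
# Parkinson (1980) range estimator, Garman-Klass (1980) OHLC estimator.
# Standard textbook formulas; this is not the LWZ fixed-k estimator.
import numpy as np

rng = np.random.default_rng(0)
days, steps, sigma = 20000, 390, 0.01            # true daily variance = sigma**2

increments = sigma / np.sqrt(steps) * rng.normal(size=(days, steps))
paths = np.cumsum(increments, axis=1)
paths = np.concatenate([np.zeros((days, 1)), paths], axis=1)   # include the open at 0

h = paths.max(axis=1); l = paths.min(axis=1); c = paths[:, -1]  # log H, L, C vs. open

proxies = {
    "squared return": c ** 2,
    "Parkinson":      (h - l) ** 2 / (4 * np.log(2)),
    "Garman-Klass":   0.5 * (h - l) ** 2 - (2 * np.log(2) - 1) * c ** 2,
}
for name, est in proxies.items():
    print(f"{name:15s} mean = {est.mean():.6f}   variance = {est.var():.2e}")
```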

(***) Should be discussed

In Yang and Zhang (2000):
VOL = O - 0.383 C + 1.364 HL + 0.019 HLC

(***) LWZ results have strong resemblance


In LWZ:
VOL = λ1 (H-L) + λ2 |C-O| (by assumption)

(***) Restrictive?

VOL* = 0.811 (H-L) - 0.369 |C-O|

How does the range fit in?

Efficiency hierarchy (worst to best):
r^2  <  |r|  <  HL range  <  HLC  <  HLOC  <  "large-k" RV

r^2 or |r|: r = 0 implies vol = 0

r^2 or |r|: Even when r is non-zero, very different paths can be scored the same

Range: The key vol info is embedded, yet different days can still be scored the same

Range: Even the range can be tricked
(Only large-k RV can't be tricked…)

Why care about the range?
(if only large-k RV can't be tricked…)
- Effortless yet highly efficient
- Robust to microstructure noise (bias is just the average bid/ask spread)
- Available over long historical periods (and risk premia are all about recessions)

 

Also, using range improves large-k RV:
Christensen and Podolskij (2007)
(* Needs more discussion)

Not so compelling in large-k contexts?
What to do when you can't do (or don't want to do) large-k RV?
- Fixed(small)-k r^2-based RV (Bollerslev, Li and Liao, 2021 (BLL))
- Fixed(small)-k range-based RV (LWZ)
  (More compelling than large-k range-based RV: efficiency, robustness, …)

RV Volatility Proxies and Treatment of k:

Vol Proxy     Large k                             Fixed (Small) k
r^2           ABDL 2001, BNS 2002, ABDL 2003      BLL 2021
Range         CP 2007                             LWZ 2022

ABDL 2001: Andersen, Bollerslev, Diebold and Labys, JASA
BNS 2002: Barndorff-Nielsen and Shephard, JRSS
ABDL 2003: Andersen, Bollerslev, Diebold and Labys, Econometrica
CP 2007: Christensen and Podolskij, JoE
BLL 2021: Bollerslev, Li and Liao, JoE
LWZ 2022: Li, Wang and Zhang, unpublished – NICE!