Monday, December 28, 2015
Cochrane on Research Reliability and Replication
Check out John's new piece. His views largely match mine. Here's to the demand side!
Friday, December 18, 2015
Holiday Haze
Your dedicated blogger is about to vanish in the holiday haze, returning in the new year. Meanwhile, all best wishes for the holidays.
[Photo credit: Public domain, by Marcus Quigmire, from Florida, USA (Happy Holidays Uploaded by Princess Mérida) [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons]
Sunday, December 13, 2015
Superforecasting
A gratis copy of Philip Tetlock and Dan Gardner's new book, Superforecasting, arrived a couple of months ago, just before it was published. It's been sitting on my desk until now. With a title like "Superforecasting," perhaps I subconsciously thought it would be pop puffery and delayed looking at it. If so, I was wrong. It's a winner.
Superforecasting is in the tradition of Nate Silver's The Signal and the Noise, but whereas Silver has little expertise (except in politics, baseball and poker, which he knows well) and goes for breadth rather than depth, Tetlock has significant expertise (his own pioneering research, on which his book is built) and goes for depth. Tetlock's emphasis throughout is on just one question: What makes good forecasters good?
Superforecasting is mostly about probabilistic event forecasting, for events much more challenging than those that we econometricians and statisticians typically consider, and for which there is often no direct historical data (e.g., conditional on information available at this moment, what is the probability that Google files for bankruptcy by December 31, 2035?). Nevertheless it contains many valuable lessons for us in forecast construction, evaluation, combination, updating, etc.
You can expect several posts on aspects of Superforecasting in the new year as I re-read it. For now I just wanted to bring it to your attention in case you missed it. Really nice.
Thursday, December 10, 2015
Long Memory Stochastic Volatility
Check out Mark Jensen's new paper. Long memory is a key feature of realized high-frequency asset-return volatility, yet it remains poorly understood. Jensen's approach may help change that. Of particular interest are: (1) its ability to seamlessly handle \(d \in [0, 1)\), despite the fact that the unconditional variance is infinite for \(d \in (0.5, 1)\), and (2), closely related, the important role played by wavelets.
Details:
Robust estimation of nonstationary, fractionally integrated, autoregressive, stochastic volatility
Date: 2015-11-01
By: Jensen, Mark J. (Federal Reserve Bank of Atlanta)
Empirical volatility studies have discovered nonstationary, long-memory dynamics in the volatility of the stock market and foreign exchange rates. This highly persistent, infinite variance -- but still mean reverting -- behavior is commonly found with nonparametric estimates of the fractional differencing parameter d for financial volatility. In this paper, a fully parametric Bayesian estimator, robust to nonstationarity, is designed for the fractionally integrated, autoregressive, stochastic volatility (SV-FIAR) model. Joint estimates of the autoregressive and fractional differencing parameters of volatility are found via a Bayesian, Markov chain Monte Carlo (MCMC) sampler. Like Jensen (2004), this MCMC algorithm relies on the wavelet representation of the log-squared return series. Unlike the Fourier transform, where a time series must be a stationary process to have a spectral density function, wavelets can represent both stationary and nonstationary processes. As long as the wavelet has a sufficient number of vanishing moments, this paper's MCMC sampler will be robust to nonstationary volatility and capable of generating the posterior distribution of the autoregressive and long-memory parameters of the SV-FIAR model regardless of the value of d. Using simulated and empirical stock market return data, we find our Bayesian estimator producing reliable point estimates of the autoregressive and fractional differencing parameters with reasonable Bayesian confidence intervals for either stationary or nonstationary SV-FIAR models.
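(For readers who want to see the simplest possible version of long-memory estimation in action -- not Jensen's wavelet-based Bayesian machinery -- here is a minimal sketch: simulate a fractionally integrated series and recover d with the classic Geweke/Porter-Hudak log-periodogram regression. The sample size, the value of d, and the bandwidth are illustrative assumptions, nothing more.)

```python
# A quick look at long memory: simulate ARFIMA(0, d, 0) and recover d
# with the Geweke/Porter-Hudak (GPH) log-periodogram regression.
# Illustrative only -- NOT the wavelet-based Bayesian estimator in Jensen's paper.
import numpy as np

rng = np.random.default_rng(0)
T, d_true = 5000, 0.35          # assumed sample size and memory parameter

# MA(infinity) weights of (1 - L)^(-d): psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j
psi = np.ones(T)
for j in range(1, T):
    psi[j] = psi[j - 1] * (j - 1 + d_true) / j
x = np.convolve(rng.standard_normal(T), psi)[:T]    # fractionally integrated series

# GPH regression: log I(w_j) on -log(4 sin^2(w_j / 2)), j = 1, ..., m
m = int(np.sqrt(T))                                  # a common bandwidth choice
w = 2 * np.pi * np.arange(1, m + 1) / T              # low Fourier frequencies
I_w = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * T)   # periodogram
X = np.column_stack([np.ones(m), -np.log(4 * np.sin(w / 2) ** 2)])
d_hat = np.linalg.lstsq(X, np.log(I_w), rcond=None)[0][1]
print(f"true d = {d_true:.2f}, GPH estimate = {d_hat:.2f}")
```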
Sunday, December 6, 2015
New Review of Forecasting at Bank of England
Check it out here. It's thorough and informative.
It's interesting and unfortunate that even the Bank of England, the great "fan chart pioneer," produces density forecasts for only three of eleven variables forecasted (p. 15). In my view, the most important single forecasting improvement that the Bank of England -- and all central banks -- could implement is a complete switch from point to density forecast construction, evaluation and combination.
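For concreteness, here is a minimal sketch of what the switch means in the simplest possible setting: a Gaussian AR(1) fit by OLS, with the full h-step-ahead predictive distribution (a fan chart in embryo) rather than just the conditional mean. Everything below -- the DGP, the horizon, the quantiles -- is an invented illustration, and a serious fan chart would of course also fold in parameter and model uncertainty.

```python
# From point to density forecasts: a Gaussian AR(1) fan chart in miniature.
# Purely illustrative; only innovation uncertainty is reflected here.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
phi, sigma, T, H = 0.8, 1.0, 200, 8          # assumed DGP and forecast horizon
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + sigma * rng.standard_normal()

# Fit an AR(1) by OLS
X, Y = y[:-1], y[1:]
phi_hat = X @ Y / (X @ X)
sig_hat = np.std(Y - phi_hat * X, ddof=1)

# h-step-ahead predictive mean, standard deviation, and quantile bands
quantiles = [0.05, 0.25, 0.5, 0.75, 0.95]
for h in range(1, H + 1):
    mean_h = phi_hat ** h * y[-1]
    sd_h = sig_hat * np.sqrt(np.sum(phi_hat ** (2 * np.arange(h))))
    bands = [mean_h + norm.ppf(q) * sd_h for q in quantiles]
    print(f"h={h}: " + ", ".join(f"q{int(100 * q)}={b:.2f}" for q, b in zip(quantiles, bands)))
```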
Wednesday, December 2, 2015
NYU "Five-Star" Conference 2015
Program with clickable papers here. The amazing thing about Five-Star is that it actually works, and works well, year after year, despite the usually-disastrous fact that it involves coordination among universities.
Eurostat Forecasting Competition Deadline Approaching
I have some serious reservations about forecasting competitions, at least as typically implemented by groups like Kaggle. But still they're useful and exciting and absolutely fascinating. Here's a timely call for participation, from Eurostat. (Actually this one is nominally for nowcasting, not forecasting, but in reality they're the same thing.)
[I'm not sure why they're trying to shoehorn "big data" into it, except that it sounds cool and everyone wants to jump on the bandwagon. The winner is the winner, whether based on big data, small data, or whatever, and whether produced by an econometrician, a statistician, or a data scientist. I'm not even sure what "Big Data" means, or what a "data scientist" means, here or anywhere. (Standard stat quip: A data scientist is a statistician who lives in San Francisco.) End of rant.]
Big Data for Official Statistics Competition launched - please register by 10 January 2016
The Big Data for Official Statistics Competition (BDCOMP) has just been launched, and you are most welcome to participate. All details are provided in the call for participation.
Participation is open to everybody (with a few very specific exceptions detailed in the call).
In this first instalment of BDCOMP, the competition is exclusively about nowcasting economic indicators at the national or European level. There are 7 tracks in the competition. They correspond to 4 main indicators: Unemployment, HICP, Tourism and Retail Trade, and some of their variants. Usage of Big Data is encouraged but not mandatory. For a detailed description of the competition tasks, please refer to the call.
The authors of the best-performing submissions for each track will be invited to present their work at the NTTS 2017 conference (the exact award criteria can be found in the call).
The deadline for registration is 10 January 2016. The duration of the competition is roughly a year (including about a month for evaluation). For a detailed schedule of submissions, please refer to the call.
The competition is organised by Eurostat and has a Scientific Committee composed of colleagues from various member and observer organisations of the European Statistical System (ESS).
On behalf of the BDCOMP Scientific Committee,
The BDCOMP organising team
Monday, November 23, 2015
On Bayesian DSGE Modeling with Hard and Soft Restrictions
A theory is essentially a restriction on a reduced form. It can be imposed directly (hard restrictions) or used as a prior mean in a more flexible Bayesian analysis (soft restrictions). The soft restriction approach -- "theory as a shrinkage direction" -- is appealing: coax parameter configurations toward a prior mean suggested by theory, but also respect the likelihood, and govern the mix by prior precision.
(1) Important macro-econometric DSGE work, dating at least to the classic Ingram and Whiteman (1994) paper, finds that using theory as a VAR shrinkage direction is helpful for forecasting.
(2) But that's not what most Bayesian DSGE work now does. Instead it imposes hard theory restrictions on a VAR, conditioning completely on an assumed DSGE model, using Bayesian methods simply to coax the assumed model's parameters toward "reasonable" values.
It's not at all clear that approach (2) should dominate approach (1) for prediction, and indeed research like Del Negro and Schorfheide (2004) and Del Negro and Schorfheide (2007) indicates that it doesn't.
I like (1) and I think it needs renewed attention.
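As a toy illustration of approach (1) -- my own stripped-down example, not the Ingram-Whiteman or Del Negro-Schorfheide machinery -- here is a minimal conjugate sketch in which the posterior mean is a precision-weighted blend of the OLS estimate and a theory-implied coefficient vector, with the prior precision governing how hard the theory shrinkage bites.

```python
# "Theory as a shrinkage direction" in miniature: a Gaussian regression with a
# prior centered on a theory-implied coefficient vector. As the prior precision
# lam grows, the posterior mean moves from OLS toward the theory value (the
# hard-restriction limit). All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
T, k = 120, 3
beta_true = np.array([0.5, -0.2, 0.8])      # what the data actually obey
beta_theory = np.array([0.4, 0.0, 1.0])     # what the theory says
sigma = 1.0

X = rng.standard_normal((T, k))
y = X @ beta_true + sigma * rng.standard_normal(T)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

def posterior_mean(lam):
    """Posterior mean under prior beta ~ N(beta_theory, (1/lam) * I), known sigma."""
    A = X.T @ X / sigma**2 + lam * np.eye(k)
    b = X.T @ y / sigma**2 + lam * beta_theory
    return np.linalg.solve(A, b)

print("OLS:                 ", np.round(beta_ols, 3))
for lam in [0.1, 10.0, 1e6]:
    print(f"posterior (lam={lam:g}):", np.round(posterior_mean(lam), 3))
print("theory:              ", np.round(beta_theory, 3))
```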
[A related issue is whether "theory priors" will supplant others, like the "Minnesota prior." I'll save that for a later post.]
Monday, November 16, 2015
Climatology and Predictive Modeling
A notice about this paper just arrived.
Very cool, I thought. So I clicked on the EEE above, to see more systematically what the NBER's Environmental and Energy Economics group is doing these days. In general it has a very interesting list, and in particular it has an interesting list from a predictive modeling viewpoint. Check this, for example:
Climate Engineering Economics
Garth Heutel, Juan Moreno-Cruz, Katharine Ricke
Modeling Uncertainty in Climate Change: A Multi-Model Comparison
Kenneth Gillingham, William D. Nordhaus, David Anthoff, Geoffrey Blanford, Valentina Bosetti, Peter Christensen, Haewon McJeon, John Reilly, Paul Sztorc
The economics of climate change involves a vast array of uncertainties, complicating both the analysis and development of climate policy. This study presents the results of the first comprehensive study of uncertainty in climate change using multiple integrated assessment models. The study looks at model and parametric uncertainties for population, total factor productivity, and climate sensitivity. It estimates the pdfs of key output variables, including CO2 concentrations, temperature, damages, and the social cost of carbon (SCC). One key finding is that parametric uncertainty is more important than uncertainty in model structure. Our resulting pdfs also provide insights on tail events.
There's lots of great stuff in GNABBCMRS. (Sorry for the tediously-long acronym.) Among other things, it is correct in noting that "It is conceptually clear that the ensemble approach is an inappropriate measure of uncertainty of outcomes," and it takes a much broader approach. [The "ensemble approach" means different things in different meteorological / climatological contexts, but in this paper's context it means equating forecast error uncertainty with the dispersion of point forecasts across models.] The fact is that point forecast dispersion and forecast uncertainty are very different things. History is replete with examples of tight consensuses that turned out to be wildly wrong.
Unfortunately, however, the "ensemble approach" remains standard in meteorology / climatology. The standard econometric/statistical taxonomy, in contrast, includes not only model uncertainty, but also parameter uncertainty and innovation (stochastic shock) uncertainty. GNABBCMRS focus mostly on parameter uncertainty vs. model uncertainty and find that parameter uncertainty is much more important. That's a major advance.
But more focus is still needed on the third component of forecast error uncertainty, innovation uncertainty. The deterministic Newton / Lorenz approach embodied in much of meteorology / climatology needs thorough exorcising. I have long believed that traditional time-series econometric methods have much to offer in that regard.
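A tiny simulation (all numbers invented) makes the point about ensemble spread versus genuine forecast uncertainty: even when several models' point forecasts are tightly clustered, their dispersion can badly understate total forecast-error uncertainty once innovation uncertainty enters.

```python
# Ensemble spread vs. total forecast uncertainty, in miniature.
# Three "models" produce nearly identical point forecasts, so the ensemble
# standard deviation is tiny -- yet the realized forecast-error standard
# deviation, driven by innovation uncertainty, is an order of magnitude larger.
# (Parameter uncertainty, the third component, is ignored here for brevity.)
import numpy as np

rng = np.random.default_rng(7)
truth_mean, shock_sd = 2.0, 1.0                   # DGP for the outcome being forecast
point_forecasts = np.array([1.95, 2.00, 2.05])    # three tightly clustered models

ensemble_spread = point_forecasts.std(ddof=1)

# Simulate realized outcomes and the forecast errors of the ensemble mean
outcomes = truth_mean + shock_sd * rng.standard_normal(100_000)
errors = outcomes - point_forecasts.mean()
total_uncertainty = errors.std(ddof=1)

print(f"ensemble spread of point forecasts: {ensemble_spread:.3f}")
print(f"actual forecast-error std. dev.:    {total_uncertainty:.3f}")
```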
Wednesday, November 11, 2015
A Fascinating Event Study
I just read an absolutely fascinating event study, "The Power of the Street: Evidence from Egypt’s Arab Spring," by Daron Acemoglu, Tarek Hassan and Ahmed Tahoun (DHT). The paper is here. I'm looking forward to seeing the seminar later today.
[Abstract: During Egypt’s Arab Spring, unprecedented popular mobilization and protests brought down Hosni Mubarak’s government and ushered in an era of competition between three groups: elites associated with Mubarak’s National Democratic Party, the military, and the Islamist Muslim Brotherhood. Street protests continued to play an important role during this power struggle. We show that these protests are associated with differential stock market returns for firms connected to the three groups. Using daily variation in the number of protesters, we document that more intense protests in Tahrir Square are associated with lower stock market valuations for firms connected to the group currently in power relative to non-connected firms, but have no impact on the relative valuations of firms connected to other powerful groups. We further show that activity on social media may have played an important role in mobilizing protesters, but had no direct effect on relative valuations. According to our preferred interpretation, these events provide evidence that, under weak institutions, popular mobilization and protests have a role in restricting the ability of connected firms to capture excess rents.]
When first reading DHT, I thought the authors might be unaware of the large finance literature on event studies, since they don't cite any of it. Upon closer reading, however, I see that they repeatedly use the term "standard event study," indicating awareness coupled with a view that the methodology is now so well known as to render a citation unnecessary, along the lines of "no need to cite Student every time you report a t-statistic."
Well, perhaps, although I'm certain that most economists, even thoroughly empirical economists -- indeed even econometricians! -- have no idea what an event study is.
Anyway, here's a bit of background for those who want some.
DHT-style event studies originated in finance. The idea is to fit a benchmark return model to pre-event data, and then to examine cumulative "abnormal" returns (assessed using the model fitted pre-event) in a suitable post-event window. Large abnormal returns indicate a large causal impact of the event under study. The idea is brilliant in its simplicity and power.
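For those who want the mechanics, here is a minimal sketch of a generic market-model event study (simulated placeholder data, not DHT's specification): estimate the benchmark on a pre-event window, then cumulate abnormal returns over the event window.

```python
# A bare-bones market-model event study: fit r_firm = a + b * r_market + e on a
# pre-event estimation window, then cumulate abnormal returns (actual minus
# predicted) over the event window. Data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_est, n_event = 250, 11                      # estimation and event window lengths
r_mkt = 0.0004 + 0.01 * rng.standard_normal(n_est + n_event)
alpha_true, beta_true = 0.0002, 1.1
r_firm = alpha_true + beta_true * r_mkt + 0.015 * rng.standard_normal(n_est + n_event)
r_firm[n_est + 5] -= 0.04                     # inject an "event" shock for illustration

# Estimate the benchmark model on the pre-event window only
X = np.column_stack([np.ones(n_est), r_mkt[:n_est]])
a_hat, b_hat = np.linalg.lstsq(X, r_firm[:n_est], rcond=None)[0]

# Abnormal returns and CAR over the event window
abnormal = r_firm[n_est:] - (a_hat + b_hat * r_mkt[n_est:])
car = abnormal.cumsum()
print(f"alpha={a_hat:.5f}, beta={b_hat:.3f}, CAR over event window = {car[-1]:.4f}")
```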
Like so many things in empirical finance, event studies trace to Gene Fama (in this case, with several other luminaries):
"The Adjustment of Stock Prices to New Information"
E.F. Fama, L. Fisher, M.C. Jensen and R. Roll, International Economic Review, 1969.
"There is an impressive body of empirical evidence which indicates that successive price changes in individual common stocks are very nearly independent. Recent papers by Mandelbrot and Samuelson show rigorously that independence of successive ..."
For surveys, see:
"The Econometrics of Event Studies"
S.P. Kothari and J.B. Warner, 2004. Available at SSRN 608601.
"The number of published event studies exceeds 500, and the literature continues to grow. We provide an overview of event study methods. Short-horizon methods are quite reliable. While long-horizon methods have improved, serious limitations remain. ..."
"Event Studies in Economics and Finance"
A.C. MacKinlay, Journal of Economic Literature, 1997.
"Economists are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study ..."
(Not much has changed since 2004, or for that matter, since 1997.)
Friday, November 6, 2015
Conference on Bond Markets and Yield Curve Modeling
Fantastic job by Bank of Canada and FRBSF. Kudos to both for successfully assembling such talent. It's just ending as I write. It was all good, but the papers/discussants that resonated most with me were:
Session 4: Predicting Interest Rates
Robust Bond Risk Premia
Michael Bauer, Federal Reserve Bank of San Francisco
James Hamilton, University of California at San Diego
Discussant: John Cochrane, Hoover Institution at Stanford University
Loss Functions for Forecasting Treasury Yields
Hitesh Doshi, University of Houston
Kris Jacobs, University of Houston
Rui Liu, University of Houston
Discussant: Frank Diebold, University of Pennsylvania
Session 5: Term Structure Modeling and the Zero Lower Bound
Session Chair: Antonio Diez de los Rios, Bank of Canada
Tractable Term Structure Models: A New Approach
Bruno Feunou, Bank of Canada
Jean-Sebastien Fontaine, Bank of Canada
Anh Le, Kenan-Flagler Business School, University of North Carolina at Chapel Hill
Discussant: Greg Duffee, Johns Hopkins University
Staying at Zero with Affine Processes: An Application to Term Structure Modelling
Alain Monfort, Banque de France
Fulvio Pegoraro, Banque de France
Jean-Paul Renne, Banque de France
Guillaume Roussellet, Banque de France
Discussant: Marcel Priebsch, Board of Governors of the Federal Reserve System
Here's the whole thing:
5th Conference on Fixed Income Markets
Recent Advances in Fixed Income Research and Implications for Monetary Policy
Bank of Canada and Federal Reserve Bank of San Francisco
Yellen Conference Center
November 5-6, 2015
Thursday, November 5
8:00 – 8:45 a.m. Breakfast
8:45 – 9:00 a.m. Welcoming Remarks
Timothy Lane, Deputy Governor, Bank of Canada
9:00 – 10:30 a.m. Session 1: The Effects of Quantitative Easing
Session Chair: Michael Bauer, Federal Reserve Bank of San Francisco
A Lesson from the Great Depression that the Fed Might have Learned: A Comparison of the 1932 Open Market Purchases with Quantitative Easing
Michael Bordo, Rutgers University, Hoover Institution at Stanford University, NBER
Arunima Sinha, Fordham University
Discussant: Annette Vissing-Jorgensen, Berkeley Haas
Transmission of Quantitative Easing: The Role of Central Bank Reserves
Jens Christensen, Federal Reserve Bank of San Francisco
Signe Krogstrup, Swiss National Bank
Discussant: Arvind Krishnamurthy, Stanford Graduate School of Business
10:30 – 11:00 a.m. Break
11:00 a.m. – 12:30 p.m. Session 2: Macroeconomic Risks and the Yield Curve
Economic Policy Uncertainty and the Yield Curve
Markus Leippold, Swiss Financial Institute and University of Zurich
Felix Matthys, Princeton University
Discussant: Anna Cieslak, Duke University
Macro Risks and the Term Structure
Geert Bekaert, Columbia University and NBER
Eric Engstrom, Board of Governors of the Federal Reserve System
Andrey Ermolov, Columbia University
Discussant: Mikhail Chernov, University of California at Los Angeles
12:30 p.m. Lunch, Market Street Dining Room, Fourth Floor
1:45 – 3:15 p.m. Session 3: Bond Prices in Equilibrium
Session Chair: Michael Ehrmann, Bank of Canada
A Macroeconomic Model of Equities and Real, Nominal, and Defaultable Debt
Eric Swanson, University of California at Irvine
Discussant: Hanno Lustig, Stanford Graduate School of Business
Bond Risk Premia in Consumption-based Models
Drew Creal, University of Chicago Booth School of Business
Jing Cynthia Wu, University of Chicago Booth School of Business and NBER
Discussant: Ivan Shaliastovich, Wharton School of the University of Pennsylvania
3:15 – 3:45 p.m. Break
3:45 – 5:15 p.m. Session 4: Predicting Interest Rates
Robust Bond Risk Premia
Michael Bauer, Federal Reserve Bank of San Francisco
James Hamilton, University of California at San Diego
Discussant: John Cochrane, Hoover Institution at Stanford University
Loss Functions for Forecasting Treasury Yields
Hitesh Doshi, University of Houston
Kris Jacobs, University of Houston
Rui Liu, University of Houston
Discussant: Frank Diebold, University of Pennsylvania
5:15 – 6:00 p.m. Reception, Salons A&B, Fourth Floor
6:00 – 8:00 p.m. Dinner, Market Street Dining Room, Fourth Floor
Introduction: John C. Williams, President, Federal Reserve Bank of San Francisco
Keynote Speaker: Athanasios Orphanides, Massachusetts Institute of Technology
Friday, November 6
8:00 – 8:45 a.m. Breakfast
8:45 – 10:15 a.m. Session 5: Term Structure Modeling and the Zero Lower Bound
Session Chair: Antonio Diez de los Rios, Bank of Canada
Tractable Term Structure Models: A New Approach
Bruno Feunou, Bank of Canada
Jean-Sebastien Fontaine, Bank of Canada
Anh Le, Kenan-Flagler Business School, University of North Carolina at Chapel Hill
Discussant: Greg Duffee, Johns Hopkins University
Staying at Zero with Affine Processes: An Application to Term Structure Modelling
Alain Monfort, Banque de France
Fulvio Pegoraro, Banque de France
Jean-Paul Renne, Banque de France
Guillaume Roussellet, Banque de France
Discussant: Marcel Priebsch, Board of Governors of the Federal Reserve System
10:15 – 10:45 a.m. Break
10:45 – 12:15 p.m. Session 6: Financial Stability in Bond Markets
Reaching for Yield by Corporate Bond Mutual Funds
Jaewon Choi, University of Illinois at Urbana-Champaign
Matias Kronlund, University of Illinois at Urbana-Champaign
Discussant: Francis Longstaff, University of California at Los Angeles
Collateral, Central Bank Repos, and Systemic Arbitrage
Falko Fecht, Frankfurt School of Finance & Management
Kjell Nyborg, University of Zurich, Swiss Finance Institute, and CEPR
Jorg Rocholl, ESMT European School of Management and Technology
Jiri Woschitz, University of Zurich
Discussant: Stefania D’Amico, Federal Reserve Bank of Chicago
12:15 – 1:30 p.m. Lunch
1:30 p.m. Adjourn
Program Committee:
Antonio Diez de los Rios, Bank of Canada
Jean-Sebastien Fontaine, Bank of Canada
Michael Bauer, Federal Reserve Bank of San Francisco
Jens Christensen, Federal Reserve Bank of San Francisco
Complexity in Economics: Big Data and Parallelization
Good conference in Switzerland. I was not there, but my colleague Frank Schorfheide sends glowing reports. For my tastes/interests at the moment, I am most intrigued by titles like:
Davide Pettenuzzo, Brandeis University
Bayesian Compressed Vector Autoregressions
Mike West, Duke University
Bayesian Predictive Synthesis (BPS)
Hedibert Freitas Lopes, INSPER – Institute of Education and Research
Parsimony-Inducing Priors for Large Scale State-Space Models.
I look forward to reading them and others. Program and list of participants follow.
www.szgerzensee.ch
Foundation of the Swiss National Bank
Program
Complexity in Economics: Big Data and Parallelization
6th ESOBE Annual Conference, October 29 – 30, 2015
Study Center Gerzensee, Gerzensee, Switzerland
Wednesday, October 28
18.00 Shuttle (Bern Railway station, Meeting Point)
18.45 Arrival of Participants
19.30 Dinner
Thursday, October 29
08.15
Opening
Dirk Niepelt, Study Center Gerzensee
08.30 – 09.30
Chair: Sylvia Kaufmann, Study Center Gerzensee
Keynote
Matthew Jackson, Stanford University
Modeling Network Formation with Correlated Links
Coffee Break
10.00 – 12.00
Network & Multidimensional
Chair: Helga Wagner, Johannes Kepler University
Daniele Bianchi, Warwick Business School, University of Warwick
Modeling Contagion and Systemic Risk
Veni Arakelian, Panteion University
European Sovereign Systemic Risk Zones
Stefano Grassi, University of Kent
Dynamic Predictive Density Combinations for Large Data Sets in Economics and Finance
Mark Jensen, Federal Reserve Bank of Atlanta
Cross-section of Mutual Fund Performance
12.00 – 14.00
Standing Lunch & Poster Session
14.00 – 15.00
Chair: Gianni Amisano, Federal Reserve Board and University of Technology Sydney
Keynote
Frank Schorfheide, University of Pennsylvania, Philadelphia
Sequential Monte Carlo Methods for DSGE Models
15.30 – 17.00
Macro & Forecasting
Chair: Markus Pape, Ruhr-Universität Bochum
Davide Pettenuzzo, Brandeis University
Bayesian Compressed Vector Autoregressions
Arnab Bhattacharjee, Heriot-Watt University
Does the FOMC Care about Model Misspecification?
Mike West, Duke University
Bayesian Predictive Synthesis (BPS)
17.15 – 18.15
Time Series
Chair: Maria Bolboaca, Study Center Gerzensee
Hedibert Freitas Lopes, INSPER – Institute of Education and Research
Parsimony Inducing Priors for Large Scale State-Space Models
Markus Jochmann, Newcastle University
Bayesian Nonparametric Cointegration Analysis
19.00
Dinner
Friday, October 30
09.00 – 10.00
Chair: Herman K. van Dijk, Erasmus University Rotterdam
Keynote
John Geweke, University of Technology, Sydney
Sequential Adaptive Bayesian Learning Algorithms for Inference and Optimization
Coffee Break
10.30 – 12.00
Chair: Hedibert Freitas Lopes, INSPER - Institute of Education and Research
Invited Speakers
Gianni Amisano, Federal Reserve Board and University of Technology Sydney
Large Time Varying Parameter VARs for Macroeconomic Forecasting
Sylvia Frühwirth-Schnatter, Vienna University of Economics and Business
Flexible Econometric Modelling Based on Sparse Finite Mixtures
Herman van Dijk, Erasmus University Rotterdam
Bayesian Inference and Forecasting with Time-Varying Reduced Rank Econometric Models
12.00 – 13.30
Standing Lunch & Poster Session
13.30 – 15.30
Chair: Markus Jochmann, Newcastle University
Junior Researcher Session
Gregor Kastner, Vienna University of Economics and Business
Sparse Bayesian Latent Factor Stochastic Volatility Models for Dynamic Covariance Estimation in High-Dimensional Time Series
Markus Pape, Ruhr University Bochum
A Two-Step Approach to Bayesian Analysis of Sparse Factor Models
Vegard Larsen, BI Norwegian Business School
The Value of News
Discussants: John Geweke, Mark Jensen, Mike West
15.45 – 16.45
Panel Data
Chair: Veni Arakelian, Panteion University
Taps Maiti, Michigan State University
Spatio-Temporal Forecasting: A Bayesian Spatial Clustering Approach
Helga Wagner, Johannes Kepler University
Sparse Bayesian modelling for categorical predictors
16.45
Departure of Participants
Shuttle to Bern
Poster Sessions
Thursday
Boris Blagov, University of Hamburg
Modelling the Time-Variation in Euro Area Lending
Angela Bitto, WU Vienna University of Economics and Business
Achieving Shrinkage in the Time-Varying Parameter Models Framework
Shuo Cao, University of Glasgow
Co-Movement, Spillovers and Excess Returns in Global Bond Markets
Christoph Frey, University of Konstanz
Bayesian Regularization of Portfolio Weights
Blazej Mazur, Cracow University of Economics
Forecasting Performance of Bayesian Autoregressive Conditional Score Models using Flexible Asymmetric Distributions
Boriss Siliverstovs, ETH Zurich KOF
Dissecting Models’ Forecasting Performance
Friday
Arnab Bhattacharjee, Heriot-Watt University
Latent Space Supply Chain Linkages of Three US Auto Manufacturing Giants
Daniel Kaufmann, ETH Zurich
Metal vs. Paper: An Assessment of Nominal Stability across Monetary Regimes
Gertraud Malsiner-Walli, Johannes Kepler University Linz
Bayesian Variable Selection in Semi-Parametric Growth Regression
Julia Elizabeth Reynolds, Vienna Graduate School of Finance
Commonality in Liquidity Dimensions: The Impact of Financial Crisis and
Regulation NMS
Peter Schwendner, ZHAW
European Government Bond Dynamics and Stability Policies: Taming Contagion Risks
Participants
Complexity in Economics: Big Data and Parallelization
6th ESOBE Annual Conference, October 29 – 30, 2015
Study Center Gerzensee, Gerzensee, Switzerland
Names Last Names Institutions
Gianni Amisano Federal Reserve Board and University of Technology Sydney
Veni Arakelian Panteion University
Nalan Basturk Maastricht University
Simon Beyeler Study Center Gerzensee
Arnab Bhattacharjee Heriot-Watt University
Daniele Bianchi University of Warwick
Angela Bitto Vienna University of Economics and Business
Boris Blagov University of Hamburg
Maria Bolboaca Study Center Gerzensee
Shuo Cao University of Glasgow
Christoph Frey University of Konstanz
Sylvia Frühwirth-Schnatter Wirtschaftsuniversität Wien
John Geweke University of Technology
Stefano Grassi University of Kent
Nils Herger Study Center Gerzensee
Matthew O. Jackson Stanford University
Mark Jensen Federal Reserve Bank of Atlanta
Markus Jochmann Newcastle University
Gregor Kastner Vienna University of Economics and Business
Sylvia Kaufmann Study Center Gerzensee
Daniel Kaufmann ETH Zurich
Dimitris Korobilis University of Glasgow
Vegard Larsen BI Norwegian Business School
Hedibert Lopes INSPER - Institute of Education and Research
Taps Maiti Michigan State University
Gertraud Malsiner-Walli Johannes Kepler University Linz
Blazej Mazur Cracow University of Economics
Sen Roy Nandini Goethe University Frankfurt
Dirk Niepelt Study Center Gerzensee
Markus Pape Ruhr-Universität Bochum
Davide Pettenuzzo Brandeis University
Julia Elizabeth Reynolds Vienna Graduate School of Finance
Frank Schorfheide University of Pennsylvania
Martin Schuele ZHAW Zurich IAS
Peter Schwendner ZHAW School of Management and Law
Boriss Siliverstovs ETH Zurich KOF
Herman K. van Dijk Erasmus University Rotterdam
Audrone Virbickaite University of Konstanz
Stefan Voigt Vienna Graduate School of Finance
Helga Wagner Johannes Kepler University
Mike West Duke University
Thursday, October 29, 2015
Viewing Emailed Posts that Contain Math
A reminder for those of you who subscribe to the email feed:
MathJax doesn't display in email, so when you look at the emailed post, the math will just be in LaTeX. Simply click/tap on the post's title in your email (e.g., in the latest case, “The HAC Emperor has no Clothes”). It's hyperlinked to the actual blog site, which should display fine on all devices.
Wednesday, October 28, 2015
The HAC Emperor has no Clothes
Well, at least in time-series settings. (I'll save cross sections for a later post.)
Consider a time-series regression with possibly heteroskedastic and/or autocorrelated disturbances,
\( y_t = x_t' \beta + \varepsilon_t \).
A popular approach is to punt on the potentially non-iid disturbance, instead simply running OLS with kernel-based heteroskedasticity and autocorrelation consistent (HAC) standard errors. Punting via kernel-HAC estimation is a bad idea in time series, for several reasons:
(1) [Kernel-HAC is not likely to produce good \(\beta\) estimates.] It stays with OLS and hence gives up on efficient estimation of \(\hat{\beta}\). In huge samples the efficiency loss from using OLS rather than GLS/ML is likely negligible, but time-series samples are often smallish. For example, samples like 1960Q1-2014Q4 are typical in macroeconomics -- just a couple hundred observations of highly-serially-correlated data.
(2) [Kernel-HAC is not likely to produce good \(\beta\) inference.] Its standard errors are not tailored to a specific parametric approximation to \(\varepsilon\) dynamics. Proponents will quickly counter that that's a benefit, not a cost, and in some settings the proponents may be correct. But not in time series settings. In time series, \(\varepsilon\) dynamics are almost always accurately and parsimoniously approximated parametrically (ARMA for conditional mean dynamics in \(\varepsilon\), and GARCH for conditional variance dynamics in \(\varepsilon\)). Hence kernel-HAC standard errors may be unnecessarily unreliable in small samples, even if they're accurate asymptotically. And again, time-series sample sizes are often smallish.
(3) [Most crucially, kernel-HAC fails to capture invaluable predictive information.] Time series econometrics is intimately concerned with prediction, and explicit parametric modeling of dynamic heteroskedasticity and autocorrelation in \(\varepsilon\) can be used for improved prediction of \(y\). Autocorrelation can be exploited for improved point prediction, and dynamic conditional heteroskedasticity can be exploited for improved interval and density prediction. Punt on them and you're potentially leaving a huge amount of money on the table.
The clearly preferable approach is traditional parametric disturbance heteroskedasticity / autocorrelation modeling, with GLS/ML estimation. Simply allow for ARMA(p,q)-GARCH(P,Q) disturbances (say), with p, q, P and Q selected by AIC (say). (In many applications something like AR(3)-GARCH(1,1) or ARMA(1,1)-GARCH(1,1) would be more than adequate.) Note that the traditional approach is actually fully non-parametric when appropriately viewed as a sieve, and moreover it features automatic bandwidth selection.
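To make the recommendation concrete, here is a minimal sketch using statsmodels: OLS with kernel-HAC standard errors versus a regression with ARMA(p,q) disturbances estimated by ML, with (p,q) selected by AIC. The simulated data and the lag ranges searched are illustrative choices only, and the GARCH layer is omitted for brevity (packages such as arch handle it).

```python
# Kernel-HAC OLS vs. parametric ARMA-disturbance regression by ML, AIC-selected.
# A minimal sketch with simulated data; the GARCH layer is omitted for brevity.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
T = 220                                   # a "macro-sized" sample
x = rng.standard_normal(T)
eps = np.zeros(T)
for t in range(1, T):                     # AR(1) disturbances with rho = 0.8
    eps[t] = 0.8 * eps[t - 1] + rng.standard_normal()
y = 1.0 + 0.5 * x + eps

X = sm.add_constant(x)

# (a) Punt: OLS with kernel-HAC standard errors
ols_hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print("OLS-HAC beta:", np.round(ols_hac.params, 3), "se:", np.round(ols_hac.bse, 3))

# (b) Model the disturbances: regression with ARMA(p, q) errors, (p, q) by AIC
best = None
for p in range(3):
    for q in range(3):
        res = SARIMAX(y, exog=X, order=(p, 0, q)).fit(disp=False)
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q, res)
aic, p, q, res = best
print(f"ARMA({p},{q}) errors chosen by AIC:",
      np.round(res.params[:2], 3), "se:", np.round(res.bse[:2], 3))
```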
Kernel-HAC people call the traditional strategy "pre-whitening," to be done prior to kernel-HAC estimation. But the real point is that it's all -- or at least mostly all -- in the pre-whitening.
In closing, I might add that the view expressed here is strongly supported by top-flight research. On my point (2) and my general recommendation, for example, see the insightful work of den Haan and Levin (2000). It fell on curiously deaf ears and remains unpublished many years later. (It's on Wouter den Haan's web site in a section called "Sleeping and Hard to Get"!) In the interim much of the world jumped on the kernel-HAC bandwagon. It's time to jump off.
Sunday, October 25, 2015
Predictive Accuracy Rankings by MSE vs. MAE
We've all ranked forecast accuracy by mean squared error (MSE) and mean absolute error (MAE), the two great workhorses of relative accuracy comparison. MSE-rankings and MAE-rankings often agree, but they certainly don't have to -- they're simply different loss functions -- which is why we typically calculate and examine both.
Here's a trivially simple question: Under what conditions will MSE-rankings and MAE-rankings agree? It turns out that the answer is not at all trivial -- indeed it's unknown. Things get very difficult, very quickly.
With \(N(\mu, \sigma^2)\) forecast errors we have that
\( E(|e|) = \sigma \sqrt{2/\pi} \exp\left( -\frac{\mu^{2}}{2 \sigma^{2}}\right) + \mu \left[1-2 \Phi\left(-\frac{\mu}{\sigma} \right) \right], \)
where \(\Phi(\cdot)\) is the standard normal cdf. This relates MAE to the two components of MSE, bias (\(\mu\)) and variance (\(\sigma^2\)), but the relationship is complex. In the unbiased Gaussian case (\(\mu=0\) ), the result collapses to \(MAE \propto \sigma \), so that MSE-rankings and MAE-rankings must agree. But the unbiased Gaussian case is very, very special, and little else is known.
Some energetic grad student should crack this nut, giving necessary and sufficient conditions for identical MSE and MAE rankings in general environments. Two leads: See section 5 of Diebold-Shin (2014), who give a numerical characterization in the biased Gaussian case, and section 2 of Ardakani et al. (2015), who make analytical progress using the SED representation of expected loss.
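In the meantime, a quick simulation (parameter values invented purely to produce a reversal) shows how easily the two rankings can disagree, and doubles as a numerical check of the folded-normal MAE formula above.

```python
# MSE and MAE rankings can disagree. Forecaster A: unbiased but noisy errors,
# e_A ~ N(0, 1.2^2); forecaster B: biased but precise, e_B ~ N(1.1, 0.1^2).
# The numbers are chosen only so that the two loss functions rank differently.
import numpy as np
from scipy.stats import norm

def mae_normal(mu, sigma):
    """E|e| for e ~ N(mu, sigma^2) -- the folded-normal mean in the post."""
    return (sigma * np.sqrt(2 / np.pi) * np.exp(-mu**2 / (2 * sigma**2))
            + mu * (1 - 2 * norm.cdf(-mu / sigma)))

rng = np.random.default_rng(0)
e_A = rng.normal(0.0, 1.2, 1_000_000)
e_B = rng.normal(1.1, 0.1, 1_000_000)

for name, e, (mu, s) in [("A", e_A, (0.0, 1.2)), ("B", e_B, (1.1, 0.1))]:
    print(f"{name}: MSE={np.mean(e**2):.3f}  MAE={np.mean(np.abs(e)):.3f}  "
          f"analytic MAE={mae_normal(mu, s):.3f}")
# MSE prefers B, MAE prefers A -- the two loss functions rank differently.
```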
Friday, October 23, 2015
Victor Zarnowitz and the Jewish Diaspora
Let me be clear: I'm a German/Irish Philadelphia Catholic (with a bit more mixed in...), a typical present-day product of nineteenth-century U.S. immigration. So what do I really know about the Jewish experience, and why am I fit to pontificate (so to speak)? Of course I'm not, but that never stopped me before.
Credible estimates suggest that between 1939 and the end of WWII, ten million Jews left Europe. One of them was Victor Zarnowitz. He and I and other close colleagues with similar interests, not least Glenn Rudebusch, saw a lot of each other in the eighties and nineties and zeros, learned in spades from each other, and immensely enjoyed the ride.
But that's just the professional side. For Victor's full story, see his Fleeing the Nazis, Surviving the Gulag, and Arriving in the Free World: My Life and Times. I cried when reading it several years ago. All I'll say is that you should get it and read it. His courage and strength are jaw-dropping and intensely inspirational. And he's just one among millions.
[The impetus for this post came from the outpouring of emails for another recent post that mentioned Victor. Thanks everyone for your memories. Sorry that I had to disable blog comments. Maybe someday I'll bring them back.]
Saturday, October 17, 2015
Athey and Imbens on Machine Learning and Econometrics
Check out Susan Athey and Guido Imbens' NBER Summer Institute 2015 "Lectures on Machine Learning". (Be sure to scroll down, as there are four separate videos.) I missed the lectures this summer, and I just remembered that they're on video. Great stuff, reflecting parts of an emerging blend of machine learning (ML), time-series econometrics (TSE) and cross-section econometrics (CSE).
The characteristics of ML are basically (1) emphasis on overall modeling, for prediction (as opposed, for example, to emphasis on inference), (2) moreover, emphasis on non-causal modeling and prediction, (3) emphasis on computationally-intensive methods and algorithmic development, and (4) emphasis on large and often high-dimensional datasets.
Readers of this blog will recognize the ML characteristics as closely matching those of TSE! Rob Engle's V-Lab at NYU Stern's Volatility Institute, for example, embeds all of (1)-(4). So TSE and ML have a lot to learn from each other, but the required bridge is arguably quite short.
Interestingly, Athey and Imbens come not from the TSE tradition, but rather from the CSE tradition, which typically emphasizes causal estimation and inference. That makes for a longer required CSE-ML bridge, but it may also make for a larger payoff from building and crossing it (in both directions).
In any event I share Athey and Imbens' excitement, and I welcome any and all cross-fertilization of ML, TSE and CSE.
Sunday, October 11, 2015
On Forecast Intervals "Too Wide to Be Useful"
I keep hearing people say things like this or that forecast interval is "too wide to be useful."
In general, equating "wide" intervals with "useless" intervals is nonsense. A good (useful) forecast interval is one that's correctly conditionally calibrated; see Christoffersen (International Economic Review, 1998). If a correctly-conditionally-calibrated interval is wide, then so be it. If conditional risk is truly high, then a wide interval is appropriate and desirable.
[Note well: The relevant calibration concept is conditional. It's not enough for a forecast interval to be merely correctly unconditionally calibrated, which means that an allegedly x percent interval actually winds up containing the realization x percent of the time. That's necessary, but not sufficient, for correct conditional calibration. Again, see Christoffersen.]
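For readers who want to check conditional calibration in practice, here's a minimal sketch of Christoffersen's likelihood-ratio tests applied to the interval "hit" sequence. It's my own quick illustration (assuming a numpy/scipy environment), not code from the paper, and it ignores edge cases such as empty transition counts:

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_tests(hits, p):
    """Christoffersen (IER, 1998) LR tests on an interval 'hit' sequence:
    hits[t] = 1 if the realization fell inside the nominal 100p% interval.
    Assumes 0 < mean(hits) < 1 and positive transition counts."""
    hits = np.asarray(hits, dtype=int)
    T = len(hits)
    n1 = hits.sum()
    n0 = T - n1
    pi_hat = n1 / T

    # Unconditional coverage: H0 is that the hit probability equals p
    lr_uc = -2 * ((n0 * np.log(1 - p) + n1 * np.log(p))
                  - (n0 * np.log(1 - pi_hat) + n1 * np.log(pi_hat)))

    # Independence: iid hits vs. a first-order Markov alternative
    lag, cur = hits[:-1], hits[1:]
    n00 = np.sum((lag == 0) & (cur == 0)); n01 = np.sum((lag == 0) & (cur == 1))
    n10 = np.sum((lag == 1) & (cur == 0)); n11 = np.sum((lag == 1) & (cur == 1))
    pi01, pi11 = n01 / (n00 + n01), n11 / (n10 + n11)
    pi2 = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_iid = (n01 + n11) * np.log(pi2) + (n00 + n10) * np.log(1 - pi2)
    ll_markov = (n00 * np.log(1 - pi01) + n01 * np.log(pi01)
                 + n10 * np.log(1 - pi11) + n11 * np.log(pi11))
    lr_ind = -2 * (ll_iid - ll_markov)

    lr_cc = lr_uc + lr_ind   # the usual (approximate) combined statistic
    return {"LR_uc": (lr_uc, chi2.sf(lr_uc, 1)),
            "LR_ind": (lr_ind, chi2.sf(lr_ind, 1)),
            "LR_cc": (lr_cc, chi2.sf(lr_cc, 2))}

# Quick check on hits that are iid Bernoulli(0.90) by construction
rng = np.random.default_rng(0)
print(christoffersen_tests(rng.random(500) < 0.90, p=0.90))
```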
Of course all this holds as well for density forecasts. Whether a density forecast is "good" has nothing to do with its dispersion. Rather, in precise parallel to interval forecasts, a good density forecast is one that's correctly conditionally calibrated; see Diebold, Gunther and Tay (International Economic Review, 1998).
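The analogous density-forecast diagnostic uses the probability integral transform (PIT): under correct conditional calibration, the PIT series should be iid U(0,1). Diebold, Gunther and Tay emphasize graphical diagnostics (histograms and correlograms of the PITs and their powers); the quick formal checks below are just stand-ins, applied to hypothetical one-step-ahead Gaussian density forecasts of my own construction:

```python
import numpy as np
from scipy.stats import norm, kstest

# m, s: hypothetical one-step-ahead Gaussian density-forecast means and sds;
# y: realizations (here generated so that the forecasts are correct by construction)
rng = np.random.default_rng(0)
m, s = np.zeros(500), np.ones(500)
y = rng.normal(m, s)

z = norm.cdf(y, loc=m, scale=s)          # PIT series; should look iid U(0,1)

print(kstest(z, "uniform"))              # crude uniformity check
print(np.corrcoef(z[:-1], z[1:])[0, 1])  # crude check of first-order dependence
```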
Sunday, October 4, 2015
Whither Econometric Principal-Components Regressions?
Principal-components regression (PCR) is routine in applied time-series econometrics.
Why so much PCR, and so little ridge regression? Ridge and PCR are both shrinkage procedures involving PCs. The difference is that ridge effectively includes all PCs, shrinking each according to the size of its associated eigenvalue, whereas PCR shrinks the excluded PCs completely to zero and leaves the included ones unshrunk.
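To make the contrast concrete, here's a minimal sketch (simulated data, following the familiar SVD view in Hastie et al.): in the principal-components directions, ridge multiplies the j-th component's contribution by d_j^2/(d_j^2 + lambda), while PCR with k components applies a factor of one to the first k and zero to the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, lam, k = 200, 8, 10.0, 3   # sample size, regressors, ridge penalty, PCs kept

X = rng.normal(size=(T, K))
X -= X.mean(axis=0)              # center, as usual before PCA/ridge

# Singular values of X give the PC "sizes"
d = np.linalg.svd(X, compute_uv=False)

ridge_factors = d**2 / (d**2 + lam)             # smooth, eigenvalue-dependent shrinkage
pcr_factors = (np.arange(K) < k).astype(float)  # hard 0/1 inclusion

for j in range(K):
    print(f"PC {j+1}: d^2 = {d[j]**2:7.1f}   ridge = {ridge_factors[j]:.3f}   PCR = {pcr_factors[j]:.0f}")
```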
Does not ridge resonate as more natural and appropriate?
This recognition is hardly new or secret. It's in standard texts, like the beautiful Hastie et al. Elements of Statistical Learning.
Econometricians should pay more attention to ridge.
Thursday, October 1, 2015
Balke et al. on Real-Time Nowcasting
Check out the new paper, "Incorporating the Beige Book in a Quantitative Index of Economic Activity," by Nathan Balke, Michael Fulmer and Ren Zhang (BFZ).
[The Beige Book (BB) is a written description of U.S. economic conditions, produced by the Federal Reserve system. It is released eight times a year, roughly two weeks before the FOMC meeting.]
Basically BFZ include BB in an otherwise-standard FRB Philadelphia ADS Index. Here's the abstract:
We apply customized text analytics to the written description contained in the BB to obtain a quantitative measure of current economic conditions. This quantitative BB measure is then included into a dynamic factor index model that also contains other commonly used quantitative economic data. We find that at the time the BB is released, the BB has information about current economic activity not contained in other quantitative data. This is particularly the case during recessionary periods. However, by three weeks after its release date, "old" BB contain little additional information about economic activity not already contained in other quantitative data.
The paper is interesting for several reasons.
First, from a technical viewpoint, BFZ take mixed-frequency data to the max, because Beige Book releases are unequally spaced. Their modified ADS has quarterly, monthly, weekly, and now unequally spaced variables. But the Kalman filter handles it all, seamlessly.
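As a toy illustration of why irregular spacing is no obstacle, here's a minimal local-level Kalman filter (my own sketch, far simpler than the BFZ model) that simply skips the measurement update whenever an observation is missing:

```python
import numpy as np

def local_level_filter(y, q=0.1, r=1.0):
    """Kalman filter for x_t = x_{t-1} + w_t, y_t = x_t + v_t,
    with var(w) = q and var(v) = r.  NaNs in y are treated as missing:
    we do the time update and skip the measurement update."""
    x, P = 0.0, 1e6                       # diffuse-ish initialization
    xf = np.empty(len(y))
    for t, yt in enumerate(y):
        P = P + q                         # time update (prediction)
        if not np.isnan(yt):              # measurement update only if observed
            K = P / (P + r)
            x = x + K * (yt - x)
            P = (1 - K) * P
        xf[t] = x
    return xf

# Simulate a daily latent state, observe it only on irregularly spaced days
rng = np.random.default_rng(0)
T = 120
state = np.cumsum(rng.normal(scale=np.sqrt(0.1), size=T))
y = state + rng.normal(scale=1.0, size=T)
obs_days = np.sort(rng.choice(T, size=30, replace=False))  # unequally spaced releases
y_obs = np.full(T, np.nan)
y_obs[obs_days] = y[obs_days]

print(local_level_filter(y_obs)[-5:])     # filtered state estimates, last 5 days
```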
Second, including Beige Book -- basically "the view of the Federal Reserve System" -- is a novel and potentially large expansion of the nowcast information set.
Third, BFZ approach the evaluation problem in a very clever way, not revealed in the abstract. They view the initial ADS releases (with vs. without BB included) as forecasts of final-revised ADS (without BB included). They find large gains from including BB in estimating time t activity using time t vintage data, but little gain from including BB in estimating time t-30 (days) activity using time t vintage data. That is, including BB in ADS improves real-time nowcasting, even if it evidently adds little to retrospective historical assessment.
Sunday, September 27, 2015
Near Writing Disasters
Check out this "Retraction Watch" post, forwarded by a reader:
http://retractionwatch.com/2014/11/11/overly-honest-references-should-we-cite-the-crappy-gabor-paper-here/
Really funny. Except that it's a little close to home. I suspect that we've all had a few such accidents, or at least near-accidents, and with adjectives significantly stronger than "crappy". I know I have.
Thursday, September 24, 2015
Coolest Paper at 2015 Jackson Hole
The Faust-Leeper paper is wild and wonderful. The friend who emailed it said, "Be prepared, it’s very different but a great picture of real-time forecasting..." He got it right.
Actually his full email was, "Be prepared, it’s very different but a great picture of real-time forecasting, and they quote Zarnowitz." (He and I always liked and admired Victor Zarnowitz. But that's another post.)
The paper shines its light all over the place, and different people will read it differently. I did some spot checks with colleagues. My interpretation below resonated with some, while others wondered if we had read the same paper. Perhaps, as with Keynes, we'll never know exactly what Faust-Leeper really, really, really meant.
I read Faust-Leeper as speaking to factor analysis in macroeconomics and finance, arguing that dimensionality reduction via factor structure, at least as typically implemented and interpreted, is of limited value to policymakers, although the paper never uses wording like "dimensionality reduction" or "factor structure".
If Faust-Leeper are doubting factor structure itself, then I think they're way off base. It's no accident that factor structure is at the center of both modern empirical/theoretical macro and modern empirical/theoretical finance. It's really there and it really works.
Alternatively, if they're implicitly saying something like this, then I'm interested:
Small-scale factor models involving just a few variables and a single common factor (or even two factors like "real activity" and "inflation") are likely missing important things, and are therefore incomplete guides for policy analysis.
Or, closely related and more constructively:
We should cast a wide net in terms of the universe of observables from which we extract common factors, and the number of factors that we extract. Moreover we should examine and interpret not only common factors, but also allegedly "idiosyncratic" factors, which may actually be contemporaneously correlated, time dependent, or even trending, due to mis-specification.
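In that constructive spirit, here's a minimal sketch (simulated data, nothing to do with Faust-Leeper's own empirics) of extracting several principal-component factors from a wide panel and then checking whether the allegedly idiosyncratic residuals are in fact cross-correlated or serially dependent:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 200, 50, 3                     # time periods, variables, factors to extract

# Simulated panel with 2 true factors plus a common shock hiding in the "idiosyncratic" noise
F = rng.normal(size=(T, 2))
L = rng.normal(size=(2, N))
E = rng.normal(size=(T, N)) + 0.3 * rng.normal(size=(T, 1))
X = F @ L + E
X = (X - X.mean(0)) / X.std(0)           # standardize before PCA

# Extract k principal-component factors
U, d, Vt = np.linalg.svd(X, full_matrices=False)
factors = U[:, :k] * d[:k]
loadings = Vt[:k]
resid = X - factors @ loadings           # "idiosyncratic" components

# Diagnostics: average absolute cross-correlation and first-order autocorrelation
C = np.corrcoef(resid, rowvar=False)
off_diag = C[~np.eye(N, dtype=bool)]
print("mean |cross-corr| of residuals:", np.abs(off_diag).mean())
ac1 = [np.corrcoef(resid[:-1, i], resid[1:, i])[0, 1] for i in range(N)]
print("mean first-order autocorr:", np.mean(ac1))
```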
Enough. Read it for yourself.
[General note: My use of terms like "factor modeling" throughout this post should be broadly interpreted to include not only explicit reduced-form statistical/econometric dynamic factor modeling, but also structural DSGE modeling.]