Monday, December 28, 2015
Cochrane on Research Reliability and Replication
Check out John's new piece. His views largely match mine. Here's to the demand side!
Friday, December 18, 2015
Holiday Haze
Your dedicated blogger is about to vanish in the holiday haze, returning in the new year. Meanwhile, all best wishes for the holidays.
[Photo credit: Public domain, by Marcus Quigmire, from Florida, USA (Happy Holidays Uploaded by Princess Mérida) [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons]
Monday, December 14, 2015
New Elsevier: Good or Bad?
Sunday, December 13, 2015
Superforecasting
A gratis copy of Philip Tetlock and Dan Gardner's new book, Superforecasting, arrived a couple of months ago, just before it was published. It's been sitting on my desk until now. With a title like "Superforecasting," perhaps I subconsciously thought it would be pop puffery and delayed looking at it. If so, I was wrong. It's a winner.
Superforecasting is in the tradition of Nate Silver's The Signal and the Noise, but whereas Silver has little expertise (except in politics, baseball and poker, which he knows well) and goes for breadth rather than depth, Tetlock has significant expertise (his own pioneering research, on which his book is built) and goes for depth. Tetlock's emphasis throughout is on just one question: What makes good forecasters good?
Superforecasting is mostly about probabilistic event forecasting, for events much more challenging than those that we econometricians and statisticians typically consider, and for which there is often no direct historical data (e.g., conditional on information available at this moment, what is the probability that Google files for bankruptcy by December 31, 2035?). Nevertheless it contains many valuable lessons for us in forecast construction, evaluation, combination, updating, etc.
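A quick illustration may help fix ideas: probabilistic event forecasts of the kind Tetlock studies are scored with the Brier score, the quadratic scoring rule used in his forecasting tournaments. Here is a minimal sketch on invented data; the numbers are mine, not the book's.

```python
import numpy as np

def brier_score(p, y):
    """Mean squared error between forecast probabilities p and
    binary outcomes y (1 if the event occurred, 0 if not).
    Lower is better: 0 is perfect, and a constant p = 0.5
    "know-nothing" forecast earns 0.25 on average."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.mean((p - y) ** 2)

# Invented example: ten event forecasts and their realized outcomes.
p = [0.9, 0.8, 0.1, 0.7, 0.2, 0.95, 0.3, 0.6, 0.05, 0.85]
y = [1,   1,   0,   1,   0,   1,    1,   0,   0,    1]

print(brier_score(p, y))           # a sharp, well-calibrated forecaster
print(brier_score([0.5] * 10, y))  # uninformative benchmark: 0.25
```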
You can expect several posts on aspects of Superforecasting in the new year as I re-read it. For now I just wanted to bring it to your attention in case you missed it. Really nice.
Thursday, December 10, 2015
Long Memory Stochastic Volatility
Check out Mark Jensen's new paper. Long memory is a key feature of realized high-frequency asset-return volatility, yet it remains poorly understood. Jensen's approach may help change that. Of particular interest are: (1) its ability to handle seamlessly d in [0, 1), despite the fact that the unconditional variance is infinite for d in (0.5, 1), and (2), closely related, the important role played by wavelets.
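For readers less familiar with fractional integration, here is a minimal sketch, entirely my own illustration rather than anything from Jensen's paper, that simulates a pure fractionally integrated process (1 - L)^d x_t = eps_t via its truncated moving-average representation and shows the slow, hyperbolic autocorrelation decay that defines long memory.

```python
import numpy as np

def fi_weights(d, n):
    """MA coefficients of (1 - L)^(-d):
    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return psi

def simulate_fi(d, T, burn=1000, seed=0):
    """Simulate x_t with (1 - L)^d x_t = eps_t by truncating
    the infinite moving average at 'burn' lags."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T + burn)
    psi = fi_weights(d, burn)
    return np.convolve(eps, psi)[burn:burn + T]

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

x = simulate_fi(d=0.4, T=5000)  # stationary long memory: d in (0, 0.5)
for lag in (1, 10, 50, 100):
    print(lag, round(acf(x, lag), 3))  # decays hyperbolically, not geometrically
# For d in (0.5, 1) the unconditional variance no longer exists,
# which is exactly the region where stationarity-based methods break down.
```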
Details:

"Robust estimation of nonstationary, fractionally integrated, autoregressive, stochastic volatility," by Mark J. Jensen (Federal Reserve Bank of Atlanta), 2015-11-01.
Abstract: Empirical volatility studies have discovered nonstationary, long-memory dynamics in the volatility of the stock market and foreign exchange rates. This highly persistent, infinite-variance, but still mean-reverting, behavior is commonly found with nonparametric estimates of the fractional differencing parameter d for financial volatility. In this paper, a fully parametric Bayesian estimator, robust to nonstationarity, is designed for the fractionally integrated, autoregressive, stochastic volatility (SV-FIAR) model. Joint estimates of the autoregressive and fractional differencing parameters of volatility are found via a Bayesian Markov chain Monte Carlo (MCMC) sampler. Like Jensen (2004), this MCMC algorithm relies on the wavelet representation of the log-squared return series. Unlike the Fourier transform, where a time series must be a stationary process to have a spectral density function, wavelets can represent both stationary and nonstationary processes. As long as the wavelet has a sufficient number of vanishing moments, this paper's MCMC sampler will be robust to nonstationary volatility and capable of generating the posterior distribution of the autoregressive and long-memory parameters of the SV-FIAR model regardless of the value of d. Using simulated and empirical stock market return data, we find our Bayesian estimator producing reliable point estimates of the autoregressive and fractional differencing parameters with reasonable Bayesian confidence intervals for either stationary or nonstationary SV-FIAR models.
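To make the wavelet point concrete, here is a minimal sketch of the representation step the abstract describes: decompose a log-squared return series with a Daubechies wavelet ('db4', which has four vanishing moments). This is just the transform applied to simulated data, using the PyWavelets package; it is not Jensen's estimator.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)

# Simulated returns with slowly wandering (nonstationary-looking) volatility.
T = 2048
log_vol = np.cumsum(0.02 * rng.standard_normal(T))  # random-walk log-volatility
r = np.exp(log_vol / 2) * rng.standard_normal(T)

# The series actually modeled: log-squared returns,
# a noisy measure of log-volatility.
y = np.log(r ** 2 + 1e-12)  # small offset guards against log(0)

# Discrete wavelet transform; 'db4' has four vanishing moments,
# the kind of property the paper requires for robustness to nonstationarity.
coeffs = pywt.wavedec(y, 'db4', level=6)
cA6, details = coeffs[0], coeffs[1:]

# Energy by scale: long memory shows up as energy growing across coarser scales.
for j, cD in enumerate(reversed(details), start=1):
    print(f"scale {j}: mean squared detail coefficient = {np.mean(cD**2):.3f}")
```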
Sunday, December 6, 2015
New Review of Forecasting at Bank of England
Check it out here. It's thorough and informative.
It's interesting and unfortunate that even the Bank of England, the great "fan chart pioneer," produces density forecasts for only three of eleven variables forecasted (p. 15). In my view, the most important single forecasting improvement that the Bank of England -- and all central banks -- could implement is a complete switch from point to density forecast construction, evaluation and combination.
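To make "density forecast evaluation" concrete, here is a minimal sketch, my own illustration rather than any central bank's procedure, of two standard checks: the average log score, and the probability integral transform (PIT), whose histogram should be roughly flat when the forecast densities are correct. Everything below is simulated Gaussian data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated "truth": outcomes drawn from N(0, 1.5^2).
y = rng.normal(0.0, 1.5, size=500)

# A forecaster who issues Gaussian density forecasts N(mu_t, sigma)
# with a sigma that is too small, a common failure mode.
mu, sigma = np.zeros_like(y), 1.0

# Average log score (higher is better).
log_score = np.mean(stats.norm.logpdf(y, mu, sigma))
print("average log score:", round(log_score, 3))

# PIT values F_t(y_t): uniform on [0, 1] iff the densities are correct.
pit = stats.norm.cdf(y, mu, sigma)
hist, _ = np.histogram(pit, bins=10, range=(0, 1))
print("PIT histogram:", hist)  # a U-shape here reveals the understated variance
```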
Wednesday, December 2, 2015
NYU "Five-Star" Conference 2015
Program with clickable papers here. The amazing thing about Five-Star is that it actually works, and works well, year after year, despite requiring coordination among universities, which is usually disastrous.
Eurostat Forecasting Competition Deadline Approaching
I have some serious reservations about forecasting competitions, at least as typically implemented by groups like Kaggle. But still they're useful and exciting and absolutely fascinating. Here's a timely call for participation, from Eurostat. (Actually this one is nominally for nowcasting, not forecasting, but in reality they're the same thing.)
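As a small illustration of why the two are econometrically the same, here is a sketch of a "bridge-style" nowcast on invented data: the nowcast is just a conditional projection, like any forecast, with the information set ending now rather than a period earlier. All names and numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: "nowcast" a quarterly indicator y from a monthly
# series x observed during the quarter. Structurally this is the same
# projection problem as any h-step forecast, just with h = 0.
n_q = 120
x = rng.standard_normal((n_q, 3))  # three monthly readings per quarter
beta = np.array([0.5, 0.3, 0.2])
y = x @ beta + 0.3 * rng.standard_normal(n_q)

# Estimate the bridge regression on history...
X = np.column_stack([np.ones(n_q - 1), x[:-1]])
b, *_ = np.linalg.lstsq(X, y[:-1], rcond=None)

# ...and "nowcast" the current quarter from its already-observed months.
nowcast = np.concatenate(([1.0], x[-1])) @ b
print("nowcast:", round(nowcast, 3), "realized:", round(y[-1], 3))
```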
[I'm not sure why they're trying to shoehorn "big data" into it, except that it sounds cool and everyone wants to jump on the bandwagon. The winner is the winner, whether based on big data, small data, or whatever, and whether produced by an econometrician, a statistician, or a data scientist. I'm not even sure what "Big Data" means, or what a "data scientist" means, here or anywhere. (Standard stat quip: A data scientist is a statistician who lives in San Francisco.) End of rant.]
Big Data for Official Statistics Competition launched - please register by 10 January 2016
The Big Data for Official Statistics Competition (BDCOMP) has just been launched, and you are most welcome to participate. All details are provided in the call for participation.

Participation is open to everybody (with a few very specific exceptions detailed in the call).

In this first instalment of BDCOMP, the competition is exclusively about nowcasting economic indicators at the national or European level. There are 7 tracks in the competition, corresponding to 4 main indicators: Unemployment, HICP, Tourism, and Retail Trade, and some of their variants. Usage of Big Data is encouraged but not mandatory. For a detailed description of the competition tasks, please refer to the call.

The authors of the best-performing submissions for each track will be invited to present their work at the NTTS 2017 conference (the exact award criteria can be found in the call).

The deadline for registration is 10 January 2016. The duration of the competition is roughly a year (including about a month for evaluation). For a detailed schedule of submissions, please refer to the call.

The competition is organised by Eurostat and has a Scientific Committee composed of colleagues from various member and observer organisations of the European Statistical System (ESS).

On behalf of the BDCOMP Scientific Committee,
The BDCOMP organising team