I recently listened to a stimulating statistics talk, "Discerning a Steady State Sequentially," by Moshe Pollak (with Tom Hope), presently visiting Penn. Of course it's impossible to know with certainty whether we're in steady state based on a finite sample path, but the point is that we may nevertheless be able to make probabilistic statements, effectively "testing the hypothesis" that we're in steady state.

Moshe takes a sequential analytic approach. Here's his abstract: "In many contexts one observes a stochastic process with the goal of learning steady-state characteristics. This talk addresses the question of how to declare with confidence that steady-state has been reached. We focus on a sequence of independent observations that tends in a stochastically monotone fashion to a constant distribution."

The obvious limitation of Moshe's approach is the independence assumption: in many important applications (posterior simulation, global optimization, etc.), the object of interest is the steady state of a simulated Markov chain, which is anything but an independent sequence.

In the Markov chain case, why not do something like the following? Whenever time \(t\) is a multiple of \(m\), use a distribution-free non-parametric (randomization) test for equality of distributions to test whether the unknown distribution \(f_1\) of \(x_t, ..., x_{t-(m/2)}\) equals the unknown distribution \(f_2\) of \(x_{t-(m/2)-1}, ..., x_{t-m}\). If, for example, we pick \(m=20,000\), then whenever time \(t\) is a multiple of 20,000 we would test equality of the distributions of \(x_t, ..., x_{t-10000}\) and \(x_{t-10001}, ..., x_{t-20000}\). We declare arrival at the steady state the first time the null is not rejected. Or something like that.
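Here is a minimal sketch of what such a check might look like, using a two-sample Kolmogorov-Smirnov statistic with a permutation (randomization) null. All function names are mine, and the details (statistic, number of permutations) are just one of many reasonable choices:

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    data = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), data, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), data, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def permutation_pvalue(a, b, n_perm=999, seed=None):
    """Permutation p-value for equality of the distributions of a and b.

    Pools the two samples, repeatedly shuffles the pool, and compares the
    resulting KS statistics to the observed one.
    """
    rng = np.random.default_rng(seed)
    observed = ks_stat(a, b)
    pooled = np.concatenate([a, b])
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if ks_stat(pooled[:len(a)], pooled[len(a):]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

def steady_state_reached(x, t, m, alpha=0.05, seed=None):
    """At time t (a multiple of m), test equality of the distributions of the
    two most recent half-windows of the simulated path x, as proposed above."""
    recent = x[t - m // 2 : t]      # x_{t-(m/2)}, ..., x_{t-1}
    earlier = x[t - m : t - m // 2] # x_{t-m}, ..., x_{t-(m/2)-1}
    return permutation_pvalue(recent, earlier, seed=seed) > alpha
```

In practice one would wrap `steady_state_reached` in the simulation loop, calling it whenever \(t\) hits a multiple of \(m\) and stopping the burn-in the first time it returns `True`.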

Of course the Markov chain is serially correlated, but who cares, as we're only trying to assess equality of unconditional distributions. That is, randomizations of \(x_t, ..., x_{t-(m/2)}\) and of \(x_{t-(m/2)-1}, ..., x_{t-m}\) destroy the serial correlation, but so what?

My suggestion is either misguided for some reason that I'm missing, or someone must have done it. (It's just too obvious.)