Stationary variance of AR(2) process
It is hard (or impossible) to directly obtain the analytical expression for the stationary distribution of the poll-delay voter model. But we can look at various possible approximations, with the Beta distribution being the prime suspect. To fit the Beta distribution (or any other two-parameter distribution), we need to know two stationary moments of the model. Deriving the stationary mean is a trivial problem, while deriving the stationary variance is more involved.
In this post, let us use the Yule-Walker equations to obtain an expression for the stationary variance of an AR(2) process.
Preliminaries
Given a second-order auto-regressive process:
\begin{equation} x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \varepsilon_t . \end{equation}
In the above \( \varepsilon_t \) is white noise with variance \( \sigma^2 \), and \( \phi_i \) are known parameter values.
While we know that the poll-delay voter model is stationary, this is not generally true for an arbitrary AR(2) process. So, for arbitrary parameter values, we need to verify that the AR(2) process is stationary. To do this, we need to find the roots of the characteristic equation:
\begin{equation} 1 - \phi_1 z - \phi_2 z^2 = 0 . \end{equation}
The process is stationary if the roots of this equation lie outside the unit circle. In other words, the roots may be complex numbers, but their modulus must be greater than unity.
My exploration of the roots leads to the conclusion that the AR(2) process is stationary if and only if \( \phi_2 \in \left(-1, 1-\left|\phi_1\right|\right) \) or, equivalently, \( \left|\phi_2\right| < 1 \) and \( \phi_1 \in \left(\phi_2 - 1, 1-\phi_2\right) \) (note that the inequalities are strict: on the boundary itself the process is no longer stationary).
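To make the check concrete, here is a minimal sketch in Python (using NumPy; the function names are my own) that tests stationarity both via the characteristic roots and via the triangular region described above:

```python
import numpy as np

def is_stationary_roots(phi1, phi2):
    """Check stationarity via the roots of 1 - phi1*z - phi2*z^2 = 0."""
    # np.roots takes polynomial coefficients from the highest power down.
    roots = np.roots([-phi2, -phi1, 1.0])
    return bool(np.all(np.abs(roots) > 1.0))

def is_stationary_triangle(phi1, phi2):
    """Equivalent check using the triangular stationarity region."""
    return phi2 > -1.0 and phi2 < 1.0 - abs(phi1)

# The two criteria agree, e.g.:
print(is_stationary_roots(0.5, -0.3))    # stationary parameter pair
print(is_stationary_triangle(0.5, 0.6))  # non-stationary parameter pair
```

For real \( \phi_i \), complex roots come in conjugate pairs, so checking the modulus of every root returned by `np.roots` is exactly the condition stated above.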
Obtaining stationary variance
Now, if the process is stationary, we can formulate the Yule-Walker equations (with \( \gamma_i = \mathrm{Cov}(x_t, x_{t-i}) \)):
\begin{align} \gamma_0 & = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \sigma^2 , \\ \gamma_1 & = \phi_1 \gamma_0 + \phi_2 \gamma_1 , \\ \gamma_2 & = \phi_1 \gamma_1 + \phi_2 \gamma_0 . \end{align}
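For given parameter values, these three equations form a small linear system in \( (\gamma_0, \gamma_1, \gamma_2) \), which can be solved numerically before deriving anything by hand. A sketch (NumPy; the function name is my own):

```python
import numpy as np

def yule_walker_acov(phi1, phi2, sigma2):
    """Solve the AR(2) Yule-Walker system for (gamma_0, gamma_1, gamma_2)."""
    # Rearranging the three equations with the unknowns on the left:
    #   gamma_0 - phi1*gamma_1 - phi2*gamma_2  = sigma2
    #  -phi1*gamma_0 + (1 - phi2)*gamma_1      = 0
    #  -phi2*gamma_0 - phi1*gamma_1 + gamma_2  = 0
    A = np.array([
        [1.0,   -phi1,        -phi2],
        [-phi1,  1.0 - phi2,   0.0],
        [-phi2, -phi1,         1.0],
    ])
    b = np.array([sigma2, 0.0, 0.0])
    return np.linalg.solve(A, b)
```

The first component of the solution is the stationary variance we are after.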
We are interested in \( \gamma_0 \), as it corresponds to the stationary variance. From the second equation, \( \gamma_1 = \phi_1 \gamma_0 / (1 - \phi_2) \); substituting this together with the third equation into the first and solving for \( \gamma_0 \) yields:
\begin{equation} \gamma_0 = \mathrm{Var}\left(x_t\right) = \frac{(1-\phi_2) \sigma^2}{(1+\phi_2) \left(1-\phi_1^2+\phi_2^2-2 \phi_2\right)} . \end{equation}
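The closed-form expression can be sanity-checked against a long simulated sample path. A sketch with illustrative parameter values (the function name is my own):

```python
import numpy as np

def ar2_variance(phi1, phi2, sigma2):
    """Closed-form stationary variance of the AR(2) process."""
    return (1.0 - phi2) * sigma2 / (
        (1.0 + phi2) * (1.0 - phi1**2 + phi2**2 - 2.0 * phi2)
    )

rng = np.random.default_rng(42)
phi1, phi2, sigma = 0.5, -0.3, 1.0
n, burn = 200_000, 1_000  # sample length and burn-in to forget x_0 = x_1 = 0

x = np.zeros(n + burn)
eps = rng.normal(0.0, sigma, size=n + burn)
for t in range(2, n + burn):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + eps[t]

print(ar2_variance(phi1, phi2, sigma**2))  # analytical value, approx. 1.290
print(x[burn:].var())                      # sample estimate, close to it
```

The burn-in discards the transient caused by starting the recursion from zeros, so the remaining sample is approximately a draw from the stationary distribution.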
While the expression is nonlinear, the dependence of the stationary variance on the process parameters is qualitatively simple: the variance grows without bound as the parameters approach the stationarity boundaries, and it is smallest when the parameter values are close to zero.
You can see exactly that in the interactive plots below. In the first plot, you can adjust \( \phi_2 \) value and observe the dependence of variance on \( \phi_1 \). Note that the plot shows variance values between \( 0 \) and \( 15 \), to keep the plots readable (to "hide" the infinities at the stationarity boundaries). If there is no stationary variance (process is non-stationary), the variance is artificially assigned a negative value.
The second plot is identical to the first one, but now you can adjust \( \phi_1 \) value and examine the dependence between \( \phi_2 \) and variance.
Interactive app
If the variance plots are not intuitive to you, you can visually explore the effect the \( \phi_i \) parameters have using the app below. This app generates sample time series of the process.
Note that, for speed purposes, the noise values remain the same (i.e., they are not regenerated when you change parameter values). If you want to generate new samples, you'll have to reload this webpage.