PACF and AR(p) models

In the last few posts we have seen that a random walk can be written in recursive form, which suggests that the random walk is an AR(1) process. We have also become familiar with the partial auto-correlation function. Here, in this post, we show that the PACF can provide an intuition about the order of the AR process which should be used when modeling the data.
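
To make this intuition concrete, here is a minimal sketch (assuming numpy and statsmodels are available; the coefficients 0.5 and 0.3 are chosen purely for illustration) which simulates an AR(2) process and estimates its PACF. The PACF should stand out only at the first two lags and drop close to zero afterwards, hinting at the order of the process.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

# simulate an AR(2) process: x[t] = 0.5*x[t-1] + 0.3*x[t-2] + noise
rng = np.random.default_rng(42)
n = 10000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + rng.normal()

# estimate the partial auto-correlation function up to lag 10
print(np.round(pacf(x, nlags=10), 3))
# only the first two lags should stand out; the rest should be close to zero
```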

Partial auto-correlation function (explained by ritvikmath)

In the last post we have seen that the auto-correlation function breaks down when we try to analyze a random walk time series. We have used the differencing technique, which allowed us to circumvent the non-stationarity of the random walk series.
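
As a quick illustration (a sketch of the general idea, not the code from the earlier post), one can compare the empirical ACF of a random walk with the ACF of its differences: the former decays extremely slowly, while the latter is essentially zero at all non-zero lags.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=10000))  # random walk: cumulative sum of noise

# ACF of the raw walk decays very slowly (a symptom of non-stationarity)
print(np.round(acf(walk, nlags=5), 3))

# ACF of the differenced series drops to zero immediately
print(np.round(acf(np.diff(walk), nlags=5), 3))
```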

In the upcoming post in our ARFIMA series we will use another technique known as the partial auto-correlation function (abbr. PACF). This technique is discussed by ritvikmath in the video below. Watch it in order to understand the new tool.

Random walk as AR process

In the previous post I have mentioned that in our review we have also presented a novel result, for which we have analyzed the ARFIMA process. Understanding the ARFIMA process requires some specialized knowledge, which we will cover in this and the next few posts.

In this post we will take a well-known physical model, the random walk, and try to understand it in the context of an economic model, the AR(p) process.
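
To make the connection explicit (the notation below is my own summary in standard form, with epsilon denoting white noise and the phi's the AR coefficients, rather than a quote from the earlier posts), the AR(p) process and the random walk recursion can be written as:

```latex
% AR(p): the current value is a noisy linear combination of the p previous values
X_t = \varepsilon_t + \sum_{i=1}^{p} \varphi_i X_{t-i}

% random walk in recursive form: an AR(1) process with \varphi_1 = 1
X_t = X_{t-1} + \varepsilon_t
```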

Numberphile: Math of being a pig

For many, board games have become a tool to retain their sanity during the COVID-19 lockdowns. These games vary in their complexity and the mechanics they use. In this Numberphile video mathematician Ben Sparks uses statistics to develop a strategy for playing the game known as "Pass the Pigs". In general, this method should work for many other simple "push your luck" style board games.

Big review of works by our group

This summer we (all active members of the group) have contributed to a pretty big paper [1]. In the said paper we have reviewed all of our varied approaches to modeling long-range memory (which we understand as 1/f noise). The core difference between our approach and the approaches taken by other groups is that we use Markovian models, without embedding actual memory into them.

Figure: the title page of the paper.

The paper also includes a new result: an application of burst statistics (note that the paper uses another term, burst and inter-burst duration analysis) to understanding the nature of the long-range memory of fractional Lévy stable motion. Fractional Lévy stable motion is an interesting generalization of Brownian motion in two regards. First of all, the driving noise is not normally distributed, but is instead distributed according to the stable distribution, which makes large jumps quite likely. Also, the time series is integrated using a fractional integral and thus possesses true long-range memory (one embedded into the model). We have shown that burst statistics is a good tool for this particular task (at least it has some advantages over the alternatives).
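
To give a rough idea of what burst and inter-burst duration analysis involves (this is a simplified sketch of the general idea, not the exact procedure from the paper), one records how long the series stays above a chosen threshold (burst durations) and how long it stays below it (inter-burst durations), and then studies the distributions of these durations.

```python
import numpy as np

def burst_durations(series, threshold):
    """Lengths of consecutive runs above (bursts) and below (inter-bursts) the threshold."""
    above = series > threshold
    # indices where the series switches between "above" and "below"
    switches = np.flatnonzero(np.diff(above.astype(int))) + 1
    runs = np.split(above, switches)
    # for simplicity the (truncated) first and last runs are kept as well
    bursts = np.array([len(r) for r in runs if r[0]])
    inter_bursts = np.array([len(r) for r in runs if not r[0]])
    return bursts, inter_bursts

# example: burst statistics of a random walk around its median
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=100000))
bursts, inter_bursts = burst_durations(walk, np.median(walk))
print(bursts.mean(), inter_bursts.mean())
```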

In the upcoming posts we will talk about the ARFIMA process, which was instrumental in our analysis, as it helped us to generate fractional Lévy stable motion.
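
For readers who want a head start, below is a minimal sketch (my own illustration, not the implementation referenced in the paper) of how an ARFIMA(0, d, 0) series with stable innovations can be generated: the fractional integration is performed by convolving the noise with the usual fractional-differencing weights, and the innovations are drawn from scipy's levy_stable distribution.

```python
import numpy as np
from scipy.stats import levy_stable

def arfima_0d0(n, d, alpha=1.8, beta=0.0, seed=None):
    """Generate an ARFIMA(0, d, 0) series driven by alpha-stable noise."""
    rng = np.random.default_rng(seed)
    noise = levy_stable.rvs(alpha, beta, size=2 * n, random_state=rng)
    # fractional integration weights: psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
    psi = np.ones(2 * n)
    for k in range(1, 2 * n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    # convolve the noise with the weights and discard the first half as burn-in
    series = np.convolve(noise, psi)[:2 * n]
    return series[n:]

# example: weakly persistent (d = 0.2) series with heavy-tailed jumps
x = arfima_0d0(10000, d=0.2, alpha=1.8, seed=123)
print(x[:5])
```

Taking the cumulative sum of such a series should then give a discrete approximation of fractional Lévy stable motion, though the details are better left for the dedicated posts.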

Disclaimer: While I was one of the authors of the paper, my impact on Section 4 was rather limited. I have written implementations of ARFIMA in Python and Mathematica. I have also validated that the figures can be reproduced following the instructions given in Section 4. But this is fine, as here we will focus on the ARFIMA process itself.

References