This post is inspired by a recent attempt by the HIPS group to read the book “General irreducible Markov chains and non-negative operators” by Nummelin.

Let $P$ be the transition matrix of a discrete-time Markov chain on a finite state space, where $p_{ij}$ is the probability of transitioning from state $i$ to state $j$. A state is called *absorbing* if the chain can never leave it once entered, and a Markov chain is called *absorbing* if it has at least one absorbing state and an absorbing state can be reached from every state (not necessarily in one step). The non-absorbing states of an absorbing chain are called *transient states*. An *ergodic* Markov chain is one in which every state is reachable from every other state in one or more moves. A chain is called a *regular* Markov chain if all entries of $P^n$ are greater than zero for some $n$. We will focus on absorbing chains first, and then look at ergodic and regular chains.

By permuting the states of an absorbing chain so that the transient states come first, we can write the transition matrix of the absorbing chain as

\begin{equation}
P = \left[ \begin{array}{cc}
Q & R \\
0 & I
\end{array} \right]
\end{equation}

The matrix $Q$ describes the transition probabilities between transient states, $R$ the transition probabilities from transient to absorbing states ($R$ should not be the matrix of all zeros), and $I$ is the identity matrix since the chain stays at absorbing states.
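As an illustration, here is a minimal NumPy sketch of this decomposition for a hypothetical absorbing chain: a random walk on five states where the two endpoints are absorbing and each interior state moves left or right with probability $1/2$ (the example and all numbers are just for illustration).

```python
import numpy as np

# Hypothetical example: a random walk on states {0, 1, 2, 3, 4} where
# 0 and 4 are absorbing and each interior state moves left or right
# with probability 1/2.  Listing the transient states (1, 2, 3) first
# puts P in the canonical block form above.
Q = np.array([[0.0, 0.5, 0.0],   # transient -> transient
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],        # transient -> absorbing
              [0.0, 0.0],
              [0.0, 0.5]])
P = np.block([[Q, R],
              [np.zeros((2, 3)), np.eye(2)]])

print(P.sum(axis=1))  # [1. 1. 1. 1. 1.] -- every row is a distribution
```

Note that $R$ is not all zeros here: states 1 and 3 can step directly into an absorbing state.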

Notice that the iterates $Q^n \rightarrow 0$ as $n \rightarrow \infty$. To see this, note that the probability of not reaching an absorbing state in one step from transient state $j$ is the corresponding row sum of $Q$, call it $q_j$. If every $q_j < 1$, then the probability of not being absorbed within $n$ steps starting from $j$ is the $j$-th row sum of $Q^n$, which is at most $q^n$ where $q = \max_j q_j < 1$, so every entry of $Q^n$ goes to $0$. (If some rows of $Q$ sum to $1$, apply the same argument to a power $Q^m$ whose row sums are all below $1$; such an $m$ exists because every transient state can reach an absorbing state.) We define the *fundamental matrix* for an absorbing Markov chain as

\begin{equation}
N = I + Q + Q^2 + \ldots
\end{equation}

Each entry $n_{ij}$ of $N$ can be interpreted as the expected number of times the chain visits state $j$ before absorption, given that it started in state $i$. Note that $N$ is the finite analog of the potential kernel $G$ from Nummelin, and summing the series shows that $N = (I-Q)^{-1}$.
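Both facts are easy to check numerically. A quick sketch, reusing the transient block of the hypothetical five-state random walk (absorbed at both endpoints):

```python
import numpy as np

# Transient part of a hypothetical random walk on {0,...,4},
# absorbed at states 0 and 4.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

# The iterates of Q decay to zero...
print(np.linalg.matrix_power(Q, 50).max())  # on the order of 1e-8

# ...so the series I + Q + Q^2 + ... converges, and it matches (I - Q)^{-1}.
N = np.linalg.inv(np.eye(3) - Q)
partial_sum = sum(np.linalg.matrix_power(Q, k) for k in range(200))
print(np.allclose(N, partial_sum))  # True

print(N)  # n_ij = expected visits to transient state j starting from i
```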

The fundamental matrix, $N$, can be used to compute many interesting quantities of an absorbing Markov chain (which probably explains the name fundamental matrix). For instance, we can compute the expected time until the chain is absorbed as $t = N1$, where $t = (t_1,\ldots,t_K)$ is a vector where $t_i$ is the expected number of steps until the chain reaches an absorbing state starting from state $i$, and $1$ is a vector of ones. We can also compute the probability that a particular absorbing state is reached given a starting state. Let $B$ be a matrix where $b_{ij}$ is the probability of reaching absorbing state $j$ starting at state $i$. It turns out that $B=NR$, where $R$ is in the decomposition of $P$ above.
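Continuing the hypothetical random-walk example, each of these quantities is one line of NumPy:

```python
import numpy as np

# Transient (Q) and transient-to-absorbing (R) blocks of a hypothetical
# random walk on {0,...,4} absorbed at states 0 and 4.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])
N = np.linalg.inv(np.eye(3) - Q)

t = N @ np.ones(3)  # expected number of steps until absorption
B = N @ R           # probability of ending in each absorbing state

print(t)  # [3. 4. 3.]
print(B)  # rows: [0.75 0.25], [0.5 0.5], [0.25 0.75]
```

The numbers are intuitive: from the middle state the walk takes 4 steps on average and is absorbed at either end with probability $1/2$.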

Now we turn our attention to ergodic and regular Markov chains. Let $P$ be the transition matrix of a regular Markov chain. Then the iterates $P^n$ converge to a matrix $W$ whose rows are all the same; call the shared row $w$, so that $wP = w$. The vector $w$ is called the *stationary distribution* of the chain. We can also define a fundamental matrix for ergodic and regular Markov chains.
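This convergence is easy to see numerically. A sketch using a hypothetical 3-state regular chain (the matrix below is just an illustration):

```python
import numpy as np

# A hypothetical 3-state regular chain.
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

W = np.linalg.matrix_power(P, 50)  # P^n has essentially converged by n = 50
w = W[0]                           # the shared row

print(np.allclose(W, w))      # True: every row of W is the same
print(np.allclose(w @ P, w))  # True: w is stationary
print(w)                      # [0.4 0.2 0.4]
```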

As a first attempt at the form of the fundamental matrix for an ergodic or regular chain we could try $N = (I - P)^{-1}$. However, $(I-P)$ does not have an inverse: each row of $P$ sums to $1$, so $(I-P)\mathbf{1} = 0$ and the columns of $(I-P)$ are linearly dependent. The reason we could define $N$ for absorbing chains was that the iterates $Q^n$ were decaying to $0$. Since $P^n \rightarrow W$, we may instead try defining

\begin{equation}
N = I + (P-W) + (P-W)^2 + \ldots
\end{equation}

The terms of this series decay to $0$ (in fact $(P-W)^n = P^n - W$ for $n \geq 1$, and $P^n \rightarrow W$), so the series converges, and summing it gives $N = (I-P+W)^{-1}$ as the fundamental matrix of an ergodic or regular chain.
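As a sanity check, the series and the closed form agree numerically. A sketch, using the same hypothetical 3-state regular chain as before:

```python
import numpy as np

# A hypothetical 3-state regular chain and its stationary distribution.
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
w = np.array([0.4, 0.2, 0.4])
W = np.tile(w, (3, 1))  # every row of W is w

N = np.linalg.inv(np.eye(3) - P + W)

# Partial sums of I + (P - W) + (P - W)^2 + ... converge to N.
partial_sum = np.eye(3) + sum(np.linalg.matrix_power(P - W, k)
                              for k in range(1, 100))
print(np.allclose(N, partial_sum))  # True
```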

As in the absorbing case $N$ can be used to compute some interesting properties of the chain. For instance, the mean first passage time is the expected number of steps required to reach state $j$ from state $i$, which we denote by $m_{ij}$. It can be shown that $m_{ij} = \frac{n_{jj} - n_{ij}}{w_j}$. The fundamental matrix also appears in the asymptotic variance in the Central Limit Theorem for Markov chains.
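With the hypothetical 3-state chain used above, the whole matrix of mean first passage times comes out in one vectorized line (under this formula the diagonal entries are $0$; the mean *return* time to state $j$ is $1/w_j$):

```python
import numpy as np

# Hypothetical 3-state regular chain, its stationary distribution,
# and its fundamental matrix.
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
w = np.array([0.4, 0.2, 0.4])
W = np.tile(w, (3, 1))
N = np.linalg.inv(np.eye(3) - P + W)

# m_ij = (n_jj - n_ij) / w_j, computed for all pairs at once.
M = (np.diag(N)[None, :] - N) / w[None, :]

# Sanity check: off the diagonal, M satisfies the first-passage
# recurrence m_ij = 1 + sum_k p_ik * m_kj.
E = 1.0 + P @ M
off = ~np.eye(3, dtype=bool)
print(np.allclose(M[off], E[off]))  # True
```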

I highly recommend reading Chapter 11 from “Introduction to Probability” by Grinstead and Snell (available for free here) for a nice introduction to finite Markov chains. It is written at the undergraduate level, but it is very clear and contains some ideas and results that you probably didn’t see in your first treatment of Markov chains. Another nice reference for finite Markov chains that covers potential theory is “Finite Markov Chains” by Kemeny and Snell.

As a quick aside, notice that the fundamental matrices defined here are the inverses of matrices that look very similar to graph Laplacians. The short story is that potential theory for Markov chains (and processes more generally) is related to classical potential theory in physics. In fact there is a deep connection between Markov chains and electric networks. See the book “Random Walks and Electric Networks” by Doyle and Snell for a treatment.
