On this page, we discuss Markov chains, a way of modelling so-called "discrete stochastic processes", i.e. systems which randomly change between a finite number of different states. Our main focus will be to determine, when possible, long-term predictions for such a system by finding a "steady-state vector".
Note: The basic and advanced learning objectives listed below are meant to give you an idea of the material you are expected to learn in this section. They are mainly intended for a course which uses an Active Learning approach, where students are required to "read ahead" before each class - but they can equally be used in a more traditional course setting.
Unless your teacher gives you specific instructions, it is up to you to decide how much of the listed resources you need to read or watch - you probably do not need to go through all of it. You might also want to look at the General Study Tips & Tricks page for some recommendations on how to effectively study with a math textbook and videos.
Basic learning objectives
These are the tasks you should be able to perform with reasonable fluency when you arrive at your next class meeting. Important new vocabulary words are indicated in italics.
- Given a description of a Markov chain, draw its transition graph and find its transition matrix.
- Given the transition matrix of a Markov chain and a state vector, find the future state vector.
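To illustrate the second objective, here is a small sketch using NumPy. The two-state "weather" chain and its probabilities are made up for illustration; following this page's convention, the transition matrix is written with its columns (not rows) summing to \(1\).

```python
import numpy as np

# Hypothetical two-state chain: state 0 = "sunny", state 1 = "rainy".
# Column j of P holds the probabilities of moving FROM state j,
# so each column sums to 1 (the column convention used on this page).
P = np.array([
    [0.9, 0.5],   # P(sunny tomorrow | sunny today), P(sunny | rainy)
    [0.1, 0.5],   # P(rainy | sunny),                P(rainy | rainy)
])

x0 = np.array([1.0, 0.0])   # initial state vector: certainly sunny today

# The future state vector is found by matrix-vector multiplication:
# x_{k+1} = P x_k
x1 = P @ x0
x2 = P @ x1
print(x1)   # [0.9 0.1]
print(x2)   # [0.86 0.14]
```

Iterating the multiplication gives the state vector after any number of steps; the steady-state vector discussed below is what these iterates approach in the long run (when it exists).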
Advanced learning objectives
In addition to mastering the basic objectives, here are the tasks you should be able to perform after class, with practice:
- Given the transition matrix of a Markov chain, find the steady-state vector (if it exists).
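One way to sketch this objective numerically: a steady-state vector \(\mathbf{q}\) satisfies \(P\mathbf{q} = \mathbf{q}\), i.e. \((P - I)\mathbf{q} = \mathbf{0}\), together with the condition that its entries sum to \(1\). The \(2\times 2\) matrix below is a made-up example (columns summing to \(1\), per this page's convention), and solving by least squares is just one convenient way to handle the extra sum condition.

```python
import numpy as np

# Hypothetical 2x2 transition matrix (each column sums to 1).
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])

n = P.shape[0]
# Stack (P - I) q = 0 together with the row enforcing sum(q) = 1,
# then solve the (overdetermined) system by least squares.
A = np.vstack([P - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
q, *_ = np.linalg.lstsq(A, b, rcond=None)
print(q)   # approximately [5/6, 1/6]
```

As a check, multiplying `P @ q` should return `q` itself, which is exactly what "steady state" means.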
To prepare for class
Read and experiment with this interactive introduction to Markov chains by Victor Powell and Lewis Lehe (note that they write the transition matrix in rows instead of columns):
Watch this video which shows example calculations of finding the steady-state vector for a \(2\times 2\) and a \(3\times 3\) transition matrix:
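As a small pencil-and-paper illustration of the kind of \(2\times 2\) calculation shown in the video (with a made-up transition matrix, columns summing to \(1\)): the condition \(P\mathbf{q} = \mathbf{q}\) is equivalent to the homogeneous system \((P - I)\mathbf{q} = \mathbf{0}\),

\[
P = \begin{pmatrix} 0.9 & 0.5 \\ 0.1 & 0.5 \end{pmatrix}, \qquad
(P - I)\mathbf{q} = \begin{pmatrix} -0.1 & 0.5 \\ 0.1 & -0.5 \end{pmatrix}
\begin{pmatrix} q_1 \\ q_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]

Both equations give \(q_1 = 5q_2\), and requiring \(q_1 + q_2 = 1\) yields the steady-state vector \(\mathbf{q} = \left(\tfrac{5}{6}, \tfrac{1}{6}\right)\).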
After class (optional)
A steady-state vector for a Markov chain is a special case of a so-called eigenvector (with associated eigenvalue \(1\)) of a matrix:
Read and experiment with this interactive introduction to Eigenvalues and Eigenvectors by Victor Powell and Lewis Lehe (and note in particular the section about Markov chains):
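To connect the two ideas concretely: since a steady-state vector \(\mathbf{q}\) satisfies \(P\mathbf{q} = 1\cdot\mathbf{q}\), it can be extracted from a general eigenvalue computation. This sketch uses NumPy's `linalg.eig` on a made-up \(2\times 2\) transition matrix (column convention).

```python
import numpy as np

# Hypothetical 2x2 transition matrix (each column sums to 1).
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# A steady-state vector is an eigenvector of P with eigenvalue 1.
eigenvalues, eigenvectors = np.linalg.eig(P)
i = np.argmin(np.abs(eigenvalues - 1.0))   # locate the eigenvalue closest to 1
q = eigenvectors[:, i].real
q = q / q.sum()   # rescale so the entries sum to 1
print(q)          # approximately [5/6, 1/6]
```

Note that `eig` returns eigenvectors of unit length with an arbitrary sign, which is why the rescaling step is needed to get a genuine probability vector.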