
Markov Chains

A Markov chain is a mathematical model for systems that transition between states, where the probability of each transition depends only on the current state (the Markov property, also called memorylessness). Markov chains are used extensively in quantitative finance for modeling credit ratings, market regimes, and option pricing, and they appear frequently in quant interview probability questions.

What Is a Markov Chain?

A Markov chain is a mathematical model for a system that transitions between a set of states according to probabilistic rules, with one critical restriction: the probability of moving to any particular state depends only on the current state, not on the sequence of states that preceded it. This is called the Markov property (or memorylessness).

Formally, a discrete-time Markov chain {X_0, X_1, X_2, ...} satisfies:

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)

The conditional dependence on the full history collapses to dependence on only the current state. This simplification makes Markov chains mathematically tractable while still being powerful enough to model many real-world systems.

Examples of systems modeled as Markov chains include random walks, board games (Monopoly, Snakes and Ladders), weather patterns, credit rating transitions, and market regime switching models used in quantitative finance.

Transition Matrices and Key Properties

A Markov chain with n states is fully described by its transition probability matrix P, where entry P_ij is the probability of moving from state i to state j.

Each row of P sums to 1 (you must go somewhere). The matrix can be multiplied by itself to compute multi-step probabilities: the (i, j) entry of P^k gives the probability of being in state j after k steps, starting from state i.
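As a concrete sketch, here is what this looks like in NumPy. The three-state chain and its probabilities are invented for illustration; any row-stochastic matrix works the same way:

```python
import numpy as np

# Hypothetical 3-state chain (e.g. market regimes: calm, choppy, crisis).
P = np.array([
    [0.90, 0.08, 0.02],
    [0.20, 0.70, 0.10],
    [0.10, 0.30, 0.60],
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row sums to 1

# k-step transition probabilities: the (i, j) entry of P^k.
P10 = np.linalg.matrix_power(P, 10)
print(P10[0])  # distribution over states after 10 steps, starting in state 0
```

Note that every power of P is itself a valid transition matrix, so its rows also sum to 1.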

Key properties:

  • Stationary distribution (π): A probability distribution that is unchanged by the transition matrix: πP = π. If the chain is ergodic (irreducible and aperiodic), it converges to this unique stationary distribution regardless of the starting state.
  • Absorbing state: A state that, once entered, cannot be left (P_ii = 1). Absorbing Markov chains model processes with permanent endpoints, like gambler's ruin (you either go broke or reach a target).
  • Expected hitting time: The expected number of steps to reach a particular state from a given starting state. Computed by setting up a system of linear equations from the transition probabilities.
  • Recurrence and transience: A state is recurrent if the chain returns to it with probability 1; transient if there's a positive probability of never returning.
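The stationary distribution πP = π can be computed as a left eigenvector of P with eigenvalue 1. A minimal sketch, using a made-up 2-state matrix:

```python
import numpy as np

# Hypothetical 2-state chain (e.g. bull / bear market regimes).
P = np.array([
    [0.9, 0.1],
    [0.3, 0.7],
])

# pi P = pi  <=>  pi is a left eigenvector of P with eigenvalue 1,
# i.e. a right eigenvector of P transpose.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                  # normalize to a probability vector

print(pi)
assert np.allclose(pi @ P, pi)      # unchanged by the transition matrix
```

For this particular matrix the stationary distribution works out to (0.75, 0.25): in the long run the chain spends three quarters of its time in state 0.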

Worked Example: Gambler's Ruin

The gambler's ruin problem is the most famous Markov chain application and a classic interview question.

Problem: A gambler starts with $3 and repeatedly bets $1 on a fair coin flip (50/50). The game ends when the gambler reaches $5 (wins) or $0 (goes bust). What is the probability of winning?

Setup: States are {0, 1, 2, 3, 4, 5}. States 0 and 5 are absorbing. From any other state i, P(i → i+1) = 0.5 and P(i → i-1) = 0.5.

Solution: Let p_i = probability of reaching $5 starting from state i.

  • Boundary conditions: p_0 = 0 (already bust), p_5 = 1 (already won).
  • Recurrence: p_i = 0.5 × p_{i+1} + 0.5 × p_{i-1} for i = 1, 2, 3, 4.

For a fair coin, the solution is linear: p_i = i/5.

So p_3 = 3/5 = 60%.

Expected duration: The expected number of bets until the game ends, starting from $3, can also be computed via a system of linear equations. For a fair coin the answer is i × (N - i) = 3 × (5 - 3) = 6 bets.

For a biased coin (probability p of winning each bet, p ≠ 1/2), the solution changes to: p_i = (1 - (q/p)^i) / (1 - (q/p)^N), where q = 1 - p.
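Both the win probabilities and the expected duration come from solving the linear systems above. A minimal NumPy sketch using the worked example's parameters (target N = 5, fair coin):

```python
import numpy as np

N = 5      # target wealth (absorbing state); 0 is the other absorbing state
p = 0.5    # probability of winning each $1 bet (fair coin)
q = 1 - p

# Win probabilities: p_i = p*p_{i+1} + q*p_{i-1}, with p_0 = 0, p_N = 1.
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0            # boundary: p_0 = 0
A[N, N] = 1.0
b[N] = 1.0               # boundary: p_N = 1
for i in range(1, N):
    A[i, i - 1] = -q
    A[i, i] = 1.0
    A[i, i + 1] = -p
win = np.linalg.solve(A, b)

# Expected duration: E_i = 1 + p*E_{i+1} + q*E_{i-1}, with E_0 = E_N = 0.
# Same coefficient matrix; only the right-hand side changes.
c = np.zeros(N + 1)
c[1:N] = 1.0             # the "+1" bet taken from each interior state
duration = np.linalg.solve(A, c)

print(win[3], duration[3])   # 0.6 and 6.0, matching the closed forms
```

Changing p to a biased value reproduces the (q/p) formula above with no change to the code, which is why setting up the linear system is the interview-safe approach.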

Markov Chains in Quant Finance

Markov chains have several important applications in quantitative finance:

  • Credit rating transitions: Rating agencies model the probability of a company moving between credit ratings (AAA, AA, A, BBB, ..., D) using a Markov chain transition matrix estimated from historical data. This is fundamental to credit risk modeling and bond pricing.
  • Regime switching models: Financial markets alternate between "regimes" (e.g., low-volatility trending vs. high-volatility mean-reverting). Hidden Markov Models (HMMs) treat the regime as a hidden Markov chain and infer the current regime from observed data. This is used for dynamic strategy allocation at quant hedge funds.
  • Random walks: A simple random walk on the integers is the simplest Markov chain and the foundation for modeling stock prices. Brownian motion is the continuous-time, continuous-space limit of a random walk.
  • Markov chain Monte Carlo (MCMC): Algorithms like Metropolis-Hastings and Gibbs sampling use Markov chains to sample from complex probability distributions, enabling Bayesian inference in high-dimensional models.
  • Interview questions: Random walk, gambler's ruin, and expected stopping time problems are staples of quant interviews. Understanding Markov chains gives you a systematic framework for solving them.
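To make the MCMC idea concrete, here is a toy random-walk Metropolis sampler whose stationary distribution is a standard normal. The target and step size are chosen purely for illustration; real applications target far more complex posteriors:

```python
import math
import random

def metropolis_standard_normal(n_steps, step=1.0, seed=42):
    """Random-walk Metropolis chain with stationary distribution N(0, 1)."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        # log acceptance ratio for target density proportional to exp(-x^2 / 2)
        log_alpha = (x * x - proposal * proposal) / 2.0
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal          # accept the move
        samples.append(x)         # a rejected move repeats the current state
    return samples

xs = metropolis_standard_normal(100_000)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

The chain is Markov because each proposal depends only on the current position; its long-run sample mean and variance converge to those of the target distribution (0 and 1 here).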

Key Formulas

Transition probability: P_ij = P(X_{n+1} = j | X_n = i), the probability of moving from state i to state j in one step. The Markov property means this depends only on the current state i.

Stationary distribution: the row vector π satisfying πP = π, i.e. unchanged by the transition matrix. For an ergodic chain, this is the long-run proportion of time spent in each state.

Key Takeaways

  • The Markov property states that the future depends only on the present state; the history of how you got there is irrelevant.
  • A Markov chain is fully described by its states and transition probability matrix P, where P_ij = P(next state = j | current state = i).
  • Key concepts include stationary distributions, absorbing states, and expected hitting times, all frequently tested in interviews.
  • In finance, Markov chains model credit rating transitions, market regime switching, and are the foundation for random walks and Brownian motion.
  • Markov chain problems (gambler's ruin, random walks on graphs) are among the most common probability interview questions at quant firms.

Why This Matters for Quant Careers

Markov chain problems are heavily tested in quant interviews, particularly at prop trading firms like Jane Street, SIG, and Optiver. Classic problems include gambler's ruin, random walks on graphs, expected hitting times, and absorbing chain calculations. Being comfortable setting up and solving the recurrence equations is essential.

Frequently Asked Questions

What is the Markov property in simple terms?

The Markov property means 'the future depends only on the present, not the past.' If you know the current state of the system, knowing the entire history provides no additional information for predicting what happens next. For example, in a random walk, your next step depends only on where you are now, not the path you took to get there.

What is the gambler's ruin problem?

A gambler starts with some initial wealth and repeatedly makes fair (or biased) bets. The game ends when the gambler reaches a target amount (wins) or goes broke (is ruined). This is modeled as a Markov chain with absorbing states at 0 and the target. For a fair game starting with i dollars targeting N dollars, the probability of winning is i/N. This problem appears frequently in quant interviews.

What is a stationary distribution?

The stationary distribution is the long-run equilibrium of a Markov chain: the proportion of time the chain spends in each state as the number of steps approaches infinity. For an ergodic chain, the stationary distribution is unique and independent of the starting state. It is found by solving πP = π subject to the probabilities summing to 1.

How are Markov chains related to Brownian motion?

Brownian motion is the continuous-time, continuous-space limit of a random walk, which is a simple Markov chain. As you take smaller and smaller steps at faster and faster rates, the discrete random walk converges to Brownian motion. This connection, formalized by Donsker's theorem, links the discrete Markov chain theory of probability with the continuous stochastic calculus used in derivatives pricing.
