A Markov chain is a mathematical model for a system that transitions between a set of states according to probabilistic rules, with one critical restriction: the probability of moving to any particular state depends only on the current state, not on the sequence of states that preceded it. This is called the Markov property (or memorylessness). Markov chains are used extensively in quantitative finance for modeling credit ratings, market regimes, and option pricing, and they appear frequently in quant interview probability questions.
Formally, a discrete-time Markov chain {X₀, X₁, X₂, ...} satisfies:
P(Xₙ₊₁ = j | Xₙ = i, Xₙ₋₁ = iₙ₋₁, ..., X₀ = i₀) = P(Xₙ₊₁ = j | Xₙ = i)
The conditional dependence on the full history collapses to dependence on only the current state. This simplification makes Markov chains mathematically tractable while still being powerful enough to model many real-world systems.
Examples of systems modeled as Markov chains include random walks, board games (Monopoly, Snakes and Ladders), weather patterns, credit rating transitions, and market regime switching models used in quantitative finance.
A Markov chain with n states is fully described by its transition probability matrix P, where entry Pᵢⱼ is the probability of moving from state i to state j.
Each row sums to 1 (you must go somewhere). The matrix can be multiplied by itself to compute multi-step probabilities: the (i, j) entry of Pᵏ gives the probability of being in state j after k steps, starting from state i.
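As an illustration, multi-step probabilities can be computed by raising the matrix to a power. A minimal sketch using a hypothetical two-state regime chain (the states and numbers are made up for this example):

```python
import numpy as np

# Hypothetical two-state regime chain: 0 = calm market, 1 = volatile market.
P = np.array([
    [0.9, 0.1],  # from calm: stay calm 90%, turn volatile 10%
    [0.3, 0.7],  # from volatile: revert to calm 30%, stay volatile 70%
])

# Each row sums to 1: from any state, you must go somewhere.
assert np.allclose(P.sum(axis=1), 1.0)

# k-step transition probabilities are the entries of P raised to the k-th power.
P5 = np.linalg.matrix_power(P, 5)
print(P5[0, 1])  # probability of being volatile after 5 steps, starting calm
```

Note that the rows of Pᵏ still sum to 1, since each power is itself a valid transition matrix.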
Key properties:
- Irreducible: every state can be reached from every other state.
- Aperiodic: the chain does not cycle through states with a fixed period.
- Absorbing: a state that, once entered, is never left (Pᵢᵢ = 1).
- Ergodic: irreducible and aperiodic; the chain converges to a unique stationary distribution regardless of the starting state.
The gambler's ruin problem is the most famous Markov chain application and a classic interview question.
Problem: A gambler starts with $3 and repeatedly bets $1 on a fair coin flip (50/50). The game ends when the gambler reaches $5 (wins) or $0 (goes bust). What is the probability of winning?
Setup: States are {0, 1, 2, 3, 4, 5}. States 0 and 5 are absorbing. From any other state i, P(i → i+1) = 0.5 and P(i → i-1) = 0.5.
Solution: Let pᵢ = probability of reaching $5 starting from state i. The boundary conditions are p₀ = 0 and p₅ = 1, and conditioning on the first flip gives pᵢ = ½pᵢ₋₁ + ½pᵢ₊₁ for the interior states.
For a fair coin, the solution is linear: pᵢ = i/5.
So p₃ = 3/5 = 60%.
Expected duration: The expected number of bets until the game ends can be computed from a similar system: dᵢ = 1 + ½dᵢ₋₁ + ½dᵢ₊₁ with d₀ = d₅ = 0. The solution is dᵢ = i × (N − i), so starting from $3 the game lasts 3 × (5 − 3) = 6 bets on average.
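Both systems of equations can be checked numerically. A sketch (the state indexing and variable names are my own):

```python
import numpy as np

N = 5  # target wealth; absorbing states are 0 and N

# Win probabilities: p_i = 0.5*p_{i-1} + 0.5*p_{i+1}, with p_0 = 0, p_N = 1.
# Rearranged as A @ p = b over the interior states 1..N-1.
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for i in range(1, N):
    A[i - 1, i - 1] = 1.0
    if i - 1 >= 1:
        A[i - 1, i - 2] = -0.5
    if i + 1 <= N - 1:
        A[i - 1, i] = -0.5
    if i + 1 == N:
        b[i - 1] = 0.5  # stepping up from N-1 lands in the absorbing win state

p = np.linalg.solve(A, b)
print(p[2])  # win probability starting from $3

# Expected durations: d_i = 1 + 0.5*d_{i-1} + 0.5*d_{i+1}, d_0 = d_N = 0.
# Same matrix A, right-hand side of all ones.
d = np.linalg.solve(A, np.ones(N - 1))
print(d[2])  # expected number of bets starting from $3
```

The solver reproduces pᵢ = i/5 and dᵢ = i × (N − i) for every interior state, not just i = 3.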
For a biased coin (probability p of winning each bet), the solution changes to: pᵢ = (1 − (q/p)ⁱ) / (1 − (q/p)ᴺ), where q = 1 − p.
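A quick way to sanity-check the biased-coin formula is to code it up. The helper below is a hypothetical sketch (the function name is mine), with the fair-coin case handled separately since the formula degenerates when p = q:

```python
def ruin_win_prob(i, N, p):
    """Probability of reaching N before 0, starting at i, winning each bet with prob p."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:
        return i / N            # fair-game limit: the linear solution i/N
    r = q / p
    return (1 - r ** i) / (1 - r ** N)

print(ruin_win_prob(3, 5, 0.5))   # fair coin: recovers 3/5
print(ruin_win_prob(3, 5, 0.48))  # a small house edge noticeably cuts the win probability
```

Even a 2% edge against the gambler drops the win probability below the fair-game value, which is the usual interview follow-up point.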
Markov chains have several important applications in quantitative finance, including credit rating transition models, market regime switching, and option pricing. Two key concepts:
Transition probability: the probability of moving from state i to state j in one step. The Markov property means this depends only on the current state i.
Stationary distribution: the row vector π satisfying πP = π, i.e. unchanged by the transition matrix. For an ergodic chain, this is the long-run proportion of time spent in each state.
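One common way to compute the stationary distribution is as the left eigenvector of P with eigenvalue 1. A minimal sketch using a hypothetical two-state chain (the numbers are made up):

```python
import numpy as np

# Hypothetical two-state chain: 0 = calm market, 1 = volatile market.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# pi P = pi means pi is a left eigenvector of P with eigenvalue 1,
# i.e. an ordinary (right) eigenvector of P transposed.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()       # normalize so the probabilities sum to 1

print(pi)        # long-run fraction of time spent in each state
print(pi @ P)    # applying P leaves pi unchanged
```

For this chain the long-run split works out to 75% calm, 25% volatile, and multiplying π by P returns π, confirming stationarity.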
Markov chain problems are heavily tested in quant interviews, particularly at prop trading firms like Jane Street, SIG, and Optiver. Classic problems include gambler's ruin, random walks on graphs, expected hitting times, and absorbing chain calculations. Being comfortable setting up and solving the recurrence equations is essential.
Random walk theory suggests that stock price changes are independent and identically distributed, meaning past prices cannot predict future movements, a foundational concept in financial economics.
Brownian motion (Wiener process) is the continuous-time random process that models the random component of asset price movements and is the foundation of the Black-Scholes model and stochastic calculus.
Stochastic calculus extends classical calculus to handle random processes, providing the mathematical foundation for derivatives pricing models like Black-Scholes and modern quantitative finance.
Monte Carlo simulation uses repeated random sampling to model the probability of different outcomes in complex systems, making it essential for derivatives pricing, risk analysis, and strategy evaluation.
The Markov property means 'the future depends only on the present, not the past.' If you know the current state of the system, knowing the entire history provides no additional information for predicting what happens next. For example, in a random walk, your next step depends only on where you are now, not the path you took to get there.
A gambler starts with some initial wealth and repeatedly makes fair (or biased) bets. The game ends when the gambler reaches a target amount (wins) or goes broke (is ruined). This is modeled as a Markov chain with absorbing states at 0 and the target. For a fair game starting with i dollars targeting N dollars, the probability of winning is i/N. This problem appears frequently in quant interviews.
The stationary distribution is the long-run equilibrium of a Markov chain: the proportion of time the chain spends in each state as the number of steps approaches infinity. For an ergodic chain, the stationary distribution is unique and independent of the starting state. It is found by solving π × P = π subject to the probabilities summing to 1.
Brownian motion is the continuous-time, continuous-space limit of a random walk, which is a simple Markov chain. As you take smaller and smaller steps at faster and faster rates, the discrete random walk converges to Brownian motion. This connection, formalized by Donsker's theorem, links the discrete Markov chain theory of probability with the continuous stochastic calculus used in derivatives pricing.
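The convergence can be illustrated numerically: rescaling an n-step ±1 random walk by 1/√n produces endpoints whose mean and variance match standard Brownian motion at time 1. A sketch (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000        # steps per walk
paths = 20_000    # number of simulated walks

# Endpoint of an n-step +-1 walk: S_n = 2 * Binomial(n, 1/2) - n.
S = 2 * rng.binomial(n, 0.5, size=paths) - n
endpoints = S / np.sqrt(n)   # Donsker rescaling: divide by sqrt(n)

print(endpoints.mean())  # near 0
print(endpoints.var())   # near 1, the variance of standard Brownian motion at t = 1
```

As n grows, the full rescaled path (not just the endpoint) converges in distribution to a Brownian motion, which is the content of Donsker's theorem.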