Glossary
Probability & Statistics · Beginner · 8 min read

Bayes' Theorem

Bayes' theorem is a formula in probability theory that describes how to update the probability of a hypothesis given new evidence. Expressed as P(A|B) = P(B|A)P(A)/P(B), it is one of the most important tools in statistics, machine learning, and quantitative finance. It appears frequently in quant trading interviews and underlies many signal-processing and prediction techniques used in systematic trading.

What Is Bayes' Theorem?

Bayes' theorem is a formula that tells you how to update your beliefs when you receive new evidence. Named after the Reverend Thomas Bayes (1701-1761), it is one of the most important results in all of probability and statistics.

The theorem answers a fundamental question: if you observe some evidence B, how should you update your belief about whether hypothesis A is true? The formula is:

P(A|B) = P(B|A) × P(A) / P(B)

Where:

  • P(A|B) = the posterior — the probability of A after observing B
  • P(B|A) = the likelihood — the probability of observing B if A is true
  • P(A) = the prior — the probability of A before observing B
  • P(B) = the marginal likelihood — the total probability of observing B

The beauty of Bayes' theorem is that it lets you flip a conditional probability around. You often know P(B|A) — the probability of the data given the hypothesis — but you want P(A|B) — the probability of the hypothesis given the data. Bayes' theorem makes the conversion.
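The flip is a one-line computation. A minimal sketch in Python (the input probabilities here are illustrative, not taken from any example in this article):

```python
def bayes(prior, likelihood, marginal):
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / marginal

# Illustrative numbers: P(B|A) = 0.9, P(A) = 0.2, P(B) = 0.3.
posterior = bayes(prior=0.2, likelihood=0.9, marginal=0.3)
print(round(posterior, 4))  # 0.6
```

Note that you must supply P(B) yourself; in practice it is usually expanded via the law of total probability, as the worked examples below show.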

Classic Worked Examples

Example 1 — Medical Testing (The Standard Interview Problem):

A disease affects 1% of the population. A test for the disease is 99% accurate: it correctly identifies 99% of sick people (sensitivity) and correctly identifies 99% of healthy people (specificity). You test positive. What is the probability you actually have the disease?

Solution:

  • P(Disease) = 0.01 (prior)
  • P(Positive | Disease) = 0.99 (sensitivity)
  • P(Positive | No Disease) = 0.01 (false positive rate)
  • P(No Disease) = 0.99

P(Positive) = P(Positive|Disease) × P(Disease) + P(Positive|No Disease) × P(No Disease)

P(Positive) = 0.99 × 0.01 + 0.01 × 0.99 = 0.0099 + 0.0099 = 0.0198

P(Disease | Positive) = (0.99 × 0.01) / 0.0198 = 0.0099 / 0.0198 = 50%

Despite the 99% accurate test, a positive result only means a 50% chance of having the disease! The key insight: when the condition is rare (1% prevalence), even a very accurate test produces many false positives relative to true positives.

Example 2 — Unfair Coins:

You have two coins. Coin A is fair (50% heads). Coin B is biased (75% heads). You pick a coin at random and flip it 3 times, getting HHT. What is the probability you picked Coin B?

P(HHT | A) = 0.5³ = 0.125

P(HHT | B) = 0.75² × 0.25 = 0.140625

P(HHT) = 0.5 × 0.125 + 0.5 × 0.140625 = 0.1328125

P(B | HHT) = (0.5 × 0.140625) / 0.1328125 ≈ 52.9%
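The same calculation in code, confirming the answer:

```python
# Two-coin problem: which coin did we pick, given the flips HHT?
p_coin = 0.5                    # prior: each coin equally likely
p_hht_a = 0.5 ** 3              # fair coin: P(HHT | A) = 0.125
p_hht_b = 0.75 ** 2 * 0.25      # biased coin: P(HHT | B) = 0.140625

# Law of total probability, then Bayes' theorem
p_hht = p_coin * p_hht_a + p_coin * p_hht_b
posterior_b = p_coin * p_hht_b / p_hht
print(f"{posterior_b:.1%}")  # 52.9%
```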


Bayesian Reasoning in Trading

Bayesian thinking is woven into quantitative trading at every level:

  • Signal updating: A quant model might start with a prior belief that a stock is fairly valued. As new data arrives — earnings reports, order flow, macro data — the model updates its belief using Bayesian logic. The posterior becomes the new prior for the next update.
  • Regime detection: Is the market in a "trending" or "mean-reverting" regime? A Bayesian model starts with prior probabilities for each regime and updates them as new returns are observed. This is formalized as a Hidden Markov Model — a direct application of Bayes' theorem.
  • Parameter estimation: In Bayesian statistics, model parameters (like the mean and variance of returns) are treated as random variables with prior distributions. As data is observed, the posterior distribution becomes more precise. This is more robust than point estimates, especially with limited data.
  • Market making: A market maker implicitly uses Bayesian reasoning when adjusting quotes. When large buy orders arrive, the maker updates the probability that informed traders are active (increasing the probability of an upward move) and widens the spread accordingly.
  • Machine learning: Naive Bayes classifiers, Bayesian neural networks, and Bayesian optimization are all built on Bayes' theorem and are used in quant research for signal generation and model selection.
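The signal-updating and regime-detection ideas above share one mechanism: each day's posterior becomes the next day's prior. A minimal sketch of that loop, assuming two regimes and invented likelihoods (these numbers are illustrative, not a real model):

```python
# Hypothetical likelihoods of seeing an "up" or "down" day in each regime.
likelihood = {
    "trending":       {"up": 0.60, "down": 0.40},
    "mean_reverting": {"up": 0.45, "down": 0.55},
}

def update(prior, observation):
    """One Bayesian update step: posterior becomes the next prior."""
    unnorm = {r: prior[r] * likelihood[r][observation] for r in prior}
    total = sum(unnorm.values())          # P(observation), the normalizer
    return {r: p / total for r, p in unnorm.items()}

beliefs = {"trending": 0.5, "mean_reverting": 0.5}
for obs in ["up", "up", "down", "up"]:
    beliefs = update(beliefs, obs)
print({r: round(p, 3) for r, p in beliefs.items()})
```

After three up days and one down day, the posterior tilts toward the trending regime. A Hidden Markov Model adds one more ingredient: transition probabilities between regimes applied before each update.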


Common Pitfalls and Intuition

Bayesian reasoning is counterintuitive for many people. The most common mistakes are:

  • Base rate neglect: Ignoring the prior probability and focusing only on the evidence. In the medical test example above, many people guess 99% instead of 50% because they ignore the 1% base rate. Always start with the base rate.
  • Confusing P(A|B) with P(B|A): The probability of rain given dark clouds is not the same as the probability of dark clouds given rain. Bayes' theorem converts between them, but they are fundamentally different quantities.
  • Forgetting to normalize: P(B) in the denominator ensures the posterior probabilities sum to 1. It's often computed by expanding P(B) = P(B|A)P(A) + P(B|not A)P(not A).

A useful framework for interview problems: When solving Bayes' theorem problems in interviews, use a probability tree or a frequency table. For the medical test problem, imagine 10,000 people: 100 have the disease (99 test positive), 9,900 are healthy (99 test positive by false alarm). So 99 + 99 = 198 test positive, and 99/198 = 50% actually have the disease. This natural frequency approach is much more intuitive than plugging into the formula directly.
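The frequency-table framing translates directly into code:

```python
# Natural-frequency version of the medical-test problem.
population = 10_000
sick = round(population * 0.01)            # 100 have the disease
healthy = population - sick                # 9,900 are healthy
true_positives = round(sick * 0.99)        # 99 sick people test positive
false_positives = round(healthy * 0.01)    # 99 healthy people test positive

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(p_sick_given_positive)  # 0.5
```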

Key Formulas

Bayes' theorem: P(A|B) = P(B|A) × P(A) / P(B). Updates the probability of hypothesis A given observed evidence B by converting the likelihood P(B|A) into the posterior P(A|B).

Law of total probability: P(B) = P(B|A) × P(A) + P(B|not A) × P(not A). Computes the denominator of Bayes' theorem by summing over all possible hypotheses. This is the normalization constant.

Key Takeaways

  • Bayes' theorem updates probabilities when new evidence arrives: P(A|B) = P(B|A) × P(A) / P(B).
  • The theorem converts P(B|A) (likelihood of evidence given the hypothesis) into P(A|B) (probability of the hypothesis given the evidence).
  • Prior probability represents your belief before evidence; posterior probability is your updated belief after evidence.
  • Bayesian reasoning is essential in trading: as new market data arrives, quants update their beliefs about fair value, regime, and signal strength.
  • Bayes' theorem questions are among the most common probability topics in quant interviews.

Why This Matters for Quant Careers

Bayes' theorem is one of the most heavily tested probability topics in quant interviews. Firms like Jane Street, SIG, Optiver, and Citadel frequently ask Bayesian probability problems to assess whether candidates can reason under uncertainty. You should be able to solve problems quickly and explain the intuition — especially the base rate effect.

Practice with our Jane Street interview questions and SIG interview questions. Book a free consultation to assess your probability readiness.

Frequently Asked Questions

Why is Bayes' theorem important for quant trading?

Bayes' theorem provides the mathematical framework for updating beliefs with new evidence — exactly what traders do constantly. As new market data, news, or signals arrive, a quant's estimate of fair value should be updated using Bayesian logic. Additionally, Bayesian statistics, Bayesian machine learning, and Hidden Markov Models (all built on Bayes' theorem) are widely used in systematic trading research.

What is the difference between prior and posterior?

The prior P(A) is your belief about hypothesis A before seeing any evidence. The posterior P(A|B) is your updated belief after observing evidence B. For example, before a medical test, you believe there's a 1% chance of disease (prior). After a positive test result, you update to a 50% chance (posterior). The posterior incorporates both your prior belief and the new evidence.

How do you solve Bayes' theorem problems quickly in interviews?

Use the natural frequency approach: instead of abstract probabilities, imagine a concrete population (e.g., 10,000 people) and count how many fall into each category. This converts a Bayes' theorem problem into simple arithmetic. For example, with 1% disease prevalence and 99% test accuracy: 10,000 people → 100 sick (99 test positive) + 9,900 healthy (99 test positive) → 198 positives total → P(sick|positive) = 99/198 = 50%.

What is the base rate fallacy?

The base rate fallacy is the tendency to ignore prior probabilities (base rates) when evaluating evidence. In the medical test example, people often guess a 99% probability of disease after a positive test, ignoring the fact that only 1% of the population is actually sick. The base rate (1%) dramatically affects the answer. In trading, a similar mistake is overreacting to a signal without considering how rare the event it predicts actually is.
