Math Foundations Roadmap

You don’t need a math degree. You need enough math to understand why algorithms work and when they break. Learn just-in-time — study each topic when the ML concept you’re learning requires it.

1. Linear Algebra

The language of data. Every dataset is a matrix, and a model’s parameters form a vector.
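A minimal sketch of that idea in NumPy (the numbers are made up): a linear model’s predictions are just a matrix–vector product between the data matrix and the parameter vector.

```python
import numpy as np

# A tiny "dataset": 3 samples x 2 features (hypothetical numbers).
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# The model's parameters as a vector: one weight per feature.
w = np.array([0.5, -1.0])

# A linear model's predictions are a matrix-vector product:
# one dot product per row of X.
predictions = X @ w
print(predictions)  # -> [-1.5 -2.5 -3.5]
```

Nearly every model you’ll meet, from linear regression to transformers, reduces to compositions of operations like this one.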

2. Calculus

How models learn. Gradient descent is just calculus applied iteratively.

  • Derivatives — rate of change, slopes, the basis of optimization
  • Partial Derivatives — multivariable functions, one variable at a time
  • Chain Rule — backpropagation is literally the chain rule
  • Gradient — direction of steepest ascent, vector of partials
  • Gradient Descent — the optimization loop that trains every model
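The bullets above fit in a few lines of code. A toy sketch: minimize f(x) = (x − 3)², whose derivative is 2(x − 3), by repeatedly stepping against the gradient (the starting point and learning rate are arbitrary choices).

```python
# Minimize f(x) = (x - 3)**2 with gradient descent.
# The derivative is f'(x) = 2 * (x - 3), so each step nudges x toward 3.

def grad(x):
    return 2 * (x - 3)

x = 0.0    # arbitrary starting guess
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)  # step in the direction of steepest descent

print(round(x, 4))  # -> 3.0, the minimum
```

Training a neural network is this exact loop, except x is millions of parameters and the gradient comes from backpropagation (the chain rule).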

3. Probability & Statistics

How we reason under uncertainty. Every prediction has a confidence.
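One common way that confidence shows up in practice is softmax, which turns a model’s raw scores into a probability distribution over classes. A small stdlib-only sketch (the scores are made up):

```python
import math

# Softmax: convert raw model scores (logits) into probabilities
# that are positive and sum to 1 -- the model's "confidence" per class.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # hypothetical logits for 3 classes
print(probs)  # sums to 1; the largest score gets the highest probability
```

Reading a prediction as a probability, rather than a bare label, is what lets you reason about how much to trust it.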

4. Information Theory

The math behind decision trees, cross-entropy loss, and language models.
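Cross-entropy is the one to internalize first: it measures how surprised you are by the true answer given your predicted probabilities. A minimal sketch with made-up probabilities:

```python
import math

# Cross-entropy between a one-hot true label and predicted probabilities:
# the lower the probability assigned to the true class, the higher the loss.
def cross_entropy(p_true, p_pred):
    return -sum(t * math.log(q) for t, q in zip(p_true, p_pred) if t > 0)

# True class is index 0; the model gives it probability 0.7.
loss = cross_entropy([1.0, 0.0, 0.0], [0.7, 0.2, 0.1])
print(loss)  # equals -log(0.7), about 0.357
```

This same quantity is the training loss for classifiers and language models, and entropy (its self-referential case) is what decision trees use to pick splits.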

When to go deeper

Most ML requires linear algebra + calculus at an intuitive level. Go deeper into probability for Bayesian methods, information theory for NLP, and optimization theory for research.