CST / Part IA / NST Mathematics

NST Mathematics

Combined revision notes from the Michaelmas, Lent, and Easter course-B lecture PDFs · consolidated to stand in for missing lecture-board fill-ins where possible

Course Map

The course is really a toolkit. Each term adds techniques for describing geometry, approximation, change, oscillation, fields, linear structure, and evolution equations. The common pattern is: identify the structure, choose the right representation, simplify the algebra, and then interpret the result geometrically or physically.

Michaelmas
  • Vectors and coordinates.
  • Complex numbers and hyperbolic functions.
  • Single-variable calculus, series, integration.
  • Elementary probability and distributions.
Lent + Easter
  • ODEs, multivariable calculus, vector calculus, Fourier series.
  • Linear algebra: vector spaces, matrices, eigenvalues and eigenvectors.
  • PDEs: Laplace, wave, diffusion.

General Problem-Solving Method

  1. Classify the object: vector, scalar field, ODE, matrix, Fourier series, PDE, random variable.
  2. Choose coordinates or basis that match the symmetry.
  3. Exploit structure: linearity, separability, orthogonality, exactness, symmetry, periodicity.
  4. Compute carefully.
  5. Interpret the answer: geometry, extrema, probability, physical meaning, or boundary behaviour.
Most hard questions become manageable once you identify the right representation: polar coordinates, diagonal basis, integrating factor, or Fourier expansion.

Vectors

Vectors encode magnitude and direction. In this course they are used both geometrically and as a bridge to later ideas like bases, coordinates, gradients, flux, and matrices.

Basic Operations

Addition
Componentwise; geometrically head-to-tail.
Scalar multiplication
Rescales magnitude, reverses direction if scalar is negative.
Magnitude
|a| = sqrt(a · a).
Unit vector
â = a / |a|.

Lines and Planes

Line through r0 in direction d:   r = r0 + λd
Plane through r0 with normal n:   (r - r0) · n = 0

A line is determined by one point and one direction vector. A plane is determined by one point and one normal vector.

Scalar Product

a · b = |a||b| cos θ = ax bx + ay by + az bz
  • a · b = 0 iff the vectors are perpendicular.
  • Use it for angles, projections, distances, and normal conditions.

Vector Product

|a × b| = |a||b| sin θ

a × b is perpendicular to both a and b, with direction from the right-hand rule. Its magnitude is the area of the parallelogram spanned by a and b.

Triple Products

a · (b × c)

Gives signed volume of the parallelepiped. It vanishes iff the three vectors are coplanar.

a × (b × c) = b(a · c) - c(a · b)
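The BAC-CAB rule and the triple-product sign rules are easy to sanity-check numerically. A small plain-Python sketch (my own; the example vectors are arbitrary choices):

```python
# Numerical check of the BAC-CAB rule a × (b × c) = b(a · c) - c(a · b).

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

a, b, c = (1.0, 2.0, 3.0), (4.0, 0.0, -1.0), (2.0, 5.0, 1.0)

lhs = cross(a, cross(b, c))
rhs = tuple(b[i]*dot(a, c) - c[i]*dot(a, b) for i in range(3))
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))

# Scalar triple product a · (b × c) = signed parallelepiped volume;
# it changes sign when two of the vectors are swapped.
vol = dot(a, cross(b, c))
assert abs(vol + dot(b, cross(a, c))) < 1e-12
```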

Distances

  • Point to plane: projection onto the plane normal.
  • Point to line: use area or projection decomposition.
  • Line to line: if skew, project separation vector onto a common normal.
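The skew-line case can be sketched in a few lines of Python (the helper name `skew_distance` and the example lines are my own illustration):

```python
# Distance between skew lines r = p1 + s d1 and r = p2 + t d2:
# project the separation p2 - p1 onto the common normal d1 × d2.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def skew_distance(p1, d1, p2, d2):
    n = cross(d1, d2)                      # common normal to both lines
    sep = (p2[0]-p1[0], p2[1]-p1[1], p2[2]-p1[2])
    return abs(dot(sep, n)) / math.sqrt(dot(n, n))

# The x-axis and the line {x = 0, z = 1} parallel to the y-axis: distance 1.
d = skew_distance((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0))
assert abs(d - 1.0) < 1e-12
```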

Coordinates and Bases

Cartesian basis vectors are fixed and orthogonal. Cylindrical and spherical bases are still orthogonal, but the basis vectors depend on position.

SystemCoordinatesUse
Cartesian(x, y, z)No special symmetry.
Cylindrical(r, θ, z)Axial symmetry.
Spherical(r, θ, φ)Radial symmetry.
Coordinate changes simplify geometry only when they match the symmetry of the problem.

Complex Numbers

A complex number is z = x + iy with i² = -1. Complex numbers combine algebra with plane geometry and become especially powerful for rotations, oscillations, roots, and logarithms.

Basic Forms

z = x + iy
|z| = sqrt(x² + y²)
arg(z) = θ
z = |z|(cos θ + i sin θ) = |z| e^{iθ}

Conjugate

z* = x - iy
zz* = |z|²

Useful for division and extracting real-valued quantities.

Multiplication and Division

  • Multiplication multiplies moduli and adds arguments.
  • Division divides moduli and subtracts arguments.
z1 z2 = r1 r2 e^{i(θ1 + θ2)}
z1 / z2 = (r1 / r2) e^{i(θ1 - θ2)}

Euler and de Moivre

e^{iθ} = cos θ + i sin θ
(cos θ + i sin θ)^n = cos(nθ) + i sin(nθ)

This gives roots of unity and efficient formulas for powers and roots.

Roots and Logarithm

z^{1/n} = r^{1/n} e^{i(θ + 2πk)/n},  k = 0, ..., n-1
log z = ln|z| + i(arg z + 2πk)

Complex logarithm is multivalued because the argument is multivalued.
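Both facts can be verified with Python's `cmath` (the cube-roots-of-8 example is an arbitrary illustration; `cmath.log` returns only the principal branch):

```python
# n-th roots of z = r e^{iθ}: r^{1/n} e^{i(θ + 2πk)/n} for k = 0, ..., n-1.
import cmath, math

def nth_roots(z, n):
    r, theta = abs(z), cmath.phase(z)
    return [r**(1/n) * cmath.exp(1j*(theta + 2*math.pi*k)/n) for k in range(n)]

roots = nth_roots(8, 3)
assert all(abs(w**3 - 8) < 1e-9 for w in roots)   # every root cubes back to 8
assert any(abs(w - 2) < 1e-9 for w in roots)      # one root is real

# Principal-branch logarithm; the full multivalued log adds 2πik.
assert abs(cmath.log(-1 + 0j) - 1j*math.pi) < 1e-12
```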

Oscillations

Complex exponentials compress trig algebra:

cos ωt = Re(e^{iωt}),   sin ωt = Im(e^{iωt})

This is why they appear constantly in waves, circuits, and Fourier analysis.

Hyperbolic Functions

cosh x = (e^x + e^{-x}) / 2
sinh x = (e^x - e^{-x}) / 2
tanh x = sinh x / cosh x

These behave like a “hyperbolic” analogue of trig functions, but note the key sign change:

cosh²x - sinh²x = 1

Key Facts

  • cosh is even, sinh and tanh are odd.
  • cosh x ≥ 1 and |tanh x| < 1.
  • They parameterise hyperbolae in the same way circular trig parameterises circles.

Inverse Hyperbolic Functions

arsinh x = ln(x + √(x² + 1))
arcosh x = ln(x + √(x² - 1)),  x ≥ 1
artanh x = ½ ln((1 + x)/(1 - x)),  |x| < 1

They are useful in integration and in solving equations involving exponentials.

Differentiation, Limits, and Series

This block is the single-variable calculus core, extended beyond standard school technique into approximation and local analysis.

Differentiation Rules

(fg)' = f'g + fg'
(f/g)' = (f'g - fg') / g²
(f(g(x)))' = f'(g(x)) g'(x)

Also use implicit differentiation when the curve is given by a relation rather than an explicit function.

Stationary Points and Sketching

  • f'(x0) = 0 gives a candidate stationary point.
  • f''(x0) > 0 local minimum.
  • f''(x0) < 0 local maximum.
  • If the first nonzero derivative at x0 has odd order (third or higher), the point is a stationary point of inflection.

Limits and Asymptotics

Use algebra, comparison, Taylor expansion, or l'Hôpital when appropriate. The course also emphasises order of magnitude and O-notation for dominant behaviour.

f(x) = O(g(x))  means  |f(x)| ≤ C|g(x)|  in the relevant limit

Infinite Series

Convergence matters. Main tests in these notes are comparison and ratio tests, with standard benchmark examples like geometric series and harmonic series.

Taylor Series

f(x) = Σ_{n=0}^N f^{(n)}(a) (x-a)^n / n! + R_{N+1}

Near x = a, Taylor series turn a complicated function into a polynomial approximation whose leading nonzero term often controls the local behaviour.

e^x = 1 + x + x²/2! + ...
sin x = x - x³/3! + ...
cos x = 1 - x²/2! + ...
ln(1+x) = x - x²/2 + x³/3 - ...

Binomial Expansion

(1 + x)^α = 1 + αx + α(α-1)x²/2! + ...

Newton-Raphson

x_{n+1} = x_n - f(x_n) / f'(x_n)

Geometric meaning: replace the function by its tangent line and use the tangent’s root as the next iterate.
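A minimal Python sketch of the iteration (my own; the √2 example and starting point are arbitrary, and convergence assumes a well-behaved f near the root):

```python
# Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n).

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # tangent-line correction
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x² - 2, i.e. √2.
root = newton(lambda x: x*x - 2, lambda x: 2*x, x0=1.0)
assert abs(root - 2**0.5) < 1e-10
```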

Integration

Integration is developed both as area/accumulation and as an algebraic toolkit.

Fundamental Theorem of Calculus

d/dx ∫_a^x f(t) dt = f(x)

Differentiation and integration are inverse operations, up to constants.

Main Techniques

  • Substitution.
  • Integration by parts.
  • Partial fractions.
  • Trig and hyperbolic substitutions/identities.
  • Complex-number tricks for mixed trig expressions.

Useful Patterns

∫ u dv = uv - ∫ v du
∫ f(x)^α f'(x) dx  →  substitute u = f(x)

Even and Odd Integrands

f odd:  ∫_{-a}^a f(x) dx = 0
f even: ∫_{-a}^a f(x) dx = 2∫_0^a f(x) dx

Differentiation Under the Integral Sign

The course includes differentiating integrals with respect to parameters and limits. This is extremely useful for parametric integral families.

Approximating Sums by Integrals

This gives asymptotic estimates, especially for factorial growth via Stirling’s approximation.

ln(n!) ≈ n ln n - n
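A quick numerical check of the estimate (my own sketch; the choice n = 1000 is arbitrary):

```python
# Compare ln(n!) with the Stirling estimate n ln n - n.
import math

def log_factorial(n):
    return sum(math.log(k) for k in range(1, n + 1))

n = 1000
exact = log_factorial(n)
stirling = n * math.log(n) - n
# The estimate undershoots by roughly ½ ln(2πn), the next Stirling term.
assert exact > stirling
assert abs(exact - stirling) / exact < 0.01   # within 1% at n = 1000
```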

Schwarz Inequality

(∫ fg)² ≤ (∫ f²)(∫ g²)

Core inequality for controlling integrals and proving bounds.

Multiple Integrals

Choose coordinates to match the domain.

dA = r dr dθ             (2D polar)
dV = r dr dθ dz          (cylindrical)
dV = r² sin θ dr dθ dφ   (spherical)
  • 2D polar for disks and radial planar regions.
  • Cylindrical for tubes/axial symmetry.
  • Spherical for balls and radial 3D symmetry.

Gaussian Integral

∫_{-∞}^{∞} e^{-x²} dx = √π

The standard route squares the integral and switches to polar coordinates.
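The value can also be confirmed by brute-force quadrature; a midpoint-rule sketch (my own; grid and cutoff are arbitrary, relying on the tails beyond |x| = 8 being negligible):

```python
# Midpoint-rule estimate of ∫ e^{-x²} dx over [-8, 8] ≈ √π.
import math

a, b, n = -8.0, 8.0, 200_000
h = (b - a) / n
total = sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(n)) * h
assert abs(total - math.sqrt(math.pi)) < 1e-6
```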

Probability

The course probability block is elementary but important: set language, conditional probability, distributions, and moments.

Basic Rules

0 ≤ P(A) ≤ 1
P(Ā) = 1 - P(A)
P(A ∪ B) = P(A) + P(B) - P(A ∩ B)

Conditional Probability

P(B | A) = P(A ∩ B) / P(A)

Use it when information changes the sample space. Be careful in medical-test style problems: P(test+ | disease) is not the same as P(disease | test+).

Combinatorics

Permutations:   n!
Combinations:   (n choose r) = n! / (r!(n-r)!)

Expectation and Variance

E[X] = Σ x p(x)   or   ∫ x f(x) dx
Var(X) = E[(X - μ)²] = E[X²] - (E[X])²
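A worked discrete check (my own example, a fair six-sided die, for which mean = 3.5 and variance = 35/12):

```python
# E[X] and Var(X) from a probability mass function.

pmf = {x: 1/6 for x in range(1, 7)}   # fair die

mean = sum(x * p for x, p in pmf.items())
var = sum((x - mean) ** 2 * p for x, p in pmf.items())

assert abs(mean - 3.5) < 1e-12
assert abs(var - 35/12) < 1e-12
# Same answer via the shortcut Var(X) = E[X²] - (E[X])².
assert abs(sum(x*x*p for x, p in pmf.items()) - mean**2 - var) < 1e-12
```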

Important Distributions

  • Binomial (n independent yes/no trials): mean np, variance np(1-p).
  • Poisson (rare counts): mean λ, variance λ.
  • Exponential (waiting times): mean 1/λ, variance 1/λ².
  • Normal (continuous bell-shaped behaviour): mean μ, variance σ².

Normal Distribution

f(x) = 1/(σ√(2π)) exp(-(x-μ)²/(2σ²))

Central because sums/averages of many effects tend toward it. Standardise with Z = (X - μ)/σ.
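A small sketch of standardising and reading off probabilities via the error function, Φ(z) = ½(1 + erf(z/√2)) (my own helper name `normal_cdf`; the 68% interval is the standard sanity check):

```python
# Normal CDF via standardisation Z = (X - μ)/σ and math.erf.
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(μ - σ < X < μ + σ) ≈ 0.683 for any μ, σ.
p = normal_cdf(110, mu=100, sigma=10) - normal_cdf(90, mu=100, sigma=10)
assert abs(p - 0.6827) < 1e-3
```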

Ordinary Differential Equations

An ODE relates an unknown function and its ordinary derivatives. The first task is always to identify the class of equation.

First-Order Types

Integrable
y' = f(x). Integrate directly.
Separable
y' = f(x)/g(y). Rearrange and integrate.
Linear
y' + p(x)y = f(x). Use integrating factor.

Integrating Factor

μ(x) = exp(∫ p(x) dx)
(μy)' = μf

This converts the ODE into an exact derivative.
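A worked example (my own, not from the lecture PDFs): for y' + y = x with y(0) = 1, p(x) = 1 gives μ = e^x, so (e^x y)' = x e^x and y = x - 1 + 2e^{-x}. The sketch verifies this numerically:

```python
# Verify that y = x - 1 + 2e^{-x} solves y' + y = x with y(0) = 1.
import math

def y(x):
    return x - 1 + 2 * math.exp(-x)

h = 1e-6
for x in [0.0, 0.5, 1.0, 2.0]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # central-difference y'
    assert abs(dydx + y(x) - x) < 1e-6       # ODE: y' + y = x

assert abs(y(0) - 1) < 1e-12                  # initial condition
```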

General Solution vs Conditions

General solutions contain arbitrary constants. Initial or boundary conditions determine those constants.

Second-Order Linear ODEs with Constant Coefficients

ay'' + by' + cy = 0

Use the trial form y = e^{mx}, giving the characteristic equation:

am² + bm + c = 0
  • Distinct real roots: y = A e^{m1x} + B e^{m2x}.
  • Repeated root: y = (A + Bx)e^{mx}.
  • Complex roots α ± iβ: y = e^{αx}(A cos βx + B sin βx).
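The three cases fall out of the quadratic formula over ℂ; a small sketch (the example equations are my own):

```python
# Roots of the characteristic equation am² + bm + c = 0.
import cmath

def char_roots(a, b, c):
    disc = cmath.sqrt(b*b - 4*a*c)
    return ((-b + disc) / (2*a), (-b - disc) / (2*a))

# y'' + 2y' + 5y = 0: roots -1 ± 2i, so y = e^{-x}(A cos 2x + B sin 2x).
m1, m2 = char_roots(1, 2, 5)
assert abs(m1 - (-1 + 2j)) < 1e-12 and abs(m2 - (-1 - 2j)) < 1e-12

# y'' - 2y' + y = 0: repeated root m = 1, so y = (A + Bx)e^x.
m1, m2 = char_roots(1, -2, 1)
assert abs(m1 - 1) < 1e-12 and abs(m2 - 1) < 1e-12
```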

Inhomogeneous Equations

General solution = complementary function + particular integral. Linearity and superposition control everything.

Damping, Resonance, Transients

For forced oscillators, the homogeneous part gives transients and the particular part gives the long-time driven response. Resonance occurs when forcing frequency aligns with the system’s natural frequency, subject to damping.

Partial Derivatives and Multivariable Extrema

For functions of several variables, the derivative depends on which direction you vary. Partial derivatives isolate one variable at a time.

Definitions

fx = ∂f/∂x,   fy = ∂f/∂y

These describe local rates of change holding the other variables fixed.

Differentials

df = fx dx + fy dy

This is the linear approximation to the change in f.

Chain Rule

If variables depend on other variables, partial derivatives combine through the chain rule. This becomes essential in coordinate changes and thermodynamic identities.

Exact Differentials

P dx + Q dy is exact  iff  ∂P/∂y = ∂Q/∂x

If exact, then P dx + Q dy = df for some potential function f. This is the multivariable version of “being a total derivative”. Integrating factors can sometimes make an inexact form exact.

Stationary Points in Two Variables

fx = 0,  fy = 0

Then classify using second derivatives or the Hessian. In two variables:

D = fxx fyy - fxy²
  • D > 0 and fxx > 0: local minimum.
  • D > 0 and fxx < 0: local maximum.
  • D < 0: saddle point.
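The test collapses to a few lines of Python (the function name `classify` and the examples are my own):

```python
# Second-derivative test in two variables: D = fxx fyy - fxy².

def classify(fxx, fyy, fxy):
    D = fxx * fyy - fxy ** 2
    if D > 0:
        return "minimum" if fxx > 0 else "maximum"
    if D < 0:
        return "saddle"
    return "inconclusive"   # D = 0: higher-order terms decide

assert classify(2, -2, 0) == "saddle"       # f = x² - y² at the origin
assert classify(2, 2, 0) == "minimum"       # f = x² + y²
assert classify(-2, -2, 0) == "maximum"     # f = -(x² + y²)
assert classify(2, 2, 2) == "inconclusive"  # D = 0
```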

Constraints and Lagrange Multipliers

To optimise f subject to g = 0, solve stationary points of

L = f - λg
∇f = λ∇g

Geometrically, the level sets are tangent at the constrained extremum.
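A worked check (my own example): maximising f = xy subject to x + y = 1 gives x = y = 1/2 with λ = 1/2, and the sketch confirms ∇f = λ∇g there:

```python
# Lagrange condition ∇f = λ∇g at the constrained maximum of f = xy
# on the line g = x + y - 1 = 0.

x, y, lam = 0.5, 0.5, 0.5

grad_f = (y, x)          # ∇(xy)
grad_g = (1.0, 1.0)      # ∇(x + y - 1)

assert all(abs(gf - lam * gg) < 1e-12 for gf, gg in zip(grad_f, grad_g))
assert abs(x + y - 1) < 1e-12   # constraint holds

# Nearby points on the constraint do no better: (x+d)(y-d) = 1/4 - d².
for d in (-0.1, 0.1):
    assert (x + d) * (y - d) <= x * y
```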

Vector Calculus

This extends vectors and partial derivatives into field theory. Scalar fields assign a number to each point; vector fields assign a vector to each point.

Gradient

∇φ = (∂φ/∂x, ∂φ/∂y, ∂φ/∂z)

Points in the direction of greatest increase of φ and is perpendicular to level surfaces φ = const.

Line Integrals

∫_C F · dr

Measures accumulated tangential effect along a curve. For conservative fields it depends only on endpoints.

Conservative Fields

Equivalent characterisations in this course:

  • F = ∇φ for some potential φ.
  • F · dr is an exact differential.
  • Line integrals are path-independent.
  • Closed-loop integrals vanish.
  • ∇ × F = 0 on suitable simply connected domains.

Flux Integrals

∫_S F · dS

Measure net flow through a surface.

Divergence and Laplacian

∇ · F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z
∇²φ = ∇ · (∇φ)

Divergence measures local source strength. Laplacian measures local curvature/imbalance of a scalar field.

Gauss' Theorem

∫_V ∇ · F dV = ∮_{∂V} F · dS

Converts a volume integral of divergence into a flux through the boundary.

Curl and Stokes' Theorem

∫_S (∇ × F) · dS = ∮_{∂S} F · dr

Curl measures local circulation density. Stokes converts it into circulation around the boundary curve.

Fourier Series

Fourier series represent periodic functions using orthogonal sine and cosine modes. They are the natural language for periodic structure, oscillations, and PDE boundary problems.

General Form

f(x) = a0/2 + Σ_{n=1}^∞ [an cos(nπx/L) + bn sin(nπx/L)]

Coefficients

an = (1/L) ∫_{-L}^L f(x) cos(nπx/L) dx
bn = (1/L) ∫_{-L}^L f(x) sin(nπx/L) dx
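The coefficient integrals can be evaluated numerically; a midpoint-rule sketch for the odd square wave f(x) = sign(x) on [-L, L] (my own example), confirming that every an vanishes by parity and bn = 4/(nπ) for odd n:

```python
# Numerical Fourier coefficients of the square wave sign(x) on [-L, L].
import math

L = 1.0
N = 20_000   # midpoint-rule panels; 0 falls on a panel boundary

def coeff(n, trig):
    h = 2 * L / N
    total = 0.0
    for i in range(N):
        x = -L + (i + 0.5) * h
        f = 1.0 if x > 0 else -1.0
        total += f * trig(n * math.pi * x / L)
    return total * h / L

for n in (1, 2, 3):
    an, bn = coeff(n, math.cos), coeff(n, math.sin)
    expected_b = 4 / (n * math.pi) if n % 2 else 0.0
    assert abs(an) < 1e-6                  # parity: odd f kills cosines
    assert abs(bn - expected_b) < 1e-6
```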

Parity Shortcuts

  • If f is even, all bn = 0.
  • If f is odd, all an = 0 and a0 = 0.

Orthogonality

The whole method works because the sine and cosine basis functions are orthogonal on the interval.

Discontinuities and Convergence

At jump discontinuities, the Fourier series converges to the midpoint of the left and right limits, not usually to either side value. Partial sums near jumps show Gibbs phenomenon.

Parseval

(1/L) ∫_{-L}^L f(x)² dx = a0²/2 + Σ_{n=1}^∞ (an² + bn²)

This is an energy identity linking the function to its Fourier coefficients.

Differentiating/Integrating Fourier Series

Often allowed term-by-term when regularity is good enough. This is crucial in PDE solving.

Linear Algebra

The Easter term abstracts familiar vector ideas and makes them computational. This is the algebraic backbone behind coordinate changes, Hessians, and PDE mode expansions.

Vector Spaces

A vector space is a set closed under addition and scalar multiplication. Key ideas:

  • Span: all linear combinations of a set of vectors.
  • Linear independence: no nontrivial linear relation.
  • Basis: linearly independent spanning set.
  • Dimension: number of basis vectors.

Matrices

Matrices represent linear maps once a basis is chosen. Learn the basic operations, especially multiplication as composition of maps.

Determinants

  • Detect singularity: det A = 0 iff matrix is singular.
  • Scale oriented volume.
  • Change sign when swapping rows.
  • Multiply over products: det(AB) = det A det B.

Inverse

A^{-1}A = AA^{-1} = I

Exists iff det A ≠ 0. Then linear systems Ax = b have the unique solution x = A^{-1}b.

Kernel and Solvability

Nontrivial kernel means non-uniqueness. Singular matrices can give no solutions or infinitely many solutions depending on whether b lies in the image.

Orthogonal Matrices

O^T O = I

Columns form an orthonormal basis. Orthogonal matrices preserve lengths and angles. In real space, determinant +1 corresponds to rotations and -1 to reflections/orientation reversal.

Eigenvalues and Eigenvectors

Av = λv

Eigenvectors are directions preserved by the map. Eigenvalues give the scaling along those directions.

det(A - λI) = 0

This characteristic equation gives the eigenvalues.

Real Symmetric Matrices

  • All eigenvalues are real.
  • Eigenvectors for distinct eigenvalues are orthogonal.
  • There exists an orthonormal eigenbasis.
A = S D S^T

This diagonalisation is fundamental: it rotates the problem into principal axes. It is exactly why Hessian classification works cleanly.
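For a 2×2 symmetric matrix all of this can be checked by hand; a Python sketch (the matrix is an arbitrary example of mine):

```python
# Eigen-decomposition of A = [[2, 1], [1, 2]]: det(A - λI) = λ² - 4λ + 3,
# so λ = 1, 3 with orthogonal eigenvectors (1, -1) and (1, 1).
import math

A = [[2.0, 1.0], [1.0, 2.0]]

# Characteristic equation λ² - tr(A)λ + det(A) = 0.
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = math.sqrt(tr*tr - 4*det)   # real: symmetric A has a real spectrum
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
assert abs(lam1 - 1.0) < 1e-12 and abs(lam2 - 3.0) < 1e-12

# Orthonormal eigenvectors; check A v = λ v and mutual orthogonality.
v1 = (1/math.sqrt(2), -1/math.sqrt(2))
v2 = (1/math.sqrt(2),  1/math.sqrt(2))
for lam, v in ((lam1, v1), (lam2, v2)):
    Av = (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])
    assert abs(Av[0] - lam*v[0]) < 1e-12 and abs(Av[1] - lam*v[1]) < 1e-12
assert abs(v1[0]*v2[0] + v1[1]*v2[1]) < 1e-12
```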

Partial Differential Equations

The course focuses on three canonical linear second-order PDEs:

  • Laplace equation ∇²φ = 0: equilibrium / steady state.
  • Diffusion equation ∂y/∂t = D ∂²y/∂x²: smoothing/spreading.
  • Wave equation ∂²y/∂t² = c² ∂²y/∂x²: propagation/oscillation.

General Features

  • Linearity gives superposition.
  • Boundary and initial conditions select the physical solution.
  • Separation of variables is the key technique in this course.

Classification

For a second-order linear PDE of the form a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² + (lower-order terms) = 0, the sign of b² - ac gives the type: hyperbolic if b² - ac > 0, parabolic if b² - ac = 0, elliptic if b² - ac < 0.

Laplace Equation

Represents steady-state behaviour. Separable trial:

φ(x,y) = X(x)Y(y)

Boundary conditions then determine the allowed modes and coefficients.

Wave Equation

Travelling-wave solutions take the form f(x-ct) and g(x+ct). On finite intervals with fixed endpoints, standing waves arise and Fourier modes become the natural basis.

Diffusion Equation

Solutions smooth out over time. Delta-like initial data spreads; absorbing boundaries remove mass. Again, separation and Fourier modes dominate bounded-domain solutions.
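A finite-difference check (my own sketch; D and k are arbitrary illustrative values) that a separated mode solves the equation:

```python
# The separated mode y(x, t) = e^{-Dk²t} sin(kx) satisfies ∂y/∂t = D ∂²y/∂x²;
# verify with central differences at a few sample points.
import math

D, k = 0.3, 2.0

def y(x, t):
    return math.exp(-D * k * k * t) * math.sin(k * x)

h = 1e-4
for x, t in [(0.3, 0.1), (1.0, 0.5), (2.0, 1.0)]:
    yt = (y(x, t + h) - y(x, t - h)) / (2 * h)
    yxx = (y(x + h, t) - 2 * y(x, t) + y(x - h, t)) / (h * h)
    assert abs(yt - D * yxx) < 1e-5
```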

In PDE questions, the algebra is only half the job. You also need to state the domain, the boundary conditions, and what the resulting modes mean physically.

Core Formulae

  • Scalar product: a · b = |a||b| cos θ
  • Vector product: |a × b| = |a||b| sin θ
  • Euler: e^{iθ} = cos θ + i sin θ
  • Taylor: f(x) = Σ f^{(n)}(a)(x-a)^n/n! + R
  • Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)
  • Variance: Var(X) = E[X²] - (E[X])²
  • Integrating factor: μ = e^{∫p(x)dx}
  • Hessian test: D = fxx fyy - fxy²
  • Gradient: ∇φ
  • Divergence theorem: ∫_V ∇·F dV = ∮ F·dS
  • Stokes: ∫_S (∇×F)·dS = ∮ F·dr
  • Fourier coefficients: an, bn by orthogonal projection integrals
  • Eigenvalues: det(A - λI) = 0
  • Diagonalisation: A = SDS^T for real symmetric A

Exam Use

What Strong Answers Usually Do

  • State the relevant definition before using it.
  • Exploit symmetry early rather than grinding in bad coordinates.
  • Separate the complementary function from the particular integral in ODEs.
  • Name the method explicitly: integrating factor, Lagrange multipliers, separation of variables, diagonalisation.
  • Check boundary or initial conditions at the end.

Common Failure Modes

  • Using polar/spherical coordinates without changing the measure.
  • Forgetting that complex logarithm and roots are multivalued.
  • Treating every stationary point as max/min without Hessian or sign analysis.
  • Writing Fourier series coefficients without using parity simplifications.
  • Ignoring the condition det A ≠ 0 when inverting matrices.
  • Solving a PDE mode equation but not fitting the boundary data.
The course is broad, but the techniques repeat. If you can recognise structure quickly, the calculations tend to collapse to a standard template.