Discrete Mathematics
Course Map
This course is four things at once: a course about how to prove things, a course about integers and modular arithmetic, a course about sets and functions, and a course about formal languages and automata. The same habits drive all four parts: define objects precisely, reason from the definition, and keep track of the quantifiers.
Skills you are expected to master:
- Translate English statements into logical structure.
- Choose the right proof method.
- Move between extensional and structural definitions.
- Recognise when an object is finite, countable, or too large to list.
- Switch between grammars, regexes, and automata.
Common pitfalls:
- Confusing implication with biconditional.
- Using induction without a clear inductive hypothesis.
- Mixing up injective, surjective, and bijective.
- Treating regular expressions as if they were arbitrary grammars.
- Using the pumping lemma backwards.
Survival Strategy
- First identify the type of statement: universal, existential, equivalence, negation, counting, or closure property.
- Then identify the object type: number, set, relation, function, string, regex, automaton.
- Then pick a proof pattern: direct, contrapositive, contradiction, induction, rule induction, or constructive counterexample.
- Only then do the algebra or symbolic manipulation.
Logic and Proof
The opening lectures are about reading mathematical statements correctly. Most errors in discrete maths are logic errors before they are maths errors.
Core Logical Forms
- P ⇒ Q: If P, then Q. To prove it, assume P and derive Q.
- P ⇔ Q: Equivalent to proving both P ⇒ Q and Q ⇒ P.
- ∀x. P(x): Take an arbitrary x and prove P(x).
- ∃x. P(x): Exhibit a witness and verify it works.
- ¬P: Often prove by contradiction or show P ⇒ false.
- ∃!x. P(x): Unique existence: prove both existence of some witness and uniqueness of that witness.
Logical Equivalences That Matter
P ⇒ Q ≡ ¬P ∨ Q
¬(P ∧ Q) ≡ ¬P ∨ ¬Q
¬(P ∨ Q) ≡ ¬P ∧ ¬Q
¬∀x.P(x) ≡ ∃x.¬P(x)
¬∃x.P(x) ≡ ∀x.¬P(x)
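These equivalences can be checked mechanically. A quick brute-force truth-table sketch (not part of the notes; the helper `implies` and the example predicate are my own):

```python
from itertools import product

implies = lambda p, q: (not p) or q

# Propositional equivalences: check all four truth assignments.
for P, Q in product([False, True], repeat=2):
    assert implies(P, Q) == ((not P) or Q)          # P ⇒ Q ≡ ¬P ∨ Q
    assert (not (P and Q)) == ((not P) or (not Q))  # De Morgan for ∧
    assert (not (P or Q)) == ((not P) and (not Q))  # De Morgan for ∨

# The quantifier dualities become finite checks over a small domain D.
D = range(5)
pred = lambda x: x % 2 == 0   # an arbitrary example predicate
assert (not all(pred(x) for x in D)) == any(not pred(x) for x in D)
assert (not any(pred(x) for x in D)) == all(not pred(x) for x in D)
```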
Two Tiny But Constantly Used Rules
- Modus ponens: from P and P ⇒ Q, conclude Q.
- Contrapositive: to prove P ⇒ Q, it is enough to prove ¬Q ⇒ ¬P.
Main Proof Styles
| Method | When To Use |
|---|---|
| Direct proof | Definitions and algebra lead naturally to the target. |
| Contrapositive | The negation of the conclusion is easier to use. |
| Contradiction | Assuming the opposite produces an impossible object or statement. |
| Case split | Domain naturally breaks into exhaustive alternatives. |
| Counterexample | To disprove a universal claim. |
Proof By Contradiction
To prove P, assume ¬P and derive a contradiction. This is especially common in number theory, non-regularity, and infinitude arguments.
Divisibility and Congruence
Divisibility
a | b means ∃k ∈ Z such that b = ak
Many proofs here are just “unpack the definition, rewrite, and repack”.
Congruence Modulo m
a ≡ b (mod m) means m | (a - b)
This is equality up to multiples of m. It behaves like ordinary arithmetic:
a ≡ b (mod m), c ≡ d (mod m)
⇒ a + c ≡ b + d (mod m)
⇒ ac ≡ bd (mod m)
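A quick numeric spot-check of these two compatibility rules (illustrative values only):

```python
m = 7
a, b = 23, 2      # 23 ≡ 2 (mod 7) since 7 | 21
c, d = 19, 5      # 19 ≡ 5 (mod 7) since 7 | 14
assert (a - b) % m == 0 and (c - d) % m == 0

assert (a + c - (b + d)) % m == 0   # a + c ≡ b + d (mod m)
assert (a * c - b * d) % m == 0     # ac ≡ bd (mod m)
```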
Division Algorithm
For integers m and positive n, there exist unique integers q,r such that
m = qn + r, 0 ≤ r < n
This is the foundation for remainders, modular arithmetic, and Euclid’s algorithm.
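Python's built-in divmod realises the division algorithm for positive n; a small check, including a negative m (my own example values):

```python
m, n = -17, 5
q, r = divmod(m, n)           # floor division keeps the remainder in [0, n)
assert m == q * n + r
assert 0 <= r < n             # here q, r == -4, 3
```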
GCD, Euclid, Extended Euclid, Fermat
Greatest Common Divisor
gcd(m,n) is the positive integer dividing both m and n, and divisible by every common divisor.
Euclid's Algorithm
gcd(m,n) = gcd(n,r) where m = qn + r
Repeat until the remainder is 0. The last nonzero remainder is the gcd.
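The loop above can be sketched in a few lines (a standard iterative version; the function name is my own):

```python
def gcd(m: int, n: int) -> int:
    while n != 0:
        m, n = n, m % n   # gcd(m, n) = gcd(n, r) with m = qn + r
    return m              # the last nonzero remainder

assert gcd(48, 18) == 6
assert gcd(17, 5) == 1
```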
Bézout / Extended Euclid
Extended Euclid computes integers s,t such that
sm + tn = gcd(m,n)
This is the key to modular inverses.
Multiplicative Inverses Modulo m
[a] has an inverse in Z_m iff gcd(a,m)=1. Then the coefficient t from Bézout gives the inverse modulo m.
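A minimal sketch of extended Euclid and the resulting inverse (names s, t match the Bézout identity above; the helper functions are my own):

```python
def extended_gcd(m: int, n: int):
    """Return (g, s, t) with s*m + t*n = g = gcd(m, n)."""
    if n == 0:
        return m, 1, 0
    g, s, t = extended_gcd(n, m % n)
    return g, t, s - (m // n) * t

def inverse_mod(a: int, m: int) -> int:
    g, _, t = extended_gcd(m, a)   # t*a ≡ g (mod m)
    if g != 1:
        raise ValueError("no inverse: gcd(a, m) != 1")
    return t % m

g, s, t = extended_gcd(240, 46)
assert g == 2 and s * 240 + t * 46 == 2
assert (inverse_mod(3, 7) * 3) % 7 == 1   # 3 * 5 = 15 ≡ 1 (mod 7)
```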
Useful Coprime Fact
A recurring exam move is:
gcd(a,b)=1 and a|n and b|n ⇒ ab|n
The converse also matters: if this divisibility implication holds for all integers n, then gcd(a,b)=1. This is exactly the style of one official 2023 exam question.
Fermat's Little Theorem
For prime p and integer n:
n^p ≡ n (mod p)
If p ∤ n, then n^{p-1} ≡ 1 (mod p)
It gives modular inverses in prime fields:
n^{-1} ≡ n^{p-2} (mod p)
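Both statements are easy to check with Python's three-argument pow, which does fast modular exponentiation (example values are my own):

```python
p = 101                          # a prime
n = 17
assert pow(n, p, p) == n % p     # n^p ≡ n (mod p)

inv = pow(n, p - 2, p)           # n^{-1} ≡ n^{p-2} (mod p), valid since p ∤ n
assert (n * inv) % p == 1
```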
Cryptography Link
The course discusses Diffie-Hellman and modular exponentiation as concrete applications of inverses and arithmetic in modular systems.
Induction
Induction proves claims indexed by natural numbers. Structurally, it says: prove the base case, then prove the step from n to n+1.
Standard Induction Template
- State the proposition P(n).
- Prove P(0) or P(1) depending on indexing.
- Assume P(n) as the inductive hypothesis.
- Use it to prove P(n+1).
Strong Induction
Assume all earlier cases up to n in order to prove n+1. In the later lectures, this is reframed as well-founded induction.
Canonical Uses In This Course
- Binomial theorem and Pascal identities.
- Fundamental theorem of arithmetic.
- Euclid’s infinitude of primes.
- Definitions by recursion on natural numbers.
What To Watch In Induction Proofs
- Do not assume the result for n+1 anywhere in the step.
- State clearly which proposition is the inductive hypothesis.
- If the recursive structure is not “add 1”, use strong or well-founded induction instead.
Sets and Set Language
Sets are treated axiomatically but operationally: you need to be fluent with membership, subset, products, powersets, unions, and partitions.
Basic Laws
A = B iff ∀x. (x ∈ A ⇔ x ∈ B)
A ⊆ B iff ∀x. (x ∈ A ⇒ x ∈ B)
Nearly every set equality proof is extensionality: show both contain exactly the same elements.
Important Constructions
- Powerset P(A) is the set of subsets of A.
- Product A × B is the set of ordered pairs.
- Big union ⋃_{i∈I} A_i contains elements from at least one member of the family.
- Big intersection ⋂_{i∈I} A_i contains elements common to all members.
- Disjoint union: like a union, but remembers where elements came from.
Why Russell's Paradox Appears
The notes use it to motivate disciplined set formation: not every informal collection can be treated as a set.
Relations, Matrices, Directed Graphs
Relation
R : A ↛ B means R ⊆ A × B
Write a R b instead of (a,b) ∈ R.
Composition
a (S ∘ R) c iff ∃b. (a R b ∧ b S c)
Composition is associative and identity relations act as units.
Special Relation Types
- Reflexive: aRa for all a.
- Symmetric: aRb ⇒ bRa.
- Transitive: aRb ∧ bRc ⇒ aRc.
- Antisymmetric: aRb ∧ bRa ⇒ a=b.
Equivalence relations are reflexive + symmetric + transitive. A preorder is reflexive + transitive. A partial order is reflexive + antisymmetric + transitive.
Relations as Boolean Matrices
For finite sets, a relation can be tabulated as a matrix over the Boolean semiring. Then:
- matrix addition corresponds to union of relations;
- matrix multiplication corresponds to relational composition.
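The multiplication side of this correspondence, written out with or/and in place of +/× (a direct transcription; the 0/1-matrix encoding and example relations are my own):

```python
def compose(R, S):
    """Matrix of S ∘ R, where R : A ↛ B and S : B ↛ C are 0/1 matrices."""
    nB, nC = len(S), len(S[0])
    return [[int(any(R[a][b] and S[b][c] for b in range(nB)))
             for c in range(nC)] for a in range(len(R))]

R = [[1, 0], [0, 1]]          # identity relation on a 2-element set
S = [[0, 1], [1, 0]]          # swap relation
assert compose(R, S) == S     # identity relations act as units
assert compose(S, S) == R     # swap composed with itself is the identity
```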
Directed Graphs
A directed graph is a set A with an endo-relation R : A ↛ A. A path of length n is a sequence of n composable edges.
The path relation is the reflexive-transitive closure of the edge relation.
Functions, Bijections, Equivalence Relations
Partial vs Total Functions
A partial function is a relation that gives at most one output for each input; a total function is defined on every input.
f : A ⇀ B partial
f : A → B total
Injective / Surjective / Bijective
- Injective: different inputs have different outputs.
- Surjective: every element of the codomain is hit.
- Bijective: both injective and surjective; equivalently, has a two-sided inverse.
Sections and Retractions
The notes emphasise them because they give a clean algebra of inverses:
g is a retraction of f iff g ∘ f = id
g is a section of f iff f ∘ g = id
- Having a left inverse implies injective.
- Having a right inverse implies surjective.
Image Factorisation
Any function f : A → B factors as a surjection followed by an injection:
A ↠ im(f) ↪ B
This is conceptually simple but very useful when proving statements about injectivity, surjectivity, and cardinality.
Equivalence Relations and Partitions
An equivalence relation chops a set into equivalence classes. These classes form a partition, and conversely every partition determines an equivalence relation.
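For a concrete picture, congruence mod 3 partitions {0,...,8} into three classes; a quick sketch checking the partition properties (example entirely my own):

```python
A = range(9)
classes = {}
for x in A:
    classes.setdefault(x % 3, []).append(x)   # class of x, keyed by x mod 3

partition = list(classes.values())
assert sorted(sum(partition, [])) == list(A)          # classes cover A
assert all(set(p).isdisjoint(q) for p in partition
           for q in partition if p is not q)          # pairwise disjoint
```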
Indicator Functions
Subsets of A correspond to characteristic functions A → [2]. This is central later in diagonalisation arguments.
Finite Cardinality, Countability, Choice
Finite Cardinality
A set has cardinality n if it is in bijection with [n] = {0,...,n-1}. Counting arguments in the course are really bijection arguments.
Calculus of Finite Cardinalities
|A ⊔ B| = |A| + |B|
|A × B| = |A||B|
|B^A| = |B|^{|A|}
The last identity is the counting form of “functions from A to B”. Indicator functions give |P(A)| = 2^{|A|} for finite A.
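Both counting identities can be verified by brute-force enumeration on small sets (a sketch; the encoding of a function as a tuple of outputs is my own):

```python
from itertools import product

A = [0, 1, 2]
B = ['x', 'y']

# A function A → B is a choice of one element of B per element of A.
functions = list(product(B, repeat=len(A)))
assert len(functions) == len(B) ** len(A)   # |B^A| = |B|^{|A|}, here 2^3 = 8

# Indicator functions A → {0, 1} enumerate the powerset: |P(A)| = 2^{|A|}.
subsets = [{a for a, bit in zip(A, chi) if bit}
           for chi in product([0, 1], repeat=len(A))]
assert len(subsets) == 2 ** len(A)
```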
Enumerable and Countable
- Enumerable: in bijection with N.
- Countable: finite or enumerable.
Standard examples: N, Z, and Q are countable.
Closure Facts
- Finite products of countable sets are countable.
- Finite or countable disjoint unions of countable sets are countable.
- Subsets of countable sets are countable.
Axiom of Choice
In these notes it appears through sections, selections, and arguments that pick representatives from families of nonempty sets.
Images, Diagonalisation, Cantor, Foundation
Images
Direct image: f[X] = {f(x) | x ∈ X}
Inverse image: f^{-1}[Y] = {x | f(x) ∈ Y}
Inverse image behaves especially well with unions, intersections, and complements. Direct image is generally better behaved with unions than with intersections.
f^{-1}[Y ∪ Z] = f^{-1}[Y] ∪ f^{-1}[Z]
f^{-1}[Y ∩ Z] = f^{-1}[Y] ∩ f^{-1}[Z]
f^{-1}[B \ Y] = A \ f^{-1}[Y]   for f : A → B
f[X ∪ X'] = f[X] ∪ f[X']
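These laws, and the failure of direct image for intersections, can be seen on a small non-injective example (values my own):

```python
A = {-2, -1, 0, 1, 2}
f = lambda x: x * x          # not injective, so direct images lose information
Y, Z = {1, 4}, {0, 4}

pre = lambda S: {x for x in A if f(x) in S}   # inverse image
img = lambda X: {f(x) for x in X}             # direct image

assert pre(Y | Z) == pre(Y) | pre(Z)
assert pre(Y & Z) == pre(Y) & pre(Z)          # preimage respects intersections

X1, X2 = {-1, 0}, {1, 2}
assert img(X1 | X2) == img(X1) | img(X2)
# Direct image only satisfies ⊆ for intersections: here {} ⊂ {1}.
assert img(X1 & X2) <= img(X1) & img(X2)
```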
Cantor's Theorem
No function A → P(A) is surjective
So P(A) is strictly larger than A. This is the core unbounded-cardinality fact.
Diagonalisation
Assume an enumeration of all subsets/functions/bit-strings. Build a new one that differs from the nth object at the nth place. It then differs from every object on the list, so it is missing from the list, contradicting the assumed enumeration.
This proves uncountability of the reals by encoding reals in [0,1] as infinite binary sequences.
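The diagonal construction on a finite prefix of such a list (illustrative only; any list of 0/1 sequences works):

```python
listed = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
# Flip the diagonal: the new sequence differs from the nth one at place n.
diagonal = [1 - listed[n][n] for n in range(len(listed))]

for n, seq in enumerate(listed):
    assert diagonal[n] != seq[n]
assert diagonal not in listed
```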
Cantor-Schröder-Bernstein
If there are injections both ways between A and B, then there is a bijection between them. This lets you prove equality of cardinality without building the bijection directly.
Well-Foundedness and Foundation
A relation is well-founded when there are no infinitely descending chains of the relevant kind, equivalently every nonempty subset has a minimal element. This gives well-founded induction. The foundation axiom rules out pathological self-membership loops in set theory.
Replacement and Families of Sets
The later set-theory notes also discuss set-indexed constructions: a family of sets indexed by another set, and the principle that applying a functional rule across a set gives a set of outputs. For revision purposes, remember this mainly as permission to form images and indexed collections without leaving set theory.
Formal Languages, Inductive Definitions, Rule Induction
A formal language over an alphabet Σ is just a subset of Σ*. The second half of the course is about defining languages structurally rather than only extensionally.
Alphabet, String, Language
Σ finite alphabet
Σ* all finite strings over Σ
ε empty string
L ⊆ Σ* formal language
Concrete vs Abstract Syntax
The later HTML notes explicitly distinguish concrete syntax as strings from abstract syntax as trees. Parsing converts the former into the latter. This matters because structural induction is usually really induction on abstract syntax trees or derivations.
Inductive Definitions
Give axioms and rules that generate exactly the intended objects. Membership is witnessed by a finite derivation tree.
Rule Induction
To prove every inductively generated object has property P:
- Show every axiom-generated object has P.
- Show each rule preserves P, assuming its premises have P.
This is the structural induction principle for rule-generated objects.
Regular Expressions
Syntax
Regular expressions are themselves syntactic objects, built inductively from:
- symbols a ∈ Σ, ε, ∅,
- union r | s,
- concatenation rs,
- Kleene star r*.
Semantics
Each regex denotes a language:
L(∅) = ∅
L(ε) = {ε}
L(a) = {a}
L(r|s) = L(r) ∪ L(s)
L(rs) = {uv | u ∈ L(r), v ∈ L(s)}
L(r*) = ⋃_{n≥0} L(r)^n
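The semantic equations translate directly into a recursive evaluator. A toy sketch that computes the strings of L(r) up to a length bound (the nested-tuple encoding of regexes and the name `lang` are my own):

```python
def lang(r, k):
    """All strings of L(r) with length ≤ k; r is a nested-tuple regex."""
    op = r[0]
    if op == 'empty':  return set()                 # L(∅) = ∅
    if op == 'eps':    return {''}                  # L(ε) = {ε}
    if op == 'sym':    return {r[1]}                # L(a) = {a}
    if op == 'union':  return lang(r[1], k) | lang(r[2], k)
    if op == 'cat':                                 # pairwise concatenation
        return {u + v for u in lang(r[1], k) for v in lang(r[2], k)
                if len(u + v) <= k}
    if op == 'star':                                # ⋃_{n≥0} L(r)^n, truncated
        result, frontier = {''}, {''}
        while frontier:
            frontier = {u + v for u in frontier for v in lang(r[1], k)
                        if len(u + v) <= k} - result
            result |= frontier
        return result

# (a|b)* restricted to length ≤ 2:
r = ('star', ('union', ('sym', 'a'), ('sym', 'b')))
assert lang(r, 2) == {'', 'a', 'b', 'aa', 'ab', 'ba', 'bb'}
```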
Algebra
Regular expressions satisfy useful equations, but remember these are semantic equalities between denoted languages, not necessarily literal string equalities of regex syntax.
Finite Automata
Deterministic Finite Automaton
A DFA consists of a finite state set, a start state, a transition function, and accepting states.
Operational Picture
- Read the input string one symbol at a time.
- Move between states according to the transition function.
- Accept iff the final state is accepting.
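The three steps above can be sketched in a few lines. The example machine (states, alphabet, and accepting set all my own) accepts strings over {a, b} with an even number of a's:

```python
delta = {
    ('even', 'a'): 'odd',  ('even', 'b'): 'even',
    ('odd',  'a'): 'even', ('odd',  'b'): 'odd',
}
start, accepting = 'even', {'even'}

def accepts(w: str) -> bool:
    state = start
    for symbol in w:                  # read one symbol at a time
        state = delta[(state, symbol)]
    return state in accepting         # accept iff the final state is accepting

assert accepts('') and accepts('aa') and accepts('abab')
assert not accepts('a') and not accepts('bab')
```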
Nondeterminism
NFA-style machines may branch or use ε-moves. They are no more expressive than DFAs (the subset construction determinises any NFA), only more convenient to describe.
Exam Skill
You should be comfortable tracing an input through an automaton, constructing a small automaton for a simple pattern, and explaining why a machine accepts exactly the intended language.
Language of an Automaton
L(M) = { w ∈ Σ* | M accepts w }
Regular Languages and Kleene's Theorem
Kleene’s theorem is the central equivalence:
A language is regular iff it is accepted by a finite automaton
Why It Matters
- Regexes are good for describing languages compactly.
- Automata are good for deciding membership operationally.
- Kleene’s theorem lets you move between them.
Two Directions
- Regex → automaton: build an automaton recursively from the regex operations.
- Automaton → regex: eliminate states or otherwise encode accepted paths by regexes.
Closure Properties
Regular languages are closed under union, concatenation, star, and many other constructions because both automata and regex presentations support them.
If you are asked whether a language is regular, you usually try one of two things: either build a regex/automaton, or prove regularity impossible via the pumping lemma. Kleene's theorem is the bridge that makes those equivalent ways of succeeding.
Pumping Lemma
The pumping lemma is a necessary property of regular languages, not a sufficient one.
Statement Shape
If L is regular, then there exists a pumping length ℓ such that every string w ∈ L with |w| ≥ ℓ can be written
w = u1 v u2
with:
- v ≠ ε,
- |u1 v| ≤ ℓ,
- u1 v^n u2 ∈ L for all n ≥ 0.
Why It Is True
In a DFA with ℓ states, an accepting run on a string of length at least ℓ visits ℓ+1 states while reading the first ℓ symbols, so by pigeonhole it must repeat a state within the first ℓ transitions. That loop can be pumped.
How To Use It To Prove Non-Regularity
- Assume the language is regular.
- Let ℓ be its pumping length.
- Choose a specific w ∈ L with |w| ≥ ℓ.
- Show that for every allowed decomposition w = u1 v u2, pumping breaks membership in L.
- Contradiction, so L is not regular.
Classic Targets
- { a^n b^n | n ≥ 0 }
- palindromes
- well-nested parentheses
Proof Patterns To Memorise
Set Equality
Take arbitrary x.
Show x ∈ A ⇒ x ∈ B.
Show x ∈ B ⇒ x ∈ A.
Conclude A = B by extensionality.
Injective
Assume f(x) = f(y).
Manipulate until x = y.
Surjective
Take arbitrary y in codomain.
Construct x in domain with f(x) = y.
Induction
Base case.
Inductive hypothesis.
Inductive step.
Conclusion.
Non-Regularity
Assume regular.
Let ℓ be pumping length.
Choose witness w.
Consider arbitrary allowed split.
Pump down or up.
Contradiction.
Exam Checklist
- Did you write the quantifiers in the right order?
- If proving an implication, did you explicitly assume the antecedent?
- If proving equality of sets/relations/functions, did you use extensionality?
- If using induction, is the inductive hypothesis exactly what you need?
- If using modular arithmetic, did you keep the modulus fixed and legal?
- If claiming an inverse modulo m, did you check gcd = 1?
- If discussing a relation, did you distinguish reflexive/transitive/symmetric/antisymmetric correctly?
- If discussing a language, are you talking about strings, syntax trees, regex syntax, or the denoted language?
- If using pumping lemma, did you quantify over all possible decompositions?