// WEEK 1 — CH.1: LINEAR EQUATIONS (§1.1–1.4)
// 1.1–1.4 CORE CONCEPTS
DEFINITION
Linear System & Solution
System of m equations in n unknowns x₁…xₙ. A solution is a list (s₁…sₙ) satisfying all equations. The collection of all solutions = solution set.
a₁₁x₁ + a₁₂x₂ + … + a₁ₙxₙ = b₁
⋮
aₘ₁x₁ + aₘ₂x₂ + … + aₘₙxₙ = bₘ → matrix form: Ax = b
KEY FACT
Consistent vs Inconsistent
INCONSISTENT
⟺ NO SOLUTION
Row gives 0 = c, c≠0
CONSISTENT
⟺ ≥1 SOLUTION
Either 1 or ∞ many
DEFINITION
Row Echelon Form (REF) & RREF
REF: each leading entry (pivot) = 1, each pivot lies to the right of the pivot in the row above, entries below each pivot are 0, zero rows at bottom.
RREF: REF + each pivot is the ONLY nonzero entry in its column.
REF:        RREF:
[1 2 3]     [1 0 0]
[0 1 4]     [0 1 0]
[0 0 1]     [0 0 1]
ALGORITHM
Row Reduction Steps
1. Find leftmost nonzero column → pivot column
2. Swap to get nonzero at top, scale to make pivot = 1
3. Zero out all entries below pivot (EROs)
4. Repeat for submatrix → gives REF
5. Zero out entries above each pivot → gives RREF
EROs: Rᵢ ↔ Rⱼ | c·Rᵢ→Rᵢ (c≠0) | Rᵢ+c·Rⱼ→Rᵢ
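The five steps above can be sketched in numpy (a minimal sketch; the helper name `rref` and the tolerance are our choices, not from the notes):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce a matrix to reduced row echelon form (steps 1-5 above)."""
    R = A.astype(float)
    m, n = R.shape
    pivot_row = 0
    for col in range(n):
        if pivot_row >= m:
            break
        # Steps 1-2: find a nonzero entry at/below pivot_row, swap it up, scale to 1
        candidates = np.where(np.abs(R[pivot_row:, col]) > tol)[0]
        if candidates.size == 0:
            continue                                  # no pivot in this column
        swap = pivot_row + candidates[0]
        R[[pivot_row, swap]] = R[[swap, pivot_row]]   # ERO: Ri <-> Rj
        R[pivot_row] /= R[pivot_row, col]             # ERO: (1/pivot)*Ri -> Ri
        # Steps 3 & 5: zero out every other entry in the pivot column
        for r in range(m):
            if r != pivot_row:
                R[r] -= R[r, col] * R[pivot_row]      # ERO: Ri + c*Rj -> Ri
        pivot_row += 1
    return R

A = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [0., 0., 1.]])
print(rref(A))   # a pivot in every column, so RREF is the identity
```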
THEOREM
3 Cases (Existence & Uniqueness)
d = n − rank(A) (free parameters)
Case 1: inconsistent row (0…0|c), c≠0 → NO SOLUTION
Case 2: d = 0, rank=n → UNIQUE SOLUTION
Case 3: d ≥ 1, rank<n → ∞ MANY SOLUTIONS
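The three cases can be checked by comparing ranks of A and [A|b] (a sketch; the helper name `classify` is ours, and `np.linalg.matrix_rank` is used for the rank computation):

```python
import numpy as np

def classify(A, b):
    """Existence & uniqueness via ranks; d = n - rank(A) counts free parameters."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_Ab > rank_A:                 # augmented row (0...0 | c), c != 0
        return "no solution"
    if rank_A == n:                      # d = 0
        return "unique solution"
    return f"infinitely many ({n - rank_A} free)"

print(classify([[1, 1], [0, 0]], [2, 3]))   # no solution
print(classify([[1, 1], [0, 1]], [2, 3]))   # unique solution
print(classify([[1, 1], [2, 2]], [2, 4]))   # infinitely many (1 free)
```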
// WEEK 2 — CH.1: SOLUTION SETS, INDEPENDENCE (§1.5–1.7)
// 1.5 SOLUTION SETS OF LINEAR SYSTEMS
DEFINITION
Homogeneous System
A system Ax = 0 (all constants = 0) is called homogeneous. It is ALWAYS consistent — trivial solution x = 0 always exists.
Ax = 0 → trivial solution: x = (0, 0, …, 0) always works
Nontrivial solutions exist ⟺ system has FREE VARIABLES ⟺ rank(A) < n
KEY FACT
Parametric Vector Form
When free variables exist, write the solution in parametric vector form: x = p + t·v (possibly with more parameters). Here p is a particular solution and the vectors v span the solution set of the associated homogeneous system Ax = 0.
Example — solution set of Ax=0:
x = t·v₁ + s·v₂ (t, s ∈ ℝ) — passes through origin
Solution set of Ax=b (nonhomogeneous, consistent):
x = p + t·v₁ + s·v₂ — translated parallel to homogeneous solution set
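A numerical sketch of the parametric form for a consistent system (assumptions: `np.linalg.lstsq` supplies a particular solution p, and the null-space basis is read off the SVD's right singular vectors; the weights 0.7 and -1.3 are arbitrary):

```python
import numpy as np

# Solution set of Ax = b as p + span of null-space vectors
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # rank 1, so d = 3 - 1 = 2 free parameters
b = np.array([6., 12.])              # consistent: b = 6 * (first column stack)

p, *_ = np.linalg.lstsq(A, b, rcond=None)    # one particular solution
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[np.linalg.matrix_rank(A):]   # rows v1, v2 spanning Null(A)

# Any p + t*v1 + s*v2 solves Ax = b:
x = p + 0.7 * null_basis[0] - 1.3 * null_basis[1]
print(np.allclose(A @ x, b))    # True
```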
THEOREM 1.6
Nonhomogeneous Solution Structure
If p is any particular solution to Ax = b, then every solution has the form:
x = p + xₕ
where xₕ is any solution to the homogeneous system Ax = 0
Geometric picture: solution set of Ax=b is a translate of solution set of Ax=0.
// 1.6 APPLICATIONS OF LINEAR SYSTEMS
APPLICATION
Network Flow & Balancing
Set up equations for each node: flow in = flow out. Write as linear system, row-reduce. Free variables = adjustable flows.
APPLICATION
Balancing Chemical Equations
Assign unknown coefficients to each molecule. For each element, write: atoms in = atoms out. Solve the linear system for the coefficients.
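A worked sketch for 2 H₂ + O₂ → 2 H₂O: one homogeneous equation per element, products moved to the left with minus signs (reading the null-space direction off the SVD and rescaling to integers are our choices):

```python
import numpy as np

# Balance c1*H2 + c2*O2 -> c3*H2O.  Atoms in = atoms out for each element:
#   H: 2*c1        - 2*c3 = 0
#   O:        2*c2 -   c3 = 0
A = np.array([[2., 0., -2.],
              [0., 2., -1.]])

_, _, Vt = np.linalg.svd(A)
v = Vt[-1]                               # spans the 1-dim null space
coeffs = v / v[np.argmin(np.abs(v))]     # rescale to small whole numbers
print(coeffs)                            # [2. 1. 2.] -> 2 H2 + O2 -> 2 H2O
```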
// 1.7 LINEAR INDEPENDENCE
DEFINITION
Linear Independence / Dependence
Vectors {v₁, v₂, …, vₚ} are linearly independent if the only solution to c₁v₁ + c₂v₂ + … + cₚvₚ = 0 is c₁=c₂=…=cₚ=0 (trivial).
They are linearly dependent if a nontrivial solution exists (some cᵢ ≠ 0).
Test: put vectors as columns → [v₁ v₂ … vₚ]x = 0
Row reduce. If only trivial solution → INDEPENDENT
If free variable exists → DEPENDENT
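The test above amounts to a rank check: independent ⟺ rank equals the number of vectors, i.e. no free variable (a sketch; the helper name `independent` is ours):

```python
import numpy as np

def independent(*vectors):
    """Columns independent  <=>  rank = number of vectors (no free variable)."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

print(independent([1, 0, 0], [0, 1, 0], [1, 1, 0]))   # False: v3 = v1 + v2
print(independent([1, 0, 0], [0, 1, 0], [0, 0, 1]))   # True
```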
KEY THEOREMS
Linear Dependence Facts
1. A set with the ZERO VECTOR is always dependent.
2. A set of 2 vectors is dependent ⟺ one is a scalar multiple of the other.
3. If p > n (more vectors than entries): ALWAYS dependent.
(More columns than rows → free variable guaranteed)
4. {v₁,…,vₚ} dependent ⟺ at least one vector is a linear combination
of the others (and if v₁ ≠ 0, some vⱼ is a combination of the
PRECEDING vectors v₁,…,vⱼ₋₁).
// WEEK 3 — CH.1&2: TRANSFORMATIONS, MATRIX OPS (§1.8–2.1)
// 1.8 LINEAR TRANSFORMATIONS
DEFINITION
Linear Transformation
A transformation T: ℝⁿ → ℝᵐ is linear if for all u, v ∈ ℝⁿ and scalars c:
1. T(u + v) = T(u) + T(v) (additivity)
2. T(cu) = cT(u) (homogeneity)
Consequence: T(0) = 0 always
T(cu + dv) = cT(u) + dT(v) (superposition)
KEY IDEA
Matrix Multiplication = Linear Transformation
Every matrix transformation T(x) = Ax is linear. The matrix A IS the transformation: its columns tell you where each standard basis vector goes.
T: ℝⁿ → ℝᵐ defined by T(x) = Ax where A is m×n
Domain = ℝⁿ, Codomain = ℝᵐ
Range = set of all Ax = column space of A
// 1.9 MATRIX OF A LINEAR TRANSFORMATION
THEOREM 1.10
Standard Matrix of T
For any linear T: ℝⁿ → ℝᵐ, there exists a unique matrix A such that T(x) = Ax. This matrix is:
A = [T(e₁) T(e₂) … T(eₙ)]
where e₁, e₂, …, eₙ are standard basis vectors of ℝⁿ.
Just apply T to each basis vector → that's the column!
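Building the standard matrix column-by-column, with rotation by 90° counterclockwise as an illustrative choice of T:

```python
import numpy as np

def T(x):
    """Rotate a vector in R^2 by 90 degrees counterclockwise."""
    x1, x2 = x
    return np.array([-x2, x1])

n = 2
# A = [T(e1) T(e2)]: apply T to each standard basis vector, stack as columns
A = np.column_stack([T(e) for e in np.eye(n)])
print(A)                          # [[0. -1.], [1. 0.]]

x = np.array([3., 4.])
print(np.allclose(T(x), A @ x))   # True: T(x) = Ax for every x
```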
DEFINITION
Onto and One-to-One
ONE-TO-ONE: T(u) = T(v) ⟹ u = v
⟺ Ax = 0 has only the trivial solution
⟺ columns of A linearly independent (pivot in every COLUMN)
ONTO: every b in ℝᵐ equals Ax for some x
⟺ A has a pivot in every ROW
⟺ columns of A span ℝᵐ
// 2.1 MATRIX OPERATIONS
DEFINITION
Matrix Operations
Addition: A + B (same size, add entry-by-entry)
Scalar mult: cA (multiply every entry by c)
Multiplication: (AB)ᵢⱼ = row i of A · col j of B
A is m×n, B is n×p → AB is m×p
Transpose: (Aᵀ)ᵢⱼ = Aⱼᵢ (flip rows/cols)
IMPORTANT PROPERTIES
Matrix Algebra Rules
A(BC) = (AB)C (associative)
A(B+C) = AB + AC (distributive)
(A+B)C = AC + BC
(AB)ᵀ = BᵀAᵀ (reverse order!)
(ABC)ᵀ = CᵀBᵀAᵀ
✗ AB ≠ BA in general (NOT commutative!)
✗ AB = AC does NOT imply B = C
✗ AB = 0 does NOT imply A=0 or B=0
REMEMBER
Column View of AB
Each column of AB = A times the corresponding column of B.
AB = A[b₁ b₂ … bₚ] = [Ab₁ Ab₂ … Abₚ]
// WEEK 4 — CH.2: INVERSE, IMT, LU (§2.2–2.5)
// 2.2 INVERSE OF A MATRIX
DEFINITION
Invertible (Nonsingular) Matrix
An n×n matrix A is invertible if there exists an n×n matrix C such that AC = CA = I. Then C = A⁻¹ (unique). A non-invertible matrix is called singular.
If A invertible: Ax = b has unique solution x = A⁻¹b
(A⁻¹)⁻¹ = A
(AB)⁻¹ = B⁻¹A⁻¹ ← REVERSE ORDER
(Aᵀ)⁻¹ = (A⁻¹)ᵀ
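A quick numerical check of the reverse-order and transpose rules (the 2×2 matrices are arbitrary invertible examples):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 2.],
              [0., 1.]])

# (AB)^-1 = B^-1 A^-1 (reverse order)
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))                                 # True

# (A^T)^-1 = (A^-1)^T
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))   # True
```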
ALGORITHM
Computing A⁻¹ by Row Reduction
Augment A with identity, row reduce to RREF. If A reduces to I, the right side becomes A⁻¹.
[ A | I ] → row reduce → [ I | A⁻¹ ]
If left side CANNOT reduce to I → A is SINGULAR (no inverse)
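The [A | I] algorithm as a sketch (partial pivoting and the tolerance are our additions for numerical safety; the helper name is ours):

```python
import numpy as np

def inverse_by_rref(A):
    """Row reduce [A | I]; if the left block becomes I, the right block is A^-1."""
    A = A.astype(float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                       # augmented [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # partial pivoting
        if abs(M[pivot, col]) < 1e-12:
            raise ValueError("A is singular (no inverse)")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                                     # right half is A^-1

A = np.array([[2., 1.],
              [1., 1.]])
print(inverse_by_rref(A))                               # [[1. -1.], [-1. 2.]]
print(np.allclose(A @ inverse_by_rref(A), np.eye(2)))   # True
```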
FORMULA
2×2 Inverse Formula
A = [a b]     A⁻¹ = 1/(ad − bc) · [ d  -b]
    [c d]                         [-c   a]
det(A) = ad - bc
If det(A) = 0 → A is SINGULAR (not invertible)
// 2.3 INVERTIBLE MATRIX THEOREM (IMT)
THEOREM 2.8 — THE BIG ONE
Invertible Matrix Theorem (IMT)
For an n×n matrix A, ALL of the following are equivalent:
a) A is invertible
b) A is row equivalent to Iₙ
c) A has n pivot positions
d) Ax = 0 has only the trivial solution
e) Columns of A are linearly independent
f) T(x)=Ax is one-to-one
g) Ax = b has a solution for every b in ℝⁿ
h) Columns of A span ℝⁿ
i) T(x)=Ax is onto
j) There exists C such that CA = I
k) There exists D such that AD = I
l) Aᵀ is invertible
⚡ If ANY ONE is true → ALL are true. If ANY ONE is false → ALL are false.
// 2.4 PARTITIONED MATRICES
DEFINITION
Block Matrix Multiplication
Matrices can be divided into blocks (submatrices). Multiplication works block-by-block, same rules as regular multiplication — as long as block sizes are compatible.
[ A₁₁ A₁₂ ] [ B₁ ]   [ A₁₁B₁ + A₁₂B₂ ]
[ A₂₁ A₂₂ ] [ B₂ ] = [ A₂₁B₁ + A₂₂B₂ ]
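Verifying the block product numerically (the 4×4/4×2 sizes, the 2×2 partition, and the random seed are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 2))

# Partition A into a 2x2 grid of blocks and B into a stacked pair of blocks
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B1, B2 = B[:2], B[2:]

top = A11 @ B1 + A12 @ B2
bottom = A21 @ B1 + A22 @ B2
print(np.allclose(np.vstack([top, bottom]), A @ B))   # True
```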
// 2.5 MATRIX FACTORIZATIONS — LU DECOMPOSITION
DEFINITION
LU Factorization
An m×n matrix A (with no row swaps needed) can be written as A = LU where:
L = m×m lower triangular matrix with 1s on diagonal
U = m×n upper triangular (REF of A)
A = LU
To solve Ax = b using LU:
Step 1: Solve Ly = b (forward substitution) → find y
Step 2: Solve Ux = y (back substitution) → find x
WHY useful: Factor A once, solve for many different b vectors cheaply!
ALGORITHM
Finding L and U
1. Row reduce A to U (REF) using only row replacements
(Rᵢ + c·Rⱼ → Rᵢ, NO swaps, NO scaling)
2. L = identity with multipliers placed below diagonal:
If you used "Rᵢ − c·Rⱼ → Rᵢ" to eliminate,
then place c in position (i,j) of L.
Check: multiply L × U = A
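The factor-then-solve workflow as a sketch (no pivoting, so nonzero pivots are assumed; `np.linalg.solve` stands in for the dedicated forward/back substitution routines):

```python
import numpy as np

def lu_no_pivot(A):
    """A = LU via row replacements only; multiplier c lands in L at (i, j)."""
    U = A.astype(float)
    m = U.shape[0]
    L = np.eye(m)
    for j in range(m - 1):
        for i in range(j + 1, m):
            c = U[i, j] / U[j, j]        # multiplier in Ri - c*Rj -> Ri
            U[i] -= c * U[j]
            L[i, j] = c                  # record c in position (i, j) of L
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))             # True: the check above passes

# Solve Ax = b via the two triangular systems
b = np.array([4., 10., 24.])
y = np.linalg.solve(L, b)                # step 1: Ly = b (forward substitution)
x = np.linalg.solve(U, y)                # step 2: Ux = y (back substitution)
print(np.allclose(A @ x, b))             # True
```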
SUMMARY
Chapter 2 Big Picture
Square matrix A (n×n):
Invertible? YES → unique solution, det≠0, full rank, cols independent
NO → singular, det=0, free variables, cols dependent
LU = efficient way to solve Ax=b for many b
IMT = one theorem connecting ALL invertibility concepts
// MASTER CHEAT SHEET — ALL 4 WEEKS
FORMULAS
Must-Know Formulas
d = n − rank(A) # free parameters
det(2×2) = ad − bc # for invertibility
(AB)⁻¹ = B⁻¹A⁻¹ # reverse order!
(AB)ᵀ = BᵀAᵀ # reverse order!
A = LU # LU factorization
x = p + xₕ # nonhomog solution = particular + homog
QUICK CHECKS
How to Classify a System / Matrix
→ Row reduce [A|b] to RREF
Has row (0…0 | c), c≠0? → INCONSISTENT (no solution)
rank(A) = n, no inconsist.? → UNIQUE solution
rank(A) < n, consistent? → INFINITELY MANY (d free params)
Square matrix n×n:
rank = n? → INVERTIBLE, det≠0, cols independent, spans ℝⁿ
rank < n? → SINGULAR, det=0, cols dependent, doesn't span ℝⁿ
DEFINITIONS RECAP
Key Terms Fast Reference
Homogeneous system: Ax = 0 (always consistent, trivial sol = 0)
Particular solution: any ONE solution p to Ax = b
Linear independence: c₁v₁+…+cₚvₚ=0 only if all cᵢ=0
Standard matrix of T: A = [T(e₁) T(e₂) … T(eₙ)]
One-to-one T: Ax=0 trivial only ↔ cols independent
Onto T: cols of A span codomain ↔ pivot every row
Invertible matrix: AC = CA = I ↔ ALL conditions in IMT true
LU factorization: A = LU, L lower triangular, U upper (REF)
IMT SHORTLIST
Invertible Matrix Theorem — Remember These
n×n matrix A is INVERTIBLE ⟺ each of:
• n pivot positions (full rank)
• Ax=0 only trivial solution
• Columns linearly independent
• Columns span ℝⁿ
• T(x)=Ax is one-to-one AND onto
• det(A) ≠ 0
• Aᵀ is invertible
GOTCHAS
Common Mistakes to Avoid
✗ AB ≠ BA (matrix mult is NOT commutative)
✗ AB=0 does NOT mean A=0 or B=0
✗ AB=AC does NOT mean B=C (unless A invertible)
✗ (AB)⁻¹ ≠ A⁻¹B⁻¹ → correct: B⁻¹A⁻¹
✓ More vectors than dimensions → ALWAYS linearly dependent
✓ Zero vector in set → ALWAYS linearly dependent
✓ A set of 1 nonzero vector → ALWAYS linearly independent