What this post covers
This post introduces orthogonality and projection.
- What orthogonality means through the inner product
- Why projection gives the closest point in a space
- How approximation error splits into an explained part and a residual
- Why this leads naturally to least squares and regression
Key terms
- orthogonality: the relation u · v = 0
- projection: the closest point in a target subspace
- residual: the part left unexplained after projection
- orthogonal basis: a basis whose vectors are pairwise orthogonal
Core idea
Suppose you have a 3D data point but want the best 2D approximation on a plane. Then the right question is not “how do I force an exact solution?” but “what is the closest explainable point?”
That is exactly what projection answers.
Under the Euclidean inner product, two vectors are orthogonal if their dot product is zero.
u · v = u1v1 + u2v2 + ... + unvn
u · v = 0
Orthogonality is powerful because the directions do not interfere with each other. That makes coordinates, error decomposition, and approximation much cleaner.
A projection sends a vector to the closest point inside a chosen subspace. More formally, the orthogonal projection of y onto a subspace W is the unique point p in W such that y - p is orthogonal to every vector in W.
Step-by-step examples
Example 1) A simple orthogonality check
Let
u = (1, 0), v = (0, 1)
Then
u · v = 0
so the vectors are orthogonal.
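The check above is a one-line dot product in code. Here is a minimal sketch in plain Python (no libraries assumed), with a hypothetical `dot` helper:

```python
def dot(u, v):
    # Euclidean inner product: sum of componentwise products
    return sum(a * b for a, b in zip(u, v))

u = (1, 0)
v = (0, 1)
print(dot(u, v) == 0)  # True: the vectors are orthogonal
```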
Example 2) Projection onto a line
Project y = (3, 4) onto the line spanned by u = (1, 0).
For a 1D subspace,
proj_u(y) = ((y · u) / (u · u)) u
The fraction tells us how much of y lies in the u direction, and multiplying by u turns that scalar amount back into a vector on the line.
Here,
y · u = 3
u · u = 1
proj_u(y) = (3, 0)
So the residual is
y - proj_u(y) = (0, 4)
which is orthogonal to u, because (0, 4) · (1, 0) = 0.
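The same computation can be sketched in code. The `proj` helper below implements the 1D formula from above and then confirms that the residual is orthogonal to u:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(y, u):
    # Projection of y onto the line spanned by u:
    # scale u by the coefficient (y · u) / (u · u)
    c = dot(y, u) / dot(u, u)
    return tuple(c * a for a in u)

y, u = (3, 4), (1, 0)
p = proj(y, u)
r = tuple(a - b for a, b in zip(y, p))  # residual y - proj_u(y)
print(p)          # (3.0, 0.0)
print(dot(r, u))  # 0.0: the residual is orthogonal to u
```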
Example 3) Why projection is the closest point
If p is the projection of y onto a subspace W, then the error y - p is orthogonal to W. If w is any other point in W, then
||y - w||^2 = ||y - p||^2 + ||p - w||^2
so ||y - w|| is always at least ||y - p||. Projection is not just a picture of a shadow. It is the best approximation in Euclidean distance.
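The Pythagorean identity above can be verified numerically. This sketch reuses the numbers from Example 2 and picks w = (1, 0) as an arbitrary other point on the line:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sq_norm(v):
    # squared Euclidean norm ||v||^2
    return dot(v, v)

y = (3, 4)
p = (3, 0)  # projection of y onto the x-axis
w = (1, 0)  # any other point on the same line

y_minus_w = (y[0] - w[0], y[1] - w[1])
y_minus_p = (y[0] - p[0], y[1] - p[1])
p_minus_w = (p[0] - w[0], p[1] - w[1])

# ||y - w||^2 == ||y - p||^2 + ||p - w||^2
print(sq_norm(y_minus_w) == sq_norm(y_minus_p) + sq_norm(p_minus_w))  # True: 20 == 16 + 4
```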
Math notes
- In this post, orthogonality means orthogonality under the Euclidean inner product.
- The 1D formula proj_u(y) = ((y · u) / (u · u)) u is specific to projection onto a single direction.
- In finite-dimensional spaces, the projection onto a subspace exists and is unique.
- Orthogonal bases make projection much easier to compute.
This is why orthogonality is not only geometric. It is computationally useful.
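To illustrate the computational payoff: with a pairwise-orthogonal basis, the projection onto the spanned subspace is simply the sum of the 1D projections onto each basis vector. The sketch below assumes such a basis is given (the helper name and example basis are illustrative, not from the post):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_onto_basis(y, basis):
    # Assumes the basis vectors are pairwise orthogonal.
    # Then the projection is the sum of the 1D projections.
    p = [0.0] * len(y)
    for u in basis:
        c = dot(y, u) / dot(u, u)
        p = [pi + c * ui for pi, ui in zip(p, u)]
    return p

# Project a 3D point onto the xy-plane, spanned by the
# orthogonal basis {(1, 0, 0), (0, 1, 0)}
print(proj_onto_basis((3, 4, 5), [(1, 0, 0), (0, 1, 0)]))  # [3.0, 4.0, 0.0]
```

Without orthogonality, computing the projection would require solving a linear system for the coefficients; orthogonality decouples the directions so each coefficient comes from one dot product.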
Common mistakes
Thinking orthogonality only means a 90-degree picture in 2D
The inner-product definition works in any dimension.
Treating projection as only a shadow metaphor
The key fact is that the residual is orthogonal and the result is the closest point.
Thinking the leftover error is arbitrary
In projection, the leftover is not random. It has a very specific orthogonality structure.
Practice or extension
- Project y = (2, 3) onto the direction u = (1, 1).
- Why does orthogonality make “closest point” arguments work?
- Why do orthogonal bases simplify computation?
Wrap-up
This post introduced orthogonality and projection.
- Orthogonality means zero inner product.
- Projection gives the closest point in a target subspace.
- The residual stays orthogonal to the explained space.
- Next, we use that structure to explain least squares.