What this post covers
This post studies linear independence and linear dependence.
- What redundancy means inside a vector set
- Why dependence means one vector can be built from the others
- How independence leads to basis and dimension
- Why this matters for feature redundancy in practice
Key terms
- linear independence: no redundancy in a vector set
- linear dependence: at least one vector can be expressed using the others
- trivial solution: the all-zero solution
- basis: a set that spans a space without redundancy
Core idea
Once you understand span, the next question is not only “what can these vectors produce?” but also “do these vectors contain unnecessary overlap?”
That is what linear independence answers.
Vectors v1, v2, ..., vk are linearly independent if
a1v1 + a2v2 + ... + akvk = 0
forces
a1 = a2 = ... = ak = 0
In other words, the only way to combine them to make zero is the trivial way. We test combinations equal to zero because if a nontrivial zero combination exists, we can move one vector to the other side and rewrite it using the rest.
If at least one coefficient is nonzero and the combination still gives zero, then the vectors are linearly dependent.
That means one vector is redundant. It can be recovered from the others.
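This test can be run numerically: stack the vectors as columns of a matrix and check whether its rank equals the number of vectors. A minimal sketch with NumPy (the helper name `is_independent` is my own, not standard):

```python
import numpy as np

def is_independent(vectors):
    """Return True if the vectors are linearly independent.

    The vectors are independent exactly when the matrix that
    has them as columns has rank equal to the vector count.
    """
    A = np.column_stack(vectors)
    return int(np.linalg.matrix_rank(A)) == len(vectors)
```

A rank deficit means some nontrivial combination of the columns gives zero, which is the dependence condition above.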
Step-by-step examples
Example 1) Independent vectors
(1, 0) and (0, 1) are independent. To see this, solve
a(1, 0) + b(0, 1) = (0, 0)
This gives (a, b) = (0, 0), so only the trivial solution works.
Example 2) Dependent vectors
(1, 2) and (2, 4) are dependent, because
2(1, 2) - 1(2, 4) = (0, 0)
This is a nontrivial way to get zero, so the set is dependent.
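Both examples can be confirmed by a rank computation: rank 2 means the two vectors are independent, rank 1 means one is a multiple of the other. A quick sketch using NumPy:

```python
import numpy as np

# Example 1: columns (1, 0) and (0, 1)
A = np.array([[1, 0],
              [0, 1]])
# rank 2 == number of columns, so the set is independent
print(np.linalg.matrix_rank(A))

# Example 2: columns (1, 2) and (2, 4)
B = np.array([[1, 2],
              [2, 4]])
# rank 1 < 2 columns, so the set is dependent
print(np.linalg.matrix_rank(B))
```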
Example 3) A practical feature example
Imagine a dataset with features
- height in centimeters
- height in meters
- weight
The first two are not independent because one is just a scaled version of the other. So even though there are three columns, the amount of genuinely new information is smaller.
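The same rank check exposes the redundancy in a feature matrix. A sketch with made-up rows (the numbers are illustrative, not a real dataset):

```python
import numpy as np

# Columns: height_cm, height_m, weight_kg (hypothetical values)
X = np.array([[170.0, 1.70, 65.0],
              [160.0, 1.60, 55.0],
              [180.0, 1.80, 80.0]])

# height_m is height_cm / 100, so the three columns carry
# only two dimensions of genuinely new information.
print(np.linalg.matrix_rank(X))
```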
Math notes
- Independence is about whether a set explains its span efficiently.
- The columns of a matrix are linearly independent if and only if the homogeneous system
Ax = 0 has only the trivial solution. Here, homogeneous simply means the right-hand side is zero.
- If vectors are dependent, you may remove one without changing the span.
- This is exactly why independence matters for basis, rank, and dimension.
Common mistakes
Thinking more vectors are always better
Not if they repeat the same direction.
Confusing correlation with independence
Correlation is statistical. Linear independence is an exact statement about linear combinations.
Relying only on geometric pictures
Pictures help in low dimensions, but elimination and matrix reasoning become more reliable in higher dimensions.
Practice or extension
Decide whether each set is independent or dependent.
1. (1, 0), (0, 1)
2. (1, 2), (2, 4)
3. (1, 0, 0), (0, 1, 0), (1, 1, 0)
Hint for 3: can the third vector be written as a sum of the first two?
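After deciding by hand, you can verify your answers with the same rank test used throughout this post (a sketch; running it will print the answers):

```python
import numpy as np

sets = {
    "set 1": [(1, 0), (0, 1)],
    "set 2": [(1, 2), (2, 4)],
    "set 3": [(1, 0, 0), (0, 1, 0), (1, 1, 0)],
}

for name, vecs in sets.items():
    A = np.column_stack(vecs)  # vectors as columns
    independent = np.linalg.matrix_rank(A) == len(vecs)
    print(name, "independent" if independent else "dependent")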
Wrap-up
This post introduced independence and dependence.
- Independence means no vector is redundant.
- Dependence means at least one vector can be built from the others.
- This is the starting point for basis, dimension, and rank.
- Next, we combine span and independence into one idea: basis.