[Linear Algebra Series Part 19] Eigenvalues and Eigenvectors: Directions That Stay Special Under a Transformation

Korean version

What this post covers

This post introduces eigenvalues and eigenvectors.

  • What it means for a vector to stay on the same line under a transformation
  • Why eigenvalues measure scaling along those special directions
  • How this leads to diagonalization
  • Why this matters for repeated transformations and PCA

Key terms

  • eigenvector: a nonzero vector that stays on the same line after transformation
  • eigenvalue: the scalar that measures the stretch or flip on that line
  • diagonalization: rewriting a matrix in a basis of eigenvectors when possible
  • PCA: a main application of eigenvectors in data analysis

Core idea

Most vectors change both direction and length when a matrix acts on them. But some special vectors behave more simply: they stay on the same line, and only their scale changes. Imagine a transformation where most arrows tilt to new directions, but a few special arrows merely stretch or flip along their own line.

Those are eigenvectors.

The defining equation is

Av = λv

where v is a nonzero eigenvector and λ is its eigenvalue.

If λ > 0, the vector keeps its direction. If λ < 0, it flips to the opposite direction while staying on the same line.
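We can check the defining equation numerically. The matrix below is an illustrative choice (not one from this post): it stretches the x-axis by 2 and flips the y-axis, so its eigenvalues are 2 and -1.

```python
import numpy as np

# Illustrative matrix: stretch x by 2, flip y (eigenvalues 2 and -1).
A = np.array([[2.0,  0.0],
              [0.0, -1.0]])

v_pos = np.array([1.0, 0.0])  # eigenvalue 2 > 0: direction kept
v_neg = np.array([0.0, 1.0])  # eigenvalue -1 < 0: flipped, same line

print(A @ v_pos)  # stretched along its own line: [2. 0.]
print(A @ v_neg)  # opposite direction, same line: [ 0. -1.]
```

In both cases Av lands on the same line as v, which is exactly what Av = λv says.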

Step-by-step examples

Example 1) A diagonal matrix

For

A = [2 0
     0 3]

(1, 0) is an eigenvector with eigenvalue 2, and (0, 1) is an eigenvector with eigenvalue 3.
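NumPy's `np.linalg.eig` recovers exactly this, a quick sanity check on the diagonal case:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # [2. 3.]
print(eigvecs)   # columns: the standard basis vectors (1,0) and (0,1)
```

For a diagonal matrix the eigenvectors are the standard basis directions and the eigenvalues are just the diagonal entries.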

Example 2) A nondiagonal matrix

Now consider

A = [2 1
     0 3]

This is not diagonal, but it still has eigenvectors: (1, 0) is an eigenvector with eigenvalue 2, because A(1, 0) = (2, 0) = 2(1, 0), and (1, 1) is an eigenvector with eigenvalue 3, because A(1, 1) = (3, 3) = 3(1, 1). So eigenvectors are not just a "diagonal matrix trick"; they are a way of finding the important directions of a transformation in general.
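The two eigenvector checks above are easy to verify directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

v1 = np.array([1.0, 0.0])  # eigenvalue 2
v2 = np.array([1.0, 1.0])  # eigenvalue 3

print(A @ v1)  # [2. 0.] = 2 * v1
print(A @ v2)  # [3. 3.] = 3 * v2
```

Note that (1, 1) is not a coordinate axis: the special directions of a nondiagonal matrix need not line up with the standard basis.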

Example 3) PCA intuition

In PCA, the main directions of spread come from the eigenvectors of the covariance matrix. Because covariance matrices are symmetric positive semidefinite, their eigenvalues are nonnegative and their eigenvectors can be chosen orthogonal. That makes the geometry especially clean.
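A minimal sketch of that idea, using synthetic data (my own illustrative dataset, not from the post): points stretched along the direction (1, 1), whose covariance eigenvectors then recover that direction of spread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D data: wide along x, narrow along y, then rotated 45 degrees
# so the main spread lies along the (1, 1) direction.
base = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = base @ R.T

C = np.cov(X, rowvar=False)           # symmetric PSD covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigh: for symmetric matrices

print(eigvals)        # nonnegative, in ascending order
top = eigvecs[:, -1]  # direction of largest variance
print(top)            # roughly +-(1, 1)/sqrt(2)
```

Because `C` is symmetric, `np.linalg.eigh` returns real nonnegative eigenvalues and orthonormal eigenvectors, which is the clean geometry the text mentions.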

Example 4) Repeated application

If a transformation is applied many times, directions associated with larger-magnitude eigenvalues may dominate over time. That is the basic intuition behind ideas like power iteration.
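Power iteration turns this intuition into an algorithm: repeatedly apply the matrix and renormalize, and (under mild conditions) the vector settles into the direction of the largest-magnitude eigenvalue. A minimal sketch, reusing the matrix from Example 2:

```python
import numpy as np

def power_iteration(A, num_iters=100):
    """Repeatedly apply A and renormalize; the dominant eigendirection
    wins because its eigenvalue's growth outpaces the others."""
    v = np.ones(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v = v / np.linalg.norm(v)
    # Rayleigh quotient estimates the dominant eigenvalue
    return v, v @ A @ v

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v, lam = power_iteration(A)
print(lam)  # ~ 3, the larger eigenvalue
print(v)    # ~ (1, 1)/sqrt(2), its eigenvector
```

The eigenvalue-2 component shrinks relative to the eigenvalue-3 component by a factor of 2/3 per step, so the iterate converges to the (1, 1) direction.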

Math notes

  • Eigenvectors reveal special 1D invariant directions of a transformation.
  • If an n x n matrix has n linearly independent eigenvectors, then we can change to a basis where the transformation becomes diagonal:
A = P D P^-1

where the columns of P are eigenvectors and the diagonal entries of D are eigenvalues.

  • In more advanced treatments, eigenvalues are found by solving the characteristic equation det(A - λI) = 0.
  • The case λ = 0 is important too: it means the eigenvector is sent to zero, so that direction collapses into the null space.
  • Not every matrix has enough linearly independent real eigenvectors to do this. That is one reason the SVD, which always exists, is so important later.
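Both sides of that note can be checked numerically. The first matrix is the one from Example 2, which diagonalizes with P built from its eigenvectors; the second is a standard illustrative example of a matrix that does not (eigenvalue 1 repeated, but only one independent eigenvector).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Columns of P are the eigenvectors (1,0) and (1,1); D holds the eigenvalues.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([2.0, 3.0])

print(P @ D @ np.linalg.inv(P))  # recovers A, so A = P D P^-1

# A defective matrix: eigenvalue 1 twice, but only one eigendirection,
# so no basis of eigenvectors exists and it cannot be diagonalized.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(J)
print(eigvals)   # [1. 1.]
print(eigvecs)   # the two columns are (numerically) parallel
```

For `J`, the repeated eigenvalue comes with an eigenspace that is only one-dimensional, which is exactly the failure mode the note describes.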

Common mistakes

Treating eigenvalues as only a calculation exercise

The real point is to understand which directions a transformation treats in a special way.

Assuming every matrix diagonalizes nicely

Many do not. That limitation matters in practice.

Thinking a large eigenvalue is automatically “good”

Its meaning depends on context. It may signal strong variation, fast growth, instability, or something else entirely.

Practice or extension

  1. Why are the standard basis vectors eigenvectors of a diagonal matrix?
  2. What does “stays on the same line” mean geometrically?
  3. Why are eigenvectors important in PCA?

Wrap-up

This post introduced eigenvalues and eigenvectors.

  • Some directions behave especially simply under a transformation.
  • Eigenvectors mark those directions.
  • Eigenvalues describe the scaling along them.
  • Next, we move to the more general factorization tool: SVD.
