[Linear Algebra Series Part 3] Vector Addition, Scalar Multiplication, and the Start of Linearity


What this post covers

This post introduces the first two operations you must learn for vectors.

  • What vector addition means
  • How scalar multiplication changes magnitude and direction
  • Why these two operations are the starting point of linearity
  • How to read them through examples such as motion, weighted combinations, and data mixing

Key terms

  • vector: the basic object that holds several components together
  • scalar: a number that scales a vector's size or reverses its direction
  • linear transformation: a transformation that preserves addition and scalar multiplication

Core idea

Once you understand vectors, the next step is operations on them. In linear algebra, almost everything begins from just two operations.

  1. Vector addition
  2. Scalar multiplication

These two operations matter because later ideas are built from them. Linear combinations, span, basis, linear transformations, matrix multiplication, and projection all extend the same pattern: add vectors and scale them by numbers.

In this series, those scalars are real numbers. Later, the same ideas can be extended to other number systems, but real vectors are the right starting point for most programming examples.

Vector addition

Vector addition combines vectors of the same dimension componentwise.

If v = (v1, v2) and w = (w1, w2), then

v + w = (v1 + w1, v2 + w2)

That looks like a simple rule, but the meaning matters more. Vector addition combines the contribution of each axis inside the same coordinate system.

  • For displacement vectors, it accumulates movement.
  • For velocity vectors, it combines directional effects.
  • For feature vectors, it combines the contribution of several effects at once.
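The componentwise rule translates directly into code. A minimal sketch in Python, using plain tuples (the helper name `add` is just for illustration):

```python
def add(v, w):
    # Componentwise addition: same-axis contributions are combined.
    assert len(v) == len(w), "vectors must have the same dimension"
    return tuple(a + b for a, b in zip(v, w))

print(add((10, 5), (2, -1)))  # (12, 4)
```

Note that `zip` is exactly the "same axis" pairing: the first components go together, the second components go together, and so on.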

Scalar multiplication

Scalar multiplication multiplies a vector by one number. If a is a scalar and v = (v1, v2), then

a v = (a v1, a v2)

Scalar multiplication changes the vector's size and, depending on the sign, its direction.

  • If a > 1, the vector gets longer.
  • If 0 < a < 1, it points in the same direction but gets shorter.
  • If a = 0, it becomes the zero vector.
  • If a < 0, its direction flips.
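The four cases above can be checked with an equally small sketch (again with tuples and an illustrative helper name `scale`):

```python
def scale(a, v):
    # The same scalar multiplies every component.
    return tuple(a * x for x in v)

print(scale(2, (2, 3)))   # (4, 6)   longer
print(scale(-1, (2, 3)))  # (-2, -3) direction flipped
print(scale(0, (2, 3)))   # (0, 0)   collapsed to the zero vector
```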

Why this is the start of linearity

Suppose we want transformations that respect these two operations rather than breaking them. That requirement is what leads to linearity.

Linearity means preserving addition and scalar multiplication. A function or transformation T between vector spaces is linear when it satisfies

T(v + w) = T(v) + T(w)
T(a v) = a T(v)

These two properties are often called additivity and homogeneity. They matter because they let us understand complicated inputs as combinations of simpler pieces.

A few consequences are worth remembering early.

T(0) = 0
T(x1 v1 + x2 v2 + ... + xn vn) = x1 T(v1) + x2 T(v2) + ... + xn T(vn)

So a linear transformation sends the zero vector to the zero vector and preserves linear combinations. For example, T(0) = 0 is not magic. It follows from T(0) = T(0v) = 0T(v) = 0.
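Both properties can be verified numerically for a concrete map. Here is a sketch using the sample linear map T(x, y) = (2x, x + y), which is one of many maps that happen to be linear; the point is only that the two checks pass:

```python
def T(v):
    # A sample linear map: T(x, y) = (2x, x + y).
    x, y = v
    return (2 * x, x + y)

def add(v, w):
    return tuple(a + b for a, b in zip(v, w))

def scale(a, v):
    return tuple(a * x for x in v)

v, w, a = (2, 3), (-1, 4), 3

print(T(add(v, w)) == add(T(v), T(w)))   # True (additivity)
print(T(scale(a, v)) == scale(a, T(v)))  # True (homogeneity)
print(T((0, 0)))                         # (0, 0), as the derivation predicts
```

A numeric check like this is not a proof, but it is a fast way to catch maps that are not linear.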

Step-by-step examples

Example 1) Position and displacement

Suppose the current position is p = (10, 5) and the displacement in one frame is d = (2, -1). Then the new position is

p + d = (10, 5) + (2, -1) = (12, 4)

Strictly speaking, p is a point or position, while d is a displacement vector. In beginner-friendly settings, positions are often treated like coordinate vectors, and that is the convention we will use for intuition here.

If the same displacement is applied for only half a second, then

0.5 d = (1, -0.5)

Scalar multiplication now reads naturally as time scaling, intensity scaling, or weighting.
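The whole example fits in one function. A sketch in the spirit of a game update loop, where `dt` is the time-scaling factor (the function name `move` is illustrative):

```python
def move(position, displacement, dt=1.0):
    # New position = old position + dt * displacement.
    # Positions are treated as coordinate vectors, per the convention above.
    return tuple(p + dt * d for p, d in zip(position, displacement))

print(move((10, 5), (2, -1)))          # (12.0, 4.0)
print(move((10, 5), (2, -1), dt=0.5))  # (11.0, 4.5)
```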

Example 2) A small 2D numeric example

Let v = (2, 3) and w = (-1, 4).

v + w = (2 + (-1), 3 + 4) = (1, 7)
2v = (4, 6)
-1v = (-2, -3)
0v = (0, 0)

Working through a small numeric example makes the two rules very concrete. Vector addition adds componentwise. Scalar multiplication applies the same scale factor to every component.
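In practice, most numeric code uses a library rather than hand-written loops. A sketch with NumPy (assuming it is installed), where arrays support both operations through ordinary operators:

```python
import numpy as np

v = np.array([2, 3])
w = np.array([-1, 4])

print(v + w)   # [1 7]
print(2 * v)   # [4 6]
print(-1 * v)  # [-2 -3]
print(0 * v)   # [0 0]
```

The operators apply elementwise, which is exactly the componentwise definition above.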

Example 3) A weighted combination of effects

Suppose a recommender system treats a user's profile as a mixture of two broad factors.

  • genre preference vector g
  • price sensitivity vector p

If you want to mix them evenly, you can write

0.5 g + 0.5 p

If you want genre preference to matter more, you might write

0.8 g + 0.2 p

This is the simplest form of a linear combination. Later, span asks how far such combinations can go.

In real systems, a model may also include a bias term or nonlinear activation. This example intentionally isolates the linear skeleton.
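A sketch of that linear skeleton in code, with made-up three-component values for g and p (the vectors and weights here are illustrative, not taken from any real system):

```python
def combine(weights, vectors):
    # Weighted sum: x1*v1 + x2*v2 + ..., the simplest linear combination.
    dim = len(vectors[0])
    return tuple(sum(x * v[i] for x, v in zip(weights, vectors)) for i in range(dim))

g = (0.9, 0.1, 0.4)  # hypothetical genre preference vector
p = (0.2, 0.8, 0.5)  # hypothetical price sensitivity vector

even   = combine((0.5, 0.5), (g, p))  # mix evenly
skewed = combine((0.8, 0.2), (g, p))  # genre preference matters more
print(even, skewed)
```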

Example 4) Color interpolation

Suppose A = (255, 0, 0) and B = (0, 0, 255). Their midpoint is

0.5 A + 0.5 B = (127.5, 0, 127.5)

In a simplified linear-RGB view, that interpolation is perfectly natural. Real graphics pipelines also worry about color spaces and gamma, so this is a helpful model rather than the full story.
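This interpolation is usually called lerp. A sketch under the simplified linear-RGB assumption, keeping float components rather than rounding to integer channel values:

```python
def lerp(a, b, t):
    # Linear interpolation: (1 - t) * a + t * b, componentwise.
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

A = (255, 0, 0)  # red
B = (0, 0, 255)  # blue

print(lerp(A, B, 0.5))  # (127.5, 0.0, 127.5)
```

At t = 0 you get A back, at t = 1 you get B, and every t in between is a linear combination of the two.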

Example 5) A non-linear counterexample

Now consider a transformation that always shifts a position 3 units to the right.

T(x) = x + (3, 0)

This feels simple, but it is not linear because

T(0) = (3, 0) != 0

Translation is common in applications, but it is not a linear transformation. It is better described as an affine transformation: it preserves straight lines and parallel structure, but it does not preserve the origin.
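The failure is easy to demonstrate numerically. A sketch showing that this translation breaks both the zero-vector property and additivity:

```python
def T(v):
    # Translation by (3, 0): affine, not linear.
    x, y = v
    return (x + 3, y)

def add(v, w):
    return tuple(a + b for a, b in zip(v, w))

v, w = (2, 3), (-1, 4)

print(T((0, 0)))        # (3, 0): the origin is not preserved
print(T(add(v, w)))     # (4, 7)
print(add(T(v), T(w)))  # (7, 7): additivity fails, the shift is applied twice
```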

Math notes

  • Vector addition and scalar multiplication are the basic operations that define a vector space.
  • Many important objects in linear algebra are either closed under these operations or are defined by preserving them.
  • The zero vector is the reference point for both operations. For example, 0v = 0 and v + 0 = v.
  • Every vector v has an additive inverse -v = (-1)v, so v + (-v) = 0.
  • These operations also satisfy distributivity and associativity rules.
a(v + w) = av + aw
(a + b)v = av + bv
(ab)v = a(bv)

Even when dimensions match, addition only feels natural when the vectors belong to the same meaning system. In applications, units and feature meaning matter, not only dimension.

Common mistakes

Memorizing only the component rules

If you only memorize (a, b) + (c, d) = (a + c, b + d), you may still miss what the operation is supposed to mean. Vector addition should be read as combining same-axis contributions.

Treating positions and vectors as completely identical

In introductory settings, positions are often handled as vectors. But formally, points and displacement vectors are different objects. The distinction becomes important later, even if we keep the beginner-friendly view for now.

Thinking scalar multiplication only means stretching

Negative scalars reverse direction, and zero collapses everything to the zero vector. Scalar multiplication is not just enlargement.

Mixing arbitrary vectors without checking meaning

Weighted combinations are powerful, but if the vectors do not live in the same meaning system, the result may be hard to interpret.

Practice or extension

Explain what each expression could mean in context.

  1. (3, 1) + (-1, 4)
  2. 2 (5, -2)
  3. 0.25 a + 0.75 b
  4. -1 v
  5. Why T(x) = x + (2, 0) is not linear

Then ask yourself:

  • What effects does the addition combine?
  • What ratio or scaling does the scalar multiplication express?
  • Does the result still belong to the same kind of object?
  • Which examples are linear and which are not?

That habit helps the formulas start feeling like models instead of symbol manipulation.

Wrap-up

This post introduced the two core vector operations.

  • Vector addition combines information componentwise along the same axes.
  • Scalar multiplication changes magnitude and can reverse direction.
  • Together they form the starting point of linear combinations and linearity.
  • In programming, they show up in motion, weighted sums, interpolation, and feature mixing.
  • Familiar operations such as translation can still fail linearity, so the distinction matters.

The next post defines vector length and distance so that we can talk numerically about how far apart two vectors are.
