Subspaces#

There once was a vector so fine
Whose span was simply divine
With its comrades in tow
A subspace did grow
A mathematical treat so sublime

– ChatGPT, March 2023


So far we have been working with vector spaces like \(\mathbb{R}^2, \mathbb{R}^3, \mathbb{R}^{1000},\) and \(\mathbb{R}^n.\)

But there are more vector spaces…

Today we’ll define a subspace and show how the concept helps us understand the nature of matrices and their linear transformations.

Definition. A subspace is any set \(H\) in \(\mathbb{R}^n\) that has three properties:

  1. The zero vector is in \(H\).

  2. For each \(\mathbf{u}\) and \(\mathbf{v}\) in \(H\), the sum \(\mathbf{u} + \mathbf{v}\) is in \(H\).

  3. For each \(\mathbf{u}\) in \(H\) and each scalar \(c,\) the vector \(c\mathbf{u}\) is in \(H\).

Another way of stating properties 2 and 3 is that \(H\) is closed under addition and scalar multiplication.

Conceptually, a subspace is a set of vectors that is contained in another space …

such that linear combinations of the vectors in the subspace stay in the subspace.

In other words, a subspace has the “Las Vegas property”:

What happens in \(H\), stays in \(H\).
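We can test these closure properties numerically. Below is a minimal NumPy sketch (not from the original notes); the helper `in_span` is a name we introduce here, and it tests span membership by checking whether the least-squares residual is zero.

```python
import numpy as np

def in_span(vectors, x, tol=1e-10):
    """Test whether x is (numerically) a linear combination of the given vectors."""
    V = np.column_stack(vectors)
    c, *_ = np.linalg.lstsq(V, x, rcond=None)
    return np.linalg.norm(V @ c - x) < tol

# H = Span{v1, v2}, a plane through the origin in R^3
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])

u = 2.0 * v1 + 3.0 * v2    # u is in H
w = -1.0 * v1 + 4.0 * v2   # w is in H

print(in_span([v1, v2], np.zeros(3)))  # property 1: H contains the zero vector
print(in_span([v1, v2], u + w))        # property 2: closed under addition
print(in_span([v1, v2], 7.0 * u))      # property 3: closed under scalar multiplication
```

All three checks print `True`: whatever combination of `u` and `w` we form, it stays in \(H\).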

Every Span is a Subspace#

The first thing to note is that there is a close connection between Span and Subspace:

Every Span is a Subspace.

To see this, let’s take a specific example.

For example, take \(\mathbf{v}_1\) and \(\mathbf{v}_2\) in \(\mathbb{R}^n\), and let \(H\) = Span\(\{\mathbf{v}_1, \mathbf{v}_2\}.\)

Then \(H\) is a subspace of \(\mathbb{R}^n\).

Let’s check this:

  1. The zero vector is in \(H\)

… because \({\bf 0} = 0\mathbf{v}_1 + 0\mathbf{v}_2\),

… so \({\bf 0}\) is in Span\(\{\mathbf{v}_1, \mathbf{v}_2\}.\)

  2. The sum of any two vectors in \(H\) is in \(H\)

In other words, if \(\mathbf{u} = s_1\mathbf{v}_1 + s_2\mathbf{v}_2,\) and \(\mathbf{v} = t_1\mathbf{v}_1 + t_2\mathbf{v}_2,\)

… their sum \(\mathbf{u} + \mathbf{v}\) is \((s_1+t_1)\mathbf{v}_1 + (s_2+t_2)\mathbf{v}_2,\)

… which is in \(H\).

  3. For any scalar \(c\), \(c\mathbf{u}\) is in \(H\)

because \(c\mathbf{u} = c(s_1\mathbf{v}_1 + s_2\mathbf{v}_2) = (cs_1)\mathbf{v}_1 + (cs_2)\mathbf{v}_2,\) which is in \(H\).

Because every span is a subspace, we refer to Span\(\{\mathbf{v}_1,\dots,\mathbf{v}_p\}\) as the subspace spanned by \(\mathbf{v}_1,\dots,\mathbf{v}_p.\)
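The proof above is really just coefficient bookkeeping, which we can mirror in a short NumPy check (a sketch with vectors and weights of our own choosing):

```python
import numpy as np

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 3.0])

s1, s2 = 2.0, -1.0   # weights defining u
t1, t2 = 0.5, 4.0    # weights defining v

u = s1 * v1 + s2 * v2
v = t1 * v1 + t2 * v2

# The sum u + v has weights (s1 + t1) and (s2 + t2), so it stays in Span{v1, v2}
print(np.allclose(u + v, (s1 + t1) * v1 + (s2 + t2) * v2))  # True

# A scalar multiple c*u has weights (c*s1) and (c*s2), so it also stays in the span
c = 7.0
print(np.allclose(c * u, (c * s1) * v1 + (c * s2) * v2))    # True
```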

OK, here is another subspace – a line:

[Figure: a line through the origin.]

Next question: is every line a subspace?

What about a line that is not through the origin?

How about this line:

[Figure: a line not passing through the origin.]

In fact, a line \(L\) not through the origin fails all three requirements for a subspace:

  1. \(L\) does not contain the zero vector.

  2. \(L\) is not closed under addition.

[Figure: the sum of two vectors on \(L\) does not lie on \(L\).]

  3. \(L\) is not closed under scalar multiplication.

[Figure: a scalar multiple of a vector on \(L\) does not lie on \(L\).]

On the other hand, any line, plane, or hyperplane through the origin is a subspace.

Make sure you can see why (or prove it to yourself).

Two Important Subspaces#

Now let’s start to use the subspace concept to characterize matrices.

We are thinking of these matrices as linear operators.

Every matrix has associated with it two subspaces:

  • column space and

  • null space.

Column Space#

Definition. The column space of a matrix \(A\) is the set \({\operatorname{Col}}\ A\) of all linear combinations of the columns of \(A\).

If \(A\) = \([\mathbf{a}_1 \;\cdots\; \mathbf{a}_n]\), with columns in \(\mathbb{R}^m,\) then \({\operatorname{Col}}\ A\) is the same as Span\(\{\mathbf{a}_1,\dots,\mathbf{a}_n\}.\)

The column space of an \(m\times n\) matrix is a subspace of \(\mathbb{R}^m.\)

In particular, note that \({\operatorname{Col}}\ A\) equals \(\mathbb{R}^m\) only when the columns of \(A\) span \(\mathbb{R}^m.\)

Otherwise, \({\operatorname{Col}}\ A\) is only part of \(\mathbb{R}^m.\)

When a system of linear equations is written in the form \(A\mathbf{x} = \mathbf{b},\) the column space of \(A\) is the set of all \(\mathbf{b}\) for which the system has a solution.

Equivalently, when we consider the linear operator \(T: \mathbb{R}^n\rightarrow\mathbb{R}^m\) that is implemented by the matrix \(A\), the column space of \(A\) is the range of \(T.\)
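To make this concrete: testing whether a given \(\mathbf{b}\) is in \({\operatorname{Col}}\ A\) amounts to asking whether \(A\mathbf{x} = \mathbf{b}\) is consistent. Here is a minimal NumPy sketch, with an example matrix of our own choosing and a helper `in_col_space` introduced just for illustration:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # Col A is a plane through the origin in R^3

def in_col_space(A, b, tol=1e-10):
    """b is in Col A exactly when Ax = b has a solution, i.e., when the
    best least-squares fit leaves zero residual."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ x - b) < tol

print(in_col_space(A, np.array([2.0, 3.0, 5.0])))  # True: equals 2*a1 + 3*a2
print(in_col_space(A, np.array([1.0, 1.0, 0.0])))  # False: not on the plane
```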

Null Space#

Definition. The null space of a matrix \(A\) is the set of all solutions of the homogeneous equation \(A\mathbf{x} = 0.\)

In other words: the null space of \(A\) is the set of all vectors that are mapped to the zero vector by \(A\).

We denote the null space of \(A\) as \(\operatorname{Nul} A\).

When \(A\) has \(n\) columns, a solution of \(A\mathbf{x} = {\bf 0}\) is a vector in \(\mathbb{R}^n.\) So the null space of \(A\) is a subset of \(\mathbb{R}^n.\)

In fact, \(\operatorname{Nul} A\) is a subspace of \(\mathbb{R}^n.\)

Theorem. The null space of an \(m\times n\) matrix \(A\) is a subspace of \(\mathbb{R}^n.\)

Equivalently, the set of all solutions of a system \(A\mathbf{x} = {\bf 0}\) of \(m\) homogeneous linear equations in \(n\) unknowns is a subspace of \(\mathbb{R}^n.\)

Proof.

  1. The zero vector is in \(\operatorname{Nul} A\) because \(A{\bf 0} = {\bf 0}.\)

  2. The sum of two vectors in \(\operatorname{Nul} A\) is in \(\operatorname{Nul} A.\)

Take two vectors \(\mathbf{u}\) and \(\mathbf{v}\) that are in \(\operatorname{Nul} A.\) By definition \(A\mathbf{u} = {\bf 0}\) and \(A\mathbf{v} = {\bf 0}.\)

Then \(\mathbf{u} + \mathbf{v}\) is in \(\operatorname{Nul} A\) because \(A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = {\bf 0} + {\bf 0} = {\bf 0}.\)

  3. Any scalar multiple of a vector in \({\operatorname{Nul}}\ A\) is in \({\operatorname{Nul}}\ A.\)

Take a vector \(\mathbf{v}\) that is in \({\operatorname{Nul}}\ A.\) Then \(A(c\mathbf{v}) = cA\mathbf{v} = c{\bf 0} = {\bf 0}.\)

Testing whether a vector \(\mathbf{v}\) is in \({\operatorname{Nul}}\ A\) is easy: simply compute \(A\mathbf{v}\) and see if the result is zero.
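That test is a single matrix-vector product, for example in NumPy (a sketch with a made-up matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # the second column is twice the first

def in_null_space(A, v, tol=1e-10):
    """v is in Nul A exactly when A @ v is the zero vector."""
    return np.linalg.norm(A @ v) < tol

print(in_null_space(A, np.array([-2.0, 1.0])))  # True:  -2*a1 + 1*a2 = 0
print(in_null_space(A, np.array([1.0, 1.0])))   # False: A @ [1, 1] = [3, 6]
```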

There’s a space known as \(\operatorname{Nul}\),
Where vectors are no fun at all,
For when equations we solve,
And zero is the goal,
In null space, the vectors take the fall

– ChatGPT, March 2023

Comparing \({\operatorname{Col}}\ A\) and \({\operatorname{Nul}}\ A\)#

What is the relationship between these two subspaces that are defined using \(A\)?

Actually, there is no particular connection (at this moment in the course).

The important thing to note at present is that these two subspaces live in different “universes”.

For an \(m\times n\) matrix,

  • the column space is a subset of \(\mathbb{R}^m\) (all its vectors have \(m\) components),

  • while the null space is a subset of \(\mathbb{R}^n\) (all its vectors have \(n\) components).

However: next lecture we will make a connection!

A Basis for a Subspace#

Let’s say you have a subspace.

For example, perhaps it is Span\(\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}\).

We would like to find the simplest way of describing this space.

For example, suppose that \(\mathbf{a}_3\) is a scalar multiple of \(\mathbf{a}_2\). Then:

Span\(\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}\)

is the same subspace as

Span\(\{\mathbf{a}_1, \mathbf{a}_2\}\).

Can you see why?

Making this idea more general:

  • we would like to describe a subspace as the span of a set of vectors, and

  • that set of vectors should have as few members as possible!

So in our example above, we would prefer to say that the subspace is:

\(H\) = Span\(\{\mathbf{a}_1, \mathbf{a}_2\}\)

rather than

\(H\) = Span\(\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}\).

In other words, the more “concisely” we can describe the subspace, the better.

Now, given some subspace, how small a spanning set can we find?

Here is the key idea we will use to answer that question:

It can be shown that the smallest possible spanning set must be linearly independent.

We will call such a minimal spanning set a basis for the space.

Definition. A basis for a subspace \(H\) of \(\mathbb{R}^n\) is a linearly independent set in \(H\) that spans \(H.\)

[Figure: the subspace \(H\) spanned by \(\mathbf{a}_1, \mathbf{a}_2,\) and \(\mathbf{a}_3\), where \(\mathbf{a}_3\) is a scalar multiple of \(\mathbf{a}_2\).]

So in the example above, a basis for \(H\) could be:

\(\{\mathbf{a}_1, \mathbf{a}_2\}\)

or

\(\{\mathbf{a}_1, \mathbf{a}_3\}.\)

However,

\(\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}\)

is not a basis for \(H\).

That is because the set \(\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}\) is not linearly independent.

(Conceptually, there are “too many vectors” in this set).

And furthermore,

\(\{\mathbf{a}_1\}\)

is not a basis for \(H\).

That is because \(\{\mathbf{a}_1\}\) does not span \(H\).

(Conceptually, there are “not enough vectors” in this set).

A subspace has vectors so neat,
Their span forms a mathematical feat,
But to understand their place,
In this vectorial space,
We need a basis, oh so complete!

A basis is a set of vectors, you see,
That span the subspace perfectly,
No vector is wasted,
All others are based on this basic,
A strong foundation, for this subspace to be.

– ChatGPT, March 2023

Example.

The columns of any invertible \(n\times n\) matrix form a basis for \(\mathbb{R}^n.\)

This is because, by the Invertible Matrix Theorem, they are linearly independent, and they span \(\mathbb{R}^n.\)

So, for example, we could use the identity matrix, \(I.\) Its columns are \(\mathbf{e}_1, \dots, \mathbf{e}_n.\)

The set \(\{\mathbf{e}_1,\dots,\mathbf{e}_n\}\) is called the standard basis for \(\mathbb{R}^n.\)

[Figure: the standard basis vectors.]
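As a quick numerical illustration (a sketch; the random matrix and seed are our own choices, and a randomly generated matrix is invertible with probability 1): since \(M\) is invertible, every \(\mathbf{x}\) has a unique expression in terms of \(M\)'s columns.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))   # a random matrix; almost surely invertible
x = rng.normal(size=4)        # an arbitrary vector in R^4

# Solving M c = x finds the unique weights expressing x in the basis
# formed by the columns of M.
c = np.linalg.solve(M, x)
print(np.allclose(M @ c, x))  # True

# With M = I, the weights are just the entries of x: the standard basis.
print(np.allclose(np.eye(4) @ x, x))  # True
```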

Bases for Null and Column Spaces#

Being able to express a subspace in terms of a basis is very powerful.

It gives us a concise way of describing the subspace.

And as we will see in the next lecture, it will allow us to introduce the ideas of coordinate systems and dimension.

Hence, we will often want to be able to describe subspaces like \(\operatorname{Col} A\) or \(\operatorname{Nul} A\) using their bases.

Finding a basis for the Null Space#

We’ll start with finding a basis for the null space of a matrix.

Example. Find a basis for the null space of the matrix

\[\begin{split}A = \begin{bmatrix}-3&6&-1&1&-7\\1&-2&2&3&-1\\2&-4&5&8&-4\end{bmatrix}.\end{split}\]

Solution. We would like to describe the set of all solutions of \(A\mathbf{x} = {\bf 0}.\)

We start by writing the solution of \(A\mathbf{x} = {\bf 0}\) in parametric form:

\[\begin{split}[A \;{\bf 0}] \sim \begin{bmatrix}1&-2&0&-1&3&0\\0&0&1&2&-2&0\\0&0&0&0&0&0\end{bmatrix}, \;\;\; \begin{array}{rrrrrcl}x_1&-2x_2&&-x_4&+3x_5&=&0\\&&x_3&+2x_4&-2x_5&=&0\\&&&&0&=&0\end{array}\end{split}\]

So \(x_1\) and \(x_3\) are basic, and \(x_2, x_4,\) and \(x_5\) are free.

So the general solution is:

\[\begin{split}\begin{array}{rcl}x_1&=&2x_2 + x_4 -3x_5,\\ x_3&=&-2x_4 + 2x_5.\end{array}\end{split}\]

Now, what we want to do is write the solution set as a weighted combination of vectors.

This is a neat trick – we are creating a vector equation.

The key idea is that the free variables will become the weights.

\[\begin{split}\begin{bmatrix}x_1\\x_2\\x_3\\x_4\\x_5\end{bmatrix} = \begin{bmatrix}2x_2 + x_4 - 3x_5\\x_2\\-2x_4 + 2x_5\\x_4\\x_5\end{bmatrix} \end{split}\]
\[\begin{split} = x_2\begin{bmatrix}2\\1\\0\\0\\0\end{bmatrix}+x_4\begin{bmatrix}1\\0\\-2\\1\\0\end{bmatrix}+x_5\begin{bmatrix}-3\\0\\2\\0\\1\end{bmatrix} \end{split}\]
\[= x_2\mathbf{u} + x_4\mathbf{v} + x_5{\bf w}.\]

Now what we have is an expression that describes the entire solution set of \(A\mathbf{x} = {\bf 0}.\)

So \({\operatorname{Nul}}\ A\) is the set of all linear combinations of \(\mathbf{u}, \mathbf{v},\) and \({\bf w}\). That is, \({\operatorname{Nul}}\ A\) is the subspace spanned by \(\{\mathbf{u}, \mathbf{v}, {\bf w}\}.\)

Furthermore, this construction automatically makes \(\mathbf{u}, \mathbf{v},\) and \({\bf w}\) linearly independent.

Since each free variable appears by itself in one coordinate position (\(x_2\) in the second entry, \(x_4\) in the fourth, and \(x_5\) in the fifth), the only way for the whole weighted sum to be zero is if every weight is zero – which is the definition of linear independence.

So \(\{\mathbf{u}, \mathbf{v}, {\bf w}\}\) is a basis for \({\operatorname{Nul}}\ A.\)

Conclusion: by finding a parametric description of the solution set of the equation \(A\mathbf{x} = {\bf 0},\) we can construct a basis for the null space of \(A\).
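SymPy's `nullspace` method carries out exactly this parametric-form construction, producing one basis vector per free variable, so we can check the example above (a sketch):

```python
import sympy as sp

A = sp.Matrix([[-3,  6, -1, 1, -7],
               [ 1, -2,  2, 3, -1],
               [ 2, -4,  5, 8, -4]])

basis = A.nullspace()                  # one basis vector per free variable
for vec in basis:
    assert A * vec == sp.zeros(3, 1)   # each basis vector solves Ax = 0
    print(vec.T)
```

This prints the vectors \(\mathbf{u}, \mathbf{v},\) and \({\bf w}\) found by hand above.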

Finding a Basis for the Column Space#

To find a basis for the column space, we have an easier starting point.

We know that the column space is the span of the matrix columns.

So, we can choose matrix columns to make up the basis.

The question is: which columns should we choose?

Warmup.

We start with a warmup example.

Suppose we have a matrix \(B\) that happens to be in reduced echelon form:

\[\begin{split}B = \begin{bmatrix}1&0&-3&5&0\\0&1&2&-1&0\\0&0&0&0&1\\0&0&0&0&0\end{bmatrix}.\end{split}\]

Denote the columns of \(B\) by \(\mathbf{b}_1,\dots,\mathbf{b}_5\).

Note that \(\mathbf{b}_3 = -3\mathbf{b}_1 + 2\mathbf{b}_2\) and \(\mathbf{b}_4 = 5\mathbf{b}_1-\mathbf{b}_2.\)

So any combination of \(\mathbf{b}_1,\dots,\mathbf{b}_5\) is actually just a combination of \(\mathbf{b}_1, \mathbf{b}_2,\) and \(\mathbf{b}_5.\)

So \(\{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_5\}\) spans \({\operatorname{Col}}\ B\).

Also, \(\mathbf{b}_1, \mathbf{b}_2,\) and \(\mathbf{b}_5\) are linearly independent, because they are columns from an identity matrix.

So: the pivot columns of \(B\) form a basis for \({\operatorname{Col}}\ B.\)

Note that this means: there is no combination of columns 1, 2, and 5 that yields the zero vector.

(Other than the trivial combination of course.)

So, for matrices in reduced row echelon form, we have a simple rule for the basis of the column space:

Choose the columns that hold the pivots.
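We can confirm the warmup with SymPy (a sketch): `rref` returns the reduced form together with the indices of the pivot columns.

```python
import sympy as sp

B = sp.Matrix([[1, 0, -3,  5, 0],
               [0, 1,  2, -1, 0],
               [0, 0,  0,  0, 1],
               [0, 0,  0,  0, 0]])

R, pivots = B.rref()
print(pivots)        # (0, 1, 4): columns 1, 2, and 5 (zero-indexed here)
print(R == B)        # True: B is already in reduced echelon form

# The non-pivot columns are combinations of the pivot columns:
print(B[:, 2] == -3 * B[:, 0] + 2 * B[:, 1])  # True
print(B[:, 3] ==  5 * B[:, 0] -     B[:, 1])  # True
```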

The general case.

Now I’ll show that the pivot columns of \(A\) form a basis for \({\operatorname{Col}}\ A\) for any \(A\).

Consider the case where \(A\mathbf{x} = {\bf 0}\) for some nonzero \(\mathbf{x}.\)

This says that there is a linear dependence relation between some of the columns of \(A\).

If an entry of \(\mathbf{x}\) is zero, then the corresponding column of \(A\) does not participate in the linear dependence relation.

When we row-reduce \(A\) to its reduced echelon form \(B\), the columns are changed, but the equations \(A\mathbf{x} = {\bf 0}\) and \(B\mathbf{x} = {\bf 0}\) have the same solution set.

So this means that the columns of \(A\) have exactly the same dependence relationships as the columns of \(B\).

In other words:

  1. If some column of \(B\) can be written as a combination of other columns of \(B\), then the same is true of the corresponding columns of \(A\).

  2. If no combination of certain columns of \(B\) yields the zero vector, then no combination of corresponding columns of \(A\) yields the zero vector.

In other words:

  1. If some set of columns of \(B\) spans the column space of \(B\), then the same columns of \(A\) span the column space of \(A\).

  2. If some set of columns of \(B\) are linearly independent, then the same columns of \(A\) are linearly independent.

Example. Consider the matrix \(A\):

\[\begin{split}A = \begin{bmatrix}1&3&3&2&-9\\-2&-2&2&-8&2\\2&3&0&7&1\\3&4&-1&11&-8\end{bmatrix}\end{split}\]

It is row equivalent to the matrix \(B\) that we considered above. So to find a basis for \({\operatorname{Col}}\ A\), we simply need to look at the pivot columns of its reduced row echelon form. We already found that a basis for \({\operatorname{Col}}\ B\) consists of columns 1, 2, and 5.

Therefore we can immediately conclude that a basis for \({\operatorname{Col}}\ A\) is \(A\)’s columns 1, 2, and 5.

So a basis for \({\operatorname{Col}}\ A\) is:

\[\begin{split}\left\{\begin{bmatrix}1\\-2\\2\\3\end{bmatrix},\begin{bmatrix}3\\-2\\3\\4\end{bmatrix},\begin{bmatrix}-9\\2\\1\\-8\end{bmatrix}\right\}\end{split}\]

Theorem. The pivot columns of a matrix \(A\) form a basis for the column space of \(A\).

Be careful here – note that we compute the reduced row echelon form of \(A\) to find which columns are pivot columns…

but we use the columns of \(A\) itself as the basis for \({\operatorname{Col}}\ A\)!
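The whole procedure fits in a few lines of SymPy (a sketch): row-reduce to find which columns are pivots, then take those columns from \(A\) itself.

```python
import sympy as sp

A = sp.Matrix([[ 1,  3,  3,  2, -9],
               [-2, -2,  2, -8,  2],
               [ 2,  3,  0,  7,  1],
               [ 3,  4, -1, 11, -8]])

# Row-reduce only to locate the pivot columns...
_, pivots = A.rref()
print(pivots)                         # (0, 1, 4)

# ...but take the basis vectors from A itself, not from the echelon form.
basis = [A[:, j] for j in pivots]
for b in basis:
    print(b.T)
```

SymPy's built-in `A.columnspace()` performs this same computation, returning the pivot columns of the original matrix.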