Section 4.6 Subspaces

In this chapter, we have been looking at bases for \(\real^p\text{,}\) sets of vectors that are linearly independent and span \(\real^p\text{.}\) Frequently, however, we focus on only a subset of \(\real^p\text{.}\) In particular, if we are given an \(m\by n\) matrix \(A\text{,}\) we have been interested in both the span of the columns of \(A\) and the solution space to the homogeneous equation \(A\xvec = \zerovec\text{.}\) In this section, we will expand the concept of basis to describe sets like these.

Preview Activity 4.6.1.

Let’s consider the following matrix \(A\) and its reduced row echelon form.
\begin{equation*} A = \left[\begin{array}{rrrr} 2 \amp -1 \amp 2 \amp 3 \\ 1 \amp 0 \amp 0 \amp 2 \\ -2 \amp 2 \amp -4 \amp -2 \\ \end{array}\right] \sim \left[\begin{array}{rrrr} 1 \amp 0 \amp 0 \amp 2 \\ 0 \amp 1 \amp -2 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0 \\ \end{array}\right]\text{.} \end{equation*}
  1. Are the columns of \(A\) linearly independent? Is the span of the columns \(\real^3\text{?}\)
  2. Give a parametric description of the solution space to the homogeneous equation \(A\xvec = \zerovec\text{.}\)
  3. Explain how this parametric description produces two vectors \(\wvec_1\) and \(\wvec_2\) whose span is the solution space to the equation \(A\xvec = \zerovec\text{.}\)
  4. What can you say about the linear independence of the set of vectors \(\wvec_1\) and \(\wvec_2\text{?}\)
  5. Let’s denote the columns of \(A\) as \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) \(\vvec_3\text{,}\) and \(\vvec_4\text{.}\) Explain why \(\vvec_3\) and \(\vvec_4\) can be written as linear combinations of \(\vvec_1\) and \(\vvec_2\text{.}\)
  6. Explain why \(\vvec_1\) and \(\vvec_2\) are linearly independent and \(\laspan{\vvec_1,\vvec_2} = \laspan{\vvec_1, \vvec_2, \vvec_3, \vvec_4}\text{.}\)
Solution.
  1. The columns of \(A\) are not linearly independent since there is not a pivot position in every column. Also, the span of the columns is not \(\real^3\) because there is not a pivot position in every row.
  2. From the reduced row echelon form, we see that the homogeneous equation leads to the equations
    \begin{equation*} \begin{alignedat}{5} x_1 \amp \amp \amp \amp \amp {}+{} \amp 2x_4 \amp {}={} \amp 0 \\ \amp \amp x_2 \amp {}-{} \amp 2x_3 \amp {}+{} \amp x_4 \amp {}={} \amp 0 \\ \end{alignedat}\text{,} \end{equation*}
    which leads to the parametric description
    \begin{equation*} \xvec=\fourvec{x_1}{x_2}{x_3}{x_4} = \fourvec{-2x_4}{2x_3-x_4}{x_3}{x_4} = x_3\fourvec{0}{2}{1}{0} + x_4\fourvec{-2}{-1}{0}{1}\text{.} \end{equation*}
  3. We see that every vector in the solution space is a linear combination of the vectors \(\wvec_1=\fourvec{0}{2}{1}{0}\) and \(\wvec_2 = \fourvec{-2}{-1}{0}{1}\text{.}\)
  4. This pair of vectors is linearly independent because one is not a scalar multiple of the other.
  5. From the reduced row echelon form of \(A\text{,}\) we see that \(\vvec_3 = -2\vvec_2\) and \(\vvec_4=2\vvec_1+\vvec_2\text{.}\)
  6. We see that \(\vvec_1\) and \(\vvec_2\) are linearly independent from the reduced row echelon form of \(A\text{.}\) Moreover, we know that \(\vvec_3\) and \(\vvec_4\) can be written as linear combinations of \(\vvec_1\) and \(\vvec_2\text{.}\) Therefore, any linear combination of \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) \(\vvec_3\text{,}\) and \(\vvec_4\) can be written as a linear combination of \(\vvec_1\) and \(\vvec_2\) alone.
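Computations like these can be checked with software. Below is a minimal sketch using Python with the SymPy library; the choice of tool is ours and is not part of the text.

    from sympy import Matrix

    # the matrix from the preview activity
    A = Matrix([[ 2, -1,  2,  3],
                [ 1,  0,  0,  2],
                [-2,  2, -4, -2]])

    # reduced row echelon form and the indices of the pivot columns
    R, pivots = A.rref()
    print(R)         # matches the reduced row echelon form shown above
    print(pivots)    # (0, 1): pivot positions in the first two columns

    # a basis for the solution space of A x = 0
    for w in A.nullspace():
        print(w.T)   # expect (0, 2, 1, 0) and (-2, -1, 0, 1)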

Subsection 4.6.1 Subspaces

Our goal is to develop a common framework for describing subsets like the span of the columns of a matrix and the solution space to a homogeneous equation. That leads us to the following definition.

Definition 4.6.1.

A subspace of \(\real^p\) is a subset of \(\real^p\) that is the span of a set of vectors.
Since we have explored the concept of span in some detail, this definition just gives us a new word to describe something familiar. Let’s look at some examples.

Example 4.6.2. Subspaces of \(\real^3\).

In Activity 3.1.3 and the following discussion, we looked at subspaces in \(\real^3\) without explicitly using that language. Let’s recall some of those examples.
  • Suppose we have a single nonzero vector \(\vvec\text{.}\) The span of \(\vvec\) is a subspace, which we’ll write as \(S = \laspan{\vvec}\text{.}\) As we have seen, the span of a single vector consists of all scalar multiples of that vector, and these form a line passing through the origin.
  • If instead we have two linearly independent vectors \(\vvec_1\) and \(\vvec_2\text{,}\) the subspace \(S=\laspan{\vvec_1,\vvec_2}\) is a plane passing through the origin.
  • Consider the three vectors \(\evec_1\text{,}\) \(\evec_2\text{,}\) and \(\evec_3\text{.}\) Since every 3-dimensional vector can be written as a linear combination of these vectors, we have \(S = \laspan{\evec_1, \evec_2, \evec_3} = \real^3\text{.}\)
  • One more subspace worth mentioning is \(S=\laspan{\zerovec}\text{.}\) Since any linear combination of the zero vector is itself the zero vector, this subspace consists of a single vector, \(\zerovec\text{.}\)
In fact, any subspace of \(\real^3\) is one of these types: the origin, a line, a plane, or all of \(\real^3\text{.}\)

Activity 4.6.2.

We will look at some sets of vectors and the subspaces they form.
  1. If \(\vvec_1, \vvec_2,\ldots,\vvec_n\) is a set of vectors in \(\real^m\text{,}\) explain why \(\zerovec\) can be expressed as a linear combination of these vectors. Use this fact to explain why the zero vector \(\zerovec\) belongs to any subspace in \(\real^m\text{.}\)
  2. Explain why the line on the left of Figure 4.6.3 is not a subspace of \(\real^2\) and why the line on the right is.
    Figure 4.6.3. Two lines in \(\real^2\text{,}\) one of which is a subspace and one of which is not.
  3. Consider the vectors
    \begin{equation*} \vvec_1=\threevec101,~~~ \vvec_2=\threevec011,~~~ \vvec_3=\threevec110, \end{equation*}
    and describe the subspace \(S=\laspan{\vvec_1,\vvec_2,\vvec_3}\) of \(\real^3\text{.}\)
  4. Consider the vectors
    \begin{equation*} \wvec_1=\threevec210,~~~ \wvec_2=\threevec{-1}1{-1},~~~ \wvec_3=\threevec03{-2} \end{equation*}
    1. Write \(\wvec_3\) as a linear combination of \(\wvec_1\) and \(\wvec_2\text{.}\)
    2. Explain why \(\laspan{\wvec_1,\wvec_2,\wvec_3} = \laspan{\wvec_1, \wvec_2}\text{.}\)
    3. Describe the subspace \(S = \laspan{\wvec_1,\wvec_2,\wvec_3}\) of \(\real^3\text{.}\)
  5. Suppose that \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) \(\vvec_3\text{,}\) and \(\vvec_4\) are four vectors in \(\real^3\) and that
    \begin{equation*} \begin{bmatrix} \vvec_1 \amp \vvec_2 \amp \vvec_3 \amp \vvec_4 \end{bmatrix} \sim \begin{bmatrix} 1 \amp 2 \amp 0 \amp -2 \\ 0 \amp 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0 \\ \end{bmatrix}. \end{equation*}
    Give a description of the subspace \(S=\laspan{\vvec_1,\vvec_2,\vvec_3,\vvec_4}\) of \(\real^3\text{.}\)
Solution.
  1. If we choose all the weights \(c_1=c_2=\ldots=c_n = 0\text{,}\) then the linear combination
    \begin{equation*} c_1\vvec_1 + c_2\vvec_2 + \cdots + c_n\vvec_n = \zerovec. \end{equation*}
    This means that \(\zerovec\) is in the subspace \(S=\laspan{\vvec_1,\vvec_2,\ldots,\vvec_n}\text{.}\)
  2. The line on the left cannot be a subspace of \(\real^2\) since it does not contain the zero vector. The line on the right is a subspace because it can be represented as the span of any nonzero vector on the line.
  3. The matrix whose columns are the given vectors has a pivot in every row. Therefore, the span of these vectors is \(\real^3\) and so \(S=\laspan{\vvec_1,\vvec_2,\vvec_3} = \real^3\text{.}\)
    1. We see that \(\wvec_3 = \wvec_1 + 2\wvec_2\text{.}\)
    2. Using \(\wvec_3 = \wvec_1 + 2\wvec_2\text{,}\) any linear combination
      \begin{equation*} c_1\wvec_1 + c_2\wvec_2 + c_3\wvec_3 = (c_1 + c_3)\wvec_1 + (c_2+2c_3)\wvec_2 \end{equation*}
      is a linear combination of \(\wvec_1\) and \(\wvec_2\) alone, so \(\laspan{\wvec_1,\wvec_2,\wvec_3} = \laspan{\wvec_1,\wvec_2}\text{.}\)
    3. The subspace \(S=\laspan{\wvec_1,\wvec_2,\wvec_3} = \laspan{\wvec_1,\wvec_2}\) is a plane in \(\real^3\text{.}\)
  5. Since we can write \(\vvec_2 = 2\vvec_1\) and \(\vvec_4 = -2\vvec_1 + \vvec_3\text{,}\) we have
    \begin{equation*} S = \laspan{\vvec_1,\vvec_2,\vvec_3,\vvec_4} = \laspan{\vvec_1,\vvec_3}, \end{equation*}
    which is a plane in \(\real^3\text{.}\)
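For parts 3 and 4, one way to compare the spans is to compare ranks: adjoining a vector to a set enlarges the span exactly when it increases the rank of the matrix whose columns are those vectors. Here is a short sketch along these lines, again using SymPy (our choice of tool).

    from sympy import Matrix

    # the vectors from part 4 of the activity
    w1 = Matrix([2, 1, 0])
    w2 = Matrix([-1, 1, -1])
    w3 = Matrix([0, 3, -2])

    # w3 lies in span{w1, w2} exactly when adjoining it as a column
    # does not increase the rank
    print(Matrix.hstack(w1, w2).rank())       # 2
    print(Matrix.hstack(w1, w2, w3).rank())   # still 2, so the spans agree

    # the vectors from part 3, by contrast, span all of R^3
    v1, v2, v3 = Matrix([1, 0, 1]), Matrix([0, 1, 1]), Matrix([1, 1, 0])
    print(Matrix.hstack(v1, v2, v3).rank())   # 3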
As the activity shows, it is possible to represent some subspaces as the span of more than one set of vectors. We are particularly interested in representing a subspace as the span of a linearly independent set of vectors.

Definition 4.6.4.

A basis for a subspace \(S\) of \(\real^p\) is a set of vectors in \(S\) that are linearly independent and whose span is \(S\text{.}\) We say that the dimension of the subspace \(S\text{,}\) denoted \(\dim S\text{,}\) is the number of vectors in any basis.

Example 4.6.5. A subspace of \(\real^4\).

Suppose we have the 4-dimensional vectors \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) that define the subspace \(S = \laspan{\vvec_1,\vvec_2,\vvec_3}\) of \(\real^4\text{.}\) Suppose also that
\begin{equation*} \begin{bmatrix} \vvec_1 \amp \vvec_2 \amp \vvec_3 \end{bmatrix} \sim \begin{bmatrix} 1 \amp -1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ \end{bmatrix}. \end{equation*}
From the reduced row echelon form of the matrix, we see that \(\vvec_2 = -\vvec_1\text{.}\) Therefore, any linear combination of \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) can be rewritten
\begin{equation*} c_1\vvec_1 + c_2\vvec_2 + c_3 \vvec_3 = (c_1-c_2)\vvec_1 + c_3\vvec_3 \end{equation*}
as a linear combination of \(\vvec_1\) and \(\vvec_3\text{.}\) This tells us that
\begin{equation*} S = \laspan{\vvec_1,\vvec_2,\vvec_3} = \laspan{\vvec_1,\vvec_3}. \end{equation*}
Furthermore, the reduced row echelon form of the matrix shows that \(\vvec_1\) and \(\vvec_3\) are linearly independent. Therefore, \(\{\vvec_1,\vvec_3\}\) is a basis for \(S\text{,}\) which means that \(S\) is a two-dimensional subspace of \(\real^4\text{.}\)
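The example specifies only the reduced row echelon form, not the vectors themselves, so any concrete check requires choosing vectors consistent with that form. The sketch below uses one such hypothetical choice (the particular entries are ours) and confirms that \(\vvec_1\) and \(\vvec_3\) form a basis for a two-dimensional subspace of \(\real^4\text{.}\)

    from sympy import Matrix

    # hypothetical vectors in R^4 chosen (by us) so that v2 = -v1 and
    # v1, v3 are linearly independent, matching the reduced row echelon
    # form given in the example
    v1 = Matrix([1, 2, 0, 1])
    v2 = -v1
    v3 = Matrix([0, 1, 0, 3])

    A = Matrix.hstack(v1, v2, v3)
    R, pivots = A.rref()
    print(R)        # rows (1, -1, 0), (0, 0, 1), and two zero rows
    print(pivots)   # (0, 2): the pivot columns are v1 and v3
    print(Matrix.hstack(v1, v3).rank())   # 2, the dimension of S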
Subspaces of \(\real^3\) are either
  • 0-dimensional, consisting of the single vector \(\zerovec\text{,}\)
  • a 1-dimensional line,
  • a 2-dimensional plane, or
  • the 3-dimensional subspace \(\real^3\text{.}\)
There is no 4-dimensional subspace of \(\real^3\) because there is no linearly independent set of four vectors in \(\real^3\text{.}\)
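This last claim is easy to test numerically: four vectors in \(\real^3\) form a \(3\by 4\) matrix with at most three pivot positions, so the homogeneous equation always has a nonzero solution. A small sketch, using SymPy and randomly chosen vectors of our own:

    from sympy import Matrix
    from random import randint

    # four vectors in R^3 placed as the columns of a 3 x 4 matrix;
    # with at most three pivot positions, at least one column is free,
    # so A x = 0 has a nonzero solution and the vectors are dependent
    cols = [Matrix([randint(-5, 5) for _ in range(3)]) for _ in range(4)]
    A = Matrix.hstack(*cols)
    print(A.rank())                  # at most 3
    print(len(A.nullspace()) >= 1)   # True: a nonzero vector in nul(A)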
There are two important subspaces associated to any matrix, each of which springs from one of our two fundamental questions, as we will now see.

Subsection 4.6.2 The column space of \(A\)

The first subspace associated to a matrix that we’ll consider is its column space.

Definition 4.6.6.

If \(A\) is an \(m\by n\) matrix, we call the span of its columns the column space of \(A\) and denote it as \(\col(A)\text{.}\)
Notice that the columns of \(A\) are vectors in \(\real^m\text{,}\) which means that any linear combination of the columns is also in \(\real^m\text{.}\) Since the column space is described as the span of a set of vectors, we see that \(\col(A)\) is a subspace of \(\real^m\text{.}\)

Activity 4.6.3.

We will explore some column spaces in this activity.
  1. Consider the matrix
    \begin{equation*} A= \left[\begin{array}{rrr} \vvec_1 \amp \vvec_2 \amp \vvec_3 \end{array}\right] = \left[\begin{array}{rrr} 1 \amp 3 \amp -1 \\ -2 \amp 0 \amp -4 \\ 1 \amp 2 \amp 0 \\ \end{array}\right]. \end{equation*}
    Since \(\col(A)\) is the span of the columns, we have
    \begin{equation*} \col(A) = \laspan{\vvec_1,\vvec_2,\vvec_3}. \end{equation*}
    Explain why \(\vvec_3\) can be written as a linear combination of \(\vvec_1\) and \(\vvec_2\) and why \(\col(A)=\laspan{\vvec_1,\vvec_2}\text{.}\)
  2. Explain why the vectors \(\vvec_1\) and \(\vvec_2\) form a basis for \(\col(A)\) and why \(\col(A)\) is a 2-dimensional subspace of \(\real^3\) and therefore a plane.
  3. Now consider the matrix \(B\) and its reduced row echelon form:
    \begin{equation*} B = \left[\begin{array}{rrrr} -2 \amp -4 \amp 0 \amp 6 \\ 1 \amp 2 \amp 0 \amp -3 \\ \end{array}\right] \sim \left[\begin{array}{rrrr} 1 \amp 2 \amp 0 \amp -3 \\ 0 \amp 0 \amp 0 \amp 0 \\ \end{array}\right]. \end{equation*}
    Explain why \(\col(B)\) is a 1-dimensional subspace of \(\real^2\) and is therefore a line.
  4. For a general matrix \(A\text{,}\) what is the relationship between the dimension \(\dim~\col(A)\) and the number of pivot positions in \(A\text{?}\)
  5. How does the location of the pivot positions indicate a basis for \(\col(A)\text{?}\)
  6. If \(A\) is an invertible \(9\by9\) matrix, what can you say about the column space \(\col(A)\text{?}\)
  7. Suppose that \(A\) is an \(8\by 10\) matrix and that \(\col(A) = \real^8\text{.}\) If \(\bvec\) is an 8-dimensional vector, what can you say about the equation \(A\xvec = \bvec\text{?}\)
Solution.
  1. We have
    \begin{equation*} A=\left[\begin{array}{rrr} 1 \amp 3 \amp -1 \\ -2 \amp 0 \amp -4 \\ 1 \amp 2 \amp 0 \\ \end{array}\right]\sim \left[\begin{array}{rrr} 1 \amp 0 \amp 2 \\ 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \\ \end{array}\right]\text{,} \end{equation*}
    which shows that the vectors are not linearly independent and, in fact, that \(\vvec_3 = 2\vvec_1 - \vvec_2\text{.}\) As we’ve seen several times, this means that any linear combination of \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) can be written as a linear combination of \(\vvec_1\) and \(\vvec_2\) alone and hence that
    \begin{equation*} \col(A)=\laspan{\vvec_1,\vvec_2,\vvec_3} = \laspan{\vvec_1,\vvec_2}. \end{equation*}
  2. The reduced row echelon form of \(A\) shows that \(\vvec_1\) and \(\vvec_2\) are linearly independent. We also know that the span of these two vectors is \(\col(A)\text{.}\) Therefore, they form a basis for \(\col(A)\text{.}\) Since this basis contains two vectors, \(\col(A)\) is a 2-dimensional subspace of \(\real^3\) and is therefore a plane.
  3. Denoting the columns of \(B\) as \(\vvec_i\text{,}\) the reduced row echelon form shows that \(\vvec_2=2\vvec_1\text{,}\) \(\vvec_3=0\vvec_1\text{,}\) and \(\vvec_4 = -3\vvec_1\text{.}\) Therefore, any linear combination of \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) \(\vvec_3\text{,}\) and \(\vvec_4\) can be written as a linear combination of \(\vvec_1\) alone. This means that \(\vvec_1\) forms a basis for \(\col(B)\text{,}\) which is then the line consisting of all scalar multiples of \(\vvec_1\text{.}\)
  4. The number of vectors in a basis of \(\col(A)\) equals the number of pivot positions. Therefore, \(\dim~\col(A)\) equals the number of pivot positions in \(A\text{.}\)
  5. As the examples in this activity illustrate, the columns of \(A\) that contain pivot positions form a basis for \(\col(A)\text{.}\)
  6. If \(A\) is invertible, then it has a pivot position in every row, which means that the span of the columns is \(\real^9\text{.}\) Therefore, \(\col(A) = \real^9\text{.}\)
  7. Since \(\col(A)=\real^8\text{,}\) we know that every 8-dimensional vector \(\bvec\) is in \(\col(A)\text{.}\) This means that \(\bvec\) is in the span of the columns of \(A\) so the equation \(A\xvec = \bvec\) must be consistent.
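The recipe described in parts 4 and 5, reading off the pivot columns of \(A\) as a basis for \(\col(A)\text{,}\) can be carried out in SymPy as follows; the use of SymPy here is our own illustration, not part of the text.

    from sympy import Matrix

    A = Matrix([[ 1, 3, -1],
                [-2, 0, -4],
                [ 1, 2,  0]])

    # the columns of A holding pivot positions form a basis for col(A)
    R, pivots = A.rref()
    basis = [A.col(i) for i in pivots]
    print(pivots)             # (0, 1)
    for b in basis:
        print(b.T)            # columns (1, -2, 1) and (3, 0, 2)

    # SymPy's columnspace() returns the same pivot columns of A
    print([c.T for c in A.columnspace()])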

Example 4.6.7.

Consider the matrix \(A\) and its reduced row echelon form:
\begin{equation*} A = \left[\begin{array}{rrrrr} 2 \amp 0 \amp -4 \amp -6 \amp 0 \\ -4 \amp -1 \amp 7 \amp 11 \amp 2 \\ 0 \amp -1 \amp -1 \amp -1 \amp 2 \\ \end{array}\right] \sim \left[\begin{array}{rrrrr} 1 \amp 0 \amp -2 \amp -3 \amp 0 \\ 0 \amp 1 \amp 1 \amp 1 \amp -2 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \\ \end{array}\right], \end{equation*}
and denote the columns of \(A\) as \(\vvec_1,\vvec_2,\ldots,\vvec_5\text{.}\)
It is certainly true that \(\col(A) = \laspan{\vvec_1,\vvec_2,\ldots,\vvec_5}\) by the definition of the column space. However, the reduced row echelon form of the matrix shows us that the vectors are not linearly independent so \(\vvec_1,\vvec_2,\ldots,\vvec_5\) do not form a basis for \(\col(A)\text{.}\)
From the reduced row echelon form, however, we can see that
\begin{equation*} \begin{aligned} \vvec_3 \amp {}={} -2\vvec_1 + \vvec_2 \\ \vvec_4 \amp {}={} -3\vvec_1 + \vvec_2 \\ \vvec_5 \amp {}={} -2\vvec_2 \\ \end{aligned}\text{.} \end{equation*}
This means that any linear combination of \(\vvec_1,\vvec_2,\ldots,\vvec_5\) can be written as a linear combination of just \(\vvec_1\) and \(\vvec_2\text{.}\) Therefore, we see that \(\col(A) = \laspan{\vvec_1,\vvec_2}\text{.}\)
Moreover, the reduced row echelon form shows that \(\vvec_1\) and \(\vvec_2\) are linearly independent, which implies that they form a basis for \(\col(A)\text{.}\) This means that \(\col(A)\) is a 2-dimensional subspace of \(\real^3\text{,}\) which is a plane in \(\real^3\text{,}\) having basis
\begin{equation*} \threevec{2}{-4}{0}, \qquad \threevec{0}{-1}{-1}\text{.} \end{equation*}
In general, a column without a pivot position can be written as a linear combination of the columns that have pivot positions. This means that a basis for \(\col(A)\) will always be given by the columns of \(A\) having pivot positions. This leads us to the following definition and proposition.

Definition 4.6.8.

The rank of a matrix \(A\) is the number of pivot positions in \(A\) and is denoted by \(\rank(A)\text{.}\)

Proposition 4.6.9.

If \(A\) is a matrix, then the columns of \(A\) containing pivot positions form a basis for \(\col(A)\text{,}\) and therefore \(\dim~\col(A) = \rank(A)\text{.}\)
For example, the rank of the matrix \(A\) in Example 4.6.7 is two because there are two pivot positions. A basis for \(\col(A)\) is given by the first two columns of \(A\) since those columns have pivot positions.

Caution.

Remember, we determine the pivot positions by looking at the reduced row echelon form of \(A\text{.}\) However, we form a basis of \(\col(A)\) from the columns of \(A\) rather than the columns of the reduced row echelon matrix.
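To illustrate this caution with the matrix from Example 4.6.7, the following sketch (ours, using SymPy) reads the pivot positions from the reduced row echelon form but takes the basis vectors from \(A\) itself.

    from sympy import Matrix

    A = Matrix([[ 2,  0, -4, -6, 0],
                [-4, -1,  7, 11, 2],
                [ 0, -1, -1, -1, 2]])

    R, pivots = A.rref()
    print(A.rank())                        # 2, the number of pivot positions

    # the basis vectors come from the columns of A itself ...
    print([A.col(i).T for i in pivots])    # (2, -4, 0) and (0, -1, -1)

    # ... not from the columns of the reduced row echelon form,
    # which are generally different vectors
    print([R.col(i).T for i in pivots])    # (1, 0, 0) and (0, 1, 0)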

Subsection 4.6.3 The null space of \(A\)

The second subspace associated to a matrix is its null space.

Definition 4.6.10.

If \(A\) is an \(m\by n\) matrix, we call the subset of vectors \(\xvec\) in \(\real^n\) satisfying \(A\xvec = \zerovec\) the null space of \(A\) and denote it by \(\nul(A)\text{.}\)
Remember that a subspace is a subset that can be represented as the span of a set of vectors. The column space of \(A\text{,}\) which is simply the span of the columns of \(A\text{,}\) fits this definition. It may not be immediately clear how the null space of \(A\text{,}\) which is the solution space of the equation \(A\xvec = \zerovec\text{,}\) does, but we will see that \(\nul(A)\) is a subspace of \(\real^n\text{.}\)

Activity 4.6.4.

We will explore some null spaces in this activity and see why \(\nul(A)\) satisfies the definition of a subspace.
  1. Consider the matrix
    \begin{equation*} A=\begin{bmatrix} 1 \amp 3 \amp -1 \amp 2 \\ -2 \amp 0 \amp -4 \amp 2 \\ 1 \amp 2 \amp 0 \amp 1 \end{bmatrix} \end{equation*}
    and give a parametric description of the solution space to the equation \(A\xvec = \zerovec\text{.}\) In other words, give a parametric description of \(\nul(A)\text{.}\)
  2. This parametric description shows that the vectors satisfying the equation \(A\xvec=\zerovec\) can be written as a linear combination of a set of vectors. In other words, this description shows why \(\nul(A)\) is the span of a set of vectors and is therefore a subspace. Identify a set of vectors whose span is \(\nul(A)\text{.}\)
  3. Use this set of vectors to find a basis for \(\nul(A)\) and state the dimension of \(\nul(A)\text{.}\)
  4. The null space \(\nul(A)\) is a subspace of \(\real^p\) for which value of \(p\text{?}\)
  5. Now consider the matrix \(B\) whose reduced row echelon form is given by
    \begin{equation*} B \sim \left[\begin{array}{rrrr} 1 \amp 2 \amp 0 \amp -3 \\ 0 \amp 0 \amp 0 \amp 0 \\ \end{array}\right]. \end{equation*}
    Give a parametric description of \(\nul(B)\text{.}\)
  6. The parametric description gives a set of vectors that span \(\nul(B)\text{.}\) Explain why this set of vectors is linearly independent and hence forms a basis. What is the dimension of \(\nul(B)\text{?}\)
  7. For a general matrix \(A\text{,}\) how does the number of pivot positions indicate the dimension of \(\nul(A)\text{?}\)
  8. Suppose that the columns of a matrix \(A\) are linearly independent. What can you say about \(\nul(A)\text{?}\)
Solution.
  1. We have
    \begin{equation*} A=\begin{bmatrix} 1 \amp 3 \amp -1 \amp 2 \\ -2 \amp 0 \amp -4 \amp 2 \\ 1 \amp 2 \amp 0 \amp 1 \\ \end{bmatrix}\sim \begin{bmatrix} 1 \amp 0 \amp 2 \amp -1 \\ 0 \amp 1 \amp -1 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0\\ \end{bmatrix}, \end{equation*}
    which leads to the parametric description of the solution space to the homogeneous equation:
    \begin{equation*} \xvec=x_3\fourvec{-2}{1}{1}0 + x_4\fourvec{1}{-1}01. \end{equation*}
  2. The parametric description shows that every solution to the equation \(A\xvec=\zerovec\) is a linear combination of \(\wvec_1=\fourvec{-2}110\) and \(\wvec_2=\fourvec1{-1}01\text{.}\)
  3. The vectors \(\wvec_1\) and \(\wvec_2\) are linearly independent so they form a basis for \(\nul(A)\text{.}\) Therefore, \(\nul(A)\) is 2-dimensional.
  4. The vectors in \(\nul(A)\) are 4-dimensional so \(\nul(A)\) is a subspace of \(\real^4\text{.}\)
  5. A parametric description of the null space is \(\xvec=x_2\fourvec{-2}{1}{0}{0} + x_3\fourvec0010 + x_4\fourvec{3}{0}{0}{1}\text{,}\) which shows that \(\nul(B)\) is the span of the vectors
     \begin{equation*} \fourvec{-2}100,~~~\fourvec0010,~~~\fourvec30{0}1. \end{equation*}
  6. We can check that these three vectors are linearly independent, so they form a basis for \(\nul(B)\text{.}\) This means that \(\nul(B)\) is 3-dimensional.
  7. The number of vectors in a basis of the null space equals the number of free variables that appear in the equation \(A\xvec=\zerovec\text{,}\) which is the number of columns that do not have pivot positions. This says that \(\dim~\nul(A)\) equals the number of columns of \(A\) minus the number of pivot positions.
  8. If the columns are linearly independent, then the homogeneous equation \(A\xvec=\zerovec\) has only the zero solution \(\xvec=\zerovec\text{.}\) Therefore, \(\nul(A) = \{\zerovec\}\text{.}\)
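The null space computations in this activity can also be checked directly; the sketch below (again our own, using SymPy) finds a basis for \(\nul(A)\) for the first matrix in the activity and verifies that each basis vector satisfies \(A\xvec=\zerovec\text{.}\)

    from sympy import Matrix

    A = Matrix([[ 1, 3, -1, 2],
                [-2, 0, -4, 2],
                [ 1, 2,  0, 1]])

    # a basis for nul(A); expect (-2, 1, 1, 0) and (1, -1, 0, 1)
    for w in A.nullspace():
        print(w.T)
        print((A * w).T)          # each product is the zero vector

    # dim nul(A) equals the number of columns without a pivot position
    R, pivots = A.rref()
    print(A.cols - len(pivots))   # 4 - 2 = 2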

Example 4.6.11.

Consider the matrix \(A\) along with its reduced row echelon form:
\begin{equation*} A = \left[\begin{array}{rrrrr} 2 \amp 0 \amp -4 \amp -6 \amp 0 \\ -4 \amp -1 \amp 7 \amp 11 \amp 2 \\ 0 \amp -1 \amp -1 \amp -1 \amp 2 \\ \end{array}\right] \sim \left[\begin{array}{rrrrr} 1 \amp 0 \amp -2 \amp -3 \amp 0 \\ 0 \amp 1 \amp 1 \amp 1 \amp -2 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \\ \end{array}\right]\text{.} \end{equation*}
To find a parametric description of the solution space to \(A\xvec=\zerovec\text{,}\) imagine that we augment both \(A\) and its reduced row echelon form by a column of zeroes, which leads to the equations
\begin{equation*} \begin{alignedat}{6} x_1 \amp \amp \amp {}-{} \amp 2x_3 \amp {}-{} \amp 3x_4 \amp \amp \amp {}={} \amp 0 \\ \amp \phantom{+} \amp x_2 \amp {}+{} \amp x_3 \amp {}+{} \amp x_4 \amp {}-{} \amp 2x_5 \amp {}={} \amp 0 \\ \end{alignedat}\text{.} \end{equation*}
Notice that \(x_3\text{,}\) \(x_4\text{,}\) and \(x_5\) are free variables so we rewrite these equations as
\begin{equation*} \begin{aligned} x_1 \amp {}={} 2x_3 + 3x_4 \\ x_2 \amp {}={} -x_3 - x_4 + 2x_5 \\ \end{aligned}\text{.} \end{equation*}
In vector form, we have
\begin{equation*} \begin{aligned} \xvec \amp {}={} \fivevec{x_1}{x_2}{x_3}{x_4}{x_5} = \fivevec{2x_3 + 3x_4}{-x_3-x_4+2x_5}{x_3}{x_4}{x_5} \\ \\ \amp {}={} x_3\fivevec{2}{-1}{1}{0}{0} +x_4\fivevec{3}{-1}{0}{1}{0} +x_5\fivevec{0}{2}{0}{0}{1} \end{aligned}\text{.} \end{equation*}
This expression says that any vector \(\xvec\) satisfying \(A\xvec= \zerovec\) is a linear combination of the vectors
\begin{equation*} \vvec_1 = \fivevec{2}{-1}{1}{0}{0}, \quad \vvec_2 = \fivevec{3}{-1}{0}{1}{0}, \quad \vvec_3 = \fivevec{0}{2}{0}{0}{1}\text{.} \end{equation*}
It is straightforward to check that these vectors are linearly independent, which means that \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) form a basis for \(\nul(A)\text{,}\) a 3-dimensional subspace of \(\real^5\text{.}\)
As illustrated in this example, the dimension of \(\nul(A)\) is equal to the number of free variables in the equation \(A\xvec=\zerovec\text{,}\) which equals the number of columns of \(A\) without pivot positions or the number of columns of \(A\) minus the number of pivot positions.

Proposition 4.6.12.

If \(A\) is an \(m\by n\) matrix, then \(\dim~\nul(A) = n - \rank(A)\text{.}\)
Combining Proposition 4.6.9 and Proposition 4.6.12 shows that, for an \(m\by n\) matrix \(A\text{,}\)
\begin{equation*} \dim~\col(A) + \dim~\nul(A) = \rank(A) + \left(n - \rank(A)\right) = n\text{,} \end{equation*}
the number of columns of \(A\text{.}\)
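This relationship between the dimensions can be spot-checked numerically; the following sketch (ours, using SymPy) verifies it for the matrix of Example 4.6.11.

    from sympy import Matrix

    A = Matrix([[ 2,  0, -4, -6, 0],
                [-4, -1,  7, 11, 2],
                [ 0, -1, -1, -1, 2]])

    rank = A.rank()
    nullity = len(A.nullspace())
    print(rank, nullity, A.cols)       # 2 3 5
    print(rank + nullity == A.cols)    # True: rank(A) + dim nul(A) = n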

Subsection 4.6.4 Summary

Once again, we find ourselves revisiting our two fundamental questions concerning the existence and uniqueness of solutions to linear systems. The column space \(\col(A)\) contains all the vectors \(\bvec\) for which the equation \(A\xvec = \bvec\) is consistent. The null space \(\nul(A)\) is the solution space to the equation \(A\xvec = \zerovec\text{,}\) and its size reflects the uniqueness of solutions to this and other equations.
  • A subspace \(S\) of \(\real^p\) is a subset of \(\real^p\) that can be represented as the span of a set of vectors. A basis of \(S\) is a linearly independent set of vectors whose span is \(S\text{.}\)
  • If \(A\) is an \(m\by n\) matrix, the column space \(\col(A)\) is the span of the columns of \(A\) and forms a subspace of \(\real^m\text{.}\)
  • A basis for \(\col(A)\) is found from the columns of \(A\) that have pivot positions. The dimension is therefore \(\dim~\col(A) = \rank(A)\text{.}\)
  • The null space \(\nul(A)\) is the solution space to the homogeneous equation \(A\xvec = \zerovec\) and is a subspace of \(\real^n\text{.}\)
  • A basis for \(\nul(A)\) is found through a parametric description of the solution space of \(A\xvec = \zerovec\text{,}\) and we have that \(\dim~\nul(A) = n - \rank(A)\text{.}\)

Exercises 4.6.5 Exercises

1.

Suppose that \(A\) and its reduced row echelon form are
\begin{equation*} A = \left[\begin{array}{rrrrrr} 0 \amp 2 \amp 0 \amp -4 \amp 0 \amp 6 \\ 0 \amp -4 \amp -1 \amp 7 \amp 0 \amp -16 \\ 0 \amp 6 \amp 0 \amp -12 \amp 3 \amp 15 \\ 0 \amp 4 \amp -1 \amp -9 \amp 0 \amp 8 \\ \end{array}\right] \sim \left[\begin{array}{rrrrrr} 0 \amp 1 \amp 0 \amp -2 \amp 0 \amp 3 \\ 0 \amp 0 \amp 1 \amp 1 \amp 0 \amp 4 \\ 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \\ \end{array}\right]\text{.} \end{equation*}
  1. The null space \(\nul(A)\) is a subspace of \(\real^p\) for what \(p\text{?}\) The column space \(\col(A)\) is a subspace of \(\real^q\) for what \(q\text{?}\)
  2. What are the dimensions \(\dim~\nul(A)\) and \(\dim~\col(A)\text{?}\)
  3. Find a basis for the column space \(\col(A)\text{.}\)
  4. Find a basis for the null space \(\nul(A)\text{.}\)

2.

Suppose that
\begin{equation*} A = \left[\begin{array}{rrrr} 2 \amp 0 \amp -2 \amp -4 \\ -2 \amp -1 \amp 1 \amp 2 \\ 0 \amp -1 \amp -1 \amp -2 \end{array}\right]\text{.} \end{equation*}
  1. Is the vector \(\threevec{0}{-1}{-1}\) in \(\col(A)\text{?}\)
  2. Is the vector \(\fourvec{2}{1}{0}{2}\) in \(\col(A)\text{?}\)
  3. Is the vector \(\threevec{2}{-2}{0}\) in \(\nul(A)\text{?}\)
  4. Is the vector \(\fourvec{1}{-1}{3}{-1}\) in \(\nul(A)\text{?}\)
  5. Is the vector \(\fourvec{1}{0}{1}{-1}\) in \(\nul(A)\text{?}\)

3.

Determine whether the following statements are true or false and provide a justification for your response. Unless otherwise stated, assume that \(A\) is an \(m\by n\) matrix.
  1. If \(A\) is a \(127\by 341\) matrix, then \(\nul(A)\) is a subspace of \(\real^{127}\text{.}\)
  2. If \(\dim~\nul(A) = 0\text{,}\) then the columns of \(A\) are linearly independent.
  3. If \(\col(A) = \real^m\text{,}\) then \(A\) is invertible.
  4. If \(A\) has a pivot position in every column, then \(\nul(A) = \real^n\text{.}\)
  5. If \(\col(A) = \real^m\) and \(\nul(A) = \{\zerovec\}\text{,}\) then \(A\) is invertible.

4.

Explain why the following statements are true.
  1. If \(B\) is invertible, then \(\nul(BA) = \nul(A)\text{.}\)
  2. If \(B\) is invertible, then \(\col(AB) = \col(A)\text{.}\)
  3. If \(A\sim A'\text{,}\) then \(\nul(A) = \nul(A')\text{.}\)

5.

For each of the following conditions, construct a \(3\by 3\) matrix having the given properties.
  1. \(\dim~\nul(A) = 0\text{.}\)
  2. \(\dim~\nul(A) = 1\text{.}\)
  3. \(\dim~\nul(A) = 2\text{.}\)
  4. \(\dim~\nul(A) = 3\text{.}\)

6.

Suppose that \(A\) is a \(3\by 4\) matrix.
  1. Is it possible that \(\dim~\nul(A) = 0\text{?}\)
  2. If \(\dim~\nul(A) = 1\text{,}\) what can you say about \(\col(A)\text{?}\)
  3. If \(\dim~\nul(A) = 2\text{,}\) what can you say about \(\col(A)\text{?}\)
  4. If \(\dim~\nul(A) = 3\text{,}\) what can you say about \(\col(A)\text{?}\)
  5. If \(\dim~\nul(A) = 4\text{,}\) what can you say about \(\col(A)\text{?}\)

7.

Consider the vectors
\begin{equation*} \vvec_1 = \threevec{2}{3}{-1}, \vvec_2 = \threevec{-1}{2}{4}, \wvec_1 = \fourvec{3}{-1}{1}{0}, \wvec_2 = \fourvec{-2}{4}{0}{1} \end{equation*}
and suppose that \(A\) is a matrix such that \(\col(A)=\laspan{\vvec_1,\vvec_2}\) and \(\nul(A) = \laspan{\wvec_1,\wvec_2}\text{.}\)
  1. What are the dimensions of \(A\text{?}\)
  2. Find such a matrix \(A\text{.}\)

8.

Suppose that \(A\) is an \(8\by 8\) matrix and that \(\det A = 14\text{.}\)
  1. What can you conclude about \(\nul(A)\text{?}\)
  2. What can you conclude about \(\col(A)\text{?}\)

9.

Suppose that \(A\) is a matrix and there is an invertible matrix \(P\) such that
\begin{equation*} A = P~\left[\begin{array}{rrr} 2 \amp 0 \amp 0 \\ 0 \amp -3 \amp 0 \\ 0 \amp 0 \amp 1 \\ \end{array}\right]~P^{-1}\text{.} \end{equation*}
  1. What can you conclude about \(\nul(A)\text{?}\)
  2. What can you conclude about \(\col(A)\text{?}\)

10.

In this section, we saw that the solution space to the homogeneous equation \(A\xvec = \zerovec\) is a subspace of \(\real^p\) for some \(p\text{.}\) In this exercise, we will investigate whether the solution space to another equation \(A\xvec = \bvec\) can form a subspace.
Let’s consider the matrix
\begin{equation*} A = \left[\begin{array}{rr} 2 \amp -4 \\ -1 \amp 2 \\ \end{array}\right]\text{.} \end{equation*}
  1. Find a parametric description of the solution space to the homogeneous equation \(A\xvec = \zerovec\text{.}\)
  2. Graph the solution space to the homogeneous equation.
  3. Find a parametric description of the solution space to the equation \(A\xvec = \twovec{4}{-2}\) and graph it on the same axes.
  4. Is the solution space to the equation \(A\xvec = \twovec{4}{-2}\) a subspace of \(\real^2\text{?}\)
  5. Find a parametric description of the solution space to the equation \(A\xvec=\twovec{-8}{4}\) and graph it on the same axes.
  6. What can you say about all the solution spaces to equations of the form \(A\xvec = \bvec\) when \(\bvec\) is a vector in \(\col(A)\text{?}\)
  7. Suppose that the solution space to the equation \(A\xvec = \bvec\) forms a subspace. Explain why it must be true that \(\bvec = \zerovec\text{.}\)