
Section 4.1 Vector Spaces

We will
  • consider the general definition of a vector space,
  • recognize many of the properties of a vector space from previous material,
  • practice testing the properties,
  • name a few special spaces, and
  • find (calculate) those special spaces for specific problems.

Subsection 4.1.1 Definition

The definition has a lot of pieces. We will work through them slowly to gain an understanding.

Definition 4.1.1. Vector Space.

A vector space is a set of objects, called vectors, and a set of scalars with two operations called vector addition and scalar multiplication such that the following properties are met.
  1. \(\vec{u}+\vec{v}\) is another vector.
  2. \(c\vec{u}\) is another vector.
  3. \(0\vec{u}=\vec{0}\text{.}\)
  4. \(1\vec{u}=\vec{u}\text{.}\)
  5. \(\vec{u}+\vec{v}=\vec{v}+\vec{u}\text{.}\)
  6. \(\vec{u}+(\vec{v}+\vec{w})=(\vec{u}+\vec{v})+\vec{w}\text{.}\)
  7. \(c(\vec{u}+\vec{v})=c\vec{u}+c\vec{v}\text{.}\)
  8. \((c+d)\vec{u}=c\vec{u}+d\vec{u}\text{.}\)
  9. \(c(d\vec{u})=(cd)\vec{u}\text{.}\)
  10. There exists a vector \(\vec{0}\) such that \(\vec{u}+\vec{0}=\vec{u}\) for all \(\vec{u}\text{.}\)
  11. For each \(\vec{u}\) there exists a \(\vec{u}_i\) such that \(\vec{u}+\vec{u}_i=\vec{0}\text{.}\)
Each property has a name; most of these names we learned in algebra classes. We will review them to help us remember the properties.
The first two items are called closure: if you add two vectors, or multiply a vector by a scalar, the resulting object is still a vector. In contrast, you may recall that the dot product of two vectors produces a scalar (a different kind of object); the dot product, therefore, is not closed.
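The contrast between closed and non-closed operations can be illustrated numerically. The following sketch (in Python with NumPy, used here only for illustration) checks what kind of object each operation returns:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Vector addition and scalar multiplication stay inside R^3 (closure):
w = u + v          # still a 3-vector
s = 2.5 * u        # still a 3-vector
assert w.shape == (3,) and s.shape == (3,)

# The dot product leaves the set: it returns a scalar, not a vector.
d = np.dot(u, v)
assert np.ndim(d) == 0   # a 0-dimensional result, i.e. a scalar
```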
The third property does not have a common name, but it is the well-recognized property that multiplying by the zero scalar always results in the zero vector.
The fourth property is the scalar identity property. Just as we have identity matrices, every vector space has a scalar identity: an element that, when multiplied by a vector, does not change that vector.

Checkpoint 4.1.2.

The next two properties are scalar over vector distributive and vector over scalar distributive.

Checkpoint 4.1.3.

What property does Item 9 look like?
The final two properties are vector identity and vector inverse. Note that matrix identity and inverses are similarly defined.
Our next step in understanding vector spaces (specifically the parts of the definition) is to identify some objects we already know, and maybe love, as vector spaces.

Checkpoint 4.1.4.

Below are candidates for vector spaces. For each candidate (column) select example vectors (objects from the set) and test if each condition (part of the definition) of a vector space is met.
Note that a vector is just an object. For example, in the last case a vector is a polynomial; it is not a list of polynomials. You must now divorce from your mind the old, unnecessarily specific meaning of the word vector.
  • Candidate 1 — Vectors: \(3 \times 3\) matrices. Scalars: \(\R\text{.}\) Vector addition: matrix addition. Scalar multiplication: real number multiplication.
  • Candidate 2 — Vectors: \(3 \times 3\) matrices with determinant 1. Scalars: \(\R\text{.}\) Vector addition: matrix addition. Scalar multiplication: real number multiplication.
  • Candidate 3 — Vectors: polynomials of degree at most 2. Scalars: \(\R\text{.}\) Vector addition: polynomial addition. Scalar multiplication: real number multiplication.

Example 4.1.5.

\(3 \times 3\) matrices with matrix addition and (real) scalar multiplication forms a vector space.
Closure (addition)
By definition of matrix addition the sum of \(3 \times 3\) matrices is a \(3 \times 3\) matrix.
Closure (scalar)
By definition of scalar multiplication multiplying a \(3 \times 3\) matrix by a scalar produces a \(3 \times 3\) matrix.
Zero scalar
Show \(0\vec{x}=\vec{0}\text{.}\)
\begin{equation*} 0\left[ \begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right] = \left[ \begin{array}{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right] = \vec{0}. \end{equation*}
One scalar
The proof is similar to the previous, replacing 0 with 1 and noticing that none of the matrix elements change.
Commutative
Show \(\vec{u}+\vec{v}=\vec{v}+\vec{u}\text{.}\)
Essentially this is because matrix addition is defined in terms of scalar addition (in each element).
\begin{align*} \left[ \begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right] + \left[ \begin{array}{rrr} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{array} \right] = \\ \text{def. matrix addition} \\ \left[ \begin{array}{rrr} a_{11}+b_{11} & a_{12}+b_{12} & a_{13}+b_{13} \\ a_{21}+b_{21} & a_{22}+b_{22} & a_{23}+b_{23} \\ a_{31}+b_{31} & a_{32}+b_{32} & a_{33}+b_{33} \end{array} \right] & = \\ \text{commutativity of reals}\\ \left[ \begin{array}{rrr} b_{11}+a_{11} & b_{12}+a_{12} & b_{13}+a_{13} \\ b_{21}+a_{21} & b_{22}+a_{22} & b_{23}+a_{23} \\ b_{31}+a_{31} & b_{32}+a_{32} & b_{33}+a_{33} \end{array} \right] & = \\ \text{definition of matrix addition}\\ \left[ \begin{array}{rrr} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{array} \right]+\left[ \begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right]. \end{align*}
Vector associative
The proof is similar to the previous. Write out the three matrices, do the arithmetic, note it ends the way we need. Details are left for an exercise.
Scalar properties
Proofs of scalar distribution forms also result from writing down the starting point, doing the arithmetic, and recognizing the result.
Additive identity
The additive identity is the matrix containing all zeros. As a result when adding an entry \(m_{i,j}\) from the other matrix we obtain \(0+m_{i,j}=m_{i,j}\text{.}\) Thus this is an additive identity.
Multiplicative identity
The multiplicative identity is the scalar 1, whose property (\(1\vec{u}=\vec{u}\)) we have already proved.
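The axioms proved above can also be spot-checked numerically. This sketch (Python with NumPy, assumed only for illustration) tests several of the properties on random \(3 \times 3\) matrices; passing a spot check is not a proof, but it is a useful sanity test:

```python
import numpy as np

rng = np.random.default_rng(0)
U, V, W = (rng.standard_normal((3, 3)) for _ in range(3))
c, d = 2.0, -1.5

# Closure: sums and scalar multiples are again 3x3 matrices.
assert (U + V).shape == (3, 3)
assert (c * U).shape == (3, 3)

# Arithmetic axioms, checked numerically on this sample.
assert np.allclose(U + V, V + U)                    # commutativity
assert np.allclose(U + (V + W), (U + V) + W)        # associativity
assert np.allclose(c * (U + V), c * U + c * V)      # scalar over vector
assert np.allclose((c + d) * U, c * U + d * U)      # vector over scalar
assert np.allclose(c * (d * U), (c * d) * U)        # scalar associativity
assert np.allclose(0 * U, np.zeros((3, 3)))         # zero scalar
assert np.allclose(1 * U, U)                        # scalar identity
assert np.allclose(U + (-1) * U, np.zeros((3, 3)))  # additive inverse
```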

Subsection 4.1.2 Special Spaces

Each of the following is an example of a special case of a vector space. They illustrate cases in which it is easier (faster) to test whether all conditions from the vector space definition are met.

Example 4.1.6. Subspace.

The set of all polynomials of degree 2 or lower is a vector space. Is the set of all polynomials of degree 1 or lower a vector space?
Which properties from the vector space definition do we need to check? We will work through them one at a time.
  • Closure must be tested. We need to see if scalar multiplication or addition produce an object other than a polynomial of appropriate degree.
    We note that the sum of two polynomials is always a polynomial of degree no higher than the two added. Terms can be removed (subtracted out), but nothing of larger degree can be produced. Think of the addition algorithm if unconvinced.
    Scalar multiplication produces another polynomial having terms of the exact same degrees, so it is still a polynomial of degree at most 1.
  • Zero and one scalar properties do not need to be tested. They work on all the polynomials, so they must work on the subset.
  • Commutative, associative, and distributive properties do not need to be tested. They are not based on the specific vectors, but rather on the definition of the operations, which we know works.
  • Zero vector being contained is not immediately obvious. Perhaps we chose a subset that does not include it. However, the zero scalar times any vector is the zero vector. Because we have tested closure under scalar multiplication, the zero vector must be included.
  • Vector inverses also seem like they need to be checked. Perhaps the subset chosen does not include those elements. However, we can show that \(-1\vec{x}\) is always the additive inverse of \(\vec{x}\text{.}\) Thus scalar closure also guarantees the vector inverses.
In general we have noted that to test whether a subset of a vector space is also a vector space, we need check only the two closure properties. The rest of the properties have already been tested on the whole set and therefore apply to the part.
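The two closure checks for polynomials of degree at most 1 can be sketched numerically by storing each polynomial as its coefficient vector (a representation chosen here only for illustration):

```python
import numpy as np

# Represent a polynomial a0 + a1*x of degree at most 1 by its
# coefficient vector [a0, a1]; a degree-2 term would need a third slot.
p = np.array([3.0, -2.0])   # 3 - 2x
q = np.array([1.0, 5.0])    # 1 + 5x

# Closure under addition: coefficientwise sum, still only two slots.
assert (p + q).shape == (2,)
# Closure under scalar multiplication: the degrees of the terms cannot grow.
assert (4.0 * p).shape == (2,)
# The zero polynomial comes for free from scalar closure: 0 * p.
assert np.allclose(0.0 * p, np.zeros(2))
# So does the additive inverse: -1 * p.
assert np.allclose(p + (-1.0) * p, np.zeros(2))
```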

Example 4.1.7.

Is the set of polynomials of degree at most 2 with only positive coefficients a vector space?
Solution.
It is not. Note that if we multiply any of these polynomials by \(-3\text{,}\) the result is a polynomial with negative coefficients. Thus scalar closure fails.
Also note that the vector inverse of such a polynomial is a polynomial with all negative coefficients, so this property fails as well.
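The failure of scalar closure can be illustrated in one line (using the same coefficient-vector representation as before, chosen only for illustration):

```python
import numpy as np

# A degree-at-most-2 polynomial with all positive coefficients,
# stored as the coefficient vector [a0, a1, a2].
p = np.array([1.0, 2.0, 3.0])

# Multiplying by -3 leaves the set: every coefficient turns negative.
q = -3.0 * p
assert np.all(q < 0)   # q no longer has positive coefficients
```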

Example 4.1.8. Span.

Is the span of the set of vectors \(\{\vec{a},\vec{b},\vec{c}\}\) a vector space?
Note this is a subset of a vector space, so we only need to test closure. By definition a span is the set of all linear combinations, so closure is met. The span of any set of vectors is a subspace of the vector space from which they come.

Example 4.1.9. Null Space.

Is the set of all solutions (\(\vec{x}\)) to \(A\vec{x}=\vec{0}\) a vector space?
Note this is a subset of a vector space, so we only need to test closure. Suppose \(A\vec{x}=\vec{0}\) and \(A\vec{y}=\vec{0}\text{.}\) Consider \(a\vec{x}+b\vec{y}\text{.}\)
\begin{align*} A(a\vec{x}+b\vec{y}) & = \text{matrix distributive} \\ A(a\vec{x})+A(b\vec{y}) & = \text{matrix associative} \\ a(A\vec{x})+b(A\vec{y}) & = \text{given above} \\ a\vec{0}+b\vec{0} & = \vec{0}. \end{align*}
Thus \(a\vec{x}+b\vec{y}\) is in the null space (closure exists).
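This closure argument can be illustrated numerically with the matrix from Example 4.1.17, which has a nontrivial null space (Python with NumPy, used only as a check):

```python
import numpy as np

A = np.array([[1.0,  2.0, -1.0],
              [3.0,  2.0,  5.0],
              [0.0, -4.0,  8.0]])

# Two solutions of A x = 0 (this matrix's null space is one-dimensional,
# so they are multiples of each other, but the check itself is general).
x = np.array([-3.0, 2.0, 1.0])
y = 2.0 * x
assert np.allclose(A @ x, 0) and np.allclose(A @ y, 0)

# Any linear combination a x + b y is again a solution (closure).
a, b = 1.5, -4.0
assert np.allclose(A @ (a * x + b * y), 0)
```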

Example 4.1.10. Column Space.

Is the span of the columns of a matrix a vector space? Because matrix multiplication on the right is a linear combination of the columns, the column space is all of the products \(\vec{b}\) from \(A\vec{x}=\vec{b}\text{.}\)
Note this is a subset of the space of all vectors, so we only need to test closure. Suppose \(A\vec{x}=\vec{a}\) and \(A\vec{y}=\vec{b}\text{.}\) Consider
\begin{align*} A(a\vec{x}+b\vec{y}) & = \text{matrix distributive} \\ A(a\vec{x})+A(b\vec{y}) & = \text{matrix associative} \\ a(A\vec{x})+b(A\vec{y}) & = \text{given above} \\ a\vec{a}+b\vec{b}. \end{align*}
Thus any linear combination of column space vectors is also a column space vector (closure).

Example 4.1.11. Row Space.

Is the span of the rows of a matrix a vector space? This is the span of all rows. Two vectors in this set look like \(\vec{v}_1 = a_1\vec{R}_1+a_2\vec{R}_2+\ldots+a_n\vec{R}_n\) and \(\vec{v}_2 = b_1\vec{R}_1+b_2\vec{R}_2+\ldots+b_n\vec{R}_n\text{.}\) Thus the sum of any two vectors in the row space looks like
\begin{align*} \vec{v}_1+\vec{v}_2 = & a_1\vec{R}_1+a_2\vec{R}_2+\ldots+a_n\vec{R}_n\\ & + b_1\vec{R}_1+b_2\vec{R}_2+\ldots+b_n\vec{R}_n\\ = & (a_1+b_1)\vec{R}_1+(a_2+b_2)\vec{R}_2+\ldots+(a_n+b_n)\vec{R}_n. \end{align*}
This form is just another element of the span, which consists of all possible coefficient choices. Thus we have closure, and this is a subspace.

Definition 4.1.12. Null Space (Matrix).

A set of vectors is the null space of a matrix \(A\) if and only if it consists of all vectors \(\vec{x}\) satisfying \(A\vec{x}=\vec{0}\text{.}\)

Definition 4.1.13. Column Space (Matrix).

A set of vectors is the column space of a matrix \(A\) if and only if it consists of all vectors \(\vec{x}\) in the span of the columns of \(A\text{.}\)

Definition 4.1.14. Row Space (Matrix).

A set of vectors is the row space of a matrix \(A\) if and only if it consists of all vectors \(\vec{x}\) in the span of the rows of \(A\text{.}\)

Definition 4.1.15. Null Space (Transformation).

A set of vectors is the null space of a linear transformation \(T\) if and only if it consists of all vectors \(\vec{x}\) satisfying \(T(\vec{x})=\vec{0}\text{.}\)

Definition 4.1.16. Column Space (Transformation).

A set of vectors is the column space of a linear transformation \(T\) if and only if it consists of all vectors \(\vec{x}\) for which \(\vec{x}=T(\vec{w})\) has a solution.

Subsection 4.1.3 Finding Spaces

A skill we need is to calculate (find) a representation of these special spaces for specific matrices and transformations.

Example 4.1.17.

Find the null space of \(\left[ \begin{array}{rrr} 1 & 2 & -1 \\ 3 & 2 & 5 \\ 0 & -4 & 8 \end{array} \right]\text{.}\)
Solution.
\begin{align*} \left[ \begin{array}{*{3}{r}} 1 & 2 & -1 \\ 3 & 2 & 5 \\ 0 & -4 & 8 \\ \end{array} \right] & \sim \begin{array}{l} \\ R_2 \leftarrow -3R_1+R_2 \\ \\ \end{array}\\ \left[ \begin{array}{*{3}{r}} 1 & 2 & -1 \\ 0 & -4 & 8 \\ 0 & -4 & 8 \\ \end{array} \right] & \sim \begin{array}{l} \\ \\ R_3 \leftarrow -1R_2+R_3 \\ \end{array}\\ \left[ \begin{array}{*{3}{r}} 1 & 2 & -1 \\ 0 & -4 & 8 \\ 0 & 0 & 0 \\ \end{array} \right] & \sim \begin{array}{l} R_1 \leftarrow \frac{1}{2}R_2+R_1 \\ R_2 \leftarrow -\frac{1}{4}R_2 \\ \\ \end{array}\\ \left[ \begin{array}{*{3}{r}} 1 & 0 & 3 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \\ \end{array} \right] \end{align*}
\begin{align*} x+3z & = 0. \\ x & = -3z. \\ y-2z & = 0. \\ y & = 2z. \end{align*}
\begin{equation*} [-3z,2z,z]^T = z[-3,2,1]^T. \end{equation*}
Thus the null space is the set of all multiples of \([-3,2,1]^T\text{.}\)
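As a check on the calculation (Python with NumPy, used only for verification), every multiple of \([-3,2,1]^T\) should be sent to \(\vec{0}\text{:}\)

```python
import numpy as np

A = np.array([[1.0,  2.0, -1.0],
              [3.0,  2.0,  5.0],
              [0.0, -4.0,  8.0]])

# From back substitution: x = -3z, y = 2z, so the null space is
# spanned by [-3, 2, 1]^T. Check a few multiples.
basis = np.array([-3.0, 2.0, 1.0])
for z in (1.0, -2.0, 0.5):
    assert np.allclose(A @ (z * basis), 0)
```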

Example 4.1.18.

Find the column space of \(A=\left[ \begin{array}{rrr} 1 & 2 & -1 \\ 3 & 2 & 5 \\ 0 & -4 & 8 \end{array} \right]\text{.}\)
Solution.
\begin{equation*} \text{col}(A)=\left\{ c_1\left[\begin{array}{r} 1 \\ 3 \\ 0 \end{array}\right]+ c_2\left[\begin{array}{r} 2 \\ 2 \\ -4 \end{array}\right]+ c_3\left[\begin{array}{r} -1 \\ 5 \\ 8 \end{array}\right] \right\} \end{equation*}
which is the span of the columns.

Example 4.1.19.

Find the null space and range of \(T((x_1,x_2,x_3)^T)=(x_1+2x_2-x_3,3x_1+2x_2+5x_3,-4x_2+8x_3)^T\text{.}\)
Solution.
First find the matrix of the transformation.
\begin{align*} T([1,0,0]^T) & = [1,3,0]^T. \\ T([0,1,0]^T) & = [2,2,-4]^T. \\ T([0,0,1]^T) & = [-1,5,8]^T. \end{align*}
The null space and range of \(T\) are the null space and column space of this matrix, which we found above.
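Recomputing the images of the standard basis vectors numerically (Python with NumPy, used only as a check) confirms the matrix of the transformation:

```python
import numpy as np

def T(x):
    x1, x2, x3 = x
    return np.array([x1 + 2*x2 - x3,
                     3*x1 + 2*x2 + 5*x3,
                     -4*x2 + 8*x3])

# The columns of the matrix of T are the images of the standard basis.
A = np.column_stack([T(np.array(e)) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])])
assert np.allclose(A, [[1, 2, -1], [3, 2, 5], [0, -4, 8]])

# The matrix reproduces T on an arbitrary input.
x = np.array([2.0, -1.0, 4.0])
assert np.allclose(A @ x, T(x))
```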