
Section 4.3 Coordinates

Goals.

Now that we know about the existence of bases, we will
  • introduce the concept of coordinates,
  • practice writing vectors as linear combinations of basis vectors in any vector space, and
  • work problems using just the coefficients of these linear combinations.

Subsection 4.3.1 Definition of Coordinates

We use the definition of linear combination and the nature of a basis to find a way to express every vector in a vector space. Note the following.
  • Every vector in a vector space can be written as a linear combination of basis vectors (by definition of basis).
  • A linear combination is a sum of scaled vectors, so it is determined by the vectors chosen and the coefficients that scale them.
  • Because the basis vectors are the same for every vector we express this way, each vector is described solely by the coefficients in the linear combination.
  • These coefficients are the coordinates.

Definition 4.3.1. Coordinates.

For a vector \(\vec{v}\) and a basis \(B=\{\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_n \}\text{,}\) the coordinates of \(\vec{v}\) with respect to the basis \(B\text{,}\) denoted \([\vec{v}]_\B\text{,}\) are the vector of coefficients \([c_1,c_2,\ldots,c_n]\) such that \(\vec{v}=c_1\vec{b}_1+c_2\vec{b}_2+\cdots+c_n\vec{b}_n\text{.}\)
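Concretely, when the basis vectors are stored as the columns of an invertible matrix, finding coordinates amounts to solving a linear system. A minimal NumPy sketch (the basis below is an arbitrary illustration, not one from this section):

```python
import numpy as np

# An example basis of R^3, stored as the columns of B (illustrative only).
B = np.array([[1., 0., 1.],
              [2., 1., 0.],
              [0., 1., 1.]])
v = np.array([2., 3., 1.])

# The coordinates c satisfy v = c1*b1 + c2*b2 + c3*b3, i.e. B @ c = v.
c = np.linalg.solve(B, v)

# Rebuilding v from its coordinates recovers the original vector.
assert np.allclose(B @ c, v)
```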

Subsection 4.3.2 Working in Arbitrary Vector Spaces

Our first set of examples uses the vector space of polynomials. For this space we should remember the following.
  • 0 is a polynomial. It has 0 as the coefficient of every term (power).
  • Two polynomials are equal if and only if every coefficient is equal. For example, if \(ax^2+bx+c=5x^2-3x+2\text{,}\) then \(a=5\text{,}\) \(b=-3\text{,}\) and \(c=2\text{.}\)

Example 4.3.2. Test if Linearly Independent.

We will test whether the following set of polynomials is linearly independent: \(\B=\{x^2+x,\,x+1,\,x^2+1\}\text{.}\)
(a)
First we set up the equation to test that these three vectors are independent. \(a(x^2+x)+b(x+1)+c(x^2+1) = 0\text{.}\)
(b)
Next we collect like terms. \((a+c)x^2+(a+b)x+(b+c)1 = 0\text{.}\)
(c)
We use this form to set up a system of equations that will determine how many solutions exist (test of dependence).
\begin{equation*} \begin{array}{rrrcr} a & & +c & = & 0 \\ a & +b & & = & 0 \\ & b & +c & = & 0 \end{array} \end{equation*}
(d)
Row reduce to solve this system.
\begin{equation*} \left[\begin{array}{rrr} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{array}\right] \sim \left[\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \end{equation*}
(e)
Recognize that the unique solution is the trivial one, \(a=b=c=0\text{,}\) so the polynomials are independent.
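Encoding each polynomial by its \((x^2, x, 1)\) coefficients turns this test into a rank computation: the homogeneous system has only the trivial solution exactly when the coefficient matrix has full rank. A NumPy sketch:

```python
import numpy as np

# Columns are the (x^2, x, 1) coefficient vectors of x^2+x, x+1, x^2+1.
A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])

# The set is independent iff A @ c = 0 forces c = 0,
# i.e. the columns are independent, i.e. rank(A) == 3.
rank = np.linalg.matrix_rank(A)
assert rank == 3  # only the trivial solution -> independent
```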

Example 4.3.3. Test if Spanning Set.

In the context of polynomials, \(\B\) spans \(\Poly_2\) (all polynomials of degree two or smaller) if and only if every polynomial \(a_2x^2+a_1x+a_0\) can be written as a linear combination of these vectors.
(a)
First we set up the equations to test if the polynomial above is in the span of \(\B\text{.}\)
\begin{equation*} a(x^2+x)+b(x+1)+c(x^2+1) = a_2x^2+a_1x+a_0 \end{equation*}
(b)
Next we collect like terms.
\begin{equation*} (a+c)x^2+(a+b)x+(b+c) = a_2x^2+a_1x+a_0. \end{equation*}
(c)
Use this form to set up a system of equations that will determine whether a solution exists for this system.
\begin{equation*} \begin{array}{rrrcr} a & & +c & = & a_2 \\ a & +b & & = & a_1 \\ & b & +c & = & a_0 \end{array} \end{equation*}
(d)
Solve this system to test if there is a solution.
\begin{equation*} \left[\begin{array}{rrrr} 1 & 0 & 1 & a_2 \\ 1 & 1 & 0 & a_1 \\ 0 & 1 & 1 & a_0 \end{array}\right] \sim \left[\begin{array}{rrrr} 1 & 0 & 0 & \frac{1}{2}(a_2+a_1-a_0) \\ 0 & 1 & 0 & \frac{1}{2}(-a_2+a_1+a_0)\\ 0 & 0 & 1 & \frac{1}{2}(a_2-a_1+a_0) \end{array}\right] \end{equation*}
(e)
We note that there is always a solution, so \(\B\) spans \(\Poly_2\text{.}\)
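The row reduction above amounts to inverting the coefficient matrix; a NumPy sketch that double-checks the closed-form solution read off from the reduced system:

```python
import numpy as np

# Coefficient matrix of the system in (a, b, c).
A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])

# Closed-form solution from the row reduction, written as a matrix:
# (a, b, c) = A_inv @ (a2, a1, a0).
A_inv = 0.5 * np.array([[ 1.,  1., -1.],
                        [-1.,  1.,  1.],
                        [ 1., -1.,  1.]])

# A @ A_inv = I confirms a solution exists for every (a2, a1, a0).
assert np.allclose(A @ A_inv, np.eye(3))
```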

Example 4.3.4. Calculate Coordinates.

Now that we have proved that \(\B=\{x^2+x,x+1,x^2+1\} \) is a basis for \(\Poly_2\text{,}\) we can calculate the coordinates of vectors (polynomials) with respect to this basis. Calculate the coordinates of \(\vec{x}=2x^2+2x+2\) and \(\vec{y}=5x^2+3\text{.}\)
(a)
First we set up the equations to calculate the coordinates.
\begin{equation*} a(x^2+x)+b(x+1)+c(x^2+1) = 2x^2+2x+2 \end{equation*}
(b)
Next we collect like terms.
\begin{equation*} (a+c)x^2+(a+b)x+(b+c)1 = 2x^2+2x+2 \end{equation*}
(c)
We use this form to set up a system of equations that will determine the coefficients of the basis vectors that give us the coordinates.
\begin{equation*} \begin{array}{rrrcr} a & & +c & = & 2 \\ a & +b & & = & 2 \\ & b & +c & = & 2 \end{array}\text{.} \end{equation*}
(d)
Finally we solve the system to find these coordinates.
\begin{equation*} \left[\begin{array}{rrrr} 1 & 0 & 1 & 2 \\ 1 & 1 & 0 & 2 \\ 0 & 1 & 1 & 2 \end{array}\right] \sim \left[\begin{array}{rrrr} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{array}\right] \end{equation*}
Thus \([\vec{x}]_\B=[1,1,1]^T\text{.}\) The coordinates of \(\vec{y}\) are found the same way, solving with right-hand side \(5x^2+0x+3\text{.}\)
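The same computation can be delegated to a solver by encoding each polynomial as its vector of \((x^2, x, 1)\) coefficients; a NumPy sketch:

```python
import numpy as np

# Columns are the (x^2, x, 1) coefficient vectors of the basis B.
A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])
x = np.array([2., 2., 2.])   # 2x^2 + 2x + 2
y = np.array([5., 0., 3.])   # 5x^2 + 0x + 3

coords_x = np.linalg.solve(A, x)  # the coordinates [x]_B
coords_y = np.linalg.solve(A, y)  # the coordinates [y]_B
assert np.allclose(coords_x, [1., 1., 1.])
```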
Next we demonstrate the same ideas using another vector space: the space of matrices (\(2 \times 2\) in this case).
For these we need to remember
  • the 0 matrix has zeros in every position, and
  • two matrices are equal if and only if every position is equal.

Example 4.3.5. Test if Linearly Independent.

We will test whether the following set of matrices is linearly independent. \(\C=\left\{ \left[ \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array} \right], \left[ \begin{array}{rr} 1 & 1 \\ 0 & 1 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 1 & 1 \end{array} \right], \left[ \begin{array}{rr} 1 & 0 \\ 1 & 1 \end{array} \right] \right\}. \)
(a)
First we set up the equation to test that these four matrices are independent.
\begin{equation*} a\left[ \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array} \right]+ b\left[ \begin{array}{rr} 1 & 1 \\ 0 & 1 \end{array} \right]+ c\left[ \begin{array}{rr} 0 & 1 \\ 1 & 1 \end{array} \right]+ d\left[ \begin{array}{rr} 1 & 0 \\ 1 & 1 \end{array} \right] = \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \end{array} \right]. \end{equation*}
(b)
Using the definition of matrix addition we obtain
\begin{equation*} \left[ \begin{array}{rr} a+b+d & a+b+c \\ a+c+d & b+c+d \end{array} \right] = \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \end{array} \right] \end{equation*}
(c)
Using the definition of matrix equality we obtain the following system of equations
\begin{align*} a+b+d & = 0\\ a+b+c & = 0\\ a+c+d & = 0\\ b+c+d & = 0 \end{align*}
(d)
Solve the system to prove these vectors are independent.
\begin{equation*} \left[\begin{array}{rrrr} 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{array}\right] \sim \left[\begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] \end{equation*}
(e)
Recognize that the unique solution is the trivial one, \(a=b=c=d=0\text{,}\) so the matrices are independent.
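Flattening each matrix into the vector of its four entries reduces this test to the same rank computation used for the polynomials; a NumPy sketch:

```python
import numpy as np

# Flatten each 2x2 matrix of C (row-major) into a column of M.
mats = [np.array([[1, 1], [1, 0]]),
        np.array([[1, 1], [0, 1]]),
        np.array([[0, 1], [1, 1]]),
        np.array([[1, 0], [1, 1]])]
M = np.column_stack([m.flatten() for m in mats]).astype(float)

# Independent iff M @ c = 0 forces c = 0, i.e. M has full rank 4.
rank = np.linalg.matrix_rank(M)
assert rank == 4
```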

Example 4.3.6. Test if Spanning Set.

In the context of matrices, \(\C\) spans \(\R^{2 \times 2}\) (all \(2 \times 2\) matrices) if and only if every matrix \(\left[ \begin{array}{rr} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{array} \right] \) can be written as a linear combination of these vectors.
(a)
First we set up the equations to test whether the matrix above is in the span of \(\C\text{.}\)
\begin{equation*} a\left[ \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array} \right]+ b\left[ \begin{array}{rr} 1 & 1 \\ 0 & 1 \end{array} \right]+ c\left[ \begin{array}{rr} 0 & 1 \\ 1 & 1 \end{array} \right]+ d\left[ \begin{array}{rr} 1 & 0 \\ 1 & 1 \end{array} \right] = \left[ \begin{array}{rr} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{array} \right]. \end{equation*}
(b)
Using the definition of matrix addition we obtain
\begin{equation*} \left[ \begin{array}{rr} a+b+d & a+b+c \\ a+c+d & b+c+d \end{array} \right] = \left[ \begin{array}{rr} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{array} \right] \end{equation*}
(c)
Using the definition of matrix equality we obtain the following system of equations
\begin{align*} a+b+d & = a_{1,1}\\ a+b+c & = a_{1,2}\\ a+c+d & = a_{2,1}\\ b+c+d & = a_{2,2} \end{align*}
(d)
Solve the system to test whether a solution always exists.
\begin{equation*} \left[\begin{array}{rrrr|r} 1 & 1 & 0 & 1 & a_{1,1} \\ 1 & 1 & 1 & 0 & a_{1,2} \\ 1 & 0 & 1 & 1 & a_{2,1} \\ 0 & 1 & 1 & 1 & a_{2,2} \end{array}\right] \sim \left[\begin{array}{rrrr|r} 1 & 0 & 0 & 0 & \frac{1}{3}(a_{1,1}+a_{1,2}+a_{2,1}-2a_{2,2}) \\ 0 & 1 & 0 & 0 & \frac{1}{3}(a_{1,1}+a_{1,2}-2a_{2,1}+a_{2,2}) \\ 0 & 0 & 1 & 0 & \frac{1}{3}(-2a_{1,1}+a_{1,2}+a_{2,1}+a_{2,2}) \\ 0 & 0 & 0 & 1 & \frac{1}{3}(a_{1,1}-2a_{1,2}+a_{2,1}+a_{2,2}) \end{array}\right] \end{equation*}
(e)
Recognize that a solution exists for every choice of \(a_{1,1}, a_{1,2}, a_{2,1}, a_{2,2}\text{,}\) so \(\C\) spans \(\R^{2 \times 2}\text{.}\)
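As with the polynomials, the row reduction amounts to inverting the coefficient matrix; a NumPy sketch that double-checks the \(\frac{1}{3}\) closed-form solution above:

```python
import numpy as np

# Coefficient matrix of the system in (a, b, c, d).
A = np.array([[1., 1., 0., 1.],
              [1., 1., 1., 0.],
              [1., 0., 1., 1.],
              [0., 1., 1., 1.]])

# Closed-form solution read off from the row reduction:
# (a, b, c, d) = A_inv @ (a11, a12, a21, a22).
A_inv = (1/3) * np.array([[ 1.,  1.,  1., -2.],
                          [ 1.,  1., -2.,  1.],
                          [-2.,  1.,  1.,  1.],
                          [ 1., -2.,  1.,  1.]])

# A @ A_inv = I means a solution exists for every right-hand side.
assert np.allclose(A @ A_inv, np.eye(4))
```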

Example 4.3.7. Calculate Coordinates.

Now that we have proved that \(\C \) is a basis for \(\R^{2 \times 2}\text{,}\) we can calculate the coordinates of vectors (matrices) with respect to this basis. Calculate the coordinates of \(\vec{x}=\left[ \begin{array}{rr} 3 & 6 \\ 6 & 6 \end{array} \right]\) and \(\vec{y}=\left[ \begin{array}{rr} 7 & 4 \\ 2 & 3 \end{array} \right]\text{.}\)
(a)
First we set up the equations to calculate the coordinates.
\begin{equation*} a\left[ \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array} \right]+ b\left[ \begin{array}{rr} 1 & 1 \\ 0 & 1 \end{array} \right]+ c\left[ \begin{array}{rr} 0 & 1 \\ 1 & 1 \end{array} \right]+ d\left[ \begin{array}{rr} 1 & 0 \\ 1 & 1 \end{array} \right] = \left[ \begin{array}{rr} 3 & 6 \\ 6 & 6 \end{array} \right]. \end{equation*}
(b)
Using the definition of matrix addition
\begin{equation*} \left[ \begin{array}{rr} a+b+d & a+b+c \\ a+c+d & b+c+d \end{array} \right] = \left[ \begin{array}{rr} 3 & 6 \\ 6 & 6 \end{array} \right]. \end{equation*}
(c)
Using the definition of matrix equality we obtain the following system of equations
\begin{align*} a+b+d & = 3\\ a+b+c & = 6\\ a+c+d & = 6\\ b+c+d & = 6 \end{align*}
(d)
Finally we solve the system to find these coordinates.
\begin{equation*} \left[\begin{array}{rrrr|r} 1 & 1 & 0 & 1 & 3 \\ 1 & 1 & 1 & 0 & 6 \\ 1 & 0 & 1 & 1 & 6 \\ 0 & 1 & 1 & 1 & 6 \end{array}\right] \sim \left[\begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 4 \\ 0 & 0 & 0 & 1 & 1 \end{array}\right] \end{equation*}
(e)
Recognize that the coordinates are \([\vec{x}]_\C = [1, 1, 4, 1]^T\text{.}\) The coordinates of \(\vec{y}\) are found the same way with right-hand side entries \(7, 4, 2, 3\text{.}\)
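Flattening the matrices again lets a solver carry out this row reduction; a NumPy sketch:

```python
import numpy as np

# Columns are the flattened (row-major) matrices of the basis C.
A = np.array([[1., 1., 0., 1.],
              [1., 1., 1., 0.],
              [1., 0., 1., 1.],
              [0., 1., 1., 1.]])
x = np.array([3., 6., 6., 6.])   # entries of x, read row by row
y = np.array([7., 4., 2., 3.])   # entries of y, read row by row

coords_x = np.linalg.solve(A, x)  # the coordinates [x]_C
coords_y = np.linalg.solve(A, y)  # the coordinates [y]_C
assert np.allclose(coords_x, [1., 1., 4., 1.])
```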

Subsection 4.3.3 Coordinates for Each Basis

The coordinates for a given vector depend on the basis in use. Note the differences as you calculate the coordinates.

Checkpoint 4.3.8.

Consider the vector \(\vec{v}=[8,5,3]^T\text{.}\)
(a)
Find the coordinates of \(\vec{v}\) with respect to the standard basis (columns of the identity matrix).
(b)
Find the coordinates of \(\vec{v}\) with respect to the basis: \(\B = \{ [1,0,0]^T, [1,1,0]^T, [1,1,1]^T \}\text{.}\)
(c)
Find the coordinates of \(\vec{v}\) with respect to the basis: \(\C = \{ [0,1,1]^T, [1,0,1]^T, [1,1,0]^T \}\text{.}\)
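Answers to checkpoints like this can be checked mechanically: candidate coordinates are correct exactly when the basis combination rebuilds the original vector. A NumPy sketch of such a check (the helper name `check_coords` is my own):

```python
import numpy as np

def check_coords(basis_cols, c, v):
    """Return True when c is the coordinate vector of v for the given basis."""
    B = np.column_stack([np.asarray(b, dtype=float) for b in basis_cols])
    return np.allclose(B @ np.asarray(c, dtype=float),
                       np.asarray(v, dtype=float))

# Part (a): for the standard basis the coordinates equal the entries.
E = np.eye(3)
ok = check_coords([E[:, 0], E[:, 1], E[:, 2]], [8, 5, 3], [8, 5, 3])
assert ok
```

The same check works for parts (b) and (c), and for the polynomial checkpoint below after encoding each polynomial by its coefficients.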

Checkpoint 4.3.9.

Consider the vector \(\vec{p}=2x^2-3x+5\text{.}\)
(a)
Find the coordinates of \(\vec{p}\) with respect to the standard basis \(\{1, x, x^2\}\text{.}\)
(b)
Find the coordinates of \(\vec{p}\) with respect to the basis: \(\B = \{ x^2, x^2+x, x^2+x+1 \}\text{.}\)
(c)
Find the coordinates of \(\vec{p}\) with respect to the basis: \(\C = \{ 1, x, x^2-2 \}\text{.}\)