Prior to beginning this problem set, consider reading the Problem Set Success Guide Section A.1 for advice and clarity around expectations for Problem Sets in this course. Upload your solutions to all problems on this page to Gradescope as a single .pdf file, remembering to assign pages appropriately for each question. Complete instructions for Problem Sets are available on Canvas.
The following statements are all invalid for at least one reason. Determine what makes each statement invalid and suggest an alternative valid statement that the author may have meant instead.
(a)
The matrix \(A\) is linearly independent because \(\RREF(A)\) has a pivot in each column.
(b)
The matrix \(A\) does not span \(\IR^4\) because \(\RREF(A)\) has a row of zeroes.
(c)
The set of vectors \(S\) spans.
(d)
The set of vectors \(S\) is a basis.
Solution.
(a)
Linear independence is not a property of matrices; it’s a property of sets of vectors. One possible alternative to this statement that would be valid is “The set \(S\) is linearly independent because \(\RREF(A)\) has a pivot in each column.”
(b)
Similar to above, “spanning” is not a property of matrices; it’s a property of sets of vectors. One possible valid alternative would be “The set \(S\) does not span \(\IR^4\) because \(\RREF(A)\) has a row of zeroes.”
(c)
Here, the author needs to be more precise: what exactly do they claim \(S\) to span? Based on the above, we know that \(S\) does not span \(\IR^4\text{.}\) However, perhaps the author meant to say that \(S\) spans some three-dimensional subspace. That is, perhaps they really meant that \(S\) spans the subspace \(\vspan(S)\text{.}\)
(d)
Likewise, we need more precision. The set \(S\) is not a basis for \(\IR^4\text{,}\) but it is a basis for the subspace \(\vspan(S)\text{.}\)
to conclude that the set \(\setList{
\left[\begin{array}{c}1\\2\\3\end{array}\right],
\left[\begin{array}{c}0\\-2\\-2\end{array}\right]
}\text{,}\) the set of vectors corresponding to the pivot columns of the RREF, is a basis for \(W\text{.}\)
(a)
Explain why neither of the vectors \(\left[\begin{array}{c}1\\0\\0\end{array}\right],
\left[\begin{array}{c}0\\1\\0\end{array}\right]\) is an element of \(W\text{.}\)
(b)
Explain why this shows that, in general, when we calculate a basis for \(W=\vspan\{\vec{v}_1,\dots, \vec{v}_n\}\text{,}\) the pivot columns of \(\RREF[\vec{v}_1\dots \vec{v}_n]\) themselves do not form a basis for \(W\text{.}\)
Solution.
(a)
The following RREF calculations illustrate that neither \(\left[\begin{array}{c}1\\0\\0\end{array}\right]\) nor \(\left[\begin{array}{c}0\\1\\0\end{array}\right]\) is a linear combination of the vectors in the set \(\setList{
\left[\begin{array}{c}1\\2\\3\end{array}\right],
\left[\begin{array}{c}0\\-2\\-2\end{array}\right]
}\text{:}\)
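The computations referred to are the row reductions of the two augmented matrices below (a reconstruction consistent with the vectors above):
\begin{equation*}
\RREF\left[\begin{array}{cc|c} 1 & 0 & 1 \\ 2 & -2 & 0 \\ 3 & -2 & 0 \end{array}\right]=\left[\begin{array}{cc|c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]
\qquad
\RREF\left[\begin{array}{cc|c} 1 & 0 & 0 \\ 2 & -2 & 1 \\ 3 & -2 & 0 \end{array}\right]=\left[\begin{array}{cc|c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]
\end{equation*}
In each case, the augmented column contains a pivot, so both systems are inconsistent; neither vector is a linear combination of the vectors in the set, and so neither belongs to \(W\text{.}\)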
(b)
Consider the example above. If we take the set consisting of the pivot columns of \(\RREF[\vec{v}_1 \vec{v}_2 \vec{v}_3 \vec{v}_4]\text{,}\) then we get two vectors that do not even live in \(W\text{.}\) Thus, there is no way this set could be a basis for \(W\text{.}\) On the other hand, the set consisting of the first and third vectors of the original set does give us a basis.
This example is enough to demonstrate that, in general, we wouldn't expect the pivot columns of the RREF itself to form a basis for \(W\text{.}\)
Problem A.4.3. (Problem 3).
Let \(A\) denote an \(m\times n\) matrix so that the column vectors of \(A\) are vectors in \(\IR^m\) and the row vectors of \(A\) are vectors in \(\IR^n\text{.}\) The column space of \(A\) (denoted \(\text{Col}(A)\)) is the subspace spanned by the columns of \(A\text{,}\) and the row space of \(A\) (denoted \(\text{Row}(A)\)) is the subspace spanned by the rows of \(A\text{.}\) For instance, if \(A=\left[\begin{array}{ccc}1& 2& 3\\ -1&4& 7\end{array}\right]\text{,}\) then \(\text{Col}(A)=\vspan\left\{\left[\begin{array}{c}1\\-1\end{array}\right],\left[\begin{array}{c}2\\4\end{array}\right],\left[\begin{array}{c}3\\7\end{array}\right]\right\}\) and \(\text{Row}(A)=\vspan\left\{\left[\begin{array}{ccc}1&2&3\end{array}\right],\left[\begin{array}{ccc}-1&4&7\end{array}\right]\right\}\text{.}\)
In Section 2.6, we learned how to calculate a basis for the column space of an arbitrary matrix \(B\text{:}\) we compute \(\RREF(B)\text{,}\) and then the columns of \(B\) corresponding to the pivot columns of \(\RREF(B)\) form a basis. The Problem just above demonstrates why it's important not to take the pivot columns of \(\RREF(B)\) itself.
In contrast, the non-zero rows of \(\RREF(B)\) will always form a basis for \(\text{Row}(B)\) as you will now show:
(a)
If \(A\) is an arbitrary matrix, explain why the non-zero rows of \(\RREF(A)\) span the same subspace as the rows of \(A\text{.}\)
(b)
If \(A\) is an arbitrary matrix, explain why the non-zero rows of \(\RREF(A)\) are always linearly independent.
(c)
Even though \(\text{Row}(A)\) and \(\text{Col}(A)\) are different subspaces of Euclidean vectors (in fact, unless \(n=m\text{,}\) the vectors in each set do not even live in the same ambient space), explain why \(\dim(\text{Col}(A))=\dim(\text{Row}(A))\) holds for any matrix \(A\text{.}\)
Solution.
(a)
Suppose that the rows of the matrix \(A\) are \(\vec{R}_1,\dots, \vec{R}_m\text{.}\) Then, \(\text{Row}(A)=\vspan\{\vec{R}_1,\dots, \vec{R}_m\}\text{.}\) We claim that for each row operation we perform on \(A\text{,}\) the row space doesn't change.
Row-swapping: If we swap any two rows, the underlying set of row vectors doesn't change. Therefore, the two matrices have the same row space, even though the rows appear in a different order.
Scaling a row by a non-zero scalar \(c\text{:}\) Suppose we multiply row \(\vec{R}_i\) by \(c\neq 0\text{.}\) Then, any linear combination that can be made with \(\{\vec{R}_1,\dots, \vec{R}_m\}\) can also be made with the set \(\{\vec{R}_1,\dots, c\vec{R}_i,\dots, \vec{R}_m\}\) by scaling the coefficient of \(\vec{R}_i\) by \(1/c\text{,}\) and vice versa. So, the span does not change.
Adding a multiple of one row to another: For concreteness, let's assume we perform \(\vec{R}_1+c\vec{R}_2\to \vec{R}_1\text{.}\) Then, our original row space is \(S=\vspan\{\vec{R}_1,\dots, \vec{R}_m\}\) and our new row space is \(T=\vspan\{\vec{R}_1+c\vec{R}_2,\vec{R}_2,\dots,\vec{R}_m\}\text{.}\) Now, since all the vectors spanning \(T\) are linear combinations of vectors in \(S\text{,}\) it follows that anything in the new row space actually already lives in the old row space. On the other hand, since we can re-build the vector \(\vec{R}_1\) from \(\vec{R}_1+c\vec{R}_2\) by subtracting \(c\vec{R}_2\text{,}\) it follows that anything in the old row space can also be built from the vectors in the new row space.
Since calculating the RREF involves a sequence of row operations and the row operations aren’t changing the row space, the result follows.
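For a quick concrete check of this claim, consider \(A=\left[\begin{array}{cc} 1 & 2 \\ 2 & 4 \end{array}\right]\text{,}\) for which \(\RREF(A)=\left[\begin{array}{cc} 1 & 2 \\ 0 & 0 \end{array}\right]\text{.}\) Since the second row of \(A\) is twice the first, both matrices have the same row space \(\vspan\left\{\left[\begin{array}{cc} 1 & 2 \end{array}\right]\right\}\text{.}\)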
(b)
Suppose we start with the matrix \(A\text{,}\) compute \(\RREF(A)\text{,}\) and let \(\vec{S}_1,\dots, \vec{S}_r\) denote the non-zero rows that appear in \(\RREF(A)\text{.}\) For each \(i\text{,}\) if we read the row from left to right, the first non-zero entry that we see is a \(1\) because that's what corresponds to the pivots; moreover, since the matrix is in RREF, every other row has a \(0\) in that pivot's column. If we were to take a linear combination of the rows \(a_1\vec{S}_1+\cdots+a_r\vec{S}_r\text{,}\) then each entry of the resulting vector takes one of two forms:
If the entry corresponds to the position of the pivot in row \(\vec{S}_i\text{,}\) then this entry will just be \(a_i\text{,}\) the coefficient in front of \(\vec{S}_i\text{.}\)
Otherwise, the entry will be some linear combination of the coefficients \(a_1,\dots, a_r\text{.}\)
The only way that this combination can be the zero vector is if all of the coefficients \(a_i\) are themselves equal to \(0\text{;}\) it follows that these non-zero rows are linearly independent.
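For instance, if the non-zero rows of some \(\RREF(A)\) were \(\vec{S}_1=\left[\begin{array}{ccc} 1 & 0 & 2 \end{array}\right]\) and \(\vec{S}_2=\left[\begin{array}{ccc} 0 & 1 & 3 \end{array}\right]\text{,}\) then \(a_1\vec{S}_1+a_2\vec{S}_2=\left[\begin{array}{ccc} a_1 & a_2 & 2a_1+3a_2 \end{array}\right]\text{,}\) and the entries in the two pivot positions force \(a_1=a_2=0\) whenever this combination equals \(\vec{0}\text{.}\)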
(c)
The dimension of the column space is equal to the number of pivot columns in \(\RREF(A)\text{.}\) Since each non-zero row of \(\RREF(A)\) contains exactly one pivot, this is also the number of non-zero rows in \(\RREF(A)\text{,}\) which, by parts (a) and (b), gives us the dimension of the row space.
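As a sanity check using the matrix from the start of this Problem: \(\RREF\left[\begin{array}{ccc} 1 & 2 & 3 \\ -1 & 4 & 7 \end{array}\right]=\left[\begin{array}{ccc} 1 & 0 & -1/3 \\ 0 & 1 & 5/3 \end{array}\right]\) has two pivot columns and two non-zero rows, so \(\dim(\text{Col}(A))=\dim(\text{Row}(A))=2\text{.}\)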
Problem A.4.4. (Problem 4).
An \(n \times n\) matrix \(M\) is non-singular if the associated homogeneous system with coefficient matrix \(M\) has exactly one solution (namely, the trivial solution). In what follows, assume that \(M\) is a non-singular matrix.
(a)
Explain why the RREF of \(M\) is the identity matrix.
(b)
Explain why, for any column vector \(\vec{b} = \left[\begin{array}{c}b_1\\b_2\\ \vdots \\b_n \end{array}\right]\text{,}\) the system of equations given by \(\left[\begin{array}{c|c}M & \vec{b}\end{array}
\right]\) has a unique solution.
(c)
Explain why the columns of \(M\) form a basis of \(\IR^n\text{.}\)
Solution.
(a)
By definition, the system \(\left[\begin{array}{c|c}M & \vec{0}\end{array}\right]\) has a unique solution. This means that every column of \(M\) is a pivot column. Since \(M\) is a square matrix, it follows that every row contains a pivot as well, which explains why \(\RREF(M)\) is the identity matrix.
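For example, taking \(M=\left[\begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array}\right]\) (a non-singular matrix, since its homogeneous system has only the trivial solution), we find \(\RREF(M)=\left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right]\text{,}\) the \(2\times 2\) identity matrix.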
(b)
In part (a), we saw that \(\RREF(M)\) is equal to the identity matrix. This means that the columns of \(M\) span all of \(\IR^n\text{,}\) which tells us that the system in question is consistent no matter what \(\vec{b}\) is. But we also know that each column is a pivot column, so there are no free variables, which tells us the solution is unique.
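(c)
By part (b), every vector \(\vec{b}\in\IR^n\) is a linear combination of the columns of \(M\text{,}\) so the columns of \(M\) span \(\IR^n\text{.}\) Furthermore, since the homogeneous system \(\left[\begin{array}{c|c}M & \vec{0}\end{array}\right]\) has only the trivial solution, the columns of \(M\) are linearly independent. A linearly independent spanning set is a basis, so the columns of \(M\) form a basis of \(\IR^n\text{.}\)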
A set of vectors \(\{\vec{u}_1,\dots,\vec{u}_p\}\) in \(\IR^n\) is called an orthogonal set if each pair of distinct vectors is orthogonal. That is, we have \(\vec{u}_i\bullet\vec{u}_j=0\) whenever \(i\neq j\text{.}\) If \(W\) is a subspace of \(\IR^n\text{,}\) then an orthogonal basis for \(W\) is a basis of \(W\) that is also an orthogonal set.
(a)
Suppose that \(S=\{\vec{u}_1,\dots, \vec{u}_p\}\) is an orthogonal set of non-zero vectors. Suppose further that \(c_1,\dots, c_p\) are scalars for which
\begin{equation*}
c_1\vec{u}_1+\cdots+c_p\vec{u}_p=\vec{0}\text{.}
\end{equation*}
Show that, in fact, each \(c_i\) must be equal to \(0\) and conclude that \(S\) must be linearly independent. In other words, show that any orthogonal set of non-zero vectors is linearly independent.
(b)
Suppose further that \(S=\{\vec{u}_1,\dots, \vec{u}_p\}\) is an orthogonal basis for some subspace \(W\) of \(\IR^n\) and that \(\vec{y}\in W\text{.}\) Then, since \(S\) is a basis, we know that we can find unique scalars \(c_1,\dots, c_p\) for which
\begin{equation*}
\vec{y}=c_1\vec{u}_1+\cdots+c_p\vec{u}_p\text{.}
\end{equation*}
Show that each coefficient satisfies \(c_i=\dfrac{\vec{y}\bullet\vec{u}_i}{\vec{u}_i\bullet\vec{u}_i}\text{.}\)
(c)
Let's put this together and practice the new terms/concepts. Let \(S=\left\{\begin{bmatrix}1\\0\\1\end{bmatrix},\begin{bmatrix}-1\\0\\1\end{bmatrix},\begin{bmatrix}0\\1\\0\end{bmatrix}\right\}\text{.}\) Verify that \(S\) is an orthogonal basis of \(\IR^3\text{,}\) and explain and demonstrate how to write the vector \(\left[\begin{array}{c}1\\2\\3\end{array}\right]\) as a linear combination of vectors in \(S\) without doing any row reducing or solving of linear equations (for either task).
Hint.
For (a): what happens if you take the left-hand side of the vector equation in (a) and calculate its dot product with \(\vec{u}_i\text{?}\)
Solution.
(a)
Following the hint, take the dot product of each side of the vector equation with \(\vec{u}_i\text{.}\) On the left-hand side, we obtain
\begin{equation*}
\vec{u}_i\bullet(c_1\vec{u}_1+\cdots+c_p\vec{u}_p)=c_1(\vec{u}_i\bullet\vec{u}_1)+\cdots+c_p(\vec{u}_i\bullet\vec{u}_p)=c_i(\vec{u}_i\bullet\vec{u}_i)\text{,}
\end{equation*}
since we have \(\vec{u}_i\bullet\vec{u}_j=0\) whenever \(i\neq j\text{.}\) On the right-hand side, we obtain \(\vec{u}_i\bullet\vec{0}=0\text{.}\) Since \(\vec{u}_i\bullet\vec{u}_i\neq 0\text{,}\) it follows that \(c_i=0\text{.}\)
Repeating this for each choice of \(i\text{,}\) we conclude that, in fact, all of the coefficients \(c_1,\dots, c_p\) must be equal to \(0\text{,}\) which shows that the set of vectors is linearly independent.
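(b)
Taking the dot product of each side of \(\vec{y}=c_1\vec{u}_1+\cdots+c_p\vec{u}_p\) with \(\vec{u}_i\) and using orthogonality exactly as in (a), we find that \(\vec{y}\bullet\vec{u}_i=c_i(\vec{u}_i\bullet\vec{u}_i)\text{,}\) and therefore \(c_i=\frac{\vec{y}\bullet\vec{u}_i}{\vec{u}_i\bullet\vec{u}_i}\text{.}\)
(c)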
Here, let \(S=\left\{\begin{bmatrix}1\\0\\1\end{bmatrix},\begin{bmatrix}-1\\0\\1\end{bmatrix},\begin{bmatrix}0\\1\\0\end{bmatrix}\right\}=\left\{\vec{u}_1,\vec{u}_2,\vec{u}_3\right\}\text{.}\) Then, by computing the dot products, we find that \(\vec{u}_1\bullet\vec{u}_2=\vec{u}_2\bullet\vec{u}_3=\vec{u}_1\bullet\vec{u}_3=0\text{.}\) It follows that \(S\) is an orthogonal set and therefore linearly independent. Since \(S\) consists of three linearly independent vectors in \(\IR^3\text{,}\) it follows that \(S\) must also span \(\IR^3\text{;}\) therefore, \(S\) is an orthogonal basis.
Using our result above, with \(\vec{y}=\left[\begin{array}{c}1\\2\\3\end{array}\right]\text{,}\) we know that we can find the coefficients that build \(\vec{y}\) by computing dot products:
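\begin{equation*}
c_1=\frac{\vec{y}\bullet\vec{u}_1}{\vec{u}_1\bullet\vec{u}_1}=\frac{1+0+3}{2}=2,\qquad
c_2=\frac{\vec{y}\bullet\vec{u}_2}{\vec{u}_2\bullet\vec{u}_2}=\frac{-1+0+3}{2}=1,\qquad
c_3=\frac{\vec{y}\bullet\vec{u}_3}{\vec{u}_3\bullet\vec{u}_3}=\frac{2}{1}=2\text{.}
\end{equation*}
Indeed, we may check directly that \(2\begin{bmatrix}1\\0\\1\end{bmatrix}+\begin{bmatrix}-1\\0\\1\end{bmatrix}+2\begin{bmatrix}0\\1\\0\end{bmatrix}=\begin{bmatrix}1\\2\\3\end{bmatrix}\text{,}\) and no row reducing or equation solving was needed.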