Prior to beginning this problem set, consider reading the Problem Set Success Guide Section A.1 for advice and clarity around expectations for Problem Sets in this course. Upload your solutions to all problems on this page to Gradescope as a single .pdf file, remembering to assign pages appropriately for each question. Complete instructions for Problem Sets are available on Canvas.
Problem A.5.1. (Problem 1).
In this problem, \(T\colon V\to W\) denotes a linear transformation.
(a)
Show that if \(\ker(T)=\{\vec{0}\}\text{,}\) then \(T\) must be injective. Note: we saw and used this fact in class, but we did not prove it; the point of this exercise is for you to prove it here.
(b)
Suppose that \(\setList{\vec{v}_1,\vec{v}_2,\vec{v}_3}\) are vectors in \(V\) and that the set \(\setList{T(\vec{v}_1),T(\vec{v}_2),T(\vec{v}_3)}\) is a linearly independent set. Does it follow that \(\setList{\vec{v}_1,\vec{v}_2,\vec{v}_3}\) is also linearly independent? If “yes”, explain why; if “no”, provide a counterexample to the claim.
(c)
Suppose that \(\setList{\vec{u}_1,\vec{u}_2,\vec{u}_3}\) are vectors in \(V\text{.}\) If the set \(\setList{T(\vec{u}_1),T(\vec{u}_2),T(\vec{u}_3)}\) spans \(W\text{,}\) does it necessarily follow that \(\setList{\vec{u}_1,\vec{u}_2,\vec{u}_3}\) spans \(V\text{?}\) If “yes”, explain why; if “no”, provide a counterexample to the claim.
Solution.
(a)
Suppose that \(T\colon V\to W\) is some linear transformation. To show that \(T\) is injective, we need to prove that if \(\vec{u},\vec{v}\in V\) are vectors for which \(T(\vec{u})=T(\vec{v})\) then, in fact, we have \(\vec{u}=\vec{v}\text{.}\)
To that end, suppose that \(T(\vec{u})=T(\vec{v})\text{.}\) By linearity, it follows that \(T(\vec{u}-\vec{v})=T(\vec{u})-T(\vec{v})=\vec{0}\text{.}\) In other words, \(\vec{u}-\vec{v}\in\ker(T)\text{.}\) But since our assumption is that \(\ker(T)=\{\vec{0}\}\text{,}\) it follows that \(\vec{u}-\vec{v}=\vec{0}\text{,}\) that is, \(\vec{u}=\vec{v}\text{.}\) We conclude that \(T\) is injective.
(b)
Yes, it does follow. Suppose otherwise that the set \(\setList{\vec{v}_1,\vec{v}_2,\vec{v}_3}\) were linearly dependent. It would then follow that one of the vectors, say \(\vec{v}_1\text{,}\) is a linear combination of the others: \(\vec{v}_1=c_2\vec{v}_2+c_3\vec{v}_3\) for some scalars \(c_2,c_3\text{.}\)
Applying \(T\) and using linearity, this would imply that \(T(\vec{v}_1)=c_2T(\vec{v}_2)+c_3T(\vec{v}_3)\text{,}\) so the set \(\setList{T(\vec{v}_1),T(\vec{v}_2),T(\vec{v}_3)}\) would be linearly dependent, which is contrary to our assumptions. We conclude that the set \(\setList{\vec{v}_1,\vec{v}_2,\vec{v}_3}\) is therefore linearly independent.
(c)
No, not necessarily. For instance, let \(\vec{u}_1=\vec{e}_1, \vec{u}_2=\vec{e}_2,\vec{u}_3=\vec{e}_3\) denote the first three standard basis vectors in \(\IR^4\) and let \(T\colon\IR^4\to\IR^3\) be the linear transformation given by the standard matrix \(\left[\begin{array}{cccc}1& 0&0&0\\0& 1&0&0\\0& 0&1&0\end{array}\right]\text{,}\) the projection onto the first three coordinates.
Then \(T(\vec{u}_1),T(\vec{u}_2),T(\vec{u}_3)\) are the standard basis vectors of \(\IR^3\) and so span \(W=\IR^3\text{,}\) but the vectors \(\vec{u}_1,\vec{u}_2,\vec{u}_3\) do not span the domain \(\IR^4\text{,}\) so we have a counterexample.
Problem A.5.2. (Problem 2).
Let \(T\colon M_{2,2}\to\IR\) denote the transformation given by the rule \(T\left(\left[\begin{array}{cc}a& b\\ c& d\end{array}\right]\right)=a+d\text{.}\)
(a)
Show that \(T\) is a linear transformation.
(b)
Explain and demonstrate how to calculate a basis for the kernel of \(T\) and a basis for the image of \(T\text{.}\) Is \(T\) injective? Is \(T\) surjective?
(c)
Verify the rank-nullity theorem for this linear transformation.
Solution.
(a)
Let \(A=\left[\begin{array}{cc}a& b\\ c& d\end{array}\right]\) and \(B=\left[\begin{array}{cc}x& y\\ z& w\end{array}\right]\text{.}\) Then, we have \(T(A+B)=T\left(\left[\begin{array}{cc}a+x& b+y\\ c+z& d+w\end{array}\right]\right)=(a+x)+(d+w)\text{,}\) while \(T(A)+T(B)=(a+d)+(x+w)\text{.}\)
These expressions agree, so we conclude that \(T(A)+T(B)=T(A+B)\) for all elements of \(M_{2,2}\text{.}\)
Similarly, if \(k\) is a scalar, then \(T(kA)=T(\left[\begin{array}{cc}ka& kb\\ kc& kd\end{array}\right])=ka+kd\text{.}\) On the other hand, \(kT(A)=k(a+d)=ka+kd\text{.}\) So, \(T\) respects scalar multiplication as well and \(T\) is linear.
(b)
By definition, the kernel of \(T\) is the set of all matrices \(A\) for which \(T(A)=0\text{,}\) that is, all matrices \(\left[\begin{array}{cc}a& b\\ c& d\end{array}\right]\) with \(a+d=0\text{.}\) Setting \(d=-a\text{,}\) any such matrix can be described as \(\left[\begin{array}{cc}a& b\\ c& -a\end{array}\right]=a\left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right]+b\left[\begin{array}{cc}0& 1\\ 0& 0\end{array}\right]+c\left[\begin{array}{cc}0& 0\\ 1& 0\end{array}\right]\text{,}\)
which tells us that the set \(S=\setList{\left[\begin{array}{cc}1& 0\\ 0& -1\end{array}\right],\left[\begin{array}{cc}0& 1\\ 0& 0\end{array}\right],\left[\begin{array}{cc}0& 0\\ 1& 0\end{array}\right]}\) spans the kernel (since any element is a linear combination of the matrices in \(S\)).
The only way that the above linear combination of matrices is equal to the zero matrix is when \(a,b,c\) are all themselves equal to \(0\text{.}\) It follows that the set \(S\) is linearly independent and that \(S\) is therefore a basis for the kernel.
Now consider a matrix of the form \(\left[\begin{array}{cc}a& 0\\ 0& 0\end{array}\right]\text{.}\) Applying \(T\text{,}\) we get \(a\text{,}\) which can be any real number, so the image of \(T\) is all of \(\IR\text{.}\) It follows that \(\{1\}\) is a basis for the image.
Since the kernel is non-trivial, the map is not injective. Since the image is all of \(\IR\text{,}\) the map is surjective.
(c)
By our calculations above, the rank of \(T\) is equal to \(1\) and the nullity is equal to \(3\text{.}\) Adding these together gives \(4\text{,}\) the dimension of \(M_{2,2}\text{.}\)
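In symbols, this verifies the rank-nullity theorem for \(T\text{:}\) \(\textrm{rank}(T)+\textrm{null}(T)=1+3=4=\dim(M_{2,2})\text{.}\)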
Problem A.5.3. (Problem 3).
Let \(W\) denote the plane living in \(\IR^4\) that is given by the following set of equations:
(a)
Explain and demonstrate how to find two vectors \(\vec{u},\vec{v}\) for which \(W=\vspan\{\vec{u},\vec{v}\}\text{,}\) so that \(\setList{\vec{u},\vec{v}}\) is a basis for \(W\text{,}\) confirming that \(W\) is indeed a plane in 4-space.
(b)
Explain and demonstrate how to find an example of a linear transformation \(T\colon\IR^3\to\IR^4\) for which the image of \(T\) is equal to \(W\text{.}\)
(c)
Explain and demonstrate how to find one example of a linear transformation \(S\colon\IR^2\to\IR^4\) for which the image of \(S\) is equal to some \(1\)-dimensional subspace of \(W\text{.}\)
Solution.
(a)
By definition, \(W\) is the solution space of the given homogeneous linear system. Using methods we are by now familiar with (row reduce the coefficient matrix, identify the free variables, and write the general solution as a linear combination of vectors), we obtain two vectors \(\vec{u},\vec{v}\) for which \(W=\vspan\{\vec{u},\vec{v}\}\text{;}\) these two vectors form a basis for the solution space.
(b)
Methods and solutions will vary. Since the image of a linear transformation is equal to the span of the columns of its standard matrix, one approach is to find a \(4\times 3\) standard matrix whose columns span exactly \(W\text{.}\) One way to build such a matrix is described below.
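For instance (one possible construction, using the basis vectors \(\vec{u},\vec{v}\) found in part (a)), the matrix \(\left[\begin{array}{ccc}\vec{u}& \vec{v}& \vec{u}\end{array}\right]\) whose columns are \(\vec{u}\text{,}\) \(\vec{v}\text{,}\) and \(\vec{u}\) again has column space \(\vspan\{\vec{u},\vec{v}\}=W\text{,}\) so the corresponding linear transformation \(T\colon\IR^3\to\IR^4\) has image equal to \(W\text{.}\)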
(c)
Similar to above, we can search for an appropriate \(4\times 2\) standard matrix. Here, we want the columns to span a line that is contained in \(W\text{.}\) One way to do this is to let both columns be equal to one of the two spanning vectors.
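For instance (again using the basis vector \(\vec{u}\) from part (a); any nonzero vector in \(W\) works), the matrix \(\left[\begin{array}{cc}\vec{u}& \vec{u}\end{array}\right]\) has column space \(\vspan\{\vec{u}\}\text{,}\) a \(1\)-dimensional subspace of \(W\text{,}\) so the corresponding linear transformation \(S\colon\IR^2\to\IR^4\) has the desired image.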
Problem A.5.4. (Problem 4).
Let \(A\) be an \(m\times n\) matrix. The transpose of \(A\) is the \(n\times m\) matrix that is obtained by reflecting the matrix \(A\) about its main diagonal (the one running from top-left to bottom-right) and is denoted by \(A^T\text{.}\) For instance, the following is an example of a matrix \(B\) and its transpose \(B^T\text{:}\)
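With \(B=\left[\begin{array}{ccc}1& 2& 3\\ 4& 5& 6\end{array}\right]\) (an arbitrary choice of entries used only to illustrate the definition), we have \(B^T=\left[\begin{array}{cc}1& 4\\ 2& 5\\ 3& 6\end{array}\right]\text{.}\)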
Now suppose \(f\colon\mathbb{R}^n\to\mathbb{R}^m\) is a linear transformation with standard matrix \(A\) and let \(g\colon\mathbb{R}^m\to\mathbb{R}^n\) denote the linear transformation with standard matrix \(A^T\text{.}\)
(a)
With \(A=\left[\begin{array}{cccc}1& 0&2&-3\\3& 2&-1&-1\\4& 2&1&-4\end{array}\right]\) and notation as defined above, calculate the rank and nullity of the transformations \(f\) and \(g\text{.}\)
(b)
Now suppose that \(f\colon\mathbb{R}^n\to\mathbb{R}^m\) is an arbitrary linear transformation, let \(A\) denote its standard matrix, and let \(g\) be the transformation corresponding to the transpose \(B=A^T\text{.}\) Explain why the rank of \(f\) will always be equal to the rank of \(g\text{.}\)
(c)
With notation as in (b), explain why \(\textrm{null}(g)-\textrm{null}(f)=m-n\) and conclude that \(\textrm{null}(f)=\textrm{null}(g)\) if and only if \(m=n\text{.}\)
Solution.
(a)
Row reducing \(A\) shows that it has two pivot columns. Since there are two pivot columns, the rank of \(A\) (and hence of \(f\)) is equal to \(2\text{.}\) Since there are two non-pivot columns, the nullity of \(A\) is equal to \(2\text{.}\)
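For reference, here is one possible row reduction (the intermediate steps may vary, but the reduced row echelon form is the same): \(\left[\begin{array}{cccc}1& 0&2&-3\\3& 2&-1&-1\\4& 2&1&-4\end{array}\right]\sim\left[\begin{array}{cccc}1& 0&2&-3\\0& 2&-7&8\\0& 2&-7&8\end{array}\right]\sim\left[\begin{array}{cccc}1& 0&2&-3\\0& 1&-\frac{7}{2}&4\\0& 0&0&0\end{array}\right]\text{,}\) with pivots in the first two columns.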
One way to calculate the rank and nullity of \(A^T\) is to calculate the matrix \(A^T\) and then compute its RREF. Another approach is the following: since the rows of \(A\) are the columns of \(A^T\) and vice-versa, and since we know that the dimension of the row space of a matrix is equal to the dimension of its column space, it follows that the rank of \(g\) must be the same as the rank of \(f\text{,}\) namely \(2\text{.}\) Since the domain of \(g\) is \(\mathbb{R}^3\) (the dimension of the domain is \(3\) in this case), the rank-nullity theorem tells us that the nullity of \(g\) is \(3-2=1\text{.}\)
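For completeness, carrying out the first approach (one possible sequence of row operations): \(A^T=\left[\begin{array}{ccc}1& 3&4\\0& 2&2\\2& -1&1\\-3& -1&-4\end{array}\right]\sim\left[\begin{array}{ccc}1& 3&4\\0& 1&1\\0& -7&-7\\0& 8&8\end{array}\right]\sim\left[\begin{array}{ccc}1& 0&1\\0& 1&1\\0& 0&0\\0& 0&0\end{array}\right]\text{,}\) which again has two pivot columns (rank \(2\)) and one non-pivot column (nullity \(1\)).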
(b)
In general, if \(A\) is the standard matrix of a linear transformation \(f\text{,}\) then the rank of \(f\) is the dimension of the column space of \(A\text{.}\) Since this is always equal to the dimension of the row space of \(A\text{,}\) and the row space of \(A\) is the same as the column space of the transpose \(A^T\text{,}\) it follows that \(f\) and \(g\) have the same rank.
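In symbols (writing \(\textrm{col}\) and \(\textrm{row}\) for the column space and row space of a matrix): \(\textrm{rank}(g)=\dim\textrm{col}(A^T)=\dim\textrm{row}(A)=\dim\textrm{col}(A)=\textrm{rank}(f)\text{.}\)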
(c)
Applying the rank-nullity theorem to \(f\) and \(g\) yields the following equalities: \(\textrm{rank}(f)+\textrm{null}(f)=n\) and \(\textrm{rank}(g)+\textrm{null}(g)=m\text{.}\) Subtracting the first equation from the second and using the fact from (b) that \(\textrm{rank}(f)=\textrm{rank}(g)\text{,}\) we conclude that \(\textrm{null}(g)-\textrm{null}(f)=m-n\text{.}\) In particular, \(\textrm{null}(f)=\textrm{null}(g)\) if and only if \(m-n=0\text{,}\) that is, if and only if \(m=n\text{.}\)