
Section A.6 Problem Set 5

Instructions.

Prior to beginning this problem set, consider reading the Problem Set Success Guide (Section A.1) for advice and a clear statement of expectations for Problem Sets in this course. Upload your solutions to all problems on this page to Gradescope as a single .pdf file, remembering to assign pages appropriately for each question. Complete instructions for Problem Sets are available on Canvas.

Problem A.6.1. (Problem 1).

In class, we defined the inverse of a matrix in two steps. First, if \(A\) is a matrix, we said that \(A\) was invertible if the corresponding linear map \(T\colon\IR^n\to\IR^n\) was an invertible function. Given this, the inverse of \(A\text{,}\) denoted \(A^{-1}\text{,}\) was defined to be the standard matrix of the inverse linear map \(T^{-1}\colon\IR^n\to\IR^n\text{.}\)
An alternative approach would be: an \(n\times n\) matrix \(A\) is invertible if we can find a matrix \(B\) for which \(AB=BA=I_n\text{;}\) in this case, \(B\) is unique and we define \(A^{-1}\) to be this matrix. For this problem (and future ones) you may use whichever characterization you prefer.
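The second characterization is easy to check numerically. Here is a minimal sketch using NumPy (the matrix \(A\) below is a hypothetical example, not one from this problem set): we compute a candidate inverse and confirm that both products give the identity, up to floating-point error.

```python
import numpy as np

# A hypothetical invertible 2x2 matrix (det = 4, so it is invertible).
A = np.array([[1.0, 3.0],
              [-1.0, 1.0]])

B = np.linalg.inv(A)  # candidate inverse, computed numerically

I2 = np.eye(2)
# Both products should equal the identity, up to floating-point error.
print(np.allclose(A @ B, I2))
print(np.allclose(B @ A, I2))
```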

(a)

If \(A,B\) are both invertible matrices, explain why \(AB\) is also invertible and why \((AB)^{-1}=B^{-1}A^{-1}\text{.}\)

(b)

If \(A,B\) are \(n\times n\) matrices and \(A\) is invertible, explain and demonstrate how to solve the following matrix equations for \(X\text{:}\)
  1. \(\displaystyle A^{-1}XA=B\)
  2. \(\displaystyle AXA^{-1}=B\)
  3. \(\displaystyle ABX=I\)

(c)

If \(H,G\) are invertible matrices, is it necessarily the case that \((H+G)\) is invertible? If yes, prove it; if not, provide a counterexample.
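The identity in part (a) can also be sanity-checked numerically. The sketch below (assuming NumPy; the random matrices are hypothetical examples, and a random square matrix is invertible with probability 1) is not a substitute for the explanation part (a) asks for, but it can catch a wrong guess about the order of the factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random 3x3 matrices; these are almost surely invertible.
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # note the reversed order
print(np.allclose(lhs, rhs))
```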

Problem A.6.2. (Problem 2).

Let \(A\) and \(B\) be two matrices for which the products \(AB\) and \(BA\) are both defined. If \(AB=BA\text{,}\) we say that \(A,B\) commute; if \(AB=-BA\text{,}\) we say that \(A\) and \(B\) anti-commute.

(a)

Let \(A=\left[\begin{array}{cc}1& 3\\-1& 1\end{array}\right]\) and \(S=\{X\in M_{2,2}|\ AX=XA\}\) be the subset of \(2\times 2\) matrices that commute with \(A\text{.}\) Explain why \(S\) is a subspace of \(M_{2,2}\) and explain and demonstrate how to find a basis for \(S\text{.}\)

(b)

Let \(B=\left[\begin{array}{cc}0& 1\\-1& 0\end{array}\right]\) and \(T=\{Y\in M_{2,2}|\ BY=-YB\}\) be the subset of \(2\times 2\) matrices that anti-commute with \(B\text{.}\) Explain why \(T\) is a subspace of \(M_{2,2}\) and explain and demonstrate how to find a basis for \(T\text{.}\)

Problem A.6.3. (Problem 3).

Suppose that \(S\colon V\to W\) and \(T\colon W\to U\) are two linear transformations of vector spaces.

(a)

If \(S\) and \(T\) are both surjective, explain why \(T\circ S\) is also surjective.

(b)

Let \(A\) be an \(m\times n\) matrix and \(B\) an \(n\times k\) matrix. Suppose further that we know that \(\RREF(A)\) and \(\RREF(B)\) have pivot positions in each row. Explain why each row of \(\RREF(AB)\) also has a pivot.

(c)

Again, let \(A\) be an \(m\times n\) matrix and \(B\) an \(n\times k\) matrix. Suppose we know that \(\RREF(AB)\) has a pivot in each row. Explain why \(\RREF(A)\) must also have a pivot position in each of its rows, but show, by providing an example, that it is possible for some row of \(\RREF(B)\) to be missing a pivot.
Remark: If you’d like to explore further: formulate and answer a related sequence of activities involving injective transformations and products of matrices whose RREFs have pivots in each column.

Observation A.6.4. A Different Take on Matrix Multiplication.

In class, we defined the product of two matrices to be the standard matrix of the composition of the two corresponding linear transformations. Here is an alternate formula/definition for the matrix product that builds on the work you’ve been doing with dot-products.
Suppose that \(A\) is an \(m\times n\) matrix and that \(B\) is an \(n\times k\) matrix. Let \(\vec{r}_1,\dots,\vec{r}_m\) be the rows of \(A\) and let \(\vec{c}_1,\dots,\vec{c}_k\) be the columns of \(B\text{,}\) so that we can write \(A=\left[\begin{array}{c}\vec{r}_1\\\vdots\\\vec{r}_m\end{array}\right]\) and \(B=[\vec{c}_1\ \cdots\ \vec{c}_k]\text{.}\) Since the rows of \(A\) and the columns of \(B\) are all vectors in \(\IR^n\text{,}\) it makes sense to take the dot-product between them. The matrix product can then be defined by
\begin{equation*} AB=\left[\begin{array}{ccc}\vec{r}_1\bullet\vec{c}_1&\cdots&\vec{r}_1\bullet\vec{c}_k\\\vdots & \ddots & \vdots\\\vec{r}_m\bullet\vec{c}_1&\cdots&\vec{r}_m\bullet\vec{c}_k\end{array}\right]. \end{equation*}
In other words, the \((i,j)\) entry of the product \(AB\) is the dot-product of the \(i\)-th row of \(A\) and the \(j\)-th column of \(B\text{.}\)
As an example, in class, we computed that:
\begin{equation*} \left[\begin{array}{cc}1&2\\0&1\\3&5\\-1&-2\end{array}\right]\left[\begin{array}{ccc}2&1&-3\\5&-3&4\end{array}\right]=\left[\begin{array}{ccc}12&-5&5\\5&-3&4\\31&-12&11\\-12&5&-5\end{array}\right]. \end{equation*}
We can check an entry of the above as follows: the \((3,2)\) entry of this product is \(-12\text{,}\) which is the same as the dot-product of the third row of \(A\) and the second column of \(B\text{:}\)
\begin{equation*} \left[\begin{array}{c}3\\5\end{array}\right]\bullet\left[\begin{array}{c}1\\-3\end{array}\right]=-12. \end{equation*}
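The entrywise dot-product formula can be verified directly against the built-in matrix product. The sketch below (assuming NumPy) rebuilds the product above one dot-product at a time and confirms it matches, including the \((3,2)\) entry (zero-indexed as \((2,1)\)).

```python
import numpy as np

A = np.array([[1, 2],
              [0, 1],
              [3, 5],
              [-1, -2]])
B = np.array([[2, 1, -3],
              [5, -3, 4]])

# Entry (i, j) of AB as the dot-product of row i of A with column j of B.
entrywise = np.array([[A[i] @ B[:, j] for j in range(B.shape[1])]
                      for i in range(A.shape[0])])

print(np.array_equal(entrywise, A @ B))  # the two formulas agree
print(entrywise[2, 1])  # the (3,2) entry: -12
```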

Problem A.6.5. (Problem 4).

Use the above conceptualization of the matrix product to revisit some old friends from a new perspective.

(a)

Let \(T\colon\IR^n\to\IR^m\) be a linear transformation with \(m\times n\) standard matrix \(A\text{.}\) Explain why the kernel of \(T\) is equal to the orthogonal complement of the row space of \(A\text{.}\) That is, explain why:
\begin{equation*} \ker(T)=\textrm{Row}(A)^\perp. \end{equation*}

(b)

Using part (a), explain and demonstrate how to calculate a basis for \(W^\perp\) where:
\begin{equation*} W=\vspan\left\{\left[\begin{array}{c}1\\2\\-3\\2\end{array}\right],\left[\begin{array}{c}2\\7\\-1\\5\end{array}\right]\right\}. \end{equation*}

(c)

Let \(W\) be a subspace of \(\IR^n\text{.}\) Explain, using results covered in class or in previous problem sets, why
\begin{equation*} \dim(W)+\dim(W^\perp)=n. \end{equation*}
Hint.
For part (c): choose a spanning set for \(W\text{.}\) That is, suppose that \(W=\vspan\{\vec{v}_1,\dots, \vec{v}_r\}\) for some finite set of vectors.