Finding inverse matrices

In the past, we have repeatedly added multiples of vectors to each other until we arrived at something simpler. To simplify further, we have also multiplied rows by nonzero constants. These steps can be used to do many different things.

On this page, I will use the name elementary row operations for these two steps: adding a multiple of one row to another row, and multiplying a row by a nonzero constant. We will find one more use case for them.

Elementary row operations as matrix multiplication

Consider any matrix that is just like an identity matrix, but with one of the zeros replaced by a different number, e.g. $$ \begin{bmatrix} 1&0&0 \\ 5&1&0 \\ 0&0&1 \end{bmatrix}. $$ Let's multiply this with any matrix of the same size: $$ \begin{align} &~~~~~ \begin{bmatrix} 1&0&0 \\ 5&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} \red a&\red b&\red c \\ \green d&\green e&\green f \\ \blue g&\blue h&\blue i \end{bmatrix} \\ &= \begin{bmatrix} (1,0,0)\cdot(\red a,\green d,\blue g)&(1,0,0)\cdot(\red b,\green e,\blue h)&(1,0,0)\cdot(\red c,\green f,\blue i) \\ (5,1,0)\cdot(\red a,\green d,\blue g)&(5,1,0)\cdot(\red b,\green e,\blue h)&(5,1,0)\cdot(\red c,\green f,\blue i) \\ (0,0,1)\cdot(\red a,\green d,\blue g)&(0,0,1)\cdot(\red b,\green e,\blue h)&(0,0,1)\cdot(\red c,\green f,\blue i) \end{bmatrix} \\ &= \begin{bmatrix} \red a&\red b&\red c \\ \green d+5\red a&\green e+5\red b&\green f+5\red c \\ \blue g&\blue h&\blue i \end{bmatrix} \end{align} $$ This added a multiple of the first row to the second row. Moving the $5$ down adds to the bottom row instead: $$ \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 5&0&1 \end{bmatrix} \begin{bmatrix} \red a&\red b&\red c \\ \green d&\green e&\green f \\ \blue g&\blue h&\blue i \end{bmatrix} = \begin{bmatrix} \red a&\red b&\red c \\ \green d&\green e&\green f \\ \blue g+5\red a&\blue h+5\red b&\blue i+5\red c \end{bmatrix} $$ Moving the $5$ right makes the added numbers come from a different row: $$ \begin{bmatrix} 1&0&0 \\ 0&1&5 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} \red a&\red b&\red c \\ \green d&\green e&\green f \\ \blue g&\blue h&\blue i \end{bmatrix} = \begin{bmatrix} \red a&\red b&\red c \\ \green d+5\blue g&\green e+5\blue h&\green f+5\blue i \\ \blue g&\blue h&\blue i \end{bmatrix} $$ By moving the location of 5, or by using a different number instead of 5, we can add any multiple of any row to any other row this way.
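If you want to check this numerically, here is a minimal NumPy sketch (the specific matrices and variable names are just illustrative choices, not anything required by the math):

```python
import numpy as np

# Identity-like matrix with a 5 below the diagonal (second row, first column).
E = np.array([[1, 0, 0],
              [5, 1, 0],
              [0, 0, 1]])

# Any matrix of the same size.
M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Left-multiplying by E adds 5 times the first row to the second row.
expected = M.copy()
expected[1] = M[1] + 5 * M[0]
print(np.array_equal(E @ M, expected))  # True
```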

If the $5$ is on the diagonal (i.e. where the identity matrix has $1$), it multiplies the corresponding row by $5$ instead: $$ \begin{bmatrix} 1&0&0 \\ 0&5&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} \red a&\red b&\red c \\ \green d&\green e&\green f \\ \blue g&\blue h&\blue i \end{bmatrix} = \begin{bmatrix} \red a&\red b&\red c \\ 5\green d&5\green e&5\green f \\ \blue g&\blue h&\blue i \end{bmatrix} $$ Because order matters, it is important that the matrix representing the elementary operation (i.e. the matrix where only one number differs from the identity matrix) is on the left, and the matrix whose rows we are modifying is on the right.
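A similar quick check (again just an illustrative NumPy sketch) shows the scaling case, and also why the order matters: with the elementary matrix on the right instead of the left, it is the columns that get scaled, not the rows.

```python
import numpy as np

# Identity-like matrix with the middle 1 replaced by 5.
E = np.array([[1, 0, 0],
              [0, 5, 0],
              [0, 0, 1]])

M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(E @ M)  # second row of M multiplied by 5
print(M @ E)  # second column of M multiplied by 5: the order matters
```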

Any elementary row operation can be viewed as multiplying from the left by a matrix.

Finding the inverse with elementary row operations

Let's say we start with a matrix $A$. By applying some elementary row operation, represented by a matrix $E_1$, we get the matrix $E_1 A$. Let's say we apply four more elementary row operations, and the result happens to be the identity matrix: $$ E_5 E_4 E_3 E_2 E_1 A = I $$ Just like in this derivation, this shows that $A$ is invertible, and its inverse matrix is $E_5 E_4 E_3 E_2 E_1$. To avoid having to calculate $E_5 E_4 E_3 E_2 E_1$ as a huge matrix multiplication, we can rewrite it with an extra $I$ at the end: $$ A^{-1} = E_5 E_4 E_3 E_2 E_1 = E_5 E_4 E_3 E_2 E_1 I $$ So to find the inverse matrix of $A$, we just have to apply the same elementary row operations, in the same order, to the identity matrix.

For example, let's find the inverse of $$ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}. $$ Because we will apply the same operations to $A$ and $I$, let's write them next to each other.

$$\begin{align} &~~~ \begin{bmatrix} \red{1} & \red{2} \\ \blue{3} & \blue{4} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \blue{0} & \blue{1} \\ \end{bmatrix} \\ \end{align}$$

Next we choose elementary row operations so that the matrix on the left side, which starts as $A$, turns into the identity matrix. The $1$ in the top left corner is already correct. To get $0$ below it, we notice that $\blue 3 + (-3)\red 1 = \magenta 0$.

$$\begin{align} &~~~ \begin{bmatrix} \red{1} & \red{2} \\ \blue{3} & \blue{4} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \blue{0} & \blue{1} \\ \end{bmatrix} \\ \to&~~~ \begin{bmatrix} \red{1} & \red{2} \\ \magenta{0} & \magenta{-2} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \magenta{-3} & \magenta{1} \\ \end{bmatrix} \quad \magenta{\text{new bottom}} = \blue{\text{old bottom}} + \left( -3 \right) \cdot \red{\text{top}} \\ \end{align}$$

To get $1$ in the bottom right corner, we next multiply the bottom row by $-\frac{1}{2}$, since $(-\frac{1}{2})\magenta{(-2)} = \green 1$.

$$\begin{align} &~~~ \begin{bmatrix} \red{1} & \red{2} \\ \magenta{0} & \magenta{-2} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \magenta{-3} & \magenta{1} \\ \end{bmatrix} \\ \to&~~~ \begin{bmatrix} \red{1} & \red{2} \\ \green{0} & \green{1} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \green{\frac{3}{2}} & \green{-\frac{1}{2}} \\ \end{bmatrix} \quad \green{\text{new bottom}} = \left( -\frac{1}{2} \right) \cdot \magenta{\text{old bottom}} \\ \end{align}$$

We can now get rid of the remaining $\red 2$:

$$\begin{align} &~~~ \begin{bmatrix} \red{1} & \red{2} \\ \green{0} & \green{1} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \green{\frac{3}{2}} & \green{-\frac{1}{2}} \\ \end{bmatrix} \\ \to&~~~ \begin{bmatrix} \darkyellow{1} & \darkyellow{0} \\ \green{0} & \green{1} \\ \end{bmatrix} \qquad \begin{bmatrix} \darkyellow{-2} & \darkyellow{1} \\ \green{\frac{3}{2}} & \green{-\frac{1}{2}} \\ \end{bmatrix} \quad \darkyellow{\text{new top}} = \red{\text{old top}} + \left( -2 \right) \cdot \green{\text{bottom}} \\ \end{align}$$

Here are all the steps:

$$\begin{align} &~~~ \begin{bmatrix} \red{1} & \red{2} \\ \blue{3} & \blue{4} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \blue{0} & \blue{1} \\ \end{bmatrix} \\ \to&~~~ \begin{bmatrix} \red{1} & \red{2} \\ \magenta{0} & \magenta{-2} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \magenta{-3} & \magenta{1} \\ \end{bmatrix} \quad \magenta{\text{new bottom}} = \blue{\text{old bottom}} + \left( -3 \right) \cdot \red{\text{top}} \\ \to&~~~ \begin{bmatrix} \red{1} & \red{2} \\ \green{0} & \green{1} \\ \end{bmatrix} \qquad \begin{bmatrix} \red{1} & \red{0} \\ \green{\frac{3}{2}} & \green{-\frac{1}{2}} \\ \end{bmatrix} \quad \green{\text{new bottom}} = \left( -\frac{1}{2} \right) \cdot \magenta{\text{old bottom}} \\ \to&~~~ \begin{bmatrix} \darkyellow{1} & \darkyellow{0} \\ \green{0} & \green{1} \\ \end{bmatrix} \qquad \begin{bmatrix} \darkyellow{-2} & \darkyellow{1} \\ \green{\frac{3}{2}} & \green{-\frac{1}{2}} \\ \end{bmatrix} \quad \darkyellow{\text{new top}} = \red{\text{old top}} + \left( -2 \right) \cdot \green{\text{bottom}} \\ \end{align}$$

This shows that $$ \begin{bmatrix} \red 1 & \red 2 \\ \blue 3 & \blue 4 \end{bmatrix}^{-1} = \begin{bmatrix} \darkyellow{-2} & \darkyellow 1 \\ \green{\frac{3}{2}} & \green{-\frac{1}{2}} \end{bmatrix}. $$
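We can also verify this with the elementary matrices themselves, as in the $E_5 E_4 E_3 E_2 E_1$ discussion above. The NumPy sketch below (with made-up names E1, E2, E3 for the three steps) checks that the product of the three elementary matrices turns $A$ into the identity matrix and equals the inverse we just found:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

# The three elementary row operations from the steps above, written as matrices.
E1 = np.array([[1., 0.],      # add -3 times the top row to the bottom row
               [-3., 1.]])
E2 = np.array([[1., 0.],      # multiply the bottom row by -1/2
               [0., -0.5]])
E3 = np.array([[1., -2.],     # add -2 times the bottom row to the top row
               [0., 1.]])

print(E3 @ E2 @ E1 @ A)                              # the identity matrix
print(E3 @ E2 @ E1)                                  # [[-2, 1], [1.5, -0.5]]
print(np.allclose(E3 @ E2 @ E1, np.linalg.inv(A)))   # True
```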

Checking when it works

Let's check which invertible matrices can be turned into the identity matrix with elementary row operations, i.e. when this method of computing inverse matrices works.

Consider any invertible $4 \times 4$ matrix. Because it doesn't matter exactly what numbers it contains, I will denote "any number" by $*$. $$ \begin{bmatrix} \red *&*&*&*\\ \red *&*&*&*\\ \red *&*&*&*\\ \red *&*&*&* \end{bmatrix} $$ Consider the left column, highlighted in red. It must contain some nonzero number, because if it were all zeros, the columns would be linearly dependent and the matrix wouldn't be invertible. Let's say that the nonzero number is 5. $$ \begin{bmatrix} \red*&*&*&*\\ \red*&*&*&*\\ \red5&*&*&*\\ \red*&*&*&* \end{bmatrix} $$ If the top left corner is zero, we can add the row containing the 5 (or whatever the nonzero number is) to the top row. This way we can ensure there's a nonzero number in the top left corner. Let's say it's 7. $$ \begin{bmatrix} \red7&*&*&*\\ \red*&*&*&*\\ \red5&*&*&*\\ \red*&*&*&* \end{bmatrix} $$ Let's now multiply the top row by $\frac{1}{7}$. This way we get 1 in the corner, just like in an identity matrix. $$ \begin{bmatrix} \red1&*&*&*\\ \red*&*&*&*\\ \red5&*&*&*\\ \red*&*&*&* \end{bmatrix} $$ We can now add multiples of the top row to the other rows so that the rest of the first column becomes zeros. For example, adding $-5$ times the first row to the third row cancels the $5$. $$ \begin{bmatrix} 1&*&*&*\\ 0&\red*&*&*\\ 0&\red*&*&*\\ 0&\red*&*&* \end{bmatrix} $$ Note that the matrix is still invertible: every elementary row operation can be undone with another elementary row operation, so these operations don't change whether the columns are linearly independent. Let's now focus on the numbers in the second column except the topmost one, again highlighted in red. If they were all zero, then the second column would be $(x,0,0,0)$ with some number $x$, and it would be a linear combination of the first column $(1,0,0,0)$. Because the matrix is invertible, this isn't possible, and one of the red numbers is nonzero. Just like before, we can ensure that the topmost red number (the one on the diagonal) is nonzero, e.g. $5$, and then multiply that row by e.g. $\frac{1}{5}$ to get it to $1$. $$ \begin{bmatrix} 1&*&*&*\\ 0&\red1&*&*\\ 0&\red*&*&*\\ 0&\red*&*&* \end{bmatrix} $$ By adding multiples of the second row to the other rows, we can again zero the rest of the column, including the topmost number that we have ignored so far. $$ \begin{bmatrix} 1&0&*&*\\ 0&1&*&*\\ 0&0&\red*&*\\ 0&0&\red*&* \end{bmatrix} $$ If the two numbers highlighted in red were both zero, the third column would be $(x,y,0,0)$ with some numbers $x$ and $y$, so it would be a linear combination of the first column $(1,0,0,0)$ and the second column $(0,1,0,0)$. This isn't possible, because the columns are linearly independent, so one of the two red numbers is nonzero. As before, we can ensure that the topmost red number is nonzero. Let's say it's 7. $$ \begin{bmatrix} 1&0&*&*\\ 0&1&*&*\\ 0&0&\red7&*\\ 0&0&\red*&* \end{bmatrix} $$ By continuing this process, we eventually arrive at the identity matrix.

The inverse of any invertible matrix can be calculated with elementary row operations.
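The procedure described above is essentially the Gauss-Jordan method. Here is a rough Python sketch of it, assuming the input is a square, invertible NumPy array and ignoring floating-point round-off; the function name is made up for this illustration.

```python
import numpy as np

def inverse_by_row_operations(A):
    """Apply the same elementary row operations to A and to I until A becomes I."""
    A = A.astype(float)   # work on a float copy, leaving the caller's matrix untouched
    n = A.shape[0]
    inv = np.eye(n)

    for col in range(n):
        # Find a row at or below the diagonal with a nonzero entry in this column;
        # one must exist if the matrix is invertible.
        pivot = col + int(np.argmax(A[col:, col] != 0))
        if A[pivot, col] == 0:
            raise ValueError("matrix is not invertible")
        if pivot != col:
            # Add the row with the nonzero entry to the current row.
            A[col] += A[pivot]
            inv[col] += inv[pivot]

        # Multiply the current row so that the diagonal entry becomes 1.
        factor = 1 / A[col, col]
        A[col] *= factor
        inv[col] *= factor

        # Add multiples of the current row to the other rows to zero out the column.
        for row in range(n):
            if row != col:
                c = -A[row, col]
                A[row] += c * A[col]
                inv[row] += c * inv[col]

    return inv

A = np.array([[1, 2], [3, 4]])
print(inverse_by_row_operations(A))                                  # [[-2, 1], [1.5, -0.5]]
print(np.allclose(inverse_by_row_operations(A), np.linalg.inv(A)))   # True
```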