Calculating determinants with row operations¶
Consider a determinant that has rows $\red{\vec v}$ and $\blue{\vec w}$, and possibly several other unrelated rows. I will lazily denote this determinant with $\det(\red{\vec v}, \blue{\vec w})$. Now consider $\det(\red{\vec v} +2\blue{\vec w}, \blue{\vec w})$. By the linearity in the definition of determinant, we get $$ \begin{align} \det(\red{\vec v} + 2\blue{\vec w}, \blue{\vec w}) &= \det(\red{\vec v},\blue{\vec w}) + \det(2\blue{\vec w},\blue{\vec w}) \\ &= \det(\red{\vec v},\blue{\vec w}) + 2\det(\blue{\vec w},\blue{\vec w}). \end{align} $$ The second determinant is zero, because it has the same row in it twice; this was previously explained here. Therefore $$ \det(\red{\vec v} + 2\blue{\vec w}, \blue{\vec w}) = \det(\red{\vec v},\blue{\vec w}). $$
Adding a multiple of one row to another doesn't affect the determinant.
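If you want to check this rule numerically, here is a small NumPy sketch; the matrix is an arbitrary example I made up, not one used elsewhere on this page.

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [ 0.0,  3.0,  4.0],
              [ 5.0, -2.0,  1.0]])

B = A.copy()
B[0] += 2 * B[1]  # add 2 times row 2 to row 1

print(np.linalg.det(A))  # both prints show the same value,
print(np.linalg.det(B))  # up to floating-point rounding
```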
This way it is possible to compute determinants very similarly to checking linear dependence and independence. For example, let's calculate $$ \det\begin{bmatrix} -1 & -2 & 2 & 2 \\ 3 & 0 & 3 & -2 \\ -1 & -1 & 3 & -1 \\ -2 & 0 & -2 & -1 \end{bmatrix}. $$ The second column already has many zeros. We can use its $\magenta{-1}$ to cancel the $\red{-2}$ in the top row to zero.
$$\begin{align} \det\begin{bmatrix} \red{-1} & \red{-2} & \red{2} & \red{2} \\ \blue{3} & \blue{0} & \blue{3} & \blue{-2} \\ \magenta{-1} & \magenta{-1} & \magenta{3} & \magenta{-1} \\ \green{-2} & \green{0} & \green{-2} & \green{-1} \\ \end{bmatrix} &= \det\begin{bmatrix} \darkyellow{1} & \darkyellow{0} & \darkyellow{-4} & \darkyellow{4} \\ \blue{3} & \blue{0} & \blue{3} & \blue{-2} \\ \magenta{-1} & \magenta{-1} & \magenta{3} & \magenta{-1} \\ \green{-2} & \green{0} & \green{-2} & \green{-1} \\ \end{bmatrix} \quad \darkyellow{\text{new row 1}} = \red{\text{old row 1}} + \left( -2 \right) \cdot \magenta{\text{row 3}} \\ \end{align}$$Next we can use the $\darkyellow{1}$ in the top left corner to make all other rows start with zero.
$$\begin{align} \det\begin{bmatrix} \darkyellow{1} & \darkyellow{0} & \darkyellow{-4} & \darkyellow{4} \\ \blue{3} & \blue{0} & \blue{3} & \blue{-2} \\ \magenta{-1} & \magenta{-1} & \magenta{3} & \magenta{-1} \\ \green{-2} & \green{0} & \green{-2} & \green{-1} \\ \end{bmatrix} &= \det\begin{bmatrix} \darkyellow{1} & \darkyellow{0} & \darkyellow{-4} & \darkyellow{4} \\ \red{0} & \red{0} & \red{15} & \red{-14} \\ \magenta{-1} & \magenta{-1} & \magenta{3} & \magenta{-1} \\ \green{-2} & \green{0} & \green{-2} & \green{-1} \\ \end{bmatrix} \quad \red{\text{new row 2}} = \blue{\text{old row 2}} + \left( -3 \right) \cdot \darkyellow{\text{row 1}} \\ &= \det\begin{bmatrix} \darkyellow{1} & \darkyellow{0} & \darkyellow{-4} & \darkyellow{4} \\ \red{0} & \red{0} & \red{15} & \red{-14} \\ \blue{0} & \blue{-1} & \blue{-1} & \blue{3} \\ \green{-2} & \green{0} & \green{-2} & \green{-1} \\ \end{bmatrix} \quad \blue{\text{new row 3}} = \magenta{\text{old row 3}} + 1 \cdot \darkyellow{\text{row 1}} \\ &= \det\begin{bmatrix} \darkyellow{1} & \darkyellow{0} & \darkyellow{-4} & \darkyellow{4} \\ \red{0} & \red{0} & \red{15} & \red{-14} \\ \blue{0} & \blue{-1} & \blue{-1} & \blue{3} \\ \magenta{0} & \magenta{0} & \magenta{-10} & \magenta{7} \\ \end{bmatrix} \quad \magenta{\text{new row 4}} = \green{\text{old row 4}} + 2 \cdot \darkyellow{\text{row 1}} \\ \end{align}$$The $\magenta{7}$ and $\red{-14}$ can be canceled to zero nicely. This leaves $\green{-5}$ as the only nonzero number in its row, and we can use that to zero the rest of the third column.
$$\begin{align} \det\begin{bmatrix} \darkyellow{1} & \darkyellow{0} & \darkyellow{-4} & \darkyellow{4} \\ \red{0} & \red{0} & \red{15} & \red{-14} \\ \blue{0} & \blue{-1} & \blue{-1} & \blue{3} \\ \magenta{0} & \magenta{0} & \magenta{-10} & \magenta{7} \\ \end{bmatrix} &= \det\begin{bmatrix} \darkyellow{1} & \darkyellow{0} & \darkyellow{-4} & \darkyellow{4} \\ \green{0} & \green{0} & \green{-5} & \green{0} \\ \blue{0} & \blue{-1} & \blue{-1} & \blue{3} \\ \magenta{0} & \magenta{0} & \magenta{-10} & \magenta{7} \\ \end{bmatrix} \quad \green{\text{new row 2}} = \red{\text{old row 2}} + 2 \cdot \magenta{\text{row 4}} \\ &= \det\begin{bmatrix} \red{1} & \red{0} & \red{0} & \red{4} \\ \green{0} & \green{0} & \green{-5} & \green{0} \\ \blue{0} & \blue{-1} & \blue{-1} & \blue{3} \\ \magenta{0} & \magenta{0} & \magenta{-10} & \magenta{7} \\ \end{bmatrix} \quad \red{\text{new row 1}} = \darkyellow{\text{old row 1}} + \left( -\frac{4}{5} \right) \cdot \green{\text{row 2}} \\ &= \det\begin{bmatrix} \red{1} & \red{0} & \red{0} & \red{4} \\ \green{0} & \green{0} & \green{-5} & \green{0} \\ \darkyellow{0} & \darkyellow{-1} & \darkyellow{0} & \darkyellow{3} \\ \magenta{0} & \magenta{0} & \magenta{-10} & \magenta{7} \\ \end{bmatrix} \quad \darkyellow{\text{new row 3}} = \blue{\text{old row 3}} + \left( -\frac{1}{5} \right) \cdot \green{\text{row 2}} \\ &= \det\begin{bmatrix} \red{1} & \red{0} & \red{0} & \red{4} \\ \green{0} & \green{0} & \green{-5} & \green{0} \\ \darkyellow{0} & \darkyellow{-1} & \darkyellow{0} & \darkyellow{3} \\ \blue{0} & \blue{0} & \blue{0} & \blue{7} \\ \end{bmatrix} \quad \blue{\text{new row 4}} = \magenta{\text{old row 4}} + \left( -2 \right) \cdot \green{\text{row 2}} \\ \end{align}$$Finally we use the $\blue{7}$ to zero the rest of the last column.
$$\begin{align} \det\begin{bmatrix} \red{1} & \red{0} & \red{0} & \red{4} \\ \green{0} & \green{0} & \green{-5} & \green{0} \\ \darkyellow{0} & \darkyellow{-1} & \darkyellow{0} & \darkyellow{3} \\ \blue{0} & \blue{0} & \blue{0} & \blue{7} \\ \end{bmatrix} &= \det\begin{bmatrix} \magenta{1} & \magenta{0} & \magenta{0} & \magenta{0} \\ \green{0} & \green{0} & \green{-5} & \green{0} \\ \darkyellow{0} & \darkyellow{-1} & \darkyellow{0} & \darkyellow{3} \\ \blue{0} & \blue{0} & \blue{0} & \blue{7} \\ \end{bmatrix} \quad \magenta{\text{new row 1}} = \red{\text{old row 1}} + \left( -\frac{4}{7} \right) \cdot \blue{\text{row 4}} \\ &= \det\begin{bmatrix} \magenta{1} & \magenta{0} & \magenta{0} & \magenta{0} \\ \green{0} & \green{0} & \green{-5} & \green{0} \\ \red{0} & \red{-1} & \red{0} & \red{0} \\ \blue{0} & \blue{0} & \blue{0} & \blue{7} \\ \end{bmatrix} \quad \red{\text{new row 3}} = \darkyellow{\text{old row 3}} + \left( -\frac{3}{7} \right) \cdot \blue{\text{row 4}} \\ \end{align}$$By linearity, we can bring the remaining numbers to the front, and the determinant becomes $$ \red{(-1)}\green{(-5)}\blue7\det\begin{bmatrix} 1&0&0&0 \\ 0&0&1&0 \\ 0&1&0&0 \\ 0&0&0&1 \end{bmatrix}. $$ The remaining matrix can be turned into the identity matrix with one swap, so its determinant is $-1$, and we get $$ \red{(-1)}\green{(-5)}\blue7(-1) = -35. $$
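If you have NumPy available, you can double-check this result, and also see the whole method written out as a short procedure. The sketch below is my own: the function name, the pivot-row choice, and reducing only to a triangular matrix (instead of going all the way to a permutation matrix) are choices I made. It uses only the row operations discussed above, flips the sign for each row swap, and multiplies the diagonal entries at the end.

```python
import numpy as np

A = np.array([[-1, -2,  2,  2],
              [ 3,  0,  3, -2],
              [-1, -1,  3, -1],
              [-2,  0, -2, -1]], dtype=float)

def det_by_row_operations(M):
    A = np.array(M, dtype=float)
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        # choose a pivot row for column k (swapping two rows flips the sign)
        p = k + np.argmax(np.abs(A[k:, k]))
        if np.isclose(A[p, k], 0.0):
            return 0.0  # no usable pivot: the rows are linearly dependent
        if p != k:
            A[[k, p]] = A[[p, k]]
            sign = -sign
        for i in range(k + 1, n):
            # adding a multiple of the pivot row doesn't change the determinant
            A[i] -= (A[i, k] / A[k, k]) * A[k]
    # determinant of a triangular matrix = product of its diagonal entries
    return sign * np.prod(np.diag(A))

print(det_by_row_operations(A))  # -35.0 (up to rounding)
print(np.linalg.det(A))          # same value from NumPy's own routine
```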
Determinants and linear dependence/independence¶
If the rows of a determinant are linearly dependent, we can cancel one of them to zero by adding multiples of other rows into it. For example, if $\vec u = 2\vec v + 3\vec w$, then by adding row $\vec v$ to row $\vec u$ with multiplier $-2$, and row $\vec w$ to row $\vec u$ with multiplier $-3$, we get a determinant that has the row $$ \vec u - 2\vec v - 3\vec w = (0,0,\dots,0). $$ Because $$ (0,0,\dots,0) = \red 0(0,0,\dots,0), $$ and it is possible to bring the $\red 0$ in front of the determinant by linearity, a determinant containing a zero row is always zero. So, if the rows of a square matrix are linearly dependent, then the determinant is zero.
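As a quick numerical illustration (the vectors below are arbitrary choices of mine), building a matrix whose first row is $2\vec v + 3\vec w$ makes the computed determinant come out as zero, up to rounding.

```python
import numpy as np

v = np.array([1.0, 4.0, -2.0])
w = np.array([0.0, 3.0,  5.0])
u = 2 * v + 3 * w          # u depends linearly on v and w

A = np.vstack([u, v, w])   # rows: u, v, w
print(np.linalg.det(A))    # 0.0 up to floating-point rounding
```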
If the rows are linearly independent, the matrix can be turned into the identity matrix with elementary row operations. Adding a multiple of one row to another doesn't change the determinant at all, swapping two rows only multiplies it by $-1$, and by linearity, multiplying a row by a nonzero number multiplies the whole determinant by the same number. For that reason, if the rows of a matrix $A$ are linearly independent, we have $$ \det(A) = (\text{nonzero number})(\text{nonzero number})\dots (\text{nonzero number}) \det(I) \ne 0. $$
The rows of a square matrix $A$ are linearly dependent if and only if $\det(A) = 0$.
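In code, this gives a simple (if numerically somewhat delicate) dependence test. The helper name, the tolerance and the cross-check against NumPy's `matrix_rank` below are my own choices, not something from this course.

```python
import numpy as np

def rows_dependent(A, tol=1e-9):
    """Rows of the square matrix A are linearly dependent iff det(A) is zero."""
    return abs(np.linalg.det(A)) < tol

independent = np.array([[1.0, 2.0], [3.0, 4.0]])   # det = -2
dependent   = np.array([[1.0, 2.0], [2.0, 4.0]])   # second row = 2 * (first row)

for M in (independent, dependent):
    # the determinant test agrees with a rank-based test for these examples
    print(rows_dependent(M), np.linalg.matrix_rank(M) < M.shape[0])
```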
Determinants and invertibility¶
In the past, we have seen that a matrix $A$ is invertible if and only if it is a square matrix and its columns are linearly independent. When working with transposes, we saw that the columns are linearly independent if and only if the rows are. Combining that with what we got above gives the following result.
A matrix $A$ is invertible if and only if it is a square matrix and $\det(A) \ne 0$.
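Here is roughly how this criterion looks in NumPy. The function name and the use of `np.isclose` are my own choices; in exact arithmetic the test would simply be $\det(A) \ne 0$.

```python
import numpy as np

def try_invert(A):
    """Return the inverse of A, or None when det(A) is (numerically) zero."""
    if np.isclose(np.linalg.det(A), 0.0):
        return None                      # not invertible
    return np.linalg.inv(A)

print(try_invert(np.array([[2.0, 1.0], [5.0, 3.0]])))   # [[ 3. -1.] [-5.  2.]]
print(try_invert(np.array([[1.0, 2.0, 3.0],
                           [4.0, 5.0, 6.0],
                           [7.0, 8.0, 9.0]])))           # None: rows are dependent
```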
Recall that the formula for the inverse of a $2 \times 2$ matrix is $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{-ad+bc} \begin{bmatrix} -d & b \\ c & -a \end{bmatrix}. $$ Let's compute the determinant like we did right after defining it. There are two $2 \times 2$ permutation matrices, $$ \begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}. $$ Their determinants are $1$ and $-1$ respectively, because the first matrix is the identity matrix, and it takes one swap to turn the second one into the identity matrix. These determinants must be multiplied by the numbers taken from the original matrix where the permutation matrix has $1$, so we get $$ \begin{align} \det\begin{bmatrix} a & b \\ c & d \end{bmatrix} &= ad \cdot \underbrace{\det\begin{bmatrix} 1&0 \\ 0&1 \end{bmatrix}}_1 + bc\cdot \underbrace{\det\begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix}}_{-1} \\ &= ad - bc. \end{align} $$ This is zero if and only if $-ad+bc=0$.
The $2 \times 2$ inverse formula divides by zero if and only if the matrix is not invertible.
This means that the $2 \times 2$ inverse formula works just like you would expect: it computes the inverse if it exists, and divides by zero if the inverse doesn't exist.
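To make that concrete, here is a small Python version of the $2 \times 2$ formula. The function name and the explicit error (instead of letting Python actually divide by zero) are my own choices; the formula itself is the one quoted above.

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the formula quoted above."""
    denom = -a*d + b*c
    if denom == 0:
        raise ZeroDivisionError("determinant is zero, so the matrix is not invertible")
    return np.array([[-d, b], [c, -a]]) / denom

print(inverse_2x2(1, 2, 3, 4))    # [[-2.   1. ] [ 1.5 -0.5]]
# inverse_2x2(1, 2, 2, 4) would raise, because -1*4 + 2*2 == 0
```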