Inverse matrices

Let $f$ be a linear function that has an inverse function $f^{-1}$ (not all functions have an inverse function). We now have $v = f(f^{-1}(v))$ for any vector $v$. Using this, we can show that the inverse function $f^{-1}$ is also linear:

If a function is linear and its inverse function exists, the inverse function is also linear.
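Here is a sketch of the addition part of that argument; the scaling rule $f^{-1}(cv) = c\,f^{-1}(v)$ follows from the same trick:

```latex
\begin{align*}
f^{-1}(v + w)
  &= f^{-1}\bigl( f(f^{-1}(v)) + f(f^{-1}(w)) \bigr)
     && \text{because } v = f(f^{-1}(v)) \text{ and } w = f(f^{-1}(w)) \\
  &= f^{-1}\bigl( f(f^{-1}(v) + f^{-1}(w)) \bigr)
     && \text{because } f \text{ is linear} \\
  &= f^{-1}(v) + f^{-1}(w)
     && \text{because } f^{-1} \text{ undoes } f.
\end{align*}
```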

Recall that any matrix $A$ corresponds with a linear function. If that function has an inverse function, we say that the matrix $A$ is invertible, and its inverse matrix $A^{-1}$ is the matrix of the inverse function of $A$'s linear function.

Examples:
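Here's a small numpy sketch of the idea; the matrix $A$ below is an arbitrary invertible choice, not one from the text:

```python
import numpy as np

# An arbitrary invertible 2x2 matrix (example choice).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = np.linalg.inv(A)

# Applying A and then its inverse returns any vector unchanged.
v = np.array([5.0, -7.0])
print(A_inv @ (A @ v))      # [ 5. -7.]

# A times its inverse is the identity matrix (up to floating-point rounding).
print(np.round(A @ A_inv))  # [[1. 0.]
                            #  [0. 1.]]
```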

When does an inverse matrix exist?

As explained here, the inverse function of $f$ exists if and only if both of these conditions are met:

  1. The function $f$ gives different outputs for different inputs.
  2. The function $f$ produces each output with some input.

Let $f$ be the linear function corresponding to a matrix $A$. Let's look at what it would mean to get the same output for two different inputs, $Av = Aw$ with $v \ne w$. As shown here, $Av$ is a linear combination of $A$'s columns, with numbers from $v$ as coefficients. So for example, if $v = (6, 0)$, $w = (3, 1)$ and

$$A = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix},$$

we would have

$$6 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + 0 \begin{bmatrix} 3 \\ 6 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + 1 \begin{bmatrix} 3 \\ 6 \end{bmatrix}.$$

Because $v \ne w$, the linear combinations don't have the same coefficients. This is not possible with linearly independent vectors, so if the columns are linearly independent, it is not possible to have $Av = Aw$ with $v \ne w$.
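Here's a quick numpy check of the example above, with the same $A$, $v$ and $w$:

```python
import numpy as np

# The matrix from the example; its second column is 3 times the first.
A = np.array([[1, 3],
              [2, 6]])
v = np.array([6, 0])
w = np.array([3, 1])

# Two different inputs produce the same output.
print(A @ v)  # [ 6 12]
print(A @ w)  # [ 6 12]
```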

If the columns of a matrix are linearly dependent, it means that one of them is a linear combination of the others, which then means that you can multiply the matrix with two different vectors and get the same resulting vector. For example, if a matrix $A$ has 3 columns $c_1, c_2, c_3$ and $c_1 = 4c_2 - 3c_3$, we get

$$A \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = A \begin{bmatrix} 0 \\ 4 \\ -3 \end{bmatrix}.$$

So if the columns are linearly independent, there are no two different vectors that produce the same vector when multiplied with $A$, and if the columns are linearly dependent, there are such vectors.
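The same thing in numpy, with columns chosen so that $c_1 = 4c_2 - 3c_3$ (the particular numbers in $c_2$ and $c_3$ are my own example choice):

```python
import numpy as np

# Example columns satisfying c1 = 4*c2 - 3*c3.
c2 = np.array([1, 0, 1])
c3 = np.array([0, 1, 1])
c1 = 4*c2 - 3*c3                 # [ 4 -3  1]
A = np.column_stack([c1, c2, c3])

# A(1,0,0) picks out c1, and A(0,4,-3) builds 4*c2 - 3*c3, which is also c1.
print(A @ np.array([1, 0, 0]))   # [ 4 -3  1]
print(A @ np.array([0, 4, -3]))  # [ 4 -3  1]
```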

A matrix produces different outputs for different inputs if and only if the columns of the matrix are linearly independent.

In other words, linear independence is needed for a matrix to be invertible, and it is enough to take care of condition 1.

Let's look at the second condition. Consider the set of all output vectors of the matrix; that is, the set of all vectors $Av$, where $v$ is any vector. Because $Av$ can be any linear combination of $A$'s columns, this set is in fact the span of $A$'s columns. The outputs of a matrix with height $n$ are $n$-dimensional vectors, so the second condition means that this span must be the set of all $n$-dimensional vectors.

A matrix with height n is invertible if and only if its columns are linearly independent and their span is the set of all n-dimensional vectors.

From here, we split this into 3 cases based on the shape of the matrix. We can assume that the columns are linearly independent, because we will need that anyway for the first condition.

  1. If the matrix is wider than it is tall, it has more than $n$ columns, and more than $n$ vectors with $n$ dimensions can't be linearly independent. So this kind of matrix is never invertible.
  2. If the matrix is taller than it is wide, it has fewer than $n$ columns, and the span of fewer than $n$ vectors is never the set of all $n$-dimensional vectors. So this kind of matrix is never invertible either.
  3. If the matrix is square, it has exactly $n$ linearly independent columns, and their span is the set of all $n$-dimensional vectors.

We can now summarize invertibility into a very simple condition:

A matrix is invertible if and only if it is a square matrix and its columns are linearly independent.
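This condition is easy to check on a computer. Here's a numpy sketch that uses the rank of the matrix; the rank is the number of linearly independent columns, so the columns are linearly independent exactly when the rank equals the number of columns:

```python
import numpy as np

def is_invertible(A: np.ndarray) -> bool:
    # Invertible = square matrix whose columns are linearly independent.
    rows, cols = A.shape
    return rows == cols and np.linalg.matrix_rank(A) == cols

print(is_invertible(np.array([[1, 3], [2, 6]])))  # False: second column is 3 times the first
print(is_invertible(np.array([[1, 2], [3, 4]])))  # True
```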