Finding span and checking linear dependence¶
On this page, we find a way to check whether given vectors are linearly dependent or independent, and a way to find their span.
Adding multiples doesn't affect linear dependence¶
Let $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$ be linearly independent vectors. If $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w} + 2\blue{\vec v}$ are linearly dependent, we get a contradiction:
- If $\green{\vec w} + 2\blue{\vec v}$ were a linear combination of $\red{\vec u}$ and $\blue{\vec v}$, we would have $$ \green{\vec w} + 2\blue{\vec v} = a\red{\vec u} + b\blue{\vec v} $$ for some numbers $a$ and $b$, and counting the vector $\green{\vec w}$ on both sides (see above) would give $1=0$.
- If $\blue{\vec v}$ were a linear combination of $\red{\vec u}$ and $\green{\vec w} + 2\blue{\vec v}$, we would have $$ \blue{\vec v} = a\red{\vec u} + b(\green{\vec w} + 2\blue{\vec v}). $$ We can now count $\green{\vec w}$ on both sides to get $0=b$, which leads to $\blue{\vec v} = a\red{\vec u}$. This is impossible, because then $\blue{\vec v}$ would be a linear combination of $\red{\vec u}$ (and $\green{\vec w}$), even though $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$ are linearly independent.
- If $\red{\vec u}$ were a linear combination of $\blue{\vec v}$ and $\green{\vec w} + 2\blue{\vec v}$, we would have $$ \red{\vec u} = a\blue{\vec v} + b(\green{\vec w} + 2\blue{\vec v}). $$ By counting $\red{\vec u}$ on both sides, we get $1=0$, so this isn't possible either.
So if $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$ are linearly independent, then $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w} + 2\blue{\vec v}$ must also be linearly independent: adding $2\blue{\vec v}$ to $\green{\vec w}$ preserves the linear independence.
The same also works with dependent instead of independent: if $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$ are linearly dependent, then $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w} + 2\blue{\vec v}$ are also linearly dependent. They cannot be independent, because if they were, you could add $-2\blue{\vec v}$ to $\green{\vec w} + 2\blue{\vec v}$ to turn it back into just $\green{\vec w}$, and that would make $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$ linearly independent.
These results work with any number instead of $2$ (in other words, with any multiple of $\blue{\vec v}$ instead of $2\blue{\vec v}$).
With two or more vectors, adding a multiple of one vector to another doesn't affect linear dependence or independence.
Here a multiple of a vector means the vector multiplied by some number. For example, $5\red{\vec u}$, $-2\red{\vec u}$ and $0\red{\vec u}$ are multiples of $\red{\vec u}$.
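If you'd like to try this result on a computer, here's a quick sanity check in Python with numpy. This is just a sketch, and the example vectors are something I made up; numpy's `matrix_rank` tells how many linearly independent vectors there are among the rows of a matrix.

```python
import numpy as np

# Three linearly independent example vectors.
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])
w = np.array([3.0, 1.0, 0.0])

# Rank 3 means all three rows are linearly independent.
print(np.linalg.matrix_rank(np.array([u, v, w])))        # 3
print(np.linalg.matrix_rank(np.array([u, v, w + 2*v])))  # still 3
```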
Checking linear dependence/independence¶
Using the above result, you can check whether given vectors are linearly dependent or independent by repeatedly adding multiples of one vector to another. In this context, it is common to write the vectors in a matrix so that each row of the matrix represents one vector. The goal is to get as many zeros as possible, so that you can easily see whether the vectors are dependent or independent.
For example, let's check if the vectors $(2, 4, 4)$, $(0, 2, 0)$ and $(4, 2, 8)$ are linearly dependent or independent.
$$\begin{align} \begin{bmatrix} \red{2} & \red{4} & \red{4} \\ \blue{0} & \blue{2} & \blue{0} \\ \magenta{4} & \magenta{2} & \magenta{8} \\ \end{bmatrix} &\to \begin{bmatrix} \red{2} & \red{4} & \red{4} \\ \blue{0} & \blue{2} & \blue{0} \\ \green{4} & \green{0} & \green{8} \\ \end{bmatrix} \quad \green{\text{new bottom}} = \magenta{\text{old bottom}} + \left( -1 \right) \cdot \blue{\text{middle}} \\ &\to \begin{bmatrix} \darkyellow{2} & \darkyellow{0} & \darkyellow{4} \\ \blue{0} & \blue{2} & \blue{0} \\ \green{4} & \green{0} & \green{8} \\ \end{bmatrix} \quad \darkyellow{\text{new top}} = \red{\text{old top}} + \left( -2 \right) \cdot \blue{\text{middle}} \\ &\to \begin{bmatrix} \darkyellow{2} & \darkyellow{0} & \darkyellow{4} \\ \blue{0} & \blue{2} & \blue{0} \\ \red{0} & \red{0} & \red{0} \\ \end{bmatrix} \quad \red{\text{new bottom}} = \green{\text{old bottom}} + \left( -2 \right) \cdot \darkyellow{\text{top}} \\ \end{align}$$The numbers to multiply by are chosen so that we get new zeros. For example, the $-1$ was chosen because that resulted in $\magenta{2} + (-1)\blue{2} = \green0$ in the bottom row.
Because one of the resulting vectors is $\red{(0,0,0)}$, it is a linear combination of the other vectors: $$ \red{(0,0,0)} = 0\darkyellow{(2,0,4)} + 0\blue{(0,2,0)} $$ This means that the resulting vectors $\darkyellow{(2,0,4)}$, $\blue{(0,2,0)}$ and $\red{(0,0,0)}$ are linearly dependent, so the original vectors $(2, 4, 4)$, $(0, 2, 0)$ and $(4, 2, 8)$ are also linearly dependent.
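Here's the same calculation done in Python, in case you want to check calculations like this programmatically. This is a minimal sketch, and the helper function `add_multiple` is just a name I made up:

```python
rows = [[2, 4, 4],
        [0, 2, 0],
        [4, 2, 8]]

def add_multiple(rows, target, source, number):
    # new target row = old target row + number * (source row)
    rows[target] = [t + number * s for t, s in zip(rows[target], rows[source])]

add_multiple(rows, 2, 1, -1)  # new bottom = old bottom + (-1) * middle
add_multiple(rows, 0, 1, -2)  # new top    = old top    + (-2) * middle
add_multiple(rows, 2, 0, -2)  # new bottom = old bottom + (-2) * top

print(rows)  # [[2, 0, 4], [0, 2, 0], [0, 0, 0]] -> a zero row, so dependent
```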
Finding the span¶
Consider the spans $$ \span(\red{\vec u}, \blue{\vec v}) \quad \text{and} \quad \span(\red{\vec u}, \green{\vec v + 2\vec u}). $$ Because $\green{\vec v + 2\vec u}$ is a linear combination of $\red{\vec u}$ and $\blue{\vec v}$, adding it to their span doesn't change anything: $$ \span(\red{\vec u}, \blue{\vec v}) = \span(\red{\vec u}, \blue{\vec v}, \green{\vec v + 2\vec u}). $$ On the other hand, $$ \blue{\vec v} = \green{\vec v + 2\vec u} - 2\red{\vec u} $$ is a linear combination of $\green{\vec v + 2\vec u}$ and $\red{\vec u}$, so $$ \span(\red{\vec u}, \green{\vec v + 2\vec u}) = \span(\red{\vec u}, \blue{\vec v}, \green{\vec v + 2\vec u}). $$ By putting it all together, we get $$ \span(\red{\vec u}, \blue{\vec v}) = \span(\red{\vec u}, \green{\vec v + 2\vec u}). $$ This also works with any number instead of 2, and also with more than two vectors.
The span doesn't change when adding a multiple of one vector to another.
This means that you can find a span in the exact same way as you would check for linear dependence or independence (see above): add multiples of the vectors to each other until it is clear what the span will be.
For example, let's find $\span((2,4,5), (0,-5,-9), (3,5,5))$:
$$\begin{align} \begin{bmatrix} \red{2} & \red{4} & \red{5} \\ \blue{0} & \blue{-5} & \blue{-9} \\ \magenta{3} & \magenta{5} & \magenta{5} \\ \end{bmatrix} &\to \begin{bmatrix} \red{2} & \red{4} & \red{5} \\ \blue{0} & \blue{-5} & \blue{-9} \\ \green{0} & \green{-1} & \green{-\frac{5}{2}} \\ \end{bmatrix} \quad \green{\text{new bottom}} = \magenta{\text{old bottom}} + \left( -\frac{3}{2} \right) \cdot \red{\text{top}} \\ &\to \begin{bmatrix} \darkyellow{2} & \darkyellow{0} & \darkyellow{-5} \\ \blue{0} & \blue{-5} & \blue{-9} \\ \green{0} & \green{-1} & \green{-\frac{5}{2}} \\ \end{bmatrix} \quad \darkyellow{\text{new top}} = \red{\text{old top}} + 4 \cdot \green{\text{bottom}} \\ &\to \begin{bmatrix} \darkyellow{2} & \darkyellow{0} & \darkyellow{-5} \\ \red{0} & \red{0} & \red{\frac{7}{2}} \\ \green{0} & \green{-1} & \green{-\frac{5}{2}} \\ \end{bmatrix} \quad \red{\text{new middle}} = \blue{\text{old middle}} + \left( -5 \right) \cdot \green{\text{bottom}} \\ &\to \begin{bmatrix} \blue{2} & \blue{0} & \blue{0} \\ \red{0} & \red{0} & \red{\frac{7}{2}} \\ \green{0} & \green{-1} & \green{-\frac{5}{2}} \\ \end{bmatrix} \quad \blue{\text{new top}} = \darkyellow{\text{old top}} + \frac{10}{7} \cdot \red{\text{middle}} \\ &\to \begin{bmatrix} \blue{2} & \blue{0} & \blue{0} \\ \red{0} & \red{0} & \red{\frac{7}{2}} \\ \magenta{0} & \magenta{-1} & \magenta{0} \\ \end{bmatrix} \quad \magenta{\text{new bottom}} = \green{\text{old bottom}} + \frac{5}{7} \cdot \red{\text{middle}} \\ \end{align}$$This calculation shows two things:
- The vectors are linearly independent, because the resulting vectors $\blue{(2,0,0)}$, $\red{\left(0,0,\frac{7}{2}\right)}$ and $\magenta{(0,-1,0)}$ are linearly independent. None of them is a linear combination of the others; for example, any linear combination of $\blue{(2,0,0)}$ and $\magenta{(0,-1,0)}$ is in the $xy$ plane, but $\red{\left(0,0,\frac{7}{2}\right)}$ isn't.
- The span of the vectors is $$ \span((2,4,5), (0,-5,-9), (3,5,5)) = \span\left(\blue{(2,0,0)}, \red{\left(0,0,\frac{7}{2}\right)}, \magenta{(0,-1,0)}\right), $$ which is the set of all 3D vectors, because any 3D vector $(x,y,z)$ can be written as $$ (x,y,z) = \frac{x}{2}\blue{(2,0,0)} - y\magenta{(0,-1,0)} + \frac{2z}{7}\red{\left(0,0,\frac{7}{2}\right)}. $$
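If you have sympy installed, you can verify the formula in the last bullet symbolically. This is just a sketch; sympy isn't otherwise needed on this page.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# x/2 * (2,0,0) - y * (0,-1,0) + 2z/7 * (0,0,7/2)
combo = (x/2 * sp.Matrix([2, 0, 0])
         - y * sp.Matrix([0, -1, 0])
         + 2*z/7 * sp.Matrix([0, 0, sp.Rational(7, 2)]))

print(combo)  # Matrix([[x], [y], [z]]), i.e. any vector (x,y,z)
```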
Multiplying a row¶
In the previous example, we ended up calculating with fractions, which is error-prone when working by hand. To avoid fractions, you can multiply a row by 2 whenever a fraction like $\frac{7}{2}$ appears in it. The same works with any other nonzero number instead of 2.
This doesn't affect the span: because the span consists of all linear combinations, the vectors get multiplied by all numbers anyway, and the original length of the vectors doesn't matter. The same goes for linear dependence and independence. Let me know if this isn't clear to you and you would like to see more details.
Here's the same calculation again, but with this trick in use:
$$\begin{align} \begin{bmatrix} \red{2} & \red{4} & \red{5} \\ \blue{0} & \blue{-5} & \blue{-9} \\ \magenta{3} & \magenta{5} & \magenta{5} \\ \end{bmatrix} &\to \begin{bmatrix} \red{2} & \red{4} & \red{5} \\ \blue{0} & \blue{-5} & \blue{-9} \\ \green{0} & \green{-1} & \green{-\frac{5}{2}} \\ \end{bmatrix} \quad \green{\text{new bottom}} = \magenta{\text{old bottom}} + \left( -\frac{3}{2} \right) \cdot \red{\text{top}} \\ &\to \begin{bmatrix} \red{2} & \red{4} & \red{5} \\ \blue{0} & \blue{-5} & \blue{-9} \\ \darkyellow{0} & \darkyellow{-2} & \darkyellow{-5} \\ \end{bmatrix} \quad \darkyellow{\text{new bottom}} = 2 \cdot \green{\text{old bottom}} \\ &\to \begin{bmatrix} \magenta{2} & \magenta{0} & \magenta{-5} \\ \blue{0} & \blue{-5} & \blue{-9} \\ \darkyellow{0} & \darkyellow{-2} & \darkyellow{-5} \\ \end{bmatrix} \quad \magenta{\text{new top}} = \red{\text{old top}} + 2 \cdot \darkyellow{\text{bottom}} \\ &\to \begin{bmatrix} \magenta{2} & \magenta{0} & \magenta{-5} \\ \green{0} & \green{0} & \green{\frac{7}{2}} \\ \darkyellow{0} & \darkyellow{-2} & \darkyellow{-5} \\ \end{bmatrix} \quad \green{\text{new middle}} = \blue{\text{old middle}} + \left( -\frac{5}{2} \right) \cdot \darkyellow{\text{bottom}} \\ &\to \begin{bmatrix} \magenta{2} & \magenta{0} & \magenta{-5} \\ \red{0} & \red{0} & \red{7} \\ \darkyellow{0} & \darkyellow{-2} & \darkyellow{-5} \\ \end{bmatrix} \quad \red{\text{new middle}} = 2 \cdot \green{\text{old middle}} \\ &\to \begin{bmatrix} \magenta{2} & \magenta{0} & \magenta{-5} \\ \blue{0} & \blue{0} & \blue{1} \\ \darkyellow{0} & \darkyellow{-2} & \darkyellow{-5} \\ \end{bmatrix} \quad \blue{\text{new middle}} = \frac{1}{7} \cdot \red{\text{old middle}} \\ &\to \begin{bmatrix} \green{2} & \green{0} & \green{0} \\ \blue{0} & \blue{0} & \blue{1} \\ \darkyellow{0} & \darkyellow{-2} & \darkyellow{-5} \\ \end{bmatrix} \quad \green{\text{new top}} = \magenta{\text{old top}} + 5 \cdot \blue{\text{middle}} \\ &\to \begin{bmatrix} \green{2} & \green{0} & \green{0} \\ \blue{0} & \blue{0} & \blue{1} \\ \red{0} & \red{-2} & \red{0} \\ \end{bmatrix} \quad \red{\text{new bottom}} = \darkyellow{\text{old bottom}} + 5 \cdot \blue{\text{middle}} \\ \end{align}$$Now there are more steps, but fewer fractions.
Note that multiplying a row by a nonzero number is optional, and you don't have to do it if you don't want to. I introduce it here only to avoid dealing with annoying fractions.
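Here's one way to write this trick in code, as a sketch. It combines the two operations into a single step (first multiply the target row, then add a multiple of the other row) so that all the arithmetic stays in whole numbers; the function name is my own invention.

```python
def eliminate_integer(rows, target, source, col):
    # new target = p * (old target) - t * (source row), where p is the
    # source row's entry in this column (must be nonzero) and t is the
    # target row's entry. This scales the target row and makes its
    # entry in this column zero, without introducing fractions.
    p, t = rows[source][col], rows[target][col]
    rows[target] = [p * a - t * b for a, b in zip(rows[target], rows[source])]

rows = [[2, 4, 5], [0, -5, -9], [3, 5, 5]]
eliminate_integer(rows, 2, 0, 0)  # clear the first number of the bottom row
print(rows[2])  # [0, -2, -5], the same scaled row as in the calculation above
```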
Maximum number of independent vectors¶
Let's check whether the vectors $(2,1,1)$, $(1,5,1)$, $(2,2,3)$ and $(4,2,1)$ are linearly independent or dependent. First, we use the first vector to make all other vectors start with zero.
$$\begin{align} \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \blue{1} & \blue{5} & \blue{1} \\ \magenta{2} & \magenta{2} & \magenta{3} \\ \green{4} & \green{2} & \green{1} \\ \end{bmatrix} &\to \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \darkyellow{0} & \darkyellow{\frac{9}{2}} & \darkyellow{\frac{1}{2}} \\ \magenta{2} & \magenta{2} & \magenta{3} \\ \green{4} & \green{2} & \green{1} \\ \end{bmatrix} \quad \darkyellow{\text{new row 2}} = \blue{\text{old row 2}} + \left( -\frac{1}{2} \right) \cdot \red{\text{row 1}} \\ &\to \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \darkyellow{0} & \darkyellow{\frac{9}{2}} & \darkyellow{\frac{1}{2}} \\ \blue{0} & \blue{1} & \blue{2} \\ \green{4} & \green{2} & \green{1} \\ \end{bmatrix} \quad \blue{\text{new row 3}} = \magenta{\text{old row 3}} + \left( -1 \right) \cdot \red{\text{row 1}} \\ &\to \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \darkyellow{0} & \darkyellow{\frac{9}{2}} & \darkyellow{\frac{1}{2}} \\ \blue{0} & \blue{1} & \blue{2} \\ \magenta{0} & \magenta{0} & \magenta{-1} \\ \end{bmatrix} \quad \magenta{\text{new row 4}} = \green{\text{old row 4}} + \left( -2 \right) \cdot \red{\text{row 1}} \\ \end{align}$$We can now use the second vector to make all vectors below it start with two zeros. The last row already starts with two zeros, so we don't need to do anything to it.
$$\begin{align} \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \darkyellow{0} & \darkyellow{\frac{9}{2}} & \darkyellow{\frac{1}{2}} \\ \blue{0} & \blue{1} & \blue{2} \\ \magenta{0} & \magenta{0} & \magenta{-1} \\ \end{bmatrix} &\to \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \darkyellow{0} & \darkyellow{\frac{9}{2}} & \darkyellow{\frac{1}{2}} \\ \green{0} & \green{0} & \green{\frac{17}{9}} \\ \magenta{0} & \magenta{0} & \magenta{-1} \\ \end{bmatrix} \quad \green{\text{new row 3}} = \blue{\text{old row 3}} + \left( -\frac{2}{9} \right) \cdot \darkyellow{\text{row 2}} \\ \end{align}$$Finally, we can use the third vector to make the last vector all zeros:
$$\begin{align} \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \darkyellow{0} & \darkyellow{\frac{9}{2}} & \darkyellow{\frac{1}{2}} \\ \green{0} & \green{0} & \green{\frac{17}{9}} \\ \magenta{0} & \magenta{0} & \magenta{-1} \\ \end{bmatrix} &\to \begin{bmatrix} \red{2} & \red{1} & \red{1} \\ \darkyellow{0} & \darkyellow{\frac{9}{2}} & \darkyellow{\frac{1}{2}} \\ \green{0} & \green{0} & \green{\frac{17}{9}} \\ \blue{0} & \blue{0} & \blue{0} \\ \end{bmatrix} \quad \blue{\text{new row 4}} = \magenta{\text{old row 4}} + \frac{9}{17} \cdot \green{\text{row 3}} \\ \end{align}$$Because we ended up with a zero row, the vectors we started with are linearly dependent. However, with some handling for a few special cases, the same process works with any four 3D vectors, and you can always do this to end up with a matrix that has zeros in these locations: $$ \begin{bmatrix} &&\\ 0&&\\ 0&0&\\ 0&0&0 \end{bmatrix} $$ To ensure that the first column has the needed zeros, we can do this:
- If the first row starts with $x$ and another row starts with $y$, we can set $$ \text{new other row} = \text{old other row} + \left(-\frac y x \right) \cdot \text{first row}. $$ This way, the first number in the other row becomes $$ y + \left( -\frac{y}{x} \right)x = 0. $$ However, this doesn't work if the first row starts with zero, because then we would be dividing by $x = 0$.
- If the first row starts with zero but some other row doesn't, we can change the order of the rows so that the row not starting with zero is first, and then proceed as above. Swapping the vectors doesn't affect linear dependence or independence.
- If all rows already start with zero, we already have the zeros we needed and we don't have to do anything.
Once this is done, we can ignore the first row and first column of the matrix, and repeat the same process with the remaining two columns and three rows. Because these rows already have zeros in the leftmost column, adding multiples of them to each other won't change the leftmost column. After repeating this for all columns, ignoring one more row every time we move on to the next column, we have all the zeros we need.
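Here's the whole process as a Python sketch, including the row swap for the special case. The function name `eliminate` and the use of exact fractions are my own choices:

```python
from fractions import Fraction

def eliminate(rows):
    rows = [[Fraction(x) for x in row] for row in rows]
    start = 0                       # rows above this index are finished
    for col in range(len(rows[0])):
        # Special case: find a row whose entry in this column is nonzero
        # and swap it into place (swapping doesn't affect dependence).
        for r in range(start, len(rows)):
            if rows[r][col] != 0:
                rows[start], rows[r] = rows[r], rows[start]
                break
        else:
            continue                # the whole column is zeros already
        # Add multiples of the pivot row to zero the column below it.
        for r in range(start + 1, len(rows)):
            factor = -rows[r][col] / rows[start][col]
            rows[r] = [a + factor * b for a, b in zip(rows[r], rows[start])]
        start += 1
    return rows

result = eliminate([[2, 1, 1], [1, 5, 1], [2, 2, 3], [4, 2, 1]])
print(sum(1 for row in result if any(row)))  # 3 nonzero rows: dependent
```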
This process shows that there can't be 4 linearly independent 3D vectors: the zero pattern forces the last row to become all zeros, so the 4 vectors are always linearly dependent. We also can't have more than 4, because then we could pick 4 of them, and those 4 would still be linearly independent, which is impossible. The same process works with any other dimension $n$ instead of 3.
There cannot be more than $n$ linearly independent $n$-dimensional vectors.
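As a quick numeric illustration of this (not a proof), the rank of a matrix made of four 3D vectors never exceeds 3, no matter which vectors you pick:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.integers(-9, 10, size=(4, 3)).astype(float)  # four random 3D vectors
print(np.linalg.matrix_rank(vectors))  # at most 3, never 4
```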
Spanning with just linear independence¶
Let $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$ be linearly independent 3D vectors. If their span doesn't contain all 3D vectors, there's a 3D vector $\magenta{\vec m}$ that is not in $\span(\red{\vec u}, \blue{\vec v}, \green{\vec w})$. Then $\red{\vec u}$, $\blue{\vec v}$, $\green{\vec w}$ and $\magenta{\vec m}$ are linearly independent:
- $\magenta{\vec m}$ is not a linear combination of $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$, because it isn't in their span.
- If $\red{\vec u}$ were a linear combination of the other vectors, as in $$ \red{\vec u} = a\blue{\vec v} + b\green{\vec w} + c\magenta{\vec m}, $$ we have two cases, neither of which can actually happen:
- If $c=0$, then $\red{\vec u}$ is a linear combination of $\blue{\vec v}$ and $\green{\vec w}$. This isn't possible, because $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$ are linearly independent.
- If $c \ne 0$, then we can divide by $c$ and solve $\magenta{\vec m}$ as a linear combination of other vectors: $$ \magenta{\vec m} = \frac{1}{c}\red{\vec u} - \frac{a}{c}\blue{\vec v} - \frac{b}{c}\green{\vec w}. $$ This isn't possible either, because $\magenta{\vec m}$ is not in the span of $\red{\vec u}$, $\blue{\vec v}$ and $\green{\vec w}$.
- If $\blue{\vec v}$ or $\green{\vec w}$ were a linear combination of the other vectors, we could repeat the same steps we did with $\red{\vec u}$.
By our previous result, it is not possible to have 4 linearly independent 3D vectors, so the vector $\magenta{\vec m}$ cannot exist. This means that the span of any 3 linearly independent 3D vectors must contain all 3D vectors. The same works with any other dimension instead of 3.
The span of any $n$ linearly independent $n$-dimensional vectors is the set of all $n$-dimensional vectors.
This is basically a generalization of $\span((1,0,0),(0,1,0),(0,0,1)) = \mathbb{R}^3$ that works with any linearly independent vectors.
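To see this result in action, here's a sketch that writes an arbitrary 3D vector as a linear combination of the three linearly independent vectors $(2,1,1)$, $(1,5,1)$ and $(2,2,3)$ from the earlier example (the target vector is something I made up):

```python
import numpy as np

u, v, w = [2, 1, 1], [1, 5, 1], [2, 2, 3]   # linearly independent 3D vectors
A = np.array([u, v, w], dtype=float).T       # the vectors as matrix columns
m = np.array([7.0, -1.0, 4.0])               # an arbitrary 3D vector

# Solving A @ c = m finds numbers with c[0]*u + c[1]*v + c[2]*w = m.
# A solution exists because u, v and w are linearly independent.
c = np.linalg.solve(A, m)
print(np.allclose(c[0]*np.array(u) + c[1]*np.array(v) + c[2]*np.array(w), m))  # True
```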