Linear dependence/independence

Oaky

New member
Joined
Feb 11, 2012
Messages
19
Essentially I need to determine whether a set of 4 vectors (where each vector has 4 'terms') is linearly dependent or independent.

So I was basically solving \(\displaystyle a\textbf{v1} + b\textbf{v2} + c\textbf{v3} + d\textbf{v4} = 0\) where a, b, c, d are not all 0.


I set up these 4 vectors so that each vector corresponded to a column of a 4x4 matrix. I then row reduced the matrix (I think the official name is Gaussian elimination).

I was left with this...

\(\displaystyle \begin{pmatrix} 1 & 2 & 5 & 3 \\ 0 & 1 & 2 & -2 \\ 0 & 0 & 0 & 7 \\ 0 & 0 & 0 & 0 \end{pmatrix}\)
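(As a quick sanity check, not part of the original working: the number of nonzero rows here is the rank of the matrix, and a rank below 4 is exactly what dependence of the four columns looks like. A small sketch using numpy:)

```python
import numpy as np

# The row-reduced matrix from the post; its columns came from the four vectors.
R = np.array([
    [1, 2, 5, 3],
    [0, 1, 2, -2],
    [0, 0, 0, 7],
    [0, 0, 0, 0],
])

# Three nonzero rows in echelon form means rank 3 < 4 vectors: dependent.
print(np.linalg.matrix_rank(R))  # 3
```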


Apparently the bottom row of zeroes shows that the vectors are linearly dependent, but I don't understand how.

Usually I would look at the third row, so 7d = 0 and therefore d = 0, then back substitute this into the second row and continue. But if you can reach 0 with a linear combination of only the first 3 vectors, doesn't that mean that one of those 3 is dependent on the others?!?


So I'm very confused. 1) Why does a row of zeroes signify dependence and 2) Am I doing something wrong by back substituting to get d = 0?

Help would very much be appreciated :)

 
Essentially I need to determine whether a set of 4 vectors (where each vector has 4 'terms') is linearly dependent or independent.

So I was basically solving \(\displaystyle a\textbf{v1} + b\textbf{v2} + c\textbf{v3} + d\textbf{v4} = 0\) where a, b, c, d are not all 0.


I set up these 4 vectors so that each vector corresponded to a column of a 4x4 matrix. I then row reduced the matrix (I think the official name is Gaussian elimination).

I was left with this...

\(\displaystyle \begin{pmatrix} 1 & 2 & 5 & 3 \\ 0 & 1 & 2 & -2 \\ 0 & 0 & 0 & 7 \\ 0 & 0 & 0 & 0 \end{pmatrix}\)


Apparently the bottom row of zeroes shows that the vectors are linearly dependent, but I don't understand how.

Usually I would look at the third row, so 7d = 0 and therefore d = 0, then back substitute this into the second row and continue. But if you can reach 0 with a linear combination of only the first 3 vectors, doesn't that mean that one of those 3 is dependent on the others?!?
Yes, it does, and there is nothing wrong with that. Because d = 0, you are left with the two equations a + 2b + 5c = 0 and b + 2c = 0. The second equation gives b = -2c, and putting that into the first gives a + 2(-2c) + 5c = a + c = 0, so a = -c. For ANY number c, taking a = -c, b = -2c, d = 0 makes the linear combination of the vectors equal to 0, showing that the vectors are dependent. For example, taking c = 1 gives a = -1 and b = -2, so that -v1 - 2v2 + v3 + 0v4 = 0: a linear combination equal to 0 whose coefficients are not all 0.
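(Those coefficients can be checked directly against the reduced matrix from the post; a quick numpy sketch, with the coefficient vector being the (a, b, c, d) just found:)

```python
import numpy as np

# The row-reduced matrix from the post.
R = np.array([
    [1, 2, 5, 3],
    [0, 1, 2, -2],
    [0, 0, 0, 7],
    [0, 0, 0, 0],
])

# (a, b, c, d) = (-1, -2, 1, 0), i.e. the c = 1 solution above.
coeffs = np.array([-1, -2, 1, 0])

# Every row of the reduced system is satisfied.
print(R @ coeffs)  # [0 0 0 0]
```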


So I'm very confused. 1) Why does a row of zeroes signify dependence?
You started out with, say, n vectors and n parameters, so in effect n equations in n unknown values (each row, set equal to 0, being one equation). If, after row reduction, one row is all 0s, you have only n - 1 equations in n unknown values. You can solve those n - 1 equations for n - 1 of the values in terms of the remaining one. That means different choices of that remaining value give different solutions, so there are solutions in which the coefficients are not all 0 and the vectors are dependent.

and 2) Am I doing something wrong by back substituting to get d = 0?
No, nothing at all wrong, except that you didn't continue and see that a and b can be written in terms of c.

Help would very much be appreciated :)

 