The basic definition of "linear dependence" is that a set of vectors \(\displaystyle \{v_1, v_2, v_3, ..., v_n\}\) is linearly dependent if and only if there exists a set of numbers, \(\displaystyle \{a_1, a_2, a_3, ..., a_n\}\), not all 0, such that \(\displaystyle a_1v_1+ a_2v_2+ \cdots+ a_nv_n= 0\). Since not all coefficients are 0, there exists at least one non-zero coefficient, and we can "solve" for the corresponding vector: if \(\displaystyle a_j\ne 0\), then we can write \(\displaystyle a_jv_j= -(a_1v_1+ a_2v_2+ \cdots+ a_nv_n)\), where the sum on the right is over all the vectors except \(\displaystyle v_j\), and then, of course, divide both sides by \(\displaystyle a_j\): \(\displaystyle v_j= -\frac{1}{a_j}(a_1v_1+ a_2v_2+ \cdots+ a_nv_n)\), again with \(\displaystyle v_j\) omitted from the sum. This you say you understand.
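To make that concrete, here is a small worked instance (my own example, not from the original post): in \(\displaystyle \mathbb{R}^2\), take \(\displaystyle v_1= (1, 0)\), \(\displaystyle v_2= (0, 1)\), \(\displaystyle v_3= (1, 2)\), which satisfy the dependence relation \(\displaystyle 1\cdot v_1+ 2\cdot v_2+ (-1)\cdot v_3= 0\). Since \(\displaystyle a_3= -1\ne 0\), we can solve for \(\displaystyle v_3\):

```latex
% Dependence relation: 1*v1 + 2*v2 + (-1)*v3 = 0, with a3 = -1 nonzero.
% Move the other terms to the right and divide by a3:
\[
v_3 = -\frac{1}{a_3}\bigl(a_1 v_1 + a_2 v_2\bigr)
    = -\frac{1}{-1}\bigl(1\cdot v_1 + 2\cdot v_2\bigr)
    = v_1 + 2v_2 = (1, 2).
\]
```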
Now, we look for a vector, \(\displaystyle v_i\), that is a linear combination of the preceding vectors in this list. If \(\displaystyle v_2\) is a multiple of \(\displaystyle v_1\), we are done. If not, can \(\displaystyle v_3\) be written as a linear combination of \(\displaystyle v_1\) and \(\displaystyle v_2\)? If yes, we are done. If not, can \(\displaystyle v_4\) be written as a linear combination of \(\displaystyle v_1\), \(\displaystyle v_2\), and \(\displaystyle v_3\)? If yes, we are done. If not, we continue in this same way. Because there are only a finite number of vectors in the set (the fact that we are dealing with a finite dimensional vector space is important here), and we know that one of them can be written as a linear combination of the others, this search must eventually terminate. Indeed, if we take \(\displaystyle j\) to be the largest index with \(\displaystyle a_j\ne 0\) in the dependence relation, then solving for \(\displaystyle v_j\) writes it as a linear combination of vectors that come before it in the list.
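The search described above can be sketched numerically. The function below (a sketch of my own, not code from the original; the name `first_dependent` and the tolerance are assumptions) scans the list and returns the index of the first vector lying in the span of its predecessors, using a least-squares fit to test membership in the span:

```python
import numpy as np

def first_dependent(vectors, tol=1e-10):
    """Return the 0-based index of the first vector that is a linear
    combination of the vectors before it, or None if the list is
    linearly independent.  `vectors` is a list of 1-D numpy arrays."""
    for i, v in enumerate(vectors):
        if i == 0:
            # The first vector is "dependent on the empty list" only
            # if it is the zero vector.
            if np.allclose(v, 0, atol=tol):
                return i
            continue
        A = np.column_stack(vectors[:i])          # columns v_1, ..., v_{i-1}
        coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
        if np.allclose(A @ coeffs, v, atol=tol):  # residual ~ 0: v is in the span
            return i
    return None

# v3 = v1 + 2*v2, so the scan stops at index 2 (the third vector):
vs = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([1.0, 2.0, 0.0])]
print(first_dependent(vs))  # → 2
```

This mirrors the argument exactly: each step asks whether the next vector can be written in terms of the earlier ones, and the loop must terminate because the list is finite.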