Matrix-Eigenvector Operations (Linear Algebra)

madmanmadland
Hey all,

I have finished every problem on this week's problem set aside from this one single problem that is driving me absolutely insane.

Here it is:

V1 = [-3, 3] and V2 = [-5, -3]
(both 2x1 column vectors)

V1 and V2 are eigenvectors of Matrix A corresponding to the eigenvalues 5 and 4 respectively. (V1 -> 5, V2 -> 4)

Then,
1.) A(V1 + V2) = ? (a 2x1 vector)
2.) A(-2V1) = ? (also a 2x1 vector)

Does anyone have any clue as to how to solve for those two 2x1 vectors? My original thought was to try to reconstruct matrix A via A = PDP^-1, with the diagonal eigenvalue matrix D and a matrix P of the eigenvectors, but that didn't work. I really don't know where to turn. Any help would be extraordinarily appreciated.
 
These are very very basic applications of the fact that matrix multiplication is linear. That means that
A(V1 + V2) = A(V1) + A(V2) and A(-2V1) = -2A(V1).

Now, what are A(V1) and A(V2)?
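To see the linearity concretely, here is a minimal sketch using an arbitrary 2x2 example matrix (not the A from the problem):

```python
# Sketch: matrix multiplication is linear, shown with a throwaway 2x2 example.
# The matrix here is arbitrary -- it is NOT the A from the problem.

def mat_vec(M, v):
    """Multiply a 2x2 matrix M by a 2x1 vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[2, 1],
     [0, 3]]
u = [1, 4]
w = [-2, 5]

lhs = mat_vec(A, [u[0] + w[0], u[1] + w[1]])                  # A(u + w)
rhs = [a + b for a, b in zip(mat_vec(A, u), mat_vec(A, w))]   # Au + Aw
print(lhs == rhs)  # True: A(u + w) = Au + Aw
```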
 

I'm not sure how to compute A(anything) because I don't know how to find what A is. Can you reconstruct matrix A using what is given? Or are there some rules and relationships between the matrix A and its eigenvectors and values that make it so that I do not even need to know what A is in order to solve the problem?
 
Well, whoever gave you this problem clearly expects you to know what an "eigenvector" and "eigenvalue" are! Do you? If not I would have thought that would be the first thing you would ask about.
 

We were only taught how to find them, my professor didn't talk about what the eigenvalues and vectors actually are or what they represent.
 
I am puzzled how you can learn how to find something without learning what it is you are finding!

\(\displaystyle \lambda\) is an eigenvalue for linear operator A if and only if there exist a non-zero vector v such that \(\displaystyle Av= \lambda v\). In that case, v is called an "eigenvector" corresponding to eigenvalue \(\displaystyle \lambda\). I would be amazed to hear that your teacher, or textbook, had never mentioned that!
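That definition is easy to check numerically. Here is a small sketch with an example matrix of my own (not the A from the problem) whose eigenvector is known:

```python
# Checking the definition Av = lambda*v with an illustrative 2x2 matrix.
# A = [[4, 1], [2, 3]] has eigenvalue 5 with eigenvector [1, 1].

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[4, 1],
     [2, 3]]
v = [1, 1]          # candidate eigenvector
lam = 5             # candidate eigenvalue

print(mat_vec(A, v))          # [5, 5]
print([lam * x for x in v])   # [5, 5] -- Av equals lambda*v, so v is an eigenvector
```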

If \(\displaystyle v_1\) is an eigenvector of linear operator A corresponding to eigenvalue \(\displaystyle \lambda_1\), then \(\displaystyle Av_1= \lambda_1 v_1\). If \(\displaystyle v_2\) is an eigenvector of linear operator A corresponding to eigenvalue \(\displaystyle \lambda_2\), then \(\displaystyle Av_2= \lambda_2 v_2\).

Therefore \(\displaystyle A(v_1+ v_2)= Av_1+ Av_2= \lambda_1v_1+ \lambda_2v_2\) and \(\displaystyle A(-2v_1)= -2A(v_1)= -2\lambda_1v_1\).
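As a sanity check, a quick sketch confirms both answers numerically, and also shows that the A = PDP^-1 reconstruction idea from the original post does in fact give the same results:

```python
from fractions import Fraction as F  # exact arithmetic for the matrix inverse

# Given: eigenvectors and eigenvalues from the problem statement.
v1, lam1 = [-3, 3], 5
v2, lam2 = [-5, -3], 4

# Straight from linearity: A(v1 + v2) = lam1*v1 + lam2*v2, and A(-2*v1) = -2*lam1*v1.
ans1 = [lam1 * a + lam2 * b for a, b in zip(v1, v2)]
ans2 = [-2 * lam1 * a for a in v1]
print(ans1)  # [-35, 3]
print(ans2)  # [30, -30]

# Cross-check by reconstructing A = P D P^(-1) by hand (2x2 case).
P = [[v1[0], v2[0]],
     [v1[1], v2[1]]]                      # eigenvectors as columns
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[F(P[1][1], det), F(-P[0][1], det)],
        [F(-P[1][0], det), F(P[0][0], det)]]
PD = [[P[0][0] * lam1, P[0][1] * lam2],   # P times diag(lam1, lam2)
      [P[1][0] * lam1, P[1][1] * lam2]]
A = [[sum(PD[i][k] * Pinv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

print(mat_vec(A, [a + b for a, b in zip(v1, v2)]) == ans1)  # True
print(mat_vec(A, [-2 * a for a in v1]) == ans2)             # True
```

So the reconstruction approach was sound as well; the linearity route simply avoids computing A at all.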
 


Eigenvalues have many important meanings in physics. As one such case, consider the two matrices



\(\displaystyle \begin{pmatrix} 1 & 0 & 0 & 0 \\0 & -1 & 0 & 0\\0 & 0 & -1 & 0 \\0 & 0 & 0 & 1 \end{pmatrix}\)


\(\displaystyle \begin{pmatrix} -1 & 0 & 0 & 0 \\0 & 1 & 0 & 0\\0 & 0 & 1 & 0 \\0 & 0 & 0 & -1 \end{pmatrix}\)


Their product is a diagonally dominant matrix; the interesting part is that if all of its diagonal elements are negative, then the real parts of its eigenvalues are negative, and likewise, assuming symmetry, if the diagonal entries are positive then the real parts of the eigenvalues are positive as well. These results can be shown from the Gershgorin circle theorem. This further indicates there are two possible signs \(\displaystyle \pm\) the eigenvalues can take, which can be read as a property of chirality, a ''handedness'' of particle systems.


Just a small gem to keep in mind. In our case, the product of the matrices given is in fact a negative diagonally dominant matrix. In a way, because it produces negative entries, it is like an anti-Hermitian system.
 
In a nutshell, an eigenvalue is just another word for an energy state.

Let me give you an example: if you shoot a photon at a mirror, statistically the state vector \(\displaystyle \Psi\) holds all the possible angles the photon may bounce off it. All these possible eigenvalues/paths reduce to just one eigenvalue (the path and the energy) because its wave function \(\displaystyle \psi\) collapsed upon meeting the mirror. On rare occasions, all the possible eigenvalues can contribute to the mass of the mirror, especially if it is warm. If you can cool a box generously, you can capture the photon in a box for a while.
 
In a nutshell, an eigenvalue is just another word for an energy state.
That is true in a particular physics application, but I saw no indication that this was a physics question.

 