Dear all,

Though it might sound strange, the eigenvectors of my 2x2 matrix come out rather
differently when they are calculated in a loop over many other similar matrices:

for instance:

matrix:
[[ 0.60000000+0.j         -1.97537668-0.09386068j]
 [-1.97537668+0.09386068j -0.60000000+0.j        ]]
eigenvals:
[-2.06662112  2.06662112]
eigenvects:
[[ 0.59568071+0.j          0.80231613-0.03812232j]
 [-0.80322132+0.j          0.59500941-0.02827207j]]

In this case, the elements in the first column of the eigenvector matrix are real.
In the Fortran code, such a transformation can easily be done by dividing all
the elements in the i-th row by EV_{i1}/abs(EV_{i1}), where EV_{i1} denotes
the first element of the i-th row. The same can be done column-wise if
that is what is intended.

In this way, at least for the moment, I get the same eigenvectors for the
same complex matrix from Python and from Fortran. I do not know whether this is
the proper solution, but I hope it will work.
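
In NumPy, where the eigenvectors are returned as the columns of the matrix, a
column-wise version of this phase normalization might look like the sketch below
(my own illustration, not code from this thread; it uses the example matrix above
and assumes the first component of each eigenvector is non-zero):

import numpy as np

A = np.array([[ 0.6 + 0.0j,               -1.97537668 - 0.09386068j],
              [-1.97537668 + 0.09386068j, -0.6 + 0.0j]])

w, V = np.linalg.eig(A)            # eigenvectors are the columns of V

# Divide each column by the phase of its first component, so that
# V[0, j] becomes real and non-negative for every eigenvector j.
phases = V[0, :] / np.abs(V[0, :])
V_fixed = V / phases

# The columns are still eigenvectors of A for the same eigenvalues.
print(np.allclose(np.dot(A, V_fixed), w * V_fixed))   # True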

Cheers,

Hongbin 



                                        Ad hoc, ad loc and quid pro quo
                                                 ---   Jeremy Hilary Boob

From: hongbin_zhan...@hotmail.com
To: numpy-discussion@scipy.org
Date: Tue, 3 Apr 2012 15:02:18 +0800
Subject: Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

Hej Val,

Thank you very much for your replies.

Yes, I understand that both sets of eigenvectors are correct and that they are
indeed related to each other by a unitary transformation (a unitary matrix).

Actually, what I am trying to do is to evaluate the Berry phase, which is
closely tied to the chosen gauge. It is fine to apply an arbitrary
phase to the eigenvectors, but to obtain a (meaningful) physical quantity
the phase must be chosen consistently for all the eigenvectors.

To my understanding, if I run both the Fortran and the Python code on the same
computer, they should pick the same phase (that is, the arbitrary phase would be
computer-dependent). Maybe some additional "rotations" are performed in Python,
but shouldn't this be documented somewhere in the man page?

I will try to fix this by performing an additional rotation to make the diagonal
elements real, and then check whether this solves the problem.
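
One way to write such a rotation in NumPy is sketched below (my own illustration
of the idea rather than code from this thread; it divides each eigenvector column
by the phase of its diagonal element and assumes those diagonal elements are
non-zero). Whether this reproduces exactly the convention used inside
numpy.linalg.eig is something to verify.

import numpy as np

def make_diagonal_real(V):
    # Multiply each eigenvector (column j of V) by a unit phase factor
    # so that the diagonal element V[j, j] becomes real and non-negative.
    phases = np.diag(V) / np.abs(np.diag(V))
    return V / phases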

Thank you all again, and of course more insightful suggestions are welcome.

Regards,

Hongbin



                                        Ad hoc, ad loc and quid pro quo
                                                 ---   Jeremy Hilary Boob

Date: Mon, 2 Apr 2012 22:19:55 -0500
From: kalat...@gmail.com
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

BTW this extra degree of freedom can be used to "rotate" the eigenvectors along
the unit circle (multiplication by exp(j*phi)). To those with physical inclinations
it should be reminiscent of gauge fixing (the vector potential in EM/QM).

These "rotations" can be used to make one (any) non-zero component of each
eigenvector a positive real number.
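
A quick numerical check of that phase freedom, using the matrix from the start of
the thread (a small sketch of my own; the angle phi is arbitrary):

import numpy as np

A = np.array([[ 0.6 + 0.0j,               -1.97537668 - 0.09386068j],
              [-1.97537668 + 0.09386068j, -0.6 + 0.0j]])
w, V = np.linalg.eig(A)

phi = 0.7                               # any angle works
v_rotated = np.exp(1j * phi) * V[:, 0]  # rotate the 1st eigenvector by exp(j*phi)

# Still an eigenvector of A for the same eigenvalue w[0].
print(np.allclose(np.dot(A, v_rotated), w[0] * v_rotated))   # True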
Finally, to the point: it seems that numpy.linalg.eig uses these "rotations" to
turn the diagonal elements of the eigenvector matrix into real positive numbers;
that's why the numpy solution looks neat.
Val
PS Probably nobody cares to know, but the phase factor I gave in my 1st email 
should be negated: 
0.99887305445887753+0.047461785427773337j

On Mon, Apr 2, 2012 at 8:53 PM, Matthew Brett <matthew.br...@gmail.com> wrote:

Hi,

On Mon, Apr 2, 2012 at 5:38 PM, Val Kalatsky <kalat...@gmail.com> wrote:

> Both results are correct.
> There are 2 factors that make the results look different:
> 1) The order: the 2nd eigenvector of the numpy solution corresponds to the
> 1st eigenvector of your solution,
> note that the vectors are written in columns.
> 2) The phase: an eigenvector can be multiplied by an arbitrary phase factor
> with absolute value = 1.
> As you can see this factor is -1 for the 2nd eigenvector
> and -0.99887305445887753-0.047461785427773337j for the other one.

Thanks for this answer; for my own benefit:

Definition: A . v = L . v, where A is the input matrix, L is an
eigenvalue of A and v is an eigenvector of A.

http://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix

In [63]: A = [[0.6+0.0j, -1.97537668-0.09386068j],
              [-1.97537668+0.09386068j, -0.6+0.0j]]

In [64]: L, v = np.linalg.eig(A)

In [66]: np.allclose(np.dot(A, v), L * v)
Out[66]: True

Best,

Matthew

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion