Hi Christoph,

I already ran some tests. First, in other codes where I use Kwant, MUMPS seems to 
be working correctly; but overall those codes are also slower in my venv than in 
conda.

Regarding numpy with MKL or OpenBLAS, I found a benchmark script on GitHub and it 
gave the results below. They show that MKL is about 2x faster for SVD and 
eigendecomposition, which are the kinds of calls I'm making. While a factor of 2 
alone cannot explain the huge time difference in my main code, I'm actually using 
even larger matrices (at least 8000x8000), so it could simply be MKL vs OpenBLAS, 
depending on how the gap scales with matrix size. I'll check that soon.

numpy + conda + mkl
-------------------------
Dotted two 4096x4096 matrices in 0.52 s.
Dotted two vectors of length 524288 in 0.04 ms.
SVD of a 2048x1024 matrix in 0.26 s.
Cholesky decomposition of a 2048x2048 matrix in 0.08 s.
Eigendecomposition of a 2048x2048 matrix in 3.07 s.

numpy + openblas
-------------------------
Dotted two 4096x4096 matrices in 0.64 s.
Dotted two vectors of length 524288 in 0.05 ms.
SVD of a 2048x1024 matrix in 0.46 s.
Cholesky decomposition of a 2048x2048 matrix in 0.11 s.
Eigendecomposition of a 2048x2048 matrix in 5.70 s.
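For reference, here is a minimal sketch of the kind of benchmark I mean (not the 
actual GitHub script; sizes are scaled down here so it runs quickly, and 
np.show_config() confirms which BLAS/LAPACK backend numpy was linked against):

```python
import time
import numpy as np

np.show_config()  # reports whether numpy links against MKL or OpenBLAS

rng = np.random.default_rng(0)
n = 512  # scaled down from the 2048/4096 sizes quoted above

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)  # symmetric positive definite, for Cholesky

def bench(label, fn, repeats=3):
    # report the best of a few runs to reduce timing noise
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()  # preferred over time.time() for timing
        fn()
        best = min(best, time.perf_counter() - t0)
    print(f"{label}: {best:.3f} s")

bench("matmul", lambda: A @ B)
bench("SVD", lambda: np.linalg.svd(A))
bench("Cholesky", lambda: np.linalg.cholesky(S))
bench("eig", lambda: np.linalg.eig(A))
```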



I'll run the code with time.time() measurements in different parts to track down 
where it's slowing. I'll also try to trim the code into a simple example to see 
if I can reproduce the problem with something easier to understand. If I find 
something, I'll let you know. But since I'm currently trying to finish a paper, 
I'll keep using conda a little longer, until I understand this huge time 
difference.
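For the per-section timing, I'll probably use something like this small helper 
(a sketch; the stage labels are placeholders for the real parts of my code):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # print the wall-clock time spent inside the `with` block
    t0 = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - t0:.3f} s")

# dummy workload standing in for the real stages (system setup, solve, ...)
with timed("setup"):
    data = [i * i for i in range(100_000)]

with timed("compute"):
    total = sum(data)
```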

Thanks for your attention. I'll reply here as soon as I have more relevant 
numbers.
