[Numpy-discussion] Checking matrix condition number

2017-01-25 Thread Edward Richards
What is the best way to make sure that a matrix inversion makes sense 
before performing it? I am currently struggling to understand some 
results from matrix inversions in my work, and I would like to see 
whether I am dealing with an ill-conditioned problem. It is probably 
user error, but I don't like having that possibility hanging over my head.


I naively put a call to np.linalg.cond into my code; all of my cores 
went to 100%, and a few minutes later I got a number. To be fair, A is 
6400 x 6400, but this takes ~20x longer than the inversion itself. 
That is not really practical for what I am doing; is there a better way?
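One cheaper option (a sketch, not something suggested in this thread): np.linalg.cond's default (p=None) goes through the full SVD, which is the expensive step, while p=1 or p=np.inf only computes norm(A, p) * norm(inv(A), p), i.e. an inversion rather than an SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))  # small stand-in for the 6400 x 6400 case

# Default condition number (2-norm): computed from the full SVD.
c2 = np.linalg.cond(A)

# 1-norm condition number: norm(A, 1) * norm(inv(A), 1).
# Needs only an inversion, not an SVD, so it is typically much faster
# at large sizes.
c1 = np.linalg.cond(A, p=1)

# Both blow up together when A is ill-conditioned; as a rule of thumb,
# you lose roughly log10(cond) decimal digits of accuracy in a solve.
print(c2, c1)
```

The two norms agree to within a factor polynomial in n, so for the usual purpose (deciding whether a solve is trustworthy at all) either one answers the question.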


This is partly in response to Ilhan Polat's post about introducing the 
A\b operator to NumPy. I also couldn't check the NumPy mailing list 
archives to see whether this has been asked before; the numpy-discussion 
gmane link isn't working for me at all.


Thanks for your time,
Ned
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] performance solving system of equations in numpy and MATLAB

2015-12-17 Thread Edward Richards
Thanks, everyone, for helping me glimpse the secret world of Fortran 
compilers. I am running a Linux machine, so I will look into MKL and 
OpenBLAS. It was easy for me to get an Intel Parallel Studio XE license 
as a student, so I have options.
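Before swapping libraries, it can be worth checking which BLAS/LAPACK the current NumPy build is actually linked against (a quick sketch; the exact output format varies across NumPy versions):

```python
import numpy as np

# Print the build configuration, including the BLAS/LAPACK libraries
# NumPy is linked against. Look for "mkl" or "openblas" in the output;
# an unoptimized reference BLAS here usually explains large performance
# gaps versus MATLAB, which bundles an optimized BLAS.
np.show_config()
```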



[Numpy-discussion] performance solving system of equations in numpy and MATLAB

2015-12-16 Thread Edward Richards
I recently ran a conceptual experiment to estimate the computational 
time required to solve an exact expression versus an approximate 
solution (Helmholtz vs. Helmholtz-Kirchhoff integrals). The exact 
solution requires a matrix inversion, and in my case the matrix would 
be roughly 15000 x 15000.



On my machine, MATLAB performs this solve on random matrices about 9x 
faster (20 s vs. 3 min). I thought the performance would be roughly the 
same, because I presume both rely on the same LAPACK solvers.



I will not actually need to solve this problem (even at 20 s it is 
prohibitive for broadband simulation), but if I did, I would reluctantly 
choose MATLAB. I am simply wondering why there is this performance gap, 
and whether there is a better way to solve this problem in numpy.



Thank you,

Ned


#Python version
import numpy as np

testA = np.random.randn(15000, 15000)
testb = np.random.randn(15000)

# %time is an IPython magic, so this line must run in IPython/Jupyter.
%time testx = np.linalg.solve(testA, testb)


%MATLAB version
testA = randn(15000);
testb = randn(15000, 1);

tic(); testx = testA \ testb; toc();
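For timing outside IPython (where %time is unavailable), a stdlib-only sketch like the one below works, and it is worth checking the residual as well; a small n is used here so it finishes quickly, and you can scale it up to 15000 to reproduce the benchmark:

```python
import time
import numpy as np

n = 2000  # scale up to 15000 to match the original benchmark
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)  # LAPACK gesv: LU factorization + triangular solves
elapsed = time.perf_counter() - t0
print(f"solve took {elapsed:.3f} s")

# Sanity check: the relative residual should be tiny for a
# well-conditioned random matrix.
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"relative residual: {residual:.2e}")
```

Since both NumPy and MATLAB hand this solve to LAPACK, a gap this large usually points at the BLAS backend underneath rather than the solver itself.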