Hi all,
I am finding it hard to debug cases where I am creating numpy arrays from
generators and the generator function throws an exception. It seems that
numpy just swallows the exception, and what I get is the not-too-helpful
ValueError: iterator too short
Much more helpful would be to see
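For reference, here is a minimal sketch of the pattern that triggers the problem, using np.fromiter to build an array from a generator. The broken() generator and its error message are illustrative; exactly which exception surfaces when the generator fails partway through may vary between numpy versions:

```python
import numpy as np

# A well-behaved generator works as expected:
squares = np.fromiter((i * i for i in range(5)), dtype=float)
print(squares)  # [ 0.  1.  4.  9. 16.]

# A generator that raises partway through. When a count is given,
# the original exception may be masked by a less informative
# ValueError ("iterator too short"), depending on the numpy version.
def broken():
    yield 1.0
    raise RuntimeError("real cause of the failure")

try:
    np.fromiter(broken(), dtype=float, count=3)
    raised = None
except Exception as exc:
    raised = type(exc).__name__
print(raised)
```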
Ping.
How can I tell whether numpy was successfully built against libamd.a and libumfpack.a?
How do I know if they were successfully linked (statically)?
Is it possible to check from within numpy, e.g. with show_config()?
I think show_config() has no information about these in it :-(
Anybody?
Thanks,
Samuel
It certainly does. Here is mine, showing that numpy is linked against mkl:
In [2]: np.show_config()
lapack_opt_info:
libraries = ['mkl_lapack95', 'mkl_intel', 'mkl_intel_thread',
'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
library_dirs =
I think numpy doesn't use umfpack. scipy.sparse used to, but now the
umfpack stuff has been moved out to a scikit.
So you probably won't see anything about those libraries, but if you
install scikits.umfpack and it works then you must be linked
correctly.
Cheers
Robin
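To check this yourself, the call is simply the following; what it prints depends on how your numpy was built (MKL, ATLAS, OpenBLAS, ...), and the exact layout of the output differs between numpy versions:

```python
import numpy as np

# Prints the build/link information numpy recorded at compile time:
# which BLAS/LAPACK variants were found, library directories, etc.
np.show_config()
```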
My translation is:
x1 = rcv[n:n-N:-1]
z = np.dot (P, x1.conj().transpose())
g = z / (_lambda + np.dot (x1, z))
y = np.dot (h, x1.conj().transpose())
e = x[n-N/2] - y
h += np.dot (e, g.conj().transpose())
P = (P - np.dot (g, z.conj().transpose()))/_lambda
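For anyone who wants to run the translation end to end, here is a self-contained, real-valued sketch of the same RLS update. The signal setup (h_true, the noise level) and the initialization of P are illustrative assumptions, not from the original post, and the slicing is rewritten to avoid the negative-index edge case in rcv[n:n-N:-1]:

```python
import numpy as np

np.random.seed(0)
N = 4                      # filter length
lam = 0.99                 # forgetting factor (_lambda above)
h_true = np.array([0.5, -0.3, 0.2, 0.1])   # system to identify

samples = 500
x = np.random.randn(samples)
d = np.convolve(x, h_true)[:samples] + 1e-3 * np.random.randn(samples)

h = np.zeros(N)            # filter estimate
P = np.eye(N) * 1e3        # inverse correlation matrix
for n in range(N - 1, samples):
    x1 = x[n - N + 1:n + 1][::-1]   # newest sample first
    z = P.dot(x1)
    g = z / (lam + x1.dot(z))       # gain vector
    e = d[n] - h.dot(x1)            # a priori error
    h = h + e * g
    P = (P - np.outer(g, z)) / lam  # g z' == g x1' P, P symmetric

print(np.round(h, 3))      # converges toward h_true
```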
But
Neal Becker wrote:
On Fri, Feb 18, 2011 at 12:50 PM, Neal Becker ndbeck...@gmail.com wrote:
Neal Becker wrote:
On 17.02.2011 16:31, Matthieu Brucher wrote:
It may also be the sizes of the chunks OMP uses. You can/should specify
them in the OMP pragma so that each chunk is a multiple of the cache line
size or something close.
Matthieu
Also beware of false sharing among the threads. When one
I think x.conj().transpose() is too verbose, use x.H instead :-)
Sturla
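Note that .H is an attribute of np.matrix (there is no .H on a plain ndarray), so this shorthand assumes you are working with matrix objects; a quick illustration:

```python
import numpy as np

# .H on np.matrix is the conjugate transpose, equivalent to
# x.conj().transpose() on the underlying array.
m = np.matrix([[1 + 2j, 3 - 1j]])
print(m.H)
```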
On 18.02.2011 19:11, Neal Becker wrote: