On Sat, Mar 15, 2008 at 07:33:51PM -0400, Anne Archibald wrote:
> ...
> To answer the OP's question, there is a relatively small number of C
> inner loops that could be marked up with OpenMP #pragmas to cover most
> matrix operations. Matrix linear algebra is a separate question, since
> numpy/scipy prefers to use optimized third-party libraries - in these
> cases one would need to use parallel linear algebra libraries (which
> do exist, I think, and are plug-compatible). So parallelizing numpy is
> probably feasible, and probably not too difficult, and would be
> valuable.

OTOH, there are reasons to _not_ want numpy to automatically use
OpenMP.  I personally have a lot of multi-core CPUs and/or
multi-processor servers that I use numpy on.  The way I use numpy
is to run a bunch of (embarrassingly) parallel numpy jobs, one for
each CPU core.  If OpenMP became "standard" (and it does work well
in gcc 4.2 and 4.3), we would definitely want control over how
it is used...
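For what it's worth, the gcc OpenMP runtime already honors the
OMP_NUM_THREADS environment variable, so per-job control would be
possible even if numpy used OpenMP internally.  A sketch of pinning
one worker to a single thread before launching it:

```shell
# Restrict this job to one OpenMP thread so that running one
# process per core doesn't oversubscribe the machine;
# OMP_NUM_THREADS is read by libgomp at program startup.
export OMP_NUM_THREADS=1
echo "$OMP_NUM_THREADS"
```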

> The biggest catch, I think, would be compilation issues - is
> it possible to link an OpenMP-compiled shared library into a normal
> executable?

I think so.  The newer gcc compilers (4.2+) implement OpenMP via
the libgomp runtime library, and I'm pretty sure that links just
like any other shared library.

S

-- 
Scott M. Ransom            Address:  NRAO
Phone:  (434) 296-0320               520 Edgemont Rd.
email:  [EMAIL PROTECTED]             Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B  FFCA 9BFA B6FF FFD3 2989
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
