[Numpy-discussion] numpy.fromiter hiding exceptions

2011-02-18 Thread Åsmund Hjulstad
Hi all,

I am finding it hard to debug cases where I am creating numpy arrays from
generators, and the generator function throws an exception. It seems that
numpy just swallows the exception, and what I get is a not too helpful

ValueError: iterator too short

Much more helpful would be to see the original exception and get the
stack trace in IPython (or wherever I am working).

Is this possible in some easy way, or am I stuck with the equivalent of

if debug:
   mygenerator = list(mygenerator)
a = np.fromiter(iter(mygenerator), dtype=xxx, count=xxx)
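
One workaround (an illustrative sketch, not from the original message; mygenerator here is a stand-in for the real data source) is to wrap the generator so that the original traceback is printed before numpy can replace the exception with its own ValueError:

import traceback
import numpy as np

def debuggable(gen):
    # Yield items through unchanged, but print the original traceback
    # before re-raising, so it is visible even if fromiter masks it.
    try:
        for item in gen:
            yield item
    except Exception:
        traceback.print_exc()
        raise

def mygenerator():               # hypothetical generator that fails part-way
    for i in range(10):
        yield 1.0 / (5 - i)      # ZeroDivisionError when i == 5

a = np.fromiter(debuggable(mygenerator()), dtype=float, count=10)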

-- 
Åsmund Hjulstad, asm...@hjulstad.com
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-02-18 Thread Samuel John
Ping.

How can I tell whether numpy was successfully built against libamd.a and libumfpack.a?
How do I know whether they were linked (statically)?
Is it possible from within numpy, like show_config()?
I think show_config() has no information about these :-(

Anybody?

Thanks,
 Samuel
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-02-18 Thread Chris Colbert
It certainly does. Here is mine, showing that numpy is linked against mkl:

In [2]: np.show_config()
lapack_opt_info:
    libraries = ['mkl_lapack95', 'mkl_intel', 'mkl_intel_thread', 'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
    library_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/lib']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/include']
blas_opt_info:
    libraries = ['mkl_intel', 'mkl_intel_thread', 'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
    library_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/lib']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/include']
lapack_mkl_info:
    libraries = ['mkl_lapack95', 'mkl_intel', 'mkl_intel_thread', 'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
    library_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/lib']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/include']
blas_mkl_info:
    libraries = ['mkl_intel', 'mkl_intel_thread', 'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
    library_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/lib']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/include']
mkl_info:
    libraries = ['mkl_intel', 'mkl_intel_thread', 'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
    library_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/lib']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['/Library/Frameworks/Python.framework/Versions/1.3.0/include']

If those lists are empty for you, then numpy is not linked properly (of
course yours will be ATLAS and not MKL).
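
A programmatic variant of the same check (an illustrative sketch, not from the original message; it assumes a distutils-era numpy build, where the generated numpy.__config__ module exposes the same per-section dicts that show_config() prints):

import numpy as np

# 'lapack_opt_info' mirrors the lapack_opt_info section of show_config().
# An empty or missing entry means no optimized LAPACK was linked in.
info = getattr(np.__config__, 'lapack_opt_info', {})
print(info.get('libraries', []))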

On Fri, Feb 18, 2011 at 5:32 AM, Samuel John sc...@samueljohn.de wrote:

 Ping.

 How can I tell whether numpy was successfully built against libamd.a and libumfpack.a?
 How do I know whether they were linked (statically)?
 Is it possible from within numpy, like show_config()?
 I think show_config() has no information about these :-(

 Anybody?

 Thanks,
  Samuel

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-02-18 Thread Robin
I think numpy doesn't use umfpack. scipy.sparse used to, but now the
umfpack stuff has been moved out to a scikit.
So you probably won't see anything about those libraries, but if you
install scikits.umfpack and it works then you must be linked
correctly.
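
A quick way to test that (an illustrative sketch, not from the original message; it only checks that the scikit's compiled extension imports, which in turn requires that UMFPACK and AMD were found when the scikit was built):

try:
    import scikits.umfpack   # the scikit mentioned above
    print("scikits.umfpack imported OK -- the UMFPACK/AMD linkage works")
except ImportError:
    print("scikits.umfpack is not importable -- the wrapper or its libraries are missing")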

Cheers

Robin

On Fri, Feb 18, 2011 at 11:32 AM, Samuel John sc...@samueljohn.de wrote:
 Ping.

 How can I tell whether numpy was successfully built against libamd.a and libumfpack.a?
 How do I know whether they were linked (statically)?
 Is it possible from within numpy, like show_config()?
 I think show_config() has no information about these :-(

 Anybody?

 Thanks,
  Samuel

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help with translating some matlab

2011-02-18 Thread Neal Becker
My translation is:

x1 = rcv[n:n-N:-1]

z = np.dot (P, x1.conj().transpose())

g = z / (_lambda + np.dot (x1, z))

y = np.dot (h, x1.conj().transpose())

e = x[n-N/2] - y

h += np.dot (e, g.conj().transpose())

P = (P - np.dot (g, z.conj().transpose()))/_lambda

But it doesn't work.

You say z should be a column vector.  I got:
In [138]: x1.shape
Out[138]: (64,)

In [139]: z.shape
Out[139]: (64,)

Clearly, I did something wrong here.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help with translating some matlab

2011-02-18 Thread Neal Becker
Neal Becker wrote:

 My translation is:
 
 x1 = rcv[n:n-N:-1]
 
 z = np.dot (P, x1.conj().transpose())
 
 g = z / (_lambda + np.dot (x1, z))
 
 y = np.dot (h, x1.conj().transpose())
 
 e = x[n-N/2] - y
 
 h += np.dot (e, g.conj().transpose())
 
 P = (P - np.dot (g, z.conj().transpose()))/_lambda
 
 But it doesn't work.
 
 You say z should be a column vector.  I got:
 In [138]: x1.shape
 Out[138]: (64,)
 
 In [139]: z.shape
 Out[139]: (64,)
 
 Clearly, I did something wrong here.

I think I've got it. In numpy, a 'vector' is a row vector. If I want to turn a
row vector into a column vector, transpose doesn't work. I need to use newaxis
for that. So the whole translation is:

x1 = rcv[n:n-N:-1]

z = np.dot (P, x1.conj()[:,np.newaxis])

g = z / (_lambda + np.dot (x1, z))

y = np.dot (h, x1.conj()[:,np.newaxis])

e = x[n] - y

h += np.dot (e, g.conj().transpose())

P = (P - np.dot (g, z.conj().transpose()))/_lambda
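
As a quick illustration of the newaxis point (this snippet is not part of the original translation):

import numpy as np

x = np.arange(4.0)
print(x.T.shape)                # (4,)  -- transpose is a no-op on a 1-D array
print(x[:, np.newaxis].shape)   # (4, 1) -- newaxis yields a true column vector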


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help with translating some matlab

2011-02-18 Thread Warren Weckesser
On Fri, Feb 18, 2011 at 12:50 PM, Neal Becker ndbeck...@gmail.com wrote:

 Neal Becker wrote:

  My translation is:
 
  x1 = rcv[n:n-N:-1]
 
  z = np.dot (P, x1.conj().transpose())
 
  g = z / (_lambda + np.dot (x1, z))
 
  y = np.dot (h, x1.conj().transpose())
 
  e = x[n-N/2] - y
 
  h += np.dot (e, g.conj().transpose())
 
  P = (P - np.dot (g, z.conj().transpose()))/_lambda
 
  But it doesn't work.
 
  You say z should be a column vector.  I got:
  In [138]: x1.shape
  Out[138]: (64,)
 
  In [139]: z.shape
  Out[139]: (64,)
 
  Clearly, I did something wrong here.

 I think I've got it. In numpy, a 'vector' is a row vector. If I want to turn a
 row vector into a column vector, transpose doesn't work. I need to use newaxis
 for that. So the whole translation is:

 x1 = rcv[n:n-N:-1]

 z = np.dot (P, x1.conj()[:,np.newaxis])

 g = z / (_lambda + np.dot (x1, z))

 y = np.dot (h, x1.conj()[:,np.newaxis])

 e = x[n] - y

 h += np.dot (e, g.conj().transpose())

 P = (P - np.dot (g, z.conj().transpose()))/_lambda



Sure, you can implement Matlab's row and column vectors that way.  I would
probably stick with true 1D arrays (rather than Nx1 2D arrays), and
implement those lines something like this:

x1 = rcv[n:n-N:-1]

z = np.dot(P, x1.conj())

g = z / (_lambda + np.dot(x1, z))

y = np.vdot(x1, h)

e = x[n] - y

# Note: e is a scalar.
h += e * g.conj()

P = (P - np.outer(g, z.conj()))/_lambda


Note the use of vdot(); vdot(a,b) conjugates a and then dots with b.  Also
note the use of outer() in the line that updates P.
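
A tiny numerical check of those two identities (illustrative, not from the original message):

import numpy as np

a = np.array([1+2j, 3-1j])
b = np.array([2-1j, 1+4j])

# vdot conjugates its first argument before taking the dot product.
print(np.allclose(np.vdot(a, b), np.dot(a.conj(), b)))      # True
# outer(a, b)[i, j] == a[i] * b[j]
print(np.allclose(np.outer(a, b), a[:, None] * b))          # True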

Warren




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OT: performance in C extension; OpenMP, or SSE ?

2011-02-18 Thread Sturla Molden

On 17.02.2011 16:31, Matthieu Brucher wrote:
 It may also be the sizes of the chunk OMP uses. You can/should specify
 them in the OMP pragma so that it is a multiple of the cache line size or
 something close.

 Matthieu


Also beware of false sharing among the threads. When one processor 
updates the array dist in Sebastian's code, the cache line is dirtied 
for the other processors:


  #pragma omp parallel for private(j, i, ax, ay, dif_x, dif_y)
  for (i = 0; i < na; i++) {
      ax = a_ps[i*nx1];
      ay = a_ps[i*nx1+1];
      for (j = 0; j < nb; j++) {
          dif_x = ax - b_ps[j*nx2];
          dif_y = ay - b_ps[j*nx2+1];

          /* update shared memory */

          dist[2*i+j] = sqrt(dif_x*dif_x + dif_y*dif_y);

          /* ... and poof the cache is dirty */

      }
  }

Whenever this happens, the processors must stop whatever they are doing to
resynchronize their cache lines. False sharing can therefore work as an
invisible GIL inside OpenMP code. The processors can appear to run in syrup,
and there is excessive traffic on the memory bus.


This is also why MPI programs often scale better than OpenMP programs, 
despite the IPC overhead.


A good rule of thumb when working with OpenMP is to let each thread write to
private data arrays, and only share read-only arrays.


One can, for example, use OpenMP's reduction pragma to achieve this:
initialize the array dist with zeros, and use reduction(+:dist) in the
OpenMP pragma line.


Sturla
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] help with translating some matlab

2011-02-18 Thread Sturla Molden

I think x.conj().transpose() is too verbose; use x.H instead :-)
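
A caveat worth noting (this example is not from the original message): .H is provided by np.matrix, not by plain ndarrays, so the data would need wrapping first. A minimal sketch:

import numpy as np

x = np.matrix([[1+2j, 3-1j]])   # np.matrix provides .H; a plain ndarray does not
print(x.H)                      # conjugate transpose: a 2x1 column matrix, conjugated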

Sturla



On 18.02.2011 19:11, Neal Becker wrote:
 My translation is:

  x1 = rcv[n:n-N:-1]

  z = np.dot (P, x1.conj().transpose())

  g = z / (_lambda + np.dot (x1, z))

  y = np.dot (h, x1.conj().transpose())

  e = x[n-N/2] - y

  h += np.dot (e, g.conj().transpose())

  P = (P - np.dot (g, z.conj().transpose()))/_lambda

 But it doesn't work.

 You say z should be a column vector.  I got:
 In [138]: x1.shape
 Out[138]: (64,)

 In [139]: z.shape
 Out[139]: (64,)

 Clearly, I did something wrong here.


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion