Re: [Numpy-discussion] Tuple outer product?

2009-09-26 Thread Mads Ipsen
Robert Kern wrote:
 On Fri, Sep 25, 2009 at 17:38, Mads Ipsen m...@comxnet.dk wrote:
 Yes, but it should also work for [2.1,3.2,4.5] combined with
 [4.6,-2.3,5.6] - forgot to mention that.
 

 In [5]: np.transpose(np.meshgrid([2.1,3.2,4.5], [4.6,-2.3,5.6]))
 Out[5]:
 array([[[ 2.1,  4.6],
 [ 2.1, -2.3],
 [ 2.1,  5.6]],

[[ 3.2,  4.6],
 [ 3.2, -2.3],
 [ 3.2,  5.6]],

[[ 4.5,  4.6],
 [ 4.5, -2.3],
 [ 4.5,  5.6]]])

Point taken :-)
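For reference, a minimal self-contained sketch of the trick Robert shows above (the shape and a sample pair are easy to check):

```python
import numpy as np

a = [2.1, 3.2, 4.5]
b = [4.6, -2.3, 5.6]

# All (a_i, b_j) pairs as an array of shape (len(a), len(b), 2):
# pairs[i, j] == [a[i], b[j]]
pairs = np.transpose(np.meshgrid(a, b))
```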

-- 
Mads Ipsen, Scientific developer
QuantumWise A/S | phone: +45-29716388 | www: www.quantumwise.com
Nørresøgade 27A, DK-1370 Copenhagen, Denmark | email: m...@quantumwise.com


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] unpacking bytes directly in numpy

2009-09-26 Thread David Cournapeau
On Sat, Sep 26, 2009 at 10:33 PM, Thomas Robitaille
thomas.robitai...@gmail.com wrote:
 Hi,

 To convert some bytes to e.g. a 32-bit int, I can do

 bytes = f.read(4)
 i = struct.unpack('i', bytes)[0]

 and then convert it to np.int32 with

 i = np.int32(i)

 However, is there a more direct way of transforming bytes
 into a np.int32 type without the intermediate 'struct.unpack' step?

Assuming you have an array of bytes, you could just use view:

# x is an array of bytes, whose length is a multiple of 4
x.view(np.int32)
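A minimal sketch of both routes (assuming native byte order; `np.frombuffer` reads straight from the bytes without the intermediate `struct.unpack`):

```python
import io
import numpy as np

# Simulate a binary file containing one native-order 32-bit int.
f = io.BytesIO(np.int32(1234).tobytes())

# Route 1: reinterpret the raw bytes directly as np.int32.
raw = f.read(4)
i = np.frombuffer(raw, dtype=np.int32)[0]

# Route 2: view an existing byte array as int32
# (its length must be a multiple of 4).
x = np.frombuffer(raw, dtype=np.uint8)
j = x.view(np.int32)[0]
```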

cheers,

David


[Numpy-discussion] unpacking bytes directly in numpy

2009-09-26 Thread Thomas Robitaille
Hi,

To convert some bytes to e.g. a 32-bit int, I can do

bytes = f.read(4)
i = struct.unpack('i', bytes)[0]

and then convert it to np.int32 with

i = np.int32(i)

However, is there a more direct way of transforming bytes
into a np.int32 type without the intermediate 'struct.unpack' step?

Thanks for any help,

Tom


Re: [Numpy-discussion] np.any and np.all short-circuiting

2009-09-26 Thread Citi, Luca
Hello David,
thank you.
I followed your suggestion but was unable to make it work.
Surprisingly, I found that with numpy in a different folder it worked.
I am afraid this is because the first location is not a Linux filesystem
and cannot handle permissions and ownership.
This would make sense and agrees with a recent post about nose having problems
with certain permissions or ownerships.
Problem solved.
I checked out the svn version into a Linux filesystem and it works.
Thanks again to all those who offered help.
Best,
Luca


[Numpy-discussion] python reduce vs numpy reduce for outer product

2009-09-26 Thread Erik Tollerud
I'm encountering behavior that I think makes sense, but I'm not sure
if there's some numpy function I'm unaware of that might speed up this
operation.

I have a (potentially very long) sequence of vectors, but for
example's sake I'll stick with three: [A,B,C] with lengths na, nb, and
nc.  To get the result I want, I first reshape them to (na,1,1),
(1,nb,1) and (1,1,nc) and do:

reduce(np.multiply,[A,B,C])

and the result is what I want... The curious thing is that

np.prod.reduce([A,B,C])

throws

ValueError: setting an array element with a sequence.

Presumably this is because np.prod.reduce is trying to operate
element-wise without broadcasting.  But is there a way to make the
ufunc broadcast faster than doing the python-level reduce?  (I tried
np.prod(broadcast_arrays([A,B,C]),axis=0), but that seemed slower,
presumably because it needs to allocate the full array for all three
instead of just once).

Or is there a better way to start from the three 1-d vectors and jump
straight to the broadcast product (basically, an outer product over an
arbitrary number of dimensions)?
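For concreteness, a minimal sketch of the reshape-and-reduce approach described above (toy sizes, not the real data):

```python
import numpy as np
from functools import reduce

na, nb, nc = 2, 3, 4
A = np.arange(1, na + 1).reshape(na, 1, 1)
B = np.arange(1, nb + 1).reshape(1, nb, 1)
C = np.arange(1, nc + 1).reshape(1, 1, nc)

# Broadcasting expands the product out to shape (na, nb, nc):
# result[i, j, k] == A[i] * B[j] * C[k]
result = reduce(np.multiply, [A, B, C])
```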


Re: [Numpy-discussion] python reduce vs numpy reduce for outer product

2009-09-26 Thread Robert Kern
On Sat, Sep 26, 2009 at 17:17, Erik Tollerud erik.tolle...@gmail.com wrote:
 I'm encountering behavior that I think makes sense, but I'm not sure
 if there's some numpy function I'm unaware of that might speed up this
 operation.

 I have a (potentially very long) sequence of vectors, but for
 examples' sake, I'll stick with three: [A,B,C] with lengths na,nb, and
 nc.  To get the result I want, I first reshape them to (na,1,1) ,
 (1,nb,1) and (1,1,nc) and do:

reduce(np.multiply,[A,B,C])

 and the result is what I want... The curious thing is that

np.prod.reduce([A,B,C])

I'm sure you mean np.multiply.reduce().

 throws

 ValueError: setting an array element with a sequence.

 Presumably this is because np.prod.reduce is trying to operate
 elemnt-wise without broadcasting.

No. np.multiply.reduce() is trying to coerce its argument into an
array. You have given it a list of three arrays that do not have
compatible shapes.

In [1]: a = arange(5).reshape([5,1,1])

In [2]: b = arange(6).reshape([1,6,1])

In [4]: c = arange(7).reshape([1,1,7])

In [5]: array([a,b,c])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/Users/rkern/Downloads/<ipython console> in <module>()

ValueError: setting an array element with a sequence.

  But is there a way to make the
 ufunc broadcast faster than doing the python-level reduce?  (I tried
 np.prod(broadcast_arrays([A,B,C]),axis=0), but that seemed slower,
 presumably because it needs to allocate the full array for all three
 instead of just once).

Basically yes because it is computing np.array(np.broadcast_arrays([A,B,C])).

 Or is there a better way to start from the three 1-d vectors and jump
 straight to the broadcast product (basically, an outer product over an
 arbitrary number of dimensions)?

Well, numpy doesn't support arbitrary numbers of dimensions, nor will
your memory. You won't be able to do more than a handful of dimensions
practically. Exactly what are you trying to do? Specifics, please, not
toy examples.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] python reduce vs numpy reduce for outer product

2009-09-26 Thread Erik Tollerud
 I'm sure you mean np.multiply.reduce().
Yes, sorry - typo.

 Or is there a better way to start from the three 1-d vectors and jump
 straight to the broadcast product (basically, an outer product over an
 arbitrary number of dimensions)?

 Well, numpy doesn't support arbitrary numbers of dimensions, nor will
 your memory. You won't be able to do more than a handful of dimensions
 practically. Exactly what are you trying to do? Specifics, please, not
 toy examples.

Well, I'm not sure how to get much more specific than what I just
described. I am computing moments of n-d input arrays given a
particular axis ... I want to take a sequence of 1D arrays and get an
output that has as many dimensions as the input sequence is long, with
each dimension's size matching the corresponding vector.
Symbolically, A[i,j,k,...] = v0[i]*v1[j]*v2[k]*...  A is then
multiplied by the input n-d array (same shape as A), and that is the
output.

And yes, practically, this will only work until I run out of memory,
but the reduce method works for the n=1, 2, 3, and 4 cases, and
potentially in the future it will be needed for higher (up to
maybe 8) dimensions that are small enough not to overwhelm
memory. So it seems like a bad idea to write custom versions for
each potential dimensionality.
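A minimal sketch of a dimension-agnostic helper along these lines (`outer_nd` is a hypothetical name, not an existing numpy function):

```python
import numpy as np
from functools import reduce

def outer_nd(*vectors):
    """Broadcast outer product of any number of 1-D arrays:
    out[i, j, k, ...] = v0[i] * v1[j] * v2[k] * ..."""
    n = len(vectors)
    # Give the i-th vector shape (1, ..., -1, ..., 1) with -1 in slot i,
    # the same (na,1,1), (1,nb,1), (1,1,nc) pattern as above.
    reshaped = [
        np.asarray(v).reshape((1,) * i + (-1,) + (1,) * (n - i - 1))
        for i, v in enumerate(vectors)
    ]
    return reduce(np.multiply, reshaped)
```

One function then covers n = 1 through 8 without a custom version per dimensionality.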


Re: [Numpy-discussion] python reduce vs numpy reduce for outer product

2009-09-26 Thread Robert Kern
On Sat, Sep 26, 2009 at 18:17, Erik Tollerud erik.tolle...@gmail.com wrote:
 I'm sure you mean np.multiply.reduce().
 Yes, sorry - typo.

 Or is there a better way to start from the three 1-d vectors and jump
 straight to the broadcast product (basically, an outer product over an
 arbitrary number of dimensions)?

 Well, numpy doesn't support arbitrary numbers of dimensions, nor will
 your memory. You won't be able to do more than a handful of dimensions
 practically. Exactly what are you trying to do? Specifics, please, not
 toy examples.

 Well, I'm not sure how to get much more specific than what I just
 described. I am computing moments of n-d input arrays given a
 particular axis ... I want to take a sequence of 1D arrays and get an
 output that has as many dimensions as the input sequence is long, with
 each dimension's size matching the corresponding vector.
 Symbolically, A[i,j,k,...] = v0[i]*v1[j]*v2[k]*...  A is then
 multiplied by the input n-d array (same shape as A), and that is the
 output.

 And yes, practically, this will only work until I run out of memory,
 but the reduce method works for the n=1, 2, 3, and 4 cases, and
 potentially in the future it will be needed for higher (up to
 maybe 8) dimensions that are small enough not to overwhelm
 memory. So it seems like a bad idea to write custom versions for
 each potential dimensionality.

Okay, that's the key fact I needed. When you said that you would have
a long list of vectors, I was worried that you wanted a dimension for
each of them.

You probably aren't going to be able to beat reduce(np.multiply, ...).
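One ready-made shortcut worth noting: `np.ix_` performs exactly the per-axis reshape that feeds `reduce(np.multiply, ...)`. It is designed for index arrays, but it also just reshapes plain 1-d float arrays, so a sketch of the whole pipeline is:

```python
import numpy as np
from functools import reduce

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0, 5.0])
c = np.array([6.0, 7.0])

# np.ix_ reshapes the inputs to (2, 1, 1), (1, 3, 1) and (1, 1, 2),
# ready for broadcasting; reduce then multiplies them out.
outer = reduce(np.multiply, np.ix_(a, b, c))
```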

-- 
Robert Kern