[Numpy-discussion] Array concatenation performance

2010-07-15 Thread John Porter
Has anyone got any advice about array creation? I've been using numpy
for a long time and have just noticed something unexpected about array
concatenation.

It seems that using numpy.array([a,b,c]) is around 20 times slower
than creating an empty array and adding the individual elements.

Other things that don't work well either:
   numpy.concatenate([a,b,c]).reshape(3,-1)
   numpy.concatenate([[a],[b],[c]])

Is there a better way to efficiently create the array?

See the following snippet:
---
import time
import numpy as nx
print 'numpy version', nx.version.version
t = time.time()
# test array
a = nx.arange(1000*1000)
print 'a ',time.time()-t
t = time.time()
# create array in the normal way..
b0 = nx.array([a,a,a])
print 'b0',time.time()-t
t = time.time()
# create using empty array
b1 = nx.empty((3,)+(a.shape))
b1[0] = a
b1[1] = a
b1[2] = a
print 'b1',time.time()-t
print 'equal', nx.all(b0==b1)
-
Produces the output:
  numpy version 1.3.0
  a  0.0019519329071
  b0 0.286643981934
  b1 0.0116579532623
  equal True
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread jf
I will be out of office till 8. August
For urgent matters, please contact patrick.lambe...@heliotis.ch






Re: [Numpy-discussion] numpy.fft, yet again

2010-07-15 Thread Martin Raspaud

David Goldsmith skrev:

 
 Interesting comment: it made me run down the fftpack tutorial
 http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/fftpack.rst/
 josef has alluded to in the past to see if the suggested pointer
 could point there without having to write a lot of new content. 
 What I found was that although the scipy basic fft functions don't
 support it (presumably because they're basically just wrappers for
 the numpy fft functions), scipy's discrete cosine transforms support
 a norm='ortho' keyword argument/value pair that enables the
 function to return the unitary versions that you describe above. 
 There isn't much narrative explanation of the issue yet, but it got
 me wondering: why don't the fft functions support this?  If there
 isn't a good reason, I'll go ahead and submit an enhancement ticket.
 
 
 Having seen no post of a good reason, I'm going to go ahead and file
 enhancement tickets.

Hi,

I have worked on Fourier transforms, and I think normalization is generally
seen as a whole: fft + ifft should be the identity function, hence the
necessity of a normalization, which is often done on the ifft.

As one of the previous posters mentioned, sqrt(len(x)) is often seen as a good
compromise to split the normalization equally between fft and ifft.

In the sound community, though, the whole normalization is often done after
the fft, so that looking at the amplitude spectrum gives the correct amplitude
values for the different components of the sound (sinusoids).

My guess is that normalization requirements are different for every user:
that's why I like the no-normalization approach of FFTW, which lets everyone
do whatever they want.

Best regards,
Martin
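The three conventions described in this message can be sketched as follows (a minimal illustration, not from the thread; NumPy's default places the full 1/N on the ifft, and the tone amplitude and bin used below are arbitrary choices):

```python
import numpy as np

x = np.random.rand(8)
N = len(x)

# Default NumPy convention: no scaling on fft, 1/N on ifft,
# so the round trip is the identity.
assert np.allclose(np.fft.ifft(np.fft.fft(x)), x)

# "Split" convention: 1/sqrt(N) in each direction is also an identity.
X_u = np.fft.fft(x) / np.sqrt(N)        # unitary forward
x_back = np.fft.ifft(X_u) * np.sqrt(N)  # unitary inverse (ifft already carries 1/N)
assert np.allclose(x_back, x)

# "Sound" convention: normalize after the fft so the amplitude spectrum
# reads off sinusoid amplitudes directly.
n = np.arange(N)
tone = 3.0 * np.cos(2 * np.pi * 2 * n / N)   # amplitude 3.0 at bin 2
amp = 2.0 * np.abs(np.fft.fft(tone)) / N
assert np.isclose(amp[2], 3.0)
```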


[Numpy-discussion] missing string formatting functionality?

2010-07-15 Thread Neal Becker
It looks like np.savetxt is pretty flexible, accepting fmt, and delimiter 
args.  But to format into a string, we have array_repr and array_str, which 
are not flexible.

Of course, one can use np.savetxt with Python StringIO, but that's more 
work.  It would be nice if np.savetxt could just return a string.  Better 
still, a np.savestring (returning a string) could implement the core 
functionality, and np.savetxt would just use it.
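The StringIO workaround mentioned above can be sketched like this; `savestring` is the hypothetical helper being proposed, not an existing NumPy function (shown here with Python 3's io.StringIO):

```python
import io
import numpy as np

def savestring(arr, fmt='%.18e', delimiter=' '):
    # Hypothetical helper: route np.savetxt through an in-memory
    # text buffer and hand back the formatted result as a string.
    buf = io.StringIO()
    np.savetxt(buf, arr, fmt=fmt, delimiter=delimiter)
    return buf.getvalue()

a = np.arange(6).reshape(2, 3)
s = savestring(a, fmt='%d', delimiter=',')
print(repr(s))  # '0,1,2\n3,4,5\n'
```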



Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread Skipper Seabold
On Thu, Jul 15, 2010 at 5:54 AM, John Porter jpor...@cambridgesys.com wrote:
 Has anyone got any advice about array creation. I've been using numpy
 for a long time and have just noticed something unexpected about array
 concatenation.

 It seems that using numpy.array([a,b,c]) is around 20 times slower
 than creating an empty array and adding the individual elements.

 Other things that don't work well either:
    numpy.concatenate([a,b,c]).reshape(3,-1)
    numpy.concatenate([[a],[b],[c]])

 Is there a better way to efficiently create the array ?


What was your timing for concatenate?  It wins for me given the shape of a.

In [1]: import numpy as np

In [2]: a = np.arange(1000*1000)

In [3]: timeit b0 = np.array([a,a,a])
1 loops, best of 3: 216 ms per loop

In [4]: timeit b1 = np.empty(((3,)+a.shape)); b1[0]=a;b1[1]=a;b1[2]=a
100 loops, best of 3: 19.3 ms per loop

In [5]: timeit b2 = np.c_[a,a,a].T
10 loops, best of 3: 30.5 ms per loop

In [6]: timeit b3 = np.concatenate([a,a,a]).reshape(3,-1)
100 loops, best of 3: 9.33 ms per loop

Skipper

 See the following snippet:
 ---
 import time
 import numpy as nx
 print 'numpy version', nx.version.version
 t = time.time()
 # test array
 a = nx.arange(1000*1000)
 print 'a ',time.time()-t
 t = time.time()
 # create array in the normal way..
 b0 = nx.array([a,a,a])
 print 'b0',time.time()-t
 t = time.time()
 # create using empty array
 b1 = nx.empty((3,)+(a.shape))
 b1[0] = a
 b1[1] = a
 b1[2] = a
 print 'b1',time.time()-t
 print nx.all((b0==b1))
 -
 Produces the output:
   numpy version 1.3.0
   a  0.0019519329071
   b0 0.286643981934
   b1 0.0116579532623
   equal True


[Numpy-discussion] isinf raises in inf

2010-07-15 Thread John Hunter
I am seeing a problem on Solaris since I upgraded to svn HEAD.
np.isinf does not handle np.inf.  See the IPython session below.  I am not
seeing this problem with HEAD on an Ubuntu Linux box I tested on.

In [1]: import numpy as np

In [2]: np.__version__
Out[2]: '2.0.0.dev8480'

In [3]: x = np.inf
np.inf     np.info    np.infty

In [3]: x = np.inf

In [4]: np.isinf(x)
Warning: invalid value encountered in isinf
Out[4]: True

In [5]: np.seter
np.seterr  np.seterrcall  np.seterrobj

In [5]: np.seterr(all='raise')
Out[5]: {'over': 'print', 'divide': 'print', 'invalid': 'print',
'under': 'ignore'}

In [6]: np.isinf(x)
---
FloatingPointErrorTraceback (most recent call last)

/home/titan/johnh/ipython console

FloatingPointError: invalid value encountered in isinf

In [7]: !uname -a
SunOS udesktop191 5.10 Generic_139556-08 i86pc i386 i86pc

In [43]: !gcc --version
gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


Posted on tracker:

http://projects.scipy.org/numpy/ticket/1547


Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread John Porter
You're right - I screwed up the timing for the one that works...
It does seem to be faster.

I've always just built arrays using nx.array([]) in the past, though,
and was surprised that it performs so badly.


On Thu, Jul 15, 2010 at 2:41 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Thu, Jul 15, 2010 at 5:54 AM, John Porter jpor...@cambridgesys.com wrote:
 Has anyone got any advice about array creation. I've been using numpy
 for a long time and have just noticed something unexpected about array
 concatenation.

 It seems that using numpy.array([a,b,c]) is around 20 times slower
 than creating an empty array and adding the individual elements.

 Other things that don't work well either:
    numpy.concatenate([a,b,c]).reshape(3,-1)
     numpy.concatenate([[a],[b],[c]])

 Is there a better way to efficiently create the array ?


 What was your timing for concatenate?  It wins for me given the shape of a.

 In [1]: import numpy as np

 In [2]: a = np.arange(1000*1000)

 In [3]: timeit b0 = np.array([a,a,a])
 1 loops, best of 3: 216 ms per loop

 In [4]: timeit b1 = np.empty(((3,)+a.shape)); b1[0]=a;b1[1]=a;b1[2]=a
 100 loops, best of 3: 19.3 ms per loop

 In [5]: timeit b2 = np.c_[a,a,a].T
 10 loops, best of 3: 30.5 ms per loop

 In [6]: timeit b3 = np.concatenate([a,a,a]).reshape(3,-1)
 100 loops, best of 3: 9.33 ms per loop

 Skipper

 See the following snippet:
 ---
 import time
 import numpy as nx
 print 'numpy version', nx.version.version
 t = time.time()
 # test array
 a = nx.arange(1000*1000)
 print 'a ',time.time()-t
 t = time.time()
 # create array in the normal way..
 b0 = nx.array([a,a,a])
 print 'b0',time.time()-t
 t = time.time()
 # create using empty array
 b1 = nx.empty((3,)+(a.shape))
 b1[0] = a
 b1[1] = a
 b1[2] = a
 print 'b1',time.time()-t
 print nx.all((b0==b1))
 -
 Produces the output:
   numpy version 1.3.0
   a  0.0019519329071
   b0 0.286643981934
   b1 0.0116579532623
   equal True


Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread Fabrice Silva
Le jeudi 15 juillet 2010 à 16:05 +0100, John Porter a écrit :
 You're right - I screwed up the timing for the one that works...
 It does seem to be faster.
 
 I've always just built arrays using nx.array([]) in the past though
 and was surprised that it performs so badly.

Can anyone provide an explanation (or a pointer) for such differences?
Thanks



Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread Skipper Seabold
On Thu, Jul 15, 2010 at 11:05 AM, John Porter jpor...@cambridgesys.com wrote:
 You're right - I screwed up the timing for the one that works...
 It does seem to be faster.

 I've always just built arrays using nx.array([]) in the past though
 and was surprised
 that it performs so badly.


 On Thu, Jul 15, 2010 at 2:41 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Thu, Jul 15, 2010 at 5:54 AM, John Porter jpor...@cambridgesys.com 
 wrote:
 Has anyone got any advice about array creation. I've been using numpy
 for a long time and have just noticed something unexpected about array
 concatenation.

 It seems that using numpy.array([a,b,c]) is around 20 times slower
 than creating an empty array and adding the individual elements.

 Other things that don't work well either:
    numpy.concatenate([a,b,c]).reshape(3,-1)
    numpy.concatenate([[a],[b],[c]])

 Is there a better way to efficiently create the array ?


 What was your timing for concatenate?  It wins for me given the shape of a.

 In [1]: import numpy as np

 In [2]: a = np.arange(1000*1000)

 In [3]: timeit b0 = np.array([a,a,a])
 1 loops, best of 3: 216 ms per loop

 In [4]: timeit b1 = np.empty(((3,)+a.shape)); b1[0]=a;b1[1]=a;b1[2]=a
 100 loops, best of 3: 19.3 ms per loop

 In [5]: timeit b2 = np.c_[a,a,a].T
 10 loops, best of 3: 30.5 ms per loop

 In [6]: timeit b3 = np.concatenate([a,a,a]).reshape(3,-1)
 100 loops, best of 3: 9.33 ms per loop


One more.

In [26]: timeit b4 = np.vstack((a,a,a))
100 loops, best of 3: 9.46 ms per loop

Skipper


Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread John Porter
OK - except that vstack doesn't seem to work for 2d arrays (without a
reshape), which is what I'm actually after.

The difference between the numpy.concatenate version and numpy.array is fairly
impressive though; I get a factor of 50x. It would be nice to know why.

On Thu, Jul 15, 2010 at 4:15 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Thu, Jul 15, 2010 at 11:05 AM, John Porter jpor...@cambridgesys.com 
 wrote:
 You're right - I screwed up the timing for the one that works...
 It does seem to be faster.

 I've always just built arrays using nx.array([]) in the past though
 and was surprised
 that it performs so badly.


 On Thu, Jul 15, 2010 at 2:41 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Thu, Jul 15, 2010 at 5:54 AM, John Porter jpor...@cambridgesys.com 
 wrote:
 Has anyone got any advice about array creation. I've been using numpy
 for a long time and have just noticed something unexpected about array
 concatenation.

 It seems that using numpy.array([a,b,c]) is around 20 times slower
 than creating an empty array and adding the individual elements.

 Other things that don't work well either:
    numpy.concatenate([a,b,c]).reshape(3,-1)
     numpy.concatenate([[a],[b],[c]])

 Is there a better way to efficiently create the array ?


 What was your timing for concatenate?  It wins for me given the shape of a.

 In [1]: import numpy as np

 In [2]: a = np.arange(1000*1000)

 In [3]: timeit b0 = np.array([a,a,a])
 1 loops, best of 3: 216 ms per loop

 In [4]: timeit b1 = np.empty(((3,)+a.shape)); b1[0]=a;b1[1]=a;b1[2]=a
 100 loops, best of 3: 19.3 ms per loop

 In [5]: timeit b2 = np.c_[a,a,a].T
 10 loops, best of 3: 30.5 ms per loop

 In [6]: timeit b3 = np.concatenate([a,a,a]).reshape(3,-1)
 100 loops, best of 3: 9.33 ms per loop


 One more.

 In [26]: timeit b4 = np.vstack((a,a,a))
 100 loops, best of 3: 9.46 ms per loop

 Skipper


Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread Skipper Seabold
On Thu, Jul 15, 2010 at 12:23 PM, John Porter jpor...@cambridgesys.com wrote:
 ok - except that vstack doesn't seem to work for 2d arrays (without a
 reshape) which is what I'm actually after.


Ah, then you might want hstack.  There is also a column_stack and
row_stack if you need to go that route.

 The difference between the numpy.concatenate version and numpy.array is fairly
 impressive though, I get a factor of  50x. It would be nice to know why.


Sorry, I don't have any deep insight here.  There is probably just
overhead in the array creation.  Consider if you try to use hstack and
company on lists.

In [1]: import numpy as np

In [2]: a = np.arange(1000*1000)

In [3]: b = a.tolist()

In [4]: timeit b0 = np.array((a,a,a))
1 loops, best of 3: 217 ms per loop

In [5]: timeit b1 = np.vstack((b,b,b))
1 loops, best of 3: 380 ms per loop

Skipper


 On Thu, Jul 15, 2010 at 4:15 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Thu, Jul 15, 2010 at 11:05 AM, John Porter jpor...@cambridgesys.com 
 wrote:
 You're right - I screwed up the timing for the one that works...
 It does seem to be faster.

 I've always just built arrays using nx.array([]) in the past though
 and was surprised
 that it performs so badly.


 On Thu, Jul 15, 2010 at 2:41 PM, Skipper Seabold jsseab...@gmail.com 
 wrote:
 On Thu, Jul 15, 2010 at 5:54 AM, John Porter jpor...@cambridgesys.com 
 wrote:
 Has anyone got any advice about array creation. I've been using numpy
 for a long time and have just noticed something unexpected about array
 concatenation.

 It seems that using numpy.array([a,b,c]) is around 20 times slower
 than creating an empty array and adding the individual elements.

 Other things that don't work well either:
    numpy.concatenate([a,b,c]).reshape(3,-1)
     numpy.concatenate([[a],[b],[c]])

 Is there a better way to efficiently create the array ?


 What was your timing for concatenate?  It wins for me given the shape of a.

 In [1]: import numpy as np

 In [2]: a = np.arange(1000*1000)

 In [3]: timeit b0 = np.array([a,a,a])
 1 loops, best of 3: 216 ms per loop

 In [4]: timeit b1 = np.empty(((3,)+a.shape)); b1[0]=a;b1[1]=a;b1[2]=a
 100 loops, best of 3: 19.3 ms per loop

 In [5]: timeit b2 = np.c_[a,a,a].T
 10 loops, best of 3: 30.5 ms per loop

 In [6]: timeit b3 = np.concatenate([a,a,a]).reshape(3,-1)
 100 loops, best of 3: 9.33 ms per loop


 One more.

 In [26]: timeit b4 = np.vstack((a,a,a))
 100 loops, best of 3: 9.46 ms per loop

 Skipper


[Numpy-discussion] Matrix dot product over an axis(for a 3d array/list of matrices)

2010-07-15 Thread Emmanuel Bengio
Hello,

I have a list of 4x4 transformation matrices, that I want to dot with
another list of the same size (elementwise).
Making a for loop that calculates the dot product of each is extremely slow;
I thought that maybe it's due to the fact that I have thousands of matrices,
and it's a Python for loop, so there's high Python overhead.

I do something like this:
 for a,b in izip(Rot,Trans):
 c.append(numpy.dot(a,b))

Is there a way to do this in one instruction?
Or is there a way to do this all using weave.inline?

-- 


 Emmanuel


Re: [Numpy-discussion] numpy.fft, yet again

2010-07-15 Thread David Goldsmith
On Thu, Jul 15, 2010 at 3:20 AM, Martin Raspaud martin.rasp...@smhi.sewrote:


 David Goldsmith skrev:
 
 
  Interesting comment: it made me run down the fftpack tutorial
  http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/fftpack.rst/
  josef has alluded to in the past to see if the suggested pointer
  could point there without having to write a lot of new content.
  What I found was that although the scipy basic fft functions don't
  support it (presumably because they're basically just wrappers for
  the numpy fft functions), scipy's discrete cosine transforms support
  a norm='ortho' keyword argument/value pair that enables the
  function to return the unitary versions that you describe above.
  There isn't much narrative explanation of the issue yet, but it got
  me wondering: why don't the fft functions support this?  If there
  isn't a good reason, I'll go ahead and submit an enhancement
 ticket.
 
 
  Having seen no post of a good reason, I'm going to go ahead and file
  enhancement tickets.

 Hi,

 I have worked on Fourier transforms, and I think normalization is generally
 seen as a whole: fft + ifft should be the identity function, hence the
 necessity of a normalization, which is often done on the ifft.

 As one of the previous posters mentioned, sqrt(len(x)) is often seen as a
 good compromise to split the normalization equally between fft and ifft.

 In the sound community, though, the whole normalization is often done after
 the fft, so that looking at the amplitude spectrum gives the correct
 amplitude values for the different components of the sound (sinusoids).

 My guess is that normalization requirements are different for every user:
 that's why I like the no-normalization approach of FFTW, which lets everyone
 do whatever they want.


I get the picture: in the docstring, refer people to fftw.

DG
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Matrix dot product over an axis(for a 3d array/list of matrices)

2010-07-15 Thread John Salvatier
Could you place all Rot's into the same array and all the Trans's into the
same array? If you have the first index of each array refer to which array
it is numpy.dot should work fine, since numpy.dot just does the dot product
over the second to last and last indexes.
http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html

John

On Thu, Jul 15, 2010 at 9:38 AM, Emmanuel Bengio beng...@gmail.com wrote:

 Hello,

 I have a list of 4x4 transformation matrices, that I want to dot with
 another list of the same size (elementwise).
 Making a for loop that calculates the dot product of each is extremely
 slow,
 I thought that maybe it's due to the fact that I have thousands of matrices
 and it's a python for loop and there's a high Python overhead.

 I do something like this:
  for a,b in izip(Rot,Trans):
  c.append(numpy.dot(a,b))

 Is there a way to do this in one instruction?
 Or is there a way to do this all using weave.inline?

 --


  Emmanuel



Re: [Numpy-discussion] Matrix dot product over an axis(for a 3d array/list of matrices)

2010-07-15 Thread Keith Goodman
On Thu, Jul 15, 2010 at 9:38 AM, Emmanuel Bengio beng...@gmail.com wrote:

 Hello,

 I have a list of 4x4 transformation matrices, that I want to dot with 
 another list of the same size (elementwise).
 Making a for loop that calculates the dot product of each is extremely slow,
 I thought that maybe it's due to the fact that I have thousands of matrices 
 and it's a python for loop and there's a high Python overhead.

 I do something like this:
  for a,b in izip(Rot,Trans):
  c.append(numpy.dot(a,b))

 Is there a way to do this in one instruction?
 Or is there a way to do this all using weave.inline?

How about using a list comprehension? And setting dot = numpy.dot. Would
initializing the c list first help?


Re: [Numpy-discussion] Matrix dot product over an axis(for a 3d array/list of matrices)

2010-07-15 Thread Keith Goodman
On Thu, Jul 15, 2010 at 9:45 AM, Keith Goodman kwgood...@gmail.com wrote:
 On Thu, Jul 15, 2010 at 9:38 AM, Emmanuel Bengio beng...@gmail.com wrote:

 Hello,

 I have a list of 4x4 transformation matrices, that I want to dot with 
 another list of the same size (elementwise).
 Making a for loop that calculates the dot product of each is extremely slow,
 I thought that maybe it's due to the fact that I have thousands of matrices 
 and it's a python for loop and there's a high Python overhead.

 I do something like this:
  for a,b in izip(Rot,Trans):
  c.append(numpy.dot(a,b))

 Is there a way to do this in one instruction?
 Or is there a way to do this all using weave.inline?

 How about using a list comprehension? And setting dot = numpy.dot. Would
 initializing the c list first help?

Doesn't buy much:

def forloop(a, b):
    c = []
    for x, y in izip(a, b):
        c.append(np.dot(x, y))
    return c

a = [np.random.rand(4,4) for i in range(1)]
b = [np.random.rand(4,4) for i in range(1)]

timeit forloop(a, b)
10 loops, best of 3: 21.2 ms per loop

dot = np.dot
timeit [dot(x, y) for x, y in izip(a, b)]
100 loops, best of 3: 19.2 ms per loop


Re: [Numpy-discussion] Matrix dot product over an axis(for a 3d array/list of matrices)

2010-07-15 Thread Charles R Harris
On Thu, Jul 15, 2010 at 10:38 AM, Emmanuel Bengio beng...@gmail.com wrote:

 Hello,

 I have a list of 4x4 transformation matrices, that I want to dot with
 another list of the same size (elementwise).
 Making a for loop that calculates the dot product of each is extremely
 slow,
 I thought that maybe it's due to the fact that I have thousands of matrices
 and it's a python for loop and there's a high Python overhead.

 I do something like this:
  for a,b in izip(Rot,Trans):
  c.append(numpy.dot(a,b))

 Is there a way to do this in one instruction?
 Or is there a way to do this all using weave.inline?


Yes, there is a trick for this using a multiply with properly placed newaxis
followed by a sum. It uses more memory, but for stacks of small arrays that
shouldn't matter. See the post here:
http://thread.gmane.org/gmane.comp.python.numeric.general/20360/focus=21033


Chuck
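The newaxis-plus-sum trick referenced above can be sketched like this (array names and sizes are illustrative; the stacked result matches the explicit loop from the original post):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.rand(1000, 4, 4)   # stack of 4x4 matrices
b = rng.rand(1000, 4, 4)

# Insert axes so broadcasting forms every product a[s,i,j]*b[s,j,l],
# then sum over the contracted axis j (axis=-2 of the 4-D product).
c = (a[:, :, :, np.newaxis] * b[:, np.newaxis, :, :]).sum(axis=-2)

# Same result as the explicit Python loop over np.dot.
c_loop = np.array([np.dot(x, y) for x, y in zip(a, b)])
assert np.allclose(c, c_loop)
```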


[Numpy-discussion] Meshgrid and mgrid Differences

2010-07-15 Thread Jed Ludlow
Hello, all.

Is there a technical reason that 'meshgrid' and 'mgrid' produce results
which differ from each other by a transpose? For example,

In [1]: X,Y = meshgrid(array([0,1,2,3]), array([0,1,2,3,4,5]))

In [2]: X
Out[2]:
array([[0, 1, 2, 3],
   [0, 1, 2, 3],
   [0, 1, 2, 3],
   [0, 1, 2, 3],
   [0, 1, 2, 3],
   [0, 1, 2, 3]])

In [3]: Y
Out[3]:
array([[0, 0, 0, 0],
   [1, 1, 1, 1],
   [2, 2, 2, 2],
   [3, 3, 3, 3],
   [4, 4, 4, 4],
   [5, 5, 5, 5]])

In [4]: U,V = mgrid[0:4,0:6]

In [5]: U
Out[5]:
array([[0, 0, 0, 0, 0, 0],
   [1, 1, 1, 1, 1, 1],
   [2, 2, 2, 2, 2, 2],
   [3, 3, 3, 3, 3, 3]])

In [6]: V
Out[6]:
array([[0, 1, 2, 3, 4, 5],
   [0, 1, 2, 3, 4, 5],
   [0, 1, 2, 3, 4, 5],
   [0, 1, 2, 3, 4, 5]])

The functions seem to be designed to do a similar task. It may not seem
like a major difference, but it does make for interesting downstream
effects, particularly when working with mayavi.mlab. When you call
mlab.surf(s) it gets the axes sense correct only if you used mgrid to
generate the x-y grid for evaluating s. If you used meshgrid to generate the
x-y grid the surface gets rotated in the scene by 90 degrees.

Regards,

Jed Ludlow
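The transpose relationship described in this message can be checked directly (a small sketch using the same example shapes):

```python
import numpy as np

x = np.array([0, 1, 2, 3])
y = np.array([0, 1, 2, 3, 4, 5])

X, Y = np.meshgrid(x, y)    # shapes (6, 4): Cartesian ('xy') ordering
U, V = np.mgrid[0:4, 0:6]   # shapes (4, 6): matrix ('ij') ordering

# The two results differ exactly by a transpose.
assert np.array_equal(X, U.T)
assert np.array_equal(Y, V.T)
```

(Later NumPy releases added an indexing='ij' keyword to meshgrid that produces the mgrid ordering directly.)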


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Eric Firing
On 07/15/2010 04:54 AM, John Hunter wrote:
 I am seeing a problem on Solaris since I upgraded to svn HEAD.
 np.isinf does not handle np.inf.  See ipython session below.  I am not
 seeing this problem w/ HEAD on an ubuntu linux box I tested on

 In [1]: import numpy as np

 In [2]: np.__version__
 Out[2]: '2.0.0.dev8480'

 In [3]: x = np.inf
  np.inf     np.info    np.infty

 In [3]: x = np.inf

 In [4]: np.isinf(x)
 Warning: invalid value encountered in isinf
 Out[4]: True

 In [5]: np.seter
 np.seterr  np.seterrcall  np.seterrobj

 In [5]: np.seterr(all='raise')
 Out[5]: {'over': 'print', 'divide': 'print', 'invalid': 'print',
 'under': 'ignore'}

 In [6]: np.isinf(x)
 ---
 FloatingPointErrorTraceback (most recent call last)

 /home/titan/johnh/ipython console

 FloatingPointError: invalid value encountered in isinf

 In [7]: !uname -a
 SunOS udesktop191 5.10 Generic_139556-08 i86pc i386 i86pc

 In [43]: !gcc --version
 gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
 Copyright (C) 2004 Free Software Foundation, Inc.
 This is free software; see the source for copying conditions.  There is NO
 warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


 Posted on tracker:

 http://projects.scipy.org/numpy/ticket/1547

John,

This is related to this thread:

http://www.mail-archive.com/numpy-discussion@scipy.org/msg25796.html

The first of the two bugs I noted was fixed; you are seeing the second.

Eric



Re: [Numpy-discussion] Matrix dot product over an axis(for a 3d array/list of matrices)

2010-07-15 Thread Emmanuel Bengio
I get about 60% of the original execution times for about any size of stack.

On 15 July 2010 14:09, Charles R Harris charlesr.har...@gmail.com wrote:



 On Thu, Jul 15, 2010 at 12:00 PM, Emmanuel Bengio beng...@gmail.comwrote:

 Ok I get it. Thanks!

 Numpy syntax that works for me:
 numpy.sum(a[:,:,:,numpy.newaxis]*b[:,numpy.newaxis,:,:],axis=-2)


 The leading ... gives the same thing, but iterates over all the leading
 indices in case you want multidimensional arrays of matrices ;) You can
 also use the sum method which might be a bit more economical:

 (a[:,:,:,numpy.newaxis]*b[:,numpy.newaxis,:,:]).sum(axis=-2)

 How do the execution times compare?

 snip

 Chuck





-- 
Emmanuel
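For reference, the loop and the broadcasting expression from this exchange can be cross-checked in a few lines (illustrative only; timings will vary by stack size and machine):

```python
import numpy as np

n = 1000                        # number of matrices in the stack
a = np.random.rand(n, 4, 4)
b = np.random.rand(n, 4, 4)

# Loop version: one np.dot per pair (slow for large n due to Python overhead).
loop = np.array([np.dot(x, y) for x, y in zip(a, b)])

# Broadcast version from the thread: insert axes so the products line up,
# then sum over the contracted axis.
bcast = (a[:, :, :, np.newaxis] * b[:, np.newaxis, :, :]).sum(axis=-2)

print(np.allclose(loop, bcast))  # True
```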


Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread Benjamin Root
On Thu, Jul 15, 2010 at 12:38 PM, Sturla Molden stu...@molden.no wrote:

 Sorry for the previous mispost.

 This thread reminds me of something I've thought about for a while: Would
 NumPy benefit from an np.ndarraylist subclass of np.ndarray, that has an
 O(1) amortized append like Python lists? (Other methods of Python lists
 (pop, extend) would be worth considering as well.) Or will we get the
 same performance by combining a Python list and ndarray?


Another idea that I always thought was interesting comes from the C++
Standard Library: the .reserve() member function of the vector class, which
allocates storage for the specified number of elements without changing the
vector's length.  It is useful when you have a decent idea of the expected
size of the array but still have to grow it iteratively.  I don't know how
well that would fit into this context, but I thought I ought to mention it.

Ben Root
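The two ideas in this subthread (amortized O(1) append and reserve()-style over-allocation) can be combined in a minimal sketch on top of an ndarray. The class name and interface here are made up for illustration; this is not a proposed NumPy API:

```python
import numpy as np

class GrowableArray:
    """Sketch of append-with-reserve: over-allocate, double capacity on demand."""
    def __init__(self, capacity=16, dtype=float):
        self._buf = np.empty(capacity, dtype=dtype)   # reserved storage
        self._n = 0                                   # logical length

    def append(self, value):
        if self._n == self._buf.shape[0]:
            # Out of capacity: double it (np.resize copies into a new buffer;
            # only the first _n elements matter).
            self._buf = np.resize(self._buf, 2 * self._n)
        self._buf[self._n] = value
        self._n += 1

    @property
    def data(self):
        # View of the filled portion only.
        return self._buf[:self._n]

g = GrowableArray()
for i in range(100):
    g.append(i)
print(g.data.shape)  # (100,)
```

Doubling on overflow is what gives Python lists their amortized O(1) append; the separate capacity/length bookkeeping is the reserve() idea.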


Re: [Numpy-discussion] Matrix dot product over an axis (for a 3d array/list of matrices)

2010-07-15 Thread David Warde-Farley
On 2010-07-15, at 12:38 PM, Emmanuel Bengio beng...@gmail.com wrote:

 Hello,
 
 I have a list of 4x4 transformation matrices, that I want to dot with 
 another list of the same size (elementwise).
 Making a for loop that calculates the dot product of each is extremely slow, 
 I thought that maybe it's due to the fact that I have thousands of matrices 
 and it's a python for loop and there's a high Python overhead.
 
 I do something like this:
  for a, b in izip(Rot, Trans):
      c.append(numpy.dot(a, b))

If you need/want more speed than the solution Chuck proposed, you should check 
out Cython and Tokyo. Cython lets you write loops that execute at C speed, 
whereas Tokyo provides a Cython level wrapper for BLAS (no need to go through 
Python code to call NumPy). Tokyo was designed for exactly your use case: lots 
of matrix multiplies with relatively small matrices, where you start noticing 
the Python overhead.

David


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Pauli Virtanen
Thu, 15 Jul 2010 09:54:12 -0500, John Hunter wrote:
[clip]
 In [4]: np.isinf(x)
 Warning: invalid value encountered in isinf
 Out[4]: True

As far as I know, isinf has always created NaNs -- since 2006 it has been 
defined on unsupported platforms as

(!isnan((x)) && isnan((x)-(x)))

I'll replace it by the obvious

((x) == NPY_INFINITY || (x) == -NPY_INFINITY)

which is true only for +-inf, and cannot raise any FPU exceptions.

-- 
Pauli Virtanen
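The two definitions can be sanity-checked from Python (a sketch of the logic only, not the C macros themselves):

```python
import numpy as np

def isinf_old(x):
    # Old fallback: (!isnan(x) && isnan(x - x)).  Computing inf - inf
    # produces a NaN, which is what trips the "invalid value" machinery.
    return (not np.isnan(x)) and bool(np.isnan(x - x))

def isinf_new(x):
    # Proposed replacement: plain comparisons, no FPU exceptions generated.
    return x == np.inf or x == -np.inf

# Both agree with np.isinf on finite, infinite, and NaN inputs.
for v in (0.0, 1.5, -3.0, np.inf, -np.inf, np.nan):
    assert isinf_old(v) == isinf_new(v) == bool(np.isinf(v))
```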



Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Eric Firing
On 07/15/2010 11:45 AM, Pauli Virtanen wrote:
 Thu, 15 Jul 2010 09:54:12 -0500, John Hunter wrote:
 [clip]
 In [4]: np.isinf(x)
 Warning: invalid value encountered in isinf
 Out[4]: True

 As far as I know, isinf has always created NaNs -- since 2006 it has been
 defined on unsupported platforms as

   (!isnan((x)) && isnan((x)-(x)))

 I'll replace it by the obvious

   ((x) == NPY_INFINITY || (x) == -NPY_INFINITY)

 which is true only for +-inf, and cannot raise any FPU exceptions.


Is it certain that the Solaris compiler lacks isinf?  Is it possible 
that it has it, but it is not being detected?

Eric


Re: [Numpy-discussion] Matrix dot product over an axis (for a 3d array/list of matrices)

2010-07-15 Thread David Warde-Farley
On 2010-07-15, at 4:31 PM, David Warde-Farley wrote:

 If you need/want more speed than the solution Chuck proposed, you should 
 check out Cython and Tokyo. Cython lets you write loops that execute at C 
 speed, whereas Tokyo provides a Cython level wrapper for BLAS (no need to go 
 through Python code to call NumPy). Tokyo was designed for exactly your use 
 case: lots of matrix multiplies with relatively small matrices, where you 
 start noticing the Python overhead.

It occurred to me I neglected to provide a link (cursed iPhone):

http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/

David


Re: [Numpy-discussion] Matrix dot product over an axis (for a 3d array/list of matrices)

2010-07-15 Thread Charles R Harris
On Thu, Jul 15, 2010 at 4:28 PM, David Warde-Farley d...@cs.toronto.eduwrote:

 On 2010-07-15, at 4:31 PM, David Warde-Farley wrote:

  If you need/want more speed than the solution Chuck proposed, you should
 check out Cython and Tokyo. Cython lets you write loops that execute at C
 speed, whereas Tokyo provides a Cython level wrapper for BLAS (no need to go
 through Python code to call NumPy). Tokyo was designed for exactly your use
 case: lots of matrix multiplies with relatively small matrices, where you
 start noticing the Python overhead.

 It occurred to me I neglected to provide a link (cursed iPhone):


 http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/


For speed I'd go straight to C and avoid BLAS since the matrices are so
small. There might also be a cache advantage to copying the non-contiguous
columns of the rhs to the stack.

Chuck


Re: [Numpy-discussion] [SciPy-User] Saving Complex Numbers

2010-07-15 Thread David Warde-Farley
(CCing NumPy-discussion where this really belongs)

On 2010-07-08, at 1:34 PM, cfra...@uci.edu wrote:

 Need Complex numbers in the saved file.

Ack, this has come up several times according to list archives and no one's 
been able to provide a real answer.

It seems that there is nearly no formatting support for complex numbers in
Python. For a single value, "{0.real:.18e}{0.imag:+.18e}".format(val) will get
the job done, but because of the way numpy.savetxt creates its format string
this isn't a trivial fix.

Anyone else have ideas on how complex number format strings can be elegantly 
incorporated in savetxt?

David
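Modern NumPy can write complex arrays with savetxt directly, but at the time a manual workaround in the spirit of the format string above might have looked like this (StringIO stands in for a real file):

```python
import numpy as np
from io import StringIO

z = np.array([1 + 2j, 3 - 4j, -5.5 + 0.25j])

# Write one value per line with the format string from the post, bypassing
# savetxt's automatic fmt handling entirely.  A trailing "j" makes the text
# parseable by complex().
fmt = "{0.real:.18e}{0.imag:+.18e}j"
buf = StringIO()
for val in z:
    buf.write(fmt.format(val) + "\n")

# The written text round-trips cleanly through Python's complex() constructor.
parsed = np.array([complex(line) for line in buf.getvalue().splitlines()])
print(np.allclose(parsed, z))  # True
```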


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread John Hunter
On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu wrote:
 Is it certain that the Solaris compiler lacks isinf?  Is it possible
 that it has it, but it is not being detected?

Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on solaris x86


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread John Hunter
On Thu, Jul 15, 2010 at 7:11 PM, John Hunter jdh2...@gmail.com wrote:
 On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu wrote:
 Is it certain that the Solaris compiler lacks isinf?  Is it possible
 that it has it, but it is not being detected?

 Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on solaris x86

Correction: the version of gcc I compiled numpy with is different than
the one in my default path.  The version I compiled numpy with is

   /opt/app/g++lib6/gcc-4.2/bin/gcc --version
  gcc (GCC) 4.2.2

running on solaris 5.10

JDH


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Charles R Harris
On Thu, Jul 15, 2010 at 6:11 PM, John Hunter jdh2...@gmail.com wrote:

 On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu wrote:
  Is it certain that the Solaris compiler lacks isinf?  Is it possible
  that it has it, but it is not being detected?

 Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on solaris
 x86


Might be related to this thread:
http://thread.gmane.org/gmane.comp.python.numeric.general/38298
What version of numpy are you using?

Chuck


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread John Hunter
On Thu, Jul 15, 2010 at 7:27 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Thu, Jul 15, 2010 at 6:11 PM, John Hunter jdh2...@gmail.com wrote:

 On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu wrote:
  Is it certain that the Solaris compiler lacks isinf?  Is it possible
  that it has it, but it is not being detected?

 Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on solaris
 x86

 Might be related to this thread.  What version of numpy are you using?

svn HEAD (2.0.0.dev8480)

After reading the thread you suggested, I tried forcing the

  CFLAGS=-DNPY_HAVE_DECL_ISFINITE

flag to be set, but this is apparently a bad idea for my platform...

  File "/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/__init__.py", line 5, in ?
    import multiarray
ImportError: ld.so.1: python: fatal: relocation error: file
/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/multiarray.so:
symbol isfinite: referenced symbol not found

so while I think my bug is related to that thread, I don't see
anything in that thread to help me fix my problem.  Or am I missing
something?


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Charles R Harris
On Thu, Jul 15, 2010 at 6:42 PM, John Hunter jdh2...@gmail.com wrote:

 On Thu, Jul 15, 2010 at 7:27 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Thu, Jul 15, 2010 at 6:11 PM, John Hunter jdh2...@gmail.com wrote:
 
  On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu
 wrote:
   Is it certain that the Solaris compiler lacks isinf?  Is it possible
   that it has it, but it is not being detected?
 
  Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on
 solaris
  x86
 
  Might be related to this thread.  What version of numpy are you using?

 svn HEAD (2.0.0.dev8480)

 After reading the thread you suggested, I tried forcing the

  CFLAGS=-DNPY_HAVE_DECL_ISFINITE

 flag to be set, but this is apparently a bad idea for my platform...

  File
 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/__init__.py,
 line 5, in ?
import multiarray
 ImportError: ld.so.1: python: fatal: relocation error: file
 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/multiarray.so:
 symbol isfinite: referenced symbol not found

 so while I think my bug is related to that thread, I don't see
 anything in that thread to help me fix my problem.  Or am I missing
 something?
 


In the thread there is a way to check if isinf is in the library. You can
also grep through the python include files where its presence should be set,
that's in /usr/include/python2.6/pyconfig.h on my system. If it's present,
then you should be able to apply David's fix:
http://projects.scipy.org/numpy/changeset/8455
which amounts to just a few lines.

Chuck


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Charles R Harris
On Thu, Jul 15, 2010 at 6:55 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Thu, Jul 15, 2010 at 6:42 PM, John Hunter jdh2...@gmail.com wrote:

 On Thu, Jul 15, 2010 at 7:27 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Thu, Jul 15, 2010 at 6:11 PM, John Hunter jdh2...@gmail.com wrote:
 
  On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu
 wrote:
   Is it certain that the Solaris compiler lacks isinf?  Is it possible
   that it has it, but it is not being detected?
 
  Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on
 solaris
  x86
 
  Might be related to this thread.  What version of numpy are you using?

 svn HEAD (2.0.0.dev8480)

 After reading the thread you suggested, I tried forcing the

  CFLAGS=-DNPY_HAVE_DECL_ISFINITE

 flag to be set, but this is apparently a bad idea for my platform...

  File
 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/__init__.py,
 line 5, in ?
import multiarray
 ImportError: ld.so.1: python: fatal: relocation error: file

 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/multiarray.so:
 symbol isfinite: referenced symbol not found

 so while I think my bug is related to that thread, I don't see
 anything in that thread to help me fix my problem.  Or am I missing
 something?
 


 In the thread there is a way to check if isinf is in the library. You can
 also grep through the python include files where its presence should be set,
 that's in /usr/include/python2.6/pyconfig.h on my system. If it's present,
  then you should be able to apply David's fix:
  http://projects.scipy.org/numpy/changeset/8455
  which amounts to just a few lines.


PS, of course we should fix the macro also. Since the bit values of +/-
infinity are known we should be able to define them as constants using a
couple of ifdefs and unions. I believe SUN has quad precision so we might
need to check the layout, but the usual case seems to be zero mantissa,
maximum exponent, and the sign. Not a number differs in that the mantissa is
non-zero. I believe there are two versions of nans, signaling and
non-signaling, but in any case it doesn't look to me like it should be
impossible to simply have all those values as numpy constants like pi and e.

Chuck


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Charles R Harris
On Thu, Jul 15, 2010 at 7:09 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Thu, Jul 15, 2010 at 6:55 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Thu, Jul 15, 2010 at 6:42 PM, John Hunter jdh2...@gmail.com wrote:

 On Thu, Jul 15, 2010 at 7:27 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Thu, Jul 15, 2010 at 6:11 PM, John Hunter jdh2...@gmail.com
 wrote:
 
  On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu
 wrote:
   Is it certain that the Solaris compiler lacks isinf?  Is it possible
   that it has it, but it is not being detected?
 
  Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on
 solaris
  x86
 
  Might be related to this thread.  What version of numpy are you using?

 svn HEAD (2.0.0.dev8480)

 After reading the thread you suggested, I tried forcing the

  CFLAGS=-DNPY_HAVE_DECL_ISFINITE

 flag to be set, but this is apparently a bad idea for my platform...

  File
 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/__init__.py,
 line 5, in ?
import multiarray
 ImportError: ld.so.1: python: fatal: relocation error: file

 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/multiarray.so:
 symbol isfinite: referenced symbol not found

 so while I think my bug is related to that thread, I don't see
 anything in that thread to help me fix my problem.  Or am I missing
 something?
 


 In the thread there is a way to check if isinf is in the library. You can
 also grep through the python include files where its presence should be set,
 that's in /usr/include/python2.6/pyconfig.h on my system. If it's present,
  then you should be able to apply David's fix:
  http://projects.scipy.org/numpy/changeset/8455
  which amounts to just a few lines.


 PS, of course we should fix the macro also. Since the bit values of +/-
 infinity are known we should be able to define them as constants using a
 couple of ifdefs and unions. I believe SUN has quad precision so we might
 need to check the layout, but the usual case seems to be zero mantissa,
 maximum exponent, and the sign. Not a number differs in that the mantissa is
 non-zero. I believe there are two versions of nans, signaling and
 non-signaling, but in any case it doesn't look to me like it should be
 impossible to simply have all those values as numpy constants like pi and e.


And here are some functions stolen from here:
http://ftp.math.utah.edu/pub//zsh/zsh-4.1.1-nonstop-fp.tar.gz
They have an old-style BSD-type license which needs quoting.

#if !defined(HAVE_ISINF)
/**/
int
(isinf)(double x)
{
    if ((-1.0 < x) && (x < 1.0))  /* x is small, and thus finite */
        return (0);
    else if ((x + x) == x)        /* only true if x == Infinity */
        return (1);
    else                          /* must be finite (normal or subnormal), or NaN */
        return (0);
}
#endif

#if !defined(HAVE_ISNAN)
/**/
static double
(store)(double *x)
{
    return (*x);
}

/**/
int
(isnan)(double x)
{
    /* (x != x) should be sufficient, but some compilers incorrectly
       optimize it away */
    return (store(&x) != store(&x));
}
#endif

Chuck
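Transliterated into Python, the two fallbacks above can be exercised quickly (a sketch for checking the logic only, not a substitute for testing the C build):

```python
import math

def bsd_isinf(x):
    if -1.0 < x < 1.0:        # x is small, and thus finite
        return 0
    elif (x + x) == x:        # only true for +/-Infinity once x is not small
        return 1
    else:                     # finite (normal or subnormal), or NaN
        return 0

def bsd_isnan(x):
    # NaN is the only value unequal to itself; the C version goes through
    # store() purely to defeat over-eager compiler optimization.
    return x != x

for v in (0.0, 1.0, -2.5, float("inf"), float("-inf"), float("nan")):
    assert bool(bsd_isinf(v)) == math.isinf(v)
    assert bsd_isnan(v) == math.isnan(v)
```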


Re: [Numpy-discussion] isinf raises in inf

2010-07-15 Thread Charles R Harris
On Thu, Jul 15, 2010 at 6:42 PM, John Hunter jdh2...@gmail.com wrote:

 On Thu, Jul 15, 2010 at 7:27 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
  On Thu, Jul 15, 2010 at 6:11 PM, John Hunter jdh2...@gmail.com wrote:
 
  On Thu, Jul 15, 2010 at 6:14 PM, Eric Firing efir...@hawaii.edu
 wrote:
   Is it certain that the Solaris compiler lacks isinf?  Is it possible
   that it has it, but it is not being detected?
 
  Just to clarify, I'm not using the sun compiler, but gcc-3.4.3 on
 solaris
  x86
 
  Might be related to this thread.  What version of numpy are you using?

 svn HEAD (2.0.0.dev8480)

 After reading the thread you suggested, I tried forcing the

  CFLAGS=-DNPY_HAVE_DECL_ISFINITE

 flag to be set, but this is apparently a bad idea for my platform...

  File
 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/__init__.py,
 line 5, in ?
import multiarray
 ImportError: ld.so.1: python: fatal: relocation error: file
 /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/multiarray.so:
 symbol isfinite: referenced symbol not found

 so while I think my bug is related to that thread, I don't see
 anything in that thread to help me fix my problem.  Or am I missing
 something?


Can you try out this patch without David's fixes?

diff --git a/numpy/core/include/numpy/npy_math.h b/numpy/core/include/numpy/npy_math.h
index d53900e..341fb58 100644
--- a/numpy/core/include/numpy/npy_math.h
+++ b/numpy/core/include/numpy/npy_math.h
@@ -151,13 +151,13 @@ double npy_spacing(double x);
 #endif

 #ifndef NPY_HAVE_DECL_ISFINITE
-#define npy_isfinite(x) !npy_isnan((x) + (-x))
+#define npy_isfinite(x) (((x) + (x)) != (x) && (x) == (x))
 #else
 #define npy_isfinite(x) isfinite((x))
 #endif

 #ifndef NPY_HAVE_DECL_ISINF
-#define npy_isinf(x) (!npy_isfinite(x) && !npy_isnan(x))
+#define npy_isinf(x) (((x) + (x)) == (x) && (x) != 0)
 #else
 #define npy_isinf(x) isinf((x))
 #endif


Chuck
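The proposed npy_isinf expression can be checked from Python against np.isinf (again just a logic sketch; it assumes default IEEE round-to-nearest, where x + x == x holds only for zero and the infinities):

```python
import numpy as np

def npy_isinf(x):
    # ((x) + (x)) == (x) && (x) != 0, transliterated from the patch above.
    # Zero is excluded by the second test; NaN fails the first, since
    # nan + nan == nan is false.
    return (x + x) == x and x != 0

for v in (0.0, -0.0, 1e-300, 1.0, np.inf, -np.inf, np.nan):
    assert npy_isinf(v) == bool(np.isinf(v))
```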