Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread David Cournapeau
On Tue, Mar 24, 2009 at 6:32 AM, Bruce Southey  wrote:

> I get a problem with using longdouble as that is the dtype that causes
> the  TestPower.test_large_types to crash.

Hey, when I said the windows 64 bits support was experimental, I meant it :)

> Also, np.finfo(np.float128) crashes. I can assign and multiply
> longdoubles and take the square root but not use the power '**'.
>  >>> y=np.longdouble(2)
>  >>> y
> 2.0
>  >>> y**1
> 2.0
>  >>> y**2
> crash

There was a bug in the mingw powl function, but I thought the problem
was fixed upstream. I will look at it.

This shows that numpy lacks long double testing, though - the numpy test
suite passes 100 % (when it does not crash :) ), but the long double
support is very flaky at best on the windows 64 + mingw combination
ATM.

cheers,

David


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread David Cournapeau
2009/3/24 Charles R Harris :
>
>
> On Mon, Mar 23, 2009 at 12:34 PM, Robert Pyle 
> wrote:
>>
>> Hi all,
>>
>> This is a continuation of something I started last week, but with a
>> more appropriate subject line.
>>
>> To recap, my machine is a dual G5 running OS X 10.5.6, my python is
>>
>>    Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008,
>> 15:28:32)
>>
>> and numpy 1.3.0b1 was installed from the source tarball in the
>> straightforward way with
>>
>>    sudo python setup.py install
>>
>>
>> On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote:
>>
>> > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote:
>> >
>> > Thanks for tracking this! Can you check what your platform gives for:
>> >
>> > > import numpy as np
>> > > info = np.finfo(np.longcomplex)
>> > > print "eps:", info.eps, info.eps.dtype
>> > > print "tiny:", info.tiny, info.tiny.dtype
>> > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps)
>> >
>> > eps: 1.3817869701e-76 float128
>> > tiny: -1.08420217274e-19 float128
>> > log10: nan nan
>> >
>> > The log of a negative number is nan, so part of the problem is the
>> > value of tiny. The size of the values also look suspect to me. On my
>> > machine
>> >
>> > In [8]: finfo(longcomplex).eps
>> > Out[8]: 1.084202172485504434e-19
>> >
>> > In [9]: finfo(float128).tiny
>> > Out[9]: array(3.3621031431120935063e-4932, dtype=float128)
>> >
>> > So at a minimum eps and tiny are reversed.
>> >
>> > I started to look at the code for this but my eyes rolled up in my
>> > head and I passed out. It could use some improvements...
>> >
>> > Chuck
>>
>> I have chased this a bit (or perhaps 128 bits) further.
>>
>> The problem seems to be that float128 is screwed up in general.  I
>> tracked the test error back to lines 95-107 in
>>
>> /PyModules/numpy-1.3.0b1/build/lib.macosx-10.3-ppc-2.5/numpy/lib/
>> machar.py
>>
>> Here is a short program built from these lines that demonstrates what
>> I believe to be at the root of the test failure.
>>
>> ##
>> #! /usr/bin/env python
>>
>> import numpy as np
>> import binascii as b
>>
>> def t(type="float"):
>>     max_iterN = 1
>>     print "\ntesting %s" % type
>>     a = np.array([1.0],type)
>>     one = a
>>     zero = one - one
>>     for _ in xrange(max_iterN):
>>         a = a + a
>>         temp = a + one
>>         temp1 = temp - a
>>         print _+1, b.b2a_hex(temp[0]), temp1
>>         if any(temp1 - one != zero):
>>             break
>>     return
>>
>> if __name__ == '__main__':
>>     t(np.float32)
>>     t(np.float64)
>>     t(np.float128)
>>
>> ##
>>
>> This tries to find the number of bits in the significand by
>> calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum
>> is indistinguishable from (2.0**n), that is, when the added 1.0 has
>> fallen off the bottom of the significand.
>>
>> My print statement shows the power of 2.0, the hex representation of
>> ((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which
>> one expects to be 1.0 up to the point where the added 1.0 is lost.
>>
>> Here are the last few lines printed for float32:
>>
>> 19 4910 [ 1.]
>> 20 4988 [ 1.]
>> 21 4a04 [ 1.]
>> 22 4a82 [ 1.]
>> 23 4b01 [ 1.]
>> 24 4b80 [ 0.]
>>
>> You can see the added 1.0 marching to the right and off the edge at 24
>> bits.
>>
>> Similarly, for float64:
>>
>> 48 42f00010 [ 1.]
>> 49 4308 [ 1.]
>> 50 4314 [ 1.]
>> 51 4322 [ 1.]
>> 52 4331 [ 1.]
>> 53 4340 [ 0.]
>>
>> There are 53 bits, just as IEEE 754 would lead us to hope.
>>
>> However, for float128:
>>
>> 48 42f00010 [1.0]
>> 49 4308 [1.0]
>> 50 4314 [1.0]
>> 51 4322 [1.0]
>> 52 4331 [1.0]
>> 53 43403ff0 [1.0]
>> 54 43503ff0 [1.0]
>>
>> Something weird happens as we pass 53 bits.  I think lines 53 and 54
>> *should* be
>
> PPC stores long doubles as two doubles. I don't recall exactly how the two
> are used, but the result is that the numbers aren't in the form you would
> expect. Long doubles on the PPC have always been iffy, so it is no surprise
> that machar fails. The failure on SPARC quad precision bothers me more.
>
> I think the easy thing to do for the 1.3 release is to fix the precision
> test to use a hardwired range of values, I don't think testing the extreme
> small values is necessary to check the power series expansion. But I have
> been leaving that fixup to Pauli.
>
> Longer term, I think the values in finfo could come from npy_cpu.h and be
> hardwired in.

I don't think it is a good idea: long double support depends on 3
things (CPU, toolchain, OS), so hardwiring them would be a nightmare,
since the number of cases could easily go > 100.

 > Anyhow, PPC is an exception in the way
> it

Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Charles R Harris
On Mon, Mar 23, 2009 at 6:06 PM, Pauli Virtanen  wrote:

> Mon, 23 Mar 2009 16:52:28 -0600, Charles R Harris wrote:
> [clip]
> >> >  >>> y=np.longdouble(2)
> >> >  >>> y
> >> > 2.0
> >> >  >>> y**1
> >> > 2.0
> >> >  >>> y**2
> >> > crash
> >>
> >> Ok, this looks a bit tricky, I have no idea what's going on. Why does
> >> it not crash with the exponent 1...
> >
> > I'd guess because nothing happens, the function simply returns.
>
> Which function? The code path in question appears to be through
> @n...@_power at scalarmathmodule.c.src:755, and from there it directly
> seems to go to _basic_longdouble_pow, where it calls npy_pow, which calls
> system's powl. (Like so on my system, verified with gdb.)
>
> I don't see branches testing for exponent 1, so this probably means that
> the crash occurs inside powl?
>

Yes, I think so. But powl itself might special-case some exponent values
and follow different paths accordingly. The float128 support looks like it comes
from mingw and, since 64 bit support is still in development, what we might be seeing is a
bug in either mingw or its linking with MS. I don't know if mingw uses its
own library for extended precision, but I'm pretty sure MS doesn't support
it yet.

Chuck


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Pauli Virtanen
Mon, 23 Mar 2009 16:52:28 -0600, Charles R Harris wrote:
[clip]
>> >  >>> y=np.longdouble(2)
>> >  >>> y
>> > 2.0
>> >  >>> y**1
>> > 2.0
>> >  >>> y**2
>> > crash
>>
>> Ok, this looks a bit tricky, I have no idea what's going on. Why does
>> it not crash with the exponent 1...
>
> I'd guess because nothing happens, the function simply returns.

Which function? The code path in question appears to be through 
@n...@_power at scalarmathmodule.c.src:755, and from there it directly 
seems to go to _basic_longdouble_pow, where it calls npy_pow, which calls 
system's powl. (Like so on my system, verified with gdb.)

I don't see branches testing for exponent 1, so this probably means that 
the crash occurs inside powl?

-- 
Pauli Virtanen



Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Charles R Harris
On Mon, Mar 23, 2009 at 4:27 PM, Pauli Virtanen  wrote:

> Mon, 23 Mar 2009 16:32:47 -0500, Bruce Southey wrote:
> [clip: crashes with longdouble on Windows 64]
> > No.
> >
> > I get a problem with using longdouble as that is the dtype that causes
> > the  TestPower.test_large_types to crash. Also, np.finfo(np.float128)
> > crashes. I can assign and multiply longdoubles and take the square root
> > but not use the power '**'.
> >  >>> y=np.longdouble(2)
> >  >>> y
> > 2.0
> >  >>> y**1
> > 2.0
> >  >>> y**2
> > crash
>
> Ok, this looks a bit tricky, I have no idea what's going on. Why does it
> not crash with the exponent 1...
>

I'd guess because nothing happens, the function simply returns.

Chuck


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Pauli Virtanen
Mon, 23 Mar 2009 16:18:52 -0600, Charles R Harris wrote:
[clip: #1008 fixes]
>> Backport?
>>
>>
> I think so. It is a bug and the fix doesn't look complicated.
> 
> I don't much like all the ifdefs in the middle of the code, but if there
> is a cleaner way to do it, it can wait.

Done, r6717.

Sorry about the long time it took to get this fixed...

-- 
Pauli Virtanen



Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Pauli Virtanen
Mon, 23 Mar 2009 16:32:47 -0500, Bruce Southey wrote:
[clip: crashes with longdouble on Windows 64]
> No.
> 
> I get a problem with using longdouble as that is the dtype that causes
> the  TestPower.test_large_types to crash. Also, np.finfo(np.float128)
> crashes. I can assign and multiply longdoubles and take the square root
> but not use the power '**'.
>  >>> y=np.longdouble(2)
>  >>> y
> 2.0
>  >>> y**1
> 2.0
>  >>> y**2
> crash

Ok, this looks a bit tricky, I have no idea what's going on. Why does it 
not crash with the exponent 1...

-- 
Pauli Virtanen



Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Charles R Harris
On Mon, Mar 23, 2009 at 4:01 PM, Pauli Virtanen  wrote:

> Mon, 23 Mar 2009 19:55:17 +, Pauli Virtanen wrote:
>
> > Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote: [clip]
> >> PPC stores long doubles as two doubles. I don't recall exactly how the
> >> two are used, but the result is that the numbers aren't in the form you
> >> would expect. Long doubles on the PPC have always been iffy, so it is
> >> no surprise that machar fails. The failure on SPARC quad precision
> >> bothers me more.
> >
> > The test fails on SPARC, since we need one term more in the Horner
> > series to reach quad precision accuracy. I'll add that for long doubles.
>
> Another reason turned out to be that (1./6) is a double-precision
> constant, whereas the series of course needs an appropriate precision for
> each data type. Fixed in r6715, r6716.
>

Heh, I should have caught that too when I looked it over.


>
> I also skip the long double test if it seems that finfo(longdouble) is
> bogus.
>
> Backport?
>

I think so. It is a bug and the fix doesn't look complicated.

I don't much like all the ifdefs in the middle of the code, but if there is
a cleaner way to do it, it can wait.

Chuck


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Pauli Virtanen
Mon, 23 Mar 2009 19:55:17 +, Pauli Virtanen wrote:

> Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote: [clip]
>> PPC stores long doubles as two doubles. I don't recall exactly how the
>> two are used, but the result is that the numbers aren't in the form you
>> would expect. Long doubles on the PPC have always been iffy, so it is
>> no surprise that machar fails. The failure on SPARC quad precision
>> bothers me more.
> 
> The test fails on SPARC, since we need one term more in the Horner
> series to reach quad precision accuracy. I'll add that for long doubles.

Another reason turned out to be that (1./6) is a double-precision 
constant, whereas the series of course needs an appropriate precision for 
each data type. Fixed in r6715, r6716.
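
For illustration, a minimal sketch of the pitfall (not the actual test code; assume a platform where longdouble is wider than double): a Python float literal carries only double precision, so it silently truncates a long double expression unless the constant is built in the working dtype.

import numpy as np

x = np.longdouble(1) / 3

# A Python literal like (1./6) is rounded to 53-bit double precision before
# it ever meets the long double operand...
approx = x * (1./6)

# ...whereas a coefficient built in the working dtype keeps full precision.
exact = x * (np.longdouble(1) / 6)

print(abs(approx - exact))   # nonzero wherever longdouble is wider than double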

I also skip the long double test if it seems that finfo(longdouble) is 
bogus.

Backport?

-- 
Pauli Virtanen



Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Charles R Harris
On Mon, Mar 23, 2009 at 3:57 PM, Charles R Harris  wrote:

>
>
> On Mon, Mar 23, 2009 at 3:32 PM, Bruce Southey  wrote:
>
>> Pauli Virtanen wrote:
>> > Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote:
>> > [clip]
>> >
>> >> I do not know if this is related, but I got similar error with David's
>> >> windows 64 bits installer on my 64 bit Vista system.
>> >>
>> http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html
>> >>
>> >> In particular this code crashes:
>> >>
>> > import numpy as np
>> > info = np.finfo(np.longcomplex)
>> >
>> >
>> > Could you narrow that down a bit: do
>> >
>> > import numpy as np
>> > z = np.longcomplex(complex(1.,1.))
>> > z + z
>> > z - z
>> > z * z
>> > z / z
>> > z + 2
>> > z - 2
>> > z * 2
>> > z / 2
>> > z**0
>> > z**1
>> > z**2
>> > z**3
>> > z**4
>> > z**4.5
>> > z**(-1)
>> > z**(-2)
>> > z**101
>> >
>> > Do you get a crash at some point?
>> >
>> >
>> No.
>>
>> I get a problem with using longdouble as that is the dtype that causes
>> the  TestPower.test_large_types to crash.
>> Also, np.finfo(np.float128) crashes. I can assign and multiply
>> longdoubles and take the square root but not use the power '**'.
>>  >>> y=np.longdouble(2)
>>  >>> y
>> 2.0
>>  >>> y**1
>> 2.0
>>  >>> y**2
>> crash
>>
>
> Do you know if your binary was compiled with MSVC or mingw? I suspect the
> latter because I don't think MSVC supports float128 (long doubles are
> doubles). So there might be a library problem...
>
> Chuck
>

On the other hand, I believe python is compiled with MSVC. This might be
causing some incompatibilities with a mingw compiled numpy.

Chuck


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Charles R Harris
On Mon, Mar 23, 2009 at 3:32 PM, Bruce Southey  wrote:

> Pauli Virtanen wrote:
> > Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote:
> > [clip]
> >
> >> I do not know if this is related, but I got similar error with David's
> >> windows 64 bits installer on my 64 bit Vista system.
> >> http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html
> >>
> >> In particular this code crashes:
> >>
> > import numpy as np
> > info = np.finfo(np.longcomplex)
> >
> >
> > Could you narrow that down a bit: do
> >
> > import numpy as np
> > z = np.longcomplex(complex(1.,1.))
> > z + z
> > z - z
> > z * z
> > z / z
> > z + 2
> > z - 2
> > z * 2
> > z / 2
> > z**0
> > z**1
> > z**2
> > z**3
> > z**4
> > z**4.5
> > z**(-1)
> > z**(-2)
> > z**101
> >
> > Do you get a crash at some point?
> >
> >
> No.
>
> I get a problem with using longdouble as that is the dtype that causes
> the  TestPower.test_large_types to crash.
> Also, np.finfo(np.float128) crashes. I can assign and multiply
> longdoubles and take the square root but not use the power '**'.
>  >>> y=np.longdouble(2)
>  >>> y
> 2.0
>  >>> y**1
> 2.0
>  >>> y**2
> crash
>

Do you know if your binary was compiled with MSVC or mingw? I suspect the
latter because I don't think MSVC supports float128 (long doubles are
doubles). So there might be a library problem...

Chuck


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Bruce Southey
Pauli Virtanen wrote:
> Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote:
> [clip]
>   
>> I do not know if this is related, but I got similar error with David's
>> windows 64 bits installer on my 64 bit Vista system.
>> http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html
>>
>> In particular this code crashes:
>> 
> import numpy as np
> info = np.finfo(np.longcomplex)
>   
>
> Could you narrow that down a bit: do
>
> import numpy as np
> z = np.longcomplex(complex(1.,1.))
> z + z
> z - z
> z * z
> z / z
> z + 2
> z - 2
> z * 2
> z / 2
> z**0
> z**1
> z**2
> z**3
> z**4
> z**4.5
> z**(-1)
> z**(-2)
> z**101
>
> Do you get a crash at some point?
>
>   
No.

I get a problem with using longdouble as that is the dtype that causes 
the  TestPower.test_large_types to crash.
Also, np.finfo(np.float128) crashes. I can assign and multiply 
longdoubles and take the square root but not use the power '**'.
 >>> y=np.longdouble(2)
 >>> y
2.0
 >>> y**1
2.0
 >>> y**2
crash


Bruce


[Numpy-discussion] ANNOUNCE: ETS 3.2.0 Released

2009-03-23 Thread Dave Peterson
Hello,

I'm pleased to announce that Enthought Tool Suite (ETS) version 3.2.0 
has been tagged and released!

Source distributions (.tar.gz) have been uploaded to PyPI, and Windows 
binaries will follow shortly. A full install of ETS can be done using 
Setuptools via a command like:
easy_install -U "ets[nonets] >= 3.2.0"

NOTE 1: Users of an old ETS release will need to first uninstall prior 
to installing the new ETS.

NOTE 2: If you get a 'SandboxViolation' error, simply re-run the command 
again -- it may take multiple invocations to get everything installed. 
(This error appears to be a long-standing incompatibility between 
numpy.distutils and setuptools.)

Please see below for a list of what's new in this release.


What Is ETS?
============

The Enthought Tool Suite (ETS) is a collection of components developed 
by Enthought and the open-source community, which we use every day to 
construct custom scientific applications. It includes a wide variety of 
components, including:
* an extensible application framework
* application building blocks
* 2-D and 3-D graphics libraries
* scientific and math libraries
* developer tools
The cornerstone on which these tools rest is the Traits package, which 
provides explicit type declarations in Python; its features include 
initialization, validation, delegation, notification, and visualization 
of typed attributes.

More information on ETS is available from the development home page:
http://code.enthought.com/projects/index.php


Changelog
=========

ETS 3.2.0 is a feature-added update to ETS 3.1.0, including numerous
bug-fixes. Some of the notable changes include:

Chaco
-

* Domain limits - Mappers now can declare the "limits" of their valid 
domain. PanTool and ZoomTool respect these limits. (pwang)

* Adding "hide_grids" parameter to Plot.img_plot() and 
Plot.contour_plot() so users can override the default behavior of hiding 
grids. (pwang)

* Refactored examples to declare a Demo object so they can be run 
with the demo.py example launcher. (vibha)

* Adding chaco.overlays package with some canned SVG overlays. (bhendrix)

* DragZoom now can scale both X and Y axes independently corresponding 
to the mouse cursor motion along the X and Y axes (similar to the zoom 
behavior in Matplotlib). (pwang)

* New Examples:
* world map (bhendrix)
* more financial plots (pwang)
* scatter_toggle (pwang)
* stacked_axis (pwang)

* Fixing the chaco.scales TimeFormatter to use the built-in localtime() 
instead of the one in the safetime.py module due to Daylight Savings 
Time issues with timedelta. (r23231, pwang)

* Improved behavior of ScatterPlot when it doesn't get the type of 
metadata it expects in its "selections" and "selection_masks" metadata 
keys (r23121, pwang)

* Setting the .range2d attribute on GridMapper now properly sets the two 
DataRange1D instances of its sub-mappers. (r23119, pwang)

* ScatterPlot.map_index() now respects the index_only flag (r23060, pwang)

* Fixed occasional traceback/bug in LinePlot that occurred when data was 
completely outside the visible range (r23059, pwang)

* Implementing is_in() on legends to account for padding and alignment 
(caused by tools that move the legend) (r23052, bhendrix)

* Legend behaves properly when there are no plots to display (r23012, judah)

* Fixed LogScale in the chaco.scales package to correctly handle the 
case when the length of the interval is less than a decade (r22907, 
warren.weckesser)

* Fixed traceback when calling copy_traits() on a DataView (r22894, vibha)

* Scatter plots generated by Plot.plot() now properly use the "auto" 
coloring feature of Plot. (r22727, pwang)

* Reduced the size of screenshots in the user manual. (r22720, rkern)


Mayavi
--

* 17, 18 March, 2009 (PR):
* NEW: A simple example to show how one can use TVTK’s visual module 
with mlab. [23250]
* BUG: The size trait was being overridden and was different from the 
parent causing a bug with resizing the viewer. [23243]

* 15 March, 2009 (GV):
* ENH: Add a volume factory to mlab that knows how to set color, vmin 
and vmax for the volume module [23221].

* 14 March, 2009 (PR):
* API/TEST: Added a new testing entry point: ‘mayavi -t’ now runs tests 
in separate process, for isolation. Added enthought.mayavi.api.test to 
allow for simple testing from the interpreter [23195]...[23200], 
[23213], [23214], [23223].
* BUG: The volume module was directly importing the wx_gradient_editor 
leading to an import error when no wxPython is available. This has been 
tested and fixed. Thanks to Christoph Bohme for reporting this issue. 
[23191]

* 14 March, 2009 (GV):
* BUG: [mlab]: fix positioning for titles [23194], and opacity for 
titles and text [23193].
* ENH: Add the mlab_source attribute on all objects created by mlab, 
when possible [23201], [23209].
* ENH: Add a message to help the first-time user, using the new banner 
feature of the IPython shell view [23208].

* 13 March, 2009 (PR):
* NEW/API: Adding a powerful T

Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Charles R Harris
On Mon, Mar 23, 2009 at 2:22 PM, Robert Pyle  wrote:

> > PPC stores long doubles as two doubles. I don't recall exactly how
> > the two are used, but the result is that the numbers aren't in the
> > form you would expect. Long doubles on the PPC have always been
> > iffy, so it is no surprise that machar fails. The failure on SPARC
> > quad precision bothers me more.
>
> Ah, now I see.  A little more googling and I find that the PPC long
> double value is just the sum of the two halves, each looking like a
> double on its own.
>
> That brought back a distant memory!  The DEC-20 used a similar
> scheme.  Conversion from double to single precision floating point was
> as simple as adding the two halves.  Now this at most changes the
> least-significant bit of the upper half.  Sometime around 1970, I
> wrote something in DEC-20 assembler that accumulated in double
> precision, but returned a single-precision result.  In order to insure
> that I understood the double-precision floating-point format, I wrote
> a trivial Fortran program to test the conversion from double to single
> precision.  The Fortran program set the LSB of the more-significant
> half seemingly at random, with no apparent relation to the actual
> value of the less-significant half.
>
> More digging with dumps from the Fortran compiler showed that its
> authors had not understood the double-precision FP format at all.  It
> took quite a few phone calls to DEC before they believed it, but they
> did fix it about two or three months later.
>

I wonder if you could get the same service today? Making even one phone call
can be a long term project calling for a plate of cheese and fruit and a
bottle of wine...

Chuck


Re: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute

2009-03-23 Thread Sturla Molden
Sturla Molden wrote:
>> def fromaddress(address, nbytes, dtype=double):
I guess dtype=float works better...


S.M.


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Robert Pyle
> PPC stores long doubles as two doubles. I don't recall exactly how  
> the two are used, but the result is that the numbers aren't in the  
> form you would expect. Long doubles on the PPC have always been  
> iffy, so it is no surprise that machar fails. The failure on SPARC  
> quad precision bothers me more.

Ah, now I see.  A little more googling and I find that the PPC long  
double value is just the sum of the two halves, each looking like a  
double on its own.
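
As a toy illustration of that idea (a sketch of "double-double" bookkeeping in pure Python, not what the PPC hardware or numpy actually executes):

def two_sum(a, b):
    # Error-free transformation: hi + lo equals a + b exactly, with hi the
    # ordinary double-rounded sum and lo the rounding error it discarded.
    hi = a + b
    bb = hi - a
    lo = (a - (hi - bb)) + (b - bb)
    return hi, lo

hi, lo = two_sum(1.0, 2.0**-60)
print(hi)              # 1.0 -- the small term fell below double precision
print(lo)              # 2**-60 survives in the second double
print(hi == hi + lo)   # True: collapsing back to a single double loses it again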

That brought back a distant memory!  The DEC-20 used a similar  
scheme.  Conversion from double to single precision floating point was  
as simple as adding the two halves.  Now this at most changes the  
least-significant bit of the upper half.  Sometime around 1970, I  
wrote something in DEC-20 assembler that accumulated in double  
precision, but returned a single-precision result.  In order to insure  
that I understood the double-precision floating-point format, I wrote  
a trivial Fortran program to test the conversion from double to single  
precision.  The Fortran program set the LSB of the more-significant  
half seemingly at random, with no apparent relation to the actual  
value of the less-significant half.

More digging with dumps from the Fortran compiler showed that its  
authors had not understood the double-precision FP format at all.  It  
took quite a few phone calls to DEC before they believed it, but they  
did fix it about two or three months later.

Bob




Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Pauli Virtanen
Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote:
[clip]
> I do not know if this is related, but I got similar error with David's
> windows 64 bits installer on my 64 bit Vista system.
> http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html
> 
> In particular this code crashes:
> >>> import numpy as np
> >>> info = np.finfo(np.longcomplex)

Could you narrow that down a bit: do

import numpy as np
z = np.longcomplex(complex(1.,1.))
z + z
z - z
z * z
z / z
z + 2
z - 2
z * 2
z / 2
z**0
z**1
z**2
z**3
z**4
z**4.5
z**(-1)
z**(-2)
z**101

Do you get a crash at some point?

-- 
Pauli Virtanen



Re: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute

2009-03-23 Thread Jochen Schroeder
On 23/03/09 15:40, Sturla Molden wrote:
> Jens Rantil wrote:
> > Hi all,
> >
> > So I have a C-function in a DLL loaded through ctypes. This particular 
> > function returns a pointer to a double. In fact I know that this 
> > pointer points to the first element in an array of, say for simplicity, 
> > 200 elements.
> >
> > How do I convert this pointer to a NumPy array that uses this data (ie. 
> > no copy of data in memory)? I am able to create a numpy array using a 
> > copy of the data.
> >   
> 
> def fromaddress(address, nbytes, dtype=double):
> 
> class Dummy(object): pass
> 
> d = Dummy()
> 
> d.__array_interface__ = {
> 
>  'data' : (address, False),
> 
>  'typestr' : numpy.uint8.str,
> 
>  'descr' : numpy.uint8.descr,
> 
>  'shape' : (nbytes,),
> 
>  'strides' : None,
> 
>  'version' : 3
> 
> }   
> 
> return numpy.asarray(d).view( dtype=dtype )
> 

Might I suggest that restype be removed from the documentation? It also cost 
me quite some time trying to get ndpointer to work with restype when I first 
tried it, until I finally came to the conclusion that an approach like the 
above is necessary and that ndpointer does not work with restype. 

Cheers
Jochen


Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Bruce Southey
Pauli Virtanen wrote:
> Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote:
> [clip]
>   
>> PPC stores long doubles as two doubles. I don't recall exactly how the
>> two are used, but the result is that the numbers aren't in the form you
>> would expect. Long doubles on the PPC have always been iffy, so it is no
>> surprise that machar fails. The failure on SPARC quad precision bothers
>> me more.
>> 
>
> The test fails on SPARC, since we need one term more in the Horner series 
> to reach quad precision accuracy. I'll add that for long doubles.
>
>   
>> I think the easy thing to do for the 1.3 release is to fix the precision
>> test to use a hardwired range of values, I don't think testing the
>> extreme small values is necessary to check the power series expansion.
>> But I have been leaving that fixup to Pauli.
>> 
>
> I'll do just that. The test is overly strict.
>
>   
I do not know if this is related, but I got similar error with David's 
windows 64 bits installer on my 64 bit Vista system. 
http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html

In particular this code crashes:
>>> import numpy as np
>>> info = np.finfo(np.longcomplex)

From the Windows Problem signature:
  Fault Module Name:   umath.pyd


Bruce



Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Pauli Virtanen
Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote:
[clip]
> PPC stores long doubles as two doubles. I don't recall exactly how the
> two are used, but the result is that the numbers aren't in the form you
> would expect. Long doubles on the PPC have always been iffy, so it is no
> surprise that machar fails. The failure on SPARC quad precision bothers
> me more.

The test fails on SPARC, since we need one term more in the Horner series 
to reach quad precision accuracy. I'll add that for long doubles.

> I think the easy thing to do for the 1.3 release is to fix the precision
> test to use a hardwired range of values, I don't think testing the
> extreme small values is necessary to check the power series expansion.
> But I have been leaving that fixup to Pauli.

I'll do just that. The test is overly strict.

-- 
Pauli Virtanen



Re: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Charles R Harris
On Mon, Mar 23, 2009 at 12:34 PM, Robert Pyle wrote:

> Hi all,
>
> This is a continuation of something I started last week, but with a
> more appropriate subject line.
>
> To recap, my machine is a dual G5 running OS X 10.5.6, my python is
>
>Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008,
> 15:28:32)
>
> and numpy 1.3.0b1 was installed from the source tarball in the
> straightforward way with
>
>sudo python setup.py install
>
>
> On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote:
>
> > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote:
> >
> > Thanks for tracking this! Can you check what your platform gives for:
> >
> > > import numpy as np
> > > info = np.finfo(np.longcomplex)
> > > print "eps:", info.eps, info.eps.dtype
> > > print "tiny:", info.tiny, info.tiny.dtype
> > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps)
> >
> > eps: 1.3817869701e-76 float128
> > tiny: -1.08420217274e-19 float128
> > log10: nan nan
> >
> > The log of a negative number is nan, so part of the problem is the
> > value of tiny. The size of the values also look suspect to me. On my
> > machine
> >
> > In [8]: finfo(longcomplex).eps
> > Out[8]: 1.084202172485504434e-19
> >
> > In [9]: finfo(float128).tiny
> > Out[9]: array(3.3621031431120935063e-4932, dtype=float128)
> >
> > So at a minimum eps and tiny are reversed.
> >
> > I started to look at the code for this but my eyes rolled up in my
> > head and I passed out. It could use some improvements...
> >
> > Chuck
>
> I have chased this a bit (or perhaps 128 bits) further.
>
> The problem seems to be that float128 is screwed up in general.  I
> tracked the test error back to lines 95-107 in
>
> /PyModules/numpy-1.3.0b1/build/lib.macosx-10.3-ppc-2.5/numpy/lib/
> machar.py
>
> Here is a short program built from these lines that demonstrates what
> I believe to be at the root of the test failure.
>
> ##
> #! /usr/bin/env python
>
> import numpy as np
> import binascii as b
>
> def t(type="float"):
> max_iterN = 1
> print "\ntesting %s" % type
> a = np.array([1.0],type)
> one = a
> zero = one - one
> for _ in xrange(max_iterN):
> a = a + a
> temp = a + one
> temp1 = temp - a
> print _+1, b.b2a_hex(temp[0]), temp1
> if any(temp1 - one != zero):
> break
> return
>
> if __name__ == '__main__':
> t(np.float32)
> t(np.float64)
> t(np.float128)
>
> ##
>
> This tries to find the number of bits in the significand by
> calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum
> is indistinguishable from (2.0**n), that is, when the added 1.0 has
> fallen off the bottom of the significand.
>
> My print statement shows the power of 2.0, the hex representation of
> ((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which
> one expects to be 1.0 up to the point where the added 1.0 is lost.
>
> Here are the last few lines printed for float32:
>
> 19 4910 [ 1.]
> 20 4988 [ 1.]
> 21 4a04 [ 1.]
> 22 4a82 [ 1.]
> 23 4b01 [ 1.]
> 24 4b80 [ 0.]
>
> You can see the added 1.0 marching to the right and off the edge at 24
> bits.
>
> Similarly, for float64:
>
> 48 42f00010 [ 1.]
> 49 4308 [ 1.]
> 50 4314 [ 1.]
> 51 4322 [ 1.]
> 52 4331 [ 1.]
> 53 4340 [ 0.]
>
> There are 53 bits, just as IEEE 754 would lead us to hope.
>
> However, for float128:
>
> 48 42f00010 [1.0]
> 49 4308 [1.0]
> 50 4314 [1.0]
> 51 4322 [1.0]
> 52 4331 [1.0]
> 53 43403ff0 [1.0]
> 54 43503ff0 [1.0]
>
> Something weird happens as we pass 53 bits.  I think lines 53 and 54
> *should* be
>

PPC stores long doubles as two doubles. I don't recall exactly how the two
are used, but the result is that the numbers aren't in the form you would
expect. Long doubles on the PPC have always been iffy, so it is no surprise
that machar fails. The failure on SPARC quad precision bothers me more.

I think the easy thing to do for the 1.3 release is to fix the precision
test to use a hardwired range of values, I don't think testing the extreme
small values is necessary to check the power series expansion. But I have
been leaving that fixup to Pauli.

Longer term, I think the values in finfo could come from npy_cpu.h and be
hardwired in. We only support ieee floats and I don't think it should be
difficult to track extended precision (current intel) vs quad precision
(SPARC). Although at some point I expect intel will also go to quad
precision and then things might get sticky.  Hmm..., I wonder what some of
the other supported architectures do? Anyhow, PPC is an exception in the way
it treats long doubles and I'm not even sure it hasn't changed in some of
th

Re: [Numpy-discussion] numpy installation with nonroot python installation

2009-03-23 Thread charlie
Alright, I solved this by using numscons.

On Mon, Mar 23, 2009 at 12:48 AM, charlie  wrote:

> Dear numpyers,
>
> I am trying to install numpy 1.3 with my own version of python 2.5. I got
> stuck with following error:
> *building 'numpy.core.multiarray' extension
> compiling C sources
> C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall
> -Wstrict-prototypes -fPIC
>
> creating build/temp.linux-x86_64-2.5
> creating build/temp.linux-x86_64-2.5/numpy
> creating build/temp.linux-x86_64-2.5/numpy/core
> creating build/temp.linux-x86_64-2.5/numpy/core/src
> compile options: '-Ibuild/src.linux-x86_64-2.5/numpy/core/src
> -Inumpy/core/include -Ibuild/src.linux-x86_64-2.5/numpy/core/include/numpy
> -Inumpy/core/src -Inumpy/core/include
> -I/home/cmb-01/lxia/usr/include/python2.5 -c'
> gcc: numpy/core/src/multiarraymodule.c
> gcc -pthread -shared
> build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm
> -lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so
> /usr/bin/ld: cannot find -lpython2.5
> collect2: ld returned 1 exit status
> /usr/bin/ld: cannot find -lpython2.5
> collect2: ld returned 1 exit status
> error: Command "gcc -pthread -shared
> build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm
> -lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so" failed
> with exit status 1*
>
> I guess it is because ld cannot find libpython2.5.so; so I tried the
> following methods:
> 1. $export LD_LIBRARY_PATH = $HOME/usr/lib # where my libpython2.5.so is
> in
> 2. edited the site.cfg file so that:
> [DEFAULT]
> library_dirs = ~/usr/lib
> include_dirs = ~/usr/include
> search_static_first = false
>
> Neither method works. But when I remove the -lpython2.5 flag from the
> compile command, the command goes through without problem. But I don't know
> where to remove this flag in the numpy package. I have run out of options now and
> thus I want to get help from you. Thanks for any advice.
>
> Charlie
>
>
>


[Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

2009-03-23 Thread Robert Pyle
Hi all,

This is a continuation of something I started last week, but with a  
more appropriate subject line.

To recap, my machine is a dual G5 running OS X 10.5.6, my python is

Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008,  
15:28:32)

and numpy 1.3.0b1 was installed from the source tarball in the  
straightforward way with

sudo python setup.py install


On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote:

> On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote:
>
> Thanks for tracking this! Can you check what your platform gives for:
>
> > import numpy as np
> > info = np.finfo(np.longcomplex)
> > print "eps:", info.eps, info.eps.dtype
> > print "tiny:", info.tiny, info.tiny.dtype
> > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps)
>
> eps: 1.3817869701e-76 float128
> tiny: -1.08420217274e-19 float128
> log10: nan nan
>
> The log of a negative number is nan, so part of the problem is the  
> value of tiny. The size of the values also look suspect to me. On my  
> machine
>
> In [8]: finfo(longcomplex).eps
> Out[8]: 1.084202172485504434e-19
>
> In [9]: finfo(float128).tiny
> Out[9]: array(3.3621031431120935063e-4932, dtype=float128)
>
> So at a minimum eps and tiny are reversed.
>
> I started to look at the code for this but my eyes rolled up in my  
> head and I passed out. It could use some improvements...
>
> Chuck

I have chased this a bit (or perhaps 128 bits) further.

The problem seems to be that float128 is screwed up in general.  I  
tracked the test error back to lines 95-107 in

/PyModules/numpy-1.3.0b1/build/lib.macosx-10.3-ppc-2.5/numpy/lib/ 
machar.py

Here is a short program built from these lines that demonstrates what  
I believe to be at the root of the test failure.

##
#! /usr/bin/env python

import numpy as np
import binascii as b

def t(type="float"):
    max_iterN = 10000
    print "\ntesting %s" % type
    a = np.array([1.0],type)
    one = a
    zero = one - one
    for _ in xrange(max_iterN):
        a = a + a
        temp = a + one
        temp1 = temp - a
        print _+1, b.b2a_hex(temp[0]), temp1
        if any(temp1 - one != zero):
            break
    return

if __name__ == '__main__':
    t(np.float32)
    t(np.float64)
    t(np.float128)

##

This tries to find the number of bits in the significand by  
calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum  
is indistinguishable from (2.0**n), that is, when the added 1.0 has  
fallen off the bottom of the significand.

My print statement shows the power of 2.0, the hex representation of  
((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which  
one expects to be 1.0 up to the point where the added 1.0 is lost.

Here are the last few lines printed for float32:

19 4910 [ 1.]
20 4988 [ 1.]
21 4a04 [ 1.]
22 4a82 [ 1.]
23 4b01 [ 1.]
24 4b80 [ 0.]

You can see the added 1.0 marching to the right and off the edge at 24  
bits.

Similarly, for float64:

48 42f00010 [ 1.]
49 4308 [ 1.]
50 4314 [ 1.]
51 4322 [ 1.]
52 4331 [ 1.]
53 4340 [ 0.]

There are 53 bits, just as IEEE 754 would lead us to hope.

However, for float128:

48 42f00010 [1.0]
49 4308 [1.0]
50 4314 [1.0]
51 4322 [1.0]
52 4331 [1.0]
53 43403ff0 [1.0]
54 43503ff0 [1.0]

Something weird happens as we pass 53 bits.  I think lines 53 and 54  
*should* be

53 43408000 [1.0]
54 43504000 [1.0]

etc., with the added 1.0 continuing to march rightwards to extinction,  
as before.

The calculation eventually terminates with

1022 7fd03ff0 [1.0]
1023 7fe03ff0 [1.0]
1024 7ff0 [NaN]

(7ff0 == Inf).

This totally messes up the remaining parts of machar.py, and leaves us  
with Infs and Nans that give the logs of negative numbers, etc. that  
we saw last week.

But wait, there's more!  I also have an Intel Mac (a MacBook Pro).   
This passes numpy.test(), but when I look in detail with the above  
code, I find that the float128 significand has only 64 bits, leading  
me to suspect that it is really the so-called 80-bit "extended  
precision".

Is this true?  If so, should there be such large differences between  
architectures for the same nominal precision?  Is float128 just  
whatever the underlying C compiler thinks a "long double" is?
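
For reference, a quick hedged check of what a given platform's longdouble actually is (assuming finfo itself is sane there, which, as above, it is not on this PPC build):

import numpy as np

# Storage width and significand bits as numpy reports them.  Typical values:
# 52 mantissa bits for a plain double, 63 for x86 80-bit extended precision,
# 112 for IEEE quad (e.g. SPARC).  On the PPC builds discussed here, finfo
# itself is what is broken, so its answers cannot be trusted.
print(np.dtype(np.longdouble).itemsize * 8)
print(np.finfo(np.longdouble).nmant)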

I've spent way too much time on this, so I'm going to bow out here  
(unless someone can suggest something for me to try that won't take  
too much time).

Bob









Re: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute

2009-03-23 Thread Sturla Molden
Jens Rantil wrote:
> Hi all,
>
> So I have a C-function in a DLL loaded through ctypes. This particular 
> function returns a pointer to a double. In fact I know that this 
> pointer points to the first element in an array of, say for simplicity, 
> 200 elements.
>
> How do I convert this pointer to a NumPy array that uses this data (ie. 
> no copy of data in memory)? I am able to create a numpy array using a 
> copy of the data.
>   

import numpy

def fromaddress(address, nbytes, dtype=float):

    class Dummy(object): pass

    d = Dummy()

    d.__array_interface__ = {
         'data' : (address, False),
         'typestr' : numpy.dtype(numpy.uint8).str,
         'descr' : numpy.dtype(numpy.uint8).descr,
         'shape' : (nbytes,),
         'strides' : None,
         'version' : 3
         }

    return numpy.asarray(d).view( dtype=dtype )
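
A hedged usage sketch tying this back to the original question (the DLL name, function name, and the 200-double length are assumptions, not a tested recipe): set restype to c_void_p so the raw address comes back as a plain integer, then hand it to fromaddress.

import ctypes
import numpy

lib = ctypes.CDLL("mylib.dll")            # hypothetical library from the question
lib.my_func.restype = ctypes.c_void_p     # return the pointer as a plain integer address

address = lib.my_func()
n = 200                                   # known number of doubles behind the pointer
arr = fromaddress(address, n * ctypes.sizeof(ctypes.c_double), dtype=numpy.float64)
# arr is a view of the C buffer; no copy is made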








[Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute

2009-03-23 Thread Jens Rantil
Hi all,

So I have a C-function in a DLL loaded through ctypes. This particular 
function returns a pointer to a double. In fact I know that this 
pointer points to the first element in an array of, say for simplicity, 
200 elements.

How do I convert this pointer to a NumPy array that uses this data (ie. 
no copy of data in memory)? I am able to create a numpy array using a 
copy of the data.

I have tried using 'numpy.ctypeslib.ndpointer' but so far failed. 
Its documentation claims it should be possible to use it not only for 
the argtypes attribute, but also for restype. I have not found a single 
example of this on the web, and I wonder how this is done. As I see it, 
it would have to use the errcheck attribute to return an ndarray and 
not just restype.
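
For illustration, a hedged sketch of that errcheck idea (the DLL handle, function name, and the fixed length of 200 are assumptions from this message; it avoids ndpointer entirely):

import ctypes
import numpy as N

DLL = ctypes.CDLL("mylib.dll")                        # hypothetical library
DLL.my_func.restype = ctypes.POINTER(ctypes.c_double)

def _to_array(result, func, args):
    # errcheck hook: reinterpret the returned pointer as a 200-element
    # ctypes array and wrap it in an ndarray without copying.
    buf = ctypes.cast(result, ctypes.POINTER(ctypes.c_double * 200)).contents
    return N.ctypeslib.as_array(buf)

DLL.my_func.errcheck = _to_array
narray = DLL.my_func()                                # ndarray view of the C data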

My latest trial was:

>>> import ctypes
>>> import numpy as N
>>> pointer = DLL.my_func()
>>> ctypes_arr_type = ctypes.POINTER(200 * ctypes.c_double)
>>> ctypes_arr = ctypes_arr_type(pointer)
>>> narray = N.ctypeslib.as_array(ctypes_arr)

however this didn't work.

Any hints would be appreciated.

Thanks,

Jens


Re: [Numpy-discussion] String arrays from Python to Fortran

2009-03-23 Thread Sturla Molden
How did you import the function? f2py? What did you put in your .pyf file?




> *My Fortran code:*
>
> subroutine print_string (a, c)
> implicit none
> character(len=255), dimension(c), intent(inout)::  a
> integer, intent(in) :: c
> integer :: i
> do i = 1, size(a)
> print*, a(i)
> end do
>
> end subroutine print_string
>
> *My Python code:*
>
> from test import *
> from numpy import *
>
> a = "this is the test string."
> a = a.split()
>
> b = a
>
> a = char.array(a, itemsize=1, order = 'Fortran')
>
> print_string(a, len(a)) #this is imported from the compiled Fortran code
>
> 
>



Re: [Numpy-discussion] memoization with ndarray arguments

2009-03-23 Thread Francesc Alted
On Saturday 21 March 2009, Paul Northug wrote:
[clip]
> numpy arrays are not hashable, maybe for a good reason.

Numpy arrays are not hashable because they are mutable.

> I tried 
> anyway by  keeping a dict of hash(tuple(X)), but started having
> collisions. So I switched to md5.new(X).digest() as the hash function 
> and it seems to work ok. In a quick search, I saw cPickle.dumps and
> repr are also used as key values.

Having collisions is not necessarily very bad, unless you have *a lot* 
of them.  I wonder what kind of X you are dealing with that can provoke 
so many collisions when using hash(tuple(X))?  Just curious.

> I am assuming this is a common problem with functions with numpy
> array arguments and was wondering what the best approach is
> (including not using memoization).

If md5.new(X).digest() works well for you, then go ahead; it seems fast:

In [14]: X = np.arange(1000.)

In [15]: timeit hash(tuple(X))
1000 loops, best of 3: 504 µs per loop

In [16]: timeit md5.new(X).digest()
10000 loops, best of 3: 40.4 µs per loop
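
In case it is useful, a minimal sketch of such a memoizer (an assumed decorator, keyed on a digest of the raw array bytes; arrays differing only in dtype or shape would need those folded into the key):

import hashlib
import numpy as np

def memoize_on_array(func):
    cache = {}
    def wrapper(X):
        # Key on an md5 digest of the (contiguous) array contents.
        key = hashlib.md5(np.ascontiguousarray(X)).hexdigest()
        if key not in cache:
            cache[key] = func(X)
        return cache[key]
    return wrapper

@memoize_on_array
def expensive(X):
    return np.linalg.norm(X)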

Cheers,

-- 
Francesc Alted


[Numpy-discussion] Fwd: numpy installation with nonroot python installation

2009-03-23 Thread charlie
Dear numpyers,

I am trying to install numpy 1.3 with my own version of python 2.5. I got
stuck with following error:
*building 'numpy.core.multiarray' extension
compiling C sources
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall
-Wstrict-prototypes -fPIC

creating build/temp.linux-x86_64-2.5
creating build/temp.linux-x86_64-2.5/numpy
creating build/temp.linux-x86_64-2.5/numpy/core
creating build/temp.linux-x86_64-2.5/numpy/core/src
compile options: '-Ibuild/src.linux-x86_64-2.5/numpy/core/src
-Inumpy/core/include -Ibuild/src.linux-x86_64-2.5/numpy/core/include/numpy
-Inumpy/core/src -Inumpy/core/include
-I/home/cmb-01/lxia/usr/include/python2.5 -c'
gcc: numpy/core/src/multiarraymodule.c
gcc -pthread -shared
build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm
-lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so
/usr/bin/ld: cannot find -lpython2.5
collect2: ld returned 1 exit status
/usr/bin/ld: cannot find -lpython2.5
collect2: ld returned 1 exit status
error: Command "gcc -pthread -shared
build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm
-lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so" failed
with exit status 1*

I guess it is because ld cannot find libpython2.5.so; so I tried the
following methods:
1. $export LD_LIBRARY_PATH = $HOME/usr/lib # where my libpython2.5.so is in
2. edited the site.cfg file so that:
[DEFAULT]
library_dirs = ~/usr/lib
include_dirs = ~/usr/include
search_static_first = false

Neither method works. But when I remove the -lpython2.5 flag from the
compile command, the command goes through without problem. But I don't know
where to remove this flag in the numpy package. I have run out of options now and
thus I want to get help from you. Thanks for any advice.

Charlie


Re: [Numpy-discussion] Unhelpful errors trying to create very large arrays?

2009-03-23 Thread David Cournapeau
Hi Matthew,

On Sun, Mar 22, 2009 at 5:40 PM, Matthew Brett  wrote:
>
> I get the 'negative dimensions' error in this situation:

I think I have fixed both the arange and zeros errors in the trunk. The
arange error was specific to arange (unchecked overflow in a double -> int
cast), but the zeros one was more general (the fix should cover any 'high
level' array creation call like empty, ones, etc.)
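
To make the symptom concrete, a hedged sketch (sizes purely illustrative; on a fixed build the calls either succeed or fail with a plain MemoryError/ValueError instead of the confusing 'negative dimensions' message):

import numpy as np

# The kind of call that used to die with "negative dimensions are not
# allowed" when the requested size overflowed the internal double -> int
# cast.  On a fixed build it either succeeds or raises a sensible error.
try:
    a = np.zeros(2**33, dtype=np.uint8)   # roughly an 8 GB request, purely illustrative
except (MemoryError, ValueError):
    print("array creation failed cleanly")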

Tell me if you still have problems,

David