Re: [Numpy-discussion] trying to speed up the following....

2009-03-24 Thread Robert Kern
On Wed, Mar 25, 2009 at 00:09, Brennan Williams
 wrote:
> Robert Kern wrote:
>> On Tue, Mar 24, 2009 at 18:29, Brennan Williams
>>  wrote:
>>
>>> I have an array (porvatt.yarray) of ni*nj*nk values.
>>> I want to create two further arrays.
>>>
>>> activeatt.yarray is of size ni*nj*nk and is a pointer array to an active
>>> cell number. If a cell is inactive then its activeatt.yarray value will be 0
>>>
>>> ijkatt.yarray is of size nactive, the number of active cells (which I
>>> already know). ijkatt.yarray holds the ijk cell number for each active cell.
>>>
>>>
>>> My code looks something like...
>>>
>>>           activeatt.yarray=zeros(ncells,dtype=int)
>>>           ijkatt.yarray=zeros(nactivecells,dtype=int)
>>>
>>>            iactive=-1
>>>            ni=currentgrid.ni
>>>            nj=currentgrid.nj
>>>            nk=currentgrid.nk
>>>            for ijk in range(0,ni*nj*nk):
>>>              if porvatt.yarray[ijk]>0:
>>>                iactive+=1
>>>                activeatt.yarray[ijk]=iactive
>>>                ijkatt.yarray[iactive]=ijk
>>>
>>> I may often have a million+ cells.
>>> So the code above is slow.
>>> How can I speed it up?
>>>
>>
>> mask = (porvatt.yarray.flat > 0)
>> ijkatt.yarray = np.nonzero(mask)
>>
>> # This is not what your code does, but what I think you want.
>> # Where porvatt.yarray is inactive, activeatt.yarray is -1.
>> # 0 might be an active cell.
>> activeatt.yarray = np.empty(ncells, dtype=int)
>> activeatt.yarray.fill(-1)
>> activeatt.yarray[mask] = ijkatt.yarray
>>
>>
>>
> Thanks. Concise & fast. This is what I've got so far (minor mods from
> the above)
>
> from numpy import *
> ...
> mask=porvatt.yarray>0.0
> ijkatt.yarray=nonzero(mask)[0]
> activeindices=arange(0,ijkatt.yarray.size)
> activeatt.yarray = empty(ncells, dtype=int)
> activeatt.yarray.fill(-1)
> activeatt.yarray[mask] = activeindices
>
> I have...
>
> ijkatt.yarray=nonzero(mask)[0]
>
> because it looks like nonzero returns a tuple of arrays rather than an
> array.

Yes. Apologies.

> I used
>
> activeindices=arange(0,ijkatt.yarray.size)
>
> and
>
> activeatt.yarray[mask] = activeindices

Yes. You are correct.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] trying to speed up the following....

2009-03-24 Thread Brennan Williams
Robert Kern wrote:
> On Tue, Mar 24, 2009 at 18:29, Brennan Williams
>  wrote:
>   
>> I have an array (porvatt.yarray) of ni*nj*nk values.
>> I want to create two further arrays.
>>
>> activeatt.yarray is of size ni*nj*nk and is a pointer array to an active
>> cell number. If a cell is inactive then its activeatt.yarray value will be 0
>>
>> ijkatt.yarray is of size nactive, the number of active cells (which I
>> already know). ijkatt.yarray holds the ijk cell number for each active cell.
>>
>>
>> My code looks something like...
>>
>>    activeatt.yarray=zeros(ncells,dtype=int)
>>    ijkatt.yarray=zeros(nactivecells,dtype=int)
>>
>>    iactive=-1
>>    ni=currentgrid.ni
>>    nj=currentgrid.nj
>>    nk=currentgrid.nk
>>    for ijk in range(0,ni*nj*nk):
>>        if porvatt.yarray[ijk]>0:
>>            iactive+=1
>>            activeatt.yarray[ijk]=iactive
>>            ijkatt.yarray[iactive]=ijk
>>
>> I may often have a million+ cells.
>> So the code above is slow.
>> How can I speed it up?
>> 
>
> mask = (porvatt.yarray.flat > 0)
> ijkatt.yarray = np.nonzero(mask)
>
> # This is not what your code does, but what I think you want.
> # Where porvatt.yarray is inactive, activeatt.yarray is -1.
> # 0 might be an active cell.
> activeatt.yarray = np.empty(ncells, dtype=int)
> activeatt.yarray.fill(-1)
> activeatt.yarray[mask] = ijkatt.yarray
>
>
>   
Thanks. Concise & fast. This is what I've got so far (minor mods from 
the above)

from numpy import *
...
mask=porvatt.yarray>0.0
ijkatt.yarray=nonzero(mask)[0]
activeindices=arange(0,ijkatt.yarray.size)
activeatt.yarray = empty(ncells, dtype=int)
activeatt.yarray.fill(-1)
activeatt.yarray[mask] = activeindices

I have...

ijkatt.yarray=nonzero(mask)[0]

because it looks like nonzero returns a tuple of arrays rather than an 
array.

I used

activeindices=arange(0,ijkatt.yarray.size)

and

activeatt.yarray[mask] = activeindices

as I have 686,000 cells, of which 129,881 are 'active', so my
activeatt.yarray values range from -1 for inactive cells, through 0 for the
first active cell, up to 129,880 for the last active cell.

About to test it out by replacing my old for loop. Looks like it will be 
about 20x faster for 1m cells.
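For reference, the final vectorized recipe from this thread can be sketched
self-contained like this (the data is made up and the variable names only
mirror the posts):

```python
import numpy as np

# Made-up pore-volume values for a 10-cell grid; > 0 means "active".
ncells = 10
porv = np.array([0.0, 1.5, 0.0, 2.0, 3.0, 0.0, 0.5, 0.0, 0.0, 4.0])

mask = porv > 0.0                          # True where the cell is active
ijk_active = np.nonzero(mask)[0]           # ijk index of each active cell
active = np.empty(ncells, dtype=int)
active.fill(-1)                            # -1 marks inactive cells
active[mask] = np.arange(ijk_active.size)  # 0..nactive-1 for active cells

print(active.tolist())  # -> [-1, 0, -1, 1, 2, -1, 3, -1, -1, 4]
```

Note the `[0]` after nonzero(), which unpacks the one-element tuple it
returns, exactly as discussed above.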


Brennan



Re: [Numpy-discussion] Seg fault from numpy.rec.fromarrays

2009-03-24 Thread Dan Yamins
>
>> Can someone explain why this might be happening, and how I can fix it
>> (without having to use the pickling hack)?
>>
>
> What architecture/operating system is this?
>


Sorry, I should have included this information before. It's OS X 10.5.6. The
machine is a 64-bit Intel Core 2 Duo, but the Python is the standard OS X 10.5
binary from the python.org website, which is a 32-bit framework build.
It's numpy 1.3, which I built on this machine. (The same problem happens
with earlier versions of numpy as well; I tried the same computation using
numpy 1.1 earlier.)


Dan


Re: [Numpy-discussion] Seg fault from numpy.rec.fromarrays

2009-03-24 Thread Charles R Harris
2009/3/24 Dan Yamins 

> Hi all,
>
> I'm having a seg fault error from numpy.rec.fromarrays.
>
> I have a python list
> L = [Col1, Col2]
> where Col1 and Col2 are python lists of short strings (the max length of
> Col1 strings is 4 chars and max length of Col2 is 7 chars).  The len of Col1
> and Col2 is about 11500.
>
> Then I attempt
>>>> A = numpy.rec.fromarrays(L,names = ['Aggregates','__color__'])
>
> This should produce a  numpy record array with two columns, one called
> 'Aggregates', the other called '__color__'.
>
> In and of itself, this runs.  But then when I attempt to look at the
> contents of A, running the __getitem__ method, say by doing:
>
>>>> print A
> or
>>>> A.tolist()
> or
>>>> A[0]
>
> then I get a seg fault error.  (Actually, the segfault only occurs about
> 80% of the time I run these commands.)
>
> However, the __getitem__ method does work to produce attribute arrays from
> column names, e.g.
>
>   >>> Ag =  A['Aggregates']
>
> or
>
>   >>> col =  A['__color__']
>
> both produce (apparently) completely correct and working numpy arrays.
>
> Moreover, If I pickle the object A before looking at it, everything works
> fine.   E.g. if I execute:
>
>>>> Hold_A = A.dumps()
>>>> A = numpy.loads(Hold_A)
>
> then A seems to work fine.
>
> (Also:  pickling the list L = [Col1,Col2] first, before running the
> numpy.rec.fromarrays method, does not always fix the segfault.)
>
>
> Can someone explain why this might be happening, and how I can fix it
> (without having to use the pickling hack)?
>

What architecture/operating system is this?

Chuck


Re: [Numpy-discussion] trying to speed up the following....

2009-03-24 Thread Robert Kern
On Tue, Mar 24, 2009 at 18:29, Brennan Williams
 wrote:
> I have an array (porvatt.yarray) of ni*nj*nk values.
> I want to create two further arrays.
>
> activeatt.yarray is of size ni*nj*nk and is a pointer array to an active
> cell number. If a cell is inactive then its activeatt.yarray value will be 0
>
> ijkatt.yarray is of size nactive, the number of active cells (which I
> already know). ijkatt.yarray holds the ijk cell number for each active cell.
>
>
> My code looks something like...
>
>           activeatt.yarray=zeros(ncells,dtype=int)
>           ijkatt.yarray=zeros(nactivecells,dtype=int)
>
>            iactive=-1
>            ni=currentgrid.ni
>            nj=currentgrid.nj
>            nk=currentgrid.nk
>            for ijk in range(0,ni*nj*nk):
>              if porvatt.yarray[ijk]>0:
>                iactive+=1
>                activeatt.yarray[ijk]=iactive
>                ijkatt.yarray[iactive]=ijk
>
> I may often have a million+ cells.
> So the code above is slow.
> How can I speed it up?

mask = (porvatt.yarray.flat > 0)
ijkatt.yarray = np.nonzero(mask)

# This is not what your code does, but what I think you want.
# Where porvatt.yarray is inactive, activeatt.yarray is -1.
# 0 might be an active cell.
activeatt.yarray = np.empty(ncells, dtype=int)
activeatt.yarray.fill(-1)
activeatt.yarray[mask] = ijkatt.yarray


-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


[Numpy-discussion] trying to speed up the following....

2009-03-24 Thread Brennan Williams
I have an array (porvatt.yarray) of ni*nj*nk values.
I want to create two further arrays.

activeatt.yarray is of size ni*nj*nk and is a pointer array to an active 
cell number. If a cell is inactive then its activeatt.yarray value will be 0

ijkatt.yarray is of size nactive, the number of active cells (which I 
already know). ijkatt.yarray holds the ijk cell number for each active cell.


My code looks something like...

    activeatt.yarray=zeros(ncells,dtype=int)
    ijkatt.yarray=zeros(nactivecells,dtype=int)

    iactive=-1
    ni=currentgrid.ni
    nj=currentgrid.nj
    nk=currentgrid.nk
    for ijk in range(0,ni*nj*nk):
        if porvatt.yarray[ijk]>0:
            iactive+=1
            activeatt.yarray[ijk]=iactive
            ijkatt.yarray[iactive]=ijk

I may often have a million+ cells.
So the code above is slow.
How can I speed it up?

TIA

Brennan



Re: [Numpy-discussion] Doc update for 1.3.0?

2009-03-24 Thread Pauli Virtanen
Sun, 22 Mar 2009 14:09:07 +0900, David Cournapeau wrote:
[clip]
> You can backport as many docstring changes as possible, since there is
> little chance to break anything just from docstring.

Merge to trunk is here:

http://projects.scipy.org/numpy/changeset/6725

I'll backport it and some other doc fixes to 1.3.x tomorrow (if no 
objections):

git://github.com/pv/numpy-work.git work-1.3.x

-- 
Pauli Virtanen



[Numpy-discussion] [ANN] mlabwrap 1.0.1

2009-03-24 Thread Alexander Schmolck
Mlabwrap allows pythonistas to interface to Matlab(tm) in a very
straightforward fashion:

>>> from mlabwrap import mlab
>>> mlab.eig([[0,1],[1,1]])
array([[-0.61803399],
   [ 1.61803399]])

More at .

Mlabwrap 1.0.1 is just a maintenance release that fixes a few bugs and
simplifies installation (no more LD_LIBRARY_PATH hassles). No future 
(non-bugfix) releases of mlabwrap are currently planned, but if and when I find 
the time to finish overhauling and extending the API I will make an official 
release of scikits.mlabwrap, which probably won't be 100% backwards compatible.

'as


[Numpy-discussion] Seg fault from numpy.rec.fromarrays

2009-03-24 Thread Dan Yamins
Hi all,

I'm having a seg fault error from numpy.rec.fromarrays.

I have a python list
L = [Col1, Col2]
where Col1 and Col2 are python lists of short strings (the max length of
Col1 strings is 4 chars and max length of Col2 is 7 chars).  The len of Col1
and Col2 is about 11500.

Then I attempt
   >>> A = numpy.rec.fromarrays(L,names = ['Aggregates','__color__'])

This should produce a  numpy record array with two columns, one called
'Aggregates', the other called '__color__'.

In and of itself, this runs.  But then when I attempt to look at the
contents of A, running the __getitem__ method, say by doing:

   >>> print A
or
   >>> A.tolist()
or
   >>> A[0]

then I get a seg fault error.  (Actually, the segfault only occurs about 80%
of the time I run these commands.)

However, the __getitem__ method does work to produce attribute arrays from
column names, e.g.

  >>> Ag =  A['Aggregates']

or

  >>> col =  A['__color__']

both produce (apparently) completely correct and working numpy arrays.

Moreover, If I pickle the object A before looking at it, everything works
fine.   E.g. if I execute:

   >>> Hold_A = A.dumps()
   >>> A = numpy.loads(Hold_A)

then A seems to work fine.

(Also:  pickling the list L = [Col1,Col2] first, before running the
numpy.rec.fromarrays method, does not always fix the segfault.)


Can someone explain why this might be happening, and how I can fix it
(without having to use the pickling hack)?
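For reference, the construction can be sketched self-contained like this. The
column data below is made up, and passing pre-built arrays with explicit
string dtypes (itemsizes 4 and 7, matching the post) is only a hypothetical
way to sidestep automatic dtype inference from Python lists; it is not
confirmed to avoid the crash described above.

```python
import numpy as np

# Made-up stand-ins for the Col1/Col2 string lists from the post.
Col1 = ["ab", "cdef"] * 3
Col2 = ["red", "magenta"] * 3

# Hypothetical variant: convert to arrays with explicit string dtypes
# before handing them to np.rec.fromarrays.
A = np.rec.fromarrays(
    [np.array(Col1, dtype="S4"), np.array(Col2, dtype="S7")],
    names=["Aggregates", "__color__"],
)

print(A.dtype.names)  # -> ('Aggregates', '__color__')
print(A[0])           # first record
```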


Thanks,
Dan


Re: [Numpy-discussion] more on that missing directory

2009-03-24 Thread Sul, Young L
Hi,
The following is a list of files that have ^Ms in them.

I did an svn checkout of the latest sources today and ran "grep -Ril ^M *" in
numpy and scipy.

scipy seems to have more embedded carriage returns than numpy. It also seems
that whatever 'builds' the files to be compiled in scipy is what embeds the
carriage returns prior to the compile. (I tried cleaning them out, but
re-running numscons overwrites the cleaned-up files.)

I have attached to this message a list of the files that still have embedded
carriage returns in them.
From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of David Cournapeau [courn...@gmail.com]
Sent: Friday, March 20, 2009 9:19 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] more on that missing directory

Hi,

2009/3/21 Sul, Young L :
> Hi,
>
> (I’m on a Solaris 10 intel system, and am trying to use the sunperf
> libraries)
> An immediate problem is that some files seem to have embedded ^Ms in them. I
> had to clean and rerun a few times before numpy installed.

Could you tell me what those files are? In numscons or numpy? Those
files should be fixed; neither numpy nor numscons should have any CRLF-style
line endings.

> Now, I am trying to install scipy via numscons. It looked like it was going
> to work, but it barfed. From the output it looks like whatever is building
> the compile commands forgot to add the cc command at the beginning of the
> line (see below. I’ve highlighted the barf).

Yes, it is a bug in scons - its way of looking for compilers is buggy
on solaris. I will look into it later today (I don't have a solaris
installation in handy ATM),

cheers,

David


scipy_lines
Description: scipy_lines


numpy_lines
Description: numpy_lines


Re: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale)

2009-03-24 Thread Lewis E. Randerson
Pauli,

Thanks.  Somehow google failed me when I was looking for a
clear explanation.

--Lew

On Mar 24, 2009, at 3:55 PM, Pauli Virtanen wrote:

> Tue, 24 Mar 2009 15:44:07 -0400, Lewis E. Randerson wrote:
>
>> Pauli,
>>
>> I was wondering why there seemed to be two uses for parens in the
>> string.  I now have the braces in.   The issue now I suspect is the
>> stuff after (?P.  That is where I am really confused.
>
> This is probably best answered by the documentation:
>
>   http://docs.python.org/library/re.html
>
> In short, the (?P<...>) construct defines a named group.
>
> -- 
> Pauli Virtanen
>

--
Lewis E. Randerson
DOE Princeton University Plasma Physics Laboratory,
Princeton University, James Forrestal Campus
100 Stellarator Road, Princeton, NJ 08543
Work: 609/243-3134, Fax: 609/243-3086, PPPL Web: http://www.pppl.gov






Re: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale)

2009-03-24 Thread Pauli Virtanen
Tue, 24 Mar 2009 15:44:07 -0400, Lewis E. Randerson wrote:

> Pauli,
> 
> I was wondering why there seemed to be two uses for parens in the
> string.  I now have the braces in.   The issue now I suspect is the
> stuff after (?P.  That is where I am really confused.

This is probably best answered by the documentation:

http://docs.python.org/library/re.html

In short, the (?P<...>) construct defines a named group.

-- 
Pauli Virtanen



Re: [Numpy-discussion] test failure in numpy trunk

2009-03-24 Thread Charles R Harris
On Tue, Mar 24, 2009 at 1:12 PM, Pauli Virtanen  wrote:

> Tue, 24 Mar 2009 13:15:21 -0400, Darren Dale wrote:
> > I just performed an svn update, deleted my old build/ and
> > site-packages/numpy*, reinstalled, and I see a new test failure on a 64
> > bit linux machine:
> >
> > ==
> > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex
> > --
> [clip]
> >   "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
> > line 497, in check
> > 'arctanh')
> > AssertionError: (135, 3.4039637354191726288e-09,
> > 3.9031278209478159624e-18, 'arctanh')
>
> I can reproduce this (on another 64-bit machine). This time around, it's
> the real function that is faulty:
>
> >>> x = np.longdouble(3e-9)
> >>> np.arctanh(x+0j).real - x
> 9.0876776281460559983e-27
> >>> np.arctanh(x).real - x
> 0.0
> >>> np.finfo(np.longdouble).eps * x
> 3.2526065174565132804e-28
>
> So, the system atanhl is ~ 30 relative eps away from the correct answer:
>

I see this also. The compiler is gcc version 4.3.0 20080428 (Red Hat
4.3.0-8) (GCC). Maybe we should ping the compiler folks? I could also open a
Fedora bug for this.

Chuck


Re: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale)

2009-03-24 Thread Lewis E. Randerson
Pauli,

I was wondering why there seemed to be two uses for parens in the
string.  I now have the braces in.   The issue now I suspect is
the stuff after (?P.  That is where I am really confused.

Any opinions there.

--Lew

On Mar 24, 2009, at 3:32 PM, Pauli Virtanen wrote:

> Tue, 24 Mar 2009 15:11:05 -0400, Lewis E. Randerson wrote:
>
>> Hi,
>>
>> I am trying to set up a new compiler for numpy, and my lack of Python
>> pattern-matching syntax knowledge is bogging me down.
>>
>> Here is one of my non-working patterns.
>> =
>> version_pattern = r'Pathscale(TM) Compiler Suite: Version (?P<version>[^\s]*)'
>> =
>
> Possibly like so:
>
> version_pattern = r'PathScale\(TM\) Compiler Suite: Version (?P<version>[^\s]*)'
>
> You need the escapes so that the first parentheses are not interpreted as a group.
>
> -- 
> Pauli Virtanen
>

--
Lewis E. Randerson
DOE Princeton University Plasma Physics Laboratory,
Princeton University, James Forrestal Campus
100 Stellarator Road, Princeton, NJ 08543
Work: 609/243-3134, Fax: 609/243-3086, PPPL Web: http://www.pppl.gov






Re: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale)

2009-03-24 Thread Pauli Virtanen
Tue, 24 Mar 2009 15:11:05 -0400, Lewis E. Randerson wrote:

> Hi,
> 
> I am trying to set up a new compiler for numpy, and my lack of Python
> pattern-matching syntax knowledge is bogging me down.
> 
> Here is one of my non-working patterns.
> = 
> version_pattern = r'Pathscale(TM) Compiler Suite: Version (?P<version>[^\s]*)'
> =

Possibly like so:

version_pattern = r'PathScale\(TM\) Compiler Suite: Version (?P<version>[^\s]*)'

You need the escapes so that the first parentheses are not interpreted as a group.
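A quick self-contained check of the escaped pattern (the named group
'version' is an assumption here, following the convention numpy's fcompiler
machinery uses for version matching):

```python
import re

# Escaped parentheses match the literal "(TM)"; the (?P<version>...)
# group captures the version token that follows.
version_pattern = r'PathScale\(TM\) Compiler Suite: Version (?P<version>[^\s]*)'
banner = "PathScale(TM) Compiler Suite: Version 3.2"

m = re.search(version_pattern, banner)
print(m.group("version"))  # -> 3.2
```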

-- 
Pauli Virtanen



[Numpy-discussion] Defining version_pattern for fcompiler (pathscale)

2009-03-24 Thread Lewis E. Randerson
Hi,

I am trying to set up a new compiler for numpy, and my lack of Python
pattern-matching syntax knowledge is bogging me down.

Here is one of my non-working patterns.
=
version_pattern = r'Pathscale(TM) Compiler Suite: Version (?P<version>[^\s]*)'
=

Here is the string I am trying to get the version from.
===
$ pathf95 --version
PathScale(TM) Compiler Suite: Version 3.2
Built on: 2008-06-16 16:45:36 -0700
Thread model: posix
GNU gcc version 3.3.1 (PathScale 3.2 driver)

Copyright 2000, 2001 Silicon Graphics, Inc.  All Rights Reserved.
Copyright 2002, 2003, 2004, 2005, 2006 PathScale, Inc.  All Rights  
Reserved.
Copyright 2006, 2007 QLogic Corporation.  All Rights Reserved.
Copyright 2007, 2008 PathScale LLC.  All Rights Reserved.
See complete copyright, patent and legal notices in the
/usr/pppl/pathscale/3.2/share/doc/pathscale-compilers-3.2/LEGAL.pdf  
file.
===

Any idea what the correct value for "version_pattern" should be?
Even better, if you have a working pathscale.py for the above, I'll take that
instead.

Thanks for any help!
--Lew






Re: [Numpy-discussion] test failure in numpy trunk

2009-03-24 Thread Pauli Virtanen
Tue, 24 Mar 2009 13:15:21 -0400, Darren Dale wrote:
> I just performed an svn update, deleted my old build/ and
> site-packages/numpy*, reinstalled, and I see a new test failure on a 64
> bit linux machine:
> 
> ==
> FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex
> --
[clip]
>   "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
> line 497, in check
> 'arctanh')
> AssertionError: (135, 3.4039637354191726288e-09,
> 3.9031278209478159624e-18, 'arctanh')

I can reproduce this (on another 64-bit machine). This time around, it's 
the real function that is faulty:

>>> x = np.longdouble(3e-9)
>>> np.arctanh(x+0j).real - x
9.0876776281460559983e-27
>>> np.arctanh(x).real - x
0.0
>>> np.finfo(np.longdouble).eps * x
3.2526065174565132804e-28

So, the system atanhl is ~ 30 relative eps away from the correct answer:

>>> from sympy import mpmath
>>> mpmath.mp.dps=60
>>> p = mpmath.mpf('3e-9')
>>> print (mpmath.atanh(p) - p)*1e27
9.0001681879956480095820042512435586643130912

I'll relax the test tolerance to allow for this...
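The check above can also be reproduced without mpmath by comparing against
the leading Taylor terms of arctanh; a sketch (the measured error is
platform dependent, so no particular value is asserted):

```python
import numpy as np

# For small x, arctanh(x) = x + x**3/3 + O(x**5), so the deviation from
# the truncated series, measured in units of relative eps, indicates how
# accurate the platform's atanh/atanhl implementation is.
x = np.longdouble(3e-9)
series = x + x**3 / 3
err_in_eps = abs(np.arctanh(x) - series) / (np.finfo(np.longdouble).eps * x)
print(err_in_eps)  # ~30 on the affected systems, near 0 otherwise
```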

-- 
Pauli Virtanen



Re: [Numpy-discussion] Win32 MSI

2009-03-24 Thread David Cournapeau
On Wed, Mar 25, 2009 at 3:13 AM, F. David del Campo Hill
 wrote:
> Dear Numpy Forum,
>
>        I have found the Win64 (Windows x64) Numpy MSI installer in 
> Sourceforge (numpy-1.3.0b1.win-amd64-py2.6.msi), but cannot find the Win32 
> (Windows i386) one. I have tried unpacking the Win32 EXE installer package 
> (numpy-1.3.0b1-win32-superpack-python2.6.exe) to see if the MSI installer 
> could be found inside, but without luck. Does the package I look for exist, 
> and if so, where could someone point me to where I can download it from?

No, it does not. The problem is that I need to add a way to execute
.msi from nsis (nsis is the software I use to build the superpack),
and I did not find a way when I tried - but it should be possible.

Now, I am not so familiar with MSI: what does it bring compared to an
.exe? Would an .exe installing a .msi solve your problem? (The Windows
64-bit build has an MSI because 64 bits implies SSE2, so we don't need
to check for CPUs without SSE.)

cheers,

David


[Numpy-discussion] Summer of Code: Proposal for Implementing date/time types in NumPy

2009-03-24 Thread Marty Fuhry
Hello,

Sorry for any overlap, as I've been referred here from the scipy-dev
mailing list.
I was reading through the Summer of Code ideas and I'm terribly
interested in the date/time proposal
(http://projects.scipy.org/numpy/browser/trunk/doc/neps/datetime-proposal3.rst).
I would love to work on this for a Google Summer of Code project. I'm
a sophomore studying Computer Science and Mathematics at Kent State
University in Ohio, so this project directly relates to my studies. Is
there anyone looking into this proposal yet?

Thank you.

-Marty Fuhry


[Numpy-discussion] Win32 MSI

2009-03-24 Thread F. David del Campo Hill
Dear Numpy Forum,

I have found the Win64 (Windows x64) Numpy MSI installer in Sourceforge 
(numpy-1.3.0b1.win-amd64-py2.6.msi), but cannot find the Win32 (Windows i386) 
one. I have tried unpacking the Win32 EXE installer package 
(numpy-1.3.0b1-win32-superpack-python2.6.exe) to see if the MSI installer could 
be found inside, but without luck. Does the package I am looking for exist,
and if so, could someone point me to where I can download it?

Thank you for your help.

Yours,

David del Campo


"The more corrupt the state, the more numerous the laws."
-Gaius Cornelius Tacitus (ca. 56-ca. 117), Annals, Book III, 27




Re: [Numpy-discussion] test failure in numpy trunk

2009-03-24 Thread Darren Dale
2009/3/24 Charles R Harris 

>
>
> 2009/3/24 Darren Dale 
>
> Hello,
>>
>> I just performed an svn update, deleted my old build/ and
>> site-packages/numpy*, reinstalled, and I see a new test failure on a 64 bit
>> linux machine:
>>
>> ==
>> FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex
>> --
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.6/site-packages/nose/case.py", line 182, in
>> runTest
>> self.test(*self.arg)
>>   File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py",
>> line 169, in knownfailer
>> return f(*args, **kwargs)
>>   File
>> "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line
>> 557, in test_loss_of_precision_longcomplex
>> self.check_loss_of_precision(np.longcomplex)
>>   File
>> "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line
>> 510, in check_loss_of_precision
>> check(x_series, 2*eps)
>>   File
>> "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line
>> 497, in check
>> 'arctanh')
>> AssertionError: (135, 3.4039637354191726288e-09,
>> 3.9031278209478159624e-18, 'arctanh')
>>
>
> What machine is it?
>

64-bit gentoo linux, gcc-4.3.3, python-2.6.1


Re: [Numpy-discussion] test failure in numpy trunk

2009-03-24 Thread Nils Wagner
On Tue, 24 Mar 2009 11:20:53 -0600
  Charles R Harris  wrote:
> 2009/3/24 Darren Dale 
> 
>> Hello,
>>
>> I just performed an svn update, deleted my old build/ and
>> site-packages/numpy*, reinstalled, and I see a new test failure on a
>> 64 bit linux machine:
>>
>> ==
>> FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex
>> --
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.6/site-packages/nose/case.py", line 182, in runTest
>> self.test(*self.arg)
>>   File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py", line 169, in knownfailer
>> return f(*args, **kwargs)
>>   File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 557, in test_loss_of_precision_longcomplex
>> self.check_loss_of_precision(np.longcomplex)
>>   File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 510, in check_loss_of_precision
>> check(x_series, 2*eps)
>>   File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 497, in check
>> 'arctanh')
>> AssertionError: (135, 3.4039637354191726288e-09, 3.9031278209478159624e-18, 'arctanh')
>>
> 
> What machine is it?
> 
> Chuck

I can reproduce the failure.
Linux linux-mogv 2.6.27.19-3.2-default #1 SMP 2009-02-25 
15:40:44 +0100 x86_64 x86_64 x86_64 GNU/Linux

cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 15
model name  : Intel(R) Pentium(R) Dual  CPU  T3200  @ 
2.00GHz
stepping: 13
cpu MHz : 1000.000
cache size  : 1024 KB
physical id : 0
siblings: 2
core id : 0
cpu cores   : 2
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 10
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic 
sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr 
sse sse2 ss ht tm pbe syscall nx lm constant_tsc 
arch_perfmon pebs bts rep_good nopl pni monitor ds_cpl est 
tm2 ssse3 cx16 xtpr lahf_lm
bogomips: 3996.80
clflush size: 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model   : 15
model name  : Intel(R) Pentium(R) Dual  CPU  T3200  @ 
2.00GHz
stepping: 13
cpu MHz : 1000.000
cache size  : 1024 KB
physical id : 0
siblings: 2
core id : 1
cpu cores   : 2
apicid  : 1
initial apicid  : 1
fpu : yes
fpu_exception   : yes
cpuid level : 10
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic 
sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr 
sse sse2 ss ht tm pbe syscall nx lm constant_tsc 
arch_perfmon pebs bts rep_good nopl pni monitor ds_cpl est 
tm2 ssse3 cx16 xtpr lahf_lm
bogomips: 3996.82
clflush size: 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

==
FAIL: 
test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex
--
Traceback (most recent call last):
   File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/case.py", line 182, in runTest
     self.test(*self.arg)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/decorators.py", line 169, in knownfailer
     return f(*args, **kwargs)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 557, in test_loss_of_precision_longcomplex
     self.check_loss_of_precision(np.longcomplex)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 510, in check_loss_of_precision
     check(x_series, 2*eps)
   File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 497, in check
     'arctanh')
AssertionError: (135, 3.4039637354191726288e-09, 3.9031278209478159624e-18, 'arctanh')

--
Ran 2031 tests in 15.923s

FAILED (KNOWNFAIL=1, failures=1)

  


Re: [Numpy-discussion] test failure in numpy trunk

2009-03-24 Thread Charles R Harris
2009/3/24 Darren Dale 

> Hello,
>
> I just performed an svn update, deleted my old build/ and
> site-packages/numpy*, reinstalled, and I see a new test failure on a 64 bit
> linux machine:
>
> ======================================================================
> FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib64/python2.6/site-packages/nose/case.py", line 182, in
> runTest
> self.test(*self.arg)
>   File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py",
> line 169, in knownfailer
> return f(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
> line 557, in test_loss_of_precision_longcomplex
> self.check_loss_of_precision(np.longcomplex)
>   File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
> line 510, in check_loss_of_precision
> check(x_series, 2*eps)
>   File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
> line 497, in check
> 'arctanh')
> AssertionError: (135, 3.4039637354191726288e-09, 3.9031278209478159624e-18,
> 'arctanh')
>

What machine is it?

Chuck


[Numpy-discussion] test failure in numpy trunk

2009-03-24 Thread Darren Dale
Hello,

I just performed an svn update, deleted my old build/ and
site-packages/numpy*, reinstalled, and I see a new test failure on a 64 bit
linux machine:

======================================================================
FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.6/site-packages/nose/case.py", line 182, in
runTest
self.test(*self.arg)
  File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py",
line 169, in knownfailer
return f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
line 557, in test_loss_of_precision_longcomplex
self.check_loss_of_precision(np.longcomplex)
  File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
line 510, in check_loss_of_precision
check(x_series, 2*eps)
  File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py",
line 497, in check
'arctanh')
AssertionError: (135, 3.4039637354191726288e-09, 3.9031278209478159624e-18,
'arctanh')


Re: [Numpy-discussion] SWIG and numpy.i

2009-03-24 Thread Bill Spotz
Kevin,

You need to declare vecSum() *after* you %include "numpy.i" and use  
the %apply directive.  Based on what you have, I think you can just  
get rid of the "extern double vecSum(...)".  I don't see what purpose  
it serves.  As is, it is telling swig to wrap vecSum() before you have  
set up your numpy typemaps.
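
For concreteness, here is one way the interface file could be reordered
along those lines. This is an untested sketch that keeps the file and
function names from Kevin's post; the only changes are dropping the bare
extern declaration and moving the %apply/%include pair after the numpy
setup:

```
/* matrix.i */

%module matrix
%{
#define SWIG_FILE_WITH_INIT
#include "matrix.h"
%}

/* Bring in the numpy typemaps before anything that mentions vecSum() */
%include "numpy.i"

%init %{
import_array();
%}

/* Attach the (int* IN_ARRAY1, int DIM1) typemap first, then let
   %include "matrix.h" generate the wrapper for vecSum().  The earlier
   "extern double vecSum(...)" line is removed entirely. */
%apply (int* IN_ARRAY1, int DIM1) {(int* vec, int m)};
%include "matrix.h"
```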

On Mar 24, 2009, at 10:33 AM, Kevin Françoisse wrote:

> Hi everyone,
>
> I have been using NumPy for a couple of months now, as part of my  
> research project at the university.  But now, I have to use a big C  
> library I wrote myself in a python project.  So I chose to use SWIG  
> for the interface between both my python script and my C library. To  
> make things more comprehensible, I wrote a small C method that  
> illustrates my problem:
>
> /* matrix.c */
>
> #include 
> #include 
> /* Compute the sum of a vector of reals */
> double vecSum(int* vec,int m){
>int  i;
>double sum =0.0;
>
>    for(i=0;i<m;i++){
>        sum += vec[i];
>}
>return sum;
> }
>
> /***/
>
> /* matrix.h */
>
> double vecSum(int* vec,int m);
>
> /***/
>
> /* matrix.i */
>
> %module matrix
> %{
> #define SWIG_FILE_WITH_INIT
> #include "matrix.h"
> %}
>
> extern double vecSum(int* vec, int m);
>
> %include "numpy.i"
>
> %init %{
> import_array();
> %}
>
> %apply (int* IN_ARRAY1, int DIM1) {(int* vec, int m)};
> %include "matrix.h"
>
> /***/
>
> I'm using a python script to compile my swig interface and my C  
> files (running Mac OS X 10.5)
>
> /* matrixSetup.py */
>
> from distutils.core import setup, Extension
> import numpy
>
> setup(name='matrix', version='1.0', ext_modules  
> =[Extension('_matrix', ['matrix.c','matrix.i'],
> include_dirs = [numpy.get_include(),'.'])])
>
> /***/
>
> Everything seems to work fine! But when I test my wrapped module in  
> python with a small NumPy array, here is what I get:
>
> >>> import matrix
> >>> from numpy import *
> >>> a = arange(10)
> >>> matrix.vecSum(a,a.shape[0])
> Traceback (most recent call last):
>  File "", line 1, in 
> TypeError: in method 'vecSum', argument 1 of type 'int *'
>
> How can I tell SWIG that my Integer NumPy array should represent an  
> int* array in C?
>
> Thank you very much,
>
> Kevin
> 

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: wfsp...@sandia.gov **








Re: [Numpy-discussion] manipulating lists

2009-03-24 Thread Nils Wagner
On Tue, 24 Mar 2009 10:27:18 -0400
  josef.p...@gmail.com wrote:
> On Tue, Mar 24, 2009 at 10:14 AM, Nils Wagner
>  wrote:
>> Hi all,
>>
>> How can I extract the numbers from the following list
>>
>> ['&', '-1.878722E-08,', '3.835992E-11',
>> '1.192970E-03,-5.080192E-06']
>>
>> It is easy to extract
>>
>> >>> liste[1]
>> '-1.878722E-08,'
>> >>> liste[2]
>> '3.835992E-11'
>>
>> but
>>
>> >>> liste[3]
>> '1.192970E-03,-5.080192E-06'
>>
>> How can I accomplish that ?
>>
> 
> in python I would do this:
> 
> >>> ss=['&', '-1.878722E-08,', '3.835992E-11','1.192970E-03,-5.080192E-06']
> >>> li = []
> >>> for j in ss:
>   for ii in j.split(','):   # assumes "," is delimiter
>   try: li.append(float(ii));
>   except ValueError: pass
> >>> li
> [-1.87872199e-008, 3.83599203e-011, 0.00119297, -5.08019199e-006]
> >>> np.array(li)
> array([ -1.87872200e-08,   3.83599200e-11,   1.19297000e-03,
>-5.08019200e-06])
> 
> Josef
  
Thank you. Works like a charm.

Nils


[Numpy-discussion] SWIG and numpy.i

2009-03-24 Thread Kevin Françoisse
Hi everyone,

I have been using NumPy for a couple of months now, as part of my research
project at the university.  But now, I have to use a big C library I wrote
myself in a python project.  So I chose to use SWIG for the interface
between both my python script and my C library. To make things more
comprehensible, I wrote a small C method that illustrates my problem:

/* matrix.c */

#include 
#include 
/* Compute the sum of a vector of reals */
double vecSum(int* vec,int m){
   int  i;
   double sum =0.0;

   for(i=0;i<m;i++){
       sum += vec[i];
   }
   return sum;
}

/***/

/* matrix.h */

double vecSum(int* vec,int m);

/***/

/* matrix.i */

%module matrix
%{
#define SWIG_FILE_WITH_INIT
#include "matrix.h"
%}

extern double vecSum(int* vec, int m);

%include "numpy.i"

%init %{
import_array();
%}

%apply (int* IN_ARRAY1, int DIM1) {(int* vec, int m)};
%include "matrix.h"

/***/

I'm using a python script to compile my swig interface and my C files
(running Mac OS X 10.5)

/* matrixSetup.py */

from distutils.core import setup, Extension
import numpy

setup(name='matrix', version='1.0', ext_modules=[Extension('_matrix', ['matrix.c','matrix.i'],
include_dirs = [numpy.get_include(),'.'])])

/***/

Everything seems to work fine! But when I test my wrapped module in
python with a small NumPy array, here is what I get:

>>> import matrix
>>> from numpy import *
>>> a = arange(10)
>>> matrix.vecSum(a,a.shape[0])
Traceback (most recent call last):
 File "", line 1, in 
TypeError: in method 'vecSum', argument 1 of type 'int *'

How can I tell SWIG that my Integer NumPy array should represent an int*
array in C?

Thank you very much,

Kevin


Re: [Numpy-discussion] manipulating lists

2009-03-24 Thread josef . pktd
On Tue, Mar 24, 2009 at 10:14 AM, Nils Wagner
 wrote:
> Hi all,
>
> How can I extract the numbers from the following list
>
> ['&', '-1.878722E-08,', '3.835992E-11',
> '1.192970E-03,-5.080192E-06']
>
> It is easy to extract
>
> >>> liste[1]
> '-1.878722E-08,'
> >>> liste[2]
> '3.835992E-11'
>
> but
>
> >>> liste[3]
> '1.192970E-03,-5.080192E-06'
>
> How can I accomplish that ?
>

in python I would do this:

>>> ss=['&', '-1.878722E-08,', '3.835992E-11','1.192970E-03,-5.080192E-06']
>>> li = []
>>> for j in ss:
for ii in j.split(','):   # assumes "," is delimiter
try: li.append(float(ii));
except ValueError: pass
>>> li
[-1.87872199e-008, 3.83599203e-011, 0.00119297,
-5.08019199e-006]
>>> np.array(li)
array([ -1.87872200e-08,   3.83599200e-11,   1.19297000e-03,
-5.08019200e-06])

Josef
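
A compact variant of the same idea is to pull out anything float-shaped
with a regular expression instead of splitting on a known delimiter.
This is just a sketch; the pattern below is mine, not from the thread:

```python
import re

import numpy as np

ss = ['&', '-1.878722E-08,', '3.835992E-11', '1.192970E-03,-5.080192E-06']

# Match an optional sign, digits with an optional decimal point, and an
# optional E-notation exponent; tokens like '&' simply produce no match.
float_re = r'[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?'
li = np.array([float(tok) for tok in re.findall(float_re, ' '.join(ss))])
print(li)
```

This avoids the try/except but ties you to one specific number format,
so the split-and-float loop above is the more forgiving choice.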


[Numpy-discussion] manipulating lists

2009-03-24 Thread Nils Wagner
Hi all,

How can I extract the numbers from the following list

['&', '-1.878722E-08,', '3.835992E-11', 
'1.192970E-03,-5.080192E-06']

It is easy to extract

>>> liste[1]
'-1.878722E-08,'
>>> liste[2]
'3.835992E-11'

but

>>> liste[3]
'1.192970E-03,-5.080192E-06'

How can I accomplish that ?

Nils


Re: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute

2009-03-24 Thread Sturla Molden
Jens Rantil wrote:
>
> Thanks Sturla. However numpy.uint8 seem to be lacking attributes 'str'
> and 'descr'. I'm using installed Ubuntu package 1:1.1.1-1. Is it too old
> or is the code broken?
Oops, my fault :)


def fromaddress(address, nbytes, dtype=float):
    class Dummy(object): pass
    d = Dummy()
    bytetype = numpy.dtype(numpy.uint8)
    d.__array_interface__ = {
        'data' : (address, False),
        'typestr' : bytetype.str,
        'descr' : bytetype.descr,
        'shape' : (nbytes,),
        'strides' : None,
        'version' : 3
    }
    return numpy.asarray(d).view(dtype=dtype)


You will have to make sure the address is an integer.
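
To illustrate, here is one way to exercise the function from pure
Python, using a ctypes buffer as the memory to wrap. The helper is
repeated so the snippet is self-contained; the ctypes setup is my own
example, not from the thread:

```python
import ctypes

import numpy

def fromaddress(address, nbytes, dtype=float):
    # Expose a raw memory address as a numpy array via the array interface.
    class Dummy(object): pass
    d = Dummy()
    bytetype = numpy.dtype(numpy.uint8)
    d.__array_interface__ = {
        'data' : (address, False),
        'typestr' : bytetype.str,
        'descr' : bytetype.descr,
        'shape' : (nbytes,),
        'strides' : None,
        'version' : 3
    }
    return numpy.asarray(d).view(dtype=dtype)

# Four C doubles owned by ctypes; addressof() gives the integer address.
buf = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)
arr = fromaddress(ctypes.addressof(buf), ctypes.sizeof(buf),
                  dtype=numpy.float64)
print(arr)      # views the same memory, no copy

arr[0] = 99.0   # writes go straight through to the ctypes buffer
print(buf[0])
```

Note that the array does not own the memory: buf must stay alive for as
long as the array is in use.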


> Also, could you elaborate why dtype=float would work better?
Because there is no such thing as a double type in Python?


Sturla Molden








[Numpy-discussion] savetxt for complex

2009-03-24 Thread Neal Becker
How does savetxt format a complex vector?  How can I control it?
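
With a recent numpy (behavior in 2009-era releases may differ), the
default is a parenthesized real+imagj pair per value, and you can
control it by giving one % specifier for the real part and one for the
imaginary part. A small sketch:

```python
import io

import numpy as np

z = np.array([1.0 + 2.0j, 3.0 - 4.0j])

# Default formatting: each value is written as " (real+imagj)".
buf = io.StringIO()
np.savetxt(buf, z)
text = buf.getvalue()
print(text)

# Custom formatting: one specifier per real/imaginary part; the explicit
# '+' keeps the sign so the pair reads back as a single complex token.
buf2 = io.StringIO()
np.savetxt(buf2, z, fmt='%.6e%+.6ej')
print(buf2.getvalue())

# loadtxt can round-trip the default format.
z2 = np.loadtxt(io.StringIO(text), dtype=complex)
```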




Re: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute

2009-03-24 Thread Jens Rantil
On Mon, 2009-03-23 at 15:40 +0100, Sturla Molden wrote:
> def fromaddress(address, nbytes, dtype=double):
>     class Dummy(object): pass
>     d = Dummy()
>     d.__array_interface__ = {
>         'data' : (address, False),
>         'typestr' : numpy.uint8.str,
>         'descr' : numpy.uint8.descr,
>         'shape' : (nbytes,),
>         'strides' : None,
>         'version' : 3
>     }
> 
>     return numpy.asarray(d).view( dtype=dtype )

Thanks Sturla. However numpy.uint8 seem to be lacking attributes 'str'
and 'descr'. I'm using installed Ubuntu package 1:1.1.1-1. Is it too old
or is the code broken?

Also, could you elaborate why dtype=float would work better?

Jens
