Re: [Numpy-discussion] fromfile() -- aarrgg!

2010-01-12 Thread Pauli Virtanen
ma, 2010-01-11 kello 16:11 -0800, Christopher Barker kirjoitti:
[clip]
 If no conversion is performed, zero is returned and the value of nptr 
 is stored in the location referenced by endptr.
 
 Off to do some more testing, but I guess that means that those pointers 
 need to be checked after the call, to see if a conversion was generated.
 
 Am I right?

Yes, that's how strtod() is typically used.

NumPyOS_ascii_ftolf already checks that, but it seems to me that
fromstr_next_element or possibly fromstr does not.

 PS: Boy, this is a pain!

Welcome to the wonderful world of C ;)

Pauli




[Numpy-discussion] Getting Callbacks with arrays to work

2010-01-12 Thread Jon Moore
Hi,

I'm trying to build a differential equation integrator and later a stochastic 
differential equation integrator.

I'm having trouble getting f2py to work where the callback itself receives an 
array from the Fortran routine, does some work on it, and then passes an array 
back.

For the stochastic integrator I'll need 2 callbacks, both dealing with arrays.

The idea is that the code that never changes (i.e. the integrator) will be in Fortran, 
and the code that changes (i.e. the callbacks defining the differential equations) 
will be different for each problem.

To test the idea I've written basic code which should pass an array back and 
forth between Python and Fortran if it works right.

Here is some code which doesn't work properly:-

SUBROUTINE CallbackTest(dv,v0,Vout,N)
    !IMPLICIT NONE
    
cF2PY intent( hide ):: N
    INTEGER:: N, ic
    
    EXTERNAL:: dv    

    DOUBLE PRECISION, DIMENSION( N ), INTENT(IN):: v0    
    DOUBLE PRECISION, DIMENSION( N ), INTENT(OUT):: Vout
    
    DOUBLE PRECISION, DIMENSION( N ):: Vnow
    DOUBLE PRECISION, DIMENSION( N )::  temp
    
    Vnow = v0
    

    temp = dv(Vnow, N)

    DO ic = 1, N
    Vout( ic ) = temp(ic)
    END DO    
    
END SUBROUTINE CallbackTest



When I test it with this Python code, I find that it just replicates the first 
element of the array!




from numpy import *
import callback as c

def dV(v):
    print 'in Python dV: V is: ',v
    return v.copy()    

arr = array([2.0, 4.0, 6.0, 8.0])

print 'Arr is: ', arr

output = c.CallbackTest(dV, arr)

print 'Out is: ', output




Arr is:  [ 2.  4.  6.  8.]

in Python dV: V is:  [ 2.  4.  6.  8.]

Out is:  [ 2.  2.  2.  2.]



Any ideas how I should do this, and also how do I get the code to work with 
implicit none not commented out?

Thanks

Jon




Re: [Numpy-discussion] Getting Callbacks with arrays to work

2010-01-12 Thread Pearu Peterson
Hi,

The problem is that f2py does not support callbacks that
return arrays. There is an easy workaround: provide
returnable arrays as arguments to the callback functions.
Using your example:

SUBROUTINE CallbackTest(dv,v0,Vout,N)
  IMPLICIT NONE

  !F2PY intent( hide ):: N
  INTEGER:: N, ic
  EXTERNAL:: dv

  DOUBLE PRECISION, DIMENSION( N ), INTENT(IN):: v0
  DOUBLE PRECISION, DIMENSION( N ), INTENT(OUT):: Vout

  DOUBLE PRECISION, DIMENSION( N ):: Vnow
  DOUBLE PRECISION, DIMENSION( N )::  temp

  Vnow = v0
  !f2py intent (out) temp
  call dv(temp, Vnow, N)

  DO ic = 1, N
 Vout( ic ) = temp(ic)
  END DO

END SUBROUTINE CallbackTest

$ f2py -c test.f90 -m t --fcompiler=gnu95

>>> from numpy import *
>>> from t import *
>>> arr = array([2.0, 4.0, 6.0, 8.0])
>>> def dV(v):
...     print 'in Python dV: V is: ', v
...     ret = v.copy()
...     ret[1] = 100.0
...     return ret
...
>>> output = callbacktest(dV, arr)
in Python dV: V is:  [ 2.  4.  6.  8.]
>>> output
array([   2.,  100.,    6.,    8.])

What problems do you have with implicit none? It works
fine here. Check the format of your source code:
if it is free form, use the `.f90` extension, not `.f`.

HTH,
Pearu

Jon Moore wrote:
  Hi,
 
 I'm trying to build a differential equation integrator and later a
 stochastic differential equation integrator.
 
 I'm having trouble getting f2py to work where the callback itself
 receives an array from the Fortran routine does some work on it and then
 passes an array back.  
 
 For the stochastic integrator I'll need 2 callbacks, both dealing with
 arrays.
 
 The idea is the code that never changes (ie the integrator) will be in
 Fortran and the code that changes (ie the callbacks defining
 differential equations) will be different for each problem.
 
 To test the idea I've written basic code which should pass an array back
 and forth between Python and Fortran if it works right.
 
 Here is some code which doesn't work properly:-
 
 SUBROUTINE CallbackTest(dv,v0,Vout,N)
 !IMPLICIT NONE
 
 cF2PY intent( hide ):: N
 INTEGER:: N, ic
 
 EXTERNAL:: dv
 
 DOUBLE PRECISION, DIMENSION( N ), INTENT(IN):: v0
 DOUBLE PRECISION, DIMENSION( N ), INTENT(OUT):: Vout
 
 DOUBLE PRECISION, DIMENSION( N ):: Vnow
 DOUBLE PRECISION, DIMENSION( N )::  temp
 
 Vnow = v0
 
 
 temp = dv(Vnow, N)
 
 DO ic = 1, N
 Vout( ic ) = temp(ic)
 END DO
 
 END SUBROUTINE CallbackTest
 
 
 
 When I test it with this python code I find the code just replicates the
 first term of the array!
 
 
 
 
 from numpy import *
 import callback as c
 
 def dV(v):
 print 'in Python dV: V is: ',v
 return v.copy()
 
 arr = array([2.0, 4.0, 6.0, 8.0])
 
 print 'Arr is: ', arr
 
 output = c.CallbackTest(dV, arr)
 
 print 'Out is: ', output
 
 
 
 
 Arr is:  [ 2.  4.  6.  8.]
 
 in Python dV: V is:  [ 2.  4.  6.  8.]
 
 Out is:  [ 2.  2.  2.  2.]
 
 
 
 Any ideas how I should do this, and also how do I get the code to work
 with implicit none not commented out?
 
 Thanks
 
 Jon
 
 
 
 
 


Re: [Numpy-discussion] TypeError: 'module' object is not callable

2010-01-12 Thread Alan G Isaac
>>> filter(lambda x: x.startswith('eig'), dir(np.linalg))
['eig', 'eigh', 'eigvals', 'eigvalsh']
>>> import scipy.linalg as spla
>>> filter(lambda x: x.startswith('eig'), dir(spla))
['eig', 'eig_banded', 'eigh', 'eigvals', 'eigvals_banded', 'eigvalsh']

hth,
Alan Isaac



Re: [Numpy-discussion] TypeError: 'module' object is not callable

2010-01-12 Thread Alan G Isaac
On 1/12/2010 1:35 AM, Jankins wrote:
 >>> from scipy.sparse.linalg.eigen import eigen
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ImportError: cannot import name eigen


Look at David's example:
from scipy.sparse.linalg import eigen

hth,
Alan Isaac


Re: [Numpy-discussion] TypeError: 'module' object is not callable

2010-01-12 Thread Jankins
>>> import scipy.sparse.linalg as linalg
>>> dir(linalg)
['LinearOperator', 'Tester', '__all__', '__builtins__', '__doc__', '__file__',
 '__name__', '__package__', '__path__', 'aslinearoperator', 'bench', 'bicg',
 'bicgstab', 'cg', 'cgs', 'dsolve', 'eigen', 'factorized', 'gmres', 'interface',
 'isolve', 'iterative', 'linsolve', 'lobpcg', 'minres', 'qmr', 'splu', 'spsolve',
 'test', 'umfpack', 'use_solver', 'utils']
>>> dir(linalg.eigen)
['Tester', '__all__', '__builtins__', '__doc__', '__file__', '__name__',
 '__package__', '__path__', 'bench', 'lobpcg', 'test']
>>> linalg.eigen.test()
Running unit tests for scipy.sparse.linalg.eigen
NumPy version 1.3.0
NumPy is installed in C:\Python26\lib\site-packages\numpy
SciPy version 0.7.1
SciPy is installed in C:\Python26\lib\site-packages\scipy
Python version 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)]
nose version 0.11.1
..........
----------------------------------------------------------------------
Ran 10 tests in 2.240s

OK
<nose.result.TextTestResult run=10 errors=0 failures=0>
 



On 1/12/2010 8:15 AM, Alan G Isaac wrote:
 On 1/12/2010 1:35 AM, Jankins wrote:

   from scipy.sparse.linalg.eigen import eigen

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ImportError: cannot import name eigen
  

 Look at David's example:
 from scipy.sparse.linalg import eigen

 hth,
 Alan Isaac


Re: [Numpy-discussion] TypeError: 'module' object is not callable

2010-01-12 Thread Arnar Flatberg
On Tue, Jan 12, 2010 at 4:11 PM, Jankins andyjian430...@gmail.com wrote:

Hi

On my Ubuntu, I would reach the arpack wrapper as follows:

from scipy.sparse.linalg.eigen.arpack import eigen

However, I'd guess that you deal with a symmetric matrix (Laplacian or
adjacency matrix), so the symmetric solver might be the best choice.

This might be reached by:

In [29]: from scipy.sparse.linalg.eigen.arpack import eigen_symmetric
In [30]: scipy.__version__
Out[30]: '0.7.0'
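
For completeness, a minimal sketch of actually calling it (the diagonal matrix
below is just a hypothetical stand-in for a real sparse symmetric matrix, and
the scipy 0.7 keyword k is assumed):

In [31]: import numpy as np
In [32]: from scipy import sparse
In [33]: A = sparse.csr_matrix(np.diag(np.arange(1.0, 101.0)))
In [34]: w, v = eigen_symmetric(A, k=6)   # six largest-magnitude eigenpairs (ARPACK default)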


Arnar


[Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread stephen.pascoe
We have noticed the MaskedArray implementation in numpy-1.4.0 breaks
some of our code.  For instance we see the following:
 
in 1.3.0:

>>> a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
>>> numpy.ma.sum(a, 1)
masked_array(data = [ 6 15],
             mask = False,
       fill_value = 999999)

in 1.4.0

>>> a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
>>> numpy.ma.sum(a, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 5682, in __call__
    return method(*args, **params)
  File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 4357, in sum
    newmask = _mask.all(axis=axis)
ValueError: axis(=1) out of bounds


Also note the Report Bugs link on http://numpy.scipy.org is broken
(http://numpy.scipy.org/bug-report.html)

Thanks,
Stephen.
 
---
Stephen Pascoe  +44 (0)1235 445980
British Atmospheric Data Centre
Rutherford Appleton Laboratory
-- 
Scanned by iCritical.


Re: [Numpy-discussion] TypeError: 'module' object is not callable

2010-01-12 Thread Jankins

Thanks so so much.

Finally, it works.

>>> import scipy.sparse.linalg.eigen.arpack as arpack
>>> dir(arpack)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__',
 '_arpack', 'arpack', 'aslinearoperator', 'eigen', 'eigen_symmetric', 'np',
 'speigs', 'warnings']


But I still don't get it: why can some of you directly use 
scipy.sparse.linalg.eigen as a function, while others can't use 
it that way?


Anyway, your solution works for me.
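
The difference seems to come down to which scipy layout is installed. A
version-tolerant import along these lines should sidestep the ambiguity
(just a sketch, not tested against every scipy release):

import scipy.sparse.linalg as spla

eigen = getattr(spla, 'eigen', None)
if not callable(eigen):
    # in scipy 0.7.x, scipy.sparse.linalg.eigen is a subpackage,
    # and the ARPACK solvers live one level down
    from scipy.sparse.linalg.eigen.arpack import eigen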

On 1/12/2010 9:19 AM, Arnar Flatberg wrote:



On Tue, Jan 12, 2010 at 4:11 PM, Jankins andyjian430...@gmail.com wrote:


Hi

On my Ubuntu, I would reach the arpack wrapper as follows:

from scipy.sparse.linalg.eigen.arpack import eigen

However, I'd guess that you deal with a symmetric matrix (Laplacian or 
adjacency matrix), so the symmetric solver might be the best choice.


This might be reached by:

In [29]: from scipy.sparse.linalg.eigen.arpack import eigen_symmetric
In [30]: scipy.__version__
Out[30]: '0.7.0'


Arnar




Re: [Numpy-discussion] numpy1.4 dtype issues: scipy.stats pytables

2010-01-12 Thread denis
On 11/01/2010 18:10, josef.p...@gmail.com wrote:

 For this problem, it's supposed to be only those packages that have or
 import cython generated code.

Right; is this a known bug, and is there a known fix for the Mac dmgs?
(Whisper: how'd it get past testing?)

scipy/stats/__init__.py has an apparent patch which doesn't work
  #remove vonmises_cython from __all__, I don't know why it is included
  __all__ = filter(lambda s: not (s.startswith('_') or s.endswith('cython')), dir())

but just removing vonmises_cython in distributions.py
=> import scipy.stats then works.

Similarly, import scipy.cluster => trace
  File "numpy.pxd", line 30, in scipy.spatial.ckdtree (scipy/spatial/ckdtree.c:6087)
ValueError: numpy.dtype does not appear to be the correct type object

I like the naming convention xx_cython.so.

cheers
   -- denis




Re: [Numpy-discussion] numpy1.4 dtype issues: scipy.stats pytables

2010-01-12 Thread Robert Kern
On Tue, Jan 12, 2010 at 10:33, denis denis-bz...@t-online.de wrote:
 On 11/01/2010 18:10, josef.p...@gmail.com wrote:

 For this problem, it's supposed to be only those packages that have or
 import cython generated code.

 Right; is this a known bug, is there a known fix  for mac dmgs ?
 (Whisper, how'd it get past testing ?)

It's not a bug, but it is a known issue. We tried very hard to keep
numpy 1.4 binary compatible; however, Pyrex and Cython impose
additional runtime checks above and beyond binary compatibility.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] numpy1.4 dtype issues: scipy.stats pytables

2010-01-12 Thread josef . pktd
On Tue, Jan 12, 2010 at 11:33 AM, denis denis-bz...@t-online.de wrote:
 On 11/01/2010 18:10, josef.p...@gmail.com wrote:

 For this problem, it's supposed to be only those packages that have or
 import cython generated code.

 Right; is this a known bug, is there a known fix  for mac dmgs ?
 (Whisper, how'd it get past testing ?)

Switching to numpy 1.4 requires recompiling cython code (i.e. scipy),
there's a lot of information on the details in the mailing lists.


 scipy/stats/__init__.py has an apparent patch which doesn't work
     #remove vonmises_cython from __all__, I don't know why it is included
     __all__ = filter(lambda s:not (s.startswith('_') or 
 s.endswith('cython')),dir())

No, this is unrelated; it is just there to reduce namespace pollution in __all__.

vonmises_cython is still imported as an internal module and its functions
are used in distributions.

Josef


 but just removing vonmises_cython in distributions.py
 = import scipy.stats then works.

Then, I expect you will get an import error or some other exception
when you try to use stats.vonmises.


 Similarly import scipy.cluster = trace
   File numpy.pxd, line 30, in scipy.spatial.ckdtree 
 (scipy/spatial/ckdtree.c:6087)
 ValueError: numpy.dtype does not appear to be the correct type object

 I like the naming convention xx_cython.so.

 cheers
   -- denis




Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Pierre GM
On Jan 12, 2010, at 10:52 AM, stephen.pas...@stfc.ac.uk wrote:
 We have noticed the MaskedArray implementation in numpy-1.4.0 breaks
 some of our code.  For instance we see the following:

My, that's embarrassing. Sorry for the inconvenience.



 
 in 1.3.0:
 
 a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
 numpy.ma.sum(a, 1)
 masked_array(data = [ 6 15],
 mask = False,
 fill_value = 99)
 
 in 1.4.0
 
 a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
 numpy.ma.sum(a, 1)
 Traceback (most recent call last):
  File stdin, line 1, in module
  File
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
 umpy/ma/core.py, line 5682, in __call__
return method(*args, **params)
  File
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
 umpy/ma/core.py, line 4357, in sum
newmask = _mask.all(axis=axis)
 ValueError: axis(=1) out of bounds

Confirmed.
Before I take full blame for it, can you try the following on both 1.3 and 1.4?
>>> np.array(False).all().sum(1)

Back to your problem: I'll fix that ASAIC, but it'll be on the SVN. Meanwhile, 
you can (see the sketch below):
* Use -1 instead of 1 for your axis.
* Force the definition of a mask when you define your array with 
  masked_array(..., mask=False)
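
A minimal sketch of those two workarounds on the example array (illustrative
only, following the suggestions above rather than verified 1.4.0 output):

import numpy

a = numpy.ma.MaskedArray([[1, 2, 3], [4, 5, 6]])
s1 = numpy.ma.sum(a, -1)   # workaround 1: negative axis
b = numpy.ma.masked_array([[1, 2, 3], [4, 5, 6]], mask=False)
s2 = numpy.ma.sum(b, 1)    # workaround 2: explicit mask forced at construction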






[Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Sebastian Walter
Hello,
I have a question about the augmented assignment statements *=, +=, etc.
Apparently, the casting of types is not working correctly. Is this
known or intended behavior of numpy?
(I'm using numpy.__version__ = '1.4.0.dev7039' on this machine but I
remember a recent checkout of numpy yielded the same result).

The problem is best explained with some examples:

wrong casting from float to int::

In [1]: import numpy

In [2]: x = numpy.ones(2,dtype=int)

In [3]: y = 1.3 * numpy.ones(2,dtype=float)

In [4]: z = x * y

In [5]: z
Out[5]: array([ 1.3,  1.3])

In [6]: x *= y

In [7]: x
Out[7]: array([1, 1])

In [8]: x.dtype
Out[8]: dtype('int32')

 wrong casting from float to object::

In [1]: import numpy

In [2]: import adolc

In [3]: x = adolc.adouble(numpy.array([1,2,3],dtype=float))

In [4]: y = numpy.array([4,5,6],dtype=float)

In [5]: x
Out[5]: array([1(a), 2(a), 3(a)], dtype=object)

In [6]: y
Out[6]: array([ 4.,  5.,  6.])

In [7]: x * y
Out[7]: array([4(a), 10(a), 18(a)], dtype=object)

In [8]: y *= x

In [9]: y

Out[9]: array([ 4.,  5.,  6.])


It is inconsistent with the Python behavior::

In [9]: a = 1

In [10]: b = 1.3

In [11]: c = a * b

In [12]: c
Out[12]: 1.3

In [13]: a *= b

In [14]: a
Out[14]: 1.3


I would expect that numpy should at least raise an exception in the
case of casting object to float.
Any thoughts?

regards,
Sebastian


Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Robert Kern
On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
sebastian.wal...@gmail.com wrote:
 Hello,
 I have a question about the augmented assignment statements *=, +=, etc.
 Apparently, the casting of types is not working correctly. Is this
 known resp. intended behavior of numpy?

Augmented assignment modifies numpy arrays in-place, so the usual
casting rules for assignment into an array apply. Namely, the array
being assigned into keeps its dtype.

If you do not want in-place modification, do not use augmented assignment.
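
For example, with the arrays from the original post, binding a new name
upcasts, while in-place assignment casts the result back into the target's
dtype:

import numpy as np

x = np.ones(2, dtype=int)
y = 1.3 * np.ones(2)

z = x * y    # new float64 array: array([ 1.3,  1.3])
x *= y       # in-place: the float result is cast back to x's int dtype,
             # so x stays array([1, 1]) and x.dtype is unchanged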

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


[Numpy-discussion] sphinx numpydoc fails due to no __init__ for class SignedType

2010-01-12 Thread Peter Caldwell
I'm trying to use sphinx to build documentation for our project (CDAT) 
that uses numpy.  I'm running into an exception due to 
numpy.numarray.numerictypes.SignedType not having an __init__ attribute, 
which causes problems with numpydoc.  I'm sure there must be a 
workaround or I'm doing something wrong since the basic numpy 
documentation is created with sphinx!  Suggestions?

I'm using sphinx v1.0, numpy v1.3.0, and numpydoc v0.3.1 on Red Hat 
Enterprise 5.x.

Big thanks,
Peter

ps - I'm sending this question to both Numpy-discussion and 
sphinx-...@googlegroups because the issue lies at the intersection of 
these groups.

Here's the error:
=
Running Sphinx v1.0
loading pickled environment... not found
building [html]: targets for 6835 source files that are out of date
updating environment: 6835 added, 0 changed, 0 removed
/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/ext/docscrape.py:117: UserWarning: Unknown section Unary Ufuncs:
  warn("Unknown section %s" % key)
/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/ext/docscrape.py:117: UserWarning: Unknown section Binary Ufuncs:
  warn("Unknown section %s" % key)
/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/ext/docscrape.py:117: UserWarning: Unknown section Seealso
  warn("Unknown section %s" % key)
reading sources... [  3%] output/lev0/numpy.numarray
Exception occurred:
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/ext/numpydoc.py", line 76, in mangle_signature
    'initializes x; see ' in pydoc.getdoc(obj.__init__)):
AttributeError: class SignedType has no attribute '__init__'
The full traceback has been saved in /tmp/sphinx-err-fprbpu.log, if you
want to report the issue to the author.
Please also report this if it was a user error, so that a better error
message can be provided next time.
Send reports to sphinx-...@googlegroups.com. Thanks!
make: *** [html] Error 1
=
Here's the full traceback:

# Sphinx version: 1.0
# Docutils version: 0.6 release
Traceback (most recent call last):
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/cmdline.py", line 172, in main
    app.build(all_files, filenames)
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/application.py", line 130, in build
    self.builder.build_update()
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/builders/__init__.py", line 265, in build_update
    'out of date' % len(to_build))
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/builders/__init__.py", line 285, in build
    purple, length):
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/builders/__init__.py", line 131, in status_iterator
    for item in iterable:
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/environment.py", line 513, in update_generator
    self.read_doc(docname, app=app)
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/Sphinx-1.0dev_20091202-py2.5.egg/sphinx/environment.py", line 604, in read_doc
    pub.publish()
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/core.py", line 203, in publish
    self.settings)
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/readers/__init__.py", line 69, in read
    self.parse()
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/readers/__init__.py", line 75, in parse
    self.parser.parse(self.input, document)
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/parsers/rst/__init__.py", line 157, in parse
    self.statemachine.run(inputlines, document, inliner=self.inliner)
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/parsers/rst/states.py", line 170, in run
    input_source=document['source'])
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/statemachine.py", line 233, in run
    context, state, transitions)
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/statemachine.py", line 421, in check_line
    return method(match, context, next_state)
  File "/usr/local/cdat/release/5.2d/lib/python2.5/site-packages/docutils/parsers/rst/states.py", line 2678, in underline
    self.section(title, source, style, lineno - 1, messages)
  File 

Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread josef . pktd
On Tue, Jan 12, 2010 at 1:05 PM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
 Hello,
 I have a question about the augmented assignment statements *=, +=, etc.
 Apparently, the casting of types is not working correctly. Is this
 known resp. intended behavior of numpy?
 (I'm using numpy.__version__ = '1.4.0.dev7039' on this machine but I
 remember a recent checkout of numpy yielded the same result).

 The problem is best explained at some examples:

 wrong casting from float to int::

            In [1]: import numpy

            In [2]: x = numpy.ones(2,dtype=int)

            In [3]: y = 1.3 * numpy.ones(2,dtype=float)

            In [4]: z = x * y

            In [5]: z
            Out[5]: array([ 1.3,  1.3])

            In [6]: x *= y

            In [7]: x
            Out[7]: array([1, 1])

            In [8]: x.dtype
            Out[8]: dtype('int32')

  wrong casting from float to object::

            In [1]: import numpy

            In [2]: import adolc

            In [3]: x = adolc.adouble(numpy.array([1,2,3],dtype=float))

            In [4]: y = numpy.array([4,5,6],dtype=float)

            In [5]: x
            Out[5]: array([1(a), 2(a), 3(a)], dtype=object)

            In [6]: y
            Out[6]: array([ 4.,  5.,  6.])

            In [7]: x * y
            Out[7]: array([4(a), 10(a), 18(a)], dtype=object)

            In [8]: y *= x

            In [9]: y

            Out[9]: array([ 4.,  5.,  6.])


        It is inconsistent to the Python behavior::

            In [9]: a = 1

            In [10]: b = 1.3

            In [11]: c = a * b

            In [12]: c
            Out[12]: 1.3

            In [13]: a *= b

            In [14]: a
            Out[14]: 1.3


 I would expect that numpy should at least raise an exception in the
 case of casting object to float.
 Any thoughts?

You are assigning to an existing array, which implies casting to the
dtype of that array. It's the behavior that I would expect. If you
want upcasting, then don't use the in-place *=, etc.

Josef



 regards,
 Sebastian


Re: [Numpy-discussion] fromfile() -- aarrgg!

2010-01-12 Thread Christopher Barker
Pauli Virtanen wrote:
 ma, 2010-01-11 kello 16:11 -0800, Christopher Barker kirjoitti:
 [clip]
 If no conversion is performed, zero is returned and the value of nptr 
 is stored in the location referenced by endptr.

 Off to do some more testing, but I guess that means that those pointers 
 need to be checked after the call, to see if a conversion was generated.

 Am I right?
 
 Yes, that's how strtod() is typically used.
 
 NumPyOS_ascii_ftolf already checks that,

no, I don't think it does, but it does pass the info through, so its 
API should be the same as PyOS_ascii_ftolf which is the same as 
strftolf(), which makes sense.

 but it seems to me that
 fromstr_next_element or possibly fromstr does not.

The problem is fromstr -- it changes the semantics, assigning the value 
to a pointer passed in, and returning an error code -- except it doesn't 
actually check for an error -- it always returns 0:

static int
@fn...@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr 
*NPY_UNUSED(ignore))
{
 double result;
 result = NumPyOS_ascii_strtod(str, endptr);
 *ip = (@type@) result;
 return 0;
}

so the errors are getting lost in the shuffle. This implies that 
fromstring/fromfile are the only things using it -- unless someone has 
seen similar bad behaviour anywhere else.

 Welcome to the wonderful world of C ;)

yup -- which is why I haven't worked out a fix yet...

Thanks,

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Sebastian Walter
On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern robert.k...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
 sebastian.wal...@gmail.com wrote:
 Hello,
 I have a question about the augmented assignment statements *=, +=, etc.
 Apparently, the casting of types is not working correctly. Is this
 known resp. intended behavior of numpy?

 Augmented assignment modifies numpy arrays in-place, so the usual
 casting rules for assignment into an array apply. Namely, the array
 being assigned into keeps its dtype.

what are the usual casting rules?
How does numpy know how to cast an object to a float?



 If you do not want in-place modification, do not use augmented assignment.

Normally, I'd be perfectly fine with that.
However, this particular problem occurs when you try to automatically
differentiate an algorithm by using an Algorithmic Differentiation
(AD) tool.
E.g. given a function

x = numpy.ones(2)
def f(x):
   a = numpy.ones(2)
   a *= x
   return numpy.sum(a)

one would use an AD tool as follows:
x = numpy.array([adouble(1.), adouble(1.)])
y = f(x)

but since the casting from object to float is not possible the
computed gradient \nabla_x f(x) will be wrong.



 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Pauli Virtanen
ti, 2010-01-12 kello 12:51 -0500, Pierre GM kirjoitti:
[clip] 
  a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
  numpy.ma.sum(a, 1)
  Traceback (most recent call last):
   File stdin, line 1, in module
   File
  /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
  umpy/ma/core.py, line 5682, in __call__
 return method(*args, **params)
   File
  /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
  umpy/ma/core.py, line 4357, in sum
 newmask = _mask.all(axis=axis)
  ValueError: axis(=1) out of bounds
 
 Confirmed.
 Before I take full blame for it, can you try the following on both 1.3 and 
 1.4 ?
  np.array(False).all().sum(1)

Oh crap, it's mostly my fault:

http://projects.scipy.org/numpy/ticket/1286
http://projects.scipy.org/numpy/changeset/7697
http://projects.scipy.org/numpy/browser/trunk/doc/release/1.4.0-notes.rst#deprecations

Pretty embarrassing, as very simple things break, although the test suite
miraculously passes...

 Back to your problem: I'll fix that ASAIC, but it'll be on the SVN. 
 Meanwhile, you can:
 * Use -1 instead of 1 for your axis.
 * Force the definition of a mask when you define your array with 
 masked_array(...,mask=False)

Sounds like we need a 1.4.1 out at some point not too far in the future,
then.

Pauli




Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Robert Kern
On Tue, Jan 12, 2010 at 12:31, Sebastian Walter
sebastian.wal...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern robert.k...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
 sebastian.wal...@gmail.com wrote:
 Hello,
 I have a question about the augmented assignment statements *=, +=, etc.
 Apparently, the casting of types is not working correctly. Is this
 known resp. intended behavior of numpy?

 Augmented assignment modifies numpy arrays in-place, so the usual
 casting rules for assignment into an array apply. Namely, the array
 being assigned into keeps its dtype.

 what are the usual casting rules?

For assignment into an array, the array keeps its dtype and the data
being assigned into it will be cast to that dtype.

 How does numpy know how to cast an object to a float?

For a general object, numpy will call its __float__ method.
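
A tiny illustration of that path (the class here is a made-up stand-in, not
adolc's adouble):

import numpy as np

class Two(object):
    def __float__(self):   # float() on the object lands here
        return 2.0

y = np.array([4.0, 5.0, 6.0])
y[0] = Two()               # assignment into the float array casts via __float__
# y is now array([ 2.,  5.,  6.])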

 If you do not want in-place modification, do not use augmented assignment.

 Normally, I'd be perfectly fine with that.
 However, this particular problem occurs when you try to automatically
 differentiate an algorithm by using an Algorithmic Differentiation
 (AD) tool.
 E.g. given a function

 x = numpy.ones(2)
 def f(x):
   a = numpy.ones(2)
   a *= x
   return numpy.sum(a)

 one would use an AD tool as follows:
 x = numpy.array([adouble(1.), adouble(1.)])
 y = f(x)

 but since the casting from object to float is not possible the
 computed gradient \nabla_x f(x) will be wrong.

Sorry, but that's just a limitation of the AD approach. There are all
kinds of numpy constructions that AD can't handle.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Charles R Harris
On Tue, Jan 12, 2010 at 11:32 AM, Pauli Virtanen p...@iki.fi wrote:

 ti, 2010-01-12 kello 12:51 -0500, Pierre GM kirjoitti:
 [clip]
   a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
   numpy.ma.sum(a, 1)
   Traceback (most recent call last):
File stdin, line 1, in module
File
  
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
   umpy/ma/core.py, line 5682, in __call__
  return method(*args, **params)
File
  
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
   umpy/ma/core.py, line 4357, in sum
  newmask = _mask.all(axis=axis)
   ValueError: axis(=1) out of bounds
 
  Confirmed.
  Before I take full blame for it, can you try the following on both 1.3
 and 1.4 ?
   np.array(False).all().sum(1)

 Oh crap, it's mostly my fault:

 http://projects.scipy.org/numpy/ticket/1286
 http://projects.scipy.org/numpy/changeset/7697

 http://projects.scipy.org/numpy/browser/trunk/doc/release/1.4.0-notes.rst#deprecations

 Pretty embarassing, as very simple things break, although the test suite
 miraculously passes...

  Back to your problem: I'll fix that ASAIC, but it'll be on the SVN.
 Meanwhile, you can:
  * Use -1 instead of 1 for your axis.
  * Force the definition of a mask when you define your array with
 masked_array(...,mask=False)

 Sounds like we need a 1.4.1 out at some point not too far in the future,
 then.


If so, then it should be sooner rather than later in order to sync with the
releases of Ubuntu and Fedora. Both of the upcoming releases still use
1.3.0, but that could change...

Chuck


Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Sebastian Walter
On Tue, Jan 12, 2010 at 7:38 PM, Robert Kern robert.k...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 12:31, Sebastian Walter
 sebastian.wal...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern robert.k...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
 sebastian.wal...@gmail.com wrote:
 Hello,
 I have a question about the augmented assignment statements *=, +=, etc.
 Apparently, the casting of types is not working correctly. Is this
 known resp. intended behavior of numpy?

 Augmented assignment modifies numpy arrays in-place, so the usual
 casting rules for assignment into an array apply. Namely, the array
 being assigned into keeps its dtype.

 what are the usual casting rules?

 For assignment into an array, the array keeps its dtype and the data
 being assigned into it will be cast to that dtype.

 How does numpy know how to cast an object to a float?

 For a general object, numpy will call its __float__ method.

1) The object does not have a __float__ method.

2) I've now implemented the __float__ method (to raise an error).
However, it doesn't get called; all objects are cast to 1.





 If you do not want in-place modification, do not use augmented assignment.

 Normally, I'd be perfectly fine with that.
 However, this particular problem occurs when you try to automatically
 differentiate an algorithm by using an Algorithmic Differentiation
 (AD) tool.
 E.g. given a function

 x = numpy.ones(2)
 def f(x):
   a = numpy.ones(2)
   a *= x
   return numpy.sum(a)

 one would use an AD tool as follows:
 x = numpy.array([adouble(1.), adouble(1.)])
 y = f(x)

 but since the casting from object to float is not possible the
 computed gradient \nabla_x f(x) will be wrong.

 Sorry, but that's just a limitation of the AD approach. There are all
 kinds of numpy constructions that AD can't handle.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Christopher Barker
Sebastian Walter wrote:
 However, this particular problem occurs when you try to automatically
 differentiate an algorithm by using an Algorithmic Differentiation
 (AD) tool.
 E.g. given a function

 x = numpy.ones(2)
 def f(x):
   a = numpy.ones(2)
   a *= x
   return numpy.sum(a)

I don't know anything about AD, but in general, when I write a function 
that requires a given numpy array type as input, I'll do something like:

def f(x):
   x = np.asarray(x, dtype=np.float)
   a = np.ones(2)
   a *= x
   return np.sum(a)


That makes the casting explicit, and forces it to happen at the top of 
the function, where the error will be more obvious. asarray will just 
pass through a conforming array, so little performance penalty when you 
do give it the right type.

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Pierre GM
On Jan 12, 2010, at 1:52 PM, Charles R Harris wrote:
 
 
 
 On Tue, Jan 12, 2010 at 11:32 AM, Pauli Virtanen p...@iki.fi wrote:
 ti, 2010-01-12 kello 12:51 -0500, Pierre GM kirjoitti:
 [clip]
   a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
   numpy.ma.sum(a, 1)
   Traceback (most recent call last):
File stdin, line 1, in module
File
   /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
   umpy/ma/core.py, line 5682, in __call__
  return method(*args, **params)
File
   /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
   umpy/ma/core.py, line 4357, in sum
  newmask = _mask.all(axis=axis)
   ValueError: axis(=1) out of bounds
 
  Confirmed.
  Before I take full blame for it, can you try the following on both 1.3 and 
  1.4 ?
   np.array(False).all().sum(1)
 
 Oh crap, it's mostly my fault:
 
 http://projects.scipy.org/numpy/ticket/1286
 http://projects.scipy.org/numpy/changeset/7697
 http://projects.scipy.org/numpy/browser/trunk/doc/release/1.4.0-notes.rst#deprecations
 
 Pretty embarassing, as very simple things break, although the test suite
 miraculously passes...
 
  Back to your problem: I'll fix that ASAIC, but it'll be on the SVN. 
  Meanwhile, you can:
  * Use -1 instead of 1 for your axis.
  * Force the definition of a mask when you define your array with 
  masked_array(...,mask=False)
 
 Sounds like we need a 1.4.1 out at some point not too far in the future,
 then.
 
 
 If so, then it should be sooner rather than later in order to sync with the 
 releases of ubuntu and fedora. Both of the upcoming releases still use 1.3.0, 
 but that could change...

I guess that the easiest would be for me to provide a workaround for the bug 
(Pauli's modifications make sense; I was relying on a *feature* that wasn't 
very robust).
I'll update both the trunk and the 1.4.x branch.


[Numpy-discussion] numpy sum table by category

2010-01-12 Thread Marc Schwarzschild


I have a csv file like this:

Account, Symbol, Quantity, Price
One,SPY,5,119.00
One,SPY,3,120.00
One,SPY,-2,125.00
One,GE,...
One,GE,...
Two,SPY, ...
Three,GE, ...
 ...

The data is much larger; it could be 10,000 records.  I can load it
into a numpy array using matplotlib.mlab.csv2rec().  I have learned
several useful numpy functions and have been reading lots of
documentation.  However, I have not found a way to create a
unique list of symbols and the sum of their respective Quantity
values.  I want to do various calculations on the data, like pulling out
all the records for a given Account.  The actual data has lots
more columns, and sometimes I may want to sum(Quantity*Price) by
Account and Symbol.

I'm attracted to numpy for speed but would welcome alternative
suggestions.

I tried unsuccessfully to install PyTables on my Ubuntu system
and abandoned that avenue.

Can anyone provide some examples on how to do this or point me to
documentation?

Much appreciated. 

_
Marc Schwarzschild  The Brookhaven Group, LLC
1-212-580-1175 Analytics for Hedge Fund Investors
 Risk it, carefully!
   



Re: [Numpy-discussion] numpy sum table by category

2010-01-12 Thread josef . pktd
On Tue, Jan 12, 2010 at 3:33 PM, Marc Schwarzschild
m...@thebrookhavengroup.com wrote:


 I have a csv file like this:

    Account, Symbol, Quantity, Price
    One,SPY,5,119.00
    One,SPY,3,120.00
    One,SPY,-2,125.00
    One,GE,...
    One,GE,...
    Two,SPY, ...
    Three,GE, ...
     ...

 The data is much larger, could be 10,000 records.  I can load it
 into a numpy array using matplotlib.mlab.csv2rec().  I learned
 several useful numpy functions and have been reading lots of
 documentation.  However, I have not found a way to create a
 unique list of symbols and the Sum of their respective Quantity
 values.  I want do various calculations on the data like pull out
 all the records for a given Account.  The actual data has lots
 more columns and sometimes I may want to sum(Quantity*Price) by
 Account and Symbol.

 I'm attracted to numpy for speed but would welcome alternative
 suggestions.

 I tried unsuccessfully to install PyTables on my Ubuntu system
 and abandoned that avenue.

 Can anyone provide some examples on how to do this or point me to
 documentation?

If you don't want to do a lot of programming yourself, then I
recommend tabular for this, which looks good for this kind of
spreadsheet-like operation; alternatively, pandas.
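
If you would rather stay with plain numpy, a minimal sketch of the grouping
could look like this (the file name is hypothetical, and the field names
assume csv2rec lower-cases the headers to account/symbol/quantity/price):

import numpy as np
from matplotlib.mlab import csv2rec

r = csv2rec('positions.csv')

# label each row by its symbol, then sum per label
symbols, idx = np.unique1d(r.symbol, return_inverse=True)  # np.unique in newer numpy
total_qty = np.bincount(idx, weights=r.quantity)
total_value = np.bincount(idx, weights=r.quantity * r.price)

for s, q, v in zip(symbols, total_qty, total_value):
    print s, q, v

# all the records for a given account
one = r[r.account == 'One']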

Josef



 Much appreciated.

 _
 Marc Schwarzschild              The Brookhaven Group, LLC
 1-212-580-1175         Analytics for Hedge Fund Investors
                 Risk it, carefully!




Re: [Numpy-discussion] fromfile() -- aarrgg!

2010-01-12 Thread Christopher Barker
Christopher Barker wrote:
 static int
 @fn...@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr 
 *NPY_UNUSED(ignore))
 {
  double result;
  result = NumPyOS_ascii_strtod(str, endptr);
  *ip = (@type@) result;
  return 0;
 }

OK, I've done the diagnostics, but not all of the fix. Here's the issue:

numpyos.c: NumPyOS_ascii_strtod()

Was incrementing the input pointer to strip out whitespace before 
passing it on to PyOS_ascii_strtod(). So the **endptr getting passed 
back to @fn...@_fromstr didn't match.

I've fixed that -- so now it should be possible to check if str and 
*endptr are the same after the call, to see if a double was actually 
read -- I'm not quite sure what to do in that case, but a return code is 
a good start.

However, I also took a look at integers. For example:

In [39]: np.fromstring("4.5, 3", sep=',', dtype=np.int)
Out[39]: array([4])

clearly wrong -- it may be OK to read 4.5 as 4, but then it stops, I 
guess because there is a .5 before the next sep. Anyway, not the best 
solution.
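
As a user-level workaround in the meantime, parsing as float and casting
explicitly at least makes the truncation deliberate (a sketch, not a fix
for the C code):

In [40]: np.fromstring("4.5, 3", sep=',').astype(np.int)
Out[40]: array([4, 3])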

However, in this case, the function is here:

@fn...@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr 
*NPY_UNUSED(ignore))
{
 @btype@ result;

 result = pyos_st...@func@(str, endptr, 10);
 *ip = (@type@) result;
 printf("In int fromstr - result: %i\n", result );
 printf("In int fromstr - str: '%s', %p  %p\n", str, str, *endptr);

 return 0;
}

so it's calling PyOS_strtol(), which when called on 4.5 returns 4 -- 
which explains the above behaviour -- but how to know that that wasn't a 
proper reading? This really is a mess!

Since there was just some talk about a 1.4.1 -- I'd like to get some of 
this fixed before then.

-Chris





-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov