Re: [Numpy-discussion] numpy.test() Program received signal SIGABRT, Aborted.

2010-12-02 Thread Nils Wagner
On Fri, 03 Dec 2010 08:47:32 +0100
  "Nils Wagner"  wrote:
> On Fri, 3 Dec 2010 00:42:16 -0700
>  Charles R Harris  wrote:
>> On Fri, Dec 3, 2010 at 12:29 AM, Nils Wagner
>> wrote:
>> 
>>> Hi all,
>>>
>>> I have installed the latest version of numpy.
>>>
>>> >>> numpy.__version__
>>> '2.0.0.dev-6aacc2d'
>>>
>>>
>> 
>> I don't see that here or on the buildbots. There was a problem with
>> segfaults that was fixed in commit c0e1cf27b55dfd5a. Can you check
>> that your installation is clean, etc. Also, what platform are you
>> running on?
> 
> I have removed the build directory.
> Is it also necessary to remove numpy in the installation
> directory?
>  
> /data/home/nwagner/local/lib/python2.5/site-packages/
> 
> Platform
> 
> 2.6.18-92.el5 #1 SMP Tue Jun 10 18:51:06 EDT 2008 x86_64 
> x86_64 x86_64 GNU/Linux
> 
>  
> Nils

  
I have also removed the numpy directory within 
/data/home/nwagner/local/lib/python2.5/site-packages/.
Now all tests pass.
Ran 3080 tests in 12.288s

OK (KNOWNFAIL=4, SKIP=1)
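For anyone who hits the same stale-install symptom, the whole clean
rebuild amounts to roughly the following (the source directory and
install prefix here are guesses based on the paths in this thread):

$ cd /path/to/numpy-git-checkout
$ rm -rf build
$ rm -rf /data/home/nwagner/local/lib/python2.5/site-packages/numpy
$ python setup.py install --prefix=/data/home/nwagner/local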


How is the build process implemented on the buildbots?

Nils


Re: [Numpy-discussion] numpy.test() Program received signal SIGABRT, Aborted.

2010-12-02 Thread Nils Wagner
On Fri, 3 Dec 2010 00:42:16 -0700
  Charles R Harris  wrote:
> On Fri, Dec 3, 2010 at 12:29 AM, Nils Wagner
> wrote:
> 
>> Hi all,
>>
>> I have installed the latest version of numpy.
>>
>> >>> numpy.__version__
>> '2.0.0.dev-6aacc2d'
>>
>>
> 
> I don't see that here or on the buildbots. There was a problem with
> segfaults that was fixed in commit c0e1cf27b55dfd5a. Can you check
> that your installation is clean, etc. Also, what platform are you
> running on?

I have removed the build directory.
Is it also necessary to remove numpy in the installation
directory?
  
/data/home/nwagner/local/lib/python2.5/site-packages/

Platform

2.6.18-92.el5 #1 SMP Tue Jun 10 18:51:06 EDT 2008 x86_64 
x86_64 x86_64 GNU/Linux

  
Nils


Re: [Numpy-discussion] numpy.test() Program received signal SIGABRT, Aborted.

2010-12-02 Thread Charles R Harris
On Fri, Dec 3, 2010 at 12:29 AM, Nils Wagner
wrote:

> Hi all,
>
> I have installed the latest version of numpy.
>
> >>> numpy.__version__
> '2.0.0.dev-6aacc2d'
>
>

I don't see that here or on the buildbots. There was a problem with
segfaults that was fixed in commit c0e1cf27b55dfd5a. Can you check
that your installation is clean, etc. Also, what platform are you
running on?



Chuck


[Numpy-discussion] numpy.test() Program received signal SIGABRT, Aborted.

2010-12-02 Thread Nils Wagner
Hi all,

I have installed the latest version of numpy.

>>> numpy.__version__
'2.0.0.dev-6aacc2d'

numpy.test(verbose=2) received signal SIGABRT.

test_cdouble_2 (test_linalg.TestEig) ... ok
test_csingle (test_linalg.TestEig) ... FAIL
*** glibc detected *** 
/data/home/nwagner/local/bin/python: free(): invalid next 
size (fast): 0x1c2887b0 ***
=== Backtrace: =
/lib64/libc.so.6[0x383cc71684]
/lib64/libc.so.6(cfree+0x8c)[0x383cc74ccc]
/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/multiarray.so[0x2b33e06f710e]

(gdb) bt
#0  0x00383cc30155 in raise () from /lib64/libc.so.6
#1  0x00383cc31bf0 in abort () from /lib64/libc.so.6
#2  0x00383cc6a3db in __libc_message () from 
/lib64/libc.so.6
#3  0x00383cc71684 in _int_free () from 
/lib64/libc.so.6
#4  0x00383cc74ccc in free () from /lib64/libc.so.6
#5  0x2b33e06f710e in array_dealloc (self=0x1c65fa00) 
at numpy/core/src/multiarray/arrayobject.c:209
#6  0x004d6dbb in frame_dealloc (f=0x1c65eec0) at 
Objects/frameobject.c:416

Nils


Re: [Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Pearu Peterson
Thanks!
Pearu

On Thu, Dec 2, 2010 at 11:08 PM, Charles R Harris
 wrote:
>
>
> On Thu, Dec 2, 2010 at 1:52 PM, Pearu Peterson 
> wrote:
>>
>> Hi,
>>
>> I have followed Development workflow instructions in
>>
>>  http://docs.scipy.org/doc/numpy/dev/gitwash/
>>
>> but I am having a problem with the last step:
>>
>> $ git push upstream ticket1679:master
>> fatal: remote error:
>>  You can't push to git://github.com/numpy/numpy.git
>>  Use g...@github.com:numpy/numpy.git
>>
>
> Do what the message says; the first address is read-only. You can change the
> settings in .git/config; mine looks like this:
>
> [core]
>     repositoryformatversion = 0
>     filemode = true
>     bare = false
>     logallrefupdates = true
> [remote "origin"]
>     fetch = +refs/heads/*:refs/remotes/origin/*
>     url = g...@github.com:charris/numpy
> [branch "master"]
>     remote = origin
>     merge = refs/heads/master
> [remote "upstream"]
>     url = g...@github.com:numpy/numpy
>     fetch = +refs/heads/*:refs/remotes/upstream/*
> [alias]
>     mb = merge --no-ff
>
> Where upstream is the numpy repository.
>
> Chuck
>
>


Re: [Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Charles R Harris
On Thu, Dec 2, 2010 at 1:52 PM, Pearu Peterson wrote:

> Hi,
>
> I have followed Development workflow instructions in
>
>  http://docs.scipy.org/doc/numpy/dev/gitwash/
>
> but I am having a problem with the last step:
>
> $ git push upstream ticket1679:master
> fatal: remote error:
>  You can't push to git://github.com/numpy/numpy.git
>  Use g...@github.com:numpy/numpy.git
>
>
Do what the message says; the first address is read-only. You can change the
settings in .git/config; mine looks like this:

[core]
    repositoryformatversion = 0
    filemode = true
    bare = false
    logallrefupdates = true
[remote "origin"]
    fetch = +refs/heads/*:refs/remotes/origin/*
    url = g...@github.com:charris/numpy
[branch "master"]
    remote = origin
    merge = refs/heads/master
[remote "upstream"]
    url = g...@github.com:numpy/numpy
    fetch = +refs/heads/*:refs/remotes/upstream/*
[alias]
    mb = merge --no-ff

Where upstream is the numpy repository.

Chuck


Re: [Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Fernando Perez
On Thu, Dec 2, 2010 at 12:52 PM, Pearu Peterson
 wrote:
>
> What am I doing wrong?
>
> Here's some additional info:
> $ git remote -v show
> origin  ...@github.com:pearu/numpy.git (fetch)
> origin  ...@github.com:pearu/numpy.git (push)
> upstream        git://github.com/numpy/numpy.git (fetch)
> upstream        git://github.com/numpy/numpy.git (push)

The git:// protocol is read-only; for write access you need SSH
access.  Just edit your /path-to-repo/.git/config file and change the

git://github.com/numpy/numpy.git

lines to

g...@github.com:numpy/numpy.git

in the upstream description.  That should be sufficient.
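
With a reasonably recent git, the same change can also be made without
editing the file by hand:

$ git remote set-url upstream git@github.com:numpy/numpy.git
$ git remote -v show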

Regards,

f


[Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Pearu Peterson
Hi,

I have followed Development workflow instructions in

  http://docs.scipy.org/doc/numpy/dev/gitwash/

but I am having a problem with the last step:

$ git push upstream ticket1679:master
fatal: remote error:
  You can't push to git://github.com/numpy/numpy.git
  Use g...@github.com:numpy/numpy.git

What am I doing wrong?

Here's some additional info:
$ git remote -v show
origin  g...@github.com:pearu/numpy.git (fetch)
origin  g...@github.com:pearu/numpy.git (push)
upstream        git://github.com/numpy/numpy.git (fetch)
upstream        git://github.com/numpy/numpy.git (push)
$ git branch -a
  master
* ticket1679
  remotes/origin/HEAD -> origin/master
  remotes/origin/maintenance/1.0.3.x
  remotes/origin/maintenance/1.1.x
  remotes/origin/maintenance/1.2.x
  remotes/origin/maintenance/1.3.x
  remotes/origin/maintenance/1.4.x
  remotes/origin/maintenance/1.5.x
  remotes/origin/master
  remotes/origin/ticket1679
  remotes/upstream/maintenance/1.0.3.x
  remotes/upstream/maintenance/1.1.x
  remotes/upstream/maintenance/1.2.x
  remotes/upstream/maintenance/1.3.x
  remotes/upstream/maintenance/1.4.x
  remotes/upstream/maintenance/1.5.x
  remotes/upstream/master


Thanks,
Pearu


Re: [Numpy-discussion] Float16 and PEP 3118

2010-12-02 Thread Pauli Virtanen
Thu, 02 Dec 2010 10:20:27 -0700, Charles R Harris wrote:
> Now that the float16 type is in I was wondering if we should do anything
> to support it in the PEP 3118 buffer interface. This would probably
> affect the Cython folks as well as the people working on fixing up the
> structure module for Python 3.x. 

Before introducing a PEP 3118 type code for half floats, one 
would first need to convince the Python people to add it to the struct module.

Before that, the choices probably are:

- refuse to export buffers containing half floats

- export half floats as two bytes

> There is a fairly long thread about the latter and it also looks like
> what the Python folks are doing with structure alignment isn't going to
> be compatible with Numpy structured arrays. Thoughts?

I think it would be useful for the Python people to have feedback from us 
here.

AFAIK, the native-aligned mode that was discussed there is compatible 
with what dtype(..., align=True) produces: Numpy aligns structs as given 
by the maximum alignment of its fields.
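
A small illustration of that alignment behaviour (the itemsizes shown
are what a typical x86-64 build produces):

import numpy as np

packed  = np.dtype([('a', np.uint8), ('b', np.float64)])
aligned = np.dtype([('a', np.uint8), ('b', np.float64)], align=True)

print(packed.itemsize)   # 9: the double follows the byte immediately
print(aligned.itemsize)  # 16: padded so the double sits at an 8-byte offset,
                         # and the total is rounded to the largest field alignment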

-- 
Pauli Virtanen



Re: [Numpy-discussion] printoption to allow hexified floats?

2010-12-02 Thread Benjamin Root
On Thu, Dec 2, 2010 at 11:17 AM, Ken Basye  wrote:

> Thanks for the replies.
>
> Robert is right; many numerical operations, particularly complex ones,
> generate different values across platforms, and we deal with these by
> storing the values from some platform as a reference and using
> allclose(), which requires extra work.  But many basic operations
> generate the same underlying values on IEEE 754-compliant platforms but
> don't always format floats consistently (see
> http://bugs.python.org/issue1580 for a lengthy discussion on this).  My
> impression is that Python 2.7 does a better job here, but at this point
> a lot of differences also crop up between 2.6 (or less) and 2.7 due to
> the changed formatting built into 2.7, and these are the result of
> formatting differences; the numbers themselves are identical (in our
> experience so far, at any rate).  This is a current pain-point which an
> exact representation would alleviate.
>
> In response to David, we haven't implemented a separate print; we rely
> on the Numpy repr/str for ndarrays and the printoptions that allow some
> control over float formatting.  I'm basically proposing to add a bit
> more control there.  And thanks for the info on supported versions of
> Python.
>
>  Ken
>
>
Another approach to consider is to save the numerical data in a
platform-independent standard file format (maybe like netcdf?).  While this
isn't a fool-proof approach because the calculations themselves may
introduce differences that are platform dependent, this at least puts strong
controls on one aspect of the overall problem.
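
For what it's worth, NumPy's own .npy format already stores the dtype
(including byte order) next to the raw values, so a saved reference
round-trips exactly; a minimal sketch (the file name is arbitrary):

import numpy as np

ref = np.array([0.1, 1.0/3.0, 2.0**-1074])       # reference values to pin down exactly
np.save('reference.npy', ref)                    # dtype and byte order go into the header
assert (np.load('reference.npy') == ref).all()   # exact equality, no allclose needed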

One caveat that does come to mind is whether the save/load process for the
file might have some platform-dependent differences due to the
compression/decompression schemes.  For example, the GRIB file format does a
compression where the mean value and the differences from that mean are
stored.  Calculations like these might introduce some slight differences on
various platforms.

Just food for thought,
Ben Root


[Numpy-discussion] Float16 and PEP 3118

2010-12-02 Thread Charles R Harris
Hi Folks,

Now that the float16 type is in I was wondering if we should do anything to
support it in the PEP 3118 buffer interface. This would probably affect the
Cython folks as well as the people working on fixing up the structure module
for Python 3.x. There is a fairly long
thread about the latter and it also looks like what the Python folks are
doing with structure alignment isn't going to be compatible with Numpy
structured arrays. Thoughts?

Chuck


Re: [Numpy-discussion] printoption to allow hexified floats?

2010-12-02 Thread Ken Basye
Thanks for the replies.

Robert is right; many numerical operations, particularly complex ones, 
generate different values across platforms, and we deal with these by 
storing the values from some platform as a reference and using 
allclose(), which requires extra work.  But many basic operations 
generate the same underlying values on IEEE 754-compliant platforms but 
don't always format floats consistently (see 
http://bugs.python.org/issue1580 for a lengthy discussion on this).  My 
impression is that Python 2.7 does a better job here, but at this point 
a lot of differences also crop up between 2.6 (or less) and 2.7 due to 
the changed formatting built into 2.7, and these are the result of 
formatting differences; the numbers themselves are identical (in our 
experience so far, at any rate).  This is a current pain-point which an 
exact representation would alleviate.

In response to David, we haven't implemented a separate print; we rely 
on the Numpy repr/str for ndarrays and the printoptions that allow some 
control over float formatting.  I'm basically proposing to add a bit 
more control there.  And thanks for the info on supported versions of 
Python.

  Ken
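
Roughly the kind of control being proposed can be sketched today with
the existing formatter hook; this is only an illustration, and it
assumes a NumPy recent enough to accept the formatter argument to
set_printoptions, plus Python >= 2.6 for float.hex():

import numpy as np

# print every float element in its exact hexadecimal form
np.set_printoptions(formatter={'float_kind': lambda x: float(x).hex()})
print(np.array([0.1, 1.0/3.0]))
# -> [0x1.999999999999ap-4 0x1.5555555555555p-2]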


On 12/2/10 8:14 AM, Robert Kern wrote:
> On Wed, Dec 1, 2010 at 13:18, Ken Basye  wrote:
>> Hi Numpy folks,
>> When working with floats, I prefer to have exact string
>> representations in doctests and other reference-based testing; I find it
>> helps a lot to avoid chasing cross-platform differences that are really
>> about the string conversion rather than about numerical differences.
> Unfortunately, there are still cross-platform numerical differences
> that are real (but are irrelevant to the validity of the code under
> test). Hex-printing for floats only helps a little to make doctests
> useful for numerical code.



Re: [Numpy-discussion] Threshold

2010-12-02 Thread totonixs...@gmail.com
On Thu, Dec 2, 2010 at 11:14 AM, Zachary Pincus  wrote:
>> mask = numpy.zeros(medical_image.shape, dtype="uint16")
>> mask[ numpy.logical_and( medical_image >= lower, medical_image <=
>> upper)] = 255
>>
>> Where lower and upper are the threshold bounds. Here I'm marking the
>> array positions where medical_image is between the threshold bounds
>> with 255, where isn't with 0. The question is: Is there a better
>> way to do that?
>
> This will give you a True/False boolean mask:
> mask = numpy.logical_and( medical_image >= lower, medical_image <=
> upper)
>
> And this a 0/255 mask:
> mask = 255*numpy.logical_and( medical_image >= lower, medical_image <=
> upper)
>
> You can make the code a bit more terse/idiomatic by using the bitwise
> operators, which do logical operations on boolean arrays:
> mask = 255*((medical_image >= lower) & (medical_image <= upper))
>
> Though this is a bit annoying as the bitwise ops (& | ^ ~) have higher
> precedence than the comparison ops (< <= > >=), so you need to
> parenthesize carefully, as above.
>
> Zach

Thanks, Zach! I went with the last one.


Re: [Numpy-discussion] Threshold

2010-12-02 Thread Zachary Pincus
> mask = numpy.zeros(medical_image.shape, dtype="uint16")
> mask[ numpy.logical_and( medical_image >= lower, medical_image <=  
> upper)] = 255
>
> Where lower and upper are the threshold bounds. Here I'm marking the
> array positions where medical_image is between the threshold bounds
> with 255, where isn't with 0. The question is: Is there a better  
> way to do that?

This will give you a True/False boolean mask:
mask = numpy.logical_and( medical_image >= lower, medical_image <=  
upper)

And this a 0/255 mask:
mask = 255*numpy.logical_and( medical_image >= lower, medical_image <=  
upper)

You can make the code a bit more terse/idiomatic by using the bitwise  
operators, which do logical operations on boolean arrays:
mask = 255*((medical_image >= lower) & (medical_image <= upper))

Though this is a bit annoying as the bitwise ops (& | ^ ~) have higher  
precedence than the comparison ops (< <= > >=), so you need to  
parenthesize carefully, as above.

Zach
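
For reference, a self-contained version of that last form (the image
array below is just a random stand-in for a CT slice, and lower/upper
are arbitrary):

import numpy as np

# stand-in data: a fake 512x512 CT slice
medical_image = np.random.randint(-1024, 3072, (512, 512)).astype(np.int16)
lower, upper = 200, 500

# boolean threshold mask, scaled to 0/255 and cast to the uint16 the application uses
mask = (255 * ((medical_image >= lower) & (medical_image <= upper))).astype(np.uint16)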


On Dec 2, 2010, at 7:35 AM, totonixs...@gmail.com wrote:

> Hi all,
>
> I'm developing a medical software named InVesalius [1], it is a free
> software. It uses numpy arrays to store the medical images (CT and
> MRI) and the mask, the mask is used to mark the region of interest and
> to create 3D surfaces. Those array generally have 512x512 elements.
> The mask is created based in threshold, with lower and upper bound,
> this way:
>
> mask = numpy.zeros(medical_image.shape, dtype="uint16")
> mask[ numpy.logical_and( medical_image >= lower, medical_image <=  
> upper)] = 255
>
> Where lower and upper are the threshold bounds. Here I'm marking the
> array positions where medical_image is between the threshold bounds
> with 255, where isn't with 0. The question is: Is there a better way
> to do that?
>
> Thank!
>
> [1] - svn.softwarepublico.gov.br/trac/invesalius


[Numpy-discussion] Threshold

2010-12-02 Thread totonixs...@gmail.com
Hi all,

I'm developing a medical software named InVesalius [1]; it is free
software. It uses numpy arrays to store the medical images (CT and
MRI) and the mask; the mask is used to mark the region of interest and
to create 3D surfaces. Those arrays generally have 512x512 elements.
The mask is created based on a threshold, with lower and upper bounds,
this way:

mask = numpy.zeros(medical_image.shape, dtype="uint16")
mask[ numpy.logical_and( medical_image >= lower, medical_image <= upper)] = 255

Where lower and upper are the threshold bounds. Here I'm marking the
array positions where medical_image is between the threshold bounds
with 255, and the rest with 0. The question is: Is there a better way
to do that?

Thanks!

[1] - svn.softwarepublico.gov.br/trac/invesalius


Re: [Numpy-discussion] broadcasting with numpy.interp

2010-12-02 Thread Friedrich Romstedt
2010/12/1 greg whittier :
> On Wed, Nov 24, 2010 at 3:16 PM, Friedrich Romstedt
>  wrote:
>> I assume you just need *some* interpolation, not that specific one?
>> In that case, I'd suggest the following:
>>
>> 1)  Use a 2d interpolation, taking into account all nearest neighbours.
>> 2)  For this, use a looped interpolation in this nearest-neighbour sense:
>>    a)  Generate sums of all unmasked nearest-neighbour values
>>    b)  Generate counts for the nearest neighbours present
>>    c)  Replace the bad values by the sums divided by the count.
>>    d)  Continue at (a) if there are bad values left
>>
>> Bad values which are neighbouring each other (>= 3) need multiple
>> passes through the loop.  It should be pretty fast.
>>
>> If this is what you have in mind, maybe we (or I) can make up some code.
>>
>> Friedrich
>
> Thanks so much for the response!  Sorry I didn't respond earlier.  I put it
> aside until I found time to try and understand part 2 of your response and
> forgot about it.  I'm not really looking for 2d interpolation at the moment,
> but I can see needing it in the future.  Right now, I just want to
> interpolate along one of the three axes.  I think what you're suggesting
> might work for 1d or 2d depending on how you find the nearest neighbors.
> What routine would you use?  Also, when you say "unmasked" do you mean
> literally using masked arrays?

Hi Greg,

if you can estimate that you'll need a more sophisticated algorithm in
the future, I'd recommend writing it in full glory, in a general way;
in the end it'll save you time (this is what I would do).

Yes, you're right, by choosing just neighbours along one axis you
could do simple one-axis interpolation, but in some corner cases it
won't work properly, since it will behave as follows (some ASCII
graphics):

"x" are present values, "-" are missing values.  The chain might look
like the following:

x - x

In this case, interpolation will work.  It'll pick the two neighbours,
and interpolate them.  But consider this:

x - - x

This will just propagate the end points to the neighbours.  The
missing points will have just one neighbour, hence this behaviour.

After the propagation, all values are filled, and you end up with a
step in the middle.

If such neighbouring missing data points are rare, it might still be
worth considering over Python loops with numpy.interp().  I don't see a way
to vectorize interp(), since the run lengths are different in each
case.  You might consider writing a C or Cython function, but I cannot
give any advice on this.
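
For reference, the recipe quoted above (sums of unmasked nearest-neighbour
values, counts, divide, repeat) translates almost directly into vectorized
NumPy. This is only a sketch: the function name is made up, and np.roll
wraps around at the array borders, so real code would need to handle the
edges properly:

import numpy as np

def fill_missing_2d(values, missing):
    # values  : 2-D float array
    # missing : 2-D bool array, True where `values` is unknown
    values = values.astype(float)
    missing = missing.copy()
    while missing.any():
        good = ~missing
        sums = np.zeros(values.shape)
        counts = np.zeros(values.shape, dtype=int)
        # (a) sums of all unmasked nearest-neighbour values
        # (b) counts of the nearest neighbours present
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            sums += np.roll(np.where(good, values, 0.0), shift, axis)
            counts += np.roll(good.astype(int), shift, axis)
        fixable = missing & (counts > 0)
        if not fixable.any():
            break                  # isolated region with no known neighbours
        # (c) replace the bad values by the sums divided by the counts
        values[fixable] = sums[fixable] / counts[fixable]
        # (d) continue while there are bad values left
        missing &= ~fixable
    return values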

I'm thinking about a way to propagate the values over more than one
step.  You might know that interpolation (in images) also uses kernels
extending beyond the nearest neighbours.  But I don't know precisely how
to design them.

First, I'd like to know whether or not you have such neighbouring
missing data points.

And why do you prefer interpolation in only one axis?

I can help with the code, but I'd prefer to do it the following way:
You write the code, and when you're stuck, seriously, you write back
to the list.  I'm sure I could do the code, but 1) it might (might?)
save me time, 2) You might profit from doing it yourself :-)

Would you mind putting the code online in a github repo?  Might well
be that I sometimes run across a similar problem.

Considering your masking question, I would keep the mask array
separate, but this is rather because I'm not familiar with masked
arrays.

Another thing which comes to mind would be to rewrite or write a
new interp() which takes care of masked entries, but it would be quite
an amount of work for me (I'm not familiar with the C interior of
numpy either).  And it would be restricted to one dimension only.

If you can, please give more detail on your data, where it comes from, etc.

Friedrich


Re: [Numpy-discussion] A Cython apply_along_axis function

2010-12-02 Thread Dag Sverre Seljebotn
On 12/02/2010 08:17 AM, Robert Bradshaw wrote:
> On Wed, Dec 1, 2010 at 6:09 PM, John Salvatier
>   wrote:
>
>> On Wed, Dec 1, 2010 at 6:07 PM, Keith Goodman  wrote:
>>  
>>> On Wed, Dec 1, 2010 at 5:53 PM, David  wrote:
>>>
>>>
 On 12/02/2010 04:47 AM, Keith Goodman wrote:
  
> It's hard to write Cython code that can handle all dtypes and
> arbitrary number of dimensions. The former is typically dealt with
> using templates, but what do people do about the latter?
>
 The only way that I know to do that systematically is iterator. There is
 a relatively simple example in scipy/signal (lfilter.c.src).

 I wonder if it would be possible to add better support for numpy
 iterators in cython...
  
>>> Thanks for the tip. I'm starting to think that for now I should just
>>> template both dtype and ndim.
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>> I enthusiastically support better iterator support for cython
>>  
> I enthusiastically welcome contributions along this line.
>

Me too :-)

I guess we're moving into more Cython-list territory, so let's move any 
follow-ups there (posting this one both places).

Just in case anybody is wondering what something like this could look 
like, here's a rough sketch, complete with bugs. The idea would be to a) 
add some rudimentary support for using the yield keyword in Cython to 
make a generator function, b) inline the generator function if the 
generator is used directly in a for-loop. This should result in very 
efficient code, and would also be much easier to implement than a 
general purpose generator.

@cython.inline
cdef array_iter_double(np.ndarray a, int axis=-1):
    cdef np.flatiter ita
    ita = np.PyArray_IterAllButAxis(a, &axis)
    cdef Py_ssize_t stride = a.strides[axis], length = a.shape[axis], i
    while np.PyArray_ITER_NOTDONE(ita):
        for i in range(length):
            # step along the chosen axis by the byte stride and read a double
            yield (<double*>(<char*>np.PyArray_ITER_DATA(ita) + i * stride))[0]
            # TODO: probably yield indices as well
        np.PyArray_ITER_NEXT(ita)
    # TODO: add faster special-cases for stride == sizeof(double)


# Use NumPy iterator API to sum all values of array with
# arbitrary number of dimensions:
cdef double s = 0, value
for value in array_iter_double(myarray):
    s += value
    # at this point, the contents of the array_iter_double function is copied,
    # and "s += value" simply inserted everywhere "yield" occurs in the function

Dag Sverre
