Pierre GM wrote:
On May 13, 2009, at 7:36 PM, Matt Knox wrote:
Here's the catch: it's basically cheating. I got rid of the pre-processing (where a mask was calculated depending on the domain, and the input was set to a filling value depending on this mask, before the actual computation). Instead, I force [...]
All,
I just committed (r6994) some modifications to numpy.ma.getdata (Eric Firing's patch) and to the ufunc wrappers that were too slow with large arrays. We're roughly 3 times faster than we used to be, but still slower than the equivalent classic ufuncs (no surprise here).
Here's the catch:
Hi Pierre
2009/5/14 Pierre GM pgmdevl...@gmail.com:
This playing around with the error status may (or may not, I don't know) cause some problems down the road.
I see the buildbot is complaining on SPARC. Not sure if it is complaining about your commit, but it might be worth checking out.
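The approach under discussion (skip the mask pre-processing, let the ufunc raise floating-point errors, and mask the invalid results afterwards) can be localized with np.errstate, which restores the previous error state on exit. A minimal sketch of that pattern, not the actual numpy.ma implementation:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])

# Suppress divide/invalid errors only inside this block; the previous
# global error state is restored automatically on exit, so nothing leaks.
with np.errstate(divide='ignore', invalid='ignore'):
    r = 1.0 / a

# Mask the entries that came out non-finite (here, the division by zero).
result = np.ma.masked_invalid(r)
```

Because the change is scoped to the `with` block, other code (and other platforms' error handling, like the SPARC buildbot's) sees the error state it expects.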
Hi,
Whine. I was afraid of something like that...
Two options, then:
* We revert to computing a mask beforehand. That looks like the part that takes the most time with domained operations (according to Robert K's profiler; Robert, you deserve a statue for this tool). And that doesn't solve the [...]
On May 13, 2009, at 8:07 PM, Matt Knox wrote:
hmm. While this doesn't affect me personally... I wonder if everyone is aware of this. Importing modules generally shouldn't have side effects either, I would think. Has this always been the case for the masked array module?
Well, can't [...]
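If code does need to touch the global error state, the lasting-side-effect concern Matt raises can be avoided by snapshotting and restoring it. A hedged sketch of that pattern (illustrative only, not what numpy.ma actually did at the time):

```python
import numpy as np

saved = np.geterr()            # snapshot the global error handling
np.seterr(divide='ignore')     # a global change, like the one under discussion
try:
    _ = np.array([1.0]) / np.array([0.0])   # no divide warning raised here
finally:
    np.seterr(**saved)         # restore, so importers see no lasting side effect
```

The try/finally guarantees the restore even if the computation raises, which is exactly the property an importable module wants.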
Hi,
I'm using masked arrays to compute large-scale standard deviation, multiplication, Gaussian, and weighted averages. At first I thought using masked arrays would be a great way to sidestep looping (which it is), but it's still slower than expected. Here's a snippet of the code that I'm [...]
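Eli's snippet is truncated in the archive. Purely as a hypothetical stand-in (not his actual code), a masked weighted average and standard deviation of the kind he describes might look like:

```python
import numpy as np

# Mask one bad sample out of four.
x = np.ma.array([1.0, 2.0, 3.0, 4.0], mask=[False, False, True, False])
w = np.array([1.0, 1.0, 1.0, 2.0])

avg = np.ma.average(x, weights=w)  # masked entries (and their weights) are skipped
std = x.std()                      # standard deviation over the unmasked entries
```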
Short answer to the subject: Oh yes.
Basically, MaskedArray in its current implementation is more of a convenience class than anything else. Most of the functions manipulating masked arrays create a lot of temporaries. When performance is needed, I must advise you to work directly on the data [...]
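A sketch of the difference Pierre is suggesting, comparing the convenience path with operating directly on the underlying data and mask (variable names are illustrative):

```python
import numpy as np

m = np.ma.array([1.0, 2.0, 4.0, 8.0], mask=[False, True, False, False])

# Convenience path: each masked-array operation builds temporaries.
slow = (m * 2).sum()

# Direct path: pull out the raw data and mask once, then use plain ufuncs
# on the unmasked entries only.
data, mask = m.data, m.mask
fast = (data[~mask] * 2).sum()

assert slow == fast
```

Both give the same answer; the direct path just skips the masked-array wrapper machinery on every intermediate step.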
On May 9, 2009, at 8:17 PM, Eric Firing wrote:
Eric Firing wrote:
A part of the slowdown is what looks to me like unnecessary copying in _MaskedBinaryOperation.__call__. It is using getdata, which applies numpy.array to its input, forcing a copy. I think the copy is actually [...]
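The copy Eric describes comes from numpy.array's default copy=True; np.asarray avoids it when the input is already an ndarray, which np.shares_memory can confirm. A minimal illustration of that distinction:

```python
import numpy as np

x = np.arange(5.0)
a = np.array(x)     # default copy=True: always makes a fresh copy
b = np.asarray(x)   # returns x itself when it is already an ndarray

assert not np.shares_memory(x, a)
assert np.shares_memory(x, b)
```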