It looks like, on my Pentium M, multiplication with NaNs is slow, but
using a masked array ranges from slightly faster (with only one value
masked) to twice as slow (with all values masked):
In [15]: Timer("a.prod()", "import numpy as np; aa = np.ones(4096); a =
np.ma.masked_greater(aa, 0)").timeit()
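A self-contained version of that comparison might look roughly like the
following sketch (the repeat count is an arbitrary choice, not the
original setting):

# Sketch: time prod() on a plain array, an array containing a NaN, and a
# masked array (masked_greater(aa, 0) masks every element of np.ones()).
from timeit import Timer

setup = ("import numpy as np; "
         "aa = np.ones(4096); "
         "an = aa.copy(); an[0] = np.nan; "
         "a = np.ma.masked_greater(aa, 0)")
n = 1000
for label, stmt in [("plain", "aa.prod()"), ("nan", "an.prod()"), ("masked", "a.prod()")]:
    print(label, Timer(stmt, setup).timeit(number=n))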
Charles R Harris wrote:
[...]
> b) extending 'and' and 'or' to allow element-by-element logical
> operations or adding && and ||
>
> 2) Lowering the precedence of & so that a > 8 & a < 10 works as you
> would expect.
>
>
> Yes on the extra operators. No on changing the precedence.
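To make the precedence point concrete: & binds tighter than the
comparison operators, so a > 8 & a < 10 parses as a > (8 & a) < 10.
The current workarounds are explicit parentheses or the logical_*
functions, as in this sketch:

import numpy as np

a = np.arange(16)
# Without parentheses, "a > 8 & a < 10" becomes a chained comparison on
# arrays and does not mean what it appears to mean.
in_range = (a > 8) & (a < 10)            # explicit parentheses
same = np.logical_and(a > 8, a < 10)     # equivalent function spelling
print(in_range.nonzero()[0])             # -> [9]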
Sebastian Haase wrote:
> Hi,
> I have a (medical) image file.
> I wrote a nice interface based on memmap using numarray.
> The class design I used was essentially to return a numarray array
> object with a new "custom" attribute giving access to special
> information about the base file.
>
> No
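As a rough illustration of that design in current numpy (the class and
attribute names below are invented for the example, not the original
numarray code):

import numpy as np

class ImageFileArray(np.ndarray):
    """Array view that remembers metadata about the file it came from."""

    def __new__(cls, filename, shape, dtype=np.uint16, offset=0):
        # Memory-map the raw pixel data and view it as this subclass.
        mm = np.memmap(filename, dtype=dtype, mode="r", shape=shape, offset=offset)
        obj = mm.view(cls)
        obj.custom = {"filename": filename, "offset": offset}
        return obj

    def __array_finalize__(self, obj):
        # Propagate the metadata to slices and views of the array.
        self.custom = getattr(obj, "custom", None)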
Alan G Isaac wrote:
On Fri, 14 Jul 2006, Sven Schreiber apparently wrote:
So maybe that's a feature request: complementing the
nansum function with a nanaverage?
This is not an objection; just an observation.
It has always seemed to me that such descriptive
statistics make more sense as class
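A minimal nanaverage along the requested lines can be assembled from the
pieces that already exist; this is only a sketch, not a proposal for the
actual API:

import numpy as np

def nanaverage(x, axis=None):
    """Mean over the non-NaN entries only (hypothetical helper)."""
    x = np.asarray(x, dtype=float)
    valid = ~np.isnan(x)
    return np.nansum(x, axis=axis) / valid.sum(axis=axis)

(Recent numpy releases also ship np.nanmean, which does the same thing.)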
Webb Sprague wrote:
> Could someone recommend a way to average an array along the columns
> without propagating the nans and without turning them into some weird
> number that biases the result? I guess I can just keep using an
> indexing array for fooArray, but if there is something more graceful,
>
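One reasonably graceful spelling uses the masked-array machinery
mentioned elsewhere in this digest; the sample data below is invented:

import numpy as np

fooArray = np.array([[1.0, np.nan, 3.0],
                     [4.0, 5.0,    np.nan],
                     [7.0, 8.0,    9.0]])
# Mask the NaNs, then take column means; masked entries are skipped.
col_means = np.ma.masked_invalid(fooArray).mean(axis=0)
print(col_means)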
> Would it be reasonable if argsort returned the complete tuple of
> indices, so that
> A[A.argsort(ax)] would work?
+1
This is the behavior one would naturally expect.
Eric
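The workaround, for the record, is to build the complete tuple of index
arrays by hand (or, in numpy >= 1.15, use take_along_axis); a sketch:

import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [9.0, 7.0, 8.0]])
ax = 1
order = A.argsort(axis=ax)

# Replace the index arrays along `ax` with argsort's output so that
# fancy indexing sorts A along that axis.
idx = list(np.indices(A.shape))
idx[ax] = order
sorted_A = A[tuple(idx)]
# Equivalent: np.take_along_axis(A, order, axis=ax)
assert np.array_equal(sorted_A, np.sort(A, axis=ax))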
Eric Firing wrote:
> Andrew Straw wrote:
>
>
>>Actually, this has been in MPL for a while. For example, see the
>>image_demo3.py example. You don't need the __array_interface__ for this
>>bit of functionality.
>
>
> It's broken.
>
> The first problem is that the kw "aspect = 'preserve'" is no longer
> needed or supported.
Andrew Straw wrote:
> Actually, this has been in MPL for a while. For example, see the
> image_demo3.py example. You don't need the __array_interface__ for this
> bit of functionality.
It's broken.
The first problem is that the kw "aspect = 'preserve'" is no longer
needed or supported. Removing
Robert Kern wrote:
> Eric Firing wrote:
>
>>That makes sense, and implies that the real solution would be the
>>introduction of operators && and || into Python, or a facility that
>>would allow extensions to add operators. I guess it would be a matter
Robert Kern wrote:
> Eric Firing wrote:
>
>>Robert Kern wrote:
>>
>>>Eric Firing wrote:
>>>
>>>
>>>>It seems that the logical operators || and &&, corresponding to
>>>>logical_or and logical_and are missing;
Robert Kern wrote:
> Eric Firing wrote:
>
>>It seems that the logical operators || and &&, corresponding to
>>logical_or and logical_and are missing; one can do
>>
>>z = logical_and(x,y)
>>
>>but not
>>
>>z = x && y
>>
It seems that the logical operators || and &&, corresponding to
logical_or and logical_and are missing; one can do
z = logical_and(x,y)
but not
z = x && y
Is there an inherent reason, or is this a bug?
z = (x == y)
works, and a comment in umathmodule.c.src suggests that && and || should
also
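The practical answer, for anyone reading this later: 'and' and 'or'
cannot be overridden because they must reduce to a single truth value,
but & and | are overloaded element-wise on boolean arrays, so (with
parentheses, because of the precedence issue discussed above) they serve
as the && / || substitutes:

import numpy as np

x = np.array([True, True, False])
y = np.array([True, False, False])

z1 = np.logical_and(x, y)   # function spelling
z2 = x & y                  # operator spelling, element-wise on bools
print(np.array_equal(z1, z2))   # True

# "x and y" raises "ValueError: The truth value of an array with more
# than one element is ambiguous", which is why && / || would be needed.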
In the course of trying to speed up matplotlib, I did a little
experiment that may indicate a place where numpy can be sped up: the
creation of a 2-D array from a list of tuples. Using the attached
script, I find that numarray is roughly 5x faster than either numpy or
Numeric:
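The attached script is not reproduced here, but the experiment is easy
to recreate; a rough equivalent with arbitrarily chosen sizes:

import numpy as np
from timeit import timeit

# A list of 10,000 (x, y) tuples, roughly the kind of input matplotlib
# builds when converting vertex data.
tuples = [(float(i), float(i) * 0.5) for i in range(10000)]

t_array = timeit(lambda: np.array(tuples), number=100)
t_fromiter = timeit(lambda: np.fromiter((v for xy in tuples for v in xy),
                                        dtype=float).reshape(-1, 2),
                    number=100)
print("np.array:    %.3f s" % t_array)
print("np.fromiter: %.3f s" % t_fromiter)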
Mathew Yeates wrote:
> Hi
> I typically deal with very large arrays that don't fit in memory. How
> does Numpy handle this? In Matlab I can use memory mapping but I would
> prefer caching as is done in The Gimp.
Numpy has a memmap array constructor; as it happens, I was using it for
the first time.
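For reference, a minimal use of that constructor (the file name, dtype,
and shape below are placeholders, not anything from the original post):

import numpy as np

# Create a disk-backed array of about 200 MB without holding it in RAM.
arr = np.memmap("data.raw", dtype=np.float32, mode="w+",
                shape=(100000, 512))

# Writes go to the file; slices are paged in from disk as touched.
arr[:1000] = 1.0
arr.flush()
print(arr[:1000].mean())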