Great! That's exactly what I wanted. Works with floats too.
Thanks,
Martin
Robert Kern wrote:
> Mostly, it's simply easy enough to implement yourself. Not all
> one-liners should be methods on the array object.
>
> (a == value).sum()
>
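For reference, Robert's one-liner works because the comparison produces a boolean array whose True elements sum as 1. A quick sketch (the array values here are made up):

```python
import numpy as np

a = np.array([1, 0, 1, 2, 1, 0])

# The comparison yields a boolean array; True counts as 1 when summed,
# so this is the number of elements equal to 1.
print((a == 1).sum())        # 3

# A boolean array can be summed directly to count its Trues.
b = a.astype(bool)
print(b.sum())               # 4

# Floats work the same way, as Martin notes above.
f = np.array([0.5, 1.5, 0.5])
print((f == 0.5).sum())      # 2
```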
What's the most straightforward way to count, say, the number of 1s or
Trues in an array? Or the number of occurrences of any given integer?
I was surprised to discover recently that there isn't a count() method
as there is for Python lists. Sorry if this has been discussed already,
but I'm wondering if there's a
Ryan,
Try installing the latest SciPy release, 0.5.1. There's a Windows binary
for it. Worked fine for me.
Martin
Ryan Krauss wrote:
> I am a Linux user trying to install Numpy/Scipy on a Windows machine
> in my office.
>
> I went to the website and grabbed the two latest versions:
> scipy = sci
I agree. This'll allow me to delete some messy code I wrote to get the
same behaviour.
I'm amazed by how often I use searchsorted.
'side' sounds like a good keyword name to me.
Martin
Robert Kern wrote:
> Charles R Harris wrote:
>> Hi all,
>>
>> I added the keyword side to the searchsorted method
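To illustrate the behaviour being discussed (the array values here are invented): `side` selects which end of a run of equal elements the returned insertion index falls on.

```python
import numpy as np

a = np.array([1, 2, 2, 2, 3])   # must already be sorted

# side='left' (the default): index of the first slot where 2 can be
# inserted while keeping the array sorted, i.e. before the equal run.
print(np.searchsorted(a, 2, side='left'))    # 1

# side='right': the slot just past the run of equal values.
print(np.searchsorted(a, 2, side='right'))   # 4
```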
Martin Spacek wrote:
>
> Actually, your original version is just as fast as the take() version.
> Both are about 9X faster than numpy.mean() on my system. I prefer the
> take() version because you only have to pass a single argument to
> mean_accum()
I forgot to mention that
Tim Hochberg wrote:
> I'm actually surprised that the take version is faster than my original
> version since it makes a big ol' copy. I guess this is an indication
> that indexing is more expensive than I realize. That's why nothing beats
> measuring!
Actually, your original version is just as fast as the take() version.
Tim Hochberg wrote:
> Here's an approach (mean_accumulate) that avoids making any copies of
> the data. It runs almost 4x as fast as your approach (called baseline
> here) on my box. Perhaps this will be useful:
>
--snip--
> def mean_accumulate(data, indices):
>     result = np.zeros([32, 32],
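The quoted function is cut off above. A guess at the accumulation idea it describes, assuming `data` has shape (nframes, 32, 32) and `indices` selects frames: sum the chosen frames into a single float buffer instead of copying them all out at once.

```python
import numpy as np

def mean_accumulate(data, indices):
    # Hypothetical reconstruction: accumulate each selected frame into
    # one float buffer, avoiding a big intermediate copy of the data.
    result = np.zeros([32, 32], float)
    for i in indices:
        result += data[i]
    return result / len(indices)

movie = np.ones((10, 32, 32), np.uint8)   # dummy stand-in for real data
print(mean_accumulate(movie, [0, 3, 5]).shape)   # (32, 32)
```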
Travis Oliphant wrote:
>
> If frameis is 1-D, then you should be able to use
>
> temp = data.take(frameis,axis=0)
>
> for the first step. This can be quite a bit faster (and is a big
> reason why take is still around). There are several reasons for this
> (one of which is that index check
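Travis's suggestion in runnable form (the frame indices here are invented; `data` matches the shape from the original question):

```python
import numpy as np

data = np.zeros((65535, 32, 32), np.uint8)
frameis = np.array([0, 100, 5000])   # hypothetical 1-D frame indices

# take() gathers the selected frames into one contiguous array; it can
# beat fancy indexing because it handles a narrower set of cases.
temp = data.take(frameis, axis=0)
mean_frame = temp.mean(axis=0)
print(temp.shape)         # (3, 32, 32)
print(mean_frame.shape)   # (32, 32)
```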
Hello,
I'm a bit ignorant of optimization in numpy.
I have a movie with 65535 32x32 frames stored in a 3D array of uint8
with shape (65535, 32, 32). I load it from an open file f like this:
>>> import numpy as np
>>> data = np.fromfile(f, np.uint8, count=65535*32*32)
>>> data = data.reshape(65535, 32, 32)
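A self-contained version of this load step, using a small temporary file as a stand-in for the real movie file `f` (the frame count is shrunk so the example runs anywhere):

```python
import os
import tempfile

import numpy as np

nframes = 16   # stand-in for the 65535 frames in the real movie

# Write dummy movie bytes so the example is self-contained.
raw = np.zeros(nframes * 32 * 32, np.uint8).tobytes()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(raw)
    path = tmp.name

# Load exactly as in the post: raw uint8 bytes, then reshape to frames.
with open(path, 'rb') as f:
    data = np.fromfile(f, np.uint8, count=nframes * 32 * 32)
data = data.reshape(nframes, 32, 32)
os.remove(path)
print(data.shape)   # (16, 32, 32)
```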