Tim Hochberg wrote:
> Here's an approach (mean_accumulate) that avoids making any copies of
> the data. It runs almost 4x as fast as your approach (called baseline
> here) on my box. Perhaps this will be useful:
> --snip--
> def mean_accumulate(data, indices):
>     result = np.zeros([32, 32], float)
>     for i in indices:
>         result += data[i]
>     result /= len(indices)
>     return result
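(The "baseline" Tim benchmarked against isn't quoted above; as a rough sketch of how the comparison might be reproduced, a fancy-indexing mean can stand in for it. The array shapes and sizes below are illustrative, not taken from the thread.)

import numpy as np
from timeit import timeit

data = np.random.rand(5000, 32, 32)              # stack of 32x32 frames
indices = np.random.randint(0, len(data), 2000)  # frames to average

def mean_fancy(data, indices):
    # Fancy indexing first copies the selected frames into a new
    # (len(indices), 32, 32) array, then reduces that copy.
    return data[indices].mean(axis=0)

def mean_accumulate(data, indices):
    # Tim's approach: accumulate into a single 32x32 buffer in place,
    # never materializing a copy of the selected frames.
    result = np.zeros([32, 32], float)
    for i in indices:
        result += data[i]
    result /= len(indices)
    return result

print(timeit(lambda: mean_fancy(data, indices), number=10))
print(timeit(lambda: mean_accumulate(data, indices), number=10))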
Great! I got a roughly 9x speed improvement using take() in combination with this approach. Thanks Tim! Here's what my code looks like now:

def mean_accum(data):
    result = np.zeros(data[0].shape, np.float64)
    for dataslice in data:
        result += dataslice
    result /= len(data)
    return result

# frameis are int64
frames = data.take(frameis.astype(np.int32), axis=0)
meanframe = mean_accum(frames)

I'm surprised that a plain Python for loop is faster than the built-in mean() method. I suppose mean() can't do the same in-place accumulation because in certain cases doing so would fail?

Martin
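(As a minimal sketch of how one might check that the accumulation loop agrees with the built-in mean() and time the two paths: the shapes and index array below are illustrative, and mean_accum is copied from the snippet above.)

import numpy as np
from timeit import timeit

# Illustrative data and indices; the real data/frameis come from elsewhere.
data = np.random.rand(5000, 32, 32)
frameis = np.random.randint(0, len(data), 2000).astype(np.int64)

def mean_accum(data):
    # Same accumulation loop as in the message above.
    result = np.zeros(data[0].shape, np.float64)
    for dataslice in data:
        result += dataslice
    result /= len(data)
    return result

frames = data.take(frameis.astype(np.int32), axis=0)

# Both paths should agree to floating-point precision.
assert np.allclose(mean_accum(frames), frames.mean(axis=0))

print(timeit(lambda: mean_accum(frames), number=100))
print(timeit(lambda: frames.mean(axis=0), number=100))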