Brad, I think you are doing it the right way, but what is probably happening is that the reshape() call on the sliced array is forcing a copy to be made first. Since you reshape twice, that copy has to be made twice, which just worsens the issue. I would save the result of the reshape (it is usually a view of the original data, unless a copy is forced) and then call min() and max() on that saved result with the appropriate axis.
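In other words, something like this (a minimal sketch; the random data is just a stand-in for your streamed samples):

```python
import numpy as np

# Stand-in for the real-time data stream: 100k random samples.
rng = np.random.default_rng(0)
array = rng.standard_normal(100_000)

n = 100                   # chunk size for downsampling
offset = array.size % n   # drop leading samples so the length divides evenly

# Reshape once and keep the result; for a contiguous slice this is a
# view of the original data, so no copy is made here.
chunks = array[offset:].reshape(-1, n)

# One min call and one max call over the saved result, instead of
# slicing and reshaping the array twice.
array_min = chunks.min(axis=-1)
array_max = chunks.max(axis=-1)
```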
On that note, would it be a bad idea to have a function that returns a (min, max) tuple? A single iteration that gathers both at the same time would be useful compared to two separate iterations over the data. I should note that there is a numpy.ptp() function that returns the difference between the min and the max, but I don't see anything that returns the actual values.

Ben Root

On Thu, Jun 17, 2010 at 4:50 PM, Brad Buran <bbu...@cns.nyu.edu> wrote:
> I have a 1D array with >100k samples that I would like to reduce by
> computing the min/max of each "chunk" of n samples. Right now, my
> code is as follows:
>
> n = 100
> offset = array.size % n
> array_min = array[offset:].reshape((-1, n)).min(-1)
> array_max = array[offset:].reshape((-1, n)).max(-1)
>
> However, this appears to be running pretty slowly. The array is data
> streamed in real-time from external hardware devices and I need to
> downsample this and compute the min/max for plotting. I'd like to
> speed this up so that I can plot updates to the data as quickly as new
> data comes in.
>
> Are there recommendations for faster ways to perform the downsampling?
>
> Thanks,
> Brad
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
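P.S. A minimal sketch of the helper I have in mind (the name `minmax` is hypothetical; NumPy provides no such function, so this just bundles the two vectorized reductions behind one call rather than doing a true single pass):

```python
import numpy as np

def minmax(a, axis=None):
    """Return a (min, max) tuple for `a` along the given axis.

    NumPy offers np.ptp() for the difference between the extremes,
    but nothing that returns both actual values, so this wrapper
    simply performs the two reductions together.
    """
    return a.min(axis=axis), a.max(axis=axis)

lo, hi = minmax(np.array([3, 1, 4, 1, 5]))
```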