I have a 1D array with >100k samples that I would like to reduce by
computing the min/max of each "chunk" of n samples.  Right now, my
code is as follows:

n = 100
offset = array.size % n
array_min = array[offset:].reshape((-1, n)).min(-1)
array_max = array[offset:].reshape((-1, n)).max(-1)
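For context, here is a self-contained sketch of that chunked min/max, with the streamed data stood in by random samples (the array contents and chunk size here are assumptions for illustration):

```python
import numpy as np

# Hypothetical stand-in for the data streamed from the hardware:
# ~100k random samples.
rng = np.random.default_rng(0)
array = rng.standard_normal(100_050)

n = 100                   # chunk size
offset = array.size % n   # drop the leading remainder so the rest reshapes evenly

# Reshape once, then reduce over the last axis for both min and max.
chunks = array[offset:].reshape(-1, n)
array_min = chunks.min(axis=-1)   # per-chunk minima, shape (1000,)
array_max = chunks.max(axis=-1)   # per-chunk maxima, shape (1000,)
```

Reusing the reshaped view for both reductions avoids slicing and reshaping the array twice.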

However, this appears to be running pretty slowly.  The array is data
streamed in real-time from external hardware devices and I need to
downsample this and compute the min/max for plotting.  I'd like to
speed this up so that I can plot updates to the data as quickly as new
data comes in.

Are there recommendations for faster ways to perform the downsampling?

Thanks,
Brad
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
