If none of the suggested methods turn out to be efficient enough, here's a way to reduce the copying overhead by trading memory (and a bit of complexity) for it. The general thrust is to allocate M extra slices of memory and then shift the data every M time slices instead of every time slice.
First you would allocate a block of memory N*P*(H+M) in size.
buffer = zeros([H+M,N,P], float)
Then you'd look at the first H time slices.
data = buffer[:H]
To pop one piece of data off the stack, you'd simply shift data to look at a different place in the buffer. The first time, you'd have something like this:
data = buffer[1:1+H]
Every M time steps you need to recopy the data. I expect that this should reduce your copying overhead a bunch since you're not copying as frequently. It's pretty tunable too. You'd want to wrap some convenience functions around stuff to automate the copying and popping, but that should be easy enough. I haven't tried this though, so caveat emptor. -tim
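To make the idea concrete, here's a minimal sketch of the kind of convenience wrapper described above. The class name, method names, and the push-style interface are all hypothetical, and I haven't benchmarked this either; it just shows the view-shifting and the copy-every-M-steps bookkeeping:

```python
import numpy as np

class ShiftingBuffer:
    """Sliding window of H time slices of (N, P) data, with M slack
    slices so a full copy is only needed every M steps.

    Hypothetical sketch of the approach, not tested/tuned code.
    """
    def __init__(self, H, M, N, P):
        self.H, self.M = H, M
        # One block of N*P*(H+M) elements, as in the post.
        self.buffer = np.zeros((H + M, N, P), float)
        self.offset = 0  # start of the current window within the buffer

    @property
    def data(self):
        # A view, not a copy: shifting the window is O(1).
        return self.buffer[self.offset:self.offset + self.H]

    def push(self, new_slice):
        """Append one (N, P) time slice, dropping the oldest."""
        if self.offset == self.M:
            # Slack exhausted: copy the current window back to the
            # front of the buffer. This is the only copy, and it
            # happens once every M pushes instead of every push.
            self.buffer[:self.H] = self.buffer[self.M:self.M + self.H]
            self.offset = 0
        self.buffer[self.offset + self.H] = new_slice
        self.offset += 1
```

After construction, `buf.data` is `buffer[:H]`; after one push it is `buffer[1:1+H]`, exactly as in the snippets above, and only every M-th push pays for a copy.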
_______________________________________________ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion