Image compression schemes work by detecting repeating spatial patterns. If
you could do that inside a CA simulation, you've probably already solved your
problem. Other kinds of image compression work by altering the pixel values
(like JPEG), which is lossy and completely unacceptable for simulation state.
Unfortunately, those lossy schemes are the only ones that are randomly
pixel-addressable.

Really, the best solution is to stick as much RAM as possible in the box.
You can probably use a moving window to page the rest to disk. But as others
have implied, you're probably biting off too large a problem. Try reducing
your scale and see if what you're interested in still happens!
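A minimal sketch of the moving-window idea, assuming numpy is available and
one byte of state per cell (the file name, grid size, and window size are all
illustrative):

```python
import numpy as np

# Illustrative dimensions; one byte of state per cell.
ROWS, COLS = 8192, 8192

# Back the whole grid with a file; the OS keeps only recently
# touched pages in RAM and evicts the rest transparently.
grid = np.memmap("grid.dat", dtype=np.uint8, mode="w+", shape=(ROWS, COLS))

def window(grid, y, x, size=1024):
    """A size-by-size view anchored at (y, x); indexing into it
    only faults in the pages it actually touches."""
    return grid[y:y + size, x:x + size]

w = window(grid, 4096, 4096)
w[:] = 1        # writes go back to grid.dat through the mapping
grid.flush()    # push dirty pages out to disk
```

The window is a view, not a copy, so updating it updates the file-backed
grid directly.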

-Eric

On Jan 23, 2008 12:10 PM, Brent Pedersen <[EMAIL PROTECTED]> wrote:

> On Jan 23, 2008 10:11 AM, Anselm Hook <[EMAIL PROTECTED]> wrote:
> > Thought I'd ask the list this question more directly:
> >
> > If you have a large cellular automata; such as say conways-life (or
> > something with perhaps a few more bits per pixel) - what is an efficient
> way
> > to represent this in memory?
> >
> > It seems to be similar to compressing an image.  There are a variety of
> > algorithms for compressing images.  The goal often seems to be to find
> > duplicate blocks.
> >
> > One constraint is that I want the data to be pixel addressable and speed
> is
> > critical since the data-set may be large.  The best performance is of
> course
> > linear time with no indirection ( pixel = memory[ x + y * stride ] ).
> >
> > This is intended to be used to simulate watersheds.
> >
> >  - a
> >
> >
> > _______________________________________________
> > Geowanking mailing list
> > [email protected]
> > http://lists.burri.to/mailman/listinfo/geowanking
> >
>
> hi, i don't know at all how to address your compression question, but
> re the simulation:
>
> if you can model the CA as a convolution, then you can let python do
> the work via numpy/scipy, specifically scipy.signal.convolve2d(),
> e.g.:
> >>> grid = convolve2d(grid, kernel, mode='same', boundary='wrap')
>
>
> even if you do need direct per-pixel access, there is excellent support
> for that in numpy arrays via a number of options:
> cython/pyrex, weave.inline, or pyinstant are all numpy-aware.
> this is a good reference:
> http://www.scipy.org/PerformancePython
>
> i dont know what dimensions you'll be dealing with but in my
> experience, this scales pretty well.
>
> -brentp
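For what it's worth, Brent's convolve2d call drops straight into a full
Conway's-Life step. A sketch, assuming scipy is installed; the kernel and
the birth/survival rule are standard Life, the grid size is illustrative:

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 neighbor-count kernel: sums the eight neighbors, not the cell itself.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(grid):
    """One Conway's Life generation on a toroidal (wrapped) grid."""
    neighbors = convolve2d(grid, KERNEL, mode='same', boundary='wrap')
    # Alive next step: exactly 3 neighbors, or 2 neighbors and already alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(grid.dtype)

# A glider on a small torus:
grid = np.zeros((8, 8), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
grid = life_step(grid)
```

mode='same' keeps the grid size fixed and boundary='wrap' makes the world a
torus, exactly matching the quoted call.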
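And on the pixel-addressability constraint: a numpy array already gives
exactly the linear-time addressing Anselm described. A quick sketch (sizes
illustrative):

```python
import numpy as np

grid = np.zeros((512, 512), dtype=np.uint8)  # illustrative size

# grid[y, x] resolves to base + y*stride + x, with no indirection
# beyond the address arithmetic -- the pixel = memory[x + y * stride]
# access from the original question.
grid[10, 20] = 3

flat = grid.ravel()            # 1-D view of the same memory, not a copy
value = flat[10 * 512 + 20]    # manual x + y*stride addressing
```

Both accesses touch the same byte, so there's no extra cost for going
through the 2-D index.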



-- 
-=--=---=----=----=---=--=-=--=---=----=---=--=-=-
Eric B. Wolf                          720-209-6818
PhD Student          CU-Boulder - Geography
