Thought I'd ask the list this question more directly:

If you have a large cellular automaton, such as Conway's Life (or
something with perhaps a few more bits per pixel), what is an efficient way
to represent it in memory?

It seems similar to compressing an image.  There are a variety of
image-compression algorithms, and the goal often seems to be to find
duplicate blocks.
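One way to sketch the duplicate-blocks idea while staying pixel addressable is to tile the grid and let identical tiles share a single copy, with copy-on-write on modification. This is only an illustrative sketch; the class and names here (TiledGrid, TILE, etc.) are made up for the example, not an established API.

```python
# Sketch: tiled grid where identical tiles share memory (e.g. one
# all-zero tile backs every empty region), with copy-on-write on set().
TILE = 16  # tile edge length; grid dimensions assumed divisible by TILE

class TiledGrid:
    def __init__(self, width, height):
        self.tw = width // TILE
        self.th = height // TILE
        self.zero = bytes(TILE * TILE)           # one shared all-zero tile
        self.tiles = [self.zero] * (self.tw * self.th)

    def _tile_index(self, x, y):
        return (x // TILE) + (y // TILE) * self.tw

    def get(self, x, y):
        tile = self.tiles[self._tile_index(x, y)]
        return tile[(x % TILE) + (y % TILE) * TILE]

    def set(self, x, y, value):
        i = self._tile_index(x, y)
        tile = bytearray(self.tiles[i])          # copy-on-write: fork the tile
        tile[(x % TILE) + (y % TILE) * TILE] = value
        self.tiles[i] = bytes(tile)

g = TiledGrid(256, 256)
g.set(10, 20, 7)
```

Access is still O(1) (two divides and a lookup), but large empty or repeated regions cost almost nothing; the trade-off is an extra indirection per pixel and a tile copy on each first write.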

One constraint is that I want the data to be pixel addressable, and speed is
critical since the data set may be large.  The best performance is of course
constant-time access with no indirection ( pixel = memory[ x + y * stride ] ).
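For reference, the no-indirection baseline above looks like this as a flat byte array; the names (get_pixel, stride, etc.) are illustrative, and one byte per cell is assumed to leave room for a few state bits.

```python
# Baseline: flat array, one byte per cell, addressed as
# pixel = memory[x + y * stride].
import array

WIDTH, HEIGHT = 1024, 1024
stride = WIDTH
memory = array.array('B', bytes(stride * HEIGHT))  # zero-initialized

def get_pixel(x, y):
    return memory[x + y * stride]

def set_pixel(x, y, value):
    memory[x + y * stride] = value & 0xFF  # keep within one byte

set_pixel(3, 5, 42)
print(get_pixel(3, 5))  # -> 42
```

This is the fastest to address but costs width * height bytes regardless of how sparse or repetitive the grid is, which is exactly the tension with the compression approaches above.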

This is intended to be used to simulate watersheds.

 - a
_______________________________________________
Geowanking mailing list
[email protected]
http://lists.burri.to/mailman/listinfo/geowanking