You can trade off rule size against dataset size: make the
neighbourhood bigger and apply the rule to more cells at once, rather
than using a smaller neighbourhood and applying it more times.
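A minimal sketch of that trade-off, assuming a binary rule expressed as a function of a 3x3 neighbourhood: two applications of a 3x3 rule are equivalent to one application of a 5x5 rule, so you can compose the rule with itself and halve the number of passes over the grid, at the cost of a much larger rule (a full lookup table for binary states would need 2^25 entries).

```python
def compose(rule3x3):
    """Return a 5x5 rule equal to two steps of the given 3x3 rule.

    Neighbourhoods are dicts mapping (dx, dy) offsets to cell states.
    """
    def rule5x5(nbhd):  # nbhd covers offsets with |dx|, |dy| <= 2
        # First step: evaluate the 3x3 rule at each of the 9 inner cells.
        mid = {}
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                local = {(ax, ay): nbhd[(dx + ax, dy + ay)]
                         for ax in (-1, 0, 1) for ay in (-1, 0, 1)}
                mid[(dx, dy)] = rule3x3(local)
        # Second step: apply the rule once more to the intermediate states.
        return rule3x3(mid)
    return rule5x5
```

The composed rule touches each cell once per two generations, which is exactly the "bigger neighbourhood, fewer passes" end of the trade-off.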
In Life specifically, you can do things like detect gliders and other
oscillating patterns (since Life is usually sparse, with isolated
regions of stable or oscillating cells). But then you trade memory
(which costs dollars per gigabyte) against programming time (which
costs much more), depending on where you set the tradeoff.
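One way to exploit that sparseness is to store only the live cells rather than the whole grid, so memory scales with population instead of area. A minimal sketch (not anyone's production simulator):

```python
from collections import Counter

def life_step(live):
    """live: set of (x, y) live cells; returns the next generation."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Standard Life rule: birth on 3 neighbours, survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}
```

The flip side of the tradeoff mentioned above: lookup is now a hash probe rather than a direct array index.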
There are lots of papers on this kind of thing from the 80s, and it's
worth looking at the various simulators that exist. Also join the
Ultra Fractal and other mailing lists to find like-minded people.
On 23 Jan 2008, at 18:11, Anselm Hook wrote:
Thought I'd ask the list this question more directly:
If you have a large cellular automaton, such as, say, Conway's Life
(or something with perhaps a few more bits per pixel), what is an
efficient way to represent it in memory?
It seems similar to compressing an image. There are a variety of
algorithms for compressing images, and the goal often seems to be
finding duplicate blocks.
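A sketch of the duplicate-block idea, assuming fixed-size tiles and a flat row-major grid (tile size and representation are illustrative choices; applied recursively, this is roughly the node-sharing idea behind Hashlife):

```python
def dedup_tiles(grid, w, h, tile=8):
    """grid: flat list of cells, row-major, w * h long.

    Returns (unique_tiles, index): the distinct tile contents, and one
    tile id per tile position so identical blocks are stored once.
    """
    tiles = {}   # tile bytes -> id
    index = []   # one id per tile position, row-major
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            block = bytes(
                grid[x + y * w]
                for y in range(ty, min(ty + tile, h))
                for x in range(tx, min(tx + tile, w))
            )
            index.append(tiles.setdefault(block, len(tiles)))
    return list(tiles), index
```

On a sparse grid most tiles are all-dead and collapse to a single shared entry, which is where the savings come from.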
One constraint is that I want the data to be pixel-addressable, and
speed is critical since the dataset may be large. The best
performance is of course constant-time access with no indirection
( pixel = memory[ x + y * stride ] ).
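For reference, that baseline dense layout can be sketched like this (the class name and byte-per-cell choice are just illustrative):

```python
class DenseGrid:
    """Flat row-major storage: lookup is one multiply and one add."""

    def __init__(self, width, height):
        self.stride = width  # kept separate so rows could later be padded
        self.cells = bytearray(width * height)  # zero-initialised

    def get(self, x, y):
        return self.cells[x + y * self.stride]

    def set(self, x, y, value):
        self.cells[x + y * self.stride] = value
```

Any compressed representation has to beat this very low constant factor to be worth the indirection.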
This is intended to be used to simulate watersheds.
- a
_______________________________________________
Geowanking mailing list
[email protected]
http://lists.burri.to/mailman/listinfo/geowanking