Mike Dougherty wrote:
On 10/4/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
All understood.  Remember, though, that the original reason for talking
about GoL was the question:  Can there ever be a scientific theory that
predicts all the "interesting creatures" given only the rules?

The question of getting something to recognize the existence of the
patterns is a good testbed, for sure.

Given finite rules about a finite world with an effectively
unlimited resource, it seems that every "interesting creature" exists
as the subset of all permutations minus the noise that isn't
interesting.  The problem is in a provable definition of interesting
(which was earlier defined, for example, as 'cyclic').  Also, who is
willing to invest unlimited resources to exhaustively search a "toy"
domain?  Even if there were parallels that might lead to formalisms
applicable in a larger context, we would probably divert those
resources to other tasks.  I'm not sure this is a bad idea.  Perhaps
our human attention span is a defense measure against wasting life's
resources on searches that promise fitness without delivering useful
results.
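
For concreteness, "cyclic" can be made operational as "the pattern eventually
revisits an earlier state."  A minimal sketch of such a check is below; the
set-of-live-cells representation, the step limit, and the blinker example are
illustrative choices, not anything fixed by the discussion.

from collections import Counter

def step(cells):
    """One Conway step; `cells` is a set of (x, y) live-cell coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbours, or 2 and is
    # already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

def period_if_cyclic(cells, max_steps=1000):
    """Return the period if the pattern revisits a previous state within
    max_steps generations, otherwise None ("not provably interesting")."""
    seen = {frozenset(cells): 0}
    for t in range(1, max_steps + 1):
        cells = step(cells)
        key = frozenset(cells)
        if key in seen:
            return t - seen[key]
        seen[key] = t
    return None

blinker = {(0, 0), (1, 0), (2, 0)}   # the classic period-2 oscillator
print(period_if_cyclic(blinker))     # -> 2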

I hear you, but let me quickly summarize the reason why I introduced GoL as an example.

I wanted to use GoL as a nice-and-simple example of a system whose overall behavior (in this case, the existence of certain patterns that are "stable" or "interesting") seems impossible to predict from a knowledge of the rules. I only wanted to use GoL to *illustrate* the general class, not because I was interested in GoL per se.

The important thing is that this idea (that there are some systems that show interesting, but unexplainable, behavior at the global level) has much greater depth and impact than people have previously thought.

In particular, it is important to observe that almost all of our science and engineering is based on observing/analyzing/explaining/building systems that are not in this class.

(Quick caveat: actually, the distinction between the two types of system is not black and white, so pretty much all systems do have a small amount of inexplicability to them. But this does not affect the argument.)

What is the conclusion to draw from this? Well, when we look at what is going on in a system, there are certain characteristics -- telltale fingerprints -- that can lead us to suspect that a *significant* chunk of its global behavior might turn out to be inexplicable in this way. And if you go out into the world and look for systems that carry those fingerprints, you find that intelligent systems are exactly the kind we would expect to be in this class.

Or, more precisely, we would expect that when AI engineers try to build systems that are (a) complete and (b) equipped with properly grounded learning mechanisms, those systems will fall into this class. This has a massive impact on the techniques we are using to do AI. The more you think about the consequences of this fact, the more you realize that the conventional techniques of engineering are virtually guaranteed not to work. In fact, we would predict that AI engineers would make *some* progress, but that whenever they tried to scale up or expand the scope of their systems, things would not get much better; we would also expect them to have great difficulty coming up with learning mechanisms that generate usable symbols from real-world input.

So, while GoL itself is interesting, and all kinds of stuff can be said about it, most of that is not important to the core argument.


Richard Loosemore




In the case of RSI, the rules are not fixed.  I wouldn't dare call
them mathematically infinite, but an evolving ruleset probably should be
considered functionally unlimited.  I imagine Incompleteness applies
here, even if I don't know how to state it explicitly.  I believe
finding "all" of the interesting creatures is nearly impossible.
Finding "an" interesting creature should be possible given a
sufficiently exact definition of interesting.  After some amount of
search, the results probably have to be expressed as a confidence
metric like: "given an exhaustive search of only 10% of the known
region, we found N candidates that match the criteria within X
degrees of freedom.  By assessing the distribution of candidates in
the searched space, extrapolation suggests there may be
{prediction formula result} 'interesting creatures' in this universe."
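
That kind of answer is easy to caricature in code.  The sketch below does the
crudest possible version -- uniform-density extrapolation from the searched
fraction -- and every number in it is made up for illustration; a real
estimate would need a model of how candidates are actually distributed.

def extrapolate_candidates(found, fraction_searched):
    """Crude total-count estimate, assuming candidates are spread uniformly
    across the space (a strong and probably false assumption)."""
    if not 0 < fraction_searched <= 1:
        raise ValueError("fraction_searched must be in (0, 1]")
    return found / fraction_searched

# "Exhaustive search of 10% of the known region found 7 candidates" ->
# extrapolation suggests roughly 70 in the whole region.
print(extrapolate_candidates(found=7, fraction_searched=0.10))   # -> 70.0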

The Drake equation is an example of this kind of answer/function (a toy
version is sketched below).  Ironic that its purpose is to estimate the
number of intelligences in our own galaxy.  Of course there is the Fermi
paradox, testable hypotheses, etc. etc. - the point is not about whether
GoL searches or SETI searches are any more or less productive than each
other.  My interest is in how intelligences of any origin (natural human
brains, human-designed CPUs, however improbable aliens) manage to find
common symbols in order to create/exchange/consume ideas.  If we have
this much difficulty communicating with each other, given the shared KB
of classes (archetypes?) of human existence, how likely is it that we
will even recognize a non-human intelligence if/when we encounter it?
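
For reference, the Drake equation is just a product of factors,
N = R* x fp x ne x fl x fi x fc x L.  The values plugged in below are
placeholders chosen only to show the arithmetic, not estimates anyone
should defend.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = expected number of detectable civilisations in the galaxy:
    star formation rate x fraction of stars with planets x habitable
    planets per such system x fraction developing life x fraction
    developing intelligence x fraction becoming detectable x average
    detectable lifetime in years."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Placeholder values, purely illustrative:
print(drake(r_star=1.0, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1,
            lifetime=1000))   # -> 5.0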
