I'm not sure I have ever seen anybody successfully rephrase your
complexity argument back at you; since nobody understands what you mean,
it's not surprising that people are complacent about it.
Bit of an overgeneralization, methinks: this list is disproportionately
populated with people who satisfy the conjunctive property [do not
understand it] and [do like to chat about AGI]. That is no criticism, but
it makes it look like nobody understands it.
I understand what Richard means by his complexity argument and see his
point, though I believe that it can be worked around if you're aware of it
-- the major problem being, as Richard points out, that most AGI system
developers don't see it as necessary to work around.
As I have said before, I do get people contacting me offlist (and
off-blog, now) who do understand it, but simply do not feel the need to
engage in list-chat.
. . . . because many people on this list are more invested in being right
than in being educated. I think that this argument is a lost cause on this
list and generally choose not to waste time on lost causes -- but I'm in an
odd mood, so . . . .
If you just randomly slap together systems that have those kinds of
mechanisms, there is a tendency for complex, emergent properties to be
seen in the system as a whole. Never mind trying to make the system
intelligent, you can make emergent properties appear by generating random,
garbage-like relationships between the elements of a system.
"Emergent" is a bad word. People do not understand it. They think that
emergent normally means complex, wonderful, and necessarily correct. They
are totally wrong.
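To make that concrete, here is a minimal sketch (my own toy example, not
Richard's): a random Boolean network. The wiring and the update rules
below are generated completely at random -- garbage-like relationships,
exactly as Richard says -- yet the system reliably falls into stable
cycles (attractors) that nobody designed in.

import random

# Random Boolean network: N nodes, each driven by K randomly chosen
# inputs through a randomly generated truth table. Nothing here is
# designed; the wiring and the rules are pure garbage.
N, K = 16, 2
random.seed(1)
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
tables = [[random.randrange(2) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    # Each node looks up its next value in its random truth table,
    # indexed by the current values of its K random inputs.
    return tuple(
        tables[i][sum(state[inputs[i][j]] << j for j in range(K))]
        for i in range(N)
    )

state = tuple(random.randrange(2) for _ in range(N))
seen = {}
for t in range(1 << N):
    if state in seen:   # a state recurred: the system is on an attractor
        print("cycle of length", t - seen[state],
              "entered after", seen[state], "steps")
        break
    seen[state] = t
    state = step(state)

Change the seed and you get different cycle lengths, but you essentially
always get *some* orderly global behavior out of pure garbage.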
But now here is the interesting thing: this observation (that you get
complexity/emergence out if you set the system up with ugly, tangled
mechanisms) is consistent with the reverse observation about nature: the
science we have developed over the last three hundred years has been based
on simple mechanisms that (almost always) involve nothing ugly or tangled.
Nature likes simple. Simple producing complex effects is what nature is
all about. Complex producing simple effects is human stupidity, and it is
prone to dramatic failure.
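The converse is just as easy to show. The cellular automaton below (Rule
30 -- my illustration, nothing from the original thread) has one trivially
simple update rule, yet the pattern it grows from a single seed cell is so
irregular that it has even been used as a pseudo-random number generator.
Simple mechanism, complex effect:

# Elementary cellular automaton Rule 30: a one-line update rule with
# wildly irregular output, grown from a single seed cell.
WIDTH, STEPS = 64, 24
cells = [0] * WIDTH
cells[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        1 if (cells[i - 1], cells[i], cells[(i + 1) % WIDTH]) in
             {(1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1)} else 0  # Rule 30
        for i in range(WIDTH)
    ]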
Richard tends not to make the point, but the most flagrant example of his
complexity problem is found in Ben Goertzel's stories about trying to tune
the numerous parameters of his various AI systems. I think that Richard is
entirely in the right here, but I have been unsuccessful in repeated
attempts to convince Ben of this. Yes, you *do* need tunable parameters in
an AI system -- but they should not be set up in such a way that they can
oscillate to chaotic failure.
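For a toy picture of what "oscillate to chaotic failure" means (the map
below is my stand-in, not one of Ben's actual parameters), take a single
feedback gain r in the logistic update x <- r*x*(1-x). Nudge r upward and
the very same rule goes from settling on one value, to oscillating between
two, then four, then never repeating at all:

# The logistic map: one tunable parameter, r.
def settle(r, x=0.3, burn=500, keep=8):
    for _ in range(burn):        # let transients die out
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):        # then record the long-run behavior
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

for r in (2.8, 3.2, 3.5, 3.9):
    print("r =", r, "->", settle(r))
# 2.8: a fixed point; 3.2: a 2-cycle; 3.5: a 4-cycle;
# 3.9: chaos -- the values never repeat.

Past that threshold there is nothing stable left to tune to.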
To cut a long story short, it turns out that the Inference Control Engine
is more important than the inference mechanism itself.
Many people agree with this, but . . .
The actual behavior of the system is governed, not by the principles of
perfectly reliable logic, but by a messy, arbitrary inference control
engine, and the mechanisms that drive the latter are messy and tangled.
This is where Richard and I part ways. I think that inference control is
currently messy and arbitrary and tangled only because we don't understand
it well enough. This may be a great answer to Ed Porter's question of what
is conceptually missing from current AGI attempts. I think that inference
control will turn out to be relatively simple in design as well -- yet
possess tremendously complex effects, just like everything else in nature.
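Either way, the dominance of the control engine is easy to demonstrate.
In the sketch below (a deliberately minimal toy of my own, not anyone's
real system), the inference rules are fixed and perfectly sound; the only
thing that varies is the policy for choosing which fact to expand next.
Under a finite resource budget, that policy alone decides whether the goal
is ever derived:

from collections import deque

# A tiny forward-chaining engine. The rules never change; only the
# control policy that picks the next fact to expand does.
rules = {  # fact -> facts it licenses
    "a": ["b", "x1"], "b": ["c", "x2"], "c": ["goal"],
    "x1": ["x3"], "x2": ["x4"], "x3": ["x5"], "x4": ["x6"],
}

def chain(policy, budget=6):
    known, agenda = {"a"}, deque(["a"])
    for _ in range(budget):
        if not agenda:
            break
        fact = policy(agenda)            # <-- the inference control engine
        for new in rules.get(fact, []):
            if new not in known:
                known.add(new)
                agenda.append(new)
    return "goal" in known

fifo = lambda agenda: agenda.popleft()   # breadth-first control
lifo = lambda agenda: agenda.pop()       # depth-first control
print("breadth-first control reaches goal:", chain(fifo))  # True
print("depth-first control reaches goal:  ", chain(lifo))  # False

Same rules, same budget; the control policy alone makes the difference
between deriving the goal and never getting there.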
Now, wherever you go in AI, I can tell the same story. A story in which
the idealistic AI researchers start out wanting to build a thinking system
in which there is not supposed to be any arbitrary mechanism that might
give rise to complexity, but where, after a while, some ugly mechanisms
start to creep in, until finally the whole thing is actually determined by
complexity-inducing mechanisms.
Actually, this is not just a complexity argument. It's really an
argument about how many AGI researchers want to start tabula rasa -- but
then find that they can't do everything at once. Some researchers then
start throwing in assumptions and quick fixes until those things dominate
the system, while others are smart enough to simply reduce the system's
size and scope.
5. Therefore we have no methods for building thinking machines, since
engineering discipline does not address how to build complex devices.
Building them as if they are not complex will result in poor behavior;
squeezing out the complexity will squeeze out the thinking, and leaving
it in makes traditional engineering impossible.
Not a bad summary, but a little oddly worded.
Huh? Why doesn't engineering discipline address building complex
devices? Engineering discipline can address everything (just like science)
as long as you're willing to open up your eyes and address reality.
Richard's arguments are only cogent if an AI researcher is trying to ignore
his point. They are *NOT* show-stoppers but merely another complicating
issue to be worked around.
I see AI people denying that this could happen, while at the same time
they take their systems that are not supposed to have complexity in them,
and they make them dependent on mechanisms (for example, inference control
mechanisms) that manifestly are sensitive to complex effects. So while
they deny that that complexity needs to be talked about, they insert
bucketloads of complexity into their systems!
But, instead of acknowledging that this complexity could be unavoidable,
and instead of acknowledging that you cannot put complexity in and then
fiddle with parameters to get it to come out right (that being the ONE
thing that we know you cannot do with complexity), they continue to deny
that complexity is an issue.
Like I said -- failure to address reality, and chasing perfection by
parameter fiddling . . . . Richard *is* correct. Y'all just don't see it
yet.
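One last sketch of why parameter fiddling is the ONE thing you cannot do
(same toy logistic map as above, and again my stand-in rather than
anyone's real system): in the chaotic regime the outcome is not a smooth
function of the parameter, so neighboring settings behave nothing alike
and there is no gradient for the fiddler to follow.

# Tuning in the chaotic regime: tiny nudges to the parameter r yield
# completely unrelated long-run states, so trial-and-error "tuning"
# degenerates into a blind lottery.
def outcome(r, x=0.3, steps=1000):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for r in (3.9000, 3.9001, 3.9002, 3.9003):
    print("r = %.4f -> long-run state %.6f" % (r, outcome(r)))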