Derek Zahn wrote:
> Richard Loosemore:
> > I do not laugh at your misunderstanding, I laugh at the general
> > complacency; the attitude that a problem denied is a problem solved. I
> > laugh at the tragicomic waste of effort.
> I'm not sure I have ever seen anybody successfully rephrase your
> complexity argument back at you; since nobody understands what you mean
> it's not surprising that people are complacent about it.
Bit of an overgeneralization, methinks: this list is disproportionately
populated with people who satisfy the conjunctive property [do not
understand it] and [do like to chat about AGI]. That is no criticism,
but it makes it look like nobody understands it.
As I have said before, I do get people contacting me offlist (and
off-blog, now) who do understand it, but simply do not feel the need to
engage in list-chat.
> I was going to wait for some more blog posts to have a go at rephrasing
> it myself, but my (probably wrong) effort would go like this:
> 1. Many things we want to build have desired properties that are
> described at a different level than the things we build them out of.
> "Flying" is emergent in this sense from rivets and sheet metal, for
> example. Thinking is emergent from neurons, for another example.
"Emergent" would be an unfortunate choice of word here. Everything in the
world has properties that we observe, together with actual mechanisms
underneath that cause those properties, and which could be considered an
"explanation" of the properties. However, very, very few properties are
"emergent" in the sense that complex systems people use that word.
> 2. Some such things are "complex" in that the emergent properties cannot
> be predicted from the lower-level details.
Correct, but you would be surprised how few there are: flying is not
emergent (as you say below), and I don't automatically say that thinking
is emergent from neurons. Don't forget that some things are *partially*
emergent: Pluto's orbit is screwy, for example, so Newton's lovely,
non-complex solar system is in the long run a complex system ... it is
just that the incidence of complexity is very small (just one outburst
of Plutonic silliness every ten million years or so).
Almost every piece of science you learned about in school is about
non-complex systems. Speaking as a physicist: the first time we ever
got to a complex property was with the Reynolds number, and nobody
really talked about that in detail until college level.
3. "Flying" as above is not complex in this way. In fact, all of
engineering is the study of how to build things that are increasingly
complicated but NOT complex. We do not want airplanes to have complex
behavior and the engineering methodology is expressly for squeezing
complexity out.
Well.... that is a strange way to phrase it. Engineering does not
really try to squeeze it out, it is just that because of the way that
the world is built, we simply never encounter systems that are dominated
by complexity, so we never have to bother with it.
For example, is there any system that depends, for its routine
functioning, on the exact details of how vortices are shaped in the wake
of an object? Could we, say, build a computer in which the data
was actually carried by individual vortices (so losing a vortex would be
bad)? That would be a nightmare, so we never try to do such things.
4. "Thinking" must be complex. [my understanding of why this must be
true is lacking. Something like: otherwise we'd be able to predict the
behavior of an AGI which would make it useless?]
Er, no. The argument can be presented in many different ways, but the
simple one I went for in the paper was this. Look at real examples of
full-blooded complex systems, and ask yourself about the low-level
mechanisms that typically cause a system to go complex. It turns out
that certain kinds of low-level mechanisms do tend to send a system off
the deep end: extreme nonlinear relationships between elements of the
system; memory in the elements; developmental characteristics that cause
the elements to change their character over time; relationships that
depend on the individual identity of elements, etc.
If you just randomly slap together systems that have those kinds of
mechanisms, there is a tendency for complex, emergent properties to be
seen in the system as a whole. Never mind trying to make the system
intelligent, you can make emergent properties appear by generating
random, garbage-like relationships between the elements of a system.
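
To make that concrete, here is a throwaway sketch (in Python; every
number, size, and wiring choice in it is an arbitrary assumption of
mine, not a model of anything): a few units, randomly coupled, with an
extreme nonlinearity and a little per-element memory. With most random
seeds, a one-part-in-a-billion nudge to the initial state yields a
completely different global outcome.

import math
import random

# Toy sketch: randomly wired elements with nonlinear coupling and
# per-element memory -- two of the complexity-inducing mechanisms
# listed above. Nothing here is designed to do anything.
random.seed(1)
N = 20
STEPS = 200
# Random, garbage-like relationships between the elements.
w = [[random.gauss(0.0, 1.5 / math.sqrt(N)) for _ in range(N)]
     for _ in range(N)]

def run(x0):
    x = list(x0)
    mem = [0.0] * N                       # each element remembers its past
    for _ in range(STEPS):
        for i in range(N):
            mem[i] = 0.9 * mem[i] + 0.1 * x[i]
        # Extreme nonlinearity (tanh) applied to the tangled input.
        x = [math.tanh(sum(w[i][j] * x[j] for j in range(N)) + mem[i])
             for i in range(N)]
    return x

a = [0.1] * N
b = list(a)
b[0] += 1e-9                              # a one-in-a-billion perturbation
print(max(abs(p - q) for p, q in zip(run(a), run(b))))
# With most seeds this prints something of order 1, not of order 1e-9:
# the global trajectory cannot be predicted from the local details.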
But now here is the interesting thing: this observation (about getting
complexity/emergence out if you set the system up with ugly, tangled
mechanisms) is consistent with the reverse observation: the science we
have studied in nature over the last three hundred years has been
based on simple systems that (almost always) do not involve ugly,
tangled mechanisms.
Now, finally, take a look at the stuff that we know we need in an
intelligent system. Just look at the raw components that store
information about the world: concepts or symbols. Do we suspect them
of interacting with one another in ways that would likely mean that the
system as a whole would show some complexity? Yes: you build new
concepts by allowing existing ones to interact and combine in some
deeply nonlinear manner. The way they interact is memory dependent.
There is development involved. There are immensely tangled processes at
work when concepts are used to make thoughts (how are analogies made?).
There are subtle dependencies when questions are answered and actions
undertaken. The list goes on and on. The stuff of intelligence reeks
with evidence of just the kinds of interactions that, in any other
system, would make us suspect complexity.
But then, to nail the coffin down good and hard, take a look at all the
attempts that AI researchers have made to drive complexity OUT of
intelligent systems. For example, logical reasoning is supposed to be a
process whereby some existing pieces of knowledge are combined to yield
new knowledge. So long as the axiomatic knowledge is reliable, the
whole system is completely non-complex because the reasoning process
guarantees that truth is preserved. You could extend such a system with
a million new pieces of knowledge, and all derivations would be
reliable. So far so good, but then the worm enters this Garden of Eden.
When you start allowing probabilistic statements, all of a sudden you
are not quite certain what your statements mean any more (what exactly
does "I like cats" [certainty-value = 92%] mean?). And when your system
is forced to take actions in the real world, it no longer has time to
decide what to do by deriving all the logical consequences of the known
facts and then using the complete derivation to make a decision. To cut
a long story short, it turns out that the Inference Control Engine is
more important than the inference mechanism itself. The actual behavior
of the system is governed, not by the principles of perfectly reliable
logic, but by the control engine, and the mechanisms that drive that
engine are messy, arbitrary, and tangled.
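
To see how the control engine can come to dominate, consider this toy
sketch (Python; the rules, the step budget, and both policies are
invented for illustration). The logic is identical in both runs; only
the arbitrary choice of which inference to perform next differs, and
under a time budget that choice alone determines what the system
actually concludes:

# Toy forward-chainer with a step budget. The rules (the "logic") are
# identical in both runs; only the control policy -- which pending
# inference to perform next -- differs.
RULES = [                                 # (premises, conclusion), all invented
    ({"bird"}, "has_wings"),
    ({"has_wings"}, "can_fly"),
    ({"bird"}, "lays_eggs"),
    ({"lays_eggs"}, "has_nest"),
    ({"has_nest"}, "near_trees"),
]

def chain(facts, policy, budget=3):
    known = set(facts)
    for _ in range(budget):               # no time to derive everything
        fireable = [(p, c) for p, c in RULES
                    if p <= known and c not in known]
        if not fireable:
            break
        _, conclusion = policy(fireable)  # the control engine chooses
        known.add(conclusion)
    return known

first = lambda options: options[0]        # one arbitrary control policy
last = lambda options: options[-1]        # another arbitrary control policy

print(chain({"bird"}, first))             # derives can_fly, never near_trees
print(chain({"bird"}, last))              # derives near_trees, never can_fly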
Now, wherever you go in AI, I can tell the same story. A story in which
the idealistic AI researchers start out wanting to build a thinking
system in which there is not supposed to be any arbitrary mechanism that
might give rise to complexity, but where, after a while, some ugly
mechanisms start to creep in, until finally the whole thing is actually
determined by complexity-inducing mechanisms.
If you start to look closely, you see evidence of mechanisms that you
would expect to be complex.
> 5. Therefore we have no methods for building thinking machines, since
> engineering discipline does not address how to build complex devices.
> Building them as if they are not complex will result in poor behavior;
> squeezing out the complexity will squeeze out the thinking, and leaving
> it in makes traditional engineering impossible.
Not a bad summary, but a little oddly worded.
One of the easiest ways to get a handle on it is this. When an AGI
"grounds" its symbols properly, we all know that what it will actually
do is to form new symbols by getting a Symbol Builder mechanism to
combine lower level symbols (or perhaps raw percepts, at the very
beginning).
But what does Symbol Builder do? Suppose that, way back in evolution,
nature discovered that it could make a really smart brain if it put in a
Symbol Builder that grabbed two symbols that were occurring together
right now, and then added one other symbol that was also occurring right
now, but in a different modality.... so if you see the color yellow, and
a banana shape, and you are chewing your mother's nipple at that moment,
Symbol Builder makes a new symbol for [yellow] + [banana-shape] +
[nipple-feel]. Just *suppose* that that is how Symbol Builder works
(indulge me).
Now, imagine that what happens is that when symbols are built in this
way, the new symbols eventually whittle themselves down and get rid of
one extraneous chunk ([nipple-feel], in this case), but that because of
how the symbols interact with one another (in their complex-system way,
of course), it is absolutely vital that Symbol Builder go by this
indirect route to get to the final, valuable symbol. Always it has to
take too much, then throw something away, and for some reason that makes
no sense one of the three symbols always has to be a symbol from another
modality. Suppose that if you try to rationalize the system to get rid
of this silly arrangement, and instead get it to grab just two symbols
to make the new one, or insist that it gets all three from the same
modality, it just darn well breaks. And what that means is that without
it the system builds useless symbols, and fails to build up a decent
hierarchy of powerfully abstracted symbols.
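
For concreteness, here is the thought experiment rendered as a literal
toy (Python; every detail is supposition, as above, including all the
names): a Symbol Builder that must grab a third, cross-modal symbol and
then whittle it away, and that simply fails when denied its
nonsensical detour:

from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    name: str
    modality: str                         # e.g. "vision", "touch"

def build_symbol(active):
    """Combine the first two co-occurring symbols -- but only via a
    third symbol from some OTHER modality, later thrown away."""
    a, b = active[0], active[1]
    extras = [s for s in active[2:]
              if s.modality not in (a.modality, b.modality)]
    if not extras:
        return None                       # no cross-modal extra: it breaks
    c = extras[0]                         # take too much...
    return whittle((a, b, c))             # ...then throw something away

def whittle(triple):
    a, b, _ = triple                      # the extraneous chunk is dropped
    return Symbol(name="[" + a.name + "+" + b.name + "]",
                  modality=a.modality)

yellow = Symbol("yellow", "vision")
banana = Symbol("banana-shape", "vision")
nipple = Symbol("nipple-feel", "touch")
print(build_symbol([yellow, banana, nipple]))   # builds a useful symbol
print(build_symbol([yellow, banana]))           # None: the "rational" version breaks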
This scenario is basically about a situation in which thinking and
symbol processing happen in a way that is very close to what we think
will work in an AGI, but with a small twist that (crucially!) makes no
sense. There is no "logical" or "reasonable" purpose for that extra
symbol picked up at the beginning, but without it the system as a whole
just does not build symbols that are powerful.
Suppose (just suppose) that nature is built in such a way that there are
NO solutions to the symbol-building problem that work, unless this
bizarre, nonsensical twist is present in the symbol building mechanism.
If that were the case, would our present engineering techniques (or
psychological studies) guess that such a thing had to be there? No,
because a crucial high-level property of the system (its ability to
build powerfully abstracted symbols, rather than cheap and useless
symbols) is, in this case, a complex result of the low level mechanism.
Our present engineering techniques would never lead us to that mechanism.
To make the example a little more realistic, suppose that what is
required is not the thing I have just described, but that each symbol in
the system has (say) a triplet of parameters associated with it, and
some mechanisms that combine these parameters in some way when symbols
get together. Imagine that these three parameters have no high-level
interpretation, although as a set they do seem to play a role in what we
would interpret at a high level as the "truth-certainty" of the symbol.
As before, imagine that the proper functioning of the system (the
proper interaction of symbols when the system is thinking and reasoning)
is completely dependent on these parameters and their peculiar
combination mechanism. If thinking and reasoning (in humans) can only
happen when the symbols have these idiosyncratic parameters, how would
we ever discover this fact?
We only ever try to put things in the symbols that CAN be interpreted at
a high level (e.g. probabilities or certainty values), and which combine
in non-complex ways. But suppose that there happens to be no solution
to the intelligence puzzle other than this weird mechanism?
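
A toy rendering of that supposition (Python; the mixing rule and the
parameter values are deliberately arbitrary inventions of mine): each
symbol carries an opaque triple, interactions mix the triples in a
tangled way, and only a crude summary of the whole triple looks, from
the outside, like a "truth-certainty":

import math

def combine(p, q):
    # A deliberately uninterpretable mixing rule: not a probability
    # update, not fuzzy logic, just tangled coupling of the triples.
    a = p[0] * q[1] - 0.3 * p[2]
    b = math.tanh(p[1] + q[0] * q[2])
    c = (p[2] + q[2]) / (1.0 + abs(p[0] - q[0]))
    return (a, b, c)

def apparent_certainty(p):
    # The only high-level handle an observer gets: a crude summary of
    # the whole triple. No single parameter means anything on its own.
    return 1.0 / (1.0 + math.exp(-sum(p)))

cat_symbol = (0.4, 1.1, -0.2)             # arbitrary made-up triples
likes_symbol = (0.9, -0.3, 0.6)
merged = combine(cat_symbol, likes_symbol)
print(merged)
print(apparent_certainty(merged))         # reads like a "certainty-value"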
I see AI people denying that this could happen, while at the same time
they take their systems that are not supposed to have complexity in
them, and they make them dependent on mechanisms (for example, inference
control mechanisms) that manifestly are sensitive to complex effects.
So while they deny that complexity needs to be talked about, they
insert bucketloads of it into their systems!
But instead of acknowledging that this complexity could be unavoidable,
and that you cannot insert complexity and then fiddle with parameters
to get it to come out right (that being the ONE thing we know you
cannot do with a complex system), they continue to deny that complexity
is an issue.
> Not quite right I suppose, but I'll keep working at it.
You did pretty well, but missed the detailed consequences.
And I probably did not help, since I just wrote this out in a stream of
consciousness late at night.
I'll try to tidy this up and put it on the blog tomorrow.
Richard Loosemore