Abram Demski wrote:
Thanks for the comments. My replies:



It does happen to be the case that I
believe that logic-based methods are mistaken, but I could be wrong about
that, and it could turn out that the best way to build an AGI is with a
completely logic-based system, along with just one small mechanism that was
Complex.

Logical methods are quite Complex. This was part of my point. Logical
deduction in any sufficiently complicated formalism satisfies both
types of global-local disconnect that I mentioned (undecidability, and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say "logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.")
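
To make the "global-local disconnect" point concrete, here is a minimal Python sketch of brute-force forward chaining over propositional Horn rules; the rules and atoms are made up purely for illustration. For this propositional toy the fixed point arrives quickly, but the only general way to see what the global consequence set looks like is to actually run the local rule firings, and once the formalism is rich enough (function symbols, arithmetic) the same local picture brings in undecidability.

    def forward_chain(facts, rules):
        # facts: a set of atomic propositions; rules: a list of (premises, conclusion).
        # Keep firing any rule whose premises are all derived, until nothing changes.
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and all(p in derived for p in premises):
                    derived.add(conclusion)
                    changed = True
        return derived

    # Hypothetical rule base, just to show the mechanics.
    rules = [({"a", "b"}, "c"), ({"c"}, "d"), ({"a", "d"}, "e")]
    print(sorted(forward_chain({"a", "b"}, rules)))   # ['a', 'b', 'c', 'd', 'e']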

Okay, I made a mistake in my choice of words (I knew it when I wrote them, but neglected to go back and correct them!).

I did not mean to imply that I *require* some complexity in an AGI formalism, and that finding some complexity would be a good thing, end of story, problem solved, etc. So for example, you are correct to point out that most 'logical' systems do exhibit complexity, provided they do something realistically approximating intelligence.

Instead, what I meant to say was that we are not setting up our research procedures to cope with the complexity. So, it might turn out that a good, robust AGI can be built with something like a regular logic-based formalism, BUT with just a few small aspects that are complex ... and yet we are currently not able to discover what those complex parts should be like, because our current methodology is to use blind hunch and intuition (i.e. heuristics that "look" as though they will work). Going back to your planning system example, it might be the case that only one choice of heuristic control mechanism will actually make a given logical formalism converge on fully intelligent behavior, but there might be 10^100 possible control mechanisms, and our current method for searching through those possibilities is to use intuition to pick likely candidates.

The point here is that a small number of complexity-inducing factors can actually have a massive effect on the behavior of the system, but people today are acting as if a small amount of complexity-inducing characteristics means only a small amount of unpredictability in the behavior. This is simply not the case.
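
As a toy illustration of that last point (this says nothing about AGI architectures as such, only about the general phenomenon): elementary cellular automaton rules 108 and 110 differ in exactly one bit of their eight-entry rule tables, yet rule 108 produces only simple, locally predictable behavior while rule 110 is known to be computationally universal. You cannot read the qualitative difference off the size of the change to the ingredients; you have to run the system, or do some very deep analysis, to find out. A minimal Python sketch:

    def step(cells, rule):
        # One synchronous update of an elementary cellular automaton (wrap-around edges).
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    def run(rule, width=64, steps=32):
        cells = [0] * width
        cells[width // 2] = 1                  # start from a single live cell
        rows = []
        for _ in range(steps):
            rows.append("".join("#" if c else "." for c in cells))
            cells = step(cells, rule)
        return "\n".join(rows)

    for rule in (108, 110):                    # rule tables differ in exactly one bit
        print("rule", rule)
        print(run(rule))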






Similarly, you suggest that I "have an image of an AGI that is built out of
totally dumb pieces, with intelligence emerging unexpectedly."  Some people
have suggested that that is my view of AGI, but whether or not those people
are correct in saying that [aside:  they are not!] ...

Apologies. But your arguments do appear to point in that direction.

In your original blog post, also, you mention the way that AGI planning ...
The problem is that you have portrayed the
distinction between 'pure' logical mechanisms and 'messy' systems that have
heuristics riding on their backs, as equivalent to a distinction that you
thought I was making between non-complex and complex AGI systems.  I hope
you can see now that this is not what I was trying to argue.

You are right, this characterization is quite bad. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always equal a logical search with
messy heuristics stacked upon it. In fact, I had two conflicting
images in mind:

-A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which are guaranteed to be fairly near optimal)

-A "messy" method (such as a neural net or swarm) that somehow gives
you an answer without precise logic

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using "messy" methods.
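
For concreteness, the first kind of thing Abram describes, a "logical" heuristic with a provable guarantee, looks something like the classic greedy 2-approximation for minimum vertex cover: repeatedly pick an uncovered edge and take both of its endpoints. The cover found is at most twice the size of an optimal one, and that bound is a theorem rather than a hunch. A minimal Python sketch (the example graph is made up):

    def vertex_cover_2approx(edges):
        # Greedy 2-approximation for minimum vertex cover.
        # The chosen edges form a matching, and any cover must contain at least one
        # endpoint of each matched edge, so len(cover) <= 2 * len(optimal cover).
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]    # toy graph
    print(vertex_cover_2approx(edges))                  # e.g. {1, 2, 3, 4}; an optimum is {1, 4}

The "messy" end of the spectrum gives no such bound; there you only find out how good the answers are by testing.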

Ah, I agree completely here. We are talking about a Wag The Dog scenario, where everyone focuses on the pristine beauty of the logical formalism, but turns a blind eye to the (assumed-to-be) trivial heuristic control mechanisms ... when in the end it is the heuristic control mechanism that is responsible for almost all of the actual behavior.




I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
"The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one..."

And at that point, your lab and my lab are essentially starting to do the same thing. You need to start searching the space of possible heuristics in a systematic way, rather than just picking a hunch and going with it.
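
A minimal sketch of what "searching the space of possible heuristics systematically" could look like at toy scale. Everything here (the grid task, the two-parameter heuristic family, the scoring by node expansions) is a made-up stand-in for whatever the real formalism and benchmark suite would be:

    import heapq, itertools, random

    def search(start, goal, walls, heuristic, size=10):
        # Best-first (A*-style) search on a small grid.
        # Returns the number of node expansions needed to reach the goal, or None.
        frontier = [(heuristic(start, goal), 0, start)]
        best_g = {start: 0}
        expansions = 0
        while frontier:
            _, g, node = heapq.heappop(frontier)
            if g > best_g.get(node, float("inf")):
                continue                                   # stale queue entry
            expansions += 1
            if node == goal:
                return expansions
            x, y = node
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
                    ng = g + 1
                    if ng < best_g.get(nxt, float("inf")):
                        best_g[nxt] = ng
                        heapq.heappush(frontier, (ng + heuristic(nxt, goal), ng, nxt))
        return None

    def make_heuristic(w_dist, w_col):
        # A tiny parameterised family of heuristics: two features, two weights.
        def h(node, goal):
            dx, dy = abs(node[0] - goal[0]), abs(node[1] - goal[1])
            return w_dist * (dx + dy) + w_col * dy
        return h

    random.seed(0)
    benchmarks = []
    for _ in range(20):                                    # a small suite of random obstacle maps
        walls = {(random.randrange(10), random.randrange(10)) for _ in range(25)}
        walls -= {(0, 0), (9, 9)}
        benchmarks.append(walls)

    def score(h):
        # Total work over the suite (maps where the goal is unreachable are skipped;
        # reachability does not depend on the heuristic, so scores stay comparable).
        results = [search((0, 0), (9, 9), w, h) for w in benchmarks]
        return sum(r for r in results if r is not None)

    # The systematic part: sweep the family and measure, instead of trusting a hunch.
    candidates = itertools.product([0.0, 0.5, 1.0, 1.5], repeat=2)
    best = min((score(make_heuristic(wd, wc)), wd, wc) for wd, wc in candidates)
    print("best weights:", best[1:], "total expansions:", best[0])

At real scale the candidate space is astronomically larger and each evaluation far more expensive, which is exactly why the search has to be treated as a first-class problem rather than an afterthought.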

The problem, though, is that you might already have gotten yourself into a You Can't Get There By Starting From Here situation. What if your choice of basic logical formalism and knowledge representation format (and the knowledge acquisition methods that MUST come along with that formalism) has boxed you into a corner in which no choice of heuristic control mechanism will get your system up into human-level intelligence territory?

That is the point where my lab would hold up the example of human cognition and say that we have a greater chance of not being in the You Can't Get There By Starting From Here zone. By staying as close to human cognition as possible, there is more chance of getting a system that works.



Finally, I should mention one general misunderstanding about mathematics.
 This argument has a superficial similarity to Godel's theorem, but you
should not be deceived by that.  Godel was talking about formal deductive
systems, and the fact that there are unreachable truths within such systems.
 My argument is about the feasibility of scientific discovery, when applied
to systems of different sorts.  These are two very different domains.

I think it is fair to say that I accounted for this. In particular, I
said: "It's this second kind of irreducibility, computational
irreducibility, that I see as more relevant to AI." (Actually, I do
see Godel's theorem as relevant to AI; I should have been more
specific and said "relevant to AI's global-local disconnect".)

You are correct: you did include that second perspective, which is about science and not formal completeness. I stand corrected. Too many people have made the error of saying that the argument is just a rehash of Godel, or Chaitin-Kolmogorov, so I was responding to that phantom crowd of onlookers.

I think that I need to write some more to explain the *way* that I see this complex systems problem manifesting itself, because that aspect was not emphasized (due to lack of space) and it leaves a certain amount of confusion in the air. I will get to that when I can.




Richard Loosemore

