Very good, Richard. I agree to a great extent. Yes, the human mind is a complex, interdependent system of subsystems, and you can't chop any of them off.

[Yes, BTW, to the "insanity" - the literally out-of-the-human-mind nature - of scientific psychology. First, no mind: behaviourism. Then, yes, there's a mind, but only an unconscious mind. Then, in the 1990s: oh, we do have a conscious mind too. And still we only study consciousness as a set of faculties, and not Thought - the conscious mind's actual streams of debate - the geology, if you like, but not the geography, of human thought.]

But what you seem to be leaving out is the evolutionary (and developmental) standpoint. The human mind evolved. And it also has to develop in stages through childhood, which to a limited extent recapitulates evolution.

So you have to understand why the human system had to evolve and has to develop in those ways. You can't just attempt to recreate, say, an already-developed adult human mind by a super-Manhattan project. We're nowhere near ready for that yet.

(An interesting thought here, BTW, is that adaptivity itself adapts - it becomes more sophisticated through life - and evolution evolves.)

Sure, Ben, AGI does not have to copy the evolution of mind exactly, but there are basic principles of constructing a mind there that I think do have to be adhered to, just as there were basic principles of flight.

For example, nearly everyone here seems to be talking about plunging in and creating a sophisticated intellectual mind more or less straight off. But it takes the human brain roughly 13-20 years to develop, physically and mentally, to the point where it is able to intellectualise - to handle concepts like "society" and "development" and "philosophy." Why? I would argue it is because those powers of abstraction have been grounded in a gradually built-up picture tree of underlying images and graphics, of great depth, together with extraordinary CGI-like powers of manipulating them. An abstract concept like "society", I'm suggesting, is based on a great many images in the brain - and you have to have them to handle it, as you do with all such abstract concepts.
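To make that structure a little more concrete, here is a crude toy sketch in Python (the labels, the depth and the whole representation are invented purely for illustration - the real thing would be of great depth): an abstract concept node has no content of its own, it just bottoms out in image-like leaves.

# Toy "grounding tree": an abstract concept defined only by the more
# concrete, image-like nodes beneath it. (Illustrative only - the labels
# and the shallow depth are made up for the example.)

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def grounding(self):
        # Collect the image-like leaves this concept ultimately rests on.
        if not self.children:
            return [self.label]
        leaves = []
        for child in self.children:
            leaves.extend(child.grounding())
        return leaves

society = Node("society", [
    Node("crowd",    [Node("image: people filling a square")]),
    Node("family",   [Node("image: people around a table")]),
    Node("exchange", [Node("image: hands passing goods")]),
])

print(society.grounding())
# ['image: people filling a square', 'image: people around a table',
#  'image: hands passing goods']

The only point of the toy is that "society" is not a free-floating token: strip away the images underneath and there is nothing left for the concept to mean.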


----- Original Message ----- From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <singularity@v2.listbox.com>
Sent: Wednesday, April 25, 2007 4:59 PM
Subject: [singularity] Re: Why do you think your AGI design will work?


Joshua Fox wrote:
Ben has confidently stated that he believes Novamente will work (http://www.kurzweilai.net/meme/frame.html?m=3 and others).

AGI builders, what evidence do you have that your design will work?

This is an oft-repeated question, but I'd like to focus on two possible bases for saying that an invention will work before it does.

1. A clear, simple, mathematical theory, verified by experiment. The experiments can be "pure science" rather than technology tests.
2. Functional tests of component parts or of crude prototypes.

Maybe I am missing something in the articles I have read, but do contemporary AGI builders have a verified theory and/or verified components and prototypes?

Joshua,

I happen to think your question is a very important one. I am writing a paper on something very close to that question right now, so I want to summarize what I have said there.

First of all, I think a lot of the replies to your post went off at a tangent: inventing a test means nothing (no matter how much fun it is) if the justification for the test is nonexistent. It doesn't matter how many tests people pull out of thin air: the whole point of your question was WHY should we believe this or that test, WHY should we believe this or that definition of intelligence, and WHY should we believe that this or that design for an AGI is better than any other.

What we need is the BASIS that anyone might have for asserting the superiority of one answer over another ... other than personal judgment.

But:

This 'basis' is completely missing from all of AI research. AI is just one great big free-for-all exploration, based on personal judgements that are often kept away from the limelight, to build something that works as well as human intelligence. There are no principled approaches, there are only hidden assumptions/preconceptions/guesses, on top of which are layered various kinds of formalism designed to make it all look more scientific. (And if it seems outrageous to say that so many people are being so self-deceptive, take a quick look at the history of behaviorism in psychology ... very similar story, same conclusion.)

The above is meant to be a position statement: I believe that I can justify it by means of a long essay, with lots of evidence, but let's just take it for granted right now, so I can move on to the next step.

Here is what I think is happening.

1) Everyone is actually borrowing crucial ideas from the design of the human cognitive system, including those people who say they are not.

I say this because every approach to AI involves something borrowed from the human design: even pure mathematical logic was based on some ideas that the Ancient Greeks had about how their minds worked. Most people borrow just a little (nobody is trying, yet, to borrow most of the human design).

2) The only reason that any AI design works is because something was borrowed from the human design.

There are no objective reasons why AI systems should be intelligent, no matter how much the logicians might argue that what they do is 'deriving true facts about the world by means of truth-preserving laws of inference'. This is just post-hoc rationalization that leaves out all the little bits and pieces they insert into their systems to make them work in practical situations. Those mathematical laws of inference do not guarantee that the systems are intelligent; they just guarantee that if you load up a system with a bunch of facts you can derive a bunch of others ... and those are two very different claims.
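Just to make that second claim concrete, here is a throwaway Python sketch of what a bare truth-preserving engine actually buys you (the 'facts' and 'rules' are invented for the illustration, and no real system is quite this naive):

# A bare forward-chaining engine: load in facts and truth-preserving rules,
# and it mechanically derives every conclusion it can reach.
# Everything here is toy data, made up for the illustration.

facts = {("bird", "tweety"), ("bird", "polly")}

# Each rule: if a fact with this predicate exists, conclude a new fact.
rules = [
    ("bird",      lambda x: ("has_wings", x)),
    ("has_wings", lambda x: ("can_fly", x)),
]

changed = True
while changed:
    changed = False
    for premise, conclude in rules:
        for pred, arg in list(facts):
            if pred == premise:
                new_fact = conclude(arg)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

print(sorted(facts))
# Derives has_wings and can_fly for both birds - and that is all it
# guarantees: conclusions that follow from whatever someone chose to load in.

The derivations are impeccable; the intelligence, such as it is, was smuggled in by whoever chose the facts and the rules.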

3) If you step back and ask, objectively, whether we should borrow a lot of the human design, or just take a few snippets and then embellish them, you can come to a serious conclusion, based on our understanding of complex systems: the grab-a-few-snippets-and-then-embellish-them approach is the most ridiculous of all. It is almost certain to fail because, if you want to emulate a complex system, the dumbest, most lunatic approach of all is to take a quick glance at its low-level mechanisms and then pretend that your quick glance can be the root of a development process that will lead to the same global behavior as the original. Basically, you are trapping yourself in a Can't Get There From Here situation.

4) If the above problem (item 3) is real, then we would expect to see a number of features in AI research:

(a) Avoidance of the crucial areas where the complexity will get you, like true symbol grounding [CHECK],

(b) Encouraging progress at first because of the borrowing from the human design, followed by stagnation [CHECK],

(c) Repeated cycles in which everyone climbs on a new idea-bandwagon to try to get around the limitations of the previous one, followed by good progress and then stagnation [CHECK],

(d) Very little to show for years of mind-numbing theorem-proving [CHECK],

(e) Double standards by those who claim to be using rigorous scientific (i.e. mathematical) techniques ... the core of what they do is rigorous, to be sure, but they keep very quiet about the fact that they have to add completely arbitrary machinery to 'constrain' their theorem proving engines, so they won't just prove everything in the universe before deciding whether to put the jam on top of the bread or the bread on top of the jam. In other words, these people are just hackers, like their predecessors. [CHECK],

(f) Distractions from the goal of building a working AGI, like people who invent abstract, impossible-to-build AI 'systems' (actually just pure math fantasies), because they love math more than they love the idea of actually getting anything to work [CHECK],

(g) No overall progress, because this approach (borrowing a few ideas from the human design, glorifying them as basic assumptions, and then pretending that it is possible to make a complete AGI system by embellishing and extending those first, arbitrarily chosen ideas) is ultimately going to hit a glass ceiling. The approach will be able to make some limited progress with all the aspects of intelligence that do not depend on too much complexity (like getting the system to build its own concepts and its own high-level learning mechanisms), but this will only produce fragile systems that have to have their hands held in an exponentially increasing way as we try to push them to do more intelligent things.



That last point is the only one we don't know about yet: come back in fifty years and see if, with no change in approach, the situation is still as daft as it is today.

Every one of the AI or AGI projects that I see now is doing the same thing. All borrowing a few chunks from the human design, all pretending that they don't need to borrow the entire human design, all just making it up from a 'design' that is actually someone's best guess, with only personal intuition as their ultimate justification for why their best guess is the one that will work. All, I predict, will make some progress until they hit the glass ceiling.


So what is the way out? The only way out, I claim, is to be honest about the fact that the human design is the source of inspiration, and get serious about borrowing from it in a massive, systematic way.

I am not saying that everyone should just do cognitive science: the folks over there are just as screwed up as the AI community, though for slightly different reasons.

What we actually need is a true middle path, neither conventional AI nor cognitive science/psychology, but something in between. Absent a better name, I am now referring to that middle course as 'Theoretical Psychology'.

So the answer to your question is this:

Nobody has a clue what a formal theory of AGI would look like, because in the end there cannot be any such thing: the function "being intelligent" is not definable in an objective, non-circular way. So I am afraid you cannot ask for either experimental science or verifiable functional components. Unfortunately, a lot of AI's problems are wrapped up in the fact that people simply cannot get their heads around this idea. They will one day, but why do we have to wait?

The best we can do is to use the human design as a close inspiration -- we do not have to make an exact copy, we just need to get close enough to build something in the same family of systems, that's all -- and set up progress criteria based on how well we explain and understand that design.

Sounds like it would be very unsatisfying to someone who was a mathematician, doesn't it? Horrible, nasty, empirical science. That's why, sadly, mathematicians should not be doing AI.




Richard Loosemore








-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=8eb45b07
