I'm a little bit familiar with Piaget, and I'm guessing that the "formal stage of development" is something on the level of a four-year-old child. If we could create an AI system with the intelligence of a four-year-old child, then we would have a huge breakthrough, far beyond anything done so far in a computer. And we would be approaching a possible singularity. It's just that I see no evidence anywhere of this kind of breakthrough, or anything close to it.

My own ideas are certainly inadequate by themselves at present. My Gnoljinn project is just about at the point where I can start writing the code for the intelligence engine: the architecture is in place, the interface language, Jinnteera, is being parsed, and images (along with linguistic statements) are being sent into the Gnoljinn server and pre-processed. Developing the intelligence engine itself will take a lot of coding, experimentation, and re-coding before I get it right. It's all experimental, and it will take time.
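
Roughly, the input side looks something like the sketch below. Every name in it is a placeholder I'm inventing for illustration here -- it is not the actual Gnoljinn code:

    # Toy sketch of the Gnoljinn input pipeline; all names here are
    # placeholders, not the real system.

    def parse_jinnteera(text):
        # Stand-in for the real Jinnteera parser, which builds a
        # structured parse of the statement.
        return {"tokens": text.split()}

    def preprocess_image(pixels):
        # Stand-in for the server's image pre-processing stage.
        return {"n_values": len(pixels)}

    def handle_input(statement=None, pixels=None):
        # The server pairs linguistic and visual input and queues
        # the result for the (not yet written) intelligence engine.
        return {
            "statement": parse_jinnteera(statement) if statement else None,
            "image": preprocess_image(pixels) if pixels else None,
        }

    print(handle_input(statement="block on table", pixels=[0, 1, 1, 0]))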

I see a singularity, if it occurs at all, as being at least a hundred years out. I know you have a much shorter time frame in mind. But what is it about Novamente that will allow it, in a few years' time, to comprehend its own computer code and intelligently rewrite it (especially a system as complex as Novamente)? The artificial intelligence problem is much more difficult than most people imagine it to be.
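
To be concrete about what I mean by "comprehend": the mechanical part, a program reading and parsing its own source, is trivial -- a toy Python script can do it. Everything after that is the hard part:

    # Toy illustration: a script can trivially read and parse its
    # own source (run it as a file, not interactively).
    import ast
    import inspect
    import sys

    source = inspect.getsource(sys.modules[__name__])
    tree = ast.parse(source)
    print(len(tree.body), "top-level statements parsed")

    # Nothing here *understands* the code. Choosing a rewrite that
    # preserves meaning while making the system smarter is the
    # unsolved problem.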


Ben Goertzel wrote:

John,

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:

I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.

First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:
<grin>
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
</grin>

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself** ..."

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
"formal stage" of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the "human child like" intuition of the AGI system will
be able to synergize with its "computer like" ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).
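
To gesture at the kind of division of labor I mean, here is a
toy Python sketch -- emphatically not Novamente's actual
mechanism -- in which a stub "intuition" heuristic decides which
states look promising, while only exact, formal syntactic
rewrite rules are allowed to change anything:

    # Toy sketch (not Novamente code): "intuition" guides search,
    # while formal syntactic rules do the actual rewriting.
    import heapq

    RULES = [("x*1", "x"), ("x+0", "x"), ("0+x", "x")]

    def formal_successors(expr):
        # Formal side: exact, rule-based rewrites of a symbol string.
        succs = []
        for lhs, rhs in RULES:
            start = expr.find(lhs)
            while start != -1:
                succs.append(expr[:start] + rhs + expr[start + len(lhs):])
                start = expr.find(lhs, start + 1)
        return succs

    def intuition_score(expr):
        # Stub for learned, child-like intuition; here, just length.
        return len(expr)

    def simplify(expr, max_steps=100):
        # Best-first search: intuition ranks states, formal rules
        # generate them.
        frontier, seen, best = [(intuition_score(expr), expr)], {expr}, expr
        while frontier and max_steps:
            max_steps -= 1
            _, current = heapq.heappop(frontier)
            if intuition_score(current) < intuition_score(best):
                best = current
            for nxt in formal_successors(current):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (intuition_score(nxt), nxt))
        return best

    print(simplify("x+0*1"))  # prints "x"

The real synergy, of course, begins when the intuition half is
learned rather than stubbed.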

-- Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

