Re: [agi] The Singularity

2006-12-06 Thread John Scanlon
Hank - do you have any theories or AGI designs?

Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin
My aim is not a General AI; currently it's MJZT. General AI just seems to be a side effect .u'i (amusement). Nodes in a JZT communicate through language (in whatever form it may take), and the communication becomes automated. After a certain point a typical JZT "automation" would be able to have

Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin
On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote: Your message appeared at first to be rambling and incoherent, but I see that that's probably because English is a second language for you. But that's not a problem if your ideas are solid. English is my second language. My first language is R

Re: Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Alright, one last message for the night. I don't actually consider myself to be pessimistic about AI. I believe that strong AI can and will (barring some global catastrophe) develop. It's the wrong-headed approaches throughout the history of AI that have hobbled the whole enterprise. The 1970s ha

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Hank, Do you have a personal "understanding/design of AGI and intelligence in general" that predicts a soon-to-come singularity? Do you have theories or a design for an AGI? John Hank Conn wrote: It has been my experience that one's expectations on the future of AI/Singularity is di

Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel
I see a singularity, if it occurs at all, to be at least a hundred years out. To use Kurzweil's language, you're not thinking in "exponential time" ;-) The artificial intelligence problem is much more difficult than most people imagine it to be. "Most people" have close to zero basis to eve

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
I'm a little bit familiar with Piaget, and I'm guessing that the "formal stage of development" is something on the level of a four-year-old child. If we could create an AI system with the intelligence of a four-year-old child, then we would have a huge breakthrough, far beyond anything done so

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Your message appeared at first to be rambling and incoherent, but I see that that's probably because English is a second language for you. But that's not a problem if your ideas are solid. Yes, there is "fake artificial intelligence" out there, systems that are proposed to be intelligent but

Re: [agi] The Singularity

2006-12-05 Thread Matt Mahoney
--- John Scanlon <[EMAIL PROTECTED]> wrote: > Alright, I have to say this. > > I don't believe that the singularity is near, or that it will even occur. I > am working very hard at developing real artificial general intelligence, but > from what I know, it will not come quickly. It will be slo

Re: [agi] The Singularity

2006-12-05 Thread Pei Wang
See http://www.agiri.org/forum/index.php?showtopic=44 and http://www.cis.temple.edu/~pwang/203-AI/Lecture/AGI.htm Pei On 12/5/06, Andrii (lOkadin) Zvorygin <[EMAIL PROTECTED]> wrote: Is there anywhere I could find a list and description of these different kinds of AI?.a'u(interest) I'm sure I

Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin
On 12/5/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: Ben Goertzel wrote: >> If, on the other hand, all we have is the present approach to AI then I >> tend to agree with you John: ludicrous. >> Richard Loosemore > IMO it is not sensible to speak of "the present approach to AI"

Re: [agi] The Singularity

2006-12-05 Thread Charles D Hixson
Ben Goertzel wrote: ... According to my understanding of the Novamente design and artificial developmental psychology, the breakthrough from slow to fast incremental progress will occur when the AGI system reaches Piaget's "formal stage" of development: http://www.agiri.org/wiki/index.php/Formal

Re: [agi] The Singularity

2006-12-05 Thread Hank Conn
"Ummm... perhaps your skepticism has more to do with the inadequacies of **your own** AGI design than with the limitations of AGI designs in general?" It has been my experience that one's expectations on the future of AI/Singularity is directly dependent upon one's understanding/design of AGI and

Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore
Ben Goertzel wrote: If, on the other hand, all we have is the present approach to AI then I tend to agree with you John: ludicrous. Richard Loosemore IMO it is not sensible to speak of "the present approach to AI" There are a lot of approaches out there... not an orthodoxy by any means...

Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel
If, on the other hand, all we have is the present approach to AI then I tend to agree with you John: ludicrous. Richard Loosemore IMO it is not sensible to speak of "the present approach to AI" There are a lot of approaches out there... not an orthodoxy by any means... -- Ben G

Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore
John Scanlon wrote: Alright, I have to say this. I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slow and incremental. The idea t

Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel
John, On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote: I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slow and incremental. The

Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin
On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote: Alright, I have to say this. I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slo

[agi] The Singularity

2006-12-05 Thread John Scanlon
Alright, I have to say this. I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slow and incremental. The idea that very soon we can cr

Re: [agi] the Singularity Summit and regulation of AI

2006-05-11 Thread Bill Hibbard
Thank you for your responses. Jeff, I have taken your suggestion and sent a couple questions to the Summit. My concern is motivated by noticing that the Summit includes speakers who have been very clear about their opposition to regulating AI, but none who I am aware of who have advocated it (exce

Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Jeff Medina
Ben is pretty spot on here. There are many possible approaches and views that will not be covered; there simply isn't enough time. I can't speak for the speakers, nor for the extent to which any one of them will focus his or her time on regulation. But please note that the Summit has an open in

Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Russell Wallace
On 5/10/06, Bill Hibbard <[EMAIL PROTECTED]> wrote: The Singularity Summit should include all points of view, including advocates for regulation of intelligent machines. It will weaken the Summit to exclude this point of view. Then it would be better if the Summit were not held at all. Nanotech, AGI e

Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Ben Goertzel
On 5/10/06, Bill Hibbard <[EMAIL PROTECTED]> wrote: I am concerned that the Singularity Summit will not include any speaker advocating government regulation of intelligent machines. The purpose of this message is not to convince you of the need for such regulation, but just to say that the Summit

Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Mark Walker
- Original Message - From: "Bill Hibbard" Subject: [agi] the Singularity Summit and regulation of AI I am concerned that the Singularity Summit will not include any speaker advocating government regulation of intelligent machines. The purpose of this message is not to co

[agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Bill Hibbard
I am concerned that the Singularity Summit will not include any speaker advocating government regulation of intelligent machines. The purpose of this message is not to convince you of the need for such regulation, but just to say that the Summit should include someone speaking in favor of it. Note