Re: Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Alright, one last message for the night. I don't actually consider myself to be pessimistic about AI. I believe that strong AI can and will (bar some global catastrophe) develop. It's the wrong-headed approaches throughout the history of AI that have hobbled the whole enterprise. The 1970s ha

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Hank, Do you have a personal "understanding/design of AGI and intelligence in general" that predicts a soon-to-come singularity? Do you have theories or a design for an AGI? John Hank Conn wrote: It has been my experience that one's expectations on the future of AI/Singularity are di

Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel
I see a singularity, if it occurs at all, as at least a hundred years out. To use Kurzweil's language, you're not thinking in "exponential time" ;-) The artificial intelligence problem is much more difficult than most people imagine it to be. "Most people" have close to zero basis to eve

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
I'm a little bit familiar with Piaget, and I'm guessing that the "formal stage of development" is something on the level of a four-year-old child. If we could create an AI system with the intelligence of a four-year-old child, then we would have a huge breakthrough, far beyond anything done so

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Your message appeared at first to be rambling and incoherent, but I see that that's probably because English is a second language for you. But that's not a problem if your ideas are solid. Yes, there is "fake artificial intelligence" out there, systems that are proposed to be intelligent but

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Matt Mahoney
--- Ben Goertzel <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > My point is that when AGI is built, you will have to trust its answers based on the correctness of the learning algorithms, and not by examining the internal data or tracing the reasoning. > Agreed... > > I beli

Re: [agi] The Singularity

2006-12-05 Thread Matt Mahoney
--- John Scanlon <[EMAIL PROTECTED]> wrote: > Alright, I have to say this. > I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slo

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-05 Thread Matt Mahoney
--- Eric Baum <[EMAIL PROTECTED]> wrote: > Matt> --- Hank Conn <[EMAIL PROTECTED]> wrote: > >> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > The "goals of humanity", like all other species, were determined by evolution. It is to propagate the species.

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson
BillK wrote: On 12/5/06, Charles D Hixson wrote: BillK wrote: > ... No time inversion intended. What I intended to say was that most (all?) decisions are made subconsciously before the conscious mind starts its reason/excuse generation process. The conscious mind pretending to weigh vario

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK
On 12/5/06, Charles D Hixson wrote: BillK wrote: > ... > Every time someone (subconsciously) decides to do something, their brain presents a list of reasons to go ahead. The reasons against are ignored, or weighted down to be less preferred. This applies to everything from deciding to get

Re: [agi] The Singularity

2006-12-05 Thread Pei Wang
See http://www.agiri.org/forum/index.php?showtopic=44 and http://www.cis.temple.edu/~pwang/203-AI/Lecture/AGI.htm Pei On 12/5/06, Andrii (lOkadin) Zvorygin <[EMAIL PROTECTED]> wrote: Is there anywhere I could find a list and description of these different kinds of AI? .a'u (interest) I'm sure I

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson
BillK wrote: ... Every time someone (subconsciously) decides to do something, their brain presents a list of reasons to go ahead. The reasons against are ignored, or weighted down to be less preferred. This applies to everything from deciding to get a new job to deciding to sleep with your best

Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin
On 12/5/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: Ben Goertzel wrote: >> If, on the other hand, all we have is the present approach to AI, then I tend to agree with you, John: ludicrous. >> Richard Loosemore > IMO it is not sensible to speak of "the present approach to AI"

Re: [agi] The Singularity

2006-12-05 Thread Charles D Hixson
Ben Goertzel wrote: ... According to my understanding of the Novamente design and artificial developmental psychology, the breakthrough from slow to fast incremental progress will occur when the AGI system reaches Piaget's "formal stage" of development: http://www.agiri.org/wiki/index.php/Formal

Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-05 Thread BillK
On 12/5/06, Richard Loosemore wrote: There are so few people who speak up against the conventional attitude to the [rational AI/irrational humans] idea that it is such a relief to hear any of them speak out. I don't know yet if I buy everything Minsky says, but I know I agree with the spirit of it.

Re: [agi] The Singularity

2006-12-05 Thread Hank Conn
"Ummm... perhaps your skepticism has more to do with the inadequacies of **your own** AGI design than with the limitations of AGI designs in general?" It has been my experience that one's expectations on the future of AI/Singularity is directly dependent upon one's understanding/design of AGI and

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
Yes, I could not find a decent definition of "irrational" at first; amending my statements now... Using the Wiki basis below: the term is used to describe thinking and actions which are, or appear to be, less useful or logical than the rational alternatives. I would remove the 'logical' portion o

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
>> You have hinted around it, but I would go one step further and say that >> Emotion is NOT contrary to logic. :-) I thought that my last statement that " is equally likely to be congruent with " was a lot more than a hint (unless congruent doesn't mean "not contrary" like I think/thought it d

Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore
Ben Goertzel wrote: If, on the other hand, all we have is the present approach to AI, then I tend to agree with you, John: ludicrous. Richard Loosemore IMO it is not sensible to speak of "the present approach to AI" There are a lot of approaches out there... not an orthodoxy by any means...

Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-05 Thread Richard Loosemore
Mark Waser wrote: Talk about fortuitous timing . . . . here's a link on Marvin Minsky's latest about emotions and rational thought http://www.boston.com/news/globe/health_science/articles/2006/12/04/minsky_talks_about_life_love_in_the_age_of_artificial_intelligence/ The most relevant line to

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
I didn't suggest modeling the non-rational part of it; I was just responding to the other implication that we needed the non-rational part to model AGI as human. I believe there is very little non-rationality. > This is where Ben and I are sort of having a debate. I agree with him that the brain

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
Mark Waser <[EMAIL PROTECTED]> wrote: > Are > you saying that the more excuses we can think up, the more intelligent > we are? (Actually there might be something in that!). Sure. Absolutely. I'm perfectly willing to contend that it takes intelligence to come up with excuses and that more intell

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
> Now about building a rational vs non-rational AGI, how would you go about > modeling a non-rational part of it? Short of a random number generator? Why would you want to build a non-rational AGI? It seems like a *really* bad idea. I think I'm missing your point here. > For the most part we

Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel
If, on the other hand, all we have is the present approach to AI, then I tend to agree with you, John: ludicrous. Richard Loosemore IMO it is not sensible to speak of "the present approach to AI" There are a lot of approaches out there... not an orthodoxy by any means... -- Ben G

Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore
John Scanlon wrote: Alright, I have to say this. I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slow and incremental. The idea t

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
BillK <[EMAIL PROTECTED]> wrote: On 12/4/06, Mark Waser wrote: > > Explaining our actions is the reflective part of our minds evaluating the > reflexive part of our mind. The reflexive part of our minds, though, > operates analogously to a machine running on compiled code with the > compilation

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
Talk about fortuitous timing . . . . here's a link on Marvin Minsky's latest about emotions and rational thought http://www.boston.com/news/globe/health_science/articles/2006/12/04/minsky_talks_about_life_love_in_the_age_of_artificial_intelligence/ The most relevant line to our conversation is "

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
Are you saying that the more excuses we can think up, the more intelligent we are? (Actually there might be something in that!) Sure. Absolutely. I'm perfectly willing to contend that it takes intelligence to come up with excuses and that more intelligent people can come up with more and be

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mike Dougherty
On 12/5/06, BillK <[EMAIL PROTECTED]> wrote: Your reasoning is getting surreal. You seem to have a real difficulty in admitting that humans behave irrationally a lot (most?) of the time. Don't you read newspapers? You can redefine rationality if you like to say that all the crazy people are

Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel
John, On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote: I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slow and incremental. The

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK
On 12/4/06, Mark Waser wrote: Explaining our actions is the reflective part of our minds evaluating the reflexive part of our mind. The reflexive part of our minds, though, operates analogously to a machine running on compiled code with the compilation of code being largely *not* under the con

Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin
On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote: Alright, I have to say this. I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slo

[agi] The Singularity

2006-12-05 Thread John Scanlon
Alright, I have to say this. I don't believe that the singularity is near, or that it will even occur. I am working very hard at developing real artificial general intelligence, but from what I know, it will not come quickly. It will be slow and incremental. The idea that very soon we can cr