Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
leria 2, 6928 Manno, Switzerland. http://www.vetta.org/documents/IDSIA-12-06-1.pdf -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Thursday, November 16, 2006 9:57:40 AM Subject: Re: [agi] A question on the

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread James Ratcliff
Furthermore, we learned in class recently about a case where a person was literally born with only half a brain; I don't have that story, but here is one: http://abcnews.go.com/2020/Health/story?id=1951748&page=1 I think all the talk about hard numbers is really off base, unfortunately, and AI shouldn't

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
> if that article is all that you've seen on the topic (though one would have hoped that an integrity check or a reality check would have prompted further evaluation -- particularly since the article itself mentions that that would require an unreasonably/impossibly large amount of RAM.)

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
Message From: Richard Loosemore <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Wednesday, November 15, 2006 4:09:23 PM Subject: Re: [agi] A question on the symbol-system hypothesis Matt Mahoney wrote: > Richard, what is your definition of "understanding"? How would you test

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
o or sound. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mark Waser <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Wednesday, November 15, 2006 3:48:37 PM Subject: Re: [agi] A question on the symbol-system hypothesis >> The connection between intelligence

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
Mark Waser wrote: >Are you conceding that you can predict the results of a Google search? OK, you are right. You can type the same query twice. Or if you live long enough you can do it the hard way. But you won't. >Are you now conceding that it is not true that "Models that are simple eno

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore
Matt Mahoney wrote: Richard, what is your definition of "understanding"? How would you test whether a person understands art? Turing offered a behavioral test for intelligence. My understanding of "understanding" is that it is something that requires intelligence. The connection between in

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser
sting but unobtainable edge case, why do you believe that Hutter has any relevance at all? - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 15, 2006 2:54 PM Subject: Re: [agi] A question on the symbol-system hypothesis

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser
and I know of no reasons why opacity is required for intelligence. - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 15, 2006 2:24 PM Subject: Re: [agi] A question on the symbol-system hypothesis Sorry if I did not make clear the

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
vember 15, 2006 2:38:49 PM Subject: Re: [agi] A question on the symbol-system hypothesis Matt Mahoney wrote: > Richard Loosemore <[EMAIL PROTECTED]> wrote: >> "Understanding" 10^9 bits of information is not the same as storing 10^9 >> bits of information. > > T

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser
er but not the latter, would you care to attempt to offer another counter-example or would you prefer to retract your initial statements? - Original Message ----- From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Wednesday, November 15, 2006 2:24 PM Subject: Re: [agi] A que

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore
Sys. Tech. J (3) p. 50-64. Standing, L. (1973), “Learning 10,000 Pictures”, Quarterly Journal of Experimental Psychology (25) pp. 207-222. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Richard Loosemore <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Wednesday, Novem

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
November 15, 2006 9:39:14 AM Subject: Re: [agi] A question on the symbol-system hypothesis Mark Waser wrote: >> Given sufficient time, anything should be able to be understood and >> debugged. >> Give me *one* counter-example to the above . . . . Matt Mahoney replied: >

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
riginal Message From: Richard Loosemore <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Wednesday, November 15, 2006 9:33:04 AM Subject: Re: [agi] A question on the symbol-system hypothesis Matt Mahoney wrote: > I will try to answer several posts here. I said that the knowledge > base

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser
larly, an AGI doesn't need to store 100% of the information that it uses. It simply needs to know where to find it upon need and how to use it. - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Tuesday, November 14, 2006 10:34 PM Subject:

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser
ut many complex and immense systems are easily bounded. - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Tuesday, November 14, 2006 10:34 PM Subject: Re: [agi] A question on the symbol-system hypothesis I will try to answer several posts here.

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore
Matt Mahoney wrote: I will try to answer several posts here. I said that the knowledge base of an AGI must be opaque because it has 10^9 bits of information, which is more than a person can comprehend. By opaque, I mean that you can't do any better by examining or modifying the internal represent

Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Matt Mahoney
I will try to answer several posts here. I said that the knowledge base of an AGI must be opaque because it has 10^9 bits of information, which is more than a person can comprehend. By opaque, I mean that you can't do any better by examining or modifying the internal representation than you co
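
A back-of-the-envelope check of the 10^9-bit claim (a sketch only; the ~2 bits/s long-term retention rate is Landauer's oft-cited estimate, and the code and its constants are illustrative, not from the thread):

    # Rough arithmetic behind "more than a person can comprehend".
    # Assumes ~2 bits/s of long-term retention (Landauer's estimate)
    # and ~16 waking hours a day; both numbers are assumptions.
    KB_BITS = 10**9          # claimed size of the AGI knowledge base
    RETENTION_RATE = 2.0     # bits per second, assumed
    SECONDS_PER_DAY = 16 * 3600

    days = KB_BITS / (RETENTION_RATE * SECONDS_PER_DAY)
    print(f"{days:.0f} days (~{days / 365:.0f} years) to absorb 10^9 bits")
    # prints: 8681 days (~24 years) to absorb 10^9 bits

Under those assumptions, merely internalizing the knowledge base would take decades of waking effort, which is the sense in which it exceeds individual comprehension.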

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
't understand what I'm doing when I'm programming when he's watching me in real time. Everything is easily explainable given sufficient time . . . . - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Tuesday, November 14, 2006 11:03 AM Subje

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
Does it generate any kind of overview reasoning of why it does something? If in the VR you tell the bot to go pick up something, and it hides in the corner instead, does it have any kind of useful feedback or 'insight' into its thoughts? I intend to have different levels of thought processes and reas

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
No, presumably you would have the ability to take a snapshot of what it's doing, or, as it's doing it, it should be able to explain what it is doing. James Ratcliff BillK <[EMAIL PROTECTED]> wrote: On 11/14/06, James Ratcliff wrote: > If the "contents of a knowledge base for AGI will be beyond our ability
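
One minimal way to get that kind of snapshot (a hypothetical sketch, not any poster's actual system; TracedAgent and its methods are invented here for illustration): have the agent record a short rationale alongside each action, so the trace can be replayed on demand.

    # Hypothetical sketch: an agent that logs a rationale for each action,
    # so "what is it doing and why" can be inspected as a snapshot.
    from dataclasses import dataclass, field

    @dataclass
    class TracedAgent:
        trace: list = field(default_factory=list)

        def act(self, action, rationale):
            # Record the decision and the reason before executing it.
            self.trace.append((action, rationale))

        def explain(self, last_n=5):
            # The "snapshot": replay the most recent decisions.
            for action, rationale in self.trace[-last_n:]:
                print(f"did {action!r} because {rationale}")

    agent = TracedAgent()
    agent.act("hide_in_corner", "predicted threat outweighed the pick-up goal")
    agent.act("pick_up_object", "threat estimate dropped below threshold")
    agent.explain()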

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Ben Goertzel
Even now, with a relatively primitive system like the current Novamente, it is not pragmatically possible to understand why the system does each thing it does. It is possible in principle, but even given the probabilistic logic semantics of the system's knowledge it's not pragmatic, because somet

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
If the "contents of a knowledge base for AGI will be beyond our ability to comprehend"  then it is probably not human level AGI, it is something entirely new, and it will be alien and completely foriegn and unable to interact with us at all, correct?  If you mean it will have more knowledge than we

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread James Ratcliff
ahoney To: agi@v2.listbox.com Sent: Monday, November 13, 2006 10:22 PM Subject: Re: Re: [agi] A question on the symbol-system hypothesis James Ratcliff <[EMAIL PROTECTED]> wrote: >Well, words and language based ideas/terms adequately describe much of the up

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Ben Goertzel
Hi, I would also argue that a large number of weak pieces of evidence means that Novamente does not *understand* the domain that it is making a judgment in. It is merely totaling up weight of evidence. I would say that intuition often consists, internally, in large part, of summing up
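
What "totaling up weight of evidence" can look like concretely (a generic log-odds sketch assuming independent cues; this illustrates the idea only, it is not Novamente's actual inference, and all numbers are made up):

    # Many individually weak cues, each nudging a log-odds score, can add
    # up to a confident judgment. Assumes the cues are independent.
    import math

    def log_odds(p):
        return math.log(p / (1.0 - p))

    def prob(lo):
        return 1.0 / (1.0 + math.exp(-lo))

    prior = 0.5
    weak_cues = [0.55] * 40   # forty cues, each barely better than a coin flip

    score = log_odds(prior) + sum(log_odds(p) - log_odds(prior)
                                  for p in weak_cues)
    print(f"combined belief: {prob(score):.4f}")   # ~0.9997 from weak evidence

No single cue carries the judgment, so pointing at any one of them never "explains" the conclusion; that is the sense in which the combination resembles intuition rather than explicit understanding.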

Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread William Pearson
Richard Loosemore > As for your suggestion about the problem being centered on the use of > model-theoretic semantics, I have a couple of remarks. > > One is that YES! this is a crucial issue, and I am so glad to see you > mention it. I am going to have to read your paper and discuss with you

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread BillK
On 11/14/06, James Ratcliff wrote: If the "contents of a knowledge base for AGI will be beyond our ability to comprehend" then it is probably not human-level AGI; it is something entirely new, and it will be alien and completely foreign and unable to interact with us at all, correct? If you me

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Mark Waser
me *one* counter-example to the above . . . . - Original Message - From: Matt Mahoney To: agi@v2.listbox.com Sent: Monday, November 13, 2006 10:22 PM Subject: Re: Re: [agi] A question on the symbol-system hypothesis James Ratcliff <[EMAIL PROT

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Mark Waser
e . . . . - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Tuesday, November 14, 2006 11:03 AM Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis Even now, with a relatively primitive system like the current Novamente, it is not pragm

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Matt Mahoney
James Ratcliff <[EMAIL PROTECTED]> wrote: >Well, words and language based ideas/terms adequately describe much of the upper levels of human interaction and seem >appropriate in that case. >>It fails of course when it devolves down to the physical level, i.e. vision or motor cortex skills, but other than

Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Pei Wang
Richard, It is a complicated topic, but I don't have the time to write long emails at the moment (that is why I didn't jump into the discussion until I saw your email). Instead, I'm going to send you two papers of mine in a separate email. One of the two is co-authored with Hofstadter, so you pro

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread James Ratcliff
Well, words and language based ideas/terms adequately describe much of the upper levels of human interaction and seem appropriate in that case. It fails of course when it devolves down to the physical level, i.e. vision or motor cortex skills, but other than that, using language internally would seem nat

Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Richard Loosemore
Pei Wang wrote: On 11/13/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: But now you have me really confused, because Searle's attack would have targeted your approach, my approach and Ben's approach equally: none of us have moved on from the position he was attacking! The situation i

Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Pei Wang
e I get that far). It's good to hear that researchers now > have moved beyond the simplistic GOFAI symbol-as-intelligence idea -- > more people with more advanced ideas to share thoughts with. > > ----- Original Message ----- From: "Richard Loosemore" <[EMAIL PROTECTED]>

Re: [agi] A question on the symbol-system hypothesis

2006-11-13 Thread Richard Loosemore
m: "Richard Loosemore" <[EMAIL PROTECTED]> To: Sent: Sunday, November 12, 2006 4:37 PM Subject: Re: [agi] A question on the symbol-system hypothesis John, The problem is that your phrases below have been used by people I completely disagree with (John Searle) and also by p

Re: [agi] A question on the symbol-system hypothesis

2006-11-12 Thread John Scanlon
-- more people with more advanced ideas to share thoughts with. - Original Message - From: "Richard Loosemore" <[EMAIL PROTECTED]> To: Sent: Sunday, November 12, 2006 4:37 PM Subject: Re: [agi] A question on the symbol-system hypothesis John, The problem is that your

Re: [agi] A question on the symbol-system hypothesis

2006-11-12 Thread Richard Loosemore
John, The problem is that your phrases below have been used by people I completely disagree with (John Searle) and also by people I completely agree with (Doug Hofstadter); in different contexts, they mean totally different things. I am not quite sure how it bears on the quote of mine be

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread John Scanlon
<[EMAIL PROTECTED]> To: Sent: Sunday, November 12, 2006 12:38 AM Subject: Re: Re: [agi] A question on the symbol-system hypothesis So, in the way that you've described this, I totally agree with you. I guess I was attacking a paper tiger that any real thinking person involved in A

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread Ben Goertzel
So, in the way that you've described this, I totally agree with you. I guess I was attacking a paper tiger that any real thinking person involved in AI doesn't bother with anymore. I'm not sure about that ... Cyc seems to be based on the idea that logical manipulation of symbols denoting conce

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread John Scanlon
totally agree with you. I guess I was attacking a paper tiger that any real thinking person involved in AI doesn't bother with anymore. Ben wrote: Subject: Re: [agi] A question on the symbol-system hypothesis My question is: am I wrong that there are still people out there that buy the

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread Ben Goertzel
My question is: am I wrong that there are still people out there that buy the symbol-system hypothesis? including the idea that a system based on the mechanical manipulation of statements in logic, without a foundation of primary intelligence to support it, can produce thought? The problem

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread Matt Mahoney
TECTED]> To: agi@v2.listbox.com Sent: Saturday, November 11, 2006 8:27:52 PM Subject: Re: [agi] A question on the symbol-system hypothesis On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote: > > The major missing piece in the AI puzzle goes between the bottom level of automatic learning

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread YKY (Yan King Yin)
On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote: > > Well yes, if a symbol-system subsumes a Turing machine then intelligence should be implementable in such a system.  But such a system in which we're trying to implement AI is called a computer.  It's at the bottom level.  Symbol-manipulation

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread John Scanlon
Well yes, if a symbol-system subsumes a Turing machine then intelligence should be implementable in such a system.  But such a system in which we're trying to implement AI is called a computer.  It's at the bottom level.  Symbol-manipulation that I'm talking about is at a completely differe
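
On the "symbol-system subsumes a Turing machine" point, a toy string-rewriting interpreter in the spirit of a Markov algorithm makes the bottom-level claim concrete (purely illustrative; rewriting systems of this general form are computationally universal, though this particular rule set only adds unary numbers):

    # Toy Markov-algorithm-style rewriter: repeatedly apply the first
    # matching rule until none applies. Rewriting systems of this general
    # form are Turing-complete; this rule set just does unary addition.
    def rewrite(s, rules):
        while True:
            for pattern, replacement in rules:
                if pattern in s:
                    s = s.replace(pattern, replacement, 1)
                    break
            else:
                return s   # no rule matched: halt

    rules = [("|+|", "||")]          # fuse the '+' with its neighbors
    print(rewrite("|||+||", rules))  # -> ||||| (3 + 2 = 5 in unary)

The point being made in the thread is that this kind of bottom-level universality is the computer itself, not the higher-level symbol manipulation under dispute.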

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread YKY (Yan King Yin)
On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote: > >     The major missing piece in the AI puzzle goes between the bottom level of automatic learning systems like neural nets, genetic algorithms, and the like, and top-level symbol manipulation.  This middle layer is the biggest, most important

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread John Scanlon
Exactly, and this is one reason why real artificial intelligence has been so hard to achieve. But when people refer to thought in this way, they are conflating thought and consciousness. Consciousness in a machine is not my goal (though there is no reason that it isn't possi

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread YKY (Yan King Yin)
On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote: > I get the impression that a lot of people interested in AI still believe that the mental manipulation of symbols is equivalent to thought. As many other people understand now, symbol-manipulation is not thought. Instead, symbols can be

Re: [agi] A question on the symbol-system hypothesis

2006-11-11 Thread Chris Petersen
That magical, undefined 'thought'... On 11/11/06, John Scanlon <[EMAIL PROTECTED]> wrote: I get the impression that a lot of people interested in AI still believe that the mental manipulation of symbols is equivalent to thought. As many other people understand now, symbol-manipulatio

[agi] A question on the symbol-system hypothesis

2006-11-11 Thread John Scanlon
I get the impression that a lot of people interested in AI still believe that the mental manipulation of symbols is equivalent to thought. As many other people understand now, symbol-manipulation is not thought. Instead, symbols can be manipulated by thought to solve various problems t
