On 11/14/06, James Ratcliff wrote:
If the contents of a knowledge base for AGI will be beyond our ability to
comprehend, then it is probably not human-level AGI; it is something
entirely new, and it will be alien and completely foreign and unable to
interact with us at all, correct?
If you mean it will have more knowledge than we
Richard Loosemore
As for your suggestion about the problem being centered on the use of
model-theoretic semantics, I have a couple of remarks.
One is that YES, this is a crucial issue, and I am so glad to see you
mention it. I am going to have to read your paper and discuss with you
Hi,
I would also argue that a large number of weak pieces of evidence also
means that Novamente does not *understand* the domain in which it is making a
judgment. It is merely totaling up the weight of evidence.
I would say that intuition often consists, internally, in large part,
of summing
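The "totaling up weight of evidence" picture of judgment can be sketched as naive log-odds accumulation over many weak, independent signals. This is a simplified stand-in for illustration only, not Novamente's actual mechanism:

```python
import math

def combine_evidence(prior_prob, likelihood_ratios):
    """Combine many weak, independent pieces of evidence by summing
    log-likelihood ratios onto the prior's log-odds (naive-Bayes style)."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)  # each weak piece nudges the total slightly
    return 1 / (1 + math.exp(-log_odds))  # convert back to a probability

# Twenty weak pieces of evidence, each only mildly favoring the hypothesis,
# sum to a confident judgment -- without any model of *why* it is true.
posterior = combine_evidence(0.5, [1.2] * 20)
```

This is exactly the sense in which such a judgment is not "understanding": the conclusion can be strong while each contributing piece remains individually uninformative.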
No, presumably you would have the ability to take a snapshot of what it's doing, or, as it's doing it, it should be able to explain what it is doing.
James Ratcliff
BillK [EMAIL PROTECTED] wrote:
On 11/14/06, James Ratcliff wrote:
If the "contents of a knowledge base for AGI will be beyond our ability to
Does it generate any kind of overview reasoning of why it does something? If in the VR you tell the bot to go pick up something, and it hides in the corner instead, does it have any kind of useful feedback or 'insight' into its thoughts? I intend to have different levels of thought processes and
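The "useful feedback into its thoughts" idea can be sketched as an action selector that records a human-readable justification alongside each decision, so the bot can answer "why did you hide?" A toy illustration; all names here are invented, not from any system discussed on the list:

```python
def choose_action(goal, percepts, trace):
    """Pick an action and append a plain-language justification to `trace`,
    so the agent can later 'explain what it is doing'."""
    if "threat-nearby" in percepts:
        trace.append(f"goal={goal!r}, but a threat is nearby: hiding instead")
        return "hide-in-corner"
    trace.append(f"goal={goal!r} and the path is clear: executing it directly")
    return goal

trace = []
action = choose_action("pick-up-object", {"threat-nearby"}, trace)
# `trace` now holds the overview reasoning behind the surprising behavior.
```

The point is architectural rather than algorithmic: explanations are cheap if the justification is recorded at decision time, and nearly impossible if it must be reconstructed afterward from opaque internal state.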
time. Everything is easily explainable given sufficient time . . . .
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Tuesday, November 14, 2006 11:03 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Even
I will try to answer several posts here. I said that the knowledge base of an
AGI must be opaque because it has 10^9 bits of information, which is more than
a person can comprehend. By opaque, I mean that you can't do any better by
examining or modifying the internal representation than you
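A back-of-envelope check makes the 10^9-bit claim concrete. The reading speed and bits-per-word figures below are rough assumptions for illustration, not numbers from the original post:

```python
BITS = 10**9
BITS_PER_WORD = 10       # rough information content of an English word (assumption)
WORDS_PER_MINUTE = 300   # brisk adult reading speed (assumption)

words = BITS / BITS_PER_WORD            # ~10^8 words of content
minutes = words / WORDS_PER_MINUTE
days_nonstop = minutes / (60 * 24)
print(round(days_nonstop))              # roughly 231 days of nonstop reading
```

Even under generous assumptions, merely reading the knowledge base once would take the better part of a year without sleep, which is the sense in which it is beyond a person's ability to comprehend by inspection.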
advanced ideas to share thoughts with.
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 12, 2006 4:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
John,
The problem is that your phrases below have been
Pei Wang wrote:
On 11/13/06, Richard Loosemore [EMAIL PROTECTED] wrote:
But
Now you have me really confused, because Searle's attack would have
targeted your approach, my approach, and Ben's approach equally: none
of us has moved on from the position he was attacking!
The situation is
Well, words and language-based ideas/terms adequately describe much of the upper levels of human interaction and seem appropriate in that case. It fails, of course, when it devolves down to the physical level, i.e. vision or motor-cortex skills, but other than that, using language internally would seem
Richard,
It is a complicated topic, but I don't have the time to write long
emails at the moment (that is why I didn't jump into the discussion
until I saw your email). Instead, I'm going to send you two papers of
mine in a separate email. One of the two is co-authored with
Hofstadter, so you
James Ratcliff [EMAIL PROTECTED] wrote:
Well, words and language-based ideas/terms adequately describe much of the upper levels of human interaction and seem appropriate in that case. It fails, of course, when it devolves down to the physical level, i.e. vision or motor-cortex skills, but other than that,
John,
The problem is that your phrases below have been used by people I
completely disagree with (John Searle) and also by people I completely
agree with (Doug Hofstadter); in different contexts, they mean
totally different things.
I am not quite sure how it bears on the quote of mine
I get the impression that a lot
of people interested in AI still believe that the mental manipulation of
symbols is equivalent to thought. As many other people understand now,
symbol-manipulation is not thought. Instead, symbols can be manipulated by
thought to solve various problems that
That magical, undefined 'thought'...

On 11/11/06, John Scanlon [EMAIL PROTECTED] wrote:
I get the impression that a lot
of people interested in AI still believe that the mental manipulation of
symbols is equivalent to thought. As many other people understand now,
symbol-manipulation is
On 11/12/06, John Scanlon [EMAIL PROTECTED] wrote:
I get the impression that a lot of people interested in AI still believe that the mental manipulation of symbols is equivalent to thought. As many other people understand now, symbol-manipulation is not thought. Instead, symbols can be
Exactly, and this is one
reason why real artificial intelligence has been so hard to achieve. But
when people refer to thought in this way, they are conflating thought and
consciousness. Consciousness in a machine is not my goal (though there
is no reason that it isn't
On 11/12/06, John Scanlon [EMAIL PROTECTED] wrote:
The major missing piece in the AI puzzle goes between the bottom level of automatic learning systems like neural nets, genetic algorithms, and the like, and top-level symbol manipulation. This middle layer is the biggest, most important piece,
My question is: am I wrong that there are still people out there who
buy the symbol-system hypothesis, including the idea that a system based on
the mechanical manipulation of statements in logic, without a foundation of
primary intelligence to support it, can produce thought?
The
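"Mechanical manipulation of statements in logic," in its simplest form, is forward chaining over propositional rules. The toy sketch below shows what is at issue; the facts and rules are invented for illustration and taken from none of the systems discussed here:

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: whenever all premises of a rule are
    known facts, add its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("bird", "not-penguin"), "can-fly"),
    (("can-fly",), "can-reach-roof"),
]
derived = forward_chain({"bird", "not-penguin"}, rules)
```

The symbol-system hypothesis, as stated above, is the claim that thought is (or can be produced by) essentially this kind of mechanical derivation scaled up; the objection in the thread is that without grounding, the symbols "bird" and "can-fly" mean nothing to the machine doing the manipulating.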
Ben wrote:
Subject: Re: [agi] A question on the symbol-system hypothesis
My question is: am I wrong that there are still people out there who
buy the symbol-system hypothesis, including the idea
So, in the way that you've described this, I totally agree with you. I
guess I was attacking a paper tiger that any real thinking person involved
in AI doesn't bother with anymore.
I'm not sure about that ... Cyc seems to be based on the idea that
logical manipulation of symbols denoting
To: agi@v2.listbox.com
Sent: Sunday, November 12, 2006 12:38 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
So, in the way that you've described this, I totally agree with you. I
guess I was attacking a paper tiger that any real thinking person
involved
in AI doesn't bother with anymore