Galleria 2, 6928 Manno, Switzerland.
http://www.vetta.org/documents/IDSIA-12-06-1.pdf
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 9:57:40 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Furthermore, we learned in class recently about a case where a person was
literally born with only half a brain; I don't have that story, but here is one:
http://abcnews.go.com/2020/Health/story?id=1951748&page=1
I think all the talk about hard numbers is really off base, unfortunately, and AI
shouldn't
if that
article is all that you've seen on the topic (though one would have hoped that
an integrity check or a reality check would have prompted further evaluation --
particularly since the article itself mentions that that would require an
unreasonably/impossibly large amount of RAM.)
- Original Message -
From: Richard Loosemore <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 4:09:23 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Matt Mahoney wrote:
> Richard, what is your definition of "understanding"? How would you test
o or sound.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 3:48:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
>> The connection between intelligence
Mark Waser wrote:
>Are you conceding that you can predict the results of a Google
search?
OK, you are right. You can type the same query twice. Or if you live long
enough you can do it the hard way. But you won't.
>Are you now conceding that it is not true that "Models that are simple eno
Matt Mahoney wrote:
Richard, what is your definition of "understanding"? How would you test
whether a person understands art?
Turing offered a behavioral test for intelligence. My understanding of
"understanding" is that it is something that requires intelligence. The
connection between in
interesting but unobtainable edge case, why do you believe that Hutter has
any relevance at all?
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 15, 2006 2:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
and I know of no reasons why opacity is required for intelligence.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 15, 2006 2:24 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Sorry if I did not make clear the
Sent: Wednesday, November 15, 2006 2:38:49 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Matt Mahoney wrote:
> Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> "Understanding" 10^9 bits of information is not the same as storing 10^9
>> bits of information.
>
> T
former but not the latter, would you care to attempt to offer
another counter-example or would you prefer to retract your initial statements?
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 15, 2006 2:24 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Shannon, C. E. (1951), "Prediction and Entropy of Printed English", Bell
Sys. Tech. J. (30), pp. 50-64.
Standing, L. (1973), "Learning 10,000 Pictures",
Quarterly Journal of Experimental Psychology (25), pp. 207-222.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Richard Loosemore <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, Novem
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser wrote:
>> Given sufficient time, anything should be able to be understood and
>> debugged.
>> Give me *one* counter-example to the above . . . .
Matt Mahoney replied:
>
- Original Message -
From: Richard Loosemore <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:33:04 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Matt Mahoney wrote:
> I will try to answer several posts here. I said that the knowledge
> base
Similarly,
an AGI doesn't need to store 100% of the information that it uses. It
simply needs to know where to find it upon need and how to use it.
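A minimal sketch of that "know where to find it" idea, in Python (the class and the example source are my own illustration, not any real AGI component):

import urllib.request

class ReferenceStore:
    # Keeps only *where* knowledge lives; fetches and caches on demand.
    def __init__(self):
        self.sources = {}  # topic -> URL of an authoritative source
        self.cache = {}    # topic -> content fetched so far

    def register(self, topic, url):
        self.sources[topic] = url

    def lookup(self, topic):
        if topic not in self.cache:
            with urllib.request.urlopen(self.sources[topic]) as r:
                self.cache[topic] = r.read()  # fetched only on first use
        return self.cache[topic]

store = ReferenceStore()
store.register("half-brain case", "http://abcnews.go.com/2020/Health/story?id=1951748&page=1")
# store.lookup("half-brain case") would retrieve the article only when asked.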
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 10:34 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
but
many complex and immense systems are easily bounded.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 10:34 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
I will try to answer several posts here.
Matt Mahoney wrote:
I will try to answer several posts here. I said that the knowledge base of an
AGI must be opaque because it has 10^9 bits of information, which is more than
a person can comprehend. By opaque, I mean that you can't do any better by
examining or modifying the internal representation than you co
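For scale, a back-of-the-envelope reading of the 10^9-bit figure (the inspection rate below is a rough assumption of mine, not a number from this thread):

KB_BITS = 10**9      # claimed size of the AGI knowledge base
BITS_PER_SEC = 10    # assumed rate at which a person can inspect it
years = KB_BITS / BITS_PER_SEC / (3600 * 24 * 365)
print(f"~{years:.1f} years just to read it once")  # about 3.2 years

Even at a generous inspection rate, a single pass takes years, which is the sense in which the representation is opaque in practice.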
't understand what I'm doing when I'm programming when he's watching me in
real time. Everything is easily explainable given sufficient time . . . .
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 11:03 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Does it generate any kind of overview reasoning of why it does something?
If in the VR you tell the bot to go pick up something, and it hides in the
corner instead, does it have any kind of useful feedback or 'insight' into
its thoughts? I intend to have different levels of thought processes and reasoning
No, presumably you would have the ability to take a snapshot of what it's
doing, or, as it's doing it, it should be able to explain what it is doing.
James Ratcliff
BillK <[EMAIL PROTECTED]> wrote:
On 11/14/06, James Ratcliff wrote:
> If the "contents of a knowledge base for AGI will be beyond our ability
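A toy sketch of that snapshot/explain facility, assuming the agent simply records a reason alongside each action it takes (all names here are hypothetical):

from dataclasses import dataclass, field

@dataclass
class TracedAgent:
    # Keeps a human-readable justification for every action taken.
    trace: list = field(default_factory=list)

    def act(self, action, reason):
        self.trace.append((action, reason))
        return action

    def explain(self):
        return "\n".join(f"{a}: because {r}" for a, r in self.trace)

bot = TracedAgent()
bot.act("hide in corner", "threat estimate for 'pick up object' exceeded threshold")
print(bot.explain())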
Even now, with a relatively primitive system like the current
Novamente, it is not pragmatically possible to understand why the
system does each thing it does.
It is possible in principle, but even given the probabilistic logic
semantics of the system's knowledge it's not pragmatic, because
somet
If the "contents of a knowledge base for AGI will be beyond our ability to comprehend" then it is probably not human level AGI, it is something entirely new, and it will be alien and completely foriegn and unable to interact with us at all, correct? If you mean it will have more knowledge than we
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Monday, November 13, 2006 10:22 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
James Ratcliff <[EMAIL PROTECTED]> wrote:
> Well, words and language based ideas/terms adequately describe much of the up
Hi,
I would also argue that a large number of weak pieces of evidence also
means that Novamente does not *understand* the domain that it is making a
judgment in. It is merely totaling up weight of evidence.
I would say that intuition often consists, internally, in large part,
of summing up
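A minimal sketch of what "totaling up weight of evidence" can look like, assuming many independent weak cues combined as summed log-odds (naive-Bayes style; the numbers are invented):

import math

def combine(prior, likelihood_ratios):
    # Sum log-odds contributions from many weak, independent cues.
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(math.log(lr) for lr in likelihood_ratios)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Twenty cues, each only weakly favoring the judgment (ratio 1.2):
print(combine(0.5, [1.2] * 20))  # ~0.97: a confident verdict, no single strong reason

The judgment comes out strong even though no individual cue would justify it, which is exactly why the result feels like intuition rather than explainable understanding.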
Richard Loosemore
> As for your suggestion about the problem being centered on the use of
> model-theoretic semantics, I have a couple of remarks.
>
> One is that YES! this is a crucial issue, and I am so glad to see you
> mention it. I am going to have to read your paper and discuss with you
On 11/14/06, James Ratcliff wrote:
If the "contents of a knowledge base for AGI will be beyond our ability to
comprehend" then it is probably not human level AGI, it is something
entirely new, and it will be alien and completely foriegn and unable to
interact with us at all, correct?
If you me
Give me *one* counter-example to the above . . . .
- Original Message -
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Monday, November 13, 2006 10:22 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
James Ratcliff <[EMAIL PROTECTED]>
given sufficient time . . . .
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 11:03 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Even now, with a relatively primitive system like the current
Novamente, it is not pragmatically possible to understand why the
system does each thing it does.
James Ratcliff <[EMAIL PROTECTED]> wrote:
> Well, words and language based ideas/terms adequately describe much of the upper levels of human interaction and seem
> appropriate in that case.
>
> It fails of course when it devolves down to the physical level, i.e. vision or motor cortex skills, but other than
Richard,
It is a complicated topic, but I don't have the time to write long
emails at the moment (that is why I didn't jump into the discussion
until I saw your email). Instead, I'm going to send you two papers of
mine in a separate email. One of the two is co-authored with
Hofstadter, so you pro
Well, words and language based ideas/terms adequately describe much of the upper levels of human interaction and seem appropriate in that case. It fails of course when it devolves down to the physical level, i.e. vision or motor cortex skills, but other than that, using language internally would seem natural
Pei Wang wrote:
On 11/13/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
But
Now you have me really confused, because Searle's attack would have
targeted your approach, my approach, and Ben's approach equally: none
of us have moved on from the position he was attacking!
The situation i
before I get that far). It's good to hear that researchers now
> have moved beyond the simplistic GOFAI symbol-as-intelligence idea --
> more people with more advanced ideas to share thoughts with.
>
> - Original Message - From: "Richard Loosemore" <[EMAIL PROTECTED]>
m: "Richard Loosemore" <[EMAIL PROTECTED]>
To:
Sent: Sunday, November 12, 2006 4:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
John,
The problem is that your phrases below have been used by people I
completely disagree with (John Searle) and also by p
-- more people with more
advanced ideas to share thoughts with.
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To:
Sent: Sunday, November 12, 2006 4:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
John,
The problem is that your
John,
The problem is that your phrases below have been used by people I
completely disagree with (John Searle) and also by people I completely
agree with (Doug Hofstadter); in different contexts, they mean
totally different things.
I am not quite sure how it bears on the quote of mine below
<[EMAIL PROTECTED]>
To:
Sent: Sunday, November 12, 2006 12:38 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
So, in the way that you've described this, I totally agree with you. I
guess I was attacking a paper tiger that any real thinking person involved
in AI doesn't bother with anymore.
I'm not sure about that ... Cyc seems to be based on the idea that
logical manipulation of symbols denoting concepts
Ben wrote:
Subject: Re: [agi] A question on the symbol-system hypothesis
My question is: am I wrong that there are still people out there that
buy the symbol-system hypothesis? Including the idea that a system based on
the mechanical manipulation of statements in logic, without a foundation of
primary intelligence to support it, can produce thought?
The problem
[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, November 11, 2006 8:27:52 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote:
> The major missing piece in the AI puzzle goes between the bottom level of automatic learning
On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote:
> Well yes, if a symbol-system subsumes a Turing machine then intelligence should be implementable in such a system. But such a system in which we're trying to implement AI is called a computer. It's at the bottom level. Symbol-manipulation
Well yes, if a symbol-system
subsumes a Turing machine then intelligence should be implementable in such a
system. But such a system in which we're trying to implement AI is
called a computer. It's at the bottom level. Symbol-manipulation
that I'm talking about is at a completely different
On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote:
> The major missing piece in the AI puzzle goes between the bottom level of automatic learning systems like neural nets, genetic algorithms, and the like, and top-level symbol manipulation. This middle layer is the biggest, most important
Exactly, and this is one
reason why real artificial intelligence has been so hard to achieve. But
when people refer to thought in this way, they are conflating thought and
consciousness. Consciousness in a machine is not my goal (though there
is no reason that it isn't possible
On 11/12/06, John Scanlon <[EMAIL PROTECTED]> wrote:
> I get the impression that a lot of people interested in AI still believe that the mental manipulation of symbols is equivalent to thought. As many other people understand now, symbol-manipulation is not thought. Instead, symbols can be
That magical, undefined 'thought'...
On 11/11/06, John Scanlon <[EMAIL PROTECTED]> wrote:
I get the impression that a lot
of people interested in AI still believe that the mental manipulation of
symbols is equivalent to thought. As many other people understand now,
symbol-manipulation is not thought. Instead, symbols can be manipulated by
thought to solve various problems t