--- Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > My point is that when AGI is built, you will have to trust its answers
> based
> > on the correctness of the learning algorithms, and not by examining the
> > internal data or tracing the reasoning.
>
> Agreed...
>
> >I beli
BillK <[EMAIL PROTECTED]> wrote: On 12/4/06, Mark Waser wrote:
>
> Explaining our actions is the reflective part of our minds evaluating the
> reflexive part of our mind. The reflexive part of our minds, though,
> operates analogously to a machine running on compiled code with the
> compilation
On 12/5/06, BillK <[EMAIL PROTECTED]> wrote:
Your reasoning is getting surreal.
You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are
On 12/4/06, Mark Waser wrote:
Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind. The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the con
But I'm not at all sure how important that difference is . . . . With the
brain being a massively parallel system, there isn't necessarily a huge
advantage in "compiling knowledge" (I can come up with both advantages and
disadvantages) and I suspect that there are more than enough surprises that
On the other hand, I think that lack of compilation is going to turn out to
be a *very* severe problem for non-massively parallel systems
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 1:00 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
> Well, of course they can be explained by me -- but the acronym for
> that sort of explanation is "BS"
I take your point with important caveats (that you allude to). Yes, nearly
all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the
achine is (or, in reverse, no explanation = no intelligence).
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 12:17 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
>> We're reach
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 10:45 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Philip Goetz gave an example of an intrusion detection s
We're reaching the point of agreeing to disagree except . . . .
Are you really saying that nearly all of your decisions can't be explained
(by you)?
Well, of course they can be explained by me -- but the acronym for
that sort of explanation is "BS"
One of Nietzsche's many nice quotes is (parap
:-)
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 11:21 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Hi,
The only real case where a human couldn't understand the machine's
Hi,
The only real case where a human couldn't understand the machine's reasoning
in a case like this is where there are so many entangled variables that the
human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a problem
or logic to effectively do statistics,
then you're fine -- but I really don't see it happening. I also am becoming
more and more aware of how much feature extraction and isolation is critical
to my view of AGI.
- Original Message -
From: "Ben Goertzel" <[E
On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Philip Goetz gave an example of an intrusion detection system that learned
> information that was not comprehensible to humans. You argued that he
> could
> have understood it if he tried harder.
No, I gave five separate alternatives most
Matt Mahoney wrote:
My point is that when AGI is built, you will have to trust its answers based
on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.
Agreed...
I believe this is the fundamental
flaw of all AI systems based on structu
mited humans have run out of capacity -- not
the complete change in understanding that you see between us and the lower
animals).
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Thursday, November 30, 2006 9:30 AM
Subject: Re: Re: [agi] A question on
Would you argue that any of your examples produce good results that are
not comprehensible by humans? I know that you sometimes will argue that the
systems can find patterns that are both the real-world simplest explanation
and still too complex for a human to understand -- but I don't believ
On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> I defy you to show me *any* black-box method that has predictive power
> outside the bounds of its training set. All that the black-box methods are
> doing is curve-fitting. If you give t
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set. All that the black-box methods are
doing is curve-fitting. If you give them enough variables they can brute
force solutions through
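The curve-fitting claim above is easy to illustrate. Below is a minimal sketch (the dataset and model choice are my own, not from the thread): a high-degree polynomial fit to sin(x) reproduces the training range well, yet diverges wildly just outside it — predictive power does not extend beyond the training set.

```python
# Sketch of "black-box methods are just curve-fitting": a degree-9
# polynomial fit to sin(x) on [0, pi] is accurate inside that range
# but grossly wrong outside it.
import numpy as np

x_train = np.linspace(0, np.pi, 50)
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=9)   # the "black box"

# Error at a point inside the training range vs. far outside it.
inside = abs(np.polyval(coeffs, np.pi / 2) - np.sin(np.pi / 2))
outside = abs(np.polyval(coeffs, 3 * np.pi) - np.sin(3 * np.pi))

print(inside < 1e-3)    # True: accurate where it was trained
print(outside > 1.0)    # True: useless where it was not
```

The fit "knows" nothing about periodicity; it has merely bent a polynomial through the sample points, which is the thread's point about extrapolation.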
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 2:13 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
AI is about solving problems that you can't solve yourself. You can program
a computer to beat you at chess. You understand the s
ns is that you can't see inside it, it only
seems like an invitation to disaster to me. So why is it a better design?
All that I see here is something akin to "I don't understand it so it must
be good".
- Original Message -
From: "Philip Goetz" <[EMA
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:25:33 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
> A human doesn't have enough time to look th
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> A human doesn't have enough time to look through millions of pieces of
> data, and doesn't have enough memory to retain them all in memory, and
> certainly doesn't have the time or the memory to examine all of the
> 10^(insert large number here
e problem --
though you may be able to solve it -- and validating your answers and
placing intelligent/rational boundaries/caveats on them is not possible.
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 1:14 PM
Sub
On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.
Pragmatically possible obscures the point I was trying to make with
Matt.
On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
Matt Mahoney wrote:
>> Models that are simple enough to debug are too simple to scale.
>> The contents of a knowledge base for AGI will be beyond our ability to
comprehend.
Given sufficient time, anything should be able to be understood an
It would be an interesting and appropriate development, of course,...
Just as in humans, for instance, the goal of "getting laid" sometimes
generates the subgoal of "talking to others" ... it seems indirect at
first, but can be remarkably effective ;=)
ben
On 11/23/06, Mike Dougherty <[EMAIL PR
On 11/22/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Well, in the language I normally use to discuss AI planning, this
would mean that
1)keeping charged is a supergoal
2)The system knows (via hard-coding or learning) that
finding the recharging socket ==> keeping charged
If "charged" become
Well, in the language I normally use to discuss AI planning, this
would mean that
1)keeping charged is a supergoal
2)
The system knows (via hard-coding or learning) that
finding the recharging socket ==> keeping charged
(i.e. that the former may be considered a subgoal of the latter)
3)
The s
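Ben's supergoal/subgoal formulation can be sketched in a few lines. This is a hedged toy model, not Novamente's actual planning machinery; the `Goal` class and its `plan` method are invented for illustration of the relation "finding the recharging socket ==> keeping charged".

```python
# Minimal sketch of the supergoal/subgoal relation described above.
# A goal with no subgoals is treated as a directly executable action.

class Goal:
    def __init__(self, name):
        self.name = name
        self.subgoals = []   # goals believed (hard-coded or learned) to achieve this one

    def add_subgoal(self, goal):
        self.subgoals.append(goal)

    def plan(self):
        """Depth-first expansion: leaf subgoals become concrete steps."""
        if not self.subgoals:
            return [self.name]
        steps = []
        for sub in self.subgoals:
            steps.extend(sub.plan())
        return steps

keep_charged = Goal("keep charged")                  # 1) the supergoal
find_socket = Goal("find the recharging socket")     # 2) its learned subgoal
keep_charged.add_subgoal(find_socket)

print(keep_charged.plan())  # ['find the recharging socket']
```

The indirectness Ben mentions falls out naturally: pursuing the supergoal surfaces only the subgoal's concrete step, which on its own looks unrelated to charging.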
e can't do
lossless compression).
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Thursday, November 16, 2006 3:15 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
"Rings" and "Models" are appropriated terms
"Rings" and "Models" are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
IMO these analogies are not fair.
The mathematical notion of
't understand what I'm doing when I'm programming when he's watching me in real time. Everything is easily explainable given sufficient time . . . .
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 11:03 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Does it generate any kind of overview reasoning of why it does something? If in the VR you tell the bot to go pick up something, and it hides in the corner instead, does it have any kind of useful feedback or 'insight' into its thoughts?
I intend to have different levels of thought processes and reas
No, presumably you would have the ability to take a snapshot of what it's doing, or, as it's doing it, it should be able to explain what it is doing.
James Ratcliff
BillK <[EMAIL PROTECTED]> wrote: On 11/14/06, James Ratcliff wrote:
> If the "contents of a knowledge base for AGI will be beyond our ability
Even now, with a relatively primitive system like the current
Novamente, it is not pragmatically possible to understand why the
system does each thing it does.
It is possible in principle, but even given the probabilistic logic
semantics of the system's knowledge it's not pragmatic, because
somet
If the "contents of a knowledge base for AGI will be beyond our ability to comprehend" then it is probably not human-level AGI; it is something entirely new, and it will be alien and completely foreign and unable to interact with us at all, correct? If you mean it will have more knowledge than we
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Monday, November 13, 2006 10:22 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
James Ratcliff <[EMAIL PROTECTED]> wrote:
> Well, words and language based ideas/terms adequately describe much of the up
Hi,
I would also argue that a large number of weak pieces of evidence also
means that Novamente does not *understand* the domain that it is making a
judgment in. It is merely totaling up the weight of evidence.
I would say that intuition often consists, internally, in large part,
of summing up
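"Totaling up the weight of evidence" has a standard formal reading: summing independent weak cues in log-odds space, naive-Bayes style. The sketch below uses invented cue values purely for illustration; it is not Novamente's mechanism, just the generic technique being alluded to.

```python
# Many individually weak pieces of evidence can yield a confident
# judgment when their log-likelihood ratios are summed -- with no
# "understanding" of the domain required.
import math

def posterior(prior, log_likelihood_ratios):
    """Combine a prior with independent weak cues in log-odds space."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(log_likelihood_ratios)
    return 1 / (1 + math.exp(-log_odds))

weak_cues = [0.3] * 10          # ten weak, individually unconvincing cues
p = posterior(0.5, weak_cues)   # neutral prior of 0.5

print(round(p, 3))  # → 0.953
```

Each cue alone shifts the odds only slightly (0.3 in log-odds is about 57/43), yet ten of them together push the posterior past 95% — which is exactly the "intuition as summation" point Ben makes.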
On 11/14/06, James Ratcliff wrote:
If the "contents of a knowledge base for AGI will be beyond our ability to
comprehend" then it is probably not human level AGI, it is something
entirely new, and it will be alien and completely foreign and unable to
interact with us at all, correct?
If you me
me *one* counter-example to
the above . . . .
- Original Message -
From:
Matt
Mahoney
To: agi@v2.listbox.com
Sent: Monday, November 13, 2006 10:22
PM
Subject: Re: Re: [agi] A question on the
symbol-system hypothesis
James
Ratcliff <[EMAIL PROT
e . . . .
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 11:03 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Even now, with a relatively primitive system like the current
Novamente, it is not pragm
James Ratcliff <[EMAIL PROTECTED]> wrote:
> Well, words and language based ideas/terms adequately describe much of the upper levels of human interaction and seem appropriate in that case.
>
> It fails of course when it devolves down to the physical level, i.e. vision or motor cortex skills, but other than
Well, words and language based ideas/terms adequately describe much of the upper levels of human interaction and seem appropriate in that case. It fails of course when it devolves down to the physical level, i.e. vision or motor cortex skills, but other than that, using language internally would seem nat
<[EMAIL PROTECTED]>
To:
Sent: Sunday, November 12, 2006 12:38 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
So, in the way that you've described this, I totally agree with you. I
guess I was attacking a paper tiger that any real thinking person
involved
in A
So, in the way that you've described this, I totally agree with you. I
guess I was attacking a paper tiger that any real thinking person involved
in AI doesn't bother with anymore.
I'm not sure about that ... Cyc seems to be based on the idea that
logical manipulation of symbols denoting conce