On 12/5/06, BillK <[EMAIL PROTECTED]> wrote:
Your reasoning is getting surreal.
You seem to have a real difficulty in admitting that humans behave
irrationally a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are …
On 12/4/06, Mark Waser wrote:
Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind. The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the con…
But I'm not at all sure how important that difference is . . . . With the
brain being a massively parallel system, there isn't necessarily a huge
advantage in "compiling knowledge" (I can come up with both advantages and
disadvantages) and I suspect that there are more than enough surprises that …
On the other hand, I think that lack of compilation is going to turn out to
be a *very* severe problem for non-massively parallel systems.
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 1:00 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
> Well, of course they can be explained by me -- but the acronym for
> that sort of explanation is "BS"
I take your point with important caveats (that you allude to). Yes, nearly
all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the … machine is (or, in
reverse, no explanation = no intelligence).
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 12:17 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
We're reaching the point of agreeing to disagree except . . . .
Are you really saying that nearly all of your decisions can't be explained
(by you)?
Well, of course they can be explained by me -- but the acronym for
that sort of explanation is "BS"
One of Nietzsche's many nice quotes is (paraphrased) …
:-)
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 11:21 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Hi,
The only real case where a human couldn't understand the machine's reasoning
in a case like this is where there are so many entangled variables that the
human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a problem.
On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set. All that the black-box methods are
doing is curve-fitting. If you give them enough variables they can brute
force solutions through …
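Mark's curve-fitting claim is easy to demonstrate with a toy sketch (my illustration, not from the thread): a flexible fitted model tracks its training range closely yet diverges wildly just outside it.

```python
import numpy as np

# Toy illustration of the "black-box methods are curve-fitting" point:
# a high-degree polynomial fit tracks sin(x) well inside its training
# bounds but has no predictive power outside them.
x_train = np.linspace(0.0, 2 * np.pi, 50)
y_train = np.sin(x_train)

model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

inside = abs(model(np.pi / 3) - np.sin(np.pi / 3))   # inside training bounds
outside = abs(model(4 * np.pi) - np.sin(4 * np.pi))  # far outside them

print(f"error inside bounds:  {inside:.5f}")
print(f"error outside bounds: {outside:.1f}")
```

Inspecting the ten fitted coefficients gives no explanation of why extrapolation fails; the "model" is nothing but the curve.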
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 2:13 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
AI is about solving problems that you can't solve yourself. You can program
a computer to beat you at chess. You understand the s…
… is that you can't see inside it; it only
seems like an invitation to disaster to me. So why is it a better design?
All that I see here is something akin to "I don't understand it so it must
be good".
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:25:33 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> A human doesn't have enough time to look through millions of pieces of
> data, and doesn't have enough memory to retain them all in memory, and
> certainly doesn't have the time or the memory to examine all of the
> 10^(insert large number here) …
…the problem --
though you may be able to solve it -- and validating your answers and
placing intelligent/rational boundaries/caveats on them is not possible.
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 1:14 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.
"Pragmatically possible" obscures the point I was trying to make with
Matt.
It would be an interesting and appropriate development, of course,...
Just as in humans, for instance, the goal of "getting laid" sometimes
generates the subgoal of "talking to others" ... it seems indirect at
first, but can be remarkably effective ;=)
ben
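Ben's goal-to-subgoal example can be sketched as a tiny backward-chaining expansion (my construction, not Novamente's actual goal machinery; the `helps` links are invented for illustration):

```python
# Each entry reads "these subgoals help achieve the goal"; recursive
# expansion surfaces indirect-looking subgoals like "go_to_party".
helps = {
    "get_laid": ["talk_to_others", "improve_appearance"],
    "talk_to_others": ["go_to_party"],
}

def subgoals(goal, depth=0):
    """Expand a goal into (depth, subgoal) pairs, depth-first."""
    expansion = []
    for sub in helps.get(goal, []):
        expansion.append((depth + 1, sub))
        expansion.extend(subgoals(sub, depth + 1))
    return expansion

for depth, g in subgoals("get_laid"):
    print("  " * depth + g)
```

The point of the joke survives the sketch: a subgoal two links deep looks unrelated to the top-level goal unless you can trace the chain.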
On 11/23/06, Mike Dougherty <[EMAIL PROTECTED]> wrote:
…'t understand what I'm doing when I'm programming when he's watching me in
real time. Everything is easily explainable given sufficient time . . . .
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, November 14, 2006 11:03 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
Does it generate any kind of overview reasoning of why it does something?
If in the VR you tell the bot to go pick up something, and it hides in the
corner instead, does it have any kind of useful feedback or 'insight' into
its thoughts? I intend to have different levels of thought processes and
reas…
Even now, with a relatively primitive system like the current
Novamente, it is not pragmatically possible to understand why the
system does each thing it does.
It is possible in principle, but even given the probabilistic logic
semantics of the system's knowledge it's not pragmatic, because somet…
Hi,
I would also argue that a large number of weak pieces of evidence also
means that Novamente does not *understand* the domain that it is making a
judgment in. It is merely totaling up the weight of evidence.
I would say that intuition often consists, internally, in large part,
of summing up …
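Ben's "summing up" picture of intuition matches how weak evidence combines additively in log-odds (a naive-Bayes-style sketch of my own, not Novamente's actual inference): each cue is nearly useless alone, yet forty of them yield near-certainty, with no single cue to point to as "the reason".

```python
import math

prior_log_odds = 0.0   # start at even odds
cue = math.log(1.2)    # one weak cue: likelihood ratio of only 1.2

single_p = 1.0 / (1.0 + math.exp(-(prior_log_odds + cue)))
many_p = 1.0 / (1.0 + math.exp(-(prior_log_odds + 40 * cue)))

print(f"one weak cue:    P = {single_p:.3f}")   # barely above chance
print(f"forty weak cues: P = {many_p:.4f}")     # a near-certain judgment
```

This is exactly why such a judgment resists explanation: the honest answer to "why?" is a list of forty tiny nudges, none of which is the reason.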