Bob Mottram posted on agi2:
> An interesting case of a woman who never forgets. She describes her
> memories as a continuously running movie, which she can't turn off.
> http://www.onpointradio.org/shows/2008/05/20080520_b_main.asp
Perhaps we all have this kind of memory, but most of the time we only have
limited or no conscious access...
John: When you describe this you have to be careful how much computation
your mind is doing and taking for granted. You make many assumptions just
by looking at the pic and saying these are signs that this man is
conscious. And saying that a handheld TV is some sort of model, ya, that's
making massive...
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
>
> You utterly refused to answer my question re: what is your model? It's
> not a hard question to start answering - i.e. either you do have some
> kind of model or you don't. You simply avoided it. Again.

I have some models that I feel confident...
----- Original Message -----
From: Tudor Boloni <[EMAIL PROTECTED]>

Jim, we will eventually stumble upon this conceptual complexity, namely a
few algorithms that exceed the results that human intelligence achieves
(the algorithms created through slow evolution and relatively fast
learning).
--
Well, I probably do not understand exactly what you meant in your previous
statements. But I do not believe that a method of study that only examines
computational structures, no matter how objective, is going to succeed in
producing higher general intelligence without also comparing them to...
...we would have a smarter machine that exhibits advanced intelligence in
many ways...
John,
I'm going to stop here (unless you want to continue) - and not hound you :).
But I would like you to see something -
you utterly refused to answer my question re: what is your model? It's not a
hard question to start answering - i.e. either you do have some kind of
model or you don't. You simply avoided it. Again.
--- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > I don't believe you are conscious. I believe you
> > are a zombie. Prove me wrong.
>
> I am a zombie. Prove to me that I am not. Otherwise I will
> accuse you of being conscious.
--- On Sat, 5/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

I wrote:
> > What internal properties of a Turing machine
> > distinguish one that has subjective experiences from an
> > equivalent machine (implementing the same function) that
> > only pretends to have subjective experience?
>
> Y...
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
>
> That's correct. The model of consciousness should be the self [brain-body]
> watching and physically interacting with the movie [that is in a sense an
> "open movie" - rather than on a closed screen - projected all over the
> world outside, an...
John: A movie implies someone or something watching it. Too simplistic. A
rock is getting the world movie played upon it ad infinitum.
On Sun, Jun 1, 2008 at 2:15 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sat, 5/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> But in future, there could be impostor agents that act like
>> they have humanlike subjective experience but don't ... and we
>> could uncover them by analyzing their internals...
--- On Sat, 5/31/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> But in future, there could be impostor agents that act like
> they have humanlike subjective experience but don't ... and we
> could uncover them by analyzing their internals...

What internal properties of a Turing machine distinguish one that has
subjective experiences from an equivalent machine (implementing the same
function) that only pretends to have subjective experience?
Ben Goertzel:
If by "conscious" you mean "having a humanlike subjective experience",
I suppose that in future we will infer this about intelligent agents
via a combination of observation of their behavior, and inspection of
their internal construction and dynamics.
As right now the only intelligent agents that...
--- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> If something is pretending, at first it may dupe others into thinking
> that it is conscious. But as time goes on and other conscious
> agents detect and suspect an imposter, their behavior will change
> towards it and the resultant behavior...
Jim, these are good points, and they seem to say that even with the
perfect metric for intelligence discovered (let's pretend), and a maximally
intelligent program built (keep pretending), without a value system in
place that selects among future possible actions or internal
tests/experiments...
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
>
> No, I believe I'm right here. Maths is only quantification - the question
> is: what are you quantifying? Programs are only recipes to construct
> something or a sequence of behaviour. The question again is: what are you
> constructing?

Math...
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> --- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
>
> > People believe they are conscious. Why? Because they are.
>
> No, because people that didn't believe it did not pass on their genes.

Also, people that didn't believe that they ha...
Suppose that an advocate of behaviorism and reinforcement was able to make
a successful general AI program that was clearly far in advance of any
other effort. At first I might argue that his advocacy of behaviorism and
reinforcement was only an eccentricity, that his program must be coded
with s...
--- On Sat, 5/31/08, John G. Rose <[EMAIL PROTECTED]> wrote:
> People believe they are conscious. Why? Because they are.
No, because people that didn't believe it did not pass on their genes.
> Is there more than just a belief that we are conscious? Sure, some
> rare individuals can block pain.
John: The reason why people are thinking about all this stuff in terms of
maths is because it is not all just fluffy philosophizing; you have to have
at least minimalistic math models in order to build software. So when you
say iTheatre or iMovie, I'm thinking bits per second, compression, color
depth...
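John's "minimalistic math models" point can be made concrete with a toy
back-of-envelope calculation; the resolution, color depth, and frame rate
below are illustrative assumptions, not figures from this thread:

```python
# Toy model: raw data rate of an uncompressed "movie" stream.
# All parameters (640x480, 24-bit color, 30 fps) are illustrative
# assumptions, not values anyone in the thread proposed.

def raw_bitrate(width, height, bits_per_pixel, fps):
    """Bits per second for an uncompressed video stream."""
    return width * height * bits_per_pixel * fps

bps = raw_bitrate(640, 480, 24, 30)
print(bps)            # 221184000 bits/s
print(bps / 8 / 1e6)  # ~27.6 MB/s before any compression
```

Even this trivial model forces a choice of bit depth, frame rate, and
compression budget, which is the kind of commitment the "fluffy
philosophizing" never has to make.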
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
>
> What many people call consciousness is qualia, that which distinguishes
> you from a philosophical zombie, http://en.wikipedia.org/wiki/P-zombie
>
> There is no test for consciousness in this sense, but humans universally
> believe that they are conscious...
Why do I believe anyone besides me is conscious? Because they are made of
meat? No, it's because they claim to be conscious, and answer questions about
their consciousness the same way I would, given my own conscious
experience -- and they have the same capabilities, e.g. of introspection,
1-sh...
The attempt to create an objective measure or process for intelligence seems
worthwhile, but the problem here is that in making the attempt to eliminate
"actions and beliefs" from the modeling of intelligence one is in danger of
repeating the serious error of over-simplification as was done, for...
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
>
> You guys are seriously irritating me.
>
> You are talking such rubbish. But it's collective rubbish - the
> collective *non-sense* of AI. And it occurs partly because our culture
> doesn't offer a simple definition of consciousness. So let me have...
> From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
>
> read http://cs-www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf
>
You can come up with different models of consciousness. And the more models
that you think up, the more variables creep into the equation. So you have
to fight to keep ones out
Steve, Josh, etc.
Agree this is off-topic ... it should be posted to
[EMAIL PROTECTED]
instead, perhaps... so I have cross-posted it there and suggest
continuing the discussion there.
Steve:
I think that brain scanning is an interesting and important
technology/research direction, but I don't...