On Aug 8, 8:19 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Sun, Aug 7, 2011 at 11:07 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> That, as I keep saying, is the question. Assume that the bot can
> >> behave like a person but lacks consciousness.
>
> > No. You have it backwards from the start. There is no such thing as
> > 'behaving like a person'. There is only a person interpreting
> > something's behavior as being like a person. There is no power
> > emanating from a thing that makes it person-like. If you understand
> > this you will know because you will see that the whole question is a
> > red herring. If you don't see that, you do not understand what I'm
> > saying.
>
> "Interpreting something's behaviour as being like a [person's]" is
> what I mean by "behaving like a person".

I know that's what you mean, but I'm trying to explain why those two
phrases are polar opposites in this context, because the whole thread
is about the difference between subjectivity and objectivity. If a
chip could behave like a person, then we wouldn't be having this
conversation right now. We'd be hanging out with our digital friends
instead. Every chip we make would have its own perspective and do
what it wanted to do, like an infant or a pollywog would. If we want
to make a chip that impersonates something that does have its own
perspective and does what it wants to, then we can try to do that with
varying levels of success depending on whom we are trying to fool,
how we are trying to fool them, and for how long. The fact that any
particular person interprets the thing as being alive or conscious for
some period of time is not the same thing as the thing actually being
alive or conscious.

>
> >> Then it would be
> >> possible to replace parts of your brain with non-conscious components
> >> that function otherwise normally, which would lead to you lacking some
> >> important aspect of consciousness but being unaware of it. This
> >> is absurd, but it is a corollary of the claim that it is possible to
> >> separate consciousness from function. Therefore, the claim that it is
> >> possible to separate consciousness from function is shown to be false.
> >> If you don't accept this then you allow what you have already admitted
> >> is an absurdity.
>
> > It's a strawman of consciousness that is employed in circular
> > thinking. You assume that consciousness is a behavior from the
> > beginning and then use that fallacy to prove that behavior can't be
> > separated from consciousness. Consciousness drives behavior and vice
> > versa, but each extends beyond the limits of the other.
>
> No, I do NOT assume that consciousness follows from behaviour (and
> certainly not that it IS behaviour) from the beginning!! I've lost
> count of the number of times I have said "assume that it has the
> behaviour, but not the consciousness, of a brain component". How can I
> make it clearer? What other language can I use to convey that the
> thing is unconscious but to an external observer, who can't know its
> subjective states, it does the same sorts of mechanical things as its
> conscious counterpart?

Isn't the whole point of the gradual neuron substitution example to
prove that consciousness must be behavior? That if the behavior of the
neurons is the same, and accepted as the same, then the conscious
experience of the brain as a whole must be the same? Sorry if I'm not
getting your position right; it is a subtle thing to try to
dissect. I think the word 'behavior' implies a certain level of
normative repetition, which is not sufficient to describe the ability
of neurological awareness to choose whether to respond in the same way
or in a new and unpredictable way. When you look at what neurons are
actually like, I think the idea of them having a finite set of
behaviors is not realistic. It's like saying that because speech can
be transcribed into words and letters, words and letters should be
able to automatically reproduce the voice of their speakers.

> >> > The human race has already been supplanted by a superhuman AI. It's
> >> > called law and finance.
>
> >> They are not entities and not intelligent, let alone intelligent in
> >> the way humans are.
>
> > What makes you think that law and finance are any less intelligent than
> > a contemporary AI program?
>
> Law and finance are abstractions. A computer may be programmed to
> solve financial problems, and then it has a limited intelligence, but
> it's incorrect to say that "finance" is therefore intelligent.

Computer programming languages are abstractions too. Law and finance
are machine logics that program the computer of civilization, and as
such are no more or less intelligent than any other machine.

> > When you say that intelligence can 'fake' non-intelligence, you imply
> > an internal experience (faking is not an external phenomenon).
> > Intelligence is a broad, informal term. It can mean subjectivity,
> > intersubjectivity, or objective behavior, although I would say not
> > truly objective but intersubjectively imagined as objective. I agree
> > that consciousness or awareness is different from any of those
> > definitions of intelligence which would actually be categories of
> > awareness. I would not say that a zombie is intelligent. Intelligence
> > implies understanding, which is internal. What a computer or a zombie
> > has is intelliform mechanism.
>
> If a computer or zombie can solve the same wide range of problems as a
> human then it is ipso facto as intelligent as a human. If you discover
> that your friend whom you have known for twenty years is actually a
> robot you may doubt in the light of this knowledge that he is
> conscious, but you can't doubt that he is intelligent, since that is
> based purely on your observations of his behaviour and not on internal
> state.

Yes, that's one usage of the word intelligent, definitely. It's not
that simple, though, if we are getting down to issues of subjectivity
and consciousness. A language translator can compare canned
definitions of words and spit out correlations which are useful to us
as users of the translator, but which are of no use to the translator
itself. The machine doesn't care if it's right or wrong, but we do. To
me, intelligence has to care whether it's right or wrong. It's not
accurate to say that a program which amounts to an interactive
dictionary is 'intelligent', but you could casually call it
intelligent to mean that its design reflects human intelligence.
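
To make the contrast concrete, here is a minimal Python sketch of what
such a lookup 'translator' amounts to (the glossary entries are
invented purely for illustration). It maps strings to strings
mechanically; nothing in it evaluates, or could care about, whether
the output is right:

    # A toy lookup "translator": pure mechanical substitution.
    # The glossary below is a made-up example, not real lexicography.
    GLOSSARY = {"hello": "bonjour", "world": "monde", "friend": "ami"}

    def translate(sentence):
        # Swap each known word for its canned counterpart; pass
        # unknown words through unchanged. At no point does the
        # program represent, let alone care about, correctness.
        return " ".join(GLOSSARY.get(w, w) for w in sentence.lower().split())

    print(translate("Hello world"))   # -> "bonjour monde"

Whether 'bonjour monde' is a good rendering is a judgment that exists
only for us as users; the lookup itself has no stake in it, which is
what I mean by intelliform mechanism rather than intelligence.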

Craig
