On Wed, Aug 10, 2011 at 12:50 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

> It does mean that a machine can't behave just like a living thing,
> because everything that a machine is, and everything that a living
> thing is, are behaviors and experiences. You can't assume that two
> completely different things have the same behaviors and experiences
> just because the behaviors that you think you observe seem like what
> you expect.

I'm asking about observable behaviours, not experiences. Why do you
always conflate the two while arguing that they should not be
conflated?

>> Everything a human does is determined by genetics and environment.
>> Without the genetic or environmental programming, a human won't ever
>> learn, grow or change himself.
>
> That's an unfounded assumption. Conjoined twins have the same genetics
> and environment yet they are different people with different
> personalities. A dead body has the same genetics and environment as a
> living person, yet it doesn't learn or grow.

Conjoined twins don't have the same environment, since they are
spatially separated and made of different matter. A dead body also has a
different environment to a living body, since the chemical reactions
inside it are very different. Genetics in conjunction with environment
determines what sort of body and brain a being will have. What else
could there possibly be?

>> Are you now saying that your assumption that consciousness does not
>> necessarily follow from conscious-like behaviour is a priori absurd??
>> So if a machine can behave like a human then it must have the same
>> consciousness as a human, and to you this is now obvious a priori??
>
> Ugh. There is no such thing as conscious-like behavior. Again. That is
> my point. If I am a cockroach, then cockroaches seem to behave like
> they are conscious to me and human beings are forces of nature. I can
> only think that this insight is not accessible to everyone because
> only some people seem to be capable of getting it, while others just
> overlook it over and over again. It is critically important to
> understand this point or everything that follows will be a strawman
> distortion of my position.

Can you explain again what you think is a priori absurd?

> So, does cockroach-like behavior mean that a machine is a cockroach?
> Is a wooden duck decoy the same thing as a duck?

Cockroach-like behaviour means the thing behaves like a cockroach. If
cockroaches are conscious (they may be), cockroach-like behaviour means
the thing behaves like a conscious creature, namely a cockroach. It
isn't actually a cockroach if it is a machine, but just as it can have
cockroach-like behaviour without being a cockroach, it may have
cockroach-like consciousness without being a cockroach.

>> >> The form of argument is similar to assuming that sqrt(2) is rational
>> >> and showing that this assumption leads to contradiction, therefore
>> >> sqrt(2) cannot be rational. The only way to respond to this argument
>> >> if you disagree is to show that there is some error in the logic,
>> >> otherwise you *have* to accept it, even if you don't like it and you
>> >> have conceptual difficulties with irrational numbers.
>>
>> > No, I don't have to accept it. Consciousness is not accessible with
>> > mathematical logic alone. When you insist that it must beforehand, you
>> > poison the result and are forced into absurdity. You cannot prove to
>> > me that you exist. If you accept that that means you don't exist, then
>> > you have accepted that your own ability to accept or reject any
>> > proposition is itself invalid.
>>
>> No, I can't prove to you that I exist, or that I am conscious, or that
>> I will pay you back if you lend me money. But I can prove to you that
>> sqrt(2) is irrational and I can prove to you that if something has
>> behaviour similar to a conscious thing then it will also have the
>> consciousness of the conscious thing.
>
> You can't prove that you have consciousness but you are going to prove
> that something else has your consciousness because it acts like you
> do?

No, I can prove that something that behaves as I do has a similar
consciousness to mine. That doesn't mean that I am conscious or that I
can prove that I am conscious.
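
For reference, the proof by contradiction invoked earlier really is
only a few lines. In LaTeX notation:

  \sqrt{2} = \frac{p}{q}, \ \gcd(p, q) = 1
  \Rightarrow p^2 = 2q^2
  \Rightarrow p = 2m
  \Rightarrow q^2 = 2m^2
  \Rightarrow 2 \mid q,

which contradicts \gcd(p, q) = 1, so sqrt(2) cannot be rational. That
is the model of argument being appealed to: grant the premises and the
conclusion follows whether or not one likes it.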

>> You need to be able to follow
>> the proof in order to point out the error if you don't agree. There
>> may be an error but simply saying you don't agree is not an argument.
>
> The error is that consciousness cannot be proved. It doesn't exist: it
> insists. Completely different (opposite) epistemology.

I'm not trying to prove consciousness, only that such consciousness as
an entity may or may not have will be preserved if the function of its
brain is preserved.

>> >> As for neurons having a finite set of behaviours, of course they do.
>> >> It is a theorem in physics that a certain volume of space has an upper
>> limit of information it can contain:
>> http://en.wikipedia.org/wiki/Bekenstein_bound
>>
>> > There is no limit to the combinations of behaviors they can have over
>> > time though. There is a finite alphabet, but there is no limit to the
>> > possibilities of what can be written. Even the alphabet can be changed
>> > and expanded within the written text. New, unforeseeable behaviors are
>> > invented.
>>
>> No, there is an absolute limit to the behaviours that can be displayed
>> over time by a brain of finite size.
>
> Over how much time? Infinite time = infinite behaviors.

No, if the matter is finite the number of configurations is finite, so
after a finite period of time all the possible configurations will be
exhausted and you will start to repeat.
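
To attach a rough figure to that (a back-of-envelope sketch using the
approximate human-brain values from the Bekenstein bound article
linked above, R \approx 0.067 m and m \approx 1.5 kg, E = mc^2):

  I \le \frac{2 \pi R E}{\hbar c \ln 2} \approx 2.6 \times 10^{42}
  \ \text{bits},

so there are at most on the order of 2^{2.6 \times 10^{42}} distinct
physical states available to a brain-sized system: an astronomically
large number, but a finite one.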

>> There is only a finite number of
>> particles in the brain
>
> No. The brain is constantly adding, removing, and changing particles.
> All of our cells are.

But the brain is finite in size and the number of types of particle is
finite. If you have a finite sentence length (the size of the brain)
and a finite number of letters (the particles making up the brain)
there is only a finite number of sentences that can be produced. In
order to have infinite brain states you would have to allow the brain
to expand infinitely in size.
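
As a toy sketch of that counting argument (in Python, with made-up
illustrative numbers rather than any claim about real neural coding):

    def num_configurations(alphabet_size: int, max_length: int) -> int:
        # Count every "sentence" of length 1..max_length over a fixed
        # alphabet: alphabet_size**1 + alphabet_size**2 + ...
        return sum(alphabet_size ** n for n in range(1, max_length + 1))

    # Tiny numbers for illustration; a brain-sized system would have
    # astronomically more configurations, but still finitely many.
    print(num_configurations(alphabet_size=4, max_length=10))  # 1398100

By the pigeonhole principle, anything that keeps stepping through a
finite set of configurations must eventually revisit one.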

>> If mental states supervene on physical states then there can't be more
>> possible mental states than brain states.
>
> Mental states make sense of phenomena outside of the brain, through
> the brain, just as language communicates through words, inventing new
> ones as it goes.

Whatever that means, there can't be more mental states than brain
states, and there is only a finite number of possible brain states if
the brain remains finite in size.

>> If the number of possible
>> brain states is finite then the number of possible mental states is an
>> equal or smaller finite number (probably much smaller).
>
> Neither brain states nor mental states are finite or bound to each
> other explicitly. Some are bound explicitly, some are not. Think of a
> Venn diagram with the self as the intersection of neurology and
> experience.

So can you have a change in mental state without a change in brain
state? Brain activity would then seem to be superfluous: you do your
thinking with a disembodied soul.

>> >> So you would say of your friend: "I have known him for twenty years,
>> >> have had many conversations with him and always considered him very
>> >> smart, but now that I know he is a robot I realise that all along he
>> >> was as dumb as a rock".
>>
> Of course. It's not unusual for people to deceive themselves in
> long-term relationships. If you had the friend, would you not be fazed
> at all to discover that he is a robot? What if you found out that he
> reports your every conversation to GoogleBook, and that he is
> programmed to replace you and dispose of your body in the river?
> Would you still have faith in his intelligence and your friendship,
> enough to try to win him over and talk him out of it?
>>
>> I'd be surprised if my friend was a robot but if he was intelligent
>> before I knew he would still be intelligent after I knew. If he tried
to kill me then I would be upset, but I would also be upset if my flesh
>> and blood friend tried to kill me.
>
> So you would find it no different whether it is a lifelong friend who
> has been betraying you for 20 years versus a robot who was programmed
> to extract business intelligence from you from the start? You would
> hold the robot personally responsible and not GoogleBook?

I don't know why you chose Google Books as an example, but if it could
somehow be intelligent enough to drive a humanlike robot then Google
Books would be responsible for its actions.

-- 
Stathis Papaioannou
