Re: the redness of the red

2010-02-01 Thread Bruno Marchal


On 31 Jan 2010, at 03:10, soulcatcher☠ wrote:


I see a red rose. You see a red rose. Is your experience of redness
the same as mine?
1. Yes, they are identical.
2. They are different as long as the neural organization of our brains is
slightly different, but you are potentially capable of experiencing my
redness with some help from a neurosurgeon who can reshape your brain
to be the way mine is.
3. They are different as long as some 'code' of our brains is slightly
different, but you (and every machine) are potentially capable of
experiencing my redness if you somehow achieve the same 'code'.
5. They are different and absolutely private - you (and anybody else,
be it a human or a machine) don't and can't experience my redness.
6. The question doesn't make sense because ... (please elaborate)
7. ...
What is your opinion?


It is between 3 and 5, I would say. Intuitively, assuming that the  
mechanist substitution level is high, we may expect our qualia to  
differ between us as much as the shapes of our bodies do. But then logic  
can explain that in such a place (another's experience) intuition might  
not be the best adviser.






My (naive) answer is (3). Our experiences are identical (would the
correct term be 'ontologically identical'?) as long as they have the
same symbolic representation and the symbols have the same grounding
in the physical world. The part about grounding is just an uneducated
guess; I don't understand the subject and have only an intuitive
feeling that semantics (what a computation is about) is important and
somehow determined by the physical world out there.


You are right. The stability of our first-person consciousness has to rely on  
the infinite computations which statistically stabilize the physical  
world. But the semantics will typically be a creation of the person's  
brain.





Let me explain with an example. Suppose that you:
1. simulate my brain in a computer program, so we can say that this
program represents my brain in your symbols;
2. simulate a red rose;
3. feed the rose data into my simulated brain.
I think (more believe than think) that this simulated brain won't see
my redness - in fact, it won't see anything at all because it isn't
conscious.


Then digital mechanism is false, or you have chosen an incorrect level  
of substitution, and your brain may have to include a part of the  
environment.





But if you:
1. make a robot that simulates my brain in my symbols, i.e. behaves
(relative to the physical world) in the same way as I do;
2. show a rose to the robot;
I think that the robot will experience the same redness as me.


See Jason Resch's comment.




I would be glad if somebody could suggest something to read about 'symbol
grounding', semantics, etc. I have a lot of confusion here; I've
always thought that logic was a formal language for the 'syntactic'
manipulation of 'strings' that acquire meaning only in our minds.


Actually logic is more about the relation between syntax and  
semantics. Both syntax and semantics, and the relation between them, are  
studied mathematically by logicians. I would suggest that you study a  
good introduction to mathematical logic, like the book by Elliott  
Mendelson. See:


http://www.amazon.com/Introduction-Mathematical-Fourth-Elliott-Mendelson/dp/0412808307

But logic is not a formal language. It is the informal mathematical  
study OF formal languages and theories, together with their  
semantics/meaning (proof theory, model theory, computability theory,  
axiomatic set theory, etc.).
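
As a toy illustration of that syntax/semantics relation (a minimal sketch
of my own in Python, not anything taken from Mendelson's book): formulas are
purely syntactic objects, and a model - an assignment of truth values - is
what gives them meaning; validity is then the semantic notion of being true
in every model.

from itertools import product

def eval_formula(f, model):
    # Semantics: evaluate a formula in a model (a dict mapping variables to bools).
    if isinstance(f, str):                     # an atomic proposition
        return model[f]
    op, *args = f
    if op == 'not':
        return not eval_formula(args[0], model)
    if op == 'and':
        return eval_formula(args[0], model) and eval_formula(args[1], model)
    if op == 'or':
        return eval_formula(args[0], model) or eval_formula(args[1], model)
    if op == 'implies':
        return (not eval_formula(args[0], model)) or eval_formula(args[1], model)
    raise ValueError(op)

def is_valid(f, variables):
    # A formula is valid (a tautology) if it is true in every model.
    return all(eval_formula(f, dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

# Syntactically, 'p -> (q -> p)' is just a string of symbols; semantically,
# it comes out true under every assignment, so it is valid.
print(is_valid(('implies', 'p', ('implies', 'q', 'p')), ['p', 'q']))   # True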


Bruno
http://iridia.ulb.ac.be/~marchal/






Re: the redness of the red

2010-02-01 Thread soulcatcher☠

 What would you say about this setup:

 Computer Simulation -> Physical Universe -> Your Brain

 That is to say, what if our physical universe were simulated in some
 alien's computer instead of being some primitive physical world?


This setup doesn't sound very convincing to me:
- I believe that simulated objects (agents) can't be conscious
- I believe that I am conscious
=> I'm not simulated, and the universe as a whole is not simulated.

 And another interesting thought experiment to think about:
 What if a baby from birth was never allowed to see the real world, but
 instead was given VR goggles providing a realistic interactive
 environment, entirely generated from a computer simulation? Would that
 infant be unconscious of the things it saw?

This argument sounds better, but still:
1. Goggles are not enough - a baby learns via active interaction with the
outside world, i.e. motor function matters, and you would have to provide the
baby with full-body armor that completely simulates the environment and makes
the interaction consistent (so that haptic, proprioceptive and visual
experiences don't contradict each other). But that's hard and maybe
impossible - you can't (or can you?) completely prevent the contaminating
influence of the world; for example, you still have to feed the baby.
2. Most importantly, the baby has a nervous system that evolved over a
very long time and already somehow encodes external symbols. You are just
substituting virtual input for real input, but that virtual input is
already properly encoded and speaks the symbolic language that is grounded
in the real world and comprehensible to the baby's brain.
3. The baby itself is real and made of matter, and maybe a real baby in VR
!= a virtual baby in VR. In other words, there is a special class of
real Turing machine implementations that possess meaning grounded in
the environment.

OK, I agree that it's very tempting to accept computationalism, but I'm
still not ready; maybe I've got to try harder :)




Re: the redness of the red

2010-02-01 Thread Jason Resch
On Mon, Feb 1, 2010 at 8:05 AM, soulcatcher☠ soulcatche...@gmail.com wrote:

 What would you say about this setup:

 Computer Simulation -> Physical Universe -> Your Brain

 That is to say, what if our physical universe were simulated in some
 alien's computer instead of being some primitive physical world?


 This setup doesn't sound very convincing to me:
 - I believe that simulated objects (agents) can't be conscious
 - I believe that I am conscious
 => I'm not simulated, and the universe as a whole is not simulated.

  And another interesting thought experiment to think about:
  What if a baby from birth was never allowed to see the real world, but
  instead was given VR goggles providing a realistic interactive environment,
  entirely generated from a computer simulation? Would that infant be
  unconscious of the things it saw?

 This argument sounds better, but still:
 1. Goggles are not enough - a baby learns via active interaction with the
 outside world, i.e. motor function matters, and you would have to provide the
 baby with full-body armor that completely simulates the environment and makes
 the interaction consistent (so that haptic, proprioceptive and visual
 experiences don't contradict each other). But that's hard and maybe
 impossible - you can't (or can you?) completely prevent the contaminating
 influence of the world; for example, you still have to feed the baby.
 2. Most importantly, the baby has a nervous system that evolved over a
 very long time and already somehow encodes external symbols. You are just
 substituting virtual input for real input, but that virtual input is
 already properly encoded and speaks the symbolic language that is grounded
 in the real world and comprehensible to the baby's brain.
 3. The baby itself is real and made of matter, and maybe a real baby in VR
 != a virtual baby in VR. In other words, there is a special class of
 real Turing machine implementations that possess meaning grounded in
 the environment.


Maybe we have different definitions of what is meant by simulation.  I say this
because of your last comment about meaning needing to be grounded in an
environment.  Within realistic computer simulations there is an environment
which encodes many of the same relations we are used to.  Concreteness of
objects, Newtonian mechanics ( http://www.youtube.com/watch?v=Ae6ovaDBiDE ),
light effects ( http://www.youtube.com/watch?v=lvI1l0nAd1c ), etc. are all
embedded within the code that informs the simulation how to evolve, just as
the laws of physics are in a physical world.  Do you see the meaning of
physical laws as being somehow different from the programmed laws that simulate
an environment?
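
Just to make "programmed laws" concrete, here is a minimal sketch (my own
toy example, not taken from the linked videos): Newtonian gravity for two
bodies, advanced by simple Euler steps in Python.  Inside the simulation
this update rule plays the role that the law of gravitation plays outside it.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def step(bodies, dt):
    # Advance all bodies by one time step under their mutual gravity.
    for i, a in enumerate(bodies):
        ax = ay = 0.0
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx, dy = b['x'] - a['x'], b['y'] - a['y']
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * b['m'] * dx / r3
            ay += G * b['m'] * dy / r3
        a['vx'] += ax * dt
        a['vy'] += ay * dt
    for a in bodies:
        a['x'] += a['vx'] * dt
        a['y'] += a['vy'] * dt

# Rough Earth-Moon figures: the simulated 'environment' evolves simply by
# applying the programmed law over and over.
earth = {'m': 5.97e24, 'x': 0.0,    'y': 0.0, 'vx': 0.0, 'vy': 0.0}
moon  = {'m': 7.35e22, 'x': 3.84e8, 'y': 0.0, 'vx': 0.0, 'vy': 1022.0}
for _ in range(1440):                # one day in one-minute steps
    step([earth, moon], 60.0)
print(moon['x'], moon['y'])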

Jason




Re: the redness of the red

2010-02-01 Thread soulcatcher☠

 Do you see the meaning of physical laws being somehow different from the
 programmed laws that simulate an environment?

Yes, I feel that a simulated mind is not identical to the real one. A simulation
is only an extension of the mind - just a tool, a mental crutch, a
pluggable module that gives you additional abilities. For example, if my brain
had sufficient computational power, I could simulate other
minds entirely in my own mind (in imagination, whatever) - but those imaginary
minds wouldn't be conscious, would they?
In other words:
1. I accept that computation is a description (an imperative one) of
reality, like math (a declarative one) or human language.
2. I don't believe (for now) that it has any meaning (or consciousness)
per se.




Re: the redness of the red

2010-02-01 Thread Jason Resch
On Mon, Feb 1, 2010 at 9:27 AM, soulcatcher☠ soulcatche...@gmail.com wrote:

 Do you see the meaning of physical laws being somehow different from the
 programmed laws that simulate an environment?

 Yes, I feel that a simulated mind is not identical to the real one.
 A simulation is only an extension of the mind - just a tool, a mental
 crutch, a pluggable module that gives you additional abilities. For
 example, if my brain had sufficient computational power, I
 could simulate other minds entirely in my own mind (in imagination, whatever) -
 but those imaginary minds wouldn't be conscious, would they?


I think that depends on the level of resolution to which you are simulating
them.  The people you see in your dreams aren't conscious, but if a super
intelligence could simulate another's mind to the resolution of their
neurons, I think those simulated persons would be conscious.



 In other words:
 1. I accept that computation is a description (an imperative one) of
 reality, like math (a declarative one) or human language.


There is a difference between a computation as a description (say, a printout
or a CD containing a program's source code) and a computation as an action
or process.  The CD wouldn't be conscious, but if you loaded it into a
computer and executed it, I think it would be.
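
A trivial sketch of that distinction (my own illustration): the same program
exists first as an inert description, a string of characters, and only
becomes a process when a machine executes it.

source = "print(sum(range(10)))"   # the description: just characters, like data on a CD
exec(source)                       # the process: the machine runs it and prints 45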


 2. I don't believe (for now) that it has any meaning (or consciousness)
 per se.



So you think the software mind in a software environment would never
question the redness of red, when the robot brain would?

Jason




Re: the redness of the red

2010-02-01 Thread soulcatcher☠

 I think those simulated persons would be conscious.

The possibility of superintelligence that creates worlds in its dreams kinda
freaks me out :)

So you think the software mind in a software environment would never
 question the redness of red, when the robot brain would?

 No, I think that a good enough simulation of me must question the redness of
the red simply by definition - because I am questioning it and the simulation
reproduces my behavior.
Nevertheless, I think that this simulation won't be conscious and has only
descriptive power, like a reflection in a mirror (a bad example, but it conveys
the idea). But I can't tell exactly what the difference is - what is that
obscure physicalist principle that I meant when speaking about symbol grounding
in the real world, the one that makes me (and not my simulation) conscious?
OK, suppose we record a day in the life of my simulation and then replay
it - will the replay still be conscious?




Re: the redness of the red

2010-02-01 Thread Brent Meeker

soulcatcher☠ wrote:


Do you see the meaning of physical laws being somehow different
from the programmed laws that simulate an environment?

Yes, I feel that a simulated mind is not identical to the real one. 
A simulation is only an extension of the mind - just a tool, a mental 
crutch, a pluggable module that gives you additional abilities. For 
example, if my brain had sufficient computational power, 
I could simulate other minds entirely in my own mind (in imagination, 
whatever) - but those imaginary minds wouldn't be conscious, would they?

In other words:
1. I accept that computation is a description (an imperative one) of 
reality, like math (a declarative one) or human language.
2. I don't believe (for now) that it has any meaning (or 
consciousness) per se.


I would say that it gets its meaning (interpretation) from you. The 
meaning you assign it comes from your internal model of the world you 
interact with. This is partly hardwired by evolution and partly learned 
from your experience.


Brent






Re: the redness of the red

2010-02-01 Thread Jason Resch
On Mon, Feb 1, 2010 at 12:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:

 I think those simulated persons would be conscious.

 The possibility of superintelligence that creates worlds in its dreams
 kinda freaks me out :)


Carl Sagan in Cosmos said that in the Hindu religion, there are an infinite
number of Gods, each dreaming their own universe:
http://www.youtube.com/watch?v=4E-_DdX8Ke0



 So you think the software mind in a software environment would never
 question the redness of red, when the robot brain would?

 No, I think that a good enough simulation of me must question the redness of
 the red simply by definition - because I am questioning it and the simulation
 reproduces my behavior.
 Nevertheless, I think that this simulation won't be conscious and has only
 descriptive power, like a reflection in a mirror (a bad example, but it conveys
 the idea). But I can't tell exactly what the difference is - what is that
 obscure physicalist principle that I meant when speaking about symbol grounding
 in the real world, the one that makes me (and not my simulation) conscious?
 OK, suppose we record a day in the life of my simulation and then replay
 it - will the replay still be conscious?


I don't think your recording will be conscious.  It lacks the causal
relations that give meaning to its symbols.  I believe the symbols are
grounded and related to each other through their interactions during
processing by the CPU/Turing machine/physical laws.
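
One way to picture that difference (a toy sketch of my own, not a theory of
consciousness): a live computation derives each output from its current
state and input, so it is counterfactually sensitive to what it is shown,
while a replayed recording returns the stored outputs no matter what input
it receives.

def live_process(inputs):
    # The rule below relates each output to the state and input that caused it.
    state, outputs = 0, []
    for x in inputs:
        state = (state + x) % 7
        outputs.append(state)
    return outputs

def replay(recording, inputs):
    # A recording ignores its inputs entirely.
    return list(recording)

run = live_process([3, 1, 4, 1, 5])
print(run)                              # [3, 4, 1, 2, 0]
print(replay(run, [9, 9, 9, 9, 9]))     # still [3, 4, 1, 2, 0]
print(live_process([9, 9, 9, 9, 9]))    # [2, 4, 6, 1, 3] - the live rule responds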

Do you think the redness of red is a physical property of red light or an
internal property of you (the organization of neurons in your brain)?

Jason




Re: the redness of the red

2010-02-01 Thread Jason Resch
On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:

 Let me explain with an example. Suppose that you:
 1. simulate my brain in a computer program, so we can say that this
 program represents my brain in your symbols;
 2. simulate a red rose;
 3. feed the rose data into my simulated brain.
 I think (more believe than think) that this simulated brain won't see
 my redness - in fact, it won't see anything at all because it isn't
 conscious.
 But if you:
 1. make a robot that simulates my brain in my symbols, i.e. behaves
 (relative to the physical world) in the same way as I do;
 2. show a rose to the robot;
 I think that the robot will experience the same redness as me.
 I would be glad if somebody could suggest something to read about 'symbol
 grounding', semantics, etc. I have a lot of confusion here; I've
 always thought that logic was a formal language for the 'syntactic'
 manipulation of 'strings' that acquire meaning only in our minds.


When I play a video game I am conscious.  Presumably I would still be
conscious even using a fully immersive system like the vertebrain system
described on this page ( http://marshallbrain.com/discard8.htm ).  If that
is true, and you agree with me so far, do you think a brain in a vat (
http://en.wikipedia.org/wiki/Brain_in_a_vat ) would be conscious?  Would it
be conscious whether its optic nerve were connected to a webcam or to the
TV-out port of a video game console?  What about a human brain that spent its
whole life as a brain in a vat from the time it was born (assuming it were
given a robot body for input, or given a realistic computer-game
reality)?  I am curious at what point you think the consciousness
would cease.

If you agree that the brain in the vat would be conscious in all cases (even
when given input from a video game) and you agree that a robot body with a
software brain would be conscious, why would it stop working when you put a
software brain in the same position as the brain in a vat?

Jason




Re: the redness of the red

2010-02-01 Thread Brent Meeker

Jason Resch wrote:
On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:


Let me explain with an example. Suppose that you:
1. simulate my brain in a computer program, so we can say that this
program represents my brain in your symbols;
2. simulate a red rose;
3. feed the rose data into my simulated brain.
I think (more believe than think) that this simulated brain won't see
my redness - in fact, it won't see anything at all because it isn't
conscious.
But if you:
1. make a robot that simulates my brain in my symbols, i.e. behaves
(relative to the physical world) in the same way as I do;
2. show a rose to the robot;
I think that the robot will experience the same redness as me.
I would be glad if somebody could suggest something to read about 'symbol
grounding', semantics, etc. I have a lot of confusion here; I've
always thought that logic was a formal language for the 'syntactic'
manipulation of 'strings' that acquire meaning only in our minds.


When I play a video game I am conscious.  Presumably I would still be 
conscious even using a fully immersive system like the vertebrain 
system described on this page 
( http://marshallbrain.com/discard8.htm ).  If that is true, and you 
agree with me so far, do you think a brain in a vat 
( http://en.wikipedia.org/wiki/Brain_in_a_vat ) would be conscious? 
 Would it be conscious whether its optic nerve were connected to a 
webcam or to the TV-out port of a video game console?  What about a 
human brain that spent its whole life as a brain in a vat from the 
time it was born (assuming it were given a robot body for input, or 
given a realistic computer-game reality)?  I am 
curious at what point you think the consciousness would cease.



I think that if the brain in a vat had sufficient efferent/afferent 
nerve connections so that it was able to both perceive and act in 
the world (either real or virtual) then it would be conscious.  If it 
were very restricted, e.g. it only got to play the same virtual video 
game over and over, its consciousness would be similarly limited (I 
think there are degrees of consciousness). And if it were too limited it 
would crash.


Brent


If you agree that the brain in the vat would be conscious in all cases 
(even when given input from a video game) and you agree that a robot 
body with a software brain would be conscious, why would it stop 
working when you put a software brain in the same position as the 
brain in a vat?


Jason






Re: the redness of the red

2010-01-31 Thread Jason Resch
On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:

 I see a red rose. You see a red rose. Is your experience of redness
 the same as mine?
 1. Yes, they are identical.
 2. They are different as long as the neural organization of our brains is
 slightly different, but you are potentially capable of experiencing my
 redness with some help from a neurosurgeon who can reshape your brain
 to be the way mine is.
 3. They are different as long as some 'code' of our brains is slightly
 different, but you (and every machine) are potentially capable of
 experiencing my redness if you somehow achieve the same 'code'.
 5. They are different and absolutely private - you (and anybody else,
 be it a human or a machine) don't and can't experience my redness.
 6. The question doesn't make sense because ... (please elaborate)
 7. ...
 What is your opinion?

 My (naive) answer is (3). Our experiences are identical (would the
 correct term be 'ontologically identical'?) as long as they have the
 same symbolic representation and the symbols have the same grounding
 in the physical world. The part about grounding is just an uneducated
 guess; I don't understand the subject and have only an intuitive
 feeling that semantics (what a computation is about) is important and
 somehow determined by the physical world out there.
 Let me explain with an example. Suppose that you:
 1. simulate my brain in a computer program, so we can say that this
 program represents my brain in your symbols;
 2. simulate a red rose;
 3. feed the rose data into my simulated brain.
 I think (more believe than think) that this simulated brain won't see
 my redness - in fact, it won't see anything at all because it isn't
 conscious.
 But if you:
 1. make a robot that simulates my brain in my symbols, i.e. behaves
 (relative to the physical world) in the same way as I do;
 2. show a rose to the robot;
 I think that the robot will experience the same redness as me.
 I would be glad if somebody could suggest something to read about 'symbol
 grounding', semantics, etc. I have a lot of confusion here; I've
 always thought that logic was a formal language for the 'syntactic'
 manipulation of 'strings' that acquire meaning only in our minds.


I have to disagree with your intuition that grounding in the physical world
is required.  What would you say about the possibility that our whole
universe is running in some computer?  According to your intuition:

Physical Universe -> Your Brain = Conscious
Physical Universe -> Robot Brain = Conscious
Physical Universe -> Computer Simulation -> Software Brain = Not Conscious

What would you say about this setup:

Computer Simulation -> Physical Universe -> Your Brain

That is to say, what if our physical universe were simulated in some alien's
computer instead of being some primitive physical world?

Also, what would the non-conscious software brain say if someone asked it
what it saw?  Would the software simulation of your brain ever feel
bewilderment over its sensations or wonder about consciousness?  Would it
ever compose an e-mail on topics such as the redness of red or would that
activity be impossible for the software brain fed simulated input?

If you expect different behavior in any conceivable situation between the
robot brain fed input from a camera, and the software brain fed input from
the simulation, assuming the programming and input are identical, I think
this leads to a contradiction.  Equivalent Turing machines should evolve
identically given the same input.  Therefore there should be no case in
which the robot would write an e-mail questioning the redness of red, but
the software simulation would not.
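
In miniature (a sketch of my own, with a made-up update rule standing in
for the brain program): two runs of the same deterministic program over the
same input sequence pass through the same states and produce the same
outputs, regardless of whether those inputs came from a camera or from a
simulation.

def brain_program(state, percept):
    # Toy deterministic update rule: same state + same input -> same result.
    new_state = (state * 31 + percept) % 1009
    output = "red!" if new_state % 2 == 0 else "not red"
    return new_state, output

def run(program, percepts, state=0):
    trace = []
    for p in percepts:
        state, out = program(state, p)
        trace.append((state, out))
    return trace

camera_feed    = [7, 42, 42, 3, 255]    # values read from a physical sensor
simulated_feed = [7, 42, 42, 3, 255]    # the same values produced by a simulation

assert run(brain_program, camera_feed) == run(brain_program, simulated_feed)
print("identical program + identical input -> identical behavior")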

Jason




Re: the redness of the red

2010-01-31 Thread Jason Resch
On Sun, Jan 31, 2010 at 5:45 PM, Jason Resch jasonre...@gmail.com wrote:



 On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:

 I see a red rose. You see a red rose. Is your experience of redness
 the same as mine?
 1. Yes, they are identical.
 2. They are different as long as the neural organization of our brains is
 slightly different, but you are potentially capable of experiencing my
 redness with some help from a neurosurgeon who can reshape your brain
 to be the way mine is.
 3. They are different as long as some 'code' of our brains is slightly
 different, but you (and every machine) are potentially capable of
 experiencing my redness if you somehow achieve the same 'code'.
 5. They are different and absolutely private - you (and anybody else,
 be it a human or a machine) don't and can't experience my redness.
 6. The question doesn't make sense because ... (please elaborate)
 7. ...
 What is your opinion?

 My (naive) answer is (3). Our experiences are identical (would the
 correct term be 'ontologically identical'?) as long as they have the
 same symbolic representation and the symbols have the same grounding
 in the physical world. The part about grounding is just an uneducated
 guess; I don't understand the subject and have only an intuitive
 feeling that semantics (what a computation is about) is important and
 somehow determined by the physical world out there.
 Let me explain with an example. Suppose that you:
 1. simulate my brain in a computer program, so we can say that this
 program represents my brain in your symbols;
 2. simulate a red rose;
 3. feed the rose data into my simulated brain.
 I think (more believe than think) that this simulated brain won't see
 my redness - in fact, it won't see anything at all because it isn't
 conscious.
 But if you:
 1. make a robot that simulates my brain in my symbols, i.e. behaves
 (relative to the physical world) in the same way as I do;
 2. show a rose to the robot;
 I think that the robot will experience the same redness as me.
 I would be glad if somebody could suggest something to read about 'symbol
 grounding', semantics, etc. I have a lot of confusion here; I've
 always thought that logic was a formal language for the 'syntactic'
 manipulation of 'strings' that acquire meaning only in our minds.


 I have to disagree with your intuition that grounding in the physical world
 is required.  What would you say about the possibility that our whole
 universe is running in some computer?  According to your intuition:

 Physical Universe -> Your Brain = Conscious
 Physical Universe -> Robot Brain = Conscious
 Physical Universe -> Computer Simulation -> Software Brain = Not Conscious

 What would you say about this setup:

 Computer Simulation -> Physical Universe -> Your Brain

 That is to say, what if our physical universe were simulated in some
 alien's computer instead of being some primitive physical world?

 Also, what would the non-conscious software brain say if someone asked it
 what it saw?  Would the software simulation of your brain ever feel
 bewilderment over its sensations or wonder about consciousness?  Would it
 ever compose an e-mail on topics such as the redness of red or would that
 activity be impossible for the software brain fed simulated input?

 If you expect different behavior in any conceivable situation between the
 robot brain fed input from a camera, and the software brain fed input from
 the simulation, assuming the programming and input are identical, I think
 this leads to a contradiction.  Equivalent Turing machines should evolve
 identically given the same input.  Therefore there should be no case in
 which the robot would write an e-mail questioning the redness of red, but
 the software simulation would not.

 Jason


And here is another interesting thought experiment to think about:

What if a baby from birth was never allowed to see the real world, but
instead was given VR goggles providing a realistic interactive environment,
entirely generated from a computer simulation?  Would that infant be
unconscious of the things it saw?

Jason




Re: the redness of the red

2010-01-31 Thread Jason Resch
On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:

 I see a red rose. You see a red rose. Is your experience of redness
 the same as mine?
 1. Yes, they are identical.
 2. They are different as long as the neural organization of our brains is
 slightly different, but you are potentially capable of experiencing my
 redness with some help from a neurosurgeon who can reshape your brain
 to be the way mine is.
 3. They are different as long as some 'code' of our brains is slightly
 different, but you (and every machine) are potentially capable of
 experiencing my redness if you somehow achieve the same 'code'.
 5. They are different and absolutely private - you (and anybody else,
 be it a human or a machine) don't and can't experience my redness.
 6. The question doesn't make sense because ... (please elaborate)
 7. ...
 What is your opinion?


I think our brains are wired similarly enough that most people experience
colors similarly, excepting tetrachromats and the color blind.  Consider the
following other sensations, and how similar you think they might be between
people: a needle prick, coldness, a high-pitched sound, hunger, complete
darkness.  Is complete darkness more or less the same between two people?
What about the sound of an 8 kHz tone?

To answer this question, I would say somewhere between 1 and 2: they are
probably very close between any two random normal humans, but perhaps not
identical.  This is not to say that an alien with a differently evolved and
structured brain could not have a completely different experience when
looking at a rose; I just think our brains are wired similarly enough that
red to you could be as much red to me as coldness to you is coldness to me.
The higher the information content of the experience, however, the more
room there is for possible difference.

Jason



 My (naive) answer is (3). Our experiences are identical (would the
 correct term be 'ontologically identical'?) as long as they have the
 same symbolic representation and the symbols have the same grounding
 in the physical world. The part about grounding is just an uneducated
 guess; I don't understand the subject and have only an intuitive
 feeling that semantics (what a computation is about) is important and
 somehow determined by the physical world out there.
 Let me explain with an example. Suppose that you:
 1. simulate my brain in a computer program, so we can say that this
 program represents my brain in your symbols;
 2. simulate a red rose;
 3. feed the rose data into my simulated brain.
 I think (more believe than think) that this simulated brain won't see
 my redness - in fact, it won't see anything at all because it isn't
 conscious.
 But if you:
 1. make a robot that simulates my brain in my symbols, i.e. behaves
 (relative to the physical world) in the same way as I do;
 2. show a rose to the robot;
 I think that the robot will experience the same redness as me.
 I would be glad if somebody could suggest something to read about 'symbol
 grounding', semantics, etc. I have a lot of confusion here; I've
 always thought that logic was a formal language for the 'syntactic'
 manipulation of 'strings' that acquire meaning only in our minds.




the redness of the red

2010-01-30 Thread soulcatcher☠
I see a red rose. You see a red rose. Is your experience of redness
the same as mine?
1. Yes, they are identical.
2. They are different as long as the neural organization of our brains is
slightly different, but you are potentially capable of experiencing my
redness with some help from a neurosurgeon who can reshape your brain
to be the way mine is.
3. They are different as long as some 'code' of our brains is slightly
different, but you (and every machine) are potentially capable of
experiencing my redness if you somehow achieve the same 'code'.
5. They are different and absolutely private - you (and anybody else,
be it a human or a machine) don't and can't experience my redness.
6. The question doesn't make sense because ... (please elaborate)
7. ...
What is your opinion?

My (naive) answer is (3). Our experiences are identical (would the
correct term be 'ontologically identical'?) as long as they have the
same symbolic representation and the symbols have the same grounding
in the physical world. The part about grounding is just an uneducated
guess; I don't understand the subject and have only an intuitive
feeling that semantics (what a computation is about) is important and
somehow determined by the physical world out there.
Let me explain with an example. Suppose that you:
1. simulate my brain in a computer program, so we can say that this
program represents my brain in your symbols;
2. simulate a red rose;
3. feed the rose data into my simulated brain.
I think (more believe than think) that this simulated brain won't see
my redness - in fact, it won't see anything at all because it isn't
conscious.
But if you:
1. make a robot that simulates my brain in my symbols, i.e. behaves
(relative to the physical world) in the same way as I do;
2. show a rose to the robot;
I think that the robot will experience the same redness as me.
I would be glad if somebody could suggest something to read about 'symbol
grounding', semantics, etc. I have a lot of confusion here; I've
always thought that logic was a formal language for the 'syntactic'
manipulation of 'strings' that acquire meaning only in our minds.




Re: the redness of the red

2010-01-30 Thread Brent Meeker

soulcatcher☠ wrote:

I see a red rose. You see a red rose. Is your experience of redness
the same as mine?
1. Yes, they are identical.
2. They are different as long as the neural organization of our brains is
slightly different, but you are potentially capable of experiencing my
redness with some help from a neurosurgeon who can reshape your brain
to be the way mine is.
3. They are different as long as some 'code' of our brains is slightly
different, but you (and every machine) are potentially capable of
experiencing my redness if you somehow achieve the same 'code'.
5. They are different and absolutely private - you (and anybody else,
be it a human or a machine) don't and can't experience my redness.
6. The question doesn't make sense because ... (please elaborate)
7. ...
What is your opinion?

My (naive) answer is (3). Our experiences are identical (would the
correct term be 'ontologically identical'?) as long as they have the
same symbolic representation and the symbols have the same grounding
in the physical world. The part about grounding is just an uneducated
guess; I don't understand the subject and have only an intuitive
feeling that semantics (what a computation is about) is important and
somehow determined by the physical world out there.
Let me explain with an example. Suppose that you:
1. simulate my brain in a computer program, so we can say that this
program represents my brain in your symbols;
2. simulate a red rose;
3. feed the rose data into my simulated brain.
I think (more believe than think) that this simulated brain won't see
my redness - in fact, it won't see anything at all because it isn't
conscious.
But if you:
1. make a robot that simulates my brain in my symbols, i.e. behaves
(relative to the physical world) in the same way as I do;
2. show a rose to the robot;
I think that the robot will experience the same redness as me.
I would be glad if somebody could suggest something to read about 'symbol
grounding', semantics, etc. I have a lot of confusion here; I've
always thought that logic was a formal language for the 'syntactic'
manipulation of 'strings' that acquire meaning only in our minds.
  


I agree with your intuition that the semantics of thought and 
consciousness must be grounded in interaction with the world and are 
relative to that world. It does leave a puzzle about how private 
internal thoughts that seem to have no reference to the external world, 
e.g. pure mathematics, get grounded. I guess it is via indirect chains 
of reference.


Brent
