On 11 Mar 2010, at 17:57, Brent Meeker wrote:

On 3/11/2010 1:59 AM, Bruno Marchal wrote:


I don't see how we could use Tononi's paper to provide a physical or a computational role to an inactive device in the actual supervenience of an actual computation currently not using that device.

I'm not sure I understand that question. It seems to turn on what is meant by "using that device". Is my brain using a neuron that isn't firing? I'd say yes: it is part of the system, and its not firing is significant.

Two old guys, A and B, each decide to buy a car. They bought identical cars and paid the same price. But B's car has a defect: above 90 mi/h the engine explodes. Still, both A and B peacefully enjoy driving their cars for the rest of their lives. They were old, and never went faster than 60 mi/h until they died. Would you say that A's car was driving but that B's car was only partially driving?

What about a brain with clever neurons? For example, the neuron N24 anticipates that it will be useless for the next ten minutes, which gives it the time to take a coffee break and chat with some glial cell friends. Then, after ten minutes, it comes back and does its job very well. Would that brain be less conscious? It did not miss any messages.

The significance of the neuron (firing or not firing) is computational. If, for the precise computation C, the neuron n is not used in the time interval (t1, t2), you may replace it by a functionally equivalent machine for that interval. There is no problem when you make consciousness supervene on the relevant abstract computations, that is, on the existence of some relations between some numbers (given that I have chosen elementary arithmetic as the "base"; it is Turing universal).
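
To make the point concrete, here is a toy sketch in Python (just an illustration; the names run, neuron, defective, the component "n24" and the interval are invented for the example). Since no message reaches n24 inside the interval, whatever occupies n24's place there, even something as broken as B's car engine, leaves the trace of the computation unchanged.

    # Toy sketch: the trace of a computation does not depend on what occupies
    # a component's place during an interval in which that component is never used.

    def run(component_at, schedule):
        """component_at(t) gives the function playing the role of n24 at time t;
        schedule lists the (time, signal) messages actually sent to n24."""
        return [(t, component_at(t)(signal)) for t, signal in schedule]

    neuron = lambda s: s + 1       # the real neuron's input/output behaviour
    defective = lambda s: 1 / 0    # "explodes" if ever driven, like B's car above 90 mi/h

    schedule = [(0, 3), (25, 5)]   # n24 receives no message in the open interval (1, 24)

    intact = lambda t: neuron
    patched = lambda t: defective if 1 < t < 24 else neuron  # swap only in the idle interval

    assert run(intact, schedule) == run(patched, schedule)   # identical trace, same computation

The assertion holds only because the stand-in is never actually invoked; that is all the functional equivalence in (t1, t2) needs to mean here.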

To attach consciousness to "physical activity" + the abstract counterfactuals is useless. It introduces more difficulties than it solves. With comp, that needed "physical activity" has to be Turing emulable itself: if it is not, it means you make consciousness depend on something not Turing emulable, and you can no longer say "yes" to the doctor qua computatio.

I see Tononi's theory as providing a kind of answer to questions like, "Is a Mars Rover conscious and, if so, what is it conscious of? Is it more or less conscious than a fruit fly?"


I tend to work at a more general, or abstract, level, and I think that consciousness needs some amount of self-reflection: two universal machines in front of each other, at least. If Mars Rover can add and multiply, it may have the consciousness of Robinson Arithmetic. If Mars Rover believes in enough arithmetical induction rules, it can quickly become trivially Löbian. But its consciousness will develop when it genuinely and privately identifies itself with its unnameable first person (Bp & p), using Bp for public science and opinions. It will build a memorable and unique self-experience.
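
To fix the notation (a minimal sketch in LaTeX, with B read as the machine's provability, or public belief, predicate and K as the first-person knower):

    % A machine is Löbian when it proves Löb's formula, for every proposition p:
    \[ B(Bp \to p) \to Bp \]
    % The first person is the Theaetetical variant of B, true belief,
    % which the machine cannot define (name) about itself:
    \[ Kp \;\equiv\; Bp \land p \]

This is the sense in which enough induction makes the machine Löbian: with it, the machine proves its own Löb's formula, while the knower Kp remains outside what it can name.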

To be clear, Mars Rover may still be largely behind the fruit fly in matters of consciousness. The fruit fly seems capable of appreciating wine, for example. Mars Rover is still too much an infant: it wants only to satisfy its mother company, not yet itself.

Bruno

http://iridia.ulb.ac.be/~marchal/


