On 29 Mar 2015, at 21:25, meekerdb wrote:

On 3/29/2015 1:33 AM, Quentin Anciaux wrote:

On 29 Mar 2015 at 09:03, "Bruce Kellett" <bhkell...@optusnet.com.au> wrote:
>
> meekerdb wrote:
>>
>> On 3/28/2015 11:54 PM, Bruce Kellett wrote:
>>>
>>> meekerdb wrote:
>>>>
>>>> On 3/28/2015 11:02 PM, Bruce Kellett wrote:
>>>>>
>>>>> meekerdb wrote:
>>>>>
>>>>> The calculation written out on paper is a static thing, but the result of that calculation might still be part of a simulation that produces consciousness. Though, unless Barbour is right and the actuality of time can be statically encoded in his 'time capsules' (current memories of past instances), I was thinking in terms of a sequence of these states (however calculated).
>>>>
>>>>
>>>> Yes, I agree that the computation should not have to halt (compute a function) in order to instantiate consciousness; it can just be a sequence of states. Written out on paper it can be a sequence of states ordered by position on the paper. But that seems absurd, unless you think of it as consciousness in the context of a world that is also written out on the paper, such that the writing that is conscious is /*conscious of*/ this written out world.
>>>
>>>
>>> My present conscious state includes visual, auditory and tactile inputs -- these are part of the simulation. But they need only simulate the effect on my brain states during that moment -- they do not have to simulate the entire world that gave rise to these inputs. The recreated conscious state is not counterfactually accurate in this respect, but so what? I am reproducing a few conscious moments, not a fully functional person.
>>
>>
>> But isn't it the case that your brain evolved/learned to interpret and be conscious of these stimuli only because it exists in the context of this world?
>
>
> Yes.
>
>
>
>>>> But in the MGA (or Olympia) we are asked to consider a device which is a conscious AI, and then we are led to suppose that a radically broken version of it works even though it is reduced to playing back a record of its processes. I think the playback of the record fails to produce consciousness because it is not counterfactually correct and hence is not actually realizing the states of the AI -- those states essentially include that some branches were not taken. Maudlin's invention of Klara is intended to overcome this objection and provide a counterfactually correct but physically inert sequence of states. But I think Maudlin underestimates the problem of context, and the additions necessary for counterfactual correctness will extend far beyond "the brain" and entail a "world". These additions come for free when we say "Yes" to the doctor replacing part of our brain, because the rest of the world that gave us context is still there. The doctor doesn't remove it.
>>>
>>>
>>> In the "yes doctor" scenario as reported by Russell, it talks only about replacing your brain with an AI program on a computer. It does not mention connecting this to sense organs capable of reproducing all the inputs one normally gets from the world. If this is not clearly specified, I would certainly say 'No' to the doctor. There is little point or future in being a functioning brain without external inputs. As I recall sensory deprivation experiments, subjects rapidly subside into a meaningless cycle of states -- or go mad -- in the absence of sensory stimulation.
>>
>>
>> The question, as posed by Bruno, is whether you will say yes to the doctor replacing part of your brain with a digital device that has the connections to the rest of your brain/body and which implements the same input/output function for those connections. Would that leave your consciousness unchanged?
>
>
> OK. If all the connections and inputs remain intact, and the digital simulation is accurate, I don't see a problem. But I might object if the doctor plans to replace my brain with an abstract computation in Platonia -- because I don't know what such a thing might be, and don't believe it actually exists absent some physical instantiation.
>
> As you see, I believe in physicalism, not in Platonia. And I have not yet seen any argument that might lead me to change my mind.

Then, as the MGA shows that computations do not supervene in real time on the physical, as a physicalist you simply have to reject computationalism as a theory of mind.

The thing is, no one is giving arguments to believe one or the other... Bruno only showed that both assumptions cannot be true at the same time; he chose to keep computationalism for the sake of the theory, to see where that leads and how it could solve the mind-body problem. He never asserts that computationalism is true or that physicalism is false. Feel free to pursue the possibility that physicalism is true (or a completely different theory) to resolve that same problem. But if you stay in the physicalist context you can't use computations to explain consciousness, and that's what Maudlin's Klara/Olympia and the MGA thought experiments show.

But I don't agree that they show that, or more accurately I think they show that physicalism requires the context of a physical world. Bruno will say, with some justification, that "physical" isn't defined. But that's because he's a Platonist and assumes definitions must be axiomatic. I think ostensive definitions are more useful -- at least for defining "physical".


Ironically, I agree with you.

Indeed, I *define*, *axiomatically*, the *physical* by the ostensive mode.

The ostensive itself is defined like all indexicals, that is, by the use of Kleene's second recursion theorem, which I sometimes simplify by the little "song": if Dx gives T(xx), then DD gives T(DD).

T(x) = "x points to the moon", so you can take Dx = (xx points to the moon) and DD = (DD points to the moon).

DD believes this defines the moon, but DD fails to see that this defines only the moon he is pointing to. It makes the moon exist physically, but "physical" remains a mode of the universal machine/observer.
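
To make the song concrete, here is a minimal, purely illustrative sketch in Python, treating programs as source strings (the helper names T, apply_prog and D are mine, not standard notation):

    # Kleene's second recursion theorem, in the toy form of the "song":
    # if Dx gives T(xx), then DD gives T(DD).

    def T(x):
        # T(x) = "x points to the moon"
        return x + " points to the moon"

    def apply_prog(prog, arg):
        # "Run" a program given as Python source text on an argument.
        return eval(prog)(arg)

    # D is a program (a source string) chosen so that Dx gives T(xx),
    # where "xx" is x described as applied to its own source:
    D = 'lambda x: T("(" + x + " applied to " + repr(x) + ")")'

    # Dx gives T(xx):
    print(apply_prog(D, "some_program"))

    # The diagonal step, DD gives T(DD): the printed sentence contains a
    # complete description of the very self-application that produced it.
    print(apply_prog(D, D))

The second print shows the fixed point: the sentence that "points to the moon" refers to itself, which is the sense in which the indexical is obtained by diagonalization.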

Eventually, it is this type of analysis which leads to defining the observable with the indexical []p & <>t, with p sigma_1 (DU-accessible).

No need of physicalism for this. But we need to show that this makes the measure on the relative physical observables higher. The nice first step, already done, is to show that the logic of [2]p = []p & <>t gives a reasonable quantum-logical quantization. By a result of Goldblatt, you get it if you have [2]A -> A, and p -> [2]<2>p on enough propositions p, and some other things. And we get them indeed, on p sigma_1 and true, ... except for the necessitation rule, and, ... well, that might be a problem for classical computationalism, but up to now we get the right quantum tautologies.
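
Spelled out in LaTeX (my transcription; the modality and the two axioms are the ones just cited):

    % The "observable" mode: provability together with consistency,
    % restricted to \Sigma_1 (DU-accessible) propositions p.
    [2]p \;:=\; \Box p \land \Diamond \top

    % Goldblatt-style sufficient conditions for the quantum-logical
    % quantization: a T-like and a B-like axiom on enough propositions p.
    [2]A \to A, \qquad p \to [2]\langle 2 \rangle p

(The relevant result is Goldblatt's 1974 translation of minimal quantum logic into the modal system B, which is where the T-like and B-like axioms come from.)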

Sorry if I am too technical. I hope you remember enough of what I said about modal logic, but you need to grasp more of the logic of self-reference, which imposes one clear modal logic in which you can define the different points of view.

I often agree with you, but, as with Craig, not as if you had given an argument against the fact that comp implies immaterialism; rather, you make a valid point about the physical mode.

Another point: I don't use the axiomatic method because I am a Platonist; I use it because it is a way to avoid any metaphysical baggage, be it physicalist-Aristotelian, immaterialist-Platonist, ...

Bruno


http://iridia.ulb.ac.be/~marchal/


