On 29 March 2015 at 09:03, "Bruce Kellett" <bhkell...@optusnet.com.au> wrote:
>
> meekerdb wrote:
>>
>> On 3/28/2015 11:54 PM, Bruce Kellett wrote:
>>>
>>> meekerdb wrote:
>>>>
>>>> On 3/28/2015 11:02 PM, Bruce Kellett wrote:
>>>>>
>>>>> meekerdb wrote:
>>>>>
>>>>> The calculation written out on paper is a static thing, but the
result of that calculation might still be part of a simulation that
produces consciousness. Though, unless Barbour is right and the actuality
of time can be statically encoded in his 'time capsules' (current memories
of past instances), I was thinking in terms of a sequence of these states
(however calculated).
>>>>
>>>>
>>>> Yes, I agree that the computation should not have to halt (compute a
function) in order to instantiate consciousness; it can just be a sequence
of states.  Written out on paper it can be a sequence of states ordered by
position on the paper.  But that seems absurd, unless you think of it as
consciousness in the context of a world that is also written out on the
paper, such that the writing that is conscious is *conscious of* this
written-out world.
>>>
>>>
>>> My present conscious state includes visual, auditory and tactile inputs
-- these are part of the simulation. But they need only simulate the effect
on my brain states during that moment -- they do not have to simulate the
entire world that gave rise to these inputs. The recreated conscious state
is not counterfactually accurate in this respect, but so what? I am
reproducing a few conscious moments, not a fully functional person.
>>
>>
>> But isn't it the case that your brain evolved/learned to interpret and
be conscious of these stimuli only because it exists in the context of this
world?
>
>
> Yes.
>
>
>
>>>> But in the MGA (or Olympia) we are asked to consider a device which is
a conscious AI and then we are led to suppose a radically broken version of
it works even though it is reduced to playing back a record of its
processes.  I think the playback of the record fails to produce
consciousness because it is not counterfactually correct and hence is not
actually realizing the states of the AI -- those states essentially include
that some branches were not taken. Maudlin's invention of Klara is intended
to overcome this objection and provide a counterfactually correct but
physically inert sequence of states.  But I think Maudlin underestimates
the problem of context and the additions necessary for counterfactual
correctness will extend far beyond "the brain" and entail a "world".  These
additions come for free when we say "Yes" to the doctor replacing part of
our brain because the rest of the world that gave us context is still
there.  The doctor doesn't remove it.
>>>
>>>
>>> The "yes doctor" scenario, as reported by Russell, talks only about
replacing your brain with an AI program on a computer. It does not
mention connecting this to sense organs capable of reproducing all the
inputs one normally gets from the world. If this is not clearly specified,
I would certainly say 'No' to the doctor. There is little point or future
in being a functioning brain without external inputs. As I recall from sensory
deprivation experiments, subjects rapidly subside into a meaningless cycle
of states -- or go mad -- in the absence of sensory stimulation.
>>
>>
>> The question, as posed by Bruno, is whether you will say yes to the
doctor replacing part of your brain with a digital device that has the
connections to the rest of your brain/body and which implements the same
input/output function for those connections.  Would that leave your
consciousness unchanged?
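
To pin down what "implements the same input/output function for those
connections" means, here is a minimal sketch (Python, with toy names of my
own; nothing in the thread specifies any code). The rest of the system sees
only inputs and outputs, so a replacement part is adequate exactly when it
agrees with the original on every signal it can be probed with:

class BiologicalPart:
    def respond(self, signal):
        # Toy stand-in for whatever the original tissue computes.
        return 2.0 * signal + 1.0

class DigitalPart:
    def respond(self, signal):
        # A different realization of the *same* input/output function.
        return signal + signal + 1.0

def indistinguishable(a, b, probes):
    # From outside, two parts are equivalent iff they agree on every
    # probe -- their internal workings never enter into it.
    return all(a.respond(s) == b.respond(s) for s in probes)

print(indistinguishable(BiologicalPart(), DigitalPart(),
                        [0.0, 1.0, -3.5, 42.0]))  # True

Of course the doctor's device must agree on all possible signals, not just
a finite list of probes; that universal agreement is what the thought
experiment stipulates.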
>
>
> OK. If all the connections and inputs remain intact, and the digital
simulation is accurate, I don't see a problem. But I might object if the
doctor plans to replace my brain with an abstract computation in Platonia
-- because I don't know what such a thing might be, and don't believe it
actually exists absent some physical instantiation.
>
> As you see, I believe in physicalism, not in Platonia. And I have not yet
seen any argument that might lead me to change my mind.

Then, as the MGA shows that computations do not supervene in real time on
the physical, as a physicalist you simply have to reject computationalism
as a theory of mind.
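
To make "not counterfactually correct" concrete, here is a minimal sketch
(Python, with made-up names -- only an illustration of the idea, not a
formalization of the MGA). A live computation actually evaluates its
branches, while a replay of its recorded trace reproduces the same states
without ever consulting its inputs, so it diverges on any counterfactual
run:

def live_computation(inputs):
    # A genuine computation: each next state depends on a branch that
    # is actually evaluated against the current input.
    state, trace = 0, []
    for x in inputs:
        state = state + x if x % 2 == 0 else state - x
        trace.append(state)
    return trace

def replay(recorded_trace, inputs):
    # A playback of the recording: the same states appear in the same
    # order, but the inputs are never consulted -- no branch is taken.
    return list(recorded_trace)

inputs = [2, 3, 4]
recording = live_computation(inputs)

# On the recorded run the two are indistinguishable, state by state:
assert replay(recording, inputs) == live_computation(inputs)

# On a counterfactual input the replay silently goes wrong:
print(live_computation([2, 3, 5]))   # [2, -1, -6]
print(replay(recording, [2, 3, 5]))  # still [2, -1, 3]

The replay matches the recorded state sequence exactly, yet no branch is
ever taken inside it; Klara is Maudlin's attempt to put those branches
back while keeping them physically inert.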

The thing is, no one is giving arguments to believe one or the other...
Bruno only showed that both assumptions cannot be true at the same time;
he chose to keep computationalism for the sake of the theory, to find
where that leads and how it could solve the mind-body problem. He never
asserts that computationalism is true or that physicalism is false. Feel
free to pursue the possibility that physicalism is true (or a completely
different theory) to resolve that same problem. But if you stay in the
physicalist context, you cannot use computations to explain consciousness,
and that is what Maudlin's Klara/Olympia and the MGA thought experiments
show.

Quentin

>
> Bruce
>
>
