2015-03-31 7:19 GMT+02:00 meekerdb <meeke...@verizon.net>:

> On 3/30/2015 10:17 PM, Bruce Kellett wrote:
>
>> meekerdb wrote:
>>
>>> On 3/28/2015 11:36 PM, Bruce Kellett wrote:
>>>
>>>>
>>>> Bruno has acknowledged that this is not what the MGA shows. The MGA
>>>> simply shows that his version of computationalism is incompatible with
>>>> physical supervenience. This can hardly be surprising, since it is
>>>> explicitly built into computationalism that physicalism is false.
>>>>
>>>
>>> That's not my understanding.  Bruno's argument starts by assuming that
>>> a part, or all, of your brain could be replaced by a digital AI with the
>>> same I/O, and that, if this were done at a suitably low level of detail
>>> (probably neuronal), your conscious inner life would be essentially the
>>> same.  That seems to me to be assuming physicalism as the basis of
>>> consciousness.
>>>
>>
>> This contradicts what you say below about Bruno assuming that only
>> certain special processes instantiate consciousness.
>>
>
> He's trying a reductio.  So he assumes physicalism - that some physical
> processes produce consciousness (not just any physical process) - and tries
> to reach the absurdity that the physical process can be a do-nothing
> process.
>
>
>> I think there is an ambiguity, or uncertainty, about just what the
>> program that is to replace part or all of your brain does. If the program
>> is just a simulation of the actual physical brain, neuron by neuron,
>> synapse by synapse, so that the physical laws that govern the behaviour of
>> these brain elements are instantiated by the computer, and act on the
>> initial data given by the state of the brain when the program is started,
>> then there will be no essential difference between the program and the
>> brain it replaces. In this case you might say "Yes, doctor", with some
>> confidence. The necessary programming would presumably be well understood
>> since the brain is deterministic at the level with which we are concerned,
>> and the physical/chemical laws can be determined. If the initial state can
>> be ascertained with sufficient precision without killing you, then the
>> simulated computer brain substitute acts just like the original, so should
>> give no problems.
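>>
>> To pin down what such an element-by-element simulation amounts to, here
>> is a minimal sketch in Python, assuming (purely for illustration) a
>> leaky integrate-and-fire neuron model; every name here is hypothetical,
>> not any actual brain-emulation API:
>>
>>     def step(v, i_syn, dt=1e-4, tau=0.02, v_reset=-0.07, v_thresh=-0.05):
>>         """Advance one neuron's membrane potential by one time step,
>>         applying the same physical update law to every neuron."""
>>         v = v + dt * (-(v - v_reset) / tau + i_syn)
>>         spiked = v >= v_thresh
>>         return (v_reset if spiked else v), spiked
>>
>>     def simulate(initial_state, inputs, n_steps):
>>         """Run the physics forward from the measured initial state."""
>>         state = dict(initial_state)      # neuron id -> membrane potential
>>         for t in range(n_steps):
>>             for nid in list(state):
>>                 state[nid], _ = step(state[nid], inputs(nid, t))
>>         return state
>>
>> The point is only that the computer instantiates the physical update law
>> and the measured initial data; nothing in the loop refers to
>> consciousness.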
>>
>> This understanding is based on the idea that consciousness supervenes on
>> the processes and states of the physical brain. These have been replaced by
>> equivalent physical processes, so consciousness should remain intact. There
>> is no appeal to computationalism here.
>>
>
> Sure there is; it's the requirement that the computer compute the
> equivalent physical processes.  They are equivalent in the sense of
> producing the same sequence of states (at whatever level they are
> simulated).
>
>> The simulating computer has to perform many detailed calculations to
>> carry through the operation of known physical laws on the initial data, but
>> I don't think anyone is saying that consciousness supervenes on such
>> calculations.
>>
>
> I think they are.  In fact didn't you say so above: "...then the simulated
> computer brain substitute acts just like the original, so should give no
> problems."  Are you making some distinction between simulating the brain
> and simulating the physics of the brain?
>
>
>> The other approach is to assume that the computer used to replace your
>> brain is running a true AI program. It is not simulating the physical
>> processes piece by piece, but running some black box program that has been
>> shown to reproduce known brain outputs for some range of suitable inputs.
>> The program is presumably supposed to implement the universal TM
>> computations upon which consciousness supervenes independently of the
>> underlying hardware/wetware. If this is the model you have in mind, then
>> the computationalist model directly contradicts physical supervenience,
>> right from the outset.
>>
>
> No, as I understand it, Bruno is assuming the doctor replaces all or part
> of your brain with a digital device (or even an analog one, so long as its
> function doesn't depend on infinite precision) that computes the same I/O
> function at its interface with the rest of you.
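>
> To make "the same I/O function at the interface" concrete, here is a
> minimal Python sketch; the type alias and function names are my own
> illustration, not anything from the thread:
>
>     from typing import Callable, Tuple
>
>     # The interface: afferent signals in, efferent signals out.
>     BrainPart = Callable[[Tuple[float, ...]], Tuple[float, ...]]
>
>     def io_equivalent(original: BrainPart, replacement: BrainPart,
>                       test_inputs) -> bool:
>         """The doctor's claim: for every input the rest of you can
>         present at the interface, the replacement returns the same
>         output."""
>         return all(original(x) == replacement(x) for x in test_inputs)
>
> Of course, any finite test set only samples the interface; whether the
> equivalence holds counterfactually, for inputs never tested, is exactly
> the issue raised below.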
>
>
>> Now, I think the interesting question to ask is: "Given these two
>> different implementations of the brain replacing program, would you have
>> equal confidence in both possibilities?"
>>
>> I think the answer would, in general, be "No!". The program that assumes
>> physical supervenience can be tested element by element, so that once it
>> has been shown to follow the known chemical and physical laws and to
>> reproduce accurately the structure of your actual brain, it will be
>> counterfactually correct and can be trusted into the future.
>>
>> The alternative, computationalist model cannot be tested in this way,
>> basically because it is necessarily holistic. Consciousness is assumed to
>> supervene on a particular type of computation, but is your computationalist
>> program the same as mine? How do we know? I do not think that we could ever
>> guarantee that such an AI device was counterfactually correct for /your/
>> brain. Many artificial learning programs, based on neural nets or the like,
>> can be trained to perform with great reproducibility on the training data
>> set, but fail miserably once one goes outside this data set. They are not
>> counterfactually correct, and I do not know how you could ever ensure the
>> necessary counterfactual correctness, even if you did imagine that you knew
>> precisely the sort of computation upon which consciousness supervened.
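>>
>> A toy illustration of this failure mode (hypothetical Python, just to
>> pin down what "not counterfactually correct" means here):
>>
>>     # A learner that memorizes its training I/O pairs reproduces them
>>     # perfectly, yet has no defined behaviour off the training set.
>>     def train(pairs):
>>         table = dict(pairs)              # input -> recorded output
>>         def model(x):
>>             if x in table:
>>                 return table[x]          # factually correct
>>             return None                  # counterfactually silent
>>         return model
>>
>>     true_f = lambda x: 2 * x
>>     model = train([(x, true_f(x)) for x in range(10)])
>>     assert model(3) == 6         # agrees on the training set
>>     assert model(100) is None    # fails on a counterfactual input
>>
>> A real neural net would return *something* for the input 100, but
>> nothing guarantees it is what your brain would have done.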
>>
>> So I would reject the computationalist program right at the start -- I
>> would not say "Yes, doctor" to that sort of AI program.
>>
>
> Nor would I or Bruno.  But what about the other kind of simulation?  It is
> still reducible to a program running on a UTM - except for interaction with
> the world outside the brain.  As I understand it, Bruno finesses this
> problem by (1) saying the subject is conscious while dreaming, so the
> external world isn't necessary to consciousness, or (2) saying that if it
> is necessary, whatever part of it is necessary can be added to the UTM
> simulation.
>
> Once you have consciousness instantiated by a UTM program, then he and
> Maudlin argue that the computation can be inert.
>

The physical instantiation can... because under physicalism that is the
only thing that is real; a computation is merely an abstract
representation of what is physically going on. That is why the Movie
Graph machine and Olympia, which are not counterfactually correct at all
with respect to the "general purpose" consciousness program (Olympia is
factually correct only for the particular run it is replaying: if the
inputs are disrupted, it simply continues the same flow), nevertheless
stand in token-to-token correct correspondence with the exact run of the
thought experiment. Under physicalism these instantiations must be taken
as "correct" instantiations (physical supervenience is just the tool
that lets one exhibit the token correspondence). In the end, under
physicalism, computations are not real; matter is what is real, and
matter under some circumstances can be modelled as if it undergoes a
computation...
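
To put the token-to-token point in sketch form (hypothetical Python;
nothing here comes from Maudlin's paper):

    # An Olympia-style replay: it stores the one recorded run of states
    # and plays it back, ignoring its nominal inputs entirely.
    def olympia(recorded_run):
        def machine(_input_stream):
            for state in recorded_run:   # same token sequence, always
                yield state
        return machine

    # A counterfactually correct program computes its next state from
    # the current state and the actual input.
    def genuine(transition, s0):
        def machine(input_stream):
            s = s0
            for x in input_stream:
                s = transition(s, x)
                yield s
        return machine

On the run that was actually recorded, the two machines produce
token-for-token the same sequence of states; they differ only on runs
that never happen, and under physicalism that difference is not
physically instantiated anywhere.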

Quentin


>
> I think that's his argument, but Bruno can correct me if I'm wrong.
>
> Brent
>
>
>
>> Bruce
>>
>>
>>  The MGA is, therefore, largely irrelevant, because it does not prove
>>>> anything that we didn't already know. It certainly does not show that
>>>> consciousness is an abstract process in Platonia, independent of any
>>>> physical process.
>>>>
>>>
>>> Bruno assumes that only some special processes instantiate consciousness,
>>> and that these are characterized by being computations of some kind, i.e. a
>>> sequence of states that could be realized by a program running on a
>>> Universal Turing Machine (not necessarily halting).  Since the
>>> consciousness computation defined this way is an abstract mathematical
>>> process in Platonia, it is equivalent to assuming consciousness is
>>> instantiated by an abstract mathematical process.
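>>>
>>> As a sketch of what "a sequence of states realized by a program" means
>>> (illustrative Python only; the example function is made up):
>>>
>>>     # The abstract object is the state sequence itself; any hardware
>>>     # that enumerates it realizes the same computation.
>>>     def run(transition, state, steps):
>>>         for _ in range(steps):
>>>             yield state
>>>             state = transition(state)
>>>
>>>     collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
>>>     assert list(run(collatz, 6, 5)) == [6, 3, 10, 5, 16]
>>>
>>> The sequence [6, 3, 10, 5, 16] is the same whether it is produced by
>>> silicon, neurons, or pencil and paper; that is the sense in which the
>>> computation lives in Platonia.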
>>>
>>> Brent
>>>
>>
>>



-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)
