On 8 March 2010 16:46, Jack Mallah <jackmal...@yahoo.com> wrote:

> --- On Fri, 3/5/10, Stathis Papaioannou <stath...@gmail.com> wrote:
>> If the inputs to the remaining brain tissue are the same as they would have 
>> been normally then effectively you have replaced the missing parts with a 
>> magical processor, and I would say that the thought experiment shows that 
>> the consciousness must be replicated in this magical processor.
>
> No, that's wrong. Having the right inputs could be due to luck (which is 
> conceptually the cleanest way), or it could be due to pre-recording data from 
> a previous simulation.  The only consciousness present is the partial one in 
> the remaining brain.

In the original fading qualia thought experiment the artificial
neurons could be considered black boxes, the consciousness status of
which is unknown. The conclusion is that if the artificial neurons
lack consciousness, then the brain would be partly zombified, which is
absurd. I think this holds *whatever* is in the black boxes:
computers, biological tissue, a demon pulling strings or nothing. It
is of course extremely unlikely that the rest of the brain would
behave normally if the artificial neurons were in fact empty boxes,
but if it did, then intelligent behaviour would be normal and
consciousness would also be normal. Even if the brain were completely
removed and, miraculously, the empty-headed body carried on normally,
passing the Turing test and so on, then it would be conscious. This is
simply another way of saying that philosophical zombies are
impossible: whatever is going on inside the putative zombie's head, if
it reproduces the I/O behaviour of a human, it will have the mind of a
human.

The requirement that a computer be able to handle the counterfactuals
in order to be conscious seems to have been brought in to make
computationalists feel better about computationalism. Certainly, a
computer that behaves randomly or rigidly follows one pathway is not a
very useful computer, but why should that render any computations it
does correctly perform invalid or, if still valid as computations,
incapable of giving rise to consciousness? Brains are in any case
probabilistic: disaster could at any point befall them, causing them
to deviate widely from normal behaviour, or else circumstances could
hold them to a single rigidly determined pathway, and I don't see how
in either case their consciousness could possibly be affected as a
result.

>> computationalism is only a subset of functionalism.
>
> I used to think so but the terms don't quite mean what they sound like they 
> should. It's a common misconception that "functionalism" means 
> "computationalism" generalized to include analog and noncomputable systems.
>
> "Functionalism" as philosophers use it focuses on input and output.  It holds 
> that any system which behaves the same in terms of i/o and which acts the 
> same in terms of memory effects has the same consciousness.  There are 
> different ways to make this more precise, and I believe that computationalism 
> is one way, but it is not the only way.  For example, some functionalists 
> would claim that a 'swampman' who spontaneously formed in a swamp due to 
> random thermal motion of atoms, but who is physically identical to a human 
> and coincidentally speaks perfect English, would not be conscious because he 
> didn't have the right inputs.  I obviously reject that; 'swampman' would be a 
> normal human.
>
> "Computationalism" doesn't necessarily mean only digital computations, and it 
> can include super-Turing machines that perform infinite steps in finite time. 
>  The main characteristic of computationalism is its identification of 
> consciousness with systems that causally solve initial-value math problems 
> given the right mapping from system to formal states.

It's perhaps just a matter of definition, but I would have thought
that a requirement for hypercomputation was not compatible with
computationalism, though it could potentially still come under
functionalism.


-- 
Stathis Papaioannou
