--- On Mon, 3/8/10, Stathis Papaioannou <stath...@gmail.com> wrote:
> In the original fading qualia thought experiment the artificial neurons could 
> be considered black boxes, the consciousness status of which is unknown. The 
> conclusion is that if the artificial neurons lack consciousness, then the 
> brain would be partly zombified, which is absurd.

That's not the argument Chalmers made, and indeed he couldn't have, since he 
believes zombies are possible; he instead talks about fading qualia.

If you start out believing that computer zombies are NOT possible, the original 
thought experiment is moot; you already believe the conclusion.   His argument 
is aimed at dualists, who are NOT computationalists to start out.

Since partial consciousness is possible, which he didn't take into account, his 
argument _fails_; a dualist who does believe zombies are possible should have 
no problem believing that partial zombies are.  So dualists don't have to be 
computationalists after all.

> I think this holds *whatever* is in the black boxes: computers, biological 
> tissue, a demon pulling strings or nothing.

Partial consciousness is possible and again ruins any such argument.  If you 
don't already believe that consciousness can be based on "whatever" (e.g. 
"nothing"), you have no reason to accept the conclusion.

> whatever is going on inside the putative zombie's head, if it reproduces the 
> I/O behaviour of a human, it will have the mind of a human.

That is behaviorism, not computationalism, and I certainly don't believe it.  I 
wouldn't say that a computer that uses a huge lookup table algorithm would be 
conscious.
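
To be concrete about what "lookup table" means here (my own toy sketch in 
Python, purely for illustration; the table and names are hypothetical): every 
possible input history is mapped straight to an output, so the I/O behaviour 
can match any program while internally there is nothing but retrieval.

    # Toy illustration of a giant lookup-table responder.
    # Keys are entire conversation histories; values are canned replies.
    # A real table would need an entry for every possible bounded history --
    # astronomically large, but finite.
    LOOKUP_TABLE = {
        ("Hello",): "Hi there.",
        ("Hello", "Hi there.", "How are you?"): "Fine, thanks.",
        # ... one entry per possible history ...
    }

    def lookup_reply(history):
        """No computation to speak of: just retrieve the reply for this history."""
        return LOOKUP_TABLE.get(tuple(history), "I don't follow.")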

> The requirement that a computer be able to handle the counterfactuals in 
> order to be conscious seems to have been brought in to make computationalists 
> feel better about computationalism.

Not at all.  It was always part of the notion of computation.  Would you buy a 
PC that could only play back one fixed movie?  It must handle all possible 
inputs in a reliable manner.
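
As a rough sketch of the contrast (again my own illustration, in hypothetical 
Python): a device that merely replays one recorded trace is defined only for 
the run that actually happened, whereas a computation is defined for every 
input it could receive, including the counterfactual ones.

    # A "movie player": ignores its input and replays one fixed recorded run.
    def replay_fixed_trace(_input_stream):
        for frame in ["frame0", "frame1", "frame2"]:   # the single recorded run
            yield frame

    # A computation: its behaviour is specified for every possible input,
    # including inputs it never actually receives.
    def add_one(x: int) -> int:
        return x + 1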

> Brains are all probabilistic in that disaster could at any point befall them 
> causing them to deviate widely from normal behaviour

It is not a problem; it just seems like one at first glance.  Such cases can be 
treated as input to the formal system: for some inputs, the device halts or 
acts differently.  Hence my talk of "derailable computations" in my MCI paper.
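
A toy way to picture a derailable computation (my own illustration, not taken 
from the MCI paper): treat the possible disaster as just another input, and for 
some values of that input the computation halts or deviates.

    # The possible "disaster" is modelled as an extra input.
    # For most inputs the step proceeds normally; for some it derails.
    def derailable_step(state, disaster=False):
        if disaster:
            return None      # halts, or switches to some quite different behaviour
        return state + 1     # the normal deterministic step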

> or else prevent them from deviating at all from a rigidly determined pathway

If that were done, that would change what computation is being implemented.  
Depending on how it was done, it might or might not affect consciousness.  We 
can't do such an experiment.

--- On Tue, 3/9/10, Stathis Papaioannou <stath...@gmail.com> wrote:
> Suppose box A contains a probabilistic mechanism that displays the right I/O 
> behaviour 99% of the time. Would the consciousness of the system be perfectly 
> normal until the box misbehaved ... ?

I'd expect it to be.  As above, I'd treat it as a box with input.

Now, as far as we know, there really is no such thing as true randomness.  It's 
all down to initial conditions (which are certainly to be treated as input) or 
to quantum splitting (which is again deterministic).  I don't believe in true 
randomness.

However, if true randomness is possible, then you'd have the same problem with 
Platonia.  In addition to having all of the deterministic Turing machines, 
you'd have all of the probabilistic Turing machines.  It is not an issue that 
bears on physicalism.
