On 30 September 2013 11:36, Pierz <pier...@gmail.com> wrote:
> If I might just butt in (said the barman)...
>
> It seems to me that Craig's insistence that "nothing is Turing emulable,
> only the measurements are" expresses a different ontological assumption from
> the one that computationalists take for granted. It's evident that if we
> make a flight simulator, we will never leave the ground, regardless of the
> verisimilitude of the simulation. So why would a simulated consciousness be
> expected to actually be conscious? Because of different ontological
> assumptions about matter and consciousness. Science has given up on the
> notion of consciousness as having "being" in the same way that matter is
> assumed to. Because consciousness has no place in an objective description
> of the world (i.e., one which is defined purely in terms of the measurable),
> contemporary scientific thinking reduces consciousness to those apparent
> behavioural outputs of consciousness which *can* be measured. This is
> functionalism. Because we can't measure the presence or absence of
> awareness, functionalism gives up on the attempt and presents the functional
> outputs as the only things that are "really real". Hence we get the Turing
> test. If we can't tell the difference, the simulator is no longer a
> simulator: it *is* the thing simulated. This conclusion is shored up by the
> apparently water-tight argument that the brain is made of atoms and
> molecules which are Turing emulable (even if it would take the lifetime of
> the universe to simulate the behaviour of a single protein in a complex
> cellular environment; oh well, we can ignore quantum effects because it's
> too hot in there anyway and just fast-forward to the neuronal level,
> right?). It's
> also supported by the objectifying mental habit of people conditioned
> through years of scientific training. It becomes so natural to step into the
> god-level third person perspective that the elision of private experience
> starts to seem like a small matter, and a step that one has no choice but to
> make.
>
> Of course, the alternative does present problems of its own! Craig
> frequently seems to slip into a kind of naturalism that would have it that
> brains possess soft, non-mechanical sense because they are soft and
> non-mechanical-seeming. They can't be machines because they don't have
> cables and transistors. "Wetware" can't possibly be hardware. A lot of his
> arguments seem to be along those lines: the refusal to accept abstractions
> which others accept, as Telmo aptly puts it. He claims to "solve the hard
> problem of consciousness" but the solution involves manoeuvres like "putting
> the whole universe into the explanatory gap" between objective and
> subjective: hardly illuminating! I get irritated by neologisms like PIP
> (whatever that stands for now - was "multi-sense realism" not obscure
> enough?), which to me seem to be about trying to add substance to vague and
> poetic intuitions about reality by attaching big, intellectual-sounding
> labels to them.
>
> However, the same grain of sand that seems to get in Craig's eye does get in
> mine too. It's conceivable that some future incarnation of "cleverbot"
> (cleverbot.com, in case you don't know it) could reach a point of passing a
> Turing test through a combination of a vast repertoire of recorded
> conversation and some clever linguistic parsing to do a better job of
> keeping track of the semantic thread of the conversation (where the program
> currently falls down). But in this case, what goes on inside the machine
> seems to make all the difference, though the functionalists are committed to
> rejecting that position. Cleverly simulated conversation just doesn't seem
> to be real conversation if what is going on behind the scenes is just a
> bunch of rules for pulling lines out of a database. It's Craig's clever
> garbage lids. We can make a doll that screams and recoils from damaging
> inputs and learns to avoid them, but the functional outputs of pain are not
> the experience of pain. Imagine a being neurologically incapable of pain.
> Like "Mary", the hypothetical woman who lives her life seeing the world
> through a black and white monitor and cannot imagine colour qualia until she
> is released, such an entity could not begin to comprehend the meaning of
> screams of pain - beyond possibly recognising a self-protective function.
> The elision of qualia from functional theories of mind has potentially very
> serious ethical consequences - for only a subject with access to those
> qualia truly understands them. Understanding the human condition as it really
> is involves inhabiting human qualia. Otherwise you end up with Dr Mengele:
> humans as objects.
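>
> To make that mechanism concrete, here is a minimal sketch in Python
> (entirely hypothetical: a toy corpus and a crude similarity match of my
> own invention, not a claim about Cleverbot's actual internals) of what
> "pulling lines out of a database" amounts to:
>
>     # Toy retrieval bot: it "converses" by returning the recorded reply
>     # whose stored prompt best matches the user's input. Hypothetical
>     # sketch only; nothing here reflects Cleverbot's real implementation.
>     from difflib import SequenceMatcher
>
>     # A tiny invented "repertoire of recorded conversation".
>     CORPUS = [
>         ("hello", "Hi there. How are you today?"),
>         ("how are you", "I'm fine, thanks. And you?"),
>         ("what is consciousness", "A hard question. What do you think?"),
>         ("goodbye", "Bye! Nice talking to you."),
>     ]
>
>     def reply(user_line):
>         """Return the stored reply whose prompt best matches the input."""
>         def score(pair):
>             return SequenceMatcher(None, user_line.lower(), pair[0]).ratio()
>         prompt, canned_reply = max(CORPUS, key=score)
>         return canned_reply
>
>     print(reply("Hello!"))           # -> "Hi there. How are you today?"
>     print(reply("So, how are you?")) # -> "I'm fine, thanks. And you?"
>
> Scale the corpus up by orders of magnitude and make the matching cleverer
> and the mimicry improves, but the mechanism remains a lookup.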
>
> I've read Dennett's arguments against the "qualophiles" and I find them
> singularly unconvincing - though to say why is another long post. Dennett
> says we only "seem" to have qualia, but what can "seem" possibly mean in the
> absence of qualia? An illusion of a quality is an oxymoron, for the quality
> *is* only the way it seems. The comp assumption that computations have
> qualia hidden inside them is not much of an answer either in my view. Why
> not grant the qualia equal ontological status to the computations
> themselves, if they are part and parcel? And if they cannot be known except
> from the inside, and if the computation's result can't be known in advance,
> why not say that the "logic" of the qualitative experience is reflected in
> the mathematics as much as the other way round?
>
> Well, enough. I don't have the answer. All I'm prepared to say is that we are
> still confronted by mystery. "PIP" seems to me to be more impressionistic
> than theoretical. Comp still seems to struggle with qualia and zombies. I
> suspect we still await the unifying perspective.

Have you read this paper by David Chalmers?

http://consc.net/papers/qualia.html

It assumes, for the sake of argument, that it is possible to make a
device that replicates the externally observable behaviour of a brain
component while lacking qualia, and then shows that this assumption
leads to absurdity.


-- 
Stathis Papaioannou
