On 30 Sep 2013, at 03:36, Pierz wrote:

If I might just butt in (said the barman)...

It seems to me that Craig's insistence that "nothing is Turing emulable, only the measurements are" expresses a different ontological assumption from the one that computationalists take for granted. It's evident that if we make a flight simulator, we will never leave the ground, regardless of the verisimilitude of the simulation. So why would a simulated consciousness be expected to actually be conscious? Because of different ontological assumptions about matter and consciousness. Science has given up on the notion of consciousness as having "being" in the same way that matter is assumed to. Because consciousness has no place in an objective description of the world (i.e., one which is defined purely in terms of the measurable), contemporary scientific thinking reduces consciousness to those apparent behavioural outputs of consciousness which *can* be measured. This is functionalism. Because we can't measure the presence or absence of awareness, functionalism gives up on the attempt and presents the functional outputs as the only things that are "really real". Hence we get the Turing test. If we can't tell the difference, the simulator is no longer a simulator: it *is* the thing simulated.

This conclusion is shored up by the apparently water-tight argument that the brain is made of atoms and molecules which are Turing emulable (even if it would take the lifetime of the universe to simulate the behaviour of a protein in a complex cellular environment - but oh well, we can ignore quantum effects because it's too hot in there anyway, and just fast-forward to the neuronal level, right?). It's also supported by the objectifying mental habit of people conditioned through years of scientific training. It becomes so natural to step into the god-level third-person perspective that the elision of private experience starts to seem like a small matter, and a step that one has no choice but to make.

Of course, the alternative does present problems of its own! Craig frequently seems to slip into a kind of naturalism that would have it that brains possess soft, non-mechanical sense because they are soft and non-mechanical seeming. They can't be machines because they don't have cables and transistors. "Wetware" can't possibly be hardware. A lot of his arguments seem to be along those lines - the refusal to accept abstractions which others accept, as Telmo aptly puts it. He claims to "solve the hard problem of consciousness", but the solution involves manoeuvres like "putting the whole universe into the explanatory gap" between objective and subjective: hardly illuminating! I get irritated by neologisms like PIP (whatever that stands for now - was "multi-sense realism" not obscure enough?), which to me seem to be about trying to add substance to vague and poetic intuitions about reality by attaching big, intellectual-sounding labels to them.

However, the same grain of sand that seems to get in Craig's eye does get in mine too. It's conceivable that some future incarnation of "cleverbot" (cleverbot.com, in case you don't know it) could reach a point of passing a Turing test through a combination of a vast repertoire of recorded conversation and some clever linguistic parsing to do a better job of keeping track of the semantic thread of the conversation (where the program currently falls down). But in this case, what goes on inside the machine seems to make all the difference, though the functionalists are committed to rejecting that position. Cleverly simulated conversation just doesn't seem to be real conversation if what is going on behind the scenes is just a bunch of rules for pulling lines out of a database. It's Craig's clever garbage lids. We can make a doll that screams and recoils from damaging inputs and learns to avoid them, but the functional outputs of pain are not the experience of pain. Imagine a being neurologically incapable of pain. Like "Mary", the hypothetical woman who lives her life seeing the world through a black and white monitor and cannot imagine colour qualia until she is released, such an entity could not begin to comprehend the meaning of screams of pain - beyond possibly recognising their self-protective function. The elision of qualia from functional theories of mind has potentially very serious ethical consequences - for only a subject with access to those qualia can truly understand them. Understanding the human condition as it really is involves inhabiting human qualia. Otherwise you end up with Dr Mengele - humans as objects.
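To make "pulling lines out of a database" concrete, here is a deliberately crude sketch of such a retrieval bot. The corpus, the word-overlap scoring and the names are all invented for illustration; this is not Cleverbot's actual mechanism, just the general shape of the trick:

    # A crude retrieval bot: no understanding, just rules for
    # pulling canned lines out of a database. Everything here is
    # invented for illustration.
    CORPUS = [
        ("how are you", "I'm fine, thanks. And you?"),
        ("what is your name", "People call me Cleverbot."),
        ("do you feel pain", "That hurts just to think about."),
    ]

    def tokenize(text):
        return set(text.lower().split())

    def reply(utterance):
        """Return the stored response whose prompt shares the most
        words with the input - bare word overlap standing in for the
        'clever linguistic parsing' a real system would need."""
        words = tokenize(utterance)
        best = max(CORPUS, key=lambda pair: len(words & tokenize(pair[0])))
        return best[1]

    print(reply("so how are you today"))  # -> "I'm fine, thanks. And you?"

However large the corpus grows and however clever the matching rule becomes, nothing in that loop is even a candidate for feeling anything - which is exactly the intuition at stake.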

I've read Dennett's arguments against the "qualophiles" and I find them singularly unconvincing - though to say why is another long post. Dennett says we only "seem" to have qualia, but what can "seem" possibly mean in the absence of qualia? An illusion of a quality is an oxymoron, for the quality *is* only the way it seems. The comp assumption that computations have qualia hidden inside them is not much of an answer either, in my view. Why not grant the qualia equal ontological status to the computations themselves, if they are part and parcel? And if they cannot be known except from the inside, and if the computation's result can't be known in advance, why not say that the "logic" of the qualitative experience is reflected in the mathematics as much as the other way round?

Well, enough. I don't have the answer. All I'm prepared to say is that we are still confronted by mystery. "PIP" seems to me to be more impressionistic than theoretical. Comp still seems to struggle with qualia and zombies. I suspect we still await the unifying perspective.

Dennett just puts the mind under the rug, as he has to, given that he believes in both comp and materialism.

Now, I do think that the intensional variants of the self-reference logics do provide a genuine theory of qualia, explaining them as knowable truths with a special semantics (related to perception fields and imaging), and explaining why they seem irreducibly unexplainable in third-person terms.
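In short, and very roughly, the variants are these (a sketch only: B is provability, D is consistency, i.e. Dt = ~B~t with t a fixed tautology, and p an arithmetical proposition):

    p              truth
    Bp             provability, or belief   (the logics G and G*)
    Bp & p         knowledge                (S4Grz)
    Bp & Dt        observation              (Z and Z*)
    Bp & Dt & p    sensation                (X and X*)

Roughly, the qualia show up in the variants containing "& p" and "& Dt", where what the machine can prove about herself and what is true about her come apart (the G/G* splitting).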

For me, this is enough to believe that RA and PA are conscious, and already have some qualia.

Computer science is sophisticated enough to explain why a self-introspecting machine can understand why there are things about herself which she cannot understand, yet can memorize, and even describe relative to other entities having similar experiences.
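The canonical instance, in the notation above: the machine proves Dt -> ~B(Dt), i.e. "if I am consistent, then I cannot prove that I am consistent" (Gödel's second incompleteness theorem, formalized). So her own consistency, assuming she has it, is a truth about herself which she can state, memorize, and attribute to similar machines, but never establish.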

Consciousness is NOT Turing emulable, as it is a (self-)selection on a non-computable domain, with, as the price of unification, the obligation to recover physics from number theology.

Bruno






On Thursday, September 26, 2013 8:17:04 PM UTC+10, telmo_menezes wrote:
Hi Craig (and all),

Now that I have a better understanding of your ideas, I would like to
confront you with a thought experiment. Some of the stuff you say
looks completely esoteric to me, so I imagine there are three
possibilities: either you are significantly more intelligent than me
or you're a bit crazy, or both. I'm not joking; I don't know.

But I would like to focus on sensory participation as the fundamental
stuff of reality and your claim that strong AI is impossible because
the machines we build are just Frankensteins, in a sense. If I
understand correctly, you still believe these machines have sensory
participation just because they exist, but not in the sense that they
could emulate our human experiences. They have the sensory
participation level of the stuff they're made of and nothing else.
Right?

So let's talk about seeds.

We now know how a human being grows from a seed that we pretty much
understand. We might not be able to model all the complexity involved
in networks of gene expression, protein folding and so on, but we
understand the building blocks. We understand them to a point where we
can actually engineer the outcome to a degree. It is now 2013 and we
are, in a sense, living in the future.

So we can now take a fertilised egg and tweak it somehow. When done
successfully, a human being will grow out of it. Doing this with human
eggs is considered unethical, but I believe it is technically
possible. So a human being grows out of this egg. Is he/she normal?

What if someone actually designs the entire DNA string and grows a
human being out of it? Still normal?

What if we simulate the growth of the organism from a string of
virtual DNA and then just assemble the outcome at some stage? Still
normal?

What if we now do away with DNA altogether and use some other
Turing-complete self-modifying system?
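(For concreteness: one simple stand-in for such a system is an elementary cellular automaton like Rule 110, which is known to be Turing complete - the rule itself is fixed rather than self-modifying, but being universal it can host self-modifying processes. A toy sketch, nothing more:

    # Rule 110: a Turing-complete rewriting system far simpler than DNA.
    RULE = 110

    def step(cells):
        """One synchronous update of a row of 0/1 cells, wrapping at
        the edges; each cell's new value is the rule bit indexed by
        its three-cell neighbourhood."""
        n = len(cells)
        return [(RULE >> ((cells[(i - 1) % n] << 2)
                          | (cells[i] << 1)
                          | cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 40 + [1] + [0] * 40  # one live cell in the middle
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)

)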

What if we never build the outcome but just let it live inside a
simulation? We can even visit this simulation with appropriate
hardware: http://www.oculusvr.com/. What now?

In your view, at what point does this break? And why?

Best,
Telmo.


http://iridia.ulb.ac.be/~marchal/



