> On 12 Jun 2020, at 20:26, Jason Resch <jasonre...@gmail.com> wrote:
> 
> 
> 
> On Thu, Jun 11, 2020 at 11:03 AM Bruno Marchal <marc...@ulb.ac.be> wrote:
> 
>> On 9 Jun 2020, at 19:08, Jason Resch <jasonre...@gmail.com> wrote:
>> 
>> For the present discussion/question, I want to ignore the testable 
>> implications of computationalism for physical law, and instead focus on 
>> the following idea:
>> 
>> “How can we know if a robot is conscious?”
> 
> That question is very different from “is functionalism/computationalism 
> unfalsifiable?”.
> 
> Note that in my older paper, I relate computationalism to Putnam’s ambiguous 
> functionalism, by defining computationalism as the assertion that there 
> exists a level of description of my body/brain such that I survive (my 
> consciousness remains relatively invariant) when a digital machine 
> (supposedly physically implemented) replaces my body/brain.
> 
> 
> 
>> 
>> Let's say there are two brains, one biological and one an exact 
>> computational emulation, meaning exact functional equivalence.
> 
> I guess you mean “for all possible inputs”.
> 
> 
> 
> 
>> Then let's say we can exactly control sensory input and perfectly monitor 
>> motor control outputs between the two brains.
>> 
>> Given that computationalism implies functional equivalence, identical 
>> inputs yield identical internal behavior (nerve activations, etc.) and 
>> identical outputs, in terms of muscle movements, facial expressions, and 
>> speech.
>> 
>> If we stimulate nerves in the person's back to cause pain, and ask them both 
>> to describe the pain, both will speak identical sentences. Both will say it 
>> hurts when asked, and if asked to write a paragraph describing the pain, 
>> will provide identical accounts.
>> 
>> Does the definition of functional equivalence mean that any objective, 
>> scientific third-person analysis or test is doomed to fail to find any 
>> distinction in behaviors, and thus necessarily fails to disprove 
>> consciousness in the functionally equivalent robot mind?
> 
> With computationalism (and perhaps without it), we cannot prove that 
> anything is conscious. We can know our own consciousness, but we still 
> cannot justify it in any public, third-person communicable way.
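> 
> To see why every third-person behavioural test must fail here: such a test 
> can only compare input/output traces, and functional equivalence guarantees 
> that the traces match. A minimal illustrative sketch in Python (the “brains” 
> and the stimuli are hypothetical stand-ins, not an actual protocol):
> 
>     # Two black boxes: a "biological" brain and its exact functional
>     # emulation. By hypothesis they compute the same input->output function.
>     def biological_brain(stimulus: str) -> str:
>         return f"report: {stimulus} hurts"   # stand-in for speech/behaviour
> 
>     def emulated_brain(stimulus: str) -> str:
>         return f"report: {stimulus} hurts"   # same function, other substrate
> 
>     def distinguishable(brain_a, brain_b, stimuli) -> bool:
>         # A behavioural test can only observe outputs for chosen inputs,
>         # so two equal functions can never be told apart this way.
>         return any(brain_a(s) != brain_b(s) for s in stimuli)
> 
>     stimuli = ["back-nerve stimulation", "describe the pain in a paragraph"]
>     assert not distinguishable(biological_brain, emulated_brain, stimuli)
>     # No choice of stimuli yields a difference, so the test can neither
>     # prove nor disprove consciousness in either system.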
> 
> 
> 
>> 
>> Is computationalism as far as science can go on a theory of mind before it 
>> reaches this testing roadblock?
> 
> Computationalism is indirectly testable: by verifying the physics implied 
> by this theory of consciousness, we verify the theory itself indirectly.
> 
> As you know, I define consciousness by the indubitable truth that any 
> universal machine, cognitively rich enough to know that it is universal, 
> finds by looking inward (in the Gödel-Kleene sense), a truth which is not 
> provable (not rationally justifiable) and not even definable without 
> invoking *some* notion of truth. Such consciousness then appears to be a 
> fixed point of the doubting procedure, as in Descartes, and it gets a key 
> role: self-speed-up relative to universal machine(s).
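> 
> (The fixed point invoked here is, presumably, that of the Gödel-Kleene 
> recursion theorem: for every computable $f$ there is an index $e$ with 
> $\varphi_e = \varphi_{f(e)}$, which is what allows a machine to refer to, 
> and reason about, its own code.)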
> 
> So it seems so clear to me that nobody can prove that anything is conscious 
> that I make this one of the main ways to characterise it.
> 
> Consciousness is already very similar to consistency, which is (for 
> effective theories and sound machines) equivalent to a belief in some 
> reality. No machine can prove its own consistency, and no machine can prove 
> that there is a reality satisfying its beliefs.
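> 
> (This is Gödel’s second incompleteness theorem: for any consistent, 
> effectively axiomatized theory $T$ containing elementary arithmetic, 
> $T \nvdash \mathrm{Con}(T)$.)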
> 
> In all cases, it is never the machine per se which is conscious, but the 
> first person associated with the machine. There is a core universal person 
> common to each of “us” (with “us” taken in a very large sense of universal 
> numbers/machines).
> 
> Consciousness is not much more than knowledge, and in particular indubitable 
> knowledge.
> 
> Bruno
> 
> 
> 
> 
> So to summarize: is it right to say that our only hope of proving anything 
> about which theory of consciousness is correct, or any fact concerning the 
> consciousness of others, will rely on indirect tests that involve one's own 
> first-person experiences? (Such as whether our apparent reality becomes 
> fuzzy below a certain level.)

For the first-person plural test, yes. But for the first-person singular 
“test”, it is all up to you and your experience, and that will not be 
communicable, not even to yourself, due to anosognosia. You might sincerely 
believe that you have completely survived the classical teleportation when in 
fact you are now deaf and blind, yet fail to realise this, having also lost 
the ability to realise it.

Bruno


> 
> Jason
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/BBF60BC6-389F-41C2-AE4A-B1549827EE74%40ulb.ac.be.
