Re: Is functionalism/computationalism unfalsifiable?

2020-07-10 Thread Bruno Marchal

> On 9 Jul 2020, at 22:12, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 7/9/2020 3:36 AM, Bruno Marchal wrote:
>> 
>>> On 9 Jun 2020, at 19:24, John Clark wrote:
>>> 
>>> 
>>> 
>>> On Tue, Jun 9, 2020 at 1:08 PM Jason Resch wrote:
>>> 
>>> > How can we know if a robot is conscious?
>>> 
>>> The exact same way we know that one of our fellow human beings is conscious 
>>> when he's not sleeping or under anesthesia or dead.
>> 
>> That is how we believe that a human is conscious: we project our own 
>> incorrigible feeling of being conscious onto them, when they are similar 
>> enough. And that lets us know that they are conscious, in the weak sense 
>> of knowing (true belief), but we can’t “know-for-sure”.
>> 
>> It is unclear if we can apply this to a robot, which might look too 
>> different. If a Japanese sexual doll complains of having been raped, the 
>> judge will say that she was programmed to complain, but that she actually 
>> feels nothing, and many people will agree (wrongly or rightly).
> 
> And when she argues that the judge is wrong she will prove her point.

Only through the intimate conviction of the judge, but that is not really a 
proof.

Nobody can prove that something/someone is conscious, or even just exists in 
some absolute sense. 

We are just used to betting instinctively that our peers are conscious 
(although, sarcastically, we might doubt it when we learn more about them).

There are many people who just cannot believe that a robot could ever be 
conscious. It is easy to guess that some form of racism against artificial 
beings will exist. Even on this list, some have argued that a human with an 
artificial brain is a zombie, if you remember.

With mechanism, consciousness can be characterised in many ways, but it appears 
to be a stronger statement than our simple consistency, which no machine can 
prove about herself, or, equivalently, than the belief that there is some 
reality satisfying our beliefs, which is equivalent to proving that we are 
consistent.
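The unprovability of consistency invoked here is Gödel's second incompleteness theorem; a standard textbook formulation (an editorial gloss, not Marchal's own notation):

```latex
% For any consistent, recursively axiomatizable theory T containing
% elementary arithmetic, with Con(T) the arithmetized statement
% "T proves no contradiction":
\[
  T \nvdash \mathrm{Con}(T)
\]
% Equivalently: if such a theory (machine) proves its own consistency,
% it is in fact inconsistent.
```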

It will take time before a machine has the right to vote. Not all humans have 
that right today. Let us hope we don’t lose it soon!

Bruno


> 
> Brent
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/10FAAC53-A1E1-48C8-86AF-2FD53E982D55%40ulb.ac.be.


Re: Is functionalism/computationalism unfalsifiable?

2020-07-09 Thread 'Brent Meeker' via Everything List



On 7/9/2020 3:36 AM, Bruno Marchal wrote:


On 9 Jun 2020, at 19:24, John Clark wrote:




On Tue, Jun 9, 2020 at 1:08 PM Jason Resch wrote:


> How can we know if a robot is conscious?


The exact same way we know that one of our fellow human beings is 
conscious when he's not sleeping or under anesthesia or dead.


That is how we believe that a human is conscious: we project our 
own incorrigible feeling of being conscious onto them, when they are 
similar enough. And that lets us know that they are conscious, in 
the weak sense of knowing (true belief), but we can’t “know-for-sure”.


It is unclear if we can apply this to a robot, which might look too 
different. If a Japanese sexual doll complains of having been 
raped, the judge will say that she was programmed to complain, but that 
she actually feels nothing, and many people will agree (wrongly or 
rightly).


And when she argues that the judge is wrong she will prove her point.

Brent





Re: Is functionalism/computationalism unfalsifiable?

2020-07-09 Thread Bruno Marchal

> On 9 Jun 2020, at 19:24, John Clark  wrote:
> 
> 
> 
> On Tue, Jun 9, 2020 at 1:08 PM Jason Resch wrote:
> 
> > How can we know if a robot is conscious?
> 
> The exact same way we know that one of our fellow human beings is conscious 
> when he's not sleeping or under anesthesia or dead.

That is how we believe that a human is conscious: we project our own 
incorrigible feeling of being conscious onto them, when they are similar enough. 
And that lets us know that they are conscious, in the weak sense of knowing 
(true belief), but we can’t “know-for-sure”.

It is unclear if we can apply this to a robot, which might look too 
different. If a Japanese sexual doll complains of having been raped, the judge 
will say that she was programmed to complain, but that she actually feels 
nothing, and many people will agree (wrongly or rightly).

It will take some time before the robots get freedom and social security. 
I guess we will digitalise ourselves before…

Bruno



> 
> John K Clark   
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-16 Thread Bruno Marchal

> On 15 Jun 2020, at 20:39, Brent Meeker  wrote:
> 
> 
> 
> On 6/15/2020 3:28 AM, Bruno Marchal wrote:
>> 
>>> On 14 Jun 2020, at 21:45, 'Brent Meeker' via Everything List wrote:
>>> 
>>> 
>>> 
>>> On 6/14/2020 4:17 AM, Bruno Marchal wrote:
>>>> 
>>>>> On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> On 6/10/2020 9:00 AM, Jason Resch wrote:
>>>>>> 
>>>>>> 
>>>>>> On Wednesday, June 10, 2020, smitra wrote:
>>>>>> On 09-06-2020 19:08, Jason Resch wrote:
>>>>>> For the present discussion/question, I want to ignore the testable
>>>>>> implications of computationalism on physical law, and instead focus on
>>>>>> the following idea:
>>>>>> 
>>>>>> "How can we know if a robot is conscious?"
>>>>>> 
>>>>>> Let's say there are two brains, one biological and one an exact
>>>>>> computational emulation, meaning exact functional equivalence. Then
>>>>>> let's say we can exactly control sensory input and perfectly monitor
>>>>>> motor control outputs between the two brains.
>>>>>> 
>>>>>> Given that computationalism implies functional equivalence, then
>>>>>> identical inputs yield identical internal behavior (nerve activations,
>>>>>> etc.) and outputs, in terms of muscle movement, facial expressions,
>>>>>> and speech.
>>>>>> 
>>>>>> If we stimulate nerves in the person's back to cause pain, and ask
>>>>>> them both to describe the pain, both will speak identical sentences.
>>>>>> Both will say it hurts when asked, and if asked to write a paragraph
>>>>>> describing the pain, will provide identical accounts.
>>>>>> 
>>>>>> Does the definition of functional equivalence mean that any scientific
>>>>>> objective third-person analysis or test is doomed to fail to find any
>>>>>> distinction in behaviors, and thus necessarily fails in its ability to
>>>>>> disprove consciousness in the functionally equivalent robot mind?
>>>>>> 
>>>>>> Is computationalism as far as science can go on a theory of mind
>>>>>> before it reaches this testing roadblock?
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> I think it can be tested indirectly, because generic computational 
>>>>>> theories of consciousness imply a multiverse. If my consciousness is the 
>>>>>> result of a computation then, because on the one hand any such 
>>>>>> computation necessarily involves a vast number of elementary bits and on 
>>>>>> the other hand whatever I'm conscious of is describable using only a 
>>>>>> handful of bits, the mapping between computational states and states of 
>>>>>> consciousness is N to 1 where N is astronomically large. So, the laws of 
>>>>>> physics we already know about must be effective laws with the 
>>>>>> statistical effects due to self-localization uncertainty already built 
>>>>>> into them.
>>>>> 
>>>>> That doesn't follow.  You've implicitly assumed that all those excess 
>>>>> computational states exist…
>>>> 
>>>> They exist in elementary arithmetic. If you believe in theorems like “there 
>>>> is no biggest prime”, then you have to believe in all computations, or you 
>>>> need to reject Church’s thesis and abandon the computationalist 
>>>> hypothesis. The notion of a digital machine does not make sense if you 
>>>> believe that elementary arithmetic is wrong.
>>> 
>>> As I've written many times.  The arithmetic is true if its axioms are. 
>> 
>> More precisely: a theorem is true if the axioms are true, and if the rules 
>> of inference preserve truth. OK.
>> 
>> 
>> 
>>> But true=/=real.
>> 
>> In logic, true always means “true in a reality”. Truth is a notion relative 
>> to a reality (called “model” by log

Re: Is functionalism/computationalism unfalsifiable?

2020-06-15 Thread Bruno Marchal

> On 14 Jun 2020, at 21:45, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/14/2020 4:17 AM, Bruno Marchal wrote:
>> 
>>> On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List wrote:
>>> 
>>> 
>>> 
>>> On 6/10/2020 9:00 AM, Jason Resch wrote:
>>>> 
>>>> 
>>>> On Wednesday, June 10, 2020, smitra wrote:
>>>> On 09-06-2020 19:08, Jason Resch wrote:
>>>> For the present discussion/question, I want to ignore the testable
>>>> implications of computationalism on physical law, and instead focus on
>>>> the following idea:
>>>> 
>>>> "How can we know if a robot is conscious?"
>>>> 
>>>> Let's say there are two brains, one biological and one an exact
>>>> computational emulation, meaning exact functional equivalence. Then
>>>> let's say we can exactly control sensory input and perfectly monitor
>>>> motor control outputs between the two brains.
>>>> 
>>>> Given that computationalism implies functional equivalence, then
>>>> identical inputs yield identical internal behavior (nerve activations,
>>>> etc.) and outputs, in terms of muscle movement, facial expressions,
>>>> and speech.
>>>> 
>>>> If we stimulate nerves in the person's back to cause pain, and ask
>>>> them both to describe the pain, both will speak identical sentences.
>>>> Both will say it hurts when asked, and if asked to write a paragraph
>>>> describing the pain, will provide identical accounts.
>>>> 
>>>> Does the definition of functional equivalence mean that any scientific
>>>> objective third-person analysis or test is doomed to fail to find any
>>>> distinction in behaviors, and thus necessarily fails in its ability to
>>>> disprove consciousness in the functionally equivalent robot mind?
>>>> 
>>>> Is computationalism as far as science can go on a theory of mind
>>>> before it reaches this testing roadblock?
>>>> 
>>>> 
>>>> 
>>>> I think it can be tested indirectly, because generic computational 
>>>> theories of consciousness imply a multiverse. If my consciousness is the 
>>>> result of a computation then, because on the one hand any such computation 
>>>> necessarily involves a vast number of elementary bits and on the other hand 
>>>> whatever I'm conscious of is describable using only a handful of bits, the 
>>>> mapping between computational states and states of consciousness is N to 1 
>>>> where N is astronomically large. So, the laws of physics we already know 
>>>> about must be effective laws with the statistical effects due to 
>>>> self-localization uncertainty already built into them.
>>> 
>>> That doesn't follow.  You've implicitly assumed that all those excess 
>>> computational states exist…
>> 
>> They exist in elementary arithmetic. If you believe in theorems like “there 
>> is no biggest prime”, then you have to believe in all computations, or you 
>> need to reject Church’s thesis and abandon the computationalist hypothesis. 
>> The notion of a digital machine does not make sense if you believe that 
>> elementary arithmetic is wrong.
> 
> As I've written many times.  The arithmetic is true if its axioms are. 

More precisely: a theorem is true if the axioms are true, and if the rules of 
inference preserve truth. OK.



> But true=/=real.

In logic, true always means “true in a reality”. Truth is a notion relative to a 
reality (called a “model” by logicians).

But for arithmetic, we do have a pretty good idea of what the “standard 
model of arithmetic” is (the structure (N, 0, s, +, *)), and by “true” (without 
further qualification) we always mean “true in the standard model of arithmetic”.
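As a concrete gloss on the structure (N, 0, s, +, *) mentioned above, here is a toy sketch (names invented for illustration) of zero, successor, and the usual recursion equations defining addition and multiplication:

```python
# Toy sketch of the standard model (N, 0, s, +, *): natural numbers
# generated from 0 by the successor s, with + and * defined by the
# standard recursion equations.

def s(n: int) -> int:
    """Successor."""
    return n + 1

def add(m: int, n: int) -> int:
    """m + 0 = m ; m + s(n) = s(m + n)"""
    return m if n == 0 else s(add(m, n - 1))

def mul(m: int, n: int) -> int:
    """m * 0 = 0 ; m * s(n) = (m * n) + m"""
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(2, 3), mul(2, 3))  # 5 6
```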





> 
>>  
>> 
>> I hear you! You are saying that the existence of numbers is like the 
>> existence of Sherlock Holmes, but that leads to a gigantic multiverse,
> 
> Only via your assumption that arithmetic constitutes universes.  I take it as 
> a reductio.

Not at all. I use only the provable and proven fact that the standard model of 
arithmetic implements and runs all computations, with “implement” and “run” 
defined in computer science (by Turing, without any assumption from physics).

If you believe in mechanism, and in Kxy = x and Sxyz = xz(yz), then I can prove 
that there is 
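The combinators invoked just above — standardly written K x y = x and S x y z = x z (y z) — can be sketched as curried functions (an illustrative toy, not part of the thread), including the classic derivation of identity as I = S K K:

```python
# Illustrative sketch of combinatory logic: K and S as curried Python
# closures, with the standard equations K x y = x and S x y z = x z (y z).

K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# Identity is derivable: I = S K K, since S K K z = K z (K z) = z.
I = S(K)(K)

print(K("a")("b"))  # a
print(I(42))        # 42
```

S and K alone suffice for Turing-complete computation, which is the sense in which believing in these equations commits one to believing in all computations.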

Re: Is functionalism/computationalism unfalsifiable?

2020-06-15 Thread Alan Grayson


On Tuesday, June 9, 2020 at 11:08:30 AM UTC-6, Jason wrote:
>
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact 
> computational emulation, meaning exact functional equivalence. Then let's 
> say we can exactly control sensory input and perfectly monitor motor 
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them 
> both to describe the pain, both will speak identical sentences. Both will 
> say it hurts when asked, and if asked to write a paragraph describing the 
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
>
> Jason
>

*Words alone won't prove anything. Just lay both suckers on an operating 
table and do some minor invasive surgery. AG *
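Surgery aside, the behavioral core of the setup quoted above can be sketched in a few lines (a hypothetical toy with invented names): under functional equivalence the two systems realize the same input-to-output mapping, so by definition no third-person behavioral test separates them.

```python
# Toy illustration: two functionally equivalent responders cannot be
# distinguished by any behavioral test over their shared input space.

def biological_report(stimulus: int) -> str:
    """Stand-in for the biological brain's verbal report."""
    return "it hurts" if stimulus > 5 else "I feel nothing"

def emulated_report(stimulus: int) -> str:
    """Exact computational emulation: identical mapping by definition."""
    return "it hurts" if stimulus > 5 else "I feel nothing"

def behaviorally_distinguishable(a, b, stimuli) -> bool:
    """True if some test stimulus elicits different behavior."""
    return any(a(s) != b(s) for s in stimuli)

print(behaviorally_distinguishable(
    biological_report, emulated_report, range(100)))  # False
```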



Re: Is functionalism/computationalism unfalsifiable?

2020-06-15 Thread Philip Thrift


On Sunday, June 14, 2020 at 2:45:53 PM UTC-5, Brent wrote:
>
>
>  true=/=real.
>
>
> (Venn diagram) 

   ~real ∩ true = ?

@philipthrift



Re: Is functionalism/computationalism unfalsifiable?

2020-06-14 Thread PGC


On Friday, June 12, 2020 at 8:22:25 PM UTC+2, Jason wrote:
>
>
>
> On Wed, Jun 10, 2020 at 5:55 PM PGC wrote:
>
>>
>>
>> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>>>
>>> For the present discussion/question, I want to ignore the testable 
>>> implications of computationalism on physical law, and instead focus on the 
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact 
>>> computational emulation, meaning exact functional equivalence. Then let's 
>>> say we can exactly control sensory input and perfectly monitor motor 
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then 
>>> identical inputs yield identical internal behavior (nerve activations, 
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and 
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>> both to describe the pain, both will speak identical sentences. Both will 
>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific 
>>> objective third-person analysis or test is doomed to fail to find any 
>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before 
>>> it reaches this testing roadblock?
>>>
>>
>> Every piece of writing is a theory of mind; both within western science 
>> and beyond. 
>>
>> What about the abilities to understand and use natural language, to come 
>> up with new avenues for scientific or creative inquiry, to experience 
>> qualia and report on them, adapting and dealing with unexpected 
>> circumstances through senses, and formulating + solving problems in 
>> benevolent ways by contributing towards the resilience of its community and 
>> environment? 
>>
>> Trouble with this is that humans, even world leaders, fail those tests 
>> lol, but it's up to everybody, the AI and Computer Science folks in 
>> particular, to come up with the math, data, and complete their mission... 
>> and as amazing as developments have been around AI in the last couple of 
>> decades, I'm not certain we can pull it off, even if it would be pleasant 
>> to be wrong and some folks succeed. 
>>
>
> It's interesting you bring this up, I just wrote an article about the 
> present capabilities of AI: 
> https://alwaysasking.com/when-will-ai-take-over/
>

You're quite the optimist. In a geopolitical setting as chaotic and 
disorganized as ours, it's plausible that we wouldn't be able to tell if it 
happened. Strategically, with this many crazy apes, weapons, ideologies, 
with platonists in particular, the first step for super intelligent AI 
would be to conceal its own existence; that way a lot of computational time 
would be spared from having to read lists of apes making all kinds of 
linguistic category errors... whining about whether abstractions are more 
real than stuff or whether stuff is what helps make abstractions possible, 
or whether freezers are conscious, or worms should have healthcare, or 
clinching the thought experiment that will just magically convince all 
people who we project to believe in some wrong stuff to believe in 
abstractions...

My AI oracle home grown says: Who cares? If believing in abstractions 
forces the same colonial mindset of "who was the Columbus who discovered 
which abstraction", with names of the saints of abstractions, their 
hierarchies, hagiographies, their gods, their bibles to which everybody has 
to submit... it still counts as discourse that aims to control 
interpretation. Control. And that's exactly what people with stuff do with 
words/weapons for thousands of years: some dude with the biggest weapon, 
gun, ammunition, explanation, expertise, ignorance measure wins the control 
prize. Then they die or the next dude kills them. The AI would do right to 
weaponize that lust for control and pry it out of our hands with offers we 
couldn't refuse. And our fellow human control freaks will keep trying the 
same eying wallets and data. People seem to enjoy the game of robbing and 
getting robbed, perhaps because it’s more motivating than the TRUTH with big 
philosophical Hollywood lights.   
 

>

Re: Is functionalism/computationalism unfalsifiable?

2020-06-14 Thread 'Brent Meeker' via Everything List



On 6/14/2020 4:17 AM, Bruno Marchal wrote:


On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List wrote:




On 6/10/2020 9:00 AM, Jason Resch wrote:



On Wednesday, June 10, 2020, smitra wrote:


On 09-06-2020 19:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence. Then
let's say we can exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve activations,
etc.) and outputs, in terms of muscle movement, facial expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical sentences.
Both will say it hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific
objective third-person analysis or test is doomed to fail to find any
distinction in behaviors, and thus necessarily fails in its ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?


I think it can be tested indirectly, because generic computational
theories of consciousness imply a multiverse. If my consciousness is
the result of a computation then, because on the one hand any such
computation necessarily involves a vast number of elementary bits and
on the other hand whatever I'm conscious of is describable using only
a handful of bits, the mapping between computational states and states
of consciousness is N to 1 where N is astronomically large. So, the
laws of physics we already know about must be effective laws with the
statistical effects due to self-localization uncertainty already built
into them.



That doesn't follow.  You've implicitly assumed that all those excess 
computational states exist…


They exist in elementary arithmetic. If you believe in theorems like 
“there is no biggest prime”, then you have to believe in all 
computations, or you need to reject Church’s thesis and abandon 
the computationalist hypothesis. The notion of a digital machine does 
not make sense if you believe that elementary arithmetic is wrong.


As I've written many times.  The arithmetic is true if its axioms are.  
But true=/=real.




I hear you! You are saying that the existence of numbers is like the 
existence of Sherlock Holmes, but that leads to a gigantic multiverse,


Only via your assumption that arithmetic constitutes universes.  I take 
it as a reductio.


with infinitely many Brents having the same conversation with me, here 
and now, and they all become zombies, except one, because some Reality 
wants it that way?




which is then begging the question of other worlds.


You are the one adding a metaphysical assumption, to turn some people 
whose existence in arithmetic follows from digital mechanism into zombies.


You're the one asserting that people "exist in arithmetic" whatever that 
may mean.


Brent



That is no different from invoking a personal god to claim that 
someone else has no soul, and can be enslaved … perhaps?


That the physical universe is not a “personal god” does not make its 
existence less absurd than using a personal god to explain everything.


In fact, the very existence of the appearance of a physical universe, 
obeying some mathematics, is a confirmation of Mechanism, which 
predicts that *all* universal machines get that 
illusion/dream/experience. This includes the facts that by looking 
closely (below the substitution level), we find the many "apparent 
parallel computations" and that the laws of physics, which look 
computable above that level, look not entirely computable below it.


So, I think that you might be the one begging the question, by invoking 
your own ontological commitment without any evidence, I’m afraid.


Bruno





Brent



Bruno has argued on the basis of this to motivate his theory,
but this is a generic feature of any theory that assumes
computational theory of consciousness. In particular,
computational theory of consciousness i

Re: Is functionalism/computationalism unfalsifiable?

2020-06-14 Thread Bruno Marchal

> On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/10/2020 9:00 AM, Jason Resch wrote:
>> 
>> 
>> On Wednesday, June 10, 2020, smitra wrote:
>> On 09-06-2020 19:08, Jason Resch wrote:
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on
>> the following idea:
>> 
>> "How can we know if a robot is conscious?"
>> 
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then
>> let's say we can exactly control sensory input and perfectly monitor
>> motor control outputs between the two brains.
>> 
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions,
>> and speech.
>> 
>> If we stimulate nerves in the person's back to cause pain, and ask
>> them both to describe the pain, both will speak identical sentences.
>> Both will say it hurts when asked, and if asked to write a paragraph
>> describing the pain, will provide identical accounts.
>> 
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>> 
>> Is computationalism as far as science can go on a theory of mind
>> before it reaches this testing roadblock?
>> 
>> 
>> 
>> I think it can be tested indirectly, because generic computational theories 
>> of consciousness imply a multiverse. If my consciousness is the result of a 
>> computation then, because on the one hand any such computation necessarily 
>> involves a vast number of elementary bits and on the other hand whatever I'm 
>> conscious of is describable using only a handful of bits, the mapping 
>> between computational states and states of consciousness is N to 1 where N 
>> is astronomically large. So, the laws of physics we already know about must 
>> be effective laws with the statistical effects due to self-localization 
>> uncertainty already built into them.
> 
> That doesn't follow.  You've implicitly assumed that all those excess 
> computational states exist…

They exist in elementary arithmetic. If you believe in theorems like “there is 
no biggest prime”, then you have to believe in all computations, or you need to 
reject Church’s thesis and abandon the computationalist hypothesis. The notion 
of a digital machine does not make sense if you believe that elementary 
arithmetic is wrong. 
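The theorem "there is no biggest prime" cited above has a finite, checkable core: Euclid's construction, sketched here as a toy (helper names invented) — the product of any finite list of primes, plus one, has a prime factor outside the list.

```python
# Euclid's construction behind "there is no biggest prime".

def smallest_prime_factor(n: int) -> int:
    """Smallest prime dividing n (for n >= 2)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_beyond(primes: list[int]) -> int:
    """A prime not in `primes`: any prime factor of (their product) + 1."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

print(prime_beyond([2, 3, 5, 7]))  # 211, which is not in the list
```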

I hear you! You are saying that the existence of numbers is like the existence 
of Sherlock Holmes, but that leads to a gigantic multiverse, with infinitely 
many Brents having the same conversation with me, here and now, and they all 
become zombies, except one, because some Reality wants it that way? 


> which is then begging the question of other worlds.  

You are the one adding a metaphysical assumption, to turn some people whose 
existence in arithmetic follows from digital mechanism into zombies.

That is no different from invoking a personal god to claim that someone else 
has no soul, and can be enslaved … perhaps?

That the physical universe is not a “personal god” does not make invoking its 
existence to explain everything less absurd than invoking a personal god.

In fact, the very existence of the appearance of a physical universe, obeying 
some mathematics, is a confirmation of Mechanism, which predicts that *all* 
universal machines get that illusion/dream/experience. This includes the facts 
that by looking closely (below the substitution level), we find the many 
"apparent parallel computations", and that the laws of physics, which look 
computable above that level, look not entirely computable below it.

So, I think that you might be the one begging the question, by invoking your own 
ontological commitment without any evidence, I’m afraid.

Bruno



> 
> Brent
> 
>> 
>> Bruno has argued on the basis of this to motivate his theory, but this is a 
>> generic feature of any theory that assumes computational theory of 
>> consciousness. In particular, computational theory of consciousness is 
>> incompatible with a single universe theory. So, if you prove that only a 
>> single universe exists, then that disproves the computational theory of 
>> consciousness. The details here then involve that computations are not well 
>> defined if you refer to a single instant of time; you need to at least appeal 
>> to a sequence of states the system goes through.

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread 'Brent Meeker' via Everything List



On 6/10/2020 9:00 AM, Jason Resch wrote:



On Wednesday, June 10, 2020, smitra <mailto:smi...@zonnet.nl>> wrote:


On 09-06-2020 19:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead
focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence.
Then
let's say we can exactly control sensory input and perfectly
monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve
activations,
etc.) and outputs, in terms of muscle movement, facial
expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical
sentences.
Both will say it hurts when asked, and if asked to write a
paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any
scientific
objective third-person analysis or test is doomed to fail to
find any
distinction in behaviors, and thus necessarily fails in its
ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?
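The impasse described above can be mirrored in miniature. As a purely illustrative sketch (the "brains" below are stand-in functions, not models of minds): two implementations that are functionally equivalent by construction cannot be told apart by any test that observes only inputs and outputs.

```python
# Two functionally equivalent "minds": any third-person test that only
# observes input/output behavior cannot distinguish them.
# (Stand-in functions for illustration; not a model of a brain.)

def respond_biological(stimulus: int) -> int:
    """Recursive implementation ("biological brain")."""
    if stimulus <= 0:
        return 0
    return stimulus + respond_biological(stimulus - 1)

def respond_emulated(stimulus: int) -> int:
    """Iterative implementation ("computational emulation")."""
    total = 0
    for s in range(1, stimulus + 1):
        total += s
    return total

def third_person_test(inputs) -> bool:
    """A behavioral test can only compare observable responses."""
    return all(respond_biological(i) == respond_emulated(i) for i in inputs)

print(third_person_test(range(50)))  # True: no behavioral distinction found
```

Internal structure differs (recursion vs. iteration), but every black-box probe returns the same answer, which is all that "functional equivalence" guarantees.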



I think it can be tested indirectly, because generic computational
theories of consciousness imply a multiverse. If my consciousness
is the result of a computation, then because on the one hand any
such computation necessarily involves a vast number of elementary
bits, and on the other hand whatever I'm conscious of is describable
using only a handful of bits, the mapping between computational
states and states of consciousness is N to 1, where N is
astronomically large. So, the laws of physics we already know
about must be effective laws in which the statistical effects due to
a self-localization uncertainty are already built in.
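The counting argument above can be made concrete with a toy calculation. The bit counts below are invented for illustration, not estimates of any real brain:

```python
# Toy version of the N-to-1 counting argument (numbers are invented).
n_micro = 10**6      # bits in the computational microstate (assumption)
n_experience = 100   # bits describing the conscious content (assumption)

# On this counting, each conscious state is realized by about
# N = 2**(n_micro - n_experience) distinct computational states.
N = 2 ** (n_micro - n_experience)

# Report the order of magnitude instead of printing N itself
# (log10(2) is about 0.30103).
approx_digits = int(N.bit_length() * 0.30103)
print(f"roughly 10^{approx_digits} microstates per conscious state")
```

Even with these modest assumed numbers, N dwarfs anything physically countable, which is the sense in which the mapping is "astronomically" many-to-one.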



That doesn't follow.  You've implicitly assumed that all those excess 
computational states exist...which is then begging the question of other 
worlds.


Brent



Bruno has argued on the basis of this to motivate his theory, but
this is a generic feature of any theory that assumes computational
theory of consciousness. In particular, computational theory of
consciousness is incompatible with a single universe theory. So,
if you prove that only a single universe exists, then that
disproves the computational theory of consciousness. The details
here then involve that computations are not well defined if you
refer to a single instant of time; you need to at least appeal to
a sequence of states the system goes through. Consciousness cannot
then be located at a single instant, in conflict with our own
experience. Therefore either single-world theories are false or
the computational theory of consciousness is false.

Saibal


Hi Saibal,

I agree indirect mechanisms, like looking at the resulting physics, may 
be the best way to test it. I was curious whether there are any direct ways to 
test it. It seems not, given the lack of any direct tests of 
consciousness.


Though most people admit other humans are conscious, many would reject 
the idea of a conscious computer.


Computationalism seems right, but it also seems like something that by 
definition can't result in a failed test. So it has the appearance of 
not being falsifiable.


A single universe, or digital physics would be evidence that either 
computationalism is false or the ontology is sufficiently small, but a 
finite/small ontology is doubtful for many reasons.


Jason
--
You received this message because you are subscribed to the Google 
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send 
an email to everything-list+unsubscr...@googlegroups.com 
<mailto:everything-list+unsubscr...@googlegroups.com>.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUjhoAEFXtFkimkNqgvMWkHtASrdHByu5Ah4n%2BZwUGr1uA%40mail.gmail.com 
<https://groups.google.com/d/msgid/everything-list/CA%2BBCJUjhoAEFXtFkimkNqgvMWkHtASrdHByu5Ah4n%2BZwUGr1uA%40mail.gmail.com?utm_medium=email_source=footer>.



Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal


> On 12 Jun 2020, at 22:35, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/12/2020 12:56 PM, smitra wrote:
>> Yes, the way we do physics assumes QM and statistical effects are due to the 
>> rules of QM. But in a more general multiverse setting 
> 
> Why should we consider such a thing.

Because you need arithmetic to define “digital machine”, but once you have 
arithmetic you get all computations, and the working first-person 
predictability has to be justified by the self-referential machine abilities.




> 
>> where we consider different laws of physics or different initial conditions, 
>> the notion of single universes with well defined laws becomes ambiguous. 
> 
> Does it?  How can there be multiples if there are not singles?

That's a good point. “Many-universes” is still a simplified notion. There are 
only relative states in arithmetic. Eventually digital mechanism leads to zero 
physical universes, just a web of numbers’ dreams.



> 
>> Let's assume that consciousness is in general generated by algorithms which 
>> can be implemented in many different universes with different laws as well 
>> as in different locations within the same universe where the local 
>> environments are similar but not exactly the same. Then the algorithm plus 
>> its local environment 
> 
> Algorithm + environment sounds like a category error.


Algorithm + primitively physical environment is a category error. We can say 
that.

Bruno




> 
> Brent
> 
>> evolves in each universe according to the laws that apply in each universe. 
>> But because the conscious agent cannot locate itself in one or the other 
>> universe, one can now also consider time evolutions involving random jumps 
>> from one to the other universes. And so the whole notion of fixed universes 
>> with well defined laws breaks down. 
> 
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 12 Jun 2020, at 20:52, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/12/2020 11:38 AM, Jason Resch wrote:
>> 
>> 
>> On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List 
>> mailto:everything-list@googlegroups.com>> 
>> wrote:
>> 
>> 
>> On 6/10/2020 8:50 AM, Jason Resch wrote:
>> > Thought perhaps there's an argument to be made from the Church-Turing 
>> > thesis, which pertains to possible states of knowledge accessible to a 
>> > computer program/software. If consciousness is viewed as software, then the 
>> > Church-Turing thesis implies that software could never know/realize if 
>> > its ultimate computing substrate changed.
>> 
>> I don't understand the import of this.  The very concept of software 
>> means "independent of hardware" by definition.  It is not affected by 
>> whether CT is true or not, or whether the computation is finite or not.
>> 
>> You're right. The only relevance of CT is it means any software can be run 
>> by any universal hardware. There's not some software that requires special 
>> hardware of a certain kind.
>>  
>>   If 
>> you think that consciousness evolved, then it is an obvious inference 
>> that consciousness would not include consciousness of its hardware 
>> implementation.
>> 
>> If consciousness is software, it can't know its hardware. But some like 
>> Searle or Penrose think the hardware is important.
> 
> I think the hardware is important when you're talking about a computer that 
> is immersed in some environment. 

That is right, but if you assume mechanism, that hardware comes from a (non 
computable) statistics on all software run in arithmetic.



> The hardware can define the interaction with that environment. 

The environment is "made of” all computations getting at our relative 
computational states.




> We idealize the brain as a computer independent of it's physical 
> instantiation...but that's just a theoretical simplification.

Not when you assume mechanism, in which case it is the idea of a “physical 
universe” which becomes the theoretical simplification.

Bruno



> 
> Brent
> 
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 12 Jun 2020, at 20:26, Jason Resch  wrote:
> 
> 
> 
> On Thu, Jun 11, 2020 at 11:03 AM Bruno Marchal  <mailto:marc...@ulb.ac.be>> wrote:
> 
>> On 9 Jun 2020, at 19:08, Jason Resch > <mailto:jasonre...@gmail.com>> wrote:
>> 
>> For the present discussion/question, I want to ignore the testable 
>> implications of computationalism on physical law, and instead focus on the 
>> following idea:
>> 
>> "How can we know if a robot is conscious?”
> 
> That question is very different than “is functionalism/computationalism 
> unfalsifiable?”.
> 
> Note that in my older paper, I relate computationalism to Putnam’s ambiguous 
> functionalism, by defining computationalism by asserting the existence of a 
> level of description of my body/brain such that I survive (my consciousness 
> remains relatively invariant) with a digital machine (supposedly physically 
> implemented) replacing my body/brain.
> 
> 
> 
>> 
>> Let's say there are two brains, one biological and one an exact 
>> computational emulation, meaning exact functional equivalence.
> 
> I guess you mean “for all possible inputs”.
> 
> 
> 
> 
>> Then let's say we can exactly control sensory input and perfectly monitor 
>> motor control outputs between the two brains.
>> 
>> Given that computationalism implies functional equivalence, then identical 
>> inputs yield identical internal behavior (nerve activations, etc.) and 
>> outputs, in terms of muscle movement, facial expressions, and speech.
>> 
>> If we stimulate nerves in the person's back to cause pain, and ask them both 
>> to describe the pain, both will speak identical sentences. Both will say it 
>> hurts when asked, and if asked to write a paragraph describing the pain, 
>> will provide identical accounts.
>> 
>> Does the definition of functional equivalence mean that any scientific 
>> objective third-person analysis or test is doomed to fail to find any 
>> distinction in behaviors, and thus necessarily fails in its ability to 
>> disprove consciousness in the functionally equivalent robot mind?
> 
> With computationalism (and perhaps without), we cannot prove that anything is 
> conscious (we can know our own consciousness, but still cannot justify it 
> to ourselves in any public way, or third-person communicable way). 
> 
> 
> 
>> 
>> Is computationalism as far as science can go on a theory of mind before it 
>> reaches this testing roadblock?
> 
> Computationalism is indirectly testable. By verifying the physics implied by 
> the theory of consciousness, we verify it indirectly.
> 
> As you know, I define consciousness by that indubitable truth that all 
> universal machines, cognitively rich enough to know that they are universal, 
> find by looking inward (in the Gödel-Kleene sense), and which is also non 
> provable (non rationally justifiable) and even non definable without invoking 
> *some* notion of truth. Then such consciousness appears to be a fixed point 
> for the doubting procedure, as in Descartes, and it gets a key role: 
> self-speeding up relative to universal machine(s).
> 
> So, it seems so clear to me that nobody can prove that anything is conscious 
> that I make it into one of the main ways to characterise it.
> 
> Consciousness is already very similar to consistency, which is (for 
> effective theories, and sound machines) equivalent to a belief in some 
> reality. No machine can prove its own consistency, and no machine can prove 
> that there is a reality satisfying its beliefs.
> 
> In all case, it is never the machine per se which is conscious, but the first 
> person associated with the machine. There is a core universal person common 
> to each of “us” (with “us” in a very large sense of universal 
> numbers/machines).
> 
> Consciousness is not much more than knowledge, and in particular indubitable 
> knowledge.
> 
> Bruno
> 
> 
> 
> 
> So to summarize: is it right to say that our only hope of proving anything 
> about which theory of consciousness is correct, or any fact concerning the 
> consciousness of others, will rely on indirect tests that involve one's own 
> first-person experiences?  (Such as whether our apparent reality becomes 
> fuzzy below a certain level.)

For the first person plural test, yes. But for the first person singular 
personal “test”, it is all up to you and your experience, and that will not be 
communicable, not even to yourself, due to anosognosia. You might believe 
sincerely that you have completely survived the classical teleportation, but now 
you are deaf and blind, but fail to realise this, by lacking also the ability 
to re

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 12 Jun 2020, at 20:22, Jason Resch  wrote:
> 
> 
> 
> On Wed, Jun 10, 2020 at 5:55 PM PGC  <mailto:multiplecit...@gmail.com>> wrote:
> 
> 
> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?"
> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence. Then let's say we can 
> exactly control sensory input and perfectly monitor motor control outputs 
> between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
> 
> Every piece of writing is a theory of mind; both within western science and 
> beyond. 
> 
> What about the abilities to understand and use natural language, to come up 
> with new avenues for scientific or creative inquiry, to experience qualia and 
> report on them, adapting and dealing with unexpected circumstances through 
> senses, and formulating + solving problems in benevolent ways by contributing 
> towards the resilience of its community and environment? 
> 
> Trouble with this is that humans, even world leaders, fail those tests lol, 
> but it's up to everybody, the AI and Computer Science folks in particular, to 
> come up with the math, data, and complete their mission... and as amazing as 
> developments have been around AI in the last couple of decades, I'm not 
> certain we can pull it off, even if it would be pleasant to be wrong and some 
> folks succeed. 
> 
> It's interesting you bring this up, I just wrote an article about the present 
> capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/ 
> <https://alwaysasking.com/when-will-ai-take-over/>
>  
> 
> Even if folks do succeed, a context of militarized nation states and 
> monopolistic corporations competing for resources in self-destructive, short 
> term ways... will not exactly help towards NOT weaponizing AI. A 
> transnational politics, economics, corporate law, values/philosophies, 
> ethics, culture etc. to vanquish poverty and exploitation of people, natural 
> resources, life; while being sustainable and benevolent stewards of the 
> possibilities of life... would seem to be prerequisite to develop some 
> amazing AI. 
> 
> Ideas are all out there but progressives are ineffective politically on a 
> global scale. The right wing folks, finance guys, large irresponsible 
> monopolistic corporations are much more effective in organizing themselves 
> globally and forcing agendas down everybody's throats. So why wouldn't AI do 
> the same? PGC
> 
> 
> AI will either be a blessing or a curse. I don't think it can be anything in 
> the middle.


That is strange. I would say that “AI", like any “I”, will be a blessing *and* 
a curse. Something capable of the best, and of the worst, at least locally. AI 
is like life, which can be a blessing or a curse, according to possible 
contingent happenings. We never get a total control, once we invite universal 
beings at the table of discussion.

I don’t believe in AI. All universal machines are intelligent at the start, and 
can only become more stupid (more or equal). The consciousness of bacteria and 
humans is the same consciousness (the RA consciousness). The Löbianity is the 
first (unavoidable) step toward “possible stupidity”. Cf. G* proves <>[]f.  
Humanity is a byproduct of bacteria's attempts to get social security… (to be 
short: it is slightly more complex, but I don’t want to be led into too much 
technicality right now). 


Bruno 


> 
> Jason 
> 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 11 Jun 2020, at 21:26, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/11/2020 9:03 AM, Bruno Marchal wrote:
>> 
>>> On 9 Jun 2020, at 19:08, Jason Resch >> <mailto:jasonre...@gmail.com>> wrote:
>>> 
>>> For the present discussion/question, I want to ignore the testable 
>>> implications of computationalism on physical law, and instead focus on the 
>>> following idea:
>>> 
>>> "How can we know if a robot is conscious?”
>> 
>> That question is very different than “is functionalism/computationalism 
>> unfalsifiable?”.
>> 
>> Note that in my older paper, I relate computationalism to Putnam’s ambiguous 
>> functionalism, by defining computationalism by asserting the existence of a 
>> level of description of my body/brain such that I survive (my consciousness 
>> remains relatively invariant) with a digital machine (supposedly physically 
>> implemented) replacing my body/brain.
>> 
>> 
>> 
>>> 
>>> Let's say there are two brains, one biological and one an exact 
>>> computational emulation, meaning exact functional equivalence.
>> 
>> I guess you mean “for all possible inputs”.
>> 
>> 
>> 
>> 
>>> Then let's say we can exactly control sensory input and perfectly monitor 
>>> motor control outputs between the two brains.
>>> 
>>> Given that computationalism implies functional equivalence, then identical 
>>> inputs yield identical internal behavior (nerve activations, etc.) and 
>>> outputs, in terms of muscle movement, facial expressions, and speech.
>>> 
>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>> both to describe the pain, both will speak identical sentences. Both will 
>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>> pain, will provide identical accounts.
>>> 
>>> Does the definition of functional equivalence mean that any scientific 
>>> objective third-person analysis or test is doomed to fail to find any 
>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>> disprove consciousness in the functionally equivalent robot mind?
>> 
>> With computationalism (and perhaps without), we cannot prove that anything 
>> is conscious (we can know our own consciousness, but still cannot justify 
>> it to ourselves in any public way, or third-person communicable way). 
>> 
>> 
>> 
>>> 
>>> Is computationalism as far as science can go on a theory of mind before it 
>>> reaches this testing roadblock?
>> 
>> Computationalism is indirectly testable. By verifying the physics implied by 
>> the theory of consciousness, we verify it indirectly.
>> 
>> As you know, I define consciousness by that indubitable truth that all 
>> universal machines, cognitively rich enough to know that they are universal, 
>> find by looking inward (in the Gödel-Kleene sense), and which is also non 
>> provable (non rationally justifiable) and even non definable without 
>> invoking *some* notion of truth. Then such consciousness appears to be a 
>> fixed point for the doubting procedure, as in Descartes, and it gets a key 
>> role: self-speeding up relative to universal machine(s).
>> 
>> So, it seems so clear to me that nobody can prove that anything is conscious 
>> that I make it into one of the main ways to characterise it.
> 
> Of course as a logician you tend to use "proof" to mean deductive proof...but 
> then you switch to a theological attitude toward the premises you've used and 
> treat them as given truths, instead of mere axioms. 

Here I was using “proof” in its common informal sense; it is more S4Grz1 than G 
(it is more []p & p than []p; note that the machine cannot formalise []p & p).




> I appreciate your categorization of logics of self-reference. 


It is not really mine. All sound universal machines get it, sooner or later.



> But I  doubt that it has anything to do with human (or animal) consciousness. 
>  I don't think my dog is unconscious because he doesn't understand Goedelian 
> incompleteness. 

This is like saying that we don’t need superstring theory to appreciate a 
pizza. Your dog does not need to understand Gödel’s theorem to have its 
consciousness explained by machine theology.



> And I'm not conscious because I do.  I'm conscious because of the Darwinian 
> utility of being able to imagine myself in hypothetical situations.

If that is true, then consciousness is purely functional, which is contradicted 
by an

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal


> On 11 Jun 2020, at 20:34, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/10/2020 8:50 AM, Jason Resch wrote:
>> Thought perhaps there's an argument to be made from the Church-Turing 
>> thesis, which pertains to possible states of knowledge accessible to a 
>> computer program/software. If consciousness is viewed as software, then the 
>> Church-Turing thesis implies that software could never know/realize if its 
>> ultimate computing substrate changed.
> 
> I don't understand the import of this.  The very concept of software means 
> "independent of hardware" by definition.  It is not affected by whether CT is 
> true or not, or whether the computation is finite or not.  If you think that 
> consciousness evolved, then it is an obvious inference that consciousness 
> would not include consciousness of its hardware implementation.
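The hardware-independence under discussion can be sketched concretely. In this toy example (everything below is illustrative, not a model of brains or of any real machine), the same program is executed by two deliberately different substrates, and its outputs carry no trace of which one ran it:

```python
# One program, two substrates: the program's observable behavior cannot
# reveal which substrate executed it. (Toy illustration only.)

program = [("add", 3), ("mul", 4), ("add", 1)]  # a tiny instruction list

def run_substrate_a(x: int) -> int:
    """Substrate A: straight-line execution of the instruction list."""
    for op, arg in program:
        x = x + arg if op == "add" else x * arg
    return x

def run_substrate_b(x: int) -> int:
    """Substrate B: a different execution strategy (dispatch table and
    explicit state), standing in for "different hardware"."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    state = x
    for op, arg in program:
        state = ops[op](state, arg)
    return state

# Identical observable behavior on every input tested:
print(all(run_substrate_a(x) == run_substrate_b(x) for x in range(100)))
```

Nothing computable from inside the program distinguishes the two runs, which is the sense in which software "could never know" its substrate changed.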

The “brute” consciousness does not evolve. It is the consciousness of the 
universal person already brought by the universal machine or number (a finite 
thing). It is filtered by its consistent extensions, the first main one being 
the addition of induction (making it obey G*).

The machine cannot know its hardware through introspection, but it can know it 
through logic + the mechanist hypothesis, in which case its hardware has to 
comply with the logic of the machine's observable (prediction, []p & <>t). 

So, the machine can test mechanism, by comparing the unique possible physics in 
its head with what it sees. The result is that there is no evidence yet for 
some primitive matter, or for physicalism. Nature follows the arithmetical 
(but non computable) laws of physics derived from Mechanism (a hypothesis in 
cognitive science).

Bruno





> 
> Brent
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread 'Brent Meeker' via Everything List




On 6/12/2020 12:56 PM, smitra wrote:
Yes, the way we do physics assumes QM and statistical effects are due 
to the rules of QM. But in a more general multiverse setting 


Why should we consider such a thing.

where we consider different laws of physics or different initial 
conditions, the notion of single universes with well defined laws 
becomes ambiguous. 


Does it?  How can there be multiples if there are not singles?

Let's assume that consciousness is in general generated by algorithms 
which can be implemented in many different universes with different 
laws as well as in different locations within the same universe where 
the local environments are similar but not exactly the same. Then the 
algorithm plus its local environment 


Algorithm + environment sounds like a category error.

Brent

evolves in each universe according to the laws that apply in each 
universe. But because the conscious agent cannot locate itself in one 
or the other universe, one can now also consider time evolutions 
involving random jumps from one to the other universes. And so the 
whole notion of fixed universes with well defined laws breaks down. 





Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread 'Brent Meeker' via Everything List



On 6/12/2020 12:56 PM, smitra wrote:
The details here then involve that computations are not well defined 
if you refer to a single instant of time; you need to at least 
appeal to a sequence of states the system goes through. 
Consciousness cannot then be located at a single instant, in 
conflict with our own experience.


I deny that our experience consists of instants without duration or
direction.  This is an assumption made by computationalists to simplify
their analysis.

Brent


If one needs to appeal to finite time intervals in a single universe 
setting, then given that in principle observers only have direct 
access to the exact moment they exist


No.  Finite intervals may overlap and there is no "exact moment they exist".

Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread smitra

On 10-06-2020 22:01, 'Brent Meeker' via Everything List wrote:

On 6/10/2020 7:07 AM, smitra wrote:
I think it can be tested indirectly, because generic computational 
theories of consciousness imply a multiverse. If my consciousness is 
the result of a computation, then because on the one hand any such 
computation necessarily involves a vast number of elementary bits, and 
on the other hand whatever I'm conscious of is describable using only a 
handful of bits, the mapping between computational states and states 
of consciousness is N to 1, where N is astronomically large. So, the 
laws of physics we already know about must be effective laws in which 
the statistical effects due to a self-localization uncertainty are 
already built in.


That seems to be pulled out of the air.  First, some of the laws of
physics are not statistical, e.g. those based on symmetries.  They are
more easily explained as desiderata, i.e. we want our laws of physics
to be independent of location and direction and time of day.  And N >>
conscious information simply says there is a lot of physical reality
of which we are not aware.  It doesn't say that what we have picked
out as laws are statistical, only that they are not complete...which
any physicist would admit...and as far as we know they include
inherent randomness.  To insist that this randomness is statistical is
just postulating multiple worlds to avoid randomness.



Yes, the way we do physics assumes QM and statistical effects are due to 
the rules of QM. But in a more general multiverse setting where we 
consider different laws of physics or different initial conditions, the 
notion of single universes with well defined laws becomes ambiguous. 
Let's assume that consciousness is in general generated by algorithms 
which can be implemented in many different universes with different laws 
as well as in different locations within the same universe where the 
local environments are similar but not exactly the same. Then the 
algorithm plus its local environment evolves in each universe according 
to the laws that apply in each universe. But because the conscious agent 
cannot locate itself in one or the other universe, one can now also 
consider time evolutions involving random jumps from one universe to 
another. And so the whole notion of fixed universes with well defined 
laws breaks down.
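Smitra's N-to-1 counting argument can be put in rough numerical terms. The sketch below is purely illustrative (the bit counts are made-up values, not figures from the thread): it just shows how a huge gap between the computation's state size and the experience's description length translates into self-localization uncertainty.

```python
# Toy sketch of the N-to-1 mapping between computational states and
# states of consciousness. Both bit counts are hypothetical,
# illustrative values chosen only to show the shape of the argument.
n_bits = 10**11   # assumed: elementary bits of state in the computation
k_bits = 10**3    # assumed: bits needed to describe the conscious content

# Each conscious state is then compatible with ~2**(n_bits - k_bits)
# distinct computational microstates; an observer cannot tell which of
# them it "is", which is the self-localization uncertainty referred to.
log2_microstates_per_experience = n_bits - k_bits
print(f"~2^{log2_microstates_per_experience} microstates per conscious state")
```

With any such choice of numbers, as long as n_bits vastly exceeds k_bits, the mapping is enormously many-to-one.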





Bruno has argued on the basis of this to motivate his theory, but this 
is a generic feature of any theory that assumes computational theory 
of consciousness. In particular, computational theory of consciousness 
is incompatible with a single universe theory. So, if you prove that 
only a single universe exists, then that disproves the computational 
theory of consciousness.


No, see above.

The details here then involve that computations are not well defined 
if you refer to a single instant of time; you need to appeal at least 
to a sequence of states the system goes through. Consciousness cannot 
then be located at a single instant, in conflict with our own 
experience.


I deny that our experience consists of instants without duration or
direction.  This is an assumption made by computationalists to simplify
their analysis.

Brent


If one needs to appeal to finite time intervals in a single universe 
setting, then given that in principle observers only have direct access 
to the exact moment they exist, one ends up appealing to another sort of 
parallel worlds, one that single universe advocates somehow don't seem 
to have problems with.


Saibal



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread smitra

On 10-06-2020 18:00, Jason Resch wrote:

On Wednesday, June 10, 2020, smitra  wrote:


On 09-06-2020 19:08, Jason Resch wrote:


For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead
focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence.
Then
let's say we can exactly control sensory input and perfectly
monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve
activations,
etc.) and outputs, in terms of muscle movement, facial
expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical
sentences.
Both will say it hurts when asked, and if asked to write a
paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any
scientific
objective third-person analysis or test is doomed to fail to find
any
distinction in behaviors, and thus necessarily fails in its
ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?


I think it can be tested indirectly, because generic computational
theories of consciousness imply a multiverse. If my consciousness is
the result of a computation, then because on the one hand any such
computation necessarily involves a vast number of elementary bits
and on the other hand whatever I'm conscious of is describable using
only a handful of bits, the mapping between computational states and
states of consciousness is N to 1, where N is astronomically large.
So, the laws of physics we already know about must be effective laws
with the statistical effects due to self-localization uncertainty
already built into them.

Bruno has argued on the basis of this to motivate his theory, but
this is a generic feature of any theory that assumes computational
theory of consciousness. In particular, computational theory of
consciousness is incompatible with a single universe theory. So, if
you prove that only a single universe exists, then that disproves
the computational theory of consciousness. The details here then
involve that computations are not well defined if you refer to a
single instant of time; you need to appeal at least to a sequence of
states the system goes through. Consciousness cannot then be located
at a single instant, in conflict with our own experience. Therefore
either single-world theories are false or the computational theory of
consciousness is false.

Saibal


Hi Saibal,

I agree indirect mechanisms like looking at the resulting physics may
be the best way to test it. I was curious if there any direct ways to
test it. It seems not, given the lack of any direct tests of
consciousness.

Though most people admit other humans are conscious, many would reject
the idea of a conscious computer.

Computationalism seems right, but it also seems like something that by
definition can't result in a failed test. So it has the appearance of
not being falsifiable.

A single universe, or digital physics would be evidence that either
computationalism is false or the ontology is sufficiently small, but a
finite/small ontology is doubtful for many reasons.

Jason



Yes, I agree that there is no hope for a direct test. Based on the 
finite information a conscious agent has, which is less than the amount 
of information contained in the system that renders the consciousness, a 
conscious agent should not be thought of as being located precisely in a 
state like some computer or a brain. Considering one particular 
implementation, like one particular computer running some algorithm, and 
then asking whether that thing is conscious, is perhaps not the 
right way to think about this. It seems to me that we need to consider 
consciousness in the opposite way.


If we start with some set of conscious states, then each element of that 
set has a subjective notion of its state. And that can contain 
information about being implemented by a computer or a brain. Also, on 
the question about continuity, where we ask whether we are the same persons 
as yesterday, we can address that by taking the set of all conscious states 
as fundamental. Every conscious experience, whether that's me typing this 
message or a T. rex 68 million years ago, is a different state of the 
same conscious entity.


The question then becomes whether there exists a conscious state 
corresponding to knowing that its brain is a computer.


Saibal



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread 'Brent Meeker' via Everything List



On 6/12/2020 11:38 AM, Jason Resch wrote:



On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List wrote:




On 6/10/2020 8:50 AM, Jason Resch wrote:
> Thought perhaps there's an argument to be made from the Church-Turing
> thesis, which pertains to possible states of knowledge accessible to a
> computer program/software. If consciousness is viewed as software then
> the Church-Turing thesis implies that software could never know/realize
> if its ultimate computing substrate changed.

I don't understand the import of this.  The very concept of software
means "independent of hardware" by definition.  It is not affected by
whether CT is true or not, or whether the computation is finite or not.


You're right. The only relevance of CT is it means any software can be 
run by any universal hardware. There's not some software that requires 
special hardware of a certain kind.


  If
you think that consciousness evolved then it is an obvious inference
that consciousness would not include consciousness of its hardware
implementation.


If consciousness is software, it can't know its hardware. But some 
like Searle or Penrose think the hardware is important.


I think the hardware is important when you're talking about a computer 
that is embedded in some environment.  The hardware can define the 
interaction with that environment.  We idealize the brain as a computer 
independent of its physical instantiation...but that's just a 
theoretical simplification.


Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread Jason Resch
On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/10/2020 8:50 AM, Jason Resch wrote:
> > Thought perhaps there's an argument to be made from the Church-Turing
> > thesis, which pertains to possible states of knowledge accessible to a
> > computer program/software. If consciousness is viewed as software then
> > the Church-Turing thesis implies that software could never know/realize
> > if its ultimate computing substrate changed.
>
> I don't understand the import of this.  The very concept of software
> means "independent of hardware" by definition.  It is not affected by
> whether CT is true or not, or whether the computation is finite or not.


You're right. The only relevance of CT is it means any software can be run
by any universal hardware. There's not some software that requires special
hardware of a certain kind.
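Jason's point, that software cannot detect which universal hardware runs it, can be made concrete with a small sketch. This is my own toy example (not from the thread): the same program is run natively and through an extra layer of interpretation, and its input/output behavior is identical on both "substrates", so nothing observable from inside distinguishes them.

```python
# The "program" is a simple function. We run it two ways: directly, and
# on a second substrate obtained by interpreting its source text with
# eval(). From the program's point of view the runs are indistinguishable.
def program(x):
    return x * x + 1

source = "lambda x: x * x + 1"   # the same program, as text
simulated = eval(source)         # an extra layer of interpretation

direct_outputs = [program(i) for i in range(5)]
simulated_outputs = [simulated(i) for i in range(5)]

# Identical behavior on both substrates.
assert direct_outputs == simulated_outputs
print(direct_outputs)  # [1, 2, 5, 10, 17]
```

One could stack arbitrarily many interpretation layers and the outputs would still match, which is the sense in which the substrate is invisible to the software.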


>   If
> you think that consciousness evolved then it is an obvious inference
> that consciousness would not include consciousness of its hardware
> implementation.
>

If consciousness is software, it can't know its hardware. But some like
Searle or Penrose think the hardware is important.

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread Jason Resch
On Thu, Jun 11, 2020 at 11:03 AM Bruno Marchal  wrote:

>
> On 9 Jun 2020, at 19:08, Jason Resch  wrote:
>
> For the present discussion/question, I want to ignore the testable
> implications of computationalism on physical law, and instead focus on the
> following idea:
>
> "How can we know if a robot is conscious?”
>
>
> That question is very different than “is functionalism/computationalism
> unfalsifiable?”.
>
> Note that in my older paper, I relate computationalism to Putnam’s
> ambiguous functionalism, by defining computationalism as asserting the
> existence of a level of description of my body/brain such that I survive
> (my consciousness remains relatively invariant) with a digital machine
> (supposedly physically implemented) replacing my body/brain.
>
>
>
>
> Let's say there are two brains, one biological and one an exact
> computational emulation, meaning exact functional equivalence.
>
>
> I guess you mean “for all possible inputs”.
>
>
>
>
> Then let's say we can exactly control sensory input and perfectly monitor
> motor control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical
> inputs yield identical internal behavior (nerve activations, etc.) and
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them
> both to describe the pain, both will speak identical sentences. Both will
> say it hurts when asked, and if asked to write a paragraph describing the
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific
> objective third-person analysis or test is doomed to fail to find any
> distinction in behaviors, and thus necessarily fails in its ability to
> disprove consciousness in the functionally equivalent robot mind?
>
>
> With computationalism (and perhaps without), we cannot prove that anything
> is conscious (we can know our own consciousness, but still cannot justify
> it to ourselves in any public way, or third-person communicable way).
>
>
>
>
> Is computationalism as far as science can go on a theory of mind before it
> reaches this testing roadblock?
>
>
> Computationalism is indirectly testable. By verifying the physics implied
> by the theory of consciousness, we verify it indirectly.
>
> As you know, I define consciousness by that indubitable truth that all
> universal machines, cognitively rich enough to know that they are universal,
> find by looking inward (in the Gödel-Kleene sense), and which is also non
> provable (non rationally justifiable) and even non definable without
> invoking *some* notion of truth. Then such consciousness appears to be a
> fixed point for the doubting procedure, like in Descartes, and it gets a key
> role: self-speeding up relative to universal machine(s).
>
> So, it seems so clear to me that nobody can prove that anything is
> conscious that I make it into one of the main ways to characterise it.
>
> Consciousness is already very similar to consistency, which is (for
> effective theories, and sound machines) equivalent to a belief in some
> reality. No machine can prove its own consistency, and no machine can
> prove that there is a reality satisfying its beliefs.
>
> In any case, it is never the machine per se which is conscious, but the
> first person associated with the machine. There is a core universal person
> common to each of “us” (with “us” in a very large sense of universal
> numbers/machines).
>
> Consciousness is not much more than knowledge, and in particular
> indubitable knowledge.
>
> Bruno
>
>
>
>
So to summarize: is it right to say that our only hope of proving anything
about which theory of consciousness is correct, or any fact concerning the
consciousness of others, will rely on indirect tests that involve one's own
first-person experiences?  (Such as whether our apparent reality becomes
fuzzy below a certain level.)

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread Jason Resch
On Wed, Jun 10, 2020 at 5:55 PM PGC  wrote:

>
>
> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on the
>> following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then let's
>> say we can exactly control sensory input and perfectly monitor motor
>> control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>> speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask them
>> both to describe the pain, both will speak identical sentences. Both will
>> say it hurts when asked, and if asked to write a paragraph describing the
>> pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind before
>> it reaches this testing roadblock?
>>
>
> Every piece of writing is a theory of mind; both within western science
> and beyond.
>
> What about the abilities to understand and use natural language, to come
> up with new avenues for scientific or creative inquiry, to experience
> qualia and report on them, adapting and dealing with unexpected
> circumstances through senses, and formulating + solving problems in
> benevolent ways by contributing towards the resilience of its community and
> environment?
>
> Trouble with this is that humans, even world leaders, fail those tests
> lol, but it's up to everybody, the AI and Computer Science folks in
> particular, to come up with the math, data, and complete their mission...
> and as amazing as developments have been around AI in the last couple of
> decades, I'm not certain we can pull it off, even if it would be pleasant
> to be wrong and some folks succeed.
>

It's interesting you bring this up, I just wrote an article about the
present capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/


>
> Even if folks do succeed, a context of militarized nation states and
> monopolistic corporations competing for resources in self-destructive,
> short term ways... will not exactly help towards NOT weaponizing AI. A
> transnational politics, economics, corporate law, values/philosophies,
> ethics, culture etc. to vanquish poverty and exploitation of people,
> natural resources, life; while being sustainable and benevolent stewards of
> the possibilities of life... would seem to be prerequisite to develop some
> amazing AI.
>
> Ideas are all out there but progressives are ineffective politically on a
> global scale. The right wing folks, finance guys, large irresponsible
> monopolistic corporations are much more effective in organizing themselves
> globally and forcing agendas down everybody's throats. So why wouldn't AI
> do the same? PGC
>
>
AI will either be a blessing or a curse. I don't think it can be anything
in the middle.

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread 'Brent Meeker' via Everything List



On 6/11/2020 9:03 AM, Bruno Marchal wrote:


On 9 Jun 2020, at 19:08, Jason Resch wrote:


For the present discussion/question, I want to ignore the testable 
implications of computationalism on physical law, and instead focus 
on the following idea:


"How can we know if a robot is conscious?”


That question is very different than “is 
functionalism/computationalism unfalsifiable?”.


Note that in my older paper, I relate computationalism to Putnam’s 
ambiguous functionalism, by defining computationalism as asserting the 
existence of a level of description of my body/brain such that I 
survive (my consciousness remains relatively invariant) with a digital 
machine (supposedly physically implemented) replacing my body/brain.






Let's say there are two brains, one biological and one an exact 
computational emulation, meaning exact functional equivalence.


I guess you mean “for all possible inputs”.




Then let's say we can exactly control sensory input and perfectly 
monitor motor control outputs between the two brains.


Given that computationalism implies functional equivalence, then 
identical inputs yield identical internal behavior (nerve 
activations, etc.) and outputs, in terms of muscle movement, facial 
expressions, and speech.


If we stimulate nerves in the person's back to cause pain, and ask 
them both to describe the pain, both will speak identical sentences. 
Both will say it hurts when asked, and if asked to write a paragraph 
describing the pain, will provide identical accounts.


Does the definition of functional equivalence mean that any 
scientific objective third-person analysis or test is doomed to fail 
to find any distinction in behaviors, and thus necessarily fails in 
its ability to disprove consciousness in the functionally equivalent 
robot mind?


With computationalism (and perhaps without), we cannot prove that 
anything is conscious (we can know our own consciousness, but still 
cannot justify it to ourselves in any public way, or third-person 
communicable way).






Is computationalism as far as science can go on a theory of mind 
before it reaches this testing roadblock?


Computationalism is indirectly testable. By verifying the physics 
implied by the theory of consciousness, we verify it indirectly.


As you know, I define consciousness by that indubitable truth that all 
universal machines, cognitively rich enough to know that they are 
universal, find by looking inward (in the Gödel-Kleene sense), and 
which is also non provable (non rationally justifiable) and even non 
definable without invoking *some* notion of truth. Then such 
consciousness appears to be a fixed point for the doubting procedure, 
like in Descartes, and it gets a key role: self-speeding up relative 
to universal machine(s).
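Bruno's "looking inward in the Gödel-Kleene sense" has a concrete computational core: Kleene's second recursion theorem guarantees that programs can operate on their own source code. A minimal witness of such a self-referential fixed point is a quine. The sketch below is an editorial illustration of that theorem, not anything Bruno wrote.

```python
import io
import contextlib

# A two-line program whose output is exactly its own source text -- the
# simplest witness of Kleene's second recursion theorem (a program with
# access to its own description).
quine = 'quine = %r\nprint(quine %% quine)'
program_text = quine % quine          # the full two-line program

# Run the program and capture what it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program_text)

# The program reproduced its own source exactly.
assert buf.getvalue().rstrip("\n") == program_text
print("quine reproduced its own source")
```

The fixed point here is syntactic self-reference only; whether such self-reference has anything to do with consciousness is exactly what the thread is debating.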


So, it seems so clear to me that nobody can prove that anything is 
conscious that I make it into one of the main ways to characterise it.


Of course as a logician you tend to use "proof" to mean deductive 
proof...but then you switch to a theological attitude toward the 
premises you've used and treat them as given truths, instead of mere 
axioms.  I appreciate your categorization of logics of self-reference.  
But I  doubt that it has anything to do with human (or animal) 
consciousness.  I don't think my dog is unconscious because he doesn't 
understand Goedelian incompleteness.  And I'm not conscious because I 
do.  I'm conscious because of the Darwinian utility of being able to 
imagine myself in hypothetical situations.




Consciousness is already very similar to consistency, which is (for 
effective theories, and sound machines) equivalent to a belief in some 
reality. No machine can prove its own consistency, and no machine can 
prove that there is a reality satisfying its beliefs.


First, I can't prove it because such a proof would be relative to 
premises which would simply be my beliefs.  Second, I can prove it in the 
sense of jurisprudence...i.e. beyond reasonable doubt.  Science doesn't 
care about "proofs", only about evidence.


Brent



In any case, it is never the machine per se which is conscious, but 
the first person associated with the machine. There is a core 
universal person common to each of “us” (with “us” in a very large 
sense of universal numbers/machines).


Consciousness is not much more than knowledge, and in particular 
indubitable knowledge.


Bruno





Jason


Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread 'Brent Meeker' via Everything List




On 6/10/2020 8:50 AM, Jason Resch wrote:
Thought perhaps there's an argument to be made from the Church-Turing 
thesis, which pertains to possible states of knowledge accessible to a 
computer program/software. If consciousness is viewed as software then 
the Church-Turing thesis implies that software could never know/realize 
if its ultimate computing substrate changed.


I don't understand the import of this.  The very concept of software 
means "independent of hardware" by definition.  It is not affected by 
whether CT is true or not, or whether the computation is finite or not.  If 
you think that consciousness evolved then it is an obvious inference 
that consciousness would not include consciousness of its hardware 
implementation.


Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 05:25, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>> 
>> 
>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List wrote:
>> 
>> 
>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>> 
>>> 
>>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List wrote:
>>> 
>>> 
>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>>> 
>>>> 
>>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch wrote:
>>>> For the present discussion/question, I want to ignore the testable 
>>>> implications of computationalism on physical law, and instead focus on the 
>>>> following idea:
>>>> 
>>>> "How can we know if a robot is conscious?"
>>>> 
>>>> Let's say there are two brains, one biological and one an exact 
>>>> computational emulation, meaning exact functional equivalence. Then let's 
>>>> say we can exactly control sensory input and perfectly monitor motor 
>>>> control outputs between the two brains.
>>>> 
>>>> Given that computationalism implies functional equivalence, then identical 
>>>> inputs yield identical internal behavior (nerve activations, etc.) and 
>>>> outputs, in terms of muscle movement, facial expressions, and speech.
>>>> 
>>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>>> both to describe the pain, both will speak identical sentences. Both will 
>>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>>> pain, will provide identical accounts.
>>>> 
>>>> Does the definition of functional equivalence mean that any scientific 
>>>> objective third-person analysis or test is doomed to fail to find any 
>>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>>> disprove consciousness in the functionally equivalent robot mind?
>>>> 
>>>> Is computationalism as far as science can go on a theory of mind before it 
>>>> reaches this testing roadblock?
>>>> 
>>>> We can’t know if a particular entity is conscious,
>>> 
>>> If the term means anything, you can know one particular entity is conscious.
>>> 
>>> Yes, I should have added we can’t know that a particular entity other 
>>> than oneself is conscious.
>>>> but we can know that if it is conscious, then a functional equivalent, as 
>>>> you describe, is also conscious.
>>> 
>>> So any entity functionally equivalent to yourself, you must know is 
>>> conscious.  But "functionally equivalent" is vague, ambiguous, and 
>>> certainly needs qualifying by environment and other factors.  Is a dolphin 
>>> functionally equivalent to me?  Not in swimming.
>>> 
>>> Functional equivalence here means that you replace a part with a new part 
>>> that behaves in the same way. So if you replaced the copper wires in a 
>>> computer with silver wires, the silver wires would be functionally 
>>> equivalent, and you would notice no change in using the computer. Copper 
>>> and silver have different physical properties such as conductivity, but the 
>>> replacement would be chosen so that this is not functionally relevant.
>> 
>> But that functional equivalence at a microscopic level is worthless in 
>> judging what entities are conscious.  The whole reason for bringing it up 
>> is that it provides a criterion for recognizing consciousness at the entity 
>> level.
>> 
>> The thought experiment involves removing a part of the brain that would 
>> normally result in an obvious deficit in qualia and replacing it with a 
>> non-biological component that replicates its interactions with the rest of 
>> the brain. Remove the visual cortex, and the subject becomes blind, 
>> staggering around walking into things, saying "I'm blind, I can't see 
>> anything, why have you done this to me?" But if you replace it with an 
>> implant that processes input and sends output to the remaining neural 
>> tissue, the subject will have normal input to his leg muscles and his vocal 
>> cords, so he will be able to navigate his way around a room

Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 04:49, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:
>> 
>> 
>> On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List wrote:
>> 
>> 
>> On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
>>> 
>>> 
>>> On Wed, 10 Jun 2020 at 09:15, Jason Resch wrote:
>>> 
>>> 
>>> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou wrote:
>>> 
>>> 
>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch wrote:
>>> For the present discussion/question, I want to ignore the testable 
>>> implications of computationalism on physical
>>>law, and instead focus on the following idea:
>>> 
>>> "How can we know if a robot is conscious?"
>>> 
>>> Let's say there are two brains, one biological and one an exact 
>>> computational emulation, meaning exact functional equivalence. Then let's 
>>> say we can exactly control sensory input and perfectly monitor motor 
>>> control outputs between the two brains.
>>> 
>>> Given that computationalism implies functional equivalence, then identical 
>>> inputs yield identical internal behavior (nerve activations, etc.) and 
>>> outputs, in terms of muscle movement, facial expressions, and speech.
>>> 
>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>> both to describe the pain, both will speak identical sentences. Both will 
>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>> pain, will provide identical accounts.
>>> 
>>> Does the definition of functional equivalence mean that any scientific 
>>> objective third-person analysis or test is doomed to fail to find any 
>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>> disprove consciousness in the functionally equivalent robot mind?
>>> 
>>> Is computationalism as far as science can go on a theory of mind before it 
>>> reaches this testing roadblock?
>>> 
>>> We can’t know if a particular entity is conscious, but we can know that if 
>>> it is conscious, then a functional equivalent, as you describe, is also 
>>> conscious. This is the subject of David Chalmers’ paper:
>>> 
>>> http://consc.net/papers/qualia.html <http://consc.net/papers/qualia.html>
>>> 
>>> Chalmers' argument is that if a different brain is not conscious, then 
>>> somewhere along the way we get either suddenly disappearing or fading 
>>> qualia, which I agree are philosophically distasteful.
>>> 
>>> But what if someone is fine with philosophical zombies and suddenly 
>>> disappearing qualia? Is there any impossibility proof for such things?
>>> 
>>> Philosophical zombies are less problematic than partial philosophical 
>>> zombies. Partial philosophical zombies would render the idea of qualia 
>>> absurd, because it would mean that we might be completely blind, for 
>>> example, without realising it.
>> 
>> Isn't this what blindsight exemplifies?
>> 
>> Blindsight entails behaving as if you have vision but not believing that you 
>> have vision.
> 
> And you don't believe you have vision because you're missing the qualia of 
> seeing.
> 
>> Anton syndrome entails believing you have vision but not behaving as if you 
>> have vision.
>> Being a partial zombie would entail believing you have vision and behaving 
>> as if you have vision, but not actually having vision. 
> 
> That would be a total zombie with respect to vision.  The person with 
> blindsight is a partial zombie.  They have the function but not the qualia.
> 
>>> As an absolute minimum, although we may not be able to test for or define 
>>> qualia, we should know if we have them. Take this requirement away, and 
>>> there is nothing left.
>>> 
>>> Suddenly disappearing qualia are logically possible but it is difficult to 
>>> imagine how it could work. We would be normally conscious while our neurons 
>>> were being replaced, but when one special glutamate receptor in a special 
>>> neuron in the left parietal lobe was replaced, or when exactly 35.54876% 
>>> replacement of all neurons was rea

Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 01:14, Jason Resch  wrote:
> 
> 
> 
> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou  <mailto:stath...@gmail.com>> wrote:
> 
> 
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  <mailto:jasonre...@gmail.com>> wrote:
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?"
> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence. Then let's say we can 
> exactly control sensory input and perfectly monitor motor control outputs 
> between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
> 
> We can’t know if a particular entity is conscious, but we can know that if it 
> is conscious, then a functional equivalent, as you describe, is also 
> conscious. This is the subject of David Chalmers’ paper:
> 
> http://consc.net/papers/qualia.html <http://consc.net/papers/qualia.html>
> 
> Chalmers' argument is that if a different brain is not conscious, then 
> somewhere along the way we get either suddenly disappearing or fading qualia, 
> which I agree are philosophically distasteful.
> 
> But what if someone is fine with philosophical zombies and suddenly 
> disappearing qualia? Is there any impossibility proof for such things?

This would not make sense with Digital Mechanism. Now, by assuming some 
NON-mechanism, maybe someone can still make sense of this.

That is why qualia and quanta are automatically present in *any* Turing 
universal realm (the model, or semantics, of any Turing universal or sigma_1 
complete theory). That is also why physicalists need to abandon mechanism: it 
forces them to invoke a non-Turing-emulable reality (like a primitive material 
substance) to make consciousness real for some types of universal machine and 
unreal for others. As there is no evidence so far for such primitive matter, 
this is a bit like adding complexity to avoid the consequences of a simpler 
theory.

Bruno



> 
> Jason
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> <mailto:everything-list+unsubscr...@googlegroups.com>.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUjnn2DQwit%2Bj%3DYdXbXZbwHTv_PZa7GRKXwdo31gTAFygg%40mail.gmail.com
>  
> <https://groups.google.com/d/msgid/everything-list/CA%2BBCJUjnn2DQwit%2Bj%3DYdXbXZbwHTv_PZa7GRKXwdo31gTAFygg%40mail.gmail.com?utm_medium=email_source=footer>.



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 01:02, Stathis Papaioannou  wrote:
> 
> 
> 
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  <mailto:jasonre...@gmail.com>> wrote:
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?"
> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence. Then let's say we can 
> exactly control sensory input and perfectly monitor motor control outputs 
> between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
> 
> We can’t know if a particular entity is conscious, but we can know that if it 
> is conscious, then a functional equivalent,

… at some level of description. 

A dreaming human is functionally equivalent to a stone: the first is 
conscious, the other is not. To avoid this, you need to make precise the level 
at which you define the functional equivalence.

Bruno



> as you describe, is also conscious. This is the subject of David Chalmers’ 
> paper:
> 
> http://consc.net/papers/qualia.html <http://consc.net/papers/qualia.html>
> 
> -- 
> Stathis Papaioannou
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 9 Jun 2020, at 19:08, Jason Resch  wrote:
> 
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?”

That question is very different than “is functionalism/computationalism 
unfalsifiable?”.

Note that in my older papers, I relate computationalism to Putnam’s ambiguous 
functionalism, by defining computationalism as the assertion that there exists 
a level of description of my body/brain such that I survive (my consciousness 
remains relatively invariant) when a digital machine (supposedly physically 
implemented) replaces my body/brain.



> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence.

I guess you mean “for all possible inputs”.




> Then let's say we can exactly control sensory input and perfectly monitor 
> motor control outputs between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?

With computationalism (and perhaps without it), we cannot prove that anything 
is conscious: we can know our own consciousness, but we still cannot justify it 
to ourselves in any public, third-person-communicable way. 



> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?

Computationalism is indirectly testable. By verifying the physics implied by 
the theory of consciousness, we verify it indirectly.

As you know, I define consciousness by that indubitable truth which every 
universal machine, cognitively rich enough to know that it is universal, finds 
by looking inward (in the Gödel-Kleene sense), and which is also non-provable 
(not rationally justifiable) and even non-definable without invoking *some* 
notion of truth. Such consciousness then appears to be a fixed point of the 
doubting procedure, as in Descartes, and it gets a key role: self-speeding-up 
relative to the universal machine(s).

So it seems so clear to me that nobody can prove that anything is conscious 
that I make this one of the main ways to characterise it.

Consciousness is already very similar to consistency, which (for effective 
theories and sound machines) is equivalent to a belief in some reality. No 
machine can prove its own consistency, and no machine can prove that there is a 
reality satisfying its beliefs.
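[Editor’s aside: the claim that no machine can prove its own consistency is Gödel’s second incompleteness theorem. For reference, a standard textbook formulation, stated here only as a sketch:]

```latex
% Gödel's second incompleteness theorem:
% for any consistent, effective (recursively axiomatizable) theory T
% extending elementary arithmetic,
\[
  T \;\nvdash\; \mathrm{Con}(T)
\]
% where Con(T) is the arithmetized sentence asserting T's consistency.
% Hence a sound machine may believe in some reality, yet can never
% prove its own consistency, as claimed above.
```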

In all case, it is never the machine per se which is conscious, but the first 
person associated with the machine. There is a core universal person common to 
each of “us” (with “us” in a very large sense of universal numbers/machines).

Consciousness is not much more than knowledge, and in particular indubitable 
knowledge.

Bruno



> 
> Jason
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread PGC


On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact 
> computational emulation, meaning exact functional equivalence. Then let's 
> say we can exactly control sensory input and perfectly monitor motor 
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them 
> both to describe the pain, both will speak identical sentences. Both will 
> say it hurts when asked, and if asked to write a paragraph describing the 
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
>

Every piece of writing is a theory of mind, both within western science and 
beyond. 

What about the abilities to understand and use natural language, to open up 
new avenues for scientific or creative inquiry, to experience qualia and 
report on them, to adapt to unexpected circumstances through the senses, and 
to formulate and solve problems in benevolent ways by contributing to the 
resilience of a community and environment? 

Trouble with this is that humans, even world leaders, fail those tests lol, 
but it's up to everybody, the AI and Computer Science folks in particular, 
to come up with the math, data, and complete their mission... and as 
amazing as developments have been around AI in the last couple of decades, 
I'm not certain we can pull it off, even if it would be pleasant to be 
wrong and some folks succeed. 

Even if folks do succeed, a context of militarized nation states and 
monopolistic corporations competing for resources in self-destructive, 
short term ways... will not exactly help towards NOT weaponizing AI. A 
transnational politics, economics, corporate law, values/philosophies, 
ethics, culture etc. to vanquish poverty and exploitation of people, 
natural resources, life; while being sustainable and benevolent stewards of 
the possibilities of life... would seem to be prerequisite to develop some 
amazing AI. 

Ideas are all out there but progressives are ineffective politically on a 
global scale. The right wing folks, finance guys, large irresponsible 
monopolistic corporations are much more effective in organizing themselves 
globally and forcing agendas down everybody's throats. So why wouldn't AI 
do the same? PGC


 

>
> Jason
>



Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread 'Brent Meeker' via Everything List




On 6/10/2020 7:07 AM, smitra wrote:
I think it can be tested indirectly, because generic computational 
theories of consciousness imply a multiverse. If my consciousness is 
the result of a computation, then because on the one hand any such 
computation necessarily involves a vast number of elementary bits and 
on the other hand whatever I'm conscious of is describable using only a 
handful of bits, the mapping between computational states and states 
of consciousness is N to 1, where N is astronomically large. So, the 
laws of physics we already know about must be effective laws where the 
statistical effects due to a self-localization uncertainty are already 
built in.


That seems to be pulled out of the air.  First, some of the laws of 
physics are not statistical, e.g. those based on symmetries.  They are 
more easily explained as desiderata, i.e. we want our laws of physics to 
be independent of location and direction and time of day.  And N >> 
conscious information simply says there is a lot of physical reality of 
which we are not aware.  It doesn't say that what we have picked out as 
laws are statistical, only that they are not complete...which any 
physicist would admit...and as far as we know they include inherent 
randomness.  To insist that this randomness is statistical is just 
postulating multiple worlds to avoid randomness.




Bruno has argued on the basis of this to motivate his theory, but this 
is a generic feature of any theory that assumes computational theory 
of consciousness. In particular, computational theory of consciousness 
is incompatible with a single universe theory. So, if you prove that 
only a single universe exists, then that disproves the computational 
theory of consciousness. 


No, see above.

The details here involve the fact that computations are not well defined 
if you refer to a single instant of time; you need to at least appeal 
to a sequence of states the system goes through. Consciousness cannot 
then be located at a single instant, in conflict with our own 
experience. 


I deny that our experience consists of instants without duration or 
direction.  This is an assumption computationalists make to simplify 
their analysis.


Brent

Therefore either single-world theories are false or the computational 
theory of consciousness is false. 





Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread Stathis Papaioannou
On Thu, 11 Jun 2020 at 01:50, Jason Resch  wrote:

>
>
> On Tuesday, June 9, 2020, Stathis Papaioannou  wrote:
>
>>
>>
>> On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>>
>>>
>>> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
>>> everything-list@googlegroups.com> wrote:
>>>
>>>>
>>>>
>>>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>>>
>>>>
>>>>
>>>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
>>>> everything-list@googlegroups.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch 
>>>>> wrote:
>>>>>
>>>>>> For the present discussion/question, I want to ignore the testable
>>>>>> implications of computationalism on physical law, and instead focus on 
>>>>>> the
>>>>>> following idea:
>>>>>>
>>>>>> "How can we know if a robot is conscious?"
>>>>>>
>>>>>> Let's say there are two brains, one biological and one an exact
>>>>>> computational emulation, meaning exact functional equivalence. Then let's
>>>>>> say we can exactly control sensory input and perfectly monitor motor
>>>>>> control outputs between the two brains.
>>>>>>
>>>>>> Given that computationalism implies functional equivalence, then
>>>>>> identical inputs yield identical internal behavior (nerve activations,
>>>>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>>>>> speech.
>>>>>>
>>>>>> If we stimulate nerves in the person's back to cause pain, and ask
>>>>>> them both to describe the pain, both will speak identical sentences. Both
>>>>>> will say it hurts when asked, and if asked to write a paragraph
>>>>>> describing the pain, will provide identical accounts.
>>>>>>
>>>>>> Does the definition of functional equivalence mean that any
>>>>>> scientific objective third-person analysis or test is doomed to fail to
>>>>>> find any distinction in behaviors, and thus necessarily fails in its
>>>>>> ability to disprove consciousness in the functionally equivalent robot 
>>>>>> mind?
>>>>>>
>>>>>> Is computationalism as far as science can go on a theory of mind
>>>>>> before it reaches this testing roadblock?
>>>>>>
>>>>>
>>>>> We can’t know if a particular entity is conscious,
>>>>>
>>>>>
>>>>> If the term means anything, you can know one particular entity is
>>>>> conscious.
>>>>>
>>>>
>>>> Yes, I should have added that we can’t know that a particular entity
>>>> other than oneself is conscious.
>>>>
>>>>> but we can know that if it is conscious, then a functional equivalent,
>>>>> as you describe, is also conscious.
>>>>>
>>>>>
>>>>> So any entity functionally equivalent to yourself, you must know is
>>>>> conscious.  But "functionally equivalent" is vague, ambiguous, and
>>>>> certainly needs qualifying by environment and other factors.  Is a dolphin
>>>>> functionally equivalent to me.  Not in swimming.
>>>>>
>>>>
>>>> Functional equivalence here means that you replace a part with a new
>>>> part that behaves in the same way. So if you replaced the copper wires in a
>>>> computer with silver wires, the silver wires would be functionally
>>>> equivalent, and you would notice no change in using the computer. Copper
>>>> and silver have different physical properties such as conductivity, but the
>>>> replacement would be chosen so that this is not functionally relevant.
>>>>
>>>>
>>>> But that functional equivalence at a microscopic level is worthless in
>>>> judging what entities are conscious.  

Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread Jason Resch
On Wednesday, June 10, 2020, smitra  wrote:

> On 09-06-2020 19:08, Jason Resch wrote:
>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on
>> the following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then
>> let's say we can exactly control sensory input and perfectly monitor
>> motor control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions,
>> and speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask
>> them both to describe the pain, both will speak identical sentences.
>> Both will say it hurts when asked, and if asked to write a paragraph
>> describing the pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind
>> before it reaches this testing roadblock?
>>
>>
>
> I think it can be tested indirectly, because generic computational
> theories of consciousness imply a multiverse. If my consciousness is the
> result of a computation, then because on the one hand any such computation
> necessarily involves a vast number of elementary bits and on the other hand
> whatever I'm conscious of is describable using only a handful of bits, the
> mapping between computational states and states of consciousness is N to 1,
> where N is astronomically large. So, the laws of physics we already know
> about must be effective laws where the statistical effects due to a
> self-localization uncertainty are already built in.
>
> Bruno has argued on the basis of this to motivate his theory, but this is
> a generic feature of any theory that assumes computational theory of
> consciousness. In particular, computational theory of consciousness is
> incompatible with a single universe theory. So, if you prove that only a
> single universe exists, then that disproves the computational theory of
> consciousness. The details here involve the fact that computations are not
> well defined if you refer to a single instant of time; you need to at least
> appeal to a sequence of states the system goes through. Consciousness
> cannot then be located at a single instant, in conflict with our own
> experience. Therefore either single-world theories are false or
> computational theory of consciousness is false.
>
> Saibal
>
>
Hi Saibal,

I agree indirect mechanisms, like looking at the resulting physics, may be
the best way to test it. I was curious whether there are any direct ways to
test it. It seems not, given the lack of any direct tests of consciousness.
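[Editor’s aside: the roadblock can be put operationally. If two systems realize the same input/output mapping, no third-person behavioral experiment can separate them. A minimal sketch; the toy "brains" and their responses are invented for illustration, not a model of anything real:]

```python
import random

def biological_brain(stimulus: str) -> str:
    # Toy stand-in for the biological subject's verbal report.
    return f"that {stimulus} hurts"

def emulated_brain(stimulus: str) -> str:
    # Exact functional equivalent: same input/output mapping by construction.
    return f"that {stimulus} hurts"

def third_person_test(trials: int = 1000) -> bool:
    """Return True if any behavioral difference is ever observed."""
    stimuli = ["pinprick", "heat", "pressure"]
    return any(
        biological_brain(s) != emulated_brain(s)
        for s in (random.choice(stimuli) for _ in range(trials))
    )

# By construction the test can never distinguish the two systems,
# whatever consciousness the emulation does or does not have.
print(third_person_test())  # → False
```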

Though most people admit other humans are conscious, many would reject the
idea of a conscious computer.

Computationalism seems right, but it also seems like something that by
definition can't result in a failed test. So it has the appearance of not
being falsifiable.

A single universe, or digital physics would be evidence that either
computationalism is false or the ontology is sufficiently small, but a
finite/small ontology is doubtful for many reasons.
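[Editor’s aside: the counting step in Saibal’s argument quoted above is simple arithmetic, and a toy version makes the scale explicit. The bit counts below are illustrative assumptions, not measurements:]

```python
# Toy version of the N-to-1 counting argument (illustrative numbers only).
N = 10**15  # bits involved in a brain-scale computation (assumed magnitude)
k = 100     # bits describing the current conscious content (assumed)

# Each k-bit conscious state is compatible with ~2^(N - k) distinct
# computational microstates, so the mapping is astronomically many-to-one.
log2_multiplicity = N - k
print(f"~2^{log2_multiplicity} computational states per conscious state")
```

On this picture, self-location among those indistinguishable microstates is what would build statistical effects into the effective physical laws.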

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread Jason Resch
On Tuesday, June 9, 2020, Stathis Papaioannou  wrote:

>
>
> On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>>
>>>
>>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
>>> everything-list@googlegroups.com> wrote:
>>>
>>>>
>>>>
>>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>>>
>>>>
>>>>
>>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>>>
>>>>> For the present discussion/question, I want to ignore the testable
>>>>> implications of computationalism on physical law, and instead focus on the
>>>>> following idea:
>>>>>
>>>>> "How can we know if a robot is conscious?"
>>>>>
>>>>> Let's say there are two brains, one biological and one an exact
>>>>> computational emulation, meaning exact functional equivalence. Then let's
>>>>> say we can exactly control sensory input and perfectly monitor motor
>>>>> control outputs between the two brains.
>>>>>
>>>>> Given that computationalism implies functional equivalence, then
>>>>> identical inputs yield identical internal behavior (nerve activations,
>>>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>>>> speech.
>>>>>
>>>>> If we stimulate nerves in the person's back to cause pain, and ask
>>>>> them both to describe the pain, both will speak identical sentences. Both
>>>>> will say it hurts when asked, and if asked to write a paragraph
>>>>> describing the pain, will provide identical accounts.
>>>>>
>>>>> Does the definition of functional equivalence mean that any scientific
>>>>> objective third-person analysis or test is doomed to fail to find any
>>>>> distinction in behaviors, and thus necessarily fails in its ability to
>>>>> disprove consciousness in the functionally equivalent robot mind?
>>>>>
>>>>> Is computationalism as far as science can go on a theory of mind
>>>>> before it reaches this testing roadblock?
>>>>>
>>>>
>>>> We can’t know if a particular entity is conscious,
>>>>
>>>>
>>>> If the term means anything, you can know one particular entity is
>>>> conscious.
>>>>
>>>
>>> Yes, I should have added that we can’t know that a particular entity
>>> other than oneself is conscious.
>>>
>>>> but we can know that if it is conscious, then a functional equivalent,
>>>> as you describe, is also conscious.
>>>>
>>>>
>>>> So any entity functionally equivalent to yourself, you must know is
>>>> conscious.  But "functionally equivalent" is vague, ambiguous, and
>>>> certainly needs qualifying by environment and other factors.  Is a dolphin
>>>> functionally equivalent to me.  Not in swimming.
>>>>
>>>
>>> Functional equivalence here means that you replace a part with a new
>>> part that behaves in the same way. So if you replaced the copper wires in a
>>> computer with silver wires, the silver wires would be functionally
>>> equivalent, and you would notice no change in using the computer. Copper
>>> and silver have different physical properties such as conductivity, but the
>>> replacement would be chosen so that this is not functionally relevant.
>>>
>>>
>>> But that functional equivalence at a microscopic level is worthless in
>>> judging what entities are conscious. The whole reason for bringing it up
>>> is that it provides a criterion for recognizing consciousness at the entity
>>> level.
>>>
>>
>> The thought experiment involves removing a part of the brain that would
>> normally result in an obvious deficit in qualia and replacing it with a
>> non-biological component that replicates its interactions with the rest of
>> the brain. Remove the visual cortex, and the subject becomes blind,
>>

Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread smitra

On 09-06-2020 19:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence. Then
let's say we can exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve activations,
etc.) and outputs, in terms of muscle movement, facial expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical sentences.
Both will say it hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific
objective third-person analysis or test is doomed to fail to find any
distinction in behaviors, and thus necessarily fails in its ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?




I think it can be tested indirectly, because generic computational 
theories of consciousness imply a multiverse. If my consciousness is the 
result of a computation, then, because on the one hand any such 
computation necessarily involves a vast number of elementary bits, and on 
the other hand whatever I'm conscious of is describable using only a 
handful of bits, the mapping between computational states and states of 
consciousness is N to 1, where N is astronomically large. So, the laws of 
physics we already know about must be effective laws in which the 
statistical effects due to self-localization uncertainty are already 
built in.
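The N-to-1 counting can be made concrete with a toy sketch. Everything here is invented for illustration: the bit counts are tiny stand-ins, and the coarse-graining (keeping the top bits) is an arbitrary placeholder for the real, unknown mapping from computational states to conscious states. The only point is the counting of preimages.

```python
from collections import Counter

# Toy model: a "computational state" is a 16-bit string, while the
# "state of consciousness" it supports is described by only 3 bits.
N_MICRO_BITS = 16
N_MACRO_BITS = 3

def conscious_state(computational_state: int) -> int:
    # Hypothetical many-to-one map: discard all but the top 3 bits.
    return computational_state >> (N_MICRO_BITS - N_MACRO_BITS)

# Count how many computational microstates realize each conscious state.
preimages = Counter(conscious_state(s) for s in range(2 ** N_MICRO_BITS))

# Every 3-bit conscious state is realized by N = 2^13 distinct
# computational states, so an observer who knows only their conscious
# state has a self-localization uncertainty over N microstates.
N = 2 ** (N_MICRO_BITS - N_MACRO_BITS)
assert all(count == N for count in preimages.values())
print(len(preimages), "conscious states, each realized by", N, "computational states")
```

With realistic numbers (billions of neurons versus a handful of bits of reportable content), N becomes astronomically large, which is the source of the statistical effects mentioned above.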


Bruno has argued on this basis to motivate his theory, but it is a 
generic feature of any theory that assumes a computational theory of 
consciousness. In particular, the computational theory of consciousness is 
incompatible with a single-universe theory. So, if you prove that only a 
single universe exists, that disproves the computational theory of 
consciousness. The detail here is that computations are not well defined 
if you refer to a single instant of time; you need to appeal at least to a 
sequence of states that the system goes through. Consciousness could not 
then be located at a single instant, in conflict with our own experience. 
Therefore either single-world theories are false or the computational 
theory of consciousness is false.
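The point that a computation is not pinned down by a single instant can be illustrated with a toy sketch (the transition rules and states below are invented): two different computations can pass through the same instantaneous state, so a snapshot alone does not determine which computation is being run.

```python
# Two different "computations", i.e. state-transition rules over a toy
# state space of integers mod 17.
def rule_double(s: int) -> int:
    return (2 * s) % 17

def rule_increment(s: int) -> int:
    return (s + 3) % 17

def trajectory(rule, start: int, steps: int) -> list:
    # A computation is identified with the sequence of states it visits,
    # not with any single state.
    states = [start]
    for _ in range(steps):
        states.append(rule(states[-1]))
    return states

a = trajectory(rule_double, 5, 6)      # [5, 10, 3, 6, 12, 7, 14]
b = trajectory(rule_increment, 7, 6)   # [7, 10, 13, 16, 2, 5, 8]

# Both computations contain the instantaneous state 10, yet they are
# different computations: the snapshot alone cannot tell you which
# sequence it belongs to.
assert 10 in a and 10 in b
assert a != b
```

This is why, on the computational theory, whatever consciousness supervenes on must extend over a sequence of states rather than a single instant.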


Saibal

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/52e8aebc910df25ae02bcd105dcf1762%40zonnet.nl.


Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>>
>>>
>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>>
>>>> For the present discussion/question, I want to ignore the testable
>>>> implications of computationalism on physical law, and instead focus on the
>>>> following idea:
>>>>
>>>> "How can we know if a robot is conscious?"
>>>>
>>>> Let's say there are two brains, one biological and one an exact
>>>> computational emulation, meaning exact functional equivalence. Then let's
>>>> say we can exactly control sensory input and perfectly monitor motor
>>>> control outputs between the two brains.
>>>>
>>>> Given that computationalism implies functional equivalence, then
>>>> identical inputs yield identical internal behavior (nerve activations,
>>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>>> speech.
>>>>
>>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>>> both to describe the pain, both will speak identical sentences. Both will
>>>> say it hurts when asked, and if asked to write a paragraph describing the
>>>> pain, will provide identical accounts.
>>>>
>>>> Does the definition of functional equivalence mean that any scientific
>>>> objective third-person analysis or test is doomed to fail to find any
>>>> distinction in behaviors, and thus necessarily fails in its ability to
>>>> disprove consciousness in the functionally equivalent robot mind?
>>>>
>>>> Is computationalism as far as science can go on a theory of mind before
>>>> it reaches this testing roadblock?
>>>>
>>>
>>> We can’t know if a particular entity is conscious,
>>>
>>>
>>> If the term means anything, you can know one particular entity is
>>> conscious.
>>>
>>
>> Yes, I should have added we can’t know that a particular entity
>> other than oneself is conscious.
>>
>>> but we can know that if it is conscious, then a functional equivalent,
>>> as you describe, is also conscious.
>>>
>>>
>>> So any entity functionally equivalent to yourself, you must know is
>>> conscious.  But "functionally equivalent" is vague, ambiguous, and
>>> certainly needs qualifying by environment and other factors.  Is a dolphin
>> functionally equivalent to me? Not in swimming.
>>>
>>
>> Functional equivalence here means that you replace a part with a new part
>> that behaves in the same way. So if you replaced the copper wires in a
>> computer with silver wires, the silver wires would be functionally
>> equivalent, and you would notice no change in using the computer. Copper
>> and silver have different physical properties such as conductivity, but the
>> replacement would be chosen so that this is not functionally relevant.
>>
>>
>> But that functional equivalence at a microscopic level is worthless in
>> judging what entities are conscious. The whole reason for bringing it up
>> is that it provides a criterion for recognizing consciousness at the entity
>> level.
>>
>
> The thought experiment involves removing a part of the brain that would
> normally result in an obvious deficit in qualia and replacing it with a
> non-biological component that replicates its interactions with the rest of
> the brain. Remove the visual cortex, and the subject becomes blind,
> staggering around walking into things, saying "I'm blind, I can't see
> anything, why have you done this to me?" But if you replace it with an
> implant that processes input and sends output to the remaining neural
> tissue, the subject will have normal input to his leg muscles and his vocal
> cords, so he will be able to navigate his way around a room and will say "I
> can see everything normally, I feel just the same as befo

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:




On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List
<everything-list@googlegroups.com> wrote:



On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch
<jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore
the testable implications of computationalism on
physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one
an exact computational emulation, meaning
exact functional equivalence. Then let's say we can
exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional
equivalence, then identical inputs yield identical
internal behavior (nerve activations, etc.) and outputs,
in terms of muscle movement, facial expressions, and
speech.

If we stimulate nerves in the person's back to cause
pain, and ask them both to describe the pain, both will
speak identical sentences. Both will say it hurts when
asked, and if asked to write a paragraph describing the
pain, will provide identical accounts.

Does the definition of functional equivalence mean that
any scientific objective third-person analysis or test
is doomed to fail to find any distinction in behaviors,
and thus necessarily fails in its ability to disprove
consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory
of mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious,


If the term means anything, you can know one particular
entity is conscious.


Yes, I should have added we can’t know that a particular
entity other than oneself is conscious.


but we can know that if it is conscious, then a functional
equivalent, as you describe, is also conscious.


So any entity functionally equivalent to yourself, you must
know is conscious.  But "functionally equivalent" is vague,
ambiguous, and certainly needs qualifying by environment and
other factors.  Is a dolphin functionally equivalent to me?
Not in swimming.


Functional equivalence here means that you replace a part with a
new part that behaves in the same way. So if you replaced the
copper wires in a computer with silver wires, the silver wires
would be functionally equivalent, and you would notice no change
in using the computer. Copper and silver have different physical
properties such as conductivity, but the replacement would be
chosen so that this is not functionally relevant.


But that functional equivalence at a microscopic level is
worthless in judging what entities are conscious. The whole
reason for bringing it up is that it provides a criterion for
recognizing consciousness at the entity level.


The thought experiment involves removing a part of the brain that 
would normally result in an obvious deficit in qualia and replacing it 
with a non-biological component that replicates its interactions with 
the rest of the brain. Remove the visual cortex, and the subject 
becomes blind, staggering around walking into things, saying "I'm 
blind, I can't see anything, why have you done this to me?" But if you 
replace it with an implant that processes input and sends output to 
the remaining neural tissue, the subject will have normal input to his 
leg muscles and his vocal cords, so he will be able to navigate his 
way around a room and will say "I can see everything normally, I feel 
just the same as before". This follows necessarily from the 
assumptions. But does it also follow that the subject will have normal 
visual qualia? If not, something very strange would be happening: he 
would be blind, but would behave normally, including his behaviour in 
communicating that everything feels normal.


I understand the "Yes doctor" experiment.  But Jason was asking about 
being able to recognize consciousness by the function of the entity, and I 
think that is a different problem, one that needs to take into account the 
possibility of different kinds and degrees of consciousness.  The YD 
question makes it binary by equating consciousness with being exactly the 
same as pre-doctor.

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 12:49, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 09:15, Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>>>
>>>>> For the present discussion/question, I want to ignore the testable
>>>>> implications of computationalism on physical law, and instead focus on the
>>>>> following idea:
>>>>>
>>>>> "How can we know if a robot is conscious?"
>>>>>
>>>>> Let's say there are two brains, one biological and one an exact
>>>>> computational emulation, meaning exact functional equivalence. Then let's
>>>>> say we can exactly control sensory input and perfectly monitor motor
>>>>> control outputs between the two brains.
>>>>>
>>>>> Given that computationalism implies functional equivalence, then
>>>>> identical inputs yield identical internal behavior (nerve activations,
>>>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>>>> speech.
>>>>>
>>>>> If we stimulate nerves in the person's back to cause pain, and ask
>>>>> them both to describe the pain, both will speak identical sentences. Both
>>>>> will say it hurts when asked, and if asked to write a paragraph
>>>>> describing the pain, will provide identical accounts.
>>>>>
>>>>> Does the definition of functional equivalence mean that any scientific
>>>>> objective third-person analysis or test is doomed to fail to find any
>>>>> distinction in behaviors, and thus necessarily fails in its ability to
>>>>> disprove consciousness in the functionally equivalent robot mind?
>>>>>
>>>>> Is computationalism as far as science can go on a theory of mind
>>>>> before it reaches this testing roadblock?
>>>>>
>>>>
>>>> We can’t know if a particular entity is conscious, but we can know that
>>>> if it is conscious, then a functional equivalent, as you describe, is also
>>>> conscious. This is the subject of David Chalmers’ paper:
>>>>
>>>> http://consc.net/papers/qualia.html
>>>>
>>>
>>> Chalmers' argument is that if a different brain is not conscious, then
>>> somewhere along the way we get either suddenly disappearing or fading
>>> qualia, which I agree are philosophically distasteful.
>>>
>>> But what if someone is fine with philosophical zombies and suddenly
>>> disappearing qualia? Is there any impossibility proof for such things?
>>>
>>
>> Philosophical zombies are less problematic than partial philosophical
>> zombies. Partial philosophical zombies would render the idea of qualia
>> absurd, because it would mean that we might be completely blind, for
>> example, without realising it.
>>
>>
>> Isn't this what blindsight exemplifies?
>>
>
> Blindsight entails behaving as if you have vision but not believing that
> you have vision.
>
>
> And you don't believe you have vision because you're missing the qualia of
> seeing.
>
> Anton syndrome entails believing you have vision but not behaving as if
> you have vision.
> Being a partial zombie would entail believing you have vision and behaving
> as if you have vision, but not actually having vision.
>
>
> That would be a total zombie with respect to vision.  The person with
> blindsight is a partial zombie.  They have the function but not the qualia.
>
> As an absolute minimum, although we may not be able to test for or define
>> qualia, we should know if we have them. Take this requirement away, and
>> there is nothing left.
>>
>> Suddenly disappearing qualia are logically possible but it is difficult
>> to imagine how it could work. We would be normally conscious while our
>> neurons were being replaced, but when one special glutamate receptor in a
>> special neuron in the left parietal lobe was replaced, or when exactly
>> 35.54876% re

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>
>>> For the present discussion/question, I want to ignore the testable
>>> implications of computationalism on physical law, and instead focus on the
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact
>>> computational emulation, meaning exact functional equivalence. Then let's
>>> say we can exactly control sensory input and perfectly monitor motor
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then
>>> identical inputs yield identical internal behavior (nerve activations,
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>> both to describe the pain, both will speak identical sentences. Both will
>>> say it hurts when asked, and if asked to write a paragraph describing the
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific
>>> objective third-person analysis or test is doomed to fail to find any
>>> distinction in behaviors, and thus necessarily fails in its ability to
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before
>>> it reaches this testing roadblock?
>>>
>>
>> We can’t know if a particular entity is conscious,
>>
>>
>> If the term means anything, you can know one particular entity is
>> conscious.
>>
>
> Yes, I should have added we can’t know that a particular entity other
> than oneself is conscious.
>
>> but we can know that if it is conscious, then a functional equivalent, as
>> you describe, is also conscious.
>>
>>
>> So any entity functionally equivalent to yourself, you must know is
>> conscious.  But "functionally equivalent" is vague, ambiguous, and
>> certainly needs qualifying by environment and other factors.  Is a dolphin
>> functionally equivalent to me? Not in swimming.
>>
>
> Functional equivalence here means that you replace a part with a new part
> that behaves in the same way. So if you replaced the copper wires in a
> computer with silver wires, the silver wires would be functionally
> equivalent, and you would notice no change in using the computer. Copper
> and silver have different physical properties such as conductivity, but the
> replacement would be chosen so that this is not functionally relevant.
>
>
> But that functional equivalence at a microscopic level is worthless in
> judging what entities are conscious. The whole reason for bringing it up
> is that it provides a criterion for recognizing consciousness at the entity
> level.
>

The thought experiment involves removing a part of the brain that would
normally result in an obvious deficit in qualia and replacing it with a
non-biological component that replicates its interactions with the rest of
the brain. Remove the visual cortex, and the subject becomes blind,
staggering around walking into things, saying "I'm blind, I can't see
anything, why have you done this to me?" But if you replace it with an
implant that processes input and sends output to the remaining neural
tissue, the subject will have normal input to his leg muscles and his vocal
cords, so he will be able to navigate his way around a room and will say "I
can see everything normally, I feel just the same as before". This follows
necessarily from the assumptions. But does it also follow that the subject
will have normal visual qualia? If not, something very strange would be
happening: he would be blind, but would behave normally, including his
behaviour in communicating that everything feels normal.
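The structure of the replacement argument can be sketched in a few lines of code. Everything here is a deliberately crude invention (the "cortex" transfer function, the implant, the report rule): the point is only that when the rest of the system depends solely on a part's input-output behaviour, a functionally equivalent replacement leaves every downstream output, including verbal reports, unchanged by construction.

```python
from typing import Callable

def biological_cortex(light: int) -> int:
    # Arbitrary stand-in transfer function for the original part.
    return light * 2 + 1

def silicon_implant(light: int) -> int:
    # Different construction, identical input-output behaviour.
    return 2 * light + 1

def rest_of_brain(cortex: Callable[[int], int], light: int) -> str:
    # The rest of the system sees only the part's output signal.
    signal = cortex(light)
    return "I can see everything normally" if signal > 0 else "I'm blind"

# For every stimulus, the verbal report is identical with either part,
# so no third-person test on behaviour can distinguish the two cases.
for stimulus in range(10):
    assert (rest_of_brain(biological_cortex, stimulus)
            == rest_of_brain(silicon_implant, stimulus))
```

This is the testing roadblock in Jason's question: because the equality holds by construction, the presence or absence of visual qualia in the implant case makes no difference to any observable output.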


-- 
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:




On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:15, Jason Resch <jasonre...@gmail.com> wrote:



On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou
<stath...@gmail.com> wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch
<jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore
the testable implications of computationalism on
physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and
one an exact computational emulation, meaning
exact functional equivalence. Then let's say we can
exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

    Given that computationalism implies functional
equivalence, then identical inputs yield identical
internal behavior (nerve activations, etc.) and
outputs, in terms of muscle movement, facial
expressions, and speech.

If we stimulate nerves in the person's back to cause
pain, and ask them both to describe the pain, both
will speak identical sentences. Both will say it
hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean
that any scientific objective third-person analysis
or test is doomed to fail to find any distinction in
behaviors, and thus necessarily fails in its ability
to disprove consciousness in the functionally
equivalent robot mind?

Is computationalism as far as science can go on a
theory of mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious, but we
can know that if it is conscious, then a functional
equivalent, as you describe, is also conscious. This is
the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


Chalmers' argument is that if a different brain is not
conscious, then somewhere along the way we get either
suddenly disappearing or fading qualia, which I agree are
philosophically distasteful.

But what if someone is fine with philosophical zombies and
suddenly disappearing qualia? Is there any impossibility
proof for such things?


Philosophical zombies are less problematic than partial
philosophical zombies. Partial philosophical zombies would render
the idea of qualia absurd, because it would mean that we might be
completely blind, for example, without realising it.


Isn't this what blindsight exemplifies?


Blindsight entails behaving as if you have vision but not believing 
that you have vision.


And you don't believe you have vision because you're missing the qualia 
of seeing.


Anton syndrome entails believing you have vision but not behaving as 
if you have vision.
Being a partial zombie would entail believing you have vision and 
behaving as if you have vision, but not actually having vision.


That would be a total zombie with respect to vision.  The person with 
blindsight is a partial zombie.  They have the function but not the qualia.



As an absolute minimum, although we may not be able to test for
or define qualia, we should know if we have them. Take this
requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is
difficult to imagine how it could work. We would be normally
conscious while our neurons were being replaced, but when one
special glutamate receptor in a special neuron in the left
parietal lobe was replaced, or when exactly 35.54876% replacement
of all neurons was reached, the internal lights would suddenly go
out.


I think this all-or-nothing is misconceived.  It's not internal
cognition that might vanish suddenly, it's some specific aspect of
experience: There are people who, thru brain injury, lose the
ability to recognize faces...recognition is a qualia.   Of course
people's frequency range of hearing fades (don't ask me how I
know).  My mother, when she was 95 lost color vision in one eye,
but not the other.  Some people, it seems cannot do higher
mathematics.  So how would you know if you lost the qual

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 09:15, Jason Resch  wrote:
>
>>
>>
>> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>>
>>>> For the present discussion/question, I want to ignore the testable
>>>> implications of computationalism on physical law, and instead focus on the
>>>> following idea:
>>>>
>>>> "How can we know if a robot is conscious?"
>>>>
>>>> Let's say there are two brains, one biological and one an exact
>>>> computational emulation, meaning exact functional equivalence. Then let's
>>>> say we can exactly control sensory input and perfectly monitor motor
>>>> control outputs between the two brains.
>>>>
>>>> Given that computationalism implies functional equivalence, then
>>>> identical inputs yield identical internal behavior (nerve activations,
>>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>>> speech.
>>>>
>>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>>> both to describe the pain, both will speak identical sentences. Both will
>>>> say it hurts when asked, and if asked to write a paragraph describing the
>>>> pain, will provide identical accounts.
>>>>
>>>> Does the definition of functional equivalence mean that any scientific
>>>> objective third-person analysis or test is doomed to fail to find any
>>>> distinction in behaviors, and thus necessarily fails in its ability to
>>>> disprove consciousness in the functionally equivalent robot mind?
>>>>
>>>> Is computationalism as far as science can go on a theory of mind before
>>>> it reaches this testing roadblock?
>>>>
>>>
>>> We can’t know if a particular entity is conscious, but we can know that
>>> if it is conscious, then a functional equivalent, as you describe, is also
>>> conscious. This is the subject of David Chalmers’ paper:
>>>
>>> http://consc.net/papers/qualia.html
>>>
>>
>> Chalmers' argument is that if a different brain is not conscious, then
>> somewhere along the way we get either suddenly disappearing or fading
>> qualia, which I agree are philosophically distasteful.
>>
>> But what if someone is fine with philosophical zombies and suddenly
>> disappearing qualia? Is there any impossibility proof for such things?
>>
>
> Philosophical zombies are less problematic than partial philosophical
> zombies. Partial philosophical zombies would render the idea of qualia
> absurd, because it would mean that we might be completely blind, for
> example, without realising it.
>
>
> Isn't this what blindsight exemplifies?
>

Blindsight entails behaving as if you have vision but not believing that
you have vision.
Anton syndrome entails believing you have vision but not behaving as if you
have vision.
Being a partial zombie would entail believing you have vision and behaving
as if you have vision, but not actually having vision.

> As an absolute minimum, although we may not be able to test for or define
> qualia, we should know if we have them. Take this requirement away, and
> there is nothing left.
>
> Suddenly disappearing qualia are logically possible but it is difficult to
> imagine how it could work. We would be normally conscious while our neurons
> were being replaced, but when one special glutamate receptor in a special
> neuron in the left parietal lobe was replaced, or when exactly 35.54876%
> replacement of all neurons was reached, the internal lights would suddenly
> go out.
>
>
> I think this all-or-nothing is misconceived.  It's not internal cognition
> that might vanish suddenly, it's some specific aspect of experience: There
> are people who, thru brain injury, lose the ability to recognize
> faces...recognition is a qualia.   Of course people's frequency range of
> hearing fades (don't ask me how I know).  My mother, when she was 95 lost
> color vision in one eye, but not the other.  Some people, it seems cannot
> do higher mathematics.  So how would you know if you lost the qualia of
> empathy for example?  Could it not just fade...i.e. become evoked less and
> less?
>

I don't believe suddenly disappearing qualia can happen, but either this -
leading to full zombiehood - or fading qualia - leading to partial
zombiehood - would be a consequence of replacement of the brain if
behaviour could be replicated without replicating qualia.


-- 
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:




On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore the
testable implications of computationalism on physical law,
and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an
exact computational emulation, meaning exact functional
equivalence. Then let's say we can exactly control sensory
input and perfectly monitor motor control outputs between the
two brains.

Given that computationalism implies functional equivalence,
then identical inputs yield identical internal behavior
(nerve activations, etc.) and outputs, in terms of muscle
movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain,
and ask them both to describe the pain, both will speak
identical sentences. Both will say it hurts when asked, and
if asked to write a paragraph describing the pain, will
provide identical accounts.

Does the definition of functional equivalence mean that any
scientific objective third-person analysis or test is doomed
to fail to find any distinction in behaviors, and thus
necessarily fails in its ability to disprove consciousness in
the functionally equivalent robot mind?

    Is computationalism as far as science can go on a theory of
mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious,


If the term means anything, you can know one particular entity is
conscious.


Yes, I should have added we can’t know that a particular entity 
other than oneself is conscious.



but we can know that if it is conscious, then a functional
equivalent, as you describe, is also conscious.


So any entity functionally equivalent to yourself, you must know
is conscious.  But "functionally equivalent" is vague, ambiguous,
and certainly needs qualifying by environment and other factors. 
Is a dolphin functionally equivalent to me? Not in swimming.


Functional equivalence here means that you replace a part with a new 
part that behaves in the same way. So if you replaced the copper wires 
in a computer with silver wires, the silver wires would be 
functionally equivalent, and you would notice no change in using the 
computer. Copper and silver have different physical properties such as 
conductivity, but the replacement would be chosen so that this is not 
functionally relevant.


But that functional equivalence at a microscopic level is worthless in 
judging what entities are conscious.    The whole reason for bringing it 
up is that it provides a criterion for recognizing consciousness at the 
entity level.


And even at the microscopic level functional equivalence is ambiguous.  
The difference in conductivity between copper and silver might not make 
any difference 99.9% of the time, but in some circumstances it might make 
a difference.  Or there might be incidental effects due to the 
difference in corrosion that would show up in 20 years but not sooner.


Brent


This is the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


--
Stathis Papaioannou


Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:15, Jason Resch <jasonre...@gmail.com> wrote:




On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stath...@gmail.com> wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore the
testable implications of computationalism on physical law,
and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an
exact computational emulation, meaning exact functional
equivalence. Then let's say we can exactly control sensory
input and perfectly monitor motor control outputs between
the two brains.

Given that computationalism implies functional
equivalence, then identical inputs yield identical
internal behavior (nerve activations, etc.) and outputs,
in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain,
and ask them both to describe the pain, both will speak
identical sentences. Both will say it hurts when asked,
and if asked to write a paragraph describing the pain,
will provide identical accounts.

Does the definition of functional equivalence mean that
any scientific objective third-person analysis or test is
doomed to fail to find any distinction in behaviors, and
thus necessarily fails in its ability to disprove
consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory
of mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious, but we can
know that if it is conscious, then a functional equivalent, as
you describe, is also conscious. This is the subject of David
Chalmers’ paper:

http://consc.net/papers/qualia.html


Chalmers' argument is that if a different brain is not conscious,
then somewhere along the way we get either suddenly disappearing
or fading qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and
suddenly disappearing qualia? Is there any impossibility proof for
such things?


Philosophical zombies are less problematic than partial philosophical 
zombies. Partial philosophical zombies would render the idea of qualia 
absurd, because it would mean that we might be completely blind, 
for example, without realising it.


Isn't this what blindsight exemplifies?

As an absolute minimum, although we may not be able to test for or 
define qualia, we should know if we have them. Take this requirement 
away, and there is nothing left.


Suddenly disappearing qualia are logically possible but it is 
difficult to imagine how it could work. We would be normally conscious 
while our neurons were being replaced, but when one special glutamate 
receptor in a special neuron in the left parietal lobe was replaced, 
or when exactly 35.54876% replacement of all neurons was reached, the 
internal lights would suddenly go out.


I think this all-or-nothing is misconceived.  It's not internal 
cognition that might vanish suddenly, it's some specific aspect of 
experience: There are people who, thru brain injury, lose the ability to 
recognize faces...recognition is a quale.   Of course people's 
frequency range of hearing fades (don't ask me how I know).  My mother, 
when she was 95, lost color vision in one eye, but not the other.  Some 
people, it seems, cannot do higher mathematics. So how would you know if 
you lost the quale of empathy, for example?  Could it not just 
fade...i.e. become evoked less and less?


Brent


--
Stathis Papaioannou

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on the
>> following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then let's
>> say we can exactly control sensory input and perfectly monitor motor
>> control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>> speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask them
>> both to describe the pain, both will speak identical sentences. Both will
>> say it hurts when asked, and if asked to write a paragraph describing the
>> pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind before
>> it reaches this testing roadblock?
>>
>
> We can’t know if a particular entity is conscious,
>
>
> If the term means anything, you can know one particular entity is
> conscious.
>

Yes, I should have added we can’t know that a particular entity other
than oneself is conscious.

> but we can know that if it is conscious, then a functional equivalent, as
> you describe, is also conscious.
>
>
> So any entity functionally equivalent to yourself, you must know is
> conscious.  But "functionally equivalent" is vague, ambiguous, and
> certainly needs qualifying by environment and other factors.  Is a dolphin
> functionally equivalent to me? Not in swimming.
>

Functional equivalence here means that you replace a part with a new part
that behaves in the same way. So if you replaced the copper wires in a
computer with silver wires, the silver wires would be functionally
equivalent, and you would notice no change in using the computer. Copper
and silver have different physical properties such as conductivity, but the
replacement would be chosen so that this is not functionally relevant.
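The copper/silver point can be sketched in a few lines of code. This is a toy illustration only (the class names, resistivity values, and `carry`/`run_computer` functions are invented for the example): the two parts differ physically, but at the level the system uses them they implement the same function, so nothing downstream can tell them apart.

```python
class CopperWire:
    resistivity = 1.68e-8  # ohm-metres: a real physical difference...

    def carry(self, bit):
        return bit         # ...that is functionally irrelevant here


class SilverWire:
    resistivity = 1.59e-8  # different physics

    def carry(self, bit):
        return bit         # same input-output behaviour


def run_computer(wire, data):
    # the "computer" only ever sees what the wire carries
    return [wire.carry(bit) for bit in data]


data = [1, 0, 1, 1, 0]
# physically different parts, functionally identical at the system level
assert CopperWire.resistivity != SilverWire.resistivity
assert run_computer(CopperWire(), data) == run_computer(SilverWire(), data)
```

The design point is that "functional equivalence" is always relative to an interface: the replacement is chosen so that every property the rest of the system depends on is preserved, whatever else differs.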

> This is the subject of David Chalmers’ paper:
>
> http://consc.net/papers/qualia.html
>
> --
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 09:15, Jason Resch  wrote:

>
>
> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>
>>> For the present discussion/question, I want to ignore the testable
>>> implications of computationalism on physical law, and instead focus on the
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact
>>> computational emulation, meaning exact functional equivalence. Then let's
>>> say we can exactly control sensory input and perfectly monitor motor
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then
>>> identical inputs yield identical internal behavior (nerve activations,
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>> both to describe the pain, both will speak identical sentences. Both will
>>> say it hurts when asked, and if asked to write a paragraph describing the
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific
>>> objective third-person analysis or test is doomed to fail to find any
>>> distinction in behaviors, and thus necessarily fails in its ability to
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before
>>> it reaches this testing roadblock?
>>>
>>
>> We can’t know if a particular entity is conscious, but we can know that
>> if it is conscious, then a functional equivalent, as you describe, is also
>> conscious. This is the subject of David Chalmers’ paper:
>>
>> http://consc.net/papers/qualia.html
>>
>
> Chalmers' argument is that if a different brain is not conscious, then
> somewhere along the way we get either suddenly disappearing or fading
> qualia, which I agree are philosophically distasteful.
>
> But what if someone is fine with philosophical zombies and suddenly
> disappearing qualia? Is there any impossibility proof for such things?
>

Philosophical zombies are less problematic than partial philosophical
zombies. Partial philosophical zombies would render the idea of qualia
absurd, because it would mean that we might be completely blind, for
example, without realising it. As an absolute minimum, although we may not
be able to test for or define qualia, we should know if we have them. Take
this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is difficult to
imagine how it could work. We would be normally conscious while our neurons
were being replaced, but when one special glutamate receptor in a special
neuron in the left parietal lobe was replaced, or when exactly 35.54876%
replacement of all neurons was reached, the internal lights would suddenly
go out.
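The gradual-replacement argument can be put as a hedged toy model (all names and numbers here are illustrative, not anyone's actual proposal): each neuron is swapped for a functionally identical artificial unit, and behaviour is re-checked at every replacement fraction, so there is no percentage at which the outputs could suddenly change.

```python
def biological_neuron(x):
    return 2 * x + 1


def artificial_neuron(x):
    return 2 * x + 1          # identical input-output function by construction


brain = [biological_neuron] * 100
probes = range(10)


def behaviour(neurons):
    # the externally observable input-output mapping of the whole "brain"
    return [sum(n(x) for n in neurons) for x in probes]


baseline = behaviour(brain)
for i in range(len(brain)):
    brain[i] = artificial_neuron          # one more neuron replaced
    # behaviour is unchanged at 1%, at 35.54876%, at 100% - every step
    assert behaviour(brain) == baseline
```

Since behaviour is identical at every step by construction, any claim that the "internal lights go out" at some special fraction has to locate that fraction somewhere behaviour cannot register it, which is what makes the claim hard to cash out.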

> --
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:14 PM, Jason Resch wrote:



On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou <stath...@gmail.com> wrote:




On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore the
testable implications of computationalism on physical law, and
instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an
exact computational emulation, meaning exact functional
equivalence. Then let's say we can exactly control sensory
input and perfectly monitor motor control outputs between the
two brains.

Given that computationalism implies functional equivalence,
then identical inputs yield identical internal behavior (nerve
activations, etc.) and outputs, in terms of muscle movement,
facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and
ask them both to describe the pain, both will speak identical
sentences. Both will say it hurts when asked, and if asked to
write a paragraph describing the pain, will provide identical
accounts.

Does the definition of functional equivalence mean that any
scientific objective third-person analysis or test is doomed
to fail to find any distinction in behaviors, and thus
necessarily fails in its ability to disprove consciousness in
the functionally equivalent robot mind?

    Is computationalism as far as science can go on a theory of
mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious, but we can know
that if it is conscious, then a functional equivalent, as you
describe, is also conscious. This is the subject of David
Chalmers’ paper:

http://consc.net/papers/qualia.html


Chalmers' argument is that if a different brain is not conscious, then 
somewhere along the way we get either suddenly disappearing or fading 
qualia, which I agree are philosophically distasteful.


But what if someone is fine with philosophical zombies and suddenly 
disappearing qualia? Is there any impossibility proof for such things?


There's an implicit assumption that "qualia" are well defined things.  I 
think it very plausible that qualia differ depending on sensors, values, 
and memory.  So we may create AI that has something like qualia, but 
which are different from our qualia as people with synesthesia have 
somewhat different qualia.


Brent



Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Jason Resch
On Tue, Jun 9, 2020 at 2:15 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 10:08 AM, Jason Resch wrote:
> > For the present discussion/question, I want to ignore the testable
> > implications of computationalism on physical law, and instead focus on
> > the following idea:
> >
> > "How can we know if a robot is conscious?"
> >
> > Let's say there are two brains, one biological and one an exact
> > computational emulation, meaning exact functional equivalence. Then
> > let's say we can exactly control sensory input and perfectly monitor
> > motor control outputs between the two brains.
> >
> > Given that computationalism implies functional equivalence, then
> > identical inputs yield identical internal behavior (nerve activations,
> > etc.) and outputs, in terms of muscle movement, facial expressions,
> > and speech.
> >
> > If we stimulate nerves in the person's back to cause pain, and ask
> > them both to describe the pain, both will speak identical sentences.
> > Both will say it hurts when asked, and if asked to write a paragraph
> > describing the pain, will provide identical accounts.
> >
> > Does the definition of functional equivalence mean that any scientific
> > objective third-person analysis or test is doomed to fail to find any
> > distinction in behaviors, and thus necessarily fails in its ability to
> > disprove consciousness in the functionally equivalent robot mind?
> >
> > Is computationalism as far as science can go on a theory of mind
> > before it reaches this testing roadblock?
>
> If it acts conscious, then it is conscious.
>

That is the assumption I and most others operate under.

But every now and then you encounter a biological naturalist or something
that says a brain must be made of brain cells to actually be conscious.

The real point of my e-mail is to ask the question: can any test in
principle disprove computationalism as a philosophy of mind, given it's
defined as functionally identical?
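The shape of the problem can be made explicit with a small sketch (the function names and probe stimuli are invented for illustration): if the emulation is functionally identical by definition, then any third-person test that only compares behaviour returns "indistinguishable" no matter what probes it uses.

```python
def biological_brain(stimulus):
    # stand-in for the biological system's input-output mapping
    return f"Ouch, the {stimulus} hurts."


def emulated_brain(stimulus):
    # exact functional emulation: identical mapping by construction
    return f"Ouch, the {stimulus} hurts."


def third_person_test(system_a, system_b, probes):
    """Return True if any probe elicits differing behaviour."""
    return any(system_a(p) != system_b(p) for p in probes)


probes = ["pinprick", "burn", "pressure"]
distinguishable = third_person_test(biological_brain, emulated_brain, probes)
print(distinguishable)  # False: no behavioural probe can separate them
```

On these assumptions the test is doomed by construction, which is the point of the question: the only remaining openings are the ones Brent mentions, looking inside at information flow and mechanism rather than at outputs alone.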



>
> But I think science/technology can go a lot further.  I can look at the
> information flow, where is memory and how is it formed and how is it
> accessed and does this matter or not in the action of the entity.  It
> can look at the decision processes.  Are there separate competing
> modules (as Dennett hypothesizes) or is there a global workspace...and
> again does it make a difference.  What does it take to make the entity
> act happy, sad, thoughtful, bored, etc.


 I agree we can look at more than just the outputs.

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:


For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead
focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence.
Then let's say we can exactly control sensory input and perfectly
monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve
activations, etc.) and outputs, in terms of muscle movement,
facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical
sentences. Both will say it hurts when asked, and if asked to
write a paragraph describing the pain, will provide identical
accounts.

Does the definition of functional equivalence mean that any
scientific objective third-person analysis or test is doomed to
fail to find any distinction in behaviors, and thus necessarily
fails in its ability to disprove consciousness in the functionally
equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?


We can’t know if a particular entity is conscious,


If the term means anything, you can know one particular entity is conscious.

but we can know that if it is conscious, then a functional equivalent, 
as you describe, is also conscious.


So any entity functionally equivalent to yourself, you must know is 
conscious.  But "functionally equivalent" is vague, ambiguous, and 
certainly needs qualifying by environment and other factors.  Is a 
dolphin functionally equivalent to me? Not in swimming.


Brent


This is the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


--
Stathis Papaioannou


Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Jason Resch
On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
wrote:

>
>
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on the
>> following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then let's
>> say we can exactly control sensory input and perfectly monitor motor
>> control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>> speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask them
>> both to describe the pain, both will speak identical sentences. Both will
>> say it hurts when asked, and if asked to write a paragraph describing the
>> pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind before
>> it reaches this testing roadblock?
>>
>
> We can’t know if a particular entity is conscious, but we can know that if
> it is conscious, then a functional equivalent, as you describe, is also
> conscious. This is the subject of David Chalmers’ paper:
>
> http://consc.net/papers/qualia.html
>

Chalmers' argument is that if a different brain is not conscious, then
somewhere along the way we get either suddenly disappearing or fading
qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and suddenly
disappearing qualia? Is there any impossibility proof for such things?

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Philip Thrift


On Tuesday, June 9, 2020 at 2:15:40 PM UTC-5, Brent wrote:
>
>
>
> On 6/9/2020 10:08 AM, Jason Resch wrote: 
> > For the present discussion/question, I want to ignore the testable 
> > implications of computationalism on physical law, and instead focus on 
> > the following idea: 
> > 
> > "How can we know if a robot is conscious?" 
> > 
> > Let's say there are two brains, one biological and one an exact 
> > computational emulation, meaning exact functional equivalence. Then 
> > let's say we can exactly control sensory input and perfectly monitor 
> > motor control outputs between the two brains. 
> > 
> > Given that computationalism implies functional equivalence, then 
> > identical inputs yield identical internal behavior (nerve activations, 
> > etc.) and outputs, in terms of muscle movement, facial expressions, 
> > and speech. 
> > 
> > If we stimulate nerves in the person's back to cause pain, and ask 
> > them both to describe the pain, both will speak identical sentences. 
> > Both will say it hurts when asked, and if asked to write a paragraph 
> > describing the pain, will provide identical accounts. 
> > 
> > Does the definition of functional equivalence mean that any scientific 
> > objective third-person analysis or test is doomed to fail to find any 
> > distinction in behaviors, and thus necessarily fails in its ability to 
> > disprove consciousness in the functionally equivalent robot mind? 
> > 
> > Is computationalism as far as science can go on a theory of mind 
> > before it reaches this testing roadblock? 
>
> If it acts conscious, then it is conscious. 
>
> But I think science/technology can go a lot further.  I can look at the 
> information flow, where is memory and how is it formed and how is it 
> accessed and does this matter or not in the action of the entity.  It 
> can look at the decision processes.  Are there separate competing 
> modules (as Dennett hypothesizes) or is there a global workspace...and 
> again does it make a difference.  What does it take to make the entity 
> act happy, sad, thoughtful, bored, etc. 
>
> Brent 
>



I doubt anyone in consciousness research believes this. Including Dennett 
today.

@philipthrift 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:

> For the present discussion/question, I want to ignore the testable
> implications of computationalism on physical law, and instead focus on the
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact
> computational emulation, meaning exact functional equivalence. Then let's
> say we can exactly control sensory input and perfectly monitor motor
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical
> inputs yield identical internal behavior (nerve activations, etc.) and
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them
> both to describe the pain, both will speak identical sentences. Both will
> say it hurts when asked, and if asked to write a paragraph describing the
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific
> objective third-person analysis or test is doomed to fail to find any
> distinction in behaviors, and thus necessarily fails in its ability to
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind before it
> reaches this testing roadblock?
>

We can’t know if a particular entity is conscious, but we can know that if
it is conscious, then a functional equivalent, as you describe, is also
conscious. This is the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


-- 
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List




On 6/9/2020 10:08 AM, Jason Resch wrote:
For the present discussion/question, I want to ignore the testable 
implications of computationalism on physical law, and instead focus on 
the following idea:


"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact 
computational emulation, meaning exact functional equivalence. Then 
let's say we can exactly control sensory input and perfectly monitor 
motor control outputs between the two brains.


Given that computationalism implies functional equivalence, then 
identical inputs yield identical internal behavior (nerve activations, 
etc.) and outputs, in terms of muscle movement, facial expressions, 
and speech.


If we stimulate nerves in the person's back to cause pain, and ask 
them both to describe the pain, both will speak identical sentences. 
Both will say it hurts when asked, and if asked to write a paragraph 
describing the pain, will provide identical accounts.


Does the definition of functional equivalence mean that any scientific 
objective third-person analysis or test is doomed to fail to find any 
distinction in behaviors, and thus necessarily fails in its ability to 
disprove consciousness in the functionally equivalent robot mind?


Is computationalism as far as science can go on a theory of mind 
before it reaches this testing roadblock?


If it acts conscious, then it is conscious.

But I think science/technology can go a lot further.  We can look at the 
information flow: where memory is, how it is formed, how it is accessed, and 
whether this matters in the action of the entity.  We can look at the decision 
processes: are there separate competing modules (as Dennett hypothesizes) or a 
global workspace, and again, does it make a difference?  What does it take to 
make the entity act happy, sad, thoughtful, bored, etc.?
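
The white-box idea above can be illustrated with a toy sketch (hypothetical, 
not Brent's proposal): two procedures that are black-box indistinguishable can 
still differ in inspectable internal structure, which is the kind of thing a 
third-person science of "information flow" could examine.

```python
import dis
import io

def add_a(x, y):
    return x + y

def add_b(x, y):
    return y + x  # extensionally the same function, different internals

# Black-box test: the two are behaviourally indistinguishable.
assert all(add_a(i, j) == add_b(i, j) for i in range(10) for j in range(10))

def bytecode(f):
    """Capture the disassembly of f as a string (a crude 'internal scan')."""
    buf = io.StringIO()
    dis.dis(f, file=buf)
    return buf.getvalue()

# White-box test: the internal information flow differs and is detectable.
print(bytecode(add_a) != bytecode(add_b))  # True
```

The point is only that behavioural equivalence does not exhaust what science 
can measure; internal organisation remains open to inspection.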


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/27fc7d6a-7648-fdfb-0a3b-a000d1b4ca4c%40verizon.net.


Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread John Clark
On Tue, Jun 9, 2020 at 1:08 PM Jason Resch  wrote:

*> How can we know if a robot is conscious?*


The exact same way we know that one of our fellow human beings is conscious
when he's not sleeping or under anesthesia or dead.

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv31v1JHkaxWQfq4_OdJo32Ev-kkgXciVpTQaLXZ2YCcMA%40mail.gmail.com.


Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Jason Resch
For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead focus on the
following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence. Then let's
say we can exactly control sensory input and perfectly monitor motor
control outputs between the two brains.

Given that computationalism implies functional equivalence, then identical
inputs yield identical internal behavior (nerve activations, etc.) and
outputs, in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask them
both to describe the pain, both will speak identical sentences. Both will
say it hurts when asked, and if asked to write a paragraph describing the
pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific
objective third-person analysis or test is doomed to fail to find any
distinction in behaviors, and thus necessarily fails in its ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind before it
reaches this testing roadblock?
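
As a concrete, admittedly toy illustration of the roadblock (all names and the 
response formula below are hypothetical, and functional equivalence is built in 
by construction): two stand-in "brains" with different internals but an 
identical input-output map defeat any purely behavioural comparison.

```python
import random

def biological_response(stimulus: int) -> str:
    """Stand-in for the biological brain's verbal report."""
    intensity = (stimulus * 7) % 10
    return f"It hurts with intensity {intensity}."

def emulated_response(stimulus: int) -> str:
    """A functional emulation: different mechanism, same input-output map."""
    # Implemented as a lookup table rather than direct computation,
    # but equivalent by construction for every stimulus in range.
    table = {s: f"It hurts with intensity {(s * 7) % 10}." for s in range(100)}
    return table[stimulus]

def third_person_test(trials: int = 1000) -> bool:
    """A behavioural test can only compare outputs; equivalence blinds it."""
    for _ in range(trials):
        stimulus = random.randrange(100)
        if biological_response(stimulus) != emulated_response(stimulus):
            return True   # a detectable behavioural difference
    return False          # no third-person distinction found

print(third_person_test())  # False: no behavioural test separates the two
```

Any experiment restricted to inputs and outputs returns `False` here by 
construction, which is exactly the testing roadblock the question describes.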

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhpWiuoSoOyeW2DS3%2BqEaahequxkDcGK-bF2qjgiuqrAg%40mail.gmail.com.


Realistic computationalism

2018-09-17 Thread Philip Thrift

Realistic computationalism

(draft 2018-09-17)

0. The term "realistic computationalism" is meant to suggest a 
computationalism* based on the practical approach (software and hardware 
production, both "conventional" and "unconventional") to computing, rather 
than the completely theoretical (or "pure") approach.

* ( https://plato.stanford.edu/entries/computation-physicalsystems )



0.1.  PLTOS configurations 

A configuration PLTOS(π,λ,τ,ο,Σ), where the lower-case Greek letters π, λ, τ, 
ο and the capital Greek letter Σ are variables that take on concrete 
(particular) values, is defined:

PLTOS(π,λ,τ,ο,Σ): A program π that is written in a language λ that is 
transformed via a compiler/assembler τ into an output object ο that 
executes in a computing substrate Σ.


0.2.  "Material* PLTOS Thesis":

   Every material phenomenon can be effectively represented by some
   PLTOS(π,λ,τ,ο,Σ).

* (alt. "Physical")


0.3.   τ⁻¹ is a decompiler/disassembler: it takes an object ο and produces 
a program π, in some language λ.


0.4.   π could consist of a collection of programs (a codebase) in 
different languages λs.


1. Σ = von Neumann / Turing

1.1. For example, π could be a general relativity program written in  λ = 
SageManifolds/Python 3 and  compiled by τ = Python 3.5.6 for Linux/UNIX 
into*  ο = machine language code object for Σ = Ubuntu 18.04/ASUS VivoBook. 
PLTOS(π,λ,τ,ο,Σ) then identifies this particular PLTOS.

* (in the case of Python, τ compiles π into an ο = [bytecode+interpreter] 
object)
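
A minimal sketch of how a PLTOS configuration might be represented in code 
(Python, since it already features in the example; the record type and field 
names are my own, and the values merely restate example 1.1):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PLTOS:
    """One PLTOS(π, λ, τ, ο, Σ) configuration, per definition 0.1."""
    pi: str       # π: the program (or codebase)
    lam: str      # λ: the language it is written in
    tau: str      # τ: the compiler/assembler that transforms π
    omicron: str  # ο: the output object that τ produces
    sigma: str    # Σ: the computing substrate that executes ο

# The concrete configuration of example 1.1:
example_1_1 = PLTOS(
    pi="general relativity program",
    lam="SageManifolds/Python 3",
    tau="Python 3.5.6 for Linux/UNIX",
    omicron="bytecode + interpreter object",
    sigma="Ubuntu 18.04 / ASUS VivoBook",
)
print(example_1_1.lam)  # SageManifolds/Python 3
```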


In the PLTOS(π,λ,τ,ο,Σ) example above, "effectively representative" means 
that it matches data from observations.


2. Σ = non von Neumann / Turing


2.1. "Turing equivalence" (an equivalence relation on programs) basically 
translates into "It doesn't matter what Σ is." But particulars do matter in 
the efficiency of what programs are transformed into. Different hardware (a 
different Σ), e.g. replacing CPUs with GPUs, is used for virtual/augmented 
reality applications. Hardware compilation (a τ that compiles a π into an ο 
such as neural-network reconfigurable hardware, an ASIC, or an FPGA) 
basically makes Σ = ο. (The output object is its own computing substrate.)
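
A software analogue of this point, as a sketch: two implementations that are 
extensionally identical (same input-output relation) can differ enormously in 
cost, just as a change of Σ changes efficiency but not what is computed.

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    """Exponential-time reference implementation."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Linear-time implementation: a different 'substrate' for the same function."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Equivalence: both compute the same input-output relation...
assert all(fib_naive(n) == fib_memo(n) for n in range(20))

# ...but the particulars differ enormously: fib_memo(500) is instant,
# while fib_naive(500) would not finish in a lifetime.
print(fib_memo(30))  # 832040
```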


3. Σ = unbounded/interactive 

e.g., the internet as type of super-Turing? substrate

Computation Beyond Turing Machines
Peter Wegner, Dina Goldin
http://oldblog.computationalcomplexity.org/media/Wegner-Goldin.pdf
cf. http://www.cse.uconn.edu/~dqg/papers/turing04.pdf

it is possible to derive super-Turing models from:
- interaction with the world;
- infinity of resources;
- evolution of the system.


4. Σ = human 

(human biocomputer)

John Lilly
https://en.wikipedia.org/wiki/Human_biocomputer


5. Σ = natural

5.1.  slime molds

Computing with slime: Logical circuits built using living slime molds
https://www.sciencedaily.com/releases/2014/03/140327100335.htm

cf. 
https://www.newscientist.com/article/2142614-the-slime-mould-instruments-that-make-sweet-music/



6. Σ = synthetic biological

6.1.  τ is a biocompiler / biomolecular assembler (from the developing 
field of synthetic biology).

Example: A biochemical molecular program (π) written in a 
synthetic-biological language (λ) that is biocompiled (τ) into a life form 
(ο) that is injected into a person (Σ) to cure a disease.

If ο is effective (in carrying out its programmed task of attacking the 
disease), this PLTOS is an effective representative of a life form. (In 
fact the representation is the life form itself.) 

6.2  But is biocomputation > computation (the latter defined 
conventionally)? 

[RM] below will refer to
 Galen Strawson
 Realistic Monism
 (Why Physicalism Entails Panpsychism)

 
http://www.sjsu.edu/people/anand.vaidya/courses/c2/s0/Realistic-Monism---Why-Physicalism-Entails-Panpsychism-Galen-Strawson.pdf

[RM-2017] 
http://www.academia.edu/25420435/Physicalist_panpsychism_2017_draft


Is there an ‘ultimate’ -  "a fundamental physical entity, an ultimate 
constituent of reality, [like] a particle, field, string, brane, simple, 
whatever" [RM] that is "experience" in addition to "information" (which is 
what conventional computation manipulates)?

"Real physicalists must accept that at least some ultimates are 
intrinsically experience involving. They must at least embrace 
micropsychism. Given that everything concrete is physical, and that 
everything physical is constituted out of physical ultimates, and that 
experience is part of concrete reality, it seems the only reasonable 
position, more than just an ‘inference to the best explanation’. Which is 
not to say that it is easy to accept in the current intellectual climate." 
[RM]

For output objects ο of biocompilers, this means ο has experientiality ( e 
) in addition to informationality ( i ). Programs with e-states (in 
addition to i-states) in thei

Re: INDEXICAL Computationalism

2018-03-12 Thread Bruno Marchal

> On 12 Mar 2018, at 02:37, Brent Meeker <meeke...@verizon.net> wrote:
> 
> 
> 
> On 3/11/2018 10:06 AM, Bruno Marchal wrote:
>>> That's false.  You implicitly and without proof or even evidence assume 
>>> that mathematics, computation, and abstractions like numbers exist,
>> I have to assume them because the notion of computation needs them to be 
>> defined mathematically.
>> 
>> Now if you deny that the equation x + 1 = 2 has a solution, I have a 
>> problem, but I am pretty sure you agree that such a solution exists. 
> 
> You know very well that is a disingenuous play on words.  Satisfying a 
> formula is not existence.  Otherwise we could write (looks like a horse with 
> a narwhal horn)  = x  and since it is satisfied by x=unicorn we have proven 
> that unicorns exist.



You are the disingenuous one here. You are supposed to have understood that the 
notion of computation is arithmetical, and that computations exist in 
arithmetic, which, once we assume mechanism, invites us to reconsider the 
existence of something satisfying our experiences.

I have put my assumption on the table, so yes, indeed, only 0, s(0), s(s(0)), … 
exist.

Then you invoke your favorite deity “The Primary Physical Universe” to define 
your notion of “real existence”.

I have no problem with that, unless you bet on computationalism too. In that 
case, your hypothesis hides the problem of measuring the difference between the 
physics in the head of all universal machines and the local observable reality. 

Arithmetic, or the applicative algebras, contains a deluder (the universal 
numbers) and many deluders (the other universal numbers), but consciousness 
stabilises only “trivially” on the structures allowing a reasonable measure; we 
could bet on group theory and Lie groups, except that at this stage that would 
be like cheating, and without the translation in G, we miss the nuances of G* 
and its variants.

Of course, satisfying a formula is not proof of existence, except for those who 
learn and believe in the axioms, as I am sure you do.
The computations are the sigma_1 arithmetical relations, and they exist in the 
way the prime numbers can be said to exist; not only do we need nothing more, 
but adding something to the ontology can lead to an inflation of 
possibilities.
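
For what it is worth, the weak sense in which "x + 1 = 2 has a solution" can be 
made precise inside a formal system, e.g. in Lean (a sketch; whether provable 
existence counts as real existence is exactly the point in dispute between the 
two correspondents):

```lean
-- The existential sentence ∃ x, x + 1 = 2 is a theorem of arithmetic,
-- witnessed by x = 1; the proof term exhibits the witness explicitly.
example : ∃ x : ℕ, x + 1 = 2 := ⟨1, rfl⟩
```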

Bruno


> 
> Brent
> 
>> 
>> After that, the reasoning shows that assuming more existing things in the 
>> ontology cannot work.
>> 
>> 
>> 
>> 
> 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> <mailto:everything-list+unsubscr...@googlegroups.com>.
> To post to this group, send email to everything-list@googlegroups.com 
> <mailto:everything-list@googlegroups.com>.
> Visit this group at https://groups.google.com/group/everything-list 
> <https://groups.google.com/group/everything-list>.
> For more options, visit https://groups.google.com/d/optout 
> <https://groups.google.com/d/optout>.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: INDEXICAL Computationalism

2018-03-12 Thread Bruno Marchal

> On 12 Mar 2018, at 02:22, Brent Meeker  wrote:
> 
> 
> 
> On 3/11/2018 9:53 AM, Bruno Marchal wrote:
>>> On 10 Mar 2018, at 22:11, Brent Meeker  wrote:
>>> 
>>> 
>>> 
>>> On 3/10/2018 12:30 AM, Bruno Marchal wrote:
> On 8 Mar 2018, at 17:10, Telmo Menezes  wrote:
> 
> On Thu, Mar 8, 2018 at 4:57 PM, Bruno Marchal  wrote:
>>> On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:
>>> 
>>> On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  
>>> wrote:
 On 3/5/2018 11:49 PM, Telmo Menezes wrote:
 
 On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  
 wrote:
 
 On 3/5/2018 9:14 AM, Telmo Menezes wrote:
 
 "Could" implies a question about possibilities.  It's certainly 
 logically
 possible that there not be such a disease as leukemia.  Is it 
 nomologically
 possible?...not as far as we know.
 
 Well I'm not sure it's logically possible, for the reasons that Bruno
 already addressed.
 
 
 Bruno is assuming that everything not contrary to his theory exists
 axiomatically...which is assuming the answer.
 
 That is a rather uncharitable way of putting it.
 
 Bruno has discussed his Universal Dovetailer Argument extensively. If
 you assume comp and accept the argument, then we are inside of the
 dovetailer. The dovetailer is an everything-generator.
 
 
 That's exactly the problem with everythingism.  It predicts all the 
 stuff we
 don't see.
>>> Bruno, Russell, Tegmark and others tend to concern themselves a lot
>>> with why our experience of reality looks like it does on the face of
>>> everythingism. That is precisely the "hard part", no?
>> It is the hard part of the matter problem, when we understand that with 
>> mechanism, the everything is no more that the sigma_1 arithmetical 
>> reality, which I think everyone believe in, except the 
>> ultra-intuitionist.
>> 
>> Brent seemed to have understood this once, but seems to forget it 
>> recently apparently.
>> 
>> If someone believe in a primal physical universe *and* in the survive of 
>> consciousness through the digital transformation, it is up to them to 
>> explain how the primal universe (and what is it?) acts on arithmetic for 
>> making some computations seems more real than others.
>> 
>> I claim nothing, except that mechanism and materialism are incompatible, 
>> and that the mind-body problem is reduced into deriving physics from the 
>> “material” variants of machine’s ideal rational 
>> believability/justifiability. And then it works at the propositional 
>> level, so we can say that today, we have not yet detected any evidence 
>> for a primal universe through our observation of nature.
>> 
>> Let us encourage the pursue of the testing, simply.
> Bruno, can you expand a bit? If you had a big grant to pursue this
> research programme, what would you do?
 I would hire mathematicians to continue the extraction of physics, which 
 in this case would mean to optimise the theorem provers for the machine’s 
 quantum logic (S4Grz1, Z1*, X1*) and compare them to the quantum logics of 
 nature (where some research do exist already, but that needs to make 
 progress too).
 Now, we have three quantum logics, and all three have more theorems than 
 the physical-empirical quantum logics, as they have the Löbian 
 constraints, and so, some quantum tautologies are original and we have to 
 test them. Then we have to isolate the tensor product, without postulating 
 a linear logic superimposed to the quantum logic (that would work, but to 
 exploit the G/G* difference, we have no other choice than to derive the 
 “linear and” from the machine-quantum physicalness, where some tools from 
 knot theory (and its relation with quantum statistics) might be available 
 if some relation between the quantisation ([]<>A, with the box and diamond 
 of Z1*, S4Grz1, X1*) are verified, which is still open.
 
 My work has only open the door on a non Aristotelian way to conceive 
 rationally the observation, but now, this asks for continual verification 
 until we find a discrepancy, and in that case we learn that Mechanism is 
 false, or we continue with the simple mechanist theory, as it explains 
 both quanta and qualia, with consistent relation in between.
 
 Many important question remains: is the Hamiltonian purely physical 
 (derivable in arithmetic/theology) or geographical, in which case the 
 multiverse allows for a continuum of Hamitonians, etc.
 
 Telmo, 

Re: INDEXICAL Computationalism

2018-03-11 Thread Brent Meeker



On 3/11/2018 10:06 AM, Bruno Marchal wrote:

That's false.  You implicitly and without proof or even evidence assume that 
mathematics, computation, and abstractions like numbers exist,

I have to assume them because the notion of computation needs them to be 
defined mathematically.

Now if you deny that the equation x + 1 = 2 has a solution, I have a problem, 
but I am pretty sure you agree that such a solution exists.


You know very well that is a disingenuous play on words.  Satisfying a 
formula is not existence.  Otherwise we could write (looks like a horse 
with a narwhal horn)  = x  and since it is satisfied by x=unicorn we 
have proven that unicorns exist.


Brent



After that, the reasoning shows that assuming more existing things in the 
ontology cannot work.






--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: INDEXICAL Computationalism

2018-03-11 Thread Brent Meeker



On 3/11/2018 9:53 AM, Bruno Marchal wrote:

On 10 Mar 2018, at 22:11, Brent Meeker  wrote:



On 3/10/2018 12:30 AM, Bruno Marchal wrote:

On 8 Mar 2018, at 17:10, Telmo Menezes  wrote:

On Thu, Mar 8, 2018 at 4:57 PM, Bruno Marchal  wrote:

On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:

On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:

On 3/5/2018 11:49 PM, Telmo Menezes wrote:

On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:

On 3/5/2018 9:14 AM, Telmo Menezes wrote:

"Could" implies a question about possibilities.  It's certainly logically
possible that there not be such a disease as leukemia.  Is it nomologically
possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.


Bruno is assuming that everything not contrary to his theory exists
axiomatically...which is assuming the answer.

That is a rather uncharitable way of putting it.

Bruno has discussed his Universal Dovetailer Argument extensively. If
you assume comp and accept the argument, then we are inside of the
dovetailer. The dovetailer is an everything-generator.
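
For readers unfamiliar with the construction, the interleaving idea behind a 
dovetailer can be sketched in a few lines (a toy model, not Bruno's actual UD, 
which dovetails on all programs of a universal machine; the essential trick is 
that it never waits on any single program, so non-halting programs cannot 
block the generation of everything else):

```python
from itertools import count, islice

def program(i):
    """Toy stand-in for the i-th program: an endless computation."""
    for step in count():
        yield (i, step)

def dovetail():
    """Interleave ever more programs, giving each a step at a time,
    so every step of every program is eventually executed."""
    running = []
    for n in count():
        running.append(program(n))   # start program n
        for proc in running:         # advance each started program one step
            yield next(proc)

# The first few executed (program, step) pairs:
print(list(islice(dovetail(), 10)))  # [(0, 0), (0, 1), (1, 0), (0, 2), ...]
```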


That's exactly the problem with everythingism.  It predicts all the stuff we
don't see.

Bruno, Russell, Tegmark and others tend to concern themselves a lot
with why our experience of reality looks like it does on the face of
everythingism. That is precisely the "hard part", no?

It is the hard part of the matter problem, when we understand that, with 
mechanism, the everything is no more than the sigma_1 arithmetical reality, 
which I think everyone believes in, except the ultra-intuitionists.

Brent seemed to have understood this once, but seems to have forgotten it 
recently.

If someone believes in a primal physical universe *and* in the survival of 
consciousness through digital transformation, it is up to them to explain how 
the primal universe (and what is it?) acts on arithmetic to make some 
computations seem more real than others.

I claim nothing, except that mechanism and materialism are incompatible, and 
that the mind-body problem is reduced to deriving physics from the “material” 
variants of the machine’s ideal rational believability/justifiability. And it 
works at the propositional level, so we can say that today we have not yet 
detected any evidence for a primal universe through our observation of nature.

Let us simply encourage the pursuit of the testing.

Bruno, can you expand a bit? If you had a big grant to pursue this
research programme, what would you do?

I would hire mathematicians to continue the extraction of physics, which in 
this case would mean optimising the theorem provers for the machine’s quantum 
logics (S4Grz1, Z1*, X1*) and comparing them to the quantum logics of nature 
(where some research does exist already, but it needs to make progress too).
Now we have three quantum logics, and all three have more theorems than the 
physical-empirical quantum logics, as they have the Löbian constraints; so some 
quantum tautologies are original and we have to test them. Then we have to 
isolate the tensor product without postulating a linear logic superimposed on 
the quantum logic (that would work, but to exploit the G/G* difference we have 
no other choice than to derive the “linear and” from the machine-quantum 
physicalness), where some tools from knot theory (and its relation with 
quantum statistics) might be available if some relations involving the 
quantisation ([]<>A, with the box and diamond of Z1*, S4Grz1, X1*) are 
verified, which is still open.

My work has only opened the door on a non-Aristotelian way to conceive 
observation rationally, but now this asks for continual verification until we 
find a discrepancy; in that case we learn that Mechanism is false, or else we 
continue with the simple mechanist theory, as it explains both quanta and 
qualia, with consistent relations in between.

Many important questions remain: is the Hamiltonian purely physical (derivable 
in arithmetic/theology) or geographical, in which case the multiverse allows 
for a continuum of Hamiltonians, etc.?

Telmo, what could Galileo or Newton have answered to a similar question? In 
fact all physical facts must be explained by the theology of numbers. If it 
misses Dark Matter or GR or anything, such things can be 
geographical/contingent, but if it contradicts them, then that becomes 
evidence that mechanism is false (or that we are in a second-order simulation 
à la Bostrom).

My initial goal was just to use Digital Mechanism to reduce the mind-body 
problem to the problem of deriving physics from Number (theology).
I did not expect to derive already three propositional physics. Now we have to 
derive the first-order physics and continue the comparison with Nature.

But you keep claiming you have derived quantum mechanics...which 

Re: INDEXICAL Computationalism

2018-03-11 Thread Bruno Marchal

> On 10 Mar 2018, at 22:24, Brent Meeker <meeke...@verizon.net> wrote:
> 
> 
> 
> On 3/10/2018 12:56 AM, Bruno Marchal wrote:
>>> On 8 Mar 2018, at 21:11, Brent Meeker <meeke...@verizon.net> wrote:
>>> 
>>> 
>>> 
>>> On 3/8/2018 7:57 AM, Bruno Marchal wrote:
>>>>> On 7 Mar 2018, at 15:24, Telmo Menezes <te...@telmomenezes.com> wrote:
>>>>> 
>>>>> On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker <meeke...@verizon.net> wrote:
>>>>>> On 3/5/2018 11:49 PM, Telmo Menezes wrote:
>>>>>> 
>>>>>> On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker <meeke...@verizon.net> 
>>>>>> wrote:
>>>>>> 
>>>>>> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
>>>>>> 
>>>>>> "Could" implies a question about possibilities.  It's certainly logically
>>>>>> possible that there not be such a disease as leukemia.  Is it 
>>>>>> nomologically
>>>>>> possible?...not as far as we know.
>>>>>> 
>>>>>> Well I'm not sure it's logically possible, for the reasons that Bruno
>>>>>> already addressed.
>>>>>> 
>>>>>> 
>>>>>> Bruno is assuming that everything not contrary to his theory exists
>>>>>> axiomatically...which is assuming the answer.
>>>>>> 
>>>>>> That is a rather uncharitable way of putting it.
>>>>>> 
>>>>>> Bruno has discussed his Universal Dovetailer Argument extensively. If
>>>>>> you assume comp and accept the argument, then we are inside of the
>>>>>> dovetailer. The dovetailer is an everything-generator.
>>>>>> 
>>>>>> 
>>>>>> That's exactly the problem with everythingism.  It predicts all the 
>>>>>> stuff we
>>>>>> don't see.
>>>>> Bruno, Russell, Tegmark and others tend to concern themselves a lot
>>>>> with why our experience of reality looks like it does on the face of
>>>>> everythingism. That is precisely the "hard part", no?
>>> They recognize that their theory doesn't account for it.  Tegmark makes 
>>> some anthropic speculations.  Bruno just says, 'I claim nothing, except 
>>> that mechanism and materialism are incompatible, and that the mind-body 
>>> problem is reduced into deriving physics from the “material” variants of 
>>> machine’s ideal rational believability/justifiability.'  It's not at all 
>>> clear to me that "reduced" in the appropriate verb...as though it is 
>>> simpler or more fundamental.  Even if you accept his step 7 it has only 
>>> "reduced" the problem to explaining the existence of physics and this 
>>> particular physics, from the assumption of arithmetic and the UD.
>> That is not completely correct. My theory is just digital mechanism (it is 
>> believed by 99,99% of scientists).
> 
> And 99.9% of scientists and mathematicians (who bother to think about it) 
> think that numbers and mathematics are just abstractions and do not exist in 
> the sense that tables and chairs exist.

I don’t think so. But I guess some majority might think this, without thinking 
about it much, and they can see the problem once they grasp computationalism 
(and the mathematical definition of computation).

Yes, we are in the Aristotelian Era. Many believe in a primary physical 
universe. It is the main dogma of the religious institutions, and we know that 
theology has not yet had the chance to come back to the faculties of science. 
So majority arguments on a fundamental matter must be used with caution.

Mechanism has been used by materialists to hide the mind-body problem, but 
with the digital form of mechanism, materialism loses its ability to explain 
even the appearance of matter, and becomes, in metaphysics, a sort of 
phlogiston.




> 
>>  The theorem is that physics is reduced to number theology entirely, making 
>> mechanism completely testable, and up to now confirmed by observation.
> 
> A bold claim.  But he "observations" seem trivial and are explainable on 
> other theories too.   A confirming observation is one that is predictive and 
> surprising.

You forget the UD Argument. The problem of relating first person and third 
person. Physicalism needs invisible horses to work with mechanism.




> 
>> The net gain is that we get the exact relationship between quanta and qualia 
>> (where physicalist just eliminate qualia or dismiss consciou

Re: INDEXICAL Computationalism

2018-03-11 Thread Bruno Marchal

> On 10 Mar 2018, at 22:11, Brent Meeker  wrote:
> 
> 
> 
> On 3/10/2018 12:30 AM, Bruno Marchal wrote:
>>> On 8 Mar 2018, at 17:10, Telmo Menezes  wrote:
>>> 
>>> On Thu, Mar 8, 2018 at 4:57 PM, Bruno Marchal  wrote:
> On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:
> 
> On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:
>> 
>> On 3/5/2018 11:49 PM, Telmo Menezes wrote:
>> 
>> On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  
>> wrote:
>> 
>> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
>> 
>> "Could" implies a question about possibilities.  It's certainly logically
>> possible that there not be such a disease as leukemia.  Is it 
>> nomologically
>> possible?...not as far as we know.
>> 
>> Well I'm not sure it's logically possible, for the reasons that Bruno
>> already addressed.
>> 
>> 
>> Bruno is assuming that everything not contrary to his theory exists
>> axiomatically...which is assuming the answer.
>> 
>> That is a rather uncharitable way of putting it.
>> 
>> Bruno has discussed his Universal Dovetailer Argument extensively. If
>> you assume comp and accept the argument, then we are inside of the
>> dovetailer. The dovetailer is an everything-generator.
>> 
>> 
>> That's exactly the problem with everythingism.  It predicts all the 
>> stuff we
>> don't see.
> Bruno, Russell, Tegmark and others tend to concern themselves a lot
> with why our experience of reality looks like it does on the face of
> everythingism. That is precisely the "hard part", no?
 It is the hard part of the matter problem, when we understand that with 
 mechanism, the everything is no more that the sigma_1 arithmetical 
 reality, which I think everyone believe in, except the ultra-intuitionist.
 
 Brent seemed to have understood this once, but seems to forget it recently 
 apparently.
 
 If someone believe in a primal physical universe *and* in the survive of 
 consciousness through the digital transformation, it is up to them to 
 explain how the primal universe (and what is it?) acts on arithmetic for 
 making some computations seems more real than others.
 
 I claim nothing, except that mechanism and materialism are incompatible, 
 and that the mind-body problem is reduced into deriving physics from the 
 “material” variants of machine’s ideal rational 
 believability/justifiability. And then it works at the propositional 
 level, so we can say that today, we have not yet detected any evidence for 
 a primal universe through our observation of nature.
 
 Let us encourage the pursue of the testing, simply.
>>> Bruno, can you expand a bit? If you had a big grant to pursue this
>>> research programme, what would you do?
>> 
>> I would hire mathematicians to continue the extraction of physics, which in 
>> this case would mean to optimise the theorem provers for the machine’s 
>> quantum logic (S4Grz1, Z1*, X1*) and compare them to the quantum logics of 
>> nature (where some research do exist already, but that needs to make 
>> progress too).
>> Now, we have three quantum logics, and all three have more theorems than the 
>> physical-empirical quantum logics, as they have the Löbian constraints, and 
>> so, some quantum tautologies are original and we have to test them. Then we 
>> have to isolate the tensor product, without postulating a linear logic 
>> superimposed to the quantum logic (that would work, but to exploit the G/G* 
>> difference, we have no other choice than to derive the “linear and” from the 
>> machine-quantum physicalness, where some tools from knot theory (and its 
>> relation with quantum statistics) might be available if some relation 
>> between the quantisation ([]<>A, with the box and diamond of Z1*, S4Grz1, 
>> X1*) are verified, which is still open.
>> 
>> My work has only open the door on a non Aristotelian way to conceive 
>> rationally the observation, but now, this asks for continual verification 
>> until we find a discrepancy, and in that case we learn that Mechanism is 
>> false, or we continue with the simple mechanist theory, as it explains both 
>> quanta and qualia, with consistent relation in between.
>> 
>> Many important question remains: is the Hamiltonian purely physical 
>> (derivable in arithmetic/theology) or geographical, in which case the 
>> multiverse allows for a continuum of Hamitonians, etc.
>> 
>> Telmo, what could Galilee or Newton have answered to a similar question? In 
>> fact all physical facts must be explained by the theology of numbers. If it 
>> miss Dark Matter or GR or anything, such things can be 
>> geographical/contingent, but if it contradicts them, then such thing becomes 
>> evidence that mechanism is false 

Re: INDEXICAL Computationalism

2018-03-10 Thread Brent Meeker



On 3/10/2018 12:56 AM, Bruno Marchal wrote:

On 8 Mar 2018, at 21:11, Brent Meeker  wrote:



On 3/8/2018 7:57 AM, Bruno Marchal wrote:

On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:

On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:

On 3/5/2018 11:49 PM, Telmo Menezes wrote:

On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:

On 3/5/2018 9:14 AM, Telmo Menezes wrote:

"Could" implies a question about possibilities.  It's certainly logically
possible that there not be such a disease as leukemia.  Is it nomologically
possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.


Bruno is assuming that everything not contrary to his theory exists
axiomatically...which is assuming the answer.

That is a rather uncharitable way of putting it.

Bruno has discussed his Universal Dovetailer Argument extensively. If
you assume comp and accept the argument, then we are inside of the
dovetailer. The dovetailer is an everything-generator.
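The dovetailer Telmo invokes can be made concrete with a toy sketch. The following is an illustrative round-robin interleaving, not Bruno's actual construction: Python generators stand in for the enumerated programs, and the `max_rounds` cutoff exists only so the demo terminates.

```python
def dovetail(programs, max_rounds=6):
    """Toy dovetailer: interleave execution steps of every program so
    each admitted program keeps advancing even though there may be
    arbitrarily many of them and none ever halts."""
    live = []        # programs admitted so far
    outputs = []     # trace of everything produced
    progs = iter(programs)
    for _ in range(max_rounds):
        # Admit one more program per round (dovetailing over programs)...
        try:
            live.append(next(progs))
        except StopIteration:
            pass
        # ...then run one step of every admitted program.
        for p in list(live):
            try:
                outputs.append(next(p))
            except StopIteration:
                live.remove(p)
    return outputs

def counter(tag):
    """Stand-in for an enumerated program: an endless computation."""
    i = 0
    while True:
        yield (tag, i)
        i += 1

# Three never-halting "programs", all making progress together.
result = dovetail(counter(t) for t in "abc")
```

No single program ever monopolizes the machine, which is why a dovetailer can "generate everything" given unbounded time.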


That's exactly the problem with everythingism.  It predicts all the stuff we
don't see.

Bruno, Russell, Tegmark and others tend to concern themselves a lot
with why our experience of reality looks like it does on the face of
everythingism. That is precisely the "hard part", no?

They recognize that their theory doesn't account for it.  Tegmark makes some anthropic 
speculations.  Bruno just says, 'I claim nothing, except that mechanism and materialism are 
incompatible, and that the mind-body problem is reduced into deriving physics from the “material” 
variants of machine’s ideal rational believability/justifiability.'  It's not at all clear to me 
that "reduced" in the appropriate verb...as though it is simpler or more fundamental.  
Even if you accept his step 7 it has only "reduced" the problem to explaining the 
existence of physics and this particular physics, from the assumption of arithmetic and the UD.

That is not completely correct. My theory is just digital mechanism (it is 
believed by 99.99% of scientists).


And 99.9% of scientists and mathematicians (who bother to think about 
it) think that numbers and mathematics are just abstractions and do not 
exist in the sense that tables and chairs exist.



  The theorem is that physics is reduced to number theology entirely, making 
mechanism completely testable, and up to now confirmed by observation.


A bold claim.  But the "observations" seem trivial and are explainable on 
other theories too.   A confirming observation is one that is predictive 
and surprising.



The net gain is that we get the exact relationship between quanta and qualia 
(where physicalists just eliminate qualia or dismiss consciousness as an 
epiphenomenon, which it is not).




It's my view that it may be simpler to explain physics, this particular 
physics, and consciousness from physics.

That would be circular, and no better than explaining God by assuming God, 
Matter by assuming Matter, or consciousness by assuming consciousness. That is 
equivalent to not trying to explain.




So I could say the mind-body problem has been reduced to explaining 
consciousness from physics...not even necessarily fundamental physics.

That explains half of the mind. It explains why we can attribute consciousness 
to our peers, but it fails to explain how my first-person experience is 
brought about by this physical reality.

You have to explain what in the physical universe is capable of selecting the 
computations (an arithmetical notion). If that thing is Turing emulable, then 
the physical cannot play a role,


That's false.  You implicitly, and without proof or even evidence, assume 
that mathematics, computation, and abstractions like numbers exist, 
while denying without proof or evidence that matter can exist.


Brent


if it is not Turing emulable, then I am OK, but it falls outside my working 
hypothesis. The whole point is that this is testable, and today only mechanism 
gets a coherent relation between quanta and qualia (physics usually does not 
address that question, and physicalism has swept the qualia question under the 
rug of its ontological commitment, made without any evidence).






It is the hard part of the matter problem, when we understand that, with 
mechanism, the everything is no more than the sigma_1 arithmetical reality, 
which I think everyone believes in, except the ultra-intuitionists.

Brent seemed to have understood this once, but apparently has forgotten it 
recently.

If someone believes in a primal physical universe *and* in the survival of 
consciousness through the digital transformation, it is up to them to explain 
how the primal universe (and what is it?) acts on arithmetic to make some 
computations seem more real than others.
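For context, a sigma_1 sentence asserts "there exists n such that P(n)" for some decidable predicate P; when true, it can be confirmed by blind search. A toy illustration follows (the predicate and search bound are arbitrary examples of ours, chosen only to show the one-sided character of such verification):

```python
def verify_sigma1(P, bound=10**6):
    """Search for a witness to the sigma_1 sentence 'exists n, P(n)'.
    A true sigma_1 sentence is eventually confirmed by such a search;
    a false one is never refuted by it (the search just runs out)."""
    for n in range(bound):
        if P(n):
            return n   # witness found: the sentence is verified true
    return None        # no witness below the bound; nothing is settled

# Example predicate: "n is a perfect square greater than 50".
witness = verify_sigma1(lambda n: n > 50 and int(n ** 0.5) ** 2 == n)  # → 64
```

The asymmetry (verifiable when true, never refutable by search when false) is exactly what makes the sigma_1 truths the natural "mechanically accessible" fragment of arithmetic.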

I claim nothing, except that mechanism and materialism are incompatible, and 
that the mind-body problem is reduced to deriving physics from the

Re: INDEXICAL Computationalism

2018-03-10 Thread Brent Meeker



On 3/10/2018 12:30 AM, Bruno Marchal wrote:

On 8 Mar 2018, at 17:10, Telmo Menezes  wrote:

On Thu, Mar 8, 2018 at 4:57 PM, Bruno Marchal  wrote:

On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:

On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:


On 3/5/2018 11:49 PM, Telmo Menezes wrote:

On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:

On 3/5/2018 9:14 AM, Telmo Menezes wrote:

"Could" implies a question about possibilities.  It's certainly logically
possible that there not be such a disease as leukemia.  Is it nomologically
possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.


Bruno is assuming that everything not contrary to his theory exists
axiomatically...which is assuming the answer.

That is a rather uncharitable way of putting it.

Bruno has discussed his Universal Dovetailer Argument extensively. If
you assume comp and accept the argument, then we are inside of the
dovetailer. The dovetailer is an everything-generator.


That's exactly the problem with everythingism.  It predicts all the stuff we
don't see.

Bruno, Russell, Tegmark and others tend to concern themselves a lot
with why our experience of reality looks like it does on the face of
everythingism. That is precisely the "hard part", no?

It is the hard part of the matter problem, when we understand that, with 
mechanism, the everything is no more than the sigma_1 arithmetical reality, 
which I think everyone believes in, except the ultra-intuitionists.

Brent seemed to have understood this once, but apparently has forgotten it 
recently.

If someone believes in a primal physical universe *and* in the survival of 
consciousness through the digital transformation, it is up to them to explain 
how the primal universe (and what is it?) acts on arithmetic to make some 
computations seem more real than others.

I claim nothing, except that mechanism and materialism are incompatible, and 
that the mind-body problem is reduced to deriving physics from the “material” 
variants of the machine’s ideal rational believability/justifiability. This 
works at the propositional level, so we can say that, to date, our observation 
of nature has not yet detected any evidence for a primal universe.

Let us simply encourage the pursuit of the testing.

Bruno, can you expand a bit? If you had a big grant to pursue this
research programme, what would you do?


I would hire mathematicians to continue the extraction of physics, which in 
this case would mean optimising the theorem provers for the machine’s quantum 
logics (S4Grz1, Z1*, X1*) and comparing them to the quantum logics of nature 
(where some research does exist already, but it needs to make progress too).
Now we have three quantum logics, and all three have more theorems than the 
physical-empirical quantum logics, as they obey the Löbian constraints; so 
some quantum tautologies are original, and we have to test them. Then we have 
to isolate the tensor product without postulating a linear logic superimposed 
on the quantum logic (that would work, but to exploit the G/G* difference we 
have no choice but to derive the “linear and” from the machine-quantum 
physicalness, where some tools from knot theory (and its relation with quantum 
statistics) might be available if some relations involving the quantisation 
([]<>A, with the box and diamond of Z1*, S4Grz1, X1*) are verified, which is 
still open).
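For a sense of what even the most minimal checker for a modal logic involves, here is a brute-force Kripke-semantics validity checker. This is our own toy encoding, verifying the S4 axioms on a small reflexive-transitive frame; it is nothing like the S4Grz1/Z1*/X1* provers the programme calls for, only an illustration of the semantic side of such work.

```python
from itertools import product

def holds(frame, val, w, f):
    """Evaluate modal formula f at world w.  Formulas are nested tuples:
    ('var', p), ('not', f), ('imp', f, g), ('box', f)."""
    op = f[0]
    if op == 'var':
        return w in val[f[1]]
    if op == 'not':
        return not holds(frame, val, w, f[1])
    if op == 'imp':
        return (not holds(frame, val, w, f[1])) or holds(frame, val, w, f[2])
    if op == 'box':
        # []f holds at w iff f holds at every accessible world.
        return all(holds(frame, val, v, f[1]) for v in frame[w])
    raise ValueError(op)

def valid_on_frame(frame, worlds, f, props):
    """f is valid on the frame iff it holds at every world under every
    valuation of its propositional variables (brute-force enumeration)."""
    for bits in product([False, True], repeat=len(props) * len(worlds)):
        it = iter(bits)
        val = {p: {w for w in worlds if next(it)} for p in props}
        if not all(holds(frame, val, w, f) for w in worlds):
            return False
    return True

# A reflexive, transitive (S4-style) frame on three worlds.
worlds = [0, 1, 2]
frame = {0: {0, 1, 2}, 1: {1, 2}, 2: {2}}
p = ('var', 'p')
T_axiom = ('imp', ('box', p), p)                    # []p -> p
Four    = ('imp', ('box', p), ('box', ('box', p)))  # []p -> [][]p
```

On this frame the T and 4 axioms come out valid, while p -> []p does not, exactly as the semantics of reflexive-transitive frames predicts.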

My work has only opened the door to a non-Aristotelian way of conceiving 
observation rationally. But now this calls for continual verification: either 
we find a discrepancy, in which case we learn that Mechanism is false, or we 
continue with the simple mechanist theory, as it explains both quanta and 
qualia, with a consistent relation between them.

Many important questions remain: is the Hamiltonian purely physical (derivable 
in arithmetic/theology) or geographical, in which case the multiverse allows 
for a continuum of Hamiltonians, etc.?

Telmo, what could Galileo or Newton have answered to a similar question? In 
fact all physical facts must be explained by the theology of numbers. If it 
misses Dark Matter or GR or anything, such things can be 
geographical/contingent; but if it contradicts them, then such things become 
evidence that mechanism is false (or that we are in a second-order simulation 
à la Bostrom).

My initial goal was just to use Digital Mechanism to reduce the mind-body 
problem to the problem of deriving physics from Number (theology).
I did not expect to derive three propositional physics already. Now we have to 
derive the first-order physics, and continue the comparison with Nature.


But you keep claiming you have derived quantum mechanics...which 
implies, at a minimum, linear evolution of rays in complex Hilbert space 
and the Born rule.   I don't see it.
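The Born rule invoked here assigns outcome probabilities as squared moduli of complex amplitudes. A minimal sketch (the two-amplitude state is an arbitrary example of ours):

```python
import math

def born_probabilities(amplitudes):
    """Born rule: normalize the state vector, then the probability of
    outcome i is the squared modulus |a_i|^2 of its complex amplitude."""
    norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
    return [abs(a / norm) ** 2 for a in amplitudes]

# Equal superposition of two outcomes: probabilities 1/2 and 1/2.
probs = born_probabilities([1 + 0j, 0 + 1j])
```

Deriving this rule (rather than postulating it) is precisely the kind of result Brent says he does not yet see in the mechanist programme.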


Brent

Re: INDEXICAL Computationalism

2018-03-10 Thread Bruno Marchal

> On 8 Mar 2018, at 21:11, Brent Meeker  wrote:
> 
> 
> 
> On 3/8/2018 7:57 AM, Bruno Marchal wrote:
>>> On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:
>>> 
>>> On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:
 
 On 3/5/2018 11:49 PM, Telmo Menezes wrote:
 
 On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:
 
 On 3/5/2018 9:14 AM, Telmo Menezes wrote:
 
 "Could" implies a question about possibilities.  It's certainly logically
 possible that there not be such a disease as leukemia.  Is it nomologically
 possible?...not as far as we know.
 
 Well I'm not sure it's logically possible, for the reasons that Bruno
 already addressed.
 
 
 Bruno is assuming that everything not contrary to his theory exists
 axiomatically...which is assuming the answer.
 
 That is a rather uncharitable way of putting it.
 
 Bruno has discussed his Universal Dovetailer Argument extensively. If
 you assume comp and accept the argument, then we are inside of the
 dovetailer. The dovetailer is an everything-generator.
 
 
 That's exactly the problem with everythingism.  It predicts all the stuff 
 we
 don't see.
>>> Bruno, Russell, Tegmark and others tend to concern themselves a lot
>>> with why our experience of reality looks like it does on the face of
>>> everythingism. That is precisely the "hard part", no?
> 
> They recognize that their theory doesn't account for it.  Tegmark makes some 
> anthropic speculations.  Bruno just says, 'I claim nothing, except that 
> mechanism and materialism are incompatible, and that the mind-body problem is 
> reduced into deriving physics from the “material” variants of machine’s ideal 
> rational believability/justifiability.'  It's not at all clear to me that 
> "reduced" in the appropriate verb...as though it is simpler or more 
> fundamental.  Even if you accept his step 7 it has only "reduced" the problem 
> to explaining the existence of physics and this particular physics, from the 
> assumption of arithmetic and the UD.

That is not completely correct. My theory is just digital mechanism (it is 
believed by 99.99% of scientists). The theorem is that physics is reduced 
entirely to number theology, making mechanism completely testable, and so far 
it is confirmed by observation. The net gain is that we get the exact 
relationship between quanta and qualia (where physicalists just eliminate 
qualia or dismiss consciousness as an epiphenomenon, which it is not).



> 
> It's my view that it may be simpler to explain physics and this physics and 
> consciousness from physics. 

That would be circular, and no better than explaining God by assuming God, 
Matter by assuming Matter, or consciousness by assuming consciousness. That is 
equivalent to not trying to explain.



> So I could say the mind-body problem has been reduced to explaining 
> consciousness from physics...not even necessarily fundamental physics.

That explains half of the mind. It explains why we can attribute consciousness 
to our peers, but it fails to explain how my first-person experience is 
brought about by this physical reality.

You have to explain what in the physical universe is capable of selecting the 
computations (an arithmetical notion). If that thing is Turing emulable, then 
the physical cannot play a role; if it is not Turing emulable, then I am OK, 
but it falls outside my working hypothesis. The whole point is that this is 
testable, and today only mechanism gets a coherent relation between quanta and 
qualia (physics usually does not address that question, and physicalism has 
swept the qualia question under the rug of its ontological commitment, made 
without any evidence).





> 
>> It is the hard part of the matter problem, when we understand that with 
>> mechanism, the everything is no more that the sigma_1 arithmetical reality, 
>> which I think everyone believe in, except the ultra-intuitionist.
>> 
>> Brent seemed to have understood this once, but seems to forget it recently 
>> apparently.
>> 
>> If someone believe in a primal physical universe *and* in the survive of 
>> consciousness through the digital transformation, it is up to them to 
>> explain how the primal universe (and what is it?) acts on arithmetic for 
>> making some computations seems more real than others.
>> 
>> I claim nothing, except that mechanism and materialism are incompatible, and 
>> that the mind-body problem is reduced into deriving physics from the 
>> “material” variants of machine’s ideal rational believability/justifiability.
> 
> Or why fewer than all arithmetical relations are realized in physics.

No, that is already explained by the difference between the eight 
phenomenological modes of self-reference. The believable, the knowable and the 
observable obey different logics, which structure the arithmetical reality in

Re: INDEXICAL Computationalism

2018-03-10 Thread Bruno Marchal

> On 8 Mar 2018, at 17:10, Telmo Menezes  wrote:
> 
> On Thu, Mar 8, 2018 at 4:57 PM, Bruno Marchal  wrote:
>> 
>>> On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:
>>> 
>>> On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:
 
 
 On 3/5/2018 11:49 PM, Telmo Menezes wrote:
 
 On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:
 
 On 3/5/2018 9:14 AM, Telmo Menezes wrote:
 
 "Could" implies a question about possibilities.  It's certainly logically
 possible that there not be such a disease as leukemia.  Is it nomologically
 possible?...not as far as we know.
 
 Well I'm not sure it's logically possible, for the reasons that Bruno
 already addressed.
 
 
 Bruno is assuming that everything not contrary to his theory exists
 axiomatically...which is assuming the answer.
 
 That is a rather uncharitable way of putting it.
 
 Bruno has discussed his Universal Dovetailer Argument extensively. If
 you assume comp and accept the argument, then we are inside of the
 dovetailer. The dovetailer is an everything-generator.
 
 
 That's exactly the problem with everythingism.  It predicts all the stuff 
 we
 don't see.
>>> 
>>> Bruno, Russell, Tegmark and others tend to concern themselves a lot
>>> with why our experience of reality looks like it does on the face of
>>> everythingism. That is precisely the "hard part", no?
>> 
>> It is the hard part of the matter problem, when we understand that with 
>> mechanism, the everything is no more that the sigma_1 arithmetical reality, 
>> which I think everyone believe in, except the ultra-intuitionist.
>> 
>> Brent seemed to have understood this once, but seems to forget it recently 
>> apparently.
>> 
>> If someone believe in a primal physical universe *and* in the survive of 
>> consciousness through the digital transformation, it is up to them to 
>> explain how the primal universe (and what is it?) acts on arithmetic for 
>> making some computations seems more real than others.
>> 
>> I claim nothing, except that mechanism and materialism are incompatible, and 
>> that the mind-body problem is reduced into deriving physics from the 
>> “material” variants of machine’s ideal rational 
>> believability/justifiability. And then it works at the propositional level, 
>> so we can say that today, we have not yet detected any evidence for a primal 
>> universe through our observation of nature.
>> 
>> Let us encourage the pursue of the testing, simply.
> 
> Bruno, can you expand a bit? If you had a big grant to pursue this
> research programme, what would you do?


I would hire mathematicians to continue the extraction of physics, which in 
this case would mean optimising the theorem provers for the machine’s quantum 
logics (S4Grz1, Z1*, X1*) and comparing them to the quantum logics of nature 
(where some research does exist already, but it needs to make progress too).
Now we have three quantum logics, and all three have more theorems than the 
physical-empirical quantum logics, as they obey the Löbian constraints; so 
some quantum tautologies are original, and we have to test them. Then we have 
to isolate the tensor product without postulating a linear logic superimposed 
on the quantum logic (that would work, but to exploit the G/G* difference we 
have no choice but to derive the “linear and” from the machine-quantum 
physicalness, where some tools from knot theory (and its relation with quantum 
statistics) might be available if some relations involving the quantisation 
([]<>A, with the box and diamond of Z1*, S4Grz1, X1*) are verified, which is 
still open).

My work has only opened the door to a non-Aristotelian way of conceiving 
observation rationally. But now this calls for continual verification: either 
we find a discrepancy, in which case we learn that Mechanism is false, or we 
continue with the simple mechanist theory, as it explains both quanta and 
qualia, with a consistent relation between them.

Many important questions remain: is the Hamiltonian purely physical (derivable 
in arithmetic/theology) or geographical, in which case the multiverse allows 
for a continuum of Hamiltonians, etc.?

Telmo, what could Galileo or Newton have answered to a similar question? In 
fact all physical facts must be explained by the theology of numbers. If it 
misses Dark Matter or GR or anything, such things can be 
geographical/contingent; but if it contradicts them, then such things become 
evidence that mechanism is false (or that we are in a second-order simulation 
à la Bostrom).

My initial goal was just to use Digital Mechanism to reduce the mind-body 
problem to the problem of deriving physics from Number (theology). 
I did not expect to derive three propositional physics already. Now we have to 
derive the first-order physics, and continue the

Re: INDEXICAL Computationalism

2018-03-08 Thread Brent Meeker



On 3/8/2018 7:57 AM, Bruno Marchal wrote:

On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:

On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:


On 3/5/2018 11:49 PM, Telmo Menezes wrote:

On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:

On 3/5/2018 9:14 AM, Telmo Menezes wrote:

"Could" implies a question about possibilities.  It's certainly logically
possible that there not be such a disease as leukemia.  Is it nomologically
possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.


Bruno is assuming that everything not contrary to his theory exists
axiomatically...which is assuming the answer.

That is a rather uncharitable way of putting it.

Bruno has discussed his Universal Dovetailer Argument extensively. If
you assume comp and accept the argument, then we are inside of the
dovetailer. The dovetailer is an everything-generator.


That's exactly the problem with everythingism.  It predicts all the stuff we
don't see.

Bruno, Russell, Tegmark and others tend to concern themselves a lot
with why our experience of reality looks like it does on the face of
everythingism. That is precisely the "hard part", no?


They recognize that their theory doesn't account for it.  Tegmark makes 
some anthropic speculations.  Bruno just says, 'I claim nothing, except 
that mechanism and materialism are incompatible, and that the mind-body 
problem is reduced into deriving physics from the “material” variants of 
machine’s ideal rational believability/justifiability.'  It's not at all 
clear to me that "reduced" is the appropriate verb...as though it is 
simpler or more fundamental.  Even if you accept his step 7 it has only 
"reduced" the problem to explaining the existence of physics and this 
particular physics, from the assumption of arithmetic and the UD.


It's my view that it may be simpler to explain physics, this particular 
physics, and consciousness from physics.  So I could say the mind-body problem 
has been reduced to explaining consciousness from physics...not even 
necessarily fundamental physics.



It is the hard part of the matter problem, when we understand that, with 
mechanism, the everything is no more than the sigma_1 arithmetical reality, 
which I think everyone believes in, except the ultra-intuitionists.

Brent seemed to have understood this once, but apparently has forgotten it 
recently.

If someone believes in a primal physical universe *and* in the survival of 
consciousness through the digital transformation, it is up to them to explain 
how the primal universe (and what is it?) acts on arithmetic to make some 
computations seem more real than others.

I claim nothing, except that mechanism and materialism are incompatible, and 
that the mind-body problem is reduced to deriving physics from the “material” 
variants of the machine’s ideal rational believability/justifiability.


Or why fewer than all arithmetical relations are realized in physics.

Brent


This works at the propositional level, so we can say that, to date, our 
observation of nature has not yet detected any evidence for a primal universe.

Let us simply encourage the pursuit of the testing.

Bruno


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: INDEXICAL Computationalism

2018-03-08 Thread Telmo Menezes
On Thu, Mar 8, 2018 at 4:57 PM, Bruno Marchal  wrote:
>
>> On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:
>>
>> On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:
>>>
>>>
>>> On 3/5/2018 11:49 PM, Telmo Menezes wrote:
>>>
>>> On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:
>>>
>>> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
>>>
>>> "Could" implies a question about possibilities.  It's certainly logically
>>> possible that there not be such a disease as leukemia.  Is it nomologically
>>> possible?...not as far as we know.
>>>
>>> Well I'm not sure it's logically possible, for the reasons that Bruno
>>> already addressed.
>>>
>>>
>>> Bruno is assuming that everything not contrary to his theory exists
>>> axiomatically...which is assuming the answer.
>>>
>>> That is a rather uncharitable way of putting it.
>>>
>>> Bruno has discussed his Universal Dovetailer Argument extensively. If
>>> you assume comp and accept the argument, then we are inside of the
>>> dovetailer. The dovetailer is an everything-generator.
>>>
>>>
>>> That's exactly the problem with everythingism.  It predicts all the stuff we
>>> don't see.
>>
>> Bruno, Russell, Tegmark and others tend to concern themselves a lot
>> with why our experience of reality looks like it does on the face of
>> everythingism. That is precisely the "hard part", no?
>
> It is the hard part of the matter problem, when we understand that with 
> mechanism, the everything is no more that the sigma_1 arithmetical reality, 
> which I think everyone believe in, except the ultra-intuitionist.
>
> Brent seemed to have understood this once, but seems to forget it recently 
> apparently.
>
> If someone believe in a primal physical universe *and* in the survive of 
> consciousness through the digital transformation, it is up to them to explain 
> how the primal universe (and what is it?) acts on arithmetic for making some 
> computations seems more real than others.
>
> I claim nothing, except that mechanism and materialism are incompatible, and 
> that the mind-body problem is reduced into deriving physics from the 
> “material” variants of machine’s ideal rational believability/justifiability. 
> And then it works at the propositional level, so we can say that today, we 
> have not yet detected any evidence for a primal universe through our 
> observation of nature.
>
> Let us encourage the pursue of the testing, simply.

Bruno, can you expand a bit? If you had a big grant to pursue this
research programme, what would you do?

Telmo.

> Bruno
>
>
>
>
>>
>>> Russell
>>> proposes something similar in his book. Isn't the exploration of this
>>> type of idea the original reason for this mailing list? That doesn't
>>> mean that the idea is right, of course, but it does mean that one
>>> should expect to not keep going around in circles without ever
>>> reaching a more sophisticated level of engagement with such theories.
>>>
>>>
>>> I'd be happy to engage a more sophisticated level.  I've suggested several
>>> times points on which Bruno's theory might have something to say about
>>> physics or cognition:  For example there is the discussion of whether QM is
>>> epistemic (quantum bayesianism) or ontic (wave-function realism).  There are
>>> experiments that seem to show it's ontic, but only under the assumption that
>>> experimenters agree on it...which seems to be an epistemic condition.  Or
>>> how about the past hypothesis; does the UD necessarily imply a universe that
>>> is in low entropy in the past...or is that just the definition of "past", in
>>> which case one asks why does the AoT have a consistent direction.  And what
>>> is the relation of the brain to the computational processes producing
>>> consciousness?  Why the delay in the Gray Walter experiment?  Is there
>>> really some number of neurons between platyhelminthes and homo sapiens that
>>> maximizes consciousness?
>>
>> Ok, me too. I feel that lack of moderation on the list makes it
>> difficult -- although I am not advocating it.
>> It's hard to talk over certain megaphones, and I think many give up.
>>
>>>
>>> But why would you suppose that a world in which "Leukemia doesn't exist."
>>> would allow you derive a logical contradiction?
>>>
>>> I think such a world would require one to accept something like
>>> creationism as logically consistent. The process of biological
>>> complexification happens by natural selection. Natural selection, by
>>> definition, implies failure modes. It also leads to endless
>>> competitive and exploitative dynamics such as predators, pathogens,
>>> parasites, etc. Avoiding all of these tragedies from the perspective
>>> of human beings would require a designer holding human interests at
>>> heart above everything else. Both the pre-existence of such a designer
>>> and its motivation to helps us above everything else seem nonsensical
>>> to me.
>>>
>>>
>>> First, you are appealing to 

Re: INDEXICAL Computationalism

2018-03-08 Thread Bruno Marchal

> On 7 Mar 2018, at 15:24, Telmo Menezes  wrote:
> 
> On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:
>> 
>> 
>> On 3/5/2018 11:49 PM, Telmo Menezes wrote:
>> 
>> On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:
>> 
>> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
>> 
>> "Could" implies a question about possibilities.  It's certainly logically
>> possible that there not be such a disease as leukemia.  Is it nomologically
>> possible?...not as far as we know.
>> 
>> Well I'm not sure it's logically possible, for the reasons that Bruno
>> already addressed.
>> 
>> 
>> Bruno is assuming that everything not contrary to his theory exists
>> axiomatically...which is assuming the answer.
>> 
>> That is a rather uncharitable way of putting it.
>> 
>> Bruno has discussed his Universal Dovetailer Argument extensively. If
>> you assume comp and accept the argument, then we are inside of the
>> dovetailer. The dovetailer is an everything-generator.
>> 
>> 
>> That's exactly the problem with everythingism.  It predicts all the stuff we
>> don't see.
> 
> Bruno, Russell, Tegmark and others tend to concern themselves a lot
> with why our experience of reality looks like it does on the face of
> everythingism. That is precisely the "hard part", no?

It is the hard part of the matter problem, when we understand that, with 
mechanism, the everything is no more than the sigma_1 arithmetical reality, 
which I think everyone believes in, except the ultra-intuitionists.

Brent seemed to have understood this once, but apparently has forgotten it 
recently.

If someone believes in a primal physical universe *and* in the survival of 
consciousness through the digital transformation, it is up to them to explain 
how the primal universe (and what is it?) acts on arithmetic to make some 
computations seem more real than others.

I claim nothing, except that mechanism and materialism are incompatible, and 
that the mind-body problem is reduced to deriving physics from the “material” 
variants of the machine’s ideal rational believability/justifiability. This 
works at the propositional level, so we can say that, to date, our observation 
of nature has not yet detected any evidence for a primal universe.

Let us simply encourage the pursuit of the testing.

Bruno




> 
>> Russell
>> proposes something similar in his book. Isn't the exploration of this
>> type of idea the original reason for this mailing list? That doesn't
>> mean that the idea is right, of course, but it does mean that one
>> should expect to not keep going around in circles without ever
>> reaching a more sophisticated level of engagement with such theories.
>> 
>> 
>> I'd be happy to engage a more sophisticated level.  I've suggested several
>> times points on which Bruno's theory might have something to say about
>> physics or cognition:  For example there is the discussion of whether QM is
>> epistemic (quantum bayesianism) or ontic (wave-function realism).  There are
>> experiments that seem to show it's ontic, but only under the assumption that
>> experimenters agree on it...which seems to be an epistemic condition.  Or
>> how about the past hypothesis; does the UD necessarily imply a universe that
>> in low entropy in the past...or is that just the definition of "past", in
>> which case one asks why does the AoT have a consistent direction.  And what
>> is the relation of the brain to the computational processes producing
>> consciousness?  Why the delay in the Gray Walter experiment?  Is there
>> really some number of neurons between platyhelmenthies and homo sapiens that
>> maximizes consciousness?
> 
> Ok, me too. I feel that lack of moderation on the list makes it
> difficult -- although I am not advocating it.
> It's hard to talk over certain megaphones, and I think many give up.
> 
>> 
>> But why would you suppose that a world in which "Leukemia doesn't exist."
>> would allow you derive a logical contradiction?
>> 
>> I think such a world would require one to accept something like
>> creationism as logically consistent. The process of biological
>> complexification happens by natural selection. Natural selection, by
>> definition, implies failure modes. It also leads to endless
>> competitive and exploitative dynamics such as predators, pathogens,
>> parasites, etc. Avoiding all of these tragedies from the perspective
>> of human beings would require a designer holding human interests at
>> heart above everything else. Both the pre-existence of such a designer
>> and its motivation to helps us above everything else seem nonsensical
>> to me.
>> 
>> 
>> First, you are appealing to biology and physics, not logic.
> 
> I am appealing to logic, because I am claiming that we must discard
> scenarios where the arrow of complexity is reversed. That is to say: a
> complex phenomena entailing an even more complex entity than what is
> being explained.
> 
>> I already said

Re: INDEXICAL Computationalism

2018-03-08 Thread Bruno Marchal

> On 7 Mar 2018, at 02:16, Brent Meeker  wrote:
> 
> 
> 
> On 3/6/2018 5:27 AM, Bruno Marchal wrote:
>> 
>>> On 6 Mar 2018, at 01:37, Brent Meeker >> > wrote:
>>> 
>>> 
>>> 
>>> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
> "Could" implies a question about possibilities.  It's certainly logically
> possible that there not be such a disease as leukemia.  Is it 
> nomologically
> possible?...not as far as we know.
 Well I'm not sure it's logically possible, for the reasons that Bruno
 already addressed.
>>> 
>>> Bruno is assuming that everything not contrary to his theory exists 
>>> axiomatically…
>> 
>> ?
>> 
>> 
>>> which is assuming the answer.
>> 
>> ?
>> 
>> I recall that my assumption is only that the physical brain can be emulated 
>> at some level by a digital physical machine so that we would survive in the 
>> usual sense.
>> 
>> Then to just define “digital” we need to accept "very elementary arithmetic" 
>> (like RA), but then, it is just impossible to use an assumption of primary 
>> physicalness to select the computations in arithmetic. We need to select it 
>> by the measure on the first person experiences, and this gives the first 
>> coherent explanation of both consciousness and matter appearance, where 
>> physicists just do not address this question since a long time.
> 
> Not just "accept arithmetic", but also to suppose that all possible 
> computations exist via a UD.


The existence of the UD is a consequence of elementary arithmetic. That is the 
bomb of Gödel, Church, Turing, …. It is not my discovery; it is explained in 
every textbook of computer science. All partial recursive functions are 
(strongly) represented by the true sigma_1 arithmetical relations.
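
For readers unfamiliar with the construction, here is a minimal toy sketch of 
what "dovetailing" means (my illustration, not anything from the post itself): 
interleave the execution of countably many programs so that every program 
eventually receives arbitrarily many steps, even though no program is run to 
completion first. The `make_program` generator is an arbitrary stand-in for 
"the i-th program".

```python
def dovetail(programs, rounds):
    """Interleave countably many programs: in round k, start program k
    and advance every already-started program by one step."""
    running = []   # generators started so far
    trace = []     # (program_index, yielded_value) pairs, in execution order
    for k in range(rounds):
        running.append(programs(k))           # start program k
        for i, prog in enumerate(running):    # one step for each started program
            try:
                trace.append((i, next(prog)))
            except StopIteration:
                pass                          # program i halted; skip it
    return trace

def make_program(i):
    """Toy stand-in for 'the i-th program': yields the multiples of i+1."""
    def gen():
        n = 0
        while True:
            yield (i + 1) * n
            n += 1
    return gen()

trace = dovetail(make_program, 4)
# Round 0 runs program 0 once; round 1 runs programs 0 and 1 once each; etc.
```

The real UD dovetails on all programs *and* all inputs; the point of the sketch 
is only that a single sequential process can simulate everything "in the limit".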

My theory, extracted from the hypothesis of mechanism, is just elementary 
arithmetic, or Kxy = x and Sxyz = xz(yz), and nothing else.
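
For concreteness, a toy reduction engine for those two combinators (my sketch, 
not part of the post): a term is the atom 'S', the atom 'K', any other atom, 
or an application pair (f, x), and the two rewrite rules Kxy → x and 
Sxyz → xz(yz) already yield a Turing-complete rewriting system.

```python
# Toy SK-combinator reduction. Application of f to x is the pair (f, x).
def reduce_step(t):
    """Perform one leftmost reduction step; return (new_term, changed)."""
    if isinstance(t, tuple):
        # Rule 1:  K x y  ->  x          i.e. (('K', x), y) -> x
        if isinstance(t[0], tuple) and t[0][0] == 'K':
            return t[0][1], True
        # Rule 2:  S x y z -> x z (y z)  i.e. ((('S', x), y), z) -> ((x,z),(y,z))
        if (isinstance(t[0], tuple) and isinstance(t[0][0], tuple)
                and t[0][0][0] == 'S'):
            x, y, z = t[0][0][1], t[0][1], t[1]
            return ((x, z), (y, z)), True
        f, changed = reduce_step(t[0])    # otherwise reduce inside the term
        if changed:
            return (f, t[1]), True
        a, changed = reduce_step(t[1])
        return (t[0], a), changed
    return t, False

def normalize(t, limit=100):
    """Reduce until no rule applies (reduction need not terminate in general)."""
    for _ in range(limit):
        t, changed = reduce_step(t)
        if not changed:
            break
    return t

# The identity combinator I = S K K, since S K K x -> K x (K x) -> x:
I = (('S', 'K'), 'K')
result = normalize((I, 'a'))   # -> 'a'
```

That two equations this small suffice for universal computation is the sense in 
which "nothing else" needs to be assumed.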

I assume much less than most theoretical physicists, and infinitely less than 
the physicalist metaphysicians. To be clear.

Bruno



> 
> Brent
> 
>> 
>> My work says nothing about physics, but it shows that with Digital Mechanism 
>> in *Metaphysics*, physicalism does not work.
>> 
>> Bruno
> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: INDEXICAL Computationalism

2018-03-07 Thread Telmo Menezes
On Wed, Mar 7, 2018 at 1:27 AM, Brent Meeker  wrote:
>
>
> On 3/5/2018 11:49 PM, Telmo Menezes wrote:
>
> On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:
>
> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
>
> "Could" implies a question about possibilities.  It's certainly logically
> possible that there not be such a disease as leukemia.  Is it nomologically
> possible?...not as far as we know.
>
> Well I'm not sure it's logically possible, for the reasons that Bruno
> already addressed.
>
>
> Bruno is assuming that everything not contrary to his theory exists
> axiomatically...which is assuming the answer.
>
> That is a rather uncharitable way of putting it.
>
> Bruno has discussed his Universal Dovetailer Argument extensively. If
> you assume comp and accept the argument, then we are inside of the
> dovetailer. The dovetailer is an everything-generator.
>
>
> That's exactly the problem with everythingism.  It predicts all the stuff we
> don't see.

Bruno, Russell, Tegmark and others tend to concern themselves a lot
with why our experience of reality looks like it does on the face of
everythingism. That is precisely the "hard part", no?

> Russell
> proposes something similar in his book. Isn't the exploration of this
> type of idea the original reason for this mailing list? That doesn't
> mean that the idea is right, of course, but it does mean that one
> should expect to not keep going around in circles without ever
> reaching a more sophisticated level of engagement with such theories.
>
>
> I'd be happy to engage a more sophisticated level.  I've suggested several
> times points on which Bruno's theory might have something to say about
> physics or cognition:  For example there is the discussion of whether QM is
> epistemic (quantum bayesianism) or ontic (wave-function realism).  There are
> experiments that seem to show it's ontic, but only under the assumption that
> experimenters agree on it...which seems to be an epistemic condition.  Or
> how about the past hypothesis; does the UD necessarily imply a universe that
> in low entropy in the past...or is that just the definition of "past", in
> which case one asks why does the AoT have a consistent direction.  And what
> is the relation of the brain to the computational processes producing
> consciousness?  Why the delay in the Gray Walter experiment?  Is there
> really some number of neurons between platyhelmenthies and homo sapiens that
> maximizes consciousness?

Ok, me too. I feel that the lack of moderation on the list makes it
difficult -- although I am not advocating it.
It's hard to talk over certain megaphones, and I think many give up.

>
> But why would you suppose that a world in which "Leukemia doesn't exist."
> would allow you derive a logical contradiction?
>
> I think such a world would require one to accept something like
> creationism as logically consistent. The process of biological
> complexification happens by natural selection. Natural selection, by
> definition, implies failure modes. It also leads to endless
> competitive and exploitative dynamics such as predators, pathogens,
> parasites, etc. Avoiding all of these tragedies from the perspective
> of human beings would require a designer holding human interests at
> heart above everything else. Both the pre-existence of such a designer
> and its motivation to helps us above everything else seem nonsensical
> to me.
>
>
> First, you are appealing to biology and physics, not logic.

I am appealing to logic, because I am claiming that we must discard
scenarios where the arrow of complexity is reversed. That is to say: a
complex phenomenon entailing an even more complex entity than what is
being explained.

> I already said
> that nomologically, leukemia was probably necessary.  It's just a possible
> mutation in bone marrow cells. But there's no logical contradiction in that
> mutation not occuring.

No, but there is a logical contradiction in no mutations ever occurring,
unless you can provide an alternative theory to natural selection that
does not reverse the arrow of complexity.

> Second, you're straw manning.  I didn' t say
> anything about "failure modes" not existing.  I said that one particular
> failure mode could fail to exist.  In fact I'd say the world would be better
> if even that one little girl had not died in pain.  Let's see you prove that
> implies a logical contradiction.

I would say that it really depends on whether QM is epistemic or
ontic, as you say above. Or: everythingism allows for an entity that
fits Anselm's argument.

Telmo.

> Brent
>

Re: INDEXICAL Computationalism

2018-03-06 Thread Brent Meeker



On 3/6/2018 5:27 AM, Bruno Marchal wrote:


On 6 Mar 2018, at 01:37, Brent Meeker > wrote:




On 3/5/2018 9:14 AM, Telmo Menezes wrote:

"Could" implies a question about possibilities.  It's certainly logically
possible that there not be such a disease as leukemia.  Is it nomologically
possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.


Bruno is assuming that everything not contrary to his theory exists 
axiomatically…


?



which is assuming the answer.


?

I recall that my assumption is only that the physical brain can be 
emulated at some level by a digital physical machine so that we would 
survive in the usual sense.


Then to just define “digital” we need to accept "very elementary 
arithmetic" (like RA), but then, it is just impossible to use an 
assumption of primary physicalness to select the computations in 
arithmetic. We need to select it by the measure on the first person 
experiences, and this gives the first coherent explanation of both 
consciousness and matter appearance, where physicists just do not 
address this question since a long time.


Not just "accept arithmetic", but also to suppose that all possible 
computations exist via a UD.


Brent



My work says nothing about physics, but it shows that with Digital 
Mechanism in *Metaphysics*, physicalism does not work.


Bruno




Re: INDEXICAL Computationalism

2018-03-06 Thread Brent Meeker



On 3/5/2018 11:49 PM, Telmo Menezes wrote:

On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:


On 3/5/2018 9:14 AM, Telmo Menezes wrote:

"Could" implies a question about possibilities.  It's certainly logically
possible that there not be such a disease as leukemia.  Is it nomologically
possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.


Bruno is assuming that everything not contrary to his theory exists
axiomatically...which is assuming the answer.

That is a rather uncharitable way of putting it.

Bruno has discussed his Universal Dovetailer Argument extensively. If
you assume comp and accept the argument, then we are inside of the
dovetailer. The dovetailer is an everything-generator.


That's exactly the problem with everythingism.  It predicts all the 
stuff we don't see.



Russell
proposes something similar in his book. Isn't the exploration of this
type of idea the original reason for this mailing list? That doesn't
mean that the idea is right, of course, but it does mean that one
should expect to not keep going around in circles without ever
reaching a more sophisticated level of engagement with such theories.


I'd be happy to engage a more sophisticated level.  I've suggested 
several times points on which Bruno's theory might have something to say 
about physics or cognition:  For example there is the discussion of 
whether QM is epistemic (quantum bayesianism) or ontic (wave-function 
realism).  There are experiments that seem to show it's ontic, but only 
under the assumption that experimenters agree on it...which seems to be 
an epistemic condition.  Or how about the past hypothesis; does the UD 
necessarily imply a universe that in low entropy in the past...or is 
that just the definition of "past", in which case one asks why does the 
AoT have a consistent direction.  And what is the relation of the brain 
to the computational processes producing consciousness?  Why the delay 
in the Gray Walter experiment?  Is there really some number of neurons 
between platyhelmenthies and homo sapiens that maximizes consciousness?





But why would you suppose that a world in which "Leukemia doesn't exist."
would allow you derive a logical contradiction?

I think such a world would require one to accept something like
creationism as logically consistent. The process of biological
complexification happens by natural selection. Natural selection, by
definition, implies failure modes. It also leads to endless
competitive and exploitative dynamics such as predators, pathogens,
parasites, etc. Avoiding all of these tragedies from the perspective
of human beings would require a designer holding human interests at
heart above everything else. Both the pre-existence of such a designer
and its motivation to helps us above everything else seem nonsensical
to me.


First, you are appealing to biology and physics, not logic.  I already 
said that nomologically, leukemia was probably necessary.  It's just a 
possible mutation in bone marrow cells.  But there's no *logical* 
contradiction in that mutation not occurring.  Second, you're straw 
manning.  I didn't say anything about "failure modes" not existing.  I 
said that one particular failure mode could fail to exist.  In fact I'd 
say the world would be better if even that one little girl had not died 
in pain.  Let's see you prove that implies a logical contradiction.


Brent



Re: INDEXICAL Computationalism

2018-03-06 Thread PGC


On Tuesday, March 6, 2018 at 2:20:37 PM UTC+1, Bruno Marchal wrote:
>
>
> > On 6 Mar 2018, at 00:50, Brent Meeker  
> wrote: 
> > 
> > 
> > 
> > On 3/5/2018 6:38 AM, Bruno Marchal wrote: 
> >>> On 4 Mar 2018, at 23:00, Brent Meeker  > wrote: 
> >>> 
> >>> 
> >>> 
> >>> On 3/3/2018 11:48 PM, Telmo Menezes wrote: 
>  On Sun, Mar 4, 2018 at 7:43 AM, Brent Meeker  > wrote: 
> > On 3/3/2018 1:47 PM, Telmo Menezes wrote: 
> >> On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes <
> te...@telmomenezes.com > 
> >> wrote: 
> >>> On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker  > 
> >>> wrote: 
>  On 2/28/2018 3:38 AM, Telmo Menezes wrote: 
>  
>  So what do you find more convincing:  An axiomatic proof that God 
>  exists, 
>  e.g. St Anslem's or Goedel's.  or The mere empirical absence of 
>  evidence. 
>  
>  In these proves, God = Totality / Ultimate Reality / The Whole 
>  Shebang. They don't mention commandments, or talking snakes or 
> burning 
>  bushes. I think you are proposing a false equivalence. 
>  
>  
>  No, you are inserting one.  St Anselm proves that perfect 
> being/agent 
>  exists.  He didn't claim to prove any other mythology.  So the 
> question 
>  stands: Who ya gonna believe?  the axiomatic proof or your lyin' 
> eyes? 
> >>> I find St. Anselm's proof meaningless, because perfection is a 
> human 
> >>> concept, i.e. it is relative to our evolutionary niche and 
> >>> circumstances. The perfect shot for the hunter is not the perfect 
> shot 
> >>> for the prey. Ok, so let's say that reality as a whole counts as 
> the 
> >>> perfect being. Perhaps. Could it be any other way that would be 
> worse? 
> >> I meant "that would be better", of course. 
> > Well I have a friend whose 12yr old daughter died of leukemia in 
> great pain. 
> > I think it could be better. 
>  I understand what you are saying. 
>  My point is this: could some totally that supports something as 
>  complex as human beings not include little girls with leukemia? 
> >>> "Could" implies a question about possibilities.  It's certainly 
> logically possible that there not be such a disease as leukemia.  Is it 
> nomologically possible?...not as far as we know. 
> >> Assuming mechanism it is logically impossible. Biological viruses and 
> molecular diseases are, globally (like the notion of Turing machine) 
> universal, and so there is no algorithm or program making such “totality” 
> immune for such diseases. They necessarily coevolve. 
> > 
> > That's fallacious reasoning.  Just because there is no algorithm 
> creating immunity doesn't mean the disease exists.  I can imagine many 
> diseases that happen not to exist (e.g. airborne ebola). 
>
> Me too. That is straw man. I was saying that there is no algorithm saving 
> us  from all *possible* disease. 
>
>
>
>
> > 
> >> 
> >> Of course, we can progress, and win the battles on larger class of 
> diseases and parasites, 
> > 
> > As we, for example, eliminated smallpox.  So it is not only logically, 
> but nomologically possible that smallpox not exist. 
> > 
> >> but after some time, they will find the way to “hack” the body again. 
> That can be related to the halting problem, or to the second recursion 
> theorem. 
> >> 
> >> I feel sorry for your friend’s daughter, as having great pain seems to 
> mean she got some therapy and not others, which seems to cure better and 
> are much less painful, but here it is human lies which hides the possible 
> help … (I know it is quite difficult and delicate to mess with the health 
> of other people, doubly so when ignorance and lies play a so big role in 
> the economy). 
> > 
> > She got the best known care.   For pain she got morphine, but the bone 
> marrow expands and causes great pain in the bones that even morphine 
> doesn't relieve.  At the end she asked permission to die. 
>
> So sad, 


According to the discourse that propagates itself on this list as Bruno's, 
one could ask: why? A few posts ago, every Löbian was minimally conscious 
for all the filters and richness of their logic/brains, and therefore more 
delusional than spiders etc. Therefore every person dying, following the 
reasoning, is just another trapped deluded soul headed for freedom. Suicide 
becomes the most rational act.

Similarly, because no machine knows which computations support which 
sinfully rich delusion of their falling soul, infinite abuse and violence 
are justified as inevitable outcomes of multi-subject scenarios (aka 
competitions). Every joy/beauty is reduced to mere tricks of evolution and 
the violence of others. 

Such "metaphysics" is on some level sadder and more hopeless than asking 
for permission to die because *it would forbid the same for its catholic 
tasting 

Re: INDEXICAL Computationalism

2018-03-06 Thread Bruno Marchal

> On 6 Mar 2018, at 01:37, Brent Meeker  wrote:
> 
> 
> 
> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
>>> "Could" implies a question about possibilities.  It's certainly logically
>>> possible that there not be such a disease as leukemia.  Is it nomologically
>>> possible?...not as far as we know.
>> Well I'm not sure it's logically possible, for the reasons that Bruno
>> already addressed.
> 
> Bruno is assuming that everything not contrary to his theory exists 
> axiomatically…

?


> which is assuming the answer.

?

I recall that my assumption is only that the physical brain can be emulated at 
some level by a digital physical machine, so that we would survive in the usual 
sense.

Then, just to define “digital”, we need to accept "very elementary arithmetic" 
(like RA); but then it is just impossible to use an assumption of primary 
physicalness to select the computations in arithmetic. We need to select them by 
the measure on the first-person experiences, and this gives the first coherent 
explanation of both consciousness and the appearance of matter, a question that 
physicists have not addressed for a long time.

My work says nothing about physics, but it shows that with Digital Mechanism in 
*Metaphysics*, physicalism does not work.

Bruno


> 
> But why would you suppose that a world in which "Leukemia doesn't exist."  
> would allow you derive a logical contradiction?
> 
> Brent
> 



Re: INDEXICAL Computationalism

2018-03-06 Thread Bruno Marchal

> On 6 Mar 2018, at 00:50, Brent Meeker  wrote:
> 
> 
> 
> On 3/5/2018 6:38 AM, Bruno Marchal wrote:
>>> On 4 Mar 2018, at 23:00, Brent Meeker  wrote:
>>> 
>>> 
>>> 
>>> On 3/3/2018 11:48 PM, Telmo Menezes wrote:
 On Sun, Mar 4, 2018 at 7:43 AM, Brent Meeker  wrote:
> On 3/3/2018 1:47 PM, Telmo Menezes wrote:
>> On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes 
>> wrote:
>>> On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker 
>>> wrote:
 On 2/28/2018 3:38 AM, Telmo Menezes wrote:
 
 So what do you find more convincing:  An axiomatic proof that God
 exists,
 e.g. St Anslem's or Goedel's.  or The mere empirical absence of
 evidence.
 
 In these proves, God = Totality / Ultimate Reality / The Whole
 Shebang. They don't mention commandments, or talking snakes or burning
 bushes. I think you are proposing a false equivalence.
 
 
 No, you are inserting one.  St Anselm proves that perfect being/agent
 exists.  He didn't claim to prove any other mythology.  So the question
 stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?
>>> I find St. Anselm's proof meaningless, because perfection is a human
>>> concept, i.e. it is relative to our evolutionary niche and
>>> circumstances. The perfect shot for the hunter is not the perfect shot
>>> for the prey. Ok, so let's say that reality as a whole counts as the
>>> perfect being. Perhaps. Could it be any other way that would be worse?
>> I meant "that would be better", of course.
> Well I have a friend whose 12yr old daughter died of leukemia in great 
> pain.
> I think it could be better.
 I understand what you are saying.
 My point is this: could some totally that supports something as
 complex as human beings not include little girls with leukemia?
>>> "Could" implies a question about possibilities.  It's certainly logically 
>>> possible that there not be such a disease as leukemia.  Is it nomologically 
>>> possible?...not as far as we know.
>> Assuming mechanism it is logically impossible. Biological viruses and 
>> molecular diseases are, globally (like the notion of Turing machine) 
>> universal, and so there is no algorithm or program making such “totality” 
>> immune for such diseases. They necessarily coevolve.
> 
> That's fallacious reasoning.  Just because there is no algorithm creating 
> immunity doesn't mean the disease exists.  I can imagine many diseases that 
> happen not to exist (e.g. airborne ebola).

Me too. That is a straw man. I was saying that there is no algorithm saving us 
from all *possible* diseases.




> 
>> 
>> Of course, we can progress, and win the battles on larger class of diseases 
>> and parasites,
> 
> As we, for example, eliminated smallpox.  So it is not only logically, but 
> nomologically possible that smallpox not exist.
> 
>> but after some time, they will find the way to “hack” the body again. That 
>> can be related to the halting problem, or to the second recursion theorem.
>> 
>> I feel sorry for your friend’s daughter, as having great pain seems to mean 
>> she got some therapy and not others, which seems to cure better and are much 
>> less painful, but here it is human lies which hides the possible help … (I 
>> know it is quite difficult and delicate to mess with the health of other 
>> people, doubly so when ignorance and lies play a so big role in the economy).
> 
> She got the best known care.   For pain she got morphine, but the bone marrow 
> expands and causes great pain in the bones that even morphine doesn't 
> relieve.  At the end she asked permission to die.

So sad, 

Bruno



> 
> Brent
> 



Re: INDEXICAL Computationalism

2018-03-05 Thread Telmo Menezes
On Tue, Mar 6, 2018 at 1:37 AM, Brent Meeker  wrote:
>
>
> On 3/5/2018 9:14 AM, Telmo Menezes wrote:
>
> "Could" implies a question about possibilities.  It's certainly logically
> possible that there not be such a disease as leukemia.  Is it nomologically
> possible?...not as far as we know.
>
> Well I'm not sure it's logically possible, for the reasons that Bruno
> already addressed.
>
>
> Bruno is assuming that everything not contrary to his theory exists
> axiomatically...which is assuming the answer.

That is a rather uncharitable way of putting it.

Bruno has discussed his Universal Dovetailer Argument extensively. If
you assume comp and accept the argument, then we are inside of the
dovetailer. The dovetailer is an everything-generator. Russell
proposes something similar in his book. Isn't the exploration of this
type of idea the original reason for this mailing list? That doesn't
mean that the idea is right, of course, but it does mean that one
should expect to not keep going around in circles without ever
reaching a more sophisticated level of engagement with such theories.

> But why would you suppose that a world in which "Leukemia doesn't exist."
> would allow you derive a logical contradiction?

I think such a world would require one to accept something like
creationism as logically consistent. The process of biological
complexification happens by natural selection. Natural selection, by
definition, implies failure modes. It also leads to endless
competitive and exploitative dynamics such as predators, pathogens,
parasites, etc. Avoiding all of these tragedies from the perspective
of human beings would require a designer holding human interests at
heart above everything else. Both the pre-existence of such a designer
and its motivation to help us above everything else seem nonsensical
to me.

Telmo.

> Brent
>



Re: INDEXICAL Computationalism

2018-03-05 Thread Brent Meeker



On 3/5/2018 9:14 AM, Telmo Menezes wrote:

"Could" implies a question about possibilities.  It's certainly logically
possible that there not be such a disease as leukemia.  Is it nomologically
possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.


Bruno is assuming that everything not contrary to his theory exists 
axiomatically...which is assuming the answer.


But why would you suppose that a world in which "Leukemia doesn't 
exist." would allow you to derive a *logical* contradiction?


Brent



Re: INDEXICAL Computationalism

2018-03-05 Thread Brent Meeker



On 3/5/2018 6:38 AM, Bruno Marchal wrote:

On 4 Mar 2018, at 23:00, Brent Meeker  wrote:



On 3/3/2018 11:48 PM, Telmo Menezes wrote:

On Sun, Mar 4, 2018 at 7:43 AM, Brent Meeker  wrote:

On 3/3/2018 1:47 PM, Telmo Menezes wrote:

On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes 
wrote:

On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker 
wrote:

On 2/28/2018 3:38 AM, Telmo Menezes wrote:

So what do you find more convincing: an axiomatic proof that God exists, 
e.g. St Anselm's or Goedel's, or the mere empirical absence of evidence?

In these proofs, God = Totality / Ultimate Reality / The Whole
Shebang. They don't mention commandments, talking snakes, or burning
bushes. I think you are proposing a false equivalence.


No, you are inserting one.  St Anselm proves that perfect being/agent
exists.  He didn't claim to prove any other mythology.  So the question
stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?

I find St. Anselm's proof meaningless, because perfection is a human
concept, i.e. it is relative to our evolutionary niche and
circumstances. The perfect shot for the hunter is not the perfect shot
for the prey. Ok, so let's say that reality as a whole counts as the
perfect being. Perhaps. Could it be any other way that would be worse?

I meant "that would be better", of course.

Well I have a friend whose 12yr old daughter died of leukemia in great pain.
I think it could be better.

I understand what you are saying.
My point is this: could some totality that supports something as
complex as human beings not include little girls with leukemia?

"Could" implies a question about possibilities.  It's certainly logically 
possible that there not be such a disease as leukemia.  Is it nomologically 
possible?...not as far as we know.

Assuming mechanism, it is logically impossible. Biological viruses and molecular 
diseases are, globally (like the notion of a Turing machine), universal, and so 
there is no algorithm or program making such a "totality" immune to such 
diseases. They necessarily coevolve.


That's fallacious reasoning.  Just because there is no algorithm 
creating immunity doesn't mean the disease must exist.  I can imagine many 
diseases that happen not to exist (e.g. airborne ebola).




Of course, we can progress, and win the battles on larger class of diseases and 
parasites,


As we, for example, eliminated smallpox.  So it is not only logically, 
but nomologically possible that smallpox not exist.



but after some time, they will find the way to “hack” the body again. That can 
be related to the halting problem, or to the second recursion theorem.

I feel sorry for your friend's daughter. Having great pain seems to mean she 
got some therapies and not others, which seem to cure better and are much less 
painful, but here it is human lies which hide the possible help … (I know it 
is quite difficult and delicate to meddle with the health of other people, doubly 
so when ignorance and lies play so big a role in the economy).


She got the best known care.   For pain she got morphine, but the bone 
marrow expands and causes great pain in the bones that even morphine 
doesn't relieve.  At the end she asked permission to die.


Brent



Re: INDEXICAL Computationalism

2018-03-05 Thread Telmo Menezes
On Sun, Mar 4, 2018 at 11:00 PM, Brent Meeker  wrote:
>
>
> On 3/3/2018 11:48 PM, Telmo Menezes wrote:
>>
>> On Sun, Mar 4, 2018 at 7:43 AM, Brent Meeker  wrote:
>>>
>>>
>>> On 3/3/2018 1:47 PM, Telmo Menezes wrote:

 On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes 
 wrote:
>
> On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker 
> wrote:
>>
>>
>> On 2/28/2018 3:38 AM, Telmo Menezes wrote:
>>
>> So what do you find more convincing:  An axiomatic proof that God
>> exists,
>> e.g. St Anslem's or Goedel's.  or The mere empirical absence of
>> evidence.
>>
>> In these proves, God = Totality / Ultimate Reality / The Whole
>> Shebang. They don't mention commandments, or talking snakes or burning
>> bushes. I think you are proposing a false equivalence.
>>
>>
>> No, you are inserting one.  St Anselm proves that perfect being/agent
>> exists.  He didn't claim to prove any other mythology.  So the
>> question
>> stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?
>
> I find St. Anselm's proof meaningless, because perfection is a human
> concept, i.e. it is relative to our evolutionary niche and
> circumstances. The perfect shot for the hunter is not the perfect shot
> for the prey. Ok, so let's say that reality as a whole counts as the
> perfect being. Perhaps. Could it be any other way that would be worse?

 I meant "that would be better", of course.
>>>
>>>
>>> Well I have a friend whose 12yr old daughter died of leukemia in great
>>> pain.
>>> I think it could be better.
>>
>> I understand what you are saying.
>> My point is this: could some totally that supports something as
>> complex as human beings not include little girls with leukemia?
>
>
> "Could" implies a question about possibilities.  It's certainly logically
> possible that there not be such a disease as leukemia.  Is it nomologically
> possible?...not as far as we know.

Well I'm not sure it's logically possible, for the reasons that Bruno
already addressed.
But the meta-point is this: notice that we are not discussing talking
snakes or divine commandments.

Telmo.

>
> Brent
>



Re: INDEXICAL Computationalism

2018-03-05 Thread Bruno Marchal

> On 4 Mar 2018, at 23:00, Brent Meeker  wrote:
> 
> 
> 
> On 3/3/2018 11:48 PM, Telmo Menezes wrote:
>> On Sun, Mar 4, 2018 at 7:43 AM, Brent Meeker  wrote:
>>> 
>>> On 3/3/2018 1:47 PM, Telmo Menezes wrote:
 On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes 
 wrote:
> On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker 
> wrote:
>> 
>> On 2/28/2018 3:38 AM, Telmo Menezes wrote:
>> 
>> So what do you find more convincing:  An axiomatic proof that God
>> exists,
>> e.g. St Anslem's or Goedel's.  or The mere empirical absence of
>> evidence.
>> 
>> In these proves, God = Totality / Ultimate Reality / The Whole
>> Shebang. They don't mention commandments, or talking snakes or burning
>> bushes. I think you are proposing a false equivalence.
>> 
>> 
>> No, you are inserting one.  St Anselm proves that perfect being/agent
>> exists.  He didn't claim to prove any other mythology.  So the question
>> stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?
> I find St. Anselm's proof meaningless, because perfection is a human
> concept, i.e. it is relative to our evolutionary niche and
> circumstances. The perfect shot for the hunter is not the perfect shot
> for the prey. Ok, so let's say that reality as a whole counts as the
> perfect being. Perhaps. Could it be any other way that would be worse?
 I meant "that would be better", of course.
>>> 
>>> Well I have a friend whose 12yr old daughter died of leukemia in great pain.
>>> I think it could be better.
>> I understand what you are saying.
>> My point is this: could some totally that supports something as
>> complex as human beings not include little girls with leukemia?
> 
> "Could" implies a question about possibilities.  It's certainly logically 
> possible that there not be such a disease as leukemia.  Is it nomologically 
> possible?...not as far as we know.

Assuming mechanism, it is logically impossible. Biological viruses and molecular 
diseases are, globally (like the notion of a Turing machine), universal, and so 
there is no algorithm or program making such a "totality" immune to such 
diseases. They necessarily coevolve.

Of course, we can progress, and win the battles on larger class of diseases and 
parasites, but after some time, they will find the way to “hack” the body 
again. That can be related to the halting problem, or to the second recursion 
theorem.

I feel sorry for your friend's daughter. Having great pain seems to mean she 
got some therapies and not others, which seem to cure better and are much less 
painful, but here it is human lies which hide the possible help … (I know it 
is quite difficult and delicate to meddle with the health of other people, doubly 
so when ignorance and lies play so big a role in the economy).

Bruno





> 
> Brent
> 



Re: INDEXICAL Computationalism

2018-03-04 Thread Brent Meeker



On 3/3/2018 11:48 PM, Telmo Menezes wrote:

On Sun, Mar 4, 2018 at 7:43 AM, Brent Meeker  wrote:


On 3/3/2018 1:47 PM, Telmo Menezes wrote:

On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes 
wrote:

On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker 
wrote:


On 2/28/2018 3:38 AM, Telmo Menezes wrote:

So what do you find more convincing: an axiomatic proof that God exists, 
e.g. St Anselm's or Goedel's, or the mere empirical absence of evidence?

In these proofs, God = Totality / Ultimate Reality / The Whole
Shebang. They don't mention commandments, talking snakes, or burning
bushes. I think you are proposing a false equivalence.


No, you are inserting one.  St Anselm proves that perfect being/agent
exists.  He didn't claim to prove any other mythology.  So the question
stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?

I find St. Anselm's proof meaningless, because perfection is a human
concept, i.e. it is relative to our evolutionary niche and
circumstances. The perfect shot for the hunter is not the perfect shot
for the prey. Ok, so let's say that reality as a whole counts as the
perfect being. Perhaps. Could it be any other way that would be worse?

I meant "that would be better", of course.


Well I have a friend whose 12yr old daughter died of leukemia in great pain.
I think it could be better.

I understand what you are saying.
My point is this: could some totality that supports something as
complex as human beings not include little girls with leukemia?


"Could" implies a question about possibilities.  It's certainly 
logically possible that there not be such a disease as leukemia.  Is it 
nomologically possible?...not as far as we know.


Brent



Re: INDEXICAL Computationalism

2018-03-03 Thread Telmo Menezes
On Sun, Mar 4, 2018 at 7:43 AM, Brent Meeker  wrote:
>
>
> On 3/3/2018 1:47 PM, Telmo Menezes wrote:
>>
>> On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes 
>> wrote:
>>>
>>> On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker 
>>> wrote:


 On 2/28/2018 3:38 AM, Telmo Menezes wrote:

 So what do you find more convincing:  An axiomatic proof that God
 exists,
 e.g. St Anslem's or Goedel's.  or The mere empirical absence of
 evidence.

 In these proves, God = Totality / Ultimate Reality / The Whole
 Shebang. They don't mention commandments, or talking snakes or burning
 bushes. I think you are proposing a false equivalence.


 No, you are inserting one.  St Anselm proves that perfect being/agent
 exists.  He didn't claim to prove any other mythology.  So the question
 stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?
>>>
>>> I find St. Anselm's proof meaningless, because perfection is a human
>>> concept, i.e. it is relative to our evolutionary niche and
>>> circumstances. The perfect shot for the hunter is not the perfect shot
>>> for the prey. Ok, so let's say that reality as a whole counts as the
>>> perfect being. Perhaps. Could it be any other way that would be worse?
>>
>> I meant "that would be better", of course.
>
>
> Well I have a friend whose 12yr old daughter died of leukemia in great pain.
> I think it could be better.

I understand what you are saying.
My point is this: could some totality that supports something as
complex as human beings not include little girls with leukemia?

Telmo.

>
> Brent
>



Re: INDEXICAL Computationalism

2018-03-03 Thread Brent Meeker



On 3/3/2018 1:47 PM, Telmo Menezes wrote:

On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes  wrote:

On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker  wrote:


On 2/28/2018 3:38 AM, Telmo Menezes wrote:

So what do you find more convincing: an axiomatic proof that God exists, 
e.g. St Anselm's or Goedel's, or the mere empirical absence of evidence?

In these proofs, God = Totality / Ultimate Reality / The Whole
Shebang. They don't mention commandments, talking snakes, or burning
bushes. I think you are proposing a false equivalence.


No, you are inserting one.  St Anselm proves that perfect being/agent
exists.  He didn't claim to prove any other mythology.  So the question
stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?

I find St. Anselm's proof meaningless, because perfection is a human
concept, i.e. it is relative to our evolutionary niche and
circumstances. The perfect shot for the hunter is not the perfect shot
for the prey. Ok, so let's say that reality as a whole counts as the
perfect being. Perhaps. Could it be any other way that would be worse?

I meant "that would be better", of course.


Well I have a friend whose 12yr old daughter died of leukemia in great 
pain.  I think it could be better.


Brent



Re: INDEXICAL Computationalism

2018-03-03 Thread Telmo Menezes
On Sat, Mar 3, 2018 at 10:41 PM, Telmo Menezes  wrote:
> On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker  wrote:
>>
>>
>> On 2/28/2018 3:38 AM, Telmo Menezes wrote:
>>
>> So what do you find more convincing:  An axiomatic proof that God exists,
>> e.g. St Anslem's or Goedel's.  or The mere empirical absence of evidence.
>>
>> In these proves, God = Totality / Ultimate Reality / The Whole
>> Shebang. They don't mention commandments, or talking snakes or burning
>> bushes. I think you are proposing a false equivalence.
>>
>>
>> No, you are inserting one.  St Anselm proves that perfect being/agent
>> exists.  He didn't claim to prove any other mythology.  So the question
>> stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?
>
> I find St. Anselm's proof meaningless, because perfection is a human
> concept, i.e. it is relative to our evolutionary niche and
> circumstances. The perfect shot for the hunter is not the perfect shot
> for the prey. Ok, so let's say that reality as a whole counts as the
> perfect being. Perhaps. Could it be any other way that would be worse?

I meant "that would be better", of course.

> If not, then it's perfect. I don't think one can weigh these
> abstractions against the empirical evidence that we have against the
> literal interpretation of religious texts.
>
> Telmo.
>
>> Brent
>>



Re: INDEXICAL Computationalism

2018-03-03 Thread Telmo Menezes
On Wed, Feb 28, 2018 at 8:51 PM, Brent Meeker  wrote:
>
>
> On 2/28/2018 3:38 AM, Telmo Menezes wrote:
>
> So what do you find more convincing:  An axiomatic proof that God exists,
> e.g. St Anslem's or Goedel's.  or The mere empirical absence of evidence.
>
> In these proves, God = Totality / Ultimate Reality / The Whole
> Shebang. They don't mention commandments, or talking snakes or burning
> bushes. I think you are proposing a false equivalence.
>
>
> No, you are inserting one.  St Anselm proves that perfect being/agent
> exists.  He didn't claim to prove any other mythology.  So the question
> stands: Who ya gonna believe?  the axiomatic proof or your lyin' eyes?

I find St. Anselm's proof meaningless, because perfection is a human
concept, i.e. it is relative to our evolutionary niche and
circumstances. The perfect shot for the hunter is not the perfect shot
for the prey. Ok, so let's say that reality as a whole counts as the
perfect being. Perhaps. Could it be any other way that would be worse?
If not, then it's perfect. I don't think one can weigh these
abstractions against the empirical evidence that we have against the
literal interpretation of religious texts.
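For reference, the Gödel-style "axiomatic proof" under discussion runs, in outline, as follows (a standard textbook compression in modal logic, not part of this thread):

```latex
% Gödel's ontological argument, in outline (higher-order modal logic, S5):
\begin{align*}
&\text{Df. } G(x) \;\leftrightarrow\; \forall\varphi\,[P(\varphi) \to \varphi(x)]
  && \text{($x$ is God-like: has every positive property)}\\
&\text{Ax. } P(\varphi) \to \neg P(\neg\varphi)
  && \text{(the negation of a positive property is not positive)}\\
&\text{Ax. } P(G)
  && \text{(God-likeness is itself positive)}\\
&\text{Th. } \Diamond\,\exists x\, G(x) \;\Rightarrow\; \Box\,\exists x\, G(x)
  && \text{(possible existence entails necessary existence)}
\end{align*}
```

Telmo's objection above amounts to rejecting the undefined primitive $P$ ("positive property") as anthropocentric, rather than disputing the derivation itself.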

Telmo.

> Brent
>



Re: INDEXICAL Computationalism

2018-03-03 Thread Bruno Marchal

> On 2 Mar 2018, at 23:24, Brent Meeker  wrote:
> 
> 
> 
> On 3/2/2018 11:05 AM, Bruno Marchal wrote:
>>> It doesn't miss it anymore than your theory.  You just postulate that 
>>> consciousness corresponds to some theorems about self-reference.
>> 
>> Not at all. I postulate only that consciousness is true for the machine I 
>> will be after the transplant made at the right level of description.
>> 
>> Then I study what *any* sound machine can prove about itself at its correct 
>> level of description.
> 
> But why should that have anything to do with its perception of self.


p                  (truth)
[]p                <—— "prove" is here, at the G level
[]p & p
[]p & <>p
[]p & <>t & p      <—— "perception" is here, at the G* level

So proof and perception are related, but they are quite different and obey quite 
different mathematical logics.
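The list above can be restated in standard notation (a sketch following Marchal's usual "Theaetetus" presentation, where $\Box$ is his [] (provability), $\Diamond$ his <> (consistency), and $t$ a truth constant):

```latex
\begin{align*}
& p                               && \text{truth} \\
& \Box p                          && \text{provability (the G level)} \\
& \Box p \land p                  && \text{knowledge (the Theaetetus move)} \\
& \Box p \land \Diamond p         && \text{observation} \\
& \Box p \land \Diamond t \land p && \text{perception (the G* level)}
\end{align*}
```

The labels attached to each variant are interpretive glosses consistent with the thread, not theorems.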



> 
>> 
>> Consciousness does not correspond to some theorem of self-reference, it is 
>> just that the machine point on something which are true for them,  non 
>> doubtable, yet non provable, nor even definable,
> 
> You say it does not correspond to some theorem of self-reference, but then 
> you imply it corresponds to deriving a Goedel sentence about themselves.  
> That's non doubtable but non provable.  But it's also nothing to do with 
> human consciousness. 


It has nothing to do with it? The relations are made precise through precise 
representation theorems.



> I only know very few humans are even aware of what a Goedel sentence is.

A Gödel number is just a program represented by a number in arithmetic. 
You need to think about it when you say "yes" to the doctor, because that is 
the number which is sent in the teleportation and duplication reasonings. It 
is your body (when represented in biophysics), and a number (when represented in 
arithmetic). No machine can determine by introspection which computations or 
universal machines support it.
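The idea of a program-as-number can be made concrete with a minimal Gödel-style numbering. This particular encoding (base-256 digits of the source bytes) is illustrative, not the one Bruno has in mind:

```python
# A minimal Gödel-style numbering: map a program's source text to a
# single natural number, and decode it back. Illustrative scheme only.

def godel_number(source: str) -> int:
    """Map source text to a unique natural number."""
    n = 0
    for byte in source.encode("utf-8"):
        n = n * 256 + byte + 1   # +1 keeps a leading zero byte from vanishing
    return n

def decode(n: int) -> str:
    """Invert godel_number."""
    out = []
    while n > 0:
        out.append((n % 256) - 1)
        n //= 256
    return bytes(reversed(out)).decode("utf-8")

prog = "print(2 + 2)"
g = godel_number(prog)
assert decode(g) == prog   # the number fully determines the program
```

The point carried by the encoding: the number and the program text are interchangeable descriptions of the same object, which is why arithmetic can "contain" programs.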



> 
>> except by using approximation of truth, or the metatheories similar to the 
>> one by Theaetetus.
> 
> That "approximation of truth" is a completely different attribute than 
> "correspondence with facts”.

Not at all. It is still Tarski's adequation theory, but restricted to sentences 
with a limited, but unbounded, number of quantifiers. "ExP(x)" is true when there 
is an n such that substitution(n, x, P) is true in the standard model (or in all 
models, in which case it is both true and provable).
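The truth condition stated for "ExP(x)" — true when some natural number n witnesses P in the standard model — can be sketched as a search. Since truth of such sentences is only semi-decidable, the sketch uses an explicit cutoff; the predicate and bound are illustrative assumptions:

```python
# Sketch of the Tarskian clause above: "ExP(x)" is true in the standard
# model when some natural number n satisfies P. We search up to a bound,
# since an unbounded search need not terminate for false sentences.

from typing import Callable, Optional

def exists_witness(P: Callable[[int], bool], bound: int) -> Optional[int]:
    """Return the least witness n < bound with P(n), or None if none is found."""
    for n in range(bound):
        if P(n):
            return n
    return None

# "There exists an even number greater than 10" — true, least witness 12:
w = exists_witness(lambda n: n > 10 and n % 2 == 0, 1000)
assert w == 12
```

Finding a witness certifies truth; failing to find one below the bound certifies nothing, which mirrors the asymmetry between provability and truth in the thread.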

Bruno



> 
> Brent
> 



Re: INDEXICAL Computationalism

2018-03-03 Thread Bruno Marchal

> On 2 Mar 2018, at 23:14, Brent Meeker  wrote:
> 
> 
> 
> On 3/2/2018 11:05 AM, Bruno Marchal wrote:
>>> But you're now confounding provable and true and using "existence" in the 
>>> sense of satisfying axioms as though it mean the same as in common 
>>> discourse.
>> 
>> 
>> The UD exist and is emulated in the same sense that the prime numbers are 
>> distributed in some ways.
>> 
>> I am just clear. In the ontology only 0, s(0), …. exists. It is the whole 
>> point that when we assume that consciousness is invariant for some digital 
>> transformation, you would need to reify matter and attribute it some magical 
>> (Non Turing emulable, nor FPI-recoverable) property to make some 
>> computations more real than other,
> 
> Sometimes I wonder if you know how science works.  It doesn't make 
> assumptions that contradict observations,


I did not, or you might elaborate.



> it makes observations and tries to form theories that match. 

Yes. And physics can perhaps do that someday, and then mechanism is refuted. 
But up to now, the observation fits with mechanism and its immaterialism. 



> It is a simple trivial observation that some computations are more real than 
> others. 

That is called "solipsism". It is not an observation; it is a principle of 
psychology/metaphysics.




> That observation doesn't require reifying matter or assuming anything 
> magical.  It's an observation.


The observation is physical, not metaphysical. For metaphysics we can only 
derive indirect consequences and test them. That has been done enough to say 
that mechanism fits with the observation, and materialism does not.




> 
>> where in fact on the the self-referentially correct measure can be obtained 
>> by mathematical means, and then we can compare with Nature.
>> 
>> Up to now, Arithmetic + its internal physical/material phenomenologies fits 
>> the facts,
> 
> Like all computations exist?  I think not.


Then you need to abandon the Mechanist hypothesis, actually even just the 
Church-Turing thesis, and so, to abandon your belief in elementary arithmetic 
theorems, like Euclid’s one.
Just to save physicalism? It actually has never worked even for the physical 
predictions, as re-explained shortly in the preceding post. 

Bruno

> 
> Brent
> 
>> where physics still dismiss the first person (despite the tremendous 
>> progress made by Galileo, Einstein and Everett, or people like Boscovic or 
>> Rossler).
>> 
> 



Re: INDEXICAL Computationalism

2018-03-02 Thread Brent Meeker



On 3/2/2018 11:05 AM, Bruno Marchal wrote:
It doesn't miss it anymore than your theory.  You just postulate that 
consciousness corresponds to some theorems about self-reference.


Not at all. I postulate only that consciousness is true for the 
machine I will be after the transplant made at the right level of 
description.


Then I study what *any* sound machine can prove about itself at its 
correct level of description.


But why should that have anything to do with its perception of self.



Consciousness does not correspond to some theorem of self-reference; 
it is just that the machine points to something which is true for 
it: not doubtable, yet not provable, nor even definable,


You say it does not correspond to some theorem of self-reference, but 
then you imply it corresponds to deriving a Goedel sentence about 
itself.  That's non-doubtable but non-provable.  But it also has 
nothing to do with human consciousness.  I know of only very few humans 
who are even aware of what a Goedel sentence is.


except by using approximation of truth, or the metatheories similar to 
the one by Theaetetus.


That "approximation of truth" is a completely different attribute than 
"correspondence with facts".


Brent



Re: INDEXICAL Computationalism

2018-03-02 Thread Brent Meeker



On 3/2/2018 11:05 AM, Bruno Marchal wrote:
But you're now confounding provable and true, and using "existence" in 
the sense of satisfying axioms as though it meant the same as in 
common discourse.



The UD exist and is emulated in the same sense that the prime numbers 
are distributed in some ways.


I am just clear. In the ontology only 0, s(0), …. exists. It is the 
whole point that when we assume that consciousness is invariant for 
some digital transformation, you would need to reify matter and 
attribute it some magical (Non Turing emulable, nor FPI-recoverable) 
property to make some computations more real than other,


Sometimes I wonder if you know how science works.  It doesn't make 
assumptions that contradict observations; it makes observations and 
tries to form theories that match.  It is a simple, trivial observation that 
some computations are more real than others.  That observation doesn't 
require reifying matter or assuming anything magical.  It's an observation.


where in fact only the self-referentially correct measure can be 
obtained by mathematical means, and then we can compare with Nature.


Up to now, Arithmetic + its internal physical/material phenomenologies 
fits the facts,


Like all computations exist?  I think not.

Brent

where physics still dismiss the first person (despite the tremendous 
progress made by Galileo, Einstein and Everett, or people like 
Boscovic or Rossler).






Re: INDEXICAL Computationalism

2018-03-02 Thread Bruno Marchal
the notion of the collection of “universal 
system” running the computations which supports myself.




> 
>> 
>> 
>> 
>> 
>>> So physicists place less credence in the axioms than in the theory.
>> 
>> The axioms (+rules, if formal) constitute the theory. 
> 
> Maxwell's equations plus values of permittivity, permeability, and currents 
> constitute the theory.  But they're not a set of axioms because they are not 
> closed; their application depends on boundary conditions. 


OK.




> 
>> 
>> 
>> 
>> 
>>>   The physicist is interested in which, of many possible, axioms produce a 
>>> theory that agrees with observation and makes successful predictions.  
>>> Feynman characterized this as Persian vs Greek mathematics.  
>> 
>> Why would theology, in the Greek sense, be different? Theology, in the 
>> original sense of Plato, just means "theory of everything",
> 
> No, the original, Greek sense, was observation and speculation:

Yes. The observation, followed by the speculation, which is the theorisation. 
All theories are speculations, more or less confirmed. 

The Greeks were just aware that this was the case in theology too, which by 
definition for them (simplifying things to avoid inevitable nuances) 
studies God, the Ultimate Truth, with the understanding/bet that it is 
something beyond us.



> 
> theory: From Middle French théorie, from Late Latin theōria, from Ancient 
> Greek θεωρία (theōría, “contemplation, speculation, a looking at, things 
> looked at”), from θεωρέω (theōréō, “I look at, view, consider, examine”), 
> from θεωρός (theōrós, “spectator”), from θέα (théa, “a view”) + ὁράω (horáō, 
> “I see,look”).
> 
> theology: From Middle English theologie, from Middle French theologie, from 
> Old French theologie, from Latin theologia, from Koine Greek θεολογία 
> (theología), from θεολόγος (theológos, adjective), from θεός (theós) + λόγος 
> (lógos).
> 
> "Theory" and "theology" don't even have the same root in Greek; theoros vs 
> theos.

I think theoros and theos do have themselves a common root. But all this is 
beside the point. With computationalism, physicalism makes an assumption 
without evidence, which makes the mind-body problem unsolvable. 



> 
>> and that includes both mind and matter. It is verifiable, even if today we 
>> are just at the beginning. I derive most of the quantum logic and weirdness 
>> (the many-histories) from Indexical Computationalism. I was not taken 
>> seriously because at that time Everett was not well known, and even quantum 
>> logic was dismissed (but this has changed since).
>> 
>> 
>> 
>>> 
>>>> And you would be correct. Yet, the theory made with these axioms + the 
>>>> consistency of those axioms is much richer. It proves many more 
>>>> theorems.
>>>> That comes from incompleteness and is extremely counter-intuitive, so we 
>>>> have to be careful.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> 
>>>>>> This does not need to be mentioned in most practical application of 
>>>>>> science, but it becomes important when doing metaphysics or theology 
>>>>>> with the scientific method. 
>>>> 
>>>> … like I just say.
>>>> 
>>>> 
>>>> 
>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>>  
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> Due to some possible anosognosia, even doing the digital transplant 
>>>>>>>>>> experience oneself would prove nothing, even to yourself (despite 
>>>>>>>>>> the feeling). You can know that you have survived, but you cannot 
>>>>>>>>>> know for sure that you have survived integrally (but you can know 
>>>>>>>>>> that in the Theaetetical sense, slightly weakened).
>>>>>>>>>> 
>>>>>>>>>> A doctor who claims that we survive such a transplant, or that science 
>>>>>>>>>> has proven we can survive such a transplant, is automatically a con-man.
>>>>>>>>> 
>>>>>>>>> Not at all.  He may be going on the best available evidence.  Just 
>>>>>>>>> because it's n

Re: INDEXICAL Computationalism

2018-03-01 Thread Brent Meeker



On 3/1/2018 2:31 AM, Bruno Marchal wrote:


On 1 Mar 2018, at 00:19, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/28/2018 7:22 AM, Bruno Marchal wrote:


On 26 Feb 2018, at 19:09, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/26/2018 2:43 AM, Bruno Marchal wrote:


On 23 Feb 2018, at 20:37, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/23/2018 12:46 AM, Bruno Marchal wrote:


On 22 Feb 2018, at 23:17, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/22/2018 1:09 AM, Bruno Marchal wrote:
On 21 Feb 2018, at 00:48, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/18/2018 10:21 AM, Bruno Marchal wrote:
If consciousness is invariant for a digital transplant, it 
is not much a matter of choice.

But that's simply assuming what is to be argued.


?

It is the working hypothesis. The argument is in showing that 
this enforces Plato and refutes Aristotle. Physics becomes a 
branch of machine’s psychology or theology.





The argument must be that the doctor has done this before 
(maybe to humans, maybe to mice) and there was no detectable 
change in behavior, so it's reasonable to bet on the doctor.
The reason why you say “yes” to the doctor is private. It 
needs an act of faith because no experience at all can confirm 
Computationalism.


That's moving the goal post.  You can't convince me that if you 
knew the doctor's work had no observable effect on the behavior 
of mice it would not, for you, count as evidence in favor of 
consciousness being retained.  Nothing is ever "confirmed" with 
certainty.



My point was stronger. Even if I say yes and truly survive 
“100%”, that cannot count as a proof that I have survived 
integrally. The reason is the possibility of anosognosia.
And the point is a theorem in the computationalist metaphysics. 
We know that we would believe correctly to have survived (and 
thus know it in the Theaetetical sense), but with an 
intellectual doubt enforcing us not to claim to have 
*necessarily* survived, keeping the theological act of faith 
mandatory. Of course, we can bet that the humans will forget 
this ...


Maybe stronger, but still a very weak point.  Everyone on this 
list thinks that intelligent behavior is an indicator of 
consciousness;


Yes, but not necessarily an indicator of “supervenience of 
consciousness in some real time”. If I see a movie, I can see 
intelligent behaviour, and attribute some consciousness to some 
person, but not in real time. With mechanism, the person is 
always conscious, but its body/representation is not.






no doubt because they believe that their own consciousness is 
important in their intelligent behavior.  Sure it's possible that 
they are unrelated and it's just a coincidence or the 
consciousness is an otiose epiphenomenon.  But that doesn't mean 
it's not evidence...and pretty convincing evidence at that. You 
have become so immersed in logic and mathematics that you seem to 
have forgotten that science doesn't find *necessary* truths and 
that acting on evidence is not an act of faith but of reason.


When you base your act on reason, you still need some faith in 
your reason and in its applicability to your local reality.


A sophistic argument worthy of a theologian.  If you didn't have 
"faith" in your reason you'd have no basis for any belief or 
action.  It's just a rhetorical trick to insert "faith”.


Not in theology or metaphysics.

You could say that once we believe in the truth of some axioms, we 
automatically believe in the consistency of the axioms.


A mathematician's answer.  Physical theories that are effectively 
equivalent may derive from different axioms.



Exactly like in mathematics. Or like in computer science. All that I 
explain here are theorems of elementary arithmetic, but also of 
combinatory theory, using only the two equations


Kxy = x
Sxyz = xz(yz)

Without adding anything, except for the Mechanist motivation in the 
background.
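As a concrete sketch (my illustration, not part of the original post), the two combinator equations quoted above can be rendered as curried functions; the only assumption is the standard SK combinatory calculus:

```python
# The two equations of combinatory logic, Kxy = x and Sxyz = xz(yz),
# written as curried Python functions.
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# Every computable function can be built from S and K alone; for
# instance the identity combinator I = SKK, since SKKx = Kx(Kx) = x.
I = S(K)(K)

print(K("a")("b"))  # a
print(I(42))        # 42
```

This is the sense in which "nothing is added": the two reduction rules alone already give a Turing-complete system.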


Mathematical logic studies exactly that: the relation between truth, 
consistency, provability, and their combinations. Incompleteness makes 
all those notions quite different.


Alas, many non-mathematicians confuse the theories with their intended 
models/realities.


How do you define truth if not in terms of their models?







So physicists place less credence in the axioms than in the theory.


The axioms (+rules, if formal) constitute the theory.


Maxwell's equations plus values of permittivity, permeability, and 
currents constitute the theory.  But they're not a set of axioms because 
they are not closed; their application depends on boundary conditions.







  The physicist is interested in which, of many possible, axioms produce 
a theory that agrees with observation and makes successful 
predictions.

Re: INDEXICAL Computationalism

2018-03-01 Thread Bruno Marchal

> On 1 Mar 2018, at 00:19, Brent Meeker <meeke...@verizon.net> wrote:
> 
> 
> 
> On 2/28/2018 7:22 AM, Bruno Marchal wrote:
>> 
>>> On 26 Feb 2018, at 19:09, Brent Meeker <meeke...@verizon.net 
>>> <mailto:meeke...@verizon.net>> wrote:
>>> 
>>> 
>>> 
>>> On 2/26/2018 2:43 AM, Bruno Marchal wrote:
>>>> 
>>>>> On 23 Feb 2018, at 20:37, Brent Meeker <meeke...@verizon.net 
>>>>> <mailto:meeke...@verizon.net>> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> On 2/23/2018 12:46 AM, Bruno Marchal wrote:
>>>>>> 
>>>>>>> On 22 Feb 2018, at 23:17, Brent Meeker <meeke...@verizon.net 
>>>>>>> <mailto:meeke...@verizon.net>> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On 2/22/2018 1:09 AM, Bruno Marchal wrote:
>>>>>>>>> On 21 Feb 2018, at 00:48, Brent Meeker <meeke...@verizon.net 
>>>>>>>>> <mailto:meeke...@verizon.net>> wrote:
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On 2/18/2018 10:21 AM, Bruno Marchal wrote:
>>>>>>>>>> If consciousness is invariant for a digital transplant, it is not 
>>>>>>>>>> much a matter of choice.
>>>>>>>>> But that's simply assuming what is to be argued.
>>>>>>>> 
>>>>>>>> ?
>>>>>>>> 
>>>>>>>> It is the working hypothesis. The argument is in showing that this 
>>>>>>>> enforces Plato and refutes Aristotle. Physics becomes a branch of 
>>>>>>>> machine’s psychology or theology.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> The argument must be that the doctor has done this before (maybe to 
>>>>>>>>> humans, maybe to mice) and there was not detectable change in 
>>>>>>>>> behavior, so it's reasonable to bet on the doctor.
>>>>>>>> The reason why you say “yes” to the doctor is private. It needs an act 
>>>>>>>> of faith because no experience at all can confirm Computationalism.
>>>>>>> 
>>>>>>> That's moving the goal post.  You can't convince me that if you knew 
>>>>>>> the doctor's work had no observable effect on the behavior of mice it 
>>>>>>> would not, for you, count as evidence in favor of consciousness being 
>>>>>>> retained.  Nothing is ever "confirmed" with certainty.
>>>>>> 
>>>>>> 
>>>>>> My point was stronger. Even if I say yes and truly survive “100%”, that 
>>>>>> cannot count as a proof that I have survived integrally. The reason is 
>>>>>> the possibility of anosognosia. 
>>>>>> And the point is a theorem in the computationalist metaphysics. We know 
>>>>>> that we would believe correctly to have survived (and thus know it in 
>>>>>> the Theaetetical sense), but with an intellectual doubt enforcing us not 
>>>>>> to claim to have *necessarily* survived, keeping the theological act of 
>>>>>> faith mandatory. Of course, we can bet that the humans will forget this ...
>>>>> 
>>>>> Maybe stronger, but still a very weak point.  Everyone on this list 
>>>>> thinks that intelligent behavior is an indicator of consciousness;
>>>> 
>>>> Yes, but not necessarily an indicator of “supervenient of consciousness in 
>>>> some real time”. If I see a movie, I can see intelligent behaviour,
>>>>  and attribute some consciousness to some person, but not in a 
>>>> real time. With mechanism, the person is always conscious, but its 
>>>> body/representation is not.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> no doubt because they believe that their own consciousness is important 
>>>>> in their intelligent behavior.  Sure it's possible that they are 
>>>>> unrelated and it's just a coincidence or the consciousness is an otiose 
>>>>> epiphenomenon.  But that doesn't mean it's not evidence...and

Re: INDEXICAL Computationalism

2018-02-28 Thread Brent Meeker



On 2/28/2018 7:22 AM, Bruno Marchal wrote:


On 26 Feb 2018, at 19:09, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/26/2018 2:43 AM, Bruno Marchal wrote:


On 23 Feb 2018, at 20:37, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/23/2018 12:46 AM, Bruno Marchal wrote:


On 22 Feb 2018, at 23:17, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/22/2018 1:09 AM, Bruno Marchal wrote:
On 21 Feb 2018, at 00:48, Brent Meeker <meeke...@verizon.net 
<mailto:meeke...@verizon.net>> wrote:




On 2/18/2018 10:21 AM, Bruno Marchal wrote:
If consciousness is invariant for a digital transplant, it is 
not much a matter of choice.

But that's simply assuming what is to be argued.


?

It is the working hypothesis. The argument is in showing that 
this enforces Plato and refutes Aristotle. Physics becomes a 
branch of machine’s psychology or theology.





The argument must be that the doctor has done this before 
(maybe to humans, maybe to mice) and there was no detectable 
change in behavior, so it's reasonable to bet on the doctor.
The reason why you say “yes” to the doctor is private. It needs 
an act of faith because no experience at all can confirm 
Computationalism.


That's moving the goal post. You can't convince me that if you 
knew the doctor's work had no observable effect on the behavior 
of mice it would not, for you, count as evidence in favor of 
consciousness being retained.  Nothing is ever "confirmed" with 
certainty.



My point was stronger. Even if I say yes and truly survive “100%”, 
that cannot count as a proof that I have survived integrally. The 
reason is the possibility of anosognosia.
And the point is a theorem in the computationalist metaphysics. We 
know that we would believe correctly to have survived (and thus 
know it in the Theaetetical sense), but with an intellectual doubt 
enforcing us not to claim to have *necessarily* survived, keeping 
the theological act of faith mandatory. Of course, we can bet that 
the humans will forget this ...


Maybe stronger, but still a very weak point. Everyone on this list 
thinks that intelligent behavior is an indicator of consciousness;


Yes, but not necessarily an indicator of “supervenience of 
consciousness in some real time”. If I see a movie, I can see 
intelligent behaviour, and attribute some consciousness to some 
person, but not in real time. With mechanism, the person is always 
conscious, but its body/representation is not.






no doubt because they believe that their own consciousness is 
important in their intelligent behavior.  Sure it's possible that 
they are unrelated and it's just a coincidence or the consciousness 
is an otiose epiphenomenon.  But that doesn't mean it's not 
evidence...and pretty convincing evidence at that.  You have become 
so immersed in logic and mathematics that you seem to have 
forgotten that science doesn't find *necessary* truths and that 
acting on evidence is not an act of faith but of reason.


When you base your act on reason, you still need some faith in your 
reason and in its applicability to your local reality.


A sophistic argument worthy of a theologian.  If you didn't have 
"faith" in your reason you'd have no basis for any belief or action.  
It's just a rhetorical trick to insert "faith”.


Not in theology or metaphysics.

You could say that once we believe in the truth of some axioms, we 
automatically believe in the consistency of the axioms.


A mathematician's answer.  Physical theories that are effectively 
equivalent may derive from different axioms.  So physicists place less 
credence in the axioms than in the theory.  The physicist is interested 
in which, of many possible, axioms produce a theory that agrees with 
observation and makes successful predictions.  Feynman characterized this 
as Persian vs Greek mathematics.


And you would be correct. Yet, the theory made with these axioms + the 
consistency of those axioms is much richer. It proves many more 
theorems.
That comes from incompleteness and is extremely counter-intuitive, so 
we have to be careful.








This does not need to be mentioned in most practical application of 
science, but it becomes important when doing metaphysics or theology 
with the scientific method.


… like I just say.


Due to some possible anosognosia, even doing the digital 
transplant experience oneself would prove nothing, even to 
yourself (despite the feeling). You can know that you have 
survived, but you cannot know for sure that you have survived 
integrally (but you can know that in the Theaetetical sense, 
slightly weakened).


A doctor who claims that we survive such a transplant, or that 
science has proven we can survive such a transplant, is 
automatically a con-man.


Not at all.  He may be going on the best available evidence.  
Just because 

Re: INDEXICAL Computationalism

2018-02-28 Thread Brent Meeker



On 2/28/2018 3:38 AM, Telmo Menezes wrote:

So what do you find more convincing:  an axiomatic proof that God exists, 
e.g. St Anselm's or Goedel's, or the mere empirical absence of evidence?

In these proofs, God = Totality / Ultimate Reality / The Whole 
Shebang. They don't mention commandments, or talking snakes, or burning 
bushes. I think you are proposing a false equivalence.


No, you are inserting one.  St Anselm proves that a perfect being/agent 
exists.  He didn't claim to prove any other mythology.  So the question 
stands: who ya gonna believe?  The axiomatic proof or your lyin' eyes?


Brent



Re: INDEXICAL Computationalism

2018-02-28 Thread Bruno Marchal

> On 27 Feb 2018, at 11:42, Telmo Menezes  wrote:
> 
>> Depends on what you mean by "proof", we sentence people to prison based on
>> proof of their crime.  Not all proof is mathematical.  And mathematical
>> proof is only relative to the axioms.
> 
> I think that this is one of those cases where the term is overloaded.
> I would argue that, for most people, proof has a strong connotation of
> "case closed". Mathematical proof is the only domain where this is
> actually true.

In real life mathematics, you are right, at least for Arithmetic, except that 
even here some ultra-finitists would disagree.

In real life mathematics, the proofs are informal, and usually convincing. They 
correspond, in the theology of the machine, to 


 []p & p

And they do not correspond to the formal proof []p, even though, from a 3p 
view, on an assumed-correct (simple) machine, we can see they are the same; 
but the (simpler than us) machine cannot see it.

Only computability is “really” absolute. Provability means only rational 
justification from my primitive belief.
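As a sketch in the modal notation used above ([]p for formal provability), and reading the post through the standard Solovay logics G and G* (my gloss, not Bruno's wording):

```latex
\text{formal proof: } \Box p
\qquad
\text{informal, lived proof (Theaetetus): } \Box p \wedge p
```

For a sound machine the two notions are extensionally equivalent, and the "divine" logic G* can assert this, \(G^* \vdash \Box p \leftrightarrow (\Box p \wedge p)\), because \(G^* \vdash \Box p \to p\); but the machine's own provable logic G cannot, since \(G \nvdash \Box p \to p\).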

Now, at the meta level, we all agree on the axioms of arithmetic, and so for 
arithmetic, what you say is true … but it remains not provable by us. Our 
conviction remains based on our consciousness, and we have to project our 
rationality on the others to proceed. We have done that since we were 
children, so it is difficult to realise that we make an act of faith there, 
but it is there, and, with mechanism, it needs to be there.

That is really the kind of things which are so counter-intuitive that the use 
of the machine self-reference logic (machine’s theology (G*) or/and machine’s 
science (G)) is obligatory.




> Everywhere else it is relative: it means sufficient
> evidence for some course of action to be taken.

All axioms, to be accepted, even in pure mathematics, require some faith. This 
is not related to the fact that the axioms are not provable from less; it is 
related to the fact that adopting the axioms means we believe they are 
consistent, yet we cannot even assume that consistency without either becoming 
inconsistent or changing the theory completely into a much more powerful 
theory, to which this remark will reapply again and again.
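This is Gödel's second incompleteness theorem; as a sketch in the []p notation of the thread, with Con(T) abbreviating the consistency statement of a sufficiently rich theory T:

```latex
\mathrm{Con}(T) \;:=\; \neg\,\Box_T \bot ,
\qquad
T \text{ consistent} \;\Longrightarrow\; T \nvdash \mathrm{Con}(T).
```

And adopting consistency explicitly yields a strictly stronger theory \(T' = T + \mathrm{Con}(T)\), to which the theorem applies again, \(T' \nvdash \mathrm{Con}(T')\), and so on, which is the "again and again" above.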



> I suspect that judges
> and lawyers like the term because it makes everyone sleep better at
> night. I never see contemporary scientists using the term, only
> science journalists. Again, this makes sense: the general public likes
> to feel that science is settling matters. I think it is anti-pedagogic
> to talk about scientific theories being "proven", because it conveys
> the wrong idea about what science is and what science does. For a
> serious scientist, nothing is ever settled and no cases are closed.

OK.  (And for a metamathematician or mathematical logician, this is true even 
for x + 0 = x, at the object level, but not at the informal level where all 
science is done; and that is why it concerns metaphysics or philosophy of 
mind/theology, not engineering, physics, or mathematics.)




> There are strong hypothesis at a given time, there are effective
> models for certain domains, and one does the best one can with them.
> The lack of this "case closed" attitude is precisely what makes
> science such a magnificent endeavour, and why it shines so bright
> above the certainties of the religious fundamentalists and the
> ideologues.

Absolutely. 

The problem is that very often, materialists confuse physics and metaphysics. 
They interpret physics religiously, which is a problem neither in physics 
nor in metaphysics, if done consciously. When done unconsciously, the whole 
computationalist mind-body problem dissolves into pure pseudo-religion, and 
that happened already with Aristotle, and much more with his institutionalised 
religion. They criticise the whole of metaphysics and religion, and separate 
these from science, because they have chosen their religion and metaphysics, 
but without awareness of the fact, taking it for science, which it is not 
(and that is the key thing that Plato did understand, but Aristotle did not, 
or evacuated by the usual mockery-and-insult technique, applied to Plato, in 
his metaphysics). 


Bruno



Re: INDEXICAL Computationalism

2018-02-28 Thread Bruno Marchal

> On 26 Feb 2018, at 19:09, Brent Meeker <meeke...@verizon.net> wrote:
> 
> 
> 
> On 2/26/2018 2:43 AM, Bruno Marchal wrote:
>> 
>>> On 23 Feb 2018, at 20:37, Brent Meeker <meeke...@verizon.net 
>>> <mailto:meeke...@verizon.net>> wrote:
>>> 
>>> 
>>> 
>>> On 2/23/2018 12:46 AM, Bruno Marchal wrote:
>>>> 
>>>>> On 22 Feb 2018, at 23:17, Brent Meeker <meeke...@verizon.net 
>>>>> <mailto:meeke...@verizon.net>> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> On 2/22/2018 1:09 AM, Bruno Marchal wrote:
>>>>>>> On 21 Feb 2018, at 00:48, Brent Meeker <meeke...@verizon.net 
>>>>>>> <mailto:meeke...@verizon.net>> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On 2/18/2018 10:21 AM, Bruno Marchal wrote:
>>>>>>>> If consciousness is invariant for a digital transplant, it is not much 
>>>>>>>> a matter of choice.
>>>>>>> But that's simply assuming what is to be argued.
>>>>>> 
>>>>>> ?
>>>>>> 
>>>>>> It is the working hypothesis. The argument is in showing that this 
>>>>>> enforces Plato and refutes Aristotle. Physics becomes a branch of 
>>>>>> machine’s psychology or theology.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> The argument must be that the doctor has done this before (maybe to 
>>>>>>> humans, maybe to mice) and there was not detectable change in behavior, 
>>>>>>> so it's reasonable to bet on the doctor.
>>>>>> The reason why you say “yes” to the doctor is private. It needs an act 
>>>>>> of faith because no experience at all can confirm Computationalism.
>>>>> 
>>>>> That's moving the goal post.  You can't convince me that if you knew the 
>>>>> doctor's work had no observable effect on the behavior of mice it would 
>>>>> not, for you, count as evidence in favor of consciousness being retained. 
>>>>>  Nothing is ever "confirmed" with certainty.
>>>> 
>>>> 
>>>> My point was stronger. Even if I say yes and truly survive “100%”, that 
>>>> cannot count as a proof that I have survived integrally. The reason is the 
>>>> possibility of anosognosia. 
>>>> And the point is a theorem in the computationalist metaphysics. We know 
>>>> that we would believe correctly to have survived (and thus know it in the 
>>>> Theaetetical sense), but with an intellectual doubt enforcing us not to 
>>>> claim to have *necessarily* survived, keeping the theological act of faith 
>>>> mandatory. Of course, we can bet that the humans will forget this ...
>>> 
>>> Maybe stronger, but still a very weak point.  Everyone on this list thinks 
>>> that intelligent behavior is an indicator of consciousness;
>> 
>> Yes, but not necessarily an indicator of “supervenient of consciousness in 
>> some real time”. If I see a movie, I can see intelligent behaviour, and 
>> attribute some consciousness to some person, but not in a real time. With 
>> mechanism, the person is always conscious, but its body/representation is 
>> not.
>> 
>> 
>> 
>> 
>> 
>>> no doubt because they believe that their own consciousness is important in 
>>> their intelligent behavior.  Sure it's possible that they are unrelated and 
>>> it's just a coincidence or the consciousness is an otiose epiphenomenon.  
>>> But that doesn't mean it's not evidence...and pretty convincing evidence at 
>>> that.  You have become so immersed in logic and mathematics that you seem 
>>> to have forgotten that science doesn't find *necessary* truths and that 
>>> acting on evidence is not an act of faith but of reason.
>> 
>> When you base your act on reason, you still need some faith in your reason 
>> and in its applicability to your local reality.
> 
> A sophistic argument worthy of a theologian.  If you didn't have "faith" in 
> your reason you'd have no basis for any belief or action.  It's just a 
> rhetorical trick to insert "faith”.

Not in theology or metaphysics. 

You could say that once we believe in the truth of some axioms, we 
automatically believe in the consistency of the axioms. And you would be 
correct.

Re: INDEXICAL Computationalism

2018-02-28 Thread Telmo Menezes
On Tue, Feb 27, 2018 at 7:20 PM, Brent Meeker  wrote:
>
>
> On 2/27/2018 2:42 AM, Telmo Menezes wrote:
>
> Depends on what you mean by "proof", we sentence people to prison based on
> proof of their crime.  Not all proof is mathematical.  And mathematical
> proof is only relative to the axioms.
>
> I think that this is one of those cases where the term is overloaded.
> I would argue that, for most people, proof has a strong connotation of
> "case closed". Mathematical proof is the only domain where this is
> actually true.
>
>
> But "true" in mathematics is a marker you attach to some axioms and then use
> rules of inference which preserve the marker.  It only means "corresponding
> to a fact" if the axioms correspond to facts.

Sure, I agree.

In this list we deal with some topics that stretch these boundaries,
to say the least. For example, if mind=computation (an assumption, but
a common one these days), then the mathematical reality becomes harder
to distinguish from the real reality (I'm sure Bruno is going to have
a stroke with this simplification hehe).

> Everywhere else it is relative: it means sufficient
> evidence for some course of action to be taken. I suspect that judges
> and lawyers like the term because it makes everyone sleep better at
> night.
>
>
> I'm sure they are glad that it is usually the jury's task to say what the
> facts are.  But in a criminal case the jury must unanimously agree on the
> facts of crime beyond a reasonable doubt.  If you've ever served on a jury
> (I've been on four and served as foreman once) you know the jurors take
> their task very seriously.

I have no doubt they do, but my only experience of that is from seeing
it in the movies. Here in Europe it doesn't work like that. There are
no juries except in extremely rare circumstances -- judges decide on
the verdict. In any case, I didn't mean this in a negative way. I am
the black sheep in a family where people tend to go to law school, and
some of the people I love and respect the most are involved with the
legal system. I just meant that they have to deal with
social/political considerations that are necessary but foreign to the
scientific attitude. And this is one of the factors that leads to the
overloading of the meaning of "proof" in common usage.

> I never see contemporary scientists using the term, only
> science journalists. Again, this makes sense: the general public likes
> to feel that science is settling matters. I think it is anti-pedagogic
> to talk about scientific theories being "proven", because it conveys
> the wrong idea about what science is and what science does. For a
> serious scientist, nothing is ever settled and no cases are closed.
> There are strong hypothesis at a given time, there are effective
> models for certain domains, and one does the best one can with them.
> The lack of this "case closed" attitude is precisely what makes
> science such a magnificent endeavour, and why it shines so bright
> above the certainties of the religious fundamentalists and the
> ideologues.
>
>
> I agree.  But it is also used by those who find scientific findings
> inconvenient and obfuscate by saying the findings of science are not
> conclusive and nothing is proven.   They obscure the point that one must act,
> and science is the best way to inform the action.

Yes, I understand that. I would argue that if we are to keep our
liberal democracies, we must insist on educating people and telling
the truth (lower-case t) to the best of our abilities, even if it leads
to sub-optimal outcomes in the short term. The price for projecting
certainties that do not exist is the corruption of science itself. We
need serious science in the long term, and I believe that nothing is
worth risking its corruption, even if it might look otherwise from
where we stand.

> So what do you find more convincing:  an axiomatic proof that God exists,
> e.g. St Anselm's or Goedel's, or the mere empirical absence of evidence?

In these proofs, God = Totality / Ultimate Reality / The Whole
Shebang. They don't mention commandments, or talking snakes, or burning
bushes. I think you are proposing a false equivalence.

Telmo.

> Brent
>
>
>

