MGA revisited paper

2014-08-10 Thread Russell Standish
As promised a long, long time ago, I now have a draft of my "MGA revisited"
paper for critical comment. I have uploaded this to my blog, which
gives people the ability to attach comments.

http://www.hpcoders.com.au/blog/?p=73

Whilst I'm happy that I now understand the issue, I'm still not happy with
how I've expressed it - the text could still do with some work.

So let the games begin!

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-10 Thread meekerdb

On 8/10/2014 3:38 PM, Russell Standish wrote:

As promised a long, long time ago, I now have a draft of my "MGA revisited"
paper for critical comment. I have uploaded this to my blog, which
gives people the ability to attach comments.

http://www.hpcoders.com.au/blog/?p=73

Whilst I'm happy that I now understand the issue, I'm still not happy with
how I've expressed it - the text could still do with some work.

So let the games begin!


I went to your blog and I found:

In this paper, we reexamine Bruno Marchal's Movie Graph
Argument, which demonstrates a basic incompatibility between
computationalism and materialism. We discover that the incompatibility
is only manifest in singular classical-like universes. If we accept
that we live in a Multiverse, then the incompatibility goes away, but
in that case another line of argument shows that with
computationalism, fundamental, or primitive materiality has no causal
influence on what is observed, which must be derivable from basic
arithmetic properties.

But I didn't find "this paper"?

Brent



Re: MGA revisited paper

2014-08-10 Thread Russell Standish
Apologies to everybody. For some reason, when I clicked "publish",
Wordpress posted an earlier draft of the post, not the most recent one
I was working on.

I have now restored the correct version of the post - follow the link
"Draft paper here" to find the paper.

Cheers


On Sun, Aug 10, 2014 at 08:08:55PM -0700, meekerdb wrote:
> On 8/10/2014 3:38 PM, Russell Standish wrote:
> >As promised a long, long time ago, I now have a draft of my "MGA revisited"
> >paper for critical comment. I have uploaded this to my blog, which
> >gives people the ability to attach comments.
> >
> >http://www.hpcoders.com.au/blog/?p=73
> >
> >Whilst I'm happy that I now understand the issue, I'm still not happy with
> >how I've expressed it - the text could still do with some work.
> >
> >So let the games begin!
> >
> I went to your blog and I found:
> 
> In this paper, we reexamine Bruno Marchal's Movie Graph
> Argument, which demonstrates a basic incompatibility between
> computationalism and materialism. We discover that the incompatibility
> is only manifest in singular classical-like universes. If we accept
> that we live in a Multiverse, then the incompatibility goes away, but
> in that case another line of argument shows that with
> computationalism, fundamental, or primitive materiality has no causal
> influence on what is observed, which must be derivable from basic
> arithmetic properties.
> 
> But I didn't find "this paper"?
> 
> Brent
> 

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-11 Thread Bruno Marchal


On 11 Aug 2014, at 06:42, Russell Standish wrote:


Apologies to everybody. For some reason, when I clicked "publish",
Wordpress posted an earlier draft of the post, not the most recent one
I was working on.

I have now restored the correct version of the post - follow the link
"Draft paper here" to find the paper.



I got it. I will read it.

...

It looks now as though I have lost the ability to read my mail.
Apparently someone deleted my password on my ULB account. It might
take some time before I can read my mail again.


Sorry. It is a good thing that I got your text before this happened.
I might soon be unable to send messages, too.


Bruno








http://iridia.ulb.ac.be/~marchal/





Re: MGA revisited paper

2014-08-11 Thread LizR
Got it, thanks. Not too long so I will be able to read it in the near
future :-)

I hope that is just an honest mistake, Bruno, and no one has been messing
with your email deliberately. Do you have another email you can use? (e.g.
a GMail one)




Re: MGA revisited paper

2014-08-11 Thread LizR
I have never got this idea of "counterfactual correctness". It seems to me
that the argument goes ...

Assume computational process A is conscious
Take process B, which replays A - B passes through the same machine states
as A, but it doesn't work them out, it's driven by a recording of A - B
isn't conscious because it isn't counterfactually correct.

I can't see how this works. (Except insofar as if we assume consciousness
doesn't supervene on material processes, then neither A nor B is conscious,
they are just somehow attached to conscious experiences generated
elsewhere, maybe by a UD.)
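
To pin down what I mean, here is a toy sketch in Python (the doubling
"program" and the recorded trace are just made-up illustrations, not anything
from Russell's paper or from Bruno's argument):

def run_A(initial_state, steps):
    """A genuinely computes each next state from the previous one."""
    state = initial_state
    trace = [state]
    for _ in range(steps):
        state = 2 * state          # the actual computation step
        trace.append(state)
    return trace

def run_B(recorded_trace):
    """B ignores the rule and simply replays A's recorded states in order."""
    return list(recorded_trace)

trace_A = run_A(1, 5)              # A run on the actual input
trace_B = run_B(trace_A)           # B driven purely by the recording
print(trace_A == trace_B)          # True: identical states on the actual history

# Counterfactual: had the input been 3, A would still follow the rule,
# but B would blindly replay the old recording.
print(run_A(3, 5))                 # [3, 6, 12, 24, 48, 96]
print(run_B(trace_A))              # [1, 2, 4, 8, 16, 32] - not counterfactually correct

On the history that actually happens, the two are state-for-state identical,
which is exactly why I can't see what extra ingredient the counterfactuals
are supposed to supply.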





Re: MGA revisited paper

2014-08-11 Thread LizR
Having now read the paper, ISTM that the "counterfactual" part of the
argument is the only part that I *really* don't get. Or rather ISTM that it
demonstrates that consciousness can't supervene on physical computational
states, because those states can't know anything about these
counterfactuals, which by definition don't happen. Then again, I also have
some trouble with the multiverse part. A MV "is" a quantum computer? How do
we know that, without even knowing the laws of physics? Is this something
to do with Feynman's idea about a QC as something that could perform exact
physical simulations? (if I got that right)



Re: MGA revisited paper

2014-08-11 Thread Telmo Menezes
On Tue, Aug 12, 2014 at 12:22 AM, LizR  wrote:

> Having now read the paper,
>

Ok, I finished it too. Russell's "state of the art" is a very nice
introduction to the MGA and Maudlin's argument. Very clear and concise,
helped me organize my thoughts on these.


> ISTM that the "counterfactual" part of the argument is the only part that
> I *really* don't get. Or rather ISTM that it demonstrates that
> consciousness can't supervene on physical computational states, because
> those states can't know anything about these counterfactuals, which by
> definition don't happen.
>

I think the point here is that if we assume consciousness supervenes on
matter, then we are forced to reject comp, by reductio ad absurdum. Any
computation supported by matter on which consciousness would supervene
could be replaced with a dumb playback of the sequence of states produced
by the computation (contradicting comp). In the Klara / Olympia case,
Olympia could be made compatible with comp by being replaceable by Klara to
deal with counterfactuals that would never happen. Enabling / disabling the
Olympia / Klara connection would turn consciousness on or off
(contradicting primitive matter, because the possibility of enabling
material computations that would never happen would determine the presence
or absence of consciousness).
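
To make the playback idea concrete for myself, here is a toy sketch (my own
illustration with invented names; only the general idea of a replayer backed
by a counterfactual handler comes from Maudlin's construction):

def compute_step(state, inp):
    """Stand-in for the real computation (here it just adds the input)."""
    return state + inp

class Klara:
    """Genuinely computes every step."""
    def step(self, state, inp):
        return compute_step(state, inp)

class Olympia:
    """Replays a recorded run, deferring to Klara only if the input deviates."""
    def __init__(self, recorded_inputs, recorded_states, klara=None):
        self.recorded = list(zip(recorded_inputs, recorded_states))
        self.klara = klara
        self.pos = 0

    def step(self, state, inp):
        rec_inp, rec_state = self.recorded[self.pos]
        self.pos += 1
        if inp == rec_inp:
            return rec_state                    # dumb playback, no computation
        if self.klara is not None:
            return self.klara.step(state, inp)  # counterfactual handled by Klara
        raise RuntimeError("counterfactual input and no Klara attached")

On the recorded history, attaching or detaching Klara changes nothing about
the states Olympia actually passes through, which is why the presence or
absence of the counterfactual machinery ends up doing all the work in the
argument.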

I am writing this to help organize my own thoughts, and hope to be
corrected if I am making a mistake.


> Then again, I also have some trouble with the multiverse part. A MV "is" a
> quantum computer? How do we know that, without even knowing the laws of
> physics?
>

I think Russell is referring to a MWI multiverse which is necessarily a
quantum computer (we are assuming the wave equation with MWI, so the laws
of physics are known).

I am not convinced that the MWI + the anthropic principle is equivalent to
the subset of the universal dovetailer computations that supports all
possible human experiences. I am also not convinced that the set of all
possible human experiences is finite. Russell, could you elaborate on these?
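
For anyone who wants "universal dovetailer" made concrete: it just
interleaves the execution of every program, a few steps at a time, so that
each one eventually runs arbitrarily far. A minimal sketch in Python (the
enumeration of "programs" here is only a placeholder, not Bruno's actual
construction):

from itertools import count, islice

def program(i):
    """Placeholder for the i-th program: a generator yielding its successive states."""
    def gen():
        state = 0
        while True:
            state += i + 1           # stand-in for one computation step
            yield (i, state)
    return gen()

def dovetail():
    """Run program 0 one step, then programs 0-1, then 0-2, and so on forever."""
    running = []
    for n in count():
        running.append(program(n))   # start a new program each round
        for p in running:
            yield next(p)            # advance every started program by one step

print(list(islice(dovetail(), 10)))  # every program gets unboundedly many steps eventually

My doubt above is whether the MWI multiverse picks out exactly the subset of
this enumeration that supports observers.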

(I am going to comment on the blog post too, in a rather redundant way)

Cheers
Telmo.



Re: MGA revisited paper

2014-08-11 Thread meekerdb

On 8/11/2014 4:03 PM, LizR wrote:
I have never got this idea of "counterfactual correctness". It seems to me that the 
argument goes ...


Assume computational process A is conscious
Take process B, which replays A - B passes through the same machine states as A, but it 
doesn't work them out, it's driven by a recording of A - B isn't conscious because it 
isn't counterfactually correct.


I can't see how this works. (Except insofar as if we assume consciousness doesn't 
supervene on material processes, then neither A nor B is conscious, they are just 
somehow attached to conscious experiences generated elsewhere, maybe by a UD.)


It doesn't work, because it ignores the fact that consciousness is about something.  It 
can only exist in the context of thoughts (machine states and processes) referring to a 
"world"; being part of a representational and predictive model.  Without the 
counterfactuals, it's just a sequence of states and not a model of anything.  But in order 
that it be a model it must interact or have interacted in the past in order that the model 
be causally connected to the world.  It is this connection that gives meaning to the 
model. Because Bruno is a logician he tends to think of consciousness as performing 
deductive proofs, executing a proof in the sense that every computer program is a proof.  
He models belief as proof.  But this overlooks where the meaning of the program comes 
from.  People that want to deny computers can be conscious point out that the meaning 
comes from the programmer.  But it doesn't have to.  If the computer has goals and can 
learn and act within the world then its internal modeling and decision processes get 
meaning through their potential for actions.


This is why I don't agree with the conclusion drawn from step 8.  I think the requirement 
to be counterfactually correct implies that a whole world, a physics, needs to be simulated 
too, or else the Movie Graph or Klara need to be able to interact with the world to supply 
the meaning to their program.  But if the Movie Graph computer is a counterfactually 
correct simulation of a person within a simulated world, there's no longer a "reversal".  
Simulated consciousness exists in simulated worlds - dog bites man.


Brent



Re: MGA revisited paper

2014-08-11 Thread meekerdb

On 8/11/2014 5:13 PM, Telmo Menezes wrote:




On Tue, Aug 12, 2014 at 12:22 AM, LizR wrote:

Having now read the paper,


Ok, I finished it too. Russell's "state of the art" is a very nice introduction to the 
MGA and Maudlin's argument. Very clear and concise, helped me organize my thoughts on these.


ISTM that the "counterfactual" part of the argument is the only part that I 
/really/
don't get. Or rather ISTM that it demonstrates that consciousness can't 
supervene on
physical computational states, because those states can't know anything 
about these
counterfactuals, which by definition don't happen.


I think the point here is that if we assume consciousness supervenes on matter, then we 
are forced to reject comp, by reductio ad absurdum.


I think the mistake, the reductio, is the assumption that consciousness can supervene on 
this piece of matter without reference to the world in which the matter exists.  This 
fallacy is encouraged by considering conscious thoughts to be about abstractions like 
arithmetic and dreams (as though dreams did not derive from reality).


Brent



Re: MGA revisited paper

2014-08-11 Thread LizR
On 12 August 2014 12:48, meekerdb  wrote:

> On 8/11/2014 4:03 PM, LizR wrote:
>
>> I have never got this idea of "counterfactual correctness". It seems to
>> me that the argument goes ...
>>
>> Assume computational process A is conscious
>> Take process B, which replays A - B passes through the same machine
>> states as A, but it doesn't work them out, it's driven by a recording of A
>> - B isn't conscious because it isn't counterfactually correct.
>>
>> I can't see how this works. (Except insofar as if we assume consciousness
>> doesn't supervene on material processes, then neither A nor B is conscious,
>> they are just somehow attached to conscious experiences generated
>> elsewhere, maybe by a UD.)
>>
>
> It doesn't work, because it ignores the fact that consciousness is about
> something. It can only exist in the context of thoughts (machine states and
> processes) referring to a "world"; being part of a representational and
> predictive model.  Without the counterfactuals, it's just a sequence of
> states and not a model of anything.  But in order that it be a model it
> must interact or have interacted in the past in order that the model be
> causally connected to the world.  It is this connection that gives meaning
> to the model.


What differentiates A and B, given that they use the same machine states?
How can A be more about something than B? Or to put it another way, what is
the "meaning" that makes A conscious, but not B?


> Because Bruno is a logician he tends to think of consciousness as
> performing deductive proofs, executing a proof in the sense that every
> computer program is a proof.  He models belief as proof.  But this
> overlooks where the meaning of the program comes from.  People that want to
> deny computers can be conscious point out that the meaning comes from the
> programmer.  But it doesn't have to.  If the computer has goals and can
> learn and act within the world then its internal modeling and decision
> processes get meaning through their potential for actions.
>
> This is why I don't agree with the conclusion drawn from step 8.  I think
> the requirement to be counterfactually correct implies that a whole world, a
> physics, needs to be simulated too, or else the Movie Graph or Klara need
> to be able to interact with the world to supply the meaning to their
> program.  But if the Movie Graph computer is a counterfactually correct
> simulation of a person within a simulated world, there's no longer a
> "reversal".  Simulated consciousness exists in simulated worlds - dog bites
> man.
>
Are you assuming that the world with which the MG interacts is itself
digitally emulable? If so, doesn't Bruno's argument go through for the
whole emulated world, if not for a subcomponent of it ("Klara")? ISTM
you're saying that a conscious being has to interact with a world - which
may be true (people go mad in sensory isolation eventually). But if the
world is emulable then the MGA can be applied to it as a whole. Or at least
I remember Bruno saying that the substitution level and region to be
emulated weren't important to the argument, as long as there is some level
and region in which it holds. I'm sure he said that it might involve
emulating the world, or a chunk of the universe, but that the argument
still goes through.

Or did I misremember that, or did he say that, but there's a flaw in his
argument?



Re: MGA revisited paper

2014-08-11 Thread meekerdb

On 8/11/2014 7:29 PM, LizR wrote:
On 12 August 2014 12:48, meekerdb wrote:


On 8/11/2014 4:03 PM, LizR wrote:

I have never got this idea of "counterfactual correctness". It seems to
me that the argument goes ...

Assume computational process A is conscious
Take process B, which replays A - B passes through the same machine 
states as A,
but it doesn't work them out, it's driven by a recording of A - B isn't
conscious because it isn't counterfactually correct.

I can't see how this works. (Except insofar as if we assume 
consciousness
doesn't supervene on material processes, then neither A nor B is 
conscious, they
are just somehow attached to conscious experiences generated elsewhere, 
maybe by
a UD.)


It doesn't work, because it ignores the fact that consciousness is about 
something.
It can only exist in the context of thoughts (machine states and processes)
referring to a "world"; being part of a representational and predictive 
model.
 Without the counterfactuals, it's just a sequence of states and not a 
model of
anything.  But in order that it be a model it must interact or have 
interacted in
the past in order that the model be causally connected to the world.  It is 
this
connection that gives meaning to the model.


What differentiates A and B, given that they use the same machine states? How can A be 
more about something than B? Or to put it another way, what is the "meaning" that makes 
A conscious, but not B?


A makes decisions in response to the world.  Although, ex hypothesi, the world is 
repeating its inputs and A is repeating his decisions. Note that this assumes QM doesn't 
apply at the computational level of A.  In the argument we're asked to consider a dream so 
that we're led to overlook the fact that the meaning of A's internal processes actually 
derive from A's interaction with a world.  Imagine A as being born and living in a sensory 
deprivation tank - will A be conscious?  I think not.  But in Bruno's and Maudlin's 
thought experiments A might be: A could be aware of Peano's axioms and could prove all 
provable theorems plus Gödel's incompleteness theorem.




Because Bruno is a logician he tends to think of consciousness as performing
deductive proofs, executing a proof in the sense that every computer 
program is a
proof.  He models belief as proof.  But this overlooks where the meaning of 
the
program comes from.  People that want to deny computers can be conscious 
point out
that the meaning comes from the programmer.  But it doesn't have to.  If the
computer has goals and can learn and act within the world then its internal 
modeling
and decision processes get meaning through their potential for actions.

This is why I don't agree with the conclusion drawn from step 8.  I think 
the
requirement to be counterfactually correct implies that a whole world, a 
physics, needs
to be simulated too, or else the Movie Graph or Klara need to be able to 
interact
with the world to supply the meaning to their program.  But if the Movie 
Graph
computer is a counterfactually correct simulation of a person within a 
simulated
world, there's no longer a "reversal".  Simulated consciousness exists in 
simulated
worlds - dog bites man.

Are you assuming that the world with which the MG interacts is itself digitally 
emulable? If so, doesn't Bruno's argument go through for the whole emulated world, if 
not for a subcomponent of it ("Klara") ? ISTM you're saying that a conscious being has 
to interact with a world - which may be true (people go mad in sensory isolation 
eventually). But if the world is emulable then the MGA can be applied to it as a whole.


Right.

Or at least I remember Bruno saying that the substitution level and region to be 
emulated weren't important to the argument, as long as there is some level and region in 
which it holds. I'm sure he said that it might involve emulating the world, or a chunk 
of the universe, but that the argument still goes through.


Or did I misremember that, or did he say that, but there's a flaw in his 
argument?


It's not exactly a flaw.  He always says, sure just make the simulation more 
comprehensive, include more of the environment, even the whole universe.  Which is OK, but 
then when you think about the reversal of physics and psychology you see that it is the 
physics here, in the non-simulated world, which has been replaced by the psychology PLUS 
physics in the simulated world.  If I say I can replace you with a simulation - I'll 
probably be greeted with skepticism.  But if I say I can replace you with a simulation of 
you in a simulation of the world - well then it's not so clear what I mean or how hard it 
will be.


Brent




Re: MGA revisited paper

2014-08-11 Thread LizR
On 12 August 2014 15:12, meekerdb  wrote:

>  A makes decisions in response to the world.  Although, ex hypothesi, the
> world is repeating its inputs and A is repeating his decisions.
>

(I assume you mean B is repeating?)


> Note that this assumes QM doesn't apply at the computational level of A.
>

Well, I guess a physical UD would be made robust against quantum
uncertainty, like all computers, but why do we need to assume QM doesn't apply?


> In the argument we're asked to consider a dream so that we're led to
> overlook the fact that the meaning of A's internal processes actually
> derive from A's interaction with a world.
>

This is what I don't see. Why do A's internal processes have meaning, while
B's don't - given that they're physically identical?

> Or at least I remember Bruno saying that the substitution level and region
> to be emulated weren't important to the argument, as long as there is some
> level and region in which it holds. I'm sure he said that it might involve
> emulating the world, or a chunk of the universe, but that the argument
> still goes through.
>
>  Or did I misremember that, or did he say that, but there's a flaw in his
> argument?
>
> It's not exactly a flaw.  He always says, sure just make the simulation
> more comprehensive, include more of the environment, even the whole
> universe.  Which is OK, but then when you think about the reversal of
> physics and psychology you see that it is the physics here, in the
> non-simulated world, which has been replaced by the psychology PLUS physics
> in the simulated world.  If I say I can replace you with a simulation -
> I'll probably be greeted with skepticism.  But if I say I can replace you
> with a simulation of you in a simulation of the world - well then it's not
> so clear what I mean or how hard it will be.
>

I'm not sure I follow you here. Why does making the simulation bigger
invalidate the argument? Is there a cut-off point?



Re: MGA revisited paper

2014-08-11 Thread meekerdb

On 8/11/2014 8:26 PM, LizR wrote:
On 12 August 2014 15:12, meekerdb mailto:meeke...@verizon.net>> 
wrote:


A makes decisions in response to the world. Although, ex hypothesi, the 
world is
repeating its inputs and A is repeating his decisions.


(I assume you mean B is repeating?)


Sorry, right I meant B.


Note that this assumes QM doesn't apply at the computational level of A.


Well, I guess a physical UD would be made robust against quantum uncertainty, like all 
computers, but why do we need to assume QM doesn't apply?


The argument assumes it doesn't apply, so that the computation can be deterministic. I 
don't know that it affects the argument, but worries me a little that we make this 
unrealistic assumption; especially if we have to include a whole 'world context' for the MG 
simulation.



In the argument we're asked to consider a dream so that we're led to 
overlook the
fact that the meaning of A's internal processes actually derive from A's 
interaction
with a world.


This is what I don't see. Why do A's internal processes have meaning, while B's don't - 
given that they're physically identical?


B's have meaning too, but it is derivative meaning because the meanings are copies of A's 
and A's refer to a world.  So it's an unwarranted conclusion to say, see B is conscious 
and there's no physics going on.  There's plenty of physics going on in the past that 
causally connects B to A's experience.  Just because it's not going on at the moment B is 
supposed to be experiencing it isn't determinative.  Real QM physics can require 
counterfactual correctness in the past (e.g. Wheeler's quantum erasure, Elitzur and 
Dolev's quantum liar's paradox).



Or at least I remember Bruno saying that the substitution level and region 
to be
emulated weren't important to the argument, as long as there is some level 
and
region in which it holds. I'm sure he said that it might involve emulating 
the
world, or a chunk of the universe, but that the argument still goes through.

Or did I misremember that, or did he say that, but there's a flaw in his 
argument?

It's not exactly a flaw.  He always says, sure just make the simulation more
comprehensive, include more of the environment, even the whole universe.  
Which is
OK, but then when you think about the reversal of physics and psychology 
you see
that it is the physics here, in the non-simulated world, which has been 
replaced by
the psychology PLUS physics in the simulated world.  If I say I can replace 
you with
a simulation - I'll probably be greeted with skepticism.  But if I say I 
can replace
you with a simulation of you in a simulation of the world - well then it's 
not so
clear what I mean or how hard it will be.


I'm not sure I follow you here. Why does making the simulation bigger invalidate the 
argument? Is there a cut-off point?


I don't know about a cut-off.  The argument is a reductio.  The conclusion Bruno makes is 
that no physical process is necessary to support consciousness, consciousness can be 
instantiated in a Turing machine simulation.  But my argument is that the simulation must 
also simulate a world that the consciousness interacts with, is conscious *of*, that a 
physical world is necessary for consciousness.  If it's a simulated consciousness, then it 
can be a simulated physics but it has to be some physics.


Brent



Re: MGA revisited paper

2014-08-11 Thread LizR
On 12 August 2014 15:50, meekerdb  wrote:

>  On 8/11/2014 8:26 PM, LizR wrote:
>
>  Well, I guess a physical UD would be made robust against quantum
> uncertainty, like all computers, but why do we need to assume QM doesn't apply?
>
> The argument assumes it doesn't apply, so that the computation can be
> deterministic. I don't know that it affects the argument, but worries me a
> little that we make this unrealistic assumption; especially if we have to
> include a whole 'world context' for the MG simulation.
>

Ah, I see. One of the assumptions of comp is that consciousness is a
classical computation. At least I think that's what it means to say that
the Church-Turing thesis applies. I suppose a question here is whether QM
can introduce some "magic" that allows it to create consciousness from a
purely materialistic basis. If so then there's no need for comp because
consciousness isn't classically emulable... yes?

>   This is what I don't see. Why do A's internal processes have meaning,
>> while B's don't - given that they're physically identical?
>>
>   B's have meaning too, but it is derivative meaning because the meanings
> are copies of A's and A's refer to a world.  So it's an unwarranted
> conclusion to say, see B is conscious and there's no physics going on.
> There's plenty of physics going on in the past that causally connects B to
> A's experience.  Just because it's not going on at the moment B is supposed
> to be experiencing it isn't determinative.  Real QM physics can require
> counterfactual correctness in the past (e.g. Wheeler's quantum erasure,
> Elitzur and Dolev's quantum liar's paradox).
>

Well, as I've mentioned previously I think time symmetry may sort out those
awkward retroactive quantum measurements. But anyway, I guess this is
putting the schrodinger's cat before the horse, in that comp only assumes
classical computation and attempts to derive a quantum world from it. So I
guess we can't necessarily assume real QM physics, or at least not unless
we've shown comp to be based on false premises or internally inconsistent,
or have a rival theory of consciousness arising naturally from qm and
materialism, or some other good reason to do so. I think what I'm trying to
say here is that to assume comp must work with real physics is to assume
from the start that there is no reversal.

>   I'm not sure I follow you here. Why does making the simulation bigger
> invalidate the argument? Is there a cut-off point?
>
> I don't know about a cut-off.  The argument is a reductio.  The conclusion
> Bruno makes is that no physical process is necessary to support
> consciousness,
>

OK


> consciousness can be instantiated in a Turing machine simulation.
>

Sorry to split the sentence, but I must admit I thought that latter part
was his initial assumption, rather than his conclusion?


> But my argument is that the simulation must also simulate a world that the
> consciousness interacts with, is conscious *of*, that a physical world is
> necessary for consciousness.  If it's a simulated consciousness, then it
> can be a simulated physics but it has to be some physics.
>

Right, yes, I see. Or I think I see. That's implying that the comp argument
is assuming what it sets out to show, that is, it sets out to show that
physics can be derived from consciousness as computation, but if it has to
introduce physics to show this, then the argument has become circular. So
if interactions with an environment are necessary for consciousness to
exist (as part of the definition of consciousness) then the argument is
necessarily circular. The question is whether the interaction is necessary,
or incidental - "incidental" would mean that consciousness has arisen in a
physical world through evolution, and hence is highly specialised as an
agent interacting with that world, but it could at least in theory arise
some other way (e.g. inside a computer). Although it's hard to imagine how
any conscious being could learn anything useful without interacting with
some sort of world - it would sure be a blank slate otherwise. So I guess
the question boils down to: is a blank slate consciousness - one that isn't
aware of anything (except its own existence, I guess) - possible? Or to put
it yet another way, is Descartes right that "je pense donc je suis" ("I think,
therefore I am"), or isn't that enough?

Which I have to admit I don't know the answer to.



Re: MGA revisited paper

2014-08-11 Thread meekerdb

On 8/11/2014 9:27 PM, LizR wrote:
On 12 August 2014 15:50, meekerdb wrote:


On 8/11/2014 8:26 PM, LizR wrote:

Well, I guess a physical UD would be made robust against quantum 
uncertainty, like
all computers, but why do we need to assume QM doesn't apply?

The argument assumes it doesn't apply, so that the computation can be 
deterministic.
I don't know that it affects the argument, but worries me a little that we 
make this
unrealistic assumption; especially if we have to include a whole 'world 
context' for
the MG simulation.


Ah, I see. One of the assumptions of comp is that consciousness is a classical 
computation. At least I think that's what it means to say that the Church-Turing thesis 
applies. I suppose a question here is whether QM can introduce some "magic" that allows 
it to create consciousness from a purely materialistic basis. If so then there's no need 
for comp because consciousness isn't classically emulable... yes?


Although a quantum computer can compute some things much faster than a classical computer, 
it still can't compute things that are Turing uncomputable, so I don't think it provides 
that kind of magic.  I was thinking more of the fact that the recorded inputs to B and the 
response to the projection of the movie onto the graph will not be perfectly 
deterministic, but only with high statistical probability.  Also, in QM it generally makes 
a difference to the evolution of the system whether other states are available even if 
they are never occupied.



This is what I don't see. Why do A's internal processes have meaning, 
while B's
don't - given that they're physically identical?


B's have meaning too, but it is derivative meaning because the meanings are 
copies
of A's and A's refer to a world.  So it's an unwarranted conclusion to say, 
see B is
conscious and there's no physics going on.  There's plenty of physics going 
on in
the past that causally connects B to A's experience.  Just because it's not 
going on
at the moment B is supposed to be experiencing it isn't determinative.  
Real QM
physics can require counterfactual correctness in the past (e.g. Wheeler's 
quantum
erasure, Elitzur and Dolev's quantum liar's paradox).


Well, as I've mentioned previously I think time symmetry may sort out those awkward 
retroactive quantum measurements. But anyway, I guess this is putting the schrodinger's 
cat before the horse, in that comp only assumes classical computation and attempts to 
derive a quantum world from it. So I guess we can't necessarily assume real QM physics, 
or at least not unless we've shown comp to be based on false premises or internally 
inconsistent, or have a rival theory of consciousness arising naturally from qm and 
materialism, or some other good reason to do so. I think what I'm trying to say here is 
that to assume comp must work with real physics is to assume from the start that there 
is no reversal.


Well there's also the question of whether comp and the UD solve the hard problem any 
better than psychophysical parallelism. Pierz did a good job of examining this and I made 
some comments on his post.  I would like to look at comp+UD as just another scientific 
hypothesis which we will adopt when it makes some surprising prediction which is proved 
out by tests.  Obviously getting some surprising, testable prediction out of it is likely 
to be very difficult.  But unlike Bruno I'm not much persuaded by logical inference from 
logic, Church-Turing, or Peano arithmetic because I think they aren't "The Truth" but just 
models we use in our thinking.  Just reflect on how all logicians and philosophers would 
have said, "No object can be in two different places at the same time. It's just logic." - 
before quantum mechanics.



I'm not sure I follow you here. Why does making the simulation bigger 
invalidate
the argument? Is there a cut-off point?

I don't know about a cut-off.  The argument is a reductio.  The conclusion 
Bruno
makes is that no physical process is necessary to support consciousness,


OK

consciousness can be instantiated in a Turing machine simulation.


Sorry to split the sentence, but I must admit I thought that latter part was his initial 
assumption, rather than his conclusion?


The initial assumption is consciousness can be instantiated by a physical computation (one 
that replicates the I/O of your neurons), but step 8 is to show it must be independent of 
the physical computation and can be instantiated by an abstraction.




Re: MGA revisited paper

2014-08-12 Thread LizR
On 12 August 2014 17:17, meekerdb  wrote:

>  On 8/11/2014 9:27 PM, LizR wrote:
>
>  On 12 August 2014 15:50, meekerdb  wrote:
>
>>  On 8/11/2014 8:26 PM, LizR wrote:
>>
>>  Well, I guess a physical UD would be made robust against quantum
>> uncertainty, like all computers, but why do we need to assume QM doesn't apply?
>>
>>  The argument assumes it doesn't apply, so that the computation can be
>> deterministic. I don't know that it affects the argument, but worries me a
>> little that we make this unrealistic assumption; especially if we have to
>> include a whole 'world context' for the MG simulation.
>>
>
>  Ah, I see. One of the assumptions of comp is that consciousness is a
> classical computation. At least I think that's what it means to say that
> the Church-Turing thesis applies. I suppose a question here is whether QM
> can introduce some "magic" that allows it to create consciousness from a
> purely materialistic basis. If so then there's no need for comp because
> consciousness isn't classically emulable... yes?
>
> Although a quantum computer can compute some things much faster than a
> classical computer, it still can't compute things that are Turing
> uncomputable, so I don't think it provides that kind of magic.  I was
> thinking more of the fact that the recorded inputs to B and the response to
> the projection of the movie onto the graph will not be perfectly
> deterministic, but only with high statistical probability.  Also, in QM it
> generally makes a difference to the evolution of the system whether other
> states are available even if they are never occupied.
>
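
A textbook two-path illustration of that last point, in the spirit of an
interferometer (nothing here is specific to the MGA; the amplitudes are
made-up round numbers):

\[
P_{\text{both paths available}} = \lvert a_1 + a_2 \rvert^{2},
\qquad
P_{\text{second path removed}} = \lvert a_1 \rvert^{2}.
\]

With balanced amplitudes \(a_1 = a_2 = \tfrac{1}{2}\), the first gives 1 and
the second gives 1/4: removing a path that is "available but never occupied"
still changes the statistics.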

OK, I think I follow. This would appear to assume that physics precedes
computation, in an explanatory sense, which I would say means it probably
assumes comp is false as a premise?

Well there's also the question of whether comp and the UD solve the hard
> problem any better than psychophysical parallelism.
>

Yes indeed, I haven't really got to grips with that. I think the only way
in which comp tries to tackle the hard problem is (something to do with)
the fact that an infinite number of computations are involved. I must admit
I have great difficulty even thinking about the hard problem, my tendency
is to dismiss it as either "too mystical" or "too nonexistent". But I do
feel there is an ineffable quality to consciousness. (Or is the word
numinous?) On days which start with a 'T', at least.


> Pierz did a good job of examining this and I made some comments on his
> post.  I would like to look at comp+UD as just another scientific
> hypothesis which we will adopt when it makes some surprising prediction
> which is proved out by tests.  Obviously getting some surprising, testable
> prediction out of it is likely to be very difficulty.
>

Yes.

(I suppose if one didn't have experience of consciousness, predictions of
incommunicable qualia and so on might be surprising...if something that
didn't have experience of consciousness could be surprised, at least. But
yes,)


> But unlike Bruno I'm not much persuaded by logical inference from logic,
> Church-Turing, or Peano arithmetic because I think they aren't "The Truth"
> but just models we use in our thinking.  Just reflect on how all logicians
> and philosophers would have said, "No object can be in two different places
> at the same time. It's just logic." - before quantum mechanics.
>

And indeed physicists. Although does QM say unequivocally that an electron,
say, can be in two places at the same time, or does that depend on the
model? (I know the wave function can be in lots of places at the same time,
of course).

>I don't know about a cut-off.  The argument is a reductio.  The
>> conclusion Bruno makes is that no physical process is necessary to support
>> consciousness,
>>
>
>  OK
>
>
>>  consciousness can be instantiated in a Turing machine simulation.
>>
>
>  Sorry to split the sentence, but I must admit I thought that latter part
> was his initial assumption, rather than his conclusion?
>
> The initial assumption is consciousness can be instantiated by a physical
> computation (one that replicates the I/O of your neurons), but step 8 is to
> show it must be independent of the physical computation and can be
> instantiated by an abstraction.
>

Yes. I'm only quibbling here, but I think you only need the bit before the
comma to make your point. (I think the part after the comma comes into his
argument at about step 6?).

>  So I guess the question boils down to: is a blank slate consciousness -
>> one that isn't aware of anything (except its own existence, I guess)
>> possible?
>>
>   It is in Bruno's conception.  It is MOST conscious because it can go
> anywhere from there, be anybody or any being.  That's why he thinks
> intelligence, which he deprecates as mere "competence", detracts from
> consciousness.  It has narrowed or directed consciousness.  As you can see
> that is quite different from my idea of consciousness as something that
> arose as a way for evolution to t

Re: MGA revisited paper

2014-08-12 Thread Bruno Marchal


On 11 Aug 2014, at 23:40, LizR wrote:

Got it, thanks. Not too long so I will be able to read it in the  
near future :-)


I hope that is just an honest mistake, Bruno, and no one has been  
messing with your email deliberately. Do you have another email you  
can use? (e.g. a GMail one)


Thanks Liz. It looks in order now. We have been hacked at IRIDIA. Yes,
I have gmail, but I don't like it, and try to avoid it as it makes a mess
of my posts. I use "Mail" on the Mac.


Bruno








On 11 August 2014 20:43, Bruno Marchal  wrote:

On 11 Aug 2014, at 06:42, Russell Standish wrote:

Apologies to everybody. For some reason, when I clicked "publish",
Wordpress posted an earlier draft of the post, not the most recent one
I was working on.

I have now restored the correct version of the post - follow the link
"Draft paper here" to find the paper.


I got it. I will read it.

...

It looks now that I have lost the ability to read my mails.
Apparently someone deleted my password at my ULB account. It might
take some time before I can read my mail again.


Sorry. It is a good thing that I got your text before this
happened. I might soon be unable to send messages, too.


Bruno






Cheers


On Sun, Aug 10, 2014 at 08:08:55PM -0700, meekerdb wrote:
On 8/10/2014 3:38 PM, Russell Standish wrote:
As long, long time promised, I now have a draft of my "MGA revisited"
paper for critical comment. I have uploaded this to my blog, which
gives people the ability to attach comments.

http://www.hpcoders.com.au/blog/?p=73

Whilst I'm happy I now understand the issue, I still not happy with
how I've expressed it - the text could still do with some work.

So let the games begin!

I went to your blog and I found:

In this paper, we reexamine Bruno Marchal's Movie Graph
Argument, which demonstrates a basic incompatibility between
computationalism and materialism. We discover that the incompatibility
is only manifest in singular classical-like universes. If we accept
that we live in a Multiverse, then the incompatibility goes away, but
in that case another line of argument shows that with
computationalism, fundamental, or primitive materiality has no causal
influence on what is observed, which must be derivable from basic
arithmetic properties.

But I didn't find "this paper"?

Brent


--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

Latest project: The Amoeba's Secret
(http://www.hpcoders.com.au/AmoebasSecret.html)



http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/




Re: MGA revisited paper

2014-08-12 Thread Bruno Marchal

Hi,

I think it is better I let you discuss a little bit.

Yes Russell made a nice introduction to this problematic.

Below, I just put a , which you might try to guess from my
previous post (notably to Brent) on this issue.




On 12 Aug 2014, at 02:48, meekerdb wrote:


On 8/11/2014 4:03 PM, LizR wrote:
I have never got this idea of "counterfactual correctness". It  
seems to be that the argument goes ...


Assume computational process A is conscious
Take process B, which replays A - B passes through the same machine  
states as A, but it doesn't work them out, it's driven by a  
recording of A - B isn't conscious because it isn't  
counterfactually correct.


I can't see how this works. (Except insofar as if we assume  
consciousness doesn't supervene on material processes, then neither  
A nor B is conscious, they are just somehow attached to conscious  
experiences generated elsewhere, maybe by a UD.)
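
A minimal toy sketch of the A-versus-B contrast described just above, for
concreteness (Python; the transition rule, the states and the inputs are
invented stand-ins, not anything from the argument itself):

    # Toy contrast between a process that computes (A) and one that merely
    # replays a recording of it (B). Everything here is illustrative.

    def step(state, inp):
        # A's transition rule: the next state is genuinely computed.
        return (state + inp) % 16

    def run_A(inputs, state=0):
        trace = [state]
        for i in inputs:
            state = step(state, i)
            trace.append(state)
        return trace

    def run_B(recorded_trace):
        # B just plays back A's recording: same states, no computation.
        return list(recorded_trace)

    inputs = [3, 5, 7]
    trace_A = run_A(inputs)
    trace_B = run_B(trace_A)
    assert trace_A == trace_B       # state-for-state identical runs

    # Counterfactual probe: change one input. A responds correctly,
    # B can only repeat the old recording.
    print(run_A([3, 9, 7]))         # [0, 3, 12, 3]  -- diverges at the changed step
    print(run_B(trace_A))           # [0, 3, 8, 15]  -- unchanged

The dispute in the thread is whether that difference in counterfactual
behaviour can matter to consciousness when the actually realised state
sequences are identical.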


It doesn't work, because it ignores the fact that consciousness is  
about something.  It can only exist in the context of thoughts  
(machine states and processes) referring to a "world"; being part of  
a representational and predictive model.  Without the  
counterfactuals, it's just a sequence of states and not a model of  
anything.  But in order that it be a model it must interact or have  
interacted in the past in order that the model be causally connected  
to the world.  It is this connection that gives meaning to the  
model. Because Bruno is a logician he tends to think of  
consciousness as performing deductive proofs, executing a proof in  
the sense that every computer program is a proof.  He models belief  
as proof.  But this overlooks where the meaning of the program comes  
from.






People that want to deny computers can be conscious point out that  
the meaning comes from the programmer.  But it doesn't have to.  If  
the computer has goals and can learn and act within the world then  
its internal modeling and decision processes get meaning through  
their potential for actions.


This is why I don't agree with the conclusion drawn from step 8.  I  
think the requirement to be counterfactually correct implies that a  
whole world, a physics, needs to be simulated too, or else the Movie  
Graph or Klara need to be able to interact with the world to supply  
the meaning to their program.  But if the Movie Graph computer is a  
counterfactually correct simulation of a person within a simulated  
world, there's no longer a "reversal".  Simulated consciousness  
exists in simulated worlds - dog bites man.



Bruno






Brent



http://iridia.ulb.ac.be/~marchal/





Re: MGA revisited paper

2014-08-12 Thread Telmo Menezes
On Tue, Aug 12, 2014 at 1:57 AM, meekerdb  wrote:

>  On 8/11/2014 5:13 PM, Telmo Menezes wrote:
>
>
>
>
> On Tue, Aug 12, 2014 at 12:22 AM, LizR  wrote:
>
>> Having now read the paper,
>>
>
>  Ok, I finished it too. Russell's "state of the art" is a very nice
> introduction to the MGA and Maudlin's argument. Very clear and concise,
> helped me organize my thoughts on these.
>
>
>>  ISTM that the "counterfactual" part of the argument is the only part
>> that I *really* don't get. Or rather ISTM that it demonstrates that
>> consciousness can't supervene on physical computational states, because
>> those states can't know anything about these counterfactuals, which by
>> definition don't happen.
>>
>
>  I think the point here is that if we assume consciousness supervenes on
> matter, then we are forced to reject comp, by reductio ad absurdum.
>
>
> I think the mistake, the reductio, is the assumption that consciousness
> can supervene on this piece of matter without reference to the world in
> which the matter exists.
>

This smells of dualism. What is "the world in which the matter exists" if
not matter?


>   This fallacy is encouraged by considering conscious thoughts to be about
> abstractions like arithmetic and dreams (as though dreams did not derive
> from reality).
>

Dreams both derive from and are reality. Reality is everything that is, no?

Telmo.


>
> Brent
>
>
>   Any computation supported by matter on which consciousness would
> supervene could be replaced with a dumb playback of the sequence of states
> produced by the computation (contradicting comp). In the Klara / Olympia
> case, Olympia could be made compatible with comp by being replaceable by
> Klara to deal with counterfactuals that would never happen. Enabling /
> disabling the Olympia / Klara connection would turn consciousness on or off
> (contradicting primitive matter, because the possibility of enabling
> material computations that would never happen would determine the presence
> or absence of consciousness).
>
>  I am writing this to help organize my own thoughts, and hope to be
> corrected if I am making a mistake.
>
>
>>  Then again, I also have some trouble with the multiverse part. A MV "is"
>> a quantum computer? How do we know that, without even knowing the laws of
>> physics?
>>
>
>  I think Russell is referring to a MWI multiverse which is necessarily a
> quantum computer (we are assuming the wave equation with MWI, so the laws
> of physics are known).
>
>  I am not convinced that the MWI + the anthropic principle is equivalent
> to the subset of the universal dovetailer computations that supports all
> possible human experiences. I am also not convinced that the set of all
> possible human experiences is finite. Russell, could you elaborate on these?
>
>  (I am going to comment on the blog post too, in a rather redundant way)
>
>  Cheers
> Telmo.
>
>
>



Re: MGA revisited paper

2014-08-12 Thread Telmo Menezes
On Tue, Aug 12, 2014 at 6:17 AM, meekerdb  wrote:

>  On 8/11/2014 9:27 PM, LizR wrote:
>
>  On 12 August 2014 15:50, meekerdb  wrote:
>
>>  On 8/11/2014 8:26 PM, LizR wrote:
>>
>>  Well, I guess a physical UD would be made robust against quantum
>> uncertainty, like all computers, but why do we need to assume QM apply?
>>
>>  The argument assumes it doesn't apply, so that the computation can be
>> deterministic. I don't know that it affects the argument, but worries me a
>> little that we make this unrealistic assumption; especially if we have
>> include a whole 'world context' for the MG simulation.
>>
>
>  Ah, I see. One of the assumptions of comp is that consciousness is a
> classical computation. At least I think that's what it means to say that
> the Church-Turing thesis applies. I suppose a question here is whether QM
> can introduce some "magic" that allows it to create consciousness from a
> purely materialistic basis. If so then there's no need for comp because
> consciousness isn't classically emulableyes?
>
>
> Although a quantum computer can compute some things much faster than a
> classical computer, it still can't compute things that are Turing
> uncomputable, so I don't think it provides that kind of magic.  I was
> thinking more of the fact that the recorded inputs to B and the response to
> the projection of the movie onto the graph will not be perfectly
> deterministic, but only with high statistical probability.  Also, in QM it
> generally makes a difference to the evolution of the system whether other
> states are available even if they are never occupied.
>
>
>This is what I don't see. Why do A's internal processes have
>>> meaning, while B's don't - given that they're physically identical?
>>>
>>B's have meaning too, but it is derivative meaning because the
>> meanings are copies of A's and A's refer to a world.  So it's an
>> unwarranted conclusion to say, see B is conscious and there's no physics
>> going on.  There's plenty of physics going on in the past that causally
>> connects B to A's experience.  Just because it's not going on at the moment
>> B is supposed to be experiencing it isn't determinative.  Real QM physics
>> can require counterfactual correctness in the past (e.g. Wheeler's quantum
>> erasure, Elitzur and Dolev's quantum liar's paradox).
>>
>
>  Well, as I've mentioned previously I think time symmetry may sort out
> those awkward retroactive quantum measurements. But anyway, I guess this is
> putting the schrodinger's cat before the horse, in that comp only assumes
> classical computation and attempts to derive a quantum world from it. So I
> guess we can't necessarily assume real QM physics, or at least not unless
> we've shown comp to be based on false premises or internally inconsistent,
> or have a rival theory of consciousness arising naturally from qm and
> materialism, or some other good reason to do so. I think what I'm trying to
> say here is that to assume comp must work with real physics is to assume
> from the start that there is no reversal.
>
>
> Well there's also the question of whether comp and the UD solve the hard
> problem any better than psychophysical parallelism.
>

I'm not sure that comp and UD propose to solve the hard problem, as much as
proposing why it's not solvable.


> Pierz did a good job of examining this and I made some comments on his
> post.  I would like to look at comp+UD as just another scientific
> hypothesis which we will adopt when it makes some surprising prediction
> which is proved out by tests.  Obviously getting some surprising, testable
> prediction out of it is likely to be very difficult.  But unlike Bruno I'm
> not much persuaded by logical inference from logic, Church-Turing, or Peano
> arithmetic because I think they aren't "The Truth" but just models we use
> in our thinking.  Just reflect on how all logicians and philosophers would
> have said, "No object can be in two different places at the same time. It's
> just logic." - before quantum mechanics.
>

It is my impression that progress in logic is done by removing all that is
non-abstract. It's a simplification effort. My difficulties with logic
usually arise from not being able to grasp the counter-intuitive simple
level at which it operates. Confusing common sense with logic is a common
mistake. You see this a lot on YouTube these days, where well-meaning
atheists like to say "it's just logic" when they are in fact referring to
scientific common sense. I am an atheist, an agnostic and a lover of
science, so I never like this -- it's resorting to the tricks of the
"enemy".


>
>
>   I'm not sure I follow you here. Why does making the simulation
>> bigger invalidate the argument? Is there a cut-off point?
>>
>>  I don't know about a cut-off.  The argument is a reductio.  The
>> conclusion Bruno makes is that no physical process is necessary to support
>> consciousness,
>>
>
>  OK
>
>
>>  consciousness can be instantiated in a Turing machine simulation.

Re: MGA revisited paper

2014-08-12 Thread meekerdb

On 8/12/2014 2:08 AM, LizR wrote:
On 12 August 2014 17:17, meekerdb <meeke...@verizon.net> wrote:


On 8/11/2014 9:27 PM, LizR wrote:

On 12 August 2014 15:50, meekerdb <meeke...@verizon.net> wrote:

On 8/11/2014 8:26 PM, LizR wrote:

Well, I guess a physical UD would be made robust against quantum 
uncertainty,
like all computers, but why do we need to assume QM apply?

The argument assumes it doesn't apply, so that the computation can be
deterministic. I don't know that it affects the argument, but worries 
me a
little that we make this unrealistic assumption; especially if we have 
include
a whole 'world context' for the MG simulation.


Ah, I see. One of the assumptions of comp is that consciousness is a 
classical
computation. At least I think that's what it means to say that the 
Church-Turing
thesis applies. I suppose a question here is whether QM can introduce some 
"magic"
that allows it to create consciousness from a purely materialistic basis. 
If so
then there's no need for comp because consciousness isn't classically 
emulableyes?

Although a quantum computer can compute some things much faster than a 
classical
computer, it still can't compute things that are Turing uncomputable, so I 
don't
think it provides that kind of magic.  I was thinking more of the fact that 
the
recorded inputs to B and the response to the projection of the movie onto 
the graph
will not be perfectly deterministic, but only with high statistical probability. 
Also, in QM it generally makes a difference to the evolution of the system whether

other states are available even if they are never occupied.


OK, I think I follow. This would appear to assume that physics precedes computation, in 
an explanatory sense, which I would say means it probably assumes comp is false as a 
premise?


There I agree with JKC.  "Comp" is ambiguous.  If it's just that it would be a good bet to 
have the doctor replace some brain parts with I/O functionally identical parts, then no 
that is not assumed false.  But Bruno claims that the whole argument, through step 8, 
follows from "comp", which I doubt.  The problem is that it's a reductio ad absurdum.  
When your argument reaches an absurdity then it implies something in the chain of 
inference is wrong.  But it isn't necessarily the main premise you started with; it can be 
a mistake anywhere along the way.
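
Put schematically (a generic logical remark, nothing new to the argument):
a reductio only refutes the conjunction of everything that was used,

\[
(P \wedge A_1 \wedge \dots \wedge A_n) \vdash \bot
\;\Longrightarrow\;
\neg\,(P \wedge A_1 \wedge \dots \wedge A_n),
\]

which licenses the conclusion \(\neg P\) only if each auxiliary assumption
\(A_i\) is independently secure.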




Well there's also the question of whether comp and the UD solve the hard 
problem any
better than psychophysical parallelism.


Yes indeed, I haven't really got to grips with that. I think the only way in which comp 
tries to tackle the hard problem is (something to do with) the fact that an infinite 
number of computations are involved. I must admit I have great difficulty even thinking 
about the hard problem, my tendency is to dismiss it as either "too mystical" or "too 
nonexistent". But I do feel there is an ineffable quality to consciousness. (Or is the 
word numinous?) On days which start with a 'T', at least.


Pierz did a good job of examining this and I made some comments on his 
post. I would
like to look at comp+UD as just another scientific hypothesis which we will 
adopt
when it makes some surprising prediction which is proved out by tests.  
Obviously
getting some surprising, testable prediction out of it is likely to be very 
difficulty.


Yes.

(I suppose if one didn't have experience of consciousness, predictions of incommunicable 
qualia and so on might be surprising...if something that didn't have experience of 
consciousness could be surprised, at least. But yes,)


But unlike Bruno I'm not much persuaded by logical inference from logic,
Church-Turing, or Peano arithmetic because I think they aren't "The Truth" 
but just
models we use in our thinking.  Just reflect on how all logicians and 
philosophers
would have said, "No object can be in two different places at the same 
time. It's
just logic." - before quantum mechanics.


And indeed physicists. Although does QM say unequivocally that an electron, say, can be 
in two places at the same time, or does that depend on the model? (I know the wave 
function can be in lots of places at the same time, of course).



I don't know about a cut-off.  The argument is a reductio.  The 
conclusion
Bruno makes is that no physical process is necessary to support 
consciousness,


OK

consciousness can be instantiated in a Turing machine simulation.


Sorry to split the sentence, but I must admit I thought that latter part 
was his
initial assumption, rather than his conclusion?

The initial assumption is consciousness can be instantiated by a physical
computation (one that replicates the I/O of your neurons), but step 8 is to 
show it
must be independent of the physical computation and can be instantiated by an abstraction.

Re: MGA revisited paper

2014-08-12 Thread meekerdb

On 8/12/2014 6:20 AM, Telmo Menezes wrote:




On Tue, Aug 12, 2014 at 1:57 AM, meekerdb  wrote:


On 8/11/2014 5:13 PM, Telmo Menezes wrote:




On Tue, Aug 12, 2014 at 12:22 AM, LizR <lizj...@gmail.com> wrote:

Having now read the paper,


Ok, I finished it too. Russell's "state of the art" is a very nice 
introduction to
the MGA and Maudlin's argument. Very clear and concise, helped me organize 
my
thoughts on these.

ISTM that the "counterfactual" part of the argument is the only part 
that I
/really/ don't get. Or rather ISTM that it demonstrates that 
consciousness
can't supervene on physical computational states, because those states 
can't
know anything about these counterfactuals, which by definition don't 
happen.


I think the point here is that if we assume consciousness supervenes on 
matter,
then we are forced to reject comp, by reductio ad absurdum.


I think the mistake, the reductio, is the assumption that consciousness can
supervene on this piece of matter without reference to the world in which 
the matter
exists.


This smells of dualism. What is "the world in which the matter exists" if not 
matter?


Sure it's matter.  It's as much or as little dualism as the idea that consciousness 
supervenes on material processes.  I'm not much concerned with ontological labels - find 
a model that works and then worry about what to call it is my attitude.  "Matter" in 
modern physics is already so abstract it inspires questions like, "What makes the 
equations fly?"


Brent


  This fallacy is encouraged by considering conscious thoughts to be about
abstractions like arithmetic and dreams (as though dreams did not derive 
from reality).


Dreams both derive from and are reality. Reality is everything that is, no?

Telmo.




Re: MGA revisited paper

2014-08-12 Thread meekerdb

On 8/12/2014 6:36 AM, Telmo Menezes wrote:




On Tue, Aug 12, 2014 at 6:17 AM, meekerdb  wrote:


On 8/11/2014 9:27 PM, LizR wrote:

On 12 August 2014 15:50, meekerdb <meeke...@verizon.net> wrote:

On 8/11/2014 8:26 PM, LizR wrote:

Well, I guess a physical UD would be made robust against quantum 
uncertainty,
like all computers, but why do we need to assume QM apply?

The argument assumes it doesn't apply, so that the computation can be
deterministic. I don't know that it affects the argument, but worries 
me a
little that we make this unrealistic assumption; especially if we have 
include
a whole 'world context' for the MG simulation.


Ah, I see. One of the assumptions of comp is that consciousness is a 
classical
computation. At least I think that's what it means to say that the 
Church-Turing
thesis applies. I suppose a question here is whether QM can introduce some 
"magic"
that allows it to create consciousness from a purely materialistic basis. 
If so
then there's no need for comp because consciousness isn't classically 
emulableyes?


Although a quantum computer can compute some things much faster than a 
classical
computer, it still can't compute things that are Turing uncomputable, so I 
don't
think it provides that kind of magic.  I was thinking more of the fact that 
the
recorded inputs to B and the response to the projection of the movie onto 
the graph
will not be perfectly deterministic, but only with high statistical probability. 
Also, in QM it generally makes a difference to the evolution of the system whether

other states are available even if they are never occupied.



This is what I don't see. Why do A's internal processes have 
meaning,
while B's don't - given that they're physically identical?


B's have meaning too, but it is derivative meaning because the meanings 
are
copies of A's and A's refer to a world.  So it's an unwarranted 
conclusion to
say, see B is conscious and there's no physics going on.  There's 
plenty of
physics going on in the past that causally connects B to A's 
experience.  Just
because it's not going on at the moment B is supposed to be 
experiencing it
isn't determinative.  Real QM physics can require counterfactual 
correctness in
the past (e.g. Wheeler's quantum erasure, Elitzur and Dolev's quantum 
liar's
paradox).


Well, as I've mentioned previously I think time symmetry may sort out those 
awkward
retroactive quantum measurements. But anyway, I guess this is putting the
schrodinger's cat before the horse, in that comp only assumes classical 
computation
and attempts to derive a quantum world from it. So I guess we can't 
necessarily
assume real QM physics, or at least not unless we've shown comp to be based 
on
false premises or internally inconsistent, or have a rival theory of 
consciousness
arising naturally from qm and materialism, or some other good reason to do 
so. I
think what I'm trying to say here is that to assume comp must work with real
physics is to assume from the start that there is no reversal.


Well there's also the question of whether comp and the UD solve the hard 
problem any
better than psychophysical parallelism.


I'm not sure that comp and UD propose to solve the hard problem, as much as proposing 
why it's not solvable.


I agree.  I don't think it's solvable in the way people ask for.  I think it's solvable in 
the engineering sense.  The advantage of Bruno's theory is that, if the theory is right, 
then he can prove within it what the hard problem is not solvable and can say why.



Pierz did a good job of examining this and I made some comments on his 
post.  I
would like to look at comp+UD as just another scientific hypothesis which 
we will
adopt when it makes some surprising prediction which is proved out by tests. 
Obviously getting some surprising, trestable prediction out of it is likely to be

very difficulty. But unlike Bruno I'm not much persuaded by logical 
inference from
logic, Church-Turing, or Peano arithmetic because I think they aren't "The 
Truth"
but just models we use in our thinking.  Just reflect on how all logicians 
and
philosophers would have said, "No object can be in two different places at 
the same
time. It's just logic." - before quantum mechanics.


It is my impression that progress in logic is done by removing all that is non-abstract. 
It's a simplification effort. My difficulties with logic usually arise from not being 
able to grasp the counter-intuitive simple level at which it operates. Confusing common 
sense with logic is a common mistake. You see this a lot on you tube these days, where 
well-meaning atheists like to say "it's just logic" when they are in fact referring to scientific common sense.

Re: MGA revisited paper

2014-08-13 Thread Bruno Marchal


On 12 Aug 2014, at 11:24, Bruno Marchal wrote:


Hi,

I think it is better I let you discuss a little bit.

Yes Russell made a nice introduction to this problematic.

Below, I just put a , which you might try to guess from my  
preview post (notably to Brent) on this issue.




On 12 Aug 2014, at 02:48, meekerdb wrote:


On 8/11/2014 4:03 PM, LizR wrote:
I have never got this idea of "counterfactual correctness". It  
seems to be that the argument goes ...


Assume computational process A is conscious
Take process B, which replays A - B passes through the same  
machine states as A, but it doesn't work them out, it's driven by  
a recording of A - B isn't conscious because it isn't  
counterfactually correct.


I can't see how this works. (Except insofar as if we assume  
consciousness doesn't supervene on material processes, then  
neither A nor B is conscious, they are just somehow attached to  
conscious experiences generated elsewhere, maybe by a UD.)


It doesn't work, because it ignores the fact that consciousness is  
about something.  It can only exist in the context of thoughts  
(machine states and processes) referring to a "world"; being part  
of a representational and predictive model.  Without the  
counterfactuals, it's just a sequence of states and not a model of  
anything.  But in order that it be a model it must interact or have  
interacted in the past in order that the model be causally  
connected to the world.  It is this connection that gives meaning  
to the model. Because Bruno is a logician he tends to think of  
consciousness as performing deductive proofs, executing a proof in  
the sense that every computer program is a proof.  He models belief  
as proof.  But this overlooks where the meaning of the program  
comes from.





You might say that I model mind and belief ([]p), and the whole  
working of a computer, by proof in arithmetic, but with comp, it has  
to be valid at the correct substitution level for the case of correct  
machines.


But for consciousness, I model it by knowledge (modal logic S4), which
I obtained from the Theaetetus' method applied to belief/provability
([]p & p). Then I insist that this is not modeled by *anything* you
can define exclusively in 3p terms. Indeed that (meta) definition,
in the comp-arithmetical frame, makes knowledge a non-propositional
attitude. I am even led (maybe influenced by salvia) to
the idea that consciousness is more in the "& p" than in []p. The
proof aspect of the brain machinery would be a filter of consciousness
and memory, and consciousness itself is more in the complement of
what is provable. The meaning comes from truth, not proof.
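
For reference, the intensional variants invoked here and again in the later
AUDA post, sketched compactly ([]p read as provability/belief, <>p as
consistency; the glosses on the right are rough labels, not Bruno's exact
wording):

\[
\begin{aligned}
\text{belief / provability:}\quad & \Box p\\
\text{knowledge (Theaetetus):}\quad & \Box p \wedge p\\
\text{matter (observable):}\quad & \Box p \wedge \Diamond p\\
\text{matter (sensible):}\quad & \Box p \wedge \Diamond p \wedge p
\end{aligned}
\]

For a correct machine these hold of the same true p when judged from
outside, but by incompleteness the machine itself cannot prove the
equivalences, which is why the variants obey different logics.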









People that want to deny computers can be conscious point out that  
the meaning comes from the programmer.  But it doesn't have to.  If  
the computer has goals and can learn and act within the world then  
its internal modeling and decision processes get meaning through  
their potential for actions.


OK.





This is why I don't agree with the conclusion drawn from step 8.  I  
think the requirement to counterfactually correct implies that a  
whole world, a physics, needs to be simulated too, or else the  
Movie Graph or Klara need to be able to interact with the world to  
supply the meaning to their program.


Hmm... If you want the full counterfactualness of a *universal*
machine, you need the UD*, and thus the full sigma_1 complete reality,
which is tiny compared to the full arithmetical truth, but still
infinitely bigger than the known physical universe.
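
For concreteness, a toy picture of the dovetailing schedule behind the UD
(Python; the "programs" here are trivial counters standing in for an
enumeration of all programs, so this only sketches the scheduling, not the
UD itself):

    from itertools import count

    def toy_program(i):
        # Stand-in for the i-th program: it just yields its successive states.
        for n in count():
            yield (i, n)

    def dovetail(rounds):
        # Classic schedule: at round k, start program k and then advance every
        # program started so far by one step. Run without bound, and every
        # step of every program is eventually reached.
        started, out = [], []
        for k in range(rounds):
            started.append(toy_program(k))
            for prog in started:
                out.append(next(prog))
        return out

    print(dovetail(4))
    # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]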


Then with a computer, the interactions with the environment are finite,
and can be re-entered into the computer machinery locally, and the MGA
can be resumed again.






But if the Movie Graph computer is a counterfactually correct  
simulation of a person within a simulated world, there's no longer  
a "reversal".


Why?

Bruno



Simulated consciousness exists in simulated worlds - dog bites man.



Bruno






Brent



http://iridia.ulb.ac.be/~marchal/





http://iridia.ulb.ac.be/~marchal/




Re: MGA revisited paper

2014-08-13 Thread Bruno Marchal


On 12 Aug 2014, at 15:20, Telmo Menezes wrote:





On Tue, Aug 12, 2014 at 1:57 AM, meekerdb   
wrote:

On 8/11/2014 5:13 PM, Telmo Menezes wrote:




On Tue, Aug 12, 2014 at 12:22 AM, LizR  wrote:
Having now read the paper,

Ok, I finished it too. Russell's "state of the art" is a very nice  
introduction to the MGA and Maudlin's argument. Very clear and  
concise, helped me organize my thoughts on these.


ISTM that the "counterfactual" part of the argument is the only  
part that I really don't get. Or rather ISTM that it demonstrates  
that consciousness can't supervene on physical computational  
states, because those states can't know anything about these  
counterfactuals, which by definition don't happen.


I think the point here is that if we assume consciousness  
supervenes on matter, then we are forced to reject comp, by  
reductio ad absurdum.


I think the mistake, the reductio, is the assumption that  
consciousness can supervene on this piece of matter without  
reference to the world in which the matter exists.


This smells of dualism. What is "the world in which the matter  
exists" if not matter?



Yes, the problem here is that Brent needs either *primitive* matter,  
and then step 8 will show that it is something like a gap-of-the-god.  
As long as matter is 3p definable, a brain cannot immediately
distinguish it from a program, or from the FPI on all programs, and
that is why the FPI will fight back on such a notion of matter. So it has
to be a mysterious thing, non Turing emulable, and non FPI-recoverable.


But why introduce this when the reasoning shows that if such matter
exists, then the logic of []p & p, or other variants (any of them), should
provide a comp-quantum logic distinct from QL? So let us see, before
concluding, if we want to stay cold (keep the scientific attitude) on that
issue. Brent is just betting that comp will be violated by "nature".
Up to now, the comp-QL does fit with nature.


I see that some people still have difficulty getting the importance,
which stems from incompleteness, of the intensional variants of the [].
It is normal: it asks for more familiarity with mathematical logic
than, say, the UDA.


Bruno






  This fallacy is encouraged by considering conscious thoughts to be  
about abstractions like arithmetic and dreams (as though dreams did  
not derive from reality).


Dreams both derive from and are reality. Reality is everything that  
is, no?


Telmo.


Brent


Any computation supported by matter on which consciousness would  
supervene could be replaced with a dumb playback of the sequence of  
states produced by the computation (contradicting comp). In the  
Klara / Olympia case, Olympia could be made compatible with comp by  
being replaceable by Klara to deal with counterfactuals that would  
never happen. Enabling / disabling the Olympia / Klara connection  
would turn consciousness on or off (contradicting primitive matter,  
because the possibility of enabling material computations that  
would never happen would determine the presence of absence of  
consciousness).


I am writing this to help organize my own thoughts, and hope to be  
corrected if I am making a mistake.


Then again, I also have some trouble with the multiverse part. A MV  
"is" a quantum computer? How do we know that, without even knowing  
the laws of physics?


I think Russell is referring to a MWI multiverse which is  
necessarily a quantum computer (we are assuming the wave equation  
with MWI, so the laws of physics are known).


I am not convinced that the MWI + the anthropic principle is  
equivalent to the subset of the universal dovetailer computations  
that supports all possible human experiences. I am also not  
convinced that the set of all possible human experiences is finite.  
Russell, could you elaborate on these?


(I am going to comment on the blog post too, in a rather redundant  
way)


Cheers
Telmo.





http://iridia.ulb.ac.be/~marchal/




Re: MGA revisited paper

2014-08-13 Thread Bruno Marchal


On 12 Aug 2014, at 15:36, Telmo Menezes wrote:





On Tue, Aug 12, 2014 at 6:17 AM, meekerdb   
wrote:

On 8/11/2014 9:27 PM, LizR wrote:

On 12 August 2014 15:50, meekerdb  wrote:
On 8/11/2014 8:26 PM, LizR wrote:
Well, I guess a physical UD would be made robust against quantum  
uncertainty, like all computers, but why do we need to assume QM  
apply?


The argument assumes it doesn't apply, so that the computation can  
be deterministic. I don't know that it affects the argument, but  
worries me a little that we make this unrealistic  
assumption;  especially if we have include a whole  
'world context' for the MG simulation.


Ah, I see. One of the assumptions of comp is that consciousness is  
a classical computation. At least I think that's what it means to  
say that the Church-Turing thesis applies. I suppose a question  
here is whether QM can introduce some "magic" that allows it to  
create consciousness from a purely materialistic basis. If so then  
there's no need for comp because consciousness isn't classically  
emulableyes?


Although a quantum computer can compute some things much faster than  
a classical computer, it still can't compute things that are Turing  
uncomputable, so I don't think it provides that kind of magic.  I  
was thinking more of the fact that the recorded inputs to B and the  
response to the projection of the movie onto the graph will not be  
perfectly deterministic, but only with high statistical  
probability.  Also, in QM it generally makes a difference to the  
evolution of the system whether other states are available even if  
they are never occupied.



This is what I don't see. Why do A's internal processes have  
meaning, while B's don't - given that they're physically identical?
B's have meaning too, but it is derivative meaning because the  
meanings are copies of A's and A's refer to a world.  So it's an  
unwarranted conclusion to say, see B is conscious and there's no  
physics going on.  There's plenty of physics going on in the past  
that causally connects B to A's experience.  Just because it's not  
going on at the moment B is supposed to be experiencing it isn't  
determinative.  Real QM physics can require counterfactual  
correctness in the past (e.g. Wheeler's quantum erasure, Elitzur  
and Dolev's quantum liar's paradox).


Well, as I've mentioned previously I think time symmetry may sort  
out those awkward retroactive quantum measurements. But anyway, I  
guess this is putting the schrodinger's cat before the horse, in  
that comp only assumes classical computation and attempts to derive  
a quantum world from it. So I guess we can't necessarily assume  
real QM physics, or at least not unless we've shown comp to be  
based on false premises or internally inconsistent, or have a rival  
theory of consciousness arising naturally from qm and materialism,  
or some other good reason to do so. I think what I'm trying to say  
here is that to assume comp must work with real physics is to  
assume from the start that there is no reversal.


Well there's also the question of whether comp and the UD solve the  
hard problem any better than psychophysical parallelism.


I'm not sure that comp and UD propose to solve the hard problem, as  
much as proposing why it's not solvable.


Well, it is mainly the AUDA which shows that it is not solvable.

The UDA does not solve any problem. It provides a new problem. It
assumes computationalism, and derives a problem for matter. UDA just
shows that the mind-body problem is two times more difficult with
comp, as it can no longer take matter for granted.


But AUDA arguably solves, or meta-solves the mind-body problem, by  
providing a theory of mind and consciousness (mainly by the logics of  
[]p and []p & p), and it does provide a theory of matter (taken now in  
the UD sense, i.e. a measure on computations), and this mainly by the  
logics of []p & <>p and []p & <>p & p.







Pierz did a good job of examining this and I made some comments on  
his post.  I would like to look at comp+UD as just another  
scientific hypothesis which we will adopt when it makes some  
surprising prediction which is proved out by tests.  Obviously  
getting some surprising, trestable prediction out of it is likely to  
be very difficulty.  But unlike Bruno I'm not much persuaded by  
logical inference from logic, Church-Turing, or Peano arithmetic  
because I think they aren't "The Truth" but just models we use in  
our thinking.  Just reflect on how all logicians and philosophers  
would have said, "No object can be in two different places at the  
same time. It's just logic." - before quantum mechanics.


It is my impression that progress in logic is done by removing all  
that is non-abstract. It's a simplification effort. My difficulties  
with logic usually arise from not being able to grasp the counter- 
intuitive simple level at which it operates. Confusing common sense  
with logic is a common mistake.

Re: MGA revisited paper

2014-08-13 Thread Bruno Marchal


On 12 Aug 2014, at 18:32, meekerdb wrote:


On 8/12/2014 2:08 AM, LizR wrote:

On 12 August 2014 17:17, meekerdb  wrote:
On 8/11/2014 9:27 PM, LizR wrote:

On 12 August 2014 15:50, meekerdb  wrote:
On 8/11/2014 8:26 PM, LizR wrote:
Well, I guess a physical UD would be made robust against quantum  
uncertainty, like all computers, but why do we need to assume QM  
apply?


The argument assumes it doesn't apply, so that the computation can  
be deterministic. I don't know that it affects the argument, but  
worries me a little that we make this unrealistic assumption;  
especially if we have include a whole 'world context' for the MG  
simulation.


Ah, I see. One of the assumptions of comp is that consciousness is  
a classical computation. At least I think that's what it means to  
say that the Church-Turing thesis applies. I suppose a question  
here is whether QM can introduce some "magic" that allows it to  
create consciousness from a purely materialistic basis. If so then  
there's no need for comp because consciousness isn't classically  
emulableyes?
Although a quantum computer can compute some things much faster  
than a classical computer, it still can't compute things that are  
Turing uncomputable, so I don't think it provides that kind of  
magic.  I was thinking more of the fact that the recorded inputs to  
B and the response to the projection of the movie onto the graph  
will not be perfectly deterministic, but only with high statistical  
probability.  Also, in QM it generally makes a difference to the  
evolution of the system whether other states are available even if  
they are never occupied.


OK, I think I follow. This would appear to assume that physics  
precedes computation, in an explanatory sense, which I would say  
means it probably assumes comp is false as a premise?


There I agree with JKC.  "Comp" is ambiguous.


Well, certainly not for the same reason. JKC has no problem with steps  
0, 1, 2. By definition of comp, this means he has no problem with comp.





If it's just that it would be a good bet to have the doctor replace  
some brain parts with I/O functionally identical parts, then no that  
is not assumed false.  But Bruno claims that the whole argument,  
through step 8, follows from "comp", which I doubt.  The problem is  
that it's a reductio ad absurdum.  When your argument reaches an  
absurdity then it implies something in the chain of inference is  
wrong.  But it isn't necessarily the main premise you started with;  
it can be a mistake anywhere along the way.


OK, but then you need to find the flaw. UDA1-7 is purely deductive.
Be careful, as step 8, MGA, is not a purely deductive argument, as it
points to "reality", so it is just an argument showing that when
primitive matter is used to block the consequence of UDA1-7, it is
equivalent to an introduction of something magic (primitive matter) to
avoid ... its own testability (modulo the idea that we might be
dreaming or already in a special but normal (in the statistical sense
on all computations) emulation, à la Galouye or Bostrom).









Well there's also the question of whether comp and the UD solve the  
hard problem any better than psychophysical parallelism.


Yes indeed, I haven't really got to grips with that. I think the  
only way in which comp tries to tackle the hard problem is  
(something to do with) the fact that an infinite number of  
computations are involved. I must admit I have great difficulty  
even thinking about the hard problem, my tendency is to dismiss it  
as either "too mystical" or "too nonexistent". But I do feel there  
is an ineffable quality to consciousness. (Or is the word  
numinous?) On days which start with a 'T', at least.


Pierz did a good job of examining this and I made some comments on  
his post.  I would like to look at comp+UD as just another  
scientific hypothesis which we will adopt when it makes some  
surprising prediction which is proved out by tests.  Obviously  
getting some surprising, testable prediction out of it is likely to  
be very difficulty.


Yes.

(I suppose if one didn't have experience of consciousness,  
predictions of incommunicable qualia and so on might be  
surprising...if something that didn't have experience of  
consciousness could be surprised, at least. But yes,)


But unlike Bruno I'm not much persuaded by logical inference from  
logic, Church-Turing, or Peano arithmetic because I think they  
aren't "The Truth" but just models we use in our thinking.  Just  
reflect on how all logicians and philosophers would have said, "No  
object can be in two different places at the same time. It's just  
logic." - before quantum mechanics.


And indeed physicists. Although does QM say unequivocally that an  
electron, say, can be in two places at the same time, or does that  
depend on the model? (I know the wave function can be in lots of  
places at the same time, of course).
I don't know about a cut-off.  The argument is a reductio.

Re: MGA revisited paper

2014-08-13 Thread Bruno Marchal


On 12 Aug 2014, at 18:54, meekerdb wrote:


On 8/12/2014 6:36 AM, Telmo Menezes wrote:




On Tue, Aug 12, 2014 at 6:17 AM, meekerdb   
wrote:

On 8/11/2014 9:27 PM, LizR wrote:

On 12 August 2014 15:50, meekerdb  wrote:
On 8/11/2014 8:26 PM, LizR wrote:
Well, I guess a physical UD would be made robust against quantum  
uncertainty, like all computers, but why do we need to assume QM  
apply?


The argument assumes it doesn't apply, so that the computation can  
be deterministic. I don't know that it affects the argument, but  
worries me a little that we make this unrealistic assumption;  
especially if we have include a whole 'world context' for the MG  
simulation.


Ah, I see. One of the assumptions of comp is that consciousness is  
a classical computation. At least I think that's what it means to  
say that the Church-Turing thesis applies. I suppose a question  
here is whether QM can introduce some "magic" that allows it to  
create consciousness from a purely materialistic basis. If so then  
there's no need for comp because consciousness isn't classically  
emulableyes?


Although a quantum computer can compute some things much faster  
than a classical computer, it still can't compute things that are  
Turing uncomputable, so I don't think it provides that kind of  
magic.  I was thinking more of the fact that the recorded inputs to  
B and the response to the projection of the movie onto the graph  
will not be perfectly deterministic, but only with high statistical  
probability.  Also, in QM it generally makes a difference to the  
evolution of the system whether other states are available even if  
they are never occupied.



This is what I don't see. Why do A's internal processes have  
meaning, while B's don't - given that they're physically identical?
B's have meaning too, but it is derivative meaning because the  
meanings are copies of A's and A's refer to a world.  So it's an  
unwarranted conclusion to say, see B is conscious and there's no  
physics going on.  There's plenty of physics going on in the past  
that causally connects B to A's experience.  Just because it's not  
going on at the moment B is supposed to be experiencing it isn't  
determinative.  Real QM physics can require counterfactual  
correctness in the past (e.g. Wheeler's quantum erasure, Elitzur  
and Dolev's quantum liar's paradox).


Well, as I've mentioned previously I think time symmetry may sort  
out those awkward retroactive quantum measurements. But anyway, I  
guess this is putting the schrodinger's cat before the horse, in  
that comp only assumes classical computation and attempts to  
derive a quantum world from it. So I guess we can't necessarily  
assume real QM physics, or at least not unless we've shown comp to  
be based on false premises or internally inconsistent, or have a  
rival theory of consciousness arising naturally from qm and  
materialism, or some other good reason to do so. I think what I'm  
trying to say here is that to assume comp must work with real  
physics is to assume from the start that there is no reversal.


Well there's also the question of whether comp and the UD solve the  
hard problem any better than psychophysical parallelism.


I'm not sure that comp and UD propose to solve the hard problem, as  
much as proposing why it's not solvable.


I agree.  I don't think it's solvable in the way people ask for.  I  
think it's solvable in the engineering sense.  The advantage of  
Bruno's theory is that, if the theory is right, then he can prove  
within it that the hard problem is not solvable and can say why.


OK. Cool.







Pierz did a good job of examining this and I made some comments on  
his post.  I would like to look at comp+UD as just another  
scientific hypothesis which we will adopt when it makes some  
surprising prediction which is proved out by tests.  Obviously  
getting some surprising, testable prediction out of it is likely  
to be very difficult.  But unlike Bruno I'm not much persuaded by  
logical inference from logic, Church-Turing, or  
Peano arithmetic because I think they aren't "The Truth" but just  
models we use in our thinking.  Just reflect on how all logicians  
and philosophers would have said, "No object can be in two  
different places at the same time. It's just logic." - before  
quantum mechanics.


It is my impression that progress in logic is done by removing all  
that is non-abstract. It's a simplification effort. My difficulties  
with logic usually arise from not being able to grasp the  
counter-intuitive simple level at which it operates. Confusing common  
sense with logic is a common mistake. You see this a lot on YouTube  
these days, where well-meaning atheists like to say "it's just  
logic" when they are in fact referring to scientific common sense.  
I am an atheist, an agnostic and a lover of science,  
so I never like this -- it's resorting to the tricks of the "enemy".




I'm not

Re: MGA revisited paper

2014-08-13 Thread meekerdb

On 8/13/2014 6:26 AM, Bruno Marchal wrote:


On 12 Aug 2014, at 11:24, Bruno Marchal wrote:


Hi,

I think it is better I let you discuss a little bit.

Yes Russell made a nice introduction to this problematic.

Below, I just put a , which you might try to guess from my previous post 
(notably to Brent) on this issue.




On 12 Aug 2014, at 02:48, meekerdb wrote:


On 8/11/2014 4:03 PM, LizR wrote:
I have never got this idea of "counterfactual correctness". It seems to be that the 
argument goes ...


Assume computational process A is conscious
Take process B, which replays A - B passes through the same machine states as A, but 
it doesn't work them out, it's driven by a recording of A - B isn't conscious because 
it isn't counterfactually correct.


I can't see how this works. (Except insofar as if we assume consciousness doesn't 
supervene on material processes, then neither A nor B is conscious, they are just 
somehow attached to conscious experiences generated elsewhere, maybe by a UD.)


It doesn't work, because it ignores the fact that consciousness is about something.  
It can only exist in the context of thoughts (machine states and processes) referring 
to a "world"; being part of a representational and predictive model.  Without the 
counterfactuals, it's just a sequence of states and not a model of anything.  But in 
order that it be a model, it must interact, or have interacted in the past, so that 
the model is causally connected to the world.  It is this connection that gives 
meaning to the model. Because Bruno is a logician he tends to think of consciousness 
as performing deductive proofs, executing a proof in the sense that every computer 
program is a proof.  He models belief as proof.  But this overlooks where the meaning 
of the program comes from.





You might say that I model mind and belief ([]p), and the whole working of a computer, 
by proof in arithmetic, but with comp, it has to be valid at the correct substitution 
level for the case of correct machines.


But for consciousness, I model it by knowledge (modal logic S4), which I obtained from 
the Theaetetus' method applied on belief/provability ([]p & p). Then I insist that this 
is not modeled by *anything* you can define exclusively with 3p terms. 


Insisting sounds like an attempt to cut off debate.  You model it by true belief and then 
you say belief is an attitude, which is implicitly a conscious thought or feeling.


Indeed that (meta) definition, in the comp-arithmetical frame, makes knowledge a 
non-propositional attitude. 


Yet it depends on the proposition being believed being true.

I am even led (maybe influenced by salvia) to the idea that consciousness is more in 
the "& p" than in []p. The proof aspect of the brain machinery would be a filter of 
consciousness and memory, and consciousness itself is more in the complement of what 
is provable. The meaning comes from truth, not proof.


Which leads to my position, which is that epistemology precedes ontology.  How do we 
determine what is true and worthy of belief?  I think we can never be certain, so it is a 
question of degrees of belief and weight of evidence.  But those don't fit in your scheme, 
which depends on necessity and truth.











People that want to deny computers can be conscious point out that the meaning comes 
from the programmer.  But it doesn't have to.  If the computer has goals and can learn 
and act within the world then its internal modeling and decision processes get meaning 
through their potential for actions.


OK.





This is why I don't agree with the conclusion drawn from step 8.  I think the 
requirement to be counterfactually correct implies that a whole world, a physics, needs 
to be simulated too, or else the Movie Graph or Klara need to be able to interact with 
the world to supply the meaning to their program.


Hmm... If you want the full counterfactualness of a *universal* machine, you need the 
UD*, and thus the full sigma_1 complete reality, which is tiny compared to the full 
arithmetical truth, but still infinitely bigger than the known physical universe.


I don't understand what "full counterfactualness" means.  A human being or any physical 
system reacts to the world in one way or another.  What was asked was for was 
counterfactual correctness, i.e. the the MG reacts the same as would the conscious being 
emulated - which might be no change at all.




Then with a computer, the interactions with the environment are finite, and can be 
re-entered in the computer machinery locally, and the MGA can be resumed again.


They are finite over a finite time span, limited by the speed of light.  But the number is 
very large for even the shortest time period that could be called "conscious", and my 
point is that this is a physical context.  It can't be just any random counterfactuals, 
but only those consistent with "meanings" instantiated in the brain.








But if the Movie Graph computer is a counterfactua

Re: MGA revisited paper

2014-08-13 Thread meekerdb

On 8/13/2014 7:01 AM, Bruno Marchal wrote:
Does Bruno actually say what he thinks consciousness is? (This is probably somewhere 
beyond the MGA, which is where I tend to get stuck...)


When I've asked directly what it would take to make a robot conscious, he's said 
Lobianity.  Essentially it's the ability to do proofs by mathematical induction and 
prove Godel's theorem.  But "ability" seems to be just in the sense of potential, as a 
Turing machine has the ability to compute anything computable.


That is what you need for your robot to be able to be conscious. OK. But to be 
conscious, you need not just the machine/man, but some connection with god/truth.


To put it roughly: the believer []p is never conscious, it is the knower []p & p who is 
conscious.  It is very different: []p can be defined in arithmetic. []p & p cannot be 
defined in arithmetic, or in the machine's language.


But that's just an abstract definition.  What is the operational meaning of "p"?  If 
consciousness depends on knowing and knowing depends on my belief being true, then I will 
be unconscious if my belief is mistaken. That makes no sense.  Consciousness obviously 
does not depend on "& p".  In my view consciousness is creating an internal model of the 
world.  The model includes propositions "p" which are more or less true depending on their 
correspondence with the world. Operationally this means they have consistency and 
predictive power.


Brent



Re: MGA revisited paper

2014-08-13 Thread LizR
On 14 August 2014 07:35, meekerdb  wrote:

> On 8/13/2014 6:26 AM, Bruno Marchal wrote:
>
>>
>> On 12 Aug 2014, at 11:24, Bruno Marchal wrote:
>>
>>  Hi,
>>>
>>> I think it is better I let you discuss a little bit.
>>>
>>> Yes Russell made a nice introduction to this problematic.
>>>
>>> Below, I just put a , which you might try to guess from my
>>> previous post (notably to Brent) on this issue.
>>>
>>>
>>>
>>> On 12 Aug 2014, at 02:48, meekerdb wrote:
>>>
>>>  On 8/11/2014 4:03 PM, LizR wrote:

> I have never got this idea of "counterfactual correctness". It seems
> to be that the argument goes ...
>
> Assume computational process A is conscious
> Take process B, which replays A - B passes through the same machine
> states as A, but it doesn't work them out, it's driven by a recording of A
> - B isn't conscious because it isn't counterfactually correct.
>
> I can't see how this works. (Except insofar as if we assume
> consciousness doesn't supervene on material processes, then neither A nor 
> B
> is conscious, they are just somehow attached to conscious experiences
> generated elsewhere, maybe by a UD.)
>

 It doesn't work, because it ignores the fact that consciousness is
 about something.  It can only exist in the context of thoughts (machine
 states and processes) referring to a "world"; being part of a
 representational and predictive model.  Without the counterfactuals, it's
 just a sequence of states and not a model of anything.  But in order that
 it be a model it must interact or have interacted in the past in order that
 the model be causally connected to the world.  It is this connection that
 gives meaning to the model. Because Bruno is a logician he tends to think
 of consciousness as performing deductive proofs, executing a proof in the
 sense that every computer program is a proof.  He models belief as proof.
  But this overlooks where the meaning of the program comes from.

>>>
>>> 
>>>
>>
>> You might say that I model mind and belief ([]p), and the whole working
>> of a computer, by proof in arithmetic, but with comp, it has to be valid at
>> the correct substitution level for the case of correct machines.
>>
>> But for consciousness, I model it by knowledge (modal logic S4), which I
>> obtained from the Theaetetus' method applied on belief/provability ([]p &
>> p). Then I insist that this is not modeled by *anything* you can define
>> exclusively with 3p terms.
>>
>
> Insisting sounds like an attempt to cut off debate.


It would be polite to point out that you realise that isn't what Bruno is
doing, as I assume you do realise - i.e. that you are just pointing out a
language nuance.

Otherwise you sound argumentative, when presumably that isn't your
intention.



Re: MGA revisited paper

2014-08-13 Thread Platonist Guitar Cowboy
On Wed, Aug 13, 2014 at 11:41 PM, LizR  wrote:

> On 14 August 2014 07:35, meekerdb  wrote:
>
>> On 8/13/2014 6:26 AM, Bruno Marchal wrote:
>>
>>>
>>> On 12 Aug 2014, at 11:24, Bruno Marchal wrote:
>>>
>>> You might say that I model mind and belief ([]p), and the whole working
>>> of a computer, by proof in arithmetic, but with comp, it has to be valid at
>>> the correct substitution level for the case of correct machines.
>>>
>>> But for consciousness, I model it by knowledge (modal logic S4), which I
>>> obtained from the Theaetetus' method applied on belief/provability ([]p &
>>> p). Then I insist that this is not modeled by *anything* you can define
>>> exclusively with 3p terms.
>>>
>>
>> Insisting sounds like an attempt to cut off debate.
>
>
> It would be polite to point out that you realise that isn't what Bruno is
> doing, as I assume you do realise - i.e. that you are just pointing out a
> language nuance.
>
> Otherwise you sound argumentative, when presumably that isn't your
> intention.
>

I'd guess it isn't, issuing from the background that for years Brent has
provided us with valued posts from my end, especially physics +
environment/energy related and crucially, in genuine critical spirit,
keeping the rather astonishing implications of possible comp, as a
theology/science, to the straight and narrow; which is strong poise of
rigor and precision in any domain. Proofs are not written in stone and game
changers on various levels are what we're after, and I appreciate that
about Brent's posts in general. PGC



Re: MGA revisited paper

2014-08-13 Thread LizR
On 14 August 2014 15:10, Platonist Guitar Cowboy 
wrote:

> On Wed, Aug 13, 2014 at 11:41 PM, LizR  wrote:
>
>> On 14 August 2014 07:35, meekerdb  wrote:
>>
>>> On 8/13/2014 6:26 AM, Bruno Marchal wrote:
>>>

 On 12 Aug 2014, at 11:24, Bruno Marchal wrote:

 You might say that I model mind and belief ([]p), and the whole working
 of a computer, by proof in arithmetic, but with comp, it has to be valid at
 the correct substitution level for the case of correct machines.

 But for consciousness, I model it by knowledge (modal logic S4), which
 I obtained from the Theaetetus' method applied on belief/provability ([]p &
 p). Then I insist that this is not modeled by *anything* you can define
 exclusively with 3p terms.

>>>
>>> Insisting sounds like an attempt to cut off debate.
>>>
>>
>> It would be polite to point out that you realise that isn't what Bruno is
>> doing, as I assume you do realise - i.e. that you are just pointing out a
>> language nuance.
>>
>> Otherwise you sound argumentative, when presumably that isn't your
>> intention.
>>
>
> I'd guess it isn't, issuing from the background that for years Brent has
> provided us with valued posts from my end, especially physics +
> environment/energy related and crucially, in genuine critical spirit,
> keeping the rather astonishing implications of possible comp, as a
> theology/science, to the straight and narrow; which is strong poise of
> rigor and precision in any domain. Proofs are not written in stone and game
> changers on various levels are what we're after, and I appreciate that
> about Brent's posts in general. PGC
>
I guessed it wasn't, but it could certainly sound that way. He appears to
be picking Bruno up for "insisting" on something, and accusing him of
trying to cut off the debate. I guessed that wasn't his intention, but
given that he often appears to be sniping - picking people up on minor
points - I thought it was worthwhile to point out when he does so, on the
assumption that this isn't intentional and it's useful for someone to know
how they come across to others, especially when it isn't as they intended
to.

But then, I haven't been around here for years, so I may still be dealing
with things that others have long since accepted.



Re: MGA revisited paper

2014-08-14 Thread Bruno Marchal


On 13 Aug 2014, at 21:35, meekerdb wrote:


On 8/13/2014 6:26 AM, Bruno Marchal wrote:


On 12 Aug 2014, at 11:24, Bruno Marchal wrote:


Hi,

I think it is better I let you discuss a little bit.

Yes Russell made a nice introduction to this problematic.

Below, I just put a , which you might try to guess from  
my previous post (notably to Brent) on this issue.




On 12 Aug 2014, at 02:48, meekerdb wrote:


On 8/11/2014 4:03 PM, LizR wrote:
I have never got this idea of "counterfactual correctness". It  
seems to be that the argument goes ...


Assume computational process A is conscious
Take process B, which replays A - B passes through the same  
machine states as A, but it doesn't work them out, it's driven  
by a recording of A - B isn't conscious because it isn't  
counterfactually correct.


I can't see how this works. (Except insofar as if we assume  
consciousness doesn't supervene on material processes, then  
neither A nor B is conscious, they are just somehow attached to  
conscious experiences generated elsewhere, maybe by a UD.)


It doesn't work, because it ignores the fact that consciousness  
is about something.  It can only exist in the context of thoughts  
(machine states and processes) referring to a "world"; being part  
of a representational and predictive model.  Without the  
counterfactuals, it's just a sequence of states and not a model  
of anything.  But in order that it be a model it must interact or  
have interacted in the past in order that the model be causally  
connected to the world.  It is this connection that gives meaning  
to the model. Because Bruno is a logician he tends to think of  
consciousness as performing deductive proofs, executing a proof  
in the sense that every computer program is a proof.  He models  
belief as proof.  But this overlooks where the meaning of the  
program comes from.





You might say that I model mind and belief ([]p), and the whole  
working of a computer, by proof in arithmetic, but with comp, it  
has to be valid at the correct substitution level for the case of  
correct machines.


But for consciousness, I model it by knowledge (modal logic S4),  
which I obtained from the Theaetetus' method applied on belief/ 
provability ([]p & p). Then I insist that this is not modeled by  
*anything* you can define exclusively with 3p terms.


Insisting sounds like an attempt to cut off debate.


I insist because I see that people forget this very often, or perhaps  
have not yet realized the impact of the fact that incompleteness makes  
[]p and ([]p & p) obey quite different logics, and have quite  
different natures ([]p is definable, []p & p is not: we can only  
describe it for each individual arithmetical sentence p).
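
Spelled out (a sketch, reading [] as Godel's provability predicate
Bew_PA, and assuming PA is consistent): each instance of the knower is
expressible,

\[ [k]p \;:\equiv\; \mathrm{Bew}_{PA}(\ulcorner p \urcorner) \wedge p, \]

but no single arithmetical formula K(x) can satisfy

\[ PA \vdash K(\ulcorner p \urcorner) \leftrightarrow
   \big(\mathrm{Bew}_{PA}(\ulcorner p \urcorner) \wedge p\big)
   \quad \text{for all sentences } p, \]

since for the diagonal sentence q,

\[ PA \vdash q \leftrightarrow \neg K(\ulcorner q \urcorner)
   \quad\text{would give}\quad
   PA \vdash q \;\text{ and }\; PA \vdash \neg\mathrm{Bew}_{PA}(\ulcorner q \urcorner), \]

contradicting the derivability conditions.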







 You model it by true belief and then you say belief is an attitude,  
which is implicitly a conscious thought or feeling.


Only when you are conscious of believing something. The belief itself  
might be purely representational and not conscious in general. You  
have to distinguish "the machine believes p" (= []p) from the fact  
that such a belief can be known by the entity, which is [][]p & []p,  
that is [k][]p, with [k]p = []p & p. For a Löbian entity we have  
indeed the valid inference: []p / [k][]p.
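
In provability-logic terms (a sketch, taking GL as the logic of []):

\[ [k]q \;:\equiv\; []q \wedge q, \qquad GL \vdash []p \rightarrow [][]p, \]

so from []p one gets [][]p & []p, i.e. [k][]p.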










Indeed that (meta) definition, in the comp-arithmetical frame, makes  
knowledge a non-propositional attitude.


Yet it depends on the proposition being believed being true.


Yes. Why should that be a problem?





I am even led (maybe influenced by salvia) to the idea that  
consciousness is more in the "& p" than in []p. The proof aspect of  
the brain machinery would be a filter of consciousness and memory,  
and consciousness itself is more in the complement of what is  
provable. The meaning comes from truth, not proof.


Which leads to my position, which is that epistemology precedes  
ontology.


Well, it looks like the comp assumption leads to this, with ontology  
being the physical observable. But I am astonished that you say this.  
If epistemology precedes ontology, physicalism is directly false.  It  
might help you axiomatize a little bit your position. I can understand  
that epistemology precedes the physical ontology, but with comp,  
epistemology is defined by the knowledge of the machine, ([]p & p),  
and to define "[]p" you need to assume elementary arithmetic,  
universal numbers, etc.







How do we determine what is true and worthy of belief?


We can't, by incompleteness. But "true" is still well defined, and can  
be defined in set theory. p is true if it is satisfied in the  
mathematical structure (N, +, *) studied in high school and used in  
math and physics all the time.







 I think we can never be certain,


That's a theorem in comp.



so it is a question of degrees of belief and weight of evidence.   
But those don't fit in your scheme which depends on necessity and  
truth.


Only because I define the "probability one", but it is not a certainty.









Re: MGA revisited paper

2014-08-14 Thread Bruno Marchal


On 13 Aug 2014, at 21:47, meekerdb wrote:


On 8/13/2014 7:01 AM, Bruno Marchal wrote:
Does Bruno actually say what he thinks consciousness is? (This is  
probably somewhere beyond the MGA, which is where I tend to get  
stuck...)


When I've asked directly what it would take to make a robot  
conscious, he's said Lobianity.  Essentially it's the ability to  
do proofs by mathematical induction and prove Godel's theorem.   
But "ability" seems to be just in the sense of potential, as a  
Turing machine has the ability to compute anything computable.


That is what you need for your robot being able to be conscious.  
OK. But to be conscious, you need not just the machine/man, but  
some connection with god/truth.


To put it roughly: the believer []p is never conscious, it is the  
knower []p & p who is conscious.  It is very different: []p can be  
defined in arithmetic. []p & p cannot be defined in arithmetic, or  
in the machine's language.


But that's just an abstract definition.  What is the operational  
meaning of "p".


It means true in (N, +, *). This cannot be defined in PA, but you  
don't need to define it in PA to get the needed consequences.
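
For definiteness (a sketch; "true" here means satisfaction in the
standard model, and the example assumes PA is consistent):

\[ \text{``}p\text{ is true''}: \;(\mathbb{N},+,\times) \models p
   \quad\text{(definable in set theory, not in PA -- Tarski)}, \]
\[ []p: \; PA \vdash p
   \quad\text{(definable inside PA by the } \Sigma_1 \text{ formula } \mathrm{Bew}_{PA}), \]

and by incompleteness the two come apart, e.g.
\( (\mathbb{N},+,\times) \models \mathrm{Con}(PA) \) while
\( PA \nvdash \mathrm{Con}(PA) \).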





If consciousness depends on knowing and knowing depends on my belief  
being true, then I will be unconscious if my belief is mistaken.


Not necessarily, because although your belief is false, you can still  
have the true belief that you believe it. []p can be false, yet [k][]p  
can be true. That would be the case in a dream, for example. You  
believe that you can walk on water (false), but you believe also that  
you believe that you can walk and that belief is true, so you are  
conscious in the dream, even if the belief that you can walk on water  
is false.


I recall that [k]x = []x & x.


That makes no sense.  Consciousness obviously does not depend on "&  
p".  In my view consciousness is creating an internal model of the  
world.


That is what []p does. It is related to the 1p consciousness of that  
belief through [k][]p




The model includes propositions "p" which are more or less true  
depending on their correspondence with the world.


Which world? The arithmetical reality, or a primitive physical world?




Operationally this means they have consistency and predictive power.


That works for unconscious belief too. You need truth to make it into  
knowledge (which is not certainty).

A weak form of certainty needs the "& <>p" or equivalently "& <>t".

Bruno







Brent



http://iridia.ulb.ac.be/~marchal/





Re: MGA revisited paper

2014-08-14 Thread Bruno Marchal


On 14 Aug 2014, at 05:10, Platonist Guitar Cowboy wrote:





On Wed, Aug 13, 2014 at 11:41 PM, LizR  wrote:
On 14 August 2014 07:35, meekerdb  wrote:
On 8/13/2014 6:26 AM, Bruno Marchal wrote:

On 12 Aug 2014, at 11:24, Bruno Marchal wrote:

You might say that I model mind and belief ([]p), and the whole  
working of a computer, by proof in arithmetic, but with comp, it has  
to be valid at the correct substitution level for the case of  
correct machines.


But for consciousness, I model it by knowledge (modal logic S4),  
which I obtained from the Theaetetus' method applied on belief/ 
provability ([]p & p). Then I insist that this is not modeled by  
*anything* you can define exclusively with 3p terms.


Insisting sounds like an attempt to cut off debate.

It would be polite to point out that you realise that isn't what  
Bruno is doing, as I assume you do realise - i.e. that you are just  
pointing out a language nuance.


Otherwise you sound argumentative, when presumably that isn't your  
intention.


I'd guess it isn't, issuing from the background that for years Brent  
has provided us with valued posts from my end, especially physics +  
environment/energy related and crucially, in genuine critical  
spirit, keeping the rather astonishing implications of possible  
comp, as a theology/science, to the straight and narrow; which is  
strong poise of rigor and precision in any domain. Proofs are not  
written in stone and game changers on various levels are what we're  
after, and I appreciate that about Brent's posts in general. PGC


I agree with you in general, but I can agree a little bit with Liz  
too, as I find Brent slightly sneaky on this issue, but all in all  
Brent is rather polite and seems sincere. Yet his criticism (of step 8)  
is not that clear. But then that is why we discuss. Anyone seeing  
Brent's point can help to make it clearer.


Bruno









http://iridia.ulb.ac.be/~marchal/





Re: MGA revisited paper

2014-08-14 Thread Russell Standish
On Thu, Aug 14, 2014 at 09:59:31AM +0200, Bruno Marchal wrote:
> 
> 
> > A human being or any physical system reacts to the world in one
> >way or another.  What was asked for was counterfactual
> >correctness, i.e. that the MG reacts the same as would the
> >conscious being emulated - which might be no change at all.
> 
> I agree with you. The counterfactualness needs "If we change the
> input then the output will change in the relevant way".  But I am
> not sure that we need the actual (physical) counterfactual behavior.
> I might differ from Russell here.
> 

Counterfactual correctness is needed as part of the computational
supervenience thesis in order to forbid supervenience on recordings.

Physical supervenience is something observed. Again, in order to
prevent supervenience on physical recordings, actual physical
counterfactuality is required - which is basically the quantum nature
of physical reality.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-14 Thread Russell Standish
On Thu, Aug 14, 2014 at 10:25:40AM +0200, Bruno Marchal wrote:
> 
> I agree with you in general, but I can agree a little bit with Liz
> too, as I find Brent slightly sneaky on this issue, but all in all
> Brent is rather polite and seems sincere. Yet his criticism (of step
> 8) is not that clear. But then that is why we discuss. Anyone seeing
> Brent's point can help to make it clearer.
> 

His point is that he doesn't believe input-free computations can be
conscious - there must always be some referent to the environment
(which is noisy, counterfactual, etc). If so, it prevents the MGA, and
Maudlin's argument, from working.

I guess for Brent that even dream states still have some referent to
the environment, even if it be some sort of random synaptic noise.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-14 Thread Pierz
Thanks Russell for this very concise, comprehensible summary. I loved that 
you managed to condense Maudlin's argument into three brief paragraphs 
which made perfect sense. I remember when I first read his paper I lost 
sight of the wood for the trees and completely misunderstood the ultimate 
point he was trying to make. Your summary spells it out very lucidly. I did 
understand this whole step 8 business very clearly at one point and then 
more recently when I tried to explain it to someone I realised it had 
slipped from my grasp once more! Thanks to you I now have it firmly in my 
clutches again. Cheers.

On Monday, August 11, 2014 8:38:00 AM UTC+10, Russell Standish wrote:
>
> As long, long time promised, I now have a draft of my "MGA revisited" 
> paper for critical comment. I have uploaded this to my blog, which 
> gives people the ability to attach comments. 
>
> http://www.hpcoders.com.au/blog/?p=73 
>
> Whilst I'm happy I now understand the issue, I still not happy with 
> how I've expressed it - the text could still do with some work. 
>
> So let the games begin! 
>
> -- 
>
>  
>
> Prof Russell Standish  Phone 0425 253119 (mobile) 
> Principal, High Performance Coders 
> Visiting Professor of Mathematics  hpc...@hpcoders.com.au 
>  
> University of New South Wales  http://www.hpcoders.com.au 
>
>  Latest project: The Amoeba's Secret 
>  (http://www.hpcoders.com.au/AmoebasSecret.html) 
>  
>
>



Re: MGA revisited paper

2014-08-14 Thread Pierz
Liz did you ever get to grips with the counterfactuals business? In case 
not, the way I would summarize it is this. Consider a computer game in 
which you fly through some 3D landscape. The game is "intelligent" because 
it can respond to whatever you do with the controls. If you fly in any 
direction, it accurately changes the rendered environment to show you what 
you see from the new position. The intuition behind computationalism is 
that this intelligent responsiveness is what defines consciousness. But now 
I can imagine recording someone's flight through the game environment and 
replaying it. If I watched the recording, thinking I was playing the game, 
and moved the controls in just such a way that the recording showed me the 
right scenes by pure chance, I would think the computer was being 
intelligent, when in fact it wasn't. Even though it moves through the 
correct  sequence of visual states, the recording has no intelligent 
capacity to deal with counterfactuals, i.e., the possibility of my moving 
the controls in some other way from the ones that happen to coincide with 
the recording's visuals. So consciousness can't supervene on the mere 
sequence of physical states. It has to supervene on more than that, the 
actual (abstract) computation including counterfactuals. Maybe you already 
got all that, but I thought I'd spell it out... ;)
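
If it helps, here is a minimal sketch of that distinction in Python
(hypothetical toy classes, not anything from Russell's paper): a
process that computes its response is counterfactually correct, while
a replay of a recording only agrees with it on the one input history
it was made from.

# A toy "engine" that computes its output from whatever input it gets.
class Computing:
    def respond(self, control):
        return f"scene-for-{control}"   # stands in for the real rendering

# A replay of one recorded session: it ignores the controls entirely.
class Replay:
    def __init__(self, recorded_controls):
        engine = Computing()
        self.frames = [engine.respond(c) for c in recorded_controls]
        self.t = 0
    def respond(self, control):         # 'control' is never consulted
        frame = self.frames[self.t]
        self.t += 1
        return frame

recorded = ["up", "left", "left"]
live = Computing()
tape = Replay(recorded)

# On the recorded input history the two are indistinguishable:
for c in recorded:
    assert live.respond(c) == tape.respond(c)

# On a counterfactual input they diverge: the live process handles it,
# the replay just plays back whatever frame comes next.
fresh_tape = Replay(recorded)
print(live.respond("right"))        # scene-for-right
print(fresh_tape.respond("right"))  # scene-for-up  (it ignored the control)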

On Tuesday, August 12, 2014 9:22:44 AM UTC+10, Liz R wrote:
>
> Having now read the paper, ISTM that the "counterfactual" part of the 
> argument is the only part that I *really* don't get. Or rather ISTM that 
> it demonstrates that consciousness can't supervene on physical 
> computational states, because those states can't know anything about these 
> counterfactuals, which by definition don't happen. Then again, I also have 
> some trouble with the multiverse part. A MV "is" a quantum computer? How do 
> we know that, without even knowing the laws of physics? Is this something 
> to do with Feynman's idea about a QC as something that could perform exact 
> physical simulations? (if I got that right)
>
>
> On 12 August 2014 11:03, LizR wrote:
>
>> I have never got this idea of "counterfactual correctness". It seems to 
>> be that the argument goes ...
>>
>> Assume computational process A is conscious
>> Take process B, which replays A - B passes through the same machine 
>> states as A, but it doesn't work them out, it's driven by a recording of A 
>> - B isn't conscious because it isn't counterfactually correct.
>>
>> I can't see how this works. (Except insofar as if we assume consciousness 
>> doesn't supervene on material processes, then neither A nor B is conscious, 
>> they are just somehow attached to conscious experiences generated 
>> elsewhere, maybe by a UD.)
>>
>>
>>
>>
>> On 12 August 2014 09:40, LizR wrote:
>>
>>> Got it, thanks. Not too long so I will be able to read it in the near 
>>> future :-)
>>>
>>> I hope that is just an honest mistake, Bruno, and no one has been 
>>> messing with your email deliberately. Do you have another email you can 
>>> use? (e.g. a GMail one) 
>>>
>>>
>>>> On 11 August 2014 20:43, Bruno Marchal wrote:
>>>
>>>>
>>>> On 11 Aug 2014, at 06:42, Russell Standish wrote:
>>>>
>>>>  Apologies to everybody. For some reason, when I clicked "publish",
>>>>> Wordpress posted an earlier draft of the post, not the most recent one
>>>>> I was working on.
>>>>>
>>>>> I have now restored the correct version of the post - follow the link
>>>>> "Draft paper here" to find the paper.
>>>>>
>>>>
>>>>
>>>> I got it. I will read it.
>>>>
>>>> ...
>>>>
>>>> It looks now, that I have lost the ability to read my mails. Apparently 
>>>> someone deleted my password at my ULB account. It might take some time 
>>>> before I can read my mail again.
>>>>
>>>> Sorry. It is a  good thing that I got your text before this happened. I 
>>>> might soon been unable to send message, too.
>>>>
>>>> Bruno
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>  
>>>>> Cheers
>>>>>
>>>>>
>>>>> On Sun, Aug 10, 2014 at 08:08:55PM -0700, meekerdb wrote:
>>>>>
>>>>>> On 8/10/2014 3:38 PM, Russell Standish wrote:
>

Re: MGA revisited paper

2014-08-14 Thread Pierz


On Tuesday, August 12, 2014 1:12:10 PM UTC+10, Brent wrote:
>
>  On 8/11/2014 7:29 PM, LizR wrote:
>  
>  On 12 August 2014 12:48, meekerdb wrote:
>
>> On 8/11/2014 4:03 PM, LizR wrote:
>>
>>> I have never got this idea of "counterfactual correctness". It seems to 
>>> be that the argument goes ...
>>>
>>> Assume computational process A is conscious
>>> Take process B, which replays A - B passes through the same machine 
>>> states as A, but it doesn't work them out, it's driven by a recording of A 
>>> - B isn't conscious because it isn't counterfactually correct.
>>>
>>> I can't see how this works. (Except insofar as if we assume 
>>> consciousness doesn't supervene on material processes, then neither A nor B 
>>> is conscious, they are just somehow attached to conscious experiences 
>>> generated elsewhere, maybe by a UD.)
>>>
>>
>>  It doesn't work, because it ignores the fact that consciousness is about 
>> something. It can only exist in the context of thoughts (machine states and 
>> processes) referring to a "world"; being part of a representational and 
>> predictive model.  Without the counterfactuals, it's just a sequence of 
>> states and not a model of anything.  But in order that it be a model it 
>> must interact or have interacted in the past in order that the model be 
>> causally connected to the world.  It is this connection that gives meaning 
>> to the model.
>
>
>  What differentiates A and B, given that they use the same machine 
> states? How can A be more about something than B? Or to put it another way, 
> what is the "meaning" that makes A conscious, but not B?
>   
>
> A makes decisions in response to the world.  Although, ex hypothesi, the 
> world is repeating its inputs and A is repeating his decisions.  Note that 
> this assumes QM doesn't apply at the computational level of A.  In the 
> argument we're asked to consider a dream so that we're led to overlook the 
> fact that the meaning of A's internal processes actually derives from A's 
> interaction with a world.  Imagine A as being born and living in a sensory 
> deprivation tank - will A be conscious?  I think not.  
>

That is a weird assumption to me and completely contrary to my own 
intuition. Certainly a person born and kept alive in sensory deprivation 
will be extremely limited in the complexity of the mental states s/he can 
develop, but I would certainly expect that such a person would have 
consciousness, ie., that there is something it would be like to be such a 
person. Indeed I expect that such a person would suffer horribly. Such a 
conclusion requires no mystical view of consciousness. It is based purely 
on biology - we are programmed with biological expectations/predispositions 
which when not met, cause us to suffer. As much as the brain can't be 
separated completely from other matter, it *does* seem to house 
consciousness in a semi-autonomous fashion.

Indeed I am puzzled by your insistence on consciousness deriving from 
relationships with the world, given you seem to be a reductionist 
materialist. In a reductionist view, such relationships don't have any 
intrinsic meaning, so how is it that the presence or absence of such 
relationships can make the difference between "having an experience" and 
"not having an experience"? What turns the light on as it were, turning the 
zombie into the human, the robot into the "real boy" (guess you've seen the 
movie?)? The fact that its internal states are meaningfully correlated to 
some "world", whatever that is? Such a correlation might define the 
difference between adaptive and non-adaptive functioning, but how does that 
distinction instantiate consciousness (or not)? 

OK so that is back to "hard problem", which for people who are 
fundamentally interested in engineering is also the "uninteresting problem" 
or the "pointlessly distracting problem". For me, software engineer by 
trade, philosopher/psychologist/tripper by nature, it's the very other way 
around. The hard problem deeply troubled me even as a kid. I still find it 
difficult to comprehend those whom it doesn't bother, or who can't even see 
it, as if they're colour blind or something, but I've come to understand 
their practical perspective. Still, to call it "uninteresting" (I don't 
know if you do) is not to make an objective statement, it's merely to 
assert the sphere of one's interest - 3p rather than 1p in the local 
vernacular. 

 

> But in Bruno's and Maudlin's thought experiments A might be. A could be 
> aware of Peano's axioms and could prove all provable theorems plus Godel's 
> incompleteness. 
>
> 
>  
>> Because Bruno is a logician he tends to think of consciousness as 
>> performing deductive proofs, executing a proof in the sense that every 
>> computer program is a proof.  He models belief as proof.  But this 
>> overlooks where the meaning of the program comes from.  People that want to 
deny computers can be conscious point out that the meaning comes from the programmer.

Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 1:09 AM, Bruno Marchal wrote:


On 13 Aug 2014, at 21:47, meekerdb wrote:


On 8/13/2014 7:01 AM, Bruno Marchal wrote:
Does Bruno actually say what he thinks consciousness is? (This is probably somewhere 
beyond the MGA, which is where I tend to get stuck...)


When I've asked directly what it would take to make a robot conscious, he's said 
Lobianity.  Essentially it's the ability to do proofs by mathematical induction and 
prove Godel's theorem.  But "ability" seems to be just in the sense of potential, as 
a Turing machine has the ability to compute anything computable.


That is what you need for your robot being able to be conscious. OK. But to be 
conscious, you need not just the machine/man, but some connection with god/truth.


To put it roughly: the believer []p is never conscious, it is the knower []p & p who is 
conscious.  It is very different: []p can be defined in arithmetic. []p & p cannot be 
defined in arithmetic, or in the machine's language.


But that's just an abstract definition.  What is the operational meaning of "p".


It means true in (N, +, *).


That's not operational.  The only operational meaning of true derivable in (N, +, *) is 
true=provable, but it's essential to your theory that there are true and unprovable 
propositions.  You can believe there are such propositions and prove that there must be 
one, but can you actually produce one?  In other words it seems you can get []p, and 
[][]p, and [][][]p... but you can't get to p.


This cannot be defined in PA, but you don't need to define it in PA, to get the needed 
consequences.





If consciousness depends on knowing and knowing depends on my belief being true, then I 
will be unconscious if my belief is mistaken.


Not necessarily, because although your belief is false, you can still have the true 
belief that you believe it.


Yes that's [][]p & []p.  But people who believe the Earth is flat are not believing that 
they believe the Earth is flat.  Yet they are conscious. Yet it seems that []p & p, where 
p=f, implies one is unconscious.  I don't think consciousness depends on knowing (as 
defined by Theaetetus).  Does mere belief, []p, already require consciousness? Or if you 
allow unconscious belief what does it add to require that they be true?


[]p can be false, yet [k][]p can be true. That would be the case in a dream, for 
example. You believe that you can walk on water (false), but you believe also that you 
believe that you can walk and that belief is true, so you are conscious in the dream, 
even if the belief that you can walk on water is false.


I recall that [k]x = []x & x.


That makes no sense.  Consciousness obviously does not depend on "& p".  In my view 
consciousness is creating an internal model of the world.


That is what []p does. It is related to the 1p consciousness of that belief 
through [k][]p



The model includes propositions "p" which are more or less true depending on their 
correspondence with the world.


Which world? The arithmetical reality, or a primitive physical world?


The physical world that is necessary for consciousness.  Although you said we agreed in 
the last post, you revert to assuming that physical=primitive physical and that 
arithmetic=reality.  I thought what we agreed was that in order for there to be 
consciousness there must be some kind or level of physical world that provides a context.  
This is what I might refer to as "our reality", allowing that there might be other kinds 
(although I doubt it), which is not everything in arithmetic.


Brent



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 1:41 AM, Russell Standish wrote:

On Thu, Aug 14, 2014 at 10:25:40AM +0200, Bruno Marchal wrote:

I agree with you in general, but I can agree a little bit with Liz
too, as I find Brent slightly sneaky on this issue, but all in all
Brent is rather polite and seems sincere. Yet his critics (of step
8) is not that clear. But then that is why we discuss. Anyone seeing
Brent's point can help to make it clearer.


His point is that he doesn't believe input-free computations can be
conscious - there must always be some referent to the environment
(which is noisy, counterfactual, etc).


Right.


If so, it prevents the MGA, and
Maudlin's argument, from working.

I guess for Brent that even dream states still have some referent to
the environment, even if it be some sort of random synaptic noise.


I think it's pretty obvious that dreams have external referents. Don't your dreams have 
people and places and objects in them that you recognize as such?


I think the sharper question is whether there are referents when you think of numbers, 
when you do number theory proofs - essentially it's the question of Platonism.  Do 
arithmetic and Turing machines 'exist' apart from brains that think about them?  Does 
putting "..." really justify inferences about infinite processes?  Or, on a more 
philosophical level, if everything exists, does "exists" have any meaning?


Brent



Re: MGA revisited paper

2014-08-14 Thread Platonist Guitar Cowboy
On Thu, Aug 14, 2014 at 7:59 PM, meekerdb  wrote:

> On 8/14/2014 1:41 AM, Russell Standish wrote:
>
>> On Thu, Aug 14, 2014 at 10:25:40AM +0200, Bruno Marchal wrote:
>>
>>> I agree with you in general, but I can agree a little bit with Liz
>>> too, as I find Brent slightly sneaky on this issue, but all in all
>>> Brent is rather polite and seems sincere. Yet his critics (of step
>>> 8) is not that clear. But then that is why we discuss. Anyone seeing
>>> Brent's point can help to make it clearer.
>>>
>>>  His point is that he doesn't believe input free computations can be
>> conscious - there must always be some referrent to the environment
>> (which is noisy, counterfactual, etc).
>>
>
> Right.


Then it'd be no problem for you guys to clearly spell out what that
environment is.


>
>
>  If so, it prevents the MGA, and
>> Maudlin's argument, from working.
>>
>> I guess for Brent that even dream states still have some referrent to
>> the environment, even if it be some sort of random synaptic noise.
>>
>
> I think it's pretty obvious that dreams have external referents. Don't
> your dreams have people and places and objects in them that you recognize
> as such?
>
> I think the sharper question is whether there are referents when you think
> of numbers, when you do number theory proofs - essentially it's the
> question of Platonism.  Does arithmetic and Turing machine 'exist' apart
> from brains that think about them?  Does putting "..." really justify
> inferences about infinite processes?  Or on a more philosophical level, if
> everything exists does "exists" have any meaning?
>

So you are sniping away at step 0 in the context of discussing step 8. That
is weird, because that is a philosophy-at-the-bar discussion concerning
ultimate questions rather than the thread's focus of discussion, it would
seem.

My bad, my mistake, Liz. It seems you were right. PGC



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 6:45 AM, Pierz wrote:



On Tuesday, August 12, 2014 1:12:10 PM UTC+10, Brent wrote:

On 8/11/2014 7:29 PM, LizR wrote:

On 12 August 2014 12:48, meekerdb wrote:

On 8/11/2014 4:03 PM, LizR wrote:

I have never got this idea of "counterfactual correctness". It 
seems to be
that the argument goes ...

Assume computational process A is conscious
Take process B, which replays A - B passes through the same machine 
states
as A, but it doesn't work them out, it's driven by a recording of A 
- B
isn't conscious because it isn't counterfactually correct.

I can't see how this works. (Except insofar as if we assume 
consciousness
doesn't supervene on material processes, then neither A nor B is 
conscious,
they are just somehow attached to conscious experiences generated
elsewhere, maybe by a UD.)


It doesn't work, because it ignores the fact that consciousness is about
something. It can only exist in the context of thoughts (machine states 
and
processes) referring to a "world"; being part of a representational and
predictive model.  Without the counterfactuals, it's just a sequence of 
states
and not a model of anything.  But in order to be a model it must interact, 
or have interacted in the past, so that the model is causally 
connected to the world.  It is this connection that gives meaning to the model.


What differentiates A and B, given that they use the same machine states? 
How can A
be more about something than B? Or to put it another way, what is the 
"meaning"
that makes A conscious, but not B?


A makes decisions in response to the world.  Although, ex hypothesi, the 
world is
repeating its inputs and A is repeating his decisions.  Note that this 
assumes QM
doesn't apply at the computational level of A.  In the argument we're asked 
to
consider a dream so that we're led to overlook the fact that the meaning of 
A's
internal processes actually derives from A's interaction with a world.  
Imagine A as
being born and living in a sensory deprivation tank - will A be conscious?  
I think
not.


That is a weird assumption to me and completely contrary to my own intuition. Certainly 
a person born and kept alive in sensory deprivation will be extremely limited in the 
complexity of the mental states s/he can develop, but I would certainly expect that such 
a person would have consciousness, ie., that there is something it would be like to be 
such a person. Indeed I expect that such a person would suffer horribly. Such a 
conclusion requires no mystical view of consciousness. It is based purely on biology - 
we are programmed with biological expectations/predispositions which when not met, cause 
us to suffer. As much as the brain can't be separated completely from other matter, it 
*does* seem to house consciousness in a semi-autonomous fashion.


So how did you suffer in the womb?



Indeed I am puzzled by your insistence on consciousness deriving from relationships with 
the world, given you seem to be a reductionist materialist. In a reductionist view, such 
relationships don't have any intrinsic meaning, so how is it that the presence or 
absence of such relationships can make the difference between "having an experience" and 
"not having an experience"? What turns the light on as it were, turning the zombie into 
the human, the robot into the "real boy" (guess you've seen the movie?)? The fact that 
its internal states are meaningfully correlated to some "world", whatever that is? Such 
a correlation might define the difference between adaptive and non-adaptive functioning, 
but how does that distinction instantiate consciousness (or not)?


OK so that is back to "hard problem", which for people who are fundamentally interested 
in engineering is also the "uninteresting problem" or the "pointlessly distracting 
problem".


I don't think it's uninteresting, I think it's unsolvable because it's demanding an 
explanation and at the same time ruling out any explanation, because it rejects the 
engineering-level explanation. Yet the engineering-level explanation is the one we praise 
and accept as the gold standard in every other field.  In fact one of the things I like 
about Bruno's theory is that it can prove, within the computational paradigm, exactly what 
is unsolvable about the hard problem and why.


Within a materialist/evolutionist model it is also clear why it is unsolvable, why we 
cannot experience the brain processes that produce experience of the world. It would be an 
irrelevant, useless and wasteful use of brain resources at best and would be selected 
against.  At worst it might produce confusion and instability in thought processes.  I 
think it is really only through language and symbolic thought that the "hard problem" can 
be formulated.

Re: MGA revisited paper

2014-08-14 Thread Platonist Guitar Cowboy
On Thu, Aug 14, 2014 at 8:51 PM, meekerdb  wrote:

>
> So how did you suffer in the womb?
>
>
>
>  Indeed I am puzzled by your insistence on consciousness deriving from
> relationships with the world, given you seem to be a reductionist
> materialist. In a reductionist view, such relationships don't have any
> intrinsic meaning, so how is it that the presence or absence of such
> relationships can make the difference between "having an experience" and
> "not having an experience"? What turns the light on as it were, turning the
> zombie into the human, the robot into the "real boy" (guess you've seen the
> movie?)? The fact that its internal states are meaningfully correlated to
> some "world", whatever that is? Such a correlation might define the
> difference between adaptive and non-adaptive functioning, but how does that
> distinction instantiate consciousness (or not)?
>
>  OK so that is back to "hard problem", which for people who are
> fundamentally interested in engineering is also the "uninteresting problem"
> or the "pointlessly distracting problem".
>
>
> I don't think it's unintersting, I think it's unsolvable becuase it's
> demanding an explanation and at the same time ruling out any explanation
> because it rejects the engineering level explanation.  Yet the engineering
> level explanation is the one we praise and accept as the gold standard in
> every other field.  In fact one of the things I like about Bruno's theory
> is that it can prove within the computational paradigm exactly what it
> unsolvable about the hard problem and why.
>
> Within a materialist/evolutionist model it is also clear why it is
> unsolvable, why we cannot experience the brain processes that produce
> experience of the world. It would be an irrelevant and useless and wasteful
> use of brain resources at best and would be selected against.  At worst it
> might produced confusion and instability in thought processes.  I think it
> is really only through language and symbolic thought that the "hard
> problem" can be formulated.
>

Linguistically sounds like you have a "war on x problem" which is why I
think this has left step 8 and is more firmly rooted in denying step 0 at
the coffee house or bar, and prohibiting it like we usually do. PGC



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 11:40 AM, Platonist Guitar Cowboy wrote:




On Thu, Aug 14, 2014 at 7:59 PM, meekerdb wrote:


On 8/14/2014 1:41 AM, Russell Standish wrote:

On Thu, Aug 14, 2014 at 10:25:40AM +0200, Bruno Marchal wrote:

I agree with you in general, but I can agree a little bit with Liz
too, as I find Brent slightly sneaky on this issue, but all in all
Brent is rather polite and seems sincere. Yet his critics (of step
8) is not that clear. But then that is why we discuss. Anyone seeing
Brent's point can help to make it clearer.

His point is that he doesn't believe input free computations can be
conscious - there must always be some referrent to the environment
(which is noisy, counterfactual, etc).


Right.


Then it'd be no problem for you guys to clearly spell out what that environment 
is.


Yes, that's a problem.  The MGA considers a computational sequence that produces some 
conscious thought.  I think that in order for the computational sequence to have meaning 
it must refer to some context in which decision or action is possible.  That's what makes 
it about something and not just a sequence of events.  I initially thought of it in terms 
of the extra states that had to be available for counterfactual correctness in response to 
an external environment, e.g. seeing something, or having a K_40 atom decay in your brain. 
But now I think the necessity of reference is different from counterfactual correctness.  
For example, if you had a recording of the computations of an autonomous Mars Rover, it 
wouldn't really constitute a computation, because the recording would not have the 
possibility of branching in response to inputs.  And the inputs wouldn't necessarily be 
external: at a different state of the Rover's learning, the same sequence might have 
triggered a different association from memory.  So the referents are not necessarily just 
external, they include all of memory as well.
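
As a toy illustration of that last point (a sketch only, with made-up names, not anything
from an actual rover): the live computation consults its input and its memory at each
step, so the same input stream can yield a different trace at a different stage of
learning, whereas a recording of one run fixes the outputs once and for all.

def rover_step(reading, memory):
    """One step of the 'live' computation: the output depends on the input AND on memory."""
    if reading in memory["seen"]:
        action = "recall"              # a learned association from earlier experience
    else:
        memory["seen"].add(reading)
        action = "explore"
    return action

memory = {"seen": set()}
inputs = [3, 7, 3]
live_trace = [rover_step(r, memory) for r in inputs]   # -> ['explore', 'explore', 'recall']

# A recording of that run is just the fixed list of outputs:
recording = list(live_trace)

# At a different stage of learning the same inputs give a different trace...
memory_later = {"seen": {3, 7}}
assert [rover_step(r, memory_later) for r in inputs] != recording
# ...but the recording, having no branches left to consult, could never reflect that.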





If so, it prevents the MGA, and
Maudlin's argument, from working.

I guess for Brent that even dream states still have some referrent to
the environment, even if it be some sort of random synaptic noise.


I think it's pretty obvious that dreams have external referents. Don't your 
dreams
have people and places and objects in them that you recognize as such?

I think the sharper question is whether there are referents when you think 
of
numbers, when you do number theory proofs - essentially it's the question of
Platonism.  Does arithmetic and Turing machine 'exist' apart from brains 
that think
about them?  Does putting "..." really justify inferences about infinite 
processes?
 Or on a more philosophical level, if everything exists does "exists" have 
any meaning?


So you are sniping away at step 0 in the context of discussing step 8.


"Sniping" is pejorative.  Is there some rule I can't question step 0?  Unlike JKC I'm 
generally willing to take something I doubt as a hypothetical to see where it leads.  But 
that doesn't mean I can never go back.


That is weird because this is a philosophy discussion at a bar  concerning ultimate 
questions rather than the thread's focus of discussion it would seem.


Seems it's devolved into a discussion of the style of discussion.

Brent



My bad, my mistake, Liz. It seems you were right. PGC




Re: MGA revisited paper

2014-08-14 Thread Platonist Guitar Cowboy
Sniping may have a sophisticated pejorative connotation in English, but it
also happens to be appropriate for provoking responses, or pushing people's
buttons on fundamental questions, while pretending to discuss some other
topic, like MGA in our case.

For two weeks now, I've read what appear to be not so subtle jabs taken at
mystic, platonic, immaterialist kinds of positions, with you always going
"what?" when queried. First I reacted, guessing you were being grumpy...
then everything carried on business as usual, until gradually the image
emerged that I'm not the only one who has felt this lately or is being
oversensitive/defensive. Doesn't mean I'm right on this, it just underlines
that there might be more to your posts of late than meets the eye of
"devolved discussion".

If you want to roll out the fundamental discussion yet again, then it would
be kind to let someone as obtuse as yours truly know, as I try to parse
your lines in this thread as relating to MGA and repeatedly hit walls. It's
not a transgression of some rule; it's, as Liz states, "a politeness thing",
which you usually observe. PGC


On Thu, Aug 14, 2014 at 11:29 PM, meekerdb  wrote:

>  On 8/14/2014 11:40 AM, Platonist Guitar Cowboy wrote:
>
>
>
>
> On Thu, Aug 14, 2014 at 7:59 PM, meekerdb  wrote:
>
>> On 8/14/2014 1:41 AM, Russell Standish wrote:
>>
>>> On Thu, Aug 14, 2014 at 10:25:40AM +0200, Bruno Marchal wrote:
>>>
 I agree with you in general, but I can agree a little bit with Liz
 too, as I find Brent slightly sneaky on this issue, but all in all
 Brent is rather polite and seems sincere. Yet his critics (of step
 8) is not that clear. But then that is why we discuss. Anyone seeing
 Brent's point can help to make it clearer.

  His point is that he doesn't believe input free computations can be
>>> conscious - there must always be some referrent to the environment
>>> (which is noisy, counterfactual, etc).
>>>
>>
>>  Right.
>
>
>  Then it'd be no problem for you guys to clearly spell out what that
> environment is.
>
>
> Yes, that's a problem.  The MGA considers a computational sequence that
> produces some conscious thought.  I think that in order for the
> computational sequence to have meaning it must refer to some context in
> which decision or action is possible.
>

You repeatedly beg the question here by appealing to some unspecified
context which, it seems, isn't Turing emulable, especially with the force
with which you insist on it these days.


>   That's what makes it about something and not just a sequence of events.
> I initially thought of it in terms of the extra states that had to be
> available for counterfactual correctness in response to an external
> environment, e.g. seeing something, having a K_40 atom decay in your brain.
> But now I've think the necessity of reference is different than
> counterfactual correctness.  For example if you had a recording of the
> computations of an autonomous Mars Rover they wouldn't really constitute a
> computation because the recording would not have the possibility of
> branching in response to inputs.  And the inputs wouldn't necessarily be
> external, at a different state of the Rover's learning the same sequence
> might have triggered a different association from memory.  So the referents
> are not necessarily just external, they include all of memory as well.
>

That doesn't seem to be a problem for MGA reasoning on comp supervenience
though, although I'd have to reflect on it and read what you or others may
reply elaborating on this matter. Without trivializing it, and indeed seeing
this as a strength of comp, I'd chalk up the necessity of reference you
posit as part and parcel of any universal number observing a vast
arithmetic. This would not be "anything goes-Land", because 4 is not prime,
and such a property would hold unless you do some really funky moves. PGC



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 01:15, Pierz  wrote:

> Liz did you ever get to grips with the counterfactuals business? In case
> not, the way I would summarize it is this. Consider a computer game in
> which you fly through some 3D landscape. The game is "intelligent" because
> it can respond to whatever you do with the controls. If you fly in any
> direction, it accurately changes the rendered environment to show you what
> you see from the new position. The intuition behind computationalism is
> that this intelligent responsiveness is what defines consciousness. But now
> I can imagine recording someone's flight through the game environment and
> replaying it. If I watched the recording, thinking I was playing the game,
> and moved the controls in just such a way that the recording showed me the
> right scenes by pure chance, I would think the computer was being
> intelligent, when in fact it wasn't. Even though it moves through the
> correct  sequence of visual states, the recording has no intelligent
> capacity to deal with counterfactuals, i.e., the possibility of my moving
> the controls in some other way from the ones that happen to coincide with
> the recording's visuals. So consciousness can't supervene on the mere
> sequence of physical states. It has to supervene on more than that, the
> actual (abstract) computation including counterfactuals. Maybe you already
> got all that, but I thought I'd spell it out... ;)
>
OK, so this is an argument that consciousness can't supervene on physical
states? Presumably Brent's suggestion that consciousness needs
environmental interaction to exist is similar to this? (You need the
environment to provide the counterfactualness, I assume).

Please spell things out! I have found quite a few times that an apparent
disagreement turns out to be an agreement with linguistic problems, as it
were.



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 01:45, Pierz  wrote:

>
> OK so that is back to "hard problem", which for people who are
> fundamentally interested in engineering is also the "uninteresting problem"
> or the "pointlessly distracting problem". For me, software engineer by
> trade, philosopher/psychologist/tripper by nature, it's the very other way
> around. The hard problem deeply troubled me even as a kid. I still find it
> difficult to comprehend those whom it doesn't bother, or who can't even see
> it, as if they're colour blind or something
>
> Maybe they're philosophical zombies :-)

(Actually they think we all are, as far as I can tell.)



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 06:51, meekerdb  wrote:

>  On 8/14/2014 6:45 AM, Pierz wrote:
>
>  That is a weird assumption to me and completely contrary to my own
> intuition. Certainly a person born and kept alive in sensory deprivation
> will be extremely limited in the complexity of the mental states s/he can
> develop, but I would certainly expect that such a person would have
> consciousness, ie., that there is something it would be like to be such a
> person. Indeed I expect that such a person would suffer horribly. Such a
> conclusion requires no mystical view of consciousness. It is based purely
> on biology - we are programmed with biological expectations/predispositions
> which when not met, cause us to suffer. As much as the brain can't be
> separated completely from other matter, it *does* seem to house
> consciousness in a semi-autonomous fashion.
>
> So how did you suffer in the womb?
>

But there's a lot of environmental interaction in the womb. You're
undercutting your own case! Turning it around 180 degrees, it would make more sense to
claim that consciousness requires an environment because even before we're
born we're already getting plenty of stimuli. You need to imagine a person
put into an artificial womb with no light or sound etc from the moment they
start to develop a nervous system, and consider whether that person would
be conscious.



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 3:19 PM, Platonist Guitar Cowboy wrote:
Sniping may have sophisticated pejorative connotation in English, but it also happens to 
be appropriate for provoking responses, or pushing people's buttons on fundamental 
question, while pretending to discuss some other topic, like MGA in our case.


For two weeks now, I've read what appear to be not so subtle jabs taken at mystic, 
platonic, immaterialist kind of position; with you always going "what?", when queried.


So does it offend you that I take jabs at mysticism and platonism? I thought I'd been 
quite up front that I think some things happen and some don't and if you're going to 
explain the world you need to explain why this and not that.  That's why I asked Bruno if 
he thought the UD could produce a Newtonian world.


Incidentally, I see plenty of jabs at materialism on this list.

First I reacted, guessing you were being grumpy... then everything kept on business as 
usual, until gradually the image emerged that I'm not the only one who has felt this 
lately or is being oversensitive/defensive. Doesn't mean I'm right on this, just 
underlines that there might me more to your posts of late than meets the eye of 
"devolved discussion".


If you want to roll out the fundamental discussion yet again, than it would be kind to 
let someone as obtuse as yours truly know, as I try to parse your lines in this thread 
as relating to MGA and repeatedly hit walls.


It seems like you (and Liz) have a complaint that I don't have some fixed world view that 
I'm proposing and defending and which I'm obliged to explain.


It's not transgression of some rule, it's as Liz states "a politeness thing", which you 
usually observe. PGC


Sorry.  I may intend to be critical, but not rude.

Brent



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 09:29, meekerdb  wrote:

>  On 8/14/2014 11:40 AM, Platonist Guitar Cowboy wrote:
>
> Then it'd be no problem for you guys to clearly spell out what that
> environment is.
>
> Yes, that's a problem.  The MGA considers a computational sequence that
> produces some conscious thought.  I think that in order for the
> computational sequence to have meaning it must refer to some context in
> which decision or action is possible.  That's what makes it about something
> and not just a sequence of events. I initially thought of it in terms of
> the extra states that had to be available for counterfactual correctness in
> response to an external environment, e.g. seeing something, having a K_40
> atom decay in your brain. But now I've think the necessity of reference is
> different than counterfactual correctness.  For example if you had a
> recording of the computations of an autonomous Mars Rover they wouldn't
> really constitute a computation because the recording would not have the
> possibility of branching in response to inputs.  And the inputs wouldn't
> necessarily be external, at a different state of the Rover's learning the
> same sequence might have triggered a different association from memory.  So
> the referents are not necessarily just external, they include all of memory
> as well.
>

Given that comp assumes consciousness supervenes on classical computation,
it's still hard for me to imagine what the difference is that
counterfactuals or meaning supply. That is, a classical computation (as
opposed to a quantum one...perhaps???) is a well-defined set of steps, and
if you re-run them in the MGA they're identical. There may be no
possibility of reacting differently to different inputs, but I can't see
what difference - i.e. what real, physical, engineering (etc) type
difference that makes. If consciousness is digitally emulable, then it can
be replayed, and whatever "counterfactuals" and "meanings" that the
consciousness may attach to its internal states or (replayed) inputs will
be repeated.

So in a nutshell I can't see how, assuming consciousness supervenes on
physical computation, "being about something" or having "meaning" or
"needing counterfactual correctness" -- or needing a real environment, for
that matter, as opposed to identically repeated inputs -- can make any
difference to whether the UTM in question is conscious. Because a system
that interacts with an environment and one that replays that interaction
exactly are, or can in theory be made, physically identical.

What am I missing?
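
One way to picture the difference being claimed (a toy sketch with made-up names, not
anyone's actual position): on the run that actually happened, a live system and a replay
agree state for state, and only a run with different inputs separates them - which is
precisely the run a recording can never be subjected to.

def render(position, control):
    """Live computation: the next scene depends on what the player actually does."""
    return position + (1 if control == "forward" else -1)

actual_controls = ["forward", "forward", "back"]
recording, pos = [], 0
for c in actual_controls:                 # the run that actually happened
    pos = render(pos, c)
    recording.append(pos)

def replay(recorded, controls):
    """Replay: produces the recorded scenes whatever the controls are."""
    return list(recorded)

# On the actual inputs the two are indistinguishable, state for state:
assert replay(recording, actual_controls) == recording

# Only a counterfactual run tells them apart:
other_controls = ["back", "back", "back"]
live_other, pos = [], 0
for c in other_controls:
    pos = render(pos, c)
    live_other.append(pos)
assert replay(recording, other_controls) != live_other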



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 09:29, meekerdb  wrote:

> So you are sniping away at step 0 in the context of discussing step 8.
>
> "Sniping" is pejorative.  Is there some rule I can't question step 0?
> Unlike JKC I'm generally willing to take something I doubt as a
> hypothetical to see where it leads.  But that doesn't mean I can never go
> back.
>
> I think the point is that if you're discussing your objections to step 8,
then you shouldn't throw in something that questions step 0, because if
you're discussing step 8, it's generally assumed that you're provisionally
accepting steps 0-7, *for the purposes of the present discussion*. (After
all, Bruno has gone to great lengths to present his argument in this way,
with each step assuming the correctness of the previous ones.)

This doesn't mean you can never go back to questioning step 0, of course,
but you should make it clear that's what you're doing, and perhaps start a
new thread. Otherwise it's just confusing for everyone else.



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 4:58 PM, LizR wrote:
On 15 August 2014 06:51, meekerdb <meeke...@verizon.net> wrote:


On 8/14/2014 6:45 AM, Pierz wrote:

That is a weird assumption to me and completely contrary to my own 
intuition.
Certainly a person born and kept alive in sensory deprivation will be 
extremely
limited in the complexity of the mental states s/he can develop, but I would
certainly expect that such a person would have consciousness, ie., that 
there is
something it would be like to be such a person. Indeed I expect that such a 
person
would suffer horribly. Such a conclusion requires no mystical view of
consciousness. It is based purely on biology - we are programmed with 
biological
expectations/predispositions which when not met, cause us to suffer. As 
much as the
brain can't be separated completely from other matter, it *does* seem to 
house
consciousness in a semi-autonomous fashion.

So how did you suffer in the womb?


But there's a lot of environmental interaction in the womb. You're undercutting your own 
case! To do a 180 degree, it would make more sense to claim that consciousness requires 
an environment because even before we're born we're already getting plenty of stimuli.


A fetus does get some environmental interaction, but I don't see how that proves it is 
necessary.  It might be interesting to look at those few sad cases in which women have 
been in a coma during the latter part of their pregnancy.  Presumably the fetus would have 
received less stimulus although there still would have been some and it would be hard to 
tell whether a recently born baby was more or less conscious.


You need to imagine a person put into an artificial womb with no light or sound etc from 
the moment they start to develop a nervous system, and consider whether that person 
would be conscious.




I think they would be severely deficient.  Remember I think there can be degrees of 
consciousness, while Bruno thinks it's all-or-nothing.  I think that even a "wolf-child" 
that grows up without learning speech has a qualitatively different and lesser consciousness.


I think we have some empirical evidence.  If kittens are raised in complete darkness they 
don't develop vision.


Brent



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 5:09 PM, LizR wrote:
On 15 August 2014 09:29, meekerdb <meeke...@verizon.net> wrote:


On 8/14/2014 11:40 AM, Platonist Guitar Cowboy wrote:

Then it'd be no problem for you guys to clearly spell out what that 
environment is.

Yes, that's a problem.  The MGA considers a computational sequence that 
produces
some conscious thought.  I think that in order for the computational 
sequence to
have meaning it must refer to some context in which decision or action is possible. 
That's what makes it about something and not just a sequence of events. I initially

thought of it in terms of the extra states that had to be available for
counterfactual correctness in response to an external environment, e.g. 
seeing
something, having a K_40 atom decay in your brain. But now I've think the 
necessity
of reference is different than counterfactual correctness.  For example if 
you had a
recording of the computations of an autonomous Mars Rover they wouldn't 
really
constitute a computation because the recording would not have the 
possibility of
branching in response to inputs.  And the inputs wouldn't necessarily be 
external,
at a different state of the Rover's learning the same sequence might have 
triggered
a different association from memory.  So the referents are not necessarily 
just
external, they include all of memory as well.


Given that comp assumes consciousness supervenes on classical computation, it's still 
hard for me to imagine what the difference is that counterfactuals or meaning supply. 
That is, a classical computation (as opposed to a quantum one...perhaps???) is a 
well-defined set of steps, and if you re-run them in the MGA they're identical. There 
may be no possibility of reacting differently to different inputs, but I can't see what 
difference - i.e. what real, physical, engineering (etc) type difference that makes. If 
consciousness is digitally emulable, then it can be replayed, and whatever 
"counterfactuals" and "meanings" that the consciousness may attach to its internal 
states or (replayed) inputs will be repeated.


So in a nutshell I can't see how, assuming consciousness supervenes on physical 
computation, that "being about something" or having "meaning" or "needing counterfactual 
correctness" -- or needing a real environment, for that matter, as opposed to 
identically repeated inputs -- can make any difference to whether the UTM in question is 
conscious. Because a system that interacts with an environment and one that replays that 
interaction exactly are, or can in theory be made, physically identical.


What am I missing?


If counterfactual correctness and causal environmental reference are not needed for 
consciousness, then consciousness will be instantiated by any sequence of states, including 
just repetitions of the same state, since with a certain mapping any sequence (say "I seek a 
chair.") can be encoded as any other sequence (say "00") as a recording.  Then it 
seems the conclusion would be dualism - consciousness is not related to either physics or 
computation.  Yet we know that consciousness can be changed by physics (and chemistry). So 
we have reached an absurdity, a reductio.  The question is, where did the chain of inference 
go wrong?  It could be that losing counterfactual correctness makes the sequence of states 
not a computation, and we need to have CC if we're going to identify consciousness with 
computation.  Another possibility is that we need the causal reference to give meaning to 
the sequence in order that it instantiate consciousness.  For example, if we have an 
intelligent, conscious autopilot land an airplane, then when we play it back we won't have 
CC, but it will still have reference to a world in which there was interaction and decision 
- in the past - and that allows it to instantiate consciousness without being absurd.
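
To make the mapping step concrete (a toy sketch with made-up state names, not anything
from the MGA papers): given any purportedly conscious run and any other sequence of the
same length, a lookup table turns the second into a "replay" of the first; the one thing
such a mapping can never supply is the counterfactual behaviour.

conscious_run = ["see_chair", "want_to_sit", "walk", "sit"]   # states of the "real" computation
humdrum_run   = ["0", "0", "0", "0"]                          # e.g. a clock ticking, or a rock's "states"

# The re-labelling: position i of the humdrum run is declared to encode state i of the conscious run.
encoding = {(i, s): conscious_run[i] for i, s in enumerate(humdrum_run)}

decoded = [encoding[(i, s)] for i, s in enumerate(humdrum_run)]
assert decoded == conscious_run   # the humdrum sequence "replays" the conscious one

# What no such mapping supplies is counterfactual correctness: had the input differed,
# the conscious computation would have branched, while the humdrum sequence
# (and the lookup table built from it) would have stayed exactly the same.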


Personally I think consciousness needs both CC and reference.

Brent



Re: MGA revisited paper

2014-08-14 Thread Russell Standish
On Fri, Aug 15, 2014 at 12:09:27PM +1200, LizR wrote:
> On 15 August 2014 09:29, meekerdb  wrote:
> 
> >  On 8/14/2014 11:40 AM, Platonist Guitar Cowboy wrote:
> >
> > Then it'd be no problem for you guys to clearly spell out what that
> > environment is.
> >
> > Yes, that's a problem.  The MGA considers a computational sequence that
> > produces some conscious thought.  I think that in order for the
> > computational sequence to have meaning it must refer to some context in
> > which decision or action is possible.  That's what makes it about something
> > and not just a sequence of events. I initially thought of it in terms of
> > the extra states that had to be available for counterfactual correctness in
> > response to an external environment, e.g. seeing something, having a K_40
> > atom decay in your brain. But now I've think the necessity of reference is
> > different than counterfactual correctness.  For example if you had a
> > recording of the computations of an autonomous Mars Rover they wouldn't
> > really constitute a computation because the recording would not have the
> > possibility of branching in response to inputs.  And the inputs wouldn't
> > necessarily be external, at a different state of the Rover's learning the
> > same sequence might have triggered a different association from memory.  So
> > the referents are not necessarily just external, they include all of memory
> > as well.
> >
> 
> Given that comp assumes consciousness supervenes on classical computation,
> it's still hard for me to imagine what the difference is that
> counterfactuals or meaning supply. That is, a classical computation (as
> opposed to a quantum one...perhaps???) is a well-defined set of steps, and
> if you re-run them in the MGA they're identical. There may be no
> possibility of reacting differently to different inputs, but I can't see
> what difference - i.e. what real, physical, engineering (etc) type
> difference that makes. If consciousness is digitally emulable, then it can
> be replayed, and whatever "counterfactuals" and "meanings" that the
> consciousness may attach to its internal states or (replayed) inputs will
> be repeated.
> 
> So in a nutshell I can't see how, assuming consciousness supervenes on
> physical computation, that "being about something" or having "meaning" or
> "needing counterfactual correctness" -- or needing a real environment, for
> that matter, as opposed to identically repeated inputs -- can make any
> difference to whether the UTM in question is conscious. Because a system
> that interacts with an environment and one that replays that interaction
> exactly are, or can in theory be made, physically identical.
> 
> What am I missing?
> 

The consequence of assuming that counterfactuals make no difference in
your supervenience thesis is that it implies consciousness supervenes
on a recording. I constantly stumbled over this point too, as it is not
adequately spelled out in typical formulations of the computational
supervenience thesis.

For some, that is a bridge too far. Maybe you could try following
Bruno's "stroboscope argument" to see if that persuades. (Not sure if
there's an English language version about, though).

Where Brent and I differ from the usual interpretation is that we
don't think counterfactuals are irrelevant to the physical
supervenience thesis either. That is because the physical world is
fundamentally quantum, and for all intents and purposes acts as a
Multiverse, so the counterfactuals are just as physically real as the factuals.

But the MGA does show a contradiction between the physical and
computational supervenience theses in a classical, low-resource (aka
non-robust) physical reality, and this suffices to handle the
non-robust case of the UDA (aka step 8).

Cheers
-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 12:24, meekerdb  wrote:

>  On 8/14/2014 4:58 PM, LizR wrote:
>
>  On 15 August 2014 06:51, meekerdb  wrote:
>
>>  On 8/14/2014 6:45 AM, Pierz wrote:
>>
>>  That is a weird assumption to me and completely contrary to my own
>> intuition. Certainly a person born and kept alive in sensory deprivation
>> will be extremely limited in the complexity of the mental states s/he can
>> develop, but I would certainly expect that such a person would have
>> consciousness, ie., that there is something it would be like to be such a
>> person. Indeed I expect that such a person would suffer horribly. Such a
>> conclusion requires no mystical view of consciousness. It is based purely
>> on biology - we are programmed with biological expectations/predispositions
>> which when not met, cause us to suffer. As much as the brain can't be
>> separated completely from other matter, it *does* seem to house
>> consciousness in a semi-autonomous fashion.
>>
>>  So how did you suffer in the womb?
>>
>
>  But there's a lot of environmental interaction in the womb. You're
> undercutting your own case! To do a 180 degree, it would make more sense to
> claim that consciousness requires an environment because even before we're
> born we're already getting plenty of stimuli.
>
> A fetus does get some environmental interaction, but I don't see how that
> proves it is necessary.
>

I imagine it could be used as part of a case for it being necessary (and
your comments about kittens and wolf children indicate that you do, too). I
haven't yet grasped the case for environmental interaction being necessary
myself, so I may be missing the point, but FWIW there are a few points I
can see here...

You responded to Pierz's suggestion that a brain could be conscious without
having experienced any external stimuli by saying "So how did you suffer in
the womb?"

To which I would say...

First off, your comment is phrased as though it answers Pierz's point. That
is, it appears to be a riposte, especially given that you start "So..."
Your question is phrased to suggest that if Pierz *can't* tell you how he
suffered in the womb, his suggestion is invalidated. The comments I made,
and am about to make, are predicated on the assumption that my
understanding of your comment (as I've outlined it here) is correct.

In my opinion your comment fails to adequately answer, or even address,
Pierz's point for two reasons. As I already mentioned, the womb isn't an
environment involving sensory deprivation; and your (apparent) assumption
that Peirz should be able to tell you how he suffered in the womb relies on
him being able to remember his experiences in the womb. But as we know
nowadays, the infant brain is more or less completely rewired during the
first year or so of life, so it's unlikely that many memories of the womb
survive to adulthood.



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 12:03, meekerdb  wrote:

>
> It's seem like you (and Liz) have a complaint that I don't have some fixed
> world view that I'm proposing and defending and which I'm obliged to
> explain.
>
> I can't speak for Pierz but my complaint is that you often don't make your
comments clearly enough. For example you often appear to be saying one
thing, but when challenged backpedal until you reach a different (and
generally less provocative) position, which indicates the first comment was
merely an attempt to push someone's buttons by making a bland statement in
provocative language. You also mix up arguing about one subject with
another, different subject, with no indication that you've changed (the
womb comment does this, for example, once looked at closely). You act as
though someone argued a particular case which you have shot down when they
did no such thing (like your comment that "A fetus does get some
environmental interaction, but I don't see how that proves it is necessary"
- I wasn't the one arguing it was necessary, you were.) These are all down
to either a lack of clear thought on your part, or are rhetorical tricks of
the sort employed by politicians. Either way they are confusing, and if
intentional, downright rude.

On the subject of not having a fixed worldview, that's not in itself a
problem. I don't have a fixed worldview either, which is why you think I'm
an arithmetical realist and Bruno thinks I'm a hard-headed materialist.
However, I try to stick to a given position, in a given discussion, unless
I state otherwise. Not having a fixed worldview but feeling free to change
it between paragraphs without explaining what you're doing is, as stated,
confusing (and if it's used to score imaginary points, it's also a cheap
rhetorical trick that no one with any intellectual rigour should be
employing).



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 12:40, meekerdb  wrote:

>  If counterfactual correctness and causal environmental reference are not
> needed for consciousness then consciousness will be instantiated by any
> sequence of states,
>

OK. From which I can only deduce that either consciousness isn't related
(purely) to classical computational states, but requires some extras, or it
can be instantiated in ANY sequence of states that meets some set of
criteria, regardless of whether these occur in a rock or a Boltzmann brain
or whatever. (Or indeed in a book called "Einstein's Brain" that I read
about in another book by Doug Hofstadter.)

Have I missed anything so far?



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 12:50, Russell Standish  wrote:

> The consequence of assuming that counterfactuals make no difference in
> your supervenience thesis is that it implies consciousness supervenes
> on a recording. I constantly stumbled over this point too, as it is not
> adequately spelled out in typical formulations of the computational
> supervenience thesis.
>

Yes, I see that.

>
> For some, that is a bridge too far. Maybe you could try following
> Bruno's "stroboscope argument" to see if that persuades. (Not sure if
> there's an English language version about, though).
>

I seem to remember coming across that somewhere, can't remember how it goes
though (hint! :-)

>
> Where Brent and I differ from the usual interpretation is that we
> don't think counterfactuals are irrelevant to the physical
> supervenience thesis either. That is because the physical world is
> fundamentally quantum, and for all intents and purposes acts as a
> Multiverse, so the counterfactuals are just as physically real as the
> factuals.
>

With the multiverse I can grok this, it's only when we assume classical
computation that I don't get the point. The quantum version implies that
consciousness isn't classical, not that it has some sort of quantum magic,
but that it's (as it were) a cut-down version of a larger phenomenon. There
are multiple I's, or rather one big monstrous ego that (perhaps
fortunately) isn't aware of most of itself. Starting from that basis and
trying to show how our apparent single valued consciousness drops out i
neach Everett world would no doubt be an interesting project, though one
the margins of my brain are, unfortunately, not large enough to contain.
But maybe some genius could have a go, and might even be able to come up
with a purely quantum-materialist version (maybe even one in which 17 is
only prime because we think it is...)

>
> But the MGA does show a contradiction between the physical and
> computational supervenience thesis in a classical, low resource (aka
> non-robust) physical reality, and this suffices to handle the
> non-robust case of the UDA (aka step 8).
>

I'm getting confused again. Is step 8 the one where the universe isn't big
enough to contain a UD which creates all human experiences, and so we set
course for arithmetical realism?

>
>



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 13:10, LizR  wrote:

> On 15 August 2014 12:03, meekerdb  wrote:
>
>>
>> It's seem like you (and Liz) have a complaint that I don't have some
>> fixed world view that I'm proposing and defending and which I'm obliged to
>> explain.
>>
>> Sorry, I should have added that, despite not having a fixed worldview
(like most of us, I imagine) it is to be expected that if you make a point
which relies on a particular worldview then, yes, of course you are
expected to defend and explain that point within that worldview. Otherwise
your comments are just noise.

And I was also going to add (damn this job, and having to take work
breaks!) that my reason, at least, for telling you all this is because you
obviously have important things to say, things that I want to hear, and I
don't want to be distracted by you not being able to stick to the point or
express yourself clearly. There are certain people who have posted on this
list (I won't mention names, certainly not Edgar Owens') whose
inconsistencies, rudeness, etc I wouldn't bother to point out (beyond a
certain point), because I don't find what they have to say interesting
enough to make the effort.



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 5:56 PM, LizR wrote:
On 15 August 2014 12:24, meekerdb <meeke...@verizon.net> wrote:


On 8/14/2014 4:58 PM, LizR wrote:

On 15 August 2014 06:51, meekerdb <meeke...@verizon.net> wrote:

On 8/14/2014 6:45 AM, Pierz wrote:

That is a weird assumption to me and completely contrary to my own 
intuition.
Certainly a person born and kept alive in sensory deprivation will be
extremely limited in the complexity of the mental states s/he can 
develop, but
I would certainly expect that such a person would have consciousness, 
ie.,
that there is something it would be like to be such a person. Indeed I 
expect
that such a person would suffer horribly. Such a conclusion requires no
mystical view of consciousness. It is based purely on biology - we are
programmed with biological expectations/predispositions which when not 
met,
cause us to suffer. As much as the brain can't be separated completely 
from
other matter, it *does* seem to house consciousness in a 
semi-autonomous fashion.

So how did you suffer in the womb?


But there's a lot of environmental interaction in the womb. You're 
undercutting
your own case! To do a 180 degree, it would make more sense to claim that
consciousness requires an environment because even before we're born we're 
already
getting plenty of stimuli.

A fetus does get some environmental interaction, but I don't see how that 
proves it
is necessary.


I imagine it could be used as part of a case for it being necessary (and your comments 
about kittens and wolf children indicate that you do, too). I haven't yet grasped the 
case for environmental interaction being necessary myself, so I may be missing the 
point, but FWIW there are a few points I can see here...


You responded to Pierz's suggestion that a brain could be conscious without having 
experienced any external stimuli by saying "So how did you suffer in the womb?"


To which I would say...

First off, your comment is phrased as though it answers Pierz's point. That is, it 
appears to be a riposte, especially given that you start "So..." Your question is 
phrased to suggest that if Pierz /can't /tell you how he suffered in the womb, his 
suggestion is invalidated.


I was suggesting that his idea that sensory deprivation would be terrible was an 
unjustified intuition based on how */he/*, as an adult, would feel if he were deprived of 
all sensation.  And while the womb does not produce complete sensory deprivation I think 
it is close enough that Pierz the adult would feel very deprived - but I don't think he 
would find it horrific.


But actually I have another reason for thinking it's not horrific. Back in the 50's 
sensory deprivation was a fad and a lot of people paid to sit in sensory deprivation 
tanks. I guess they still do; you can buy the tanks.  At the time Richard Feynman had met 
John Lilly, inventor of the sensory deprivation tank, and he tried it. Like most things 
Feynman investigated, he wanted to push the limit. He wanted to experience hallucinations 
and he did.  But he didn't report anything bad about the experience.


https://www.dmt-nexus.me/forum/default.aspx?g=posts&t=51786

The comments I made, and am about to make, are predicated on the assumption that my 
understanding of your comment (as I've outlined it here) is correct.


In my opinion your comment fails to adequately answer, or even address, Pierz's point 
for two reasons. As I already mentioned, the womb isn't an environment involving sensory 
deprivation; and your (apparent) assumption that Peirz should be able to tell you how he 
suffered in the womb relies on him being able to remember his experiences in the womb. 
But as we know nowadays, the infant brain is more or less completely rewired during the 
first year or so of life, so it's unlikely that many memories of the womb survive to 
adulthood.


Which would also imply that whether sensory deprivation was bad or not would depend on how 
your brain was wired.  I don't know whether a fetus or even a baby is conscious or not.  I 
think human-like consciousness is partly dependent on language, but I also think, unlike 
Bruno, that there are degrees and kinds of consciousness and a fetus or a newborn may be 
conscious like my dog is conscious.


Brent



Re: MGA revisited paper

2014-08-14 Thread LizR
On 15 August 2014 14:15, meekerdb  wrote:

> I was suggesting that his idea that sensory deprivation would be terrible
> was an unjustified intuition based on how *he*, as an adult, would feel
> if he were deprived of all sensation.  And while the womb does not produce
> complete sensory deprivation I think it is close enough that Pierz the
> adult would feel very deprived - but I don't think he would find it
> horrific.
>

No, you weren't suggesting that. Perhaps you *intended* to suggest it, but
you certainly didn't supply enough information for what you actually said
to remotely suggest that as the most natural reading. It would have helped
if you'd split Pierz's comment at the point you were responding to (or
bolded the relevant part), since the most natural assumption is that when
you have an entire paragraph followed by a response, the response refers to
the main point made in the paragraph. It certainly isn't natural to assume
that you were just picking out one small aspect of what Pierz was saying,
and ignoring the rest.

Perhaps you could consider taking to heart what I've said about you not
being very clear, and make a bit more effort? Then people won't think
you're just sniping.



Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 5:50 PM, Russell Standish wrote:

On Fri, Aug 15, 2014 at 12:09:27PM +1200, LizR wrote:

On 15 August 2014 09:29, meekerdb  wrote:


  On 8/14/2014 11:40 AM, Platonist Guitar Cowboy wrote:

Then it'd be no problem for you guys to clearly spell out what that
environment is.

Yes, that's a problem.  The MGA considers a computational sequence that
produces some conscious thought.  I think that in order for the
computational sequence to have meaning it must refer to some context in
which decision or action is possible.  That's what makes it about something
and not just a sequence of events. I initially thought of it in terms of
the extra states that had to be available for counterfactual correctness in
response to an external environment, e.g. seeing something, having a K_40
atom decay in your brain. But now I think the necessity of reference is
different from counterfactual correctness.  For example if you had a
recording of the computations of an autonomous Mars Rover it wouldn't
really constitute a computation, because the recording would not have the
possibility of branching in response to inputs.  And the inputs wouldn't
necessarily be external; at a different state of the Rover's learning the
same sequence might have triggered a different association from memory.  So
the referents are not necessarily just external, they include all of memory
as well.


Given that comp assumes consciousness supervenes on classical computation,
it's still hard for me to imagine what the difference is that
counterfactuals or meaning supply. That is, a classical computation (as
opposed to a quantum one...perhaps???) is a well-defined set of steps, and
if you re-run them in the MGA they're identical. There may be no
possibility of reacting differently to different inputs, but I can't see
what difference - i.e. what real, physical, engineering (etc) type
difference that makes. If consciousness is digitally emulable, then it can
be replayed, and whatever "counterfactuals" and "meanings" the
consciousness may attach to its internal states or (replayed) inputs will
be repeated.

So in a nutshell I can't see how, assuming consciousness supervenes on
physical computation, "being about something" or having "meaning" or
"needing counterfactual correctness" -- or needing a real environment, for
that matter, as opposed to identically repeated inputs -- can make any
difference to whether the UTM in question is conscious. Because a system
that interacts with an environment and one that replays that interaction
exactly are, or can in theory be made, physically identical.

What am I missing?


The consequence of assuming that counterfactuals make no difference in
your supervenience thesis is that it implies consciousness supervenes
on a recording. I constantly stumbled over this point too, as it is not
adequately spelled out in typical formulations of the computational
supervenience thesis.


That does seem strange, but I don't know that it strikes me as *absurd*.  Isn't it clearer 
that a recording is not a computation? And so if consciousness supervened on a recording 
it would prove that consciousness did not require computation?


Brent



For some, that is a bridge too far. Maybe you could try following
Bruno's "stroboscope argument" to see if that persuades. (Not sure if
there's an English language version about, though).

Where Brent and I differ from the usual interpretation is that we
don't think counterfactuals are irrelevant to the physical
supervenience thesis either. That is because the physical world is
fundamentally quantum, and for all intents and purposes acts as a
Multiverse, so the counterfactuals are just as physically real as the factuals.

But the MGA does show a contradiction between the physical and
computational supervenience theses in a classical, low resource (aka
non-robust) physical reality, and this suffices to handle the
non-robust case of the UDA (aka step 8).

Cheers




Re: MGA revisited paper

2014-08-14 Thread Russell Standish
On Thu, Aug 14, 2014 at 08:12:30PM -0700, meekerdb wrote:
> 
> That does seem strange, but I don't know that it strikes me as
> *absurd*.  Isn't it clearer that a recording is not a computation?
> And so if consciousness supervened on a recording it would prove
> that consciousness did not require computation?
> 

To be precise "supervening on the playback of a recording". Playback
of a recording _is_ a computation too, just a rather simple one.

In other words:

#include <stdio.h>
int main()
{
  printf("hello world!\n");
  return 1;
}

is very much a computer program (and a playback of a recording of the
words "hello world" when run). I could change "hello world" to the contents of
Wikipedia, to illustrate the point more forcibly.
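
In the same spirit, any finite recording can be wrapped as such a trivial
program; a minimal sketch (the recorded "frames" are placeholder strings,
nothing more):

#include <stdio.h>

/* A finite recording, baked into the program as constant data. */
static const char recording[] =
    "frame 1\n"
    "frame 2\n"
    "frame 3\n";

int main(void)
{
  fputs(recording, stdout);  /* "playback" is just printing the stored data */
  return 0;
}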

Cheers 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-14 Thread meekerdb

On 8/14/2014 8:32 PM, Russell Standish wrote:

On Thu, Aug 14, 2014 at 08:12:30PM -0700, meekerdb wrote:

That does seem strange, but I don't know that it strikes me as
*absurd*.  Isn't it clearer that a recording is not a computation?
And so if consciousness supervened on a recording it would prove
that consciousness did not require computation?


To be precise "supervening on the playback of a recording". Playback
of a recording _is_ a computation too, just a rather simple one.

In other words:

#include <stdio.h>
int main()
{
   printf("hello world!\n");
   return 1;
}

is very much a computer program (and a playback of recording of the
words "hello world" when run). I could change "hello world" to the contents of
Wikipedia, to illustrate the point more forcibly.
OK.  So do you think consciousness supervenes on such a simple computation - one that's 
functionally identical with a recording? Or does instantiating consciousness require some 
degree of complexity such that CC comes into play?


What do you think of the requirement that consciousness (and the computation on which it 
supervenes) have some causal reference to environment to give them meaning?  It seems to 
me this is a different, and additional, requirement over and above CC.


Brent



Re: MGA revisited paper

2014-08-14 Thread Quentin Anciaux
On 15 August 2014 06:41, "meekerdb"  wrote:
>
> On 8/14/2014 8:32 PM, Russell Standish wrote:
>>
>> On Thu, Aug 14, 2014 at 08:12:30PM -0700, meekerdb wrote:
>>>
>>> That does seem strange, but I don't know that it strikes me as
>>> *absurd*.  Isn't it clearer that a recording is not a computation?
>>> And so if consciousness supervened on a recording it would prove
>>> that consciousness did not require computation?
>>>
>> To be precise "supervening on the playback of a recording". Playback
>> of a recording _is_ a computation too, just a rather simple one.
>>
>> In other words:
>>
>> #include <stdio.h>
>> int main()
>> {
>>printf("hello world!\n");
>>return 1;
>> }
>>
>> is very much a computer program (and a playback of recording of the
>> words "hello world" when run). I could change "hello world" to the
contents of
>> Wikipedia, to illustrate the point more forcibly.
>
> OK.  So do you think consciousness supervenes on such a simple
computation - one that's functionally identical with a recording?

I think it does... as it does on a giant lookup table. But as it is in fact
supervening on *all* computations going through the same states and not on
this or that precise computation,  it's not a problem.

Quentin

Or does instantiating consciousness require some degree of complexity such
that CC comes into play?
>
> What do you think of the requirement that consciousness (and the
computation on which it supervenes) have some causal reference to
environment to give them meaning?  It seems to me this is a different, and
additional, requirement over and above CC.
>
> Brent
>
>



Re: MGA revisited paper

2014-08-14 Thread Russell Standish
On Thu, Aug 14, 2014 at 09:41:00PM -0700, meekerdb wrote:
> On 8/14/2014 8:32 PM, Russell Standish wrote:
> >On Thu, Aug 14, 2014 at 08:12:30PM -0700, meekerdb wrote:
> >>That does seem strange, but I don't know that it strikes me as
> >>*absurd*.  Isn't it clearer that a recording is not a computation?
> >>And so if consciousness supervened on a recording it would prove
> >>that consciousness did not require computation?
> >>
> >To be precise "supervening on the playback of a recording". Playback
> >of a recording _is_ a computation too, just a rather simple one.
> >
> >In other words:
> >
> >#include <stdio.h>
> >int main()
> >{
> >   printf("hello world!\n");
> >   return 1;
> >}
> >
> >is very much a computer program (and a playback of recording of the
> >words "hello world" when run). I could change "hello world" to the contents 
> >of
> >Wikipedia, to illustrate the point more forcibly.
> OK.  So do you think consciousness supervenes on such a simple
> computation - one that's functionally identical with a recording? Or
> does instantiating consciousness require some degree of complexity
> such that CC comes into play?
> 

My opinion on whether the recording is conscious or not ain't worth a
penny.

Nevertheless, the definition of computational supervenience requires
counterfactual correctness in the class of programs being supervened
on.

AFAICT, the main motivation for that is to prevent recordings being conscious.
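
For concreteness, a toy sketch in C of the distinction (the state names and
the single-bit input are just illustrative assumptions): the playback
routine reproduces one fixed trace whatever happens, while the
counterfactually correct routine also handles the input that did not in
fact occur.

#include <stdio.h>

/* Playback of a recording: the same trace regardless of any input. */
void playback(void)
{
  printf("state A\nstate B\nstate C\n");
}

/* Counterfactually correct on this toy domain: every possible input
   gets the behaviour the emulated system would have had. */
void cc_step(int input)
{
  if (input == 0)
    printf("state B\n");    /* the branch actually taken */
  else
    printf("state B'\n");   /* the branch that would have been taken */
}

int main(void)
{
  playback();   /* identical output no matter what */
  cc_step(0);   /* the factual input */
  cc_step(1);   /* a counterfactual input, still handled correctly */
  return 0;
}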

> What do you think of the requirement that consciousness (and the
> computation on which it supervenes) have some causal reference to
> environment to give them meaning?  It seems to me this is a
> different, and additional, requirement over and above CC.
> 

Maybe, but it has an identical effect on the MGA.

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-14 Thread Russell Standish
On Fri, Aug 15, 2014 at 06:52:47AM +0200, Quentin Anciaux wrote:
> On 15 August 2014 06:41, "meekerdb"  wrote:
> >
> > OK.  So do you think consciousness supervenes on such a simple
> computation - one that's functionally identical with a recording?
> 
> I think it does... as it does on a giant lookup table. But as it is in fact
> supervening on *all* computations going through the same states and not on
> this or that precise computation,  it's not a problem.
> 

Hi Quentin, I'm pretty sure it was you who straightened me out on this
topic a bit over a year ago, so it seems surprising you're going
back on this.

The computational supervenience thesis is that consciousness
supervenes on all counterfactually equivalent computations (to a
particular execution step), which is a more restricted set than those
passing through a given sequence of states.

This allows us to assert that the consciousness does not supervene on
the recording (but will supervene on the huge lookup table, as the
latter is counterfactually correct).

IIRC, Searle's Chinese Room is a huge lookup table. It fails the
intuition pump test because the lookup table would need to be
astronomically vast, not a human-sized book as painted in the thought
experiment.
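
To make the lookup-table point concrete, a toy sketch (with an assumed
3-bit input domain and made-up responses): the table is counterfactually
correct because every possible input has an entry, whereas a recording
stores only the trace of the one run that happened.

#include <stdio.h>

/* Toy "lookup-table brain": maps a 3-bit input to a canned response.
   The table covers the whole input domain, so it is counterfactually
   correct on that domain - unlike a recording of a single run. */
static const char *response_table[8] = {
  "resp0", "resp1", "resp2", "resp3",
  "resp4", "resp5", "resp6", "resp7"
};

const char *lookup_response(unsigned input)
{
  return response_table[input & 7u];   /* every possible input is handled */
}

int main(void)
{
  printf("%s\n", lookup_response(3));  /* the input that actually occurred */
  printf("%s\n", lookup_response(5));  /* a counterfactual input */
  return 0;
}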

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: MGA revisited paper

2014-08-15 Thread Quentin Anciaux
On 15 August 2014 07:29, "Russell Standish"  wrote:
>
> On Fri, Aug 15, 2014 at 06:52:47AM +0200, Quentin Anciaux wrote:
> > On 15 August 2014 06:41, "meekerdb"  wrote:
> > >
> > > OK.  So do you think consciousness supervenes on such a simple
> > computation - one that's functionally identical with a recording?
> >
> > I think it does... as it does on a giant lookup table. But as it is in
fact
> > supervening on *all* computations going through the same states and not
on
> > this or that precise computation,  it's not a problem.
> >
>
> Hi Quentin, I'm pretty sure it was you who straightened me out on this
> topic a bit over a year ago, so it seems surprising you're going
> back on this.
>
> The computation supervenenience thesis is that consciousness
> supervenes on all counterfactually equivalent computations (to a
> particular execution step)

It doesn't have to be counterfactually correct to be equivalent up to a
particular step.  The thing is that 'recordings' have a negligible measure
and, as I said, consciousness does not supervene on *a* particular
computation but on *all* the equivalent ones up to step N.  Like we could
say in Everett that your consciousness state supervenes on all fungible
universes.

Quentin

, which is a more restricted set than those
> passing through a given sequence of states.
>
> This allows us to assert that the consciousness does not supervene on
> the recording (but will supervene on the huge lookup table, as the
> latter is counterfactually correct).
>
> IIRC, Searle's Chinese Room is a huge lookup table. It fails the
> intuition pump test because the lookup table would need to be
> astronomically vast, not a human sized book as painted in the thought
> experiment.
>
> Cheers
>
> --
>
>

> Prof Russell Standish  Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Professor of Mathematics  hpco...@hpcoders.com.au
> University of New South Wales  http://www.hpcoders.com.au
>
>  Latest project: The Amoeba's Secret
>  (http://www.hpcoders.com.au/AmoebasSecret.html)
>

>



Re: MGA revisited paper

2014-08-15 Thread Platonist Guitar Cowboy
On Fri, Aug 15, 2014 at 10:35 AM, Quentin Anciaux 
wrote:

>
> On 15 August 2014 07:29, "Russell Standish"  wrote:
>
> >
> > On Fri, Aug 15, 2014 at 06:52:47AM +0200, Quentin Anciaux wrote:
> > > On 15 August 2014 06:41, "meekerdb"  wrote:
> > > >
> > > > OK.  So do you think consciousness supervenes on such a simple
> > > computation - one that's functionally identical with a recording?
> > >
> > > I think it does... as it does on a giant lookup table. But as it is in
> fact
> > > supervening on *all* computations going through the same states and
> not on
> > > this or that precise computation,  it's not a problem.
> > >
> >
> > Hi Quentin, I'm pretty sure it was you who straightened me out on this
> > topic a bit over a year ago, so it seems surprising you're going
> > back on this.
> >
> > The computation supervenenience thesis is that consciousness
> > supervenes on all counterfactually equivalent computations (to a
> > particular execution step)
>
> It doesn't have to be counterfactually correct to be equivalent up to a
> particular step.  The thing is that 'recording'  have a negligible measure
> and as  I said,  consciousness does not supervenes on *a* particular
> computation but on *all* the equivalent one up to step N.  Like we could
> say in everett that your consciousness state supervenes on all fungible
> universes.
>

Some localizable appearance of Olympias or Klaras does not cause
supervenience; that would imply, for example, that our building and
calibration of a Mars rover forces some computation sequence to exist
consciously (to some extent), which is upside down, as it supposes that our
physical activity and interaction in coding and building the thing causes
the supervenience...

Instead, I thought the pairing of all computational sequences with some
consciousness property (and there can be degrees here, instead of the "all
or nothing" extreme which Brent claims Bruno to posit; that's why I'd like
to see a reference/quote on that) already exists in a more primitive sense.

Our construction merely permits it to "exist in a branch accessible to us",
I'd say. This is why I have a funny feeling about Russell's version of comp
supervenience, as it feels excessively local in this regard. PGC



Re: MGA revisited paper

2014-08-15 Thread Bruno Marchal


On 14 Aug 2014, at 10:34, Russell Standish wrote:


On Thu, Aug 14, 2014 at 09:59:31AM +0200, Bruno Marchal wrote:




A human being or any physical system reacts to the world in one
way or another.  What was asked was for was counterfactual
correctness, i.e. the the MG reacts the same as would the
conscious being emulated - which might be no change at all.


I agree with you. The counterfactualness needs "If we change the
input then the output will change in the relevant way".  But I am
not sure that we need the actual (physical) counterfactual behavior.
I might differ from Russell here.



Counterfactual correctness is needed as part of the computational
supervenience thesis in order to forbid supervenience on recordings.


Counterfactual Correctness (hereafter CC) is an attribute of a machine
(or a program) which computes some function. In a first approximation,
the machine is CC if it computes the right function whatever the input
is.


We can extend the sense of CC to a computation, and say that a
computation is CC if it is the computation (done by some universal
machine) of some CC program. We suppose by default that the universal
machine executing that CC machine/program is itself CC.


So I think that CC is already needed for having a computation,
although in practice the CC character can be restricted in its domain
(the machine computes the function correctly, unless some input is too
big, for example).


Let me try to be slightly more technical. People can ask questions if
they don't remember the meaning of a term here.


Let phi_i be an enumeration of all partial computable functions (=  
programmable on a computer).


I will call a computation "raw" if it is described by a sequence
phi_i(j)^n. So it is the steps of the computation of the UD itself,
when computing the first n steps of the computation of the function
phi_i (= executing the first n steps of program i) on the input j. OK?
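
As a rough illustration of the dovetailing order only (the bound of six
diagonals is an arbitrary assumption, and no interpreter for the phi_i is
attempted, just the schedule of triples (i, j, n)):

#include <stdio.h>

/* Sketch of the UD's scheduling: enumerate triples (i, j, n), read as
   "execute step n of program phi_i on input j", by sweeping the
   diagonals s = i + j + n.  A real UD would also interpret the phi_i. */
int main(void)
{
  for (int s = 0; s < 6; s++)
    for (int i = 0; i <= s; i++)
      for (int j = 0; i + j <= s; j++) {
        int n = s - i - j;
        printf("step %d of phi_%d on input %d\n", n, i, j);
      }
  return 0;
}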


Here, it is the program or machine i which is CC, and this makes sense
only relative to some semantics. (A program computing the factorial
function wrongly might be said to compute some variant of the factorial
function correctly!)


Now the computationalist supervenience thesis will associate
consciousness with an abstract entity, called the first person (and
approximated by the believer/prover + truth). Here the truth (as we
assume comp) is (notably) that the relevant computational state
belongs to an infinity of raw computations made by the UD. By
definition they are all CC (in the extended sense). When they diverge
on different inputs (like Washington or Moscow), they each exhibit the
relevant corresponding behavior.


So, counterfactualness is "in" the program (even before it runs), and
is kept in the raw computations corresponding to the relevant program  
in the UD.


To sum up: the computationalist supervenience thesis associates a
conscious state (including its feeling of being at this time and this
place) with an infinity of computations, which are CC by definition of
computations.


Now, a record of a computation is obviously not CC. I would say that
it does not compute at all. It is a description of the sequence of
steps of a computation, but there is no universal machine going
through those steps in virtue of being itself a universal computer.
The movie projector, in particular, does not compute (or only in a weak
sense unrelated to the computation it projects the movie of).


I disagree with your idea that to have counterfactual correctness we  
need the actualisation of the diverging computations. We do have them  
in UD*, so that is not a problem. But we don't need them.


If I give a classical, well-defined input to a classical (non-quantum)
program, it will compute the same output from the same input in all
the multiverses, except for the non-normal "white rabbit" worlds. The
quantum counterfactualness (which exists and can be related to the
multiverse structure) does not seem relevant here, except that the UDA
imposes such an actualization of different computations below our
level of substitution (and this will be confirmed by the machine's
talk about []p & <>p, which gives a quantum logic, and quantum
logics are a sort of logic of counterfactuals or conditionals, as shown by
Hardegree).


The problem with physical supervenience is that we can build a record
of a computation (thus non-conscious) which becomes CC when inactive
physical stuff is in its neighborhood, which makes the role of the
physical with respect to consciousness weird. This suggests that the
(often considered immaterial) consciousness is related to the immaterial
set of raw computations going through that state, or up to that state
(at least something like that), and not to any of its particular
emulations by the UD, or another program (a non-raw computation, itself
done by a raw computation).






Physical supervenience is something observed.


I do

Re: MGA revisited paper

2014-08-15 Thread Bruno Marchal


On 14 Aug 2014, at 10:41, Russell Standish wrote:


On Thu, Aug 14, 2014 at 10:25:40AM +0200, Bruno Marchal wrote:


I agree with you in general, but I can agree a little bit with Liz
too, as I find Brent slightly sneaky on this issue, but all in all
Brent is rather polite and seems sincere. Yet his critics (of step
8) is not that clear. But then that is why we discuss. Anyone seeing
Brent's point can help to make it clearer.



His point is that he doesn't believe input free computations can be
conscious


But it is a most fundamental principle in computer science that you  
can always internalize the input.


From the program Factorial, you can build (uniformly for all programs  
and inputs, for any number of inputs) a program Factorial5 which on no  
input computes the factorial of 5.
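
A minimal sketch of that internalization in C (the function names simply
mirror the example; nothing else is implied):

#include <stdio.h>

/* The original program: factorial as a function of its input. */
unsigned long factorial(unsigned n)
{
  return n <= 1 ? 1UL : n * factorial(n - 1);
}

/* The same program with the input internalized: it takes no input
   and computes the factorial of 5. */
unsigned long factorial5(void)
{
  return factorial(5);
}

int main(void)
{
  printf("%lu\n", factorial5());  /* prints 120 */
  return 0;
}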



Each night we have dreams, with the input having been internalized, or
recreated to mimic a correct referent.


If the environment is needed, we might add it to the "generalized
brain"; that will not invalidate the reasoning, as long as we keep
comp (so that the environment is itself preserved by a digital
emulation at some level; if that is not the case, we go outside the
scope of our working hypothesis).










- there must always be some referrent to the environment
(which is noisy, counterfactual, etc). If so, it prevents the MGA, and
Maudlin's argument, from working.

I guess for Brent that even dream states still have some referrent to
the environment, even if it be some sort of random synaptic noise.



In all cases the referents can be internalized, even the infinite
streams, on which the UD dovetails.
From the first person view, they cannot: the domain of indeterminacy
seems to be at least 2^aleph_0. We need a topology (provided by G,
S4Grz, ...) and a proximity space (also provided by those logics, on p
sigma_1).  The first person is by default connected to a random oracle,
which is the FPI on all its emulations in the sigma_1 arithmetic.


Brent seems to assume those "physical environments" (what are they,
really?) to abort an explanation of the origin of their appearance and
their relative stability from simpler principles,

like Kxy = x, and Sxyz = xz(yz).

Brent tries valiantly to resist the charm of computationalism :)

It is a different theology. Instead of a creation, with or without a
creator, we have a universal dreamer losing itself again and again in
an infinite web of dreams, and it looks structured like in Plotinus,
so a sort of Abrahamic god is not excluded, just as an *apparent* stable
Aristotelian matter is not excluded. We are at the beginning, so of
course there is not much we can exclude from that theology,
except a classical boolean matter or any institutionalization of the
unnameable. That excludes proselytism, for example, but comp excludes
all public pretension or claim to truth in general.


Bruno



Bruno





Cheers

--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

Latest project: The Amoeba's Secret
(http://www.hpcoders.com.au/AmoebasSecret.html)




http://iridia.ulb.ac.be/~marchal/





Re: MGA revisited paper

2014-08-15 Thread Bruno Marchal
on of the post - follow the link
"Draft paper here" to find the paper.


I got it. I will read it.

...

It looks now that I have lost the ability to read my mails.
Apparently someone deleted my password at my ULB account. It might
take some time before I can read my mail again.


Sorry. It is a good thing that I got your text before this
happened. I might soon be unable to send messages, too.


Bruno






Cheers


On Sun, Aug 10, 2014 at 08:08:55PM -0700, meekerdb wrote:
On 8/10/2014 3:38 PM, Russell Standish wrote:
As long, long time promised, I now have a draft of my "MGA revisited"
paper for critical comment. I have uploaded this to my blog, which
gives people the ability to attach comments.

http://www.hpcoders.com.au/blog/?p=73

Whilst I'm happy I now understand the issue, I still not happy with
how I've expressed it - the text could still do with some work.

So let the games begin!

I went to your blog and I found:

/In this paper, we reexamine Bruno Marchal's Movie Graph//
//Argument, which demonstrates a basic incompatibility between//
//computationalism and materialism. We discover that the  
incompatibility//
//is only manifest in singular classical-like universes. If we  
accept//
//that we live in a Multiverse, then the incompatibility goes away,  
but//

//in that case another line of argument shows that with//
//computationalism, fundamental, or primitive materiality has no  
causal//
//influence on what is observed, which must must be derivable from  
basic//

//arithmetic properties./

But I didn't find "this paper"?

Brent


--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpc...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

Latest project: The Amoeba's Secret
(http://www.hpcoders.com.au/AmoebasSecret.html)



http://iridia.ulb.ac.be/~marchal/










http://iridia.ulb.ac.be/~marchal/





Re: MGA revisited paper

2014-08-15 Thread Bruno Marchal


On 14 Aug 2014, at 19:41, meekerdb wrote:


On 8/14/2014 1:09 AM, Bruno Marchal wrote:


On 13 Aug 2014, at 21:47, meekerdb wrote:


On 8/13/2014 7:01 AM, Bruno Marchal wrote:
Does Bruno actually say what he thinks consciousness is? (This  
is probably somewhere beyond the MGA, which is where I tend to  
get stuck...)


When I've asked directly what it would take to make a robot  
conscious, he's said Lobianity.  Essentially it's the ability to  
do proofs by mathematical induction and prove Godel's theorem.   
But "ability" seems to be just in the sense of potential, as a  
Turing machine has the ability to compute anything computable.


That is what you need for your robot being able to be conscious.  
OK. But to be conscious, you need not just the machine/man, but  
some connection with god/truth.


To put it roughly, the believer []p is never conscious; it is the  
knower []p & p who is conscious.  It is very different: []p can  
be defined in arithmetic. []p & p cannot be defined in  
arithmetic, or in the machine's language.


But that's just an abstract definition.  What is the operational  
meaning of "p".


It is means true in (N, +, *).


That's not operational.



Indeed, but p here is for the truth of p, and that has no operational
definition. It is not computable, unless true and sigma_1.




The only operational meaning of true derivable in (N, +, *) is  
true=provable,


That will not work. You have an infinity of different bigger and
bigger notions of proof.


RA-provable (contains the full sigma_1 truth, but quite a tiny part of
the pi_1 truth)
PA-provable (contains a *much* larger part of the pi_1 truth, almost all
the "interesting" mathematics, but still an infinitesimal part of the
pi_1 truth)
ZF-provable (contains a *vastly* larger part of the pi_1 truth,
but still not the whole pi_1 truth; indeed "ZF is consistent" is an
arithmetical pi_1 sentence)
ZF+Ex(x = kappa)-provable, with kappa a very big cardinal (so big that you
can define set-theoretical truth in that theory); you will still miss
the pi_1 truth that ZF+Ex(x = kappa) is consistent, but you do extend
the set of arithmetical propositions you can prove.

etc.
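
In symbols, as a compressed restatement of the hierarchy just listed
(writing Thm_N(T) for the set of arithmetical sentences provable in T, and
assuming each theory is arithmetically sound):

\[
\mathrm{Thm}_{\mathbb{N}}(\mathrm{RA}) \subsetneq \mathrm{Thm}_{\mathbb{N}}(\mathrm{PA})
\subsetneq \mathrm{Thm}_{\mathbb{N}}(\mathrm{ZF})
\subsetneq \mathrm{Thm}_{\mathbb{N}}(\mathrm{ZF}+\exists\kappa)
\subsetneq \cdots \subsetneq \mathrm{Th}(\mathbb{N},+,\times).
\]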

You can easily define the notion of arithmetical truth in ZF, like you
can define set-theoretical truth in ZF+j-kappa, and that asks for less
than some work in topology, not to mention category theory. The
definition will indeed not be operational, but that is the case for
many definitions, already in analysis.


Except for RA, which is a bit too weak, all those provability notions are
Löbian.







but it's essential to your theory that there are true and unprovable  
propositions.


Unprovable by this or that machine, yes. For every machine there are
infinitely many such unprovable propositions, some concerning them. It
is their theology.






You can believe there are such propositions and prove that there  
must be one, but can you actually produce one?


"I am consistent". If true, I can't prove it. But I can hope for it.




In other words it seems you can get []p, and [][]p, and [][][]p...  
but you can't get to p.


I can get to p, by proving p. I cannot assert that p is true (as I
cannot define truth), but I can use a simple generic truth, like the
constant t, or like "0 = 0", and prove that p is equivalent to it. I
do that implicitly in proving p. Then from that p, I can even prove
[]p -> p. By Löb, that will be the only case in which I can prove
[]p -> p. In particular I cannot prove []f -> f (if I am consistent).
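
For readers who do not have it at hand, the Löb step used here is the
standard theorem (stated generally, nothing specific to this thread), for
the provability box of a consistent, sufficiently rich machine:

\[
\text{if } \vdash \Box p \to p \ \text{ then } \ \vdash p,
\qquad \text{internalized as} \qquad
\vdash \Box(\Box p \to p) \to \Box p .
\]

Taking p = f gives back the second incompleteness theorem: a consistent
machine cannot prove []f -> f, i.e. cannot prove its own consistency.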







This cannot be defined in PA, but you don't need to define it in  
PA, to get the needed consequences.





If consciousness depends on knowing and knowing depends of my  
belief being true, then I will be unconscious if my belief is  
mistaken.


Not necessarily, because although your belief is false, you can  
still have the true belief that you believe it.


Yes that's [][]p & []p.  But people who believe the Earth is flat  
are not believing that they believe the Earth is flat.  Yet they are  
conscious.


Well, in this case they are not conscious that they believe the earth  
is flat, but they might still be conscious of something else. Then.






Yet it seems that []p & p, where p=f implies one is unconscious.


OK.




I don't think consciousness depends on knowing (as defined by  
Thaetateus).


Agreed. It is too much. The "[]p" can be weakened, especially for the
raw consciousness. But to get the physics we need those rich
introspective machines. The others are conscious, but can't really talk
about all this.





Does mere belief, []p, already require consciousness.


No, it needs the reality intended in the proposition p, and it needs
it explicitly; only that makes consciousness non-representational.
That "& p" is a tour de force, as it requires God (truth) at the
metalevel, but not at the level of the numbers and their beliefs.




Or if you allow unconscious belief what does it add to re

Re: MGA revisited paper

2014-08-15 Thread meekerdb

On 8/15/2014 9:48 AM, Bruno Marchal wrote:


On 14 Aug 2014, at 10:34, Russell Standish wrote:


On Thu, Aug 14, 2014 at 09:59:31AM +0200, Bruno Marchal wrote:




A human being or any physical system reacts to the world in one
way or another.  What was asked was for was counterfactual
correctness, i.e. the the MG reacts the same as would the
conscious being emulated - which might be no change at all.


I agree with you. The counterfactualness needs "If we change the
input then the output will change in the relevant way".  But I am
not sure that we need the actual (physical) counterfactual behavior.
I might differ from Russell here.



Counterfactual correctness is needed as part of the computational
supervenience thesis in order to forbid supervenience on recordings.


Counterfactual Correctness (hereafter CC) is an attribute of a machine (or a program) 
which compute some function. In a first approximation, the machine is CC is it computes 
the right function whatever the input is.


We can extend the sense of CC on a computation, and say that a computation is CC, if it 
is the computation (done by some universal machine) of some CC program. We suppose by 
defalut that the universal machine executing that CC machine/program is itself CC.


So I think that CC is already needed for having a computation, although in practice, the 
CC character can be restricted on its domain (the machine computes correctly the 
function, unless some input is too big, for example).


Let me try to be slightly more technical. People can ask question if they don't remind 
the meaning of a term here.


Let phi_i be an enumeration of all partial computable functions (= programmable on a 
computer).


I will call a computation "raw" if it is described by a sequence phi_i(j)^n. So it is 
the steps of the computation of the UD itself, when computing the n first steps of the 
computation of the function phi_i (= executing the first n step of program i) on the 
input j. OK?


Here, it is the program or machine i which is CC, and this makes sense only relatively 
to some semantics. A program computing wrongly the factorial function, might be said to 
compute correctly some variant of the factorial function!)


Isn't being "relative to some semantics" another way of saying there is reference to some 
external relations to provide meaning?


How does this apply to partial functions, as in the UD, which do not terminate?  If they 
are not computing a function isn't there still a sense in which they should be CC?




Now the computationalist supervenience thesis will associate consciousness to an 
abstract entity, called the first person (and approximated by the believer/prover + 
truth). Here the truth (as we assume comp) is (notably) that the relevant computational 
state belongs to an infinity of raw computations made by the UD. By definition they are 
all CC (in the extended sense). When they diverge on different inputs (like Washington 
or Moscow) they both do the relevant corresponding behavior.


So, counterfactualness is "in" the program (even before it run), and is kept in the raw 
computations corresponding to the relevant program in the UD.


To sum up: the computationalist supervenience thesis associate a conscious state 
(including its feeling being at this time at this place) to an infinity of computations, 
which are CC by definition of computations.


I don't understand that.  Consciousness is assumed to consist of a sequence of conscious 
states (which I suppose to have duration and overlap at the subconscious level).  So I can 
see that a conscious state could be associated with a sequence of computational steps. Are 
you saying that, for a given conscious state, this sequence is well defined, but the same 
sequence appears in infinitely many threads of the UD?  Or are you saying that for a given 
conscious state, there are infinitely many different sequences of computational steps that 
together instantiate that state?


I'm also confused by references to computations being "in the same state".  If a function 
is computed then I can understand having the same output for the same input, but having 
some intermediate computational states be "the same" seems ill defined.  Why would there 
necessarily be any intermediate states that were "the same" - even assuming the same 
universal Turing machine?




Now, a record of a computation, is obviously not CC. I would say that it does not 
compute at all. It is a description of the sequence of steps of a computation, but there 
is no universal machine going through those steps in virtue of being itself a universal 
computer. The movie projector, in particular does not compute (or just in a weak sense 
unrelated to the computation it projects the movie of).


I disagree with your idea that to have counterfactual correctness we need the 
actualisation of the diverging computations. We do have them in UD*, so that is not a 
problem. But we don't need them.


If I give a classical 

Re: MGA revisited paper

2014-08-15 Thread meekerdb

On 8/15/2014 10:37 AM, Bruno Marchal wrote:


On 14 Aug 2014, at 10:41, Russell Standish wrote:


On Thu, Aug 14, 2014 at 10:25:40AM +0200, Bruno Marchal wrote:


I agree with you in general, but I can agree a little bit with Liz
too, as I find Brent slightly sneaky on this issue, but all in all
Brent is rather polite and seems sincere. Yet his critics (of step
8) is not that clear. But then that is why we discuss. Anyone seeing
Brent's point can help to make it clearer.



His point is that he doesn't believe input free computations can be
conscious


But it is a most fundamental principle in computer science that you can always 
internalize the input.


From the program Factorial, you can build (uniformly for all programs and inputs, for 
any number of inputs) a program Factorial5 which on no input computes the factorial of 5.


But a program with the input internalized is no different than a recording.  If you 
propose to internalize all possible inputs, then that means making what I referred to as 
the environment part of the program.





Each night we do dreams, with input having been internalized, or are recreated to mimic 
correct referent.


If the environment is needed, we might add it in the "generalized brain", that will not 
invalidate the reasoning, as long as we keep comp (so that the environment is preserved 
itself by a digital emulation at some level, 


I agree, except that it does invalidate the assertion that psychology and physics have 
been reversed.  I'd say that they had been unified - both as aspects of the UD computation.





if that is not the case, we go out of the scope of our working hypothesis.









- there must always be some referrent to the environment
(which is noisy, counterfactual, etc). If so, it prevents the MGA, and
Maudlin's argument, from working.

I guess for Brent that even dream states still have some referrent to
the environment, even if it be some sort of random synaptic noise.



In all case the referents can be internalized, even the infinite streams, on which the 
UD dovetails.
From the first person view, they cannot, the domain of indeterminacy seems to be at 
least 2^aleph_0. We need a topology (provided by G, S4Grz, ..) and a proximity space 
(also provided by those logics, on p sigma_1).  The first person is by defaut connected 
to a random oracle, which is the FPI on all its emulation in the sigma_1 arithmetic.


Brent seems to assume those "physical environment" (what are there, really?)


A good question.  Do you have a good answer?

to abort an explanation of the origin of their appearance and their relative stability 
from simpler principles,

like Kxy = x, and Sxyz = xz(yz).

Brent tries valiantly to resist the charm of computationalism :)


But I have not seen this explanation - only assertions that there must be one - "assuming 
comp".




It is a different theology. Instead of a creation, with or without a creator, we have a 
universal dreamer losing itself again and again in an infinite web of dreams, and it 
looks structured like in Plotinus, so a sort of Abramanic god is not excluded, like an 
*apparent* stable aristotelian matter is not excluded. We are at the beginning, so of 
course, there are not much things we can exclude from that theology, except a classical 
boolean matter or any institutionalization of the unnameable. That excludes proselytism 
for example, but comp excludes all public pretension or claim to truth in general.


Yet it depends on there being unprovable truths.

Brent



Re: MGA revisited paper

2014-08-15 Thread meekerdb

On 8/15/2014 11:40 AM, Bruno Marchal wrote:


On 14 Aug 2014, at 19:41, meekerdb wrote:


On 8/14/2014 1:09 AM, Bruno Marchal wrote:


On 13 Aug 2014, at 21:47, meekerdb wrote:


On 8/13/2014 7:01 AM, Bruno Marchal wrote:
Does Bruno actually say what he thinks consciousness is? (This is probably 
somewhere beyond the MGA, which is where I tend to get stuck...)


When I've asked directly what it would take to make a robot conscious, he's said 
Lobianity. Essentially it's the ability to do proofs by mathematical induction and 
prove Godel's theorem.  But "ability" seems to be just in the sense of potential, 
as a Turing machine has the ability to compute anything computable.


That is what you need for your robot being able to be conscious. OK. But to be 
conscious, you need not just the machine/man, but some connection with god/truth.


To put is roughly the believer []p is never conscious, it is the knower []p & p who 
is conscious.  It is very different: []p can be defined in arithmetic. []p & p 
cannot be defined in arithmetic, or in the machine's language.


But that's just an abstract definition.  What is the operational meaning of "p".


It is means true in (N, +, *).


That's not operational.



Indeed. but p here is for the truth of p, and that has no operational definition. It is 
not computable, unless true and sigma_1.





The only operational meaning of true derivable in (N, +, *) is true=provable,


That will not work. You have an infinity of different bigger and bigger notion 
of proofs.

RA-provable (contain the full sigma_1 truth, but a quite tiny part of the pi_1 
truth)
PA-provable (contains a *much* larger part of the pi_truth, almost all the "interesting" 
mathematics, but still an infinitesimal part of the pi_1 truth)
ZF-provable (contains a *vastly* much larger part of the pi_1 truth , but still not the 
whole pi_1 truth, indeed "ZF is consistent" is an arithmetical pi_1 sentence)
ZF+Ex(x = kappa), with kappa a very big cardinal (so big that you can define set 
theoretical truth in that theory), but you will still miss the pi_1 truth that ZF+Ex(x = 
kappa) is consistent, but you do extends the set of arithmetical propositions you can prove.

etc.

You can easily define the notion of arithmetical truth in ZF, like you can define 
set-theoretical truth in ZF+j-kappa, and that asks for less than some work in topology, 
not to talk on category theory. The definition will indeed not be operational, but that 
is the case of many definition, already in analysis.


Except for RA, a bit too weak, all those provability notions are all Löbian.







but it's essential to your theory that there are true and unprovable 
propositions.


Unprovable by this or that machine. yes. For all machine there are infinitely many such 
unprovable proposition, some concerning them. It is their theology.






You can believe there are such propositions and prove that there must be one, but can 
you actually produce one?


"I am consistent". If true, I can't prove it. But I can hope for it.





In other words it seems you can get []p, and [][]p, and [][][]p... but you 
can't get to p.


I can get to p, by proving p. I cannot assert that p is true (as I cannot define true), 
but I can use a simple generic truth, like the constant t, or like "0= 0" and proves 
that p is equivalent with it. I do that implicitly in proving p. Then from that p, I can 
even prove []p -> p. By Löb, that will be the only case in which I can prove []p -> p. 
In particulat I cannot prove []f -> f (if I am consistent).







This cannot be defined in PA, but you don't need to define it in PA, to get the needed 
consequences.





If consciousness depends on knowing and knowing depends of my belief being true, then 
I will be unconscious if my belief is mistaken.


Not necessarily, because although your belief is false, you can still have the true 
belief that you believe it.


Yes that's [][]p & []p.  But people who believe the Earth is flat are not believing 
that they believe the Earth is flat.  Yet they are conscious.


Well, in this case they are not conscious that they believe the earth is flat, but they 
might still be conscious of something else. Then.






Yet it seems that []p & p, where p=f implies one is unconscious.


OK.





I don't think consciousness depends on knowing (as defined by Thaetateus).


Agreed. It is too much. The "[]p" can be weakened, especially for raw consciousness. 
But to get the physics we need those rich introspective machines. The others are 
conscious, but can't really talk about all this.





Does mere belief, []p, already require consciousness?


No, it needs the reality intended in the proposition p, and it needs it explicitly; only 
that makes consciousness non-representational.  That "& p" is a tour de force, as it 
requires God (truth) at the metalevel, but not at the level of the numbers and their beliefs.





Or if you allow unconscious belief, what does it add to require that they be 
true?

[foar] MGA revisited paper

2014-08-15 Thread Stathis Papaioannou
On Friday, August 15, 2014, LizR wrote:

> On 15 August 2014 12:40, meekerdb  wrote:
>
>>  If counterfactual correctness and causal environmental reference are
>> not needed for consciousness then consciousness will be instantiated by any
>> sequence of states,
>>
>
> OK. From which I can only deduce that either consciousness isn't related
> (purely) to classical computational states, but requires some extras, or it
> can be instantiated in ANY sequence of states that meet some set of
> criteria, regardless of whether these occur in a rock or a Boltzmann brain
> or whatever. (Or indeed in a book called "Einstein's Brain" that I read
> about in another book by Doug Hofstadter.)
>

I think these sorts of considerations show that the physical states cannot
be responsible for generating or affecting consciousness. The immediate
objection to this is that physical changes in the brain *do* affect
consciousness. But if physical states cannot be responsible for generating
or affecting consciousness, there can be no evidence for a separate,
fundamental physical world. What we are left with is the platonic
reality in which all computations are realised and physical reality is a
simulation. It is meaningless to ask if consciousness supervenes on the
computations implemented on the simulated rock or the simulated recording.


--Stathis Papaioannou




Re: MGA revisited paper

2014-08-15 Thread Jesse Mazer
On Fri, Aug 15, 2014 at 1:27 AM, Russell Standish 
wrote:

> On Thu, Aug 14, 2014 at 09:41:00PM -0700, meekerdb wrote:
> > On 8/14/2014 8:32 PM, Russell Standish wrote:
> > >On Thu, Aug 14, 2014 at 08:12:30PM -0700, meekerdb wrote:
> > >>That does seem strange, but I don't know that it strikes me as
> > >>*absurd*.  Isn't it clearer that a recording is not a computation?
> > >>And so if consciousness supervened on a recording it would prove
> > >>that consciousness did not require computation?
> > >>
> > >To be precise "supervening on the playback of a recording". Playback
> > >of a recording _is_ a computation too, just a rather simple one.
> > >
> > >In other words:
> > >
> > >#include <stdio.h>
> > >int main()
> > >{
> > >   printf("hello world!\n");
> > >   return 1;
> > >}
> > >
> > >is very much a computer program (and a playback of recording of the
> > >words "hello world" when run). I could change "hello world" to the
> contents of
> > >Wikipedia, to illustrate the point more forcibly.
> > OK.  So do you think consciousness supervenes on such a simple
> > computation - one that's functionally identical with a recording? Or
> > does instantiating consciousness require some degree of complexity
> > such that CC comes into play?
> >
>
> My opinion on whether the recording is conscious or not aint worth a
> penny.
>
> Nevertheless, the definition of computational supervenience requires
counterfactual correctness in the class of programs being supervened
> on.
>
> AFAICT, the main motivation for that is to prevent recordings being
> conscious.


I think it is possible to have a different definition of when a computation
is "instantiated" in the physical world that prevents recordings from being
conscious, a solution which doesn't actually depend on counterfactuals at
all. I described it in the post at
http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html
 (or
https://groups.google.com/d/msg/everything-list/GC6bwqCqsfQ/rFvg1dnKoWMJ on
google groups). Basically the idea is that in any system following
mathematical rules, including both abstract Turing machines and the
physical universe, everything about its mathematical structure can be
encoded as a (possibly infinite) set of logical propositions. So if you
have a Turing machine running whose computations over some finite period
are supposed to correspond to a particular "observer moment", you can take
all the propositions dealing with the Turing machine's behavior during that
period (propositions like "on time-increment 107234320 the read/write head
moved to square 2398311 and changed the digit there from 0 to 1, and
changed its internal state from M to Q"), and look at the structure of
logical relations between them (like "proposition A and B together imply
proposition C, proposition B and C together do not imply A", etc.). Then
for any other computation or even any physical process, you can see if it's
possible to find a set of propositions with a completely *isomorphic*
logical structure. In the case of the physical world, it seems to me you
could do this using only propositions about physical events that actually
occur, along with the general laws governing them--no propositions about
counterfactuals would be needed.
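A toy sketch of the kind of check this suggests (entirely illustrative -- the encoding and helper names are invented here, not taken from the post): summarise each computation as a set of implication facts over abstract event labels, then ask whether some bijection of labels carries one implication structure onto the other.

from itertools import permutations

def isomorphic(events_a, facts_a, events_b, facts_b):
    # Facts are pairs (frozenset_of_premise_labels, conclusion_label); only the pattern
    # of implications matters, not the labels, so we brute-force a label bijection.
    # Feasible only for tiny examples, but it states the criterion precisely.
    if len(events_a) != len(events_b):
        return False
    events_a, events_b = list(events_a), list(events_b)
    for perm in permutations(events_b):
        m = dict(zip(events_a, perm))
        mapped = {(frozenset(m[p] for p in prem), m[c]) for prem, c in facts_a}
        if mapped == set(facts_b):
            return True
    return False

# Two relabelled copies of the same three-event structure: "A and B together imply C".
facts_1 = {(frozenset({"A", "B"}), "C")}
facts_2 = {(frozenset({"x", "y"}), "z")}
print(isomorphic({"A", "B", "C"}, facts_1, {"x", "y", "z"}, facts_2))  # True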

I suggested something like this to Bruno, and he seemed to agree that at
least in the case of computational *simulations* of the physical world, if
you use a rule like this to define when a simpler computation is
"instantiated" within some more detailed physical simulation, it would be
the case that a detailed simulation of a physical computer running some
simpler program P would qualify as instantiating P, whereas a detailed
simulation of a physical device that was really just playing back a
recording of a computer program (like Bruno's movie graph where all the
optical gates have been replaced by projected images) would *not*
instantiate P. See my comment at
https://groups.google.com/d/msg/everything-list/Ljp3s2885Co/kght-F5LZeUJ
and Bruno's response at
https://groups.google.com/d/msg/everything-list/Ljp3s2885Co/__PZn6hmCb4J

Assuming this idea for defining "instantiations" of sub-computations within
larger computations makes sense, why wouldn't it make just as much sense if
instead of propositions about computer programs running detailed physical
simulations, you instead considered propositions about actual events and
physical laws (but not counterfactuals) that are true in the physical
universe, and looked for collections of propositions whose internal logical
relations were isomorphic to those of some program?

Jesse


Re: MGA revisited paper

2014-08-15 Thread Pierz
l keep some kind of 
objective "stuff" at its centre runs very deep.

Plus I don't believe it can be said that Bruno's theory makes everything 
clear with respect to consciousness, as I've argued elsewhere. We might 
hope that a theory based purely on a mathematical ontology would not have 
to resort to an apparently magical proposition like there *being* an 
interior perspective to mathematics.  We have no reason to imagine that 
there should be one, other perhaps than the fact that *we* are conscious. 
So the description of what mathematics is has this dimension of interiority 
added to it by the comp assumption - and the only answer as to "why" is 
that there is no answer. So some magic brute fact remains, albeit within a 
nicely unified ontological framework. I would say only that I have little 
reason to go on thinking of this mathematical Platonia as purely 
mathematical. Perhaps all is subsumed within consciousness itself, and 
mathematics is an emergent phenomenon so long as our consciousness remains 
limited within Form, which by its nature demands self-consistency. Sheesh, 
getting very mystical here. Enough.





Re: MGA revisited paper

2014-08-15 Thread meekerdb

On 8/15/2014 5:30 PM, Jesse Mazer wrote:



I think it is possible to have a different definition of when a computation is 
"instantiated" in the physical world that prevents recordings from being conscious, a 
solution which doesn't actually depend on counterfactuals at all. I described it in the 
post at http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html  (or 
https://groups.google.com/d/msg/everything-list/GC6bwqCqsfQ/rFvg1dnKoWMJ on google 
groups). Basically the idea is that in any system following mathematical rules, 
including both abstract Turing machines and the physical universe, everything about its 
mathematical structure can be encoded as a (possibly infinite) set of logical 
propositions. So if you have a Turing machine running whose computations over some 
finite period are supposed to correspond to a particular "observer moment", you can take 
all the propositions dealing with the Turing machine's behavior during that period 
(propositions like "on time-increment 107234320 the read/write head moved to square 
2398311 and changed the digit there from 0 to 1, and changed its internal state from M 
to Q"), and look at the structure of logical relations between them (like "proposition A 
and B together imply proposition C, proposition B and C together do not imply A", etc.). 
Then for any other computation or even any physical process, you can see if it's 
possible to find a set of propositions with a completely *isomorphic* logical structure.


But physical processes don't have *logical* structure.  Theories of physical processes do, 
but I don't think that serves your purpose. And even restricting the domain to Turing 
machines, I don't see what proposition A and proposition B are?  Aren't they just the 
transition diagram of the Turing machine?  So if the Turing machine goes thru the same set 
of states, that set defines an equivalence class of computations.  But what about a 
different Turing machine that computes the same function?  It may not go thru the same 
states even for the same input and output.  In fact there is one such Turing machine that 
just executes the recording.  Right?
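(A small illustration of that last worry, invented here rather than taken from the thread: two programs that compute the same function while passing through entirely different intermediate states, so any criterion keyed to the state sequence will separate them.)

def sum_by_iteration(n: int) -> int:
    # Walks through n partial sums: 1, 3, 6, 10, ...
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n: int) -> int:
    # A single arithmetic step, no intermediate partial sums at all.
    return n * (n + 1) // 2

assert sum_by_iteration(100) == sum_by_formula(100) == 5050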


Brent


Re: MGA revisited paper

2014-08-15 Thread meekerdb

On 8/15/2014 7:07 PM, Pierz wrote:
Liz, I've been thinking about the best way to illustrate the core of the MGA and Olympia 
arguments. Perhaps this will help.


The Olympia idea is indeed a "baroque" construction as the paper nicely puts it, but I 
suspect Maudlin was trying to illustrate what amounts to a simple intuition, namely that 
when a specific calculation is carried out (in any medium), the machine carrying it out 
is physically like Olympia: there is a single sequence of steps in which one can ignore 
the inactive counterfactuals completely. All the complexity and intelligence is in the 
capacity of the computer to handle many possible inputs, yet when any specific input 
goes through, we can always think of the computer as an Olympia that can only do that 
one task. To realize this we can imagine that we remove all the circuits and pathways 
that weren't actually employed. All the baroque elements of hoses, troughs and rusting 
gates really just serve to make this point particularly clear. Another way to think 
about this in purely logical/arithmetic terms is to imagine a program to calculate the 
sum of any two positive integers x and y. Imagine the program does this by adding 1 to x 
sequentially, each time asking itself have I done this 'y' times yet? If yes, stop and 
output x, if no, do it again. For the inputs 1 and 2, we get:


x=1, y=2
x=x+1=2, count=1. Is count=y? no, so repeat
x=x+1=3, count=2. Is count=y? yes, so output
output x (3)

Now I want to remove the capacity to handle counterfactuals, that is to say, I want to 
remove all decision making logic from the machine but still let it output 3 for the 
inputs 1 and 2. How do I do it and still get the result of 3 for those inputs? The answer 
depends on whether the machinery that performs each step of the calculation is reused or 
not. If it isn't, then the size of the calculation I can perform will be limited by the 
size of the machinery (I need to repeat the mechanism y times to add the number y to x), 
but if I imagine I can manufacture new machinery on the fly or simply have an infinite 
machine, then this is not an issue. If I don't reuse the machinery, then the way to 
remove the capacity to handle counterfactuals is to solder shut the logic gates  such 
that the machine (program) now looks like:


x=1, y=2
x=x+1=2 repeat
x=x+1=3 output
output x(3)

We can see that it becomes a machine for counting to 3. This is Olympia (on a very small 
scale).
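(A sketch of the two machines just described -- my own rendering of the pseudo-code above, so treat the details as illustrative only.)

def add_with_counterfactuals(x: int, y: int) -> int:
    # The general machine: the "Is count = y?" test is the counterfactual-handling logic
    # that lets it cope with any inputs.
    count = 0
    while count != y:
        x += 1
        count += 1
    return x

def add_soldered_1_and_2() -> int:
    # The "Olympia" version for the fixed inputs 1 and 2: the test is soldered shut,
    # so it can only ever count to 3.
    x = 1
    x += 1    # first step, no question asked
    x += 1    # second step, no question asked
    return x  # always 3

assert add_with_counterfactuals(1, 2) == add_soldered_1_and_2() == 3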


However if I *do* reuse the machinery, then I can't solder the logic gates in any fixed 
position because I still need to know when to stop and output the result. In this case I 
simply hard wire the machine to run the addition step exactly twice before outputting, 
which is exactly what the 'filmed graph' scenario does - it removes the logic gates and 
remotely controls the operation of the machine in a fixed, mindless fashion. 
Interestingly this is what robots do in manufacturing plants. Because the routine is the 
same each time, there's no need for all that human counterfactual processing capacity - 
just record the sequence and output it over and over.


The reason why the MGA is possibly less convincing (superficially) is that it's not 
obviously the same physical process being carried out when the recorded light beam 
activates the nodes as when they are activated by the logic of the connections between 
them. Maudlin removes that possible (meretricious) objection by having machinery that 
can't be reused. That way he can more easily show the physical equivalence of the 
process in both cases.


Finally though, there is another possible objection even if we accept this type of 
argument. That is to say that, sure, consciousness must supervene on the logic not the 
physical operation. However we still insist that physical instantiation of the logic is 
required to instantiate the relevant consciousness. i.e., consciousness supervenes on 
logic + matter, or the logical organization of matter. This would be Brent's position I 
believe. Now Bruno counters this by calling it disgraceful and ad hoc, yet perhaps we 
can read from the uncharacteristically emotive adjective that he senses a weakness here. 
To be clear, I tend to agree with Bruno's conclusion, but I fear that the acceptance of 
this theory will always stumble over this point, because for materialists there is 
already some assumed ontological magic to matter. It's what "puts the fire in the 
equations" or whatever.


I don't want to pick on your post because I pretty much agree with it.  But physicists 
like Wheeler who ask, "What puts fire in the equations?" are really mystics like Bruno.  
My attitude is we found a fire and here's an equation that describes it.


Bruno's theory gets around the magic of material instantiation, the "brute fact" of 
something happening to exist physically, by showing how everything is instantiated in 
some relative perspective interior to arithmetic. That is very elegant and nice. But 

Re: MGA revisited paper

2014-08-15 Thread Jesse Mazer
On Fri, Aug 15, 2014 at 11:09 PM, meekerdb  wrote:

>  On 8/15/2014 5:30 PM, Jesse Mazer wrote:
>
>
>
>  I think it is possible to have a different definition of when a
> computation is "instantiated" in the physical world that prevents
> recordings from being conscious, a solution which doesn't actually depend
> on counterfactuals at all. I described it in the post at
> http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html
>  (or
> https://groups.google.com/d/msg/everything-list/GC6bwqCqsfQ/rFvg1dnKoWMJ
> on google groups). Basically the idea is that in any system following
> mathematical rules, including both abstract Turing machines and the
> physical universe, everything about its mathematical structure can be
> encoded as a (possibly infinite) set of logical propositions. So if you
> have a Turing machine running whose computations over some finite period
> are supposed to correspond to a particular "observer moment", you can take
> all the propositions dealing with the Turing machine's behavior during that
> period (propositions like "on time-increment 107234320 the read/write head
> moved to square 2398311 and changed the digit there from 0 to 1, and
> changed its internal state from M to Q"), and look at the structure of
> logical relations between them (like "proposition A and B together imply
> proposition C, proposition B and C together do not imply A", etc.). Then
> for any other computation or even any physical process, you can see if it's
> possible to find a set of propositions with a completely *isomorphic*
> logical structure.
>
>
> But physical processes don't have *logical* structure.  Theories of
> physical processes do, but I don't think that serves your purpose.
>

Propositions about physical processes have a logical structure, don't they?
And wouldn't such propositions--if properly defined using variables that
appear in whatever the correct fundamental theory turns out to be--have
objective truth-values?

Also, would you say physical processes don't have a mathematical structure?
If you would say that, what sort of "structure" would you say they *do*
have, given that we have no way of empirically measuring any properties
other than ones with mathematical values? Any talk of physical properties
beyond mathematical ones gets into the territory of some kind of
"thing-in-itself" beyond all human comprehension.



>   And even restricting the domain to Turing machines, I don't see what
> proposition A and proposition B are?
>

They could be propositions about basic "events" in the course of the
computation--state changes of the Turing machine and string on each
time-step, like the example I gave "on time-increment 107234320 the
read/write head moved to square 2398311 and changed the digit there from 0
to 1, and changed its internal state from M to Q". There would also have to
be propositions for the general rules followed by the Turing machine, like
"if the read/write head arrives at a square with a 1 and the machine's
internal state is P, change the 1 to a 0, change the internal state to S,
and advance along the tape by 3 squares".
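To make that concrete (a toy machine invented for illustration, not anything specified in the thread): run a tiny transition table and emit one such "event" proposition per time-step.

# (state, symbol) -> (new_symbol, head_move, new_state)
RULES = {("M", 0): (1, +1, "Q"),
         ("Q", 1): (1, +1, "M"),
         ("M", 1): (0, -1, "HALT")}

def run(tape, state="M", pos=0, max_steps=10):
    props = []
    for t in range(max_steps):
        if state == "HALT" or (state, tape[pos]) not in RULES:
            break
        new_sym, move, new_state = RULES[(state, tape[pos])]
        props.append(f"at step {t} the head at square {pos} rewrote {tape[pos]} as "
                     f"{new_sym} and the state changed from {state} to {new_state}")
        tape[pos] = new_sym
        pos += move
        state = new_state
    return props

for prop in run([0, 1, 1, 0]):
    print(prop)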




>   Aren't they just they transition diagram of the Turing machine?  So if
> the Turing machine goes thru the same set of states that set defines an
> equivalence class of computations.  But what about a different Turing
> machine that computes the same

Re: MGA revisited paper

2014-08-15 Thread Pierz


On Saturday, August 16, 2014 1:48:03 PM UTC+10, Brent wrote:
>
> I don't want to pick on your post because I pretty much agree with it.  
> But physicists like Wheeler who ask, "What puts fire in the equations." are 
> really mystics like Bruno.  My attitude is we found a fire and here's an 
> equation that describes it.
>

Sure



Re: MGA revisited paper

2014-08-16 Thread LizR
Pierz, you have said exactly the reason why I am willing to give Bruno's
ideas so much time. It's the fact that IF he's right, then he has actually
caught sight of the end of the explanatory chain, which otherwise has only
ever been grounded in an unsatisfactory deity or a "chain of turtles" -
i.e. it's thought to never end - or it ends at a brute fact of some sort,
some "shut up and calculate" beyond which we supposedly can't go.

A TOE should start from something that's necessarily so, and so far the
only thing I've ever come across that's necessarily so is stuff like 1+1=2,
with apologies to Stephen P King and anyone else who thinks we just made
that up. But so far there isn't anything else except God, turtles and shut
up... is there?

Admittedly we may just not have thought of the correct end-of-chain yet, so
this may be like looking for your keys under a lamp-post because that's the
well lit part of the street. But it's always *possible* the keys are in the
well-lit part... Hence I give a lot of mental houseroom to comp, and any
other theory that starts from something that's grounded in (apparent)
logical necessity. Are there any other such theories? I have a feeling that
"it from bit" goes in that sort of direction, as does A. Garrett Lisi, Max
T of course, Julian Barbour? I guess any TOE which claims that some set of
equations is isomorphic to the universe is nodding in that direction, and
as Max Tegmark says we just need to reduce the baggage allowance. Even
Edgar Owen's computational idea has some merit on the "it from bit" front
(although I don't think it's particularly original ... and of course it
fails to address about 99% of known physics.)



Re: MGA revisited paper

2014-08-16 Thread Pierz
I've been thinking more on the lookup table business and my suggestion that 
the lookup table becomes so large in mapping all input-outputs that it ends 
up being the same as doing the computation. It's wrong, so long as we only 
record some final behavioural output and not the actual "machine states". 
However if by a recording, we mean a recording of all the machine's 
intermediate states, as in the MGA, then my argument holds. In that case, 
the work required to "find" the machine's state in some static table is as 
much as that required to do the calculation. I'm trying to work that out 
formally.
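One way to make the intuition concrete (a sketch of my own, not the formal working-out promised above): even with the table precomputed, walking to row x and then to column y repeats the very counting that the addition consists of.

N = 100
table = [[a + b for b in range(N)] for a in range(N)]   # precomputed sums

def lookup_sum(x: int, y: int) -> int:
    # Find table[x][y] without random access: step to row x, then step to column y.
    # The stepping itself counts to x and then to y -- i.e. it re-does the addition.
    rows = iter(table)
    for _ in range(x):
        next(rows)
    cols = iter(next(rows))
    for _ in range(y):
        next(cols)
    return next(cols)

assert lookup_sum(10, 70) == 80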




Re: MGA revisited paper

2014-08-16 Thread LizR
On 16 August 2014 16:48, Pierz  wrote:

> I assert this confidently on the basis of my intuitions as a programmer,
> without being able to rigorously prove it, but a short thought experiment
> should get halfway to proving it. Imagine a lookup table of all possible
> additions of two numbers up to some number n. First I calculate all the
> possible results and put them into a large n by n table. Now I'm asked what
> is the sum of say 10 and 70. So I go across to row 10 and column 70 and
> read out the number 80. But in doing so, I've had to count to 10 and to 70!
> So I've added the two numbers together finding the correct value to look
> up! I'm sure the same equivalence could be proven to apply in all analogous
> situations.
>
>>
>> But if your table gives the results of multiplying them, you get a
slightly free lunch (actually I have a nasty feeling you have to perform a
multiplication to get an answer from an NxN grid ... to get to row 70,
column 10, don't you count N x 70 + 10?)

So suppose your table gives the result of dividing them, I'm sure you're
getting at least a cheap lunch now?

Sorry this is probably complete nitpicking. I can see that the humungous
L.T. needed to speak Chinese would require astronomical calculations to
find the right answer, which does probably prove the point.
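(For what it's worth, the indexing arithmetic gestured at above is just the usual row-major address calculation; a tiny sketch, purely illustrative.)

N = 100
flat_table = [r * c for r in range(N) for c in range(N)]   # an N x N multiplication table, stored flat

def flat_lookup(row: int, col: int, width: int = N) -> int:
    # Reaching entry (row, col) in flat storage already costs a multiplication and an addition.
    return flat_table[row * width + col]

assert flat_lookup(70, 10) == 700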



Re: MGA revisited paper

2014-08-16 Thread Richard Ruquist
Bruno IMO does not end the chain so-to-speak because he does not say where
the natural numbers come from other than invoking Platonia. Super-string
theory does. But it invokes even more turtles, like where do the ten
dimensions come from.
http://vixra.org/pdf/1101.0044v1.pdf




Re: MGA revisited paper

2014-08-16 Thread LizR
Um, I hadn't read your subsequent posts when I wrote the above. It looks
like this is quite complicated, and I'm not going to bother my pretty head
trying to be clever about it when you're obviously far more so on this
subject.



Re: MGA revisited paper

2014-08-16 Thread LizR
On 16 August 2014 22:45, Richard Ruquist  wrote:

> Bruno IMO does not end the chain so-to-speak because he does not say where
> the natural numbers come from other than invoking Platonia. Super-string
> theory does. But it invokes even more turtles, like where do the ten
> dimensions come from.
> http://vixra.org/pdf/1101.0044v1.pdf
>
Are the natural numbers the integers?

If so I don't think he needs to say where they come from. They may
exist (abstractly) from logical necessity, that is they couldn't be any
other way in any possible world. This is of course a bone of contention,
because some people think there's nothing natural about 1+1=2, but it seems
to me, at least, less contentious than any of the other contenders,
although I'm willing to entertain any possibilities that anyone suggests,
when that happens (except God, I've worked out that using something
infinitely complicated to explain the world is a retrograde step).

I'm pretty sure string theory is mathematical in form, and so can't be the
end of the chain because it is relying on maths - hence it has (at least)
one lower level in explanatory space, so to speak.



Re: MGA revisited paper

2014-08-16 Thread Pierz


On Saturday, August 16, 2014 8:35:23 PM UTC+10, Liz R wrote:
>
> Pierz, you have said exactly the reason why I am willing to give Bruno's 
> ideas so much time. It's the fact that IF he's right, then he has actually 
> caught sight of the end of the explanatory chain, which otherwise has only 
> ever been grounded in an unsatisfactory deity or a "chain of turtles" - 
> i.e. it's thought to never end - or it ends at a brute fact of some sort, 
> some "shut up and calculate" beyond which we supposedly can't go.
>
> A TOE should start from something that's necessarily so, and so far the 
> only thing I've ever come across that's necessarily so is stuff like 1+1=2, 
> with apologies to Stephen P King and anyone else who thinks we just made 
> that up. But so far there isn't anything else except God, turtles and shut 
> up  is there?
>

Not that I know of. And in fact, if you're looking for something "necessarily 
so", then as far as I can tell logic and maths are not just the lighted bit 
of the street: there's nothing outside of the lighted bit, because only in 
maths can you find what is "necessarily so". I suppose the question then 
is: is the universe "necessarily so", or just a brute fact? Or, on the other 
hand, are we embedded in infinities which mean that nothing is a brute 
fact, everything having an explanation, but also no ultimate explanation 
(turtles forever!)?

>
> Admittedly we may just not have thought of the correct end-of-chain yet, 
> so this may be like looking for your keys under a lamp-post because that's 
> the well lit part of the street. But it's always *possible* the keys are 
> in the well-lit part Hence I give a lot of mental houseroom to comp, 
> and any other theory that starts from something that's grounded in 
> (apparent) logical necessity. Are there any other such theories? I have a 
> feeling that "it from bit" goes in that sort of direction, as does A. 
> Garrett Lisi, Max T of course, Julian Barbour? 
>

It from bit inverts the ontological priority of matter and information, but 
it's unclear what the information is "floating around in". The information 
space still seems arbitrary, but then I don't know Wheeler's work well.

> I guess any TOE which claims that some set of equations is isomorphic to 
> the universe is nodding in that direction, and as Max Tegmark says we just 
> need to reduce the baggage allowance. Even Edgar Owen's computational idea 
> has some merit on the "it from bit" front (although I don't think it's 
> particularly original ... and of course it fails to address about 99% of 
> known physics.)
>

Oh please, Edgar is a crank, pure and simple.



Re: MGA revisited paper

2014-08-16 Thread Pierz


On Saturday, August 16, 2014 8:45:47 PM UTC+10, yanniru wrote:
>
> Bruno IMO does not end the chain so-to-speak because he does not say where 
> the natural numbers come from other than invoking Platonia. Super-string 
> theory does. But it invokes even more turtles, like where do the ten 
> dimensions come from.
> http://vixra.org/pdf/1101.0044v1.pdf
>

Agree with Liz on this one. It seems much more reasonable to believe that 
string theory derives from maths than the other way around. String theory 
is a mathematical theory, and is therefore necessarily subsumed by 
mathematics in general, and specifically by computable mathematics, 
including Peano arithmetic.
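
Just to make "computable" concrete, here is a throwaway sketch (my own, not
anything from Bruno's or Russell's work) of Peano-style addition as a
program, the point being only that this fragment of arithmetic is something
a machine can check:

# Peano naturals encoded as nested successors: zero() is 0, succ(n) is n+1.
def zero():
    return ('0',)

def succ(n):
    return ('S', n)

def add(m, n):
    # Peano recursion: m + 0 = m, and m + succ(k) = succ(m + k)
    if n == zero():
        return m
    _, k = n
    return succ(add(m, k))

one = succ(zero())
two = succ(one)
assert add(one, one) == two   # 1 + 1 = 2, established by computation alone
print("1 + 1 = 2 checks out")

Obviously string theory needs vastly more than this, but whatever it needs
is still maths, which was my point.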

