Re: Uploaded Worm Mind

2015-09-06 Thread Bruno Marchal


On 04 Sep 2015, at 20:26, meekerdb wrote:


On 9/4/2015 7:35 AM, Bruno Marchal wrote:


On 03 Sep 2015, at 20:26, meekerdb wrote:


On 9/3/2015 8:35 AM, Bruno Marchal wrote:


On 02 Sep 2015, at 22:48, meekerdb wrote:


On 9/2/2015 8:25 AM, Bruno Marchal wrote:
So now you agree with me that there are different kinds and  
degrees of consciousness; that it is not just a binary  
attribute of an axiom + inference system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps
even some viruses, but on a different time scale than us.




If they can be conscious, but not self-conscious then there  
are two kinds of "being conscious".


Yes, at least two kinds, but each arithmetical hypostasis
having either "<>t" or "& p" describes a type of consciousness,
I would say.
And they all differentiate on the infinitely many versions of
"[]A", be it the "[]" predicate of PA, of ZF, of an amoeba, or of you
and me ...
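(For readers keeping track of the notation: the list below is my reconstruction of the hypostases Bruno usually gives, from his published summaries; it is a sketch, not a quote. Here "[]" is the machine's provability box and "<>" its dual.)

\[
\begin{array}{ll}
p & \text{(truth)}\\
\Box p & \text{(provability: the logics G and G*)}\\
\Box p \land p & \text{(the knower: S4Grz)}\\
\Box p \land \Diamond t & \text{(intelligible matter: Z and Z*)}\\
\Box p \land \Diamond t \land p & \text{(sensible matter: X and X*)}
\end{array}
\]

Three of these (the box and the two matter variants) each split, by incompleteness, into a provable (G) part and a true-but-unprovable (G*) part, which is how one arrives at the usual count of eight hypostases.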


So if there are different kinds of consciousness then a being
with more kinds is more conscious.  It seems that your dictum,
"You're either conscious or not," is being diluted away to a mere
slogan.



There are basically two levels, without a criterion of
decidability, but with simple operational definitions:


1) something is conscious if it is torturable, and it is arguably
ethically wrong to do so.


So when Captain Segura tells Wormold that he's "not of the torturable
class" he means he's not conscious.  :-)


You might need to give some references here, I'm afraid.



It's from "Our Man In Havana" by Grahame Green.  Only poor Cubans  
are in the torturable class, not Englishmen.


So the Englishmen are not conscious, which might explain some things...

Well, of course by "torturable" I meant "judged as being able to feel  
pain", not "judged as not being able to feel plain", like some people  
believed it is the case for animals.











How is this an operational definition?  What is the operation to
determine whether a being is torturable?


You make the torture public, and if you are sent to jail, the
entity is conscious, at least in the 3-1 view of the people you are
living with.


You mean the people who sent me to jail are conscious, i.e. they  
have empathy which implies they are conscious.  But that doesn't  
really solve the problem.  They might just be pretending empathy.


If they pretend empathy, they are self-conscious, even if you are not.
Pretending and lying require self-consciousness.





And it doesn't help with my design of a Mars Rover.  Will it be  
conscious only if I program it to show empathy when another Mars  
Rover is tortured?  Does a jumping spider show empathy when a fly is  
tortured, or only when another jumping spider is tortured?


I don't know. I am not sure spiders (even jumping spiders) have a lot of
empathy.


In matters of consciousness, there are no definite criteria, and the
operational or quasi-operational criterion I am suggesting is meant to
give an idea of a sufficient condition for attributing consciousness,
certainly not for NOT attributing it. Ethically, it is better
to attribute consciousness wrongly than to attribute absence of
consciousness wrongly.













I think all invertebrates are already at that level, and in
arithmetic that might correspond to sigma_1 completeness (Turing
universality). Robinson Arithmetic and the universal dovetailer are
at that level.
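(A gloss on "sigma_1 complete", in my paraphrase of the standard theorem: Robinson Arithmetic Q proves every true Sigma_1 sentence, and Sigma_1 sentences are exactly the "this computation halts" statements, which is why Sigma_1 completeness is the arithmetical face of Turing universality.)

\[
\text{For every } \Sigma_1 \text{ sentence } \sigma:\qquad \mathbb{N} \models \sigma \;\Longrightarrow\; Q \vdash \sigma.
\]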


2) something is self-conscious if it is Löbian, basically if it is
aware of its unnameable name. PA and ZF are "at
that level", like all their sound recursively enumerable
extensions. At that level, the entity is able to ascribe
consciousness to another, and can get the moral understanding
of good and wrong (with or without a forbidden fruit).
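(For reference, "Löbian" points to Löb's theorem. In my paraphrase: a Löbian machine is one that, beyond Sigma_1 completeness, proves the Löb formula for its own provability box,)

\[
\Box(\Box p \to p) \to \Box p,
\]

(equivalently: if it proves []p -> p, then it proves p. The "unnameable name" alludes, I take it, to the fact that such a machine can represent its own provability predicate but, by Tarski's theorem, cannot define its own truth predicate.)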


What's the operation to determine it is aware of its unnameable name?


OK: you torture a fellow, and all the people complaining about
this can be said to have the ability to ascribe consciousness to
others.


In principle you have to repeat this often to avoid the partial
zombie case. The criteria are operational in the weak sense of
making the statement plausible, as we know already that there is no
definite criterion for consciousness. We might not be able to
convince an alien about this.


Essentially you are saying: just rely on your intuition about what's
conscious and what's not.  But as Scott Aaronson points out, we seek a
theory of consciousness that we can apply to machines and aliens,
where our intuition doesn't work.


But this was already given. My current theoretical attribution is
simple: Turing universality is enough for consciousness, and Löbianity
("awareness of one's own Turing universality") is enough for self-
consciousness. But there is no mechanical criterion to recognize
Turing universality, nor indeed any non-trivial property of programs
(cf. Rice's theorem). There ...

Re: Uploaded Worm Mind

2015-09-06 Thread Bruno Marchal


On 06 Sep 2015, at 02:52, Pierz wrote:




On Friday, September 4, 2015 at 1:35:50 AM UTC+10, Bruno Marchal  
wrote:


On 02 Sep 2015, at 22:48, meekerdb wrote:


On 9/2/2015 8:25 AM, Bruno Marchal wrote:
So now you agree with me that there are different kinds and  
degrees of consciousness; that it is not just a binary  
attribute of an axiom + inference system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps
even some viruses, but on a different time scale than us.




If they can be conscious, but not self-conscious then there are  
two kinds of "being conscious".


Yes, at least two kinds, but each arithmetical hypostasis having
either "<>t" or "& p" describes a type of consciousness, I would
say.
And they all differentiate on the infinitely many versions of
"[]A", be it the "[]" predicate of PA, of ZF, of an amoeba, or of you
and me ...


So if there are different kinds of consciousness then a being with
more kinds is more conscious.  It seems that your dictum, "You're
either conscious or not," is being diluted away to a mere slogan.



There are basically two levels, without a criterion of decidability,
but with simple operational definitions:


1) something is conscious if it is torturable, and it is arguably
ethically wrong to do so. I think all invertebrates are already
at that level, and in arithmetic that might correspond to
sigma_1 completeness (Turing universality). Robinson Arithmetic and
the universal dovetailer are at that level.
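(For concreteness, here is a minimal sketch, in Python, of the scheduling idea behind a universal dovetailer: run every program a little, forever, so that no non-halting program ever blocks the others. The program enumeration below is a toy stand-in, not a real universal enumeration.)

    from itertools import count

    def make_program(i):
        """Toy stand-in for 'the i-th program': halts after i steps
        when i is even, runs forever when i is odd."""
        def gen():
            for step in count():
                if i % 2 == 0 and step >= i:
                    return
                yield
        return gen()

    def dovetail(stages):
        """Stage n advances programs 0..n-1 by one step each, so every
        program eventually receives unboundedly many steps."""
        procs = {}
        for n in range(1, stages + 1):
            for i in range(n):
                proc = procs.setdefault(i, make_program(i))
                try:
                    next(proc)
                except StopIteration:
                    pass  # program i has halted; keep dovetailing the rest

    dovetail(100)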


How does one torture arithmetic? Hold on, I was probably guilty of  
that in school... Oh the guilt!


Like Alice who was beating time ... lol




But seriously, why torturable as the criterion? Isn't a conscious
being incapable of pain perfectly conceivable? (Like the woman I
heard of recently who is incapable of fear because her amygdala is
calcified. And there are people who can't feel physical pain. So
take away fear and pain, and torture becomes rather difficult to
execute.)


You are right. I was giving a sufficient criterion, not a necessary
one, as I think that does not exist. We can't even recognize whether a
program computes the factorial function or not (Rice's theorem, an easy
consequence of the second recursion theorem of Kleene ... I can come
back on this someday).
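(To spell out the Rice-theorem point with a toy sketch: any decider for "this program computes factorial" would decide the halting problem. All names below are hypothetical, and computes_factorial is precisely the decider that cannot exist.)

    def computes_factorial(f):
        # The assumed decider for "f computes the factorial function".
        # Rice's theorem says no such total decider exists.
        raise NotImplementedError("impossible by Rice's theorem")

    def make_probe(program, inp):
        """Return a function that computes factorial on every n
        exactly when program(inp) halts."""
        def probe(n):
            program(inp)              # diverges iff program(inp) diverges
            result = 1
            for k in range(2, n + 1):
                result *= k
            return result
        return probe

    def halts(program, inp):
        # If computes_factorial existed, this would decide halting,
        # a contradiction.
        return computes_factorial(make_probe(program, inp))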


To torture an arithmetic, extend it consistently into a terrestrial  
(or oceanic) creature, let it meet the humans ...


Bruno






2) something is self-conscious if it is Löbian, basically if it is
aware of its unnameable name. PA and ZF are "at that level", like all
their sound recursively enumerable extensions. At that level, the
entity is able to ascribe consciousness to another, and can get the
moral understanding of good and wrong (with or without a
forbidden fruit).


But the content of the consciousness can be extremely variable, and
then there are many different types of consciousness states. By
incompleteness, the machine's psychology is transfinitely rich. The
first person self is not a machine from the machine's first person
perspective. Machines are naturally non-computationalist, and the
origin of consciousness is plausibly more on the side of truth
than on the side of representation.


Bruno







Brent



http://iridia.ulb.ac.be/~marchal/






http://iridia.ulb.ac.be/~marchal/





Re: Uploaded Worm Mind

2015-09-05 Thread Pierz


On Friday, September 4, 2015 at 1:35:50 AM UTC+10, Bruno Marchal wrote:
>
>
> On 02 Sep 2015, at 22:48, meekerdb wrote:
>
> On 9/2/2015 8:25 AM, Bruno Marchal wrote:
>
> So now you agree with me that there are different kinds and degrees of 
> consciousness; that it is not just a binary attribute of an axiom + 
> inference system. 
>
>
> ? 
>
> Either you are conscious, or you are not. 
>
>
> But is a roundworm either conscious or not?  an amoeba? 
>
>
> I don't know, but I think they are. Even bacteria, and perhaps even some 
> viruses, but on a different time scale than us. 
>
>
>
> If they can be conscious, but not self-conscious then there are two kinds 
> of "being conscious". 
>
>
> Yes, at least two kinds, but each arithmetical hypostasis having either 
> "<>t" or "& p" describes a type of consciousness, I would say. 
> And they all differentiate on the infinitely many versions of "[]A", be it 
> the "[]" predicate of PA, of ZF, of an amoeba, or of you and me ... 
>
>
> So if there are different kinds of consciousness then a being with more 
> kinds is more conscious.  It seems that your dictum, "You're either conscious 
> or not," is being diluted away to a mere slogan.
>
>
>
> There are basically two levels, without a criterion of decidability, but 
> with simple operational definitions:
>
> 1) something is conscious if it is torturable, and it is arguably ethically 
> wrong to do so. I think all invertebrates are already at that level, and 
> in arithmetic that might correspond to sigma_1 completeness (Turing 
> universality). Robinson Arithmetic and the universal dovetailer are at that 
> level.
>

How does one torture arithmetic? Hold on, I was probably guilty of that in 
school... Oh the guilt! But seriously, why torturable as the criterion? 
Isn't a conscious being incapable of pain perfectly conceivable? (Like the 
woman I heard of recently who is incapable of fear because her amygdala is 
calcified. And there are people who can't feel physical pain. So take away 
fear and pain, and torture becomes rather difficult to execute.)
 

>
> 2) something is self-conscious if it is Löbian, basically if it is aware of 
> its unnameable name. PA and ZF are "at that level", like all their sound 
> recursively enumerable extensions. At that level, the entity is able to 
> ascribe consciousness to another, and can get the moral understanding 
> of good and wrong (with or without a forbidden fruit). 
>
> But the content of the consciousness can be extremely variable, and then 
> there are many different types of consciousness states. By incompleteness, 
> the machine's psychology is transfinitely rich. The first person self is not a 
> machine from the machine's first person perspective. Machines are naturally 
> non-computationalist, and the origin of consciousness is plausibly more on 
> the side of truth than on the side of representation.
>
> Bruno
>
>
>
>
>
>
> Brent
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: Uploaded Worm Mind

2015-09-04 Thread Bruno Marchal


On 03 Sep 2015, at 20:26, meekerdb wrote:


On 9/3/2015 8:35 AM, Bruno Marchal wrote:


On 02 Sep 2015, at 22:48, meekerdb wrote:


On 9/2/2015 8:25 AM, Bruno Marchal wrote:
So now you agree with me that there are different kinds and  
degrees of consciousness; that it is not just a binary  
attribute of an axiom + inference system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps
even some viruses, but on a different time scale than us.




If they can be conscious, but not self-conscious then there are  
two kinds of "being conscious".


Yes, at least two kinds, but each arithmetical hypostasis having
either "<>t" or "& p" describes a type of consciousness, I would
say.
And they all differentiate on the infinitely many versions of
"[]A", be it the "[]" predicate of PA, of ZF, of an amoeba, or of you
and me ...


So if there are different kinds of consciousness then a being with
more kinds is more conscious.  It seems that your dictum, "You're
either conscious or not," is being diluted away to a mere slogan.



There are basically two levels, without a criterion of decidability,
but with simple operational definitions:


1) something is conscious if it is torturable, and it is arguably
ethically wrong to do so.


So when Captain Segura tells Wormold that he's "not of the torturable
class" he means he's not conscious.  :-)


You might need to give some references here, I'm afraid.





How is this an operational definition?  What is the operation to
determine whether a being is torturable?


You make the torture public, and if you are sent to jail, the entity
is conscious, at least in the 3-1 view of the people you are living
with.







I think all invertebrates are already at that level, and in
arithmetic that might correspond to sigma_1 completeness (Turing
universality). Robinson Arithmetic and the universal dovetailer are
at that level.


2) something is self-conscious if it is Löbian, basically if it is
aware of its unnameable name. PA and ZF are "at that level", like all
their sound recursively enumerable extensions. At that level, the
entity is able to ascribe consciousness to another, and can get the
moral understanding of good and wrong (with or without a
forbidden fruit).


What's the operation to determine it is aware of its unnameable name?


OK: you torture a fellow, and all the people complaining about this
can be said to have the ability to ascribe consciousness to others.


In principle you have to repeat this often to avoid the partial zombie
case. The criteria are operational in the weak sense of making the
statement plausible, as we know already that there is no definite
criterion for consciousness. We might not be able to convince an
alien about this.


Bruno






Brent



But the content of the consciousness can be extremely variable, and
then there are many different types of consciousness states. By
incompleteness, the machine's psychology is transfinitely rich. The
first person self is not a machine from the machine's first person
perspective. Machines are naturally non-computationalist, and the
origin of consciousness is plausibly more on the side of truth
than on the side of representation.


Bruno





http://iridia.ulb.ac.be/~marchal/





Re: Uploaded Worm Mind

2015-09-04 Thread meekerdb

On 9/4/2015 7:35 AM, Bruno Marchal wrote:


On 03 Sep 2015, at 20:26, meekerdb wrote:


On 9/3/2015 8:35 AM, Bruno Marchal wrote:


On 02 Sep 2015, at 22:48, meekerdb wrote:


On 9/2/2015 8:25 AM, Bruno Marchal wrote:
So now you agree with me that there are different kinds and degrees of 
consciousness; that it is not just a binary attribute of an axiom + inference 
system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps even some viruses, 
but on a different time scale than us.




If they can be conscious, but not self-conscious then there are two kinds of "being 
conscious".


Yes, at least two kinds, but each arithmetical hypostasis having either "<>t" or "& 
p" describes a type of consciousness, I would say.
And they all differentiate on the infinitely many versions of "[]A", be it the "[]" 
predicate of PA, of ZF, of an amoeba, or of you and me ...


So if there are different kinds of consciousness then a being with more kinds is more 
conscious.  It seems that your dictum, "You're either conscious or not," is being 
diluted away to a mere slogan.



There are basically two levels, without a criterion of decidability, but with simple 
operational definitions:


1) something is conscious if it is torturable, and it is arguably ethically wrong to 
do so.


So when Captain Segura tells Wormold that he's "not of the torturable class" he means he's 
not conscious.  :-)


You might need to give some references here, I'm afraid.



It's from "Our Man In Havana" by Grahame Green.  Only poor Cubans are in the torturable 
class, not Englishmen.







How is this an operational definition?  What is the operation to determine whether a 
being is torturable?


You make the torture public, and if you are sent to jail, the entity is conscious, at 
least in the 3-1 view of the people you are living with.


You mean the people who sent me to jail are conscious, i.e. they have empathy which 
implies they are conscious.  But that doesn't really solve the problem.  They might just 
be pretending empathy. And it doesn't help with my design of a Mars Rover.  Will it be 
conscious only if I program it to show empathy when another Mars Rover is tortured?  Does 
a jumping spider show empathy when a fly is tortured, or only when another jumping spider 
is tortured?









I think all invertebrates are already at that level, and in arithmetic that might 
correspond to sigma_1 completeness (Turing universality). Robinson Arithmetic and the 
universal dovetailer are at that level.


2) something is self-conscious if it is Löbian, basically if it is aware of its 
unnameable name. PA and ZF are "at that level", like all their sound recursively 
enumerable extensions. At that level, the entity is able to ascribe consciousness to 
another, and can get the moral understanding of good and wrong (with or without a 
forbidden fruit).


What's the operation to determine it is aware of its unnameable name?


OK: you torture a fellow, and all the people complaining about this can be said to have 
the ability to ascribe consciousness to others.


In principle you have to repeat this often to avoid the partial zombie case. The 
criteria are operational in the weak sense of making the statement plausible, as we know 
already that there is no definite criterion for consciousness. We might not be able to 
convince an alien about this.


Essentially you are saying: just rely on your intuition about what's conscious and what's 
not.  But as Scott Aaronson points out, we seek a *theory* of consciousness that we can 
apply to machines and aliens, where our intuition doesn't work.


Brent



Re: Uploaded Worm Mind

2015-09-03 Thread Bruno Marchal


On 02 Sep 2015, at 22:48, meekerdb wrote:


On 9/2/2015 8:25 AM, Bruno Marchal wrote:
So now you agree with me that there are different kinds and  
degrees of consciousness; that it is not just a binary attribute  
of an axiom + inference system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps even
some viruses, but on a different time scale than us.




If they can be conscious, but not self-conscious then there are  
two kinds of "being conscious".


Yes, at least two kinds, but each arithmetical hypostasis having
either "<>t" or "& p" describes a type of consciousness, I would say.
And they all differentiate on the infinitely many versions of "[]A",
be it the "[]" predicate of PA, of ZF, of an amoeba, or of you and me ...


So if there are different kinds of consciousness then a being with
more kinds is more conscious.  It seems that your dictum, "You're
either conscious or not," is being diluted away to a mere slogan.



There are basically two levels, without a criterion of decidability, but
with simple operational definitions:


1) something is conscious if it is torturable, and it is arguably ethically
wrong to do so. I think all invertebrates are already at that
level, and in arithmetic that might correspond to sigma_1 completeness
(Turing universality). Robinson Arithmetic and the universal dovetailer
are at that level.


2) something is self-conscious if it is Löbian, basically if it is aware
of its unnameable name. PA and ZF are "at that level", like all their
sound recursively enumerable extensions. At that level, the entity is
able to ascribe consciousness to another, and can get the moral
understanding of good and wrong (with or without a forbidden fruit).


But the content of the consciousness can be extremely variable, and
then there are many different types of consciousness states. By
incompleteness, the machine's psychology is transfinitely rich. The first
person self is not a machine from the machine's first person
perspective. Machines are naturally non-computationalist, and the
origin of consciousness is plausibly more on the side of truth
than on the side of representation.


Bruno







Brent



http://iridia.ulb.ac.be/~marchal/





Re: Uploaded Worm Mind

2015-09-03 Thread meekerdb

On 9/3/2015 8:35 AM, Bruno Marchal wrote:


On 02 Sep 2015, at 22:48, meekerdb wrote:


On 9/2/2015 8:25 AM, Bruno Marchal wrote:
So now you agree with me that there are different kinds and degrees of 
consciousness; that it is not just a binary attribute of an axiom + inference system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps even some viruses, but 
on a different time scale than us.




If they can be conscious, but not self-conscious then there are two kinds of "being 
conscious".


Yes, at least two kinds, but each arithmetical hypostasis having either "<>t" or "& p" 
describes a type of consciousness, I would say.
And they all differentiate on the infinitely many versions of "[]A", be it the "[]" 
predicate of PA, of ZF, of an amoeba, or of you and me ...


So if there are different kinds of consciousness then a being with more kinds is more 
conscious.  It seems that your dictum, "You're either conscious or not," is being diluted 
away to a mere slogan.



There are basically two levels, without a criterion of decidability, but with simple 
operational definitions:


1) something is conscious if it is torturable, and it is arguably ethically wrong to 
do so.


So when Captain Segura tells Wormold that he's "not of the torturable class" he means he's 
not conscious.  :-)


How is this an operational definition?  What is the operation to determine whether a being 
is torturable?


I think all invertebrates are already at that level, and in arithmetic that might 
correspond to sigma_1 completeness (Turing universality). Robinson Arithmetic and the 
universal dovetailer are at that level.


2) something is self-conscious if it is Löbian, basically if it is aware of its unnameable 
name. PA and ZF are "at that level", like all their sound recursively enumerable 
extensions. At that level, the entity is able to ascribe consciousness to another, and 
can get the moral understanding of good and wrong (with or without a forbidden fruit).


What's the operation to determine it is aware of its unnameable name?

Brent



But the content of the consciousness can be extremely variable, and then there are many 
different types of consciousness states. By incompleteness, the machine's psychology is 
transfinitely rich. The first person self is not a machine from the machine's first person 
perspective. Machines are naturally non-computationalist, and the origin of 
consciousness is plausibly more on the side of truth than on the side of representation.


Bruno




Re: Uploaded Worm Mind

2015-09-03 Thread Jason Resch
Brent,

We had this exact discussion several months ago.

Perhaps this discussion is better suited for an English mailing list, but
last time the conclusion reached was that consciousness vs. unconsciousness
could be likened to zero vs. positive.

You seem to be making the argument that it is like finite vs. infinite,
with many ways of being finite and many ways of being infinite.

I lean towards Quentin's usage wherein there is only one way of being
unconscious, and many ways of being conscious. Sure, different states of
consciousness are conscious of different things, and you might say a
certain conscious being is not conscious of a certain thing, but I would
not say that the general English usage of "unconscious" could be applied to
that being, just because it lacks some God-like omniscient consciousness.

Jason

On Wednesday, September 2, 2015, meekerdb  wrote:
> On 9/2/2015 2:23 PM, Quentin Anciaux wrote:
>
>> So if there are different kinds of consciousness then a being with more
kinds is more conscious.  It seems that your dictum, "You're either conscious
or not," is being diluted away to a mere slogan.
>
> There is only one way of not being conscious, so you're either not
conscious or you're conscious whatever level it is.
>
> Question begging.  If there's more than one kind of consciousness, then
there is more than one kind of being unconscious.  Per Bruno's example one
could be unconscious of one's self.
>
> Brent
>



Re: Uploaded Worm Mind

2015-09-02 Thread Bruno Marchal


On 31 Aug 2015, at 20:11, meekerdb wrote:


On 8/31/2015 5:56 AM, Stathis Papaioannou wrote:



On Monday, August 31, 2015, Bruno Marchal  wrote:

On 31 Aug 2015, at 00:42, Russell Standish wrote:

On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:

On 30 Aug 2015, at 03:08, Russell Standish wrote:

Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.

I think personally that C. elegans, and Planaria (!), even amoebas,
are conscious, although very plausibly not self-conscious.

I tend to think since 2008 that even RA is already conscious, even
maximally so, and that PA is already as self-conscious as a
human (when in some dissociative state).

But I don't know if PA is more or less conscious than RA. That
depends on whether the role of the higher part of the brain consists in
filtering consciousness or enacting it.


But it probably won't be long before we simulate a mouse brain in  
toto

- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).

I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, and probably
more time will be needed for the rest of its brain. And the result
can be a conscious creature, with a quite different consciousness
than a rat, as I find it plausible that pain is related to the glial
cells and their metabolism, which are not taken into account by the
current "copies".

What is blocking us is not the computing power - already whole "rat
brain" simulations have been done in something like 1/1 of real
time - so all we need is about a decade of performance improvement
through Moore's law.
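(Back-of-envelope, with illustrative numbers only, since the slowdown figure above is garbled in the archive: if a simulation runs slower than real time by a factor S and performance doubles every T years, the wait is roughly)

\[
t \approx T \cdot \log_2 S,
\]

(so, for example, S = 100 with T = 1.5 years gives t of about 10 years, consistent with "about a decade".)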

What development is needed is ways of determining the neural
circuitry. There have been leaps and bounds in the process of slicing
frozen brains, and imaging the slices with electron microscopes, but
clearly it is still far too slow.

As for the hypothesis that glial cells have something to do with it,
well that can be tested via the sort of whole rat brain simulation
I've been talking about. Run the simulation in a robotic rat, and
compare the behaviour with a real rat. Basically what the open worm
guys are doing, but scaled up to a rat. If the simulation is way
different from the real rat, then we know something else is required.


I can imagine that the rat will have a "normal behavior", but as it
cannot talk to us, we might fail to appreciate some internal change
or even some anosognosia. The rat would not be a zombie rat, but
would still be in a quite different conscious state (perhaps better, as
it seems the glial cells might have some role in chronic pain).


In general, if there is a difference in consciousness then there  
should be a difference in behaviour. If the difference in  
consciousness is impossible to detect then arguably it is no  
difference.


I'd say more-than-arguably we don't know and can't know.  Which is  
why I think "the hard problem" will be dissolved by AI engineering  
rather than solved by philosophers.



That is plausible, and I think that is a frightening idea.

Worse, the problem might be solved by the philosopher, or theologian,
in the context of some theory/hypothesis, and yet be dissolved in the
usual authoritarian manner, for the usual political purposes.


Women have been able to vote only since recently. Not long ago, many
would have said that most "exotic foreigners" have no soul, which is
useful for practicing slavery without feeling guilty.


If an eliminativist, à la Churchland, understands the logic of the UDA,
then he has to eliminate physical reality too. But, having
eliminated conscious experience, he cannot regain the "illusion" of
matter, so physics (the science) disappears too, and that is refuted
by our common experience.


This explains also why computationalism *does* solve the hard problem,
in the sense that it explains, from the laws of addition and
multiplication only, how the pieces of computations logically appear
(p -> []p, for p sigma_1), and why universal numbers get entangled in
many deep computations (with "many" used in the Everett sense, and "deep"
used in the Bennett sense) which are "linear" (hopefully enough, but we
have the quantizations to verify that).
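(In my transcription of the standard notation: G is the Gödel-Löb provability logic, and the "1" that Bruno names below is the modal form of provable Sigma_1-completeness. The axioms, with modus ponens and necessitation as rules:)

\[
\begin{array}{ll}
\mathrm{K}: & \Box(p \to q) \to (\Box p \to \Box q)\\
\mathrm{L}: & \Box(\Box p \to p) \to \Box p\\
\text{"1"}: & p \to \Box p \quad (p \text{ a } \Sigma_1 \text{ sentence})
\end{array}
\]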


The hard part of the hard consciousness problem is solved by the fact
that it is shown that every universal machine with enough "inductive"
beliefs is confronted with knowable but non-justifiable truths.
Actually, all the hypostases are represented in one of them (the
[]p one, which obeys G + 1, with 1 being the name of the axiom p ->
[]p, with p atomic sentences, here Sigma_1 sentences). The "theology"
is very rich, and for all "views", things can disappear when being
shifted, with some exceptions (I guess). So you can have justifiable
but not knowable, ...

Re: Uploaded Worm Mind

2015-09-02 Thread Bruno Marchal


On 31 Aug 2015, at 19:40, meekerdb wrote:


On 8/31/2015 1:56 AM, Bruno Marchal wrote:


On 30 Aug 2015, at 20:25, meekerdb wrote:


On 8/30/2015 3:34 AM, Bruno Marchal wrote:


On 30 Aug 2015, at 03:08, Russell Standish wrote:


Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.


I think personally that C. elegans, and Planaria (!), even
amoebas, are conscious, although very plausibly not self-conscious.


I tend to think since 2008 that even RA is already conscious,
even maximally so, and that PA is already as self-conscious
as a human (when in some dissociative state).


But I don't know if PA is more or less conscious than RA. That
depends on whether the role of the higher part of the brain consists in
filtering consciousness or enacting it.




But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).


I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, and probably
more time will be needed for the rest of its brain. And the
result can be a conscious creature, with a quite different
consciousness than a rat, as I find it plausible that pain is
related to the glial cells and their metabolism, which are not
taken into account by the current "copies".


So now you agree with me that there are different kinds and  
degrees of consciousness; that it is not just a binary attribute  
of an axiom + inference system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps even
some viruses, but on a different time scale than us.




If they can be conscious, but not self-conscious then there are two  
kinds of "being conscious".


Yes, at least two kinds, but each arithmetical hypostasis having
either "<>t" or "& p" describes a type of consciousness, I would say.
And they all differentiate on the infinitely many versions of "[]A", be
it the "[]" predicate of PA, of ZF, of an amoeba, or of you and me ...





And being self-conscious can have different modes.  A Mars Rover is
conscious of itself having a certain location, battery charge,
temperature, ... but it's not conscious of its purpose or the effect
its success has on engineers at JPL.


OK. I mean that is plausible, but I am not sure that the Mars Rover is
self-conscious. It might have correct beliefs about its own location,
but it might not (yet) have a rich "enough" notion of itself.






Then there are many types of consciousness states, and some can have
some notion of degree assigned to them. In the case I was discussing,
I might be obliged to accept the idea that RA is maximally
conscious, and PA might be less conscious or more delusional about
its consciousness. (But that is counter-intuitive, and depends on
the validity of the "Galois connection" account of consciousness.) I
have no certainty here (even in the comp frame).


For another example, I have strong evidence that we are conscious
at *all* moments of nocturnal sleep. It is a question of
training to be able to memorize the episodes well enough to realize
this, but apparently we are programmed to forget those experiences.


Sure, if your wife whispers your name at night while you're asleep  
you wake up instantly.


It depends on the man, and perhaps on the wife. I took a holiday with a
guy who was incredibly hard to wake up in the morning. Even shouting
his name quite loudly did not wake him up. We had to shake him for some
time. Note that he had warned us before. He never uses an alarm clock,
as it does not work for him. To wake in time, he just has to sleep his
right number of hours.





But you don't if you're anesthetized.


Which proves nothing, as I am sure you agree.






Obviously "to be unconscious" cannot be a first person experience.


But it can be a first body experience.


Perhaps in some metaphorical sense.

But a body has no experience at all, and actually doesn't even exist.
Bodies are only sharable patterns of information computed in a "special
sheaf of computations", whose initial segments are dovetailed in the
arithmetical reality.








To believe that *we have been unconscious* is consistent, but
plausibly false, and probably false with computationalism, where,
to put it in Otto Rössler's phrasing: consciousness is a prison.


I'd say it's more than plausibly true.  If there are time intervals
during which we are inert and unresponsive and which we have no
memory of, that's pretty good evidence we were unconscious - in fact
it's the operational definition.


Once I took a nap. I was very tired and fell asleep, rather deeply;
like the guy above, the people around me ...

Re: Uploaded Worm Mind

2015-09-02 Thread meekerdb

On 9/2/2015 8:25 AM, Bruno Marchal wrote:
So now you agree with me that there are different kinds and degrees of consciousness; 
that it is not just a binary attribute of an axiom + inference system.


?

Either you are conscious, or you are not.


But is a roundworm either conscious or not?  an amoeba?


I don't know, but I think they are. Even bacteria, and perhaps even some viruses, but on 
a different time scale than us.




If they can be conscious, but not self-conscious then there are two kinds of "being 
conscious".


Yes, at least two kinds, but each arithmetical hypostasis having either "<>t" or "& p" 
describes a type of consciousness, I would say.
And they all differentiate on the infinitely many versions of "[]A", be it the "[]" 
predicate of PA, of ZF, of an amoeba, or of you and me ...


So if there are different kinds of consciousness then a being with more kinds is more 
conscious.  It seems that your dictum, "You're either conscious or not," is being diluted 
away to a mere slogan.


Brent



Re: Uploaded Worm Mind

2015-09-02 Thread Quentin Anciaux
On 2 Sep 2015 at 22:48, "meekerdb"  wrote:
>
> On 9/2/2015 8:25 AM, Bruno Marchal wrote:
>
> So now you agree with me that there are different kinds and degrees
of consciousness; that it is not just a binary attribute of an axiom +
inference system.


 ?

 Either you are conscious, or you are not.
>>>
>>>
>>> But is a roundworm either conscious or not?  an amoeba?
>>
>>
>> I don't know, but I think they are. Even bacteria, and perhaps even some
viruses, but on a different time scale than us.
>>
>>
>>
>>> If they can be conscious, but not self-conscious then there are two
kinds of "being conscious".
>>
>>
>> Yes, at least two kinds, but each arithmetical hypostasis having either
"<>t" or "& p" describes a type of consciousness, I would say.
>> And they all differentiate on the infinitely many versions of "[]A", be
it the "[]" predicate of PA, of ZF, of an amoeba, or of you and me ...
>
>
> So if there are different kinds of consciousness then a being with more
kinds is more conscious.  It seems that your dictum, "You're either conscious
or not," is being diluted away to a mere slogan.

There is only one way of not being conscious, so you're either not
conscious or you're conscious, whatever the level.

>
> Brent
>



Re: Uploaded Worm Mind

2015-09-02 Thread meekerdb

On 9/2/2015 2:23 PM, Quentin Anciaux wrote:


> So if there are different kinds of consciousness then a being with more kinds is more 
conscious.  It seems that your dictum, "You're either conscious or not," is being diluted 
away to a mere slogan.


There is only one way of not being conscious, so you're either not conscious or you're 
conscious whatever level it is.




Question begging.  If there's more than one kind of consciousness, then there is more than 
one kind of being unconscious.  Per Bruno's example one could be unconscious of one's self.


Brent



Re: Uploaded Worm Mind

2015-09-01 Thread Bruno Marchal


On 31 Aug 2015, at 21:54, Jason Resch wrote:




On Mon, Aug 31, 2015 at 12:39 PM, Bruno Marchal   
wrote:


On 31 Aug 2015, at 12:14, Russell Standish wrote:

On Mon, Aug 31, 2015 at 11:19:00AM +0200, Bruno Marchal wrote:

On 31 Aug 2015, at 00:42, Russell Standish wrote:

On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:


I guess that you remember that I am not yet convinced by your
argument that ants are not conscious, as it relies on an anthropic use
of the Absolute Self-Sampling Assumption (ASSA), which I prefer to
avoid because the domain of its statistics is not clear to me. (I am
not impressed by the doomsday argument for the same reason.)


Yes, I've heard that a lot. "I'm not impressed" = "It sounds like a
crock of shit, but I can't put my finger on why".

Probably the best way forward is to put forward a toy model showing
the anthropic argument failing, and then the mechanism is clear.

It does not fail. It can explain some of the geography by Bayesian
reasoning, but it can't explain the difference between physical
laws and local physical/geographical facts. For the laws, we have to
find something which does not depend on anything particular beyond
being Turing universal or Löbian.


I'm in agreement with your comments here, however I fail to see the
connection with the doomsday argument, or my anthropic ants argument,
as these are fundamentally about geography (in one case about how long
humans might be here on Earth, and the other about the consciousness
of certain Earthling creatures).

The problem for me is in the use of a "probability to be a human",  
or "probability to be an ant" without some relative conditional.


Is it necessarily even exclusive?



In the frame of Russell's argument that ants are not conscious, Bayes
is used on the fact that we are human and that ants are more
numerous (like in the Leslie-Carter doomsday argument). The use of
Bayes is correct, but the result assumes an absolute self-sampling
assumption, about which I am agnostic.
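(To make the Bayesian step explicit, with purely illustrative numbers: under the absolute self-sampling assumption, taking N_h of about 10^10 humans and N_a of about 10^16 ants,)

\[
P(\text{I am human} \mid \text{ants conscious}) = \frac{N_h}{N_h + N_a} \approx 10^{-6},
\qquad
P(\text{I am human} \mid \text{ants not conscious}) = 1,
\]

(so finding oneself human gives a Bayes factor of roughly a million against ant consciousness, but only if the absolute sampling over all conscious beings is granted, which is exactly the step Bruno declines.)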




It feels like it is, but that might just be an illusion.


But reality is a sum over all the illusions. From inside we can already
prove that if the sum converges, then we can't prove that the sum
converges (that's why "God" or "Reality" requires faith, or optimism
of some sort).


You might say, in the WM-duplication, that the guy is both the W-guy
and the M-guy, but the probability needed to get the physical still
requires the fact that, illusion or not, the first person experiences
are exclusive, even if only *relatively* exclusive.






Can we not be both an ant and a human,


Assuming that ants are conscious, which is the point Russell's
argument tries to refute.


I have no first person objection to your point, identifying myself with
any animal or plant.


Then computer science protects this from trivialization by associating
a non-trivial notion of person to any self-referentially correct
platonist universal machine. Platonist means that the machine believes,
for all arithmetical sentences A, the proposition A v ~A.


So the universal machine defines a universal person.

That universal person can make sense of being the same person looking
through the ant's eyes and the human's eyes, but can the ant and the
human have that experience without remembering being the universal person?


That some creature can do that is, quite plausibly, part of its own
G* minus G proper theology: a Protagorean virtue which can be taught
by exemplary behavior but goes only without saying.








but be relatively unaware of it


That's the terrestrial condition, but "demolishing" your brain, so
that for a moment it is close to the brain of an ant, might help to
conceive this or make some sense of it.


What is the probability of having a continuation (when dying, or not) in
which you do awake from both the ant "dream" and the human "dream"?


I have awoken from "parallel dreams" about 5 times, and Michel Jouvet
(the French oneirophysiologist) describes similar occurrences and
explains them by an inhibition (or a lowering of activity) in the
corpus callosum. So I think it can make sense to recognize yourself
in different creatures' experiences, and to integrate them as "personal
souvenirs". Technologically, in some future(s) such merging could be
"artificially" sustained in the relatively stable "terrestrial" way.




such that we can't comment on the knowledge of being an ant from the  
human organism's point of view, nor can the ant react to its human  
sensations from the ant organism's view.


And the question is: could an ant experience merge with a human
experience in the infinite universal person's mind? The UD, and thus
elementary arithmetic, emulates such experiences, but the non-trivial
problem is: what is the probability of a global merging of all experiences?


With CT + "yes doctor", such questions can be translated into  
(complex) arithmetical (terrestrial) and analytical (divine)  

Re: Uploaded Worm Mind

2015-09-01 Thread Stathis Papaioannou
On Tuesday, September 1, 2015, Bruno Marchal  wrote:

>
> On 31 Aug 2015, at 14:56, Stathis Papaioannou wrote:
>
>
>
> On Monday, August 31, 2015, Bruno Marchal  wrote:
>
>>
>> On 31 Aug 2015, at 00:42, Russell Standish wrote:
>>
>> On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:
>>>

 On 30 Aug 2015, at 03:08, Russell Standish wrote:

 Well as people probably know, I don't believe C. elegans can be
> conscious in any sense of the word. Hell - I have strong doubts about
> ants, and they're massively more complex creatures.
>

 I think personally that C. elegans, and Planaria (!), even amoebas,
 are conscious, although very plausibly not self-conscious.

 I tend to think since 2008 that even RA is already conscious, even
 maximally so, and that PA is already as self-conscious as a
 human (when in some dissociative state).

 But I don't know if PA is more or less conscious than RA. That
 depends on whether the role of the higher part of the brain consists in
 filtering consciousness or enacting it.


> But it probably won't be long before we simulate a mouse brain in toto
> - about 2 decades is my guess, maybe even less given enough dollars -
> then we're definitely in grey philosophical territory :).
>

 I am slightly less optimistic than you. It will take one or two
 decades before we simulate the hippocampus of a rat, and probably
 more time will be needed for the rest of its brain. And the result
 can be a conscious creature, with a quite different consciousness
 than a rat, as I find it plausible that pain is related to the glial
 cells and their metabolism, which are not taken into account by the
 current "copies".

>>>
>>> What is blocking us is not the computing power - already whole "rat
>>> brain" simulations have been done in something like 1/1 of real
>>> time - so all we need is about a decade of performance improvement
>>> through Moore's law.
>>>
>>> What development is needed is ways of determining the neural
>>> circuitry. There have been leaps and bounds in the process of slicing
>>> frozen brains, and imaging the slices with electron microscopes, but
>>> clearly it is still far too slow.
>>>
>>> As for the hypothesis that glial cells have something to do with it,
>>> well that can be tested via the sort of whole rat brain simulation
>>> I've been talking about. Run the simulation in a robotic rat, and
>>> compare the behaviour with a real rat. Basically what the open worm
>>> guys are doing, but scaled up to a rat. If the simulation is way
>>> different from the real rat, then we know something else is required.
>>>
>>
>>
>> I can imagine that the rat will have a "normal behavior", but as it
>> cannot talk to us, we might fail to appreciate some internal change or even
>> some anosognosia. The rat would not be a zombie rat, but would still be in a
>> quite different conscious state (perhaps better, as it seems the glial cells
>> might have some role in chronic pain).
>>
>
> In general, if there is a difference in consciousness then there should be
> a difference in behaviour. If the difference in consciousness is impossible
> to detect then arguably it is no difference.
>
>
>
> How would you detect that the rat has a slight headache?
>

It should be detectable under ideal circumstances, or it should be
detectable statistically by sampling a large number of rats.
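(A minimal sketch, in Python, of the statistical detection Stathis means, assuming, hypothetically, one scalar behavioural measure per rat; the effect size and sample sizes are invented for illustration.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical measure, e.g. seconds of grooming per hour. A 0.3 s
    # shift is invisible in any single animal but shows up across
    # hundreds of them.
    real_rats = rng.normal(loc=10.0, scale=2.0, size=500)
    simulated_rats = rng.normal(loc=10.3, scale=2.0, size=500)

    t_stat, p_value = stats.ttest_ind(real_rats, simulated_rats)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p: detectable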


> Some drugs change *only* the "volume" of consciousness (notably alcohol in
> high doses, but this one also changes the behavior). It is quite unpleasant,
> like listening to music with the volume set too high, but you can
> behave in your normal way, and unless somebody asks, there is no noticeable
> difference in behavior.
>

The point is, it is detectable. If a subjective difference makes no
objective difference under any circumstances then arguably there is no
subjective difference.


> First order experiences are usually wider than anything we can communicate
> in a third person way, so it is natural that a difference in consciousness
> does not necessarily entail a difference in behavior, especially for a
> finite time.
>
> The problem of inverse-spectrum for the qualia of color illustrates also
> that a difference of consciousness might not lead to a difference in
> behavior.
>

If the colours I see change every five minutes but I don't notice, then I
would say there is no subjective change.


-- 
Stathis Papaioannou


Re: Uploaded Worm Mind

2015-08-31 Thread Bruno Marchal


On 31 Aug 2015, at 14:56, Stathis Papaioannou wrote:




On Monday, August 31, 2015, Bruno Marchal  wrote:

On 31 Aug 2015, at 00:42, Russell Standish wrote:

On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:

On 30 Aug 2015, at 03:08, Russell Standish wrote:

Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.

I think personally that C. elegans, and Planaria (!), even amoebas,
are conscious, although very plausibly not self-conscious.

I tend to think since 2008 that even RA is already conscious, even
maximally so, and that PA is already as self-conscious as a
human (when in some dissociative state).

But I don't know if PA is more or less conscious than RA. That
depends on whether the role of the higher part of the brain consists in
filtering consciousness or enacting it.


But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).

I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, and probably
more time will be needed for the rest of its brain. And the result
can be a conscious creature, with a quite different consciousness
than a rat, as I find it plausible that pain is related to the glial
cells and their metabolism, which are not taken into account by the
current "copies".

What is blocking us is not the computing power - whole "rat brain"
simulations have already been done at some small fraction of real
time - so all we need is about a decade of performance improvement
through Moore's law.

What development is needed is ways of determining the neural
circuitry. There have been leaps and bounds in the process of slicing
frozen brains, and imaging the slices with electron microscopes, but
clearly it is still far too slow.

As for the hypothesis that glial cells have something to do with it,
well that can be tested via the sort of whole rat brain simulation
I've been talking about. Run the simulation in a robotic rat, and
compare the behaviour with a real rat. Basically what the open worm
guys are doing, but scaled up to a rat. If the simulation is way
different from the real rat, then we know something else is required.


I can imagine that the rat will have a "normal behavior", but as he
cannot talk to us, we might fail to appreciate some internal change
or even some anosognosia. The rat would not be a zombie rat, but
would still be in a quite different conscious state (perhaps better,
as it seems the glial cells might have some role in chronic pain).


In general, if there is a difference in consciousness then there  
should be a difference in behaviour. If the difference in  
consciousness is impossible to detect then arguably it is no  
difference.



How would you detect that the rat has a slight headache?

Some drugs change *only* the "volume" of consciousness (notably alcohol
in high doses, though that one also changes behavior). It is quite
unpleasant, like listening to music with the sound turned up too
loud, but you can behave in your normal way, and unless somebody asks,
there is no noticeable difference in behavior.


First order experiences are usually wider than anything we can
communicate in a third person way, so it is natural that a difference
in consciousness does not necessarily entail a difference in behavior,
especially for a finite time.


The inverted-spectrum problem for colour qualia also illustrates
that a difference in consciousness might not lead to a difference
in behavior.


Bruno





--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: Uploaded Worm Mind

2015-08-31 Thread Bruno Marchal


On 31 Aug 2015, at 12:14, Russell Standish wrote:


On Mon, Aug 31, 2015 at 11:19:00AM +0200, Bruno Marchal wrote:


On 31 Aug 2015, at 00:42, Russell Standish wrote:


On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:





I guess that you remember that I am not yet convinced by your
argument that ants are not conscious, as it relies on anthropic use
of the Absolute Self-Sampling Assumption (ASSA) which I prefer to
avoid because the domain of its statistic is not clear to me. (I am
not impressed by the doomsday argument for the same reason).



Yes, I've heard that a lot. "I'm not impressed" = "It sounds like a
crock of shit, but I can't put my finger on why".

Probably the best way forward is to put forward a toy model showing
the anthropic argument failing, and then the mechanism is clear.


It does not fail. It can explain some of the geography by Bayesian
reasoning, but it can't explain the difference between physical
laws and local physical/geographical facts. For the laws, we have to
find something which does not depend on anything particular above
being Turing universal or Löbian.



I'm in agreement with your comments here, however I fail to see the
connection with the doomsday argument, or my anthropic ants argument,
as these are fundamentally about geography (in one case about how long
humans might be here on Earth, and the other about the consciousness
of certain Earthling creatures).


The problem for me is in the use of a "probability of being a human", or
"probability of being an ant", without some relative conditional. I have
no frame or universe of reference for the ASSA. We can get
geographical 3p conclusions from 3p data, but when sampling on oneself
I have difficulty making sense of the absolute probabilities. (I
think that was part of the old RSSA versus ASSA debate).
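
To make the worry concrete, here is a doomsday-style toy in Python; the flat
prior, the cap N_MAX, and all the numbers are illustrative assumptions. The
point is that the posterior moves with the choice of N_MAX, i.e. with the
domain of the statistic:

    # Doomsday toy: my birth rank is n, and ASSA says P(n | N) = 1/N for a
    # total of N observers. Take a flat prior on N up to an arbitrary cap.
    N_MAX = 10_000
    n = 100

    post = {N: 1.0 / N for N in range(n, N_MAX + 1)}   # prior x likelihood
    Z = sum(post.values())
    print(sum(p for N, p in post.items() if N <= 10 * n) / Z)
    # ~0.50 with N_MAX = 100*n; raise or lower N_MAX and the "doom soon"
    # probability follows it. The absolute sampling domain does the work.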


Best,

Bruno



--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




http://iridia.ulb.ac.be/~marchal/





Re: Uploaded Worm Mind

2015-08-31 Thread meekerdb

On 8/31/2015 1:56 AM, Bruno Marchal wrote:


On 30 Aug 2015, at 20:25, meekerdb wrote:


On 8/30/2015 3:34 AM, Bruno Marchal wrote:


On 30 Aug 2015, at 03:08, Russell Standish wrote:


Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.


I think personally that C. elegans, and planaria (!), even amoebas, are conscious, 
although very plausibly not self-conscious.


I tend to think since 2008 that even RA is already conscious, even maximally so, and 
that PA is already as self-conscious as a human (when in some dissociative state).


But I don't know if PA is more or less conscious than RA. That depends on whether the 
role of the higher part of the brain consists in filtering consciousness or enacting it.




But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).


I am slightly less optimistic than you. It will take one or two decades before we 
simulate the hippocampus of a rat, but probably more time will be needed for the rest 
of their brain. And the result can be a conscious creature, with a quite different 
consciousness than a rat, as I find it plausible that pain is related to the glial 
cells and their metabolism, which are not taken into account by the current "copies".


So now you agree with me that there are different kinds and degrees of consciousness; 
that it is not just a binary attribute of an axiom + inference system.


?

Either you are conscious, or you are not. 


But is a roundworm either conscious or not?  An amoeba?  If they can be conscious, but not 
self-conscious, then there are two kinds of "being conscious".  And being self-conscious 
can have different modes.  A Mars Rover is conscious of itself having a certain location, 
battery charge, temperature, ... but it's not conscious of its purpose or the effect its 
success has on engineers at JPL.


Then there are many types of conscious states, and some can have some notion of 
degrees assigned to them. In the case I was discussing, I might be obliged to accept the 
idea that RA is maximally conscious, and PA might be less conscious or more delusional 
about its consciousness. (But that is counter-intuitive, and depends on the validity of 
the "Galois connection" account of consciousness. I have no certainty here, even in the 
comp frame.)


For another example, I have strong evidence that we are conscious at *all* moments of 
the nocturnal sleep. It is a question of training to be able to memorize the episodes 
well enough to realize this, but apparently we are programmed to forget those experiences.


Sure, if your wife whispers your name at night while you're asleep you wake up instantly.  
But you don't if you're anesthetized.




Obviously "to be unconscious" cannot be a first person experience.


But it can be a first body experience.



To believe that *we have been unconscious* is consistent, but plausibly false, and 
probably false with computationalism, where, to put it with Otto Rossler's phrasing: 
consciousness is a prison.


I'd say it's more than plausibly true.  If there are time intervals during which we are 
inert and unresponsive and of which we have no memory, that's pretty good evidence we were 
unconscious - in fact it's the operational definition.


Brent



Re: Uploaded Worm Mind

2015-08-31 Thread meekerdb

On 8/31/2015 5:56 AM, Stathis Papaioannou wrote:



On Monday, August 31, 2015, Bruno Marchal wrote:



On 31 Aug 2015, at 00:42, Russell Standish wrote:

On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:


On 30 Aug 2015, at 03:08, Russell Standish wrote:

Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.


I think personally that C. elegans, and planaria (!), even amoebas,
are conscious, although very plausibly not self-conscious.

I tend to think since 2008 that even RA is already conscious, even
maximally so, and that PA is already as self-conscious as a
human (when in some dissociative state).

But I don't know if PA is more or less conscious than RA. That
depends on whether the role of the higher part of the brain consists in
filtering consciousness or enacting it.


But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).


I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, but probably
more time will be needed for the rest of their brain. And the result
can be a conscious creature, with a quite different consciousness
than a rat, as I find it plausible that pain is related to the glial
cells and their metabolism, which are not taken into account by the
current "copies".


What is blocking us is not the computing power - whole "rat brain"
simulations have already been done at some small fraction of real
time - so all we need is about a decade of performance improvement
through Moore's law.

What development is needed is ways of determining the neural
circuitry. There have been leaps and bounds in the process of slicing
frozen brains, and imaging the slices with electron microscopes, but
clearly it is still far too slow.

As for the hypothesis that glial cells have something to do with it,
well that can be tested via the sort of whole rat brain simulation
I've been talking about. Run the simulation in a robotic rat, and
compare the behaviour with a real rat. Basically what the open worm
guys are doing, but scaled up to a rat. If the simulation is way
different from the real rat, then we know something else is required.



I can imagine that the rat will have a "normal behavior", but as he cannot
talk to us, we might fail to appreciate some internal change or even some
anosognosia. The rat would not be a zombie rat, but would still be in a quite
different conscious state (perhaps better, as it seems the glial cells
might have some role in chronic pain).


In general, if there is a difference in consciousness then there should be a difference 
in behaviour. If the difference in consciousness is impossible to detect then arguably 
it is no difference.


I'd say more-than-arguably we don't know and can't know.  Which is why I think "the hard 
problem" will be dissolved by AI engineering rather than solved by philosophers.


Brent



Re: Uploaded Worm Mind

2015-08-31 Thread Jason Resch
On Mon, Aug 31, 2015 at 12:39 PM, Bruno Marchal  wrote:

>
> On 31 Aug 2015, at 12:14, Russell Standish wrote:
>
> On Mon, Aug 31, 2015 at 11:19:00AM +0200, Bruno Marchal wrote:
>>
>>>
>>> On 31 Aug 2015, at 00:42, Russell Standish wrote:
>>>
>>> On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:

>
>
 I guess that you remember that I am not yet convinced by your
> argument that ants are not conscious, as it relies on anthropic use
> of the Absolute Self-Sampling Assumption (ASSA) which I prefer to
> avoid because the domain of  its statistic is not clear to me. (I am
> not impressed by the doomsday argument for the same reason).
>
>
 Yes, I've heard that a lot. "I'm not impressed" = "It sounds like a
 crock of shit, but I can't put my finger on why".

 Probably the best way forward is to put forward a toy model showing
 the anthropic argument failing, and then the mechanism is clear.

>>>
>>> It does not fail. It can explain some of the geography by Bayesian
>>> reasoning, but it can't explain the difference between physical
>>> laws and local physical/geographical facts. For the laws, we have to
>>> find something which does not depend on anything particular above
>>> being Turing universal or Löbian.
>>>
>>>
>> I'm in agreement with your comments here, however I fail to see the
>> connection with the doomsday argument, or my anthropic ants argument,
>> as these are fundamentally about geography (in one case about how long
>> humans might be here on Earth, and the other about the consciousness
>> of certain Earthling creatures).
>>
>
> The problem for me is in the use of a "probability to be a human", or
> "probability to be an ant" without some relative conditional.


Is it necessarily even exclusive? It feels like it is, but that might
just be an illusion. Can we not be both an ant and a human, but be
relatively unaware of it, such that we can't comment on the knowledge of
being an ant from the human organism's point of view, nor can the ant react
to its human sensations from the ant organism's view?

Jason



> I have no frame or universe of reference for the ASSA. We can get
> geographical 3p conclusions from 3p data, but when sampling on oneself I
> have difficulty making sense of the absolute probabilities. (I think
> that was part of the old RSSA versus ASSA debate).
>
> Best,
>
> Bruno
>
>
> --
>>
>>
>> 
>> Prof Russell Standish  Phone 0425 253119 (mobile)
>> Principal, High Performance Coders
>> Visiting Professor of Mathematics  hpco...@hpcoders.com.au
>> University of New South Wales  http://www.hpcoders.com.au
>>
>> 
>>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: Uploaded Worm Mind

2015-08-31 Thread Bruno Marchal


On 30 Aug 2015, at 20:25, meekerdb wrote:


On 8/30/2015 3:34 AM, Bruno Marchal wrote:


On 30 Aug 2015, at 03:08, Russell Standish wrote:


Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.


I think personally that C. elegans, and planaria (!), even amoebas,
are conscious, although very plausibly not self-conscious.


I tend to think since 2008 that even RA is already conscious, even
maximally so, and that PA is already as self-conscious as a
human (when in some dissociative state).


But I don't know if PA is more or less conscious than RA. That
depends on whether the role of the higher part of the brain consists in
filtering consciousness or enacting it.




But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).


I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, but probably
more time will be needed for the rest of their brain. And the
result can be a conscious creature, with a quite different
consciousness than a rat, as I find it plausible that pain is related
to the glial cells and their metabolism, which are not taken into
account by the current "copies".


So now you agree with me that there are different kinds and degrees  
of consciousness; that it is not just a binary attribute of an axiom  
+ inference system.


?

Either you are conscious, or you are not. Then there are many types of
conscious states, and some can have some notion of degrees
assigned to them. In the case I was discussing, I might be obliged to
accept the idea that RA is maximally conscious, and PA might be less
conscious or more delusional about its consciousness. (But that is
counter-intuitive, and depends on the validity of the "Galois
connection" account of consciousness. I have no certainty here, even
in the comp frame.)


For another example, I have strong evidence that we are conscious at
*all* moments of the nocturnal sleep. It is a question of training to
be able to memorize the episodes well enough to realize this, but
apparently we are programmed to forget those experiences.


Obviously "to be unconscious" cannot be a first person experience.

To believe that *we have been unconscious* is consistent, but  
plausibly false, and probably false with computationalism, where, to  
put it with Otto Rossler's phrasing: consciousness is a prison.


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Uploaded Worm Mind

2015-08-31 Thread Bruno Marchal


On 31 Aug 2015, at 00:42, Russell Standish wrote:


On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:


On 30 Aug 2015, at 03:08, Russell Standish wrote:


Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.


I think personally that C. elegans, and planaria (!), even amoebas,
are conscious, although very plausibly not self-conscious.

I tend to think since 2008 that even RA is already conscious, even
maximally so, and that PA is already as self-conscious as a
human (when in some dissociative state).

But I don't know if PA is more or less conscious than RA. That
depends on whether the role of the higher part of the brain consists in
filtering consciousness or enacting it.



But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).


I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, but probably
more time will be needed for the rest of their brain. And the result
can be a conscious creature, with a quite different consciousness
than a rat, as I find it plausible that pain is related to the glial
cells and their metabolism, which are not taken into account by the
current "copies".


What is blocking us is not the computing power - whole "rat brain"
simulations have already been done at some small fraction of real
time - so all we need is about a decade of performance improvement
through Moore's law.

What development is needed is ways of determining the neural
circuitry. There have been leaps and bounds in the process of slicing
frozen brains, and imaging the slices with electron microscopes, but
clearly it is still far too slow.

As for the hypothesis that glial cells have something to do with it,
well that can be tested via the sort of whole rat brain simulation
I've been talking about. Run the simulation in a robotic rat, and
compare the behaviour with a real rat. Basically what the open worm
guys are doing, but scaled up to a rat. If the simulation is way
different from the real rat, then we know something else is required.



I can imagine that the rat will have a "normal behavior", but as he
cannot talk to us, we might fail to appreciate some internal change or
even some anosognosia. The rat would not be a zombie rat, but would still
be in a quite different conscious state (perhaps better, as it seems the
glial cells might have some role in chronic pain).












One interesting test I'd like to see is applying Tononi's integrated
information measure to these simple creatures to see if they're
producing any integrated information. I suspect Integrated Information
is a necessary requirement for consciousness, but not so sure about
sufficiency.


It is offered freely with the notion of self-reference, I would say.
The eight hypostases/person-povs each constitute a different mode of
self-integration. If Kauffman's idea that DNA results from
something akin to Kleene's diagonalization is right (which I think too),
the amoeba and most protozoans are already quite self-integrated beings.
Then elementary invertebrates might lose that integration (like
perhaps hydra), but quickly get it back, like with planaria.
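
For readers new to the thread, a sketch in LaTeX of the eight hypostases as
they are usually presented here, with [] the machine's provability predicate,
<> its dual, and p a true (sigma_1) sentence; the English glosses and split
counts are my reading, offered as an assumption-laden summary rather than a
definition:

    % Sketch only; "splits" refers to the G/G* (provable vs. true) distinction.
    \begin{align*}
    p                                 &\quad \text{truth (one mode)}\\
    \Box p                            &\quad \text{belief/provability (splits: two modes)}\\
    \Box p \land p                    &\quad \text{knowledge (no split: one mode)}\\
    \Box p \land \Diamond t           &\quad \text{observation (splits: two modes)}\\
    \Box p \land \Diamond t \land p   &\quad \text{feeling (splits: two modes)}
    \end{align*}
    % 1 + 2 + 1 + 2 + 2 = 8 points of view.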



You might be right that integrated information can be obtained from
your hypostases, but it is not obvious. More work is required before
you can plausibly make that claim.


Why? I think that by using the self-reference logic, we start from an  
integrated whole.








I guess that you remember that I am not yet convinced by your
argument that ants are not conscious, as it relies on anthropic use
of the Absolute Self-Sampling Assumption (ASSA) which I prefer to
avoid because the domain of  its statistic is not clear to me. (I am
not impressed by the doomsday argument for the same reason).



Yes, I've heard that a lot. "I'm not impressed" = "It sounds like a
crock of shit, but I can't put my finger on why".

Probably the best way forward is to put forward a toy model showing
the anthropic argument failing, and then the mechanism is clear.


It does not fail. It can explain some of the geography by Bayesian
reasoning, but it can't explain the difference between physical laws
and local physical/geographical facts. For the laws, we have to find
something which does not depend on anything particular above being
Turing universal or Löbian.


Bruno






--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au



Re: Uploaded Worm Mind

2015-08-31 Thread Bruno Marchal


On 31 Aug 2015, at 01:54, Jason Resch wrote:




On Sun, Aug 30, 2015 at 6:34 AM, Bruno Marchal   
wrote:


On 30 Aug 2015, at 03:08, Russell Standish wrote:

Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.

I think personally that C. elegans, and planaria (!), even amoebas,
are conscious, although very plausibly not self-conscious.


I tend to think since 2008 that even RA is already conscious, even
maximally so, and that PA is already as self-conscious as a
human (when in some dissociative state).


What realization did you have in 2008 that changed your mind?


The salvia experience. It corroborates the idea that the brain filters
consciousness. By disabling or dissociating some neuronal pathways, you
can get quite amnesic (not even remembering what a person is, nor what
time or space is, ...), knowing basically nothing, and yet feeling much
more conscious than in the "mundane state", plus a feeling that this is
your normal basic state.


Of course I am biased on this, but some salvia experiences are like
remembering that we are indeed immaterial creatures living in an
immaterial reality (arithmetic?), and that our consciousness is
processed mainly there. The brain is used only to make that
consciousness able to manifest itself relatively to deep (in Bennett's
sense) first person plural sharable computations/experiences. It is an
interface, and a local self-accelerator.


With salvia, Chardin's statement makes good sense: "we are not humans
having divine experiences from time to time, but divine beings having
human experiences from time to time".


But the "consciousness filter" theory of brain leads also to some  
difficulties. I can come back on them someday.


Bruno






But I don't know if PA is more or less conscious than RA. That
depends on whether the role of the higher part of the brain consists in
filtering consciousness or enacting it.



But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).

I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, but probably
more time will be needed for the rest of their brain.


Perhaps 2 decades from simulating a rat brain on a PC, but super  
computers are generally up to a million times more powerful than a  
single CPU, which means they are roughly 2 decades ahead in  
computing power (assuming annual doubling in computational capacity).
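
Jason's lead estimate in one line of Python (the factor of a million and the
annual doubling are the stated assumptions above):

    from math import log2
    print(log2(1_000_000))   # ~19.9 doublings, i.e. ~20 years of lead at one per year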


OK. Maybe.




And the result can be a conscious creature, with a quite different
consciousness than a rat, as I find it plausible that pain is related
to the glial cells and their metabolism, which are not taken into
account by the current "copies".


Interesting. Why do you think glial cell metabolism plays a role in
pain sensation?


I was thinking about the discovery that glial cells play a role in the
nociceptive pathway.

See this for example: http://www.ncbi.nlm.nih.gov/pubmed/20581331
You can google "glial cells chronic pain" to find many papers on
this. We know that glial cells communicate with each other, not through
axons but through waves of chemical influence, and we know also
that glial cells communicate with neurons (and with the immune system).


I would not say "yes" to a doctor who does not take into account the
glial cells, unless there is no choice. I can imagine staying
conscious, but having different qualia. I can imagine supporting this
for some weeks, but it would be a heavy handicap for a longer
survival.


Bruno




Jason





One interesting test I'd like to see is applying Tononi's integrated
information measure to these simple creatures to see if they're
producing any integrated information. I suspect Integrated Information
is a necessary requirement for consciousness, but not so sure about
sufficiency.


It is offered freely with the notion of self-reference, I would say.
The eight hypostases/person-povs each constitute a different mode of
self-integration. If Kauffman's idea that DNA results from
something akin to Kleene's diagonalization is right (which I think too),
the amoeba and most protozoans are already quite self-integrated beings.
Then elementary invertebrates might lose that integration (like
perhaps hydra), but quickly get it back, like with planaria.


I guess that you remember that I am not yet convinced by your  
argument that ants are not conscious, as it relies on anthropic use  
of the Absolute Self-Sampling Assumption (ASSA) which I prefer to  
avoid because the domain of  its statistic is not clear to me. (I am  
not impressed by the doomsday argument for the same reason).


Bruno





Cheers

On Sat, Aug 29, 2015 at 09:00:14AM +0200, Bruno 

Re: Uploaded Worm Mind

2015-08-31 Thread Russell Standish
On Mon, Aug 31, 2015 at 11:19:00AM +0200, Bruno Marchal wrote:
> 
> On 31 Aug 2015, at 00:42, Russell Standish wrote:
> 
> >On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:
> >>
> >
> >>I guess that you remember that I am not yet convinced by your
> >>argument that ants are not conscious, as it relies on anthropic use
> >>of the Absolute Self-Sampling Assumption (ASSA) which I prefer to
> >>avoid because the domain of  its statistic is not clear to me. (I am
> >>not impressed by the doomsday argument for the same reason).
> >>
> >
> >Yes, I've heard that a lot. "I'm not impressed" = "It sounds like a
> >crock of shit, but I can't put my finger on why".
> >
> >Probably the best way forward is to put forward a toy model showing
> >the anthropic argument failing, and then the mechanism is clear.
> 
> It does not fail. It can explain some of the geography by Bayesian
> reasoning, but it can't explain the difference between physical
> laws and local physical/geographical facts. For the laws, we have to
> find something which does not depend on anything particular above
> being Turing universal or Löbian.
> 

I'm in agreement with your comments here, however I fail to see the
connection with the doomsday argument, or my anthropic ants argument,
as these are fundamentally about geography (in one case about how long
humans might be here on Earth, and the other about the consciousness
of certain Earthling creatures).

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Uploaded Worm Mind

2015-08-31 Thread Stathis Papaioannou
On Monday, August 31, 2015, Bruno Marchal  wrote:

>
> On 31 Aug 2015, at 00:42, Russell Standish wrote:
>
> On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:
>>
>>>
>>> On 30 Aug 2015, at 03:08, Russell Standish wrote:
>>>
>>> Well as people probably know, I don't believe C. elegans can be
 conscious in any sense of the word. Hell - I have strong doubts about
 ants, and they're massively more complex creatures.

>>>
>>> I think personally that C. elegans, and planaria (!), even amoebas,
>>> are conscious, although very plausibly not self-conscious.
>>>
>>> I tend to think since 2008 that even RA is already conscious, even
>>> maximally so, and that PA is already as self-conscious as a
>>> human (when in some dissociative state).
>>>
>>> But I don't know if PA is more or less conscious than RA. That
>>> depends on whether the role of the higher part of the brain consists in
>>> filtering consciousness or enacting it.
>>>
>>>
 But it probably won't be long before we simulate a mouse brain in toto
 - about 2 decades is my guess, maybe even less given enough dollars -
 then we're definitely in grey philosophical territory :).

>>>
>>> I am slightly less optimistic than you. It will take one or two
>>> decades before we simulate the hippocampus of a rat, but probably
>>> more time will be needed for the rest of their brain. And the result
>>> can be a conscious creature, with a quite different consciousness
>>> than a rat, as I find it plausible that pain is related to the glial
>>> cells and their metabolism, which are not taken into account by the
>>> current "copies".
>>>
>>
>> What is blocking us is not the computing power - whole "rat brain"
>> simulations have already been done at some small fraction of real
>> time - so all we need is about a decade of performance improvement
>> through Moore's law.
>>
>> What development is needed is ways of determining the neural
>> circuitry. There have been leaps and bounds in the process of slicing
>> frozen brains, and imaging the slices with electron microscopes, but
>> clearly it is still far too slow.
>>
>> As for the hypothesis that glial cells have something to do with it,
>> well that can be tested via the sort of whole rat brain simulation
>> I've been talking about. Run the simulation in a robotic rat, and
>> compare the behaviour with a real rat. Basically what the open worm
>> guys are doing, but scaled up to a rat. If the simulation is way
>> different from the real rat, then we know something else is required.
>>
>
>
> I can imagine that the rat will have a "normal behavior", but as he cannot
> talk to us, we might fail to appreciate some internal change or even some
> anosognosia. The rat would not be a zombie rat, but would still be in a quite
> different conscious state (perhaps better, as it seems the glial cells might
> have some role in chronic pain).
>

In general, if there is a difference in consciousness then there should be
a difference in behaviour. If the difference in consciousness is impossible
to detect then arguably it is no difference.


-- 
Stathis Papaioannou



Re: Uploaded Worm Mind

2015-08-30 Thread Bruno Marchal


On 30 Aug 2015, at 03:08, Russell Standish wrote:


Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.


I think personally that C. elegans, and planaria (!), even amoebas, are
conscious, although very plausibly not self-conscious.


I tend to think since 2008 that even RA is already conscious, even
maximally so, and that PA is already as self-conscious as a
human (when in some dissociative state).


But I don't know if PA is more or less conscious than RA. That depends
on whether the role of the higher part of the brain consists in filtering
consciousness or enacting it.




But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).


I am slightly less optimistic than you. It will take one or two
decades before we simulate the hippocampus of a rat, but probably more
time will be needed for the rest of their brain. And the result can be
a conscious creature, with a quite different consciousness than a rat,
as I find it plausible that pain is related to the glial cells and their
metabolism, which are not taken into account by the current copies.





One interesting test I'd like to see is applying Tononi's integrated
information measure to these simple creatures to see if they're
producing any integrated information. I suspect Integrated Information
is a necessary requirement for consciousness, but not so sure about
sufficiency.


It is offered freely with the notion of self-reference, I would say.
The eight hypostases/person-povs each constitute a different mode of
self-integration. If Kauffman's idea that DNA results from
something akin to Kleene's diagonalization is right (which I think too),
the amoeba and most protozoans are already quite self-integrated beings.
Then elementary invertebrates might lose that integration (like
perhaps hydra), but quickly get it back, like with planaria.


I guess that you remember that I am not yet convinced by your argument  
that ants are not conscious, as it relies on anthropic use of the  
Absolute Self-Sampling Assumption (ASSA) which I prefer to avoid  
because the domain of  its statistic is not clear to me. (I am not  
impressed by the doomsday argument for the same reason).


Bruno





Cheers

On Sat, Aug 29, 2015 at 09:00:14AM +0200, Bruno Marchal wrote:


On 29 Aug 2015, at 00:43, Jason Resch wrote:


I think so. It is at least as conscious as C. Elegans.



Assuming that the worm comp substitution level of neuronal
connection is the correct choice.

Low-level animal behavior might rely heavily on smell and other
chemical interactions, and I am not sure what the sensors
in the robots represent, as C. elegans is blind (I think).

Hard to really conclude it is thinking from the video, and
theoretically, we can never be sure. It might be a philosophical
worm zombie!

Bruno





Jason

On Fri, Aug 28, 2015 at 6:21 PM, meekerdb meeke...@verizon.net
wrote:
On 8/28/2015 3:00 PM, Jason wrote:
https://www.youtube.com/watch?v=2_i1NKPzbjM

So what do you think?  Is it conscious?

Brent



http://iridia.ulb.ac.be/~marchal/





--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

Re: Uploaded Worm Mind

2015-08-30 Thread meekerdb

On 8/30/2015 3:34 AM, Bruno Marchal wrote:


On 30 Aug 2015, at 03:08, Russell Standish wrote:


Well as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.


I think personally that C. elegans, and planaria (!), even amoebas, are conscious, 
although very plausibly not self-conscious.


I tend to think since 2008 that even RA is already conscious, even maximally so, and 
that PA is already as self-conscious as a human (when in some dissociative state).


But I don't know if PA is more or less conscious than RA. That depends on whether the 
role of the higher part of the brain consists in filtering consciousness or enacting it.




But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).


I am slightly less optimistic than you. It will take one or two decades before we 
simulate the hippocampus of a rat, but probably more time will be needed for the rest of 
their brain. And the result can be a conscious creature, with a quite different 
consciousness than a rat, as I find it plausible that pain is related to the glial cells 
and their metabolism, which are not taken into account by the current "copies".


So now you agree with me that there are different kinds and degrees of consciousness; that 
it is not just a binary attribute of an axiom + inference system.


Brent



Re: Uploaded Worm Mind

2015-08-30 Thread Russell Standish
On Sun, Aug 30, 2015 at 07:46:33PM -0400, Jason Resch wrote:
 
 There's roughly a 100x increase in number of neurons, scaling from the
 nematode to the fruit fly, to the mouse, cat, and then human. If efficiency
 and power of computers for a given cost continue to double, then for what it
 costs now to simulate a nematode brain, we will in 7 years (assuming
 doubling per year) be able to simulate a fruit fly brain. Seven years
 later, for the same cost, we will be able to simulate a mouse brain. Seven
 years later, a cat brain, and then finally, 28 years later, we'll be able
 to simulate neural networks with the same complexity as a human brain.

Historically, the rate of doubling is every 18 months, or 5 years for
an order of magnitude. So your numbers are 10 years to fruit fly, 20
to mouse and 30 to human brain, respectively.

But computational cost goes up faster than linearly with neuron count,
because the actual cost also depends on axon connectivity. IIRC, this
is something like x log(x) complexity. So it might be a bit further out
for human complexity.
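
A quick Python sketch of that timeline arithmetic, assuming 18-month
doublings and cost growing as n log n in the neuron count n; the round
neuron counts are the figures quoted elsewhere in the thread, and everything
here is back-of-envelope:

    from math import log2

    neurons = {"nematode": 1e3, "fruit fly": 1e5, "mouse": 1e7,
               "cat": 1e9, "human": 1e11}

    def years_until_affordable(target, base="nematode", months_per_doubling=18):
        # Years of Moore's-law growth until simulating `target` costs what
        # simulating `base` costs today, with cost ~ n * log2(n).
        cost = lambda n: n * log2(n)
        doublings = log2(cost(neurons[target]) / cost(neurons[base]))
        return doublings * months_per_doubling / 12

    for animal in ("fruit fly", "mouse", "cat", "human"):
        print(animal, round(years_until_affordable(animal)))
    # ~11, ~22, ~32, ~43 years: near 10/20/30, with human "a bit further out"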

 
 
  One interesting test I'd like to see is applying Tononi's integrated
  information measure to these simple creatures to see if they're
  producing any integrated information. I suspect Integrated Information
  is a necessary requirement for consciousness, but not so sure about
  sufficiency.
 
 
 Doesn't a NAND operation produce integrated information? 

Really? Did you have a cite for this?

It seems we're in
 danger of basing our answers to the binary question "is it conscious?" on
 purely quantitative, rather than qualitative, differences in computations.
 How many neurons do you think are required to implement the algorithms you
 consider necessary for consciousness?
 

I haven't fully grokked this, but I would have thought Tononi's
measure was at least normalised by the size of the system in question.

A system of a billion uncoordinated NAND gates will produce very low
amounts of integrated information.
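
A toy calculation in Python bearing on both points. This is not Tononi's Phi,
just a whole-minus-sum "synergy" I(X1,X2;Y) - I(X1;Y) - I(X2;Y) over uniform
random inputs, used as an illustrative stand-in: a single NAND does carry
information beyond what its inputs carry separately, which by itself says
nothing about a pile of uncoordinated gates:

    from itertools import product
    from math import log2

    def H(dist):
        # Shannon entropy of a probability dict.
        return -sum(p * log2(p) for p in dist.values() if p > 0)

    def mutual_info(pairs):
        # I(A;B) from a uniform distribution over the listed (a, b) pairs.
        n = len(pairs)
        pa, pb, pab = {}, {}, {}
        for a, b in pairs:
            pa[a] = pa.get(a, 0) + 1 / n
            pb[b] = pb.get(b, 0) + 1 / n
            pab[(a, b)] = pab.get((a, b), 0) + 1 / n
        return H(pa) + H(pb) - H(pab)

    table = [((x1, x2), 1 - (x1 & x2)) for x1, x2 in product((0, 1), repeat=2)]
    whole = mutual_info(table)                              # I(X1,X2;Y) ~ 0.811
    part1 = mutual_info([(inp[0], y) for inp, y in table])  # I(X1;Y)    ~ 0.311
    part2 = mutual_info([(inp[1], y) for inp, y in table])  # I(X2;Y)    ~ 0.311
    print(whole - part1 - part2)   # ~0.189 bits not attributable to either part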

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Uploaded Worm Mind

2015-08-30 Thread Jason Resch
On Sat, Aug 29, 2015 at 9:08 PM, Russell Standish li...@hpcoders.com.au
wrote:

 Well as people probably know, I don't believe C. elegans can be
 conscious in any sense of the word. Hell - I have strong doubts about
 ants, and they're massively more complex creatures.

 But it probably won't be long before we simulate a mouse brain in toto
 - about 2 decades is my guess, maybe even less given enough dollars -
 then we're definitely in grey philosophical territory :).



Nematode brain - 10^3 neurons (302)
Fruit fly brain - 10^5 neurons (65,000)
Mouse brain - 10^7 neurons (10,000,000)
Cat brain - 10^9 neurons (1,000,000,000)
Human brain - 10^11 neurons (87,000,000,000)

There's roughly a 100x increase in number of neurons, scaling from the
nematode to the fruit fly, to the mouse, cat, and then human. If efficiency
and power of computers for a given cost continue to double, then for what it
costs now to simulate a nematode brain, we will in 7 years (assuming
doubling per year) be able to simulate a fruit fly brain. Seven years
later, for the same cost, we will be able to simulate a mouse brain. Seven
years later, a cat brain, and then finally, 28 years later, we'll be able
to simulate neural networks with the same complexity as a human brain.
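
The same arithmetic as a short Python check (the 100x step and the
doubling-per-year are the stated assumptions):

    from math import ceil, log2

    step = 100                          # ~100x more neurons per jump above
    years_per_jump = ceil(log2(step))   # 7 doublings, since 2**7 = 128 >= 100
    ladder = ["nematode", "fruit fly", "mouse", "cat", "human"]
    for jumps, animal in enumerate(ladder):
        print(animal, jumps * years_per_jump)   # 0, 7, 14, 21, 28 years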

This of course assumes a continued exponential increase in computing power,
but biology shows us that it is physically possible to run what amounts to
a super-computer's worth of FLOPS on just a human brain's energy
consumption of 20 watts. Our largest super computers of today are estimated
to be within one order of magnitude of the computational power of the human
brain. So while by 2043 any personal computer could simulate a human
brain, it will be a possible feat for super computers 10-20 years earlier.




 One interesting test I'd like to see is applying Tononi's integrated
 information measure to these simple creatures to see if they're
 producing any integrated information. I suspect Integrated Information
 is a necessary requirement for consciousness, but not so sure about
 sufficiency.


Doesn't a NAND operation produce integrated information? It seems we're in
danger of basing our answers to the binary question "is it conscious?" on
purely quantitative, rather than qualitative, differences in computations.
How many neurons do you think are required to implement the algorithms you
consider necessary for consciousness?

Jason



 Cheers

 On Sat, Aug 29, 2015 at 09:00:14AM +0200, Bruno Marchal wrote:
 
  On 29 Aug 2015, at 00:43, Jason Resch wrote:
 
  I think so. It is at least as conscious as C. Elegans.
 
 
  Assuming that the worm comp substitution level of neuronal
  connection is the correct choice.
 
  Low-level animal behavior might rely heavily on smell and other
  chemical interactions, and I am not sure what the sensors
  in the robots represent, as C. elegans is blind (I think).
 
  Hard to really conclude it is thinking from the video, and
  theoretically, we can never be sure. It might be a philosophical
  worm zombie!
 
  Bruno
 
 
 
  
  Jason
  
  On Fri, Aug 28, 2015 at 6:21 PM, meekerdb meeke...@verizon.net
  wrote:
  On 8/28/2015 3:00 PM, Jason wrote:
  https://www.youtube.com/watch?v=2_i1NKPzbjM
  
  So what do you think?  Is it conscious?
  
  Brent
  
 
  http://iridia.ulb.ac.be/~marchal/
 
 
 

 --


 
 Prof Russell Standish  Phone 0425 253119 (mobile)
 Principal, High Performance Coders
 Visiting Professor of Mathematics  hpco...@hpcoders.com.au
 University of New South Wales  http://www.hpcoders.com.au

 

Re: Uploaded Worm Mind

2015-08-30 Thread Jason Resch
On Sun, Aug 30, 2015 at 6:34 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 30 Aug 2015, at 03:08, Russell Standish wrote:

 Well as people probably know, I don't believe C. elegans can be
 conscious in any sense of the word. Hell - I have strong doubts about
 ants, and they're massively more complex creatures.


 I think personally that C. elegans, and planaria (!), even amoebas, are
 conscious, although very plausibly not self-conscious.

 I tend to think since 2008 that even RA is already conscious, even
 maximally so, and that PA is already as self-conscious as a human
 (when in some dissociative state).


What realization did you have in 2008 that changed your mind?



 But I don't know if PA is more or less conscious than RA. That depends on
 whether the role of the higher part of the brain consists in filtering
 consciousness or enacting it.


 But it probably won't be long before we simulate a mouse brain in toto
 - about 2 decades is my guess, maybe even less given enough dollars -
 then we're definitely in grey philosophical territory :).


 I am slightly less optimistic than you. It will take one or two decades
 before we simulate the hippocampus of a rat, but probably more time will be
 needed for the rest of their brain.


Perhaps 2 decades from simulating a rat brain on a PC, but super computers
are generally up to a million times more powerful than a single CPU, which
means they are roughly 2 decades ahead in computing power (assuming annual
doubling in computational capacity).


 And the result can be a conscious creature, with a quite different
 consciousness than a rat, as I find it plausible that pain is related to the
 glial cells and their metabolism, which are not taken into account by the
 current copies.


Interesting. Why do you think glial cell metabolism plays a role in pain
sensation?

Jason





 One interesting test I'd like to see is applying Tononi's integrated
 information measure to these simple creatures to see if they're
 producing any integrated information. I suspect Integrated Information
 is a necessary requirement for consciousness, but not so sure about
 sufficiency.


 It is offered freely with the notion of self-reference, I would say. The
 eight hypostases/person-povs each constitute a different mode of
 self-integration. If Kauffman's idea that DNA results from something
 akin to Kleene's diagonalization is right (which I think too), the amoeba and
 most protozoans are already quite self-integrated beings. Then elementary
 invertebrates might lose that integration (like perhaps hydra), but
 quickly get it back, like with planaria.

 I guess that you remember that I am not yet convinced by your argument
 that ants are not conscious, as it relies on anthropic use of the Absolute
 Self-Sampling Assumption (ASSA) which I prefer to avoid because the domain
 of  its statistic is not clear to me. (I am not impressed by the doomsday
 argument for the same reason).

 Bruno





 Cheers

 On Sat, Aug 29, 2015 at 09:00:14AM +0200, Bruno Marchal wrote:


 On 29 Aug 2015, at 00:43, Jason Resch wrote:

 I think so. It is at least as conscious as C. Elegans.



 Assuming that the worm comp substitution level of neuronal
 connection is the correct choice.

 Low-level animal behavior might rely heavily on smell and other
 chemical interactions, and I am not sure what the sensors
 in the robots represent, as C. elegans is blind (I think).

 Hard to really conclude it is thinking from the video, and
 theoretically, we can never be sure. It might be a philosophical
 worm zombie!

 Bruno




 Jason

 On Fri, Aug 28, 2015 at 6:21 PM, meekerdb meeke...@verizon.net
 wrote:
 On 8/28/2015 3:00 PM, Jason wrote:
 https://www.youtube.com/watch?v=2_i1NKPzbjM

 So what do you think?  Is it conscious?

 Brent



 http://iridia.ulb.ac.be/~marchal/




Re: Uploaded Worm Mind

2015-08-30 Thread Russell Standish
On Sun, Aug 30, 2015 at 12:34:18PM +0200, Bruno Marchal wrote:
 
 On 30 Aug 2015, at 03:08, Russell Standish wrote:
 
 Well, as people probably know, I don't believe C. elegans can be
 conscious in any sense of the word. Hell - I have strong doubts about
 ants, and they're massively more complex creatures.
 
 I think personally that C. Elegans, and Planaria (!), even amoeba,
 are conscious, although very plausibly not self-conscious.
 
 I tend to think since 2008 that even RA is already conscious, even
 maximally so, and that PA is already as self-conscious as a
 human (when in some dissociative state).
 
 But I don't know if PA is more or less conscious than RA. That
 depends on whether the role of the higher parts of the brain consists
 in filtering consciousness or enacting it.
 
 
 But it probably won't be long before we simulate a mouse brain in toto
 - about 2 decades is my guess, maybe even less given enough dollars -
 then we're definitely in grey philosophical territory :).
 
 I am slightly less optimistic than you. It will take one or two
 decades before we simulate the hippocampus of a rat, but probably
 more time will be needed for the rest of the brain. And the result
 can be a conscious creature, with a quite different consciousness
 than a rat's, as I find it plausible that pain is related to the glial
 cells and their metabolism, which are not taken into account by the
 current copies.

What is blocking us is not the computing power - whole rat brain
simulations have already been done at a small fraction of real
time - so all we need is about a decade of performance improvement
through Moore's law.

What needs development is our means of determining the neural
circuitry. There have been leaps and bounds in the process of slicing
frozen brains and imaging the slices with electron microscopes, but
clearly it is still far too slow.

As for the hypothesis that glial cells have something to do with it,
well, that can be tested via the sort of whole rat brain simulation
I've been talking about. Run the simulation in a robotic rat, and
compare the behaviour with a real rat. Basically what the OpenWorm
guys are doing, but scaled up to a rat. If the simulation is way
different from the real rat, then we know something else is required.
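
A minimal sketch of what such a comparison could look like - the metric
and the data shape here are illustrative assumptions, not any published
protocol:

import numpy as np

def behaviour_distance(real_xy, sim_xy):
    """Mean per-step distance between two (T, 2) position tracks."""
    real_xy, sim_xy = np.asarray(real_xy), np.asarray(sim_xy)
    T = min(len(real_xy), len(sim_xy))   # compare the overlapping window
    return float(np.mean(np.linalg.norm(real_xy[:T] - sim_xy[:T], axis=1)))

# Toy usage: a faithful simulation scores near 0, a divergent one scores high.
t = np.linspace(0, 10, 200)
real = np.c_[np.cos(t), np.sin(t)]            # real rat circling the arena
good_sim = real + np.random.normal(0, 0.01, real.shape)
bad_sim = np.c_[t, np.zeros_like(t)]          # simulation wanders off
print(behaviour_distance(real, good_sim))     # small
print(behaviour_distance(real, bad_sim))      # large

In practice one would compare distributions of behaviours (turn rates,
dwell times, responses to stimuli) rather than raw tracks, but the logic -
pick a metric, measure the gap - is the same.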

 
 
 
 One interesting test I'd like to see is applying Tononi's integrated
 information measure to these simple creatures to see if they're
 producing any integrated information. I suspect Integrated Information
 is a necessary requirement for consciousness, but not so sure about
 sufficiency.
 
 It is offered freely with the notion of self-reference, I would say.
 The eight hypostases/persons-pov each constitute a different mode of
 self-integration. If Kauffman's idea that the DNA results from
 something akin to Kleene's diagonalization is correct (which I think
 too), the amoeba and most protozoans are already quite self-integrated
 beings. Then elementary invertebrates might lose that integration
 (like perhaps hydra), but quickly get it back, like with planaria.
 

You might be right that integrated information can be obtained from
your hypostases, but it is not obvious. More work is required before
you can plausibly make that claim.


 I guess that you remember that I am not yet convinced by your
 argument that ants are not conscious, as it relies on an anthropic use
 of the Absolute Self-Sampling Assumption (ASSA), which I prefer to
 avoid because the domain of its statistic is not clear to me. (I am
 not impressed by the doomsday argument for the same reason.)
 

Yes, I've heard that a lot. "I'm not impressed" = "It sounds like a
crock of shit, but I can't put my finger on why."

Probably the best way forward is to put forward a toy model showing
the anthropic argument failing, so that the mechanism becomes clear.
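
For instance, a toy Bayesian doomsday calculation (all numbers below are
illustrative assumptions) makes the sensitivity explicit: the conclusion
swings entirely on the prior over the total population N, i.e. on the
choice of reference class:

def posterior_doom(birth_rank, prior):
    """P(N | my birth rank is r), assuming I'm uniformly sampled among the N born."""
    # Likelihood of rank r given total N is 1/N (for r <= N), so the
    # posterior is prior(N)/N renormalised - small N gets favoured.
    post = {N: p / N for N, p in prior.items() if N >= birth_rank}
    Z = sum(post.values())
    return {N: p / Z for N, p in post.items()}

rank = 60e9                           # ~60 billion humans born so far
prior = {100e9: 0.5, 10_000e9: 0.5}   # 50/50 over "doom soon" vs "doom late"
print(posterior_doom(rank, prior))
# -> ~0.99 lands on the small-N ("doom soon") branch. Change the prior
# or the reference class and the conclusion changes with it, which is
# exactly the sensitivity the objection points at.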


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Uploaded Worm Mind

2015-08-30 Thread meekerdb

On 8/30/2015 5:42 PM, Russell Standish wrote:

On Sun, Aug 30, 2015 at 07:46:33PM -0400, Jason Resch wrote:

There's roughly a 100x increase in the number of neurons at each step,
scaling from the nematode to the fruit fly, to the mouse, cat, and then
human. If the efficiency and power of computers for a given cost continue to
double, then for what it costs now to simulate a nematode brain, we will in
7 years (assuming doubling per year) be able to simulate a fruit fly brain.
Seven years later, for the same cost, we will be able to simulate a mouse
brain. Seven years later, a cat brain, and then finally, 28 years later,
we'll be able to simulate neural networks with the same complexity as a
human brain.

Historically, the rate of doubling is every 18 months, or 5 years for
an order of magnitude. So your numbers are 10 years to fruit fly, 20
to mouse and 30 to human brain, respectively.

But computational cost goes up faster than linearly with neuron count,
because the actual cost also depends on axon connectivity. IIRC, this
is something like x log(x) complexity. So it might be a bit further out
for human-level complexity.
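
The arithmetic is easy to sketch (the neuron counts below are rough
textbook figures, so the printed years won't exactly match the round
numbers above):

from math import log2

neurons = {"nematode": 302, "fruit fly": 1e5, "mouse": 7e7, "human": 8.6e10}

def years_until_affordable(n_target, n_base, doubling_months, supralinear=False):
    """Years of doublings until simulating n_target costs what n_base costs now."""
    cost = (lambda n: n * log2(n)) if supralinear else (lambda n: n)
    return log2(cost(n_target) / cost(n_base)) * doubling_months / 12

for name, n in neurons.items():
    for months in (12, 18):   # annual doubling vs the historical 18 months
        y_lin = years_until_affordable(n, neurons["nematode"], months)
        y_nln = years_until_affordable(n, neurons["nematode"], months, True)
        print(f"{name:9s} @ {months}mo doubling: "
              f"{y_lin:5.1f}y linear, {y_nln:5.1f}y n*log(n)")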


One interesting test I'd like to see is applying Tononi's integrated
information measure to these simple creatures to see if they're
producing any integrated information. I suspect Integrated Information
is a necessary requirement for consciousness, but not so sure about
sufficiency.


Doesn't a NAND operation produce integrated information?

Really? Did you have a cite for this?
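
For what it's worth, here is a back-of-the-envelope check - not Tononi's
phi, which is defined over a system's full cause-effect structure, but a
simpler synergy measure in the same spirit - showing that a NAND gate's
output does carry information about its joint inputs beyond what either
input carries alone:

from itertools import product
from math import log2

def H(probs):
    """Shannon entropy in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Uniform distribution over the four input states of a NAND gate.
states = [(a, b, int(not (a and b))) for a, b in product([0, 1], repeat=2)]
p = 1 / len(states)

def marginal(idx):
    """Marginal distribution over the variables at the given indices."""
    dist = {}
    for s in states:
        key = tuple(s[i] for i in idx)
        dist[key] = dist.get(key, 0) + p
    return list(dist.values())

H_Y = H(marginal([2]))
I_joint = H(marginal([0, 1])) + H_Y - H(marginal([0, 1, 2]))  # I(X1,X2;Y) ~ 0.811
I_x1 = H(marginal([0])) + H_Y - H(marginal([0, 2]))           # I(X1;Y)    ~ 0.311
I_x2 = H(marginal([1])) + H_Y - H(marginal([1, 2]))           # I(X2;Y)    ~ 0.311
print(f"synergy = {I_joint - I_x1 - I_x2:.3f} bits")          # ~ 0.189 > 0

Whether that surplus deserves the name "integrated information" in
Tononi's technical sense is, of course, the actual question.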

It seems we're in danger of basing our answers to the binary question "is
it conscious?" on purely quantitative, rather than qualitative, differences
in computations. How many neurons do you think are required to implement
the algorithms you consider necessary for consciousness?


I haven't fully grokked this, but I would have thought Tononi's
measure was at least normalised by the size of the system in question.


It's normalized by the size of the smallest subsystem.  But I wouldn't put 
much credence in IIT:

http://www.scottaaronson.com/blog/?p=1799

Brent



Re: Uploaded Worm Mind

2015-08-29 Thread Russell Standish
Well, as people probably know, I don't believe C. elegans can be
conscious in any sense of the word. Hell - I have strong doubts about
ants, and they're massively more complex creatures.

But it probably won't be long before we simulate a mouse brain in toto
- about 2 decades is my guess, maybe even less given enough dollars -
then we're definitely in grey philosophical territory :).

One interesting test I'd like to see is applying Tononi's integrated
information measure to these simple creatures to see if they're
producing any integrated information. I suspect Integrated Information
is a necessary requirement for consciousness, but not so sure about sufficiency.

Cheers 

On Sat, Aug 29, 2015 at 09:00:14AM +0200, Bruno Marchal wrote:
 
 On 29 Aug 2015, at 00:43, Jason Resch wrote:
 
 I think so. It is at least as conscious as C. Elegans.
 
 
 Assuming that the worm comp substitution level of neuronal
 connection is the correct choice.
 
 Low-level animals' behavior might rely heavily on smell and other
 chemical interactions, and I am not sure what the sensors in the
 robot represent, as C. Elegans is blind (I think).
 
 Hard to really conclude it is thinking from the video, and
 theoretically, we can never be sure. It might be a philosophical
 worm zombie!
 
 Bruno
 
 
 
 
 Jason
 
 On Fri, Aug 28, 2015 at 6:21 PM, meekerdb meeke...@verizon.net
 wrote:
 On 8/28/2015 3:00 PM, Jason wrote:
 https://www.youtube.com/watch?v=2_i1NKPzbjM
 
 So what do you think?  Is it conscious?
 
 Brent
 
 
 
 
 http://iridia.ulb.ac.be/~marchal/
 
 
 

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: Uploaded Worm Mind

2015-08-28 Thread meekerdb

On 8/28/2015 3:00 PM, Jason wrote:

https://www.youtube.com/watch?v=2_i1NKPzbjM


So what do you think?  Is it conscious?

Brent



Re: Uploaded Worm Mind

2015-08-28 Thread Jason Resch
I think so. It is at least as conscious as C. Elegans.

Jason

On Fri, Aug 28, 2015 at 6:21 PM, meekerdb meeke...@verizon.net wrote:

 On 8/28/2015 3:00 PM, Jason wrote:

 https://www.youtube.com/watch?v=2_i1NKPzbjM


 So what do you think?  Is it conscious?

 Brent


