Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-30 Thread Craig Weinberg


On Saturday, March 30, 2013 7:01:47 AM UTC-4, Bruno Marchal wrote:
>
>
> On 29 Mar 2013, at 13:14, Craig Weinberg wrote:
>
>
>
> On Friday, March 29, 2013 6:21:59 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 28 Mar 2013, at 20:15, Craig Weinberg wrote:
>>
>>
>>
>> On Thursday, March 28, 2013 10:41:22 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 26 Mar 2013, at 17:53, Craig Weinberg wrote:
>>>
>>>
>>>
>>> On Tuesday, March 26, 2013 10:13:09 AM UTC-4, Bruno Marchal wrote:


 On 26 Mar 2013, at 13:35, Craig Weinberg wrote: 

 It is if you assume photons bouncing back and forth.
>>>  
>>>
 unlike a universal   
 number. The fixed point of the two mirrors needs infinities of   
 reflexions, but the machine self-reference needs only two   
 diagonalizations. As I said, you must study those things and convince   
 yourself. 

 It sounds like a dodge to me. Fundamental truths seem like they are 
>>> always conceptually simple. I can teach someone the principle of binary 
>>> math in two minutes without them having to learn to build a computer from 
>>> scratch. You don't have to learn to use Maxwell's equations to be convinced 
>>> that electromagnetism involves wave properties.
>>>
>>>
>>>
>>> ?
>>>
>>> I can explain diagonalization in two minutes. If this can help.
>>>
>>
>> What would help more is to explain how diagonalization contributes to a 
>> computation being an experienced awareness rather than an unconscious 
>> outcome.
>>
>>
>> Diagonalization shows that a machine can refer to itself in many senses, 
>> which are equivalent in "god's eyes", but completely different in the 
>> machine's eyes, and some of those self-references verify accepted axioms for 
>> knowledge, observable, etc. 
>>
>
> How do you know that it  intentionally refers to itself rather than 
> unconsciously reflecting another view of itself? 
>
>
> I don't know. But you are saying you know that it does that, so how do you 
> know? 
>

Because every experience that I have ever had with symbols is that they do 
not literally refer to anything. A parrot need not speak English to repeat 
words interactively. A red octagon need not inherently refer to stopping 
just because we use it to tell ourselves to stop. I understand exactly what 
that is - how semiotics can help us tease apart the semantic from the 
pragmatic and syntactic, and my views help show how all three are 
symmetrical aspects of the whole, which is sense participation. Comp turns 
sense upside down, and conflates it with semantic and pragmatic modes under 
the completely inhospitable umbrella of syntax. Arithmetic is a 
disembodied, impersonal, and automated syntax which for some reason winds 
up becoming embodied as semantic persons, but there is no reason to 
imagine that could happen going by the arithmetic alone. The pathetic 
fallacy plugs the gap between who we know we are and what we want to 
believe got us here.
 

>
>
>
>
> If my car's wheel is out of alignment, the tire tracks might show that the 
> car is pulling to the right and is being constantly corrected. That entire 
> pattern is merely a symptom of the overall machine - the tracks themselves 
> are not referring or inferring any intelligence back to the car, and the 
> car does not use its tracks to realign itself. It is we who do the 
> inferring and referring.
>
>
>>
>>
>>
>>  
>>
>>>
>>>
>>>
>>>
>>>


 > or a cartoon of a lion talking about itself into some kind of   
 > subjective experience for the cartoon, or cartoon-ness, or lion- 
 > ness, or talking-ness. Self-reference has no significance unless we   
 > assume that the self already has awareness. 

 Hmm... I am open to that assumption, but usually I prefer to add the   
 universality assumption too. 




 > If I say 'these words refer to themselves', or rig up a camera to   
 > point at a screen displaying the output of Tupper's Self-Referential 
   
 > formula, I still have nothing but a camera, a screen and some   
 > meaningless graphics. This assumption pulls qualia out of thin air,   
 > ignores the pathetic fallacy completely, and conflates all   
 > territories with maps. 
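
An aside on the formula mentioned above: Tupper's inequality is less mysterious than it sounds. The plotted condition just reads off one bit of a very large constant k chosen in advance to encode the desired bitmap (the y coordinate is offset by k). A minimal sketch of that bit-extraction reading in Python (an illustration added here, not something from the thread; the function name is just for this sketch and the published constant is not reproduced):

def tupper_pixel(x, y):
    # Integer form of Tupper's test
    #   1/2 < floor(mod(floor(y/17) * 2**(-17*floor(x) - mod(floor(y), 17)), 2))
    # for non-negative integers x, y: it tests bit (17*x + y % 17) of y // 17.
    return ((y // 17) >> (17 * x + y % 17)) & 1 == 1

The picture the formula "draws" is therefore fixed entirely by the constant supplied as the offset.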

 On the contrary, we get a rich and complex theory of qualia, even a   
 testable one, as we get the quanta too, and so can compare with   
 nature. Please, don't oversimplify something that you have not studied. 

>>>
>>> How can there be such a thing as a theory of qualia? Qualia is precisely 
>>> that which theory cannot access in any way.
>>>
>>>
>>> Yes, that is one of the main axioms for qualia. Not only do you have a theory, 
>>> but you share it with me.
>>>
>>
>> How do you know it is a main axiom for qualia? 
>>
>>
>> It is not something I can know. It was just something we were agreeing 
>> on, so your point made my point, and refutes the idea that you can use 
>> it as a tool for invalidating comp.
>>
>>
> I agr

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-30 Thread Bruno Marchal


On 30 Mar 2013, at 02:13, Craig Weinberg wrote:




On Friday, March 29, 2013 1:59:44 PM UTC-4, Bruno Marchal wrote:

On 29 Mar 2013, at 16:02, Craig Weinberg wrote:




On Friday, March 29, 2013 10:47:09 AM UTC-4, Bruno Marchal wrote:

On 29 Mar 2013, at 10:44, Quentin Anciaux wrote:




2013/3/29 Bruno Marchal 

On 28 Mar 2013, at 18:59, meekerdb wrote:

On 3/28/2013 7:52 AM, Bruno Marchal wrote:
Intelligence, in my opinion, is rather easy too. It is a question 
of "abstract thermodynamics": intelligence is when you get enough 
heat while young, something like that. It is close to courage, and 
it is what makes competence possible.


??


Competence is the most difficult, as competences are distributed on a 
transfinite lattice of incomparable degrees. Some can require necessarily 
long work, and can have negative feedback on intelligence.


That sounds like a quibble.  Intelligence is usually just thought 
of as the ability to learn competence over a very general 
domain.


That's why I think that intelligence is simple, almost a mental  
attitude, more akin to courage and humility, than anything else.
Competence asks for gift or work, and can often lead to the  
feeling that we are more intelligent than others, which is the  
first basic symptom of stupidity.



That sounds more and more "1984"ish... War is peace.


?




Freedom is slavery.


?




Ignorance is strength


I never said that.

I say that awareness of our ignorance is strength. It participates  
to our intelligence.


That is true only if our intelligence is grounded in something  
which transcends its own ignorance...


That's what the Löbian machines do, even just by looking inward.  
That's computer science.


They question their ignorance or the question their certainty?


They contemplate their ignorance, and use it to question their  
certainty.











otherwise awareness of our own ignorance is just another layer of  
ignorance. This carries over to simulation - the ability to discern  
one thing as more real than another is meaningless unless our sense  
of realism is grounded in something beyond simulation.


Right. The physical reality, with comp, is not simulable. Nor  
consciousness.


Then what are we saying yes to the doctor for?


For an artificial brain, which will hopefully simulate their organic  
brain at the right level, so that they keep the usual statistical  
relationship with their usual universal neighbors (the physical  
universe, their boss, their life partners, etc.).








But machines can make it possible for a person to manifest 
themselves to some other person, with some non-negligible 
probability.


?


This is what you might understand if you read carefully UDA, at least  
up to step 7.












Patterns don't care about patterns, or to quote Deleuze -  
“Representation fails to capture the affirmed world of difference.  
Representation has only a single center, a unique and receding  
perspective, and in the consequence a false depth. It mediates  
everything, but mobilizes and moves nothing."


That makes sense in comp when describing the machine first person  
perspective.


How is it different in a third person perspective? How do  
computations discern between hypothesis and mobilization, or more  
importantly, how do they move anything?


Because they have faith in their ability to move something, which they  
develop through lasting and repeating true experiences.








In some sense we might argue that the first person associated to a  
machine, is not really a machine, after all, nor anything  
describable in any 3p way.


Which invites the question, in what way can comp claim to address  
consciousness? How does the 1p interface with the 3p?


By the reinstallation of the connection/conjunction with truth. It  
associates an unnameable knower to a nameable believer.









And that is what makes the first person immune for diagonalization,  
making it possible that [] x -> x. "[]" is not a number. Provably so  
with []p = Bp & p.


What makes the first person feel?


Their knowledge.






Comp is not so much "I am a machine" as "I (whatever I am) can 
survive locally, with "normal probability", a digital brain/body 
transplant". What is saved in the process is an immaterial 
connection between some number, some environments or consistent 
computational-continuations, and an infinity of universal numbers.


If we don't know what "I" is, then we really can't pretend to know  
whether it is automatically transferred from location to location  
simply by an affinity of signs and functions.


We never know what things are, and that's why we assume theories, 
reason, and test them.
Public nature never says that you are correct. It can only say that you 
are wrong.
In comp, only God can tell you that you are correct, but if you repeat 
that, then you are wrong.








We are not machines, Craig, we borrow machines (arithmetical 
relations). We are living on the boundaries between the computable and 
the non computable.

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-30 Thread Bruno Marchal


On 29 Mar 2013, at 13:44, Craig Weinberg wrote:




On Friday, March 29, 2013 5:41:19 AM UTC-4, Bruno Marchal wrote:

On 28 Mar 2013, at 18:59, meekerdb wrote:

> On 3/28/2013 7:52 AM, Bruno Marchal wrote:
>> Intelligence, in my opinion is rather easy too. It is a question of
>> "abstract thermodynamic", intelligence is when you get enough heat
>> while young, something like that. It is close to courage, and it is
>> what make competence possible.
>
> ??
>
>>
>> Competence is the most difficult, as they are distributed on
>> transfinite  lattice of incomparable degrees. Some can ask for
>> necessary long work, and can have negative feedback on  
intelligence.

>
> That sounds like a quibble.  Intelligence is usually just thought of
> as the the ability to learn competence over a very general domain.

Intelligence is an ability to learn and become competent, but more  
importantly to understand and discern. Intelligence is the cognitive- 
level modality of sensitivity.


"intelligence (n.)
late 14c., "faculty of understanding," from Old French  
intelligence (12c.), from Latin intelligentia, intellegentia  
"understanding, power of discerning; art, skill, taste," from  
intelligentem (nominative intelligens) "discerning," present  
participle of intelligere "to understand, comprehend," from inter-  
"between" (see inter-) + legere "choose, pick out, read" "


OK.





That's why I think that intelligence is simple, almost a mental
attitude, more akin to courage and humility, than anything else.
Competence asks for gift or work, and can often lead to the feeling
that we are more intelligent than others, which is the first basic
symptom of stupidity.

I don't know that feeling more intelligent than others means you are  
stupid, maybe just vain. If taken literally, how could anyone become  
more intelligent than anyone else if as soon as they are intelligent  
enough to realize it, that made them stupid?


Because they will never realize that (unless they confuse intelligence  
and competence, but then it is just a minor vocabulary issue). In case  
they realize that they are really intelligent (in the large sense  
exposed here), then they become stupid, indeed.


You have something similar with mystical illumination. If someone 
tells you "I have seen God", you can be pretty sure he did not. Same 
with genuine "happiness": it goes without saying, etc. Of course I am 
talking about the ideal, public case. In private you can say more, but often, 
even there, saying too much leads to the contrary effect.


Bruno






Craig


Bruno



http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-30 Thread Bruno Marchal


On 29 Mar 2013, at 13:14, Craig Weinberg wrote:




On Friday, March 29, 2013 6:21:59 AM UTC-4, Bruno Marchal wrote:

On 28 Mar 2013, at 20:15, Craig Weinberg wrote:




On Thursday, March 28, 2013 10:41:22 AM UTC-4, Bruno Marchal wrote:

On 26 Mar 2013, at 17:53, Craig Weinberg wrote:




On Tuesday, March 26, 2013 10:13:09 AM UTC-4, Bruno Marchal wrote:

On 26 Mar 2013, at 13:35, Craig Weinberg wrote:

It is if you assume photons bouncing back and forth.

unlike a universal
number. The fixed point of the two mirrors needs infinities of
reflexions, but the machine self-reference needs only two
diagonalizations. As I said, you must study those things and  
convince

yourself.

It sounds like a dodge to me. Fundamental truths seem like they  
are always conceptually simple. I can teach someone the principle  
of binary math in two minutes without them having to learn to  
build a computer from scratch. You don't have to learn to use  
Maxwell's equations to be convinced that electromagnetism involves  
wave properties.



?

I can explain diagonalization in two minutes. If this can help.

What would help more is to explain how diagonalization contributes  
to a computation being an experienced awareness rather than an  
unconscious outcome.


Diagonalization shows that a machine can refer to itself in many  
sense, which are equivalent in "god's eyes", but completely  
different in the machine's eyes, and some of those self-reference  
verify accepted axioms for knowledge, observable, etc.


How do you know that it  intentionally refers to itself rather than  
unconsciously reflecting another view of itself?


I don't know. But you are saying you know that it does that, so how do  
you know?





If my car's wheel is out of alignment, the tire tracks might show  
that the car is pulling to the right and is being constantly  
corrected. That entire pattern is merely a symptom of the overall  
machine - the tracks themselves are not referring or inferring any  
intelligence back to the car, and the car does not use its tracks to  
realign itself. It is we who do the inferring and referring.
















> or a cartoon of a lion talking about itself into some kind of
> subjective experience for the cartoon, or cartoon-ness, or lion-
> ness, or talking-ness. Self-reference has no significance unless  
we

> assume that the self already has awareness.

Hmm... I am open to that assumption, but usually I prefer to add the
universality assumption too.




> If I say 'these words refer to themselves', or rig up a camera to
> point at a screen displaying the output of Tupper's Self- 
Referential

> formula, I still have nothing but a camera, a screen and some
> meaningless graphics. This assumption pulls qualia out of thin  
air,

> ignores the pathetic fallacy completely, and conflates all
> territories with maps.

On the contrary, we get a rich and complex theory of qualia, even a
testable one, as we get the quanta too, and so can compare with
nature. Please, don't oversimplify something that you have not  
studied.


How can there be such a thing as a theory of qualia? Qualia is 
precisely that which theory cannot access in any way.


Yes, that is one of the main axioms for qualia. Not only do you have a 
theory, but you share it with me.


How do you know it is a main axiom for qualia?


It is not something I can know. It was just something we were 
agreeing on, so your point made my point, and refutes the idea 
that you can use it as a tool for invalidating comp.



I agree that it is an important axiom, but only to discern qualia  
from quanta. It doesn't explain qualia itself or justify its  
existence (or insistence) in particular.



Sure. Nice we agree on that axiom. My point was just that this cannot  
be used against comp, as the comp theory of qualia explains that  
particular aspect.









It's like saying that the important thing about the Moon is that we  
can't swim there. The fact that I understand that the Moon is not  
in the ocean doesn't mean I can take credit for figuring out the  
Moon. To me it shows the confirmation bias of the approach. You are  
looking at reality from the start as if it were a kind of theory,


I bet I can find a theory, indeed. But this does not mean that 
everything about the machine can be made into a theory.


Sure, I'm not denying that it is true that we can't swim to the  
Moon, or that this theory could not be part of a larger theory, but  
the theory still doesn't produce a theory justifying the Moon.


It justifies the existence of the appearance of the moon, and its  
stability. Then the actual existence is geographical, contingent. Comp  
justifies that we cannot justify such things.










so that this detail about qualia being non-theoretical has inflated  
significance.


It is important indeed, but of course it is not used here as an 
argument for comp, only as showing that you can't use the absence of 
a theory as an argument against comp, because computer science explains 
that absence of theory, and the presence of useful meta-theory.

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Craig Weinberg


On Friday, March 29, 2013 1:59:44 PM UTC-4, Bruno Marchal wrote:
>
>
> On 29 Mar 2013, at 16:02, Craig Weinberg wrote:
>
>
>
> On Friday, March 29, 2013 10:47:09 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 29 Mar 2013, at 10:44, Quentin Anciaux wrote:
>>
>>
>>
>> 2013/3/29 Bruno Marchal 
>>
>>>
>>> On 28 Mar 2013, at 18:59, meekerdb wrote:
>>>
>>>  On 3/28/2013 7:52 AM, Bruno Marchal wrote:

> Intelligence, in my opinion is rather easy too. It is a question of 
> "abstract thermodynamic", intelligence is when you get enough heat while 
> young, something like that. It is close to courage, and it is what make 
> competence possible.
>

 ??

  
> Competence is the most difficult, as competences are distributed on a 
> transfinite lattice of incomparable degrees. Some can require necessarily 
> long work, and can have negative feedback on intelligence.
>

 That sounds like a quibble.  Intelligence is usually just thought of as 
 the ability to learn competence over a very general domain.

>>>
>>> That's why I think that intelligence is simple, almost a mental 
>>> attitude, more akin to courage and humility, than anything else.
>>> Competence asks for gift or work, and can often lead to the feeling that 
>>> we are more intelligent than others, which is the first basic symptom of 
>>> stupidity.
>>>
>>>
>> That sounds more and more "1984"ish... War is peace. 
>>
>>
>> ?
>>
>>
>>
>> Freedom is slavery.
>>
>>
>> ?
>>
>>
>>
>> Ignorance is strength 
>>
>>
>> I never said that.
>>
>> I say that awareness of our ignorance is strength. It participates to our 
>> intelligence.
>>
>
> That is true only if our intelligence is grounded in something which 
> transcends its own ignorance...
>
>
> That's what the Löbian machines do, even just by looking inward. That's 
> computer science.
>

They question their ignorance or the question their certainty?
 

>
>
>
>
> otherwise awareness of our own ignorance is just another layer of 
> ignorance. This carries over to simulation - the ability to discern one 
> thing as more real than another is meaningless unless our sense of realism 
> is grounded in something beyond simulation. 
>
>
> Right. The physical reality, with comp, is not simulable. Nor 
> consciousness. 
>

Then what are we saying yes to the doctor for?
 

> But machines can makes possible for some person to manifest themselves 
> with some other person, with some non negligible probability.
>

?
 

>
>
>
>
>
> Patterns don't care about patterns, or to quote Deleuze - “Representation 
> fails to capture the affirmed world of difference. Representation has only 
> a single center, a unique and receding perspective, and in the consequence 
> a false depth. It mediates everything, but mobilizes and moves nothing."
>
>
> That makes sense in comp when describing the machine first person 
> perspective. 
>

How is it different in a third person perspective? How do computations 
discern between hypothesis and mobilization, or more importantly, how do 
they move anything?
 

>
> In some sense we might argue that the first person associated to a 
> machine, is not really a machine, after all, nor anything describable in 
> any 3p way. 
>

Which invites the question, in what way can comp claim to address 
consciousness? How does the 1p interface with the 3p?
 

>
> And that is what makes the first person immune for diagonalization, making 
> it possible that [] x -> x. "[]" is not a number. Provably so with []p = Bp 
> & p. 
>

What makes the first person feel?
 

>
> Comp is not so much "I am a machine" that "I (whatever I am) can survive 
> locally with "normal probability" a digital brain/body transplant". What is 
> saved in the process is an immaterial connection between some number, some 
> environments or consistent computational-continuations, and an infinity of 
> universal numbers". 
>

If we don't know what "I" is, then we really can't pretend to know whether 
it is automatically transferred from location to location simply by an 
affinity of signs and functions.
 

>
> We are not machines, Craig, we borrow machines (arithmetical relations). 
> We are living on the boundaries between the computable and the non 
> computable.
>

I can agree with that, but I go further to say that what machines are is 
actually the poorest possible reflection of our nature.

Craig
 

>
> Bruno
>
>
>
>
>





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Bruno Marchal


On 29 Mar 2013, at 16:02, Craig Weinberg wrote:




On Friday, March 29, 2013 10:47:09 AM UTC-4, Bruno Marchal wrote:

On 29 Mar 2013, at 10:44, Quentin Anciaux wrote:




2013/3/29 Bruno Marchal 

On 28 Mar 2013, at 18:59, meekerdb wrote:

On 3/28/2013 7:52 AM, Bruno Marchal wrote:
Intelligence, in my opinion is rather easy too. It is a question of  
"abstract thermodynamic", intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what make competence possible.


??


Competence is the most difficult, as competences are distributed on a 
transfinite lattice of incomparable degrees. Some can require necessarily 
long work, and can have negative feedback on intelligence.


That sounds like a quibble.  Intelligence is usually just thought 
of as the ability to learn competence over a very general domain.


That's why I think that intelligence is simple, almost a mental  
attitude, more akin to courage and humility, than anything else.
Competence asks for gift or work, and can often lead to the feeling  
that we are more intelligent than others, which is the first basic  
symptom of stupidity.



That sounds more and more "1984"ish... War is peace.


?




Freedom is slavery.


?




Ignorance is strength


I never said that.

I say that awareness of our ignorance is strength. It participates  
to our intelligence.


That is true only if our intelligence is grounded in something which  
transcends its own ignorance...


That's what the Löbian machines do, even just by looking inward.  
That's computer science.





otherwise awareness of our own ignorance is just another layer of  
ignorance. This carries over to simulation - the ability to discern  
one thing as more real than another is meaningless unless our sense  
of realism is grounded in something beyond simulation.


Right. The physical reality, with comp, is not simulable. Nor  
consciousness.
But machines can make it possible for a person to manifest themselves 
to some other person, with some non-negligible probability.






Patterns don't care about patterns, or to quote Deleuze -  
“Representation fails to capture the affirmed world of difference.  
Representation has only a single center, a unique and receding  
perspective, and in the consequence a false depth. It mediates  
everything, but mobilizes and moves nothing."


That makes sense in comp when describing the machine first person  
perspective.


In some sense we might argue that the first person associated to a  
machine, is not really a machine, after all, nor anything describable  
in any 3p way.


And that is what makes the first person immune to diagonalization, 
making it possible that [] x -> x. "[]" is not a number. Provably so 
with []p = Bp & p.
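
A compact way to spell out that last clause (a sketch of the standard Theaetetical reading, added here, not a quotation): with B the machine's provability box, the knower is defined by

\[
[]p \;:=\; Bp \wedge p, \qquad \vdash []p \rightarrow p \ \text{trivially}, \qquad \not\vdash Bp \rightarrow p \ \text{in general (Löb)}.
\]

The remark that this "[]" is not a number is the claim that, unlike B, the knower so defined is not itself an arithmetical predicate of the machine.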


Comp is not so much "I am a machine" as "I (whatever I am) can 
survive locally, with "normal probability", a digital brain/body 
transplant". What is saved in the process is an immaterial connection 
between some number, some environments or consistent computational-
continuations, and an infinity of universal numbers.


We are not machines, Craig, we borrow machines (arithmetical  
relations). We are living on the boundaries between the computable and  
the non computable.


Bruno







Craig







and now more intelligent is stupid.



That's a contradiction and is not what I said. I said that  
competence, or expertise, can have, and often have, a negative  
feedback on intelligence. Someone quoted Feynman saying that  
"Science is the belief in the ignorance of experts." That's deeply  
Löbian, if I can say.


I distinguish "intelligence" from competence. Competence can be  
evaluated, measured, relatively compared, trained, ... but  
intelligence is like free will and consciousness:  it can be hoped  
for oneself and others, but it is not measurable, and it corresponds  
to a state of mind. It is more like an attitude, close to modesty  
but also courage, as it is what makes it possible for persons to  
recognize their own mistake. I think that "intelligence" is a  
protagorean virtue: like consistency it obeys [] x -> ~x.


Bruno






Quentin

Bruno



http://iridia.ulb.ac.be/~marchal/









--
All those moments will be lost in time, like tears in rain.


Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Bruno Marchal


On 29 Mar 2013, at 16:04, Quentin Anciaux wrote:




2013/3/29 Bruno Marchal 

On 29 Mar 2013, at 10:44, Quentin Anciaux wrote:




2013/3/29 Bruno Marchal 

On 28 Mar 2013, at 18:59, meekerdb wrote:

On 3/28/2013 7:52 AM, Bruno Marchal wrote:
Intelligence, in my opinion is rather easy too. It is a question of  
"abstract thermodynamic", intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what make competence possible.


??


Competence is the most difficult, as they are distributed on  
transfinite  lattice of incomparable degrees. Some can ask for  
necessary long work, and can have negative feedback on intelligence.


That sounds like a quibble.  Intelligence is usually just thought  
of as the the ability to learn competence over a very general domain.


That's why I think that intelligence is simple, almost a mental  
attitude, more akin to courage and humility, than anything else.
Competence asks for gift or work, and can often lead to the feeling  
that we are more intelligent than others, which is the first basic  
symptom of stupidity.



That sounds more and more "1984"ish... War is peace.


?




Freedom is slavery.


?




Ignorance is strength


I never said that.

Never read George Orwell's 1984? I just said that what you wrote 
sounds like that.



I read it and love it. Orwell wrote in there that "Freedom is the  
right to say 2+2=4".


A deep assertion which reminded me of my father telling me that 
humans usually do not want to hear the truth.








I say that awareness of our ignorance is strength. It participates  
to our intelligence.







and now more intelligent is stupid.



That's a contradiction and is not what I said.

Well, I quote: "to the feeling that we are more intelligent than 
others, which is the first basic symptom of stupidity." That means that 
if someone feels he is more intelligent than others he is in fact 
stupid; if that's not Newspeak, nothing is...


No, it means that to feel oneself intelligent, and worse, to assert it, 
is not intelligent.


This does not make intelligence contradictory. It means that no 
machine can really judge its own, or others', intelligence. We can know 
that we are conscious, and we can know and communicate that we are 
competent, but we cannot know that we are intelligent.


Bruno






Quentin

I said that competence, or expertise, can have, and often have, a  
negative feedback on intelligence. Someone quoted Feynman saying  
that "Science is the belief in the ignorance of experts." That's  
deeply Löbian, if I can say.


I distinguish "intelligence" from competence. Competence can be  
evaluated, measured, relatively compared, trained, ... but  
intelligence is like free will and consciousness:  it can be hoped  
for oneself and others, but it is not measurable, and it corresponds  
to a state of mind. It is more like an attitude, close to modesty  
but also courage, as it is what makes it possible for persons to  
recognize their own mistake. I think that "intelligence" is a  
protagorean virtue: like consistency it obeys [] x -> ~x.


Bruno






Quentin

Bruno



http://iridia.ulb.ac.be/~marchal/









--
All those moments will be lost in time, like tears in rain.





http://iridia.ulb.ac.be/~marchal/









--
All those moments will be lost in time, like tears in rain.


Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Quentin Anciaux
2013/3/29 Bruno Marchal 

>
> On 29 Mar 2013, at 10:44, Quentin Anciaux wrote:
>
>
>
> 2013/3/29 Bruno Marchal 
>
>>
>> On 28 Mar 2013, at 18:59, meekerdb wrote:
>>
>>  On 3/28/2013 7:52 AM, Bruno Marchal wrote:
>>>
 Intelligence, in my opinion is rather easy too. It is a question of
 "abstract thermodynamic", intelligence is when you get enough heat while
 young, something like that. It is close to courage, and it is what make
 competence possible.

>>>
>>> ??
>>>
>>>
 Competence is the most difficult, as they are distributed on
 transfinite  lattice of incomparable degrees. Some can ask for necessary
 long work, and can have negative feedback on intelligence.

>>>
>>> That sounds like a quibble.  Intelligence is usually just thought of as
>>> the the ability to learn competence over a very general domain.
>>>
>>
>> That's why I think that intelligence is simple, almost a mental attitude,
>> more akin to courage and humility, than anything else.
>> Competence asks for gift or work, and can often lead to the feeling that
>> we are more intelligent than others, which is the first basic symptom of
>> stupidity.
>>
>>
> That sounds more and more "1984"ish... War is peace.
>
>
> ?
>
>
>
> Freedom is slavery.
>
>
> ?
>
>
>
> Ignorance is strength
>
>
> I never said that.
>

Never read George Orwell's 1984? I just said that what you wrote sounds like
that.

>
> I say that awareness of our ignorance is strength. It participates to our
> intelligence.
>
>
>
>
>
> and now more intelligent is stupid.
>
>
>
> That's a contradiction and is not what I said.
>

Well, I quote: "to the feeling that we are more intelligent than others,
which is the first basic symptom of stupidity." That means that if someone
feels he is more intelligent than others he is in fact stupid; if that's not
Newspeak, nothing is...

Quentin


> I said that competence, or expertise, can have, and often have, a negative
> feedback on intelligence. Someone quoted Feynman saying that "*Science* is
> the belief in the ignorance of *experts*." That's deeply Löbian, if I can
> say.
>
> I distinguish "intelligence" from competence. Competence can be evaluated,
> measured, relatively compared, trained, ... but intelligence is like free
> will and consciousness:  it can be hoped for oneself and others, but it is
> not measurable, and it corresponds to a state of mind. It is more like an
> attitude, close to modesty but also courage, as it is what makes it
> possible for persons to recognize their own mistake. I think that
> "intelligence" is a protagorean virtue: like consistency it obeys [] x ->
> ~x.
>
> Bruno
>
>
>
>
>
> Quentin
>
>
>> Bruno
>>
>>
>>
>> http://iridia.ulb.ac.be/~**marchal/ 
>>
>>
>>
>>
>>
>>
>>
>
>
> --
> All those moments will be lost in time, like tears in rain.
>
>
>
>
>
>  http://iridia.ulb.ac.be/~marchal/
>
>
>
>
>
>



-- 
All those moments will be lost in time, like tears in rain.





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Craig Weinberg


On Friday, March 29, 2013 10:47:09 AM UTC-4, Bruno Marchal wrote:
>
>
> On 29 Mar 2013, at 10:44, Quentin Anciaux wrote:
>
>
>
> 2013/3/29 Bruno Marchal >
>
>>
>> On 28 Mar 2013, at 18:59, meekerdb wrote:
>>
>>  On 3/28/2013 7:52 AM, Bruno Marchal wrote:
>>>
 Intelligence, in my opinion is rather easy too. It is a question of 
 "abstract thermodynamic", intelligence is when you get enough heat while 
 young, something like that. It is close to courage, and it is what make 
 competence possible.

>>>
>>> ??
>>>
>>>  
 Competence is the most difficult, as they are distributed on 
 transfinite  lattice of incomparable degrees. Some can ask for necessary 
 long work, and can have negative feedback on intelligence.

>>>
>>> That sounds like a quibble.  Intelligence is usually just thought of as 
>>> the the ability to learn competence over a very general domain.
>>>
>>
>> That's why I think that intelligence is simple, almost a mental attitude, 
>> more akin to courage and humility, than anything else.
>> Competence asks for gift or work, and can often lead to the feeling that 
>> we are more intelligent than others, which is the first basic symptom of 
>> stupidity.
>>
>>
> That sounds more and more "1984"ish... War is peace. 
>
>
> ?
>
>
>
> Freedom is slavery.
>
>
> ?
>
>
>
> Ignorance is strength 
>
>
> I never said that.
>
> I say that awareness of our ignorance is strength. It participates to our 
> intelligence.
>

That is true only if our intelligence is grounded in something which 
transcends its own ignorance...otherwise awareness of our own ignorance is 
just another layer of ignorance. This carries over to simulation - the 
ability to discern one thing as more real than another is meaningless 
unless our sense of realism is grounded in something beyond simulation. 
Patterns don't care about patterns, or to quote Deleuze - “Representation 
fails to capture the affirmed world of difference. Representation has only 
a single center, a unique and receding perspective, and in the consequence 
a false depth. It mediates everything, but mobilizes and moves nothing."

Craig
 

>
>
>
>
>
> and now more intelligent is stupid.
>
>
>
> That's a contradiction and is not what I said. I said that competence, or 
> expertise, can have, and often have, a negative feedback on intelligence. 
> Someone quoted Feynman saying that "*Science* is the belief in the 
> ignorance of *experts*." That's deeply Löbian, if I can say.
>
> I distinguish "intelligence" from competence. Competence can be evaluated, 
> measured, relatively compared, trained, ... but intelligence is like free 
> will and consciousness:  it can be hoped for oneself and others, but it is 
> not measurable, and it corresponds to a state of mind. It is more like an 
> attitude, close to modesty but also courage, as it is what makes it 
> possible for persons to recognize their own mistake. I think that 
> "intelligence" is a protagorean virtue: like consistency it obeys [] x -> 
> ~x.
>
> Bruno
>
>
>
>
>
> Quentin
>  
>
>> Bruno
>>
>>
>>
>> http://iridia.ulb.ac.be/~**marchal/ 
>>
>>
>>
>>
>
>
> -- 
> All those moments will be lost in time, like tears in rain. 
>
>  
>  
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Bruno Marchal


On 29 Mar 2013, at 10:44, Quentin Anciaux wrote:




2013/3/29 Bruno Marchal 

On 28 Mar 2013, at 18:59, meekerdb wrote:

On 3/28/2013 7:52 AM, Bruno Marchal wrote:
Intelligence, in my opinion is rather easy too. It is a question of  
"abstract thermodynamic", intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what make competence possible.


??


Competence is the most difficult, as competences are distributed on a 
transfinite lattice of incomparable degrees. Some can require necessarily 
long work, and can have negative feedback on intelligence.


That sounds like a quibble.  Intelligence is usually just thought of 
as the ability to learn competence over a very general domain.


That's why I think that intelligence is simple, almost a mental  
attitude, more akin to courage and humility, than anything else.
Competence asks for gift or work, and can often lead to the feeling  
that we are more intelligent than others, which is the first basic  
symptom of stupidity.



That sounds more and more "1984"ish... War is peace.


?




Freedom is slavery.


?




Ignorance is strength


I never said that.

I say that awareness of our ignorance is strength. It participates to  
our intelligence.







and now more intelligent is stupid.



That's a contradiction and is not what I said. I said that competence, 
or expertise, can have, and often has, a negative feedback on 
intelligence. Someone quoted Feynman saying that "Science is the 
belief in the ignorance of experts." That's deeply Löbian, if I may say so.


I distinguish "intelligence" from competence. Competence can be  
evaluated, measured, relatively compared, trained, ... but  
intelligence is like free will and consciousness:  it can be hoped for  
oneself and others, but it is not measurable, and it corresponds to a  
state of mind. It is more like an attitude, close to modesty but also  
courage, as it is what makes it possible for persons to recognize  
their own mistake. I think that "intelligence" is a protagorean  
virtue: like consistency it obeys [] x -> ~x.


Bruno






Quentin

Bruno



http://iridia.ulb.ac.be/~marchal/









--
All those moments will be lost in time, like tears in rain.





http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Craig Weinberg


On Friday, March 29, 2013 5:41:19 AM UTC-4, Bruno Marchal wrote:
>
>
> On 28 Mar 2013, at 18:59, meekerdb wrote: 
>
> > On 3/28/2013 7:52 AM, Bruno Marchal wrote: 
> >> Intelligence, in my opinion is rather easy too. It is a question of   
> >> "abstract thermodynamic", intelligence is when you get enough heat   
> >> while young, something like that. It is close to courage, and it is   
> >> what make competence possible. 
> > 
> > ?? 
> > 
> >> 
> >> Competence is the most difficult, as competences are distributed on a 
> >> transfinite lattice of incomparable degrees. Some can require necessarily 
> >> long work, and can have negative feedback on intelligence. 
> > 
> > That sounds like a quibble.  Intelligence is usually just thought of 
> > as the ability to learn competence over a very general domain. 
>

Intelligence is an ability to learn and become competent, but more 
importantly to understand and discern. Intelligence is the cognitive-level 
modality of sensitivity. 

"intelligence (n.)
late 14c., "faculty of understanding," from Old French intelligence 
(12c.), from Latin intelligentia, intellegentia "understanding, power of 
discerning; art, skill, taste," from intelligentem (nominative intelligens) 
"discerning," present participle of intelligere "to understand, 
comprehend," from inter- "between" (see inter-) + legere "choose, pick out, 
read" "
 

>
> That's why I think that intelligence is simple, almost a mental   
> attitude, more akin to courage and humility, than anything else. 
> Competence asks for gift or work, and can often lead to the feeling   
> that we are more intelligent than others, which is the first basic   
> symptom of stupidity. 
>

I don't know that feeling more intelligent than others means you are 
stupid, maybe just vain. If taken literally, how could anyone become more 
intelligent than anyone else if as soon as they are intelligent enough to 
realize it, that made them stupid?

Craig


> Bruno 
>
>
>
> http://iridia.ulb.ac.be/~marchal/ 
>
>
>
>





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Craig Weinberg


On Friday, March 29, 2013 6:21:59 AM UTC-4, Bruno Marchal wrote:
>
>
> On 28 Mar 2013, at 20:15, Craig Weinberg wrote:
>
>
>
> On Thursday, March 28, 2013 10:41:22 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 26 Mar 2013, at 17:53, Craig Weinberg wrote:
>>
>>
>>
>> On Tuesday, March 26, 2013 10:13:09 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 26 Mar 2013, at 13:35, Craig Weinberg wrote: 
>>>
>>> It is if you assume photons bouncing back and forth.
>>  
>>
>>> unlike a universal   
>>> number. The fixed point of the two mirrors needs infinities of   
>>> reflexions, but the machine self-reference needs only two   
>>> diagonalizations. As I said, you must study those things and convince   
>>> yourself. 
>>>
>>> It sounds like a dodge to me. Fundamental truths seem like they are 
>> always conceptually simple. I can teach someone the principle of binary 
>> math in two minutes without them having to learn to build a computer from 
>> scratch. You don't have to learn to use Maxwell's equations to be convinced 
>> that electromagnetism involves wave properties.
>>
>>
>>
>> ?
>>
>> I can explain diagonalization in two minutes. If this can help.
>>
>
> What would help more is to explain how diagonalization contributes to a 
> computation being an experienced awareness rather than an unconscious 
> outcome.
>
>
> Diagonalization shows that a machine can refer to itself in many sense, 
> which are equivalent in "god's eyes", but completely different in the 
> machine's eyes, and some of those self-reference verify accepted axioms for 
> knowledge, observable, etc. 
>

How do you know that it  intentionally refers to itself rather than 
unconsciously reflecting another view of itself? If my car's wheel is out 
of alignment, the tire tracks might show that the car is pulling to the 
right and is being constantly corrected. That entire pattern is merely a 
symptom of the overall machine - the tracks themselves are not referring or 
inferring any intelligence back to the car, and the car does not use its 
tracks to realign itself. It is we who do the inferring and referring.


>
>
>
>  
>
>>
>>
>>
>>
>>
>>>
>>>
>>> > or a cartoon of a lion talking about itself into some kind of   
>>> > subjective experience for the cartoon, or cartoon-ness, or lion- 
>>> > ness, or talking-ness. Self-reference has no significance unless we   
>>> > assume that the self already has awareness. 
>>>
>>> Hmm... I am open to that assumption, but usually I prefer to add the   
>>> universality assumption too. 
>>>
>>>
>>>
>>>
>>> > If I say 'these words refer to themselves', or rig up a camera to   
>>> > point at a screen displaying the output of Tupper's Self-Referential   
>>> > formula, I still have nothing but a camera, a screen and some   
>>> > meaningless graphics. This assumption pulls qualia out of thin air,   
>>> > ignores the pathetic fallacy completely, and conflates all   
>>> > territories with maps. 
>>>
>>> On the contrary, we get a rich and complex theory of qualia, even a   
>>> testable one, as we get the quanta too, and so can compare with   
>>> nature. Please, don't oversimplify something that you have not studied. 
>>>
>>
>> How can there be such a thing as a theory of qualia? Qualia is precisely 
>> that which theory cannot access in any way.
>>
>>
>> Yes, that is one of the main axioms for qualia. Not only do you have a theory, 
>> but you share it with me.
>>
>
> How do you know it is a main axiom for qualia? 
>
>
> It is not something I can know. It was just something we were agreeing on, 
> so your point made my point, and refutes the idea that you can use it 
> as a tool for invalidating comp.
>
>
I agree that it is an important axiom, but only to discern qualia from 
quanta. It doesn't explain qualia itself or justify its existence (or 
insistence) in particular.
 

>
>
>
> It's like saying that the important thing about the Moon is that we can't 
> swim there. The fact that I understand that the Moon is not in the ocean 
> doesn't mean I can take credit for figuring out the Moon. To me it shows 
> the confirmation bias of the approach. You are looking at reality from the 
> start as if it were a kind of theory, 
>
>
> I bet I can find a theory, indeed. But this does not mean that anything 
> about machine can be made into a theory.
>

Sure, I'm not denying that it is true that we can't swim to the Moon, or 
that this theory could not be part of a larger theory, but the theory still 
doesn't produce a theory justifying the Moon.
 

>
>
>
>
> so that this detail about qualia being non-theoretical has inflated 
> significance. 
>
>
> It is important indeed, but of course it is not use here as an argument 
> for comp, only as showing that you can't use the absence of a theory as an 
> argument against comp, because computer science explains that absence of 
> theory, and the presence of useful meta-theory.
>

The meta-theory may be useful, but does it call for qualia in particular, 
rather than just an X whic

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Bruno Marchal


On 28 Mar 2013, at 20:15, Craig Weinberg wrote:




On Thursday, March 28, 2013 10:41:22 AM UTC-4, Bruno Marchal wrote:

On 26 Mar 2013, at 17:53, Craig Weinberg wrote:




On Tuesday, March 26, 2013 10:13:09 AM UTC-4, Bruno Marchal wrote:

On 26 Mar 2013, at 13:35, Craig Weinberg wrote:

It is if you assume photons bouncing back and forth.

unlike a universal
number. The fixed point of the two mirrors needs infinities of
reflexions, but the machine self-reference needs only two
diagonalizations. As I said, you must study those things and convince
yourself.

It sounds like a dodge to me. Fundamental truths seem like they are  
always conceptually simple. I can teach someone the principle of  
binary math in two minutes without them having to learn to build a  
computer from scratch. You don't have to learn to use Maxwell's  
equations to be convinced that electromagnetism involves wave  
properties.
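
The two-minute point about binary is easy to make concrete; a minimal sketch in Python (an illustration added here, not from the thread):

# Binary place value: each digit is a power of two, so 1101 (base 2) is
value = 1*2**3 + 1*2**2 + 0*2**1 + 1*2**0
print(value)            # 13
print(int('1101', 2))   # 13 again, using Python's built-in base conversion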



?

I can explain diagonalization in two minutes. If this can help.

What would help more is to explain how diagonalization contributes  
to a computation being an experienced awareness rather than an  
unconscious outcome.


Diagonalization shows that a machine can refer to itself in many 
senses, which are equivalent in "god's eyes", but completely different 
in the machine's eyes, and some of those self-references verify 
accepted axioms for knowledge, observable, etc.
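
A concrete toy for the diagonalization alluded to: the usual way a program obtains a complete description of itself is the quine construction, in which a template is applied to its own text. A minimal Python sketch (an illustration added here, not from the thread):

# The two lines below are a quine: run on their own, they print exactly
# their own two lines of source.  The diagonal step is
# template.format(template): the template applied to a description of itself.
template = 'template = {!r}\nprint(template.format(template))'
print(template.format(template))

No infinite regress of reflections is needed; the self-description comes out of a single substitution.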















> or a cartoon of a lion talking about itself into some kind of
> subjective experience for the cartoon, or cartoon-ness, or lion-
> ness, or talking-ness. Self-reference has no significance unless we
> assume that the self already has awareness.

Hmm... I am open to that assumption, but usually I prefer to add the
universality assumption too.




> If I say 'these words refer to themselves', or rig up a camera to
> point at a screen displaying the output of Tupper's Self-Referential
> formula, I still have nothing but a camera, a screen and some
> meaningless graphics. This assumption pulls qualia out of thin air,
> ignores the pathetic fallacy completely, and conflates all
> territories with maps.
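
For reference, since the thread only names it: Tupper's self-referential formula is the inequality

    1/2 < floor( mod( floor(y/17) * 2^( -17*floor(x) - mod(floor(y), 17) ), 2 ) )

and the points satisfying it, plotted over 0 <= x < 106 and k <= y < k + 17 for one particular, several-hundred-digit constant k, form a 106 x 17 bitmap of the formula itself. The self-reference is relative to that carefully chosen k; the same inequality decodes any 106 x 17 bitmap for a suitable k.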

On the contrary, we get a rich and complex theory of qualia, even a
testable one, as we get the quanta too, and so can compare with
nature. Please, don't oversimplify something that you have not  
studied.


How can there be such a thing as a theory of qualia? Qualia is  
precisely that which theory cannot access in any way.


Yes, that is one of the main axioms for qualia. Not only do you have a  
theory, but you share it with me.


How do you know it is a main axiom for qualia?


It is not something I can know. It was just something we are agreeing  
on, so that your point made my point, and refuted the idea that you  
can use it as a tool for invalidating comp.





It's like saying that the important thing about the Moon is that we  
can't swim there. The fact that I understand that the Moon is not in  
the ocean doesn't mean I can take credit for figuring out the Moon.  
To me it shows the confirmation bias of the approach. You are  
looking at reality from the start as if it were a kind of theory,


I bet I can find a theory, indeed. But this does not mean that  
everything about the machine can be made into a theory.





so that this detail about qualia being non-theoretical has inflated  
significance.


It is important indeed, but of course it is not used here as an  
argument for comp, only as showing that you can't use the absence of a  
theory as an argument against comp, because computer science explains  
that absence of theory, and the presence of a useful meta-theory.






If you were a shoemaker, the important thing about diamonds might be  
that they aren't shoes.


Lol.
















>
>
>
>
>
>>
>>
>>
>>
>>> I might find it convenient to invent an entirely new spectrum of
>>> colors to keep track of my file folders, but that doesn't mean
>>> that this new spectrum can just be 'developed' out of thin air.
>>
>> You must not ask a machine something that you can't do yourself,  
to

>> compare it to yourself.
>>
>> But if you are saying that a machine can come up with a new format
>> by virtue of its self reference, then that is what I assume Comp
>> says is the origination of color.
>
> Qualia obeys laws.
>
> Qualia makes laws. Laws are nothing except the interaction of  
qualia

> on multiple nested scales.

That's much too vague.

Vague is ok if it is accurate too.


Too vague leads to empty accuracy. It is accurate because we don't  
understand.



Or it could be that we understand that the reality can only be  
accurately described in vague terms - the reality itself is vague,  
hence it has flexibility to create the derived experiences of  
precision.


It is exactly the justification used for letting people lack rigor in  
philosophy, theology, etc.
By making the non-understanding intrinsic, you can justify all the  
possible wishful thinking, and introduce all the arbitrariness you want.


Now, if reality is vague

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Bruno Marchal


On 28 Mar 2013, at 19:01, Richard Ruquist wrote:

On Thu, Mar 28, 2013 at 1:37 PM, Bruno Marchal   
wrote:


On 28 Mar 2013, at 16:08, Richard Ruquist wrote:

On Thu, Mar 28, 2013 at 10:52 AM, Bruno Marchal  
 wrote:



On 26 Mar 2013, at 18:19, meekerdb wrote:

On 3/26/2013 4:21 AM, Bruno Marchal wrote:

I can explain why if a machine can have experience and enough  
reflexivity, then the machine can already understand that she cannot  
justify rationally the presence of its experience. No machine, nor us,  
can ever see how that could be true. It *is* in the range of the non  
communicable.

If some aliens decide that we are not conscious, we will not find any  
test to prove them wrong.


And if we decide the Mars Rover is conscious, can any test prove us
wrong?


Yes. But it is longer to explain than for comp. Strong AI is refutable  
in a weaker sense than comp. The refutations here are indirect and  
based on the acceptance of the classical theory of knowledge, that is  
S4 (not necessarily Theaetetus).



Or if Craig decides an atom is conscious, can any test prove him  
wrong?



A person can be conscious. What would it mean that an atom is  
conscious?

What is an atom?




Davies suggests that the threshold for consciousness based on the
Lloyd limit is the complexity of the human cell.



In which physics?


Holographic (Bekenstein bound) physics of 10^120 bits (the Lloyd  
limit)
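
For orientation, a back-of-the-envelope sketch added here, with round numbers assumed (horizon radius R_H ~ 10^26 m, Planck length l_P ~ 1.6 x 10^-35 m): the holographic bound allots about one bit per 4 ln 2 Planck areas of the horizon, so

    N <~ A / (4 l_P^2 ln 2) = pi R_H^2 / (l_P^2 ln 2) ~ 10^122 bits,

the same ballpark as the 10^120 quoted here. Strictly, Lloyd's 10^120 is his estimate of the number of elementary operations the observable universe could have performed so far.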



If he assumes comp, he must derive that physics first, to
get valid consequences.


Davies does not assume comp. I thought I did in my paper.


BTW I don't see the use of comp in your paper.


I certainly discuss physics derived from comp in my paper
(http://vixra.org/abs/1303.0194) while leaving out all the math
details.
ie. CY manifolds->math->mind/physics-> matter

Could you expand when you have time how I do not use comp?


If you study sane04, you should easily be convinced by yourself. It is  
more like:


Numbers => machine's mind-psychology/theology => physics





What I do is to place resource limits on comp
(10^120 bits for the universe and perhaps 10^1000 for the Metaverse).
Is that perhaps what you refer to?


Which means that you assume a notion of physical resource at the  
start, but this can't work.

That is what I have been explaining on this list for a long time.






Or is it the conjecture that CY manifolds are the comp machine,


That might be correct. In that case the CY should be derived from the  
"intelligible matter" hypostases.





one for the universe and another for the metaverse?
Thanks for reading the paper.


Unfortunately I am not so knowledgeable in string theory. It is  
interesting, but assuming it might hide the distinction quanta/qualia.  
It is still physics, and that's the problem here, somehow.


Best,

Bruno





Richard



Now, I can accept that human cells have already some consciousness.  
Even bacteria. I dunno but I am open to the idea. Bacteria have already  
full Turing universality, and exploit it in complex genetic regulation  
control.

Comp is open with a strict Moore law: the number of angels (or bit  
processing) that you can put at the top of a needle might be unbounded.  
Like Feynman said, there is room at the bottom. But we might have  
insuperable read and write problems. There might be computers into  
which we can upload our minds, but never come back.

Bruno










Which I think is John Clark's point: Consciousness is easy.  
Intelligence is hard.



Consciousness might be more easy than intelligence, and certainly than  
matter. Consciousness is easy with UDA, when you get the difference  
between both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).
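
A rough key to this notation, paraphrased from sane04 and added here as a sketch (B is the machine's provability box, D its dual, t any tautology, so Dt reads "consistent"):

    p              truth
    Bp             provable (the logics G and G*)
    Bp & p         knowable (S4Grz, the Theaetetus variant)
    Bp & Dt        observable ("intelligible matter": Z, Z*)
    Bp & Dt & p    sensible, where the qualia are placed (X, X*)

The gap between a starred logic and its unstarred version is what is true about the machine but not justifiable by it.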

Matter is more difficult. Today we have only the propositional
observable.

Intelligence, in my opinion, is rather easy too. It is a question of  
"abstract thermodynamics": intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what makes competence possible.

Competence is the most difficult, as competences are distributed on a  
transfinite lattice of incomparable degrees. Some can ask for  
necessarily long work, and can have negative feedback on intelligence.

Bruno






Brent





http://iridia.ulb.ac.be/~marchal/




Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Quentin Anciaux
2013/3/29 Bruno Marchal 

>
> On 28 Mar 2013, at 18:59, meekerdb wrote:
>
>  On 3/28/2013 7:52 AM, Bruno Marchal wrote:
>>
>>> Intelligence, in my opinion, is rather easy too. It is a question of
>>> "abstract thermodynamics": intelligence is when you get enough heat while
>>> young, something like that. It is close to courage, and it is what makes
>>> competence possible.
>>>
>>
>> ??
>>
>>
>>> Competence is the most difficult, as competences are distributed on a
>>> transfinite lattice of incomparable degrees. Some can ask for necessarily
>>> long work, and can have negative feedback on intelligence.
>>>
>>
>> That sounds like a quibble.  Intelligence is usually just thought of as
>> the ability to learn competence over a very general domain.
>>
>
> That's why I think that intelligence is simple, almost a mental attitude,
> more akin to courage and humility, than anything else.
> Competence asks for gift or work, and can often lead to the feeling that
> we are more intelligent than others, which is the first basic symptom of
> stupidity.
>
>
That sounds more and more "1984"ish... War is peace. Freedom is slavery.
Ignorance is strength and now more intelligent is stupid.

Quentin


> Bruno
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>


-- 
All those moments will be lost in time, like tears in rain.





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Bruno Marchal


On 28 Mar 2013, at 18:59, meekerdb wrote:


On 3/28/2013 7:52 AM, Bruno Marchal wrote:
Intelligence, in my opinion, is rather easy too. It is a question of  
"abstract thermodynamics": intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what makes competence possible.


??



Competence is the most difficult, as competences are distributed on a  
transfinite lattice of incomparable degrees. Some can ask for  
necessarily long work, and can have negative feedback on intelligence.


That sounds like a quibble.  Intelligence is usually just thought of  
as the ability to learn competence over a very general domain.


That's why I think that intelligence is simple, almost a mental  
attitude, more akin to courage and humility, than anything else.
Competence asks for gift or work, and can often lead to the feeling  
that we are more intelligent than others, which is the first basic  
symptom of stupidity.


Bruno



http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-29 Thread Bruno Marchal


On 28 Mar 2013, at 18:58, meekerdb wrote:


On 3/28/2013 7:52 AM, Bruno Marchal wrote:


And if we decide the Mars Rover is conscious, can any test prove  
us wrong?


Yes. But it is longer to explain than for comp. Strong AI is  
refutable in a weaker sense than comp. The refutations here are  
indirect and based on the acceptance of the classical theory of  
knowledge, that is S4 (not necessarily Theaetetus).


Is there an explanation in one of your papers?


It is in the second part of sane04 (the machine's interview). I will  
explain this on FOAR, and I have sketched the explanation here from  
time to time. But I use the stronger comp, not strong AI. It is  
sketched in most of my english papers.


Bruno






Brent





http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Richard Ruquist
On Thu, Mar 28, 2013 at 1:37 PM, Bruno Marchal  wrote:
>
> On 28 Mar 2013, at 16:08, Richard Ruquist wrote:
>
>> On Thu, Mar 28, 2013 at 10:52 AM, Bruno Marchal  wrote:
>>>
>>>
>>> On 26 Mar 2013, at 18:19, meekerdb wrote:
>>>
>>> On 3/26/2013 4:21 AM, Bruno Marchal wrote:
>>>
>>> I can explain why if a machine can have experience and enough
>>> reflexivity,
>>> then the machine can already understand that she cannot justify
>>> rationally
>>> the presence of its experience. No machine, nor us, can ever see how that
>>> could be true. It *is* in the range of the non communicable.
>>>
>>> If some aliens decide that we are not conscious, we will not find any
>>> test
>>> to prove them wrong.
>>>
>>>
>>> And if we decide the Mars Rover is conscious, can any test prove us
>>> wrong?
>>>
>>>
>>> Yes. But it is longer to explain than for comp. Strong AI is refutable in
>>> a weaker sense than comp. The refutations here are indirect and based on
>>> the acceptance of the classical theory of knowledge, that is S4 (not
>>> necessarily Theaetetus).
>>>
>>>
>>>
>>> Or if Craig decides an atom is conscious, can any test prove him wrong?
>>>
>>>
>>> A person can be conscious. What would it mean that an atom is conscious?
>>> What is an atom?
>>>
>>>
>>
>> Davies suggests that the threshold for consciousness based on the
>> Lloyd limit is the complexity of the human cell.
>
>
> In which physics?

Holographic (Bekenstein bound) physics of 10^120 bits (the Lloyd limit)

>If he assumes comp, he must derive that physics first, to
> get valid consequences.

Davies does not assume comp. I thought I did in my paper.

> BTW I don't see the use of comp in your paper.

I certainly discuss physics derived from comp in my paper
(http://vixra.org/abs/1303.0194) while leaving out all the math
details.
ie. CY manifolds->math->mind/physics-> matter

Could you expand when you have time how I do not use comp?
What I do is to place resource limits on comp
(10^120 bits for the universe and perhaps 10^1000 for the Metaverse).
Is that perhaps what you refer to?

Or is it the conjecture that CY manifolds are the comp machine,
one for the universe and another for the metaverse?
Thanks for reading the paper.
Richard

>
> Now, I can accept that human cells have already some consciousness. Even
> bacteria. I dunno but I am open to the idea. Bacteria have already full
> Turing universality, and exploit it in complex genetic regulation control.
>
> Comp is open with a strict Moore law: the number of angels (or bit
> processing) that you can put at the top of a needle might be unbounded. Like
> Feynman said, there is room at the bottom. But we might have insuperable
> read and write problems. There might be computers into which we can upload
> our minds, but never come back.
>
> Bruno
>
>
>
>
>
>>
>>>
>>>
>>> Which I think is John Clark's point: Consciousness is easy.  Intelligence
>>> is
>>> hard.
>>>
>>>
>>>
>>> Consciousness might be more easy than intelligence, and certainly than
>>> matter. Consciousness is easy with UDA,  when you get the difference
>>> between
>>> both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).
>>>
>>> Matter is more difficult. Today we have only the propositional
>>> observable.
>>>
>>> Intelligence, in my opinion, is rather easy too. It is a question of
>>> "abstract thermodynamics": intelligence is when you get enough heat while
>>> young, something like that. It is close to courage, and it is what makes
>>> competence possible.
>>>
>>> Competence is the most difficult, as competences are distributed on a
>>> transfinite lattice of incomparable degrees. Some can ask for necessarily
>>> long work, and can have negative feedback on intelligence.
>>>
>>> Bruno
>>>
>>>
>>>
>>>
>>>
>>>
>>> Brent
>>>
>>>
>>>
>>>
>>>
>>> http://iridia.ulb.ac.be/~marchal/
>>>
>>>
>>>
>>>
>>>
>>

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread meekerdb

On 3/28/2013 7:52 AM, Bruno Marchal wrote:
Intelligence, in my opinion, is rather easy too. It is a question of "abstract 
thermodynamics": intelligence is when you get enough heat while young, something like 
that. It is close to courage, and it is what makes competence possible.


??



Competence is the most difficult, as competences are distributed on a transfinite lattice 
of incomparable degrees. Some can ask for necessarily long work, and can have negative 
feedback on intelligence.


That sounds like a quibble.  Intelligence is usually just thought of as the ability to 
learn competence over a very general domain.


Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread meekerdb

On 3/28/2013 7:52 AM, Bruno Marchal wrote:


And if we decide the Mars Rover is conscious, can any test prove us wrong?


Yes. But it is longer to explain than for comp. Strong AI is refutable in a weaker sense 
than comp. The refutations here are indirect and based on the acceptance of the classical 
theory of knowledge, that is S4 (not necessarily Theaetetus).


Is there an explanation in one of your papers?

Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Bruno Marchal


On 28 Mar 2013, at 16:08, Richard Ruquist wrote:

On Thu, Mar 28, 2013 at 10:52 AM, Bruno Marchal   
wrote:


On 26 Mar 2013, at 18:19, meekerdb wrote:

On 3/26/2013 4:21 AM, Bruno Marchal wrote:

I can explain why if a machine can have experience and enough  
reflexivity, then the machine can already understand that she cannot  
justify rationally the presence of its experience. No machine, nor us,  
can ever see how that could be true. It *is* in the range of the non  
communicable.

If some aliens decide that we are not conscious, we will not find any  
test to prove them wrong.


And if we decide the Mars Rover is conscious, can any test prove us  
wrong?



Yes. But it is longer to explain than for comp. Strong AI is refutable  
in a weaker sense than comp. The refutations here are indirect and  
based on the acceptance of the classical theory of knowledge, that is  
S4 (not necessarily Theaetetus).



Or if Craig decides an atom is conscious, can any test prove him  
wrong?



A person can be conscious. What would it mean that an atom is  
conscious?

What is an atom?




Davies suggests that the threshold for consciousness based on the
Lloyd limit is the complexity of the human cell.


In which physics? If he assumes comp, he must derive that physics  
first, to get valid consequences.

BTW I don't see the use of comp in your paper.

Now, I can accept that human cells have already some consciousness.  
Even bacteria. I dunno but I am open to the idea. Bacteria have  
already full Turing universality, and exploit it in complex genetic  
regulation control.


Comp is open with a strict Moore law: the number of angels (or bit  
processing) that you can put at the top of a needle might be  
unbounded. Like Feynman said, there is room at the bottom. But we  
might have insuperable read and write problems. There might be  
computers into which we can upload our minds, but never come back.


Bruno










Which I think is John Clark's point: Consciousness is easy.   
Intelligence is

hard.



Consciousness might be more easy than intelligence, and certainly  
than
matter. Consciousness is easy with UDA,  when you get the  
difference between

both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).

Matter is more difficult. Today we have only the propositional  
observable.


Intelligence, in my opinion, is rather easy too. It is a question of  
"abstract thermodynamics": intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what makes competence possible.

Competence is the most difficult, as competences are distributed on a  
transfinite lattice of incomparable degrees. Some can ask for  
necessarily long work, and can have negative feedback on intelligence.

Bruno






Brent





http://iridia.ulb.ac.be/~marchal/











http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Richard Ruquist
On Thu, Mar 28, 2013 at 10:52 AM, Bruno Marchal  wrote:
>
> On 26 Mar 2013, at 18:19, meekerdb wrote:
>
> On 3/26/2013 4:21 AM, Bruno Marchal wrote:
>
> I can explain why if a machine can have experience and enough reflexivity,
> then the machine can already understand that she cannot justify rationally
> the presence of its experience. No machine, nor us, can ever see how that
> could be true. It *is* in the range of the non communicable.
>
> If some aliens decide that we are not conscious, we will not find any test
> to prove them wrong.
>
>
> And if we decide the Mars Rover is conscious, can any test prove us wrong?
>
>
> Yes. But it is longer to explain than for comp. Strong AI is refutable in a
> weaker sense than comp. The refutations here are indirect and based on the
> acceptance of the classical theory of knowledge, that is S4 (not necessarily
> Theaetetus).
>
>
>
> Or if Craig decides an atom is conscious, can any test prove him wrong?
>
>
> A person can be conscious. What would it mean that an atom is conscious?
> What is an atom?
>
>

Davies suggests that the threshold for consciousness based on the
Lloyd limit is the complexity of the human cell.

>
>
> Which I think is John Clark's point: Consciousness is easy.  Intelligence is
> hard.
>
>
>
> Consciousness might be more easy than intelligence, and certainly than
> matter. Consciousness is easy with UDA,  when you get the difference between
> both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).
>
> Matter is more difficult. Today we have only the propositional observable.
>
> Intelligence, in my opinion, is rather easy too. It is a question of
> "abstract thermodynamics": intelligence is when you get enough heat while
> young, something like that. It is close to courage, and it is what makes
> competence possible.
>
> Competence is the most difficult, as competences are distributed on a
> transfinite lattice of incomparable degrees. Some can ask for necessarily
> long work, and can have negative feedback on intelligence.
>
> Bruno
>
>
>
>
>
>
> Brent
>
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>
>





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Bruno Marchal


On 26 Mar 2013, at 18:33, meekerdb wrote:


On 3/26/2013 7:13 AM, Bruno Marchal wrote:
It is a bit what happens, please study the theory. Qualia are  
useful to accelerate information processing, and the integration of  
that processing in a person. And they are unavoidable for machines  
in rich and statistically stable universal relations with each  
other.


Can you describe exactly how they are unavoidable?


You need the theory, but in a nutshell, they are unavoidable because  
they are truths that the machine will discover when looking inward.  
They correspond to true facts, which are not sigma_1, but which  
nonetheless concern the machine (like having a local model, or being  
in some situation, etc.).
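
A gloss on the term, added because the thread takes it for granted: a sigma_1 sentence has the shape "there exists n such that P(n)" with P a decidable property (for example, "n codes a proof of F"). A sound, sigma_1-complete machine proves exactly the true sigma_1 sentences, so the truths about itself that are not sigma_1 are, roughly, the ones it cannot reach by mechanical search, which is why they sit on the non-communicable side.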








Specifically I wonder what constraints this puts on them.


They obey the two modal logic systems S4Grz1 and X1* minus X1, and  
their higher-order extensions.





Looked at from the aspect of engineering intelligence I would assume  
it would depend on sensor capabilities, i.e. that machines would  
primarily communicate about what they can both see.  But that  
doesn't account for humans who communicate a lot about what they feel.


Qualia do not need sensors conceptually, with comp, but in practice,  
sensors are the simplest way to get them in accordance with the local  
universal neighbors. The theories mentioned above can explain well  
why qualia are non-communicable---in the sense of not being rationally  
justifiable by the machine---but also why we can still communicate  
about them, and project them on other machines or entities.


Bruno




http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Bruno Marchal


On 26 Mar 2013, at 18:19, meekerdb wrote:


On 3/26/2013 4:21 AM, Bruno Marchal wrote:
I can explain why if a machine can have experience and enough  
reflexivity, then the machine can already understand that she  
cannot justify rationally the presence of its experience. No  
machine, nor us, can ever see how that could be true. It *is* in  
the range of the non communicable.


If some aliens decide that we are not conscious, we will not find  
any test to prove them wrong.


And if we decide the Mars Rover is conscious, can any test prove us  
wrong?


Yes. But it is longer to explain than for comp. Strong AI is refutable  
in a weaker sense than comp. The refutations here are indirect and  
based on the acceptance of the classical theory of knowledge, that is  
S4 (not necessarily Theaetetus).




Or if Craig decides an atom is conscious, can any test prove him  
wrong?


A person can be conscious. What would it mean that an atom is  
conscious? What is an atom?





Which I think is John Clark's point: Consciousness is easy.   
Intelligence is hard.



Consciousness might be more easy than intelligence, and certainly than  
matter. Consciousness is easy with UDA,  when you get the difference  
between both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).


Matter is more difficult. Today we have only the propositional  
observable.


Intelligence, in my opinion, is rather easy too. It is a question of  
"abstract thermodynamics": intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what makes competence possible.


Competence is the most difficult, as competences are distributed on a  
transfinite lattice of incomparable degrees. Some can ask for  
necessarily long work, and can have negative feedback on intelligence.


Bruno







Brent





http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-26 Thread Craig Weinberg


On Tuesday, March 26, 2013 1:19:14 PM UTC-4, Brent wrote:
>
>  On 3/26/2013 4:21 AM, Bruno Marchal wrote:
>  
> I can explain why if a machine can have experience and enough reflexivity, 
> then the machine can already understand that she cannot justify rationally 
> the presence of its experience. No machine, nor us, can ever see how that 
> could be true. It *is* in the range of the non communicable.
>
>  If some aliens decide that we are not conscious, we will not find any 
> test to prove them wrong.
>
>
> And if we decide the Mars Rover is conscious, can any test prove us 
> wrong?  Or if Craig decides an atom is conscious, can any test prove him 
> wrong?  Which I think is John Clark's point: Consciousness is easy.  
> Intelligence is hard.
>

Consciousness is easy to the point of being inescapable if you take it for 
granted and hard to the point of being impossible if you don't - which is 
why it is the ground of being. 

Craig


> Brent
>  





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-26 Thread Craig Weinberg


On Tuesday, March 26, 2013 1:33:11 PM UTC-4, Brent wrote:
>
>  On 3/26/2013 7:13 AM, Bruno Marchal wrote:
>  
> It is a bit what happens, please study the theory. Qualia are useful to 
> accelerate information processing, and the integration of that processing 
> in a person. And they are unavoidable for machines in rich and 
> statistically stable universal relations with each other. 
>
>
> Can you describe exactly how they are unavoidable?  Specifically I wonder 
> what constraints this puts on them.  Looked at from the aspect of 
> engineering intelligence I would assume it would depend on sensor 
> capabilities, i.e. that machines would primarily communicate about what 
> they can both see.  But that doesn't account for humans who communicate a 
> lot about what they feel.
>

What if I develop a sensor which has photological detection capacities 
which are so acute that it can detect chemical signatures. Would it see 
odors or would it smell patterns of light? 

Craig


> Brent
>  





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-26 Thread meekerdb

On 3/26/2013 7:13 AM, Bruno Marchal wrote:
It is a bit what happens, please study the theory. Qualia are useful to accelerate 
information processing, and the integration of that processing in a person. And they are 
unavoidable for machines in rich and statistically stable universal relations with each 
other. 


Can you describe exactly how they are unavoidable?  Specifically I wonder what constraints 
this puts on them.  Looked at from the aspect of engineering intelligence I would assume 
it would depend on sensor capabilities, i.e. that machines would primarily communicate 
about what they can both see.  But that doesn't account for humans who communicate a lot 
about what they feel.


Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-26 Thread meekerdb

On 3/26/2013 4:21 AM, Bruno Marchal wrote:
I can explain why if a machine can have experience and enough reflexivity, then the 
machine can already understand that she cannot justify rationally the presence of its 
experience. No machine, nor us, can ever see how that could be true. It *is* in the 
range of the non communicable.


If some aliens decide that we are not conscious, we will not find any test to prove them 
wrong.


And if we decide the Mars Rover is conscious, can any test prove us wrong?  Or if Craig 
decides an atom is conscious, can any test prove him wrong?  Which I think is John Clark's 
point: Consciousness is easy. Intelligence is hard.


Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-26 Thread Bruno Marchal


On 26 Mar 2013, at 13:35, Craig Weinberg wrote:




On Tuesday, March 26, 2013 7:21:39 AM UTC-4, Bruno Marchal wrote:

On 25 Mar 2013, at 19:35, Craig Weinberg wrote:




On Monday, March 25, 2013 1:25:30 PM UTC-4, Bruno Marchal wrote:

On 25 Mar 2013, at 14:02, Craig Weinberg wrote:




On Monday, March 25, 2013 6:26:00 AM UTC-4, Bruno Marchal wrote:

On 24 Mar 2013, at 20:25, Craig Weinberg wrote:




On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:



But that is what you get at the Turing universal threshold. If  
you look at the computer's functioning, you will see local  
computable rules obeyed by the gates, but that doesn't mean there  
aren't non-computable agendas being pursued by genuine person  
supported by those computations.


Absolutely, but does it mean that it has to be a genuine person?  
To me it makes sense that the natural development of persons may  
be restricted to experiences which are represented publicly in  
zoological terms. The zoological format is not the cause of the  
experience but it is the minimum vessel with the proper scale of  
sensitivity for that quality of experience to be supported.  
Trying to generate the same thing from the bottom up may not be  
feasible, because the zoological format arises organically,  
whereas an AI system skips zoology, biology, and chemistry  
entirely and assumes a universally low format.


It does not. Self-reference leads the machine to develop multi- 
variated levels of "formatting".


Why would it, and how could it?


You must study a bit of computer science.

But just in very general terms, what would be the principle which  
would tie together the function of self reference with any kind of  
presented experience?


The (arithmetical) reality of the experience itself. It is a fixed  
point of the map/brain when embedded in the arithmetical reality  
(which is beyond words).



That doesn't make sense to me. That would make two mirrors facing  
each other into a being,


Not really. A mirror is not a dynamical structure, unlike a universal  
number. The fixed point of the two mirrors needs infinities of  
reflexions, but the machine self-reference needs only two  
diagonalizations. As I said, you must study those things and convince  
yourself.





or a cartoon of a lion talking about itself into some kind of  
subjective experience for the cartoon, or cartoon-ness, or lion- 
ness, or talking-ness. Self-reference has no significance unless we  
assume that the self already has awareness.


Hmm... I am open to that assumption, but usually I prefer to add the  
universality assumption too.





If I say 'these words refer to themselves', or rig up a camera to  
point at a screen displaying the output of Tupper's Self-Referential  
formula, I still have nothing but a camera, a screen and some  
meaningless graphics. This assumption pulls qualia out of thin air,  
ignores the pathetic fallacy completely, and conflates all  
territories with maps.


On the contrary, we get a rich and complex theory of qualia, even a  
testable one, as we get the quanta too, and so can compare with  
nature. Please, don't oversimplify something that you have not studied.
















I might find it convenient to invent an entirely new spectrum of  
colors to keep track of my file folders, but that doesn't mean  
that this new spectrum can just be 'developed' out of thin air.


You must not ask a machine something that you can't do yourself, to  
compare it to yourself.


But if you are saying that a machine can come up with a new format  
by virtue of its self reference, then that is what I assume Comp  
says is the origination of color.


Qualia obeys laws.

Qualia makes laws. Laws are nothing except the interaction of qualia  
on multiple nested scales.


That's much too vague. I can agree and relate to comp. Qualia makes  
the quanta, notably, but I was just explaining that we get a theory of  
qualia.





If some qualia exist, some machine can realize them, but this does  
not mean we can create some new spectrum, or that this would be an  
easy task for a machine to complete when ordered. Most of our qualia  
needed long computations, and trial and error, etc.


You can't make blue by trial and error because there is nothing to  
try. It's a circular argument - for trial and error blue would  
already have to be one of the possible qualia in the universe, in  
which case trial would be redundant. By trial and error you could  
perhaps stumble upon time travel, invisibility, teleportation, and a  
thousand other super powers, but there is no way to stumble upon  
even a single qualia in a universe which lacks them. It isn't in the  
mix of possibilities. There is no solution to any function which  
could possibly be x = {the experience of seeing blue}.


Hmm...






















Consciousness does not seem to be compatible with low level  
unconscious origins to me. Looking at language, the rules of  
spelling and gram

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-25 Thread Craig Weinberg


On Monday, March 25, 2013 1:25:30 PM UTC-4, Bruno Marchal wrote:
>
>
> On 25 Mar 2013, at 14:02, Craig Weinberg wrote:
>
>
>
> On Monday, March 25, 2013 6:26:00 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 24 Mar 2013, at 20:25, Craig Weinberg wrote:
>>
>>
>>
>> On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 24 Mar 2013, at 12:53, Craig Weinberg wrote:
>>>
>>>
>>>
>>> On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:


 On 21 Mar 2013, at 18:44, Craig Weinberg wrote:



 On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:
>
>
> On 20 Mar 2013, at 19:16, Craig Weinberg wrote:
>
> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>
> "We are examining the activity in the cerebral cortex *as a whole*. The 
> brain is a non-stop, always-active system. When we perceive something, 
> the 
> information does not end up in a specific *part* of our brain. 
> Rather, it is added to the brain's existing activity. If we measure the 
> electrochemical activity of the whole cortex, we find wave-like patterns. 
> This shows that brain activity is not local but rather that activity 
> constantly moves from one part of the brain to another." 
>
>
>
> Please, don't confuse the very particular neuro-philosophy with the 
> much weaker assumption of computationalism. 
> Wave-like pattern are typically computable functions. 
> (I mentioned this when saying that I would say yes to a doctor only if 
> he copies my glial cells at the right chemical level).
>
> There are just no evidence for non computable activities acting in a 
> relevant way in the biological organism, or actually even in the physical 
> universe.
> You could point on the the wave packet reduction, but it does not make 
> much sense by itself.
>

 Right, I'm not arguing this as evidence of non-comp. Even if there was 
 non-comp activity in the brain, nothing that we could use to detect it 
 would be able to find anything since we would only know how to use an 
 exrternal detection instrument computationally. Mainly I posted this to 
 show the direction that the scientific evidence is leading us does not 
 support any kind of narrow folk-neuroscience of point to point 
 chain-reactions.



 Good.




>
> Not looking very charitable to the bottom-up, neuron machine view.
>
>
> Ideas don't need charity  but in this case it is totally charitable, 
> even with neurophilosophy, given that in your example, those waves still 
> seem neuron driven.
>

 How do you know that it seem neuron driven rather than whole brain 
 driven?


 In neurophilosophy, they are used to global complex and distributed 
 brain activity, but still implemented in term of local computable rules 
 obeyed by neurons.

>>>
>>> If you look at a city traffic pattern, you will see local computable 
>>> rules obeyed by cars, but that doesn't mean there aren't non-computable 
>>> agendas being pursued by the drivers.
>>>
>>>
>>> Indeed.
>>>
>>> But that is what you get at the Turing universal threshold. If you look 
>>> at the computer's functioning, you will see local computable rules obeyed 
>>> by the gates, but that doesn't mean there aren't non-computable agendas 
>>> being pursued by genuine person supported by those computations.
>>>
>>
>> Absolutely, but does it mean that it has to be a genuine person? To me it 
>> makes sense that the natural development of persons may be restricted to 
>> experiences which are represented publicly in zoological terms. The 
>> zoological format is not the cause of the experience but it is the minimum 
>> vessel with the proper scale of sensitivity for that quality of experience 
>> to be supported. Trying to generate the same thing from the bottom up may 
>> not be feasible, because the zoological format arises organically, whereas 
>> an AI system skips zoology, biology, and chemistry entirely and assumes a 
>> universally low format. 
>>
>>
>> It does not. Self-reference leads the machine to develop multi-variated 
>> levels of "formatting".
>>
>
> Why would it, and how could it?
>
>
> You must study a bit of computer science. 
>

But just in very general terms, what would be the principle which would tie 
together the function of self reference with any kind of presented 
experience? 


>
>
> I might find it convenient to invent an entirely new spectrum of colors to 
> keep track of my file folders, but that doesn't mean that this new spectrum 
> can just be 'developed' out of thin air.
>
>
> You must not ask a machine something that you can't do yourself, to 
> compare it to yourself.
>

But if you are saying that a machine can come up with a new format by 
virtue of its self reference, then that is what I assume Co

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-25 Thread Bruno Marchal


On 25 Mar 2013, at 14:02, Craig Weinberg wrote:




On Monday, March 25, 2013 6:26:00 AM UTC-4, Bruno Marchal wrote:

On 24 Mar 2013, at 20:25, Craig Weinberg wrote:




On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:

On 24 Mar 2013, at 12:53, Craig Weinberg wrote:




On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:

On 21 Mar 2013, at 18:44, Craig Weinberg wrote:




On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:

On 20 Mar 2013, at 19:16, Craig Weinberg wrote:


http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex as a  
whole. The brain is a non-stop, always-active system. When we  
perceive something, the information does not end up in a  
specific part of our brain. Rather, it is added to the brain's  
existing activity. If we measure the electrochemical activity of  
the whole cortex, we find wave-like patterns. This shows that  
brain activity is not local but rather that activity constantly  
moves from one part of the brain to another."





Please, don't confuse the very particular neuro-philosophy with  
the much weaker assumption of computationalism.

Wave-like pattern are typically computable functions.
(I mentioned this when saying that I would say yes to a doctor  
only if he copies my glial cells at the right chemical level).


There are just no evidence for non computable activities acting  
in a relevant way in the biological organism, or actually even in  
the physical universe.
You could point on the the wave packet reduction, but it does not  
make much sense by itself.


Right, I'm not arguing this as evidence of non-comp. Even if  
there was non-comp activity in the brain, nothing that we could  
use to detect it would be able to find anything since we would  
only know how to use an exrternal detection instrument  
computationally. Mainly I posted this to show the direction that  
the scientific evidence is leading us does not support any kind  
of narrow folk-neuroscience of point to point chain-reactions.



Good.







Not looking very charitable to the bottom-up, neuron machine view.


Ideas don't need charity  but in this case it is totally  
charitable, even with neurophilosophy, given that in your  
example, those waves still seem neuron driven.


How do you know that it seem neuron driven rather than whole  
brain driven?


In neurophilosophy, they are used to global complex and  
distributed brain activity, but still implemented in term of local  
computable rules obeyed by neurons.


If you look at a city traffic pattern, you will see local  
computable rules obeyed by cars, but that doesn't mean there  
aren't non-computable agendas being pursued by the drivers.


Indeed.

But that is what you get at the Turing universal threshold. If you  
look at the computer's functioning, you will see local computable  
rules obeyed by the gates, but that doesn't mean there aren't non- 
computable agendas being pursued by genuine person supported by  
those computations.


Absolutely, but does it mean that it has to be a genuine person? To  
me it makes sense that the natural development of persons may be  
restricted to experiences which are represented publicly in  
zoological terms. The zoological format is not the cause of the  
experience but it is the minimum vessel with the proper scale of  
sensitivity for that quality of experience to be supported. Trying  
to generate the same thing from the bottom up may not be feasible,  
because the zoological format arises organically, whereas an AI  
system skips zoology, biology, and chemistry entirely and assumes a  
universally low format.


It does not. Self-reference leads the machine to develop multi- 
variated levels of "formatting".


Why would it, and how could it?


You must study a bit of computer science.



I might find it convenient to invent an entirely new spectrum of  
colors to keep track of my file folders, but that doesn't mean that  
this new spectrum can just be 'developed' out of thin air.


You must not ask a machine something that you can't do yourself, to  
compare it to yourself.












Consciousness does not seem to be compatible with low level  
unconscious origins to me. Looking at language, the rules of  
spelling and grammar do not drive the creation of new words. A word  
cannot be forced into common usage just because it is introduced  
into a culture. There is no rule in language which has a function  
of creating new words, nor could any rule like that possibly work.


You ignore completely the notion of creative set or universal  
machine. You talk like if we could have a complete theory about  
them, but we can't, provably so if we are Turing emulable.
You just communicate your feeling where the machine already can  
explain why their feeling can be misleading on this subject.


Any particular feeling can be misleading only relative to some other  
felt expectation and felt rea

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-25 Thread Craig Weinberg


On Monday, March 25, 2013 6:26:00 AM UTC-4, Bruno Marchal wrote:
>
>
> On 24 Mar 2013, at 20:25, Craig Weinberg wrote:
>
>
>
> On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 24 Mar 2013, at 12:53, Craig Weinberg wrote:
>>
>>
>>
>> On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 21 Mar 2013, at 18:44, Craig Weinberg wrote:
>>>
>>>
>>>
>>> On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:


 On 20 Mar 2013, at 19:16, Craig Weinberg wrote:

 http://www.sciencedaily.com/releases/2013/03/130320115111.htm

 "We are examining the activity in the cerebral cortex *as a whole*. The 
 brain is a non-stop, always-active system. When we perceive something, the 
 information does not end up in a specific *part* of our brain. Rather, 
 it is added to the brain's existing activity. If we measure the 
 electrochemical activity of the whole cortex, we find wave-like patterns. 
 This shows that brain activity is not local but rather that activity 
 constantly moves from one part of the brain to another." 



 Please, don't confuse the very particular neuro-philosophy with the 
 much weaker assumption of computationalism. 
 Wave-like pattern are typically computable functions. 
 (I mentioned this when saying that I would say yes to a doctor only if 
 he copies my glial cells at the right chemical level).

 There are just no evidence for non computable activities acting in a 
 relevant way in the biological organism, or actually even in the physical 
 universe.
 You could point on the the wave packet reduction, but it does not make 
 much sense by itself.

>>>
>>> Right, I'm not arguing this as evidence of non-comp. Even if there was 
>>> non-comp activity in the brain, nothing that we could use to detect it 
>>> would be able to find anything since we would only know how to use an 
>>> exrternal detection instrument computationally. Mainly I posted this to 
>>> show the direction that the scientific evidence is leading us does not 
>>> support any kind of narrow folk-neuroscience of point to point 
>>> chain-reactions.
>>>
>>>
>>>
>>> Good.
>>>
>>>
>>>
>>>

 Not looking very charitable to the bottom-up, neuron machine view.


 Ideas don't need charity  but in this case it is totally charitable, 
 even with neurophilosophy, given that in your example, those waves still 
 seem neuron driven.

>>>
>>> How do you know that it seem neuron driven rather than whole brain 
>>> driven?
>>>
>>>
>>> In neurophilosophy, they are used to global complex and distributed 
>>> brain activity, but still implemented in term of local computable rules 
>>> obeyed by neurons.
>>>
>>
>> If you look at a city traffic pattern, you will see local computable 
>> rules obeyed by cars, but that doesn't mean there aren't non-computable 
>> agendas being pursued by the drivers.
>>
>>
>> Indeed.
>>
>> But that is what you get at the Turing universal threshold. If you look 
>> at the computer's functioning, you will see local computable rules obeyed 
>> by the gates, but that doesn't mean there aren't non-computable agendas 
>> being pursued by genuine person supported by those computations.
>>
>
> Absolutely, but does it mean that it has to be a genuine person? To me it 
> makes sense that the natural development of persons may be restricted to 
> experiences which are represented publicly in zoological terms. The 
> zoological format is not the cause of the experience but it is the minimum 
> vessel with the proper scale of sensitivity for that quality of experience 
> to be supported. Trying to generate the same thing from the bottom up may 
> not be feasible, because the zoological format arises organically, whereas 
> an AI system skips zoology, biology, and chemistry entirely and assumes a 
> universally low format. 
>
>
> It is does not. Self-reference leads machine to develop multi-variated 
> leves of "formatting".
>

Why would it, and how could it? I might find it convenient to invent an 
entirely new spectrum of colors to keep track of my file folders, but that 
doesn't mean that this new spectrum can just be 'developed' out of thin air.
 

>
>
>
>
> Consciousness does not seem to be compatible with low level unconscious 
> origins to me. Looking at language, the rules of spelling and grammar do 
> not drive the creation of new words. A word cannot be forced into common 
> usage just because it is introduced into a culture. There is no rule in 
> language which has a function of creating new words, nor could any rule 
> like that possibly work. 
>
>
> You ignore completely the notion of creative set or universal machine. You 
> talk like if we could have a complete theory about them, but we can't, 
> provably so if we are Turing emulable. 
> You just communicate your feeling where the machine already can explain 
> why their f

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-25 Thread Bruno Marchal


On 24 Mar 2013, at 20:25, Craig Weinberg wrote:




On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:

On 24 Mar 2013, at 12:53, Craig Weinberg wrote:




On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:

On 21 Mar 2013, at 18:44, Craig Weinberg wrote:




On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:

On 20 Mar 2013, at 19:16, Craig Weinberg wrote:


http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex as a whole.  
The brain is a non-stop, always-active system. When we perceive  
something, the information does not end up in a specific part of  
our brain. Rather, it is added to the brain's existing activity.  
If we measure the electrochemical activity of the whole cortex,  
we find wave-like patterns. This shows that brain activity is not  
local but rather that activity constantly moves from one part of  
the brain to another."





Please, don't confuse the very particular neuro-philosophy with  
the much weaker assumption of computationalism.

Wave-like patterns are typically computable functions.
(I mentioned this when saying that I would say yes to a doctor  
only if he copies my glial cells at the right chemical level).


There is just no evidence for non-computable activities acting in a relevant way in the biological organism, or actually even in the physical universe.
You could point to the wave packet reduction, but it does not make much sense by itself.


Right, I'm not arguing this as evidence of non-comp. Even if there were non-comp activity in the brain, nothing that we could use to detect it would be able to find anything, since we would only know how to use an external detection instrument computationally. Mainly I posted this to show that the direction the scientific evidence is leading us in does not support any kind of narrow folk-neuroscience of point-to-point chain reactions.



Good.







Not looking very charitable to the bottom-up, neuron machine view.


Ideas don't need charity, but in this case it is totally charitable, even with neurophilosophy, given that in your example those waves still seem neuron driven.


How do you know that it seems neuron driven rather than whole-brain driven?


In neurophilosophy, they are used to global, complex and distributed brain activity, but still implemented in terms of local computable rules obeyed by neurons.


If you look at a city traffic pattern, you will see local  
computable rules obeyed by cars, but that doesn't mean there aren't  
non-computable agendas being pursued by the drivers.


Indeed.

But that is what you get at the Turing universal threshold. If you look at the computer's functioning, you will see local computable rules obeyed by the gates, but that doesn't mean there aren't non-computable agendas being pursued by a genuine person supported by those computations.


Absolutely, but does it mean that it has to be a genuine person? To  
me it makes sense that the natural development of persons may be  
restricted to experiences which are represented publicly in  
zoological terms. The zoological format is not the cause of the  
experience but it is the minimum vessel with the proper scale of  
sensitivity for that quality of experience to be supported. Trying  
to generate the same thing from the bottom up may not be feasible,  
because the zoological format arises organically, whereas an AI  
system skips zoology, biology, and chemistry entirely and assumes a  
universally low format.


It does not. Self-reference leads machines to develop multiple, varied levels of "formatting".






Consciousness does not seem to be compatible with low level  
unconscious origins to me. Looking at language, the rules of  
spelling and grammar do not drive the creation of new words. A word  
cannot be forced into common usage just because it is introduced  
into a culture. There is no rule in language which has a function of  
creating new words, nor could any rule like that possibly work.


You completely ignore the notion of a creative set or a universal machine. You talk as if we could have a complete theory about them, but we can't, provably so if we are Turing emulable.
You just communicate your feeling, whereas the machine can already explain why its feelings can be misleading on this subject.





If you could control the behavior of language use from the bottom up  
however, you could simulate that such a rule would work, just by  
programming people to utter it with increasing frequency. This would  
satisfy any third person test for the effectiveness of the rule, but  
of course would be completely meaningless.


Don't confuse machine and language.















What would it look like if the brain as a whole were driving the  
neurons?


Either it would be like saying that a high level program can have a  
feedback on some of its low level implementations, which is not a  
p

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Craig Weinberg


On Sunday, March 24, 2013 10:05:45 PM UTC-4, Stephen Paul King wrote:
>
>  
> On 3/24/2013 8:00 PM, Craig Weinberg wrote:
>  
>
>
> On Sunday, March 24, 2013 6:15:53 PM UTC-4, Stephen Paul King wrote: 
>>
>>  
>> On 3/24/2013 3:25 PM, Craig Weinberg wrote:
>>  
>>
>>
>> On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote: 
>>>
>>>
>>>  On 24 Mar 2013, at 12:53, Craig Weinberg wrote:
>>>
>>>
>>>
>>> On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote: 


  On 21 Mar 2013, at 18:44, Craig Weinberg wrote:



 On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote: 
>
>
>  On 20 Mar 2013, at 19:16, Craig Weinberg wrote:
>
> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>
> "We are examining the activity in the cerebral cortex *as a whole*. The 
> brain is a non-stop, always-active system. When we perceive something, 
> the 
> information does not end up in a specific *part* of our brain. 
> Rather, it is added to the brain's existing activity. If we measure the 
> electrochemical activity of the whole cortex, we find wave-like patterns. 
> This shows that brain activity is not local but rather that activity 
> constantly moves from one part of the brain to another." 
>
>  
>  
>  Please, don't confuse the very particular neuro-philosophy with the 
> much weaker assumption of computationalism. 
> Wave-like pattern are typically computable functions. 
> (I mentioned this when saying that I would say yes to a doctor only if 
> he copies my glial cells at the right chemical level).
>
>  There are just no evidence for non computable activities acting in a 
> relevant way in the biological organism, or actually even in the physical 
> universe.
> You could point on the the wave packet reduction, but it does not make 
> much sense by itself.
>  

 Right, I'm not arguing this as evidence of non-comp. Even if there was 
 non-comp activity in the brain, nothing that we could use to detect it 
 would be able to find anything since we would only know how to use an 
 exrternal detection instrument computationally. Mainly I posted this to 
 show the direction that the scientific evidence is leading us does not 
 support any kind of narrow folk-neuroscience of point to point 
 chain-reactions.
  

  
  Good.

  
  
   
>  
> Not looking very charitable to the bottom-up, neuron machine view.
>
>
>  Ideas don't need charity  but in this case it is totally charitable, 
> even with neurophilosophy, given that in your example, those waves still 
> seem neuron driven.
>  

 How do you know that it seem neuron driven rather than whole brain 
 driven?


  In neurophilosophy, they are used to global complex and distributed 
 brain activity, but still implemented in term of local computable rules 
 obeyed by neurons.
  
>>>
>>> If you look at a city traffic pattern, you will see local computable 
>>> rules obeyed by cars, but that doesn't mean there aren't non-computable 
>>> agendas being pursued by the drivers.
>>>  
>>>
>>>  Indeed.
>>>
>>>  But that is what you get at the Turing universal threshold. If you 
>>> look at the computer's functioning, you will see local computable rules 
>>> obeyed by the gates, but that doesn't mean there aren't non-computable 
>>> agendas being pursued by genuine person supported by those computations.
>>>  
>>
>> Absolutely, but does it mean that it has to be a genuine person?
>>
>>
>> Hi Craig,
>>
>> We must first admit that there does not exist a 3p representation of 
>> what it is like to be a genuine person! Therefore this qustion is off the 
>> mark.
>>  
>
> Hi Stephen,
>
> I agree no 3p representation can tell serve as evidence of personhood 
> (although I do not think that means that we can't have a sense which goes 
> beyond the 3p intuitively or instinctively,  but what I'm talking about is 
> more of the zombie question. Just because a simulation fools a high number 
> of observers doesn't mean that it isn't a simulation, i.e. the best Elvis 
> impersonator is not any closer to becoming Elvis Aaron Presley than they 
> are to becoming Groucho Marx.
>  
>
> Hi,
>
> Right, but here is what I am proposing: In the limit of computational 
> resources, the best possible simulation of an object *is* the object 
> itself. Any simulation of it would be, by definition 'a simulation' and 
> your point would be made.
>

I agree with that, but the whole notion of simulation, I think, gets turned upside down when we are looking at simulating a 1p subject instead of a 3p object. In this case, I think that the only possible simulation of the subject is the subject themselves, i.e. there is no possible simulation of any subject because the quality of being a subject is ide

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Stephen P. King

On 3/24/2013 8:00 PM, Craig Weinberg wrote:
>
>
> On Sunday, March 24, 2013 6:15:53 PM UTC-4, Stephen Paul King wrote:
>
>
> On 3/24/2013 3:25 PM, Craig Weinberg wrote:
>>
>>
>> On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 24 Mar 2013, at 12:53, Craig Weinberg wrote:
>>
>>>
>>>
>>> On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal
>>> wrote:
>>>
>>>
>>> On 21 Mar 2013, at 18:44, Craig Weinberg wrote:
>>>


 On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno
 Marchal wrote:


 On 20 Mar 2013, at 19:16, Craig Weinberg wrote:

> 
> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
> 
> 
>
> "We are examining the activity in the cerebral
> cortex /as a whole/. The brain is a non-stop,
> always-active system. When we perceive something,
> the information does not end up in a specific
> /part/ of our brain. Rather, it is added to the
> brain's existing activity. If we measure the
> electrochemical activity of the whole cortex, we
> find wave-like patterns. This shows that brain
> activity is not local but rather that activity
> constantly moves from one part of the brain to
> another."
>


 Please, don't confuse the very particular
 neuro-philosophy with the much weaker assumption of
 computationalism. 
 Wave-like pattern are typically computable functions. 
 (I mentioned this when saying that I would say yes
 to a doctor only if he copies my glial cells at the
 right chemical level).

 There are just no evidence for non computable
 activities acting in a relevant way in the
 biological organism, or actually even in the
 physical universe.
 You could point on the the wave packet reduction,
 but it does not make much sense by itself.


 Right, I'm not arguing this as evidence of non-comp.
 Even if there was non-comp activity in the brain,
 nothing that we could use to detect it would be able to
 find anything since we would only know how to use an
 exrternal detection instrument computationally. Mainly
 I posted this to show the direction that the scientific
 evidence is leading us does not support any kind of
 narrow folk-neuroscience of point to point chain-reactions.
>>>
>>>
>>> Good.
>>>
>>>


>
> Not looking very charitable to the bottom-up,
> neuron machine view.

 Ideas don't need charity  but in this case it is
 totally charitable, even with neurophilosophy,
 given that in your example, those waves still seem
 neuron driven.


 How do you know that it seem neuron driven rather than
 whole brain driven?
>>>
>>> In neurophilosophy, they are used to global complex and
>>> distributed brain activity, but still implemented in
>>> term of local computable rules obeyed by neurons.
>>>
>>>
>>> If you look at a city traffic pattern, you will see local
>>> computable rules obeyed by cars, but that doesn't mean there
>>> aren't non-computable agendas being pursued by the drivers.
>>
>> Indeed.
>>
>> But that is what you get at the Turing universal threshold.
>> If you look at the computer's functioning, you will see local
>> computable rules obeyed by the gates, but that doesn't mean
>> there aren't non-computable agendas being pursued by genuine
>> person supported by those computations.
>>
>>
>> Absolutely, but does it mean that it has to be a genuine person?
>
> Hi Craig,
>
> We must first admit that there does not exist a 3p
> representation of what it is like to be a genuine person!
> Therefore this qustion is off the mark.
>
>
> Hi Stephen,
>
> I agree no 3p representation can tell serve as evidence of personhood
> (although I do not think that means that we can't have a sense which
> goes beyond the 3p intuitively or instinctively,  but what I'm talking
> about is more of the zombie question. Just because a simulation fools
> a high number of observers doesn't mean that it isn't a simulation,
> i

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Craig Weinberg


On Sunday, March 24, 2013 6:15:53 PM UTC-4, Stephen Paul King wrote:
>
>  
> On 3/24/2013 3:25 PM, Craig Weinberg wrote:
>  
>
>
> On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote: 
>>
>>
>>  On 24 Mar 2013, at 12:53, Craig Weinberg wrote:
>>
>>
>>
>> On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote: 
>>>
>>>
>>>  On 21 Mar 2013, at 18:44, Craig Weinberg wrote:
>>>
>>>
>>>
>>> On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote: 


  On 20 Mar 2013, at 19:16, Craig Weinberg wrote:

 http://www.sciencedaily.com/releases/2013/03/130320115111.htm

 "We are examining the activity in the cerebral cortex *as a whole*. The 
 brain is a non-stop, always-active system. When we perceive something, the 
 information does not end up in a specific *part* of our brain. Rather, 
 it is added to the brain's existing activity. If we measure the 
 electrochemical activity of the whole cortex, we find wave-like patterns. 
 This shows that brain activity is not local but rather that activity 
 constantly moves from one part of the brain to another." 

  
  
  Please, don't confuse the very particular neuro-philosophy with the 
 much weaker assumption of computationalism. 
 Wave-like pattern are typically computable functions. 
 (I mentioned this when saying that I would say yes to a doctor only if 
 he copies my glial cells at the right chemical level).

  There are just no evidence for non computable activities acting in a 
 relevant way in the biological organism, or actually even in the physical 
 universe.
 You could point on the the wave packet reduction, but it does not make 
 much sense by itself.
  
>>>
>>> Right, I'm not arguing this as evidence of non-comp. Even if there was 
>>> non-comp activity in the brain, nothing that we could use to detect it 
>>> would be able to find anything since we would only know how to use an 
>>> exrternal detection instrument computationally. Mainly I posted this to 
>>> show the direction that the scientific evidence is leading us does not 
>>> support any kind of narrow folk-neuroscience of point to point 
>>> chain-reactions.
>>>  
>>>
>>>  
>>>  Good.
>>>
>>>  
>>>  
>>>   
  
 Not looking very charitable to the bottom-up, neuron machine view.


  Ideas don't need charity  but in this case it is totally charitable, 
 even with neurophilosophy, given that in your example, those waves still 
 seem neuron driven.
  
>>>
>>> How do you know that it seem neuron driven rather than whole brain 
>>> driven?
>>>
>>>
>>>  In neurophilosophy, they are used to global complex and distributed 
>>> brain activity, but still implemented in term of local computable rules 
>>> obeyed by neurons.
>>>  
>>
>> If you look at a city traffic pattern, you will see local computable 
>> rules obeyed by cars, but that doesn't mean there aren't non-computable 
>> agendas being pursued by the drivers.
>>  
>>
>>  Indeed.
>>
>>  But that is what you get at the Turing universal threshold. If you look 
>> at the computer's functioning, you will see local computable rules obeyed 
>> by the gates, but that doesn't mean there aren't non-computable agendas 
>> being pursued by genuine person supported by those computations.
>>  
>
> Absolutely, but does it mean that it has to be a genuine person?
>
>
> Hi Craig,
>
> We must first admit that there does not exist a 3p representation of 
> what it is like to be a genuine person! Therefore this qustion is off the 
> mark.
>

Hi Stephen,

I agree that no 3p representation can serve as evidence of personhood (although I do not think that means we can't have a sense which goes beyond the 3p, intuitively or instinctively), but what I'm talking about is more of the zombie question. Just because a simulation fools a high number of observers doesn't mean that it isn't a simulation, i.e. the best Elvis impersonator is not any closer to becoming Elvis Aaron Presley than they are to becoming Groucho Marx.


>
>  To me it makes sense that the natural development of persons may be 
> restricted to experiences which are represented publicly in zoological 
> terms. The zoological format is not the cause of the experience but it is 
> the minimum vessel with the proper scale of sensitivity for that quality of 
> experience to be supported. Trying to generate the same thing from the 
> bottom up may not be feasible, because the zoological format arises 
> organically, whereas an AI system skips zoology, biology, and chemistry 
> entirely and assumes a universally low format. 
>  
>
> It makes sense to you, sure, but we need to talk about things given 
> the fact above. We can beat around the bush forever ...
>

That's up to everyone else; all that I can do is explain why it makes sense to me.
 

>
>  
> Consciousness does not seem to be compatible with low level unconscious 
> or

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Stephen P. King

On 3/24/2013 3:25 PM, Craig Weinberg wrote:
>
>
> On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:
>
>
> On 24 Mar 2013, at 12:53, Craig Weinberg wrote:
>
>>
>>
>> On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 21 Mar 2013, at 18:44, Craig Weinberg wrote:
>>
>>>
>>>
>>> On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal
>>> wrote:
>>>
>>>
>>> On 20 Mar 2013, at 19:16, Craig Weinberg wrote:
>>>
 http://www.sciencedaily.com/releases/2013/03/130320115111.htm
 

 "We are examining the activity in the cerebral cortex
 /as a whole/. The brain is a non-stop, always-active
 system. When we perceive something, the information
 does not end up in a specific /part/ of our brain.
 Rather, it is added to the brain's existing activity.
 If we measure the electrochemical activity of the whole
 cortex, we find wave-like patterns. This shows that
 brain activity is not local but rather that activity
 constantly moves from one part of the brain to another."

>>>
>>>
>>> Please, don't confuse the very particular
>>> neuro-philosophy with the much weaker assumption of
>>> computationalism. 
>>> Wave-like pattern are typically computable functions. 
>>> (I mentioned this when saying that I would say yes to a
>>> doctor only if he copies my glial cells at the right
>>> chemical level).
>>>
>>> There are just no evidence for non computable activities
>>> acting in a relevant way in the biological organism, or
>>> actually even in the physical universe.
>>> You could point on the the wave packet reduction, but it
>>> does not make much sense by itself.
>>>
>>>
>>> Right, I'm not arguing this as evidence of non-comp. Even if
>>> there was non-comp activity in the brain, nothing that we
>>> could use to detect it would be able to find anything since
>>> we would only know how to use an exrternal detection
>>> instrument computationally. Mainly I posted this to show the
>>> direction that the scientific evidence is leading us does
>>> not support any kind of narrow folk-neuroscience of point to
>>> point chain-reactions.
>>
>>
>> Good.
>>
>>
>>>
>>>

 Not looking very charitable to the bottom-up, neuron
 machine view.
>>>
>>> Ideas don't need charity  but in this case it is totally
>>> charitable, even with neurophilosophy, given that in
>>> your example, those waves still seem neuron driven.
>>>
>>>
>>> How do you know that it seem neuron driven rather than whole
>>> brain driven?
>>
>> In neurophilosophy, they are used to global complex and
>> distributed brain activity, but still implemented in term of
>> local computable rules obeyed by neurons.
>>
>>
>> If you look at a city traffic pattern, you will see local
>> computable rules obeyed by cars, but that doesn't mean there
>> aren't non-computable agendas being pursued by the drivers.
>
> Indeed.
>
> But that is what you get at the Turing universal threshold. If you
> look at the computer's functioning, you will see local computable
> rules obeyed by the gates, but that doesn't mean there aren't
> non-computable agendas being pursued by genuine person supported
> by those computations.
>
>
> Absolutely, but does it mean that it has to be a genuine person?

Hi Craig,

We must first admit that there does not exist a 3p representation of what it is like to be a genuine person! Therefore this question is off the mark.


> To me it makes sense that the natural development of persons may be
> restricted to experiences which are represented publicly in zoological
> terms. The zoological format is not the cause of the experience but it
> is the minimum vessel with the proper scale of sensitivity for that
> quality of experience to be supported. Trying to generate the same
> thing from the bottom up may not be feasible, because the zoological
> format arises organically, whereas an AI system skips zoology,
> biology, and chemistry entirely and assumes a universally low format.

It makes sense to you, sure, but we need to talk about things given
the fact above. We can beat around the bush forever ...

>
> Consciousness does not seem to be compatible with low level
> unconscious origins to me.

Why? Are molecules 'alive'? We do not have a measure of what it is to be alive. Maybe a global measure does not exist and we need to stop looking for one!

> Looking at language, the rul

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Craig Weinberg


On Sunday, March 24, 2013 1:44:01 PM UTC-4, Bruno Marchal wrote:
>
>
> On 24 Mar 2013, at 12:53, Craig Weinberg wrote:
>
>
>
> On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 21 Mar 2013, at 18:44, Craig Weinberg wrote:
>>
>>
>>
>> On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 20 Mar 2013, at 19:16, Craig Weinberg wrote:
>>>
>>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>>>
>>> "We are examining the activity in the cerebral cortex *as a whole*. The 
>>> brain is a non-stop, always-active system. When we perceive something, the 
>>> information does not end up in a specific *part* of our brain. Rather, 
>>> it is added to the brain's existing activity. If we measure the 
>>> electrochemical activity of the whole cortex, we find wave-like patterns. 
>>> This shows that brain activity is not local but rather that activity 
>>> constantly moves from one part of the brain to another." 
>>>
>>>
>>>
>>> Please, don't confuse the very particular neuro-philosophy with the much 
>>> weaker assumption of computationalism. 
>>> Wave-like pattern are typically computable functions. 
>>> (I mentioned this when saying that I would say yes to a doctor only if 
>>> he copies my glial cells at the right chemical level).
>>>
>>> There are just no evidence for non computable activities acting in a 
>>> relevant way in the biological organism, or actually even in the physical 
>>> universe.
>>> You could point on the the wave packet reduction, but it does not make 
>>> much sense by itself.
>>>
>>
>> Right, I'm not arguing this as evidence of non-comp. Even if there was 
>> non-comp activity in the brain, nothing that we could use to detect it 
>> would be able to find anything since we would only know how to use an 
>> exrternal detection instrument computationally. Mainly I posted this to 
>> show the direction that the scientific evidence is leading us does not 
>> support any kind of narrow folk-neuroscience of point to point 
>> chain-reactions.
>>
>>
>>
>> Good.
>>
>>
>>
>>
>>>
>>> Not looking very charitable to the bottom-up, neuron machine view.
>>>
>>>
>>> Ideas don't need charity  but in this case it is totally charitable, 
>>> even with neurophilosophy, given that in your example, those waves still 
>>> seem neuron driven.
>>>
>>
>> How do you know that it seem neuron driven rather than whole brain driven?
>>
>>
>> In neurophilosophy, they are used to global complex and distributed brain 
>> activity, but still implemented in term of local computable rules obeyed by 
>> neurons.
>>
>
> If you look at a city traffic pattern, you will see local computable rules 
> obeyed by cars, but that doesn't mean there aren't non-computable agendas 
> being pursued by the drivers.
>
>
> Indeed.
>
> But that is what you get at the Turing universal threshold. If you look at 
> the computer's functioning, you will see local computable rules obeyed by 
> the gates, but that doesn't mean there aren't non-computable agendas being 
> pursued by genuine person supported by those computations.
>

Absolutely, but does it mean that it has to be a genuine person? To me it 
makes sense that the natural development of persons may be restricted to 
experiences which are represented publicly in zoological terms. The 
zoological format is not the cause of the experience but it is the minimum 
vessel with the proper scale of sensitivity for that quality of experience 
to be supported. Trying to generate the same thing from the bottom up may 
not be feasible, because the zoological format arises organically, whereas 
an AI system skips zoology, biology, and chemistry entirely and assumes a 
universally low format. 

Consciousness does not seem to be compatible with low level unconscious 
origins to me. Looking at language, the rules of spelling and grammar do 
not drive the creation of new words. A word cannot be forced into common 
usage just because it is introduced into a culture. There is no rule in 
language which has a function of creating new words, nor could any rule 
like that possibly work. If you could control the behavior of language use 
from the bottom up however, you could simulate that such a rule would work, 
just by programming people to utter it with increasing frequency. This 
would satisfy any third person test for the effectiveness of the rule, but 
of course would be completely meaningless.


>
>
>  
>
>>
>>
>>
>>
>> What would it look like if the brain as a whole were driving the neurons?
>>
>>
>> Either it would be like saying that a high level program can have a 
>> feedback on some of its low level implementations, which is not a problem 
>> at all, as this already exist, in both biology and computer science, or it 
>> would be like saying that a brain can break the physical laws, or the 
>> arithmetical laws and it would be like pseudo-philosophy.
>>
>
> What about the relation between high level arithmetic laws - like the ones 
> whic

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Bruno Marchal


On 24 Mar 2013, at 12:53, Craig Weinberg wrote:




On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:

On 21 Mar 2013, at 18:44, Craig Weinberg wrote:




On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:

On 20 Mar 2013, at 19:16, Craig Weinberg wrote:


http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex as a whole.  
The brain is a non-stop, always-active system. When we perceive  
something, the information does not end up in a specific part of  
our brain. Rather, it is added to the brain's existing activity.  
If we measure the electrochemical activity of the whole cortex, we  
find wave-like patterns. This shows that brain activity is not  
local but rather that activity constantly moves from one part of  
the brain to another."





Please, don't confuse the very particular neuro-philosophy with the  
much weaker assumption of computationalism.

Wave-like pattern are typically computable functions.
(I mentioned this when saying that I would say yes to a doctor only  
if he copies my glial cells at the right chemical level).


There are just no evidence for non computable activities acting in  
a relevant way in the biological organism, or actually even in the  
physical universe.
You could point on the the wave packet reduction, but it does not  
make much sense by itself.


Right, I'm not arguing this as evidence of non-comp. Even if there  
was non-comp activity in the brain, nothing that we could use to  
detect it would be able to find anything since we would only know  
how to use an exrternal detection instrument computationally.  
Mainly I posted this to show the direction that the scientific  
evidence is leading us does not support any kind of narrow folk- 
neuroscience of point to point chain-reactions.



Good.







Not looking very charitable to the bottom-up, neuron machine view.


Ideas don't need charity  but in this case it is totally  
charitable, even with neurophilosophy, given that in your example,  
those waves still seem neuron driven.


How do you know that it seem neuron driven rather than whole brain  
driven?


In neurophilosophy, they are used to global complex and distributed  
brain activity, but still implemented in term of local computable  
rules obeyed by neurons.


If you look at a city traffic pattern, you will see local computable  
rules obeyed by cars, but that doesn't mean there aren't non- 
computable agendas being pursued by the drivers.


Indeed.

But that is what you get at the Turing universal threshold. If you look at the computer's functioning, you will see local computable rules obeyed by the gates, but that doesn't mean there aren't non-computable agendas being pursued by a genuine person supported by those computations.










What would it look like if the brain as a whole were driving the  
neurons?


Either it would be like saying that a high level program can have a  
feedback on some of its low level implementations, which is not a  
problem at all, as this already exist, in both biology and computer  
science, or it would be like saying that a brain can break the  
physical laws, or the arithmetical laws and it would be like pseudo- 
philosophy.


What about the relation between high level arithmetic laws - like  
the ones which allow for 1p subjectivity in UM, LM, etc and the  
programs which support them?


To eat or to be eaten, relative to the most probable universal neighbors. The relations can be complicated.






Not between the high level program and the low level program, but  
between the X-Level truths and laws and all local functions?



Above the substitution level, only god knows, but you can bet and theorize locally; below the substitution level, you get the full arithmetical mess, the union of all sigma_i formulas, well beyond the computable. It is not easy, but there are mathematical lanterns, and deep symmetries, and deep self-referential insight.

It is a reality that the universal machines cannot avoid.

It is the advantage of comp that you can translate the problem into arithmetic, but it is not necessarily a "simple", sigma_1, problem.
There is no universal panacea capable of satisfying all universal machines at once; nothing is easy.
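
For the notation, a quick recursion-theory sketch in LaTeX (standard textbook definitions, nothing specific to comp):

% Sigma_1 formulas: one unbounded existential quantifier in front of a
% bounded (Delta_0) formula.
\varphi \in \Sigma_1 \;\Longleftrightarrow\; \varphi \equiv \exists y\, \delta(x, y) \quad \text{with } \delta \in \Delta_0
% A set is computably enumerable iff it is Sigma_1-definable, e.g. via
% Kleene's decidable T-predicate (W_e is the domain of phi_e):
x \in W_e \;\Longleftrightarrow\; \exists s\, T(e, x, s)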

You have to look inward, eventually.

Bruno







Craig


Bruno





Craig


Bruno









Craig





http://iridia.ulb.ac.be/~marchal/





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Bruno Marchal

John,


On 22 Mar 2013, at 21:27, John Mikes wrote:

I tried to find FOAR - but failed. You kindly advised to 'show me  
how' get subscribed, but it was missing from your post.

Could you repeat it?


You might try this:

To post to this group, send email to f...@googlegroups.com.
Visit this group at http://groups.google.com/group/foar?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.

Tell me if it is OK.

Best,

Bruno







John M

On Fri, Mar 22, 2013 at 5:36 AM, Bruno Marchal   
wrote:

Hi John,


On 21 Mar 2013, at 21:40, John Mikes wrote:


Dear Bruno,
it is so fascinating to read about "universal machines".
Is there a place where I could learn in short, understandable terms  
what they may be? Then again the difference between a 'Turing  
machine' and a 'physical computer' (what I usually call our  
embryonic Kraxlwerk).
I grew up into my science without computers, got my doctorates in  
1948 and 1967 and faced a computer first on a different continent  
(USA) in 1980. At that time I had already ~30 patents and a  
reputation of a practical scientist.

So I need more than the 'difference' into the universal.

Descriptions I saw turned me off. My chemistry-based polymer  
science does not give me the base for most (and mostly  
theoretical!) descriptions.

How'bout common sense base?


Actually I quasi-discovered Turing universality by myself when  
studying Jacob and Monod 's work on genetic molecular control in  
bacteria. But I did not take that very much seriously, until I  
discovered (in the literature) the diagonalization technic (Cantor,  
Kleene) and Church's thesis, which makes me decide to study math  
instead of biology.


May be you could subscribe to the FOAR list, as I will explain all  
that there. But if you ask me, I can send it in cc here or provide  
other explanations (I think some people are on both list, but this  
should not be a problem as it will not be a great number of posts).  
I dunno. I will see.


Thanks for telling me your interest,

Best,

Bruno






On Thu, Mar 21, 2013 at 2:02 PM, Bruno Marchal   
wrote:


On 21 Mar 2013, at 02:32, Stephen P. King wrote:

Are physical computers truly "universal Turing Machines"? No! They  
do not have infinite tape, not precise read/write heads. They are  
subject to noise and error.



The infinite tape is not part of the universal machine. A universal  
machine is a number u such that phi_u(x, y) = phi_x(y).
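
To make that equation concrete, here is a minimal sketch in Python (purely illustrative: program source strings stand in for the numbers coding programs, and the names phi_u and phi are mine, not part of the formal definition):

def phi_u(x: str, y):
    """A toy universal program: interpret the program coded by x on input y."""
    scope = {}
    exec(x, scope)          # define a one-argument function f from the source string x
    return scope["f"](y)    # apply the coded program to the input

def phi(x: str):
    """The function computed by program x."""
    return lambda y: phi_u(x, y)

if __name__ == "__main__":
    square = "def f(n):\n    return n * n\n"          # a sample program x
    assert phi_u(square, 7) == phi(square)(7) == 49   # phi_u(x, y) = phi_x(y)
    print(phi_u(square, 7))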


Please concentrate on the thought experiments; the sum will be taken over the memories of those who get the continuations, and the extensions.


When a löbian universal number runs out of memory, it asks for more memory space, or writes on the wall of the cave, sooner or later. And if it does not get it, it dies; but from the 1p, it will find itself in a situation extending the memory (by just 1p indeterminacy).



Universal machines are finite entities. Physical computers are a particular case of Turing machine, and can emulate all other possible universal numbers, and the same is true for each of them. All universal machines can imitate all universal machines.
But no universal machine can be universal for the notion of a belief, knowledge, observation, feeling, etc. In those matters, they can differ a lot.


But they are all finite, and their ability is measured by  
abstracting from the time and space (in the number theoretical or  
computer theoretical sense) needed to accomplish the task.


That they have no precise read/write components makes them harder to recognize among the phi_i, but this is not a problem, given that we know that we already cannot know which machine we are, and, from the first person point of view, we are supported by all the relevant machines and computations.


And they are all subject to noise and error (that follows from arithmetic). That noise and those errors are their best allies for building more stable realities, I guess.


Bruno




http://iridia.ulb.ac.be/~marchal/












http://iridia.ulb.ac.be/~marchal/





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Craig Weinberg


On Sunday, March 24, 2013 7:13:27 AM UTC-4, Bruno Marchal wrote:
>
>
> On 21 Mar 2013, at 18:44, Craig Weinberg wrote:
>
>
>
> On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 20 Mar 2013, at 19:16, Craig Weinberg wrote:
>>
>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>>
>> "We are examining the activity in the cerebral cortex *as a whole*. The 
>> brain is a non-stop, always-active system. When we perceive something, the 
>> information does not end up in a specific *part* of our brain. Rather, 
>> it is added to the brain's existing activity. If we measure the 
>> electrochemical activity of the whole cortex, we find wave-like patterns. 
>> This shows that brain activity is not local but rather that activity 
>> constantly moves from one part of the brain to another." 
>>
>>
>>
>> Please, don't confuse the very particular neuro-philosophy with the much 
>> weaker assumption of computationalism. 
>> Wave-like pattern are typically computable functions. 
>> (I mentioned this when saying that I would say yes to a doctor only if he 
>> copies my glial cells at the right chemical level).
>>
>> There are just no evidence for non computable activities acting in a 
>> relevant way in the biological organism, or actually even in the physical 
>> universe.
>> You could point on the the wave packet reduction, but it does not make 
>> much sense by itself.
>>
>
> Right, I'm not arguing this as evidence of non-comp. Even if there was 
> non-comp activity in the brain, nothing that we could use to detect it 
> would be able to find anything since we would only know how to use an 
> exrternal detection instrument computationally. Mainly I posted this to 
> show the direction that the scientific evidence is leading us does not 
> support any kind of narrow folk-neuroscience of point to point 
> chain-reactions.
>
>
>
> Good.
>
>
>
>
>>
>> Not looking very charitable to the bottom-up, neuron machine view.
>>
>>
>> Ideas don't need charity  but in this case it is totally charitable, even 
>> with neurophilosophy, given that in your example, those waves still seem 
>> neuron driven.
>>
>
> How do you know that it seem neuron driven rather than whole brain driven?
>
>
> In neurophilosophy, they are used to global complex and distributed brain 
> activity, but still implemented in term of local computable rules obeyed by 
> neurons.
>

If you look at a city traffic pattern, you will see local computable rules 
obeyed by cars, but that doesn't mean there aren't non-computable agendas 
being pursued by the drivers.
 

>
>
>
>
> What would it look like if the brain as a whole were driving the neurons?
>
>
> Either it would be like saying that a high level program can have a 
> feedback on some of its low level implementations, which is not a problem 
> at all, as this already exist, in both biology and computer science, or it 
> would be like saying that a brain can break the physical laws, or the 
> arithmetical laws and it would be like pseudo-philosophy.
>

What about the relation between high level arithmetic laws - like the ones 
which allow for 1p subjectivity in UM, LM, etc and the programs which 
support them? Not between the high level program and the low level program, 
but between the X-Level truths and laws and all local functions?

Craig
 

>
> Bruno
>
>
>
>
> Craig
>  
>
>>
>> Bruno
>>
>>
>>
>>
>>
>>
>>
>>
>> Craig
>>
>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>>
>>
>>
>>
>  
>  
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-24 Thread Bruno Marchal


On 21 Mar 2013, at 18:44, Craig Weinberg wrote:




On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:

On 20 Mar 2013, at 19:16, Craig Weinberg wrote:


http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex as a whole.  
The brain is a non-stop, always-active system. When we perceive  
something, the information does not end up in a specific part of  
our brain. Rather, it is added to the brain's existing activity. If  
we measure the electrochemical activity of the whole cortex, we  
find wave-like patterns. This shows that brain activity is not  
local but rather that activity constantly moves from one part of  
the brain to another."





Please, don't confuse the very particular neuro-philosophy with the  
much weaker assumption of computationalism.

Wave-like patterns are typically computable functions.
(I mentioned this when saying that I would say yes to a doctor only  
if he copies my glial cells at the right chemical level).
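
To be concrete about "computable" here, a short program can evaluate such a wave-like pattern at any point, as in this toy Python sketch (the amplitudes and frequencies are invented; a proper computable-analysis treatment would use rational approximations rather than floats):

import math

def wave(t, components=((1.0, 2.0), (0.5, 5.0), (0.25, 11.0))):
    """A toy wave-like pattern: a finite superposition of sines, evaluable at any t."""
    return sum(a * math.sin(2 * math.pi * f * t) for a, f in components)

if __name__ == "__main__":
    print([round(wave(k / 100.0), 4) for k in range(5)])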


There is just no evidence for non-computable activities acting in a relevant way in the biological organism, or actually even in the physical universe.
You could point to the wave packet reduction, but it does not make much sense by itself.


Right, I'm not arguing this as evidence of non-comp. Even if there were non-comp activity in the brain, nothing that we could use to detect it would be able to find anything, since we would only know how to use an external detection instrument computationally. Mainly I posted this to show that the direction the scientific evidence is leading us in does not support any kind of narrow folk-neuroscience of point-to-point chain reactions.



Good.







Not looking very charitable to the bottom-up, neuron machine view.


Ideas don't need charity, but in this case it is totally charitable, even with neurophilosophy, given that in your example those waves still seem neuron driven.


How do you know that it seems neuron driven rather than whole-brain driven?


In neurophilosophy, they are used to global, complex and distributed brain activity, but still implemented in terms of local computable rules obeyed by neurons.





What would it look like if the brain as a whole were driving the  
neurons?


Either it would be like saying that a high level program can have feedback on some of its low level implementations, which is not a problem at all, as this already exists in both biology and computer science; or it would be like saying that a brain can break the physical laws, or the arithmetical laws, and that would be pseudo-philosophy.
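
Just to illustrate that first, unproblematic sense, here is a toy sketch in Python (the names and the threshold are invented for the example): a high level object monitors the cost of its own low level routine and reconfigures which routine it runs.

import time

def sum_loop(n):
    """Low level implementation 1: explicit loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def sum_formula(n):
    """Low level implementation 2: closed formula."""
    return n * (n - 1) // 2

class Adder:
    """High level object whose monitoring feeds back on its low level implementation."""
    def __init__(self):
        self.impl = sum_loop

    def run(self, n):
        start = time.perf_counter()
        result = self.impl(n)
        elapsed = time.perf_counter() - start
        if elapsed > 0.01 and self.impl is sum_loop:
            self.impl = sum_formula   # the high level reconfigures the low level
        return result

if __name__ == "__main__":
    adder = Adder()
    for n in (10**3, 10**6, 10**6):
        print(n, adder.run(n))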


Bruno





Craig


Bruno









Craig





http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-22 Thread John Mikes
I tried to find FOAR - but failed. You kindly offered to 'show me how' to get subscribed, but it was missing from your post.
Could you repeat it?
John M

On Fri, Mar 22, 2013 at 5:36 AM, Bruno Marchal  wrote:

> Hi John,
>
>
> On 21 Mar 2013, at 21:40, John Mikes wrote:
>
> Dear Bruno,
> it is so fascinating to read about "universal machines".
> Is there a place where I could learn in short, understandable terms what
> they may be? Then again the difference between a 'Turing machine' and a
> 'physical computer' (what I usually call our embryonic Kraxlwerk).
> I grew up into my science without computers, got my doctorates in 1948 and
> 1967 and faced a computer first on a different continent (USA) in 1980. At
> that time I had already ~30 patents and a reputation of a practical
> scientist.
> So I need more than the 'difference' into the universal.
>
> Descriptions I saw turned me off. My chemistry-based polymer science does
> not give me the base for most (and mostly theoretical!) descriptions.
> How'bout common sense base?
>
>
> Actually I quasi-discovered Turing universality by myself when studying
> Jacob and Monod 's work on genetic molecular control in bacteria. But I did
> not take that very much seriously, until I discovered (in the literature)
> the diagonalization technic (Cantor, Kleene) and Church's thesis, which
> makes me decide to study math instead of biology.
>
> May be you could subscribe to the FOAR list, as I will explain all that
> there. But if you ask me, I can send it in cc here or provide other
> explanations (I think some people are on both list, but this should not be
> a problem as it will not be a great number of posts). I dunno. I will see.
>
> Thanks for telling me your interest,
>
> Best,
>
> Bruno
>
>
>
>
>
> On Thu, Mar 21, 2013 at 2:02 PM, Bruno Marchal  wrote:
>
>>
>> On 21 Mar 2013, at 02:32, Stephen P. King wrote:
>>
>> Are physical computers truly "universal Turing Machines"? No! They do not
>> have infinite tape, not precise read/write heads. They are subject to noise
>> and error.
>>
>>
>>
>> The infinite tape is not part of the universal machine. A universal
>> machine is a number u such that phi_u(x, y) = phi_x(y).
>>
>> Please concentrate to the thought experiments, the sum will be taken on
>> the memories of those who get the continuations, and the extensions.
>>
>> When a löbian universal number run out of memory, he asks for more memory
>> space or write on the wall of the cave, soon or later. And if it does not
>> get it it dies, but from the 1p, it will find itself in a situation
>> extending the memory (by just 1p indeterminacy).
>>
>>
>> Universal machines are finite entities. Physical Computer are particular
>> case of Turing machine, and can emulate all other possible universal
>> number, and the same is true for each of them. All universal machine can
>> imitate all universal machines.
>> But no universal machines can be universal for the notion of a belief,
>> knowledge, observation, feeling, etc. In those matter, they can differ a
>> lot.
>>
>> But they are all finite, and their ability is measured by abstracting
>> from the time and space (in the number theoretical or computer theoretical
>> sense) needed to accomplish the task.
>>
>> That they have no precise read/write components, makes them harder to
>> recognize among the phi_i, but this is not a problem, given that we know
>> that we already cannot know which machine we are, and form the first person
>> point of view, we are supported by all the relevant machines and
>> computations.
>>
>> And they are all subject to noise and error, (that follows from
>> arithmetic).  Those noise and errors are their best allies to build more
>> stable realities, I guess.
>>
>> Bruno
>>
>>
>>
>>
>>  http://iridia.ulb.ac.be/~marchal/
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-22 Thread Bruno Marchal

Hi John,


On 21 Mar 2013, at 21:40, John Mikes wrote:


Dear Bruno,
it is so fascinating to read about "universal machines".
Is there a place where I could learn in short, understandable terms  
what they may be? Then again the difference between a 'Turing  
machine' and a 'physical computer' (what I usually call our  
embryonic Kraxlwerk).
I grew up into my science without computers, got my doctorates in  
1948 and 1967 and faced a computer first on a different continent  
(USA) in 1980. At that time I had already ~30 patents and a  
reputation of a practical scientist.

So I need more than the 'difference' into the universal.

Descriptions I saw turned me off. My chemistry-based polymer science  
does not give me the base for most (and mostly theoretical!)  
descriptions.

How'bout common sense base?


Actually I quasi-discovered Turing universality by myself when
studying Jacob and Monod's work on genetic molecular control in
bacteria. But I did not take that very seriously until I
discovered (in the literature) the diagonalization technique (Cantor,
Kleene) and Church's thesis, which made me decide to study math
instead of biology.
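
To give a flavour of the diagonal trick itself, here is a minimal Python sketch, purely illustrative (the enumeration 'enum' is invented for the example): given any enumeration of total functions, the diagonal function differs from every listed function at its own index, so no such enumeration can be complete.

    # Minimal sketch of the diagonal argument (illustrative only).
    # 'enum' stands for any proposed enumeration of total functions N -> N:
    # enum(i) returns the i-th function. The diagonal function g disagrees
    # with enum(i) at argument i, so g is missing from the enumeration.

    def diagonal(enum):
        def g(n):
            return enum(n)(n) + 1   # differ from the n-th function at input n
        return g

    # Example enumeration: the i-th function maps x to x + i.
    enum = lambda i: (lambda x: x + i)
    g = diagonal(enum)
    assert all(g(i) != enum(i)(i) for i in range(100))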


Maybe you could subscribe to the FOAR list, as I will explain all
that there. But if you ask me, I can send it in cc here or provide
other explanations (I think some people are on both lists, but this
should not be a problem, as it will not be a great number of posts). I
dunno. I will see.


Thanks for telling me your interest,

Best,

Bruno






On Thu, Mar 21, 2013 at 2:02 PM, Bruno Marchal   
wrote:


On 21 Mar 2013, at 02:32, Stephen P. King wrote:

Are physical computers truly "universal Turing Machines"? No! They  
do not have an infinite tape, nor precise read/write heads. They are
subject to noise and error.



The infinite tape is not part of the universal machine. A universal  
machine is a number u such that phi_u(x, y) = phi_x(y).


Please concentrate on the thought experiments: the sum will be taken
on the memories of those who get the continuations and the
extensions.


When a löbian universal number runs out of memory, it asks for more
memory space, or writes on the wall of the cave, sooner or later. And if
it does not get it, it dies; but from the 1p, it will find itself in
a situation extending the memory (by just 1p indeterminacy).



Universal machines are finite entities. Physical computers are a
particular case of Turing machine, and can emulate every other
possible universal number, and the same is true for each of them.
Every universal machine can imitate every universal machine.
But no universal machine can be universal for the notions of
belief, knowledge, observation, feeling, etc. In those matters, they
can differ a lot.


But they are all finite, and their ability is measured by  
abstracting from the time and space (in the number theoretical or  
computer theoretical sense) needed to accomplish the task.


That they have no precise read/write components makes them harder
to recognize among the phi_i, but this is not a problem, given that
we already know that we cannot know which machine we are, and, from
the first-person point of view, we are supported by all the relevant
machines and computations.


And they are all subject to noise and error (that follows from
arithmetic). That noise and those errors are their best allies to build
more stable realities, I guess.


Bruno




http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-21 Thread Stephen P. King

On 3/21/2013 4:40 PM, John Mikes wrote:
> Dear Bruno,
> it is so fascinating to read about "universal machines".
> Is there a place where I could learn in short, understandable terms
> what they may be? Then again the difference between a 'Turing machine'
> and a 'physical computer' (what I usually call our embryonic Kraxlwerk).

Hi John,

When Bruno discusses Machines, they are never something that might
inhabit your lab. It is the computer program X that generates an exact
simulation of a physical system that has exactly the functional space
required to run X. A logical loop of sorts. It does *not* run on just
any one hardware box, or on any finite subset of them, that is a
physical system. Do you see why?


> I grew up into my science without computers, got my doctorates in 1948
> and 1967 and faced a computer first on a different continent (USA) in
> 1980. At that time I had already ~30 patents and a reputation of a
> practical scientist.  
> So I need more than the 'difference' into the universal.

You have a difficulty with Bruno because he lives in an abstract
universe where he does not have to work within the constraints of the
physical world. The computers I know of do constrain and thus influence
the programs that can run on them!

>
> Descriptions I saw turned me off. My chemistry-based polymer science
> does not give me the base for most (and mostly theoretical!)
> descriptions. 
> How'bout common sense base?
> John M
>
> On Thu, Mar 21, 2013 at 2:02 PM, Bruno Marchal  > wrote:
>
>
> On 21 Mar 2013, at 02:32, Stephen P. King wrote:
>
>> Are physical computers truly "universal Turing Machines"? No!
>> They do not have an infinite tape, nor precise read/write heads.
>> They are subject to noise and error.
>
>
> The infinite tape is not part of the universal machine. A
> universal machine is a number u such that phi_u(x, y) = phi_x(y).
>
> Please concentrate on the thought experiments: the sum will be
> taken on the memories of those who get the continuations and the
> extensions.
>
> When a löbian universal number runs out of memory, it asks for more
> memory space, or writes on the wall of the cave, sooner or later. And
> if it does not get it, it dies; but from the 1p, it will find
> itself in a situation extending the memory (by just 1p indeterminacy).
>
>
> Universal machines are finite entities. Physical computers are a
> particular case of Turing machine, and can emulate every other
> possible universal number, and the same is true for each of them.
> Every universal machine can imitate every universal machine.
> But no universal machine can be universal for the notions of
> belief, knowledge, observation, feeling, etc. In those matters,
> they can differ a lot.
>
> But they are all finite, and their ability is measured by
> abstracting from the time and space (in the number theoretical or
> computer theoretical sense) needed to accomplish the task. 
>
> That they have no precise read/write components makes them harder
> to recognize among the phi_i, but this is not a problem, given
> that we already know that we cannot know which machine we are, and,
> from the first-person point of view, we are supported by all the
> relevant machines and computations.
>
> And they are all subject to noise and error (that follows from
> arithmetic). That noise and those errors are their best allies to
> build more stable realities, I guess.
>
> Bruno
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
> 
>
>
>

-- 
Onward!

Stephen


Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-21 Thread John Mikes
Dear Bruno,
it is so fascinating to read about "universal machines".
Is there a place where I could learn in short, understandable terms what
they may be? Then again the difference between a 'Turing machine' and a
'physical computer' (what I usually call our embryonic Kraxlwerk).
I grew up into my science without computers, got my doctorates in 1948 and
1967 and faced a computer first on a different continent (USA) in 1980. At
that time I had already ~30 patents and a reputation of a practical
scientist.
So I need more than the 'difference' into the universal.

Descriptions I saw turned me off. My chemistry-based polymer science does
not give me the base for most (and mostly theoretical!) descriptions.
How'bout common sense base?
John M

On Thu, Mar 21, 2013 at 2:02 PM, Bruno Marchal  wrote:

>
> On 21 Mar 2013, at 02:32, Stephen P. King wrote:
>
> Are physical computers truly "universal Turing Machines"? No! They do not
> have an infinite tape, nor precise read/write heads. They are subject to noise
> and error.
>
>
>
> The infinite tape is not part of the universal machine. A universal
> machine is a number u such that phi_u(x, y) = phi_x(y).
>
> Please concentrate on the thought experiments: the sum will be taken on
> the memories of those who get the continuations and the extensions.
>
> When a löbian universal number runs out of memory, it asks for more memory
> space, or writes on the wall of the cave, sooner or later. And if it does not
> get it, it dies; but from the 1p, it will find itself in a situation
> extending the memory (by just 1p indeterminacy).
>
>
> Universal machines are finite entities. Physical computers are a particular
> case of Turing machine, and can emulate every other possible universal
> number, and the same is true for each of them. Every universal machine can
> imitate every universal machine.
> But no universal machine can be universal for the notions of belief,
> knowledge, observation, feeling, etc. In those matters, they can differ a
> lot.
>
> But they are all finite, and their ability is measured by abstracting from
> the time and space (in the number theoretical or computer theoretical
> sense) needed to accomplish the task.
>
> That they have no precise read/write components makes them harder to
> recognize among the phi_i, but this is not a problem, given that we already
> know that we cannot know which machine we are, and, from the first-person
> point of view, we are supported by all the relevant machines and
> computations.
>
> And they are all subject to noise and error (that follows from
> arithmetic). That noise and those errors are their best allies to build more
> stable realities, I guess.
>
> Bruno
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>




Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-21 Thread Bruno Marchal


On 21 Mar 2013, at 02:32, Stephen P. King wrote:

Are physical computers truly "universal Turing Machines"? No! They  
do not have an infinite tape, nor precise read/write heads. They are
subject to noise and error.



The infinite tape is not part of the universal machine. A universal  
machine is a number u such that phi_u(x, y) = phi_x(y).

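To make that definition concrete, here is a minimal sketch (Python; representing programs as source strings, and coding the pair (x, y) as a single input, are only conveniences of the sketch, not part of the definition). The 'universal' program u is just another program of the same kind, one that takes a pair (x, y) and runs x on y:

    # Minimal sketch (illustrative only): programs are Python source strings
    # that define a one-argument function f, and phi(x, y) runs program x on
    # input y. This stands in for the enumeration phi_i.

    def phi(x, y):
        env = {}
        exec(x, env)              # program x must define f
        return env["f"](y)

    square = "def f(n): return n * n"

    # A 'universal' program u, written in the same language of source strings:
    # it takes a pair (x, y) and runs x on y, so phi(u, (x, y)) == phi(x, y).
    u = (
        "def f(arg):\n"
        "    x, y = arg\n"
        "    env = {}\n"
        "    exec(x, env)\n"
        "    return env['f'](y)\n"
    )

    assert phi(square, 7) == 49
    assert phi(u, (square, 7)) == phi(square, 7)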

Please concentrate on the thought experiments: the sum will be taken
on the memories of those who get the continuations and the extensions.


When a löbian universal number runs out of memory, it asks for more
memory space, or writes on the wall of the cave, sooner or later. And if
it does not get it, it dies; but from the 1p, it will find itself in a
situation extending the memory (by just 1p indeterminacy).



Universal machines are finite entities. Physical computers are a
particular case of Turing machine, and can emulate every other possible
universal number, and the same is true for each of them. Every universal
machine can imitate every universal machine.
But no universal machine can be universal for the notions of belief,
knowledge, observation, feeling, etc. In those matters, they can differ
a lot.


But they are all finite, and their ability is measured by abstracting  
from the time and space (in the number theoretical or computer  
theoretical sense) needed to accomplish the task.


That they have no precise read/write components makes them harder to
recognize among the phi_i, but this is not a problem, given that we
already know that we cannot know which machine we are, and, from the
first-person point of view, we are supported by all the relevant
machines and computations.


And they are all subject to noise and error (that follows from
arithmetic). That noise and those errors are their best allies to build
more stable realities, I guess.


Bruno




http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-21 Thread Craig Weinberg


On Thursday, March 21, 2013 1:28:24 PM UTC-4, Bruno Marchal wrote:
>
>
> On 20 Mar 2013, at 19:16, Craig Weinberg wrote:
>
> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>
> "We are examining the activity in the cerebral cortex *as a whole*. The 
> brain is a non-stop, always-active system. When we perceive something, the 
> information does not end up in a specific *part* of our brain. Rather, it 
> is added to the brain's existing activity. If we measure the 
> electrochemical activity of the whole cortex, we find wave-like patterns. 
> This shows that brain activity is not local but rather that activity 
> constantly moves from one part of the brain to another." 
>
>
>
> Please, don't confuse the very particular neuro-philosophy with the much 
> weaker assumption of computationalism. 
> Wave-like patterns are typically computable functions. 
> (I mentioned this when saying that I would say yes to a doctor only if he 
> copies my glial cells at the right chemical level).
>
> There is just no evidence for non-computable activities acting in a 
> relevant way in the biological organism, or actually even in the physical 
> universe.
> You could point to the wave packet reduction, but it does not make 
> much sense by itself.
>

Right, I'm not arguing this as evidence of non-comp. Even if there were 
non-comp activity in the brain, nothing that we could use to detect it 
would be able to find anything, since we would only know how to use an 
external detection instrument computationally. Mainly I posted this to 
show that the direction in which the scientific evidence is leading us does 
not support any kind of narrow folk-neuroscience of point-to-point 
chain reactions.


>
> Not looking very charitable to the bottom-up, neuron machine view.
>
>
> Ideas don't need charity, but in this case it is totally charitable, even 
> with neurophilosophy, given that in your example those waves still seem 
> neuron-driven.
>

How do you know that it seems neuron-driven rather than whole-brain driven? 
What would it look like if the brain as a whole were driving the neurons?

Craig
 

>
> Bruno
>
>
>
>
>
>
>
>
> Craig
>
>  
>  
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-21 Thread Bruno Marchal


On 20 Mar 2013, at 19:16, Craig Weinberg wrote:


http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex as a whole.  
The brain is a non-stop, always-active system. When we perceive  
something, the information does not end up in a specific part of our  
brain. Rather, it is added to the brain's existing activity. If we  
measure the electrochemical activity of the whole cortex, we find  
wave-like patterns. This shows that brain activity is not local but  
rather that activity constantly moves from one part of the brain to  
another."





Please, don't confuse the very particular neuro-philosophy with the  
much weaker assumption of computationalism.

Wave-like patterns are typically computable functions.
(I mentioned this when saying that I would say yes to a doctor only if  
he copies my glial cells at the right chemical level).
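
As a small illustration of that point (a sketch only; the grid size, wave speed and initial bump below are arbitrary), a global, wave-like pattern sweeping across a whole array can be produced by a completely ordinary discrete computation:

    # Minimal sketch (illustrative constants): a wave-like pattern generated by
    # an ordinary discrete computation, here the 1-D wave equation solved with
    # finite differences on a periodic array of 'cortex' cells.
    import math

    N, c, dt, dx, steps = 200, 1.0, 0.1, 1.0, 500
    r2 = (c * dt / dx) ** 2        # squared Courant number (stable while <= 1)

    u_prev = [math.exp(-((i - N // 2) ** 2) / 50.0) for i in range(N)]  # initial bump
    u = u_prev[:]                  # start at rest

    for _ in range(steps):
        u_next = [2 * u[i] - u_prev[i]
                  + r2 * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N])
                  for i in range(N)]
        u_prev, u = u, u_next

    # 'u' now holds activity that has swept across the whole array: global,
    # wave-like behaviour, computed by a plain (and obviously computable) rule.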


There is just no evidence for non-computable activities acting in a
relevant way in the biological organism, or actually even in the
physical universe.
You could point to the wave packet reduction, but it does not make
much sense by itself.




Not looking very charitable to the bottom-up, neuron machine view.


Ideas don't need charity, but in this case it is totally charitable,
even with neurophilosophy, given that in your example those waves
still seem neuron-driven.


Bruno









Craig





http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Stephen P. King

On 3/20/2013 9:41 PM, meekerdb wrote:
> On 3/20/2013 6:32 PM, Stephen P. King wrote:
>>
>> On 3/20/2013 6:37 PM, meekerdb wrote:
>>> On 3/20/2013 2:21 PM, Stephen P. King wrote:

 On 3/20/2013 4:07 PM, meekerdb wrote:
> On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>>
>> "We are examining the activity in the cerebral cortex /as a
>> whole/. The brain is a non-stop, always-active system. When we
>> perceive something, the information does not end up in a specific
>> /part/ of our brain. Rather, it is added to the brain's existing
>> activity. If we measure the electrochemical activity of the whole
>> cortex, we find wave-like patterns. This shows that brain
>> activity is not local but rather that activity constantly moves
>> from one part of the brain to another."
>>
>> Not looking very charitable to the bottom-up, neuron machine view.
>
> The same description would apply to a computer.  Information moves
> around and it is distributed over many transistors and magnetic
> domains.
>
> Brent
> -

 Hi,

 Let me bounce an idea off your statement here. Is there a
 constraint on the software that can run on a computer related to
 the functions that those transistors and magnetic domains can
 implement? Is this not a form of interaction between hardware and
 software?
>>>
>>> Sure, a program to calculate f(x) has to be compiled differently
>>> depending on the computer.  Some early computers even used trinary
>>> instead of binary.  But assuming it's a general-purpose computer, then
>>> it is always possible to translate a program from one computer to
>>> another so that they calculate the same function (except for
>>> possible space limits).
>>>
>>> Brent
>>
>> OK, but let's zoom in a bit more on this. How much can the
>> translation (from one program to another so that they can calculate
>> the same (identity is assumed here!) function) exactly cancel out the
>> constraint that one physical machine places on logical functions that
>> could run on it? Surely we can see that if we consider an infinite
>> number of physical machines to cover the variation of physical
>> systems we can show that the computation of the function becomes
>> "independent of physics", but that is an 'in principle' proof of the
>> Universality of computations.
>> Bruno rightly points out that this Universality can be used to
>> argue that computer programs have nothing at all to do with the
>> physical world and he uses that argument to good effect. I don't wish
>> to cancel out the physical worlds. I am asking a different question.
>> How much does a given physical computer constrain the class of all
>> possible computer programs? Are physical computers truly "universal
>> Turing Machines"? No! They do not have an infinite tape, nor precise
>> read/write heads. They are subject to noise and error.
>
> I agree, but the same constraints would also apply to brains.

YES!! So, can we discuss this?


-- 
Onward!

Stephen





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread meekerdb

On 3/20/2013 6:32 PM, Stephen P. King wrote:


On 3/20/2013 6:37 PM, meekerdb wrote:

On 3/20/2013 2:21 PM, Stephen P. King wrote:


On 3/20/2013 4:07 PM, meekerdb wrote:

On 3/20/2013 11:16 AM, Craig Weinberg wrote:

http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex /as a whole/. The brain is a 
non-stop, always-active system. When we perceive something, the information does not 
end up in a specific /part/ of our brain. Rather, it is added to the brain's 
existing activity. If we measure the electrochemical activity of the whole cortex, 
we find wave-like patterns. This shows that brain activity is not local but rather 
that activity constantly moves from one part of the brain to another."


Not looking very charitable to the bottom-up, neuron machine view.


The same description would apply to a computer.  Information moves around and it is 
distributed over many transistors and magnetic domains.


Brent
-


Hi,

Let me bounce an idea off your statement here. Is there a constraint on the 
software that can run on a computer related to the functions that those transistors 
and magnetic domains can implement? Is this not a form of interaction between hardware 
and software?


Sure, a program to calculate f(x) has to be compiled differently depending on the 
computer.  Some early computers even used trinary instead of binary.  But assuming it's a 
general-purpose computer, then it is always possible to translate a program from one 
computer to another so that they calculate the same function (except for possible space 
limits).


Brent


OK, but let's zoom in a bit more on this. How much can the translation (from one 
program to another so that they can calculate the same (identity is assumed here!) 
function) exactly cancel out the constraint that one physical machine places on logical 
functions that could run on it? Surely we can see that if we consider an infinite number 
of physical machines to cover the variation of physical systems we can show that the 
computation of the function becomes "independent of physics", but that is an 'in 
principle' proof of the Universality of computations.
Bruno rightly points out that this Universality can be used to argue that computer 
programs have nothing at all to do with the physical world and he uses that argument to 
good effect. I don't wish to cancel out the physical worlds. I am asking a different 
question. How much does a given physical computer constrain the class of all possible 
computer programs? Are physical computers truly "universal Turing Machines"? No! They do 
not have an infinite tape, nor precise read/write heads. They are subject to noise and error.


I agree, but the same constraints would also apply to brains.

Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Craig Weinberg


On Wednesday, March 20, 2013 6:52:20 PM UTC-4, Stephen Paul King wrote:
>
>  
> On 3/20/2013 6:20 PM, Craig Weinberg wrote:
>  
>
>
> On Wednesday, March 20, 2013 5:30:58 PM UTC-4, Stephen Paul King wrote: 
>>
>>  
>> On 3/20/2013 4:29 PM, Craig Weinberg wrote:
>>  
>>
>>
>> On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote: 
>>>
>>>  On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>>>  
>>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>>>
>>> "We are examining the activity in the cerebral cortex *as a whole*. The 
>>> brain is a non-stop, always-active system. When we perceive something, the 
>>> information does not end up in a specific *part* of our brain. Rather, 
>>> it is added to the brain's existing activity. If we measure the 
>>> electrochemical activity of the whole cortex, we find wave-like patterns. 
>>> This shows that brain activity is not local but rather that activity 
>>> constantly moves from one part of the brain to another." 
>>>
>>> Not looking very charitable to the bottom-up, neuron machine view.
>>>
>>>
>>> The same description would apply to a computer.  Information moves 
>>> around and it is distributed over many transistors and magnetic domains.
>>>  
>>
>> But it is eventually stored in particular addressed memory locations. It 
>> is not part of a continuous wave of activity of the entire computer. 
>>
>> Craig 
>>  
>>  Hi Craig, 
>>
>>What difference does that make?
>>  
>
>
> Hi Stephen,
>
> The difference it makes to me is that it is yet another example that the 
> mechanistic view of the brain is increasingly unworkable, and that top-down 
> organic qualities of consciousness are increasingly supported. The 
> brain is not a collection of neurons so much as neurons are fragments of a 
> nervous system.
>
>  
>  Hi Craig,
>
> Yes, the cogwork model of the world and its constituent subsets is a 
> rotting corpse, but there is still not a wide consensus on an alternative. 
> What we are seeing is a knock-down, drag-out fight for the next paradigm.
>

I agree, and I don't pretend to have a handle on the specifics of the next 
paradigm in neuroscience, but I think we have some of the broad strokes. 
Still, on this list, the rotting corpse is still strolling around... :)

Craig 


>
> -- 
> Onward!
>
> Stephen
>
>  





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Stephen P. King

On 3/20/2013 6:37 PM, meekerdb wrote:
> On 3/20/2013 2:21 PM, Stephen P. King wrote:
>>
>> On 3/20/2013 4:07 PM, meekerdb wrote:
>>> On 3/20/2013 11:16 AM, Craig Weinberg wrote:
 http://www.sciencedaily.com/releases/2013/03/130320115111.htm

 "We are examining the activity in the cerebral cortex /as a
 whole/. The brain is a non-stop, always-active system. When we
 perceive something, the information does not end up in a specific
 /part/ of our brain. Rather, it is added to the brain's existing
 activity. If we measure the electrochemical activity of the whole
 cortex, we find wave-like patterns. This shows that brain activity
 is not local but rather that activity constantly moves from one
 part of the brain to another."

 Not looking very charitable to the bottom-up, neuron machine view.
>>>
>>> The same description would apply to a computer.  Information moves
>>> around and it is distributed over many transistors and magnetic domains.
>>>
>>> Brent
>>> -
>>
>> Hi,
>>
>> Let me bounce an idea off your statement here. Is there a
>> constraint on the software that can run on a computer related to the
>> functions that those transistors and magnetic domains can implement?
>> Is this not a form of interaction between hardware and software?
>
> Sure, a program to calculate f(x) has to be compiled differently
> depending on the computer.  Some early computers even used trinary
> instead of binary.  But assuming it's a general-purpose computer, then it
> is always possible to translate a program from one computer to another
> so that they calculate the same function (except for possible space
> limits).
>
> Brent

OK, but let's zoom in a bit more on this. How much can the
translation (from one program to another so that they can calculate the
same (identity is assumed here!) function) exactly cancel out the
constraint that one physical machine places on logical functions that
could run on it? Surely we can see that if we consider an infinite
number of physical machines to cover the variation of physical systems
we can show that the computation of the function becomes "independent of
physics", but that is an 'in principle' proof of the Universality of
computations.
Bruno rightly points out that this Universality can be used to argue
that computer programs have nothing at all to do with the physical world
and he uses that argument to good effect. I don't wish to cancel out
the physical worlds. I am asking a different question. How much does a
given physical computer constrain the class of all possible computer
programs? Are physical computers truly "universal Turing Machines"? No!
They do not have an infinite tape, nor precise read/write heads. They are
subject to noise and error.
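
One way to picture the gap (a small sketch; the 'machine' and the tape sizes below are invented for the example): the abstract model gets a tape that can always grow, while the physical realization has a fixed number of cells and simply fails beyond them:

    # Minimal sketch (illustrative sizes): the abstract machine's tape can
    # always grow, while a 'physical' machine has a fixed number of cells.

    def write_ones(n, tape):
        """Stand-in for a long computation: write n ones starting at cell 0."""
        for i in range(n):
            tape[i] = 1
        return tape

    ideal_tape = {}                  # unbounded in principle (any integer cell)
    physical_tape = [0] * 1000       # fixed in advance, like real hardware

    write_ones(500, physical_tape)   # fits within the hardware: fine
    write_ones(100_000, ideal_tape)  # fine in the abstract model
    try:
        write_ones(100_000, physical_tape)
    except IndexError:
        print("out of tape: the physical computer is universal only up to "
              "its finite memory, finite time and nonzero error rate")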

-- 
Onward!

Stephen





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Stephen P. King

On 3/20/2013 6:20 PM, Craig Weinberg wrote:
>
>
> On Wednesday, March 20, 2013 5:30:58 PM UTC-4, Stephen Paul King wrote:
>
>
> On 3/20/2013 4:29 PM, Craig Weinberg wrote:
>>
>>
>> On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote:
>>
>> On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm 
>>> 
>>>
>>> "We are examining the activity in the cerebral cortex /as a
>>> whole/. The brain is a non-stop, always-active system. When
>>> we perceive something, the information does not end up in a
>>> specific /part/ of our brain. Rather, it is added to the
>>> brain's existing activity. If we measure the electrochemical
>>> activity of the whole cortex, we find wave-like patterns.
>>> This shows that brain activity is not local but rather that
>>> activity constantly moves from one part of the brain to
>>> another."
>>>
>>> Not looking very charitable to the bottom-up, neuron machine
>>> view.
>>
>> The same description would apply to a computer.  Information
>> moves around and it is distributed over many transistors and
>> magnetic domains.
>>
>>
>> But it is eventually stored in particular addressed memory
>> locations. It is not part of a continuous wave of activity of the
>> entire computer.
>>
>> Craig
>>
> Hi Craig,
>
>What difference does that make?
>
>
>
> Hi Stephen,
>
> The difference it makes to me is that it is yet another example that the
> mechanistic view of the brain is increasingly unworkable, and
> that top-down organic qualities of consciousness are increasingly
> supported. The brain is not a collection of neurons so much as neurons
> are fragments of a nervous system.
>
>
Hi Craig,

Yes, the cogwork model of the world and its constituent subsets is a
rotting corpse, but there is still not a wide consensus on an
alternative. What we are seeing is a knock-down, drag-out fight for the
next paradigm.


-- 
Onward!

Stephen





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread meekerdb

On 3/20/2013 3:31 PM, Craig Weinberg wrote:



On Wednesday, March 20, 2013 6:11:18 PM UTC-4, Brent wrote:

On 3/20/2013 1:29 PM, Craig Weinberg wrote:



On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote:

On 3/20/2013 11:16 AM, Craig Weinberg wrote:

http://www.sciencedaily.com/releases/2013/03/130320115111.htm


"We are examining the activity in the cerebral cortex /as a whole/. The 
brain
is a non-stop, always-active system. When we perceive something, the
information does not end up in a specific /part/ of our brain. Rather, 
it is
added to the brain's existing activity. If we measure the 
electrochemical
activity of the whole cortex, we find wave-like patterns. This shows 
that
brain activity is not local but rather that activity constantly moves 
from one
part of the brain to another."

Not looking very charitable to the bottom-up, neuron machine view.


The same description would apply to a computer. Information moves 
around and it
is distributed over many transistors and magnetic domains.


But it is eventually stored in particular addressed memory locations. It is 
not
part of a continuous wave of activity of the entire computer.


There is nothing in the cited article to show that particular information 
is never
stored in some area.


Except for the part where they say " *When we perceive something, the information does 
not end up in a specific /part/ of our brain*.".


That refers to *when* we are perceiving it.  That doesn't show that the information gained 
from that perception is not stored in some area in memory.  Notice they refer to "when the 
subject is given a task", implying that not all information is waving around all the time.



You'll have to take it up with the people who concluded that in their study if 
you disagree.

If you looked at a computer you would also see electrical activity that was 
not
local and constantly moved from one part to another.


No, not like this. What the brain does would be as if you plugged in a flash drive and 
waves propagated the contents of the flash drive throughout the RAM, HD, and CPU, 
rolling back and forth mingled in with all of the other processes going on.


Actually that's exactly what my computer would do if I plugged in a thumb drive with a big 
complex program, e.g. a multi-player simulation game.




  And if it were perceiving its surroundings, as a Mars rover might, to 
evaluate its
next move it would obviously have to process data stored in memory as well 
as sensor
information.


It would be hard for it to process data stored in memory if it was circulating around 
the entire system, mixed with everything else.


On the contrary, it can only process data in memory by copying it to registers and the 
CPU(s).  And if it's a multi-tasking OS, it will be "mixed" time-wise with everything else.
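
A toy version of that (a sketch; the instruction set and the two tasks are invented for the illustration): data is only ever acted on after being loaded from memory into registers, and a round-robin scheduler interleaves the two tasks one instruction at a time:

    # Toy illustration: data is processed only via registers, and a tiny
    # round-robin scheduler time-slices two tasks. Everything here is invented
    # purely to illustrate the point, not to model any real CPU or OS.

    memory = {"a": 3, "b": 4, "sum": 0, "prod": 0}

    # Each task is a list of (op, args) instructions over registers r0, r1.
    task_sum  = [("load", "r0", "a"), ("load", "r1", "b"),
                 ("add", "r0", "r1"), ("store", "r0", "sum")]
    task_prod = [("load", "r0", "a"), ("load", "r1", "b"),
                 ("mul", "r0", "r1"), ("store", "r0", "prod")]

    def run_round_robin(tasks):
        # one register file per task (as a context switch would save/restore)
        contexts = [dict(r0=0, r1=0) for _ in tasks]
        pcs = [0] * len(tasks)
        while any(pc < len(t) for pc, t in zip(pcs, tasks)):
            for i, task in enumerate(tasks):   # one instruction per task per turn
                if pcs[i] >= len(task):
                    continue
                op, *args = task[pcs[i]]
                regs = contexts[i]
                if op == "load":
                    regs[args[0]] = memory[args[1]]   # memory -> register
                elif op == "store":
                    memory[args[1]] = regs[args[0]]   # register -> memory
                elif op == "add":
                    regs[args[0]] += regs[args[1]]
                elif op == "mul":
                    regs[args[0]] *= regs[args[1]]
                pcs[i] += 1

    run_round_robin([task_sum, task_prod])
    assert memory["sum"] == 7 and memory["prod"] == 12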


As time goes on, I suspect that we will see more and more of these kinds of studies. The 
brain does have mechanisms, but it is not a machine. It does compute, but it is not 
just a computer.


And I suspect you will still be saying that when Bruno's daughter marries a 
robot.

Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread meekerdb

On 3/20/2013 2:21 PM, Stephen P. King wrote:


On 3/20/2013 4:07 PM, meekerdb wrote:

On 3/20/2013 11:16 AM, Craig Weinberg wrote:

http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex /as a whole/. The brain is a 
non-stop, always-active system. When we perceive something, the information does not 
end up in a specific /part/ of our brain. Rather, it is added to the brain's existing 
activity. If we measure the electrochemical activity of the whole cortex, we find 
wave-like patterns. This shows that brain activity is not local but rather that 
activity constantly moves from one part of the brain to another."


Not looking very charitable to the bottom-up, neuron machine view.


The same description would apply to a computer.  Information moves around and it is 
distributed over many transistors and magnetic domains.


Brent
-


Hi,

Let me bounce an idea off your statement here. Is there a constraint on the software 
that can run on a computer related to the functions that those transistors and magnetic 
domains can implement? Is this not a form of interaction between hardware and software?


Sure, a program to calculate f(x) has to be compiled differently depending on the 
computer.  Some early computers even used trinary instead of binary.  But assuming it's a 
general-purpose computer, then it is always possible to translate a program from one 
computer to another so that they calculate the same function (except for possible space 
limits).
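
For example (a sketch; the two 'machines' and the compiled programs below are invented for the illustration), the same function f(x) = x*x + 1 can be compiled for a register-style machine and for a stack-style machine, and the two agree on every input:

    # A toy illustration: one function, two different 'machines'.
    # Machine A: register style; Machine B: stack style. Both invented here.

    def run_register(program, x):
        regs = {"r0": x, "r1": 0}
        for op, *args in program:
            if op == "mul":                      # mul dst, a, b
                regs[args[0]] = regs[args[1]] * regs[args[2]]
            elif op == "addi":                   # addi dst, a, const
                regs[args[0]] = regs[args[1]] + args[2]
        return regs["r0"]

    def run_stack(program, x):
        stack = [x]
        for op, *args in program:
            if op == "dup":
                stack.append(stack[-1])
            elif op == "mul":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "push":
                stack.append(args[0])
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack.pop()

    # f(x) = x*x + 1, 'compiled' for each machine
    prog_a = [("mul", "r0", "r0", "r0"), ("addi", "r0", "r0", 1)]
    prog_b = [("dup",), ("mul",), ("push", 1), ("add",)]

    assert all(run_register(prog_a, x) == run_stack(prog_b, x) == x * x + 1
               for x in range(-50, 50))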


Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Craig Weinberg


On Wednesday, March 20, 2013 6:11:18 PM UTC-4, Brent wrote:
>
>  On 3/20/2013 1:29 PM, Craig Weinberg wrote:
>  
>
>
> On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote: 
>>
>>  On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>>  
>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>>
>> "We are examining the activity in the cerebral cortex *as a whole*. The 
>> brain is a non-stop, always-active system. When we perceive something, the 
>> information does not end up in a specific *part* of our brain. Rather, 
>> it is added to the brain's existing activity. If we measure the 
>> electrochemical activity of the whole cortex, we find wave-like patterns. 
>> This shows that brain activity is not local but rather that activity 
>> constantly moves from one part of the brain to another." 
>>
>> Not looking very charitable to the bottom-up, neuron machine view.
>>
>>
>> The same description would apply to a computer.  Information moves around 
>> and it is distributed over many transistors and magnetic domains.
>>  
>
> But it is eventually stored in particular addressed memory locations. It 
> is not part of a continuous wave of activity of the entire computer.
>  
>
> There is nothing in the cited article to show that particular information 
> is never stored in some area.  
>

Except for the part where they say " *When we perceive something, the 
information does not end up in a specific part of our brain*.". You'll have 
to take it up with the people who concluded that in their study if you 
disagree.

If you looked at a computer you would also see electrical activity that was 
> not local and constantly moved from one part to another.
>

No, not like this. What the brain does would be as if you plugged in a 
flash drive and waves propagated the contents of the flash drive throughout 
the RAM, HD, and CPU, rolling back and forth mingled in with all of the 
other processes going on.
 

>   And if it were perceiving its surroundings, as a Mars rover might, to 
> evaluate its next move it would obviously have to process data stored in 
> memory as well as sensor information.
>

It would be hard for it to process data stored in memory if it was 
circulating around the entire system, mixed with everything else. As time 
goes on, I suspect that we will see more and more of these kinds of 
studies. The brain does have mechanisms, but it is not a machine. It does 
compute, but it is not just a computer.

Craig


> Brent
>  





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Craig Weinberg


On Wednesday, March 20, 2013 5:30:58 PM UTC-4, Stephen Paul King wrote:
>
>  
> On 3/20/2013 4:29 PM, Craig Weinberg wrote:
>  
>
>
> On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote: 
>>
>>  On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>>  
>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>>
>> "We are examining the activity in the cerebral cortex *as a whole*. The 
>> brain is a non-stop, always-active system. When we perceive something, the 
>> information does not end up in a specific *part* of our brain. Rather, 
>> it is added to the brain's existing activity. If we measure the 
>> electrochemical activity of the whole cortex, we find wave-like patterns. 
>> This shows that brain activity is not local but rather that activity 
>> constantly moves from one part of the brain to another." 
>>
>> Not looking very charitable to the bottom-up, neuron machine view.
>>
>>
>> The same description would apply to a computer.  Information moves around 
>> and it is distributed over many transistors and magnetic domains.
>>  
>
> But it is eventually stored in particular addressed memory locations. It 
> is not part of a continuous wave of activity of the entire computer. 
>
> Craig 
>  
>  Hi Craig, 
>
>What difference does that make?
>


Hi Stephen,

The difference it makes to me is that it is yet another example that the 
mechanistic view of the brain is increasingly unworkable, and that top-down 
organic qualities of consciousness are increasingly supported. The 
brain is not a collection of neurons so much as neurons are fragments of a 
nervous system.

Craig
 

>
> -- 
> Onward!
>
> Stephen
>
>  





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread meekerdb

On 3/20/2013 1:29 PM, Craig Weinberg wrote:



On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote:

On 3/20/2013 11:16 AM, Craig Weinberg wrote:

http://www.sciencedaily.com/releases/2013/03/130320115111.htm


"We are examining the activity in the cerebral cortex /as a whole/. The 
brain is a
non-stop, always-active system. When we perceive something, the information 
does
not end up in a specific /part/ of our brain. Rather, it is added to the 
brain's
existing activity. If we measure the electrochemical activity of the whole 
cortex,
we find wave-like patterns. This shows that brain activity is not local but 
rather
that activity constantly moves from one part of the brain to another."

Not looking very charitable to the bottom-up, neuron machine view.


The same description would apply to a computer.  Information moves around 
and it is
distributed over many transistors and magnetic domains.


But it is eventually stored in particular addressed memory locations. It is not part of 
a continuous wave of activity of the entire computer.


There is nothing in the cited article to show that particular information is never stored 
in some area.  If you looked at a computer you would also see electrical activity that was 
not local and constantly moved from one part to another.  And if it were perceiving its 
surroundings, as a Mars rover might, to evaluate its next move it would obviously have to 
process data stored in memory as well as sensor information.


Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Stephen P. King

On 3/20/2013 4:29 PM, Craig Weinberg wrote:
>
>
> On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote:
>
> On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>> 
>>
>> "We are examining the activity in the cerebral cortex /as a
>> whole/. The brain is a non-stop, always-active system. When we
>> perceive something, the information does not end up in a specific
>> /part/ of our brain. Rather, it is added to the brain's existing
>> activity. If we measure the electrochemical activity of the whole
>> cortex, we find wave-like patterns. This shows that brain
>> activity is not local but rather that activity constantly moves
>> from one part of the brain to another."
>>
>> Not looking very charitable to the bottom-up, neuron machine view.
>
> The same description would apply to a computer.  Information moves
> around and it is distributed over many transistors and magnetic
> domains.
>
>
> But it is eventually stored in particular addressed memory locations.
> It is not part of a continuous wave of activity of the entire computer.
>
> Craig
>
Hi Craig,

   What difference does that make?

-- 
Onward!

Stephen





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Stephen P. King

On 3/20/2013 4:07 PM, meekerdb wrote:
> On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>>
>> "We are examining the activity in the cerebral cortex /as a
>> whole/. The brain is a non-stop, always-active system. When we
>> perceive something, the information does not end up in a specific
>> /part/ of our brain. Rather, it is added to the brain's existing
>> activity. If we measure the electrochemical activity of the whole
>> cortex, we find wave-like patterns. This shows that brain activity is
>> not local but rather that activity constantly moves from one part of
>> the brain to another."
>>
>> Not looking very charitable to the bottom-up, neuron machine view.
>
> The same description would apply to a computer.  Information moves
> around and it is distributed over many transistors and magnetic domains.
>
> Brent
> -

Hi,

Let me bounce an idea off your statement here. Is there a constraint
on the software that can run on a computer related to the functions that
those transistors and magnetic domains can implement? Is this not a form
of interaction between hardware and software?

-- 
Onward!

Stephen





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread Craig Weinberg


On Wednesday, March 20, 2013 4:07:10 PM UTC-4, Brent wrote:
>
>  On 3/20/2013 11:16 AM, Craig Weinberg wrote:
>  
> http://www.sciencedaily.com/releases/2013/03/130320115111.htm
>
> "We are examining the activity in the cerebral cortex *as a whole*. The 
> brain is a non-stop, always-active system. When we perceive something, the 
> information does not end up in a specific *part* of our brain. Rather, it 
> is added to the brain's existing activity. If we measure the 
> electrochemical activity of the whole cortex, we find wave-like patterns. 
> This shows that brain activity is not local but rather that activity 
> constantly moves from one part of the brain to another." 
>
> Not looking very charitable to the bottom-up, neuron machine view.
>
>
> The same description would apply to a computer.  Information moves around 
> and it is distributed over many transistors and magnetic domains.
>

But it is eventually stored in particular addressed memory locations. It is 
not part of a continuous wave of activity of the entire computer. 

Craig
 

>
> Brent
>  





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-20 Thread meekerdb

On 3/20/2013 11:16 AM, Craig Weinberg wrote:

http://www.sciencedaily.com/releases/2013/03/130320115111.htm

"We are examining the activity in the cerebral cortex /as a whole/. The brain is a 
non-stop, always-active system. When we perceive something, the information does not end 
up in a specific /part/ of our brain. Rather, it is added to the brain's existing 
activity. If we measure the electrochemical activity of the whole cortex, we find 
wave-like patterns. This shows that brain activity is not local but rather that activity 
constantly moves from one part of the brain to another."


Not looking very charitable to the bottom-up, neuron machine view.


The same description would apply to a computer.  Information moves around and it is 
distributed over many transistors and magnetic domains.


Brent
