Re: bruno list

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 01:24 Stathis Papaioannou said the following:

On Sun, Aug 7, 2011 at 2:54 AM, Evgenii Rudnyi use...@rudnyi.ru wrote:


Let us forget for a moment machines and take for example some
other biological creatures, for example even insects. How would you
characterize the behaviour of insects? Is it intelligent or not?


Yes, I would say that insects have a limited intelligence. Why not?
And I imagine they also have a limited consciousness.


I am now reading Jeffrey A. Gray, Consciousness: Creeping up on the Hard 
Problem. This book is about conscious experience, not about 
intelligence. Please note that the author is a famous neuroscientist, so the 
book deals not with the philosophy of consciousness but rather with 
experimental results. Hence I would say that it is wrong to tie consciousness 
to intelligence (it is the unconscious that seems to be intelligent).


The author starts with the simple fact that the world we experience 
is constructed by our brains. It is unlikely that, for example, 
insects perceive a three-dimensional visual world as we do; the brains of 
insects seem to be too small. It is also unclear whether insects experience 
pain (and other feelings). So it is unclear to me what kind of limited 
consciousness insects could have.


Some quotes from the book are at

http://blog.rudnyi.ru/2011/08/consciousness-creeping-up-on-the-hard-problem.html

I have divided them into three sections: 1) The World is Inside the 
Head, 2) Perception, Qualia and the Hard Problem, 3) Illusions of the Will 
(Rex is going to like it).


Evgenii



--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Simulated Brains

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 05:12 Craig Weinberg said the following:

On Aug 6, 9:35 pm, meekerdb meeke...@verizon.net wrote:

On 8/6/2011 4:59 PM, Craig Weinberg wrote:


The language doesn't matter. You can see that a person is in pain
by their response to being burned, even if they have not
developed language yet.


Interesting.  Now Craig *can* infer qualia from behavior.


We can always infer qualia. It doesn't mean our inference is
correct. In this case I'm pointing out that the inference doesn't
require a learned language. My point is that math is not nature, but
nurture. If it were otherwise, I would expect the effects of alcohol
intoxication or smaller brain cortex to make an animal more logical
rather than more emotional. Emotion is more primitive than symbolic
logic.



Please note that according to experimental results (see the book 
mentioned in my previous message), pain comes after the event. For 
example, when you touch a hotplate, you take your hand back not because 
of the pain: the action actually happens unconsciously, and the conscious 
pain comes afterward.


Evgenii
http://blog.rudnyi.ru




Re: bruno list

2011-08-07 Thread Stathis Papaioannou
On Sun, Aug 7, 2011 at 11:17 AM, Craig Weinberg whatsons...@gmail.com wrote:
 On Aug 6, 7:40 pm, Stathis Papaioannou stath...@gmail.com wrote:

 When you are online you don't analyse the biochemical make-up of your
 interlocutor, but you still come to a conclusion as to whether they
 are intelligent or not. If in doubt you can always ask a series of
 questions: I'm sure you are confident in your ability to tell the
 difference between a person and a bot. But there may come a time when
 it is impossible in general to tell the difference,

 Why does that matter though? What does being able to tell the
 difference between a bot and a person have to do with a bot feeling
 like a person?

That, as I keep saying, is the question. Assume that the bot can
behave like a person but lacks consciousness. Then it would be
possible to replace parts of your brain with non-conscious components
that function otherwise normally, which would lead to you lacking some
important aspect of consciousness but being unaware of it. This
is absurd, but it is a corollary of the claim that it is possible to
separate consciousness from function. Therefore, the claim that it is
possible to separate consciousness from function is shown to be false.
If you don't accept this then you allow what you have already admitted
is an absurdity.

 and then we will
 have human level AI (soon after we will have superhuman AI and soon
 after that the human race may be supplanted, but that's a separate
 question).

 The human race has already been supplanted by a superhuman AI. It's
 called law and finance.

They are not entities and not intelligent, let alone intelligent in
the way humans are.

  I don't understand what all of this debate over how intelligence seems
  from the outside has to do with how it is experienced from the inside.
  Here's a thought experiment for the anti-zombie. If I study randomness
  and learn to impersonate machine randomness perfectly, have I become a
  machine? Have I lost sentience? Why not?

 Intelligence can fake non-intelligence, but non-intelligence can't
 fake intelligence.

 But intelligence can fake intelligence using non-intelligence. A
 computer isn't faking intelligence, it's just spinning a quantitative
 instruction set through semiconductors. It's only us who think it's
 intelligent. In fact it is intelligent, as a long polymer molecule is
 intelligent, but it is not conscious as an animal is conscious.

It seems that you are conflating intelligence with consciousness.
Intelligence is what is observed, while consciousness relates to the
internal experience. A zombie is intelligent but not conscious.


-- 
Stathis Papaioannou




Re: COMP refutation paper - finally out

2011-08-07 Thread Bruno Marchal


On 06 Aug 2011, at 23:14, benjayk wrote:



Frankly I am a bit tired of this debate (to some extent debating in  
general),
so I will not respond in detail any time soon (if at all). Don't  
take it as
total disinterest, I found our exchange very interesting, I am just  
not in

the mood at the moment to discuss complex topics at length.


There is no problem, Ben. I hope you will not mind if I comment your  
post.






Bruno Marchal wrote:


Then computer science provides a theory of consciousness, and  
explains how

consciousness emerges from numbers,
How can consciousness be shown to emerge from numbers when it is  
already

assumed at the start?


In science we assume at some meta-level what we try to explain at some  
level. We have to assume the existence of the moon to try theories  
about its origin.




It's a bit like assuming A, and because B->A is true if A is true,  
we can
claim for any B that B is the reason that A is true.


This confirms you are confusing two levels. The level of deduction in  
a theory, and the level of implication in formal logic.






Consciousness is simply a given. Every explanation of it will just  
express
what it is and will not determine its origin, as its origin would  
need to be
independent of it / prior to it, but could never be known to be  
prior to it,

as this would already require consciousness.


In the comp theory it can be explained why machines take consciousness  
as a given, and that from their first person points of view, they are  
completely correct about this. Yet, consciousness is not assumed as  
something primitive in the TOE itself. You can define it by the  
number's first person belief in some reality, like you can explain the  
belief in matter by a sort of border of that belief. From this the  
math explains the qualia and the quanta as completely as any possible  
theory can ever explain (perhaps not correctly, because comp might be  
false, but then comp is refutable/scientific).






The only question is what systems are able to express that  
consciousness

exists,


And the comp answer is machine, or number, or universal numbers, or  
Löbian universal numbers.






and what place consciousness has in those systems.


And the comp answer is monumental. Universal number consciousness is  
at the origin of the laws of physics, even if it looks like a  
selection/projection in a richer arithmetical reality. This really  
needs to be understood by yourself.  I guess it makes no sense without  
understanding, because it *is* counterintuitive.

We might come back on this once you are in the mood again.







Bruno Marchal wrote:






Bruno Marchal wrote:





Bruno Marchal wrote:


And in that sense, comp provides, I think, the first coherent
picture of
almost everything, from God (oops!) to qualia, quanta included,  
and

this by assuming only seven arithmetical axioms.

I tend to agree. But its coherent picture of everything includes
the
possibility of infinitely many more powerful theories.  
Theoretically

it may
be possible to represent every such theory with arithmetic - but
then we can
represent every arithmetical statement with just one symbol and an
encoding
scheme, still we wouldn't call '.' a theory of everything.
So it's not THE theory of everything, but *a* theory of  
everything.
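The "one symbol and an encoding scheme" remark above can be made concrete. A minimal sketch, with an alphabet and function names that are purely illustrative (nothing here is from the thread): bijective base-b numeration packs any statement over a finite alphabet into a run of the single symbol '.', and recovers it again, yet all the expressive power clearly sits in the coding scheme rather than in the one-symbol "language":

```python
# Illustrative sketch: encode any statement over a toy alphabet as a unary
# string of '.' via bijective base-b numeration, and decode it back.
# The alphabet and names are assumptions for the example, not from the thread.

ALPHABET = "0123456789+*=()xyzEAv~ "  # toy symbols for arithmetical statements

def encode_unary(statement: str) -> str:
    """Encode as '.' repeated n times, where n is the bijective base-b value."""
    b = len(ALPHABET)
    n = 0
    for ch in statement:
        n = n * b + (ALPHABET.index(ch) + 1)  # digits 1..b keep leading symbols
    return "." * n

def decode_unary(dots: str) -> str:
    """Invert encode_unary by peeling off bijective base-b digits."""
    b = len(ALPHABET)
    n = len(dots)
    out = []
    while n > 0:
        n, d = divmod(n - 1, b)  # undo the +1 digit shift
        out.append(ALPHABET[d])
    return "".join(reversed(out))

stmt = "x+z=y"
assert decode_unary(encode_unary(stmt)) == stmt  # round trip succeeds
```

The round trip works, but the unary string for even a five-character statement runs to millions of dots, which is one way of seeing why a one-symbol representation is not itself a theory of everything.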


Not really. Once you assume comp, the numbers (or equivalent) are
enough, and very simple (despite mysterious).

They are enough, but they are not the only way to make a theory of
everything. As you say, we can use everything as powerful as
numbers, so
there is an infinity of different formulations of theories of
everything.


For any theory, you have infinities of equivalent formulations. This
is not a defect. What is amazing is that they can be very different
(like cellular automata, LISP, addition+multiplication on natural
numbers, quantum topology, billiard balls, etc.).
I agree. It's just that in my view the fact that they can be very  
different

makes them ultimately different theories, only theories about the same
thing.


And proving the same things, with equivalent explanation.





Different theories may explain the same thing, but in practice, they
may vary in their efficiency to explain it, so it makes sense to  
treat them

as different theories.


But the goal here is a conceptual understanding, not direct practical  
application.




In theory, even one symbol can represent every statement in any  
language,


That does not make sense for me (or it is trivial).




but still it's not as powerful as the language it represents.

Similarly, if you use just natural numbers as a TOE, you won't be  
able to

directly express important concepts like dimensionality.



Why? If you prove this, I abandon comp immediately. From comp you can  
derive the whole of physics, and this should be easy to understand if  
you get the UDA1-7. Comp remains incomplete on God, consciousness and  
souls, and can explain why, but physics, including 

Re: bruno list

2011-08-07 Thread Bruno Marchal


On 06 Aug 2011, at 23:33, meekerdb wrote:


On 8/6/2011 1:25 PM, John Mikes wrote:




On Sat, Aug 6, 2011 at 2:30 PM, meekerdb meeke...@verizon.net  
wrote:

On 8/6/2011 8:35 AM, John Mikes wrote:
Stasthis,

let me barge in with one fundamental - not dispersing my reply into  
those (many and long) details:


As I read your comments/replies (and I agree with your position  
within the limits I want to expose here), I have the feeling that  
you agree with the rest of the combatants in considering 'the  
brain', our idea about 'intelligence' and 'consciousness' as  
complete and total. I argue that it is not. Upon historic  
reminiscence all such inventory about these (and many more)  
concepts has been growing by addition and by phasing out imaginary  
'content' as we 'learn' - an ongoing process that does not seem  
to have reached the ultimate end/completion.


So you are right in considering whatever we knew yesterday (my  
substitution for today) but not including what we may know  
tomorrow. Drawing conclusions upon incomplete inventory  does not  
seem acceptable.


Regards
John Mikes

If we wait until we know everything, we'll never draw any  
conclusion; which is OK for science.  But for engineering we need  
to make decisions.


Brent

Brent: this is fine, we just should not mix up engineering with  
science: My science is the agnostic decision that we CANNOT know  
everything, and I feel comfortable with that. Also, in my past engineering  
I made decisions but never claimed they were scientific results.

Thanks for the remark

John


That's why I sometimes return to my engineering viewpoint.  It is  
easy to speculate that some overarching 'everything' construct  
includes us and our world as an infinitesimal part.


I suspect a confusion with Tegmark's kind of mathematicalism. Comp  
gives us (us = the UMs and LUMs) the big role in the emergence of  
physics; not an infinitesimal role at all.




That may satisfy some religious need for explanation; but it  
doesn't help answer any engineering questions - such as How do I  
make an intelligent Mars Rover?   And if I do will it be conscious?   
And if it is will it be ethical to send it to Mars?


There is no problem to come back on earth, especially during summer  
holiday.


Science is modest: all it says is that IF the Mars Rover is conscious,  
THEN physics has to be derived from a self-reference modality. If such  
a physics makes the electron weigh one ton, you can conclude that  
the Mars Rover is not conscious.
It might take some time before we get the existence and mass of the  
electron from addition and multiplication, but we already know how to  
proceed (time is needed to solve the related and genuine Diophantine  
equations).


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-07 Thread Bruno Marchal


On 07 Aug 2011, at 01:24, Stathis Papaioannou wrote:

On Sun, Aug 7, 2011 at 2:54 AM, Evgenii Rudnyi use...@rudnyi.ru  
wrote:



Let us forget for a moment machines and take for example some other
biological creatures, for example even insects. How would you  
characterize

the behaviour of insects? Is it intelligent or not?


Yes, I would say that insects have a limited intelligence. Why not?
And I imagine they also have a limited consciousness.



That's my personal feeling too. Recently I have moved the arachnids  
and the octopus into the Löbian (self-conscious) class of entities.


I came to that possible conclusion by looking at videos like this, and  
then making some experiments with spiders myself:


http://www.youtube.com/watch?v=iND8ucDiDSQ

It is not the movement of the spider, but its apparent induction that there  
is a spider behind the mirror, and its apparent shock at discovering  
there is none.


Take this with a grain of salt, but in matters of consciousness I  
prefer to attribute too much rather than not enough.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Simulated Brains

2011-08-07 Thread Bruno Marchal


On 06 Aug 2011, at 22:00, Stephen P. King wrote to Craig Weinberg:


Craig:
Natural numbers are an invention
of an entity that thinks,



Bruno: The existence of numbers, with the laws of addition and
multiplication, entails the existence of universal numbers. They can
introspect themselves and discover, for themselves, the numbers and
their laws. They can even discover themselves in there, and this  
on a

variety of levels.

Craig: I don't think that you can say that they do that without a
mathematician being there to watch and understand, or a silicon chip
to prove it. What numbers help you discover is the logic behind sense
and the sense behind logic, but they don't necessarily reveal a logic
independent of sense. (That may be my main point right there).


   Stephen: I think that you are both wrong! Numbers as independent  
primitives can do nothing without the schemata of ordering and  
relations that even allow the notion of introspection and  
discovery to be meaningful.


I said numbers  *with the laws of addition and multiplication*. You  
can define the order x < y by Ez(not(z = 0) & (x + z = y)).
In fact with addition and multiplication, you have the dreams and  
their coherence property leading to physics.
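Bruno's arithmetical definition of order can be checked mechanically. A minimal sketch, with the existential quantifier brute-forced over a finite range (an illustrative restriction of my own; the definition itself quantifies over all natural numbers):

```python
# x < y  iff  there exists a nonzero natural z with x + z = y.
# The search bound is an assumption to keep the witness search finite.

def less_than(x: int, y: int, bound: int = 1000) -> bool:
    """Witness search for Ez(not(z = 0) and (x + z = y)) with z in 1..bound."""
    return any(x + z == y for z in range(1, bound + 1))

assert less_than(2, 5)      # witness z = 3
assert not less_than(5, 5)  # z must be nonzero, so x < x fails
assert not less_than(7, 3)  # no natural z satisfies 7 + z = 3
```

The point of the definition is that order is not an extra primitive: it is already expressible from addition alone.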




OTOH, requiring the physical presence of a mathematician is missing  
the point that the relationships upon which 'introspection' and  
'discovery' supervene are not limited to just some particular  
kinds of things. You are missing the true part of functionalism.


OK.

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Simulated Brains

2011-08-07 Thread Craig Weinberg
On Aug 7, 2:44 am, Evgenii Rudnyi use...@rudnyi.ru wrote:
 On 07.08.2011 05:12 Craig Weinberg said the following:

  We can always infer qualia. It doesn't mean our inference is
  correct. In this case I'm pointing out that the inference doesn't
  require a learned language. My point is that math is not nature, but
  nurture. If it were otherwise, I would expect the effects of alcohol
  intoxication or smaller brain cortex to make an animal more logical
  rather than more emotional. Emotion is more primitive than symbolic
  logic.

 Please note that according to experimental results (see the book
 mentioned in my previous message), pain comes after the event. For
 example when you touch a hotplate, you take your hand back not because
 of the pain. The action actually happens unconsciously, conscious pain
 comes afterward.

The pain comes to 'us' after the event. That's not to say that the
cells of your burned finger are not in pain already. Cellular pain may
not be the same experience of course as a trillion cell human being's
version of it. We have to ramp up the significance of the sensation.
Cells die all the time, so their damage may not feel as 'expensive' to
us who, all things considered, consider our own fingers pretty highly.

As for the book, it looks good at the beginning but then seems to
creep back down away from the hard problem. Most of what you have
quoted I agree with and have considered often. Here are my answers to
his qualia questions:

p. 66 “We would need to know of qualia (in terms that link up effectively with 
the rest of natural science):
1) What are they?
Qualia are the sensorimotive set complements to electromagnetic
behavior in matter. This requires a shift in our understanding about
electromagnetism, but does not require change to any calculations or
experimental results. All that is required is the reinterpretation of
the idea of an electromagnetic 'field' in space to be a sensitivity
'range' between material phenomena.

2) How does the brain produce them?
It doesn't. The brain is made of them...on the inside. The brain is as
much produced by qualia as qualia is produced by the brain. How the
elaboration of the human brain produces specifically human qualia is a
different story. I call that effect 'cumulative entanglement' or
significance. Sort of Energy + Time - Entropy. What Fibonacci feels
like.

3) Why does the brain produce them (given that it can perform so many complex 
operations, even to the level of intentionality, without them)?
Everything has qualia. The human brain has human qualia because that
is its purpose - for the human body to create and experience significance.

4) What do they do?
They are sensorimotive. They inform and inspire. They crystallize
meaning.

5) How did they evolve?
Significance is retained over time through the propagation of pattern
within the interior of matter.

6) What survival value do they confer?
That question reveals the bias of our time. What value does survival
confer to the universe? Qualia is the reason that we see a living
organism that is doomed to suffer and die as an improvement over the
silent void of asteroid rubble. Significance.

7) Is it only brains that can produce them?”
Nope.

p. 40 “Given, that there is a scientific story that goes seamlessly from 
sensory input to behavioural output without reference to consciousness then, 
when we try to add conscious experience back into the story, we can’t find 
anything for it to do. Consciousness, it seems, has no causal powers, it 
stands outside the causal chain.”

Another one that shows how backward we are willing to bend for the
sake of the 3p occidental worldview. Just because we aren't conscious
of everything that our brain is doing doesn't mean that nothing is
aware of it. Human consciousness is an entity on the scale of the
human body. Its natural default PRIF (Perceptual Relativity Inertial
Frame) is within a range of around 0.1 Hz to 24 hours and from around
0.5 cm to 100 m (approximating, of course). When you start looking beyond
or beneath those thresholds, you are no longer looking at
consciousness, but rather subconscious awareness. Consciousness is
slow. It doesn't mean that it can't alter subconscious behaviors over
time if it wants to. It's a two way street. If we tell ourselves that
it doesn't matter if the stove is hot because we have a job as a cook,
then gradually, along with our heat receptors getting desensitized,
our conscious familiarity will override the initial reflexes and we
build a tolerance that changes the behavior of the brain. Free will is
not an illusion - that truly would have no purpose whatsoever - it's
just big and slow because we are a trillion cell animal with a crazy
complicated brain.

Craig


Re: bruno list

2011-08-07 Thread Craig Weinberg
On Aug 7, 7:42 am, Stathis Papaioannou stath...@gmail.com wrote:
 On Sun, Aug 7, 2011 at 11:17 AM, Craig Weinberg whatsons...@gmail.com wrote:
  On Aug 6, 7:40 pm, Stathis Papaioannou stath...@gmail.com wrote:

  When you are online you don't analyse the biochemical make-up of you
  interlocutor, but you still come to a conclusion as to whether they
  are intelligent or not. If in doubt you can always ask a series of
  questions: I'm sure you are confident in your ability to tell the
  difference between a person and a bot. But there may come a time when
  it is impossible in general to tell the difference,

  Why does that matter though? What does being able to tell the
  difference between a bot and a person have to do with a bot feeling
  like a person?

 That, as I keep saying, is the question. Assume that the bot can
 behave like a person but lacks consciousness.

No. You have it backwards from the start. There is no such thing as
'behaving like a person'. There is only a person interpreting
something's behavior as being like a person. There is no power
emanating from a thing that makes it person-like. If you understand
this you will know because you will see that the whole question is a
red herring. If you don't see that, you do not understand what I'm
saying.

Then it would be
 possible to replace parts of your brain with non-conscious components
 that function otherwise normally, which would lead to you lacking some
 important aspect aspect of consciousness but being unaware of it. This
 is absurd, but it is a corollary of the claim that it is possible to
 separate consciousness from function. Therefore, the claim that it is
 possible to separate consciousness from function is shown to be false.
 If you don't accept this then you allow what you have already admitted
 is an absurdity.

It's a strawman of consciousness that is employed in circular
thinking. You assume that consciousness is a behavior from the
beginning and then use that fallacy to prove that behavior can't be
separated from consciousness. Consciousness drives behavior and vice
versa, but each extends beyond the limits of the other.

  and then we will
  have human level AI (soon after we will have superhuman AI and soon
  after that the human race may be supplanted, but that's a separate
  question).

  The human race has already been supplanted by a superhuman AI. It's
  called law and finance.

 They are not entities and not intelligent, let alone intelligent in
 the way humans are.

What makes you think that law and finance are any less intelligent than
a contemporary AI program?

   I don't understand what all of this debate over how intelligence seems
   from the outside has to do with how it is experienced from the inside.
   Here's a thought experiment for the anti-zombie. If I study randomness
   and learn to impersonate machine randomness perfectly, have I become a
   machine? Have I lost sentience? Why not?

  Intelligence can fake non-intelligence, but non-intelligence can't
  fake intelligence.

  But intelligence can fake intelligence using non-intelligence. A
  computer isn't faking intelligence, it's just spinning a quantitative
  instruction set through semiconductors. It's only us who think it's
  intelligent. In fact it is intelligent, as a long polymer molecule is
  intelligent, but it is not conscious as an animal is conscious.

 It seems that you are conflating intelligence with consciousness.
 Intelligence is what is observed, while consciousness relates to the
 internal experience. A zombie is intelligent but not conscious.

When you say that intelligence can 'fake' non-intelligence, you imply
an internal experience (faking is not an external phenomenon).
Intelligence is a broad, informal term. It can mean subjectivity,
intersubjectivity, or objective behavior, although I would say not
truly objective but intersubjectively imagined as objective. I agree
that consciousness or awareness is different from any of those
definitions of intelligence which would actually be categories of
awareness. I would not say that a zombie is intelligent. Intelligence
implies understanding, which is internal. What a computer or a zombie
has is intelliform mechanism.

Craig




Re: COMP refutation paper - finally out

2011-08-07 Thread benjayk


Bruno Marchal wrote:
 
 
 On 06 Aug 2011, at 23:14, benjayk wrote:
 

 Frankly I am a bit tired of this debate (to some extent debating in  
 general),
 so I will not respond in detail any time soon (if at all). Don't  
 take it as
 total disinterest, I found our exchange very interesting, I am just  
 not in
 the mood at the moment to discuss complex topics at length.
 
 There is no problem, Ben. I hope you will not mind if I comment your  
 post.
 
Of course not, I am interested in your comments. I just wanted to make clear
why I responded briefly.


Bruno Marchal wrote:
 


 Bruno Marchal wrote:

 Then computer science provides a theory of consciousness, and  
 explains how
 consciousness emerges from numbers,
 How can consciousness be shown to emerge from numbers when it is  
 already
 assumed at the start?
 
 In science we assume at some meta-level what we try to explain at some  
 level. We have to assume the existence of the moon to try theories  
 about its origin.
That's true, but I think this is a different case. The moon seems to have a
past, so it makes sense to say it emerged from its constituent parts. In the
past, it was already there as a possibility.

But consciousness as such has no past, so what would it mean that it emerges
from numbers? Emerging is something taking place within time. Otherwise we
are just saying we can deduce it from a theory, but this in and of itself
doesn't mean that what is derived is prior to what it is derived from.

To the contrary, what we call numbers just emerges after consciousness has
been there for quite a while. You might argue that they were there before,
but I don't see any evidence for it. What the numbers describe was there
before, this is certainly true (or you could say they were implicitly
there).


Bruno Marchal wrote:
 
 It's a bit like assuming A, and because B->A is true if A is true,  
 we can
 claim for any B that B is the reason that A is true.
 
 This confirms you are confusing two levels. The level of deduction in  
 a theory, and the level of implication in formal logic.
I am not saying it's the same. I just don't see that because we can formally
deduce A from B, this means that A in reality emerges from B.



Bruno Marchal wrote:
 

 Consciousness is simply a given. Every explanation of it will just  
 express
 what it is and will not determine its origin, as its origin would  
 need to be
 independent of it / prior to it, but could never be known to be  
 prior to it,
 as this would already require consciousness.
 
 In the comp theory it can be explained why machines take consciousness  
 as a given, and that from their first person points of view, they are  
 completely correct about this.
OK.



Bruno Marchal wrote:
 
  Yet, consciousness is not assumed as  
 something primitive in the TOE itself.
But this doesn't really matter, as we already assume that it's primitive,
because we use it before we can even formulate anything. You can't just
ignore what you already know, by not making your assumptions explicit in
your theory.



Bruno Marchal wrote:
 
 



 Bruno Marchal wrote:




 Bruno Marchal wrote:



 Bruno Marchal wrote:

 And in that sense, comp provides, I think, the first coherent
 picture of
 almost everything, from God (oops!) to qualia, quanta included,  
 and
 this by assuming only seven arithmetical axioms.
 I tend to agree. But its coherent picture of everything includes
 the
 possibility of infinitely many more powerful theories.  
 Theoretically
 it may
 be possible to represent every such theory with arithmetic - but
 then we can
 represent every arithmetical statement with just one symbol and an
 encoding
 scheme, still we wouldn't call '.' a theory of everything.
 So it's not THE theory of everything, but *a* theory of  
 everything.

 Not really. Once you assume comp, the numbers (or equivalent) are
 enough, and very simple (despite mysterious).
 They are enough, but they are not the only way to make a theory of
 everything. As you say, we can use everything as powerful as
 numbers, so
 there is an infinity of different formulations of theories of
 everything.

 For any theory, you have infinities of equivalent formulations. This
 is not a defect. What is amazing is that they can be very different
 (like cellular automata, LISP, addition+multiplication on natural
 numbers, quantum topology, billiard balls, etc.
 I agree. It's just that in my view the fact that they can be very  
 different
 makes them ultimately different theories, only theories about the same
 thing.
 
 And proving the same things, with equivalent explanation.
Sure, we can write programs that are indistinguishable (to the user) in
different programming languages as well. Still, they are different programming
languages, and they are only equivalent with respect to what they can
compute, not at all in practice.


Bruno Marchal wrote:
 
 Different theories may explain the same thing, but in practice, they
 may vary in their efficiency to explain it, so it 

Re: Simulated Brains

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 14:51 Craig Weinberg said the following:

On Aug 7, 2:44 am, Evgenii Rudnyiuse...@rudnyi.ru  wrote:

On 07.08.2011 05:12 Craig Weinberg said the following:



We can always infer qualia. It doesn't mean our inference is
correct. In this case I'm pointing out that the inference
doesn't require a learned language. My point is that math is not
nature, but nurture. If it were otherwise, I would expect the
effects of alcohol intoxication or smaller brain cortex to make
an animal more logical rather than more emotional. Emotion is
more primitive than symbolic logic.


Please note that according to experimental results (see the book
mentioned in my previous message), pain comes after the event. For
example when you touch a hotplate, you take your hand back not
because of the pain. The action actually happens unconsciously,
conscious pain comes afterward.


The pain comes to 'us' after the event. That's not to say that the
cells of your burned finger are not in pain already. Cellular pain
may not be the same experience of course as a trillion cell human
being's version of it. We have to ramp up the significance of the
sensation. Cells die all the time, so their damage may not feel as
'expensive' to us who, all things considered, consider our own
fingers pretty highly.


Whether individual cells can experience pain is, I guess, an open 
question. It seems that there are no experimental results to this end.


What I meant was that the action to remove the hand is done 
unconsciously. I am not sure that pain in the cells is the reason; in my 
view it is rather that sensory neurons send signals to the brain, which then 
causes the action. All this, however, happens unconsciously, and pain as we 
feel it comes after the action.



As far as the book, it looks good at the beginning but then seems
like it creeps back down away from the hard problem. Most of what you


The book considers experimental results, and the Hard Problem is 
formulated in the context of experimental research. The book actually 
offers no solution; its goal is rather to show the problem. To this end, 
the author first tries to employ normal scientific knowledge for as long as 
he can. This is why I like it. Yet the book states pretty clearly that 
the Hard Problem (qualia) is right now incompatible with contemporary 
scientific knowledge.



have quoted I agree with and have considered often. Here's my answers
to his qualia questions:


...

Thanks. The problem is that you use your own language to model the world, 
and it seems to be far from what I am used to, hence no comments 
from my side here.


Evgenii
http://blog.rudnyi.ru

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Simulated Brains

2011-08-07 Thread Craig Weinberg
On Aug 7, 10:31 am, Evgenii Rudnyi use...@rudnyi.ru wrote:
 On 07.08.2011 14:51 Craig Weinberg said the following:

  The pain comes to 'us' after the event. That's not to say that the
  cells of your burned finger are not in pain already. Cellular pain
  may not be the same experience of course as a trillion cell human
  being's version of it. We have to ramp up the significance of the
  sensation. Cells die all the time, so their damage may not feel as
  'expensive' to us who, all things considered, consider our own
  fingers pretty highly.

 Whether individual cells can experience pain is, I guess, an open
 question. It seems that there are no experimental results to this end.

Are we not composed of individual cells? If groups of cells can
experience pain, it seems at least as likely that the pain experience
is in some way an aggregate of fractional pain experiences rather than
emerging spontaneously out of a complete absence of awareness,
especially when there is no biological advantage for any kind of
experience to exist at all.

 What I meant was that the action to remove the hand is done
 unconsciously. I am not sure that pain in cells is the reason, in my
 view rather sensor neurons give signals to the brain and then it causes
 the action. All this however happens unconsciously and pain as we feel
 it comes after the action.

I understand the neuro-mechanical view, I just think that it's a
prejudiced interpretation of the data. The signal that the sensory
neurons give to the brain is none other than pain. Sure, it may get
amplified as the brain experiences it, as it invites cognitive
associations and memories, rattles around in the executive processing
senate, etc., but there is no reason to assume that the primary input
of the sense organ is anything less than sense itself. What is a
'signal' made of? On the outside it's orderly changes we can observe
occurring in matter; on the inside, in our own case, we can experience
changes in what we feel and think. They are the same phenomenon, only
seen from two different (opposite) perspectives. The experience of
pain spreads through the tissues of the body like a crowd wave,
including the nervous system, which is a kind of expressway for
politicizing the experiences of the body and through the body.

  As far as the book, it looks good at the beginning but then seems
  like it creeps back down away from the hard problem. Most of what you

 The book considers experimental results and the Hard Problem is
 formulated in the context of experimental research. The book actually
 offers no solution, its goal rather to show the problem. To this end,
 the authors first tries to employ normal scientific knowledge as long as
 he can. This is why I like it. Yet, the book states pretty clear that
 the Hard Problem (Qualia) is right now incompatible with contemporary
 scientific knowledge.

That's why I like it too. I see my ideas as picking up where he leaves
off, and I think they may possibly solve the problem by showing it
in a new light, stripped of its original assumptions.

  have quoted I agree with and have considered often. Here's my answers
  to his qualia questions:

 ...

 Thanks. The problem is that you use your own language to model the world
 and it seems to be far away from that I get used to, hence no comments
 from my side here.

It may be a problem for others, but I think that it is the truth
nevertheless. I don't think that the truth has to fit into what people
have gotten used to.

Craig




Re: bruno list

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 14:14 Bruno Marchal said the following:


On 07 Aug 2011, at 01:24, Stathis Papaioannou wrote:


On Sun, Aug 7, 2011 at 2:54 AM, Evgenii Rudnyi use...@rudnyi.ru
wrote:


Let us forget for a moment machines and take for example some
other biological creatures, for example even insects. How would
you characterize the behaviour of insects? Is it intelligent or
not?


Yes, I would say that insects have a limited intelligence. Why
not? And I imagine they also have a limited consciousness.



That's my personal feeling too. Recently I have upgraded the arachnids
and octopuses into the Löbian (self-conscious) class of entities.

I came to that possible conclusion by looking at videos like this one, and
 then making some experiments with spiders myself:

http://www.youtube.com/watch?v=iND8ucDiDSQ

It is not the movement of the spider, but its apparent induction that there
is a spider behind the mirror, and its apparent shock at discovering
there is none.

Take this with a grain of salt, but in matters of consciousness I
prefer to attribute too much than not enough.


On the other hand, you could take some Khepera robots, for example

http://youtu.be/t4elmvcMpBQ

and then use a mirror as well. That would be an interesting experiment too.


The question here is what "limited consciousness" could mean in the 
case of a spider.


Jeffrey Gray foresees a role for consciousness as a general-purpose 
comparator system for late error detection. This presumably gives us an 
opportunity to reprogram ourselves by means of conscious experience. 
Spiders, on the other hand, seem to be hardwired, so it is unclear what 
advantage conscious experience could give them.


Evgenii
http://blog.rudnyi.ru




Re: Simulated Brains

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 17:12 Craig Weinberg said the following:

On Aug 7, 10:31 am, Evgenii Rudnyiuse...@rudnyi.ru  wrote:

On 07.08.2011 14:51 Craig Weinberg said the following:



The pain comes to 'us' after the event. That's not to say that
the cells of your burned finger are not in pain already. Cellular
pain may not be the same experience of course as a trillion cell
human being's version of it. We have to ramp up the significance
of the sensation. Cells die all the time, so their damage may not
feel as 'expensive' to us who, all things considered, consider
our own fingers pretty highly.


Whether individual cells can experience pain is, I guess, an open
question. It seems that there are no experimental results to this
end.


Are we not composed of individual cells? If groups of cells can
experience pain, it seems at least as likely that the pain
experience is in some way an aggregate of fractional pain experiences
rather than emerging spontaneously out of a complete absence of
awareness, especially when there is no biological advantage for any
kind of experience to exist at all.


It seems that pain is some brain function, see for example

http://www.thenakedscientists.com/HTML/content/interviews/interview/651/

I have just searched in Google

people that do not experience pain

and this was the first link.


What I meant was that the action to remove the hand is done
unconsciously. I am not sure that pain in cells is the reason, in
my view rather sensor neurons give signals to the brain and then it
causes the action. All this however happens unconsciously and pain
as we feel it comes after the action.


I understand the neuro-mechanical view, I just think that it's a
prejudiced interpretation of the data. The signal that the sensor
neurons give to the brain are none other than pain. Sure, it may get
amplified as the brain experiences it, as it invited cognitive
associations and memories, rattles around in the executive
processing senate, etc., but there is no reason to assume that the
primary input of the sense organ is anything less than sense itself.
What is a 'signal' made of? On the outside it's orderly changes we
can observe occurring in matter, on the inside, in our own case, we
can experience changes in what we feel and think. They are the same
phenomenon, only seen from two different (opposite) perspectives. The
experience of pain spread through the tissues of the body like a
crowd wave, including the nervous system, which is a kind of
expressway for politicizing the experiences of the body and through
the body.


A signal from a neuron is electrical in nature (see neuron spikes). 
Experiments show that the brain operates at about 10 ms, and this could be a 
typical reaction time. Pain (and conscious experience in general), however, 
requires about 200 ms. So, as I have said, first the action is 
made unconsciously and only after that comes pain. Hence pain could not 
be the cause of the action.
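
The claimed ordering can be shown as a toy timeline. The 10 ms and 200 ms figures are the ones quoted in the discussion; this only illustrates the argument, it is not a neural model:

```python
# Toy timeline for the reflex-before-pain ordering described above.
# The latencies are the figures quoted in the discussion, not measurements.

REFLEX_LATENCY_MS = 10       # unconscious sensorimotor loop
AWARENESS_LATENCY_MS = 200   # conscious experience of pain

def timeline(stimulus_ms=0):
    """Return (time_ms, event) pairs in temporal order."""
    return sorted([
        (stimulus_ms, "hand touches hotplate"),
        (stimulus_ms + REFLEX_LATENCY_MS, "hand withdrawn (unconscious reflex)"),
        (stimulus_ms + AWARENESS_LATENCY_MS, "pain consciously felt"),
    ])

for t, event in timeline():
    print(f"{t:4d} ms  {event}")
```

On this picture the withdrawal precedes the felt pain by roughly 190 ms, which is the whole point of the argument.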



As far as the book, it looks good at the beginning but then
seems like it creeps back down away from the hard problem. Most
of what you


The book considers experimental results and the Hard Problem is
formulated in the context of experimental research. The book
actually offers no solution, its goal rather to show the problem.
To this end, the authors first tries to employ normal scientific
knowledge as long as he can. This is why I like it. Yet, the book
states pretty clear that the Hard Problem (Qualia) is right now
incompatible with contemporary scientific knowledge.


That's why I like it too. I see my ideas as picking up where he
leaves off, and I think that it possibly may solve the problem by
showing it in a new light, stripped of it's original assumptions.



have quoted I agree with and have considered often. Here's my
answers to his qualia questions:


...

Thanks. The problem is that you use your own language to model the
world and it seems to be far away from that I get used to, hence no
comments from my side here.


It may be a problem for others, but I think that it is the truth
nevertheless. I don't think that the truth has to fit into what
people have gotten used to.


Sure, I completely agree.

I did not mean that your theory is wrong. I just wanted to say that 
when you sell your theory to other people, it might be good to start by 
speaking their language. Well, sales is a hard problem in its own right.


Evgenii
http://blog.rudnyi.ru




Re: COMP refutation paper - finally out

2011-08-07 Thread Bruno Marchal


On 07 Aug 2011, at 15:50, benjayk wrote:




Bruno Marchal wrote:



On 06 Aug 2011, at 23:14, benjayk wrote:



Frankly I am a bit tired of this debate (to some extent debating in
general),
so I will not respond in detail any time soon (if at all). Don't
take it as
total disinterest, I found our exchange very interesting, I am just
not in
the mood at the moment to discuss complex topics at length.


There is no problem, Ben. I hope you will not mind if I comment your
post.

Of course not, I am interested in your comments. I just wanted to  
make clear

why I responded briefly.


OK. Thanks for letting me know. I have to be brief also, because I am  
overwhelmed by summer work. I enjoy very much your attempt to  
understand what I try to convey.



Bruno Marchal wrote:





Bruno Marchal wrote:


Then computer science provides a theory of consciousness, and
explains how
consciousness emerges from numbers,

How can consciousness be shown to emerge from numbers when it is
already
assumed at the start?


In science we assume at some meta-level what we try to explain at  
some

level. We have to assume the existence of the moon to try theories
about its origin.
That's true, but I think this is a different case. The moon seems to  
have a
past, so it makes sense to say it emerged from its constituent  
parts. In the

past, it was already there as a possibility.


OK, I should say that it emerges arithmetically. I thought you already  
understood that time is not primitive at all. More on this  
below.






But consciousness as such has no past, so what would it mean that it  
emerges
from numbers? Emerging is something taking place within time.  
Otherwise we
are just saying we can deduce it from a theory, but this in and of  
itself

doesn't mean that what is derived is prior to what it is derived from.

To the contrary, what we call numbers just emerges after  
consciousness has
been there for quite a while. You might argue that they were there  
before,
but I don't see any evidence for it. What the numbers describe was  
there

before, this is certainly true (or you could say they were implicitly
there).


OK. That would be a real disagreement. I just assume that the  
arithmetical relations are true independently of anything. For example,  
I consider the truth of the Goldbach conjecture as already settled in  
Platonia. Either it is true that all even numbers bigger than 2 are the  
sum of two primes, or it is not true, and this independently of  
any consideration of time, space, humans, etc.
Humans can easily verify this for small even numbers: 4 = 2+2, 6 =  
3+3, 8 = 3+5, etc. But we have not found a proof of this, although  
many people have searched for it.
I can see that the expression of such a statement needs humans or some  
thinking entity, but I don't see how the fact itself would depend on  
anything (but the definitions).
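
The small verifications mentioned here are easy to mechanize. A minimal Python sketch (the function names are mine; checking small cases like this of course proves nothing about the conjecture in general):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# The conjecture holds for every even number up to 1000:
assert all(goldbach_pair(n) for n in range(4, 1001, 2))
print(goldbach_pair(4), goldbach_pair(6), goldbach_pair(8))
# → (2, 2) (3, 3) (3, 5)
```

Either every even number bigger than 2 passes such a check or some number eventually fails it; which of the two holds is, on the view expressed here, a fact independent of whether anyone runs the program.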








Bruno Marchal wrote:



It's a bit like assuming A, and because B-A is true if A is true,
we can
claim for any B that B is the reason that A true.


This confirms you are confusing two levels. The level of deduction in
a theory, and the level of implication in formal logic.
I am not saying it's the same. I just don't see that because we can  
formally

deduce A from B, this mean that A in reality emerges from B.


What I say is more subtle. I will make an attempt to be clearer below.






Bruno Marchal wrote:




Consciousness is simply a given. Every explanation of it will just
express
what it is and will not determine its origin, as its origin would
need to be
independent of it / prior to it, but could never be known to be
prior to it,
as this would already require consciousness.


In the comp theory it can be explained why machine takes  
consciousness

as a given, and that from their first person points of view, they are
completely correct about this.

OK.



Bruno Marchal wrote:


Yet, consciousness is not assumed as
something primitive in the TOE itself.
But this doesn't really matter, as we already assume that it's  
primitive,

because we use it before we can even formulate anything.


We already assumed it exists, sure. But why would that imply that it  
exists primitively? It exists fundamentally: in the sense that once you  
have all the true arithmetical relations, consciousness exists. So  
consciousness is not something which appears or emerges in time or  
space, but it is not primitive, in the sense that its existence is a  
logical consequence of arithmetical truth (provably so when we assume  
comp and accept some definitions).


Sometimes I sketch this in the following manner. The arrows are 
logico-arithmetical deductions:


NUMBERS => CONSCIOUSNESS => PHYSICAL REALITY => HUMANS => HUMANS' NUMBERS




You can't just
ignore what you already know, by not making your assumptions  
explicit in

your theory.


It is just not an assumption in the theory, but a derived existence.  
With comp, consciousness is implicit in the arithmetical truth.





Re: Simulated Brains

2011-08-07 Thread Bruno Marchal


On 07 Aug 2011, at 16:31, Evgenii Rudnyi wrote:


On 07.08.2011 14:51 Craig Weinberg said the following:

On Aug 7, 2:44 am, Evgenii Rudnyiuse...@rudnyi.ru  wrote:

On 07.08.2011 05:12 Craig Weinberg said the following:



We can always infer qualia. It doesn't mean our inference is
correct. In this case I'm pointing out that the inference
doesn't require a learned language. My point is that math is not
nature, but nurture. If it were otherwise, I would expect the
effects of alcohol intoxication or smaller brain cortex to make
an animal more logical rather than more emotional. Emotion is
more primitive than symbolic logic.


Please note that according to experimental results (see the book
mentioned in my previous message), pain comes after the event. For
example when you touch a hotplate, you take your hand back not
because of the pain. The action actually happens unconsciously,
conscious pain comes afterward.


The pain comes to 'us' after the event. That's not to say that the
cells of your burned finger are not in pain already. Cellular pain
may not be the same experience of course as a trillion cell human
being's version of it. We have to ramp up the significance of the
sensation. Cells die all the time, so their damage may not feel as
'expensive' to us who, all things considered, consider our own
fingers pretty highly.


Whether individual cells can experience pain is, I guess, an open  
question. It seems that there are no experimental results to this end.


What I meant was that the action to remove the hand is done  
unconsciously. I am not sure that pain in cells is the reason, in my  
view rather sensor neurons give signals to the brain and then it  
causes the action. All this however happens unconsciously and pain  
as we feel it comes after the action.



As far as the book, it looks good at the beginning but then seems
like it creeps back down away from the hard problem. Most of what you


The book considers experimental results and the Hard Problem is  
formulated in the context of experimental research. The book  
actually offers no solution, its goal rather to show the problem. To  
this end, the authors first tries to employ normal scientific  
knowledge as long as he can. This is why I like it. Yet, the book  
states pretty clear that the Hard Problem (Qualia) is right now  
incompatible with contemporary scientific knowledge.



There is no scientific way to put a frontier between scientific  
knowledge and scientific beliefs. I am glad that the author sees there  
is a problem between qualia and what I would call only the current  
scientific naturalistic beliefs.

The incompatibility is between mechanism and materialism.

Bruno






have quoted I agree with and have considered often. Here's my answers
to his qualia questions:


...

Thanks. The problem is that you use your own language to model the  
world and it seems to be far away from that I get used to, hence no  
comments from my side here.


Evgenii
http://blog.rudnyi.ru





http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-07 Thread Bruno Marchal


On 07 Aug 2011, at 17:25, Evgenii Rudnyi wrote:


On 07.08.2011 14:14 Bruno Marchal said the following:


On 07 Aug 2011, at 01:24, Stathis Papaioannou wrote:


On Sun, Aug 7, 2011 at 2:54 AM, Evgenii Rudnyi use...@rudnyi.ru
wrote:


Let us forget for a moment machines and take for example some
other biological creatures, for example even insects. How would
you characterize the behaviour of insects? Is it intelligent or
not?


Yes, I would say that insects have a limited intelligence. Why
not? And I imagine they also have a limited consciousness.



That's my personal feeling too. Recently I have upgraded the arachnids
and octopuses into the Löbian (self-conscious) class of entities.

I came to that possible conclusion by looking at videos like this one, and
then making some experiments with spiders myself:

http://www.youtube.com/watch?v=iND8ucDiDSQ

It is not the movement of the spider, but its apparent induction that there
is a spider behind the mirror, and its apparent shock at discovering
there is none.

Take this with a grain of salt, but in matters of consciousness I
prefer to attribute too much than not enough.


On the other hand you can take some Khepera robots for example

http://youtu.be/t4elmvcMpBQ

and then to use a mirror as well. It would be an interesting  
experiment as well.


The question here what it could mean, limited consciousness in the  
case of a spider.


Why limited consciousness? For me the big departure is between RA  
consciousness and PA consciousness. PA is RA (addition and  
multiplication, mainly) + the induction axioms (it becomes very  
*clever*). When I say that I think that jumping spiders are Löbian, I  
mean that I think they are as conscious as we are. But they have less  
memory, less motivation, and they are severely constrained by a  
very little brain, which makes them far less intelligent (in the  
sense of Stathis); but I think they are as conscious as us: they  
distinguish themselves from other creatures toward which they have a  
cognitive empathy. For a long time I thought only the mammals could do  
that; then I enlarged this to the homeotherm animals (which  
regulate the temperature of the body and happen to dream), and then I  
enlarged this recently to the octopuses and the spiders. In a  
sense, our own consciousness might be more limited, because it is full  
of sophisticated, futile and less futile, human complexity. The brain  
seems to be more a filter of (platonic) consciousness than a  
consciousness producer, and a bigger brain might filter more rather than less.  
Technically, this point is still hard to settle.
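
For readers who don't know the acronyms: RA here stands for Robinson arithmetic and PA for Peano arithmetic, and the slogan "PA = RA + induction" can be made precise. A standard way to write the first-order induction scheme (my summary, not Bruno's wording):

```latex
% Peano arithmetic = Robinson arithmetic + the induction scheme,
% one axiom instance for every formula \varphi(x) of the language:
\big( \varphi(0) \;\land\; \forall x\,(\varphi(x) \rightarrow \varphi(s(x))) \big)
  \;\rightarrow\; \forall x\,\varphi(x)
```

Robinson arithmetic itself has only a handful of axioms about 0, the successor function s, + and ×, which is what makes it so weak and PA, by comparison, so much "cleverer".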






Jeffrey Gray foresees a role for consciousness as a general purpose  
comparator system for late error detection.


It is a good idea.



This presumably give us an opportunity to reprogram ourselves by  
means of conscious experience.


This might be related to the ideal G/G* case. I think that  
consciousness brings semantics (model, unprovability) and that it  
speeds up the machine relative to its most probable computations/ 
universal machines.






Spiders on the other hand seem to be hardwired,


So are we. Our software admits many more loops, but the spider might  
have the extra loop which distinguishes it from the insects, which I  
think are universal but not Löbian (they have almost the trivial,  
unlimited consciousness; they cannot reflect on it and thus are almost  
non-universal sorts of robots; unlike the spiders and cats, they do not  
have the opportunity to think something like "I should go there". They  
can only go there).




so it is unclear what an advantage conscious experience could give  
them.


To better escape from the trap of a predator,
To capture a prey more quickly than another spider,
To mate more efficiently than another spider.

I am not talking about spiders in general; there I am agnostic. Only about  
the jumping spiders.


Consciousness is a key for moving living systems, to anticipate  
the way the decor will move relative to them. Self-consciousness  
accelerates this exponentially. The difference between the spider and  
us is the place in the exponential. Insects might be said to be Löbian on a  
larger, non-individual scale, just as we can't exclude the plants. But  
spiders might be like cats and dogs, and primates and humans, always  
improving their strategies, slowly or less slowly. With the  
insects and plants, the improvement is basically only Darwinian,  
not individual.
Of course I might be deluded about those spiders, but they surprise me a  
lot! Some months ago, I would have thought that arachnids were not much  
more, cognitively speaking, than insects or worms.
Insects' consciousness is really God's consciousness through a window.  
With Löbian entities, God delegates a bit more of its will.


Bruno


http://iridia.ulb.ac.be/~marchal/




Re: Simulated Brains

2011-08-07 Thread meekerdb

On 8/6/2011 11:44 PM, Evgenii Rudnyi wrote:

On 07.08.2011 05:12 Craig Weinberg said the following:

On Aug 6, 9:35 pm, meekerdbmeeke...@verizon.net  wrote:

On 8/6/2011 4:59 PM, Craig Weinberg wrote:


The language doesn't matter. You can see that a person is in pain
by their response to being burned, even if they have not
developed language yet.


Interesting.  Now Craig *can* infer qualia from behavior.


We can always infer qualia. It doesn't mean our inference is
correct. In this case I'm pointing out that the inference doesn't
require a learned language. My point is that math is not nature, but
nurture. If it were otherwise, I would expect the effects of alcohol
intoxication or smaller brain cortex to make an animal more logical
rather than more emotional. Emotion is more primitive than symbolic
logic.



Please note that according to experimental results (see the book 
mentioned in my previous message), pain comes after the event. For 
example when you touch a hotplate, you take your hand back not because 
of the pain. The action actually happens unconsciously, conscious pain 
comes afterward.


Evgenii
http://blog.rudnyi.ru



Which invites the question, was it pain before you were conscious of 
it?  Would it have been pain if you'd never become conscious of it?


Brent




Re: bruno list

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 19:23 Bruno Marchal said the following:


On 07 Aug 2011, at 17:25, Evgenii Rudnyi wrote:



...


The question here what it could mean, limited consciousness in
the case of a spider.


Why limited consciousness? For me the big departure is between RA
consciousness and PA consciousness. PA is RA (addition and
multiplication, mainly) + the induction axioms (it become very
*clever*). When I say that I think that jumping spiders are Löbian, I
 mean that I think they are as much conscious than us. But they have
a lower memory, lower motivation, they are severely constrained by a
very little brain, which makes them far less intelligent (in the
sense of Stathis), but I think they are as conscious as us: they
distinguishes themselves from other creature to which they have a
cognitive empathy. For a long time I thought only the mammals can do
that, then I have enlarged this to the homeotherm animals (which
regulate the temperature of the body and happens to dream), and then
I have enlarged this recently to the octopus and the spiders. In a
sense, our own consciousness might be more limited, because it is
full of sophisticated, futile and less futile, human complexity. The
brain seems to be more a filter of (platonic) consciousness than a
consciousness producer, and bigger brain might filter more than less.
technically this points is still hard to settle out.



My question was more about what kind of conscious experiences a 
spider has. Let us start with vision. Does a spider experience a 3D 
world like we do? (Even without colors, say greyscale, but still a 3D 
world.) And does a spider have emotions, pain, etc.?


Evgenii
http://blog.rudnyi.ru




Re: Simulated Brains

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 19:58 meekerdb said the following:

On 8/6/2011 11:44 PM, Evgenii Rudnyi wrote:


...


Please note that according to experimental results (see the book
mentioned in my previous message), pain comes after the event. For
example, when you touch a hotplate, you take your hand back not
because of the pain. The action actually happens unconsciously;
conscious pain comes afterward.

Evgenii http://blog.rudnyi.ru



Which invites the question, was it pain before you were conscious of
it? Would it have been pain if you'd never become conscious of it?


I would say just a series of neuron spikes, what else? I mean that in 
the skin there is a receptor that, when it is hot, excites a neuron. 
That neuron excites other neurons and eventually your muscles move 
your hand. Do you see it differently?


Evgenii
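The causal chain described here (receptor excites a neuron, neurons excite other neurons, the muscle moves, and the conscious report trails behind) can be put as a toy event timeline. This is only an illustrative sketch; the two latencies are the round figures quoted in this thread (about 10 ms for the reaction, about 200 ms for conscious pain), assumed numbers rather than measured data, and the function names are mine:

```python
# Toy model of the two pathways in the hotplate example.
# The latencies below are the round numbers quoted in the thread
# (assumptions for illustration, not measured data).
REFLEX_LATENCY_MS = 10      # receptor -> spinal circuit -> muscle
CONSCIOUS_LATENCY_MS = 200  # slower loop producing the conscious pain report

def timeline(stimulus_ms: int = 0):
    """Events for one hotplate touch, sorted by time."""
    events = [
        (stimulus_ms + REFLEX_LATENCY_MS, "hand withdrawn (unconscious reflex)"),
        (stimulus_ms + CONSCIOUS_LATENCY_MS, "pain felt (conscious report)"),
    ]
    return sorted(events)

for t, label in timeline():
    print(f"{t:>3} ms: {label}")
# The withdrawal precedes the felt pain, matching the claim that the
# pain cannot be the cause of the action.
```
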




Re: Simulated Brains

2011-08-07 Thread Evgenii Rudnyi

Brent,

Sorry, I did not understand your question correctly - I thought it was 
something like what was there before the pain. My unconscious answered 
your question faster, as it interpreted it correctly for my 
consciousness.


The answer to your question, in my view, is that without consciousness 
there would be no pain.


Evgenii

On 07.08.2011 20:07 Evgenii Rudnyi said the following:

On 07.08.2011 19:58 meekerdb said the following:

On 8/6/2011 11:44 PM, Evgenii Rudnyi wrote:


...


Please note that according to experimental results (see the book
mentioned in my previous message), pain comes after the event.
For example, when you touch a hotplate, you take your hand back
not because of the pain. The action actually happens
unconsciously; conscious pain comes afterward.

Evgenii http://blog.rudnyi.ru



Which invites the question, was it pain before you were conscious
of it? Would it have been pain if you'd never become conscious of
it?


I would say just a series of neuron spikes, what else? I mean that in
the skin there is a receptor that, when it is hot, excites a
neuron. That neuron excites other neurons and eventually your
muscles move your hand. Do you see it differently?

Evgenii






Re: bruno list

2011-08-07 Thread meekerdb

On 8/7/2011 4:42 AM, Stathis Papaioannou wrote:

That, as I keep saying, is the question. Assume that the bot can
behave like a person but lacks consciousness. Then it would be
possible to replace parts of your brain with non-conscious components
that function otherwise normally, which would lead to you lacking some
important aspect of consciousness but being unaware of it.


Put that way it seems absurd.  But what about lacking consciousness but 
*acting as if you were unaware* of it?  The philosophical zombie says 
he's conscious and has an internal narration and imagines and 
dreams...but does he?  Can we say that he must?  If he says he doesn't, 
can we be sure he's lying?  Even though I think functionalism is right, 
I think consciousness may be very different depending on how the 
internal functions are implemented.  I go back to the example of having 
an inner narration in language (which most of us didn't have before age 
4).  I think Julian Jaynes was right to suppose that this was an 
evolutionary accident in co-opting the perceptual mechanism of 
language.  In a sense all thought may be perception; it's just that some 
of it is perception of internal states.


Brent


This
is absurd, but it is a corollary of the claim that it is possible to
separate consciousness from function. Therefore, the claim that it is
possible to separate consciousness from function is shown to be false.
If you don't accept this then you allow what you have already admitted
is an absurdity.





Re: bruno list

2011-08-07 Thread meekerdb

On 8/7/2011 5:01 AM, Bruno Marchal wrote:
That's why I sometimes return to my engineering viewpoint.  It is 
easy to speculate that some overarching everything construct 
includes us and our world as an infinitesimal part.


I suspect a confusion with Tegmark's kind of mathematicalism. Comp 
gives us (us = the UMs and LUMs) the big role in the emergence of 
physics; not an infinitesimal role at all.


Isn't that the measure (aka White Rabbit) problem?  Can you show that 
the UD does not generate infinitely many Newtonian worlds?  Or chaotic 
worlds?  Do you have to rely on anthropic selection?


Brent




Re: bruno list

2011-08-07 Thread Bruno Marchal


On 07 Aug 2011, at 20:02, Evgenii Rudnyi wrote:


On 07.08.2011 19:23 Bruno Marchal said the following:


On 07 Aug 2011, at 17:25, Evgenii Rudnyi wrote:



...


The question here is what it could mean, limited consciousness in
the case of a spider.


Why limited consciousness? For me the big divide is between RA
consciousness and PA consciousness. PA is RA (mainly addition and
multiplication) plus the induction axioms (it becomes very
*clever*). When I say that I think that jumping spiders are Löbian, I
mean that I think they are as conscious as us. But they have
less memory and lower motivation, and they are severely constrained by a
very little brain, which makes them far less intelligent (in the
sense of Stathis); still, I think they are as conscious as us: they
distinguish themselves from other creatures toward which they have a
cognitive empathy. For a long time I thought only mammals could do
that; then I enlarged this to the homeotherm animals (which
regulate their body temperature and happen to dream), and then
recently to the octopus and the spiders. In a
sense, our own consciousness might be more limited, because it is
full of sophisticated, futile and less futile, human complexity. The
brain seems to be more a filter of (platonic) consciousness than a
producer of consciousness, and a bigger brain might filter more, not less.
Technically this point is still hard to settle.



My question was more about what kind of conscious experiences a 
spider has. Let us start with vision. Does a spider experience a 
3D world like we do? (Even without colors, say greyscale, but still 
a 3D world.) And does a spider have emotions, pain, etc.?


Jumping spiders have a larger color spectrum than us, according to 
some scientists. They have a pretty good binocular vision system, but 
with a narrower viewing angle. And, like most spiders, they have 
eight eyes. The six supplementary eyes seem good at detecting movement 
all around them, so I figure they might have a pretty good sense 
of 3D.
They certainly have emotions, which are the most basic mental 
experience in most living forms, and pain, and thirst and appetite, and 
sexual desire, satisfaction and fears.
No doubt they have a rather different perception than humans, and 
different qualia, but basically I would say that it is like us, minus 
some troubles (like how will I pay the taxes), but with the 
corresponding ones, like it has been some time since I saw any 
edible prey (not in words, of course).


Yes. I would bet they are pretty self-conscious, like us. They don't 
have language, and they probably have no way to learn from the 
discoveries made by others. Their progress in technique is still 
Darwinian, but in their individual lives they learn (unlike insects).


I think and speculate from what I read and see (not as an expert in 
arthropods, for sure). I will certainly dig into this and let you know 
whether that theory weighs up or down.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-07 Thread Bruno Marchal


On 07 Aug 2011, at 20:39, meekerdb wrote:


On 8/7/2011 5:01 AM, Bruno Marchal wrote:
That's why I sometimes return to my engineering viewpoint.  It is  
easy to speculate that some overarching everything construct  
includes us and our world as an infinitesimal part.


I suspect a confusion with Tegmark's kind of mathematicalism. Comp  
gives us (us = the UMs and LUMs) the big role in the emergence of  
physics; not an infinitesimal role at all.


Isn't that the measure (aka White Rabbit) problem?  Can you show  
that the UD does not generate infinitely many Newtonian worlds?  Or  
chaotic worlds?  Do you have to rely on anthropic selection?



We don't have to rely on anthropic selection, but we do have to rely  
on relative universal machine-tropic selection. That is why we need  
the machine's points of view (the arithmetical hypostases). You are  
selected by your most consistent extensions, like with the WM  
duplications.


The UD *does* generate infinitely many Newtonian worlds, but the  
machine's points of view, based on self-reference, introduce a  
quantization, and if comp is really true, it should introduce some  
phase and the negative probabilities leading to normal  
quasi-classical worlds, in a way similar to Everett+Gleason+Feynman. The  
fact that p → BDp is a theorem, for p sigma_1, in the material  
hypostases formally confirms the existence of that phase. Whether that  
phase really makes the White Rabbits as rare as they seem to be in our  
neighborhoods remains to be worked out (or passed to the next  
generation).



Bruno


http://iridia.ulb.ac.be/~marchal/






Re: COMP refutation paper - finally out

2011-08-07 Thread John Mikes
Dear Benjamin, if this is your name (benjayk?), and if the unsigned text is
yours, of course:
I believe this post is not 'joining' the chorus of the debate. Or is it?
Benjayk wrote:
*Consciousness is simply a given*
OK, if you just disclose ANYTHING about it as you formulate that 'given'.
Your(?) logic seems alright that if it is 'originated' upon numbers then the
* 'consciousness-based' *numbers are a consequence of a consequence (or
prerequisite to a prerequisite).
 I am not decrying the 'origin' of consciousness, rather its entire concept
- what it may contain, include, act with, by, for, result in, - or else we
may not even know about today..
Then I may stipulate about an origin for it.

* ---EXISTS?---* as WHAT?
I volunteered on many discussion lists a defining generalization:* response
to relations, *
(originally: *to information*, which turned out to be a loose cannon). In
such general view it is not restricted to animates, in-animates, physical
objects, ideas, or more, since the 'relations' are quite ubiquitous even
beyond the limited circle of our knowledge. In such sense:* it exists*,
indeed.
Not (according to me) in *THOSE *systems, but everywhere.
 John M

(PS: Please excuse me if I pound on open doors in a discussion whose ~100
long posts I barely studied. I wanted to keep out and just could not
control my mouse. JM)

On Sat, Aug 6, 2011 at 5:14 PM, benjayk benjamin.jaku...@googlemail.comwrote:


 Frankly I am a bit tired of this debate (to some extent debating in
 general),
 so I will not respond in detail any time soon (if at all). Don't take it as
 total disinterest, I found our exchange very interesting, I am just not in
 the mood at the moment to discuss complex topics at length.





 Bruno Marchal wrote:
 
  Then computer science provides a theory of consciousness, and explains
 how
  consciousness emerges from numbers,
 How can consciousness be shown to emerge from numbers when it is already
 assumed at the start?
 It's a bit like assuming A, and because B → A is true if A is true, we can
 claim for any B that B is the reason that A is true.

 Consciousness is simply a given. Every explanation of it will just
 express
 what it is and will not determine its origin, as its origin would need to
 be
 independent of it / prior to it, but could never be known to be prior to
 it,
 as this would already require consciousness.

 The only question is what systems are able to express that consciousness
 exists, and what place consciousness has in those systems.



 Bruno Marchal wrote:
 
 
 
 
  Bruno Marchal wrote:
 
 
 
  Bruno Marchal wrote:
 
  And in that sense, comp provides, I think, the first coherent
  picture of
  almost everything, from God (oops!) to qualia, quanta included, and
  this by assuming only seven arithmetical axioms.
  I tend to agree. But its coherent picture of everything includes
  the
  possibility of infinitely many more powerful theories. Theoretically
  it may
  be possible to represent every such theory with arithmetic - but
  then we can
  represent every arithmetical statement with just one symbol and an
  encoding
  scheme, still we wouldn't call "." a theory of everything.
  So it's not THE theory of everything, but *a* theory of everything.
 
  Not really. Once you assume comp, the numbers (or equivalent) are
  enough, and very simple (despite being mysterious).
  They are enough, but they are not the only way to make a theory of
  everything. As you say, we can use everything as powerful as
  numbers, so
  there is an infinity of different formulations of theories of
  everything.
 
  For any theory, you have infinities of equivalent formulations. This
  is not a defect. What is amazing is that they can be very different
  (like cellular automata, LISP, addition+multiplication on natural
  numbers, quantum topology, billiard balls, etc.)
 I agree. It's just that in my view the fact that they can be very different
 makes them ultimately different theories, only theories about the same
 thing. Different theories may explain the same thing, but in practice, they
 may vary in their efficiency to explain it, so it makes sense to treat them
 as different theories.
 In theory, even one symbol can represent every statement in any language,
 but still it's not as powerful as the language it represents.

 Similarly, if you use just natural numbers as a TOE, you won't be able to
 directly express important concepts like dimensionality.
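The one-symbol point above is easy to make concrete. A throwaway sketch (function names like `goedel_number` are mine, not from the thread): any statement can be coded injectively as a single natural number, and hence in principle as that many repetitions of one symbol, but the unary form explodes in size, which is exactly the efficiency difference being argued here:

```python
def goedel_number(statement: str) -> int:
    """Injective code: each statement becomes one natural number n.
    The pure one-symbol 'language' would write n as n copies of '|'."""
    return int.from_bytes(statement.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Recover the statement from its code."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

n = goedel_number("2+2=4")
assert decode(n) == "2+2=4"
# Writing n in unary would take n symbols: over 10**11 of them for this
# five-byte statement. Expressible in principle, but unusable in practice.
print(n)
```

So the one-symbol system can represent everything, yet all the structure lives in the encoding scheme, not in the "language" itself.
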
 --
 View this message in context:
 http://old.nabble.com/Mathematical-closure-of-consciousness-and-computation-tp31771136p32209984.html
 Sent from the Everything List mailing list archive at Nabble.com.


Re: Simulated Brains

2011-08-07 Thread Craig Weinberg
On Aug 7, 11:47 am, Evgenii Rudnyi use...@rudnyi.ru wrote:
 On 07.08.2011 17:12 Craig Weinberg said the following:

 It seems that pain is some brain function, see for example

 http://www.thenakedscientists.com/HTML/content/interviews/interview/651/

 I have just searched in Google

 people that do not experience pain

 and this was the first link.

It's saying that the amplification of pain is a molecular function:

It seems there are a whole series of *proteins that detect* various
types of damage, be it hot, cold, pressure, etc. These seem to be
integrated together by this *SCN9A, which seems to be an amplifier*
that takes these small initial tissue damage signals and turns them
into a much larger sodium impulse and a nerve can fire.

What WE feel as pain is what our brain cells feel from other neurons
when they are functioning properly. This genetic mutation affects the
neuron's ability to amplify the pain, not the ability for the other
cells of the body to feel the micro-pain that they might feel when
repairing themselves from damage, and the proteins of the cell that
detect that damage... which suggests that awareness is operating
robustly at the molecular level.

  I understand the neuro-mechanical view, I just think that it's a
  prejudiced interpretation of the data. The signal that the sensor
  neurons give to the brain are none other than pain. Sure, it may get
  amplified as the brain experiences it, as it invites cognitive
  associations and memories, rattles around in the executive
  processing senate, etc., but there is no reason to assume that the
  primary input of the sense organ is anything less than sense itself.
  What is a 'signal' made of? On the outside it's orderly changes we
  can observe occurring in matter, on the inside, in our own case, we
  can experience changes in what we feel and think. They are the same
  phenomenon, only seen from two different (opposite) perspectives. The
  experience of pain spread through the tissues of the body like a
  crowd wave, including the nervous system, which is a kind of
  expressway for politicizing the experiences of the body and through
  the body.

 A signal from a neuron is electrical in nature (see neuron spikes).
 Experiments show that the brain operates at about 10 ms, and this could be a
 typical reaction time. Pain (and conscious experience in general)
 requires, however, about 200 ms. So, as I have said, first the action is
 made unconsciously and only after that comes pain. Hence pain could not
 be the cause of the action.

The experience of pain at the organism level is not the cause of the
action. It is the local sense of pain that, as you note, still
eventually arrives at the brain after the fact. Had there been no
original experience of pain, then there would be nothing to arrive at
the conscious areas after 200ms. The action is reflex, so it bypasses
the areas of the brain which we experience as 'us' and directly
responds, only letting us know why later on.

It may help to think of 'signals' as an analytical abstraction rather
than a concrete event. There are no 'signals' only feelings and
thoughts which look like electrochemical changes from a third person
perspective. It's not like there are sparks flying up the spinal cord
- that's just a fanciful way of understanding it. Neurons and proteins
are simply doing different things in a specific orderly pattern. The
pattern is what spikes, not the actual genes and cells. I realize this
is not the accepted current interpretation, but I think that it is the
more accurate one.

 I have not meant that your theory is wrong. I just wanted to say that
 when you sell your theory to other people, it might be good to start
 talking their language. Well, sales is a hard problem on its own.

I'm ambivalent about selling my theory to other people. They can have
it for free if they want. If I had wanted to speak their language I
probably would never have developed the theory in the first place.

Craig




Re: Math Question

2011-08-07 Thread Craig Weinberg
On Aug 1, 2:29 pm, Bruno Marchal marc...@ulb.ac.be wrote:

Bruno  Stephen,

Isn't there a concept of imprecision in absolute physical measurement
and drift in cosmological constants? Are atoms and molecules all
infinitesimally different in size or are they absolutely the same
size? Certainly individual cells of the same type vary in all of their
measurements, do they not?

If so, that would seem to suggest my view - that arithmetic is an
approximation of feeling, and not the other way around. Cosmos is a
feeling of order, or of wanting to manifest order, but it is not
primitively precise. Make sense?

Biological processes then, could be conceived as a 'levelling up' of
molecular arithmetic having been formally actualized, a more
significant challenge is attempted on top of the completed molecular
canvas - with more elasticity and unpredictability, and a host of
newer, richer feelings which expand upon the molecular range, becoming
at once more tangible and concrete, more real, and more unreal and
abstract. The increased potential for unreality in the subjective
interiority of the cells is what creates the perspective necessary to
conceive of the molecular world as objectively real by contrast. The
nervous system does the same trick one level higher.

Craig




Re: COMP refutation paper - finally out

2011-08-07 Thread benjayk


Bruno Marchal wrote:
 


 Bruno Marchal wrote:



 Bruno Marchal wrote:

 Then computer science provides a theory of consciousness, and
 explains how
 consciousness emerges from numbers,
 How can consciousness be shown to emerge from numbers when it is
 already
 assumed at the start?

 In science we assume at some meta-level what we try to explain at  
 some
 level. We have to assume the existence of the moon to try theories
 about its origin.
 That's true, but I think this is a different case. The moon seems to  
 have a
 past, so it makes sense to say it emerged from its constituent  
 parts. In the
 past, it was already there as a possibility.
 
 OK, I should say that it emerges arithmetically. I thought you did  
 already understand that time is not primitive at all. More on this  
 below.
Yeah, the problem is that consciousness emerging from arithmetic means
just that we manage to point to its existence within the theory. We have no
reason to suppose this expresses something more fundamental, that is, that
consciousness literally emerges from arithmetic. Honestly, I don't even
know how to interpret this literally.



Bruno Marchal wrote:
 

 But consciousness as such has no past, so what would it mean that it  
 emerges
 from numbers? Emerging is something taking place within time.  
 Otherwise we
 are just saying we can deduce it from a theory, but this in and of  
 itself
 doesn't mean that what is derived is prior to what it is derived from.

 To the contrary, what we call numbers just emerges after  
 consciousness has
 been there for quite a while. You might argue that they were there  
 before,
 but I don't see any evidence for it. What the numbers describe was  
 there
 before, this is certainly true (or you could say they were implicitly
 there).
 
 OK. That would be a real disagreement. I just assume that the  
 arithmetical relations are true independently of anything. For example  
 I consider the truth of the Goldbach conjecture as already settled in  
 Platonia. Either it is true that every even number bigger than 2 is the  
 sum of two primes, or it is not, and this independently of any  
 consideration of time, space, humans, etc.
 Humans can easily verify this for small even numbers: 4 = 2+2, 6 =  
 3+3, 8 = 3+5, etc. But no proof has been found, despite many people  
 having searched for one.
 I can see that the expression of such a statement needs humans or some  
 thinking entity, but I don't see how the fact itself would depend on  
 anything (but the definitions).
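The hand checks above (4 = 2+2, 6 = 3+3, 8 = 3+5) are easy to mechanize. A throwaway sketch (function names are mine) that verifies the conjecture for small even numbers; of course, no finite check settles the conjecture itself:

```python
def is_prime(k: int) -> bool:
    """Trial division; fine for small k."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if none exist."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# The cases checked by hand in the message:
assert goldbach_pair(4) == (2, 2)
assert goldbach_pair(6) == (3, 3)
assert goldbach_pair(8) == (3, 5)
# A mechanical check far beyond hand range; still not a proof.
assert all(goldbach_pair(n) for n in range(4, 10_000, 2))
print("Goldbach holds for all even n in [4, 10000)")
```
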
My point is subtle; I wouldn't necessarily completely disagree with what you
said. The problem is that in some sense everything is already there in some
form, so in this sense 1+1=2 and 2+2=4 are independently, primarily true, but
so is everything else. Consciousness is required for any meaning to exist,
and ultimately is equivalent to it (IMO), so we derive from the meaning in
numbers that meaning exists. It's true, but ultimately trivial.

Either everything is independently true, which doesn't really seem to be the
case, or things are generally interdependent. 1+1=2 is just true because
2+2=4 and I can just be conscious because 1+1=2, but 1+1=2 is just true
because I am conscious, and 1+1=2 is true because my mouse pad is blue,
etc...

This view makes sense to me, because it is so simple. One particular
true statement is true only because every particular true statement is
true, and because what is true is true. In this sense
every statement is true because of every other statement. If we derive
something, we just explain how we become aware of the truth (of a
statement). There is no objective hierarchy of emergence (but apparently
necessarily a subjective progression, we will first understand some things
and later some other things).
That's why it makes little sense to me to say consciousness as such arises
out of numbers. Subjectively we first need consciousness to make sense of
numbers. But certainly understanding of numbers can lead us to become more
conscious.


Bruno Marchal wrote:
 
 Bruno Marchal wrote:

 Yet, consciousness is not assumed as
 something primitive in the TOE itself.
 But this doesn't really matter, as we already assume that it's  
 primitive,
 because we use it before we can even formulate anything.
 
 We already assumed it exists, sure. But why would that imply that it  
 exists primitively? It exists fundamentally: in the sense that once you  
 have all the true arithmetical relations, consciousness exists. So,  
 consciousness is not something which appears or emerges in time or  
 space, but it is not primitive in the sense that its existence is a  
 logical consequence of arithmetical truth (provably so when we assume  
 comp and accept some definitions).
 
 Sometimes I sketch this in the following manner. The arrows are
 logico-arithmetical deductions:
 
 NUMBERS => CONSCIOUSNESS => PHYSICAL REALITY => HUMANS => HUMANS'  
 NUMBERS
I accept this deduction. 

Re: COMP refutation paper - finally out

2011-08-07 Thread benjayk


John Mikes wrote:
 
 Dear benjamin if this is your name (benjayk?)
 
Yep.


John Mikes wrote:
 
 I believe this post is not 'joining' the chorus of the debate. Or is it?
 Benjayk wrote:
 *Consciousness is simply a given*
 OK, if you just disclose ANYTHING about it as you formulate that 'given'.
 Your(?) logic seems alright that if it is 'originated' upon numbers then
 the
 * 'consciousness-based' *numbers are a consequence of a consequence (or
 prerequisite to a prerequisite).
  I am not decrying the 'origin' of consciousness, rather its entire
 concept
 - what it may contain, include, act with, by, for, result in, - or else we
 may not even know about today..
 Then I may stipulate about an origin for it.
Sorry, I can't follow you... You do not accept the concept of consciousness
and then want an origin for it?


John Mikes wrote:
 
 * ---EXISTS?---* as WHAT?
 I volunteered on many discussion lists a defining generalization:*
 response
 to relations, *
 (originally: *to information*, which turned out to be a loose cannon). In
 such general view it is not restricted to animates, in-animates, physical
 objects, ideas, or more, since the 'relations' are quite ubiquitous even
 beyond the limited circle of our knowledge. In such sense:* it exists*,
 indeed.
 Not (according to me) in *THOSE *systems, but everywhere.
???

benjayk

-- 
View this message in context: 
http://old.nabble.com/Mathematical-closure-of-consciousness-and-computation-tp31771136p32213960.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: bruno list

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 20:54 Bruno Marchal said the following:


On 07 Aug 2011, at 20:02, Evgenii Rudnyi wrote:


...


My question was more about what kind of conscious experiences a
spider has. Let us start with vision. Does a spider experience a
3D world like we do? (Even without colors, say greyscale, but
still a 3D world.) And does a spider have emotions, pain, etc.?


Jumping spiders have a larger color spectrum than us, according to
some scientists. They have a pretty good binocular vision system,
but with a narrower viewing angle. And, like most spiders, they
have eight eyes. The six supplementary eyes seem good at
detecting movement all around them, so I figure they might have a
pretty good sense of 3D. They certainly have emotions, which are the
most basic mental experience in most living forms, and pain, and
thirst and appetite, and sexual desire, satisfaction and fears. No
doubt they have a rather different perception than humans, and
different qualia, but basically I would say that it is like us,
minus some troubles (like how will I pay the taxes), but with the
corresponding ones, like it has been some time since I saw any
edible prey (not in words, of course).

Yes. I would bet they are pretty self-conscious, like us. They don't
have language, and they probably have no way to learn from the
discoveries made by others. Their progress in technique is still
Darwinian, but in their individual lives they learn (unlike
insects).

I think and speculate from what I read and see (not as an expert in
arthropods, for sure). I will certainly dig into this and let you know
whether that theory weighs up or down.


Who knows, I am not an expert in this area. It would be good to see what 
experiments pro and contra are available. As for the visual system in the 
human brain, it is pretty complex. Signals from the retina go in 
parallel to two different visual subsystems, the visual perception 
subsystem and the visual action subsystem. If I understood correctly, 
these subsystems take up a considerable part of the human brain. Hence I 
wonder how it goes in the brain of a spider. Is a visual perception 
subsystem there at all? Eight eyes are not necessarily enough to form a 
conscious 3D experience. To this end, one seems to need a good brain.


Also a quote from

Dario Floreano and Claudio Mattiussi, Bio-Inspired Artificial 
Intelligence: Theories, Methods, and Technologies, 2008, Chapter 6 
Behavioral Systems, section 6.5 Robots as Biological Models.


Robots can also be used as models to investigate biological questions 
and test hypotheses. As we mentioned in the historical introduction to 
this chapter, robots have been gradually replacing computers as the 
preferred tool and metaphor in embodied cognitive science. Robots are 
also becoming increasingly accepted among experimental biologists and 
neuroscientists as tools to validate their models.


Evgenii

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Simulated Brains

2011-08-07 Thread Evgenii Rudnyi

On 07.08.2011 21:26 Craig Weinberg said the following:

On Aug 7, 11:47 am, Evgenii Rudnyi use...@rudnyi.ru wrote:

On 07.08.2011 17:12 Craig Weinberg said the following:



It seems that pain is some brain function; see for example

http://www.thenakedscientists.com/HTML/content/interviews/interview/651/




I have just searched in Google for

"people that do not experience pain"

and this was the first link.


It's saying that the amplification of pain is a molecular function:

It seems there are a whole series of *proteins that detect* various
types of damage, be it hot, cold, pressure, etc. These seem to be
integrated together by this *SCN9A, which seems to be an amplifier*
that takes these small initial tissue-damage signals and turns them
into a much larger sodium impulse so that a nerve can fire.

What WE feel as pain is what our brain cells feel from other
neurons when they are functioning properly. This genetic mutation
affects the neuron's ability to amplify the pain, not the ability of
the other cells of the body to feel the micro-pain that they might
feel when repairing themselves from damage, or of the proteins of the
cell that detect that damage... which suggests that awareness is
operating robustly at the molecular level.


Thanks, I have to read it more carefully.

Evgenii




Re: COMP refutation paper - finally out

2011-08-07 Thread John Mikes
benjayk wrote:

*Sorry, I can't follow you... You do not accept the concept of
consciousness and then want an origin for it?*

I see you did not follow me... I asked for some identification of that
mystical noumenon we are talking about, exactly *to make it acceptable for
discussion*.  T H E N  -  I F it turns out to BE acceptable, we may well
contemplate an origination for it - if???...
Easier to follow now?
Sorry for not having been clearer.

BTW I never said that I do not accept the term consciousness - if it is
identified in a way that makes sense (to me). I even worked on it (1992) to
apply the word to something *more general* than e.g. awareness or similar
'human' peculiarities. This is how I first formulated my ID for it:
*acknowledgement of and response to information*.
During these two decades I have attempted to refine the words into newer
terms of more advanced meaning (changing and extending them beyond our
limits of knowledge in my agnosticism, like 'relations' etc.).

John M

On Sun, Aug 7, 2011 at 4:01 PM, benjayk benjamin.jaku...@googlemail.com wrote:



 John Mikes wrote:
 
  Dear benjamin if this is your name (benjayk?)
 
 Yep.


 John Mikes wrote:
 
  I believe this post is not 'joining' the chorus of the debate. Or is it?
  Benjayk wrote:
  *Consciousness is simply a given*
  OK, if you just disclose ANYTHING about it as you formulate that 'given'.
  Your(?) logic seems alright that if it is 'originated' upon numbers then
  the
  * 'consciousness-based' *numbers are a consequence of a consequence (or
  prerequisite to a prerequisite).
   I am not decrying the 'origin' of consciousness, rather its entire
  concept
  - what it may contain, include, act with, by, for, result in, - or else
 we
  may not even know about today..
  Then I may stipulate about an origin for it.
 Sorry, I can't follow you... You do not accept the concept of consciousness
 and then want an origin for it?


 John Mikes wrote:
 
  * ---EXISTS?---* as WHAT?
  I volunteered on many discussion lists a defining generalization:*
  response
  to relations, *
  (originally: *to information*, which turned out to be a loose cannon). In
  such general view it is not restricted to animates, in-animates, physical
  objects, ideas, or more, since the 'relations' are quite ubiquitous even
  beyond the limited circle of our knowledge. In such sense:* it exists*,
  indeed.
  Not (according to me) in *THOSE *systems, but everywhere.
 ???

 benjayk







Re: Simulated Brains

2011-08-07 Thread meekerdb

On 8/7/2011 11:07 AM, Evgenii Rudnyi wrote:

On 07.08.2011 19:58 meekerdb said the following:

On 8/6/2011 11:44 PM, Evgenii Rudnyi wrote:


...


Please note that according to experimental results (see the book
mentioned in my previous message), pain comes after the event. For
example, when you touch a hotplate, you take your hand back not
because of the pain. The action actually happens unconsciously;
the conscious pain comes afterward.

Evgenii http://blog.rudnyi.ru



Which invites the question, was it pain before you were conscious of
it? Would it have been pain if you'd never become conscious of it?


I would say just a series of neuron spikes; what else? I mean that in 
the skin there is some receptor that, when it is hot, excites some 
neuron. That neuron excites other neurons, and eventually your 
muscles move your hand. Do you see it differently?


No, but "some neuron excites some other neuron" is all that happens later 
in your brain too.  So where does it become pain?  Is it when those 
neurons in your brain connect the afferent signal with the language 
modes for pain, or with memories of injuries, or with a vocal cry?


Brent
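The sequence discussed above (unconscious withdrawal first, conscious pain later) can be sketched as a toy timeline. All latencies below are illustrative round numbers of my own choosing, not measured values:

```python
# Toy model of the hotplate reflex. The withdrawal runs over a short
# spinal loop, while the signal that becomes conscious pain travels a
# much longer route to the cortex. Latencies are illustrative only.

SPINAL_HOP_MS = 5        # one synaptic hop in the spinal cord (assumed)
CORTICAL_ROUTE_MS = 300  # ascent to cortex plus processing (assumed)

def withdrawal_latency_ms():
    # receptor -> spinal interneuron -> motor neuron: two hops
    return 2 * SPINAL_HOP_MS

def conscious_pain_latency_ms():
    # receptor -> spinal cord -> thalamus -> cortex
    return SPINAL_HOP_MS + CORTICAL_ROUTE_MS

print(withdrawal_latency_ms(), "ms: hand withdraws (unconscious reflex)")
print(conscious_pain_latency_ms(), "ms: pain is consciously felt")
```

Whatever the true numbers, the point is only the ordering: the motor loop completes well before the ascending signal could be felt as pain.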




SINGULARITY SUMMIT 2011, Melbourne Australia

2011-08-07 Thread Colin Geoffrey Hales

‘THE FUTURE OF TECHNOLOGY’
SINGULARITY SUMMIT 2011 
AUGUST 20-21 
RMIT UNIVERSITY Melbourne

http://summit.singinst.org.au/

This August, leading scientists, inventors and philosophers will gather in 
Melbourne to discuss the upcoming ‘intelligence explosion’ that many now refer 
to as ‘The Singularity’ - a technological breakthrough that promises to eclipse 
previous computing developments with the creation of super-human machines.

If present trends continue, computers will have more advanced and 
powerful ‘brains’ than humans within 25 years; the result will be a further 
explosion of computer power and of other technologies such as biotechnology, 
nanotechnology and health technology, beyond our current ability to predict.

The ‘Singularity Summit’ - a part of National Science Week - is an 
unprecedented opportunity to engage with today's leading experts on emerging 
technologies like Artificial Intelligence (AI), robotics, nanotechnology and 
brain-computer interfaces - right here in Melbourne.

As a pre-summit launch, the Australian premiere of documentary ‘Transcendent 
Man’ -  featuring leading futurist, singularity advocate and recent Time 
Magazine cover star ‘Ray Kurzweil’ - will be held at Nova Cinemas, Carlton on 
August 19.

The screening will also feature a prerecorded address to Australia from Ray 
Kurzweil and producer Barry Ptolemy, and a Q&A session with documentary 
participants and internationally renowned Artificial Intelligence (AI) experts 
- Dr Ben Goertzel, Dr Steve Omohundro and Dr Hugo De Garis.

The highly successful 2010 Singularity Summit drew over a hundred local, 
interstate and international enthusiasts to hear first-rate speakers from a 
range of fields.

The 2011 Summit again offers a stellar line-up, including leading Artificial 
Intelligence experts Dr Ben Goertzel and Professor Steve Omohundro, popular 
scientist Dr Lawrence Krauss and renowned philosopher of consciousness Dr David 
Chalmers.  This year’s summit will also feature demonstrations of recent 
robotics advances by Professor Raymond Jarvis and others. 

The summit will explore the important ethical and philosophical dimensions of 
the Singularity - whilst sharing the very latest scientific and technological 
breakthroughs.

There is simply no better way to glimpse the future of these exciting 
technologies. Besides talks and demonstrations, panels will offer the 
opportunity to interact with the speakers and to contribute to the conversation 
about these important issues.

Seating is limited, so secure your tickets for the 2011 Summit now.

The conference will be held at Casey Plaza at RMIT.

http://summit2011.singinst.org.au/

Speakers and subjects include:
David Chalmers - Leading philosopher of consciousness - “The Singularity – A 
Philosophical Analysis”

Lawrence Krauss - Leading physicist and best-selling author of The 
Physics of Star Trek - “The Future of Life in the Universe”

Ben Goertzel - Renowned AI researcher and leader of the OpenCog project 
– “AI Roadmaps”

Steve Omohundro - Renowned AI researcher - “Minds Making Minds: 
Artificial Intelligence and the Future of Humanity”

Ray Jarvis – “The Envy of Roboticists - the Future of AI in the 
Material World”

Alan Hájek – “A Plea for the Improbable”

Ian Robinson – “Rationality & Transhumanism”

Kevin B. Korb – “Bayesian Artificial Intelligence”

Ben Goertzel - Leading AI researcher – “Artificial General Intelligence”

James Newton-Thomas - Machine intelligence engineer – “Advances in 
Science and Technology”

Burkard Polster – “The Problem With Probability”

David Dowe - Artificial Intelligence - “(Bayesian/Algorithmic) 
Information theory, one- and two-part compression, and measures of intelligence”

and many more...

This conference is brought to you by Humanity+ @ Melbourne (Victoria, 
Australia). Humanity+ explores how society might use and profit from a variety 
of creative and innovative thought.

Join in an exciting weekend as we explore the surprising future. See you there!

Please feel free to pass this on.









Re: Simulated Brains

2011-08-07 Thread Craig Weinberg
Interesting article:

Residents of the brain: Scientists turn up startling diversity among
nerve cells
http://www.sciencenews.org/view/feature/id/332400/title/Residents_of_the_brain_

No two cells are the same. Zoom in, and the brain’s wrinkly, pinkish-
gray exterior becomes a motley collection of billions of cells, each
with personalized quirks and idiosyncrasies.

New results suggest, for instance, that a population of nerve cells
in which individual responses to an electrical poke differ can process
more information than a group in which responses are the same. 

in addition to losing neurons, the brain would lose diversity, a
deficit that could usher in even more damage.

I would say this tends to support my view that the idea of replacement
neurons or normative behavior modeling is likely to be a dead end as
far as functionalism is concerned. It's more appropriate to consider
your brain a civilization of individual organisms (only some of which
are the conscious 'I') rather than a powerful computer executing
complicated instructions.
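The quoted claim, that a population whose units respond differently can carry more information than a uniform one, can be illustrated with a minimal sketch using hypothetical threshold units (my own illustration of population coding, not a model from the article):

```python
import numpy as np

def distinct_responses(thresholds, stimuli):
    # A unit "fires" (1) when the stimulus exceeds its threshold; the
    # population response is the tuple of firing states. Count how many
    # distinct population patterns the stimulus range can evoke.
    return len({tuple((s > thresholds).astype(int)) for s in stimuli})

stimuli = np.linspace(0.0, 1.0, 101)  # candidate stimulus intensities
same = np.full(8, 0.5)                # uniform population: one threshold
diverse = np.linspace(0.1, 0.9, 8)    # diverse population of thresholds

print(distinct_responses(same, stimuli))     # 2 patterns (~1 bit)
print(distinct_responses(diverse, stimuli))  # 9 patterns (~3.2 bits)
```

Both populations have the same number of cells, but the diverse one partitions the stimulus range more finely, so its joint response distinguishes more stimulus levels.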

Craig
