Re: Asifism revisited.

2007-07-05 Thread Torgny Tholerus

David Nyman wrote:
 You have however drawn our attention to something very interesting and 
 important IMO.  This concerns the necessary entailment of 'existence'.
1.  The relation 1+1=2 is always true.  It is true in all universes.  
Even if a universe does not contain any humans or any observers.  The 
truth of 1+1=2 is independent of all observers.

2.  If you have a set of rules and an initial condition, then there 
exists a universe with this set of rules and this initial condition.  
Because it is possible to compute a new situation from a situation, and 
from this new situation it is possible to compute another new situation, 
and this can be done forever.  This unlimited set of situations will be 
a universe that exists independently of all humans and all observers.  
No one needs to make these computations; the results of the computations 
will exist anyhow.

3.  All mathematically possible universes exist, and they all exist in 
the same way.  Our universe is one of those possible universes.  Our 
universe exists independently of any humans or any observers.

4.  For us humans, the universes that contain observers are more 
interesting.  But there is no qualitative difference between universes 
with observers and universes without observers.  They all exist in the 
same way.  The GoL-universes (every initial condition will span a 
separate universe) exist in the same way as our universe.  But because 
we are humans, we are more interested in universes with observers, and we 
are especially interested in our own universe.  But otherwise there is 
nothing special about our universe.
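Point 2's construction ("compute a new situation from a situation, forever") can be made concrete with a minimal sketch using the Game of Life mentioned in point 4. This is only an illustration, not part of the original post; the function names are invented, and the rules are Conway's standard B3/S23:

```python
def neighbours(cell):
    """The eight cells surrounding a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Compute the next situation from the current one.
    `live` is the set of live cells; rules are Conway's B3/S23:
    a cell is live next step iff it has exactly 3 live neighbours,
    or it is live now and has exactly 2 live neighbours."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# The "blinker" initial condition: three cells in a vertical line.
# Its entire unlimited future is fixed by the rules plus this set,
# whether or not anyone runs the computation.
blinker = {(0, -1), (0, 0), (0, 1)}
print(step(step(blinker)) == blinker)   # -> True (period-2 oscillator)
```

Iterating `step` from any initial set enumerates exactly the "unlimited set of situations" of point 2.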

-- 
Torgny Tholerus


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: Asifism revisited.

2007-07-05 Thread David Nyman
On 05/07/07, Torgny Tholerus [EMAIL PROTECTED] wrote:

TT:  All mathematically possible universes exist, and they all exist in
the same way.  Our universe is one of those possible universes.  Our
universe exists independently of any humans or any observers.

DN: But here at the heart of your argument is the confusion again over
language.  If we grant that a mathematically possible universe exists
'independently' (i.e. other than as a sub-structure of the A-Universe) it -
and all consequences flowing from it - must exist self-relatively.  This is
the crucial entailment of 'independent' existence, as we discussed before.
And it exposes the confusion of the two distinct senses of 'independent'.
The first sense is of course that an independent universe does not 'depend'
on any observers it instantiates to grant it existence (i.e. they don't
'cause' it to exist).  It's in just this sense that it's 'independent' or
self-relative, and this is the sense you rely on.

But the second and crucial sense flows directly out of this 'self-relative
independence': which is that any self-relative universe capable of
generating the necessary structure simply *entails* the existence of
'observers' (i.e. self-relative sub-structures).  IOW, self-relation is what
observation *is*.   It's in precisely this crucial sense that an
'independently existing universe' is not 'independent of observation'. On
the contrary: it *entails* observation.  And of course our existence as
observers in self-relation to the A-Universe demonstrates this 'dependency'
in precisely this critical sense.

David






Re: Some thoughts from Grandma

2007-07-05 Thread David Nyman
On 05/07/07, Bruno Marchal [EMAIL PROTECTED] wrote:

BM:  OK. I would insist that the comp project (extract physics from comp)
is really just a comp obligation. This is what is supposed to be shown
by the UDA (+ MOVIE-GRAPH). Are you OK with this. It *is*
counterintuitive.

DN:  I believe so - it's what the reductio ad absurdum of the 'physical'
computation in the 'grandma' post was meant to show.  My version of the
'comp obligation' would then run as follows.  Essentially, if comp and
number relations are held to be 'real in the sense that I am real', then to
use Plato's metaphor, it is numbers that represent the forms outside the
cave.  If that's so, then physics is represented by the shadows the
observers see on the wall of the cave.  This is what I mean by 'independent'
existence in my current dialogue with Torgny: i.e. the 'arithmetical realism'
of numbers and their relations in the comp frame equates to their
'independence' or self-relativity.  And the existence of 'arithmetical
observers' then derives from subsequent processes of 'individuation'
intrinsic to such fundamental self-relation.  Actually, I find the equation
of existence with self-relativity highly intuitive.

BM:  Then, the interview of the universal machine is just a way to do the
extraction of physics in a constructive way. It is really the
subtleties of the incompleteness phenomena which makes this interview
highly non trivial.

DN:  This is the technical part.  But at this stage grandma has some feeling
for how both classical and QM narratives should be what we expect to emerge
from constructing physics in this way.

BM:  There is no direct (still less one-one) correlation between the mental
and the physical;
that is, the physical supervenience thesis is incompatible with the comp hyp.
[A quale of a pain] felt at time t in place x, is not a product of the
physical activity of a machine, at time t in place x. Rather, it is the
whole quale of [a pain felt at time t in place x] which is associated
with an (immaterial and necessarily unknown) computational state, itself
related to its normal consistent computational continuations.

snip

Comp makes the yes doctor a gamble, necessarily. That is: assuming the
theory comp, you have to understand that, by saying yes to the doctor, you
are gambling on a level of substitution. At the same time you make a
gamble on the theory comp itself. There is a double gamble here. Now, the
first gamble, IF DONE AT THE RIGHT COMP SUBSTITUTION LEVEL, is
comp-equivalent with the natural gamble everybody makes when going to sleep, or
just when waiting a nanosecond. In some sense nature makes that gamble in our
place all the time ... But this is something we cannot know, still less
assert in any scientific way, and that is why I insist so much on the
theological aspect of comp. This is important in practice. It really
justifies that the truth of the yes doctor entails the absolute fundamental
right to say NO to the doctor. The doctor has to admit he is gambling on a
substitution level. If comp is true, we cannot be sure of the choice of the
subst. level.

DN:  ISTM that a consequence of the above is that the issue of 'substitution
level' can in principle be 'gambled' on by cloning, or by evolution (because
presumably it has been, even though we can't say how).  But by engineering
or design???  Would there ever be any justification, in your view, for
taking a gamble on being uploaded to an AI program - and if so, on the basis
of what theory?  Essentially, this is what I've been trying to get at.  That
is: assuming comp, HOW would we go about making a 'sound bet', founded on a
specific AI theory, that some AI program instantiated by a 'physical'
computer, will equate to the continuity of our own observation?

The second question I have is summarised in my recent posts about 'sense' and
'action'.  Essentially, I've been trying to postulate that the correlation
of consciousness and physics is such that the relations between both sets of
phenomena are a necessary entailment, not an additional assumption.  ISTM
that this is essential to avoid all the nonsense about zombies.  And not
only this, but to show that the reciprocity between experience - e.g.
suffering  - and behaviour (indeed the whole entailment of 'intentionality')
is a necessary consequence of fundamental self-relation (arithmetical
relations, in the comp frame).  Now, my attempt to do this has been to
postulate that 'sense' and 'action' are simply observer-related aspects of a
non-decomposable fundamental self-relation, which in the comp frame would
equate to a set of number-relations.  But ISTM that for this to be true, the
observer and physical narratives would somehow need to follow an 'identical'
or isomorphic trajectory for their invariant relation to emerge in the way
that it seems to.  Do you think that this idea has any specific sense or
relevance in the comp frame?

BM:  Does this help? I assert some propositions without justifying them,
because the 

Re: Penrose and algorithms

2007-07-05 Thread LauLuna



On 29 jun, 19:10, Jesse Mazer [EMAIL PROTECTED] wrote:
 LauLuna  wrote:

 On 29 jun, 02:13, Jesse Mazer [EMAIL PROTECTED] wrote:
   LauLuna wrote:

   For any Turing machine there is an equivalent axiomatic system;
   whether we could construct it or not, is of no significance here.

    But for a simulation of a mathematician's brain, the axioms wouldn't be
    statements about arithmetic which we could inspect and judge whether they
    were true or false individually, they'd just be statements about the initial
    state and behavior of the simulated brain. So again, there'd be no way to
    inspect the system and feel perfectly confident the system would never
    output a false statement about arithmetic, unlike in the case of the
    axiomatic systems used by mathematicians to prove theorems.

 Yes, but this is not the point. For any Turing machine performing
 mathematical skills there is also an equivalent mathematical axiomatic
 system; if we are sound Turing machines, then we could never know that
 mathematical system to be sound, in spite of the fact that its axioms are
 the same ones we use.

 I agree, a simulation of a mathematician's brain (or of a giant simulated
 community of mathematicians) cannot be a *knowably* sound system, because we
 can't do the trick of examining each axiom and seeing they are individually
 correct statements about arithmetic as with the normal axiomatic systems
 used by mathematicians. But that doesn't mean it's unsound either--it may in
 fact never produce a false statement about arithmetic, it's just that we
 can't be sure in advance, the only way to find out is to run it forever and
 check.

Yes, but how can it be logically impossible for us to
acknowledge as sound the very principles and rules we are using?



 But Penrose was not just arguing that human mathematical ability can't be
 based on a knowably sound algorithm, he was arguing that it must be
 *non-algorithmic*.

No, he argues in Shadows of the Mind exactly what I say. He goes on
to argue why a sound algorithm representing human intelligence is
unlikely to be unknowably sound.



 And the impossibility has to be a logical impossibility, not merely a
 technical or physical one since it depends on Gödel's theorem. That's
 a bit odd, isn't it?

 No, I don't see anything very odd about the idea that human mathematical
 abilities can't be a knowably sound algorithm--it is no more odd than the
 idea that there are some cellular automata where there is no shortcut to
 knowing whether they'll reach a certain state or not other than actually
 simulating them, as Wolfram suggests in A New Kind of Science.

The point is that the axioms are exactly our axioms!

 In fact I'd say it fits nicely with our feeling of free will, that there
 should be no way to be sure in advance that we won't break some rules we
 have been told to obey, apart from actually running us and seeing what we
 actually end up doing.

I don't see how to reconcile free will with computationalism either.

Regards





Re: Penrose and algorithms

2007-07-05 Thread Jesse Mazer

LauLuna wrote:


On 29 jun, 19:10, Jesse Mazer [EMAIL PROTECTED] wrote:
  LauLuna wrote:

  On 29 jun, 02:13, Jesse Mazer [EMAIL PROTECTED] wrote:
    LauLuna wrote:

    For any Turing machine there is an equivalent axiomatic system;
    whether we could construct it or not, is of no significance here.

    But for a simulation of a mathematician's brain, the axioms wouldn't be
    statements about arithmetic which we could inspect and judge whether they
    were true or false individually, they'd just be statements about the initial
    state and behavior of the simulated brain. So again, there'd be no way to
    inspect the system and feel perfectly confident the system would never
    output a false statement about arithmetic, unlike in the case of the
    axiomatic systems used by mathematicians to prove theorems.

  Yes, but this is not the point. For any Turing machine performing
  mathematical skills there is also an equivalent mathematical axiomatic
  system; if we are sound Turing machines, then we could never know that
  mathematical system to be sound, in spite of the fact that its axioms are
  the same ones we use.

  I agree, a simulation of a mathematician's brain (or of a giant simulated
  community of mathematicians) cannot be a *knowably* sound system, because we
  can't do the trick of examining each axiom and seeing they are individually
  correct statements about arithmetic as with the normal axiomatic systems
  used by mathematicians. But that doesn't mean it's unsound either--it may in
  fact never produce a false statement about arithmetic, it's just that we
  can't be sure in advance; the only way to find out is to run it forever and
  check.

Yes, but how can there be a logical impossibility for us to
acknowledge as sound the same principles and rules we are using?

The axioms in a simulation of a brain would have nothing to do with the 
high-level conceptual principles and rules we use when thinking about 
mathematics, they would be axioms concerning the most basic physical laws 
and microscopic initial conditions of the simulated brain and its simulated 
environment, like the details of which brain cells are connected by which 
synapses or how one cell will respond to a particular electrochemical signal 
from another cell. Just because I think my high-level reasoning is quite 
reliable in general, that's no reason for me to believe a detailed 
simulation of my brain would be sound in the sense that I'm 100% certain 
that this precise arrangement of nerve cells in this particular simulated 
environment, when allowed to evolve indefinitely according to some 
well-defined deterministic rules, would *never* make a mistake in reasoning 
and output an incorrect statement about arithmetic (or even that it would 
never choose to intentionally output a statement it believed to be false 
just to be contrary).
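The asymmetry behind "run it forever and check" can be illustrated with a toy sketch (my own illustration; the machine, its planted error, and the function names are all invented for the example): the soundness of a statement-emitting machine can be refuted by a single false output, but no finite amount of checking can confirm it.

```python
from itertools import count, islice

def toy_machine():
    """A toy 'theorem prover' that emits claims of the form (a, b, s),
    asserting a + b = s. One deliberately false claim is planted to
    play the role of an unsound machine."""
    for n in count(1):
        if n == 5:
            yield (2, 2, 5)          # planted false claim
        else:
            yield (n, n, 2 * n)

def find_false_output(statements, budget):
    """Refutation search: scan up to `budget` emitted claims and return
    the first false one, or None. A None result does NOT establish
    soundness -- only that no error surfaced within the budget."""
    for a, b, s in islice(statements, budget):
        if a + b != s:
            return (a, b, s)
    return None

# Within a budget of 10 the planted error is found...
print(find_false_output(toy_machine(), 10))   # -> (2, 2, 5)
# ...but within a budget of 4 it is not, and for a machine with no
# planted error, no finite budget distinguishes "sound" from
# "error somewhere beyond the horizon".
print(find_false_output(toy_machine(), 4))    # -> None
```

This is why soundness of a simulated brain could only ever be falsified empirically, never verified in advance.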


 
  But Penrose was not just arguing that human mathematical ability can't be
  based on a knowably sound algorithm, he was arguing that it must be
  *non-algorithmic*.

No, he argues in Shadows of the Mind exactly what I say. He goes on
to argue why a sound algorithm representing human intelligence is
unlikely to be unknowably sound.

He does argue that as a first step, but then he goes on to conclude what I 
said he did, that human intelligence cannot be algorithmic. For example, on 
p. 40 he makes quite clear that his arguments throughout the rest of the 
book are intended to show that there must be something non-computational in 
human mental processes:

I shall primarily be concerned, in Part I of this book, with the issue of 
what it is possible to achieve by use of the mental quality of 
'understanding.' Though I do not attempt to define what this word means, I 
hope that its meaning will indeed be clear enough that the reader will be 
persuaded that this quality--whatever it is--must indeed be an essential 
part of that mental activity needed for an acceptance of the arguments of 
2.5. I propose to show that the appreciation of these arguments must involve 
something non-computational.

Later, on p. 54:

Why do I claim that this 'awareness', whatever it is, must be something 
non-computational, so that no robot, controlled by a computer, based merely 
on the standard logical ideas of a Turing machine (or equivalent)--whether 
top-down or bottom-up--can achieve or even simulate it? It is here that the 
Godelian argument plays its crucial role.

His whole Godelian argument is based on the idea that for any computational 
theorem-proving machine, by examining its construction we can use this 
understanding to find a mathematical statement which *we* know must be 
true, but which the machine can never output--that we understand something 
it doesn't. But I think my argument shows that if you were really to build a 
simulated mathematician or community of mathematicians in a computer, the 
Godel statement for this system would only be true *if* they never made a 
mistake in