RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Brent Meeker writes:

 Stathis Papaioannou wrote:
  Peter Jones writes:
  
  
 Stathis Papaioannou wrote:
 
 
 Like Bruno, I am not claiming that this is definitely the case, just that 
 it is the case if
 computationalism is true. Several philosophers (eg. Searle) have used the 
 self-evident
 absurdity of the idea as an argument demonstrating that computationalism 
 is false -
 that there is something non-computational about brains and consciousness. 
 I have not
 yet heard an argument that rejects this idea and saves computationalism.
 
 [ rolls up sleeves ]
 
 The idea is easily refuted if it can be shown that computation doesn't
 require interpretation at all. It can also be refuted more circuitously by
 showing that computation is not entirely a matter of interpretation. In
 everythingism, everything is equal. If some computations (the ones that
 don't depend on interpretation) are more equal than others, the way is
 still open for the Somethingist to object that interpretation-independent
 computations are really real, and the others are mere possibilities.
 
 The claim has been made that computation is not much use without an
 interpretation.
 Well, if you define a computer as something that is used by a human,
 that is true.
 It is also very problematic to the computationalist claim that the
 human mind is a computer.
 Is the human mind of use to a human? Well, yes, it helps us stay alive
 in various ways.
 But that is more to do with reacting to a real-time environment, than
 performing abstract symbolic manipulations or elaborate
 re-interpretations. (Computationalists need to be careful about how
 they define computer. Under
 some perfectly reasonable definitions -- for instance, defining a
 computer as
 a human invention -- computationalism is trivially false).
  
  
  I don't mean anything controversial (I think) when I refer to 
  interpretation of 
  computation. Take a mercury thermometer: it would still do its thing if all 
  sentient life in the universe died out, or even if there were no sentient 
  life to 
  build it in the first place and by amazing luck mercury and glass had come 
  together 
  in just the right configuration. But if there were someone around to 
  observe it and 
  understand it, or if it were attached to a thermostat and heater, the 
  thermometer 
  would have extra meaning - the same thermometer, doing the same thermometer 
  stuff. Now, if thermometers were conscious, then part of their thermometer 
  stuff might include knowing what the temperature was - all by 
  themselves, without 
  benefit of external observer. 
 
 We should ask ourselves how do we know the thermometer isn't conscious of the 
 temperature?  It seems that the answer has been that its state or activity *could* 
 be interpreted in many ways other than indicating the temperature; therefore it must 
 be said to be unconscious of the temperature, or we must allow that it implements all 
 conscious thought (or at least all for which there is a possible interpretative 
 mapping).  But I see its state and activity as relative to our shared environment; 
 and this greatly constrains what it can be said to compute, e.g. the temperature, 
 the expansion coefficient of Hg...  With this constraint, I think there is no 
 problem in saying the thermometer is conscious at the extremely low level of being 
 aware of the temperature or the expansion coefficient of Hg or whatever else is 
 within the constraint.

I would basically agree with that. Consciousness would probably have to be a 
continuum 
if computationalism is true. Even if computationalism were false and only those 
machines 
specially blessed by God were conscious there would have to be a continuum, 
across
different species and within the lifespan of an individual from birth to death. 
The possibility 
that consciousness comes on like a light at some point in your life, or at some 
point in the 
evolution of a species, seems unlikely to me.

 Furthermore, if thermometers were conscious, they 
  might be dreaming of temperatures, or contemplating the meaning of 
  consciousness, 
  again in the absence of external observers, and this time in the absence of 
  interaction 
  with the real world. 
  
  This, then, is the difference between a computation and a conscious 
  computation. If 
  a computation is unconscious, it can only have meaning/use/interpretation 
  in the eyes 
  of a beholder or in its interaction with the environment. 
 
 But this is a useless definition of the difference.  To apply it we have to know 
 whether 
 some putative conscious computation has meaning to itself; which we can only 
 know by 
 knowing whether it is conscious or not.  It makes consciousness ineffable and 
 so 
 makes the question of whether computationalism is true an insoluble mystery.

That's what I have in mind.
 
 Even worse it makes it impossible for us to know whether we're talking about 
 the same 
 thing when we use 

RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou







 Date: Mon, 11 Sep 2006 13:10:52 -0700
 From: [EMAIL PROTECTED]
 To: everything-list@googlegroups.com
 Subject: Re: computationalism and supervenience
 
 
 Stathis Papaioannou wrote:
  Brent Meeker writes:
  
  
 I think we need to say what it means for a computation to be 
 self-interpreting.  Many 
 control programs are written with self-monitoring functions and logging 
 functions. 
 Why would we not attribute consciousness to them?
  
  
  Well, why not? Some people don't even think higher mammals are conscious, 
  and perhaps there are some true solipsists who could convince themselves that 
  other people are not really conscious, as a rationalisation for antisocial 
  behaviour. 
 
 Autistic people don't empathize with others' feelings - perhaps because they 
 don't have them.  But their behaviour, and I would expect the behaviour of a 
 real solipsist, would be simply asocial.
 
 On the other hand, maybe flies experience 
  pain and fear when confronted with insecticide that is orders of magnitude 
  greater than that 
  of any mere human experience of torture, and maybe when I press the letter 
  y on my 
  keyboard I am subjecting my computer to the torments of hell. 
 
 And maybe every physical process implements all possible computations - but I 
 see no 
 reason to believe so.
 
 I don't buy the argument that 
  only complex brains or computations can experience pain either: when I was 
  a child I wasn't 
  as smart as I am now, but I recall that it hurt a lot more and I was much 
  more likely to cry when 
  I cut myself. 
  
  Stathis Papaioannou
 
 You write as though we know nothing about the physical basis of pain and 
 fear.  There 
 is a lot of empirical evidence about what prevents pain in humans; you can 
 even get a degree in anaesthesiology.  Fear can be induced by psychotropic 
 drugs and relieved by whisky.
 
 Brent Meeker

But can you comment on the difference between your own subjective experience of 
fear or 
pain compared to that of a rabbit, a fish, or something even more alien? I know 
we can say that 
when you prod a fish with stimulus A it responds by releasing hormones B, C and 
D and swishing its 
tail about in pattern E, F or G according to the time of day and the phases of 
the moon, or whatever, 
and furthermore that these hormones and behaviours are similar to those in 
human responses to 
similar stimuli - but what is the fish feeling?

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



The difference between a 'chair' concept and a 'mathematical concept' ;)

2006-09-12 Thread marc . geddes

But this only shows that mathematical objects exist in the sense that chair 
exists;
as an abstraction from chairs.  So chair isn't identical with any particular 
chair.

Brent Meeker


What follows is actually a very important and profound metaphysical
point, absolutely fundamental for understanding platonism and reality
theory.

Both the *concept* of a chair and mathematical concepts are *abstract*
things.  But there's a big difference.  In the case of the chair
concept, it's simply a human creation - it's simply a word we humans
use to summarize high-level properties of physical arrangements of
matter.  There are no 'chairs' in reality, only in our heads.  We can
see this by noting the fact that we can easily dispense with the 'chair
concept' and simply use physics descriptions instead.  So in the case
of the 'chair' concept, we're obviously dealing with a human construct.


Critical point:  The 'chair' concept is only a (human) cognitive
category, NOT a metaphysical or ontological category.

Mathematical concepts are quite different.  The key difference is that
we *cannot* in fact dispense with mathematical descriptions and replace
them with something else.  We cannot *eliminate* mathematical concepts
from our theories like we can with say 'chair' concepts.  And this is
the argument for regarding mathematical concepts as existing 'out
there' and not just in our heads.  There are two steps to the argument
for thinking that mathematical entities are real:

(1)  A general mathematical category is not the same as any specific
physical thing
AND
(2)  Mathematical entities cannot be removed from our descriptions and
replaced with something else (the argument from indispensability).

It's true that both 'chair' concepts (for example) and math concepts
are *abstract*, but the big difference is that for a 'chair' concept,
(1) is true, but not (2).  For mathematical concepts both (1) AND (2)
are true.

There's another way of clarifying the difference between the 'chair'
concept and math concepts.  Math concepts are *universal* in scope
(applicable everywhere - we cannot remove them from our theories), whereas
the 'chair' concept is a cultural construct applicable only in human
domains.

To make this even clearer, pretend that all of reality is Java Code.
It's true that both a 'chair' *concept* and a 'math' concept are
abstractions, and therefore *classes*, but the difference between a
'chair' concept and a 'math' concept is this:  'Math' is a *public
class* (an abstract category which can be applied everywhere in
reality), whereas a 'chair' concept is a *private* class, applicable
only in specific locations inside reality (in this case inside human
heads).

Reality Java Code for a math concept:
public class Math { }

Reality Java Code for a chair concept:
private class Chair { }

Big difference!
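The metaphor can be pushed into actual, compilable Java as a sketch. Everything here (the `Reality` wrapper, the method names, the toy `isChair` check) is invented for illustration; note that Java only allows `private` on *nested* classes, which happens to fit the point: a private 'chair' concept can live only inside some enclosing scope (a human head), while a public 'math' concept is visible everywhere.

```java
// Sketch of the post's metaphor in compilable Java. Names are illustrative
// inventions, not anything from the post.
public class Reality {

    // Universal concept: accessible from anywhere in "reality".
    public static class MathConcept {
        public int add(int a, int b) { return a + b; }
    }

    // Parochial concept: Java forces `private` classes to be nested,
    // i.e. usable only inside the enclosing scope.
    private static class ChairConcept {
        boolean isChair(String description) {
            return description.contains("seat");
        }
    }

    public static void main(String[] args) {
        MathConcept math = new MathConcept();    // visible everywhere
        ChairConcept chair = new ChairConcept(); // visible only inside Reality
        System.out.println(math.add(3, 2));      // prints 5
        System.out.println(chair.isChair("a seat with four legs")); // prints true
    }
}
```

The compiler itself enforces the asymmetry: code outside `Reality` can instantiate `Reality.MathConcept`, but `ChairConcept` is simply not in scope.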

The critical and profound point if we accept this argument, is this:

There is NO difference between *epistemological* and *metaphysical*
categories in the cases where we are dealing with cognitive categories
which are universal in scope.  Math concepts of universal applicability
are BOTH epistemological tools AND metaphysical or ontological
categories.  One needs to think about this carefully to realize just
how important this is.





RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Brent Meeker writes:

 I think it goes against standard computationalism if you say that a 
 conscious 
 computation has some inherent structural property. Opponents of 
 computationalism 
 have used the absurdity of the conclusion that anything implements any 
 conscious 
 computation as evidence that there is something special and 
 non-computational 
 about the brain. Maybe they're right.
 
 Stathis Papaioannou
 
 Why not reject the idea that any computation implements every possible 
 computation 
 (which seems absurd to me)?  Then allow that only computations with some 
 special 
 structure are conscious.
  
  
  It's possible, but once you start in that direction you can say that only 
  computations 
  implemented on this machine rather than that machine can be conscious. You 
  need the 
  hardware in order to specify structure, unless you can think of a God-given 
  programming 
  language against which candidate computations can be measured.
 
 I regard that as a feature - not a bug. :-)
 
 Disembodied computation doesn't quite seem absurd - but our empirical sample 
 argues 
 for embodiment.
 
 Brent Meeker

I don't have a clear idea in my mind of disembodied computation except in 
rather simple cases, 
like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
can also be implemented 
so we can interact with it, as when there is a collection of 5 oranges, or 3 
oranges and 2 apples, 
or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite variety. 
The difficulty is that if we 
say that 3+2=5 as exemplified by 3 oranges and 2 apples is conscious, then 
should we also say 
that the pairs+triplets of fruit are also conscious? If so, where do we draw 
the line? That is what I mean 
when I say that any computation can map onto any physical system. The physical 
structure and activity 
of computer A implementing program a may be completely different to that of 
computer B implementing 
program b, but program b may be an emulation of program a, which should make 
the two machines 
functionally equivalent and, under computationalism, equivalently conscious. 
Maybe this is wrong, e.g. 
there is something special about the insulation in the wires of machine A, so 
that only A can be conscious. 
But that is no longer computationalism.
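The claim that "any computation can map onto any physical system" can be made concrete with a toy sketch. The state names and target trace below are invented for illustration; the point is that the post-hoc lookup table, not the physical system, carries all the computational content.

```python
# Sketch of the Putnam/Searle-style mapping argument: any sequence of
# distinct physical states can be "interpreted" as any computation of the
# same length, simply by defining the right lookup table after the fact.

def cook_mapping(physical_trace, target_trace):
    """Build a state-to-state mapping under which the physical system
    'implements' the target computation."""
    assert len(physical_trace) == len(target_trace)
    return dict(zip(physical_trace, target_trace))

# Any old physical process, e.g. birds landing in trees:
physical_trace = ["bird-on-oak", "bird-on-palm", "two-birds-on-fig"]

# The computation we want to find in it: the abacus computing 3 + 2 = 5.
target_trace = ["load 3", "add 2", "halt with 5"]

mapping = cook_mapping(physical_trace, target_trace)
interpreted = [mapping[s] for s in physical_trace]
print(interpreted)  # ['load 3', 'add 2', 'halt with 5']
```

The catch, which is what the thread is arguing over, is that all the work is done by `mapping` rather than by the forest or the abacus: one side concludes that every physical system therefore implements every computation, the other that post-hoc mappings of this kind are not genuine implementation.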

Stathis Papaioannou



RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Colin Hales writes:

 Please consider the plight of the zombie scientist with a huge set of
 sensory feeds and similar set of effectors. All carry similar signal
 encoding and all, in themselves, bestow no experiential qualities on the
 zombie.
 
 Add a capacity to detect regularity in the sensory feeds.
 Add a scientific goal-seeking behaviour.
 
 Note that this zombie...
 a) has the internal life of a dreamless sleep
 b) has no concept or percept of body or periphery
 c) has no concept that it is embedded in a universe.
 
 I put it to you that science (the extraction of regularity) is the science
 of zombie sensory fields, not the science of the natural world outside the
 zombie scientist. No amount of creativity (except maybe random choices)
 would ever lead to any abstraction of the outside world that gave it the
 ability to handle novelty in the natural world outside the zombie scientist.
 
 No matter how sophisticated the sensory feeds and any guesswork as to a
 model (abstraction) of the universe, the zombie would eventually find
 novelty invisible because the sensory feeds fail to depict the novelty, i.e.
 same sensory feeds for different behaviour of the natural world.
 
 Technology built by a zombie scientist would replicate zombie sensory feeds,
 not deliver an independently operating novel chunk of hardware with a
 defined function(if the idea of function even has meaning in this instance).
 
 The purpose of consciousness is, IMO, to endow the cognitive agent with at
 least a repeatable (not accurate!) simile of the universe outside the
 cognitive agent so that novelty can be handled. Only then can the zombie
 scientist detect arbitrary levels of novelty and do open ended science (or
 survive in the wild world of novel environmental circumstance).
 
 In the absence of the functionality of phenomenal consciousness, and with
 finite sensory feeds, you cannot construct any world-model (abstraction) in
 the form of an innate (a priori) belief system that will deliver an endless
 ability to discriminate novelty. In a very Gödelian way, a limit would
 eventually be reached where the abstracted model could not make any prediction
 that can be detected. The zombie is, in a very real way, faced with 'truths'
 that exist but can't be accessed/perceived. As such its behaviour will be
 fundamentally fragile in the face of novelty (just like all computer
 programs are).
 ---
 Just to make the zombie a little more real... consider the industrial
 control system computer. I have designed, installed hundreds and wired up
 tens (hundreds?) of thousands of sensors and an unthinkable number of
 kilometers of cables. (NEVER again!) In all cases I put it to you that the
 phenomenal content of sensory connections may, at best, be characterised as
 whatever it is like to have electrons crash through wires, for that is what
 is actually going on. As far as the internal life of the CPU is concerned...
 whatever it is like to be an electrically noisy hot rock, regardless of the
 program... although the character of the noise may alter with different
 programs!
 
 I am a zombie expert! No that didn't come out right...erm
 perhaps... I think I might be a world expert in zombies yes, that's
 better.
 :-)
 Colin Hales

I'm not sure I understand why the zombie would be unable to respond to any 
situation it was likely to encounter. Doing science and philosophy is just a 
happy 
side-effect of a brain designed to help its owner survive and reproduce. Do you 
think it would be impossible to program a computer to behave like an insect, or 
a 
newborn infant, for example? You could add a random number generator to make 
its behaviour less predictable (so predators can't catch it and parents don't 
get 
complacent) or to help it decide what to do in a truly novel situation. 

Stathis Papaioannou



Re: The Mathematico-Cognition Reality Theory (MCRT) Ver 6.0

2006-09-12 Thread Bruno Marchal


Le 08-sept.-06, à 05:42, [EMAIL PROTECTED] a écrit :

snip

 It must consist of the 'movement' of mathematical forms through
 state space.  The only conclusion that can be drawn from this is that
 mathematical truth is not fixed, but can vary with time - because
 that's exactly what 'the movement of mathematical forms through
 state space' represents... the shifting of mathematical truth.

 And I suggest *that Qualia are precisely these abstract processes* !!!

 Hope this all makes a bit more sense.


I would certainly encourage you in that direction, although I am not 
sure you are aware that even modern math is going in a similar 
direction. Seeing qualia as mathematical *motions*, as you said in 
another post, can be related in a precise way with the arithmetical 
hypostases (comp notions of n-person) once you realize that modal logic 
is already a way to tackle notions of shifting mathematical truth: once 
for each world in a multiplicity (sometimes a continuum) of worlds.
Modal logic, but also Cohen's forcing technique in set theory, have led 
to a vast literature on variable truth.
The MWI itself can be seen in that context too. The whole category 
approach to math and logic can also be seen as a way to study variable 
notions of truth, especially through the notion of topos (boolean valued 
or not). All that you say, as far as I understand it, can, and perhaps 
should, be recast in such a frame. Note that with comp, for technical 
reasons which I intuit only through my understanding of quantum 
mechanics, the quanta appear themselves to be sharable (first person 
plural) qualia having relational and somehow variable truth values 
attached to them too.
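Bruno's point that modal logic already models "shifting truth" (one valuation per world) can be illustrated with a minimal Kripke-model evaluator. The worlds, accessibility relation, and valuation below are toy inventions, not anything derived from comp or the hypostases.

```python
# Minimal Kripke-model evaluator: an atom's truth varies from world to
# world, and "necessarily p" (box p) holds at w iff p holds at every
# world accessible from w. Worlds/relation/valuation are toy data.

worlds = ["w1", "w2", "w3"]
access = {"w1": ["w2", "w3"], "w2": ["w2"], "w3": []}
# Valuation: the set of worlds at which each atom is true.
true_at = {"p": {"w2", "w3"}}

def holds(world, formula):
    kind = formula[0]
    if kind == "atom":
        return world in true_at[formula[1]]
    if kind == "box":  # necessity: true in all accessible worlds
        return all(holds(v, formula[1]) for v in access[world])
    raise ValueError(kind)

p = ("atom", "p")
print(holds("w1", p))           # False: p fails at w1 itself
print(holds("w1", ("box", p)))  # True: p holds at both w2 and w3
print(holds("w3", ("box", p)))  # True: vacuously, w3 accesses no world
```

So "p is true" has no single answer in the model; it is true once for each world in the multiplicity, which is exactly the sense of variable truth the paragraph appeals to.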

The advantage of modal logic and category theory is that such a variable 
truth approach can be based on the common, non-variable notion of 
mathematical truth, making it possible to prevent the extreme relativism 
which often appears in corresponding approaches in less rigorous 
philosophical works. I think this is not a problem for you, especially 
seeing your today's post (with which I do agree).

Technical remark: all arithmetical hypostases (Gödel-Löbian derived 
notions of n-persons) come equipped with their own notions of 
multiverses (not all are Kripkean ones), and so they are all equipped 
with a canonical notion of variable truth, but only the 1-person 
hypostase (and probably the sensible matter hypostase) gets a 
temporal-like (albeit bifurcating) notion of variability. With the other 
hypostases, truth varies, not in a temporal way but in a more abstract 
and logical way.

With those remarks what you say makes sense for me,

Bruno


http://iridia.ulb.ac.be/~marchal/





RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Brent Meeker writes (quoting SP):

  Maybe this is a copout, but I just don't think it is even logically 
  possible to explain what consciousness 
  *is* unless you have it. 
 
 Not being *logically* possible means entailing a contradiction - I doubt 
 that.  But 
 anyway you do have it and you think I do because of the way we interact.  So 
 if you 
 interacted the same way with a computer and you further found out that the 
 computer 
 was a neural network that had learned through interaction with people over a 
 period 
 of years, you'd probably infer that the computer was conscious - at least you 
 wouldn't be sure it wasn't.

True, but I could still only imagine that it experiences what I experience 
because I already know what I 
experience. I don't know what my current computer experiences, if anything, 
because I'm not very much 
like it.
 
 It's like the problem of explaining vision to a blind man: he might be the 
 world's 
  greatest scientific expert on it but still have zero idea of what it is 
  like to see - and that's even though 
  he shares most of the rest of his cognitive structure with other humans, 
  and can understand analogies 
  using other sensations. Knowing what sort of program a conscious computer 
  would have to run to be 
  conscious, what the purpose of consciousness is, and so on, does not help 
  me to understand what the 
  computer would be experiencing, except by analogy with what I myself 
  experience. 
 
 But that's true of everything.  Suppose we knew a lot more about brains and 
 we 
 created an intelligent computer using brain-like functional architecture and 
 it acted 
 like a conscious human being, then I'd say we understood its consciousness 
 better 
 than we understand quantum field theory or global economics.

We would understand it in a third person sense but not in a first person sense, 
except by analogy with our 
own first person experience. Consciousness is the difference between what can 
be known by observing an 
entity and what can be known by being the entity, or something like the entity, 
yourself. 

Stathis Papaioannou



Re: The difference between a 'chair' concept and a 'mathematical concept' ;)

2006-09-12 Thread David Nyman

[EMAIL PROTECTED] wrote:

 (1)  A general mathematical category is not the same as any specific
 physical thing

But why can't it be reduced to classes of specific physical things? How
can you show that it is necessary for anything corresponding to this
description to 'exist' apart from its instantiations as documented
procedures and actual occurrences of their application? In this case:

(2)  Mathematical entities cannot be removed from our descriptions and
 replaced with something else (the argument from indispensability).

would be false, though such removal would be inconvenient (as would
'chair' for that matter). A 'mathematical entity' would then merely
refer to the classes of all descriptions, and all actual occurrences of
the application, of a given procedure - i.e. a human cognitive category
like 'chair', although as you say of greater generality.

David





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
 Peter Jones writes:

  Stathis Papaioannou wrote:
 
  Now, suppose some more complex variant of 3+2=5 implemented on your 
 abacus has consciousness associated with it, which is just one of the 
 tenets of computationalism. Some time later, you are walking in the 
 Amazon rain forest and notice that
 under a certain mapping
   
   
 of birds to beads and trees to wires, the forest is implementing the 
 same computation as your abacus was. So if your abacus was conscious, 
 and computationalism is true, the tree-bird sytem should also be 
 conscious.
   
No necessarily, because the mapping is required too. Why should
it still be conscious if no one is around to make the mapping?
  
   Are you claiming that a conscious machine stops being conscious if its 
   designers die
   and all the information about how it works is lost?
 
  You are, if anyone is. I don't agree that computation *must* be
  interpreted,
  although they *can* be re-interpreted.

 What I claim is this:

 A computation does not *need* to be interpreted, it just is. However, a 
 computation
 does need to be interpreted, or interact with its environment in some way, if 
 it is to be
 interesting or meaningful.

A computation other than the one you are running needs to be
interpreted by you
to be meaningful to you. The computation you are running is useful
to you because it keeps you alive.

 By analogy, a string of characters is a string of characters
 whether or not anyone interprets it, but it is not interesting or meaningful 
 unless it is
 interpreted. But if a computation, or for that matter a string of characters, 
 is conscious,
 then it is interesting and meaningful in at least one sense in the absence of 
 an external
 observer: it is interesting and meaningful to itself. If it were not, then it 
 wouldn't be
 conscious. The conscious things in the world have an internal life, a first 
 person
 phenomenal experience, a certain ineffable something, whatever you want to 
 call it,
 while the unconscious things do not. That is the difference between them.

Which they manage to be aware of without the existence of an external
observer,
so one of your premises must be wrong.

 Stathis Papaioannou
 _
 Be one of the first to try Windows Live Mail.
 http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:

 That's what I'm saying, but I certainly don't think everyone agrees with me 
 on the list, and
 I'm not completely decided as to which of the three is more absurd: every 
 physical system
 implements every conscious computation, no physical system implements any 
 conscious
 computation (they are all implemented non-physically in Platonia), or the 
 idea that a
 computation can be conscious in the first place.


You haven't made it clear why you don't accept that every physical
system
implements one computation, whether it is a
conscious computation or not. I don't see what
contradicts it.





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
 Brent meeker writes:

  Stathis Papaioannou wrote:
   Peter Jones writes:

  We should ask ourselves how do we know the thermometer isn't conscious of
  the temperature?  It seems that the answer has been that its state or
  activity *could* be interpreted in many ways other than indicating the
  temperature; therefore it must be said to be unconscious of the temperature,
  or we must allow that it implements all conscious thought (or at least all
  for which there is a possible interpretative mapping).  But I see its state
  and activity as relative to our shared environment; and this greatly
  constrains what it can be said to compute, e.g. the temperature, the
  expansion coefficient of Hg...  With this constraint, I think there is no
  problem in saying the thermometer is conscious at the extremely low level
  of being aware of the temperature or the expansion coefficient of Hg or
  whatever else is within the constraint.

 I would basically agree with that. Consciousness would probably have to be a
 continuum if computationalism is true.

I don't think that follows remotely. It is true that it is vastly
better to interpret a column of mercury as a temperature-sensor than
a pressure-sensor or a radiation-sensor. That doesn't mean the
thermometer
knows that in itself.

Computationalism does not claim that every computation is conscious.

If consciousness supervenes on inherent, non-interpretation-dependent
features, it can supervene on features which are binary: either present
or absent.

For instance, whether a programme examines or modifies its own code is
surely
such a feature.
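One way to picture such an interpretation-independent feature is a toy interpreter (a sketch with invented instruction names, not a real machine) whose programs can examine and overwrite their own instruction list. Whether a given program ever does so is a binary, structural fact about its execution, fixed before any outside observer interprets anything:

```python
def run(program):
    """Toy interpreter whose programs can touch their own code.

    ("set", addr, value) overwrites the instruction stored at addr;
    ("emit", addr) records the instruction stored at addr; anything
    else is a no-op. Whether a program ever executes a "set" aimed at
    its own instruction list is a structural fact about it.
    """
    trace = []
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "set":
            program[op[1]] = op[2]        # self-modification
        elif op[0] == "emit":
            trace.append(program[op[1]])  # self-examination
        pc += 1
    return trace

# This program rewrites its own third instruction before reaching it.
prog = [("set", 2, ("emit", 0)), ("emit", 1), ("noop",)]
result = run(prog)
```

Running it, the third instruction executed is the one the program wrote there itself, so the trace ends with a copy of the program's own first instruction.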


 Even if computationalism were false and only those machines specially
 blessed by God were conscious, there would have to be a continuum across
 different species and within the lifespan of an individual from birth to
 death. The possibility that consciousness comes on like a light at some
 point in your life, or at some point in the evolution of a species, seems
 unlikely to me.

Surely it comes on like a light whenever you wake up.





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
 Brent meeker writes:

  I think it goes against standard computationalism if you say that a 
  conscious
  computation has some inherent structural property. Opponents of 
  computationalism
  have used the absurdity of the conclusion that anything implements any 
  conscious
  computation as evidence that there is something special and 
  non-computational
  about the brain. Maybe they're right.
  
  Stathis Papaioannou
  
  Why not reject the idea that any computation implements every possible 
  computation
  (which seems absurd to me)?  Then allow that only computations with some 
  special
  structure are conscious.
  
  
   It's possible, but once you start in that direction you can say that only 
   computations
   implemented on this machine rather than that machine can be conscious. 
   You need the
   hardware in order to specify structure, unless you can think of a 
   God-given programming
   language against which candidate computations can be measured.
 
  I regard that as a feature - not a bug. :-)
 
  Disembodied computation doesn't quite seem absurd - but our empirical 
  sample argues
  for embodiment.
 
  Brent Meeker

 I don't have a clear idea in my mind of disembodied computation except in 
 rather simple cases,
 like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
 can also be implemented
 so we can interact with it, as when there is a collection of 5 oranges, or 3 
 oranges and 2 apples,
 or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
 variety. The difficulty is that if we
 say that 3+2=5 as exemplified by 3 oranges and 2 apples is conscious, then 
 should we also say
 that the pairs+triplets of fruit are also conscious?

No, they are only subroutines.

  If so, where do we draw the line?

At specific structures.

 That is what I mean
 when I say that any computation can map onto any physical system. The 
 physical structure and activity
 of computer A implementing program a may be completely different to that of 
 computer B implementing
 program b, but program b may be an emulation of program a, which should make 
 the two machines
 functionally equivalent and, under computationalism, equivalently conscious.
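The point about emulation can be given a minimal sketch (a hypothetical example, not from the thread): program a runs "natively" as ordinary code, while program b expresses the same function as instructions for a toy stack machine that machine B steps through. The physical activity differs completely, but the function computed is identical.

```python
def program_a(x):
    """Program a, run natively: squares its input."""
    return x * x

# Program b: the same computation expressed as instructions for a toy
# stack machine (instruction names invented for this sketch).
PROGRAM_B = [("push_arg",), ("push_arg",), ("mul",)]

def machine_b(code, x):
    """Machine B: interprets stack-machine code one instruction at a time."""
    stack = []
    for op in code:
        if op[0] == "push_arg":
            stack.append(x)
        elif op[0] == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# Functionally equivalent despite entirely different "physical" activity.
for x in range(6):
    assert program_a(x) == machine_b(PROGRAM_B, x)
```

No baroque re-interpretation is needed here: the equivalence is checked input by input, which is the unproblematic kind of functional equivalence 1Z's reply appeals to.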

So? If the functional equivalence doesn't depend on a baroque
re-interpretation, where is the problem?

 Maybe this is wrong, eg.
 there is something special about the insulation in the wires of machine A, so 
 that only A can be conscious.
 But that is no longer computationalism.

No. But what would force that conclusion on us? Why can't
consciousness attach to features more general than hardware, but less
general than one of your re-interpretations?

 Stathis Papaioannou





Re: computationalism and supervenience

2006-09-12 Thread 1Z


Stathis Papaioannou wrote:
 Brent Meeker writes:

  I could make a robot that, having suitable thermocouples, would quickly
  withdraw its hand from a fire, but not be conscious of it.  Even if I
  provide the robot with feelings, i.e. judgements about
  good/bad/pain/pleasure, I'm not sure it would be conscious.  But if I
  provide it with attention and memory, so that it noted the painful event
  as important and necessary to remember because of its strong negative
  affect, then I think it would be conscious.
  
  
   It's interesting that people actually withdraw their hand from the fire
   *before* they experience the pain. The withdrawal is a reflex, presumably
   evolved in organisms with the most primitive central nervous systems,
   while the pain seems to be there as an afterthought to teach us a lesson
   so we won't do it again. Thus, from consideration of evolutionary utility,
   consciousness does indeed seem to be a side-effect of memory and learning.
 
  Even more curious, volitional action also occurs before one is aware of it. 
  Are you
  familiar with the experiments of Benjamin Libet and Grey Walter?

 These experiments showed that in apparently voluntarily initiated motion, 
 motor cortex activity
 actually preceded the subject's awareness of his intention by a substantial 
 fraction of a second.
 In other words, we act first, then decide to act.

Does Benjamin Libet's Research Empirically Disprove Free Will?
Scientifically informed sceptics about FW often quote a famous
experiment by Benjamin Libet, which supposedly shows that a kind of
signal called a Readiness Potential, detectable by electrodes,
precedes conscious decisions and is a reliable indicator of the
decision, and thus -- so the claim goes -- indicates that our decisions
are not ours but made for us by unconscious processes.

In fact, Libet himself doesn't draw a sweepingly sceptical conclusion
from his own results. For one thing, Readiness Potentials are not
always followed by actions. He believes it is possible for
consciousness to intervene with a veto over the action:

The initiation of the freely voluntary act appears to begin in the
brain unconsciously, well before the person consciously knows he wants
to act! Is there, then, any role for conscious will in the performing
of a voluntary act?... To answer this it must be recognised that
conscious will (W) does appear about 150 milliseconds before the muscle
is activated, even though it follows the onset of the RP. An interval of
150 msec would allow enough time in which the conscious function might
affect the final outcome of the volitional process.

(Libet, quoted in Freedom Evolves by Daniel Dennett, p. 230 )

This suggests our conscious minds may not have free will but
rather free won't!

(V.S Ramachandran, quoted in Freedom Evolves by Daniel Dennett, p.
231 )

However, it is quite possible that the Libertarian doesn't need to
appeal to free won't to avoid the conclusion that free will doesn't
exist.

Libet tells when the RP occurs using electrodes. But how does Libet tell
when conscious decision-making occurs? He relies on the subject
reporting the position of the hand of a clock. But, as Dennett points
out, this is only a report of where it seems to the subject that
various things come together, not of the objective time at which they
occur.

Suppose Libet knows that your readiness potential peaked at second
6,810 of the experimental trial, and the clock dot was straight down
(which is what you reported you saw) at millisecond 7,005. How many
milliseconds should he have to add to this number to get the time you
were conscious of it? The light gets from your clock face to your
eyeball almost instantaneously, but the path of the signals from retina
through lateral geniculate nucleus to striate cortex takes 5 to 10
milliseconds -- a paltry fraction of the 300 milliseconds offset, but
how much longer does it take them to get to you? (Or are you located in
the striate cortex?) The visual signals have to be processed before
they arrive at wherever they need to arrive for you to make a conscious
decision of simultaneity. Libet's method presupposes, in short, that
we can locate the intersection of two trajectories: the
rising-to-consciousness of signals representing the decision to flick,
and the rising-to-consciousness of signals representing successive
clock-face orientations, so that these events occur side by side, as it
were, in a place where their simultaneity can be noted.

(Freedom Evolves by Daniel Dennett, p. 231 )

Dennett refers to an experiment in which Churchland showed that just
pressing a button when asked to signal when you see a flash of light
takes a normal subject about 350 milliseconds.

Does that mean that all actions taking longer than that are unconscious?

The brain processes stimuli over time, and the amount of time
depends on which information is being extracted for which purposes. A
top tennis player can set 

Russell's book

2006-09-12 Thread David Nyman

Hi Russell

I just received the book and have swiftly perused it (one of many
iterations I expect). I find it to be a clear presentation of your own
approach as well as a fine exposition of many topics from the list that
had me baffled. A couple of things immediately occur:

1) QTI - I must say until reading your remarks (e.g. re pension plans)
the possible personal consequences of QTI hadn't really struck me. If
QTI is true, there is a fundamental asymmetry between the 1st and
3rd-person povs vis-a-vis personal longevity (at least the longevity of
consciousness), and this seems to imply that one should take seriously
the prospect of being around in some form far longer than generally
assumed from a purely 3rd-person perspective. This has obvious
implications for retirement planning in general and avoidance of the
more egregious cul-de-sac situations. On the other hand, short of
outright lunacy vis-a-vis personal safety, it also seems to imply that
from the 1st-person pov we are likely to come through (albeit possibly
in less-than-perfect shape) even apparently minimally survivable
situations. This struck me particularly forcibly while watching the
9/11 re-runs on TV last night.

In effect, we are being presented with a kind of 'yes doctor' in
everyday life. Do you find that these considerations affect your own
behaviour in any way?

2) RSSA vs ASSA - Isn't it the case that all 'absolute' self samples
will appear to be 'relative' (i.e. to their own content) and hence
1st-person experience can be 'time-like' without the need for
'objective' sequencing of observer moments? If the 'pov' is that of the
multiverse can't we simply treat all 1st-person experience as the
'absolute sampling' of all povs compresently?

David





Re: Arithmetical Realism

2006-09-12 Thread 1Z


Bruno Marchal wrote:

 Le 29-août-06, à 20:45, 1Z a écrit :



  The version of AR that is supported by comp
  only makes a commitment about  mind-independent *truth*. The idea
  that the mind-independent truth of mathematical propositions
  entails the mind-independent *existence* of mathematical objects is
  a very contentious and substantive claim.


 You have not yet answered my question: what difference are you making
 between there exist a prime number in platonia and the truth of the
 proposition asserting the *existence* of a prime number is independent
 of me, you, and all contingencies ?

"P is true" is not different to P. That is not the difference I am
making.

I'm making a difference between what "exists" means in mathematical
sentences and what it means in empirical sentences (and what it means
in fictional contexts...)


The logical case for mathematical Platonism is based on the idea
that mathematical statements are true, and make existence claims.
That they are true is not disputed by the anti-Platonist, who
must therefore claim that mathematical existence claims are somehow
weaker than other existence claims -- perhaps merely metaphorical.
That the word "exists" means different things in different contexts
is easily established.

For one thing, this is already conceded by Platonists! Platonists think
Platonic existence is eternal, immaterial, non-spatial, and so on,
unlike the earthly existence of material bodies. For another,
we are already used to contextualising the meaning of "exists".
We agree with both: helicopters exist; and helicopters
don't exist in Middle Earth. (People who base their entire
anti-Platonic philosophy on this are called fictionalists. However,
mathematics is not a fiction, because it is not a free creation.
Mathematicians are constrained by consistency and non-contradiction
in a way that authors are not. Dr Watson's fictional existence
is intact despite the fact that he is sometimes called John
and sometimes James in Conan Doyle's stories.)

The epistemic case for mathematical Platonism is argued on the basis
of the objective nature of mathematical truth. Superficially, it seems
persuasive that objectivity requires objects. However, the basic case
for the objectivity of mathematics is the tendency of mathematicians to
agree about the answers to mathematical problems; this can be explained
by noting that mathematical logic is based on axioms and rules of
inference, and different mathematicians following the same rules will
tend to get the same answers, like different computers running the same
problem. (There is also disagreement about some axioms, such as the
Axiom of Choice, and different mathematicians with different attitudes
about the AoC will tend to get different answers -- a phenomenon which
is easily explained by the formalist view I am taking here.)

The semantic case for mathematical Platonism is based on the idea
that the terms in a mathematical sentence must mean something,
and therefore must refer to objects. It can be argued on
general linguistic grounds that not all meaning is reference
to some kind of object outside the head: some meaning is sense,
some is reference. That establishes the possibility that mathematical
terms do not have references. What establishes it as likely,
and not merely possible, is the observation that nothing like
empirical investigation is needed to establish the truth
of mathematical statements. Mathematical truth is arrived at by a
purely conceptual process, which is what would be expected if
mathematical meaning were restricted to sense, the "in the head"
component of meaning.


A possible counter-argument by the Platonist is that the downgrading of
mathematical existence to a mere metaphor is arbitrary. The
anti-Platonist must show that a consistent standard is being applied.
This it is possible to do; the standard is to take the meaning of
existence in the context of a particular proposition to relate to the
means of justification of the proposition. Since ordinary statements
are confirmed empirically, "exists" means "can be perceived" in that
context. Since sufficient grounds for asserting the existence of
mathematical objects are that it does not contradict anything else in
mathematics, mathematical existence just amounts to conceptual
non-contradictoriness.

(Incidentally, this approach answers a question about mathematical and
empirical truth. The anti-Platonist wants the two kinds of truth to be
different, but also needs them to be related, so as to avoid the charge
that one class of statement is not true at all. This can be achieved
because empirical statements rest on non-contradiction in order to
achieve correspondence. If an empirical observation fails to correspond
to a statement, there is a contradiction between them. Thus
non-contradiction is a necessary but insufficient justification for
truth in empirical statements, but a sufficient one for mathematical
statements.)



  Where is it shown the UD exists?


 

Re: The difference between a 'chair' concept and a 'mathematical concept' ;)

2006-09-12 Thread Brent Meeker

[EMAIL PROTECTED] wrote:
But this only shows that mathematical objects exist in the sense that chair 
exists;
as a abstraction from chairs.  So chair isn't identical with any particular 
chair.

Brent Meeker
 
 
 
 What follows is actually a very important and profound metaphysical
 point, absolutely fundamental for understanding platonism and reality
 theory.
 
 Both the *concept* of a chair and mathematical concepts are *abstract*
 things.  But there's a big difference.  In the case of the chair
 concept, it's simply a human creation - it's simply a word we humans
 use to summarize high-level properties of physical arrangements of
 matter.  There are no 'chairs' in reality, only in our heads.  We can
 see this by noting the fact that we can easily dispense with the 'chair
 concept' and simply use physics descriptions instead.  So in the case
 of the 'chair' concept, we're obviously dealing with a human construct.
 
 
 Critical point:  The 'chair' concept is only a (human) cognitive
 category, NOT a metaphysical or ontological category.
 
 Mathematical concepts are quite different.  The key difference is that
 we *cannot* in fact dispense with mathematical descriptions and replace
 them with something else.  We cannot *eliminate* mathematical concepts
 from our theories like we can with say 'chair' concepts.  And this is
 the argument for regarding mathematical concepts as existing 'out
 there' and not just in our heads.  There are two steps to the argument
 for thinking that mathematical entities are real:
 
 (1)  A general mathematical category is not the same as any specific
 physical thing
 AND
 (2)  Mathematical entities cannot be removed from our descriptions and
 replaced with something else (the argument from indispensability).
 
 It's true that both 'chair' concepts (for example) and math concepts
 are *abstract*, but the big difference is that for a 'chair' concept,
 (1) is true, but not (2).  For mathematical concepts both (1) AND (2)
 are true.
 
 There's another way of clarifying the difference between the 'chair'
 concept and math concepts.  Math concepts are *universal* in scope
 (applicable everywhere - we cannot remove them from our theories) whereas
 the 'chair' concept is a cultural construct applicable only in human
 domains.
 
 To make this even clearer, pretend that all of reality is Java Code.
 It's true that both a 'chair' *concept* and a 'math' concept is an
 abstraction, and therefore a *class*, but the difference between a
 'chair' concept and a 'math' concept is this:  'Math' is a *public
 class* (an abstract category which can be applied everywhere in
 reality), whereas a 'chair' concept is a *private* class, applicable
 only in specific locations inside reality (in this case inside human
 heads).
 
 Reality Java Code for a math concept:
 PUBLIC CLASS MATH  ()
 
 Reality Java Code a chair concept:
 PRIVATE CLASS CHAIR ()
 
 Big difference!
 
 The critical and profound point if we accept this argument, is this:
 
 *There is NO difference between *epistemological* and *metaphysical*
 categories in the cases where we are dealing with cognitive categories
 which are universal in scope.  Math concepts of universal applicability
 are BOTH epistemological tools AND metaphysical or ontological
 categories.  One needs to think about this carefully to realize just
 how important this is.

It is an interesting point, but it's not so fundamental as you seem to think.  
We can 
do without 'chair' and 'table' etc.  But we can't do without 'this' and 'that'. 
Without distinguishing objects we couldn't count and we wouldn't have the 
integers. 
Language, logic, and math are human inventions just as chair is, cf. William S.
Cooper, The Evolution of Reason.  Probably they are nomologically necessary in the
sense that any sentient species that evolves would have to invent them. But
just 
because mathematics and logic are built into our language and are necessary to 
any 
language that we could recognize, does not show they are out there like the 
object 
we call 'that chair' is out there.  That chair would continue to exist even if 
all 
humans were wiped off the Earth - but the concept of 'chairs' wouldn't and 
neither 
would '2'.

Ontology is invented too.  Most ontologies put the chair 'out there' and math 
'in our 
heads'.  Some put the chair 'out there' and math in 'Mathematica' (I don't like 
to 
use 'Platonia' because Plato put chair in there too).  Java has its own
ontology, which we invented to reflect an idea of instances and classes.  There's nothing 
necessary about that as is easily seen from the fact that anything Java can do 
can 
also be done in Fortran or assembly or by a Turing machine.

Brent Meeker
The sciences do not try to explain, they hardly even try to  interpret, they 
mainly 
make models. By a model is meant a  mathematical construct which, with the 
addition 
of certain verbal  interpretations, describes observed phenomena. The 
justification 
of  such a 

Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Brent meeker writes:
 
 
I think it goes against standard computationalism if you say that a 
conscious 
computation has some inherent structural property. Opponents of 
computationalism 
have used the absurdity of the conclusion that anything implements any 
conscious 
computation as evidence that there is something special and 
non-computational 
about the brain. Maybe they're right.

Stathis Papaioannou

Why not reject the idea that any computation implements every possible 
computation 
(which seems absurd to me)?  Then allow that only computations with some 
special 
structure are conscious.


It's possible, but once you start in that direction you can say that only 
computations 
implemented on this machine rather than that machine can be conscious. You 
need the 
hardware in order to specify structure, unless you can think of a God-given 
programming 
language against which candidate computations can be measured.

I regard that as a feature - not a bug. :-)

Disembodied computation doesn't quite seem absurd - but our empirical sample 
argues 
for embodiment.

Brent Meeker
 
 
 I don't have a clear idea in my mind of disembodied computation except in 
 rather simple cases, 
 like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
 can also be implemented 
 so we can interact with it, as when there is a collection of 5 oranges, or 3 
 oranges and 2 apples, 
 or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
 variety. The difficulty is that if we 
 say that 3+2=5 as exemplified by 3 oranges and 2 apples is conscious, then 
 should we also say 
 that the pairs+triplets of fruit are also conscious? If so, where do we draw 
 the line? 

I'm not sure I understand your example.  Are you saying that by simply 
existing, two 
apples and 3 oranges compute 2+3=5?  If so I would disagree.  I would say it is 
our 
comprehending them as individual objects and also as a set that is the 
computation. 
Just hanging there on the trees they may be computing apple hanging on a 
tree, 
apple hanging on a tree,... but they're not computing 2+3=5.

That is what I mean 
 when I say that any computation can map onto any physical system. 

But as you've noted before the computation is almost all in the mapping.  And 
not 
just in the map, but in the application of the map - which is something we do.  
That 
action can't be abstracted away.  You can't just say there's a physical system 
and 
there's a manual that would map it into some computation and stop there as 
though the 
computation has been done.  The mapping, an action, still needs to be performed.

The physical structure and activity 
 of computer A implementing program a may be completely different to that of 
 computer B implementing 
 program b, but program b may be an emulation of program a, which should make 
 the two machines 
 functionally equivalent and, under computationalism, equivalently conscious. 

I don't see any problem with supposing that A and B are equally conscious (or 
unconscious).

Brent Meeker





Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
 Colin Hales writes:
 
 
Please consider the plight of the zombie scientist with a huge set of
sensory feeds and similar set of effectors. All carry similar signal
encoding and all, in themselves, bestow no experiential qualities on the
zombie.

Add a capacity to detect regularity in the sensory feeds.
Add a scientific goal-seeking behaviour.

Note that this zombie...
a) has the internal life of a dreamless sleep
b) has no concept or percept of body or periphery
c) has no concept that it is embedded in a universe.

I put it to you that science (the extraction of regularity) is the science
of zombie sensory fields, not the science of the natural world outside the
zombie scientist. No amount of creativity (except maybe random choices)
would ever lead to any abstraction of the outside world that gave it the
ability to handle novelty in the natural world outside the zombie scientist.

No matter how sophisticated the sensory feeds and any guesswork as to a
model (abstraction) of the universe, the zombie would eventually find
novelty invisible because the sensory feeds fail to depict the novelty, i.e.
the same sensory feeds for different behaviour of the natural world.

Technology built by a zombie scientist would replicate zombie sensory feeds,
not deliver an independently operating novel chunk of hardware with a
defined function(if the idea of function even has meaning in this instance).

The purpose of consciousness is, IMO, to endow the cognitive agent with at
least a repeatable (not accurate!) simile of the universe outside the
cognitive agent so that novelty can be handled. Only then can the zombie
scientist detect arbitrary levels of novelty and do open ended science (or
survive in the wild world of novel environmental circumstance).

In the absence of the functionality of phenomenal consciousness and with
finite sensory feeds you cannot construct any world-model (abstraction) in
the form of an innate (a-priori) belief system that will deliver an endless
ability to discriminate novelty. In a very Gödelian way, eventually a limit
would be reached where the abstracted model could not make any prediction that
can be detected. The zombie is, in a very real way, faced with 'truths' that
exist but can't be accessed/perceived. As such its behaviour will be
fundamentally fragile in the face of novelty (just like all computer
programs are).
---
Just to make the zombie a little more real... consider the industrial
control system computer. I have designed, installed hundreds and wired up
tens (hundreds?) of thousands of sensors and an unthinkable number of
kilometers of cables. (NEVER again!) In all cases I put it to you that the
phenomenal content of sensory connections may, at best, be characterised as
whatever it is like to have electrons crash through wires, for that is what
is actually going on. As far as the internal life of the CPU is concerned...
whatever it is like to be an electrically noisy hot rock, regardless of the
program... although the character of the noise may alter with different
programs!

I am a zombie expert! No that didn't come out right...erm
perhaps... I think I might be a world expert in zombies yes, that's
better.
:-)
Colin Hales
 
 
 I'm not sure I understand why the zombie would be unable to respond to any 
 situation it was likely to encounter. Doing science and philosophy is just a 
 happy 
 side-effect of a brain designed to help its owner survive and reproduce. Do 
 you 
 think it would be impossible to program a computer to behave like an insect, 
 or a 
 newborn infant, for example? You could add a random number generator to make 
 its behaviour less predictable (so predators can't catch it and parents don't 
 get 
 complacent) or to help it decide what to do in a truly novel situation. 
 
 Stathis Papaioannou

And after you had given it all these capabilities how would you know it was not 
conscious?

Brent Meeker

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



Re: Russell's book

2006-09-12 Thread Johnathan Corgan

David Nyman wrote:

[re: QTI]
 This has obvious
 implications for retirement planning in general and avoidance of the
 more egregious cul-de-sac situations. On the other hand, short of
 outright lunacy vis-a-vis personal safety, it also seems to imply that
 from the 1st-person pov we are likely to come through (albeit possibly
 in less-than-perfect shape) even apparently minimally survivable
 situations. This struck me particularly forcibly while watching the
 9/11 re-runs on TV last night.

It's the cul-de-sac situations that interest me.  Are there truly any?
Are there moments of consciousness which have no logically possible
continuation (while remaining conscious)?

It seems the canonical example is surviving a nearby nuclear detonation.
 One logical possibility is that all your constituent particles
quantum-tunnel away from the blast in time.

This would be of extremely low measure in absolute terms, but what about
the proportion of continuations that contain you as a conscious entity?

This also touches on a recent thread about how being of low measure
feels. If QTI is true, and I'm subject to a nuclear detonation, does it
matter if my possible continuations are of such a low relative measure?
Once I'm in them, would I feel any different and should I care?

These questions may reduce to something like, "Is there a lower limit to
the amplitude of the SWE?"

If measure is infinitely divisible, then is there any natural scale to
its absolute value?

I raised a similar question on the list a few months ago when Tookie
Williams was in the headlines and was eventually executed by the State of
California.  What possible continuations exist in this situation?

 In effect, we are being presented with a kind of 'yes doctor' in
 everyday life. Do you find that these considerations affect your own
 behaviour in any way?

A very interesting question.

If my expectation is that QTI is true and I'll be living for a very long
time, I may adjust my financial planning accordingly.  But QTI only
applies to my own first-person view; I'll be constantly shedding
branches where I did indeed die.  If I have any financial dependents, do
I provide for their welfare, even if they'll only exist forever outside
my ability to interact with them?

-Johnathan




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Brent meeker writes (quoting SP):
 
 
Maybe this is a copout, but I just don't think it is even logically possible 
to explain what consciousness 
*is* unless you have it. 

Not being *logically* possible means entailing a contradiction - I doubt 
that.  But 
anyway you do have it and you think I do because of the way we interact.  So 
if you 
interacted the same way with a computer and you further found out that the 
computer 
was a neural network that had learned through interaction with people over a 
period 
of years, you'd probably infer that the computer was conscious - at least you 
wouldn't be sure it wasn't.
 
 
 True, but I could still only imagine that it experiences what I experience 
 because I already know what I 
 experience. I don't know what my current computer experiences, if anything, 
 because I'm not very much 
 like it.
  
 
It's like the problem of explaining vision to a blind man: he might be the 
world's 
greatest scientific expert on it but still have zero idea of what it is like 
to see - and that's even though 
he shares most of the rest of his cognitive structure with other humans, and 
can understand analogies 
using other sensations. Knowing what sort of program a conscious computer 
would have to run to be 
conscious, what the purpose of consciousness is, and so on, does not help me 
to understand what the 
computer would be experiencing, except by analogy with what I myself 
experience. 

But that's true of everything.  Suppose we knew a lot more about brains and 
we 
created an intelligent computer using brain-like functional architecture and 
it acted 
like a conscious human being, then I'd say we understood its consciousness 
better 
than we understand quantum field theory or global economics.
 
 
 We would understand it in a third person sense but not in a first person 
 sense, except by analogy with our 
 own first person experience. Consciousness is the difference between what can 
 be known by observing an 
 entity and what can be known by being the entity, or something like the 
 entity, yourself. 
 
 Stathis Papaioannou

But you are simply positing that there is such a difference.  That's easy to do 
because we know so little about how brains work.  But consider the engine in 
your 
car.  Do you know what it's like to be the engine in your car?  You know a lot 
about 
it, but how do you know that you know all of it?  Does that mean your car 
engine is 
conscious?  I'd say yes it is (at a very low level) and you *can* know what 
it's like.

This is just an extreme example of the kind of special pleading you hear in 
politics - 
nobody can represent Black interests except a Black, no man can understand 
Feminism. 
  Can only children be pediatricians?

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

1Z wrote:
 
 Stathis Papaioannou wrote:
 
Brent meeker writes:


Stathis Papaioannou wrote:

Peter Jones writes:
 
 
We should ask ourselves how do we know the thermometer isn't conscious of the
temperature?  It seems that the answer has been that its state or activity 
*could*
be interpreted in many ways other than indicating the temperature; therefore 
it must
be said to be unconscious of the temperature or we must allow that it 
implements all
conscious thought (or at least all for which there is a possible 
interpretative
mapping).  But I see its state and activity as relative to our shared 
environment;
and this greatly constrains what it can be said to compute, e.g. the 
temperature,
the expansion coefficient of Hg...   With this constraint, then I think 
there is no
problem in saying the thermometer is conscious at the extremely low level of 
being
aware of the temperature or the expansion coefficient of Hg or whatever else 
is
within the constraint.

I would basically agree with that. Consciousness would probably have to be a 
continuum
if computationalism is true.
 
 
 I don't think that follows remotely. It is true that it is vastly
 better to interpret a column of mercury as a temperature-sensor than
 a pressure-sensor or a radiation-sensor. That doesn't mean the
 thermometer
 knows that in itself.
 
 Computationalism does not claim that every computation is conscious.
 
 If consciousness supervenes on inherent non-interpretation-dependent
 features,
 it can supervene on features which are binary, either present or
 absent.

It could, depending on what it is.  But that's why we need some independent 
operational definition of consciousness before we can say what has it and what 
doesn't.  It's pretty clear that there are degrees of consciousness.  My dog is 
aware 
of where he is and who he is relative to the family etc.  But I don't think he 
passes 
the mirror test.  So whether a thermometer is conscious or not is likely to be 
a 
matter of how we define and quantify consciousness.

 
 For instance, whether a programme examines or modifies its own code is
 surely
 such a feature.
 
 
 
Even if computationalism were false and only those machines
specially blessed by God were conscious there would have to be a continuum, 
across
different species and within the lifespan of an individual from birth to 
death. The possibility
that consciousness comes on like a light at some point in your life, or at 
some point in the
evolution of a species, seems unlikely to me.
 
 
 Surely it comes on like a light whenever you wake up.

Not at all.  If someone whispers your name while you're asleep, you will wake 
up - 
showing you were conscious of sounds and their meaning.

On the other hand, it does come on like a light (or a slow sunrise) when you 
come out 
of anesthesia.

Brent Meeker





Re: Russell's book

2006-09-12 Thread David Nyman

Johnathan Corgan wrote:

 If my expectation is that QTI is true and I'll be living for a very long
 time, I may adjust my financial planning accordingly.  But QTI only
 applies to my own first-person view; I'll be constantly shedding
 branches where I did indeed die.  If I have any financial dependents, do
 I provide for their welfare, even if they'll only exist forever outside
 my ability to interact with?

Is this in fact your expectation? And do you so plan? Forgive me if
this seems overly personal, but I'm fascinated to discover if anyone
actually acts on these beliefs.

David

 David Nyman wrote:

 [re: QTI]
  This has obvious
  implications for retirement planning in general and avoidance of the
  more egregious cul-de-sac situations. On the other hand, short of
  outright lunacy vis-a-vis personal safety, it also seems to imply that
  from the 1st-person pov we are likely to come through (albeit possibly
  in less-than-perfect shape) even apparently minimally survivable
  situations. This struck me particularly forcibly while watching the
  9/11 re-runs on TV last night.

 It's the cul-de-sac situations that interest me.  Are there truly any?
 Are there moments of consciousness which have no logically possible
 continuation (while remaining conscious?)

 It seems the canonical example is surviving a nearby nuclear detonation.
  One logical possibility is that all your constituent particles
 quantum-tunnel away from the blast in time.

 This would be of extremely low measure in absolute terms, but what about
 the proportion of continuations that contain you as a conscious entity?

 This also touches on a recent thread about how being of low measure
 feels. If QTI is true, and I'm subject to a nuclear detonation, does it
 matter if my possible continuations are of such a low relative measure?
 Once I'm in them, would I feel any different and should I care?

 These questions may reduce to something like, Is there a lower limit to
 the amplitude of the SWE?

 If measure is infinitely divisible, then is there any natural scale to
 its absolute value?

 I raised a similar question on the list a few months ago when Tookie
 Williams was in the headlines and was eventually executed by the State of
 California.  What possible continuations exist in this situation?

  In effect, we are being presented with a kind of 'yes doctor' in
  everyday life. Do you find that these considerations affect your own
  behaviour in any way?

 A very interesting question.

 If my expectation is that QTI is true and I'll be living for a very long
 time, I may adjust my financial planning accordingly.  But QTI only
 applies to my own first-person view; I'll be constantly shedding
 branches where I did indeed die.  If I have any financial dependents, do
 I provide for their welfare, even if they'll only exist forever outside
 my ability to interact with?
 
 -Johnathan





Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
 Brent meeker writes:
 
 
Stathis Papaioannou wrote:

Peter Jones writes:



Stathis Papaioannou wrote:



Like Bruno, I am not claiming that this is definitely the case, just that 
it is the case if
computationalism is true. Several philosophers (eg. Searle) have used the 
self-evident
absurdity of the idea as an argument demonstrating that computationalism 
is false -
that there is something non-computational about brains and consciousness. 
I have not
yet heard an argument that rejects this idea and saves computationalism.

[ rolls up sleeves ]

The idea is easily refuted if it can be shown that computation doesn't
require interpretation at all. It can also be refuted more circuitously
by showing that computation is not entirely a matter of interpretation.
In everythingism, everything is equal. If some computations (the ones
that don't depend on interpretation) are more equal than others, the way
is still open for the Somethingist to object that
interpretation-independent computations are really real, and the others
are mere possibilities.

The claim has been made that computation is not much use without an
interpretation. Well, if you define a computer as something that is used
by a human, that is true. It is also very problematic for the
computationalist claim that the human mind is a computer. Is the human
mind of use to a human? Well, yes, it helps us stay alive in various
ways. But that is more to do with reacting to a real-time environment
than performing abstract symbolic manipulations or elaborate
re-interpretations. (Computationalists need to be careful about how they
define computer. Under some perfectly reasonable definitions -- for
instance, defining a computer as a human invention -- computationalism
is trivially false).


I don't mean anything controversial (I think) when I refer to interpretation 
of 
computation. Take a mercury thermometer: it would still do its thing if all 
sentient life in the universe died out, or even if there were no sentient 
life to 
build it in the first place and by amazing luck mercury and glass had come 
together 
in just the right configuration. But if there were someone around to observe 
it and 
understand it, or if it were attached to a thermostat and heater, the 
thermometer 
would have extra meaning - the same thermometer, doing the same thermometer 
stuff. Now, if thermometers were conscious, then part of their thermometer 
stuff might include knowing what the temperature was - all by themselves, 
without 
benefit of external observer. 
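The thermometer example can be made concrete with a toy sketch: the same physical behaviour acquires its "extra meaning" only when an external system (here, a thermostat loop) interprets it. All names, numbers, and the linear column model below are illustrative assumptions, not anything claimed in the thread:

```python
# A minimal sketch: the mercury column "does its thing" regardless of
# observers; a reading only exists once an interpreter is attached.

def mercury_column_mm(temp_c: float) -> float:
    """Physical behaviour: column height tracks temperature linearly."""
    return 20.0 + 0.5 * temp_c  # happens with or without anyone watching

def thermostat(temp_c: float, setpoint_c: float = 20.0) -> str:
    """An external system that *interprets* the column as a temperature."""
    reading = (mercury_column_mm(temp_c) - 20.0) / 0.5
    return "heater on" if reading < setpoint_c else "heater off"

print(thermostat(15.0))  # heater on
print(thermostat(25.0))  # heater off
```

The dispute, of course, is whether a conscious thermometer could have the reading "for itself", with no interpreting loop at all.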

We should ask ourselves how do we know the thermometer isn't conscious of the 
temperature?  It seems that the answer has been that its state or activity 
*could* 
be interpreted in many ways other than indicating the temperature; therefore 
it must 
be said to be unconscious of the temperature or we must allow that it implements 
all 
conscious thought (or at least all for which there is a possible 
interpretative 
mapping).  But I see its state and activity as relative to our shared 
environment; 
and this greatly constrains what it can be said to compute, e.g. the 
temperature, 
the expansion coefficient of Hg...   With this constraint, then I think there 
is no 
problem in saying the thermometer is conscious at the extremely low level of 
being 
aware of the temperature or the expansion coefficient of Hg or whatever else 
is 
within the constraint.
 
 
 I would basically agree with that. Consciousness would probably have to be a 
 continuum 
 if computationalism is true. Even if computationalism were false and only 
 those machines 
 specially blessed by God were conscious there would have to be a continuum, 
 across
 different species and within the lifespan of an individual from birth to 
 death. The possibility 
 that consciousness comes on like a light at some point in your life, or at 
 some point in the 
 evolution of a species, seems unlikely to me.
 
 
Furthermore, if thermometers were conscious, they 
might be dreaming of temperatures, or contemplating the meaning of 
consciousness, 
again in the absence of external observers, and this time in the absence of 
interaction 
with the real world. 

This, then, is the difference between a computation and a conscious 
computation. If 
a computation is unconscious, it can only have meaning/use/interpretation in 
the eyes 
of a beholder or in its interaction with the environment. 

But this is a useless definition of the difference.  To apply it we have to know 
whether 
some putative conscious computation has meaning to itself; which we can only 
know by 
knowing whether it is conscious or not.  It makes consciousness ineffable and 
so 
makes the question of whether computationalism is true an insoluble mystery.
 
 
 That's what I have in mind.
  
 
Even worse, it makes it impossible for us to know whether we're talking about 
the same 
thing when we use the word "consciousness".
 
 
 I know what I mean, and you probably know what I mean 

Re: Russell's book

2006-09-12 Thread Johnathan Corgan

David Nyman wrote:

 Is this in fact your expectation? And do you so plan? Forgive me if
 this seems overly personal, but I'm fascinated to discover if anyone
 actually acts on these beliefs.

It's not overly personal; I brought it up in fact.

But personally, no, I don't act on these beliefs because they are not
mine.  That is, I've not established to my satisfaction that QTI is
correct.  However, I do have an intense interest and must admit I want
it to be true.  Alas, I may only find out when I look around and wonder
why I'm the only 150-year-old person :-)

It does seem to me the theory hinges on whether cul-de-sacs exist or
not, hence my earlier questioning.  I've already accepted the essential
underlying MWI explanation.

-Johnathan




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

1Z wrote:
...
 Dennett's idea of stored conscious volition is quite in line with our
 theory. Indeed, we would like to extend it in a way that Dennett does
 not. We would like to extend it to stored indeterminism. Any decision
 we make in exigent situations where we do not have the luxury of
 considered thought must be more-or-less deterministic -- must be
 more-or-less determined by our state of mind at the time -- if it is
 to be of any use at all to us. Otherwise we might as well toss a coin.
 But our state of mind at the time can be formed by rumination, training
 and so on over a long period, perhaps over a lifetime. As such it can
 contain elements of indeterminism in the positive sense -- of
 imagination and creativity, not mere caprice.

Right.  Even if it's determined, it's determined by who we are.

 
 This extension of Dennett's criticism of Libet (or rather the way
 Libet's results are used by free-will sceptics) gives us a way of
 answering Dennett's own criticisms of Robert Kane, a prominent defender
 of naturalistic Free Will.

I didn't refer to Libet and Grey Walter as refuting free will - I was well 
aware of 
Dennett's writings (and Stathis probably is too). But I think they show that the 
conscious feeling of making a decision and actually making the decision are 
different 
things; that most of decision-making is unconscious.  Which is exactly what 
you 
would expect based on a model of a computer logging its own decisions.  I 
actually 
found Grey Walter's experiments more convincing than Libet's.  It's too bad 
they 
aren't likely to be repeated.

Brent Meeker




Re: Russell's book

2006-09-12 Thread David Nyman

Johnathan Corgan wrote:

 It does seem to me the theory hinges on whether cul-de-sacs exist or
 not, hence my earlier questioning.  I've already accepted the essential
 underlying MWI explanation.

Yes, the question of cul-de-sacs is indeed interesting.  However, it
seems to me that they need only exist in a relative sense for it still
to be worthwhile to make a 'bet' (in the spirit of 'yes doctor') -
hence my point about avoiding 'insane' risks - perhaps like nuclear
blasts (incidentally this is strongly reminiscent of the 'infinite
improbability drive' for Douglas Adams fans). So long as there seemed
to be some plausible (even if very small) number of 'escape routes'
then it might be worth a punt. Your speculation re extremely small
measure is interesting in this context. Personally, I would expect some
sort of consciousness to survive in a non-zero branch, but in what
company?

David

 David Nyman wrote:

  Is this in fact your expectation? And do you so plan? Forgive me if
  this seems overly personal, but I'm fascinated to discover if anyone
  actually acts on these beliefs.

 It's not overly personal; I brought it up in fact.

 But personally, no, I don't act on these beliefs because they are not
 mine.  That is, I've not established to my satisfaction that QTI is
 correct.  However, I do have an intense interest and must admit I want
 it to be true.  Alas, I may only find out when I look around and wonder
 why I'm the only 150 year old person :-)

 It does seem to me the theory hinges on whether cul-de-sacs exist or
 not, hence my earlier questioning.  I've already accepted the essential
 underlying MWI explanation.
 
 -Johnathan





Re: Russell's book

2006-09-12 Thread Brent Meeker

Johnathan Corgan wrote:
 David Nyman wrote:
 
 [re: QTI]
 
This has obvious
implications for retirement planning in general and avoidance of the
more egregious cul-de-sac situations. On the other hand, short of
outright lunacy vis-a-vis personal safety, it also seems to imply that
from the 1st-person pov we are likely to come through (albeit possibly
in less-than-perfect shape) even apparently minimally survivable
situations. This struck me particularly forcibly while watching the
9/11 re-runs on TV last night.
 
 
 It's the cul-de-sac situations that interest me.  Are there truly any?
 Are there moments of consciousness which have no logically possible
 continuation (while remaining conscious?)
 
 It seems the canonical example is surviving a nearby nuclear detonation.
  One logical possibility is that all your constituent particles
 quantum-tunnel away from the blast in time.
 
 This would be of extremely low measure in absolute terms, but what about
 the proportion of continuations that contain you as a conscious entity?
 
 This also touches on a recent thread about how being of low measure
 feels. If QTI is true, and I'm subject to a nuclear detonation, does it
 matter if my possible continuations are of such a low relative measure?
 Once I'm in them, would I feel any different and should I care?
 
 These questions may reduce to something like, Is there a lower limit to
 the amplitude of the SWE?
 
 If measure is infinitely divisible, then is there any natural scale to
 its absolute value?

I think it is not and there is a lower limit below which cross terms in the 
density 
matrix must be strictly (not just FAPP) zero.  The Planck scale provides a 
lower 
bound on fundamental physical values.  So it makes sense to me that treating 
probability measures as a continuum is no more than a convenient approximation. 
 But 
I have no idea how to make that precise and testable.

Brent Meeker





Re: Russell's book

2006-09-12 Thread David Nyman

Johnathan Corgan wrote:

 QTI makes a big twist on this by removing from the numerator *and*
 denominator those outcomes where consciousness ceases.

Precisely. And this is what should bias one's choices in the case that
one is prepared to bet on the validity of QTI.

 Not sure what the question is.  Do you mean, what would things be like
 afterward?  Would it be worth it?

Yes, because this should also be taken into account before 'betting'
(at least in certain near-cul-de-sac circumstances). Any thoughts?

David
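The "big twist" Johnathan describes - removing the non-surviving outcomes from numerator and denominator alike - amounts to computing an expectation conditioned on survival. Here is a hedged numerical sketch; the function name and the measures/utilities are invented purely for illustration:

```python
# Expected utility over branches, optionally conditioned on survival
# (the QTI renormalisation: non-surviving branches drop out of both the
# numerator and the denominator).

def expected_utility(outcomes, condition_on_survival=False):
    """outcomes: iterable of (measure, utility, survives) triples."""
    pool = [o for o in outcomes if o[2]] if condition_on_survival else list(outcomes)
    total = sum(m for m, _, _ in pool)
    return sum(m * u for m, u, _ in pool) / total

# Toy nearby-nuclear-blast case: overwhelmingly fatal, with a sliver of
# battered survival (tunnelled away, in poor shape).
outcomes = [(0.999999, 0.0, False),   # death: excluded under QTI
            (0.000001, 0.2, True)]

print(expected_utility(outcomes))                              # tiny, unconditionally
print(expected_utility(outcomes, condition_on_survival=True))  # ~0.2 under QTI
```

The sketch makes the planning question vivid: the unconditional expectation is negligible, while the survival-conditioned one is dominated entirely by the quality of the surviving branches.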

 David Nyman wrote:

  So long as there seemed
  to be some plausible (even if very small) number of 'escape routes'
  then it might be worth a punt.

 From a 'yes doctor' bet point of view, this introduces the idea of
 relative expectation of different future outcomes, an idea hashed out
 here many many times.

 Personally I think it's rational to base one's current actions on the
 probability of expected outcome*value (maximum utility theory).  And I
 also think subjective probability should equate to proportion of
 measure.  (Others disagree with this way of measuring future expectation.)

 QTI makes a big twist on this by removing from the numerator *and*
 denominator those outcomes where consciousness ceases.

  Your speculation re extremely small
  measure is interesting in this context. Personally, I would expect some
  sort of consciousness to survive in a non-zero branch, but in what
  company?

 Not sure what the question is.  Do you mean, what would things be like
 afterward?  Would it be worth it?
 
 -Johnathan





Re: Russell's book

2006-09-12 Thread David Nyman

(This is the original post that seems somehow to have gone missing)

Hi Russell

I just received the book and have swiftly perused it (one of many
iterations I expect). I find it to be a clear presentation of your own
approach as well as a fine exposition of many topics from the list that
had me baffled. A couple of things immediately occur:

1) QTI - I must say until reading your remarks (e.g. re pension plans)
the possible personal consequences of QTI hadn't really struck me. If
QTI is true, there is a fundamental asymmetry between the 1st and
3rd-person povs vis-a-vis personal longevity (at least the longevity of
consciousness), and this seems to imply that one should take seriously
the prospect of being around in some form far longer than generally
assumed from a purely 3rd-person perspective. This has obvious
implications for retirement planning in general and avoidance of the
more egregious cul-de-sac situations. On the other hand, short of
outright lunacy vis-a-vis personal safety, it also seems to imply that
from the 1st-person pov we are likely to come through (albeit possibly
in less-than-perfect shape) even apparently minimally survivable
situations. This struck me particularly forcibly while watching the
9/11 re-runs on TV last night.

In effect, we are being presented with a kind of 'yes doctor' in
everyday life. Do you find that these considerations affect your own
behaviour in any way?

2) RSSA vs ASSA - Isn't it the case that all 'absolute' self samples
will appear to be 'relative' (i.e. to their own content) and hence
1st-person experience can be 'time-like' without the need for
'objective' sequencing of observer moments? If the 'pov' is that of the
multiverse can't we simply treat all 1st-person experience as the
'absolute sampling' of all povs compresently?

David





Re: Russell's book

2006-09-12 Thread Johnathan Corgan

Brent Meeker wrote:

 Everett, who originated the MWI, thought about QTI.  Although he never 
 explicitly said 
 he believed it, he led a very unhealthy lifestyle - smoking, drinking, eating 
 to 
 excess, never exercising - and he died young, of a heart attack IIRC.  So some 
 of his 
 acquaintances have speculated that he really did believe in QTI.

Well, that's not quite rational--what is the quality of life (utility)
that follows surviving a heart attack?

If QTI is true, and I'm going to live a very long time, it would not
only motivate me to plan for the long term, but also to be much more
careful about my health--I'll be living in this body for much longer
than ~73 years!

-Johnathan




Re: The difference between a 'chair' concept and a 'mathematical concept' ;)

2006-09-12 Thread 1Z


[EMAIL PROTECTED] wrote:

 Mathematical concepts are quite different.  The key difference is that
 we *cannot* in fact dispense with mathematical descriptions and replace
 them with something else.  We cannot *eliminate* mathematical concepts
 from our theories like we can with say 'chair' concepts.  And this is
 the argument for regarding mathematical concepts as existing 'out
 there' and not just in our heads.


Actually, it's an argument against doing so. If mathematical
terms referred to particular things, they would not be universally
applicable.
They are universally applicable because they don't refer to anything.





Re: Russell's book

2006-09-12 Thread Brent Meeker

Johnathan Corgan wrote:
 Brent Meeker wrote:
 
 
These questions may reduce to something like, Is there a lower limit to
the amplitude of the SWE?

If measure is infinitely divisible, then is there any natural scale to
its absolute value?

I think it is not and there is a lower limit below which cross terms in the 
density 
matrix must be strictly (not just FAPP) zero.  The Planck scale provides a 
lower 
bound on fundamental physical values.  So it makes sense to me that treating 
probability measures as a continuum is no more than a convenient 
approximation.  But 
I have no idea how to make that precise and testable.
 
 
 Measure ultimately having a fixed lower limit would, I think, be
 fatal to QTI.  But consider the following:
 
 At every successive moment our measure is decreasing, possibly by a very
 large fraction, depending on how you count it.  Every moment we branch
 into only one of a huge number of possibilities.  A moment here is on
 the order of a Planck time unit.

First, it may not be such a large factor.  All nearby trajectories in 
configuration space constructively interfere to produce quasi-classical 
evolution in certain bases.  So if we are essentially classical, and I think we 
are (cf. Tegmark's paper on the brain), then we are not decreasing in measure 
by MWI splitting on a Planckian or even millisecond time scale.  The evolution 
of our world is mostly deterministic.

Second, if there is a lower limit on the interference terms in the SE of the 
universe, then the density matrix gets diagonalized, and the MWI goes away.  
QM is, as Omnes says, a probabilistic theory, and it predicts probabilities.  
Probabilities mean something happens and other things don't.  So we don't risk 
vanishing.  The fact that our probability seems to become vanishingly small is 
only an artifact of what we take as the domain of possibilities, and it is no 
different from our improbability pre-QM.

But undoubtedly there are mathematical difficulties with assuming a lower bound 
on probabilities.  All our mathematics and theory have been built around 
continuous variables, for the very good reason that it seems overwhelmingly 
difficult to do physics in discrete variables - just look at how messy the 
numerical solution of partial differential equations is compared to the 
equations themselves.

Brent Meeker




Re: Russell's book

2006-09-12 Thread Saibal Mitra

I think I can prove that QTI as interpreted in this list is false; I'll post
the proof in a new thread.

The only version of QTI that makes sense to me is this:
All possible states exist out there in the multiverse. The observer
moments are timeless objects so, in a certain sense, QTI is true. But then
you must consider surviving with memory loss.

E.g., if I'm diagnosed with a terminal illness, then there is still a branch
in which I haven't been diagnosed with that illness. If I'm 100 years old,
then I still have copies that are only 20 years old, etc.

Saibal

- Original Message - 
From: Johnathan Corgan [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Sent: Tuesday, September 12, 2006 7:43 PM
Subject: Re: Russell's book



 David Nyman wrote:

 [re: QTI]
  This has obvious
  implications for retirement planning in general and avoidance of the
  more egregious cul-de-sac situations. On the other hand, short of
  outright lunacy vis-a-vis personal safety, it also seems to imply that
  from the 1st-person pov we are likely to come through (albeit possibly
  in less-than-perfect shape) even apparently minimally survivable
  situations. This struck me particularly forcibly while watching the
  9/11 re-runs on TV last night.

 It's the cul-de-sac situations that interest me.  Are there truly any?
 Are there moments of consciousness which have no logically possible
 continuation (while remaining conscious?)

 It seems the canonical example is surviving a nearby nuclear detonation.
  One logical possibility is that all your constituent particles
 quantum-tunnel away from the blast in time.

 This would be of extremely low measure in absolute terms, but what about
 the proportion of continuations that contain you as a conscious entity?

 This also touches on a recent thread about how being of low measure
 feels. If QTI is true, and I'm subject to a nuclear detonation, does it
 matter if my possible continuations are of such a low relative measure?
 Once I'm in them, would I feel any different and should I care?

 These questions may reduce to something like, Is there a lower limit to
 the amplitude of the SWE?

 If measure is infinitely divisible, then is there any natural scale to
 its absolute value?

 I raised a similar question on the list a few months ago when Tookie
 Wiliams was in the headlines and was eventually executed by the State of
 California.  What possible continuations exist in this situation?

  In effect, we are being presented with a kind of 'yes doctor' in
  everyday life. Do you find that these considerations affect your own
  behaviour in any way?

 A very interesting question.

 If my expectation is that QTI is true and I'll be living for a very long
 time, I may adjust my financial planning accordingly.  But QTI only
 applies to my own first-person view; I'll be constantly shedding
 branches where I did indeed die.  If I have any financial dependents, do
 I provide for their welfare, even if they'll only exist forever outside
 my ability to interact with?

 -Johnathan

 





Proof that QTI is false

2006-09-12 Thread Saibal Mitra

QTI in the way defined in this list contradicts quantum mechanics. The
observable part of the universe can only be in a finite number of quantum
states. So, it can only harbor a finite number of observer moments or
experiences a person can have; see here for details:

http://arxiv.org/abs/gr-qc/0102010

If there can only be a finite number of observer moments you can only
experience a finite amount of time.

QED.
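Saibal's premise, that the observable universe has only a finite number of quantum states, can be given a rough sense of scale. The back-of-envelope sketch below is illustrative only; the horizon radius and the holographic entropy formula S ~ A/4 are my assumptions, not figures taken from the cited paper.

```python
import math

# Illustrative, assumed figures (not from gr-qc/0102010): take the
# holographic bound S ~ A/4 (in Planck units) for a de Sitter-like
# causal horizon, so the number of distinguishable states N ~ e^S.
PLANCK_LENGTH_M = 1.616e-35   # metres
HORIZON_RADIUS_M = 1.6e26     # assumed horizon scale, metres

r_planck = HORIZON_RADIUS_M / PLANCK_LENGTH_M    # radius in Planck lengths
area_planck = 4 * math.pi * r_planck ** 2        # horizon area, Planck units
entropy = area_planck / 4                        # ln N, holographic bound

# ln N is of order 10^122: enormous, but finite, which is all the
# argument needs.
print(f"ln N ~ 10^{math.log10(entropy):.0f}")
```

Whatever the exact numbers, the point survives: any such bound is finite, so the count of distinguishable observer moments is finite too.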





Re: Russell's book

2006-09-12 Thread Tom Caylor

After many life-expectancy-spans worth of narrow escapes, after
thousands or millions of years, wouldn't the probability be pretty high
for my personality/memory etc. to change so much that I wouldn't
recognize myself, or that I could be more like another person than my
original self, and so for all practical purposes wouldn't I be another
person?  How do I know this hasn't happened already?  If it has, what
difference does it make?  Isn't it true that the only realities that
matter are the ones that make any difference to my reality?  (almost a
tautology)

Johnathan Corgan wrote:
 Brent Meeker wrote:

  These questions may reduce to something like, Is there a lower limit to
  the amplitude of the SWE?
 
  If measure is infinitely divisible, then is there any natural scale to
  its absolute value?
 
  I think it is not and there is a lower limit below which cross terms in the 
  density
  matrix must be strictly (not just FAPP) zero.  The Planck scale provides a 
  lower
  bound on fundamental physical values.  So it makes sense to me that treating
  probability measures as a continuum is no more than a convenient 
  approximation.  But
  I have no idea how to make that precise and testable.

 Measure ultimately having a fixed lower limit would, I think, be
 fatal to QTI.  But consider the following:

 At every successive moment our measure is decreasing, possibly by a very
 large fraction, depending on how you count it.  Every moment we branch
 into only one of a huge number of possibilities.  A moment here is on
 the order of a Planck time unit.

 So does this mean we run the risk of suddenly ceasing to exist, if our
 measure decreases past a lower limit simply due to the evolution of the SWE?
 
 -Johnathan





Re: Proof that QTI is false

2006-09-12 Thread David Nyman

Saibal Mitra wrote:

 If there can only be a finite number of observer moments you can only
 experience a finite amount of time.

Whether or not this is the case, it is a secondary issue to my question
re *survivability* (call this the Quantum Theory of Enhanced Personal
Survivability, or QTEPS) which is based on the '1st-person pruning' of
non-conscious branches of MW. My question to Russell and the list is
whether this actually influences real-life behaviour - i.e. is anyone
in practice saying 'yes' to this doctor?

David

 QTI in the way defined in this list contradicts quantum mechanics. The
 observable part of the universe can only be in a finite number of quantum
 states. So, it can only harbor a finite number of observer moments or
 experiences a  person can have, see here for details:

 http://arxiv.org/abs/gr-qc/0102010

 If there can only be a finite number of observer moments you can only
 experience a finite amount of time.
 
 QED.





RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Peter Jones writes:
 
 Stathis Papaioannou wrote:
  Peter Jones writes:
 
   Stathis Papaioannou wrote:
  
  Now, suppose some more complex variant of 3+2=5 implemented on your 
  abacus has consciousness associated with it, which is just one of 
  the tenets of computationalism. Some time later, you are walking in 
  the Amazon rain forest and notice that
  under a certain mapping


  of birds to beads and trees to wires, the forest is implementing 
  the same computation as your abacus was. So if your abacus was 
  conscious, and computationalism is true, the tree-bird system should 
  also be conscious.

 Not necessarily, because the mapping is required too. Why should
 it still be conscious if no-one is around to make the mapping?
   
Are you claiming that a conscious machine stops being conscious if its 
designers die
and all the information about how it works is lost?
  
   You are, if anyone is. I don't agree that computations *must* be
   interpreted,
   although they *can* be re-interpreted.
 
  What I claim is this:
 
  A computation does not *need* to be interpreted, it just is. However, a 
  computation
  does need to be interpreted, or interact with its environment in some way, 
  if it is to be
  interesting or meaningful.
 
 A computation other than the one you are running needs to be
 interpreted by you
 to be meaningful to you. The computation you are running is useful
 to you because it keeps you alive.
 
  By analogy, a string of characters is a string of characters
  whether or not anyone interprets it, but it is not interesting or 
  meaningful unless it is
  interpreted. But if a computation, or for that matter a string of 
  characters, is conscious,
  then it is interesting and meaningful in at least one sense in the absence 
  of an external
  observer: it is interesting and meaningful to itself. If it were not, then 
  it wouldn't be
  conscious. The conscious things in the world have an internal life, a first 
  person
  phenomenal experience, a certain ineffable something, whatever you want to 
  call it,
  while the unconscious things do not. That is the difference between them.
 
 Which they manage to be aware of without the existence of an external
 observer,
 so one of your premises must be wrong.

No, that's exactly what I was saying all along. An observer is needed for 
meaningfulness, 
but consciousness provides its own observer. A conscious entity may interact 
with its 
environment, and in fact that would have to be the reason consciousness evolved 
(nature 
is not self-indulgent), but the interaction is not logically necessary for 
consciousness.

Stathis Papaioannou

_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d



RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Peter Jones writes:

  That's what I'm saying, but I certainly don't think everyone agrees with me 
  on the list, and
  I'm not completely decided as to which of the three is more absurd: every 
  physical system
  implements every conscious computation, no physical system implements any 
  conscious
  computation (they are all implemented non-physically in Platonia), or the 
  idea that a
  computation can be conscious in the first place.
 
 
 You haven't made it clear why you don't accept that every physical
 system
 implements one computation, whether it is a
 conscious computation or not. I don't see what
 contradicts it.

Every physical system does implement every computation, in a trivial sense, as 
every rock 
is a hammer and a doorstop and contains a bust of Albert Einstein inside it. 
Those three aspects 
of rocks are not of any consequence unless there is someone around to 
appreciate them. 
Similarly, if the vibration of atoms in a rock under some complex mapping are 
calculating pi 
that is not of any consequence unless someone goes to the trouble of 
determining that mapping, 
and even then it wouldn't be of any use as a general purpose computer unless 
you built another 
general purpose computer to dynamically interpret the vibrations (which does 
not mean the rock 
isn't doing the calculation without this extra computer). However, if busts of 
Einstein were conscious 
regardless of the excess rock around them, or calculations of pi were conscious 
regardless of the 
absence of anyone being able to appreciate them, then the existence of the rock 
in an otherwise 
empty universe would necessitate the existence of at least those two conscious 
processes. 

Computationalism says that some computations are conscious. It is also a 
general principle of 
computer science that equivalent computations can be implemented on very 
different hardware 
and software platforms; by extension, the vibration of atoms in a rock can be 
seen as implementing 
any computation under the right interpretation. Normally, it is of no 
consequence that a rock 
implements all these computations. But if some of these computations are 
conscious (a consequence 
of computationalism) and if some of the conscious computations are conscious in 
the absence of 
environmental input, then every rock is constantly implementing all these 
conscious computations. 
To get around this you would have to deny that computations can be conscious, 
or at least restrict 
the conscious computations to specific hardware platforms and programming 
languages. This destroys 
computationalism, although it can still allow a form of functionalism. The 
other way to go is to reject 
the supervenience thesis and keep computationalism, which would mean that every 
computation 
(including the conscious ones) is implemented necessarily in the absence of 
any physical process.
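Stathis's "under the right interpretation" point is essentially the Putnam-style mapping argument, and a toy version makes the triviality explicit. This is an illustrative sketch of mine, not anyone's formal construction: given any run of distinct physical states, a dictionary can be built pairing them with the state trace of any computation of the same length, so all the computational structure lives in the mapping, not in the rock.

```python
# Toy Putnam-style mapping (illustrative): arbitrary distinct "rock
# states" are paired off against the trace of a chosen computation.
rock_states = ["vib-a", "vib-f", "vib-c", "vib-q"]   # arbitrary labels
computation_trace = [0, 1, 2, 3]                     # e.g. a counter's states

# The "interpretation" is just a dictionary; one can be built for ANY
# target computation whose trace has the same length.
interpretation = dict(zip(rock_states, computation_trace))

decoded = [interpretation[s] for s in rock_states]
assert decoded == computation_trace   # the rock now "implements" the counter
```

The same four rock states could equally be mapped onto four steps of a pi calculation; nothing about the rock changes, only the dictionary does, which is exactly why such an implementation is of no consequence until someone constructs the mapping.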

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
 Peter Jones writes (quoting SP):
 
 
I'm not sure how the multiverse comes into the discussion, but you have
made the point several times that a computation depends on an observer


No, I haven't! I have tried to follow through the consequences of
assuming it must.
It seems to me that some sort of absurdity or contradiction ensues.

OK. This has been a long and complicated thread.


for its meaning. I agree, but *if* computations can be conscious (remember,
this is an assumption) then in that special case an external observer is 
not
needed.

Why not ? (Well, I would be quite happy that a conscious
computation would have some inherent structural property --
I want to find out why *you* would think it doesn't).

I think it goes against standard computationalism if you say that a conscious
computation has some inherent structural property.
 
 
 I should have said, that the *hardware* has some special structural property 
 goes 
 against computationalism. It is difficult to pin down the structure of a 
 computation 
 without reference to a programming language or hardware. The idea is that the 
 same computation can look completely different on different computers, the 
 corollary 
 of which is that any computer (or physical process) may be implementing any 
 computation, we just might not know about it. It is legitimate to say that 
 only 
 particular computers (e.g. brains, or PCs) using particular languages are 
 actually 
 implementing conscious computations, but that is not standard 
 computationalism.
 
 Stathis Papaioannou

I thought standard computationalism was just the modest position that if the 
hardware of your brain were replaced piecemeal by units with the same 
input-output behaviour at some microscopic level, usually assumed to be 
neurons, you'd still be you and you'd still be conscious.

I don't recall anything about all computations implementing consciousness.

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 Peter Jones writes:
  
 
Stathis Papaioannou wrote:

Peter Jones writes:


Stathis Papaioannou wrote:


Now, suppose some more complex variant of 3+2=5 implemented on your 
abacus has consciousness associated with it, which is just one of the 
tenets of computationalism. Some time later, you are walking in the 
Amazon rain forest and notice that
under a certain mapping


of birds to beads and trees to wires, the forest is implementing the 
same computation as your abacus was. So if your abacus was conscious, 
and computationalism is true, the tree-bird system should also be 
conscious.

Not necessarily, because the mapping is required too. Why should
it still be conscious if no-one is around to make the mapping?

Are you claiming that a conscious machine stops being conscious if its 
designers die
and all the information about how it works is lost?

You are, if anyone is. I don't agree that computations *must* be
interpreted,
although they *can* be re-interpreted.

What I claim is this:

A computation does not *need* to be interpreted, it just is. However, a 
computation
does need to be interpreted, or interact with its environment in some way, 
if it is to be
interesting or meaningful.

A computation other than the one you are running needs to be
interpreted by you
to be meaningful to you. The computation you are running is useful
to you because it keeps you alive.


By analogy, a string of characters is a string of characters
whether or not anyone interprets it, but it is not interesting or meaningful 
unless it is
interpreted. But if a computation, or for that matter a string of 
characters, is conscious,
then it is interesting and meaningful in at least one sense in the absence 
of an external
observer: it is interesting and meaningful to itself. If it were not, then 
it wouldn't be
conscious. The conscious things in the world have an internal life, a first 
person
phenomenal experience, a certain ineffable something, whatever you want to 
call it,
while the unconscious things do not. That is the difference between them.

Which they manage to be aware of without the existence of an external
observer,
so one of your premises must be wrong.
 
 
 No, that's exactly what I was saying all along. An observer is needed for 
 meaningfulness, 
 but consciousness provides its own observer. A conscious entity may interact 
 with its 
 environment, and in fact that would have to be the reason consciousness 
 evolved (nature 
 is not self-indulgent), but the interaction is not logically necessary for 
 consciousness.

But it may be nomologically necessary.  "Not logically necessary" is the 
weakest standard of non-necessity that is still coherent; the only things less 
necessary are incoherent.

Brent Meeker




Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Stathis Papaioannou wrote:
 Peter Jones writes:
 
 
That's what I'm saying, but I certainly don't think everyone agrees with me 
on the list, and
I'm not completely decided as to which of the three is more absurd: every 
physical system
implements every conscious computation, no physical system implements any 
conscious
computation (they are all implemented non-physically in Platonia), or the 
idea that a
computation can be conscious in the first place.


You haven't made it clear why you don't accept that every physical
system
implements one computation, whether it is a
conscious computation or not. I don't see what
contradicts it.
 
 
 Every physical system does implement every computation, in a trivial sense, 
 as every rock 
 is a hammer and a doorstop and contains a bust of Albert Einstein inside it. 
 Those three aspects 
 of rocks are not of any consequence unless there is someone around to 
 appreciate them. 
 Similarly, if the vibration of atoms in a rock under some complex mapping are 
 calculating pi 
 that is not of any consequence unless someone goes to the trouble of 
 determining that mapping, 
 and even then it wouldn't be of any use as a general purpose computer unless 
 you built another 
 general purpose computer to dynamically interpret the vibrations (which does 
 not mean the rock 
 isn't doing the calculation without this extra computer). 

I think there are some constraints on what the rock must be doing in order that 
it, rather than the interpreting computer, can be said to be calculating pi.  
For example, if the rock states were just 1,0,1,0,1,0... then there are several 
arguments, based for example on information theory, that would rule out that 
being a computation of pi.
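One crude way to cash out Brent's information-theoretic constraint is to compare compressed lengths, using compression as a computable stand-in for algorithmic complexity (which is uncomputable). A sketch under those assumptions, with pseudo-random bits standing in for a pi-like digit stream:

```python
import random
import zlib

def complexity_proxy(s):
    """Compressed length as a crude, computable proxy for Kolmogorov
    complexity; true algorithmic complexity is uncomputable."""
    return len(zlib.compress(s.encode(), 9))

periodic = "10" * 500   # the rock ticking 1,0,1,0,... for 1000 steps

random.seed(0)          # pseudo-random stand-in for a pi-like bit stream
irregular = "".join(random.choice("01") for _ in range(1000))

# A trivially periodic state sequence compresses to almost nothing,
# so it lacks the information content a pi computation would carry.
assert complexity_proxy(periodic) < complexity_proxy(irregular)
```

This doesn't settle which computation the irregular sequence implements, but it does rule the periodic one out of being a computation of pi, which is the shape of Brent's constraint.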

However, if busts of Einstein were conscious 
 regardless of the excess rock around them, or calculations of pi were 
 conscious regardless of the 
 absence of anyone being able to appreciate them, then the existence of the 
 rock in an otherwise 
 empty universe would necessitate the existence of at least those two 
 conscious processes. 
 
 Computationalism says that some computations are conscious. It is also a 
 general principle of 
 computer science that equivalent computations can be implemented on very 
 different hardware 
 and software platforms; by extension, the vibration of atoms in a rock can be 
 seen as implementing 
 any computation under the right interpretation. Normally, it is of no 
 consequence that a rock 
 implements all these computations. But if some of these computations are 
 conscious (a consequence 
 of computationalism) 

It's not a consequence of my more modest idea of computationalism.

and if some of the conscious computations are conscious in the absence of 
 environmental input, then every rock is constantly implementing all these 
 conscious computations. 
 To get around this you would have to deny that computations can be conscious, 
 or at least restrict 
 the conscious computations to specific hardware platforms and programming 
 languages. 

Why not some more complex and subtle criterion based on the computation?  Why 
just hardware or language - both of which seem easy to rule out as definitive 
of consciousness or even computation?

Brent Meeker




Re: Proof that QTI is false

2006-09-12 Thread Russell Standish

Actually, in standard quantum mechanics, there is an infinity of
observer moments, 2^{\aleph_0} of them in fact.

What you are talking about are various quantum gravity theories, such
as string theory, which appear to have a finite number of observer
moments.

However, even if as observers we are locked into a Nietzschean cycle at
some point in time due to the finiteness of the number of possible states,
the number will be so large that the practical effects of QTI will
still need to be considered.

Cheers

On Tue, Sep 12, 2006 at 11:58:14PM +0200, Saibal Mitra wrote:
 
 QTI in the way defined in this list contradicts quantum mechanics. The
 observable part of the universe can only be in a finite number of quantum
 states. So, it can only harbor a finite number of observer moments or
 experiences a  person can have, see here for details:
 
 http://arxiv.org/abs/gr-qc/0102010
 
 If there can only be a finite number of observer moments you can only
 experience a finite amount of time.
 
 QED.
 
 
 
-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED] 
Australiahttp://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02






Re: Russell's book

2006-09-12 Thread Russell Standish

On Tue, Sep 12, 2006 at 12:52:25PM -, David Nyman wrote:
 
 Hi Russell
 
 I just received the book and have swiftly perused it (one of many
 iterations I expect). I find it to be a clear presentation of your own
 approach as well as a fine exposition of many topics from the list that
 had me baffled. A couple of things immediately occur:
 
 1) QTI - I must say until reading your remarks (e.g. re pension plans)
 the possible personal consequences of QTI hadn't really struck me. If
 QTI is true, there is a fundamental assymetry between the 1st and
 3rd-person povs vis-a-vis personal longevity (at least the longevity of
 consciousness), and this seems to imply that one should take seriously
 the prospect of being around in some form far longer than generally
 assumed from a purely 3rd-person perspective. This has obvious
 implications for retirement planning in general and avoidance of the
 more egregious cul-de-sac situations. On the other hand, short of
 outright lunacy vis-a-vis personal safety, it also seems to imply that
 from the 1st-person pov we are likely to come through (albeit possibly
 in less-than-perfect shape) even apparently minimally survivable
 situations. This struck me particularly forcibly while watching the
 9/11 re-runs on TV last night.
 
 In effect, we are being presented with a kind of 'yes doctor' in
 everyday life. Do you find that these considerations affect your own
 behaviour in any way?

I mentioned two examples in my book. The first is retirement savings planning:
I will be looking wherever possible for lifetime pension options. Of
course, from a QTI perspective, the value of these is limited by the
estimated lifetime of the superannuation company.

The second example is that my attitude to euthanasia has changed.

Beyond that, I suppose I no longer fear death. What I do fear is
incapacitation, and so I weigh my risks of bodily damage in any
action against the risks to personal liberty etc. by inaction. It
probably does not change the decision matrix very much at all, however
I can't see suicide bombing as a useful strategy under QTI.


 
 2) RSSA vs ASSA - Isn't it the case that all 'absolute' self samples
 will appear to be 'relative' (i.e. to their own content) and hence
 1st-person experience can be 'time-like' without the need for
 'objective' sequencing of observer moments? If the 'pov' is that of the
 multiverse can't we simply treat all 1st-person experience as the
 'absolute sampling' of all povs compresently?
 
 David
 

I've lost you here. Maybe you need to expand a bit.


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED] 
Australiahttp://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



RE: computationalism and supervenience

2006-09-12 Thread Colin Hales

Brent Meeker:
 
 Colin Hales wrote:
 
  Stathis Papaioannou
  snip
 
 Maybe this is a copout, but I just don't think it is even logically
 possible to explain what consciousness *is* unless you have it. It's like
 the problem of explaining vision to a blind man: he might be the world's
 greatest scientific expert on it but still have zero idea of what it is
 like to see - and that's even though he shares most of the rest of his
 cognitive structure with other humans, and can understand analogies using
 other sensations. Knowing what sort of program a conscious computer would
 have to run to be conscious, what the purpose of consciousness is, and so
 on, does not help me to understand what the computer would be
 experiencing, except by analogy with what I myself experience.
 
 Stathis Papaioannou
 
 
 
  Please consider the plight of the zombie scientist with a huge set of
  sensory feeds and similar set of effectors. All carry similar signal
  encoding and all, in themselves, bestow no experiential qualities on the
  zombie.
 
  Add a capacity to detect regularity in the sensory feeds.
  Add a scientific goal-seeking behaviour.
 
  Note that this zombie...
  a) has the internal life of a dreamless sleep
  b) has no concept or percept of body or periphery
  c) has no concept that it is embedded in a universe.
 
  I put it to you that science (the extraction of regularity) is the
  science of zombie sensory fields, not the science of the natural world
  outside the zombie scientist. No amount of creativity (except maybe
  random choices) would ever lead to any abstraction of the outside world
  that gave it the ability to handle novelty in the natural world outside
  the zombie scientist.

  No matter how sophisticated the sensory feeds and any guesswork as to a
  model (abstraction) of the universe, the zombie would eventually find
  novelty invisible because the sensory feeds fail to depict the novelty,
  i.e. same sensory feeds for different behaviour of the natural world.
 
  Technology built by a zombie scientist would replicate zombie sensory
  feeds, not deliver an independently operating novel chunk of hardware
  with a defined function (if the idea of function even has meaning in
  this instance).
 
  The purpose of consciousness is, IMO, to endow the cognitive agent with
  at least a repeatable (not accurate!) simile of the universe outside the
  cognitive agent so that novelty can be handled. Only then can the zombie
  scientist detect arbitrary levels of novelty and do open-ended science
  (or survive in the wild world of novel environmental circumstance).
 

 Almost all organisms have become extinct.  Handling *arbitrary* levels of
 novelty is probably too much to ask of any species; and it's certainly
 more than is necessary to survive for millennia.

I am talking purely about scientific behaviour, not general behaviour. A
creature with limited learning capacity and phenomenal scenes could quite
happily live in an ecological niche until the niche changed. I am not asking
any creature other than a scientist to be able to appreciate arbitrary
levels of novelty.

 
 
  In the absence of the functionality of phenomenal consciousness and with
  finite sensory feeds you cannot construct any world-model (abstraction)
  in the form of an innate (a-priori) belief system that will deliver an
  endless ability to discriminate novelty. In a very Gödelian way,
  eventually a limit would be reached where the abstracted model could not
  make any prediction that can be detected.
 
 So that's how we got string theory!
 
  The zombie is, in a very real way, faced with 'truths' that exist but
  can't be accessed/perceived. As such its behaviour will be fundamentally
  fragile in the face of novelty (just like all computer programs are).
 

 How do you know we are so robust?  Planck said, "A new idea prevails, not
 by the conversion of adherents, but by the retirement and demise of
 opponents."  In other words only the young have the flexibility to adopt
 new ideas.  Ironically Planck never really believed quantum mechanics was
 more than a calculational trick.

The robustness probably lies in the fact that science, at the level of
critical argument (like this, now), is actually a super-organism.

In retrospect I think QM will be regarded as a side effect of the desperate
attempt to mathematically abstract appearances rather than deal with the
structure that is behaving quantum-mechanically. After the event they'll
all be going... what were we thinking! It won't be wrong... just not
useful, in the sense that none of its considerations are about underlying
structure.


 
  ---
  Just to make the zombie a little more real... consider the industrial
  control system computer. I have designed, installed hundreds and wired
  up tens (hundreds?) of thousands of sensors and an unthinkable number of
  kilometers of cables. (NEVER again!) In all cases I put it to you that
  the
  

RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou


Peter Jones writes:

 If consciousness supervenes on inherent non-interpretation-dependent
 features, it can supervene on features which are binary, either present
 or absent.
 
 For instance, whether a programme examines or modifies its own code is
 surely such a feature.
 
 
  Even if computationalism were false and only those machines specially
  blessed by God were conscious, there would have to be a continuum,
  across different species and within the lifespan of an individual from
  birth to death. The possibility that consciousness comes on like a light
  at some point in your life, or at some point in the evolution of a
  species, seems unlikely to me.
 
 Surely it comes on like a light whenever you wake up.

Being alive/dead or conscious/unconscious would seem to be a binary
property, but it's hard to believe (though not impossible) that there
would be one circuit, neuron or line of code that makes the difference
between conscious and unconscious.
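Peter's point that self-examination is a binary, interpretation-independent feature can be sketched with a toy program-held-as-data (the instruction names are made up purely for illustration):

```python
# Toy sketch: a "program" stored as data that examines and patches its
# own listing. Whether a program contains such a step is a yes/no
# structural fact about it, independent of how any outside observer
# interprets its outputs. (Instruction names are invented.)
program = ["LOAD 1", "ADD 2", "STORE x"]

def self_modify(code):
    # the program inspects its own listing and rewrites one instruction
    patched = list(code)
    if "ADD 2" in patched:
        patched[patched.index("ADD 2")] = "ADD 3"
    return patched

self_modify(program)  # ["LOAD 1", "ADD 3", "STORE x"]
```

Either a program performs a step like this or it doesn't; no re-interpretation of its outputs changes that.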

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-12 Thread Brent Meeker

Colin Hales wrote:
...

 As far as the internal life of the CPU is concerned... whatever it is
 like to be an electrically noisy hot rock, regardless of the program...
 although the character of the noise may alter with different programs!
 
That's like saying whatever it is like to be you, it is at best some waves
of chemical potential.  You don't *know* that the control system is not
conscious - unless you know what structure or function makes a system
conscious.

 
 
 There is nothing there except wires and electrically noisy hot rocks,
 plastic and other materials = stuff. 

Just like me.  Nothing but proteins and osmotic potentials and ATP and
ADP = stuff.

Whatever its consciousness is... it
 is the consciousness of the stuff. The function

Which function?

 is an epiphenomenon at the
 scale of a human user 

Who's the user of my brain?

Brent Meeker

that has nothing to do with the experiential qualities
 of being the computer.

What are the experiential qualities of being a computer? and how can we know 
them?

Brent Meeker




Re: Proof that QTI is false

2006-09-12 Thread Brent Meeker

Saibal Mitra wrote:
 QTI in the way defined in this list contradicts quantum mechanics. The
 observable part of the universe can only be in a finite number of quantum
 states. So, it can only harbor a finite number of observer moments or
 experiences a  person can have, see here for details:
 
 http://arxiv.org/abs/gr-qc/0102010
 
 If there can only be a finite number of observer moments you can only
 experience a finite amount of time.
 
 QED.
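A back-of-envelope version of the finiteness claim (illustrative numbers only, not taken from the cited paper): if the observable universe can hold at most S bits, it has at most 2^S distinguishable quantum states, and hence at most finitely many distinct observer moments.

```python
import math

# Assumed, order-of-magnitude information bound for the observable
# universe (a commonly quoted figure is ~10**122 bits; treat it here
# purely as a placeholder).
S_bits = 1e122

# 2**S_bits states; work with log10 to avoid numeric overflow.
log10_states = S_bits * math.log10(2)

# The count is astronomically large but finite, so only finitely many
# distinct experiences fit -- the core of the argument above.
math.isfinite(log10_states)  # True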

So that would imply that when predicting states at some fixed finite time in 
the 
future there is a smallest, non-zero probability that is realizable.  So if our 
prediction, using continuum variables as an approximation, indicates a 
probability 
lower than this value we should set it to zero??
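One way to make the truncation question concrete (my reading, with a toy state count):

```python
# Toy sketch: with only N realizable states, the smallest non-zero
# probability any outcome can carry is 1/N, so a continuum-model
# prediction below that floor would be set to zero. N is hypothetical.
N = 8
p_min = 1 / N

def discretize(p):
    # truncate continuum predictions below the realizable floor
    return 0.0 if p < p_min else p

discretize(0.05)  # -> 0.0, since 0.05 < 1/8
discretize(0.5)   # -> 0.5
```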

Brent Meeker




RE: computationalism and supervenience

2006-09-12 Thread Stathis Papaioannou

Peter Jones writes:
 
 Stathis Papaioannou wrote:
  Brent meeker writes:
 
   I think it goes against standard computationalism if you say that a 
   conscious
   computation has some inherent structural property. Opponents of 
   computationalism
   have used the absurdity of the conclusion that anything implements any 
   conscious
   computation as evidence that there is something special and 
   non-computational
   about the brain. Maybe they're right.
   
   Stathis Papaioannou
   
   Why not reject the idea that any computation implements every possible 
   computation
   (which seems absurd to me)?  Then allow that only computations with 
   some special
   structure are conscious.
   
   
   It's possible, but once you start in that direction you can say that
   only computations implemented on this machine rather than that machine
   can be conscious. You need the hardware in order to specify structure,
   unless you can think of a God-given programming language against which
   candidate computations can be measured.
  
   I regard that as a feature - not a bug. :-)
  
   Disembodied computation doesn't quite seem absurd - but our empirical 
   sample argues
   for embodiment.
  
   Brent Meeker
 
  I don't have a clear idea in my mind of disembodied computation except in 
  rather simple cases,
  like numbers and arithmetic. The number 5 exists as a Platonic ideal, and 
  it can also be implemented
  so we can interact with it, as when there is a collection of 5 oranges, or 
  3 oranges and 2 apples,
  or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
  variety. The difficulty is that if we
  say that 3+2=5 as exemplified by 3 oranges and 2 apples is conscious, 
  then should we also say
  that the pairs+triplets of fruit are also conscious?
 
 No, they are only subroutines.

But a computation is just a lot of subroutines; or equivalently, a computation 
is just a subroutine in a larger 
computation or subroutine.
 
   If so, where do we draw the line?
 
 At specific structures.

By "structures" do you mean hardware or software? I don't think it's
possible to pin down software structures without reference to a
particular machine and operating system. There is no natural or God-given
language.
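The worry that any computation maps onto any physical system can be made concrete with a toy state trajectory and two interpretation tables (both invented for illustration):

```python
# The same "physical" state sequence implements different computations
# under different interpretation maps -- nothing in the states alone
# picks out one reading as the natural one.
states = [0b00, 0b01, 0b10, 0b11]                # toy physical trajectory

interp_a = {0b00: 0, 0b01: 1, 0b10: 2, 0b11: 3}  # read states as counting
interp_b = {0b00: 3, 0b01: 1, 0b10: 2, 0b11: 0}  # a different labelling

run_a = [interp_a[s] for s in states]  # [0, 1, 2, 3]
run_b = [interp_b[s] for s in states]  # [3, 1, 2, 0]
```

Without a privileged language or architecture, neither reading is "the" computation the states perform.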
 
  That is what I mean
  when I say that any computation can map onto any physical system. The 
  physical structure and activity
  of computer A implementing program a may be completely different to that of 
  computer B implementing
  program b, but program b may be an emulation of program a, which should 
  make the two machines
  functionally equivalent and, under computationalism, equivalently conscious.
 
 So ? If the functional equivalence doesn't depend on a baroque
 re-interpretation, where is the problem ?

Who interprets the meaning of "baroque"?
 
  Maybe this is wrong, eg.
  there is something special about the insulation in the wires of machine A, 
  so that only A can be conscious.
  But that is no longer computationalism.
 
 No. But what would force that conclusion on us ? Why can't consciousness
 attach to features more general than hardware, but less general than one
 of your re-interpretations ?

Because there is no natural or God-given computer architecture or language. You 
could say that consciousness 
does follow a natural architecture: that of the brain. But that could mean you 
would have a zombie if you tried 
to copy brain function with a digital computer, or with a digital computer not 
running Mr. Gates' operating system.
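The functional-equivalence claim above (program b emulating program a on different hardware) can be sketched with a toy instruction set, entirely made up: machine B never runs program a natively, yet reproduces its input/output behaviour exactly.

```python
def machine_a(x):
    # machine A runs "program a" natively: double, then add one
    return 2 * x + 1

PROGRAM_A = [("MUL", 2), ("ADD", 1)]   # program a re-expressed as data

def machine_b(code, x):
    # machine B is a different "architecture": it interprets program a
    # one instruction at a time instead of executing it natively
    for op, arg in code:
        if op == "MUL":
            x *= arg
        elif op == "ADD":
            x += arg
    return x

# identical input/output behaviour, completely different physical activity
all(machine_a(n) == machine_b(PROGRAM_A, n) for n in range(10))  # True
```

Computationalism says the two are equivalently conscious (if conscious at all), despite the hardware-level differences.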

Stathis Papaioannou



Re: Proof that QTI is false

2006-09-12 Thread Russell Standish

On Tue, Sep 12, 2006 at 08:47:04PM -0700, Brent Meeker wrote:
 
 So that would imply that when predicting states at some fixed finite time in 
 the 
 future there is a smallest, non-zero probability that is realizable.  So if 
 our 
 prediction, using continuum variables as an approximation, indicates a 
 probability 
 lower than this value we should set it to zero??
 
 Brent Meeker

That is one very common way of mapping continuum models to discrete
variables. Another way is probabilistic assignment, where a value of
0.3 has a 70% chance of being mapped to 0 and a 30% chance of being
mapped to 1. See my paper "Population models with Random Embryologies
as a Paradigm for Evolution", Complexity International, 2 (1994).

Of course these two possibilities do not exhaust the space!
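The probabilistic-assignment mapping can be sketched directly (a minimal version, sometimes called stochastic rounding):

```python
import random

def stochastic_round(x):
    """Map x in [0, 1] to 1 with probability x, else to 0, so the
    expected value of the rounded result equals x itself."""
    return 1 if random.random() < x else 0

random.seed(0)  # fixed seed just to make the demo repeatable
samples = [stochastic_round(0.3) for _ in range(100_000)]
sum(samples) / len(samples)  # close to 0.3
```

Unlike thresholding, this mapping is unbiased: averaged over many trials it preserves the continuum value.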

Cheers

