RE: UDA revisited and then some

2006-12-09 Thread Stathis Papaioannou


Peter Jones writes:

> Stathis Papaioannou wrote:
> > Pete Carlton writes:
> >
> > > On Dec 8, 2006, at 7:48 AM, Bruno Marchal wrote:
> > >
> > > > Then I am not sure if this is really related to Quentin Anciaux's
> > > > idea that he feels located in his head.
> > > > The idea that we are in our head ... is in our head!
> > > >
> > >
> > > Another way of saying that is that we have the urge to utter and
> > > endorse sentences like "I feel located in my head" - but the
> > > explanation of this urge is not necessarily "I am located in my head,
> > > and I want to give an honest report of that".  I'm not convinced that
> > > there is a "1st person" fact of the matter whether "I" am in W or M.
> >
> > It's a strange thing, but we're not "really" located anywhere in the 
> > physical universe. We assume, probably correctly, that our 
> > consciousness-generating mechanisms are in our skulls receiving sensory 
> > input on the planet Earth, but we would have just the same experiences if 
> > our bodies were telepresence robots or if we were constructs in an 
> > appropriately configured virtual reality.
> 
> 
> If we are probably located in our heads, we are probably *really*
> located in our heads.
> 
> The claim "we're not "really" located anywhere in the physical
> universe"
> doesn't follow from "we cannot be completely certain we are located in
> our heads"

I did not mean to deny that our brains are located in our heads. I meant that 
where our brains are located and where we feel ourselves to be located are 
only contingently related. Quentin's original observation was that although 
our brains occupy the whole skull, our "consciousness" seems to be centred 
just behind our eyes. This is because there is no proprioceptive mechanism in 
our brains telling us where they are located, and even if there were, it would 
just be another contingent fact about perception.

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-09 Thread 1Z


Stathis Papaioannou wrote:
> Pete Carlton writes:
>
> > On Dec 8, 2006, at 7:48 AM, Bruno Marchal wrote:
> >
> > > Then I am not sure if this is really related to Quentin Anciaux's
> > > idea that he feels located in his head.
> > > The idea that we are in our head ... is in our head!
> > >
> >
> > Another way of saying that is that we have the urge to utter and
> > endorse sentences like "I feel located in my head" - but the
> > explanation of this urge is not necessarily "I am located in my head,
> > and I want to give an honest report of that".  I'm not convinced that
> > there is a "1st person" fact of the matter whether "I" am in W or M.
>
> It's a strange thing, but we're not "really" located anywhere in the physical 
> universe. We assume, probably correctly, that our consciousness-generating 
> mechanisms are in our skulls receiving sensory input on the planet Earth, but 
> we would have just the same experiences if our bodies were telepresence 
> robots or if we were constructs in an appropriately configured virtual 
> reality.


If we are probably located in our heads, we are probably *really*
located in our heads.

The claim "we're not "really" located anywhere in the physical
universe"
doesn't follow from "we cannot be completely certain we are located in
our heads"





RE: UDA revisited and then some

2006-12-08 Thread Stathis Papaioannou


Pete Carlton writes:

> On Dec 8, 2006, at 7:48 AM, Bruno Marchal wrote:
> 
> > Then I am not sure if this is really related to Quentin Anciaux's
> > idea that he feels located in his head.
> > The idea that we are in our head ... is in our head!
> >
> 
> Another way of saying that is that we have the urge to utter and
> endorse sentences like "I feel located in my head" - but the  
> explanation of this urge is not necessarily "I am located in my head,  
> and I want to give an honest report of that".  I'm not convinced that  
> there is a "1st person" fact of the matter whether "I" am in W or M.

It's a strange thing, but we're not "really" located anywhere in the physical 
universe. We assume, probably correctly, that our consciousness-generating 
mechanisms are in our skulls receiving sensory input on the planet Earth, but 
we would have just the same experiences if our bodies were telepresence robots 
or if we were constructs in an appropriately configured virtual reality. There 
is no necessary connection at all between where the information processing 
physically occurs and where, from the inside, it seems to occur.

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-08 Thread Pete Carlton

On Dec 8, 2006, at 7:48 AM, Bruno Marchal wrote:

> This is indeed an excellent text (it is also in the book "The Mind's I").
> Definitive? I doubt it. Dennett misses the first person
> indeterminacy there, although he gets close ...
>

You're right, of course; I should have used a different adjective,
especially since the point was to look at questions differently, not
to definitively answer them.
I think the essay shows the "problem word" in the question "Where am  
I?" is not "Where" but "I".
It's easy to refer to a location, to answer the question "Where", but  
much harder to refer to the "I" that is supposed to be there.


> Then I am not sure if this is really related to Quentin Anciaux's
> idea that he feels located in his head.
> The idea that we are in our head ... is in our head!
>

Another way of saying that is that we have the urge to utter and
endorse sentences like "I feel located in my head" - but the  
explanation of this urge is not necessarily "I am located in my head,  
and I want to give an honest report of that".  I'm not convinced that  
there is a "1st person" fact of the matter whether "I" am in W or M.





Re: UDA revisited and then some

2006-12-08 Thread Bruno Marchal


On 08 Dec 06, at 02:33, Pete Carlton wrote:

>
> A definitive treatment of this problem is Daniel Dennett's story
> "Where am I?"
> http://www.newbanner.com/SecHumSCM/WhereAmI.html


This is indeed an excellent text (it is also in the book "The Mind's I").
Definitive? I doubt it. Dennett misses the first person 
indeterminacy there, although he gets close ...

Then I am not sure if this is really related to Quentin Anciaux's 
idea that he feels located in his head.
The idea that we are in our head ... is in our head! (Hope you see the 
Epimenides-likeness here :)
I think such "locations" are mind constructs, and I think, like Russell, 
that with some training you can locate yourself even outside the body 
(and that can be helpful sometimes, so perhaps this can be explained 
through elementary Darwinism).

Of course, somehow Pete is right: Dennett does show that locating 
yourself is hard with comp (but this we knew: without third person 
instruction you cannot know if you are in W or M after a 
self-duplication, nor can you know where you are in the UD*, etc.).
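To make that concrete, here is a toy sketch in Python (a sketch only; the
dictionary "person" and the city labels are illustrative assumptions, not
part of any protocol discussed in this thread):

    import random

    def duplicate(person):
        # Both copies share the same memories up to the duplication event.
        return [dict(person, city="Washington"),
                dict(person, city="Moscow")]

    person = {"memories": "stepped into the scanner"}
    copies = duplicate(person)

    # Neither copy can deduce its city from its memories alone; both
    # would reason identically, so the best first-person prediction
    # either can make is a 50/50 bet.
    me = random.choice(copies)   # which copy "I" happen to be
    print(me["city"])            # undecidable before looking

Both copies are exact continuations of the original, yet which city "I"
wake up in is unpredictable from the inside without third person
instruction.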

Brent is right: some Greeks put the mind in the stomach. Did you know 
that even today there are people defending that idea, due to the 
discovery that the stomach is one of the most richly innervated parts 
of the body? It could be crap; I am not following them ... They have web 
pages, but I lost the references.

Bruno

http://iridia.ulb.ac.be/~marchal/






Re: UDA revisited and then some

2006-12-07 Thread Pete Carlton

A definitive treatment of this problem is Daniel Dennett's story  
"Where am I?"
http://www.newbanner.com/SecHumSCM/WhereAmI.html

On Dec 6, 2006, at 4:06 PM, Brent Meeker wrote:

>
> Quentin Anciaux wrote:
>> On Wednesday 6 December 2006 19:35, Brent Meeker wrote:
>>> Quentin Anciaux wrote:
>>> ...
>>>
>>>> Another thing that puzzles me is that consciousness should be generated
>>>> by physical (and chemical, which is also "physical") activities of the
>>>> brain, yet I feel my consciousness (in fact me) is located in the upper
>>>> front of my skull... Why then do neurons located in the back of my brain
>>>> not generate conscious feeling? And if they do participate, why am I
>>>> located in the front of my brain? Why this location? Why does only a
>>>> tiny part of the brain feel conscious activities?
>>> Because you're not an ancient Greek.  They felt their   
>>> consciousness was
>>> located in their stomach.
>>>
>>> Brent Meeker
>>
>> While I'm not, the questioning was serious... I've never asked where
>> other people feel they are... I'm there (in the upper front of the
>> brain)... Are my "feelings" not in accordance with yours?
>>
>> Quentin
>
> It might be because we're so visual and hence locate ourselves at the  
> viewpoint of our vision (but that wouldn't explain the Greeks).  Or  
> it might be because we've been taught that consciousness is done by  
> the brain.
>
> Brent Meeker
>






Re: UDA revisited and then some

2006-12-06 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Brent Meeker writes:
> 
>>> You're implying that the default assumption should be that 
>>> consciousness correlates more closely with external behaviour
>>> than with internal activity generating the behaviour: the tape
>>> recorder should reason that as the CD player produces the same
>>> audio output as I do, most likely it has the same experiences as
>>> I do. But why shouldn't the tape recorder reason: even though the
>>> CD player produces the same output as I do, it does so using
>>> completely different technology, so it most likely has completely
>>> different experiences to my own.
>> Here's my reasoning: We think other people (and animals) are
>> conscious, have experiences, mainly because of the way they behave
>> and to a lesser degree because they are like us in appearance and
>> structure.  On the other hand we're pretty sure that consciousness
>> requires a high degree of complexity, something supported by our
>> theories and technology of information.  So we don't think that
>> individual molecules or neurons are conscious - it must be
>> something about how a large number of subsystems interact.  This
>> implies that any one subsystem could be replaced by a functionally
>> similar one, e.g. silicon "neuron", and not change consciousness.
>> So our theory is that it is not technology in the sense of digital
>> vs analog, but in some functional information processing sense.
>> 
>> So given two things that have the same behavior, the default
>> assumption is they have the same consciousness (i.e. little or none
>> in the case of CD and tape players).  If I look into them deeper
>> and find they use different technologies, that doesn't do much to
>> change my opinion - it's like a silicon neuron vs a biochemical
>> one.  If I find the flow and storage of information is different,
>> e.g. one throws away more information than the other, or one adds
>> randomness, then I'd say that was evidence for different
>> consciousness.
> 
> I basically agree, but with qualifications. If the attempt to copy
> human intelligence is "bottom up", for example by emulating neurons
> with electronics, then I think it is a good bet that if it behaves
> like a human and is based on the same principles as the human brain,
> it probably has the same types of conscious experiences as a human.
> But long before we are able to build such artificial brains, we will
> probably have the equivalent of characters in advanced computer games
> designed to pass the Turing Test using technology nothing like a
> biological brain. If such a computer program is conscious at all I
> would certainly not bet that it was conscious in the same way as a
> human is conscious, just because it is able to fool us into thinking
> it is human.

Such computer personas will probably be very different in terms of information 
storage and processing - although we may not know it when they are developed, 
simply because we still won't know how humans do it.  But a good example would 
be a neural net versus a production system.  At some level I'm sure you can get 
the same behavior out of them, but at the information processing level they're 
very different.
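A minimal sketch of that contrast, reducing "behavior" to a single boolean
function (the function and the hand-set weights are illustrative assumptions,
not from any actual system discussed here):

    # Two systems with identical input/output behavior but very
    # different internal information processing.

    def xor_production_system(a, b):
        # Production system: an explicit symbolic rule table.
        rules = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
        return rules[(a, b)]

    def xor_neural_net(a, b):
        # Hand-set two-layer threshold network computing the same function.
        step = lambda x: 1 if x > 0 else 0
        h1 = step(a + b - 0.5)        # OR-like hidden unit
        h2 = step(a + b - 1.5)        # AND-like hidden unit
        return step(h1 - h2 - 0.5)    # fires for OR and not AND: XOR

    for a in (0, 1):
        for b in (0, 1):
            assert xor_production_system(a, b) == xor_neural_net(a, b)

No behavioral test distinguishes the two, though one stores the mapping as
rules and the other as weighted sums and thresholds.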

Incidentally, I wonder if anybody remembers that the test Turing proposed was 
for an AI and a man to each try to fool an interrogator by pretending to be a 
woman.

Brent 
Metaphysics is a restaurant where they give you a 30,000 page menu and no food.
--- Robert Pirsig




RE: UDA revisited and then some

2006-12-06 Thread Stathis Papaioannou


Brent Meeker writes:

> > You're implying that the default assumption should be that
> > consciousness correlates more closely with external behaviour than
> > with internal activity generating the behaviour: the tape recorder
> > should reason that as the CD player produces the same audio output as
> > I do, most likely it has the same experiences as I do. But why
> > shouldn't the tape recorder reason: even though the CD player
> > produces the same output as I do, it does so using completely
> > different technology, so it most likely has completely different
> > experiences to my own.
> 
> Here's my reasoning: We think other people (and animals) are conscious, have 
> experiences, mainly because of the way they behave and to a lesser degree 
> because they are like us in appearance and structure.  On the other hand 
> we're pretty sure that consciousness requires a high degree of complexity, 
> something supported by our theories and technology of information.  So we 
> don't think that individual molecules or neurons are conscious - it must be 
> something about how a large number of subsystems interact.  This implies that 
> any one subsystem could be replaced by a functionally similar one, e.g. 
> silicon "neuron", and not change consciousness.  So our theory is that it is 
> not technology in the sense of digital vs analog, but in some functional 
> information processing sense.
> 
> So given two things that have the same behavior, the default assumption is 
> they have the same consciousness (i.e. little or none in the case of CD and 
> tape players).  If I look into them deeper and find they use different 
> technologies, that doesn't do much to change my opinion - it's like a silicon 
> neuron vs a biochemical one.  If I find the flow and storage of information 
> is different, e.g. one throws away more information than the other, or one 
> adds randomness, then I'd say that was evidence for different consciousness.

I basically agree, but with qualifications. If the attempt to copy human 
intelligence is "bottom up", for example by emulating neurons with electronics, 
then I think it is a good bet that if it behaves like a human and is based on 
the same principles as the human brain, it probably has the same types of 
conscious experiences as a human. But long before we are able to build such 
artificial brains, we will probably have the equivalent of characters in 
advanced computer games designed to pass the Turing Test using technology 
nothing like a biological brain. If such a computer program is conscious at all 
I would certainly not bet that it was conscious in the same way as a human is 
conscious, just because it is able to fool us into thinking it is human.

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-06 Thread Brent Meeker

Quentin Anciaux wrote:
> On Wednesday 6 December 2006 19:35, Brent Meeker wrote:
>> Quentin Anciaux wrote:
>> ...
>>
>>> Another thing that puzzles me is that consciousness should be generated
>>> by physical (and chemical, which is also "physical") activities of the
>>> brain, yet I feel my consciousness (in fact me) is located in the upper
>>> front of my skull... Why then do neurons located in the back of my brain
>>> not generate conscious feeling? And if they do participate, why am I
>>> located in the front of my brain? Why this location? Why does only a
>>> tiny part of the brain feel conscious activities?
>> Because you're not an ancient Greek.  They felt their  consciousness was
>> located in their stomach.
>>
>> Brent Meeker
> 
> While I'm not, the questioning was serious... I've never asked where 
> other people feel they are... I'm there (in the upper front of the brain)... 
> Are my "feelings" not in accordance with yours?
> 
> Quentin

It might be because we're so visual and hence locate ourselves at the viewpoint 
of our vision (but that wouldn't explain the Greeks).  Or it might be because 
we've been taught that consciousness is done by the brain.

Brent Meeker





Re: UDA revisited and then some

2006-12-06 Thread Russell Standish

On Wed, Dec 06, 2006 at 11:38:32PM +0100, Quentin Anciaux wrote:
> 
> On Wednesday 6 December 2006 19:35, Brent Meeker wrote:
> > Quentin Anciaux wrote:
> > ...
> >
> > > Another thing that puzzles me is that consciousness should be generated
> > > by physical (and chemical, which is also "physical") activities of the
> > > brain, yet I feel my consciousness (in fact me) is located in the upper
> > > front of my skull... Why then do neurons located in the back of my brain
> > > not generate conscious feeling? And if they do participate, why am I
> > > located in the front of my brain? Why this location? Why does only a
> > > tiny part of the brain feel conscious activities?
> >
> > Because you're not an ancient Greek.  They felt their  consciousness was
> > located in their stomach.
> >
> > Brent Meeker
> 
> While I'm not, the questioning was serious... I've never asked where 
> other people feel they are... I'm there (in the upper front of the brain)... 
> Are my "feelings" not in accordance with yours?
> 
> Quentin
> 

I don't feel very pointlike. Rather, my consciousness feels distributed over
a volume that is usually a substantial fraction of my brain. When
meditating, my consciousness feels as if it expands to fill the room or
maybe even larger.

What of it? Probably not significant.

--
A/Prof Russell Standish          Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                 [EMAIL PROTECTED]
Australia                        http://www.hpcoders.com.au







Re: UDA revisited and then some

2006-12-06 Thread Quentin Anciaux

On Wednesday 6 December 2006 19:35, Brent Meeker wrote:
> Quentin Anciaux wrote:
> ...
>
> > Another thing that puzzles me is that consciousness should be generated
> > by physical (and chemical, which is also "physical") activities of the
> > brain, yet I feel my consciousness (in fact me) is located in the upper
> > front of my skull... Why then do neurons located in the back of my brain
> > not generate conscious feeling? And if they do participate, why am I
> > located in the front of my brain? Why this location? Why does only a
> > tiny part of the brain feel conscious activities?
>
> Because you're not an ancient Greek.  They felt their  consciousness was
> located in their stomach.
>
> Brent Meeker

While I'm not, the questioning was serious... I've never asked where 
other people feel they are... I'm there (in the upper front of the brain)... 
Are my "feelings" not in accordance with yours?

Quentin





RE: UDA revisited and then some

2006-12-06 Thread Stathis Papaioannou


Hi Quentin,
> 
> Hi Stathis,
> 
> On Wednesday 6 December 2006 10:23, Stathis Papaioannou wrote:
> > Brent meeker writes:
> > > Stathis Papaioannou wrote:
> > > "Fair" is a vague term.  That they are the same would be my default
> > > assumption, absent any other information.  Of course knowing that one is
> > > analog and the other digital reduces my confidence in that assumption,
> > > but with no theory of "audio source experience" I have no way to form a
> > > specific alternative hypothesis.
> >
> > You're implying that the default assumption should be that consciousness
> > correlates more closely with external behaviour than with internal activity
> > generating the behaviour: the tape recorder should reason that as the CD
> > player produces the same audio output as I do, most likely it has the same
> > experiences as I do. But why shouldn't the tape recorder reason: even
> > though the CD player produces the same output as I do, it does so using
> > completely different technology, so it most likely has completely different
> > experiences to my own.
> >
> > Stathis Papaioannou
> 
> A tape recorder or a CD has no external behavior that would mimic a human. 
> But I really think that if you have the same external behavior as a human 
> then the "copy" (whatever it is made of) will be conscious. An exact replica 
> means you can talk with the replica, it can learn, etc... It's not just 
> sound (and/or movement). Even if I knew that the "brain" copy was made of 
> smashed apples it would not change my "mind" ;) about it. The only 
> "evidence" of others' consciousness is behavior, social interactions, ... 
> You could scan a brain, yet you won't see consciousness.

The tape recorder / CD player example was to show that two entities may have 
similar behaviour generated by completely different mechanisms. As you say, we 
can see the brain, we can see the behaviour, but we *deduce* the consciousness, 
unless it is our own. If someone has similar behaviour generated by a similar 
brain, then you would have to invoke magical processes to explain why he would 
not also have similar consciousness. But if someone has similar behaviour with 
a very different brain, I don't think there is anything in the laws of nature 
which says that he has to have the same consciousness, even if you say that he 
must have *some* sort of consciousness.

> Another thing that puzzles me is that consciousness should be generated by 
> physical (and chemical, which is also "physical") activities of the brain, 
> yet I feel my consciousness (in fact me) is located in the upper front of my 
> skull... Why then do neurons located in the back of my brain not generate 
> conscious feeling? And if they do participate, why am I located in the front 
> of my brain? Why this location? Why does only a tiny part of the brain feel 
> conscious activities?

That's just what our brains make us think. If our brains were slightly 
different, our consciousness could seem to be located in our big toe, or on 
the moons of Jupiter.

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-06 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Brent meeker writes:
> 
>> Stathis Papaioannou wrote:
>>> Brent Meeker writes:
>>> 
>>>>> I assume that there is some copy of me possible which
>>>>> preserves my 1st person experience. After all, physical
>>>>> copying literally occurs in the course of normal life and I
>>>>> still feel myself to be the same person. But suppose I am
>>>>> offered some artificial means of being copied. The evidence I
>>>>> am presented with is that Fred2 here is a robot who behaves
>>>>> exactly the same as the standard human Fred: has all his
>>>>> memories, a similar personality, similar intellectual
>>>>> abilities, and passes whatever other tests one cares to set
>>>>> him. The question is, how can I be sure that Fred2 really has
>>>>> the same 1st person experiences as Fred? A software engineer
>>>>> might copy a program's "look and feel" without knowing
>>>>> anything about the original program's internal code, his goal
>>>>> being to mimic the external appearance as seen by the end
>>>>> user by whatever means available. Similarly with Fred2,
>>>>> although the hope was to produce a copy with the same 1st
>>>>> person experiences, the only possible research method would
>>>>> have been to produce a copy that mimics Fred's behaviour. If
>>>>> Fred2 has 1st person experiences at all, they may be utterly
>>>>> unlike those of Fred. Fred2 may even be aware that he is
>>>>> different but be extremely good at hiding it, because if he
>>>>> were not he would have been rejected in the testing process.
>>>>>
>>>>> If it could be shown that Fred2 behaves like Fred *and* is
>>>>> structurally similar
>>>> Or *functionally* similar at lower levels, e.g. having long and
>>>> short-term memory, having reflexes, having mostly separate
>>>> areas for language and vision.
>>>>
>>>>> to Fred then I would be more confident in accepting copying.
>>>>> If behaviour is similar but the underlying mechanism
>>>>> completely different then I would consider that only by
>>>>> accident could 1st person experience be similar.
>>>> I'd say that would still be the way to bet - just with less
>>>> confidence.
>>>>
>>>> Brent Meeker
>>> It's the level of confidence which is the issue. Would it be fair
>>> to assume that a digital and an analogue audio source have the
>>> same 1st person experience (such as it may be) because their
>>> output signal is indistinguishable to human hearing and
>>> scientific instruments?
>>> 
>>> Stathis Papaioannou
>> "Fair" is a vague term.  That they are the same would be my default
>> assumption, absent any other information.  Of course knowing that
>> one is analog and the other digital reduces my confidence in that
>> assumption, but with no theory of "audio source experience" I have no
>> way to form a specific alternative hypothesis.
> 
> You're implying that the default assumption should be that
> consciousness correlates more closely with external behaviour than
> with internal activity generating the behaviour: the tape recorder
> should reason that as the CD player produces the same audio output as
> I do, most likely it has the same experiences as I do. But why
> shouldn't the tape recorder reason: even though the CD player
> produces the same output as I do, it does so using completely
> different technology, so it most likely has completely different
> experiences to my own.

Here's my reasoning: We think other people (and animals) are conscious, have 
experiences, mainly because of the way they behave and to a lesser degree 
because they are like us in appearance and structure.  On the other hand we're 
pretty sure that consciousness requires a high degree of complexity, something 
supported by our theories and technology of information.  So we don't think 
that individual molecules or neurons are conscious - it must be something about 
how a large number of subsystems interact.  This implies that any one subsystem 
could be replaced by a functionally similar one, e.g. silicon "neuron", and not 
change consciousness.  So our theory is that it is not technology in the sense 
of digital vs analog, but in some functional information processing sense.

So given two things that have the same behavior, the default assumption is they 
have the same consciousness (i.e. little or none in the case of CD and tape 
players).  If I look into them deeper and find they use different technologies, 
that doesn't do much to change my opinion - it's like a silicon neuron vs a 
biochemical one.  If I find the flow and storage of information is different, 
e.g. one throws away more information than the other, or one adds randomness, 
then I'd say that was evidence for different consciousness.
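A minimal sketch of the replacement step in that reasoning (the class names
and the single-threshold "neuron" are simplifying assumptions of the sketch,
nothing more):

    class BioNeuron:
        def fire(self, inputs):
            # The messy biochemistry is abstracted to one threshold.
            return sum(inputs) > 1.0

    class SiliconNeuron:
        def fire(self, inputs):
            # Different substrate, same input/output mapping.
            return sum(inputs) > 1.0

    def network_output(neurons, stimulus):
        # The rest of the system only ever sees the fire() interface.
        return [n.fire(stimulus) for n in neurons]

    stimulus = [0.6, 0.7]
    mixed = [BioNeuron(), SiliconNeuron(), BioNeuron()]
    all_bio = [BioNeuron(), BioNeuron(), BioNeuron()]
    assert network_output(mixed, stimulus) == network_output(all_bio, stimulus)

As long as the replacement is functionally identical at its interface, the
system's behavior - and, on this theory, its consciousness - is unchanged.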

Brent Meeker


Re: UDA revisited and then some

2006-12-06 Thread Brent Meeker

Quentin Anciaux wrote:
...
> Another thing that puzzles me is that consciousness should be generated by 
> physical (and chemical, which is also "physical") activities of the brain, 
> yet I feel my consciousness (in fact me) is located in the upper front of my 
> skull... Why then do neurons located in the back of my brain not generate 
> conscious feeling? And if they do participate, why am I located in the front 
> of my brain? Why this location? Why does only a tiny part of the brain feel 
> conscious activities?

Because you're not an ancient Greek.  They felt their consciousness was 
located in their stomach.

Brent Meeker




Re: UDA revisited and then some

2006-12-06 Thread Quentin Anciaux

Hi Stathis,

On Wednesday 6 December 2006 10:23, Stathis Papaioannou wrote:
> Brent meeker writes:
> > Stathis Papaioannou wrote:
> > "Fair" is a vague term.  That they are the same would be my default
> > assumption, absent any other information.  Of course knowing that one is
> > analog and the other digital reduces my confidence in that assumption,
> > but with no theory of "audio source experience" I have no way to form a
> > specific alternative hypothesis.
>
> You're implying that the default assumption should be that consciousness
> correlates more closely with external behaviour than with internal activity
> generating the behaviour: the tape recorder should reason that as the CD
> player produces the same audio output as I do, most likely it has the same
> experiences as I do. But why shouldn't the tape recorder reason: even
> though the CD player produces the same output as I do, it does so using
> completely different technology, so it most likely has completely different
> experiences to my own.
>
> Stathis Papaioannou

A tape recorder or a CD has no external behavior that would mimic a human. But 
I really think that if you have the same external behavior as a human then 
the "copy" (whatever it is made of) will be conscious. An exact replica means 
you can talk with the replica, it can learn, etc... It's not just sound (and/or 
movement). Even if I knew that the "brain" copy was made of smashed apples it 
would not change my "mind" ;) about it. The only "evidence" of others' 
consciousness is behavior, social interactions, ... You could scan a brain, 
yet you won't see consciousness.

Another thing that puzzles me is that consciousness should be generated by 
physical (and chemical, which is also "physical") activities of the brain, 
yet I feel my consciousness (in fact me) is located in the upper front of my 
skull... Why then do neurons located in the back of my brain not generate 
conscious feeling? And if they do participate, why am I located in the front 
of my brain? Why this location? Why does only a tiny part of the brain feel 
conscious activities?

Quentin






RE: UDA revisited and then some

2006-12-06 Thread Stathis Papaioannou


Brent meeker writes:

> Stathis Papaioannou wrote:
> > 
> > Brent Meeker writes:
> > 
> >>> I assume that there is some copy of me possible which preserves
> >>> my 1st person experience. After all, physical copying literally
> >>> occurs in the course of normal life and I still feel myself to be
> >>> the same person. But suppose I am offered some artificial means
> >>> of being copied. The evidence I am presented with is that Fred2
> >>> here is a robot who behaves exactly the same as the standard
> >>> human Fred: has all his memories, a similar personality, similar
> >>> intellectual abilities, and passes whatever other tests one cares
> >>> to set him. The question is, how can I be sure that Fred2 really
> >>> has the same 1st person experiences as Fred? A software engineer
> >>> might copy a program's "look and feel" without knowing anything
> >>> about the original program's internal code, his goal being to
> >>> mimic the external appearance as seen by the end user by whatever
> >>> means available. Similarly with Fred2, although the hope was to
> >>> produce a copy with the same 1st person experiences, the only
> >>> possible research method would have been to produce a copy that
> >>> mimics Fred's behaviour. If Fred2 has 1st person experiences at
> >>> all, they may be utterly unlike those of Fred. Fred2 may even be
> >>> aware that he is different but be extremely good at hiding it,
> >>> because if he were not he would have been rejected in the testing
> >>> process.
> >>> 
> >>> If it could be shown that Fred2 behaves like Fred *and* is 
> >>> structurally similar
> >> Or *functionally* similar at lower levels, e.g. having long and
> >> short-term memory, having reflexes, having mostly separate areas
> >> for language and vision.
> >> 
> >>> to Fred then I would be more confident in accepting copying. If
> >>> behaviour is similar but the underlying mechanism completely
> >>> different then I would consider that only by accident could 1st
> >>> person experience be similar.
> >> I'd say that would still be the way to bet - just with less
> >> confidence.
> >> 
> >> Brent Meeker
> > 
> > It's the level of confidence which is the issue. Would it be fair to
> > assume that a digital and an analogue audio source have the same 1st
> > person experience (such as it may be) because their output signal is
> > indistinguishable to human hearing and scientific instruments?
> > 
> > Stathis Papaioannou 
> 
> "Fair" is a vague term.  That they are the same would be my default 
> assumption, absent any other information.  Of course knowing that one is 
> analog and the other digital reduces my confidence in that assumption, but 
> with no theory of "audio source experience" I have no way to form a specific 
> alternative hypothesis.

You're implying that the default assumption should be that consciousness 
correlates more closely with external behaviour than with internal activity 
generating the behaviour: the tape recorder should reason that as the CD player 
produces the same audio output as I do, most likely it has the same experiences 
as I do. But why shouldn't the tape recorder reason: even though the CD player 
produces the same output as I do, it does so using completely different 
technology, so it most likely has completely different experiences to my own. 
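A small sketch of the two players' situation (the 440 Hz tone and the sample
count are arbitrary stand-ins; "analog" is simulated here by a stored
recording, an assumption of the sketch):

    import math

    def cd_player(t):
        # Digital: computes each sample from a formula on demand.
        return round(math.sin(2 * math.pi * 440 * t), 3)

    # Tape stand-in: a fixed recording, sampled from the source in advance.
    TAPE = {n: cd_player(n / 1000) for n in range(1000)}

    def tape_recorder(n):
        # Replays stored values instead of computing them.
        return TAPE[n]

    # To an external test the two outputs are indistinguishable:
    assert all(cd_player(n / 1000) == tape_recorder(n) for n in range(1000))

Identical output, entirely different internal mechanism - which is the point
of the analogy.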

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-05 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Brent Meeker writes:
> 
>>> I assume that there is some copy of me possible which preserves
>>> my 1st person experience. After all, physical copying literally
>>> occurs in the course of normal life and I still feel myself to be
>>> the same person. But suppose I am offered some artificial means
>>> of being copied. The evidence I am presented with is that Fred2
>>> here is a robot who behaves exactly the same as the standard
>>> human Fred: has all his memories, a similar personality, similar
>>> intellectual abilities, and passes whatever other tests one cares
>>> to set him. The question is, how can I be sure that Fred2 really
>>> has the same 1st person experiences as Fred? A software engineer
>>> might copy a program's "look and feel" without knowing anything
>>> about the original program's internal code, his goal being to
>>> mimic the external appearance as seen by the end user by whatever
>>> means available. Similarly with Fred2, although the hope was to
>>> produce a copy with the same 1st person experiences, the only
>>> possible research method would have been to produce a copy that
>>> mimics Fred's behaviour. If Fred2 has 1st person experiences at
>>> all, they may be utterly unlike those of Fred. Fred2 may even be
>>> aware that he is different but be extremely good at hiding it,
>>> because if he were not he would have been rejected in the testing
>>> process.
>>> 
>>> If it could be shown that Fred2 behaves like Fred *and* is 
>>> structurally similar
>> Or *functionally* similar at lower levels, e.g. having long and
>> short-term memory, having reflexes, having mostly separate areas
>> for language and vision.
>> 
>>> to Fred then I would be more confident in accepting copying. If
>>> behaviour is similar but the underlying mechanism completely
>>> different then I would consider that only by accident could 1st
>>> person experience be similar.
>> I'd say that would still be the way to bet - just with less
>> confidence.
>> 
>> Brent Meeker
> 
> It's the level of confidence which is the issue. Would it be fair to
> assume that a digital and an analogue audio source have the same 1st
> person experience (such as it may be) because their output signal is
> indistinguishable to human hearing and scientific instruments?
> 
> Stathis Papaioannou 

"Fair" is a vague term.  That they are the same would be my default assumption, 
absent any other information.  Of course knowing that one is analog and the 
other digital reduces my confidence in that assumption, but with no theory of 
"audio source experience" I have no way to form a specific alternative 
hypothesis.

Brent Meeker




RE: UDA revisited and then some

2006-12-05 Thread Stathis Papaioannou


Brent Meeker writes:

> > I assume that there is some copy of me possible which preserves my
> > 1st person experience. After all, physical copying literally occurs
> > in the course of normal life and I still feel myself to be the same
> > person. But suppose I am offered some artificial means of being
> > copied. The evidence I am presented with is that Fred2 here is a
> > robot who behaves exactly the same as the standard human Fred: has
> > all his memories, a similar personality, similar intellectual
> > abilities, and passes whatever other tests one cares to set him. The
> > question is, how can I be sure that Fred2 really has the same 1st
> > person experiences as Fred? A software engineer might copy a
> > program's "look and feel" without knowing anything about the original
> > program's internal code, his goal being to mimic the external
> > appearance as seen by the end user by whatever means available.
> > Similarly with Fred2, although the hope was to produce a copy with
> > the same 1st person experiences, the only possible research method
> > would have been to produce a copy that mimics Fred's behaviour. If
> > Fred2 has 1st person experiences at all, they may be utterly unlike
> > those of Fred. Fred2 may even be aware that he is different but be
> > extremely good at hiding it, because if he were not he would have
> > been rejected in the testing process.
> > 
> > If it could be shown that Fred2 behaves like Fred *and* is
> > structurally similar 
> 
> Or *functionally* similar at lower levels, e.g. having long and short-term 
> memory, having reflexes, having mostly separate areas for language and vision.
> 
> >to Fred then I would be more confident in
> > accepting copying. If behaviour is similar but the underlying
> > mechanism completely different then I would consider that only by
> > accident could 1st person experience be similar.
> 
> I'd say that would still be the way to bet - just with less confidence.
> 
> Brent Meeker

It's the level of confidence which is the issue. Would it be fair to assume 
that a digital and an analogue audio source have the same 1st person experience 
(such as it may be) because their output signal is indistinguishable to human 
hearing and scientific instruments? 

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-05 Thread Brent Meeker

Stathis Papaioannou wrote:
... 
> I assume that there is some copy of me possible which preserves my
> 1st person experience. After all, physical copying literally occurs
> in the course of normal life and I still feel myself to be the same
> person. But suppose I am offered some artificial means of being
> copied. The evidence I am presented with is that Fred2 here is a
> robot who behaves exactly the same as the standard human Fred: has
> all his memories, a similar personality, similar intellectual
> abilities, and passes whatever other tests one cares to set him. The
> question is, how can I be sure that Fred2 really has the same 1st
> person experiences as Fred? A software engineer might copy a
> program's "look and feel" without knowing anything about the original
> program's internal code, his goal being to mimic the external
> appearance as seen by the end user by whatever means available.
> Similarly with Fred2, although the hope was to produce a copy with
> the same 1st person experiences, the only possible research method
> would have been to produce a copy that mimics Fred's behaviour. If
> Fred2 has 1st person experiences at all, they may be utterly unlike
> those of Fred. Fred2 may even be aware that he is different but be
> extremely good at hiding it, because if he were not he would have
> been rejected in the testing process.
> 
> If it could be shown that Fred2 behaves like Fred *and* is
> structurally similar 

Or *functionally* similar at lower levels, e.g. having long and short-term 
memory, having reflexes, having mostly separate areas for language and vision.

>to Fred then I would be more confident in
> accepting copying. If behaviour is similar but the underlying
> mechanism completely different then I would consider that only by
> accident could 1st person experience be similar.

I'd say that would still be the way to bet - just with less confidence.

Brent Meeker





RE: UDA revisited and then some

2006-12-05 Thread Stathis Papaioannou


Bruno Marchal writes:

> >> Well, in the case where comp is refuted (for example, by predicting that
> >> electrons weigh one ton, or by predicting non-eliminable white rabbits),
> >> everyone will be able to guess that those people were committing
> >> suicide. The problem is that we will probably copy brains at some level
> >> well before refuting comp, if ever.
> >> The comp hyp. entails the existence of possible relative zombies, but
> >> from the point of view of those who accept artificial brains, if they
> >> survive, they will survive where the level has been correctly chosen. 
> >> A
> >> linguistic difficulty is that the "where" does not denote a place in a
> >> universe, but many similar "instants" in many consistent histories.
> >
> 
> 
> > But how good a predictor of the right level having been  chosen is 3rd 
> > person
> > observable behaviour?
> 
> 
> Stathis, I don't understand the question. Could you elaborate just a 
> few bits, thanks.

I assume that there is some copy of me possible which preserves my 1st person 
experience. After all, physical copying literally occurs in the course of 
normal life and I still feel myself to be the same person. But suppose I am 
offered some artificial means of being copied. The evidence I am presented with 
is that Fred2 here is a robot who behaves exactly the same as the standard 
human Fred: has all his memories, a similar personality, similar intellectual 
abilities, and passes whatever other tests one cares to set him. The question 
is, how can I be sure that Fred2 really has the same 1st person experiences as 
Fred? A software engineer might copy a program's "look and feel" without 
knowing anything about the original program's internal code, his goal being to 
mimic the external appearance as seen by the end user by whatever means 
available. Similarly with Fred2, although the hope was to produce a copy with 
the same 1st person experiences, the only possible research method wou
 ld have been to produce a copy that mimics Fred's behaviour. If Fred2 has 1st 
person experiences at all, they may be utterly unlike those of Fred. Fred2 may 
even be aware that he is different but be extremely good at hiding it, because 
if he were not he would have been rejected in the testing process. 

If it could be shown that Fred2 behaves like Fred *and* is structurally similar 
to Fred then I would be more confident in accepting copying. If behaviour is 
similar but the underlying mechanism completely different then I would consider 
that only by accident could 1st person experience be similar.
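A sketch of the engineer's situation ("fred" and the test questions are
hypothetical stand-ins; the point is only what an acceptance test can see):

    def fred(question):
        # Stand-in for the original: some internal process produces answers.
        answers = {"name?": "Fred", "sky?": "blue"}
        return answers[question]

    # Build Fred2 by recording Fred's observed behaviour - no shared
    # mechanism, just a replay of past outputs.
    recorded = {q: fred(q) for q in ("name?", "sky?")}

    def fred2(question):
        return recorded[question]

    def behavioural_test(candidate):
        # The only thing the testing process can check is behaviour.
        return all(candidate(q) == fred(q) for q in ("name?", "sky?"))

    print(behavioural_test(fred2))   # True, yet it licenses no conclusion
                                     # about Fred2's 1st person experiences

Passing the test constrains outputs, not the mechanism behind them.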

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-05 Thread Bruno Marchal


On 05 Dec 06, at 00:31, Stathis Papaioannou wrote:


>> Well, in the case where comp is refuted (for example, by predicting that
>> electrons weigh one ton, or by predicting non-eliminable white rabbits),
>> everyone will be able to guess that those people were committing
>> suicide. The problem is that we will probably copy brains at some level
>> well before refuting comp, if ever.
>> The comp hyp. entails the existence of possible relative zombies, but
>> from the point of view of those who accept artificial brains, if they
>> survive, they will survive where the level has been correctly chosen. 
>> A
>> linguistic difficulty is that the "where" does not denote a place in a
>> universe, but many similar "instants" in many consistent histories.
>


> But how good a predictor of the right level having been  chosen is 3rd 
> person
> observable behaviour?


Stathis, I don't understand the question. Could you elaborate just a 
few bits, thanks.

Bruno

http://iridia.ulb.ac.be/~marchal/






RE: UDA revisited and then some

2006-12-04 Thread Stathis Papaioannou


Bruno Marchal writes:

> On 02 Dec 06, at 06:11, Stathis Papaioannou wrote:
> 
> > In addition to spectrum reversal type situations, where no change is 
> > noted from
> > either 3rd or 1st person perspective (and therefore it doesn't really 
> > matter to anyone:
> > as you say, it may be occurring all the time anyway and we would never 
> > know), there is
> > the possibility that a change is noted from a 1st person perspective, 
> > but never reported.
> > If you consider a practical research program to make artificial 
> > replacement brains, all the
> > researchers can ever do is build a brain that behaves like the 
> > original. It may do this because
> > it thinks like the original, or it may do it because it is a very good 
> > actor and is able to pretend
> > that it thinks like the original. Those brains which somehow betray 
> > the fact that they are
> > acting will be rejected, but the ones that never betray this fact will 
> > be accepted as true
> > replacement brains when they are actually not. Millions of people 
> > might agree to have these
> > replacement brains and no-one will ever know that they are committing 
> > suicide.
> 
> 
> Well, in the case where comp is refuted (for example, by predicting that 
> electrons weigh one ton, or by predicting non-eliminable white rabbits), 
> everyone will be able to guess that those people were committing 
> suicide. The problem is that we will probably copy brains at some level 
> well before refuting comp, if ever.
> The comp hyp. entails the existence of possible relative zombies, but 
> from the point of view of those who accept artificial brains, if they 
> survive, they will survive where the level has been correctly chosen. A 
> linguistic difficulty is that the "where" does not denote a place in a 
> universe, but many similar "instants" in many consistent histories.

But how good a predictor of the right level having been chosen is 3rd person 
observable behaviour?

Stathis Papaioannou




Re: UDA revisited and then some

2006-12-04 Thread Bruno Marchal


On 02 Dec 06, at 06:11, Stathis Papaioannou wrote:

> In addition to spectrum reversal type situations, where no change is 
> noted from
> either 3rd or 1st person perspective (and therefore it doesn't really 
> matter to anyone:
> as you say, it may be occurring all the time anyway and we would never 
> know), there is
> the possibility that a change is noted from a 1st person perspective, 
> but never reported.
> If you consider a practical research program to make artificial 
> replacement brains, all the
> researchers can ever do is build a brain that behaves like the 
> original. It may do this because
> it thinks like the original, or it may do it because it is a very good 
> actor and is able to pretend
> that it thinks like the original. Those brains which somehow betray 
> the fact that they are
> acting will be rejected, but the ones that never betray this fact will 
> be accepted as true
> replacement brains when they are actually not. Millions of people 
> might agree to have these
> replacement brains and no-one will ever know that they are committing 
> suicide.


Well, in the case where comp is refuted (for example, by predicting that 
electrons weigh one ton, or by predicting non-eliminable white rabbits), 
everyone will be able to guess that those people were committing 
suicide. The problem is that we will probably copy brains at some level 
well before refuting comp, if ever.
The comp hyp. entails the existence of possible relative zombies, but 
from the point of view of those who accept artificial brains, if they 
survive, they will survive where the level has been correctly chosen. A 
linguistic difficulty is that the "where" does not denote a place in a 
universe, but many similar "instants" in many consistent histories.

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: UDA revisited and then some

2006-12-04 Thread Bruno Marchal


On 1 December 2006, at 20:05, Brent Meeker wrote:

>
> Bruno Marchal wrote:
>>
>> On 1 December 2006, at 10:24, Stathis Papaioannou wrote:
>>
>>>
>>> Bruno Marchal writes:
>>>
 

> We can assume that the structural difference makes a difference to
> consciousness but
> not external behaviour. For example, it may cause spectrum 
> reversal.

 Let us suppose you are right. This would mean that there is a
 substitution level such that the digital copy person would act AS IF
 she has been duplicated at the correct level, but having or living a
 (1-person) spectrum reversal.

 Now what could that mean? Let us interview the copy and ask her the
 color of the sky. Having the same external behavior as the original,
 she will tell us the usual answer: blue (I suppose a sunny day!).

 So, apparently she is not 1-aware of that spectrum reversal. This means
 that from her 1-person point of view, there was no spectrum reversal,
 but obviously there is no 3-description of it either.

 So I am not sure your assertion makes sense. I agree that if we take an
 incorrect substitution level, the copy could experience a spectrum
 reversal, but then the person will complain to her doctor, saying
 something like "I have not been copied correctly", and will not pay her
 doctor bill (but this is a different external behaviour, ok?)
>>> I don't doubt that there is some substitution level that preserves 
>>> 3rd
>>> person
>>> behaviour and 1st person experience, even if this turns out to mean
>>> copying
>>> a person to the same engineering tolerances as nature has specified
>>> for ordinary
>>> day to day life. The question is, is there some substitution level
>>> which preserves
>>> 3rd person behaviour but not 1st person experience? For example,
>>> suppose
>>> you carried around with you a device which monitored all your
>>> behaviour in great
>>> detail, created predictive models, compared its predictions with your
>>> actual
>>> behaviour, and continuously refined its models. Over time, this 
>>> device
>>> might be
>>> able to mimic your behaviour closely enough such that it could take
>>> over control of
>>> your body from your brain and no-one would be able to tell that the
>>> substitution
>>> had occurred. I don't think it would be unreasonable to wonder 
>>> whether
>>> this copy
>>> experiences the same thing when it looks at the sky and declares it 
>>> to
>>> be blue as
>>> you do before the substitution.
>>
>>
>>
>> Thanks for the precision.
>> It *is* as reasonable to ask such a question as it is reasonable to 
>> ask
>> if tomorrow my first person experience will not indeed permute my blue
>> and orange qualia *including my memories of it* in such a way that my
>> 3-behavior will remain unchanged. In that case we are back to the
>> original spectrum reversal problem.
>> This is a reasonable question in the sense that the answer can be 
>> shown
>> relatively (!) undecidable: it is not verifiable by any external 
>> means,
>> nor by the first person itself. We could as well conclude that such a
>> change occurs each time the magnetic poles permute, or that it changes
>> at each season, etc.
>> *But* (curiously enough perhaps) such a change can be shown to be
>> guess-able by some richer machine.
>> The spectrum reversal question points to the gap between the 1 and 3
>> descriptions. With acomp your question should be addressable in
>> terms of the modal logics Z and X, or more precisely Z1* minus Z1 and
>> X1* minus X1, that is their true but unprovable (and undecidable)
>> propositions. Note that the question makes no sense at all for the
>> "pure 1-person" because S4Grz1* minus S4Grz1 is empty.
>> So your question makes sense because at the level of the fourth and
>> fifth hypo your question can be translated into purely arithmetical
>> propositions, which although highly undecidable by the machine itself
>> can be decided by some richer machine.
>> And I would say, without doing the calculus which is rather complex,
>> that the answer could very well be positive indeed, but this remains 
>> to
>> be proved. At least the unexpected nuances between computability,
>> provability, knowability, observability, perceivability (all redefined
>> by modal variants of G) give plenty of room for this, indeed.
>>
>> Bruno
>
> So what does your calculus say about the experience of people who wear 
> glasses which invert their field of vision?


This is just an adaptation process. If I remember rightly, people wearing 
those glasses are aware of the inversion of their field of vision until 
their brain generates an unconscious correction. All this can be explained 
self-referentially in G without problem, and even without mentioning the 
qualia (which would need the Z* or X*). Stathis' remarks on the 
existence of qualia changes without first person knowledge of the 
change are far less obvious.

Bruno


http://iridia.ulb.ac.be/~marchal/

Re: UDA revisited and then some

2006-12-02 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Brent meeker writes:
>  
>>> I don't doubt that there is some substitution level that preserves 3rd 
>>> person 
>>> behaviour and 1st person experience, even if this turns out to mean copying 
>>> a person to the same engineering tolerances as nature has specified for 
>>> ordinary 
>>> day to day life. The question is, is there some substitution level which 
>>> preserves 
>>> 3rd person behaviour but not 1st person experience? For example, suppose 
>>> you carried around with you a device which monitored all your behaviour in 
>>> great 
>>> detail, created predictive models, compared its predictions with your 
>>> actual 
>>> behaviour, and continuously refined its models. Over time, this device 
>>> might be 
>>> able to mimic your behaviour closely enough such that it could take over 
>>> control of 
>>> your body from your brain and no-one would be able to tell that the 
>>> substitution 
>>> had occurred. I don't think it would be unreasonable to wonder whether this 
>>> copy 
>>> experiences the same thing when it looks at the sky and declares it to be 
>>> blue as 
>>> you do before the substitution.
That's a précis of Greg Egan's short story "Learning to Be Me".  I wouldn't 
call it unreasonable to wonder whether the copy experiences the same qualia, 
but I'd call it unreasonable to conclude that it did not on the stated 
evidence.  In fact I find it hard to think of what evidence would count 
against it having some kind of qualia.
> 
> It would be a neat theory if any machine that processed environmental 
> information 
> in a manner analogous to an animal had some level of conscious experience 
> (and consistent 
> with Colin's "no zombie scientists" hypothesis, although I don't think it is 
> a conclusion he would 
> agree with). It would explain consciousness as a corollary of this sort of 
> information processing. 
> However, I don't know how such a thing could ever be proved or disproved. 
> 
> Stathis Papaioannou

Things are seldom proved or disproved in science.  Right now I'd say the 
evidence favors the no-zombie theory.  The only evidence beyond observation of 
behavior that I can imagine is to map processes in the brain and determine how 
memories are stored and how manipulations of symbolic and graphic 
representations are done.  It might then be possible to understand how a 
computer/robot could achieve the same behavior with a different functional 
structure; analogous, say, to imperative vs functional programs.  But then we'd 
only be able to infer that the robot might be conscious in a different way.  I 
don't see how we could infer that it was not conscious.
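
As a toy instance of that imperative-vs-functional point (illustrative only; 
these Python fragments are invented for this note and have nothing brain-like 
about them), here are two routines with identical 3rd-person behaviour but 
different internal structure:

    # Same observable input/output behaviour, different internal organisation.
    def reverse_imperative(s):          # mutable state, explicit loop
        out = []
        for ch in s:
            out.insert(0, ch)
        return "".join(out)

    def reverse_functional(s):          # no mutation, pure recursion
        return s if len(s) <= 1 else reverse_functional(s[1:]) + s[0]

    # A 3rd-person observer sees no difference at all:
    assert all(reverse_imperative(w) == reverse_functional(w)
               for w in ["", "sky", "blue", "qualia"])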

On a related point, it is often said here that consciousness is ineffable: what 
it is like to be someone cannot be communicated.  But there's another side to 
this: it is exactly the content of consciousness that we can communicate.  We 
can tell someone how we prove a theorem: we're conscious of those steps.  But 
we can't tell someone how our brain came up with the proof (the Poincaré 
effect) or why it is persuasive.

Brent Meeker




RE: UDA revisited and then some

2006-12-02 Thread Stathis Papaioannou


Brent meeker writes:
 
> > I don't doubt that there is some substitution level that preserves 3rd 
> > person 
> > behaviour and 1st person experience, even if this turns out to mean copying 
> > a person to the same engineering tolerances as nature has specified for 
> > ordinary 
> > day to day life. The question is, is there some substitution level which 
> > preserves 
> > 3rd person behaviour but not 1st person experience? For example, suppose 
> > you carried around with you a device which monitored all your behaviour in 
> > great 
> > detail, created predictive models, compared its predictions with your 
> > actual 
> > behaviour, and continuously refined its models. Over time, this device 
> > might be 
> > able to mimic your behaviour closely enough such that it could take over 
> > control of 
> > your body from your brain and no-one would be able to tell that the 
> > substitution 
> > had occurred. I don't think it would be unreasonable to wonder whether this 
> > copy 
> > experiences the same thing when it looks at the sky and declares it to be 
> > blue as 
> > you do before the substitution.
> 
> That's a précis of Greg Egan's short story "Learning to Be Me".  I wouldn't 
> call it unreasonable to wonder whether the copy experiences the same qualia, 
> but I'd call it unreasonable to conclude that it did not on the stated 
> evidence.  In fact I find it hard to think of what evidence would count 
> against it having some kind of qualia.

It would be a neat theory if any machine that processed environmental 
information 
in a manner analogous to an animal had some level of conscious experience (and 
consistent 
with Colin's "no zombie scientists" hypothesis, although I don't think it is a 
conclusion he would 
agree with). It would explain consciousness as a corollary of this sort of 
information processing. 
However, I don't know how such a thing could ever be proved or disproved. 

Stathis Papaioannou




RE: UDA revisited and then some

2006-12-01 Thread Stathis Papaioannou


In addition to spectrum reversal type situations, where no change is noted from 
either 3rd or 1st person perspective (and therefore it doesn't really matter to 
anyone: 
as you say, it may be occurring all the time anyway and we would never know), 
there is 
the possibility that a change is noted from a 1st person perspective, but never 
reported. 
If you consider a practical research program to make artificial replacement 
brains, all the 
researchers can ever do is build a brain that behaves like the original. It may 
do this because 
it thinks like the original, or it may do it because it is a very good actor 
and is able to pretend 
that it thinks like the original. Those brains which somehow betray the fact 
that they are 
acting will be rejected, but the ones that never betray this fact will be 
accepted as true 
replacement brains when they are actually not. Millions of people might agree 
to have these 
replacement brains and no-one will ever know that they are committing suicide. 

Stathis Papaioannou



> From: [EMAIL PROTECTED]
> Subject: Re: UDA revisited and then some
> Date: Fri, 1 Dec 2006 12:27:37 +0100
> To: everything-list@googlegroups.com
> 
> 
> 
> On 1 December 2006, at 10:24, Stathis Papaioannou wrote:
> 
> >
> >
> > Bruno Marchal writes:
> >
> >> 
> >>
> >>> We can assume that the structural difference makes a difference to
> >>> consciousness but
> >>> not external behaviour. For example, it may cause spectrum reversal.
> >>
> >>
> >> Let us suppose you are right. This would mean that there is a
> >> substitution level such that the digital copy person would act AS IF
> >> she has been duplicated at the correct level, but having or living a
> >> (1-person) spectrum reversal.
> >>
> >> Now what could that mean? Let us interview the copy and ask her the
> >> color of the sky. Having the same external behavior as the original,
> >> she will tell us the usual answer: blue (I suppose a sunny day!).
> >>
> >> So, apparently she is not 1-aware of that spectrum reversal. This means
> >> that from her 1-person point of view, there was no spectrum reversal,
> >> but obviously there is no 3-description of it either.
> >>
> >> So I am not sure your assertion makes sense. I agree that if we take an
> >> incorrect substitution level, the copy could experience a spectrum
> >> reversal, but then the person will complain to her doctor, saying
> >> something like "I have not been copied correctly", and will not pay her
> >> doctor bill (but this is a different external behaviour, ok?)
> >
> > I don't doubt that there is some substitution level that preserves 3rd 
> > person
> > behaviour and 1st person experience, even if this turns out to mean 
> > copying
> > a person to the same engineering tolerances as nature has specified 
> > for ordinary
> > day to day life. The question is, is there some substitution level 
> > which preserves
> > 3rd person behaviour but not 1st person experience? For example, 
> > suppose
> > you carried around with you a device which monitored all your 
> > behaviour in great
> > detail, created predictive models, compared its predictions with your 
> > actual
> > behaviour, and continuously refined its models. Over time, this device 
> > might be
> > able to mimic your behaviour closely enough such that it could take 
> > over control of
> > your body from your brain and no-one would be able to tell that the 
> > substitution
> > had occurred. I don't think it would be unreasonable to wonder whether 
> > this copy
> > experiences the same thing when it looks at the sky and declares it to 
> > be blue as
> > you do before the substitution.
> 
> 
> 
> Thanks for the precision.
> It *is* as reasonable to ask such a question as it is reasonable to ask 
> if tomorrow my first person experience will not indeed permute my blue 
> and orange qualia *including my memories of it* in such a way that my 
> 3-behavior will remain unchanged. In that case we are back to the 
> original spectrum reversal problem.
> This is a reasonable question in the sense that the answer can be shown 
> relatively (!) undecidable: it is not verifiable by any external means, 
> nor by the first person itself. We could as well conclude that such a 
> change occurs each time the magnetic poles permute, or that it changes 
> at each season, etc.
> *But* (curiously enough perhaps) such a change can be shown to be 
> guess-able by some richer machine.

Re: UDA revisited and then some

2006-12-01 Thread Brent Meeker

Bruno Marchal wrote:
> 
> On 1 December 2006, at 10:24, Stathis Papaioannou wrote:
> 
>>
>> Bruno Marchal writes:
>>
>>> 
>>>
 We can assume that the structural difference makes a difference to
 consciousness but
 not external behaviour. For example, it may cause spectrum reversal.
>>>
>>> Let us suppose you are right. This would mean that there is a
>>> substitution level such that the digital copy person would act AS IF
>>> she has been duplicated at the correct level, but having or living a
>>> (1-person) spectrum reversal.
>>>
>>> Now what could that mean? Let us interview the copy and ask her the
>>> color of the sky. Having the same external behavior as the original,
>>> she will tell us the usual answer: blue (I suppose a sunny day!).
>>>
>>> So, apparently she is not 1-aware of that spectrum reversal. This means
>>> that from her 1-person point of view, there was no spectrum reversal,
>>> but obviously there is no 3-description of it either.
>>>
>>> So I am not sure your assertion makes sense. I agree that if we take an
>>> incorrect substitution level, the copy could experience a spectrum
>>> reversal, but then the person will complain to her doctor, saying
>>> something like "I have not been copied correctly", and will not pay her
>>> doctor bill (but this is a different external behaviour, ok?)
>> I don't doubt that there is some substitution level that preserves 3rd 
>> person
>> behaviour and 1st person experience, even if this turns out to mean 
>> copying
>> a person to the same engineering tolerances as nature has specified 
>> for ordinary
>> day to day life. The question is, is there some substitution level 
>> which preserves
>> 3rd person behaviour but not 1st person experience? For example, 
>> suppose
>> you carried around with you a device which monitored all your 
>> behaviour in great
>> detail, created predictive models, compared its predictions with your 
>> actual
>> behaviour, and continuously refined its models. Over time, this device 
>> might be
>> able to mimic your behaviour closely enough such that it could take 
>> over control of
>> your body from your brain and no-one would be able to tell that the 
>> substitution
>> had occurred. I don't think it would be unreasonable to wonder whether 
>> this copy
>> experiences the same thing when it looks at the sky and declares it to 
>> be blue as
>> you do before the substitution.
> 
> 
> 
> Thanks for the precision.
> It *is* as reasonable to ask such a question as it is reasonable to ask 
> if tomorrow my first person experience will not indeed permute my blue 
> and orange qualia *including my memories of it* in such a way that my 
> 3-behavior will remain unchanged. In that case we are back to the 
> original spectrum reversal problem.
> This is a reasonable question in the sense that the answer can be shown 
> relatively (!) undecidable: it is not verifiable by any external means, 
> nor by the first person itself. We could as well conclude that such a 
> change occurs each time the magnetic poles permute, or that it changes 
> at each season, etc.
> *But* (curiously enough perhaps) such a change can be shown to be 
> guess-able by some richer machine.
> The spectrum reversal question points to the gap between the 1 and 3 
> descriptions. With acomp your question should be addressable in 
> terms of the modal logics Z and X, or more precisely Z1* minus Z1 and 
> X1* minus X1, that is their true but unprovable (and undecidable) 
> propositions. Note that the question makes no sense at all for the 
> "pure 1-person" because S4Grz1* minus S4Grz1 is empty.
> So your question makes sense because at the level of the fourth and 
> fifth hypo your question can be translated into purely arithmetical 
> propositions, which although highly undecidable by the machine itself 
> can be decided by some richer machine.
> And I would say, without doing the calculus which is rather complex, 
> that the answer could very well be positive indeed, but this remains to 
> be proved. At least the unexpected nuances between computability, 
> provability, knowability, observability, perceivability (all redefined 
> by modal variants of G) give plenty of room for this, indeed.
> 
> Bruno

So what does your calculus say about the experience of people who wear glasses 
which invert their field of vision?

Brent Meeker





Re: UDA revisited and then some

2006-12-01 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Bruno Marchal writes:
> 
>> 
>>
>>> We can assume that the structural difference makes a difference to 
>>> consciousness but
>>> not external behaviour. For example, it may cause spectrum reversal.
>>
>> Let us suppose you are right. This would mean that there is a 
>> substitution level such that the digital copy person would act AS IF 
>> she has been duplicated at the correct level, but having or living a 
>> (1-person) spectrum reversal.
>>
>> Now what could that mean? Let us interview the copy and ask her the 
>> color of the sky. Having the same external behavior as the original, 
>> she will tell us the usual answer: blue (I suppose a sunny day!).
>>
>> So, apparently she is not 1-aware of that spectrum reversal. This means 
>> that from her 1-person point of view, there was no spectrum reversal, 
>> but obviously there is no 3-description of it either.
>>
>> So I am not sure your assertion makes sense. I agree that if we take an 
>> incorrect substitution level, the copy could experience a spectrum 
>> reversal, but then the person will complain to her doctor, saying 
>> something like "I have not been copied correctly", and will not pay her 
>> doctor bill (but this is a different external behaviour, ok?)
> 
> I don't doubt that there is some substitution level that preserves 3rd person 
> behaviour and 1st person experience, even if this turns out to mean copying 
> a person to the same engineering tolerances as nature has specified for 
> ordinary 
> day to day life. The question is, is there some substitution level which 
> preserves 
> 3rd person behaviour but not 1st person experience? For example, suppose 
> you carried around with you a device which monitored all your behaviour in 
> great 
> detail, created predictive models, compared its predictions with your actual 
> behaviour, and continuously refined its models. Over time, this device might 
> be 
> able to mimic your behaviour closely enough such that it could take over 
> control of 
> your body from your brain and no-one would be able to tell that the 
> substitution 
> had occurred. I don't think it would be unreasonable to wonder whether this 
> copy 
> experiences the same thing when it looks at the sky and declares it to be 
> blue as 
> you do before the substitution.

That's a précis of Greg Egan's short story "Learning to Be Me".  I wouldn't 
call it unreasonable to wonder whether the copy experiences the same qualia, 
but I'd call it unreasonable to conclude that it did not on the stated 
evidence.  In fact I find it hard to think of what evidence would count 
against it having some kind of qualia.

Brent Meeker




Re: UDA revisited and then some

2006-12-01 Thread Bruno Marchal


On 1 December 2006, at 10:24, Stathis Papaioannou wrote:

>
>
> Bruno Marchal writes:
>
>> 
>>
>>> We can assume that the structural difference makes a difference to
>>> consciousness but
>>> not external behaviour. For example, it may cause spectrum reversal.
>>
>>
>> Let us suppose you are right. This would mean that there is a
>> substitution level such that the digital copy person would act AS IF
>> she has been duplicated at the correct level, but having or living a
>> (1-person) spectrum reversal.
>>
>> Now what could that mean? Let us interview the copy and ask her the
>> color of the sky. Having the same external behavior as the original,
>> she will tell us the usual answer: blue (I suppose a sunny day!).
>>
>> So, apparently she is not 1-aware of that spectrum reversal. This means
>> that from her 1-person point of view, there was no spectrum reversal,
>> but obviously there is no 3-description of it either.
>>
>> So I am not sure your assertion makes sense. I agree that if we take an
>> incorrect substitution level, the copy could experience a spectrum
>> reversal, but then the person will complain to her doctor, saying
>> something like "I have not been copied correctly", and will not pay her
>> doctor bill (but this is a different external behaviour, ok?)
>
> I don't doubt that there is some substitution level that preserves 3rd 
> person
> behaviour and 1st person experience, even if this turns out to mean 
> copying
> a person to the same engineering tolerances as nature has specified 
> for ordinary
> day to day life. The question is, is there some substitution level 
> which preserves
> 3rd person behaviour but not 1st person experience? For example, 
> suppose
> you carried around with you a device which monitored all your 
> behaviour in great
> detail, created predictive models, compared its predictions with your 
> actual
> behaviour, and continuously refined its models. Over time, this device 
> might be
> able to mimic your behaviour closely enough such that it could take 
> over control of
> your body from your brain and no-one would be able to tell that the 
> substitution
> had occurred. I don't think it would be unreasonable to wonder whether 
> this copy
> experiences the same thing when it looks at the sky and declares it to 
> be blue as
> you do before the substitution.



Thanks for the precision.
It *is* as reasonable to ask such a question as it is reasonable to ask 
if tomorrow my first person experience will not indeed permute my blue 
and orange qualia *including my memories of it* in such a way that my 
3-behavior will remain unchanged. In that case we are back to the 
original spectrum reversal problem.
This is a reasonable question in the sense that the answer can be shown 
relatively (!) undecidable: it is not verifiable by any external means, 
nor by the first person itself. We could as well conclude that such a 
change occurs each time the magnetic poles permute, or that it changes 
at each season, etc.
*But* (curiously enough perhaps) such a change can be shown to be 
guess-able by some richer machine.
The spectrum reversal question points to the gap between the 1 and 3 
descriptions. With acomp your question should be addressable in 
terms of the modal logics Z and X, or more precisely Z1* minus Z1 and 
X1* minus X1, that is their true but unprovable (and undecidable) 
propositions. Note that the question makes no sense at all for the 
"pure 1-person" because S4Grz1* minus S4Grz1 is empty.
So your question makes sense because at the level of the fourth and 
fifth hypo your question can be translated into purely arithmetical 
propositions, which although highly undecidable by the machine itself 
can be decided by some richer machine.
And I would say, without doing the calculus which is rather complex, 
that the answer could very well be positive indeed, but this remains to 
be proved. At least the unexpected nuances between computability, 
provability, knowability, observability, perceivability (all redefined 
by modal variants of G) give plenty of room for this, indeed.
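
For readers tracking the notation, a compact gloss of the modal variants 
invoked above, assuming the standard arithmetical definitions Bruno uses 
elsewhere (B is the machine's provability box, Dp abbreviates ~B~p, the 
subscript 1 marks the restriction to Sigma_1 sentences, and a starred logic 
axiomatizes what is true of the machine rather than merely provable by it):

    \begin{align*}
    &\text{provable (G and G*):}    && Bp\\
    &\text{knowable (S4Grz):}       && Bp \land p\\
    &\text{observable (Z and Z*):}  && Bp \land Dp\\
    &\text{perceivable (X and X*):} && Bp \land Dp \land p
    \end{align*}

On this reading the true-but-unprovable propositions live in Z1* minus Z1 and 
X1* minus X1, while the knower's logic coincides with its own starred version, 
which is why S4Grz1* minus S4Grz1 is empty.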

Bruno






http://iridia.ulb.ac.be/~marchal/






RE: UDA revisited and then some

2006-12-01 Thread Stathis Papaioannou


Bruno Marchal writes:

> 
> 
> > We can assume that the structural difference makes a difference to 
> > consciousness but
> > not external behaviour. For example, it may cause spectrum reversal.
> 
> 
> Let us suppose you are right. This would mean that there is a 
> substitution level such that the digital copy person would act AS IF 
> she has been duplicated at the correct level, but having or living a 
> (1-person) spectrum reversal.
> 
> Now what could that mean? Let us interview the copy and ask her the 
> color of the sky. Having the same external behavior as the original, 
> she will tell us the usual answer: blue (I suppose a sunny day!).
> 
> So, apparently she is not 1-aware of that spectrum reversal. This means 
> that from her 1-person point of view, there was no spectrum reversal, 
> but obviously there is no 3-description of it either.
> 
> So I am not sure your assertion makes sense. I agree that if we take an 
> incorrect substitution level, the copy could experience a spectrum 
> reversal, but then the person will complain to her doctor, saying 
> something like "I have not been copied correctly", and will not pay her 
> doctor bill (but this is a different external behaviour, ok?)

I don't doubt that there is some substitution level that preserves 3rd person 
behaviour and 1st person experience, even if this turns out to mean copying 
a person to the same engineering tolerances as nature has specified for 
ordinary 
day to day life. The question is, is there some substitution level which 
preserves 
3rd person behaviour but not 1st person experience? For example, suppose 
you carried around with you a device which monitored all your behaviour in 
great 
detail, created predictive models, compared its predictions with your actual 
behaviour, and continuously refined its models. Over time, this device might be 
able to mimic your behaviour closely enough such that it could take over 
control of 
your body from your brain and no-one would be able to tell that the 
substitution 
had occurred. I don't think it would be unreasonable to wonder whether this 
copy 
experiences the same thing when it looks at the sky and declares it to be blue 
as 
you do before the substitution.
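
A minimal sketch of the sort of device described above (purely illustrative: 
the BehaviourMimic class, its dictionary "model" and the monitor loop are 
invented here, and any real device would need a far richer learning rule):

    # Observe behaviour, predict it, refine the model whenever it misses.
    class BehaviourMimic:
        def __init__(self):
            self.model = {}                   # situation -> predicted action

        def predict(self, situation):
            return self.model.get(situation)  # None until first observed

        def update(self, situation, action):
            self.model[situation] = action    # continuous refinement

    def monitor(device, behaviour_stream):
        errors = 0
        for situation, action in behaviour_stream:
            if device.predict(situation) != action:
                errors += 1                   # prediction diverged
            device.update(situation, action)
        return errors

    # Once the error count stays at zero, the mimic's outputs match the
    # original's -- which is all any 3rd-person test can ever check.
    stream = [("what colour is the sky?", "blue")] * 5
    print(monitor(BehaviourMimic(), stream))  # 1: only the first guess misses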

Stathis Papaioannou




Re: UDA revisited and then some

2006-11-30 Thread Bruno Marchal


On 29 November 2006, at 06:33, Stathis Papaioannou wrote:




> We can assume that the structural difference makes a difference to 
> consciousness but
> not external behaviour. For example, it may cause spectrum reversal.


Let us suppose you are right. This would mean that there is a 
substitution level such that the digital copy person would act AS IF 
she has been duplicated at the correct level, but having or living a 
(1-person) spectrum reversal.

Now what could that mean? Let us interview the copy and ask her the 
color of the sky. Having the same external behavior as the original, 
she will tell us the usual answer: blue (I suppose a sunny day!).

So, apparently she is not 1-aware of that spectrum reversal. This means 
that from her 1-person point of view, there was no spectrum reversal, 
but obviously there is no 3-description of it either.

So I am not sure your assertion makes sense. I agree that if we take an 
incorrect substitution level, the copy could experience a spectrum 
reversal, but then the person will complain to her doctor, saying 
something like "I have not been copied correctly", and will not pay her 
doctor bill (but this is a different external behaviour, ok?)
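
As a toy model of why the interview cannot detect the reversal (illustrative 
only; the 500 nm cut-off and the SWAP table are arbitrary inventions): 
permute the internal "qualia" codes and the report table together, and the 
external answer never changes.

    # Toy spectrum reversal: the internal code differs, the report does not.
    SWAP = {"BLUE": "ORANGE", "ORANGE": "BLUE"}

    def perceive(wavelength_nm, inverted=False):
        quale = "BLUE" if wavelength_nm < 500 else "ORANGE"  # internal state
        if inverted:
            quale = SWAP[quale]                              # 1-person change
        # The verbal report is keyed to the (possibly permuted) internal
        # state, so the 3-person behaviour is identical either way.
        report = ({"BLUE": "blue", "ORANGE": "orange"} if not inverted
                  else {"ORANGE": "blue", "BLUE": "orange"})
        return report[quale]

    assert perceive(470) == perceive(470, inverted=True) == "blue"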

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: UDA revisited

2006-11-29 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> David Nyman writes:
> 
>> You're right - it's muddled, but as you imply there is the glimmer of
>> an idea trying to break through. What I'm saying is that the
>> 'functional' - i.e. 3-person description - not only of the PZ, but of
>> *anything* - fails to capture the information necessary for PC. Now,
>> this isn't intended as a statement of belief in magic, but rather that
>> the 'uninstantiated' 3-person level (i.e. when considered abstractly)
>> is simply a set of *transactions*.  But - beyond the abstract - the
>> instantiation or substrate of these transactions is itself an
>> information 'domain' - the 1-person level - that in principle must be
>> inaccessible via the transactions alone - i.e. you can't see it 'out
>> there'. But by the same token it is directly accessible via
>> instantiation - i.e. you can see it 'in here'
>>
>> For this to be what is producing PC, the instantiating, or
>> constitutive, level must be providing whatever information is necessary
>> to 'animate' 3-person transactional 'data' in phenomenal form, and in
>> addition whatever processes are contingent on phenomenally-animated
>> perception must be causally effective at the 3-person level (if we are
>> to believe that possessing PC actually makes a difference). This seems
>> a bit worrying in terms of the supposed inadmissibility of 'hidden
>> variables' in QM (i.e. the transactional theory of reality).
>> Notwithstanding this, if what I'm saying is true (which no doubt it
>> isn't), then it would appear that information over and above what is
>> manifested transactionally would be required to account for PC, and for
>> whatever transactional consequences are contingent on the possession of
>> PC.
>>
>> Just to be clear about PZs, it would be a consequence of the foregoing
>> that a functionally-equivalent analog of a PC entity *might* possess
>> PC, but that this would depend critically on the functional
>> *substitution level*. We could be confident that physical cloning
>> (duplication) would find the right level, but in the absence of this,
>> and without a theory of instantiation, we would be forced to rely on
>> the *behaviour* of the analog in assessing whether it possessed PC.
>> But, on reflection, this seems right.
> 
> You seem to be implying that there is "something" in the instantiation which 
> cannot be captured in the 3rd person description. Could this something just 
> be identified as "the raw feeling of PC from the inside", generated by 
> perfectly 
> well understood physics, with no causal effects of its own? 
> 
> Let me give a much simpler example than human consciousness. Suppose that 
> when a hammer hits a nail, it groks the nail. Grokking is not something that 
> can 
> be explained to a non-hammer. There is no special underlying physics: 
> whenever 
> momentum is transferred from the hammer to the nail, grokking necessarily 
> occurs. 
> It is no more possible for a hammer to hit a nail without grokking it than it 
> is 
> possible for a hammer to hit a nail without hitting it. Because of this, it 
> doesn't 
> really make sense to say that grokking "causes" anything: the 3rd person 
> describable physics completely defines all hammer-nail interactions, which is 
> why 
> we have all gone through life never suspecting that hammers grok. 
> 
> The idea of a zombie (non-grokking) hammer is philosophically problematic. We 
> would 
> have to invoke magic to explain how of two physically identical hammers doing 
> identical 
> things, one is a zombie and the other is normal. (There is no evidence that 
> there is 
> anything magic about grokking. Mysterious though it may seem, it's just a 
> natural part 
> of being a hammer). Still, we can imagine that God has created a zombie 
> hammer, 
> indistinguishable from normal hammers no matter what test we put it through. 
> This would 
> imply that there is some non-third person describable aspect of hammers 
> responsible for 
> their ability to grok nails. OK: we knew that already, didn't we? It is what 
> makes grokking 
> special, private, and causally irrelevant from a third person perspective. 
> 
> Stathis Papaioannou

Very well put, Stathis.

And an apt example since "to grok" actually is an English word meaning "to 
understand intuitively".  So when you understand that "A and B" entails "A", it 
is because you grok "and".  Intuitive understanding is not communicable 
directly.

Brent Meeker




RE: UDA revisited

2006-11-29 Thread Stathis Papaioannou


David Nyman writes:

> You're right - it's muddled, but as you imply there is the glimmer of
> an idea trying to break through. What I'm saying is that the
> 'functional' - i.e. 3-person description - not only of the PZ, but of
> *anything* - fails to capture the information necessary for PC. Now,
> this isn't intended as a statement of belief in magic, but rather that
> the 'uninstantiated' 3-person level (i.e. when considered abstractly)
> is simply a set of *transactions*.  But - beyond the abstract - the
> instantiation or substrate of these transactions is itself an
> information 'domain' - the 1-person level - that in principle must be
> inaccessible via the transactions alone - i.e. you can't see it 'out
> there'. But by the same token it is directly accessible via
> instantiation - i.e. you can see it 'in here'
> 
> For this to be what is producing PC, the instantiating, or
> constitutive, level must be providing whatever information is necessary
> to 'animate' 3-person transactional 'data' in phenomenal form, and in
> addition whatever processes are contingent on phenomenally-animated
> perception must be causally effective at the 3-person level (if we are
> to believe that possessing PC actually makes a difference). This seems
> a bit worrying in terms of the supposed inadmissibility of 'hidden
> variables' in QM (i.e. the transactional theory of reality).
> Notwithstanding this, if what I'm saying is true (which no doubt it
> isn't), then it would appear that information over and above what is
> manifested transactionally would be required to account for PC, and for
> whatever transactional consequences are contingent on the possession of
> PC.
> 
> Just to be clear about PZs, it would be a consequence of the foregoing
> that a functionally-equivalent analog of a PC entity *might* possess
> PC, but that this would depend critically on the functional
> *substitution level*. We could be confident that physical cloning
> (duplication) would find the right level, but in the absence of this,
> and without a theory of instantiation, we would be forced to rely on
> the *behaviour* of the analog in assessing whether it possessed PC.
> But, on reflection, this seems right.

You seem to be implying that there is "something" in the instantiation which 
cannot be captured in the 3rd person description. Could this something just 
be identified as "the raw feeling of PC from the inside", generated by 
perfectly 
well understood physics, with no causal effects of its own? 

Let me give a much simpler example than human consciousness. Suppose that 
when a hammer hits a nail, it groks the nail. Grokking is not something that 
can 
be explained to a non-hammer. There is no special underlying physics: whenever 
momentum is transferred from the hammer to the nail, grokking necessarily 
occurs. 
It is no more possible for a hammer to hit a nail without grokking it than it 
is 
possible for a hammer to hit a nail without hitting it. Because of this, it 
doesn't 
really make sense to say that grokking "causes" anything: the 3rd person 
describable physics completely defines all hammer-nail interactions, which is 
why 
we have all gone through life never suspecting that hammers grok. 

The idea of a zombie (non-grokking) hammer is philosophically problematic. We 
would 
have to invoke magic to explain how of two physically identical hammers doing 
identical 
things, one is a zombie and the other is normal. (There is no evidence that 
there is 
anything magic about grokking. Mysterious though it may seem, it's just a 
natural part 
of being a hammer). Still, we can imagine that God has created a zombie hammer, 
indistinguishable from normal hammers no matter what test we put it through. 
This would 
imply that there is some non-third person describable aspect of hammers 
responsible for 
their ability to grok nails. OK: we knew that already, didn't we? It is what 
makes grokking 
special, private, and causally irrelevant from a third person perspective. 
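
One way to picture that "causally irrelevant" clause (a deliberately silly 
sketch; Hammer, Nail and _grok are invented for the illustration): grokking 
occurs on every hit, yet writes no state an outside observer could measure.

    # Epiphenomenal grokking: necessarily occurs, changes nothing observable.
    class Nail:
        def __init__(self):
            self.depth = 0

    class Hammer:
        def hit(self, nail):
            nail.depth += 1        # the 3rd-person physics of the blow
            self._grok(nail)       # happens whenever momentum is transferred

        def _grok(self, nail):
            # the private "experience": no state written, no output produced
            _ = ("what it is like to drive a nail", nail.depth)

    nail = Nail()
    Hammer().hit(nail)
    print(nail.depth)              # 1 -- the same with or without _grok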

Stathis Papaioannou




Re: UDA revisited and then some

2006-11-28 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> Colin Hales writes:
> 
>>> I think it is logically possible to have functional equivalence but
>>> structural
>>> difference with consequent difference in conscious state even though
>>> external behaviour is the same.
>>>
>>> Stathis Papaioannou
>> Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
>> Replace every neuron with a silicon "functional equivalent" and (b) hold
>> the external behaviour identical.
> 
> I would guess that such a 1-for-1 replacement brain would in fact have the 
> same 
> PC as the biological original, although this is not a logical certainty. But 
> what I was 
> thinking of was the equivalent of copying the "look and feel" of a piece of 
> software 
> without having access to the source code. Computers may one day be able to 
> copy 
> the "look and feel" of a human not by directly modelling neurons but by 
> completely 
> different mechanisms. Even if such computers were conscious, there seems no 
> good 
> reason to assume that their experiences would be similar to those of a 
> similarly 
> behaving human. 
>  
>> If the 'structural difference' (accounting for consciousness) has a
>> critical role in function then the assumption of identical external
>> behaviour is logically flawed. This is the 'philosophical zombie'. Holding
>> the behaviour to be the same is a meaningless impossibility in this
>> circumstance.
> 
> We can assume that the structural difference makes a difference to 
> consciousness but 
> not external behaviour. For example, it may cause spectrum reversal.
>  
>> In the case of Chalmers silicon replacement it assumes that everything
>> that was being done by the neuron is duplicated. What the silicon model
>> assumes is a) that we know everything there is to know and b) that silicon
>> replacement/modelling/representation is capable of delivering everything,
>> even if we did 'know  everything' and put it in the model. Bad, bad,
>> arrogant assumptions.
> 
> Well, it might just not work, and you end up with an idiot who slobbers and 
> stares into 
> space. Or you might end up with someone who can do calculations really well 
> but displays 
> no emotions. But it's a thought experiment: suppose you use whatever advanced 
> technology 
> it takes to create a being with *exactly* the same behaviours as a biological 
> human. Can 
> you be sure that this being would be conscious? Can you be sure that this 
> being would be 
> conscious in the same way you and I are conscious?

Consciousness would be supported by the behavioral evidence.  If it were 
functionally similar at a low level I don't see what evidence there would be 
against it. So the best conclusion would be that the being was conscious.

If we knew a lot about the function of the human brain and we created this 
behaviorally identical being but with different functional structure, then we 
would have some evidence against the being having human type consciousness - 
but I don't think we'd be able to assert that it was not conscious.

Brent Meeker





RE: UDA revisited

2006-11-28 Thread Stathis Papaioannou


Quentin Anciaux writes:

> On Tuesday 28 November 2006, at 00:00, Stathis Papaioannou wrote:
> > Quentin Anciaux writes:
> > > But the point is to assume this "nonsense" to reach a "conclusion", to see
> > > where it leads. Why imagine a "possible" zombie which is functionally
> > > identical if there weren't any dualistic view in the first place! Only
> > > in a dualistic framework is it possible to imagine something functionally
> > > equivalent to a human yet lacking consciousness; the other way is that
> > > functional equivalence *requires* consciousness (you can't have
> > > functional equivalence without consciousness).
> >
> > I think it is logically possible to have functional equivalence but
> > structural difference with consequent difference in conscious state even
> > though external behaviour is the same.
> >
> > Stathis Papaioannou
> 
> Do you mean you can have an exact replica of human external behavior without 
> consciousness? Or with a different consciousness (than a human)?
> 
> In the first case, if you can't find any difference between a real human and 
> the replica lacking consciousness, how could you tell that the replica lacks 
> consciousness (or that the human has consciousness)?
> 
> In the second case, I don't understand what a different consciousness could 
> be; could you elaborate?

See my answer to Colin on this point. I assume that you are conscious in much 
the same 
way I am because (roughly speaking) you have a similar brain to mine *and* your 
behaviour is similar to mine. If only one of us were conscious we would have to 
invoke 
magic to explain it: God has decided to give only one of us an immaterial, 
undetectable 
soul which does not make any difference to behaviour. 

On the other hand, if it turns out that you are an alien robot designed to fool 
us into 
thinking you are human, based on technology utterly different to that in a 
biological brain, 
it is not unreasonable to wonder whether you are conscious at all, or if you 
are whether 
your conscious experience is anything like a human's.

Stathis Papaioannou




RE: UDA revisited and then some

2006-11-28 Thread Stathis Papaioannou


Colin Hales writes:

> > I think it is logically possible to have functional equivalence but
> > structural
> > difference with consequent difference in conscious state even though
> > external behaviour is the same.
> >
> > Stathis Papaioannou
> 
> Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
> Replace every neuron with a silicon "functional equivalent" and (b) hold
> the external behaviour identical.

I would guess that such a 1-for-1 replacement brain would in fact have the same 
PC as the biological original, although this is not a logical certainty. But 
what I was 
thinking of was the equivalent of copying the "look and feel" of a piece of 
software 
without having access to the source code. Computers may one day be able to copy 
the "look and feel" of a human not by directly modelling neurons but by 
completely 
different mechanisms. Even if such computers were conscious, there seems no 
good 
reason to assume that their experiences would be similar to those of a 
similarly 
behaving human. 
 
> If the 'structural difference' (accounting for consciousness) has a
> critical role in function then the assumption of identical external
> behaviour is logically flawed. This is the 'philosophical zombie'. Holding
> the behaviour to be the same is a meaningless impossibility in this
> circumstance.

We can assume that the structural difference makes a difference to 
consciousness but 
not external behaviour. For example, it may cause spectrum reversal.
 
> In the case of Chalmers silicon replacement it assumes that everything
> that was being done by the neuron is duplicated. What the silicon model
> assumes is a) that we know everything there is to know and b) that silicon
> replacement/modelling/representation is capable of delivering everything,
> even if we did 'know everything' and put it in the model. Bad, bad,
> arrogant assumptions.

Well, it might just not work, and you end up with an idiot who slobbers and 
stares into 
space. Or you might end up with someone who can do calculations really well but 
displays 
no emotions. But it's a thought experiment: suppose you use whatever advanced 
technology 
it takes to create a being with *exactly* the same behaviours as a biological 
human. Can 
you be sure that this being would be conscious? Can you be sure that this being 
would be 
conscious in the same way you and I are conscious?
 
> This is the endless loop that comes about when you make two contradictory
> assumptions without being able to know that you are, explore the
> consequences and decide you are right/wrong, when the whole scenario is
> actually meaningless because the premises are flawed. You can be very
> right/wrong in terms of the discussion (philosophy) but say absolutely
> nothing useful about anything in the real world (science).

I agree that the idea of a zombie identical twin (i.e. same brain, same 
behaviour but no PC) 
is philosophically dubious, but I think it is theoretically possible to have a 
robot twin which is 
if not unconscious at least differently conscious.

Stathis Papaioannou




RE: UDA revisited

2006-11-28 Thread Stathis Papaioannou


I was using David Chalmers' terminology. The science, however advanced it 
might become, is the "easy problem". Suppose alien scientists discover that 
human consciousness is caused by angels that reside in tiny black holes inside 
every neuron. They study these angels so closely that they come to understand 
them as well as humans understand hammers or screwdrivers today: well enough 
to build a human and predict his every response. Despite such detailed 
knowledge, 
they might still have no idea that humans are conscious, or what it is like to 
be a 
human, or how having one type of angel in your head feels different to having a 
different type of angel. For that matter, we have no idea whether hammers and 
screwdrivers have any kind of phenomenal consciousness. We assume that they do 
not, but maybe they experience something which for us is utterly beyond 
imagination. 
It's not a question science can ever answer, even in principle.

Stathis Papaioannou
 


> > The hard problem is not that we haven't discovered the physics that
> > explains
> > consciousness, it is that no such explanation is possible. Whatever
> > Physics X
> > is, it is still possible to ask, "Yes, but how can a blind man who
> > understands
> > Physics X use it to know what it is like to see?" As far as the hard
> > problem goes,
> > Physics X (if there is such a thing) is no more of an advance than knowing
> > which
> > neurons fire when a subject has an experience.
> >
> > Stathis Papaioannou
> 
> I think you are mixing up modelling and explanation. It may be that 'being
> something' is the only way to describe it. Why is that invalid science?
> Especially when 'being something' is everything that enables science.
> 
> Every object in the universe has a first person story to tell. Not just us.
> Voicelessness is just a logistics issue.
> 
> Colin





Re: UDA revisited: Digital Physics

2006-11-28 Thread 1Z


Colin Geoffrey Hales wrote:

> > (And "analogue" physics might turn out to be digital)
> >
>
> Digital is a conceptual representation metaphor only.

Not necessarily.

http://en.wikipedia.org/wiki/Digital_physics

http://www.mtnmath.com/digital.html





Re: UDA revisited

2006-11-28 Thread 1Z


Colin Geoffrey Hales wrote:
>
> Hi Brent,
> Please see the post/replies to Quentin/LZ.
> I am trying to understand the context in which I can be wrong and how
> other people view the proposition. There can be a mixture of mistakes and
> poor communication and I want to understand all the ways in which these
> things play a role in the discourse.
>
> So...
>
> >> So, I have my zombie scientist and my human scientist and I
> >> ask them to do science on exquisite novelty. What happens?
> >> The novelty is invisible to the zombie, who has the internal
> >> life of a dreamless sleep.
> >
> > Scientists don't literally "see" novel theories - they invent
> > them by combining other ideas.  "Invisible" is just a metaphor.
>
> I am not talking about the creative process. I am talking about the
> perception of a natural world phenomena that has never before been
> encountered. There can be no a-priori scientific knowledge in such
> situations. It is as far from a metaphor as you can get. I mean literal
> invisibility. See the red photon discussion in the LZ posting. If all you
> have is a-priori abstract (non-phenomenal) rules of interpretation of
> sensory signals to go by, then one day you are going to misinterpret
> because the signals came in the same way from a completely different source
> and you'd never know it. That is the invisibility I claim at the center of
> the zombie's difficulty.
>
> >
> >> The reason it is invisible is because there is no phenomenal
> >> consciousness. The zombie has only sensory data to use to
> >> do science. There are an infinite number
> >> of ways that same sensory data could arrive from an infinity
> >> of external natural world situations. The sensory data is
> >> ambiguous - it's all the same - action potential pulse trains
> >> traveling from sensors to brain. The zombie cannot possibly
> >> distinguish the novelty from the sensory data
> >
> > Why can it not distinguish them as well as the limited human scientist?
>
> Because the human scientist is distinguishing them within the phenomenal
> construct made from the sensory data, not directly from the sensory data -
> which all the zombie has. The zombie has no phenomenal construct of the
> external world. It has an abstraction entirely based on the prior history
> of non-phenomenal sensory input.

All the evidence indicates that humans have only an abstraction entirely
based on the prior history of phenomenal sensory input -- which itself
contains only information previously present in an abstraction entirely
based on the prior history of non-phenomenal sensory input.

> >
> >> and has no awareness of the external world or even its own boundary.
> >
> > Even simple robots like the Mars Rovers have awareness of the
> > world, where they are, their internal states, and
>
> No they don't. They have an internal state sufficiently complex to
> navigate according to the rules of the program (a-priori knowledge) given
> to them by humans, who are the only beings that are actually aware of
> where the rover is. Look at what happens when the machine gets hung up on
> novelty... like the rock nobody could allow for. Who digs it out of it?
> Not the rover... humans do...

Because it lacks phenomenality? Or because it is not
a very smart robot?

> The rover has no internal life at all. Going
> 'over there' is what the human sees. 'Actuate this motor until this
> number equals that number' is what the rover does.
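A toy sketch of that loop (names invented for illustration; real rover
flight software is vastly more elaborate, but the shape is the point): to
us the call below means "go over there", to the machine it is only numbers
satisfying an a-priori rule.

# Hypothetical illustration, not real rover code.
encoder = 0                    # wheel revolution count: the rover's 'world'

def read_encoder():
    return encoder

def set_motor(on):
    global encoder
    if on:
        encoder += 1           # driving advances the wheel count

def drive_to(target):
    # 'actuate this motor until this number equals that number'
    while read_encoder() < target:
        set_motor(on=True)
    set_motor(on=False)

drive_to(100)                  # the human's 'over there' is the rover's 100
print(read_encoder())          # -> 100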
>
> >
> > No.  You've simply assumed that you know what "awareness" is and you
> > have then defined a zombie as not having it.  You might as
> > well have just defined "zombie" as "just like a person, but can't do
> > science" or "can't whistle".  Whatever definition you give
> > still leaves the question of whether a being whose internal
> > processes (and a fortiori the external processes) are
> > functionally identical with a human's is conscious.
>
> This is the nub of it. It's where I struggle to see the logic others see.
> I don't think I have done what you describe. I'll walk myself through it.

> What I have done is try to figure out a valid test for phenomenal
> consciousness.
>
> When you take away phenomenal consciousness what can't you do? It seems
> science is a unique/special candidate for a variety of reasons. Its
> success is critically dependent on the existence of a phenomenal
> representation of the external world.

So is art. So is walking around without bumping into things.
So, no, science is not unique.

> The creature that is devoid of such constructs is what we typically call a
> zombie. May be a mistake to call it that. No matter.
>
> OK, so the real sticking point is the 'phenomenal construct'. The zombie
> could have a 'construct' with as much detail in it as the human phenomenal
> construct, but that is phenomenally inert (a numerical abstraction). Upon
> what basis could the zombie acquire such a co

Re: UDA revisited

2006-11-28 Thread 1Z


David Nyman wrote:
> 1Z wrote:
>
> > But PC isn't *extra* information. It is a re-presentation of
> > what is coming in through the senses by 3rd person mechanisms.
>
> How can you be confident of that?

Because phenomenal perception wouldn't be perception otherwise.

Non-phenomenal sense data (pulse trains, etc.) has to co-vary with
external events, or it is useless as a guide to what is going on outside
your head. Likewise, the phenomenal re-presentation has to co-vary with
the data. And if A co-varies with B, A contains essentially the same
information as B. If PC were a "free variable", it would not present, or
re-present, anything outside itself.
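A toy numerical check of that last step (an illustration, not 1Z's own
example): let A be a fixed one-to-one re-coding of B, the strongest form
of co-variation, and the two carry exactly the same amount of information.

from collections import Counter
from math import log2
import random

def entropy(xs):
    # Shannon entropy, in bits, of the empirical distribution of xs.
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

B = [random.randrange(4) for _ in range(10000)]   # 'external events'
A = [(3 * b + 1) % 4 for b in B]                  # a re-presentation of B

print(round(entropy(B), 3), round(entropy(A), 3)) # ~2.0 bits each

Since B is exactly recoverable from A, the re-coding neither adds nor
loses information, which is the sense in which the re-presentation is
"not extra".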

> We can see that transactional
> information arrives in the brain and is processed in a 3-person
> describable manner. We don't have the glimmer of a theory of how this
> could of itself produce anything remotely like PC, or indeed more
> fundamentally account for the existence of any non-3-personal 'pov'
> whatsoever.

"How", no. But when PC is part of perception, that sets
constraints on *what* is happening.

> What I'm suggesting is that 'phenomenality' is inherently
> bound up with instantiation, and that it thereby embodies (literally)
> information that is inaccessible from the 3-person (i.e. disembodied)
> pov.

Information about how information is embodied (this is a First Folio
of Shakespeare, that is a CD-ROM of Shakespeare) is always
"extra". However, if the information itself is "extra", there is no
phenomenal perception.

> This is why 'qualia' aren't 'out there'.  Of course this doesn't
> imply that electrons are conscious or whatever, because the typical
> content and 'grasp' of PC would emerge at vastly higher-order levels of
> organisation. But my point is that *instantiation* makes the difference
> - the world looks *like* something (actually, like *me*) to an
> instantiated entity, but not like anything (obviously) to a
> non-instantiated entity.
>
> PZs, as traditionally conceived, are precisely that - non-instantiated,
> abstract, and hence not 'like' anything at all.

Huh? PZ's are supposed to be physical.

> The difference between
> a PZ and a traditionally-duplicated PC human is that we *can't help*
> but get the phenomenality when we follow the traditional process of
> constructing people. But a purely 3-person functional theory doesn't
> tell us how. And consequently we can't find a purely functional
> *substitution level* that is guaranteed to produce PC, except by
> physical duplication. Or - as in the 'yes doctor' gamble - by observing
> the behaviour of the entity and drawing our own conclusions.
> 
> David
>





Re: UDA revisited

2006-11-28 Thread Bruno Marchal


On 27 Nov 2006, at 02:31, David Nyman wrote:

>
>
> On Nov 26, 11:50 pm, "1Z" <[EMAIL PROTECTED]> wrote:
>
>> Why use the word if you don't like the concept?
>
> I've been away for a bit and I can't pretend to have absorbed all the
> nuances of this thread but I have some observations.
>
> 1. To coherently conceive that a PZ which is a *functional* (not
> physical) duplicate can nonetheless lack PC - and for this to make any
> necessary difference to its possible behaviour - we must believe that
> the PZ thereby lacks some crucial information.



OK.





> 2. Such missing information consequently can't be captured by any
> purely *functional* description (however defined) of the non-PZ
> original.



Such missing information can't be captured in any "provable way" by a 
functional description (but assuming comp it exists, we just cannot 
know it exists).





> 3. Hence having PC must entail the possession and utilisation of
> information which *in principle* is not functionally (3-person)
> describable,



... by us, or by the original copied person;





> but which, in *instantiating* 3-person data, permits it to
> be contextualised, differentiated, and actioned in a manner not
> reproducible by any purely functional (as opposed to constructable)
> analog.


OK. Necessarily so with the explicit comp hyp.


>
> Now this seems to tally with what Colin is saying about the crucial
> distinction between the *content* of PC and whatever is producing it.



I thought so, at least. (But Colin is more cautious in his paper, less 
in his posts).




> It implies that whatever is producing it isn't reducible to sharable
> 3-person quanta.

Yes. For the same reason that after a WM duplication, the fact that you 
feel to be in W cannot be reduced to any third person description.



> This seems also (although I may be confused) to square
> with Bruno's claims for COMP that the sharable 3-person emerges from
> (i.e. is instantiated by) the 1-person level.


Hmmm... I would say OK, but the word "instantiated" can be misleading.
There are only types, and some are less abstract than others, and thus
"more instantiated", but it is a question of degree. In that sense the
sharable physics can be said to emerge from 1-person-level (singular,
plural, sensible) povs.




> As he puts it -'quanta
> are sharable qualia'.



Yes, but this was a statement by the lobian machine which made me doubt
the comp-physics could make sense, until I read Everett, where indeed
quanta can make sense only from the first person plural pov. In that sense
comp predicts the multiplication of *populations of individuals*.
(And from this we can expect to be able to derive the "right" tensor
product for comp states and histories, but I have not succeeded in doing
it properly yet. This is because I derive a quantum logic only, and, as
quantum logicians know, quantum logic cannot easily justify the tensor
products.)




> IOW, the observable - quanta - is the set of
> possible transactions between functionally definable entities
> instantiated at a deeper level of representation (the constitutive
> level).

Hoping they have the right measure. This is, assuming comp, testable,
making comp testable (Popper-falsifiable).



> This is why we see brains not minds.
>
> It seems to me that the above, or something like it, must be true if we
> are to take the lessons of the PZ to heart. IOW, the information
> instantiated by PC is in principle inaccessible to a PZ because the
> specification of the PZ as a purely functional 3-person analog is
> unable to capture the necessary constitutive information. The
> specification is at the wrong level.


Indeed, and that is why the philosophical zombie will remain
"philosophical". The PZ, like the movie of the boolean graph, will be a
zombie only relatively to us. From its "point of view", it will survive
in any relative continuations/instantiations which complete its
functional deficit. (Note that this happens all the time in the
Albert-Loewer Many Minds interpretation of QM, in which all third persons
in "your" branch are zombies; with comp this should be exceptional, like
in Everett.) Somehow a philosophical zombie has the same probability of
remaining in "your" branches as you (1-pov) have of staying in a Harry
Potter region of the (comp) multiverses (= negligible probability).


> It's like trying to physically
> generate a new computer by simply running more and more complex
> programs on the old one. It's only by *constructing* a physical
> duplicate (or some equivalent physical analog) that the critical
> constitutive - or instantiating - information can be captured.


I would say it is just by making the instantiations at the right level
(or below), but it always concerns possible sheaves of
instantiations/continuations. There are no tokens, no "real"
instantiations.




>
> We have to face it.  We won't find PC 'out there' - if we could, it
> would (literally) be staring us in the face. I think what Colin is
> trying to do is to discover how we can still do science on PC despite
> the fact that whatever is producing it isn't capturable by 'the
> observables', but rather only in the direct process and experience of
> observation itself.

Re: UDA revisited

2006-11-28 Thread David Nyman


1Z wrote:

> But PC isn't *extra* information. It is a re-presentation of
> what is coming in through the senses by 3rd person mechanisms.

How can you be confident of that? We can see that transactional
information arrives in the brain and is processed in a 3-person
describable manner. We don't have the glimmer of a theory of how this
could of itself produce anything remotely like PC, or indeed more
fundamentally account for the existence of any non-3-personal 'pov'
whatsoever. What I'm suggesting is that 'phenomenality' is inherently
bound up with instantiation, and that it thereby embodies (literally)
information that is inaccessible from the 3-person (i.e. disembodied)
pov. This is why 'qualia' aren't 'out there'.  Of course this doesn't
imply that electrons are conscious or whatever, because the typical
content and 'grasp' of PC would emerge at vastly higher-order levels of
organisation. But my point is that *instantiation* makes the difference
- the world looks *like* something (actually, like *me*) to an
instantiated entity, but not like anything (obviously) to a
non-instantiated entity.

PZs, as traditionally conceived, are precisely that - non-instantiated,
abstract, and hence not 'like' anything at all. The difference between
a PZ and a traditionally-duplicated PC human is that we *can't help*
but get the phenomenality when we follow the traditional process of
constructing people. But a purely 3-person functional theory doesn't
tell us how. And consequently we can't find a purely functional
*substitution level* that is guaranteed to produce PC, except by
physical duplication. Or - as in the 'yes doctor' gamble - by observing
the behaviour of the entity and drawing our own conclusions.

David

> David Nyman wrote:
>
> > For this to be what is producing PC, the instantiating, or
> > constitutive, level must be providing whatever information is necessary
> > to 'animate' 3-person transactional 'data' in phenomenal form, and in
> > addition whatever processes are contingent on phenomenally-animated
> > perception must be causally effective at the 3-person level (if we are
> > to believe that possessing PC actually makes a difference). This seems
> > a bit worrying in terms of the supposed inadmissability of 'hidden
> > variables' in QM (i.e the transactional theory of reality).
>
>
> But PC isn't *extra* information. It is a re-presentation of
> what is coming in through the senses by 3rd person mechanisms.





Re: UDA revisited

2006-11-28 Thread 1Z


David Nyman wrote:

> For this to be what is producing PC, the instantiating, or
> constitutive, level must be providing whatever information is necessary
> to 'animate' 3-person transactional 'data' in phenomenal form, and in
> addition whatever processes are contingent on phenomenally-animated
> perception must be causally effective at the 3-person level (if we are
> to believe that possessing PC actually makes a difference). This seems
> a bit worrying in terms of the supposed inadmissability of 'hidden
> variables' in QM (i.e the transactional theory of reality).


But PC isn't *extra* information. It is a re-presentation of
what is coming in through the senses by 3rd person mechanisms.





Re: UDA revisited

2006-11-28 Thread David Nyman



On Nov 28, 10:17 am, Stathis Papaioannou
<[EMAIL PROTECTED]> wrote:

> This seems to me a bit muddled (though in a good way: ideas breaking
> surface at the limits of what can be expressed). If the duplicate is a
> "functional" one then there can't be any difference to its possible
> behaviour, by definition.

You're right - it's muddled, but as you imply there is the glimmer of
an idea trying to break through. What I'm saying is that the
'functional' - i.e. 3-person description - not only of the PZ, but of
*anything* - fails to capture the information necessary for PC. Now,
this isn't intended as a statement of belief in magic, but rather that
the 'uninstantiated' 3-person level (i.e. when considered abstractly)
is simply a set of *transactions*.  But - beyond the abstract - the
instantiation or substrate of these transactions is itself an
information 'domain' - the 1-person level - that in principle must be
inaccessible via the transactions alone - i.e. you can't see it 'out
there'. But by the same token it is directly accessible via
instantiation - i.e. you can see it 'in here'.

For this to be what is producing PC, the instantiating, or
constitutive, level must be providing whatever information is necessary
to 'animate' 3-person transactional 'data' in phenomenal form, and in
addition whatever processes are contingent on phenomenally-animated
perception must be causally effective at the 3-person level (if we are
to believe that possessing PC actually makes a difference). This seems
a bit worrying in terms of the supposed inadmissability of 'hidden
variables' in QM (i.e the transactional theory of reality).
Notwithstanding this, if what I'm saying is true (which no doubt it
isn't), then it would appear that information over and above what is
manifested transactionally would be required to account for PC, and for
whatever transactional consequences are contingent on the possession of
PC.

Just to be clear about PZs, it would be a consequence of the foregoing
that a functionally-equivalent analog of a PC entity *might* possess
PC, but that this would depend critically on the functional
*substitution level*. We could be confident that physical cloning
(duplication) would find the right level, but in the absence of this,
and without a theory of instantiation, we would be forced to rely on
the *behaviour* of the analog in assessing whether it possessed PC.
But, on reflection, this seems right.

David





> David Nyman writes:
> > 1. To coherently conceive that a PZ which is a *functional* (not
> > physical) duplicate can nonetheless lack PC - and for this to make any
> > necessary difference to its possible behaviour - we must believe that
> > the PZ thereby lacks some crucial information.
> > 2. Such missing information consequently can't be captured by any
> > purely *functional* description (however defined) of the non-PZ
> > original.
> > 3. Hence having PC must entail the possession and utilisation of
> > information which *in principle* is not functionally (3-person)
> > describable, but which, in *instantiating* 3-person data, permits it to
> > be contextualised, differentiated, and actioned in a manner not
> > reproducible by any purely functional (as opposed to constructable)
> > analog.
>
> This seems to me a bit muddled (though in a good way: ideas breaking
> surface at the limits of what can be expressed). If the duplicate is a
> "functional" one then there can't be any difference to its possible
> behaviour, by definition.
>
> Colin could have made his point by saying that a PZ is impossible, as only a
> conscious person can act like a conscious person when faced with a difficult
> enough test, such as doing science. Ironically, this is the same conclusion as
> standard computationalism, which Colin opposes.
>
>
>
> > Now this seems to tally with what Colin is saying about the crucial
> > distinction between the *content* of PC and whatever is producing it.
> > It implies that whatever is producing it isn't reducible to sharable
> > 3-person quanta. This seems also (although I may be confused) to square
> > with Bruno's claims for COMP that the sharable 3-person emerges from
> > (i.e. is instantiated by) the 1-person level. As he puts it -'quanta
> > are sharable qualia'. IOW, the observable - quanta - is the set of
> > possible transactions between functionally definable entities
> > instantiated at a deeper level of representation (the constitutive
> > level). This is why we see brains not minds.
>
> > It seems to me that the above, or something like it, must be true if we
> > are to take the lessons of the PZ to heart. IOW, the information
> > instantiated by PC is in principle inaccessible to a PZ because the
> > specification of the PZ as a purely functional 3-person analog is
> > unable to capture the necessary constitutive information. The
> > specification is at the wrong level. It's like trying to physically
> > generate a new computer by simply running more and more complex
> >

RE: UDA revisited

2006-11-28 Thread Stathis Papaioannou


David Nyman writes:

> 1. To coherently conceive that a PZ which is a *functional* (not
> physical) duplicate can nonetheless lack PC - and for this to make any
> necessary difference to its possible behaviour - we must believe that
> the PZ thereby lacks some crucial information.
> 2. Such missing information consequently can't be captured by any
> purely *functional* description (however defined) of the non-PZ
> original.
> 3. Hence having PC must entail the possession and utilisation of
> information which *in principle* is not functionally (3-person)
> describable, but which, in *instantiating* 3-person data, permits it to
> be contextualised, differentiated, and actioned in a manner not
> reproducible by any purely functional (as opposed to constructable)
> analog.

This seems to me a bit muddled (though in a good way: ideas breaking surface at
the limits of what can be expressed). If the duplicate is a "functional" one
then there can't be any difference to its possible behaviour, by definition.

Colin could have made his point by saying that a PZ is impossible, as only a 
conscious person can act like a conscious person when faced with a difficult 
enough test, such as doing science. Ironically, this is the same conclusion as 
standard computationalism, which Colin opposes. 

> Now this seems to tally with what Colin is saying about the crucial
> distinction between the *content* of PC and whatever is producing it.
> It implies that whatever is producing it isn't reducible to sharable
> 3-person quanta. This seems also (although I may be confused) to square
> with Bruno's claims for COMP that the sharable 3-person emerges from
> (i.e. is instantiated by) the 1-person level. As he puts it -'quanta
> are sharable qualia'. IOW, the observable - quanta - is the set of
> possible transactions between functionally definable entities
> instantiated at a deeper level of representation (the constitutive
> level). This is why we see brains not minds.
> 
> It seems to me that the above, or something like it, must be true if we
> are to take the lessons of the PZ to heart. IOW, the information
> instantiated by PC is in principle inaccessible to a PZ because the
> specification of the PZ as a purely functional 3-person analog is
> unable to capture the necessary constitutive information. The
> specification is at the wrong level. It's like trying to physically
> generate a new computer by simply running more and more complex
> programs on the old one. It's only by *constructing* a physical
> duplicate (or some equivalent physical analog) that the critical
> constitutive - or instantiating - information can be captured.

If PZ's do exist, then there has to be a clear 3-person difference between 
the PZ and its PC-possessing brother: different physical structure, if not 
different behaviour. A machine based on semiconductors is not conscious, 
but the equivalent machine based on thermionic valves and doing the same 
computations is. Far-fetched, I don't believe it, and we could never *know* 
that it was the case (not even the computers themselves could know it was 
the case, i.e. whether they are conscious, unconscious or differently 
conscious), but we would have all the information there very clearly 3-person 
accessible. 

There is another possibility: this machine lacks PC, that functionally
*and* physically identical machine has it. The problem is, we need to invoke
magic to explain this.

> We have to face it.  We won't find PC 'out there' - if we could, it
> would (literally) be staring us in the face. I think what Colin is
> trying to do is to discover how we can still do science on PC despite
> the fact that whatever is producing it isn't capturable by 'the
> observables', but rather only in the direct process and experience of
> observation itself.
> 
> David

Stathis Papaioannou




RE: UDA revisited

2006-11-27 Thread Colin Geoffrey Hales

>> the basic assumption of BIV I would see as flawed. It assumes that all
>> there is to the scene generation is what there is at the boundary where
>> the sense measurement occurs.
>>
>> Virtual reality works, I think, because in the end, actual photons fly
>> at you from outside. Actual phonons impinge your ears and so forth.
>
> What you have just said suggests that you do believe in some sort of ESP,
> for how else could you tell whether an impulse travelling down your optic
> nerve originated from a photon hitting the retina or from direct nerve
> stimulation?
>
> Stathis Papaioannou

Good question. Have a think about the kinds of universes that might make
this possible and that also look like the universe we have when you use it
to look at things. Give yourself permission to do that. I can think of one.
It's the universe that is not made of its appearances, but of something that
constructs appearances. There are a large number of possibles. My EC
formalism is a depiction of what I think it might be.

I can give you a little insight, though.

In transmission line theory, at a discontinuity an electromagnetic wave can
be reflected. For... what?... a century? one of the models for this is that
at the discontinuity we have the collision of 2 waves. One going one way
and one going the other. The NET result can be total reflection (imagine a
seaside wave hitting a wall and bouncing back). Nothing downstream of the
discontinuity is ever measured. But the maths can model it as such and it
works.
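A toy numerical version of that two-wave picture (an illustration, not
Colin's formalism): total reflection at a short circuit, modelled as the
real incident wave plus a counter-propagating 'virtual' wave of opposite
sign. Nothing beyond the wall is ever measured, yet the superposition
reproduces the standing wave, null at the wall included.

import math

def field(x, t, k=1.0, w=1.0):
    incident = math.cos(w * t - k * x)   # real wave travelling toward x = 0
    virtual = -math.cos(w * t + k * x)   # 'as if' wave from beyond the wall
    return incident + virtual            # = 2*sin(w*t)*sin(k*x): standing wave

for t in (0.0, 0.5, 1.0):
    print(round(field(0.0, t), 9))       # 0.0 at the wall, every time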

OK. Electromagnetics:

SCENARIO 1)
Physical EM wave A> is real, approaches the discontinuity.
Virtual wave <B is real, approaches the discontinuity from the other side.

SCENARIO 2)
Physical EM wave A> is real, approaches the discontinuity.
There is no wave <B. Instead the boundary electrically acts 'as if'
electromagnetic wave <B existed.

Now:

PROPOSITION X
If it was 'like something' to be SCENARIO 1), then it MUST BE like
something to be scenario 2).
-

Now go to brain material.
Absolute BLIZZARD of programmable boundaries. All manufacturing virtual
bosons on a massive scale. EM fields the strength of lightning collapsing
everywhere.
It is LIKE SOMETHING to be those programmable boundaries. This we know.
So proposition X is true.
-

This is the universe we really live in. For real.
In that universe, when you analyse it, all causality is coded innately with
origin. It's built of it.
No ESP. No magic.
Have a think about it.
Clue: It's all nested layers of coherent noise events. Cohering with X
makes X real, no matter where it is, because everything is very very real
but equally really actually all in the same place: nowhere. Distance is
actually meaningless. It's an artifact of the causal event trail you have
to enact, not an indicator that you actually physically went anywhere,
except to those in it with you. Freaky huh?

None of this contradicts any existing laws of nature at all. The laws of
nature depicted by appearances depict appearances, not what the universe
is made of. All the laws stay exactly as they are. QM. Everything.

If you can't cope with the latter then at least accept the logical reality
of the virtual boson and work from there.

The horse has been led to the water. LZ now has his real paint. And I
have to move on.

I'd appreciate some feedback on the zombie room.

Colin Hales






Re: UDA revisited and then some

2006-11-27 Thread 1Z


Colin Geoffrey Hales wrote:
> >
> >
> > Quentin Anciaux writes:
> >
> >> But the point is to assume this "nonsense" to take a "conclusion", to
> >> see where it leads. Why imagine a "possible" zombie which is
> >> functionally identical if there weren't any dualistic view in the
> >> first place! Only in a dualistic framework is it possible to imagine a
> >> functional equivalent to a human yet lacking consciousness; the other
> >> way is that functional equivalence *requires* consciousness (you can't
> >> have functional equivalence without consciousness).
> >
> > I think it is logically possible to have functional equivalence but
> > structural difference, with consequent difference in conscious state,
> > even though external behaviour is the same.
> >
> > Stathis Papaioannou
>
> Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
> Replace every neuron with a silicon "functional equivalent" and (b) hold
> the external behaviour identical.
>
> If the 'structural difference' (accounting for consciousness) has a
> critical role in function then the assumption of identical external
> behaviour is logically flawed.

Chalmers argues in the opposite direction: that if
the external behaviour is the same, the PC must
be in sync, because it is absurd to go through the motions
of laughing and grimacing without feeling anything.

>  This is the 'philosophical zombie'. Holding
> the behaviour to be the same is a meaningless impossibility in this
> circumstance.

How do you know?

> In the case of Chalmers silicon replacement it assumes that everything
> that was being done by the neuron is duplicated.

Everything relevant, anyway.

> What the silicon model
> assumes is a) that we know everything there is to know and b) that silicon
> replacement/modelling/representation is capable of delivering everything,
> even if we did 'know  everything' and put it in the model. Bad, bad,
> arrogant assumptions.


a) Chalmers is as entitled to assume complete understanding of
neurology
as Frank Jackson is in the Mary story.

b) It doesn't have to be silicon. The assumption is that
there is something other than a neuron which can replicate
the relevant functioning. Well, maybe there isn't. Chalmers
doesn't know there is. You don't know there isn't.

> This is the endless loop that comes about when you make two contradictory
> assumptions without being able to know that you are, explore the
> consequences and decide you are right/wrong, when the whole scenario is
> actually meaningless because the premises are flawed. You can be very
> right/wrong in terms of the discussion (philosophy) but say absolutely
> nothing useful about anything in the real world (science).

It's a thought experiment.

> So you've kind of hit upon the real heart of the matter.
> 
> Colin Hales





RE: UDA revisited

2006-11-27 Thread Stathis Papaioannou


Colin Hales writes:

> > Well, of course, we have a phenomenal view. But there is no information
> > in the phenomenal display that was not first in the pre-phenomenal
> > sensory data.
> 
Yes there is. Mountains of it. It's just that the mechanism and the need
for it are not obvious to you. Some aspects of the external world must be
> recruited to some extent in the production of the visual field, for
> example. None of the real spatial relative location qualities, for
> example, are inherent in the photons hitting the retina. Same with the
> spatial nature of a sound field. That data is added through the mechanisms
> for generation of phenomenality.

Are you saying that it is impossible to capture the spatial nature of a sound
field with recording technology? Specific empirical predictions along these
lines would add weight to your theory.
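For concreteness (an illustration, not Stathis's wording): one of the main
spatial cues in hearing, the interaural time difference, is already
present in plain two-channel data, and a bare path-difference model
captures it.

import math

def itd(angle_deg, head_width=0.18, c=343.0):
    # Path-difference approximation: extra travel time to the far ear,
    # in seconds, for a source angle_deg away from straight ahead.
    return head_width * math.sin(math.radians(angle_deg)) / c

for a in (0, 30, 90):
    print(a, 'deg ->', round(itd(a) * 1e6, 1), 'microseconds')

A binaural recording that preserves the two ear signals preserves this
cue; whether anything *more* than such cues is needed is exactly what
Colin would have to specify.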

Stathis Papaioannou




RE: UDA revisited

2006-11-27 Thread Stathis Papaioannou


Colin Hales writes:

> >> To bench test "a human" I could not merely
> >> replicate sensory feeds. I'd have to replicate the factory!
> >
> > As in brain-in-vat scenarios. Do you have a way of showing
> > that BIV would be able to detect its status?
> 
> I think the BIV is another oxymoron, like the philosophical zombie. It
> assumes that the distal processes originating the causality that causes
> the impinging sense data (from the external/distal world) are not involved
> at all in the internal scene generation. An assumption I do not make.
> 
> I would predict that the scenes related to the 'phantom' body might work,
> because there are (presumably) the original internal (brain-based) body
> maps that can substitute for the lack of the actual body. But the scenes
> related to the 'phantom external world' I would predict wouldn't work. So
> the basic assumption of BIV I would see as flawed. It assumes that all
> there is to the scene generation is what there is at the boundary where
> the sense measurement occurs.
> 
> Virtual reality works, I think, because in the end, actual photons fly at
> you from outside. Actual phonons impinge your ears and so forth.

What you have just said suggests that you do believe in some sort of ESP, 
for how else could you tell whether an impulse travelling down your optic 
nerve originated from a photon hitting the retina or from direct nerve 
stimulation?

Stathis Papaioannou




Re: UDA revisited

2006-11-27 Thread Colin Geoffrey Hales

>
> Do you mean you can have an exact human external behaviour replica without
> consciousness? Or with a different consciousness (than a human)?
>
> If the 1st case: if you can't find any difference between a real human
> and the replica lacking consciousness, how could you tell the replica is
> lacking consciousness (or that the human has consciousness)?
>
> If the second case: I don't understand what could be a different
> consciousness, could you elaborate?
>
> Quentin Anciaux

Stathis is as entitled to imagine that your assumptions are wrong as you
are his. He merely has to say "something". It doesn't matter what. He can
imagine it. I can imagine it. Only you seem to think you have the
physical facts of consciousness sorted out, when you merely have a
linguistic argument sorted out.

Colin






Re: UDA revisited

2006-11-27 Thread Quentin Anciaux

Hi, 
On Tuesday 28 November 2006 00:00, Stathis Papaioannou wrote:
> Quentin Anciaux writes:
> > But the point is to assume this "nonsense" to take a "conclusion", to
> > see where it leads. Why imagine a "possible" zombie which is
> > functionally identical if there weren't any dualistic view in the first
> > place! Only in a dualistic framework is it possible to imagine a
> > functional equivalent to a human yet lacking consciousness; the other
> > way is that functional equivalence *requires* consciousness (you can't
> > have functional equivalence without consciousness).
>
> I think it is logically possible to have functional equivalence but
> structural difference, with consequent difference in conscious state,
> even though external behaviour is the same.
>
> Stathis Papaioannou

Do you mean you can have an exact human external behaviour replica without
consciousness? Or with a different consciousness (than a human)?

If the 1st case: if you can't find any difference between a real human and the
replica lacking consciousness, how could you tell the replica is lacking
consciousness (or that the human has consciousness)?

If the second case: I don't understand what could be a different
consciousness, could you elaborate?

Quentin Anciaux





RE: UDA revisited and then some

2006-11-27 Thread Colin Geoffrey Hales

>
>
> Quentin Anciaux writes:
>
>> But the point is to assume this "nonsense" to take a "conclusion", to
>> see where it leads. Why imagine a "possible" zombie which is
>> functionally identical if there weren't any dualistic view in the first
>> place! Only in a dualistic framework is it possible to imagine a
>> functional equivalent to a human yet lacking consciousness; the other
>> way is that functional equivalence *requires* consciousness (you can't
>> have functional equivalence without consciousness).
>
> I think it is logically possible to have functional equivalence but
> structural difference, with consequent difference in conscious state,
> even though external behaviour is the same.
>
> Stathis Papaioannou

Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
Replace every neuron with a silicon "functional equivalent" and (b) hold
the external behaviour identical.

If the 'structural difference' (accounting for consciousness) has a
critical role in function then the assumption of identical external
behaviour is logically flawed. This is the 'philosophical zombie'. Holding
the behaviour to be the same is a meaningless impossibility in this
circumstance.

In the case of Chalmers silicon replacement it assumes that everything
that was being done by the neuron is duplicated. What the silicon model
assumes is a) that we know everything there is to know and b) that silicon
replacement/modelling/representation is capable of delivering everything,
even if we did 'know  everything' and put it in the model. Bad, bad,
arrogant assumptions.

This is the endless loop that comes about when you make two contradictory
assumptions without being able to know that you are, explore the
consequences and decide you are right/wrong, when the whole scenario is
actually meaningless because the premises are flawed. You can be very
right/wrong in terms of the discussion (philosophy) but say absolutely
nothing useful about anything in the real world (science).

So you've kind of hit upon the real heart of the matter.

Colin Hales






RE: UDA revisited

2006-11-27 Thread Stathis Papaioannou


Quentin Anciaux writes:

> But the point is to assume this "nonsense" to take a "conclusion", to see
> where it leads. Why imagine a "possible" zombie which is functionally
> identical if there weren't any dualistic view in the first place! Only in
> a dualistic framework is it possible to imagine a functional equivalent to
> a human yet lacking consciousness; the other way is that functional
> equivalence *requires* consciousness (you can't have functional
> equivalence without consciousness).

I think it is logically possible to have functional equivalence but
structural difference, with consequent difference in conscious state,
even though external behaviour is the same.

Stathis Papaioannou




Re: UDA revisited

2006-11-27 Thread Colin Geoffrey Hales

>>
>> "If the mind is what the brain does, then what exactly is a coffee cup
>> doing?"
>
> It's not mind-ing.
>
>> For that question is just as valid and has just as complex an
>> answer...
>
> Of course not.

>
>> .yet we do not ask it. Every object in the universe is like this.
>> This is the mother of all anthropomorphisms.
>>
>> There is a view of the universe from the perspective of being a coffee
>> cup
>
> No there isn't. It has no internal representation of anything else.

>
> This isn't a "mysterious qualia" issue. Things like digital cameras
> and tape recorders demonstrably contain representations.Things like
> coffee cups don't.

>
>> and it is being equivalently created by whatever is
>> the difference between it and a brain. And you are
>> not entitled to say 'Nothing', all you can say
>> is that there's no brain material, so it isn't
>> like a brain. You can make no assertion as to the
>> actual experience because describing a brain does
>> NOT explain the causality of it. Hot cup? Cold
>> cup? Full? Empty? All the same? Not the same? None
>> of these questions are helped by the "what the
>> brain does" bandaid excuse for proper science.
>> Glaring missing physics.
>>

What you need to do is deliver a law of nature that says representation
makes qualia. Some physical law. I have at least found candidate real
physics to hypothesise, and it indicates that representation is NOT causal
of anything other than representation.

metaphorically:

You have no paint on your paintbrush. You are telling me you don't need
any. You assume the act of painting art, and ONLY the act of painting art,
makes paint. Randomly spraying paint everywhere is "painting", just not
necessarily art. That act is still using paint and is visible.

A brain has a story to tell that is more like art.
A coffee cup has a story to tell that definitely isn't art. But it's not
necessarily nothing.

Your assumptions in respect of representation are far far more
unjustified, magical, mystical and baseless than any of my propositions in
respect of physics. I have a hypothesis for a physical process for 'paint'
that exists in brain material. The suggested physics involved means I
could make a statement 'it's not like anything' to be a coffee cup because
that physics is not present in the necessary form. That explanation has
NOTHING to do with representation.

You have nothing but assumptions that paint is magic.

All your comments are addressed in the last paragraph of my original
(above), negating all your claims... which you then didn't respond to or
acknowledge. You have obviously responded without reading everything
first. Would you please stop wasting my time like this. Endless gainsay is
not an argument. Now say... "yes it is!".

Colin Hales






Re: UDA revisited

2006-11-27 Thread Bruno Marchal


On 26 Nov 2006, at 07:09, Colin Geoffrey Hales wrote:



> I know your work is mathematics, not philosophy. Thank goodness! I can
> see how your formalism can tell you 'about' a universe. I can see how
> inspection of the mathematics tells a story about the view from within
> and without. Hypostases and all that. I can see how the whole picture is
> constructed of platonic objects interacting according to their innate
> rules.
>
> It is the term 'empirically falsifiable' I have trouble with. For that
> to have any meaning at all it must happen in our universe, not the
> universe of your formalism.


Let us say that "my formalism" (actually the Universal Machine talk) is 
given by the 8 hypostases. Then, just recall that the UDA+Movie Graph 
explains why the appearance of the physical world is described by some 
of those hypostases (person point of view, pov).
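(For readers new to the thread, a rough paraphrase of how Bruno presents
the hypostases in his published summaries, not his wording here: they are
variants of the provability operator B of the Gödel-Löb logic G, with Dp
defined as ~B~p and t a fixed true sentence; numbers 2, 4 and 5 each split
along the G/G* provable-versus-true divide, giving eight in all:

  1. p               (truth)
  2. Bp              (the intelligible; splits G / G*)
  3. Bp & p          (the soul, the knower; no split)
  4. Bp & Dt         (intelligible matter; splits)
  5. Bp & Dt & p     (sensible matter; splits)

Take the enumeration as a paraphrase, subject to correction.)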





> A belief in the falsifiability of a formalism that does
> not map to anything we can find in our universe is problematic.


The "intelligible matter hypo" *must* map the observations. If not, the 
comp formalism remains coherent, but would be falsified.



>
> In the platonic realm of your formalism arithmetical propositions of
> the form (A v ~A) happen to be identical to our empirical laws:
>
> "It is an unconditional truth about the natural world that either (A is
> true about the natural world) or (A is not true about the natural
> world)"


Physics does not appear at that level.





>
> (we do the science dance by making sure A is good enough so that the
> NOT clause never happens and voila, A is an empirical 'fact')
>
> Call me thick but I don't understand how this correspondence between
> platonic statements and our empirical method makes comp falsifiable in
> our universe. You need to map the platonic formalism to that which
> drives our reality and then say something useful we can test. You need
> to make a claim that is critically dependent on 'comp' being true.


If comp is true the propositional logic of the certainly observable obeys
the logic of the "intelligible matter" hypostases, which are perfectly
well defined and comparable to the empirical quantum logic, for example.
We can already prove that comp makes that logic non-boolean.
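(A standard textbook illustration of what "non-boolean" means, not Bruno's
own example: take a spin-1/2 particle and let p = "spin-up along z",
q = "spin-up along x", r = "spin-down along x". Then q v r is certain, so

  p & (q v r) = p

but p & q and p & r each correspond to the zero subspace, since no state
is a simultaneous eigenstate of z-spin and x-spin, so

  (p & q) v (p & r) = 0.

The distributive law fails, and no boolean algebra allows that.)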


>
> I would suggest that claim be about the existence or otherwise of
> phenomenal consciousness. PC would be the best bet.


You have not yet convince me that PC can be tested.



>
> There is another more subtle psychological issue in that a belief that
> comp is empirically testable in principle does not entail that acting
> as if it were true is valid.


You are right. The contrary is true. We should act as if we were
doubting that comp is true. (Actually comp is special in that regard:
if true, we have to doubt it.)
Note the "funny situation": in 2445 Mister Alfred accepts an artificial
digital brain (betting on comp). He lives happily (apparently) until
2620, when at last comp is tested in the sense I describe above, and
is refuted (say).
Should we conclude that M. Alfred, from 2445, is a zombie?




> Sometimes I think that is what is going on
> around here.
>
> Do you have any suggested areas where comp might be tested and have any
> ideas what the test might entail?


Naive comp predicts that all cups of coffee will turn into white rabbits
or weirder in less than two seconds. Let us look at this cup of coffee
right now. After more than two seconds I see it has not changed into a
white rabbit. Naive comp has been refuted.
Now, computer science gives precise reasons to expect that the comp
predictions are more difficult to make, but UDA shows (or should show)
that the whole of physics is derivable from comp (that is, all the
"empirical" physical laws---the rest is geography). So testing comp
needs doing two things:
- deriving physics from arithmetic in the way comp predicts this must be
done (that is, from the pov hypostases)
- comparing with observations.

The interest of comp is that it explains 8 povs, but only some of them
are empirically testable; the others appear to be indirectly
testable because all the povs are related.

To sum up quickly: comp entails the following mystical proposition:
the whole truth is in your head. But I have shown that this entails
that the whole truth is in the "head" of any universal machine. I
explain how to look inside the head of a universal machine and how to
distinguish (in that head) the physical truth (quanta) from other sorts
of truth (like qualia). Then you can test comp by comparing the
structure of the quanta you will find in the universal machine's "head"
with those you see around you in the "physical universe".

It is not at all different from the usual work by physicists, despite 
it makes machine's physics(*) a branch of number theory. We can compare 
that "machine's physics" with usual empirical physics and test that 
machine's physics.

(*) "machine's physics" really means here t

Re: UDA revisited

2006-11-27 Thread 1Z


Colin Geoffrey Hales wrote:
> >
> >
> > Colin Hales writes:
> >
> >> The very fact that the laws of physics, derived and validated using
> >> phenomenality, cannot predict or explain how appearances are generated
> >> is proof that the appearance generator is made of something else and
> >> that something else is the reality involved, which is NOT
> >> appearances, but independent of them.
> >>
> >> I know that will sound weird...
> >>
> >> >
> >> >> The only science you can do is "I hypothesise that when I activate
> >> this
> >> >> nerve, that sense nerve and this one do "
> >> >
> >> > And I call regularities in my perceptions the "external world", which
> >> > becomes so
> >> > familiar to me that I forget it is a hypothesis.
> >>
> >> Except that in time, as people realise what I just said above, the
> >> hypothesis has some empirical support: If the universe were made of
> >> appearances when we opened up a cranium we'd see them. We don't. We see
> >> something generating/delivering them - a brain. That difference is the
> >> proof.
> >
> > I don't really understand this. We see that chemical reactions
> > in the brain generate consciousness, so why not stop at that?
> > In Gilbert Ryle's words, "the mind is what
> > the brain does". It's mysterious, and it's not well
> > understood, but it's still just chemistry.
>
> I have heard this 3 times now!
>
> 1) Marvin Minsky... not sure where, but people quote it.
> 2) Derek Denton, "The primordial emotions..."
> and now
> 3) Gilbert Ryle!
>
> Who really said it? Not that it matters. OK... back to business.
>
> ask yourself:
>
> "If the mind is what the brain does, then what exactly is a coffee cup
> doing?"

It's not mind-ing.

> For that question is just as valid and has just as complex an
> answer...

Of course not.

> .yet we do not ask it. Every object in the universe is like this.
> This is the mother of all anthropomorphisms.
>
> There is a view of the universe from the perspective of being a coffee cup

No there isn't. It has no internal representation of anything else.

This isn't a "mysterious qualia" issue. Things like digital cameras
and tape recorders demonstrably contain representations.Things like
coffee cups don't.

> and it is being equivalently created by whatever is the difference between
> it and a brain. And you are not entitled to say 'Nothing', all you can say
> is that there's no brain material, so it isn't like a brain. You can make
> no assertion as to the actual experience because describing a brain does
> NOT explain the causality of itHot cup? Cold cup? Full? Empty? All the
> same? Not the same? None of these questions are helped by the "what the
> brain does" bandaid excuse for proper science. Glaring missing physics.
>
> Zombie room has been deployed... OK dogs... do your worst! Attack!
> 
> :-)
> 
> Colin





RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

> The hard problem is not that we haven't discovered the physics that
> explains consciousness, it is that no such explanation is possible.
> Whatever Physics X is, it is still possible to ask, "Yes, but how can a
> blind man who understands Physics X use it to know what it is like to
> see?" As far as the hard problem goes, Physics X (if there is such a
> thing) is no more of an advance than knowing which neurons fire when a
> subject has an experience.
>
> Stathis Papaioannou

I think you are mixing up modelling and explanation. It may be that 'being
something' is the only way to describe it. Why is that invalid science?
Especially when 'being something' is everything that enables science.

Every object in the universe has a first person story to tell. Not just us.
Voicelessness is just a logistics issue.

Colin






RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
>
> Colin Hales writes:
>
>> The very fact that the laws of physics, derived and validated using
>> phenomenality, cannot predict or explain how appearances are generated
>> is proof that the appearance generator is made of something else and
>> that something else is the reality involved, which is NOT
>> appearances, but independent of them.
>>
>> I know that will sound weird...
>>
>> >
>> >> The only science you can do is "I hypothesise that when I activate
>> this
>> >> nerve, that sense nerve and this one do "
>> >
>> > And I call regularities in my perceptions the "external world", which
>> > becomes so
>> > familiar to me that I forget it is a hypothesis.
>>
>> Except that in time, as people realise what I just said above, the
>> hypothesis has some empirical support: If the universe were made of
>> appearances when we opened up a cranium we'd see them. We don't. We see
>> something generating/delivering them - a brain. That difference is the
>> proof.
>
> I don't really understand this. We see that chemical reactions
> in the brain generate consciousness, so why not stop at that?
> In Gilbert Ryle's words, "the mind is what
> the brain does". It's mysterious, and it's not well
> understood, but it's still just chemistry.

I have heard this 3 times now!

1) Marvin Minsky... not sure where, but people quote it.
2) Derek Denton, "The primordial emotions..."
and now
3) Gilbert Ryle!

Who really said it? Not that it matters. OK... back to business.

ask yourself:

"If the mind is what the brain does, then what exactly is a coffee cup
doing?"

For that question is just as valid and has just as complex an
answer... yet we do not ask it. Every object in the universe is like this.
This is the mother of all anthropomorphisms.

There is a view of the universe from the perspective of being a coffee cup,
and it is being equivalently created by whatever is the difference between
it and a brain. And you are not entitled to say 'Nothing'; all you can say
is that there's no brain material, so it isn't like a brain. You can make
no assertion as to the actual experience because describing a brain does
NOT explain the causality of it. Hot cup? Cold cup? Full? Empty? All the
same? Not the same? None of these questions are helped by the "what the
brain does" band-aid excuse for proper science. Glaring missing physics.

Zombie room has been deployed... OK dogs... do your worst! Attack!

:-)

Colin












RE: UDA revisited

2006-11-26 Thread Stathis Papaioannou


Colin Hales writes:

> OK. There is a proven mystery called the hard problem. Documented to death
> and beyond. Call it Physics X. It is the physics that _predicts_ (NOT
> DESCRIBES) phenomenal consciousness (PC). We have, through all my fiddling
> about with scientists, conclusive scientific evidence PC exists and is
> necessary for science.
> 
> So what next?
> 
> You say to yourself... "none of the existing laws of physics predict PC.
> Therefore my whole conception of how I understand the universe
> scientifically must be missing something fundamental. Absolutely NONE of
> what we know is part of it. What could that be?".

The hard problem is not that we haven't discovered the physics that explains
consciousness, it is that no such explanation is possible. Whatever Physics X
is, it is still possible to ask, "Yes, but how can a blind man who understands
Physics X use it to know what it is like to see?" As far as the hard problem
goes, Physics X (if there is such a thing) is no more of an advance than
knowing which neurons fire when a subject has an experience.

Stathis Papaioannou




RE: UDA revisited

2006-11-26 Thread Stathis Papaioannou


Colin Hales writes:

> The very fact that the laws of physics, derived and validated using
> phenomenality, cannot predict or explain how appearances are generated is
> proof that the appearance generator is made of something else, and that
> something else is the reality involved, which is NOT
> appearances, but independent of them.
> 
> I know that will sound weird...
> 
> >
> >> The only science you can do is "I hypothesise that when I activate this
> >> nerve, that sense nerve and this one do "
> >
> > And I call regularities in my perceptions the "external world", which
> > becomes so
> > familiar to me that I forget it is a hypothesis.
> 
> Except that in time, as people realise what I just said above, the
> hypothesis has some empirical support: If the universe were made of
> appearances when we opened up a cranium we'd see them. We don't. We see
> something generating/delivering them - a brain. That difference is the
> proof.

I don't really understand this. We see that chemical reactions in the brain
generate consciousness, so why not stop at that? In Gilbert Ryle's words,
"the mind is what the brain does". It's mysterious, and it's not well
understood, but it's still just chemistry.

> >> If I am to do more I must have a 'learning rule'. Who tells me the
> >> learning rule? This is a rule of interpretation. That requires context.
> >> Where does the context come from? There is none. That is the situation
> >> of
> >> the zombie.
> >
> > I do need some rules or knowledge to begin with if I am to get anywhere
> > with interpreting sense data.
> 
> You do NOT interpret sense data! In conscious activity you interpret the
> phenomenal scene generated using the sense data. Habituated/unconscious
> reflex behaviour with fixed rules uses sense data directly.

You could equally well argue that my computer does not interpret keystrokes,
nor the electrical impulses that travel to it from the keyboard, but rather
it creates a phenomenal scene in RAM based on those keystrokes.
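
To make the analogy concrete, here is a minimal sketch (Python; the
scancode table is invented for illustration, not any real keyboard
protocol) of a machine whose "interpretation" of keystrokes is nothing
more than a mapping from raw electrical events onto an internal buffer:

    # Hypothetical scancode-to-character table (illustrative values only).
    SCANCODES = {0x1C: "a", 0x32: "b", 0x21: "c", 0x29: " "}

    def build_scene(raw_events):
        """Turn a stream of raw scancodes into an internal 'scene' (a text buffer)."""
        # Unknown events stay uninterpreted rather than halting the machine.
        return "".join(SCANCODES.get(code, "?") for code in raw_events)

    print(build_scene([0x1C, 0x32, 0x29, 0x21]))  # prints "ab c"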

> Think about driving home on a well travelled route. You don't even know
> how you got home. Yet if something unusual happened on the drive - ZAP -
> phenomenality kicks in and phenomenal consciousness handles the novelty.

If something unusual happens I'll try to match it as closely as I can to
something I have already encountered and act accordingly. If it's like
nothing I've ever encountered before I guess I'll do something random, and
on the basis of the effect this has decide what I will do next time I
encounter the same situation.
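
For what it's worth, that policy is easy to write down. A minimal sketch
(Python; the distance metric, threshold and action set are all invented
for illustration, not a claim about how brains do it):

    import random

    class Adapter:
        """Nearest-match over remembered situations, random fallback on
        genuine novelty, and remembering whatever worked."""
        def __init__(self, actions, threshold=2.0):
            self.memory = {}           # situation (tuple of features) -> action
            self.actions = actions
            self.threshold = threshold

        def _distance(self, a, b):
            return sum(abs(x - y) for x, y in zip(a, b))

        def act(self, situation):
            if self.memory:
                nearest = min(self.memory, key=lambda s: self._distance(s, situation))
                if self._distance(nearest, situation) <= self.threshold:
                    return self.memory[nearest]   # close enough: reuse the old response
            return random.choice(self.actions)    # like nothing seen before: try anything

        def learn(self, situation, action, worked):
            if worked:                            # keep only responses whose effect was good
                self.memory[tuple(situation)] = action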

> > With living organisms, evolution provides this
> > knowledge
> 
> Evolution provided
> a) a learning tool (brain) that knows how to learn from phenomenal
>consciousness, which is an adaptive presentation of real
>external world a-priori knowledge.
> b) Certain simple reflex behaviours.
> 
> > while with machines the designers provide it.
> 
> Machine providers do not provide (a)
> 
> They only provide (b), which includes any adaptivity rules, which are just
> more rules.
> 
> 
> >
> > Incidentally, you have stated in your paper that novel technology as the
> > end
> > product of scientific endeavour is evidence that other people are not
> > zombies, but
> > how would you explain the very elaborate technology in living organisms,
> > created
> > by zombie evolutionary processes?
> >
> > Stathis Papaioannou
> 
> Amazing but true. Trial and error. Hypothesis/Test in a brutal live or die
> laboratory called The Earth. Notice that the process selected for
> phenomenal consciousness early on... which I predict will eventually be
> proven to exist in nearly all animal cellular life (vertebrate and
> invertebrate and even single celled organisms) to some extent. Maybe even
> in some plant life.
> 
> 'Technology' is a loaded word...I suppose I mean 'human made' technology.
> Notice that chairs and digital watches did not evolve independently of
> humans. Nor did science. Novel technology could be re-termed 'non-DNA
> based technology', I suppose. A bird flies. So do planes. One is DNA based.
> The other not DNA based, but created by a DNA based creature called the
> human. Eventually conscious machines will create novel technology too -
> including new versions of themselves. It doesn't change any part of the
> propositions I make - just contextualises them inside a fascinating story.

The point is that a process which is definitely non-conscious, i.e.
evolution, produces novel machines, some of which are themselves conscious
at that.

Stathis Papaioannou

RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
> Of course they are analogue devices, but their analogue nature makes no
> difference to the computation. If the ripple in the power supply of a TTL
> circuit were >4 volts then the computer's true analogue nature would
> intrude and it would malfunction.
>
> Stathis Papaioannou

Of course you are right... The original intent of my statement was to try
and correct any mental misunderstandings about the difference between the
real piece of material manipulating charge and the notional 'digital'
abstraction represented by it. I hope I did that.
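
To put a toy number on that point: a minimal sketch (Python; the input
thresholds are the standard TTL levels, while the nominal 3.3 V signal and
the ripple figures are just illustrative) of the digital abstraction
failing once analogue noise exceeds the noise margin:

    import random

    V_IH, V_IL = 2.0, 0.8          # standard TTL input thresholds (volts)

    def read_bit(nominal_volts, ripple_volts):
        """Read an 'ideal' bit off an analogue wire with supply ripple added."""
        v = nominal_volts + random.uniform(-ripple_volts, ripple_volts)
        if v >= V_IH:
            return 1
        if v <= V_IL:
            return 0
        return None                # undefined region: the digital abstraction breaks

    for ripple in (0.2, 4.5):      # modest ripple vs the >4 volt case above
        reads = [read_bit(3.3, ripple) for _ in range(10000)]
        print(f"ripple +/-{ripple} V: {reads.count(1) / len(reads):.0%} clean 1s")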

Colin





Re: UDA revisited

2006-11-26 Thread David Nyman


On Nov 26, 11:50 pm, "1Z" <[EMAIL PROTECTED]> wrote:

> Why use the word if you don't like the concept?

I've been away for a bit and I can't pretend to have absorbed all the
nuances of this thread but I have some observations.

1. To coherently conceive that a PZ which is a *functional* (not
physical) duplicate can nonetheless lack PC - and for this to make any
necessary difference to its possible behaviour - we must believe that
the PZ thereby lacks some crucial information.
2. Such missing information consequently can't be captured by any
purely *functional* description (however defined) of the non-PZ
original.
3. Hence having PC must entail the possession and utilisation of
information which *in principle* is not functionally (3-person)
describable, but which, in *instantiating* 3-person data, permits it to
be contextualised, differentiated, and actioned in a manner not
reproducible by any purely functional (as opposed to constructable)
analog.

Now this seems to tally with what Colin is saying about the crucial
distinction between the *content* of PC and whatever is producing it.
It implies that whatever is producing it isn't reducible to sharable
3-person quanta. This seems also (although I may be confused) to square
with Bruno's claims for COMP that the sharable 3-person emerges from
(i.e. is instantiated by) the 1-person level. As he puts it - 'quanta
are sharable qualia'. IOW, the observable - quanta - is the set of
possible transactions between functionally definable entities
instantiated at a deeper level of representation (the constitutive
level). This is why we see brains not minds.

It seems to me that the above, or something like it, must be true if we
are to take the lessons of the PZ to heart. IOW, the information
instantiated by PC is in principle inaccessible to a PZ because the
specification of the PZ as a purely functional 3-person analog is
unable to capture the necessary constitutive information. The
specification is at the wrong level. It's like trying to physically
generate a new computer by simply running more and more complex
programs on the old one. It's only by *constructing* a physical
duplicate (or some equivalent physical analog) that the critical
constitutive - or instantiating - information can be captured.

We have to face it.  We won't find PC 'out there' - if we could, it
would (literally) be staring us in the face. I think what Colin is
trying to do is to discover how we can still do science on PC despite
the fact that whatever is producing it isn't capturable by 'the
observables', but rather only in the direct process and experience of
observation itself.

David

> Colin Geoffrey Hales wrote:
> > <>
> > >> No confusion at all. The zombie is behaving. 'Wide awake'
> > >> in the sense that it is fully functional.
>
> > > Well, adaptive behaviour -- dealing with novelty --- is functioning.
>
> > Yes - but I'm not talking about merely functioning. I am talking about the
> > specialised function called scientific behaviour in respect of the natural
> > world outside.
>
> You assume, but have not shown, that it is in a class of its own.
>
> > The adaptive behaviour you speak of is adaptivity in
> > respect of adherence or otherwise to an internal rule set, not adaptation
> > in respect of the natural world outside.
>
> False dichotomy.
> Any adaptive system adapts under the influence of
> external impacts, and there are always some underlying rules, if only
> the rules of physics.
>
> > BTW 'Adaptive' means change, change means novelty has occurred. If you
> > have no phenomenality you must already have a rule as to how to adapt to
> > all change - ergo you know everything already.
>
> Rules to adapt to change don't have to stipulate novel inputs in advance.
>
> > >> I spent tens of thousands of hours designing, building,
> > >> benchtesting and commissioning zombies. On the benchtop I
> > >> have pretended to be their environment and they had no 'awareness'
> > >> they weren't in their real environment. It's what makes bench
> > >>  testing possible. The universe of the zombies was the
> > >> universe of my programming. The zombies could not tell if
> > >> they were in the factory or on the benchtop. That's why I
> > >> can empathise so well with zombie life. I have been
> > >> literally swatted by zombies (robot/cranes and other machines)
> > >> like I wasn't there... scares the hell
> > >> out of you! Some even had 'vision systems' but were still
> > >> blind. So... yes the zombie can 'behave'. What I am claiming
> > >> is they cannot do _science_ i.e. they cannot behave
> > >> scientifically. This is a very specific claim, not a general
> > >> claim.
>
> > > I see nothing to support it.
>
> > I have already showed you conclusive empirical evidence you can
> > demonstrate on yourself.
>
> No you haven't. Zombies aren't blind in the sense of not being able to
> see at all. You are just juggling different definitions of "Zombie".
>
> > Perhaps the 'zombie r

Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

The discussion has run its course. It has taught me a lot about the sorts
of issues and mindsets involved.

It has also given me the idea for the methodological-zombie-room, which I
will now write up. Maybe it will depict the circumstances and role of
phenomenality better than I have thus far.

Meanwhile I'd ask you to think about what sort of universe could make it
that if matter (A) acts 'as if' it interacted with matter (B), it
literally reifies aspects of that interaction, even though matter (B) does
not exist. For that is what I propose constitutes the phenomenal scenes.
It happens in brain material at the membranes of appropriately configured
neurons and astrocytes. Matter (B) is best classed as virtual bosons.

Just have a think about how that might be and what the universe that does
that might be made of. It's not made of the things depicted by the virtual
bosons.

cheers,

colin






Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> >
> > Le Dimanche 26 Novembre 2006 22:54, Colin Geoffrey Hales a écrit :
> > 
> >> What point is there in bothering with it. The philosophical zombie is
> >> ASSUMED to be equivalent! This is failure before you even start! It's
> >> wrong and it's proven wrong because there is a conclusively logically
> >> and
> >> empirically provable function that the zombie cannot possibly do without
> >> phenomenality: SCIENCE. The philosophical zombie would have to know
> >> everything a-priori, which makes science meaningless. There is no
> >> novelty
> >> to a philosophical zombie. It would have to anticipate all forms of
> >> randomness or chaotic behaviour NUTS.
> >
> > But that's exactly what all the argument is about!! Either identical
> > functional behavior entails consciousness, or there is some magical
> > property needed in addition to identical functional behavior to entail
> > consciousness.
> >
> >> This is failure before you even start!
> >
> > But the point is to assume this "nonsense" and draw a "conclusion", to
> > see where it leads. Why imagine a "possible" zombie which is functionally
> > identical if there weren't any dualistic view in the first place! Only in
> > a dualistic framework is it possible to imagine something functionally
> > equivalent to a human yet lacking consciousness; the other way is that
> > functional equivalence *requires* consciousness (you can't have
> > functional equivalence without consciousness).
> >
> >> This is failure before you even start!
> >
> > That's what you're doing... you haven't proved that a zombie can't do
> > science, because the "zombie" point is not about what they can do or not;
> > it is the fact that either acting like we act (the human way) necessarily
> > entails having consciousness, or it does not (meaning that there exists
> > an extra property beyond behavior, an extra thing undetectable from
> > seeing/living/speaking/... with the "zombie", that gives rise to
> > consciousness).
> >
> > You haven't proved that a zombie can't do science because you state it at
> > the start of the argument. The argument should be whether or not it is
> > possible to have a *complete* *functional* (human) replica yet lacking
> > consciousness.
> >
> > Quentin
> >
>
> Scientist_A does science.
>
> Scientist_A closes his eyes and finds the ability to do science radically
> altered.
>
> Continue the process and you eliminate all scientific behaviour.
>
> The failure of scientific behaviour correlates perfectly with the lack of
> phenomenal consciousness.

Closing your eyes cuts off sensory data as well. So: not proven.

> Empirical fact:
>
> "Human scientists have phenomenal consciousness"
>
> also
> "Phenomenal consciousness is the source of all our scientific evidence"
>
> ergo
>
> "Phenomenal consciousness exists and is sufficient and necessary for human
> scientific behaviour"

Doesn't follow. The fact that you use X to do Y doesn't make
X necessary for Y. Something else could be used instead. Legs and
locomotion...






Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

 That's it. Half the laws of physics are going neglected merely because we
 won't accept phenomenal consciousness ITSELF as evidence of anything.
>>> We accept it as evidence of extremely complex neural activity - can you
>>> demonstrate it is not?
>>
>> You have missed the point again.
>>
>> a) We demand CONTENTS OF phenomenal consciousness (that which is
>> perceived) as all scientific evidence.
>>
>> but
>>
>> b) we do NOT accept phenomenal consciousness ITSELF, "perceiving", as
>> scientific evidence of anything.
>
> Sure we do.  We accept it as evidence of our evolutionary adaptation to
> survival on Earth.

Evidence of anything CAUSAL OF PHENOMENAL CONSCIOUSNESS. You are quoting
evidence (a) at me.

>
>>
>> Evidence (a) is impotent to explain (b).
>
> That's your assertion - but repeating it over an over doesn't add anything
> to its support.

It is logically impossible for the apparent causality depicted in objects in
phenomenal scenes to betray anything about what caused the scene itself. This
is like saying you conclude the objects in the image in a mirror caused
the reflecting surface that is the mirror.

This is NOT just assertion.

"Empirical evidence derives no necessity for causal relationships"
NAGEL

Well proven. Accepted. Not mine. All empirical science is like this: there
is no causality in any of it. Phenomenality is CAUSED by something.
Whatever that is, it also caused all our empirical evidence.

>
> Maybe some new physics is implied by consciousness (as in Penrose's
> suggestion) or a complete revolution (as in Bruno's UD), but it is far
> from proven.  I don't see even a suggestion from you - just repeated
> complaints that we're not recognizing the need for some new element and
> claims that you've proven we need one.
>
> Brent Meeker
>

OK. Well I'll just get on with making my chips then. I have been exploring
the physics in question for some time now and it pointed me at exactly the
right place in brain material. I am just trying to get people to make the
first steps I did.

It involves accepting that you don't know everything and that exactly what
you don't know is why our universe produces phenomenality. There is an
anomaly in our evidence system which is an indicator of how to change.
That anomaly means that investigating underlying realities consistent with
the causal production of phenomenal consciousness is viable science.

The thing is you have to actually do it to get anywhere. Killing your
darlings is not easy.

Colin Hales








Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> >>
> >> You are a zombie. What is it about sensory data that suggests an
> >> external world?
> >
> > What is it about sensory data that suggests an external world to
> > human?
>
> Nothing. That's the point. That's why we incorporate the usage of natural
> world properties to contextualise it in the external world.

Huh???

> Called
> phenomenal consciousness... that makes us not a zombie.

That's not what phenomenal consciousness means...or usually
means...

> >
> > Well, of course, we have a phenomenal view. Bu there is no informtion
> > in the phenomenal display that was not first in the pre-phenomenal
> > sensory data.
>
> Yes there is. Mountains of it. It's just that the mechanism and the need
> for it is not obvious to you.

Things that don't exist tend not to be obvious.

> Some aspects of the external world must be
> recruited to some extent in the production of the visual field, for
> example. None of the real spatial relative location qualities, for
> example, are inherent in the photons hitting the retina. Same with the
> spatial nature of a sound field. That data is added through the mechanisms
> for generation of phenomenality.

It's not added. It's already there. It needs to be made explicit.

> >> The science you can do is the science of zombie sense data, not an
> >> external world.
> >
> > What does "of" mean in that sentence? Human science
> > is based on human phenomenality which is based on pre-phenomenal
> > sense data, and contains nothing beyond it informationally.
>
> No, science is NOT done on pre-phenomenal sense data. It is done on the
> phenomenal scene.

Which in turn is derived from sense data. If A is informative about B
and B is informative about C, A is informative about C.

> This is physiological fact. Close your eyes and see how
> much science you can do.

That shuts off sense-data , not just phenomenality.

> I don't seem to be getting this obvious simple thing past the pre-judgements.



> >
> > Humans unconsciously make guesses about the causal origins
> > of their sense-data in order to construct the phenomenal
> > view, which is then subjected to further educated guesswork
> > as part of the scientific process (which make contradict the
> > original guesswork, as in the detection of illusions)
>
> No, they unconsciously generate a phenomenal field and then make judgements
> from it. Again close your eyes and explore what effect it has on your
> judgements. Hard-coded a-priori reflex systems such as those that make the
> hand-eye reflex work in blindsight are not science and exist nowhere else
> except in reflex behaviour.


In humans. That doesn't mean phenomenality is necessary for adaptive
behaviour in other entities.

> >> Your hypotheses about an external world would be treated
> >> as wild metaphysics by your zombie friends
> >
> > Unless they are doing the same thing. why shouldn't
> > they be? It is function/behaviour afer all. Zombies
> > are suppposed to lack phenomenality, not function.
> >
>
> You are stuck on the philosophical zombie! Ditch it! Not what we are
> talking about. The philosophical zombie is an oxymoron.

If *you're* not talking about Zombies,
why use the word?

> >> (none of which you cen ever be
> >> aware of, for they are in this external world..., so there's another
> >> problem :-) Very tricky stuff, this.
> >> The only science you can do is "I hypothesise that when I activate this
> >> nerve, that sense nerve and this one do " You then publish in
> >> nature
> >> and collect your prize. (Except the external world this assumes is not
> >> there, from your perspective... life is grim for the zombie)
> >
> > Assuming, for some unexplained reasons, that zombies cannot
> > hypothesise about an external world without phenomena.
>
> Again you are projecting your experiences onto the zombie. There is no
> body, no boundary - NOTHING for the zombie to even conceive of to
> hypothesise about. They are a toaster, a rock.

Then there is no zombie art or zombie work or zombie anything.

Why focus on science?

> >> We have to admit to this ignorance and accept that we don't know
> >> something
> >> fundamental about the universe. BTW this means no magic, no ESP, no
> >> "dualism" - just basic physics an explanatory mechanism that is right in
> >> front of us that our 'received view' finds invisible.
> >
> > Errr, yes. Or our brains don't access the external world directly.
>
> That is your preconception, not mine.

> It's not a preconception. There just isn't any evidence of
clairvoyance or ESP.

>  Try and imagine the ways in which
> you would have to think to make sense of phenomenality. Here's one:

> That there is no such thing as 'space' or 'things' or 'distance' at all.
> That we are all actually in the same place. You can do this and not
> violate any "laws of nature" at all, and it makes phenomenality easy -
> predictable in brain material the fact that it predicts itself, when
> nothing else has... now 

Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

Everything in this we've been through already. All my answers are already in.


>
>
> Colin Geoffrey Hales wrote:
>> >> Colin
>> >> I'm not talking about invisibility of within a perceptual field. That
>> is
>> >> an invisibility humans can deal with to some extent using
>> instruments.
>> >> We
>> >> inherit the limits of that process, but at least we have something
>> >> presented to us from the outside world. The invisibility I speak of
>> is
>> >> the
>> >> invisibility of novel behaviour in the natural world within a
>> perceptual
>> >> field.
>> >
>> >
>> > To an entity without a phenomenal field, novel
>> > behaviour will be phenomenally invisible. Everything
>> > will be phenomenally invisible. That doesn't
>> > mean they won't be able have non-phenomenal
>> > access to events. Including novdl ones.
>>
>> Then you will be at the mercy of the survivability of that situation.
>> If your reflex actions in that circumstance are OK you get to live.
>
> There is no special relationship between the novel and the phenomenal.
> Both new and old events are phenomenally visible
> to humans, and both are phenomenally invisible to zombies.
>
>
>
>> If the
>> novelty is a predator you've never encountered it'll look like whatever
>> your reflex action interpretation thinks it is...if the behaviour thus
>> selected is survivable you'll get to live. That's the non-phenomenal
>> world in a nutshell. I imagine some critters live like this: habitat bound.
>
>
> Likewise, there is no strong reason to suppose that there is no
> adaptation or learning in the absence of phenomena.
> Phenomenality itself is an adaptation that arose in a
> non-phenomenal world.
>
>
>
>
>> >> Brent:
>> >> Are you saying that a computer cannot have any pre-programmed rules
>> for
>> >> dealing with sensory inputs, or if it does it's not a zombie.
>> >>
>> >> Colin:
>> >> I would say that a computer can have any amount of pre-programmed
>> rules
>> >> for dealing with sensory inputs. Those rules are created by humans
>> and
>> >
>> > Yes.
>> >
>> >> grounded in the perceptual experiences of humans.
>> >
>> > Not necessarily. AI researchers try to generalise as much as possible.
>>
>> Yes, and they generalise according to their generalisation rules, which
>> are also grounded in human phenomenal consciousness.
>
>>  It is very hard to
>> imagine what happens to rule-making without phenomenality...but keep
>> trying... you'll get there...
>
>
> It's not for me to imagine, it's for you to explain.
>
>
> >
>






Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
> Le Dimanche 26 Novembre 2006 22:54, Colin Geoffrey Hales a écrit :
> 
>> What point is there in bothering with it. The philosophical zombie is
>> ASSUMED to be equivalent! This is failure before you even start! It's
>> wrong and it's proven wrong because there is a conclusively logically
>> and
>> empirically provable function that the zombie cannot possibly do without
>> phenomenality: SCIENCE. The philosophical zombie would have to know
>> everything a-priori, which makes science meaningless. There is no
>> novelty
>> to a philosophical zombie. It would have to anticipate all forms of
>> randomness or chaotic behaviour NUTS.
>
> But that's exactly what all the argument is about!! Either identical
> functional behavior entails consciousness, or there is some magical
> property needed in addition to identical functional behavior to entail
> consciousness.
>
>> This is failure before you even start!
>
> But the point is to assume this "nonsense" and draw a "conclusion", to
> see where it leads. Why imagine a "possible" zombie which is functionally
> identical if there weren't any dualistic view in the first place! Only in
> a dualistic framework is it possible to imagine something functionally
> equivalent to a human yet lacking consciousness; the other way is that
> functional equivalence *requires* consciousness (you can't have
> functional equivalence without consciousness).
>
>> This is failure before you even start!
>
> That's what you're doing... you haven't proved that a zombie can't do
> science, because the "zombie" point is not about what they can do or not;
> it is the fact that either acting like we act (the human way) necessarily
> entails having consciousness, or it does not (meaning that there exists an
> extra property beyond behavior, an extra thing undetectable from
> seeing/living/speaking/... with the "zombie", that gives rise to
> consciousness).
>
> You haven't proved that a zombie can't do science because you state it at
> the start of the argument. The argument should be whether or not it is
> possible to have a *complete* *functional* (human) replica yet lacking
> consciousness.
>
> Quentin
>

Scientist_A does science.

Scientist_A closes his eyes and finds the ability to do science radically
altered.

Continue the process and you eliminate all scientific behaviour.

The failure of scientific behaviour correlates perfectly with the lack of
phenomenal consciousness.

Empirical fact:

"Human scientists have phenomenal consciousness"

also
"Phenomenal consciousness is the source of all our scientific evidence"

ergo

"Phenomenal consciousness exists and is sufficient and necessary for human
scientific behaviour"

No need to mention zombies, sorry I ever did.
No more times round the loop, thanks.

Colin Hales







Re: UDA revisited

2006-11-26 Thread Brent Meeker

Colin Geoffrey Hales wrote:
>> Colin Geoffrey Hales wrote:
 But you have no way to know whether phenomenal scenes are created by a
 particular computer/robot/program or not because it's just mystery
 property defined as whatever creates phenomenal scenes.  You're going
 around in circles.  At some point you need to anchor your theory to an
 operational definition.
>>> OK. There is a proven mystery called the hard problem. Documented to
>>> death and beyond.
>> It is discussed in documents - but it is not "documented" and it is not
>> proven.
> 
> It's enshrined in encyclopedias! Yes, it's a problem. We don't know. It was
> #2 in "big questions" in Science magazine last year.
> 
>> It is predicted (by Bruno to take a nearby example) that a
>> physical system that replicates the functions of a human (or dog) brain at
>> the level of neural activity, and receives the same inputs, will
>> implement phenomenal consciousness.
> 
> Then the proposition should be able to say exactly where, why and how. It
> can't, it hasn't.

Where? In the brain.  Science doesn't usually answer "why" questions except 
in the general sense of evolutionary adaptation.  How? We don't know exactly.  
But having an unanswered question doesn't constitute a deep mystery that 
demands new physics.  

> 
>>> is that the physics (rule set) of appearances and the physics (rule
>>> set) of the universe capable of generating appearances are not the same
>>> rule set! That the universe is NOT made of its appearance, it's made of
>>> something _with_ an appearance that is capable of making an appearance
>>> generator.
>> It is a commonplace that the ontology of physics may be mistaken (that's
>> how science differs from religion) and hence one can never be sure that
>> his theory refers to what's really real - but that's the best bet.
> 
> Yes but in order that you be mistaken you have to be aware you have made a
> mistake, 

Do you ever read what you write?  That sounds like something George W. Bush 
believes.

> which means admitting you have missed something. The existence of
> an apparently unsolvable problem... isn't that a case for that kind of
> behaviour? (see below to see what science doesn't know it doesn't know
> about itself)
> 
>>> That's it. Half the laws of physics are going neglected merely because
>>> we
>>> won't accept phenomenal consciousness ITSELF as evidence of anything.
>> We accept it as evidence of extremely complex neural activity - can you
>> demonstrate it is not?
> 
> You have missed the point again.
> 
> a) We demand CONTENTS OF phenomenal consciousness (that which is
> perceived) as all scientific evidence.
> 
> but
> 
> b) we do NOT accept phenomenal consciousness ITSELF, "perceiving", as
> scientific evidence of anything.

Sure we do.  We accept it as evidence of our evolutionary adaptation to 
survival on Earth.

> 
> Evidence (a) is impotent to explain (b). 

That's your assertion - but repeating it over an over doesn't add anything to 
its support.

Maybe some new physics is implied by consciousness (as in Penrose's suggestion) 
or a complete revolution (as in Bruno's UD), but it is far from proven.  I 
don't see even a suggestion from you - just repeated complaints that we're not 
recognizing the need for some new element and claims that you've proven we need 
one.

Brent Meeker





Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
> Colin Geoffrey Hales wrote:
>> <>
 No confusion at all. The zombie is behaving. 'Wide awake'
 in the sense that it is fully functional.
>>> Well, adaptive behaviour -- dealing with novelty --- is functioning.
>>
>> Yes - but I'm not talking about merely functioning. I am talking about
>> the
>> specialised function called scientific behaviour in respect of the
>> natural
>> world outside. The adaptive behaviour you speak of is adaptivity in
>> respect of adherence or otherwise to an internal rule set, not
>> adaptation
>> in respect of the natural world outside.
>>
>> BTW 'Adaptive' means change, change means novelty has occurred. If you
>> have no phenomenality you must already have a rule as to how to adapt
>> to all change - ergo you know everything already.
>
> So you deny that life has adapted through Darwinian evolution.
>
> Brent Meeker
>

Adaptation in KNOWLEDGE?
Adaptation in reflex behaviour?
Adaptation in the creature's hardware?
Adaptation in the capacity to learn?

All different.
Dead end. No more.










Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
>
> Colin Geoffrey Hales wrote:
>> >> Scientific behaviour demanded of the zombie condition is a clearly
>> >> identifiable behavioural benchmark where we can definitely claim that
>> >> phenomenality is necessary...see below...
>> >
>> > It is all too easy to consider scientific behaviour without
>> > phenomenality.
>> > Scientist looks at test-tube -- scientist makes note in lab
>> > journal...
>>
>> 'Looks' with what?
>
> Eyes, etc.
>
>> Scientist has no vision system.
>
> A Zombie scientist has a complete visual system except for whatever
> it is that causes phenomenality. Since we don't
> know what it is, we can imagine a zombie scientist as having
> a complete neural system for processing vision.
>
>> There are eyes and
>> optic chiasm, LGN and all that. But no visual scene.
>
>
>> The scientist is
>> blind.
>
> The zombie scientist is a functional duplicate. The zombie scientist
> will behave as though it sees. It will also behave the same in novel
> situations -- or it would not be  a functional duplicate.

Oh god, here we go again. I have to comply with the strictures of a
philosophical zombie or I'm not saying anything. I wish I'd never
mentioned the damned word.









Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
> What the zombie argument says (and I repeat it again) is that you SHOULD
> (if you are an honest rational person) accept ONE (and only
> one, as they are contradictory propositions) of the following propositions:
>
> 1) Consciousness is not tied to a given behavior nor to a given physical
> attribute; replicating these does not give consciousness. (This permits the
> existence of so-called 'zombie' beings). Also, the "special" attribute(s)
> that discriminate conscious from non-conscious beings are in no way
> emulable/simulable/replicable/copiable (if it was, it would not be
> dualistic).
>
> 2) Zombies are IMPOSSIBLE (a nonsensical proposition); if you
> do/construct/create a functionally identical being, it WILL
> be conscious. (It is not possible that it acts as if it were conscious
> without really being conscious.)
>
> Quentin
>

Your logic applies to a philosophical zombie, consideration of which has
forced you to make a choice between two options when there are several
others out here in the real world of scientists. Choosing one or the other
will prove nothing in my world.


I choose neither/both as follows:

Consciousness (PC) is dependent on certain very specific physics (the
behaviour of brain material) being present. The fact that you do not know
what they are entitles you to no rights to make assumptions about what
"replicates/models it". Any attempt to do so is an assumption we know what
does it. We don't.

Functional equivalence is impossible, so assuming a functional equivalent
is conscious is meaningless.

The zombie defined as a philosophical zombie is impossible. That
impossibility does not make it conscious. It makes it impossible.
-

Now you are going to label me a dualist.

I am a radical dual aspect monist. The fact that you can't see how is your
problem, not mine... which is that you have failed to recognise a lack of
knowledge of physics. Rather, you assume you have the entire explanatory
framework 100% complete. Existing tools do it all. That assumption is
challenged by the very existence of the 'hard problem'.

"To he who only has a hammer all the world's problems look like nails"

Colin Hales










Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
> Colin Geoffrey Hales wrote:
>>> But you have no way to know whether phenomenal scenes are created by a
>>> particular computer/robot/program or not because it's just mystery
>>> property defined as whatever creates phenomenal scenes.  You're going
>>> around in circles.  At some point you need to anchor your theory to an
>>> operational definition.
>>
>> OK. There is a proven mystery called the hard problem. Documented to
>> death and beyond.
>
> It is discussed in documents - but it is not "documented" and it is not
> proven.

It's enshrined in encyclopedias! Yes, it's a problem. We don't know. It was
#2 in "big questions" in Science magazine last year.

> It is predicted (by Bruno to take a nearby example) that a
> physical system that replicates the functions of a human (or dog) brain at
> the level of neural activity, and receives the same inputs, will
> implement phenomenal consciousness.

Then the proposition should be able to say exactly where, why and how. It
can't, it hasn't.

>> is that the physics (rule set) of appearances and the physics (rule
>> set) of the universe capable of generating appearances are not the same
>> rule set! That the universe is NOT made of its appearance, it's made of
>> something _with_ an appearance that is capable of making an appearance
>> generator.
>
> It is a commonplace that the ontology of physics may be mistaken (that's
> how science differs from religion) and hence one can never be sure that
> his theory refers to what's really real - but that's the best bet.

Yes but in order that you be mistaken you have to be aware you have made a
mistake, which means admitting you have missed something. The existence of
an apparently unsolvable problem... isn't that a case for that kind of
behaviour? (see below to see what science doesn't know it doesn't know
about itself)

>
>>
>> That's it. Half the laws of physics are going neglected merely because
>> we
>> won't accept phenomenal consciousness ITSELF as evidence of anything.
>
> We accept it as evidence of extremely complex neural activity - can you
> demonstrate it is not?

You have missed the point again.

a) We demand CONTENTS OF phenomenal consciousness (that which is
perceived) as all scientific evidence.

but

b) we do NOT accept phenomenal consciousness ITSELF, "perceiving", as
scientific evidence of anything.

Evidence (a) is impotent to explain (b). Empirical fact - 2500 years of total
failure. So, why not allow ourselves the luxury of exploring candidate
physics of underlying realities that appear to provide phenomenal
consciousness in the way that we have? Indeed more than that... such that
it also makes the universe look like it does when we do science on it
using it = (a)? A very tight constraint. Phenomenality is the evidence
source for 2 sets of descriptions, not one - both equally empirically
supported.

If we accepted (b) as evidence we'd be doing this already. We don't. We're
missing half the picture.

Colin Hales








Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>>
>> You are a zombie. What is it about sensory data that suggests an
>> external world?
>
> What is it about sensory data that suggests an external world to
> human?

Nothing. That's the point. That's why we incorporate the usage of natural
world properties to contextualise it in the external world. Called
phenomenal consciousness... that makes us not a zombie.

>
> Well, of course, we have a phenomenal view. Bu there is no informtion
> in the phenomenal display that was not first in the pre-phenomenal
> sensory data.

Yes there is. Mountains of it. It's just that the mechanism and the need
for it is not obvious to you. Some aspects of the external world must be
recruited to some extent in the production of the visual field, for
example. None of the real spatial relative location qualities, for
example, are inherent in the photons hitting the retina. Same with the
spatial nature of a sound field. That data is added through the mechanisms
for generation of phenomenality.

>
>> The science you can do is the science of zombie sense data, not an
>> external world.
>
> What does "of" mean in that sentence? Human science
> is based on human phenomenality which is based on pre-phenomenal
> sense data, and contains nothing beyond it informationally.

No, science is NOT done on pre-phenomenal sense data. It is done on the
phenomenal scene. This is physiological fact. Close your eyes and see how
much science you can do.

I don't seem to be getting this obvious simple thing past the pre-judgements.

>
> Humans unconsciously make guesses about the causal origins
> of their sense-data in order to construct the phenomenal
> view, which is then subjected to further educated guesswork
> as part of the scientific process (which make contradict the
> original guesswork, as in the detection of illusions)

No, they unconsciously generate a phenomenal field and then make judgements
from it. Again close your eyes and explore what effect it has on your
judgements. Hard-coded a-priori reflex systems such as those that make the
hand-eye reflex work in blindsight are not science and exist nowhere else
except in reflex behaviour.

>
>> Your hypotheses about an external world would be treated
>> as wild metaphysics by your zombie friends
>
> Unless they are doing the same thing. why shouldn't
> they be? It is function/behaviour afer all. Zombies
> are suppposed to lack phenomenality, not function.
>

You are stuck on the philosophical zombie! Ditch it! Not what we are
talking about. The philosophical zombie is an oxymoron.

>
>
>> (none of which you cen ever be
>> aware of, for they are in this external world..., so there's another
>> problem :-) Very tricky stuff, this.
>> The only science you can do is "I hypothesise that when I activate this
>> nerve, that sense nerve and this one do " You then publish in
>> nature
>> and collect your prize. (Except the external world this assumes is not
>> there, from your perspective... life is grim for the zombie)
>
> Assuming, for some unexplained reasons, that zombies cannot
> hypothesise about an external world without phenomena.

Again you are projecting your experiences onto the zombie. There is no
body, no boundary - NOTHING for the zombie to even conceive of to
hypothesise about. They are a toaster, a rock.

>
>> If I am to do more I must have a 'learning rule'. Who tells me the
>> learning rule?
>
> The only thing a zombie lacks, by hypothesis, is phenomenality.
> Since a "learning rule" is not a quale, they presumably have them.

>
>> This is a rule of interpretation. That requires context.
>> Where does the context come from? There is none. That is the situation
>> of
>> the zombie.
>
>
>
>
>> 
>> >> ..but..
>> >> The sense data is separate and exquisitely ambiguous and we do
>> >> not look for sense data to verify scientific observations!
>> >> We look for perceptual/phenomenal data. Experiences.
>> >> Maybe this is yet another terminological issue. Sensing
>> >> is not perception.
>> >
>> > If the perception is less ambiguous that the sense data,
>> > that is a false certainty.
>>
>> Less ambiguous means more information content. More discrimination. The
>> brain accesses the external world directly, not only via sensing.
>
> How?
>
>> A
>> mystery of non-local access = "hard problem"  = we don't know
>> everything.
>
> The hard problem is about how phenomenality arises.
> You seem to have assumed that there is some kind of
> clairvoyance going on as well. But that is idiosyncratic.

No. No. No. I am saying that we do not know everything! That is all. You
are constantly trying to make a solution fit your knowledge in the face
of a problem which everyone agrees remains. So it means we do not know
everything.


>
>> We have to admit to this ignorance and accept that we don't know
>> something
>> fundamental about the universe. BTW this means no magic, no ESP, no
>> "dualism" - just basic physics an explanatory mechanism that is right in
>> front of us that our 'received view' finds i

Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>>
>> Absolutely! But the humans have phenomenal consciousness in lieu of ESP,
>> which the zombies do not.
>
> PC doesn't magically solve the problem. It just involves a more
> sophisticated form of guesswork. It can be fooled.

We've been here before and I'll say it again if I have to:

Yes! It can be fooled. Yes! It can be wrong. Yes! It can be pathologically
affected. Nevertheless without it we are unaware of anything and we could
not do science on novelty in the world outside. The act of doing science
proves we have phenomenal consciousness, and its third-person verification
proves that whatever reality is, it's the same for us all.

>
>> To bench test "a human" I could not merely
>> replicate sensoiry feeds. I'd have to replicate the factory!
>
> As in brain-in-vat scenarios. Do you have a way of showing
> that BIV would be able to detect its status?

I think the BIV is another oxymoron like the philosophical zombie. It
assumes that the distal processes originating the causality that causes the
impinging sense data (from the external/distal world) are not involved at
all in the internal scene generation. An assumption I do not make.

I would predict that the scenes related to the 'phantom' body might work
because there are (presumably) the original internal (brain-based) body
maps that can substitute for the lack of the actual body. But the scenes
related to the 'phantom external world' I would predict wouldn't work. So
the basic assumption of BIV I would see as flawed. It assumes that all
there is to the scene generation is what there is at the boundary where
the sense measurement occurs.

Virtual reality works, I think, because in the end, actual photons fly at
you from outside. Actual phonons impinge your ears and so forth.

>
>> The human is
>> connected to the external world (as mysterious as that may be and it's
>> not
>> ESP!). The zombie isn't, so faking it is easy.
>
> No. They both have exactly the same causal connections. The zombie's
> lack of phenomenality is the *only* difference. By definition.
>
>
> And every nerve that a human has is a sensory feed. You just have to
> feed data into all of them to fool PC. As in a BIV scenario.

See above

>>
>> Phenomenal scenes can combine to produce masterful, amazing
>> discriminations. But how does the machine, without being
>> told already by a
>> human, know one from the other?
>
> How do humans know without being told by God?

You are once again assuming that existing scientific knowledge is 100%
equipped. Then, when it fails to have anything to say about phenomenality,
you invoke god, the Berkeleyan informant.

How about a new strategy: we don't actually know everything. The universe
seems to quite naturally deliver phenomenality. This is your problem, not
its problem.

>
>> Having done that how can it combine and
>> contextualise that joint knowledge? You have to tell it how to learn.
>> Again a-priori knowledge ...
>
> Where did we get our apriori knowledge from? If it wasn't
> a gift from God, it must have been a natural process.

Yes. Now how might that be? What sort of universe could do that?
This is where I've been. Go explore.

>
> (And what has this to do with zombies? Zombies
> lack phenomenality, not apriori knowledge).

They lack the a-priori knowledge that is delivered in the form of
phenomenality, from which all other knowledge is derived. The a-priori
knowledge (say in the baby zombie) is all pre-programmed reflex -
unconscious internal processes all about the self - not the external
world...except for bawling...another reflex.

All of which is irrelevant to my main contention which is about science
and exquisite novelty.

>>
>> You're talking about cross-correlating sensations, not sensory
>> measurement. The human as an extra bit of physics in the
>> generation of the
>> phenomenal scenes which allows such contextualisations.
>
> Why does it need new physics? Is that something you
> are assuming or something you are proving?

I am conclusively proving that science, scientists and novel technology
are literally scientific proof that phenomenality is a real, natural
process in need of explanation. The whole world admits to the 'hard
problem'. For 2500 years!

The new physics is something I am proving is necessarily there to be
found. Not what it is, but merely that a new way of thinking is needed.
It is the permission we need to scientifically explore the underlying
reality of the universe.

That is what this is saying. Phenomenality is evidence of something causal
of it. That causality is NOT that depicted by the appearances it delivers
or we'd already predict it!

Our total inability to predict it and total dependence on it for
scientific evidence is proof that allowing yourself to explore universes
causal of phenomenality that are also causal of atoms and scientists is the
new physics rule-set to find - and it is NOT the physics rule-set
delivered by using the appearances thus delivered. The two are intimately
related.

Re: UDA revisited

2006-11-26 Thread Quentin Anciaux

Le Dimanche 26 Novembre 2006 22:54, Colin Geoffrey Hales a écrit :

> What point is there in bothering with it. The philosophical zombie is
> ASSUMED to be equivalent! This is failure before you even start! It's
> wrong and it's proven wrong because there is a conclusively logically and
> empirically provable function that the zombie cannot possibly do without
> phenomenality: SCIENCE. The philosophical zombie would have to know
> everything a-priori, which makes science meaningless. There is no novelty
> to a philosophical zombie. It would have to anticipate all forms of
> randomness or chaotic behaviour NUTS.

But that's exactly what all the argument is about!! Either identical
functional behavior entails consciousness, or there is some magical
property needed in addition to identical functional behavior to entail
consciousness.

> This is failure before you even start!

But the point is to assume this "nonsense" and draw a "conclusion", to see
where it leads. Why imagine a "possible" zombie which is functionally
identical if there weren't any dualistic view in the first place! Only in a
dualistic framework is it possible to imagine something functionally
equivalent to a human yet lacking consciousness; the other way is that
functional equivalence *requires* consciousness (you can't have functional
equivalence without consciousness).

> This is failure before you even start!

That's what you're doing... you haven't proven that zombies can't do
science, because the "zombie" point is not about what they can do or not;
it is the fact that either acting like we act (the human way) necessarily
entails having consciousness or it does not (meaning that there exists an
extra property beyond behavior, an extra thing undetectable from
seeing/living/speaking/... with the "zombie", that gives rise to
consciousness).

You haven't proven that zombies can't do science, because you assert it at
the start of the argument. The argument should be whether or not it is
possible to have a *complete* *functional* (human) replica yet lacking
consciousness.

Quentin





Re: UDA revisited

2006-11-26 Thread Brent Meeker

Colin Geoffrey Hales wrote:
>>> No confusion at all. The zombie is behaving. 'Wide awake'
>>> in the sense that it is fully functional.
>> Well, adaptive behaviour -- dealing with novelty --- is functioning.
> 
> Yes - but I'm not talking about merely functioning. I am talking about the
> specialised function called scientific behaviour in respect of the natural
> world outside. The adaptive behaviour you speak of is adaptivity in
> respect of adherence or otherwise to an internal rule set, not adaptation
> in respect of the natural world outside.
> 
> BTW 'Adaptive' means change, change means novelty has occurred. If you
> have no phenomenality you must already have a rule as to how to adapt to
> all change - ergo you know everything already.

So you deny that life has adapted through Darwinian evolution.

Brent Meeker





Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>>
>> Except that in time, as people realise what I just said above, the
>> hypothesis has some empirical support: if the universe were made of
>> appearances, then when we opened up a cranium we'd see them. We don't.
>
> Or appearances don't appear to be appearances to a third party.
>

Precisely. Now ask yourself...
"What kind of universe could make that possible?"
It is not the kind of universe depicted by laws created using appearances.

>> > I do need some rules or knowledge to begin with if I
>> > am to get anywhere with interpreting sense data.
>>
>> You do NOT interpret sense data! In conscious activity
>> you interpret the phenomenal scene generated using the
>> sense data.
>
> But that is itself an interpretation for reasons you yourself have
> spelt out. Sensory pulse-trains don't have any  meaning in themselves.

An interpretation that is hard-coded into your biology a-priori. You do
not manufacture it from your own knowledge (unless you are hallucinating!);
your knowledge is a-posteriori.

>
>>  Habituated/unconscious
>> reflex behaviour with fixed rules uses sense data directly.
>
> Does that make it impossible to have
> adaptive responses to sense data?

Not at all. That adaptation is based on what rule, acquired how? Adaptation
is another rule assuming the meaning of all novelty. Where does that come
from? You're stuck in a loop assuming your knowledge is in the zombie.
Stop it!

>
>
>> Think about driving home on a well travelled route. You don't even know
>> how you got home. Yet if something unusual happened on the drive - ZAP -
>> phenomenality kicks in and phenomenal consciousness handles the
>> novelty.>
>
> Is that your only evidence for saying that it is impossible
> to cope with novelty without phenomenality?

I am claiming that the only way to find out the laws of nature is through
the capacity to experience the novelty in the natural world OUTSIDE the
scientist, not the novelty in the sensory data.

This is about science, not any old behaviour. The fact is that most
novelty can be handled by any old survivable rule. That rule is just a
behaviour rule, not a law of the natural world. The scientist needs to be
able to act 'as-if' a rule was operating OUTSIDE themselves in order that
testing happen.

>
>> > With living organisms, evolution provides this
>> > knowledge
>>
>> Evolution provided
>> a) a learning tool(brain) that knows how to learn from phenomenal
>>consciousness, which is an adaptive presentation of real
>>external world a-priori knowledge.
>> b) Certain simple reflex behaviours.
>>
>> > while with machines the designers provide it.
>>
>> Machine providers do not provide (a)
>
>
>> They only provide (b), which includes any adaptivity rules, which are
>> just
>> more rules.
>
> How do you know that (a) isn't "just" rules? What's the difference?

Yes rules in our DNA give us the capacity to create the scenes in a
repeatable way. Those are natural rules. (Not made BY us). The physics
that actually does it in response to the sensory data is a natural rule.
The physics that makes it an experience is another natural rule. All these
are natural rules.

You are assuming that rules are experienced, regardless of their form. You
are basing this assumption on your own belief (another assumption) that
we know everything there is to know about physics. You act in denial of
something you can prove to yourself exists with simple experiments.

You should be proving to me why we don't need phenomenal consciousness,
not the other way around.


>
> You seem to think there is an ontological gulf between (a) and (b). But
> that seems arbitrary.

Only under the assumptions mentioned above. These are assumptions I do not
make.

>>
>> Amazing but true. Trial and error. Hypothesis/Test in a brutal live or
>> die laboratory called The Earth. Notice that the process
>> selected for phenomenal consciousness early on
>
> But that slides past the point. The development of phenomenal
> consciousness was an adaptation that occurred without PC.
>
> Hence, PC is not necessary for all adaptation.

I am not claiming that. I am claiming it is necessary for scientific
behaviour. It can be optional in an artifact or animal. The constraints of
that situation merely need to be consistent with survival. The fact that
most animals have it is proof of its efficacy as a knowledge source, not a
disproof of my claim.

Read the rest of my paragraph before you blurt.

>
>> which I predict will eventually be
>> proven to exist in nearly all animal cellular life (vertebrate and
>> invertebrate and even single celled organisms) to some extent. Maybe
>> even
>> in some plant life.
>>
>> 'Technology' is a loaded word...I suppose I mean 'human made'
>> technology.
>> Notice that chairs and digital watches did not evolve independently of
>> humans. Nor did science. Novel technology could be re-termed 'non-DNA
>> based' technology, I suppose. A bird flies. So do planes. One is DNA
>> based.
>> The other not DNA based, b

Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>> No confusion at all. The zombie is behaving. 'Wide awake'
>> in the sense that it is fully functional.
>
> Well, adaptive behaviour -- dealing with novelty --- is functioning.

Yes - but I'm not talking about merely functioning. I am talking about the
specialised function called scientific behaviour in respect of the natural
world outside. The adaptive behaviour you speak of is adaptivity in
respect of adherence or otherwise to an internal rule set, not adaptation
in respect of the natural world outside.

BTW 'Adaptive' means change, change means novelty has occurred. If you
have no phenomenality you must already have a rule as to how to adapt to
all change - ergo you know everything already.

>
>> Doing stuff. I said it has the _internal
>> life_ of a dreamless sleep, not that it was asleep. This
>> means that the life you 'experience' that is the state of
>> a dreamless sleep - the nothing
>> of it - that is the entire life of the awake zombie.
>> Want to partially 'zombie' yourself? Close your eyes.
>> Block your ears. I know seeing black/hearing nothing is not
>> blindness/deafness, but you get the idea.
>
> That isn't zombification. A zombie is not an entity
> which cannot see at all. A zombie will stop its car when
> the lights turn red. It is just that red does not "seem like"
> anything to the zombie.
> A zombie has a kind of efficient blindsight.

I said 'partially', so you'd get the idea of it. It seems you are.


>
>> Scientific behaviour demanded of the zombie condition
>> is a clearly identifiable behavioural benchmark where
>> we can definitely claim that phenomenality is necessary
>> ...see below...
>> The reason it is invisible is because there is no
>> phenomenal consciousness. The zombie has only sensory data
>> to use to do science. There are an infinite number of ways
>> that same
>> >> sensory data could arrive from an infinity of external
>> >> natural world situations. The sensory data is ambiguous
>> >
>> > That doesn't follow. The Zombie can produce different
>> > responses on the basis of physical differences in its input,
>> > just as a machine can.
>>
>> I spent tens of thousands of hours designing, building,
>> benchtesting and commissioning zombies. On the benchtop I
>> have pretended to be their environment and they had no 'awareness'
>> they weren't in their real environment. It's what makes bench
>>  testing possible. The universe of the zombies was the
>> universe of my programming. The zombies could not tell if
>> they were in the factory or on the benchtop. That's why I
>> can empathise so well with zombie life. I have been
>> literally swatted by zombies (robot/cranes and other machines)
>> like I wasn't there... scares the hell
>> out of you! Some even had 'vision systems' but were still
>> blind. So... yes, the zombie can 'behave'. What I am claiming
>> is they cannot do _science_ i.e. they cannot behave
>> scientifically. This is a very specific claim, not a general
>> claim.
>
> I see nothing to support it.

I have already shown you conclusive empirical evidence you can
demonstrate on yourself. Perhaps the 'zombie room' will do it.
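
A minimal sketch of the benchtop point, in Python. The controller, its
rule-set and the feeds are invented for illustration, not any real
machine's:

# A toy 'zombie' controller: a fixed rule-set mapping sensor readings
# to actuator commands. The readings are its entire universe.
def controller(temperature, limit_switch):
    if limit_switch:
        return "stop"
    return "heat" if temperature < 50.0 else "idle"

# Bench test: the engineer fakes the environment by supplying the feeds.
bench_feed   = [(42.0, False), (55.0, False), (48.0, True)]
# Factory: real sensors happen to deliver the same values.
factory_feed = [(42.0, False), (55.0, False), (48.0, True)]

# Identical outputs: nothing in the sensory data itself says whether a
# benchtop or a factory produced it.
assert [controller(*s) for s in bench_feed] == \
       [controller(*s) for s in factory_feed]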

>
>> >
>> >> - it's all the same - action potential pulse trains traveling
>> >> from sensors to brain.
>> >
>> > No, it's not all the same. It's coded in a very complex way. It's like
>> > saying the information in your computer is "all the same -- it's all
>> > ones and zeros".
>>
>> Yes, you got it - all coded. I am talking about action potential pulse
>> trains. They are all the same general class. Burst mode/continuous mode,
>> all the same basic voltage waveform, overshoot, refractory period... LTP,
>> LTD, afterhyperpolarisation... all the same class for sight, sound,
>> taste, imagination, touch, thirst, orgasm etc etc... coded messages
>> travelling all the way from the periphery and into the brain. They are
>> all the same... and...
>
>
> They need to be interpreted and contextualised against
> other information. How does that lead to the conclusion
> that zombies can't do science?
>
They can do science on their sensory data only. They have no a-priori
method for applying any interpretation as to its context in the natural
world that originated the sensory feeds. If you like: they can do the
science of their boundary. Even that is a stretch - for a zombie has no
idea it has a body or any boundary.

They cannot contextualise NOVELTY with respect to the external world,
merely against the non-phenomenal rule-set they have concocted. They cannot
do science on the natural world - but they can do science on zombie sense
data and internal rule-sets, whose correspondence with any sort of external
world is a complete mystery. If any of the rules they concoct happen to
correspond to a natural world law it'd be an accident and they'd never
know it.
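
A minimal sketch of the ambiguity, in Python. The rate-code encoder is a
made-up toy, not a model of real neurons:

# Two different world events, one pulse-train class.
def encode(intensity):
    # toy rate code: stronger stimulus -> more action potentials per window
    return [1] * int(intensity * 10)

photon_burst = encode(0.8)   # light striking a retinal cone
firm_touch   = encode(0.8)   # pressure on a fingertip

# The trains are identical: the causal origin is not in the signal.
print(photon_burst == firm_touch)   # True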


>
>> None of it says anything about WHY the input did what it did. The
causality outside the zombie is MISSING from these signals.
>
> It's missing from the individual signals. But we must
be able to build up a picture

Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> >> Colin
> >> I'm not talking about invisibility within a perceptual field. That is
> >> an invisibility humans can deal with to some extent using instruments.
> >> We
> >> inherit the limits of that process, but at least we have something
> >> presented to us from the outside world. The invisibility I speak of is
> >> the
> >> invisibility of novel behaviour in the natural world within a perceptual
> >> field.
> >
> >
> > To an entity without a phenomenal field, novel
> > behaviour will be phenomenally invisible. Everything
> > will be phenomenally invisible. That doesn't
> > mean they won't be able to have non-phenomenal
> > access to events. Including novel ones.
>
> Then you will be at the mercy of the survivability of that situation. If
> your reflex actions in that circumstance are OK you get to live.

There is no special relationship between the novel and the phenomenal.
Both new and old events are phenomenally visible
to humans, and both are phenomenally invisible to zombies.



> If the
> novelty is a predator you've never encountered it'll look like whatever
> your reflex action interpretation thinks it is...if the behaviour thus
> selected is survivable you'll get to live. That's the non-phenomenal world
> in a nutshell. I imagine some critters live like this: habitat bound.


Likewise, there is no strong reason to suppose that there is no
adaptation or learning in the absence of phenomena.
Phenomenality itself is an adaptation that arose in a
non-phenomenal world.




> >> Brent:
> >> Are you saying that a computer cannot have any pre-programmed rules for
> >> dealing with sensory inputs, or if it does it's not a zombie.
> >>
> >> Colin:
> >> I would say that a computer can have any amount of pre-programmed rules
> >> for dealing with sensory inputs. Those rules are created by humans and
> >
> > Yes.
> >
> >> grounded in the perceptual experiences of humans.
> >
> Not necessarily. AI researchers try to generalise as much as possible.
>
> Yes, and they generalise according to their generalisation rules, which
> are also grounded in human phenomenal consciousness.

>  It is very hard to
> imagine what happens to rule-making without phenomenality...but keep
> trying... you'll get there...


It's not for me to imagine, it's for you to explain.





Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> > a) Darwinian evolution b) genetic learning algorithm.
>
> None of which have any innate capacity to launch or generate phenomenal
> consciousness and BOTH of which have to be installed by humans a-priori.

The actual real process of evolution does have the capacity
to install consciousness because it did in humans.





Re: UDA revisited

2006-11-26 Thread 1Z


Quentin Anciaux wrote:
> Hi,
> On Sunday 26 November 2006 12:43, Colin Geoffrey Hales wrote:
> > Note: Scientists, by definition:
> > a) are doing science on the world external to them
> > b) inhabit a universe of exquisite novelty
> >...or there'd be no need for them!
> Please note: Zombies by definition:
> a) are functionally equivalent to what you called 'scientists'.
> b) are indistinguishable from what you called 'scientists', because if they
> were distinguishable there would be a property that would easily discriminate them.
>
> The zombie point is, I'll say one more time, a point to show the nonsense in the
> dualistic view... what it means is that either a property of the natural/real
> world is not copiable/replicable and copying all physical/computational
> properties is not enough, there still being left the PC (what you call phenomenal
> consciousness, which could be shortened to consciousness) which discriminates the
> zombie from the scientist.
>
> Taking your point, it is this:
>
> Definition:
>
> Zombie can't do science because they don't have PC.
> Scientist can do science because they have PC.
>
> Conclusion:
>
> Zombie can't do science because they don't have PC.
> So zombie can't be a scientist.

I think his premiss is:
PC is a function.

From that alone it follows that you can't have
zombies: zombies must be functional duplicates, but, in lacking
PC, they would lack a function. So they would both be functionally
identical and functionally different -- reductio ad absurdum.


But the premiss is arbitrary.
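
Spelt out as a toy sketch (Python; functional profiles modelled naively
as sets, purely for illustration):

# Premiss: 'PC is a function', so PC sits inside the functional profile.
human  = {"vision", "learning", "science", "PC"}
zombie = human - {"PC"}          # a zombie lacks PC, by definition

# A zombie must be functionally identical to the human...
print(zombie == human)           # False: functionally different. Contradiction.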






Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> >> Scientific behaviour demanded of the zombie condition is a clearly
> >> identifiable behavioural benchmark where we can definitely claim that
> >> phenomenality is necessary...see below...
> >
> > It is all to easy to consider scientific behaviour without
> > phenomenality.
> > Scientist looks at test-tube -- scientist makes note in lab
> > journal...
>
> 'Looks' with what?

Eyes, etc.

> Scientist has no vision system.

A Zombie scientist has a complete visual system except for whatever
it is that causes phenomenality. Since we don't
know what it is, we can imagine a zombie scientist as having
a complete neural system for processing vision.

> There are eyes and
> optic chiasm, LGN and all that. But no visual scene.


> The scientist is
> blind.

The zombie scientist is a functional duplicate. The zombie scientist
will behave as though it sees. It will also behave the same in novel
situations -- or it would not be  a functional duplicate.


> >> I spent tens of thousands of hours designing, building, benchtesting and
> >> commissioning zombies. On the benchtop I have pretended to be their
> >> environment and they had no 'awareness' they weren't in their real
> >> environment. It's what makes bench testing possible. The universe of the
> >> zombies was the universe of my programming. The zombies could not tell
> >> if
> >> they were in the factory or on the benchtop.
> >
> > According to solipsists, humans can't either. You seem
> > to think PC somehow tells you reality is really real,
> > but you haven't shown it. Counterargument: we have
> > PC during dreaming, but dreams aren't real.
>
> I say nothing about the 'really realness' of 'reality'. It's irrelevant.
Couldn't care less. _Whatever it is_, it's relentlessly consistent to all
of us in regular ways sufficient to characterise it scientifically.
> Our
> visual phenomenal scene depicts it well enough to do science.

So there are no blind scientists?

>Without that
> visual depiction we can't do science.

Unless we find another way.

But a functional duplicate is a functional duplicate.

> Yes we have internal imagery. Indeed it is an example supporting what I am
> saying! The scenes and the sensing are 2 separate things. You can have one
> without the other. You can hallucinate - internal imagery overrides that
> of the sensing stimulus. Yes! That is the point. It is a representation
> and we cannot do science without it.

Unless we find another way. Maybe the zombies could find one.

> >> None of it says anything about WHY the input did what it did. The
> >> causality outside the zombie is MISSING from these signals.
> >
> > The causality outside the human is missing from the signals.
> > A photon is a photon, it doesn't come with a biography.
>
> Yep. That's the point. How does the brain make sense of it? By making use
of some property of the natural world which makes a phenomenal scene.

The process by which we infer the real-world objects that
caused our sense-data can be treated in information
processing terms, for all that it is presented to us
phenomenally. You haven't demonstrated that
unplugging phenomenality stymies the whole process.

> >>  They have no
> >> intrinsic sensation to them either. The only useful information is the
> >> body knows implicitly where they came from..which still is not enough
> >> because:
> >>
> >> Try swapping the touch nerves for 2 fingers. You 'touch' with one and
> >> feel
> >> the touch happen on the other. The touch sensation is created as
> >> phenomenal consciousness in the brain using the measurement, not the
> >> signal measurement itself.
> >
> > The brain attaches meaning to signals according to the channel they
> come in on, hence phantom limb pain and so on. We still
> > don't need PC to explain that.
>
> Please see the recent post to Brent re pain and nociception. Pain IS
> phenomenal consciousness (a phenomenal scene).

Pain is presented phenomenally, but neurologists can
identify pain signals without being able to peek into
other people's qualia.

> How do you think the phantom
> limb gets there?  It's a brain/phenomenal representation.

Yes.

> It IS phenomenal
> consciousness.

Not all representations are phenomenal.

> Of a limb that isn't actually there.



> >> Now think about the touch..the same sensation of touch could have been
> >> generated by a feather or a cloth or another finger or a passing car.
> >> That
> >> context is what phenomenal consciousness provides.
> >
> > PC doesn't miraculously provide the true context. It can
> > be fooled by dreams and hallucination.
>
Yes it can misdirect, be wrong, be pathologically constituted. But at
least we have it. We could not survive without it. We could not do
science without it.

Unless we find another way. Most people move around using
their legs. But legless people can find other ways of moving.

> It situates us in an external world which we would
> otherwise find completely invisible.

Blind

Re: UDA revisited

2006-11-26 Thread Brent Meeker

1Z wrote:
> 
> Brent Meeker wrote:
> 
>> No, I think Colin has a point there.  Your phenomenal view adds a lot of 
>> assumptions to the sensory data in constructing an internal model of what 
>> you see.  These assumptions are hard-wired by evolution.  It is situations 
>> in which these assumptions are false that produce optical illusions.
> 
> It depends on what you mean by information. Our hardwiring allows us
> to make better-than-chance guesses about what is really out there.
> But it is not information *about* what is really out there -- it
> doesn't come from the external world in the way sensory data does.

Not in the way that sensory data does, but it comes from the external world via 
evolution.  I'd say it's information about what's out there just as much as the 
sensory data is.  Whether it's about what's *really* out there invites 
speculation about what's *really real*.  I'd agree that it provides our best 
guess at what's real.

Brent Meeker





Re: UDA revisited

2006-11-26 Thread Quentin Anciaux

Hi,
On Sunday 26 November 2006 12:43, Colin Geoffrey Hales wrote:
> Note: Scientists, by definition:
> a) are doing science on the world external to them
> b) inhabit a universe of exquisite novelty
>...or there'd be no need for them!
Please note: Zombies by definition:
a) are functionally equivalent to what you called 'scientists'.
b) are indistinguishable from what you called 'scientists', because if they
were distinguishable there would be a property that would easily discriminate them.

The zombie point is, I'll say one more time, a point to show the nonsense in the
dualistic view... what it means is that either a property of the natural/real
world is not copiable/replicable and copying all physical/computational
properties is not enough, there still being left the PC (what you call phenomenal
consciousness, which could be shortened to consciousness) which discriminates the
zombie from the scientist.

Taking your point, it is this:

Definition:

Zombie can't do science because they don't have PC.
Scientist can do science because they have PC.

Conclusion:

Zombie can't do science because they don't have PC.
So zombie can't be a scientist.

You assume in your definition that they can't (and also that a being
functionally identical to a human is possible without it/him/her having
consciousness).

What the zombie argument says (and I repeat it again) is that you SHOULD (if
you are an honest rational person) accept ONE (and only one, as they are
contradictory propositions) of the following propositions:

1) Consciousness is not tied to a given behavior nor to a given physical
attribute; replicating these does not give consciousness. (This permits the
existence of so-called 'zombie' beings.) Also, the "special" attribute(s)
that discriminate conscious from non-conscious beings are in no way
emulable/simulable/replicable/copiable (if they were, it would not be
dualistic).

2) Zombies are IMPOSSIBLE (a nonsensical proposition); if you
do/construct/create a functionally identical being, it WILL
be conscious. (It is not possible that it acts like it was conscious without
really being conscious.)

Quentin





Re: UDA revisited

2006-11-26 Thread 1Z


Brent Meeker wrote:

> No, I think Colin has a point there.  Your phenomenal view adds a lot of 
> assumptions to the sensory data in constructing an internal model of what you 
> see.  These assumptions are hard-wired by evolution.  It is situations in 
> which these assumptions are false that produce optical illusions.

It depends on what you mean by information. Our hardwiring allows us
to make better-than-chance guesses about what is really out there.
But it is not information *about* what is really out there -- it
doesn't come from the external world in the way sensory data does.
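
A minimal sketch of 'better-than-chance', in Python, assuming a made-up
world where 70% of ambiguous shapes really are predators. The numbers are
arbitrary, chosen only to show the effect of a hardwired prior:

import random
random.seed(1)

P_PREDATOR = 0.7   # base rate wired in by evolution, in this toy world

def world():
    return "predator" if random.random() < P_PREDATOR else "shadow"

def guess_by_chance():
    return random.choice(["predator", "shadow"])

def guess_by_hardwiring():
    return "predator"   # a prior, not information about *this* event

trials = [world() for _ in range(10000)]
print(sum(guess_by_chance() == t for t in trials) / len(trials))      # ~0.5
print(sum(guess_by_hardwiring() == t for t in trials) / len(trials))  # ~0.7

The hardwired guesser beats chance without ever receiving information
about the particular event - which is the distinction being drawn above.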





Re: UDA revisited

2006-11-26 Thread Brent Meeker

1Z wrote:
> 
> Colin Geoffrey Hales wrote:
>> Stathis,
...
 Whatever 'reality' is, it is regular/persistent,
 repeatable/stable enough to do science on it via
 our phenomenality and come
 up with laws that seem to characterise how it will appear
 to us in our phenomenality.
>>> You could say: my perceptions are
>>> regular/persistent/repeatable/stable enough to assume an
>>> external reality generating them and to do science on. And if
>>> a machine's central processor's perceptions are similarly
>>> regular/persistent/, repeatable/stable, it could also do
>>> science on them. The point is, neither I nor
>>> the machine has any magical knowledge of an external world.
>>> All we have is regularities in perceptions, which we assume
>>> to be originating from the external world because that's
>>> a good model which stands up no matter what we throw
>>> at it.
>> Oops. Maybe I spoke too soon! OK.
>> Consider... "...stable enough to assume an external reality..".
>>
>> You are a zombie. What is it about sensory data that suggests an external
>> world?
> 
> What is it about sensory data that suggests an external world to
> a human?
> 
> Well, of course, we have a phenomenal view. But there is no information
> in the phenomenal display that was not first in the pre-phenomenal
> sensory data.

No, I think Colin has a point there.  Your phenomenal view adds a lot of 
assumptions to the sensory data in constructing an internal model of what you 
see.  These assumptions are hard-wired by evolution.  It is situations in which 
these assumptions are false that produce optical illusions.

Brent Meeker




Re: UDA revisited

2006-11-26 Thread Brent Meeker

Colin Geoffrey Hales wrote:
>> But you have no way to know whether phenomenal scenes are created by a
>> particular computer/robot/program or not because it's just mystery
>> property defined as whatever creates phenomenal scenes.  You're going
>> around in circles.  At some point you need to anchor your theory to an
>> operational definition.
> 
> OK. There is a proven mystery called the hard problem. Documented to death
> and beyond. 

It is discussed in documents - but it is not "documented" and it is not proven.
It is predicted (by Bruno, to take a nearby example) that a physical system that
replicates the functions of a human (or dog) brain at the level of neural
activity, and receives the same inputs, will implement phenomenal consciousness.
This may be false, it may take a soul or spirit, but such certainly has not been
proven.

>Call it Physics X. It is the physics that _predicts_ (NOT
> DESCRIBES) phenomenal consciousness (PC). We have, through all my fiddling
> about with scientists, conclusive scientific evidence PC exists and is
> necessary for science.
> 
> So what next?
> 
> You say to yourself... "none of the existing laws of physics predict PC.
> Therefore my whole conception of how I understand the universe
> scientifically must be missing something fundamental. Absolutely NONE of
> what we know is part of it. What could that be?".
> 
> Then you let yourself have the freedom to explore that possibility. For in it
> is the answer which you seek.
> 
> The answer?
> 
> is that the physics (rule set) of appearances and the physics (rule
> set) of the universe capable of generating appearances are not the same
> rule set! That the universe is NOT made of its appearance, it's made of
> something _with_ an appearance that is capable of making an appearance
> generator.

It is a commonplace that the ontology of physics may be mistaken (that's how 
science differs from religion) and hence one can never be sure that his theory 
refers to what's really real - but that's the best bet.

> 
> That's it. Half the laws of physics are going neglected merely because we
> won't accept phenomenal consciousness ITSELF as evidence of anything.

We accept it as evidence of extremely complex neural activity - can you 
demonstrate it is not?

Brent Meeker




Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> >>
> >> soyes the zombie can 'behave'. What I am claiming is they
> >> cannot do _science_ i.e. they cannot behave scientifically.
> >> This is a very specific claim, not a general claim.
> >
> > You're being unfair to the poor zombie robots. How could they
> > possibly tell if they were in the factory or on the benchtop
> > when the benchtop (presumably) exactly replicates the sensory
> > feeds they would receive in the factory?
> > Neither humans nor robots, zombie or otherwise, should be
> > expected to have ESP.
>
> Absolutely! But the humans have phenomenal consciousness in lieu of ESP,
> which the zombies do not.

PC doesn't magically solve the problem. It just involves a more
sophisticated form of guesswork. It can be fooled.

> To bench test "a human" I could not merely
> replicate sensory feeds. I'd have to replicate the factory!

As in brain-in-vat scenarios. Do you have a way of showing
that a BIV would be able to detect its status?

> The human is
> connected to the external world (as mysterious as that may be and it's not
> ESP!). The zombie isn't, so faking it is easy.

No. They both have exactly the same causal connections. The zombie's
lack of phenomenality is the *only* difference. By definition.


And every nerve that a human has is a sensory feed. You just have to
feed data into all of them to fool PC. As in a BIV scenario.

> >
> >>
> >> Now think about the touch..the same sensation of touch could
> >> have been generated by a feather or a cloth or another finger
> >> or a passing car. That context is what phenomenal
> >> consciousness provides.
> >
> > But it is impossible to differentiate between different sources
> > of a sensation unless the different sources generate a different
> > sensation. If you close your eyes and the touch of a feather
> > and a cloth feel the same, you can't tell which it was.
> > If you open your eyes, you can tell a difference because
> > the combined sensation (touch + vision) is different in the
> > two cases. A machine that has touch receptors alone might not
> > be able to distinguish between them, but a machine that has
> > touch + vision receptors would be able to.
> >
>
> Phenomenal scenes can combine to produce masterful, amazing
> discriminations. But how does the machine, without being told already by a
> human, know one from the other?

How do humans know without being told by God?

> Having done that how can it combine and
> contextualise that joint knowledge? You have to tell it how to learn.
> Again a-priori knowledge ...

Where did we get our apriori knowledge from? If it wasn't
a gift from God, it must have been a natural process.

(And what has this to do with zombies? Zombies
lack phenomenality, not apriori knowledge).

> >>
> >> Yes but how is it to do anything to contextualise the input other than
> >> correlate it with other signals? (none of which, in themselves, generate
> >> any phenomenal consciousness, they trigger it downstream in the
> >> cranium/cortex).
> >
> > That's all we ever do: correlate one type of signal with another.
> > The correlations get called various things such
> > as  "red", "circular", "salty", or perhaps "a weird taste"
> > I have never encountered before, somewhere between salty
> > and sweet, which also spills over into a sparkly purple
> > visual sensation".
>
See the above. Synesthetes correlate in weird ways. Sharp cheese and purple
5. That is what humans do naturally. Associative memory. Sometimes it can
go wrong (or very right!). Words can taste bitter.



> >> Put it this way: a 'red photon' arrives and hits a retina cone and
> >> isomerises a protein, causing a cascade that results in an action
> >> potential pulse train. That photon could have come from alpha-centuri,
> >> bounced off a dog collar or come from a disco light. The receptor has no
> >> clue. Isomerisation of a protein has nothing to do with 'seeing'. In the
> >> human the perception (sensation) of a red photon happens in the visual
> >> cortex as an experience of redness and is 'projected' mentally into the
> >> phenomenal scene. That way the human can tell where it came from. The
> >> mystery of how that happens is another story. That it happens and is
> >> necessary for science is what matters here.
> >
> > I don't think that's correct. It is impossible for a human to tell where
> > the photon came from if it makes no sensory difference.
> > That difference may have to involve other sensations, eg. if the
> > red sensation occurs simultaneously with a loud bang
> > it may have come from an explosion, while the same red sensation
> > associated with a 1 KHz tone may have come from a warning beacon.
>
> You're talking about cross-correlating sensations, not sensory
> measurement. The human has an extra bit of physics in the generation of the
> phenomenal scenes which allows such contextualisations.

Why does it need new physics? Is that something you
are assuming or something you are proving?



Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> Stathis,
> I am answering all the mail in time order. I can see below you are making
> some progress! This is cool.
>
> > Colin Hales writes:
> >> >> So, I have my zombie scientist and my human scientist and
> >> >> I ask them to do science on exquisite novelty. What happens?
> >> >> The novelty is invisible to the zombie, who has the internal
> >> >> life of a dreamless sleep. The reason it is invisible is
> >> >> because there is no phenomenal consciousness. The zombie
> >> >> has only sensory data to use to do science. There are an
> >> >> infinite number of ways that same sensory data could arrive
> >> >> from an infinity of external natural world situations.
> >> >> The sensory data is ambiguous - it's all the same - action
> >> >> potential pulse trains traveling from sensors to brain.
>
> >> Stathis:
> >> > All I have to work on is sensory data also.
>
> >> No you don't! You have an entire separate set of
> >> perceptual/experiential fields constructed from sensory feeds.
> >> The fact of this is proven - think
> >> of hallucination. When the sensory data gets overridden
> >> by the internal imagery (schizophrenia). Sensing is NOT
> >> our perceptions. It is these latter phenomenal fields
> >> that you  consciously work from as a scientist. Not the
> >> sensory feeds.
> >> This seems to be a recurring misunderstanding or something
> >> people seem to be struggling with. It feels like its coming
> >> from your senses but it's all generated inside your head.
> >
> > OK, I'll revise my claim: all I have to work with is
> > perceptions which I assume are coming from sense data which
> > I assume is
> > coming from the real world impinging on my sense organs.
> > The same is true of a machine which receives environmental
> > input and processes it. At the processing stage, this is
> > the equivalent of perception. The processor assumes that
> > the information it is processing originates from sensors which
> > are responding to real world stimuli, but it has no way of
> > knowing if the data actually arose from spontaneous
> > or externally induced activity at any point from the sensors,
> > transducers, conductors, or components of the processor itself:
> > whether they are hallucinations, in fact. There might be some
> > clue that it is not a legitimate sensory feed, but if the
> > hallucination is perfect it is by definition
> > impossible to detect.
> >
>
> By George, you're getting it!
>
> >> Whatever 'reality' is, it is regular/persistent,
> >> repeatable/stable enough to do science on it via
> >> our phenomenality and come
> >> up with laws that seem to characterise how it will appear
> >> to us in our phenomenality.
> >
> > You could say: my perceptions are
> > regular/persistent/repeatable/stable enough to assume an
> > external reality generating them and to do science on. And if
> > a machine's central processor's perceptions are similarly
> > regular/persistent/, repeatable/stable, it could also do
> > science on them. The point is, neither I nor
> > the machine has any magical knowledge of an external world.
> > All we have is regularities in perceptions, which we assume
> > to be originating from the external world because that's
> > a good model which stands up no matter what we throw
> > at it.
>
> Oops. Maybe I spoke too soon! OK.
> Consider... "...stable enough to assume an external reality..".
>
> You are a zombie. What is it about sensory data that suggests an external
> world?

What is it about sensory data that suggests an external world to a
human?

Well, of course, we have a phenomenal view. But there is no information
in the phenomenal display that was not first in the pre-phenomenal
sensory data.

> The science you can do is the science of zombie sense data, not an
> external world.

What does "of" mean in that sentence? Human science
is based on human phenomenality which is based on pre-phenomenal
sense data, and contains nothing beyond it informationally.

Humans unconsciously make guesses about the causal origins
of their sense-data in order to construct the phenomenal
view, which is then subjected to further educated guesswork
as part of the scientific process (which may contradict the
original guesswork, as in the detection of illusions)

> Your hypotheses about an external world would be treated
> as wild metaphysics by your zombie friends

Unless they are doing the same thing. Why shouldn't
they be? It is function/behaviour after all. Zombies
are supposed to lack phenomenality, not function.



> (none of which you can ever be
> aware of, for they are in this external world..., so there's another
> problem :-) Very tricky stuff, this.
> The only science you can do is "I hypohesise that when I activate this
> nerve, that sense nerve and this one do " You then publish in nature
> and collect your prize. (Except the external world this assumes is not
> there, from your perspective... life is grim for the zombie)

Assuming, for some unexplained re


Re: UDA revisited

2006-11-26 Thread 1Z


Colin Geoffrey Hales wrote:
> >
> >
> > Colin Hales writes:
> >
> >> You are a zombie. What is it about sensory data that suggests an
> >> external world? The science you can do is the science of
> >> zombie sense data, not an external world. Your hypotheses
> >> about an external world would be treated
> >> as wild metaphysics by your zombie friends (none of which you
> >> can ever be aware of, for they are in this external world...,
> >> so there's another problem :-) Very tricky stuff, this.
> >
> > My hypothesis about an external world *is* metaphysics, and
> > you seemed to agree in an earlier post that there was not
> > much point debating it. I assume that
> > there is an external world, behave as if there is one, and would be
> > surprised and disturbed if evidence came up suggesting that
> > it is all a hallucination, but I can't
> > ever be certain that such evidence will not come up.
>
> This is the surprise we are due. It's something that you have to inhabit
> for a while to assimilate properly. I have been on the other side of this
> for a long while now.
>
> The very fact that the laws of physics, derived and validated using
> phenomenality, cannot predict or explain how appearances are generated is
> proof that the appearance generator is made of something else.
> That something else is the reality involved, which is NOT
> appearances, but independent of them.
>
> I know that will sound weird...
>
> >
> >> The only science you can do is "I hypothesise that when I activate this
> >> nerve, that sense nerve and this one do "
> >
> > And I call regularities in my perceptions the "external world", which
> > becomes so
> > familiar to me that I forget it is a hypothesis.
>
> Except that in time, as people realise what I just said above, the
> hypothesis has some empirical support: if the universe were made of
> appearances, then when we opened up a cranium we'd see them. We don't.

Or appearances don't appear to be appearances to a third party.

> We see
> something generating/delivering them - a brain. That difference is the
> proof.



> >> If I am to do more I must have a 'learning rule'. Who tells me the
> >> learning rule? This is a rule of interpretation. That requires context.
> >> Where does the context come from? There is none. That is the situation
> >> of
> >> the zombie.
> >
> > I do need some rules or knowledge to begin with if I am to get anywhere
> > with interpreting sense data.
>
> You do NOT interpret sense data! In conscious activity you interpret the
> phenomenal scene generated using the sense data.

But that is itself an interpretation for reasons you yourself have
spelt out. Sensory pulse-trains don't have any  meaning in themselves.

>  Habituated/unconscious
> reflex behaviour with fixed rules uses sense data directly.

Does that make it impossible to have
adaptive responses to sense data?


> Think about driving home on a well travelled route. You don't even know
> how you got home. Yet if something unusual happened on the drive - ZAP -
> phenomenality kicks in and phenomenal consciousness handles the novelty.>

Is that your only evidence for saying that it is impossible
to cope with novelty without phenomenality?

> > With living organisms, evolution provides this
> > knowledge
>
> Evolution provided
> a) a learning tool(brain) that knows how to learn from phenomenal
>consciousness, which is an adaptive presentation of real
>external world a-priori knowledge.
> b) Certain simple reflex behaviours.
>
> > while with machines the designers provide it.
>
> Machine providers do not provide (a)


> They only provide (b), which includes any adaptivity rules, which are just
> more rules.

How do you know that (a) isn't "just" rules? What's the difference?

You seem to think there is an ontological gulf between (a) and (b). But
that
seems arbitrary.

> > Incidentally, you have stated in your paper that novel technology as the
> > end
> > product of scientific endeavour is evidence that other people are not
> > zombies, but
> > how would you explain the very elaborate technology in living organisms,
> > created
> > by zombie evolutionary processes?
> >
> > Stathis Papaioannou
>
> Amazing but true. Trial and error. Hypothesis/Test in a brutal live or die
> laboratory called The Earth. Notice that the process selected for
> phenomenal consciousness early on

But that slides past the point. The development of phenomenal
consciousness was an adaptation that occurred without PC.

Hence, PC is not necessary for all adaptation.

> which I predict will eventually be
> proven to exist in nearly all animal cellular life (vertebrate and
> invertebrate and even single celled organisms) to some extent. Maybe even
> in some plant life.
>
> 'Technology' is a loaded word...I suppose I mean 'human made' technology.
> Notice that chairs and digital watches did not evolve independently of
> humans. Nor did science. Novel technology could be re-termed 'non-DNA
> 

RE: UDA revisited

2006-11-26 Thread Stathis Papaioannou



> > Colin Geoffrey Hales wrote:
> >
> >> BTW there's no such thing as a truly digital computer. They are all
> >> actually analogue. We just ignore the analogue parts of the state
> >> transitions and time it all so it makes sense.
> >
> > And if the analogue part intrudes, the computer has malfunctioned
> > in some way. So correctly functioning computers are digital.
> >
> 
> Not so in the case of all the computers we have. 0 and 1 are an
> interpretation put on a voltage by you and me, depending on the voltage
> levels of the computer chip.
> 
> Eg TTL:  <0.2 volts = 0, >4.2-ish volts = 1
> 
> If you get a logic gate and control the voltage fed to it you can see the
> transition from 0.2 to 4-point-whatever and up to 5 volts usually. It's a
> nice smooth transition. Let it go under its own steam and the transition is
> very fast, but still all analogue real-world potential measured in a
> conducting crystalline environment. You're talking to an electronics
> engineer here.

Of course they are analogue devices, but their analogue nature makes no 
difference to the computation. If the ripple in the power supply of a TTL 
circuit were >4 volts then the computer's true analogue nature would 
intrude and it would malfunction.
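
A minimal sketch of that, in Python. The thresholds are the TTL-ish
figures quoted above, not datasheet values:

def ttl_read(voltage):
    # interpret an analogue voltage as a TTL logic level
    if voltage < 0.2:
        return 0
    if voltage > 4.2:
        return 1
    return None   # the analogue region we normally time our way around

print(ttl_read(5.0), ttl_read(0.05))   # 1 0 : normal digital operation
print(ttl_read(5.0 - 4.5))             # None: >4 volts of ripple intrudes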

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d




RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

Stathis:
>
> See my previous post, I'm also answering them in the order that I read
> them
> (otherwise I'll never get back to them).
>
> If your model is adequate, then it should allow you to implement a replica
> of what
> it is that you're modelling such that the replica behaves the same as the
> original, or
> close enough to the original. Now, you're not going to say that a model
> might be able
> to behave like a human scientist but actually be a zombie, are you?
>

Hell no! I am saying that scientific behaviour... (open-ended unveiling of
exquisite novelty and its depiction in the form of communicable
generalisations to an arbitrary level of abstraction) mandates
phenomenal consciousness. I am not saying only humans can do this. I am
only saying phenomenal consciousness is necessary. On its own it is not
sufficient, but it is absolutely necessary. Machines will be scientists in
the future. Like us. Better, probably, because they won't be as hung up on
being right, the status quo, the received view, the dogma... as we humans
are!

Colin Hales







RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
> You seem to be implying that there is some special physics
> involved in living processes: isn't that skimming a little
> close to vitalism?. All I see is the chemistry
> of large organic molecules, the fundamentals of which are
> well understood, even if the level of complexity is beyond
> what modern chemists' computer models can cope with. Classical
> chaos may make it impossible to perfectly model a living system
> in that the behaviour of the model will deviate from the
> behaviour of the original after a period of time, but the
> same is true if you try to model a game of pool.
>
> As for "modelling the physics that does the experience" not being
> the same as having the experience,  I think your own argument
> showing that a colleague cannot behave as if he is conscious
> by doing science without actually being conscious refutes this.

Note: Scientists, by definition:
a) are doing science on the world external to them
b) inhabit a universe of exquisite novelty
   ...or there'd be no need for them!

The rules we have DO NOT COVER what the scientist works on.
In order that the scientist be aware of the NOVELTY in the external world
it has to be visible. Prior learning (rules) cannot predict what that novelty
will be or you'd already know what it was! It is not novel! So what do we
have? We have a form of a-priori knowledge that tells us simply what is
'out there'. It's called phenomenal consciousness. Without it novelty
would be invisible. Novelty can hide in ambiguous sensory feeds, so they
are not up to the task and are thus NOT what we use to do science.

Are we there yet?

> If you could model all the responses of a scientist to his
> environment on a computer in real
> time and clothe this computer in the skin of a scientist, then this
> artificial scientist should
> behave just like the real thing and therefore should have the same
> phenomenal
> consciousness as the real thing.
>
>> You tell it to respond 'as-if' it had them. What you do not do is model
>> all possible paramecium experiences, only the ones you used to create
>> the
>> model. The experience and the behaviour in response are 2 different
>> things. All you can observe is behaviour.
>
> Of course you model all possible paramecium experiences: you wouldn't be
> doing your job
> if you didn't. But that doesn't mean you have to program in each possible
> experience one
> by one, any more than a word processing program needs to explicitly
> contain every possible
> combination of characters a user might possibly input.

That is simply a way of avoiding the whole issue. You are assuming you
capture everything by modelling (modelling it perfectly): you are happy to
accept what you have done (the perfect model) and then you conclude that as
a result you have captured everything. You have captured 100% of a partial
truth.

That's way more circular and assuming than anything I've proposed.

How would you model 100% of the life of Stathis? The only complete way is
to BE Stathis. Everything else is a shortcut. Information lost. It's just
a question of degree.

Colin






Re: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
> But you have no way to know whether phenomenal scenes are created by a
> particular computer/robot/program or not because it's just a mystery
> property defined as whatever creates phenomenal scenes.  You're going
> around in circles.  At some point you need to anchor your theory to an
> operational definition.

OK. There is a proven mystery called the hard problem, documented to death
and beyond. Call it Physics X. It is the physics that _predicts_ (NOT
DESCRIBES) phenomenal consciousness (PC). We have, through all my fiddling
about with scientists, conclusive scientific evidence that PC exists and is
necessary for science.

So what next?

You say to yourself... "none of the existing laws of physics predict PC.
Therefore my whole conception of how I understand the universe
scientifically must be missing something fundamental. Absolutely NONE of
what we know is part of it. What could that be?".

Then you let yourself have the freedom to explore that possibility, in
search of the answer you seek.

The answer?

is that the physics (rule set) of appearances and the physics (rule
set) of the universe capable of generating appearances are not the same
rule set! That the universe is NOT made of its appearance, it's made of
something _with_ an appearance that is capable of making an appearance
generator.

That's it. Half the laws of physics go neglected merely because we
won't accept phenomenal consciousness ITSELF as evidence of anything.

> If you try to make doing unique science the
> operational test, then you've defined 90% of humans and 100% of dogs,
> chimps, cats, etc. to be zombies.

Nope. I have merely defined them not to be scientists. That's all. Science
is merely special enough to allow conclusive testing. That's all I need to
do.

Cheers

Colin Hales




--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
>
> Colin Hales writes:
>
>> You are a zombie. What is it about sensory data that suggests an
>> external world? The science you can do is the science of
>> zombie sense data, not an external world. Your hypotheses
>> about an external world would be treated
>> as wild metaphysics by your zombie friends (none of which you
>> can ever be aware of, for they are in this external world...,
>> so there's another problem :-) Very tricky stuff, this.
>
> My hypothesis about an external world *is* metaphysics, and you seemed
> to agree in an earlier post that there was not much point debating it. I
> assume that there is an external world, behave as if there is one, and
> would be surprised and disturbed if evidence came up suggesting that it
> is all a hallucination, but I can't ever be certain that such evidence
> will not come up.

This is the surprise we are due. It's something that you have to inhabit
for a while to assimilate properly. I have been on the other side of this
for a long while now.

The very fact that the laws of physics, derived and validated using
phenomenality, cannot predict or explain how appearances are generated is
proof that the appearance generator is made of something else. That
something else is the reality involved, which is NOT appearances, but
independent of them.

I know that will sound weird...

>
>> The only science you can do is "I hypothesise that when I activate this
>> nerve, that sense nerve and this one do ..."
>
> And I call regularities in my perceptions the "external world", which
> becomes so familiar to me that I forget it is a hypothesis.

Except that in time, as people realise what I just said above, the
hypothesis gains some empirical support: if the universe were made of
appearances, then when we opened up a cranium we'd see them. We don't. We
see something generating/delivering them - a brain. That difference is the
proof.

>
>> If I am to do more I must have a 'learning rule'. Who tells me the
>> learning rule? This is a rule of interpretation. That requires context.
>> Where does the context come from? There is none. That is the situation
>> of the zombie.
>
> I do need some rules or knowledge to begin with if I am to get anywhere
> with interpreting sense data.

You do NOT interpret sense data! In conscious activity you interpret the
phenomenal scene generated using the sense data. Habituated/unconscious
reflex behaviour with fixed rules uses sense data directly.

Think about driving home on a well-travelled route. You don't even know
how you got home. Yet if something unusual happens on the drive - ZAP -
phenomenality kicks in and phenomenal consciousness handles the novelty.
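
A minimal sketch of the division of labour being claimed here, in Python,
with hypothetical names (it illustrates the control flow only, and makes
no attempt to implement phenomenal consciousness itself):

    # Sketch: fixed reflexes handle the familiar; anything outside the
    # rule set is escalated to whatever faculty deals with novelty.

    REFLEXES = {
        "green_light": "proceed",
        "red_light": "brake",
        "car_ahead_slows": "brake",
    }

    def handle_novelty(sense_datum: str) -> str:
        # Placeholder for the faculty that deals with the unexpected.
        return f"attend to unexpected event: {sense_datum}"

    def drive_step(sense_datum: str) -> str:
        if sense_datum in REFLEXES:
            # Habituated route home: fixed rules on raw sense data.
            return REFLEXES[sense_datum]
        return handle_novelty(sense_datum)  # ZAP: novelty escalated

    print(drive_step("red_light"))         # brake
    print(drive_step("kangaroo_on_road"))  # attend to unexpected event...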


> With living organisms, evolution provides this
> knowledge

Evolution provided:
a) a learning tool (a brain) that knows how to learn from phenomenal
   consciousness, which is an adaptive presentation of a-priori knowledge
   of the real external world;
b) certain simple reflex behaviours.

> while with machines the designers provide it.

Machine designers do not provide (a).

They only provide (b), which includes any adaptivity rules, which are just
more rules.


>
> Incidentally, you have stated in your paper that novel technology as the
> end
> product of scientific endeavour is evidence that other people are not
> zombies, but
> how would you explain the very elaborate technology in living organisms,
> created
> by zombie evolutionary processes?
>
> Stathis Papaioannou

Amazing but true. Trial and error. Hypothesis/test in a brutal live-or-die
laboratory called The Earth. Notice that the process selected for
phenomenal consciousness early on, which I predict will eventually be
proven to exist in nearly all animal cellular life (vertebrate and
invertebrate, and even single-celled organisms) to some extent. Maybe even
in some plant life.

'Technology' is a loaded word... I suppose I mean 'human-made' technology.
Notice that chairs and digital watches did not evolve independently of
humans. Nor did science. Novel technology could be re-termed 'non-DNA-based'
technology, I suppose. A bird flies. So do planes. One is DNA-based; the
other is not, but was created by a DNA-based creature called the human.
Eventually conscious machines will create novel technology too - including
new versions of themselves. It doesn't change any part of the propositions
I make - just contextualises them inside a fascinating story.

Colin Hales





--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



RE: UDA revisited

2006-11-26 Thread Colin Geoffrey Hales

>
>
> Colin Hales writes:
>
>> > You're being unfair to the poor zombie robots. How could they
>> > possibly tell if they were in the factory or on the benchtop
>> > when the benchtop (presumably) exactly replicates the sensory
>> > feeds they would receive in the factory?
>> > Neither humans nor robots, zombie or otherwise, should be
>> > expected to have ESP.
>>
>> Absolutely! But the humans have phenomenal consciousness in lieu of ESP,
>> which the zombies do not. To bench test "a human" I could not merely
>> replicate sensory feeds. I'd have to replicate the factory! The human is
>> connected to the external world (as mysterious as that may be, and it's
>> not ESP!). The zombie isn't, so faking it is easy.
>
> I don't understand why you would have to replicate the factory
> rather than just the sensory feeds to fool a human, but not a machine.
> It is part of the definition of a hallucination that it is
> indistinguishable from the real thing. People have done
> terrible things, including murder and suicide, because of auditory
> hallucinations. The hallucinations are so real to them that
> even when presented with contrary evidence,
> such as someone standing next to them denying that they
> heard anything, they insist
> it is not a hallucination: "I know what I heard, you must
> either be deaf lying".

I don't know how to insert/overwrite the phenomenal scene activity/physics
directly. Artificial/virtual reality might do it. It works for airline
pilots.

But I'd have to create sensory stimuli sufficiently sophisticated to be a
useful simulation of the world, including temperature, pressure and other
real-world phenomena. Rather more tricky.

Part of my long-term strategy for these things in process control products
is to actually eliminate the bench testing! Take the unprogrammed but
intelligent machine that has phenomenal consciousness tailored to suit,
then teach it in situ, or teach it how to learn things and leave it to it.

>
>> >> Now think about the touch...the same sensation of touch could
>> >> have been generated by a feather or a cloth or another finger
>> >> or a passing car. That context is what phenomenal
>> >> consciousness provides.
>> >
>> > But it is impossible to differentiate between different sources
>> > of a sensation unless the different sources generate a different
>> > sensation. If you close your eyes and the touch of a feather
>> > and a cloth feel the same, you can't tell which it was.
>> > If you open your eyes, you can tell a difference because
>> > the combined sensation (touch + vision) is different in the
>> > two cases. A machine that has touch receptors alone might not
>> > be able to distinguish between them, but a machine that has
>> > touch + vision receptors would be able to.
>> >
>>
>> Phenomenal scenes can combine to produce masterful, amazing
>> discriminations. But how does the machine, without being told already by
>> a human, know one from the other? Having done that, how can it combine
>> and contextualise that joint knowledge? You have to tell it how to
>> learn. Again, a-priori knowledge ...
>
> A machine can tell one from the other because they are different. If
> they were the same, it would not be able to tell one from the other, and
> neither would a human, or a paramecium. As for combining,
> contextualising etc., that is what the information processing hardware +
> software does. In living organisms the hardware + software has evolved
> naturally while in machines it is artificial.
>
> I think it is possible that any entity, whether living or artificial,
> which processes sensory data and is able to learn and interact with its
> environment has a basic consciousness.
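
As an aside, a minimal Python sketch of the discrimination point quoted
above, with made-up sensor values: with touch alone the feather and the
cloth produce the same reading and so cannot be told apart, while adding a
second modality separates them.

    # Sketch: one modality collapses two sources onto the same reading;
    # a second modality makes the combined reading discriminable.

    touch_only = {
        ("light_pressure",): "feather or cloth?",  # classes collide
    }

    touch_plus_vision = {
        ("light_pressure", "sees_feather"): "feather",
        ("light_pressure", "sees_cloth"): "cloth",
    }

    print(touch_only[("light_pressure",)])
    print(touch_plus_vision[("light_pressure", "sees_feather")])
    print(touch_plus_vision[("light_pressure", "sees_cloth")])
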

You can call it a form of 'consciousness', I suppose... but unless it has
some physics of phenomenal consciousness happening in there to apprehend
the external world, it's a zombie and will be fundamentally limited to
handling all novelty with its unconscious reflexes, including whatever
a-priori adaptive behaviour it happens to have.

> This would be consistent with your idea that zombies can't be
> scientists. What I cannot understand is your claim that machines are
> necessarily zombies.

No phenomenality = Zombie. Simple.
This does not mean it is incapable of functioning successfully in a
certain habitat. What it does mean is that it cannot be a scientist,
because its habits/reflexes are all fixed. Its adaptation is a-priori
fixed.

> Machines and living organisms are just special
> arrangements of matter following the laws of physics.
> What is the fundamental difference between them
> which enables one to be conscious and the other not?
>

= The unknown physics of phenomenal consciousness.

which none of the existing 'laws of physics' have ever predicted and never
will, because they were derived using it, and all they predict is how it
will appear in phenomenality when we look. Very useful, but impotent...
predictably impotent when it comes to understanding

RE: UDA revisited

2006-11-26 Thread Stathis Papaioannou


See my previous post; I'm also answering these posts in the order that I
read them (otherwise I'll never get back to them).

If your model is adequate, then it should allow you to implement a replica
of what it is that you're modelling, such that the replica behaves the same
as the original, or close enough to it. Now, you're not going to say that a
model might be able to behave like a human scientist but actually be a
zombie, are you?
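
A minimal sketch of what is at stake in that question, in Python, with
hypothetical classes (no claim about real minds): two implementations with
identical observable behaviour but different internals, which a purely
behavioural test cannot tell apart.

    # Sketch: identical behaviour, different internals. An observer who
    # can only test behaviour cannot distinguish the two implementations.

    class Original:
        def respond(self, stimulus: str) -> str:
            return stimulus.upper()

    class Replica:
        def respond(self, stimulus: str) -> str:
            # A different internal route to the same observable output.
            return "".join(c.upper() for c in stimulus)

    for agent in (Original(), Replica()):
        assert agent.respond("hello") == "HELLO"  # behaviourally identical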

> Ooops...I forgot the 'quantum level' issue in the paramecium discussion.
> 
> No. I would disagree. Quantum mechanics is just another "law of
> appearances" - how the world appears when we look. The universe is not
> made of quantum mechanics. It is made of 'something'. That 'something' is
> behaving quantum mechanically.
> 
> The model is a bunch of 'something' doing a 'model-dance' in a computer.
> It does not do what the 'something' does in a paramecium. Hence whatever
> is lost by changing the dance from the 'something dance' (quantum
> mechanical or whatever) to the 'model-dance' will be lost to the model
> paramecium.
> 
> I would hold that what is lost is the faculty for experience. The
> paramecium includes all levels of the organisation of reality. No matter
> how deep your model goes, you throw away whatever is underneath your
> bottom layer of abstraction and then assume that does not matter. Big
> mistake, IMO. Fixable, but not by modelling.
> 
> Does that make sense?
> 
> Colin Hales

Stathis Papaioannou
_
Be one of the first to try Windows Live Mail.
http://ideas.live.com/programpage.aspx?versionId=5d21c51a-b161-4314-9b0e-4911fb2b2e6d
--~--~-~--~~~---~--~~
 You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



