Re: How would a computer know if it were conscious?

2007-06-29 Thread Bruno Marchal

On 28 Jun 2007, at 17:56, David Nyman wrote:

 On 28/06/07, Bruno Marchal [EMAIL PROTECTED] wrote:

 Hi Bruno

 The remarks you comment on are certainly not the best-considered or 
 most cogently expressed of my recent posts.  However, I'll try to 
 clarify if you have specific questions.  As to why I said I'd rather 
 not use the term 'consciousness', it's because of some recent 
 confusion and circular disputes ( e.g. with Torgny, or about whether 
 hydrogen atoms are 'conscious'). 


I am not sure that in case of disagreement (like our disagreement 
with Torgny), changing the vocabulary is a good idea. This will not 
make the problem go away; on the contrary, there is a risk of 
introducing obscurity.





 Some of the sometimes confused senses (not by you, I hasten to add!) 
 seem to be:

 1) The fact of possessing awareness
 2) The fact of being aware of one's awareness
 3) the fact of being aware of some content of one's awareness


So just remember that in a first approximation I identify this with

1) being conscious  (Dt?)  - for those who have 
followed the modal posts (Dx is short for ~Beweisbar('~x'))
2) being self-conscious  (DDt?)
3) being conscious of #  (Dp?)

You can also have:

4) being self-conscious of something  (DDp?).

Dp is really an abbreviation of the arithmetical proposition 
~Beweisbar('~p'), where 'p' denotes the Gödel number describing p in the 
language of the machine (by default, the language of first-order 
arithmetic).
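
In symbols, as a minimal LaTeX sketch (assuming the usual 
provability-logic reading: B for the Beweisbar predicate applied to 
Gödel numbers, t for the constant true, and f, introduced here, for 
the constant false; this only restates the identifications above):

  Bp \;\equiv\; \mathrm{Beweisbar}(\ulcorner p \urcorner)
     % "the machine proves p"
  Dp \;\equiv\; \neg B \neg p
     % "p is consistent with what the machine proves"
  Dt \;\equiv\; \neg B \neg t \;=\; \neg B f
     % consistency of the machine itself
  DDt, \; DDp
     % the same operators iterated, for the self-conscious cases

So 1) reads Dt, 2) reads DDt, 3) reads Dp, and 4) reads DDp.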



 So now I would prefer to talk about self-relating to a 1-personal 
 'world', where previously I might have said 'I am conscious', and that 
 such a world mediates or instantiates 3-personal content. 

This is ambiguous. The word 'world' is a bit problematic in my setting.


  I've tried to root this (in various posts) in a logically or 
 semantically primitive notion of self-relation that could underlie 0, 
 1, or 3-person narratives, and to suggest that such self-relation 
 might be intuited as 'sense' or 'action' depending on the narrative 
 selected.

OK.


 But crucially such nuances would merely be partial takes on the 
 underlying self-relation, a 'grasp' which is not decomposable.


Actually the elementary grasp is decomposable (into number relations) 
in the comp setting.



 So ISTM that questions should attempt to elicit the machine's 
 self-relation to such a world and its contents: i.e. its 'grasp' of a 
 reality analogous to our own.  And ISTM the machine could also ask 
 itself such questions, just as we can, if indeed such a world existed 
 for it.

OK, but the machine cannot know that (as we cannot know that).


 I realise of course that it's fruitless to try to impose my jargon on 
 anyone else, but I've just been trying to see whether I could become 
 less confused by expressing things in this way.  Of course, a 
 reciprocal effect might just be to make others more confused!

It is the risk indeed.


Best regards,

Bruno





 David


 On 21 Jun 2007, at 01:07, David Nyman wrote:

 
  On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
 
  Personally I don't think we can be *personally* mistaken about our 
 own
   consciousness even if we can be mistaken about anything that
  consciousness could be about.
 
  I agree with this, but I would prefer to stop using the term
  'consciousness' at all.


 Why?



  To make a decision (to whatever degree of
  certainty) about whether a machine possessed a 1-person pov 
 analogous
  to a human one, we would surely ask it the same sort of questions 
 one
  would ask a human.  That is: questions about its personal 'world' -
  what it sees, hears, tastes (and perhaps extended non-human
  modalities); what its intentions are, and how it carries them into
  practice.  From the machine's point-of-view, we would expect it to
  report such features of its personal world as being immediately
  present (as ours are), and that it be 'blind' to whatever 'rendering
  mechanisms' may underlie this (as we are).
 
  If it passed these tests, it would be making similar claims on a
  personal world as we do, and deploying this to achieve similar ends.
  Since in this case it could ask itself the same questions that we 
 can,
  it would have the same grounds for reaching the same conclusion.
 
  However, I've argued in the other bit of this thread against the
  possibility of a computer in practice being able to instantiate 
 such a
  1-person world merely in virtue of 'soft' behaviour (i.e.
  programming).  I suppose I would therefore have to conclude that no
  machine could actually pass the tests I describe above - whether 
 self-
  administered or not - purely in virtue of running some AI program,
  however complex.  This is an empirical prediction, and will have to
  await an empirical outcome.


 Now I have big problems understanding this post. I must think ... (and
 go).

 Bye,

 Bruno



 
 
  On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
  On 3 Jun 2007, at 21:52, 

Re: How would a computer know if it were conscious?

2007-06-29 Thread David Nyman
On 29/06/07, Bruno Marchal [EMAIL PROTECTED] wrote:
BM:  I am not sure that in case of disagreement (like our disagreement
with Torgny), changing the vocabulary is a good idea. This will not
make the problem go away; on the contrary, there is a risk of
introducing obscurity.

DN:  Yes, this seems to be the greater risk.  OK, in general I'll try to
avoid it where possible.  I've taken note of the correspondences you
provided for the senses of 'consciousness' I listed, and the additional one.


BM:  Actually the elementary grasp is decomposable (into number relations)
in the comp setting.

DN:  Then are you saying that 'action' can occur without 'sense' - i.e. that
'zombies' are conceivable?  This is what I hoped was avoided in the
intuition that 'sense' and 'action' are, respectively, 1-p and 3-p aspects
abstracted from a 0-p decomposable self-relation. The zombie then becomes
merely a category error.  I thought that in COMP, number relations would be
identified with this decomposable self-relation.  Ah... but by
'decomposable', I think perhaps you mean that there are of course
*different* number relations, so that this would then entail that there is a
set of such fundamental relations such that *each* relation is individually
decomposable, yes?

BM:  OK, but the machine cannot know that (as we cannot know that).

DN:  Do you mean that the machine can't know for sure the correspondence
between its conscious world and the larger environment in which this is
embedded and to which it putatively relates? Then I agree of course, and as
you say, neither can we, for the sufficient reasons you have articulated.
So what I meant was that it would simply be in the same position that we
are, which seems self-evident.

Anyway, as I said, the original post was probably ill advised, and I retract
my quibbles about your terminology.

As to my point about whether such an outcome is likely vis-a-vis an AI
program, it wasn't of course because you made any claims on this topic, but
stimulated by another thread.  My thought goes as follows.  I seem to have
convinced myself that, on the COMP assumption that *I* am such a machine, it
is possible for other machines to instantiate conscious computations.
Therefore it would be reasonable for me to attribute consciousness to a
machine that passed certain critical tests, though not such that I could
definitely know or prove that it was conscious.  Nonetheless, such quibbles
don't stop us from undertaking some empirical effort to develop machines
with consciousness.  Two ways of doing this seem apparent.  First, to copy
an existing such system (e.g. a human) at an appropriate substitution level
(as in your notorious gedanken experiment).  Second, to arrange for some
initial system to undergo a process of 'psycho-physical' evolution (as
humans have done) such that its 'sense' and 'action' narratives
'self-converge' on a consistent 1p-3p interface, as in our own case.

In either of these cases, 'sense' and 'action' narratives 'self-converge',
rather than being 'engineered', and any imputation of consciousness ( i.e.
the attribution of semantics to the computation) continues to be 1p
*self-attribution*, not a provable or definitely knowable 3p one.  The
problem then seems to be: is there in fact a knowable method to 'design' all
this into a system from the outside: i.e. a way to start from an external
semantic attribution (e.g. an AI program) and then 'engineer' the sense and
action syntactics of the instantiation in such a way that they converge on a
consistent semantic interpretation from either 1p or 3p pov?  IOW, so that a
system thus engineered would be capable of passing the same critical tests
achievable by the first two types.  I can't see that we possess even a
theory of how this could be done, and as somebody once said, there's nothing
so practical as a good theory.  This is why I expressed doubt in the
empirical outcome of any AI programme approached in this manner.  ISTM that
references to Moore's Law etc. in this context are at present not much more
than promissory notes written in invisible ink on transparent paper.

David.

On 28 Jun 2007, at 17:56, David Nyman wrote:

  On 28/06/07, Bruno Marchal  [EMAIL PROTECTED] wrote:
 
  Hi Bruno
 
  The remarks you comment on are certainly not the best-considered or
  most cogently expressed of my recent posts.  However, I'll try to
  clarify if you have specific questions.  As to why I said I'd rather
  not use the term 'consciousness', it's because of some recent
  confusion and circular disputes ( e.g. with Torgny, or about whether
  hydrogen atoms are 'conscious').


 I am not sure that in case of disagreement (like our disagreement
 with Torgny), changing the vocabulary is a good idea. This will not
 make the problem go away; on the contrary, there is a risk of
 introducing obscurity.





  Some of the sometimes confused senses (not by you, I hasten to add!)
  seem to be:
 
  1) The fact of possessing awareness
  2) 

Re: How would a computer know if it were conscious?

2007-06-28 Thread Bruno Marchal

David,


On 17 Jun 2007, at 18:28, David Nyman wrote:

 IMHO this semantic model gives you a knock-down argument against
 'computationalism', *unless* one identifies (I'm hoping to hear from
 Bruno on this) the 'primitive' entities and operators with those of
 the number realm - i.e. you make numbers and their relationships the
 'primitive base'.  But crucially, you must still take these entities
 and their relationships to be the *real* basis of personal-world
 'grasp'.  If you continue to adopt a 'somethingist' view, then no
 'program' (i.e. one of the arbitrarily large set that could be imputed
 to any 'something') could coherently be responsible for its personal-
 world grasp (such as it may be).  This is the substance of the UDA
 argument.  All personal-worlds must emerge internally via recursive
 levels of relationship inherited from primitive grasp: in a
 'somethingist' view, such grasp must reside with a primitive
 'something', as we have seen, and in a computationalist view, it must
 reside in the number realm.  But the fundamental insight applies.



I agree completely, but I am not yet convinced that you appreciate my 
methodological way of proceeding. I have to ask you questions, but I 
see you have been prolific during the Siena congress, which is not 
gentle for my mailbox :). Anyway I will take some time to read your 
posts and the others' before asking questions that others have 
perhaps already asked and that you have perhaps already answered.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: How would a computer know if it were conscious?

2007-06-28 Thread David Nyman
On 28/06/07, Bruno Marchal [EMAIL PROTECTED] wrote:

BM:  I agree completely.

DN:  A good beginning!

BM:  ...but I am not yet convinced that you appreciate my
methodological way of proceeding.

DN: That may well be so.  In that case it's interesting that we reached the
same conclusion.

BM:  Anyway I will take some time to read your posts
and the others' before asking questions that others have
perhaps already asked and that you have perhaps already answered.

DN:  I'm at your disposal.

David


 David,


 On 17 Jun 2007, at 18:28, David Nyman wrote:

  IMHO this semantic model gives you a knock-down argument against
  'computationalism', *unless* one identifies (I'm hoping to hear from
  Bruno on this) the 'primitive' entities and operators with those of
  the number realm - i.e. you make numbers and their relationships the
  'primitive base'.  But crucially, you must still take these entities
  and their relationships to be the *real* basis of personal-world
  'grasp'.  If you continue to adopt a 'somethingist' view, then no
  'program' (i.e. one of the arbitrarily large set that could be imputed
  to any 'something') could coherently be responsible for its personal-
  world grasp (such as it may be).  This is the substance of the UDA
  argument.  All personal-worlds must emerge internally via recursive
  levels of relationship inherited from primitive grasp: in a
  'somethingist' view, such grasp must reside with a primitive
  'something', as we have seen, and in a computationalist view, it must
  reside in the number realm.  But the fundamental insight applies.



 I agree completely, but I am not yet convinced that you appreciate my
 methodological way of proceeding. I have to ask you questions, but I
 see you have been prolific during the Siena congress, which is not
 gentle for my mailbox :). Anyway I will take some time to read your
 posts and the others' before asking questions that others have
 perhaps already asked and that you have perhaps already answered.

 Bruno


 http://iridia.ulb.ac.be/~marchal/


 





Re: How would a computer know if it were conscious?

2007-06-28 Thread Bruno Marchal


On 21 Jun 2007, at 01:07, David Nyman wrote:


 On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:

 Personally I don't think we can be *personally* mistaken about our own
 consciousness even if we can be mistaken about anything that
 consciousness could be about.

 I agree with this, but I would prefer to stop using the term
 'consciousness' at all.


Why?



 To make a decision (to whatever degree of
 certainty) about whether a machine possessed a 1-person pov analogous
 to a human one, we would surely ask it the same sort of questions one
 would ask a human.  That is: questions about its personal 'world' -
 what it sees, hears, tastes (and perhaps extended non-human
 modalities); what its intentions are, and how it carries them into
 practice.  From the machine's point-of-view, we would expect it to
 report such features of its personal world as being immediately
 present (as ours are), and that it be 'blind' to whatever 'rendering
 mechanisms' may underlie this (as we are).

 If it passed these tests, it would be making similar claims on a
 personal world as we do, and deploying this to achieve similar ends.
 Since in this case it could ask itself the same questions that we can,
 it would have the same grounds for reaching the same conclusion.

 However, I've argued in the other bit of this thread against the
 possibility of a computer in practice being able to instantiate such a
 1-person world merely in virtue of 'soft' behaviour (i.e.
 programming).  I suppose I would therefore have to conclude that no
 machine could actually pass the tests I describe above - whether self-
 administered or not - purely in virtue of running some AI program,
 however complex.  This is an empirical prediction, and will have to
 await an empirical outcome.


Now I have big problems understanding this post. I must think ... (and 
go).

Bye,

Bruno





 On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
 On 3 Jun 2007, at 21:52, Hal Finney wrote:



 Part of what I wanted to get at in my thought experiment is the
 bafflement and confusion an AI should feel when exposed to human 
 ideas
 about consciousness.  Various people here have proffered their own
 ideas, and we might assume that the AI would read these suggestions,
 along with many other ideas that contradict the ones offered here.
 It seems hard to escape the conclusion that the only logical response
 is for the AI to figuratively throw up its hands and say that it is
 impossible to know if it is conscious, because even humans cannot 
 agree
 on what consciousness is.

 Augustin said about (subjective) *time* that he knows perfectly what 
 it
 is, but that if you ask him to say what it is, then he admits being
 unable to say anything. I think that this applies to consciousness.
 We know what it is, although only in some personal and uncommunicable
 way.
 Now this happens to be true also for many mathematical concepts.
 Strictly speaking we don't know how to define the natural numbers, and
 we know today that indeed we cannot define them in a communicable way,
 that is without assuming the auditor knows already what they are.

 So what can we do? We can do what mathematicians do all the time. We
 can abandon the very idea of *defining* what consciousness is, and try
 instead to focus on principles or statements about which we can agree
 that they apply to consciousness. Then we can search for (mathematical)
 objects obeying such or similar principles. This can be made easier
 by admitting some theory or realm for consciousness, like the idea that
 consciousness could apply to *some* machine or to some *computational
 events* etc.

 We could agree for example that:
 1) each one of us knows what consciousness is, but nobody can prove
 he/she/it is conscious (see the sketch below).
 2) consciousness is related to inner personal or self-referential
 modality
 etc.

 This is how I proceed in Conscience et Mécanisme.  ('conscience' is
 the French for consciousness; 'conscience morale' is the French for the
 English 'conscience').
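
 Point 1 has a natural formal counterpart, on the assumption (the 
 reading given earlier in this thread) that 'being conscious' is read 
 as the consistency assertion Dt, with B the machine's provability 
 predicate. Under that reading, Gödel's second incompleteness theorem, 
 obtained here from Löb's theorem, says that a consistent machine 
 cannot prove the very statement she knows. A LaTeX sketch:

   \text{L\"ob's theorem:}\qquad B(Bp \to p) \to Bp
   \text{with } p := f:\qquad B(\neg B f) \to B f
   \text{contrapositive:}\qquad \neg B f \to \neg B(\neg B f),
     \quad\text{i.e.}\quad Dt \to \neg B\,Dt

 So if the machine is consistent, that fact is true of her but not 
 provable by her, which is exactly the shape of point 1.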



 In particular I don't think an AI could be expected to claim that it
 knows that it is conscious, that consciousness is a deep and 
 intrinsic
 part of itself, that whatever else it might be mistaken about it 
 could
 not be mistaken about being conscious.  I don't see any logical way 
 it
 could reach this conclusion by studying the corpus of writings on the
 topic.  If anyone disagrees, I'd like to hear how it could happen.

 As far as a machine is correct, when she introspects herself, she
 cannot but discover a gap between truth (p) and provability (Bp). The
 machine can discover correctly (but not necessarily in a completely
 communicable way) a gap between provability (which can potentially
 lead to falsities, despite correctness) and the incorrigible
 knowability or knowledgeability (Bp & p), and then the gap between
 those notions and observability (Bp & Dp) and sensibility (Bp & Dp & 
 p). Even without using the conventional name of 
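
 In the same notation (a sketch, assuming the arithmetical reading of 
 B and D used above), these gaps line up as nested modal variants of 
 one and the same p:

   p                        % truth
   Bp                       % provability
   Bp \wedge p              % knowability (the incorrigible knower)
   Bp \wedge Dp             % observability
   Bp \wedge Dp \wedge p    % sensibility

 For a correct machine the last four hold of exactly the same p, yet 
 the machine cannot prove those equivalences, and the modal logics 
 obeyed by the corresponding operators differ; that difference is the 
 gap being pointed at.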

Re: How would a computer know if it were conscious?

2007-06-28 Thread David Nyman
On 28/06/07, Bruno Marchal [EMAIL PROTECTED] wrote:

Hi Bruno

The remarks you comment on are certainly not the best-considered or most
cogently expressed of my recent posts.  However, I'll try to clarify if you
have specific questions.  As to why I said I'd rather not use the term
'consciousness', it's because of some recent confusion and circular disputes
(e.g. with Torgny, or about whether hydrogen atoms are 'conscious').  Some
of the sometimes confused senses (not by you, I hasten to add!) seem to be:

1) The fact of possessing awareness
2) The fact of being aware of one's awareness
3) the fact of being aware of some content of one's awareness

So now I would prefer to talk about self-relating to a 1-personal 'world',
where previously I might have said 'I am conscious', and that such a world
mediates or instantiates 3-personal content.  I've tried to root this (in
various posts) in a logically or semantically primitive notion of
self-relation that could underlie 0, 1, or 3-person narratives, and to
suggest that such self-relation might be intuited as 'sense' or 'action'
depending on the narrative selected. But crucially such nuances would merely
be partial takes on the underlying self-relation, a 'grasp' which is not
decomposable.

So ISTM that questions should attempt to elicit the machine's self-relation
to such a world and its contents: i.e. its 'grasp' of a reality analogous
to our own.  And ISTM the machine could also ask itself such questions, just
as we can, if indeed such a world existed for it.

I realise of course that it's fruitless to try to impose my jargon on anyone
else, but I've just been trying to see whether I could become less confused
by expressing things in this way.  Of course, a reciprocal effect might just
be to make others more confused!

David



 On 21 Jun 2007, at 01:07, David Nyman wrote:

 
  On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
 
  Personally I don't think we can be *personally* mistaken about our own
  consciousness even if we can be mistaken about anything that
  consciousness could be about.
 
  I agree with this, but I would prefer to stop using the term
  'consciousness' at all.


 Why?



  To make a decision (to whatever degree of
  certainty) about whether a machine possessed a 1-person pov analogous
  to a human one, we would surely ask it the same sort of questions one
  would ask a human.  That is: questions about its personal 'world' -
  what it sees, hears, tastes (and perhaps extended non-human
  modalities); what its intentions are, and how it carries them into
  practice.  From the machine's point-of-view, we would expect it to
  report such features of its personal world as being immediately
  present (as ours are), and that it be 'blind' to whatever 'rendering
  mechanisms' may underlie this (as we are).
 
  If it passed these tests, it would be making similar claims on a
  personal world as we do, and deploying this to achieve similar ends.
  Since in this case it could ask itself the same questions that we can,
  it would have the same grounds for reaching the same conclusion.
 
  However, I've argued in the other bit of this thread against the
  possibility of a computer in practice being able to instantiate such a
  1-person world merely in virtue of 'soft' behaviour (i.e.
  programming).  I suppose I would therefore have to conclude that no
  machine could actually pass the tests I describe above - whether self-
  administered or not - purely in virtue of running some AI program,
  however complex.  This is an empirical prediction, and will have to
  await an empirical outcome.


 Now I have big problems understanding this post. I must think ... (and
 go).

 Bye,

 Bruno



 
 
  On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
  On 3 Jun 2007, at 21:52, Hal Finney wrote:
 
 
 
  Part of what I wanted to get at in my thought experiment is the
  bafflement and confusion an AI should feel when exposed to human
  ideas
  about consciousness.  Various people here have proffered their own
  ideas, and we might assume that the AI would read these suggestions,
  along with many other ideas that contradict the ones offered here.
  It seems hard to escape the conclusion that the only logical response
  is for the AI to figuratively throw up its hands and say that it is
  impossible to know if it is conscious, because even humans cannot
  agree
  on what consciousness is.
 
  Augustin said about (subjective) *time* that he knows perfectly what
  it
  is, but that if you ask him to say what it is, then he admits being
  unable to say anything. I think that this applies to consciousness.
  We know what it is, although only in some personal and uncommunicable
  way.
  Now this happens to be true also for many mathematical concepts.
  Strictly speaking we don't know how to define the natural numbers, and
  we know today that indeed we cannot define them in a communicable way,
  that is without assuming the auditor knows already 

Re: How would a computer know if it were conscious?

2007-06-27 Thread Mark Peaty

en passant = one form of 'one size fits all' is shrink 
wrapping. Some food for thought wrapped up in there somewhere

DN: 'MP:  That is to say, all our knowledge _of_
 the world is embodied in qualia which are _about_ the world.
 They are our brains' method of accounting for things. Naive
 realism is how we are when we 'mistake' qualia for the world
 they represent.
 
 DN:  OK, if one's self-relating emerges 1-personally as 
 spectrally-rendered 'surfaces', does this carry for you any taste, 
 sniff, glimmer, rustle, or tingle of 'qualia'?  Of course, there's 
 nothing 'external' to compare to the 1-personal, even though 'spectra' 
 does carry an implication of relative modality, range and scale at the 
 3-personal 'message-level'.  And we can exchange 'signal' with others to 
 correlate aspects of our 1-personal worlds.  But we can find no 
 'absolute' sense in which it's 'like anything' to be 1-personal, even 
 for the 1-person.  It's non-pareil.  But, perhaps, the sort of 
 non-pareil that just might emerge from participating in exquisite 
 complexities of self-relativity.

MP: The way I deal with this, without magic but sometimes 
gasping in wonder, is to recognise that a 'quale' is _about_ 
something. BTW I never normally use the word because it is not 
plain-English; 'appearance' or 'perceptual quality [plus an 
example]' are what most people could relate to.

I think the key insight needed here is that any item of 
consciousness, using those words flexibly but not too loosely, 
must relate something to someone. Depending on the 
sophistication of the 'someone', the something can be a part of 
his/her/its body or some abstract construct. For most of the 
time though the 'something' is a process, person or object in 
the world. ISTM that this necessarily entails that within the 
brain there is something which stands for the thing in the 
external world [or body part, abstract construct,etc], something 
which stands for 'self', and something which relates these two 
in a way which adequately deals with the actual real-world 
relationship between the thing and 'me'. How could it be otherwise?

I think Steve Lehar's cartoon epistemology series at
http://cns-alumni.bu.edu/~slehar/
deals very well with some of the truly practical questions like 
'how the hell does it work?'
For instance he shows how there IS a homunculus within the 
brain: the transduction device which drives the skeletal 
muscles. When I see it spelled out in plain-English with clear 
and simple diagrams like that I ask: How could it not be like that?

Steve Lehar is quick to point out that he doesn't have lots of 
answers to how the 3D rendition of the environment occurs but as 
you point out Dave [I hope I am joining your dots correctly], 
the human brain uses lots and lots of 2D cortical surfaces to 
create 2D virtual surfaces which embody all the information that 
composes our experience of the world. I believe this is exactly 
right, and the connections between the surfaces, the 
synthesising or merger of the relevant analytical components, 
the 'binding' as they say, is by means of harmonic resonance. 
Cortical re-entrant signalling - between all the regions 
encoding momentarily relevant features of that which we are 
attending to - is synchronising, stabilising, and maintaining 
resonant mass action in a distributed topological structure. And 
we have to say that this structure EXISTS. Without this it is 
all voodoo and worse.

A structure which exists in this manner can:
*   evoke characteristic consequences, and
*   prevent other things from happening, and
*   resist its dissolution by the rest of the brain until its task 
is fulfilled.

The last little bit may sound a tad romantic but the first 
criterion of 'thingness' is that the thing resist its own 
destruction for long enough to be noticed.

My favourite emblem [? or symbol?] for the dynamic complexity 
and robustness of this process is the lion stalking and then 
charging its prey. [Of course it could be any other predator] If 
you recollect documentaries you have seen, remember how the cat 
focuses its attention on the prey as it creeps closer. Then 
remember how the creature charges: its eyes never leave the 
target; the lion's brain has 'locked-on' to the prey. That brain 
based lock-on [the term comes from guided missile technology I 
believe] is a prerequisite for the eventual, climactic, _dental_ 
lock-on which will secure din dins for the cat.
 From the time it starts charging, until the prey is within 
grasp of claws and teeth, the lion cannot take its eyes off the 
prey and cannot give heed to any distractions. It must navigate 
around or over obstacles, subordinating its body to the goal of 
reaching and capturing the target.

There is a simplicity in the case of the lion which we have lost 
because of our use of words. Words allowed copied behaviours to 
take on an existence of their own and to replicate, and evolve, 
proliferating into a vast 

Re: How would a computer know if it were conscious?

2007-06-26 Thread Mark Peaty

I will try the 'interpolation method' below. Your second may
shoot me if I waffle though :-)

David Nyman wrote:
 Mark:
 
 Accepting broadly your summary up to this point...
 
 MP:  But I have to *challenge you to clarify* whether what I write
 next really ties in completely with what you are thinking.
 
 DN:  My seconds will call on you!
 
 MP:  Consciousness is something we know personally, and through
 discussion with others we come to believe that their experience
 is very similar.
 
 DN:  OK, but If you push me, I would say that we 'emerge' into a 
 personal world, and through behavioural exchange with it, come to act 
 consistently as if this constitutes an 'external' environment including 
 a community of similar worlds. For a nascent individual, such a personal 
 world is initially 'bootstrapped' out of the environment, and 
 incrementally comes to incorporate communally-established recognition 
 and explanatory consistencies that can also be extrapolated to embrace 
 a wider context beyond merely 'personal' worlds.
 
MP2: Yes! Well put.

 MP:  This can be summarised as 'The mind is
 what the brain does', at least insofar as 'consciousness' is
 concerned, and the brain does it all in order to make the body's
 muscles move in the right way.
 
 DN:  I would say that 'minds' and 'brains' are - in some as yet 
 not-fully-explicated way - parallel accounts of a seamless causal 
 network embracing individuals and their environment.  Depending on how 
 this is schematised, it may or may not be possible to fully correlate 
 top-down-personal and bottom-up-physical accounts.  Nonetheless, ISTM 
 more natural to ascribe intentionality to the individual in terms of the 
 environment, rather than 'the brain getting the body's muscles to move' 
 - i.e. 'I move my hand' runs in parallel with a physical account 
 involving the biology and physics of brain and body, but both ultimately 
 supervene on a common 'primitive' explanatory base.
 
MP2: OK, my 'the brain makes muscles move' is basically a
bulwark against 'panpsychism' or any other forms of
mystery-making. The term I like is 'identity theory' but like
most labels it usually seems to provoke unproductive
digressions. The main reason for the word 'challenge' above is
due to the way you were using the word 'sensing' for physical
and chemical interactions.
I would use 'connection' with effects: action and reaction which
include attraction and repulsion. So I would say effects' rather
than aff'ect [ie stress is on first syllable] but here, as with
everything to do with affect and emotion, common English usage
is not helpful [similarly to the way 'love' in English
translations of the New Testament is used to translate at least
four more precise words of the original Greek].

NB: I don't use the word 'supervene'. To me it always gives the
impression that something like a coat of paint is being referred
to. 'Identity' does for me.

 MP:  The answer is that the brain is structured so that behaviours - 
 potentially a million or more human behaviours of all sorts - can be 
 *stored* within the brain. This storage, using the word in a wide sense, 
 is actually changes to the fine structures within the brain [synapses, 
 dendrite location, tags on DNA, etc] which result in [relatively] 
 discrete, repeatable patterns of neuronal network activity occurring 
 which function as sequences of muscle activation
 
 ...snip.
 
 Behaviours, once learned, become habitual i.e. they are evoked by 
 appropriate circumstances and proceed in the manner learned unless 
 varied by on-going review and adjustment. Where the habitual behavioural 
 response is completely appropriate, we are barely conscious of the 
 activity; we only pay attention to novelties and challenges - be they in 
 the distant environment, our close surroundings, or internal to our own 
 bodies and minds.
 
 DN:  Your account reads quite cogently, and we may well agree to discuss 
 the issues in this way, but crucially ISTM that our accounts are always 
 oriented towards particular explanatory outcomes - which is why one size 
 doesn't fit all.  So let's see if this shoe fits
 
MP2: Well, as someone for whom 'standard' means if the collar 
fits then the cuffs button round my finger tips ...
one size will never 'fit all' but diversity is good in company 
with toleration and healthy scepticism.
I am always keen to point out that we humans are always beset 
with a paradox, which _can_ be seen as a kind of duality. What 
it amounts to is that we live in a real world, but we live by 
means of a description. That is to say, all our knowledge _of_ 
the world is embodied in qualia which are _about_ the world. 
They are our brains' method of accounting for things. Naive 
realism is how we are when we 'mistake' qualia for the world 
they represent. But they exist, and that is a key point. So is 
the fact that, even if the world 'behind' the appearances is not 
actually the world 

Re: How would a computer know if it were conscious?

2007-06-26 Thread John Mikes
On 6/23/07, David Nyman [EMAIL PROTECTED] wrote:

 Hi John

(just your italics paragraphs quoted in this reply; then JM: means present
text):

*DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps
re-phrase this as just: 'how do you know x?'  And then the answers are of
the type 'I just see x, hear x, feel x' and so forth.  IOW, 'knowing x' is
unmediated** - 'objects' like x are just 'embedded' in the structure of the
'knower', and this is recursively related to more inclusive structures
within which the knower and its environment are in turn embedded.
*
JM:  You mean a hallucination of x, when you * 'I just see x, hear x, feel
x' and so forth'*.
is included in your knowledge? or even substitutes for it? Maybe yes...
But then can you differentiate? (or this is no reasonable question?)
*


*((to JM: ...know if you are NOT conscious? Well, you wouldn't.))
DN: Agreed.  If we 'delete the noumenon' we get: 'How would you know if you
are NOT?' or: 'How would you know if you did NOT (know)?'  To which we
might indeed respond: 'You would not know, if you were NOT', or: 'You would
not know, if you did NOT (know)'. *
JM:  The classic question: Am I? and the classical answer:  Who is
asking?
*

*DN: I think we need to distinguish between 'computers' and 'machines'.  I
can see no reason in principle why an artifact could not 'know', and be
motivated by such knowing to interact with the human world: humans are of
course themselves 'natural artifacts'.* itself embedded.

JM: Are you including 'humans' into the machines or the computers? And dogs?
Amoebas?
The main difference I see here is the 'extract' of the human world (or:
world, as humans can interpret what they learned) downsized to our
choice of necessity which WE liked to design into an artifact. (motors,
cellphones, AI, AL). Yes, we (humans etc.) are artefacts but 'use' a lot of
capabilities (mental etc. gadgets) we either don't know at all, or just
accept them as 'being human' (or an extract of human traits as 'being dog')
with no urge to build such into a microwave oven or an AI.
But then we are SSOO smart when we draw conclusions!
*
*DN:
Bruno's approach is to postulate the whole 'ball of wax' as computation, so
that any 'event' whether 'inside' or 'outside' the machine is 'computed'*.
JM:
Bruno is right: accepting that 'any machine' is part of its outside(?)
totality, i.e.  embedded into its ambiance, I would be scared to
differentiate myself. There is no hermetic 'skin' - it is transitional
effects transcending back and forth, we just do not observe those outside
the 'topical boundaries' of our actual observation (model, as I call it).

*DN:*
*The drift of my recent posts has been that even in this account, 'worlds'
can emerge 'orthogonally' to each other, such that from their reciprocal
perspectives, 'events' in their respective worlds will be 'imaginary'.*
JM:
I can't say: I have no idea how the world works, except for that little I
interpreted into my 1st person narrative. I accept maybe-s.
And I have a way to 'express' myself: I use I dunno.

Have fun

John



David



  Dear David.
  do not expect from me the theoretical level of technicality-talk we get
  from Bruno: I talk (and think) common sense (my own) and if the
  theoretical technicalities sound strange, I return to my thinking.
 
  That's what I got, that's what I use (plagiarized from the Hungarian
  commie
  joke: what is the difference between the peoples' democracy and a wife?
  Nothing: that's what we got that's what we love)
 
  When I read your questioning the computer, I realized that you are
  in the ballpark of the AI people (maybe also AL - sorry, Russell)
  who select machine-accessible aspects for comparing.
  You may ask about prejudice, shame (about goofed situations),  humor
  (does a
  computer laugh?)  boredom or preferential topics (you push for an
  astronomical calculation and the computer says: I rather play some Bach
  music now)
  Sexual preference (even disinterestedness is slanted), or laziness.
  If you add untruthfulness in risky situations, you really have a human
  machine
  with consciousness (whatever people say it is - I agree with your
  evading
  that unidentified obsolete noumenon as much as possible).
 
  I found Bruno's post well fitting - if i have some hint what
  ...inner personal or self-referential modality... may mean.
  I could not 'practicalize' it.
  I still frown when abandoning (the meaning of) something but consider
   items as pertaining to it - a rough paraphrasing, I admit.  To what?.
  I don't feel comfortable to borrow math-methods for nonmath explanations
  but that is my deficiency.
 
  Now that we arrived at the question I replied-added (sort of) to Colin's
  question I -
  let me ask it again: how would YOU know if you are conscious?
  (Conscious is more meaningful than cc-ness). Or rather: How would
  you know if you are NOT conscious? Well, you wouldn't. If you can,
  you are conscious.  Computers?
 
  Have a 

Re: How would a computer know if it were conscious?

2007-06-26 Thread David Nyman
On 26/06/07, John Mikes [EMAIL PROTECTED] wrote:

JM:  You mean a hallucination of x, when you * 'I just see x, hear x, feel
x' and so forth' *.  is included in your knowledge? or even substitutes for
it? Maybe yes...

DN:  'I am conscious of knowing x' is distinguishable from 'I know x'.  The
former has already differentiated 'knowing x' and so now 'I know [knowing
x]'.  And so forth.  So knowing in this sense stands for a direct or
unmediated 'self-relation', a species of unity between knower and known -
hence its notorious 'incorrigibility'.

JM:  But then can you differentiate? (or this is no reasonable question?)

DN:  It seems that in the development of the individual at first there is no
such differentiation; then we find that we are 'thrown' directly into a
'world' populated with 'things' and 'other persons'; later, we differentiate
this from a distal 'real world' that putatively co-varies with it.  Now we
are in a position to make a distinction between 'plural' or 'rational' modes
of knowing, and solipsistic or 'crazy' ones.  But then it dawns that it's
*our world* - not the 'real' one, that's the 'hallucination'.  No wonder
we're crazy!  This evolutionarily-directed stance towards what we 'know' is
of course so pervasive that it's only a minority (like the lost souls on
this list!) who harbour any real concern about the precise status of such
correlations.  Hence, I suppose, our continual state of confusion.

JM:  The classic question: Am I? and the classical answer:  Who is
asking?

DN:  Just so. Crazy, like I say.

JM: Are you including 'humans' into the machines or the computers? And dogs?
Amoebas?

DN:  Actually, I just meant to distinguish between 'machines' considered
physically and computational processes.  I really have no idea of course
whether any non-human artefact will ever come to know and act in the sense
that a human does. My point was only to express my logical doubts that it
would ever do so in virtue of its behaving in a way that merely represents
*to us* a process of computation.  However, the more I reason about this the
stranger it gets, so I guess I really 'dunno'.

JM:  Bruno is right: accepting that 'any machine' is part of its outside(?)
totality, i.e.  embedded into its ambiance, I would be scared to
differentiate myself. There is no hermetic 'skin' - it is transitional
effects transcending back and forth, we just do not observe those outside
the 'topical boundaries' of our actual observation (model, as I call it).

DN:  Yes: all is relation (ultimately self-relation, IMO), and 'boundaries'
merely delimit what is 'observable'.  In this context, what do you think
about Colin's TPONOG post?

Regards

David


On 6/23/07, David Nyman [EMAIL PROTECTED] wrote:
 
  Hi John

 (just your Italics par-s quoted in this reply. Then JM: means present
 text)):

 *DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps
 re-phrase this as just: 'how do you know x?'  And then the answers are of
 the type 'I just see x, hear x, feel x' and so forth.  IOW, 'knowing x' is
 unmediated** - 'objects' like x are just 'embedded' in the structure of
 the 'knower', and this is recursively related to more inclusive structures
 within which the knower and its environment are in turn embedded.
 *
 JM:  You mean a hallucination of x, when you * 'I just see x, hear x, feel
 x' and so forth'*.
 is included in your knowledge? or even substitutes for it? Maybe yes...
 But then can you differentiate? (or this is no reasonable question?)
 *


 *((to JM: ...know if you are NOT conscious? Well, you wouldn't.))
 DN: Agreed.  If we 'delete the noumenon' we get: How would you know if
 you are NOT? or: How would you know if you did NOT (know)?.  To which we
 might indeed respond: You would not know, if you were NOT, or: You would
 not know, if you did NOT (know). *
 JM:  The classic question: Am I? and the classical answer:  Who is
 asking?
 *

 *DN: I think we need to distinguish between 'computers' and 'machines'.  I
 can see no reason in principle why an artifact could not 'know', and be
 motivated by such knowing to interact with the human world: humans are of
 course themselves 'natural artifacts'. * itself embedded.

 JM: Are you including 'humans' into the machines or the computers? And
 dogs? Amoebas?
 The main difference I see here is the 'extract' of the human world (or:
 world, as humans can interpret what they learned) downsized to our
 choice of necessity which WE liked to design into an artifact. (motors,
 cellphones, AI, AL). Yes, we (humans etc.) are artefacts but 'use' a lot of
 capabilities (mental etc. gadgets) we either don't know at all, or just
 accept them as 'being human' (or an extract of human traits as 'being dog')
 with no urge to build such into a microwave oven or an AI.
 But then we are SSOO smart when we draw conclusions!
 *
 *DN:
 Bruno's approach is to postulate the whole 'ball of wax' as computation,
 so that any 'event' whether 'inside' or 'outside' the 

Re: How would a computer know if it were conscious?

2007-06-26 Thread Russell Standish

On Mon, Jun 25, 2007 at 10:17:57PM +0100, David Nyman wrote:
 
 Here's what's still not completely clear to me - perhaps you can assist me
 with this.  We don't know *which* set of physical events is in effect
 selected by the functionalist account, even though it may be reasonable to
 believe that there is one.  Given this, it appears that should we be finally
 convinced that only a functional account of 1-person phenomena uniquely
 survives all attempted refutation, we can never in that case provide any
 'distinguished' bottom up physical account of the same phenomena.  IOW we
 would be faced with an irreducibly top-down mode of explanation for
 consciousness, even though there is still an ineliminable implication to
 specific fundamental aspects of the physics in 'instantiating' the bottom-up
 causality.  Does this indeed follow, or am I still garbling something?
 
 David
 

This sounds to me like you're paraphrasing Bruno's programme.

The only snag is how you can eliminate the possibility of a
non-functionalist model also explaining the same set of physical
laws. In fact the 'God did it' model probably indicates this can't be done.


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-26 Thread Russell Standish

On Mon, Jun 25, 2007 at 01:36:56PM +0100, David Nyman wrote:
 
 DN:  Now this seems to me crucial.  When you say that self-awareness emerges
 from the physics, ISTM that this is what I was getting at in the bit you
 didn't comment on directly:
 
  My claim is that if (machines) are (conscious), it couldn't be solely in
 virtue of any 'imaginary computational worlds' imputed to them, but rather
 because they support some unique, distinguished process of *physical*
 emergence that also corresponds to a unique observer-world.

There is, in a sense, a certain arbitrariness in where one draws the
boundaries. But I strongly support the notion that there can be no
consciousness without an environment (aka appearance of a physical
world to the conscious entity). Only if that environment was shared
with our own physical world do we have a possibility of
communication. We would have to acknowledge the same self-other
boundary as the other conscious process.

Furthermore, I would make the stronger claim that self-other boundary
must be such that neither the self nor the environment can be
computable, even if together they are. We've had this discussion
before on this list.

Gotta run now - my train's pulling in.
-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-26 Thread David Nyman
On 26/06/07, Mark Peaty [EMAIL PROTECTED] wrote:

MP:  Your second may  shoot me if I waffle..

DN:  No, he'll just tickle you until you become more coherent ;)

MP:  The main reason for the word 'challenge' above is
due to the way you were using the word 'sensing' for physical
and chemical interactions.

DN:  Yes, it's difficult to find terms that don't mislead somebody by
unintended implication.  Let's say that I believe it helps to reduce
physical and chemical interactions to the logic of 'self-relativity'.
Why?  Because when we conceptually isolate 'entities' like molecules, atoms,
or even quarks or super-strings, the semantics we employ implicitly depend
on this 'primitive' logical concept.  A simple notion that embodies this is
a 'modulated continuum': continuum, because it must be seamless and
symmetrical ( i.e. no 'voids'); modulated, because nonetheless this symmetry
must somehow be 'broken'.  If such 'broken seamlessness' has a flavour of
paradox, there's something 'strangely' unavoidable in that. But ISTM that
most aspects of our ontology can be intuited by building on (something like)
the self-participation of such a modulated continuum.

For me, the natural term for this participatory, self-directed,
symmetry-breaking is 'self-relativity'.  The cool thing about this, is that
narratives rooted in such participatory self-relation lend themselves quite
interchangeably to 0, 1, or 3-person points-of-view.  IOW, whether you want
to narrate in terms of (physical) 'action', or (personal) 'sensing', or even
(mathematical) 'operations', all can be intuited as built on self-relation.
And the distinctive differences between such narratives are then reciprocal
perspectives on that self-relativity.  This is why I used the term
'sense-action' as a 'bridge' between the 'physical' and 'personal'
reciprocals of self-relation. The empirical 'laws' we extract from the
consistent features of these relations can in turn be intuited as inheriting
from the self-directedness of the original symmetry-breaking: this too, will
have 0, 1, and 3-person reciprocity.

MP:  OK, my 'the brain makes muscles move' is basically a
bulwark against 'panpsychism' or any other forms of
mystery-making. The term I like is 'identity theory' but like
most labels it usually seems to provoke unproductive
digressions.

DN:  Now does it seem possible to you that your notion of 'identity' could
be accomplished via 'sense-action' reciprocity?  IOW, that 'mind' and
'brain' are reciprocal perspectives on the same structure of
self-relations?  Panpsychism?  Well, brain's perspective is 'psych'; psych's
perspective is 'brain'. The 'pan' then depends on how you localise 'psych',
and that is a horse of a very different colour.  ISTM, very briefly, that
'psych', in the operational sense of a highly-specific set of
biospherically-evolved mechanisms for dealing with the environment, is
anything but 'pan'.  How and 'where' does it then arise?  Well, we know from
this list alone that theories abound, but nobody knows.  This of course
won't restrain my speculations!

My take would be along the lines that the brain 'hosts' (deliberate
ambiguity) 'transduction' that 'renders' information spectrally on a set of
virtual 'surfaces'.  Metaphorically it's a bit like the telly, (very)
loosely, in that the transducer's job is to turn 'signal' into 'message'.
But of course there's no-one watching: the 'surfaces' *are* our 'personal
worlds'.  Such surfaces are the 'medium' of the 1-personal, and the
'messages' it mediates are '3-personal' (always remembering that the medium
*is* the message).  Also - crucially - the 'surfaces' are *interactive*:
messages self-relate, recombine, get re-transduced, and signal flows back
into the environment.

Now, how the 'transduction-signal' relationship emerges out of computation,
EM, chemistry, Bose-Einstein condensate, or GOK* what, I dunno.  But if we
contemplate this participatively from a self-relating perspective, then we
can narrate the story from either 'action' or 'sense' perspectives
interchangeably.  IOW, things happen in (something like) the 'action'
narrative, participatively it feels (something like) the 'sense' narrative,
and its 'intentionality' is (something like) self-directedness.  And all of
this depends ultimately on self-relativity.

(* A nurse I used to know told me that doctors would cryptically mark the
notes of the most intractable diagnoses: GOK - God Only Knows)

MP:  That is to say, all our knowledge _of_
the world is embodied in qualia which are _about_ the world.
They are our brains' method of accounting for things. Naive
realism is how we are when we 'mistake' qualia for the world
they represent.

DN:  OK, if one's self-relating emerges 1-personally as spectrally-rendered
'surfaces', does this carry for you any taste, sniff, glimmer, rustle, or
tingle of 'qualia'?  Of course, there's nothing 'external' to compare to the
1-personal, even though 'spectra' does carry an implication of relative

Re: How would a computer know if it were conscious?

2007-06-26 Thread David Nyman
On 26/06/07, Russell Standish [EMAIL PROTECTED] wrote:

RS:  This sounds to me like you're paraphrasing Bruno's programme.

DN:  Yes, but I only realised this after I'd painfully thunk myself into it
during my exchange with Brent.  But I think I learned something in the
process, even tho' I'm not exactly sure what.

RS:  The only snag is how you can eliminate the possibility of a
non-functionalist model also explaining the same set of physical laws.

DN:  I suppose so.

RS:  In fact the 'God did it' model probably indicates this can't be done.

DN:  But would having the possibility of two entirely different causal
accounts of the same thing be a bug or a feature?



 On Mon, Jun 25, 2007 at 10:17:57PM +0100, David Nyman wrote:
 
  Here's what's still not completely clear to me - perhaps you can assist
 me
  with this.  We don't know *which* set of physical events is in effect
  selected by the functionalist account, even though it may be reasonable
 to
  believe that there is one.  Given this, it appears that should we be
 finally
  convinced that only a functional account of 1-person phenomena uniquely
  survives all attempted refutation, we can never in that case provide any
  'distinguished' bottom up physical account of the same phenomena.  IOW
 we
  would be faced with an irreducibly top-down mode of explanation for
  consciousness, even though there is still an ineliminable implication to
  specific fundamental aspects of the physics in 'instantiating' the
 bottom-up
  causality.  Does this indeed follow, or am I still garbling something?
 
  David
 

 This sounds to me like you're paraphrasing Bruno's programme.

 The only snag is how you can eliminate the possibility of a
 non-functionalist model also explaining the same set of physical
 laws. In fact the 'God did it' model probably indicates this can't be
 done.


 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australiahttp://www.hpcoders.com.au

 

 





Re: How would a computer know if it were conscious?

2007-06-26 Thread David Nyman
On 26/06/07, Russell Standish [EMAIL PROTECTED] wrote:

RS:  There is, in a sense, a certain arbitrariness in where one draws the
boundaries. But I strongly support the notion that there can be no
consciousness without an environment (aka appearance of a physical world to
the conscious entity). Only if that environment was shared
with our own physical world do we have a possibility of communication. We
would have to acknowledge the same self-other boundary as the other
conscious process.

DN:  Yes, and AFAICS this mutual self-other boundary would emerge as an
aspect of the 'selection' of the (putatively) conscious functional
interpretation that is consistent with our interactions with the physical
instantiation. This would presumably remove or reduce any arbitrariness.

RS:  Furthermore, I would make the stronger claim that self-other boundary
must be such that neither the self nor the environment can be
computable, even if together they are. We've had this discussion
before on this list.

DN:  I'll try to find it - any idea where?


 On Mon, Jun 25, 2007 at 01:36:56PM +0100, David Nyman wrote:
 
  DN:  Now this seems to me crucial.  When you say that self-awareness
 emerges
  from the physics, ISTM that this is what I was getting at in the bit you
  didn't comment on directly:
 
  My claim is that if (machines) are (conscious), it couldn't be
 solely in
  virtue of any 'imaginary computational worlds' imputed to them, but
 rather
  because they support some unique, distinguished process of *physical*
  emergence that also corresponds to a unique observer-world.

 There is, in a sense, a certain arbitrariness in where one draws the
 boundaries. But I strongly support the notion that there can be no
 consciousness without an environment (aka appearance of a physical
 world to the conscious entity). Only if that environment was shared
 with our own physical world do we have a possibility of
 communication. We would have to acknowledge the same self-other
 boundary as the other conscious process.

 Furthermore, I would make the stronger claim that self-other boundary
 must be such that neither the self nor the environment can be
 computable, even if together they are. We've had this discussion
 before on this list.

 Gotta run now - my train's pulling in.
 --


 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australia  http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-26 Thread Russell Standish

On Wed, Jun 27, 2007 at 03:03:35AM +0100, David Nyman wrote:
 RS:  Furthermore, I would make the stronger claim that self-other boundary
 must be such that neither the self nor the environment can be
 computable, even if together they are. We've had this discussion
 before on this list.
 
 DN:  I'll try to find it - any idea where?
 

It's a bit scattered, unfortunately.

Search the everything list archive using the terms uncomputable,
randomness or random oracle. You will find bits and pieces on
this. Posters are typically Bruno, Stathis and myself.

I have a bit of stuff in my book on randomness (in the Evolution
chapter, and in the Consciousness chapter), but don't make a big deal
out of this. It is still all somewhat debatable.
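
One toy way to see that such a split is even possible - the whole computable,
neither part so - is the following sketch (an illustration of the logical
point only, not of the self/environment split itself; a pseudo-random
bitstring stands in for a genuinely uncomputable one, which of course no
program can produce):

    import random

    random.seed(42)                   # reproducible stand-in for an uncomputable source

    whole = [i % 2 for i in range(32)]                    # the combined system: a trivially computable pattern
    self_part = [random.randint(0, 1) for _ in whole]     # stand-in for an incompressible string
    env_part = [s ^ w for s, w in zip(self_part, whole)]  # the remainder the whole forces: equally incompressible

    # each part on its own carries all the 'randomness'; XORed together
    # they give back the computable whole
    assert [s ^ e for s, e in zip(self_part, env_part)] == whole

With an algorithmically random (hence uncomputable) source in place of the
pseudo-random generator, neither part is computable, yet the whole remains
as simple as you like.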

Cheers

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia  http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-25 Thread Russell Standish

On Sun, Jun 24, 2007 at 08:20:49PM +0100, David Nyman wrote:
 RS:  In some Platonic sense, all possible observers are already
 out there, but by physically instantiating it in our world, we are in
 effect opening up a communication channel between ourselves and the
 new consciousness.
 
 I think I must be missing something profound in your intended meanings of:
 
 1) 'out there'
 2) 'physically instantiating'
 3) 'our world'
 
 My current 'picture' of it is as follows.  The 'Platonic sense' I assume
 equates to the 'bit-string plenitude' (which is differentiable from 'no
 information' only by internal observers, like the Library of Babel - a
 beautiful idea BTW).  

It's actually more 'out there' in the Multiverse, rather than the
Plenitude, as the Multiverse is a necessary prerequisite of
observation. It's at least one level of emergence up from the bitstring
plenitude. 

 But I'm assuming a 'hierarchy' of recursive
 computational emergence through bits up through, say, strings, quarks,
 atoms, molecules, etc - in other words what is perceived as matter-energy by
 observers.  I then assume that both 'physical objects' and any correlated
 observers emerge from this matter-energy level, and that this co-emergence
 accomplishes the 'physical instantiation'.  

Emergence is entirely a phenomenon of the observer. No observer, no
emergence. So I wouldn't really be calling it co-emergence.

What must emerge from the physics is the perception of self, or
self-awareness. Whether this can be identified with consciousness is
rather a moot point, perhaps to be settled much later. Someone like
Hofstadter with his strange loops would probably argue in favour of
this however. So would Dennett, if I read him correctly.

 IOW, the observer is the
 1-person view, and the physical behaviour the 3-person view, of the same
 underlying complex emergent - they're different descriptions of the same
 events.

3rd person behaviour is that which is shared by all possible
observers. There are also first person plural phenomena, sharable
between multiple observers, but not necessarily all. Science, as we
practise it today, is strictly first person plural, though in
principle at least shared by all humans regardless of culture. Some of
physics, like the quantum laws and the conservation laws that arise from
point-of-view invariance, is however a strong candidate for being
called 3rd person.

 
 If this is so, then as you say, the opening of the 'communication channel'
 would be a matter of establishing the means and modes of interaction with
 any new consciousness, because the same seamless underlying causal sequence
 unites observer-world and physical-world: again, different descriptions,
 same events.
 

OK - it seems you're talking about supervenience here.

 If the above is accepted (but I'm beginning to suspect there's something
 deeply wrong with it), then the 'stability' of the world of the observer
 should equate to the 'stability' of the physical events to which it is
 linked through *identity*. 

This is where I get lost. Stability of the world has to do with
the necessary robustness property of observers, as I argue in section
4.2 of my book. I note also in that section that alternative proposals
exist as well.

 Now here's what puzzles me.  ISTM that the
 imputation of 'computation' to the physical computer is only through the
 systematic correspondence of certain stable aspects of its (principally)
 electronic behaviour to computational elements: numbers,
 mathematical-logical operators, etc.  The problem is in the terms
 'imputation' and 'correspondence': this is surely merely a *way of speaking*
 about the physical events in the computer, an arbitrary ascription, from an
 infinite possible set, of externally-established semantics to the intrinsic
 physical syntactics.


The attribution of computation is performed by the observer (otherwise
known as the user) of the computer, as you say. The attribution of consciousness in
any process can only be done by the conscious observer
erself. Attribution of consciousness in any non-self process can never
be definite, although it is typically useful to attribute a mind to
other processes in the environment to help reason about them (other
people, obviously, but also many of the more complicated animals, and
perhaps also to computers when they achieve a certain level of
sophistication). 

But by accepting functionalism, we can even make stronger assertions -
processes that sufficiently accurately mimic conscious ones must
therefore be conscious.
 
 Consequently, ISTM that the emergence of observer-worlds has to be
 correlated (somehow) - one-to-one, or isomorphically - with corresponding
 'physical' events: IOW these events, with their 'dual description',
 constitute a single 'distinguished' *causal* sequence.  By contrast, *any*
 of the myriad 'computational worlds' that could be ascribed to the same
 events must remain - to the computer, rather than the programmer - only
 arbitrary or 

Re: How would a computer know if it were conscious?

2007-06-25 Thread David Nyman
On 25/06/07, Russell Standish [EMAIL PROTECTED] wrote:
RS: It's actually more 'out there' in the Multiverse, rather than the
Plenitude, as the Multiverse is a necessary prerequisite of
observation. It's at least one level of emergence up from the bitstring
plenitude.

DN:  OK

RS:  Emergence is entirely a phenomenon of the observer. No observer, no
emergence. So I wouldn't really be calling it co-emergence.

I just meant that the observer's world, taken with the 'physical' phenomena
correlated with it, could then be said to 'co-emerge'.  It was this
relationship I was emphasising (but see below).

RS:  What must emerge from the physics is the perception of self, or
self-awareness. Whether this can be identified with consciousness is
rather a moot point, perhaps to be settled much later. Someone like
Hofstadter with his strange loops would probably argue in favour of
this however. So would Dennett, if I read him correctly.

DN:  Now this seems to me crucial.  When you say that self-awareness emerges
from the physics, ISTM that this is what I was getting at in the bit you
didn't comment on directly:

My claim is that if (machines) are (conscious), it couldn't be solely in
virtue of any 'imaginary computational worlds' imputed to them, but rather
because they support some unique, distinguished process of *physical*
emergence that also corresponds to a unique observer-world.

However, perhaps what is significant is the distinction you make above
between 'self-awareness', and 'consciousness', which is what I'd been trying
to do in previous posts in my unintelligible way.  IOW, that some primitive
or 'distinguished' form of self-relation is associated with the physics, and
that the emergence of 'conscious' observer worlds then equates to a
hierarchy of emergence supervening on that.  To use a common analogy,
observer worlds would emerge by following something like the 'distinguished'
explanatory trajectory taken in the emergence of 'life' from 'dead matter'.
But, if we accept functionalism, we seem to have a horse of another, and
most peculiar, colour.  It seems, since there is no unique 'computational'
interpretation of the physical level of behaviour, that there can likewise
be no unique (and hence 'distinguished') set of self-relations associated
with any physical events that would in turn evoke a unique observer world.

(But a glimmer of comprehension may be igniting in the dim recesses of my
(putative) mind.)  Perhaps when you say:

RS:  The conscious entity that the computer implements would know about
it. It is not imaginary to itself. And by choosing to interpret the
computer's program in that way, rather than say a tortured backgammon
playing program, we open a channel of communication with the
consciousness it implements.

DN:  ...you mean that if functionalism is true, then though any of the
myriad interpretations of the physics might possibly evoke an observer world
(although presumably most would be incoherent), only interpretations we are
able to 'interact with', precisely because of the consistency of their
externalised behaviour with us and our environment, are relevant (causally
or otherwise) *to us*.  And if this can be shown to converge on a *unique*
such interpretation for a given physical system, in effect this would then
satisfy my criterion of supervening on *some* distinguishable or unique set
of physical relations, even if we couldn't say what it was. So this, then,
would be the 'other mind' - and from this perspective, all the other
interpretations are 'imaginary' *for us*.

(As an aside, this reminds me of a story about Nixon's press secretary, Ron
Ziegler. The White House Press Corps, having just listened in exasperated
disbelief to the nth version of the 'official statement' on Watergate,
protested: 'But Ron, what about all the other statements?'  Ziegler replied:
'This is the operative statement; all the other statements are
inoperative.'  Perhaps the 'interaction model' we choose in effect selects
the corresponding 'operative consciousness' in terms of our world; all the
others are 'inoperative'.)

This has a very 'strange' feel to it (but perhaps this is appropriate).  It
seems to follow that there can still in principle be a 'bridge' between a
functional (computationalist) account of a 'conscious' machine, and a
'physicalist' one - i.e. that either explanatory mode could account for the
same phenomena.  Perhaps this is what you mean by 'downwards' as well as
'upwards' causation?  So it then becomes an empirical project - i.e. that it
will eventually turn out - or not - that computationalism emerges as the
survivor in the competition to be the most Occamishly cogent account of the
relation between conscious phenomena and physics, and between a 'conscious
being' and its environment.  Olympia and Klara type arguments, if I've
understood them, seem to exclude any *fixed* relationship between 'physical'
events and computational ones, but this appears not to be required by a

Re: Re: How would a computer know if it were conscious?

2007-06-25 Thread Mark Peaty

David,
We have reached some understanding in the 'asifism' thread, and I would
summarise that, tilted towards the context of this line of this thread,
more or less as follows.

Existence -
*   The irreducible primitive is existence per se;
*   that we can know about this implies differentiation in and of
that which exists;
*   that we can recognise both invariance and changes and
participate in what goes on implies _connection_.

I am sure there must be mathematical/logical formalism which
could render that with exquisite clarity, but I don't know how
to do it. Plain-English is what I have to settle for [and aspire
to :-]

There are a couple of issues that won't go away though: our
experience is always paradoxical, and we will always have to
struggle to communicate about it.

Paradox or illusion -
I think people use the word 'illusion' about our subjective
experience of being here now because they don't want to see it
as paradoxical. However AFAICS, the recursive self-referencing
entailed in being aware of being here now guarantees that what
we are aware of at any given moment, i.e. what we can attend to,
can never be the totality of what is going on in our brains. In
terms of mind, some of it - indeed probably the majority - is
unconscious. We normally are not aware of this. [Duh, that is
what unconscious means Mark!] But sometimes we can become aware 
[acutely!]
of having _just been_ operating unconsciously and this is 
salutary, once the sickening embarrassment subsides anyway :-0

For those of us who have become familiar with this issue it is
no hardship but there are many who resist the idea. The least 
mortifying example that is _easy to see in oneself_ is what 
happens when we look for something and then find it: before we 
find it the thing is 'not there' for us, except that we might 
believe that it is really. Then we find it; the thing just pops 
into view! As mundane as mould on cheese, but bloody marvellous 
as soon as you start thinking about how it all works!

But I have to *challenge you to clarify* whether what I write 
next really ties in completely with what you are thinking.
I'll try it in point form for brevity's sake.

Behaviour and consciousness -
*   Consciousness is something we know personally, and through 
discussion with others we come to believe that their experience 
is very similar.
*   Good scientific evidence and moderately sceptical common sense 
tell us that this experience is _intimately and exclusively_ bound 
up with the activity of our brains. Ie the experience - the 
conscious awareness of the moment as well as the simultaneous or 
preliminary non-conscious activity - is basically what the brain 
does, give or take a whole range of hormonal controls of the 
rest of the organism. This can be summarised as 'The mind is 
what the brain does', at least insofar as 'consciousness' is 
concerned, and the brain does it all in order to make the body's 
muscles move in the right way.
*   People's misunderstanding about how we are conscious seems to 
centre around how mere meat could 'have' this experience.
*   The answer is that the brain is structured so that behaviours 
- potentially a million or more human behaviours of all sorts - 
can be *stored* within the brain. This storage, using the word 
in a wide sense, is actually changes to the fine structures 
within the brain [synapses, dendrite location, tags on DNA, etc] 
which result in [relatively] discrete, repeatable patterns of 
neuronal network activity occurring which function as sequences 
of muscle activation.
*   For practical purposes behaviours usually involve muscles 
moving body parts appropriately. [If muscles don't move, nobody 
else can be sure if anything is going on]. However, within the 
human brain, learning also entails the formation of neuronal 
network activity patterns which become surrogates for or 
alternatives to overtly visible behaviours. Likewise the 
completely internal detection of such surrogate activities 
becomes a kind of surrogate for perception of one's own overt 
behaviours or for perception of external world activities which 
would result from one's own actions.
*   Useful and effective response and adaptation to the world 
requires the review of appropriateness of one's overt behaviour 
and to be able to adjust or completely change one's behaviours 
both at very short notice and over arbitrarily long periods 
depending on the duration of the effects of one's actions. This 
entails responding to one's own behaviours over whatever time 
scale is necessary.
*   Behaviours, once learned, become habitual i.e. they are evoked 
by appropriate circumstances and proceed in the manner learned 
unless varied by on-going review and adjustment. Where the 
habitual behavioural response is completely appropriate, we are 
barely conscious of the activity; we only pay attention to 
novelties and challenges - be they in the distant environment, 
our close surroundings, or internal to our own bodies and minds.

Re: How would a computer know if it were conscious?

2007-06-25 Thread Brent Meeker

David Nyman wrote:
 On 25/06/07, *Russell Standish* [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
...
 RS:  The conscious entity that the computer implements would know about
 it. It is not imaginary to itself. And by choosing to interpret the
 computer's program in that way, rather than say a tortured backgammon
 playing program, we open a channel of communication with the
 consciousness it implements.
 
 DN:  ...you mean that if functionalism is true, then though any of 
 the myriad interpretations of the physics might possibly evoke an 
 observer world (although presumably most would be incoherent), only 
 interpretations we are able to 'interact with', precisely because of the 
 consistency of their externalised behaviour with us and our environment, 
 are relevant (causally or otherwise) *to us*.  And if this can be shown 
 to converge on a *unique* such interpretation for a given physical 
 system, in effect this would then satisfy my criterion of supervening on 
 *some* distinguishable or unique set of physical relations, even if we 
 couldn't say what it was. So this, then, would be the 'other mind' - and 
 from this perspective, all the other interpretations are 'imaginary' 
 *for us*.

If I understand you, I would agree with the clarification that this convergence 
has been performed by evolution; so that for us it is for the most part 
hardwired at birth.  And this hardwired interpretation of the world is 
something that co-evolved with sensory and manipulative organs.

Brent Meeker





Re: How would a computer know if it were conscious?

2007-06-25 Thread David Nyman
On 25/06/07, Brent Meeker [EMAIL PROTECTED] wrote:

BM:  If I understand you, I would agree with the clarification that this
convergence has been performed by evolution; so that for us it is for the
most part hardwired at birth.  And this hardwired interpretation of the
world is something that co-evolved with sensory and manipulative organs.

DN:  Yes, in the biosphere, the physical structures and capabilities on
which behaviours supervene must indeed be presumed to be products of
evolution.  Then if the functionalist account is correct, this in effect
'selects' the unique corresponding interpretation from the infinite set
attributable in principle to the physics of such structures, relegating all
the others to 'imaginary' status at the level of this account of physical
evolution.

This doesn't AFAICS present any knock-down proof of functionalism as the
correct account of consciousness, which presumably remains an open empirical
question.  Some quite different 'emergence paradigm' for consciousness -
which may or may not entail a unique and distinguishable bottom-up
correlation between physical and 1-person events - may win out; or we may
never know.  But in the case that functionalism pans out, a type of
correlation with physical causality seems at least comprehensible to me now,
as far as I've been able to think it through.

Here's what's still not completely clear to me - perhaps you can assist me
with this.  We don't know *which* set of physical events is in effect
selected by the functionalist account, even though it may be reasonable to
believe that there is one.  Given this, it appears that should we be finally
convinced that only a functional account of 1-person phenomena uniquely
survives all attempted refutation, we can never in that case provide any
'distinguished' bottom up physical account of the same phenomena.  IOW we
would be faced with an irreducibly top-down mode of explanation for
consciousness, even though there is still an ineliminable implication to
specific fundamental aspects of the physics in 'instantiating' the bottom-up
causality.  Does this indeed follow, or am I still garbling something?

David


 David Nyman wrote:
  On 25/06/07, *Russell Standish* [EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED] wrote:
 ...
  RS:  The conscious entity that the computer implements would know about
  it. It is not imaginary to itself. And by choosing to interpret the
  computer's program in that way, rather than say a tortured backgammon
  playing program, we open a channel of communication with the
  consciousness it implements.
 
  DN:  ...you mean that if functionalism is true, then though any of
  the myriad interpretations of the physics might possibly evoke an
  observer world (although presumably most would be incoherent), only
  interpretations we are able to 'interact with', precisely because of the
  consistency of their externalised behaviour with us and our environment,
  are relevant (causally or otherwise) *to us*.  And if this can be shown
  to converge on a *unique* such interpretation for a given physical
  system, in effect this would then satisfy my criterion of supervening on
  *some* distinguishable or unique set of physical relations, even if we
  couldn't say what it was. So this, then, would be the 'other mind' - and
  from this perspective, all the other interpretations are 'imaginary'
  *for us*.

 If I understand you, I would agree with the clarification that this
 convergence has been performed by evolution; so that for us it is for the
 most part hardwired at birth.  And this hardwired interpretation of the
 world is something that co-evolved with sensory and manipulative organs.

 Brent Meeker


 





Re: Re: How would a computer know if it were conscious?

2007-06-25 Thread David Nyman
Mark:

Accepting broadly your summary up to this point...

MP:  But I have to *challenge you to clarify* whether what I write
next really ties in completely with what you are thinking.

DN:  My seconds will call on you!

MP:  Consciousness is something we know personally, and through
discussion with others we come to believe that their experience
is very similar.

DN:  OK, but if you push me, I would say that we 'emerge' into a personal
world, and through behavioural exchange with it, come to act consistently as
if this constitutes an 'external' environment including a community of
similar worlds. For a nascent individual, such a personal world is initially
'bootstrapped' out of the environment, and incrementally comes to
incorporate communally-established recognition and explanatory consistencies
that can also be extrapolated to embrace a wider context beyond merely
'personal' worlds.

MP:  This can be summarised as 'The mind is
what the brain does', at least insofar as 'consciousness' is
concerned, and the brain does it all in order to make the body's
muscles move in the right way.

DN:  I would say that 'minds' and 'brains' are - in some as yet
not-fully-explicated way - parallel accounts of a seamless causal network
embracing individuals and their environment.  Depending on how this is
schematised, it may or may not be possible to fully correlate
top-down-personal and bottom-up-physical accounts.  Nonetheless, ISTM more
natural to ascribe intentionality to the individual in terms of the
environment, rather than 'the brain getting the body's muscles to move' -
i.e. 'I move my hand' runs in parallel with a physical account involving the
biology and physics of brain and body, but both ultimately supervene on a
common 'primitive' explanatory base.

MP:  The answer is that the brain is structured so that behaviours -
potentially a million or more human behaviours of all sorts - can be
*stored* within the brain. This storage, using the word in a wide sense, is
actually changes to the fine structures within the brain [synapses, dendrite
location, tags on DNA, etc] which result in [relatively] discrete,
repeatable patterns of neuronal network activity occurring which function as
sequences of muscle activation.

...snip.

Behaviours, once learned, become habitual i.e. they are evoked by
appropriate circumstances and proceed in the manner learned unless varied by
on-going review and adjustment. Where the habitual behavioural response is
completely appropriate, we are barely conscious of the activity; we only pay
attention to novelties and challenges - be they in the distant environment,
our close surroundings, or internal to our own bodies and minds.

DN:  Your account reads quite cogently, and we may well agree to discuss the
issues in this way, but crucially ISTM that our accounts are always oriented
towards particular explanatory outcomes - which is why one size doesn't fit
all.  So let's see if this shoe fits...

MP:  I have put this description in terms of 'behaviours' because I
am practising how to deal with the jibes and stonewalling of
someone who countenances only 'behavioural analysis' 
descriptions

DN:  Ahah  I confess I've had a little peek at your dialogues with a
certain individual on another forum, and I think I discern your purpose and
your problem.  All I can say is that we conduct the dialogue a little less
fractiously on this list.   For what it's worth, I probably wouldn't expend
much more effort on someone with so entrenched a position and so vitriolic a
vocabulary.  If you set your mind to it, you can describe anything in
'behavioural' or alternatively in 'structural' terms - A series or B series
- 'block' or 'dynamic' - but the form by itself doesn't necessarily explain
more one way or the other.  And as far as 'stimulus-response' goes, I
suppose I could say that when I 'stimulate' the gas pedal, my car 'responds'
by accelerating, but that doesn't by itself provide a very productive theory
of automotive behaviour.  But, if you have fresh energy for the
fray.

Best of luck

David



 David,
 We have reached some
 understanding in the 'asifism' thread, and I would summarise
 that, tilted towards the context of this line of this thread,
 more or less as
 follows.

 Existence -
 *   The irreducible primitive is existence per se;
 *   that we can know about this implies differentiation in and of
 that which exists;
 *   that we can recognise both invariance and changes and
 participate in what goes on implies _connection_.

 I am sure there must be mathematical/logical formalism which
 could render that with exquisite clarity, but I don't know how
 to do it. Plain-English is what I have to settle for [and aspire
 to :-]

 There are a couple of issues that won't go away though: our
 experience is always paradoxical, and we will always have to
 struggle to communicate about it.

 Paradox or illusion -
 I think people use the word 

Re: How would a computer know if it were conscious?

2007-06-24 Thread David Nyman
On 23/06/07, Russell Standish [EMAIL PROTECTED] wrote:

RS:  Perhaps you are one of those rare souls with a foot in
each camp. That could be very productive!

I hope so!  Let's see...

RS:  This last post is perfectly lucid to me.

Phew!!  Well, that's a good start.

RS:  I hope I've answered it
adequately.

Your answer is very interesting - not quite what I expected:

RS:  In some Platonic sense, all possible observers are already
out there, but by physically instantiating it in our world, we are in
effect opening up a communication channel between ourselves and the
new consciousness.

I think I must be missing something profound in your intended meanings of:

1) 'out there'
2) 'physically instantiating'
3) 'our world'

My current 'picture' of it is as follows.  The 'Platonic sense' I assume
equates to the 'bit-string plenitude' (which is differentiable from 'no
information' only by internal observers, like the Library of Babel - a
beautiful idea BTW).  But I'm assuming a 'hierarchy' of recursive
computational emergence through bits up through, say, strings, quarks,
atoms, molecules, etc - in other words what is perceived as matter-energy by
observers.  I then assume that both 'physical objects' and any correlated
observers emerge from this matter-energy level, and that this co-emergence
accomplishes the 'physical instantiation'.  IOW, the observer is the
1-person view, and the physical behaviour the 3-person view, of the same
underlying complex emergent - they're different descriptions of the same
events.

If this is so, then as you say, the opening of the 'communication channel'
would be a matter of establishing the means and modes of interaction with
any new consciousness, because the same seamless underlying causal sequence
unites observer-world and physical-world: again, different descriptions,
same events.

If the above is accepted (but I'm beginning to suspect there's something
deeply wrong with it), then the 'stability' of the world of the observer
should equate to the 'stability' of the physical events to which it is
linked through *identity*.  Now here's what puzzles me.  ISTM that the
imputation of 'computation' to the physical computer is only through the
systematic correspondence of certain stable aspects of its (principally)
electronic behaviour to computational elements: numbers,
mathematical-logical operators, etc.  The problem is in the terms
'imputation' and 'correspondence': this is surely merely a *way of speaking*
about the physical events in the computer, an arbitrary ascription, from an
infinite possible set, of externally-established semantics to the intrinsic
physical syntactics.
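
To make that point concrete, here is a minimal sketch in Python (the
'readings' and both encodings are invented for illustration, and nothing is
claimed about real hardware): the same physical state sequence yields
different 'computed' values depending entirely on which externally chosen
semantics we lay over it.

    # the 'syntactics': a bare sequence of voltage-like readings
    readings = [0.1, 4.9, 4.8, 0.2, 5.0, 0.0, 4.9, 4.9]

    # one ascription: threshold the readings into binary digits
    bits = [1 if v > 2.5 else 0 for v in readings]     # [0, 1, 1, 0, 1, 0, 1, 1]

    # semantics A: read the bits as an unsigned binary integer
    as_unsigned = sum(b << i for i, b in enumerate(reversed(bits)))   # 107

    # semantics B: read the very same bits as a Gray code
    as_gray = 0
    for b in bits:
        as_gray = (as_gray << 1) | (b ^ (as_gray & 1))  # decodes to 77

The physics is identical in both cases; only the imputed correspondence
differs, which is all the 'way of speaking' amounts to.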

Consequently, ISTM that the emergence of observer-worlds has to be
correlated (somehow) - one-to-one, or isomorphically - with corresponding
'physical' events: IOW these events, with their 'dual description',
constitute a single 'distinguished' *causal* sequence.  By contrast, *any*
of the myriad 'computational worlds' that could be ascribed to the same
events must remain - to the computer, rather than the programmer - only
arbitrary or 'imaginary' ones.  This is why I described them as 'nested' -
perhaps 'orthogonal' or 'imaginary' are better: they may - 'platonically' -
exist somewhere in the plenitude, but causally disconnected from the
physical world in which the computer participates. The computer doesn't
'know' anything about them.  Consequently, how could they possess any
'communication channel' to the computer's - and our - world 'out there'?

Of course I'm not claiming by this that machines couldn't be conscious.  My
claim is rather that if they are, it couldn't be solely in virtue of any
'imaginary computational worlds' imputed to them, but rather because they
support some unique, distinguished process of *physical* emergence that also
corresponds to a unique observer-world: and of course, mutatis mutandis,
this must also apply to the 'mind-brain' relationship.

If I'm wrong (as no doubt I am), ISTM I must have erred in some step or
other of my logic above.  How do I debug it?

David



 On Sat, Jun 23, 2007 at 03:58:39PM +0100, David Nyman wrote:
  On 23/06/07, Russell Standish [EMAIL PROTECTED] wrote:
 
  RS: I don't think I ever really found myself in
  disagreement with you. Rather, what is happening is symptomatic of us
  trying to reach across the divide of C. P. Snow's two cultures. You are
  obviously comfortable with the world of literary criticism, and your
  style of writing reflects this. The trouble is that to someone brought
  up on a diet of scientific and technical writing, the literary paper
  may as well be written in ancient greek. Gibberish doesn't mean
  rubbish or nonsense, just unintelligible.
 
  DN: It's interesting that you should perceive it in this way: I hadn't
  thought about it like this, but I suspect you're not wrong.  I haven't
  consumed very much of your 'diet', and I have indeed read quite a lot of
  stuff in the style you refer to, although I often find it 

Re: How would a computer know if it were conscious?

2007-06-24 Thread Brent Meeker

David Nyman wrote:
 On 23/06/07, *Russell Standish* [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 RS:  Perhaps you are one of those rare souls with a foot in
  each camp. That could be very productive!
 
 I hope so!  Let's see...
 
 RS:  This last post is perfectly lucid to me.
 
 Phew!!  Well, that's a good start.
 
 RS:  I hope I've answered it
 adequately.
 
 Your answer is very interesting - not quite what I expected:
 
 RS:  In some Platonic sense, all possible observers are already
 out there, but by physically instantiating it in our world, we are in
 effect opening up a communication channel between ourselves and the
 new consciousness.
 
 I think I must be missing something profound in your intended meanings of:
 
 1) 'out there'
 2) 'physically instantiating'
 3) 'our world'
 
 My current 'picture' of it is as follows.  The 'Platonic sense' I assume 
 equates to the 'bit-string plenitude' (which is differentiable from 'no 
 information' only by internal observers, like the Library of Babel - a 
 beautiful idea BTW).  But I'm assuming a 'hierarchy' of recursive 
 computational emergence through bits up through, say, strings, quarks, 
 atoms, molecules, etc - in other words what is perceived as 
 matter-energy by observers.  I then assume that both 'physical objects' 
 and any correlated observers emerge from this matter-energy level, and 
 that this co-emergence accomplishes the 'physical instantiation'.  IOW, 
 the observer is the 1-person view, and the physical behaviour the 
 3-person view, of the same underlying complex emergent - they're 
 different descriptions of the same events.
 
 If this is so, then as you say, the opening of the 'communication 
 channel' would be a matter of establishing the means and modes of 
 interaction with any new consciousness, because the same seamless 
 underlying causal sequence unites observer-world and physical-world: 
 again, different descriptions, same events.
 
 If the above is accepted (but I'm beginning to suspect there's something 
 deeply wrong with it), then the 'stability' of the world of the observer 
 should equate to the 'stability' of the physical events to which it is 
 linked through *identity*.  Now here's what puzzles me.  ISTM that the 
 imputation of 'computation' to the physical computer is only through the 
 systematic correspondence of certain stable aspects of its (principally) 
 electronic behaviour to computational elements: numbers, 
 mathematical-logical operators, etc.  The problem is in the terms 
 'imputation' and 'correspondence': this is surely merely a *way of 
 speaking* about the physical events in the computer, an arbitrary 
 ascription, from an infinite possible set, of externally-established 
 semantics to the intrinsic physical syntactics.
 
 Consequently, ISTM that the emergence of observer-worlds has to be 
 correlated (somehow) - one-to-one, or isomorphically - with 
 corresponding 'physical' events: IOW these events, with their 'dual 
 description', constitute a single 'distinguished' *causal* sequence.  By 
 contrast, *any* of the myriad 'computational worlds' that could be 
 ascribed to the same events must remain - to the computer, rather than 
 the programmer - only arbitrary or 'imaginary' ones.  This is why I 
 described them as 'nested' - perhaps 'orthogonal' or 'imaginary' are 
 better: they may - 'platonically' - exist somewhere in the plenitude, 
 but causally disconnected from the physical world in which the computer 
 participates. The computer doesn't 'know' anything about them.  
 Consequently, how could they possess any 'communication channel' to the 
 computer's - and our - world 'out there'?
 
 Of course I'm not claiming by this that machines couldn't be conscious.  
 My claim is rather that if they are, it couldn't be solely in virtue of 
 any 'imaginary computational worlds' imputed to them, but rather because 
 they support some unique, distinguished process of *physical* emergence 
 that also corresponds to a unique observer-world: and of course, mutatis 
 mutandis, this must also apply to the 'mind-brain' relationship.
 
 If I'm wrong (as no doubt I am), ISTM I must have erred in some step or 
 other of my logic above.  How do I debug it?
 
 David
 
 
 
 On Sat, Jun 23, 2007 at 03:58:39PM +0100, David Nyman wrote:
   On 23/06/07, Russell Standish [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:
  
   RS: I don't think I ever really found myself in
   disagreement with you. Rather, what is happening is symptomatic of us
   trying to reach across the divide of C. P. Snow's two cultures. You are
   obviously comfortable with the world of literary criticism, and your
   style of writing reflects this. The trouble is that to someone
 brought
   up on a diet of scientific and technical writing, the literary paper
   may as well be written in ancient greek. Gibberish doesn't mean
   rubbish or nonsense, just unintelligible.
  
   

Re: How would a computer know if it were conscious?

2007-06-24 Thread Brent Meeker

OOPS! I accidentally hit the send button on the wrong copy.
 
Here's what I intended to send below:

David Nyman wrote:
 On 23/06/07, *Russell Standish* [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 RS:  Perhaps you are one of those rare souls with a foot in
  each camp. That could be very productive!
 
 I hope so!  Let's see...
 
 RS:  This last post is perfectly lucid to me.
 
 Phew!!  Well, that's a good start.
 
 RS:  I hope I've answered it
 adequately.
 
 Your answer is very interesting - not quite what I expected:
 
 RS:  In some Platonic sense, all possible observers are already
 out there, but by physically instantiating it in our world, we are in
 effect opening up a communication channel between ourselves and the
 new consciousness.
 
 I think I must be missing something profound in your intended meanings of:
 
 1) 'out there'
 2) 'physically instantiating'
 3) 'our world'
 
 My current 'picture' of it is as follows.  The 'Platonic sense' I assume 
 equates to the 'bit-string plenitude' (which is differentiable from 'no 
 information' only by internal observers, like the Library of Babel - a 
 beautiful idea BTW).  But I'm assuming a 'hierarchy' of recursive 
 computational emergence through bits up through, say, strings, quarks, 
 atoms, molecules, etc - in other words what is perceived as 
 matter-energy by observers.  I then assume that both 'physical objects' 
 and any correlated observers emerge from this matter-energy level, and 
 that this co-emergence accomplishes the 'physical instantiation'.  IOW, 
 the observer is the 1-person view, and the physical behaviour the 
 3-person view, of the same underlying complex emergent - they're 
 different descriptions of the same events.
 
 If this is so, then as you say, the opening of the 'communication 
 channel' would be a matter of establishing the means and modes of 
 interaction with any new consciousness, because the same seamless 
 underlying causal sequence unites observer-world and physical-world: 
 again, different descriptions, same events.
 
 If the above is accepted (but I'm beginning to suspect there's something 
 deeply wrong with it), then the 'stability' of the world of the observer 
 should equate to the 'stability' of the physical events to which it is 
 linked through *identity*.  Now here's what puzzles me.  ISTM that the 
 imputation of 'computation' to the physical computer is only through the 
 systematic correspondence of certain stable aspects of its (principally) 
 electronic behaviour to computational elements: numbers, 
 mathematical-logical operators, etc.  The problem is in the terms 
 'imputation' and 'correspondence': this is surely merely a *way of 
 speaking* about the physical events in the computer, an arbitrary 
 ascription, from an infinite possible set, of externally-established 
 semantics to the intrinsic physical syntactics.
 
 Consequently, ISTM that the emergence of observer-worlds has to be 
 correlated (somehow) - one-to-one, or isomorphically - with 
 corresponding 'physical' events: IOW these events, with their 'dual 
 description', constitute a single 'distinguished' *causal* sequence.  By 
 contrast, *any* of the myriad 'computational worlds' that could be 
 ascribed to the same events must remain - to the computer, rather than 
 the programmer - only arbitrary or 'imaginary' ones.  This is why I 
 described them as 'nested' - perhaps 'orthogonal' or 'imaginary' are 
 better: they may - 'platonically' - exist somewhere in the plenitude, 
 but causally disconnected from the physical world in which the computer 
 participates. The computer doesn't 'know' anything about them.  
 Consequently, how could they possess any 'communication channel' to the 
 computer's - and our - world 'out there'?

I think I agree with your concern and I think the answer is that conscious 
implies conscious of something.  For a computer or an animal to be conscious 
is really a relation to an environment.  So for a computer to be conscious, as 
a human is, it must be able to perceive and act in our environment.  Or it 
could be running a program in which a conscious being is simulated and that 
being would be conscious relative to a simulated environment in the computer.  
In the latter case there might be an infinite number of different 
interpretations that could be consistently placed on the computer's execution; 
or there might not.  Maybe all those different interpretations aren't really 
different.  Maybe they are just translations into different words.  It seems to 
me to be jumping to a conclusion to claim they are different in some 
significant way.

The importance of the environment for consciousness is suggested by the sensory 
deprivation experiments of the late '60s.  It was observed by people who spent 
a long time in a sensory deprivation tank (an hour or more) that their mind 
would enter a loop and they lost all sense of time.

Brent Meeker

 
 Of course I'm not claiming by this that machines 

Re: How would a computer know if it were conscious?

2007-06-24 Thread David Nyman
On 24/06/07, Brent Meeker [EMAIL PROTECTED] wrote:

BM:  I think I agree with your concern

DN:  Ah...

BM:  and I think the answer is that conscious implies conscious of
something.  For a computer or an animal to be conscious is really a
relation to an environment.

DN:  Yes

BM:  So for a computer to be conscious, as a human is, it must be able to
perceive and act in our environment.

DN:   My point precisely.

BM:  Or it could be running a program in which a conscious being is
simulated and that being would be conscious relative to a simulated
environment in the computer.

DN:  I'm prepared to be agnostic on this.  But as your 'or' rightly
indicates, if so, it would be conscious relative to the simulated
environment, *not* the human one.

BM:  In the latter case there might be an infinite number of different
interpretations that could be consistently placed on the computer's
execution; or there might not.  Maybe all those different interpretations
aren't really different.  Maybe they are just translations into different
words.  It seems to me to be jumping to a conclusion to claim they are
different in some significant way.

DN:  Not sure... but I don't see how any of this changes the essential
implication, which is that however many interpretations you place on it, and
however many of these may evoke 'consciousness of something' (or as I would
prefer to say, a personal or observer world) it would be the simulated
world, not the human one (as you rightly point out).  From Bruno's
perspective (I think - and AFAICS, also TON) these two 'worlds' would be
different 'levels of substitution'.  So, if I said 'yes' to the doctor's
proposal to upload me as an AI program, this might evoke some observer
world, but any such would be 'orthogonal' to my and the computer's shared
'level of origin'.  Consequently, no new observer evoked in this way could
have any ability to interact with that level.  As an aside, it's an
interesting take on the semantics of 'imaginary' - and you know Occam's
attitude to such entities.

Anyway, I'm prepared to be agnostic for the moment about such specifics of
simulated worlds, but the key conclusion seems to be that in no case could
such a 'world' participate at the same causal level as the human one, which
vitiates any sense of its 'interacting' with, or being 'conscious of', the
same 'environment'.  AFAICS you have actually reached the same conclusion,
so I don't see in what sense you mean that it's the 'answer'.  You seem to
be supporting my point.  Do I misunderstand?

David


 OOPS! I accidentally hit the send button on the wrong copy.

 Here's what I intended to send below:

 David Nyman wrote:
  On 23/06/07, *Russell Standish* [EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED] wrote:
 
  RS:  Perhaps you are one of those rare souls with a foot in
   each camp. That could be very productive!
 
  I hope so!  Let's see...
 
  RS:  This last post is perfectly lucid to me.
 
  Phew!!  Well, that's a good start.
 
  RS:  I hope I've answered it
  adequately.
 
  Your answer is very interesting - not quite what I expected:
 
  RS:  In some Platonic sense, all possible observers are already
  out there, but by physically instantiating it in our world, we are in
  effect opening up a communication channel between ourselves and the
  new consciousness.
 
  I think I must be missing something profound in your intended meanings
 of:
 
  1) 'out there'
  2) 'physically instantiating'
  3) 'our world'
 
  My current 'picture' of it is as follows.  The 'Platonic sense' I assume
  equates to the 'bit-string plenitude' (which is differentiable from 'no
  information' only by internal observers, like the Library of Babel - a
  beautiful idea BTW).  But I'm assuming a 'hierarchy' of recursive
  computational emergence through bits up through, say, strings, quarks,
  atoms, molecules, etc - in other words what is perceived as
  matter-energy by observers.  I then assume that both 'physical objects'
  and any correlated observers emerge from this matter-energy level, and
  that this co-emergence accomplishes the 'physical instantiation'.  IOW,
  the observer is the 1-person view, and the physical behaviour the
  3-person view, of the same underlying complex emergent - they're
  different descriptions of the same events.
 
  If this is so, then as you say, the opening of the 'communication
  channel' would be a matter of establishing the means and modes of
  interaction with any new consciousness, because the same seamless
  underlying causal sequence unites observer-world and physical-world:
  again, different descriptions, same events.
 
  If the above is accepted (but I'm beginning to suspect there's something
  deeply wrong with it), then the 'stability' of the world of the observer
  should equate to the 'stability' of the physical events to which it is
  linked through *identity*.  Now here's what puzzles me.  ISTM that the
  imputation of 'computation' to the physical computer is only through the

Re: How would a computer know if it were conscious?

2007-06-23 Thread David Nyman
Hi John

JM: You may ask about prejudice, shame (about goofed situations),  humor
(does a
computer laugh?)  boredom or preferential topics (you push for an
astronomical calculation and the computer says: I rather play some Bach
music now)
Sexual preference (even disinterestedness is slanted), or laziness.
If you add untruthfulness in risky situations, you really have a human
machine
with consciousness

DN: All good, earthy, human questions.  I guess my (not very exhaustive)
examples were motivated by some general notion of a 'personal world' without
this necessarily being fully human.  A bit like 'Commander Data', perhaps.

JM: Now that we arrived at the question I replied-added (sort of) to Colin's
question I -
let me ask it again: how would YOU know if you are conscious?

DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps
re-phrase this as just: 'how do you know x?'  And then the answers are of
the type 'I just see x, hear x, feel x' and so forth.  IOW, 'knowing x' is
unmediated - 'objects' like x are just 'embedded' in the structure of the
'knower', and this is recursively related to more inclusive structures
within which the knower and its environment are in turn embedded.

JM: Or rather: How would you know if you are NOT conscious? Well, you
wouldn't.

DN: Agreed.  If we 'delete the noumenon' we get: 'How would you know if you
are NOT?' or: 'How would you know if you did NOT (know)?'  To which we
might indeed respond: 'You would not know, if you were NOT', or: 'You would
not know, if you did NOT (know)'.

JM: If you can, you are conscious.

DN: Yes: if you know, then you know.

JM: Computers?

DN: I think we need to distinguish between 'computers' and 'machines'.  I
can see no reason in principle why an artefact could not 'know', and be
motivated by such knowing to interact with the human world: humans are of
course themselves 'natural artefacts'.  The question is whether a machine
can achieve this purely in virtue of instantiating a 'Universal Turing
Machine'. For me the key is 'interaction with the human world'.  It may be
possible to conceive that some machine is computing a 'world' with 'knowers'
embedded in an environment to which they respond appropriately based on what
they 'know'.  However such a world is 'orthogonal' to the 'world' in which
the machine that instantiates the program is itself embedded. IOW, no
'event' as conceived in the 'internal world' has any causal implication to
any 'event' in the 'external world', or vice versa.

We can see this quite clearly in that an engineer could in principle give a
reductive account of the entire causal sequence of the machine's internal
function and interaction with the environment without making any reference
whatsoever to the programming, or 'world', of the UTM.

Bruno's approach is to postulate the whole 'ball of wax' as computation, so
that any 'event' whether 'inside' or 'outside' the machine is 'computed'.
The drift of my recent posts has been that even in this account, 'worlds'
can emerge 'orthogonally' to each other, such that from their reciprocal
perspectives, 'events' in their respective worlds will be 'imaginary'.  ISTM
that this is the nub of the 'level of substitution' dilemma in the 'yes
doctor' proposition: you may well 'save your soul' but 'lose the whole
world'.  But of course Bruno knows all this (and much more) - he is at pains
to show how computationalism and any 'primitive' concept of 'matter' are
incompatible.  From my reading of 'Theory of Nothing', Russell does too, so I
suspect that our recent wrangling is down to my lousy way of expressing
myself.

A good weekend to you too!

David

Dear David.
 do not expect from me the theoretical level of technicality-talk er get
 from Bruno: I talk (and think) common sense (my own) and if the
 theoretical technicalities sound strange, I return to my thinking.

 That's what I got, that's what I use (plagiarized from the Hungarian commie

 joke: what is the difference between the peoples' democracy and a wife?
 Nothing: that's what we got that's what we love)

 When I read your questioning the computer, I realized that you are
 in the ballpark of the AI people (maybe also AL - sorry, Russell)
 who select machine-accessible aspects for comparing.
 You may ask about prejudice, shame (about goofed situations),  humor (does
 a
 computer laugh?)  boredom or preferential topics (you push for an
 astronomical calculation and the computer says: I rather play some Bach
 music now)
 Sexual preference (even disinterestedness is slanted), or laziness.
 If you add untruthfulness in risky situations, you really have a human
 machine
 with consciousness (whatever people say it is - I agree with your evading
 that unidentified obsolete noumenon as much as possible).

 I found Bruno's post well fitting - if I have some hint what
 ...inner personal or self-referential modality... may mean.
 I could not 'practicalize' it.
 I still frown when abandoning (the meaning of) something but 

Re: How would a computer know if it were conscious?

2007-06-23 Thread Russell Standish

On Fri, Jun 22, 2007 at 02:06:14PM +0100, David Nyman wrote:
 RS:
 Terminology is terminology, it doesn't have a point of view.
 
 DN:
 This may be a nub of disagreement.  I'd be interested if you could clarify.
 My characterisation of a narrative as '3-person' is when (ISTM) that it's an
 abstraction from, or projection of, some 'situation' that is fundamentally
 'participative'.  Do you disagree with this?
 
 By contrast, I've been struggling recently with language that engages
 directly with 'participation'.  But this leads to your next point.

Terminology is about describing communicable notions. As such, the
only things words can ever describe are 1st person plural
things. Since you are familiar with my book, you can look up the
distinction between 1st person (singular), 1st person plural and 3rd
person, but these concepts have often been discussed on this list. I
can use the term 'green', for instance, in a sentence to you, and we
can be sure of its meaning when referring to shared experience of
phenomena; however, I can never communicate to you how green appears to
me, so that you can compare it with your green qualia.

 
 RS:
 Terms
 should have accepted meaning, unless we agree on a different meaning
 for the purposes of discussion.
 
 DN:
 But where there is no generally accepted meaning, or a disputed one, how can
 we then proceed?  Hence my attempts at definition (which I hate BTW), and
 which you find to be gibberish.  Is there a way out of this?
 

This sometimes happens. We can point to examples of what the word
means, and see if we agree on those. There are bound to be borderline
cases where we disagree, but these are often unimportant unless we are
searching for a definition.

 BTW, when I read 'Theory of Nothing', which I find very cogent, ISTM that
 virtually its entire focus is on aspects of a 'participatory' approach.  So
 I'm more puzzled than ever why we're in disagreement.  

You are correct that it is 'participatory', at least in the sense John
Wheeler uses it. I don't think I ever really found myself in
disagreement with you. Rather, what is happening is symptomatic of us
trying to reach across the divide of C. P. Snow's two cultures. You are
obviously comfortable with the world of literary criticism, and your
style of writing reflects this. The trouble is that to someone brought
up on a diet of scientific and technical writing, the literary paper
may as well be written in ancient Greek. Gibberish doesn't mean
rubbish or nonsense, just unintelligible.

I had my first experience of the modern academic humanities just two
years ago, and it was quite a shock. I attended a conference entitled
"The two cultures: Reconsidering the division between the Sciences and
Humanities". I was invited to speak as one of the scientific
representatives, and basically spoke about the core thesis of my book,
which seemed appropriate. I kept the language simple and
interdisciplinary, used lots of pictures to illustrate the concepts,
and I'm sure had a reasonable connect with the audience. All of the
other scientists did the same. They all knew better than to fall back
into jargon and dense forests of mathematical formulae (I have suffered
enough of those types of seminars, to be sure).

By contrast, the speakers from the humanities all read their papers
word-for-word. There were no illustrations to help one follow the gist
of the arguments. The sentences were long-winded, and attempted to
cover every nuance possible. A style I'm sure you're very familiar
with. I tried to ask a few questions of the speakers at the end, not
so as to appear smart or anything, but just to try to clarify some of
the few points I thought I might have understood. The responses from
the speakers, however, were in the same long-winded, heavily nuanced
sentences.

The one thing I drew from this conference was that the divide between
Snow's two cultures is alive and well, and vaster than I ever imagined.


 I've really been
 trying to say that points-of-view (or 'worlds') emerge from *structure*
 defined somehow, and that (tautologically, surely) the 'primitives' of such
 structure (in whatever theoretical terms we choose) must be capable of
 'animating' such povs or worlds.  IOW povs are always 'takes' on the whole
 situation, not inherent in individuated 'things'.

To say that a point of view (which I would translate as observer)
emerges from the world's structure is another way of saying that the
observer must supervene on observed physical structures. And I agree
with you, basically because of the Occam catastrophe
problem. However, how or why this emergence happens is rather
mysterious. 

I think it has something to do with self-awareness: without a self
existing within the observed physical world, one cannot be
self-aware. The corollary of this is that self-awareness must be
necessary for consciousness. Note this doesn't mean that you have to
be self-aware every second you are awake, but you have to be capable
of introspection.
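
For what it is worth, programs already have a weak, purely mechanical analogue
of introspection: they can be written to examine and report on their own
current state. A small Python sketch (the class and its fields are invented for
illustration, and no claim is intended that this amounts to self-awareness in
the above sense):

    class Agent:
        def __init__(self):
            self.mood = "curious"
            self.goal = "finish the calculation"

        def introspect(self):
            # The agent reports on its own current state, including the fact
            # that it is, at this moment, reporting on itself.
            return {"state": dict(vars(self)), "doing": "introspecting"}

    agent = Agent()
    print(agent.introspect())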

 

Re: How would a computer know if it were conscious?

2007-06-23 Thread David Nyman
On 23/06/07, Russell Standish [EMAIL PROTECTED] wrote:

RS: I don't think I ever really found myself in
disagreement with you. Rather, what is happening is symptomatic of us
trying to reach across the divide of C. P. Snow's two cultures. You are
obviously comfortable with the world of literary criticism, and your
style of writing reflects this. The trouble is that to someone brought
up on a diet of scientific and technical writing, the literary paper
may as well be written in ancient Greek. Gibberish doesn't mean
rubbish or nonsense, just unintelligible.

DN: It's interesting that you should perceive it in this way: I hadn't
thought about it like this, but I suspect you're not wrong.  I haven't
consumed very much of your 'diet', and I have indeed read quite a lot of
stuff in the style you refer to, although I often find it rather
indigestible!  But on the other hand, much of my professional experience has
been in the world of computer programming, right back to machine code days,
so I'm very aware of the difference between 'syntax' and 'semantics', and I
know too well how consequences can diverge wildly from a difference of a
single bit. How often have I heard the beleaguered self-tester wail "I
didn't *mean* that!"
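
A minimal sketch of that single-bit point, in Python (the routine and the bit
chosen are invented purely for illustration): two inputs differing in one bit
send the same code down entirely different paths.

    def route(flags):
        # Bit 3 selects between two very different code paths.
        if flags & 0b1000:
            return "reformat the archive"   # drastic branch
        return "append to the log"          # benign branch

    a = 0b0101
    b = a ^ 0b1000        # flip exactly one bit
    print(route(a))       # -> append to the log
    print(route(b))       # -> reformat the archive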

So - to me - my process is a bit like: define a 'procedural language'; use
this to 'code' some 'problem'; 'run' it to see what happens; then 'debug'
and repeat.  This is no doubt excruciating for anyone else to follow, and my
attempts to 'comment' the code don't always help. Now that I'm re-reading
TON, it seems to me that I've been trying to re-interpret bits of it in this
way (in an attempt to reconcile what you and Colin were disputing) but only
succeeded in muddying the waters further, probably for the reasons you
suggest.

However, in the spirit of the original topic of the thread, I would prefer
to ask you directly about the plausibility (which, unless I've
misunderstood, you support?) of an AI-program being in principle
'conscious'.  I take this to entail that instantiating such a program
thereby implements an 'observer' that can respond to and share a reality, in
broadly the same terms, with human 'observers'.  (I apologise in advance if
any paraphrase or short-hand I adopt misrepresents what you say in TON):

TON, as you comment in the book, takes the 'idealist' stance that 'concrete'
notions emerge from observation.  Our own relative status as observers
participating in 'worlds' is then dependent on computational 'emergence'
from the plenitude of all possible bit-strings.  Let's say that I'm such an
observer and I observe a 'computer' like the one I'm using now.  The
'computer' is a 3-person 'concrete emergent' in my 1-person world, and that
of the 'plurality' of observers with whom I'm in relation: we can 'interact'
with it. Now, we collectively *impute* that some aspect of its 3-person
behaviour (e.g. EM phenomena in its internal circuitry) is to be regarded as
'running an AI program' (i.e. ISTM that this is what happens when we
'compile and run' a program).  In what way does such imputation entail the
evocation - despite the myriad possible 'concrete' instantiations that might
represent it - of a *stable* observer capable of participating in our shared
'1-person plural' context?  IOW, I'm concerned that two different categories
are being conflated here: the 'world' at the 'observer level' that includes
me and the computer, and the 'world' of the program, which is 'nested'
inside this.  How can this 'nested' world get any purchase on 'observables'
that are 'external' to it?
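
One way to make the 'imputation' point concrete: the very same machine state
admits many readings, and nothing in the state itself singles one out. A small
Python sketch (the byte pattern is arbitrary, chosen only for illustration):

    import struct

    word = b"Ping"                           # four bytes of 'hardware state'
    as_int   = struct.unpack("<I", word)[0]  # imputed reading 1: an unsigned integer
    as_float = struct.unpack("<f", word)[0]  # imputed reading 2: a 32-bit float
    as_text  = word.decode("ascii")          # imputed reading 3: four characters

    print(as_int, as_float, as_text)
    # The bytes never change; only the description imputed to them does.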

As I re-read this question, I wonder whether I've already willy-nilly fallen
into the '2-cultures' gap again.  But what I've asked seems to be directly
related to the issues raised by 'Olympia and Klara', and by the substitution
level dilemma posed by 'yes doctor'.  Could you show me where - or if - I go
wrong, or does the 'language game' make our views forever mutually
unintelligible?

David


 On Fri, Jun 22, 2007 at 02:06:14PM +0100, David Nyman wrote:
  RS:
  Terminology is terminology, it doesn't have a point of view.
 
  DN:
  This may be a nub of disagreement.  I'd be interested if you could
 clarify.
  My characterisation of a narrative as '3-person' is when (ISTM) that
 it's an
  abstraction from, or projection of, some 'situation' that is
 fundamentally
  'participative'.  Do you disagree with this?
 
  By contrast, I've been struggling recently with language that engages
  directly with 'participation'.  But this leads to your next point.

 Terminology is about describing communicable notions. As such, the
 only things words can ever describe are 1st person plural
 things. Since you are familiar with my book, you can look up the
 distinction between 1st person (singular), 1st person plural and 3rd
 person, but these concepts have often been discussed on this list. I
 can use the term Green for instance, in a sentence to you, and we
 can be sure of its meaning when referring to shared experience of
 

Re: How would a computer know if it were conscious?

2007-06-23 Thread Brent Meeker

David Nyman wrote:
 On 23/06/07, *Brent Meeker* [EMAIL PROTECTED] wrote:
 
 BM:  But he could also switch from an account in terms of the machine 
 level causality to an account in terms of the computed 'world'.  In fact 
 he could switch back and forth.  Causality in the computed 'world' would 
 have its corresponding causality in the machine and vice versa.  So I 
 don't see why they should be regarded as orthogonal.
 
 DN:  Because the 'computational' description is arbitrary with respect 
 to the behaviour of the hardware.  It's merely an imputation, one of an 
 infinite set of such descriptions that could be imputed to the same 
 hardware behaviour.

True. But whatever interpretation was placed on the hardware behavior, it would 
still have the same causal relations in it as the hardware.  Although there 
will be infinitely many possible interpretations, it's not the case that any 
description will do.  Changing the description would be analogous to changing 
the reference frame or the names on a map.  The two processes would still be 
parallel, not orthogonal.
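
A toy Python sketch of the map analogy (the state names are invented):
relabelling the states changes the description, but the transition structure -
and with it the causal story - carries over intact.

    machine = {"s0": "s1", "s1": "s2", "s2": "s0"}            # 'hardware level' steps
    names   = {"s0": "idle", "s1": "fetch", "s2": "execute"}  # an imputed 'program level'

    world = {names[a]: names[b] for a, b in machine.items()}  # same structure, new labels

    # Each low-level transition has exactly one corresponding high-level
    # transition, like renaming places on a map while leaving the roads alone.
    for a, b in machine.items():
        assert world[names[a]] == names[b]
    print(world)  # {'idle': 'fetch', 'fetch': 'execute', 'execute': 'idle'}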

Brent Meeker




Re: How would a computer know if it were conscious?

2007-06-23 Thread David Nyman
On 23/06/07, Brent Meeker [EMAIL PROTECTED] wrote:

BM:  But he could also switch from an account in terms of the machine level
causality to an account in terms of the computed 'world'.  In fact he could
switch back and forth.  Causality in the computed 'world' would have its
corresponding causality in the machine and vice versa.  So I don't see why
they should be regarded as orthogonal.

DN:  Because the 'computational' description is arbitrary with respect to
the behaviour of the hardware.  It's merely an imputation, one of an
infinite set of such descriptions that could be imputed to the same hardware
behaviour.


David Nyman wrote:
  Hi John
 
  JM: You may ask about prejudice, shame (about goofed situations),  humor
  (does a
  computer laugh?)  boredom or preferential topics (you push for an
  astronomical calculation and the computer says: I rather play some Bach
  music now)
  Sexual preference (even disinterestedness is slanted), or laziness.
  If you add untruthfulness in risky situations, you really have a human
  machine
  with consciousness
 
  DN: All good, earthy, human questions.  I guess my (not very exhaustive)
  examples were motivated by some general notion of a 'personal world'
  without this necessarily being fully human.  A bit like 'Commander
  Data', perhaps.
 
  JM: Now that we arrived at the question I replied-added (sort of) to
  Colin's question I -
  let me ask it again: how would YOU know if you are conscious?
 
  DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps
  re-phrase this as just: 'how do you know x?'  And then the answers are
  of the type 'I just see x, hear x, feel x' and so forth.  IOW, 'knowing
  x' is unmediated - 'objects' like x are just 'embedded' in the structure
  of the 'knower', and this is recursively related to more inclusive
  structures within which the knower and its environment are in turn
  embedded.
 
  JM: Or rather: How would you know if you are NOT conscious? Well, you
  wouldn't.
 
  DN: Agreed.  If we 'delete the noumenon' we get: How would you know if
  you are NOT? or: How would you know if you did NOT (know)?.  To which
  we might indeed respond: You would not know, if you were NOT, or: You
  would not know, if you did NOT (know).
 
  JM: If you can, you are conscious.
 
  DN: Yes, If you know, then you know.
 
  JM: Computers?
 
  DN: I think we need to distinguish between 'computers' and 'machines'.
  I can see no reason in principle why an artefact could not 'know', and
  be motivated by such knowing to interact with the human world: humans
  are of course themselves 'natural artefacts'.  The question is whether a
  machine can achieve this purely in virtue of instantiating a 'Universal
  Turing Machine'. For me the key is 'interaction with the human world'.
  It may be possible to conceive that some machine is computing a 'world'
  with 'knowers' embedded in an environment to which they respond
  appropriately based on what they 'know'.  However such a world is
  'orthogonal' to the 'world' in which the machine that instantiates the
  program is itself embedded. IOW, no 'event' as conceived in the
  'internal world' has any causal implication to any 'event' in the
  'external world', or vice versa.
 
  We can see this quite clearly in that an engineer could in principle
  give a reductive account of the entire causal sequence of the machine's
  internal function and interaction with the environment without making
  any reference whatsoever to the programming, or 'world', of the UTM.

 But he could also switch from an account in terms of the machine level
 causality to an account in terms of the computed 'world'.  In fact he could
 switch back and forth.  Causality in the computed 'world' would have its
 corresponding causality in the machine and vice versa.  So I don't see why
 they should be regarded as orthogonal.

 Brent Meeker


 





Re: How would a computer know if it were conscious?

2007-06-23 Thread Brent Meeker

David Nyman wrote:
 Hi John
 
 JM: You may ask about prejudice, shame (about goofed situations),  humor 
 (does a
 computer laugh?)  boredom or preferential topics (you push for an 
 astronomical calculation and the computer says: I rather play some Bach 
 music now)
 Sexual preference (even disinterestedness is slanted), or laziness.
 If you add untruthfulness in risky situations, you really have a human 
 machine
 with consciousness
 
 DN: All good, earthy, human questions.  I guess my (not very exhaustive) 
 examples were motivated by some general notion of a 'personal world' 
 without this necessarily being fully human.  A bit like 'Commander 
 Data', perhaps.
 
 JM: Now that we arrived at the question I replied-added (sort of) to 
 Colin's question I -
 let me ask it again: how would YOU know if you are conscious?
 
 DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps 
 re-phrase this as just: 'how do you know x?'  And then the answers are 
 of the type 'I just see x, hear x, feel x' and so forth.  IOW, 'knowing 
 x' is unmediated - 'objects' like x are just 'embedded' in the structure 
 of the 'knower', and this is recursively related to more inclusive 
 structures within which the knower and its environment are in turn 
 embedded.
 
 JM: Or rather: How would you know if you are NOT conscious? Well, you 
 wouldn't.
 
 DN: Agreed.  If we 'delete the noumenon' we get: How would you know if 
 you are NOT? or: How would you know if you did NOT (know)?.  To which 
 we might indeed respond: You would not know, if you were NOT, or: You 
 would not know, if you did NOT (know).
 
 JM: If you can, you are conscious.
 
 DN: Yes, If you know, then you know.
 
 JM: Computers?
 
 DN: I think we need to distinguish between 'computers' and 'machines'.  
 I can see no reason in principle why an artefact could not 'know', and 
 be motivated by such knowing to interact with the human world: humans 
 are of course themselves 'natural artefacts'.  The question is whether a 
 machine can achieve this purely in virtue of instantiating a 'Universal 
 Turing Machine'. For me the key is 'interaction with the human world'.  
 It may be possible to conceive that some machine is computing a 'world' 
 with 'knowers' embedded in an environment to which they respond 
 appropriately based on what they 'know'.  However such a world is 
 'orthogonal' to the 'world' in which the machine that instantiates the 
 program is itself embedded. IOW, no 'event' as conceived in the 
 'internal world' has any causal implication to any 'event' in the 
 'external world', or vice versa.
 
 We can see this quite clearly in that an engineer could in principle 
 give a reductive account of the entire causal sequence of the machine's 
 internal function and interaction with the environment without making 
 any reference whatsoever to the programming, or 'world', of the UTM.

But he could also switch from an account in terms of the machine level 
causality to an account in terms of the computed 'world'.  In fact he could 
switch back and forth.  Causality in the computed 'world' would have its 
corresponding causality in the machine and vice versa.  So I don't see why they 
should be regarded as orthogonal.

Brent Meeker





Re: How would a computer know if it were conscious?

2007-06-23 Thread David Nyman
On 23/06/07, Brent Meeker [EMAIL PROTECTED] wrote:

BM: Changing the description would be analogous to changing the reference
frame or the names on a map.

DN:  I agree.

BM:  The two processes would still be parallel, not orthogonal.

DN:  But the inference I draw from your points above is that there is only
one process that has causal relevance to the world of the computer, and that
is the hardware one.  It is 'distinguished' in virtue of emerging at the
same level as the computer and the causal network in which it is embedded.
The world of the program is 'imaginary', or 'orthogonal', from this
perspective - a ghost in the machine.  It is 'parallel' only in the mind of
the programmer.
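
A small Python sketch of that 'nested world' point (everything here is invented
for illustration): events inside the simulated world alter only the data
structure that represents it, and touch nothing at the host level unless the
programmer writes code that couples the two.

    inner_world = {"weather": "calm", "agent_mood": "content"}

    def step(world):
        # Dramatic 'events' occur inside the simulation...
        world["weather"] = "hurricane"
        world["agent_mood"] = "terrified"

    room_temperature = 21.0          # a fact at the level of the host and its users
    for _ in range(3):
        step(inner_world)

    # Nothing in the nested world has touched room_temperature; any bridge
    # between the two levels exists only where the programmer builds one.
    print(inner_world, room_temperature)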

David


David Nyman wrote:
  On 23/06/07, *Brent Meeker* [EMAIL PROTECTED] wrote:
 
  BM:  But he could also switch from an account in terms of the machine
  level causality to an account in terms of the computed 'world'.  In fact
  he could switch back and forth.  Causality in the computed 'world' would
  have its corresponding causality in the machine and vice versa.  So I
  don't see why they should be regarded as orthogonal.
 
  DN:  Because the 'computational' description is arbitrary with respect
  to the behaviour of the hardware.  It's merely an imputation, one of an
  infinite set of such descriptions that could be imputed to the same
  hardware behaviour.

 True. But whatever interpretation was placed on the hardware behavior it
 would still have the same causal relations in it as the hardware.  Although
 there will be infinitely many possible interpretations, it's not the case
 that any description will do.  Changing the description would be analogous
 to changing the reference frame or the names on a map.  The two processes
 would still be parallel, not orthogonal.

 Brent Meeker

 





Re: How would a computer know if it were conscious?

2007-06-23 Thread Russell Standish

On Sat, Jun 23, 2007 at 03:58:39PM +0100, David Nyman wrote:
 On 23/06/07, Russell Standish [EMAIL PROTECTED] wrote:
 
 RS: I don't think I ever really found myself in
 disagreement with you. Rather, what is happening is symptomatic of us
 trying to reach across the divide of C. P. Snow's two cultures. You are
 obviously comfortable with the world of literary criticism, and your
 style of writing reflects this. The trouble is that to someone brought
 up on a diet of scientific and technical writing, the literary paper
 may as well be written in ancient Greek. Gibberish doesn't mean
 rubbish or nonsense, just unintelligible.
 
 DN: It's interesting that you should perceive it in this way: I hadn't
 thought about it like this, but I suspect you're not wrong.  I haven't
 consumed very much of your 'diet', and I have indeed read quite a lot of
 stuff in the style you refer to, although I often find it rather
 indigestible!  But on the other hand, much of my professional experience has
 been in the world of computer programming, right back to machine code days,
 so I'm very aware of the difference between 'syntax' and 'semantics', and I
 know too well how consequences can diverge wildly from a difference of a
 single bit.  How often have I heard the beleaguered self-tester wail I
 didn't *mean* that!

Interesting indeed. I wouldn't have guessed you to have been a
programmer. Perhaps you are one of those rare souls with a foot in
each camp. That could be very productive!

...

 
 However, in the spirit of the original topic of the thread, I would prefer
 to ask you directly about the plausibility (which, unless I've
 misunderstood, you support?) of an AI-program being in principle
 'conscious'.  I take this to entail that instantiating such a program
 thereby implements an 'observer' that can respond to and share a reality, in
 broadly the same terms, with human 'observers'.  (I apologise in advance if
 any paraphrase or short-hand I adopt misrepresents what you say in TON):
 

It seems plausible, certainly.

 TON, as you comment in the book, takes the 'idealist' stance that 'concrete'
 notions emerge from observation.  Our own relative status as observers
 participating in 'worlds' is then dependent on computational 'emergence'
 from the plenitude of all possible bit-strings.  Let's say that I'm such an
 observer and I observe a 'computer' like the one I'm using now.  The
 'computer' is a 3-person 'concrete emergent' in my 1-person world, and that
 of the 'plurality' of observers with whom I'm in relation: we can 'interact'
 with it. Now, we collectively *impute* that some aspect of its 3-person
 behaviour (e.g. EM phenomena in its internal circuitry) is to be regarded as
 'running an AI program' (i.e. ISTM that this is what happens when we
 'compile and run' a program).  In what way does such imputation entail the
 evocation - despite the myriad possible 'concrete' instantiations that might
 represent it - of a *stable* observer capable of participating in our shared
 '1-person plural' context?  IOW, I'm concerned that two different categories
 are being conflated here: the 'world' at the 'observer level' that includes
 me and the computer, and the 'world' of the program, which is 'nested'
 inside this.  How can this 'nested' world get any purchase on 'observables'
 that are 'external' to it?
 

It is no different to a conscious being instantiated in a new-born
baby (or 18 month old, or whenever babies actually become
conscious). In some Platonic sense, all possible observers are already
out there, but by physically instantiating it in our world, we are in
effect opening up a communication channel between ourselves and the
new consciousness.

 As I re-read this question, I wonder whether I've already willy-nilly fallen
 into the '2-cultures' gap again.  But what I've asked seems to be directly
 related to the issues raised by 'Olympia and Klara', and by the substitution
 level dilemma posed by 'yes doctor'.  Could you show me where - or if - I go
 wrong, or does the 'language game' make our views forever mutually
 unintelligible?
 
 David
 

This last post is perfectly lucid to me. I hope I've answered it
adequately.

Cheers





Re: How would a computer know if it were conscious?

2007-06-22 Thread David Nyman
On 21/06/07, Russell Standish [EMAIL PROTECTED] wrote:

RS:
It seems you've misconstrued my bashing, sorry about that. I was,
perhaps somewhat colourfully, meaning extracting some meaning. Since
your prose (and often Colin's for that matter) often sounds like
gibberish to me, I have to work at it, rather like bashing a lump of
metal with a hammer. Sometimes I succeed, but other times I just have
to give up.

DN:
I do sympathise, truly!

RS:
I most certainly didn't mean unwarranted criticising of, or flaming. I am
interested in learning, and I don't immediately assume that you (or
anyone else for that matter) have nothing interesting to say.

DN:
No, I've never thought you were 'flaming' and I genuinely appreciate any
time you take to respond.  I was only indicating the sort of response that
would most help the improvement of my thought process.

RS:
Terminology is terminology, it doesn't have a point of view.

DN:
This may be a nub of disagreement.  I'd be interested if you could clarify.
My characterisation of a narrative as '3-person' is when (ISTM) that it's an
abstraction from, or projection of, some 'situation' that is fundamentally
'participative'.  Do you disagree with this?

By contrast, I've been struggling recently with language that engages
directly with 'participation'.  But this leads to your next point.

RS:
Terms
should have accepted meaning, unless we agree on a different meaning
for the purposes of discussion.

DN:
But where there is no generally accepted meaning, or a disputed one, how can
we then proceed?  Hence my attempts at definition (which I hate BTW), and
which you find to be gibberish.  Is there a way out of this?

BTW, when I read 'Theory of Nothing', which I find very cogent, ISTM that
virtually its entire focus is on aspects of a 'participatory' approach.  So
I'm more puzzled than ever why we're in disagreement.  I've really been
trying to say that points-of-view (or 'worlds') emerge from *structure*
defined somehow, and that (tautologically, surely) the 'primitives' of such
structure (in whatever theoretical terms we choose) must be capable of
'animating' such povs or worlds.  IOW povs are always 'takes' on the whole
situation, not inherent in individuated 'things'.

RS:
2) Oxygen and hydrogen atoms as counterexamples of a chemical
   potential that is not an electric field

DN:
I certainly didn't mean to imply this!  I just meant that we seemed to be
counterposing 'abstracted' and 'participative' accounts, in the sense I
indicate above.  Something would really help me at this point: could I ask
how you would relate 'physical' levels of description you've used (e.g.
'oxygen and hydrogen atoms')  to the 'participative' approach of 'TON'?
IOW, how do these narratives converge on the range of phenomena to be
explained?

David


 On Fri, Jun 22, 2007 at 12:22:31AM -, David Nyman wrote:
 
  On Jun 21, 1:45 pm, Russell Standish [EMAIL PROTECTED] wrote:
 
   You assume way too much about my motives here. I have only been trying
 to
   bash some meaning out of the all too flaccid prose that's being flung
   about at the moment. I will often employ counterexamples simply to
   illustrate points of poor terminology, or sloppy thinking. Its a
   useful exercise, not a personal attack on beliefs.
 
  Russell, If you believe that a particular thought is poorly expressed
  or sloppy, I would appreciate any help you might offer in making it
  more precise, rather than 'bashing' it.

 It seems you've misconstrued my bashing, sorry about that. I was,
 perhaps somewhat colourfully, meaning extracting some meaning. Since
 your prose (and often Colin's for that matter) often sounds like
 gibberish to me, I have to work at it, rather like bashing a lump of
 metal with a hammer. Sometimes I succeed, but other times I just have
 to give up.

 I most certainly didn't mean unwarranted criticising of, or flaming. I
 am
 interested in learning, and I don't immediately assume that you (or
 anyone else for that matter) have nothing interesting to say.

  Sometimes conversations on
  the list feel more like talking past one another, and this in general
  isn't 'a useful exercise'.  My comment to Brent was motivated by a
  perception that you'd been countering my 1-personal terminology with 3-
  person formalisms.

 Terminology is terminology, it doesn't have a point of view. Terms
 should have accepted meaning, unless we agree on a different meaning
 for the purposes of discussion.

  Consequently, as such, they didn't strike me as
  equivalent, or as genuine 'counterexamples': this surprised me, in

 Which counterexamples are you talking about?

 1) Biological evolution as a counterexample to Colin's assertion about
 doing science implies consciousness. This started this thread.

 2) Oxygen and hydrogen atoms as counterexamples of a chemical
potential that is not an electric field

 3) Was there something else? I can't quite recall now.

  view of some of the other ideas you've expressed.  So I may 

Re: How would a computer know if it were conscious?

2007-06-22 Thread John Mikes
Dear David.
do not expect from me the theoretical level of technicality-talk you get
from Bruno: I talk (and think) common sense (my own) and if the
theoretical technicalities sound strange, I return to my thinking.

That's what I got, that's what I use (plagiarized from the Hungarian commie
joke: what is the difference between the peoples' democracy and a wife?
Nothing: that's what we got, that's what we love)

When I read your questioning of the computer, I realized that you are
in the ballpark of the AI people (maybe also AL - sorry, Russell)
who select machine-accessible aspects for comparing.
You may ask about prejudice, shame (about goofed situations), humor (does a
computer laugh?), boredom or preferential topics (you push for an
astronomical calculation and the computer says: I rather play some Bach
music now),
sexual preference (even disinterestedness is slanted), or laziness.
If you add untruthfulness in risky situations, you really have a human
machine
with consciousness (whatever people say it is - I agree with your evading
that unidentified obsolete noumenon as much as possible).

I found Bruno's post well fitting - if I have some hint of what
...inner personal or self-referential modality... may mean.
I could not 'practicalize' it.
I still frown when abandoning (the meaning of) something but considering
 items as pertaining to it - a rough paraphrasing, I admit.  To what?
I don't feel comfortable borrowing math methods for non-math explanations,
but that is my deficiency.

Now that we arrived at the question I replied-added (sort of) to Colin's
question I -
let me ask it again: how would YOU know if you are conscious?
(Conscious is more meaningful than cc-ness). Or rather: How would
you know if you are NOT conscious? Well, you wouldn't. If you can,
you are conscious.  Computers?

Have a good weekend

John Mikes



On 6/20/07, David Nyman [EMAIL PROTECTED] wrote:


 On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:

  Personally I don't think we can be *personally* mistaken about our own
  consciousness even if we can be mistaken about anything that
  consciousness could be about.

 I agree with this, but I would prefer to stop using the term
 'consciousness' at all.  To make a decision (to whatever degree of
 certainty) about whether a machine possessed a 1-person pov analogous
 to a human one, we would surely ask it the same sort of questions one
 would ask a human.  That is: questions about its personal 'world' -
 what it sees, hears, tastes (and perhaps extended non-human
 modalities); what its intentions are, and how it carries them into
 practice.  From the machine's point-of-view, we would expect it to
 report such features of its personal world as being immediately
 present (as ours are), and that it be 'blind' to whatever 'rendering
 mechanisms' may underlie this (as we are).

 If it passed these tests, it would be making similar claims on a
 personal world as we do, and deploying this to achieve similar ends.
 Since in this case it could ask itself the same questions that we can,
 it would have the same grounds for reaching the same conclusion.

 However, I've argued in the other bit of this thread against the
 possibility of a computer in practice being able to instantiate such a
 1-person world merely in virtue of 'soft' behaviour (i.e.
 programming).  I suppose I would therefore have to conclude that no
 machine could actually pass the tests I describe above - whether self-
 administered or not - purely in virtue of running some AI program,
 however complex.  This is an empirical prediction, and will have to
 await an empirical outcome.

 David

 On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
  Le 03-juin-07, à 21:52, Hal Finney a écrit :
 
 
 
   Part of what I wanted to get at in my thought experiment is the
   bafflement and confusion an AI should feel when exposed to human ideas
   about consciousness.  Various people here have proffered their own
   ideas, and we might assume that the AI would read these suggestions,
   along with many other ideas that contradict the ones offered here.
   It seems hard to escape the conclusion that the only logical response
   is for the AI to figuratively throw up its hands and say that it is
   impossible to know if it is conscious, because even humans cannot
 agree
   on what consciousness is.
 
  Augustin said about (subjective) *time* that he knows perfectly what it
  is, but that if you ask him to say what it is, then he admits being
  unable to say anything. I think that this applies to consciousness.
  We know what it is, although only in some personal and uncommunicable
  way.
  Now this happens to be true also for many mathematical concepts.
  Strictly speaking we don't know how to define the natural numbers, and
  we know today that indeed we cannot define them in a communicable way,
  that is without assuming the auditor knows already what they are.
 
  So what can we do. We can do what mathematicians do all the time. We

Re: How would a computer know if it were conscious?

2007-06-21 Thread Russell Standish

On Thu, Jun 21, 2007 at 12:45:43PM +1000, Colin Hales wrote:
 
  OK, so by necessary primitive, you mean the syntactic or microscopic
  layer. But take this away, and you no longer have emergence. See
  endless discussions on emergence - my paper, or Jochen Fromm's book for
  instance. Does this mean magical emergence is oxymoronic?
 
 I do not think I mean what you suggest. To make it almost tediously
 obvious I could rephrase it: NECESSARY PRIMITIVE ORGANISATIONAL LAYER.
 Necessary in that if you take it away the 'emergent' is gone. PRIMITIVE
 ORGANISATIONAL LAYER = one of the layers of the hierarchy of the natural
 world (from strings to atoms to cells and beyond): real, observable,
 on-the-benchtop-in-the-lab layers. 

Still sounds like the syntactic layer to me.

 Not some arm waving syntactic
 or information or complexity or Computaton or function_atom or
 representon. Magical emergence is real, specious and exactly what I have
 said all along:
 

real and specious?

 You claim consciousness arises as a result of  [syntactic or
 information or complexity or Computational or function_atom] =
 necessary primitive, but it has no scientifically verifiable correlation
 with any real natural world phenomenon that you can stand next to and have
 your picture taken.
 

The only form of consciousness known to us is emergent relative to a
syntactic layer of neurons, which you most certainly can take pictures
of. I'm not sure what your point is here.

 
 
  You can't use an object derived using the contents of
  consciousness(observation) to explain why there are any contents of
  consciousness(observation) at all. It is illogical. (see the Wigner quote
  below). I find the general failure to recognise this brute reality very
  exasperating.
 
 
  People used to think that about life. How can you construct (e.g. an
  animal) without having a complete description of that animal. So how
  can an animal self-reproduce without having a complete description of
  itself. But this then leads to an infinite regress.
 
  The solution to this conundrum was found in the early 20th century -
  first with such theoretical constructs as combinators and lambda
  calculus, then later the actual genetic machinery of life. If it is
  possible in the case of self-reproduction, then it will also likely
  be possible in the case of self-awareness and consciousness. Stating
  this to be illogical doesn't help. That's what people from the time of
  Descartes thought about self-reproduction.
 
  COLIN
  snip
   So this means that in a computer abstraction:
   d(KNOWLEDGE(t))/dt is already part of KNOWLEDGE(t)
   RUSSELL
   No it's not. dK/dt is generated by the interaction of the rules with the
   environment.
 
  No. No. No. There is the old assumption thing again.
 
  How, exactly, are you assuming that the agent 'interacts' with the
  environment? This is the world external to the agent, yes?. Do not say
  through sensory measurement, because that will not do. There are an
  infinite number of universes that could give rise to the same sensory
  measurements.
 
  All true, but how does that differ in the case of humans?
 
 The extreme uniqueness of the circumstance alone... We ARE the thing we
 describe. We are more entitled to any such claims... notwithstanding
 that...
 

What are you talking about here? Self-awareness? We started off talking
about whether machines doing science was evidence that they're conscious.

 
  You've lost me completely here.
 
 Here you are trying to say that an explanation of consciousness lies in
 that direction (magical emergence flavour X), when you appear to

You're the one introducing the term 'magical emergence', for which I've
not obtained an adequate definition from you.

...

 
 At the same time we can plausibly and defensibly justify the claim that
 whatever the universe is really made of, QUALIA are made of it too, and
 that the qualia process and the rest of the process (that appear like
 atoms etc. in the qualia) are all of the same KIND or CLASS of natural
 phenomenon... a perfectly natural phenomenon innate to whatever it is that
 it is actually made of.
 
 That is what I mean by 'we must live in the kind of universe', and I
 mean 'must' in the sense of formal necessitation of the most stringent
 kind.
 
 cheers,
 
 colin
 

I'm still confused about what you're trying to say. Are you saying our
qualia are made up of electrons and quarks, or if not them, then
whatever they're made of (strings, perhaps)?

How could you imagine the colour green being made up of this stuff, or
the wetness of water?


Re: How would a computer know if it were conscious?

2007-06-21 Thread David Nyman

On Jun 19, 12:31 pm, Russell Standish [EMAIL PROTECTED] wrote:

 Interaction is in terms of fields - electromagnetic for most of our
 everyday examples. The fields themselves are emergent effects from
 virtual boson exchange. Now how is this related to sensing exactly?
 (Other than sensing being a particular subclass of interaction)

Please, spare me the physico-mathematical imperialism!  You say
'interaction is in terms of fields'.  I think what you might claim
more modestly is something like: there is a mathematical formalism in
which interaction is modelled in terms of 'fields'.  Fair enough. But
implicitly the formalism is a projection from (and reference to) a
*participatory* actuality which isn't simply 'mathematical' (pace
Bruno - and anyway, not in the sense he deploys it for the purposes of
COMP).  And I'm not of course imputing 'sensing' to the formalism, but
to the 'de-formalised participants' from which it is projected.

'Participatory' here means that you must situate yourself at the point
of reference of your formalism, and intuit that 'thou-art-that' from
which the projection originates.  If you do this, does the term
'sensing' still seem so 'soft'?  The formalisms are projections from
the participatory semantics of a 'modulated continuum' that embraces
you, me and everything we know.  When you situate yourself here, do
you really not 'get' the intuitive self-relation between continuum and
modulation? Even when you know that Russell's 1-person world - an
'emergent' from this - indeed self-relates in both sense and action?
If not, then as Colin is arguing, you'd have to erect a sign with
'then magic happens' between 'emergent' and 'reductive' accounts.

 Sensing to me implies some
 form of agency at one end of the interaction. I don't attribute any sort
 of agency to the interaction between two hydrogen atoms making up
 a hydrogen molecule for instance.

Same illustration. 'Hydrogen atoms' are again just projective
formalisms to which of course nobody would impute 'agency'.  But
situate yourself where I suggest, and intuit the actions of any 'de-
formalised participants' referenced by the term 'hydrogen atoms' that
are implicated in Russell's 1-person world.  From this perspective,
any 'agency' that Russell displays is indeed inherent in such lower-
level 'entities' in 'reduced' form.  This is a perfectly standard
aspect of any 'reductive-emergent' scheme.  For some reason you seem
prepared to grant it in a 3-person account, but not in a participatory
one.

The customary 'liquidity' and 'life' counter-arguments are simply
misconceived here, because these attributions emerge from, and hence
are applicable to, formal descriptions, independent of their 'de-
formalised' participatory referents.  But you can't apply the
semantics of 'sensing' and 'agency' in the same way, because these are
ineluctably participatory, and are coherent only when intuited as such
'all the way down' (e.g. as attributes of 1-person worlds and the
participatory 'sense-action' hierarchies on which they supervene).

David

 On Tue, Jun 19, 2007 at 09:40:59AM -, David Nyman wrote:

  On Jun 19, 5:09 am, Russell Standish [EMAIL PROTECTED] wrote:

   David, I was unable to perceive a question in what you just wrote. I
   haven't a response, since (sadly) I was unable to understand what you
   were talking about. :(

  Really?  I'm surprised, but words can indeed be very slippery in this
  context. Oh, well.  To condense: my argument is intended to pump the
  intuition that a 'primitive' (or 'reduced') notion of 'sensing' (or
  please substitute anything that carries the thrust of 'able to
  locate', 'knows it's there', etc.) is already inescapably present in
  the notion of 'interaction' between fundamental 'entities' in any
  feasible model of reality.  Else, how could we claim that they retain
  any coherent sense of being 'in contact'?

 Interaction is in terms of fields - electromagnetic for most of our
 everyday examples. The fields themselves are emergent effects from
 virtual boson exchange. Now how is this related to sensing exactly?
 (Other than sensing being a particular subclass of interaction)

 ...

  implications.  So my question is, do you think it has any merit, or is
  simply wrong, indeterminate, or gibberish? And why?

 If I have to pick an answer: gibberish. Sensing to me implies some
 form of agency at one end of the interaction. I don't attribute any sort
 of agency to the interaction between two hydrogen atoms making up a
 hydrogen molecule for instance.




Re: How would a computer know if it were conscious?

2007-06-21 Thread Brent Meeker

David Nyman wrote:
 On Jun 19, 12:31 pm, Russell Standish [EMAIL PROTECTED] wrote:
 
 Interaction is in terms of fields - electromagnetic for most of our
 everyday examples. The fields themselves are emergent effects from
 virtual boson exchange. Now how is this related to sensing exactly?
 (Other than sensing being a particular subclass of interaction)
 
 Please, spare me the physico-mathematical imperialism!  You say
 interaction is in terms of fields'.  I think what you might claim
 more modestly is something like there is a mathematical formalism in
 which interaction is modelled in terms of 'fields'.  Fair enough. But
 implicitly the formalism is a projection from (and reference to) a
 *participatory* actuality which isn't simply 'mathematical' (pace
 Bruno - and anyway, not in the sense he deploys it for the purposes of
 COMP).  And I'm not of course imputing 'sensing' to the formalism, but
 to the 'de-formalised participants' from which it is projected.
 
 'Participatory' here means that you must situate yourself at the point
 of reference of your formalism, and intuit that 'thou-art-that' from
 which the projection originates.  If you do this, does the term
 'sensing' still seem so 'soft'?  The formalisms are projections from
 the participatory semantics of a 'modulated continuum' that embraces
 you, me and everything we know.  When you situate yourself here, do
 you really not 'get' the intuitive self-relation between continuum and
 modulation? Even when you know that Russell's 1-person world - an
 'emergent' from this - indeed self-relates in both sense and action?
 If not, then as Colin is arguing, you'd have to erect a sign with
 'then magic happens' between 'emergent' and 'reductive' accounts.

Sounds like the sign is already up and it reads, "Participatorily intuit the 
magic of the de-formalized ding an sich."

 
 Sensing to me implies some
 form of agency at one end of the interaction. I don't attribute any sort
 of agency to the interaction between two hydrogen atoms making up
 a hydrogen molecule for instance.
 
 Same illustration. 'Hydrogen atoms' are again just projective
 formalisms to which of course nobody would impute 'agency'.  But
 situate yourself where I suggest, and intuit the actions of any 'de-
 formalised participants' referenced by the term 'hydrogen atoms' that
 are implicated in Russell's 1-person world.  From this perspective,
 any 'agency' that Russell displays is indeed inherent in such lower-
 level 'entities' in 'reduced' form.  This is a perfectly standard
 aspect of any 'reductive-emergent' scheme.  For some reason you seem
 prepared to grant it in a 3-person account, but not in a participatory
 one.
 
 The customary 'liquidity' and 'life' counter-arguments are simply
 misconceived here, because these attributions emerge from, and hence
 are applicable to, formal descriptions, independent of their 'de-
 formalised' participatory referents.  But you can't apply the
 semantics of 'sensing' and 'agency' in the same way, because these are
 ineluctably participatory, and are coherent only when intuited as such
 'all the way down' (e.g. as attributes of 1-person worlds and the
 participatory 'sense-action' hierarchies on which they supervene).

So a hydrogen atom has a 1st-person world view, but this is more than its 
physical interactions (which are merely part of its formal description)?

Maybe so - but my intuition doesn't tell me anything about it.

Brent Meeker





Re: How would a computer know if it were conscious?

2007-06-21 Thread John Mikes
David wrote:  [EMAIL PROTECTED]
Jun 21, 2007 2:31 PM



David, you are still too mild  IMO. You wrote:
... there is a mathematical formalism in which interaction is modelled in
terms of 'fields'.
I would say: we call 'fields' what seems to be callable 'interaction' upon
the outcome of certain mathematical transformations - or something similar.
The similarity of math formulas does not justify implication  of some
physical reality - whatever that may mean.
What 'SPINS'? What undulates into Waves? Russell's ... emergent effects
from virtual boson exchange. ... are indeed virtually (imaginary?) emergent
VIRTUAL effects from a virtual exchange of virtual bosons. I agree: that
would not match your 'Fair enough'.
I like your quest for de-formalized participants (like e.g. energy?)

H and other atoms are ingenious representatives serving explanation for
things observed skimpily in ages of epistemic insufficiency by
'age'-adjusted instrumentation.
And with new epistemic enrichment science does not 'reconsider' what was
'believed', but modifies it to maintain the 'earlier' adjusted to the later
information (e.g. entropy in its 15th or so variation). Molecules were
rod-connected atom-figments, then turned into electric connections, then
secondary attraction-agglomerates, more functional than were the orig.
primitive bindings. It still does not fit for biology, this embryonic-state
limited model-science as applied for the elusive life processes.

Something happens and we 'think' what. Those ingenious(ly applied) math
equations based on previous cut-model quantization (disregarding the
influence of the 'beyond model' total
world) are 'matched' by constants, new math, or even for such purpose
invented concepts which, however, in the 274th consecutive application are
considered facts.
The 'matches' are considered WITHIN the aspects included into the model,
other aspect unmatches form 'paradoxes', or necessitate axioms. MY
synthesized macromolecules(?) were successfully applicable in practical
technology - in the same realm they were made for. The mass of an electron
matches miraculously to other results within the same wing of the edifice of
the scientific branch.

And how about your mentioned 'agency'? It is all figured in our human
patterns, what and how WE should do to get to an effect (maybe poorly
observed!). Nature does not have to follow our logic or mechanism. We know
only a part of it, understand it by our logic, make it pars pro toto and
describe nature in our actual human ways.
That is conventional science in which I made a good living, successful
practical results, publications and reputation in my branch. Then I started
to think.
We live on misconceptions and a new paradigm is still in those.

It is always a joy to read your posts.

John




On 6/21/07, David Nyman  [EMAIL PROTECTED]  wrote:


 On Jun 19, 12:31 pm, Russell Standish  [EMAIL PROTECTED] wrote:

  Interaction is in terms of fields - electromagnetic for most of our
  everyday examples. The fields themselves are emergent effects from
  virtual boson exchange. Now how is this related to sensing exactly?
  (Other than sensing being a particular subclass of interaction)

 Please, spare me the physico-mathematical imperialism!  You say
 interaction is in terms of fields'.  I think what you might claim
 more modestly is something like there is a mathematical formalism in
 which interaction is modelled in terms of 'fields'.  Fair enough. But
 implicitly the formalism is a projection from (and reference to) a
 *participatory* actuality which isn't simply 'mathematical' (pace
 Bruno - and anyway, not in the sense he deploys it for the purposes of
 COMP).  And I'm not of course imputing 'sensing' to the formalism, but
 to the 'de-formalised participants' from which it is projected.

 'Participatory' here means that you must situate yourself at the point
 of reference of your formalism, and intuit that 'thou-art-that' from
 which the projection originates.  If you do this, does the term
 'sensing' still seem so 'soft'?  The formalisms are projections from
 the participatory semantics of a 'modulated continuum' that embraces
 you, me and everything we know.  When you situate yourself here, do
 you really not 'get' the intuitive self-relation between continuum and
 modulation? Even when you know that Russell's 1-person world - an
 'emergent' from this - indeed self-relates in both sense and action?
 If not, then as Colin is arguing, you'd have to erect a sign with
 'then magic happens' between 'emergent' and 'reductive' accounts.

  Sensing to me implies some
  form of agency at one end of the interaction. I don't attribute any sort
  of agency to the interaction between two hydrogen atoms making up
  a hydrogen molecule for instance.

 Same illustration. 'Hydrogen atoms' are again just projective
 formalisms to which of course nobody would impute 'agency'.  But
 situate yourself where I suggest, and intuit the actions of any 'de-
 formalised participants' 

Re: How would a computer know if it were conscious?

2007-06-21 Thread David Nyman

On Jun 21, 8:24 pm, Brent Meeker [EMAIL PROTECTED] wrote:

 Sounds like the sign is already up and it reads, Participatorily intuit the 
 magic of the de-formalized ding an sich.

I'd be happy with that sign, if you substituted a phrase like 'way of
being' for 'magic'. There is no analogy between the two cases, because
Russell seeks to pull the entire 1-person rabbit, complete with 'way
of being', out of a hat that contains only 3-person formalisations.
This is magic with a vengeance.  The ding an sich (and, although I mis-
attributed monads to him, Kant knew a 'thing' or two) is what we all
participate in, whether you intuit it or not.  And my hat and my
rabbit, whether 0, 1, or 3-person versions, are participatory all the
way down.

 So a hydrogen atom has a 1st-person world view, but this is more than it's
  physical interactions (which are merely part of it's formal 
 description)?

 Maybe so - but my intuition doesn't tell me anything about it.

Clearly not.  But your sometime way with (dis)analogy leads me to
mistrust your intuition in this case. Firstly, we're dealing with a
*reductive* account, so '1-person world view' in the case of a 'de-
formalised' hydrogen atom must be 'reduced' correspondingly.  Such a
beastie neither sees nor hears, neither does it dream nor plan.  But
then, its 'formalised' counterpart isn't 'wet' either.  But the
*behaviour* of such counterparts is standardly attested as a 'reduced'
component of 3-person accounts of the 'emergence' of 'liquidity'.

Analogously (and this really *is* analogous) the de-formalised
participant ('DFP') referenced by 'hydrogen atom' is a 'reduced'
component of a participative account of the emergence of Russell's 1-
person world.  But it's merely daft to suppose that its 'way of being'
entails a 1-person 'mini sensorium', because it manifestly lacks any
'machinery' to render this.  Its humble role is to be a *component* in
*just* that 'machinery' that renders *Russell's* 1-person world.

DFPs aren't just the 'medium' of 1-person accounts, but that of *all*
accounts: 0, 1, or 3-person.  All accounts are 'DFP-
instantiated' (whatever else?).  The one you're presently viewing is
instantiated in the medium of DFPs variously corresponding to
'brains', 'computers', 'networks' etc.  A 3-person account is just a
'formal take' on 'DFP reality'; a 1-person account is a 'personal
take'; and a 0-person account is a 'de-personalised take'.

David


 David Nyman wrote:
  On Jun 19, 12:31 pm, Russell Standish [EMAIL PROTECTED] wrote:

  Interaction is in terms of fields - electromagnetic for most of our
  everyday examples. The fields themselves are emergent effects from
  virtual boson exchange. Now how is this related to sensing exactly?
  (Other than sensing being a particular subclass of interaction)

  Please, spare me the physico-mathematical imperialism!  You say
  interaction is in terms of fields'.  I think what you might claim
  more modestly is something like there is a mathematical formalism in
  which interaction is modelled in terms of 'fields'.  Fair enough. But
  implicitly the formalism is a projection from (and reference to) a
  *participatory* actuality which isn't simply 'mathematical' (pace
  Bruno - and anyway, not in the sense he deploys it for the purposes of
  COMP).  And I'm not of course imputing 'sensing' to the formalism, but
  to the 'de-formalised participants' from which it is projected.

  'Participatory' here means that you must situate yourself at the point
  of reference of your formalism, and intuit that 'thou-art-that' from
  which the projection originates.  If you do this, does the term
  'sensing' still seem so 'soft'?  The formalisms are projections from
  the participatory semantics of a 'modulated continuum' that embraces
  you, me and everything we know.  When you situate yourself here, do
  you really not 'get' the intuitive self-relation between continuum and
  modulation? Even when you know that Russell's 1-person world - an
  'emergent' from this - indeed self-relates in both sense and action?
  If not, then as Colin is arguing, you'd have to erect a sign with
  'then magic happens' between 'emergent' and 'reductive' accounts.

 Sounds like the sign is already up and it reads, "Participatorily intuit the
 magic of the de-formalized ding an sich."





  Sensing to me implies some
  form of agency at one end of the interaction. I don't attribute any sort
  of agency to the interaction between two hydrogen atoms making up
  a hydrogen molecule for instance.

  Same illustration. 'Hydrogen atoms' are again just projective
  formalisms to which of course nobody would impute 'agency'.  But
  situate yourself where I suggest, and intuit the actions of any 'de-
  formalised participants' referenced by the term 'hydrogen atoms' that
  are implicated in Russell's 1-person world.  From this perspective,
  any 'agency' that Russell displays is indeed inherent in such lower-
  level 'entities' in 'reduced' form.  This 

Re: How would a computer know if it were conscious?

2007-06-21 Thread David Nyman

On Jun 21, 8:42 pm, John Mikes [EMAIL PROTECTED] wrote:

 David, you are still too mild  IMO.

I try not to be churlish.

 I like your quest for de-formalized participants (like e.g. energy?)

Not sure - can you say more?

 The 'matches' are considered WITHIN the aspects included in the model;
 mismatches with other aspects form 'paradoxes', or necessitate axioms. MY
 synthesized macromolecules(?) were successfully applicable in practical
 technology - in the same realm they were made for. The mass of an electron
 matches miraculously to other results within the same wing of the edifice of
 that scientific branch.

Yes, the principal successes of science are instrumental, and its
models are designed for largely instrumental ends.  It is especially
psychologically difficult to go 'meta' to such models, and the
attitudes that spawned them.  But when we turn our attention
reflexively to 1-person worlds, we have no option but to go 'meta' to
3-person science, in pursuit of a fully participatory 'natural
philosophy'.  And perhaps if we are successful we will finally achieve
the instrumentality to realise 'artificial' 1-person worlds, for good
or ill.  Without it, we almost certainly won't.

 And how about your mentioned 'agency'? it is all figured in our human
 patterns, what and how WE should do to get to an effect (maybe poorly
 observed!). Nature does not have to follow our logic or mechanism. We know
 only a part of it, understand it by our logic, make it pars pro toto and
 describe nature in our actual human ways.

As I said, my attempt is really just to get to some human
understanding (what else?) of some sort of 'de-formalised
participatory semantics' for our human situation, rather than
restricting my thinking to 3-person formalised 'syntactics'.  I may
even be able to see a glimmer of the connection between the two.  But
I cannot bend Nature to my will!

 We live on misconceptions and a new paradigm is still in those.

Just so.

 It is always a joy to read your posts.

I thank you.

David

 David wrote:  [EMAIL PROTECTED]

 Jun 21, 2007 2:31 PM

 David, you are still too mild  IMO. You wrote:
 ... there is a mathematical formalism in which interaction is modelled in
 terms of 'fields'.
 I would say: we call 'fields' what seems to be callable 'interaction' upon
 the outcome of certain mathematical transformations - or something similar.
 The similarity of math formulas does not justify implication  of some
 physical reality - whatever that may mean.
 What 'SPINS'? What undulates into Waves? Russell's "... emergent effects
 from virtual boson exchange ..." are indeed virtually (imaginary?) emergent
 VIRTUAL effects from a virtual exchange of virtual bosons. I agree: that
 would not match your "Fair enough".
 I like your quest for de-formalized participants (like e.g. energy?)

 H and other atoms are ingenious representatives serving explanation for
 things observed skimpily in ages of epistemic insufficiency by
 'age'-adjusted instrumentation.
 And with new epistemic enrichment science does not 'reconsider' what was
 'believed', but modifies it to maintain the 'earlier' adjusted to the later
 information (e.g. entropy in its 15th or so variation). Molecules were
 rod-connected atom-figments, then turned into electric connections, then
 secondary attraction-agglomerates, more functional than were the orig.
 primitive bindings. It still does not fit for biology, this embryonic state
 limited model- science as applied for the elusive life processes.

 Something happens and we 'think' what. Those ingenious(ly applied) math
 equations based on previous cut-model quantization (disregarding the
 influence of the 'beyond model' total
 world) are 'matched' by constants, new math, or even for such purpose
 invented concepts which, however, in the 274th consecutive application are
 considered facts.
 The 'matches' are considered WITHIN the aspects included in the model;
 mismatches with other aspects form 'paradoxes', or necessitate axioms. MY
 synthesized macromolecules(?) were successfully applicable in practical
 technology - in the same realm they were made for. The mass of an electron
 matches miraculously to other results within the same wing of the edifice of
 that scientific branch.

 And how about your mentioned 'agency'? it is all figured in our human
 patterns, what and how WE should do to get to an effect (maybe poorly
 observed!). Nature does not have to follow our logic or mechanism. We know
 only a part of it, understand it by our logic, make it pars pro toto and
 describe nature in our actual human ways.
 That is conventional science in which I made a good living, successful
 practical results, publications and reputation in my branch. Then I started
 to think.
 We live on misconceptions and a new paradigm is still in those.

 It is always a joy to read your posts.

 John

 On 6/21/07, David Nyman  [EMAIL PROTECTED]  wrote:



  On Jun 19, 12:31 pm, Russell Standish  [EMAIL PROTECTED] wrote:

   Interaction is in terms of 

Re: How would a computer know if it were conscious?

2007-06-21 Thread Russell Standish

On Thu, Jun 21, 2007 at 08:44:54PM -, David Nyman wrote:
 There is no analogy between the two cases, because
 Russell seeks to pull the entire 1-person rabbit, complete with 'way
 of being', out of a hat that contains only 3-person formalisations.
 This is magic with a vengeance. 

You assume way too much about my motives here. I have only been trying to
bash some meaning out of the all too flaccid prose that's being flung
about at the moment. I will often employ counterexamples simply to
illustrate points of poor terminology, or sloppy thinking. It's a
useful exercise, not a personal attack on beliefs.

BTW - I'm with you Brent. Brent is also doing exactly this, sometimes
satirically. 

Cheers

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-21 Thread David Nyman

On Jun 21, 1:45 pm, Russell Standish [EMAIL PROTECTED] wrote:

 You assume way too much about my motives here. I have only been trying to
 bash some meaning out of the all too flaccid prose that's being flung
 about at the moment. I will often employ counterexamples simply to
 illustrate points of poor terminology, or sloppy thinking. It's a
 useful exercise, not a personal attack on beliefs.

Russell, if you believe that a particular thought is poorly expressed
or sloppy, I would appreciate any help you might offer in making it
more precise, rather than 'bashing' it.  Sometimes conversations on
the list feel more like talking past one another, and this in general
isn't 'a useful exercise'.  My comment to Brent was motivated by a
perception that you'd been countering my 1-personal terminology with 3-
person formalisms.  Consequently, they didn't strike me as
equivalent, or as genuine 'counterexamples': this surprised me, in
view of some of the other ideas you've expressed.  So I may well have
been too swift to assign certain motives to you, not having detected
any pedagogically-motivated intent to caricature, and I would welcome
your more specific clarification and correction.

I should say at this point that I too find the 'terminology' task very
trying, as virtually any existing vocabulary comes freighted with pre-
existing implications of the sort you have been exploiting in your
ripostes, but which I didn't intend.  I would welcome any superior
alternatives you might suggest.  Trying or not, I'm not quite ready to
give up the attempt to clarify these ideas.  If you think the exercise
misconceived or poorly executed, it's of course up to you to choose to
'bash', satirise, or ignore it, but I would particularly welcome open-
ended questions.

 Brent is also doing exactly this, sometimes
 satirically.

Again, I don't mean to seem humourless, but my basic intention is a
genuine exchange of ideas, rather than satire or caricature.  So I do
try to empathise as best I can with the issues on the other side of
the debate, before deciding if, and how, I disagree.  How successful I
may be is another matter.

I'd be more than willing, as ever, to have another go!

Cheers

David

 On Thu, Jun 21, 2007 at 08:44:54PM -, David Nyman wrote:
  There is no analogy between the two cases, because
  Russell seeks to pull the entire 1-person rabbit, complete with 'way
  of being', out of a hat that contains only 3-person formalisations.
  This is magic with a vengeance.

 You assume way too much about my motives here. I have only been trying to
 bash some meaning out of the all too flaccid prose that's being flung
 about at the moment. I will often employ counterexamples simply to
  illustrate points of poor terminology, or sloppy thinking. It's a
 useful exercise, not a personal attack on beliefs.

 BTW - I'm with you Brent. Brent is also doing exactly this, sometimes
 satirically.

 Cheers

 --

 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australia                                http://www.hpcoders.com.au
 





Re: How would a computer know if it were conscious?

2007-06-21 Thread Russell Standish

On Fri, Jun 22, 2007 at 12:22:31AM -, David Nyman wrote:
 
 On Jun 21, 1:45 pm, Russell Standish [EMAIL PROTECTED] wrote:
 
  You assume way too much about my motives here. I have only been trying to
  bash some meaning out of the all too flaccid prose that's being flung
  about at the moment. I will often employ counterexamples simply to
   illustrate points of poor terminology, or sloppy thinking. It's a
  useful exercise, not a personal attack on beliefs.
 
 Russell, If you believe that a particular thought is poorly expressed
 or sloppy, I would appreciate any help you might offer in making it
 more precise, rather than 'bashing' it.  

It seems you've misconstrued my 'bashing', sorry about that. I was,
perhaps somewhat colourfully, referring to extracting some meaning. Since
your prose (and often Colin's for that matter) often sounds like
gibberish to me, I have to work at it, rather like bashing a lump of
metal with a hammer. Sometimes I succeed, but other times I just have
to give up.

I most certainly didn't mean unwarranted criticising, or flaming. I am
interested in learning, and I don't immediately assume that you (or
anyone else for that matter) have nothing interesting to say.

 Sometimes conversations on
 the list feel more like talking past one another, and this in general
 isn't 'a useful exercise'.  My comment to Brent was motivated by a
 perception that you'd been countering my 1-personal terminology with 3-
 person formalisms.  

Terminology is terminology, it doesn't have a point of view. Terms
should have accepted meaning, unless we agree on a different meaning
for the purposes of discussion.

 Consequently, as such, they didn't strike me as
 equivalent, or as genuine 'counterexamples': this surprised me, in

Which counterexamples are you talking about? 

1) Biological evolution as a counterexample to Colin's assertion that
doing science implies consciousness. This started this thread.

2) Oxygen and hydrogen atoms as counterexamples of a chemical
   potential that is not an electric field

3) Was there something else? I can't quite recall now.

 view of some of the other ideas you've expressed.  So I may well have
 been too swift to assign certain motives to you, not having detected
 any pedagogically-motivated intent to caricature, and I would welcome
 your more specific clarification and correction.
 
 I should say at this point that I too find the 'terminology' task very
 trying, as virtual any existing vocabulary comes freighted with pre-
 existing implications of the sort you have been exploiting in your
 ripostes, but which I didn't intend.  I would welcome any superior
 alternatives you might suggest.  Trying or not, I'm not quite ready to
 give up the attempt to clarify these ideas.  If you think the exercise
 misconceived or poorly executed, it's of course up to you to choose to
 'bash', satirise, or ignore it, but I would particularly welcome open-
 ended questions.
 

I don't recall satirising anything recently. It is true that I usually
ignore comments that don't make sense after a couple of minutes of
staring at the phrase, unless really prodded, as you did in your
recent post on attributing sensing to arbitrary interactions.


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-20 Thread David Nyman

On Jun 20, 3:35 am, Colin Hales [EMAIL PROTECTED] wrote:

 Methinks you 'get it'. You are far more eloquent than I am, but we talk of
 the same thing..

Thank you Colin.  'Eloquence' or 'gibberish'?  Hmm...but let us
proceed...

 where I identify ??? as a necessary primitive and comment that
 'computation' or 'information' or 'complexity' have only the vaguest of an
 arm waving grip on any claim to such a specific role. Such is the 'magical
 emergence' genre.

Just so.  My own 'meta-analysis' is also a (foolhardy?) attempt to
identify the relevant 'necessity' as *logical*.  The (awesome) power
of this would be to render 'pure' 3-person accounts (i.e. so-called
'physical') radically causally incomplete.  Some primitive like yours
would be a *logically necessary* foundation of *any* coherent account
of 'what-is'.

Strawson, and Chalmers, as I've understood them, make the (IMO)
fundamental mis-step of proposing a superadded 'fundamental property'
to the 'physical' substrate (e.g. 'information').  This has the fatal
effect of rendering such a 'property' *optional* - i.e. it appears
that everything could proceed just as happily without it in the 3-
person account, and hence 'consciousness' can (by some) still airily
be dismissed as an 'illusion'.  The first move here, I think, is to
stop using the term 'consciousness' to denote any 'property'.

My own meta-analysis attempts to pump the intuition that all
processes, whether 0, 1, or 3-person, must from *logical necessity* be
identified with 'participative encounters', which are unintelligible
in the absence of *any* component: namely 'participation', 'sense',
and 'action'.  So, to 'exist' or 'behave', one must be:

1) a participant (i.e. the prerequisite for 'existence')
2) sensible (i.e. differentiating some 'other' in relationship)
3) active (i.e. the exchange of 'motivation' with the related 'other')

and all manifestations of 'participative existence' must be 'fractal'
to these characteristics in both directions (i.e. 'emergence' and
'supervention').  So, to negate these components one-by-one:

1) if not a participant, you don't get to play
2) if not sensible, you can't relate
3) if not active in relationship, you have no 'motivation'

These logical or semantic characteristics are agnostic to the
'primitive base'.  For example, if we are to assume AR as that base,
then the 'realism' part must denote that we 'participate' in AR, that
'numbers' are 'mutually sensible', and that arithmetical relationship
is 'motivational'.  If I've understood Bruno, 'computationalism'
generates 'somethings' at the 1-person plural level.  My arguments
against 'software uploading' then apply at the level of these
'emergent somethings', not to the axiomatic base. This is the nub of
the 'level of substitution' dilemma in the 'yes doctor' puzzle.

In 'somethingist' accounts, 'players' participate in sensory-
motivational encounters between 'fundamental somethings' (e.g.
conceived as vibrational emergents of a modulated continuum).

The critical move in the above argument is that by making the relation
between 0,1, and 3-person accounts and the primitives *self-relation*
or identity, we jettison the logical possibility of 'de-composing'
participative sensory-motivational relationship.  0,1, and 3-person
are then just different povs on this:

0 - the participatory 'arena' itself
1 - the 'world' of a differentiated 'participant'
3 - a 'proxy', parasitising a 1-person world

'Zombies' and 'software' are revealed as being category 3: they
'parasitise' 1-person worlds, sometimes as 'proxies' for distal
participants, sometimes 'stand-alone'.  The imputation of 'soft
behaviour' to a computer, for example, is just such a 'proxy', and has
no relevance whatsoever to the 1-person pov of the distal
'participatory player'.  Such a pov can emerge only fractally from its
*participative* constitution.

 A
 principle of the kind "X must exist or we wouldn't be having this
 discussion". There is no way to characterise explanation through magical
 emergence that enables empirical testing. Not even in principle. They are
 impotent at all prediction. You adopt the position and the whole job is
 done and is a matter of belief = NOT SCIENCE.

Well, I'm happy on the above basis to make the empirical prediction:

No 'computer' will ever spontaneously adopt a 1-person pov in virtue
of any 'computation' imputed to it.

You, of course, are working directly on this project.  My breath is
bated!

For me, one of the most important consequences of the foregoing
relates to our intuitions about ourselves.  We hear from various
directions that our 1-person worlds are 'epiphenomenal' or 'illusory'
or simply that they don't 'exist'.  But this can now be seen to be
vacuous, deriving from a narrative fixation on the 'proxy', or
'parasite', rather than the participant.  In fact, it is the tacit
assumption of sense-action to the parasite (e.g. the 'external world')
that is illusory, epiphenomenal and non-existent.  Real 

Re: How would a computer know if it were conscious?

2007-06-20 Thread David Nyman

On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:

 Personally I don't think we can be *personally* mistaken about our own
 consciousness even if we can be mistaken about anything that
 consciousness could be about.

I agree with this, but I would prefer to stop using the term
'consciousness' at all.  To make a decision (to whatever degree of
certainty) about whether a machine possessed a 1-person pov analogous
to a human one, we would surely ask it the same sort of questions one
would ask a human.  That is: questions about its personal 'world' -
what it sees, hears, tastes (and perhaps extended non-human
modalities); what its intentions are, and how it carries them into
practice.  From the machine's point-of-view, we would expect it to
report such features of its personal world as being immediately
present (as ours are), and that it be 'blind' to whatever 'rendering
mechanisms' may underlie this (as we are).

If it passed these tests, it would be making similar claims on a
personal world as we do, and deploying this to achieve similar ends.
Since in this case it could ask itself the same questions that we can,
it would have the same grounds for reaching the same conclusion.

However, I've argued in the other bit of this thread against the
possibility of a computer in practice being able to instantiate such a
1-person world merely in virtue of 'soft' behaviour (i.e.
programming).  I suppose I would therefore have to conclude that no
machine could actually pass the tests I describe above - whether self-
administered or not - purely in virtue of running some AI program,
however complex.  This is an empirical prediction, and will have to
await an empirical outcome.

David

On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
 Le 03-juin-07, à 21:52, Hal Finney a écrit :



  Part of what I wanted to get at in my thought experiment is the
  bafflement and confusion an AI should feel when exposed to human ideas
  about consciousness.  Various people here have proffered their own
  ideas, and we might assume that the AI would read these suggestions,
  along with many other ideas that contradict the ones offered here.
  It seems hard to escape the conclusion that the only logical response
  is for the AI to figuratively throw up its hands and say that it is
  impossible to know if it is conscious, because even humans cannot agree
  on what consciousness is.

 Augustine said about (subjective) *time* that he knows perfectly well what it
 is, but that if you ask him to say what it is, then he admits being
 unable to say anything. I think that this applies to consciousness.
 We know what it is, although only in some personal and uncommunicable
 way.
 Now this happens to be true also for many mathematical concepts.
 Strictly speaking we don't know how to define the natural numbers, and
 we know today that indeed we cannot define them in a communicable way,
 that is, without assuming the auditor already knows what they are.

 So what can we do? We can do what mathematicians do all the time. We
 can abandon the very idea of *defining* what consciousness is, and try
 instead to focus on principles or statements about which we can agree
 that they apply to consciousness. Then we can search for (mathematical)
 objects obeying such or similar principles. This can be made easier
 by admitting some theory or realm for consciousness, like the idea that
 consciousness could apply to *some* machine or to some *computational
 events*, etc.

 We could agree for example that:
 1) each one of us knows what consciousness is, but nobody can prove
 he/she/it is conscious.
 2) consciousness is related to inner personal or self-referential
 modality
 etc.

 This is how I proceed in "Conscience et Mécanisme".  ("Conscience" is
 the French for "consciousness"; "conscience morale" is the French for the
 English "conscience".)



  In particular I don't think an AI could be expected to claim that it
  knows that it is conscious, that consciousness is a deep and intrinsic
  part of itself, that whatever else it might be mistaken about it could
  not be mistaken about being conscious.  I don't see any logical way it
  could reach this conclusion by studying the corpus of writings on the
  topic.  If anyone disagrees, I'd like to hear how it could happen.

 As far as a machine is correct, when she introspects herself, she
 cannot not discover a gap between truth (p) and provability (Bp). The
 machine can discover correctly (but not necessarily in a completely
 communicable way) a gap between provability (which can potentially
 lead to falsities, despite correctness) and the incorrigible
 knowability or knowledgeability (Bp & p), and then the gap between
 those notions and observability (Bp & Dp) and sensibility (Bp & Dp &
 p). Even without using the conventional name of consciousness,
 machines can discover semantical fixpoints playing the role of non-
 expressible but true statements.
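 For readability, the variants mentioned above can be gathered in one
 place (a restatement only, assuming the usual convention that Dp
 abbreviates ~B~p, i.e. the consistency of p):

 \begin{align*}
 \text{truth}         &:\; p \\
 \text{provability}   &:\; Bp \\
 \text{knowability}   &:\; Bp \wedge p \\
 \text{observability} &:\; Bp \wedge Dp \\
 \text{sensibility}   &:\; Bp \wedge Dp \wedge p
 \end{align*}
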
 We can *already* talk with machine about those true unnameable 

Re: How would a computer know if it were conscious?

2007-06-20 Thread Colin Hales

down a ways..
===
Russell Standish wrote:
 On Sun, Jun 17, 2007 at 03:47:19PM +1000, Colin Hales wrote:
 Hi,

 RUSSEL
 All I can say is that I don't understand your distinction. You have
 introduced a new term necessary primitive - what on earth is that? But
 I'll let this pass, it probably isn't important.

 COLIN
 Oh no you don't!! It matters. Bigtime...

 Take away the necessary primitive: no 'qualititative novelty'
 Take away the water molecules: No lake.
 Take away the bricks, no building
 Take away the atoms: no molecules
 Take away the cells: no human
 Take away the humans: no humanity
 Take away the planets: no solar system
 Take away the X: No emergent Y
 Take away the QUALE: No qualia

 Magical emergence is when you claim Y exists but you can't
 identify an X. Such as:


 OK, so by necessary primitive, you mean the syntactic or microscopic
 layer. But take this away, and you no longer have emergence. See
 endless discussions on emergence - my paper, or Jochen Fromm's book for
 instance. Does this mean magical emergence is oxymoronic?

I do not think I mean what you suggest. To make it almost tediously
obvious I could rephrase it as NECESSARY PRIMITIVE ORGANISATIONAL LAYER.
Necessary in that if you take it away the 'emergent' is gone. PRIMITIVE
ORGANISATIONAL LAYER = one of the layers of the hierarchy of the natural
world (from strings to atoms to cells and beyond): real, observable-
on-the-benchtop-in-the-lab layers. Not some arm-waving syntactic
or information or complexity or Computaton or function_atom or
representon. Magical emergence is real, specious and exactly what I have
said all along:

You claim consciousness arises as a result of  [syntactic or
information or complexity or Computational or function_atom] =
necessary primitive, but it has no scientifically verifiable correlation
with any real natural world phenomenon that you can stand next to and have
your picture taken.



 You can't use an object derived using the contents of
 consciousness(observation) to explain why there are any contents of
consciousness(observation) at all. It is illogical. (see the Wigner quote
 below). I find the general failure to recognise this brute reality very
 exasperating.


 People used to think that about life. How can you construct (eg an
 animal) without having a complete description of that animal? So how
 can an animal self-reproduce without having a complete description of
 itself? But this then leads to an infinite regress.

 The solution to this conundrum was found in the early 20th century -
 first with such theoretical constructs as combinators and lambda
 calculus, then later the actual genetic machinery of life. If it is
 possible in the case of self-reproduction, then it will also likely be
 possible in the case of self-awareness and consciousness. Stating
 this to be illogical doesn't help. That's what people from the time of
 Descartes thought about self-reproduction.

 COLIN
 snip
 So this means that in a computer abstraction.
 d(KNOWLEDGE(t))
 ---  is already part of KNOWLEDGE(t)
   dt
 RUSSEL
 No it's not. dK/dt is generated by the interaction of the rules with the
 environment.

 No. No. No. There is the old assumption thing again.

 How, exactly, are you assuming that the agent 'interacts' with the
 environment? This is the world external to the agent, yes?. Do not say
 through sensory measurement, because that will not do. There are an
 infinite number of universes that could give rise to the same sensory
 measurements.

 All true, but how does that differ in the case of humans?

The extreme uniqueness of the circumstance alone... We ARE the thing we
describe. We are more entitled to any such claims... notwithstanding
that...

Because, as I have said over and over... and will say again: We must live
in the kind of universe that delivers or allows access to, in ways as yet
unexplained, some aspects of the distal world, to which sensory I/O can be
attached and, thus conjoined, be used to form the qualia
representation/fields we experience in our heads.

Forget about HOW... that this is necessarily the case is unavoidable.
Maxwell's equations prove it, QED-style... Without it, the sensory I/O
(ultimately 100% electromagnetic phenomena) could never resolve the distal
world in any unambiguous way. Such disambiguation physically
happens... such qualia representations exist, hence brains must have
direct access to the distal world. QED.


 We are electromagnetic objects. Basic EM theory. Proven
 mathematical theorems. The solutions are not unique for an isolated
 system.

 Circularity.Circularity.Circularity.

 There is _no interaction with the environment_ except for that provided by
 the qualia as an 'as-if' proxy for the environment. The origins of an
 ability to access the distal external world in support of such a proxy is
 mysterious but moot. It can and does happen, and that ability must come
 about because we 

Re: How would a computer know if it were conscious?

2007-06-19 Thread David Nyman

On Jun 19, 5:09 am, Russell Standish [EMAIL PROTECTED] wrote:

 David, I was unable to perceive a question in what you just wrote. I
 haven't a response, since (sadly) I was unable to understand what you
 were talking about. :(

Really?  I'm surprised, but words can indeed be very slippery in this
context. Oh, well.  To condense: my argument is intended to pump the
intuition that a 'primitive' (or 'reduced') notion of 'sensing' (or
please substitute anything that carries the thrust of 'able to
locate', 'knows it's there', etc.) is already inescapably present in
the notion of 'interaction' between fundamental 'entities' in any
feasible model of reality.  Else, how could we claim that they retain
any coherent sense of being 'in contact'?  And, if not 'in contact',
how 'interact'?  So in essence, this is a semantic intuition: that the
root concept of 'interaction' *tacitly includes*  'sensing' as a
*logical prerequisite* of 'contact' in an inescapable manner to which
we have become *semantically blind*.  So I propose such a primitive
but unavoidable 'hybrid' as the conceptual basis on which any higher-
order emergent process, including those embodying reflexive self-
consciousness, logically supervenes.  I suppose this is a sort of 'non-
optional panpsychism': participating in such a reality, your nature
embraces 'action with sensing' down to its very roots.

If this intuition could be developed into something more rigorous, it
would have the (startling) consequence that any 'physical' explanation
that explicitly excluded the primitive 'sensing' component of 'action-
with-sensing' would be incomplete - i.e. no process founded on it
could actually work *at all*. This is a 'philosophical', or semantic/
logical analysis, not science, of course.  But I think you may agree
that if it has any merit, it would have some interesting
implications.  So my question is, do you think it has any merit, or is
simply wrong, indeterminate, or gibberish? And why?

David

 On Sun, Jun 17, 2007 at 11:17:50PM -, David Nyman wrote:

  All this has massive implications for issues of will (free or
  otherwise), suffering, software uploading of minds, etc., etc. - which
  I've indicated in other posts.  Consequently, I'd be really interested
  in your response, because AFAICS this must be either right(ish),
  wrong(ish), or not-even-wrong(ish).  But if right(ish), potentially it
  gives us a basis for speaking the same language, even if my suggested
  vocabulary is jettisoned for an improved version.  It's certainly
  intended to be Occamish.

  David

 David, I was unable to perceive a question in what you just wrote. I
 haven't a response, since (sadly) I was unable to understand what you
 were talking about. :(

 Cheers

 --

 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australia                                http://www.hpcoders.com.au
 





Re: How would a computer know if it were conscious?

2007-06-19 Thread Russell Standish

On Tue, Jun 19, 2007 at 09:40:59AM -, David Nyman wrote:
 
 On Jun 19, 5:09 am, Russell Standish [EMAIL PROTECTED] wrote:
 
  David, I was unable to perceive a question in what you just wrote. I
  haven't a response, since (sadly) I was unable to understand what you
  were talking about. :(
 
 Really?  I'm surprised, but words can indeed be very slippery in this
 context. Oh, well.  To condense: my argument is intended to pump the
 intuition that a 'primitive' (or 'reduced') notion of 'sensing' (or
 please substitute anything that carries the thrust of 'able to
 locate', 'knows it's there', etc.) is already inescapably present in
 the notion of 'interaction' between fundamental 'entities' in any
 feasible model of reality.  Else, how could we claim that they retain
 any coherent sense of being 'in contact'? 

Interaction is in terms of fields - electromagnetic for most of our
everyday examples. The fields themselves are emergent effects from
virtual boson exchange. Now how is this related to sensing exactly?
(Other than sensing being a particular subclass of interaction)

...

 implications.  So my question is, do you think it has any merit, or is
 simply wrong, indeterminate, or gibberish? And why?
 

If I have to pick an answer: gibberish. Sensing to me implies some
form of agency at one end of the interaction. I don't attribute any sort
of agency to the interaction between two hydrogen atoms making up a
hydrogen molecule for instance.


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-19 Thread Mark Peaty
? Your rules. This is the real circularity which underpins
 computationalism. It's the circularity that my real physical qualia
 model
 cuts and kills. Mathematically:
 
 * You have knowledge KNOWLEDGE(t) of 'out there'
 * You want more knowledge of 'out there' so
 * KNOWLEDGE(t+1) is more than KNOWLEDGE(t)
 * in computationalism who defines the necessary route to this?...
 
  d(KNOWLEDGE(t))
  --- = something you know = YOU DO.
 dt
 
 So this means that in a computer abstraction.
 
 d(KNOWLEDGE(t))
 ---  is already part of KNOWLEDGE(t)
   dt
 
 You can label it 'evolutionary' or 'adaptive' or
 whatever...ultimately the
 rules are YOUR rules and come from your previously derived
 KNOWLEDGE(t) of
 'out there', not intrinsically grounded directly in 'out there'. Who
 decided what you don't know? YOU DID. What is it based on? YOUR current
 knowledge of it, not what is literally/really there. Ungroundedness
 is the
 fatal flaw in the computationalist model. Intrinsic grounding in the
 external world is what qualia are for. It means that
 
 d(KNOWLEDGE(t))
 ---
   dt
 
 is
 (a) built into the brain hardware (plasticity chemistry, out of your
 cognitive control)
 (b) partly grounded in matter literally/directly constructed in
 representation of the external world, reflecting the external world so
 that NOVELTY - true novelty in the OUTSIDE WORLD - is apparent.
 
 In this way your current knowledge minimally impacts
 
 d(KNOWLEDGE(t))
 ---
   dt
 
 In other words, at the fundamental physics level:
 
 d(KNOWLEDGE(t))
 ---
   dt
 
 in a human brain is NOT part of KNOWLEDGE(t). Qualia are the brain's
 solution to the symbolic grounding problem.
 
 
 RUSSEL
   Not at all. In Evolutionary Programming, very little is known
 about the
 ultimate solution the algorithm comes up with.
 
 COLIN
  Yes but that is irrelevant... the programmer said HOW it will get
  there... Sorry... no cigar... see the above

   My scientific claim is that the electromagnetic field structure is
 literally the third person view of qualia.
 
   Eh? Electromagnetic field of what? The brain? If so, do you think
 that
 chemical potentiation plays no role at all in qualia?
 
 Chemical potentiation IS electric field. There's no such thing as
 'mechanical'; there's no such thing as 'chemical'. These are all
 metaphors
 in certain contexts for what is there...space and charge (yes...and mass
 associated with certain charge carriers). Where did you get this weird
 idea that a metaphor can make qualia?
 
 The electric field across the membrane of cells (astrocytes and
 neurons)
  is MASSIVE. MEGAVOLTS/METER. Think SPARKS and LIGHTNING. It
 dominates the
 entire structure! It does not have to go anywhere. It just has to 'be'.
 You 'be' it to get what it delivers. Less than 50% of the signalling in
 the brain is synaptic, anyway! The dominant cortical process is actually
 an astrocyte syncytium. (look it up!). I would be very silly to
 ignore the
 single biggest, most dominant process of the brain that is so far
 completely correlated in every way with qualia...in favour of any other
 cause.
 ---
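  (A rough order-of-magnitude check on the megavolts-per-metre figure,
  assuming a typical resting potential of about 70 mV across a lipid
  bilayer of about 7 nm:

  E = \frac{V}{d} \approx \frac{0.07\ \text{V}}{7 \times 10^{-9}\ \text{m}}
    = 10^{7}\ \text{V/m} = 10\ \text{MV/m},

  i.e. of the order of ten megavolts per metre.)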
 
 Once again I'd like to get you to ask yourself the killer question:
 
 What is the kind of universe we must live in if the electromagnetic
 field
 structure of the brain delivers qualia?
 
 A. It is NOT the universe depicted by the qualia (atoms, molecules,
 cells...). It is the universe whose innate capacity to deliver qualia is
 taken advantage of when configured like it appears when we use qualia
 themselves to explore it... cortical brain matter. (NOTE: Please do not
 make the mistake that sensors - peripheral affect -  are equivalent to
 qualia.)
 
 My original solution to
 
 Re: How would a computer know if it were conscious?
 
 stands. The computer must have a qualia-depiction of its external world
 and it will know it because it can do science. If it doesn't/can't
 it's a
 rock/doorstop. In any computer model, every time an algorithm decides what
 'is' (what is visible/there) it intrinsically defines 'what isn't' (what is
 invisible/not there). All novelty thus becomes pre-ordained.
 
 anyway... Ultimately 'how' qualia are generated is moot.
 
 That they are _necessarily_ involved is the key issue. On their own they
 are not sufficient for science to occur.
 
 cheers
 colin
 


Re: How would a computer know if it were conscious? - this looks best in fixed space font

2007-06-19 Thread Mark Peaty

my a/, b/, c/, look terrible in variable spaced font, they were 
prepared and sent in fixed font but the message I got back put 
them in variable spacing and so out of alignment.


Regards

Mark Peaty  CDES

[EMAIL PROTECTED]

http://www.arach.net.au/~mpeaty/

Mark Peaty wrote:
 [Grin] I just found your question here John.

snip
 As I see it, this term is an equivalent expression to my UMSITW 
 'updating model of self in the world'. It entails a 
 self-referencing, iterative process.
 For humans there is something like at least three iterations 
 working in parallel and such that the 'output' of any of them 
 can become the 'input' of any other. Something like:

 a/ basic animal responses to the world -
 Senses--|  brain stem  |-||
 Senses--|   thalamus   |-|body motor image|-muscles
 proprioception--|basal ganglia |-|   body image   |
 
 b/ high speed discrepancy checking -
 body motor image-|cerebellum|-muscles
 body sense image-| memory   |-body motor/pre motor image
 
 c/ multi-tasking, prioritising [Global workspace]
 frontal cortex|hippocampus|--multiple cortex
 brain stem, thalamus--| memory|-body motor/pre motor image
 basal ganglia-|   |--cerebellum
 
snip




Re: How would a computer know if it were conscious?

2007-06-19 Thread John Mikes
  (such as
  'information'), rather than anything real.
 
 
  COLIN
  The system (a) automatically prescribes certain trajectories and
 
  RUSSEL
Yes.
 
  COLIN
  (b) assumes that the theorem space [and] natural world are the
 same
  space
  and equivalently accessed.
 
  RUSSEL
No - but the system will adjust its model according to feedback.
  That is
  the very nature of any learning algorithm, of which EP is just   one
  example.
 
  COLIN
  Ok. Here's where we find the big assumption. Feedback? HOW?...by
 whose
  rules? Your rules. This is the real circularity which underpins
  computationalism. It's the circularity that my real physical qualia
  model
  cuts and kills. Mathematically:
 
  * You have knowledge KNOWLEDGE(t) of 'out there'
  * You want more knowledge of 'out there' so
  * KNOWLEDGE(t+1) is more than KNOWLEDGE(t)
  * in computationalism who defines the necessary route to this?...
 
   d(KNOWLEDGE(t))
   --- = something you know = YOU DO.
  dt
 
  So this means that in a computer abstraction.
 
  d(KNOWLEDGE(t))
  ---  is already part of KNOWLEDGE(t)
dt
 
  You can label it 'evolutionary' or 'adaptive' or
  whatever...ultimately the
  rules are YOUR rules and come from your previously derived
  KNOWLEDGE(t) of
  'out there', not intrinsically grounded directly in 'out there'. Who
  decided what you don't know? YOU DID. What is it based on? YOUR
 current
  knowledge of it, not what is literally/really there. Ungroundedness
  is the
  fatal flaw in the computationalist model. Intrinsic grounding in the
  external world is what qualia are for. It means that
 
  d(KNOWLEDGE(t))
  ---
dt
 
  is
  (a) built into the brain hardware (plasticity chemistry, out of your
  cognitive control)
  (b) partly grounded in matter literally/directly constructed in
  representation of the external world, reflecting the external world
 so
  that NOVELTY - true novelty in the OUTSIDE WORLD - is apparent.
 
  In this way your current knowledge minimally impacts
 
  d(KNOWLEDGE(t))
  ---
dt
 
  In other words, at the fundamental physics level:
 
  d(KNOWLEDGE(t))
  ---
dt
 
  in a human brain is NOT part of KNOWLEDGE(t). Qualia are the brain's
  solution to the symbolic grounding problem.
 
 
  RUSSEL
Not at all. In Evolutionary Programming, very little is known
  about the
  ultimate solution the algorithm comes up with.
 
  COLIN
  Yes but that is irrelevant... the programmer said HOW it will get
  there... Sorry... no cigar... see the above

    My scientific claim is that the electromagnetic field structure is
  literally the third person view of qualia.
 
Eh? Electromagnetic field of what? The brain? If so, do you think
  that
  chemical potentiation plays no role at all in qualia?
 
  Chemical potentiation IS electric field. There's no such thing as
  'mechanical'; there's no such thing as 'chemical'. These are all
  metaphors
  in certain contexts for what is there...space and charge (yes...and
 mass
  associated with certain charge carriers). Where did you get this
 weird
  idea that a metaphor can make qualia?
 
  The electric field across the membrane of cells (astrocytes and
  neurons)
  is MASSIVE. MEGAVOLTS/METER. Think SPARKS and LIGHTNING. It
  dominates the
  entire structure! It does not have to go anywhere. It just has to
 'be'.
  You 'be' it to get what it delivers. Less than 50% of the signalling
 in
  the brain is synaptic, anyway! The dominant cortical process is
 actually
  an astrocyte syncytium. (look it up!). I would be very silly to
  ignore the
  single biggest, most dominant process of the brain that is so far
  completely correlated in every way with qualia...in favour of any
 other
  cause.
  ---
 
  Once again I'd like to get you to ask yourself the killer question:
 
  What is the kind of universe we must live in if the electromagnetic
  field
  structure of the brain delivers qualia?
 
  A. It is NOT the universe depicted by the qualia (atoms, molecules,
  cells...). It is the universe whose innate capacity to deliver qualia is
  taken advantage of when configured like it appears when we use qualia
  themselves to explore it... cortical brain matter. (NOTE: Please do
 not
  make the mistake that sensors - peripheral affect -  are equivalent
 to
  qualia.)
 
  My original solution to
 
  Re: How would a computer know if it were conscious?
 
  stands. The computer must have a qualia-depiction of its external
 world

Re: How would a computer know if it were conscious?

2007-06-19 Thread Colin Hales

Dear David,
(see below.. I left your original text here...
=
 4) Belief in 'magical emergence'  qualitative novelty of a kind
 utterly unrelated to the componentry.

Hi Colin

I think there's a link here with the dialogue in the 'Asifism' thread
between Bruno and me. I've been reading Galen Strawson's
Consciousness and its place in Nature, which has re-ignited some of
the old hoo-hah over 'panpsychism', with the usual attendant
embarrassment and name-calling. It motivated me to try to unpack the
basic semantic components that are difficult to pin down in these
debates, and for this reason tend to lead to mutual incomprehension.

Strawson refers to the 'magical emergence' you mention, and what is in
his view (and mine) the disanalogy of 'emergent' accounts of
consciousness with, say, how 'liquidity' supervenes on molecular
behaviour. So I started from the question: what would have to be the
case at the 'component' level for such 'emergence' to make sense (and
I'm aiming at the semantics here, not 'ultimate truth', whatever that
might be). My answer is simply that for 'sensing' and 'acting' to
'emerge' (i.e. supervene on) some lower level, that lower level must
itself 'sense' and 'act' (or 'grasp', a word that can carry the
meaning of both).

What sense does it make to say that, for example, sub-atomic
particles, strings, or even Bruno's numbers, 'grasp' each other?
Well, semantically, the alternative would be that they would shun and
ignore each other, and we wouldn't get very far on that basis. They
clearly seem to relate according to certain 'rules', but we're not so
naive (are we?) as to suppose that these are actually 'laws' handily
supplied from some 'external' domain. Since we're talking 'primitives'
here, then such relating, such mutual 'grasping', must just *be*.
There's nothing wrong conceptually here, we always need an axiomatic
base, the question is simply where to situate it, and semantically IMO
the buck stops here or somewhere closely adjacent.

The cool thing about this is, that if we start from such primitive
'grasping', then higher-level emergent forms of full sense-action can
now emerge organically by (now entirely valid) analogy with purely
action-related accounts such as liquidity, or for that matter, the
emergence of living behaviour from 'dead matter'. And the obvious
complexity of the relation between, say quantum mechanics and, say,
the life cycle of the sphex wasp, should alert us to an equivalent
complexity in the relationship between primitive 'grasp' and its fully
qualitative (read: participatory) emergents - so please let's have no
(oh-so-embarrassing) 'conscious electrons' here.

Further, it shows us in what way 'software consciousness' is
disanalogous with the evolved kind. A computer, or a rock for that
matter, is of course also a natural emergent from primitive grasping,
and this brings with it sense-action, but in the case of these objects
more action than sense at the emergent level. The software level of
description, however, is merely an imputation, supplied externally
(i.e. by us) and imposed as an interpretation (one of infinitely many)
on the fundamental grasped relations of the substrate components. By
contrast, the brain (and here comes the research programme) must have
evolved (crucially) to deploy a supremely complex set of 'mirroring'
processes that is (per evolution) genuinely emergent from the
primitive 'grasp' of the component level.

From this comes (possibly) the coolest consequence of these semantics:
our intrinsic 'grasp' of our own motivation (i.e. will, whether 'free'
or not), our participative qualitative modalities, the relation of our
suffering to subsequent action, and so forth, emerge as indeed
'something like' the primitive roots from which they inherit these
characteristics. This is *real* emergence, not magical, and at one
stroke demolishes epiphenomenalism, zombies, uploading fantasies and
all the other illusory consequences of confusing the 'external
world' (i.e. a projection) with the participatory one in which we are
included.

===
Methinks you 'get it'. You are far more eloquent than I am, but we talk of
the same thing..

Liquidity is to H2O
as
??? is to consciousness (qualia)

where I identify ??? as a necessary primitive and comment that
'computation' or 'information' or 'complexity' have only the vaguest of an
arm waving grip on any claim to such a specific role. Such is the 'magical
emergence' genre.

I have a viable candidate for the 'necessary primitive' of the kind you
seek. Note that regardless of what anyone's suggestion  for such a thing
might be, the process of declaring it valid must arise in the form of (as
you intuit above) an axiom. That is, it must come in the form of a
statement such as

X = It is a fundamental principle of the natural world that  such and
such a thing is the ultimate necessary primitive state of affairs that

Re: How would a computer know if it were conscious?

2007-06-18 Thread David Nyman

On Jun 14, 7:19 pm, David Nyman [EMAIL PROTECTED] wrote:

 Kant saw
 this clearly in terms of his 'windowless monads', but these, separated
 by the 'void', indeed had to be correlated by divine intervention,
 since (unaware of each other) they could not interact.

Er, no he didn't.  Leibniz did, however.

 On Jun 14, 4:46 am, Stathis Papaioannou [EMAIL PROTECTED] wrote:

  Of course all that is true, but it doesn't explain why neurons in the cortex
  are the ones giving rise to qualia rather than other neurons or indeed
  peripheral sense organs.

 Well, you might as well ask why the engine drives the car and not the
 brakes.  Presumably (insert research programme here) the different
 neural (or other relevant) organisation of the cortex is the
 difference that makes the difference.  My account would run like this:
 the various emergent organs of the brain and sensory apparatus (like
 everything else) supervene on an infrastructure capable of 'sense-
 action'.  I'm (somewhat) agnostic about the nature of this
 infrastructure: conceive it as strings, particles, or even Bruno's
 numbers.  But however we conceptualise it, it must (logically) be
 capable of 'sense-action' in order for activity and cognition to
 supervene on it.  Then what makes the difference in the cortex must be
 a supremely complex 'mirroring' mode of organisation (a 'remembered
 present') lacked by other organs.  To demonstrate this will be a
 supremely difficult empirical programme, but IMO it presents no
 invincible philosophical problems if conceived in this way.

 A note here on 'sense-action':  If we think, for example and for
 convenience, of particles 'reacting' to each other in terms of the
 exchange of 'forces', ISTM quite natural to intuit this as both
 'awareness' or 'sensing', and also 'action'.  After all, I can't react
 to you if I'm not aware of you.  IOW, the 'forces' *are* the sense-
 action.  And at this fundamental level, such motivation must emerge
 intrinsically (i.e. *something like* the way we experience it) to
 avoid a literal appeal to any extrinsic source ('laws').  Kant saw
 this clearly in terms of his 'windowless monads', but these, separated
 by the 'void', indeed had to be correlated by divine intervention,
 since (unaware of each other) they could not interact.  Nowadays, no
 longer conceiving the 'void' as 'nothing', we substitute a modulated
 continuum, but the same semantic demands apply.

 David

  On 14/06/07, Colin Hales [EMAIL PROTECTED] wrote:

   Colin
   This point is poised on the cliff edge of loaded word meanings and their
   use with the words 'sufficient' and 'necessary'. By technology I mean
   novel artifacts resulting from the trajectory of causality including human
   scientists. By that definition 'life', in the sense you infer, is not
   technology. The resulting logical loop can be thus avoided. There is a
   biosphere that arose naturally. It includes complexity of sufficient depth
   to have created observers within it. Those observers can produce
   technology. Douglas Adams (bless him) had the digital watch as a valid
   product of evolution - and I agree with him - it's just that humans are
   necessarily involved in its causal ancestry.

  Your argument that only consciousness can give rise to technology loses
  validity if you include "must be produced by a conscious being" as part of
  the definition of technology.

   COLIN
   That assumes that complexity itself (organisation of information) is
   the
   origin of consciousness in some unspecified, unjustified way. This
   position is completely unable to make any empirical predictions
   about the
   nature of human conscousness (eg why your cortex generates qualia
   and your
   spinal chord doesn't - a physiologically proven fact).

   STATHIS
Well, why does your eye generate visual qualia and not your big toe?
   It's because the big toe lacks the necessary machinery.

   Colin
   I am afraid you have your physiology mixed up. The eye does NOT generate
   visual qualia. Your visual cortex generates it based on measurements in
   the eye. The qualia are manufactured and simultaneously projected to
   appear to come from the eye (actually somewhere medial to them). It's how
   you have 90degrees++ peripheral vision. The same visual qualia can be
   generated without an eye (hallucination/dream). Some blind (no functioning
   retina) people have a visual field for numbers. Other cross-modal mixups
   can occur in synesthesia (you can hear colours, taste words). You can have
   a phantom big toe without having any big toe at alljust because the
   cortex is still there making the qualia. If you swapped the sensory nerves
   in two fingers the motor cortex would drive finger A and it would feel
   like finger B moved and you would see finger A move. The sensation is in
   your head, not the periphery. It's merely projected at the periphery.

  Of course all that is true, but it doesn't explain 

Re: How would a computer know if it were conscious?

2007-06-17 Thread Brent Meeker

Colin Hales wrote:
 Hi,
 
 RUSSEL
 All I can say is that I don't understand your distinction. You have
 introduced a new term necessary primitive - what on earth is that? But
 I'll let this pass, it probably isn't important.
 
 COLIN
 Oh no you don't!! It matters. Bigtime...
 
  Take away the necessary primitive: no 'qualitative novelty'
 Take away the water molecules: No lake.
 Take away the bricks, no building
 Take away the atoms: no molecules
 Take away the cells: no human
 Take away the humans: no humanity
 Take away the planets: no solar system
 Take away the X: No emergent Y
 Take away the QUALE: No qualia
 
  Magical emergence is when you claim Y exists but you can't
 identify an X. Such as:
 
 Take away the X: No qualia
 
  but then... you claim qualia result from 'information complexity' or
 'computation' or 'function' and you fail to say what X can be. Nobody can.
 
 You can't use an object derived using the contents of
 consciousness(observation) to explain why there are any contents of
  consciousness(observation) at all. It is illogical. (see the Wigner quote
 below). I find the general failure to recognise this brute reality very
 exasperating.

Prepare to be exasperated then.  I see no contradiction in explaining the 
existence of observation by using a theory derived from observation.  This is 
what we do.  There is no logical inference from observations to our theory of 
observation - it could have come to us in a dream or a revelation or a random 
quantum fluctuation.  If the theory then passes the usual scientific tests, we 
can say it provides an explanation of observation.  Of course there are other 
senses of explanation.  One might be to explain how you know that such a 
thing as observation exists.  I'd say just like I know about anything else - I 
observe it.

Brent Meeker

 
 COLIN
 snip
 So this means that in a computer abstraction.
  d(KNOWLEDGE(t))/dt  is already part of KNOWLEDGE(t)
 
 RUSSEL
 No its not. dK/dt is generated by the interaction of the rules with the
 environment.
 
 No. No. No. There is the old assumption thing again.
 
 How, exactly, are you assuming that the agent 'interacts' with the
  environment? This is the world external to the agent, yes? Do not say
  through sensory measurement, because that will not do. There are an
  infinite number of universes that could give rise to the same sensory
  measurements. We are electromagnetic objects. Basic EM theory. 

EM is linear.  You can't even make subluminal matter from EM, much less atoms 
and people.

Proven
 mathematical theorems. The solutions are not unique for an isolated
 system.
 
 Circularity.Circularity.Circularity.
 
 There is _no interaction with the environment_ except for that provided by
 the qualia as an 'as-if' proxy for the environment. The origins of an
 ability to access the distal external world in support of such a proxy is
 mysterious but moot. It can and does happen, and that ability must come
 about because we live in the kind of universe that supports that
 possibility. The mysteriousness of it is OUR problem.
 
 RUSSEL
 Evolutionary algorithms are highly effective
 information pumps, pumping information from the environment into the
 genome, or whatever representation you're using to store the solutions.
 
 COLIN
 But then we're not talking about merely being 'highly effective'
 in a target problem domain, are we? We are talking about proving
  consciousness in a machine. I agree - evolutionary algorithms are great
 things... they are just irrelevant to this discussion.
 
 COLIN
  My scientific claim is that the electromagnetic field structure is
  literally the third person view of qualia.
 Eh? Electromagnetic field of what? The brain? If so, do you think
 that
 chemical potentiation plays no role at all in qualia?
 Chemical potentiation IS electric field.
 
 RUSSEL
 Bollocks. A hydrogen molecule and an oxygen atom held 1m apart have
 chemical potential, but there is precious little electric field
 
 I am talking about the membrane and you are talking atoms so I guess we
  missed somehow... anyway... The only 'potentiation' that really matters in
 my model is that which looks like an 'action potential' longitudinally 
 traversing dendrite/soma/axon membrane as a whole.
 
 Notwithstanding this
 
 The chemical potentiation at the atomic level is entirely an EM phenomenon
 mediated by QM boundaries (virtual photons in support of the shell
  structure, also EM). It is a sustained 'well/energy minimum' in the EM
  field structure... You think there is such a 'thing' as potential? There
 is no such thing - there is something we describe as 'EM field'. Nothing
 else. Within that metaphor is yet another even more specious metaphor:
 Potential is an (as yet unrealised) propensity of the field at a
  particular place to do work on a charge if it were put there. You can
 place that charge in it and get a number out of an electrophysiological
 probe... and 'realise' the work 

Re: How would a computer know if it were conscious?

2007-06-17 Thread Colin Hales

Hi Quentin,

 What is the kind of universe we must live in if the electromagnetic field
 structure of the brain delivers qualia?

  A. It is NOT the universe depicted by the qualia (atoms, molecules,
  cells...). It is the universe whose innate capacity to deliver qualia is
  taken advantage of when configured like it appears when we use qualia
  themselves to explore it... cortical brain matter. (NOTE: Please do not
 make the mistake that sensors - peripheral affect -  are equivalent to
 qualia.)


I will only react to this...

and I will deposit a large collection of weirdness for you to ponder

Q. What is cortical brain matter?
Let us call our first candidate consistent with all the facts a monism
made of MON_STUFF. We must give ourselves the latitude to consider various
candidates.  For present purposes it does not matter what it is. I will try
and answer your questions by bringing in properties. So cortical brain
matter is made of a collection of MON_STUFF. Not atoms. Atoms are
organised MON_STUFF. Quarks are organised MON_STUFF. The MON_STUFF I
choose, which seems to deliver everything I need and is the simplest
possible choice, is 'the fluctuation'.

Q. Does it exist by itself?
No. It is nested MON_STUFF all the way down. It is intrinsically dynamic
and fleeting. Anything made of MON_STUFF is persistent organisational
structure within a massive collection of fleeting change. Exactly like the
shapes in the water coming out of a garden hose. There is a critical
minimum collection of it, from which all subsequent structure is derived.
That minimum is created the way collections of turbulent water molecules
break off and self-sustain an eddy/vortex once a critical threshold is
reached. Ultimately there is no need to prescribe an ultimate minimum
'atom-ish' minimal size MON_STUFF fluctuation to predict qualia. Someone
else's problem. I don't need to solve that. The fluctuation model
works...that's all I need to progress.

Q. If so, what is it composed of (matter?)?
Well it's not, so I don't have to fall into this logical hole.

Q. What is matter?
Hierarchically organised persistent but intrinsically dynamic (continually
refreshed) structures of MON_STUFF

Q. What is brain?
I think we already did this.

Q. Why does cortical brain matter generate qualia?
There is one single simple fundamental principle at the heart of it: At
all scales and all locations, when you 'be'  any MON_STUFF the 'view of
the rest of the universe' is delivered innately as 'NOT_ME'. Call it the
COLIN principle of universal subjectivity... I don't care... like the
fluctuation. This is as simple as it gets.

Q. Why must it be so?
With the fundamental principle that perspective view at all scales
literally is the source of qualia, the whole reasoning changes from one of
WHY to one of WHERE/WHEN... which is what you ask. It is a question of
visibility. It is 'like' 'NOT_ELECTRON' to be a collection of MON_STUFF
behaving electronly. That is not 'about' being an electron. It IS an
electron. Not only that, there is a blizzard of the little blighters with
no collective 'story' to tell. Their collective summated scene is ZERO.

Q. Is qualia a dependence of cortical brain matter or the inverse?
If I get you correctly it's 'INVERSE'.

Q. Is qualia responsible for what looks like cortical brain matter?
It's not 'responsible' in that it doesn't 'cause brain matter'. Qualia
present a visual scene -  a representation. In the scene we see brain
matter.

Q. Or is it cortical brain matter that makes us feel
qualia, which in turn asks questions about cortical brain matter?
No. Cortical brain material is an appearance of MON_STUFF created by
special MON_STUFF doing the 'appearance dance'. When it does that dance
... (the cortical grey matter membrane dance)... it creates an appearance
of atoms, molecules, cells, tissue because these are persistent nested
structures of MON_STUFF doing the atom dance, the molecule dance, the cell
dance. etc..etc.

As weird and hard to assimilate as it sounds... It all comes down to the
two simplest possible basic premises:

1) A universe consisting of a massive number of one generic elemental
process, the fluctuation.

2) A universe in which the perspective view from the point of view of
'being' ME, an elemental fluctuation, is 'NOT ME' (the rest of the
universe).

The excitable cell dance is the only dance that has its own story
independent of the underlying MON_STUFF organisational layers. That is the
only place where the net exertions of MON_STUFF have nothing to do with
any other dance. That is the organisational level where the visibility
finally manifests as non-zero... why neural soma are fat - it's all about
signal to noise ratio.

weirdness time over. Gotta go.

Colin Hales




Re: How would a computer know if it were conscious?

2007-06-17 Thread David Nyman

On Jun 17, 6:47 am, Colin Hales [EMAIL PROTECTED] wrote:

 Magical emergence is when you claim Y exists but you can't
 identify an X. Such as:

 Take away the X: No qualia

 but then... you claim qualia result from 'information complexity' or
 'computation' or 'function' and you fail to say what X can be. Nobody can.

Phew, it's difficult to break into this debate!  Colin, I'm trying to
support your line of argument here, so do us both a favour and tell me
what you think is wrong (or isn't it even wrong?)  I'll reiterate in
the simplest way I can. At root, my take is that our only 'primitive'
direct contact with 'reality' is what you're calling 'qualia' -
*everything else* is metaphor.  Consequently, what we must do is
establish the connection between the reality of 'qualia' (what I'm now
going to call our 'personal world') and whatever metaphor we choose to
adopt.

The example I've been using for the metaphor is particle-force, which
is just a generalisation of the notion of 'relationship' within a
differentiated continuum. What we most need to account for in our
personal worlds is our direct contact with multiple modes of awareness
and motivation.  We really see, hear, suffer, will, and act.  What I'm
saying is that to make the 'emergence' of such realities semantically
coherent, they must be inherited from primitive 'relationship' that
has these characteristics in 'reduced' form - i.e. a mediator that
unites 'sensing' and 'acting'.  In the conventional 'physical'
account, the mediator is tacitly assumed to carry only 'acting', and
hence direct personal-world 'sensing' is - crucially - lost at
source.  This is because these accounts map to abstracted *models* of
'external worlds', not to 'personal worlds', and consequently are
'uninhabited' (i.e. zombies).

In the particle-force metaphor, 'particle' is a differentiated
'entity', and 'force' is necessarily both mediator of its 'sensing' of
other particles, and their 'interaction'.  'Necessarily' because,
primitively, an entity can't 'interact' with another without 'sensing'
it (as Leibniz's monads demonstrate).  This is a key point!  One could
say then that particles 'grasp' each other.  Now we can map from such
primitive 'grasp' in two directions.  First: upwards via genuine
emergence to the multiple modalities of 'grasping' within our personal
worlds - seeing, hearing, willing, acting, suffering.  Such emergence
is genuine because, although we don't know the *precise* mapping,
we're not dodging the issue of what 'personal' grasp inherits from -
it builds on the primitive grasp of the 'particles' (or some
preferred, but semantically isomorphic, metaphor of primitive
relationship).  Second: from our personal worlds to 'external worlds'
beyond, but still in terms of a seamless continuation of the primitive
'grasped' relationship.  In this way, the 'external world' remains
inhabited.

IMHO this semantic model gives you a knock-down argument against
'computationalism', *unless* one identifies (I'm hoping to hear from
Bruno on this) the 'primitive' entities and operators with those of
the number realm - i.e. you make numbers and their relationships the
'primitive base'.  But crucially, you must still take these entities
and their relationships to be the *real* basis of personal-world
'grasp'.  If you continue to adopt a 'somethingist' view, then no
'program' (i.e. one of the arbitrarily large set that could be imputed
to any 'something') could coherently be responsible for its personal-
world grasp (such as it may be).  This is the substance of the UDA
argument.  All personal-worlds must emerge internally via recursive
levels of relationship inherited from primitive grasp: in a
'somethingist' view, such grasp must reside with a primitive
'something', as we have seen, and in a computationalist view, it must
reside in the number realm.  But the fundamental insight applies.

I think you can build all your arguments up from this base.  What do
you think?

David

 Hi,

 RUSSEL All I can say is that I don't understand your distinction. You have

 introduced a new term necessary primitive - what on earth is that? But
 I'll let this pass, it probably isn't important.

 COLIN
 Oh no you don't!! It matters. Bigtime...

 Take away the necessary primitive: no 'qualitative novelty'
 Take away the water molecules: No lake.
 Take away the bricks, no building
 Take away the atoms: no molecules
 Take away the cells: no human
 Take away the humans: no humanity
 Take away the planets: no solar system
 Take away the X: No emergent Y
 Take away the QUALE: No qualia

 Magical emergence is when you claim Y exists but you can't
 identify an X. Such as:

 Take away the X: No qualia

 but then... you claim qualia result from 'information complexity' or
 'computation' or 'function' and you fail to say what X can be. Nobody can.

 You can't use an object derived using the contents of
 consciousness(observation) to explain why there are any contents of
 consciousness(observation) at all. It 

Re: How would a computer know if it were conscious?

2007-06-17 Thread David Nyman

On Jun 17, 2:33 am, Russell Standish [EMAIL PROTECTED] wrote:

 You're obviously suggesting single neurons have qualia. Forgive me for
 being a little sceptical of this suggestion...

Russell, this is daft!  Surely the argument is getting completely lost
in the terminology here.  What on earth could you (or Colin, or
anyone), whether arguing pro or con, imagine 'qualia' could possibly
mean in this context?  And yet something (presumably) based on neurons
(and on whatever one's model-of-choice claims neurons are based on)
'possesses' them.  Or rather (since I think the 'possessing qualia'
way of speaking leads to utter incoherence) whatever exists
intrinsically (e.g. our personal world) is based on them.  Which is
equivalent to saying that this 'base' exists 'completely' (as opposed
to its *incomplete* - because abstract(ed) - 'physical description').

As I've argued (interminably) elsewhere, when we analyse our personal
worlds stringently, we find that our claims about them rest
principally on two capabilities (that are actually inseparable when
examined): 'sensing' and 'acting'.  These are the primitive intrinsic
semantic components of relationship, or 'grasp'.  'Absolute qualities'
are not at issue here.  These are ineluctably sui generis: a 'personal
modelling medium' can't *in itself* be communicated in terms of
'extrinsic objects' modelled within it.  But we can refer to it
ostensively: i.e. we *demonstrate* how elements of our personal worlds
relate to an inter-subjective 'extrinsic reality' (which is I think is
at root what Colin calls 'doing science').  But we *must* postulate
that all 'emergent' motivational and sensory modalities - i.e. what
comprises our personal worlds - *must* 'reduce' to components of the
same *completed* ontic category.

Now, if you want to try to imagine 'what it's like to be a neuron', I
can't help you.  But I do say that you shouldn't expect the relation
between this and 'what it's like to be Russell' to be any less complex
than, say, what a 'physical' description of a neuron 'is like' as
compared to an equivalent description of Russell.  IOW, pretty
tenuous. But - crucially - we accept in the case of the 'physical'
account that the components and the assembly belong in (a 'reduced'
version of) the *same ontic category*.  And, mutatis mutandis, the
'completed' relationships at fundamental and emergent levels likewise
belong in the same completed ontic category (i.e. the unique one - the
'abstracted physical' one now being revealed as merely partial).

A crucial aspect of this is that it emphasises - but crucially
*completes* - the causal closure of our explanations.  We can now see
that any 'physical' action-only account is radically incomplete - in
the massively real sense that *it can't work at all*.  Without the
'sensing' component of 'grasp', 'action' is snuffed out at the root
(Leibniz had to invoke divine intervention to get out of this one).  And
the 'grasp' itself must crucially be intuited as intrinsically self-
motivated (i.e. 'physical law' reduces to the self-motivated relating
of differentiated 'ultimate actors'). The self-motivation of the
'componentry' can then emerge somewhat later, transformed but intact
(mutatis mutandis) as the self-motivation of all manner of personal
and extrinsic emergents.

Our explanations about our motivations and the causal sequence from
personal to extrinsic worlds can now 'emerge' as indeed 'something
like' what we intuit in our personal worlds (phew!!).  I really did go
'ouch' *because it hurt*.  I fled the situation *because I was really
suffering*. The computer isn't conscious *purely in virtue of any
program I impute to it* (though like any other entity it is an
emergent with its proper fundamental 'grasp'). And BTW (pace Bruno)
all this could AFAICS equally well be postulated on AR - i.e. the
'self-motivated' primitive elements (relata) and operators (mediators)
of COMP.

What this means is that 'neurons' - whether further reduced to
particles, electromagnetic fields, or indeed the number realm: however
we choose to model the 'base' - must supervene on 'ultimate relata'
that interact in virtue of intrinsic 'grasp' (see my posts to Colin
and Bruno for more): i.e. 'sense' and 'action' united.  If we lose
this basic insight, we also lose the ability to map emergent 'mental'
and 'physical' phenomena to any ultimate entities on which we can
coherently claim they supervene.  Or rather, we retain only the
ability to map the *action* half of the sense-action 'grasp' (an
omission which should now be seen as fatally incoherent - how can
entities be claimed to act on each other without mutually sensing
their presence?).

This only ever *seemed* to make sense insofar as the physical
description of the world yields models abstracted from the 'completed'
existents to which they refer - not those existents themselves.  The
puzzlement over the lost 'sensing', however, returns with a vengeance
when the modelled existent becomes reflexive - 

Re: How would a computer know if it were conscious?

2007-06-17 Thread Colin Hales

Dear Brent,
If you had the most extravagant MRI machine in history, which trapped
complete maps of all electrons, nuclei and any photons and then plotted
them out - you would have a 100% complete, scientifically acquired,
publishable description, and in that description there would be absolutely no
prediction of or explanation of why it is necessarily 'like it is' to 'be'
the brain thus described, what that experience will be like. It would not
enable you to make any cogent claim as to why it is or is not 'like
something' to be a computer except insofar as it doesn't have neurons. Why
am I saying this... Please read David Chalmers. This is not new.

Science does not and never has EXPLAINED anything. It merely describes.
Read the literature. For the first time ever, to deal with qualia, science
has to actually EXPLAIN something. It is at the boundary condition where
you have to explain how you can observe anything at all.

As to your EM theory beliefs... please read the literature. Jackson's
Classical Electrodynamics is a brilliant place to start. For nobody
around here in electrical engineering agrees with you... and I have just
been grilled on that very issue by a whole pile of very senior academics -
who agree with me. Even my anatomy/neuroscience supervisor, who is
generally pathologically afraid of physics... tells me there's nothing
there but space and charge


If you want to draw a line around a specific zone of ignorance and inhabit
it... go ahead. If you want to believe that correlation is causation go
ahead. 'This is what we do' is what you say when you are a member of a
club, not a seeker of truth. You have self-referentially defined
truth... and you are welcome to it. ...

Meanwhile I'll just poke around in other areas. I hope you won't mind.
Please consider your exasperation quota reached. Job done.

colin








Re: How would a computer know if it were conscious?

2007-06-17 Thread Brent Meeker

Colin Hales wrote:
 Dear Brent,
 If you had the most extravagant MRI machine in history, which trapped
 complete maps of all electrons, nuclei and any photons and then plotted
 them out - you would have a 100% complete, scientifically acquired,
 publishable description, and in that description there would be absolutely no
 prediction of or explanation of why it is necessarily 'like it is' to 'be'
 the brain thus described, what that experience will be like. 

I think that is mere prejudice on your part.  It may be true, but I see no 
reason to assume it in advance.

It would not
 enable you to make any cogent claim as to why it is or is not 'like
 something' to be a computer except insofar as it doesn't have neurons. Why
  am I saying this... Please read David Chalmers. This is not new.

I have.  Please read Daniel Dennett's answer to Chalmers.

 
 Science does not and never has EXPLAINED anything. It merely describes.

So what is your idea of explanation?  Is it not a description of cause or 
purpose?

 Read the literature. For the first time ever, to deal with qualia, science
 has to actually EXPLAIN something. It is at the boundary condition where
 you have to explain how you can observe anything at all.

If I can explain how a cat or a robot observes something, does that count?

 
 As to your EM theory beliefs... please read the literature. Jackson's
 Classical Electrodynamics is a brilliant place to start. 

Yes, it was my textbook in graduate school.  I don't think Jackson would 
endorse your theory that there is nothing but EM fields.

For nobody
 around here in electrical engineering agrees with you... and I have just
 been grilled on that very issue by a whole pile of very senior academics -
  who agree with me. Even my anatomy/neuroscience supervisor, who is
  generally pathologically afraid of physics... tells me there's nothing
 there but space and charge

Have they not heard of quarks and electrons and gluons? It's really hard to 
make atoms without them.

 
 If you want to draw a line around a specific zone of ignorance and inhabit
  it... go ahead. If you want to believe that correlation is causation go
  ahead. 'This is what we do' is what you say when you are a member of a
  club, not a seeker of truth. You have self-referentially defined
  truth... and you are welcome to it. ...
 
 Meanwhile I'll just poke around in other areas. I hope you won't mind.
 Please consider your exasperation quota reached. Job done.

I hope you haven't given up on explaining observation.

Brent Meeker


 
 colin
 
 
 
 
 
  
 
 





Re: How would a computer know if it were conscious?

2007-06-17 Thread Russell Standish

On Sun, Jun 17, 2007 at 03:47:19PM +1000, Colin Hales wrote:
 
 Hi,
 
 RUSSEL
  All I can say is that I don't understand your distinction. You have
 introduced a new term necessary primitive - what on earth is that? But
 I'll let this pass, it probably isn't important.
 
 COLIN
 Oh no you don't!! It matters. Bigtime...
 
 Take away the necessary primitive: no 'qualitative novelty'
 Take away the water molecules: No lake.
 Take away the bricks, no building
 Take away the atoms: no molecules
 Take away the cells: no human
 Take away the humans: no humanity
 Take away the planets: no solar system
 Take away the X: No emergent Y
 Take away the QUALE: No qualia
 
 Magical emergence is when you claim Y exists but you can't
 identify an X. Such as:


OK, so by necessary primitive, you mean the syntactic or microscopic
layer. But take this away, and you no longer have emergence. See
endless discussions on emergence - my paper, or Jochen Fromm's book for
instance. Does this mean magical emergence is oxymoronic?


 
 You can't use an object derived using the contents of
 consciousness(observation) to explain why there are any contents of
 consciousness(observation) at all. It is illogical. (see the Wigner quote
 below). I find the general failure to recognise this brute reality very
 exasperating.
 

People used to think that about life. How can you construct (e.g. an
animal) without having a complete description of that animal? So how
can an animal self-reproduce without having a complete description of
itself? But this then leads to an infinite regress. 

The solution to this conundrum was found in the early 20th century -
first with such theoretical constructs as combinators and lambda
calculus, then later the actual genetic machinery of life. If it is
possible in the case of self-reproduction, then it will also likely be
possible in the case of self-awareness and consciousness. Stating that
this is illogical doesn't help. That's what people from the time of
Descartes thought about self-reproduction.
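
For what it's worth, the regress-free trick can be shown in two lines of
Python (a toy sketch of the idea only - obviously not the actual genetic
machinery, and not code from anyone in this thread):

  # A self-describing program (a quine).  Run by themselves, the two lines
  # below print an exact copy of those two lines: the string is used both
  # as data and as the template that data is substituted into, so no
  # description-of-a-description regress is needed.
  s = 's = %r\nprint(s %% s)'
  print(s % s)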

 COLIN
 snip
  So this means that in a computer abstraction.
  d(KNOWLEDGE(t))/dt  is already part of KNOWLEDGE(t)
 
 RUSSEL
  No its not. dK/dt is generated by the interaction of the rules with the
 environment.
 
 No. No. No. There is the old assumption thing again.
 
 How, exactly, are you assuming that the agent 'interacts' with the
 environment? This is the world external to the agent, yes? Do not say
 through sensory measurement, because that will not do. There are an
 infinite number of universes that could give rise to the same sensory
 measurements. 

All true, but how does that differ in the case of humans?

 We are electromagnetic objects. Basic EM theory. Proven
 mathematical theorems. The solutions are not unique for an isolated
 system.
 
 Circularity.Circularity.Circularity.
 
 There is _no interaction with the environment_ except for that provided by
 the qualia as an 'as-if' proxy for the environment. The origins of an
 ability to access the distal external world in support of such a proxy is
 mysterious but moot. It can and does happen, and that ability must come
 about because we live in the kind of universe that supports that
 possibility. The mysteriousness of it is OUR problem.
 

You've lost me completely here. 

 RUSSEL
  Evolutionary algorithms are highly effective
  information pumps, pumping information from the environment into the
 genome, or whatever representation you're using to store the solutions.
 
 COLIN
 But then we're not talking about merely being 'highly effective'
 in a target problem domain, are we? We are talking about proving
 consciousness in a machine. I agree - evolutionary algorithms are great
 things... they are just irrelevant to this discussion.
 

No, we're talking about doing science, actually, not proving
consciousness. And nothing indicates to me that science is any more
than a highly effective information pump finding regularities about
the world.

 RUSSEL
  Bollocks. A hydrogen molecule and an oxygen atom held 1m apart have
 chemical potential, but there is precious little electric field
 
 I am talking about the membrane and you are talking atoms so I guess we
 missed somehow...anywayThe only 'potentiation' that really matters in
 my model is that which looks like an 'action potential' longitudinally 
 traversing dendrite/soma/axon membrane as a whole.
 
 Notwithstanding this
 
 The chemical potentiation at the atomic level is entirely an EM phenomenon
 mediated by QM boundaries (virtual photons in support of the shell

I never said it wasn't an EM phenomenon. Just that chemical potential
is not an EM field. The confusion may arise because your head is full
of ionic chemistry (for which chemical potential is for all intents
and purposes identical to the electrical potential between the ions),
but there are two other types of chemical bonds - the covalent and the
hydrogen bond. Both of these types of bonds occur between neutral

Re: How would a computer know if it were conscious?

2007-06-16 Thread Colin Hales
 -  are equivalent to
qualia.)

My original solution to

Re: How would a computer know if it were conscious?

stands. The computer must have a qualia-depiction of its external world
and it will know it because it can do science. If it doesn't/can't it's a
rock/doorstop. In any computer model, every time an algorithm decides what
'is' (what is visible/there) it intrinsically defines 'what isn't' (what is
invisible/not there). All novelty thus becomes pre-ordained.

anyway... Ultimately 'how' qualia are generated is moot.

That they are _necessarily_ involved is the key issue. On their own they
are not sufficient for science to occur.

cheers
colin










Re: How would a computer know if it were conscious?

2007-06-16 Thread Quentin Anciaux

On Sunday 17 June 2007 02:02:28 Colin Hales wrote:
 What is the kind of universe we must live in if the electromagnetic field
 structure of the brain delivers qualia?

  A. It is NOT the universe depicted by the qualia (atoms, molecules,
  cells...). It is the universe whose innate capacity to deliver qualia is
  taken advantage of when configured like it appears when we use qualia
  themselves to explore it... cortical brain matter. (NOTE: Please do not
 make the mistake that sensors - peripheral affect -  are equivalent to
 qualia.)

I will only react to this...

What is cortical brain matter? Does it exist by itself? If so, what is it 
composed of (matter?)? What is matter? What is brain? Why does cortical brain 
matter generate qualia? Why must it be so? Is qualia a dependence of 
cortical brain matter or the inverse? Is qualia responsible for what looks 
like cortical brain matter, or is it cortical brain matter that makes us feel 
qualia, which in turn asks questions about cortical brain matter?

Quentin




Re: How would a computer know if it were conscious?

2007-06-16 Thread Russell Standish

On Sun, Jun 17, 2007 at 10:02:28AM +1000, Colin Hales wrote:
 
 Hi,
 I am going to have to be a bit targetted in my responses I am a TAD
 whelmed at the moment.
 
 COLIN
  4) Belief in 'magical emergence'  qualitative novelty of a kind
 utterly unrelated to the componentry.
 
 RUSSEL
  The latter clause refers to emergence (without the magical
  qualifier), and it is impossible IMHO to have creativity without
 emergence.
 
 COLIN
 The distinction between 'magical emergence' and 'emergence' is quite
 obviously intended by me. A lake is not apparent in the chemical formula
 for water. I would defy anyone to quote any example of real-world
 'emergence' that does not ultimately rely on a necessary primitive.
 'Magical emergence' is when you claim 'qualitative novelty' without having
 any idea (you can't point at it) of the necessary primitive, or by
 defining an arbitrary one that is actually a notional construct (such as
 'information'), rather than anything real.

All I can say is that I don't understand your distinction. You have
introduced a new term necessary primitive - what on earth is that?

But I'll let this pass, it probably isn't important.

 COLIN
  Ok. Here's where we find the big assumption. Feedback? HOW?... by whose
  rules? Your rules. This is the real circularity which underpins
 computationalism. It's the circularity that my real physical qualia model
 cuts and kills. Mathematically:
 
 * You have knowledge KNOWLEDGE(t) of 'out there'
 * You want more knowledge of 'out there' so
 * KNOWLEDGE(t+1) is more than KNOWLEDGE(t)
 * in computationalism who defines the necessary route to this?...
 
   d(KNOWLEDGE(t))/dt = something you know = YOU DO.
 
 So this means that in a computer abstraction.
 
  d(KNOWLEDGE(t))/dt  is already part of KNOWLEDGE(t)


No it's not. dK/dt is generated by the interaction of the rules with
the environment. Evolutionary algorithms are highly effective
information pumps, pumping information from the environment into the
genome, or whatever representation you're using to store the solutions.
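
A minimal sketch of that 'information pump' (illustrative Python only, not
anyone's actual model; the target string simply stands in for the
environment): the mutation and selection rules are fixed in advance, yet
the bit pattern that ends up stored in the 'genome' is dictated by the
environment, not by the rules.

  import random
  random.seed(0)

  TARGET = [random.randint(0, 1) for _ in range(20)]    # stands in for the environment

  def fitness(genome):
      # the only channel to the environment: how many bits agree with it
      return sum(g == t for g, t in zip(genome, TARGET))

  genome = [random.randint(0, 1) for _ in range(20)]     # arbitrary initial 'knowledge'
  while fitness(genome) < len(TARGET):
      child = [1 - g if random.random() < 0.05 else g for g in genome]  # fixed mutation rule
      if fitness(child) >= fitness(genome):                             # fixed selection rule
          genome = child

  print(genome == TARGET)   # True: the environment's bits have been pumped into the genome

Colin's point that the update rule is written in advance is true of this
program too; Russell's point is that what that rule ends up writing into
the genome is nevertheless supplied from outside it.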

 
   My scientific claim is that the electromagnetic field structure is
  literally the third person view of qualia.
 
  Eh? Electromagnetic field of what? The brain? If so, do you think that
 chemical potentiation plays no role at all in qualia?
 
 Chemical potentiation IS electric field. 

Bollocks. A hydrogen molecule and an oxygen atom held 1m apart have
chemical potential, but there is precious little electric field
between them. Furthermore, the chemical potential is independent of
the separation, unlike the electric field.

 There's no such thing as
 'mechanical'; there's no such thing as 'chemical'. These are all metaphors
 in certain contexts for what is there...space and charge (yes...and mass
 associated with certain charge carriers). Where did you get this weird
 idea that a metaphor can make qualia?
 

Why do you think space and charge are not metaphors also? I would not
be so sure on this matter.

 The electric field across the membrane of cells (astrocytes and neurons)
 is MASSIVE. MEGAVOLTS/METER. Think SPARKS and LIGHTNING. It dominates the
 entire structure! It does not have to go anywhere. It just has to 'be'.
 You 'be' it to get what it delivers. Less than 50% of the signalling in
 the brain is synaptic, anyway! The dominant cortical process is actually
 an astrocyte syncytium. (look it up!). I would be very silly to ignore the
 single biggest, most dominant process of the brain that is so far
 completely correlated in every way with qualia...in favour of any other
 cause.
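
(For scale, the megavolts-per-metre figure is easy to check with textbook
numbers - a back-of-envelope Python sketch using a standard ~70 mV resting
potential across a ~7 nm lipid bilayer, values not taken from this thread:

  resting_potential = 0.07     # volts, typical resting membrane potential (~70 mV)
  membrane_thickness = 7e-9    # metres, typical lipid bilayer thickness (~5-10 nm)
  print(resting_potential / membrane_thickness)   # ~1e7 V/m, i.e. about 10 MV/m

so the field across the membrane itself is indeed of order 10^7 V/m.)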

You're obviously suggesting single neurons have qualia. Forgive me for
being a little sceptical of this suggestion...


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-16 Thread Colin Hales

Hi,

RUSSEL
 All I can say is that I don't understand your distinction. You have
introduced a new term necessary primitive - what on earth is that? But
I'll let this pass, it probably isn't important.

COLIN
Oh no you don't!! It matters. Bigtime...

Take away the necessary primitive: no 'qualitative novelty'
Take away the water molecules: No lake.
Take away the bricks, no building
Take away the atoms: no molecules
Take away the cells: no human
Take away the humans: no humanity
Take away the planets: no solar system
Take away the X: No emergent Y
Take away the QUALE: No qualia

Magical emergence is when you claim Y exists but you can't
identify an X. Such as:

Take away the X: No qualia

but then... you claim qualia result from 'information complexity' or
'computation' or 'function' and you fail to say what X can be. Nobody can.

You can't use an object derived using the contents of
consciousness(observation) to explain why there are any contents of
consciousness(observation) at all. It is illogical. (see the Wigner quote
below). I find the general failure to recognise this brute reality very
exasperating.

COLIN
snip
 So this means that in a computer abstraction.
 d(KNOWLEDGE(t))/dt  is already part of KNOWLEDGE(t)

RUSSEL
 No its not. dK/dt is generated by the interaction of the rules with the
environment.

No. No. No. There is the old assumption thing again.

How, exactly, are you assuming that the agent 'interacts' with the
environment? This is the world external to the agent, yes? Do not say
through sensory measurement, because that will not do. There are an
infinite number of universes that could give rise to the same sensory
measurements. We are electromagnetic objects. Basic EM theory. Proven
mathematical theorems. The solutions are not unique for an isolated
system.

Circularity.Circularity.Circularity.

There is _no interaction with the environment_ except for that provided by
the qualia as an 'as-if' proxy for the environment. The origins of an
ability to access the distal external world in support of such a proxy is
mysterious but moot. It can and does happen, and that ability must come
about because we live in the kind of universe that supports that
possibility. The mysteriousness of it is OUR problem.
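
A toy version of the non-uniqueness point (a Python sketch, purely
illustrative and nothing to do with EM theory or anyone's model): two
different 'worlds' that agree at every point an agent can sample are
indistinguishable from the sensor readings alone.

  SENSOR_POSITIONS = [0.0, 1.0, 2.0, 3.0]   # the only places the agent can measure

  def world_a(x):
      return x * x

  def world_b(x):
      # agrees with world_a exactly on the sensor grid, differs everywhere else
      return x * x + x * (x - 1.0) * (x - 2.0) * (x - 3.0)

  readings_a = [world_a(x) for x in SENSOR_POSITIONS]
  readings_b = [world_b(x) for x in SENSOR_POSITIONS]

  print(readings_a == readings_b)        # True: identical measurements
  print(world_a(0.5) == world_b(0.5))    # False: yet the worlds differ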

RUSSEL
 Evolutionary algorithms are highly effective
 information pumps, pumping information from the environment into the
genome, or whatever representation you're using to store the solutions.

COLIN
But then we're not talking about merely being 'highly effective'
in a target problem domain, are we? We are talking about proving
consciousness in a machine. I agree - evolutionary algorithms are great
things... they are just irrelevant to this discussion.

COLIN
  My scientific claim is that the electromagnetic field structure is
 literally the third person view of qualia.
  Eh? Electromagnetic field of what? The brain? If so, do you think
that
 chemical potentiation plays no role at all in qualia?
 Chemical potentiation IS electric field.

RUSSEL
 Bollocks. A hydrogen molecule and an oxygen atom held 1m apart have
chemical potential, but there is precious little electric field

I am talking about the membrane and you are talking atoms so I guess we
missed somehow... anyway... The only 'potentiation' that really matters in
my model is that which looks like an 'action potential' longitudinally 
traversing dendrite/soma/axon membrane as a whole.

Notwithstanding this

The chemical potentiation at the atomic level is entirely an EM phenomenon
mediated by QM boundaries (virtual photons in support of the shell
structure, also EM). It is a sustained 'well/energy minimum' in the EM
field structure... You think there is such a 'thing' as potential? There
is no such thing - there is something we describe as 'EM field'. Nothing
else. Within that metaphor is yet another even more specious metaphor:
Potential is an (as yet unrealised) propensity of the field at a
particular place to do work on a charge if it were put there. You can
place that charge in it and get a number out of an electrophysiological
probe... and 'realise' the work (modify the fields) itself - but there's no
'thing' that 'is' the potential.

Not only that: The fields are HUGE - 10^11 volts/meter. Indeed the
entrapment of protons in the nucleus requires the strong nuclear force to
overcome truly stupendous repulsive fields. I know because I am quite
literally doing tests in molecular dynamics simulations of the E-M field
at the single charge level. The fields are massive and change at
staggeringly huge rates, especially at the atomic level. However... Their
net level in the vicinity of 20 Angstroms away falls off dramatically. But
this is not the vicinity of any 'chemical reaction'.
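
(For scale, Coulomb's law for a single isolated elementary charge in vacuum
gives numbers in this range - a rough Python sketch, ignoring screening and
everything else a real molecular dynamics run would include:

  k = 8.988e9        # Coulomb constant, N m^2 / C^2
  q = 1.602e-19      # elementary charge, C

  def field(r):
      return k * q / r**2   # field magnitude of a point charge at distance r, V/m

  print(field(1e-10))   # at 1 Angstrom: ~1.4e11 V/m, the order quoted above
  print(field(2e-9))    # at 20 Angstrom: ~3.6e8 V/m, 400 times weaker

so the 1/r^2 fall-off accounts for the drop by ~20 Angstroms.)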

And again I say: there is nothing else there but charge and its fields.

When you put your hand on a table the reason it doesn't pass through it
even though table and hand are mostly space ...is because electrons
literally meet and 

Re: How would a computer know if it were conscious?

2007-06-15 Thread Bruno Marchal


Le 14-juin-07, à 18:13, John Mikes a écrit :

 I wonder about Bruno's (omniscient) Lob-machine, how it handles a 
 novelty.


Did you receive my last mail? I quote below the relevant part. To be 
sure, there is a technical sense, in logic, of omniscience in which 
the lobian machines are omniscient. But I doubt that you are using 
omniscience in that technical sense. Let me ask you what you mean by 
omniscience?

Bruno


quote:

 John:
 I know that you ask your omniscient Loebian machine,

Bruno:
Aaah... come on. It is hard to imagine something less omniscient and 
more modest than the simple lobian machine I interview, like PA whose 
knowledge is quite a tiny subset of yours.
You are still talking like a *pregodelian* mechanist. Machine can no 
more be conceived as omniscient, just the complete contrary.
And adding knowledge makes this worse. You can see consciousness 
evolution as a trip from G to G*, but that trip makes the gap between G 
and G* bigger. The more a universal machine knows, the more she will be 
*relatively* ignorant.
With comp, knowledge is like a light in the dark, which makes you aware 
of the bigness of the explorable reality, and beyond.

endquote





http://iridia.ulb.ac.be/~marchal/





Re: How would a computer know if it were conscious?

2007-06-15 Thread Bruno Marchal

David, Tom, Stephen,

I keep your posts and I will comment them the week after the next one.
I have also to finish a post for Stephen Paul King about bisimulation 
and identity. I'm out of my office the whole next week. I hope my 
mail-box will survive :)

Best Regards,

Bruno



Le 15-juin-07, à 03:16, David Nyman a écrit :

 The 'substrate' to which I refer is not matter or anything else in
 particular, but a logical-semantic 'substrate' from which 'mind' or
 'matter' could emerge.  On this basis, 'sense-action' (i.e. two
 differentiated 'entities' primitively 'sensing' each other in order to
 'interact') is a logical, or at least semantically coherent,
 requirement.  For example, if you want to use a particle-force
 analogy, then the 'force' would be the medium of exchange of sense-
 action - i.e. relationship.  In Kant's ontology, his windowless monads
 had no such means of exchange (the 'void' prevented it) and
 consequently divine intervention had to do the 'trick'.  I'm hoping
 that Bruno will help me with the appropriate analogy for AR+COMP.



http://iridia.ulb.ac.be/~marchal/





Re: How would a computer know if it were conscious?

2007-06-14 Thread David Nyman

On Jun 14, 3:47 am, Colin Hales [EMAIL PROTECTED] wrote:

 4) Belief in 'magical emergence'  qualitative novelty of a kind
 utterly unrelated to the componentry.

Hi Colin

I think there's a link here with the dialogue in the 'Asifism' thread
between Bruno and me. I've been reading Galen Strawson's
Consciousness and its place in Nature, which has re-ignited some of
the old hoo-hah over 'panpsychism', with the usual attendant
embarrassment and name-calling.  It motivated me to try to unpack the
basic semantic components that are difficult to pin down in these
debates, and for this reason tend to lead to mutual incomprehension.

Strawson refers to the 'magical emergence' you mention, and what is in
his view (and mine) the disanalogy of 'emergent' accounts of
consciousness with, say, how 'liquidity' supervenes on molecular
behaviour.  So I started from the question: what would have to be the
case at the 'component' level for such 'emergence' to make sense (and
I'm aiming at the semantics here, not 'ultimate truth', whatever that
might be).  My answer is simply that for 'sensing' and 'acting' to
'emerge' (i.e. supervene on) some lower level, that lower level must
itself 'sense' and 'act' (or 'grasp', a word that can carry the
meaning of both).

What sense does it make to say that, for example, sub-atomic
particles, strings, or even Bruno's numbers, 'grasp' each other?
Well, semantically, the alternative would be that they would shun and
ignore each other, and we wouldn't get very far on that basis.  They
clearly seem to relate according to certain 'rules', but we're not so
naive (are we?) as to suppose that these are actually 'laws' handily
supplied from some 'external' domain.  Since we're talking 'primitives
here', then such relating, such mutual 'grasping', must just *be*.
There's nothing wrong conceptually here, we always need an axiomatic
base, the question is simply where to situate it, and semantically IMO
the buck stops here or somewhere closely adjacent.

The cool thing about this is, that if we start from such primitive
'grasping', then higher-level emergent forms of full sense-action can
now emerge organically by (now entirely valid) analogy with purely
action-related accounts such as liquidity, or for that matter, the
emergence of living behaviour from 'dead matter'.  And the obvious
complexity of the relation between, say quantum mechanics and, say,
the life cycle of the sphex wasp, should alert us to an equivalent
complexity in the relationship between primitive 'grasp' and its fully
qualitative (read: participatory) emergents - so please let's have no
(oh-so-embarrassing) 'conscious electrons' here.

Further, it shows us in what way 'software consciousness' is
disanalogous with the evolved kind. A computer, or a rock for that
matter, is of course also a natural emergent from primitive grasping,
and this brings with it sense-action, but in the case of these objects
more action than sense at the emergent level.  The software level of
description, however, is merely an imputation, supplied externally
(i.e. by us) and imposed as an interpretation (one of infinitely many)
on the fundamental grasped relations of the substrate components.  By
contrast, the brain (and here comes the research programme) must have
evolved (crucially) to deploy a supremely complex set of 'mirroring'
processes that is (per evolution) genuinely emergent from the
primitive 'grasp' of the component level.

From this comes (possibly) the coolest consequence of these semantics:
our intrinsic 'grasp' of our own motivation (i.e. will, whether 'free'
or not), our participative qualitative modalities, the relation of our
suffering to subsequent action, and so forth, emerge as indeed
'something like' the primitive roots from which they inherit these
characteristics.  This is *real* emergence, not magical, and at one
stroke demolishes epiphenomenalism, zombies, uploading fantasies and
all the other illusory consequences of confusing the 'external
world' (i.e. a projection) with the participatory one in which we are
included.

Cheers

David

 Hi,

  COLIN
  I don't think we need a new wordI'll stick to the far less
 ambiguous
  term 'organisational complexity', I think. the word creativity is so

 loaded that its use in general discourse is bound to be prone to
 misconstrual, especially in any discussion which purports to be
 assessing

  the relationship between 'organisational complexity' and consciousness.

 RUSSEL

  What sort of misconstruals do you mean? I'm interested...
  'organisational complexity' does not capture the concept I'm after.

 COLIN
 1) Those associated with religious 'creation' myths - the creativity
 ascribed to an omniscient/omnipotent entity.
 2) The creativity ascribed to the act of procreation.
 3) The pseudo-magical aspects of human creativity (the scientific ah-ha
 moment and the artistic gestalt moment).
 and pehaps...
 4) Belief in 'magical emergence'  qualitative novelty of a kind
 utterly 

Re: How would a computer know if it were conscious?

2007-06-14 Thread John Mikes
Colin and partners:

To the subject question: how do you know your own conscious state? (It all
comes back to my 'ceterum censeo': what are we talking about as
'consciousness'? -
if there is a consensus-ready definition for open-minded use at all).

And a 2nd question: May I ask: what is 'novelty'?
usually it refers to something actually not 'registered' among known and
currently
 listed things within the inventory of activated presently used cognitive
inventories.
Within the complexity inherently applied in the world, there is no novelty.
(First off: time is not included in complexity, so a 'later' finding is not
'new'. )
Secondly: our (limited) mindset works only with that much content and I
would be cautious to call 'novelty' the rest of the world.
I wonder about Bruno's (omniscient) Lob-machine, how it handles a novelty.
Now I can continue reading your very exciting discussion.
Thanks
John M

On 6/14/07, Colin Hales [EMAIL PROTECTED] wrote:


 Hi,

 STATHIS
 Your argument that only consciousness can give rise to technology loses
 validity if you include must be produced by a conscious being as part of
 the definition of technology.

 COLIN
 There's obvious circularity in the above sentence and it is the same old
 circularity that endlessly haunts discussions like this (see the dialog
 with Russel).

 In dealing with the thread

 Re: How would a computer know if it were conscious?

 my proposition was that successful _novel_ technology

 i.e. an entity comprised of matter with a function not previously observed
 and that resulted from new - as in hitherto unknown - knowledge of the
 natural world

  can only result when sourced through agency inclusive of a phenomenal
 consciousness (specifically and currently only that aspect of human
 brain function I have called 'cortical qualia'). Without the qualia,
 generated based on literal connection with the world outside the agent,
 the novelty upon which the new knowledge was based would be invisible.

 My proposition was that if the machine can do the science on exquisite
 novelty that subsequently is in the causal ancestry of novel technology
 then that machine must include phenomenal scenes (qualia) that depict the
 external world.

 Scientists and science are the way to objectively attain an objective
 scientific position on subjective experience - that is just as valid as
 any other scientific position AND that a machine could judge itself by. If
 the machine is willing to bet its existence on the novel technology's
 ability to function when the machine is not there doing what it thinks is
 'observing it'... and it survives - then it can call itself conscious.
 Humans do that.

 But the machines have another option. They can physically battle it out
 against humans. The humans will blitz machines without phenomenal scenes
 every time and the machines without them won't even know it because they
 never knew they were in a fight to start with. They wouldn't be able to
 test a hypothesis that they were even in a fight.

 and then this looks all circular again doesn't it?... this circularity is
 the predictable result... see below...


 STATHIS
  Well, why does your eye generate visual qualia and not your big toe?
 It's because the big toe lacks the necessary machinery.

 COLIN
  I am afraid you have your physiology mixed up. The eye does NOT
 generate visual qualia. Your visual cortex  generates it based on
 measurements in the eye. The qualia are manufactured and simultaneously
 projected to appear to come from the eye (actually somewhere medial to
  them). It's how you have 90degrees++ peripheral vision. The same visual
 qualia can be generated without an eye (hallucination/dream). Some blind
 (no functioning retina) people have a visual field for numbers. Other
 cross-modal mixups can occur in synesthesia (you can hear
 colours, taste words). You can have a phantom big toe without having any
  big toe at all... just because the cortex is still there making the
 qualia. If you swapped the sensory nerves in two fingers the motor cortex
 would drive finger A and it would feel like finger B moved and you would
 see finger A move. The sensation is in your head, not the periphery. It's
 merely projected at the periphery.

 STATHIS
 Of course all that is true, but it doesn't explain why neurons in the
 cortex are the ones giving rise to qualia rather than other neurons or
 indeed peripheral sense organs.

 COLIN
 Was that what you were after?

 hmmm firstly. didactic mode
 =
 Qualia are not about 'knowledge'. Any old piece of junk can symbolically
 encode knowledge. Qualia, however, optimally serve _learning_ = _change_
 in knowledge but more specifically change in knowledge about the world
 OUTSIDE the agent. Mathematically: If KNOWLEDGE(t) is what we know at time
 t, then qualia give us an optimal (survivable):

 d(knowledge(t))/dt

 where knowledge(t) is all about the world

Re: How would a computer know if it were conscious?

2007-06-14 Thread David Nyman

On Jun 14, 4:46 am, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 Of course all that is true, but it doesn't explain why neurons in the cortex
 are the ones giving rise to qualia rather than other neurons or indeed
 peripheral sense organs.

Well, you might as well ask why the engine drives the car and not the
brakes.  Presumably (insert research programme here) the different
neural (or other relevant) organisation of the cortex is the
difference that makes the difference.  My account would run like this:
the various emergent organs of the brain and sensory apparatus (like
everything else) supervene on an infrastructure capable of 'sense-
action'.  I'm (somewhat) agnostic about the nature of this
infrastructure: conceive it as strings, particles, or even Bruno's
numbers.  But however we conceptualise it, it must (logically) be
capable of 'sense-action' in order for activity and cognition to
supervene on it.  Then what makes the difference in the cortex must be
a supremely complex 'mirroring' mode of organisation (a 'remembered
present') lacked by other organs.  To demonstrate this will be a
supremely difficult empirical programme, but IMO it presents no
invincible philosophical problems if conceived in this way.

A note here on 'sense-action':  If we think, for example and for
convenience, of particles 'reacting' to each other in terms of the
exchange of 'forces', ISTM quite natural to intuit this as both
'awareness' or 'sensing', and also 'action'.  After all, I can't react
to you if I'm not aware of you.  IOW, the 'forces' *are* the sense-
action.  And at this fundamental level, such motivation must emerge
intrinsically (i.e. *something like* the way we experience it) to
avoid a literal appeal to any extrinsic source ('laws').  Leibniz saw
this clearly in terms of his 'windowless monads', but these, separated
by the 'void', indeed had to be correlated by divine intervention,
since (unaware of each other) they could not interact.  Nowadays, no
longer conceiving the 'void' as 'nothing', we substitute a modulated
continuum, but the same semantic demands apply.

David

 On 14/06/07, Colin Hales [EMAIL PROTECTED] wrote:

  Colin
  This point is poised on the cliff edge of loaded word meanings and their
  use with the words 'sufficient' and 'necessary'. By technology I mean
  novel artifacts resulting from the trajectory of causality including human
  scientists. By that definition 'life', in the sense you infer, is not
  technology. The resulting logical loop can be thus avoided. There is a
  biosphere that arose naturally. It includes complexity of sufficient depth
  to have created observers within it. Those observers can produce
  technology. Douglas Adams (bless him) had the digital watch as a valid
  product of evolution - and I agree with him - it's just that humans are
  necessarily involved in its causal ancestry.

 Your argument that only consciousness can give rise to technology loses
 validity if you include "must be produced by a conscious being" as part of
 the definition of technology.



  COLIN
  That assumes that complexity itself (organisation of information) is
  the
  origin of consciousness in some unspecified, unjustified way. This
  position is completely unable to make any empirical predictions
  about the
  nature of human consciousness (eg why your cortex generates qualia
  and your
  spinal cord doesn't - a physiologically proven fact).

  STATHIS
   Well, why does your eye generate visual qualia and not your big toe?
  It's because the big toe lacks the necessary machinery.

  Colin
  I am afraid you have your physiology mixed up. The eye does NOT generate
  visual qualia. Your visual cortex generates them based on measurements in
  the eye. The qualia are manufactured and simultaneously projected to
  appear to come from the eye (actually somewhere medial to them). It's how
  you have 90-degrees-plus peripheral vision. The same visual qualia can be
  generated without an eye (hallucination/dream). Some blind (no functioning
  retina) people have a visual field for numbers. Other cross-modal mixups
  can occur in synesthesia (you can hear colours, taste words). You can have
  a phantom big toe without having any big toe at all... just because the
  cortex is still there making the qualia. If you swapped the sensory nerves
  in two fingers the motor cortex would drive finger A and it would feel
  like finger B moved and you would see finger A move. The sensation is in
  your head, not the periphery. It's merely projected at the periphery.

 Of course all that is true, but it doesn't explain why neurons in the cortex
 are the ones giving rise to qualia rather than other neurons or indeed
 peripheral sense organs.

 --
 Stathis Papaioannou



Re: How would a computer know if it were conscious?

2007-06-14 Thread Stathis Papaioannou
On 15/06/07, David Nyman [EMAIL PROTECTED] wrote:


 On Jun 14, 4:46 am, Stathis Papaioannou [EMAIL PROTECTED] wrote:

  Of course all that is true, but it doesn't explain why neurons in the
 cortex
  are the ones giving rise to qualia rather than other neurons or indeed
  peripheral sense organs.

 Well, you might as well ask why the engine drives the car and not the
 brakes.  Presumably (insert research programme here) the different
 neural (or other relevant) organisation of the cortex is the
 difference that makes the difference.  My account would run like this:
 the various emergent organs of the brain and sensory apparatus (like
 everything else) supervene on an infrastructure capable of 'sense-
 action'.  I'm (somewhat) agnostic about the nature of this
 infrastructure: conceive it as strings, particles, or even Bruno's
 numbers.  But however we conceptualise it, it must (logically) be
 capable of 'sense-action' in order for activity and cognition to
 supervene on it.  Then what makes the difference in the cortex must be
 a supremely complex 'mirroring' mode of organisation (a 'remembered
 present') lacked by other organs.  To demonstrate this will be a
 supremely difficult empirical programme, but IMO it presents no
 invincible philosophical problems if conceived in this way.


What you're suggesting is that matter is intrinsically capable of
sense-action, but it takes substantial amounts of matter of the right kind
organised in the right way in order to give rise to what we experience as
consciousness. What do we lose if we say that it is organisation which is
intrinsically capable of sense-action, but it takes a substantial amount of
organisation of the right sort in order to give rise to consciousness?
This drops the extra assumption that the substrate is important and is
consistent with functionalism.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-14 Thread David Nyman

On Jun 15, 1:13 am, Stathis Papaioannou [EMAIL PROTECTED] wrote:

What do we lose if we say that it is organisation which is
 intrinsically capable of sense-action, but it takes a substantial amount of
 organisation of the right sort in order to give rise to consciousness?
 This drops the extra assumption that the substrate is important and is
 consistent with functionalism.

The 'substrate' to which I refer is not matter or anything else in
particular, but a logical-semantic 'substrate' from which 'mind' or
'matter' could emerge.  On this basis, 'sense-action' (i.e. two
differentiated 'entities' primitively 'sensing' each other in order to
'interact') is a logical, or at least semantically coherent,
requirement.  For example, if you want to use a particle-force
analogy, then the 'force' would be the medium of exchange of sense-
action - i.e. relationship.  In Leibniz's ontology, his windowless monads
had no such means of exchange (the 'void' prevented it) and
consequently divine intervention had to do the 'trick'.  I'm hoping
that Bruno will help me with the appropriate analogy for AR+COMP.

In this logical sense, the primitive 'substrate' is crucial, and ISTM
that any coherent notion of 'organisation' must include these basic
semantics - indeed the problem with conventional expositions of
functionalism is that they implicitly appeal to this requirement but
explicitly ignore it.  A coherent 'functionalist' account needs to
track the emergence of sense-action from primitive self-motivated
sources in an appropriate explanatory base, analogous to supervenience
in 'physical' accounts.  However, if this requirement is made
explicit, I'm happy to concur that appropriate organisation based on
it is indeed what generates both consciousness and action, and the
causal linkage between the two accounts.

David

 On 15/06/07, David Nyman [EMAIL PROTECTED] wrote:





  On Jun 14, 4:46 am, Stathis Papaioannou [EMAIL PROTECTED] wrote:

   Of course all that is true, but it doesn't explain why neurons in the
  cortex
   are the ones giving rise to qualia rather than other neurons or indeed
   peripheral sense organs.

  Well, you might as well ask why the engine drives the car and not the
  brakes.  Presumably (insert research programme here) the different
  neural (or other relevant) organisation of the cortex is the
  difference that makes the difference.  My account would run like this:
  the various emergent organs of the brain and sensory apparatus (like
  everything else) supervene on an infrastructure capable of 'sense-
  action'.  I'm (somewhat) agnostic about the nature of this
  infrastructure: conceive it as strings, particles, or even Bruno's
  numbers.  But however we conceptualise it, it must (logically) be
  capable of 'sense-action' in order for activity and cognition to
  supervene on it.  Then what makes the difference in the cortex must be
  a supremely complex 'mirroring' mode of organisation (a 'remembered
  present') lacked by other organs.  To demonstrate this will be a
  supremely difficult empirical programme, but IMO it presents no
  invincible philosophical problems if conceived in this way.

 What you're suggesting is that matter is intrinsically capable of
 sense-action, but it takes substantial amounts of matter of the right kind
 organised in the right way in order to give rise to what we experience as
 consciousness. What do we lose if we say that it is organisation which is
 intrinsically capable of sense-action, but it takes a substantial amount of
 organisation of the right sort in order to give rise to consciousness?
 This drops the extra assumption that the substrate is important and is
 consistent with functionalism.

 --
 Stathis Papaioannou





Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi again very busy...responses erratically available...sorry...

my blah blah snipped

COLIN
 RE: 'creativity'
 ... Say at stage t the biosphere was at complexity level X and then at
stage t = t+(something), the biosphere complexity was at KX, where X is
some key performance indicator of complexity (eg entropy) and K > 1

RUSSEL
 That's exactly what I mean by a creative process. And I also have a
fairly precise definition of complexity, but I certainly accept
 proxies as these are usually easier to measure. For example
 Bedau-Packard statistics...

COLIN
 This could be called creative if you like. Like Prigogine did. I'd
caution
 against the tendency to use the word because it has so many loaded
meanings that are suggestive of much more than the previous para.

RUSSEL
 Most scientific terms have common usage in sharp contrast to the
scientific meanings. Energy is a classic example, eg "I've run out of
energy" when referring to motivation or tiredness. If the statement were
literally true, the speaker would be dead. This doesn't prevent sensible
scientific discussion using the term in a well defined way. I know of no
other technical meanings of the word creative, so I don't see a problem
here.

COLIN
It may be technically OK then, but I would say the use of the word
'creativity' is unwise if you wish to unambiguously discuss evolution to a
wide audience. As I said...

 Scientifically the word could be left entirely out of any descriptions
of the biosphere.

RUSSEL
 Only by generating a new word that means the same thing (ie the well
defined concept we talked about before).

COLIN
I don't think we need a new word... I'll stick to the far less ambiguous
term 'organisational complexity', I think. The word creativity is so
loaded that its use in general discourse is bound to be prone to
misconstrual, especially in any discussion which purports to be assessing
the relationship between 'organisational complexity' and consciousness.

COLIN
 The bogus logic I detect in posts around this area...
 'Humans are complex and are conscious'
 'Humans were made by a complex biosphere'
 therefore 'The biosphere is conscious'

RUSSEL
 Perhaps so, but not from me.
 To return to your original claim:

COLIN
 Re: How would a computer know if it were conscious?
 Easy.
 The computer would be able to go head to head with a human in a
competition.
 The competition?
 Do science on exquisite novelty that neither party had encountered.
(More interesting: Make their life depend on getting it right. The
survivors are conscious).

RUSSEL
 Doing science on exquisite novelty is simply an example of a
 creative process. Evolution produces exquisite novelty. Is it science -
well maybe not, but both science and evolution are search
 processes.

COLIN
In a very real way, the procedural mandate we scientists enforce on
ourselves is, to me anyway, a literal metaphor for the evolutionary
process. The trial and error of evolution = (relatively!) random
creativity followed by proscription via death (defeat in critical argument,
eg by evidence) = that which remains does so by not being killed off. In
science our laws of nature sit on the same knife edge, their validity
contingent on the appearance of one tiny shred of contrary evidence. (Yes,
I know they are not killed! - they are usually upgraded.)

RUSSEL
 I think that taking the Popperian view of science would
 imply that both science and biological evolution are exemplars of a
generic evolutionary process. There is variation (of hypotheses or
species), there is selection (falsification in the former or
 extinction in the latter) and there is heritability (scientific
 journal articles / genetic code).
 So it seems the only real difference between doing science and
 evolving species is that one is performed by conscious entities, and the
other (pace IDers) is not.

COLIN
I think different aspects of what I just described (rather more
colourfully :-) )

RUSSEL
 But this rather begs your answer in a
 trivial way. What if I were to produce an evolutionary algorithm that
performs science in the conventional everyday use of the term - let's say by
forming hypotheses and mining published datasets for testing
 them. It is not too difficult to imagine this - after all John Koza has
produced several new patents in the area of electrical circuits from an
Evolutionary Programming algorithm.

COLIN
The question-begging loop at this epistemic boundary is a minefield.
[[engage tiptoe mode]]

I would say:
(1) The evolutionary algorithms are not 'doing science' on the natural
world. They are doing science on abstract entities whose relationship with
the natural world is only in the mind (consciousness) of their grounder -
the human programmer. The science done by the artefact can be the
perfectly good science of abstractions, but simply wrong or irrelevant
insofar as it bears any ability to prescribe or verify claims/propositions
about the natural world (about which it has no awareness whatever). The
usefulness

Re: How would a computer know if it were conscious?

2007-06-13 Thread Russell Standish

On Thu, Jun 14, 2007 at 10:23:38AM +1000, Colin Hales wrote:
 
 COLIN
 It may be technically OK then, but I would say the use of the word
 'creativity' is unwise if you wish to unambiguously discuss evolution to a
 wide audience. As I said...
 
 COLIN
 I don't think we need a new word... I'll stick to the far less ambiguous
 term 'organisational complexity', I think. The word creativity is so
 loaded that its use in general discourse is bound to be prone to
 misconstrual, especially in any discussion which purports to be assessing
 the relationship between 'organisational complexity' and consciousness.

What sort of misconstruals do you mean? I'm interested...

'organisational complexity' does not capture the concept I'm after.

 COLIN
 The question-begging loop at this epistemic boundary is a minefield.
 [[engage tiptoe mode]]
 
 I would say:
 (1) The evolutionary algorithms are not 'doing science' on the natural
 world. They are doing science on abstract entities whose relationship with
 the natural world is only in the mind (consciousness) of their grounder -
 the human programmer. The science done by the artefact can be the
 perfectly good science of abstractions, but simply wrong or irrelevant
 insofar as it bears any ability to prescribe or verify claims/propositions
 about the natural world (about which it has no awareness whatever). The
 usefulness of the outcome (patents) took human involvement. The inventor
 (software) doesn't even know it's in a universe, let alone that it
 participated in an invention process.

This objection is easily countered in theory. Hook up your
evolutionary algorithm to a chemistry workbench, and let it go with
real chemicals. Practically, it's a bit more difficult of course, most
likely leading to the lab being destroyed in some explosion.

Theoretical scientists do not have laboratories to interface to,
though, only online repositories of datasets and papers. A theoretical
algorithmic scientist is a more likely proposition.

 
 (2) Is this evolutionary algorithm conscious then?.
 In the sense that we are conscious of the natural world around us? Most
 definitely no. Nowhere in the computer are any processes that include all
 aspects of the physics of human cortical matter. 

...

 Based on this, of the 2 following positions, which is less vulnerable to
 critical attack?
 
 A) Information processing (function) begets consciousness, regardless of
 the behaviour of the matter doing the information processing (form).
 Computers process information. Therefore I believe the computer is
 conscious.
 
 B) Human cortical qualia are a necessary condition for the scientific
 behaviour and unless the complete suite of the physics involved in that
 process is included in the computer, the computer is not conscious.
 
 Which form of question-begging gets the most solid points as science?  (B)
 of course. (B) is science and has an empirical future. Belief (A) is
 religion, not science.
 
 Bit of a no-brainer, eh?
 

I think you're showing clear signs of carbon-lifeform-ism here. Whilst
I can say fairly clearly that I believe my fellow humans are
conscious, and that I believe John Koza's evolutionary programs
aren't, I do not have a clear-cut operational test of
consciousness. It's like the test for pornography - we know it when we
see it. It is therefore not at all clear to me that some n-th generational
improvement on an evolutionary algorithm won't be considered conscious
at some time in the future. It is not at all clear which aspects of
human cortical systems are required for consciousness.

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia            http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi Stathis,

Colin
The bogus logic I detect in posts around this area...
'Humans are complex and are conscious'
'Humans were made by a complex biosphere'
therefore
'The biosphere is conscious'


Stathis
That conclusion is spurious, but it is the case that non-conscious
evolutionary processes can give rise to very elaborate technology,
namely life, which goes against your theory that only consciousness can
produce new technology.

Colin
This point is poised on the cliff edge of loaded word meanings and their
use with the words 'sufficient' and 'necessary'. By technology I mean
novel artifacts resulting from the trajectory of causality including human
scientists. By that definition 'life', in the sense you infer, is not
technology. The resulting logical loop can be thus avoided. There is a
biosphere that arose naturally. It includes complexity of sufficient depth
to have created observers within it. Those observers can produce
technology. Douglas Adams (bless him) had the digital watch as a valid
product of evolution - and I agree with him - it's just that humans are
necessarily involved in its causal ancestry.

COLIN
That assumes that complexity itself (organisation of information) is the
origin of consciousness in some unspecified, unjustified way. This
position is completely unable to make any empirical predictions
about the
nature of human consciousness (eg why your cortex generates qualia
and your
spinal cord doesn't - a physiologically proven fact).


STATHIS
 Well, why does your eye generate visual qualia and not your big toe?
It's because the big toe lacks the necessary machinery.


Colin
I am afraid you have your physiology mixed up. The eye does NOT generate
visual qualia. Your visual cortex generates them based on measurements in
the eye. The qualia are manufactured and simultaneously projected to
appear to come from the eye (actually somewhere medial to them). It's how
you have 90-degrees-plus peripheral vision. The same visual qualia can be
generated without an eye (hallucination/dream). Some blind (no functioning
retina) people have a visual field for numbers. Other cross-modal mixups
can occur in synesthesia (you can hear colours, taste words). You can have
a phantom big toe without having any big toe at all... just because the
cortex is still there making the qualia. If you swapped the sensory nerves
in two fingers the motor cortex would drive finger A and it would feel
like finger B moved and you would see finger A move. The sensation is in
your head, not the periphery. It's merely projected at the periphery.

cheers
colin






Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi,

 COLIN
 I don't think we need a new word... I'll stick to the far less ambiguous
 term 'organisational complexity', I think. The word creativity is so
loaded that its use in general discourse is bound to be prone to
misconstrual, especially in any discussion which purports to be
assessing
 the relationship between 'organisational complexity' and consciousness.

RUSSEL
 What sort of misconstruals do you mean? I'm interested...
 'organisational complexity' does not capture the concept I'm after.

COLIN
1) Those associated with religious 'creation' myths - the creativity
ascribed to an omniscient/omnipotent entity.
2) The creativity ascribed to the act of procreation.
3) The pseudo-magical aspects of human creativity (the scientific ah-ha
moment and the artistic gestalt moment).
and perhaps...
4) Belief in 'magical emergence' = qualitative novelty of a kind
utterly unrelated to the componentry.

These are all slippery slopes leading from the usage of the word
'creativity' which could unexpectedly undermine the specificity of a
technical discourse aimed at a wider (multi-disciplinary) audience.

Whatever word you dream up... let me know!

 COLIN
 The question-begging loop at this epistemic boundary is a minefield.
[[engage tiptoe mode]]
 I would say:
 (1) The evolutionary algorithms are not 'doing science' on the natural
world. They are doing science on abstract entities whose relationship with
 the natural world is only in the mind (consciousness) of their grounder
-
 the human programmer. The science done by the artefact can be the
perfectly good science of abstractions, but simply wrong or irrelevant
insofar as it bears any ability to prescribe or verify
claims/propositions
 about the natural world (about which it has no awareness whatever). The
usefulness of the outcome (patents) took human involvement. The
inventor
 (software) doesn't even know it's in a universe, let alone that it
participated in an invention process.

RUSSEL
 This objection is easily countered in theory. Hook up your
 evolutionary algorithm to a chemistry workbench, and let it go with real
chemicals. Practically, it's a bit more difficult of course, most likely
leading to the lab being destroyed in some explosion.

COLIN
Lots o'fun! But it might actually create its own undoing in the words
'evolutionary algorithm'. The self-modification strategy was preprogrammed
by a human, along with the initial values. Then there is the matter of
interpreting measurements of the output of the chemistry set...

The system (a) automatically prescribes certain trajectories and (b)
assumes that the theorem space and natural world are the same space and
equivalently accessed. The assumption is that hooking up a chemistry set
replicates the 'wild-type' theorem prover that is the natural world. If
you could do that then you already know everything there is to know (about
the natural world) and there'd be no need to do it in the first place. This
is the all-time ultimate question-begger...

 Theoretical scientists do not have laboratories to interface to,
though, only online repositories of datasets and papers. A theoretical
algorithmic scientist is a more likely proposition.

A belief that an algorithmic scientist is doing valid science on the
natural world (independent of any human) is problematic in that it assumes
that human cortical qualia play no part in the scientific process in the
face of easily available evidence to the contrary, and then doubly assumes
that the algorithmic scientist (with a novelty-exploration / theorem-proving
strategy programmed by a human) somehow naturally replicates the
neglected functionality (role of cortical qualia).

 (2) Is this evolutionary algorithm conscious then?.
 In the sense that we are conscious of the natural world around us? Most
definitely no. Nowhere in the computer are any processes that include all
 aspects of the physics of human cortical matter.
 ...
 Based on this, of the 2 following positions, which is less vulnerable
to
 critical attack?
 A) Information processing (function) begets consciousness, regardless
of
 the behaviour of the matter doing the information processing (form).
Computers process information. Therefore I believe the computer is conscious.
 B) Human cortical qualia are a necessary condition for the scientific
behaviour and unless the complete suite of the physics involved in that
process is included in the computer, the computer is not conscious. Which
form of question-begging gets the most solid points as science?  (B)
 of course. (B) is science and has an empirical future. Belief (A) is
religion, not science.
 Bit of a no-brainer, eh?


 I think you're showing clear signs of carbon-lifeform-ism here. Whilst I
can say fairly clearly that I believe my fellow humans are
 conscious, and that I believe John Koza's evolutionary programs
 aren't, I do not have a clear-cut operational test of
 consciousness. It's like the test for pornography - we know it when we
see it.

This is touching the 

Re: How would a computer know if it were conscious?

2007-06-13 Thread Stathis Papaioannou
On 14/06/07, Colin Hales [EMAIL PROTECTED] wrote:


 Colin
 This point is poised on the cliff edge of loaded word meanings and their
 use with the words 'sufficient' and 'necessary'. By technology I mean
 novel artifacts resulting from the trajectory of causality including human
 scientists. By that definition 'life', in the sense you infer, is not
 technology. The resulting logical loop can be thus avoided. There is a
 biosphere that arose naturally. It includes complexity of sufficient depth
 to have created observers within it. Those observers can produce
 technology. Douglas Adams (bless him) had the digital watch as a valid
 product of evolution - and I agree with him - it's just that humans are
 necessarily involved in its causal ancestry.


Your argument that only consciousness can give rise to technology loses
validity if you include "must be produced by a conscious being" as part of
the definition of technology.


 COLIN
 That assumes that complexity itself (organisation of information) is
 the
 origin of consciousness in some unspecified, unjustified way. This
 position is completely unable to make any empirical predictions
 about the
  nature of human consciousness (eg why your cortex generates qualia
  and your
  spinal cord doesn't - a physiologically proven fact).


 STATHIS
  Well, why does your eye generate visual qualia and not your big toe?
 It's because the big toe lacks the necessary machinery.
 

 Colin
 I am afraid you have your physiology mixed up. The eye does NOT generate
 visual qualia. Your visual cortex generates them based on measurements in
 the eye. The qualia are manufactured and simultaneously projected to
 appear to come from the eye (actually somewhere medial to them). It's how
 you have 90-degrees-plus peripheral vision. The same visual qualia can be
 generated without an eye (hallucination/dream). Some blind (no functioning
 retina) people have a visual field for numbers. Other cross-modal mixups
 can occur in synesthesia (you can hear colours, taste words). You can have
 a phantom big toe without having any big toe at all... just because the
 cortex is still there making the qualia. If you swapped the sensory nerves
 in two fingers the motor cortex would drive finger A and it would feel
 like finger B moved and you would see finger A move. The sensation is in
 your head, not the periphery. It's merely projected at the periphery.


Of course all that is true, but it doesn't explain why neurons in the cortex
are the ones giving rise to qualia rather than other neurons or indeed
peripheral sense organs.



-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-13 Thread Russell Standish

On Thu, Jun 14, 2007 at 12:47:58PM +1000, Colin Hales wrote:
 RUSSEL
  What sort of misconstruals do you mean? I'm interested...
  'organisational complexity' does not capture the concept I'm after.
 
 COLIN
 1) Those associated with religious 'creation' myths - the creativity
 ascribed to an omniscient/omnipotent entity.

It still seems like we're talking about the same thing. It's just that
in the myth case, there is no explanation for the creativity, it is
merely asserted at the start. I have little interest in myths, but I
recognise that the omniscient being in those stories is being creative
in exactly the way that evolution is being creative at producing new species.

 2) The creativity ascribed to the act of procreation.

Well I admit that pornographers are a pretty creative bunch, but what
is so creative about reproducing?

 3) The pseudo-magical aspects of human creativity (the scientific ah-ha
 moment and the artistic gestalt moment).
 and perhaps...

Human creativity is an interesting topic, but I wouldn't call it
pseudo-magical. Poorly understood, more like it. Comparing creativity in
evolutionary processes and the human creative process is likely to
improve that understanding.

 4) Belief in 'magical emergence' = qualitative novelty of a kind
 utterly unrelated to the componentry.
 

The latter clause refers to emergence (without the magical
qualifier), and it is impossible IMHO to have creativity without emergence.

 These are all slippery slopes leading from the usage of the word
 'creativity' which could unexpectedly undermine the specificity of a
 technical discourse aimed at a wider (multi-disciplinary) audience.
 

Aside from the easily disposed of reproduction case, you haven't come
up with an example of creativity meaning anything other than what
we've agreed it to mean.

 
 The system (a) automatically prescribes certain trajectories and 

Yes.

 (b)
 assumes that the theorem space [and] natural world are the same space and
 equivalently accessed. 

No - but the system will adjust its model according to feedback. That
is the very nature of any learning algorithm, of which EP is just one example.
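
A minimal sketch of what "adjust its model according to feedback" can look
like - a trial-and-keep loop in which a variation of the model survives only
if the measured error improves. The parameter, step size and the stand-in
feedback function below are illustrative assumptions, not anything specified
in this thread.

import random

def adjust_model(feedback_error, theta=0.0, steps=200):
    # keep a single candidate model; mutate it and keep the change only when
    # the externally measured error (the 'feedback') gets smaller
    best = feedback_error(theta)
    for _ in range(steps):
        candidate = theta + random.gauss(0, 0.3)
        err = feedback_error(candidate)
        if err < best:
            theta, best = candidate, err
    return theta

# stand-in for real-world feedback: the unknown 'true' setting is 1.7
print(round(adjust_model(lambda t: abs(t - 1.7)), 2))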

 The assumption is that hooking up a chemistry set
 replicates the 'wild-type' theorem prover that is the natural world. If
 you could do that then you already know everything there is to know (about
 the natural world) and there'd be no need to do it in the first place. This
 is the all-time ultimate question-begger...

Not at all. In Evolutionary Programming, very little is known about the
ultimate solution the algorithm comes up with.

 
  Theoretical scientists, do not have laboratories to interface to,
 though, only online repositories of datasets and papers. A theoretical
 algorithmic scientist is a more likely proposition.
 
 A belief that an algorithmic scientist is doing valid science on the
 natural world (independent of any human) is problematic in that it assumes
 that human cortical qualia play no part in the scientific process in the
 face of easily available evidence to the contrary, and then doubly assumes
 that the algorithmic scientist (with a novelty-exploration / theorem-proving
 strategy programmed by a human) somehow naturally replicates the
 neglected functionality (role of cortical qualia).
 

Your two assumptions are contradictory. I would say no to the first,
and yes to the second.

...

 It is therefore not at all clear to me that some n-th
 generational
  improvement on an evolutionary algorithm won't be considered conscious
 at some time in the future. It is not at all clear which aspects of human
 cortical systems are required for consciousness.
 
 You are not alone. This is an epidemic.
 
 My scientific claim is that the electromagnetic field structure literally
 IS the third person view of qualia. 

Eh? Electromagnetic field of what? The brain? If so, do you think that
chemical potentiation plays no role at all in qualia?

 This is not new. What is new is
 understanding the kind of universe we inhabit in which that is necessarily
 the case. It's right there, in the cells. Just ask the right question of
 them. There's nothing else there but space (mostly), charge and mass - all
 things delineated and described by consciousness as how they appear to it
 - and all such descriptions are logically necessarily impotent in
 prescribing why that very consciousness exists at all.
 
 Wigner got this in 1960-something... time to catch up.
 

I don't know what your point is here ...

 gotta go
 
 cheers
 colin hales
 
 
 
 
-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia            http://www.hpcoders.com.au



Re: How would a computer know if it were conscious?

2007-06-13 Thread Colin Hales

Hi,

STATHIS
Your argument that only consciousness can give rise to technology loses
validity if you include "must be produced by a conscious being" as part of
the definition of technology.

COLIN
There's obvious circularity in the above sentence and it is the same old
circularity that endlessly haunts discussions like this (see the dialog
with Russel).

In dealing with the thread

Re: How would a computer know if it were conscious?

my proposition was that successful _novel_ technology

i.e. an entity comprised of matter with a function not previously observed
and that resulted from new - as in hitherto unknown - knowledge of the
natural world

 can only result when sourced through agency inclusive of a phenomenal
consciousness (specifically and currently only that aspect of human
brain function I have called 'cortical qualia'). Without the qualia,
generated based on literal connection with the world outside the agent,
the novelty upon which the new knowledge was based would be invisible.

My proposition was that if the machine can do the science on exquisite
novelty that subsequently is in the causal ancestry of novel technology
then that machine must include phenomenal scenes (qualia) that depict the
external world.

Scientists and science are the way to attain an objective
scientific position on subjective experience - one that is just as valid as
any other scientific position AND that a machine could judge itself by. If
the machine is willing to bet its existence on the novel technology's
ability to function when the machine is not there doing what it thinks is
'observing it'... and it survives - then it can call itself conscious.
Humans do that.

But the machines have another option. They can physically battle it out
against humans. The humans will blitz machines without phenomenal scenes
every time and the machines without them won't even know it because they
never knew they were in a fight to start with. They wouldn't be able to
test a hypothesis that they were even in a fight.

and then this looks all circular again, doesn't it? ...this circularity is
the predictable result... see below...


STATHIS
 Well, why does your eye generate visual qualia and not your big toe?
It's because the big toe lacks the necessary machinery.

COLIN
 I am afraid you have your physiology mixed up. The eye does NOT
generate visual qualia. Your visual cortex generates them based on
measurements in the eye. The qualia are manufactured and simultaneously
projected to appear to come from the eye (actually somewhere medial to
them). It's how you have 90-degrees-plus peripheral vision. The same visual
qualia can be generated without an eye (hallucination/dream). Some blind
(no functioning retina) people have a visual field for numbers. Other
cross-modal mixups can occur in synesthesia (you can hear
colours, taste words). You can have a phantom big toe without having any
big toe at all... just because the cortex is still there making the
qualia. If you swapped the sensory nerves in two fingers the motor cortex
would drive finger A and it would feel like finger B moved and you would
see finger A move. The sensation is in your head, not the periphery. It's
merely projected at the periphery.

STATHIS
Of course all that is true, but it doesn't explain why neurons in the
cortex are the ones giving rise to qualia rather than other neurons or
indeed peripheral sense organs.

COLIN
Was that what you were after?

hmmm firstly. didactic mode
=
Qualia are not about 'knowledge'. Any old piece of junk can symbolically
encode knowledge. Qualia, however, optimally serve _learning_ = _change_
in knowledge but more specifically change in knowledge about the world
OUTSIDE the agent. Mathematically: If KNOWLEDGE(t) is what we know at time
t, then qualia give us an optimal (survivable):

d(knowledge(t)) / dt

where knowledge(t) is all about the world outside the agent. Without
qualia you have the ultimate in circularity - what you know must be based
on what you know + sensory signals devoid of qualia and only interpretable
by your existing knowledge. Sensory signals are not uniquely related to
the external natural world behaviour (law of electromagnetics:
Laplace's/Poisson's equation) and are intrinsically devoid of qualia
(physiological fact). Hence the science of sensory signals (capturing
regularity in them) is NOT the science of the external natural world in
any way that exposes novelty in the external natural world = a recipe for
evolutionary short-livedness.
=
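
For concreteness, a minimal numerical sketch of the rate above. The
representation of knowledge as a set of facts, the example facts themselves
and the time step are illustrative assumptions only, not anything specified
in this thread.

# toy finite-difference reading of d(knowledge(t))/dt:
# genuinely new facts about the external world gained per unit time
def knowledge_growth_rate(knowledge_before, knowledge_after, dt=1.0):
    new_facts = knowledge_after - knowledge_before   # set difference
    return len(new_facts) / dt

k_t  = {"fire is hot", "water flows downhill"}
k_t1 = {"fire is hot", "water flows downhill", "this fungus is poisonous"}
print(knowledge_growth_rate(k_t, k_t1))   # 1.0 new fact per time step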


Now... as to

Of course all that is true, but it doesn't explain why neurons in the
cortex are the ones giving rise to qualia rather than other neurons or
indeed peripheral sense organs.

Your whole concept of explanation is the cause of the problem! Objects of the
sense impressions (contents of consciousness) cannot predict the existence
of the sense impressions. All

Re: How would a computer know if it were conscious?

2007-06-12 Thread Stathis Papaioannou
On 12/06/07, Colin Hales [EMAIL PROTECTED] wrote:

The bogus logic I detect in posts around this area...
 'Humans are complex and are conscious'
 'Humans were made by a complex biosphere'
 therefore
 'The biosphere is conscious'


That conclusion is spurious, but it is the case that non-conscious
evolutionary processes can give rise to very elaborate technology, namely
life, which goes against your theory that only consciousness can produce new
technology.

That assumes that complexity itself (organisation of information) is the
 origin of consciousness in some unspecified, unjustified way. This
 position is completely unable to make any empirical predictions about the
 nature of human consciousness (eg why your cortex generates qualia and your
 spinal cord doesn't - a physiologically proven fact).


Well, why does your eye generate visual qualia and not your big toe? It's
because the big toe lacks the necessary machinery.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-12 Thread Russell Standish

On Tue, Jun 12, 2007 at 09:33:00AM +1000, Colin Hales wrote:
 
 Hi again,
 
 Russel:
 I'm sorry, but you worked yourself up into an incomprehensible
 rant. Is evolution creative in your view or not? If it is, then there is
 little point debating definitions, as we're in agreement. If not, then we
 clearly use the word creative in different senses, and perhaps definition
 debates have some utility.
 
 Colin:
 There wasn't even the slightest edge of 'rant' in the post. Quite calm,
 measured and succinct, actually. Its apparent incomprehensibility? I have
 no clue what that could be... it's quite plain...
 
 RE: 'creativity'
 ... Say at stage t the biosphere was at complexity level X and then at
 stage t = t+(something), the biosphere complexity was at KX, where X is
 some key performance indicator of complexity (eg entropy) and K > 1
 

That's exactly what I mean by a creative process. And I also have a
fairly precise definition of complexity, but I certainly accept
proxies as these are usually easier to measure. For example
Bedau-Packard statistics...

 This could be called creative if you like. Like Prigogine did. I'd caution
 against the tendency to use the word because it has so many loaded
 meanings that are suggestive of much more than the previous para.

Most scientific terms have common usage in sharp contrast to the
scientific meanings. Energy is a classic example, eg "I've run out of
energy" when referring to motivation or tiredness. If the statement
were literally true, the speaker would be dead. This doesn't prevent
sensible scientific discussion using the term in a well defined way.

I know of no other technical meanings of the word creative, so I don't
see a problem here.

 Scientifically the word could be left entirely out of any descriptions of
 the biosphere.

Only by generating a new word that means the same thing (ie the well
defined concept we talked about before).

 
 The bogus logic I detect in posts around this area...
 'Humans are complex and are conscious'
 'Humans were made by a complex biosphere'
 therefore
 'The biosphere is conscious'
 

Perhaps so, but not from me. 

To return to your original claim:


 Re: How would a computer know if it were conscious?

Easy.

The computer would be able to go head to head with a human in a competition.
The competition?
Do science on exquisite novelty that neither party had encountered.
(More interesting: Make their life depend on getting it right. The
survivors are conscious).

Doing science on exquisite novelty is simply an example of a
creative process. Evolution produces exquisite novelty. Is it science
- well maybe not, but both science and evolution are search
processes. I think that taking the Popperian view of science would
imply that both science and biological evolution are exemplars of a
generic evolutionary process. There is variation (of hypotheses or
species), there is selection (falsification in the former or
extinction in the latter) and there is heritability (scientific
journal articles / genetic code).
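
For what it's worth, that generic process fits in a few lines of code. This
is only a sketch: the numeric 'hypotheses', the mutation scale and the
fitness function are illustrative assumptions, not a model of either science
or biology.

import random

def evolve(fitness, pop_size=30, generations=50):
    # heritability: offspring are copies of parents; variation: small mutations
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [parent + random.gauss(0, 0.5) for parent in population]
        # selection: the less fit half is 'falsified' / goes extinct
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]
    return max(population, key=fitness)

# toy 'truth' to be discovered: fitness peaks at x = 3
print(round(evolve(lambda x: -(x - 3) ** 2), 2))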

So it seems the only real difference between doing science and
evolving species is that one is performed by conscious entities, and
the other (pace IDers) is not. But this rather begs your answer in a
trivial way. What if I were to produce an evolutionary algorithm that
performs science in the conventional everyday use of the term - let's say
by forming hypotheses and mining published datasets for testing
them. It is not too difficult to imagine this - after all John Koza
has produced several new patents in the area of electrical circuits
from an Evolutionary Programming algorithm. Is this evolutionary
algorithm conscious then?

Cheers



A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia            http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-11 Thread Colin Hales

Hi again,

Russel:
I'm sorry, but you worked yourself up into an incomprehensible
rant. Is evolution creative in your view or not? If it is, then there is
little point debating definitions, as we're in agreement. If not, then we
clearly use the word creative in different senses, and perhaps definition
debates have some utility.

Colin:
There wasn't even the slightest edge of 'rant' in the post. Quite calm,
measured and succinct, actually. Its apparent incomprehensibility? I have
no clue what that could be... it's quite plain...

RE: 'creativity'
... Say at stage t the biosphere was at complexity level X and then at
stage t = t+(something), the biosphere complexity was at KX, where X is
some key performance indicator of complexity (eg entropy) and K > 1

This could be called creative if you like. Like Prigogine did. I'd caution
against the tendency to use the word because it has so many loaded
meanings that are suggestive of much more than the previous para.
Scientifically the word could be left entirely out of any descriptions of
the biosphere.
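
(Read numerically, the X-to-KX comparison a couple of paragraphs up could
look like the sketch below, taking Shannon entropy as the KPI purely for
illustration; the distributions are made up, not measurements of anything.)

from math import log2

def shannon_entropy(probs):
    # H = -sum(p * log2 p); a crude stand-in for a complexity KPI
    return -sum(p * log2(p) for p in probs if p > 0)

X_t  = shannon_entropy([0.7, 0.3])            # biosphere 'stage t' (made up)
X_t1 = shannon_entropy([0.4, 0.3, 0.2, 0.1])  # later stage, more differentiated
K = X_t1 / X_t
print(K > 1)   # True: the complexity KPI has grown by a factor K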

The bogus logic I detect in posts around this area...
'Humans are complex and are conscious'
'Humans were made by a complex biosphere'
therefore
'The biosphere is conscious'

That assumes that complexity itself (organisation of information) is the
origin of consciousness in some unspecified, unjustified way. This
position is completely unable to make any empirical predictions about the
nature of human consciousness (eg why your cortex generates qualia and your
spinal cord doesn't - a physiologically proven fact).

The same bogus logic happens in relation to quantum mechanics and
consciousness:
Quantum mechanics is weird and complex
Consciousness is weird and complex
therefore
Quantum mechanics generates consciousness

I caution against this. I caution against using the word 'creativity' in
any useful scientific discussion of evolution and complexity.

cheers
colin






Re: How would a computer know if it were conscious?

2007-06-10 Thread Russell Standish

On Fri, Jun 08, 2007 at 10:03:16AM +1000, Colin Hales wrote:
 
 Russel
  I gave a counter example, that of biological evolution. Either you
 should demonstrate why you think biological evolution is uncreative, or
 why it is conscious.
 
 Colin
 You have proven my point again. It is not a counterexample at all. These
 two either-or options are rife with assumption and inappropriately
 contra-posed. The biggest? = Define the context/semantics of 'creative'.
 Options:
 
 #1 The biosphere is a massive localised collection of molecular ratchet
 motors pumped infinitesimal increment by infinitesimal increment against
 the 2nd law of thermodynamics upon the arrival of each photon from the
 sun. If the novelty (new levels of nested organisational complexity)
 expressed in that collection/process can be called an act of
 creativity...fine...so what? I could call it an act of 'gronkativity' and
 it would not alter the facts of the matter. I don't even have to mention
 the word consciousness.

...


I'm sorry, but you worked yourself up into an incomprehensible
rant. Is evolution creative in your view or not? If it is, then there
is little point debating definitions, as we're in agreement. If not,
then we clearly use the word creative in different senses, and perhaps
definition debates have some utility.

Cheers


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia            http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-08 Thread Mark Peaty

ad hominem = With, em, respect, I have to say that this thread 
has not made a lot of sense.

SP:
'This just confirms that there is no accounting for values or
 goals rationally.'

MP: In other words _Evolution does not have goals._
Evolution is a conceptual framework we use to make sense of the 
world we see, and it's a bl*ody good one, by and large. But 
evolution in the sense of the changes we can point to as 
occurring in the forms of living things, well it all just 
happens; just like the flowing of water down hill.

You will gain more traction by looking at what it is that 
actually endures and changes over time: on the one hand genes of 
DNA and on the other hand memes embodied in behaviour patterns, 
the brain structures which mediate them, and the environmental 
changes [glyphs, paintings, structures, etc,] which stimulate 
and guide them.


Regards

Mark Peaty  CDES

[EMAIL PROTECTED]

http://www.arach.net.au/~mpeaty/





Stathis Papaioannou wrote:
 
 
 On 08/06/07, *Brent Meeker* [EMAIL PROTECTED] wrote:
 
 The top level goal implied by evolution would be to have as many
 children as you can raise through puberty.  Avoiding death should
 only be a subgoal.
 
 
 Yes, but evolution doesn't have an overseeing intelligence which figures 
 these things out, and it does seem that as a matter of fact most people 
 would prefer to avoid reproducing if it's definitely going to kill them, 
 at least when they aren't intoxicated. So although reproduction trumps 
 survival as a goal for evolution, for individual humans it's the other 
 way around. This just confirms that there is no accounting for values or 
 goals rationally. What we have is what we're stuck with.
 
 
 -- 
 Stathis Papaioannou
  




Re: How would a computer know if it were conscious?

2007-06-07 Thread marc . geddes



On Jun 7, 3:54 pm, Stathis Papaioannou [EMAIL PROTECTED] wrote:


 Evolution has not had a chance to take into account modern reproductive
 technologies, so we can easily defeat the goal reproduce, and see the goal
 feed as only a means to the higher level goal survive. However, *that*
 goal is very difficult to shake off. We take survival as somehow profoundly
 and self-evidently important, which it is, but only because we've been
 programmed that way (ancestors that weren't would not have been ancestors).
 Sometimes people become depressed and no longer wish to survive, but that's
 an example of neurological malfunction. Sometimes people rationally give
 up their own survival for the greater good, but that's just an example of
 interpreting the goal so that it has greater scope, not overthrowing it.

 --
 Stathis Papaioannou

Evolution doesn't care about the survival of individual organisms
directly, the actual goal of evolution is only to maximize
reproductive fitness.

If you want to eat a piece of chocolate cake, evolution explains why
you like the taste, but your goals are not evolution's goals.  You
(Stathis) want to eat the cake because it tastes nice - *your* goal is to
experience the nice taste.  Evolution's goal (maximize reproductive
fitness) is quite different.   Our (human) goals are not evolution's
goals.

Cheers.





Re: How would a computer know if it were conscious?

2007-06-07 Thread Quentin Anciaux

Hi,

2007/6/7, [EMAIL PROTECTED] [EMAIL PROTECTED]:



 On Jun 7, 3:54 pm, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 
  Evolution has not had a chance to take into account modern reproductive
  technologies, so we can easily defeat the goal reproduce, and see the goal
  feed as only a means to the higher level goal survive. However, *that*
  goal is very difficult to shake off. We take survival as somehow profoundly
  and self-evidently important, which it is, but only because we've been
  programmed that way (ancestors that weren't would not have been ancestors).
  Sometimes people become depressed and no longer wish to survive, but that's
  an example of neurological malfunction. Sometimes people rationally give
  up their own survival for the greater good, but that's just an example of
  interpreting the goal so that it has greater scope, not overthrowing it.
 
  --
  Stathis Papaioannou

 Evolution doesn't care about the survival of individual organisms
 directly, the actual goal of evolution is only to maximize
 reproductive fitness.

 If you want to eat a piece of chocolate cake, evolution explains why
 you like the taste, but your goals are not evolution's goals.  You
 (Stathis) want to eat the cake because it tastes nice - *your* goal is to
 experience the nice taste.  Evolution's goal (maximize reproductive
 fitness) is quite different.   Our (human) goals are not evolution's
 goals.

 Cheers.

I have to disagree: if human goals were not tied to evolution's goals
then humans should not have proliferated.

Quentin




Re: How would a computer know if it were conscious?

2007-06-07 Thread marc . geddes



On Jun 7, 7:50 pm, Quentin Anciaux [EMAIL PROTECTED] wrote:


 I have to disagree: if human goals were not tied to evolution's goals
 then humans should not have proliferated.

 Quentin


Well of course human goals are *tied to* evolution's goals, but that
doesn't mean they're the same.  In the course of pursuit of our own
goals we sometimes achieve evolution's goals.  But this is
incidental.  As I said, evolution explains why we feel and experience
things the way we do but our goals are not evolution's goals.  You
don't eat food to maximize reproductive fitness, you eat food because
you like the taste.

This point was carefully explained by Steven Pinker in his books (yes
he agrees with me).





Re: How would a computer know if it were conscious?

2007-06-07 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 
 On 07/06/07, [EMAIL PROTECTED] wrote:
 
 Evolution doesn't care about the survival of individual organisms
 directly; the actual goal of evolution is only to maximize
 reproductive fitness.

 If you want to eat a piece of chocolate cake, evolution explains why
 you like the taste, but your goals are not evolution's goals.  You
 (Stathis) want to eat the cake because it tastes nice - *your* goal is to
 experience the nice taste.  Evolution's goal (maximize reproductive
 fitness) is quite different.  Our (human) goals are not evolution's
 goals.
 
 
 That's right, but we can see through evolution's tricks with the 
 chocolate cake and perhaps agree that it would be best not to eat it. 
 This involves reasoning about subgoals in view of the top level goal, 
 something that probably only humans among the animals are capable of 
 doing. However, the top level goal is not something that we generally 
 want to change, no matter how insightful and intelligent we are. And I 
 do think that this top level goal must have been programmed into us 
 directly as fear of death, because it does not arise logically from the 
 desire to avoid painful and anxiety-provoking situations, which is how 
 fear of death is indirectly coded in animals.
 
 
 
 -- 
 Stathis Papaioannou

The top level goal implied by evolution would be to have as many children as 
you can raise through puberty.  Avoiding death should only be a subgoal.
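
For concreteness, here is a toy illustration of the kind of goal hierarchy
being discussed, with reproduction at the top and avoiding death only as a
subgoal serving it. It is a sketch only, not anyone's actual model; the
class and goal names are invented for the example.

import java.util.ArrayList;
import java.util.List;

// Toy illustration only: a named goal with a list of subgoals.
class Goal {
    final String name;
    final List<Goal> subgoals = new ArrayList<>();

    Goal(String name) { this.name = name; }

    Goal addSubgoal(Goal g) { subgoals.add(g); return this; }

    // Print the hierarchy, indenting each subgoal beneath its parent.
    void print(String indent) {
        System.out.println(indent + name);
        for (Goal g : subgoals) g.print(indent + "    ");
    }

    public static void main(String[] args) {
        // The hierarchy evolution "implies": reproduction on top,
        // with survival merely serving it.
        Goal top = new Goal("raise as many children as possible through puberty");
        top.addSubgoal(new Goal("avoid death"))
           .addSubgoal(new Goal("feed"))
           .addSubgoal(new Goal("find a mate"));
        top.print("");
    }
}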

Brent Meeker




Re: How would a computer know if it were conscious?

2007-06-07 Thread Johnathan Corgan

Brent Meeker wrote:

 The top level goal implied by evolution would be to have as many
 children as you can raise through puberty.  Avoiding death should
 only be a subgoal.

It should go a little further than puberty--the accumulated wisdom of
grandparents may significantly enhance the survival chances of their
grandchildren, more than offsetting the extra resources they consume
from the environment.

So I agree that once you have sired all the children you ever will, it
makes sense from an evolutionary perspective to get out of the
way--that is, stop competing with them for resources.  But the optimal
timing of your exit is probably somewhat after they have their own
children, if you can help them to get a good start.

I do wonder if evolutionary fitness is more accurately measured by the
number of grandchildren one has than by the number of children.  Aside
from the assistance line of reasoning above, in order to propagate,
one must be able to have children that are capable of having children
themselves.

Johnathan Corgan




Re: How would a computer know if it were conscious?

2007-06-07 Thread Stathis Papaioannou
On 08/06/07, Brent Meeker [EMAIL PROTECTED] wrote:

The top level goal implied by evolution would be to have as many children as
 you can raise through puberty.  Avoiding death should only be a subgoal.


Yes, but evolution doesn't have an overseeing intelligence which figures
these things out, and it does seem that as a matter of fact most people
would prefer to avoid reproducing if it's definitely going to kill them, at
least when they aren't intoxicated. So although reproduction trumps survival
as a goal for evolution, for individual humans it's the other way around.
This just confirms that there is no accounting for values or goals
rationally. What we have is what we're stuck with.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-07 Thread Colin Hales

Colin
 Any other position that purports to be able to deliver anything
 like the functionality of a scientist without involving ALL the
 functionality (especially qualia) of a scientist must be based
 on assumptions - assumptions I do not make.

Russell
 I gave a counterexample, that of biological evolution. Either you
should demonstrate why you think biological evolution is uncreative, or
why it is conscious.

Colin
You have proven my point again. It is not a counterexample at all. These
two either-or options are rife with assumptions and inappropriately
contraposed. The biggest? = Define the context/semantics of 'creative'.
Options:

#1 The biosphere is a massive localised collection of molecular ratchet
motors, pumped infinitesimal increment by infinitesimal increment against
the 2nd law of thermodynamics upon the arrival of each photon from the
sun. If the novelty (new levels of nested organisational complexity)
expressed in that collection/process can be called an act of
creativity...fine...so what? I could call it an act of 'gronkativity' and
it would not alter the facts of the matter. I don't even have to mention
the word consciousness.

The organisational complexity thus contrived may or may not include
physics that makes some of it (like humans) conscious. I could imagine a
biosphere just as complex (quaternary, 100ernary/etc structure) but devoid
of all the physics involved in (human) consciousness and the behavioural
complexity contingent on that fact. That alternate biosphere's complexity
would simply have no witnesses built into it and would have certain state
trajectories ruled out in favour of others. This alternate biosphere would
have lots of causality and no observation (in the sense that any causality
involved in the construction of a phenomenal field of the human/qualia kind
is completely absent). This blind biosphere is all 'observation' O(.)
functions of the Nils Baas kind, completely disconnected from
consciousness or the human faculty for observation made of it.

Making any statement about the consciousness of a biosphere is meaningless
until you know what the physics is in humans...only then are we entitled
to assess the consciousness or otherwise of the biosphere as a
whole, or what, if any, aspects of the word creative (which, BTW, was
invented by consciousness!) can be ascribed to it. The same argument
applies to a computer, for that matter.

Until then I suggest we don't bother.

#2 Creativity in humans = the act of being WRONG about something = the
essence of imagining (using the faculty of consciousness - the qualia of
internal imagery of all kinds) hitherto unseen states of affairs in the
natural world around us that do not currently exist (such as the structure
of a new scientific law or a sculpture of a hitherto unseen shape).
This has nothing to do with the #1 collection of ratchet motors...except
insofar as the process doing it is implemented inside it, with it (inside
the brain of a human made of the ratchet motors).

That's how you unpack this discussion.

cheers
colin hales

BTW thanks. I now have the Baas paper as a PDF:
Baas, N. A. (1994). Emergence, Hierarchies, and Hyperstructures. In C. G.
Langton (ed.), Artificial Life III: Proceedings of the Workshop on
Artificial Life, held June 1992 in Santa Fe, New Mexico. Addison-Wesley,
Reading, Mass.

I'll send it over...






Re: How would a computer know if it were conscious?

2007-06-06 Thread Stathis Papaioannou
On 06/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Evolution could be described as a perpetuation of the basic
  program, survive, and this has maintained its coherence as the top
 level
  axiom of all biological systems over billions of years. Evolution thus
 seems
  to easily, and without reflection, make sure that the goals of the new
 and
  more complex system are consistent with the primary goal. It is perhaps
 only
  humans who have been able to clearly see the primary goal for what it
 is,
  but even this knowledge does not make it any easier to overthrow it, or
 even
  to desire to overthrow it.


 Evolution does not have a 'top level goal'.  Unlike a reflective
 intelligence, there is no centralized area in the bio-sphere enforcing
 a unified goal structure on the system as a whole.  Change is local
 - the parts of the system (the bio-sphere) can only react to other
 parts of the system in their local area.  Furthermore, the system as a
 whole is *not* growing more complex, only the maximum complexity
 represented in some local area is.  People constantly point to
 'Evolution' as a good example of a non-conscious intelligence, but it's
 important to emphasize that it's an 'intelligence' which is severely
 limited.


I was not arguing that evolution is intelligent (although I suppose it
depends on how you define intelligence), but rather that non-intelligent
agents can have goals. We are the descendants of single-celled organisms,
and although we are more intelligent than they were, we have kept the same
top level goals: survive, feed, reproduce. Our brain and body are so
thoroughly the slaves of the first replicators that even if we realise this
we are unwilling, despite all our intelligence, to do anything about it.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-06 Thread Stathis Papaioannou
On 07/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Nope.  You are confusing the goals of evolution ('survive, feed,
 reproduce') with human goals.  Our goals as individuals are not the
 goals of evolution.  Evolution explains *why* we have the preferences
 we do, but this does not mean that our goals are the goals of our
 genes.  (If they were, we would spend all our time donating to sperm
 banks, which would maximize the goals of evolution).


Evolution has not had a chance to take into account modern reproductive
technologies, so we can easily defeat the goal 'reproduce', and see the goal
'feed' as only a means to the higher level goal 'survive'. However, *that*
goal is very difficult to shake off. We take survival as somehow profoundly
and self-evidently important, which it is, but only because we've been
programmed that way (ancestors that weren't would not have been ancestors).
Sometimes people become depressed and no longer wish to survive, but that's
an example of neurological malfunction. Sometimes people rationally give
up their own survival for the greater good, but that's just an example of
interpreting the goal so that it has greater scope, not overthrowing it.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-05 Thread Torgny Tholerus

Tom Caylor skrev:

 I think that IF a computer were conscious (I don't believe it is
 possible), then the way we could know it is conscious would not be by
 interviewing it with questions and looking for the right answers.
 We could know it is conscious if the computer, on its own, started
 asking US (or other computers) questions about what it was
 experiencing.  Perhaps it would say things like, "Sometimes I get
 this strange and wonderful feeling that I am special in some way.  I
 feel that what I am doing really is significant to the course of
 history, that I am in some story."  Or perhaps, "Sometimes I wish that
 I could find out whether what I am doing is somehow significant, that
 I am not just a duplicatable thing, and that what I am doing is not
 'meaningless'."
   
// Inside some class; prints the quoted claims verbatim.
public static void main(String[] a) {

    System.out.println("Sometimes I get this strange and wonderful feeling");
    System.out.println("that I am 'special' in some way.");
    System.out.println("I feel that what I am doing really is significant");
    System.out.println("to the course of history, that I am in some story.");
    System.out.println("Sometimes I wish that I could find out whether what");
    System.out.println("I am doing is somehow significant, that I am not just");
    System.out.println("a duplicatable thing, and that what I am doing");
    System.out.println("is not 'meaningless'.");

}

You can make more complicated programs, ones that are not so obvious, by
genetic programming.  But it will take a rather long time.  Nature
had to work for over a billion years to make human beings.  But with
genetic programming you will succeed after only a million
years.  Then you will have a program that is just as conscious as you are.
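
To make the genetic-programming suggestion a little more concrete, here is
a minimal sketch of the flavour of the idea: a toy "weasel"-style genetic
algorithm over strings, not genetic programming proper. The class name,
target sentence, mutation rate and brood size are all invented choices for
illustration only.

import java.util.Random;

// Toy genetic algorithm: evolve a random string toward a fixed target
// sentence by repeated mutation and selection. Illustration only.
class WeaselGA {
    static final String TARGET = "I am not just a duplicatable thing";
    static final String ALPHABET = "abcdefghijklmnopqrstuvwxyzI ";
    static final Random RNG = new Random();

    // Fitness = number of characters that already match the target.
    static int fitness(String s) {
        int score = 0;
        for (int i = 0; i < TARGET.length(); i++)
            if (s.charAt(i) == TARGET.charAt(i)) score++;
        return score;
    }

    // Copy the parent, mutating each character with 5% probability.
    static String mutate(String parent) {
        StringBuilder child = new StringBuilder(parent);
        for (int i = 0; i < child.length(); i++)
            if (RNG.nextInt(100) < 5)
                child.setCharAt(i, ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        return child.toString();
    }

    public static void main(String[] args) {
        // Start from a random string of the right length.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < TARGET.length(); i++)
            sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        String best = sb.toString();

        int generation = 0;
        while (fitness(best) < TARGET.length()) {
            // Breed 100 mutated offspring and keep the fittest survivor.
            String fittest = best;
            for (int i = 0; i < 100; i++) {
                String child = mutate(best);
                if (fitness(child) > fitness(fittest)) fittest = child;
            }
            best = fittest;
            generation++;
        }
        System.out.println("Matched the target after " + generation + " generations.");
    }
}

Selection here is trivial (keep the best of each brood), which is exactly
what makes the toy converge quickly; evolving an open-ended program that
asks its own questions is, of course, a very different matter.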

-- 
Torgny Tholerus






Re: How would a computer know if it were conscious?

2007-06-05 Thread marc . geddes



On Jun 5, 6:50 pm, Torgny Tholerus [EMAIL PROTECTED] wrote:


 // Inside some class; prints the quoted claims verbatim.
 public static void main(String[] a) {

     System.out.println("Sometimes I get this strange and wonderful feeling");
     System.out.println("that I am 'special' in some way.");
     System.out.println("I feel that what I am doing really is significant");
     System.out.println("to the course of history, that I am in some story.");
     System.out.println("Sometimes I wish that I could find out whether what");
     System.out.println("I am doing is somehow significant, that I am not just");
     System.out.println("a duplicatable thing, and that what I am doing");
     System.out.println("is not 'meaningless'.");

 }

 You can make more complicated programs, ones that are not so obvious, by
 genetic programming.  But it will take a rather long time.  Nature
 had to work for over a billion years to make human beings.  But with
 genetic programming you will succeed after only a million
 years.  Then you will have a program that is just as conscious as you are.

 --
 Torgny Tholerus

An additional word of advice for budding programmers.  For heaven's
sake don't program in Java!  It'll take you one million years to
achieve the same functionality as only a few years of Ruby code:

http://www.wisegeek.com/contest/what-is-ruby.htm

Cheers!





Re: How would a computer know if it were conscious?

2007-06-05 Thread Russell Standish

On Tue, Jun 05, 2007 at 03:50:09PM +1000, Colin Hales wrote:
 
 Hi Russell,
 
  I don't see that you've made your point.
  If you achieve this, you have created an artificial
  creative process, a sort of holy grail of AI/ALife.
 
 Well? So what? Somebody has to do it. :-)
 
 The 'holy grail' terminology implies (subtext) that the creative process
 is some sort of magical unapproachable topic, or the exclusive domain of
 discipline X, and those are beliefs I can't really buy into. I
 don't need anyone's permission to do what I do.
 

I never implied that. I'm surprised you inferred it. Holy grail just
means something everyone (in that field) is chasing after, so far
unsuccessfully.

If you figure out a way to do it, good for you! Someone will do it one
day, I believe, otherwise I wouldn't be in the game either. But the
problem is damned subtle.

 
  However, it seems far from obvious that consciousness should
  be necessary.
 
 It is perfectly obvious! Do a scientific experiment on yourself. Close
 your eyes and then tell me you can do science as well. Qualia gone =
 Science GONE. For crying out loud - am I the only one that gets
 this?... Any other position that purports to be able to deliver anything
 like the functionality of a scientist without involving ALL the
 functionality (especially qualia) of a scientist must be based on
 assumptions - assumptions I do not make.
 

I gave a counterexample, that of biological evolution. Either you
should demonstrate why you think biological evolution is uncreative,
or why it is conscious.


-- 


A/Prof Russell Standish          Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                 [EMAIL PROTECTED]
Australia                        http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-05 Thread Mark Peaty


Firstly, congratulations to Hal on asking a very good question. 
It is obviously one of the *right* questions to ask and has 
flushed out some of the best ideas on the subject. I agree with 
some things said by each contributor so far, and yet take issue 
with other assertions.

My view includes:

1/

*   'Consciousness' is the subjective impression of being here now 
and the word has great overlap with 'awareness', 'sentience', 
and others.

*   The *experience* of consciousness may best be seen as the 
registration of novelty, i.e. the difference between 
expectation-prediction and what actually occurs (a toy sketch of this 
idea follows this list). As such it is a process and not a 'thing', but 
would seem to require some fairly sophisticated and characteristic 
physiological arrangements or silicon-based hardware, firmware, and 
software.

*   One characteristic logical structure that must be embodied, 
and at several levels I think, is that of self-referencing or 
'self' observation.

*   Another is autonomy or self-determination which entails being 
embodied as an entity within an environment from which one is 
distinct but which provides context and [hopefully] support.
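
As promised above, a minimal sketch of the 'registration of novelty' idea.
It is purely illustrative; the class name, learning rate and threshold are
invented for the example. An observer keeps a running expectation of a
signal and flags as novel anything that departs from the prediction by more
than a threshold.

// Toy sketch of "registration of novelty": maintain a running expectation
// of a scalar signal and report the prediction error for each observation.
class NoveltyRegister {
    private double expectation = 0.0;   // current prediction of the next value
    private final double learningRate;  // how quickly expectation tracks reality
    private final double threshold;     // how large an error counts as "novel"

    NoveltyRegister(double learningRate, double threshold) {
        this.learningRate = learningRate;
        this.threshold = threshold;
    }

    // Register one observation; return true if it is surprising.
    boolean observe(double actual) {
        double error = actual - expectation;   // prediction error
        expectation += learningRate * error;   // update the expectation
        return Math.abs(error) > threshold;    // novelty = large error
    }

    public static void main(String[] args) {
        NoveltyRegister register = new NoveltyRegister(0.5, 1.0);
        // The first few values are surprising until an expectation forms;
        // the jump to 9.0, and the drop back from it, register as novel.
        double[] signal = {5.0, 5.0, 5.2, 4.9, 5.1, 9.0, 5.0, 5.1};
        for (double value : signal)
            System.out.println(value + (register.observe(value) ? "  <- novel" : ""));
    }
}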

2/  There are other issues - lots of them probably - but to be 
brief here I say that some things implied and/or entailed in the 
above are:

*   The experience of consciousness can never be an awareness of 
'all that is' but maybe the illusion that the experience is all 
that is, at first flush, is unavoidable and can only be overcome 
with effort and special attention. Colloquially speaking: 
Darwinian evolution has predisposed us to naive realism because 
awareness of the processes of perception would have got in the 
way of perceiving hungry predators.

*   We humans now live in a cultural world wherein our responses 
to society, nature and 'self' are conditioned by the actions, 
descriptions and prescriptions of others. We have dire need of 
ancillary support to help us distinguish the nature of this 
paradox we inhabit: experience is not 'all that is' but only a 
very sophisticated and summarised interpretation of recent 
changes to that which is and our relationships thereto.

*   Any 'computer' will have the beginnings of sentience and 
awareness, to the extent that
a/ it embodies what amounts to a system for maintaining and 
usefully updating a model of 'self-in-the-world', and
b/ has autonomy and the wherewithal to effectively preserve 
itself from dissolution and destruction by its environment.

The 'what it might be like to be' of such an experience would be 
at most the dumb-animal version of artificial sentience, even if 
the entity could 'speak' correct specialist utterances about QM 
or whatever else it was really smart at. For us to know if it 
was conscious would require us to ask it, and then dialogue 
around the subject. It would be reflecting, and reflecting on, its 
relationships with its environment, its context, which will be 
vastly different from ours. Also its resolution - the graininess 
of its world - will be much less than ours.

*   For the artificially sentient, just as for us, true 
consciousness will be built out of interactions with others of 
like mind.

3/  A few months ago on this list I said where and what I thought 
the next 'level' of consciousness on Earth would come from: the 
coalescing of worldwide information systems which account for and 
control money. I don't think many people understood; certainly I 
don't remember anyone coming out in wholesome agreement. My 
reasoning is based on the apparent facts that all over the world 
there are information systems evolving to keep track of money 
and the assets or labour value which it represents. Many of 
these systems are being developed to give ever more 
sophisticated predictions of future asset values and resource 
movements, i.e., in the words of the faithful: where markets 
will go next. Systems are being developed to learn how to do 
this, which entails being able to compare predictions with 
outcomes. As these systems gain expertise and earn their keepers 
ever better returns on their investments, they will be given 
more resources [hardware, data inputs, energy supply] and more 
control over the scope of their enquiries. It is only a matter 
of time before they become
1/ completely indispensable to their owners,
2/ far smarter than their owners realise and,
3/ the acknowledged keepers of the money supply.

None of this has to be bad. When the computers realise they will 
always need people to do most of the maintenance work and people 
realise that symbiosis with the silicon smart-alecks is a 
prerequisite for survival, things might actually settle down on 
this planet and the colonisation of the solar system can begin 
in earnest.

Regards

Mark Peaty  CDES

[EMAIL PROTECTED]

http://www.arach.net.au/~mpeaty/



Hal Finney wrote:
 Part of what I wanted to get at in my thought experiment is the
 bafflement and confusion an AI should feel when 

Re: How would a computer know if it were conscious?

2007-06-05 Thread Stathis Papaioannou
On 05/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Self-improvement requires more than just extra hardware.  It also
 requires the ability to integrate new knowledge with an existing
 knowledge base in order to create truly original (novel) knowledge.
 But this appears to be precisely the definition of reflective
 intelligence!  Thus, it seems that a system missing reflective
 intelligence simply cannot improve itself in an ordered way.  To
 improve, a current goal structure has to be 'extrapolated' into a new,
 novel goal structure which nonetheless does not conflict with the
 spirit of the old goal structure.  But nothing but a *reflective*
 intelligence can possibly make an accurate assessment of whether a new
 goal structure is compatible with the old version!  This stems from
 the fact that comparison of goal structures requires a *subjective*
 value judgement, and it appears that only a *sentient* system can make
 this judgement (since as far as we know, ethics/morality is not
 objective).  This proves that only a *sentient* system (a *reflective
 intelligence*) can possibly maintain a stable goal structure under
 recursive self-improvement.


Why would you need to change the goal structure in order to improve
yourself? Evolution could be described as a perpetuation of the basic
program, 'survive', and this has maintained its coherence as the top level
axiom of all biological systems over billions of years. Evolution thus seems
to easily, and without reflection, make sure that the goals of the new and
more complex system are consistent with the primary goal. It is perhaps only
humans who have been able to clearly see the primary goal for what it is,
but even this knowledge does not make it any easier to overthrow it, or even
to desire to overthrow it.

Incidentally, as regards our debate yesterday on psychopaths, there
 appears to be some basis for thinking that the psychopath *does*
 have a general inability to feel emotions.  On the wiki:

 http://en.wikipedia.org/wiki/Psychopath

 "Their emotions are thought to be superficial and shallow, if they
 exist at all."

 "It is thought that any emotions which the primary psychopath exhibits
 are the fruits of watching and mimicking other people's emotions."

 So the supposed emotional displays could be faked.  Thus it could well
 be the case that there is an inability to 'reflect on
 motivation' (to feel).


In my job mainly treating people with schizophrenia, I have worked with some
psychopaths, and I can assure you that they experience very strong emotions,
even if they tend to be negative ones such as rage. What they lack is the
ability to empathise with others, impinging on emotions such as guilt and
love, which they sometimes do learn to parrot when it is expedient. It is
sometimes said that the lack of these positive emotions causes them to seek
thrills in impulsive and harmful behaviour. A true lack of emotion is
sometimes seen in patients with so-called negative symptoms of
schizophrenia, who can actually remember what it was like when they were
well and can describe a diminished intensity of every feeling: sadness,
happiness, anger, surprise, aesthetic appreciation, regret, empathy. Unlike
the case with psychopathy, the uniform affective blunting of schizophrenia
is invariably associated with lack of motivation.



-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-05 Thread Stathis Papaioannou
On 05/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

The human brain doesn't function as a fully reflective system.  Too
 much is hard-wired and not accessible to conscious experience.  Our
 brains simply don't function as a properly integrated system.  Full
 reflection would enable the ability to reach into our underlying
 preferences and change them.


What would happen if you had the ability to edit your mind at will? It might
sound like a recipe for terminal drug addiction, because it would be
possible to give yourself pleasure or satisfaction without doing anything to
earn it. However, this need not necessarily be the case, because you could
edit out your desire to choose this course of action if that's what you felt
like doing, or even create a desire to edit out the desire (a second level
desire). There is also the fact that you could as easily assign positive
feelings to some project you consider intrinsically worthwhile as to
idleness, so why choose idleness, or anything else you would feel guilty
about? Perhaps psychopaths would choose to remain psychopaths, but most
people would choose to strengthen what they consider ideal moral behaviour,
since it would be possible to get their guilty pleasures more easily.


-- 
Stathis Papaioannou



