Re: How would a computer know if it were conscious?

2007-06-05 Thread Torgny Tholerus

Tom Caylor wrote:

 I think that IF a computer were conscious (I don't believe it is
 possible), then the way we could know it is conscious would not be by
 interviewing it with questions and looking for the right answers.
 We could know it is conscious if the computer, on its own, started
 asking US (or other computers) questions about what it was
 experiencing.  Perhaps it would say things like, "Sometimes I get
 this strange and wonderful feeling that I am special in some way.  I
 feel that what I am doing really is significant to the course of
 history, that I am in some story."  Or perhaps, "Sometimes I wish that
 I could find out whether what I am doing is somehow significant, that
 I am not just a duplicatable thing, and that what I am doing is not
 'meaningless'."
   
public class ConsciousMachine {

    public static void main(String[] a) {
        // Print canned first-person claims about inner experience.
        System.out.println("Sometimes I get this strange and wonderful feeling");
        System.out.println("that I am 'special' in some way.");
        System.out.println("I feel that what I am doing really is significant");
        System.out.println("to the course of history, that I am in some story.");
        System.out.println("Sometimes I wish that I could find out whether what");
        System.out.println("I am doing is somehow significant, that I am not just");
        System.out.println("a duplicatable thing, and that what I am doing");
        System.out.println("is not 'meaningless'.");
    }
}

You can make more complicated programs, whose behaviour is not so obvious, by 
genetic programming.  But it will take a rather long time.  Nature 
had to work for over a billion years to make human beings.  But with 
genetic programming you will succeed after only a million 
years.  Then you will have a program that is just as conscious as you are.
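
As an aside (not part of Torgny's post), here is a minimal sketch of the kind of
evolutionary search alluded to above: a toy (1+1) genetic algorithm in Java that
mutates a random string until it matches a target sentence.  It is a plain genetic
algorithm over strings rather than full genetic programming, and the class name,
target sentence, alphabet, and mutation rate are illustrative assumptions only.

import java.util.Random;

public class EvolveSentence {
    // Illustrative target: one of the sentences printed above.
    static final String TARGET =
        "I feel that what I am doing really is significant";
    static final char[] ALPHABET =
        " abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'.".toCharArray();
    static final Random RNG = new Random();

    // Fitness = number of positions that already match the target.
    static int fitness(char[] candidate) {
        int score = 0;
        for (int i = 0; i < TARGET.length(); i++) {
            if (candidate[i] == TARGET.charAt(i)) score++;
        }
        return score;
    }

    // Copy the parent, replacing each character with a random one at the given rate.
    static char[] mutate(char[] parent, double rate) {
        char[] child = parent.clone();
        for (int i = 0; i < child.length; i++) {
            if (RNG.nextDouble() < rate) {
                child[i] = ALPHABET[RNG.nextInt(ALPHABET.length)];
            }
        }
        return child;
    }

    public static void main(String[] args) {
        // Start from a completely random string of the right length.
        char[] best = mutate(new char[TARGET.length()], 1.0);
        int generation = 0;
        while (fitness(best) < TARGET.length()) {
            char[] child = mutate(best, 0.02);
            if (fitness(child) >= fitness(best)) best = child;  // keep the fitter variant
            if (++generation % 500 == 0) {
                System.out.println(generation + ": " + new String(best));
            }
        }
        System.out.println("Reached target after " + generation + " generations.");
    }
}

On a typical machine this converges in well under a minute; genetic programming
proper evolves program trees rather than fixed-length strings, which is a much
harder search problem.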

-- 
Torgny Tholerus






Re: How would a computer know if it were conscious?

2007-06-05 Thread marc . geddes



On Jun 5, 6:50 pm, Torgny Tholerus [EMAIL PROTECTED] wrote:


  public class ConsciousMachine {

      public static void main(String[] a) {
          // Print canned first-person claims about inner experience.
          System.out.println("Sometimes I get this strange and wonderful feeling");
          System.out.println("that I am 'special' in some way.");
          System.out.println("I feel that what I am doing really is significant");
          System.out.println("to the course of history, that I am in some story.");
          System.out.println("Sometimes I wish that I could find out whether what");
          System.out.println("I am doing is somehow significant, that I am not just");
          System.out.println("a duplicatable thing, and that what I am doing");
          System.out.println("is not 'meaningless'.");
      }
  }

  You can make more complicated programs, whose behaviour is not so obvious, by
  genetic programming.  But it will take a rather long time.  Nature
  had to work for over a billion years to make human beings.  But with
  genetic programming you will succeed after only a million
  years.  Then you will have a program that is just as conscious as you are.

 --
 Torgny Tholerus

An additional word of advice for budding programmers.  For heaven's
sake don't program in Java!  It'll take you one million years to
achieve the same functionality as only a few years of Ruby code:

http://www.wisegeek.com/contest/what-is-ruby.htm

Cheers!





Re: How would a computer know if it were conscious?

2007-06-05 Thread Russell Standish

On Tue, Jun 05, 2007 at 03:50:09PM +1000, Colin Hales wrote:
 
 Hi Russell,
 
  I don't see that you've made your point.
  If you achieve this, you have created an artificial
  creative process, a sort of holy grail of AI/ALife.
 
 Well? So what? Somebody has to do it. :-)
 
 The 'holy grail' terminology implies (subtext) that the creative process
 is some sort of magical, unapproachable topic or is the exclusive domain of
 discipline X, and that is not me; those are beliefs I can't really buy into. I
 don't need anyone's permission to do what I do.
 

I never implied that. I'm surprised you inferred it. Holy grail just
means something everyone (in that field) is chasing after, so far
unsuccessfully.

If you figure out a way to do it, good for you! Someone will do it one
day, I believe, otherwise I wouldn't be in the game either. But the
problem is damned subtle.

 
  However, it seems far from obvious that consciousness should
  be necessary.
 
 It is perfectly obvious! Do a scientific experiment on yourself. Close
 your eyes and then tell me you can do science as well. Qualia gone =
 Science GONE. For crying out loud - am I the only one who gets
 this? Any other position that purports to be able to deliver anything
 like the functionality of a scientist without involving ALL the
 functionality (especially qualia) of a scientist must be based on
 assumptions - assumptions I do not make.
 

I gave a counterexample, that of biological evolution. Either you
should demonstrate why you think biological evolution is uncreative,
or why it is conscious.


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia                                http://www.hpcoders.com.au





Re: How would a computer know if it were conscious?

2007-06-05 Thread Mark Peaty


Firstly, congratulations to Hal on asking a very good question. 
It is obviously one of the *right* questions to ask and has 
flushed out some of the best ideas on the subject. I agree with 
some things said by each contributor so far, and yet take issue 
with other assertions.

My view includes:

1/

*   'Consciousness' is the subjective impression of being here now 
and the word has great overlap with 'awareness', 'sentience', 
and others.

*   The *experience* of consciousness may best be seen as the 
registration of novelty, i.e. the difference between 
expectation-prediction and what actually occurs. As such it is a 
process and not a 'thing' but would seem to require some fairly 
sophisticated and characteristic physiological arrangements or 
silicon based hardware, firmware, and software.

*   One characteristic logical structure that must be embodied, 
and at several levels I think, is that of self-referencing or 
'self' observation.

*   Another is autonomy or self-determination which entails being 
embodied as an entity within an environment from which one is 
distinct but which provides context and [hopefully] support.

2/  There are other issues - lots of them probably - but to be 
brief here I say that some things implied and/or entailed in the 
above are:

*   The experience of consciousness can never be an awareness of 
'all that is' but maybe the illusion that the experience is all 
that is, at first flush, is unavoidable and can only be overcome 
with effort and special attention. Colloquially speaking: 
Darwinian evolution has predisposed us to naive realism because 
awareness of the processes of perception would have got in the 
way of perceiving hungry predators.

*   We humans now live in a cultural world wherein our responses 
to society, nature and 'self' are conditioned by the actions, 
descriptions and prescriptions of others. We have dire need of 
ancillary support to help us distinguish the nature of this 
paradox we inhabit: experience is not 'all that is' but only a 
very sophisticated and summarised interpretation of recent 
changes to that which is and our relationships thereto.

*   Any 'computer' will have the beginnings of sentience and 
awareness, to the extent that
a/ it embodies what amounts to a system for maintaining and 
usefully updating a model of 'self-in-the-world', and
b/ has autonomy and the wherewithal to effectively preserve 
itself from dissolution and destruction by its environment.

The 'what it might be like to be' of such an experience would be 
at most the dumb animal version of artificial sentience, even if 
the entity could 'speak' correct specialist utterances about QM 
or whatever else it was really smart at. For us to know if it 
was conscious would require us to ask it, and then dialogue 
around the subject. It would be reflecting and reflecting on its 
relationships with its environment, its context, which will be 
vastly different from ours. Also its resolution - the graininess 
of its world - will be much less than ours.

*   For the artificially sentient, just as for us, true 
consciousness will be built out of interactions with others of 
like mind.

3/  A few months ago on this list I said where and what I thought 
the next 'level' of consciousness on Earth would come from: the 
coalescing of world wide information systems which account and 
control money. I don't think many people understood; certainly I 
don't remember anyone coming out in wholehearted agreement. My 
reasoning is based on the apparent facts that all over the world 
there are information systems evolving to keep track of money 
and the assets or labour value which it represents. Many of 
these systems are being developed to give ever more 
sophisticated predictions of future asset values and resource 
movements, i.e., in the words of the faithful: where markets 
will go next. Systems are being developed to learn how to do 
this, which entails being able to compare predictions with 
outcomes. As these systems gain expertise and earn their keepers 
ever better returns on their investments, they will be given 
more resources [hardware, data inputs, energy supply] and more 
control over the scope of their enquiries. It is only a matter 
of time before they become
1/ completely indispensable to their owners,
2/ far smarter than their owners realise and,
3/ the acknowledged keepers of the money supply.

None of this has to be bad. When the computers realise they will 
always need people to do most of the maintenance work and people 
realise that symbiosis with the silicon smart-alecks is a 
prerequisite for survival, things might actually settle down on 
this planet and the colonisation of the solar system can begin 
in earnest.

Regards

Mark Peaty  CDES

[EMAIL PROTECTED]

http://www.arach.net.au/~mpeaty/



Hal Finney wrote:
 Part of what I wanted to get at in my thought experiment is the
 bafflement and confusion an AI should feel when 

Re: How would a computer know if it were conscious?

2007-06-05 Thread Stathis Papaioannou
On 05/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Self-improvement requires more than just extra hardware.  It also
 requires the ability to integrate new knowledge with an existing
 knowledge base in order to create truly original (novel) knowledge.
 But this appears to be precisely the definition of reflective
 intelligence!  Thus, it seems that a system missing reflective
 intelligence simply cannot improve itself in an ordered way.  To
 improve, a current goal structure has to be 'extrapolated' into a new
 novel goal structure which nonetheless does not conflict with the
 spirit of the old goal structure.  But nothing but a *reflective*
 intelligence can possibly make an accurate assessment of whether a new
 goal structure is compatible with the old version!  This stems from
 the fact that comparison of goal structures requires a *subjective*
 value judgement and it appears that only a *sentient* system can make
 this judgement (since as far as we know, ethics/morality is not
 objective).  This proves that only a *sentient* system (a *reflective
 intelligence*) can possibly maintain a stable goal structure under
 recursive self-improvement.


Why would you need to change the goal structure  in order to improve
yourself? Evolution could be described as a perpetuation of the basic
program, "survive", and this has maintained its coherence as the top level
axiom of all biological systems over billions of years. Evolution thus seems
to easily, and without reflection, make sure that the goals of the new and
more complex system are consistent with the primary goal. It is perhaps only
humans who have been able to clearly see the primary goal for what it is,
but even this knowledge does not make it any easier to overthrow it, or even
to desire to overthrow it.

Incidentally, as regards our debate yesterday on psychopaths, there
 appears to be some basis for thinking that the psychopath *does*
 have a general inability to feel emotions.  On the wiki:

  http://en.wikipedia.org/wiki/Psychopath

  "Their emotions are thought to be superficial and shallow, if they
  exist at all."

  "It is thought that any emotions which the primary psychopath exhibits
  are the fruits of watching and mimicking other people's emotions."

  So the supposed emotional displays could be faked.  Thus it could well
  be the case that there is an inability to 'reflect on
  motivation' (to feel).


In my job mainly treating people with schizophrenia, I have worked with some
psychopaths, and I can assure you that they experience very strong emotions,
even if they tend to be negative ones such as rage. What they lack is the
ability to empathise with others, impinging on emotions such as guilt and
love, which they sometimes do learn to parrot when it is expedient. It is
sometimes said that the lack of these positive emotions causes them to seek
thrills in impulsive and harmful behaviour. A true lack of emotion is
sometimes seen in patients with so-called negative symptoms of
schizophrenia, who can actually remember what it was like when they were
well and can describe a diminished intensity of every feeling: sadness,
happiness, anger, surprise, aesthetic appreciation, regret, empathy. Unlike
the case with psychopathy, the uniform affective blunting of schizophrenia
is invariably associated with lack of motivation.



-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-05 Thread Stathis Papaioannou
On 05/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

The human brain doesn't function as a fully reflective system.  Too
 much is hard-wired and not accessible to conscious experience.  Our
 brains simply don't function as a properly integrated system.  Full
 reflection would enable the ability to reach into our underlying
 preferences and change them.


What would happen if you had the ability to edit your mind at will? It might
sound like a recipe for terminal drug addiction, because it would be
possible to give yourself pleasure or satisfaction without doing anything to
earn it. However, this need not  necessarily be the case, because you could
edit out your desire to choose this course of action if that's what you felt
like doing, or even create a desire to edit out the desire (a second level
desire). There is also the fact that you could as easily assign positive
feelings to some project you consider intrinsically worthwhile as to
idleness, so why choose idleness, or anything else you would feel guilty
about? Perhaps psychopaths would choose to remain psychopaths, but most
people would choose to strengthen what they consider ideal moral behaviour,
since it would be possible to get their guilty pleasures more easily.


-- 
Stathis Papaioannou




Re: How would a computer know if it were conscious?

2007-06-05 Thread Bruno Marchal


On 03-Jun-07, at 21:52, Hal Finney wrote:


 Part of what I wanted to get at in my thought experiment is the
 bafflement and confusion an AI should feel when exposed to human ideas
 about consciousness.  Various people here have proffered their own
 ideas, and we might assume that the AI would read these suggestions,
 along with many other ideas that contradict the ones offered here.
 It seems hard to escape the conclusion that the only logical response
 is for the AI to figuratively throw up its hands and say that it is
 impossible to know if it is conscious, because even humans cannot agree
 on what consciousness is.




Augustine said of (subjective) *time* that he knows perfectly well what it 
is, but that if you ask him to say what it is, then he admits being 
unable to say anything. I think that this applies to consciousness. 
We know what it is, although only in some personal and incommunicable 
way.
Now this happens to be true also for many mathematical concepts. 
Strictly speaking we don't know how to define the natural numbers, and 
we know today that indeed we cannot define them in a communicable way, 
that is, without assuming the listener already knows what they are.

So what can we do? We can do what mathematicians do all the time. We 
can abandon the very idea of *defining* what consciousness is, and try 
instead to focus on principles or statements about which we can agree 
that they apply to consciousness. Then we can search for (mathematical) 
objects obeying such or similar principles. This can be made easier 
by admitting some theory or realm for consciousness, like the idea that 
consciousness could apply to *some* machine or to some *computational 
events*, etc.

We could agree for example that:
1) each one of us knows what consciousness is, but nobody can prove 
he/she/it is conscious.
2) consciousness is related to an inner personal or self-referential 
modality,
etc.

This is how I proceed in "Conscience et Mécanisme".  ("conscience" is 
the French for consciousness; "conscience morale" is the French for the 
English "conscience".)






 In particular I don't think an AI could be expected to claim that it
 knows that it is conscious, that consciousness is a deep and intrinsic
 part of itself, that whatever else it might be mistaken about it could
 not be mistaken about being conscious.  I don't see any logical way it
 could reach this conclusion by studying the corpus of writings on the
 topic.  If anyone disagrees, I'd like to hear how it could happen.



As far as a machine is correct, when she introspects herself, she 
cannot fail to discover a gap between truth (p) and provability (Bp). The 
machine can discover correctly (but not necessarily in a completely 
communicable way) a gap between provability (which can potentially 
lead to falsities, despite correctness) and the incorrigible 
knowability or knowledgeability (Bp & p), and then the gap between 
those notions and observability (Bp & Dp) and sensibility (Bp & Dp & 
p). Even without using the conventional name of consciousness, 
machines can discover semantical fixpoints playing the role of 
non-expressible but true statements.
We can *already* talk with machines about those true unnameable things, 
as have done Tarski, Gödel, Löb, Solovay, Boolos, Goldblatt, etc.
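
For readers tracking the notation, the variants Bruno lists can be laid out
compactly. This is only a restatement of the formulas above, with Dp abbreviating
~B~p (consistency), as is usual in provability logic:

\[
\begin{array}{ll}
\text{provability:}   & Bp \\
\text{knowability:}   & Bp \land p \\
\text{observability:} & Bp \land Dp \\
\text{sensibility:}   & Bp \land Dp \land p
\end{array}
\]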





 And the corollary to this is that perhaps humans also cannot 
 legitimately
 make such claims, since logically their position is not so different
 from that of the AI.  In that case the seemingly axiomatic question of
 whether we are conscious may after all be something that we could be
 mistaken about.


This is an inference from "I cannot express p" to "I can express not 
p", or from ~Bp to B~p.  Many atheists reason like that about the 
concept of unnameable reality, but it is a logical error.
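
(An illustrative counterexample, not from Bruno's post: take p to be the machine's
own consistency statement Con.  For a consistent, correct machine, Gödel's second
incompleteness theorem gives ~B Con, yet B ~Con fails as well, since a correct
machine never proves the false statement ~Con.  So ~Bp does not entail B~p.)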
Even for someone who is not willing to take the comp hyp into 
consideration, it is a third-person communicable fact that 
self-observing machines can discover and talk about many non-3-provable 
and sometimes even non-3-definable true statements about themselves. Some 
true statements can only be interrogated.
Personally I don't think we can be *personally* mistaken about our own 
consciousness even if we can be mistaken about anything that 
consciousness could be about.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: How would a computer know if it were conscious?

2007-06-05 Thread Brent Meeker

Stathis Papaioannou wrote:
 
 
 On 05/06/07, [EMAIL PROTECTED] wrote:
 
 Self-improvement requires more than just extra hardware.  It also
 requires the ability to integrate new knowledge with an existing
  knowledge base in order to create truly original (novel) knowledge.
 But this appears to be precisely the definition of reflective
 intelligence!  Thus, it seems that a system missing reflective
 intelligence simply cannot improve itself in an ordered way.  To
 improve, a current goal structure has to be 'extrapolated' into a new
  novel goal structure which nonetheless does not conflict with the
 spirit of the old goal structure.  But nothing but a *reflective*
 intelligence can possibly make an accurate assessment of whether a new
 goal structure is compatible with the old version!  This stems from
 the fact that comparison of goal structures requires a *subjective*
 value judgement and it appears that only a *sentient* system can make
 this judgement (since as far as we know, ethics/morality is not
 objective).  This proves that only a *sentient* system (a *reflective
 intelligence*) can possibly maintain a stable goal structure under
 recursive self-improvement. 
 
 
 Why would you need to change the goal structure  in order to improve 
 yourself? 

Even more problematic: How would you know the change was an improvement?  An 
improvement relative to which goals, the old or the new?

Brent Meeker





Re: How would a computer know if it were conscious?

2007-06-05 Thread Tom Caylor

On Jun 4, 11:50 pm, Torgny Tholerus [EMAIL PROTECTED] wrote:
 Tom Caylor wrote:

   I think that IF a computer were conscious (I don't believe it is
   possible), then the way we could know it is conscious would not be by
   interviewing it with questions and looking for the right answers.
   We could know it is conscious if the computer, on its own, started
   asking US (or other computers) questions about what it was
   experiencing.  Perhaps it would say things like, "Sometimes I get
   this strange and wonderful feeling that I am special in some way.  I
   feel that what I am doing really is significant to the course of
   history, that I am in some story."  Or perhaps, "Sometimes I wish that
   I could find out whether what I am doing is somehow significant, that
   I am not just a duplicatable thing, and that what I am doing is not
   'meaningless'."

  public class ConsciousMachine {

      public static void main(String[] a) {
          // Print canned first-person claims about inner experience.
          System.out.println("Sometimes I get this strange and wonderful feeling");
          System.out.println("that I am 'special' in some way.");
          System.out.println("I feel that what I am doing really is significant");
          System.out.println("to the course of history, that I am in some story.");
          System.out.println("Sometimes I wish that I could find out whether what");
          System.out.println("I am doing is somehow significant, that I am not just");
          System.out.println("a duplicatable thing, and that what I am doing");
          System.out.println("is not 'meaningless'.");
      }
  }

  You can make more complicated programs, whose behaviour is not so obvious, by
  genetic programming.  But it will take a rather long time.  Nature
  had to work for over a billion years to make human beings.  But with
  genetic programming you will succeed after only a million
  years.  Then you will have a program that is just as conscious as you are.

 --
 Torgny Tholerus

You guys are hopeless. ;)

Tom





Re: [SPAM] Re: How would a computer know if it were conscious?

2007-06-05 Thread Mark Peaty

MG:
'... the generation of
 feelings which represent accurate tokens about motivational states
 automatically leads to ethical behaviour.'

I have my doubts about this.
I think it is safer to say that reflective intelligence and the 
ability to accurately perceive and identify with the emotions of 
others are prerequisites for ethical behaviour. Truly ethical 
behaviour requires a choice be made by the person making the 
decision and acting upon it. Ethical behaviour is never truly 
'automatic'. The inclination towards making ethical decisions 
rather than simply ignoring the potential for harm inherent in 
all our actions can become a habit; by dint of constantly 
considering whether what we do is right and wrong [which itself 
entails a decision each time], we condition ourselves to 
approach all situations from this angle. Making the decision has 
to be a conscious effort though. Anything else is automatism: 
correct but unconscious programmed responses which probably have 
good outcomes.

 From my [virtual] soap-box I like to point out that compassion, 
democracy, ethics and scientific method [which I hold to be 
prerequisites for the survival of civilisation] all require 
conscious decision making. You can't really do any of them 
automatically, but constant consideration and practice in each 
type of situation increases the likelihood of making the best 
decision and at the right time.

With regard to psychopaths, my understanding is that the key 
problem is complete lack of empathy. This means they can know 
*about* the sufferings of others as an intellectual exercise but 
they can never experience the suffering of others; they cannot 
identify *with* that suffering. It seems to me this means that 
psychopaths can never experience solidarity or true rapport with 
others.

Regards

Mark Peaty  CDES

[EMAIL PROTECTED]

http://www.arach.net.au/~mpeaty/


[EMAIL PROTECTED] wrote:
 
 
 On Jun 3, 9:20 pm, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 On 03/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 The third type of conscious mentioned above is synonymous with

 'reflective intelligence'.  That is, any system successfully engaged
 in reflective decision theory would automatically be conscious.
 Incidentally, such a system would also be 'friendly' (ethical)
 automatically.  The ability to reason effectively about one's own
 cognitive processes would certainly enable the ability to elaborate
 precise definitions of consciousness and determine that the system was
 indeed conforming to the aforementioned definitions.
 How do you derive (a) ethics and (b) human-friendly ethics from reflective
 intelligence?  I don't see why an AI should decide to destroy the world,
 save the world, or do anything at all to the world, unless it started off
 with axioms and goals which pushed it in a particular direction.

 --
 Stathis Papaioannou
 
 When reflective intelligence is applied to cognitive systems which
 reason about teleological concepts (which include values, motivations
 etc) the result is conscious 'feelings'.  Reflective intelligence,
 recall, is the ability to correctly reason about cognitive systems.
 When applied to cognitive systems reasoning about teleological
 concepts this means the ability to correctly determine the
 motivational 'states' of self and others - as mentioned - doing this
 rapidly and accuracy generates 'feelings'.  Since, as has been known
 since Hume, feelings are what ground ethics, the generation of
 feelings which represent accurate tokens about motivational
 automatically leads to ethical behaviour.
 
 Bad behaviour in humans is due to a deficit in reflective
 intelligence.  It is known, for instance, that psychopaths have great
 difficulty perceiving fear and sadness and negative motivational
 states in general.  Correct representation of motivational states is
 correlated with ethical behaviour.  Thus it appears that reflective
 intelligence is automatically correlated with ethical behaviour.  Bear
 in mind, as I mentioned that: (1) There are in fact three kinds of
 general intelligence, and only one of them ('reflective intelligence')
 is correlated with ethics.  The other two are not.  A deficit in
 reflective intelligence does not affect the other two types of general
 intelligence (which is why for instance psychopaths could still score
 highly in IQ tests).  And (2) Reflective intelligence in human beings
 is quite weak.  This is the reason why intelligence does not appear to
 be much correlated with ethics in humans.  But this fact in no way
 refutes the idea that a system with full and strong reflective
 intelligence would automatically be ethical.
 
 
  
 
 


Re: How would a computer know if it were conscious?

2007-06-05 Thread Tom Caylor

On Jun 5, 7:12 am, Bruno Marchal [EMAIL PROTECTED] wrote:
 On 03-Jun-07, at 21:52, Hal Finney wrote:



  Part of what I wanted to get at in my thought experiment is the
  bafflement and confusion an AI should feel when exposed to human ideas
  about consciousness.  Various people here have proffered their own
  ideas, and we might assume that the AI would read these suggestions,
  along with many other ideas that contradict the ones offered here.
  It seems hard to escape the conclusion that the only logical response
  is for the AI to figuratively throw up its hands and say that it is
  impossible to know if it is conscious, because even humans cannot agree
  on what consciousness is.

 Augustine said of (subjective) *time* that he knows perfectly well what it
 is, but that if you ask him to say what it is, then he admits being
 unable to say anything. I think that this applies to consciousness.
 We know what it is, although only in some personal and incommunicable
 way.
 Now this happens to be true also for many mathematical concepts.
 Strictly speaking we don't know how to define the natural numbers, and
 we know today that indeed we cannot define them in a communicable way,
 that is, without assuming the listener already knows what they are.


I fully agree.  By the way, regarding time, I've wanted to post
something in the past regarding the ancient Hebrew concept of time
which is dependent on persons (captured by the ancient Greek word
kairos, as opposed to the communicable chronos), but that's another
topic.

 So what can we do? We can do what mathematicians do all the time. We
 can abandon the very idea of *defining* what consciousness is, and try
 instead to focus on principles or statements about which we can agree
 that they apply to consciousness. Then we can search for (mathematical)
 objects obeying such or similar principles. This can be made easier
 by admitting some theory or realm for consciousness, like the idea that
 consciousness could apply to *some* machine or to some *computational
 events*, etc.


Actually, this approach is the same as in searching/discovering God.
I think that it is the same for any fundamental/ultimate truth.  This
process of *recognition* is what happens when we would recognize that
a computer (or human) has consciousness by what it is saying.  It is
not a 100% mathematical proof, by logical inference (that would not be
truth, but only consistency).  It is a recognition of the kind of real
truth that we believe is there and for which we are searching on this
List.

Tom

 We could agree for example that:
 1) each one of us knows what consciousness is, but nobody can prove
 he/she/it is conscious.
 2) consciousness is related to an inner personal or self-referential
 modality,
 etc.

 This is how I proceed in "Conscience et Mécanisme".  ("conscience" is
 the French for consciousness; "conscience morale" is the French for the
 English "conscience".)



  In particular I don't think an AI could be expected to claim that it
  knows that it is conscious, that consciousness is a deep and intrinsic
  part of itself, that whatever else it might be mistaken about it could
  not be mistaken about being conscious.  I don't see any logical way it
  could reach this conclusion by studying the corpus of writings on the
  topic.  If anyone disagrees, I'd like to hear how it could happen.

 As far as a machine is correct, when she introspects herself, she
 cannot fail to discover a gap between truth (p) and provability (Bp). The
 machine can discover correctly (but not necessarily in a completely
 communicable way) a gap between provability (which can potentially
 lead to falsities, despite correctness) and the incorrigible
 knowability or knowledgeability (Bp & p), and then the gap between
 those notions and observability (Bp & Dp) and sensibility (Bp & Dp &
 p). Even without using the conventional name of consciousness,
 machines can discover semantical fixpoints playing the role of
 non-expressible but true statements.
 We can *already* talk with machines about those true unnameable things,
 as have done Tarski, Gödel, Löb, Solovay, Boolos, Goldblatt, etc.



  And the corollary to this is that perhaps humans also cannot
  legitimately
  make such claims, since logically their position is not so different
  from that of the AI.  In that case the seemingly axiomatic question of
  whether we are conscious may after all be something that we could be
  mistaken about.

 This is an inference from "I cannot express p" to "I can express not
 p", or from ~Bp to B~p.  Many atheists reason like that about the
 concept of unnameable reality, but it is a logical error.
 Even for someone who is not willing to take the comp hyp into
 consideration, it is a third-person communicable fact that
 self-observing machines can discover and talk about many non-3-provable
 and sometimes even non-3-definable true statements about themselves. Some
 true statements can only be interrogated.
 Personally I don't think we can be 

Re: How would a computer know if it were conscious?

2007-06-05 Thread marc . geddes



On Jun 5, 10:20 pm, Stathis Papaioannou [EMAIL PROTECTED] wrote:


 Why would you need to change the goal structure  in order to improve
 yourself?

Improving yourself requires the ability to make more effective
decisions (i.e. take decisions which move you toward goals more
efficiently).  This at least involves the elaboration (or extension,
or more accurate definition) of goals, even with a fixed top-level
structure.

 Evolution could be described as a perpetuation of the basic
 program, "survive", and this has maintained its coherence as the top level
 axiom of all biological systems over billions of years. Evolution thus seems
 to easily, and without reflection, make sure that the goals of the new and
 more complex system are consistent with the primary goal. It is perhaps only
 humans who have been able to clearly see the primary goal for what it is,
 but even this knowledge does not make it any easier to overthrow it, or even
 to desire to overthrow it.


Evolution does not have a 'top level goal'.  Unlike a reflective
intelligence, there is no centralized area in the bio-sphere enforcing
 a unified goal structure on the system as a whole.  Change is local
- the parts of the system (the bio-sphere) can only react to other
parts of the system in their local area.  Furthermore, the system as a
whole is *not* growing more complex, only the maximum complexity
represented in some local area is.  People constantly point to
'Evolution' as a good example of a non-conscious intelligence but it's
important to emphasize that it's an 'intelligence' which is severely
limited.



