Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-18 Thread Harry Chesley
Trent Waddington wrote:
 As I believe the "is that consciousness?" debate could go on forever,
 I think I should make an effort here to save this thread.

 Setting aside the objections of vegetarians and animal lovers, many
 hard nosed scientists decided long ago that jamming things into the
 brains of monkeys and the like is justifiable treatment of creatures
 suspected by many to have similar experiences to humans.

 If you're in agreement with these practices then I think you should
 be in agreement with any and all experimentation on simulated
 networks of complexity up to and including these organisms.

Yes, my intent in starting this thread was not to define consciousness,
but rather to ask how we make ethical choices with regard to AGI
before we are able to define it.

I agree with your points above. However, I am not entirely sanguine
about animal experiments. I accept that they're sometimes OK, or at
least the lesser of two evils, but I would prefer to avoid even that
level of compromise when experimenting on AGIs. And, given that we have
the ability to design the AGI experimental subject -- as opposed to
being stuck with a pre-designed animal -- it /should/ be possible.





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread Jiri Jelinek
On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED] wrote:
there are many computer systems now, domain specific intelligent ones where their life is more important than mine. Some would say that the battle is already lost.

For now, it's not really your life (or interest) vs the system's life
(or interest). It's rather your life (or interest) vs lives (or
interests) of people the system protects/supports. Our machines still
work for humans. At least it still seems to be the case ;-)). If we
are stupid enough to develop very powerful machines without equally
powerful safety controls then we (just like many other species) are
due for extinction for adaptability limitations.

Regards,
Jiri Jelinek




RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread John G. Rose
 From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
 On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED]
 wrote:
 there are many computer systems now, domain specific intelligent ones
 where their life is more
 important than mine. Some would say that the battle is already lost.
 
 For now, it's not really your life (or interest) vs the system's life
 (or interest). It's rather your life (or interest) vs lives (or
 interests) of people the system protects/supports. Our machines still
 work for humans. At least it still seems to be the case ;-)). If we
 are stupid enough to develop very powerful machines without equally
 powerful safety controls then we (just like many other species) are
 due for extinction for adaptability limitations.
 

It is where the interests of others are valued more highly than an individual's
life. Ancient Rome valued the entertainment of the masses more highly than the
lives of those being devoured by lions in the arena. I would say that the
interests of computers and machines today often stand in a similar relation to
individual human interests.

Our herd mentality makes it easy for rights to be taken away while, at the same
time, the loss is accepted and defended as necessary and an improvement. Example:
anonymity and privacy are effectively gone. It sounds paranoid, but many
agree on this.

It is an icky subject, easy to ignore, and perhaps something that hinders
technological progress.

John





Why consciousness is hard to define (was Re: [agi] Ethics of computer-based cognitive experimentation)

2008-11-14 Thread Matt Mahoney
--- On Fri, 11/14/08, Colin Hales [EMAIL PROTECTED] wrote:
Try running yourself with empirical results instead of metabelief
(belief about belief). You'll get someplace, i.e. you'll resolve the
inconsistencies. When inconsistencies are testably absent, no
matter how weird the answer, it will deliver maximally informed
choices. Not facts. Facts will only ever appear differently after
choices are made. This too is a fact...which I have chosen to make
choices about. :-) If you fail to resolve your inconsistency then you
are guaranteeing that your choices are minimally informed.

Fine. By your definition of consciousness, I must be conscious because I can 
see and because I can apply the scientific method, which you didn't precisely 
define, but I assume that means I can do experiments and learn from them.

But by your definition, a simple modification to autobliss ( 
http://www.mattmahoney.net/autobliss.txt ) would make it conscious. It already 
applies the scientific method. It outputs 3 bits (2 randomly picked inputs to 
an unknown logic gate and a proposed output) and learns the logic function. The 
missing component is vision. But suppose I replace the logic function (a 4 bit 
value specified by the teacher) with a black box with 3 switches and a light 
bulb to indicate whether the proposed output (one of the switches) is right or 
wrong. You also didn't precisely define what constitutes vision, so I assume a 
1 pixel system qualifies.
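
For concreteness, here is a minimal Python sketch (my illustration, not the actual autobliss program) of the kind of trial-and-error learner being described: the black box holds an unknown 2-input gate, only signals right or wrong, and the learner converges on the gate's truth table.

import random

# Hedged stand-in for the setup described above, not the autobliss program
# itself. The "black box" holds an unknown 2-input logic gate (a 4-bit truth
# table) and, like the light bulb in the thought experiment, only reports
# whether a proposed output is right or wrong.
class BlackBox:
    def __init__(self, truth_table):
        self.truth_table = truth_table       # e.g. [0, 0, 0, 1] encodes AND

    def check(self, a, b, proposed):
        return proposed == self.truth_table[2 * a + b]

def learn_gate(box, trials=200):
    # Running guess for the gate's output on each of the four input pairs.
    guess = [random.randint(0, 1) for _ in range(4)]
    for _ in range(trials):
        a, b = random.randint(0, 1), random.randint(0, 1)
        proposed = guess[2 * a + b]
        if not box.check(a, b, proposed):    # the "wrong" signal
            guess[2 * a + b] = 1 - proposed  # flip the guess for that input
    return guess

if __name__ == "__main__":
    print(learn_gate(BlackBox([0, 0, 0, 1])))  # converges to [0, 0, 0, 1]

The point of the sketch is only that such a learner already satisfies "applies the scientific method" in the minimal sense used above, which is why the definition is argued to be too weak.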

Of course I don't expect anyone to precisely define consciousness (as a 
property of Turing machines). There is no algorithmically simple definition 
that agrees with intuition, i.e. that living humans and nothing else are 
conscious. This goes beyond Rice's theorem, which would make any nontrivial 
definition not computable. Even allowing non computable definitions (the output 
can be yes, no, or maybe), you still have the problem that any 
specification with algorithmic complexity K can be expressed as a program with 
complexity K. Given any simple specification (meaning K is small) I can write a 
simple program that satisfies it (my program has complexity at most K). 
However, for humans, K is about 10^9 bits. That means any specification smaller 
than a 1 GB file or 1000 books would allow a counter intuitive example of a 
simple program that meets your test for consciousness.

Try it if you don't believe me. Give me a simple definition of consciousness 
without pointing to a human (like the Turing test does). I am looking for a 
program is_conscious(x) shorter than 10^9 bits that inputs a Turing machine x 
and outputs yes, no, or maybe.
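
To make the requested interface concrete, here is a hypothetical stub (the names and types are mine, not Matt's) of the decision procedure being asked for; the challenge is to supply a body far shorter than 10^9 bits that still matches intuition.

from enum import Enum

class Verdict(Enum):
    YES = "yes"
    NO = "no"
    MAYBE = "maybe"

def is_conscious(turing_machine: str) -> Verdict:
    """Hypothetical stub of the requested test: take a description of a
    Turing machine and return yes, no, or maybe. No body is offered here;
    the argument above is that any body much shorter than ~10^9 bits will
    admit counter-intuitive positives."""
    raise NotImplementedError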

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Matt Mahoney
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  If you don't define consciousness in terms of an objective test, then
  you can say anything you want about it.
 
 We don't entirely disagree about that. An objective test is absolutely
 crucial. I believe where we disagree is that I expect there to be such a
 test one day, while you claim there can never be.

It depends on the definition. The problem with the current definition (what 
most people think it means) is that it leads to logical inconsistencies. I 
believe I have a consciousness, a little person inside my head that experiences 
things and makes decisions. I also believe that my belief is false, that my 
brain would do exactly the same thing without this little person. I know these 
two views are inconsistent. I just accept that they are and leave it at that.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Colin Hales

Dear Matt,
Try running yourself with empirical results instead of metabelief 
(belief about belief). You'll get someplace, i.e. you'll resolve the 
inconsistencies. When inconsistencies are *testably* absent, no matter 
how weird the answer, it will deliver maximally informed choices. Not 
facts. Facts will only ever appear differently after choices are made. 
This too is a fact...which I have chosen to make choices about. :-) If 
you fail to resolve your inconsistency then you are guaranteeing that 
your choices are minimally informed. Tricky business, science: an 
intrinsically dynamic process in which choice is the driver (epistemic 
state transition) and the facts (the epistemic state) are forever 
transitory, never certain. You can only make so-called facts certain by 
failing to choose. Then they lodge in your brain (and nowhere else) like 
dogma-crud between your teeth, and the rot sets in. The plus side - you 
get to be 100% right. Personally I'd rather get real AGI built and be 
testably wrong a million times along the way.

cheers,
colin hales


Matt Mahoney wrote:

--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote:

  

Matt Mahoney wrote:


If you don't define consciousness in terms of an objective test, then
you can say anything you want about it.
  

We don't entirely disagree about that. An objective test is absolutely
crucial. I believe where we disagree is that I expect there to be such a
test one day, while you claim there can never be.



It depends on the definition. The problem with the current definition (what 
most people think it means) is that it leads to logical inconsistencies. I 
believe I have a consciousness, a little person inside my head that experiences 
things and makes decisions. I also believe that my belief is false, that my 
brain would do exactly the same thing without this little person. I know these 
two views are inconsistent. I just accept that they are and leave it at that.

-- Matt Mahoney, [EMAIL PROTECTED]



  






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread Trent Waddington
As I believe the "is that consciousness?" debate could go on forever, I
think I should make an effort here to save this thread.

Setting aside the objections of vegetarians and animal lovers, many
hard nosed scientists decided long ago that jamming things into the
brains of monkeys and the like is justifiable treatment of creatures
suspected by many to have similar experiences to humans.

If you're in agreement with these practices then I think you should be
in agreement with any and all experimentation on simulated networks of
complexity up to and including these organisms.

I, personally, say that these experiments are just fine because lab
animals are property and I have more respect for property rights than
I do for save the animals causes.  Even if you doubt that lab animals
are property, I don't expect that your doubts will extend to computer
hardware, whether or not there is a sophisticated simulation running
on it.

Trent




RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
 From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
 On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED]
 wrote:
  is it really necessary for an AGI to be conscious?
 
 Depends on how you define it. If you think it's about feelings/qualia
 then - no - you don't need that [potentially dangerous] crap + we
 don't know how to implement it anyway.
 If you view it as high-level built-in response mechanism (which is
 supported by feelings in our brain but can/should be done differently
 in AGI) then yes - you practically (but not necessarily theoretically)
 need something like that for performance. If you are concerned about
 self-awareness/consciousness then note that AGI can demonstrate
 general problem solving without knowing anything about itself (and
 about many other particular concepts). The AGI just should be able to
 learn new concepts (including self), though I think some built-in
 support makes sense in this particular case. BTW for the purpose of my
 AGI R&D I defined self-awareness as a use of an internal
 representation (IR) of self, where the IR is linked to real features
 of the system. Nothing terribly complicated or mysterious about that.
 

Yes, I agree that problem solving can be performed without self-awareness
and I believe that actions involving rich intelligence need not require
consciousness. But yes it all depends on how you define consciousness. It
can be argued that a rock is conscious.

 Doesn't that complicate things?
 
 it does
 
  Shouldn't the machines/computers be slaves to man?
 
 They should and it shouldn't be viewed negatively. It's nothing more
 than a smart tool. Changing that would be a big mistake IMO.

Yup, when you need to scuttle the spaceship and HAL is having issues with
that, um, it would be better for HAL to understand that he is expendable.
Though there are AGI applications that would involve humans building close
interpersonal relationships for various reasons. I mean, having that AGI
psychotherapist could be useful :) And for advanced post-Singularity AGI
applications, yes, I suppose there will be machine consciousness and consciousness
uploading and mixing; in the meantime, though, for pre-Singularity design
and study I don't see machine consciousness as required - human-equivalent, that
is. Though I do have a fuzzy view of how I would design a consciousness.

 
 Or will they be equal/superior.
 
 Rocks are superior to us in being hard. Cars are superior to us when
 it comes to running fast. AGIs will be superior to us when it comes to
 problem solving.
 So what? Equal/superior in whatever - who cares as long as we can
 progress & safely enjoy life - which is what our tools (including AGI)
 are being designed to help us with.
 

Superior meaning: if it came down to me or AGI-X because of limited resources,
does AGI-X get to live while I am expendable? Unfortunately there are many computer
systems now, domain-specific intelligent ones, whose life is more
important than mine. Some would say that the battle is already lost.

John






RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 I thought what he said was a good description more or less. Out of 600
 million years there may be only a fraction of that which is an improvement,
 but it's still there.
 
  How do you know, beyond a reasonable doubt, that any other being is
  conscious?
 
 The problem is, you have to nail down exactly what you *mean* by the
 word conscious before you start asking questions or making
 statements.
   Once you start reading about and thinking about all the attempts that
 have been made to get specific about it, some interesting new answers
 to
 simple questions like this begin to emerge.
 
 What I am fighting here is a tendency for some people to use
 wave-of-the-hand definitions that only capture a fraction of a percent
 of the real meaning of the term.  And sometimes not even that.
 


I see consciousness as a handle to a system. Consciousness both is and is not a
unit. Being a system, it has components. And the word consciousness may be
semi-inclusive or over-inclusive. Consciousness can also be described as
an ether-like thing, but consciousness as a system is more applicable
here, I think.

I would be interested in how one goes about proving that another being is
conscious. I can imagine definitions of consciousness that would prove that.
Somehow though the mystery is worthy of perpetuation. 



 One of the main conclusions of the paper I am writing now is that you
 will (almost certainly) have no choice in the matter, because a
 sufficiently powerful type of AGI will be conscious whether you like it
 or not.
 

Um, what does sufficiently mean here? Consciousness may require some
intelligence, but I think that intelligence need only possess absolutely
minimalistic consciousness.

Definitions, definitions. Has anyone come up with a
consciousness system described quantitatively instead of just fuzzy word
descriptions?


 The question of slavery is completely orthogonal.

Yes and no. It's related.

 
  I just want things to be taken care of and no issues. Consciousness
 brings
  issues. Intelligence and consciousness are separate.
 
 
 Back to my first paragraph above:  until you have thought carefully
 about what you mean by consciousness, and have figured out where it
 comes from, you can't really make a definitive statement like that,
 surely?
 

I have thought deeply about it. They are neither mutually exclusive nor mostly the
same. With both I assume calculations involving resource processing and
space-time dynamics. Consciousness needs to be broken up into different
kinds of consciousness with interrelatedness between them. Intelligence has less
complexity than consciousness; it is a semi-system. Consciousness can be
evoked using intelligence. Intelligence can be spurred with consciousness.
They interoperate, but intelligence can be distilled out of an existing
conscio-intelligence. And they can facilitate each other yet hinder each
other.

We'd really have to get into the math to get concrete about it.

 And besides, the wanting to have things taken care of bit is a separate
 issue.  That is not a problem, either way.

Heh.

John





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
  
  Your 'belief' explanation is a cop-out because it does not address
  any of the issues that need to be addressed for something to count
  as a definition or an explanation of the facts that need to be
  explained.
  
  As I explained, animals that have no concept of death have
  nevertheless evolved to fear most of the things that can kill them.
  Humans have learned to associate these things with death, and
  invented the concept of consciousness as the large set of features
  which distinguishes living humans from dead humans. Thus, humans fear
  the loss or destruction of consciousness, which is equivalent to
  death.
  
  Consciousness, free will, qualia, and good and bad are universal
  human beliefs. We should not confuse them with truth by asking the
  wrong questions. Thus, Turing sidestepped the question of can
  machines think? by asking instead can machines appear to think?
  Since we can't (by definition) distinguish doing something from
  appearing to do something, it makes no sense for us to make this
  distinction.
 
 The above two paragraphs STILL do not address any of the
 issues that need to be addressed for something to count as a
 definition, or an explanation of the facts that need to be
 explained.

And you STILL have not defined what consciousness is.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Tue, 11/11/08, Colin Hales [EMAIL PROTECTED] wrote:

 I'm inclined to agree - this will be an issue in the
 future... if you have a robot helper and someone comes by
 and beats it to death in front of your kids, who
 have some kind of attachment to it...a relationship... then 
 crime (i) may be said to be the psychological
 damage to the children. Crime (ii) is then the murder and
 whatever one knows of suffering inflicted on the robot
 helper. Ethicists are gonna have all manner of novelty to
 play with.

Crime (i) is like when a child's favorite puppy is killed in front of them. Yet 
children that grow up on farms or around hunting regularly see animals killed 
and make the necessary emotional adjustment of not attributing consciousness to 
the victim. An important component of this adjustment is to not give the victim 
a name. In some African cultures with a high infant mortality rate, it is 
customary not to name babies until their first birthday.

One may wonder if people would develop emotional attachments to machines, like 
that of the fictional Will Robinson to the robot on Lost in Space, or the 
actual but weaker attachment of subjects to ELIZA. It is certainly possible. 
But history suggests we can make the reverse detachment no matter how closely 
the victims resemble the aggressors. Examples include slavery, the Holocaust, 
Pol Pot, and genocides in Rwanda, Sudan, and eastern Congo. The basic traits 
responsible for this behavior are in all of us. 
http://en.wikipedia.org/wiki/Stanford_prison_experiment

Unlike crime (i) which can be experimentally measured, crime (ii) is a matter 
of opinion.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
This thread has gone back and forth several times concerning the reality 
of consciousness. So at the risk of extending it further unnecessarily, 
let me give my view, which seems self-evident to me, but I'm sure isn't 
to others (meaning they may reasonably disagree with me, not that 
they're idiots (though I'm open to that possibility too)).


1) I'm talking about the hard question of consciousness.

2) It is real, as it clearly influences our thoughts. On the other hand, 
though it feels subjectively like it is qualitatively different from 
other aspects of the world, it probably isn't (but I'm open to being 
wrong here).


3) We cannot currently define or measure it, but some day we will.

4) Until that day comes, it's really hard to have a non-trivial 
discussion of it, and too easy to fly off into wild theories concerning it.


An analogy: How do you know that humans have blood flowing through their 
veins? Looking at them, you can't tell. Dissecting them after death, you 
can't tell -- they have blood, but it's not moving. Cutting them while 
alive produces spurts of blood, but that could be just because the body 
is generally pressurized, not because there's any on-going flow through 
the veins. It requires observing the internals of the body while alive 
to determine that blood actually flows all the time. And it also helps a 
lot to have a model of the circulatory system that includes the heart as 
a pump, etc.


With consciousness, we're at the pre-scientific stage, because we know 
so little about cognition that we're not yet able to open it up and 
observe it as it operates. This will change, hopefully in my lifetime.






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

John LaMuth wrote:

Reality check ***

Consciousness is an emergent spectrum of subjectivity spanning 600 mill.
years of evolution involving mega-trillions of competing organisms, probably
selecting for obscure quantum effects/efficiencies.

Our puny engineering/coding efforts could never approach this - not even
in a million years.

An outwardly pragmatic language simulation, however, is very do-able.

John LaMuth

It is not.

And we can.



I thought what he said was a good description more or less. Out of 600
million years there may be only a fraction of that which is an improvement,
but it's still there.

How do you know, beyond a reasonable doubt, that any other being is
conscious? 


The problem is, you have to nail down exactly what you *mean* by the 
word conscious before you start asking questions or making statements. 
 Once you start reading about and thinking about all the attempts that 
have been made to get specific about it, some interesting new answers to 
simple questions like this begin to emerge.


What I am fighting here is a tendency for some people to use 
wave-of-the-hand definitions that only capture a fraction of a percent 
of the real meaning of the term.  And sometimes not even that.




At some point you have to trust that others are conscious; within the same
species, you bring them into your recursive loop of the consciousness
component mix.

A primary component of consciousness is a self-definition. Conscious
experience is unique to the possessor. It is more than a belief that the
possessor herself is conscious, but others who appear conscious may be just
that: appearing to be conscious. Though at some point there is enough
feedback between individuals and/or a group to share the experience of
consciousness.

Still though, is it really necessary for an AGI to be conscious? Except for
delivering warm fuzzies to the creators? Doesn't that complicate things?
Shouldn't the machines/computers be slaves to man? Or will they be
equal/superior. It's a dog-eat-dog world out there.


One of the main conclusions of the paper I am writing now is that you 
will (almost certainly) have no choice in the matter, because a 
sufficiently powerful type of AGI will be conscious whether you like it 
or not.


The question of slavery is completely orthogonal.




I just want things to be taken care of and no issues. Consciousness brings
issues. Intelligence and consciousness are separate.



Back to my first paragraph above:  until you have thought carefully 
about what you mean by consciousness, and have figured out where it 
comes from, you can't really make a definitive statement like that, surely?


And besides, the wanting to have things taken care of bit is a separate 
issue.  That is not a problem, either way.



Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Your 'belief' explanation is a cop-out because it does not address
any of the issues that need to be addressed for something to count
as a definition or an explanation of the facts that need to be
explained.

As I explained, animals that have no concept of death have
nevertheless evolved to fear most of the things that can kill them.
Humans have learned to associate these things with death, and
invented the concept of consciousness as the large set of features
which distinguishes living humans from dead humans. Thus, humans fear
the loss or destruction of consciousness, which is equivalent to
death.

Consciousness, free will, qualia, and good and bad are universal
human beliefs. We should not confuse them with truth by asking the
wrong questions. Thus, Turing sidestepped the question of can
machines think? by asking instead can machines appear to think?
Since we can't (by definition) distinguish doing something from
appearing to do something, it makes no sense for us to make this
distinction.

The above two paragraphs STILL do not address any of the
issues that need to be addressed for something to count as a
definition, or an explanation of the facts that need to be
explained.


And you STILL have not defined what consciousness is.


Logically, I don't need to define something to point out that your 
definition fails to address any of the issues that I can read about in 
e.g. Chalmers' book on the subject.  ;-)





Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

Jiri Jelinek wrote:

On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote:

is it really necessary for an AGI to be conscious?


Depends on how you define it.


Hmm... interesting angle.  Everything you say from this point on 
seems to be predicated on the idea that a person can *choose* to define 
it any way they want, and then run with their definition.


I notice that this is not possible with any other scientific concept - 
we don't just define an electron as "Your Plastic Pal Who's Fun To Be 
With" and then start drawing conclusions.


The same is true of consciousness.



Richard Loosemore







If you think it's about feelings/qualia
then - no - you don't need that [potentially dangerous] crap + we
don't know how to implement it anyway.
If you view it as high-level built-in response mechanism (which is
supported by feelings in our brain but can/should be done differently
in AGI) then yes - you practically (but not necessarily theoretically)
need something like that for performance. If you are concerned about
self-awareness/consciousness then note that AGI can demonstrate
general problem solving without knowing anything about itself (and
about many other particular concepts). The AGI just should be able to
learn new concepts (including self), though I think some built-in
support makes sense in this particular case. BTW for the purpose of my
AGI R&D I defined self-awareness as a use of an internal
representation (IR) of self, where the IR is linked to real features
of the system. Nothing terribly complicated or mysterious about that.


Doesn't that complicate things?


it does


Shouldn't the machines/computers be slaves to man?


They should and it shouldn't be viewed negatively. It's nothing more
than a smart tool. Changing that would be a big mistake IMO.


Or will they be equal/superior.


Rocks are superior to us in being hard. Cars are superior to us when
it comes to running fast. AGIs will be superior to us when it comes to
problem solving.
So what? Equal/superior in whatever - who cares as long as we can
progress & safely enjoy life - which is what our tools (including AGI)
are being designed to help us with.

Regards,
Jiri Jelinek










Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Jiri Jelinek
Richard,

Everything you say from this point on seems to be predicated on the idea that 
a person can *choose* to define it any way they want

There are some good-to-stick-with rules for definitions
http://en.wikipedia.org/wiki/Definition#Rules_for_definition_by_genus_and_differentia
but (even though it's not desirable) in some cases it's IMO ok for
researchers to use a bit different definitions. If you can give us the
*ultimate* definition of consciousness then I would certainly be
interested. I promise I'll not ask for the ultimate cross-domain
definition of every single word used in that definition ;-)

Regards,
Jiri

On Wed, Nov 12, 2008 at 12:16 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Jiri Jelinek wrote:

 On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED]
 wrote:

 is it really necessary for an AGI to be conscious?

 Depends on how you define it.

 Hmm... interesting angle.  Everything you say from this point on seems to
 be predicated on the idea that a person can *choose* to define it any way
 they want, and then run with their definition.

 I notice that this is not possible with any other scientific concept - we
 don't just define an electron as Your Plastic Pal Who's Fun To Be With and
 then start drawing conclusions.

 The same is true of consciousness.



 Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Richard Loosemore

Jiri Jelinek wrote:

Richard,


Everything you say from this point on seems to be predicated on the idea that a 
person can *choose* to define it any way they want


There are some good-to-stick-with rules for definitions
http://en.wikipedia.org/wiki/Definition#Rules_for_definition_by_genus_and_differentia
but (even though it's not desirable) in some cases it's IMO ok for
researchers to use a bit different definitions. If you can give us the
*ultimate* definition of consciousness then I would certainly be
interested. I promise I'll not ask for the ultimate cross-domain
definition of every single word used in that definition ;-)


Hey, no problem, but I'm now embarrassed and in an awkward position, 
because I am literally trying to do that.  I am trying to sort the 
problem out once and for all.  I am finishing it for submission to 
AGI-09, so it will be done, ready or not, by the end of today.


This is something I started as a student essay in 1986, but I have been 
trying to nail down a testable prediction that can be applied today, 
rather than in 20 years time.  I do have testable predictions, but not 
ones that can be tested today, alas.


As for the question about definitions, sure, it is true that the rules 
are not cut in stone for how to do it.  It's just that consciousness is 
a rat's nest of conflicting definitions ...



Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote:

 1) I'm talking about the hard question of
 consciousness.
 
 2) It is real, as it clearly influences our thoughts. On
 the other hand, though it feels subjectively like it is
 qualitatively different from other aspects of the world, it
 probably isn't (but I'm open to being wrong here).

The correct statement is that you believe it is real. Everybody does. Those who 
didn't, did not pass on their DNA.

 3) We cannot currently define or measure it, but some day
 we will.

You can define it any time you want, or use the existing common definition. The 
real problem is that the existing definitions lead to absurd conclusions, like 
Chalmers' fading qualia argument. To avoid logical inconsistencies, you 
either have to accept that machines that pass the Turing test have experience 
or qualia (because there is no test to detect qualia), or that qualia do not 
exist. The latter would be the logical conclusion, except that it conflicts 
with a belief that is hard-coded into all human brains.

 4) Until that day comes, it's really hard to have a
 non-trivial discussion of it, and too easy to fly off into
 wild theories concerning it.
 
 An analogy: How do you know that humans have blood flowing
 through their veins? Looking at them, you can't tell.
 Dissecting them after death, you can't tell -- they have
 blood, but it's not moving. Cutting them while alive
 produces spurts of blood, but that could be just because the
 body is generally pressurized, not because there's any
 on-going flow through the veins. It requires observing the
 internals of the body while alive to determine that blood
 actually flows all the time. And it also helps a lot to have
 a model of the circulatory system that includes the heart as
 a pump, etc.

Blood flow can be directly observed, for example, by x-rays during an 
angioplasty. But that isn't the point. Even without direct observation, blood 
flow is supported by a lot of indirect evidence, for example, when you inject a 
drug into a vein it quickly spreads to other parts of the body. Even theories 
for which evidence is harder to observe, for example, the existence of 
fractional electric charges in quarks, are accepted because the theory makes 
predictions that can be tested. But there are absolutely no testable 
predictions that can be made from a theory of consciousness.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread John LaMuth


- Original Message - 
From: John G. Rose [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 11, 2008 11:41 PM
Subject: RE: [agi] Ethics of computer-based cognitive experimentation




I thought what he said was a good description more or less. Out of 600
million years there may be only a fraction of that which is an improvement,
but it's still there.


##

Yes, the forebrain has consistently expanded over this timeframe.

www.forebrain.org

In parallel with this physical (neuroanatomical) certainty is the emergent 
concomitant refinement of consciousness.


One could say that each reciprocally drives the other in a mutual survival 
sense w/ consc. perhaps emerging through a systematic evolutionary 
refinement of quantum effects (info-entropy -- entanglement -- the list is 
lengthy)...


The biological (refined through evolution) remains the key...

John LaMuth


###



I just want things to be taken care of and no issues. Consciousness brings
issues. Intelligence and consciousness are separate.


###

Very well put !

John LaMuth










Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread John LaMuth


- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 12, 2008 9:05 AM
Subject: Re: [agi] Ethics of computer-based cognitive experimentation





One of the main conclusions of the paper I am writing now is that you will 
(almost certainly) have no choice in the matter, because a sufficiently 
powerful type of AGI will be conscious whether you like it or not.

Richard Loosemore



##

Consciousness has only been demonstrated in biological systems.

Until we can understand how consc. emerges within biology/neuroanatomy 
contexts, your AGI assertions amount to nothing but faith-based conjectures ...

John LaMuth

www.ethicalvalues.com






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Jiri Jelinek
On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote:
 is it really necessary for an AGI to be conscious?

Depends on how you define it. If you think it's about feelings/qualia
then - no - you don't need that [potentially dangerous] crap + we
don't know how to implement it anyway.
If you view it as high-level built-in response mechanism (which is
supported by feelings in our brain but can/should be done differently
in AGI) then yes - you practically (but not necessarily theoretically)
need something like that for performance. If you are concerned about
self-awareness/consciousness then note that AGI can demonstrate
general problem solving without knowing anything about itself (and
about many other particular concepts). The AGI just should be able to
learn new concepts (including self), though I think some built-in
support makes sense in this particular case. BTW for the purpose of my
AGI R&D I defined self-awareness as a use of an internal
representation (IR) of self, where the IR is linked to real features
of the system. Nothing terribly complicated or mysterious about that.
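
As a toy illustration of that working definition (this is my sketch, not Jiri's system; all the names are made up), an internal self-representation can be as plain as a structure whose fields are kept in sync with real, measurable features of the running agent:

# Toy sketch of "self-awareness as a use of an internal representation (IR)
# of self, where the IR is linked to real features of the system".
class Agent:
    def __init__(self):
        self.knowledge = {}      # ordinary learned concepts
        self.goals = []          # pending tasks
        self.self_model = {}     # the IR of "self"

    def _sync_self_model(self):
        # Link the IR to actual, measurable features of the running system.
        self.self_model["known_concepts"] = len(self.knowledge)
        self.self_model["pending_goals"] = len(self.goals)

    def introspect(self, feature):
        # Questions about "self" are answered from the IR, which tracks the
        # real state of the agent, so the answers stay grounded.
        self._sync_self_model()
        return self.self_model.get(feature)

agent = Agent()
agent.knowledge["chess"] = "rules of chess"
print(agent.introspect("known_concepts"))   # -> 1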

Doesn't that complicate things?

it does

 Shouldn't the machines/computers be slaves to man?

They should and it shouldn't be viewed negatively. It's nothing more
than a smart tool. Changing that would be a big mistake IMO.

Or will they be equal/superior.

Rocks are superior to us in being hard. Cars are superior to us when
it comes to running fast. AGIs will be superior to us when it comes to
problem solving.
So what? Equal/superior in whatever - who cares as long as we can
progress & safely enjoy life - which is what our tools (including AGI)
are being designed to help us with.

Regards,
Jiri Jelinek




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread John LaMuth

Richard

Your proposal sounds very deep ...

I look forward to reading it.

Please consider adding (to your citations) my own contribution to the field 
...


LaMuth, J. E. (1977). The Development of the Forebrain as an Elementary 
Function of the Parameters of Input Specificity and Phylogenetic Age. J. 
U-grad Rsch: Bio. Sci. U. C. Irvine. (6): 274-294.

(as reproduced at ...)

http://www.angelfire.com/rnb/fairhaven/brainresearch.html

Cordially

John LaMuth

www.forebrain.org

#

- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 12, 2008 1:36 PM
Subject: Re: [agi] Ethics of computer-based cognitive experimentation



John LaMuth wrote:


- Original Message - From: Richard Loosemore 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 12, 2008 9:05 AM
Subject: Re: [agi] Ethics of computer-based cognitive experimentation





One of the main conclusions of the paper I am writing now is that you 
will (almost certainly) have no choice in the matter, because a 
sufficiently powerful type of AGI will be conscious whether you like it 
or not.

Richard Loosemore



##

Consciousness has only been demonstrated in biological systems.

Until we can understand how consc. emerges within biology / neuroanatomy 
contexts
then your AGI assertions amount to nothing but faith-based conjectures 
...


Actually, my proposal is not just about AGI, as you imply, it is about 
understanding how consciousness emerges within biology/neuroanatomy (more 
generally, how it emerges within any system that shows intelligence of a 
certain sort, regardless of substrate).


So it applies equally to the biology and the AGI cases.  I would never 
suggest a solution to the problem if it did not cover both.




Richard Loosemore








Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
Matt Mahoney wrote:
 2) It is real, as it clearly influences our thoughts. On the other
 hand, though it feels subjectively like it is qualitatively
 different from other aspects of the world, it probably isn't (but
 I'm open to being wrong here).

 The correct statement is that you believe it is real. Everybody does.
 Those who didn't, did not pass on their DNA.

No, the correct statement is the one I made. It is real. We have
empirical evidence that it is real since it influences observable actions.

Consciousness *may* be a belief. But we have no empirical evidence for
or against that statement, so it's too early to make blanket statements
like yours.

 3) We cannot currently define or measure it, but some day we will.

 You can define it any time you want, or use the existing common
 definition.

No, you can't define it any way you want. I am talking about a specific
phenomenon that has been observed but not understood. And the
definitions from others that I've seen may allow us to identify shared
experiences of the phenomenon, but don't provide either a good model or
empirical tests, so they're less than I, for one, want in order to say
we've defined it.

 Blood flow can be directly observed, for example, by x-rays during an
 angioplasty. But that isn't the point. Even without direct
 observation, blood flow is supported by a lot of indirect evidence,
 for example, when you inject a drug into a vein it quickly spreads to
 other parts of the body. Even theories for which evidence is harder
 to observe, for example, the existence of fractional electric charges
 in quarks, are accepted because the theory makes predictions that can
 be tested.

So far we're in complete agreement. Concluding that blood flows requires
observation which requires technology applicable to the phenomenon
(x-rays, needles, tests to see if the drug spread, etc.).

 But there are absolutely no testable predictions that can
 be made from a theory of consciousness.

But here you suddenly jump from saying we have no empirical tests to
saying there can be no empirical tests. This makes no sense to me.

Even if consciousness is only a belief with no real substance, there are
testable predictions that follow from its existence, and perhaps tests
to determine that it is limited to being only a belief.






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales

Matt Mahoney wrote:

snip ... accepted because the theory makes predictions that can be tested. 
But there are absolutely no testable predictions that can be made from a theory of 
consciousness.

-- Matt Mahoney, [EMAIL PROTECTED]


  

This is simply wrong.

It is difficult but you can test for it objectively by demanding that an 
entity based on your 'theory of consciousness' deliver an authentic 
scientific act on the a-priori unknown using visual experience for 
scientific evidence. To the best _indirect_ evidence we have, that act 
is critically dependent on the existence of a visual phenomenal field 
within the tested entity. Visual P-consciousness and scientific evidence 
are literal identities in that circumstance. Degrade visual 
experience...scientific outcome is disrupted. You can use this to 
actually discover the physics of qualia as follows:


1) Concoct your theory of consciousness.
2) Build a scientist with it, with (amongst other necessities) the visual 
phenomenal consciousness which you believe to be there because of your 
theory of consciousness. Only autonomous, embodied entities are valid, 
because the test involves actually interacting with an environment the way 
humans do.
3) Test it for delivery of an authentic act of science on the a-priori 
unknown: test for ignorance at the start, followed by the 
acquisition of the requisite knowledge, followed by the application of 
that knowledge to a completely novel problem.

4) FAIL = your physics is wrong or your design is bad.
   PASS = design and physics are good.

REPEAT THE ABOVE for all putative physics. END when you get 
success... voila... the physics you dreamt up is the right one, or as good 
as the right one.
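
Read as an algorithm, the loop just described might be sketched like this (my schematic paraphrase, not Colin's code; the predicate names are hypothetical placeholders for the embodied test he outlines):

# Schematic rendering of the loop above: search over candidate theories of
# consciousness; a theory "passes" only if a scientist built from it performs
# an authentic scientific act on something it did not already know.
def passes_scientific_act_test(scientist, unknown_law):
    # Hypothetical predicate: ignorance at the start, acquisition of the
    # requisite knowledge, then application of it to a completely novel
    # problem (steps 3 and 4 above).
    return (scientist.initially_ignorant_of(unknown_law)
            and scientist.discovers(unknown_law)
            and scientist.applies_to_novel_problem(unknown_law))

def search_for_physics(candidate_theories, build_scientist, unknown_law):
    for theory in candidate_theories:        # REPEAT for all putative physics
        scientist = build_scientist(theory)  # step 2: autonomous, embodied
        if passes_scientific_act_test(scientist, unknown_law):
            return theory                    # PASS: design and physics good
    return None                              # FAIL for every candidate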


If the entity delivers the 'law of nature' then it has to have all the 
essential aspects of a visual experience needed for a successful 
scientific act. You can argue about the 'experience' within the entity 
afterwards...on a properly informed basis of real knowledge. Until then 
you're just waffling about theories.


Such a test might involve reward through reverse-engineering chess. 
Initially chess ignorance is demonstrated... followed by repeated 
exposure to chess behaviour on a real board... followed by a demand to 
use chess behaviour in a completely different setting and in a different 
manner... say, to operate a machine that has nothing to do with chess but 
is metaphorically labelled to signal that chess rules apply to some 
aspect of its behaviour. This proves that the laws underneath the 
external behaviour of the original chess pieces were internalised and 
abstracted... which contains all the essential ingredients of a 
scientific act on the unknown. You cannot do this without authentic 
connection to the distal external world of the chess pieces.


You cannot train such an entity. The scientific act itself is the 
training. Neither testers nor tested can have any knowledge of the 'law 
of nature' or the environments to be encountered. A completely novel 
'game' could be substituted for chess, for example. Any entity dependent 
on any sort of training will fail. You can't train for scientific 
outcomes. You can only build the necessities of scientific behaviour and 
then let it loose.


You run this test on all putative theories of consciousness. If you 
can't build it you have no theory. If you build it and it fails, tough. 
If you build it and it passes your theory is right.


"You can't test for consciousness" is a cultural catch phrase identical 
to "man cannot fly."
Just like the Wright Bros, we need to start to fly. Not pretend to fly. 
Or not fly and say we did.


Objective testing for consciousness is easy. Building the test and the 
entity...well that's not so easy but it is possible. A 'definition' 
of consciousness is irrelevant. Like every other circumstance in 
science...'laws' and physical phenomena that operate according to them 
are discovered, not defined. Humans did not wait for a definition of 
fire before cooking dinner with it. Why should consciousness be any 
different?


cheers
colin hales





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote:

 It is difficult but you can test for it objectively by
 demanding that an entity based on your 'theory of
 consciousness' deliver an authentic scientific act on
 the a-priori unknown using visual experience for scientific
 evidence.

So a blind person is not conscious?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Matt Mahoney
--- On Wed, 11/12/08, Harry Chesley [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  2) It is real, as it clearly influences our thoughts. On the other
  hand, though it feels subjectively like it is qualitatively
  different from other aspects of the world, it probably isn't (but
  I'm open to being wrong here).
 
  The correct statement is that you believe it is real. Everybody does.
  Those who didn't, did not pass on their DNA.
 
 No, the correct statement is the one I made. It is real. We have
 empirical evidence that it is real since it influences
 observable actions.

If you don't define consciousness in terms of an objective test, then you can 
say anything you want about it.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Colin Hales



Matt Mahoney wrote:

--- On Wed, 11/12/08, Colin Hales [EMAIL PROTECTED] wrote:

  

It is difficult but you can test for it objectively by
demanding that an entity based on your 'theory of
consciousness' deliver an authentic scientific act on
the a-priori unknown using visual experience for scientific
evidence.



So a blind person is not conscious?

-- Matt Mahoney, [EMAIL PROTECTED]


  
A blind person cannot behave scientifically in the manner of the 
sighted. The blind person cannot be a scientist 'of that which is 
visually evidenced'. As an objective test specifically for visual 
P-consciousness, the blind person's failure would prove the blind 
person has no visual P-consciousness. If a monkey passed the test, then 
it would be proved visually P-conscious (as well as mighty smart!). A 
blindsighted person would fail because they can't handle the radical 
novelty in the test; again, the test would prove they have no visual 
P-consciousness. A computer, if it passed, must have created inside 
itself all of the attributes of P-consciousness as utilised in vision 
applied to scientific evidence. You can argue about the details of any 
'experience' only when armed with the physics _after_ the test is 
passed, when you can discuss the true nature of the physics involved 
from an authoritative position. If the requisite physics is missing, the 
test subject will fail.


That is the characteristic of a useful test: unambiguous outcomes 
critically dependent on the presence of the claimed phenomenon. You don't 
even have to know the physics details. External behaviour is decisive, 
and anyone could administer the test, provided it was set up properly.


Note that experimental scientists and applied scientists are literally 
scientific evidence of consciousness. They don't have to deliver 
anything except their normal science deliverables to complete the proof. 
They do nothing else but prove they are visually P-conscious for their 
entire lives.


cheers,
colin








Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-12 Thread Harry Chesley
Matt Mahoney wrote:
 If you don't define consciousness in terms of an objective test, then
 you can say anything you want about it.

We don't entirely disagree about that. An objective test is absolutely
crucial. I believe where we disagree is that I expect there to be such a
test one day, while you claim there can never be.

(I say don't /entirely/ disagree because I think we can talk about things
that are not completely defined -- in this case, I believe most people
reading this do know the subjective feeling of consciousness and
recognize that that's what I mean. A scientific exploration requires a
more thorough definition, but we can still have some meaningful
discourse without it, though we do risk running off into wildly
unsubstantiated theories when we do.)





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread wannabe
When people discuss the ethics of the treatment of artificial intelligent
agents, it's almost always with the presumption that the key issue is the
subjective level of suffering of the agent.  This isn't the only possible
consideration.

One other consideration is our stance relative to that agent.  Are we just
acting in a selfish way, using the agent as simply a means to achieve our
goals?  I'll just leave that idea open as there are traditions that see
value in de-emphasizing greed and personal acquisitiveness.

Another consideration is the inherent value of self-determination.  This
is above any suffering that might be caused by being a completely
controlled subject.  One of the problems with slavery was precisely that
things simply work better if you let people decide for themselves.
Similarly, just letting an artificial agent have autonomy for its own sake
may be more effective than having it simply be a controlled
subject.

So I don't even think the consciousness of an artificial intelligent
agent is completely necessary in considering the ethics of our stance
towards it.  We can consider our own emotional position and the inherent
value of independence of thinking.
andi





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Mark Waser wrote:

An understanding of what consciousness actually is, for
starters.

It is a belief.

No it is not.
And that statement (It is a belief) is a cop-out theory.


An understanding of what consciousness is requires a consensus 
definition of what it is.


For most people, it seems to be an undifferentiated mess that includes 
all of attentional components, intentional components, understanding 
components, and, frequently, experiential components (i.e. qualia).


This mess was cleaned up a great deal when Chalmers took the simple step 
of dividing it into the 'easy' problems and the hard problem (which is 
the last one on your list).  The easy problems do not have any 
philosophical depth to them;  the hard problem seems to be a 
philosophical chasm.


You are *very* correct to say that "An 'understanding' of what 
consciousness is requires a consensus definition of what it is."  My 
goal is to get a consensus definition, which then contains within it the 
explanation also.  But, yes, if my explanation does not also include a 
definition that satisfies everyone as a good consensus definition, then 
it does not work.


That is why Matt's "it is a belief" is not an explanation:  it leaves so 
many questions unanswered that it will never make it as a consensus 
definition/explanation.


We will see.  My paper on the subject is almost finished.



Richard Loosemore











Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

  Would a program be conscious if it passes the Turing
  test? If not, what else is required?
 
  No.
 
  An understanding of what consciousness actually is, for
  starters.
  
  It is a belief.
 
 No it is not.
 
 And that statement (It is a belief) is a cop-out theory.

No. Depending on your definition of consciousness, there is either an objective 
test for it or not. If consciousness results in an observable difference in 
behavior, then a machine that passes the Turing test must be conscious because 
there is no observable difference between it and a human. Or, if consciousness 
is not observable, then you must admit that the brain does something that 
cannot be explained by the known (computable) laws of physics. You conveniently 
avoid this inconsistency by refusing to define what you mean by consciousness. 
That is a cop-out.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:




Would a program be conscious if it passes the Turing

test? If not, what else is required?

No.

An understanding of what consciousness actually is, for 
starters.

It is a belief.

No it is not.

And that statement (It is a belief) is a cop-out theory.


No. Depending on your definition of consciousness, there is either an
objective test for it or not. If consciousness results in an
observable difference in behavior, then a machine that passes the
Turing test must be conscious because there is no observable
difference between it and a human. Or, if consciousness is not
observable, then you must admit that the brain does something that
cannot be explained by the known (computable) laws of physics. You
conveniently avoid this inconsistency by refusing to define what you
mean by consciousness. That is a cop-out.


Your 'belief' explanation is a cop-out because it does not address any 
of the issues that need to be addressed for something to count as a 
definition or an explanation of the facts that need to be explained.


My proposal is being written up now and will be available at the end of 
tomorrow.  It does address all of the facts that need to be explained.




Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Do you agree that there is no test to distinguish a

conscious human from a philosophical zombie, thus no way to
establish whether zombies exist?

Disagree.


What test would you use?


A sophisticated assessment of the mechanisms inside the cognitive system.



Would a program be conscious if it passes the Turing

test? If not, what else is required?

No.

An understanding of what consciousness actually is, for
starters.


It is a belief.


No it is not.

And that statement (It is a belief) is a cop-out theory.




Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser

An understanding of what consciousness actually is, for
starters.

It is a belief.

No it is not.
And that statement (It is a belief) is a cop-out theory.


An understanding of what consciousness is requires a consensus definition 
of what it is.


For most people, it seems to be an undifferentiated mess that includes all 
of attentional components, intentional components, understanding components, 
and, frequently, experiential components (i.e. qualia).


If you only buy into the first three and do it in a very concrete fashion, 
consciousness (and ethics) isn't all that tough.


Or you can follow Alice and start debating the real meaning of the third 
and whether or not the fourth truly exists in anyone except yourself.


Personally, if something has a will (intentionality/goals) that it can focus 
effectively (attentional and understanding), I figure that you'd better 
start treating it ethically for your own long-term self-interest.


Of course, that then begs the question of what ethics is . . . . but I think 
that that is pretty easy to solve as well . . . . 







Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 Your 'belief' explanation is a cop-out because it
 does not address any of the issues that need to be addressed
 for something to count as a definition or an explanation of
 the facts that need to be explained.

As I explained, animals that have no concept of death have nevertheless evolved 
to fear most of the things that can kill them. Humans have learned to associate 
these things with death, and invented the concept of consciousness as the large 
set of features which distinguishes living humans from dead humans. Thus, 
humans fear the loss or destruction of consciousness, which is equivalent to 
death.

Consciousness, free will, qualia, and good and bad are universal human beliefs. 
We should not confuse them with truth by asking the wrong questions. Thus, 
Turing sidestepped the question of "can machines think?" by asking instead "can 
machines appear to think?"  Since we can't (by definition) distinguish doing 
something from appearing to do something, it makes no sense for us to make this 
distinction.

Likewise, asking if it is ethical to inflict simulated pain on machines is 
asking the wrong question. Evolution favors the survival of tribes that 
practice altruism toward other tribe members and teach these ethical values to 
their children. This does not mean that certain practices are good or bad. If 
there was such a thing, then there would be no debate about war, abortion, 
euthanasia, capital punishment, or animal rights, because these questions could 
be answered experimentally.

The question is not "how should machines be treated?" The question is "how will 
we treat machines?"

 My proposal is being written up now and will be available
 at the end of tomorrow.  It does address all of the facts
 that need to be explained.

I am looking forward to reading it.

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
This does not mean that certain practices are good or bad. If there was 
such a thing, then there would be no debate about war, abortion, 
euthanasia, capital punishment, or animal rights, because these questions 
could be answered experimentally.


Given a goal and a context, there is absolutely such a thing as good or bad. 
The problem with the examples that you cited is that you're attempting to 
generalize to a universal answer across contexts (because I would argue that 
there is a useful universal goal) which is nonsensical.  All of this can be 
answered both logically and experimentally if you just ask the right 
question instead of engaging in vacuous hand-waving about how tough it all 
is after you've mindlessly expanded your problem beyond solution.











Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Trent Waddington
On Wed, Nov 12, 2008 at 8:58 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 As I explained, animals that have no concept of death have nevertheless 
 evolved to fear most of the things that can kill them. Humans have learned to 
 associate these things with death, and invented the concept of consciousness 
 as the large set of features which distinguishes living humans from dead 
 humans. Thus, humans fear the loss or destruction of consciousness, which is 
 equivalent to death.

So you're saying you're not a heavy drinker eh?

Trent




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John LaMuth

Reality check ***

Consciousness is an emergent spectrum of subjectivity spanning 600 million 
years of evolution involving mega-trillions of competing organisms, probably 
selecting for obscure quantum effects/efficiencies.

Our puny engineering/coding efforts could never approach this - not even in 
a million years.


An outwardly pragmatic language simulation, however, is very do-able.

John LaMuth

www.forebrain.org

www.emotionchip.net








Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Matt Mahoney
--- On Tue, 11/11/08, Mark Waser [EMAIL PROTECTED] wrote:

  This does not mean that certain practices are good
 or bad. If there was such a thing, then there would be no
 debate about war, abortion, euthanasia, capital punishment,
 or animal rights, because these questions could be answered
 experimentally.
 
 Given a goal and a context, there is absolutely such a
 thing as good or bad. The problem with the examples that you
 cited is that you're attempting to generalize to a
 universal answer across contexts (because I would argue that
 there is a useful universal goal) which is nonsensical.  All
 of this can be answered both logically and experimentally if
 you just ask the right question instead of engaging in
 vacuous hand-waving about how tough it all is after
 you've mindlessly expanded your problem beyond solution.

That's what I just said. You have to ask the right questions.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Your 'belief' explanation is a cop-out because it does not address
any of the issues that need to be addressed for something to count
as a definition or an explanation of the facts that need to be
explained.


As I explained, animals that have no concept of death have
nevertheless evolved to fear most of the things that can kill them.
Humans have learned to associate these things with death, and
invented the concept of consciousness as the large set of features
which distinguishes living humans from dead humans. Thus, humans fear
the loss or destruction of consciousness, which is equivalent to
death.

Consciousness, free will, qualia, and good and bad are universal
human beliefs. We should not confuse them with truth by asking the
wrong questions. Thus, Turing sidestepped the question of "can
machines think?" by asking instead "can machines appear to think?"
Since we can't (by definition) distinguish doing something from
appearing to do something, it makes no sense for us to make this
distinction.


The above two paragraphs STILL do not address any of the issues that 
need to be addressed for something to count as a definition, or an 
explanation of the facts that need to be explained.




Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Richard Loosemore

John LaMuth wrote:

Reality check ***

Consciousness is an emergent spectrum of subjectivity spanning 600 million 
years of evolution involving mega-trillions of competing organisms, probably 
selecting for obscure quantum effects/efficiencies.

Our puny engineering/coding efforts could never approach this - not even 
in a million years.


An outwardly pragmatic language simulation, however, is very do-able.

John LaMuth


It is not.

And we can.



Richard Loosemore




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Colin Hales



[EMAIL PROTECTED] wrote:

When people discuss the ethics of the treatment of artificial intelligent
agents, it's almost always with the presumption that the key issue is the
subjective level of suffering of the agent.  This isn't the only possible
consideration.

One other consideration is our stance relative to that agent.  Are we just
acting in a selfish way, using the agent as simply a means to achieve our
goals?  I'll just leave that idea open as there are traditions that see
value in de-emphasizing greed and personal acquisitiveness.

Another consideration is the inherent value of self-determination.  This
is above any suffering that might be caused by being a completely
controlled subject.  One of the problems of slavery was just that it
simply works better if you let people decide things for themselves. 
Similarly, just letting an artificial agent have autonomy for its own sake

may just be a more effective thing than having it simply be a controlled
subject.

So I don't even think the consciousness of an artificial intelligent
agent is completely necessary in considering the ethics of our stance
towards it.  We can consider our own emotional position and the inherent
value of independence of thinking.
andi


  
I'm inclined to agree - this will be an issue in the future... if you 
have a robot helper and someone comes by and beats it to death in 
front of your kids, who have some kind of attachment to it... a 
relationship... then crime (i) may be said to be the psychological 
damage to the children. Crime (ii) is then the murder and whatever one 
knows of the suffering inflicted on the robot helper. Ethicists are gonna 
have all manner of novelty to play with.

cheers
colin





RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 John LaMuth wrote:
  Reality check ***
 
  Consciousness is an emergent spectrum of subjectivity spanning 600 million
  years of evolution involving mega-trillions of competing organisms, probably
  selecting for obscure quantum effects/efficiencies.
 
  Our puny engineering/coding efforts could never approach this - not even
  in a million years.
 
  An outwardly pragmatic language simulation, however, is very do-able.
 
  John LaMuth
 
 It is not.
 
 And we can.
 

I thought what he said was a good description, more or less. Out of 600
million years there may be only a fraction of that which is an improvement,
but it's still there.

How do you know, beyond a reasonable doubt, that any other being is
conscious?

At some point you have to trust that others are conscious; within the same
species, you bring them into your recursive loop of the consciousness
component mix.

A primary component of consciousness is a self-definition. Conscious
experience is unique to the possessor. It is more than a belief that the
possessor herself is conscious, but others who appear conscious may be just
that: appearing to be conscious. Though at some point there is enough
feedback between individuals and/or a group to share the experience of
consciousness.

Still though, is it really necessary for an AGI to be conscious? Except for
delivering warm fuzzies to the creators? Doesn't that complicate things?
Shouldn't the machines/computers be slaves to man? Or will they be
equal/superior? It's a dog-eat-dog world out there.

I just want things to be taken care of and no issues. Consciousness brings
issues. Intelligence and consciousness are separate.

John





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Fri, 11/7/08, Richard Loosemore [EMAIL PROTECTED] wrote:


The question of whether a test is possible at all depends
on the fact that there is a coherent theory behind the idea
of consciousness.


Would you agree that consciousness is determined by a large set of attributes 
that are present in living human brains but absent in dead human brains?


Yes


Do you agree that there is no test to distinguish a conscious human from a 
philosophical zombie, thus no way to establish whether zombies exist?


Disagree.


Would a program be conscious if it passes the Turing test? If not, what else is 
required?


No.

An understanding of what consciousness actually is, for starters.


Richard Loosemore


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Matt Mahoney
--- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote:

  Do you agree that there is no test to distinguish a
 conscious human from a philosophical zombie, thus no way to
 establish whether zombies exist?
 
 Disagree.

What test would you use?

  Would a program be conscious if it passes the Turing
 test? If not, what else is required?
 
 No.
 
 An understanding of what consciousness actually is, for
 starters.

It is a belief.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-10 Thread Colin Hales

Matt Mahoney wrote:

--- On Mon, 11/10/08, Richard Loosemore [EMAIL PROTECTED] wrote:

  

Do you agree that there is no test to distinguish a
  

conscious human from a philosophical zombie, thus no way to
establish whether zombies exist?

Disagree.




What test would you use?

  
The test will be published in the next couple of months in the Open AI 
journal. It is an objective test for scientific behaviour. I call it the 
'PCST', for P-Conscious Scientist Test.

You can't be a scientist without being visually P-conscious to 
experience your evidence. You can't deny the test without declaring 
scientists devoid of consciousness, whilst demanding it be used for all 
scientific evidence in a verifiable way AND whilst investing in an entire 
science paradigm (Neural Correlates of Consciousness) dedicated to the 
scientific exploration of P-consciousness. The logic's pretty good, and 
it's easy to design an objective test demanding delivery of a 'law of 
nature'. The execution, however, is logistically difficult++. BUT... at 
least it's doable. A hard test is better than no test at all, which is 
what we currently have.

When it comes out I'll let you know.

RE ETHICS... I say this in the paper:

As was recognised by Gamez [35], one cannot help but notice that there 
is also a secondary ethical 'bootstrap' process. Once a single subject 
passes the PCST, for the first time ever in certain circumstances there 
will be a valid scientific reason obliging all scientists to consider 
the internal life of an artefact as potentially having some level of 
equivalence to that of a laboratory animal, possibly deserving of 
similar ethical treatment. Until that event occurs, however, all such 
discussions are best considered moot.


cheers
colin






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-07 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Wed, 11/5/08, Richard Loosemore [EMAIL PROTECTED] wrote:


In the future (perhaps the near future) it will be possible
to create systems that will have their own consciousness. 


*Appear* to have consciousness, or do you have a test?


Yes.

But the test depends on an understanding of the system architecture.

The question of whether a test is possible at all depends on the fact 
that there is a coherent theory behind the idea of consciousness.



Stepping back for the moment, the entire question of ethics
depends crucially on your theory of how consciousness
arises.


We talk about such things as if we can answer the question of why it is OK to 
stomp on a roach but not a puppy by studying the brains of roaches and puppies.


It is not possible to look at the brains and decide whether or not it is 
okay to stomp, but we can decide whether or not the brain has a 
significant level of consciousness that is comparable to ours.


That is vital information in making a reasoned judgement of stompworthiness.


For the record, I am treading carefully.  As far as what
happens in my lab, I will explicitly put in place measures
to ensure that AGI systems that do have a chance of
reasonably high levels of consciousness will have the
fullest possible ethical protections.  I cannot speak for
anyone else, but that is my policy.


Now I am curious. Given a program P, what is your lab's criteria for 
determining whether P is conscious?


Complicated.

I'll get right back atya on that ;-)




Richard Loosemore





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-07 Thread Matt Mahoney
--- On Fri, 11/7/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 The question of whether a test is possible at all depends
 on the fact that there is a coherent theory behind the idea
 of consciousness.

Would you agree that consciousness is determined by a large set of attributes 
that are present in living human brains but absent in dead human brains?

Do you agree that there is no test to distinguish a conscious human from a 
philosophical zombie, thus no way to establish whether zombies exist?

Would a program be conscious if it passes the Turing test? If not, what else is 
required?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-06 Thread YKY (Yan King Yin)
On Thu, Nov 6, 2008 at 12:55 AM, Harry Chesley [EMAIL PROTECTED] wrote:

  Personally, I'm not making an AGI that has emotions...

 So you take the view that, despite our minimal understanding of the basis of
 emotions, they will only arise if designed in, never spontaneously as an
 emergent property? So you can safely ignore the ethics question.

Well, my AGI system would take special measures to ensure that
emotions do *not* emerge, by making the system acquire *knowledge* of
human values instead of having emotions occurring at the AGI's
*perceptual* level.

YKY




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread YKY (Yan King Yin)
On Wed, Nov 5, 2008 at 7:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 Personally, I'm not making an AGI that has emotions, and I doubt if
 emotions are generally desirable in AGIs, except when the goal is to
 make human companions (and I wonder why people need them anyway, given
 that there're so many -- *too* many -- human beings around already).

 People may want to simulate loved ones who have died, if the simulation is 
 accurate enough to be indistinguishable. People may also want to simulate 
 themselves in the same way, in the belief it will make them immortal.


Yeah, I should qualify my statement:  different people will want
different things out of AGI technology.  Some want brain emulation of
themselves or loved ones, some want android companions, etc.  All
these things take up free energy (a scarce resource on earth), so it
is just a new form of the overpopulation problem.  I am not against
any particular form of AGI application; I just want to point out that
AGI-with-emotions is not a necessary goal of AGI.




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Mike Tintner

YKY: I just want to point out that AGI-with-emotions is not a necessary goal of AGI.

Which AGI as distinct from narrow AI problems do *not* involve *incalculable 
and possibly unmanageable risks*? -


a)risks that the process of problem-solving will be interminable?
b)risks that the agent does not have the skills necessary for the problem's 
solution?

c)risks that the agent hasn't defined the problem properly?

That's what the emotion of fear is - (one of the emotions essential for 
AGI) - a system alert to incalculable and possibly unmanageable risks. 
That's what the classic fight-or-flight response entails - maybe I can deal 
with this danger but maybe I can't and better avoid it fast.







Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Mike Tintner

YKY,

As I was saying, before I so rudely interrupted myself - re the narrow AI vs 
AGI problem difference:


*the syllogistic problems of logic - is Aristotle mortal? etc - which you 
mainly use as examples  - are narrow AI problems, which can be solved 
according to precise rules


however:

*metacognitive problems - like which logic should I use for syllogistic 
problems, eg PLN/NARS? - (which also concerns you) - are AGI problems; 
there are no rules for solving them, and no definitive solutions, only 
possible, temporary resolutions to someone's satisfaction. Those are 
problems which you have been discussing and could continue to discuss 
interminably. And they are also problems which you will have - and any agent 
considering, should have - fear considering, because you can get endlessly 
bogged down in them


[n.b. psychologically, fear comes in many different degrees from panic to 
mild wariness]


similarly

*is cybersex sex? (another of your problems) - if treated by some artificial 
logic with artificial rules, (which might end up saying yes, approx. 0.60 % 
sex), is a narrow AI problem; however, if treated realistically, 
*philosophically*, relying on language, this is an AGI problem, which can be 
and may well be considered interminably by real philosophers (and lawyers) 
into the next century, (*did* Clinton have sex?) and for which there are 
neither definitive rules nor solution . Again fear is, and has to be a part 
of considering such problems - how much life do you have to spend on them? 
Even the biggest computer brain in the world, the superest AGI, will not be 
able to solve them definitively, and must be afraid of them.


ditto:

*Any philosophical problem of definition:  what is mind? What is 
consciousness? What is intelligence?  Again these are infinitely open-ended, 
open-means problems, which have attracted and will continue to attract 
interminable consideration. You are, and should be, afraid of getting too 
deep into them.


*Any linguistic problem of definition: what does honour, beautiful, big, 
small, etc. mean? is an AGI problem. AFAIK literally any word in the 
language is open to endless definition and redefinition and is essentially an 
AGI problem. By contrast, what is ETFUBAIL an anagram of? is a narrow AI 
problem - and no need for any fear there.


*Defining/describing almost anything - describe YKY or Ben Goertzel; what 
kind of guys/programmers are they? - are AGI problems. You could consider 
them forever. You may be skilled at resolving them quickly, and able to come 
up with a brief description, but that again, while perhaps satisfactory, 
will never do the subject even remotely perfect justice, and could be 
endlessly improved and sophisticated.


In general, your instinct - and most AGI-ers' instinct - seems to be, 
whenever confronted with an AGI problem, to try and reduce it to a narrow 
AI problem - from a real, open-ended, open-means-and-rules problem to an 
artificial, closed-ended, closed-means-and-rules problem. Then, yes, you 
don't need fear and other emotions, but that's not AGI.












Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Harry Chesley

On 11/4/2008 2:53 PM, YKY (Yan King Yin) wrote:

 Personally, I'm not making an AGI that has emotions...


So you take the view that, despite our minimal understanding of the 
basis of emotions, they will only arise if designed in, never 
spontaneously as an emergent property? So you can safely ignore the 
ethics question.






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Harry Chesley

On 11/4/2008 3:31 PM, Matt Mahoney wrote:

 To answer your (modified) question, consciousness is detected by the
 activation of a large number of features associated with living
 humans. The more of these features are activated, the greater the
 tendency to apply ethical guidelines to the target that we would
 normally apply to humans. For example, monkeys are more like humans
 than mice, which are more like humans than insects, which are more
 like humans than programs. It does not depend on a single feature.


If I understand correctly, you're saying that there is no such thing as 
objective ethics, and that our subjective ethics depend on how much we 
identify/empathize with another creature. I grant this as a possibility, 
in which case I guess my question should be viewed as subjective. I.e., 
how do I tell when something is sufficiently close to me, without being 
able to see all the features directly, that I need to worry about the 
ethics subjectively?


Let me give an example: If I take a person and put them in a box, so 
that I can see none of their features or know how similar they are to 
me, I still consider it unethical to conduct certain experiments on 
them. This is because I believe those important similar features are 
there, I just can't see them.


Similarly, I believe at some point in AGI development, features similar 
to my own mind will arise, but since they will be obscured by a very 
different (and incomplete) implementation from my own, they may not be 
obvious, even though I believe they are there.


So although you've changed the phrasing of the question to a degree, the 
question remains.


(Note: You could argue that ethics, being subjective, are irrelevant, 
and while that may be true, I'm too squeamish to take that view, which 
also leads to allowing arbitrary experiments on people.)






Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Matt Mahoney
--- On Wed, 11/5/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 On Wed, Nov 5, 2008 at 7:35 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
 
  Personally, I'm not making an AGI that has emotions, and I doubt if
  emotions are generally desirable in AGIs, except when the goal is to
  make human companions (and I wonder why people need them anyway, given
  that there're so many -- *too* many -- human beings around already).
 
  People may want to simulate loved ones who have died,
 if the simulation is accurate enough to be
 indistinguishable. People may also want to simulate
 themselves in the same way, in the belief it will make them
 immortal.
 
 
 Yeah, I should qualify my statement:  different people will want
 different things out of AGI technology.  Some want brain emulation of
 themselves or loved ones, some want android companions, etc.  All
 these things take up free energy (a scarce resource on earth), so it
 is just a new form of the overpopulation problem.  I am not against
 any particular form of AGI application;  I just want to point out that
 AGI-with-emotions is not necessary goal of AGI.

I agree. My own AGI design does not require emotion, assuming the goal is to 
automate the economy. My proposed solution is a decentralized message passing 
network that implements distributed compression of the world's knowledge by 
trading in an economy where information has negative value. Peers mutually 
benefit by trading messages that are hard to compress by the sender and easy to 
compress by the receiver. This has the effect that peers tend to specialize and 
that messages get routed to the right experts. If our language model is a 
simple unigram word model, then we have a distributed implementation of 
Salton's tf-idf information retrieval model.
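
To make the tf-idf analogy concrete, here is a minimal sketch in Python of one way such routing could work: each peer is scored by how well its stored documents already predict an incoming message under a unigram tf-idf model, and the message is routed to the highest-scoring peer, i.e. the one that could encode it most cheaply. The Peer class, the scoring function, and the routing rule here are illustrative assumptions for exposition, not the actual protocol described above.

import math
from collections import Counter

# Illustrative sketch only: a toy, centralized stand-in for the decentralized,
# negative-price message economy described above. Peer, score() and route()
# are assumptions made for exposition.

def tokenize(text):
    return text.lower().split()

class Peer:
    """A peer specializes in whatever documents it has already stored."""
    def __init__(self, name):
        self.name = name
        self.docs = []  # each stored document is a Counter of word frequencies

    def store(self, text):
        self.docs.append(Counter(tokenize(text)))

    def score(self, message, idf):
        # Unigram tf-idf match between the message and this peer's corpus.
        # A high score means the peer's corpus already predicts the message
        # well, i.e. the message is cheap for this peer to compress.
        corpus = Counter()
        for doc in self.docs:
            corpus.update(doc)
        total = sum(corpus.values()) or 1
        return sum(count * idf.get(word, 0.0) * (corpus[word] / total)
                   for word, count in Counter(tokenize(message)).items())

def route(message, peers):
    """Send the message to the peer that scores it highest (the 'expert')."""
    all_docs = [doc for peer in peers for doc in peer.docs]
    n = len(all_docs)
    idf = {}
    for word in set(w for doc in all_docs for w in doc):
        df = sum(1 for doc in all_docs if word in doc)
        idf[word] = math.log(n / df) if n else 0.0
    return max(peers, key=lambda p: p.score(message, idf))

if __name__ == "__main__":
    weather, markets = Peer("weather"), Peer("markets")
    weather.store("rain snow wind temperature forecast")
    markets.store("stock price trade market index")
    print(route("any forecast for snow tomorrow", [weather, markets]).name)  # -> weather

In a genuinely decentralized version, each peer would score messages locally, and the document-frequency statistics would themselves have to be estimated from message traffic rather than gathered in one place.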

A language model uses three types of learning: eidetic (short term) memory, 
association of concepts (e.g. words) in eidetic memory, and learning new 
concepts by clustering in context space. Vision is learned the same way. In 
both cases, reinforcement learning (a prerequisite of emotion) is not required.

If the goal of AGI is uploading or simulating humans, then of course it is 
necessary to simulate human emotions. Also if we allow agents to modify 
themselves and reproduce, then evolution will favor emotions such as fear of 
death, greed, tribal altruism, and the desire to reproduce.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Matt Mahoney
--- On Wed, 11/5/08, Harry Chesley [EMAIL PROTECTED] wrote:

 If I understand correctly, you're saying that there is
 no such thing as objective ethics, and that our subjective
 ethics depend on how much we identify/empathize with another
 creature. I grant this as a possibility, in which case I
 guess my question should be viewed as subjective. I.e., how
 do I tell when something is sufficiently close to me,
 without being able to see all the features directly, that I
 need to worry about the ethics subjectively?
 
 Let me give an example: If I take a person and put them in
 a box, so that I can see none of their features or know how
 similar they are to me, I still consider it unethical to
 conduct certain experiments on them. This is because I
 believe those important similar features are there, I just
 can't see them.

It is surprisingly easy for humans to lessen their anxiety by blocking the 
stimuli that make another suffering person seem human. An important feature of 
the Milgram experiments ( http://en.wikipedia.org/wiki/Milgram_experiment ) was 
that the torturer could not see the victim. Likewise, people who wouldn't 
hesitate to jump in the water to save a drowning child will do nothing to stop 
the suffering of millions of starving refugees on the other side of the world.

I don't mean to imply that we should behave differently. I am just describing 
how the human ethical belief model works.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Richard Loosemore

Harry Chesley wrote:

On 11/4/2008 3:31 PM, Matt Mahoney wrote:

 To answer your (modified) question, consciousness is detected by the
 activation of a large number of features associated with living
 humans. The more of these features are activated, the greater the
 tendency to apply ethical guidelines to the target that we would
 normally apply to humans. For example, monkeys are more like humans
 than mice, which are more like humans than insects, which are more
 like humans than programs. It does not depend on a single feature.


If I understand correctly, you're saying that there is no such thing as 
objective ethics, and that our subjective ethics depend on how much we 
identify/empathize with another creature. I grant this as a possibility, 
in which case I guess my question should be viewed as subjective. I.e., 
how do I tell when something is sufficiently close to me, without being 
able to see all the features directly, that I need to worry about the 
ethics subjectively?


Let me give an example: If I take a person and put them in a box, so 
that I can see none of their features or know how similar they are to 
me, I still consider it unethical to conduct certain experiments on 
them. This is because I believe those important similar features are 
there, I just can't see them.


Similarly, I believe at some point in AGI development, features similar 
to my own mind will arise, but since they will be obscured by a very 
different (and incomplete) implementation from my own, they may not be 
obvious, even though I believe they are there.


So although you've changed the phrasing of the question to a degree, the 
question remains.


(Note: You could argue that ethics, being subjective, are irrelevant, 
and while that may be true, I'm too squeamish to take that view, which 
also leads to allowing arbitrary experiments on people.)


I can answer your questions about ethics from the perspective of someone 
trying to build real AGI systems that are similar to human minds.


In principle, there is no reason why an AGI system should not be in need 
of ethical protection, but it depends on the system.


At the moment, the design of AGI systems is such that there is no 
immediate danger of an intelligence being created that is sufficiently 
self-aware that it would have anything resembling human consciousness. 
Simply put, present systems are almost certainly not capable of feeling 
pain or needing ethical protection.  This statement would require quite 
a lengthy justification, but I think it is a fairly safe conclusion.


In the future (perhaps the near future) it will be possible to create 
systems that will have their own consciousness.  However, even then 
there will be quite drastic differences between different designs, and 
we will have to proceed quite carefully.


For example, it will be possible to create systems that are 
fundamentally designed to want to do certain things, like serving 
humans, or like living in virtual worlds where they do not have contact 
with the real world.  Those systems should not be viewed as 'enslaved' 
because, in point of fact, they would want to do what they do:  their 
behavior is what makes them happy, and liberating them from this 
behavior would make them unhappy.  It would not be ethical to take such 
a system and treat it as if it were a human slave that needed to be 
liberated.  This would never be true for any human being (no human being 
truly would be happy as a slave), but it would be fundamentally true in 
the case of this hypothetical AGI system.


This possibility of creating systems that get fulfilment in ways that 
are different from the ways that humans get fulfilment must be taken 
into account when ethical considerations are evaluated.


Stepping back for the moment, the entire question of ethics depends 
crucially on your theory of how consciousness arises.  There is no 
consensus on this at the moment, but it is important to understand that 
any judgement about ethics, either way, can only be made in the context 
of a statement about what exactly the theory of consciousness is that 
lies behind the statement.


Nobody could simply say, for example, "Let's assume that all AI systems 
need ethical protection right now" as a default assumption, because 
that kind of default has an *implicit* theory of consciousness behind it 
that is pure guesswork, and is not supported by anything we understand 
about consciousness at the moment.


For the record, I am treading carefully.  As far as what happens in my 
lab, I will explicitly put in place measures to ensure that AGI systems 
that do have a chance of reasonably high levels of consciousness will 
have the fullest possible ethical protections.  I cannot speak for 
anyone else, but that is my policy.





Richard Loosemore










Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread Matt Mahoney
--- On Wed, 11/5/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 In the future (perhaps the near future) it will be possible
 to create systems that will have their own consciousness. 

*Appear* to have consciousness, or do you have a test?

 Stepping back for the moment, the entire question of ethics
 depends crucially on your theory of how consciousness
 arises.

We talk about such things as if we can answer the question of why it is OK to 
stomp on a roach but not a puppy by studying the brains of roaches and puppies.

 For the record, I am treading carefully.  As far as what
 happens in my lab, I will explicitly put in place measures
 to ensure that AGI systems that do have a chance of
 reasonably high levels of consciousness will have the
 fullest possible ethical protections.  I cannot speak for
 anyone else, but that is my policy.

Now I am curious. Given a program P, what are your lab's criteria for 
determining whether P is conscious?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread YKY (Yan King Yin)
On Wed, Nov 5, 2008 at 6:05 AM, Harry Chesley [EMAIL PROTECTED] wrote:
 The question of when it's ethical to do AGI experiments has bothered me
 for a while. It's something that every AGI creator has to deal with
 sooner or later if you believe you're actually going to create real
 intelligence that might be conscious. The following link is a blog essay
 on the subject, which describes my current thinking on the subject, such
 as it is. There's clearly much more that needs to be worked out.
 Comments, either here or at the blog, would be appreciated.

 http://www.mememotes.com/meme_motes/2008/11/ethical-experimentation-on-cognitive-entities.html


Personally, I'm not making an AGI that has emotions, and I doubt if
emotions are generally desirable in AGIs, except when the goal is to
make human companions (and I wonder why people need them anyway, given
that there're so many -- *too* many -- human beings around already).
YKY




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Matt Mahoney
--- On Tue, 11/4/08, Harry Chesley [EMAIL PROTECTED] wrote:

 The question of when it's ethical to do AGI experiments
 has bothered me for a while.

That's because you're asking the wrong question. Don't confuse belief with 
truth. The question is: what ethical guidelines will we (not should we) adopt 
regarding experiments with AGI?

Some background: all animals with nervous systems advanced enough to be 
capable of reinforcement learning have evolved fears of most of the things 
that can kill them. Humans (and possibly other advanced animals) have learned 
the concept of death and developed the concept of consciousness, defined as 
the set of features that distinguish the living from the dead: for example, 
the ability to communicate, to move, to think, to remember things, to 
experience things, to make decisions, to feel emotions, and so on. The list is 
quite large. In short, consciousness is the set of attributes that we fear 
losing. Furthermore, humans are social animals. We evolved (both genetically 
and memetically through culture) an ethical system that respects consciousness 
in other members of our tribe.

Fear, consciousness, qualia, free will, and good and bad are all beliefs. 
These concepts are useful for describing human behavior. It is not necessary 
to assume that any of these things actually exist in order to do so.

To answer your (modified) question: consciousness is detected by the 
activation of a large number of features associated with living humans. The 
more of these features are activated, the greater our tendency to apply to 
the target the ethical guidelines we would normally apply to humans. For 
example, monkeys are more like humans than mice, which are more like humans 
than insects, which are more like humans than programs. It does not depend on 
a single feature.
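
(A minimal sketch of this graded, many-feature idea, in Python. The feature 
list, weights, and scoring below are illustrative assumptions only, not an 
actual test for consciousness or anything specified elsewhere in this thread.)

# Hypothetical sketch of the "many weighted features" idea above.
# Features and weights are invented purely for illustration.
HUMAN_FEATURES = {
    "communicates": 0.2,
    "moves": 0.1,
    "remembers": 0.2,
    "makes_decisions": 0.2,
    "feels_pain": 0.3,
}

def ethical_concern(observed):
    """Return a 0..1 score: how strongly we would tend to apply human
    ethical guidelines, given which human-like features are activated."""
    return sum(w for f, w in HUMAN_FEATURES.items() if f in observed)

# A monkey activates more of these features than a mouse, which activates
# more than an insect, which activates more than a program.
print(ethical_concern({"communicates", "moves", "remembers",
                       "makes_decisions", "feels_pain"}))  # 1.0
print(ethical_concern({"moves"}))                          # 0.1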

For example, the program http://www.mattmahoney.net/autobliss.txt simulates 
reinforcement learning in a simple agent. You can run it with the second and 
third arguments both negative, meaning it is punished no matter what it does. 
You might consider such an experiment unethical if performed on monkeys, but 
not on insects, and certainly not on this program.
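
(For readers who don't fetch the program: below is a minimal Python sketch of 
a reinforcement learner in the same spirit. It is not the actual autobliss.txt 
code at the URL above; the two-action agent, reward arguments, and update rule 
are assumptions made purely for illustration.)

import random
import sys

def run(reward_a, reward_b, steps=1000):
    # Two-action agent with simple value estimates, chosen epsilon-greedily.
    value = {"a": 0.0, "b": 0.0}
    total = 0.0
    for _ in range(steps):
        if random.random() < 0.1:
            action = random.choice(["a", "b"])
        else:
            action = max(value, key=value.get)
        r = reward_a if action == "a" else reward_b
        value[action] += 0.1 * (r - value[action])  # incremental update
        total += r
    return total

if __name__ == "__main__":
    # e.g. "python toy_rl.py -1 -2": both rewards negative, so the agent
    # accumulates negative reward ("is punished") no matter what it does.
    print(run(float(sys.argv[1]), float(sys.argv[2])))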

As a second example, the video game Grand Theft Auto allows you to have 
simulated sex with prostitutes and then beat them to death to get your money 
back. While playing, I declined to do so, even though it was irrational with 
respect to the goal of attaining the highest possible score.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Matt Mahoney
--- On Tue, 11/4/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 Personally, I'm not making an AGI that has emotions, and I doubt if
 emotions are generally desirable in AGIs, except when the goal is to
 make human companions (and I wonder why people need them anyway, given
 that there're so many -- *too* many -- human beings around already).

People may want to simulate loved ones who have died, if the simulation is 
accurate enough to be indistinguishable. People may also want to simulate 
themselves in the same way, in the belief it will make them immortal.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread Matt Mahoney
--- On Tue, 11/4/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Wed, Nov 5, 2008 at 9:31 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  As a second example, the video game Grand Theft Auto
 allows you to have simulated sex with prostitutes and then
 beat them to death to get your money back. While playing, I
 declined to do so, even though it was irrational with
 respect to the goal of attaining the highest possible score.
 
 Good for you.  You have principles, you stuck by them, even when it
 meant depriving yourself of something (a trivial something, but
 something).

Remember, "good" is only a belief. I behave the way I am programmed to.

 My only fear is that people like you often turn into people who want
 the game banned so that no-one else may engage in the activity you
 disagree with - and this is why discussions about ethical treatment of
 AGI systems make me gag... because inevitably someone is going to say
 "there oughta be a law" and the entire industry will come to a
 screeching halt.
 
 Don't say it won't happen... remember the blanket
 ban on cloning technology.

The issue here is that people are concerned about teaching criminal behavior 
to children. (Again, I don't claim we should or shouldn't be.) So far there is 
no concern about the treatment of programs. Our laws regarding animal 
treatment are similar. We don't object so much to the suffering of animals 
(e.g. raising chickens in tiny cages for slaughter) as we do to public 
displays of it (e.g. cock fighting).

AGI researchers could adopt a similar approach, i.e. not talking about their 
programs in human terms. But eventually, we will have to confront the issue. As 
I posted earlier, people will want to simulate their deceased loved ones, and 
once the technology is demonstrated, themselves. The first uploads are likely 
to be rough: no embodiment, a lot of incomplete and made-up memories, and 
poorly done AI, barely passing the Turing test. As technology improves 
(surveillance, computing power, better AI algorithms, perhaps brain scanning), 
the uploads will get more realistic.

The problem* is when we give uploads legal and property rights. Humans have an 
incentive to do so, not just out of ethical concerns but also for selfish 
reasons: first, to alleviate grief by simulating loved ones; and second, when 
people pass their rights to their simulations after they die in the belief 
that doing so will make them immortal.

*I don't mean to imply that human extinction, or viewed another way, our 
evolution into a non-DNA based life form, is good or bad. However, it is normal 
for people to make such judgments.

-- Matt Mahoney, [EMAIL PROTECTED]


